[jira] [Commented] (HDFS-13750) RBF: Router ID in RouterRpcClient is always null

2018-08-13 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13750?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16579284#comment-16579284
 ] 

genericqa commented on HDFS-13750:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 46s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 24s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 16m  
7s{color} | {color:green} hadoop-hdfs-rbf in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 74m  0s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:ba1ab08 |
| JIRA Issue | HDFS-13750 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12935475/HDFS-13750.4.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 8f34f8c13a68 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 4023eeb |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24768/testReport/ |
| Max. process+thread count | 944 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 
hadoop-hdfs-project/hadoop-hdfs-rbf |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24768/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> RBF: Router ID in RouterRpcClient is always null
> 
>
> Key: HDFS-13750
> URL: https://issues.apache.org/jira/browse/HDFS-13750

[jira] [Commented] (HDFS-13750) RBF: Router ID in RouterRpcClient is always null

2018-08-13 Thread Takanobu Asanuma (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13750?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16579240#comment-16579240
 ] 

Takanobu Asanuma commented on HDFS-13750:
-

Thanks [~elgoiri] for your review. Uploaded the 4th patch using 
{{LambdaTestUtils}}.
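For reference, a minimal sketch of what a {{LambdaTestUtils}}-based wait looks 
like, assuming the {{eventually(timeoutMillis, intervalMillis, callable)}} 
overload and illustrative accessor names ({{routerContext}}, {{getRouterId()}}); 
this is not the actual contents of the patch:
{code:java}
import org.apache.hadoop.test.LambdaTestUtils;
import static org.junit.Assert.assertNotNull;

// Poll until the Router reports a non-null ID; eventually() keeps retrying
// the callable every 100 ms until it stops throwing or 5 s elapse.
LambdaTestUtils.eventually(5000, 100, () -> {
  String routerId = routerContext.getRouter().getRouterId();  // assumed accessors
  assertNotNull("Router ID should be set after initialization", routerId);
  return routerId;
});
{code}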

> RBF: Router ID in RouterRpcClient is always null
> 
>
> Key: HDFS-13750
> URL: https://issues.apache.org/jira/browse/HDFS-13750
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
> Attachments: HDFS-13750.1.patch, HDFS-13750.2.patch, 
> HDFS-13750.3.patch, HDFS-13750.4.patch
>
>
> {{RouterRpcClient}} is always initialized with {{routerId=null}} because it 
> is called before Router ID is determined.
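As a hypothetical illustration of the ordering problem described above 
(simplified sketch, not the actual Router sources), the pattern is:
{code:java}
// The RPC client captures the routerId field while it is still null;
// assigning the ID afterwards does not update the already-constructed client.
class RouterRpcClientSketch {
  private final String routerId;
  RouterRpcClientSketch(String routerId) { this.routerId = routerId; }
  String getRouterId() { return routerId; }   // always null in this scenario
}

class RouterSketch {
  private String routerId;                    // not yet determined
  private RouterRpcClientSketch rpcClient;

  void serviceInit() {
    rpcClient = new RouterRpcClientSketch(routerId);  // captures null
  }

  void serviceStart() {
    routerId = "router-1";                    // determined too late
  }
}
{code}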



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-347) Fix : testCloseContainerViaStandaAlone fails sometimes

2018-08-13 Thread LiXin Ge (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-347?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16579239#comment-16579239
 ] 

LiXin Ge commented on HDDS-347:
---

[~xyao] Thanks for reviewing this, I'm sorry that I didn't make it clear. 
Please take a look at the comments I added in the code below.
{code:java|title=KeyValueContainer.java|borderStyle=solid}
  public void close() throws StorageContainerException {
    try {
      writeLock();
      containerData.closeContainer();      // <--- container state changes from CLOSING to CLOSED
      File containerFile = getContainerFile();
      // update the new container data to .container File
      updateContainerFile(containerFile);  // <--- may take hundreds of milliseconds to process containerFile
    } catch (StorageContainerException ex) {
    } finally {
      ...                                  // <--- after this close() function returns, the 'LOG' the test expects is printed
      ... 
{code}
The 'LOG' may appear several hundred milliseconds after the condition 
_containerData.isClosed_ has been satisfied. So, IMO, the sleep is still 
necessary to wait for the 'LOG' to appear.
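A rough sketch of that approach (the actual patch isn't quoted here; 
{{container}} and the state enum below are assumed names):
{code:java}
import org.apache.hadoop.test.GenericTestUtils;

// First wait until the container reports CLOSED, then give close() extra
// time to finish updateContainerFile() and emit the expected LOG line.
GenericTestUtils.waitFor(
    () -> container.getContainerState() == LifeCycleState.CLOSED,  // assumed API
    500,         // check every 500 ms
    5 * 1000);   // give up after 5 s
Thread.sleep(1000);  // the LOG may trail the state change by a few hundred ms
{code}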

> Fix : testCloseContainerViaStandaAlone fails sometimes
> --
>
> Key: HDDS-347
> URL: https://issues.apache.org/jira/browse/HDDS-347
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: LiXin Ge
>Assignee: LiXin Ge
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-347.000.patch
>
>
> This issue was found in the automatic Jenkins unit test of HDDS-265.
>  The container life cycle state is: Open -> Closing -> Closed. This test 
> submits the container close command and waits for the container state to 
> change to *not equal to open*. Actually, even when that state condition (not 
> equal to open) is satisfied, the container may still be in the process of 
> closing, so the LOG which is printed after the container is closed sometimes 
> can't be found, and the test fails.
> {code:java|title=KeyValueContainer.java|borderStyle=solid}
> try {
>   writeLock();
>   containerData.closeContainer();
>   File containerFile = getContainerFile();
>   // update the new container data to .container File
>   updateContainerFile(containerFile);
> } catch (StorageContainerException ex) {
> {code}
> Looking at the code above, the container state changes from CLOSING to CLOSED 
> in the first step, but the remaining *updateContainerFile* may take hundreds 
> of milliseconds, so even modifying the test logic to wait for the *CLOSED* 
> state does not guarantee that the test succeeds.
>  There are two ways to fix this:
>  1. Remove the part of the double check that depends on the LOG.
>  2. If we have to preserve the double check, wait for the *CLOSED* state and 
> then sleep for a while to wait for the LOG to appear.
>  Patch 000 is based on the second way.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13750) RBF: Router ID in RouterRpcClient is always null

2018-08-13 Thread Takanobu Asanuma (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13750?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Takanobu Asanuma updated HDFS-13750:

Attachment: HDFS-13750.4.patch

> RBF: Router ID in RouterRpcClient is always null
> 
>
> Key: HDFS-13750
> URL: https://issues.apache.org/jira/browse/HDFS-13750
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
> Attachments: HDFS-13750.1.patch, HDFS-13750.2.patch, 
> HDFS-13750.3.patch, HDFS-13750.4.patch
>
>
> {{RouterRpcClient}} is always initialized with {{routerId=null}} because it 
> is called before Router ID is determined.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13817) RBF: create mount point with RANDOM policy and with 2 Nameservices doesn't work properly

2018-08-13 Thread Yiqun Lin (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13817?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16579197#comment-16579197
 ] 

Yiqun Lin commented on HDFS-13817:
--

[~Harsha1206], did you see any other error log when executing the command 
'./hdfs dfs -ls /apps5'?

> RBF: create mount point with RANDOM policy and with 2 Nameservices doesn't 
> work properly 
> -
>
> Key: HDFS-13817
> URL: https://issues.apache.org/jira/browse/HDFS-13817
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: federation
>Reporter: Harshakiran Reddy
>Priority: Major
>  Labels: RBF
>
> {{Scenario:-}} 
> # Create a mount point with RANDOM policy and with 2 Nameservices.
> # List the target mount path of the Global path.
> Actual Output: 
> === 
> {{ls: `/apps5': No such file or directory}}
> Expected Output: 
> =
> {{if the files are available, list those files; if it's empty, it will 
> display nothing}}
> {noformat} 
> bin> ./hdfs dfsrouteradmin -add /apps5 hacluster,ns2 /tmp10 -order RANDOM 
> -owner securedn -group hadoop
> Successfully added mount point /apps5
> bin> ./hdfs dfs -ls /apps5
> ls: `/apps5': No such file or directory
> bin> ./hdfs dfs -ls /apps3
> Found 2 items
> drwxrwxrwx   - user group 0 2018-08-09 19:55 /apps3/apps1
> -rw-r--r--   3   - user group  4 2018-08-10 11:55 /apps3/ttt
>  {noformat}
> {{please refer to the below image for mount information}}
> {{/apps3 tagged with HASH policy}}
> {{/apps5 tagged with RANDOM policy}}
> {noformat}
> /bin> ./hdfs dfsrouteradmin -ls
> Mount Table Entries:
> SourceDestinations  Owner 
> Group Mode  Quota/Usage
> /apps3hacluster->/tmp3,ns2->/tmp4 securedn
>   users rwxr-xr-x [NsQuota: -/-, SsQuota: 
> -/-]
> /apps5hacluster->/tmp5,ns2->/tmp5 securedn
>   users rwxr-xr-x [NsQuota: -/-, SsQuota: 
> -/-]
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13738) fsck -list-corruptfileblocks has infinite loop if user is not privileged.

2018-08-13 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16579092#comment-16579092
 ] 

Hudson commented on HDFS-13738:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14763 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14763/])
HDFS-13738. fsck -list-corruptfileblocks has infinite loop if user is (weichiu: 
rev 4023eeba05aefe48384e870da3c95bb3af474514)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFsck.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DFSck.java
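The committed diff isn't quoted in this thread; as a hedged illustration of 
the failure mode only, the client-side paging loop in {{DFSck}} behaves 
roughly like the sketch below, where every helper name is hypothetical:
{code:java}
// DFSck pages through corrupt file blocks using a cookie. If the server
// keeps answering "Access denied ... FAILED", the cookie never advances,
// so without an error check the loop never terminates.
String cookie = null;
while (true) {
  String response = sendFsckRequest(path, cookie);  // hypothetical helper
  System.out.println(response);
  if (response.contains("FAILED")) {
    break;  // the essence of the fix: stop retrying once a run has failed
  }
  cookie = parseNextCookie(response);               // hypothetical helper
  if (cookie == null) {
    break;  // no more pages to fetch
  }
}
{code}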


> fsck -list-corruptfileblocks has infinite loop if user is not privileged.
> -
>
> Key: HDFS-13738
> URL: https://issues.apache.org/jira/browse/HDFS-13738
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 2.6.0, 3.0.0
> Environment: Kerberized Hadoop cluster
>Reporter: Wei-Chiu Chuang
>Assignee: Yuen-Kuei Hsueh
>Priority: Major
> Fix For: 3.2.0, 3.0.4, 3.1.2
>
> Attachments: HDFS-13738.001.patch, HDFS-13738.002.patch, 
> HDFS-13738.003.patch, HDFS-13738.test.patch
>
>
> Found an interesting bug.
> Execute the following command as any non-privileged user:
> {noformat}
> # run fsck
> $ hdfs fsck / -list-corruptfileblocks
> {noformat}
> {noformat}
> FSCK ended at Mon Jul 16 15:14:03 PDT 2018 in 1 milliseconds
> Access denied for user systest. Superuser privilege is required
> Fsck on path '/' FAILED
> FSCK ended at Mon Jul 16 15:14:03 PDT 2018 in 0 milliseconds
> Access denied for user systest. Superuser privilege is required
> Fsck on path '/' FAILED
> FSCK ended at Mon Jul 16 15:14:03 PDT 2018 in 1 milliseconds
> Access denied for user systest. Superuser privilege is required
> Fsck on path '/' FAILED
> {noformat}
> Reproducible on Hadoop 3.0.0 as well as 2.6.0



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13738) fsck -list-corruptfileblocks has infinite loop if user is not privileged.

2018-08-13 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13738?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-13738:
---
   Resolution: Fixed
Fix Version/s: 3.1.2
   3.0.4
   3.2.0
   Status: Resolved  (was: Patch Available)

Thanks [~study] for contributing the patch. I've pushed the commit into 
branch-3.0, branch-3.1 and trunk.

> fsck -list-corruptfileblocks has infinite loop if user is not privileged.
> -
>
> Key: HDFS-13738
> URL: https://issues.apache.org/jira/browse/HDFS-13738
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 2.6.0, 3.0.0
> Environment: Kerberized Hadoop cluster
>Reporter: Wei-Chiu Chuang
>Assignee: Yuen-Kuei Hsueh
>Priority: Major
> Fix For: 3.2.0, 3.0.4, 3.1.2
>
> Attachments: HDFS-13738.001.patch, HDFS-13738.002.patch, 
> HDFS-13738.003.patch, HDFS-13738.test.patch
>
>
> Found an interesting bug.
> Execute the following command as any non-privileged user:
> {noformat}
> # run fsck
> $ hdfs fsck / -list-corruptfileblocks
> {noformat}
> {noformat}
> FSCK ended at Mon Jul 16 15:14:03 PDT 2018 in 1 milliseconds
> Access denied for user systest. Superuser privilege is required
> Fsck on path '/' FAILED
> FSCK ended at Mon Jul 16 15:14:03 PDT 2018 in 0 milliseconds
> Access denied for user systest. Superuser privilege is required
> Fsck on path '/' FAILED
> FSCK ended at Mon Jul 16 15:14:03 PDT 2018 in 1 milliseconds
> Access denied for user systest. Superuser privilege is required
> Fsck on path '/' FAILED
> {noformat}
> Reproducible on Hadoop 3.0.0 as well as 2.6.0



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13738) fsck -list-corruptfileblocks has infinite loop if user is not privileged.

2018-08-13 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16579072#comment-16579072
 ] 

Wei-Chiu Chuang commented on HDFS-13738:


+1

> fsck -list-corruptfileblocks has infinite loop if user is not privileged.
> -
>
> Key: HDFS-13738
> URL: https://issues.apache.org/jira/browse/HDFS-13738
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 2.6.0, 3.0.0
> Environment: Kerberized Hadoop cluster
>Reporter: Wei-Chiu Chuang
>Assignee: Yuen-Kuei Hsueh
>Priority: Major
> Attachments: HDFS-13738.001.patch, HDFS-13738.002.patch, 
> HDFS-13738.003.patch, HDFS-13738.test.patch
>
>
> Found an interesting bug.
> Execute the following command as any non-privileged user:
> {noformat}
> # run fsck
> $ hdfs fsck / -list-corruptfileblocks
> {noformat}
> {noformat}
> FSCK ended at Mon Jul 16 15:14:03 PDT 2018 in 1 milliseconds
> Access denied for user systest. Superuser privilege is required
> Fsck on path '/' FAILED
> FSCK ended at Mon Jul 16 15:14:03 PDT 2018 in 0 milliseconds
> Access denied for user systest. Superuser privilege is required
> Fsck on path '/' FAILED
> FSCK ended at Mon Jul 16 15:14:03 PDT 2018 in 1 milliseconds
> Access denied for user systest. Superuser privilege is required
> Fsck on path '/' FAILED
> {noformat}
> Reproducible on Hadoop 3.0.0 as well as 2.6.0



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13758) DatanodeManager should throw exception if it has BlockRecoveryCommand but the block is not under construction

2018-08-13 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13758?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16579071#comment-16579071
 ] 

Wei-Chiu Chuang commented on HDFS-13758:


+1. Test failure doesn't appear related.

> DatanodeManager should throw exception if it has BlockRecoveryCommand but the 
> block is not under construction
> -
>
> Key: HDFS-13758
> URL: https://issues.apache.org/jira/browse/HDFS-13758
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 3.0.0-alpha1
>Reporter: Wei-Chiu Chuang
>Assignee: chencan
>Priority: Major
> Attachments: HDFS-10240 scenarios.jpg, HDFS-13758.001.patch, 
> HDFS-13758.branch-2.patch
>
>
> In Hadoop 3, HDFS-8909 added an assertion assuming that if a 
> BlockRecoveryCommand exists for a block, the block is under construction.
>  
> {code:title=DatanodeManager#getBlockRecoveryCommand()}
>   BlockRecoveryCommand brCommand = new BlockRecoveryCommand(blocks.length);
>   for (BlockInfo b : blocks) {
> BlockUnderConstructionFeature uc = b.getUnderConstructionFeature();
> assert uc != null;
> ...
> {code}
> This assertion accidentally fixed one of the possible scenarios of HDFS-10240 
> data corruption: a recoverLease() immediately followed by a close(), before 
> DataNodes have the chance to heartbeat.
> In a unit test you'll get:
> {noformat}
> 2018-07-19 09:43:41,331 [IPC Server handler 9 on 57890] WARN  ipc.Server 
> (Server.java:logException(2724)) - IPC Server handler 9 on 57890, call 
> Call#41 Retry#0 
> org.apache.hadoop.hdfs.server.protocol.DatanodeProtocol.sendHeartbeat from 
> 127.0.0.1:57903
> java.lang.AssertionError
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.getBlockRecoveryCommand(DatanodeManager.java:1551)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.handleHeartbeat(DatanodeManager.java:1661)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.handleHeartbeat(FSNamesystem.java:3865)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.sendHeartbeat(NameNodeRpcServer.java:1504)
>   at 
> org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolServerSideTranslatorPB.sendHeartbeat(DatanodeProtocolServerSideTranslatorPB.java:119)
>   at 
> org.apache.hadoop.hdfs.protocol.proto.DatanodeProtocolProtos$DatanodeProtocolService$2.callBlockingMethod(DatanodeProtocolProtos.java:31660)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1025)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:876)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:822)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1689)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2682)
> {noformat}
> I propose to change this assertion even though it addresses the data 
> corruption, because:
> # We should throw a more meaningful exception than an NPE
> # on a production cluster, the assert is ignored, and you'll get a more 
> noticeable NPE. Future HDFS developers might fix this NPE, causing a 
> regression. An NPE is typically not captured and handled, so there's a chance 
> it results in internal state inconsistency.
> # It doesn't address all possible scenarios of HDFS-10240. A proper fix 
> should reject close() if the block is being recovered.
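A minimal sketch of the direction proposed in point 1 (illustrative only, not 
the attached patch):
{code:java}
// Replace the bare `assert uc != null;` with a descriptive exception, so the
// behavior is identical whether or not -ea is enabled and callers can react.
BlockUnderConstructionFeature uc = b.getUnderConstructionFeature();
if (uc == null) {
  throw new IOException("Block " + b
      + " has a BlockRecoveryCommand but is not under construction");
}
{code}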



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13746) Still occasional "Should be different group" failure in TestRefreshUserMappings#testGroupMappingRefresh

2018-08-13 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13746?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16579056#comment-16579056
 ] 

genericqa commented on HDFS-13746:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 28m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 58s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 28s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}100m 43s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
29s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}164m 51s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.balancer.TestBalancer |
|   | hadoop.hdfs.client.impl.TestBlockReaderLocal |
|   | hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaRecovery 
|
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:ba1ab08 |
| JIRA Issue | HDFS-13746 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12935434/HDFS-13746.005.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux ae12f457a703 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 74411ce |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24767/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24767/testReport/ |
| Max. process+thread count | 3195 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 

[jira] [Updated] (HDFS-13813) Exit NameNode if dangling child inode is detected when saving FsImage

2018-08-13 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13813?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-13813:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Thanks [~smeng] for the patch. Committed rev 002 patch in several branches from 
2.9.x to trunk.

> Exit NameNode if dangling child inode is detected when saving FsImage
> -
>
> Key: HDFS-13813
> URL: https://issues.apache.org/jira/browse/HDFS-13813
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs, namenode
>Affects Versions: 3.1.0, 2.10.0, 2.9.1, 3.0.3
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Fix For: 2.10.0, 3.2.0, 2.9.2, 3.0.4, 3.1.2
>
> Attachments: HDFS-13813.001.patch, HDFS-13813.002.patch
>
>
> Recently, the same stack trace as in -HDFS-9406- has appeared again in the 
> field. The symptom of the problem is that *loadINodeDirectorySection()* can't 
> find a child inode in inodeMap by the node id in the children list of the 
> directory. The child inode could be missing or deleted.
> As of now, we don't have a clear trace to reproduce the problem. Therefore, 
> I'm proposing this improvement to detect such corruption (data structure 
> inconsistency) when saving the FsImage, so that we can have the FsImage and 
> Edit Log to hopefully reproduce the problem stably.
>  
> In a previous patch HDFS-13314, [~arpitagarwal] did a great job catching 
> potential FsImage corruption in two cases. This patch includes a third case 
> where a child inode does not exist in the global FSDirectory dir when saving 
> (serializing) INodeDirectorySection.
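A hedged sketch of the described check ({{children}} and {{fsDir}} are assumed 
variables; this is not the committed code):
{code:java}
// While serializing a directory's children, verify each child id still
// resolves in the global FSDirectory; terminate the NameNode rather than
// write a corrupt FsImage.
for (INode child : children) {
  if (fsDir.getInode(child.getId()) == null) {
    ExitUtil.terminate(1, "Dangling child pointer found while saving FsImage: "
        + child.getFullPathName());
  }
}
{code}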



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13813) Exit NameNode if dangling child inode is detected when saving FsImage

2018-08-13 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13813?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-13813:
---
Fix Version/s: 2.9.2
   2.10.0

> Exit NameNode if dangling child inode is detected when saving FsImage
> -
>
> Key: HDFS-13813
> URL: https://issues.apache.org/jira/browse/HDFS-13813
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs, namenode
>Affects Versions: 3.1.0, 2.10.0, 2.9.1, 3.0.3
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Fix For: 2.10.0, 3.2.0, 2.9.2, 3.0.4, 3.1.2
>
> Attachments: HDFS-13813.001.patch, HDFS-13813.002.patch
>
>
> Recently, the same stack trace as in -HDFS-9406- has appeared again in the 
> field. The symptom of the problem is that *loadINodeDirectorySection()* can't 
> find a child inode in inodeMap by the node id in the children list of the 
> directory. The child inode could be missing or deleted.
> As of now, we don't have a clear trace to reproduce the problem. Therefore, 
> I'm proposing this improvement to detect such corruption (data structure 
> inconsistency) when saving the FsImage, so that we can have the FsImage and 
> Edit Log to hopefully reproduce the problem stably.
>  
> In a previous patch HDFS-13314, [~arpitagarwal] did a great job catching 
> potential FsImage corruption in two cases. This patch includes a third case 
> where a child inode does not exist in the global FSDirectory dir when saving 
> (serializing) INodeDirectorySection.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13813) Exit NameNode if dangling child inode is detected when saving FsImage

2018-08-13 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13813?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16579037#comment-16579037
 ] 

Hudson commented on HDFS-13813:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14762 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14762/])
HDFS-13813. Exit NameNode if dangling child inode is detected when (weichiu: 
rev 23854443efa62aa70a1c30c32c3816750e5d7a5b)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageFormatPBINode.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageFormatProtobuf.java


> Exit NameNode if dangling child inode is detected when saving FsImage
> -
>
> Key: HDFS-13813
> URL: https://issues.apache.org/jira/browse/HDFS-13813
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs, namenode
>Affects Versions: 3.1.0, 2.10.0, 2.9.1, 3.0.3
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Fix For: 3.2.0, 3.0.4, 3.1.2
>
> Attachments: HDFS-13813.001.patch, HDFS-13813.002.patch
>
>
> Recently, the same stack trace as in -HDFS-9406- has appeared again in the 
> field. The symptom of the problem is that *loadINodeDirectorySection()* can't 
> find a child inode in inodeMap by the node id in the children list of the 
> directory. The child inode could be missing or deleted.
> As of now, we don't have a clear trace to reproduce the problem. Therefore, 
> I'm proposing this improvement to detect such corruption (data structure 
> inconsistency) when saving the FsImage, so that we can have the FsImage and 
> Edit Log to hopefully reproduce the problem stably.
>  
> In a previous patch HDFS-13314, [~arpitagarwal] did a great job catching 
> potential FsImage corruption in two cases. This patch includes a third case 
> where a child inode does not exist in the global FSDirectory dir when saving 
> (serializing) INodeDirectorySection.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-341) HDDS/Ozone bits are leaking into Hadoop release

2018-08-13 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-341?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-341:
--
Fix Version/s: (was: 0.2.1)

> HDDS/Ozone bits are leaking into Hadoop release
> ---
>
> Key: HDDS-341
> URL: https://issues.apache.org/jira/browse/HDDS-341
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Reporter: Anu Engineer
>Priority: Blocker
>
> [~aw] reported in the Ozone release discussion that Ozone is leaking bits 
> into Hadoop. This has to be fixed before the Hadoop 3.2 or Ozone 0.2.1 
> release. I will make this a release blocker for Ozone.
>  
> {noformat}
> >Has anyone verified that a Hadoop release doesn't have _any_ of the extra 
> >ozone bits that are sprinkled outside the maven modules?
> [aengineer] : As far as I know that is the state, we have had multiple Hadoop 
> releases after ozone has been merged. So far no one has reported Ozone bits 
> leaking into Hadoop. If we find something like that, it would be a bug.
> [aw]: There hasn't been a release from a branch where Ozone has been merged 
> yet. The first one will be 3.2.0.  Running create-release off of trunk 
> presently shows bits of Ozone in dev-support, hadoop-dist, and elsewhere in 
> the Hadoop source tar ball.
> >   So, consider this as a report. IMHO, cutting an Ozone release prior to 
> > a Hadoop release is ill-advised given the distribution impact and the 
> > requirements of the merge vote.  
> {noformat}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13813) Exit NameNode if dangling child inode is detected when saving FsImage

2018-08-13 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13813?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-13813:
---
Fix Version/s: 3.1.2
   3.0.4
   3.2.0

> Exit NameNode if dangling child inode is detected when saving FsImage
> -
>
> Key: HDFS-13813
> URL: https://issues.apache.org/jira/browse/HDFS-13813
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs, namenode
>Affects Versions: 3.1.0, 2.10.0, 2.9.1, 3.0.3
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Fix For: 3.2.0, 3.0.4, 3.1.2
>
> Attachments: HDFS-13813.001.patch, HDFS-13813.002.patch
>
>
> Recently, the same stack trace as in -HDFS-9406- has appeared again in the 
> field. The symptom of the problem is that *loadINodeDirectorySection()* can't 
> find a child inode in inodeMap by the node id in the children list of the 
> directory. The child inode could be missing or deleted.
> As of now, we don't have a clear trace to reproduce the problem. Therefore, 
> I'm proposing this improvement to detect such corruption (data structure 
> inconsistency) when saving the FsImage, so that we can have the FsImage and 
> Edit Log to hopefully reproduce the problem stably.
>  
> In a previous patch HDFS-13314, [~arpitagarwal] did a great job catching 
> potential FsImage corruption in two cases. This patch includes a third case 
> where a child inode does not exist in the global FSDirectory dir when saving 
> (serializing) INodeDirectorySection.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13813) Exit NameNode if dangling child inode is detected when saving FsImage

2018-08-13 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13813?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16579014#comment-16579014
 ] 

Wei-Chiu Chuang commented on HDFS-13813:


+1. will commit shortly.

> Exit NameNode if dangling child inode is detected when saving FsImage
> -
>
> Key: HDFS-13813
> URL: https://issues.apache.org/jira/browse/HDFS-13813
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs, namenode
>Affects Versions: 3.1.0, 2.10.0, 2.9.1, 3.0.3
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Attachments: HDFS-13813.001.patch, HDFS-13813.002.patch
>
>
> Recently, the same stack trace as in -HDFS-9406- has appeared again in the 
> field. The symptom of the problem is that *loadINodeDirectorySection()* can't 
> find a child inode in inodeMap by the node id in the children list of the 
> directory. The child inode could be missing or deleted.
> As of now, we don't have a clear trace to reproduce the problem. Therefore, 
> I'm proposing this improvement to detect such corruption (data structure 
> inconsistency) when saving the FsImage, so that we can have the FsImage and 
> Edit Log to hopefully reproduce the problem stably.
>  
> In a previous patch HDFS-13314, [~arpitagarwal] did a great job catching 
> potential FsImage corruption in two cases. This patch includes a third case 
> where a child inode does not exist in the global FSDirectory dir when saving 
> (serializing) INodeDirectorySection.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13746) Still occasional "Should be different group" failure in TestRefreshUserMappings#testGroupMappingRefresh

2018-08-13 Thread Siyao Meng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13746?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HDFS-13746:
--
Status: In Progress  (was: Patch Available)

> Still occasional "Should be different group" failure in 
> TestRefreshUserMappings#testGroupMappingRefresh
> ---
>
> Key: HDFS-13746
> URL: https://issues.apache.org/jira/browse/HDFS-13746
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Attachments: HDFS-13746.001.patch, HDFS-13746.002.patch, 
> HDFS-13746.003.patch, HDFS-13746.004.patch, HDFS-13746.005.patch
>
>
> In https://issues.apache.org/jira/browse/HDFS-13723, increasing the amount of 
> time in sleep() helps but the problem still appears, which is annoying.
>  
> Solution:
> Use a loop to allow the test case to fail maxTrials times before declaring 
> failure. Wait 50 ms between each retry.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13746) Still occasional "Should be different group" failure in TestRefreshUserMappings#testGroupMappingRefresh

2018-08-13 Thread Siyao Meng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13746?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HDFS-13746:
--
Attachment: HDFS-13746.005.patch
Status: Patch Available  (was: In Progress)

> Still occasional "Should be different group" failure in 
> TestRefreshUserMappings#testGroupMappingRefresh
> ---
>
> Key: HDFS-13746
> URL: https://issues.apache.org/jira/browse/HDFS-13746
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Attachments: HDFS-13746.001.patch, HDFS-13746.002.patch, 
> HDFS-13746.003.patch, HDFS-13746.004.patch, HDFS-13746.005.patch
>
>
> In https://issues.apache.org/jira/browse/HDFS-13723, increasing the amount of 
> time in sleep() helps but the problem still appears, which is annoying.
>  
> Solution:
> Use a loop to allow the test case to fail maxTrials times before declaring 
> failure. Wait 50 ms between each retry.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13746) Still occasional "Should be different group" failure in TestRefreshUserMappings#testGroupMappingRefresh

2018-08-13 Thread Siyao Meng (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13746?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16578916#comment-16578916
 ] 

Siyao Meng commented on HDFS-13746:
---

[~templedf] Thanks!

Ah! Sorry, I just realized that I was looking at the wrong variable 
(str_group). Yes, .toString() would work for gN, thanks for pointing that out!

Great. I have rewritten the 4th test with GenericTestUtils.waitFor(). But I 
have to convert long to int for the 3rd parameter. The resulting code is neat.
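For context, a minimal sketch of that {{waitFor()}} rewrite, with an assumed 
refresh condition and variable names; note the explicit cast because the third 
parameter is an {{int}}:
{code:java}
import org.apache.hadoop.test.GenericTestUtils;

long timeoutMs = 10_000L;
GenericTestUtils.waitFor(
    () -> !groupsAfterRefresh.toString().equals(groupsBefore.toString()),  // assumed condition
    50,                 // retry every 50 ms
    (int) timeoutMs);   // waitFor takes an int, hence the conversion
{code}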

> Still occasional "Should be different group" failure in 
> TestRefreshUserMappings#testGroupMappingRefresh
> ---
>
> Key: HDFS-13746
> URL: https://issues.apache.org/jira/browse/HDFS-13746
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Attachments: HDFS-13746.001.patch, HDFS-13746.002.patch, 
> HDFS-13746.003.patch, HDFS-13746.004.patch
>
>
> In https://issues.apache.org/jira/browse/HDFS-13723, increasing the amount of 
> time in sleep() helps but the problem still appears, which is annoying.
>  
> Solution:
> Use a loop to allow the test case to fail maxTrials times before declaring 
> failure. Wait 50 ms between each retry.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-98) Adding Ozone Manager Audit Log

2018-08-13 Thread Xiaoyu Yao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-98?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16578908#comment-16578908
 ] 

Xiaoyu Yao commented on HDDS-98:


[~dineshchitlangia], thanks for working on this, my comments inline:
{quote}In methods like setOwner or setQuota we don't have access to the old 
owner or old quota values. Ideally, we would want to log old and new values in 
the audit log.
{quote}
We could live without original values for now, like the HDFS audit does today. 
To add the original value in the future, we could either 1) make a change to 
volumeManager to return the original value as part of the operation result, or 
2) use a separate call (overhead) to retrieve the original value. 
{quote}In methods like createVolume we will not have a valid value for 
creationTime since that is populated in the underlying implementation layer.
{quote}
This can be handled similarly to the two approaches above. 
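A hedged sketch of option 1 (hypothetical signatures throughout; neither a 
{{volumeManager.setOwner}} that returns the old owner nor the {{AUDIT}} logger 
shown here is the real API):
{code:java}
// Have the manager return the previous value as part of the operation
// result, so the audit entry records old and new owner without a second call.
public void setOwner(String volume, String newOwner) throws IOException {
  String oldOwner = volumeManager.setOwner(volume, newOwner);  // assumed return value
  AUDIT.logWriteSuccess("setOwner volume=" + volume
      + " oldOwner=" + oldOwner + " newOwner=" + newOwner);    // assumed logger
}
{code}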

> Adding Ozone Manager Audit Log
> --
>
> Key: HDDS-98
> URL: https://issues.apache.org/jira/browse/HDDS-98
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Dinesh Chitlangia
>Priority: Major
>
> This ticket is opened to add ozone manager's audit log. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13786) EC: Display erasure coding policy for sub-directories is not working

2018-08-13 Thread Xiao Chen (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13786?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16578904#comment-16578904
 ] 

Xiao Chen commented on HDFS-13786:
--

Cherry-picked this to branch-3.1 and branch-3.0. Thanks for the work here!

> EC: Display erasure coding policy for sub-directories is not working
> 
>
> Key: HDFS-13786
> URL: https://issues.apache.org/jira/browse/HDFS-13786
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding
>Affects Versions: 3.0.0
> Environment: 3 Node SUSE Linux Cluster
>Reporter: Souryakanta Dwivedy
>Assignee: Ayush Saxena
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: Display_EC_Policy_Missing_Sub_Dir.png, 
> HDFS-13786-01.patch
>
>
> EC: Display erasure coding policy for sub-directories is not working
> - Create a directory 
>  - Set an EC policy for the directory
>  - Create a file inside that directory 
>  - Create a sub-directory inside the parent directory
>  - Check the EC policy set for the files and sub-folders of the parent 
> directory with the command 
>  "hadoop fs -ls -e /ecdir" 
>  The EC policy will be displayed only for files and is missing for 
> sub-directories, which is wrong behavior
>  - But if you check the EC policy of a sub-directory with "hdfs ec 
> -getPolicy", it will show
>  the EC policy
>  
>  Actual output :-
>  
>  Displaying the erasure coding policy for sub-directories is not working 
> with the command "hadoop fs -ls -e "
> Expected output :-
> It should display the erasure coding policy for sub-directories also with 
> the command "hadoop fs -ls -e "



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDDS-98) Adding Ozone Manager Audit Log

2018-08-13 Thread Dinesh Chitlangia (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-98?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16575282#comment-16575282
 ] 

Dinesh Chitlangia edited comment on HDDS-98 at 8/13/18 8:48 PM:


[~xyao] - I started implementing the changes in OzoneManager as discussed.

I see certain roadblocks:
 # In methods like setOwner or setQuota we don't have access to the old owner 
or old quota values. Ideally, we would want to log old and new values in audit 
log
 # In methods like createVolume we will not have a valid value for creationTime 
since that is populated in the underlying implementation layer.

Could you please share your thoughts on this?

 

cc: [~anu]


was (Author: dineshchitlangia):
[~xyao] - I started implementing the changes in OzoneManager as discussed.

I see certain roadblocks:
 # In methods like setOwner or setQuota we don't have access to the old owner 
or old quota values. Ideally, we would want to log old and new values in audit 
log
 # In methods like createVolume we will not have a valid value for creationTime 
since that is populated in the underlying implementation layer.

Could you please share your thoughts on this?

> Adding Ozone Manager Audit Log
> --
>
> Key: HDDS-98
> URL: https://issues.apache.org/jira/browse/HDDS-98
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Dinesh Chitlangia
>Priority: Major
>
> This ticket is opened to add ozone manager's audit log. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work stopped] (HDDS-98) Adding Ozone Manager Audit Log

2018-08-13 Thread Dinesh Chitlangia (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-98?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDDS-98 stopped by Dinesh Chitlangia.
-
> Adding Ozone Manager Audit Log
> --
>
> Key: HDDS-98
> URL: https://issues.apache.org/jira/browse/HDDS-98
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Dinesh Chitlangia
>Priority: Major
>
> This ticket is opened to add ozone manager's audit log. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13746) Still occasional "Should be different group" failure in TestRefreshUserMappings#testGroupMappingRefresh

2018-08-13 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13746?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16578890#comment-16578890
 ] 

genericqa commented on HDFS-13746:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 28m 
 9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 58s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
47s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 36s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}101m 56s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
30s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}166m 26s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.ha.TestDFSUpgradeWithHA |
|   | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks |
|   | hadoop.hdfs.server.blockmanagement.TestReplicationPolicy |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:ba1ab08 |
| JIRA Issue | HDFS-13746 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12935411/HDFS-13746.004.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 4badae43254f 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / b94c887 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24766/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24766/testReport/ |
| Max. process+thread count | 3324 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 

[jira] [Commented] (HDFS-13746) Still occasional "Should be different group" failure in TestRefreshUserMappings#testGroupMappingRefresh

2018-08-13 Thread Daniel Templeton (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13746?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16578876#comment-16578876
 ] 

Daniel Templeton commented on HDFS-13746:
-

I was just reminded on another JIRA that there's a better way to do the 
retries: {{GenericTestUtils.waitFor()}}. Sorry I didn't think of that earlier.

On printing the arrays, {{gN}} is a {{List}}, not an array, so calling 
{{gN.toString()}} will return a string with the contents of the list. If you 
trace the code deep enough (or use a debugger), you'll see that it's 
specifically a {{LinkedList}}.

> Still occasional "Should be different group" failure in 
> TestRefreshUserMappings#testGroupMappingRefresh
> ---
>
> Key: HDFS-13746
> URL: https://issues.apache.org/jira/browse/HDFS-13746
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Attachments: HDFS-13746.001.patch, HDFS-13746.002.patch, 
> HDFS-13746.003.patch, HDFS-13746.004.patch
>
>
> In https://issues.apache.org/jira/browse/HDFS-13723, increasing the amount of 
> time in sleep() helps but the problem still appears, which is annoying.
>  
> Solution:
> Use a loop to allow the test case to fail maxTrials times before declaring 
> failure. Wait 50 ms between each retry.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-179) CloseContainer command should be executed only if all the prior "Write" type container requests get executed

2018-08-13 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-179?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16578860#comment-16578860
 ] 

genericqa commented on HDDS-179:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
28s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
39s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 30m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 32m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m  6s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
6s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
27s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 30m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 30m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  2s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
55s{color} | {color:green} container-service in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  6m 
38s{color} | {color:green} integration-test in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
44s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}136m 29s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:ba1ab08 |
| JIRA Issue | HDDS-179 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12935412/HDDS-179.10.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 8b58f0fc1fbb 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality 

[jira] [Commented] (HDDS-324) Use pipeline name as Ratis groupID to allow datanode to report pipeline info

2018-08-13 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-324?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16578857#comment-16578857
 ] 

Hudson commented on HDDS-324:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14760 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14760/])
HDDS-324. Use pipeline name as Ratis groupID to allow datanode to report (xyao: 
rev b4031a8f1b2c81249ec24167e38679a775c09214)
* (edit) hadoop-hdds/common/src/main/java/org/apache/ratis/RatisHelper.java
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/XceiverClientSpi.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/statemachine/commandhandler/CloseContainerCommandHandler.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/protocol/commands/CloseContainerCommand.java
* (edit) 
hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/container/replication/TestReplicationManager.java
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipelines/ratis/RatisManagerImpl.java
* (add) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/container/common/helpers/PipelineID.java
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/closer/ContainerCloser.java
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipelines/standalone/StandaloneManagerImpl.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/transport/server/XceiverServerGrpc.java
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/container/common/helpers/ContainerInfo.java
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/ContainerStateManager.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/common/statemachine/commandhandler/TestCloseContainerByPipeline.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/TestMiniOzoneCluster.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/ozoneimpl/OzoneContainer.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/transport/server/XceiverServer.java
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/container/common/helpers/Pipeline.java
* (edit) hadoop-common-project/hadoop-common/src/main/bin/hadoop-functions.sh
* (edit) 
hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/node/TestNodeManager.java
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipelines/PipelineManager.java
* (edit) 
hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/client/ContainerOperationClient.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/ContainerTestHelper.java
* (edit) hadoop-project/pom.xml
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/transport/server/XceiverServerSpi.java
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/CloseContainerEventHandler.java
* (edit) 
hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/XceiverClientGrpc.java
* (edit) 
hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/block/TestDeletedBlockLog.java
* (edit) hadoop-hdds/common/src/main/proto/hdds.proto
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/transport/server/ratis/XceiverServerRatis.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/hdds/scm/pipeline/TestPipelineClose.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/hdds/scm/pipeline/TestNode2PipelineMap.java
* (edit) 
hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/genesis/BenchMarkDatanodeDispatcher.java
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/ContainerMapping.java
* (edit) 
hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/genesis/BenchMarkContainerStateMap.java
* (edit) 
hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/XceiverClientRatis.java
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/states/ContainerStateMap.java
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/DatanodeInfo.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/common/statemachine/commandhandler/TestCloseContainerHandler.java
* (edit) 
hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/XceiverClient.java
* (edit) 
hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/ozone/container/common/TestEndPoint.java
* (edit) 
hadoop-hdds/container-service/src/main/proto/StorageContainerDatanodeProtocol.proto
* (edit) 

[jira] [Commented] (HDFS-13735) Make QJM HTTP URL connection timeout configurable

2018-08-13 Thread Chen Liang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16578846#comment-16578846
 ] 

Chen Liang commented on HDFS-13735:
---

Thanks for pointing that out, [~shv]. I've committed to branch-3.0 and branch-3.1.

> Make QJM HTTP URL connection timeout configurable
> -
>
> Key: HDFS-13735
> URL: https://issues.apache.org/jira/browse/HDFS-13735
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: qjm
>Reporter: Chao Sun
>Assignee: Chao Sun
>Priority: Minor
> Fix For: 3.2.0, 3.1.2
>
> Attachments: HDFS-13735.000.patch, HDFS-13735.001.patch
>
>
> We've seen "connect timed out" happen internally when QJM tries to open HTTP 
> connections to JNs. This currently uses {{newDefaultURLConnectionFactory}}, 
> which uses the default timeout of 60s and is not configurable.
> It would be better for this to be configurable, especially for 
> ObserverNameNode (HDFS-12943), where latency is important, and 60s may not be 
> a good value.
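
As an illustration of the knob involved (plain {{java.net}} API; 
{{connectTimeoutMs}} and {{readTimeoutMs}} stand in for whatever configuration 
values a patch would introduce):

{code:java}
// A configurable connection factory would effectively set these per-connection
// timeouts instead of relying on the fixed 60s default.
URL url = new URL("http://journalnode:8480/getJournal");
URLConnection conn = url.openConnection();
conn.setConnectTimeout(connectTimeoutMs);  // hypothetical configured value
conn.setReadTimeout(readTimeoutMs);        // hypothetical configured value
{code}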



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-324) Use pipeline name as Ratis groupID to allow datanode to report pipeline info

2018-08-13 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-324?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-324:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

Thanks [~msingh] for the contribution and all for the reviews. I've committed 
the fix to the trunk. 

> Use pipeline name as Ratis groupID to allow datanode to report pipeline info
> 
>
> Key: HDDS-324
> URL: https://issues.apache.org/jira/browse/HDDS-324
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.2.1
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-324.001.patch, HDDS-324.002.patch, 
> HDDS-324.003.patch, HDDS-324.004.patch, HDDS-324.005.patch, 
> HDDS-324.006.patch, HDDS-324.007.patch, HDDS-324.008.patch, HDDS-324.009.patch
>
>
> Currently Ozone creates a random pipeline id for every pipeline, where a 
> pipeline consists of 3 nodes in a Ratis ring. Ratis, on the other hand, uses 
> the notion of RaftGroupID, which is a unique id for the nodes in a Ratis 
> ring. When a datanode sends information to SCM, the pipeline for the node is 
> currently identified using dn2PipelineMap. With correct use of RaftGroupID, 
> we can eliminate the use of dn2PipelineMap.
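
A minimal sketch of the idea (assuming Ratis's {{RaftGroupId.valueOf(UUID)}}; 
the actual class in the patch may differ):

{code:java}
// Derive the Ratis group id directly from the pipeline's UUID, so a datanode
// report can identify its pipeline without consulting dn2PipelineMap.
final class PipelineID {
  private final UUID id = UUID.randomUUID();

  RaftGroupId getRaftGroupID() {
    return RaftGroupId.valueOf(id);  // same UUID on the SCM and datanode side
  }
}
{code}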



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-324) Use pipeline name as Ratis groupID to allow datanode to report pipeline info

2018-08-13 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-324?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16578823#comment-16578823
 ] 

genericqa commented on HDDS-324:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 11 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m 
18s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 28m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 29m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  6m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 44s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project hadoop-ozone/integration-test {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
49s{color} | {color:red} hadoop-hdds/server-scm in trunk has 1 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  5m 
45s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
38s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 29m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 29m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 29m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  6m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
 4s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} shelldocs {color} | {color:green}  0m 
32s{color} | {color:green} There were no new shelldocs issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 22s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
52s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
20s{color} | {color:green} common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
6s{color} | {color:green} container-service in the patch passed. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
53s{color} | {color:green} client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
58s{color} | {color:green} hadoop-hdds/server-scm generated 0 new + 0 unchanged 
- 

[jira] [Commented] (HDDS-268) Add SCM close container watcher

2018-08-13 Thread Ajay Kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-268?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16578797#comment-16578797
 ] 

Ajay Kumar commented on HDDS-268:
-

[~xyao], CommandStatusReportHandler already publishes 
{{SCMEvents.CLOSE_CONTAINER_STATUS}}. The new watcher will listen to this. In 
case of failure it will send an event to CloseContainerCommandHandler, which 
may resend the command to the datanodes. In all other cases the watcher will 
remove the event from its internal queue and consider it completed. I think we 
need to handle the PENDING status separately as well.

{code}
@Override
protected synchronized void handleCompletion(CloseContainerStatus status,
    EventPublisher publisher) throws LeaseNotFoundException {
  CloseContainerRetryableReq closeCont = getTrackedEventbyId(status.getId());
  super.handleCompletion(status, publisher);
  if (status.getCmdStatus().getStatus().equals(Status.FAILED)
      && closeCont != null) {
    this.resendEventToHandler(closeCont.getId(), publisher);
  }
}
{code}

I had a discussion regarding this with [~nandakumar131]. If we don't consider a 
container to be closed until we receive an ack from all related DNs, then we 
need to add the DN id to CloseContainerRetryableReq to track the command to 
every datanode. If you agree, I will submit a new patch with both changes.
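
A hypothetical sketch of that change (names and shape assumed, not the actual 
patch):

{code:java}
// Track one pending ack per datanode, so a container only counts as closed
// once every replica has acknowledged the close command.
class CloseContainerRetryableReq {
  private final long containerId;
  private final Set<UUID> pendingDatanodes = new HashSet<>();

  CloseContainerRetryableReq(long containerId, Collection<UUID> replicas) {
    this.containerId = containerId;
    this.pendingDatanodes.addAll(replicas);
  }

  // Returns true once all datanodes have acknowledged the close.
  synchronized boolean ack(UUID datanodeId) {
    pendingDatanodes.remove(datanodeId);
    return pendingDatanodes.isEmpty();
  }
}
{code}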

> Add SCM close container watcher
> ---
>
> Key: HDDS-268
> URL: https://issues.apache.org/jira/browse/HDDS-268
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Xiaoyu Yao
>Assignee: Ajay Kumar
>Priority: Blocker
> Fix For: 0.2.1
>
> Attachments: HDDS-268.00.patch, HDDS-268.01.patch, HDDS-268.02.patch, 
> HDDS-268.03.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13823) NameNode UI : "Utilities -> Browse the file system -> open a file -> Head the file" is not working

2018-08-13 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13823?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16578795#comment-16578795
 ] 

Hudson commented on HDFS-13823:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14759 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14759/])
HDFS-13823. NameNode UI : "Utilities -> Browse the file system -> open a (arp: 
rev f760a544a74d3341f665f03c83402d65c5c2a8cd)
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/explorer.js


> NameNode UI : "Utilities -> Browse the file system -> open a file -> Head the 
> file" is not working
> --
>
> Key: HDFS-13823
> URL: https://issues.apache.org/jira/browse/HDFS-13823
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ui
>Affects Versions: 3.1.1
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
> Fix For: 3.2.0, 3.1.2
>
> Attachments: HDFS-13823.000.patch
>
>
> In NameNode UI 'Head the file' and 'Tail the file' links under {{'Utilities 
> -> Browse the file system -> open a file'}} are not working. The file 
> contents box is coming up as empty.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13823) NameNode UI : "Utilities -> Browse the file system -> open a file -> Head the file" is not working

2018-08-13 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13823?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-13823:
-
  Resolution: Fixed
   Fix Version/s: 3.1.2
  3.2.0
Target Version/s: 3.0.4  (was: 3.2.0, 3.0.4, 3.1.2)
  Status: Resolved  (was: Patch Available)

Committed to branch-3.1 and trunk.

I hit a merge conflict for branch-3.0; I'm not sure this issue exists in 3.0.3.

Thanks for the contribution, [~nandakumar131], and thanks for the code review 
and verification, [~ajayydv].

> NameNode UI : "Utilities -> Browse the file system -> open a file -> Head the 
> file" is not working
> --
>
> Key: HDFS-13823
> URL: https://issues.apache.org/jira/browse/HDFS-13823
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ui
>Affects Versions: 3.1.1
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
> Fix For: 3.2.0, 3.1.2
>
> Attachments: HDFS-13823.000.patch
>
>
> In NameNode UI 'Head the file' and 'Tail the file' links under {{'Utilities 
> -> Browse the file system -> open a file'}} are not working. The file 
> contents box is coming up as empty.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13790) RBF: Move ClientProtocol APIs to its own module

2018-08-13 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13790?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16578764#comment-16578764
 ] 

genericqa commented on HDFS-13790:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 24m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 31s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 13s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 17m 
25s{color} | {color:green} hadoop-hdfs-rbf in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 72m 22s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:ba1ab08 |
| JIRA Issue | HDFS-13790 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12935404/HDFS-13790.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 655c4d0bd044 4.4.0-130-generic #156-Ubuntu SMP Thu Jun 14 
08:53:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 11daa01 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_171 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24765/testReport/ |
| Max. process+thread count | 1346 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 
hadoop-hdfs-project/hadoop-hdfs-rbf |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24765/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> RBF: Move ClientProtocol APIs to its own 

[jira] [Commented] (HDFS-13823) NameNode UI : "Utilities -> Browse the file system -> open a file -> Head the file" is not working

2018-08-13 Thread Ajay Kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13823?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16578761#comment-16578761
 ] 

Ajay Kumar commented on HDFS-13823:
---

[~nandakumar131], thanks for fixing this. I tested it locally.

+1 (non-binding)

> NameNode UI : "Utilities -> Browse the file system -> open a file -> Head the 
> file" is not working
> --
>
> Key: HDFS-13823
> URL: https://issues.apache.org/jira/browse/HDFS-13823
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ui
>Affects Versions: 3.1.1
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
> Attachments: HDFS-13823.000.patch
>
>
> In NameNode UI 'Head the file' and 'Tail the file' links under {{'Utilities 
> -> Browse the file system -> open a file'}} are not working. The file 
> contents box is coming up as empty.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-214) HDDS/Ozone First Release

2018-08-13 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-214?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16578754#comment-16578754
 ] 

Anu Engineer commented on HDDS-214:
---

bq. This pretty much answers all of my questions. 

Maybe this is an opportunity for us to look at the release process scripts and 
document them? I am willing to document both the Hadoop release and the Ozone 
release processes. [~aw], I don't have your level of expertise on the Hadoop 
release process or its history; would you be able to review and correct any 
mistakes I make? Then we can use this as a chance to address this issue.

> HDDS/Ozone First Release
> 
>
> Key: HDDS-214
> URL: https://issues.apache.org/jira/browse/HDDS-214
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>Reporter: Anu Engineer
>Assignee: Elek, Marton
>Priority: Major
> Attachments: Ozone 0.2.1 release plan.pdf
>
>
> This is an umbrella JIRA that collects all work items, design discussions, 
> etc. for Ozone's release. We will post a design document soon to open the 
> discussion and nail down the details of the release.
> cc: [~xyao] , [~elek], [~arpitagarwal] [~jnp] , [~msingh] [~nandakumar131], 
> [~bharatviswa]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-345) Upgrade RocksDB version from 5.8.0 to 5.14.2

2018-08-13 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16578755#comment-16578755
 ] 

genericqa commented on HDDS-345:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 18s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
57s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 51s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
54s{color} | {color:green} common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
27s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 59m  3s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:ba1ab08 |
| JIRA Issue | HDDS-345 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12935406/HDDS-345.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  xml  findbugs  checkstyle  |
| uname | Linux bd39a7dd4596 4.4.0-130-generic #156-Ubuntu SMP Thu Jun 14 
08:53:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 11daa01 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_171 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDDS-Build/760/testReport/ |
| Max. process+thread count | 409 (vs. ulimit of 1) |
| modules | C: hadoop-hdds/common U: hadoop-hdds/common |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/760/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   

[jira] [Comment Edited] (HDDS-333) Create an Ozone Logo

2018-08-13 Thread Arpit Agarwal (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-333?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16578753#comment-16578753
 ] 

Arpit Agarwal edited comment on HDDS-333 at 8/13/18 6:23 PM:
-

Created a surveymonkey poll -> https://www.surveymonkey.com/r/DJ5HBPX

My votes in order of preference: 2, 4.

This is a fairly low-tech poll - no ballot stuffing, please. :)


was (Author: arpitagarwal):
Created a surveymonkey poll -> https://www.surveymonkey.com/r/DJ5HBPX

My votes in order of preference: 2, 4.

> Create an Ozone Logo
> 
>
> Key: HDDS-333
> URL: https://issues.apache.org/jira/browse/HDDS-333
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Manager
>Reporter: Anu Engineer
>Assignee: Priyanka Nagwekar
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: Logo Final.zip, Logo-Ozone-Transparent-Bg.png, 
> Ozone-Logo-Options.png
>
>
> As part of developing Ozone Website and Documentation, it would be nice to 
> have an Ozone Logo.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-333) Create an Ozone Logo

2018-08-13 Thread Arpit Agarwal (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-333?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16578753#comment-16578753
 ] 

Arpit Agarwal commented on HDDS-333:


Created a surveymonkey poll -> https://www.surveymonkey.com/r/DJ5HBPX

My votes in order of preference: 2, 4.

> Create an Ozone Logo
> 
>
> Key: HDDS-333
> URL: https://issues.apache.org/jira/browse/HDDS-333
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Manager
>Reporter: Anu Engineer
>Assignee: Priyanka Nagwekar
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: Logo Final.zip, Logo-Ozone-Transparent-Bg.png, 
> Ozone-Logo-Options.png
>
>
> As part of developing Ozone Website and Documentation, it would be nice to 
> have an Ozone Logo.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDDS-333) Create an Ozone Logo

2018-08-13 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-333?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16578742#comment-16578742
 ] 

Anu Engineer edited comment on HDDS-333 at 8/13/18 6:13 PM:


 bq. How about a yellow lightning over a green ring?  

IMHO, that would be too awesome, and we would have difficulty looking at 
the logo with that much brightness. This reminds me of Kung Fu Panda: "There is 
no charge for awesomeness... or attractiveness", but we would not be able to 
look at it. :)

 


was (Author: anu):
 bq. How about a yellow lightning over a green ring?  

IMHO,that would be too much Awesome, and we would have difficulty looking at 
the logo on with too much brightness. This reminds me of kungfu panda "There is 
no charge for awesomeness...or attractiveness", but we will not be able to look 
at it. :)

 

> Create an Ozone Logo
> 
>
> Key: HDDS-333
> URL: https://issues.apache.org/jira/browse/HDDS-333
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Manager
>Reporter: Anu Engineer
>Assignee: Priyanka Nagwekar
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: Logo Final.zip, Logo-Ozone-Transparent-Bg.png, 
> Ozone-Logo-Options.png
>
>
> As part of developing Ozone Website and Documentation, it would be nice to 
> have an Ozone Logo.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-214) HDDS/Ozone First Release

2018-08-13 Thread Allen Wittenauer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-214?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16578741#comment-16578741
 ] 

Allen Wittenauer commented on HDDS-214:
---

bq.  if you can point us to the correct Hadoop release process document

This pretty much answers all of my questions.  :( 


> HDDS/Ozone First Release
> 
>
> Key: HDDS-214
> URL: https://issues.apache.org/jira/browse/HDDS-214
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>Reporter: Anu Engineer
>Assignee: Elek, Marton
>Priority: Major
> Attachments: Ozone 0.2.1 release plan.pdf
>
>
> This is an umbrella JIRA that collects all work items, design discussions, 
> etc. for Ozone's release. We will post a design document soon to open the 
> discussion and nail down the details of the release.
> cc: [~xyao] , [~elek], [~arpitagarwal] [~jnp] , [~msingh] [~nandakumar131], 
> [~bharatviswa]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-333) Create an Ozone Logo

2018-08-13 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-333?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16578742#comment-16578742
 ] 

Anu Engineer commented on HDDS-333:
---

 bq. How about a yellow lightning over a green ring?  

IMHO, that would be too awesome, and we would have difficulty looking at 
the logo with that much brightness. This reminds me of Kung Fu Panda: "There is 
no charge for awesomeness... or attractiveness", but we would not be able to 
look at it. :)

 

> Create an Ozone Logo
> 
>
> Key: HDDS-333
> URL: https://issues.apache.org/jira/browse/HDDS-333
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Manager
>Reporter: Anu Engineer
>Assignee: Priyanka Nagwekar
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: Logo Final.zip, Logo-Ozone-Transparent-Bg.png, 
> Ozone-Logo-Options.png
>
>
> As part of developing Ozone Website and Documentation, It would be nice to 
> have an Ozone Logo.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-347) Fix : testCloseContainerViaStandaAlone fails sometimes

2018-08-13 Thread Xiaoyu Yao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-347?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16578737#comment-16578737
 ] 

Xiaoyu Yao commented on HDDS-347:
-

Thanks [~GeLiXin] for working on this. The patch looks good to me. I think the 
key to addressing the issue is to wait for containerData.isClosed instead of 
!isOpened, now that the closing state has been introduced.

Can you clarify if the sleep is still necessary? 
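
For illustration, the wait condition would change roughly as follows (a sketch 
only, assuming the test polls via {{GenericTestUtils.waitFor}}):

{code:java}
// With the CLOSING state, "not open" can be observed while the container is
// still closing; wait for the terminal CLOSED state instead.
GenericTestUtils.waitFor(() -> containerData.isClosed(), 500, 5 * 1000);
{code}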

> Fix : testCloseContainerViaStandaAlone fails sometimes
> --
>
> Key: HDDS-347
> URL: https://issues.apache.org/jira/browse/HDDS-347
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: LiXin Ge
>Assignee: LiXin Ge
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-347.000.patch
>
>
> This issue was found in the automatic Jenkins unit test of HDDS-265.
>  The container life cycle is: Open -> Closing -> Closed. This test submits 
> the container close command and waits for the container state to change to 
> *not equal to open*. However, even when that condition is satisfied, the 
> container may still be in the process of closing, so the LOG line printed 
> after the container is closed sometimes can't be found and the test fails.
> {code:java|title=KeyValueContainer.java|borderStyle=solid}
> try {
>   writeLock();
>   containerData.closeContainer();
>   File containerFile = getContainerFile();
>   // update the new container data to .container File
>   updateContainerFile(containerFile);
> } catch (StorageContainerException ex) {
> {code}
> Looking at the code above, the container state changes from CLOSING to CLOSED 
> in the first step, while the remaining *updateContainerFile* may take 
> hundreds of milliseconds, so even modifying the test logic to wait for the 
> *CLOSED* state will not guarantee that the test succeeds.
>  There are two ways to fix this:
>  1. Remove one of the double checks, which depends on the LOG.
>  2. If we have to preserve the double check, wait for the *CLOSED* state and 
> then sleep for a while for the LOG to appear.
>  Patch 000 is based on the second way.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDDS-214) HDDS/Ozone First Release

2018-08-13 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-214?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16578714#comment-16578714
 ] 

Anu Engineer edited comment on HDDS-214 at 8/13/18 6:07 PM:


{quote}Has there been any attempt to actually cut the two releases to see if 
this plan is even feasible?
{quote}
Looks like you are trying to voice some concerns in an indirect manner. If you 
do see any issue, we would be more than willing to address it. Please tell us 
more clearly what you are concerned about. We do build the bits regularly with 
*mvn package* and deploy both HDFS-only and Ozone bits in an internal cluster. 
So yes, from a pragmatic point of view, we think this is feasible. I realize 
that we may not share the same point of view when looking at these issues, so 
please let us know if you see any issues so that we can address them.
{quote}Where are the proposed changes to the Hadoop release process documented
{quote}
Good catch, we should document the Hadoop release process changes. I remember 
there was a thread some time back saying that the release process is *not* 
correct. If you can point us to the correct Hadoop release process document, we 
would be more than happy to document the release changes and the process needed.

 
{quote}Where are the actual steps to build a release?
{quote}
This is the Jira that is proposing the actual release, and hopefully the 
conversation in this Jira will allow us to nail it down. Please take a look at 
the attached design document of what we want to achieve; if there are no 
concerns, we will document how to get there.

 

 


was (Author: anu):
{quote}Has there been any attempt to actually cut the two releases to see if 
this plan is even feasible?
{quote}
Looks like you are trying to voice some concerns in a voiceless manner. If you 
do see any issue we would be more than willing to address it. Please let us 
know what you are concerned about more clearly. We do build and deploy the bits 
regularly with *mvn package* and deploy both HDFS only and Ozone bits in an 
internal cluster. So yes, from a pragmatic point of view, we think this is 
feasible. I realize that we may not share the same point of view when looking 
at these issues, so please let us know if you see any issues, so we can address 
them.
{quote}Where are the proposed changes to the Hadoop release process documented
{quote}
Good catch, we should document the Hadoop release process changes. I remember 
there was a thread some time back that the release process is *not* correct. if 
you can point us to the correct Hadoop release process document, we would be 
more than happy to document the release changes and process needed.

 
{quote}Where are the actual steps to build a release?
{quote}
This is Jira that is proposing the actual release and hopefully the 
conversation in this Jira will allow us to nail it down. Please take a look at 
the attached design document of what we want to achieve, if there are no 
concerns, we will document how to get there.

 

 

> HDDS/Ozone First Release
> 
>
> Key: HDDS-214
> URL: https://issues.apache.org/jira/browse/HDDS-214
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>Reporter: Anu Engineer
>Assignee: Elek, Marton
>Priority: Major
> Attachments: Ozone 0.2.1 release plan.pdf
>
>
> This is an umbrella JIRA that collects all work items, design discussions, 
> etc. for Ozone's release. We will post a design document soon to open the 
> discussion and nail down the details of the release.
> cc: [~xyao] , [~elek], [~arpitagarwal] [~jnp] , [~msingh] [~nandakumar131], 
> [~bharatviswa]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-333) Create an Ozone Logo

2018-08-13 Thread Tsz Wo Nicholas Sze (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-333?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16578731#comment-16578731
 ] 

Tsz Wo Nicholas Sze commented on HDDS-333:
--

How about a yellow lightning over a green ring?  

> Create an Ozone Logo
> 
>
> Key: HDDS-333
> URL: https://issues.apache.org/jira/browse/HDDS-333
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Manager
>Reporter: Anu Engineer
>Assignee: Priyanka Nagwekar
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: Logo Final.zip, Logo-Ozone-Transparent-Bg.png, 
> Ozone-Logo-Options.png
>
>
> As part of developing Ozone Website and Documentation, it would be nice to 
> have an Ozone Logo.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-333) Create an Ozone Logo

2018-08-13 Thread Dinesh Chitlangia (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-333?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16578728#comment-16578728
 ] 

Dinesh Chitlangia commented on HDDS-333:


Voting in order of preference:
 * Option 2 (black lightning on green ring)
 * Option 4 (green lightning on black ring)
 * Option 1 (black lightning on blue ring)

> Create an Ozone Logo
> 
>
> Key: HDDS-333
> URL: https://issues.apache.org/jira/browse/HDDS-333
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Manager
>Reporter: Anu Engineer
>Assignee: Priyanka Nagwekar
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: Logo Final.zip, Logo-Ozone-Transparent-Bg.png, 
> Ozone-Logo-Options.png
>
>
> As part of developing Ozone Website and Documentation, it would be nice to 
> have an Ozone Logo.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-13697) DFSClient should instantiate and cache KMSClientProvider using UGI at creation time for consistent UGI handling

2018-08-13 Thread Zsolt Venczel (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13697?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16578717#comment-16578717
 ] 

Zsolt Venczel edited comment on HDFS-13697 at 8/13/18 6:00 PM:
---

Test failures seem to be unrelated; I could not reproduce them locally with or 
without my patch: 
{code:java}
[INFO] ---
[INFO]  T E S T S
[INFO] ---
[INFO] Running org.apache.hadoop.hdfs.TestReadStripedFileWithMissingBlocks
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 132.276 
s - in org.apache.hadoop.hdfs.TestReadStripedFileWithMissingBlocks
[INFO] Running 
org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy
[INFO] Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 54.574 s 
- in 
org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy
[INFO] Running org.apache.hadoop.hdfs.server.datanode.TestDirectoryScanner
[INFO] Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 129.645 
s - in org.apache.hadoop.hdfs.server.datanode.TestDirectoryScanner
[INFO] Running org.apache.hadoop.tracing.TestTracing
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.591 s 
- in org.apache.hadoop.tracing.TestTracing
[INFO] 
[INFO] Results:
[INFO] 
[INFO] Tests run: 17, Failures: 0, Errors: 0, Skipped: 0
[INFO] 
{code}


was (Author: zvenczel):
Test failures seem to be unrelated, I could not reproduce locally with or 
without my commit: 
{code:java}
[INFO] ---
[INFO]  T E S T S
[INFO] ---
[INFO] Running org.apache.hadoop.hdfs.TestReadStripedFileWithMissingBlocks
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 132.276 
s - in org.apache.hadoop.hdfs.TestReadStripedFileWithMissingBlocks
[INFO] Running 
org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy
[INFO] Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 54.574 s 
- in 
org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy
[INFO] Running org.apache.hadoop.hdfs.server.datanode.TestDirectoryScanner
[INFO] Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 129.645 
s - in org.apache.hadoop.hdfs.server.datanode.TestDirectoryScanner
[INFO] Running org.apache.hadoop.tracing.TestTracing
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.591 s 
- in org.apache.hadoop.tracing.TestTracing
[INFO] 
[INFO] Results:
[INFO] 
[INFO] Tests run: 17, Failures: 0, Errors: 0, Skipped: 0
[INFO] 
{code}

> DFSClient should instantiate and cache KMSClientProvider using UGI at 
> creation time for consistent UGI handling
> ---
>
> Key: HDFS-13697
> URL: https://issues.apache.org/jira/browse/HDFS-13697
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Zsolt Venczel
>Assignee: Zsolt Venczel
>Priority: Major
> Attachments: HDFS-13697.01.patch, HDFS-13697.02.patch, 
> HDFS-13697.03.patch, HDFS-13697.04.patch, HDFS-13697.05.patch, 
> HDFS-13697.06.patch, HDFS-13697.07.patch, HDFS-13697.08.patch, 
> HDFS-13697.prelim.patch
>
>
> While calling KeyProviderCryptoExtension decryptEncryptedKey, the call stack 
> might not have a doAs privileged execution call (in the DFSClient, for 
> example). This results in losing the proxy user from the UGI, as 
> UGI.getCurrentUser finds no AccessControllerContext and does a re-login for 
> the login user only.
> This can cause the following, for example: if we have set up the oozie user 
> to be entitled to perform actions on behalf of example_user, but oozie is 
> forbidden to decrypt any EDEK (for security reasons), then due to the above 
> issue example_user's entitlements are lost from the UGI and the following 
> error is reported:
> {code}
> [0] 
> SERVER[xxx] USER[example_user] GROUP[-] TOKEN[] APP[Test_EAR] 
> JOB[0020905-180313191552532-oozie-oozi-W] 
> ACTION[0020905-180313191552532-oozie-oozi-W@polling_dir_path] Error starting 
> action [polling_dir_path]. ErrorType [ERROR], ErrorCode [FS014], Message 
> [FS014: User [oozie] is not authorized to perform [DECRYPT_EEK] on key with 
> ACL name [encrypted_key]!!]
> org.apache.oozie.action.ActionExecutorException: FS014: User [oozie] is not 
> authorized to perform [DECRYPT_EEK] on key with ACL name [encrypted_key]!!
>  at 
> org.apache.oozie.action.ActionExecutor.convertExceptionHelper(ActionExecutor.java:463)
>  at 
> org.apache.oozie.action.ActionExecutor.convertException(ActionExecutor.java:441)
>  at 
> org.apache.oozie.action.hadoop.FsActionExecutor.touchz(FsActionExecutor.java:523)
>  at 
> 

[jira] [Commented] (HDDS-333) Create an Ozone Logo

2018-08-13 Thread Arpit Agarwal (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-333?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16578720#comment-16578720
 ] 

Arpit Agarwal commented on HDDS-333:


My votes, in order:
- Option 2 (black lightning on green ring)
- Option 4 (green lightning on black ring)

> Create an Ozone Logo
> 
>
> Key: HDDS-333
> URL: https://issues.apache.org/jira/browse/HDDS-333
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Manager
>Reporter: Anu Engineer
>Assignee: Priyanka Nagwekar
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: Logo Final.zip, Logo-Ozone-Transparent-Bg.png, 
> Ozone-Logo-Options.png
>
>
> As part of developing Ozone Website and Documentation, It would be nice to 
> have an Ozone Logo.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13697) DFSClient should instantiate and cache KMSClientProvider using UGI at creation time for consistent UGI handling

2018-08-13 Thread Zsolt Venczel (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13697?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16578717#comment-16578717
 ] 

Zsolt Venczel commented on HDFS-13697:
--

Test failures seem to be unrelated; I could not reproduce them locally with or 
without my commit: 
{code:java}
[INFO] ---
[INFO]  T E S T S
[INFO] ---
[INFO] Running org.apache.hadoop.hdfs.TestReadStripedFileWithMissingBlocks
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 132.276 
s - in org.apache.hadoop.hdfs.TestReadStripedFileWithMissingBlocks
[INFO] Running 
org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy
[INFO] Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 54.574 s 
- in 
org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy
[INFO] Running org.apache.hadoop.hdfs.server.datanode.TestDirectoryScanner
[INFO] Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 129.645 
s - in org.apache.hadoop.hdfs.server.datanode.TestDirectoryScanner
[INFO] Running org.apache.hadoop.tracing.TestTracing
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.591 s 
- in org.apache.hadoop.tracing.TestTracing
[INFO] 
[INFO] Results:
[INFO] 
[INFO] Tests run: 17, Failures: 0, Errors: 0, Skipped: 0
[INFO] 
{code}

> DFSClient should instantiate and cache KMSClientProvider using UGI at 
> creation time for consistent UGI handling
> ---
>
> Key: HDFS-13697
> URL: https://issues.apache.org/jira/browse/HDFS-13697
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Zsolt Venczel
>Assignee: Zsolt Venczel
>Priority: Major
> Attachments: HDFS-13697.01.patch, HDFS-13697.02.patch, 
> HDFS-13697.03.patch, HDFS-13697.04.patch, HDFS-13697.05.patch, 
> HDFS-13697.06.patch, HDFS-13697.07.patch, HDFS-13697.08.patch, 
> HDFS-13697.prelim.patch
>
>
> While calling KeyProviderCryptoExtension decryptEncryptedKey, the call stack 
> might not have a doAs privileged execution call (in the DFSClient, for 
> example). This results in losing the proxy user from UGI, as 
> UGI.getCurrentUser finds no AccessControllerContext and does a re-login for 
> the login user only.
> This can cause the following, for example: if we have set up the oozie user 
> to be entitled to perform actions on behalf of example_user, but oozie is 
> forbidden to decrypt any EDEK (for security reasons), then due to the above 
> issue example_user's entitlements are lost from UGI and the following error 
> is reported:
> {code}
> [0] 
> SERVER[xxx] USER[example_user] GROUP[-] TOKEN[] APP[Test_EAR] 
> JOB[0020905-180313191552532-oozie-oozi-W] 
> ACTION[0020905-180313191552532-oozie-oozi-W@polling_dir_path] Error starting 
> action [polling_dir_path]. ErrorType [ERROR], ErrorCode [FS014], Message 
> [FS014: User [oozie] is not authorized to perform [DECRYPT_EEK] on key with 
> ACL name [encrypted_key]!!]
> org.apache.oozie.action.ActionExecutorException: FS014: User [oozie] is not 
> authorized to perform [DECRYPT_EEK] on key with ACL name [encrypted_key]!!
>  at 
> org.apache.oozie.action.ActionExecutor.convertExceptionHelper(ActionExecutor.java:463)
>  at 
> org.apache.oozie.action.ActionExecutor.convertException(ActionExecutor.java:441)
>  at 
> org.apache.oozie.action.hadoop.FsActionExecutor.touchz(FsActionExecutor.java:523)
>  at 
> org.apache.oozie.action.hadoop.FsActionExecutor.doOperations(FsActionExecutor.java:199)
>  at 
> org.apache.oozie.action.hadoop.FsActionExecutor.start(FsActionExecutor.java:563)
>  at 
> org.apache.oozie.command.wf.ActionStartXCommand.execute(ActionStartXCommand.java:232)
>  at 
> org.apache.oozie.command.wf.ActionStartXCommand.execute(ActionStartXCommand.java:63)
>  at org.apache.oozie.command.XCommand.call(XCommand.java:286)
>  at 
> org.apache.oozie.service.CallableQueueService$CompositeCallable.call(CallableQueueService.java:332)
>  at 
> org.apache.oozie.service.CallableQueueService$CompositeCallable.call(CallableQueueService.java:261)
>  at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>  at 
> org.apache.oozie.service.CallableQueueService$CallableWrapper.run(CallableQueueService.java:179)
>  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>  at java.lang.Thread.run(Thread.java:744)
> Caused by: org.apache.hadoop.security.authorize.AuthorizationException: User 
> [oozie] is not authorized to perform [DECRYPT_EEK] on key with ACL name 
> [encrypted_key]!!
>  at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>  at 
> 

[jira] [Commented] (HDDS-214) HDDS/Ozone First Release

2018-08-13 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-214?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16578714#comment-16578714
 ] 

Anu Engineer commented on HDDS-214:
---

{quote}Has there been any attempt to actually cut the two releases to see if 
this plan is even feasible?
{quote}
Looks like you are trying to voice some concerns indirectly. If you do see any 
issue, we would be more than willing to address it; please let us know more 
clearly what concerns you. We do build the bits regularly with *mvn package* 
and deploy both HDFS-only and Ozone bits in an internal cluster. So yes, from 
a pragmatic point of view, we think this is feasible. I realize that we may 
not share the same point of view when looking at these issues, so please let 
us know if you see any, and we will address them.
{quote}Where are the proposed changes to the Hadoop release process documented
{quote}
Good catch, we should document the Hadoop release process changes. I remember 
there was a thread some time back saying that the documented release process 
is *not* correct. If you can point us to the correct Hadoop release process 
document, we would be more than happy to document the release changes and 
process needed.

 
{quote}Where are the actual steps to build a release?
{quote}
This is the Jira that proposes the actual release, and hopefully the 
conversation here will allow us to nail the steps down. Please take a look at 
the attached design document for what we want to achieve; if there are no 
concerns, we will document how to get there.

 

 

> HDDS/Ozone First Release
> 
>
> Key: HDDS-214
> URL: https://issues.apache.org/jira/browse/HDDS-214
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>Reporter: Anu Engineer
>Assignee: Elek, Marton
>Priority: Major
> Attachments: Ozone 0.2.1 release plan.pdf
>
>
> This is an umbrella JIRA that collects all work items, design discussions, 
> etc. for Ozone's release. We will post a design document soon to open the 
> discussion and nail down the details of the release.
> cc: [~xyao] , [~elek], [~arpitagarwal] [~jnp] , [~msingh] [~nandakumar131], 
> [~bharatviswa]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-179) CloseContainer command should be executed only if all the prior "Write" type container requests get executed

2018-08-13 Thread Shashikant Banerjee (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-179?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16578713#comment-16578713
 ] 

Shashikant Banerjee commented on HDDS-179:
--

Patch v10 adds more detailed comments to ContainerStateMachine.

> CloseContainer command should be executed only if all the  prior "Write" type 
> container requests get executed
> -
>
> Key: HDDS-179
> URL: https://issues.apache.org/jira/browse/HDDS-179
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client, Ozone Datanode
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-179.01.patch, HDDS-179.02.patch, HDDS-179.03.patch, 
> HDDS-179.04.patch, HDDS-179.05.patch, HDDS-179.06.patch, HDDS-179.07.patch, 
> HDDS-179.08.patch, HDDS-179.09,patch, HDDS-179.10.patch
>
>
> When a close container command request comes to a Datanode (via the SCM 
> heartbeat response) through the Ratis protocol, all the prior enqueued 
> "Write"-type requests, like WriteChunk, should be executed before the 
> CloseContainer request gets executed. This synchronization needs to be 
> handled in the ContainerStateMachine. This Jira aims to address this.
>  
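To picture the constraint described above: an illustrative sketch (not the 
actual ContainerStateMachine code; applyWrite, applyCloseContainer and 
pendingWrites are assumed names) in which the state machine tracks a future 
per in-flight write and lets a close complete only after all of them finish. 
The real implementation must also respect Ratis log ordering, so this is a 
simplification of the idea, not the mechanism itself.
{code:java}
import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

// Sketch: CloseContainer waits for every previously enqueued write.
class OrderingSketch {
  private final Map<Long, CompletableFuture<Void>> pendingWrites =
      new ConcurrentHashMap<>();
  private final AtomicLong ids = new AtomicLong();

  CompletableFuture<Void> applyWrite(Runnable writeChunk) {
    long id = ids.incrementAndGet();
    CompletableFuture<Void> f = CompletableFuture.runAsync(writeChunk);
    pendingWrites.put(id, f);
    f.whenComplete((v, t) -> pendingWrites.remove(id));
    return f;
  }

  CompletableFuture<Void> applyCloseContainer(Runnable close) {
    // Snapshot the in-flight writes; the close runs only after all of
    // them have completed successfully.
    return CompletableFuture
        .allOf(pendingWrites.values().toArray(new CompletableFuture[0]))
        .thenRun(close);
  }
}
{code}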



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13746) Still occasional "Should be different group" failure in TestRefreshUserMappings#testGroupMappingRefresh

2018-08-13 Thread Siyao Meng (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13746?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16578711#comment-16578711
 ] 

Siyao Meng commented on HDFS-13746:
---

[~templedf] Thanks for the comments!

1. gN.toString() won't print the contents of the string array. Instead, it 
just returns the default string representation of the object (essentially 
invoking Object.toString()); see the sketch below.

2. Okay. Changed from int to long for _*maxTrials*_ and _*trial*_.

3. Removed GenericTestUtils.setLogLevel(LOG, Level.DEBUG) from setUp().

 

I have uploaded patch v004. Thanks!
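To make point 1 concrete, a two-line illustration (gN stands in for any 
String[]; the printed values are representative):
{code:java}
import java.util.Arrays;

public class ToStringDemo {
  public static void main(String[] args) {
    String[] gN = {"groupA", "groupB"};
    // Object.toString(): type and hash code, e.g. "[Ljava.lang.String;@1b6d3586"
    System.out.println(gN.toString());
    // Arrays.toString(): the actual contents, "[groupA, groupB]"
    System.out.println(Arrays.toString(gN));
  }
}
{code}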

> Still occasional "Should be different group" failure in 
> TestRefreshUserMappings#testGroupMappingRefresh
> ---
>
> Key: HDFS-13746
> URL: https://issues.apache.org/jira/browse/HDFS-13746
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Attachments: HDFS-13746.001.patch, HDFS-13746.002.patch, 
> HDFS-13746.003.patch, HDFS-13746.004.patch
>
>
> In https://issues.apache.org/jira/browse/HDFS-13723, increasing the amount of 
> time in sleep() helps but the problem still appears, which is annoying.
>  
> Solution:
> Use a loop to allow the test case to fail maxTrials times before declaring 
> failure. Wait 50 ms between each retry.
>  
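A minimal sketch of the retry pattern the solution describes (the maxTrials 
value and the assertion are stand-ins; the 50 ms wait and the long counters 
follow point 2 of the comment above):
{code:java}
// Tolerate up to maxTrials transient failures, sleeping 50 ms between
// attempts, before declaring the test failed.
void retryingAssert(Runnable assertGroupsChanged) throws InterruptedException {
  long maxTrials = 5;                  // stand-in value
  for (long trial = 1; trial <= maxTrials; trial++) {
    try {
      assertGroupsChanged.run();       // stand-in for the real assertion
      return;                          // success, stop retrying
    } catch (AssertionError e) {
      if (trial == maxTrials) {
        throw e;                       // retries exhausted, fail the test
      }
      Thread.sleep(50);
    }
  }
}
{code}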



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-179) CloseContainer command should be executed only if all the prior "Write" type container requests get executed

2018-08-13 Thread Shashikant Banerjee (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-179?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shashikant Banerjee updated HDDS-179:
-
Attachment: HDDS-179.10.patch

> CloseContainer command should be executed only if all the  prior "Write" type 
> container requests get executed
> -
>
> Key: HDDS-179
> URL: https://issues.apache.org/jira/browse/HDDS-179
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client, Ozone Datanode
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-179.01.patch, HDDS-179.02.patch, HDDS-179.03.patch, 
> HDDS-179.04.patch, HDDS-179.05.patch, HDDS-179.06.patch, HDDS-179.07.patch, 
> HDDS-179.08.patch, HDDS-179.09,patch, HDDS-179.10.patch
>
>
> When a close container command request comes to a Datanode (via the SCM 
> heartbeat response) through the Ratis protocol, all the prior enqueued 
> "Write"-type requests, like WriteChunk, should be executed before the 
> CloseContainer request gets executed. This synchronization needs to be 
> handled in the ContainerStateMachine. This Jira aims to address this.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13746) Still occasional "Should be different group" failure in TestRefreshUserMappings#testGroupMappingRefresh

2018-08-13 Thread Siyao Meng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13746?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HDFS-13746:
--
Attachment: HDFS-13746.004.patch
Status: Patch Available  (was: In Progress)

> Still occasional "Should be different group" failure in 
> TestRefreshUserMappings#testGroupMappingRefresh
> ---
>
> Key: HDFS-13746
> URL: https://issues.apache.org/jira/browse/HDFS-13746
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Attachments: HDFS-13746.001.patch, HDFS-13746.002.patch, 
> HDFS-13746.003.patch, HDFS-13746.004.patch
>
>
> In https://issues.apache.org/jira/browse/HDFS-13723, increasing the amount of 
> time in sleep() helps but the problem still appears, which is annoying.
>  
> Solution:
> Use a loop to allow the test case to fail maxTrials times before declaring 
> failure. Wait 50 ms between each retry.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-333) Create an Ozone Logo

2018-08-13 Thread Tsz Wo Nicholas Sze (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-333?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16578706#comment-16578706
 ] 

Tsz Wo Nicholas Sze commented on HDDS-333:
--

+1 on 3rd logo – lightning is naturally yellow, and it also matches the Hadoop 
yellow elephant.

> Create an Ozone Logo
> 
>
> Key: HDDS-333
> URL: https://issues.apache.org/jira/browse/HDDS-333
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Manager
>Reporter: Anu Engineer
>Assignee: Priyanka Nagwekar
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: Logo Final.zip, Logo-Ozone-Transparent-Bg.png, 
> Ozone-Logo-Options.png
>
>
> As part of developing the Ozone Website and Documentation, it would be nice 
> to have an Ozone Logo.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-333) Create an Ozone Logo

2018-08-13 Thread Nanda kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-333?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16578704#comment-16578704
 ] 

Nanda kumar commented on HDDS-333:
--

+1 on the black ring with green 3.

> Create an Ozone Logo
> 
>
> Key: HDDS-333
> URL: https://issues.apache.org/jira/browse/HDDS-333
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Manager
>Reporter: Anu Engineer
>Assignee: Priyanka Nagwekar
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: Logo Final.zip, Logo-Ozone-Transparent-Bg.png, 
> Ozone-Logo-Options.png
>
>
> As part of developing the Ozone Website and Documentation, it would be nice 
> to have an Ozone Logo.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13746) Still occasional "Should be different group" failure in TestRefreshUserMappings#testGroupMappingRefresh

2018-08-13 Thread Siyao Meng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13746?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HDFS-13746:
--
Status: In Progress  (was: Patch Available)

> Still occasional "Should be different group" failure in 
> TestRefreshUserMappings#testGroupMappingRefresh
> ---
>
> Key: HDFS-13746
> URL: https://issues.apache.org/jira/browse/HDFS-13746
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Attachments: HDFS-13746.001.patch, HDFS-13746.002.patch, 
> HDFS-13746.003.patch
>
>
> In https://issues.apache.org/jira/browse/HDFS-13723, increasing the amount of 
> time in sleep() helps but the problem still appears, which is annoying.
>  
> Solution:
> Use a loop to allow the test case to fail maxTrials times before declaring 
> failure. Wait 50 ms between each retry.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-214) HDDS/Ozone First Release

2018-08-13 Thread Allen Wittenauer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-214?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16578692#comment-16578692
 ] 

Allen Wittenauer commented on HDDS-214:
---

* Has there been any attempt to actually cut the two releases to see if this 
plan is even feasible?

* Where are the proposed changes to the Hadoop release process documented?

* Where are the actual steps to build a release?  


> HDDS/Ozone First Release
> 
>
> Key: HDDS-214
> URL: https://issues.apache.org/jira/browse/HDDS-214
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>Reporter: Anu Engineer
>Assignee: Elek, Marton
>Priority: Major
> Attachments: Ozone 0.2.1 release plan.pdf
>
>
> This is an umbrella JIRA that collects all work items, design discussions, 
> etc. for Ozone's release. We will post a design document soon to open the 
> discussion and nail down the details of the release.
> cc: [~xyao] , [~elek], [~arpitagarwal] [~jnp] , [~msingh] [~nandakumar131], 
> [~bharatviswa]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-333) Create an Ozone Logo

2018-08-13 Thread Shashikant Banerjee (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-333?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16578689#comment-16578689
 ] 

Shashikant Banerjee commented on HDDS-333:
--

+1, 4th logo.

> Create an Ozone Logo
> 
>
> Key: HDDS-333
> URL: https://issues.apache.org/jira/browse/HDDS-333
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Manager
>Reporter: Anu Engineer
>Assignee: Priyanka Nagwekar
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: Logo Final.zip, Logo-Ozone-Transparent-Bg.png, 
> Ozone-Logo-Options.png
>
>
> As part of developing the Ozone Website and Documentation, it would be nice 
> to have an Ozone Logo.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDDS-333) Create an Ozone Logo

2018-08-13 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-333?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16578685#comment-16578685
 ] 

Anu Engineer edited comment on HDDS-333 at 8/13/18 5:44 PM:


+1, for the 3rd Logo, Black ring with the golden 3. Thx. Green also looks 
quite good, so either of them is a go for me.

 


was (Author: anu):
+1, for the 3rd Logo, Black ring with the golden 3. Thx

 

> Create an Ozone Logo
> 
>
> Key: HDDS-333
> URL: https://issues.apache.org/jira/browse/HDDS-333
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Manager
>Reporter: Anu Engineer
>Assignee: Priyanka Nagwekar
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: Logo Final.zip, Logo-Ozone-Transparent-Bg.png, 
> Ozone-Logo-Options.png
>
>
> As part of developing the Ozone Website and Documentation, it would be nice 
> to have an Ozone Logo.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-333) Create an Ozone Logo

2018-08-13 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-333?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16578685#comment-16578685
 ] 

Anu Engineer commented on HDDS-333:
---

+1, for the 3rd Logo, Black ring with the golden 3. Thx

 

> Create an Ozone Logo
> 
>
> Key: HDDS-333
> URL: https://issues.apache.org/jira/browse/HDDS-333
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Manager
>Reporter: Anu Engineer
>Assignee: Priyanka Nagwekar
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: Logo Final.zip, Logo-Ozone-Transparent-Bg.png, 
> Ozone-Logo-Options.png
>
>
> As part of developing the Ozone Website and Documentation, it would be nice 
> to have an Ozone Logo.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-13800) Improve the error message when contacting an IPC port via a browser

2018-08-13 Thread Daniel Templeton (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13800?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Templeton reassigned HDFS-13800:
---

Assignee: Vaibhav Gandhi  (was: Daniel Templeton)

> Improve the error message when contacting an IPC port via a browser
> ---
>
> Key: HDFS-13800
> URL: https://issues.apache.org/jira/browse/HDFS-13800
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Affects Versions: 3.0.0
>Reporter: Daniel Templeton
>Assignee: Vaibhav Gandhi
>Priority: Major
>  Labels: newbie
>
> When I point a browser at {{http://:9000}}, I get back a 404 with 
> the following text: {quote}It looks like you are making an HTTP request to a 
> Hadoop IPC port. This is not the correct port for the web interface on this 
> daemon.{quote}  While accurate, that's not exactly helpful.  It would be 
> worlds more useful to include the URL for the web UI in the text.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-314) ozoneShell putKey command overwrites the existing key having same name

2018-08-13 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-314?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-314:
--
Status: Open  (was: Patch Available)

> ozoneShell putKey command overwrites the existing key having same name
> --
>
> Key: HDDS-314
> URL: https://issues.apache.org/jira/browse/HDDS-314
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Reporter: Nilotpal Nandi
>Assignee: Nilotpal Nandi
>Priority: Critical
> Fix For: 0.2.1
>
> Attachments: HDDS-314.001.patch, HDDS-314.002.patch, 
> HDDS-314.003.patch
>
>
> steps taken : 
> 1) created a volume root-volume and a bucket root-bucket.
> 2)  Ran following command to put a key with name 'passwd'
>  
> {noformat}
> hadoop@08315aa4b367:~/bin$ ./ozone oz -putKey /root-volume/root-bucket/passwd 
> -file /etc/services -v
> 2018-08-02 09:20:17 WARN NativeCodeLoader:60 - Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> Volume Name : root-volume
> Bucket Name : root-bucket
> Key Name : passwd
> File Hash : 567c100888518c1163b3462993de7d47
> 2018-08-02 09:20:18 INFO ConfUtils:41 - raft.rpc.type = GRPC (default)
> 2018-08-02 09:20:18 INFO ConfUtils:41 - raft.grpc.message.size.max = 33554432 
> (custom)
> 2018-08-02 09:20:18 INFO ConfUtils:41 - raft.client.rpc.retryInterval = 300 
> ms (default)
> 2018-08-02 09:20:18 INFO ConfUtils:41 - 
> raft.client.async.outstanding-requests.max = 100 (default)
> 2018-08-02 09:20:18 INFO ConfUtils:41 - raft.client.async.scheduler-threads = 
> 3 (default)
> 2018-08-02 09:20:18 INFO ConfUtils:41 - raft.grpc.flow.control.window = 1MB 
> (=1048576) (default)
> 2018-08-02 09:20:18 INFO ConfUtils:41 - raft.grpc.message.size.max = 33554432 
> (custom)
> 2018-08-02 09:20:18 INFO ConfUtils:41 - raft.client.rpc.request.timeout = 
> 3000 ms (default)
> Aug 02, 2018 9:20:18 AM 
> org.apache.ratis.shaded.io.grpc.internal.ProxyDetectorImpl detectProxy
>  
> {noformat}
> 3) Ran following command to put a key with name 'passwd' again.
> {noformat}
> hadoop@08315aa4b367:~/bin$ ./ozone oz -putKey /root-volume/root-bucket/passwd 
> -file /etc/passwd -v
> 2018-08-02 09:20:41 WARN NativeCodeLoader:60 - Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> Volume Name : root-volume
> Bucket Name : root-bucket
> Key Name : passwd
> File Hash : b056233571cc80d6879212911cb8e500
> 2018-08-02 09:20:41 INFO ConfUtils:41 - raft.rpc.type = GRPC (default)
> 2018-08-02 09:20:42 INFO ConfUtils:41 - raft.grpc.message.size.max = 33554432 
> (custom)
> 2018-08-02 09:20:42 INFO ConfUtils:41 - raft.client.rpc.retryInterval = 300 
> ms (default)
> 2018-08-02 09:20:42 INFO ConfUtils:41 - 
> raft.client.async.outstanding-requests.max = 100 (default)
> 2018-08-02 09:20:42 INFO ConfUtils:41 - raft.client.async.scheduler-threads = 
> 3 (default)
> 2018-08-02 09:20:42 INFO ConfUtils:41 - raft.grpc.flow.control.window = 1MB 
> (=1048576) (default)
> 2018-08-02 09:20:42 INFO ConfUtils:41 - raft.grpc.message.size.max = 33554432 
> (custom)
> 2018-08-02 09:20:42 INFO ConfUtils:41 - raft.client.rpc.request.timeout = 
> 3000 ms (default)
> Aug 02, 2018 9:20:42 AM 
> org.apache.ratis.shaded.io.grpc.internal.ProxyDetectorImpl 
> detectProxy{noformat}
>  
> key 'passwd' was overwritten with new content, and it did not throw any 
> error saying that the key is already present.
> Expectation :
> ---
> key overwrite with same name should not be allowed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-346) ozoneShell show the new volume info after updateVolume command like updateBucket command.

2018-08-13 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-346?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16578673#comment-16578673
 ] 

Hudson commented on HDDS-346:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14757 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14757/])
HDDS-346. ozoneShell show the new volume info after updateVolume command 
(aengineer: rev 11daa010d218d340939d9ab05a946939f3292f5e)
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/web/ozShell/volume/UpdateVolumeHandler.java


> ozoneShell show the new volume info after updateVolume command like 
> updateBucket command.
> -
>
> Key: HDDS-346
> URL: https://issues.apache.org/jira/browse/HDDS-346
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: chencan
>Assignee: chencan
>Priority: Minor
> Fix For: 0.2.1
>
> Attachments: HDDS-346.001.patch
>
>
> ozoneShell shows nothing after updateVolume; we may list the new volume info 
> after the update command.
> Like this:
> [root@localhost bin]# ./ozone oz -updateVolume /volume -quota 10GB
> 2018-08-10 09:40:02,241 WARN util.NativeCodeLoader: Unable to load 
> native-hadoop library for your platform... using builtin-java classes where 
> applicable
> {
>   "owner" : {
>     "name" : "root"
>   },
>   "quota" : {
>     "unit" : "GB",
>     "size" : 10
>   },
>   "volumeName" : "volume",
>   "createdOn" : "Tue, 01 Jun +50573 08:11:18 GMT",
>   "createdBy" : "root"
> }



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13819) TestDirectoryScanner#testDirectoryScannerInFederatedCluster is flaky

2018-08-13 Thread Daniel Templeton (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13819?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16578669#comment-16578669
 ] 

Daniel Templeton commented on HDFS-13819:
-

We're not seeing the failure in the upstream test runs.  It's only appearing 
for us on our internal tests, and only rarely.  But it is the same test code as 
upstream:

{quote}java.lang.AssertionError: expected:<2> but was:<1>
at 
org.apache.hadoop.hdfs.server.datanode.TestDirectoryScanner.verifyStats(TestDirectoryScanner.java:337)
at 
org.apache.hadoop.hdfs.server.datanode.TestDirectoryScanner.testDirectoryScannerInFederatedCluster(TestDirectoryScanner.java:1074){quote}

I wasn't able to reproduce a failure in 500 runs on my laptop.  This is the 
problem with these integration tests pretending to be unit tests.  They're 
inherently flaky, but it takes the right environment.

I'll let it sit for another day and then commit.

> TestDirectoryScanner#testDirectoryScannerInFederatedCluster is flaky
> 
>
> Key: HDFS-13819
> URL: https://issues.apache.org/jira/browse/HDFS-13819
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.0.0
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>Priority: Minor
> Attachments: HDFS-13819.001.patch, HDFS-13819.002.patch
>
>
> We're seeing the test fail periodically with:
> {quote}java.lang.AssertionError: expected:<2> but was:<1>{quote}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13788) Update EC documentation about rack fault tolerance

2018-08-13 Thread Xiao Chen (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13788?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16578667#comment-16578667
 ] 

Xiao Chen commented on HDFS-13788:
--

Thanks [~knanasi] for the patch and [~zvenczel] for the review.
bq. For rack fault-tolerance, it is also important to have at least as many 
racks as the configured number of EC parity cells.
This is technically not correct. In case of RS(3,2), having 2 racks is not safe.
I suggest we word it something like: ... to have a sufficient number of racks, 
so that on average each rack holds no more blocks than the number of EC parity 
blocks. A formula to calculate this would be (data blocks + parity blocks) / 
parity blocks, rounded up.

Then in the 6,3 example, we add the example calculation: ... minimally 3 racks 
(calculated by (6 + 3) / 3 = 3) ...


It'd be great if we could add a note at the end as well, after:
bq. ...will still attempt to spread a striped file across multiple nodes to 
preserve node-level fault-tolerance.
For this reason, it is recommended to set up racks with a similar number of 
DataNodes.
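In code form, the suggested calculation (a small sketch; the method name is 
ours) is the integer ceiling of (data + parity) / parity:
{code:java}
// Minimum racks so that, on average, no rack holds more blocks of a
// block group than the number of parity blocks.
static int minRacks(int dataBlocks, int parityBlocks) {
  // Integer ceiling of (dataBlocks + parityBlocks) / parityBlocks.
  return (dataBlocks + parityBlocks + parityBlocks - 1) / parityBlocks;
}
// minRacks(6, 3) == 3; minRacks(3, 2) == 3, matching the RS(3,2)
// observation above that 2 racks is not safe.
{code}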

> Update EC documentation about rack fault tolerance
> --
>
> Key: HDFS-13788
> URL: https://issues.apache.org/jira/browse/HDFS-13788
> Project: Hadoop HDFS
>  Issue Type: Task
>  Components: documentation, erasure-coding
>Affects Versions: 3.0.0
>Reporter: Xiao Chen
>Assignee: Kitti Nanasi
>Priority: Major
> Attachments: HDFS-13788.001.patch
>
>
> From 
> http://hadoop.apache.org/docs/r3.0.0/hadoop-project-dist/hadoop-hdfs/HDFSErasureCoding.html:
> {quote}
> For rack fault-tolerance, it is also important to have at least as many racks 
> as the configured EC stripe width. For EC policy RS (6,3), this means 
> minimally 9 racks, and ideally 10 or 11 to handle planned and unplanned 
> outages. For clusters with fewer racks than the stripe width, HDFS cannot 
> maintain rack fault-tolerance, but will still attempt to spread a striped 
> file across multiple nodes to preserve node-level fault-tolerance.
> {quote}
> The theoretical minimum is 3 racks, and ideally 9 or more, so the document 
> should be updated.
> (I didn't check timestamps, but this is probably because 
> {{BlockPlacementPolicyRackFaultTolerant}} wasn't completely done when 
> HDFS-9088 introduced this doc. Later there are also examples in 
> {{TestErasureCodingMultipleRacks}} that test this explicitly.)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-314) ozoneShell putKey command overwrites the existing key having same name

2018-08-13 Thread Xiaoyu Yao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16578666#comment-16578666
 ] 

Xiaoyu Yao commented on HDDS-314:
-

Thanks [~nilotpalnandi] for working on this. I agree with [~anu] on the putKey 
semantics for an object store.

At the HDDS layer, we plan to support rewrite as a new version once we have 
full version support. The related code is in KeyManagerImpl#openKey around 
line 263.

If we really need to block this case in this release, the block should be in 
openKey instead of commitKey.
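For illustration, the guard would sit at the start of openKey, roughly as 
follows (a sketch only; keyExists() and the exception choice are assumptions, 
not the actual KeyManagerImpl code):
{code:java}
import java.io.IOException;

// Illustrative only -- not the actual KeyManagerImpl code. The point is
// where the guard lives: in openKey, before any blocks are allocated,
// rather than later in commitKey.
class OpenKeyGuardSketch {
  void openKey(String volume, String bucket, String key) throws IOException {
    if (keyExists(volume, bucket, key)) {   // keyExists() is a hypothetical helper
      throw new IOException(
          "Key already exists: /" + volume + "/" + bucket + "/" + key);
    }
    // ... proceed with block allocation as today ...
  }

  boolean keyExists(String volume, String bucket, String key) {
    return false;                           // stub for the sketch
  }
}
{code}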

> ozoneShell putKey command overwrites the existing key having same name
> --
>
> Key: HDDS-314
> URL: https://issues.apache.org/jira/browse/HDDS-314
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Reporter: Nilotpal Nandi
>Assignee: Nilotpal Nandi
>Priority: Critical
> Fix For: 0.2.1
>
> Attachments: HDDS-314.001.patch, HDDS-314.002.patch, 
> HDDS-314.003.patch
>
>
> steps taken : 
> 1) created a volume root-volume and a bucket root-bucket.
> 2)  Ran following command to put a key with name 'passwd'
>  
> {noformat}
> hadoop@08315aa4b367:~/bin$ ./ozone oz -putKey /root-volume/root-bucket/passwd 
> -file /etc/services -v
> 2018-08-02 09:20:17 WARN NativeCodeLoader:60 - Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> Volume Name : root-volume
> Bucket Name : root-bucket
> Key Name : passwd
> File Hash : 567c100888518c1163b3462993de7d47
> 2018-08-02 09:20:18 INFO ConfUtils:41 - raft.rpc.type = GRPC (default)
> 2018-08-02 09:20:18 INFO ConfUtils:41 - raft.grpc.message.size.max = 33554432 
> (custom)
> 2018-08-02 09:20:18 INFO ConfUtils:41 - raft.client.rpc.retryInterval = 300 
> ms (default)
> 2018-08-02 09:20:18 INFO ConfUtils:41 - 
> raft.client.async.outstanding-requests.max = 100 (default)
> 2018-08-02 09:20:18 INFO ConfUtils:41 - raft.client.async.scheduler-threads = 
> 3 (default)
> 2018-08-02 09:20:18 INFO ConfUtils:41 - raft.grpc.flow.control.window = 1MB 
> (=1048576) (default)
> 2018-08-02 09:20:18 INFO ConfUtils:41 - raft.grpc.message.size.max = 33554432 
> (custom)
> 2018-08-02 09:20:18 INFO ConfUtils:41 - raft.client.rpc.request.timeout = 
> 3000 ms (default)
> Aug 02, 2018 9:20:18 AM 
> org.apache.ratis.shaded.io.grpc.internal.ProxyDetectorImpl detectProxy
>  
> {noformat}
> 3) Ran following command to put a key with name 'passwd' again.
> {noformat}
> hadoop@08315aa4b367:~/bin$ ./ozone oz -putKey /root-volume/root-bucket/passwd 
> -file /etc/passwd -v
> 2018-08-02 09:20:41 WARN NativeCodeLoader:60 - Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> Volume Name : root-volume
> Bucket Name : root-bucket
> Key Name : passwd
> File Hash : b056233571cc80d6879212911cb8e500
> 2018-08-02 09:20:41 INFO ConfUtils:41 - raft.rpc.type = GRPC (default)
> 2018-08-02 09:20:42 INFO ConfUtils:41 - raft.grpc.message.size.max = 33554432 
> (custom)
> 2018-08-02 09:20:42 INFO ConfUtils:41 - raft.client.rpc.retryInterval = 300 
> ms (default)
> 2018-08-02 09:20:42 INFO ConfUtils:41 - 
> raft.client.async.outstanding-requests.max = 100 (default)
> 2018-08-02 09:20:42 INFO ConfUtils:41 - raft.client.async.scheduler-threads = 
> 3 (default)
> 2018-08-02 09:20:42 INFO ConfUtils:41 - raft.grpc.flow.control.window = 1MB 
> (=1048576) (default)
> 2018-08-02 09:20:42 INFO ConfUtils:41 - raft.grpc.message.size.max = 33554432 
> (custom)
> 2018-08-02 09:20:42 INFO ConfUtils:41 - raft.client.rpc.request.timeout = 
> 3000 ms (default)
> Aug 02, 2018 9:20:42 AM 
> org.apache.ratis.shaded.io.grpc.internal.ProxyDetectorImpl 
> detectProxy{noformat}
>  
> key 'passwd' was overwritten with new content, and it did not throw any 
> error saying that the key is already present.
> Expectation :
> ---
> key overwrite with same name should not be allowed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13767) Add msync server implementation.

2018-08-13 Thread Chen Liang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13767?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16578665#comment-16578665
 ] 

Chen Liang commented on HDFS-13767:
---

The failed test is unrelated; I've committed this to the feature branch. 
Thanks for the review [~shv], [~xkrogen] and [~zero45]!

> Add msync server implementation.
> 
>
> Key: HDFS-13767
> URL: https://issues.apache.org/jira/browse/HDFS-13767
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Major
> Attachments: HDFS-13767-HDFS-12943.001.patch, 
> HDFS-13767-HDFS-12943.002.patch, HDFS-13767-HDFS-12943.003.patch, 
> HDFS-13767-HDFS-12943.004.patch, HDFS-13767.WIP.001.patch, 
> HDFS-13767.WIP.002.patch, HDFS-13767.WIP.003.patch, HDFS-13767.WIP.004.patch
>
>
> This is a followup on HDFS-13688, where the msync API is introduced to 
> {{ClientProtocol}} but the server-side implementation is missing. This Jira 
> is to implement the server-side logic.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13767) Add msync server implementation.

2018-08-13 Thread Chen Liang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13767?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-13767:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Add msync server implementation.
> 
>
> Key: HDFS-13767
> URL: https://issues.apache.org/jira/browse/HDFS-13767
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Major
> Attachments: HDFS-13767-HDFS-12943.001.patch, 
> HDFS-13767-HDFS-12943.002.patch, HDFS-13767-HDFS-12943.003.patch, 
> HDFS-13767-HDFS-12943.004.patch, HDFS-13767.WIP.001.patch, 
> HDFS-13767.WIP.002.patch, HDFS-13767.WIP.003.patch, HDFS-13767.WIP.004.patch
>
>
> This is a followup on HDFS-13688, where the msync API is introduced to 
> {{ClientProtocol}} but the server-side implementation is missing. This Jira 
> is to implement the server-side logic.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-345) Upgrade RocksDB version from 5.8.0 to 5.14.2

2018-08-13 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16578663#comment-16578663
 ] 

Anu Engineer commented on HDDS-345:
---

+1, please feel free to commit at will. We already have test passes with this 
version of RocksDB.

 

> Upgrade RocksDB version from 5.8.0 to 5.14.2
> 
>
> Key: HDDS-345
> URL: https://issues.apache.org/jira/browse/HDDS-345
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Datanode, Ozone Manager, SCM
>Affects Versions: 0.2.1
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-345.000.patch, HDDS-345.001.patch
>
>
> We have been using RocksDB version {{5.8.0}}; this can be upgraded to the 
> latest {{5.14.2}} version.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13823) NameNode UI : "Utilities -> Browse the file system -> open a file -> Head the file" is not working

2018-08-13 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13823?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16578658#comment-16578658
 ] 

genericqa commented on HDFS-13823:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 39m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
53m 23s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 45s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 67m 30s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:ba1ab08 |
| JIRA Issue | HDFS-13823 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12935391/HDFS-13823.000.patch |
| Optional Tests |  asflicense  shadedclient  |
| uname | Linux 0a06b2185169 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / a13929d |
| maven | version: Apache Maven 3.3.9 |
| Max. process+thread count | 317 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24764/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> NameNode UI : "Utilities -> Browse the file system -> open a file -> Head the 
> file" is not working
> --
>
> Key: HDFS-13823
> URL: https://issues.apache.org/jira/browse/HDFS-13823
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ui
>Affects Versions: 3.1.1
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
> Attachments: HDFS-13823.000.patch
>
>
> In NameNode UI 'Head the file' and 'Tail the file' links under {{'Utilities 
> -> Browse the file system -> open a file'}} are not working. The file 
> contents box is coming up as empty.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-314) ozoneShell putKey command overwrites the existing key having same name

2018-08-13 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16578654#comment-16578654
 ] 

Anu Engineer commented on HDDS-314:
---

{noformat}

Expectation :

---

key overwrite with same name should not be allowed.{noformat}
I submit that this expectation is not correct. Ozone's defined semantics is 
that a key will be overwritten without any warning; the same is true for S3.

 

Here are the relevant references:

From page *34* of the *Book of Ozone*:
{noformat}

PUT /{volume}/{bucket}/{key}

When putting a key it is the responsibility of the user to ensure that the key 
does not exist, Otherwise put will overwrite an existing key. Key names can be 
from 3 bytes to maximum length of 1024 bytes. Maximum size of data that can be 
stored in a single key is 5 GB. All valid URI(as defined) characters can be 
used for keys.{noformat}
 

From the following URL:

[https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectPUT.html]
{noformat}
Amazon S3 is a distributed system. If it receives multiple write requests for 
the same object simultaneously, it overwrites all but the last object written. 
Amazon S3 does not provide object locking; if you need this, make sure to build 
it into your application layer or use versioning instead.{noformat}
 

I am more than happy to change the semantics if you think it is required. I am 
just mentioning that this is the currently defined semantics. I am sorry I did 
not get back to you earlier on this.
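For users who do want put-if-absent behaviour under these semantics, the check 
has to live in the application layer, as the S3 documentation quoted above 
suggests. A hypothetical sketch (getKey/putKey stand in for whatever client 
API is in use; without server-side locking or versioning the check-then-put 
remains racy):
{code:java}
import java.io.IOException;

// Hypothetical application-layer guard; ClientStandIn is not the real
// Ozone client API, the names are purely illustrative.
class PutIfAbsentSketch {
  interface ClientStandIn {
    byte[] getKey(String key) throws IOException;    // null if absent
    void putKey(String key, byte[] data) throws IOException;
  }

  static boolean putIfAbsent(ClientStandIn client, String key, byte[] data)
      throws IOException {
    if (client.getKey(key) != null) {
      return false;             // key already present, refuse to overwrite
    }
    client.putKey(key, data);   // another writer may still win this race
    return true;
  }
}
{code}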

 

 

> ozoneShell putKey command overwrites the existing key having same name
> --
>
> Key: HDDS-314
> URL: https://issues.apache.org/jira/browse/HDDS-314
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Reporter: Nilotpal Nandi
>Assignee: Nilotpal Nandi
>Priority: Critical
> Fix For: 0.2.1
>
> Attachments: HDDS-314.001.patch, HDDS-314.002.patch, 
> HDDS-314.003.patch
>
>
> steps taken : 
> 1) created a volume root-volume and a bucket root-bucket.
> 2)  Ran following command to put a key with name 'passwd'
>  
> {noformat}
> hadoop@08315aa4b367:~/bin$ ./ozone oz -putKey /root-volume/root-bucket/passwd 
> -file /etc/services -v
> 2018-08-02 09:20:17 WARN NativeCodeLoader:60 - Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> Volume Name : root-volume
> Bucket Name : root-bucket
> Key Name : passwd
> File Hash : 567c100888518c1163b3462993de7d47
> 2018-08-02 09:20:18 INFO ConfUtils:41 - raft.rpc.type = GRPC (default)
> 2018-08-02 09:20:18 INFO ConfUtils:41 - raft.grpc.message.size.max = 33554432 
> (custom)
> 2018-08-02 09:20:18 INFO ConfUtils:41 - raft.client.rpc.retryInterval = 300 
> ms (default)
> 2018-08-02 09:20:18 INFO ConfUtils:41 - 
> raft.client.async.outstanding-requests.max = 100 (default)
> 2018-08-02 09:20:18 INFO ConfUtils:41 - raft.client.async.scheduler-threads = 
> 3 (default)
> 2018-08-02 09:20:18 INFO ConfUtils:41 - raft.grpc.flow.control.window = 1MB 
> (=1048576) (default)
> 2018-08-02 09:20:18 INFO ConfUtils:41 - raft.grpc.message.size.max = 33554432 
> (custom)
> 2018-08-02 09:20:18 INFO ConfUtils:41 - raft.client.rpc.request.timeout = 
> 3000 ms (default)
> Aug 02, 2018 9:20:18 AM 
> org.apache.ratis.shaded.io.grpc.internal.ProxyDetectorImpl detectProxy
>  
> {noformat}
> 3) Ran following command to put a key with name 'passwd' again.
> {noformat}
> hadoop@08315aa4b367:~/bin$ ./ozone oz -putKey /root-volume/root-bucket/passwd 
> -file /etc/passwd -v
> 2018-08-02 09:20:41 WARN NativeCodeLoader:60 - Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> Volume Name : root-volume
> Bucket Name : root-bucket
> Key Name : passwd
> File Hash : b056233571cc80d6879212911cb8e500
> 2018-08-02 09:20:41 INFO ConfUtils:41 - raft.rpc.type = GRPC (default)
> 2018-08-02 09:20:42 INFO ConfUtils:41 - raft.grpc.message.size.max = 33554432 
> (custom)
> 2018-08-02 09:20:42 INFO ConfUtils:41 - raft.client.rpc.retryInterval = 300 
> ms (default)
> 2018-08-02 09:20:42 INFO ConfUtils:41 - 
> raft.client.async.outstanding-requests.max = 100 (default)
> 2018-08-02 09:20:42 INFO ConfUtils:41 - raft.client.async.scheduler-threads = 
> 3 (default)
> 2018-08-02 09:20:42 INFO ConfUtils:41 - raft.grpc.flow.control.window = 1MB 
> (=1048576) (default)
> 2018-08-02 09:20:42 INFO ConfUtils:41 - raft.grpc.message.size.max = 33554432 
> (custom)
> 2018-08-02 09:20:42 INFO ConfUtils:41 - raft.client.rpc.request.timeout = 
> 3000 ms (default)
> Aug 02, 2018 9:20:42 AM 
> org.apache.ratis.shaded.io.grpc.internal.ProxyDetectorImpl 
> detectProxy{noformat}
>  
> key 'passwd' was overwritten with new content and it did not throw any saying 
> that the key is 

[jira] [Commented] (HDDS-345) Upgrade RocksDB version from 5.8.0 to 5.14.2

2018-08-13 Thread Nanda kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16578651#comment-16578651
 ] 

Nanda kumar commented on HDDS-345:
--

Thanks [~elek] for the review; addressed the review comment in patch v001.

> Upgrade RocksDB version from 5.8.0 to 5.14.2
> 
>
> Key: HDDS-345
> URL: https://issues.apache.org/jira/browse/HDDS-345
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Datanode, Ozone Manager, SCM
>Affects Versions: 0.2.1
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-345.000.patch, HDDS-345.001.patch
>
>
> We have been using RocksDB version {{5.8.0}}; this can be upgraded to the 
> latest {{5.14.2}} version.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13668) FSPermissionChecker may throws AIOOE when check inode permission

2018-08-13 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13668?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16578649#comment-16578649
 ] 

Wei-Chiu Chuang commented on HDFS-13668:


{quote}sorry for missing this ungraceful usage yesterday.
{quote}
Sorry, I didn't make this clear. It's just a convention, nothing ungraceful. 
Thanks for your patch!

> FSPermissionChecker may throws AIOOE when check inode permission
> 
>
> Key: HDFS-13668
> URL: https://issues.apache.org/jira/browse/HDFS-13668
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 3.1.0, 2.10.0, 2.7.7
>Reporter: He Xiaoqiao
>Assignee: He Xiaoqiao
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: HDFS-13668-trunk.001.patch, HDFS-13668-trunk.002.patch, 
> HDFS-13668-trunk.003.patch
>
>
> {{FSPermissionChecker}} may throw {{ArrayIndexOutOfBoundsException: 0}} when 
> checking permissions, since it only checks whether the inode's 
> {{aclFeature}} is null but does not check its entry size. When it meets an 
> {{aclFeature}} that is not null but whose entry size equals 0, it will throw 
> an AIOOE.
> {code:java}
> private boolean hasPermission(INodeAttributes inode, FsAction access) {
>   ..
>   final AclFeature aclFeature = inode.getAclFeature();
>   if (aclFeature != null) {
> // It's possible that the inode has a default ACL but no access ACL.
> int firstEntry = aclFeature.getEntryAt(0);
> if (AclEntryStatusFormat.getScope(firstEntry) == AclEntryScope.ACCESS) {
>   return hasAclPermission(inode, access, mode, aclFeature);
> }
>   }
>   ..
> }
> {code}
> Actually, the default {{INodeAttributeProvider}} ensures that when an 
> {{inode}}'s aclFeature is not null its entry size is also greater than 0, 
> but {{INodeAttributeProvider}} is a public interface, so we cannot ensure 
> that external implementations (e.g. Apache Sentry, Apache Ranger) have a 
> similar constraint.
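For illustration, a defensive variant of the quoted check would verify the 
entry count before indexing (a sketch; it assumes AclFeature exposes an 
entry-count accessor such as getEntriesSize()):
{code:java}
// Sketch of a defensive version of the quoted check: only read entry 0
// if the ACL actually has entries, avoiding the AIOOE for external
// INodeAttributeProvider implementations.
final AclFeature aclFeature = inode.getAclFeature();
if (aclFeature != null && aclFeature.getEntriesSize() > 0) {
  int firstEntry = aclFeature.getEntryAt(0);
  if (AclEntryStatusFormat.getScope(firstEntry) == AclEntryScope.ACCESS) {
    return hasAclPermission(inode, access, mode, aclFeature);
  }
}
{code}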



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-345) Upgrade RocksDB version from 5.8.0 to 5.14.2

2018-08-13 Thread Nanda kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-345?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HDDS-345:
-
Attachment: HDDS-345.001.patch

> Upgrade RocksDB version from 5.8.0 to 5.14.2
> 
>
> Key: HDDS-345
> URL: https://issues.apache.org/jira/browse/HDDS-345
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Datanode, Ozone Manager, SCM
>Affects Versions: 0.2.1
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-345.000.patch, HDDS-345.001.patch
>
>
> We have been using RocksDB version {{5.8.0}}; this can be upgraded to the 
> latest {{5.14.2}} version.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13697) DFSClient should instantiate and cache KMSClientProvider using UGI at creation time for consistent UGI handling

2018-08-13 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13697?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16578647#comment-16578647
 ] 

genericqa commented on HDFS-13697:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 16 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
23s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 29m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 32m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m 29s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  7m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
21s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
21s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 31m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 31m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  4s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  8m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
23s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 10m 
49s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
50s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}120m 26s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  5m 
21s{color} | {color:green} hadoop-hdfs-httpfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m  
1s{color} | {color:green} hadoop-hdfs-nfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
48s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}299m 55s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.tracing.TestTracing |
|   | hadoop.hdfs.TestReadStripedFileWithMissingBlocks |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy |
|   | hadoop.hdfs.server.datanode.TestDirectoryScanner |
\\
\\
|| Subsystem || Report/Notes ||

[jira] [Comment Edited] (HDDS-342) Add example byteman script to print out hadoop rpc traffic

2018-08-13 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-342?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16578639#comment-16578639
 ] 

Anu Engineer edited comment on HDDS-342 at 8/13/18 5:19 PM:


 I think we should commit the script file locally as an example instead of 
hitting a URL. Items on external websites have a tendency to disappear. +1 
after that.

 


was (Author: anu):
+1. I will commit this shortly.

> Add example byteman script to print out hadoop rpc traffic
> --
>
> Key: HDDS-342
> URL: https://issues.apache.org/jira/browse/HDDS-342
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Minor
> Fix For: 0.2.1
>
> Attachments: HDDS-342.001.patch, byteman.png, byteman2.png
>
>
> HADOOP-15656 adds byteman support to the hadoop-runner base image. byteman is 
> a simple tool to define java instrumentation. For example it's very easy to 
> print out the incoming and outgoing hadoop rpc messages or fsimage edits.
> In this patch I add one more line to the standard docker-compose cluster to 
> demonstrate this capability (print out rpc calls). By default it's turned off.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-342) Add example byteman script to print out hadoop rpc traffic

2018-08-13 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-342?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16578639#comment-16578639
 ] 

Anu Engineer commented on HDDS-342:
---

+1. I will commit this shortly.

> Add example byteman script to print out hadoop rpc traffic
> --
>
> Key: HDDS-342
> URL: https://issues.apache.org/jira/browse/HDDS-342
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Minor
> Fix For: 0.2.1
>
> Attachments: HDDS-342.001.patch, byteman.png, byteman2.png
>
>
> HADOOP-15656 adds byteman support to the hadoop-runner base image. byteman is 
> a simple tool to define java instrumentation. For example it's very easy to 
> print out the incoming and outgoing hadoop rpc messages or fsimage edits.
> In this patch I add one more line to the standard docker-compose cluster to 
> demonstrate this capability (print out rpc calls). By default it's turned off.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-313) Add metrics to containerState Machine

2018-08-13 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-313?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-313:
--
Fix Version/s: 0.2.1

> Add metrics to containerState Machine
> -
>
> Key: HDDS-313
> URL: https://issues.apache.org/jira/browse/HDDS-313
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Reporter: Mukul Kumar Singh
>Assignee: chencan
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-313.001.patch, HDDS-313.002.patch
>
>
> Metrics need to be added to ContainerStateMachine to keep track of various 
> Ratis ops like writeStateMachine/readStateMachine/applyTransactions.
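
For reference, a minimal sketch of what such a metrics source could look like, 
built on the standard Hadoop metrics2 annotations. The class and counter names 
here are illustrative assumptions, not the contents of the attached patches.

{code:java}
import org.apache.hadoop.metrics2.annotation.Metric;
import org.apache.hadoop.metrics2.annotation.Metrics;
import org.apache.hadoop.metrics2.lib.DefaultMetricsSystem;
import org.apache.hadoop.metrics2.lib.MutableCounterLong;

/** Hypothetical metrics source for ContainerStateMachine (names assumed). */
@Metrics(about = "ContainerStateMachine metrics", context = "dfs")
public class ContainerStateMachineMetrics {

  @Metric private MutableCounterLong numWriteStateMachineOps;
  @Metric private MutableCounterLong numReadStateMachineOps;
  @Metric private MutableCounterLong numApplyTransactionOps;

  public static ContainerStateMachineMetrics create() {
    // Registering with the default metrics system exposes the counters
    // through JMX like any other Hadoop metrics source.
    return DefaultMetricsSystem.instance().register(
        "ContainerStateMachineMetrics",
        "Metrics for the container state machine",
        new ContainerStateMachineMetrics());
  }

  public void incNumWriteStateMachineOps() {
    numWriteStateMachineOps.incr();
  }

  public void incNumReadStateMachineOps() {
    numReadStateMachineOps.incr();
  }

  public void incNumApplyTransactionOps() {
    numApplyTransactionOps.incr();
  }
}
{code}

Each Ratis callback (writeStateMachine/readStateMachine/applyTransaction) 
would then bump the matching counter.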



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-346) ozoneShell show the new volume info after updateVolume command like updateBucket command.

2018-08-13 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-346?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-346:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

[~candychencan] Thank you for the contribution. I have committed this to the 
trunk.

> ozoneShell show the new volume info after updateVolume command like 
> updateBucket command.
> -
>
> Key: HDDS-346
> URL: https://issues.apache.org/jira/browse/HDDS-346
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: chencan
>Assignee: chencan
>Priority: Minor
> Fix For: 0.2.1
>
> Attachments: HDDS-346.001.patch
>
>
> ozoneShell shows nothing after updateVolume; we should list the new volume 
> info after the update command.
> Like this:
> [root@localhost bin]# ./ozone oz -updateVolume /volume -quota 10GB
> 2018-08-10 09:40:02,241 WARN util.NativeCodeLoader: Unable to load 
> native-hadoop library for your platform... using builtin-java classes where 
> applicable
> {
>   "owner" : {
>     "name" : "root"
>   },
>   "quota" : {
>     "unit" : "GB",
>     "size" : 10
>   },
>   "volumeName" : "volume",
>   "createdOn" : "Tue, 01 Jun +50573 08:11:18 GMT",
>   "createdBy" : "root"
> }



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-324) Use pipeline name as Ratis groupID to allow datanode to report pipeline info

2018-08-13 Thread Xiaoyu Yao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-324?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16578632#comment-16578632
 ] 

Xiaoyu Yao commented on HDDS-324:
-

Thanks [~msingh] for the update. +1 for v9 patch pending Jenkins.

> Use pipeline name as Ratis groupID to allow datanode to report pipeline info
> 
>
> Key: HDDS-324
> URL: https://issues.apache.org/jira/browse/HDDS-324
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.2.1
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-324.001.patch, HDDS-324.002.patch, 
> HDDS-324.003.patch, HDDS-324.004.patch, HDDS-324.005.patch, 
> HDDS-324.006.patch, HDDS-324.007.patch, HDDS-324.008.patch, HDDS-324.009.patch
>
>
> Currently Ozone creates a random pipeline id for every pipeline where a 
> pipeline consists of 3 nodes in a ratis ring. Ratis on the other hand uses the 
> notion of RaftGroupID which is a unique id for the nodes in a ratis ring. 
> When a datanode sends information to SCM, the pipeline for the node is 
> currently identified using dn2PipelineMap. With correct use of RaftGroupID, 
> we can eliminate the use of dn2PipelineMap.
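
The core of the idea fits in a few lines. Assuming each pipeline is identified 
by a UUID, SCM and the datanode can derive the same RaftGroupID from it 
deterministically (a minimal sketch under that assumption, not the attached 
patches):

{code:java}
import java.util.UUID;

import org.apache.ratis.protocol.RaftGroupId;

/** Minimal sketch, assuming the pipeline name carries a UUID. */
public final class PipelineGroupIds {

  private PipelineGroupIds() {
  }

  public static RaftGroupId toRaftGroupId(UUID pipelineUuid) {
    // The same pipeline UUID yields the same Ratis group id on both SCM and
    // the datanode, so a datanode report can be mapped straight back to its
    // pipeline without consulting dn2PipelineMap.
    return RaftGroupId.valueOf(pipelineUuid);
  }
}
{code}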



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13790) RBF: Move ClientProtocol APIs to its own module

2018-08-13 Thread Chao Sun (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13790?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16578628#comment-16578628
 ] 

Chao Sun commented on HDFS-13790:
-

[~brahmareddy]: sure, attached patch v2 for that. 

> RBF: Move ClientProtocol APIs to its own module
> ---
>
> Key: HDFS-13790
> URL: https://issues.apache.org/jira/browse/HDFS-13790
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Chao Sun
>Priority: Major
> Attachments: HDFS-13790.000.patch, HDFS-13790.001.patch
>
>
> {{RouterRpcServer}} is getting pretty long. {{RouterNamenodeProtocol}} 
> isolates the {{NamenodeProtocol}} in its own module. {{ClientProtocol}} 
> should have its own {{RouterClientProtocol}}.
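
As a rough illustration of the proposed split (the wiring below is an 
assumption about the shape of the refactoring, not the attached patches): the 
ClientProtocol surface moves into its own class while RouterRpcServer keeps 
the remote-invocation machinery, mirroring how RouterNamenodeProtocol wraps 
NamenodeProtocol.

{code:java}
import java.io.IOException;

import org.apache.hadoop.hdfs.protocol.HdfsFileStatus;
import org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer;

/**
 * Hypothetical sketch. In the real split this class would implement
 * ClientProtocol in full; a single delegating method is shown here.
 */
public class RouterClientProtocol {

  private final RouterRpcServer rpcServer;

  public RouterClientProtocol(RouterRpcServer rpcServer) {
    this.rpcServer = rpcServer;
  }

  public HdfsFileStatus getFileInfo(String src) throws IOException {
    // Locating the namespace, remote invocation, and retries stay in
    // RouterRpcServer; this class only hosts the ClientProtocol surface.
    return rpcServer.getFileInfo(src);
  }
}
{code}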



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13790) RBF: Move ClientProtocol APIs to its own module

2018-08-13 Thread Chao Sun (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13790?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Sun updated HDFS-13790:

Attachment: HDFS-13790.001.patch

> RBF: Move ClientProtocol APIs to its own module
> ---
>
> Key: HDFS-13790
> URL: https://issues.apache.org/jira/browse/HDFS-13790
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Chao Sun
>Priority: Major
> Attachments: HDFS-13790.000.patch, HDFS-13790.001.patch
>
>
> {{RouterRpcServer}} is getting pretty long. {{RouterNamenodeProtocol}} 
> isolates the {{NamenodeProtocol}} in its own module. {{ClientProtocol}} 
> should have its own {{RouterClientProtocol}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-346) ozoneShell show the new volume info after updateVolume command like updateBucket command.

2018-08-13 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-346?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16578623#comment-16578623
 ] 

Anu Engineer commented on HDDS-346:
---

[~candychencan] thanks for catching and fixing this. I will commit this shortly.

> ozoneShell show the new volume info after updateVolume command like 
> updateBucket command.
> -
>
> Key: HDDS-346
> URL: https://issues.apache.org/jira/browse/HDDS-346
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: chencan
>Assignee: chencan
>Priority: Minor
> Fix For: 0.2.1
>
> Attachments: HDDS-346.001.patch
>
>
> ozoneShell shows nothing after updateVolume; we should list the new volume 
> info after the update command.
> Like this:
> [root@localhost bin]# ./ozone oz -updateVolume /volume -quota 10GB
> 2018-08-10 09:40:02,241 WARN util.NativeCodeLoader: Unable to load 
> native-hadoop library for your platform... using builtin-java classes where 
> applicable
> {
>   "owner" : {
>     "name" : "root"
>   },
>   "quota" : {
>     "unit" : "GB",
>     "size" : 10
>   },
>   "volumeName" : "volume",
>   "createdOn" : "Tue, 01 Jun +50573 08:11:18 GMT",
>   "createdBy" : "root"
> }



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-346) ozoneShell show the new volume info after updateVolume command like updateBucket command.

2018-08-13 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-346?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-346:
--
Fix Version/s: 0.2.1

> ozoneShell show the new volume info after updateVolume command like 
> updateBucket command.
> -
>
> Key: HDDS-346
> URL: https://issues.apache.org/jira/browse/HDDS-346
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: chencan
>Assignee: chencan
>Priority: Minor
> Fix For: 0.2.1
>
> Attachments: HDDS-346.001.patch
>
>
> ozoneShell shows nothing after updateVolume; we should list the new volume 
> info after the update command.
> Like this:
> [root@localhost bin]# ./ozone oz -updateVolume /volume -quota 10GB
> 2018-08-10 09:40:02,241 WARN util.NativeCodeLoader: Unable to load 
> native-hadoop library for your platform... using builtin-java classes where 
> applicable
> {
>   "owner" : {
>     "name" : "root"
>   },
>   "quota" : {
>     "unit" : "GB",
>     "size" : 10
>   },
>   "volumeName" : "volume",
>   "createdOn" : "Tue, 01 Jun +50573 08:11:18 GMT",
>   "createdBy" : "root"
> }



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-347) Fix : testCloseContainerViaStandaAlone fails sometimes

2018-08-13 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-347?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-347:
--
Fix Version/s: 0.2.1

> Fix : testCloseContainerViaStandaAlone fails sometimes
> --
>
> Key: HDDS-347
> URL: https://issues.apache.org/jira/browse/HDDS-347
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: LiXin Ge
>Assignee: LiXin Ge
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-347.000.patch
>
>
> This issue was found in the automatic Jenkins unit test of HDDS-265.
>  The container life cycle state is: Open -> Closing -> Closed. This test 
> submits the container close command and waits for the container state to 
> change to *not equal to open*; but even when that condition is satisfied, 
> the container may still be in the process of closing, so the LOG line that 
> is printed after the container is closed sometimes cannot be found and the 
> test fails.
> {code:java|title=KeyValueContainer.java|borderStyle=solid}
> try {
>   writeLock();
>   containerData.closeContainer();
>   File containerFile = getContainerFile();
>   // update the new container data to .container File
>   updateContainerFile(containerFile);
> } catch (StorageContainerException ex) {
> {code}
> Looking at the code above, the container state changes from CLOSING to CLOSED 
> in the first step, while the remaining *updateContainerFile* may take hundreds 
> of milliseconds; so even modifying the test logic to wait for the *CLOSED* 
> state will not guarantee that the test succeeds.
>  There are two ways to fix this:
>  1. Remove the part of the double check that depends on the LOG.
>  2. If we have to preserve the double check, wait for the *CLOSED* state and 
> then keep waiting for a while until the LOG appears.
>  Patch 000 is based on the second way (see the sketch below).
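
A minimal sketch of the second way, using GenericTestUtils.waitFor. The state 
check and log line are passed in because the exact container accessors and 
message text are assumptions here, not the attached patch:

{code:java}
import java.util.concurrent.TimeoutException;

// GenericTestUtils.waitFor takes Guava's Supplier in this Hadoop version.
import com.google.common.base.Supplier;

import org.apache.hadoop.test.GenericTestUtils;

public final class CloseContainerWait {

  private CloseContainerWait() {
  }

  public static void waitForClose(Supplier<Boolean> containerIsClosed,
      GenericTestUtils.LogCapturer logs, String closeLogLine)
      throws TimeoutException, InterruptedException {
    // 1. "Not OPEN" can still mean CLOSING, so wait for CLOSED itself.
    GenericTestUtils.waitFor(containerIsClosed, 100, 10_000);
    // 2. updateContainerFile can lag the state flip by hundreds of
    //    milliseconds, so poll for the log line instead of a fixed sleep.
    GenericTestUtils.waitFor(
        () -> logs.getOutput().contains(closeLogLine), 100, 10_000);
  }
}
{code}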



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-341) HDDS/Ozone bits are leaking into Hadoop release

2018-08-13 Thread Allen Wittenauer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-341?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16578617#comment-16578617
 ] 

Allen Wittenauer commented on HDDS-341:
---

I'll comment on HDDS-214 then.  

> HDDS/Ozone bits are leaking into Hadoop release
> ---
>
> Key: HDDS-341
> URL: https://issues.apache.org/jira/browse/HDDS-341
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Reporter: Anu Engineer
>Priority: Blocker
> Fix For: 0.2.1
>
>
> [~aw] in the Ozone release discussion reported that Ozone is leaking bits 
> into Hadoop. This has to be fixed before  Hadoop 3.2 or Ozone 0.2.1 release. 
> I will make this a release blocker for Ozone.
>  
> {noformat}
> >Has anyone verified that a Hadoop release doesn't have _any_ of the extra 
> >ozone bits that are sprinkled outside the maven modules?
> [aengineer] : As far as I know that is the state, we have had multiple Hadoop 
> releases after ozone has been merged. So far no one has reported Ozone bits 
> leaking into Hadoop. If we find something like that, it would be a bug.
> [aw]: There hasn't been a release from a branch where Ozone has been merged 
> yet. The first one will be 3.2.0.  Running create-release off of trunk 
> presently shows bits of Ozone in dev-support, hadoop-dist, and elsewhere in 
> the Hadoop source tar ball.
>   So, consider this as a report. IMHO, cutting an Ozone release prior to 
> a Hadoop release ill-advised given the distribution impact and the 
> requirements of the merge vote.  
> {noformat}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-325) Add event watcher for delete blocks command

2018-08-13 Thread Elek, Marton (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16578607#comment-16578607
 ] 

Elek, Marton commented on HDDS-325:
---

Thank you for the clarification [~ljain]. Let me summarize the goals (and 
correct me if my understanding is wrong):

1. You would like to use two different event types/topics. If the event is sent 
to SCMEvents.DATANODE_COMMAND, the message will be sent to the datanode, the 
retry logic has to be implemented manually, and an additional message has to be 
sent to an event watcher.

2. If the message is sent to SCMEvents.RETRIABLE_DATANODE_COMMAND, the 
message will be sent to the datanode AND the retry will be handled 
automatically.

If this is the situation: I like the idea, but I don't think that we need to 
implement a RetriableCompletionPayload for that. I think it could be 
implemented in a simpler way.
 
As I wrote, I prefer to keep the messaging logic out of the message payload. 
The message payload is just a collection of data, and I can't see any reason to 
put more logic there. Even in your patch you can put the retry logic in the 
RetriableEventWatcher.onTimeout method.

The message payload type and the event type are separate. It's very easy to 
create a RetriableEventWatcher which listens on 
SCMEvents.RETRIABLE_DATANODE_COMMAND but receives exactly the same message 
as SCMEvents.DATANODE_COMMAND. (It also makes it easy to switch between the 
two approaches.)

I think we currently use a generic SCMEvents.DATANODE_COMMAND for all the 
datanode commands, so we don't need to make the EventWatcher more complex by 
adding a Set to it. If in the future we switch to handling the different 
datanode commands with different event types, we can easily instantiate 
multiple RetriableEventWatchers. With this approach we can monitor the 
different types of events more easily, and the implementation could be simpler 
(IMHO).

Does it make sense? A rough sketch of what I have in mind is below.
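
(The event and payload names in this sketch are assumptions based on the 
discussion, not the actual patch; the point is that the retry lives in 
onTimeout, so the payload stays a plain data holder.)

{code:java}
import org.apache.hadoop.hdds.server.events.EventPublisher;
import org.apache.hadoop.hdds.server.events.EventWatcher;
import org.apache.hadoop.ozone.lease.LeaseManager;
import org.apache.hadoop.ozone.protocol.commands.CommandForDatanode;

/** Hypothetical watcher bound to the retriable datanode command topic. */
public class RetriableDatanodeEventWatcher
    extends EventWatcher<CommandForDatanode, CommandStatusEvent> {

  public RetriableDatanodeEventWatcher(LeaseManager<Long> leaseManager) {
    // RETRIABLE_DATANODE_COMMAND / DATANODE_COMMAND_STATUS are assumed
    // names for the start and completion topics.
    super(SCMEvents.RETRIABLE_DATANODE_COMMAND,
        SCMEvents.DATANODE_COMMAND_STATUS, leaseManager);
  }

  @Override
  protected void onTimeout(EventPublisher publisher,
      CommandForDatanode payload) {
    // No completion event arrived in time: re-fire the same payload so the
    // command is dispatched to the datanode again.
    publisher.fireEvent(SCMEvents.RETRIABLE_DATANODE_COMMAND, payload);
  }

  @Override
  protected void onFinished(EventPublisher publisher,
      CommandForDatanode payload) {
    // Acknowledgement received; nothing more to do.
  }
}
{code}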

> Add event watcher for delete blocks command
> ---
>
> Key: HDDS-325
> URL: https://issues.apache.org/jira/browse/HDDS-325
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode, SCM
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-325.001.patch, HDDS-325.002.patch, 
> HDDS-325.003.patch
>
>
> This Jira aims to add a watcher for the deleteBlocks command. It removes the 
> current rpc call required for the datanode to send the acknowledgement for 
> deleteBlocks.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13823) NameNode UI : "Utilities -> Browse the file system -> open a file -> Head the file" is not working

2018-08-13 Thread Arpit Agarwal (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13823?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16578585#comment-16578585
 ] 

Arpit Agarwal commented on HDFS-13823:
--

+1 pending Jenkins.

Verified that your patch addresses the issue. Thanks for root-causing this 
[~nandakumar131]!

> NameNode UI : "Utilities -> Browse the file system -> open a file -> Head the 
> file" is not working
> --
>
> Key: HDFS-13823
> URL: https://issues.apache.org/jira/browse/HDFS-13823
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ui
>Affects Versions: 3.1.1
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
> Attachments: HDFS-13823.000.patch
>
>
> In NameNode UI 'Head the file' and 'Tail the file' links under {{'Utilities 
> -> Browse the file system -> open a file'}} are not working. The file 
> contents box is coming up as empty.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13823) NameNode UI : "Utilities -> Browse the file system -> open a file -> Head the file" is not working

2018-08-13 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13823?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-13823:
-
Target Version/s: 3.2.0, 3.0.4, 3.1.2

> NameNode UI : "Utilities -> Browse the file system -> open a file -> Head the 
> file" is not working
> --
>
> Key: HDFS-13823
> URL: https://issues.apache.org/jira/browse/HDFS-13823
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ui
>Affects Versions: 3.1.1
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
> Attachments: HDFS-13823.000.patch
>
>
> In NameNode UI 'Head the file' and 'Tail the file' links under {{'Utilities 
> -> Browse the file system -> open a file'}} are not working. The file 
> contents box is coming up as empty.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-341) HDDS/Ozone bits are leaking into Hadoop release

2018-08-13 Thread Elek, Marton (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-341?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16578578#comment-16578578
 ] 

Elek, Marton commented on HDDS-341:
---

Thanks for the detailed definition of the problem, [~aw].
 
1. According to the original proposal of merging Ozone to Hadoop trunk, the 
problem could be solved by deleting the ozone files from the 3.2 branch (we 
need trunk/3.2 separation for that).

From Owen's mail:

{quote}
 * On trunk (as opposed to release branches) HDSL will be a separate module in 
Hadoop's source tree. This will enable the HDSL to work on their trunk and the 
Hadoop trunk without making releases for every change.
 * Hadoop's trunk will only build HDSL if a non-default profile is enabled.
 * When Hadoop creates a release branch, the RM will delete the HDSL module 
from the branch.
{quote}

I would be happy to help the RM of 3.2 to do this.

2. I agree that the dependency on hadoop 3.2.0-SNAPSHOT is a real problem. The 
main reason for the original proposal (merge ozone to hadoop trunk) was to make 
it easier to use the latest hadoop all the time. I proposed in HDDS-214 to 
release the required hadoop artifacts with an ozone-specific version (e.g. 
org.apache.hadoop:hadoop-common:ozone-0.2.1).

> HDDS/Ozone bits are leaking into Hadoop release
> ---
>
> Key: HDDS-341
> URL: https://issues.apache.org/jira/browse/HDDS-341
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Reporter: Anu Engineer
>Priority: Blocker
> Fix For: 0.2.1
>
>
> [~aw] in the Ozone release discussion reported that Ozone is leaking bits 
> into Hadoop. This has to be fixed before  Hadoop 3.2 or Ozone 0.2.1 release. 
> I will make this a release blocker for Ozone.
>  
> {noformat}
> >Has anyone verified that a Hadoop release doesn't have _any_ of the extra 
> >ozone bits that are sprinkled outside the maven modules?
> [aengineer] : As far as I know that is the state, we have had multiple Hadoop 
> releases after ozone has been merged. So far no one has reported Ozone bits 
> leaking into Hadoop. If we find something like that, it would be a bug.
> [aw]: There hasn't been a release from a branch where Ozone has been merged 
> yet. The first one will be 3.2.0.  Running create-release off of trunk 
> presently shows bits of Ozone in dev-support, hadoop-dist, and elsewhere in 
> the Hadoop source tar ball.
>   So, consider this as a report. IMHO, cutting an Ozone release prior to 
> a Hadoop release ill-advised given the distribution impact and the 
> requirements of the merge vote.  
> {noformat}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-324) Use pipeline name as Ratis groupID to allow datanode to report pipeline info

2018-08-13 Thread Nanda kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-324?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16578577#comment-16578577
 ] 

Nanda kumar commented on HDDS-324:
--

Sorry, my bad. I was trying to apply it in a different branch.

> Use pipeline name as Ratis groupID to allow datanode to report pipeline info
> 
>
> Key: HDDS-324
> URL: https://issues.apache.org/jira/browse/HDDS-324
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.2.1
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-324.001.patch, HDDS-324.002.patch, 
> HDDS-324.003.patch, HDDS-324.004.patch, HDDS-324.005.patch, 
> HDDS-324.006.patch, HDDS-324.007.patch, HDDS-324.008.patch, HDDS-324.009.patch
>
>
> Currently Ozone creates a random pipeline id for every pipeline where a 
> pipeline consists of 3 nodes in a ratis ring. Ratis on the other hand uses the 
> notion of RaftGroupID which is a unique id for the nodes in a ratis ring. 
> When a datanode sends information to SCM, the pipeline for the node is 
> currently identified using dn2PipelineMap. With correct use of RaftGroupID, 
> we can eliminate the use of dn2PipelineMap.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-324) Use pipeline name as Ratis groupID to allow datanode to report pipeline info

2018-08-13 Thread Nanda kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-324?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16578567#comment-16578567
 ] 

Nanda kumar commented on HDDS-324:
--

[~msingh], the patch is not applying anymore, can you please rebase it?

> Use pipeline name as Ratis groupID to allow datanode to report pipeline info
> 
>
> Key: HDDS-324
> URL: https://issues.apache.org/jira/browse/HDDS-324
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.2.1
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-324.001.patch, HDDS-324.002.patch, 
> HDDS-324.003.patch, HDDS-324.004.patch, HDDS-324.005.patch, 
> HDDS-324.006.patch, HDDS-324.007.patch, HDDS-324.008.patch, HDDS-324.009.patch
>
>
> Currently Ozone creates a random pipeline id for every pipeline where a 
> pipeline consists of 3 nodes in a ratis ring. Ratis on the other hand uses the 
> notion of RaftGroupID which is a unique id for the nodes in a ratis ring. 
> When a datanode sends information to SCM, the pipeline for the node is 
> currently identified using dn2PipelineMap. With correct use of RaftGroupID, 
> we can eliminate the use of dn2PipelineMap.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-324) Use pipeline name as Ratis groupID to allow datanode to report pipeline info

2018-08-13 Thread Mukul Kumar Singh (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-324?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16578564#comment-16578564
 ] 

Mukul Kumar Singh commented on HDDS-324:


Thanks for the review, [~xyao]. Patch v9 addresses the review comments. I will 
handle the config option for the ReplicationLevel in HDDS-297.

> Use pipeline name as Ratis groupID to allow datanode to report pipeline info
> 
>
> Key: HDDS-324
> URL: https://issues.apache.org/jira/browse/HDDS-324
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.2.1
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-324.001.patch, HDDS-324.002.patch, 
> HDDS-324.003.patch, HDDS-324.004.patch, HDDS-324.005.patch, 
> HDDS-324.006.patch, HDDS-324.007.patch, HDDS-324.008.patch, HDDS-324.009.patch
>
>
> Currently Ozone creates a random pipeline id for every pipeline where a 
> pipeline consists of 3 nodes in a ratis ring. Ratis on the other hand uses the 
> notion of RaftGroupID which is a unique id for the nodes in a ratis ring. 
> When a datanode sends information to SCM, the pipeline for the node is 
> currently identified using dn2PipelineMap. With correct use of RaftGroupID, 
> we can eliminate the use of dn2PipelineMap.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-324) Use pipeline name as Ratis groupID to allow datanode to report pipeline info

2018-08-13 Thread Mukul Kumar Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-324?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mukul Kumar Singh updated HDDS-324:
---
Attachment: HDDS-324.009.patch

> Use pipeline name as Ratis groupID to allow datanode to report pipeline info
> 
>
> Key: HDDS-324
> URL: https://issues.apache.org/jira/browse/HDDS-324
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.2.1
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-324.001.patch, HDDS-324.002.patch, 
> HDDS-324.003.patch, HDDS-324.004.patch, HDDS-324.005.patch, 
> HDDS-324.006.patch, HDDS-324.007.patch, HDDS-324.008.patch, HDDS-324.009.patch
>
>
> Currently Ozone creates a random pipeline id for every pipeline where a 
> pipeline consists of 3 nodes in a ratis ring. Ratis on the other hand uses the 
> notion of RaftGroupID which is a unique id for the nodes in a ratis ring. 
> When a datanode sends information to SCM, the pipeline for the node is 
> currently identified using dn2PipelineMap. With correct use of RaftGroupID, 
> we can eliminate the use of dn2PipelineMap.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-247) Handle CLOSED_CONTAINER_IO exception in ozoneClient

2018-08-13 Thread Nanda kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-247?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16578565#comment-16578565
 ] 

Nanda kumar commented on HDDS-247:
--

[~shashikant], the patch is not applying anymore, can you please rebase it?

> Handle CLOSED_CONTAINER_IO exception in ozoneClient
> ---
>
> Key: HDDS-247
> URL: https://issues.apache.org/jira/browse/HDDS-247
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Blocker
> Fix For: 0.2.1
>
> Attachments: HDDS-247.00.patch, HDDS-247.01.patch, HDDS-247.02.patch, 
> HDDS-247.03.patch, HDDS-247.04.patch, HDDS-247.05.patch
>
>
> In case of ongoing writes by the Ozone client to a container, the container 
> might get closed on the Datanodes because of node loss, out-of-space issues, 
> etc. In such cases, the operation will fail with a CLOSED_CONTAINER_IO 
> exception, and the ozone client should try to get the committed length of the 
> block from the Datanodes and update the OM. This Jira aims to address this 
> issue.
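
The intended recovery flow might look roughly like this (the helper hooks are 
hypothetical placeholders for the real client plumbing, not the attached 
patches):

{code:java}
import java.io.IOException;

// The ContainerProtos package path is an assumption; it moves across versions.
import org.apache.hadoop.hdds.protocol.proto.ContainerProtos.Result;
import org.apache.hadoop.hdds.scm.container.common.helpers.StorageContainerException;

public abstract class ClosedContainerRetrySketch {

  public void writeWithRetry(byte[] data) throws IOException {
    try {
      writeChunkToCurrentBlock(data);
    } catch (StorageContainerException e) {
      if (e.getResult() != Result.CLOSED_CONTAINER_IO) {
        throw e;
      }
      // Container closed mid-write: find out how much the datanodes actually
      // committed, record that length in OM, then retry on a fresh block.
      long committedLength = getCommittedBlockLength();
      updateOzoneManager(committedLength);
      allocateNewBlockAndRewrite(data);
    }
  }

  // Hypothetical hooks standing in for the real Ozone client internals.
  protected abstract void writeChunkToCurrentBlock(byte[] data)
      throws IOException;

  protected abstract long getCommittedBlockLength() throws IOException;

  protected abstract void updateOzoneManager(long committedLength)
      throws IOException;

  protected abstract void allocateNewBlockAndRewrite(byte[] data)
      throws IOException;
}
{code}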



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13823) NameNode UI : "Utilities -> Browse the file system -> open a file -> Head the file" is not working

2018-08-13 Thread Nanda kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13823?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HDFS-13823:
---
Status: Patch Available  (was: Open)

> NameNode UI : "Utilities -> Browse the file system -> open a file -> Head the 
> file" is not working
> --
>
> Key: HDFS-13823
> URL: https://issues.apache.org/jira/browse/HDFS-13823
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ui
>Affects Versions: 3.1.1
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
> Attachments: HDFS-13823.000.patch
>
>
> In NameNode UI 'Head the file' and 'Tail the file' links under {{'Utilities 
> -> Browse the file system -> open a file'}} are not working. The file 
> contents box is coming up as empty.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-324) Use pipeline name as Ratis groupID to allow datanode to report pipeline info

2018-08-13 Thread Xiaoyu Yao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-324?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16578550#comment-16578550
 ] 

Xiaoyu Yao commented on HDDS-324:
-

Thanks [~msingh] for the update. Patch v8 looks excellent to me. Just two more 
minor issues:

FSDirRenameOp.java

Unrelated HDFS change; can we fix that in a separate JIRA?

XceiverServerRatis.java

Line 298: can we make this a configurable option for choosing a different 
ReplicationLevel, such as ALL or MAJORITY? I did not find an option to enable 
this in the latest patch. I'm OK with fixing this in a separate JIRA.

> Use pipeline name as Ratis groupID to allow datanode to report pipeline info
> 
>
> Key: HDDS-324
> URL: https://issues.apache.org/jira/browse/HDDS-324
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.2.1
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-324.001.patch, HDDS-324.002.patch, 
> HDDS-324.003.patch, HDDS-324.004.patch, HDDS-324.005.patch, 
> HDDS-324.006.patch, HDDS-324.007.patch, HDDS-324.008.patch
>
>
> Currently Ozone creates a random pipeline id for every pipeline where a 
> pipeline consists of 3 nodes in a ratis ring. Ratis on the other hand uses the 
> notion of RaftGroupID which is a unique id for the nodes in a ratis ring. 
> When a datanode sends information to SCM, the pipeline for the node is 
> currently identified using dn2PipelineMap. With correct use of RaftGroupID, 
> we can eliminate the use of dn2PipelineMap.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13823) NameNode UI : "Utilities -> Browse the file system -> open a file -> Head the file" is not working

2018-08-13 Thread Nanda kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13823?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HDFS-13823:
---
Attachment: HDFS-13823.000.patch

> NameNode UI : "Utilities -> Browse the file system -> open a file -> Head the 
> file" is not working
> --
>
> Key: HDFS-13823
> URL: https://issues.apache.org/jira/browse/HDFS-13823
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ui
>Affects Versions: 3.1.1
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
> Attachments: HDFS-13823.000.patch
>
>
> In NameNode UI 'Head the file' and 'Tail the file' links under {{'Utilities 
> -> Browse the file system -> open a file'}} are not working. The file 
> contents box is coming up as empty.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12711) deadly hdfs test

2018-08-13 Thread Allen Wittenauer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-12711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16578504#comment-16578504
 ] 

Allen Wittenauer commented on HDFS-12711:
-

bq. Was a follow up jira filed for this work? (and if so, which one was chosen)

Nope. Few others seemed to care; patches go in regardless of what Jenkins says 
and/or how they may negatively impact the build.

Yetus 0.7.0 and bumping up the surefire version (at least in trunk) stopped 
hadoop from crashing ASF Jenkins build nodes. It's still horribly broken, just 
less obviously so. branch-2 nightlies were turned off months ago since they 
were failing at such a high rate as to be pointless. I don't think anyone 
really pays attention to the trunk nightlies, so they should probably be 
turned off too.

> deadly hdfs test
> 
>
> Key: HDFS-12711
> URL: https://issues.apache.org/jira/browse/HDFS-12711
> Project: Hadoop HDFS
>  Issue Type: Test
>Affects Versions: 2.9.0, 2.8.2
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>Priority: Critical
> Attachments: HDFS-12711.branch-2.00.patch, fakepatch.branch-2.txt
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org


