[jira] [Updated] (HDFS-14205) Backport HDFS-6440 to branch-2

2019-02-27 Thread Chao Sun (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14205?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Sun updated HDFS-14205:

Attachment: HDFS-14205-branch-2.004.patch

> Backport HDFS-6440 to branch-2
> --
>
> Key: HDFS-14205
> URL: https://issues.apache.org/jira/browse/HDFS-14205
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Chen Liang
>Assignee: Chao Sun
>Priority: Major
> Attachments: HDFS-14205-branch-2.001.patch, 
> HDFS-14205-branch-2.002.patch, HDFS-14205-branch-2.003.patch, 
> HDFS-14205-branch-2.004.patch
>
>
> Currently, support for more than 2 NameNodes (HDFS-6440) exists only in 
> branch-3. This JIRA aims to backport it to branch-2, as it is required by the 
> HDFS-12943 (consistent reads from standby) backport to branch-2.






[jira] [Commented] (HDDS-1072) Implement RetryProxy and FailoverProxy for OM client

2019-02-27 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16780199#comment-16780199
 ] 

Hadoop QA commented on HDDS-1072:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
27s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
 5s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 18m 
20s{color} | {color:red} root in trunk failed. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
18m 22s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
39s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
22s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
30s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 17m 
12s{color} | {color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} cc {color} | {color:red} 17m 12s{color} | 
{color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 17m 12s{color} 
| {color:red} root in the patch failed. {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
3m 17s{color} | {color:orange} root: The patch generated 2 new + 0 unchanged - 
0 fixed = 2 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 45s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m  
8s{color} | {color:red} hadoop-ozone/common generated 2 new + 0 unchanged - 0 
fixed = 2 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
24s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  1m 15s{color} 
| {color:red} common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
42s{color} | {color:green} common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
31s{color} | {color:green} client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 42s{color} 
| {color:red} ozone-manager in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 10m 13s{color} 
| {color:red} integration-test in the patch failed. {color} |
| 

[jira] [Updated] (HDDS-1128) Create stateful manager class for the pipeline creation scheduling

2019-02-27 Thread Lokesh Jain (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1128?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lokesh Jain updated HDDS-1128:
--
Status: Patch Available  (was: Open)

> Create stateful manager class for the pipeline creation scheduling
> --
>
> Key: HDDS-1128
> URL: https://issues.apache.org/jira/browse/HDDS-1128
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Elek, Marton
>Assignee: Lokesh Jain
>Priority: Critical
> Attachments: HDDS-1128.001.patch
>
>
> HDDS-1076 introduced a new static variable, scheduler, in 
> RatisPipelineProvider. It is effectively a global variable, which makes 
> testing harder. [~shashikant] also suggested removing it:
> {quote}It would be a good idea to move the scheduler Class Utility into some 
> common utility package so that it can be used in multiple places as and when 
> needed. 
> {quote}
> I agree. And findbugs also complains about it:
> {quote}H D ST: Write to static field 
> org.apache.hadoop.hdds.scm.pipeline.RatisPipelineProvider.scheduler from 
> instance method new 
> org.apache.hadoop.hdds.scm.pipeline.RatisPipelineProvider(NodeManager, 
> PipelineStateManager, Configuration)  At RatisPipelineProvider.java:[line 56]
> {quote}
> I think we need a new class which holds both the state of 
> RatisPipelineUtils.isPipelineCreatorRunning and the 
> RatisPipelineProvider scheduler. It should have a single instance that is 
> available to the classes which require it (a sketch follows below).
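
A minimal sketch of what such a stateful manager could look like; the class
and method names are illustrative assumptions, and a plain
ScheduledExecutorService stands in for the Scheduler utility:

{code:java}
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicBoolean;

/**
 * Holds both the scheduler and the isPipelineCreatorRunning flag as
 * instance state, so each test (or SCM instance) can create and shut
 * down its own manager instead of sharing a static field.
 */
public class BackgroundPipelineCreator {
  private final ScheduledExecutorService scheduler =
      Executors.newSingleThreadScheduledExecutor();
  private final AtomicBoolean creatorRunning = new AtomicBoolean(false);

  /** Schedules a pipeline-creation run unless one is already in flight. */
  public void schedule(Runnable createPipelines, long delayMillis) {
    if (creatorRunning.compareAndSet(false, true)) {
      scheduler.schedule(() -> {
        try {
          createPipelines.run();
        } finally {
          creatorRunning.set(false);
        }
      }, delayMillis, TimeUnit.MILLISECONDS);
    }
  }

  public void shutdown() {
    scheduler.shutdownNow();
  }
}
{code}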






[jira] [Commented] (HDFS-14201) Ability to disallow safemode NN to become active

2019-02-27 Thread He Xiaoqiao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14201?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16780169#comment-16780169
 ] 

He Xiaoqiao commented on HDFS-14201:


+1 LGTM, Thanks [~surmountian].

> Ability to disallow safemode NN to become active
> 
>
> Key: HDFS-14201
> URL: https://issues.apache.org/jira/browse/HDFS-14201
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: auto-failover
>Affects Versions: 3.1.1, 2.9.2
>Reporter: Xiao Liang
>Assignee: Xiao Liang
>Priority: Major
> Attachments: HDFS-14201.001.patch, HDFS-14201.002.patch, 
> HDFS-14201.003.patch, HDFS-14201.004.patch
>
>
> Currently with HA, a Namenode in safemode can be selected as active, even 
> though Namenodes not in safemode are better choices for keeping both reads 
> and writes available.
> It can take tens of minutes for a cold-started Namenode to get out of 
> safemode, especially when there are a large number of files and blocks in 
> HDFS. That means if a Namenode in safemode becomes active, the cluster will 
> not be fully functioning for quite a while, even when some Namenode not in 
> safemode is available.
> The proposal here is to add an option that lets a Namenode report itself as 
> UNHEALTHY to ZKFC while it is in safemode, so that only a fully functioning 
> Namenode can become active, improving the general availability of the 
> cluster.
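
A minimal sketch of how the option can hook into the ZKFC health check,
assuming a config key along the lines of
dfs.ha.nn.not-become-active-in-safemode (the patch defines the actual name)
and the NameNode's existing monitorHealth path:

{code:java}
// Fragment of a NameNode health check; notBecomeActiveInSafemode is
// assumed to be read from the new config key at startup.
synchronized void monitorHealth() throws HealthCheckFailedException {
  if (notBecomeActiveInSafemode && isInSafeMode()) {
    // ZKFC treats this exception as SERVICE_UNHEALTHY, so this NameNode
    // cannot be elected active until it leaves safemode.
    throw new HealthCheckFailedException(
        "The NameNode is in safemode and configured to report UNHEALTHY.");
  }
  // ... existing resource checks ...
}
{code}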






[jira] [Commented] (HDFS-14201) Ability to disallow safemode NN to become active

2019-02-27 Thread Xiao Liang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14201?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16780154#comment-16780154
 ] 

Xiao Liang commented on HDFS-14201:
---

Thanks [~hexiaoqiao] for reviewing.

TestHASafeMode#testTransitionToActiveWhenSafeMode passes with 
[^HDFS-14201.004.patch]; I believe the randomized MiniDFSCluster root dir (set 
at line 904 of TestHASafeMode.java in [^HDFS-14201.004.patch]) is what makes 
it work. A sketch of that pattern is below.

On my local machine all test cases of 
hadoop.hdfs.tools.TestDFSZKFailoverController passed, and the other test 
failures reported by Jenkins do not seem related to this change.
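
For reference, a minimal sketch of the randomized-root-dir pattern, assuming
GenericTestUtils.getRandomizedTempPath() is what the patch uses:

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.HdfsConfiguration;
import org.apache.hadoop.hdfs.MiniDFSCluster;
import org.apache.hadoop.hdfs.MiniDFSNNTopology;
import org.apache.hadoop.test.GenericTestUtils;

// Give every test run its own base dir so state left behind by a previous
// (possibly failed) run cannot leak into the next MiniDFSCluster instance.
static MiniDFSCluster newRandomizedCluster() throws Exception {
  Configuration conf = new HdfsConfiguration();
  conf.set(MiniDFSCluster.HDFS_MINIDFS_BASEDIR,
      GenericTestUtils.getRandomizedTempPath());
  return new MiniDFSCluster.Builder(conf)
      .nnTopology(MiniDFSNNTopology.simpleHATopology())
      .numDataNodes(1)
      .build();
}
{code}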

> Ability to disallow safemode NN to become active
> 
>
> Key: HDFS-14201
> URL: https://issues.apache.org/jira/browse/HDFS-14201
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: auto-failover
>Affects Versions: 3.1.1, 2.9.2
>Reporter: Xiao Liang
>Assignee: Xiao Liang
>Priority: Major
> Attachments: HDFS-14201.001.patch, HDFS-14201.002.patch, 
> HDFS-14201.003.patch, HDFS-14201.004.patch
>
>
> Currently with HA, a Namenode in safemode can be selected as active, even 
> though Namenodes not in safemode are better choices for keeping both reads 
> and writes available.
> It can take tens of minutes for a cold-started Namenode to get out of 
> safemode, especially when there are a large number of files and blocks in 
> HDFS. That means if a Namenode in safemode becomes active, the cluster will 
> not be fully functioning for quite a while, even when some Namenode not in 
> safemode is available.
> The proposal here is to add an option that lets a Namenode report itself as 
> UNHEALTHY to ZKFC while it is in safemode, so that only a fully functioning 
> Namenode can become active, improving the general availability of the 
> cluster.






[jira] [Work logged] (HDDS-1183) Override getDelegationToken API for OzoneFileSystem

2019-02-27 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1183?focusedWorklogId=205630&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-205630
 ]

ASF GitHub Bot logged work on HDDS-1183:


Author: ASF GitHub Bot
Created on: 28/Feb/19 06:16
Start Date: 28/Feb/19 06:16
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #526: HDDS-1183. 
Override getDelegationToken API for OzoneFileSystem. Contr…
URL: https://github.com/apache/hadoop/pull/526#issuecomment-468151852
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 39 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 59 | Maven dependency ordering for branch |
   | +1 | mvninstall | 981 | trunk passed |
   | +1 | compile | 925 | trunk passed |
   | +1 | checkstyle | 190 | trunk passed |
   | -1 | mvnsite | 51 | common in trunk failed. |
   | +1 | shadedclient | 1065 | branch has no errors when building and testing 
our client artifacts. |
   | -1 | findbugs | 36 | common in trunk failed. |
   | +1 | javadoc | 132 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 23 | Maven dependency ordering for patch |
   | +1 | mvninstall | 99 | the patch passed |
   | +1 | compile | 873 | the patch passed |
   | +1 | javac | 873 | the patch passed |
   | +1 | checkstyle | 183 | the patch passed |
   | +1 | mvnsite | 137 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 672 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 219 | the patch passed |
   | +1 | javadoc | 129 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 86 | common in the patch failed. |
   | +1 | unit | 49 | common in the patch passed. |
   | +1 | unit | 95 | ozonefs in the patch passed. |
   | +1 | asflicense | 48 | The patch does not generate ASF License warnings. |
   | | | 6256 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdds.security.x509.certificate.client.TestDefaultCertificateClient |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-526/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/526 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  findbugs  checkstyle  |
   | uname | Linux 980988c88033 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 538bb48 |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | mvnsite | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-526/2/artifact/out/branch-mvnsite-hadoop-ozone_common.txt
 |
   | findbugs | v3.1.0-RC1 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-526/2/artifact/out/branch-findbugs-hadoop-ozone_common.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-526/2/artifact/out/patch-unit-hadoop-hdds_common.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-526/2/testReport/ |
   | Max. process+thread count | 2787 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-ozone/common hadoop-ozone/ozonefs 
U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-526/2/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 



Issue Time Tracking
---

Worklog Id: (was: 205630)
Time Spent: 0.5h  (was: 20m)

> Override getDelegationToken API for OzoneFileSystem
> ---
>
> Key: HDDS-1183
> URL: https://issues.apache.org/jira/browse/HDDS-1183
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: 

[jira] [Commented] (HDFS-14314) fullBlockReportLeaseId should be reset after registering to NN

2019-02-27 Thread star (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16780153#comment-16780153
 ] 

star commented on HDFS-14314:
-

Checkstyle seems inconsistent: the same ".thenAnswer" code at lines 1000, 
1002, and 1003 has indentation 6 and passes the check, while the code at 
lines 1010 and 1024 triggers an error.

How should this error be fixed?

 

> fullBlockReportLeaseId should be reset after registering to NN
> --
>
> Key: HDFS-14314
> URL: https://issues.apache.org/jira/browse/HDFS-14314
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.8.4
> Environment:  
>  
>  
>Reporter: star
>Priority: Critical
> Fix For: 2.8.4
>
> Attachments: HDFS-14314-trunk.001.patch, HDFS-14314-trunk.001.patch, 
> HDFS-14314-trunk.002.patch, HDFS-14314-trunk.003.patch, 
> HDFS-14314-trunk.004.patch, HDFS-14314.0.patch, HDFS-14314.2.patch, 
> HDFS-14314.patch
>
>
> Since HDFS-7923, to rate-limit DN block reports, the DN asks the active NN 
> for a full block report lease id before sending a full block report, and 
> then sends the report together with that lease id. If the lease id is 
> invalid, the NN rejects the full block report and logs "not in the pending 
> set".
> Consider the case where the DN is doing a full block report while the NN is 
> restarted: the DN may later send a full block report with a lease id 
> acquired from the previous NN instance, which is invalid to the new NN 
> instance. Though the DN recognizes the new NN instance by heartbeat and 
> re-registers itself, it does not reset the lease id from the previous 
> instance.
> The issue may cause DNs to temporarily go dead, making it unsafe to restart 
> the NN, especially in a Hadoop cluster with a large number of DNs. 
> HDFS-12914 reported the issue without any clue as to why it occurred, and 
> it remained unsolved.
> To make it clear, look at the code below, taken from the offerService method 
> of class BPServiceActor (some code is elided to focus on the issue). 
> fullBlockReportLeaseId is a local variable holding the lease id from the NN. 
> When the NN is restarting, the blockReport call throws an exception, which 
> is caught by the catch block in the while loop, so fullBlockReportLeaseId is 
> never reset to 0. After the NN has restarted, the DN sends full block 
> reports that the new NN instance rejects, and it will not send a successful 
> one until the next full block report schedule, about an hour later.
> The solution is simple: reset fullBlockReportLeaseId to 0 after any 
> exception or after registering to the NN, so that a valid lease id is 
> requested from the new NN instance (see the sketch after the code).
> {code:java}
> private void offerService() throws Exception {
>   long fullBlockReportLeaseId = 0;
>   //
>   // Now loop for a long time
>   //
>   while (shouldRun()) {
>     try {
>       final long startTime = scheduler.monotonicNow();
>       //
>       // Every so often, send heartbeat or block-report
>       //
>       final boolean sendHeartbeat = scheduler.isHeartbeatDue(startTime);
>       HeartbeatResponse resp = null;
>       if (sendHeartbeat) {
>         // Only ask for a new lease id if we do not hold one already.
>         boolean requestBlockReportLease = (fullBlockReportLeaseId == 0) &&
>             scheduler.isBlockReportDue(startTime);
>         scheduler.scheduleNextHeartbeat();
>         if (!dn.areHeartbeatsDisabledForTests()) {
>           resp = sendHeartBeat(requestBlockReportLease);
>           assert resp != null;
>           if (resp.getFullBlockReportLeaseId() != 0) {
>             if (fullBlockReportLeaseId != 0) {
>               LOG.warn(nnAddr + " sent back a full block report lease " +
>                   "ID of 0x" +
>                   Long.toHexString(resp.getFullBlockReportLeaseId()) +
>                   ", but we already have a lease ID of 0x" +
>                   Long.toHexString(fullBlockReportLeaseId) + ". " +
>                   "Overwriting old lease ID.");
>             }
>             fullBlockReportLeaseId = resp.getFullBlockReportLeaseId();
>           }
>           // ... (rest of heartbeat response handling elided) ...
>         }
>       }
>       // ... (other periodic work elided) ...
>       if ((fullBlockReportLeaseId != 0) || forceFullBr) {
>         // Exception occurs here when the NN is restarting
>         cmds = blockReport(fullBlockReportLeaseId);
>         fullBlockReportLeaseId = 0;
>       }
>       // ...
>     } catch (RemoteException re) {
>       // ... (exception handling elided; note fullBlockReportLeaseId is
>       // NOT reset here, which is the bug) ...
>     }
>   } // while (shouldRun())
> } // offerService
> {code}
>  
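
A minimal sketch of the proposed reset in the catch block (hedged; the patch
may also reset the id on re-registration):

{code:java}
} catch (RemoteException re) {
  // Proposed fix: the held lease id may have been issued by a previous NN
  // instance and is useless now, so drop it. The next heartbeat then sets
  // requestBlockReportLease = true and fetches a fresh lease id from
  // whichever NN instance is currently active.
  fullBlockReportLeaseId = 0;
  // ... existing exception handling ...
}
{code}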






[jira] [Updated] (HDFS-14316) RBF: Support unavailable subclusters for mount points with multiple destinations

2019-02-27 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/HDFS-14316?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-14316:
---
Attachment: HDFS-14316-HDFS-13891.004.patch

> RBF: Support unavailable subclusters for mount points with multiple 
> destinations
> 
>
> Key: HDFS-14316
> URL: https://issues.apache.org/jira/browse/HDFS-14316
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
> Attachments: HDFS-14316-HDFS-13891.000.patch, 
> HDFS-14316-HDFS-13891.001.patch, HDFS-14316-HDFS-13891.002.patch, 
> HDFS-14316-HDFS-13891.003.patch, HDFS-14316-HDFS-13891.004.patch
>
>
> Currently, mount points with multiple destinations (e.g., HASH_ALL) fail 
> writes when the destination subcluster is down. We need an option to allow 
> writing to the other subclusters when one is down.
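
As a hedged illustration of the intended usage (the -faulttolerant flag name
follows the attached patches; the final syntax may differ), a
multi-destination mount point could be marked fault-tolerant so a write
succeeds as long as at least one destination subcluster is up:

{code}
hdfs dfsrouteradmin -add /data ns1,ns2 /data -order HASH_ALL -faulttolerant
{code}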






[jira] [Commented] (HDDS-1185) Optimize GetFileStatus in OzoneFileSystem by reducing the number of rpc call to OM.

2019-02-27 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1185?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16780129#comment-16780129
 ] 

Hadoop QA commented on HDDS-1185:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
13s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 27m 
36s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  2m  
1s{color} | {color:red} hadoop-ozone in trunk failed. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 28s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
47s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
12s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
14s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  1m 
51s{color} | {color:red} hadoop-ozone in the patch failed. {color} |
| {color:red}-1{color} | {color:red} cc {color} | {color:red}  1m 51s{color} | 
{color:red} hadoop-ozone in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  1m 51s{color} 
| {color:red} hadoop-ozone in the patch failed. {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 26s{color} | {color:orange} hadoop-ozone: The patch generated 6 new + 0 
unchanged - 0 fixed = 6 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 10s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
41s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
36s{color} | {color:red} hadoop-ozone_common generated 2 new + 1 unchanged - 0 
fixed = 3 total (was 1) {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
19s{color} | {color:red} hadoop-ozone_client generated 1 new + 0 unchanged - 0 
fixed = 1 total (was 0) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
41s{color} | {color:green} common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
30s{color} | {color:green} client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
40s{color} | {color:green} ozone-manager in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  1m 40s{color} 
| {color:red} ozonefs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
28s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 81m 58s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||

[jira] [Commented] (HDDS-1072) Implement RetryProxy and FailoverProxy for OM client

2019-02-27 Thread Hanisha Koneru (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16780114#comment-16780114
 ] 

Hanisha Koneru commented on HDDS-1072:
--

Thank you [~arpitagarwal] for the offline discussion. 
Updated patch v03 with the following changes:
 * Fixed checkstyle and unit test failures
 * Added a unit test for the retry proxy
 * Instantiated RetryPolicy only once, instead of in every shouldRetry call, 
in OzoneManagerProtocolClientSideTranslatorPB
 * OMFailoverProxyProvider#performFailover now updates the current proxy only 
if the new leaderId differs from the current leaderId
 * Added javadocs

> Implement RetryProxy and FailoverProxy for OM client
> 
>
> Key: HDDS-1072
> URL: https://issues.apache.org/jira/browse/HDDS-1072
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: OM
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Attachments: HDDS-1072.001.patch, HDDS-1072.002.patch, 
> HDDS-1072.003.patch
>
>
> The RPC client should implement a retry and failover proxy provider to fail 
> over between OM Ratis clients. The failover should occur in two scenarios:
> # When the client is unable to connect to the OM (either because of network 
> issues or because the OM is down), the client retry proxy provider should 
> fail over to the next OM in the cluster.
> # When the OM Ratis client receives a response from the Ratis server for its 
> request, it also gets the LeaderId of the server which processed the request 
> (the current leader OM nodeId). This information should be propagated back 
> to the client, and the client failover proxy provider should fail over to 
> the leader OM node. This helps avoid an extra hop from a follower OM Ratis 
> client to the leader OM Ratis server for every request.
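
A sketch of wiring the OM client through Hadoop's generic retry/failover
machinery; the OMFailoverProxyProvider constructor arguments and the cap of
10 failovers are assumptions, not the exact values in the patch:

{code:java}
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.retry.RetryPolicies;
import org.apache.hadoop.io.retry.RetryPolicy;
import org.apache.hadoop.io.retry.RetryProxy;
import org.apache.hadoop.security.UserGroupInformation;

OzoneManagerProtocolPB createRetryProxy(Configuration conf,
    UserGroupInformation ugi) throws IOException {
  // The provider tracks the list of OM nodes and the current proxy;
  // performFailover() advances to the next OM, or jumps straight to the
  // leader when a response carried that leader's nodeId.
  OMFailoverProxyProvider provider = new OMFailoverProxyProvider(conf, ugi);

  // Retry network failures by failing over to another OM, up to the cap.
  RetryPolicy retryPolicy = RetryPolicies.failoverOnNetworkException(
      RetryPolicies.TRY_ONCE_THEN_FAIL, 10);

  return (OzoneManagerProtocolPB) RetryProxy.create(
      OzoneManagerProtocolPB.class, provider, retryPolicy);
}
{code}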






[jira] [Updated] (HDDS-1072) Implement RetryProxy and FailoverProxy for OM client

2019-02-27 Thread Hanisha Koneru (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1072?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDDS-1072:
-
Attachment: HDDS-1072.003.patch

> Implement RetryProxy and FailoverProxy for OM client
> 
>
> Key: HDDS-1072
> URL: https://issues.apache.org/jira/browse/HDDS-1072
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: OM
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Attachments: HDDS-1072.001.patch, HDDS-1072.002.patch, 
> HDDS-1072.003.patch
>
>
> The RPC client should implement a retry and failover proxy provider to fail 
> over between OM Ratis clients. The failover should occur in two scenarios:
> # When the client is unable to connect to the OM (either because of network 
> issues or because the OM is down), the client retry proxy provider should 
> fail over to the next OM in the cluster.
> # When the OM Ratis client receives a response from the Ratis server for its 
> request, it also gets the LeaderId of the server which processed the request 
> (the current leader OM nodeId). This information should be propagated back 
> to the client, and the client failover proxy provider should fail over to 
> the leader OM node. This helps avoid an extra hop from a follower OM Ratis 
> client to the leader OM Ratis server for every request.






[jira] [Commented] (HDDS-1184) Parallelization of write chunks in datanodes is broken

2019-02-27 Thread Mukul Kumar Singh (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1184?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16780109#comment-16780109
 ] 

Mukul Kumar Singh commented on HDDS-1184:
-

Thanks for working on this [~shashikant]. The patch looks good to me. 

I feel that if the tokenVerifier throws an exception, we should wrap it inside 
the ContainerCommandResponseProto and return it with an error code.
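
A hedged sketch of that suggestion; the builder and result-code names follow
the hdds protobuf conventions but should be treated as assumptions:

{code:java}
// Instead of letting the verifier's exception propagate out of the state
// machine, turn it into a normal response carrying an error result code.
try {
  tokenVerifier.verify(user, encodedToken);
} catch (SCMSecurityException e) {
  return ContainerCommandResponseProto.newBuilder()
      .setCmdType(request.getCmdType())
      .setTraceID(request.getTraceID())
      .setResult(Result.BLOCK_TOKEN_VERIFICATION_FAILED)
      .setMessage(e.getMessage())
      .build();
}
{code}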

> Parallelization of write chunks in datanodes is broken 
> ---
>
> Key: HDDS-1184
> URL: https://issues.apache.org/jira/browse/HDDS-1184
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Datanode
>Affects Versions: 0.4.0
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Blocker
> Fix For: 0.4.0
>
> Attachments: HDDS-1184.000.patch
>
>
> After the HDDS-4 branch merge, parallelization in write chunks and 
> applyTransaction is broken in ContainerStateMachine.






[jira] [Commented] (HDFS-14314) fullBlockReportLeaseId should be reset after registering to NN

2019-02-27 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16780090#comment-16780090
 ] 

Hadoop QA commented on HDFS-14314:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
36s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 55s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
58s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 53s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 2 new + 50 unchanged - 2 fixed = 52 total (was 52) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 28s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}104m 47s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
30s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}168m 29s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.ha.TestBootstrapAliasmap |
|   | hadoop.hdfs.TestReconstructStripedFileWithRandomECPolicy |
|   | hadoop.hdfs.server.datanode.TestDataNodeErasureCodingMetrics |
|   | hadoop.hdfs.server.namenode.TestNamenodeCapacityReport |
|   | hadoop.hdfs.qjournal.server.TestJournalNodeSync |
|   | hadoop.hdfs.TestDFSClientRetries |
|   | hadoop.hdfs.server.datanode.TestBPOfferService |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-14314 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12960509/HDFS-14314-trunk.004.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 773fffdf572f 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / cbf82fa |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 

[jira] [Commented] (HDFS-7663) Erasure Coding: Append on striped file

2019-02-27 Thread Ayush Saxena (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-7663?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16780088#comment-16780088
 ] 

Ayush Saxena commented on HDFS-7663:


Thanks [~vinayrpet] for the review!

I have made the suggested changes.

Please review :)

> Erasure Coding: Append on striped file
> --
>
> Key: HDFS-7663
> URL: https://issues.apache.org/jira/browse/HDFS-7663
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 3.0.0-alpha1
>Reporter: Jing Zhao
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-7663-02.patch, HDFS-7663-03.patch, 
> HDFS-7663-04.patch, HDFS-7663-05.patch, HDFS-7663-06.patch, HDFS-7663.00.txt, 
> HDFS-7663.01.patch
>
>
> Append should be easy if we have variable-length block support from 
> HDFS-3689, i.e., the new data will be appended to a new block. We need to 
> revisit whether and how to support appending data to the original last block.
> 1. Append to a closed striped file, with the NEW_BLOCK flag enabled (this 
> JIRA)
> 2. Append to an under-construction striped file, with the NEW_BLOCK flag 
> enabled (HDFS-9173)
> 3. Append to a striped file by appending to the last block group (follow-on)
> This JIRA attempts to implement #1, and also tracks #2 and #3.
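
A minimal usage sketch for case #1, assuming the DistributedFileSystem append
overload that accepts CreateFlag values:

{code:java}
import java.io.IOException;
import java.util.EnumSet;
import org.apache.hadoop.fs.CreateFlag;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;

// NEW_BLOCK forces the appended data into a fresh block group, leaving the
// closed striped file's original last block group untouched (relying on
// the variable-length block support from HDFS-3689).
void appendToStripedFile(DistributedFileSystem dfs, byte[] data)
    throws IOException {
  try (FSDataOutputStream out = dfs.append(new Path("/ec/file"),
      EnumSet.of(CreateFlag.APPEND, CreateFlag.NEW_BLOCK), 4096, null)) {
    out.write(data);
  }
}
{code}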






[jira] [Updated] (HDDS-1185) Optimize GetFileStatus in OzoneFileSystem by reducing the number of rpc call to OM.

2019-02-27 Thread Mukul Kumar Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1185?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mukul Kumar Singh updated HDDS-1185:

Attachment: HDDS-1185.002.patch

> Optimize GetFileStatus in OzoneFileSystem by reducing the number of rpc call 
> to OM.
> ---
>
> Key: HDDS-1185
> URL: https://issues.apache.org/jira/browse/HDDS-1185
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Filesystem
>Affects Versions: 0.4.0
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-1185.001.patch, HDDS-1185.002.patch
>
>
> GetFileStatus sends multiple RPC calls to the Ozone Manager to fetch the 
> file status of a given file. This can be optimized by performing all of the 
> processing on the OzoneManager, in a single call (sketched below).
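
A hedged before/after sketch of the idea (the client and method names are
illustrative assumptions):

{code:java}
// Before: the client resolved the path itself, costing up to three RPCs:
// probe the key, probe the trailing-slash directory marker, then list for
// children. After: a single RPC, with the OM doing the same resolution
// locally against its metadata store.
OzoneFileStatus status =
    ozoneManagerClient.getFileStatus(volume, bucket, "dir/file");
{code}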






[jira] [Commented] (HDFS-13699) Add DFSClient sending handshake token to DataNode, and allow DataNode overwrite downstream QOP

2019-02-27 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16780082#comment-16780082
 ] 

Hadoop QA commented on HDFS-13699:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
12s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
18m 46s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
26s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
19s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 14m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 14m 
17s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
3m 36s{color} | {color:orange} root: The patch generated 5 new + 743 unchanged 
- 3 fixed = 748 total (was 746) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 24s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  6m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
28s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
20s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
54s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 83m 43s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
58s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}204m 45s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.qjournal.server.TestJournalNodeSync |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.web.TestWebHDFS |
|   | hadoop.hdfs.server.namenode.TestDecommissioningStatus |
|   | hadoop.hdfs.TestRollingUpgrade |
|   | hadoop.hdfs.TestReconstructStripedFileWithRandomECPolicy |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
\\
\\

[jira] [Commented] (HDDS-1190) Fix jdk 11 issue for ozonesecure base image and docker-compose

2019-02-27 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1190?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16780080#comment-16780080
 ] 

Hadoop QA commented on HDDS-1190:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:blue}0{color} | {color:blue} docker {color} | {color:blue}  0m  
6s{color} | {color:blue} Dockerfile 
'/home/jenkins/jenkins-slave/workspace/PreCommit-HDDS-Build/sourcedir/dev-support/docker/Dockerfile'
 not found, falling back to built-in. {color} |
| {color:red}-1{color} | {color:red} docker {color} | {color:red}  3m 
24s{color} | {color:red} Docker failed to build yetus/hadoop:date2019-02-28. 
{color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDDS-1190 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12960519/HDDS-1190-docker-hadoop-runner.001.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2401/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Fix jdk 11 issue for ozonesecure base image and docker-compose 
> ---
>
> Key: HDDS-1190
> URL: https://issues.apache.org/jira/browse/HDDS-1190
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
> Attachments: HDDS-1190-docker-hadoop-runner.001.patch, 
> HDDS-1190-trunk.001.patch
>
>
> HDDS-1019 changed the ozonesecure docker-compose to use hadoop-runner as the 
> base image. There are a few issues that need to be fixed:
>  
> 1. hadoop-runner uses jdk11, but ozonesecure/docker-config assumes openjdk8 
> for JAVA_HOME.
>  
> 2. The KEYTAB_DIR value needs to be quoted with '.
>  
> 3. Keytab-based login fails with "Message stream modified (41)"; [~elek] 
> mentioned in HDDS-1019 that we need to add max_renewable_life to 
> "docker-image/docker-krb5/krb5.conf" as follows:
> [realms]
>  EXAMPLE.COM = \{
>   kdc = localhost
>   admin_server = localhost
>   max_renewable_life = 7d
>  }
> Failures:
> {code}
>  org.apache.hadoop.security.KerberosAuthException: failure to login: for 
> principal: scm/s...@example.com from keytab /etc/security/keytabs/scm.keytab 
> javax.security.auth.login.LoginException: Message stream modified (41)
> scm_1           | at 
> org.apache.hadoop.security.UserGroupInformation.doSubjectLogin(UserGroupInformation.java:1847)
> scm_1           |
> {code}






[jira] [Commented] (HDFS-7663) Erasure Coding: Append on striped file

2019-02-27 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-7663?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16780066#comment-16780066
 ] 

Hadoop QA commented on HDFS-7663:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
26s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m  
5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m  8s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
16s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  3m  
0s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m  0s{color} | {color:orange} hadoop-hdfs-project: The patch generated 2 new + 
31 unchanged - 0 fixed = 33 total (was 31) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 14s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
15s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
50s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 81m 58s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
32s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}151m 46s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.datanode.TestDirectoryScanner |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.server.datanode.TestBPOfferService |
|   | hadoop.hdfs.qjournal.server.TestJournalNodeSync |
|   | hadoop.hdfs.TestSafeModeWithStripedFile |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-7663 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12960505/HDFS-7663-06.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 7386635a5084 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri Oct 
5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | 

[jira] [Commented] (HDFS-14322) RBF: Security manager should not load if security is disabled

2019-02-27 Thread CR Hota (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14322?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16780064#comment-16780064
 ] 

CR Hota commented on HDFS-14322:


[~elgoiri]

Many thanks for committing this. You are fast!

> RBF: Security manager should not load if security is disabled
> -
>
> Key: HDFS-14322
> URL: https://issues.apache.org/jira/browse/HDFS-14322
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: CR Hota
>Assignee: CR Hota
>Priority: Major
> Fix For: HDFS-13891
>
> Attachments: HDFS-14322-HDFS-13891.001.patch
>
>
> The security manager tries to instantiate the secret manager irrespective of 
> whether security is on or off. A simple optimization is to avoid loading the 
> secret manager when security is turned off (see the sketch below).
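
A minimal sketch of the guard, assuming a hypothetical createSecretManager()
helper:

{code:java}
import org.apache.hadoop.security.UserGroupInformation;

// Only pay the cost of building the delegation-token secret manager when
// security is actually enabled; with simple auth it is never used.
if (UserGroupInformation.isSecurityEnabled()) {
  this.dtSecretManager = createSecretManager(conf);
}
{code}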






[jira] [Work logged] (HDDS-1182) Pipeline Rule where atleast one datanode is reported in the pipeline

2019-02-27 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1182?focusedWorklogId=205615&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-205615
 ]

ASF GitHub Bot logged work on HDDS-1182:


Author: ASF GitHub Bot
Created on: 28/Feb/19 03:51
Start Date: 28/Feb/19 03:51
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #528: HDDS-1182. 
Pipeline Rule where atleast one datanode is reported in the pipeline.
URL: https://github.com/apache/hadoop/pull/528#issuecomment-468127128
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 24 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 2 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 30 | Maven dependency ordering for branch |
   | +1 | mvninstall | 983 | trunk passed |
   | +1 | compile | 76 | trunk passed |
   | +1 | checkstyle | 32 | trunk passed |
   | +1 | mvnsite | 77 | trunk passed |
   | +1 | shadedclient | 760 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 110 | trunk passed |
   | +1 | javadoc | 61 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 12 | Maven dependency ordering for patch |
   | +1 | mvninstall | 76 | the patch passed |
   | +1 | compile | 67 | the patch passed |
   | +1 | javac | 67 | the patch passed |
   | +1 | checkstyle | 24 | the patch passed |
   | +1 | mvnsite | 62 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 1 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 739 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 121 | the patch passed |
   | +1 | javadoc | 57 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 70 | common in the patch failed. |
   | +1 | unit | 137 | server-scm in the patch passed. |
   | +1 | asflicense | 29 | The patch does not generate ASF License warnings. |
   | | | 3593 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdds.security.x509.certificate.client.TestDefaultCertificateClient |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-528/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/528 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  findbugs  checkstyle  xml  |
   | uname | Linux 561cb5a46c9b 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 1779fc5 |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | findbugs | v3.1.0-RC1 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-528/1/artifact/out/patch-unit-hadoop-hdds_common.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-528/1/testReport/ |
   | Max. process+thread count | 543 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-hdds/server-scm U: hadoop-hdds |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-528/1/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 205615)
Time Spent: 20m  (was: 10m)

> Pipeline Rule where atleast one datanode is reported in the pipeline
> 
>
> Key: HDDS-1182
> URL: https://issues.apache.org/jira/browse/HDDS-1182
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> h2. Pipeline Rule with configurable percentage of pipelines with at least one 
> datanode reported:
> In this rule we check whether at least 90% of pipelines have at least one 
> datanode reported. 
>  

[jira] [Updated] (HDDS-1190) Fix jdk 11 issue for ozonesecure base image and docker-compose

2019-02-27 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1190?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-1190:
-
Status: Patch Available  (was: Open)

> Fix jdk 11 issue for ozonesecure base image and docker-compose 
> ---
>
> Key: HDDS-1190
> URL: https://issues.apache.org/jira/browse/HDDS-1190
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
> Attachments: HDDS-1190-docker-hadoop-runner.001.patch, 
> HDDS-1190-trunk.001.patch
>
>
> HDDS-1019 changed to use hadoop-runner as the base image for the ozonesecure 
> docker-compose. There are a few issues that need to be fixed.
>  
> 1. The hadoop-runner image uses JDK 11, but ozonesecure/docker-config assumes 
> OpenJDK 8 for JAVA_HOME.
>  
> 2. The KEYTAB_DIR value needs to be quoted with single quotes (').
>  
> 3. Keytab-based login fails with "Message stream modified (41)"; [~elek] 
> mentioned in HDDS-1019 that we need to add max_renewable_life to 
> "docker-image/docker-krb5/krb5.conf" as follows.
> [realms]
>  EXAMPLE.COM = \{
>   kdc = localhost
>   admin_server = localhost
>   max_renewable_life = 7d
>  }
> Failures:
> {code}
>  org.apache.hadoop.security.KerberosAuthException: failure to login: for 
> principal: scm/s...@example.com from keytab /etc/security/keytabs/scm.keytab 
> javax.security.auth.login.LoginException: Message stream modified (41)
> scm_1           | at 
> org.apache.hadoop.security.UserGroupInformation.doSubjectLogin(UserGroupInformation.java:1847)
> scm_1           |
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14316) RBF: Support unavailable subclusters for mount points with multiple destinations

2019-02-27 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14316?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16780057#comment-16780057
 ] 

Hadoop QA commented on HDFS-14316:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 6 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-13891 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
24s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
57s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
49s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 7s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
41s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m  4s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
51s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
25s{color} | {color:green} HDFS-13891 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  3m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  3m 
21s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m  3s{color} | {color:orange} hadoop-hdfs-project: The patch generated 2 new + 
6 unchanged - 0 fixed = 8 total (was 6) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 12s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
21s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}105m 46s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 26m 10s{color} 
| {color:red} hadoop-hdfs-rbf in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
46s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}200m 11s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestDFSInotifyEventInputStreamKerberized |
|   | hadoop.hdfs.server.balancer.TestBalancer |
|   | hadoop.hdfs.server.datanode.TestDataNodeErasureCodingMetrics |
|   | hadoop.hdfs.qjournal.server.TestJournalNodeSync |
|   | hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaRecovery 
|
|   | hadoop.hdfs.server.balancer.TestBalancerRPCDelay |
|   | hadoop.hdfs.server.federation.router.TestRouterNamenodeMonitoring |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-14316 |
| JIRA Patch URL | 

[jira] [Updated] (HDDS-1190) Fix jdk 11 issue for ozonesecure base image and docker-compose

2019-02-27 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1190?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-1190:
-
Attachment: HDDS-1190-docker-hadoop-runner.001.patch

> Fix jdk 11 issue for ozonesecure base image and docker-compose 
> ---
>
> Key: HDDS-1190
> URL: https://issues.apache.org/jira/browse/HDDS-1190
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
> Attachments: HDDS-1190-docker-hadoop-runner.001.patch, 
> HDDS-1190-trunk.001.patch
>
>
> HDDS-1019 changed to use hadoop-runner as the base image for the ozonesecure 
> docker-compose. There are a few issues that need to be fixed.
>  
> 1. The hadoop-runner image uses JDK 11, but ozonesecure/docker-config assumes 
> OpenJDK 8 for JAVA_HOME.
>  
> 2. The KEYTAB_DIR value needs to be quoted with single quotes (').
>  
> 3. Keytab-based login fails with "Message stream modified (41)"; [~elek] 
> mentioned in HDDS-1019 that we need to add max_renewable_life to 
> "docker-image/docker-krb5/krb5.conf" as follows.
> [realms]
>  EXAMPLE.COM = \{
>   kdc = localhost
>   admin_server = localhost
>   max_renewable_life = 7d
>  }
> Failures:
> {code}
>  org.apache.hadoop.security.KerberosAuthException: failure to login: for 
> principal: scm/s...@example.com from keytab /etc/security/keytabs/scm.keytab 
> javax.security.auth.login.LoginException: Message stream modified (41)
> scm_1           | at 
> org.apache.hadoop.security.UserGroupInformation.doSubjectLogin(UserGroupInformation.java:1847)
> scm_1           |
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1190) Fix jdk 11 issue for ozonesecure base image and docker-compose

2019-02-27 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1190?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-1190:
-
Issue Type: Sub-task  (was: Bug)
Parent: HDDS-4

> Fix jdk 11 issue for ozonesecure base image and docker-compose 
> ---
>
> Key: HDDS-1190
> URL: https://issues.apache.org/jira/browse/HDDS-1190
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
> Attachments: HDDS-1190-docker-hadoop-runner.001.patch, 
> HDDS-1190-trunk.001.patch
>
>
> HDDS-1019 changed to use hadoop-runner as the base image for the ozonesecure 
> docker-compose. There are a few issues that need to be fixed.
>  
> 1. The hadoop-runner image uses JDK 11, but ozonesecure/docker-config assumes 
> OpenJDK 8 for JAVA_HOME.
>  
> 2. The KEYTAB_DIR value needs to be quoted with single quotes (').
>  
> 3. Keytab-based login fails with "Message stream modified (41)"; [~elek] 
> mentioned in HDDS-1019 that we need to add max_renewable_life to 
> "docker-image/docker-krb5/krb5.conf" as follows.
> [realms]
>  EXAMPLE.COM = \{
>   kdc = localhost
>   admin_server = localhost
>   max_renewable_life = 7d
>  }
> Failures:
> {code}
>  org.apache.hadoop.security.KerberosAuthException: failure to login: for 
> principal: scm/s...@example.com from keytab /etc/security/keytabs/scm.keytab 
> javax.security.auth.login.LoginException: Message stream modified (41)
> scm_1           | at 
> org.apache.hadoop.security.UserGroupInformation.doSubjectLogin(UserGroupInformation.java:1847)
> scm_1           |
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1190) Fix jdk 11 issue for ozonesecure base image and docker-compose

2019-02-27 Thread Xiaoyu Yao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1190?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16780054#comment-16780054
 ] 

Xiaoyu Yao commented on HDDS-1190:
--

cc [~elek]

> Fix jdk 11 issue for ozonesecure base image and docker-compose 
> ---
>
> Key: HDDS-1190
> URL: https://issues.apache.org/jira/browse/HDDS-1190
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
> Attachments: HDDS-1190-docker-hadoop-runner.001.patch, 
> HDDS-1190-trunk.001.patch
>
>
> HDDS-1019 changed to use hadoop-runner as the base image for the ozonesecure 
> docker-compose. There are a few issues that need to be fixed.
>  
> 1. The hadoop-runner image uses JDK 11, but ozonesecure/docker-config assumes 
> OpenJDK 8 for JAVA_HOME.
>  
> 2. The KEYTAB_DIR value needs to be quoted with single quotes (').
>  
> 3. Keytab-based login fails with "Message stream modified (41)"; [~elek] 
> mentioned in HDDS-1019 that we need to add max_renewable_life to 
> "docker-image/docker-krb5/krb5.conf" as follows.
> [realms]
>  EXAMPLE.COM = \{
>   kdc = localhost
>   admin_server = localhost
>   max_renewable_life = 7d
>  }
> Failures:
> {code}
>  org.apache.hadoop.security.KerberosAuthException: failure to login: for 
> principal: scm/s...@example.com from keytab /etc/security/keytabs/scm.keytab 
> javax.security.auth.login.LoginException: Message stream modified (41)
> scm_1           | at 
> org.apache.hadoop.security.UserGroupInformation.doSubjectLogin(UserGroupInformation.java:1847)
> scm_1           |
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1190) Fix jdk 11 issue for ozonesecure base image and docker-compose

2019-02-27 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1190?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-1190:
-
Attachment: HDDS-1190-trunk.001.patch

> Fix jdk 11 issue for ozonesecure base image and docker-compose 
> ---
>
> Key: HDDS-1190
> URL: https://issues.apache.org/jira/browse/HDDS-1190
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
> Attachments: HDDS-1190-trunk.001.patch
>
>
> HDDS-1019 changed to use hadoop-runner as the base image for the ozonesecure 
> docker-compose. There are a few issues that need to be fixed.
>  
> 1. The hadoop-runner image uses JDK 11, but ozonesecure/docker-config assumes 
> OpenJDK 8 for JAVA_HOME.
>  
> 2. The KEYTAB_DIR value needs to be quoted with single quotes (').
>  
> 3. Keytab-based login fails with "Message stream modified (41)"; [~elek] 
> mentioned in HDDS-1019 that we need to add max_renewable_life to 
> "docker-image/docker-krb5/krb5.conf" as follows.
> [realms]
>  EXAMPLE.COM = \{
>   kdc = localhost
>   admin_server = localhost
>   max_renewable_life = 7d
>  }
> Failures:
> {code}
>  org.apache.hadoop.security.KerberosAuthException: failure to login: for 
> principal: scm/s...@example.com from keytab /etc/security/keytabs/scm.keytab 
> javax.security.auth.login.LoginException: Message stream modified (41)
> scm_1           | at 
> org.apache.hadoop.security.UserGroupInformation.doSubjectLogin(UserGroupInformation.java:1847)
> scm_1           |
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14305) Serial number in BlockTokenSecretManager could overlap between different namenodes

2019-02-27 Thread He Xiaoqiao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14305?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16780050#comment-16780050
 ] 

He Xiaoqiao commented on HDFS-14305:


Thanks [~xkrogen].
+1, LGTM. Is it necessary to note in the release notes that the 64-NameNode 
limit applies only to a single namespace? I think this information would be 
useful for Federation. FYI.

> Serial number in BlockTokenSecretManager could overlap between different 
> namenodes
> --
>
> Key: HDFS-14305
> URL: https://issues.apache.org/jira/browse/HDFS-14305
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode, security
>Reporter: Chao Sun
>Assignee: He Xiaoqiao
>Priority: Major
> Attachments: HDFS-14305.001.patch, HDFS-14305.002.patch, 
> HDFS-14305.003.patch, HDFS-14305.004.patch, HDFS-14305.005.patch, 
> HDFS-14305.006.patch
>
>
> Currently, a {{BlockTokenSecretManager}} starts with a random integer as the 
> initial serial number, and then use this formula to rotate it:
> {code:java}
> this.intRange = Integer.MAX_VALUE / numNNs;
> this.nnRangeStart = intRange * nnIndex;
> this.serialNo = (this.serialNo % intRange) + (nnRangeStart);
>  {code}
> while {{numNNs}} is the total number of NameNodes in the cluster, and 
> {{nnIndex}} is the index of the current NameNode specified in the 
> configuration {{dfs.ha.namenodes.}}.
> However, with this approach, different NameNode could have overlapping ranges 
> for serial number. For simplicity, let's assume {{Integer.MAX_VALUE}} is 100, 
> and we have 2 NameNodes {{nn1}} and {{nn2}} in configuration. Then the ranges 
> for these two are:
> {code}
> nn1 -> [-49, 49]
> nn2 -> [1, 99]
> {code}
> This is because the initial serial number could be any negative integer.
> Moreover, when the keys are updated, the serial number will again be updated 
> with the formula:
> {code}
> this.serialNo = (this.serialNo % intRange) + (nnRangeStart);
> {code}
> which means the new serial number could be updated to a range that belongs to 
> a different NameNode, thus increasing the chance of collision again.
> When the collision happens, DataNodes could overwrite an existing key which 
> will cause clients to fail because of {{InvalidToken}} error.
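
To make the overlap concrete, here is a small, self-contained sketch (not code from any patch; it uses the same simplified MAX of 100 as the description above) that reproduces the range arithmetic:

{code:java}
// Toy illustration of the overlapping serial-number ranges described above.
// MAX = 100 is the same simplification used in the description; the real
// code uses Integer.MAX_VALUE.
public class SerialRangeOverlapDemo {
  public static void main(String[] args) {
    final int MAX = 100;               // stand-in for Integer.MAX_VALUE
    final int numNNs = 2;
    final int intRange = MAX / numNNs; // 50

    for (int nnIndex = 0; nnIndex < numNNs; nnIndex++) {
      int nnRangeStart = intRange * nnIndex;
      // serialNo starts as a random (possibly negative) int, so
      // serialNo % intRange lies in (-intRange, intRange); after adding
      // nnRangeStart the reachable window is:
      int low = -(intRange - 1) + nnRangeStart;
      int high = (intRange - 1) + nnRangeStart;
      System.out.println("nn" + (nnIndex + 1) + " -> [" + low + ", " + high + "]");
    }
    // Prints nn1 -> [-49, 49] and nn2 -> [1, 99]; the two ranges overlap
    // in [1, 49], which is exactly the collision described above.
  }
}
{code}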



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-1190) Fix jdk 11 issue for ozonesecure base image and docker-compose

2019-02-27 Thread Xiaoyu Yao (JIRA)
Xiaoyu Yao created HDDS-1190:


 Summary: Fix jdk 11 issue for ozonesecure base image and 
docker-compose 
 Key: HDDS-1190
 URL: https://issues.apache.org/jira/browse/HDDS-1190
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao


HDDS-1019 changed to use hadoop-runner as the base image for the ozonesecure 
docker-compose. There are a few issues that need to be fixed.

1. The hadoop-runner image uses JDK 11, but ozonesecure/docker-config assumes 
OpenJDK 8 for JAVA_HOME.

2. The KEYTAB_DIR value needs to be quoted with single quotes (').

3. Keytab-based login fails with "Message stream modified (41)"; [~elek] 
mentioned in HDDS-1019 that we need to add max_renewable_life to 
"docker-image/docker-krb5/krb5.conf" as follows.
[realms]
 EXAMPLE.COM = \{
  kdc = localhost
  admin_server = localhost
  max_renewable_life = 7d
 }
Failures:

{code}

 org.apache.hadoop.security.KerberosAuthException: failure to login: for 
principal: scm/s...@example.com from keytab /etc/security/keytabs/scm.keytab 
javax.security.auth.login.LoginException: Message stream modified (41)

scm_1           | at 
org.apache.hadoop.security.UserGroupInformation.doSubjectLogin(UserGroupInformation.java:1847)

scm_1           |

{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1182) Pipeline Rule where atleast one datanode is reported in the pipeline

2019-02-27 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1182?focusedWorklogId=205587=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-205587
 ]

ASF GitHub Bot logged work on HDDS-1182:


Author: ASF GitHub Bot
Created on: 28/Feb/19 02:50
Start Date: 28/Feb/19 02:50
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #528: 
HDDS-1182. Pipeline Rule where atleast one datanode is reported in the pipeline.
URL: https://github.com/apache/hadoop/pull/528
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 205587)
Time Spent: 10m
Remaining Estimate: 0h

> Pipeline Rule where atleast one datanode is reported in the pipeline
> 
>
> Key: HDDS-1182
> URL: https://issues.apache.org/jira/browse/HDDS-1182
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> h2. Pipeline Rule with configurable percentage of pipelines with at least one 
> datanode reported:
> In this rule we check whether at least 90% of pipelines have at least one 
> datanode reported. 
>  
> When this rule is satisfied and we exit chill mode, Ozone clients will have 
> at least one open replica available, so reads can succeed. (We can raise the 
> default threshold above 90% if we want to see fewer read failures after 
> exiting chill mode.)
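
As a rough illustration only, the threshold check described above could look like the sketch below; the class and method names are hypothetical, not the actual SCM chill-mode rule API:

{code:java}
// Hypothetical sketch of the rule described above; names are illustrative.
public final class OneReplicaPipelineRuleSketch {
  private final double threshold; // e.g. 0.90 for the 90% default

  public OneReplicaPipelineRuleSketch(double threshold) {
    this.threshold = threshold;
  }

  /** True once enough pipelines have at least one reported datanode. */
  public boolean isSatisfied(int pipelinesWithReportedDatanode,
                             int totalPipelines) {
    if (totalPipelines == 0) {
      return true; // nothing to wait for
    }
    return (double) pipelinesWithReportedDatanode / totalPipelines >= threshold;
  }
}
{code}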



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1182) Pipeline Rule where atleast one datanode is reported in the pipeline

2019-02-27 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1182?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-1182:
-
Status: Patch Available  (was: In Progress)

> Pipeline Rule where atleast one datanode is reported in the pipeline
> 
>
> Key: HDDS-1182
> URL: https://issues.apache.org/jira/browse/HDDS-1182
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> h2. Pipeline Rule with configurable percentage of pipelines with at least one 
> datanode reported:
> In this rule we check whether at least 90% of pipelines have at least one 
> datanode reported. 
>  
> When this rule is satisfied and we exit chill mode, Ozone clients will have 
> at least one open replica available, so reads can succeed. (We can raise the 
> default threshold above 90% if we want to see fewer read failures after 
> exiting chill mode.)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1182) Pipeline Rule where atleast one datanode is reported in the pipeline

2019-02-27 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1182?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-1182:
-
Target Version/s: 0.4.0

> Pipeline Rule where atleast one datanode is reported in the pipeline
> 
>
> Key: HDDS-1182
> URL: https://issues.apache.org/jira/browse/HDDS-1182
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> h2. Pipeline Rule with configurable percentage of pipelines with at least one 
> datanode reported:
> In this rule we check whether at least 90% of pipelines have at least one 
> datanode reported. 
>  
> When this rule is satisfied and we exit chill mode, Ozone clients will have 
> at least one open replica available, so reads can succeed. (We can raise the 
> default threshold above 90% if we want to see fewer read failures after 
> exiting chill mode.)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1182) Pipeline Rule where atleast one datanode is reported in the pipeline

2019-02-27 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1182?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-1182:
-
Labels: pull-request-available  (was: )

> Pipeline Rule where atleast one datanode is reported in the pipeline
> 
>
> Key: HDDS-1182
> URL: https://issues.apache.org/jira/browse/HDDS-1182
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>
> h2. Pipeline Rule with configurable percentage of pipelines with at least one 
> datanode reported:
> In this rule we check whether at least 90% of pipelines have at least one 
> datanode reported. 
>  
> When this rule is satisfied and we exit chill mode, Ozone clients will have 
> at least one open replica available, so reads can succeed. (We can raise the 
> default threshold above 90% if we want to see fewer read failures after 
> exiting chill mode.)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14322) RBF: Security manager should not load if security is disabled

2019-02-27 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-14322?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16780034#comment-16780034
 ] 

Íñigo Goiri commented on HDFS-14322:


Thanks [~crh] for the quick fix.
Committed to HDFS-13891.
I've checked, and the logs are now much more reasonable.

> RBF: Security manager should not load if security is disabled
> -
>
> Key: HDFS-14322
> URL: https://issues.apache.org/jira/browse/HDFS-14322
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: CR Hota
>Assignee: CR Hota
>Priority: Major
> Fix For: HDFS-13891
>
> Attachments: HDFS-14322-HDFS-13891.001.patch
>
>
> Security manager tries to instantiate secret manager irrespective of whether 
> security is on/off. Simple optimization should be done to avoid loading 
> secret manager if security is turned off.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1093) Configuration tab in OM/SCM ui is not displaying the correct values

2019-02-27 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1093?focusedWorklogId=205586=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-205586
 ]

ASF GitHub Bot logged work on HDDS-1093:


Author: ASF GitHub Bot
Created on: 28/Feb/19 02:38
Start Date: 28/Feb/19 02:38
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #527: HDDS-1093. 
Configuration tab in OM/SCM ui is not displaying the correct values
URL: https://github.com/apache/hadoop/pull/527#issuecomment-468113722
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 39 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 2 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 11 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1042 | trunk passed |
   | +1 | compile | 72 | trunk passed |
   | +1 | checkstyle | 30 | trunk passed |
   | +1 | mvnsite | 65 | trunk passed |
   | +1 | shadedclient | 777 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 97 | trunk passed |
   | +1 | javadoc | 57 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 12 | Maven dependency ordering for patch |
   | +1 | mvninstall | 62 | the patch passed |
   | -1 | jshint | 75 | The patch generated 294 new + 1942 unchanged - 1053 
fixed = 2236 total (was 2995) |
   | +1 | compile | 66 | the patch passed |
   | +1 | javac | 66 | the patch passed |
   | -0 | checkstyle | 23 | hadoop-hdds: The patch generated 12 new + 0 
unchanged - 0 fixed = 12 total (was 0) |
   | +1 | mvnsite | 54 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 766 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 109 | the patch passed |
   | +1 | javadoc | 53 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 87 | common in the patch failed. |
   | +1 | unit | 32 | framework in the patch passed. |
   | +1 | asflicense | 26 | The patch does not generate ASF License warnings. |
   | | | 3616 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdds.security.x509.certificate.client.TestDefaultCertificateClient |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-527/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/527 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  findbugs  checkstyle  jshint  |
   | uname | Linux 37fa4eb47056 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / cbf82fa |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | findbugs | v3.1.0-RC1 |
   | jshint | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-527/1/artifact/out/diff-patch-jshint.txt
 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-527/1/artifact/out/diff-checkstyle-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-527/1/artifact/out/patch-unit-hadoop-hdds_common.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-527/1/testReport/ |
   | Max. process+thread count | 304 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-hdds/framework U: hadoop-hdds |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-527/1/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 205586)
Time Spent: 1.5h  (was: 1h 20m)

> Configuration tab in OM/SCM ui is not displaying the correct values
> ---
>
> Key: HDDS-1093
> URL: https://issues.apache.org/jira/browse/HDDS-1093
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: OM, SCM
>Reporter: Sandeep Nemuri
>

[jira] [Updated] (HDFS-14322) RBF: Security manager should not load if security is disabled

2019-02-27 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/HDFS-14322?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-14322:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: HDFS-13891
   Status: Resolved  (was: Patch Available)

> RBF: Security manager should not load if security is disabled
> -
>
> Key: HDFS-14322
> URL: https://issues.apache.org/jira/browse/HDFS-14322
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: CR Hota
>Assignee: CR Hota
>Priority: Major
> Fix For: HDFS-13891
>
> Attachments: HDFS-14322-HDFS-13891.001.patch
>
>
> Security manager tries to instantiate secret manager irrespective of whether 
> security is on/off. Simple optimization should be done to avoid loading 
> secret manager if security is turned off.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14290) Unexpected message type: PooledUnsafeDirectByteBuf when get datanode info by DatanodeWebHdfsMethods

2019-02-27 Thread Lisheng Sun (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14290?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16780024#comment-16780024
 ] 

Lisheng Sun commented on HDFS-14290:


Hi [~jojochuang] [~ayushtkn], I intended to use EmbeddedChannel#writeInbound 
and EmbeddedChannel#readInbound to add a UT that reproduces the issue in 
SimpleHttpProxyHandler, because SimpleHttpProxyHandler extends 
SimpleChannelInboundHandler. But SimpleHttpProxyHandler is implemented in 
the following way:
{code:java}
@Override
public void channelRead0(final ChannelHandlerContext ctx,
    final HttpRequest req) {
  uri = req.getUri();
  final Channel client = ctx.channel();
  Bootstrap proxiedServer = new Bootstrap()
      .group(client.eventLoop())
      .channel(NioSocketChannel.class)
      .handler(new ChannelInitializer<SocketChannel>() {
        @Override
        protected void initChannel(SocketChannel ch) throws Exception {
          ChannelPipeline p = ch.pipeline();
          p.addLast(new HttpRequestEncoder(), new Forwarder(uri, client));
        }
      });
  ChannelFuture f = proxiedServer.connect(host);
  proxiedChannel = f.channel();
}
{code}
proxiedServer#connect uses the EmbeddedEventLoop and checks whether the loop 
is a NioEventLoop, because proxiedServer#channel is a NioSocketChannel. 
However, EmbeddedEventLoop is not a NioEventLoop, so the issue is 
unfortunately difficult to reproduce in a UT.

The ChannelPipeline only includes HttpRequestEncoder and not 
HttpRequestDecoder, and that is the problem.
{code:java}
.handler(new ChannelInitializer<SocketChannel>() {
  @Override
  protected void initChannel(SocketChannel ch) throws Exception {
    ChannelPipeline p = ch.pipeline();
    p.addLast(new HttpRequestEncoder(), new Forwarder(uri, client));
  }
});
{code}
  Would you have any suggestions? Thank you.
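
For readers unfamiliar with the EmbeddedChannel approach mentioned above, a generic round trip looks like the sketch below (illustrative only, using plain io.netty classes; as explained above, it cannot exercise SimpleHttpProxyHandler, because that handler bootstraps a real NioSocketChannel):

{code:java}
import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;
import io.netty.channel.embedded.EmbeddedChannel;
import io.netty.handler.codec.http.HttpRequestDecoder;
import java.nio.charset.StandardCharsets;

// Generic EmbeddedChannel#writeInbound/#readInbound illustration; this does
// NOT reproduce the SimpleHttpProxyHandler issue, for the reasons above.
public class EmbeddedChannelDemo {
  public static void main(String[] args) {
    EmbeddedChannel channel = new EmbeddedChannel(new HttpRequestDecoder());
    ByteBuf raw = Unpooled.copiedBuffer(
        "GET /conf HTTP/1.1\r\nHost: localhost\r\n\r\n",
        StandardCharsets.US_ASCII);
    channel.writeInbound(raw);              // feed raw bytes to the decoder
    Object decoded = channel.readInbound(); // the decoded HttpRequest
    System.out.println(decoded);
    channel.finish();
  }
}
{code}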

> Unexpected message type: PooledUnsafeDirectByteBuf when get datanode info by 
> DatanodeWebHdfsMethods
> ---
>
> Key: HDFS-14290
> URL: https://issues.apache.org/jira/browse/HDFS-14290
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, webhdfs
>Affects Versions: 2.7.0, 2.7.1
>Reporter: Lisheng Sun
>Priority: Major
> Attachments: HDFS-14290.000.patch, webhdfs show.png
>
>
> The issue is that there is no HttpRequestDecoder in the inbound handler of 
> netty, so an unexpected message type appears when a message is read.
>   
> !webhdfs show.png!   
> DEBUG org.apache.hadoop.hdfs.server.datanode.web.DatanodeHttpServer: Proxy 
> failed. Cause: 
>  com.xiaomi.infra.thirdparty.io.netty.handler.codec.EncoderException: 
> java.lang.IllegalStateException: unexpected message type: 
> PooledUnsafeDirectByteBuf
>  at 
> com.xiaomi.infra.thirdparty.io.netty.handler.codec.MessageToMessageEncoder.write(MessageToMessageEncoder.java:106)
>  at 
> com.xiaomi.infra.thirdparty.io.netty.channel.CombinedChannelDuplexHandler.write(CombinedChannelDuplexHandler.java:348)
>  at 
> com.xiaomi.infra.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeWrite0(AbstractChannelHandlerContext.java:738)
>  at 
> com.xiaomi.infra.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeWrite(AbstractChannelHandlerContext.java:730)
>  at 
> com.xiaomi.infra.thirdparty.io.netty.channel.AbstractChannelHandlerContext.write(AbstractChannelHandlerContext.java:816)
>  at 
> com.xiaomi.infra.thirdparty.io.netty.channel.AbstractChannelHandlerContext.write(AbstractChannelHandlerContext.java:723)
>  at 
> com.xiaomi.infra.thirdparty.io.netty.handler.stream.ChunkedWriteHandler.doFlush(ChunkedWriteHandler.java:304)
>  at 
> com.xiaomi.infra.thirdparty.io.netty.handler.stream.ChunkedWriteHandler.flush(ChunkedWriteHandler.java:137)
>  at 
> com.xiaomi.infra.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeFlush0(AbstractChannelHandlerContext.java:776)
>  at 
> com.xiaomi.infra.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeWriteAndFlush(AbstractChannelHandlerContext.java:802)
>  at 
> com.xiaomi.infra.thirdparty.io.netty.channel.AbstractChannelHandlerContext.write(AbstractChannelHandlerContext.java:814)
>  at 
> com.xiaomi.infra.thirdparty.io.netty.channel.AbstractChannelHandlerContext.writeAndFlush(AbstractChannelHandlerContext.java:794)
>  at 
> com.xiaomi.infra.thirdparty.io.netty.channel.AbstractChannelHandlerContext.writeAndFlush(AbstractChannelHandlerContext.java:831)
>  at 
> com.xiaomi.infra.thirdparty.io.netty.channel.DefaultChannelPipeline.writeAndFlush(DefaultChannelPipeline.java:1051)
>  at 
> com.xiaomi.infra.thirdparty.io.netty.channel.AbstractChannel.writeAndFlush(AbstractChannel.java:300)
>  at 
> org.apache.hadoop.hdfs.server.datanode.web.SimpleHttpProxyHandler$Forwarder.channelRead(SimpleHttpProxyHandler.java:80)
>  at 
> 

[jira] [Commented] (HDFS-14322) RBF: Security manager should not load if security is disabled

2019-02-27 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-14322?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16780017#comment-16780017
 ] 

Íñigo Goiri commented on HDFS-14322:


+1 on  [^HDFS-14322-HDFS-13891.001.patch].
I don't think there is a need for unit tests as this is just for improving 
logging.

> RBF: Security manager should not load if security is disabled
> -
>
> Key: HDFS-14322
> URL: https://issues.apache.org/jira/browse/HDFS-14322
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: CR Hota
>Assignee: CR Hota
>Priority: Major
> Attachments: HDFS-14322-HDFS-13891.001.patch
>
>
> Security manager tries to instantiate secret manager irrespective of whether 
> security is on/off. Simple optimization should be done to avoid loading 
> secret manager if security is turned off.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-787) UI page to show all the buckets for a user

2019-02-27 Thread Siddharth Wagle (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-787?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siddharth Wagle reassigned HDDS-787:


Assignee: Vivek Ratnavel Subramanian  (was: Siddharth Wagle)

> UI page to show all the buckets for a user
> --
>
> Key: HDDS-787
> URL: https://issues.apache.org/jira/browse/HDDS-787
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>
> This Jira is to develop a UI page to list all buckets for a user.
> Currently, we have the code to list all buckets for a user (but the user is 
> taken from the authentication header):
>  # We should have a REST endpoint that takes the username as a path param and 
> returns all the buckets for that user (a hedged sketch follows below).
>  # Develop a UI page: if browser=true, just show all the buckets and bucket 
> info on the UI.
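
A purely hypothetical sketch of the endpoint from item 1 above (the path, class name, and BucketService collaborator are illustrative, not the actual Ozone API):

{code:java}
import java.util.Collections;
import java.util.List;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

// Hypothetical JAX-RS endpoint: take the username as a path param and
// return that user's buckets, instead of reading the authentication header.
@Path("/users")
public class UserBucketsEndpoint {

  // Stand-in for whatever service actually lists buckets for a user.
  private final BucketService bucketService = new BucketService();

  @GET
  @Path("/{userName}/buckets")
  @Produces(MediaType.APPLICATION_JSON)
  public List<String> listBuckets(@PathParam("userName") String userName) {
    return bucketService.listBucketNames(userName);
  }

  static class BucketService {
    List<String> listBucketNames(String userName) {
      return Collections.emptyList(); // placeholder
    }
  }
}
{code}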



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14322) RBF: Security manager should not load if security is disabled

2019-02-27 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14322?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16780014#comment-16780014
 ] 

Hadoop QA commented on HDFS-14322:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} HDFS-13891 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
 0s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
27s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
17s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
29s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 25s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
53s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
32s{color} | {color:green} HDFS-13891 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 56s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 21m 
53s{color} | {color:green} hadoop-hdfs-rbf in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 66m 19s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-14322 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12960502/HDFS-14322-HDFS-13891.001.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 6ff7d76483cb 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | HDFS-13891 / d8dccda |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26356/testReport/ |
| Max. process+thread count | 1372 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 
hadoop-hdfs-project/hadoop-hdfs-rbf |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26356/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.

[jira] [Work logged] (HDDS-1093) Configuration tab in OM/SCM ui is not displaying the correct values

2019-02-27 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1093?focusedWorklogId=205575=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-205575
 ]

ASF GitHub Bot logged work on HDDS-1093:


Author: ASF GitHub Bot
Created on: 28/Feb/19 01:53
Start Date: 28/Feb/19 01:53
Worklog Time Spent: 10m 
  Work Description: swagle commented on pull request #527: HDDS-1093. 
Configuration tab in OM/SCM ui is not displaying the correct values
URL: https://github.com/apache/hadoop/pull/527#discussion_r261020412
 
 

 ##
 File path: 
hadoop-hdds/common/src/test/java/org/apache/hadoop/hdds/conf/TestOzoneConfiguration.java
 ##
 @@ -0,0 +1,171 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdds.conf;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.Path;
+import org.junit.After;
+import org.junit.Assert;
+import org.junit.Before;
+import org.junit.Test;
+
+import java.io.BufferedWriter;
+import java.io.File;
+import java.io.FileWriter;
+import java.io.IOException;
+
+public class TestOzoneConfiguration {
+
+  private Configuration conf;
+  final static String CONFIG = new 
File("./test-config-TestConfiguration.xml").getAbsolutePath();
 
 Review comment:
   Better to use JUnit's TemporaryFolder and let the framework take care of 
the creation as well as the cleanup.
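
A minimal sketch of the suggested pattern, assuming JUnit 4 (file names are illustrative):

{code:java}
import java.io.File;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import org.junit.Rule;
import org.junit.Test;
import org.junit.rules.TemporaryFolder;

// JUnit 4 creates the folder before each test and deletes it afterwards,
// so no tearDown cleanup code is needed.
public class TestWithTemporaryFolder {

  @Rule
  public TemporaryFolder tempFolder = new TemporaryFolder();

  @Test
  public void writesConfigIntoManagedFolder() throws Exception {
    File config = tempFolder.newFile("test-config.xml");
    Files.write(config.toPath(),
        "<configuration/>".getBytes(StandardCharsets.UTF_8));
    // ... load the file into a Configuration and assert on it ...
  }
}
{code}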
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 205575)
Time Spent: 40m  (was: 0.5h)

> Configuration tab in OM/SCM ui is not displaying the correct values
> ---
>
> Key: HDDS-1093
> URL: https://issues.apache.org/jira/browse/HDDS-1093
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: OM, SCM
>Reporter: Sandeep Nemuri
>Assignee: Vivek Ratnavel Subramanian
>Priority: Critical
>  Labels: pull-request-available
> Attachments: image-2019-02-12-19-47-18-332.png
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Configuration tab in the OM/SCM UI is not displaying the correct/configured 
> values; rather, it is displaying the default values.
> !image-2019-02-12-19-47-18-332.png!
> {code:java}
> [hdfs@freonnode10 hadoop]$ curl -s http://freonnode10:9874/conf | grep 
> ozone.om.handler.count.key
> <property><name>ozone.om.handler.count.key</name><value>40</value><final>false</final><source>ozone-site.xml</source></property>
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1093) Configuration tab in OM/SCM ui is not displaying the correct values

2019-02-27 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1093?focusedWorklogId=205576=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-205576
 ]

ASF GitHub Bot logged work on HDDS-1093:


Author: ASF GitHub Bot
Created on: 28/Feb/19 01:55
Start Date: 28/Feb/19 01:55
Worklog Time Spent: 10m 
  Work Description: swagle commented on pull request #527: HDDS-1093. 
Configuration tab in OM/SCM ui is not displaying the correct values
URL: https://github.com/apache/hadoop/pull/527#discussion_r261020806
 
 

 ##
 File path: 
hadoop-hdds/common/src/test/java/org/apache/hadoop/hdds/conf/TestOzoneConfiguration.java
 ##
 @@ -0,0 +1,171 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdds.conf;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.Path;
+import org.junit.After;
+import org.junit.Assert;
+import org.junit.Before;
+import org.junit.Test;
+
+import java.io.BufferedWriter;
+import java.io.File;
+import java.io.FileWriter;
+import java.io.IOException;
+
+public class TestOzoneConfiguration {
+
+  private Configuration conf;
+  final static String CONFIG = new 
File("./test-config-TestConfiguration.xml").getAbsolutePath();
+  final static String CONFIG_CORE = new 
File("./core-site.xml").getAbsolutePath();
+
+  private BufferedWriter out;
 
 Review comment:
   In general I'm not too excited about sharing streams as a coding practice; 
other reviewers can chime in, but it's better to localize this than to deal 
with side-effect code in tearDown.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 205576)
Time Spent: 50m  (was: 40m)

> Configuration tab in OM/SCM ui is not displaying the correct values
> ---
>
> Key: HDDS-1093
> URL: https://issues.apache.org/jira/browse/HDDS-1093
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: OM, SCM
>Reporter: Sandeep Nemuri
>Assignee: Vivek Ratnavel Subramanian
>Priority: Critical
>  Labels: pull-request-available
> Attachments: image-2019-02-12-19-47-18-332.png
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Configuration tab in the OM/SCM UI is not displaying the correct/configured 
> values; rather, it is displaying the default values.
> !image-2019-02-12-19-47-18-332.png!
> {code:java}
> [hdfs@freonnode10 hadoop]$ curl -s http://freonnode10:9874/conf | grep 
> ozone.om.handler.count.key
> <property><name>ozone.om.handler.count.key</name><value>40</value><final>false</final><source>ozone-site.xml</source></property>
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-14316) RBF: Support unavailable subclusters for mount points with multiple destinations

2019-02-27 Thread Deshui Yu (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14316?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16780011#comment-16780011
 ] 

Deshui Yu edited comment on HDFS-14316 at 2/28/19 2:01 AM:
---

[~elgoiri] Thanks for helping format the code snippet. 

Yes, the performance concern is mitigated if invokeSingle already fails fast 
against a failed subcluster, and it would be good if we can run a perf test to 
confirm that.






was (Author: fishyds):
[~elgoiri] Thanks for helping format the code snippet. 

Yes, the performance concern is mitigated if invokeSingle alreadys fail fast 
towards failed subcluster, and it's will be good if we can run perf test to 
confirm that.





> RBF: Support unavailable subclusters for mount points with multiple 
> destinations
> 
>
> Key: HDFS-14316
> URL: https://issues.apache.org/jira/browse/HDFS-14316
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
> Attachments: HDFS-14316-HDFS-13891.000.patch, 
> HDFS-14316-HDFS-13891.001.patch, HDFS-14316-HDFS-13891.002.patch, 
> HDFS-14316-HDFS-13891.003.patch
>
>
> Currently mount points with multiple destinations (e.g., HASH_ALL) fail 
> writes when the destination subcluster is down. We need an option to allow 
> writing in other subclusters when one is down.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14316) RBF: Support unavailable subclusters for mount points with multiple destinations

2019-02-27 Thread Deshui Yu (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14316?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16780011#comment-16780011
 ] 

Deshui Yu commented on HDFS-14316:
--

[~elgoiri] Thanks for helping format the code snippet. 

Yes, the performance concern is mitigated if invokeSingle already fails fast 
against a failed subcluster, and it would be good if we can run a perf test to 
confirm that.





> RBF: Support unavailable subclusters for mount points with multiple 
> destinations
> 
>
> Key: HDFS-14316
> URL: https://issues.apache.org/jira/browse/HDFS-14316
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
> Attachments: HDFS-14316-HDFS-13891.000.patch, 
> HDFS-14316-HDFS-13891.001.patch, HDFS-14316-HDFS-13891.002.patch, 
> HDFS-14316-HDFS-13891.003.patch
>
>
> Currently mount points with multiple destinations (e.g., HASH_ALL) fail 
> writes when the destination subcluster is down. We need an option to allow 
> writing in other subclusters when one is down.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1093) Configuration tab in OM/SCM ui is not displaying the correct values

2019-02-27 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1093?focusedWorklogId=205579=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-205579
 ]

ASF GitHub Bot logged work on HDDS-1093:


Author: ASF GitHub Bot
Created on: 28/Feb/19 01:59
Start Date: 28/Feb/19 01:59
Worklog Time Spent: 10m 
  Work Description: avijayanhwx commented on pull request #527: HDDS-1093. 
Configuration tab in OM/SCM ui is not displaying the correct values
URL: https://github.com/apache/hadoop/pull/527#discussion_r261021548
 
 

 ##
 File path: hadoop-hdds/framework/src/main/resources/webapps/static/ozone.js
 ##
 @@ -308,7 +308,6 @@
 ctrl.convertToArray(response.data);
 ctrl.configs = Object.values(ctrl.keyTagMap);
 ctrl.component = 'All';
-console.log("ajay -> " + JSON.stringify(ctrl.configs));
 
 Review comment:
   I believe this line needs to be removed.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 205579)
Time Spent: 1h 20m  (was: 1h 10m)

> Configuration tab in OM/SCM ui is not displaying the correct values
> ---
>
> Key: HDDS-1093
> URL: https://issues.apache.org/jira/browse/HDDS-1093
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: OM, SCM
>Reporter: Sandeep Nemuri
>Assignee: Vivek Ratnavel Subramanian
>Priority: Critical
>  Labels: pull-request-available
> Attachments: image-2019-02-12-19-47-18-332.png
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> The Configuration tab in the OM/SCM UI is not displaying the correct/configured 
> values; rather, it is displaying the default values.
> !image-2019-02-12-19-47-18-332.png!
> {code:java}
> [hdfs@freonnode10 hadoop]$ curl -s http://freonnode10:9874/conf | grep 
> ozone.om.handler.count.key
> <property><name>ozone.om.handler.count.key</name><value>40</value><final>false</final><source>ozone-site.xml</source></property>
> {code}
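
As background for the report above, Hadoop's /conf servlet can only show values 
from resources the Configuration object actually loaded. A minimal sketch, 
assuming the root cause is an unregistered ozone-site.xml resource 
(Configuration.addDefaultResource is a real Hadoop API; the rest is 
illustrative):

{code:java}
import org.apache.hadoop.conf.Configuration;

public class ConfResourceSketch {
  public static void main(String[] args) {
    // Without this, a plain Configuration never reads ozone-site.xml, so any
    // /conf-style dump reports the hard-coded default instead of the
    // configured value (40, in the report above).
    Configuration.addDefaultResource("ozone-site.xml");
    Configuration conf = new Configuration();
    System.out.println(conf.get("ozone.om.handler.count.key"));
  }
}
{code}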



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1093) Configuration tab in OM/SCM ui is not displaying the correct values

2019-02-27 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1093?focusedWorklogId=205578=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-205578
 ]

ASF GitHub Bot logged work on HDDS-1093:


Author: ASF GitHub Bot
Created on: 28/Feb/19 01:56
Start Date: 28/Feb/19 01:56
Worklog Time Spent: 10m 
  Work Description: brownscott commented on issue #527: HDDS-1093. 
Configuration tab in OM/SCM ui is not displaying the correct values
URL: https://github.com/apache/hadoop/pull/527#issuecomment-468104921
 
 
   
   -- 
   Sent from my Android phone with GMX Mail. Please excuse my brevity.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 205578)
Time Spent: 1h 10m  (was: 1h)

> Configuration tab in OM/SCM ui is not displaying the correct values
> ---
>
> Key: HDDS-1093
> URL: https://issues.apache.org/jira/browse/HDDS-1093
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: OM, SCM
>Reporter: Sandeep Nemuri
>Assignee: Vivek Ratnavel Subramanian
>Priority: Critical
>  Labels: pull-request-available
> Attachments: image-2019-02-12-19-47-18-332.png
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> The Configuration tab in the OM/SCM UI is not displaying the correct/configured 
> values; rather, it is displaying the default values.
> !image-2019-02-12-19-47-18-332.png!
> {code:java}
> [hdfs@freonnode10 hadoop]$ curl -s http://freonnode10:9874/conf | grep 
> ozone.om.handler.count.key
> <property><name>ozone.om.handler.count.key</name><value>40</value><final>false</final><source>ozone-site.xml</source></property>
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1093) Configuration tab in OM/SCM ui is not displaying the correct values

2019-02-27 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1093?focusedWorklogId=205577=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-205577
 ]

ASF GitHub Bot logged work on HDDS-1093:


Author: ASF GitHub Bot
Created on: 28/Feb/19 01:55
Start Date: 28/Feb/19 01:55
Worklog Time Spent: 10m 
  Work Description: swagle commented on pull request #527: HDDS-1093. 
Configuration tab in OM/SCM ui is not displaying the correct values
URL: https://github.com/apache/hadoop/pull/527#discussion_r261020806
 
 

 ##
 File path: 
hadoop-hdds/common/src/test/java/org/apache/hadoop/hdds/conf/TestOzoneConfiguration.java
 ##
 @@ -0,0 +1,171 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdds.conf;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.Path;
+import org.junit.After;
+import org.junit.Assert;
+import org.junit.Before;
+import org.junit.Test;
+
+import java.io.BufferedWriter;
+import java.io.File;
+import java.io.FileWriter;
+import java.io.IOException;
+
+public class TestOzoneConfiguration {
+
+  private Configuration conf;
+  final static String CONFIG = new 
File("./test-config-TestConfiguration.xml").getAbsolutePath();
+  final static String CONFIG_CORE = new 
File("./core-site.xml").getAbsolutePath();
+
+  private BufferedWriter out;
 
 Review comment:
In general I'm not too excited about sharing streams as a coding practice; 
other reviewers can chime in, but it is better to localize this than to deal 
with side-effect code in tearDown.
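
A minimal sketch of the localization being suggested, with a hypothetical 
helper (not part of the patch): the writer is scoped to the method, so 
tearDown has no shared stream to clean up.

{code:java}
import java.io.BufferedWriter;
import java.io.FileWriter;
import java.io.IOException;

final class ConfigFileHelperSketch {
  private ConfigFileHelperSketch() { }

  // The writer lives only inside this method: no shared field, and no
  // side-effect cleanup left over for tearDown.
  static void writeConfigFile(String path, String properties)
      throws IOException {
    try (BufferedWriter out = new BufferedWriter(new FileWriter(path))) {
      out.write("<?xml version=\"1.0\"?>\n<configuration>\n");
      out.write(properties);
      out.write("</configuration>\n");
    }
  }
}
{code}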
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 205577)
Time Spent: 1h  (was: 50m)

> Configuration tab in OM/SCM ui is not displaying the correct values
> ---
>
> Key: HDDS-1093
> URL: https://issues.apache.org/jira/browse/HDDS-1093
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: OM, SCM
>Reporter: Sandeep Nemuri
>Assignee: Vivek Ratnavel Subramanian
>Priority: Critical
>  Labels: pull-request-available
> Attachments: image-2019-02-12-19-47-18-332.png
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> The Configuration tab in the OM/SCM UI is not displaying the correct/configured 
> values; rather, it is displaying the default values.
> !image-2019-02-12-19-47-18-332.png!
> {code:java}
> [hdfs@freonnode10 hadoop]$ curl -s http://freonnode10:9874/conf | grep 
> ozone.om.handler.count.key
> <property><name>ozone.om.handler.count.key</name><value>40</value><final>false</final><source>ozone-site.xml</source></property>
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1093) Configuration tab in OM/SCM ui is not displaying the correct values

2019-02-27 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1093?focusedWorklogId=205574=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-205574
 ]

ASF GitHub Bot logged work on HDDS-1093:


Author: ASF GitHub Bot
Created on: 28/Feb/19 01:49
Start Date: 28/Feb/19 01:49
Worklog Time Spent: 10m 
  Work Description: swagle commented on pull request #527: HDDS-1093. 
Configuration tab in OM/SCM ui is not displaying the correct values
URL: https://github.com/apache/hadoop/pull/527#discussion_r261019552
 
 

 ##
 File path: 
hadoop-hdds/common/src/test/java/org/apache/hadoop/hdds/conf/TestOzoneConfiguration.java
 ##
 @@ -0,0 +1,171 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdds.conf;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.Path;
+import org.junit.After;
+import org.junit.Assert;
+import org.junit.Before;
+import org.junit.Test;
+
+import java.io.BufferedWriter;
+import java.io.File;
+import java.io.FileWriter;
+import java.io.IOException;
+
+public class TestOzoneConfiguration {
+
+  private Configuration conf;
+  final static String CONFIG = new 
File("./test-config-TestConfiguration.xml").getAbsolutePath();
+  final static String CONFIG_CORE = new 
File("./core-site.xml").getAbsolutePath();
+
+  private BufferedWriter out;
+
+  @Before
+  public void setUp() throws Exception {
+conf = new OzoneConfiguration();
+  }
+
+  @After
+  public void tearDown() throws Exception {
+if(out != null) {
+  out.close();
+}
+new File(CONFIG).delete();
+new File(CONFIG_CORE).delete();
+  }
+
+  private void startConfig() throws IOException {
+out.write("<?xml version=\"1.0\"?>\n");
+out.write("<configuration>\n");
+  }
+
+  private void endConfig() throws IOException {
+out.write("</configuration>\n");
+out.flush();
+out.close();
+  }
+
+  @Test
+  public void testGetAllPropertiesByTags() throws Exception {
+
+try{
+  out = new BufferedWriter(new FileWriter(CONFIG));
+  startConfig();
+  appendProperty("hadoop.tags.system", "YARN,HDFS,NAMENODE");
+  appendProperty("hadoop.tags.custom", "MYCUSTOMTAG");
+  appendPropertyByTag("dfs.cblock.trace.io", "false", "YARN");
+  appendPropertyByTag("dfs.replication", "1", "HDFS");
+  appendPropertyByTag("dfs.namenode.logging.level", "INFO", "NAMENODE");
+  appendPropertyByTag("dfs.random.key", "XYZ", "MYCUSTOMTAG");
+  endConfig();
+
+  Path fileResource = new Path(CONFIG);
+  conf.addResource(fileResource);
+  assertEq(conf.getAllPropertiesByTag("MYCUSTOMTAG").getProperty("dfs.random.key"), "XYZ");
+} finally {
+  out.close();
+}
+try {
 
 Review comment:
   Better to use a try-with-resources construct that closes the Closeable than 
an explicit close; it is mostly syntactic sugar.
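
Concretely, the suggestion amounts to something like this self-contained 
sketch (the file name is borrowed from the test above):

{code:java}
import java.io.BufferedWriter;
import java.io.FileWriter;
import java.io.IOException;

public class TryWithResourcesSketch {
  public static void main(String[] args) throws IOException {
    // The writer is closed automatically when the block exits, even if a
    // write or an assertion throws, so no explicit out.close() is needed.
    try (BufferedWriter out = new BufferedWriter(
        new FileWriter("test-config-TestConfiguration.xml"))) {
      out.write("<?xml version=\"1.0\"?>\n<configuration>\n</configuration>\n");
    }
  }
}
{code}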
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 205574)
Time Spent: 0.5h  (was: 20m)

> Configuration tab in OM/SCM ui is not displaying the correct values
> ---
>
> Key: HDDS-1093
> URL: https://issues.apache.org/jira/browse/HDDS-1093
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: OM, SCM
>Reporter: Sandeep Nemuri
>Assignee: Vivek Ratnavel Subramanian
>Priority: Critical
>  Labels: pull-request-available
> Attachments: image-2019-02-12-19-47-18-332.png
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> The Configuration tab in the OM/SCM UI is not displaying the correct/configured 
> values; rather, it is displaying the default values.
> !image-2019-02-12-19-47-18-332.png!
> {code:java}
> [hdfs@freonnode10 hadoop]$ curl -s http://freonnode10:9874/conf | grep 
> ozone.om.handler.count.key
> 

[jira] [Work started] (HDDS-1093) Configuration tab in OM/SCM ui is not displaying the correct values

2019-02-27 Thread Vivek Ratnavel Subramanian (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1093?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDDS-1093 started by Vivek Ratnavel Subramanian.

> Configuration tab in OM/SCM ui is not displaying the correct values
> ---
>
> Key: HDDS-1093
> URL: https://issues.apache.org/jira/browse/HDDS-1093
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: OM, SCM
>Reporter: Sandeep Nemuri
>Assignee: Vivek Ratnavel Subramanian
>Priority: Critical
>  Labels: pull-request-available
> Attachments: image-2019-02-12-19-47-18-332.png
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> The Configuration tab in the OM/SCM UI is not displaying the correct/configured 
> values; rather, it is displaying the default values.
> !image-2019-02-12-19-47-18-332.png!
> {code:java}
> [hdfs@freonnode10 hadoop]$ curl -s http://freonnode10:9874/conf | grep 
> ozone.om.handler.count.key
> <property><name>ozone.om.handler.count.key</name><value>40</value><final>false</final><source>ozone-site.xml</source></property>
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1093) Configuration tab in OM/SCM ui is not displaying the correct values

2019-02-27 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1093?focusedWorklogId=205571=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-205571
 ]

ASF GitHub Bot logged work on HDDS-1093:


Author: ASF GitHub Bot
Created on: 28/Feb/19 01:38
Start Date: 28/Feb/19 01:38
Worklog Time Spent: 10m 
  Work Description: vivekratnavel commented on issue #527: HDDS-1093. 
Configuration tab in OM/SCM ui is not displaying the correct values
URL: https://github.com/apache/hadoop/pull/527#issuecomment-468101227
 
 
   @bharatviswa504 @avijayanhwx @arp7 Please review when you find time
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 205571)
Time Spent: 20m  (was: 10m)

> Configuration tab in OM/SCM ui is not displaying the correct values
> ---
>
> Key: HDDS-1093
> URL: https://issues.apache.org/jira/browse/HDDS-1093
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: OM, SCM
>Reporter: Sandeep Nemuri
>Assignee: Vivek Ratnavel Subramanian
>Priority: Critical
>  Labels: pull-request-available
> Attachments: image-2019-02-12-19-47-18-332.png
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> The Configuration tab in the OM/SCM UI is not displaying the correct/configured 
> values; rather, it is displaying the default values.
> !image-2019-02-12-19-47-18-332.png!
> {code:java}
> [hdfs@freonnode10 hadoop]$ curl -s http://freonnode10:9874/conf | grep 
> ozone.om.handler.count.key
> <property><name>ozone.om.handler.count.key</name><value>40</value><final>false</final><source>ozone-site.xml</source></property>
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1093) Configuration tab in OM/SCM ui is not displaying the correct values

2019-02-27 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1093?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-1093:
-
Labels: pull-request-available  (was: )

> Configuration tab in OM/SCM ui is not displaying the correct values
> ---
>
> Key: HDDS-1093
> URL: https://issues.apache.org/jira/browse/HDDS-1093
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: OM, SCM
>Reporter: Sandeep Nemuri
>Assignee: Vivek Ratnavel Subramanian
>Priority: Critical
>  Labels: pull-request-available
> Attachments: image-2019-02-12-19-47-18-332.png
>
>
> The Configuration tab in the OM/SCM UI is not displaying the correct/configured 
> values; rather, it is displaying the default values.
> !image-2019-02-12-19-47-18-332.png!
> {code:java}
> [hdfs@freonnode10 hadoop]$ curl -s http://freonnode10:9874/conf | grep 
> ozone.om.handler.count.key
> <property><name>ozone.om.handler.count.key</name><value>40</value><final>false</final><source>ozone-site.xml</source></property>
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1093) Configuration tab in OM/SCM ui is not displaying the correct values

2019-02-27 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1093?focusedWorklogId=205566=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-205566
 ]

ASF GitHub Bot logged work on HDDS-1093:


Author: ASF GitHub Bot
Created on: 28/Feb/19 01:36
Start Date: 28/Feb/19 01:36
Worklog Time Spent: 10m 
  Work Description: vivekratnavel commented on pull request #527: 
HDDS-1093. Configuration tab in OM/SCM ui is not displaying the correct values
URL: https://github.com/apache/hadoop/pull/527
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 205566)
Time Spent: 10m
Remaining Estimate: 0h

> Configuration tab in OM/SCM ui is not displaying the correct values
> ---
>
> Key: HDDS-1093
> URL: https://issues.apache.org/jira/browse/HDDS-1093
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: OM, SCM
>Reporter: Sandeep Nemuri
>Assignee: Vivek Ratnavel Subramanian
>Priority: Critical
>  Labels: pull-request-available
> Attachments: image-2019-02-12-19-47-18-332.png
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The Configuration tab in the OM/SCM UI is not displaying the correct/configured 
> values; rather, it is displaying the default values.
> !image-2019-02-12-19-47-18-332.png!
> {code:java}
> [hdfs@freonnode10 hadoop]$ curl -s http://freonnode10:9874/conf | grep 
> ozone.om.handler.count.key
> <property><name>ozone.om.handler.count.key</name><value>40</value><final>false</final><source>ozone-site.xml</source></property>
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1183) Override getDelegationToken API for OzoneFileSystem

2019-02-27 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1183?focusedWorklogId=205560=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-205560
 ]

ASF GitHub Bot logged work on HDDS-1183:


Author: ASF GitHub Bot
Created on: 28/Feb/19 01:33
Start Date: 28/Feb/19 01:33
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #526: HDDS-1183. 
Override getDelegationToken API for OzoneFileSystem. Contr…
URL: https://github.com/apache/hadoop/pull/526#issuecomment-468100290
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 38 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 58 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1029 | trunk passed |
   | +1 | compile | 970 | trunk passed |
   | +1 | checkstyle | 240 | trunk passed |
   | +1 | mvnsite | 214 | trunk passed |
   | +1 | shadedclient | 1230 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 198 | trunk passed |
   | +1 | javadoc | 129 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 22 | Maven dependency ordering for patch |
   | +1 | mvninstall | 105 | the patch passed |
   | +1 | compile | 960 | the patch passed |
   | +1 | javac | 960 | the patch passed |
   | +1 | checkstyle | 204 | the patch passed |
   | +1 | mvnsite | 128 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 701 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 212 | the patch passed |
   | +1 | javadoc | 121 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 99 | common in the patch failed. |
   | +1 | unit | 48 | common in the patch passed. |
   | +1 | unit | 134 | ozonefs in the patch passed. |
   | +1 | asflicense | 51 | The patch does not generate ASF License warnings. |
   | | | 6719 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdds.security.x509.certificate.client.TestDefaultCertificateClient |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-526/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/526 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  findbugs  checkstyle  |
   | uname | Linux 3e407a210f51 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / ea3cdc6 |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | findbugs | v3.1.0-RC1 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-526/1/artifact/out/patch-unit-hadoop-hdds_common.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-526/1/testReport/ |
   | Max. process+thread count | 2742 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-ozone/common hadoop-ozone/ozonefs 
U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-526/1/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 205560)
Time Spent: 20m  (was: 10m)

> Override getDelegationToken API for OzoneFileSystem
> ---
>
> Key: HDDS-1183
> URL: https://issues.apache.org/jira/browse/HDDS-1183
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> This includes addDelegationToken/renewDelegationToken/cancelDelegationToken 
> so that MR jobs can collect tokens correctly upon job 

[jira] [Updated] (HDFS-14314) fullBlockReportLeaseId should be reset after registering to NN

2019-02-27 Thread star (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14314?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

star updated HDFS-14314:

Attachment: HDFS-14314-trunk.004.patch

> fullBlockReportLeaseId should be reset after registering to NN
> --
>
> Key: HDFS-14314
> URL: https://issues.apache.org/jira/browse/HDFS-14314
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.8.4
> Environment:  
>  
>  
>Reporter: star
>Priority: Critical
> Fix For: 2.8.4
>
> Attachments: HDFS-14314-trunk.001.patch, HDFS-14314-trunk.001.patch, 
> HDFS-14314-trunk.002.patch, HDFS-14314-trunk.003.patch, 
> HDFS-14314-trunk.004.patch, HDFS-14314.0.patch, HDFS-14314.2.patch, 
> HDFS-14314.patch
>
>
>   Since HDFS-7923, to rate-limit DN block reports, a DN asks the active NN 
> for a full block report lease id before sending a full block report, and then 
> sends the report together with that lease id. If the lease id is invalid, the 
> NN rejects the full block report and logs "not in the pending set".
>   Consider the case where a DN is doing a full block report while the NN is 
> restarted. The DN will later send a full block report with a lease id acquired 
> from the previous NN instance, which is invalid to the new NN instance. 
> Although the DN recognizes the new NN instance by heartbeat and re-registers 
> itself, it does not reset the lease id from the previous instance.
>   The issue may cause DNs to temporarily go dead, making it unsafe to restart 
> the NN, especially in Hadoop clusters with a large number of DNs. HDFS-12914 
> reported the issue without any clue why it occurred, and it remains unsolved.
>   To make it clear, look at the code below, taken from the offerService 
> method of class BPServiceActor (some code is elided to focus on the issue). 
> fullBlockReportLeaseId is a local variable holding the lease id from the NN. 
> When the NN is restarting, the blockReport call throws, the exception is 
> caught by the catch block in the while loop, and fullBlockReportLeaseId is 
> therefore not reset to 0. After the NN restarts, the DN sends a full block 
> report that the new NN instance rejects, and it will not try again until the 
> next full block report schedule, about an hour later.
>   The solution is simple: reset fullBlockReportLeaseId to 0 after any 
> exception or after registering to the NN, so the DN asks the new NN instance 
> for a valid lease id (a sketch of this reset follows the code below).
> {code:java}
> private void offerService() throws Exception {
>   long fullBlockReportLeaseId = 0;
>   //
>   // Now loop for a long time
>   //
>   while (shouldRun()) {
> try {
>   final long startTime = scheduler.monotonicNow();
>   //
>   // Every so often, send heartbeat or block-report
>   //
>   final boolean sendHeartbeat = scheduler.isHeartbeatDue(startTime);
>   HeartbeatResponse resp = null;
>   if (sendHeartbeat) {
>   
> boolean requestBlockReportLease = (fullBlockReportLeaseId == 0) &&
> scheduler.isBlockReportDue(startTime);
> scheduler.scheduleNextHeartbeat();
> if (!dn.areHeartbeatsDisabledForTests()) {
>   resp = sendHeartBeat(requestBlockReportLease);
>   assert resp != null;
>   if (resp.getFullBlockReportLeaseId() != 0) {
> if (fullBlockReportLeaseId != 0) {
>   LOG.warn(nnAddr + " sent back a full block report lease " +
>   "ID of 0x" +
>   Long.toHexString(resp.getFullBlockReportLeaseId()) +
>   ", but we already have a lease ID of 0x" +
>   Long.toHexString(fullBlockReportLeaseId) + ". " +
>   "Overwriting old lease ID.");
> }
> fullBlockReportLeaseId = resp.getFullBlockReportLeaseId();
>   }
>  
> }
>   }
>
>  
>   if ((fullBlockReportLeaseId != 0) || forceFullBr) {
> //Exception occurred here when NN restarting
> cmds = blockReport(fullBlockReportLeaseId);
> fullBlockReportLeaseId = 0;
>   }
>   
> } catch(RemoteException re) {
>   
>   } // while (shouldRun())
> } // offerService{code}
>  
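
A minimal sketch of the proposed reset in the catch block; this mirrors the 
excerpt above and is not the attached patch itself:

{code:java}
} catch (RemoteException re) {
  // Sketch of the fix: a failure here may mean the NN restarted, so drop the
  // lease id granted by the previous NN instance. On the next heartbeat,
  // requestBlockReportLease becomes true again and a fresh lease is obtained.
  fullBlockReportLeaseId = 0;
}
{code}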



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1043) Enable token based authentication for S3 api

2019-02-27 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1043?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16779986#comment-16779986
 ] 

Hadoop QA commented on HDDS-1043:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  8s{color} 
| {color:red} HDDS-1043 does not apply to trunk. Rebase required? Wrong Branch? 
See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDDS-1043 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12960399/HDDS-1043.05.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2400/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Enable token based authentication for S3 api
> 
>
> Key: HDDS-1043
> URL: https://issues.apache.org/jira/browse/HDDS-1043
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>  Labels: security
> Fix For: 0.4.0
>
> Attachments: HDDS-1043.00.patch, HDDS-1043.01.patch, 
> HDDS-1043.02.patch, HDDS-1043.03.patch, HDDS-1043.04.patch, HDDS-1043.05.patch
>
>
> Ozone has an S3 API and a mechanism to create S3-like secrets for users. This 
> jira proposes Hadoop-compatible token-based authentication for the S3 API, 
> which utilizes the S3 secret stored in OM.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1043) Enable token based authentication for S3 api

2019-02-27 Thread Ajay Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1043?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDDS-1043:
-
Status: Patch Available  (was: Open)

> Enable token based authentication for S3 api
> 
>
> Key: HDDS-1043
> URL: https://issues.apache.org/jira/browse/HDDS-1043
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>  Labels: security
> Fix For: 0.4.0
>
> Attachments: HDDS-1043.00.patch, HDDS-1043.01.patch, 
> HDDS-1043.02.patch, HDDS-1043.03.patch, HDDS-1043.04.patch, HDDS-1043.05.patch
>
>
> Ozone has an S3 API and a mechanism to create S3-like secrets for users. This 
> jira proposes Hadoop-compatible token-based authentication for the S3 API, 
> which utilizes the S3 secret stored in OM.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1187) Healthy pipeline Chill Mode rule to consider only pipelines with replication factor three

2019-02-27 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1187?focusedWorklogId=205558=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-205558
 ]

ASF GitHub Bot logged work on HDDS-1187:


Author: ASF GitHub Bot
Created on: 28/Feb/19 01:27
Start Date: 28/Feb/19 01:27
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #524: HDDS-1187.  
Healthy pipeline Chill Mode rule to consider only pipelines with replication 
factor three.
URL: https://github.com/apache/hadoop/pull/524#issuecomment-468098966
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 37 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 2 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 1144 | trunk passed |
   | +1 | compile | 62 | trunk passed |
   | +1 | checkstyle | 23 | trunk passed |
   | +1 | mvnsite | 34 | trunk passed |
   | +1 | shadedclient | 744 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 52 | trunk passed |
   | +1 | javadoc | 26 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 37 | the patch passed |
   | +1 | compile | 29 | the patch passed |
   | +1 | javac | 29 | the patch passed |
   | +1 | checkstyle | 17 | the patch passed |
   | +1 | mvnsite | 29 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 766 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 47 | the patch passed |
   | +1 | javadoc | 22 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 110 | server-scm in the patch passed. |
   | +1 | asflicense | 27 | The patch does not generate ASF License warnings. |
   | | | 3286 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-524/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/524 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  findbugs  checkstyle  |
   | uname | Linux b4aa3d666b4d 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 04b228e |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | findbugs | v3.1.0-RC1 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-524/2/testReport/ |
   | Max. process+thread count | 418 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/server-scm U: hadoop-hdds/server-scm |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-524/2/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 205558)
Time Spent: 0.5h  (was: 20m)

> Healthy pipeline Chill Mode rule to consider only pipelines with replication 
> factor three
> -
>
> Key: HDDS-1187
> URL: https://issues.apache.org/jira/browse/HDDS-1187
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Affects Versions: 0.4.0
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> A few offline comments from [~nandakumar131]:
>  # We should not process the pipeline report from a datanode again during 
> calculations.
>  # We should consider only replication factor 3 Ratis pipelines (sketched 
> below).
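
A sketch of the intended filter for point 2; the Pipeline accessors are 
assumptions based on the HDDS API of this period:

{code:java}
import org.apache.hadoop.hdds.protocol.proto.HddsProtos.ReplicationFactor;
import org.apache.hadoop.hdds.protocol.proto.HddsProtos.ReplicationType;
import org.apache.hadoop.hdds.scm.pipeline.Pipeline;

final class HealthyPipelineRuleSketch {
  private HealthyPipelineRuleSketch() { }

  // Only factor-THREE Ratis pipelines count toward the chill mode threshold;
  // factor-ONE pipelines are trivially healthy and would skew the rule.
  static boolean countsTowardRule(Pipeline pipeline) {
    return pipeline.getType() == ReplicationType.RATIS
        && pipeline.getFactor() == ReplicationFactor.THREE;
  }
}
{code}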



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-7663) Erasure Coding: Append on striped file

2019-02-27 Thread Ayush Saxena (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-7663?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-7663:
---
Attachment: HDFS-7663-06.patch

> Erasure Coding: Append on striped file
> --
>
> Key: HDFS-7663
> URL: https://issues.apache.org/jira/browse/HDFS-7663
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 3.0.0-alpha1
>Reporter: Jing Zhao
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-7663-02.patch, HDFS-7663-03.patch, 
> HDFS-7663-04.patch, HDFS-7663-05.patch, HDFS-7663-06.patch, HDFS-7663.00.txt, 
> HDFS-7663.01.patch
>
>
> Append should be easy if we have variable length block support from 
> HDFS-3689, i.e., the new data will be appended to a new block. We need to 
> revisit whether and how to support appending data to the original last block.
> 1. Append to a closed striped file, with NEW_BLOCK flag enabled (this)
> 2. Append to an under-construction striped file, with NEW_BLOCK flag enabled 
> (HDFS-9173)
> 3. Append to a striped file, by appending to the last block group (follow-on)
> This jira attempts to implement #1 (sketched below), and also tracks #2 and #3.
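
For case #1, the client-side call looks roughly like the following sketch. The 
path and buffer size are placeholders; the append overload taking CreateFlag 
is the public HDFS API.

{code:java}
import java.nio.charset.StandardCharsets;
import java.util.EnumSet;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.CreateFlag;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;

public class StripedAppendSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Path file = new Path("/ec/striped-file"); // placeholder path
    DistributedFileSystem dfs =
        (DistributedFileSystem) file.getFileSystem(conf);
    // NEW_BLOCK makes the appended data go into a fresh block group instead
    // of the original last block group, which is what case #1 supports.
    try (FSDataOutputStream out = dfs.append(file,
        EnumSet.of(CreateFlag.APPEND, CreateFlag.NEW_BLOCK), 4096, null)) {
      out.write("appended data".getBytes(StandardCharsets.UTF_8));
    }
  }
}
{code}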



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1061) DelegationToken: Add certificate serial id to Ozone Delegation Token Identifier

2019-02-27 Thread Ajay Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1061?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDDS-1061:
-
Resolution: Fixed
Status: Resolved  (was: Patch Available)

[~xyao] thanks for the reviews, committed to trunk.

> DelegationToken: Add certificate serial  id to Ozone Delegation Token 
> Identifier
> 
>
> Key: HDDS-1061
> URL: https://issues.apache.org/jira/browse/HDDS-1061
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDDS-1061.00.patch, HDDS-1061.01.patch, 
> HDDS-1061.02.patch, HDDS-1061.03.patch, HDDS-1061.04.patch, 
> HDDS-1061.05.patch, HDDS-1061.06.patch, HDDS-1061.07.patch
>
>
> 1. Add the certificate serial id to the Ozone Delegation Token Identifier 
> (sketched below); required for OM HA support.
> 2. Validate Ozone tokens based on the public key from the OM certificate.
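
For item 1, the serialization change amounts to appending one field to the 
identifier, roughly as in this sketch. The class and field names are 
illustrative, not the attached patch.

{code:java}
import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.security.token.delegation.AbstractDelegationTokenIdentifier;

public class TokenWithCertSerialIdSketch
    extends AbstractDelegationTokenIdentifier {
  public static final Text KIND = new Text("EXAMPLE_DELEGATION_TOKEN");

  // Serial id of the OM certificate whose key signed this token, letting a
  // validator pick the matching public key (e.g., across OM HA failovers).
  private String omCertSerialId = "";

  @Override
  public Text getKind() {
    return KIND;
  }

  public void setOmCertSerialId(String serialId) {
    this.omCertSerialId = serialId;
  }

  @Override
  public void write(DataOutput out) throws IOException {
    super.write(out); // base fields: owner, renewer, real user, dates, etc.
    Text.writeString(out, omCertSerialId); // extra field appended at the end
  }

  @Override
  public void readFields(DataInput in) throws IOException {
    super.readFields(in);
    omCertSerialId = Text.readString(in);
  }
}
{code}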



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1061) DelegationToken: Add certificate serial id to Ozone Delegation Token Identifier

2019-02-27 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16779978#comment-16779978
 ] 

Hudson commented on HDDS-1061:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #16088 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/16088/])
HDDS-1061. DelegationToken: Add certificate serial id to Ozone (ajay: rev 
cbf82fabf0b7fc32516ef1a67cfc79611b5d2332)
* (edit) hadoop-ozone/common/src/main/proto/OzoneManagerProtocol.proto
* (edit) 
hadoop-ozone/common/src/test/java/org/apache/hadoop/ozone/security/TestOzoneBlockTokenSecretManager.java
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/security/x509/certificate/client/OMCertificateClient.java
* (edit) 
hadoop-ozone/common/src/test/java/org/apache/hadoop/ozone/security/TestOzoneDelegationTokenSecretManager.java
* (edit) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/security/OzoneSecretManager.java
* (edit) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/security/OzoneTokenIdentifier.java
* (edit) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/security/OzoneDelegationTokenSecretManager.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneManager.java
* (edit) 
hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/security/TestOzoneTokenIdentifier.java
* (edit) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/security/OzoneBlockTokenSecretManager.java


> DelegationToken: Add certificate serial  id to Ozone Delegation Token 
> Identifier
> 
>
> Key: HDDS-1061
> URL: https://issues.apache.org/jira/browse/HDDS-1061
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDDS-1061.00.patch, HDDS-1061.01.patch, 
> HDDS-1061.02.patch, HDDS-1061.03.patch, HDDS-1061.04.patch, 
> HDDS-1061.05.patch, HDDS-1061.06.patch, HDDS-1061.07.patch
>
>
> 1. Add the certificate serial id to the Ozone Delegation Token Identifier; 
> required for OM HA support.
> 2. Validate Ozone tokens based on the public key from the OM certificate.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-134) SCM CA: OM sends CSR and uses certificate issued by SCM

2019-02-27 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16779975#comment-16779975
 ] 

Hadoop QA commented on HDDS-134:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 6 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
35s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
36s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 18m 
16s{color} | {color:red} root in trunk failed. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
18m 27s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
40s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m  
5s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
42s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 17m 
27s{color} | {color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 17m 27s{color} 
| {color:red} root in the patch failed. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 23s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
33s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  1m 26s{color} 
| {color:red} common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
45s{color} | {color:green} ozone-manager in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 11m  1s{color} 
| {color:red} integration-test in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
47s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}124m 21s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdds.security.x509.certificate.client.TestDefaultCertificateClient |
|   | hadoop.ozone.scm.pipeline.TestPipelineManagerMXBean |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDDS-134 |
| JIRA Patch URL | 

[jira] [Updated] (HDFS-13699) Add DFSClient sending handshake token to DataNode, and allow DataNode overwrite downstream QOP

2019-02-27 Thread Chen Liang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13699?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-13699:
--
Attachment: HDFS-13699.006.patch

> Add DFSClient sending handshake token to DataNode, and allow DataNode 
> overwrite downstream QOP
> --
>
> Key: HDFS-13699
> URL: https://issues.apache.org/jira/browse/HDFS-13699
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Major
> Attachments: HDFS-13699.001.patch, HDFS-13699.002.patch, 
> HDFS-13699.003.patch, HDFS-13699.004.patch, HDFS-13699.005.patch, 
> HDFS-13699.006.patch, HDFS-13699.WIP.001.patch
>
>
> Given the other Jiras under HDFS-13541, this Jira is to allow the DFSClient to 
> redirect the encrypted secret to the DataNode. The encrypted message is the 
> QOP that the client and NameNode have used. The DataNode decrypts the message 
> and enforces that QOP for the client connection. This Jira also includes 
> overwriting the downstream QOP, as mentioned in the HDFS-13541 design doc; 
> namely, it allows an inter-DN QOP that is different from the client-DN QOP.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-134) SCM CA: OM sends CSR and uses certificate issued by SCM

2019-02-27 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16779971#comment-16779971
 ] 

Hadoop QA commented on HDDS-134:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 6 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
44s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
47s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 17m  
2s{color} | {color:red} root in trunk failed. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
17m 21s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
57s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m  
6s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
41s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 17m 
30s{color} | {color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 17m 30s{color} 
| {color:red} root in the patch failed. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 44s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
35s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  1m  9s{color} 
| {color:red} common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
49s{color} | {color:green} ozone-manager in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m 
54s{color} | {color:green} integration-test in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
43s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}117m 28s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdds.security.x509.certificate.client.TestDefaultCertificateClient |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDDS-134 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12960480/HDDS-134.07.patch |
| Optional Tests |  

[jira] [Updated] (HDFS-14322) RBF: Security manager should not load if security is disabled

2019-02-27 Thread CR Hota (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14322?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

CR Hota updated HDFS-14322:
---
Attachment: HDFS-14322-HDFS-13891.001.patch

> RBF: Security manager should not load if security is disabled
> -
>
> Key: HDFS-14322
> URL: https://issues.apache.org/jira/browse/HDFS-14322
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: CR Hota
>Assignee: CR Hota
>Priority: Major
> Attachments: HDFS-14322-HDFS-13891.001.patch
>
>
> The security manager tries to instantiate the secret manager irrespective of 
> whether security is on or off. A simple optimization should be done to avoid 
> loading the secret manager if security is turned off (see the sketch below).
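
The optimization boils down to a guard like this sketch. The factory method is 
hypothetical; UserGroupInformation.isSecurityEnabled is the real Hadoop check.

{code:java}
import org.apache.hadoop.security.UserGroupInformation;

final class SecurityManagerGuardSketch {
  private Object secretManager; // stand-in for the real secret manager type

  void init() {
    // Instantiate the secret manager only when Kerberos security is on;
    // when security is off it would never be used, so skip the load.
    if (UserGroupInformation.isSecurityEnabled()) {
      secretManager = createSecretManager(); // hypothetical factory
    }
  }

  private Object createSecretManager() {
    return new Object(); // placeholder for the real instantiation
  }
}
{code}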



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13699) Add DFSClient sending handshake token to DataNode, and allow DataNode overwrite downstream QOP

2019-02-27 Thread Chen Liang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16779962#comment-16779962
 ] 

Chen Liang commented on HDFS-13699:
---

Fixed a checkstyle issue. The rest actually follow the existing style. The 
failed tests are unrelated: TestJournalNodeSync has been failing even without 
the patch, and recent Jiras, such as HDFS-14305, are seeing this test fail as 
well. The other two passed in my local run.

> Add DFSClient sending handshake token to DataNode, and allow DataNode 
> overwrite downstream QOP
> --
>
> Key: HDFS-13699
> URL: https://issues.apache.org/jira/browse/HDFS-13699
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Major
> Attachments: HDFS-13699.001.patch, HDFS-13699.002.patch, 
> HDFS-13699.003.patch, HDFS-13699.004.patch, HDFS-13699.005.patch, 
> HDFS-13699.006.patch, HDFS-13699.WIP.001.patch
>
>
> Given the other Jiras under HDFS-13541, this Jira is to allow DFSClient to 
> redirect the encrypted secret to the DataNode. The encrypted message is the 
> QOP that the client and NameNode have used. The DataNode decrypts the message 
> and enforces that QOP for the client connection. This Jira will also include 
> overwriting the downstream QOP, as mentioned in the HDFS-13541 design doc. 
> Namely, this is to allow an inter-DN QOP that is different from the client-DN QOP.
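As a hedged illustration of the handshake described above (the decryptSecret 
helper and the surrounding wiring are assumptions, not the actual HDFS-13699 
patch):

{code}
import java.nio.charset.StandardCharsets;
import java.util.Map;
import javax.security.sasl.Sasl;

// Hedged sketch of the DataNode-side enforcement described above; the
// decryptSecret() helper is a placeholder, not the real patch code.
final class QopEnforcerSketch {
  static void enforceNegotiatedQop(byte[] encryptedSecret,
      Map<String, String> saslProps) {
    byte[] secret = decryptSecret(encryptedSecret);   // shared-key decryption
    String negotiatedQop = new String(secret, StandardCharsets.UTF_8);
    // overwrite whatever QOP the DataNode would otherwise have used
    saslProps.put(Sasl.QOP, negotiatedQop);
  }

  private static byte[] decryptSecret(byte[] in) { return in; } // placeholder
}
{code}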



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14322) RBF: Security manager should not load if security is disabled

2019-02-27 Thread CR Hota (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14322?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

CR Hota updated HDFS-14322:
---
Attachment: (was: HDFS-14322-HDFS-13532.001.patch)

> RBF: Security manager should not load if security is disabled
> -
>
> Key: HDFS-14322
> URL: https://issues.apache.org/jira/browse/HDFS-14322
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: CR Hota
>Assignee: CR Hota
>Priority: Major
>
> Security manager tries to instantiate secret manager irrespective of whether 
> security is on/off. A simple optimization should be done to avoid loading 
> the secret manager if security is turned off.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14322) RBF: Security manager should not load if security is disabled

2019-02-27 Thread Íñigo Goiri (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14322?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16779961#comment-16779961
 ] 

Íñigo Goiri commented on HDFS-14322:


OK, let's just go for #2 then.
BTW, provide the patch against the HDFS-13891 branch.

> RBF: Security manager should not load if security is disabled
> -
>
> Key: HDFS-14322
> URL: https://issues.apache.org/jira/browse/HDFS-14322
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: CR Hota
>Assignee: CR Hota
>Priority: Major
> Attachments: HDFS-14322-HDFS-13532.001.patch
>
>
> Security manager tries to instantiate secret manager irrespective of whether 
> security is on/off. A simple optimization should be done to avoid loading 
> the secret manager if security is turned off.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14295) Add Threadpool for DataTransfers

2019-02-27 Thread BELUGA BEHR (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14295?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16779960#comment-16779960
 ] 

BELUGA BEHR commented on HDFS-14295:


Yeah, that code could use some TLC.  I'm working on [HDFS-14292].  I'll 
address it there, or in a follow-up to [HDFS-14292].

> Add Threadpool for DataTransfers
> 
>
> Key: HDFS-14295
> URL: https://issues.apache.org/jira/browse/HDFS-14295
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 3.2.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Major
> Attachments: HDFS-14295.1.patch, HDFS-14295.2.patch, 
> HDFS-14295.3.patch, HDFS-14295.4.patch, HDFS-14295.5.patch, 
> HDFS-14295.6.patch, HDFS-14295.7.patch, HDFS-14295.8.patch
>
>
> When a DataNode performs a data transfer of a block, it spins up a new thread for each 
> transfer.  
> [Here|https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java#L2339]
>  and 
> [Here|https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java#L3019-L3022].
>    Instead, add the threads to a {{CachedThreadPool}} so that when their 
> threads complete the transfer, they can be re-used for another transfer. This 
> should save resources spent on creating and spinning up transfer threads.
> One thing I'll point out that's a bit off, which I address in this patch, ...
> There are two places in the code where a {{DataTransfer}} thread is started. 
> In [one 
> place|https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java#L2339-L2341],
>  it's started in a default thread group. In [another 
> place|https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java#L3019-L3022],
>  it's started in the 
> [dataXceiverServer|https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java#L1164]
>  thread group.
> I do not think it's correct to include any of these threads in the 
> {{dataXceiverServer}} thread group. Anything submitted to the 
> {{dataXceiverServer}} should probably be tied to the 
> {{dfs.datanode.max.transfer.threads}} configurations, and neither of these 
> methods are. Instead, they should be submitted into the same thread pool with 
> its own thread group (probably the default thread group, unless someone 
> suggests otherwise), which is what I have included in this patch.
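As a hedged illustration of the pooling idea in this description (factory and 
group names are assumptions, not the patch itself):

{code}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Hedged sketch of the pooling idea (names assumed; not the HDFS-14295 patch).
final class TransferPoolSketch {
  private static final ThreadGroup TRANSFER_GROUP = new ThreadGroup("dataTransfer");

  // Idle threads are re-used for the next transfer, so repeated block
  // transfers stop paying the thread start-up cost each time.
  static final ExecutorService POOL = Executors.newCachedThreadPool(r -> {
    Thread t = new Thread(TRANSFER_GROUP, r, "DataTransfer");
    t.setDaemon(true);
    return t;
  });
}
{code}

Both call sites would then submit their {{DataTransfer}} runnable to such a 
pool instead of starting a fresh thread.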



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14322) RBF: Security manager should not load if security is disabled

2019-02-27 Thread CR Hota (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14322?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16779958#comment-16779958
 ] 

CR Hota commented on HDFS-14322:


[~elgoiri] Thanks for the quick review.
 # I thought about the ordering but kept it this way, as 
SecurityUtil.getAuthenticationMethod always returns a default if not set. I can 
change the order here.
 # Sorry, I missed the space.
 # For the exception itself, I think it's better to keep the whole stack trace 
instead of a single error line, for easier debuggability. This exception will 
only be shown when security is enabled and the secret manager init has a real 
issue. While running the unit tests I realized that seeing the actual stack up 
front makes it easy to trace the issue's origin. Now that we are not 
initializing when security is off, the issue should be clearly visible as a 
stack trace in the logs for users. Thoughts?

 

> RBF: Security manager should not load if security is disabled
> -
>
> Key: HDFS-14322
> URL: https://issues.apache.org/jira/browse/HDFS-14322
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: CR Hota
>Assignee: CR Hota
>Priority: Major
> Attachments: HDFS-14322-HDFS-13532.001.patch
>
>
> Security manager tries to instantiate secret manager irrespective of whether 
> security is on/off. A simple optimization should be done to avoid loading 
> the secret manager if security is turned off.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14295) Add Threadpool for DataTransfers

2019-02-27 Thread Íñigo Goiri (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14295?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16779957#comment-16779957
 ] 

Íñigo Goiri commented on HDFS-14295:


Bufff that code looks tough.
At least here we are narrowing the wait to 30 seconds...
I think [^HDFS-14295.8.patch] is good then, but I'd like to get more feedback.

> Add Threadpool for DataTransfers
> 
>
> Key: HDFS-14295
> URL: https://issues.apache.org/jira/browse/HDFS-14295
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 3.2.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Major
> Attachments: HDFS-14295.1.patch, HDFS-14295.2.patch, 
> HDFS-14295.3.patch, HDFS-14295.4.patch, HDFS-14295.5.patch, 
> HDFS-14295.6.patch, HDFS-14295.7.patch, HDFS-14295.8.patch
>
>
> When a DataNode performs a data transfer of a block, it spins up a new thread for each 
> transfer.  
> [Here|https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java#L2339]
>  and 
> [Here|https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java#L3019-L3022].
>    Instead, add the threads to a {{CachedThreadPool}} so that when their 
> threads complete the transfer, they can be re-used for another transfer. This 
> should save resources spent on creating and spinning up transfer threads.
> One thing I'll point out that's a bit off, which I address in this patch, ...
> There are two places in the code where a {{DataTransfer}} thread is started. 
> In [one 
> place|https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java#L2339-L2341],
>  it's started in a default thread group. In [another 
> place|https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java#L3019-L3022],
>  it's started in the 
> [dataXceiverServer|https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java#L1164]
>  thread group.
> I do not think it's correct to include any of these threads in the 
> {{dataXceiverServer}} thread group. Anything submitted to the 
> {{dataXceiverServer}} should probably be tied to the 
> {{dfs.datanode.max.transfer.threads}} configurations, and neither of these 
> methods are. Instead, they should be submitted into the same thread pool with 
> its own thread group (probably the default thread group, unless someone 
> suggests otherwise), which is what I have included in this patch.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1061) DelegationToken: Add certificate serial id to Ozone Delegation Token Identifier

2019-02-27 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16779955#comment-16779955
 ] 

Hadoop QA commented on HDDS-1061:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
50s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
22s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 17m  
4s{color} | {color:red} root in trunk failed. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m 54s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
7s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
26s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
44s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 16m 
20s{color} | {color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} cc {color} | {color:red} 16m 20s{color} | 
{color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 16m 20s{color} 
| {color:red} root in the patch failed. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  4m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m  1s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
32s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  1m 17s{color} 
| {color:red} common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
45s{color} | {color:green} common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
41s{color} | {color:green} ozone-manager in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
35s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}109m  6s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdds.security.x509.certificate.client.TestDefaultCertificateClient |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDDS-1061 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12960475/HDDS-1061.07.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  cc  |
| uname | Linux 960a37da996e 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| 

[jira] [Commented] (HDDS-1061) DelegationToken: Add certificate serial id to Ozone Delegation Token Identifier

2019-02-27 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16779952#comment-16779952
 ] 

Hadoop QA commented on HDDS-1061:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
27s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
 9s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 17m 
43s{color} | {color:red} root in trunk failed. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
17m 39s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
10s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
26s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
45s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 16m 
53s{color} | {color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} cc {color} | {color:red} 16m 53s{color} | 
{color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 16m 53s{color} 
| {color:red} root in the patch failed. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  4m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m  9s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
43s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  1m 17s{color} 
| {color:red} common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
41s{color} | {color:green} common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
40s{color} | {color:green} ozone-manager in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
37s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}109m 53s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdds.security.x509.certificate.client.TestDefaultCertificateClient |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDDS-1061 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12960475/HDDS-1061.07.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  cc  |
| uname | Linux a1d7ac3e03ca 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| 

[jira] [Updated] (HDFS-14322) RBF: Security manager should not load if security is disabled

2019-02-27 Thread CR Hota (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14322?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

CR Hota updated HDFS-14322:
---
Attachment: HDFS-14322-HDFS-13532.001.patch

> RBF: Security manager should not load if security is disabled
> -
>
> Key: HDFS-14322
> URL: https://issues.apache.org/jira/browse/HDFS-14322
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: CR Hota
>Assignee: CR Hota
>Priority: Major
> Attachments: HDFS-14322-HDFS-13532.001.patch
>
>
> Security manager tries to instantiate secret manager irrespective of whether 
> security is on/off. A simple optimization should be done to avoid loading 
> the secret manager if security is turned off.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14322) RBF: Security manager should not load if security is disabled

2019-02-27 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14322?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16779944#comment-16779944
 ] 

Hadoop QA commented on HDFS-14322:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  8s{color} 
| {color:red} HDFS-14322 does not apply to HDFS-13532. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDFS-14322 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12960498/HDFS-14322-HDFS-13532.001.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26354/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> RBF: Security manager should not load if security is disabled
> -
>
> Key: HDFS-14322
> URL: https://issues.apache.org/jira/browse/HDFS-14322
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: CR Hota
>Assignee: CR Hota
>Priority: Major
> Attachments: HDFS-14322-HDFS-13532.001.patch
>
>
> Security manager tries to instantiate secret manager irrespective of whether 
> security is on/off. A simple optimization should be done to avoid loading 
> the secret manager if security is turned off.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14295) Add Threadpool for DataTransfers

2019-02-27 Thread BELUGA BEHR (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14295?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16779950#comment-16779950
 ] 

BELUGA BEHR commented on HDFS-14295:


[~elgoiri] Not sure if you saw my last comment.  We submitted at the same time. 
:)

The DataNode already implements this for the main thread pool.  So if anything, 
we are bringing the behavior in line with the other pool.

> Add Threadpool for DataTransfers
> 
>
> Key: HDFS-14295
> URL: https://issues.apache.org/jira/browse/HDFS-14295
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 3.2.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Major
> Attachments: HDFS-14295.1.patch, HDFS-14295.2.patch, 
> HDFS-14295.3.patch, HDFS-14295.4.patch, HDFS-14295.5.patch, 
> HDFS-14295.6.patch, HDFS-14295.7.patch, HDFS-14295.8.patch
>
>
> When a DataNode performs a data transfer of a block, it spins up a new thread for each 
> transfer.  
> [Here|https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java#L2339]
>  and 
> [Here|https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java#L3019-L3022].
>    Instead, add the threads to a {{CachedThreadPool}} so that when their 
> threads complete the transfer, they can be re-used for another transfer. This 
> should save resources spent on creating and spinning up transfer threads.
> One thing I'll point out that's a bit off, which I address in this patch, ...
> There are two places in the code where a {{DataTransfer}} thread is started. 
> In [one 
> place|https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java#L2339-L2341],
>  it's started in a default thread group. In [another 
> place|https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java#L3019-L3022],
>  it's started in the 
> [dataXceiverServer|https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java#L1164]
>  thread group.
> I do not think it's correct to include any of these threads in the 
> {{dataXceiverServer}} thread group. Anything submitted to the 
> {{dataXceiverServer}} should probably be tied to the 
> {{dfs.datanode.max.transfer.threads}} configurations, and neither of these 
> methods are. Instead, they should be submitted into the same thread pool with 
> its own thread group (probably the default thread group, unless someone 
> suggests otherwise), which is what I have included in this patch.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14322) RBF: Security manager should not load if security is disabled

2019-02-27 Thread CR Hota (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14322?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

CR Hota updated HDFS-14322:
---
Status: Patch Available  (was: Open)

> RBF: Security manager should not load if security is disabled
> -
>
> Key: HDFS-14322
> URL: https://issues.apache.org/jira/browse/HDFS-14322
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: CR Hota
>Assignee: CR Hota
>Priority: Major
> Attachments: HDFS-14322-HDFS-13532.001.patch
>
>
> Security manager tries to instantiate secret manager irrespective of whether 
> security is on/off. A simple optimization should be done to avoid loading 
> the secret manager if security is turned off.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14322) RBF: Security manager should not load if security is disabled

2019-02-27 Thread Íñigo Goiri (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14322?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16779947#comment-16779947
 ] 

Íñigo Goiri commented on HDFS-14322:


Thanks [~crh] for [^HDFS-14322-HDFS-13532.001.patch].
 A couple of comments:
 * Change the order of the equals, as we know for sure that authMethodToInit 
won't be null.
 * Add a space between the if and the parenthesis.
 * For the exception, instead of printing the whole cause, what about 
{{LOG.error("Could not instantiate {}: {}", clazz.getSimpleName(), 
e.getCause().getMessage());}} or something else that doesn't carry a full stack 
trace? The only issue is that right now we are using a precondition (throwing 
an NPE) for the ZK connection string, so maybe we want to throw a proper 
exception there instead of just relying on the precondition error. In any case, 
{{Zookeeper connection string cannot be null}} is fine as a message.
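For concreteness, a hedged sketch of the two suggestions above (illustrative 
names; not the actual HDFS-14322 patch):

{code}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Hedged sketch of the two review points above (illustrative; not the patch).
final class ReviewSketch {
  private static final Logger LOG = LoggerFactory.getLogger(ReviewSketch.class);

  static boolean matches(Object authMethodToInit, Object configured) {
    // authMethodToInit is known to be non-null (SecurityUtil returns a
    // default), so it is safe on the left-hand side of equals().
    return authMethodToInit.equals(configured);
  }

  static void logFailure(Class<?> clazz, Exception e) {
    // compact one-line error instead of printing the whole cause
    LOG.error("Could not instantiate {}: {}", clazz.getSimpleName(),
        e.getCause().getMessage());
  }
}
{code}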

> RBF: Security manager should not load if security is disabled
> -
>
> Key: HDFS-14322
> URL: https://issues.apache.org/jira/browse/HDFS-14322
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: CR Hota
>Assignee: CR Hota
>Priority: Major
> Attachments: HDFS-14322-HDFS-13532.001.patch
>
>
> Security manager tries to instantiate secret manager irrespective of whether 
> security is on/off. A simple optimization should be done to avoid loading 
> the secret manager if security is turned off.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14295) Add Threadpool for DataTransfers

2019-02-27 Thread BELUGA BEHR (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14295?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16779937#comment-16779937
 ] 

BELUGA BEHR commented on HDFS-14295:


[~elgoiri] Unfortunately, we cannot.  The running threads depend on (interact 
with) some of these other services, so we need to wait for these threads to 
complete (max 30s) before closing down the other services.  There are going to 
be a lot of race conditions if these threads are running while the rest of the 
services are being closed.

This may be worth its own JIRA: ensuring that all of the components are shut 
down in the proper order.
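A hedged sketch of that shutdown ordering (pool name assumed; not the actual 
patch):

{code}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.TimeUnit;

// Hedged sketch of the shutdown ordering described above (pool name assumed).
final class ShutdownSketch {
  static void stopTransfers(ExecutorService dataTransferPool)
      throws InterruptedException {
    dataTransferPool.shutdown();                 // stop accepting new transfers
    if (!dataTransferPool.awaitTermination(30, TimeUnit.SECONDS)) {
      dataTransferPool.shutdownNow();            // give up after 30s, as today
    }
    // only after this point is it safe to close the services that the
    // in-flight transfer threads interact with
  }
}
{code}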

> Add Threadpool for DataTransfers
> 
>
> Key: HDFS-14295
> URL: https://issues.apache.org/jira/browse/HDFS-14295
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 3.2.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Major
> Attachments: HDFS-14295.1.patch, HDFS-14295.2.patch, 
> HDFS-14295.3.patch, HDFS-14295.4.patch, HDFS-14295.5.patch, 
> HDFS-14295.6.patch, HDFS-14295.7.patch, HDFS-14295.8.patch
>
>
> When a DataNode performs a data transfer of a block, it spins up a new thread for each 
> transfer.  
> [Here|https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java#L2339]
>  and 
> [Here|https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java#L3019-L3022].
>    Instead, add the threads to a {{CachedThreadPool}} so that when their 
> threads complete the transfer, they can be re-used for another transfer. This 
> should save resources spent on creating and spinning up transfer threads.
> One thing I'll point out that's a bit off, which I address in this patch, ...
> There are two places in the code where a {{DataTransfer}} thread is started. 
> In [one 
> place|https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java#L2339-L2341],
>  it's started in a default thread group. In [another 
> place|https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java#L3019-L3022],
>  it's started in the 
> [dataXceiverServer|https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java#L1164]
>  thread group.
> I do not think it's correct to include any of these threads in the 
> {{dataXceiverServer}} thread group. Anything submitted to the 
> {{dataXceiverServer}} should probably be tied to the 
> {{dfs.datanode.max.transfer.threads}} configurations, and neither of these 
> methods are. Instead, they should be submitted into the same thread pool with 
> its own thread group (probably the default thread group, unless someone 
> suggests otherwise), which is what I have included in this patch.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14295) Add Threadpool for DataTransfers

2019-02-27 Thread BELUGA BEHR (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14295?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16779943#comment-16779943
 ] 

BELUGA BEHR commented on HDFS-14295:


Also note that this already happens for the main thread pool.  There is a 
"sleep until threads are done":

https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java#L2061-L2084
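That pattern boils down to something like this hedged, simplified sketch (not 
the actual DataNode code):

{code}
// Hedged sketch of the "sleep until threads are done" pattern linked above
// (simplified; not the actual DataNode shutdown code).
final class ThreadGroupJoin {
  static void join(ThreadGroup group, long pollMillis)
      throws InterruptedException {
    while (group.activeCount() > 0) {
      Thread.sleep(pollMillis);  // poll until in-flight transfer threads exit
    }
  }
}
{code}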

> Add Threadpool for DataTransfers
> 
>
> Key: HDFS-14295
> URL: https://issues.apache.org/jira/browse/HDFS-14295
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 3.2.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Major
> Attachments: HDFS-14295.1.patch, HDFS-14295.2.patch, 
> HDFS-14295.3.patch, HDFS-14295.4.patch, HDFS-14295.5.patch, 
> HDFS-14295.6.patch, HDFS-14295.7.patch, HDFS-14295.8.patch
>
>
> When a DataNode performs a data transfer of a block, it spins up a new thread for each 
> transfer.  
> [Here|https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java#L2339]
>  and 
> [Here|https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java#L3019-L3022].
>    Instead, add the threads to a {{CachedThreadPool}} so that when their 
> threads complete the transfer, they can be re-used for another transfer. This 
> should save resources spent on creating and spinning up transfer threads.
> One thing I'll point out that's a bit off, which I address in this patch, ...
> There are two places in the code where a {{DataTransfer}} thread is started. 
> In [one 
> place|https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java#L2339-L2341],
>  it's started in a default thread group. In [another 
> place|https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java#L3019-L3022],
>  it's started in the 
> [dataXceiverServer|https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java#L1164]
>  thread group.
> I do not think it's correct to include any of these threads in the 
> {{dataXceiverServer}} thread group. Anything submitted to the 
> {{dataXceiverServer}} should probably be tied to the 
> {{dfs.datanode.max.transfer.threads}} configurations, and neither of these 
> methods are. Instead, they should be submitted into the same thread pool with 
> its own thread group (probably the default thread group, unless someone 
> suggests otherwise), which is what I have included in this patch.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14295) Add Threadpool for DataTransfers

2019-02-27 Thread Íñigo Goiri (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14295?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16779942#comment-16779942
 ] 

Íñigo Goiri commented on HDFS-14295:


Mixed feelings here... I would prefer to leave it as close as possible to the 
existing behavior.
Currently we just ignore the daemons running and go ahead; so this would be 
equivalent to doing a shutdown at the very end (or not even that).
We can do a follow-up for the extra time.

> Add Threadpool for DataTransfers
> 
>
> Key: HDFS-14295
> URL: https://issues.apache.org/jira/browse/HDFS-14295
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 3.2.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Major
> Attachments: HDFS-14295.1.patch, HDFS-14295.2.patch, 
> HDFS-14295.3.patch, HDFS-14295.4.patch, HDFS-14295.5.patch, 
> HDFS-14295.6.patch, HDFS-14295.7.patch, HDFS-14295.8.patch
>
>
> When a DataNode performs a data transfer of a block, it spins up a new thread for each 
> transfer.  
> [Here|https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java#L2339]
>  and 
> [Here|https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java#L3019-L3022].
>    Instead, add the threads to a {{CachedThreadPool}} so that when their 
> threads complete the transfer, they can be re-used for another transfer. This 
> should save resources spent on creating and spinning up transfer threads.
> One thing I'll point out that's a bit off, which I address in this patch, ...
> There are two places in the code where a {{DataTransfer}} thread is started. 
> In [one 
> place|https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java#L2339-L2341],
>  it's started in a default thread group. In [another 
> place|https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java#L3019-L3022],
>  it's started in the 
> [dataXceiverServer|https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java#L1164]
>  thread group.
> I do not think it's correct to include any of these threads in the 
> {{dataXceiverServer}} thread group. Anything submitted to the 
> {{dataXceiverServer}} should probably be tied to the 
> {{dfs.datanode.max.transfer.threads}} configurations, and neither of these 
> methods are. Instead, they should be submitted into the same thread pool with 
> its own thread group (probably the default thread group, unless someone 
> suggests otherwise), which is what I have included in this patch.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14316) RBF: Support unavailable subclusters for mount points with multiple destinations

2019-02-27 Thread Íñigo Goiri (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14316?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-14316:
---
Attachment: HDFS-14316-HDFS-13891.003.patch

> RBF: Support unavailable subclusters for mount points with multiple 
> destinations
> 
>
> Key: HDFS-14316
> URL: https://issues.apache.org/jira/browse/HDFS-14316
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
> Attachments: HDFS-14316-HDFS-13891.000.patch, 
> HDFS-14316-HDFS-13891.001.patch, HDFS-14316-HDFS-13891.002.patch, 
> HDFS-14316-HDFS-13891.003.patch
>
>
> Currently mount points with multiple destinations (e.g., HASH_ALL) fail 
> writes when the destination subcluster is down. We need an option to allow 
> writing in other subclusters when one is down.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1189) Recon DB schema and ORM

2019-02-27 Thread Siddharth Wagle (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1189?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siddharth Wagle updated HDDS-1189:
--
Description: 
_Objectives_
- Define V1 of the db schema for recon service
- The current proposal is to use jOOQ as the ORM for SQL interaction, for two 
main reasons: a) a powerful DSL for querying that abstracts out SQL dialects, 
and b) seamless code-to-schema and schema-to-code transitions, critical for 
creating DDL through the code and for unit testing across versions of the 
application.
- Add an e2e unit test suite for Recon entities, created based on the design doc

  was:
_Objectives_
- Define V1 of the db schema for recon service
- The current proposal is to use jOOQ as the ORM for SQL interaction. For two 
main reasons: a) powerful DSL for querying that abstracts out SQL dialects, b) 
Allows code to schema and schema to code seamless transition, critical for 
creating DDL through the code and unit testing across versions of the 
application.
- Add e2e unit tests suite for Recon entities, created based on the design doc


> Recon DB schema and ORM
> ---
>
> Key: HDDS-1189
> URL: https://issues.apache.org/jira/browse/HDDS-1189
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Affects Versions: 0.5.0
>Reporter: Siddharth Wagle
>Assignee: Aravindan Vijayan
>Priority: Major
> Fix For: 0.5.0
>
>
> _Objectives_
> - Define V1 of the db schema for recon service
> - The current proposal is to use jOOQ as the ORM for SQL interaction, for two 
> main reasons: a) a powerful DSL for querying that abstracts out SQL dialects, 
> and b) seamless code-to-schema and schema-to-code transitions, critical for 
> creating DDL through the code and for unit testing across versions of the 
> application.
> - Add an e2e unit test suite for Recon entities, created based on the design doc



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-1189) Recon DB schema and ORM

2019-02-27 Thread Siddharth Wagle (JIRA)
Siddharth Wagle created HDDS-1189:
-

 Summary: Recon DB schema and ORM
 Key: HDDS-1189
 URL: https://issues.apache.org/jira/browse/HDDS-1189
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Affects Versions: 0.5.0
Reporter: Siddharth Wagle
Assignee: Aravindan Vijayan
 Fix For: 0.5.0


_Objectives_
- Define V1 of the db schema for recon service
- The current proposal is to use jOOQ as the ORM for SQL interaction, for two 
main reasons: a) a powerful DSL for querying that abstracts out SQL dialects, 
and b) seamless code-to-schema and schema-to-code transitions, critical for 
creating DDL through the code and for unit testing across versions of the 
application.
- Add an e2e unit test suite for Recon entities, created based on the design doc
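As a hedged illustration of the jOOQ DSL in question, under the assumption of 
a SQLite dialect and hypothetical table/column names (not the actual Recon 
schema):

{code}
import static org.jooq.impl.DSL.*;

import java.sql.Connection;
import java.sql.DriverManager;
import org.jooq.DSLContext;
import org.jooq.SQLDialect;

// Hedged illustration of the jOOQ DSL; table/column names are hypothetical
// and not the actual Recon schema.
public final class JooqSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn =
        DriverManager.getConnection("jdbc:sqlite::memory:")) {
      DSLContext ctx = using(conn, SQLDialect.SQLITE);
      // DDL from code: the same statement works across supported dialects
      ctx.createTableIfNotExists(table("container_history"))
         .column(field("container_id", Long.class))
         .column(field("state", String.class))
         .execute();
      ctx.insertInto(table("container_history"))
         .columns(field("container_id"), field("state"))
         .values(1L, "OPEN")
         .execute();
      System.out.println(ctx.selectFrom(table("container_history")).fetch());
    }
  }
}
{code}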



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1188) Implement a skeleton patch for Recon server with initial set of interfaces

2019-02-27 Thread Siddharth Wagle (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1188?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siddharth Wagle updated HDDS-1188:
--
Description: Jira to define package structure, maven module, recon server 
application.  (was: Jira to define package structure, maven module, recon 
server application and initial db schema.)

> Implement a skeleton patch for Recon server with initial set of interfaces
> --
>
> Key: HDDS-1188
> URL: https://issues.apache.org/jira/browse/HDDS-1188
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Affects Versions: 0.5.0
>Reporter: Siddharth Wagle
>Assignee: Siddharth Wagle
>Priority: Major
> Fix For: 0.5.0
>
>
> Jira to define package structure, maven module, recon server application.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-1188) Implement a skeleton patch for Recon server with initial set of interfaces

2019-02-27 Thread Siddharth Wagle (JIRA)
Siddharth Wagle created HDDS-1188:
-

 Summary: Implement a skeleton patch for Recon server with initial 
set of interfaces
 Key: HDDS-1188
 URL: https://issues.apache.org/jira/browse/HDDS-1188
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Affects Versions: 0.5.0
Reporter: Siddharth Wagle
Assignee: Siddharth Wagle
 Fix For: 0.5.0


Jira to define package structure, maven module, recon server application and 
initial db schema.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1183) Override getDelegationToken API for OzoneFileSystem

2019-02-27 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1183?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-1183:
-
Status: Patch Available  (was: Open)

> Override getDelegationToken API for OzoneFileSystem
> ---
>
> Key: HDDS-1183
> URL: https://issues.apache.org/jira/browse/HDDS-1183
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> This includes addDelegationToken/renewDelegationToken/cancelDelegationToken 
> so that MR jobs can collect tokens correctly upon job submission time. 
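A hedged sketch of the hook being described; the OzoneTokenSource interface 
below stands in for the real Ozone client and is purely illustrative:

{code}
import java.io.IOException;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.security.token.Token;

// Hedged sketch of the delegation-token hook (illustrative only; see the
// HDDS-1183 pull request for the real OzoneFileSystem change).
final class DelegationTokenSketch {
  interface OzoneTokenSource {                // stands in for the Ozone client
    Token<?> getDelegationToken(Text renewer) throws IOException;
  }

  // What an overridden FileSystem#getDelegationToken(String) would delegate
  // to, so MR job submission collects a token instead of getting null back.
  static Token<?> fetch(OzoneTokenSource client, String renewer)
      throws IOException {
    return client.getDelegationToken(new Text(renewer));
  }
}
{code}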



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-451) PutKey failed due to error "Rejecting write chunk request. Chunk overwrite without explicit request"

2019-02-27 Thread Jitendra Nath Pandey (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-451?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16779913#comment-16779913
 ] 

Jitendra Nath Pandey commented on HDDS-451:
---

Yes, I think we should resolve it. We have made two fixes related to duplicate 
transactions in HDDS-1026 and RATIS-489.

> PutKey failed due to error "Rejecting write chunk request. Chunk overwrite 
> without explicit request"
> 
>
> Key: HDDS-451
> URL: https://issues.apache.org/jira/browse/HDDS-451
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Affects Versions: 0.2.1
>Reporter: Nilotpal Nandi
>Assignee: Shashikant Banerjee
>Priority: Blocker
>  Labels: alpha2
> Attachments: all-node-ozone-logs-1536841590.tar.gz
>
>
> steps taken :
> --
>  # Ran Put Key command to write 50GB data. Put Key client operation failed 
> after 17 mins.
> error seen in ozone.log:
> 
>  
> {code}
> 2018-09-13 12:11:53,734 [ForkJoinPool.commonPool-worker-20] DEBUG 
> (ChunkManagerImpl.java:85) - writing 
> chunk:bd80b58a5eba888200a4832a0f2aafb3_stream_5f3b2505-6964-45c9-a7ad-827388a1e6a0_chunk_1
>  chunk stage:COMMIT_DATA chunk 
> file:/tmp/hadoop-root/dfs/data/hdds/de0a9e01-4a12-40e3-b567-51b9bd83248e/current/containerDir0/16/chunks/bd80b58a5eba888200a4832a0f2aafb3_stream_5f3b2505-6964-45c9-a7ad-827388a1e6a0_chunk_1
>  tmp chunk file
> 2018-09-13 12:11:56,576 [pool-3-thread-60] DEBUG (ChunkManagerImpl.java:85) - 
> writing 
> chunk:bd80b58a5eba888200a4832a0f2aafb3_stream_5f3b2505-6964-45c9-a7ad-827388a1e6a0_chunk_2
>  chunk stage:WRITE_DATA chunk 
> file:/tmp/hadoop-root/dfs/data/hdds/de0a9e01-4a12-40e3-b567-51b9bd83248e/current/containerDir0/16/chunks/bd80b58a5eba888200a4832a0f2aafb3_stream_5f3b2505-6964-45c9-a7ad-827388a1e6a0_chunk_2
>  tmp chunk file
> 2018-09-13 12:11:56,739 [ForkJoinPool.commonPool-worker-20] DEBUG 
> (ChunkManagerImpl.java:85) - writing 
> chunk:bd80b58a5eba888200a4832a0f2aafb3_stream_5f3b2505-6964-45c9-a7ad-827388a1e6a0_chunk_2
>  chunk stage:COMMIT_DATA chunk 
> file:/tmp/hadoop-root/dfs/data/hdds/de0a9e01-4a12-40e3-b567-51b9bd83248e/current/containerDir0/16/chunks/bd80b58a5eba888200a4832a0f2aafb3_stream_5f3b2505-6964-45c9-a7ad-827388a1e6a0_chunk_2
>  tmp chunk file
> 2018-09-13 12:12:21,410 [Datanode State Machine Thread - 0] DEBUG 
> (DatanodeStateMachine.java:148) - Executing cycle Number : 206
> 2018-09-13 12:12:51,411 [Datanode State Machine Thread - 0] DEBUG 
> (DatanodeStateMachine.java:148) - Executing cycle Number : 207
> 2018-09-13 12:12:53,525 [BlockDeletingService#1] DEBUG 
> (TopNOrderedContainerDeletionChoosingPolicy.java:79) - Stop looking for next 
> container, there is no pending deletion block contained in remaining 
> containers.
> 2018-09-13 12:12:55,048 [Datanode ReportManager Thread - 1] DEBUG 
> (ContainerSet.java:191) - Starting container report iteration.
> 2018-09-13 12:13:02,626 [pool-3-thread-1] ERROR (ChunkUtils.java:244) - 
> Rejecting write chunk request. Chunk overwrite without explicit request. 
> ChunkInfo{chunkName='bd80b58a5eba888200a4832a0f2aafb3_stream_5f3b2505-6964-45c9-a7ad-827388a1e6a0_chunk_2,
>  offset=0, len=16777216}
> 2018-09-13 12:13:03,035 [pool-3-thread-1] INFO (ContainerUtils.java:149) - 
> Operation: WriteChunk : Trace ID: 54834b29-603d-4ba9-9d68-0885215759d8 : 
> Message: Rejecting write chunk request. OverWrite flag 
> required.ChunkInfo{chunkName='bd80b58a5eba888200a4832a0f2aafb3_stream_5f3b2505-6964-45c9-a7ad-827388a1e6a0_chunk_2,
>  offset=0, len=16777216} : Result: OVERWRITE_FLAG_REQUIRED
> 2018-09-13 12:13:03,037 [ForkJoinPool.commonPool-worker-11] ERROR 
> (ChunkUtils.java:244) - Rejecting write chunk request. Chunk overwrite 
> without explicit request. 
> ChunkInfo{chunkName='bd80b58a5eba888200a4832a0f2aafb3_stream_5f3b2505-6964-45c9-a7ad-827388a1e6a0_chunk_2,
>  offset=0, len=16777216}
> 2018-09-13 12:13:03,037 [ForkJoinPool.commonPool-worker-11] INFO 
> (ContainerUtils.java:149) - Operation: WriteChunk : Trace ID: 
> 54834b29-603d-4ba9-9d68-0885215759d8 : Message: Rejecting write chunk 
> request. OverWrite flag 
> required.ChunkInfo{chunkName='bd80b58a5eba888200a4832a0f2aafb3_stream_5f3b2505-6964-45c9-a7ad-827388a1e6a0_chunk_2,
>  offset=0, len=16777216} : Result: OVERWRITE_FLAG_REQUIRED
>  
> {code}
>  
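For readers of the quoted log, a hedged sketch of the guard that produces 
OVERWRITE_FLAG_REQUIRED (names assumed; not the actual ChunkUtils code):

{code}
import java.io.File;

// Hedged sketch of the overwrite guard behind the log lines above
// (names assumed; not the actual ChunkUtils implementation).
final class OverwriteGuardSketch {
  static void validateChunkWrite(File chunkFile, boolean overwriteRequested) {
    if (chunkFile.exists() && !overwriteRequested) {
      // A retried/duplicate transaction lands here: the chunk already exists
      // on disk and the request did not carry the overwrite flag.
      throw new IllegalStateException(
          "Rejecting write chunk request. OverWrite flag required: "
              + chunkFile);
    }
  }
}
{code}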



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13699) Add DFSClient sending handshake token to DataNode, and allow DataNode overwrite downstream QOP

2019-02-27 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16779909#comment-16779909
 ] 

Hadoop QA commented on HDFS-13699:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m  
1s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 22m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  4m 
 5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
21m 28s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  6m  
9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m  
8s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
26s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 20m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 20m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 20m 
51s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
3m 58s{color} | {color:orange} root: The patch generated 6 new + 743 unchanged 
- 3 fixed = 749 total (was 746) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 38s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
34s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m  
3s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
56s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}100m  5s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
50s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}240m 17s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.tools.TestDFSZKFailoverController |
|   | hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFS |
|   | hadoop.hdfs.qjournal.server.TestJournalNodeSync |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-13699 |
| JIRA Patch URL | 

[jira] [Updated] (HDDS-1183) Override getDelegationToken API for OzoneFileSystem

2019-02-27 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1183?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-1183:
-
Labels: pull-request-available  (was: )

> Override getDelegationToken API for OzoneFileSystem
> ---
>
> Key: HDDS-1183
> URL: https://issues.apache.org/jira/browse/HDDS-1183
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
>
> This includes addDelegationToken/renewDelegationToken/cancelDelegationToken 
> so that MR jobs can collect tokens correctly upon job submission time. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDDS-451) PutKey failed due to error "Rejecting write chunk request. Chunk overwrite without explicit request"

2019-02-27 Thread Tsz Wo Nicholas Sze (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-451?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze resolved HDDS-451.
--
Resolution: Cannot Reproduce

Resolving as "Cannot Reproduce".

> PutKey failed due to error "Rejecting write chunk request. Chunk overwrite 
> without explicit request"
> 
>
> Key: HDDS-451
> URL: https://issues.apache.org/jira/browse/HDDS-451
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Affects Versions: 0.2.1
>Reporter: Nilotpal Nandi
>Assignee: Shashikant Banerjee
>Priority: Blocker
>  Labels: alpha2
> Attachments: all-node-ozone-logs-1536841590.tar.gz
>
>
> Steps taken:
> --
>  # Ran the Put Key command to write 50 GB of data. The Put Key client 
> operation failed after 17 mins.
> Error seen in ozone.log:
> 
>  
> {code}
> 2018-09-13 12:11:53,734 [ForkJoinPool.commonPool-worker-20] DEBUG 
> (ChunkManagerImpl.java:85) - writing 
> chunk:bd80b58a5eba888200a4832a0f2aafb3_stream_5f3b2505-6964-45c9-a7ad-827388a1e6a0_chunk_1
>  chunk stage:COMMIT_DATA chunk 
> file:/tmp/hadoop-root/dfs/data/hdds/de0a9e01-4a12-40e3-b567-51b9bd83248e/current/containerDir0/16/chunks/bd80b58a5eba888200a4832a0f2aafb3_stream_5f3b2505-6964-45c9-a7ad-827388a1e6a0_chunk_1
>  tmp chunk file
> 2018-09-13 12:11:56,576 [pool-3-thread-60] DEBUG (ChunkManagerImpl.java:85) - 
> writing 
> chunk:bd80b58a5eba888200a4832a0f2aafb3_stream_5f3b2505-6964-45c9-a7ad-827388a1e6a0_chunk_2
>  chunk stage:WRITE_DATA chunk 
> file:/tmp/hadoop-root/dfs/data/hdds/de0a9e01-4a12-40e3-b567-51b9bd83248e/current/containerDir0/16/chunks/bd80b58a5eba888200a4832a0f2aafb3_stream_5f3b2505-6964-45c9-a7ad-827388a1e6a0_chunk_2
>  tmp chunk file
> 2018-09-13 12:11:56,739 [ForkJoinPool.commonPool-worker-20] DEBUG 
> (ChunkManagerImpl.java:85) - writing 
> chunk:bd80b58a5eba888200a4832a0f2aafb3_stream_5f3b2505-6964-45c9-a7ad-827388a1e6a0_chunk_2
>  chunk stage:COMMIT_DATA chunk 
> file:/tmp/hadoop-root/dfs/data/hdds/de0a9e01-4a12-40e3-b567-51b9bd83248e/current/containerDir0/16/chunks/bd80b58a5eba888200a4832a0f2aafb3_stream_5f3b2505-6964-45c9-a7ad-827388a1e6a0_chunk_2
>  tmp chunk file
> 2018-09-13 12:12:21,410 [Datanode State Machine Thread - 0] DEBUG 
> (DatanodeStateMachine.java:148) - Executing cycle Number : 206
> 2018-09-13 12:12:51,411 [Datanode State Machine Thread - 0] DEBUG 
> (DatanodeStateMachine.java:148) - Executing cycle Number : 207
> 2018-09-13 12:12:53,525 [BlockDeletingService#1] DEBUG 
> (TopNOrderedContainerDeletionChoosingPolicy.java:79) - Stop looking for next 
> container, there is no pending deletion block contained in remaining 
> containers.
> 2018-09-13 12:12:55,048 [Datanode ReportManager Thread - 1] DEBUG 
> (ContainerSet.java:191) - Starting container report iteration.
> 2018-09-13 12:13:02,626 [pool-3-thread-1] ERROR (ChunkUtils.java:244) - 
> Rejecting write chunk request. Chunk overwrite without explicit request. 
> ChunkInfo{chunkName='bd80b58a5eba888200a4832a0f2aafb3_stream_5f3b2505-6964-45c9-a7ad-827388a1e6a0_chunk_2,
>  offset=0, len=16777216}
> 2018-09-13 12:13:03,035 [pool-3-thread-1] INFO (ContainerUtils.java:149) - 
> Operation: WriteChunk : Trace ID: 54834b29-603d-4ba9-9d68-0885215759d8 : 
> Message: Rejecting write chunk request. OverWrite flag 
> required.ChunkInfo{chunkName='bd80b58a5eba888200a4832a0f2aafb3_stream_5f3b2505-6964-45c9-a7ad-827388a1e6a0_chunk_2,
>  offset=0, len=16777216} : Result: OVERWRITE_FLAG_REQUIRED
> 2018-09-13 12:13:03,037 [ForkJoinPool.commonPool-worker-11] ERROR 
> (ChunkUtils.java:244) - Rejecting write chunk request. Chunk overwrite 
> without explicit request. 
> ChunkInfo{chunkName='bd80b58a5eba888200a4832a0f2aafb3_stream_5f3b2505-6964-45c9-a7ad-827388a1e6a0_chunk_2,
>  offset=0, len=16777216}
> 2018-09-13 12:13:03,037 [ForkJoinPool.commonPool-worker-11] INFO 
> (ContainerUtils.java:149) - Operation: WriteChunk : Trace ID: 
> 54834b29-603d-4ba9-9d68-0885215759d8 : Message: Rejecting write chunk 
> request. OverWrite flag 
> required.ChunkInfo{chunkName='bd80b58a5eba888200a4832a0f2aafb3_stream_5f3b2505-6964-45c9-a7ad-827388a1e6a0_chunk_2,
>  offset=0, len=16777216} : Result: OVERWRITE_FLAG_REQUIRED
>  
> {code}
>  
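
As context for the resolution, a minimal sketch of the guard behavior those
log lines describe; OverwriteGuardSketch and verifyChunkOverwrite are
illustrative names, not the actual ChunkUtils code:

{code}
import java.io.File;
import java.io.IOException;

public class OverwriteGuardSketch {
  // A chunk write is rejected when the target chunk file already exists and
  // the request did not set an explicit overwrite flag.
  static void verifyChunkOverwrite(File chunkFile, boolean overwriteRequested)
      throws IOException {
    if (chunkFile.exists() && !overwriteRequested) {
      // Mirrors "Rejecting write chunk request. OverWrite flag required."
      throw new IOException("OVERWRITE_FLAG_REQUIRED: " + chunkFile.getName());
    }
  }
}
{code}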



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1183) Override getDelegationToken API for OzoneFileSystem

2019-02-27 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1183?focusedWorklogId=205494=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-205494
 ]

ASF GitHub Bot logged work on HDDS-1183:


Author: ASF GitHub Bot
Created on: 27/Feb/19 23:39
Start Date: 27/Feb/19 23:39
Worklog Time Spent: 10m 
  Work Description: xiaoyuyao commented on pull request #526: HDDS-1183. 
Override getDelegationToken API for OzoneFileSystem. Contributed by Xiaoyu Yao.
URL: https://github.com/apache/hadoop/pull/526
 
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 205494)
Time Spent: 10m
Remaining Estimate: 0h

> Override getDelegationToken API for OzoneFileSystem
> ---
>
> Key: HDDS-1183
> URL: https://issues.apache.org/jira/browse/HDDS-1183
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> This includes addDelegationToken/renewDelegationToken/cancelDelegationToken 
> so that MR jobs can collect tokens correctly at job submission time. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-372) There are three buffer copies in BlockOutputStream

2019-02-27 Thread Tsz Wo Nicholas Sze (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-372?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16779903#comment-16779903
 ] 

Tsz Wo Nicholas Sze commented on HDDS-372:
--

[~shashikant], it is great that you have picked this up.  Thanks.

> There are three buffer copies in BlockOutputStream
> --
>
> Key: HDDS-372
> URL: https://issues.apache.org/jira/browse/HDDS-372
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Client
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Shashikant Banerjee
>Priority: Major
> Attachments: HDDS-372.20180829.patch
>
>
> Currently, there are three buffer copies in ChunkOutputStream:
>  # from byte[] to ByteBuffer,
>  # from ByteBuffer to ByteString, and
>  # from ByteString to ByteBuffer for checksum computation.
> We should eliminate the ByteBuffer in the middle.
> For zero-copy I/O, we should support WritableByteChannel instead of 
> OutputStream. That won't be done in this JIRA.
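
A minimal standalone sketch of the copy in step 2 and a zero-copy alternative,
assuming protobuf-java 3.1+ (for UnsafeByteOperations) on the classpath; an
illustration, not the actual BlockOutputStream code:

{code}
import java.nio.ByteBuffer;

import com.google.protobuf.ByteString;
import com.google.protobuf.UnsafeByteOperations;

public class CopyVsWrapSketch {
  public static void main(String[] args) {
    byte[] data = {1, 2, 3, 4};
    ByteBuffer buffer = ByteBuffer.wrap(data);  // wraps, no copy yet
    // copyFrom duplicates the remaining bytes: this is the extra copy.
    ByteString copied = ByteString.copyFrom(buffer.duplicate());
    // unsafeWrap shares the backing array: safe only if 'data' is not
    // mutated while the ByteString is in use.
    ByteString wrapped = UnsafeByteOperations.unsafeWrap(buffer);
    System.out.println(copied.size() + " == " + wrapped.size());
  }
}
{code}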



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-134) SCM CA: OM sends CSR and uses certificate issued by SCM

2019-02-27 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16779899#comment-16779899
 ] 

Hadoop QA commented on HDDS-134:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
27s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  5m 
29s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 24m 
46s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 19m 
34s{color} | {color:red} root in trunk failed. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  4m 
 2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
18m 50s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
7s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
27s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
52s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 18m 
34s{color} | {color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 18m 34s{color} 
| {color:red} root in the patch failed. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 52s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
42s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  1m 37s{color} 
| {color:red} common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
53s{color} | {color:green} ozone-manager in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 13m  8s{color} 
| {color:red} integration-test in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
51s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}136m 12s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdds.security.x509.certificate.client.TestDefaultCertificateClient |
|   | hadoop.ozone.scm.TestSCMMXBean |
|   | hadoop.ozone.om.TestOzoneManagerHA |
|   | hadoop.ozone.om.TestOzoneManager |
|   | hadoop.ozone.ozShell.TestOzoneShell |
|   | hadoop.ozone.client.rpc.TestFailureHandlingByClient |
|   | 

[jira] [Commented] (HDDS-451) PutKey failed due to error "Rejecting write chunk request. Chunk overwrite without explicit request"

2019-02-27 Thread Tsz Wo Nicholas Sze (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-451?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16779883#comment-16779883
 ] 

Tsz Wo Nicholas Sze commented on HDDS-451:
--

Let's resolve this, then? The description and stack trace have become stale. We 
should file a new JIRA if we see a problem in the future.

> PutKey failed due to error "Rejecting write chunk request. Chunk overwrite 
> without explicit request"
> 
>
> Key: HDDS-451
> URL: https://issues.apache.org/jira/browse/HDDS-451
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Affects Versions: 0.2.1
>Reporter: Nilotpal Nandi
>Assignee: Shashikant Banerjee
>Priority: Blocker
>  Labels: alpha2
> Attachments: all-node-ozone-logs-1536841590.tar.gz
>
>
> Steps taken:
> --
>  # Ran the Put Key command to write 50 GB of data. The Put Key client 
> operation failed after 17 mins.
> Error seen in ozone.log:
> 
>  
> {code}
> 2018-09-13 12:11:53,734 [ForkJoinPool.commonPool-worker-20] DEBUG 
> (ChunkManagerImpl.java:85) - writing 
> chunk:bd80b58a5eba888200a4832a0f2aafb3_stream_5f3b2505-6964-45c9-a7ad-827388a1e6a0_chunk_1
>  chunk stage:COMMIT_DATA chunk 
> file:/tmp/hadoop-root/dfs/data/hdds/de0a9e01-4a12-40e3-b567-51b9bd83248e/current/containerDir0/16/chunks/bd80b58a5eba888200a4832a0f2aafb3_stream_5f3b2505-6964-45c9-a7ad-827388a1e6a0_chunk_1
>  tmp chunk file
> 2018-09-13 12:11:56,576 [pool-3-thread-60] DEBUG (ChunkManagerImpl.java:85) - 
> writing 
> chunk:bd80b58a5eba888200a4832a0f2aafb3_stream_5f3b2505-6964-45c9-a7ad-827388a1e6a0_chunk_2
>  chunk stage:WRITE_DATA chunk 
> file:/tmp/hadoop-root/dfs/data/hdds/de0a9e01-4a12-40e3-b567-51b9bd83248e/current/containerDir0/16/chunks/bd80b58a5eba888200a4832a0f2aafb3_stream_5f3b2505-6964-45c9-a7ad-827388a1e6a0_chunk_2
>  tmp chunk file
> 2018-09-13 12:11:56,739 [ForkJoinPool.commonPool-worker-20] DEBUG 
> (ChunkManagerImpl.java:85) - writing 
> chunk:bd80b58a5eba888200a4832a0f2aafb3_stream_5f3b2505-6964-45c9-a7ad-827388a1e6a0_chunk_2
>  chunk stage:COMMIT_DATA chunk 
> file:/tmp/hadoop-root/dfs/data/hdds/de0a9e01-4a12-40e3-b567-51b9bd83248e/current/containerDir0/16/chunks/bd80b58a5eba888200a4832a0f2aafb3_stream_5f3b2505-6964-45c9-a7ad-827388a1e6a0_chunk_2
>  tmp chunk file
> 2018-09-13 12:12:21,410 [Datanode State Machine Thread - 0] DEBUG 
> (DatanodeStateMachine.java:148) - Executing cycle Number : 206
> 2018-09-13 12:12:51,411 [Datanode State Machine Thread - 0] DEBUG 
> (DatanodeStateMachine.java:148) - Executing cycle Number : 207
> 2018-09-13 12:12:53,525 [BlockDeletingService#1] DEBUG 
> (TopNOrderedContainerDeletionChoosingPolicy.java:79) - Stop looking for next 
> container, there is no pending deletion block contained in remaining 
> containers.
> 2018-09-13 12:12:55,048 [Datanode ReportManager Thread - 1] DEBUG 
> (ContainerSet.java:191) - Starting container report iteration.
> 2018-09-13 12:13:02,626 [pool-3-thread-1] ERROR (ChunkUtils.java:244) - 
> Rejecting write chunk request. Chunk overwrite without explicit request. 
> ChunkInfo{chunkName='bd80b58a5eba888200a4832a0f2aafb3_stream_5f3b2505-6964-45c9-a7ad-827388a1e6a0_chunk_2,
>  offset=0, len=16777216}
> 2018-09-13 12:13:03,035 [pool-3-thread-1] INFO (ContainerUtils.java:149) - 
> Operation: WriteChunk : Trace ID: 54834b29-603d-4ba9-9d68-0885215759d8 : 
> Message: Rejecting write chunk request. OverWrite flag 
> required.ChunkInfo{chunkName='bd80b58a5eba888200a4832a0f2aafb3_stream_5f3b2505-6964-45c9-a7ad-827388a1e6a0_chunk_2,
>  offset=0, len=16777216} : Result: OVERWRITE_FLAG_REQUIRED
> 2018-09-13 12:13:03,037 [ForkJoinPool.commonPool-worker-11] ERROR 
> (ChunkUtils.java:244) - Rejecting write chunk request. Chunk overwrite 
> without explicit request. 
> ChunkInfo{chunkName='bd80b58a5eba888200a4832a0f2aafb3_stream_5f3b2505-6964-45c9-a7ad-827388a1e6a0_chunk_2,
>  offset=0, len=16777216}
> 2018-09-13 12:13:03,037 [ForkJoinPool.commonPool-worker-11] INFO 
> (ContainerUtils.java:149) - Operation: WriteChunk : Trace ID: 
> 54834b29-603d-4ba9-9d68-0885215759d8 : Message: Rejecting write chunk 
> request. OverWrite flag 
> required.ChunkInfo{chunkName='bd80b58a5eba888200a4832a0f2aafb3_stream_5f3b2505-6964-45c9-a7ad-827388a1e6a0_chunk_2,
>  offset=0, len=16777216} : Result: OVERWRITE_FLAG_REQUIRED
>  
> {code}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1187) Healthy pipeline Chill Mode rule to consider only pipelines with replication factor three

2019-02-27 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1187?focusedWorklogId=205472=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-205472
 ]

ASF GitHub Bot logged work on HDDS-1187:


Author: ASF GitHub Bot
Created on: 27/Feb/19 23:25
Start Date: 27/Feb/19 23:25
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #524: HDDS-1187.  
Healthy pipeline Chill Mode rule to consider only pipelines with replication 
factor three.
URL: https://github.com/apache/hadoop/pull/524#issuecomment-468071931
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 41 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 2 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 1096 | trunk passed |
   | +1 | compile | 60 | trunk passed |
   | +1 | checkstyle | 21 | trunk passed |
   | +1 | mvnsite | 33 | trunk passed |
   | +1 | shadedclient | 774 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 39 | trunk passed |
   | +1 | javadoc | 25 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 33 | the patch passed |
   | +1 | compile | 25 | the patch passed |
   | +1 | javac | 25 | the patch passed |
   | +1 | checkstyle | 15 | the patch passed |
   | +1 | mvnsite | 26 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 800 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 46 | the patch passed |
   | +1 | javadoc | 20 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 102 | server-scm in the patch failed. |
   | +1 | asflicense | 26 | The patch does not generate ASF License warnings. |
   | | | 3267 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdds.scm.block.TestBlockManager |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-524/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/524 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  findbugs  checkstyle  |
   | uname | Linux 9354ad2acfcb 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri 
Oct 5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / feccd28 |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | findbugs | v3.1.0-RC1 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-524/1/artifact/out/patch-unit-hadoop-hdds_server-scm.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-524/1/testReport/ |
   | Max. process+thread count | 402 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/server-scm U: hadoop-hdds/server-scm |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-524/1/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 205472)
Time Spent: 20m  (was: 10m)

> Healthy pipeline Chill Mode rule to consider only pipelines with replication 
> factor three
> -
>
> Key: HDDS-1187
> URL: https://issues.apache.org/jira/browse/HDDS-1187
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Affects Versions: 0.4.0
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> A few offline comments from [~nandakumar131]:
>  # We should not process a datanode's pipeline report more than once during 
> calculations.
>  # We should consider only replication-factor-3 Ratis pipelines (see the 
> sketch below).
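
A self-contained illustration of item 2 using stand-in types; the actual HDDS
Pipeline, ReplicationType, and ReplicationFactor classes differ in detail:

{code}
import java.util.List;

public class HealthyPipelineRuleSketch {
  enum ReplicationType { RATIS, STAND_ALONE }
  enum ReplicationFactor { ONE, THREE }

  static class Pipeline {
    final ReplicationType type;
    final ReplicationFactor factor;
    final boolean healthy;
    Pipeline(ReplicationType type, ReplicationFactor factor, boolean healthy) {
      this.type = type;
      this.factor = factor;
      this.healthy = healthy;
    }
  }

  // Count only healthy RATIS pipelines with replication factor THREE; factor
  // ONE and STAND_ALONE pipelines are excluded from the chill-mode rule.
  static long healthyFactorThreeCount(List<Pipeline> pipelines) {
    return pipelines.stream()
        .filter(p -> p.type == ReplicationType.RATIS)
        .filter(p -> p.factor == ReplicationFactor.THREE)
        .filter(p -> p.healthy)
        .count();
  }
}
{code}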



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: 

[jira] [Comment Edited] (HDDS-1061) DelegationToken: Add certificate serial id to Ozone Delegation Token Identifier

2019-02-27 Thread Xiaoyu Yao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16779876#comment-16779876
 ] 

Xiaoyu Yao edited comment on HDDS-1061 at 2/27/19 11:16 PM:


Thanks [~ajayydv] for the update. +1 for the v7 patch pending Jenkins.


was (Author: xyao):
Thanks [~ajayydv] for the update. +1 for the v6 patch and I will commit it 
shortly.

> DelegationToken: Add certificate serial  id to Ozone Delegation Token 
> Identifier
> 
>
> Key: HDDS-1061
> URL: https://issues.apache.org/jira/browse/HDDS-1061
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDDS-1061.00.patch, HDDS-1061.01.patch, 
> HDDS-1061.02.patch, HDDS-1061.03.patch, HDDS-1061.04.patch, 
> HDDS-1061.05.patch, HDDS-1061.06.patch, HDDS-1061.07.patch
>
>
> 1. Add a certificate serial id to the Ozone Delegation Token Identifier 
> (sketched below). Required for OM HA support.
> 2. Validate the Ozone token based on the public key from the OM certificate.
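
A minimal sketch of the shape of item 1, assuming Hadoop's
AbstractDelegationTokenIdentifier; the kind and field names are illustrative,
not the patch's actual ones:

{code}
import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;

import org.apache.hadoop.io.Text;
import org.apache.hadoop.io.WritableUtils;
import org.apache.hadoop.security.token.delegation.AbstractDelegationTokenIdentifier;

public class OzoneTokenIdentifierSketch
    extends AbstractDelegationTokenIdentifier {
  public static final Text KIND = new Text("OzoneTokenSketch");

  // Serial id of the OM certificate whose key signed this token, so that a
  // validator can find the right public key under OM HA.
  private String omCertSerialId = "";

  @Override
  public Text getKind() {
    return KIND;
  }

  @Override
  public void write(DataOutput out) throws IOException {
    super.write(out);
    WritableUtils.writeString(out, omCertSerialId);
  }

  @Override
  public void readFields(DataInput in) throws IOException {
    super.readFields(in);
    omCertSerialId = WritableUtils.readString(in);
  }
}
{code}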



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1061) DelegationToken: Add certificate serial id to Ozone Delegation Token Identifier

2019-02-27 Thread Xiaoyu Yao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16779876#comment-16779876
 ] 

Xiaoyu Yao commented on HDDS-1061:
--

Thanks [~ajayydv] for the update. +1 for the v6 patch and I will commit it 
shortly.

> DelegationToken: Add certificate serial  id to Ozone Delegation Token 
> Identifier
> 
>
> Key: HDDS-1061
> URL: https://issues.apache.org/jira/browse/HDDS-1061
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDDS-1061.00.patch, HDDS-1061.01.patch, 
> HDDS-1061.02.patch, HDDS-1061.03.patch, HDDS-1061.04.patch, 
> HDDS-1061.05.patch, HDDS-1061.06.patch, HDDS-1061.07.patch
>
>
> 1. Add a certificate serial id to the Ozone Delegation Token Identifier. 
> Required for OM HA support.
> 2. Validate the Ozone token based on the public key from the OM certificate.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-134) SCM CA: OM sends CSR and uses certificate issued by SCM

2019-02-27 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16779869#comment-16779869
 ] 

Hadoop QA commented on HDDS-134:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
41s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
25s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 18m 
34s{color} | {color:red} root in trunk failed. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m 26s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
26s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
25s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
40s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 17m  
7s{color} | {color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 17m  7s{color} 
| {color:red} root in the patch failed. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  0s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
52s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  1m 14s{color} 
| {color:red} common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
49s{color} | {color:green} ozone-manager in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 10m 51s{color} 
| {color:red} integration-test in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
52s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}118m 42s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdds.security.x509.certificate.client.TestDefaultCertificateClient |
|   | hadoop.ozone.scm.pipeline.TestPipelineManagerMXBean |
|   | hadoop.ozone.ozShell.TestOzoneShell |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDDS-134 |
| JIRA Patch URL | 

[jira] [Commented] (HDDS-1061) DelegationToken: Add certificate serial id to Ozone Delegation Token Identifier

2019-02-27 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16779868#comment-16779868
 ] 

Hadoop QA commented on HDDS-1061:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
41s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
53s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
23s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 20m 
11s{color} | {color:red} root in trunk failed. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  4m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
19m 21s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
0s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
23s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
47s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 19m 
22s{color} | {color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} cc {color} | {color:red} 19m 22s{color} | 
{color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 19m 22s{color} 
| {color:red} root in the patch failed. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  4m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 10s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
58s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  1m 39s{color} 
| {color:red} common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
49s{color} | {color:green} common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
53s{color} | {color:green} ozone-manager in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
44s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}124m 18s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdds.security.x509.certificate.client.TestDefaultCertificateClient |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDDS-1061 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12960446/HDDS-1061.06.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  cc  |
| uname | Linux 7f20a2d9fc89 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| 

[jira] [Commented] (HDDS-134) SCM CA: OM sends CSR and uses certificate issued by SCM

2019-02-27 Thread Ajay Kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16779867#comment-16779867
 ] 

Ajay Kumar commented on HDDS-134:
-

Patch v7 removes the component from the base path of the client certificate; 
this should be handled at the metadata directory level for HA.

> SCM CA: OM sends CSR and uses certificate issued by SCM
> ---
>
> Key: HDDS-134
> URL: https://issues.apache.org/jira/browse/HDDS-134
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDDS-134.00.patch, HDDS-134.01.patch, HDDS-134.02.patch, 
> HDDS-134.03.patch, HDDS-134.04.patch, HDDS-134.05.patch, HDDS-134.06.patch, 
> HDDS-134.07.patch
>
>
> Initialize OM keypair and get SCM signed certificate.
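
A minimal sketch of the first half of that flow (keypair plus CSR), assuming
Bouncy Castle's PKIX classes on the classpath; the subject name is
illustrative and the SCM submission/issuance step is omitted:

{code}
import java.security.KeyPair;
import java.security.KeyPairGenerator;

import org.bouncycastle.asn1.x500.X500Name;
import org.bouncycastle.operator.ContentSigner;
import org.bouncycastle.operator.jcajce.JcaContentSignerBuilder;
import org.bouncycastle.pkcs.PKCS10CertificationRequest;
import org.bouncycastle.pkcs.jcajce.JcaPKCS10CertificationRequestBuilder;

public class OmCsrSketch {
  public static void main(String[] args) throws Exception {
    // 1. Initialize the OM keypair.
    KeyPairGenerator gen = KeyPairGenerator.getInstance("RSA");
    gen.initialize(2048);
    KeyPair pair = gen.generateKeyPair();

    // 2. Build a CSR over the public key, signed with the private key.
    ContentSigner signer =
        new JcaContentSignerBuilder("SHA256withRSA").build(pair.getPrivate());
    PKCS10CertificationRequest csr =
        new JcaPKCS10CertificationRequestBuilder(
            new X500Name("CN=om-sketch"), pair.getPublic()).build(signer);

    // 3. The CSR would then go to the SCM CA, which returns a signed
    //    certificate; that exchange is outside this sketch.
    System.out.println("CSR subject: " + csr.getSubject());
  }
}
{code}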



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-134) SCM CA: OM sends CSR and uses certificate issued by SCM

2019-02-27 Thread Ajay Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-134?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDDS-134:

Attachment: HDDS-134.07.patch

> SCM CA: OM sends CSR and uses certificate issued by SCM
> ---
>
> Key: HDDS-134
> URL: https://issues.apache.org/jira/browse/HDDS-134
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDDS-134.00.patch, HDDS-134.01.patch, HDDS-134.02.patch, 
> HDDS-134.03.patch, HDDS-134.04.patch, HDDS-134.05.patch, HDDS-134.06.patch, 
> HDDS-134.07.patch
>
>
> Initialize OM keypair and get SCM signed certificate.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org


