[jira] [Commented] (HDDS-186) Create under replicated queue

2018-06-24 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-186?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16521856#comment-16521856
 ] 

genericqa commented on HDDS-186:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 26s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 41s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
35s{color} | {color:green} container-service in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 56m 43s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDDS-186 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12928981/HDDS-186.01.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux e779d08c5b23 4.4.0-121-generic #145-Ubuntu SMP Fri Apr 13 
13:47:23 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 440140c |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_171 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDDS-Build/351/testReport/ |
| Max. process+thread count | 430 (vs. ulimit of 1) |
| modules | C: hadoop-hdds/container-service U: hadoop-hdds/container-service |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/351/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Create under replicated queue
> -
>
> Key: HDDS-186
> URL: https://issues.apache.org/jira/browse/HDDS-186
> Project: 

[jira] [Created] (HDFS-13696) Distcp between 2 secure clusters and non encrypted zones fails with connection timeout

2018-06-24 Thread Rohit Pegallapati (JIRA)
Rohit Pegallapati created HDFS-13696:


 Summary: Distcp between 2 secure clusters and non encrypted zones 
fails with connection timeout
 Key: HDFS-13696
 URL: https://issues.apache.org/jira/browse/HDFS-13696
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: distcp
Reporter: Rohit Pegallapati


We are trying to run distcp between two secure clusters, but the copy itself is 
between two unencrypted zones.
e.g.: hadoop distcp sourceHdfs://tmp/text1.txt  destHdfs://tmp
throws
{code:java}
org.apache.oozie.action.ActionExecutorException: JA009: connect timed out
at 
org.apache.oozie.action.ActionExecutor.convertExceptionHelper(ActionExecutor.java:463)
at 
org.apache.oozie.action.ActionExecutor.convertException(ActionExecutor.java:437)
at 
org.apache.oozie.action.hadoop.JavaActionExecutor.submitLauncher(JavaActionExecutor.java:1247)
at 
org.apache.oozie.action.hadoop.JavaActionExecutor.start(JavaActionExecutor.java:1425)
at 
org.apache.oozie.command.wf.ActionStartXCommand.execute(ActionStartXCommand.java:232)
at 
org.apache.oozie.command.wf.ActionStartXCommand.execute(ActionStartXCommand.java:63)
at org.apache.oozie.command.XCommand.call(XCommand.java:286)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
org.apache.oozie.service.CallableQueueService$CallableWrapper.run(CallableQueueService.java:179)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.net.SocketTimeoutException: connect timed out
at java.net.PlainSocketImpl.socketConnect(Native Method)
at 
java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
at 
java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
at 
java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:589)
at sun.security.ssl.SSLSocketImpl.connect(SSLSocketImpl.java:668)
at sun.net.NetworkClient.doConnect(NetworkClient.java:175)
at sun.net.www.http.HttpClient.openServer(HttpClient.java:463)
at sun.net.www.http.HttpClient.openServer(HttpClient.java:558)
at sun.net.www.protocol.https.HttpsClient.<init>(HttpsClient.java:264)
at sun.net.www.protocol.https.HttpsClient.New(HttpsClient.java:367)
at 
sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.getNewHttpClient(AbstractDelegateHttpsURLConnection.java:191)
at 
sun.net.www.protocol.http.HttpURLConnection.plainConnect0(HttpURLConnection.java:1138)
at 
sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:1032)
at 
sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.connect(AbstractDelegateHttpsURLConnection.java:177)
at 
sun.net.www.protocol.https.HttpsURLConnectionImpl.connect(HttpsURLConnectionImpl.java:153)
at 
org.apache.hadoop.security.authentication.client.KerberosAuthenticator.authenticate(KerberosAuthenticator.java:184)
at 
org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.authenticate(DelegationTokenAuthenticator.java:139)
at 
org.apache.hadoop.security.authentication.client.AuthenticatedURL.openConnection(AuthenticatedURL.java:348)
at 
org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.doDelegationTokenOperation(DelegationTokenAuthenticator.java:308)
at 
org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.getDelegationToken(DelegationTokenAuthenticator.java:180)
at 
org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticatedURL.getDelegationToken(DelegationTokenAuthenticatedURL.java:382)
at 
org.apache.hadoop.crypto.key.kms.KMSClientProvider$4.run(KMSClientProvider.java:1014)
at 
org.apache.hadoop.crypto.key.kms.KMSClientProvider$4.run(KMSClientProvider.java:1008)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1920)
at 
org.apache.hadoop.crypto.key.kms.KMSClientProvider.addDelegationTokens(KMSClientProvider.java:1008)
at 
org.apache.hadoop.crypto.key.KeyProviderDelegationTokenExtension.addDelegationTokens(KeyProviderDelegationTokenExtension.java:110)
at 
org.apache.hadoop.hdfs.DistributedFileSystem.addDelegationTokens(DistributedFileSystem.java:2333)
at 

[jira] [Commented] (HDDS-186) Create under replicated queue

2018-06-24 Thread Ajay Kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-186?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16521845#comment-16521845
 ] 

Ajay Kumar commented on HDDS-186:
-

Patch v1 adds the license header and updates the add function so that a new 
message replaces the earlier one for the same container/node id.
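For illustration, the "replace the earlier message for the same container" behavior can be sketched as a queue keyed by container ID. The class and field names below (ReplicationQueue, ReplicationRequest) are hypothetical, not the actual HDDS-186 code:

```java
import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch (not the actual HDDS-186 classes) of an under-replicated
// container queue where re-adding a request for a container replaces the earlier
// pending message for that container.
public class ReplicationQueue {
    public static final class ReplicationRequest {
        public final long containerId;
        public final int expectedReplicaCount;
        public ReplicationRequest(long containerId, int expectedReplicaCount) {
            this.containerId = containerId;
            this.expectedReplicaCount = expectedReplicaCount;
        }
    }

    // LinkedHashMap keeps first-insertion order for polling while allowing
    // replacement by key.
    private final Map<Long, ReplicationRequest> pending = new LinkedHashMap<>();

    // Adding a request for a container that is already queued replaces the old
    // entry, so the queue never holds a stale replica count for that container.
    public synchronized void add(ReplicationRequest request) {
        pending.put(request.containerId, request);
    }

    // Removes and returns the oldest pending request, or null if the queue is empty.
    public synchronized ReplicationRequest poll() {
        Iterator<ReplicationRequest> it = pending.values().iterator();
        if (!it.hasNext()) {
            return null;
        }
        ReplicationRequest head = it.next();
        it.remove();
        return head;
    }

    public synchronized int size() {
        return pending.size();
    }
}
```

With this shape, two add calls for the same container leave a single queued request carrying the latest replica count.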

> Create under replicated queue
> -
>
> Key: HDDS-186
> URL: https://issues.apache.org/jira/browse/HDDS-186
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Affects Versions: 0.2.1
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-186.00.patch, HDDS-186.01.patch
>
>
> Create under replicated queue to replicate under replicated containers in 
> Ozone.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-186) Create under replicated queue

2018-06-24 Thread Ajay Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-186?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDDS-186:

Attachment: HDDS-186.01.patch

> Create under replicated queue
> -
>
> Key: HDDS-186
> URL: https://issues.apache.org/jira/browse/HDDS-186
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Affects Versions: 0.2.1
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-186.00.patch, HDDS-186.01.patch
>
>
> Create under replicated queue to replicate under replicated containers in 
> Ozone.






[jira] [Commented] (HDDS-191) Queue SCMCommands via EventQueue in SCM

2018-06-24 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-191?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16521829#comment-16521829
 ] 

Anu Engineer commented on HDDS-191:
---

[~elek] Patch looks very good overall. One minor comment: we should probably 
add a Preconditions.checkNotNull before we access values in onMessage. +1 after 
fixing that issue.
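The suggested guard can be sketched as below with the JDK's Objects.requireNonNull, which behaves like Guava's Preconditions.checkNotNull; the handler and payload names are illustrative, not the actual HDDS-191 code:

```java
import java.util.Objects;

// Hypothetical handler sketch showing the review suggestion: validate the event
// payload before using it in onMessage, so the handler fails fast with a clear
// message instead of an NPE deep inside the call chain.
public class CommandHandler {
    public static final class CommandPayload {
        public final String datanodeId;
        public CommandPayload(String datanodeId) {
            this.datanodeId = datanodeId;
        }
    }

    public String onMessage(CommandPayload payload) {
        // Guava equivalent: Preconditions.checkNotNull(payload, "...").
        Objects.requireNonNull(payload, "payload cannot be null");
        Objects.requireNonNull(payload.datanodeId, "datanodeId cannot be null");
        return payload.datanodeId;
    }
}
```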

> Queue SCMCommands via EventQueue in SCM
> ---
>
> Key: HDDS-191
> URL: https://issues.apache.org/jira/browse/HDDS-191
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-191.001.patch
>
>
> As a first step towards a ReplicationManager, I propose to introduce the 
> EventQueue to the StorageContainerManager and enable sending SCMCommands via 
> the EventQueue.
> With this separation the ReplicationManager could easily send the appropriate 
> SCMCommand (e.g. CopyContainer) to the EventQueue without a hard dependency on 
> the SCMNodeManager. (And later we can introduce the CommandWatchers without 
> modifying the ReplicationManager part.)
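The decoupling described in the issue can be sketched as a minimal typed publish/subscribe queue: publishers post events by class and subscribers register handlers, so the publisher never needs a direct reference to the consumer. The class name and API below are a hypothetical sketch, not the actual SCM EventQueue:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

// Minimal typed publish/subscribe queue (hypothetical sketch, not the SCM
// EventQueue API). A ReplicationManager-like publisher posts commands here
// without holding a reference to the NodeManager-like subscriber.
public class SimpleEventQueue {
    private final Map<Class<?>, List<Consumer<Object>>> handlers = new HashMap<>();

    // Register a handler for one event type.
    public <T> void subscribe(Class<T> type, Consumer<T> handler) {
        handlers.computeIfAbsent(type, k -> new ArrayList<>())
                .add(event -> handler.accept(type.cast(event)));
    }

    // Deliver the event to every handler registered for its exact class;
    // events with no registered handler are silently dropped.
    public void publish(Object event) {
        for (Consumer<Object> h : handlers.getOrDefault(event.getClass(), List.of())) {
            h.accept(event);
        }
    }
}
```

A CommandWatcher could later be added as just another subscriber, without touching the publisher.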






[jira] [Commented] (HDFS-13610) [Edit Tail Fast Path Pt 4] Cleanup: integration test, documentation, remove unnecessary dummy sync

2018-06-24 Thread Yiqun Lin (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16521804#comment-16521804
 ] 

Yiqun Lin commented on HDFS-13610:
--

[~xkrogen], I did a quick review of the v00 patch; some minor comments:

* As I see it, dfs.ha.tail-edits.qjm.rpc.max-txns is not public and is not 
documented in hdfs-default.xml, so it seems we should also remove it from the doc.
* For the test case TestStandbyInProgressTail#testCorruptJournalCache: if I 
understand correctly, this tests in-progress tailing skipping the JournalCache 
and falling back to the streaming path, right? If so, the name 
testServeEditsSkippingCache may be easier to understand than 
testCorruptJournalCache.

> [Edit Tail Fast Path Pt 4] Cleanup: integration test, documentation, remove 
> unnecessary dummy sync
> --
>
> Key: HDFS-13610
> URL: https://issues.apache.org/jira/browse/HDFS-13610
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ha, journal-node, namenode
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Major
> Attachments: HDFS-13610-HDFS-12943.000.patch
>
>
> See HDFS-13150 for full design.
> This JIRA is targeted at cleanup tasks:
> * Add in integration testing. We can expand {{TestStandbyInProgressTail}}
> * Documentation in HDFSHighAvailabilityWithQJM
> * Remove the dummy sync added as part of HDFS-10519; it is unnecessary since 
> now in-progress tailing does not rely as heavily on the JN committedTxnId






[jira] [Commented] (HDDS-175) Refactor ContainerInfo to remove Pipeline object from it

2018-06-24 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-175?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16521607#comment-16521607
 ] 

genericqa commented on HDDS-175:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
36s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 17 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
55s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 27m 
46s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 22m 
29s{color} | {color:red} root in trunk failed. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m 14s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  4m 
14s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
20s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
 8s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 21m 
58s{color} | {color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} cc {color} | {color:red} 21m 58s{color} | 
{color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 21m 58s{color} 
| {color:red} root in the patch failed. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  7s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  4m 
15s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
17s{color} | {color:green} common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
26s{color} | {color:green} client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  2m 25s{color} 
| {color:red} server-scm in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
26s{color} | {color:green} tools in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
35s{color} | {color:green} common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
30s{color} | {color:green} client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
29s{color} | {color:green} tools in the patch passed. 

[jira] [Commented] (HDDS-186) Create under replicated queue

2018-06-24 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-186?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16521578#comment-16521578
 ] 

genericqa commented on HDDS-186:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
25s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 27m 
 0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 18s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 52s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
35s{color} | {color:green} container-service in the patch passed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
26s{color} | {color:red} The patch generated 3 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 57m  0s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDDS-186 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12928926/HDDS-186.00.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 18e6aa2f84a7 4.4.0-121-generic #145-Ubuntu SMP Fri Apr 13 
13:47:23 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / e16e5b3 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_171 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDDS-Build/349/testReport/ |
| asflicense | 
https://builds.apache.org/job/PreCommit-HDDS-Build/349/artifact/out/patch-asflicense-problems.txt
 |
| Max. process+thread count | 442 (vs. ulimit of 1) |
| modules | C: hadoop-hdds/container-service U: hadoop-hdds/container-service |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/349/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Create under replicated queue
> -
>
> Key: 

[jira] [Updated] (HDDS-186) Create under replicated queue

2018-06-24 Thread Ajay Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-186?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDDS-186:

Status: Patch Available  (was: Open)

> Create under replicated queue
> -
>
> Key: HDDS-186
> URL: https://issues.apache.org/jira/browse/HDDS-186
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Affects Versions: 0.2.1
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-186.00.patch
>
>
> Create under replicated queue to replicate under replicated containers in 
> Ozone.






[jira] [Updated] (HDDS-175) Refactor ContainerInfo to remove Pipeline object from it

2018-06-24 Thread Ajay Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-175?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDDS-175:

Attachment: HDDS-175.05.patch

> Refactor ContainerInfo to remove Pipeline object from it 
> -
>
> Key: HDDS-175
> URL: https://issues.apache.org/jira/browse/HDDS-175
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Affects Versions: 0.2.1
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-175.00.patch, HDDS-175.01.patch, HDDS-175.02.patch, 
> HDDS-175.03.patch, HDDS-175.04.patch, HDDS-175.05.patch
>
>
> Refactor ContainerInfo to remove the Pipeline object from it. We can add the 
> following 4 fields to ContainerInfo to recreate the pipeline if required:
> # pipelineId
> # replication type
> # expected replication count
> # DataNode where its replica exists






[jira] [Updated] (HDDS-186) Create under replicated queue

2018-06-24 Thread Ajay Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-186?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDDS-186:

Attachment: HDDS-186.00.patch

> Create under replicated queue
> -
>
> Key: HDDS-186
> URL: https://issues.apache.org/jira/browse/HDDS-186
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Affects Versions: 0.2.1
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-186.00.patch
>
>
> Create under replicated queue to replicate under replicated containers in 
> Ozone.






[jira] [Commented] (HDDS-189) Update HDDS to start OzoneManager

2018-06-24 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-189?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16521451#comment-16521451
 ] 

genericqa commented on HDDS-189:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  6s{color} 
| {color:red} HDDS-189 does not apply to trunk. Rebase required? Wrong Branch? 
See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDDS-189 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12928847/HDDS-189.02.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/348/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Update HDDS to start OzoneManager
> -
>
> Key: HDDS-189
> URL: https://issues.apache.org/jira/browse/HDDS-189
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-189.01.patch, HDDS-189.02.patch
>
>
> HDDS-167 is renaming KeySpaceManager to OzoneManager.
> So let's update Hadoop Runner accordingly.






[jira] [Commented] (HDDS-177) Create a releasable ozonefs artifact

2018-06-24 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-177?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16521444#comment-16521444
 ] 

Hudson commented on HDDS-177:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14471 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14471/])
HDDS-177. Create a releasable ozonefs artifact Contributed by Marton, 
(aengineer: rev e16e5b307d6c4404db0698b9d128e5bf4aa16a8a)
* (add) 
hadoop-ozone/ozonefs/src/main/java/org/apache/hadoop/fs/ozone/OzoneFSOutputStream.java
* (delete) 
hadoop-tools/hadoop-ozone/src/test/java/org/apache/hadoop/fs/ozone/contract/ITestOzoneContractGetFileStatus.java
* (add) 
hadoop-ozone/ozonefs/src/test/java/org/apache/hadoop/fs/ozone/contract/OzoneContract.java
* (edit) hadoop-dist/pom.xml
* (add) hadoop-ozone/acceptance-test/src/test/acceptance/ozonefs/docker-config
* (delete) 
hadoop-tools/hadoop-ozone/src/main/java/org/apache/hadoop/fs/ozone/Constants.java
* (delete) 
hadoop-tools/hadoop-ozone/src/test/java/org/apache/hadoop/fs/ozone/contract/ITestOzoneContractSeek.java
* (add) hadoop-ozone/ozonefs/src/test/resources/log4j.properties
* (add) 
hadoop-ozone/ozonefs/src/main/java/org/apache/hadoop/fs/ozone/Constants.java
* (add) hadoop-ozone/ozonefs/pom.xml
* (delete) 
hadoop-tools/hadoop-ozone/src/test/java/org/apache/hadoop/fs/ozone/contract/ITestOzoneContractDelete.java
* (delete) hadoop-tools/hadoop-ozone/src/test/resources/log4j.properties
* (delete) 
hadoop-tools/hadoop-ozone/src/test/java/org/apache/hadoop/fs/ozone/contract/ITestOzoneContractCreate.java
* (delete) 
hadoop-tools/hadoop-ozone/src/main/java/org/apache/hadoop/fs/ozone/OzoneFileSystem.java
* (add) 
hadoop-ozone/ozonefs/src/test/java/org/apache/hadoop/fs/ozone/contract/ITestOzoneContractGetFileStatus.java
* (add) 
hadoop-ozone/ozonefs/src/main/java/org/apache/hadoop/fs/ozone/package-info.java
* (add) 
hadoop-ozone/ozonefs/src/test/java/org/apache/hadoop/fs/ozone/contract/ITestOzoneContractCreate.java
* (edit) dev-support/bin/ozone-dist-layout-stitching
* (delete) 
hadoop-tools/hadoop-ozone/src/main/java/org/apache/hadoop/fs/ozone/OzoneFSOutputStream.java
* (add) 
hadoop-ozone/ozonefs/src/main/java/org/apache/hadoop/fs/ozone/OzoneFSInputStream.java
* (add) 
hadoop-ozone/ozonefs/src/main/java/org/apache/hadoop/fs/ozone/OzoneFileSystem.java
* (delete) 
hadoop-tools/hadoop-ozone/src/test/java/org/apache/hadoop/fs/ozone/contract/ITestOzoneContractRename.java
* (add) 
hadoop-ozone/ozonefs/src/test/java/org/apache/hadoop/fs/ozone/TestOzoneFileInterfaces.java
* (add) 
hadoop-ozone/acceptance-test/src/test/acceptance/ozonefs/docker-compose.yaml
* (add) hadoop-ozone/acceptance-test/src/test/acceptance/ozonefs/ozonefs.robot
* (delete) 
hadoop-tools/hadoop-ozone/src/test/java/org/apache/hadoop/fs/ozone/TestOzoneFSInputStream.java
* (delete) 
hadoop-tools/hadoop-ozone/src/test/java/org/apache/hadoop/fs/ozone/TestOzoneFileInterfaces.java
* (delete) 
hadoop-tools/hadoop-ozone/src/test/java/org/apache/hadoop/fs/ozone/contract/OzoneContract.java
* (edit) hadoop-ozone/pom.xml
* (delete) 
hadoop-tools/hadoop-ozone/src/main/java/org/apache/hadoop/fs/ozone/OzFs.java
* (delete) hadoop-tools/hadoop-ozone/pom.xml
* (add) 
hadoop-ozone/ozonefs/src/test/java/org/apache/hadoop/fs/ozone/contract/ITestOzoneContractRootDir.java
* (add) hadoop-ozone/ozonefs/src/main/java/org/apache/hadoop/fs/ozone/OzFs.java
* (delete) hadoop-tools/hadoop-ozone/src/test/resources/contract/ozone.xml
* (delete) 
hadoop-tools/hadoop-ozone/src/main/java/org/apache/hadoop/fs/ozone/package-info.java
* (add) 
hadoop-ozone/ozonefs/src/test/java/org/apache/hadoop/fs/ozone/contract/ITestOzoneContractDistCp.java
* (delete) 
hadoop-tools/hadoop-ozone/src/test/java/org/apache/hadoop/fs/ozone/contract/ITestOzoneContractMkdir.java
* (add) 
hadoop-ozone/ozonefs/src/test/java/org/apache/hadoop/fs/ozone/contract/ITestOzoneContractMkdir.java
* (delete) 
hadoop-tools/hadoop-ozone/src/test/java/org/apache/hadoop/fs/ozone/contract/ITestOzoneContractOpen.java
* (add) 
hadoop-ozone/ozonefs/src/test/java/org/apache/hadoop/fs/ozone/contract/ITestOzoneContractRename.java
* (delete) 
hadoop-tools/hadoop-ozone/src/test/java/org/apache/hadoop/fs/ozone/contract/ITestOzoneContractRootDir.java
* (edit) hadoop-tools/hadoop-tools-dist/pom.xml
* (add) 
hadoop-ozone/ozonefs/src/test/java/org/apache/hadoop/fs/ozone/contract/ITestOzoneContractOpen.java
* (edit) hadoop-tools/pom.xml
* (add) 
hadoop-ozone/ozonefs/src/test/java/org/apache/hadoop/fs/ozone/TestOzoneFSInputStream.java
* (delete) 
hadoop-tools/hadoop-ozone/src/test/java/org/apache/hadoop/fs/ozone/contract/ITestOzoneContractDistCp.java
* (add) hadoop-ozone/ozonefs/src/test/resources/contract/ozone.xml
* (add) 
hadoop-ozone/ozonefs/src/test/java/org/apache/hadoop/fs/ozone/contract/ITestOzoneContractDelete.java
* (edit) hadoop-project/pom.xml
* (add) 

[jira] [Updated] (HDDS-189) Update HDDS to start OzoneManager

2018-06-24 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-189?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-189:
--
Status: Patch Available  (was: Open)

> Update HDDS to start OzoneManager
> -
>
> Key: HDDS-189
> URL: https://issues.apache.org/jira/browse/HDDS-189
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-189.01.patch, HDDS-189.02.patch
>
>
> HDDS-167 is renaming KeySpaceManager to OzoneManager.
> So let's update Hadoop Runner accordingly.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-177) Create a releasable ozonefs artifact

2018-06-24 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-177?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16521441#comment-16521441
 ] 

Anu Engineer commented on HDDS-177:
---

[~elek] Thanks for the patch. Some very minor comments.
# SCMNodeManager:onMessage - Shouldn't we check if the command is null, and also 
verify that the datanode ID is non-null and valid?
# StorageContainerManager.java -- unused import?
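
The guard clauses suggested in the first comment could look like the sketch below. This is a hypothetical illustration, not the actual patch: the class and method names come from the review comment, but the surrounding types and return values are assumptions made for self-containment.

```java
import java.util.UUID;

// Hypothetical sketch of the null/validity checks suggested for
// SCMNodeManager#onMessage; the real SCM types are not reproduced here,
// only the guard logic itself.
public class OnMessageGuard {

    static String handle(UUID datanodeId, String command) {
        if (command == null) {
            return "ignored: null command";          // reject null commands early
        }
        if (datanodeId == null) {
            return "ignored: missing datanode ID";   // reject commands without a target
        }
        return "dispatched " + command + " to " + datanodeId;
    }

    public static void main(String[] args) {
        System.out.println(handle(null, "replicateContainer"));
        System.out.println(handle(UUID.randomUUID(), null));
    }
}
```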



> Create a releasable ozonefs artifact 
> -
>
> Key: HDDS-177
> URL: https://issues.apache.org/jira/browse/HDDS-177
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Filesystem
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-177.001.patch, HDDS-177.002.patch, 
> HDDS-177.003.patch, HDDS-177.004.patch, HDDS-177.005.patch, 
> HDDS-177.006.patch, HDDS-177.007.patch
>
>
> The current ozonefs implementation is under hadoop-tools/hadoop-ozone and uses 
> the Hadoop version (3.2.0-SNAPSHOT currently), which is wrong.
> The other problem is that we have no single Hadoop-independent artifact for 
> ozonefs which could be used with any Hadoop version.
> In this patch I propose the following modifications:
> * move hadoop-tools/hadoop-ozone to hadoop-ozone/ozonefs and use the hdds 
> version (0.2.1-SNAPSHOT)
> * Create a shaded artifact which includes all the required jar files to use 
> ozonefs (hdds/ozone client)
> * Create an ozonefs acceptance test to test it with the latest stable hadoop 
> version






[jira] [Updated] (HDDS-177) Create a releasable ozonefs artifact

2018-06-24 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-177?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-177:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

+1, Thanks for the contribution. I have committed this to trunk.

 

> Create a releasable ozonefs artifact 
> -
>
> Key: HDDS-177
> URL: https://issues.apache.org/jira/browse/HDDS-177
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Filesystem
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-177.001.patch, HDDS-177.002.patch, 
> HDDS-177.003.patch, HDDS-177.004.patch, HDDS-177.005.patch, 
> HDDS-177.006.patch, HDDS-177.007.patch
>
>
> The current ozonefs implementation is under hadoop-tools/hadoop-ozone and uses 
> the Hadoop version (3.2.0-SNAPSHOT currently), which is wrong.
> The other problem is that we have no single Hadoop-independent artifact for 
> ozonefs which could be used with any Hadoop version.
> In this patch I propose the following modifications:
> * move hadoop-tools/hadoop-ozone to hadoop-ozone/ozonefs and use the hdds 
> version (0.2.1-SNAPSHOT)
> * Create a shaded artifact which includes all the required jar files to use 
> ozonefs (hdds/ozone client)
> * Create an ozonefs acceptance test to test it with the latest stable hadoop 
> version






[jira] [Commented] (HDDS-94) Change ozone datanode command to start the standalone datanode plugin

2018-06-24 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-94?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16521439#comment-16521439
 ] 

Hudson commented on HDDS-94:


FAILURE: Integrated in Jenkins build Hadoop-precommit-ozone-acceptance #20 (See 
[https://builds.apache.org/job/Hadoop-precommit-ozone-acceptance/20/])


> Change ozone datanode command to start the standalone datanode plugin
> -
>
> Key: HDDS-94
> URL: https://issues.apache.org/jira/browse/HDDS-94
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Reporter: Elek, Marton
>Assignee: Sandeep Nemuri
>Priority: Major
>  Labels: newbie
> Fix For: 0.2.1
>
> Attachments: HDDS-94.001.patch, HDDS-94.002.patch
>
>
> The current ozone datanode command starts the regular hdfs datanode with an 
> enabled HddsDatanodeService as a datanode plugin.
> The goal is to start only the HddsDatanodeService.java (the main function is 
> already there, but GenericOptionsParser should be adopted). 






[jira] [Commented] (HDDS-184) Upgrade common-langs version to 3.7 in hadoop-tools/hadoop-ozone

2018-06-24 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-184?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16521438#comment-16521438
 ] 

Hudson commented on HDDS-184:
-

FAILURE: Integrated in Jenkins build Hadoop-precommit-ozone-acceptance #20 (See 
[https://builds.apache.org/job/Hadoop-precommit-ozone-acceptance/20/])
HDDS-184. Upgrade common-langs version to 3.7 in (aengineer: 
[https://github.com/apache/hadoop/commit/ca14fec02fb14e1b708f266bc715e84ae9784d6f])
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/genconf/TestGenerateOzoneRequiredConfigurations.java
* (edit) 
hadoop-tools/hadoop-ozone/src/test/java/org/apache/hadoop/fs/ozone/TestOzoneFSInputStream.java
* (edit) 
hadoop-tools/hadoop-ozone/src/test/java/org/apache/hadoop/fs/ozone/TestOzoneFileInterfaces.java
* (edit) 
hadoop-tools/hadoop-ozone/src/test/java/org/apache/hadoop/fs/ozone/contract/OzoneContract.java
* (edit) 
hadoop-tools/hadoop-ozone/src/main/java/org/apache/hadoop/fs/ozone/OzoneFileSystem.java


> Upgrade common-langs version to 3.7 in hadoop-tools/hadoop-ozone
> 
>
> Key: HDDS-184
> URL: https://issues.apache.org/jira/browse/HDDS-184
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-184.1.patch
>
>
> This is a separate task, split out from HADOOP-15495 for simplicity.






[jira] [Commented] (HDFS-13682) Cannot create encryption zone after KMS auth token expires

2018-06-24 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16521427#comment-16521427
 ] 

Hudson commented on HDFS-13682:
---

FAILURE: Integrated in Jenkins build Hadoop-precommit-ozone-acceptance #20 (See 
[https://builds.apache.org/job/Hadoop-precommit-ozone-acceptance/20/])
HDFS-13682. Cannot create encryption zone after KMS auth token expires. (xiao: 
[https://github.com/apache/hadoop/commit/32f867a6a907c05a312657139d295a92756d98ef])
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestSecureEncryptionZoneWithKMS.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/kms/KMSClientProvider.java


> Cannot create encryption zone after KMS auth token expires
> --
>
> Key: HDFS-13682
> URL: https://issues.apache.org/jira/browse/HDFS-13682
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: encryption, kms, namenode
>Affects Versions: 3.0.0
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Critical
> Fix For: 3.2.0, 3.1.1, 3.0.4
>
> Attachments: HDFS-13682.01.patch, HDFS-13682.02.patch, 
> HDFS-13682.03.patch, HDFS-13682.dirty.repro.branch-2.patch, 
> HDFS-13682.dirty.repro.patch
>
>
> Our internal testing reported this behavior recently.
> {noformat}
> [root@nightly6x-1 ~]# sudo -u hdfs /usr/bin/kinit -kt 
> /cdep/keytabs/hdfs.keytab hdfs -l 30d -r 30d
> [root@nightly6x-1 ~]# sudo -u hdfs klist
> Ticket cache: FILE:/tmp/krb5cc_994
> Default principal: h...@gce.cloudera.com
> Valid starting   Expires  Service principal
> 06/12/2018 03:24:09  07/12/2018 03:24:09  
> krbtgt/gce.cloudera@gce.cloudera.com
> [root@nightly6x-1 ~]# sudo -u hdfs hdfs crypto -createZone -keyName key77 
> -path /user/systest/ez
> RemoteException: 
> org.apache.hadoop.security.authentication.client.AuthenticationException: 
> GSSException: No valid credentials provided (Mechanism level: Failed to find 
> any Kerberos tgt)
> {noformat}
> Upon further investigation, it's because the KMS client (cached in the HDFS NN) 
> cannot authenticate with the server after the authentication token (which is 
> cached by the KMSClientProvider) expires, even though the HDFS client RPC has 
> valid Kerberos credentials.
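
The failure mode described above (a cached token has expired, and the request fails instead of re-authenticating) can be sketched generically: on an expired-token failure, drop the cached token and retry once with fresh credentials. This is a minimal illustration of the retry pattern, not the actual HDFS-13682 patch; every name in it is hypothetical.

```java
// Hedged sketch of an expired-token fallback: the first attempt uses the
// cached auth token; if that fails with an auth error, retry once with
// fresh (e.g. Kerberos) credentials instead of failing the whole RPC.
public class RetryOnExpiredToken {

    static class AuthException extends Exception {}

    interface Call<T> {
        T run(boolean useCachedToken) throws AuthException;
    }

    static <T> T callWithFallback(Call<T> call) throws AuthException {
        try {
            return call.run(true);        // try the cached auth token first
        } catch (AuthException expired) {
            return call.run(false);       // token expired: re-authenticate and retry
        }
    }

    public static void main(String[] args) throws AuthException {
        String result = callWithFallback(cached -> {
            if (cached) {
                throw new AuthException(); // simulate an expired cached token
            }
            return "createZone ok";
        });
        System.out.println(result);        // prints "createZone ok"
    }
}
```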






[jira] [Commented] (HDFS-13692) StorageInfoDefragmenter floods log when compacting StorageInfo TreeSet

2018-06-24 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13692?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16521432#comment-16521432
 ] 

Hudson commented on HDFS-13692:
---

FAILURE: Integrated in Jenkins build Hadoop-precommit-ozone-acceptance #20 (See 
[https://builds.apache.org/job/Hadoop-precommit-ozone-acceptance/20/])
HDFS-13692. StorageInfoDefragmenter floods log when compacting (yqlin: 
[https://github.com/apache/hadoop/commit/30728aced4a6b05394b3fc8c613f39fade9cf3c2])
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java


> StorageInfoDefragmenter floods log when compacting StorageInfo TreeSet
> --
>
> Key: HDFS-13692
> URL: https://issues.apache.org/jira/browse/HDFS-13692
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.1.0
>Reporter: Yiqun Lin
>Assignee: Bharat Viswanadham
>Priority: Minor
> Fix For: 3.2.0, 3.1.1, 3.0.4
>
> Attachments: HDFS-13692.00.patch
>
>
> StorageInfoDefragmenter floods the log when compacting the StorageInfo TreeSet. In 
> {{StorageInfoDefragmenter#scanAndCompactStorages}}, it prints a line for every 
> StorageInfo under each DN. If there are 1k nodes in the cluster, and each node 
> has 10 data dirs configured, it will print 10k lines every compaction interval 
> (10 mins). This makes the log very large; we could switch the log level from INFO 
> to DEBUG in {{StorageInfoDefragmenter#scanAndCompactStorages}}.
> {noformat}
> 2018-06-19 10:18:48,849 INFO [StorageInfoMonitor] 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: StorageInfo 
> TreeSet fill ratio DS-329bd988-a558-43a6-b31c-9142548b0179 : 
> 0.876264591439
> 2018-06-19 10:18:48,849 INFO [StorageInfoMonitor] 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: StorageInfo 
> TreeSet fill ratio DS-b5505847-1389-4a80-b9d8-876172a83897 : 0.933351976137211
> 2018-06-19 10:18:48,849 INFO [StorageInfoMonitor] 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: StorageInfo 
> TreeSet fill ratio DS-ca164b2f-0a2c-4b26-8e99-f0ece0909997 : 
> 0.9330040998881849
> 2018-06-19 10:18:48,849 INFO [StorageInfoMonitor] 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: StorageInfo 
> TreeSet fill ratio DS-89b912ba-339b-45e3-b981-541b22690ccb : 
> 0.9314626719970249
> 2018-06-19 10:18:48,849 INFO [StorageInfoMonitor] 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: StorageInfo 
> TreeSet fill ratio DS-89c0377b-a49c-4288-9304-e104d98de5bd : 
> 0.9309580852251582
> 2018-06-19 10:18:48,849 INFO [StorageInfoMonitor] 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: StorageInfo 
> TreeSet fill ratio DS-5ffad4d2-168a-446d-a92e-ef46a82f26f8 : 
> 0.8938870614035088
> 2018-06-19 10:18:48,849 INFO [StorageInfoMonitor] 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: StorageInfo 
> TreeSet fill ratio DS-eecbbd34-10f4-4647-8710-0f5963da3aaa : 
> 0.8963103205353998
> 2018-06-19 10:18:48,849 INFO [StorageInfoMonitor] 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: StorageInfo 
> TreeSet fill ratio DS-7aafa122-433f-49c8-bf00-11bcdd8ce048 : 
> 0.8950508004926109
> 2018-06-19 10:18:48,849 INFO [StorageInfoMonitor] 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: StorageInfo 
> TreeSet fill ratio DS-eb9ba675-9c23-40a1-9241-c314dc0e2867 : 
> 0.8947356866877415
> {noformat}
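
The proposed fix is a log-level change rather than new logic: emit each per-storage fill-ratio line at DEBUG so it only appears when explicitly enabled. A minimal sketch, using java.util.logging for self-containment (BlockManager itself uses a different logging API), with FINE standing in for DEBUG:

```java
import java.util.logging.Level;
import java.util.logging.Logger;

// Sketch of the proposed change: demote the per-storage fill-ratio line
// from INFO to DEBUG (FINE here), so a scan over thousands of storages
// stays quiet under the default log level.
public class FillRatioLogging {

    private static final Logger LOG = Logger.getLogger("BlockManager");

    static void reportFillRatio(String storageId, double ratio) {
        if (LOG.isLoggable(Level.FINE)) {   // skip formatting cost when disabled
            LOG.fine(String.format("StorageInfo TreeSet fill ratio %s : %.4f",
                    storageId, ratio));
        }
    }

    public static void main(String[] args) {
        // Default level is INFO, so this prints nothing.
        reportFillRatio("DS-329bd988-a558-43a6-b31c-9142548b0179", 0.8763);
    }
}
```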






[jira] [Updated] (HDDS-166) Create a landing page for Ozone

2018-06-24 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-166?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-166:
--
   Resolution: Fixed
Fix Version/s: 0.2.1
   Status: Resolved  (was: Patch Available)

+1, Thank you for the contribution. Please note that I have committed this site 
to

[https://git-wip-us.apache.org/repos/asf/hadoop-ozonesite.git]  branch: 
*asf-site*.

This is based on INFRA-16457 discussions.

> Create a landing page for Ozone
> ---
>
> Key: HDDS-166
> URL: https://issues.apache.org/jira/browse/HDDS-166
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>  Components: document
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: ozone-site-rendered.tar.gz, ozone-site-source.tar.gz
>
>
> As the Ozone release cycle is separated from Hadoop, we need a separate page to 
> publish the releases.


