[jira] [Commented] (HDFS-13721) NPE in DataNode due to uninitialized DiskBalancer

2018-07-06 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13721?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16535617#comment-16535617
 ] 

Hudson commented on HDFS-13721:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14534 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14534/])
HDFS-13721. NPE in DataNode due to uninitialized DiskBalancer. (xiao: rev 
936e0df0d344f13eea97fe624b154e8356cdea7c)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/diskbalancer/TestDiskBalancer.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
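
For context, the failing JMX getter dereferences a DiskBalancer field that is only assigned later during DataNode startup, so a JMX poll arriving early hits a null. A minimal, self-contained sketch of the guard pattern (illustrative names only, not the actual patch; see the commit above for the real change):

{code:java}
// Sketch of guarding a JMX attribute against a field that is assigned
// late in startup. Names are illustrative, not the HDFS-13721 patch.
public class JmxGuardSketch {
  static class Balancer {                  // stand-in for DiskBalancer
    String queryStatus() { return "{\"running\":false}"; }
  }

  private volatile Balancer balancer;      // null until startup completes

  public String getBalancerStatus() {
    Balancer b = balancer;                 // single volatile read
    if (b == null) {
      return "";                           // safe default instead of an NPE
    }
    return b.queryStatus();
  }

  public static void main(String[] args) {
    JmxGuardSketch dn = new JmxGuardSketch();
    System.out.println(dn.getBalancerStatus());  // "" before initialization
    dn.balancer = new Balancer();
    System.out.println(dn.getBalancerStatus());  // real status once assigned
  }
}
{code}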


> NPE in DataNode due to uninitialized DiskBalancer
> -
>
> Key: HDFS-13721
> URL: https://issues.apache.org/jira/browse/HDFS-13721
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, diskbalancer
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: HDFS-13721.01.patch, HDFS-13721.02.patch
>
>
> {noformat}
> 2018-06-28 05:11:47,650 ERROR org.apache.hadoop.jmx.JMXJsonServlet: getting 
> attribute DiskBalancerStatus of Hadoop:service=DataNode,name=DataNodeInfo 
> threw an exception
> javax.management.RuntimeMBeanException: java.lang.NullPointerException
>  at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.rethrow(DefaultMBeanServerInterceptor.java:839)
>  at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.rethrowMaybeMBeanException(DefaultMBeanServerInterceptor.java:852)
>  at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getAttribute(DefaultMBeanServerInterceptor.java:651)
>  at 
> com.sun.jmx.mbeanserver.JmxMBeanServer.getAttribute(JmxMBeanServer.java:678)
>  at 
> org.apache.hadoop.jmx.JMXJsonServlet.writeAttribute(JMXJsonServlet.java:338)
>  at org.apache.hadoop.jmx.JMXJsonServlet.listBeans(JMXJsonServlet.java:316)
>  at org.apache.hadoop.jmx.JMXJsonServlet.doGet(JMXJsonServlet.java:210)
>  at javax.servlet.http.HttpServlet.service(HttpServlet.java:687)
>  at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
>  at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:848)
>  at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1772)
>  at 
> org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter.doFilter(StaticUserWebFilter.java:110)
>  at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
>  at 
> org.apache.hadoop.http.HttpServer2$QuotingInputFilter.doFilter(HttpServer2.java:1537)
>  at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
>  at org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
>  at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
>  at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)
>  at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
>  at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
>  at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
>  at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
>  at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)
>  at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
>  at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
>  at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
>  at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
>  at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
>  at org.eclipse.jetty.server.Server.handle(Server.java:534)
>  at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:320)
>  at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251)
>  at 
> org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:283)
>  at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:108)
>  at 
> org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
>  at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303)
>  at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)
>  at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136)
>  at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
>  at 
> 

[jira] [Updated] (HDFS-13721) NPE in DataNode due to uninitialized DiskBalancer

2018-07-06 Thread Xiao Chen (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13721?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HDFS-13721:
-
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.2.0
   Status: Resolved  (was: Patch Available)

Committed to trunk. Thanks for the review [~elgoiri] and [~shashikant]!

> NPE in DataNode due to uninitialized DiskBalancer
> -
>
> Key: HDFS-13721
> URL: https://issues.apache.org/jira/browse/HDFS-13721
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, diskbalancer
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: HDFS-13721.01.patch, HDFS-13721.02.patch
>
>
> {noformat}
> 2018-06-28 05:11:47,650 ERROR org.apache.hadoop.jmx.JMXJsonServlet: getting 
> attribute DiskBalancerStatus of Hadoop:service=DataNode,name=DataNodeInfo 
> threw an exception
> javax.management.RuntimeMBeanException: java.lang.NullPointerException
>  at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.rethrow(DefaultMBeanServerInterceptor.java:839)
>  at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.rethrowMaybeMBeanException(DefaultMBeanServerInterceptor.java:852)
>  at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getAttribute(DefaultMBeanServerInterceptor.java:651)
>  at 
> com.sun.jmx.mbeanserver.JmxMBeanServer.getAttribute(JmxMBeanServer.java:678)
>  at 
> org.apache.hadoop.jmx.JMXJsonServlet.writeAttribute(JMXJsonServlet.java:338)
>  at org.apache.hadoop.jmx.JMXJsonServlet.listBeans(JMXJsonServlet.java:316)
>  at org.apache.hadoop.jmx.JMXJsonServlet.doGet(JMXJsonServlet.java:210)
>  at javax.servlet.http.HttpServlet.service(HttpServlet.java:687)
>  at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
>  at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:848)
>  at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1772)
>  at 
> org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter.doFilter(StaticUserWebFilter.java:110)
>  at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
>  at 
> org.apache.hadoop.http.HttpServer2$QuotingInputFilter.doFilter(HttpServer2.java:1537)
>  at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
>  at org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
>  at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
>  at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)
>  at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
>  at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
>  at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
>  at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
>  at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)
>  at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
>  at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
>  at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
>  at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
>  at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
>  at org.eclipse.jetty.server.Server.handle(Server.java:534)
>  at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:320)
>  at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251)
>  at 
> org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:283)
>  at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:108)
>  at 
> org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
>  at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303)
>  at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)
>  at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136)
>  at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
>  at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589)
>  at java.lang.Thread.run(Thread.java:748)
> Caused by: java.lang.NullPointerException
>  at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.getDiskBalancerStatus(DataNode.java:3146)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>  at 
> 

[jira] [Commented] (HDFS-13721) NPE in DataNode due to uninitialized DiskBalancer

2018-07-06 Thread Xiao Chen (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13721?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16535610#comment-16535610
 ] 

Xiao Chen commented on HDFS-13721:
--

Failed tests look unrelated and passed locally. Committing this.

> NPE in DataNode due to uninitialized DiskBalancer
> -
>
> Key: HDFS-13721
> URL: https://issues.apache.org/jira/browse/HDFS-13721
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, diskbalancer
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Major
> Attachments: HDFS-13721.01.patch, HDFS-13721.02.patch
>
>
> {noformat}
> 2018-06-28 05:11:47,650 ERROR org.apache.hadoop.jmx.JMXJsonServlet: getting 
> attribute DiskBalancerStatus of Hadoop:service=DataNode,name=DataNodeInfo 
> threw an exception
> javax.management.RuntimeMBeanException: java.lang.NullPointerException
>  at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.rethrow(DefaultMBeanServerInterceptor.java:839)
>  at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.rethrowMaybeMBeanException(DefaultMBeanServerInterceptor.java:852)
>  at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getAttribute(DefaultMBeanServerInterceptor.java:651)
>  at 
> com.sun.jmx.mbeanserver.JmxMBeanServer.getAttribute(JmxMBeanServer.java:678)
>  at 
> org.apache.hadoop.jmx.JMXJsonServlet.writeAttribute(JMXJsonServlet.java:338)
>  at org.apache.hadoop.jmx.JMXJsonServlet.listBeans(JMXJsonServlet.java:316)
>  at org.apache.hadoop.jmx.JMXJsonServlet.doGet(JMXJsonServlet.java:210)
>  at javax.servlet.http.HttpServlet.service(HttpServlet.java:687)
>  at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
>  at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:848)
>  at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1772)
>  at 
> org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter.doFilter(StaticUserWebFilter.java:110)
>  at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
>  at 
> org.apache.hadoop.http.HttpServer2$QuotingInputFilter.doFilter(HttpServer2.java:1537)
>  at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
>  at org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
>  at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
>  at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)
>  at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
>  at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
>  at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
>  at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
>  at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)
>  at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
>  at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
>  at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
>  at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
>  at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
>  at org.eclipse.jetty.server.Server.handle(Server.java:534)
>  at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:320)
>  at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251)
>  at 
> org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:283)
>  at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:108)
>  at 
> org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
>  at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303)
>  at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)
>  at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136)
>  at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
>  at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589)
>  at java.lang.Thread.run(Thread.java:748)
> Caused by: java.lang.NullPointerException
>  at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.getDiskBalancerStatus(DataNode.java:3146)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>  at 
> 

[jira] [Updated] (HDDS-48) ContainerIO - Storage Management

2018-07-06 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-48?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-48:
---
Attachment: HDDS-48.01.patch

> ContainerIO - Storage Management
> 
>
> Key: HDDS-48
> URL: https://issues.apache.org/jira/browse/HDDS-48
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: ContainerIO-StorageManagement-DesignDoc.pdf, HDDS 
> DataNode Disk Layout.pdf, HDDS-48.00.patch, HDDS-48.01.patch
>
>
> We propose refactoring the HDDS DataNode IO path to enforce clean separation 
> between the Container management and the Storage layers. All components 
> requiring access to HDDS containers on a Datanode should do so via this 
> Storage layer.
> The proposed Storage layer would be responsible for end-to-end disk and 
> volume management. This involves running disk checks and detecting disk 
> failures, distributing data across disks per the configured policy, and 
> collecting performance statistics.
> The attached design doc gives an overview of the proposed class diagram.
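
For illustration only (not the design doc's actual class diagram; all names here are hypothetical), the proposed separation could look roughly like this:

{code:java}
// Hypothetical sketch: container management reaches disks only through a
// narrow storage/volume-management interface.
interface VolumeManager {
  java.io.File chooseVolume(long containerSize);   // placement per policy
  void checkVolumes();                             // disk checks, failure detection
  java.util.Map<String, Long> getIoStats();        // performance statistics
}

class ContainerManagerSketch {
  private final VolumeManager volumes;

  ContainerManagerSketch(VolumeManager volumes) {
    this.volumes = volumes;
  }

  java.io.File createContainer(long size) {
    // All disk access goes through the storage layer, never directly.
    return volumes.chooseVolume(size);
  }
}
{code}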






[jira] [Resolved] (HDDS-215) Handle Container Already Exists exception on client side

2018-07-06 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-215?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham resolved HDDS-215.
-
Resolution: Not A Problem

> Handle Container Already Exists exception on client side
> 
>
> Key: HDDS-215
> URL: https://issues.apache.org/jira/browse/HDDS-215
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Hanisha Koneru
>Priority: Major
>
> When creating containers on a DN, if we get a CONTAINER_ALREADY_EXISTS 
> exception, it should be handled on the client side.






[jira] [Commented] (HDFS-13723) Occasional "Should be different group" error in TestRefreshUserMappings#testGroupMappingRefresh

2018-07-06 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13723?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16535572#comment-16535572
 ] 

genericqa commented on HDFS-13723:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
44s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 28m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 39s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
52s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 26m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 26m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  0s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
50s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
40s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 83m 53s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  1m 
 0s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}217m 54s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes |
|   | hadoop.hdfs.client.impl.TestBlockReaderLocal |
|   | hadoop.hdfs.server.namenode.ha.TestStandbyCheckpoints |
|   | hadoop.hdfs.server.namenode.ha.TestDFSUpgradeWithHA |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDFS-13723 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12930608/HDFS-13723.003.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 7fb7d9251d8e 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 

[jira] [Commented] (HDDS-213) Single lock to synchronize KeyValueContainer#update

2018-07-06 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-213?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16535529#comment-16535529
 ] 

genericqa commented on HDDS-213:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
40s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} HDDS-48 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 29m 
43s{color} | {color:green} HDDS-48 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
29s{color} | {color:green} HDDS-48 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
18s{color} | {color:green} HDDS-48 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
30s{color} | {color:green} HDDS-48 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 44s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
51s{color} | {color:red} hadoop-hdds/container-service in HDDS-48 has 1 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
28s{color} | {color:green} HDDS-48 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 29s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
46s{color} | {color:green} hadoop-hdds/container-service generated 0 new + 0 
unchanged - 1 fixed = 0 total (was 1) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 57s{color} 
| {color:red} container-service in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 61m 37s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.container.keyvalue.TestChunkManagerImpl |
|   | hadoop.ozone.container.ozoneimpl.TestOzoneContainer |
|   | hadoop.ozone.container.keyvalue.TestKeyValueContainer |
|   | hadoop.ozone.container.keyvalue.TestKeyManagerImpl |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDDS-213 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12930618/HDDS-213-HDDS-48.003.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 5c0815557d77 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | HDDS-48 / cb9574a |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_171 |
| findbugs | v3.1.0-RC1 |
| findbugs | 

[jira] [Commented] (HDFS-13475) RBF: Admin cannot enforce Router enter SafeMode

2018-07-06 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13475?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16535515#comment-16535515
 ] 

genericqa commented on HDFS-13475:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
45s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 29m 
 8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 56s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 29s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 15m 27s{color} 
| {color:red} hadoop-hdfs-rbf in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 77m  2s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.federation.router.TestRouterMountTable |
|   | hadoop.fs.contract.router.web.TestRouterWebHDFSContractSeek |
|   | hadoop.hdfs.server.federation.router.TestRouter |
|   | hadoop.fs.contract.router.TestRouterHDFSContractRename |
|   | hadoop.hdfs.server.federation.router.TestSafeMode |
|   | hadoop.fs.contract.router.TestRouterHDFSContractConcat |
|   | hadoop.fs.contract.router.TestRouterHDFSContractSeek |
|   | hadoop.fs.contract.router.TestRouterHDFSContractRootDirectory |
|   | hadoop.hdfs.server.federation.router.TestRouterClientRejectOverload |
|   | hadoop.fs.contract.router.web.TestRouterWebHDFSContractOpen |
|   | hadoop.hdfs.server.federation.router.TestRouterQuota |
|   | hadoop.fs.contract.router.web.TestRouterWebHDFSContractConcat |
|   | hadoop.fs.contract.router.TestRouterHDFSContractAppend |
|   | hadoop.fs.contract.router.TestRouterHDFSContractMkdir |
|   | hadoop.hdfs.server.federation.router.TestDisableNameservices |
|   | hadoop.hdfs.server.federation.router.TestRouterAllResolver |
|   | hadoop.fs.contract.router.web.TestRouterWebHDFSContractCreate |
|   | hadoop.fs.contract.router.web.TestRouterWebHDFSContractAppend |
|   | hadoop.hdfs.server.federation.router.TestRouterRpc |
|   | 

[jira] [Commented] (HDFS-13475) RBF: Admin cannot enforce Router enter SafeMode

2018-07-06 Thread Íñigo Goiri (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13475?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16535509#comment-16535509
 ] 

Íñigo Goiri commented on HDFS-13475:


Thanks [~csun], I think [^HDFS-13475.001.patch] looks much cleaner as 
everything related to safe mode is now in a single place.
One thing I can think of is that there's no need to call both:
{code}
this.router.getSafemodeService().setSafeMode(true);
this.router.getSafemodeService().setManualSafeMode(true);
{code}

as {{this.router.getSafemodeService().setManualSafeMode(true);}} should be 
enough.
Actually, I'm not sure if we use setSafeMode() for anything, as nobody is 
actually calling it.
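
For illustration (a sketch of the direction discussed above, not the attached patch; all names below are assumptions), a manual flag that the periodic service respects would look roughly like:

{code:java}
// Hypothetical sketch: an admin-requested ("manual") safe mode that the
// periodic cache check must not auto-leave.
class SafeModeServiceSketch {
  private volatile boolean safeMode;
  private volatile boolean manualSafeMode;  // set by the admin command

  void setManualSafeMode(boolean manual) {
    this.manualSafeMode = manual;
    if (manual) {
      this.safeMode = true;                 // implies entering safe mode
    }
  }

  void periodicInvoke(boolean cacheStale) {
    if (manualSafeMode) {
      return;            // never auto-leave an admin-requested safe mode
    }
    if (cacheStale) {
      safeMode = true;   // enter()
    } else if (safeMode) {
      safeMode = false;  // leave(): cache is fresh again
    }
  }
}
{code}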

> RBF: Admin cannot enforce Router enter SafeMode
> ---
>
> Key: HDFS-13475
> URL: https://issues.apache.org/jira/browse/HDFS-13475
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Wei Yan
>Assignee: Chao Sun
>Priority: Major
> Attachments: HDFS-13475.000.patch, HDFS-13475.001.patch
>
>
> To reproduce the issue: 
> {code:java}
> $ bin/hdfs dfsrouteradmin -safemode enter
> Successfully enter safe mode.
> $ bin/hdfs dfsrouteradmin -safemode get
> Safe Mode: true{code}
> And then, 
> {code:java}
> $ bin/hdfs dfsrouteradmin -safemode get
> Safe Mode: false{code}
> From the code, it looks like the periodicInvoke triggers the leave.
> {code:java}
> public void periodicInvoke() {
> ..
>   // Always update to indicate our cache was updated
>   if (isCacheStale) {
> if (!rpcServer.isInSafeMode()) {
>   enter();
> }
>   } else if (rpcServer.isInSafeMode()) {
> // Cache recently updated, leave safe mode
> leave();
>   }
> }
> {code}
>  
>  






[jira] [Commented] (HDDS-213) Single lock to synchronize KeyValueContainer#update

2018-07-06 Thread Hanisha Koneru (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-213?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16535505#comment-16535505
 ] 

Hanisha Koneru commented on HDDS-213:
-

Thanks [~bharatviswa] for the review.

Updated the patch.

> Single lock to synchronize KeyValueContainer#update
> ---
>
> Key: HDDS-213
> URL: https://issues.apache.org/jira/browse/HDDS-213
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-213-HDDS-48.000.patch, HDDS-213-HDDS-48.001.patch, 
> HDDS-213-HDDS-48.002.patch, HDDS-213-HDDS-48.003.patch
>
>
> When updating the container metadata, the in-memory state and on-disk state 
> should be updated under the same lock.






[jira] [Updated] (HDDS-213) Single lock to synchronize KeyValueContainer#update

2018-07-06 Thread Hanisha Koneru (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-213?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDDS-213:

Attachment: HDDS-213-HDDS-48.003.patch

> Single lock to synchronize KeyValueContainer#update
> ---
>
> Key: HDDS-213
> URL: https://issues.apache.org/jira/browse/HDDS-213
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-213-HDDS-48.000.patch, HDDS-213-HDDS-48.001.patch, 
> HDDS-213-HDDS-48.002.patch, HDDS-213-HDDS-48.003.patch
>
>
> When updating the container metadata, the in-memory state and on-disk state 
> should be updated under the same lock.






[jira] [Updated] (HDDS-237) Add updateDeleteTransactionId

2018-07-06 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-237?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-237:

Fix Version/s: 0.2.1

> Add updateDeleteTransactionId
> -
>
> Key: HDDS-237
> URL: https://issues.apache.org/jira/browse/HDDS-237
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-237-HDDS-48.00.patch
>
>
> Add updateDeleteTransactionId to our new classes; it was added to 
> ContainerData in HDDS-178. This is being done to merge HDDS-48 into trunk.






[jira] [Updated] (HDDS-237) Add updateDeleteTransactionId

2018-07-06 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-237?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-237:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

 

Thank you [~hanishakoneru] for the review.

I have committed this to the HDDS-48 branch.

> Add updateDeleteTransactionId
> -
>
> Key: HDDS-237
> URL: https://issues.apache.org/jira/browse/HDDS-237
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-237-HDDS-48.00.patch
>
>
> Add updateDeleteTransactionId to our new classes; it was added to 
> ContainerData in HDDS-178. This is being done to merge HDDS-48 into trunk.






[jira] [Commented] (HDFS-13663) Should throw exception when incorrect block size is set

2018-07-06 Thread Shweta (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13663?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16535479#comment-16535479
 ] 

Shweta commented on HDFS-13663:
---

The failed tests passed locally and don't look related to this change. The 
fix was trivial, hence there aren't any unit tests associated.

> Should throw exception when incorrect block size is set
> ---
>
> Key: HDFS-13663
> URL: https://issues.apache.org/jira/browse/HDFS-13663
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Yongjun Zhang
>Assignee: Shweta
>Priority: Major
> Attachments: HDFS-13663.001.patch
>
>
> See
> ./hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockRecoveryWorker.java
> {code}
> void syncBlock(List<BlockRecord> syncList) throws IOException {
>newBlock.setNumBytes(finalizedLength);
> break;
>   case RBW:
>   case RWR:
> long minLength = Long.MAX_VALUE;
> for(BlockRecord r : syncList) {
>   ReplicaState rState = r.rInfo.getOriginalReplicaState();
>   if(rState == bestState) {
> minLength = Math.min(minLength, r.rInfo.getNumBytes());
> participatingList.add(r);
>   }
>   if (LOG.isDebugEnabled()) {
> LOG.debug("syncBlock replicaInfo: block=" + block +
> ", from datanode " + r.id + ", receivedState=" + 
> rState.name() +
> ", receivedLength=" + r.rInfo.getNumBytes() + ", bestState=" +
> bestState.name());
>   }
> }
> // recover() guarantees syncList will have at least one replica with 
> RWR
> // or better state.
> assert minLength != Long.MAX_VALUE : "wrong minLength"; <= should 
> throw exception 
> newBlock.setNumBytes(minLength);
> break;
>   case RUR:
>   case TEMPORARY:
> assert false : "bad replica state: " + bestState;
>   default:
> break; // we have 'case' all enum values
>   }
> {code}
> when minLength is Long.MAX_VALUE, it should throw exception.
> There might be other places like this.
> Otherwise, we would see the following WARN in datanode log
> {code}
> WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Can't replicate block 
> xyz because on-disk length 11852203 is shorter than NameNode recorded length 
> 9223372036854775807
> {code}
> where 9223372036854775807 is Long.MAX_VALUE.
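
For illustration, replacing the assert with an explicit check might look like this (a sketch only, not the attached patch; the helper name and message are hypothetical):

{code:java}
import java.io.IOException;

// Asserts are skipped unless the JVM runs with -ea, so a bad sync list can
// silently record Long.MAX_VALUE as the block length. An explicit check
// fails loudly instead.
final class MinLengthCheck {
  static long checkedMinLength(long minLength, String block) throws IOException {
    if (minLength == Long.MAX_VALUE) {
      throw new IOException("Wrong minLength for block " + block
          + ": no replica in the sync list matched the best state");
    }
    return minLength;
  }

  public static void main(String[] args) throws IOException {
    System.out.println(checkedMinLength(11852203L, "blk_1"));      // passes
    System.out.println(checkedMinLength(Long.MAX_VALUE, "blk_2")); // throws
  }
}
{code}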






[jira] [Commented] (HDDS-237) Add updateDeleteTransactionId

2018-07-06 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-237?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16535472#comment-16535472
 ] 

genericqa commented on HDDS-237:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
45s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} HDDS-48 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 35m 
28s{color} | {color:green} HDDS-48 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
32s{color} | {color:green} HDDS-48 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
18s{color} | {color:green} HDDS-48 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
35s{color} | {color:green} HDDS-48 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 16s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
49s{color} | {color:red} hadoop-hdds/container-service in HDDS-48 has 1 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
29s{color} | {color:green} HDDS-48 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m  3s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
23s{color} | {color:green} container-service in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
29s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 71m 30s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDDS-237 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12930599/HDDS-237-HDDS-48.00.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 0a54d5cb38e2 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | HDDS-48 / e899c4c |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_171 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HDDS-Build/459/artifact/out/branch-findbugs-hadoop-hdds_container-service-warnings.html
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDDS-Build/459/testReport/ |
| Max. process+thread count | 327 (vs. ulimit of 1) |
| modules | C: hadoop-hdds/container-service U: hadoop-hdds/container-service |
| Console output | 

[jira] [Commented] (HDFS-13475) RBF: Admin cannot enforce Router enter SafeMode

2018-07-06 Thread Chao Sun (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13475?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16535464#comment-16535464
 ] 

Chao Sun commented on HDFS-13475:
-

Thanks [~elgoiri] for the review. Uploaded patch v1 to address the comments.

> RBF: Admin cannot enforce Router enter SafeMode
> ---
>
> Key: HDFS-13475
> URL: https://issues.apache.org/jira/browse/HDFS-13475
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Wei Yan
>Assignee: Chao Sun
>Priority: Major
> Attachments: HDFS-13475.000.patch, HDFS-13475.001.patch
>
>
> To reproduce the issue: 
> {code:java}
> $ bin/hdfs dfsrouteradmin -safemode enter
> Successfully enter safe mode.
> $ bin/hdfs dfsrouteradmin -safemode get
> Safe Mode: true{code}
> And then, 
> {code:java}
> $ bin/hdfs dfsrouteradmin -safemode get
> Safe Mode: false{code}
> From the code, it looks like the periodicInvoke triggers the leave.
> {code:java}
> public void periodicInvoke() {
> ..
>   // Always update to indicate our cache was updated
>   if (isCacheStale) {
> if (!rpcServer.isInSafeMode()) {
>   enter();
> }
>   } else if (rpcServer.isInSafeMode()) {
> // Cache recently updated, leave safe mode
> leave();
>   }
> }
> {code}
>  
>  






[jira] [Updated] (HDFS-13475) RBF: Admin cannot enforce Router enter SafeMode

2018-07-06 Thread Chao Sun (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13475?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Sun updated HDFS-13475:

Attachment: HDFS-13475.001.patch

> RBF: Admin cannot enforce Router enter SafeMode
> ---
>
> Key: HDFS-13475
> URL: https://issues.apache.org/jira/browse/HDFS-13475
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Wei Yan
>Assignee: Chao Sun
>Priority: Major
> Attachments: HDFS-13475.000.patch, HDFS-13475.001.patch
>
>
> To reproduce the issue: 
> {code:java}
> $ bin/hdfs dfsrouteradmin -safemode enter
> Successfully enter safe mode.
> $ bin/hdfs dfsrouteradmin -safemode get
> Safe Mode: true{code}
> And then, 
> {code:java}
> $ bin/hdfs dfsrouteradmin -safemode get
> Safe Mode: false{code}
> From the code, it looks like the periodicInvoke triggers the leave.
> {code:java}
> public void periodicInvoke() {
> ..
>   // Always update to indicate our cache was updated
>   if (isCacheStale) {
> if (!rpcServer.isInSafeMode()) {
>   enter();
> }
>   } else if (rpcServer.isInSafeMode()) {
> // Cache recently updated, leave safe mode
> leave();
>   }
> }
> {code}
>  
>  






[jira] [Commented] (HDDS-213) Single lock to synchronize KeyValueContainer#update

2018-07-06 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-213?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16535465#comment-16535465
 ] 

Bharat Viswanadham commented on HDDS-213:
-

Thank you [~hanishakoneru] for the fix.

A few comments:

1. We can remove marking the container as invalid when the metadata update 
fails, since we restore the data to the older state anyway. In that case we 
can treat the update operation as failed for that container and return an 
error to the client.
KeyValueContainerData.java: Line 367,368
// On error, mark the container as Invalid and reset the metadata.
containerData.markAsInvalid();

2. The rename will fail here, as the destination file already exists:
KeyValueContainerData.java: Line 249,250
NativeIO.renameTo(tmpContainerFile, containerFile);
NativeIO.renameTo(tmpChecksumFile, checksumFile);

Also, update() calls createContainerFile and renames the files again. This 
will fail, because the containerFile and containerCheckSumFile are created in 
update() using createTempFile.
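
A minimal sketch of the single-lock update pattern under discussion (hypothetical names, and java.nio in place of NativeIO so that replacing an existing destination file is allowed):

{code:java}
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of HDDS-213's idea: update the in-memory metadata and
// the on-disk file under one lock, writing to a temp file and atomically
// replacing the destination. Not the actual patch.
class ContainerMetadataSketch {
  private final Object lock = new Object();
  private final Map<String, String> metadata = new HashMap<>();

  void update(Path containerFile, String key, String value) throws IOException {
    synchronized (lock) {
      String old = metadata.put(key, value);   // in-memory update
      try {
        Path tmp = Files.createTempFile(
            containerFile.getParent(), "container", ".tmp");
        Files.write(tmp, metadata.toString().getBytes(StandardCharsets.UTF_8));
        // Unlike a plain rename, this replaces an existing destination.
        Files.move(tmp, containerFile,
            StandardCopyOption.REPLACE_EXISTING, StandardCopyOption.ATOMIC_MOVE);
      } catch (IOException e) {
        // Roll back the in-memory state instead of marking the container invalid.
        if (old == null) {
          metadata.remove(key);
        } else {
          metadata.put(key, old);
        }
        throw e;
      }
    }
  }
}
{code}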

 

> Single lock to synchronize KeyValueContainer#update
> ---
>
> Key: HDDS-213
> URL: https://issues.apache.org/jira/browse/HDDS-213
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-213-HDDS-48.000.patch, HDDS-213-HDDS-48.001.patch, 
> HDDS-213-HDDS-48.002.patch
>
>
> When updating the container metadata, the in-memory state and on-disk state 
> should be updated under the same lock.






[jira] [Commented] (HDDS-213) Single lock to synchronize KeyValueContainer#update

2018-07-06 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-213?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16535454#comment-16535454
 ] 

genericqa commented on HDDS-213:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} HDDS-48 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
36s{color} | {color:green} HDDS-48 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
24s{color} | {color:green} HDDS-48 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} HDDS-48 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
28s{color} | {color:green} HDDS-48 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 54s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
44s{color} | {color:red} hadoop-hdds/container-service in HDDS-48 has 1 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
29s{color} | {color:green} HDDS-48 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 42s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
46s{color} | {color:green} hadoop-hdds/container-service generated 0 new + 0 
unchanged - 1 fixed = 0 total (was 1) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
10s{color} | {color:green} container-service in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 55m 22s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDDS-213 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12930600/HDDS-213-HDDS-48.002.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 37688a3377d5 4.4.0-116-generic #140-Ubuntu SMP Mon Feb 12 
21:23:04 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | HDDS-48 / 7dcf587 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_171 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HDDS-Build/458/artifact/out/branch-findbugs-hadoop-hdds_container-service-warnings.html
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDDS-Build/458/testReport/ |
| Max. process+thread count | 433 (vs. ulimit of 1) |
| modules | C: hadoop-hdds/container-service U: 

[jira] [Commented] (HDFS-13723) Occasional "Should be different group" error in TestRefreshUserMappings#testGroupMappingRefresh

2018-07-06 Thread Siyao Meng (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13723?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16535440#comment-16535440
 ] 

Siyao Meng commented on HDFS-13723:
---

In [^HDFS-13723.003.patch]:

I have replaced
{code:java}
import org.apache.log4j.Level{code}
with
{code:java}
import org.slf4j.event.Level{code}
to get rid of the "deprecation" warning.
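
For reference, the non-deprecated pattern typically looks like this (a sketch assuming the slf4j overload of GenericTestUtils.setLogLevel seen elsewhere in Hadoop tests, not copied from this patch):

{code:java}
import org.apache.hadoop.test.GenericTestUtils;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.slf4j.event.Level;

// Sketch: raise a logger's level through org.slf4j.event.Level instead of
// the deprecated org.apache.log4j.Level.
public class LogLevelExample {
  private static final Logger LOG =
      LoggerFactory.getLogger(LogLevelExample.class);

  public static void main(String[] args) {
    GenericTestUtils.setLogLevel(LOG, Level.DEBUG);
    LOG.debug("debug logging enabled via org.slf4j.event.Level");
  }
}
{code}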

 

> Occasional "Should be different group" error in 
> TestRefreshUserMappings#testGroupMappingRefresh
> ---
>
> Key: HDFS-13723
> URL: https://issues.apache.org/jira/browse/HDFS-13723
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: security, test
>Affects Versions: 3.0.0
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Attachments: HDFS-13723.001.patch, HDFS-13723.002.patch, 
> HDFS-13723.003.patch
>
>
> On some occasions, the user-group mapping refresh timeout test assertion 
> fails because the mapping didn't refresh in time, reporting "Should be 
> different group".
>  
> Trace:
> {code:java}
> java.lang.AssertionError: Should be different group 
> at 
> org.apache.hadoop.security.TestRefreshUserMappings.testGroupMappingRefresh(TestRefreshUserMappings.java:153)
> :
> :
> 2018-07-04 19:35:21,073 [BP-1412052829-172.26.17.254-1530758120647 
> heartbeating to localhost/127.0.0.1:39524] INFO datanode.DataNode 
> (BPOfferService.java:processCommandFromActive(759)) - Got finalize command 
> for block pool BP-1412052829-172.26.17.254-1530758120647
> Getting groups in MockUnixGroupsMapping
> 2018-07-04 19:35:21,090 [IPC Server handler 6 on 39524] INFO 
> FSNamesystem.audit (FSNamesystem.java:logAuditMessage(7805)) - allowed=true   
>ugi=jenkins (auth:SIMPLE)   ip=/127.0.0.1 cmd=datanodeReport
> src=nulldst=nullperm=null   proto=rpc
> 2018-07-04 19:35:21,092 [main] INFO hdfs.MiniDFSCluster 
> (MiniDFSCluster.java:waitActive(2629)) - Cluster is active
> 2018-07-04 19:35:21,095 [IPC Server handler 7 on 39524] INFO 
> FSNamesystem.audit (FSNamesystem.java:logAuditMessage(7805)) - allowed=true   
>ugi=jenkins (auth:SIMPLE)   ip=/127.0.0.1 cmd=datanodeReport
> src=nulldst=nullperm=null   proto=rpc
> 2018-07-04 19:35:21,096 [main] INFO hdfs.MiniDFSCluster 
> (MiniDFSCluster.java:waitActive(2629)) - Cluster is active
> first attempt:
> [jenkins11, jenkins12]
> second attempt, should be same:
> [jenkins11, jenkins12]
> 2018-07-04 19:35:21,101 [IPC Server handler 5 on 39524] INFO 
> namenode.NameNode (NameNodeRpcServer.java:refreshUserToGroupsMappings(1648)) 
> - Refreshing all user-to-groups mappings. Requested by user: jenkins
> 2018-07-04 19:35:21,101 [IPC Server handler 5 on 39524] INFO security.Groups 
> (Groups.java:refresh(401)) - clearing userToGroupsMap cache
> Refreshing groups in MockUnixGroupsMapping
> 2018-07-04 19:35:21,102 [IPC Server handler 5 on 39524] INFO 
> FSNamesystem.audit (FSNamesystem.java:logAuditMessage(7805)) - allowed=true   
>ugi=jenkins (auth:SIMPLE)   ip=/127.0.0.1 
> cmd=refreshUserToGroupsMappings   src=nulldst=nullperm=null   
> proto=rpc
> Refresh user to groups mapping successful
> third attempt(after refresh command), should be different:
> Getting groups in MockUnixGroupsMapping
> [jenkins21, jenkins22]
> fourth attempt(after timeout), should be different:
> [jenkins21, jenkins22]
> Getting groups in MockUnixGroupsMapping
> 2018-07-04 19:35:22,204 [main] INFO hdfs.MiniDFSCluster 
> (MiniDFSCluster.java:shutdown(1965)) - Shutting down the Mini HDFS Cluster
> {code}
>  
> Solution:
> Increase the timeout slightly, and place debugging messages in the load() and 
> reload() methods of the GroupCacheLoader class (a sketch of the timing 
> pattern follows this quoted description).
>  
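
To make the timing concrete, here is a minimal, runnable sketch of the pattern the test exercises (all names are assumed for illustration, not copied from the patch):
{code:java}
import java.util.Arrays;

// Sketch of the flaky pattern: the post-timeout fetch must land safely past
// the group-cache timeout, otherwise the cached groups come back unchanged
// and the test reports "Should be different group".
public class RefreshTimingSketch {
  // Stand-in for Groups#getGroups: the real implementation caches results
  // until a refresh or timeout; this stub always returns fresh groups.
  static String[] getGroups(String user) {
    return new String[] { "group-" + System.nanoTime() };
  }

  public static void main(String[] args) throws InterruptedException {
    long cacheTimeoutSeconds = 1;
    String[] afterRefresh = getGroups("jenkins");     // third attempt
    Thread.sleep(cacheTimeoutSeconds * 1000L + 100);  // pad slightly past the timeout
    String[] afterTimeout = getGroups("jenkins");     // fourth attempt
    if (Arrays.equals(afterRefresh, afterTimeout)) {
      throw new AssertionError("Should be different group");
    }
  }
}
{code}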



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13723) Occasional "Should be different group" error in TestRefreshUserMappings#testGroupMappingRefresh

2018-07-06 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13723?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16535441#comment-16535441
 ] 

Wei-Chiu Chuang commented on HDFS-13723:


LGTM +1 pending Jenkins.

> Occasional "Should be different group" error in 
> TestRefreshUserMappings#testGroupMappingRefresh
> ---
>
> Key: HDFS-13723
> URL: https://issues.apache.org/jira/browse/HDFS-13723
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: security, test
>Affects Versions: 3.0.0
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Attachments: HDFS-13723.001.patch, HDFS-13723.002.patch, 
> HDFS-13723.003.patch
>
>
> On some occasions, the user-group mapping refresh timeout test assertion 
> fails because the mapping didn't refresh in time, reporting "Should be 
> different group".
>  
> Trace:
> {code:java}
> java.lang.AssertionError: Should be different group 
> at 
> org.apache.hadoop.security.TestRefreshUserMappings.testGroupMappingRefresh(TestRefreshUserMappings.java:153)
> :
> :
> 2018-07-04 19:35:21,073 [BP-1412052829-172.26.17.254-1530758120647 
> heartbeating to localhost/127.0.0.1:39524] INFO datanode.DataNode 
> (BPOfferService.java:processCommandFromActive(759)) - Got finalize command 
> for block pool BP-1412052829-172.26.17.254-1530758120647
> Getting groups in MockUnixGroupsMapping
> 2018-07-04 19:35:21,090 [IPC Server handler 6 on 39524] INFO 
> FSNamesystem.audit (FSNamesystem.java:logAuditMessage(7805)) - allowed=true   
>ugi=jenkins (auth:SIMPLE)   ip=/127.0.0.1 cmd=datanodeReport
> src=nulldst=nullperm=null   proto=rpc
> 2018-07-04 19:35:21,092 [main] INFO hdfs.MiniDFSCluster 
> (MiniDFSCluster.java:waitActive(2629)) - Cluster is active
> 2018-07-04 19:35:21,095 [IPC Server handler 7 on 39524] INFO 
> FSNamesystem.audit (FSNamesystem.java:logAuditMessage(7805)) - allowed=true   
>ugi=jenkins (auth:SIMPLE)   ip=/127.0.0.1 cmd=datanodeReport
> src=nulldst=nullperm=null   proto=rpc
> 2018-07-04 19:35:21,096 [main] INFO hdfs.MiniDFSCluster 
> (MiniDFSCluster.java:waitActive(2629)) - Cluster is active
> first attempt:
> [jenkins11, jenkins12]
> second attempt, should be same:
> [jenkins11, jenkins12]
> 2018-07-04 19:35:21,101 [IPC Server handler 5 on 39524] INFO 
> namenode.NameNode (NameNodeRpcServer.java:refreshUserToGroupsMappings(1648)) 
> - Refreshing all user-to-groups mappings. Requested by user: jenkins
> 2018-07-04 19:35:21,101 [IPC Server handler 5 on 39524] INFO security.Groups 
> (Groups.java:refresh(401)) - clearing userToGroupsMap cache
> Refreshing groups in MockUnixGroupsMapping
> 2018-07-04 19:35:21,102 [IPC Server handler 5 on 39524] INFO 
> FSNamesystem.audit (FSNamesystem.java:logAuditMessage(7805)) - allowed=true   
>ugi=jenkins (auth:SIMPLE)   ip=/127.0.0.1 
> cmd=refreshUserToGroupsMappings   src=nulldst=nullperm=null   
> proto=rpc
> Refresh user to groups mapping successful
> third attempt(after refresh command), should be different:
> Getting groups in MockUnixGroupsMapping
> [jenkins21, jenkins22]
> fourth attempt(after timeout), should be different:
> [jenkins21, jenkins22]
> Getting groups in MockUnixGroupsMapping
> 2018-07-04 19:35:22,204 [main] INFO hdfs.MiniDFSCluster 
> (MiniDFSCluster.java:shutdown(1965)) - Shutting down the Mini HDFS Cluster
> {code}
>  
> Solution:
> Increase the timeout slightly, and place debugging messages in the load() and 
> reload() methods of the GroupCacheLoader class.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-13723) Occasional "Should be different group" error in TestRefreshUserMappings#testGroupMappingRefresh

2018-07-06 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13723?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16535441#comment-16535441
 ] 

Wei-Chiu Chuang edited comment on HDFS-13723 at 7/6/18 10:40 PM:
-

rev003 LGTM +1 pending Jenkins.


was (Author: jojochuang):
LGTM +1 pending Jenkins.

> Occasional "Should be different group" error in 
> TestRefreshUserMappings#testGroupMappingRefresh
> ---
>
> Key: HDFS-13723
> URL: https://issues.apache.org/jira/browse/HDFS-13723
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: security, test
>Affects Versions: 3.0.0
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Attachments: HDFS-13723.001.patch, HDFS-13723.002.patch, 
> HDFS-13723.003.patch
>
>
> On some occasions, the user-group mapping refresh timeout test assertion 
> fails because the mapping didn't refresh in time, reporting "Should be 
> different group".
>  
> Trace:
> {code:java}
> java.lang.AssertionError: Should be different group 
> at 
> org.apache.hadoop.security.TestRefreshUserMappings.testGroupMappingRefresh(TestRefreshUserMappings.java:153)
> :
> :
> 2018-07-04 19:35:21,073 [BP-1412052829-172.26.17.254-1530758120647 
> heartbeating to localhost/127.0.0.1:39524] INFO datanode.DataNode 
> (BPOfferService.java:processCommandFromActive(759)) - Got finalize command 
> for block pool BP-1412052829-172.26.17.254-1530758120647
> Getting groups in MockUnixGroupsMapping
> 2018-07-04 19:35:21,090 [IPC Server handler 6 on 39524] INFO 
> FSNamesystem.audit (FSNamesystem.java:logAuditMessage(7805)) - allowed=true   
>ugi=jenkins (auth:SIMPLE)   ip=/127.0.0.1 cmd=datanodeReport
> src=nulldst=nullperm=null   proto=rpc
> 2018-07-04 19:35:21,092 [main] INFO hdfs.MiniDFSCluster 
> (MiniDFSCluster.java:waitActive(2629)) - Cluster is active
> 2018-07-04 19:35:21,095 [IPC Server handler 7 on 39524] INFO 
> FSNamesystem.audit (FSNamesystem.java:logAuditMessage(7805)) - allowed=true   
>ugi=jenkins (auth:SIMPLE)   ip=/127.0.0.1 cmd=datanodeReport
> src=nulldst=nullperm=null   proto=rpc
> 2018-07-04 19:35:21,096 [main] INFO hdfs.MiniDFSCluster 
> (MiniDFSCluster.java:waitActive(2629)) - Cluster is active
> first attempt:
> [jenkins11, jenkins12]
> second attempt, should be same:
> [jenkins11, jenkins12]
> 2018-07-04 19:35:21,101 [IPC Server handler 5 on 39524] INFO 
> namenode.NameNode (NameNodeRpcServer.java:refreshUserToGroupsMappings(1648)) 
> - Refreshing all user-to-groups mappings. Requested by user: jenkins
> 2018-07-04 19:35:21,101 [IPC Server handler 5 on 39524] INFO security.Groups 
> (Groups.java:refresh(401)) - clearing userToGroupsMap cache
> Refreshing groups in MockUnixGroupsMapping
> 2018-07-04 19:35:21,102 [IPC Server handler 5 on 39524] INFO 
> FSNamesystem.audit (FSNamesystem.java:logAuditMessage(7805)) - allowed=true   
>ugi=jenkins (auth:SIMPLE)   ip=/127.0.0.1 
> cmd=refreshUserToGroupsMappings   src=nulldst=nullperm=null   
> proto=rpc
> Refresh user to groups mapping successful
> third attempt(after refresh command), should be different:
> Getting groups in MockUnixGroupsMapping
> [jenkins21, jenkins22]
> fourth attempt(after timeout), should be different:
> [jenkins21, jenkins22]
> Getting groups in MockUnixGroupsMapping
> 2018-07-04 19:35:22,204 [main] INFO hdfs.MiniDFSCluster 
> (MiniDFSCluster.java:shutdown(1965)) - Shutting down the Mini HDFS Cluster
> {code}
>  
> Solution:
> Increase the timeout slightly, and place debugging messages in the load() and 
> reload() methods of the GroupCacheLoader class.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13723) Occasional "Should be different group" error in TestRefreshUserMappings#testGroupMappingRefresh

2018-07-06 Thread Siyao Meng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13723?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HDFS-13723:
--
Attachment: HDFS-13723.003.patch

> Occasional "Should be different group" error in 
> TestRefreshUserMappings#testGroupMappingRefresh
> ---
>
> Key: HDFS-13723
> URL: https://issues.apache.org/jira/browse/HDFS-13723
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: security, test
>Affects Versions: 3.0.0
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Attachments: HDFS-13723.001.patch, HDFS-13723.002.patch, 
> HDFS-13723.003.patch
>
>
> On some occasions, the user-group mapping refresh timeout test assertion 
> fails because the mapping didn't refresh in time, reporting "Should be 
> different group".
>  
> Trace:
> {code:java}
> java.lang.AssertionError: Should be different group 
> at 
> org.apache.hadoop.security.TestRefreshUserMappings.testGroupMappingRefresh(TestRefreshUserMappings.java:153)
> :
> :
> 2018-07-04 19:35:21,073 [BP-1412052829-172.26.17.254-1530758120647 
> heartbeating to localhost/127.0.0.1:39524] INFO datanode.DataNode 
> (BPOfferService.java:processCommandFromActive(759)) - Got finalize command 
> for block pool BP-1412052829-172.26.17.254-1530758120647
> Getting groups in MockUnixGroupsMapping
> 2018-07-04 19:35:21,090 [IPC Server handler 6 on 39524] INFO 
> FSNamesystem.audit (FSNamesystem.java:logAuditMessage(7805)) - allowed=true   
>ugi=jenkins (auth:SIMPLE)   ip=/127.0.0.1 cmd=datanodeReport
> src=nulldst=nullperm=null   proto=rpc
> 2018-07-04 19:35:21,092 [main] INFO hdfs.MiniDFSCluster 
> (MiniDFSCluster.java:waitActive(2629)) - Cluster is active
> 2018-07-04 19:35:21,095 [IPC Server handler 7 on 39524] INFO 
> FSNamesystem.audit (FSNamesystem.java:logAuditMessage(7805)) - allowed=true   
>ugi=jenkins (auth:SIMPLE)   ip=/127.0.0.1 cmd=datanodeReport
> src=nulldst=nullperm=null   proto=rpc
> 2018-07-04 19:35:21,096 [main] INFO hdfs.MiniDFSCluster 
> (MiniDFSCluster.java:waitActive(2629)) - Cluster is active
> first attempt:
> [jenkins11, jenkins12]
> second attempt, should be same:
> [jenkins11, jenkins12]
> 2018-07-04 19:35:21,101 [IPC Server handler 5 on 39524] INFO 
> namenode.NameNode (NameNodeRpcServer.java:refreshUserToGroupsMappings(1648)) 
> - Refreshing all user-to-groups mappings. Requested by user: jenkins
> 2018-07-04 19:35:21,101 [IPC Server handler 5 on 39524] INFO security.Groups 
> (Groups.java:refresh(401)) - clearing userToGroupsMap cache
> Refreshing groups in MockUnixGroupsMapping
> 2018-07-04 19:35:21,102 [IPC Server handler 5 on 39524] INFO 
> FSNamesystem.audit (FSNamesystem.java:logAuditMessage(7805)) - allowed=true   
>ugi=jenkins (auth:SIMPLE)   ip=/127.0.0.1 
> cmd=refreshUserToGroupsMappings   src=nulldst=nullperm=null   
> proto=rpc
> Refresh user to groups mapping successful
> third attempt(after refresh command), should be different:
> Getting groups in MockUnixGroupsMapping
> [jenkins21, jenkins22]
> fourth attempt(after timeout), should be different:
> [jenkins21, jenkins22]
> Getting groups in MockUnixGroupsMapping
> 2018-07-04 19:35:22,204 [main] INFO hdfs.MiniDFSCluster 
> (MiniDFSCluster.java:shutdown(1965)) - Shutting down the Mini HDFS Cluster
> {code}
>  
> Solution:
> Increase the timeout slightly, and place debugging messages in the load() and 
> reload() methods of the GroupCacheLoader class.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-187) Command status publisher for datanode

2018-07-06 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-187?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16535427#comment-16535427
 ] 

Bharat Viswanadham commented on HDDS-187:
-

[~ajayydv] and [~xyao]

Can we hold off committing this until HDDS-48 is merged into trunk? 

This Jira's patch will need some rewriting to match the new classes from 
ContainerIO.

Let me know your thoughts on this.

> Command status publisher for datanode
> -
>
> Key: HDDS-187
> URL: https://issues.apache.org/jira/browse/HDDS-187
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Affects Versions: 0.2.1
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-187.00.patch, HDDS-187.01.patch, HDDS-187.02.patch, 
> HDDS-187.03.patch, HDDS-187.04.patch, HDDS-187.05.patch, HDDS-187.06.patch, 
> HDDS-187.07.patch
>
>
> Currently the SCM sends a set of commands to the DataNode, which executes 
> them via CommandHandler. This jira intends to create a command status 
> publisher that will report the status of these commands back to the SCM.
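
A hypothetical sketch of that pattern (names are illustrative, not the HDDS classes): handlers record per-command status, and the publisher drains it into the next heartbeat.
{code:java}
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

class CommandStatusPublisherSketch {
  enum Status { PENDING, EXECUTED, FAILED }

  private final Map<Long, Status> statuses = new ConcurrentHashMap<>();

  // called by a CommandHandler once a command finishes (or fails)
  void record(long commandId, Status status) {
    statuses.put(commandId, status);
  }

  // attached to the next heartbeat to the SCM; simplified, so an update
  // racing the drain for the same id could be dropped in this sketch
  Map<Long, Status> drainForHeartbeat() {
    Map<Long, Status> snapshot = new HashMap<>(statuses);
    snapshot.keySet().forEach(statuses::remove);
    return snapshot;
  }
}
{code}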



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13723) Occasional "Should be different group" error in TestRefreshUserMappings#testGroupMappingRefresh

2018-07-06 Thread Siyao Meng (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13723?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16535422#comment-16535422
 ] 

Siyao Meng commented on HDFS-13723:
---

[~jojochuang] Thanks for your comment! I have updated the patch to add the 
annotation.

> Occasional "Should be different group" error in 
> TestRefreshUserMappings#testGroupMappingRefresh
> ---
>
> Key: HDFS-13723
> URL: https://issues.apache.org/jira/browse/HDFS-13723
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: security, test
>Affects Versions: 3.0.0
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Attachments: HDFS-13723.001.patch, HDFS-13723.002.patch
>
>
> On some occasions, the user-group mapping refresh timeout test assertion 
> fails because the mapping didn't refresh in time, reporting "Should be 
> different group".
>  
> Trace:
> {code:java}
> java.lang.AssertionError: Should be different group 
> at 
> org.apache.hadoop.security.TestRefreshUserMappings.testGroupMappingRefresh(TestRefreshUserMappings.java:153)
> :
> :
> 2018-07-04 19:35:21,073 [BP-1412052829-172.26.17.254-1530758120647 
> heartbeating to localhost/127.0.0.1:39524] INFO datanode.DataNode 
> (BPOfferService.java:processCommandFromActive(759)) - Got finalize command 
> for block pool BP-1412052829-172.26.17.254-1530758120647
> Getting groups in MockUnixGroupsMapping
> 2018-07-04 19:35:21,090 [IPC Server handler 6 on 39524] INFO 
> FSNamesystem.audit (FSNamesystem.java:logAuditMessage(7805)) - allowed=true   
>ugi=jenkins (auth:SIMPLE)   ip=/127.0.0.1 cmd=datanodeReport
> src=nulldst=nullperm=null   proto=rpc
> 2018-07-04 19:35:21,092 [main] INFO hdfs.MiniDFSCluster 
> (MiniDFSCluster.java:waitActive(2629)) - Cluster is active
> 2018-07-04 19:35:21,095 [IPC Server handler 7 on 39524] INFO 
> FSNamesystem.audit (FSNamesystem.java:logAuditMessage(7805)) - allowed=true   
>ugi=jenkins (auth:SIMPLE)   ip=/127.0.0.1 cmd=datanodeReport
> src=nulldst=nullperm=null   proto=rpc
> 2018-07-04 19:35:21,096 [main] INFO hdfs.MiniDFSCluster 
> (MiniDFSCluster.java:waitActive(2629)) - Cluster is active
> first attempt:
> [jenkins11, jenkins12]
> second attempt, should be same:
> [jenkins11, jenkins12]
> 2018-07-04 19:35:21,101 [IPC Server handler 5 on 39524] INFO 
> namenode.NameNode (NameNodeRpcServer.java:refreshUserToGroupsMappings(1648)) 
> - Refreshing all user-to-groups mappings. Requested by user: jenkins
> 2018-07-04 19:35:21,101 [IPC Server handler 5 on 39524] INFO security.Groups 
> (Groups.java:refresh(401)) - clearing userToGroupsMap cache
> Refreshing groups in MockUnixGroupsMapping
> 2018-07-04 19:35:21,102 [IPC Server handler 5 on 39524] INFO 
> FSNamesystem.audit (FSNamesystem.java:logAuditMessage(7805)) - allowed=true   
>ugi=jenkins (auth:SIMPLE)   ip=/127.0.0.1 
> cmd=refreshUserToGroupsMappings   src=nulldst=nullperm=null   
> proto=rpc
> Refresh user to groups mapping successful
> third attempt(after refresh command), should be different:
> Getting groups in MockUnixGroupsMapping
> [jenkins21, jenkins22]
> fourth attempt(after timeout), should be different:
> [jenkins21, jenkins22]
> Getting groups in MockUnixGroupsMapping
> 2018-07-04 19:35:22,204 [main] INFO hdfs.MiniDFSCluster 
> (MiniDFSCluster.java:shutdown(1965)) - Shutting down the Mini HDFS Cluster
> {code}
>  
> Solution:
> Increase the timeout slightly, and place debugging messages in the load() and 
> reload() methods of the GroupCacheLoader class.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13723) Occasional "Should be different group" error in TestRefreshUserMappings#testGroupMappingRefresh

2018-07-06 Thread Siyao Meng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13723?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HDFS-13723:
--
Attachment: HDFS-13723.002.patch

> Occasional "Should be different group" error in 
> TestRefreshUserMappings#testGroupMappingRefresh
> ---
>
> Key: HDFS-13723
> URL: https://issues.apache.org/jira/browse/HDFS-13723
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: security, test
>Affects Versions: 3.0.0
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Attachments: HDFS-13723.001.patch, HDFS-13723.002.patch
>
>
> On some occasions, the user-group mapping refresh timeout test assertion 
> fails because the mapping didn't refresh in time, reporting "Should be 
> different group".
>  
> Trace:
> {code:java}
> java.lang.AssertionError: Should be different group 
> at 
> org.apache.hadoop.security.TestRefreshUserMappings.testGroupMappingRefresh(TestRefreshUserMappings.java:153)
> :
> :
> 2018-07-04 19:35:21,073 [BP-1412052829-172.26.17.254-1530758120647 
> heartbeating to localhost/127.0.0.1:39524] INFO datanode.DataNode 
> (BPOfferService.java:processCommandFromActive(759)) - Got finalize command 
> for block pool BP-1412052829-172.26.17.254-1530758120647
> Getting groups in MockUnixGroupsMapping
> 2018-07-04 19:35:21,090 [IPC Server handler 6 on 39524] INFO 
> FSNamesystem.audit (FSNamesystem.java:logAuditMessage(7805)) - allowed=true   
>ugi=jenkins (auth:SIMPLE)   ip=/127.0.0.1 cmd=datanodeReport
> src=nulldst=nullperm=null   proto=rpc
> 2018-07-04 19:35:21,092 [main] INFO hdfs.MiniDFSCluster 
> (MiniDFSCluster.java:waitActive(2629)) - Cluster is active
> 2018-07-04 19:35:21,095 [IPC Server handler 7 on 39524] INFO 
> FSNamesystem.audit (FSNamesystem.java:logAuditMessage(7805)) - allowed=true   
>ugi=jenkins (auth:SIMPLE)   ip=/127.0.0.1 cmd=datanodeReport
> src=nulldst=nullperm=null   proto=rpc
> 2018-07-04 19:35:21,096 [main] INFO hdfs.MiniDFSCluster 
> (MiniDFSCluster.java:waitActive(2629)) - Cluster is active
> first attempt:
> [jenkins11, jenkins12]
> second attempt, should be same:
> [jenkins11, jenkins12]
> 2018-07-04 19:35:21,101 [IPC Server handler 5 on 39524] INFO 
> namenode.NameNode (NameNodeRpcServer.java:refreshUserToGroupsMappings(1648)) 
> - Refreshing all user-to-groups mappings. Requested by user: jenkins
> 2018-07-04 19:35:21,101 [IPC Server handler 5 on 39524] INFO security.Groups 
> (Groups.java:refresh(401)) - clearing userToGroupsMap cache
> Refreshing groups in MockUnixGroupsMapping
> 2018-07-04 19:35:21,102 [IPC Server handler 5 on 39524] INFO 
> FSNamesystem.audit (FSNamesystem.java:logAuditMessage(7805)) - allowed=true   
>ugi=jenkins (auth:SIMPLE)   ip=/127.0.0.1 
> cmd=refreshUserToGroupsMappings   src=nulldst=nullperm=null   
> proto=rpc
> Refresh user to groups mapping successful
> third attempt(after refresh command), should be different:
> Getting groups in MockUnixGroupsMapping
> [jenkins21, jenkins22]
> fourth attempt(after timeout), should be different:
> [jenkins21, jenkins22]
> Getting groups in MockUnixGroupsMapping
> 2018-07-04 19:35:22,204 [main] INFO hdfs.MiniDFSCluster 
> (MiniDFSCluster.java:shutdown(1965)) - Shutting down the Mini HDFS Cluster
> {code}
>  
> Solution:
> Increase the timeout slightly, and place debugging messages in the load() and 
> reload() methods of the GroupCacheLoader class.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-13637) RBF: Router fails when threadIndex (in ConnectionPool) wraps around Integer.MIN_VALUE

2018-07-06 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/HDFS-13637?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri reassigned HDFS-13637:
--

Assignee: CR Hota  (was: CR Hota(invalid))

> RBF: Router fails when threadIndex (in ConnectionPool) wraps around 
> Integer.MIN_VALUE
> -
>
> Key: HDFS-13637
> URL: https://issues.apache.org/jira/browse/HDFS-13637
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: federation
>Reporter: CR Hota
>Assignee: CR Hota
>Priority: Critical
>  Labels: RBF
> Fix For: 2.10.0, 3.2.0, 3.1.1, 2.9.2, 3.0.4
>
> Attachments: HDFS-13637.0.patch, HDFS-13637.1.patch, 
> HDFS-13637.2.patch, HDFS-13637.3.patch
>
>
> {code:java}
> int threadIndex = this.clientIndex.getAndIncrement();
> for (int i = 0; i < size; i++) {
>   int index = (threadIndex + i) % size;
>   conn = tmpConnections.get(index);
>   if (conn != null && conn.isUsable()) {
>     return conn;
>   }
> }
> {code}
> The above code in the ConnectionPool#getConnection method throws 
> java.lang.ArrayIndexOutOfBoundsException once clientIndex wraps around to 
> Integer.MIN_VALUE, because the modulo of a negative index is negative; the 
> Router then rejects all requests. threadIndex should be reset to 0:
> {code:java}
> if (threadIndex < 0) {
>   // Wrap around 0 to keep array lookup index positive
>   this.clientIndex.set(0);
>   threadIndex = this.clientIndex.getAndIncrement();
> }
> {code}
>  
>  
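
An alternative sketch that sidesteps the negative index without a reset branch, reusing the variables from the snippet quoted above (illustrative only, not the committed fix):
{code:java}
// Math.floorMod always returns a non-negative result for a positive modulus,
// so the lookup index stays valid no matter where the shared counter wraps.
int threadIndex = this.clientIndex.getAndIncrement();
for (int i = 0; i < size; i++) {
  int index = Math.floorMod(threadIndex + i, size);
  conn = tmpConnections.get(index);
  if (conn != null && conn.isUsable()) {
    return conn;
  }
}
{code}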



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-235) Fix TestOzoneAuditLogger#verifyDefaultLogLevel

2018-07-06 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16535410#comment-16535410
 ] 

genericqa commented on HDDS-235:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
46s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 37m 
 0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 36s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
2s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 57s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
35s{color} | {color:green} common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
35s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 76m  7s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDDS-235 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12930591/HDDS-235.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux be39a560c39d 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 061b168 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_171 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDDS-Build/457/testReport/ |
| Max. process+thread count | 302 (vs. ulimit of 1) |
| modules | C: hadoop-hdds/common U: hadoop-hdds/common |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/457/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Fix TestOzoneAuditLogger#verifyDefaultLogLevel
> --
>
> Key: HDDS-235
> URL: https://issues.apache.org/jira/browse/HDDS-235
> Project: 

[jira] [Commented] (HDFS-13663) Should throw exception when incorrect block size is set

2018-07-06 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13663?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16535407#comment-16535407
 ] 

genericqa commented on HDFS-13663:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
42s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 27m 
 5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  1s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
 8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 34s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}104m 46s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}164m 48s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.client.impl.TestBlockReaderLocal |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy |
|   | hadoop.hdfs.TestRollingUpgrade |
|   | hadoop.hdfs.server.balancer.TestBalancerRPCDelay |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDFS-13663 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12930568/HDFS-13663.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux a9d0690603dd 4.4.0-89-generic #112-Ubuntu SMP Mon Jul 31 
19:38:41 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 061b168 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_171 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24569/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt

[jira] [Updated] (HDDS-211) Add a create container Lock

2018-07-06 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-211?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-211:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

Thank you [~hanishakoneru] for the review.

I have committed this to the HDDS-48 branch.

> Add a create container Lock
> ---
>
> Key: HDDS-211
> URL: https://issues.apache.org/jira/browse/HDDS-211
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-211-HDDS-48.00.patch, HDDS-211-HDDS-48.01.patch, 
> HDDS-211-HDDS-48.02.patch
>
>
> Add a lock to guard multiple creations of the same container.
> When multiple clients try to create a container with the same containerID, 
> one client should succeed and the remaining clients should get a 
> StorageContainerException. 
>  
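
As an aside, a minimal sketch of the kind of guard the description asks for (class and exception names here are illustrative, not the actual HDDS types):
{code:java}
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch: the first client to reserve a containerID wins; later
// clients get an exception. putIfAbsent makes the reservation atomic.
class ContainerStoreSketch {
  private final ConcurrentHashMap<Long, Object> containers =
      new ConcurrentHashMap<>();

  void create(long containerID) throws ContainerExistsException {
    if (containers.putIfAbsent(containerID, new Object()) != null) {
      throw new ContainerExistsException(
          "Container " + containerID + " already exists");
    }
    // ... proceed with on-disk creation under this reservation ...
  }
}

class ContainerExistsException extends Exception {
  ContainerExistsException(String msg) { super(msg); }
}
{code}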



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-211) Add a create container Lock

2018-07-06 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-211?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-211:

Fix Version/s: 0.2.1

> Add a create container Lock
> ---
>
> Key: HDDS-211
> URL: https://issues.apache.org/jira/browse/HDDS-211
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-211-HDDS-48.00.patch, HDDS-211-HDDS-48.01.patch, 
> HDDS-211-HDDS-48.02.patch
>
>
> Add a lock to guard multiple creations of the same container.
> When multiple clients try to create a container with the same containerID, 
> one client should succeed and the remaining clients should get a 
> StorageContainerException. 
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13121) NPE when request file descriptors when SC read

2018-07-06 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13121?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-13121:
---
   Resolution: Fixed
Fix Version/s: 3.0.4
   3.1.1
   3.2.0
   Status: Resolved  (was: Patch Available)

Committed patch 04. Thanks [~xiegang112] for filing the jira, [~zvenczel] for 
the patch and [~mackrorysd] for review!

> NPE when request file descriptors when SC read
> --
>
> Key: HDFS-13121
> URL: https://issues.apache.org/jira/browse/HDFS-13121
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 3.0.0
>Reporter: Gang Xie
>Assignee: Zsolt Venczel
>Priority: Minor
> Fix For: 3.2.0, 3.1.1, 3.0.4
>
> Attachments: HDFS-13121.01.patch, HDFS-13121.02.patch, 
> HDFS-13121.03.patch, HDFS-13121.04.patch, test-only.patch
>
>
> Recently, we hit an issue where the DFSClient throws an NPE. The case is 
> that the app process exceeds the max-open-files limit. In that case, 
> libhadoop never throws an exception but returns null for the requested fds, 
> and requestFileDescriptors uses the returned fds directly without any check, 
> hence the NPE. 
>  
> We need to add a null-pointer sanity check here.
>  
> private ShortCircuitReplicaInfo requestFileDescriptors(DomainPeer peer,
>  Slot slot) throws IOException {
>  ShortCircuitCache cache = clientContext.getShortCircuitCache();
>  final DataOutputStream out =
>  new DataOutputStream(new BufferedOutputStream(peer.getOutputStream()));
>  SlotId slotId = slot == null ? null : slot.getSlotId();
>  new Sender(out).requestShortCircuitFds(block, token, slotId, 1,
>  failureInjector.getSupportsReceiptVerification());
>  DataInputStream in = new DataInputStream(peer.getInputStream());
>  BlockOpResponseProto resp = BlockOpResponseProto.parseFrom(
>  PBHelperClient.vintPrefixed(in));
>  DomainSocket sock = peer.getDomainSocket();
>  failureInjector.injectRequestFileDescriptorsFailure();
>  switch (resp.getStatus()) {
>  case SUCCESS:
>  byte buf[] = new byte[1];
>  FileInputStream[] fis = new FileInputStream[2];
>  {color:#d04437}sock.recvFileInputStreams(fis, buf, 0, buf.length);{color}
>  ShortCircuitReplica replica = null;
>  try {
>  ExtendedBlockId key =
>  new ExtendedBlockId(block.getBlockId(), block.getBlockPoolId());
>  if (buf[0] == USE_RECEIPT_VERIFICATION.getNumber()) {
>  LOG.trace("Sending receipt verification byte for slot {}", slot);
>  sock.getOutputStream().write(0);
>  }
>  {color:#d04437}replica = new ShortCircuitReplica(key, fis[0], fis[1], 
> cache,{color}
> {color:#d04437} Time.monotonicNow(), slot);{color}
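
For reference, a sketch of the null check the description calls for, reusing the names from the snippet quoted above (a fragment under assumed context, not the committed patch):
{code:java}
// Validate the received streams before they are dereferenced; libhadoop can
// return null fds when the process is out of file descriptors.
sock.recvFileInputStreams(fis, buf, 0, buf.length);
if (fis[0] == null || fis[1] == null) {
  throw new IOException("the datanode failed to pass a file descriptor "
      + "(possibly because it hit its open-file limit)");
}
{code}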



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-237) Add updateDeleteTransactionId

2018-07-06 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-237?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-237:

Status: Patch Available  (was: Open)

> Add updateDeleteTransactionId
> -
>
> Key: HDDS-237
> URL: https://issues.apache.org/jira/browse/HDDS-237
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-237-HDDS-48.00.patch
>
>
> Add updateDeleteTransactionId to our new classes; it was added to 
> ContainerData in HDDS-178. This is being done to merge HDDS-48 into trunk.
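
A small illustrative sketch of the accessor being carried over (the monotonic-max behavior shown is an assumption, not taken from HDDS-178):
{code:java}
// Hypothetical stand-in for the new ContainerData classes.
class ContainerDataSketch {
  private long deleteTransactionId;

  public synchronized void updateDeleteTransactionId(long transactionId) {
    // keep the largest transaction id seen so far
    this.deleteTransactionId = Math.max(this.deleteTransactionId, transactionId);
  }

  public synchronized long getDeleteTransactionId() {
    return deleteTransactionId;
  }
}
{code}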



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13121) NPE when request file descriptors when SC read

2018-07-06 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13121?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16535404#comment-16535404
 ] 

Hudson commented on HDFS-13121:
---

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #14532 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14532/])
HDFS-13121. NPE when request file descriptors when SC read. Contributed 
(weichiu: rev 0247cb6318507afe06816e337a19f396afc53efa)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/impl/BlockReaderFactory.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/shortcircuit/TestShortCircuitCache.java


> NPE when request file descriptors when SC read
> --
>
> Key: HDFS-13121
> URL: https://issues.apache.org/jira/browse/HDFS-13121
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 3.0.0
>Reporter: Gang Xie
>Assignee: Zsolt Venczel
>Priority: Minor
> Attachments: HDFS-13121.01.patch, HDFS-13121.02.patch, 
> HDFS-13121.03.patch, HDFS-13121.04.patch, test-only.patch
>
>
> Recently, we hit an issue where the DFSClient throws an NPE. The case is 
> that the app process exceeds the max-open-files limit. In that case, 
> libhadoop never throws an exception but returns null for the requested fds, 
> and requestFileDescriptors uses the returned fds directly without any check, 
> hence the NPE. 
>  
> We need to add a null-pointer sanity check here.
>  
> private ShortCircuitReplicaInfo requestFileDescriptors(DomainPeer peer,
>  Slot slot) throws IOException {
>  ShortCircuitCache cache = clientContext.getShortCircuitCache();
>  final DataOutputStream out =
>  new DataOutputStream(new BufferedOutputStream(peer.getOutputStream()));
>  SlotId slotId = slot == null ? null : slot.getSlotId();
>  new Sender(out).requestShortCircuitFds(block, token, slotId, 1,
>  failureInjector.getSupportsReceiptVerification());
>  DataInputStream in = new DataInputStream(peer.getInputStream());
>  BlockOpResponseProto resp = BlockOpResponseProto.parseFrom(
>  PBHelperClient.vintPrefixed(in));
>  DomainSocket sock = peer.getDomainSocket();
>  failureInjector.injectRequestFileDescriptorsFailure();
>  switch (resp.getStatus()) {
>  case SUCCESS:
>  byte buf[] = new byte[1];
>  FileInputStream[] fis = new FileInputStream[2];
>  {color:#d04437}sock.recvFileInputStreams(fis, buf, 0, buf.length);{color}
>  ShortCircuitReplica replica = null;
>  try {
>  ExtendedBlockId key =
>  new ExtendedBlockId(block.getBlockId(), block.getBlockPoolId());
>  if (buf[0] == USE_RECEIPT_VERIFICATION.getNumber()) {
>  LOG.trace("Sending receipt verification byte for slot {}", slot);
>  sock.getOutputStream().write(0);
>  }
>  {color:#d04437}replica = new ShortCircuitReplica(key, fis[0], fis[1], 
> cache,{color}
> {color:#d04437} Time.monotonicNow(), slot);{color}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-237) Add updateDeleteTransactionId

2018-07-06 Thread Hanisha Koneru (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-237?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16535403#comment-16535403
 ] 

Hanisha Koneru commented on HDDS-237:
-

Thanks [~bharatviswa] for this patch.

+1 pending Jenkins.

> Add updateDeleteTransactionId
> -
>
> Key: HDDS-237
> URL: https://issues.apache.org/jira/browse/HDDS-237
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-237-HDDS-48.00.patch
>
>
> Add updateDeleteTransactionId to our new classes; it was added to 
> ContainerData in HDDS-178. This is being done to merge HDDS-48 into trunk.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-211) Add a create container Lock

2018-07-06 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-211?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16535402#comment-16535402
 ] 

genericqa commented on HDDS-211:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} HDDS-48 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m  
8s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 29m 
 3s{color} | {color:green} HDDS-48 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 29m 
43s{color} | {color:green} HDDS-48 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} HDDS-48 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
13s{color} | {color:green} HDDS-48 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 13s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
56s{color} | {color:red} hadoop-hdds/container-service in HDDS-48 has 1 extant 
Findbugs warnings. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
42s{color} | {color:red} hadoop-ozone/tools in HDDS-48 has 1 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
3s{color} | {color:green} HDDS-48 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
22s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 31m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 31m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 54s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
58s{color} | {color:green} container-service in the patch passed. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
49s{color} | {color:green} hadoop-ozone/tools generated 0 new + 0 unchanged - 1 
fixed = 0 total (was 1) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
2s{color} | {color:green} container-service in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
27s{color} | {color:green} tools in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
41s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}128m 58s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDDS-211 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12930576/HDDS-211-HDDS-48.02.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  

[jira] [Commented] (HDFS-13121) NPE when request file descriptors when SC read

2018-07-06 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13121?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16535401#comment-16535401
 ] 

Wei-Chiu Chuang commented on HDFS-13121:


Sorry for coming to this late.

I don't have anything in mind that would improve it beyond what you have, and 
it is already better than before. We can revisit the handling if the situation 
is hit again. So +1; I will commit patch v04.

> NPE when request file descriptors when SC read
> --
>
> Key: HDFS-13121
> URL: https://issues.apache.org/jira/browse/HDFS-13121
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 3.0.0
>Reporter: Gang Xie
>Assignee: Zsolt Venczel
>Priority: Minor
> Attachments: HDFS-13121.01.patch, HDFS-13121.02.patch, 
> HDFS-13121.03.patch, HDFS-13121.04.patch, test-only.patch
>
>
> Recently, we hit an issue where the DFSClient throws an NPE. The case is 
> that the app process exceeds the max-open-files limit. In that case, 
> libhadoop never throws an exception but returns null for the requested fds, 
> and requestFileDescriptors uses the returned fds directly without any check, 
> hence the NPE. 
>  
> We need to add a null-pointer sanity check here.
>  
> private ShortCircuitReplicaInfo requestFileDescriptors(DomainPeer peer,
>  Slot slot) throws IOException {
>  ShortCircuitCache cache = clientContext.getShortCircuitCache();
>  final DataOutputStream out =
>  new DataOutputStream(new BufferedOutputStream(peer.getOutputStream()));
>  SlotId slotId = slot == null ? null : slot.getSlotId();
>  new Sender(out).requestShortCircuitFds(block, token, slotId, 1,
>  failureInjector.getSupportsReceiptVerification());
>  DataInputStream in = new DataInputStream(peer.getInputStream());
>  BlockOpResponseProto resp = BlockOpResponseProto.parseFrom(
>  PBHelperClient.vintPrefixed(in));
>  DomainSocket sock = peer.getDomainSocket();
>  failureInjector.injectRequestFileDescriptorsFailure();
>  switch (resp.getStatus()) {
>  case SUCCESS:
>  byte buf[] = new byte[1];
>  FileInputStream[] fis = new FileInputStream[2];
>  {color:#d04437}sock.recvFileInputStreams(fis, buf, 0, buf.length);{color}
>  ShortCircuitReplica replica = null;
>  try {
>  ExtendedBlockId key =
>  new ExtendedBlockId(block.getBlockId(), block.getBlockPoolId());
>  if (buf[0] == USE_RECEIPT_VERIFICATION.getNumber()) {
>  LOG.trace("Sending receipt verification byte for slot {}", slot);
>  sock.getOutputStream().write(0);
>  }
>  {color:#d04437}replica = new ShortCircuitReplica(key, fis[0], fis[1], 
> cache,{color}
> {color:#d04437} Time.monotonicNow(), slot);{color}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-213) Single lock to synchronize KeyValueContainer#update

2018-07-06 Thread Hanisha Koneru (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-213?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDDS-213:

Attachment: HDDS-213-HDDS-48.002.patch

> Single lock to synchronize KeyValueContainer#update
> ---
>
> Key: HDDS-213
> URL: https://issues.apache.org/jira/browse/HDDS-213
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-213-HDDS-48.000.patch, HDDS-213-HDDS-48.001.patch, 
> HDDS-213-HDDS-48.002.patch
>
>
> When updating the container metadata, the in-memory state and on-disk state 
> should be updated under the same lock.
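
A minimal sketch of the single-lock update described above (class and method names are assumed, not the HDDS code):
{code:java}
import java.io.IOException;
import java.util.Map;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Both the in-memory metadata and the on-disk copy change inside one
// write-lock critical section, so no reader can observe one without the other.
class ContainerUpdateSketch {
  private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
  private Map<String, String> metadata;

  void update(Map<String, String> newMetadata) throws IOException {
    lock.writeLock().lock();
    try {
      this.metadata = newMetadata; // in-memory state
      writeToDisk(newMetadata);    // on-disk state, same critical section
    } finally {
      lock.writeLock().unlock();
    }
  }

  private void writeToDisk(Map<String, String> m) throws IOException {
    // placeholder for persisting the container file
  }
}
{code}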



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-237) Add updateDeleteTransactionId

2018-07-06 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-237?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-237:

Attachment: HDDS-237-HDDS-48.00.patch

> Add updateDeleteTransactionId
> -
>
> Key: HDDS-237
> URL: https://issues.apache.org/jira/browse/HDDS-237
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-237-HDDS-48.00.patch
>
>
> Add updateDeleteTransactionId to our new classes; it was added to 
> ContainerData in HDDS-178. This is being done to merge HDDS-48 into trunk.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-237) Add updateDeleteTransactionId

2018-07-06 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-237?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-237:

Issue Type: Sub-task  (was: Bug)
Parent: HDDS-48

> Add updateDeleteTransactionId
> -
>
> Key: HDDS-237
> URL: https://issues.apache.org/jira/browse/HDDS-237
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>
> Add updateDeleteTransactionId to our new classes; it was added to 
> ContainerData in HDDS-178. This is being done to merge HDDS-48 into trunk.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-237) Add updateDeleteTransactionId

2018-07-06 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-237?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham reassigned HDDS-237:
---

Assignee: Bharat Viswanadham

> Add updateDeleteTransactionId
> -
>
> Key: HDDS-237
> URL: https://issues.apache.org/jira/browse/HDDS-237
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>
> Add updateDeleteTransactionId to our new classes; it was added to 
> ContainerData in HDDS-178. This is being done to merge HDDS-48 into trunk.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-237) Add updateDeleteTransactionId

2018-07-06 Thread Bharat Viswanadham (JIRA)
Bharat Viswanadham created HDDS-237:
---

 Summary: Add updateDeleteTransactionId
 Key: HDDS-237
 URL: https://issues.apache.org/jira/browse/HDDS-237
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Bharat Viswanadham


Add updateDeleteTransactionId to our new classes; it was added to 
ContainerData in HDDS-178. This is being done to merge HDDS-48 into trunk.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-211) Add a create container Lock

2018-07-06 Thread Hanisha Koneru (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-211?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16535367#comment-16535367
 ] 

Hanisha Koneru commented on HDDS-211:
-

Thanks [~bharatviswa] for the fix.

+1 pending Jenkins.

> Add a create container Lock
> ---
>
> Key: HDDS-211
> URL: https://issues.apache.org/jira/browse/HDDS-211
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-211-HDDS-48.00.patch, HDDS-211-HDDS-48.01.patch, 
> HDDS-211-HDDS-48.02.patch
>
>
> Add a lock to guard against multiple creations of the same container.
> When multiple clients try to create a container with the same containerID, 
> exactly one client should succeed; the remaining clients should get a 
> StorageContainerException.
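
A hedged sketch of the guard described above; the lock-based shape is the 
point, and the exception class here is a local stand-in for Ozone's 
StorageContainerException:

{code}
import java.io.IOException;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantLock;

public class ContainerCreateSketch {
  /** Local stand-in for the real StorageContainerException. */
  public static class StorageContainerException extends IOException {
    public StorageContainerException(String msg) {
      super(msg);
    }
  }

  private final Map<Long, Object> containers = new ConcurrentHashMap<>();
  private final ReentrantLock createLock = new ReentrantLock();

  /** Exactly one caller wins for a given containerID; the rest fail. */
  public void createContainer(long containerID, Object container)
      throws StorageContainerException {
    createLock.lock();
    try {
      if (containers.containsKey(containerID)) {
        throw new StorageContainerException(
            "Container already exists: " + containerID);
      }
      // On-disk creation work would happen here, still under the lock.
      containers.put(containerID, container);
    } finally {
      createLock.unlock();
    }
  }
}
{code}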



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13716) hdfs.DFSclient should log KMS DT acquisition at INFO level

2018-07-06 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13716?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16535348#comment-16535348
 ] 

Wei-Chiu Chuang commented on HDFS-13716:


Makes sense to me; thanks for reporting the issue. CC: [~xiaochen] 

> hdfs.DFSclient should log KMS DT acquisition at INFO level
> --
>
> Key: HDFS-13716
> URL: https://issues.apache.org/jira/browse/HDFS-13716
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs, hdfs-client, kms
>Affects Versions: 3.1.0
>Reporter: Kitti Nanasi
>Assignee: Kitti Nanasi
>Priority: Minor
> Attachments: HDFS-13716.001.patch
>
>
> We can see HDFS and Hive delegation token (DT) creation as INFO messages in 
> Spark application logs but not for KMS DTs:
> 18/06/07 10:02:35 INFO hdfs.DFSClient: Created token for admin: 
> HDFS_DELEGATION_TOKEN owner=ad...@example.net, renewer=yarn, realUser=, 
> issueDate=1528390955760, maxDate=1528995755760, sequenceNumber=125659, 
> masterKeyId=795 on ha-hdfs:dev
> 18/06/07 10:02:37 INFO hive.metastore: Trying to connect to metastore with 
> URI thrift://hostnam.example.net:9083
> 18/06/07 10:02:37 INFO hive.metastore: Opened a connection to metastore, 
> current connections: 1
> 18/06/07 10:02:37 INFO hive.metastore: Connected to metastore.
> 18/06/07 10:02:37 INFO security.HiveCredentialProvider: Get Token from hive 
> metastore: Kind: HIVE_DELEGATION_TOKEN, Service: , Ident: 00 1b 61 6e 69 73 
> 68 2d 61 64 6d 69 6e 40 43 4f 52 50 2e 49 4e 54 55 49 54 2e 4e 45 54 04 68 69 
> 76 65 00 8a 01 63 db 33 3a 83 8a 01 63 ff 3f be 83 8e 17 8d 8e 06 96
> Please log KMS DT acquisition events at INFO level, as it will improve the 
> supportability of encrypted HDFS filesystems.
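
A minimal sketch of the requested change; the class and method names are 
illustrative (the real patch would touch the KMS token acquisition path), and 
only the log level is the point:

{code}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class KmsTokenLoggingSketch {
  private static final Logger LOG =
      LoggerFactory.getLogger(KmsTokenLoggingSketch.class);

  /** Log KMS delegation token acquisition at INFO, mirroring DFSClient. */
  void onTokenAcquired(String tokenDescription) {
    // Before (assumed): LOG.debug("New token: {}", tokenDescription);
    LOG.info("Created KMS_DELEGATION_TOKEN: {}", tokenDescription);
  }
}
{code}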



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-235) Fix TestOzoneAuditLogger#verifyDefaultLogLevel

2018-07-06 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-235?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-235:

Attachment: HDDS-235.001.patch

> Fix TestOzoneAuditLogger#verifyDefaultLogLevel
> --
>
> Key: HDDS-235
> URL: https://issues.apache.org/jira/browse/HDDS-235
> Project: Hadoop Distributed Data Store
>  Issue Type: Test
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-235.001.patch
>
>
> This ticket is opened to fix the IndexOutOfBoundsException from 
> TestOzoneAuditLogger#verifyDefaultLogLevel.
> {code}
> h3. Error Message
> Index: 0, Size: 0
> h3. Stacktrace
> java.lang.IndexOutOfBoundsException: Index: 0, Size: 0 at 
> java.util.ArrayList.rangeCheck(ArrayList.java:657) at 
> java.util.ArrayList.get(ArrayList.java:433) at 
> org.apache.hadoop.ozone.audit.TestOzoneAuditLogger.verifyLog(TestOzoneAuditLogger.java:125)
> {code}
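
The failure pattern suggests the test reads the audit log before anything has 
been written to it. A hedged sketch of one defensive fix, polling briefly 
instead of calling get(0) on a possibly empty list (the timeout and file 
handling are assumptions, not the actual patch):

{code}
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Collections;
import java.util.List;

public class AuditLogPollSketch {
  /** Wait up to timeoutMillis for the first line of the audit log. */
  static String readFirstAuditLine(Path auditLog, long timeoutMillis)
      throws Exception {
    long deadline = System.currentTimeMillis() + timeoutMillis;
    List<String> lines = Collections.emptyList();
    while (lines.isEmpty() && System.currentTimeMillis() < deadline) {
      if (Files.exists(auditLog)) {
        lines = Files.readAllLines(auditLog, StandardCharsets.UTF_8);
      }
      if (lines.isEmpty()) {
        Thread.sleep(100); // give the appender time to flush
      }
    }
    if (lines.isEmpty()) {
      throw new AssertionError("Audit log is still empty: " + auditLog);
    }
    return lines.get(0);
  }
}
{code}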



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-235) Fix TestOzoneAuditLogger#verifyDefaultLogLevel

2018-07-06 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-235?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-235:

Status: Patch Available  (was: Open)

> Fix TestOzoneAuditLogger#verifyDefaultLogLevel
> --
>
> Key: HDDS-235
> URL: https://issues.apache.org/jira/browse/HDDS-235
> Project: Hadoop Distributed Data Store
>  Issue Type: Test
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-235.001.patch
>
>
> This ticket is opened to fix the IndexOutOfBoundsException from 
> TestOzoneAuditLogger#verifyDefaultLogLevel.
> {code}
> h3. Error Message
> Index: 0, Size: 0
> h3. Stacktrace
> java.lang.IndexOutOfBoundsException: Index: 0, Size: 0 at 
> java.util.ArrayList.rangeCheck(ArrayList.java:657) at 
> java.util.ArrayList.get(ArrayList.java:433) at 
> org.apache.hadoop.ozone.audit.TestOzoneAuditLogger.verifyLog(TestOzoneAuditLogger.java:125)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-235) Fix TestOzoneAuditLogger#verifyDefaultLogLevel

2018-07-06 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-235?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-235:

Fix Version/s: 0.2.1

> Fix TestOzoneAuditLogger#verifyDefaultLogLevel
> --
>
> Key: HDDS-235
> URL: https://issues.apache.org/jira/browse/HDDS-235
> Project: Hadoop Distributed Data Store
>  Issue Type: Test
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
> Fix For: 0.2.1
>
>
> This ticket is opened to fix the IndexOutOfBoundsException from 
> TestOzoneAuditLogger#verifyDefaultLogLevel.
> {code}
> h3. Error Message
> Index: 0, Size: 0
> h3. Stacktrace
> java.lang.IndexOutOfBoundsException: Index: 0, Size: 0 at 
> java.util.ArrayList.rangeCheck(ArrayList.java:657) at 
> java.util.ArrayList.get(ArrayList.java:433) at 
> org.apache.hadoop.ozone.audit.TestOzoneAuditLogger.verifyLog(TestOzoneAuditLogger.java:125)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDDS-218) add existing docker-compose files to the ozone release artifact

2018-07-06 Thread Ajay Kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-218?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16535316#comment-16535316
 ] 

Ajay Kumar edited comment on HDDS-218 at 7/6/18 8:24 PM:
-

+1 for the idea.


was (Author: ajayydv):
+1

> add existing docker-compose files to the ozone release artifact
> ---
>
> Key: HDDS-218
> URL: https://issues.apache.org/jira/browse/HDDS-218
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Priority: Minor
>  Labels: newbie
> Fix For: 0.2.1
>
>
> Currently we use docker-compose files to run a pseudo Ozone cluster locally. 
> After a full build, they can be found under hadoop-dist/target/compose.
> As they are very useful, I propose to make them part of the ozone release to 
> make it easier to try out ozone locally. 
> I propose to create a new folder (docker/) in the ozone.tar.gz which contains 
> all the docker-compose subdirectories, plus a basic README on how they can be 
> used.
> We should explain in the README that the docker-compose files are not for 
> production, only for local experiments.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-218) add existing docker-compose files to the ozone release artifact

2018-07-06 Thread Ajay Kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-218?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16535316#comment-16535316
 ] 

Ajay Kumar commented on HDDS-218:
-

+1

> add existing docker-compose files to the ozone release artifact
> ---
>
> Key: HDDS-218
> URL: https://issues.apache.org/jira/browse/HDDS-218
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Priority: Minor
>  Labels: newbie
> Fix For: 0.2.1
>
>
> Currently we use docker-compose files to run a pseudo Ozone cluster locally. 
> After a full build, they can be found under hadoop-dist/target/compose.
> As they are very useful, I propose to make them part of the ozone release to 
> make it easier to try out ozone locally. 
> I propose to create a new folder (docker/) in the ozone.tar.gz which contains 
> all the docker-compose subdirectories, plus a basic README on how they can be 
> used.
> We should explain in the README that the docker-compose files are not for 
> production, only for local experiments.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-232) Parallel unit test execution for HDDS/Ozone

2018-07-06 Thread Arpit Agarwal (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-232?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16535297#comment-16535297
 ] 

Arpit Agarwal commented on HDDS-232:


Added HDDS-216 and HDDS-236 as dependencies.

> Parallel unit test execution for HDDS/Ozone
> ---
>
> Key: HDDS-232
> URL: https://issues.apache.org/jira/browse/HDDS-232
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-232.01.patch, HDDS-232.02.patch
>
>
> HDDS and Ozone should support the {{parallel-tests}} Maven profile to enable 
> parallel test execution (similar to HDFS-4491, HADOOP-9287).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-236) hadoop-ozone unit tests should use randomized ports

2018-07-06 Thread Arpit Agarwal (JIRA)
Arpit Agarwal created HDDS-236:
--

 Summary: hadoop-ozone unit tests should use randomized ports
 Key: HDDS-236
 URL: https://issues.apache.org/jira/browse/HDDS-236
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Arpit Agarwal
 Fix For: 0.2.1


MiniOzoneCluster should use randomized ports by default, so individual tests 
don't have to do anything to avoid port conflicts at runtime.
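
A common way to do this, sketched below, is to bind to port 0 and let the OS 
pick a free ephemeral port. Note there is still a small window between probing 
and binding, so having each server bind to port 0 directly is even safer:

{code}
import java.io.IOException;
import java.net.ServerSocket;

public final class RandomPortSketch {
  private RandomPortSketch() {
  }

  /** Ask the OS for a currently free ephemeral port. */
  public static int getFreePort() throws IOException {
    try (ServerSocket socket = new ServerSocket(0)) {
      socket.setReuseAddress(true);
      return socket.getLocalPort();
    }
  }
}
{code}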



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-236) hadoop-ozone unit tests should use randomized ports

2018-07-06 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-236?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDDS-236:
---
Component/s: test

> hadoop-ozone unit tests should use randomized ports
> ---
>
> Key: HDDS-236
> URL: https://issues.apache.org/jira/browse/HDDS-236
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Reporter: Arpit Agarwal
>Priority: Major
> Fix For: 0.2.1
>
>
> MiniOzoneCluster should use randomized ports by default, so individual tests 
> don't have to do anything to avoid port conflicts at runtime.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-216) hadoop-hdds unit tests should use randomized ports

2018-07-06 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-216?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDDS-216:
---
Component/s: (was: SCM)
 test

> hadoop-hdds unit tests should use randomized ports
> --
>
> Key: HDDS-216
> URL: https://issues.apache.org/jira/browse/HDDS-216
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Reporter: Arpit Agarwal
>Priority: Major
>  Labels: test
> Fix For: 0.2.1
>
>
> MiniOzoneCluster should use randomized ports by default, so individual tests 
> don't have to do anything to avoid port conflicts at runtime. e.g. 
> TestStorageContainerManagerHttpServer fails if port 9876 is in use.
> {code}
> [INFO] Running 
> org.apache.hadoop.hdds.scm.TestStorageContainerManagerHttpServer
> [ERROR] Tests run: 3, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 2.084 
> s <<< FAILURE! - in 
> org.apache.hadoop.hdds.scm.TestStorageContainerManagerHttpServer
> [ERROR] 
> testHttpPolicy[0](org.apache.hadoop.hdds.scm.TestStorageContainerManagerHttpServer)
>   Time elapsed: 0.401 s  <<< ERROR!
> java.net.BindException: Port in use: 0.0.0.0:9876
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-216) hadoop-hdds unit tests should use randomized ports

2018-07-06 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-216?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDDS-216:
---
Description: 
MiniOzoneCluster should use randomized ports by default, so individual tests 
don't have to do anything to avoid port conflicts at runtime. e.g. 
TestStorageContainerManagerHttpServer fails if port 9876 is in use.

{code}
[INFO] Running org.apache.hadoop.hdds.scm.TestStorageContainerManagerHttpServer
[ERROR] Tests run: 3, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 2.084 s 
<<< FAILURE! - in 
org.apache.hadoop.hdds.scm.TestStorageContainerManagerHttpServer
[ERROR] 
testHttpPolicy[0](org.apache.hadoop.hdds.scm.TestStorageContainerManagerHttpServer)
  Time elapsed: 0.401 s  <<< ERROR!
java.net.BindException: Port in use: 0.0.0.0:9876
{code}

  was:
TestStorageContainerManagerHttpServer fails if port 9876 is in use.

{code}
[INFO] Running org.apache.hadoop.hdds.scm.TestStorageContainerManagerHttpServer
[ERROR] Tests run: 3, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 2.084 s 
<<< FAILURE! - in 
org.apache.hadoop.hdds.scm.TestStorageContainerManagerHttpServer
[ERROR] 
testHttpPolicy[0](org.apache.hadoop.hdds.scm.TestStorageContainerManagerHttpServer)
  Time elapsed: 0.401 s  <<< ERROR!
java.net.BindException: Port in use: 0.0.0.0:9876
{code}


> hadoop-hdds unit tests should use randomized ports
> --
>
> Key: HDDS-216
> URL: https://issues.apache.org/jira/browse/HDDS-216
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Reporter: Arpit Agarwal
>Priority: Major
>  Labels: test
> Fix For: 0.2.1
>
>
> MiniOzoneCluster should use randomized ports by default, so individual tests 
> don't have to do anything to avoid port conflicts at runtime. e.g. 
> TestStorageContainerManagerHttpServer fails if port 9876 is in use.
> {code}
> [INFO] Running 
> org.apache.hadoop.hdds.scm.TestStorageContainerManagerHttpServer
> [ERROR] Tests run: 3, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 2.084 
> s <<< FAILURE! - in 
> org.apache.hadoop.hdds.scm.TestStorageContainerManagerHttpServer
> [ERROR] 
> testHttpPolicy[0](org.apache.hadoop.hdds.scm.TestStorageContainerManagerHttpServer)
>   Time elapsed: 0.401 s  <<< ERROR!
> java.net.BindException: Port in use: 0.0.0.0:9876
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-216) hadoop-hdds unit tests should use randomized ports

2018-07-06 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-216?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDDS-216:
---
Summary: hadoop-hdds unit tests should use randomized ports  (was: 
TestStorageContainerManagerHttpServer uses hard-coded port)

> hadoop-hdds unit tests should use randomized ports
> --
>
> Key: HDDS-216
> URL: https://issues.apache.org/jira/browse/HDDS-216
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Reporter: Arpit Agarwal
>Priority: Major
>  Labels: test
> Fix For: 0.2.1
>
>
> TestStorageContainerManagerHttpServer fails if port 9876 is in use.
> {code}
> [INFO] Running 
> org.apache.hadoop.hdds.scm.TestStorageContainerManagerHttpServer
> [ERROR] Tests run: 3, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 2.084 
> s <<< FAILURE! - in 
> org.apache.hadoop.hdds.scm.TestStorageContainerManagerHttpServer
> [ERROR] 
> testHttpPolicy[0](org.apache.hadoop.hdds.scm.TestStorageContainerManagerHttpServer)
>   Time elapsed: 0.401 s  <<< ERROR!
> java.net.BindException: Port in use: 0.0.0.0:9876
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-232) Parallel unit test execution for HDDS/Ozone

2018-07-06 Thread Arpit Agarwal (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-232?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16535291#comment-16535291
 ] 

Arpit Agarwal commented on HDDS-232:


Thanks [~bharatviswa], good point. I will file a separate Jira to ensure that 
HDDS/Ozone tests use randomized ports. I hit a related issue recently via 
HDDS-216.

> Parallel unit test execution for HDDS/Ozone
> ---
>
> Key: HDDS-232
> URL: https://issues.apache.org/jira/browse/HDDS-232
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-232.01.patch, HDDS-232.02.patch
>
>
> HDDS and Ozone should support the {{parallel-tests}} Maven profile to enable 
> parallel test execution (similar to HDFS-4491, HADOOP-9287).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-167) Rename KeySpaceManager to OzoneManager

2018-07-06 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-167?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16535274#comment-16535274
 ] 

Hudson commented on HDDS-167:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14531 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14531/])
HDDS-167. Rename KeySpaceManager to OzoneManager. Contributed by Arpit (arp: 
rev 061b168529a9cd5d6a3a482c890bacdb49186368)
* (add) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/exceptions/package-info.java
* (edit) .gitignore
* (edit) 
hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/OzoneClientFactory.java
* (add) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/protocolPB/OzoneManagerProtocolPB.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestOzoneRpcClient.java
* (add) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OmMetadataManagerImpl.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/common/statemachine/commandhandler/TestCloseContainerByPipeline.java
* (delete) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/ksm/OpenKeyCleanupService.java
* (add) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneManagerHttpServer.java
* (edit) hadoop-dist/src/main/compose/ozone/docker-config
* (edit) dev-support/bin/ozone-dist-layout-stitching
* (add) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/om/TestOmMetrics.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/protocolPB/package-info.java
* (edit) 
hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/OzoneKey.java
* (add) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/protocol/OzoneManagerProtocol.java
* (add) hadoop-ozone/common/src/main/proto/OzoneManagerProtocol.proto
* (edit) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/freon/OzoneGetConf.java
* (add) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/helpers/OmKeyLocationInfoGroup.java
* (delete) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/ksm/protocolPB/package-info.java
* (delete) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/ksm/TestKeySpaceManagerRestInterface.java
* (add) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/protocolPB/OzoneManagerProtocolClientSideTranslatorPB.java
* (edit) hadoop-ozone/acceptance-test/src/test/acceptance/ozonefs/docker-config
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/scm/TestContainerSQLCli.java
* (add) hadoop-ozone/ozone-manager/src/main/webapps/ozoneManager/om-metrics.html
* (edit) hadoop-hdds/common/src/main/proto/hdds.proto
* (edit) 
hadoop-ozone/ozonefs/src/test/java/org/apache/hadoop/fs/ozone/contract/OzoneContract.java
* (delete) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/ksm/protocol/package-info.java
* (add) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/helpers/VolumeArgs.java
* (delete) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/ksm/helpers/KsmKeyInfo.java
* (delete) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/protocolPB/KSMPBHelper.java
* (edit) hadoop-ozone/docs/content/GettingStarted.md
* (edit) hadoop-ozone/acceptance-test/src/test/acceptance/ozonefs/ozonefs.robot
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/states/ContainerStateMap.java
* (add) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/om/TestOmBlockVersioning.java
* (edit) hadoop-ozone/common/src/main/bin/start-ozone.sh
* (edit) hadoop-ozone/docs/content/_index.md
* (edit) 
hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/rest/DefaultRestServerSelector.java
* (edit) 
hadoop-ozone/acceptance-test/src/test/acceptance/ozonefs/docker-compose.yaml
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/TestStorageContainerManagerHelper.java
* (delete) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/ksm/TestContainerReportWithKeys.java
* (add) hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/OmUtils.java
* (delete) hadoop-ozone/ozone-manager/src/main/webapps/ksm/ksm-metrics.html
* (add) 
hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/package-info.java
* (delete) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/ksm/protocol/KeySpaceManagerProtocol.java
* (edit) hadoop-dist/src/main/compose/ozone/docker-compose.yaml
* (delete) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/ksm/BucketManager.java
* (edit) 
hadoop-ozone/acceptance-test/src/test/acceptance/basic/ozone-shell.robot
* (edit) 
hadoop-hdds/framework/src/main/resources/webapps/static/templates/config.html
* (delete) 

[jira] [Comment Edited] (HDDS-232) Parallel unit test execution for HDDS/Ozone

2018-07-06 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-232?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16535259#comment-16535259
 ] 

Bharat Viswanadham edited comment on HDDS-232 at 7/6/18 7:38 PM:
-

Thank you, Arpit Agarwal, for reporting this issue.

I think if we have parallel tests enabled for Ozone/HDDS, we should take care 
of multiple MiniOzoneCluster instances launching on the same port.


was (Author: bharatviswa):
Thank you, Arpit Agarwal, for reporting this issue.

I think if we have parallel tests enabled for Ozone/HDDS, we should take care 
of multiple Ozone clusters launching on the same port.

> Parallel unit test execution for HDDS/Ozone
> ---
>
> Key: HDDS-232
> URL: https://issues.apache.org/jira/browse/HDDS-232
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-232.01.patch, HDDS-232.02.patch
>
>
> HDDS and Ozone should support the {{parallel-tests}} Maven profile to enable 
> parallel test execution (similar to HDFS-4491, HADOOP-9287).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDDS-232) Parallel unit test execution for HDDS/Ozone

2018-07-06 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-232?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16535259#comment-16535259
 ] 

Bharat Viswanadham edited comment on HDDS-232 at 7/6/18 7:34 PM:
-

Thank you, Arpit Agarwal, for reporting this issue.

I think if we have parallel tests enabled for Ozone/HDDS, we should take care 
of multiple Ozone clusters launching on the same port.


was (Author: bharatviswa):
I think if we have parallel tests enabled for Ozone/HDDS, we should take care 
of multiple Ozone clusters launching on the same port.

> Parallel unit test execution for HDDS/Ozone
> ---
>
> Key: HDDS-232
> URL: https://issues.apache.org/jira/browse/HDDS-232
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-232.01.patch, HDDS-232.02.patch
>
>
> HDDS and Ozone should support the {{parallel-tests}} Maven profile to enable 
> parallel test execution (similar to HDFS-4491, HADOOP-9287).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-232) Parallel unit test execution for HDDS/Ozone

2018-07-06 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-232?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16535259#comment-16535259
 ] 

Bharat Viswanadham commented on HDDS-232:
-

I think if we have parallel tests enabled for Ozone/HDDS, we should take care 
of multiple Ozone clusters launching on the same port.

> Parallel unit test execution for HDDS/Ozone
> ---
>
> Key: HDDS-232
> URL: https://issues.apache.org/jira/browse/HDDS-232
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-232.01.patch, HDDS-232.02.patch
>
>
> HDDS and Ozone should support the {{parallel-tests}} Maven profile to enable 
> parallel test execution (similar to HDFS-4491, HADOOP-9287).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-211) Add a create container Lock

2018-07-06 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-211?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16535258#comment-16535258
 ] 

Bharat Viswanadham commented on HDDS-211:
-

Rebased the patch and also fixed one findbugs issue that was missed in HDDS-182.

Also changed Handler to an abstract class; the handle method can now be 
implemented by the handler implementation for each container type.

> Add a create container Lock
> ---
>
> Key: HDDS-211
> URL: https://issues.apache.org/jira/browse/HDDS-211
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-211-HDDS-48.00.patch, HDDS-211-HDDS-48.01.patch, 
> HDDS-211-HDDS-48.02.patch
>
>
> Add a lock to guard against multiple creations of the same container.
> When multiple clients try to create a container with the same containerID, 
> exactly one client should succeed; the remaining clients should get a 
> StorageContainerException.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-211) Add a create container Lock

2018-07-06 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-211?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-211:

Attachment: HDDS-211-HDDS-48.02.patch

> Add a create container Lock
> ---
>
> Key: HDDS-211
> URL: https://issues.apache.org/jira/browse/HDDS-211
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-211-HDDS-48.00.patch, HDDS-211-HDDS-48.01.patch, 
> HDDS-211-HDDS-48.02.patch
>
>
> Add a lock to guard against multiple creations of the same container.
> When multiple clients try to create a container with the same containerID, 
> exactly one client should succeed; the remaining clients should get a 
> StorageContainerException.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-167) Rename KeySpaceManager to OzoneManager

2018-07-06 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-167?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDDS-167:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Pushed to trunk. Thanks a lot for the review and all the rebasing assistance 
[~nandakumar131]! Really appreciate it.

Also thanks for the review [~anu].

> Rename KeySpaceManager to OzoneManager
> --
>
> Key: HDDS-167
> URL: https://issues.apache.org/jira/browse/HDDS-167
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>  Components: Ozone Manager
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-167.01.patch, HDDS-167.02.patch, HDDS-167.03.patch, 
> HDDS-167.04.patch, HDDS-167.05.patch, HDDS-167.06.patch, HDDS-167.07.patch, 
> HDDS-167.08.patch, HDDS-167.09.patch, HDDS-167.10.patch, HDDS-167.11.patch, 
> HDDS-167.12.patch
>
>
> The Ozone KeySpaceManager daemon was renamed to OzoneManager. There's some 
> more changes needed to complete the rename everywhere e.g.
> - command-line
> - documentation
> - unit tests
> - Acceptance tests



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-217) Move all SCMEvents to a package

2018-07-06 Thread Ajay Kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-217?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16535245#comment-16535245
 ] 

Ajay Kumar commented on HDDS-217:
-

+1

> Move all SCMEvents to a package
> ---
>
> Key: HDDS-217
> URL: https://issues.apache.org/jira/browse/HDDS-217
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-217.001.patch, HDDS-217.002.patch
>
>
> Moving all SCM internal events to a single package; this makes it easy to 
> write event producers and consumers, and gives us a single location for all 
> the events. This is a simple refactoring patch.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13663) Should throw exception when incorrect block size is set

2018-07-06 Thread Shweta (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13663?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shweta updated HDFS-13663:
--
  Attachment: HDFS-13663.001.patch
Target Version/s: 3.2.0
  Status: Patch Available  (was: Open)

Added an exception for incorrect block size.

> Should throw exception when incorrect block size is set
> ---
>
> Key: HDFS-13663
> URL: https://issues.apache.org/jira/browse/HDFS-13663
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Yongjun Zhang
>Assignee: Shweta
>Priority: Major
> Attachments: HDFS-13663.001.patch
>
>
> See
> ./hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockRecoveryWorker.java
> {code}
> void syncBlock(List<BlockRecord> syncList) throws IOException {
>newBlock.setNumBytes(finalizedLength);
> break;
>   case RBW:
>   case RWR:
> long minLength = Long.MAX_VALUE;
> for(BlockRecord r : syncList) {
>   ReplicaState rState = r.rInfo.getOriginalReplicaState();
>   if(rState == bestState) {
> minLength = Math.min(minLength, r.rInfo.getNumBytes());
> participatingList.add(r);
>   }
>   if (LOG.isDebugEnabled()) {
> LOG.debug("syncBlock replicaInfo: block=" + block +
> ", from datanode " + r.id + ", receivedState=" + 
> rState.name() +
> ", receivedLength=" + r.rInfo.getNumBytes() + ", bestState=" +
> bestState.name());
>   }
> }
> // recover() guarantees syncList will have at least one replica with 
> RWR
> // or better state.
> assert minLength != Long.MAX_VALUE : "wrong minLength"; <= should 
> throw exception 
> newBlock.setNumBytes(minLength);
> break;
>   case RUR:
>   case TEMPORARY:
> assert false : "bad replica state: " + bestState;
>   default:
> break; // we have 'case' all enum values
>   }
> {code}
> when minLength is Long.MAX_VALUE, it should throw exception.
> There might be other places like this.
> Otherwise, we would see the following WARN in datanode log
> {code}
> WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Can't replicate block 
> xyz because on-disk length 11852203 is shorter than NameNode recorded length 
> 9223372036854775807
> {code}
> where 9223372036854775807 is Long.MAX_VALUE.
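
A hedged sketch of the requested change: fail fast with an exception instead 
of relying on a JVM assert, which is disabled unless the JVM runs with -ea, so 
the bogus length otherwise propagates silently:

{code}
import java.io.IOException;

public class MinLengthCheckSketch {
  /** Fails recovery instead of asserting, so the error surfaces in production. */
  static long checkedMinLength(long minLength, String bestState)
      throws IOException {
    if (minLength == Long.MAX_VALUE) {
      throw new IOException(
          "Wrong minLength: no replica in syncList matched state " + bestState);
    }
    return minLength;
  }
}
{code}

In syncBlock() the assert line would then become something like 
newBlock.setNumBytes(checkedMinLength(minLength, bestState.name())).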



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-232) Parallel unit test execution for HDDS/Ozone

2018-07-06 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-232?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16535226#comment-16535226
 ] 

genericqa commented on HDDS-232:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m 
11s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 28m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 29m  
7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
72m 53s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
10s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
19s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 29m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 29m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
3s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 59s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
9s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
37s{color} | {color:green} hadoop-hdds in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 16m 32s{color} 
| {color:red} hadoop-ozone in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
50s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}144m 35s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.TestStorageContainerManager |
|   | 
hadoop.ozone.container.common.statemachine.commandhandler.TestCloseContainerByPipeline
 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDDS-232 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12930550/HDDS-232.02.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  xml  |
| uname | Linux ec512cf6190f 3.13.0-137-generic #186-Ubuntu SMP Mon Dec 4 
19:09:19 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 39ad989 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_171 |
| unit | 
https://builds.apache.org/job/PreCommit-HDDS-Build/455/artifact/out/patch-unit-hadoop-ozone.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDDS-Build/455/testReport/ |
| Max. process+thread count | 2983 (vs. ulimit of 1) |
| modules | C: hadoop-hdds hadoop-ozone U: . |
| 

[jira] [Commented] (HDDS-217) Move all SCMEvents to a package

2018-07-06 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-217?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16535225#comment-16535225
 ] 

Anu Engineer commented on HDDS-217:
---

Thanks, will fix it while committing.


> Move all SCMEvents to a package
> ---
>
> Key: HDDS-217
> URL: https://issues.apache.org/jira/browse/HDDS-217
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-217.001.patch, HDDS-217.002.patch
>
>
> Moving all SCM internal events to a single package; this makes it easy to 
> write event producers and consumers, and gives us a single location for all 
> the events. This is a simple refactoring patch.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-13663) Should throw exception when incorrect block size is set

2018-07-06 Thread Shweta (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13663?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shweta reassigned HDFS-13663:
-

Assignee: Shweta

> Should throw exception when incorrect block size is set
> ---
>
> Key: HDFS-13663
> URL: https://issues.apache.org/jira/browse/HDFS-13663
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Yongjun Zhang
>Assignee: Shweta
>Priority: Major
>
> See
> ./hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockRecoveryWorker.java
> {code}
> void syncBlock(List<BlockRecord> syncList) throws IOException {
>newBlock.setNumBytes(finalizedLength);
> break;
>   case RBW:
>   case RWR:
> long minLength = Long.MAX_VALUE;
> for(BlockRecord r : syncList) {
>   ReplicaState rState = r.rInfo.getOriginalReplicaState();
>   if(rState == bestState) {
> minLength = Math.min(minLength, r.rInfo.getNumBytes());
> participatingList.add(r);
>   }
>   if (LOG.isDebugEnabled()) {
> LOG.debug("syncBlock replicaInfo: block=" + block +
> ", from datanode " + r.id + ", receivedState=" + 
> rState.name() +
> ", receivedLength=" + r.rInfo.getNumBytes() + ", bestState=" +
> bestState.name());
>   }
> }
> // recover() guarantees syncList will have at least one replica with 
> RWR
> // or better state.
> assert minLength != Long.MAX_VALUE : "wrong minLength"; <= should 
> throw exception 
> newBlock.setNumBytes(minLength);
> break;
>   case RUR:
>   case TEMPORARY:
> assert false : "bad replica state: " + bestState;
>   default:
> break; // we have 'case' all enum values
>   }
> {code}
> when minLength is Long.MAX_VALUE, it should throw exception.
> There might be other places like this.
> Otherwise, we would see the following WARN in datanode log
> {code}
> WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Can't replicate block 
> xyz because on-disk length 11852203 is shorter than NameNode recorded length 
> 9223372036854775807
> {code}
> where 9223372036854775807 is Long.MAX_VALUE.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-217) Move all SCMEvents to a package

2018-07-06 Thread Arpit Agarwal (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-217?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16535223#comment-16535223
 ] 

Arpit Agarwal commented on HDDS-217:


+1. Nitpick - _NodeReports are  send out_ should be _NodeReports are sent 
out_; however, please feel free to commit.

> Move all SCMEvents to a package
> ---
>
> Key: HDDS-217
> URL: https://issues.apache.org/jira/browse/HDDS-217
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-217.001.patch, HDDS-217.002.patch
>
>
> Moving all SCM internal events to a single package; this makes it easy to 
> write event producers and consumers, and gives us a single location for all 
> the events. This is a simple refactoring patch.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13674) Improve documentation on Metrics

2018-07-06 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13674?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16535222#comment-16535222
 ] 

genericqa commented on HDFS-13674:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
40s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 34m 
 5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
48m 52s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 59s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
30s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 67m  2s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDFS-13674 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12930559/HDFS-13674.002.patch |
| Optional Tests |  asflicense  mvnsite  |
| uname | Linux 141e99d786a5 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 39ad989 |
| maven | version: Apache Maven 3.3.9 |
| Max. process+thread count | 301 (vs. ulimit of 1) |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24568/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Improve documentation on Metrics
> 
>
> Key: HDFS-13674
> URL: https://issues.apache.org/jira/browse/HDFS-13674
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: documentation, metrics
>Reporter: Chao Sun
>Assignee: Chao Sun
>Priority: Minor
> Attachments: HDFS-13674.000.patch, HDFS-13674.001.patch, 
> HDFS-13674.002.patch
>
>
> There are a few confusing places in the [Hadoop Metrics 
> page|https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-common/Metrics.html].
>  For instance, there are duplicated entries such as {{FsImageLoadTime}}; some 
> quantile metrics do not have corresponding entries; and the descriptions of 
> some quantile metrics are not very specific about what the {{num}} variable 
> in the metric name means, etc.
> This JIRA targets improving this.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDDS-199) Implement ReplicationManager to replicate ClosedContainers

2018-07-06 Thread Ajay Kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16535207#comment-16535207
 ] 

Ajay Kumar edited comment on HDDS-199 at 7/6/18 6:34 PM:
-

[~elek] thanks for updating the patch. On a second look at ReplicationManager, 
I thought of having an ExecutorPool inside it whose size is configuration 
driven (instead of it being a runnable thread). Its default size may be 1, but 
it would give us the flexibility to dial it up if required. Not sure if this 
is overkill, as a single thread might be sufficient to handle all 
replica-related work even in a busy, big cluster. Any thoughts on this?
{quote}That's a very hard question. IMHO there is no easy way to get the 
current datanodes after HDDS-175, as there is no container -> datanode[] 
mapping for the closed containers. Do you know where this information available 
after HDDS-175? (I rebased the patch but can't return with{quote}
[HDDS-228] should give us the means to find out the replicas of a given 
container. [~anu], [~nandakumar131] We might have to check that we are not 
adding any replication request for RATIS, open containers. This can be done 
either by ContainerReportHandler or ReplicationManager.
{quote} fixed only the SCMContainerPlacementRandom.java and not the 
SCMCommonPolicy.java. Instead of todo, now it should be handled.{quote}
Shall we add a test case to validate excluded nodes are not returned?

Few more nits:
 * ReplicationManager
 ** L81: pipelineSelector can be removed.
 ** L200: the ReplicationRequestToRepeat constructor takes a UUID as a 
parameter; can't we use the ReplicationRequest UUID? (i.e. we can remove the 
extra parameter and field and have an API to return ReplicationRequest#getUUID)
 ** javadoc for class ReplicationRequestToRepeat



was (Author: ajayydv):
[~elek] thanks for updating the patch. On a second look at ReplicationManager, 
I thought of having an ExecutorPool inside it whose size is configuration 
driven (instead of it being a runnable thread). Its default size may be 1, but 
it would give us the flexibility to dial it up if required. Not sure if this 
is overkill, as a single thread might be sufficient to handle all 
replica-related work even in a busy, big cluster. Any thoughts on this?

Few more nits:
 * ReplicationManager
 ** L81: pipelineSelector can be removed.
 ** L200: the ReplicationRequestToRepeat constructor takes a UUID as a 
parameter; can't we use the ReplicationRequest UUID? (i.e. we can remove the 
extra parameter and field and have an API to return ReplicationRequest#getUUID)
 ** javadoc for class ReplicationRequestToRepeat
{quote}That's a very hard question. IMHO there is no easy way to get the 
current datanodes after HDDS-175, as there is no container -> datanode[] 
mapping for the closed containers. Do you know where this information available 
after HDDS-175? (I rebased the patch but can't return with{quote}
[HDDS-228] should give us the means to find out the replicas of a given 
container. We might have to check that we are not adding any replication 
request for RATIS, open containers.
{quote} fixed only the SCMContainerPlacementRandom.java and not the 
SCMCommonPolicy.java. Instead of todo, now it should be handled.{quote}
Shall we add a test case to validate excluded nodes are not returned?


> Implement ReplicationManager to replicate ClosedContainers
> --
>
> Key: HDDS-199
> URL: https://issues.apache.org/jira/browse/HDDS-199
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-199.001.patch, HDDS-199.002.patch, 
> HDDS-199.003.patch, HDDS-199.004.patch
>
>
> HDDS/Ozone supports Open and Closed containers. Under specific conditions 
> (container is full, node has failed) the container will be closed and will 
> be replicated in a different way. The replication of Open containers is 
> handled with Ratis and the PipelineManager.
> The ReplicationManager should handle the replication of the ClosedContainers. 
> The replication information will be sent as an event 
> (UnderReplicated/OverReplicated). 
> The ReplicationManager will collect all of the events in a priority queue 
> (to replicate first the containers where more replicas are missing), 
> calculate the destination datanode (first with a very simple algorithm, 
> later by calculating scatter-width) and send the Copy/Delete container 
> command to the datanode (CommandQueue).
> A CopyCommandWatcher/DeleteCommandWatcher is also included to retry the 
> copy/delete in case of failure. This is an in-memory structure (based on 
> HDDS-195) which can requeue the underreplicated/overreplicated events to the 
> priority queue until the confirmation of the copy/delete command arrives.

[jira] [Commented] (HDDS-199) Implement ReplicationManager to replicate ClosedContainers

2018-07-06 Thread Ajay Kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16535207#comment-16535207
 ] 

Ajay Kumar commented on HDDS-199:
-

[~elek] thanks for updating the patch. On a second look at ReplicationManager, 
I thought of having an ExecutorPool inside it whose size is configuration 
driven (instead of it being a runnable thread). Its default size may be 1, but 
it would give us the flexibility to dial it up if required. Not sure if this 
is overkill, as a single thread might be sufficient to handle all 
replica-related work even in a busy, big cluster. Any thoughts on this?

Few more nits:
 * ReplicationManager
 ** L81: pipelineSelector can be removed.
 ** L200: the ReplicationRequestToRepeat constructor takes a UUID as a 
parameter; can't we use the ReplicationRequest UUID? (i.e. we can remove the 
extra parameter and field and have an API to return ReplicationRequest#getUUID)
 ** javadoc for class ReplicationRequestToRepeat
{quote}That's a very hard question. IMHO there is no easy way to get the 
current datanodes after HDDS-175, as there is no container -> datanode[] 
mapping for the closed containers. Do you know where this information available 
after HDDS-175? (I rebased the patch but can't return with{quote}
[HDDS-228] should give us the means to find out the replicas of a given 
container. We might have to check that we are not adding any replication 
request for RATIS, open containers.
{quote} fixed only the SCMContainerPlacementRandom.java and not the 
SCMCommonPolicy.java. Instead of todo, now it should be handled.{quote}
Shall we add a test case to validate excluded nodes are not returned?
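
For reference, a minimal sketch of the configuration-driven executor idea; the 
configuration key and default below are invented for illustration, not real 
Ozone configuration:

{code}
import java.util.Properties;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public final class ReplicationExecutorSketch {
  // Hypothetical configuration key and default.
  static final String REPLICATION_THREADS_KEY = "ozone.replication.threads";
  static final int REPLICATION_THREADS_DEFAULT = 1;

  private ReplicationExecutorSketch() {
  }

  /** Default is a single thread; the pool can be dialed up via configuration. */
  public static ExecutorService create(Properties conf) {
    int threads = Integer.parseInt(conf.getProperty(
        REPLICATION_THREADS_KEY, String.valueOf(REPLICATION_THREADS_DEFAULT)));
    return Executors.newFixedThreadPool(threads);
  }
}
{code}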


> Implement ReplicationManager to replicate ClosedContainers
> --
>
> Key: HDDS-199
> URL: https://issues.apache.org/jira/browse/HDDS-199
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-199.001.patch, HDDS-199.002.patch, 
> HDDS-199.003.patch, HDDS-199.004.patch
>
>
> HDDS/Ozone supports Open and Closed containers. In case of specific 
> conditions (container is full, node is failed) the container will be closed 
> and will be replicated in a different way. The replication of Open containers 
> is handled with Ratis and the PipelineManager.
> The ReplicationManager should handle the replication of the ClosedContainers. 
> The replication information will be sent as an event 
> (UnderReplicated/OverReplicated). 
> The ReplicationManager will collect all of the events in a priority queue 
> (to replicate first the containers where more replicas are missing), calculate 
> the destination datanode (first with a very simple algorithm, later by 
> calculating scatter-width) and send the Copy/Delete container command to the 
> datanode (CommandQueue).
> A CopyCommandWatcher/DeleteCommandWatcher is also included to retry the 
> copy/delete in case of failure. This is an in-memory structure (based on 
> HDDS-195) which can requeue the under-replicated/over-replicated events to the 
> priority queue until the confirmation of the copy/delete command arrives.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-167) Rename KeySpaceManager to OzoneManager

2018-07-06 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-167?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16535203#comment-16535203
 ] 

Anu Engineer commented on HDDS-167:
---

[~jghoman]/[~jakobhoman] Thanks for this suggestion. We have officially renamed 
KSM to OM. Really appreciate all your comments on Ozone.


> Rename KeySpaceManager to OzoneManager
> --
>
> Key: HDDS-167
> URL: https://issues.apache.org/jira/browse/HDDS-167
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>  Components: Ozone Manager
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-167.01.patch, HDDS-167.02.patch, HDDS-167.03.patch, 
> HDDS-167.04.patch, HDDS-167.05.patch, HDDS-167.06.patch, HDDS-167.07.patch, 
> HDDS-167.08.patch, HDDS-167.09.patch, HDDS-167.10.patch, HDDS-167.11.patch, 
> HDDS-167.12.patch
>
>
> The Ozone KeySpaceManager daemon was renamed to OzoneManager. There are some 
> more changes needed to complete the rename everywhere, e.g.:
> - command-line
> - documentation
> - unit tests
> - Acceptance tests



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-232) Parallel unit test execution for HDDS/Ozone

2018-07-06 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-232?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16535201#comment-16535201
 ] 

genericqa commented on HDDS-232:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
29s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
47s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 28m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
67m 41s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
12s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
19s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 26m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 26m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
11s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
1s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
3s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 15s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
16s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
43s{color} | {color:green} hadoop-hdds in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 10m 38s{color} 
| {color:red} hadoop-ozone in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
34s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}130m 26s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.TestStorageContainerManager |
|   | hadoop.ozone.TestMiniOzoneCluster |
|   | 
hadoop.ozone.container.common.statemachine.commandhandler.TestCloseContainerByPipeline
 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDDS-232 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12930463/HDDS-232.01.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  xml  |
| uname | Linux f05c2ec6e71a 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 
11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 39ad989 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_171 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HDDS-Build/454/artifact/out/whitespace-eol.txt
 |
| unit | 

[jira] [Commented] (HDDS-167) Rename KeySpaceManager to OzoneManager

2018-07-06 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-167?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16535195#comment-16535195
 ] 

Anu Engineer commented on HDDS-167:
---

+1, Thanks for getting this done. I know it has been a difficult patch.


> Rename KeySpaceManager to OzoneManager
> --
>
> Key: HDDS-167
> URL: https://issues.apache.org/jira/browse/HDDS-167
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>  Components: Ozone Manager
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-167.01.patch, HDDS-167.02.patch, HDDS-167.03.patch, 
> HDDS-167.04.patch, HDDS-167.05.patch, HDDS-167.06.patch, HDDS-167.07.patch, 
> HDDS-167.08.patch, HDDS-167.09.patch, HDDS-167.10.patch, HDDS-167.11.patch, 
> HDDS-167.12.patch
>
>
> The Ozone KeySpaceManager daemon was renamed to OzoneManager. There are some 
> more changes needed to complete the rename everywhere, e.g.:
> - command-line
> - documentation
> - unit tests
> - Acceptance tests



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-204) Modify Integration tests for new ContainerIO classes

2018-07-06 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-204?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-204:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

Thank you [~arpitagarwal] for the review.

I have committed this to the HDDS-48 branch.

> Modify Integration tests for new ContainerIO classes
> 
>
> Key: HDDS-204
> URL: https://issues.apache.org/jira/browse/HDDS-204
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-204-HDDS-48.00.patch, HDDS-204-HDDS-48.01.patch
>
>
> Modify the integration tests in Ozone according to the new ContainerIO classes.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-167) Rename KeySpaceManager to OzoneManager

2018-07-06 Thread Arpit Agarwal (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-167?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16535186#comment-16535186
 ] 

Arpit Agarwal commented on HDDS-167:


I tried running test-patch locally and ran into a bunch of odd errors, like 
changes not being picked up when compiling some modules.

I think test-patch is not correctly configured to do multi-module HDDS builds. 
I propose committing this without a +1 from Yetus; if there is any breakage, 
we can address it later.

[~nandakumar131], are you +1 on the latest patch?

> Rename KeySpaceManager to OzoneManager
> --
>
> Key: HDDS-167
> URL: https://issues.apache.org/jira/browse/HDDS-167
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>  Components: Ozone Manager
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-167.01.patch, HDDS-167.02.patch, HDDS-167.03.patch, 
> HDDS-167.04.patch, HDDS-167.05.patch, HDDS-167.06.patch, HDDS-167.07.patch, 
> HDDS-167.08.patch, HDDS-167.09.patch, HDDS-167.10.patch, HDDS-167.11.patch, 
> HDDS-167.12.patch
>
>
> The Ozone KeySpaceManager daemon was renamed to OzoneManager. There are some 
> more changes needed to complete the rename everywhere, e.g.:
> - command-line
> - documentation
> - unit tests
> - Acceptance tests



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13674) Improve documentation on Metrics

2018-07-06 Thread Chao Sun (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13674?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Sun updated HDFS-13674:

Attachment: HDFS-13674.002.patch

> Improve documentation on Metrics
> 
>
> Key: HDFS-13674
> URL: https://issues.apache.org/jira/browse/HDFS-13674
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: documentation, metrics
>Reporter: Chao Sun
>Assignee: Chao Sun
>Priority: Minor
> Attachments: HDFS-13674.000.patch, HDFS-13674.001.patch, 
> HDFS-13674.002.patch
>
>
> There are a few confusing places in the [Hadoop Metrics 
> page|https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-common/Metrics.html].
>  For instance, there are duplicated entries such as {{FsImageLoadTime}}; some 
> quantile metrics do not have corresponding entries; descriptions of some 
> quantile metrics are not very specific about what the {{num}} variable in the 
> metric name means; etc.
> This JIRA aims to improve this.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-235) Fix TestOzoneAuditLogger#verifyDefaultLogLevel

2018-07-06 Thread Xiaoyu Yao (JIRA)
Xiaoyu Yao created HDDS-235:
---

 Summary: Fix TestOzoneAuditLogger#verifyDefaultLogLevel
 Key: HDDS-235
 URL: https://issues.apache.org/jira/browse/HDDS-235
 Project: Hadoop Distributed Data Store
  Issue Type: Test
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao


This ticket is opened to fix the out-of-bounds error from 
TestOzoneAuditLogger#verifyDefaultLogLevel.

{code}
h3. Error Message

Index: 0, Size: 0

h3. Stacktrace

java.lang.IndexOutOfBoundsException: Index: 0, Size: 0
  at java.util.ArrayList.rangeCheck(ArrayList.java:657)
  at java.util.ArrayList.get(ArrayList.java:433)
  at org.apache.hadoop.ozone.audit.TestOzoneAuditLogger.verifyLog(TestOzoneAuditLogger.java:125)
{code}
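
One plausible way to harden the assertion - a sketch only, where the 
{{List<String>}} stands in for whatever captured lines the test's appender 
exposes - is to poll until a line exists instead of calling {{get(0)}} 
immediately:
{code:java}
import java.util.List;
import java.util.concurrent.TimeoutException;

// Sketch: wait for the appender to capture a line before asserting,
// instead of reading index 0 of a possibly still-empty list.
public final class AuditLogTestUtil {
  private AuditLogTestUtil() { }

  public static String waitForFirstLine(List<String> capturedLines,
      long timeoutMillis) throws InterruptedException, TimeoutException {
    long deadline = System.currentTimeMillis() + timeoutMillis;
    while (capturedLines.isEmpty()) {
      if (System.currentTimeMillis() > deadline) {
        throw new TimeoutException("no audit log line captured");
      }
      Thread.sleep(100); // brief back-off between checks
    }
    return capturedLines.get(0);
  }
}
{code}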



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13723) Occasional "Should be different group" error in TestRefreshUserMappings#testGroupMappingRefresh

2018-07-06 Thread Siyao Meng (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13723?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16535114#comment-16535114
 ] 

Siyao Meng commented on HDFS-13723:
---

[~genericqa] Unrelated flaky test 
+hadoop.hdfs.server.namenode.ha.TestStandbyCheckpoints+. Passed locally. Can be 
ignored.

> Occasional "Should be different group" error in 
> TestRefreshUserMappings#testGroupMappingRefresh
> ---
>
> Key: HDFS-13723
> URL: https://issues.apache.org/jira/browse/HDFS-13723
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: security, test
>Affects Versions: 3.0.0
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Attachments: HDFS-13723.001.patch
>
>
> On some occasions, the user-group mapping refresh timeout test assertion 
> fails because the mapping didn't refresh in time, reporting "Should be 
> different group".
>  
> Trace:
> {code:java}
> java.lang.AssertionError: Should be different group 
> at 
> org.apache.hadoop.security.TestRefreshUserMappings.testGroupMappingRefresh(TestRefreshUserMappings.java:153)
> :
> :
> 2018-07-04 19:35:21,073 [BP-1412052829-172.26.17.254-1530758120647 
> heartbeating to localhost/127.0.0.1:39524] INFO datanode.DataNode 
> (BPOfferService.java:processCommandFromActive(759)) - Got finalize command 
> for block pool BP-1412052829-172.26.17.254-1530758120647
> Getting groups in MockUnixGroupsMapping
> 2018-07-04 19:35:21,090 [IPC Server handler 6 on 39524] INFO 
> FSNamesystem.audit (FSNamesystem.java:logAuditMessage(7805)) - allowed=true   
>ugi=jenkins (auth:SIMPLE)   ip=/127.0.0.1 cmd=datanodeReport
> src=nulldst=nullperm=null   proto=rpc
> 2018-07-04 19:35:21,092 [main] INFO hdfs.MiniDFSCluster 
> (MiniDFSCluster.java:waitActive(2629)) - Cluster is active
> 2018-07-04 19:35:21,095 [IPC Server handler 7 on 39524] INFO 
> FSNamesystem.audit (FSNamesystem.java:logAuditMessage(7805)) - allowed=true   
>ugi=jenkins (auth:SIMPLE)   ip=/127.0.0.1 cmd=datanodeReport
> src=nulldst=nullperm=null   proto=rpc
> 2018-07-04 19:35:21,096 [main] INFO hdfs.MiniDFSCluster 
> (MiniDFSCluster.java:waitActive(2629)) - Cluster is active
> first attempt:
> [jenkins11, jenkins12]
> second attempt, should be same:
> [jenkins11, jenkins12]
> 2018-07-04 19:35:21,101 [IPC Server handler 5 on 39524] INFO 
> namenode.NameNode (NameNodeRpcServer.java:refreshUserToGroupsMappings(1648)) 
> - Refreshing all user-to-groups mappings. Requested by user: jenkins
> 2018-07-04 19:35:21,101 [IPC Server handler 5 on 39524] INFO security.Groups 
> (Groups.java:refresh(401)) - clearing userToGroupsMap cache
> Refreshing groups in MockUnixGroupsMapping
> 2018-07-04 19:35:21,102 [IPC Server handler 5 on 39524] INFO 
> FSNamesystem.audit (FSNamesystem.java:logAuditMessage(7805)) - allowed=true   
>ugi=jenkins (auth:SIMPLE)   ip=/127.0.0.1 
> cmd=refreshUserToGroupsMappings   src=nulldst=nullperm=null   
> proto=rpc
> Refresh user to groups mapping successful
> third attempt(after refresh command), should be different:
> Getting groups in MockUnixGroupsMapping
> [jenkins21, jenkins22]
> fourth attempt(after timeout), should be different:
> [jenkins21, jenkins22]
> Getting groups in MockUnixGroupsMapping
> 2018-07-04 19:35:22,204 [main] INFO hdfs.MiniDFSCluster 
> (MiniDFSCluster.java:shutdown(1965)) - Shutting down the Mini HDFS Cluster
> {code}
>  
> Solution:
> Increase the timeout slightly, and place debugging messages in the load() and 
> reload() methods of class GroupCacheLoader.
>  
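
Rather than a fixed sleep, the refresh assertion can poll until the mapping 
actually changes - a method-level sketch, assuming JUnit and a hypothetical 
{{getGroups()}} helper that performs the test's group lookup:
{code:java}
// Sketch: poll for the refreshed mapping instead of asserting after a
// fixed sleep; fails with the same message the flaky assertion uses.
private void assertGroupsChange(List<String> before, long timeoutMillis)
    throws Exception {
  long deadline = System.currentTimeMillis() + timeoutMillis;
  List<String> after = getGroups(); // hypothetical lookup helper
  while (after.equals(before)) {
    if (System.currentTimeMillis() > deadline) {
      org.junit.Assert.fail("Should be different group");
    }
    Thread.sleep(100);
    after = getGroups();
  }
}
{code}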



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-234) Add SCM node report handler

2018-07-06 Thread Xiaoyu Yao (JIRA)
Xiaoyu Yao created HDDS-234:
---

 Summary: Add SCM node report handler
 Key: HDDS-234
 URL: https://issues.apache.org/jira/browse/HDDS-234
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: SCM
Reporter: Ajay Kumar
Assignee: Ajay Kumar


This ticket is opened to add the SCM node report handler after the refactoring.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13721) NPE in DataNode due to uninitialized DiskBalancer

2018-07-06 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-13721?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16535092#comment-16535092
 ] 

Íñigo Goiri commented on HDFS-13721:


Thanks [~xiaochen] for the patch.
The unit test failures seem unrelated and the ones tweaked look good 
[here|https://builds.apache.org/job/PreCommit-HDFS-Build/24565/testReport/org.apache.hadoop.hdfs.server.diskbalancer/TestDiskBalancer/].
+1 on  [^HDFS-13721.02.patch].

> NPE in DataNode due to uninitialized DiskBalancer
> -
>
> Key: HDFS-13721
> URL: https://issues.apache.org/jira/browse/HDFS-13721
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, diskbalancer
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Major
> Attachments: HDFS-13721.01.patch, HDFS-13721.02.patch
>
>
> {noformat}
> 2018-06-28 05:11:47,650 ERROR org.apache.hadoop.jmx.JMXJsonServlet: getting 
> attribute DiskBalancerStatus of Hadoop:service=DataNode,name=DataNodeInfo 
> threw an exception
> javax.management.RuntimeMBeanException: java.lang.NullPointerException
>  * TRACEBACK 4 *
>  at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.rethrow(DefaultMBeanServerInterceptor.java:839)
>  at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.rethrowMaybeMBeanException(DefaultMBeanServerInterceptor.java:852)
>  at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getAttribute(DefaultMBeanServerInterceptor.java:651)
>  at 
> com.sun.jmx.mbeanserver.JmxMBeanServer.getAttribute(JmxMBeanServer.java:678)
>  at 
> org.apache.hadoop.jmx.JMXJsonServlet.writeAttribute(JMXJsonServlet.java:338)
>  at org.apache.hadoop.jmx.JMXJsonServlet.listBeans(JMXJsonServlet.java:316)
>  at org.apache.hadoop.jmx.JMXJsonServlet.doGet(JMXJsonServlet.java:210)
>  at javax.servlet.http.HttpServlet.service(HttpServlet.java:687)
>  at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
>  at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:848)
>  at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1772)
>  at 
> org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter.doFilter(StaticUserWebFilter.java:110)
>  at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
>  at 
> org.apache.hadoop.http.HttpServer2$QuotingInputFilter.doFilter(HttpServer2.java:1537)
>  at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
>  at org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
>  at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
>  at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)
>  at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
>  at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
>  at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
>  at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
>  at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)
>  at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
>  at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
>  at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
>  at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
>  at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
>  at org.eclipse.jetty.server.Server.handle(Server.java:534)
>  at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:320)
>  at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251)
>  at 
> org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:283)
>  at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:108)
>  at 
> org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
>  at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303)
>  at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)
>  at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136)
>  at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
>  at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589)
>  at java.lang.Thread.run(Thread.java:748)
> Caused by: java.lang.NullPointerException
>  at 
> 

[jira] [Commented] (HDDS-167) Rename KeySpaceManager to OzoneManager

2018-07-06 Thread Arpit Agarwal (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-167?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16535090#comment-16535090
 ] 

Arpit Agarwal commented on HDDS-167:


I think the issue is that this patch is triggering a full unit test run of all 
Hadoop projects.
{code}
cd /testptch/hadoop
/usr/bin/mvn -Phdds 
-Dmaven.repo.local=/home/jenkins/yetus-m2/hadoop-trunk-patch-1 -Ptest-patch 
-Pparallel-tests -Pshelltest -Pnative -Drequire.snappy -Drequire.openssl 
-Drequire.fuse -Drequire.test.libhadoop -Pyarn-ui clean test -fae > 
/testptch/patchprocess/patch-unit-root.txt 2>&1
Build timed out (after 300 minutes). Marking the build as aborted.
{code}

Other HDDS patches only run the HDDS+Ozone UTs. E.g. [this 
build|https://builds.apache.org/job/PreCommit-HDDS-Build/453/consoleText].
{code}
cd /testptch/hadoop/hadoop-hdds/container-service
/usr/bin/mvn -Phdds 
-Dmaven.repo.local=/home/jenkins/yetus-m2/hadoop-HDDS-48-patch-0 -Ptest-patch 
-Pparallel-tests -P!shelltest -Pnative -Drequire.snappy -Drequire.openssl 
-Drequire.fuse -Drequire.test.libhadoop -Pyarn-ui clean test -fae > 
/testptch/patchprocess/patch-unit-hadoop-hdds_container-service.txt 2>&1
Elapsed:   1m 14s
cd /testptch/hadoop/hadoop-ozone/integration-test
/usr/bin/mvn -Phdds 
-Dmaven.repo.local=/home/jenkins/yetus-m2/hadoop-HDDS-48-patch-0 -Ptest-patch 
-Pparallel-tests -P!shelltest -Pnative -Drequire.snappy -Drequire.openssl 
-Drequire.fuse -Drequire.test.libhadoop -Pyarn-ui clean test -fae > 
/testptch/patchprocess/patch-unit-hadoop-ozone_integration-test.txt 2>&1
Elapsed:  18m 44s
{code}


> Rename KeySpaceManager to OzoneManager
> --
>
> Key: HDDS-167
> URL: https://issues.apache.org/jira/browse/HDDS-167
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>  Components: Ozone Manager
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-167.01.patch, HDDS-167.02.patch, HDDS-167.03.patch, 
> HDDS-167.04.patch, HDDS-167.05.patch, HDDS-167.06.patch, HDDS-167.07.patch, 
> HDDS-167.08.patch, HDDS-167.09.patch, HDDS-167.10.patch, HDDS-167.11.patch, 
> HDDS-167.12.patch
>
>
> The Ozone KeySpaceManager daemon was renamed to OzoneManager. There are some 
> more changes needed to complete the rename everywhere, e.g.:
> - command-line
> - documentation
> - unit tests
> - Acceptance tests



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13710) RBF: setQuota and getQuotaUsage should check the dfs.federation.router.quota.enable

2018-07-06 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-13710?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16535087#comment-16535087
 ] 

Íñigo Goiri commented on HDFS-13710:


Thanks [~hfyang20071] for  [^HDFS-13710.007.patch].
I would keep the null check in the {{tearDown()}} method; not that it will make 
much of a difference, but it's good practice to do it.
Other than that, it looks good.
I'll let [~linyiqun] do the final review but +1 from my side.
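
The suggested guard is the usual pattern - a minimal sketch, assuming the test 
keeps whatever it starts in setUp() in a field named {{cluster}} (hypothetical):
{code:java}
// Sketch: shut down only if setUp() actually got far enough to
// create the cluster, so a failed setUp() doesn't cascade into an NPE.
@After
public void tearDown() throws Exception {
  if (cluster != null) {
    cluster.shutdown();
    cluster = null;
  }
}
{code}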

> RBF:  setQuota and getQuotaUsage should check the 
> dfs.federation.router.quota.enable
> 
>
> Key: HDFS-13710
> URL: https://issues.apache.org/jira/browse/HDFS-13710
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: federation, hdfs
>Affects Versions: 2.9.1, 3.0.3
>Reporter: yanghuafeng
>Assignee: yanghuafeng
>Priority: Major
> Attachments: HDFS-13710.002.patch, HDFS-13710.003.patch, 
> HDFS-13710.004.patch, HDFS-13710.005.patch, HDFS-13710.006.patch, 
> HDFS-13710.007.patch, HDFS-13710.patch
>
>
> When I use the command below, some exceptions happen.
>  
> {code:java}
> hdfs dfsrouteradmin -setQuota /tmp -ssQuota 1G 
> {code}
>  The logs follow.
> {code:java}
> Successfully set quota for mount point /tmp
> {code}
> It looks like the quota is set successfully, but some exceptions happen in 
> the rbf server log.
> {code:java}
> java.io.IOException: No remote locations available
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeConcurrent(RouterRpcClient.java:1002)
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeConcurrent(RouterRpcClient.java:967)
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeConcurrent(RouterRpcClient.java:940)
> at 
> org.apache.hadoop.hdfs.server.federation.router.Quota.setQuota(Quota.java:84)
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterAdminServer.synchronizeQuota(RouterAdminServer.java:255)
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterAdminServer.updateMountTableEntry(RouterAdminServer.java:238)
> at 
> org.apache.hadoop.hdfs.protocolPB.RouterAdminProtocolServerSideTranslatorPB.updateMountTableEntry(RouterAdminProtocolServerSideTranslatorPB.java:179)
> at 
> org.apache.hadoop.hdfs.protocol.proto.RouterProtocolProtos$RouterAdminProtocolService$2.callBlockingMethod(RouterProtocolProtos.java:259)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2115)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2111)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1867)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2111)
> {code}
> I find that dfs.federation.router.quota.enable is false by default, and it 
> causes the problem. I think we should check this parameter when we call 
> setQuota and getQuotaUsage. 
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-13710) RBF: setQuota and getQuotaUsage should check the dfs.federation.router.quota.enable

2018-07-06 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/HDFS-13710?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri reassigned HDFS-13710:
--

Assignee: yanghuafeng

> RBF:  setQuota and getQuotaUsage should check the 
> dfs.federation.router.quota.enable
> 
>
> Key: HDFS-13710
> URL: https://issues.apache.org/jira/browse/HDFS-13710
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: federation, hdfs
>Affects Versions: 2.9.1, 3.0.3
>Reporter: yanghuafeng
>Assignee: yanghuafeng
>Priority: Major
> Attachments: HDFS-13710.002.patch, HDFS-13710.003.patch, 
> HDFS-13710.004.patch, HDFS-13710.005.patch, HDFS-13710.006.patch, 
> HDFS-13710.007.patch, HDFS-13710.patch
>
>
> When I use the command below, some exceptions happen.
>  
> {code:java}
> hdfs dfsrouteradmin -setQuota /tmp -ssQuota 1G 
> {code}
>  The logs follow.
> {code:java}
> Successfully set quota for mount point /tmp
> {code}
> It looks like the quota is set successfully, but some exceptions happen in 
> the rbf server log.
> {code:java}
> java.io.IOException: No remote locations available
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeConcurrent(RouterRpcClient.java:1002)
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeConcurrent(RouterRpcClient.java:967)
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeConcurrent(RouterRpcClient.java:940)
> at 
> org.apache.hadoop.hdfs.server.federation.router.Quota.setQuota(Quota.java:84)
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterAdminServer.synchronizeQuota(RouterAdminServer.java:255)
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterAdminServer.updateMountTableEntry(RouterAdminServer.java:238)
> at 
> org.apache.hadoop.hdfs.protocolPB.RouterAdminProtocolServerSideTranslatorPB.updateMountTableEntry(RouterAdminProtocolServerSideTranslatorPB.java:179)
> at 
> org.apache.hadoop.hdfs.protocol.proto.RouterProtocolProtos$RouterAdminProtocolService$2.callBlockingMethod(RouterProtocolProtos.java:259)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2115)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2111)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1867)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2111)
> {code}
> I find that dfs.federation.router.quota.enable is false by default, and it 
> causes the problem. I think we should check this parameter when we call 
> setQuota and getQuotaUsage. 
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-232) Parallel unit test execution for HDDS/Ozone

2018-07-06 Thread Arpit Agarwal (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-232?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16535068#comment-16535068
 ] 

Arpit Agarwal commented on HDDS-232:


v02 - Fix trailing whitespace.

> Parallel unit test execution for HDDS/Ozone
> ---
>
> Key: HDDS-232
> URL: https://issues.apache.org/jira/browse/HDDS-232
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-232.01.patch, HDDS-232.02.patch
>
>
> HDDS and Ozone should support the {{parallel-tests}} Maven profile to enable 
> parallel test execution (similar to HDFS-4491, HADOOP-9287).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-232) Parallel unit test execution for HDDS/Ozone

2018-07-06 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-232?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDDS-232:
---
Attachment: HDDS-232.02.patch

> Parallel unit test execution for HDDS/Ozone
> ---
>
> Key: HDDS-232
> URL: https://issues.apache.org/jira/browse/HDDS-232
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-232.01.patch, HDDS-232.02.patch
>
>
> HDDS and Ozone should support the {{parallel-tests}} Maven profile to enable 
> parallel test execution (similar to HDFS-4491, HADOOP-9287).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13674) Improve documentation on Metrics

2018-07-06 Thread Chao Sun (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13674?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16535056#comment-16535056
 ] 

Chao Sun commented on HDFS-13674:
-

Oops, good catch [~linyiqun]! I'll fix this and upload a new patch.

> Improve documentation on Metrics
> 
>
> Key: HDFS-13674
> URL: https://issues.apache.org/jira/browse/HDFS-13674
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: documentation, metrics
>Reporter: Chao Sun
>Assignee: Chao Sun
>Priority: Minor
> Attachments: HDFS-13674.000.patch, HDFS-13674.001.patch
>
>
> There are a few confusing places in the [Hadoop Metrics 
> page|https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-common/Metrics.html].
>  For instance, there are duplicated entries such as {{FsImageLoadTime}}; some 
> quantile metrics do not have corresponding entries; descriptions of some 
> quantile metrics are not very specific about what the {{num}} variable in the 
> metric name means; etc.
> This JIRA aims to improve this.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-233) Update ozone to latest ratis snapshot build

2018-07-06 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16535010#comment-16535010
 ] 

Anu Engineer commented on HDDS-233:
---

Thanks for filing this JIRA. I am reading this as "update to the latest release 
of Ratis which we are voting for now". Just wanted to make sure. 

> Update ozone to latest ratis snapshot build
> ---
>
> Key: HDDS-233
> URL: https://issues.apache.org/jira/browse/HDDS-233
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.2.1
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
>Priority: Major
> Fix For: 0.2.1
>
>
> This jira proposes to update Ozone to the latest Ratis snapshot build. It 
> will also add config to set the append entry timeout as well as to control 
> the number of entries in the retry cache.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-233) Update ozone to latest ratis snapshot build

2018-07-06 Thread Mukul Kumar Singh (JIRA)
Mukul Kumar Singh created HDDS-233:
--

 Summary: Update ozone to latest ratis snapshot build
 Key: HDDS-233
 URL: https://issues.apache.org/jira/browse/HDDS-233
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: Ozone Datanode
Affects Versions: 0.2.1
Reporter: Mukul Kumar Singh
Assignee: Mukul Kumar Singh
 Fix For: 0.2.1


This jira proposes to update Ozone to the latest Ratis snapshot build. It will 
also add config to set the append entry timeout as well as to control the 
number of entries in the retry cache.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13421) [PROVIDED Phase 2] Implement DNA_BACKUP command in Datanode

2018-07-06 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13421?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16534987#comment-16534987
 ] 

genericqa commented on HDFS-13421:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
46s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-12090 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 30m 
34s{color} | {color:green} HDFS-12090 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
58s{color} | {color:green} HDFS-12090 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
57s{color} | {color:green} HDFS-12090 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
3s{color} | {color:green} HDFS-12090 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  9s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
58s{color} | {color:green} HDFS-12090 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
50s{color} | {color:green} HDFS-12090 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
37s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
34s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 34s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 54s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 16 new + 582 unchanged - 0 fixed = 598 total (was 582) {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
33s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red}  3m 
35s{color} | {color:red} patch has errors when building and testing our client 
artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
32s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
47s{color} | {color:red} hadoop-hdfs-project_hadoop-hdfs generated 25 new + 1 
unchanged - 0 fixed = 26 total (was 1) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 32s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
21s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 57m 18s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b |
| JIRA Issue | HDFS-13421 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12930532/HDFS-13421-HDFS-12090.003.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 58c8d43f30ca 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | HDFS-12090 / 9d7a903 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_171 |
| findbugs | v3.1.0-RC1 |
| mvninstall | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24567/artifact/out/patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| compile | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24567/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| javac | 

[jira] [Commented] (HDFS-13724) Storage Tiering Show Paths with Policies applied

2018-07-06 Thread Brahma Reddy Battula (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13724?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16534959#comment-16534959
 ] 

Brahma Reddy Battula commented on HDFS-13724:
-

bq. but I can't find anything relating to 'policy' or the name of our storage 
policy or the directory I know it's applied to.

You can check/grep for *"storagePolicyId"* to find the storage policy of a 
file (i.e. the policy id is stored in the fsimage).

*Example:*

If you set the policy as *cold*, you will see something like 
{{<storagePolicyId>2</storagePolicyId>}} for the file in the fsimage XML.
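
The per-directory queries can also be automated in one pass - a rough sketch 
using the public {{FileSystem}} API; note it prints the effective policy of 
every directory, including policies inherited from an ancestor:
{code:java}
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Sketch: recursively print "path -> storage policy" for every directory.
public class ListStoragePolicies {
  public static void main(String[] args) throws IOException {
    Path root = new Path(args.length > 0 ? args[0] : "/");
    FileSystem fs = root.getFileSystem(new Configuration());
    walk(fs, root);
  }

  private static void walk(FileSystem fs, Path dir) throws IOException {
    System.out.println(dir + " -> " + fs.getStoragePolicy(dir).getName());
    for (FileStatus st : fs.listStatus(dir)) {
      if (st.isDirectory()) {
        walk(fs, st.getPath());
      }
    }
  }
}
{code}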

 

> Storage Tiering Show Paths with Policies applied
> 
>
> Key: HDFS-13724
> URL: https://issues.apache.org/jira/browse/HDFS-13724
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Hari Sekhon
>Priority: Major
>
> Improvement Request to add an hdfs storagepolicies command to find paths for 
> which storage policies have been applied.
> Right now you must explicitly query a single directory to get its policy to 
> determine if one has been applied, but if another hadoop admin has configured 
> policies on anything but trivially obvious paths such as /archive then there 
> is no way to find which paths have policies applied to them other than by 
> querying every single directory and subdirectory one by one which might 
> potentially have a policy, eg:
> {code:java}
> hdfs storagepolicies -getStoragePolicy -path /dir3/subdir1
> hdfs storagepolicies -getStoragePolicy -path /dir2
> hdfs storagepolicies -getStoragePolicy -path /dir3
> hdfs storagepolicies -getStoragePolicy -path /dir3/subdir1
> hdfs storagepolicies -getStoragePolicy -path /dir3/subdir2
> hdfs storagepolicies -getStoragePolicy -path /dir3/subdir3
> ...
> hdfs storagepolicies -getStoragePolicy -path /dirN
> ...
> hdfs storagepolicies -getStoragePolicy -path /dirN/subdirN/subsubdirN
> ...{code}
> In my current environment for example, a policy was configured for /data/blah 
> which doesn't show when trying
> {code:java}
>  hdfs storagepolicies -getStoragePolicy -path /data{code}
> and I had no way of knowing that I had to do:
> {code:java}
>  hdfs storagepolicies -getStoragePolicy -path /data/blah{code}
> other than trial and error of trying every directory and every subdirectory 
> in hdfs.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-13724) Storage Tiering Show Paths with Policies applied

2018-07-06 Thread Hari Sekhon (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13724?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16534859#comment-16534859
 ] 

Hari Sekhon edited comment on HDFS-13724 at 7/6/18 2:26 PM:


I tried a workaround of dumping the fsimage to XML and grepping for info:
{code:java}
su - hdfs
kinit -kt /etc/security/keytabs/hdfs.headless.keytab hdfs
hdfs dfsadmin -fetchImage .
# this step might take a long time on big clusters (eg. 20 mins for 12GB 
fsimage.xml result file from a moderate 600TB cluster)
hadoop oiv -i $(ls -tr fsimage_* | tail -n1) -p XML -o fsimage.xml
grep ... fsimage.xml{code}
but I can't find anything relating to 'policy' or the name of our storage 
policy or the directory I know it's applied to.


was (Author: harisekhon):
I tried a workaround for now, which is to do the following as the hdfs 
superuser - dump the fsimage, convert it to XML and then grep the tiering path 
info:
{code:java}
su - hdfs
kinit -kt /etc/security/keytabs/hdfs.headless.keytab hdfs
hdfs dfsadmin -fetchImage .
# this step might take a long time on big clusters (eg. 20 mins for 12GB 
fsimage.xml result file from a moderate 600TB cluster)
hadoop oiv -i $(ls -tr fsimage_* | tail -n1) -p XML -o fsimage.xml
grep ... fsimage.xml{code}
but I can't find anything relating to 'policy' or the name of our storage 
policy or the directory I know it's applied to.

> Storage Tiering Show Paths with Policies applied
> 
>
> Key: HDFS-13724
> URL: https://issues.apache.org/jira/browse/HDFS-13724
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Hari Sekhon
>Priority: Major
>
> Improvement Request to add an hdfs storagepolicies command to find paths for 
> which storage policies have been applied.
> Right now you must explicitly query a single directory to get its policy to 
> determine if one has been applied, but if another hadoop admin has configured 
> policies on anything but trivially obvious paths such as /archive then there 
> is no way to find which paths have policies applied to them other than by 
> querying every single directory and subdirectory one by one which might 
> potentially have a policy, eg:
> {code:java}
> hdfs storagepolicies -getStoragePolicy -path /dir3/subdir1
> hdfs storagepolicies -getStoragePolicy -path /dir2
> hdfs storagepolicies -getStoragePolicy -path /dir3
> hdfs storagepolicies -getStoragePolicy -path /dir3/subdir1
> hdfs storagepolicies -getStoragePolicy -path /dir3/subdir2
> hdfs storagepolicies -getStoragePolicy -path /dir3/subdir3
> ...
> hdfs storagepolicies -getStoragePolicy -path /dirN
> ...
> hdfs storagepolicies -getStoragePolicy -path /dirN/subdirN/subsubdirN
> ...{code}
> In my current environment for example, a policy was configured for /data/blah 
> which doesn't show when trying
> {code:java}
>  hdfs storagepolicies -getStoragePolicy -path /data{code}
> and I had no way of knowing that I had to do:
> {code:java}
>  hdfs storagepolicies -getStoragePolicy -path /data/blah{code}
> other than trial and error of trying every directory and every subdirectory 
> in hdfs.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-13724) Storage Tiering Show Paths with Policies applied

2018-07-06 Thread Hari Sekhon (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13724?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16534859#comment-16534859
 ] 

Hari Sekhon edited comment on HDFS-13724 at 7/6/18 2:26 PM:


I tried a workaround for now, which is to do the following as the hdfs 
superuser - dump the fsimage, convert it to XML and then grep the tiering path 
info:
{code:java}
su - hdfs
kinit -kt /etc/security/keytabs/hdfs.headless.keytab hdfs
hdfs dfsadmin -fetchImage .
# this step might take a long time on big clusters (eg. 20 mins for 12GB 
fsimage.xml result file from a moderate 600TB cluster)
hadoop oiv -i $(ls -tr fsimage_* | tail -n1) -p XML -o fsimage.xml
grep ... fsimage.xml{code}
but I can't find anything relating to 'policy' or the name of our storage 
policy or the directory I know it's applied to.


was (Author: harisekhon):
I tried a workaround for now, which is to do the following as the hdfs 
superuser - dump the fsimage, convert it to XML and then grep the tiering path 
info:
{code:java}
su - hdfs
kinit -kt /etc/security/keytabs/hdfs.headless.keytab hdfs
hdfs dfsadmin -fetchImage .
# this step might take a long time on big clusters (eg. 20 mins for 12GB 
fsimage.xml result file from a moderate 600TB cluster)
hadoop oiv -i $(ls -tr fsimage_* | tail -n1) -p XML -o fsimage.xml
grep ...{code}
but I can't find anything relating to 'policy' or the name of our storage 
policy or the directory I know it's applied to.

> Storage Tiering Show Paths with Policies applied
> 
>
> Key: HDFS-13724
> URL: https://issues.apache.org/jira/browse/HDFS-13724
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Hari Sekhon
>Priority: Major
>
> Improvement Request to add an hdfs storagepolicies command to find paths for 
> which storage policies have been applied.
> Right now you must explicitly query a single directory to get its policy to 
> determine if one has been applied, but if another hadoop admin has configured 
> policies on anything but trivially obvious paths such as /archive then there 
> is no way to find which paths have policies applied to them other than by 
> querying every single directory and subdirectory one by one which might 
> potentially have a policy, eg:
> {code:java}
> hdfs storagepolicies -getStoragePolicy -path /dir3/subdir1
> hdfs storagepolicies -getStoragePolicy -path /dir2
> hdfs storagepolicies -getStoragePolicy -path /dir3
> hdfs storagepolicies -getStoragePolicy -path /dir3/subdir1
> hdfs storagepolicies -getStoragePolicy -path /dir3/subdir2
> hdfs storagepolicies -getStoragePolicy -path /dir3/subdir3
> ...
> hdfs storagepolicies -getStoragePolicy -path /dirN
> ...
> hdfs storagepolicies -getStoragePolicy -path /dirN/subdirN/subsubdirN
> ...{code}
> In my current environment for example, a policy was configured for /data/blah 
> which doesn't show when trying
> {code:java}
>  hdfs storagepolicies -getStoragePolicy -path /data{code}
> and I had no way of knowing that I had to do:
> {code:java}
>  hdfs storagepolicies -getStoragePolicy -path /data/blah{code}
> other than trial and error of trying every directory and every subdirectory 
> in hdfs.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13724) Storage Tiering Show Paths with Policies applied

2018-07-06 Thread Hari Sekhon (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13724?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16534859#comment-16534859
 ] 

Hari Sekhon commented on HDFS-13724:


I tried a workaround for now, which is to do the following as the hdfs 
superuser - dump the fsimage, convert it to XML and then grep the tiering path 
info:
{code:java}
su - hdfs
kinit -kt /etc/security/keytabs/hdfs.headless.keytab hdfs
hdfs dfsadmin -fetchImage .
# this step might take a long time on big clusters (eg. 20 mins for 12GB 
fsimage.xml result file from a moderate 600TB cluster)
hadoop oiv -i $(ls -tr fsimage_* | tail -n1) -p XML -o fsimage.xml
grep ...{code}
but I can't find anything relating to 'policy' or the name of our storage 
policy or the directory I know it's applied to.

> Storage Tiering Show Paths with Policies applied
> 
>
> Key: HDFS-13724
> URL: https://issues.apache.org/jira/browse/HDFS-13724
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Hari Sekhon
>Priority: Major
>
> Improvement Request to add an hdfs storagepolicies command to find paths for 
> which storage policies have been applied.
> Right now you must explicitly query a single directory to get its policy to 
> determine if one has been applied, but if another hadoop admin has configured 
> policies on anything but trivially obvious paths such as /archive then there 
> is no way to find which paths have policies applied to them other than by 
> querying every single directory and subdirectory one by one which might 
> potentially have a policy, eg:
> {code:java}
> hdfs storagepolicies -getStoragePolicy -path /dir3/subdir1
> hdfs storagepolicies -getStoragePolicy -path /dir2
> hdfs storagepolicies -getStoragePolicy -path /dir3
> hdfs storagepolicies -getStoragePolicy -path /dir3/subdir1
> hdfs storagepolicies -getStoragePolicy -path /dir3/subdir2
> hdfs storagepolicies -getStoragePolicy -path /dir3/subdir3
> ...
> hdfs storagepolicies -getStoragePolicy -path /dirN
> ...
> hdfs storagepolicies -getStoragePolicy -path /dirN/subdirN/subsubdirN
> ...{code}
> In my current environment for example, a policy was configured for /data/blah 
> which doesn't show when trying
> {code:java}
>  hdfs storagepolicies -getStoragePolicy -path /data{code}
> and I had no way of knowing that I had to do:
> {code:java}
>  hdfs storagepolicies -getStoragePolicy -path /data/blah{code}
> other than trial and error of trying every directory and every subdirectory 
> in hdfs.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13310) [PROVIDED Phase 2] The DatanodeProtocol should be have DNA_BACKUP to backup blocks

2018-07-06 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16534851#comment-16534851
 ] 

genericqa commented on HDFS-13310:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
42s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 12 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-12090 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m 
11s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 27m 
36s{color} | {color:green} HDFS-12090 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
43s{color} | {color:green} HDFS-12090 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 1s{color} | {color:green} HDFS-12090 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
41s{color} | {color:green} HDFS-12090 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 55s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
33s{color} | {color:green} HDFS-12090 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
10s{color} | {color:green} HDFS-12090 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
8s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 16m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 16m 
54s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m  1s{color} | {color:orange} hadoop-hdfs-project: The patch generated 16 new 
+ 692 unchanged - 1 fixed = 708 total (was 693) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
37s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 2 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m  8s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
43s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs-client generated 2 new 
+ 0 unchanged - 0 fixed = 2 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
8s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
28s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}118m 52s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
36s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}223m 15s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs-project/hadoop-hdfs-client |
|  |  
org.apache.hadoop.hdfs.server.protocol.SyncTaskExecutionResult.getResult() may 
expose internal representation by returning SyncTaskExecutionResult.result  At 
SyncTaskExecutionResult.java:by returning SyncTaskExecutionResult.result  At 
SyncTaskExecutionResult.java:[line 38] |
|  |  new 
org.apache.hadoop.hdfs.server.protocol.SyncTaskExecutionResult(byte[], Long) 
may expose internal representation by 
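For context, the usual remedy for this FindBugs pattern (EI_EXPOSE_REP / 
EI_EXPOSE_REP2) is a defensive copy of the byte array on the way in and on the 
way out. A minimal sketch, assuming the field and constructor shape implied by 
the warning above (the second constructor argument's name is a guess; only its 
type is known):
{code:java}
public class SyncTaskExecutionResult {
  private final byte[] result;
  private final Long numberOfBytes; // name assumed from the (byte[], Long) ctor

  public SyncTaskExecutionResult(byte[] result, Long numberOfBytes) {
    // Copy on the way in so callers cannot later mutate our state.
    this.result = result == null ? null : result.clone();
    this.numberOfBytes = numberOfBytes;
  }

  public byte[] getResult() {
    // Copy on the way out so callers cannot mutate our state.
    return result == null ? null : result.clone();
  }

  public Long getNumberOfBytes() {
    return numberOfBytes;
  }
}{code}
(Wrapping the payload in a read-only ByteBuffer is a copy-free alternative.)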

[jira] [Commented] (HDDS-208) ozone createVolume command ignores the first character of the "volume name" given as argument

2018-07-06 Thread Lokesh Jain (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-208?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16534849#comment-16534849
 ] 

Lokesh Jain commented on HDDS-208:
--

[~xyao] Thanks for reviewing the patch! For ozoneFS the key name would be the 
absolute path of a file or directory, so lastIndexOf('/') cannot be used in 
that case.
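
To illustrate the constraint (a sketch, not the actual HDDS-208 code; the URI 
layout is taken from the bug report): when the key name is itself an absolute 
path, the last '/' falls inside the key, so the volume has to be read from the 
first path component instead.
{code:java}
public class VolumeNameSplit {
  public static void main(String[] args) {
    String path = "/testvolume123/bucket1/dir1/file1";

    // lastIndexOf('/') splits inside the key name, not at the volume:
    System.out.println(path.substring(path.lastIndexOf('/') + 1)); // file1

    // The volume is the component after the *first* '/'; note how an
    // off-by-one here (e.g. scanning from index 1 but also skipping one
    // more character) would yield "estvolume123" instead.
    int end = path.indexOf('/', 1);
    System.out.println(path.substring(1, end)); // testvolume123
  }
}{code}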

> ozone createVolume command ignores the first character of the "volume name" 
> given as argument
> -
>
> Key: HDDS-208
> URL: https://issues.apache.org/jira/browse/HDDS-208
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Nilotpal Nandi
>Assignee: Lokesh Jain
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-208.001.patch
>
>
> The createVolume command was run to create the volume "testvolume123", but the 
> volume was created with the name "estvolume123" instead: the first character 
> of the volume name is dropped.
>  
> {noformat}
> [root@ozone-vm bin]# ./ozone oz -createVolume testvolume123 -user root
> 2018-07-02 05:33:35,510 WARN util.NativeCodeLoader: Unable to load 
> native-hadoop library for your platform... using builtin-java classes where 
> applicable
> 2018-07-02 05:33:36,093 [main] INFO - Creating Volume: estvolume123, with 
> root as owner and quota set to 1152921504606846976 bytes.
> {noformat}
>  
> ozone listVolume command :
>  
> {noformat}
> [root@ozone-vm bin]# ./ozone oz -listVolume /
> 2018-07-02 05:36:47,835 WARN util.NativeCodeLoader: Unable to load 
> native-hadoop library for your platform... using builtin-java classes where 
> applicable
> [ {
>  "owner" : {
>  "name" : "root"
>  },
>  "quota" : {
>  "unit" : "TB",
>  "size" : 1048576
>  },
>  "volumeName" : "nnvolume1",
>  "createdOn" : "Sun, 18 Sep +50444 15:12:11 GMT",
>  "createdBy" : "root"
> ..
> ..
> }, {
>  "owner" : {
>  "name" : "root"
>  },
>  "quota" : {
>  "unit" : "TB",
>  "size" : 1048576
>  },
>  "volumeName" : "estvolume123",
>  "createdOn" : "Sat, 17 May +50470 08:01:41 GMT",
>  "createdBy" : "root"
> } ]
> {noformat}
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13421) [PROVIDED Phase 2] Implement DNA_BACKUP command in Datanode

2018-07-06 Thread Ewan Higgs (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13421?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16534830#comment-16534830
 ] 

Ewan Higgs commented on HDFS-13421:
---

003

- Rebased onto HDFS-13310 patch version 5, which removed PUT_FILE and flattened 
the BlockSyncTask.

> [PROVIDED Phase 2] Implement DNA_BACKUP command in Datanode
> ---
>
> Key: HDFS-13421
> URL: https://issues.apache.org/jira/browse/HDFS-13421
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ewan Higgs
>Assignee: Ewan Higgs
>Priority: Major
> Attachments: HDFS-13421-HDFS-12090.001.patch, 
> HDFS-13421-HDFS-12090.002.patch, HDFS-13421-HDFS-12090.003.patch
>
>
> HDFS-13310 introduces an API for DNA_BACKUP. Here, we implement the DNA_BACKUP 
> command in the Datanode. 
> The changes have been split up to make reviewing them easier.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13421) [PROVIDED Phase 2] Implement DNA_BACKUP command in Datanode

2018-07-06 Thread Ewan Higgs (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13421?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewan Higgs updated HDFS-13421:
--
Status: Patch Available  (was: Open)

> [PROVIDED Phase 2] Implement DNA_BACKUP command in Datanode
> ---
>
> Key: HDFS-13421
> URL: https://issues.apache.org/jira/browse/HDFS-13421
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ewan Higgs
>Assignee: Ewan Higgs
>Priority: Major
> Attachments: HDFS-13421-HDFS-12090.001.patch, 
> HDFS-13421-HDFS-12090.002.patch, HDFS-13421-HDFS-12090.003.patch
>
>
> HDFS-13310 introduces an API for DNA_BACKUP. Here, we implement the DNA_BACKUP 
> command in the Datanode. 
> The changes have been split up to make reviewing them easier.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13421) [PROVIDED Phase 2] Implement DNA_BACKUP command in Datanode

2018-07-06 Thread Ewan Higgs (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13421?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewan Higgs updated HDFS-13421:
--
Status: Open  (was: Patch Available)

> [PROVIDED Phase 2] Implement DNA_BACKUP command in Datanode
> ---
>
> Key: HDFS-13421
> URL: https://issues.apache.org/jira/browse/HDFS-13421
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ewan Higgs
>Assignee: Ewan Higgs
>Priority: Major
> Attachments: HDFS-13421-HDFS-12090.001.patch, 
> HDFS-13421-HDFS-12090.002.patch, HDFS-13421-HDFS-12090.003.patch
>
>
> HDFS-13310 introduces an API for DNA_BACKUP. Here, we implement the DNA_BACKUP 
> command in the Datanode. 
> The changes have been split up to make reviewing them easier.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org


