[jira] [Commented] (HDFS-14595) HDFS-11848 breaks API compatibility

2019-08-09 Thread Ayush Saxena (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14595?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16904333#comment-16904333
 ] 

Ayush Saxena commented on HDFS-14595:
-

Yes, the ones you added. We should at least trigger those in a test, to make 
sure they work.

> HDFS-11848 breaks API compatibility
> ---
>
> Key: HDFS-14595
> URL: https://issues.apache.org/jira/browse/HDFS-14595
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.2.0, 3.1.2
>Reporter: Wei-Chiu Chuang
>Assignee: Siyao Meng
>Priority: Blocker
> Attachments: HDFS-14595.001.patch, HDFS-14595.002.patch, 
> HDFS-14595.003.patch, hadoop_ 36e1870eab904d5a6f12ecfb1fdb52ca08d95ac5 to 
> b241194d56f97ee372cbec7062bcf155bc3df662 compatibility report.htm
>
>
> Our internal tool caught an API compatibility issue with HDFS-11848.
> HDFS-11848 adds an additional parameter to 
> DistributedFileSystem.listOpenFiles(), but it doesn't keep the existing API.
> This can cause issues when upgrading from Hadoop 2.9.0/2.8.3/3.0.0 to 
> 3.0.1/3.1.0 and above.
> Suggest:
> (1) Add back the old API (which was added in HDFS-10480), and mark it 
> deprecated.
> (2) Update the release doc to enforce running an API compatibility check for 
> each release.
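
For suggestion (1), a minimal sketch of the restored-and-deprecated overload, 
assuming the HDFS-11848 variant takes an {{EnumSet}} of {{OpenFilesType}} (the 
exact trunk signature may differ):
{code:java}
// Sketch only, not the committed patch: restore the zero-arg API for
// source/binary compatibility and delegate to the newer overload.
@Deprecated
public RemoteIterator<OpenFileEntry> listOpenFiles() throws IOException {
  return listOpenFiles(EnumSet.of(OpenFilesType.ALL_OPEN_FILES));
}
{code}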



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14595) HDFS-11848 breaks API compatibility

2019-08-09 Thread Siyao Meng (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14595?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16904332#comment-16904332
 ] 

Siyao Meng commented on HDFS-14595:
---

[~ayushtkn] I believe adding UTs for the new methods should be in another jira. 
Do you mean the old but restored methods?

> HDFS-11848 breaks API compatibility
> ---
>
> Key: HDFS-14595
> URL: https://issues.apache.org/jira/browse/HDFS-14595
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.2.0, 3.1.2
>Reporter: Wei-Chiu Chuang
>Assignee: Siyao Meng
>Priority: Blocker
> Attachments: HDFS-14595.001.patch, HDFS-14595.002.patch, 
> HDFS-14595.003.patch, hadoop_ 36e1870eab904d5a6f12ecfb1fdb52ca08d95ac5 to 
> b241194d56f97ee372cbec7062bcf155bc3df662 compatibility report.htm
>
>
> Our internal tool caught an API compatibility issue with HDFS-11848.
> HDFS-11848 adds an additional parameter to 
> DistributedFileSystem.listOpenFiles(), but it doesn't keep the existing API.
> This can cause issues when upgrading from Hadoop 2.9.0/2.8.3/3.0.0 to 
> 3.0.1/3.1.0 and above.
> Suggest:
> (1) Add back the old API (which was added in HDFS-10480), and mark it 
> deprecated.
> (2) Update the release doc to enforce running an API compatibility check for 
> each release.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14714) RBF: implement getReplicatedBlockStats interface

2019-08-09 Thread Ayush Saxena (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14714?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16904331#comment-16904331
 ] 

Ayush Saxena commented on HDFS-14714:
-

Thanks [~zhangchen], makes sense to have. We have already implemented 
{{getECBlockGroupStats}}; maybe you can do the same for the replication 
counterpart.

> RBF: implement getReplicatedBlockStats interface
> 
>
> Key: HDFS-14714
> URL: https://issues.apache.org/jira/browse/HDFS-14714
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chen Zhang
>Assignee: Chen Zhang
>Priority: Major
>
> It's not implemented now; we sometimes need this interface for cluster monitoring.
> {code:java}
> // current implementation
> public ReplicatedBlockStats getReplicatedBlockStats() throws IOException {
>   rpcServer.checkOperation(NameNode.OperationCategory.READ, false);
>   return null;
> }
> {code}
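
A possible Router-side shape, mirroring the existing {{getECBlockGroupStats}} 
aggregation; the {{invokeConcurrent}} call and the {{ReplicatedBlockStats}} 
accessors below are best-guess assumptions, not verified trunk API:
{code:java}
public ReplicatedBlockStats getReplicatedBlockStats() throws IOException {
  rpcServer.checkOperation(NameNode.OperationCategory.READ);
  // Fan the call out to every downstream namespace.
  RemoteMethod method = new RemoteMethod("getReplicatedBlockStats");
  Set<FederationNamespaceInfo> nss = namenodeResolver.getNamespaces();
  Map<FederationNamespaceInfo, ReplicatedBlockStats> ret =
      rpcClient.invokeConcurrent(nss, method, true, false,
          ReplicatedBlockStats.class);
  // Sum the per-namespace counters into one cluster-wide view.
  long lowRedundancy = 0, corrupt = 0, missing = 0, missingReplOne = 0,
      bytesInFuture = 0, pendingDeletion = 0;
  for (ReplicatedBlockStats stats : ret.values()) {
    lowRedundancy += stats.getLowRedundancyBlocks();
    corrupt += stats.getCorruptBlocks();
    missing += stats.getMissingReplicaBlocks();
    missingReplOne += stats.getMissingReplicationOneBlocks();
    bytesInFuture += stats.getBytesInFutureBlocks();
    pendingDeletion += stats.getPendingDeletionBlocks();
  }
  return new ReplicatedBlockStats(lowRedundancy, corrupt, missing,
      missingReplOne, bytesInFuture, pendingDeletion);
}
{code}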



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1943) TestKeyManagerImpl.testLookupKeyWithLocation is failing

2019-08-09 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1943?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16904330#comment-16904330
 ] 

Hudson commented on HDDS-1943:
--

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17083 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17083/])
HDDS-1943. TestKeyManagerImpl.testLookupKeyWithLocation is failing. (github: 
rev fba222a85603d6321419aa37bcc48d276dd6c4a6)
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/om/TestKeyManagerImpl.java


> TestKeyManagerImpl.testLookupKeyWithLocation is failing
> ---
>
> Key: HDDS-1943
> URL: https://issues.apache.org/jira/browse/HDDS-1943
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> {code}
> [ERROR]   TestKeyManagerImpl.testLookupKeyWithLocation:757 
> expected:<102ad7e3-4226-4966-af79-2b12a56f83cb{ip: 32.53.16.224, host: 
> localhost-32.53.16.224, networkLocation: /default-rack, certSerialId: null}> 
> but was:<...{ip: 238.199.149.19, host: localhost-238.199.149.19, 
> networkLocation: /default-rack, certSerialId: null}>
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14711) RBF: RBFMetrics throws NullPointerException if stateStore disabled

2019-08-09 Thread Ayush Saxena (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16904328#comment-16904328
 ] 

Ayush Saxena commented on HDFS-14711:
-

That's something we discussed at HDFS-14656. IMO we just need to prevent the 
NPE; maybe just put a null check, log, and return.
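
Something like the following, purely as an illustration (the field, getter, 
and helper names here are hypothetical, not the actual RBFMetrics members):
{code:java}
public long getFilesTotal() {
  if (membershipStore == null) {
    // State store disabled or failed to initialize: degrade gracefully
    // instead of surfacing an NPE through the JMX servlet.
    LOG.error("State store unavailable, returning 0 for FilesTotal");
    return 0;
  }
  return getNameserviceAggregatedLong(MembershipStats::getNumOfFiles);
}
{code}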

> RBF: RBFMetrics throws NullPointerException if stateStore disabled
> --
>
> Key: HDFS-14711
> URL: https://issues.apache.org/jira/browse/HDFS-14711
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chen Zhang
>Assignee: Chen Zhang
>Priority: Major
> Attachments: HDFS-14711.001.patch
>
>
> In the current implementation, if {{stateStore}} initialization fails, we only 
> log an error message. Actually RBFMetrics can't work normally in this state.
> {code:java}
> 2019-08-08 22:43:58,024 [qtp812446698-28] ERROR jmx.JMXJsonServlet 
> (JMXJsonServlet.java:writeAttribute(345)) - getting attribute FilesTotal of 
> Hadoop:service=NameNode,name=FSNamesystem-2 threw an exception
> javax.management.RuntimeMBeanException: java.lang.NullPointerException
> at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.rethrow(DefaultMBeanServerInterceptor.java:839)
> at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.rethrowMaybeMBeanException(DefaultMBeanServerInterceptor.java:852)
> at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getAttribute(DefaultMBeanServerInterceptor.java:651)
> at 
> com.sun.jmx.mbeanserver.JmxMBeanServer.getAttribute(JmxMBeanServer.java:678)
> at 
> org.apache.hadoop.jmx.JMXJsonServlet.writeAttribute(JMXJsonServlet.java:338)
> at org.apache.hadoop.jmx.JMXJsonServlet.listBeans(JMXJsonServlet.java:316)
> at org.apache.hadoop.jmx.JMXJsonServlet.doGet(JMXJsonServlet.java:210)
> at javax.servlet.http.HttpServlet.service(HttpServlet.java:687)
> at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
> at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:848)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1772)
> at 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:644)
> at 
> org.apache.hadoop.security.authentication.server.ProxyUserAuthenticationFilter.doFilter(ProxyUserAuthenticationFilter.java:104)
> at 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:592)
> at org.apache.hadoop.hdfs.web.AuthFilter.doFilter(AuthFilter.java:51)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
> at 
> org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter.doFilter(StaticUserWebFilter.java:110)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
> at 
> org.apache.hadoop.http.HttpServer2$QuotingInputFilter.doFilter(HttpServer2.java:1604)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
> at org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
> at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
> at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
> at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
> at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
> at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
> at org.eclipse.jetty.server.Server.handle(Server.java:539)
> at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:333)
> at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251)
> at 
> org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:283)
> at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:108)
> at 
> org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
> at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303)
> at 
> 

[jira] [Commented] (HDFS-14595) HDFS-11848 breaks API compatibility

2019-08-09 Thread Ayush Saxena (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14595?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16904327#comment-16904327
 ] 

Ayush Saxena commented on HDFS-14595:
-

Would be good if we extend a UT to the newly added methods too. Apart from 
that, LGTM.

> HDFS-11848 breaks API compatibility
> ---
>
> Key: HDFS-14595
> URL: https://issues.apache.org/jira/browse/HDFS-14595
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.2.0, 3.1.2
>Reporter: Wei-Chiu Chuang
>Assignee: Siyao Meng
>Priority: Blocker
> Attachments: HDFS-14595.001.patch, HDFS-14595.002.patch, 
> HDFS-14595.003.patch, hadoop_ 36e1870eab904d5a6f12ecfb1fdb52ca08d95ac5 to 
> b241194d56f97ee372cbec7062bcf155bc3df662 compatibility report.htm
>
>
> Our internal tool caught an API compatibility issue with HDFS-11848.
> HDFS-11848 adds an additional parameter to 
> DistributedFileSystem.listOpenFiles(), but it doesn't keep the existing API.
> This can cause issues when upgrading from Hadoop 2.9.0/2.8.3/3.0.0 to 
> 3.0.1/3.1.0 and above.
> Suggest:
> (1) Add back the old API (which was added in HDFS-10480), and mark it 
> deprecated.
> (2) Update the release doc to enforce running an API compatibility check for 
> each release.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1943) TestKeyManagerImpl.testLookupKeyWithLocation is failing

2019-08-09 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1943?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-1943:
-
   Resolution: Fixed
Fix Version/s: 0.5.0
   Status: Resolved  (was: Patch Available)

> TestKeyManagerImpl.testLookupKeyWithLocation is failing
> ---
>
> Key: HDDS-1943
> URL: https://issues.apache.org/jira/browse/HDDS-1943
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> {code}
> [ERROR]   TestKeyManagerImpl.testLookupKeyWithLocation:757 
> expected:<102ad7e3-4226-4966-af79-2b12a56f83cb{ip: 32.53.16.224, host: 
> localhost-32.53.16.224, networkLocation: /default-rack, certSerialId: null}> 
> but was:<...{ip: 238.199.149.19, host: localhost-238.199.149.19, 
> networkLocation: /default-rack, certSerialId: null}>
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1943) TestKeyManagerImpl.testLookupKeyWithLocation is failing

2019-08-09 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1943?focusedWorklogId=292433&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-292433
 ]

ASF GitHub Bot logged work on HDDS-1943:


Author: ASF GitHub Bot
Created on: 10/Aug/19 05:09
Start Date: 10/Aug/19 05:09
Worklog Time Spent: 10m 
  Work Description: xiaoyuyao commented on pull request #1262: HDDS-1943. 
TestKeyManagerImpl.testLookupKeyWithLocation is failing. C…
URL: https://github.com/apache/hadoop/pull/1262
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 292433)
Time Spent: 40m  (was: 0.5h)

> TestKeyManagerImpl.testLookupKeyWithLocation is failing
> ---
>
> Key: HDDS-1943
> URL: https://issues.apache.org/jira/browse/HDDS-1943
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> {code}
> [ERROR]   TestKeyManagerImpl.testLookupKeyWithLocation:757 
> expected:<102ad7e3-4226-4966-af79-2b12a56f83cb{ip: 32.53.16.224, host: 
> localhost-32.53.16.224, networkLocation: /default-rack, certSerialId: null}> 
> but was:<...{ip: 238.199.149.19, host: localhost-238.199.149.19, 
> networkLocation: /default-rack, certSerialId: null}>
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1943) TestKeyManagerImpl.testLookupKeyWithLocation is failing

2019-08-09 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1943?focusedWorklogId=292432&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-292432
 ]

ASF GitHub Bot logged work on HDDS-1943:


Author: ASF GitHub Bot
Created on: 10/Aug/19 05:09
Start Date: 10/Aug/19 05:09
Worklog Time Spent: 10m 
  Work Description: xiaoyuyao commented on issue #1262: HDDS-1943. 
TestKeyManagerImpl.testLookupKeyWithLocation is failing. C…
URL: https://github.com/apache/hadoop/pull/1262#issuecomment-520118912
 
 
   Thanks @bharatviswa504  and @adoroszlai  for the review. I will merge this 
to trunk shortly. 
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 292432)
Time Spent: 0.5h  (was: 20m)

> TestKeyManagerImpl.testLookupKeyWithLocation is failing
> ---
>
> Key: HDDS-1943
> URL: https://issues.apache.org/jira/browse/HDDS-1943
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> {code}
> [ERROR]   TestKeyManagerImpl.testLookupKeyWithLocation:757 
> expected:<102ad7e3-4226-4966-af79-2b12a56f83cb{ip: 32.53.16.224, host: 
> localhost-32.53.16.224, networkLocation: /default-rack, certSerialId: null}> 
> but was:<...{ip: 238.199.149.19, host: localhost-238.199.149.19, 
> networkLocation: /default-rack, certSerialId: null}>
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14450) Erasure Coding: decommissioning datanodes cause replicate a large number of duplicate EC internal blocks

2019-08-09 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14450?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16904312#comment-16904312
 ] 

Wei-Chiu Chuang commented on HDFS-14450:


I am not sure about this fix. This'll definitely need a test case.

> Erasure Coding: decommissioning datanodes cause replicate a large number of 
> duplicate EC internal blocks
> 
>
> Key: HDFS-14450
> URL: https://issues.apache.org/jira/browse/HDFS-14450
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Wu Weiwei
>Assignee: Wu Weiwei
>Priority: Major
> Attachments: HDFS-14450-000.patch
>
>
> {code:java}
> //  [WARN] [RedundancyMonitor] : Failed to place enough replicas, still in 
> need of 2 to reach 167 (unavailableStorages=[DISK, ARCHIVE], 
> storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], 
> creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=false) All 
> required storage types are unavailable:  unavailableStorages=[DISK, ARCHIVE], 
> storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], 
> creationFallbacks=[], replicationFallbacks=[ARCHIVE]}
> {code}
> In a large-scale cluster, decommissioning a large number of datanodes causes 
> EC block groups to replicate a large number of duplicate internal blocks.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14717) Junit not found in hadoop-dynamometer-infra

2019-08-09 Thread Siyao Meng (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16904310#comment-16904310
 ] 

Siyao Meng commented on HDFS-14717:
---

[~jojochuang] [~pingsutw]
Yes, I encountered this before. I worked around it by manually putting the 
JUnit jar back on the Hadoop classpath.
Some extra work needs to be done in the maven profile. I mentioned in 
HDFS-12345 that junit seems to have been intentionally removed from the 
distribution package since 3.2.x.

> Junit not found in hadoop-dynamometer-infra
> ---
>
> Key: HDFS-14717
> URL: https://issues.apache.org/jira/browse/HDFS-14717
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: kevin su
>Priority: Major
>
> {code:java}
> $ hadoop jar hadoop-dynamometer-infra-3.3.0-SNAPSHOT.jar 
> org.apache.hadoop.tools.dynamometer.Client
> {code}
> {code:java}
> Exception in thread "main" java.lang.NoClassDefFoundError: org/junit/Assert
>  at org.apache.hadoop.tools.dynamometer.Client.main(Client.java:256)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498)
>  at org.apache.hadoop.util.RunJar.run(RunJar.java:323)
>  at org.apache.hadoop.util.RunJar.main(RunJar.java:236)
> Caused by: java.lang.ClassNotFoundException: org.junit.Assert
>  at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
>  at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>  at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>  ... 7 more{code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1895) Support Key ACL operations for OM HA.

2019-08-09 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1895?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16904307#comment-16904307
 ] 

Hudson commented on HDDS-1895:
--

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17082 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17082/])
HDDS-1895. Support Key ACL operations for OM HA. (#1230) (arp7: rev 
bd4be6e1682a154b07580b12a48d4e4346cb046e)
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/util/ObjectParser.java
* (edit) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/helpers/OmKeyInfo.java
* (add) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/acl/OMKeyAclRequest.java
* (add) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/acl/OMKeySetAclRequest.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/om/TestOzoneManagerHA.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ratis/utils/OzoneManagerRatisUtils.java
* (add) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/acl/package-info.java
* (add) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/response/key/acl/package-info.java
* (add) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/response/key/acl/OMKeyAclResponse.java
* (add) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/acl/OMKeyRemoveAclRequest.java
* (add) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/acl/OMKeyAddAclRequest.java


> Support Key ACL operations for OM HA.
> -
>
> Key: HDDS-1895
> URL: https://issues.apache.org/jira/browse/HDDS-1895
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 3h 10m
>  Remaining Estimate: 0h
>
> +HDDS-1541+ adds 4 new APIs for the Ozone RPC client. The OM HA implementation 
> needs to handle them.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14717) Junit not found in hadoop-dynamometer-infra

2019-08-09 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16904306#comment-16904306
 ] 

Wei-Chiu Chuang commented on HDFS-14717:


[~smeng] any ideas? I thought you addressed this issue before...?

> Junit not found in hadoop-dynamometer-infra
> ---
>
> Key: HDFS-14717
> URL: https://issues.apache.org/jira/browse/HDFS-14717
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: kevin su
>Priority: Major
>
> {code:java}
> $ hadoop jar hadoop-dynamometer-infra-3.3.0-SNAPSHOT.jar 
> org.apache.hadoop.tools.dynamometer.Client
> {code}
> {code:java}
> Exception in thread "main" java.lang.NoClassDefFoundError: org/junit/Assert
>  at org.apache.hadoop.tools.dynamometer.Client.main(Client.java:256)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498)
>  at org.apache.hadoop.util.RunJar.run(RunJar.java:323)
>  at org.apache.hadoop.util.RunJar.main(RunJar.java:236)
> Caused by: java.lang.ClassNotFoundException: org.junit.Assert
>  at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
>  at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>  at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>  ... 7 more{code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1895) Support Key ACL operations for OM HA.

2019-08-09 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1895?focusedWorklogId=292414&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-292414
 ]

ASF GitHub Bot logged work on HDDS-1895:


Author: ASF GitHub Bot
Created on: 10/Aug/19 03:33
Start Date: 10/Aug/19 03:33
Worklog Time Spent: 10m 
  Work Description: arp7 commented on issue #1230: HDDS-1895. Support Key 
ACL operations for OM HA.
URL: https://github.com/apache/hadoop/pull/1230#issuecomment-520114204
 
 
   I committed this. None of the test failures looks related.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 292414)
Time Spent: 3h 10m  (was: 3h)

> Support Key ACL operations for OM HA.
> -
>
> Key: HDDS-1895
> URL: https://issues.apache.org/jira/browse/HDDS-1895
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 3h 10m
>  Remaining Estimate: 0h
>
> +HDDS-1541+ adds 4 new APIs for the Ozone RPC client. The OM HA implementation 
> needs to handle them.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1895) Support Key ACL operations for OM HA.

2019-08-09 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1895?focusedWorklogId=292413&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-292413
 ]

ASF GitHub Bot logged work on HDDS-1895:


Author: ASF GitHub Bot
Created on: 10/Aug/19 03:32
Start Date: 10/Aug/19 03:32
Worklog Time Spent: 10m 
  Work Description: arp7 commented on pull request #1230: HDDS-1895. 
Support Key ACL operations for OM HA.
URL: https://github.com/apache/hadoop/pull/1230
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 292413)
Time Spent: 3h  (was: 2h 50m)

> Support Key ACL operations for OM HA.
> -
>
> Key: HDDS-1895
> URL: https://issues.apache.org/jira/browse/HDDS-1895
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 3h
>  Remaining Estimate: 0h
>
> +HDDS-1541+ adds 4 new APIs for the Ozone RPC client. The OM HA implementation 
> needs to handle them.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1895) Support Key ACL operations for OM HA.

2019-08-09 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1895?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDDS-1895:

  Resolution: Fixed
   Fix Version/s: 0.5.0
Target Version/s:   (was: 0.5.0)
  Status: Resolved  (was: Patch Available)

+1 committed via Github.

> Support Key ACL operations for OM HA.
> -
>
> Key: HDDS-1895
> URL: https://issues.apache.org/jira/browse/HDDS-1895
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 3h
>  Remaining Estimate: 0h
>
> +HDDS-1541+ adds 4 new APIs for the Ozone RPC client. The OM HA implementation 
> needs to handle them.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14199) make output of "dfs -getfattr -R -d " differentiate folder, file and symbol link

2019-08-09 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16904300#comment-16904300
 ] 

Hadoop QA commented on HDFS-14199:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
46s{color} | {color:blue} Docker mode activated. {color} |
| {color:blue}0{color} | {color:blue} patch {color} | {color:blue}  0m  
2s{color} | {color:blue} The patch file was not named according to hadoop's 
naming conventions. Please see https://wiki.apache.org/hadoop/HowToContribute 
for instructions. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
 1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 18m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 17s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
15s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 21m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 21m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 45s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
22s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 10m 
15s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
53s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}112m 20s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=18.09.7 Server=18.09.7 Image:yetus/hadoop:bdbca0e53b4 |
| JIRA Issue | HDFS-14199 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12954516/HDFS-14199.001 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 6139a3d8656f 4.15.0-52-generic #56-Ubuntu SMP Tue Jun 4 
22:49:08 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / ce3c5a3 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_212 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27464/testReport/ |
| Max. process+thread count | 1345 (vs. ulimit of 5500) |
| modules | C: hadoop-common-project/hadoop-common U: 

[jira] [Work logged] (HDDS-1913) Fix OzoneBucket and RpcClient APIS for acl

2019-08-09 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1913?focusedWorklogId=292390&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-292390
 ]

ASF GitHub Bot logged work on HDDS-1913:


Author: ASF GitHub Bot
Created on: 10/Aug/19 02:00
Start Date: 10/Aug/19 02:00
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1257: HDDS-1913. Fix 
OzoneBucket and RpcClient APIS for acl.
URL: https://github.com/apache/hadoop/pull/1257#issuecomment-520109012
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 73 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 10 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 29 | Maven dependency ordering for branch |
   | +1 | mvninstall | 602 | trunk passed |
   | +1 | compile | 366 | trunk passed |
   | +1 | checkstyle | 72 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 983 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 223 | trunk passed |
   | 0 | spotbugs | 475 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 708 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 27 | Maven dependency ordering for patch |
   | +1 | mvninstall | 643 | the patch passed |
   | +1 | compile | 413 | the patch passed |
   | +1 | cc | 413 | the patch passed |
   | +1 | javac | 413 | the patch passed |
   | +1 | checkstyle | 86 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 803 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 198 | the patch passed |
   | +1 | findbugs | 707 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 356 | hadoop-hdds in the patch passed. |
   | -1 | unit | 2145 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 42 | The patch does not generate ASF License warnings. |
   | | | 8671 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestMultiBlockWritesWithDnFailures |
   |   | hadoop.ozone.client.rpc.TestBlockOutputStreamWithFailures |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | 
hadoop.ozone.container.common.statemachine.commandhandler.TestBlockDeletion |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.om.TestKeyManagerImpl |
   |   | hadoop.hdds.scm.pipeline.TestRatisPipelineCreateAndDestory |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1257/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1257 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle cc |
   | uname | Linux 5dd0fbb2f379 4.15.0-54-generic #58-Ubuntu SMP Mon Jun 24 
10:55:24 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / ce3c5a3 |
   | Default Java | 1.8.0_222 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1257/3/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1257/3/testReport/ |
   | Max. process+thread count | 4909 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/common hadoop-ozone/client 
hadoop-ozone/ozone-manager hadoop-ozone/objectstore-service 
hadoop-ozone/s3gateway hadoop-ozone/integration-test U: hadoop-ozone |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1257/3/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 292390)
Time Spent: 2.5h  (was: 2h 20m)

> Fix OzoneBucket and RpcClient APIS for acl
> --
>
> Key: HDDS-1913
>  

[jira] [Commented] (HDFS-2470) NN should automatically set permissions on dfs.namenode.*.dir

2019-08-09 Thread Siddharth Wagle (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-2470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16904296#comment-16904296
 ] 

Siddharth Wagle commented on HDFS-2470:
---

[~eyang] Thanks for the review comments; I will address the suggestions in the 
follow-up patch. Regarding the working directory comment: the curDir is 
actually /tmp/namenode/current. Hence, we set the permissions on both 
/tmp/namenode/ and /tmp/namenode/current.

I do see your point though: can it actually end up being /tmp/current? I need 
to investigate further.
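
For context, a self-contained sketch of what setting those permissions could 
look like (the paths and the 0700 mode are illustrative, not the patch):
{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.LocalFileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;

public class StorageDirPermissions {
  public static void main(String[] args) throws Exception {
    LocalFileSystem localFs = FileSystem.getLocal(new Configuration());
    FsPermission perm = new FsPermission((short) 0700);
    Path storageDir = new Path("/tmp/namenode");
    // Both the storage root and its "current" child get the permission,
    // which is why curDir's parent is handled explicitly.
    localFs.setPermission(storageDir, perm);
    localFs.setPermission(new Path(storageDir, "current"), perm);
  }
}
{code}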

> NN should automatically set permissions on dfs.namenode.*.dir
> -
>
> Key: HDFS-2470
> URL: https://issues.apache.org/jira/browse/HDFS-2470
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.0.0-alpha
>Reporter: Aaron T. Myers
>Assignee: Siddharth Wagle
>Priority: Major
> Attachments: HDFS-2470.01.patch, HDFS-2470.02.patch, 
> HDFS-2470.03.patch, HDFS-2470.04.patch, HDFS-2470.05.patch
>
>
> Much as the DN currently sets the correct permissions for the 
> dfs.datanode.data.dir, the NN should do the same for the 
> dfs.namenode.(name|edit).dir.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13359) DataXceiver hung due to the lock in FsDatasetImpl#getBlockInputStream

2019-08-09 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13359?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16904295#comment-16904295
 ] 

Hudson commented on HDFS-13359:
---

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17081 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17081/])
HDFS-13359. DataXceiver hung due to the lock in (weichiu: rev 
8a77a224c734bea0eb490f30c908748458c190c3)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java


> DataXceiver hung due to the lock in FsDatasetImpl#getBlockInputStream
> -
>
> Key: HDFS-13359
> URL: https://issues.apache.org/jira/browse/HDFS-13359
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.7.1
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Major
> Fix For: 3.3.0, 3.2.1, 3.1.3
>
> Attachments: HDFS-13359.001.patch, stack.jpg
>
>
> DataXceiver hung due to the lock taken by 
>  {{FsDatasetImpl#getBlockInputStream}} (stack trace attached).
> {code:java}
>   @Override // FsDatasetSpi
>   public InputStream getBlockInputStream(ExtendedBlock b,
>   long seekOffset) throws IOException {
> ReplicaInfo info;
> synchronized(this) {
>   info = volumeMap.get(b.getBlockPoolId(), b.getLocalBlock());
> }
> ...
>   }
> {code}
> The lock {{synchronized(this)}} used here is expensive; there is already an 
> {{AutoCloseableLock}}-type lock defined for {{ReplicaMap}}. We can use it 
> instead.
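
As a standalone illustration of the suggested pattern (not the committed 
change), swapping the object monitor for an {{AutoCloseableLock}} looks like:
{code:java}
import java.util.HashMap;
import java.util.Map;
import org.apache.hadoop.util.AutoCloseableLock;

// Toy stand-in for ReplicaMap, which guards its map the same way.
class VolumeMap {
  private final AutoCloseableLock lock = new AutoCloseableLock();
  private final Map<String, String> replicas = new HashMap<>();

  String get(String blockId) {
    // try-with-resources releases the lock on every exit path, and
    // readers no longer serialize on the enclosing object's monitor.
    try (AutoCloseableLock l = lock.acquire()) {
      return replicas.get(blockId);
    }
  }
}
{code}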



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13359) DataXceiver hung due to the lock in FsDatasetImpl#getBlockInputStream

2019-08-09 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13359?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-13359:
---
   Resolution: Fixed
Fix Version/s: 3.1.3
   3.2.1
   3.3.0
   Status: Resolved  (was: Patch Available)

Thanks [~linyiqun]!

> DataXceiver hung due to the lock in FsDatasetImpl#getBlockInputStream
> -
>
> Key: HDFS-13359
> URL: https://issues.apache.org/jira/browse/HDFS-13359
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.7.1
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Major
> Fix For: 3.3.0, 3.2.1, 3.1.3
>
> Attachments: HDFS-13359.001.patch, stack.jpg
>
>
> DataXceiver hung due to the lock taken by 
>  {{FsDatasetImpl#getBlockInputStream}} (stack trace attached).
> {code:java}
>   @Override // FsDatasetSpi
>   public InputStream getBlockInputStream(ExtendedBlock b,
>   long seekOffset) throws IOException {
> ReplicaInfo info;
> synchronized(this) {
>   info = volumeMap.get(b.getBlockPoolId(), b.getLocalBlock());
> }
> ...
>   }
> {code}
> The lock {{synchronized(this)}} used here is expensive; there is already an 
> {{AutoCloseableLock}}-type lock defined for {{ReplicaMap}}. We can use it 
> instead.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13359) DataXceiver hung due to the lock in FsDatasetImpl#getBlockInputStream

2019-08-09 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13359?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16904290#comment-16904290
 ] 

Wei-Chiu Chuang commented on HDFS-13359:


Patch still applies.
+1. I think this is a good improvement regardless. Didn't mean to stall the 
patch.


> DataXceiver hung due to the lock in FsDatasetImpl#getBlockInputStream
> -
>
> Key: HDFS-13359
> URL: https://issues.apache.org/jira/browse/HDFS-13359
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.7.1
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Major
> Attachments: HDFS-13359.001.patch, stack.jpg
>
>
> DataXceiver hung due to the lock taken by 
>  {{FsDatasetImpl#getBlockInputStream}} (stack trace attached).
> {code:java}
>   @Override // FsDatasetSpi
>   public InputStream getBlockInputStream(ExtendedBlock b,
>   long seekOffset) throws IOException {
> ReplicaInfo info;
> synchronized(this) {
>   info = volumeMap.get(b.getBlockPoolId(), b.getLocalBlock());
> }
> ...
>   }
> {code}
> The lock {{synchronized(this)}} used here is expensive; there is already an 
> {{AutoCloseableLock}}-type lock defined for {{ReplicaMap}}. We can use it 
> instead.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-14717) Junit not found in hadoop-dynamometer-infra

2019-08-09 Thread kevin su (JIRA)
kevin su created HDFS-14717:
---

 Summary: Junit not found in hadoop-dynamometer-infra
 Key: HDFS-14717
 URL: https://issues.apache.org/jira/browse/HDFS-14717
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: kevin su


{code}
hadoop jar hadoop-dynamometer-infra-3.3.0-SNAPSHOT.jar 
org.apache.hadoop.tools.dynamometer.Client
{code}
{code}
Exception in thread "main" java.lang.NoClassDefFoundError: org/junit/Assert
 at org.apache.hadoop.tools.dynamometer.Client.main(Client.java:256)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
 at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:498)
 at org.apache.hadoop.util.RunJar.run(RunJar.java:323)
 at org.apache.hadoop.util.RunJar.main(RunJar.java:236)
Caused by: java.lang.ClassNotFoundException: org.junit.Assert
 at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
 at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
 at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
 ... 7 more{code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14717) Junit not found in hadoop-dynamometer-infra

2019-08-09 Thread kevin su (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14717?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

kevin su updated HDFS-14717:

Description: 
{code:java}
$ hadoop jar hadoop-dynamometer-infra-3.3.0-SNAPSHOT.jar 
org.apache.hadoop.tools.dynamometer.Client
{code}
{code:java}
Exception in thread "main" java.lang.NoClassDefFoundError: org/junit/Assert
 at org.apache.hadoop.tools.dynamometer.Client.main(Client.java:256)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
 at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:498)
 at org.apache.hadoop.util.RunJar.run(RunJar.java:323)
 at org.apache.hadoop.util.RunJar.main(RunJar.java:236)
Caused by: java.lang.ClassNotFoundException: org.junit.Assert
 at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
 at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
 at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
 ... 7 more{code}

  was:
{code}
hadoop jar hadoop-dynamometer-infra-3.3.0-SNAPSHOT.jar 
org.apache.hadoop.tools.dynamometer.Client
{code}
{code}
Exception in thread "main" java.lang.NoClassDefFoundError: org/junit/Assert
 at org.apache.hadoop.tools.dynamometer.Client.main(Client.java:256)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
 at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:498)
 at org.apache.hadoop.util.RunJar.run(RunJar.java:323)
 at org.apache.hadoop.util.RunJar.main(RunJar.java:236)
Caused by: java.lang.ClassNotFoundException: org.junit.Assert
 at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
 at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
 at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
 ... 7 more{code}


> Junit not found in hadoop-dynamometer-infra
> ---
>
> Key: HDFS-14717
> URL: https://issues.apache.org/jira/browse/HDFS-14717
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: kevin su
>Priority: Major
>
> {code:java}
> $ hadoop jar hadoop-dynamometer-infra-3.3.0-SNAPSHOT.jar 
> org.apache.hadoop.tools.dynamometer.Client
> {code}
> {code:java}
> Exception in thread "main" java.lang.NoClassDefFoundError: org/junit/Assert
>  at org.apache.hadoop.tools.dynamometer.Client.main(Client.java:256)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498)
>  at org.apache.hadoop.util.RunJar.run(RunJar.java:323)
>  at org.apache.hadoop.util.RunJar.main(RunJar.java:236)
> Caused by: java.lang.ClassNotFoundException: org.junit.Assert
>  at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
>  at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>  at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>  ... 7 more{code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12125) Document the missing EC removePolicy command

2019-08-09 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-12125?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16904284#comment-16904284
 ] 

Hudson commented on HDFS-12125:
---

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17080 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17080/])
HDFS-12125. Document the missing EC removePolicy command (#1258) (weichiu: rev 
e02ffed1b12fa2659f1390d2ae5389eec6b0e35f)
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSErasureCoding.md
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSCommands.md


> Document the missing EC removePolicy command
> 
>
> Key: HDFS-12125
> URL: https://issues.apache.org/jira/browse/HDFS-12125
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation, erasure-coding
>Affects Versions: 3.0.0-alpha4
>Reporter: Wenxin He
>Assignee: Siyao Meng
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-12125.001.patch
>
>
> Document the missing command -removePolicy in HDFSErasureCoding.md and 
> HDFSCommands.md.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1927) Consolidate add/remove Acl into OzoneAclUtil class

2019-08-09 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1927?focusedWorklogId=292381&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-292381
 ]

ASF GitHub Bot logged work on HDDS-1927:


Author: ASF GitHub Bot
Created on: 10/Aug/19 01:04
Start Date: 10/Aug/19 01:04
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1263: HDDS-1927. 
Consolidate add/remove Acl into OzoneAclUtil class. Contri…
URL: https://github.com/apache/hadoop/pull/1263#issuecomment-520105263
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 73 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 5 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 33 | Maven dependency ordering for branch |
   | +1 | mvninstall | 588 | trunk passed |
   | +1 | compile | 356 | trunk passed |
   | +1 | checkstyle | 70 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 904 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 160 | trunk passed |
   | 0 | spotbugs | 417 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 610 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 20 | Maven dependency ordering for patch |
   | +1 | mvninstall | 544 | the patch passed |
   | +1 | compile | 362 | the patch passed |
   | +1 | javac | 362 | the patch passed |
   | -0 | checkstyle | 40 | hadoop-ozone: The patch generated 7 new + 0 
unchanged - 0 fixed = 7 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 727 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 124 | hadoop-ozone generated 2 new + 13 unchanged - 0 fixed 
= 15 total (was 13) |
   | +1 | findbugs | 722 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 355 | hadoop-hdds in the patch passed. |
   | -1 | unit | 2006 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 45 | The patch does not generate ASF License warnings. |
   | | | 8051 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.ozone.client.rpc.TestMultiBlockWritesWithDnFailures |
   |   | hadoop.ozone.om.TestKeyManagerImpl |
   |   | hadoop.ozone.client.rpc.TestBlockOutputStreamWithFailures |
   |   | hadoop.hdds.scm.pipeline.TestRatisPipelineCreateAndDestory |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1263/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1263 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 312c1a2195d3 4.15.0-48-generic #51-Ubuntu SMP Wed Apr 3 
08:28:49 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / ce3c5a3 |
   | Default Java | 1.8.0_212 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1263/3/artifact/out/diff-checkstyle-hadoop-ozone.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1263/3/artifact/out/diff-javadoc-javadoc-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1263/3/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1263/3/testReport/ |
   | Max. process+thread count | 5412 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/common hadoop-ozone/client 
hadoop-ozone/ozone-manager hadoop-ozone/objectstore-service 
hadoop-ozone/integration-test U: hadoop-ozone |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1263/3/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 292381)
Time Spent: 40m  (was: 0.5h)

> Consolidate add/remove Acl into OzoneAclUtil class
> 

[jira] [Updated] (HDFS-12125) Document the missing EC removePolicy command

2019-08-09 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-12125?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-12125:
---
   Resolution: Fixed
Fix Version/s: 3.3.0
   Status: Resolved  (was: Patch Available)

I merged the PR. Thanks [~smeng] [~vincent he] for the patch and [~ayushtkn] 
for the review!

> Document the missing EC removePolicy command
> 
>
> Key: HDFS-12125
> URL: https://issues.apache.org/jira/browse/HDFS-12125
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation, erasure-coding
>Affects Versions: 3.0.0-alpha4
>Reporter: Wenxin He
>Assignee: Siyao Meng
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-12125.001.patch
>
>
> Document the missing command -removePolicy in HDFSErasureCoding.md and 
> HDFSCommands.md.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1895) Support Key ACL operations for OM HA.

2019-08-09 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1895?focusedWorklogId=292380=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-292380
 ]

ASF GitHub Bot logged work on HDDS-1895:


Author: ASF GitHub Bot
Created on: 10/Aug/19 00:53
Start Date: 10/Aug/19 00:53
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1230: HDDS-1895. 
Support Key ACL operations for OM HA.
URL: https://github.com/apache/hadoop/pull/1230#issuecomment-520104312
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 82 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 38 | Maven dependency ordering for branch |
   | +1 | mvninstall | 624 | trunk passed |
   | +1 | compile | 378 | trunk passed |
   | +1 | checkstyle | 75 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 963 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 173 | trunk passed |
   | 0 | spotbugs | 466 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 684 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 21 | Maven dependency ordering for patch |
   | +1 | mvninstall | 583 | the patch passed |
   | +1 | compile | 478 | the patch passed |
   | +1 | javac | 478 | the patch passed |
   | +1 | checkstyle | 92 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 888 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 215 | the patch passed |
   | +1 | findbugs | 770 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 421 | hadoop-hdds in the patch passed. |
   | -1 | unit | 3537 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 50 | The patch does not generate ASF License warnings. |
   | | | 10264 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.ozone.client.rpc.TestReadRetries |
   |   | hadoop.ozone.scm.TestSCMNodeManagerMXBean |
   |   | hadoop.ozone.scm.TestSCMMXBean |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.hdds.scm.pipeline.TestRatisPipelineCreateAndDestory |
   |   | hadoop.ozone.client.rpc.TestCloseContainerHandlingByClient |
   |   | hadoop.hdds.scm.container.TestContainerStateManagerIntegration |
   |   | hadoop.ozone.client.rpc.TestMultiBlockWritesWithDnFailures |
   |   | hadoop.ozone.web.client.TestKeysRatis |
   |   | hadoop.ozone.client.rpc.TestKeyInputStream |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1230/6/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1230 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux ab345089517b 4.15.0-54-generic #58-Ubuntu SMP Mon Jun 24 
10:55:24 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 98dd7c4 |
   | Default Java | 1.8.0_222 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1230/6/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1230/6/testReport/ |
   | Max. process+thread count | 3462 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/common hadoop-ozone/ozone-manager 
hadoop-ozone/integration-test U: hadoop-ozone |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1230/6/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 292380)
Time Spent: 2h 50m  (was: 2h 40m)

> Support Key ACL operations for OM HA.
> -
>
> Key: HDDS-1895
>

[jira] [Updated] (HDFS-14655) [SBN Read] Namenode crashes if one of The JN is down

2019-08-09 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14655?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-14655:
---
Summary: [SBN Read] Namenode crashes if one of The JN is down  (was: SBN : 
Namenode crashes if one of The JN is down)

> [SBN Read] Namenode crashes if one of The JN is down
> 
>
> Key: HDFS-14655
> URL: https://issues.apache.org/jira/browse/HDFS-14655
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Harshakiran Reddy
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-14655.poc.patch
>
>
> {noformat}
> 2019-07-04 17:35:54,064 | INFO  | Logger channel (from parallel executor) to 
> XXX/XXX | Retrying connect to server: XXX/XXX. Already tried 
> 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, 
> sleepTime=1000 MILLISECONDS) | Client.java:975
> 2019-07-04 17:35:54,087 | FATAL | Edit log tailer | Unknown error encountered 
> while tailing edits. Shutting down standby NN. | EditLogTailer.java:474
> java.lang.OutOfMemoryError: unable to create new native thread
>   at java.lang.Thread.start0(Native Method)
>   at java.lang.Thread.start(Thread.java:717)
>   at 
> java.util.concurrent.ThreadPoolExecutor.addWorker(ThreadPoolExecutor.java:957)
>   at 
> java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1378)
>   at 
> com.google.common.util.concurrent.MoreExecutors$ListeningDecorator.execute(MoreExecutors.java:440)
>   at 
> com.google.common.util.concurrent.AbstractListeningExecutorService.submit(AbstractListeningExecutorService.java:56)
>   at 
> org.apache.hadoop.hdfs.qjournal.client.IPCLoggerChannel.getJournaledEdits(IPCLoggerChannel.java:565)
>   at 
> org.apache.hadoop.hdfs.qjournal.client.AsyncLoggerSet.getJournaledEdits(AsyncLoggerSet.java:272)
>   at 
> org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.selectRpcInputStreams(QuorumJournalManager.java:533)
>   at 
> org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.selectInputStreams(QuorumJournalManager.java:508)
>   at 
> org.apache.hadoop.hdfs.server.namenode.JournalSet.selectInputStreams(JournalSet.java:275)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.selectInputStreams(FSEditLog.java:1681)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.selectInputStreams(FSEditLog.java:1714)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer.doTailEdits(EditLogTailer.java:307)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.doWork(EditLogTailer.java:460)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.access$300(EditLogTailer.java:410)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread$1.run(EditLogTailer.java:427)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:360)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1709)
>   at 
> org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:483)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.run(EditLogTailer.java:423)
> 2019-07-04 17:35:54,112 | INFO  | Edit log tailer | Exiting with status 1: 
> java.lang.OutOfMemoryError: unable to create new native thread | 
> ExitUtil.java:210
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14199) make output of "dfs -getfattr -R -d " differentiate folder, file and symbol link

2019-08-09 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16904276#comment-16904276
 ] 

Wei-Chiu Chuang commented on HDFS-14199:


Thanks for proposing the change and contributing the patch, [~ZangLin].
Hadoop has a pretty strict compatibility guideline, in which CLI output should 
not change within a major version: 
https://hadoop.apache.org/docs/r3.2.0/hadoop-project-dist/hadoop-common/AdminCompatibilityGuide.html#CLIs
Can we make this output optional? Say, add a -t option that prints this file 
type information.
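
For illustration, a rough sketch of the helper such a -t option could use. Only 
FileStatus is a real Hadoop API here; the -t flag and typeLabel() are 
hypothetical:

{code:java}
import org.apache.hadoop.fs.FileStatus;

// Hypothetical helper for an opt-in "-t" flag; typeLabel() is
// illustrative only, FileStatus is the real Hadoop API.
public final class XAttrTypeLabel {
  private XAttrTypeLabel() {
  }

  /** Prefix for -t output: "dir:", "symlink:" or "file:". */
  static String typeLabel(FileStatus status) {
    if (status.isDirectory()) {
      return "dir:";
    }
    if (status.isSymlink()) {
      return "symlink:";
    }
    return "file:";
  }
}
{code}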

> make output of "dfs  -getfattr -R -d " differentiate folder, file and symbol 
> link
> -
>
> Key: HDFS-14199
> URL: https://issues.apache.org/jira/browse/HDFS-14199
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: shell
>Affects Versions: 3.2.0, 3.3.0
>Reporter: Zang Lin
>Assignee: Zang Lin
>Priority: Minor
> Attachments: HDFS-14199.001
>
>
> The current output of "hdfs dfs -getfattr -R -d" prints every type of file 
> with "file:"; it doesn't differentiate types such as folder and symbolic link.
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-14199) make output of "dfs -getfattr -R -d " differentiate folder, file and symbol link

2019-08-09 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14199?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang reassigned HDFS-14199:
--

Assignee: Zang Lin

> make output of "dfs  -getfattr -R -d " differentiate folder, file and symbol 
> link
> -
>
> Key: HDFS-14199
> URL: https://issues.apache.org/jira/browse/HDFS-14199
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: shell
>Affects Versions: 3.2.0, 3.3.0
>Reporter: Zang Lin
>Assignee: Zang Lin
>Priority: Minor
> Attachments: HDFS-14199.001
>
>
> The current output of "hdfs dfs -getfattr -R -d" prints every type of file 
> with "file:"; it doesn't differentiate types such as folder and symbolic link.
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14199) make output of "dfs -getfattr -R -d " differentiate folder, file and symbol link

2019-08-09 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14199?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-14199:
---
Status: Patch Available  (was: Open)

> make output of "dfs  -getfattr -R -d " differentiate folder, file and symbol 
> link
> -
>
> Key: HDFS-14199
> URL: https://issues.apache.org/jira/browse/HDFS-14199
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: shell
>Affects Versions: 3.2.0, 3.3.0
>Reporter: Zang Lin
>Priority: Minor
> Attachments: HDFS-14199.001
>
>
> The current output of "hdfs dfs -getfattr -R -d" prints every type of file 
> with "file:"; it doesn't differentiate types such as folder and symbolic link.
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1927) Consolidate add/remove Acl into OzoneAclUtil class

2019-08-09 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1927?focusedWorklogId=292375=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-292375
 ]

ASF GitHub Bot logged work on HDDS-1927:


Author: ASF GitHub Bot
Created on: 10/Aug/19 00:40
Start Date: 10/Aug/19 00:40
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1263: HDDS-1927. 
Consolidate add/remove Acl into OzoneAclUtil class. Contri…
URL: https://github.com/apache/hadoop/pull/1263#issuecomment-520103222
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 45 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 5 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 32 | Maven dependency ordering for branch |
   | +1 | mvninstall | 610 | trunk passed |
   | +1 | compile | 372 | trunk passed |
   | +1 | checkstyle | 81 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 884 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 168 | trunk passed |
   | 0 | spotbugs | 414 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 611 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 27 | Maven dependency ordering for patch |
   | +1 | mvninstall | 555 | the patch passed |
   | +1 | compile | 379 | the patch passed |
   | +1 | javac | 379 | the patch passed |
   | -0 | checkstyle | 42 | hadoop-ozone: The patch generated 7 new + 0 
unchanged - 0 fixed = 7 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 677 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 93 | hadoop-ozone generated 2 new + 13 unchanged - 0 fixed 
= 15 total (was 13) |
   | +1 | findbugs | 640 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 292 | hadoop-hdds in the patch passed. |
   | -1 | unit | 1705 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 54 | The patch does not generate ASF License warnings. |
   | | | 7564 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.ozone.TestMiniChaosOzoneCluster |
   |   | hadoop.ozone.om.TestKeyManagerImpl |
   |   | hadoop.ozone.client.rpc.TestOzoneClientRetriesOnException |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1263/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1263 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 2dfcc121f70d 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 98dd7c4 |
   | Default Java | 1.8.0_212 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1263/2/artifact/out/diff-checkstyle-hadoop-ozone.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1263/2/artifact/out/diff-javadoc-javadoc-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1263/2/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1263/2/testReport/ |
   | Max. process+thread count | 4089 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/common hadoop-ozone/client 
hadoop-ozone/ozone-manager hadoop-ozone/objectstore-service 
hadoop-ozone/integration-test U: hadoop-ozone |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1263/2/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 292375)
Time Spent: 0.5h  (was: 20m)

> Consolidate add/remove Acl into OzoneAclUtil class
> 

[jira] [Commented] (HDDS-1947) fix naming issue for ScmBlockLocationTestingClient

2019-08-09 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1947?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16904274#comment-16904274
 ] 

Hadoop QA commented on HDDS-1947:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m 
25s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:red}-1{color} | {color:red} dupname {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 duplicated filenames that differ only 
in case. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/PreCommit-HDDS-Build/2764/artifact/out/Dockerfile 
|
| JIRA Issue | HDDS-1947 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12968644/HDFS-14489.patch |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite 
unit shadedclient findbugs checkstyle |
| uname | Linux 445a9742883c 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / ce3c5a3 |
| dupname | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2764/artifact/out/dupnames.txt
 |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2764/console |
| versions | git=2.7.4 maven=3.3.9 |
| Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |


This message was automatically generated.



> fix naming issue for ScmBlockLocationTestingClient
> --
>
> Key: HDDS-1947
> URL: https://issues.apache.org/jira/browse/HDDS-1947
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: star
>Priority: Major
> Attachments: HDFS-14489.patch
>
>
> The class 'ScmBlockLocationTestIngClient' is not named in proper CamelCase 
> form. Rename it to ScmBlockLocationTestingClient.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14523) Remove excess read lock for NetworkToplogy

2019-08-09 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16904270#comment-16904270
 ] 

Wei-Chiu Chuang commented on HDFS-14523:


Patch still applies. [~vagarychen], any chance you can take a look?

> Remove excess read lock for NetworkToplogy
> --
>
> Key: HDFS-14523
> URL: https://issues.apache.org/jira/browse/HDFS-14523
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Wu Weiwei
>Assignee: Wu Weiwei
>Priority: Major
> Attachments: HDFS-14523.1.patch
>
>
> getNumOfRacks() and getNumOfLeaves() are two frequently called methods in 
> BlockPlacementPolicy. Both need to take the NetworkTopology read lock, and 
> taking a lock on such hot call paths may impact NameNode performance.
> These two methods fetch the number of racks and the number of leaves only for 
> the chooseTarget calculation; holding the lock inside them cannot guarantee 
> that the two values stay unchanged in the subsequent calculations.
> I think it's safe to remove the read lock from these two methods.
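
As a minimal sketch of the idea above (not the actual NetworkTopology code), a 
volatile read returns a possibly stale but atomic snapshot without the lock:

{code:java}
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Illustrative sketch only; names mirror the discussion, not the real
// NetworkTopology implementation.
class TopologySketch {
  private final ReadWriteLock netlock = new ReentrantReadWriteLock();
  private volatile int numOfRacks;

  // Before: every caller pays for the read lock.
  int getNumOfRacksLocked() {
    netlock.readLock().lock();
    try {
      return numOfRacks;
    } finally {
      netlock.readLock().unlock();
    }
  }

  // After: a plain volatile read. The value may be stale, but that is
  // equally true with the lock, since it can change right after the
  // lock is released.
  int getNumOfRacks() {
    return numOfRacks;
  }
}
{code}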



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14204) Backport HDFS-12943 to branch-2

2019-08-09 Thread Chen Liang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14204?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16904269#comment-16904269
 ] 

Chen Liang commented on HDFS-14204:
---

Thanks for the review [~shv]! I've committed the v007 patch to branch-2.

> Backport HDFS-12943 to branch-2
> ---
>
> Key: HDFS-14204
> URL: https://issues.apache.org/jira/browse/HDFS-14204
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Major
> Attachments: HDFS-14204-branch-2.001.patch, 
> HDFS-14204-branch-2.002.patch, HDFS-14204-branch-2.003.patch, 
> HDFS-14204-branch-2.004.patch, HDFS-14204-branch-2.005.patch, 
> HDFS-14204-branch-2.006.patch, HDFS-14204-branch-2.007.patch
>
>
> Currently, the consistent read from standby feature (HDFS-12943) is only in 
> trunk (branch-3). This JIRA aims to backport the feature to branch-2.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-1947) fix naming issue for ScmBlockLocationTestingClient

2019-08-09 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1947?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang reassigned HDDS-1947:
-

 Assignee: (was: star)
Affects Version/s: (was: HDFS-7240)
  Component/s: (was: ozone)
 Workflow: patch-available, re-open possible  (was: 
no-reopen-closed, patch-avail)
   Issue Type: Improvement  (was: Bug)
  Key: HDDS-1947  (was: HDFS-14489)
  Project: Hadoop Distributed Data Store  (was: Hadoop HDFS)

> fix naming issue for ScmBlockLocationTestingClient
> --
>
> Key: HDDS-1947
> URL: https://issues.apache.org/jira/browse/HDDS-1947
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: star
>Priority: Major
> Attachments: HDFS-14489.patch
>
>
> The class 'ScmBlockLocationTestIngClient' is not named in proper CamelCase 
> form. Rename it to ScmBlockLocationTestingClient.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14715) RBF: Fix RBF failed tests

2019-08-09 Thread Chen Zhang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14715?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16904268#comment-16904268
 ] 

Chen Zhang commented on HDFS-14715:
---

Thanks [~crh] and [~elgoiri], I'll work on HDFS-14609.

> RBF: Fix RBF failed tests
> -
>
> Key: HDFS-14715
> URL: https://issues.apache.org/jira/browse/HDFS-14715
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chen Zhang
>Assignee: Chen Zhang
>Priority: Major
>
> including:
> hadoop.hdfs.server.federation.router.TestRouterWithSecureStartup
> hadoop.hdfs.server.federation.security.TestRouterHttpDelegationToken



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14450) Erasure Coding: decommissioning datanodes cause replicate a large number of duplicate EC internal blocks

2019-08-09 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14450?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16904266#comment-16904266
 ] 

Hadoop QA commented on HDFS-14450:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 24m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m  5s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 47s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 82m  9s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
31s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}144m 44s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.datanode.TestLargeBlockReport |
|   | hadoop.hdfs.server.diskbalancer.command.TestDiskBalancerCommand |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.1 Server=19.03.1 Image:yetus/hadoop:bdbca0e53b4 |
| JIRA Issue | HDFS-14450 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12976739/HDFS-14450-000.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux f53dfd2b3477 4.15.0-54-generic #58-Ubuntu SMP Mon Jun 24 
10:55:24 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 98dd7c4 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_222 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27459/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27459/testReport/ |
| Max. process+thread count | 3530 (vs. ulimit of 5500) |
| modules | C: 

[jira] [Updated] (HDFS-14711) RBF: RBFMetrics throws NullPointerException if stateStore disabled

2019-08-09 Thread Chen Zhang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14711?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Zhang updated HDFS-14711:
--
Summary: RBF: RBFMetrics throws NullPointerException if stateStore disabled 
 (was: RBF: RBFMetrics should throw RuntimeException if stateStore initialized 
failed)

> RBF: RBFMetrics throws NullPointerException if stateStore disabled
> --
>
> Key: HDFS-14711
> URL: https://issues.apache.org/jira/browse/HDFS-14711
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chen Zhang
>Assignee: Chen Zhang
>Priority: Major
> Attachments: HDFS-14711.001.patch
>
>
> In the current implementation, if \{{stateStore}} initialization fails, we 
> only log an error message. RBFMetrics actually can't work normally in this state.
> {code:java}
> 2019-08-08 22:43:58,024 [qtp812446698-28] ERROR jmx.JMXJsonServlet 
> (JMXJsonServlet.java:writeAttribute(345)) - getting attribute FilesTotal of 
> Hadoop:service=NameNode,name=FSNamesystem-2 threw an exception
> javax.management.RuntimeMBeanException: java.lang.NullPointerException
> at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.rethrow(DefaultMBeanServerInterceptor.java:839)
> at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.rethrowMaybeMBeanException(DefaultMBeanServerInterceptor.java:852)
> at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getAttribute(DefaultMBeanServerInterceptor.java:651)
> at 
> com.sun.jmx.mbeanserver.JmxMBeanServer.getAttribute(JmxMBeanServer.java:678)
> at 
> org.apache.hadoop.jmx.JMXJsonServlet.writeAttribute(JMXJsonServlet.java:338)
> at org.apache.hadoop.jmx.JMXJsonServlet.listBeans(JMXJsonServlet.java:316)
> at org.apache.hadoop.jmx.JMXJsonServlet.doGet(JMXJsonServlet.java:210)
> at javax.servlet.http.HttpServlet.service(HttpServlet.java:687)
> at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
> at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:848)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1772)
> at 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:644)
> at 
> org.apache.hadoop.security.authentication.server.ProxyUserAuthenticationFilter.doFilter(ProxyUserAuthenticationFilter.java:104)
> at 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:592)
> at org.apache.hadoop.hdfs.web.AuthFilter.doFilter(AuthFilter.java:51)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
> at 
> org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter.doFilter(StaticUserWebFilter.java:110)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
> at 
> org.apache.hadoop.http.HttpServer2$QuotingInputFilter.doFilter(HttpServer2.java:1604)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
> at org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
> at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
> at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
> at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
> at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
> at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
> at org.eclipse.jetty.server.Server.handle(Server.java:539)
> at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:333)
> at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251)
> at 
> org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:283)
> at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:108)
> at 
> org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
> at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303)
> at 
> 

[jira] [Commented] (HDFS-14711) RBF: RBFMetrics should throw RuntimeException if stateStore initialized failed

2019-08-09 Thread Chen Zhang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16904265#comment-16904265
 ] 

Chen Zhang commented on HDFS-14711:
---

Thanks [~elgoiri] [~ayushtkn] for your comments. If the State Store is 
optional, should we handle the initialization of RBFMetrics in a better way? At 
least in tests that disable the State Store, the console output is full of NPE 
exceptions.
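
One possible direction, sketched here with stand-in names rather than the real 
RBFMetrics internals, is to degrade gracefully when the State Store is disabled:

{code:java}
// Illustrative sketch only; "stateStore" and getFilesTotal() stand in
// for the real RBFMetrics members discussed above.
class RbfMetricsSketch {
  private final Object stateStore; // null when the State Store is disabled

  RbfMetricsSketch(Object stateStore) {
    this.stateStore = stateStore;
  }

  long getFilesTotal() {
    if (stateStore == null) {
      // Degrade gracefully instead of letting JMX surface an NPE.
      return 0L;
    }
    return queryStateStore();
  }

  private long queryStateStore() {
    return 42L; // placeholder for the real State Store query
  }
}
{code}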

> RBF: RBFMetrics should throw RuntimeException if stateStore initialized failed
> --
>
> Key: HDFS-14711
> URL: https://issues.apache.org/jira/browse/HDFS-14711
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chen Zhang
>Assignee: Chen Zhang
>Priority: Major
> Attachments: HDFS-14711.001.patch
>
>
> In the current implementation, if \{{stateStore}} initialization fails, we 
> only log an error message. RBFMetrics actually can't work normally in this state.
> {code:java}
> 2019-08-08 22:43:58,024 [qtp812446698-28] ERROR jmx.JMXJsonServlet 
> (JMXJsonServlet.java:writeAttribute(345)) - getting attribute FilesTotal of 
> Hadoop:service=NameNode,name=FSNamesystem-2 threw an exception
> javax.management.RuntimeMBeanException: java.lang.NullPointerException
> at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.rethrow(DefaultMBeanServerInterceptor.java:839)
> at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.rethrowMaybeMBeanException(DefaultMBeanServerInterceptor.java:852)
> at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getAttribute(DefaultMBeanServerInterceptor.java:651)
> at 
> com.sun.jmx.mbeanserver.JmxMBeanServer.getAttribute(JmxMBeanServer.java:678)
> at 
> org.apache.hadoop.jmx.JMXJsonServlet.writeAttribute(JMXJsonServlet.java:338)
> at org.apache.hadoop.jmx.JMXJsonServlet.listBeans(JMXJsonServlet.java:316)
> at org.apache.hadoop.jmx.JMXJsonServlet.doGet(JMXJsonServlet.java:210)
> at javax.servlet.http.HttpServlet.service(HttpServlet.java:687)
> at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
> at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:848)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1772)
> at 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:644)
> at 
> org.apache.hadoop.security.authentication.server.ProxyUserAuthenticationFilter.doFilter(ProxyUserAuthenticationFilter.java:104)
> at 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:592)
> at org.apache.hadoop.hdfs.web.AuthFilter.doFilter(AuthFilter.java:51)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
> at 
> org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter.doFilter(StaticUserWebFilter.java:110)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
> at 
> org.apache.hadoop.http.HttpServer2$QuotingInputFilter.doFilter(HttpServer2.java:1604)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
> at org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
> at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
> at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
> at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
> at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
> at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
> at org.eclipse.jetty.server.Server.handle(Server.java:539)
> at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:333)
> at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251)
> at 
> org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:283)
> at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:108)
> at 
> org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
> at 
> 

[jira] [Commented] (HDFS-14423) Percent (%) and plus (+) characters no longer work in WebHDFS

2019-08-09 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16904263#comment-16904263
 ] 

Hadoop QA commented on HDFS-14423:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
27s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 35s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 57s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 82m 11s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
37s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}138m 58s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.datanode.TestLargeBlockReport |
|   | hadoop.hdfs.TestRollingUpgrade |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.1 Server=19.03.1 Image:yetus/hadoop:bdbca0e |
| JIRA Issue | HDFS-14423 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12977193/HDFS-14423.001.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux b228ccd437e0 4.4.0-157-generic #185-Ubuntu SMP Tue Jul 23 
09:17:01 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 98dd7c4 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_212 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27462/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27462/testReport/ |
| Max. process+thread count | 4084 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27462/console |
| Powered by | Apache Yetus 0.8.0  

[jira] [Created] (HDDS-1946) CertificateClient should not persist keys/certs to ozone.metadata.dir

2019-08-09 Thread Xiaoyu Yao (JIRA)
Xiaoyu Yao created HDDS-1946:


 Summary: CertificateClient should not persist keys/certs to 
ozone.metadata.dir
 Key: HDDS-1946
 URL: https://issues.apache.org/jira/browse/HDDS-1946
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
Reporter: Xiaoyu Yao
Assignee: Vivek Ratnavel Subramanian


For example, when OM and SCM are deployed on the same host with 
ozone.metadata.dir defined, SCM can start successfully but OM cannot, because 
the key/cert from OM will collide with SCM's.
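
One way to avoid the collision, sketched with a made-up helper (not an existing 
CertificateClient API), is to derive a per-component location instead of 
persisting directly under the shared ozone.metadata.dir:

{code:java}
import java.nio.file.Path;
import java.nio.file.Paths;

// Illustrative sketch; componentCertDir() is hypothetical, not an
// existing CertificateClient API.
final class CertDirSketch {
  private CertDirSketch() {
  }

  /** e.g. {metadataDir}/scm/certs vs. {metadataDir}/om/certs. */
  static Path componentCertDir(String metadataDir, String component) {
    return Paths.get(metadataDir, component, "certs");
  }
}
{code}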



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1895) Support Key ACL operations for OM HA.

2019-08-09 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1895?focusedWorklogId=292356=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-292356
 ]

ASF GitHub Bot logged work on HDDS-1895:


Author: ASF GitHub Bot
Created on: 10/Aug/19 00:05
Start Date: 10/Aug/19 00:05
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1230: HDDS-1895. 
Support Key ACL operations for OM HA.
URL: https://github.com/apache/hadoop/pull/1230#issuecomment-520099530
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 41 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 16 | Maven dependency ordering for branch |
   | +1 | mvninstall | 598 | trunk passed |
   | +1 | compile | 374 | trunk passed |
   | +1 | checkstyle | 74 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 867 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 166 | trunk passed |
   | 0 | spotbugs | 423 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 619 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 25 | Maven dependency ordering for patch |
   | +1 | mvninstall | 567 | the patch passed |
   | +1 | compile | 380 | the patch passed |
   | +1 | javac | 380 | the patch passed |
   | +1 | checkstyle | 82 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 681 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 172 | the patch passed |
   | +1 | findbugs | 641 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 297 | hadoop-hdds in the patch passed. |
   | -1 | unit | 1764 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 54 | The patch does not generate ASF License warnings. |
   | | | 7608 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestCommitWatcher |
   |   | hadoop.ozone.client.rpc.TestMultiBlockWritesWithDnFailures |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.hdds.scm.pipeline.TestRatisPipelineProvider |
   |   | hadoop.ozone.client.rpc.TestBlockOutputStreamWithFailures |
   |   | hadoop.ozone.om.TestKeyManagerImpl |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1230/5/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1230 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 11ed57c004ba 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 98dd7c4 |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1230/5/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1230/5/testReport/ |
   | Max. process+thread count | 5365 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/common hadoop-ozone/ozone-manager 
hadoop-ozone/integration-test U: hadoop-ozone |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1230/5/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 292356)
Time Spent: 2h 40m  (was: 2.5h)

> Support Key ACL operations for OM HA.
> -
>
> Key: HDDS-1895
> URL: https://issues.apache.org/jira/browse/HDDS-1895
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: 

[jira] [Updated] (HDDS-1945) Fix CreateBucket API in RpcClient

2019-08-09 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1945?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-1945:
-
Issue Type: Sub-task  (was: Bug)
Parent: HDDS-1927

> Fix CreateBucket API in RpcClient
> -
>
> Key: HDDS-1945
> URL: https://issues.apache.org/jira/browse/HDDS-1945
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Priority: Major
>
> When adding ACLs with list.addAll, the ACLs added can be duplicated. Consider 
> ACLs such as:
> (USER,ozone,R,DEFAULT), (USER,ozone,W,DEFAULT).
> We can merge them into a single entry before adding to the list, but the 
> default list.addAll will not do that. This will be fixed after HDDS-1927. 
> Thank you @xiaoyu for reporting this during the HDDS-1913 review.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-1945) Fix CreateBucket API in RpcClient

2019-08-09 Thread Bharat Viswanadham (JIRA)
Bharat Viswanadham created HDDS-1945:


 Summary: Fix CreateBucket API in RpcClient
 Key: HDDS-1945
 URL: https://issues.apache.org/jira/browse/HDDS-1945
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Bharat Viswanadham


When adding ACLs with list.addAll, the ACLs added can be duplicated. Consider 
ACLs such as:

(USER,ozone,R,DEFAULT), (USER,ozone,W,DEFAULT).

We can merge them into a single entry before adding to the list, but the 
default list.addAll will not do that. This will be fixed after HDDS-1927. Thank 
you @xiaoyu for reporting this during the HDDS-1913 review.
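
A minimal sketch of the merge idea (string-based stand-ins, not the actual 
OzoneAclUtil code): group ACLs by (type, name, scope) and union the rights 
instead of blindly calling list.addAll:

{code:java}
import java.util.ArrayList;
import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Illustrative only; the String[] entries {type, name, right, scope}
// stand in for the real OzoneAcl objects.
final class AclMergeSketch {
  private AclMergeSketch() {
  }

  /** Merges (USER,ozone,R,DEFAULT) and (USER,ozone,W,DEFAULT) into a
   *  single (USER,ozone,RW,DEFAULT) entry. */
  static List<String> merge(List<String[]> acls) {
    // key = "type,name,scope", value = accumulated rights such as "RW"
    Map<String, StringBuilder> byIdentity = new LinkedHashMap<>();
    for (String[] acl : acls) {
      String key = acl[0] + "," + acl[1] + "," + acl[3];
      StringBuilder rights =
          byIdentity.computeIfAbsent(key, k -> new StringBuilder());
      if (rights.indexOf(acl[2]) < 0) { // also de-duplicates rights
        rights.append(acl[2]);
      }
    }
    List<String> merged = new ArrayList<>();
    for (Map.Entry<String, StringBuilder> e : byIdentity.entrySet()) {
      String[] id = e.getKey().split(",");
      merged.add(id[0] + "," + id[1] + "," + e.getValue() + "," + id[2]);
    }
    return merged;
  }

  public static void main(String[] args) {
    List<String[]> acls = Arrays.asList(
        new String[] {"USER", "ozone", "R", "DEFAULT"},
        new String[] {"USER", "ozone", "W", "DEFAULT"});
    System.out.println(merge(acls)); // [USER,ozone,RW,DEFAULT]
  }
}
{code}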



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-1944) Update document for Ozone HTTP SPNEGO authentication

2019-08-09 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1944?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao reassigned HDDS-1944:


Assignee: Xiaoyu Yao

> Update document for Ozone HTTP SPNEGO authentication
> 
>
> Key: HDDS-1944
> URL: https://issues.apache.org/jira/browse/HDDS-1944
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Affects Versions: 0.4.0
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-1944) Update document for Ozone HTTP SPNEGO authentication

2019-08-09 Thread Xiaoyu Yao (JIRA)
Xiaoyu Yao created HDDS-1944:


 Summary: Update document for Ozone HTTP SPNEGO authentication
 Key: HDDS-1944
 URL: https://issues.apache.org/jira/browse/HDDS-1944
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
Affects Versions: 0.4.0
Reporter: Xiaoyu Yao






--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Reopened] (HDFS-12914) Block report leases cause missing blocks until next report

2019-08-09 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-12914?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang reopened HDFS-12914:


Reopening. Thanks for reporting the issue. I'll start with reverts in 
branch-2/branch-2.9.

> Block report leases cause missing blocks until next report
> --
>
> Key: HDFS-12914
> URL: https://issues.apache.org/jira/browse/HDFS-12914
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.8.0, 2.9.2
>Reporter: Daryn Sharp
>Assignee: Santosh Marella
>Priority: Critical
> Fix For: 2.10.0, 3.0.4, 3.3.0, 3.2.1, 2.9.3, 3.1.3
>
> Attachments: HDFS-12914-branch-2.001.patch, 
> HDFS-12914-trunk.00.patch, HDFS-12914-trunk.01.patch, HDFS-12914.005.patch, 
> HDFS-12914.006.patch, HDFS-12914.007.patch, HDFS-12914.008.patch, 
> HDFS-12914.009.patch, HDFS-12914.branch-2.000.patch, 
> HDFS-12914.branch-2.001.patch, HDFS-12914.branch-2.002.patch, 
> HDFS-12914.branch-2.8.001.patch, HDFS-12914.branch-2.8.002.patch, 
> HDFS-12914.branch-2.patch, HDFS-12914.branch-3.0.patch, 
> HDFS-12914.branch-3.1.001.patch, HDFS-12914.branch-3.1.002.patch, 
> HDFS-12914.branch-3.2.patch, HDFS-12914.utfix.patch
>
>
> {{BlockReportLeaseManager#checkLease}} will reject FBRs from DNs for 
> conditions such as "unknown datanode", "not in pending set", "lease has 
> expired", a wrong lease id, etc. Lease rejection does not throw an exception; 
> it returns false, which bubbles up to {{NameNodeRpcServer#blockReport}} and is 
> interpreted as {{noStaleStorages}}.
> A re-registering node whose FBR is rejected due to an invalid lease becomes 
> active with _no blocks_. A replication storm ensues, possibly causing DNs to 
> temporarily go dead (HDFS-12645), leading to more FBR lease rejections on 
> re-registration. The cluster will have many "missing blocks" until the DN's 
> next FBR is sent and/or forced.
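
For illustration, a minimal sketch of the boolean conflation described above; 
the names are stand-ins for BlockReportLeaseManager / NameNodeRpcServer 
internals, not the actual code:

{code:java}
// Illustrative sketch of the failure mode described above.
class LeaseConflationSketch {

  /** Rejection ("unknown datanode", expired lease, wrong id, ...) is
   *  signalled as a plain false instead of an exception. */
  boolean checkLease(long leaseId) {
    return false;
  }

  void blockReport(long leaseId) {
    if (!checkLease(leaseId)) {
      // The rejected FBR is silently skipped; the re-registered DN stays
      // "active" with no blocks until its next FBR, which is the
      // missing-blocks window called out above.
      return;
    }
    // ... process the full block report ...
  }
}
{code}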



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1913) Fix OzoneBucket and RpcClient APIS for acl

2019-08-09 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1913?focusedWorklogId=292336=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-292336
 ]

ASF GitHub Bot logged work on HDDS-1913:


Author: ASF GitHub Bot
Created on: 09/Aug/19 23:37
Start Date: 09/Aug/19 23:37
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on issue #1257: HDDS-1913. Fix 
OzoneBucket and RpcClient APIS for acl.
URL: https://github.com/apache/hadoop/pull/1257#issuecomment-520095907
 
 
   /retest
 



Issue Time Tracking
---

Worklog Id: (was: 292336)
Time Spent: 2h 10m  (was: 2h)

> Fix OzoneBucket and RpcClient APIS for acl
> --
>
> Key: HDDS-1913
> URL: https://issues.apache.org/jira/browse/HDDS-1913
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 2h 10m
>  Remaining Estimate: 0h
>
> Fix addAcl/removeAcl in OzoneBucket to use the ACL APIs addAcl/removeAcl 
> newly added as part of HDDS-1739.
> Remove addBucketAcls and removeBucketAcls from RpcClient. We should use 
> addAcl/removeAcl instead.
>  
> Also address @xiaoyu's comment on the HDDS-1900 jira: do callers of 
> BucketManagerImpl#setBucketProperty() need fixing now that it requires a 
> different permission (WRITE_ACL instead of WRITE)?






[jira] [Work logged] (HDDS-1366) Add ability in Recon to track the number of small files in an Ozone cluster.

2019-08-09 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1366?focusedWorklogId=292337&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-292337
 ]

ASF GitHub Bot logged work on HDDS-1366:


Author: ASF GitHub Bot
Created on: 09/Aug/19 23:37
Start Date: 09/Aug/19 23:37
Worklog Time Spent: 10m 
  Work Description: avijayanhwx commented on issue #1146: HDDS-1366. Add 
ability in Recon to track the number of small files in an Ozone Cluster
URL: https://github.com/apache/hadoop/pull/1146#issuecomment-520095919
 
 
   LGTM +1
 



Issue Time Tracking
---

Worklog Id: (was: 292337)
Time Spent: 11h 40m  (was: 11.5h)

> Add ability in Recon to track the number of small files in an Ozone cluster.
> 
>
> Key: HDDS-1366
> URL: https://issues.apache.org/jira/browse/HDDS-1366
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Recon
>Reporter: Aravindan Vijayan
>Assignee: Shweta
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 11h 40m
>  Remaining Estimate: 0h
>
> Ozone users may want to track the number of small files they have in their 
> cluster and where they are present. Recon can provide this information by 
> iterating over the OM Key Table and dividing the keys into different buckets 
> based on their data size. 
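
As a rough illustration of the bucketing idea (the class name, bucket bounds, 
and types below are hypothetical, not the Recon implementation):
{code:java}
import java.util.TreeMap;

// Hypothetical sketch: count keys per power-of-two size bucket while
// iterating key sizes, the way a small-file histogram could be built.
public class FileSizeHistogram {
  private final TreeMap<Long, Long> buckets = new TreeMap<>();

  public FileSizeHistogram() {
    // bucket upper bounds from 1 KB up to 1 TB (illustrative choice)
    for (long bound = 1024L; bound <= (1L << 40); bound <<= 1) {
      buckets.put(bound, 0L);
    }
  }

  public void add(long dataSize) {
    Long bound = buckets.ceilingKey(dataSize);
    if (bound != null) {
      buckets.merge(bound, 1L, Long::sum); // increment the matching bucket
    }
  }

  public static void main(String[] args) {
    FileSizeHistogram h = new FileSizeHistogram();
    h.add(500);        // falls into the 1 KB bucket
    h.add(3_000_000);  // falls into the 4 MB bucket
    System.out.println(h.buckets);
  }
}
{code}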






[jira] [Work logged] (HDDS-1913) Fix OzoneBucket and RpcClient APIS for acl

2019-08-09 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1913?focusedWorklogId=292335&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-292335
 ]

ASF GitHub Bot logged work on HDDS-1913:


Author: ASF GitHub Bot
Created on: 09/Aug/19 23:35
Start Date: 09/Aug/19 23:35
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #1257: 
HDDS-1913. Fix OzoneBucket and RpcClient APIS for acl.
URL: https://github.com/apache/hadoop/pull/1257#discussion_r312676316
 
 

 ##
 File path: 
hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/rpc/RpcClient.java
 ##
 @@ -388,7 +388,9 @@ public void deleteVolume(String volumeName) throws 
IOException {
   @Override
   public void createBucket(String volumeName, String bucketName)
   throws IOException {
-createBucket(volumeName, bucketName, BucketArgs.newBuilder().build());
+// Set acls of current user.
+createBucket(volumeName, bucketName,
+BucketArgs.newBuilder().setAcls(getAclList()).build());
 
 Review comment:
   Done.
 



Issue Time Tracking
---

Worklog Id: (was: 292335)
Time Spent: 2h  (was: 1h 50m)

> Fix OzoneBucket and RpcClient APIS for acl
> --
>
> Key: HDDS-1913
> URL: https://issues.apache.org/jira/browse/HDDS-1913
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> Fix addAcl/removeAcl in OzoneBucket to use the ACL APIs addAcl/removeAcl 
> newly added as part of HDDS-1739.
> Remove addBucketAcls and removeBucketAcls from RpcClient. We should use 
> addAcl/removeAcl instead.
>  
> Also address @xiaoyu's comment on the HDDS-1900 jira: do callers of 
> BucketManagerImpl#setBucketProperty() need fixing now that it requires a 
> different permission (WRITE_ACL instead of WRITE)?






[jira] [Commented] (HDFS-14523) Remove excess read lock for NetworkToplogy

2019-08-09 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16904249#comment-16904249
 ] 

Hadoop QA commented on HDFS-14523:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m  
5s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 34s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
4s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 14m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 43s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
43s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
40s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 96m  3s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=18.09.7 Server=18.09.7 Image:yetus/hadoop:bdbca0e53b4 |
| JIRA Issue | HDFS-14523 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12970307/HDFS-14523.1.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux ff4d0ccf4a9a 4.15.0-52-generic #56-Ubuntu SMP Tue Jun 4 
22:49:08 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 98dd7c4 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_212 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27457/testReport/ |
| Max. process+thread count | 1343 (vs. ulimit of 5500) |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27457/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> Remove excess read lock for NetworkToplogy
> 

[jira] [Commented] (HDFS-12125) Document the missing EC removePolicy command

2019-08-09 Thread Siyao Meng (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-12125?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16904246#comment-16904246
 ] 

Siyao Meng commented on HDFS-12125:
---

Thanks for the review [~ayushtkn] [~jojochuang].

> Document the missing EC removePolicy command
> 
>
> Key: HDFS-12125
> URL: https://issues.apache.org/jira/browse/HDFS-12125
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation, erasure-coding
>Affects Versions: 3.0.0-alpha4
>Reporter: Wenxin He
>Assignee: Siyao Meng
>Priority: Major
> Attachments: HDFS-12125.001.patch
>
>
> Document the missing command -removePolicy in HDFSErasureCoding.md and 
> HDFSCommands.md.






[jira] [Work logged] (HDDS-1913) Fix OzoneBucket and RpcClient APIS for acl

2019-08-09 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1913?focusedWorklogId=292334&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-292334
 ]

ASF GitHub Bot logged work on HDDS-1913:


Author: ASF GitHub Bot
Created on: 09/Aug/19 23:27
Start Date: 09/Aug/19 23:27
Worklog Time Spent: 10m 
  Work Description: xiaoyuyao commented on pull request #1257: HDDS-1913. 
Fix OzoneBucket and RpcClient APIS for acl.
URL: https://github.com/apache/hadoop/pull/1257#discussion_r312675211
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/bucket/OMBucketSetPropertyRequest.java
 ##
 @@ -134,17 +132,6 @@ public OMClientResponse 
validateAndUpdateCache(OzoneManager ozoneManager,
   bucketInfoBuilder.addAllMetadata(KeyValueUtil
   .getFromProtobuf(bucketArgs.getMetadataList()));
 
-  //Check ACLs to update
-  if (omBucketArgs.getAddAcls() != null ||
-  omBucketArgs.getRemoveAcls() != null) {
-bucketInfoBuilder.setAcls(getUpdatedAclList(oldBucketInfo.getAcls(),
-omBucketArgs.getRemoveAcls(), omBucketArgs.getAddAcls()));
-LOG.debug("Updating ACLs for bucket: {} in volume: {}",
-bucketName, volumeName);
-  } else {
-bucketInfoBuilder.setAcls(oldBucketInfo.getAcls());
 
 Review comment:
   We have a similar problem here: because OMBucketArgs does not carry acls, we 
will need to rely on oldBucketInfo.getAcls to avoid resetting the existing 
acls on the bucket.
   
   bucketInfoBuilder.setAcls(oldBucketInfo.getAcls());
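
   A self-contained sketch of that fallback, with String stand-ins for the 
real OzoneAcl/OmBucketInfo types (assumed shape, not the committed fix):
   {code}
   import java.util.List;

   // Illustrative only: a property-only update that carries no ACLs must
   // not wipe the ACLs already stored on the bucket.
   public class AclFallbackSketch {
     static List<String> resolveAcls(List<String> requestAcls,
         List<String> storedAcls) {
       return (requestAcls == null || requestAcls.isEmpty())
           ? storedAcls    // keep what the bucket already has
           : requestAcls;  // the request explicitly sets ACLs
     }

     public static void main(String[] args) {
       List<String> stored = List.of("user:alice:rw");
       // null request ACLs: the stored ACLs survive the update
       System.out.println(resolveAcls(null, stored)); // [user:alice:rw]
     }
   }
   {code}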
 



Issue Time Tracking
---

Worklog Id: (was: 292334)
Time Spent: 1h 50m  (was: 1h 40m)

> Fix OzoneBucket and RpcClient APIS for acl
> --
>
> Key: HDDS-1913
> URL: https://issues.apache.org/jira/browse/HDDS-1913
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> Fix addAcl/removeAcl in OzoneBucket to use the ACL APIs addAcl/removeAcl 
> newly added as part of HDDS-1739.
> Remove addBucketAcls and removeBucketAcls from RpcClient. We should use 
> addAcl/removeAcl instead.
>  
> Also address @xiaoyu's comment on the HDDS-1900 jira: do callers of 
> BucketManagerImpl#setBucketProperty() need fixing now that it requires a 
> different permission (WRITE_ACL instead of WRITE)?






[jira] [Work logged] (HDDS-1913) Fix OzoneBucket and RpcClient APIS for acl

2019-08-09 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1913?focusedWorklogId=292333&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-292333
 ]

ASF GitHub Bot logged work on HDDS-1913:


Author: ASF GitHub Bot
Created on: 09/Aug/19 23:27
Start Date: 09/Aug/19 23:27
Worklog Time Spent: 10m 
  Work Description: xiaoyuyao commented on pull request #1257: HDDS-1913. 
Fix OzoneBucket and RpcClient APIS for acl.
URL: https://github.com/apache/hadoop/pull/1257#discussion_r312675211
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/bucket/OMBucketSetPropertyRequest.java
 ##
 @@ -134,17 +132,6 @@ public OMClientResponse 
validateAndUpdateCache(OzoneManager ozoneManager,
   bucketInfoBuilder.addAllMetadata(KeyValueUtil
   .getFromProtobuf(bucketArgs.getMetadataList()));
 
-  //Check ACLs to update
-  if (omBucketArgs.getAddAcls() != null ||
-  omBucketArgs.getRemoveAcls() != null) {
-bucketInfoBuilder.setAcls(getUpdatedAclList(oldBucketInfo.getAcls(),
-omBucketArgs.getRemoveAcls(), omBucketArgs.getAddAcls()));
-LOG.debug("Updating ACLs for bucket: {} in volume: {}",
-bucketName, volumeName);
-  } else {
-bucketInfoBuilder.setAcls(oldBucketInfo.getAcls());
 
 Review comment:
   We have a similar problem here: because OMBucketArgs does not carry acls, we 
will need to rely on oldBucketInfo.getAcls to avoid resetting the existing 
acls on the bucket.
   
   bucketInfoBuilder.setAcls(oldBucketInfo.getAcls());
 



Issue Time Tracking
---

Worklog Id: (was: 292333)
Time Spent: 1h 40m  (was: 1.5h)

> Fix OzoneBucket and RpcClient APIS for acl
> --
>
> Key: HDDS-1913
> URL: https://issues.apache.org/jira/browse/HDDS-1913
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> Fix addAcl/removeAcl in OzoneBucket to use the ACL APIs addAcl/removeAcl 
> newly added as part of HDDS-1739.
> Remove addBucketAcls and removeBucketAcls from RpcClient. We should use 
> addAcl/removeAcl instead.
>  
> Also address @xiaoyu's comment on the HDDS-1900 jira: do callers of 
> BucketManagerImpl#setBucketProperty() need fixing now that it requires a 
> different permission (WRITE_ACL instead of WRITE)?






[jira] [Work logged] (HDDS-1913) Fix OzoneBucket and RpcClient APIS for acl

2019-08-09 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1913?focusedWorklogId=292332&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-292332
 ]

ASF GitHub Bot logged work on HDDS-1913:


Author: ASF GitHub Bot
Created on: 09/Aug/19 23:20
Start Date: 09/Aug/19 23:20
Worklog Time Spent: 10m 
  Work Description: xiaoyuyao commented on pull request #1257: HDDS-1913. 
Fix OzoneBucket and RpcClient APIS for acl.
URL: https://github.com/apache/hadoop/pull/1257#discussion_r312674369
 
 

 ##
 File path: 
hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/rpc/RpcClient.java
 ##
 @@ -388,7 +388,9 @@ public void deleteVolume(String volumeName) throws 
IOException {
   @Override
   public void createBucket(String volumeName, String bucketName)
   throws IOException {
-createBucket(volumeName, bucketName, BucketArgs.newBuilder().build());
+// Set acls of current user.
+createBucket(volumeName, bucketName,
+BucketArgs.newBuilder().setAcls(getAclList()).build());
 
 Review comment:
   This will cause the creator's acls to be added twice into the final list, 
because the same list will be added again when the passed-in bucketArgs has a 
non-empty acl list. 
   {code}
   List<OzoneAcl> listOfAcls = getAclList();
   // ACLs from BucketArgs
   if (bucketArgs.getAcls() != null) {
 listOfAcls.addAll(bucketArgs.getAcls());
   }
   {code}
   
   The lists of acls are not merged properly when using List#addAll, which 
will be fixed in HDDS-1927. 
   Let's file a separate JIRA for the RpcClient#createBucket issue. 
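
   A self-contained illustration of the double-add, using String stand-ins 
for OzoneAcl (the values are hypothetical):
   {code}
   import java.util.ArrayList;
   import java.util.LinkedHashSet;
   import java.util.List;

   public class AclMergeSketch {
     public static void main(String[] args) {
       List<String> defaults = new ArrayList<>(List.of("user:alice:rw"));
       List<String> fromArgs = List.of("user:alice:rw", "group:dev:r");

       // List#addAll concatenates, so the creator's ACL appears twice
       defaults.addAll(fromArgs);
       System.out.println(defaults); // [user:alice:rw, user:alice:rw, group:dev:r]

       // A set-based merge keeps each ACL once
       List<String> merged = new ArrayList<>(new LinkedHashSet<>(defaults));
       System.out.println(merged);   // [user:alice:rw, group:dev:r]
     }
   }
   {code}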
 



Issue Time Tracking
---

Worklog Id: (was: 292332)
Time Spent: 1.5h  (was: 1h 20m)

> Fix OzoneBucket and RpcClient APIS for acl
> --
>
> Key: HDDS-1913
> URL: https://issues.apache.org/jira/browse/HDDS-1913
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> Fix addAcl/removeAcl in OzoneBucket to use the ACL APIs addAcl/removeAcl 
> newly added as part of HDDS-1739.
> Remove addBucketAcls and removeBucketAcls from RpcClient. We should use 
> addAcl/removeAcl instead.
>  
> Also address @xiaoyu's comment on the HDDS-1900 jira: do callers of 
> BucketManagerImpl#setBucketProperty() need fixing now that it requires a 
> different permission (WRITE_ACL instead of WRITE)?






[jira] [Commented] (HDFS-14195) OIV: print out storage policy id in oiv Delimited output

2019-08-09 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14195?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16904234#comment-16904234
 ] 

Hudson commented on HDFS-14195:
---

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17079 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17079/])
HDFS-14195. OIV: print out storage policy id in oiv Delimited output. (weichiu: 
rev 865021b8c96ae96940ca060faae87452b433d970)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockStoragePolicySuite.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/PBImageTextWriter.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/PBImageDelimitedTextWriter.java
* (add) hadoop-hdfs-project/hadoop-hdfs/src/test/resources/testStoragePolicy.csv
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/TestOfflineImageViewer.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/OfflineImageViewerPB.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/DFSTestUtil.java
* (add) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/TestOfflineImageViewerForStoragePolicy.java


> OIV: print out storage policy id in oiv Delimited output
> 
>
> Key: HDFS-14195
> URL: https://issues.apache.org/jira/browse/HDFS-14195
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: tools
>Reporter: Wang, Xinglong
>Assignee: Wang, Xinglong
>Priority: Minor
> Fix For: 3.3.0
>
> Attachments: HDFS-14195.001.patch, HDFS-14195.002.patch, 
> HDFS-14195.003.patch, HDFS-14195.004.patch, HDFS-14195.005.patch, 
> HDFS-14195.006.patch, HDFS-14195.007.patch, HDFS-14195.008.patch, 
> HDFS-14195.009.patch, HDFS-14195.010.patch
>
>
> There is no method to get all folders and files with a specified storage 
> policy via the command line, e.g. the ALL_SSD type.
> Adding the storage policy id to the oiv output helps oiv post-analysis: it 
> gives an overview of all folders/files with a specified storage policy and 
> lets internal regulation be applied based on this information.
>  
> Currently, for PBImageXmlWriter.java, HDFS-9835 already added a function to 
> print out xattrs, which include the storage policy.
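
For reference, a plausible invocation once this lands (the fsimage and output 
file names are hypothetical; the -sp flag comes from this patch, the other 
flags are the standard oiv options):
{noformat}
hdfs oiv -p Delimited -sp -i fsimage_0000000000000000042 -o fsimage.out
{noformat}
The extra storage policy id column in the delimited output can then be 
filtered, e.g. to list all paths stored with the ALL_SSD policy.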






[jira] [Commented] (HDFS-14623) In NameNode Web UI, for Head the file (first 32K) old data is showing

2019-08-09 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14623?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16904235#comment-16904235
 ] 

Hudson commented on HDFS-14623:
---

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17079 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17079/])
HDFS-14623. In NameNode Web UI, for Head the file (first 32K) old data 
(weichiu: rev ce3c5a3e3bf6acac514de6d9c1dd6786520a)
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/explorer.js


> In NameNode Web UI, for Head the file (first 32K) old data is showing
> -
>
> Key: HDFS-14623
> URL: https://issues.apache.org/jira/browse/HDFS-14623
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: hemanthboyina
>Assignee: hemanthboyina
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14623.001.patch, HDFS-14623.patch, afterfix.JPG, 
> beforefix.JPG
>
>
> In the NameNode Web UI, "Head the file (first 32K)" shows stale data: after 
> opening multiple files and clicking "Head the file", the wrong data is shown.
> Scenario: uploaded a NameNode log and a ZKFC log, clicked "Head the file" on 
> the NameNode log multiple times, then went to the ZKFC log and clicked 
> "Head the file"; the wrong data was shown.






[jira] [Work logged] (HDDS-1927) Consolidate add/remove Acl into OzoneAclUtil class

2019-08-09 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1927?focusedWorklogId=292315&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-292315
 ]

ASF GitHub Bot logged work on HDDS-1927:


Author: ASF GitHub Bot
Created on: 09/Aug/19 22:50
Start Date: 09/Aug/19 22:50
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1263: HDDS-1927. 
Consolidate add/remove Acl into OzoneAclUtil class. Contri…
URL: https://github.com/apache/hadoop/pull/1263#issuecomment-520088237
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 45 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 5 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 14 | Maven dependency ordering for branch |
   | +1 | mvninstall | 588 | trunk passed |
   | +1 | compile | 362 | trunk passed |
   | +1 | checkstyle | 71 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 845 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 148 | trunk passed |
   | 0 | spotbugs | 422 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 616 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 18 | Maven dependency ordering for patch |
   | +1 | mvninstall | 533 | the patch passed |
   | +1 | compile | 355 | the patch passed |
   | +1 | javac | 355 | the patch passed |
   | -0 | checkstyle | 32 | hadoop-ozone: The patch generated 7 new + 0 
unchanged - 0 fixed = 7 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 603 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 85 | hadoop-ozone generated 2 new + 13 unchanged - 0 fixed 
= 15 total (was 13) |
   | +1 | findbugs | 656 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 295 | hadoop-hdds in the patch passed. |
   | -1 | unit | 1982 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 41 | The patch does not generate ASF License warnings. |
   | | | 7544 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.om.TestKeyManagerImpl |
   |   | hadoop.ozone.client.rpc.TestMultiBlockWritesWithDnFailures |
   |   | hadoop.ozone.om.TestOzoneManagerHA |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1263/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1263 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 2fad9f5b2750 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 98dd7c4 |
   | Default Java | 1.8.0_212 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1263/1/artifact/out/diff-checkstyle-hadoop-ozone.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1263/1/artifact/out/diff-javadoc-javadoc-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1263/1/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1263/1/testReport/ |
   | Max. process+thread count | 4981 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/common hadoop-ozone/client 
hadoop-ozone/ozone-manager hadoop-ozone/objectstore-service 
hadoop-ozone/integration-test U: hadoop-ozone |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1263/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 



Issue Time Tracking
---

Worklog Id: (was: 292315)
Time Spent: 20m  (was: 10m)

> Consolidate add/remove Acl into OzoneAclUtil class
> --
>
> Key: HDDS-1927
> 

[jira] [Commented] (HDFS-14375) DataNode cannot serve BlockPool to multiple NameNodes in the different realm

2019-08-09 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14375?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16904232#comment-16904232
 ] 

Eric Yang commented on HDFS-14375:
--

This looks like a KDC configuration issue with cross-realm trust.  Please 
verify that the krbtgt/test1@test2.com principal has been added for 
cross-realm trust to work, and vice versa for bi-directional trust.  You will 
also need to make sure Hadoop's auth_to_local rules map the remote realm to 
the same dn user.  UserGroupInformation.getShortName() should be invoked to 
resolve the user name instead of manually parsing the principal name.  
Otherwise, auth_to_local rules are skipped, and losing hierarchical 
information often results in privilege escalation security holes.
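
For illustration, an auth_to_local mapping of this shape (realm names taken 
from the example below; the exact rules depend on the deployment) could map 
dn principals from both realms to the same local user:
{noformat}
<property>
  <name>hadoop.security.auth_to_local</name>
  <value>
    RULE:[2:$1@$0](dn@TEST1.COM)s/.*/dn/
    RULE:[2:$1@$0](dn@TEST2.COM)s/.*/dn/
    DEFAULT
  </value>
</property>
{noformat}
With rules like these in place, UserGroupInformation.getShortName() resolves 
dn principals from TEST1.COM and TEST2.COM to the same short name dn.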

> DataNode cannot serve BlockPool to multiple NameNodes in the different realm
> 
>
> Key: HDFS-14375
> URL: https://issues.apache.org/jira/browse/HDFS-14375
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: security
>Affects Versions: 3.1.1
>Reporter: Jihyun Cho
>Assignee: Jihyun Cho
>Priority: Major
> Attachments: authorize.patch
>
>
> Let me explain the environment for a description.
> {noformat}
> KDC(TEST1.COM)  <-- Cross-realm trust -->  KDC(TEST2.COM)
>       |                                          |
>   NameNode1                                  NameNode2
>       |                                          |
>       --------- DataNodes (federated) -----------
> {noformat}
> We configured the secure clusters and federated them.
> * Principal
> ** NameNode1 : nn/_h...@test1.com 
> ** NameNode2 : nn/_h...@test2.com 
> ** DataNodes : dn/_h...@test2.com 
> But DataNodes could not connect to NameNode1 with below error.
> {noformat}
> WARN 
> SecurityLogger.org.apache.hadoop.security.authorize.ServiceAuthorizationManager:
>  Authorization failed for dn/hadoop-datanode.test@test2.com 
> (auth:KERBEROS) for protocol=interface 
> org.apache.hadoop.hdfs.server.protocol.DatanodeProtocol: this service is only 
> accessible by dn/hadoop-datanode.test@test1.com
> {noformat}
> We have avoided the error with the attached patch.
> The patch checks only the {{username}} and {{hostname}}, ignoring the 
> {{realm}}.
> I think there is no problem, because if the realms are different and there 
> is no cross-realm setting, they cannot communicate with each other. If you 
> are worried about this, please let me know.
> In the long run, it would be better if I could set multiple realms for 
> authorization. Like this;
> {noformat}
> <property>
>   <name>dfs.namenode.kerberos.trust-realms</name>
>   <value>TEST1.COM,TEST2.COM</value>
> </property>
> {noformat}






[jira] [Updated] (HDFS-14623) In NameNode Web UI, for Head the file (first 32K) old data is showing

2019-08-09 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14623?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-14623:
---
   Resolution: Fixed
Fix Version/s: 3.3.0
   Status: Resolved  (was: Patch Available)

Pushed to trunk. Thanks [~hemanthboyina]!

> In NameNode Web UI, for Head the file (first 32K) old data is showing
> -
>
> Key: HDFS-14623
> URL: https://issues.apache.org/jira/browse/HDFS-14623
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: hemanthboyina
>Assignee: hemanthboyina
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14623.001.patch, HDFS-14623.patch, afterfix.JPG, 
> beforefix.JPG
>
>
> In the NameNode Web UI, "Head the file (first 32K)" shows stale data: after 
> opening multiple files and clicking "Head the file", the wrong data is shown.
> Scenario: uploaded a NameNode log and a ZKFC log, clicked "Head the file" on 
> the NameNode log multiple times, then went to the ZKFC log and clicked 
> "Head the file"; the wrong data was shown.






[jira] [Updated] (HDFS-14195) OIV: print out storage policy id in oiv Delimited output

2019-08-09 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14195?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-14195:
---
   Resolution: Fixed
Fix Version/s: 3.3.0
   Status: Resolved  (was: Patch Available)

Pushed the 010 patch to trunk. Thanks [~suxingfate] for the patch and 
[~adam.antal] for reviewing the patch!

> OIV: print out storage policy id in oiv Delimited output
> 
>
> Key: HDFS-14195
> URL: https://issues.apache.org/jira/browse/HDFS-14195
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: tools
>Reporter: Wang, Xinglong
>Assignee: Wang, Xinglong
>Priority: Minor
> Fix For: 3.3.0
>
> Attachments: HDFS-14195.001.patch, HDFS-14195.002.patch, 
> HDFS-14195.003.patch, HDFS-14195.004.patch, HDFS-14195.005.patch, 
> HDFS-14195.006.patch, HDFS-14195.007.patch, HDFS-14195.008.patch, 
> HDFS-14195.009.patch, HDFS-14195.010.patch
>
>
> There is no method to get all folders and files with a specified storage 
> policy via the command line, e.g. the ALL_SSD type.
> Adding the storage policy id to the oiv output helps oiv post-analysis: it 
> gives an overview of all folders/files with a specified storage policy and 
> lets internal regulation be applied based on this information.
>  
> Currently, for PBImageXmlWriter.java, HDFS-9835 already added a function to 
> print out xattrs, which include the storage policy.






[jira] [Comment Edited] (HDFS-14195) OIV: print out storage policy id in oiv Delimited output

2019-08-09 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14195?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16904223#comment-16904223
 ] 

Wei-Chiu Chuang edited comment on HDFS-14195 at 8/9/19 10:36 PM:
-

+1


was (Author: jojochuang):
+1 with the following small change just to make the text more clear:
{code:java}
+ " -sp print storage policy, used by delimiter only.\n"{code}
to
{code:java}
+ " -sp print storage policy, used by delimited processor only.\n"{code}

I'll commit the 010 patch with this change, and upload a 011 patch for future 
reference.

> OIV: print out storage policy id in oiv Delimited output
> 
>
> Key: HDFS-14195
> URL: https://issues.apache.org/jira/browse/HDFS-14195
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: tools
>Reporter: Wang, Xinglong
>Assignee: Wang, Xinglong
>Priority: Minor
> Attachments: HDFS-14195.001.patch, HDFS-14195.002.patch, 
> HDFS-14195.003.patch, HDFS-14195.004.patch, HDFS-14195.005.patch, 
> HDFS-14195.006.patch, HDFS-14195.007.patch, HDFS-14195.008.patch, 
> HDFS-14195.009.patch, HDFS-14195.010.patch
>
>
> There is no method to get all folders and files with a specified storage 
> policy via the command line, e.g. the ALL_SSD type.
> Adding the storage policy id to the oiv output helps oiv post-analysis: it 
> gives an overview of all folders/files with a specified storage policy and 
> lets internal regulation be applied based on this information.
>  
> Currently, for PBImageXmlWriter.java, HDFS-9835 already added a function to 
> print out xattrs, which include the storage policy.






[jira] [Commented] (HDFS-14195) OIV: print out storage policy id in oiv Delimited output

2019-08-09 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14195?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16904223#comment-16904223
 ] 

Wei-Chiu Chuang commented on HDFS-14195:


+1 with the following small change just to make the text more clear:
{code:java}
+ " -sp print storage policy, used by delimiter only.\n"{code}
to
{code:java}
+ " -sp print storage policy, used by delimited processor only.\n"{code}

I'll commit the 010 patch with this change, and upload a 011 patch for future 
reference.

> OIV: print out storage policy id in oiv Delimited output
> 
>
> Key: HDFS-14195
> URL: https://issues.apache.org/jira/browse/HDFS-14195
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: tools
>Reporter: Wang, Xinglong
>Assignee: Wang, Xinglong
>Priority: Minor
> Attachments: HDFS-14195.001.patch, HDFS-14195.002.patch, 
> HDFS-14195.003.patch, HDFS-14195.004.patch, HDFS-14195.005.patch, 
> HDFS-14195.006.patch, HDFS-14195.007.patch, HDFS-14195.008.patch, 
> HDFS-14195.009.patch, HDFS-14195.010.patch
>
>
> There is no method to get all folders and files with a specified storage 
> policy via the command line, e.g. the ALL_SSD type.
> Adding the storage policy id to the oiv output helps oiv post-analysis: it 
> gives an overview of all folders/files with a specified storage policy and 
> lets internal regulation be applied based on this information.
>  
> Currently, for PBImageXmlWriter.java, HDFS-9835 already added a function to 
> print out xattrs, which include the storage policy.






[jira] [Comment Edited] (HDFS-14375) DataNode cannot serve BlockPool to multiple NameNodes in the different realm

2019-08-09 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14375?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16904220#comment-16904220
 ] 

Wei-Chiu Chuang edited comment on HDFS-14375 at 8/9/19 10:18 PM:
-

Thanks for reporting the issue, [~Jihyun.Cho].
 But I am not sure – I thought this can be addressed by setting up a proper 
auth_to_local rule?
{quote}The patch checks only using username and hostname except realm.
{quote}
This is most likely not the right way to solve the problem.

[~eyang] FYI.


was (Author: jojochuang):
Thanks for reporting the issue, [~Jihyun.Cho].
But I am not sure -- I thought this can be addressed by setting up a proper 
auth_to_local rule?

[~eyang] FYI.

> DataNode cannot serve BlockPool to multiple NameNodes in the different realm
> 
>
> Key: HDFS-14375
> URL: https://issues.apache.org/jira/browse/HDFS-14375
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: security
>Affects Versions: 3.1.1
>Reporter: Jihyun Cho
>Assignee: Jihyun Cho
>Priority: Major
> Attachments: authorize.patch
>
>
> Let me explain the environment for a description.
> {noformat}
> KDC(TEST1.COM)  <-- Cross-realm trust -->  KDC(TEST2.COM)
>       |                                          |
>   NameNode1                                  NameNode2
>       |                                          |
>       --------- DataNodes (federated) -----------
> {noformat}
> We configured the secure clusters and federated them.
> * Principal
> ** NameNode1 : nn/_h...@test1.com 
> ** NameNode2 : nn/_h...@test2.com 
> ** DataNodes : dn/_h...@test2.com 
> But DataNodes could not connect to NameNode1 with below error.
> {noformat}
> WARN 
> SecurityLogger.org.apache.hadoop.security.authorize.ServiceAuthorizationManager:
>  Authorization failed for dn/hadoop-datanode.test@test2.com 
> (auth:KERBEROS) for protocol=interface 
> org.apache.hadoop.hdfs.server.protocol.DatanodeProtocol: this service is only 
> accessible by dn/hadoop-datanode.test@test1.com
> {noformat}
> We have avoided the error with the attached patch.
> The patch checks only the {{username}} and {{hostname}}, ignoring the 
> {{realm}}.
> I think there is no problem, because if the realms are different and there 
> is no cross-realm setting, they cannot communicate with each other. If you 
> are worried about this, please let me know.
> In the long run, it would be better if I could set multiple realms for 
> authorization. Like this;
> {noformat}
> <property>
>   <name>dfs.namenode.kerberos.trust-realms</name>
>   <value>TEST1.COM,TEST2.COM</value>
> </property>
> {noformat}






[jira] [Commented] (HDFS-14375) DataNode cannot serve BlockPool to multiple NameNodes in the different realm

2019-08-09 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14375?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16904220#comment-16904220
 ] 

Wei-Chiu Chuang commented on HDFS-14375:


Thanks for reporting the issue, [~Jihyun.Cho].
But I am not sure -- I thought this can be addressed by setting up a proper 
auth_to_local rule?

[~eyang] FYI.

> DataNode cannot serve BlockPool to multiple NameNodes in the different realm
> 
>
> Key: HDFS-14375
> URL: https://issues.apache.org/jira/browse/HDFS-14375
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: security
>Affects Versions: 3.1.1
>Reporter: Jihyun Cho
>Assignee: Jihyun Cho
>Priority: Major
> Attachments: authorize.patch
>
>
> Let me explain the environment for a description.
> {noformat}
> KDC(TEST1.COM)  <-- Cross-realm trust -->  KDC(TEST2.COM)
>       |                                          |
>   NameNode1                                  NameNode2
>       |                                          |
>       --------- DataNodes (federated) -----------
> {noformat}
> We configured the secure clusters and federated them.
> * Principal
> ** NameNode1 : nn/_h...@test1.com 
> ** NameNode2 : nn/_h...@test2.com 
> ** DataNodes : dn/_h...@test2.com 
> But DataNodes could not connect to NameNode1 with below error.
> {noformat}
> WARN 
> SecurityLogger.org.apache.hadoop.security.authorize.ServiceAuthorizationManager:
>  Authorization failed for dn/hadoop-datanode.test@test2.com 
> (auth:KERBEROS) for protocol=interface 
> org.apache.hadoop.hdfs.server.protocol.DatanodeProtocol: this service is only 
> accessible by dn/hadoop-datanode.test@test1.com
> {noformat}
> We have avoided the error with the attached patch.
> The patch checks only the {{username}} and {{hostname}}, ignoring the 
> {{realm}}.
> I think there is no problem, because if the realms are different and there 
> is no cross-realm setting, they cannot communicate with each other. If you 
> are worried about this, please let me know.
> In the long run, it would be better if I could set multiple realms for 
> authorization. Like this;
> {noformat}
> <property>
>   <name>dfs.namenode.kerberos.trust-realms</name>
>   <value>TEST1.COM,TEST2.COM</value>
> </property>
> {noformat}






[jira] [Commented] (HDFS-14489) fix naming issue for ScmBlockLocationTestingClient

2019-08-09 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14489?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16904216#comment-16904216
 ] 

Hadoop QA commented on HDFS-14489:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:red}-1{color} | {color:red} dupname {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 duplicated filenames that differ only 
in case. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.1 Server=19.03.1 Image:yetus/hadoop:bdbca0e53b4 |
| JIRA Issue | HDFS-14489 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12968644/HDFS-14489.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 0ccfe46570fe 4.15.0-54-generic #58-Ubuntu SMP Mon Jun 24 
10:55:24 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 98dd7c4 |
| maven | version: Apache Maven 3.3.9 |
| dupname | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27460/artifact/out/dupnames.txt
 |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27460/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> fix naming issue for ScmBlockLocationTestingClient
> --
>
> Key: HDFS-14489
> URL: https://issues.apache.org/jira/browse/HDFS-14489
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: star
>Assignee: star
>Priority: Major
> Attachments: HDFS-14489.patch
>
>
> The class 'ScmBlockLocationTestIngClient' is not named in proper CamelCase 
> form. Rename it to ScmBlockLocationTestingClient.






[jira] [Work logged] (HDDS-1895) Support Key ACL operations for OM HA.

2019-08-09 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1895?focusedWorklogId=292285&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-292285
 ]

ASF GitHub Bot logged work on HDDS-1895:


Author: ASF GitHub Bot
Created on: 09/Aug/19 21:57
Start Date: 09/Aug/19 21:57
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #1230: 
HDDS-1895. Support Key ACL operations for OM HA.
URL: https://github.com/apache/hadoop/pull/1230#discussion_r312660268
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/acl/OMKeyAddAclRequest.java
 ##
 @@ -0,0 +1,118 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om.request.key.acl;
+
+import java.io.IOException;
+import java.util.List;
+
+import com.google.common.collect.Lists;
+import org.apache.hadoop.ozone.om.OMMetrics;
+import org.apache.hadoop.ozone.om.helpers.OmKeyInfo;
+import org.apache.hadoop.ozone.om.response.key.acl.OMKeyAclResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos;
+import org.apache.hadoop.ozone.util.BooleanBiFunction;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.hadoop.ozone.om.response.OMClientResponse;
+import 
org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.OzoneAclInfo;
+import 
org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.AddAclResponse;
+import 
org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.OMRequest;
+import 
org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.OMResponse;
+
+/**
+ * Handle add Acl request for key.
+ */
+public class OMKeyAddAclRequest extends OMKeyAclRequest {
+
+  private static final Logger LOG =
+  LoggerFactory.getLogger(OMKeyAddAclRequest.class);
+
+  private static BooleanBiFunction<List<OzoneAclInfo>, OmKeyInfo> keyAddAclOp;
+  private String path;
+  private List<OzoneAclInfo> ozoneAcls;
+
+  static {
+keyAddAclOp = (ozoneAcls, omKeyInfo) -> {
+  return omKeyInfo.addAcl(ozoneAcls.get(0));
 
 Review comment:
   As discussed offline, have changed it.
 



Issue Time Tracking
---

Worklog Id: (was: 292285)
Time Spent: 1h 50m  (was: 1h 40m)

> Support Key ACL operations for OM HA.
> -
>
> Key: HDDS-1895
> URL: https://issues.apache.org/jira/browse/HDDS-1895
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> +HDDS-1541+ adds 4 new APIs for the Ozone RPC client. The OM HA 
> implementation needs to handle them.






[jira] [Work logged] (HDDS-1895) Support Key ACL operations for OM HA.

2019-08-09 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1895?focusedWorklogId=292287&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-292287
 ]

ASF GitHub Bot logged work on HDDS-1895:


Author: ASF GitHub Bot
Created on: 09/Aug/19 21:57
Start Date: 09/Aug/19 21:57
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #1230: 
HDDS-1895. Support Key ACL operations for OM HA.
URL: https://github.com/apache/hadoop/pull/1230#discussion_r312660320
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/acl/OMKeyAddAclRequest.java
 ##
 @@ -0,0 +1,118 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om.request.key.acl;
+
+import java.io.IOException;
+import java.util.List;
+
+import com.google.common.collect.Lists;
+import org.apache.hadoop.ozone.om.OMMetrics;
+import org.apache.hadoop.ozone.om.helpers.OmKeyInfo;
+import org.apache.hadoop.ozone.om.response.key.acl.OMKeyAclResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos;
+import org.apache.hadoop.ozone.util.BooleanBiFunction;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.hadoop.ozone.om.response.OMClientResponse;
+import 
org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.OzoneAclInfo;
+import 
org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.AddAclResponse;
+import 
org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.OMRequest;
+import 
org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.OMResponse;
+
+/**
+ * Handle add Acl request for key.
+ */
+public class OMKeyAddAclRequest extends OMKeyAclRequest {
+
+  private static final Logger LOG =
+  LoggerFactory.getLogger(OMKeyAddAclRequest.class);
+
+  private static BooleanBiFunction<List<OzoneAclInfo>, OmKeyInfo> keyAddAclOp;
+  private String path;
+  private List<OzoneAclInfo> ozoneAcls;
+
+  static {
+keyAddAclOp = (ozoneAcls, omKeyInfo) -> {
+  return omKeyInfo.addAcl(ozoneAcls.get(0));
+};
+  }
+
+  public OMKeyAddAclRequest(OMRequest omRequest) {
+super(omRequest, keyAddAclOp);
+OzoneManagerProtocolProtos.AddAclRequest addAclRequest =
+getOmRequest().getAddAclRequest();
+path = addAclRequest.getObj().getPath();
+ozoneAcls = Lists.newArrayList(addAclRequest.getAcl());
+  }
+
+  @Override
+  List<OzoneAclInfo> getAcls() {
+return ozoneAcls;
+  }
+
+  @Override
+  String getPath() {
+return path;
+  }
+
+  @Override
+  OMResponse.Builder onInit() {
+return OMResponse.newBuilder().setCmdType(
+OzoneManagerProtocolProtos.Type.AddAcl).setStatus(
+OzoneManagerProtocolProtos.Status.OK).setSuccess(true);
+
+  }
+
+  @Override
+  OMClientResponse onSuccess(OMResponse.Builder omResponse,
+  OmKeyInfo omKeyInfo, boolean operationResult) {
+omResponse.setSuccess(operationResult);
+omResponse.setAddAclResponse(AddAclResponse.newBuilder()
+.setResponse(operationResult));
+return new OMKeyAclResponse(omKeyInfo,
+omResponse.build());
+  }
+
+  @Override
+  OMClientResponse onFailure(OMResponse.Builder omResponse,
+  IOException exception) {
+return new OMKeyAclResponse(null,
+createErrorOMResponse(omResponse, exception));
+  }
+
+  @Override
+  void onComplete(boolean operationResult, IOException exception,
+  OMMetrics omMetrics) {
+if (operationResult) {
+  LOG.debug("Add acl: {} to path: {} success!", getAcls(), getPath());
+} else {
+  omMetrics.incNumBucketUpdateFails();
+  if (exception == null) {
+LOG.error("Add acl {} to path {} failed, because acl already exist",
 
 Review comment:
   Done.
 



Issue Time Tracking
---

Worklog Id: (was: 292287)
Time 

[jira] [Work logged] (HDDS-1895) Support Key ACL operations for OM HA.

2019-08-09 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1895?focusedWorklogId=292286&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-292286
 ]

ASF GitHub Bot logged work on HDDS-1895:


Author: ASF GitHub Bot
Created on: 09/Aug/19 21:57
Start Date: 09/Aug/19 21:57
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #1230: 
HDDS-1895. Support Key ACL operations for OM HA.
URL: https://github.com/apache/hadoop/pull/1230#discussion_r312660300
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ratis/utils/OzoneManagerRatisUtils.java
 ##
 @@ -141,20 +144,26 @@ private static OMClientRequest getOMAclRequest(OMRequest 
omRequest) {
 return new OMVolumeAddAclRequest(omRequest);
   } else if (ObjectType.BUCKET == type) {
 return new OMBucketAddAclRequest(omRequest);
+  } else if (type == ObjectType.KEY) {
 
 Review comment:
   Updated it.
 



Issue Time Tracking
---

Worklog Id: (was: 292286)
Time Spent: 2h  (was: 1h 50m)

> Support Key ACL operations for OM HA.
> -
>
> Key: HDDS-1895
> URL: https://issues.apache.org/jira/browse/HDDS-1895
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> +HDDS-1541+ adds 4 new APIs for the Ozone RPC client. The OM HA 
> implementation needs to handle them.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1895) Support Key ACL operations for OM HA.

2019-08-09 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1895?focusedWorklogId=292288=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-292288
 ]

ASF GitHub Bot logged work on HDDS-1895:


Author: ASF GitHub Bot
Created on: 09/Aug/19 21:58
Start Date: 09/Aug/19 21:58
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on issue #1230: HDDS-1895. 
Support Key ACL operations for OM HA.
URL: https://github.com/apache/hadoop/pull/1230#issuecomment-520077027
 
 
   Thank You @arp7 for the offline discussion.
   Addressed the review comments.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 292288)
Time Spent: 2h 20m  (was: 2h 10m)

> Support Key ACL operations for OM HA.
> -
>
> Key: HDDS-1895
> URL: https://issues.apache.org/jira/browse/HDDS-1895
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h 20m
>  Remaining Estimate: 0h
>
> +HDDS-1541+ adds 4 new APIs for the Ozone RPC client. The OM HA 
> implementation needs to handle them.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1895) Support Key ACL operations for OM HA.

2019-08-09 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1895?focusedWorklogId=292289=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-292289
 ]

ASF GitHub Bot logged work on HDDS-1895:


Author: ASF GitHub Bot
Created on: 09/Aug/19 21:58
Start Date: 09/Aug/19 21:58
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on issue #1230: HDDS-1895. 
Support Key ACL operations for OM HA.
URL: https://github.com/apache/hadoop/pull/1230#issuecomment-520077044
 
 
   /retest
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 292289)
Time Spent: 2.5h  (was: 2h 20m)

> Support Key ACL operations for OM HA.
> -
>
> Key: HDDS-1895
> URL: https://issues.apache.org/jira/browse/HDDS-1895
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2.5h
>  Remaining Estimate: 0h
>
> +HDDS-1541+ adds 4 new APIs for the Ozone RPC client. The OM HA 
> implementation needs to handle them.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14375) DataNode cannot serve BlockPool to multiple NameNodes in the different realm

2019-08-09 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14375?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16904215#comment-16904215
 ] 

Hadoop QA commented on HDFS-14375:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  7s{color} 
| {color:red} HDFS-14375 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDFS-14375 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12962896/authorize.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27463/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> DataNode cannot serve BlockPool to multiple NameNodes in the different realm
> 
>
> Key: HDFS-14375
> URL: https://issues.apache.org/jira/browse/HDFS-14375
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: security
>Affects Versions: 3.1.1
>Reporter: Jihyun Cho
>Assignee: Jihyun Cho
>Priority: Major
> Attachments: authorize.patch
>
>
> Let me explain the environment for context.
> {noformat}
> KDC(TEST1.COM) <-- Cross-realm trust -->  KDC(TEST2.COM)
>      |                                        |
>  NameNode1                                NameNode2
>      |                                        |
>      --------- DataNodes (federated) ---------
> {noformat}
> We configured the secure clusters and federated them.
> * Principal
> ** NameNode1 : nn/_h...@test1.com 
> ** NameNode2 : nn/_h...@test2.com 
> ** DataNodes : dn/_h...@test2.com 
> But DataNodes could not connect to NameNode1 with the below error.
> {noformat}
> WARN 
> SecurityLogger.org.apache.hadoop.security.authorize.ServiceAuthorizationManager:
>  Authorization failed for dn/hadoop-datanode.test@test2.com 
> (auth:KERBEROS) for protocol=interface 
> org.apache.hadoop.hdfs.server.protocol.DatanodeProtocol: this service is only 
> accessible by dn/hadoop-datanode.test@test1.com
> {noformat}
> We have avoided the error with the attached patch.
> The patch compares only the {{username}} and {{hostname}}, ignoring the 
> {{realm}}.
> I think there is no problem, because if the realms are different and there is 
> no cross-realm setting, they cannot communicate with each other. If you are 
> worried about this, please let me know.
> In the long run, it would be better if I could set multiple realms for 
> authorization, like this:
> {noformat}
> <property>
>   <name>dfs.namenode.kerberos.trust-realms</name>
>   <value>TEST1.COM,TEST2.COM</value>
> </property>
> {noformat}
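
For illustration, the relaxed comparison described above might look like the
following sketch; the method and variable names here are hypothetical, not
taken from the attached patch:
{code:java}
/**
 * Compare Kerberos principals of the form primary/instance@REALM,
 * ignoring the realm component.
 */
static boolean matchesIgnoringRealm(String expected, String actual) {
  String expectedShort = expected.split("@", 2)[0];  // primary/instance
  String actualShort = actual.split("@", 2)[0];
  return expectedShort.equals(actualShort);
}
{code}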



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13495) RBF: Support Router Admin REST API

2019-08-09 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13495?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16904213#comment-16904213
 ] 

Hadoop QA commented on HDFS-13495:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  8s{color} 
| {color:red} HDFS-13495 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDFS-13495 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12975946/HDFS-13495-001.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27458/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> RBF: Support Router Admin REST API
> --
>
> Key: HDFS-13495
> URL: https://issues.apache.org/jira/browse/HDFS-13495
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Mohammad Arshad
>Assignee: Fengnan Li
>Priority: Major
>  Labels: RBF
> Attachments: HDFS-13495-001.patch
>
>
> This JIRA intends to add REST API support for all admin commands. Router 
> Admin REST APIs can be useful in managing the Routers from a central 
> management layer tool. 



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14515) The proto type of quota should change to int64.

2019-08-09 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16904214#comment-16904214
 ] 

Hadoop QA commented on HDFS-14515:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  6s{color} 
| {color:red} HDFS-14515 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDFS-14515 |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27461/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> The proto type of quota should change to int64.
> ---
>
> Key: HDFS-14515
> URL: https://issues.apache.org/jira/browse/HDFS-14515
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Jinglun
>Assignee: Jinglun
>Priority: Major
> Attachments: INode.proto, Main.java, NINode.proto
>
>
> In fsimage.proto, the proto type of quota should be int64 rather than uint64. 
> In proto, uint64 represents 64-bit unsigned integers. Since the quota in an 
> image can be -1, using uint64 is inappropriate (see 
> https://developers.google.com/protocol-buffers/docs/proto#scalar).
> HDFS uses uint64 for quota and works fine because the Java type corresponding 
> to uint64 is long, the same as for int64. But in C++ and Go, uint64 and int64 
> map to different types. It would be a problem when loading an image with 
> C++ and fsimage.proto.
> The good news is we can simply change uint64 to int64 without breaking any 
> existing clusters. The two types, int64 and uint64, are serialized 
> to/deserialized from a Java long in the same way, which means a long 
> serialized as uint64 can be treated as int64 and deserialized to the same 
> long value.
> 1) long a -> uint64 serialized -> byte[] b -> int64 deserialized -> long c;
> 2) a == c;
> I did a test to show 1) and 2). INode.proto uses uint64 and NINode.proto uses 
> int64. Main.java shows serializing a long as uint64 to a byte array and 
> deserializing the array as int64 back to a long. I used proto 2.5 to compile 
> the proto files.
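
The round trip in 1) and 2) can also be checked without any generated classes,
using protobuf's CodedOutputStream/CodedInputStream directly (a standalone
sketch; the class name QuotaVarintDemo is illustrative, not from the
attachments):
{code:java}
import com.google.protobuf.CodedInputStream;
import com.google.protobuf.CodedOutputStream;

public class QuotaVarintDemo {
  public static void main(String[] args) throws Exception {
    long quota = -1L;               // an "unset" quota as stored in the image
    byte[] buf = new byte[10];      // a varint64 occupies at most 10 bytes
    CodedOutputStream out = CodedOutputStream.newInstance(buf);
    out.writeUInt64NoTag(quota);    // serialize as uint64
    out.flush();
    // deserialize the same bytes as int64
    long decoded = CodedInputStream.newInstance(buf).readInt64();
    System.out.println(quota == decoded);  // prints true
  }
}
{code}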



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-14375) DataNode cannot serve BlockPool to multiple NameNodes in the different realm

2019-08-09 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14375?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang reassigned HDFS-14375:
--

Assignee: Jihyun Cho

> DataNode cannot serve BlockPool to multiple NameNodes in the different realm
> 
>
> Key: HDFS-14375
> URL: https://issues.apache.org/jira/browse/HDFS-14375
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: security
>Affects Versions: 3.1.1
>Reporter: Jihyun Cho
>Assignee: Jihyun Cho
>Priority: Major
> Attachments: authorize.patch
>
>
> Let me explain the environment for context.
> {noformat}
> KDC(TEST1.COM) <-- Cross-realm trust -->  KDC(TEST2.COM)
>      |                                        |
>  NameNode1                                NameNode2
>      |                                        |
>      --------- DataNodes (federated) ---------
> {noformat}
> We configured the secure clusters and federated them.
> * Principal
> ** NameNode1 : nn/_h...@test1.com 
> ** NameNode2 : nn/_h...@test2.com 
> ** DataNodes : dn/_h...@test2.com 
> But DataNodes could not connect to NameNode1 with the below error.
> {noformat}
> WARN 
> SecurityLogger.org.apache.hadoop.security.authorize.ServiceAuthorizationManager:
>  Authorization failed for dn/hadoop-datanode.test@test2.com 
> (auth:KERBEROS) for protocol=interface 
> org.apache.hadoop.hdfs.server.protocol.DatanodeProtocol: this service is only 
> accessible by dn/hadoop-datanode.test@test1.com
> {noformat}
> We have avoided the error with the attached patch.
> The patch compares only the {{username}} and {{hostname}}, ignoring the 
> {{realm}}.
> I think there is no problem, because if the realms are different and there is 
> no cross-realm setting, they cannot communicate with each other. If you are 
> worried about this, please let me know.
> In the long run, it would be better if I could set multiple realms for 
> authorization, like this:
> {noformat}
> <property>
>   <name>dfs.namenode.kerberos.trust-realms</name>
>   <value>TEST1.COM,TEST2.COM</value>
> </property>
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14375) DataNode cannot serve BlockPool to multiple NameNodes in the different realm

2019-08-09 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14375?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-14375:
---
Status: Patch Available  (was: Open)

> DataNode cannot serve BlockPool to multiple NameNodes in the different realm
> 
>
> Key: HDFS-14375
> URL: https://issues.apache.org/jira/browse/HDFS-14375
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: security
>Affects Versions: 3.1.1
>Reporter: Jihyun Cho
>Priority: Major
> Attachments: authorize.patch
>
>
> Let me explain the environment for context.
> {noformat}
> KDC(TEST1.COM) <-- Cross-realm trust -->  KDC(TEST2.COM)
>      |                                        |
>  NameNode1                                NameNode2
>      |                                        |
>      --------- DataNodes (federated) ---------
> {noformat}
> We configured the secure clusters and federated them.
> * Principal
> ** NameNode1 : nn/_h...@test1.com 
> ** NameNode2 : nn/_h...@test2.com 
> ** DataNodes : dn/_h...@test2.com 
> But DataNodes could not connect to NameNode1 with the below error.
> {noformat}
> WARN 
> SecurityLogger.org.apache.hadoop.security.authorize.ServiceAuthorizationManager:
>  Authorization failed for dn/hadoop-datanode.test@test2.com 
> (auth:KERBEROS) for protocol=interface 
> org.apache.hadoop.hdfs.server.protocol.DatanodeProtocol: this service is only 
> accessible by dn/hadoop-datanode.test@test1.com
> {noformat}
> We have avoided the error with the attached patch.
> The patch compares only the {{username}} and {{hostname}}, ignoring the 
> {{realm}}.
> I think there is no problem, because if the realms are different and there is 
> no cross-realm setting, they cannot communicate with each other. If you are 
> worried about this, please let me know.
> In the long run, it would be better if I could set multiple realms for 
> authorization, like this:
> {noformat}
> <property>
>   <name>dfs.namenode.kerberos.trust-realms</name>
>   <value>TEST1.COM,TEST2.COM</value>
> </property>
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-14408) HttpFS handles paths with special characters differently than WebHdfs

2019-08-09 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14408?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang resolved HDFS-14408.

Resolution: Duplicate

> HttpFS handles paths with special characters differently than WebHdfs
> --
>
> Key: HDFS-14408
> URL: https://issues.apache.org/jira/browse/HDFS-14408
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: httpfs
>Affects Versions: 3.1.1
>Reporter: Andrey Zinovyev
>Priority: Major
> Attachments: httpfs-special-fix.patch, httpfs-special-test.patch
>
>
> After HDFS-13176, WebHdfsFileSystem encodes special characters twice. For 
> example, the path _/tmp/day=2018-01-01_ becomes a 
> _/webhdfs/v1/tmp/day%253D2018-01-01_ call. 
> In NamenodeWebHdfsMethods this is handled by decoding the path twice (first 
> by the web server and then explicitly in code).
> But if we use httpfs, it fails to get paths with special characters (like 
> `=`), because it decodes the path only once.
> A test to reproduce and a simple fix are attached. Although I think that 
> double encoding doesn't look right.
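
The double encoding can be reproduced with two passes of java.net.URLEncoder
(a standalone sketch, not code from the attached patch):
{code:java}
import java.net.URLEncoder;

public class DoubleEncodeDemo {
  public static void main(String[] args) throws Exception {
    String once = URLEncoder.encode("day=2018-01-01", "UTF-8");
    String twice = URLEncoder.encode(once, "UTF-8");
    System.out.println(once);   // day%3D2018-01-01
    System.out.println(twice);  // day%253D2018-01-01
  }
}
{code}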



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14450) Erasure Coding: decommissioning datanodes causes replication of a large number of duplicate EC internal blocks

2019-08-09 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14450?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-14450:
---
Status: Patch Available  (was: Open)

> Erasure Coding: decommissioning datanodes causes replication of a large 
> number of duplicate EC internal blocks
> 
>
> Key: HDFS-14450
> URL: https://issues.apache.org/jira/browse/HDFS-14450
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Wu Weiwei
>Assignee: Wu Weiwei
>Priority: Major
> Attachments: HDFS-14450-000.patch
>
>
> {code:java}
> //  [WARN] [RedundancyMonitor] : Failed to place enough replicas, still in 
> need of 2 to reach 167 (unavailableStorages=[DISK, ARCHIVE], 
> storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], 
> creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=false) All 
> required storage types are unavailable:  unavailableStorages=[DISK, ARCHIVE], 
> storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], 
> creationFallbacks=[], replicationFallbacks=[ARCHIVE]}
> {code}
> In a large-scale cluster, decommissioning large-scale datanodes causes EC 
> block groups to replicate a large number of duplicate internal blocks.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14489) fix naming issue for ScmBlockLocationTestingClient

2019-08-09 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14489?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-14489:
---
Status: Patch Available  (was: Open)

> fix naming issue for ScmBlockLocationTestingClient
> --
>
> Key: HDFS-14489
> URL: https://issues.apache.org/jira/browse/HDFS-14489
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: star
>Assignee: star
>Priority: Major
> Attachments: HDFS-14489.patch
>
>
> class 'ScmBlockLocationTestIngClient' is not named in Camel-Case form. Rename 
> it to ScmBlockLocationTestingClient.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14515) The proto type of quota should change to int64.

2019-08-09 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14515?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-14515:
---
Status: Patch Available  (was: Open)

> The proto type of quota should change to int64.
> ---
>
> Key: HDFS-14515
> URL: https://issues.apache.org/jira/browse/HDFS-14515
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Jinglun
>Assignee: Jinglun
>Priority: Major
> Attachments: INode.proto, Main.java, NINode.proto
>
>
> In fsimage.proto, the proto type of quota should be int64 rather than uint64. 
> In proto, uint64 represents 64-bit unsigned integers. Since the quota in an 
> image can be -1, using uint64 is inappropriate (see 
> https://developers.google.com/protocol-buffers/docs/proto#scalar).
> HDFS uses uint64 for quota and works fine because the Java type corresponding 
> to uint64 is long, the same as for int64. But in C++ and Go, uint64 and int64 
> map to different types. It would be a problem when loading an image with 
> C++ and fsimage.proto.
> The good news is we can simply change uint64 to int64 without breaking any 
> existing clusters. The two types, int64 and uint64, are serialized 
> to/deserialized from a Java long in the same way, which means a long 
> serialized as uint64 can be treated as int64 and deserialized to the same 
> long value.
> 1) long a -> uint64 serialized -> byte[] b -> int64 deserialized -> long c;
> 2) a == c;
> I did a test to show 1) and 2). INode.proto uses uint64 and NINode.proto uses 
> int64. Main.java shows serializing a long as uint64 to a byte array and 
> deserializing the array as int64 back to a long. I used proto 2.5 to compile 
> the proto files.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14523) Remove excess read lock for NetworkTopology

2019-08-09 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14523?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-14523:
---
Status: Patch Available  (was: Open)

> Remove excess read lock for NetworkTopology
> --
>
> Key: HDFS-14523
> URL: https://issues.apache.org/jira/browse/HDFS-14523
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Wu Weiwei
>Assignee: Wu Weiwei
>Priority: Major
> Attachments: HDFS-14523.1.patch
>
>
> getNumOfRacks() and getNumOfLeaves() are two frequently called methods in 
> BlockPlacementPolicy. Both methods need to take the NetworkTopology read 
> lock, and taking a lock in frequently called methods may impact namenode 
> performance. 
> These two methods get the number of racks and the number of leaves just for 
> the chooseTarget calculation; the lock in these two methods cannot guarantee 
> that these two values will not change in the subsequent calculations.
> I think it's safe to remove the read lock from these two methods.
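
A minimal sketch of what removing the lock could look like in NetworkTopology,
assuming callers tolerate a slightly stale value (the actual patch may differ):
{code:java}
/** @return the number of racks; plain read, read lock removed. */
public int getNumOfRacks() {
  // previously wrapped in netlock.readLock().lock() / unlock()
  return numOfRacks;
}
{code}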



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13495) RBF: Support Router Admin REST API

2019-08-09 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13495?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-13495:
---
Status: Patch Available  (was: Open)

Submit for precommit check

> RBF: Support Router Admin REST API
> --
>
> Key: HDFS-13495
> URL: https://issues.apache.org/jira/browse/HDFS-13495
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Mohammad Arshad
>Assignee: Fengnan Li
>Priority: Major
>  Labels: RBF
> Attachments: HDFS-13495-001.patch
>
>
> This JIRA intends to add REST API support for all admin commands. Router 
> Admin REST APIs can be useful in managing the Routers from a central 
> management layer tool. 



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14423) Percent (%) and plus (+) characters no longer work in WebHDFS

2019-08-09 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14423?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-14423:
---
Status: Patch Available  (was: Open)

> Percent (%) and plus (+) characters no longer work in WebHDFS
> -
>
> Key: HDFS-14423
> URL: https://issues.apache.org/jira/browse/HDFS-14423
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 3.1.2, 3.2.0
> Environment: Ubuntu 16.04, but I believe this is irrelevant.
>Reporter: Jing Wang
>Assignee: Wei-Chiu Chuang
>Priority: Major
> Attachments: HDFS-14423.001.patch
>
>
> The following commands with percent (%) no longer work starting with version 
> 3.1:
> {code:java}
> $ hadoop/bin/hdfs dfs -touchz webhdfs://localhost/%
> $ hadoop/bin/hdfs dfs -cat webhdfs://localhost/%
> cat: URLDecoder: Incomplete trailing escape (%) pattern
> {code}
> Also, plus (+) characters get turned into spaces when doing DN operations:
> {code:java}
> $ hadoop/bin/hdfs dfs -touchz webhdfs://localhost/a+b
> $ hadoop/bin/hdfs dfs -mkdir webhdfs://localhost/c+d
> $ hadoop/bin/hdfs dfs -ls /
> Found 4 items
> -rw-r--r--   1 jing supergroup  0 2019-04-12 11:20 /a b
> drwxr-xr-x   - jing supergroup  0 2019-04-12 11:21 /c+d
> {code}
> I can confirm that these commands work correctly on 2.9 and 3.0. Also, the 
> usual hdfs:// client works as expected.
> I suspect a relation with HDFS-13176 or HDFS-13582, but I'm not sure what the 
> right fix is. Note that Hive uses % to escape special characters in partition 
> values, so banning % might not be a good option. For example, Hive will 
> create paths like {{table_name/partition_key=%2F}} when 
> {{partition_key='/'}}.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-14715) RBF: Fix RBF failed tests

2019-08-09 Thread CR Hota (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14715?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

CR Hota resolved HDFS-14715.

Resolution: Duplicate

> RBF: Fix RBF failed tests
> -
>
> Key: HDFS-14715
> URL: https://issues.apache.org/jira/browse/HDFS-14715
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chen Zhang
>Assignee: Chen Zhang
>Priority: Major
>
> including:
> hadoop.hdfs.server.federation.router.TestRouterWithSecureStartup
> hadoop.hdfs.server.federation.security.TestRouterHttpDelegationToken



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14595) HDFS-11848 breaks API compatibility

2019-08-09 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14595?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16904196#comment-16904196
 ] 

Wei-Chiu Chuang commented on HDFS-14595:


Looks good. [~ayushtkn] anything you'd like to add?

> HDFS-11848 breaks API compatibility
> ---
>
> Key: HDFS-14595
> URL: https://issues.apache.org/jira/browse/HDFS-14595
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.2.0, 3.1.2
>Reporter: Wei-Chiu Chuang
>Assignee: Siyao Meng
>Priority: Blocker
> Attachments: HDFS-14595.001.patch, HDFS-14595.002.patch, 
> HDFS-14595.003.patch, hadoop_ 36e1870eab904d5a6f12ecfb1fdb52ca08d95ac5 to 
> b241194d56f97ee372cbec7062bcf155bc3df662 compatibility report.htm
>
>
> Our internal tool caught an API compatibility issue with HDFS-11848.
> HDFS-11848 adds an additional parameter to 
> DistributedFileSystem.listOpenFiles(), but it doesn't keep the existing API.
> This can cause issues when upgrading from Hadoop 2.9.0/2.8.3/3.0.0 to 
> 3.0.1/3.1.0 and above.
> Suggestions:
> (1) Add back the old API (which was added in HDFS-10480), and mark it 
> deprecated.
> (2) Update the release doc to enforce running an API compatibility check for 
> each release.
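
A hedged sketch of suggestion (1) as it might look in DistributedFileSystem
(the delegation body is an assumption; the committed patch may differ):
{code:java}
// Restore the pre-HDFS-11848 overload and deprecate it, delegating to the
// EnumSet variant introduced by HDFS-11848.
@Deprecated
public RemoteIterator<OpenFileEntry> listOpenFiles() throws IOException {
  return listOpenFiles(EnumSet.of(OpenFilesIterator.OpenFilesType.ALL_OPEN_FILES));
}
{code}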



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14715) RBF: Fix RBF failed tests

2019-08-09 Thread CR Hota (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14715?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16904194#comment-16904194
 ] 

CR Hota commented on HDFS-14715:


[~elgoiri] Yeah, it's a duplicate.

[~zhangchen] I have assigned HDFS-14609 to you, will be happy to help you get 
it going.

> RBF: Fix RBF failed tests
> -
>
> Key: HDFS-14715
> URL: https://issues.apache.org/jira/browse/HDFS-14715
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chen Zhang
>Assignee: Chen Zhang
>Priority: Major
>
> including:
> hadoop.hdfs.server.federation.router.TestRouterWithSecureStartup
> hadoop.hdfs.server.federation.security.TestRouterHttpDelegationToken



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-14609) RBF: Security should use common AuthenticationFilter

2019-08-09 Thread CR Hota (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14609?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

CR Hota reassigned HDFS-14609:
--

Assignee: Chen Zhang  (was: CR Hota)

> RBF: Security should use common AuthenticationFilter
> 
>
> Key: HDFS-14609
> URL: https://issues.apache.org/jira/browse/HDFS-14609
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: CR Hota
>Assignee: Chen Zhang
>Priority: Major
>
> We worked on router-based federation security as part of HDFS-13532. We kept 
> it compatible with the way the namenode works. However, with HADOOP-16314 and 
> HDFS-16354 in trunk, the auth filters seem to have been changed, causing 
> tests to fail.
> Changes are needed in RBF accordingly, mainly fixing the broken tests.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1895) Support Key ACL operations for OM HA.

2019-08-09 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1895?focusedWorklogId=292254=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-292254
 ]

ASF GitHub Bot logged work on HDDS-1895:


Author: ASF GitHub Bot
Created on: 09/Aug/19 21:16
Start Date: 09/Aug/19 21:16
Worklog Time Spent: 10m 
  Work Description: arp7 commented on pull request #1230: HDDS-1895. 
Support Key ACL operations for OM HA.
URL: https://github.com/apache/hadoop/pull/1230#discussion_r312650511
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/acl/OMKeyAddAclRequest.java
 ##
 @@ -0,0 +1,118 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om.request.key.acl;
+
+import java.io.IOException;
+import java.util.List;
+
+import com.google.common.collect.Lists;
+import org.apache.hadoop.ozone.om.OMMetrics;
+import org.apache.hadoop.ozone.om.helpers.OmKeyInfo;
+import org.apache.hadoop.ozone.om.response.key.acl.OMKeyAclResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos;
+import org.apache.hadoop.ozone.util.BooleanBiFunction;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.hadoop.ozone.om.response.OMClientResponse;
+import 
org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.OzoneAclInfo;
+import 
org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.AddAclResponse;
+import 
org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.OMRequest;
+import 
org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.OMResponse;
+
+/**
+ * Handle add Acl request for key.
+ */
+public class OMKeyAddAclRequest extends OMKeyAclRequest {
+
+  private static final Logger LOG =
+  LoggerFactory.getLogger(OMKeyAddAclRequest.class);
+
+  private static BooleanBiFunction<List<OzoneAclInfo>, OmKeyInfo> keyAddAclOp;
+  private String path;
+  private List<OzoneAclInfo> ozoneAcls;
+
+  static {
+keyAddAclOp = (ozoneAcls, omKeyInfo) -> {
+  return omKeyInfo.addAcl(ozoneAcls.get(0));
+};
+  }
+
+  public OMKeyAddAclRequest(OMRequest omRequest) {
+super(omRequest, keyAddAclOp);
+OzoneManagerProtocolProtos.AddAclRequest addAclRequest =
+getOmRequest().getAddAclRequest();
+path = addAclRequest.getObj().getPath();
+ozoneAcls = Lists.newArrayList(addAclRequest.getAcl());
+  }
+
+  @Override
+  List<OzoneAclInfo> getAcls() {
+return ozoneAcls;
+  }
+
+  @Override
+  String getPath() {
+return path;
+  }
+
+  @Override
+  OMResponse.Builder onInit() {
+return OMResponse.newBuilder().setCmdType(
+OzoneManagerProtocolProtos.Type.AddAcl).setStatus(
+OzoneManagerProtocolProtos.Status.OK).setSuccess(true);
+
+  }
+
+  @Override
+  OMClientResponse onSuccess(OMResponse.Builder omResponse,
+  OmKeyInfo omKeyInfo, boolean operationResult) {
+omResponse.setSuccess(operationResult);
+omResponse.setAddAclResponse(AddAclResponse.newBuilder()
+.setResponse(operationResult));
+return new OMKeyAclResponse(omKeyInfo,
+omResponse.build());
+  }
+
+  @Override
+  OMClientResponse onFailure(OMResponse.Builder omResponse,
+  IOException exception) {
+return new OMKeyAclResponse(null,
+createErrorOMResponse(omResponse, exception));
+  }
+
+  @Override
+  void onComplete(boolean operationResult, IOException exception,
+  OMMetrics omMetrics) {
+if (operationResult) {
+  LOG.debug("Add acl: {} to path: {} success!", getAcls(), getPath());
+} else {
+  omMetrics.incNumBucketUpdateFails();
+  if (exception == null) {
+LOG.error("Add acl {} to path {} failed, because acl already exist",
 
 Review comment:
   Should we just make this info or even debug?
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 292254)
Time Spent: 1h 40m  (was: 1.5h)

[jira] [Commented] (HDFS-14716) Pass CreateFile Flags To Subclasses of FileSystem

2019-08-09 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-14716?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16904191#comment-16904191
 ] 

Íñigo Goiri commented on HDFS-14716:


Tweaking ClientProtocol is always tricky because of backwards compatibility.
You would need to keep the old way and probably add a new one, but that's 
also tricky.

> Pass CreateFile Flags To Subclasses of FileSystem
> -
>
> Key: HDFS-14716
> URL: https://issues.apache.org/jira/browse/HDFS-14716
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: David Mollitor
>Priority: Major
>
> I need a way to pass [HDFS-13448] {{NO_LOCAL_WRITE}} to the 
> {{DistributedFileSystem}} class.  The {{create}} method should pass all of 
> the {{CreateFile}} flags to the underlying implementation.  The 'overwrite' 
> flag should be removed, letting the implementation read this directive as a 
> {{CreateFile}} flag.
> {code:java}
>   public abstract FSDataOutputStream create(Path f,
>   FsPermission permission,
>   boolean overwrite,
>   int bufferSize,
>   short replication,
>   long blockSize,
>   Progressable progress) throws IOException;
> {code}
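
For context, {{FileSystem}} already has an overload that takes an EnumSet of
CreateFlag; a hedged usage sketch follows (the path and numeric parameters are
illustrative, and {{fs}} is assumed to be an initialized FileSystem):
{code:java}
EnumSet<CreateFlag> flags = EnumSet.of(CreateFlag.CREATE,
    CreateFlag.OVERWRITE, CreateFlag.NO_LOCAL_WRITE);
// create(Path, FsPermission, EnumSet<CreateFlag>, bufferSize, replication,
// blockSize, Progressable) forwards the flags on DistributedFileSystem
try (FSDataOutputStream out = fs.create(new Path("/tmp/demo"),
    FsPermission.getFileDefault(), flags, 4096, (short) 3, 128L << 20, null)) {
  out.writeBytes("hello");
}
{code}
The ask in this issue is that the abstract create() subclasses must implement
also carries these flags, instead of only the boolean 'overwrite'.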



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1895) Support Key ACL operations for OM HA.

2019-08-09 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1895?focusedWorklogId=292253=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-292253
 ]

ASF GitHub Bot logged work on HDDS-1895:


Author: ASF GitHub Bot
Created on: 09/Aug/19 21:14
Start Date: 09/Aug/19 21:14
Worklog Time Spent: 10m 
  Work Description: arp7 commented on pull request #1230: HDDS-1895. 
Support Key ACL operations for OM HA.
URL: https://github.com/apache/hadoop/pull/1230#discussion_r312650056
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/acl/OMKeyAddAclRequest.java
 ##
 @@ -0,0 +1,118 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om.request.key.acl;
+
+import java.io.IOException;
+import java.util.List;
+
+import com.google.common.collect.Lists;
+import org.apache.hadoop.ozone.om.OMMetrics;
+import org.apache.hadoop.ozone.om.helpers.OmKeyInfo;
+import org.apache.hadoop.ozone.om.response.key.acl.OMKeyAclResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos;
+import org.apache.hadoop.ozone.util.BooleanBiFunction;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.hadoop.ozone.om.response.OMClientResponse;
+import 
org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.OzoneAclInfo;
+import 
org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.AddAclResponse;
+import 
org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.OMRequest;
+import 
org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.OMResponse;
+
+/**
+ * Handle add Acl request for key.
+ */
+public class OMKeyAddAclRequest extends OMKeyAclRequest {
+
+  private static final Logger LOG =
+  LoggerFactory.getLogger(OMKeyAddAclRequest.class);
+
+  private static BooleanBiFunction<List<OzoneAclInfo>, OmKeyInfo> keyAddAclOp;
+  private String path;
+  private List<OzoneAclInfo> ozoneAcls;
+
+  static {
+keyAddAclOp = (ozoneAcls, omKeyInfo) -> {
+  return omKeyInfo.addAcl(ozoneAcls.get(0));
 
 Review comment:
   Sorry I didn't understand this. What is it doing? Felt little odd to see a 
lambda in a static block.
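
For comparison, an equivalent direct field initializer would avoid the static
block entirely (a sketch, assuming BooleanBiFunction is a functional interface
with a single boolean-returning method):
{code:java}
private static final BooleanBiFunction<List<OzoneAclInfo>, OmKeyInfo>
    KEY_ADD_ACL_OP = (acls, keyInfo) -> keyInfo.addAcl(acls.get(0));
{code}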
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 292253)
Time Spent: 1.5h  (was: 1h 20m)

> Support Key ACL operations for OM HA.
> -
>
> Key: HDDS-1895
> URL: https://issues.apache.org/jira/browse/HDDS-1895
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> +HDDS-1541+ adds 4 new APIs for the Ozone RPC client. The OM HA 
> implementation needs to handle them.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14713) RBF: routeradmin supports the refreshRouterArgs command but it is not displayed

2019-08-09 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-14713?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16904189#comment-16904189
 ] 

Íñigo Goiri commented on HDFS-14713:


[~ayushtkn], I think we should actually cover it :)
[~wangzhaohui] do you mind extending the test too?

> RBF: routeradmin supports the refreshRouterArgs command but it is not displayed
> 
>
> Key: HDFS-14713
> URL: https://issues.apache.org/jira/browse/HDFS-14713
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: wangzhaohui
>Assignee: wangzhaohui
>Priority: Major
> Attachments: HDFS-14713-000.patch, after.png, before.png
>
>
> When the cmd command is null, the refreshRouterArgs command is not displayed, 
> because one value is missing from the String[] commands array:
> {code:java}
> //
> if (cmd == null) {
>   String[] commands =
>   {"-add", "-update", "-rm", "-ls", "-getDestination",
>   "-setQuota", "-clrQuota",
>   "-safemode", "-nameservice", "-getDisabledNameservices",
>   "-refresh"};
>   StringBuilder usage = new StringBuilder();
>   usage.append("Usage: hdfs dfsrouteradmin :\n");
>   for (int i = 0; i < commands.length; i++) {
> usage.append(getUsage(commands[i]));
> if (i + 1 < commands.length) {
>   usage.append("\n");
> }
>   }
>   
> }
> {code}
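
A sketch of the fix implied above: add the missing entry to the usage list
(the command string comes from the issue title; its exact position in the
array is an assumption):
{code:java}
String[] commands =
    {"-add", "-update", "-rm", "-ls", "-getDestination",
        "-setQuota", "-clrQuota",
        "-safemode", "-nameservice", "-getDisabledNameservices",
        "-refresh", "-refreshRouterArgs"};
{code}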



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1895) Support Key ACL operations for OM HA.

2019-08-09 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1895?focusedWorklogId=292249=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-292249
 ]

ASF GitHub Bot logged work on HDDS-1895:


Author: ASF GitHub Bot
Created on: 09/Aug/19 21:11
Start Date: 09/Aug/19 21:11
Worklog Time Spent: 10m 
  Work Description: arp7 commented on pull request #1230: HDDS-1895. 
Support Key ACL operations for OM HA.
URL: https://github.com/apache/hadoop/pull/1230#discussion_r312649376
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ratis/utils/OzoneManagerRatisUtils.java
 ##
 @@ -141,20 +144,26 @@ private static OMClientRequest getOMAclRequest(OMRequest 
omRequest) {
 return new OMVolumeAddAclRequest(omRequest);
   } else if (ObjectType.BUCKET == type) {
 return new OMBucketAddAclRequest(omRequest);
+  } else if (type == ObjectType.KEY) {
 
 Review comment:
   The code is using reverse order for equality. Can you use the same?
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 292249)
Time Spent: 1h 20m  (was: 1h 10m)

> Support Key ACL operations for OM HA.
> -
>
> Key: HDDS-1895
> URL: https://issues.apache.org/jira/browse/HDDS-1895
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> +HDDS-1541+ adds 4 new APIs for the Ozone RPC client. The OM HA 
> implementation needs to handle them.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-14716) Pass CreateFile Flags To Subclasses of FileSystem

2019-08-09 Thread David Mollitor (JIRA)
David Mollitor created HDFS-14716:
-

 Summary: Pass CreateFile Flags To Subclasses of FileSystem
 Key: HDFS-14716
 URL: https://issues.apache.org/jira/browse/HDFS-14716
 Project: Hadoop HDFS
  Issue Type: New Feature
Reporter: David Mollitor


I need a way to pass [HDFS-13448] {{NO_LOCAL_WRITE}} to the 
{{DistributedFileSystem}} class.  The {{create}} method should pass all of the 
{{CreateFile}} flags to the underlying implementation.  The 'overwrite' flag 
should be removed, letting the implementation read this directive as a 
{{CreateFile}} flag.

{code:java}
  public abstract FSDataOutputStream create(Path f,
  FsPermission permission,
  boolean overwrite,
  int bufferSize,
  short replication,
  long blockSize,
  Progressable progress) throws IOException;
{code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1366) Add ability in Recon to track the number of small files in an Ozone cluster.

2019-08-09 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1366?focusedWorklogId=292241=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-292241
 ]

ASF GitHub Bot logged work on HDDS-1366:


Author: ASF GitHub Bot
Created on: 09/Aug/19 20:50
Start Date: 09/Aug/19 20:50
Worklog Time Spent: 10m 
  Work Description: vivekratnavel commented on issue #1146: HDDS-1366. Add 
ability in Recon to track the number of small files in an Ozone Cluster
URL: https://github.com/apache/hadoop/pull/1146#issuecomment-520059219
 
 
   +1 LGTM
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 292241)
Time Spent: 11.5h  (was: 11h 20m)

> Add ability in Recon to track the number of small files in an Ozone cluster.
> 
>
> Key: HDDS-1366
> URL: https://issues.apache.org/jira/browse/HDDS-1366
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Recon
>Reporter: Aravindan Vijayan
>Assignee: Shweta
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 11.5h
>  Remaining Estimate: 0h
>
> Ozone users may want to track the number of small files they have in their 
> cluster and where they are present. Recon can help them with this information 
> by iterating over the OM Key Table and dividing the keys into different buckets 
> based on the data size. 
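
An illustrative sketch of the bucketing idea; the bucket boundaries below are
assumptions, not Recon's actual sizes:
{code:java}
/** Map a key's data size to a histogram bucket index. */
static int bucketFor(long dataSizeBytes) {
  final long MB = 1024L * 1024;
  if (dataSizeBytes < MB) {
    return 0;          // < 1 MB: a "small file"
  } else if (dataSizeBytes < 64 * MB) {
    return 1;          // 1 MB - 64 MB
  } else if (dataSizeBytes < 1024 * MB) {
    return 2;          // 64 MB - 1 GB
  }
  return 3;            // >= 1 GB
}
{code}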



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1927) Consolidate add/remove Acl into OzoneAclUtil class

2019-08-09 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1927?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-1927:
-
Labels: pull-request-available  (was: )

> Consolidate add/remove Acl into OzoneAclUtil class
> --
>
> Key: HDDS-1927
> URL: https://issues.apache.org/jira/browse/HDDS-1927
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
>
> This Jira was created based on @xiaoyu's comment on HDDS-1884:
> Can we abstract this add/remove logic into a common AclUtil class, as we can 
> see similar logic in both the bucket manager and the key manager? For example,
> public static boolean addAcl(List<OzoneAcl> existingAcls, OzoneAcl newAcl)
> public static boolean removeAcl(List<OzoneAcl> existingAcls, OzoneAcl newAcl)
>  
> But to do this, we need both OmKeyInfo and OMBucketInfo to use a list of 
> OzoneAcl/OzoneAclInfo.
> This Jira is to do that refactor, and also to address the above comment by 
> moving the common logic to AclUtils.
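
A minimal sketch of the proposed helper class; the signatures follow the ones
quoted above, while the duplicate-handling behavior is an assumption:
{code:java}
import java.util.List;
import org.apache.hadoop.ozone.OzoneAcl;

public final class OzoneAclUtil {
  private OzoneAclUtil() { }

  /** @return true if the acl was added; false if an equal acl already exists. */
  public static boolean addAcl(List<OzoneAcl> existingAcls, OzoneAcl newAcl) {
    if (existingAcls.contains(newAcl)) {
      return false;
    }
    return existingAcls.add(newAcl);
  }

  /** @return true if the acl was present and removed. */
  public static boolean removeAcl(List<OzoneAcl> existingAcls, OzoneAcl newAcl) {
    return existingAcls.remove(newAcl);
  }
}
{code}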



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1927) Consolidate add/remove Acl into OzoneAclUtil class

2019-08-09 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1927?focusedWorklogId=292237=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-292237
 ]

ASF GitHub Bot logged work on HDDS-1927:


Author: ASF GitHub Bot
Created on: 09/Aug/19 20:43
Start Date: 09/Aug/19 20:43
Worklog Time Spent: 10m 
  Work Description: xiaoyuyao commented on pull request #1263: HDDS-1927. 
Consolidate add/remove Acl into OzoneAclUtil class. Contri…
URL: https://github.com/apache/hadoop/pull/1263
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 292237)
Time Spent: 10m
Remaining Estimate: 0h

> Consolidate add/remove Acl into OzoneAclUtil class
> --
>
> Key: HDDS-1927
> URL: https://issues.apache.org/jira/browse/HDDS-1927
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> This Jira was created based on @xiaoyu's comment on HDDS-1884:
> Can we abstract this add/remove logic into a common AclUtil class, as we can 
> see similar logic in both the bucket manager and the key manager? For example,
> public static boolean addAcl(List<OzoneAcl> existingAcls, OzoneAcl newAcl)
> public static boolean removeAcl(List<OzoneAcl> existingAcls, OzoneAcl newAcl)
>  
> But to do this, we need both OmKeyInfo and OMBucketInfo to use a list of 
> OzoneAcl/OzoneAclInfo.
> This Jira is to do that refactor, and also to address the above comment by 
> moving the common logic to AclUtils.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1927) Consolidate add/remove Acl into OzoneAclUtil class

2019-08-09 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1927?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-1927:
-
Summary: Consolidate add/remove Acl into OzoneAclUtil class  (was: Create 
AclUtil class with helpers for add/remove Acl.)

> Consolidate add/remove Acl into OzoneAclUtil class
> --
>
> Key: HDDS-1927
> URL: https://issues.apache.org/jira/browse/HDDS-1927
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Xiaoyu Yao
>Priority: Major
>
> This Jira was created based on @xiaoyu's comment on HDDS-1884:
> Can we abstract this add/remove logic into a common AclUtil class, as we can 
> see similar logic in both the bucket manager and the key manager? For example,
> public static boolean addAcl(List<OzoneAcl> existingAcls, OzoneAcl newAcl)
> public static boolean removeAcl(List<OzoneAcl> existingAcls, OzoneAcl newAcl)
>  
> But to do this, we need both OmKeyInfo and OMBucketInfo to use a list of 
> OzoneAcl/OzoneAclInfo.
> This Jira is to do that refactor, and also to address the above comment by 
> moving the common logic to AclUtils.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14378) Simplify the design of multiple NNs and the logic of both edit log roll and checkpoint

2019-08-09 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14378?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-14378:
---
Priority: Major  (was: Minor)

> Simplify the design of multiple NNs and the logic of both edit log roll and 
> checkpoint
> -
>
> Key: HDFS-14378
> URL: https://issues.apache.org/jira/browse/HDFS-14378
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: ha, namenode
>Affects Versions: 3.1.2
>Reporter: star
>Assignee: star
>Priority: Major
> Attachments: HDFS-14378-trunk.001.patch, HDFS-14378-trunk.002.patch, 
> HDFS-14378-trunk.003.patch, HDFS-14378-trunk.004.patch, 
> HDFS-14378-trunk.005.patch, HDFS-14378-trunk.006.patch
>
>
>       HDFS-6440 introduced a mechanism to support more than 2 NNs. It 
> implements a first-writer-wins policy to avoid duplicated fsimage 
> downloading. The variable 'isPrimaryCheckPointer' is used to hold the 
> first-writer state, with which the SNN will provide the fsimage for the ANN 
> next time. So we have three roles in the NN cluster: the ANN, one primary 
> SNN, and one or more normal SNNs.
>       Since HDFS-12248, there may be more than two primary SNNs shortly after 
> an exception occurs. It handles a scenario where the SNN will not upload the 
> fsimage on IOE and Interrupted exceptions. Though this will not cause any 
> further functional issues, it is inconsistent. 
>       Furthermore, the edit log may be rolled more frequently than necessary 
> with multiple Standby name nodes, HDFS-14349. (I'm not so sure about this; I 
> will verify it with unit tests, or anyone could point it out.)
>       Given all that, I'm wondering if we could make this simple with the 
> following changes:
>  * There are only two roles: ANN and SNN.
>  * The ANN will roll its edit log every DFS_HA_LOGROLL_PERIOD_KEY period.
>  * The ANN will select an SNN to download the checkpoint.
> The SNN will just do log tailing and checkpointing, and provide a servlet for 
> fsimage downloading as normal. The SNN will not try to roll the edit log or 
> send checkpoint requests to the ANN.
> In a word, the ANN will be more active. Suggestions are welcome.
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org


