[jira] [Commented] (HDFS-14909) DFSNetworkTopology#chooseRandomWithStorageType() should not decrease storage count for excluded node which is already part of excluded scope

2019-10-17 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14909?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16953943#comment-16953943
 ] 

Hudson commented on HDFS-14909:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17546 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17546/])
HDFS-14909. DFSNetworkTopology#chooseRandomWithStorageType() should not 
(surendralilhore: rev 54dc6b7d720851eb6017906d664aa0fda2698225)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/net/DFSNetworkTopology.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestMissingBlocksAlert.java


> DFSNetworkTopology#chooseRandomWithStorageType() should not decrease storage 
> count for excluded node which is already part of excluded scope 
> -
>
> Key: HDFS-14909
> URL: https://issues.apache.org/jira/browse/HDFS-14909
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 3.1.1
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
>Priority: Major
> Attachments: HDFS-14909.001.patch, HDFS-14909.002.patch, 
> HDFS-14909.003.patch
>
>







[jira] [Commented] (HDFS-14810) Review FSNameSystem editlog sync

2019-10-17 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14810?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16953914#comment-16953914
 ] 

Hudson commented on HDFS-14810:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17545 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17545/])
HDFS-14810. Review FSNameSystem editlog sync. Contributed by Xiaoqiao 
(ayushsaxena: rev 5527d79adb9b1e2f2779c283f81d6a3d5447babc)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java


> Review FSNameSystem editlog sync
> 
>
> Key: HDFS-14810
> URL: https://issues.apache.org/jira/browse/HDFS-14810
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: Xiaoqiao He
>Assignee: Xiaoqiao He
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14810.001.patch, HDFS-14810.002.patch, 
> HDFS-14810.003.patch, HDFS-14810.004.patch
>
>
> Refactor and unify the type of edit log sync in FSNamesystem, as HDFS-11246 
> mentioned.






[jira] [Commented] (HDFS-14739) RBF: LS command for mount point shows wrong owner and permission information.

2019-10-16 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16952877#comment-16952877
 ] 

Hudson commented on HDFS-14739:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17541 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17541/])
HDFS-14739. RBF: LS command for mount point shows wrong owner and (ayushsaxena: 
rev 375224edebb1c937afe4bbea8fe884499ca8ece5)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestDisableNameservices.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/resolver/MountTableResolver.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterClientProtocol.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterMountTable.java
* (add) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/resolver/RouterResolveException.java


> RBF: LS command for mount point shows wrong owner and permission information.
> -
>
> Key: HDFS-14739
> URL: https://issues.apache.org/jira/browse/HDFS-14739
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: xuzq
>Assignee: Jinglun
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14739-trunk-001.patch, HDFS-14739-trunk-002.patch, 
> HDFS-14739-trunk-003.patch, HDFS-14739-trunk-004.patch, 
> HDFS-14739-trunk-005.patch, HDFS-14739-trunk-006.patch, 
> HDFS-14739-trunk-007.patch, HDFS-14739-trunk-008.patch, 
> HDFS-14739-trunk-009.patch, HDFS-14739-trunk-010.patch, 
> HDFS-14739-trunk-011.patch, image-2019-08-16-17-15-50-614.png, 
> image-2019-08-16-17-16-00-863.png, image-2019-08-16-17-16-34-325.png
>
>
> ||source||target namespace||destination||owner||group||permission||
> |/mnt|ns0|/mnt|mnt|mnt_group|755|
> |/mnt/test1|ns1|/mnt/test1|mnt_test1|mnt_test1_group|755|
> |/test1|ns1|/test1|test1|test1_group|755|
> When doing getListing("/mnt"), the owner of */mnt/test1* in the result should 
> be *mnt_test1* instead of *test1*.
>  
> And if the mount table is as below, we should support getListing("/mnt") 
> instead of throwing an IOException when 
> dfs.federation.router.default.nameservice.enable is false.
> ||source||target namespace||destination||owner||group||permission||
> |/mnt/test1|ns0|/mnt/test1|test1|test1|755|
> |/mnt/test2|ns1|/mnt/test2|test2|test2|755|
>  
>  






[jira] [Commented] (HDFS-14886) In NameNode Web UI's Startup Progress page, Loading edits always shows 0 sec

2019-10-14 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14886?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16951227#comment-16951227
 ] 

Hudson commented on HDFS-14886:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17534 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17534/])
HDFS-14886. In NameNode Web UI's Startup Progress page, Loading edits 
(surendralilhore: rev 336abbd8737f3dff38f7bdad9721511c711c522b)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImage.java


> In NameNode Web UI's Startup Progress page, Loading edits always shows 0 sec
> 
>
> Key: HDFS-14886
> URL: https://issues.apache.org/jira/browse/HDFS-14886
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: hemanthboyina
>Assignee: hemanthboyina
>Priority: Major
> Attachments: HDFS-14886.001.patch, HDFS-14886.002.patch, 
> HDFS-14886.003.patch, HDFS-14886_After.png, HDFS-14886_before.png
>
>







[jira] [Commented] (HDFS-14856) Add ability to import file ACLs from remote store

2019-10-14 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14856?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16951157#comment-16951157
 ] 

Hudson commented on HDFS-14856:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17533 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17533/])
HDFS-14856. Fetch file ACLs while mounting external store. (#1478) (virajith: 
rev fabd41fa480303f86bfe7b6ae0277bc0b6015f80)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
* (edit) 
hadoop-tools/hadoop-fs2img/src/test/java/org/apache/hadoop/hdfs/server/namenode/RandomTreeWalk.java
* (edit) 
hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/UGIResolver.java
* (edit) 
hadoop-tools/hadoop-fs2img/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestSingleUGIResolver.java
* (edit) 
hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/TreePath.java
* (edit) 
hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/TreeWalk.java
* (add) 
hadoop-tools/hadoop-fs2img/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSTreeWalk.java
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
* (edit) 
hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSTreeWalk.java
* (edit) 
hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/SingleUGIResolver.java


> Add ability to import file ACLs from remote store
> -
>
> Key: HDFS-14856
> URL: https://issues.apache.org/jira/browse/HDFS-14856
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ashvin Agrawal
>Assignee: Ashvin Agrawal
>Priority: Major
>
> Provided storage (HDFS-9806) allows data on external storage systems to 
> seamlessly appear as files on HDFS. However, in the implementation today, the 
> external store scanner, {{FsTreeWalk}}, ignores any ACLs on the data. In a 
> secure HDFS setup where the external storage system and HDFS belong to the 
> same security domain, uniform enforcement of the authorization policies may be 
> desired. This task aims to extend the external store scanner to support this 
> use case. When configured, the scanner should attempt to fetch ACLs and 
> provide them to the consumer.






[jira] [Commented] (HDFS-14238) A log in NNThroughputBenchmark should change log level to "INFO" instead of "ERROR"

2019-10-12 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14238?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16950076#comment-16950076
 ] 

Hudson commented on HDFS-14238:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17529 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17529/])
HDFS-14238. A log in NNThroughputBenchmark should change log level to 
(ayushsaxena: rev 5f4641a120331d049a55c519a0d15da18c820fed)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/NNThroughputBenchmark.java


> A log in NNThroughputBenchmark should change log level to "INFO" instead of 
> "ERROR"
> 
>
> Key: HDFS-14238
> URL: https://issues.apache.org/jira/browse/HDFS-14238
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Shen Yinjie
>Assignee: Shen Yinjie
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14238.patch
>
>
> In NNThroughputBenchmark#150, LOG.error("Log level = " + logLevel.toString());
> this log level should be changed to "LOG.info()", since no error occurs here; 
> the statement only tells us that the NameNode log level has changed.
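A minimal before/after sketch of the proposed change (slf4j-style logger assumed; not the exact patch):

{code:java}
// Before: logged at ERROR although nothing has failed.
LOG.error("Log level = " + logLevel.toString());

// After: the same message at INFO, matching its informational intent.
LOG.info("Log level = " + logLevel.toString());
{code}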






[jira] [Commented] (HDFS-14899) Use Relative URLS in Hadoop HDFS RBF

2019-10-12 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16949960#comment-16949960
 ] 

Hudson commented on HDFS-14899:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17528 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17528/])
HDFS-14899. Use Relative URLS in Hadoop HDFS RBF. Contributed by David 
(ayushsaxena: rev 6e5cd5273f1107635867ee863cb0e17ef7cc4afa)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/webapps/router/federationhealth.html
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/webapps/router/federationhealth.js


> Use Relative URLS in Hadoop HDFS RBF
> 
>
> Key: HDFS-14899
> URL: https://issues.apache.org/jira/browse/HDFS-14899
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: rbf
>Affects Versions: 3.2.0
>Reporter: David Mollitor
>Assignee: David Mollitor
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14899.1.patch
>
>







[jira] [Commented] (HDDS-2213) Reduce key provider loading log level in OzoneFileSystem#getAdditionalTokenIssuers

2019-10-11 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2213?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16949672#comment-16949672
 ] 

Hudson commented on HDDS-2213:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17526 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17526/])
HDDS-2213. Reduce key provider loading log level in (arp7: rev 
c561a70c49dd62d8ca563182af17ac21479a87de)
* (edit) 
hadoop-ozone/ozonefs/src/main/java/org/apache/hadoop/fs/ozone/OzoneFileSystem.java


> Reduce key provider loading log level in 
> OzoneFileSystem#getAdditionalTokenIssuers
> --
>
> Key: HDDS-2213
> URL: https://issues.apache.org/jira/browse/HDDS-2213
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Shweta
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> OzoneFileSystem#getAdditionalTokenIssuers logs an error when a secure client 
> tries to collect an ozone delegation token to run MR/Spark jobs but the ozone 
> file system does not have a KMS provider configured. In this case, we simply 
> return a null provider in the code below. This is a benign error, and we 
> should reduce the log level to debug.
> {code:java}
> KeyProvider keyProvider;
> try {
>   keyProvider = getKeyProvider();
> } catch (IOException ioe) {
>   LOG.error("Error retrieving KeyProvider.", ioe);
>   return null;
> }
> {code}
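For illustration, a minimal sketch of the proposed fix, assuming an slf4j logger: the same catch block, logging at DEBUG instead of ERROR.

{code:java}
KeyProvider keyProvider;
try {
  keyProvider = getKeyProvider();
} catch (IOException ioe) {
  // Benign when no KMS provider is configured: log at DEBUG, not ERROR.
  LOG.debug("Error retrieving KeyProvider.", ioe);
  return null;
}
{code}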






[jira] [Commented] (HDDS-2282) scmcli pipeline list command throws NullPointerException

2019-10-11 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2282?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16949166#comment-16949166
 ] 

Hudson commented on HDDS-2282:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17523 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17523/])
HDDS-2282. scmcli pipeline list command throws NullPointerException. (bharat: 
rev f267917ce3cf282b32166e39af871a8d1231d090)
* (edit) 
hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/XceiverClientManager.java
* (edit) 
hadoop-hdds/tools/src/main/java/org/apache/hadoop/hdds/scm/cli/SCMCLI.java
* (edit) hadoop-ozone/dist/src/main/compose/ozonesecure/test.sh
* (add) hadoop-ozone/dist/src/main/smoketest/scmcli/pipeline.robot


> scmcli pipeline list command throws NullPointerException
> 
>
> Key: HDDS-2282
> URL: https://issues.apache.org/jira/browse/HDDS-2282
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Nilotpal Nandi
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> ozone scmcli pipeline list
> {noformat}
> java.lang.NullPointerException
>   at 
> com.google.common.base.Preconditions.checkNotNull(Preconditions.java:187)
>   at 
> org.apache.hadoop.hdds.scm.XceiverClientManager.<init>(XceiverClientManager.java:98)
>   at 
> org.apache.hadoop.hdds.scm.XceiverClientManager.<init>(XceiverClientManager.java:83)
>   at 
> org.apache.hadoop.hdds.scm.cli.SCMCLI.createScmClient(SCMCLI.java:139)
>   at 
> org.apache.hadoop.hdds.scm.cli.pipeline.ListPipelinesSubcommand.call(ListPipelinesSubcommand.java:55)
>   at 
> org.apache.hadoop.hdds.scm.cli.pipeline.ListPipelinesSubcommand.call(ListPipelinesSubcommand.java:30)
>   at picocli.CommandLine.execute(CommandLine.java:1173)
>   at picocli.CommandLine.access$800(CommandLine.java:141)
>   at picocli.CommandLine$RunLast.handle(CommandLine.java:1367)
>   at picocli.CommandLine$RunLast.handle(CommandLine.java:1335)
>   at 
> picocli.CommandLine$AbstractParseResultHandler.handleParseResult(CommandLine.java:1243)
>   at picocli.CommandLine.parseWithHandlers(CommandLine.java:1526)
>   at picocli.CommandLine.parseWithHandler(CommandLine.java:1465)
>   at org.apache.hadoop.hdds.cli.GenericCli.execute(GenericCli.java:65)
>   at org.apache.hadoop.hdds.cli.GenericCli.run(GenericCli.java:56)
>   at org.apache.hadoop.hdds.scm.cli.SCMCLI.main(SCMCLI.java:101){noformat}






[jira] [Commented] (HDDS-1986) Fix listkeys API

2019-10-10 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-1986?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16949032#comment-16949032
 ] 

Hudson commented on HDDS-1986:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17522 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17522/])
HDDS-1986. Fix listkeys API. (#1588) (github: rev 
9c72bf462196e1d71a243903b74e3c4673f29efb)
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OmMetadataManagerImpl.java
* (edit) 
hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/request/TestOMRequestUtils.java
* (edit) 
hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/TestOmMetadataManager.java


> Fix listkeys API
> 
>
> Key: HDDS-1986
> URL: https://issues.apache.org/jira/browse/HDDS-1986
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 7h 10m
>  Remaining Estimate: 0h
>
> This Jira is to fix the listKeys API in the HA code path.
> In HA, we have an in-memory cache: we put the result into the in-memory cache 
> and return the response; later it is picked up by the double-buffer thread and 
> flushed to disk. So now, when doing listKeys, it should use both the in-memory 
> cache and the RocksDB key table to list the keys in a bucket, as sketched 
> below.
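A self-contained sketch of the overlay logic described above, with plain Java maps standing in for the RocksDB key table and the table cache (all names here are hypothetical illustrations, not the actual OmMetadataManagerImpl API):

{code:java}
import java.util.Map;
import java.util.Optional;
import java.util.SortedMap;
import java.util.TreeMap;

public class ListingMergeSketch {
  /**
   * Lists keys under a prefix by overlaying un-flushed cache entries on
   * top of the on-disk table. A present cache value wins over the disk
   * value; an empty Optional marks a key deleted in the cache but not
   * yet flushed by the double buffer.
   */
  static SortedMap<String, String> list(SortedMap<String, String> dbTable,
      Map<String, Optional<String>> cache, String prefix) {
    SortedMap<String, String> result = new TreeMap<>();
    dbTable.forEach((key, value) -> {
      if (key.startsWith(prefix)) {
        result.put(key, value);
      }
    });
    cache.forEach((key, value) -> {
      if (key.startsWith(prefix)) {
        if (value.isPresent()) {
          result.put(key, value.get());  // cache overrides disk
        } else {
          result.remove(key);            // delete marker: drop stale entry
        }
      }
    });
    return result;
  }
}
{code}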






[jira] [Commented] (HDDS-1984) Fix listBucket API

2019-10-10 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-1984?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16949003#comment-16949003
 ] 

Hudson commented on HDDS-1984:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17521 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17521/])
HDDS-1984. Fix listBucket API. (#1555) (github: rev 
957253fea682b6389b02b0191b71b9e12087bd72)
* (add) 
hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/TestOmMetadataManager.java
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/utils/db/cache/CacheKey.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OmMetadataManagerImpl.java
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/utils/db/cache/TableCacheImpl.java


> Fix listBucket API
> --
>
> Key: HDDS-1984
> URL: https://issues.apache.org/jira/browse/HDDS-1984
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 3h 20m
>  Remaining Estimate: 0h
>
> This Jira is to fix the listBucket API in the HA code path.
> In HA, we have an in-memory cache: we put the result into the in-memory cache 
> and return the response; later it is picked up by the double-buffer thread and 
> flushed to disk. So now, when doing listBuckets, it should use both the 
> in-memory cache and the RocksDB bucket table to list the buckets in a volume.






[jira] [Commented] (HDDS-2269) Provide config for fair/non-fair for OM RW Lock

2019-10-10 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2269?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16948841#comment-16948841
 ] 

Hudson commented on HDDS-2269:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17520 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17520/])
HDDS-2269. Provide config for fair/non-fair for OM RW Lock. (#1623) (nanda: rev 
4850b3aa86970f7af8f528564f2573becbd8e434)
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/OzoneConfigKeys.java
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/lock/ActiveLock.java
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/lock/PooledLockFactory.java
* (edit) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/lock/OzoneManagerLock.java
* (edit) hadoop-hdds/common/src/main/resources/ozone-default.xml
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/lock/LockManager.java


> Provide config for fair/non-fair for OM RW Lock
> ---
>
> Key: HDDS-2269
> URL: https://issues.apache.org/jira/browse/HDDS-2269
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> Provide a config in OzoneManagerLock to choose fair or non-fair mode for the 
> OM RW lock.
> Created based on review comments during HDDS-2244.
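For context, java.util.concurrent's ReentrantReadWriteLock already takes a fairness flag at construction, so the change essentially amounts to reading a boolean config and passing it through. A hedged sketch (the config key and class below are illustrative, not necessarily what the patch introduces):

{code:java}
import java.util.concurrent.locks.ReentrantReadWriteLock;
import org.apache.hadoop.conf.Configuration;

public class FairLockConfigSketch {
  // Hypothetical config key, for illustration only.
  static final String OM_LOCK_FAIR_KEY = "ozone.om.lock.fair";

  static ReentrantReadWriteLock createLock(Configuration conf) {
    // fair = true: longest-waiting threads acquire first (lower throughput);
    // fair = false (default): barging allowed, usually higher throughput.
    boolean fair = conf.getBoolean(OM_LOCK_FAIR_KEY, false);
    return new ReentrantReadWriteLock(fair);
  }
}
{code}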






[jira] [Commented] (HDFS-14900) Fix build failure of hadoop-hdfs-native-client

2019-10-10 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14900?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16948752#comment-16948752
 ] 

Hudson commented on HDFS-14900:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17518 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17518/])
HDFS-14900. Fix build failure of hadoop-hdfs-native-client. Contributed 
(ayushsaxena: rev 104ccca916997bbf3c37d87adbae673f4dd42036)
* (edit) dev-support/docker/Dockerfile
* (edit) BUILDING.txt


>  Fix build failure of hadoop-hdfs-native-client
> ---
>
> Key: HDFS-14900
> URL: https://issues.apache.org/jira/browse/HDFS-14900
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.3.0
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14900.001.patch, HDFS-14900.002.patch, 
> HDFS-14900.003.patch
>
>
> HADOOP-16558 removed Protocol Buffers from the build requirements, but 
> libhdfspp requires libprotobuf and libprotoc. The {{-Pnative}} build fails if 
> Protocol Buffers is not installed.






[jira] [Commented] (HDDS-2266) Avoid evaluation of LOG.trace and LOG.debug statement in the read/write path (Ozone)

2019-10-10 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2266?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16948383#comment-16948383
 ] 

Hudson commented on HDDS-2266:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17517 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17517/])
HDDS-2266. Avoid evaluation of LOG.trace and LOG.debug statement in the 
(shashikant: rev a031388a2e8b7ac60ebca5a08216e2dd19ea6933)
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OpenKeyCleanupService.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/BucketManagerImpl.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/PrefixManagerImpl.java
* (edit) 
hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/exception/OS3ExceptionMapper.java
* (edit) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/security/OzoneDelegationTokenSelector.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/VolumeManagerImpl.java
* (edit) 
hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/OzoneClientProducer.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/protocolPB/OzoneManagerProtocolServerSideTranslatorPB.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ratis/OzoneManagerDoubleBuffer.java
* (edit) 
hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/AWSV4AuthParser.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/volume/acl/OMVolumeSetAclRequest.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ratis/OzoneManagerRatisServer.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/KeyManagerImpl.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ratis/OzoneManagerRatisClient.java
* (edit) 
hadoop-ozone/ozonefs/src/main/java/org/apache/hadoop/fs/ozone/BasicOzoneFileSystem.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/bucket/acl/OMBucketSetAclRequest.java
* (edit) 
hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/io/BlockOutputStreamEntryPool.java
* (edit) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/security/OzoneSecretManager.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/protocolPB/OzoneManagerHARequestHandlerImpl.java
* (edit) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/ha/OMFailoverProxyProvider.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/security/acl/OzoneNativeAuthorizer.java
* (edit) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/security/OzoneDelegationTokenSecretManager.java
* (edit) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/security/OzoneBlockTokenSecretManager.java
* (edit) 
hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/io/KeyInputStream.java
* (edit) 
hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/rpc/RpcClient.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneManager.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/protocolPB/OzoneManagerRequestHandler.java
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/pipeline/Pipeline.java
* (edit) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/helpers/OMRatisHelper.java
* (edit) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/S3SecretManagerImpl.java
* (edit) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/lock/OzoneManagerLock.java


> Avoid evaluation of LOG.trace and LOG.debug statement in the read/write path 
> (Ozone)
> 
>
> Key: HDDS-2266
> URL: https://issues.apache.org/jira/browse/HDDS-2266
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone CLI, Ozone Manager
>Affects Versions: 0.5.0
>Reporter: Siddharth Wagle
>Assignee: Siddharth Wagle
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> The arguments of LOG.trace and LOG.debug statements are evaluated even when 
> debug/trace logging is disabled. This Jira proposes to wrap all the 
> trace/debug logging with LOG.isDebugEnabled and LOG.isTraceEnabled checks to 
> prevent that evaluation.
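A minimal before/after sketch of the pattern (slf4j logger assumed; containerId and state are illustrative variables):

{code:java}
// Before: the string concatenation runs even when DEBUG is off.
LOG.debug("Processing container " + containerId + " state=" + state);

// After: the guard skips argument evaluation when DEBUG is disabled.
if (LOG.isDebugEnabled()) {
  LOG.debug("Processing container {} state={}", containerId, state);
}
{code}

Note that with slf4j's parameterized messages the guard mainly pays off when computing an argument is itself expensive; plain {} placeholders already defer the string formatting.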






[jira] [Commented] (HDFS-14898) Use Relative URLS in Hadoop HDFS HTTP FS

2019-10-09 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14898?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16948181#comment-16948181
 ] 

Hudson commented on HDFS-14898:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17516 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17516/])
HDFS-14898. Use Relative URLS in Hadoop HDFS HTTP FS. Contributed by 
(ayushsaxena: rev eeb58a07e24e6a1abdf32e1c198a5a1e9c2a8f1a)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/resources/webapps/static/index.html


> Use Relative URLS in Hadoop HDFS HTTP FS
> 
>
> Key: HDFS-14898
> URL: https://issues.apache.org/jira/browse/HDFS-14898
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: httpfs
>Affects Versions: 3.2.0
>Reporter: David Mollitor
>Assignee: David Mollitor
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14898.1.patch, HDFS-14898.2.patch
>
>







[jira] [Commented] (HDFS-14754) Erasure Coding : The number of Under-Replicated Blocks never reduced

2019-10-09 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16947912#comment-16947912
 ] 

Hudson commented on HDFS-14754:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17515 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17515/])
HDFS-14754. Erasure Coding : The number of Under-Replicated Blocks never 
(surendralilhore: rev d76e2655ace56490a92da70bde9e651ce515f80c)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestRedudantBlocks.java


> Erasure Coding :  The number of Under-Replicated Blocks never reduced
> -
>
> Key: HDFS-14754
> URL: https://issues.apache.org/jira/browse/HDFS-14754
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ec
>Reporter: hemanthboyina
>Assignee: hemanthboyina
>Priority: Critical
> Fix For: 3.3.0, 3.1.4, 3.2.2
>
> Attachments: HDFS-14754-addendum.001.patch, 
> HDFS-14754-addendum.002.patch, HDFS-14754-addendum.003.patch, 
> HDFS-14754.001.patch, HDFS-14754.002.patch, HDFS-14754.003.patch, 
> HDFS-14754.004.patch, HDFS-14754.005.patch, HDFS-14754.006.patch, 
> HDFS-14754.007.patch, HDFS-14754.008.patch, HDFS-14754.branch-3.1.patch
>
>
> Using EC RS-3-2 with 6 DNs, 
> we came across a scenario where, among the 5 EC blocks, the same block was 
> replicated thrice and two blocks went missing.
> The replicated block was not being deleted, and the missing blocks could not 
> be reconstructed.






[jira] [Commented] (HDDS-2265) integration.sh may report false negative

2019-10-09 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2265?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16947755#comment-16947755
 ] 

Hudson commented on HDDS-2265:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17513 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17513/])
HDDS-2265. integration.sh may report false negative (elek: rev 
2d81abce5ecfec555eda4819a6e2f5b22e1cd9b8)
* (edit) hadoop-ozone/dev-support/checks/_mvn_unit_report.sh


> integration.sh may report false negative
> 
>
> Key: HDDS-2265
> URL: https://issues.apache.org/jira/browse/HDDS-2265
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: build, test
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> Sometimes the integration test run gets killed, and {{integration.sh}} 
> incorrectly reports "success". Example:
> {noformat:title=https://github.com/elek/ozone-ci-q4/tree/ae930d6f7f10c7d2aeaf1f2f21b18ada954ea444/pr/pr-hdds-2259-hlwmv/integration/result}
> success
> {noformat}
> {noformat:title=https://github.com/elek/ozone-ci-q4/blob/ae930d6f7f10c7d2aeaf1f2f21b18ada954ea444/pr/pr-hdds-2259-hlwmv/integration/output.log#L2457}
> /workdir/hadoop-ozone/dev-support/checks/integration.sh: line 22:   369 
> Killed  mvn -B -fn test -f pom.ozone.xml -pl 
> :hadoop-ozone-integration-test,:hadoop-ozone-filesystem,:hadoop-ozone-tools 
> -Dtest=\!TestMiniChaosOzoneCluster "$@"
> {noformat}






[jira] [Commented] (HDDS-2217) Remove log4j and audit configuration from the docker-config files

2019-10-09 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2217?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16947716#comment-16947716
 ] 

Hudson commented on HDDS-2217:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17512 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17512/])
HDDS-2217. Remove log4j and audit configuration from the docker-config (elek: 
rev 4b0a5bca465c84265b8305e001809fd1f986e8da)
* (edit) hadoop-ozone/dev-support/checks/_mvn_unit_report.sh


> Remove log4j and audit configuration from the docker-config files
> -
>
> Key: HDDS-2217
> URL: https://issues.apache.org/jira/browse/HDDS-2217
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>  Components: docker
>Reporter: Marton Elek
>Assignee: Chris Teoh
>Priority: Major
>  Labels: newbie, pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 4h
>  Remaining Estimate: 0h
>
> Log4j configuration lines are added to the docker-config under 
> hadoop-ozone/dist/src/main/compose/..., 
> mainly to make it easier to reconfigure the log level of any component.
> As we already have an "ozone insight" tool which can help us modify the log 
> level at runtime, we don't need these lines any more.
> {code:java}
> LOG4J.PROPERTIES_log4j.rootLogger=INFO, stdout
> LOG4J.PROPERTIES_log4j.appender.stdout=org.apache.log4j.ConsoleAppender
> LOG4J.PROPERTIES_log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
> LOG4J.PROPERTIES_log4j.appender.stdout.layout.ConversionPattern=%d{yyyy-MM-dd 
> HH:mm:ss} %-5p %c{1}:%L - %m%n
> LOG4J.PROPERTIES_log4j.logger.org.apache.hadoop.util.NativeCodeLoader=ERROR
> LOG4J.PROPERTIES_log4j.logger.org.apache.ratis.conf.ConfUtils=WARN
> LOG4J.PROPERTIES_log4j.logger.org.apache.hadoop.security.ShellBasedUnixGroupsMapping=ERROR
> LOG4J.PROPERTIES_log4j.logger.org.apache.ratis.grpc.client.GrpcClientProtocolClient=WARN
> LOG4J.PROPERTIES_log4j.logger.http.requests.s3gateway=INFO,s3gatewayrequestlog
> LOG4J.PROPERTIES_log4j.appender.s3gatewayrequestlog=org.apache.hadoop.http.HttpRequestLogAppender
> LOG4J.PROPERTIES_log4j.appender.s3gatewayrequestlog.Filename=/tmp/jetty-s3gateway-yyyy_mm_dd.log
> LOG4J.PROPERTIES_log4j.appender.s3gatewayrequestlog.RetainDays=3 {code}
> We can remove them together with the audit log entries, as we already have a 
> default log4j.properties / audit log4j2 config.
> After the removal the clusters should be tested: the Ozone CLI should not 
> print any confusing log messages (such as NativeLib is missing or anything 
> else). AFAIK they are already turned off in the etc/hadoop log4j.properties.
>  
>  






[jira] [Commented] (HDDS-2217) Remove log4j and audit configuration from the docker-config files

2019-10-09 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2217?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16947700#comment-16947700
 ] 

Hudson commented on HDDS-2217:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17511 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17511/])
HDDS-2217. Remove log4j and audit configuration from the docker-config (elek: 
rev 1f954e679895f68d6ce9e822498daa2b142e7e46)
* (edit) hadoop-ozone/dist/src/main/compose/ozone-recon/docker-config
* (edit) hadoop-ozone/dist/src/main/compose/ozone-hdfs/docker-config
* (edit) hadoop-ozone/dist/src/main/compose/ozoneblockade/docker-config
* (edit) hadoop-ozone/dist/src/main/compose/ozones3/docker-config
* (edit) hadoop-ozone/dist/src/main/compose/ozonesecure-mr/docker-config
* (edit) hadoop-ozone/dist/src/main/compose/ozoneperf/docker-config
* (edit) hadoop-ozone/dist/src/main/compose/ozone/docker-config
* (edit) hadoop-ozone/dist/src/main/compose/ozones3-haproxy/docker-config
* (edit) hadoop-ozone/dist/src/main/compose/ozonesecure/docker-config
* (edit) hadoop-ozone/dist/src/main/compose/ozone-mr/common-config
* (edit) hadoop-ozone/dist/src/main/compose/ozone-topology/docker-config
* (edit) hadoop-ozone/dist/src/main/compose/ozonescripts/docker-config
* (edit) hadoop-ozone/dist/src/main/compose/ozone-om-ha/docker-config


> Remove log4j and audit configuration from the docker-config files
> -
>
> Key: HDDS-2217
> URL: https://issues.apache.org/jira/browse/HDDS-2217
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>  Components: docker
>Reporter: Marton Elek
>Assignee: Chris Teoh
>Priority: Major
>  Labels: newbie, pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 4h
>  Remaining Estimate: 0h
>
> Log4j configuration lines are added to the docker-config under 
> hadoop-ozone/dist/src/main/compose/..., 
> mainly to make it easier to reconfigure the log level of any component.
> As we already have an "ozone insight" tool which can help us modify the log 
> level at runtime, we don't need these lines any more.
> {code:java}
> LOG4J.PROPERTIES_log4j.rootLogger=INFO, stdout
> LOG4J.PROPERTIES_log4j.appender.stdout=org.apache.log4j.ConsoleAppender
> LOG4J.PROPERTIES_log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
> LOG4J.PROPERTIES_log4j.appender.stdout.layout.ConversionPattern=%d{yyyy-MM-dd 
> HH:mm:ss} %-5p %c{1}:%L - %m%n
> LOG4J.PROPERTIES_log4j.logger.org.apache.hadoop.util.NativeCodeLoader=ERROR
> LOG4J.PROPERTIES_log4j.logger.org.apache.ratis.conf.ConfUtils=WARN
> LOG4J.PROPERTIES_log4j.logger.org.apache.hadoop.security.ShellBasedUnixGroupsMapping=ERROR
> LOG4J.PROPERTIES_log4j.logger.org.apache.ratis.grpc.client.GrpcClientProtocolClient=WARN
> LOG4J.PROPERTIES_log4j.logger.http.requests.s3gateway=INFO,s3gatewayrequestlog
> LOG4J.PROPERTIES_log4j.appender.s3gatewayrequestlog=org.apache.hadoop.http.HttpRequestLogAppender
> LOG4J.PROPERTIES_log4j.appender.s3gatewayrequestlog.Filename=/tmp/jetty-s3gateway-yyyy_mm_dd.log
> LOG4J.PROPERTIES_log4j.appender.s3gatewayrequestlog.RetainDays=3 {code}
> We can remove them together with the audit log entries, as we already have a 
> default log4j.properties / audit log4j2 config.
> After the removal the clusters should be tested: the Ozone CLI should not 
> print any confusing log messages (such as NativeLib is missing or anything 
> else). AFAIK they are already turned off in the etc/hadoop log4j.properties.
>  
>  






[jira] [Commented] (HDDS-2233) Remove ByteStringHelper and refactor the code to the place where it is used

2019-10-09 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16947383#comment-16947383
 ] 

Hudson commented on HDDS-2233:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17507 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17507/])
HDDS-2233 - Remove ByteStringHelper and refactor the code to the place 
(shashikant: rev 1d279304079cb898e84c8f37ec40fb0e5cfb92ae)
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/impl/ChunkManagerImpl.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/interfaces/ChunkManager.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/impl/ChunkManagerDummyImpl.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/ContainerTestHelper.java
* (edit) 
hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/io/BlockOutputStreamEntryPool.java
* (edit) 
hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/rpc/RpcClient.java
* (edit) 
hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/storage/BlockOutputStream.java
* (edit) 
hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/XceiverClientManager.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/KeyValueHandler.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/helpers/ChunkUtils.java
* (add) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/ByteStringConversion.java
* (edit) 
hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/keyvalue/TestChunkManagerImpl.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/common/impl/TestContainerPersistence.java
* (edit) 
hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/storage/BufferPool.java
* (delete) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/ByteStringHelper.java


> Remove ByteStringHelper and refactor the code to the place where it is used
> 
>
> Key: HDDS-2233
> URL: https://issues.apache.org/jira/browse/HDDS-2233
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Istvan Fajth
>Assignee: Istvan Fajth
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 2h 20m
>  Remaining Estimate: 0h
>
> See HDDS-2203, where I reported a race condition.
> Later in the discussion we agreed that it is better to refactor the code and 
> remove the class completely for now, which would also resolve the race 
> condition.






[jira] [Commented] (HDDS-2244) Use new ReadWrite lock in OzoneManager

2019-10-08 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2244?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16947166#comment-16947166
 ] 

Hudson commented on HDDS-2244:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17506 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17506/])
HDDS-2244. Use new ReadWrite lock in OzoneManager. (#1589) (github: rev 
87d9f3668ce00171d7c2dfbbaf84acb482317b67)
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/VolumeManagerImpl.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/s3/multipart/S3MultipartUploadCommitPartRequest.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/acl/prefix/OMPrefixAclRequest.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/OMKeyCreateRequest.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/s3/multipart/S3InitiateMultipartUploadRequest.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/OMKeyCommitRequest.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/bucket/OMBucketCreateRequest.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/OMKeyRenameRequest.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/volume/OMVolumeCreateRequest.java
* (edit) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/lock/OzoneManagerLock.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/s3/multipart/S3MultipartUploadCompleteRequest.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/OMKeyDeleteRequest.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/s3/bucket/S3BucketCreateRequest.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/s3/security/S3GetSecretRequest.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/BucketManagerImpl.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/KeyManagerImpl.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/file/OMFileCreateRequest.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/s3/bucket/S3BucketDeleteRequest.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/volume/OMVolumeDeleteRequest.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/bucket/acl/OMBucketAclRequest.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/volume/OMVolumeSetQuotaRequest.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/volume/OMVolumeSetOwnerRequest.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/bucket/OMBucketSetPropertyRequest.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/file/OMDirectoryCreateRequest.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/volume/acl/OMVolumeAclRequest.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/s3/multipart/S3MultipartUploadAbortRequest.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/acl/OMKeyAclRequest.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/bucket/OMBucketDeleteRequest.java


> Use new ReadWrite lock in OzoneManager
> --
>
> Key: HDDS-2244
> URL: https://issues.apache.org/jira/browse/HDDS-2244
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 3h 20m
>  Remaining Estimate: 0h
>
> Use new ReadWriteLock added in HDDS-2223.






[jira] [Commented] (HDFS-14509) DN throws InvalidToken due to inequality of password when upgrade NN 2.x to 3.x

2019-10-08 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14509?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16947133#comment-16947133
 ] 

Hudson commented on HDFS-14509:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17505 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17505/])
HDFS-14509. DN throws InvalidToken due to inequality of password when (cliang: 
rev 72ae371e7a6695f45f0d9cea5ae9aae83941d360)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/security/token/block/TestBlockToken.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/security/token/block/BlockTokenIdentifier.java


> DN throws InvalidToken due to inequality of password when upgrade NN 2.x to 
> 3.x
> ---
>
> Key: HDFS-14509
> URL: https://issues.apache.org/jira/browse/HDFS-14509
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Yuxuan Wang
>Priority: Blocker
>  Labels: release-blocker
> Attachments: HDFS-14509-001.patch, HDFS-14509-002.patch, 
> HDFS-14509-003.patch
>
>
> According to the doc, if we want to upgrade a cluster from 2.x to 3.x, we need 
> to upgrade the NN first, so there will be an intermediate state where the NN 
> is 3.x and the DN is 2.x. At that moment, if a client reads (or writes) a 
> block, it will get a block token from the NN and then deliver the token to the 
> DN, which verifies the token. But the verification in the code now is:
> {code:title=BlockTokenSecretManager.java|borderStyle=solid}
> public void checkAccess(...)
> {
> ...
> id.readFields(new DataInputStream(new 
> ByteArrayInputStream(token.getIdentifier())));
> ...
> if (!Arrays.equals(retrievePassword(id), token.getPassword())) {
>   throw new InvalidToken("Block token with " + id.toString()
>   + " doesn't have the correct token password");
> }
> }
> {code} 
> And {{retrievePassword(id)}} is:
> {code} 
> public byte[] retrievePassword(BlockTokenIdentifier identifier)
> {
> ...
> return createPassword(identifier.getBytes(), key.getKey());
> }
> {code} 
> So, if the NN's identifier adds new fields, the DN will lose those fields and 
> compute the wrong password.
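To make the failure mode concrete: the password is, in effect, an HMAC over the serialized identifier bytes, so when the 2.x DN re-serializes the identifier without the unknown new fields, it authenticates different bytes than the 3.x NN signed. A self-contained sketch (HmacSHA1 and the field layout are illustrative assumptions, not the exact Hadoop implementation):

{code:java}
import java.util.Arrays;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

public class PasswordMismatchSketch {
  // Stand-in for createPassword(): HMAC over the identifier bytes.
  static byte[] password(byte[] identifierBytes, byte[] key) throws Exception {
    Mac mac = Mac.getInstance("HmacSHA1");
    mac.init(new SecretKeySpec(key, "HmacSHA1"));
    return mac.doFinal(identifierBytes);
  }

  public static void main(String[] args) throws Exception {
    byte[] key = "shared-block-key".getBytes("UTF-8");
    // The 3.x NN computes the password over the full identifier...
    byte[] nnBytes = "blockId=1,user=alice,newField=x".getBytes("UTF-8");
    // ...but the 2.x DN re-serializes it without the unknown field.
    byte[] dnBytes = "blockId=1,user=alice".getBytes("UTF-8");
    // Prints false: checkAccess() would throw InvalidToken.
    System.out.println(
        Arrays.equals(password(nnBytes, key), password(dnBytes, key)));
  }
}
{code}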






[jira] [Commented] (HDDS-2260) Avoid evaluation of LOG.trace and LOG.debug statement in the read/write path (HDDS)

2019-10-08 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2260?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16947109#comment-16947109
 ] 

Hudson commented on HDDS-2260:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17503 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17503/])
HDDS-2260. Avoid evaluation of LOG.trace and LOG.debug statement in the 
(bharat: rev 15a9beed1b0a14e8e1f0537294bdac13c9340465)
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/RatisPipelineProvider.java
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/SCMNodeManager.java
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/AbstractContainerReportHandler.java
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/states/ContainerStateMap.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/impl/HddsDispatcher.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/volume/ThrottledAsyncChecker.java
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/ratis/RatisHelper.java
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/security/token/BlockTokenVerifier.java
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/container/common/helpers/ContainerCommandRequestPBHelper.java
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/conf/HddsConfServlet.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/volume/HddsVolumeChecker.java
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/block/SCMBlockDeletingService.java
* (edit) 
hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/server/events/EventQueue.java
* (edit) 
hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/XceiverClientGrpc.java
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/PipelineReportHandler.java
* (edit) 
hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/storage/BlockInputStream.java
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/RatisPipelineUtils.java
* (edit) hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/HddsUtils.java
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/StorageContainerManager.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/ozoneimpl/ContainerDataScanner.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/KeyValueBlockIterator.java
* (edit) 
hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/client/ContainerOperationClient.java
* (edit) 
hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/storage/CommitWatcher.java
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/states/ContainerAttribute.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/statemachine/EndpointStateMachine.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/impl/RandomContainerDeletionChoosingPolicy.java
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/ContainerStateManager.java
* (edit) 
hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/XceiverClientRatis.java
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/utils/LevelDBStore.java
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/tracing/StringCodec.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/impl/TopNOrderedContainerDeletionChoosingPolicy.java
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/pipeline/Pipeline.java
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/lease/LeaseManager.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/statemachine/commandhandler/DeleteBlocksCommandHandler.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/impl/ChunkManagerImpl.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/statemachine/commandhandler/CloseContainerCommandHandler.java
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/IncrementalContainerReportHandler.java
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/security/x509/certificate/authority/PKIProfiles/DefaultProfile.java
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/placement/algorithms/SCMContainerPlacementRackAware.java
* (edit) 

[jira] [Commented] (HDFS-14859) Prevent unnecessary evaluation of costly operation getNumLiveDataNodes when dfs.namenode.safemode.min.datanodes is not zero

2019-10-08 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16946649#comment-16946649
 ] 

Hudson commented on HDFS-14859:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17502 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17502/])
HDFS-14859. Prevent unnecessary evaluation of costly operation (ayushsaxena: 
rev 91320b446171013ad47783d7400d646d2d71ca3d)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManagerSafeMode.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockManagerSafeMode.java


> Prevent unnecessary evaluation of costly operation getNumLiveDataNodes when 
> dfs.namenode.safemode.min.datanodes is not zero
> ---
>
> Key: HDFS-14859
> URL: https://issues.apache.org/jira/browse/HDFS-14859
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.1.0, 3.3.0, 3.1.4
>Reporter: Srinivasu Majeti
>Assignee: Srinivasu Majeti
>Priority: Major
>  Labels: block
> Fix For: 3.3.0, 3.1.4, 3.2.2
>
> Attachments: HDFS-14859.001.patch, HDFS-14859.002.patch, 
> HDFS-14859.003.patch, HDFS-14859.004.patch, HDFS-14859.005.patch, 
> HDFS-14859.006.patch, HDFS-14859.007.patch
>
>
> There have been improvements like HDFS-14171 and HDFS-14632 for the 
> performance issue caused by getNumLiveDataNodes calls per block. The 
> improvement has only been done w.r.t. the dfs.namenode.safemode.min.datanodes 
> parameter being set to 0 or not.
> {code}
>private boolean areThresholdsMet() {
>  assert namesystem.hasWriteLock();
> -int datanodeNum = 
> blockManager.getDatanodeManager().getNumLiveDataNodes();
> +// Calculating the number of live datanodes is time-consuming
> +// in large clusters. Skip it when datanodeThreshold is zero.
> +int datanodeNum = 0;
> +if (datanodeThreshold > 0) {
> +  datanodeNum = blockManager.getDatanodeManager().getNumLiveDataNodes();
> +}
>  synchronized (this) {
>return blockSafe >= blockThreshold && datanodeNum >= datanodeThreshold;
>  }
> {code}
> I feel the above logic would create a similar situation of unnecessary 
> evaluations of getNumLiveDataNodes when the dfs.namenode.safemode.min.datanodes 
> parameter is set > 0, even though "blockSafe >= blockThreshold" is false for 
> most of the time during NN startup safe mode. We could do something like the 
> following to avoid this:
> {code}
> private boolean areThresholdsMet() {
>   assert namesystem.hasWriteLock();
>   synchronized (this) {
>     return blockSafe >= blockThreshold && ((datanodeThreshold > 0)
>         ? blockManager.getDatanodeManager().getNumLiveDataNodes()
>             >= datanodeThreshold
>         : true);
>   }
> }
> {code}
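
Note the parentheses around the ternary in the snippet above: in Java, && binds 
tighter than ?:, so without them the whole expression parses as 
"(blockSafe >= blockThreshold && datanodeThreshold > 0) ? ... : true" and the 
method would return true whenever the block threshold is not met. A minimal, 
self-contained illustration of the pitfall (editorial example, not from the 
patch):

{code:java}
// Demonstrates that `a && b ? c : d` parses as `(a && b) ? c : d`.
public class TernaryPrecedence {
  public static void main(String[] args) {
    boolean blocksOk = false;       // block threshold NOT met
    boolean dnThresholdSet = true;  // datanode threshold configured

    // Parses as (blocksOk && dnThresholdSet) ? false : true  ->  true!
    boolean buggy = blocksOk && dnThresholdSet ? false : true;

    // The intended meaning needs parentheses around the ternary.
    boolean fixed = blocksOk && (dnThresholdSet ? false : true);

    System.out.println(buggy + " vs " + fixed);  // prints: true vs false
  }
}
{code}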



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14814) RBF: RouterQuotaUpdateService supports inherited rule.

2019-10-08 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14814?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16946640#comment-16946640
 ] 

Hudson commented on HDFS-14814:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17501 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17501/])
HDFS-14814. RBF: RouterQuotaUpdateService supports inherited rule. 
(ayushsaxena: rev 761594549ec0c6bab50a28a7eb6c741aec7239d7)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/Quota.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestDisableRouterQuota.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterQuotaUpdateService.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterQuota.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterQuotaManager.java


> RBF: RouterQuotaUpdateService supports inherited rule.
> --
>
> Key: HDFS-14814
> URL: https://issues.apache.org/jira/browse/HDFS-14814
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Jinglun
>Assignee: Jinglun
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14814.001.patch, HDFS-14814.002.patch, 
> HDFS-14814.003.patch, HDFS-14814.004.patch, HDFS-14814.005.patch, 
> HDFS-14814.006.patch, HDFS-14814.007.patch, HDFS-14814.008.patch, 
> HDFS-14814.009.patch, HDFS-14814.010.patch, HDFS-14814.011.patch
>
>
> I want to add a rule *'The quota should be set the same as the nearest 
> parent'* to Global Quota. Supposing we have the mount table below.
> M1: /dir-a                  ns0->/dir-a    {nquota=10, squota=20}
> M2: /dir-a/dir-b            ns1->/dir-b    {nquota=-1, squota=30}
> M3: /dir-a/dir-b/dir-c      ns2->/dir-c    {nquota=-1, squota=-1}
> M4: /dir-d                  ns3->/dir-d    {nquota=-1, squota=-1}
>  
> The quota for the remote locations on the namespaces should be:
>  ns0->/dir-a    {nquota=10, squota=20}
>  ns1->/dir-b    {nquota=10, squota=30}
>  ns2->/dir-c    {nquota=10, squota=30}
>  ns3->/dir-d    {nquota=-1, squota=-1}
>  
> The quota of each remote location is set to match its corresponding 
> MountTable; if a MountTable has no quota, the quota is taken from the 
> nearest parent MountTable that has one.
>  
> It's easy to implement. In RouterQuotaUpdateService, each time we compute 
> the currentQuotaUsage we can get the quota info for each MountTable. We can 
> then check and fix every MountTable whose quota doesn't match the rule 
> above.
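
For illustration, a minimal sketch of the nearest-parent lookup described 
above; the class and field names are hypothetical, not taken from 
RouterQuotaUpdateService:

{code:java}
import java.util.Map;
import java.util.TreeMap;

// Editorial sketch of the "inherit from the nearest parent" rule.
class InheritedQuotaSketch {
  static final long UNSET = -1;

  /** Mount path -> configured nsQuota (-1 means unset). */
  final Map<String, Long> nsQuota = new TreeMap<>();

  long effectiveNsQuota(String mountPath) {
    String p = mountPath;
    while (p != null) {
      Long q = nsQuota.get(p);
      if (q != null && q != UNSET) {
        return q;                 // nearest ancestor with a quota wins
      }
      int slash = p.lastIndexOf('/');
      p = slash > 0 ? p.substring(0, slash) : null;  // climb one level
    }
    return UNSET;                 // no ancestor sets a quota
  }
}
{code}

With the mount table above, effectiveNsQuota("/dir-a/dir-b/dir-c") climbs past 
the two unset entries and returns 10, matching the expected quota for 
ns2->/dir-c.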



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-2245) Use dynamic ports for SCM in TestSecureOzoneCluster

2019-10-07 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2245?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16946314#comment-16946314
 ] 

Hudson commented on HDDS-2245:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17500 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17500/])
HDDS-2245. Use dynamic ports for SCM in TestSecureOzoneCluster (aengineer: rev 
4fdf01635835a1b8f1107a50c112a3601a6a61f9)
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/TestSecureOzoneCluster.java


> Use dynamic ports for SCM in TestSecureOzoneCluster
> ---
>
> Key: HDDS-2245
> URL: https://issues.apache.org/jira/browse/HDDS-2245
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Reporter: Nanda kumar
>Assignee: kevin su
>Priority: Major
>  Labels: newbie
> Fix For: 0.5.0
>
> Attachments: HDDS-2245.001.patch, HDDS-2245.002.patch
>
>
> {{TestSecureOzoneCluster}} is using the default SCM ports; we should use 
> dynamic ports instead.
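
For illustration, a common way for a test to obtain a dynamic port (editorial 
sketch; wiring the port into the SCM configuration keys is left out):

{code:java}
import java.io.IOException;
import java.net.ServerSocket;

// Ask the OS for a free ephemeral port instead of hard-coding the SCM
// defaults, so parallel test runs cannot collide on the same port.
final class FreePort {
  static int get() throws IOException {
    try (ServerSocket socket = new ServerSocket(0)) {  // 0 = any free port
      socket.setReuseAddress(true);
      return socket.getLocalPort();
    }
  }
}
{code}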



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-2262) SLEEP_SECONDS: command not found

2019-10-07 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2262?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16946288#comment-16946288
 ] 

Hudson commented on HDDS-2262:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17499 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17499/])
HDDS-2262. SLEEP_SECONDS: command not found (aengineer: rev 
012d897e5b13228152ca31ad97fae87e4b1e4b54)
* (edit) hadoop-ozone/dist/src/main/dockerbin/entrypoint.sh


> SLEEP_SECONDS: command not found
> 
>
> Key: HDDS-2262
> URL: https://issues.apache.org/jira/browse/HDDS-2262
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: docker
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Trivial
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> {noformat}
> datanode_1  | /opt/hadoop/bin/docker/entrypoint.sh: line 66: SLEEP_SECONDS: 
> command not found
> datanode_1  | Sleeping for  seconds
> {noformat}
> Eg. 
> https://raw.githubusercontent.com/elek/ozone-ci-q4/master/pr/pr-hdds-2238-79fll/acceptance/docker-ozonesecure-ozonesecure-s3-s3g.log



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-2259) Container Data Scrubber computes wrong checksum

2019-10-07 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2259?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16946279#comment-16946279
 ] 

Hudson commented on HDDS-2259:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17498 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17498/])
HDDS-2259. Container Data Scrubber computes wrong checksum (aengineer: rev 
aaa94c3da6e725cbf8118993d17502f852de6fc0)
* (edit) 
hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/keyvalue/TestKeyValueContainerCheck.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/KeyValueContainerCheck.java


> Container Data Scrubber computes wrong checksum
> ---
>
> Key: HDDS-2259
> URL: https://issues.apache.org/jira/browse/HDDS-2259
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Datanode
>Affects Versions: 0.5.0
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Chunk checksum verification fails for (almost) any file.  This is caused by 
> computing the checksum over the entire buffer, regardless of the actual 
> size of the chunk.
> {code:title=https://github.com/apache/hadoop/blob/55c5436f39120da0d7dabf43d7e5e6404307123b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/KeyValueContainerCheck.java#L259-L273}
> byte[] buffer = new byte[cData.getBytesPerChecksum()];
> ...
> v = fs.read(buffer);
> ...
> bytesRead += v;
> ...
> ByteString actual = cal.computeChecksum(buffer)
> .getChecksums().get(0);
> {code}
> This results in marking all closed containers as unhealthy.
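
The principle of the fix, shown with plain java.util.zip.CRC32 (editorial 
sketch; the real code uses Ozone's own checksum calculator):

{code:java}
import java.util.zip.CRC32;

// Checksum only the v bytes actually read, not the whole allocated
// buffer: for the last chunk the tail of the array holds stale data.
final class PartialBufferChecksum {
  static long checksumOfValidRegion(byte[] buffer, int v) {
    CRC32 crc = new CRC32();
    crc.update(buffer, 0, v);   // correct: only [0, v)
    // crc.update(buffer);      // buggy: also hashes the stale tail
    return crc.getValue();
  }
}
{code}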



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-2264) Improve output of TestOzoneContainer

2019-10-07 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2264?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16946263#comment-16946263
 ] 

Hudson commented on HDDS-2264:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17497 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17497/])
HDDS-2264. Improve output of TestOzoneContainer (aengineer: rev 
cfba6ac9512b180d598a7a477a1ee0ea251e7b41)
* (edit) 
hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/ozoneimpl/TestOzoneContainer.java


> Improve output of TestOzoneContainer
> 
>
> Key: HDDS-2264
> URL: https://issues.apache.org/jira/browse/HDDS-2264
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: test
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> TestOzoneContainer#testContainerCreateDiskFull fails intermittently 
> (HDDS-2263), but the test output does not reveal much about the reason.  The 
> goal of this task is to improve the assertions/output to make the failure 
> easier to fix.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-2238) Container Data Scrubber spams log in empty cluster

2019-10-07 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2238?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16946248#comment-16946248
 ] 

Hudson commented on HDDS-2238:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17496 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17496/])
HDDS-2238. Container Data Scrubber spams log in empty cluster (aengineer: rev 
187731244067f6bf817ad352851cb27850b81c92)
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/ozoneimpl/ContainerDataScrubberMetrics.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/ozoneimpl/ContainerController.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/ozoneimpl/OzoneContainer.java
* (edit) 
hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/common/impl/TestContainerSet.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/dn/scrubber/TestDataScrubber.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/impl/ContainerSet.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/common/TestBlockDeletingService.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/ozoneimpl/ContainerMetadataScanner.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/ozoneimpl/ContainerDataScanner.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/ozoneimpl/ContainerScrubberConfiguration.java
* (edit) 
hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/ozoneimpl/TestContainerScrubberMetrics.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/ozoneimpl/ContainerMetadataScrubberMetrics.java


> Container Data Scrubber spams log in empty cluster
> --
>
> Key: HDDS-2238
> URL: https://issues.apache.org/jira/browse/HDDS-2238
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Datanode
>Affects Versions: 0.5.0
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> In an empty cluster (without closed containers), if the Container Scanner 
> is enabled ({{hdds.containerscrub.enabled=true}}), the logs are filled with 
> messages from completed data scrubber iterations (~3600 per second for me), 
> e.g.:
> {noformat}
> datanode_1  | 2019-10-03 15:43:57 INFO  ContainerDataScanner:114 - Completed 
> an iteration of container data scrubber in 0 minutes. Number of  iterations 
> (since the data-node restart) : 6763, Number of containers scanned in this 
> iteration : 0, Number of unhealthy containers found in this iteration : 0
> {noformat} 
> CPU usage is also quite high.
> I think:
> # there should be a small sleep between iterations
> # it should log only if any containers were scanned
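
A minimal sketch of the two suggestions combined (editorial; the names are 
hypothetical, not the actual ContainerDataScanner code):

{code:java}
import java.util.function.IntSupplier;

// Pause between iterations and stay quiet when nothing was scanned.
class ScrubberLoopSketch {
  void run(IntSupplier iteration, long pauseMillis)
      throws InterruptedException {
    while (!Thread.currentThread().isInterrupted()) {
      int scanned = iteration.getAsInt();
      if (scanned > 0) {
        // Log only when real work happened, so an empty cluster does
        // not flood the log with no-op iterations.
        System.out.printf("Scanned %d containers this iteration%n", scanned);
      }
      Thread.sleep(pauseMillis);  // small sleep, avoids busy-spinning
    }
  }
}
{code}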



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14373) EC : Decoding is failing when block group last incomplete cell fall in to AlignedStripe

2019-10-07 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14373?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16946156#comment-16946156
 ] 

Hudson commented on HDFS-14373:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17495 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17495/])
HDFS-14373. EC : Decoding is failing when block group last incomplete 
(surendralilhore: rev 382967be51052d59e31d8d05713645b8d3c2325b)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/util/StripedBlockUtil.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/StripeReader.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSStripedInputStream.java


> EC : Decoding is failing when block group last incomplete cell fall in to 
> AlignedStripe
> ---
>
> Key: HDFS-14373
> URL: https://issues.apache.org/jira/browse/HDFS-14373
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ec, hdfs-client
>Affects Versions: 3.1.1
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
>Priority: Critical
> Attachments: HDFS-14373.001.patch, HDFS-14373.002.patch, 
> HDFS-14373.003.patch
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-2239) Fix TestOzoneFsHAUrls

2019-10-07 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16946065#comment-16946065
 ] 

Hudson commented on HDDS-2239:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17494 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17494/])
HDDS-2239. Fix TestOzoneFsHAUrls (#1600) (bharat: rev 
9685a6c0e56a26add8be15606233573b514bff9e)
* (edit) 
hadoop-ozone/ozonefs/src/main/java/org/apache/hadoop/fs/ozone/BasicOzoneClientAdapterImpl.java


> Fix TestOzoneFsHAUrls
> -
>
> Key: HDDS-2239
> URL: https://issues.apache.org/jira/browse/HDDS-2239
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Affects Versions: 0.5.0
>Reporter: Bharat Viswanadham
>Assignee: Attila Doroszlai
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> [https://github.com/elek/ozone-ci-q4/blob/master/pr/pr-hdds-2162-pj84x/integration/hadoop-ozone/ozonefs/org.apache.hadoop.fs.ozone.TestOzoneFsHAURLs.txt]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-2252) Enable gdpr robot test in daily build

2019-10-07 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2252?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16945742#comment-16945742
 ] 

Hudson commented on HDDS-2252:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17492 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17492/])
HDDS-2252. Enable gdpr robot test in daily build (elek: rev 
7f332ebf8b67d1ebf03f4fac9596ee18a99054cc)
* (edit) hadoop-ozone/dist/src/main/compose/ozone/test.sh


> Enable gdpr robot test in daily build
> -
>
> Key: HDDS-2252
> URL: https://issues.apache.org/jira/browse/HDDS-2252
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: test
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> As reported by [~elek] in 
> https://github.com/apache/hadoop/pull/1542#pullrequestreview-297424033
> "One thing what I found, I think it's not yet enabled in the daily builds.
> I think in the hadoop-ozone/dist/src/main/compose/ozone/test.sh we need a new 
> line:
> execute_robot_test gdpr.robot"



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-2169) Avoid buffer copies while submitting client requests in Ratis

2019-10-06 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16945580#comment-16945580
 ] 

Hudson commented on HDDS-2169:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17490 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17490/])
HDDS-2169. Avoid buffer copies while submitting client requests in (shashikant: 
rev 022fe5f5b226f1e9e03bfe8421975f6e90973903)
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/transport/server/ratis/ContainerStateMachine.java
* (add) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/ratis/ContainerCommandRequestMessage.java
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/ratis/RatisHelper.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/helpers/ChunkUtils.java
* (edit) 
hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/XceiverClientRatis.java
* (add) 
hadoop-hdds/common/src/test/java/org/apache/hadoop/hdds/ratis/TestContainerCommandRequestMessage.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/transport/server/ratis/XceiverServerRatis.java


> Avoid buffer copies while submitting client requests in Ratis
> -
>
> Key: HDDS-2169
> URL: https://issues.apache.org/jira/browse/HDDS-2169
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Shashikant Banerjee
>Assignee: Tsz-wo Sze
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 4.5h
>  Remaining Estimate: 0h
>
> Currently, while sending write requests to Ratis from Ozone, the data is 
> encoded into a protobuf object, and the resultant protobuf is then converted 
> to a ByteString, which internally copies the buffer embedded in the protobuf 
> again so that it can be submitted to the Ratis client. Similarly, while 
> building the appendRequestProto for the appendRequest, the data may be 
> copied yet again. The idea here is to let the client pass the raw data 
> (state-machine data) separately to the Ratis client, avoiding the copying 
> overhead. 
>  
> {code:java}
> private CompletableFuture sendRequestAsync(
> ContainerCommandRequestProto request) {
>   try (Scope scope = GlobalTracer.get()
>   .buildSpan("XceiverClientRatis." + request.getCmdType().name())
>   .startActive(true)) {
> ContainerCommandRequestProto finalPayload =
> ContainerCommandRequestProto.newBuilder(request)
> .setTraceID(TracingUtil.exportCurrentSpan())
> .build();
> boolean isReadOnlyRequest = HddsUtils.isReadOnly(finalPayload);
> //  finalPayload already has the byteString data embedded. 
> ByteString byteString = finalPayload.toByteString(); -> It involves a 
> copy again.
> if (LOG.isDebugEnabled()) {
>   LOG.debug("sendCommandAsync {} {}", isReadOnlyRequest,
>   sanitizeForDebug(finalPayload));
> }
> return isReadOnlyRequest ?
> getClient().sendReadOnlyAsync(() -> byteString) :
> getClient().sendAsync(() -> byteString);
>   }
> }
> {code}
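
The direction of the fix can be sketched with protobuf's rope-backed 
ByteString: serialize only the small header once, and concatenate the 
already-available data ByteString instead of re-embedding it in the proto 
(editorial sketch; the committed change introduces 
ContainerCommandRequestMessage for this):

{code:java}
import com.google.protobuf.ByteString;

// ByteString.concat builds a rope for large inputs, so the chunk data
// is shared rather than copied again.  The result can be handed to
// Ratis as the message content, as in the `() -> byteString` lambda in
// the snippet above.
final class LazyMessageSketch {
  static ByteString build(ByteString header, ByteString data) {
    return header.concat(data);
  }
}
{code}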



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-2251) Add an option to customize unit.sh and integration.sh parameters

2019-10-05 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2251?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16945039#comment-16945039
 ] 

Hudson commented on HDDS-2251:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17487 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17487/])
HDDS-2251. Add an option to customize unit.sh and integration.sh (elek: rev 
579dc870150868de5b27b6eb133d2cda88ec9ef9)
* (edit) hadoop-ozone/dev-support/checks/integration.sh
* (edit) hadoop-ozone/dev-support/checks/unit.sh


> Add an option to customize unit.sh and integration.sh parameters
> 
>
> Key: HDDS-2251
> URL: https://issues.apache.org/jira/browse/HDDS-2251
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>Reporter: Marton Elek
>Assignee: Marton Elek
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> hadoop-ozone/dev-support/checks/unit.sh (and the same for integration) 
> provides an easy entrypoint to execute all the unit/integration tests. But 
> in some cases it would be great to use the script while further specifying 
> the scope of the tests.
> With this simple patch it will be possible to adjust the surefire parameters.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-2257) Fix checkstyle issues in ChecksumByteBuffer

2019-10-04 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2257?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16944919#comment-16944919
 ] 

Hudson commented on HDDS-2257:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17485 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17485/])
HDDS-2257. Fix checkstyle issues in ChecksumByteBuffer (#1603) (bharat: rev 
f209722a19c5e18cd2371ace62aa20a753a8acc8)
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/common/ChecksumByteBuffer.java


> Fix checkstyle issues in ChecksumByteBuffer
> ---
>
> Key: HDDS-2257
> URL: https://issues.apache.org/jira/browse/HDDS-2257
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Dinesh Chitlangia
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>  Labels: newbie, pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/common/ChecksumByteBuffer.java
>  84: Inner assignments should be avoided.
>  85: Inner assignments should be avoided.
>  101: case child has incorrect indentation level 8, expected 
> level should be 6.
>  102: case child has incorrect indentation level 8, expected 
> level should be 6.
>  103: case child has incorrect indentation level 8, expected 
> level should be 6.
>  104: case child has incorrect indentation level 8, expected 
> level should be 6.
>  105: case child has incorrect indentation level 8, expected 
> level should be 6.
>  106: case child has incorrect indentation level 8, expected 
> level should be 6.
>  107: case child has incorrect indentation level 8, expected 
> level should be 6.
>  108: case child has incorrect indentation level 8, expected 
> level should be 6.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-2250) Generated configs missing from ozone-filesystem-lib jars

2019-10-04 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2250?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16944852#comment-16944852
 ] 

Hudson commented on HDDS-2250:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17484 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17484/])
HDDS-2250. Generated configs missing from ozone-filesystem-lib jars (elek: rev 
a3cf54ccdc3e59ca4a9a48d42f24ab96ec4c0583)
* (edit) hadoop-ozone/ozonefs-lib-current/pom.xml


> Generated configs missing from ozone-filesystem-lib jars
> 
>
> Key: HDDS-2250
> URL: https://issues.apache.org/jira/browse/HDDS-2250
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: build, Ozone Filesystem
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> Hadoop 3.1 and 3.2 acceptance tests started failing with HDDS-1720, which 
> added a new, annotated configuration class.
> The [change itself|https://github.com/apache/hadoop/pull/1538/files] looks 
> fine.  The problem is that the packaging process for {{ozone-filesystem-lib}} 
> jars keeps only 1 or 2 {{ozone-default-generated.xml}} files.  With the new 
> config in place, client configs are missing, so the Ratis client gets 
> evicted immediately due to {{scm.container.client.idle.threshold}} = 0.  
> This results in an NPE:
> {code:title=https://elek.github.io/ozone-ci-q4/pr/pr-hdds-1720-trunk-rd9ht/acceptance/summary.html#s1-s5-t1-k2-k2}
> Running command 'hdfs dfs -put /opt/hadoop/NOTICE.txt 
> o3fs://bucket1.vol1/ozone-14607
> ...
> -put: Fatal internal error
> java.lang.NullPointerException: client is null
>   at java.util.Objects.requireNonNull(Objects.java:228)
>   at 
> org.apache.hadoop.hdds.scm.XceiverClientRatis.getClient(XceiverClientRatis.java:208)
>   at 
> org.apache.hadoop.hdds.scm.XceiverClientRatis.sendRequestAsync(XceiverClientRatis.java:234)
>   at 
> org.apache.hadoop.hdds.scm.XceiverClientRatis.sendCommandAsync(XceiverClientRatis.java:332)
>   at 
> org.apache.hadoop.hdds.scm.storage.ContainerProtocolCalls.writeChunkAsync(ContainerProtocolCalls.java:310)
> ...
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-2158) Fix Json Injection in JsonUtils

2019-10-04 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2158?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16944799#comment-16944799
 ] 

Hudson commented on HDDS-2158:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17483 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17483/])
HDDS-2158. Fixing Json Injection Issue in JsonUtils. (#1486) (github: rev 
8de4374427e77d5d9b79a710ca9225f749556eda)
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/web/ozShell/bucket/AddAclBucketHandler.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/web/ozShell/keys/AddAclKeyHandler.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/web/ozShell/volume/SetAclVolumeHandler.java
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/container/ContainerInfo.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/web/ozShell/volume/AddAclVolumeHandler.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/web/ozShell/keys/SetAclKeyHandler.java
* (edit) 
hadoop-hdds/tools/src/main/java/org/apache/hadoop/hdds/scm/cli/container/ListSubcommand.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/web/ozShell/bucket/SetAclBucketHandler.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/web/ozShell/volume/RemoveAclVolumeHandler.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/web/ozShell/bucket/GetAclBucketHandler.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/web/ozShell/keys/GetAclKeyHandler.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/web/ozShell/token/GetTokenHandler.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/web/ozShell/keys/RemoveAclKeyHandler.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/web/ozShell/volume/GetAclVolumeHandler.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/web/ozShell/bucket/RemoveAclBucketHandler.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/web/ozShell/ObjectPrinter.java
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/web/utils/JsonUtils.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/web/ozShell/token/PrintTokenHandler.java


> Fix Json Injection in JsonUtils
> ---
>
> Key: HDDS-2158
> URL: https://issues.apache.org/jira/browse/HDDS-2158
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 4h
>  Remaining Estimate: 0h
>
> JsonUtils#toJsonStringWithDefaultPrettyPrinter() does not validate the JSON 
> string before serializing it, which could result in JSON injection.
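
A minimal sketch of the usual remedy (editorial, assuming Jackson): parse the 
input into a tree first, then pretty-print the parsed tree, so raw 
attacker-controlled text is never echoed verbatim:

{code:java}
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

final class SafePrettyPrint {
  private static final ObjectMapper MAPPER = new ObjectMapper();

  static String prettyPrint(String json) throws Exception {
    JsonNode tree = MAPPER.readTree(json);   // rejects malformed input
    return MAPPER.writerWithDefaultPrettyPrinter()
        .writeValueAsString(tree);           // re-serialized, escaped
  }
}
{code}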



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-2164) om.db.checkpoints is getting filling up fast

2019-10-04 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16944788#comment-16944788
 ] 

Hudson commented on HDDS-2164:
--

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17482 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17482/])
HDDS-2164 : om.db.checkpoints is getting filling up fast. (#1536) (aengineer: 
rev f3eaa84f9d2db47741fae1394e182f3ea60a1331)
* (edit) 
hadoop-ozone/recon/src/main/java/org/apache/hadoop/ozone/recon/ReconUtils.java
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/utils/db/RDBCheckpointManager.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OMDBCheckpointServlet.java
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/utils/db/RocksDBCheckpoint.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/om/TestOMDbCheckpointServlet.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OMMetrics.java
* (edit) 
hadoop-ozone/recon/src/test/java/org/apache/hadoop/ozone/recon/TestReconUtils.java
* (edit) hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/OmUtils.java
* (edit) 
hadoop-ozone/recon/src/test/java/org/apache/hadoop/ozone/recon/spi/impl/TestOzoneManagerServiceProviderImpl.java
* (edit) 
hadoop-ozone/common/src/test/java/org/apache/hadoop/ozone/TestOmUtils.java


> om.db.checkpoints is getting filling up fast
> 
>
> Key: HDDS-2164
> URL: https://issues.apache.org/jira/browse/HDDS-2164
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Reporter: Nanda kumar
>Assignee: Aravindan Vijayan
>Priority: Critical
>  Labels: pull-request-available
>  Time Spent: 2h 20m
>  Remaining Estimate: 0h
>
> {{om.db.checkpoints}} is filling up fast; we should clean it up as well.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14890) Setting permissions on name directory fails on non posix compliant filesystems

2019-10-04 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14890?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16944699#comment-16944699
 ] 

Hudson commented on HDFS-14890:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17479 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17479/])
HDFS-14890.  Fixed namenode and journalnode startup on Windows.  
(eyang: rev aa24add8f0e9812d1f787efb3c40155b0fdeed9c)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/Storage.java


> Setting permissions on name directory fails on non posix compliant filesystems
> --
>
> Key: HDFS-14890
> URL: https://issues.apache.org/jira/browse/HDFS-14890
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.2.1
> Environment: Windows 10.
>Reporter: hirik
>Assignee: Siddharth Wagle
>Priority: Blocker
> Fix For: 3.3.0, 3.2.2
>
> Attachments: HDFS-14890.01.patch
>
>
> Hi,
> HDFS NameNode and JournalNode are not starting on a Windows machine. The 
> related exception found in the logs is below. 
> Caused by: java.lang.UnsupportedOperationException
> at java.base/java.nio.file.Files.setPosixFilePermissions(Files.java:2155)
> at 
> org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.clearDirectory(Storage.java:452)
> at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:591)
> at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:613)
> at org.apache.hadoop.hdfs.server.namenode.FSImage.format(FSImage.java:188)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:1206)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:422)
> at 
> com.slog.dfs.hdfs.nn.NameNodeServiceImpl.delayedStart(NameNodeServiceImpl.java:147)
>  
> Code changes related to this issue: 
> [https://github.com/apache/hadoop/commit/07e3cf952eac9e47e7bd5e195b0f9fc28c468313#diff-1a56e69d50f21b059637cfcbf1d23f11]
>  
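
A hedged sketch of the usual portability guard (editorial; the committed fix 
in Storage.java may differ): probe the file store for POSIX support before 
calling setPosixFilePermissions, and fall back to the coarse java.io.File 
flags on Windows:

{code:java}
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.attribute.PosixFilePermission;
import java.util.Set;

final class PortablePermissions {
  static void set(Path dir, Set<PosixFilePermission> perms)
      throws IOException {
    if (Files.getFileStore(dir).supportsFileAttributeView("posix")) {
      Files.setPosixFilePermissions(dir, perms);  // POSIX filesystems
    } else {
      // e.g. NTFS on Windows: the POSIX view is unsupported, so set
      // owner-only flags instead of throwing.
      dir.toFile().setReadable(true, true);
      dir.toFile().setWritable(true, true);
      dir.toFile().setExecutable(true, true);
    }
  }
}
{code}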



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-2237) KeyDeletingService throws NPE if it's started too early

2019-10-04 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2237?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16944673#comment-16944673
 ] 

Hudson commented on HDDS-2237:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17478 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17478/])
HDDS-2237. KeyDeletingService throws NPE if it's started too early (bharat: rev 
3f166512afa2564ba1f34512e31476282af862be)
* (edit) 
hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/TestKeyDeletingService.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/KeyManagerImpl.java


> KeyDeletingService throws NPE if it's started too early
> ---
>
> Key: HDDS-2237
> URL: https://issues.apache.org/jira/browse/HDDS-2237
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>  Components: om
>Reporter: Marton Elek
>Assignee: Marton Elek
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> 1. OzoneManager starts KeyManager
> 2. KeyManager starts KeyDeletingService
> 3. KeyDeletingService uses OzoneManager.isLeader()
> 4. OzoneManager.isLeader() uses omRatisServer
> 5. omRatisServer can be null (bumm)
>  
> Now the initialization order in OzoneManager:
>  
> new KeymanagerServer() *Includes start()*
> omRatisServer initialization
> start() (includes KeyManager.start())
>  
> The solution seems to be easy: start the key manager only from the 
> OzoneManager.start() and not from the OzoneManager.instantiateServices()
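
A generic two-phase initialization sketch mirroring that solution (editorial; 
the names are hypothetical):

{code:java}
// Construct everything first; start background work only once all of
// its dependencies (here, the Ratis server) are in place.
class TwoPhaseInitSketch {
  interface BackgroundService { void start(); }

  private BackgroundService keyDeletingService;
  private Object ratisServer;

  void instantiateServices(BackgroundService svc) {
    this.keyDeletingService = svc;  // construct only, no threads yet
  }

  void start(Object ratis) {
    this.ratisServer = ratis;       // dependency ready first
    keyDeletingService.start();     // isLeader() can no longer see null
  }
}
{code}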



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-2230) Invalid entries in ozonesecure-mr config

2019-10-04 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2230?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16944617#comment-16944617
 ] 

Hudson commented on HDDS-2230:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17476 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17476/])
HDDS-2230. Invalid entries in ozonesecure-mr config. (Addendum) (elek: rev 
f826420d2bb14caeb047f130a5d6e370df8f015f)
* (edit) hadoop-ozone/dist/src/main/compose/ozonesecure-mr/docker-compose.yaml


> Invalid entries in ozonesecure-mr config
> 
>
> Key: HDDS-2230
> URL: https://issues.apache.org/jira/browse/HDDS-2230
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: docker
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Minor
>  Labels: pull-request-available
> Attachments: HDDS-2230.001.patch, HDDS-2230.002.patch
>
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> Some of the entries in {{ozonesecure-mr/docker-config}} are in an invalid 
> format, so they end up missing from the generated config files.
> {noformat}
> $ cd hadoop-ozone/dist/target/ozone-0.5.0-SNAPSHOT/compose/ozonesecure-mr
> $ ./test.sh # configs are generated during container startup
> $ cd ../..
> $ grep -c 'ozone.administrators' compose/ozonesecure-mr/docker-config
> 1
> $ grep -c 'ozone.administrators' etc/hadoop/ozone-site.xml
> 0
> $ grep -c 'yarn.timeline-service' compose/ozonesecure-mr/docker-config
> 5
> $ grep -c 'yarn.timeline-service' etc/hadoop/yarn-site.xml
> 2
> $ grep -c 'container-executor' compose/ozonesecure-mr/docker-config
> 3
> $ grep -c 'container-executor' etc/hadoop/yarn-site.xml
> 0
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-2222) Add a method to update ByteBuffer in PureJavaCrc32/PureJavaCrc32C

2019-10-04 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16944508#comment-16944508
 ] 

Hudson commented on HDDS-:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17475 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17475/])
HDDS-. Add a method to update ByteBuffer in (github: rev 
531cc938fe84eb895eec110240181d8dc492c32e)
* (add) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/common/ChecksumByteBuffer.java
* (edit) hadoop-hdds/common/dev-support/findbugsExcludeFile.xml
* (add) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/common/PureJavaCrc32ByteBuffer.java
* (add) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/common/PureJavaCrc32CByteBuffer.java
* (add) 
hadoop-hdds/common/src/test/java/org/apache/hadoop/ozone/common/TestChecksumByteBuffer.java


> Add a method to update ByteBuffer in PureJavaCrc32/PureJavaCrc32C
> -
>
> Key: HDDS-
> URL: https://issues.apache.org/jira/browse/HDDS-
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Tsz-wo Sze
>Assignee: Tsz-wo Sze
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
> Attachments: o_20191001.patch, o_20191002.patch
>
>  Time Spent: 2.5h
>  Remaining Estimate: 0h
>
> PureJavaCrc32 and PureJavaCrc32C implement java.util.zip.Checksum which 
> provides only methods to update byte and byte[].  We propose to add a method 
> to update ByteBuffer.
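
A hedged sketch of what such a method can look like (editorial; note that 
JDK 9+ already declares Checksum.update(ByteBuffer), so this mainly 
illustrates the JDK 8 case):

{code:java}
import java.nio.ByteBuffer;
import java.util.zip.Checksum;

// ByteBuffer support layered on top of the byte[] methods.
interface ByteBufferChecksum extends Checksum {
  default void update(ByteBuffer buffer) {
    if (buffer.hasArray()) {
      // Heap buffer: feed the backing array directly, no copy.
      update(buffer.array(), buffer.arrayOffset() + buffer.position(),
          buffer.remaining());
      buffer.position(buffer.limit());
    } else {
      while (buffer.hasRemaining()) {
        update(buffer.get());     // direct buffer: byte at a time
      }
    }
  }
}
{code}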



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-2216) Rename HADOOP_RUNNER_VERSION to OZONE_RUNNER_VERSION in compose .env files

2019-10-04 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16944484#comment-16944484
 ] 

Hudson commented on HDDS-2216:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17473 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17473/])
HDDS-2216. Rename HADOOP_RUNNER_VERSION to OZONE_RUNNER_VERSION in (elek: rev 
bca014b0e03fb37711022ee6ed4272c346cdf5c9)
* (edit) hadoop-ozone/dist/src/main/compose/ozoneperf/.env
* (edit) hadoop-ozone/dist/src/main/compose/ozones3-haproxy/.env
* (edit) hadoop-ozone/dist/src/main/compose/ozonesecure-mr/.env
* (edit) hadoop-ozone/dist/src/main/compose/ozonesecure/docker-compose.yaml
* (edit) hadoop-ozone/dist/src/main/compose/ozone-mr/hadoop32/.env
* (edit) hadoop-ozone/dist/src/main/compose/ozone-hdfs/.env
* (edit) hadoop-ozone/dist/src/main/compose/ozoneblockade/docker-compose.yaml
* (edit) hadoop-ozone/dist/src/main/compose/ozone-hdfs/docker-compose.yaml
* (edit) hadoop-ozone/dist/src/main/compose/ozone-topology/.env
* (edit) hadoop-ozone/dist/src/main/compose/ozoneblockade/.env
* (edit) 
hadoop-ozone/dist/src/main/compose/ozone-mr/hadoop31/docker-compose.yaml
* (edit) hadoop-ozone/dist/src/main/compose/ozone-mr/hadoop27/.env
* (edit) hadoop-ozone/dist/src/main/compose/ozone-om-ha/.env
* (edit) 
hadoop-ozone/fault-injection-test/network-tests/src/test/blockade/ozone/cluster.py
* (edit) hadoop-ozone/dist/src/main/compose/ozone-om-ha/docker-compose.yaml
* (edit) hadoop-ozone/dist/src/main/compose/ozones3-haproxy/docker-compose.yaml
* (edit) hadoop-ozone/dist/src/main/compose/ozone-recon/.env
* (edit) hadoop-ozone/dist/src/main/compose/ozone/.env
* (edit) hadoop-ozone/dist/src/main/compose/ozones3/.env
* (edit) hadoop-ozone/dev-support/checks/blockade.sh
* (edit) 
hadoop-ozone/dist/src/main/compose/ozone-mr/hadoop27/docker-compose.yaml
* (edit) hadoop-ozone/dist/src/main/compose/ozoneperf/docker-compose.yaml
* (edit) hadoop-ozone/dist/src/main/compose/ozone/docker-compose.yaml
* (edit) hadoop-ozone/dist/src/main/compose/ozone-mr/hadoop31/.env
* (edit) hadoop-ozone/dist/src/main/compose/ozone-topology/docker-compose.yaml
* (edit) 
hadoop-ozone/dist/src/main/compose/ozone-mr/hadoop32/docker-compose.yaml
* (edit) hadoop-ozone/dist/src/main/compose/ozonescripts/.env
* (edit) hadoop-ozone/dist/src/main/compose/ozone-recon/docker-compose.yaml
* (edit) hadoop-ozone/dist/src/main/compose/ozones3/docker-compose.yaml
* (edit) hadoop-ozone/dist/src/main/compose/ozonesecure-mr/docker-compose.yaml
* (edit) hadoop-ozone/dist/src/main/compose/ozonesecure/.env


> Rename HADOOP_RUNNER_VERSION to OZONE_RUNNER_VERSION in compose .env files
> --
>
> Key: HDDS-2216
> URL: https://issues.apache.org/jira/browse/HDDS-2216
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>  Components: docker
>Reporter: Marton Elek
>Assignee: YiSheng Lien
>Priority: Major
>  Labels: newbie, pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> In HDDS-1698 we replaced our apache/hadoop-runner base image with the 
> apache/ozone-runner base image. 
> The version of the image is set by the .env files under the 
> hadoop-ozone/dist/src/main/compose directories
> {code:java}
> cd hadoop-ozone/dist/src/main/compose
> grep -r HADOOP_RUNNER .
> ./ozoneperf/docker-compose.yaml:  image: 
> apache/ozone-runner:${HADOOP_RUNNER_VERSION}
> ./ozoneperf/docker-compose.yaml:  image: 
> apache/ozone-runner:${HADOOP_RUNNER_VERSION}
> ./ozoneperf/docker-compose.yaml:  image: 
> apache/ozone-runner:${HADOOP_RUNNER_VERSION}
>  {code}
> But the name of the variable is HADOOP_RUNNER_VERSION instead of 
> OZONE_RUNNER_VERSION.
> It would be great to rename it to OZONE_RUNNER_VERSION.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-2199) In SCMNodeManager dnsToUuidMap cannot track multiple DNs on the same host

2019-10-04 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16944459#comment-16944459
 ] 

Hudson commented on HDDS-2199:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17472 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17472/])
HDDS-2199. In SCMNodeManager dnsToUuidMap cannot track multiple DNs on (elek: 
rev 6171a41b4c29a4039b53209df546c4c42a278464)
* (edit) 
hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/container/MockNodeManager.java
* (edit) 
hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/ozone/container/testutils/ReplicationNodeManagerMock.java
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/SCMBlockProtocolServer.java
* (edit) 
hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/node/TestSCMNodeManager.java
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/SCMNodeManager.java
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/NodeManager.java


> In SCMNodeManager dnsToUuidMap cannot track multiple DNs on the same host
> -
>
> Key: HDDS-2199
> URL: https://issues.apache.org/jira/browse/HDDS-2199
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Affects Versions: 0.5.0
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 5h
>  Remaining Estimate: 0h
>
> Often in test clusters and tests, we start multiple datanodes on the same 
> host.
> In SCMNodeManager.register() there is a map of hostname -> datanode UUID 
> called dnsToUuidMap.
> If several DNs register from the same host, the entry in the map will be 
> overwritten and the last DN to register will 'win'.
> This means that the method getNodeByAddress() does not return the correct 
> DatanodeDetails object when many hosts are registered from the same address.
> This method is only used in SCMBlockProtocolServer.sortDatanodes() to allow 
> it to see if one of the nodes matches the client, but it also needs to be 
> used by the decommission code.
> Perhaps we could change the getNodeByAddress() method to return a list of 
> DNs? In normal production clusters, there should only be one returned, but 
> in test clusters, there may be many. Any code looking for a specific DN 
> entry would need to iterate the list and match on the port number too, as 
> host:port would be the unique definition of a datanode.
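
A minimal sketch of the list-valued map suggested above (editorial; the names 
are hypothetical, not SCMNodeManager's actual fields):

{code:java}
import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;

// Track every datanode UUID registered from a host instead of letting
// the last registration overwrite the earlier ones.
class DnsToUuidSketch {
  private final Map<String, List<UUID>> dnsToUuids =
      new ConcurrentHashMap<>();

  void register(String hostname, UUID dn) {
    dnsToUuids.computeIfAbsent(hostname, k -> new CopyOnWriteArrayList<>())
        .add(dn);
  }

  /**
   * All datanodes registered from the address; callers that need one
   * specific node should additionally match on the port.
   */
  List<UUID> getNodesByAddress(String hostname) {
    return dnsToUuids.getOrDefault(hostname, Collections.emptyList());
  }
}
{code}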



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-2140) Add robot test for GDPR feature

2019-10-04 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2140?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1693#comment-1693
 ] 

Hudson commented on HDDS-2140:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17471 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17471/])
HDDS-2140. Add robot test for GDPR feature (elek: rev 
d061c8469f6ada5e0068752e0621307a804bd27c)
* (add) hadoop-ozone/dist/src/main/smoketest/gdpr/gdpr.robot


> Add robot test for GDPR feature
> ---
>
> Key: HDDS-2140
> URL: https://issues.apache.org/jira/browse/HDDS-2140
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: test
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> Add robot test for GDPR feature so it can be run during smoke tests.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-2230) Invalid entries in ozonesecure-mr config

2019-10-04 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2230?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16944435#comment-16944435
 ] 

Hudson commented on HDDS-2230:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17470 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17470/])
HDDS-2230. Invalid entries in ozonesecure-mr config (elek: rev 
bffcd330859088cbd0d809dc7a580676af54103d)
* (edit) hadoop-ozone/dist/src/main/compose/ozonesecure-mr/docker-compose.yaml
* (edit) hadoop-ozone/dist/src/main/compose/ozonesecure-mr/docker-config


> Invalid entries in ozonesecure-mr config
> 
>
> Key: HDDS-2230
> URL: https://issues.apache.org/jira/browse/HDDS-2230
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: docker
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Minor
>  Labels: pull-request-available
> Attachments: HDDS-2230.001.patch, HDDS-2230.002.patch
>
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> Some of the entries in {{ozonesecure-mr/docker-config}} are in an invalid 
> format, so they end up missing from the generated config files.
> {noformat}
> $ cd hadoop-ozone/dist/target/ozone-0.5.0-SNAPSHOT/compose/ozonesecure-mr
> $ ./test.sh # configs are generated during container startup
> $ cd ../..
> $ grep -c 'ozone.administrators' compose/ozonesecure-mr/docker-config
> 1
> $ grep -c 'ozone.administrators' etc/hadoop/ozone-site.xml
> 0
> $ grep -c 'yarn.timeline-service' compose/ozonesecure-mr/docker-config
> 5
> $ grep -c 'yarn.timeline-service' etc/hadoop/yarn-site.xml
> 2
> $ grep -c 'container-executor' compose/ozonesecure-mr/docker-config
> 3
> $ grep -c 'container-executor' etc/hadoop/yarn-site.xml
> 0
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-2222) Add a method to update ByteBuffer in PureJavaCrc32/PureJavaCrc32C

2019-10-04 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16944425#comment-16944425
 ] 

Hudson commented on HDDS-:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17469 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17469/])
Revert "HDDS- (#1578)" (#1594) (github: rev 
a9849f65ba79fa4efd80ead0ac7b4d37eee54f92)
* (delete) 
hadoop-hdds/common/src/test/java/org/apache/hadoop/ozone/common/TestChecksumByteBuffer.java
* (delete) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/common/ChecksumByteBuffer.java
* (delete) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/common/PureJavaCrc32CByteBuffer.java
* (delete) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/common/PureJavaCrc32ByteBuffer.java


> Add a method to update ByteBuffer in PureJavaCrc32/PureJavaCrc32C
> -
>
> Key: HDDS-
> URL: https://issues.apache.org/jira/browse/HDDS-
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Tsz-wo Sze
>Assignee: Tsz-wo Sze
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.1
>
> Attachments: o_20191001.patch, o_20191002.patch
>
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> PureJavaCrc32 and PureJavaCrc32C implement java.util.zip.Checksum which 
> provides only methods to update byte and byte[].  We propose to add a method 
> to update ByteBuffer.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-2222) Add a method to update ByteBuffer in PureJavaCrc32/PureJavaCrc32C

2019-10-04 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16944378#comment-16944378
 ] 

Hudson commented on HDDS-:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17468 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17468/])
HDDS- (#1578) (github: rev 4cf0b3660f620dd8a67201b75f4c88492c9adfb3)
* (add) 
hadoop-hdds/common/src/test/java/org/apache/hadoop/ozone/common/TestChecksumByteBuffer.java
* (add) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/common/PureJavaCrc32ByteBuffer.java
* (add) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/common/PureJavaCrc32CByteBuffer.java
* (add) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/common/ChecksumByteBuffer.java


> Add a method to update ByteBuffer in PureJavaCrc32/PureJavaCrc32C
> -
>
> Key: HDDS-
> URL: https://issues.apache.org/jira/browse/HDDS-
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Tsz-wo Sze
>Assignee: Tsz-wo Sze
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.1
>
> Attachments: o_20191001.patch, o_20191002.patch
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> PureJavaCrc32 and PureJavaCrc32C implement java.util.zip.Checksum which 
> provides only methods to update byte and byte[].  We propose to add a method 
> to update ByteBuffer.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14879) Header was wrong in Snapshot web UI

2019-10-04 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16944303#comment-16944303
 ] 

Hudson commented on HDFS-14879:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17466 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17466/])
HDFS-14879. Header was wrong in Snapshot web UI. Contributed by (tasanuma: rev 
b23bdaf085dbc561c785cef1613bacaf6735d909)
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/dfshealth.html


> Header was wrong in Snapshot web UI
> ---
>
> Key: HDFS-14879
> URL: https://issues.apache.org/jira/browse/HDFS-14879
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: hemanthboyina
>Assignee: hemanthboyina
>Priority: Major
> Fix For: 3.3.0, 3.1.4, 3.2.2
>
> Attachments: HDFS-14879.001.patch, snapshotted.JPG
>
>
> We are showing the list of snapshots, but the header says "Snapshotted 
> Directories".
> A screenshot of the UI is attached.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-2225) SCM fails to start in most unsecure environments due to leftover secure config

2019-10-04 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2225?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16944277#comment-16944277
 ] 

Hudson commented on HDDS-2225:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17465 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17465/])
HDDS-2225. SCM fails to start in most unsecure environments due to (elek: rev 
ec8f691201a30a3ff3746954b3e6cd066a83a6fb)
* (edit) hadoop-ozone/dist/src/main/compose/ozonesecure-mr/docker-compose.yaml


> SCM fails to start in most unsecure environments due to leftover secure config
> --
>
> Key: HDDS-2225
> URL: https://issues.apache.org/jira/browse/HDDS-2225
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: docker
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h 20m
>  Remaining Estimate: 0h
>
> Intermittent failure of {{ozone-recon}} and some other acceptance tests where 
> SCM container is not available is caused by leftover secure config in 
> {{core-site.xml}}.
> Initially the config file is 
> [empty|https://raw.githubusercontent.com/apache/hadoop/trunk/hadoop-hdds/common/src/main/conf/core-site.xml].
>   Various test environments populate it with different settings.  The problem 
> happens when a test does not specify any config for {{core-site.xml}}, in 
> which case the previous test's config file is retained.
> {code}
> scm_1   | 2019-10-01 19:42:05 WARN  WebAppContext:531 - Failed startup of 
> context 
> o.e.j.w.WebAppContext@1cc680e{/,file:///tmp/jetty-0.0.0.0-9876-scm-_-any-1272594486261557815.dir/webapp/,UNAVAILABLE}{/scm}
> scm_1   | javax.servlet.ServletException: javax.servlet.ServletException: 
> Keytab does not exist: /etc/security/keytabs/HTTP.keytab
> scm_1   | at 
> org.apache.hadoop.security.authentication.server.KerberosAuthenticationHandler.init(KerberosAuthenticationHandler.java:188)
> ...
> scm_1   | at 
> org.apache.hadoop.hdds.scm.server.StorageContainerManager.start(StorageContainerManager.java:791)
> ...
> scm_1   | Unable to initialize WebAppContext
> scm_1   | 2019-10-01 19:42:05 INFO  StorageContainerManagerStarter:51 - 
> SHUTDOWN_MSG:
> scm_1   | /
> scm_1   | SHUTDOWN_MSG: Shutting down StorageContainerManager at 
> 8724df7131bb/192.168.128.6
> scm_1   | /
> {code}
> The problem is intermittent due to ordering of test cases being different in 
> different runs.  If a secure test is run earlier, more tests are affected.  
> If secure tests are run last, the issue does not happen.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14637) Namenode may not replicate blocks to meet the policy after enabling upgradeDomain

2019-10-03 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14637?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16944232#comment-16944232
 ] 

Hudson commented on HDFS-14637:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17464 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17464/])
HDFS-14637. Namenode may not replicate blocks to meet the policy after 
(weichiu: rev c99a12167ff9566012ef32104a3964887d62c899)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/DFSTestUtil.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlocksWithNotEnoughRacks.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementStatusDefault.java
* (add) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockPlacementStatusDefault.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementStatusWithNodeGroup.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManagerTestUtil.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementStatusWithUpgradeDomain.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockPlacementStatusWithUpgradeDomain.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementStatus.java
* (edit) 
hadoop-tools/hadoop-dynamometer/hadoop-dynamometer-infra/src/main/java/org/apache/hadoop/tools/dynamometer/BlockPlacementPolicyAlwaysSatisfied.java


> Namenode may not replicate blocks to meet the policy after enabling 
> upgradeDomain
> -
>
> Key: HDFS-14637
> URL: https://issues.apache.org/jira/browse/HDFS-14637
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 3.3.0
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Major
> Attachments: HDFS-14637.001.patch, HDFS-14637.002.patch, 
> HDFS-14637.003.patch, HDFS-14637.004.patch, HDFS-14637.005.patch
>
>
> After changing the network topology or placement policy on a cluster and 
> restarting the namenode, the namenode will scan all blocks on the cluster at 
> startup, and check if they meet the current placement policy. If they do not, 
> they are added to the replication queue and the namenode will arrange for 
> them to be replicated to ensure the placement policy is used.
> If you start with a cluster with no UpgradeDomain, and then enable 
> UpgradeDomain, then on restart the NN does notice all the blocks violate the 
> placement policy and it adds them to the replication queue. I believe there 
> are some issues in the logic that prevents the blocks from replicating 
> depending on the setup:
> With UD enabled but no racks configured, and possibly on a 2-rack cluster, 
> the queued replication work never makes any progress: 
> blockManager.validateReconstructionWork() checks whether the new replica 
> increases the number of racks, and if it does not, it skips the work and 
> tries again later.
> {code:java}
> DatanodeStorageInfo[] targets = rw.getTargets();
> if ((numReplicas.liveReplicas() >= requiredRedundancy) &&
> (!isPlacementPolicySatisfied(block)) ) {
>   if (!isInNewRack(rw.getSrcNodes(), targets[0].getDatanodeDescriptor())) {
> // No use continuing, unless a new rack in this case
> return false;
>   }
>   // mark that the reconstruction work is to replicate internal block to a
>   // new rack.
>   rw.setNotEnoughRack();
> }
> {code}
> Additionally, blockManager.scheduleReconstruction() contains some logic 
> that sets the number of new replicas required to one if the live replicas >= 
> requiredRedundancy:
> {code:java}
> int additionalReplRequired;
> if (numReplicas.liveReplicas() < requiredRedundancy) {
>   additionalReplRequired = requiredRedundancy - numReplicas.liveReplicas()
>   - pendingNum;
> } else {
>   additionalReplRequired = 1; // Needed on a new rack
> }{code}
> With UD, it is possible for 2 new replicas to be needed to meet the block 
> placement policy, if all existing replicas are on nodes with the same domain. 
> For traditional '2 rack redundancy', only 1 new replica would ever have been 
> needed in this scenario.
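
A hedged sketch of the direction a fix could take: ask the placement status how many extra locations are still required instead of hard-coding one. The helper name below is an assumption for illustration, not necessarily the committed API:

{code:java}
// Illustrative only: let the placement status report the shortfall,
// e.g. 2 when every existing replica shares one upgrade domain.
int additionalReplRequired;
if (numReplicas.liveReplicas() < requiredRedundancy) {
  additionalReplRequired = requiredRedundancy - numReplicas.liveReplicas()
      - pendingNum;
} else {
  BlockPlacementStatus status = getBlockPlacementStatus(block);
  // Assumed helper: how many more replicas placement still needs.
  additionalReplRequired = Math.max(1,
      status.getAdditionalReplicasRequired());
}
{code}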



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org

[jira] [Commented] (HDFS-14889) Ability to check if a block has a replica on provided storage

2019-10-03 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14889?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16944209#comment-16944209
 ] 

Hudson commented on HDFS-14889:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17463 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17463/])
HDFS-14889. Ability to check if a block has a replica on provided (virajith: 
rev 844b766da535894b792892b38de6bc2500eca57f)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfoContiguous.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfoStriped.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockInfo.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfo.java


> Ability to check if a block has a replica on provided storage
> -
>
> Key: HDFS-14889
> URL: https://issues.apache.org/jira/browse/HDFS-14889
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ashvin Agrawal
>Assignee: Ashvin Agrawal
>Priority: Major
>
> Provided storage (HDFS-9806) allows data on external storage systems to 
> seamlessly appear as files on HDFS. However, in the implementation today, 
> there is no easy way to distinguish a {{Block}} belonging to an external 
> provided storage volume from a block belonging to the local cluster. This 
> task addresses that. 
> An {{isProvided}} check will be useful in hybrid scenarios where the local 
> cluster hosts both kinds of blocks; for example, the policy for managing 
> replicas/cached blocks will differ from that of regular blocks. As of 
> this task, {{isProvided}} is not invoked anywhere.
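
A minimal sketch of the kind of predicate this enables; StorageType.PROVIDED is real (introduced by HDFS-9806), while the helper itself is illustrative:

{code:java}
import org.apache.hadoop.fs.StorageType;

// Illustrative: a block "is provided" if any replica lives on a
// PROVIDED storage volume rather than on local cluster disks.
static boolean hasProvidedReplica(StorageType[] replicaStorageTypes) {
  for (StorageType type : replicaStorageTypes) {
    if (type == StorageType.PROVIDED) {
      return true;
    }
  }
  return false;
}
{code}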



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-2223) Support ReadWrite lock in LockManager

2019-10-03 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2223?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16944189#comment-16944189
 ] 

Hudson commented on HDDS-2223:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17462 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17462/])
HDDS-2223. Support ReadWrite lock in LockManager. (#1564) (github: rev 
9700e2003aa1b7e2c4072a2a08d8827acc5aa779)
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/lock/LockManager.java
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/lock/ActiveLock.java
* (edit) 
hadoop-hdds/common/src/test/java/org/apache/hadoop/ozone/lock/TestLockManager.java


> Support ReadWrite lock in LockManager
> -
>
> Key: HDDS-2223
> URL: https://issues.apache.org/jira/browse/HDDS-2223
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 3h 50m
>  Remaining Estimate: 0h
>
> Currently {{LockManager}} uses an exclusive lock; instead, we should support 
> a {{ReadWrite}} lock.
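
A self-contained sketch of the requested behavior, assuming one lock per resource; the method names are illustrative, not the committed LockManager API:

{code:java}
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Illustrative: one ReadWriteLock per resource, so concurrent readers
// no longer serialize behind each other.
class ReadWriteLockManager<R> {
  private final ConcurrentMap<R, ReentrantReadWriteLock> locks =
      new ConcurrentHashMap<>();

  private ReentrantReadWriteLock lockFor(R resource) {
    return locks.computeIfAbsent(resource,
        r -> new ReentrantReadWriteLock());
  }

  void acquireReadLock(R resource)  { lockFor(resource).readLock().lock(); }
  void releaseReadLock(R resource)  { lockFor(resource).readLock().unlock(); }
  void acquireWriteLock(R resource) { lockFor(resource).writeLock().lock(); }
  void releaseWriteLock(R resource) { lockFor(resource).writeLock().unlock(); }
}
{code}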



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-2198) SCM should not consider containers in CLOSING state to come out of safemode

2019-10-03 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16944183#comment-16944183
 ] 

Hudson commented on HDDS-2198:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17461 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17461/])
HDDS-2198. SCM should not consider containers in CLOSING state to come (github: 
rev cdaa480dbfd8cc0f0d358f17047c8aa97299cb35)
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/safemode/ContainerSafeModeRule.java
* (edit) 
hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/safemode/TestSCMSafeModeManager.java


> SCM should not consider containers in CLOSING state to come out of safemode
> ---
>
> Key: HDDS-2198
> URL: https://issues.apache.org/jira/browse/HDDS-2198
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Reporter: Nilotpal Nandi
>Assignee: Nanda kumar
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> There are cases where SCM can be stuck in safemode forever if it considers 
> containers in CLOSING state when deciding whether to come out of safemode:
> * There are 5 containers in OPEN state inside SCM.
> * Out of the 5, 3 containers are created on datanodes by the client.
> * 2 containers are yet to be created on datanodes.
> * Due to some pipeline issue, a pipeline close action is sent.
> * All 5 containers' states are changed from OPEN to CLOSING in SCM.
> * Eventually, 3 containers' states move from CLOSING to CLOSED in SCM as the 
> datanodes close those containers.
> * 2 of the containers are still in CLOSING state.
> * SCM is restarted.
> * SCM will never get container reports for the containers which were in 
> CLOSING state, as those containers were never created on datanodes.
> * SCM will remain in safemode. (See the sketch below.)
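
A self-contained sketch of the intended rule; the state names mirror SCM's container lifecycle, but the exact predicate in the patch may differ:

{code:java}
// Illustrative: only containers a datanode can ever report should
// gate safemode exit; CLOSING containers may never have been created.
enum LifeCycleState { OPEN, CLOSING, QUASI_CLOSED, CLOSED }

static boolean countsTowardSafemodeExit(LifeCycleState state) {
  return state == LifeCycleState.CLOSED
      || state == LifeCycleState.QUASI_CLOSED;
}
{code}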



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-2200) Recon does not handle the NULL snapshot from OM DB cleanly.

2019-10-03 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2200?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16944123#comment-16944123
 ] 

Hudson commented on HDDS-2200:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17458 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17458/])
HDDS-2200 : Recon does not handle the NULL snapshot from OM DB cleanly. 
(aengineer: rev b7cb8fe07c25f31caae89d6406be54c505343f3c)
* (edit) 
hadoop-ozone/recon/src/main/java/org/apache/hadoop/ozone/recon/spi/impl/ReconContainerDBProvider.java
* (edit) 
hadoop-ozone/recon/src/main/java/org/apache/hadoop/ozone/recon/recovery/ReconOmMetadataManagerImpl.java
* (edit) 
hadoop-ozone/recon/src/test/java/org/apache/hadoop/ozone/recon/TestReconUtils.java
* (edit) 
hadoop-ozone/recon/src/main/java/org/apache/hadoop/ozone/recon/spi/impl/ContainerDBServiceProviderImpl.java
* (edit) 
hadoop-ozone/recon/src/main/java/org/apache/hadoop/ozone/recon/spi/impl/OzoneManagerServiceProviderImpl.java
* (edit) 
hadoop-ozone/recon/src/test/java/org/apache/hadoop/ozone/recon/spi/impl/TestOzoneManagerServiceProviderImpl.java
* (edit) 
hadoop-ozone/recon/src/test/java/org/apache/hadoop/ozone/recon/spi/impl/TestReconContainerDBProvider.java
* (edit) 
hadoop-ozone/recon/src/main/java/org/apache/hadoop/ozone/recon/ReconUtils.java
* (edit) 
hadoop-ozone/recon/src/test/java/org/apache/hadoop/ozone/recon/recovery/TestReconOmMetadataManagerImpl.java
* (edit) 
hadoop-ozone/recon/src/test/java/org/apache/hadoop/ozone/recon/AbstractOMMetadataManagerTest.java


> Recon does not handle the NULL snapshot from OM DB cleanly.
> ---
>
> Key: HDDS-2200
> URL: https://issues.apache.org/jira/browse/HDDS-2200
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Recon
>Reporter: Aravindan Vijayan
>Assignee: Aravindan Vijayan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> {code}
> 2019-09-27 11:35:19,835 [pool-9-thread-1] ERROR  - Null snapshot location 
> got from OM.
> 2019-09-27 11:35:19,839 [pool-9-thread-1] INFO   - Calling reprocess on 
> Recon tasks.
> 2019-09-27 11:35:19,840 [pool-7-thread-1] INFO   - Starting a 'reprocess' 
> run of ContainerKeyMapperTask.
> 2019-09-27 11:35:20,069 [pool-7-thread-1] INFO   - Creating new Recon 
> Container DB at /tmp/recon/db/recon-container.db_1569609319840
> 2019-09-27 11:35:20,069 [pool-7-thread-1] INFO   - Cleaning up old Recon 
> Container DB at /tmp/recon/db/recon-container.db_1569609258721.
> 2019-09-27 11:35:20,144 [pool-9-thread-1] ERROR  - Unexpected error :
> java.util.concurrent.ExecutionException: java.lang.NullPointerException
> at java.util.concurrent.FutureTask.report(FutureTask.java:122)
> at java.util.concurrent.FutureTask.get(FutureTask.java:192)
> at 
> org.apache.hadoop.ozone.recon.tasks.ReconTaskControllerImpl.reInitializeTasks(ReconTaskControllerImpl.java:181)
> at 
> org.apache.hadoop.ozone.recon.spi.impl.OzoneManagerServiceProviderImpl.syncDataFromOM(OzoneManagerServiceProviderImpl.java:333)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> Caused by: java.lang.NullPointerException
> at 
> org.apache.hadoop.ozone.recon.tasks.ContainerKeyMapperTask.reprocess(ContainerKeyMapperTask.java:81)
> at 
> org.apache.hadoop.ozone.recon.tasks.ReconTaskControllerImpl.lambda$reInitializeTasks$3(ReconTaskControllerImpl.java:176)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1720) Add ability to configure RocksDB logs for Ozone Manager

2019-10-03 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-1720?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16944091#comment-16944091
 ] 

Hudson commented on HDDS-1720:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17457 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17457/])
HDDS-1720 : Add ability to configure RocksDB logs for Ozone Manager. 
(aengineer: rev 76605f17dd15a48bc40c1b2fe6c8d0c2f4631959)
* (edit) 
hadoop-hdds/common/src/test/java/org/apache/hadoop/hdds/utils/db/TestDBStoreBuilder.java
* (add) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/utils/db/RocksDBConfiguration.java
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/utils/db/DBStoreBuilder.java
* (add) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/om/TestOzoneManagerRocksDBLogging.java


> Add ability to configure RocksDB logs for Ozone Manager
> ---
>
> Key: HDDS-1720
> URL: https://issues.apache.org/jira/browse/HDDS-1720
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>Reporter: Aravindan Vijayan
>Assignee: Aravindan Vijayan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> While doing performance testing, it was seen that there was no way to get 
> RocksDB logs for Ozone Manager. Along with RocksDB metrics, this may be a 
> useful mechanism for understanding the health of RocksDB while investigating 
> large clusters.
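
A hedged sketch of one way such a knob can be wired through RocksJava; the RocksDBConfiguration class name comes from the commit's file list, but this method body is illustrative:

{code:java}
import org.rocksdb.InfoLogLevel;
import org.rocksdb.Options;

// Illustrative: surface RocksDB's internal logging at a configurable
// level instead of discarding it.
static Options withRocksDbLogging(Options options, boolean enabled) {
  options.setInfoLogLevel(
      enabled ? InfoLogLevel.INFO_LEVEL : InfoLogLevel.FATAL_LEVEL);
  return options;
}
{code}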



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-2231) test-single.sh cannot copy results

2019-10-03 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2231?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16944060#comment-16944060
 ] 

Hudson commented on HDDS-2231:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17456 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17456/])
HDDS-2231. test-single.sh cannot copy results (#1575) (aengineer: rev 
944668674b57291050262d2d6f84a39ca437671d)
* (edit) hadoop-ozone/dist/src/main/compose/test-single.sh


> test-single.sh cannot copy results
> --
>
> Key: HDDS-2231
> URL: https://issues.apache.org/jira/browse/HDDS-2231
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: docker
>Affects Versions: 0.5.0
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Previously the {{result}} directory was created by simply {{source}}-ing 
> {{testlib.sh}}, but HDDS-2185 changed that to avoid losing results.  
> {{test-single.sh}} needs to be adjusted accordingly.
> {noformat}
> $ cd hadoop-ozone/dist/target/ozone-0.5.0-SNAPSHOT/compose/ozone
> $ docker-compose up -d --scale datanode=3
> $ ../test-single.sh scm basic/basic.robot
> ...
> invalid output path: directory 
> "hadoop-ozone/dist/target/ozone-0.5.0-SNAPSHOT/compose/ozone/result" does not 
> exist
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-2234) rat.sh fails due to ozone-recon-web/build files

2019-10-03 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16944054#comment-16944054
 ] 

Hudson commented on HDDS-2234:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17455 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17455/])
HDDS-2234. rat.sh fails due to ozone-recon-web/build files (#1580) (aengineer: 
rev 47d721d7dd9b875e8a981c34c78ae882f8899ebc)
* (edit) hadoop-ozone/pom.xml


> rat.sh fails due to ozone-recon-web/build files
> ---
>
> Key: HDDS-2234
> URL: https://issues.apache.org/jira/browse/HDDS-2234
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Affects Versions: 0.5.0
>Reporter: Anu Engineer
>Assignee: Attila Doroszlai
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> [ERROR] After correcting the problems, you can resume the build with the 
> command
> [ERROR] mvn  -rf :hadoop-ozone-recon
> [INFO] Build failures were ignored.
> hadoop-ozone/recon/target/rat.txt: !? 
> /Users/aengineer/apache/hadoop/hadoop-ozone/recon/src/main/resources/webapps/recon/ozone-recon-web/build/index.html
> hadoop-ozone/recon/target/rat.txt: !? 
> /Users/aengineer/apache/hadoop/hadoop-ozone/recon/src/main/resources/webapps/recon/ozone-recon-web/build/static/css/2.8943d5a3.chunk.css
> hadoop-ozone/recon/target/rat.txt: !? 
> /Users/aengineer/apache/hadoop/hadoop-ozone/recon/src/main/resources/webapps/recon/ozone-recon-web/build/static/css/2.8943d5a3.chunk.css.map
> hadoop-ozone/recon/target/rat.txt: !? 
> /Users/aengineer/apache/hadoop/hadoop-ozone/recon/src/main/resources/webapps/recon/ozone-recon-web/build/static/css/main.96eebd44.chunk.css
> hadoop-ozone/recon/target/rat.txt: !? 
> /Users/aengineer/apache/hadoop/hadoop-ozone/recon/src/main/resources/webapps/recon/ozone-recon-web/build/static/js/runtime~main.a8a9905a.js.map
> hadoop-ozone/recon/target/rat.txt: !? 
> /Users/aengineer/apache/hadoop/hadoop-ozone/recon/src/main/resources/webapps/recon/ozone-recon-web/build/static/js/runtime~main.a8a9905a.js
> hadoop-ozone/recon/target/rat.txt: !? 
> /Users/aengineer/apache/hadoop/hadoop-ozone/recon/src/main/resources/webapps/recon/ozone-recon-web/build/static/js/2.ea549bfe.chunk.js
> hadoop-ozone/recon/target/rat.txt: !? 
> /Users/aengineer/apache/hadoop/hadoop-ozone/recon/src/main/resources/webapps/recon/ozone-recon-web/build/static/js/main.5bb53989.chunk.js
> hadoop-ozone/recon/target/rat.txt: !? 
> /Users/aengineer/apache/hadoop/hadoop-ozone/recon/src/main/resources/webapps/recon/ozone-recon-web/build/static/js/2.ea549bfe.chunk.js.map
> hadoop-ozone/recon/target/rat.txt: !? 
> /Users/aengineer/apache/hadoop/hadoop-ozone/recon/src/main/resources/webapps/recon/ozone-recon-web/build/precache-manifest.1d05d7a103ee9d6b280ef7adfcab3c01.js
> hadoop-ozone/recon/target/rat.txt: !? 
> /Users/aengineer/apache/hadoop/hadoop-ozone/recon/src/main/resources/webapps/recon/ozone-recon-web/build/service-worker.js



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-2211) Collect docker logs if env fails to start

2019-10-03 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2211?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16943862#comment-16943862
 ] 

Hudson commented on HDDS-2211:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17454 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17454/])
HDDS-2211. Collect docker logs if env fails to start (#1553) (arp7: rev 
51eaecab20cd7c8362899e56284522734f24668a)
* (edit) hadoop-ozone/dist/src/main/compose/test-all.sh
* (edit) hadoop-ozone/dist/src/main/compose/testlib.sh


> Collect docker logs if env fails to start
> -
>
> Key: HDDS-2211
> URL: https://issues.apache.org/jira/browse/HDDS-2211
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: test
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> Occasionally some acceptance test docker environment fails to start up 
> properly.  We need docker logs for analysis, but they are not being collected.
> https://github.com/elek/ozone-ci-q4/blob/master/trunk/trunk-nightly-extra-20190930-74rp4/acceptance/output.log#L3765-L3768



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14881) Safemode 'forceExit' option, doesn’t shown in help message

2019-10-03 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14881?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16943773#comment-16943773
 ] 

Hudson commented on HDFS-14881:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17453 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17453/])
HDFS-14881. Safemode 'forceExit' option, doesn’t shown in help message. 
(ayushsaxena: rev a3fe4042448eefcb7ccfe102ae4d35ae963240b9)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DFSAdmin.java


> Safemode 'forceExit' option, doesn’t shown in help message
> --
>
> Key: HDFS-14881
> URL: https://issues.apache.org/jira/browse/HDFS-14881
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Renukaprasad C
>Assignee: Renukaprasad C
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14881.0003.patch, HDFS-14881.001.patch, 
> HDFS-14881.002.patch, HDFS-14881.004.patch
>
>
> The safemode option 'forceExit' is not shown in the help message.
> bin # ./hdfs dfsadmin
> Usage: hdfs dfsadmin
> Note: Administrative commands can only be run as the HDFS superuser.
> [-report [-live] [-dead] [-decommissioning] [-enteringmaintenance] 
> [-inmaintenance]]
> [-safemode <enter | leave | get | wait>]
> The 'forceExit' option has become a hidden option; end users will not be 
> aware of it from the command help.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-2226) S3 Secrets should use a strong RNG

2019-10-03 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2226?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16943727#comment-16943727
 ] 

Hudson commented on HDDS-2226:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17452 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17452/])
HDDS-2226. S3 Secrets should use a strong RNG. (#1572) (github: rev 
d59bcbfa0f30fc6fedb0a7e1896292a524ff71c7)
* (edit) hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/OmUtils.java


> S3 Secrets should use a strong RNG
> --
>
> Key: HDDS-2226
> URL: https://issues.apache.org/jira/browse/HDDS-2226
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: S3
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> The S3 token generation under Ozone should use a strong RNG. 
> I want to thank Jonathan Leitschuh for originally noticing this issue and 
> reporting it.
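
A minimal sketch of the change's spirit: draw the secret from SecureRandom. The secret length and encoding here are illustrative, not necessarily what OmUtils uses:

{code:java}
import java.security.SecureRandom;
import java.util.Base64;

// Illustrative: a cryptographically strong RNG for S3 secrets.
static String newS3Secret() {
  byte[] secret = new byte[32];
  new SecureRandom().nextBytes(secret);
  return Base64.getUrlEncoder().withoutPadding().encodeToString(secret);
}
{code}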



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14888) RBF: Enable Parallel Test Profile for builds

2019-10-03 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14888?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16943628#comment-16943628
 ] 

Hudson commented on HDFS-14888:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17451 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17451/])
HDFS-14888. RBF: Enable Parallel Test Profile for builds. Contributed by 
(ayushsaxena: rev 5a7483ca5ceddd822024f46e38913e36dc8fadc8)
* (edit) hadoop-hdfs-project/hadoop-hdfs-rbf/pom.xml


> RBF: Enable Parallel Test Profile for builds
> 
>
> Key: HDFS-14888
> URL: https://issues.apache.org/jira/browse/HDFS-14888
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14888-01.patch
>
>
> Enable Parallel Test Profile for builds.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-2228) Fix NPE in OzoneDelegationTokenManager#addPersistedDelegationToken

2019-10-03 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2228?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16943357#comment-16943357
 ] 

Hudson commented on HDDS-2228:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17448 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17448/])
HDDS-2228. Fix NPE in OzoneDelegationTokenManager#addPersistedDelegat… (github: 
rev c5665b23ca92a8e18c4e9d24413c13f7cb7fd5fe)
* (edit) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/security/OzoneSecretManager.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneManager.java
* (edit) 
hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/security/TestOzoneDelegationTokenSecretManager.java
* (edit) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/security/OzoneDelegationTokenSecretManager.java


> Fix NPE in OzoneDelegationTokenManager#addPersistedDelegationToken
> --
>
> Key: HDDS-2228
> URL: https://issues.apache.org/jira/browse/HDDS-2228
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Affects Versions: 0.4.0
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> The certClient was not initialized in the proper order. As a result, when OM 
> restarts with delegation tokens already issued, the Ozone delegation token 
> secret manager throws an NPE.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-2072) Make StorageContainerLocationProtocolService message based

2019-10-02 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16943257#comment-16943257
 ] 

Hudson commented on HDDS-2072:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17447 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17447/])
HDDS-2072. Make StorageContainerLocationProtocolService message based 
(aengineer: rev 4c24f2434dd8c09bb104ee660975855eca287fe6)
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/protocolPB/StorageContainerLocationProtocolClientSideTranslatorPB.java
* (edit) 
hadoop-hdds/common/src/main/proto/StorageContainerLocationProtocol.proto
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/SCMClientProtocolServer.java
* (edit) 
hadoop-ozone/insight/src/main/java/org/apache/hadoop/ozone/insight/BaseInsightSubCommand.java
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/protocol/StorageContainerLocationProtocolServerSideTranslatorPB.java
* (edit) 
hadoop-ozone/insight/src/main/java/org/apache/hadoop/ozone/insight/scm/ScmProtocolBlockLocationInsight.java
* (add) 
hadoop-ozone/insight/src/main/java/org/apache/hadoop/ozone/insight/scm/ScmProtocolContainerLocationInsight.java
* (edit) hadoop-hdds/common/src/main/proto/ScmBlockLocationProtocol.proto


> Make StorageContainerLocationProtocolService message based
> --
>
> Key: HDDS-2072
> URL: https://issues.apache.org/jira/browse/HDDS-2072
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: SCM
>Reporter: Marton Elek
>Assignee: Marton Elek
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> We started to use a generic pattern where we have only one method in the gRPC 
> service and the main message contains all the required common information 
> (e.g. tracing).
> StorageContainerLocationProtocolService is not yet migrated to this approach. 
> To make our generic debug tool more powerful and unify our protocols, I 
> suggest transforming this protocol as well. (A sketch of the pattern follows.)
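
A sketch of the single-method pattern being described, in plain Java; the names are illustrative, and the real envelope is a protobuf message carrying a command type plus common fields such as the trace id:

{code:java}
// Illustrative: one entry point, one envelope; the server routes on
// the command type, so tracing/debug tooling sees every call uniformly.
enum Type { ALLOCATE_CONTAINER, GET_CONTAINER, CLOSE_CONTAINER }

static Object submitRequest(Type cmdType, String traceId, Object payload) {
  // Common concerns (e.g. the traceId) are handled once, here.
  switch (cmdType) {
    case ALLOCATE_CONTAINER: return handleAllocate(payload);
    case GET_CONTAINER:      return handleGet(payload);
    case CLOSE_CONTAINER:    return handleClose(payload);
    default:
      throw new IllegalArgumentException("Unknown command: " + cmdType);
  }
}

static Object handleAllocate(Object payload) { return payload; } // stub
static Object handleGet(Object payload)      { return payload; } // stub
static Object handleClose(Object payload)    { return payload; } // stub
{code}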



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14858) [SBN read] Allow configurably enable/disable AlignmentContext on NameNode

2019-10-02 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14858?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16943250#comment-16943250
 ] 

Hudson commented on HDFS-14858:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17446 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17446/])
HDFS-14858. [SBN read] Allow configurably enable/disable (cliang: rev 
1303255aee75e5109433f937592a890e8d274ce2)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestMultiObserverNode.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestConsistentReadsObserver.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestStateAlignmentContextWithHA.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeRpcServer.java
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/ObserverNameNode.md
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestObserverNode.java


> [SBN read] Allow configurably enable/disable AlignmentContext on NameNode
> -
>
> Key: HDFS-14858
> URL: https://issues.apache.org/jira/browse/HDFS-14858
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Major
> Attachments: HDFS-14858.001.patch, HDFS-14858.002.patch, 
> HDFS-14858.003.patch, HDFS-14858.004.patch
>
>
> As brought up under HDFS-14277, we should make sure SBN read has no 
> performance impact when it is not enabled. One potential overhead of SBN read 
> is maintaining and updating additional state on the NameNode. 
> Specifically, this is done by creating/updating/checking a 
> {{GlobalStateIdContext}} instance. Currently, even without enabling SBN read, 
> this logic is still checked. We can make this configurable so that when 
> SBN read is not enabled, there is no such overhead and everything works as-is.
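
A minimal sketch of the configurable gate; the key name is assumed from the patch (treat it as an assumption here), with the default off so non-SBN-read clusters skip the bookkeeping entirely:

{code:java}
import org.apache.hadoop.conf.Configuration;

// Illustrative: create the GlobalStateIdContext only when the flag is set.
static boolean stateContextEnabled(Configuration conf) {
  return conf.getBoolean("dfs.namenode.state.context.enabled", false);
}
{code}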



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-2019) Handle Set DtService of token in S3Gateway for OM HA

2019-10-02 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2019?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16943219#comment-16943219
 ] 

Hudson commented on HDDS-2019:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17444 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17444/])
HDDS-2019. Handle Set DtService of token in S3Gateway for OM HA. (#1489) 
(github: rev b09d389001d95eedb7ec17c6f890e0ea3baace9d)
* (edit) 
hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/util/OzoneS3Util.java
* (add) 
hadoop-ozone/s3gateway/src/test/java/org/apache/hadoop/ozone/s3/util/TestOzoneS3Util.java
* (edit) 
hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/OzoneClientProducer.java
* (edit) 
hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/OzoneServiceProvider.java


> Handle Set DtService of token in S3Gateway for OM HA
> 
>
> Key: HDDS-2019
> URL: https://issues.apache.org/jira/browse/HDDS-2019
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 5h 40m
>  Remaining Estimate: 0h
>
> When OM HA is enabled and tokens are generated, the service name should be 
> set with the addresses of all OMs.
>  
> Currently, without HA, it is set to the OM RpcAddress string. This Jira is to 
> handle:
>  # Set dtService with all OM addresses. Right now in OMClientProducer, the 
> UGI is created with the S3 token and the token's serviceName is set with a 
> single OM address; for the HA case, this should be set with all OM RPC 
> addresses. (See the sketch below.)
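
A hedged sketch of what a multi-address service string could look like; the real code likely goes through Hadoop's token service helpers, so this plain join is illustrative only:

{code:java}
import java.util.List;

// Illustrative: list every OM RPC address in the token's service
// field so any OM in the HA ring can match it.
static String buildOmServiceName(List<String> omRpcAddresses) {
  return String.join(",", omRpcAddresses);
}
{code}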



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-2224) Fix loadup cache for cache cleanup policy NEVER

2019-10-02 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2224?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16943212#comment-16943212
 ] 

Hudson commented on HDDS-2224:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17443 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17443/])
HDDS-2224. Fix loadup cache for cache cleanup policy NEVER. (#1567) (github: 
rev 53ed78bcdb716d0351a934ac18661ef9fa6a03d4)
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/utils/db/cache/TableCacheImpl.java
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/utils/db/TypedTable.java
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/utils/db/cache/TableCache.java


> Fix loadup cache for cache cleanup policy NEVER
> ---
>
> Key: HDDS-2224
> URL: https://issues.apache.org/jira/browse/HDDS-2224
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> During initial startup/restart of OM, if a table has its cache cleanup policy 
> set to NEVER, we fill the table cache and also epochEntries. We do not need 
> to add entries to epochEntries, as epochEntries is only used for eviction 
> from the cache once the double buffer flushes to disk.
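
A self-contained sketch of the fix's logic: epoch tracking only matters when entries can be evicted after a double-buffer flush. The policy and structure names are illustrative, not the TableCacheImpl API:

{code:java}
import java.util.Map;
import java.util.NavigableSet;
import java.util.TreeSet;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative: skip epoch bookkeeping for NEVER-cleanup tables.
class TableCacheSketch<K, V> {
  enum CleanupPolicy { NEVER, MANUAL }

  private final Map<K, V> cache = new ConcurrentHashMap<>();
  private final NavigableSet<Long> epochEntries = new TreeSet<>();
  private final CleanupPolicy policy;

  TableCacheSketch(CleanupPolicy policy) { this.policy = policy; }

  void loadInitial(K key, V value, long epoch) {
    cache.put(key, value);
    if (policy != CleanupPolicy.NEVER) {
      // Only evictable caches need the epoch for post-flush cleanup.
      epochEntries.add(epoch);
    }
  }
}
{code}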



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-2162) Make OM Generic related configuration support HA style config

2019-10-02 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2162?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16943207#comment-16943207
 ] 

Hudson commented on HDDS-2162:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17442 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17442/])
HDDS-2162. Make OM Generic related configuration support HA style (github: rev 
169cef758dcbe7021d44765b4c18f3ed50eb5a03)
* (add) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ha/OMHANodeDetails.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/om/TestOzoneManagerConfiguration.java
* (add) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ha/OMNodeDetails.java
* (add) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ha/package-info.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ratis/OzoneManagerRatisServer.java
* (edit) 
hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/ratis/TestOzoneManagerRatisServer.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/snapshot/OzoneManagerSnapshotProvider.java
* (delete) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OMNodeDetails.java
* (edit) hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/OmUtils.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneManager.java


> Make OM Generic related configuration support HA style config
> -
>
> Key: HDDS-2162
> URL: https://issues.apache.org/jira/browse/HDDS-2162
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 7.5h
>  Remaining Estimate: 0h
>
> To have a single configuration to use across the OM cluster, a few of the 
> configs like 
> OZONE_OM_KERBEROS_KEYTAB_FILE_KEY, 
> OZONE_OM_KERBEROS_PRINCIPAL_KEY, 
> OZONE_OM_HTTP_KERBEROS_KEYTAB_FILE, and 
> OZONE_OM_HTTP_KERBEROS_PRINCIPAL_KEY need to support configs suffixed 
> with the service id and node id.
>  
> Addressed OM_DB_DIRS, OZONE_OM_ADDRESS_KEY also in this patch.
>  
> This Jira is to fix the above configs.
>  
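
A minimal sketch of HA-style key resolution: try the key suffixed with the service id and node id, then fall back to the plain key. The helper name and example key are illustrative:

{code:java}
import org.apache.hadoop.conf.Configuration;

// Illustrative: "ozone.om.address.omservice1.om2" wins over
// "ozone.om.address" when both are present.
static String getOmHaConf(Configuration conf, String key,
    String serviceId, String nodeId) {
  String suffixed = key + "." + serviceId + "." + nodeId;
  String value = conf.get(suffixed);
  return value != null ? value : conf.get(key);
}
{code}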



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-2227) GDPR key generation could benefit from secureRandom

2019-10-02 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2227?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16943113#comment-16943113
 ] 

Hudson commented on HDDS-2227:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17441 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17441/])
HDDS-2227. GDPR key generation could benefit from secureRandom. (#1574) 
(github: rev 685918ef41a9fff51a1a84718097b90b4a915e68)
* (edit) 
hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/rpc/RpcClient.java
* (edit) 
hadoop-ozone/common/src/test/java/org/apache/hadoop/ozone/security/TestGDPRSymmetricKey.java
* (edit) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/security/GDPRSymmetricKey.java


> GDPR key generation could benefit from secureRandom
> ---
>
> Key: HDDS-2227
> URL: https://issues.apache.org/jira/browse/HDDS-2227
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> SecureRandom can be used for the symmetric key for GDPR. While GDPR is not 
> a security feature, this is a good optional feature to have.
> I want to thank Jonathan Leitschuh for originally noticing this issue and 
> reporting it.
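
A minimal sketch, assuming an AES key as GDPRSymmetricKey (from the commit's file list) suggests; the key length here is illustrative:

{code:java}
import java.security.SecureRandom;
import javax.crypto.spec.SecretKeySpec;

// Illustrative: back the GDPR symmetric key with SecureRandom bytes.
static SecretKeySpec newGdprKey() {
  byte[] secret = new byte[16];  // AES-128, for illustration
  new SecureRandom().nextBytes(secret);
  return new SecretKeySpec(secret, "AES");
}
{code}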



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-2073) Make SCMSecurityProtocol message based

2019-10-02 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2073?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16943101#comment-16943101
 ] 

Hudson commented on HDDS-2073:
--

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17440 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17440/])
HDDS-2073. Make SCMSecurityProtocol message based. Contributed by Elek, 
(aengineer: rev ffd4e527256389d91dd8e4c49ca1681f70a790e2)
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/protocolPB/SCMSecurityProtocolClientSideTranslatorPB.java
* (delete) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/protocolPB/SCMSecurityProtocolServerSideTranslatorPB.java
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/SCMSecurityProtocolServer.java
* (edit) hadoop-hdds/common/src/main/proto/SCMSecurityProtocol.proto
* (add) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/protocol/SCMSecurityProtocolServerSideTranslatorPB.java
* (edit) 
hadoop-ozone/insight/src/main/java/org/apache/hadoop/ozone/insight/BaseInsightSubCommand.java
* (add) 
hadoop-ozone/insight/src/main/java/org/apache/hadoop/ozone/insight/scm/ScmProtocolSecurityInsight.java


> Make SCMSecurityProtocol message based
> --
>
> Key: HDDS-2073
> URL: https://issues.apache.org/jira/browse/HDDS-2073
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: SCM
>Reporter: Marton Elek
>Assignee: Marton Elek
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> We started to use a generic pattern where we have only one method in the gRPC 
> service and the main message contains all the required common information 
> (e.g. tracing).
> SCMSecurityProtocol.proto is not yet migrated to this approach. To make our 
> generic debug tool more powerful and unify our protocols, I suggest 
> transforming this protocol as well.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-2068) Make StorageContainerDatanodeProtocolService message based

2019-10-02 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2068?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16943086#comment-16943086
 ] 

Hudson commented on HDDS-2068:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17439 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17439/])
HDDS-2068. Make StorageContainerDatanodeProtocolService message based 
(aengineer: rev e8ae632d4c4f13788b0c42dbf297c8f7b9d889f3)
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/protocolPB/StorageContainerDatanodeProtocolClientSideTranslatorPB.java
* (edit) 
hadoop-ozone/insight/src/main/java/org/apache/hadoop/ozone/insight/scm/ScmProtocolBlockLocationInsight.java
* (add) 
hadoop-ozone/insight/src/main/java/org/apache/hadoop/ozone/insight/scm/ScmProtocolDatanodeInsight.java
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/SCMDatanodeProtocolServer.java
* (edit) 
hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/common/SCMTestUtils.java
* (edit) 
hadoop-ozone/insight/src/main/java/org/apache/hadoop/ozone/insight/BaseInsightSubCommand.java
* (edit) 
hadoop-hdds/container-service/src/main/proto/StorageContainerDatanodeProtocol.proto
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/protocolPB/StorageContainerDatanodeProtocolServerSideTranslatorPB.java


> Make StorageContainerDatanodeProtocolService message based
> --
>
> Key: HDDS-2068
> URL: https://issues.apache.org/jira/browse/HDDS-2068
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: SCM
>Reporter: Marton Elek
>Assignee: Marton Elek
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> We started to use a generic pattern where we have only one method in the gRPC 
> service and the main message contains all the required common information 
> (e.g. tracing).
> StorageContainerDatanodeProtocolService is not yet migrated to this approach. 
> To make our generic debug tool more powerful and unify our protocols, I 
> suggest transforming this protocol as well.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-2201) Rename VolumeList to UserVolumeInfo

2019-10-02 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2201?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16942962#comment-16942962
 ] 

Hudson commented on HDDS-2201:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17437 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17437/])
HDDS-2201. Rename VolumeList to UserVolumeInfo. (#1566) (github: rev 
2e1fd44596285f66f5874d2897b6d42dc5f82f65)
* (edit) 
hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/request/TestOMRequestUtils.java
* (add) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/codec/UserVolumeInfoCodec.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/VolumeManagerImpl.java
* (edit) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/OMMetadataManager.java
* (delete) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/codec/VolumeListCodec.java
* (edit) hadoop-ozone/common/src/main/proto/OzoneManagerProtocol.proto
* (edit) 
hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/response/volume/TestOMVolumeSetOwnerResponse.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/volume/OMVolumeDeleteRequest.java
* (add) 
hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/response/volume/package-info.java
* (edit) 
hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/scm/cli/SQLCLI.java
* (add) 
hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/response/package-info.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/response/volume/OMVolumeSetOwnerResponse.java
* (edit) 
hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/response/TestOMResponseUtils.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/volume/OMVolumeSetOwnerRequest.java
* (edit) 
hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/request/volume/TestOMVolumeCreateRequest.java
* (edit) 
hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/response/volume/TestOMVolumeDeleteResponse.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/response/volume/OMVolumeDeleteResponse.java
* (add) 
hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/request/package-info.java
* (edit) 
hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/request/volume/TestOMVolumeSetOwnerRequest.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/volume/OMVolumeCreateRequest.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/s3/bucket/S3BucketCreateRequest.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/volume/OMVolumeRequest.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OmMetadataManagerImpl.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/response/volume/OMVolumeCreateResponse.java
* (edit) 
hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/response/volume/TestOMVolumeCreateResponse.java


> Rename VolumeList to UserVolumeInfo
> ---
>
> Key: HDDS-2201
> URL: https://issues.apache.org/jira/browse/HDDS-2201
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Manager
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h 10m
>  Remaining Estimate: 0h
>
> Under Ozone Manager, the volume points to a structure called VolumeInfo, 
> Bucket points to BucketInfo, and Key points to KeyInfo. However, User points 
> to VolumeList. duh?
> This JIRA proposes to refactor VolumeList as UserVolumeInfo. Why not 
> UserInfo? Because that structure is already taken by the security work of 
> Ozone Manager.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-2187) ozone-mr test fails with No FileSystem for scheme "o3fs"

2019-10-02 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2187?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16942723#comment-16942723
 ] 

Hudson commented on HDDS-2187:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17435 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17435/])
HDDS-2187. ozone-mr test fails with No FileSystem for scheme "o3fs" (elek: rev 
f1ba9bfad75acf40faabd5b2f30cbd920fa800ec)
* (edit) hadoop-ozone/dist/src/main/compose/ozonesecure-mr/docker-config
* (add) 
hadoop-ozone/tools/src/main/resources/META-INF/services/org.apache.hadoop.fs.FileSystem


> ozone-mr test fails with No FileSystem for scheme "o3fs"
> 
>
> Key: HDDS-2187
> URL: https://issues.apache.org/jira/browse/HDDS-2187
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h 20m
>  Remaining Estimate: 0h
>
> HDDS-2101 changed how Ozone filesystem provider is configured.  {{ozone-mr}} 
> tests [started 
> failing|https://github.com/elek/ozone-ci/blob/2f2c99652af6b26a95f08eece9e545f0d72ccf45/pr/pr-hdds-2101-rtz55/acceptance/output.log#L255-L263],
>  but it [wasn't 
> noticed|https://github.com/elek/ozone-ci/blob/master/pr/pr-hdds-2101-rtz55/acceptance/result]
>  due to HDDS-2185.
> {code}
> Running command 'ozone fs -mkdir /user'
> ${output} = mkdir: No FileSystem for scheme "o3fs"
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-2210) ContainerStateMachine should not be marked unhealthy if applyTransaction fails with closed container exception

2019-10-01 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2210?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16942487#comment-16942487
 ] 

Hudson commented on HDDS-2210:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17434 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17434/])
HDDS-2210. ContainerStateMachine should not be marked unhealthy if (github: rev 
41440ec890348f95bf7f10b5ced737e41dd6c3d3)
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestContainerStateMachineFailures.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/transport/server/ratis/ContainerStateMachine.java


> ContainerStateMachine should not be marked unhealthy if applyTransaction 
> fails with closed container exception
> --
>
> Key: HDDS-2210
> URL: https://issues.apache.org/jira/browse/HDDS-2210
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> Currently, if applyTransaction fails, the stateMachine is marked unhealthy 
> and the next snapshot creation will fail. As a result, the raftServer 
> will close down, leading to pipeline failure. A ClosedContainer exception 
> should be ignored when deciding whether to mark the stateMachine unhealthy.
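
A minimal sketch of the proposed filtering; the result names below are 
stand-ins for the real ContainerProtos values, not the actual 
ContainerStateMachine code:

{code:java}
// Hedged sketch of the fix idea: only poison the state machine for
// unexpected failures; results that merely say "the container is already
// closed" are benign during close and must be ignored.
public final class FailureClassifier {
  enum Result { SUCCESS, CONTAINER_NOT_OPEN, CLOSED_CONTAINER_IO, IO_EXCEPTION }

  static boolean shouldMarkUnhealthy(Result r) {
    return r != Result.SUCCESS
        && r != Result.CONTAINER_NOT_OPEN
        && r != Result.CLOSED_CONTAINER_IO;
  }
}
{code}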



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14885) UI: Fix a typo on WebUI of DataNode.

2019-10-01 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16942434#comment-16942434
 ] 

Hudson commented on HDFS-14885:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17433 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17433/])
HDFS-14885. UI: Fix a typo on WebUI of DataNode. Contributed by Xieming 
(aajisaka: rev 3df733c25010591fe7c646a076d57f38e916296a)
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/dfshealth.html


> UI: Fix a typo on WebUI of DataNode.
> 
>
> Key: HDFS-14885
> URL: https://issues.apache.org/jira/browse/HDFS-14885
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, ui
>Reporter: Xieming Li
>Assignee: Xieming Li
>Priority: Trivial
> Fix For: 3.3.0
>
> Attachments: HDFS-14885.patch, Screen Shot 2019-10-01 at 12.40.29.png
>
>
> A period ('.') should be added to the end of the following sentence on the 
> WebUI of the DataNode:
> "No nodes are decommissioning" 
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-2166) Some RPC metrics are missing from SCM prometheus endpoint

2019-10-01 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2166?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16942111#comment-16942111
 ] 

Hudson commented on HDDS-2166:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17428 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17428/])
HDDS-2166. Some RPC metrics are missing from SCM prometheus endpoint (elek: rev 
918b470deb35c892efcfa8ceba211a38cbe7bf4c)
* (edit) 
hadoop-hdds/framework/src/test/java/org/apache/hadoop/hdds/server/TestPrometheusMetricsSink.java
* (edit) 
hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/server/PrometheusMetricsSink.java


> Some RPC metrics are missing from SCM prometheus endpoint
> -
>
> Key: HDDS-2166
> URL: https://issues.apache.org/jira/browse/HDDS-2166
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Marton Elek
>Assignee: Marton Elek
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> In Hadoop metrics it's possible to register multiple metrics with the same 
> name but with different tags. For example, each RpcServer in SCM has its own 
> metrics instance.
> {code}
> "name" : 
> "Hadoop:service=StorageContainerManager,name=RpcActivityForPort9860",
> "name" : 
> "Hadoop:service=StorageContainerManager,name=RpcActivityForPort9863",
> {code}
> They are converted by PrometheusSink to a prometheus metric line with proper 
> name and tags. For example:
> {code}
> rpc_rpc_queue_time60s_num_ops{port="9860",servername="StorageContainerLocationProtocolService",context="rpc",hostname="72736061cbc5"}
>  0
> {code}
> The PrometheusSink uses a Map to cache all the recent values, but 
> unfortunately the key contains only the name (rpc_rpc_queue_time60s_num_ops 
> in our example) and not the tags (port=...).
> For this reason, if there are multiple metrics with the same name, only the 
> first one will be displayed.
> As a result in SCM only the metrics of the first RPC server can be exported 
> to the prometheus endpoint. 
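
A minimal sketch of the fix idea: key the cache on the metric name plus its 
tags rather than the name alone (names below are illustrative, not the actual 
PrometheusMetricsSink fields):

{code:java}
import java.util.Map;
import java.util.TreeMap;

public final class MetricKey {
  /** Builds a cache key from the metric name and all of its tags, sorted
   *  for stability, so that same-named metrics from different RPC servers
   *  (port 9860 vs 9863) stay distinct in the cache. */
  static String of(String name, Map<String, String> tags) {
    StringBuilder sb = new StringBuilder(name).append('{');
    for (Map.Entry<String, String> e : new TreeMap<>(tags).entrySet()) {
      sb.append(e.getKey()).append("=\"").append(e.getValue()).append("\",");
    }
    return sb.append('}').toString();
  }
}
{code}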



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14492) Snapshot memory leak

2019-10-01 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14492?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16942102#comment-16942102
 ] 

Hudson commented on HDFS-14492:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17427 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17427/])
HDFS-14492. Snapshot memory leak. Contributed by Wei-Chiu Chuang. (shashikant: 
rev 6ef6594c7ee09b561e42c16ce4e91c0479908ad8)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeDirectory.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestRenameWithSnapshots.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFile.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestSnapshotDeletion.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/AbstractINodeDiffList.java


> Snapshot memory leak
> 
>
> Key: HDFS-14492
> URL: https://issues.apache.org/jira/browse/HDFS-14492
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: snapshots
>Affects Versions: 2.6.0
> Environment: CDH5.14.4
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Major
> Fix For: 3.1.4
>
>
> We recently examined the NameNode heap dump of a big, heavy snapshot user, 
> trying to trim some fat, and surely enough we found memory leak in it: when 
> snapshots are removed, the corresponding data structures are not removed.
> This cluster has 586 million file system objects (286 million files, 287 
> million blocks, 13 million directories), using around 132gb of heap.
> While only 44.5 million files have snapshotted copies, 
> (INodeFileAttributes$SnapshotCopy), most inodes (nearly 212 million) have 
> FileWithSnapshotFeature and FileDiffList. Those inodes had snapshotted copies 
> at some point in the past, but after snapshots are removed, those data 
> structured are still kept in the heap.
> INode$Feature = 32.5 bytes on average, FileWithSnapshotFeature = 32 bytes, 
> FileDiffList = 24 bytes. It may not sound like a lot, but it adds up quickly in 
> large clusters like this. In this cluster, a whopping 13.8gb of memory could 
> have been saved ((32.5 + 32 + 24) bytes * (211997769 - 44572380) =~ 
> 13.8gb) if not for this bug. That is more than 10% savings in heap size.
> Heap histogram for reference:
> {noformat}
> num #instances #bytes class name
>  --
>  1: 286418254 27496152384 org.apache.hadoop.hdfs.server.namenode.INodeFile
>  2: 28737 18388622528 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockInfo
>  3: 227899550 17144816120 [B
>  4: 287324031 13769408616 
> [Lorg.apache.hadoop.hdfs.server.blockmanagement.DatanodeStorageInfo;
>  5: 71352116 12353841568 [Ljava.lang.Object;
>  6: 286322650 9170335840 
> [Lorg.apache.hadoop.hdfs.server.blockmanagement.BlockInfo;
>  7: 235632329 7658462416 
> [Lorg.apache.hadoop.hdfs.server.namenode.INode$Feature;
>  8: 4 7046430816 [Lorg.apache.hadoop.util.LightWeightGSet$LinkedElement;
>  9: 211997769 6783928608 
> org.apache.hadoop.hdfs.server.namenode.snapshot.FileWithSnapshotFeature
>  10: 211997769 5087946456 
> org.apache.hadoop.hdfs.server.namenode.snapshot.FileDiffList
>  11: 76586261 3780468856 [I
>  12: 44572380 3209211360 
> org.apache.hadoop.hdfs.server.namenode.INodeFileAttributes$SnapshotCopy
>  13: 58634517 2345380680 java.util.ArrayList
>  14: 44572380 2139474240 
> org.apache.hadoop.hdfs.server.namenode.snapshot.FileDiff
>  15: 76582416 1837977984 org.apache.hadoop.hdfs.server.namenode.AclFeature
>  16: 12907668 1135874784 
> org.apache.hadoop.hdfs.server.namenode.INodeDirectory{noformat}
> [~szetszwo] [~arpaga] [~smeng] [~shashikant] any thoughts?
> I am thinking that inside 
> AbstractINodeDiffList#deleteSnapshotDiff(), in addition to cleaning up file 
> diffs, it should also remove FileWithSnapshotFeature. I am not familiar with 
> the snapshot implementation, so any guidance is greatly appreciated.
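
A minimal sketch of that idea, using stand-in accessor names so it is 
self-contained (the real INodeFile/feature APIs differ):

{code:java}
// Hedged sketch, not the actual HDFS code: once a file's diff list becomes
// empty, remove the FileWithSnapshotFeature so its ~88 bytes per file can
// be reclaimed by the garbage collector.
final class SnapshotFeatureCleanup {
  static void maybeDropSnapshotFeature(FileLike file) {
    if (file.diffCount() == 0 && file.hasSnapshotFeature()) {
      file.removeSnapshotFeature();   // frees feature + diff list objects
    }
  }

  /** Stand-in for INodeFile so the sketch compiles on its own. */
  interface FileLike {
    int diffCount();
    boolean hasSnapshotFeature();
    void removeSnapshotFeature();
  }
}
{code}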



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1615) ManagedChannel references are being leaked in ReplicationSupervisor.java

2019-10-01 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-1615?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16941661#comment-16941661
 ] 

Hudson commented on HDDS-1615:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17423 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17423/])
HDDS-1615. ManagedChannel references are being leaked in (github: rev 
8efd25b33a210f507da58be88e1c93e7f9b7aaed)
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/replication/GrpcReplicationClient.java


> ManagedChannel references are being leaked in ReplicationSupervisor.java
> 
>
> Key: HDDS-1615
> URL: https://issues.apache.org/jira/browse/HDDS-1615
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.4.0
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
>Priority: Major
>  Labels: MiniOzoneChaosCluster, pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> ManagedChannel references are being leaked in ReplicationSupervisor.java
> {code}
> May 30, 2019 8:10:56 AM 
> org.apache.ratis.thirdparty.io.grpc.internal.ManagedChannelOrphanWrapper$ManagedChannelReference
>  cleanQueue
> SEVERE: *~*~*~ Channel ManagedChannelImpl{logId=1495, 
> target=192.168.0.3:49868} was not shutdown properly!!! ~*~*~*
> Make sure to call shutdown()/shutdownNow() and wait until 
> awaitTermination() returns true.
> java.lang.RuntimeException: ManagedChannel allocation site
> at 
> org.apache.ratis.thirdparty.io.grpc.internal.ManagedChannelOrphanWrapper$ManagedChannelReference.(ManagedChannelOrphanWrapper.java:103)
> at 
> org.apache.ratis.thirdparty.io.grpc.internal.ManagedChannelOrphanWrapper.(ManagedChannelOrphanWrapper.java:53)
> at 
> org.apache.ratis.thirdparty.io.grpc.internal.ManagedChannelOrphanWrapper.(ManagedChannelOrphanWrapper.java:44)
> at 
> org.apache.ratis.thirdparty.io.grpc.internal.AbstractManagedChannelImplBuilder.build(AbstractManagedChannelImplBuilder.java:411)
> at 
> org.apache.hadoop.ozone.container.replication.GrpcReplicationClient.(GrpcReplicationClient.java:65)
> at 
> org.apache.hadoop.ozone.container.replication.SimpleContainerDownloader.getContainerDataFromReplicas(SimpleContainerDownloader.java:87)
> at 
> org.apache.hadoop.ozone.container.replication.DownloadAndImportReplicator.replicate(DownloadAndImportReplicator.java:118)
> at 
> org.apache.hadoop.ozone.container.replication.ReplicationSupervisor$TaskRunner.run(ReplicationSupervisor.java:115)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14305) Serial number in BlockTokenSecretManager could overlap between different namenodes

2019-09-30 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14305?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16941420#comment-16941420
 ] 

Hudson commented on HDFS-14305:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17421 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17421/])
HDFS-14305. Fix serial number calculation in BlockTokenSecretManager to (shv: 
rev b3275ab1f2f4546ba4bdc0e48cfa60b5b05071b9)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/security/token/block/TestBlockToken.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestFailoverWithBlockTokensEnabled.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/security/token/block/BlockTokenSecretManager.java


> Serial number in BlockTokenSecretManager could overlap between different 
> namenodes
> --
>
> Key: HDFS-14305
> URL: https://issues.apache.org/jira/browse/HDFS-14305
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode, security
>Reporter: Chao Sun
>Assignee: Xiaoqiao He
>Priority: Major
>  Labels: multi-sbnn, release-blocker
> Fix For: 3.0.4, 3.3.0, 3.2.1, 3.1.3
>
> Attachments: HDFS-14305-007.patch, HDFS-14305-008.patch, 
> HDFS-14305.001.patch, HDFS-14305.002.patch, HDFS-14305.003.patch, 
> HDFS-14305.004.patch, HDFS-14305.005.patch, HDFS-14305.006.patch
>
>
> Currently, a {{BlockTokenSecretManager}} starts with a random integer as the 
> initial serial number, and then uses this formula to rotate it:
> {code:java}
> this.intRange = Integer.MAX_VALUE / numNNs;
> this.nnRangeStart = intRange * nnIndex;
> this.serialNo = (this.serialNo % intRange) + (nnRangeStart);
>  {code}
> where {{numNNs}} is the total number of NameNodes in the cluster, and 
> {{nnIndex}} is the index of the current NameNode specified in the 
> configuration {{dfs.ha.namenodes.}}.
> However, with this approach, different NameNodes could have overlapping 
> serial number ranges. For simplicity, let's assume {{Integer.MAX_VALUE}} is 100, 
> and we have 2 NameNodes {{nn1}} and {{nn2}} in configuration. Then the ranges 
> for these two are:
> {code}
> nn1 -> [-49, 49]
> nn2 -> [1, 99]
> {code}
> This is because the initial serial number could be any negative integer.
> Moreover, when the keys are updated, the serial number will again be updated 
> with the formula:
> {code}
> this.serialNo = (this.serialNo % intRange) + (nnRangeStart);
> {code}
> which means the new serial number could be updated to a range that belongs to 
> a different NameNode, thus increasing the chance of collision again.
> When the collision happens, DataNodes could overwrite an existing key, which 
> will cause clients to fail with an {{InvalidToken}} error.
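
A minimal sketch of a collision-free variant, keeping the intRange and 
nnRangeStart definitions from the description (this mirrors the idea of the 
fix, not necessarily its exact code):

{code:java}
import java.util.concurrent.ThreadLocalRandom;

public final class SerialRange {
  /** Rotates serial numbers strictly inside this NN's range by forcing the
   *  value non-negative before the modulo, so NN 0 gets [0, intRange) and
   *  NN 1 gets [intRange, 2*intRange), with no overlap. */
  static int nextSerial(int current, int intRange, int nnRangeStart) {
    int nonNegative = current & Integer.MAX_VALUE;   // clear the sign bit
    return nonNegative % intRange + nnRangeStart;
  }

  public static void main(String[] args) {
    int numNNs = 2;
    int intRange = Integer.MAX_VALUE / numNNs;
    int serial = ThreadLocalRandom.current().nextInt();  // may be negative
    System.out.println(nextSerial(serial, intRange, 0 * intRange)); // nn1
    System.out.println(nextSerial(serial, intRange, 1 * intRange)); // nn2
  }
}
{code}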



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-2205) checkstyle.sh reports wrong failure count

2019-09-30 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16941409#comment-16941409
 ] 

Hudson commented on HDDS-2205:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17420 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17420/])
HDDS-2205. checkstyle.sh reports wrong failure count (aengineer: rev 
e5bba592a84a94e0545479b668e6925eb4b8858c)
* (edit) hadoop-ozone/dev-support/checks/checkstyle.sh


> checkstyle.sh reports wrong failure count
> -
>
> Key: HDDS-2205
> URL: https://issues.apache.org/jira/browse/HDDS-2205
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Trivial
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> {{checkstyle.sh}} outputs files with checkstyle violations and the violations 
> themselves on separate lines. It then reports the line count as the number of 
> failures.
> {code:title=target/checkstyle/summary.txt}
> hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/OmUtils.java
>  49: Unused import - org.apache.hadoop.ozone.om.OMMetadataManager.
> {code}
> {code:title=target/checkstyle/failures}
> 2
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-2207) Update Ratis to latest snapshot

2019-09-30 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2207?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16941051#comment-16941051
 ] 

Hudson commented on HDDS-2207:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17418 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17418/])
HDDS-2207. Update Ratis to latest snapshot. Contributed by Shashikant (msingh: 
rev 98ca07ebed2ae3d7e41e5029b5bba6d089d41d43)
* (edit) hadoop-hdds/pom.xml
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/transport/server/ratis/XceiverServerRatis.java
* (edit) hadoop-ozone/pom.xml


> Update Ratis to latest snapshot
> ---
>
> Key: HDDS-2207
> URL: https://issues.apache.org/jira/browse/HDDS-2207
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> This Jira aims to update Ozone to the latest Ratis snapshot, which has a critical 
> fix for the client's retry behaviour on getting a not-leader exception.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-2202) Remove unused import in OmUtils

2019-09-30 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16940981#comment-16940981
 ] 

Hudson commented on HDDS-2202:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17417 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17417/])
HDDS-2202. Remove unused import in OmUtils (elek: rev 
b46d82339f73534efa35c60f74eec1cdce9fd4b3)
* (edit) hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/OmUtils.java


> Remove unused import in OmUtils
> ---
>
> Key: HDDS-2202
> URL: https://issues.apache.org/jira/browse/HDDS-2202
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Minor
>  Labels: newbie, pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> Fix hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/OmUtils.java
> Remove L49: Unused import - org.apache.hadoop.ozone.om.OMMetadataManager;
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-2153) Add a config to tune max pending requests in Ratis leader

2019-09-30 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16940969#comment-16940969
 ] 

Hudson commented on HDDS-2153:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17416 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17416/])
HDDS-2153. Add a config to tune max pending requests in Ratis leader (elek: rev 
a530ac3f50d71c608235168acefe2f8eb1753131)
* (edit) hadoop-hdds/common/src/main/resources/ozone-default.xml
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/ScmConfigKeys.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/transport/server/ratis/XceiverServerRatis.java
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/OzoneConfigKeys.java


> Add a config to tune max pending requests in Ratis leader
> -
>
> Key: HDDS-2153
> URL: https://issues.apache.org/jira/browse/HDDS-2153
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.5.0
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-2183) Container and pipeline subcommands of scmcli should be grouped

2019-09-30 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2183?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16940948#comment-16940948
 ] 

Hudson commented on HDDS-2183:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17415 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17415/])
HDDS-2183. Container and pipeline subcommands of scmcli should be grouped (elek: 
rev d6b0a8da77916ed814c0b04bd5f3a46e8c59268f)
* (edit) 
hadoop-hdds/tools/src/main/java/org/apache/hadoop/hdds/scm/cli/pipeline/ClosePipelineSubcommand.java
* (edit) 
hadoop-hdds/tools/src/main/java/org/apache/hadoop/hdds/scm/cli/container/ListSubcommand.java
* (edit) 
hadoop-hdds/tools/src/main/java/org/apache/hadoop/hdds/scm/cli/pipeline/ListPipelinesSubcommand.java
* (edit) 
hadoop-hdds/tools/src/main/java/org/apache/hadoop/hdds/scm/cli/container/InfoSubcommand.java
* (edit) 
hadoop-hdds/tools/src/main/java/org/apache/hadoop/hdds/scm/cli/pipeline/DeactivatePipelineSubcommand.java
* (edit) 
hadoop-hdds/tools/src/main/java/org/apache/hadoop/hdds/scm/cli/container/DeleteSubcommand.java
* (add) 
hadoop-hdds/tools/src/main/java/org/apache/hadoop/hdds/scm/cli/pipeline/PipelineCommands.java
* (edit) 
hadoop-hdds/tools/src/main/java/org/apache/hadoop/hdds/scm/cli/pipeline/ActivatePipelineSubcommand.java
* (add) 
hadoop-hdds/tools/src/main/java/org/apache/hadoop/hdds/scm/cli/container/ContainerCommands.java
* (edit) 
hadoop-hdds/tools/src/main/java/org/apache/hadoop/hdds/scm/cli/container/CreateSubcommand.java
* (edit) 
hadoop-hdds/tools/src/main/java/org/apache/hadoop/hdds/scm/cli/SCMCLI.java
* (edit) 
hadoop-hdds/tools/src/main/java/org/apache/hadoop/hdds/scm/cli/container/CloseSubcommand.java


> Container and pipeline subcommands of scmcli should be grouped
> -
>
> Key: HDDS-2183
> URL: https://issues.apache.org/jira/browse/HDDS-2183
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Marton Elek
>Assignee: YiSheng Lien
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Once upon a time we had only a few subcommands under `ozone scmcli`, all of 
> them managing containers.
>  
> Now we have many admin commands; some of them are grouped into a subcommand 
> (eg. safemode, replicationmanager) and some are not.
>  
> I propose to group the container and pipeline related commands (see the 
> sketch after the examples below):
>  
> Instead of "ozone scmcli info" use "ozone scmcli container info"
> Instead of "ozone scmcli list" use "ozone scmcli container list"
> Instead of "ozone scmcli listPipelines" use "ozone scmcli pipeline list"
>  
> And so on...
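
A minimal sketch of such grouping with picocli, the CLI library scmcli is 
built on; the class bodies here are illustrative, not the committed code:

{code:java}
import picocli.CommandLine;
import picocli.CommandLine.Command;

@Command(name = "container",
    description = "Container specific operations",
    subcommands = {InfoSubcommand.class, ListSubcommand.class})
class ContainerCommands implements Runnable {
  @Override public void run() {
    // With no subcommand given, just print the group's usage help.
    new CommandLine(this).usage(System.out);
  }
}

@Command(name = "info", description = "Show container details")
class InfoSubcommand implements Runnable {
  @Override public void run() { /* ... fetch and print container info ... */ }
}

@Command(name = "list", description = "List containers")
class ListSubcommand implements Runnable {
  @Override public void run() { /* ... list containers ... */ }
}
{code}

With this layout, "ozone scmcli container info" routes through the container 
group instead of a flat top-level "info" command.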



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14305) Serial number in BlockTokenSecretManager could overlap between different namenodes

2019-09-29 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14305?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16940530#comment-16940530
 ] 

Hudson commented on HDFS-14305:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17414 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17414/])
Revert "HDFS-14305. Fix serial number calculation in (shv: rev 
760b523e58fd1069f0726ae853bed5d44e9d1dc6)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/security/token/block/BlockTokenSecretManager.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestFailoverWithBlockTokensEnabled.java


> Serial number in BlockTokenSecretManager could overlap between different 
> namenodes
> --
>
> Key: HDFS-14305
> URL: https://issues.apache.org/jira/browse/HDFS-14305
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode, security
>Reporter: Chao Sun
>Assignee: Xiaoqiao He
>Priority: Major
> Fix For: 3.0.4, 3.3.0, 3.2.1, 3.1.3
>
> Attachments: HDFS-14305-007.patch, HDFS-14305.001.patch, 
> HDFS-14305.002.patch, HDFS-14305.003.patch, HDFS-14305.004.patch, 
> HDFS-14305.005.patch, HDFS-14305.006.patch
>
>
> Currently, a {{BlockTokenSecretManager}} starts with a random integer as the 
> initial serial number, and then uses this formula to rotate it:
> {code:java}
> this.intRange = Integer.MAX_VALUE / numNNs;
> this.nnRangeStart = intRange * nnIndex;
> this.serialNo = (this.serialNo % intRange) + (nnRangeStart);
>  {code}
> where {{numNNs}} is the total number of NameNodes in the cluster, and 
> {{nnIndex}} is the index of the current NameNode specified in the 
> configuration {{dfs.ha.namenodes.}}.
> However, with this approach, different NameNodes could have overlapping 
> serial number ranges. For simplicity, let's assume {{Integer.MAX_VALUE}} is 100, 
> and we have 2 NameNodes {{nn1}} and {{nn2}} in configuration. Then the ranges 
> for these two are:
> {code}
> nn1 -> [-49, 49]
> nn2 -> [1, 99]
> {code}
> This is because the initial serial number could be any negative integer.
> Moreover, when the keys are updated, the serial number will again be updated 
> with the formula:
> {code}
> this.serialNo = (this.serialNo % intRange) + (nnRangeStart);
> {code}
> which means the new serial number could be updated to a range that belongs to 
> a different NameNode, thus increasing the chance of collision again.
> When the collision happens, DataNodes could overwrite an existing key, which 
> will cause clients to fail with an {{InvalidToken}} error.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14850) Optimize FileSystemAccessService#getFileSystemConfiguration

2019-09-28 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16940186#comment-16940186
 ] 

Hudson commented on HDFS-14850:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17412 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17412/])
HDFS-14850. Optimize FileSystemAccessService#getFileSystemConfiguration. 
(inigoiri: rev d8313b227495d748abe8884eee34db507476cee1)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/lib/service/hadoop/FileSystemAccessService.java


> Optimize FileSystemAccessService#getFileSystemConfiguration
> ---
>
> Key: HDFS-14850
> URL: https://issues.apache.org/jira/browse/HDFS-14850
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: httpfs, performance
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14850.001.patch, HDFS-14850.002.patch, 
> HDFS-14850.003.patch, HDFS-14850.004(2).patch, HDFS-14850.004.patch, 
> HDFS-14850.005.patch
>
>
> {code:java}
>  @Override
>   public Configuration getFileSystemConfiguration() {
> Configuration conf = new Configuration(true);
> ConfigurationUtils.copy(serviceHadoopConf, conf);
> conf.setBoolean(FILE_SYSTEM_SERVICE_CREATED, true);
> // Force-clear server-side umask to make HttpFS match WebHDFS behavior
> conf.set(FsPermission.UMASK_LABEL, "000");
> return conf;
>   }
> {code}
> As the code above shows, every call to 
> FileSystemAccessService#getFileSystemConfiguration creates a new 
> Configuration.
> That is unnecessary and affects performance. It should be enough to create the 
> Configuration once in FileSystemAccessService#init and have 
> FileSystemAccessService#getFileSystemConfiguration return it.
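
A minimal sketch of that caching, reusing the fields from the snippet above; 
the field name and init hook shown here are assumptions, not the committed 
patch:

{code:java}
// Hedged sketch: build the Configuration once at service init and hand out
// the cached instance afterwards, instead of copying it on every call.
private volatile Configuration fileSystemConf;

protected void initFileSystemConf() {
  Configuration conf = new Configuration(true);
  ConfigurationUtils.copy(serviceHadoopConf, conf);
  conf.setBoolean(FILE_SYSTEM_SERVICE_CREATED, true);
  // Force-clear server-side umask to make HttpFS match WebHDFS behavior
  conf.set(FsPermission.UMASK_LABEL, "000");
  this.fileSystemConf = conf;
}

@Override
public Configuration getFileSystemConfiguration() {
  return fileSystemConf;   // no per-call allocation or copy
}
{code}

One caveat of this design: callers now share a single Configuration instance, 
so they must treat it as read-only.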



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14876) Remove unused imports from TestBlockMissingException.java and TestClose.java

2019-09-28 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14876?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16940106#comment-16940106
 ] 

Hudson commented on HDFS-14876:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17411 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17411/])
HDFS-14876. Remove unused imports from TestBlockMissingException.java 
(ayushsaxena: rev 22008716075b461ef48e801fc7049cfad6aade45)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestClose.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestBlockMissingException.java


> Remove unused imports from TestBlockMissingException.java and TestClose.java
> 
>
> Key: HDFS-14876
> URL: https://issues.apache.org/jira/browse/HDFS-14876
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Minor
> Fix For: 3.3.0
>
> Attachments: HDFS-14876.000.patch, HDFS-14876.001.patch
>
>
> There are 3 unused imports in TestBlockMissingException.java and TestClose.java. 
> Let's clean them up.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14849) Erasure Coding: the internal block is replicated many times when datanode is decommissioning

2019-09-28 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16940099#comment-16940099
 ] 

Hudson commented on HDFS-14849:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17410 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17410/])
Revert "HDFS-14849. Erasure Coding: the internal block is replicated 
(ayushsaxena: rev 0d5d0b914ac959ce2c41f483ac5b74f58053cd00)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockManager.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestBlockMissingException.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
HDFS-14849. Erasure Coding: the internal block is replicated many times 
(ayushsaxena: rev c4c8d5fd0e3c17ccdcf18ece8e005f510328b060)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockManager.java


> Erasure Coding: the internal block is replicated many times when datanode is 
> decommissioning
> 
>
> Key: HDFS-14849
> URL: https://issues.apache.org/jira/browse/HDFS-14849
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ec, erasure-coding
>Affects Versions: 3.3.0
>Reporter: HuangTao
>Assignee: HuangTao
>Priority: Major
>  Labels: EC, HDFS, NameNode
> Fix For: 3.3.0
>
> Attachments: HDFS-14849.001.patch, HDFS-14849.002.patch, 
> fsck-file.png, liveBlockIndices.png, scheduleReconstruction.png
>
>
> When the datanode stays in DECOMMISSION_INPROGRESS status, the EC internal 
> blocks on that datanode will be replicated many times.
> // added 2019/09/19
> I reproduced this scenario in a 163-node cluster by decommissioning 100 nodes 
> simultaneously. 
>  !scheduleReconstruction.png! 
>  !fsck-file.png! 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1146) Adding container related metrics in SCM

2019-09-27 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-1146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16939788#comment-16939788
 ] 

Hudson commented on HDDS-1146:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17409 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17409/])
HDDS-1146. Adding container related metrics in SCM. (#1541) (github: rev 
14b4fbc019c98e982466083838226af8790a53a8)
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/hdds/scm/container/metrics/TestSCMContainerManagerMetrics.java
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/IncrementalContainerReportHandler.java
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/SCMContainerManager.java
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/AbstractContainerReportHandler.java
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/metrics/SCMContainerManagerMetrics.java
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/ContainerManager.java
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/ContainerReportHandler.java


> Adding container related metrics in SCM
> ---
>
> Key: HDDS-1146
> URL: https://issues.apache.org/jira/browse/HDDS-1146
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Nanda kumar
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
> Attachments: HDDS-1146.000.patch, HDDS-1146.001.patch, 
> HDDS-1146.002.patch
>
>  Time Spent: 2h 50m
>  Remaining Estimate: 0h
>
> This jira aims to add more container-related metrics to SCM.
>  The following metrics will be added as part of this jira:
>  * Number of containers
>  * Number of open containers
>  * Number of closed containers
>  * Number of quasi closed containers
>  * Number of closing containers
> Above are already handled in HDDS-918.
>  * Number of successful create container calls
>  * Number of failed create container calls
>  * Number of successful delete container calls
>  * Number of failed delete container calls
> Handled in HDDS-2193.
>  * Number of successful container report processing
>  * Number of failed container report processing
>  * Number of successful incremental container report processing
>  * Number of failed incremental container report processing
> These will be handled in this jira (see the sketch below).
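
A minimal sketch of how such counters are typically wired up with the Hadoop 
metrics2 annotations; the class and field names here are illustrative, not the 
exact SCMContainerManagerMetrics members:

{code:java}
import org.apache.hadoop.metrics2.annotation.Metric;
import org.apache.hadoop.metrics2.annotation.Metrics;
import org.apache.hadoop.metrics2.lib.MutableCounterLong;

@Metrics(about = "SCM ContainerManager metrics", context = "SCM")
public final class ContainerReportMetrics {
  // One counter per outcome; the metrics system publishes these via JMX.
  @Metric private MutableCounterLong numContainerReportsProcessedSuccessful;
  @Metric private MutableCounterLong numContainerReportsProcessedFailed;

  void incrSuccess() { numContainerReportsProcessedSuccessful.incr(); }
  void incrFailed()  { numContainerReportsProcessedFailed.incr(); }
}
{code}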



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14564) Add libhdfs APIs for readFully; add readFully to ByteBufferPositionedReadable

2019-09-27 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16939718#comment-16939718
 ] 

Hudson commented on HDFS-14564:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17408 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17408/])
HDFS-14564: Add libhdfs APIs for readFully; add readFully to (weichiu: rev 
13b427fc05da7352fadd7214adfa09c326bba238)
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FSDataInputStream.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/tests/libhdfs_wrapper_defines.h
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/ByteBufferPositionedReadable.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/crypto/TestCryptoStreams.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/crypto/CryptoStreamsTestBase.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/tests/libhdfspp_wrapper_defines.h
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/crypto/TestCryptoStreamsNormal.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs-tests/test_libhdfs_ops.c
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestByteBufferPread.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/crypto/TestCryptoStreamsForLocalFS.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/tests/hdfs_shim.c
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/tests/libhdfs_wrapper_undefs.h
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs/hdfs.c
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/CryptoInputStream.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs/include/hdfs/hdfs.h


> Add libhdfs APIs for readFully; add readFully to ByteBufferPositionedReadable
> -
>
> Key: HDFS-14564
> URL: https://issues.apache.org/jira/browse/HDFS-14564
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client, libhdfs, native
>Reporter: Sahil Takiar
>Assignee: Sahil Takiar
>Priority: Major
> Fix For: 3.3.0
>
>
> Splitting this out from HDFS-14478
> The {{PositionedReadable#readFully}} APIs have existed for a while, but have 
> never been exposed via libhdfs.
> HDFS-3246 added a new interface called {{ByteBufferPositionedReadable}} that 
> provides a {{ByteBuffer}} version of {{PositionedReadable}}, but it does not 
> contain a {{readFully}} method.
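
A minimal sketch of the missing method's shape and semantics; the actual 
signature added to ByteBufferPositionedReadable may differ:

{code:java}
import java.io.EOFException;
import java.io.IOException;
import java.nio.ByteBuffer;

interface ByteBufferPositionedReadableSketch {
  /** Positioned read of a single buffer; may return fewer bytes than asked. */
  int read(long position, ByteBuffer buf) throws IOException;

  /** Keeps reading until the buffer is full, turning EOF into an error,
   *  which is the readFully contract from PositionedReadable. */
  default void readFully(long position, ByteBuffer buf) throws IOException {
    while (buf.hasRemaining()) {
      int n = read(position, buf);
      if (n < 0) {
        throw new EOFException("Reached end of stream with "
            + buf.remaining() + " bytes left to read");
      }
      position += n;
    }
  }
}
{code}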



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14849) Erasure Coding: the internal block is replicated many times when datanode is decommissioning

2019-09-27 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16939469#comment-16939469
 ] 

Hudson commented on HDFS-14849:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17407 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17407/])
HDFS-14849. Erasure Coding: the internal block is replicated many times 
(ayushsaxena: rev ce58c05f1d89a72c787f3571f78a9464d0ab3933)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestBlockMissingException.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockManager.java


> Erasure Coding: the internal block is replicated many times when datanode is 
> decommissioning
> 
>
> Key: HDFS-14849
> URL: https://issues.apache.org/jira/browse/HDFS-14849
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ec, erasure-coding
>Affects Versions: 3.3.0
>Reporter: HuangTao
>Assignee: HuangTao
>Priority: Major
>  Labels: EC, HDFS, NameNode
> Fix For: 3.3.0
>
> Attachments: HDFS-14849.001.patch, HDFS-14849.002.patch, 
> fsck-file.png, liveBlockIndices.png, scheduleReconstruction.png
>
>
> When the datanode stays in DECOMMISSION_INPROGRESS status, the EC internal 
> blocks on that datanode will be replicated many times.
> // added 2019/09/19
> I reproduced this scenario in a 163-node cluster by decommissioning 100 nodes 
> simultaneously. 
>  !scheduleReconstruction.png! 
>  !fsck-file.png! 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-2185) createmrenv failure not reflected in acceptance test result

2019-09-27 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2185?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16939330#comment-16939330
 ] 

Hudson commented on HDDS-2185:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17405 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17405/])
HDDS-2185. createmrenv failure not reflected in acceptance test result (elek: 
rev a93a139b5df3b37c36bb9c633f35b89bb0601e44)
* (edit) hadoop-ozone/dist/src/main/compose/testlib.sh


> createmrenv failure not reflected in acceptance test result
> ---
>
> Key: HDDS-2185
> URL: https://issues.apache.org/jira/browse/HDDS-2185
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> Some of the MR tests fail, but this is not reflected in the test report, which 
> shows all green.
> {noformat:title=https://github.com/elek/ozone-ci/blob/679228c146628cd4d1a416e1ffc9c513d19fb43d/pr/pr-hdds-2179-9bnxk/acceptance/output.log#L718-L730}
> ==
> hadoop31-createmrenv :: Create directories required for MR test   
> ==
> Create test volume, bucket and key| PASS |
> --
> Create user dir for hadoop| FAIL |
> 1 != 0
> --
> hadoop31-createmrenv :: Create directories required for MR test   | FAIL |
> 2 critical tests, 1 passed, 1 failed
> 2 tests total, 1 passed, 1 failed
> ==
> Output:  
> /tmp/smoketest/hadoop31/result/robot-hadoop31-hadoop31-createmrenv-scm.xml
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-2149) Replace findbugs with spotbugs

2019-09-27 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2149?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16939157#comment-16939157
 ] 

Hudson commented on HDDS-2149:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17402 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17402/])
HDDS-2149. Replace findbugs with spotbugs (aengineer: rev 
9bf7a6e5b26a361fd08552793852208d817fdfbd)
* (edit) hadoop-ozone/common/pom.xml
* (edit) hadoop-ozone/insight/pom.xml
* (edit) hadoop-ozone/csi/pom.xml
* (edit) hadoop-ozone/ozonefs-lib-current/pom.xml
* (edit) hadoop-ozone/tools/pom.xml
* (edit) hadoop-ozone/upgrade/pom.xml
* (edit) hadoop-ozone/ozone-manager/pom.xml
* (edit) hadoop-ozone/ozonefs/pom.xml
* (edit) hadoop-ozone/recon/pom.xml
* (edit) hadoop-hdds/container-service/pom.xml
* (edit) hadoop-ozone/s3gateway/pom.xml
* (edit) hadoop-hdds/server-scm/pom.xml
* (edit) hadoop-hdds/common/pom.xml
* (edit) pom.ozone.xml
* (edit) hadoop-hdds/pom.xml
* (edit) hadoop-ozone/dev-support/checks/findbugs.sh
* (edit) hadoop-ozone/ozonefs-lib-legacy/pom.xml
* (edit) hadoop-ozone/pom.xml


> Replace findbugs with spotbugs
> --
>
> Key: HDDS-2149
> URL: https://issues.apache.org/jira/browse/HDDS-2149
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Dinesh Chitlangia
>Assignee: Attila Doroszlai
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 4h 10m
>  Remaining Estimate: 0h
>
> Findbugs has been marked deprecated and all future work is now happening 
> under the SpotBugs project.
> This Jira is to investigate and possibly transition Ozone to SpotBugs.
>  
> Ref1 - 
> [https://mailman.cs.umd.edu/pipermail/findbugs-discuss/2017-September/004383.html]
> Ref2 - [https://spotbugs.github.io/]
>  
> A turn-off for developers is that IntelliJ does not yet have a plugin for 
> SpotBugs - [https://youtrack.jetbrains.com/issue/IDEA-201846]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-2179) ConfigFileGenerator fails with Java 10 or newer

2019-09-26 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2179?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16939132#comment-16939132
 ] 

Hudson commented on HDDS-2179:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17401 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17401/])
HDDS-2179. ConfigFileGenerator fails with Java 10 or newer (aengineer: rev 
0371e953ac51d991f2bfed9ffd1724ff80733752)
* (edit) 
hadoop-hdds/config/src/main/java/org/apache/hadoop/hdds/conf/ConfigFileGenerator.java


> ConfigFileGenerator fails with Java 10 or newer
> ---
>
> Key: HDDS-2179
> URL: https://issues.apache.org/jira/browse/HDDS-2179
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: build
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> {code:title=mvn -f pom.ozone.xml -DskipTests -am -pl :hadoop-hdds-config 
> clean package}
> ...
> [INFO] --- maven-compiler-plugin:3.1:testCompile (default-testCompile) @ 
> hadoop-hdds-config ---
> [INFO] Compiling 3 source files to hadoop-hdds/config/target/test-classes
> ...
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-compiler-plugin:3.1:testCompile 
> (default-testCompile) on project hadoop-hdds-config: Compilation failure
> [ERROR] Can't generate the config file from annotation: 
> hadoop-hdds/config/target/test-classes/ozone-default-generated.xml
> {code}
> The root cause is that newer Java (I guess it's 9+, but I tried only on 10+) 
> throws a different {{IOException}} subclass: {{NoSuchFileException}} instead 
> of {{FileNotFoundException}}.
> {code}
> java.nio.file.NoSuchFileException: 
> hadoop-hdds/config/target/test-classes/ozone-default-generated.xml
>   at 
> java.base/sun.nio.fs.UnixException.translateToIOException(UnixException.java:92)
>   at 
> java.base/sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:111)
>   at 
> java.base/sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:116)
>   at 
> java.base/sun.nio.fs.UnixFileSystemProvider.newByteChannel(UnixFileSystemProvider.java:219)
>   at java.base/java.nio.file.Files.newByteChannel(Files.java:374)
>   at java.base/java.nio.file.Files.newByteChannel(Files.java:425)
>   at 
> java.base/java.nio.file.spi.FileSystemProvider.newInputStream(FileSystemProvider.java:420)
>   at java.base/java.nio.file.Files.newInputStream(Files.java:159)
>   at 
> jdk.compiler/com.sun.tools.javac.file.PathFileObject.openInputStream(PathFileObject.java:461)
>   at 
> java.compiler@13/javax.tools.ForwardingFileObject.openInputStream(ForwardingFileObject.java:74)
>   at 
> org.apache.hadoop.hdds.conf.ConfigFileGenerator.process(ConfigFileGenerator.java:62)
> {code}
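
A minimal sketch of a version-proof fix: catch both subclasses when probing 
for the previously generated file. This is simplified relative to the real 
ConfigFileGenerator logic:

{code:java}
import java.io.FileNotFoundException;
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.NoSuchFileException;
import java.nio.file.Path;

final class SafeOpen {
  /** Returns the stream, or null for "file does not exist yet" on any JDK:
   *  some code paths throw FileNotFoundException, the NIO paths used by
   *  newer JDKs throw NoSuchFileException instead. */
  static InputStream openIfExists(Path path) throws IOException {
    try {
      return Files.newInputStream(path);
    } catch (FileNotFoundException | NoSuchFileException e) {
      return null;
    }
  }
}
{code}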



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-2174) Delete GDPR Encryption Key from metadata when a Key is deleted

2019-09-26 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2174?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16939115#comment-16939115
 ] 

Hudson commented on HDDS-2174:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17399 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17399/])
HDDS-2174. Delete GDPR Encryption Key from metadata when a Key is (aengineer: 
rev c55ac6a1c7d1dc65a0d2e735b315bbf6898f6ff1)
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestOzoneRpcClientAbstract.java
* (edit) 
hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/request/TestOMRequestUtils.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/response/s3/multipart/S3MultipartUploadCommitPartResponse.java
* (edit) hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/OmUtils.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/KeyManagerImpl.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/response/s3/multipart/S3MultipartUploadAbortResponse.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/response/key/OMKeyDeleteResponse.java


> Delete GDPR Encryption Key from metadata when a Key is deleted
> --
>
> Key: HDDS-2174
> URL: https://issues.apache.org/jira/browse/HDDS-2174
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 3h 10m
>  Remaining Estimate: 0h
>
> As advised by [~arp] & [~aengineer], when a deleteKey command is executed, 
> delete the GDPR encryption key details from the key metadata before moving 
> the key to the deletedTable.
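
A minimal sketch of that cleanup, with hypothetical metadata key names (the 
real constants live elsewhere, e.g. in OzoneConsts, and may be named 
differently):

{code:java}
import java.util.Map;

final class GdprMetadataCleaner {
  // Assumed metadata keys; the actual constant names may differ.
  static final String GDPR_FLAG = "gdprEnabled";
  static final String GDPR_SECRET = "gdprSecret";
  static final String GDPR_ALGORITHM = "gdprAlgorithm";

  /** Strip GDPR crypto material before the key lands in the deletedTable,
   *  so the deleted key's data can no longer be decrypted. */
  static void stripGdprMetadata(Map<String, String> keyMetadata) {
    keyMetadata.remove(GDPR_FLAG);
    keyMetadata.remove(GDPR_SECRET);
    keyMetadata.remove(GDPR_ALGORITHM);
  }
}
{code}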



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14785) [SBN read] Change client logging to be less aggressive

2019-09-26 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14785?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16939019#comment-16939019
 ] 

Hudson commented on HDFS-14785:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17398 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17398/])
HDFS-14785. [SBN read] Change client logging to be less aggressive. (cliang: 
rev 2adcc3c932fd4f39a42724390ba81b2d431d7782)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/ObserverReadProxyProvider.java


> [SBN read] Change client logging to be less aggressive
> --
>
> Key: HDFS-14785
> URL: https://issues.apache.org/jira/browse/HDFS-14785
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Affects Versions: 2.10.0, 3.2.0, 3.1.2, 3.3.0
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Major
>  Labels: release-blocker
> Attachments: HDFS-14785.001.patch
>
>
> Currently {{ObserverReadProxyProvider}} logs a lot of information. There are 
> states that are acceptable, but {{ObserverReadProxyProvider}} still logs an 
> overwhelmingly large amount of messages. One example: if some NN runs 
> an older version, the lack of the {{getHAServiceState}} method in that 
> NN will lead to an exception being printed on every single call.
> We can change these to debug logs. This should be minimum risk because it is 
> client side only; we can always get the logs back by switching the client 
> to DEBUG log level.
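
A minimal sketch of the demotion; the surrounding method, variable names, and 
message are illustrative, not the committed change:

{code:java}
// Hedged sketch: the expected "old NN doesn't implement getHAServiceState"
// case drops to debug; the stack trace is still available at DEBUG level.
try {
  state = proxy.getHAServiceState();
} catch (Exception e) {
  LOG.debug("Failed to get HAServiceState from {}", proxyAddress, e);
  state = null;   // treat as unknown and fall back to normal failover
}
{code}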



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14461) RBF: Fix intermittently failing kerberos related unit test

2019-09-26 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14461?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16939015#comment-16939015
 ] 

Hudson commented on HDFS-14461:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17397 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17397/])
HDFS-14461. RBF: Fix intermittently failing kerberos related unit test. 
(inigoiri: rev b1e55cfb557056306db92b4a74f7b0288fd193ee)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/fs/contract/router/TestRouterHDFSContractRootDirectorySecure.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/fs/contract/router/SecurityConfUtil.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/fs/contract/router/TestRouterHDFSContractOpenSecure.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/fs/contract/router/TestRouterHDFSContractAppendSecure.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/fs/contract/router/TestRouterHDFSContractGetFileStatusSecure.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/security/TestRouterHttpDelegationToken.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/fs/contract/router/TestRouterHDFSContractRenameSecure.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/fs/contract/router/TestRouterHDFSContractSetTimesSecure.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/fs/contract/router/TestRouterHDFSContractMkdirSecure.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/fs/contract/router/TestRouterHDFSContractCreateSecure.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/fs/contract/router/RouterHDFSContract.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/fs/contract/router/TestRouterHDFSContractDeleteSecure.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/fs/contract/router/TestRouterHDFSContractDelegationToken.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/fs/contract/router/TestRouterHDFSContractSeekSecure.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/fs/contract/router/TestRouterHDFSContractConcatSecure.java


> RBF: Fix intermittently failing kerberos related unit test
> --
>
> Key: HDFS-14461
> URL: https://issues.apache.org/jira/browse/HDFS-14461
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: CR Hota
>Assignee: Xiaoqiao He
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14461.001.patch, HDFS-14461.002.patch, 
> HDFS-14461.003.patch, HDFS-14461.004.patch, HDFS-14461.005.patch
>
>
> TestRouterHttpDelegationToken#testGetDelegationToken fails intermittently. It 
> may be due to some race condition before using the keytab that's created for 
> testing.
>  
> {code:java}
>  Failed
> org.apache.hadoop.hdfs.server.federation.security.TestRouterHttpDelegationToken.testGetDelegationToken
>  Failing for the past 1 build (Since 
> [!https://builds.apache.org/static/1e9ab9cc/images/16x16/red.png! 
> #26721|https://builds.apache.org/job/PreCommit-HDFS-Build/26721/] )
>  [Took 89 
> ms.|https://builds.apache.org/job/PreCommit-HDFS-Build/26721/testReport/org.apache.hadoop.hdfs.server.federation.security/TestRouterHttpDelegationToken/testGetDelegationToken/history]
>   
>  Error Message
> org.apache.hadoop.security.KerberosAuthException: failure to login: for 
> principal: router/localh...@example.com from keytab 
> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs-rbf/target/test/data/SecurityConfUtil/test.keytab
>  javax.security.auth.login.LoginException: Integrity check on decrypted field 
> failed (31) - PREAUTH_FAILED
> h3. Stacktrace
> org.apache.hadoop.service.ServiceStateException: 
> org.apache.hadoop.security.KerberosAuthException: failure to login: for 
> principal: router/localh...@example.com from keytab 
> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs-rbf/target/test/data/SecurityConfUtil/test.keytab
>  javax.security.auth.login.LoginException: Integrity check on decrypted field 
> failed (31) - PREAUTH_FAILED at 
> org.apache.hadoop.service.ServiceStateException.convert(ServiceStateException.java:105)
>  at org.apache.hadoop.service.AbstractService.init(AbstractService.java:173) 
> at 
> org.apache.hadoop.hdfs.server.federation.security.TestRouterHttpDelegationToken.setup(TestRouterHttpDelegationToken.java:99)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 
> 

[jira] [Commented] (HDDS-2180) Add Object ID and update ID on VolumeList Object

2019-09-26 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2180?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16938801#comment-16938801
 ] 

Hudson commented on HDDS-2180:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17396 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17396/])
HDDS-2180. Add Object ID and update ID on VolumeList Object. (#1526) (github: 
rev 06998a11266c8d71a67114ef5c9a691987426630)
* (edit) 
hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/response/TestOMResponseUtils.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/s3/bucket/S3BucketCreateRequest.java
* (edit) 
hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/response/volume/TestOMVolumeCreateResponse.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/volume/OMVolumeSetOwnerRequest.java
* (edit) 
hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/response/volume/TestOMVolumeDeleteResponse.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/volume/OMVolumeCreateRequest.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/volume/OMVolumeRequest.java
* (edit) 
hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/response/volume/TestOMVolumeSetOwnerResponse.java
* (edit) 
hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/request/TestOMRequestUtils.java
* (edit) hadoop-ozone/common/src/main/proto/OzoneManagerProtocol.proto
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/volume/OMVolumeDeleteRequest.java


> Add Object ID and update ID on VolumeList Object
> 
>
> Key: HDDS-2180
> URL: https://issues.apache.org/jira/browse/HDDS-2180
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> This JIRA proposes to add an Object ID and an Update ID to the VolumeList 
> object.
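> A minimal sketch of the idea (illustrative only; the names below are 
> hypothetical stand-ins, not the actual OM types): each persisted volume 
> record carries an object ID fixed at create time and an update ID advanced 
> on every mutation, so a replayed OM transaction can be recognized and 
> skipped.
> {code:java}
> // Hypothetical stand-in for the persisted volume metadata.
> public class VolumeRecord {
>   private final long objectId; // assigned once, when the volume is created
>   private long updateId;       // last transaction index that modified it
> 
>   public VolumeRecord(long transactionIndex) {
>     this.objectId = transactionIndex;
>     this.updateId = transactionIndex;
>   }
> 
>   /** Returns false if this transaction was already applied (a replay). */
>   public boolean applyUpdate(long transactionIndex) {
>     if (transactionIndex <= updateId) {
>       return false; // stale or replayed update, ignore it
>     }
>     this.updateId = transactionIndex;
>     return true;
>   }
> 
>   public long getObjectId() {
>     return objectId;
>   }
> }
> {code}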



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11934) Add assertion to TestDefaultNameNodePort#testGetAddressFromConf

2019-09-26 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-11934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16938737#comment-16938737
 ] 

Hudson commented on HDFS-11934:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17394 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17394/])
HDFS-11934. Add assertion to (ayushsaxena: rev 
1a2a352ecd4cc7c8d71b6bebf52609c5764d2981)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/test/java/org/apache/hadoop/hdfs/TestDefaultNameNodePort.java


> Add assertion to TestDefaultNameNodePort#testGetAddressFromConf
> ---
>
> Key: HDFS-11934
> URL: https://issues.apache.org/jira/browse/HDFS-11934
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Affects Versions: 3.0.0-alpha4
>Reporter: legend
>Assignee: Nikhil Navadiya
>Priority: Minor
> Fix For: 3.3.0
>
> Attachments: HDFS-11934.002.patch, HDFS-11934.patch
>
>
> Add an additional assertion to TestDefaultNameNodePort#testGetAddressFromConf 
> to verify that the resolved NameNode port is 555 after 
> setDefaultUri(conf, "foo:555").
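> A minimal sketch of such an assertion (the test class name below is a 
> stand-in; FileSystem.setDefaultUri and DFSUtilClient.getNNAddress are the 
> real client APIs):
> {code:java}
> import static org.junit.Assert.assertEquals;
> 
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.fs.FileSystem;
> import org.apache.hadoop.hdfs.DFSUtilClient;
> import org.apache.hadoop.hdfs.HdfsConfiguration;
> import org.junit.Test;
> 
> public class TestDefaultNameNodePortSketch {
>   @Test
>   public void testGetAddressFromConf() {
>     Configuration conf = new HdfsConfiguration();
>     // A bare "host:port" default name should resolve to that explicit port.
>     FileSystem.setDefaultUri(conf, "foo:555");
>     assertEquals(555, DFSUtilClient.getNNAddress(conf).getPort());
>   }
> }
> {code}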



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org


