[jira] [Assigned] (HDFS-14630) Configuration.getTimeDurationHelper() should not log time unit warning in info log.

2019-07-03 Thread hemanthboyina (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14630?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

hemanthboyina reassigned HDFS-14630:


 Assignee: hemanthboyina
Affects Version/s: 3.1.1
  Component/s: hdfs

> Configuration.getTimeDurationHelper() should not log time unit warning in 
> info log.
> ---
>
> Key: HDFS-14630
> URL: https://issues.apache.org/jira/browse/HDFS-14630
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.1.1
>Reporter: Surendra Singh Lilhore
>Assignee: hemanthboyina
>Priority: Minor
>
> To solve the [HDFS-12920|https://issues.apache.org/jira/browse/HDFS-12920] issue 
> we configured "dfs.client.datanode-restart.timeout" without a time unit. Now the 
> log file is full of
> {noformat}
> 2019-06-22 20:13:14,605 | INFO  | pool-12-thread-1 | No unit for 
> dfs.client.datanode-restart.timeout(30) assuming SECONDS 
> org.apache.hadoop.conf.Configuration.logDeprecation(Configuration.java:1409){noformat}
> There is no need to log this; just document the behavior in the property description.






[jira] [Created] (HDFS-14630) Configuration.getTimeDurationHelper() should not log time unit warning in info log.

2019-07-03 Thread Surendra Singh Lilhore (JIRA)
Surendra Singh Lilhore created HDFS-14630:
-

 Summary: Configuration.getTimeDurationHelper() should not log time 
unit warning in info log.
 Key: HDFS-14630
 URL: https://issues.apache.org/jira/browse/HDFS-14630
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Surendra Singh Lilhore


To solve the [HDFS-12920|https://issues.apache.org/jira/browse/HDFS-12920] issue we 
configured "dfs.client.datanode-restart.timeout" without a time unit. Now the log 
file is full of
{noformat}
2019-06-22 20:13:14,605 | INFO  | pool-12-thread-1 | No unit for 
dfs.client.datanode-restart.timeout(30) assuming SECONDS 
org.apache.hadoop.conf.Configuration.logDeprecation(Configuration.java:1409){noformat}
There is no need to log this; just document the behavior in the property description.
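
For context, a minimal, hypothetical sketch of how the warning is triggered; the 
property name and value mirror the log above, and {{getTimeDuration}} is the public 
entry point that, per this issue's title, delegates to {{getTimeDurationHelper()}}:
{code:java}
import java.util.concurrent.TimeUnit;

import org.apache.hadoop.conf.Configuration;

public class TimeUnitWarningDemo {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // A bare number, with no unit suffix such as "30s" or "30000ms".
    conf.set("dfs.client.datanode-restart.timeout", "30");
    // Each call like this logs "No unit for
    // dfs.client.datanode-restart.timeout(30) assuming SECONDS" at INFO,
    // which floods the log when the property is read on a hot path.
    long timeout = conf.getTimeDuration(
        "dfs.client.datanode-restart.timeout", 30, TimeUnit.SECONDS);
    System.out.println("timeout = " + timeout + " seconds");
  }
}
{code}
Writing the value as "30s" avoids the message entirely; this ticket asks that the 
unitless case stop logging too, with the assumed default unit documented in the 
property description instead.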






[jira] [Commented] (HDFS-14593) RBF: RouterAdmin should be able to remove expired routers from Routers Information

2019-07-03 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16878318#comment-16878318
 ] 

Hadoop QA commented on HDFS-14593:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 24m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 14s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
37s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  7s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 23m 15s{color} 
| {color:red} hadoop-hdfs-rbf in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
31s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 80m 34s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.federation.store.driver.TestStateStoreZK |
|   | hadoop.hdfs.server.federation.router.TestRouterWithSecureStartup |
|   | hadoop.hdfs.server.federation.security.TestRouterHttpDelegationToken |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:bdbca0e |
| JIRA Issue | HDFS-14593 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12973628/HDFS-14593.003.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  xml  |
| uname | Linux f7fceb280d13 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / acd2d52 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_212 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27143/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt
 |
|  Test Results | 

[jira] [Created] (HDDS-1762) Reorganize docker-compose examples

2019-07-03 Thread Istvan Fajth (JIRA)
Istvan Fajth created HDDS-1762:
--

 Summary: Reorganize docker-compose examples
 Key: HDDS-1762
 URL: https://issues.apache.org/jira/browse/HDDS-1762
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
  Components: docker
Affects Versions: 0.4.0
Reporter: Istvan Fajth
Assignee: Istvan Fajth


The current docker example directories each contain their own 
docker-compose.yaml, docker-config, .env, and sometimes test.sh files.

There is a lot of duplication, and whenever something new needs to be set, it is 
error prone to set it in every directory.

The aim of this ticket is to organize the common parts into one place and to use 
docker-compose's inheritance and dynamic configuration capabilities so that 
everything is written once and maintainability gets easier. On the other hand, 
this would make things a bit harder to understand at first sight, so the 
documentation needs to be improved as well to guide the reader on what is 
happening where.






[jira] [Commented] (HDFS-14593) RBF: RouterAdmin should be able to remove expired routers from Routers Information

2019-07-03 Thread Takanobu Asanuma (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16878288#comment-16878288
 ] 

Takanobu Asanuma commented on HDFS-14593:
-

Based on the discussion, I uploaded the 3rd patch. I will add the explanation 
of the patch later.

> RBF: RouterAdmin should be able to remove expired routers from Routers 
> Information
> --
>
> Key: HDFS-14593
> URL: https://issues.apache.org/jira/browse/HDFS-14593
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: rbf
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
> Attachments: HDFS-14593.001.patch, HDFS-14593.002.patch, 
> HDFS-14593.003.patch
>
>
> Currently, any router seems to exist in the Router Information eternally.






[jira] [Commented] (HDFS-14593) RBF: RouterAdmin should be able to remove expired routers from Routers Information

2019-07-03 Thread Takanobu Asanuma (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16878287#comment-16878287
 ] 

Takanobu Asanuma commented on HDFS-14593:
-

Thank you very much for sharing your opinion, [~ayushtkn] and [~elgoiri]!
{quote}Doesn't the dead DN also stays in the NN UI for indefinite period? if we 
don't restart NN I guess so.
{quote}
Yes, it does. And it is worse in the router case, because expired routers remain 
even after the routers themselves are restarted.

> RBF: RouterAdmin should be able to remove expired routers from Routers 
> Information
> --
>
> Key: HDFS-14593
> URL: https://issues.apache.org/jira/browse/HDFS-14593
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: rbf
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
> Attachments: HDFS-14593.001.patch, HDFS-14593.002.patch, 
> HDFS-14593.003.patch
>
>
> Currently, any router seems to exist in the Router Information eternally.






[jira] [Updated] (HDFS-14593) RBF: RouterAdmin should be able to remove expired routers from Routers Information

2019-07-03 Thread Takanobu Asanuma (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14593?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Takanobu Asanuma updated HDFS-14593:

Attachment: HDFS-14593.003.patch

> RBF: RouterAdmin should be able to remove expired routers from Routers 
> Information
> --
>
> Key: HDFS-14593
> URL: https://issues.apache.org/jira/browse/HDFS-14593
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: rbf
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
> Attachments: HDFS-14593.001.patch, HDFS-14593.002.patch, 
> HDFS-14593.003.patch
>
>
> Currently, any router seems to exist in the Router Information eternally.






[jira] [Work logged] (HDDS-1550) MiniOzoneCluster is not shutting down all the threads during shutdown.

2019-07-03 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1550?focusedWorklogId=271987&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-271987
 ]

ASF GitHub Bot logged work on HDDS-1550:


Author: ASF GitHub Bot
Created on: 04/Jul/19 02:56
Start Date: 04/Jul/19 02:56
Worklog Time Spent: 10m 
  Work Description: mukul1987 commented on pull request #1050: HDDS-1550. 
MiniOzoneCluster is not shutting down all the threads during shutdown. 
Contributed by Mukul Kumar Singh.
URL: https://github.com/apache/hadoop/pull/1050#discussion_r300213230
 
 

 ##
 File path: 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/MiniOzoneChaosCluster.java
 ##
 @@ -203,7 +203,7 @@ void initializeConfiguration() throws IOException {
   1, TimeUnit.SECONDS);
   conf.setTimeDuration(HddsConfigKeys.HDDS_HEARTBEAT_INTERVAL, 1,
   TimeUnit.SECONDS);
-  conf.setInt(OzoneConfigKeys.OZONE_CONTAINER_CACHE_SIZE, 8);
+  conf.setInt(OzoneConfigKeys.OZONE_CONTAINER_CACHE_SIZE, 2);
 
 Review comment:
   Thanks for the review, @bharatviswa504. This value is being reduced because 
otherwise we run into a "too many open files" exception.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 271987)
Time Spent: 2h 50m  (was: 2h 40m)

> MiniOzoneCluster is not shutting down all the threads during shutdown.
> --
>
> Key: HDDS-1550
> URL: https://issues.apache.org/jira/browse/HDDS-1550
> Project: Hadoop Distributed Data Store
>  Issue Type: Test
>  Components: test
>Affects Versions: 0.3.0
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h 50m
>  Remaining Estimate: 0h
>
> MiniOzoneCluster does not shut down all the threads during shutdown. All the 
> threads must be shut down to close the cluster correctly.






[jira] [Updated] (HDFS-14458) Report pmem stats to namenode

2019-07-03 Thread Feilong He (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14458?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Feilong He updated HDFS-14458:
--
Attachment: HDFS-14458.002.patch

> Report pmem stats to namenode
> -
>
> Key: HDFS-14458
> URL: https://issues.apache.org/jira/browse/HDFS-14458
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Feilong He
>Assignee: Feilong He
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14458.000.patch, HDFS-14458.001.patch, 
> HDFS-14458.002.patch
>
>
> Currently, two important stats should be reported to the NameNode: cache used 
> and cache capacity. 






[jira] [Work logged] (HDDS-1717) MR Job fails as OMFailoverProxyProvider has dependency hadoop-3.2

2019-07-03 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1717?focusedWorklogId=271983&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-271983
 ]

ASF GitHub Bot logged work on HDDS-1717:


Author: ASF GitHub Bot
Created on: 04/Jul/19 02:36
Start Date: 04/Jul/19 02:36
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1056: HDDS-1717. 
Remove OMFailoverProxyProvider's dependency on hadoop-3.2
URL: https://github.com/apache/hadoop/pull/1056#issuecomment-508316289
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 33 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 2 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 31 | Maven dependency ordering for branch |
   | +1 | mvninstall | 481 | trunk passed |
   | +1 | compile | 240 | trunk passed |
   | +1 | checkstyle | 73 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 864 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 164 | trunk passed |
   | 0 | spotbugs | 310 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 502 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 24 | Maven dependency ordering for patch |
   | +1 | mvninstall | 450 | the patch passed |
   | +1 | compile | 264 | the patch passed |
   | +1 | javac | 264 | the patch passed |
   | +1 | checkstyle | 78 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 688 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 163 | the patch passed |
   | -1 | findbugs | 315 | hadoop-ozone generated 1 new + 0 unchanged - 0 fixed 
= 1 total (was 0) |
   ||| _ Other Tests _ |
   | +1 | unit | 240 | hadoop-hdds in the patch passed. |
   | -1 | unit | 1166 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 50 | The patch does not generate ASF License warnings. |
   | | | 6212 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | FindBugs | module:hadoop-ozone |
   |  |  Inconsistent synchronization of 
org.apache.hadoop.ozone.om.ha.OMFailoverProxyProvider.currentProxyOMNodeId; 
locked 87% of time  Unsynchronized access at OMFailoverProxyProvider.java:87% 
of time  Unsynchronized access at OMFailoverProxyProvider.java:[line 182] |
   | Failed junit tests | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | 
hadoop.ozone.container.common.statemachine.commandhandler.TestBlockDeletion |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.hdds.scm.pipeline.TestRatisPipelineProvider |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.ozone.container.server.TestSecureContainerServer |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1056/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1056 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux f83dce0d307a 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / acd2d52 |
   | Default Java | 1.8.0_212 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1056/1/artifact/out/new-findbugs-hadoop-ozone.html
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1056/1/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1056/1/testReport/ |
   | Max. process+thread count | 5402 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/common hadoop-ozone/client 
hadoop-ozone/integration-test U: hadoop-ozone |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1056/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

[jira] [Work logged] (HDDS-1742) Merge ozone-perf and ozonetrace example clusters

2019-07-03 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1742?focusedWorklogId=271982&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-271982
 ]

ASF GitHub Bot logged work on HDDS-1742:


Author: ASF GitHub Bot
Created on: 04/Jul/19 02:34
Start Date: 04/Jul/19 02:34
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1057: HDDS-1742 Merge 
ozone-perf and ozonetrace example clusters
URL: https://github.com/apache/hadoop/pull/1057#issuecomment-508315926
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 73 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | 0 | yamllint | 0 | yamllint was not available. |
   | 0 | shelldocs | 0 | Shelldocs was not available. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 579 | trunk passed |
   | +1 | compile | 300 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 925 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 184 | trunk passed |
   | -0 | patch | 1172 | Used diff version of patch file. Binary files and 
potentially other changes not applied. Please rebase and squash commits if 
necessary. |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 462 | the patch passed |
   | +1 | compile | 282 | the patch passed |
   | +1 | javac | 282 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | shellcheck | 0 | There were no new shellcheck issues. |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 730 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 162 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 298 | hadoop-hdds in the patch passed. |
   | -1 | unit | 1662 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 50 | The patch does not generate ASF License warnings. |
   | | | 5927 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.om.TestOzoneManagerHA |
   |   | hadoop.ozone.TestMiniOzoneCluster |
   |   | hadoop.ozone.client.rpc.TestWatchForCommit |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.ozone.client.rpc.TestBCSID |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=18.09.5 Server=18.09.5 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1057/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1057 |
   | Optional Tests | dupname asflicense mvnsite compile javac javadoc 
mvninstall unit shadedclient yamllint shellcheck shelldocs |
   | uname | Linux 578c8748f154 4.15.0-48-generic #51-Ubuntu SMP Wed Apr 3 
08:28:49 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / acd2d52 |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1057/1/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1057/1/testReport/ |
   | Max. process+thread count | 5185 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/dist U: hadoop-ozone/dist |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1057/1/console |
   | versions | git=2.7.4 maven=3.3.9 shellcheck=0.4.6 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 271982)
Time Spent: 0.5h  (was: 20m)

> Merge ozone-perf and ozonetrace example clusters
> 
>
> Key: HDDS-1742
> URL: https://issues.apache.org/jira/browse/HDDS-1742
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: docker
>Reporter: Elek, Marton
>

[jira] [Commented] (HDFS-12703) Exceptions are fatal to decommissioning monitor

2019-07-03 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-12703?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16878267#comment-16878267
 ] 

Íñigo Goiri commented on HDFS-12703:


BTW, this precondition was added in HDFS-9390.
I'm not an expert in this part of the code, but I don't think the Precondition was 
fully covered.
Let's try to stick to the precondition and add a unit test that triggers it.
We should also log the DN admin state in the triggered exception.

> Exceptions are fatal to decommissioning monitor
> ---
>
> Key: HDFS-12703
> URL: https://issues.apache.org/jira/browse/HDFS-12703
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.7.0
>Reporter: Daryn Sharp
>Assignee: Xue Liu
>Priority: Critical
> Attachments: HDFS-12703.001.patch, HDFS-12703.002.patch, 
> HDFS-12703.003.patch, HDFS-12703.004.patch
>
>
> The {{DecommissionManager.Monitor}} runs as an executor scheduled task.  If 
> an exception occurs, all decommissioning ceases until the NN is restarted.  
> Per javadoc for {{executor#scheduleAtFixedRate}}: *If any execution of the 
> task encounters an exception, subsequent executions are suppressed*.  The 
> monitor thread is alive but blocked waiting for an executor task that will 
> never come.  The code currently disposes of the future so the actual 
> exception that aborted the task is gone.
> Failover is insufficient since the task is also likely dead on the standby.  
> Replication queue init after the transition to active will fix the under 
> replication of blocks on currently decommissioning nodes but future nodes 
> never decommission.  The standby must be bounced prior to failover – and 
> hopefully the error condition does not reoccur.






[jira] [Commented] (HDFS-12748) NameNode memory leak when accessing webhdfs GETHOMEDIRECTORY

2019-07-03 Thread Weiwei Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-12748?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16878265#comment-16878265
 ] 

Weiwei Yang commented on HDFS-12748:


Thanks for the +1, [~hanishakoneru]. I am going to commit this if there are no 
further comments from others.

[~xkrogen], please take a look once you have time; we can still revisit this 
afterward.

Thanks [~hanishakoneru], [~xkrogen].

> NameNode memory leak when accessing webhdfs GETHOMEDIRECTORY
> 
>
> Key: HDFS-12748
> URL: https://issues.apache.org/jira/browse/HDFS-12748
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 2.8.2
>Reporter: Jiandan Yang 
>Assignee: Weiwei Yang
>Priority: Major
> Attachments: HDFS-12748.001.patch, HDFS-12748.002.patch, 
> HDFS-12748.003.patch, HDFS-12748.004.patch, HDFS-12748.005.patch
>
>
> In our production environment, the standby NN often does full GC; through MAT we 
> found the largest object is FileSystem$Cache, which contains 7,844,890 
> DistributedFileSystem instances.
> Looking at the call hierarchy of FileSystem.get(), I found that only 
> NamenodeWebHdfsMethods#get calls FileSystem.get(). I don't know why it creates a 
> different DistributedFileSystem every time instead of getting a FileSystem from 
> the cache.
> {code:java}
> case GETHOMEDIRECTORY: {
>   final String js = JsonUtil.toJsonString("Path",
>   FileSystem.get(conf != null ? conf : new Configuration())
>   .getHomeDirectory().toUri().getPath());
>   return Response.ok(js).type(MediaType.APPLICATION_JSON).build();
> }
> {code}
> When we close the FileSystem in GETHOMEDIRECTORY, the NN doesn't do full GC.
> {code:java}
> case GETHOMEDIRECTORY: {
>   FileSystem fs = null;
>   try {
> fs = FileSystem.get(conf != null ? conf : new Configuration());
> final String js = JsonUtil.toJsonString("Path",
> fs.getHomeDirectory().toUri().getPath());
> return Response.ok(js).type(MediaType.APPLICATION_JSON).build();
>   } finally {
> if (fs != null) {
>   fs.close();
> }
>   }
> }
> {code}






[jira] [Commented] (HDFS-12703) Exceptions are fatal to decommissioning monitor

2019-07-03 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-12703?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16878263#comment-16878263
 ] 

Íñigo Goiri commented on HDFS-12703:


I took a closer look and I think we should just catch the exceptions at the 
{{run()}} level.
Then we need a unit test for this.
It looks like the easiest case to reproduce might be 
{{Preconditions.checkState(false, "A node is in an invalid state!");}}.
I think we can artificially put the DNs in that state.
[~xuel1], can you make these changes?
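
A self-contained sketch (not the actual DecommissionManager code) of the 
{{scheduleAtFixedRate}} failure mode and the proposed {{run()}}-level catch; the 
exception message mirrors the precondition quoted above:
{code:java}
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class MonitorCatchDemo {
  public static void main(String[] args) throws InterruptedException {
    ScheduledExecutorService executor = Executors.newScheduledThreadPool(1);
    executor.scheduleAtFixedRate(() -> {
      try {
        // Stand-in for the monitor's decommission scan.
        throw new IllegalStateException("A node is in an invalid state!");
      } catch (Exception e) {
        // Without this catch, the escaped exception would suppress every
        // subsequent execution, exactly as the executor javadoc warns.
        System.err.println("Monitor caught: " + e.getMessage()
            + "; will run again");
      }
    }, 0, 1, TimeUnit.SECONDS);
    Thread.sleep(3000);
    executor.shutdownNow();
  }
}
{code}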

> Exceptions are fatal to decommissioning monitor
> ---
>
> Key: HDFS-12703
> URL: https://issues.apache.org/jira/browse/HDFS-12703
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.7.0
>Reporter: Daryn Sharp
>Assignee: Xue Liu
>Priority: Critical
> Attachments: HDFS-12703.001.patch, HDFS-12703.002.patch, 
> HDFS-12703.003.patch, HDFS-12703.004.patch
>
>
> The {{DecommissionManager.Monitor}} runs as an executor scheduled task.  If 
> an exception occurs, all decommissioning ceases until the NN is restarted.  
> Per javadoc for {{executor#scheduleAtFixedRate}}: *If any execution of the 
> task encounters an exception, subsequent executions are suppressed*.  The 
> monitor thread is alive but blocked waiting for an executor task that will 
> never come.  The code currently disposes of the future so the actual 
> exception that aborted the task is gone.
> Failover is insufficient since the task is also likely dead on the standby.  
> Replication queue init after the transition to active will fix the under 
> replication of blocks on currently decommissioning nodes but future nodes 
> never decommission.  The standby must be bounced prior to failover – and 
> hopefully the error condition does not reoccur.






[jira] [Work logged] (HDDS-1705) Recon: Add estimatedTotalCount to the response of containers and containers/{id} endpoints

2019-07-03 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1705?focusedWorklogId=271971&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-271971
 ]

ASF GitHub Bot logged work on HDDS-1705:


Author: ASF GitHub Bot
Created on: 04/Jul/19 01:14
Start Date: 04/Jul/19 01:14
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1055: HDDS-1705. 
Recon: Add estimatedTotalCount to the response of ...
URL: https://github.com/apache/hadoop/pull/1055#issuecomment-508302907
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 32 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 5 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 24 | Maven dependency ordering for branch |
   | +1 | mvninstall | 499 | trunk passed |
   | +1 | compile | 252 | trunk passed |
   | +1 | checkstyle | 75 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 852 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 161 | trunk passed |
   | 0 | spotbugs | 329 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 520 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 78 | Maven dependency ordering for patch |
   | +1 | mvninstall | 445 | the patch passed |
   | +1 | compile | 267 | the patch passed |
   | +1 | javac | 267 | the patch passed |
   | +1 | checkstyle | 80 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 1 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 682 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 89 | hadoop-ozone generated 3 new + 9 unchanged - 0 fixed = 
12 total (was 9) |
   | -1 | findbugs | 327 | hadoop-ozone generated 3 new + 0 unchanged - 0 fixed 
= 3 total (was 0) |
   ||| _ Other Tests _ |
   | +1 | unit | 236 | hadoop-hdds in the patch passed. |
   | -1 | unit | 1447 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 38 | The patch does not generate ASF License warnings. |
   | | | 6545 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | FindBugs | module:hadoop-ozone |
   |  |  Should 
org.apache.hadoop.ozone.recon.api.types.ContainersResponse$ContainersResponseData
 be a _static_ inner class?  At ContainersResponse.java:inner class?  At 
ContainersResponse.java:[lines 81-91] |
   |  |  Should 
org.apache.hadoop.ozone.recon.api.types.KeysResponse$KeysResponseData be a 
_static_ inner class?  At KeysResponse.java:inner class?  At 
KeysResponse.java:[lines 80-90] |
   |  |  Dead store to containersCount in 
org.apache.hadoop.ozone.recon.spi.impl.ContainerDBServiceProviderImpl.incrementContainerCount()
  At 
ContainerDBServiceProviderImpl.java:org.apache.hadoop.ozone.recon.spi.impl.ContainerDBServiceProviderImpl.incrementContainerCount()
  At ContainerDBServiceProviderImpl.java:[line 383] |
   | Failed junit tests | hadoop.ozone.TestMiniChaosOzoneCluster |
   |   | hadoop.ozone.client.rpc.TestCommitWatcher |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1055/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1055 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle xml |
   | uname | Linux 8d22e75eee7e 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / acd2d52 |
   | Default Java | 1.8.0_212 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1055/1/artifact/out/diff-javadoc-javadoc-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1055/1/artifact/out/new-findbugs-hadoop-ozone.html
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1055/1/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1055/1/testReport/ |
   | Max. process+thread count | 4120 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-ozone/ozone-manager 
hadoop-ozone/ozone-recon hadoop-ozone/ozone-recon-codegen U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1055/1/console |
   | versions | git=2.7.4 maven=3.3.9 

[jira] [Updated] (HDDS-1742) Merge ozone-perf and ozonetrace example clusters

2019-07-03 Thread Istvan Fajth (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1742?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Istvan Fajth updated HDDS-1742:
---
Status: Patch Available  (was: Open)

> Merge ozone-perf and ozonetrace example clusters
> 
>
> Key: HDDS-1742
> URL: https://issues.apache.org/jira/browse/HDDS-1742
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: docker
>Reporter: Elek, Marton
>Assignee: Istvan Fajth
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> We have multiple example clusters in hadoop-ozone/dist/src/main/compose to 
> demonstrate how different types of configuration can be set with ozone.
> But some of them can be consolidated. I propose to merge ozonetrace into 
> ozoneperf, yielding one ozoneperf cluster that includes all the required 
> components for local performance testing:
>  # opentracing (jaeger component in docker-compose + environment variables)
>  # monitoring (grafana + prometheus)
>  # perf profile (as of now it's enabled only in the ozone cluster[1])
>  
> [1]
> {code:java}
> cat compose/ozone/docker-config | grep prof
> OZONE-SITE.XML_hdds.profiler.endpoint.enabled=true
> ASYNC_PROFILER_HOME=/opt/profiler
> {code}
>  






[jira] [Work logged] (HDDS-1742) Merge ozone-perf and ozonetrace example clusters

2019-07-03 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1742?focusedWorklogId=271969&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-271969
 ]

ASF GitHub Bot logged work on HDDS-1742:


Author: ASF GitHub Bot
Created on: 04/Jul/19 00:54
Start Date: 04/Jul/19 00:54
Worklog Time Spent: 10m 
  Work Description: fapifta commented on issue #1057: HDDS-1742 Merge 
ozone-perf and ozonetrace example clusters
URL: https://github.com/apache/hadoop/pull/1057#issuecomment-508300070
 
 
   /label ozone
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 271969)
Time Spent: 20m  (was: 10m)

> Merge ozone-perf and ozonetrace example clusters
> 
>
> Key: HDDS-1742
> URL: https://issues.apache.org/jira/browse/HDDS-1742
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: docker
>Reporter: Elek, Marton
>Assignee: Istvan Fajth
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> We have multiple example clusters in hadoop-ozone/dist/src/main/compose to 
> demonstrate how different types of configuration can be set with ozone.
> But some of them can be consolidated. I propose to merge ozonetrace into 
> ozoneperf, yielding one ozoneperf cluster that includes all the required 
> components for local performance testing:
>  # opentracing (jaeger component in docker-compose + environment variables)
>  # monitoring (grafana + prometheus)
>  # perf profile (as of now it's enabled only in the ozone cluster[1])
>  
> [1]
> {code:java}
> cat compose/ozone/docker-config | grep prof
> OZONE-SITE.XML_hdds.profiler.endpoint.enabled=true
> ASYNC_PROFILER_HOME=/opt/profiler
> {code}
>  






[jira] [Commented] (HDDS-1717) MR Job fails as OMFailoverProxyProvider has dependency hadoop-3.2

2019-07-03 Thread Hanisha Koneru (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16878252#comment-16878252
 ] 

Hanisha Koneru commented on HDDS-1717:
--

I have tried resolving this without duplicating the classes. I have refactored 
OMProxyInfo to not extend FailoverProxyProvider.ProxyInfo.
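
A hypothetical sketch of the composition-based shape of that refactor (field and 
method names are illustrative, not the actual patch):
{code:java}
import java.net.InetSocketAddress;

/**
 * Illustrative only: instead of extending FailoverProxyProvider.ProxyInfo
 * (final before hadoop-3.2, which causes "VerifyError: Cannot inherit from
 * final class" at class-load time), hold the same data as plain fields.
 */
class OMProxyInfo<T> {
  private final String nodeId;
  private final InetSocketAddress address;
  private T proxy; // RPC proxy, created lazily by the proxy provider

  OMProxyInfo(String nodeId, InetSocketAddress address) {
    this.nodeId = nodeId;
    this.address = address;
  }

  String getNodeId() { return nodeId; }
  InetSocketAddress getAddress() { return address; }
  T getProxy() { return proxy; }
  void setProxy(T proxy) { this.proxy = proxy; }
}
{code}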

> MR Job fails as OMFailoverProxyProvider has dependency hadoop-3.2
> -
>
> Key: HDDS-1717
> URL: https://issues.apache.org/jira/browse/HDDS-1717
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Filesystem
>Affects Versions: 0.4.0
> Environment: Ozone : 10 Node (1 SCM, 1 OM, 10 DN)
> HDP : 5 Node
> Both clusters are on separate nodes and hosted on HDP Ycloud.
>Reporter: Soumitra Sulav
>Assignee: Hanisha Koneru
>Priority: Blocker
>  Labels: pull-request-available
> Attachments: syslog_mapred.err
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> MapReduce jobs are failing with the exception ??Couldn't create protocol class 
> org.apache.hadoop.ozone.client.rpc.RpcClient exception??
> The Ozone hadoop-ozone-filesystem-lib-current.jar was copied to the HDP 
> cluster's hadoop and mapreduce classpaths under:
> {code:java}
> /usr/hdp/3.1.0.0-78/hadoop/lib/hadoop-ozone-filesystem-lib-current-0.5.0-SNAPSHOT.jar
> /usr/hdp/3.1.0.0-78/hadoop-mapreduce/hadoop-ozone-filesystem-lib-current-0.5.0-SNAPSHOT.jar
> {code}
> Excerpt from exception :
> {code:java}
> 2019-06-21 10:07:57,982 ERROR [main] 
> org.apache.hadoop.ozone.client.OzoneClientFactory: Couldn't create protocol 
> class org.apache.hadoop.ozone.client.rpc.RpcClient exception:
> java.lang.reflect.InvocationTargetException
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>   at 
> org.apache.hadoop.ozone.client.OzoneClientFactory.getClientProtocol(OzoneClientFactory.java:291)
>   at 
> org.apache.hadoop.ozone.client.OzoneClientFactory.getRpcClient(OzoneClientFactory.java:169)
>   at 
> org.apache.hadoop.fs.ozone.BasicOzoneClientAdapterImpl.(BasicOzoneClientAdapterImpl.java:134)
>   at 
> org.apache.hadoop.fs.ozone.OzoneClientAdapterImpl.(OzoneClientAdapterImpl.java:50)
>   at 
> org.apache.hadoop.fs.ozone.OzoneFileSystem.createAdapter(OzoneFileSystem.java:103)
>   at 
> org.apache.hadoop.fs.ozone.BasicOzoneFileSystem.initialize(BasicOzoneFileSystem.java:143)
>   at 
> org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3303)
>   at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:124)
>   at 
> org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3352)
>   at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3320)
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:479)
>   at org.apache.hadoop.fs.Path.getFileSystem(Path.java:361)
>   at 
> org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter.(FileOutputCommitter.java:160)
>   at 
> org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter.(FileOutputCommitter.java:116)
>   at 
> org.apache.hadoop.mapreduce.lib.output.PathOutputCommitterFactory.createFileOutputCommitter(PathOutputCommitterFactory.java:134)
>   at 
> org.apache.hadoop.mapreduce.lib.output.FileOutputCommitterFactory.createOutputCommitter(FileOutputCommitterFactory.java:35)
>   at 
> org.apache.hadoop.mapreduce.lib.output.FileOutputFormat.getOutputCommitter(FileOutputFormat.java:338)
>   at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster$3.call(MRAppMaster.java:552)
>   at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster$3.call(MRAppMaster.java:534)
>   at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster.callWithJobClassLoader(MRAppMaster.java:1802)
>   at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster.createOutputCommitter(MRAppMaster.java:534)
>   at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster.serviceInit(MRAppMaster.java:311)
>   at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
>   at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster$6.run(MRAppMaster.java:1760)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
>   at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster.initAndStartAppMaster(MRAppMaster.java:1757)
>   at 
> 

[jira] [Work logged] (HDDS-1742) Merge ozone-perf and ozonetrace example clusters

2019-07-03 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1742?focusedWorklogId=271968&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-271968
 ]

ASF GitHub Bot logged work on HDDS-1742:


Author: ASF GitHub Bot
Created on: 04/Jul/19 00:54
Start Date: 04/Jul/19 00:54
Worklog Time Spent: 10m 
  Work Description: fapifta commented on pull request #1057: HDDS-1742 
Merge ozone-perf and ozonetrace example clusters
URL: https://github.com/apache/hadoop/pull/1057
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 271968)
Time Spent: 10m
Remaining Estimate: 0h

> Merge ozone-perf and ozonetrace example clusters
> 
>
> Key: HDDS-1742
> URL: https://issues.apache.org/jira/browse/HDDS-1742
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: docker
>Reporter: Elek, Marton
>Assignee: Istvan Fajth
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> We have multiple example clusters in hadoop-ozone/dist/src/main/compose to 
> demonstrate how different type of configuration can be set with ozone.
> But some of them can be consolidated. I propose to combine ozonetrace to 
> ozoneperf to one ozoneperf which includes all the required components for a 
> local performance testing:
>  # opentracing (jaeger component in docker-compose + environment variables)
>  # monitoring (grafana + prometheus)
>  # perf profile (as of now it's enabled only in the ozone cluster[1])
>  
> [1]
> {code:java}
> cat compose/ozone/docker-config | grep prof
> OZONE-SITE.XML_hdds.profiler.endpoint.enabled=true
> ASYNC_PROFILER_HOME=/opt/profiler
> {code}
>  






[jira] [Updated] (HDDS-1742) Merge ozone-perf and ozonetrace example clusters

2019-07-03 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1742?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-1742:
-
Labels: pull-request-available  (was: )

> Merge ozone-perf and ozonetrace example clusters
> 
>
> Key: HDDS-1742
> URL: https://issues.apache.org/jira/browse/HDDS-1742
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: docker
>Reporter: Elek, Marton
>Assignee: Istvan Fajth
>Priority: Minor
>  Labels: pull-request-available
>
> We have multiple example clusters in hadoop-ozone/dist/src/main/compose to 
> demonstrate how different types of configuration can be set with ozone.
> But some of them can be consolidated. I propose to merge ozonetrace into 
> ozoneperf, yielding one ozoneperf cluster that includes all the required 
> components for local performance testing:
>  # opentracing (jaeger component in docker-compose + environment variables)
>  # monitoring (grafana + prometheus)
>  # perf profile (as of now it's enabled only in the ozone cluster[1])
>  
> [1]
> {code:java}
> cat compose/ozone/docker-config | grep prof
> OZONE-SITE.XML_hdds.profiler.endpoint.enabled=true
> ASYNC_PROFILER_HOME=/opt/profiler
> {code}
>  






[jira] [Work logged] (HDDS-1717) MR Job fails as OMFailoverProxyProvider has dependency hadoop-3.2

2019-07-03 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1717?focusedWorklogId=271964&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-271964
 ]

ASF GitHub Bot logged work on HDDS-1717:


Author: ASF GitHub Bot
Created on: 04/Jul/19 00:51
Start Date: 04/Jul/19 00:51
Worklog Time Spent: 10m 
  Work Description: hanishakoneru commented on pull request #1056: 
HDDS-1717. Remove OMFailoverProxyProvider's dependency on hadoop-3.2
URL: https://github.com/apache/hadoop/pull/1056
 
 
   Refactors OMProxyInfo to not extend FailoverProxyProvider.ProxyInfo (which 
is a final class before hadoop-3.2).
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 271964)
Time Spent: 10m
Remaining Estimate: 0h

> MR Job fails as OMFailoverProxyProvider has dependency hadoop-3.2
> -
>
> Key: HDDS-1717
> URL: https://issues.apache.org/jira/browse/HDDS-1717
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Filesystem
>Affects Versions: 0.4.0
> Environment: Ozone : 10 Node (1 SCM, 1 OM, 10 DN)
> HDP : 5 Node
> Both clusters are on separate nodes and hosted on HDP Ycloud.
>Reporter: Soumitra Sulav
>Assignee: Hanisha Koneru
>Priority: Blocker
>  Labels: pull-request-available
> Attachments: syslog_mapred.err
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> MapReduce jobs are failing with the exception ??Couldn't create protocol class 
> org.apache.hadoop.ozone.client.rpc.RpcClient exception??
> The Ozone hadoop-ozone-filesystem-lib-current.jar was copied to the HDP 
> cluster's hadoop and mapreduce classpaths under:
> {code:java}
> /usr/hdp/3.1.0.0-78/hadoop/lib/hadoop-ozone-filesystem-lib-current-0.5.0-SNAPSHOT.jar
> /usr/hdp/3.1.0.0-78/hadoop-mapreduce/hadoop-ozone-filesystem-lib-current-0.5.0-SNAPSHOT.jar
> {code}
> Excerpt from exception :
> {code:java}
> 2019-06-21 10:07:57,982 ERROR [main] 
> org.apache.hadoop.ozone.client.OzoneClientFactory: Couldn't create protocol 
> class org.apache.hadoop.ozone.client.rpc.RpcClient exception:
> java.lang.reflect.InvocationTargetException
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>   at 
> org.apache.hadoop.ozone.client.OzoneClientFactory.getClientProtocol(OzoneClientFactory.java:291)
>   at 
> org.apache.hadoop.ozone.client.OzoneClientFactory.getRpcClient(OzoneClientFactory.java:169)
>   at 
> org.apache.hadoop.fs.ozone.BasicOzoneClientAdapterImpl.(BasicOzoneClientAdapterImpl.java:134)
>   at 
> org.apache.hadoop.fs.ozone.OzoneClientAdapterImpl.(OzoneClientAdapterImpl.java:50)
>   at 
> org.apache.hadoop.fs.ozone.OzoneFileSystem.createAdapter(OzoneFileSystem.java:103)
>   at 
> org.apache.hadoop.fs.ozone.BasicOzoneFileSystem.initialize(BasicOzoneFileSystem.java:143)
>   at 
> org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3303)
>   at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:124)
>   at 
> org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3352)
>   at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3320)
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:479)
>   at org.apache.hadoop.fs.Path.getFileSystem(Path.java:361)
>   at 
> org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter.(FileOutputCommitter.java:160)
>   at 
> org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter.(FileOutputCommitter.java:116)
>   at 
> org.apache.hadoop.mapreduce.lib.output.PathOutputCommitterFactory.createFileOutputCommitter(PathOutputCommitterFactory.java:134)
>   at 
> org.apache.hadoop.mapreduce.lib.output.FileOutputCommitterFactory.createOutputCommitter(FileOutputCommitterFactory.java:35)
>   at 
> org.apache.hadoop.mapreduce.lib.output.FileOutputFormat.getOutputCommitter(FileOutputFormat.java:338)
>   at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster$3.call(MRAppMaster.java:552)
>   at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster$3.call(MRAppMaster.java:534)
>   at 
> 

[jira] [Updated] (HDDS-1717) MR Job fails as OMFailoverProxyProvider has dependency hadoop-3.2

2019-07-03 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1717?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-1717:
-
Labels: pull-request-available  (was: )

> MR Job fails as OMFailoverProxyProvider has dependency hadoop-3.2
> -
>
> Key: HDDS-1717
> URL: https://issues.apache.org/jira/browse/HDDS-1717
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Filesystem
>Affects Versions: 0.4.0
> Environment: Ozone : 10 Node (1 SCM, 1 OM, 10 DN)
> HDP : 5 Node
> Both clusters are on separate nodes and hosted on HDP Ycloud.
>Reporter: Soumitra Sulav
>Assignee: Hanisha Koneru
>Priority: Blocker
>  Labels: pull-request-available
> Attachments: syslog_mapred.err
>
>
> MapReduce jobs are failing with the exception ??Couldn't create protocol class 
> org.apache.hadoop.ozone.client.rpc.RpcClient exception??
> The Ozone hadoop-ozone-filesystem-lib-current.jar was copied to the HDP 
> cluster's hadoop and mapreduce classpaths under:
> {code:java}
> /usr/hdp/3.1.0.0-78/hadoop/lib/hadoop-ozone-filesystem-lib-current-0.5.0-SNAPSHOT.jar
> /usr/hdp/3.1.0.0-78/hadoop-mapreduce/hadoop-ozone-filesystem-lib-current-0.5.0-SNAPSHOT.jar
> {code}
> Excerpt from exception :
> {code:java}
> 2019-06-21 10:07:57,982 ERROR [main] 
> org.apache.hadoop.ozone.client.OzoneClientFactory: Couldn't create protocol 
> class org.apache.hadoop.ozone.client.rpc.RpcClient exception:
> java.lang.reflect.InvocationTargetException
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>   at 
> org.apache.hadoop.ozone.client.OzoneClientFactory.getClientProtocol(OzoneClientFactory.java:291)
>   at 
> org.apache.hadoop.ozone.client.OzoneClientFactory.getRpcClient(OzoneClientFactory.java:169)
>   at 
> org.apache.hadoop.fs.ozone.BasicOzoneClientAdapterImpl.(BasicOzoneClientAdapterImpl.java:134)
>   at 
> org.apache.hadoop.fs.ozone.OzoneClientAdapterImpl.(OzoneClientAdapterImpl.java:50)
>   at 
> org.apache.hadoop.fs.ozone.OzoneFileSystem.createAdapter(OzoneFileSystem.java:103)
>   at 
> org.apache.hadoop.fs.ozone.BasicOzoneFileSystem.initialize(BasicOzoneFileSystem.java:143)
>   at 
> org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3303)
>   at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:124)
>   at 
> org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3352)
>   at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3320)
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:479)
>   at org.apache.hadoop.fs.Path.getFileSystem(Path.java:361)
>   at 
> org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter.(FileOutputCommitter.java:160)
>   at 
> org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter.(FileOutputCommitter.java:116)
>   at 
> org.apache.hadoop.mapreduce.lib.output.PathOutputCommitterFactory.createFileOutputCommitter(PathOutputCommitterFactory.java:134)
>   at 
> org.apache.hadoop.mapreduce.lib.output.FileOutputCommitterFactory.createOutputCommitter(FileOutputCommitterFactory.java:35)
>   at 
> org.apache.hadoop.mapreduce.lib.output.FileOutputFormat.getOutputCommitter(FileOutputFormat.java:338)
>   at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster$3.call(MRAppMaster.java:552)
>   at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster$3.call(MRAppMaster.java:534)
>   at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster.callWithJobClassLoader(MRAppMaster.java:1802)
>   at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster.createOutputCommitter(MRAppMaster.java:534)
>   at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster.serviceInit(MRAppMaster.java:311)
>   at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
>   at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster$6.run(MRAppMaster.java:1760)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
>   at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster.initAndStartAppMaster(MRAppMaster.java:1757)
>   at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster.main(MRAppMaster.java:1691)
> Caused by: java.lang.VerifyError: Cannot inherit from final class
>   at java.lang.ClassLoader.defineClass1(Native Method)
>   at 
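
For readers triaging similar failures: a ??Cannot inherit from final class?? 
VerifyError generally means a class was compiled against one version of its 
superclass and then loaded against another version in which that superclass is 
final. A minimal, hypothetical sketch (illustrative names, not the actual Hadoop 
or Ozone classes):
{code:java}
// Compile both classes against version A of a library, where Base is
// NOT final:
class Base { }
class Sub extends Base { }

// Then run with version B of the library on the classpath, where Base
// has become:
//
//     final class Base { }
//
// Loading Sub now fails during bytecode verification with:
//
//     java.lang.VerifyError: Cannot inherit from final class
{code}
That is consistent with the summary above: the filesystem jar was built against 
one Hadoop version while the cluster supplies an incompatible one.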

[jira] [Updated] (HDDS-1717) MR Job fails as OMFailoverProxyProvider has dependency hadoop-3.2

2019-07-03 Thread Hanisha Koneru (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1717?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDDS-1717:
-
Summary: MR Job fails as OMFailoverProxyProvider has dependency hadoop-3.2  
(was: MR Job fails with exception)

> MR Job fails as OMFailoverProxyProvider has dependency hadoop-3.2
> -
>
> Key: HDDS-1717
> URL: https://issues.apache.org/jira/browse/HDDS-1717
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Filesystem
>Affects Versions: 0.4.0
> Environment: Ozone : 10 Node (1 SCM, 1 OM, 10 DN)
> HDP : 5 Node
> Both cluster are on separate nodes and hosted on HDP Ycloud.
>Reporter: Soumitra Sulav
>Assignee: Hanisha Koneru
>Priority: Blocker
> Attachments: syslog_mapred.err
>
>
> MapReduce jobs are failing with the exception ??Couldn't create protocol class 
> org.apache.hadoop.ozone.client.rpc.RpcClient exception??
> The Ozone hadoop-ozone-filesystem-lib-current.jar was copied to the HDP 
> cluster's hadoop and mapreduce classpath under:
> {code:java}
> /usr/hdp/3.1.0.0-78/hadoop/lib/hadoop-ozone-filesystem-lib-current-0.5.0-SNAPSHOT.jar
> /usr/hdp/3.1.0.0-78/hadoop-mapreduce/hadoop-ozone-filesystem-lib-current-0.5.0-SNAPSHOT.jar
> {code}
> Excerpt from the exception:
> {code:java}
> 2019-06-21 10:07:57,982 ERROR [main] 
> org.apache.hadoop.ozone.client.OzoneClientFactory: Couldn't create protocol 
> class org.apache.hadoop.ozone.client.rpc.RpcClient exception:
> java.lang.reflect.InvocationTargetException
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>   at 
> org.apache.hadoop.ozone.client.OzoneClientFactory.getClientProtocol(OzoneClientFactory.java:291)
>   at 
> org.apache.hadoop.ozone.client.OzoneClientFactory.getRpcClient(OzoneClientFactory.java:169)
>   at 
> org.apache.hadoop.fs.ozone.BasicOzoneClientAdapterImpl.<init>(BasicOzoneClientAdapterImpl.java:134)
>   at 
> org.apache.hadoop.fs.ozone.OzoneClientAdapterImpl.<init>(OzoneClientAdapterImpl.java:50)
>   at 
> org.apache.hadoop.fs.ozone.OzoneFileSystem.createAdapter(OzoneFileSystem.java:103)
>   at 
> org.apache.hadoop.fs.ozone.BasicOzoneFileSystem.initialize(BasicOzoneFileSystem.java:143)
>   at 
> org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3303)
>   at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:124)
>   at 
> org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3352)
>   at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3320)
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:479)
>   at org.apache.hadoop.fs.Path.getFileSystem(Path.java:361)
>   at 
> org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter.<init>(FileOutputCommitter.java:160)
>   at 
> org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter.<init>(FileOutputCommitter.java:116)
>   at 
> org.apache.hadoop.mapreduce.lib.output.PathOutputCommitterFactory.createFileOutputCommitter(PathOutputCommitterFactory.java:134)
>   at 
> org.apache.hadoop.mapreduce.lib.output.FileOutputCommitterFactory.createOutputCommitter(FileOutputCommitterFactory.java:35)
>   at 
> org.apache.hadoop.mapreduce.lib.output.FileOutputFormat.getOutputCommitter(FileOutputFormat.java:338)
>   at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster$3.call(MRAppMaster.java:552)
>   at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster$3.call(MRAppMaster.java:534)
>   at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster.callWithJobClassLoader(MRAppMaster.java:1802)
>   at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster.createOutputCommitter(MRAppMaster.java:534)
>   at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster.serviceInit(MRAppMaster.java:311)
>   at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
>   at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster$6.run(MRAppMaster.java:1760)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
>   at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster.initAndStartAppMaster(MRAppMaster.java:1757)
>   at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster.main(MRAppMaster.java:1691)
> Caused by: java.lang.VerifyError: Cannot inherit from final class
>   at java.lang.ClassLoader.defineClass1(Native 

[jira] [Work logged] (HDDS-1705) Recon: Add estimatedTotalCount to the response of containers and containers/{id} endpoints

2019-07-03 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1705?focusedWorklogId=271957&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-271957
 ]

ASF GitHub Bot logged work on HDDS-1705:


Author: ASF GitHub Bot
Created on: 04/Jul/19 00:11
Start Date: 04/Jul/19 00:11
Worklog Time Spent: 10m 
  Work Description: vivekratnavel commented on pull request #1055: 
HDDS-1705. Recon: Add estimatedTotalCount to the response of ...
URL: https://github.com/apache/hadoop/pull/1055#discussion_r300192039
 
 

 ##
 File path: 
hadoop-ozone/ozone-recon/src/main/java/org/apache/hadoop/ozone/recon/spi/impl/ContainerDBServiceProviderImpl.java
 ##
 @@ -271,8 +332,54 @@ public void deleteContainerMapping(ContainerKeyPrefix 
containerKeyPrefix)
 containerKeyTable.delete(containerKeyPrefix);
   }
 
+  /**
+   * Get total count of containers.
+   *
+   * @return total count of containers.
+   */
+  @Override
+  public long getCountForContainers() {
+GlobalStats containerCountRecord =
+globalStatsDao.fetchOneByKey(CONTAINER_COUNT_KEY);
+
+return (containerCountRecord == null) ? 0L :
+containerCountRecord.getValue();
+  }
+
   @Override
   public TableIterator getContainerTableIterator() {
 return containerKeyTable.iterator();
   }
+
+  /**
+   * Store the total count of containers into the container DB store.
+   *
+   * @param count count of the containers present in the system.
+   */
+  @Override
+  public void storeContainerCount(Long count) {
+// Get the current timestamp
+Timestamp now =
 
 Review comment:
   At some point in the future, we might show the last-updated timestamp in the 
UI, and that is why we are using the SQL timestamp here. As long as this value is 
only used by the application, we can use System.currentTimeMillis(), which is what 
we are doing for lastUpdatedTimestamp in ReconInternalSchema.
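
For reference, the two clock sources being weighed in this thread look roughly as 
follows; this is a sketch, assuming the jOOQ Configuration (sqlConfiguration) 
already wired into ContainerDBServiceProviderImpl:
{code:java}
import java.sql.Timestamp;
import org.jooq.Configuration;
import static org.jooq.impl.DSL.currentTimestamp;
import static org.jooq.impl.DSL.select;
import static org.jooq.impl.DSL.using;

class TimestampChoice {
  // Option 1 (what the patch does): ask the database for its clock.
  // The value reflects the database server's timezone and clock.
  static Timestamp dbNow(Configuration sqlConfiguration) {
    return using(sqlConfiguration).fetchValue(select(currentTimestamp()));
  }

  // Option 2: use the JVM clock. Epoch-based and independent of the
  // database's timezone settings, and it avoids a database round-trip.
  static Timestamp jvmNow() {
    return new Timestamp(System.currentTimeMillis());
  }
}
{code}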
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 271957)
Time Spent: 2h 40m  (was: 2.5h)

> Recon: Add estimatedTotalCount to the response of containers and 
> containers/{id} endpoints
> --
>
> Key: HDDS-1705
> URL: https://issues.apache.org/jira/browse/HDDS-1705
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Recon
>Affects Versions: 0.4.0
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h 40m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1554) Create disk tests for fault injection test

2019-07-03 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1554?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16878318#comment-16878318
 ] 

Hadoop QA commented on HDDS-1554:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
41s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
1s{color} | {color:green} No case conflicting files found. {color} |
| {color:blue}0{color} | {color:blue} yamllint {color} | {color:blue}  0m  
0s{color} | {color:blue} yamllint was not available. {color} |
| {color:blue}0{color} | {color:blue} shelldocs {color} | {color:blue}  0m  
0s{color} | {color:blue} Shelldocs was not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 31 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  4m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m  6s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
56s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  5m 
33s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  9m 
13s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
23s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  4m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  4m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
 0s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m 
14s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 30s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  9m 
30s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m 
40s{color} | {color:green} hadoop-hdds in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 26m 10s{color} 
| {color:red} hadoop-ozone in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  1m 
12s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}125m 11s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.TestStorageContainerManager |
|   | 

[jira] [Work logged] (HDDS-1712) Remove sudo access from Ozone docker image

2019-07-03 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1712?focusedWorklogId=271952&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-271952
 ]

ASF GitHub Bot logged work on HDDS-1712:


Author: ASF GitHub Bot
Created on: 03/Jul/19 23:58
Start Date: 03/Jul/19 23:58
Worklog Time Spent: 10m 
  Work Description: eyanghwx commented on pull request #1053: HDDS-1712.  
Test pull request acl.
URL: https://github.com/apache/hadoop/pull/1053
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 271952)
Time Spent: 0.5h  (was: 20m)

> Remove sudo access from Ozone docker image
> --
>
> Key: HDDS-1712
> URL: https://issues.apache.org/jira/browse/HDDS-1712
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1712.001.patch
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> The Ozone docker image is given unlimited sudo access for the hadoop user. This 
> poses a security risk: a host-level user with uid 1000 can attach a debugger to 
> the container process and obtain root access.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1705) Recon: Add estimatedTotalCount to the response of containers and containers/{id} endpoints

2019-07-03 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1705?focusedWorklogId=271948&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-271948
 ]

ASF GitHub Bot logged work on HDDS-1705:


Author: ASF GitHub Bot
Created on: 03/Jul/19 23:51
Start Date: 03/Jul/19 23:51
Worklog Time Spent: 10m 
  Work Description: vivekratnavel commented on pull request #1055: 
HDDS-1705. Recon: Add estimatedTotalCount to the response of ...
URL: https://github.com/apache/hadoop/pull/1055#discussion_r300189170
 
 

 ##
 File path: 
hadoop-ozone/ozone-recon/src/test/java/org/apache/hadoop/ozone/recon/api/TestContainerKeyService.java
 ##
 @@ -80,41 +92,77 @@
 @PrepareForTest(ReconUtils.class)
 public class TestContainerKeyService extends AbstractOMMetadataManagerTest {
 
+  @Rule
+  public TemporaryFolder temporaryFolder = new TemporaryFolder();
   private ContainerDBServiceProvider containerDbServiceProvider;
   private OMMetadataManager omMetadataManager;
   private ReconOMMetadataManager reconOMMetadataManager;
   private Injector injector;
   private OzoneManagerServiceProviderImpl ozoneManagerServiceProvider;
   private ContainerKeyService containerKeyService;
+  private boolean setUpIsDone = false;
+
+  private Injector getInjector() {
+return injector;
+  }
 
   @Before
   public void setUp() throws Exception {
 omMetadataManager = initializeNewOmMetadataManager();
-injector = Guice.createInjector(new AbstractModule() {
-  @Override
-  protected void configure() {
-try {
-  bind(OzoneConfiguration.class).toInstance(
-  getTestOzoneConfiguration());
-  reconOMMetadataManager = getTestMetadataManager(omMetadataManager);
-  
bind(ReconOMMetadataManager.class).toInstance(reconOMMetadataManager);
-  bind(DBStore.class).toProvider(ReconContainerDBProvider.class).
-  in(Singleton.class);
-  bind(ContainerDBServiceProvider.class).to(
-  ContainerDBServiceProviderImpl.class).in(Singleton.class);
-  ozoneManagerServiceProvider = new OzoneManagerServiceProviderImpl(
-  getTestOzoneConfiguration());
-  bind(OzoneManagerServiceProvider.class)
-  .toInstance(ozoneManagerServiceProvider);
-  containerKeyService = new ContainerKeyService();
-  bind(ContainerKeyService.class).toInstance(containerKeyService);
-} catch (IOException e) {
-  Assert.fail();
+File tempDir = temporaryFolder.newFolder();
+AbstractSqlDatabaseTest.DataSourceConfigurationProvider
+configurationProvider =
+new AbstractSqlDatabaseTest.DataSourceConfigurationProvider(tempDir);
+
+JooqPersistenceModule jooqPersistenceModule =
+new JooqPersistenceModule(configurationProvider);
+
+injector = Guice.createInjector(jooqPersistenceModule,
+new AbstractModule() {
+@Override
+public void configure() {
+  try {
+bind(DataSourceConfiguration.class)
+.toProvider(configurationProvider);
+OzoneConfiguration configuration = getTestOzoneConfiguration();
+bind(OzoneConfiguration.class).toInstance(configuration);
+
+ozoneManagerServiceProvider = new OzoneManagerServiceProviderImpl(
+configuration);
+
+reconOMMetadataManager = getTestMetadataManager(omMetadataManager);
+bind(ReconOMMetadataManager.class)
+.toInstance(reconOMMetadataManager);
+
+bind(DBStore.class).toProvider(ReconContainerDBProvider.class).
+in(Singleton.class);
+bind(ContainerDBServiceProvider.class)
+.to(ContainerDBServiceProviderImpl.class).in(Singleton.class);
+
+bind(OzoneManagerServiceProvider.class)
+.toInstance(ozoneManagerServiceProvider);
+containerKeyService = new ContainerKeyService();
+bind(ContainerKeyService.class).toInstance(containerKeyService);
+  } catch (IOException e) {
+Assert.fail();
+  }
 }
-  }
-});
-containerDbServiceProvider = injector.getInstance(
-ContainerDBServiceProvider.class);
+  });
+
+// The following setup is run only once
+if (!setUpIsDone) {
 
 Review comment:
   Yes, but @BeforeClass-annotated methods need to be static, and they cannot be 
used in this case since instance members of the class need to be updated or 
accessed.
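
A minimal sketch of the two JUnit 4 setup patterns under discussion (class and 
method names hypothetical):
{code:java}
import org.junit.Before;
import org.junit.BeforeClass;

public class SetupPatterns {

  // JUnit 4 requires @BeforeClass methods to be static, so they cannot
  // read or update instance members of the test class.
  @BeforeClass
  public static void oneTimeStaticSetup() { /* static-only setup */ }

  // Note: plain JUnit 4 creates a fresh test-class instance per test
  // method, so a run-once guard must itself be static to actually
  // persist across tests.
  private static boolean setUpIsDone = false;

  @Before
  public void setUp() {
    if (!setUpIsDone) {
      // one-time setup that touches instance members goes here
      setUpIsDone = true;
    }
  }
}
{code}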
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 271948)
Time Spent: 2.5h  

[jira] [Work logged] (HDDS-1705) Recon: Add estimatedTotalCount to the response of containers and containers/{id} endpoints

2019-07-03 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1705?focusedWorklogId=271947&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-271947
 ]

ASF GitHub Bot logged work on HDDS-1705:


Author: ASF GitHub Bot
Created on: 03/Jul/19 23:50
Start Date: 03/Jul/19 23:50
Worklog Time Spent: 10m 
  Work Description: swagle commented on pull request #1055: HDDS-1705. 
Recon: Add estimatedTotalCount to the response of ...
URL: https://github.com/apache/hadoop/pull/1055#discussion_r300189038
 
 

 ##
 File path: 
hadoop-ozone/ozone-recon/src/main/java/org/apache/hadoop/ozone/recon/spi/impl/ContainerDBServiceProviderImpl.java
 ##
 @@ -271,8 +332,54 @@ public void deleteContainerMapping(ContainerKeyPrefix 
containerKeyPrefix)
 containerKeyTable.delete(containerKeyPrefix);
   }
 
+  /**
+   * Get total count of containers.
+   *
+   * @return total count of containers.
+   */
+  @Override
+  public long getCountForContainers() {
+GlobalStats containerCountRecord =
+globalStatsDao.fetchOneByKey(CONTAINER_COUNT_KEY);
+
+return (containerCountRecord == null) ? 0L :
+containerCountRecord.getValue();
+  }
+
   @Override
   public TableIterator getContainerTableIterator() {
 return containerKeyTable.iterator();
   }
+
+  /**
+   * Store the total count of containers into the container DB store.
+   *
+   * @param count count of the containers present in the system.
+   */
+  @Override
+  public void storeContainerCount(Long count) {
+// Get the current timestamp
+Timestamp now =
 
 Review comment:
   But Ozone will rely on the JVM timestamp, and since time is relative, as long 
as we don't change the impl midway, this point is moot, isn't it? :-)
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 271947)
Time Spent: 2h 20m  (was: 2h 10m)

> Recon: Add estimatedTotalCount to the response of containers and 
> containers/{id} endpoints
> --
>
> Key: HDDS-1705
> URL: https://issues.apache.org/jira/browse/HDDS-1705
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Recon
>Affects Versions: 0.4.0
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h 20m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1705) Recon: Add estimatedTotalCount to the response of containers and containers/{id} endpoints

2019-07-03 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1705?focusedWorklogId=271945&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-271945
 ]

ASF GitHub Bot logged work on HDDS-1705:


Author: ASF GitHub Bot
Created on: 03/Jul/19 23:49
Start Date: 03/Jul/19 23:49
Worklog Time Spent: 10m 
  Work Description: swagle commented on pull request #1055: HDDS-1705. 
Recon: Add estimatedTotalCount to the response of ...
URL: https://github.com/apache/hadoop/pull/1055#discussion_r300188797
 
 

 ##
 File path: 
hadoop-ozone/ozone-recon/src/test/java/org/apache/hadoop/ozone/recon/tasks/TestContainerKeyMapperTask.java
 ##
 @@ -73,33 +82,67 @@
   private ReconOMMetadataManager reconOMMetadataManager;
   private Injector injector;
   private OzoneManagerServiceProviderImpl ozoneManagerServiceProvider;
+  private boolean setUpIsDone = false;
+
+  private Injector getInjector() {
+return injector;
+  }
 
   @Before
   public void setUp() throws Exception {
 omMetadataManager = initializeNewOmMetadataManager();
-injector = Guice.createInjector(new AbstractModule() {
-  @Override
-  protected void configure() {
-try {
-  bind(OzoneConfiguration.class).toInstance(
-  getTestOzoneConfiguration());
-
-  reconOMMetadataManager = getTestMetadataManager(omMetadataManager);
-  
bind(ReconOMMetadataManager.class).toInstance(reconOMMetadataManager);
-  ozoneManagerServiceProvider = new OzoneManagerServiceProviderImpl(
-  getTestOzoneConfiguration());
-  bind(OzoneManagerServiceProvider.class)
-  .toInstance(ozoneManagerServiceProvider);
-
-  bind(DBStore.class).toProvider(ReconContainerDBProvider.class).
-  in(Singleton.class);
-  bind(ContainerDBServiceProvider.class).to(
-  ContainerDBServiceProviderImpl.class).in(Singleton.class);
-} catch (IOException e) {
-  Assert.fail();
+
+File tempDir = temporaryFolder.newFolder();
+AbstractSqlDatabaseTest.DataSourceConfigurationProvider
+configurationProvider =
+new AbstractSqlDatabaseTest.DataSourceConfigurationProvider(tempDir);
+
+JooqPersistenceModule jooqPersistenceModule =
+new JooqPersistenceModule(configurationProvider);
+
+injector = Guice.createInjector(jooqPersistenceModule,
 
 Review comment:
   Isn't this code duplicated? Can we add an interface with a default impl, which 
is supported in Java 8? Or a static helper somewhere?
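
A rough sketch of the Java 8 default-method variant suggested here (interface and 
method names hypothetical, not from the patch):
{code:java}
import com.google.inject.Guice;
import com.google.inject.Injector;
import com.google.inject.Module;

// Each test class would implement this interface and supply only its own
// bindings; the shared, currently duplicated wiring lives in the default
// method.
interface GuiceInjectorTest {

  // Test-specific bindings (the AbstractModule body that differs
  // between the two tests).
  Module testBindings();

  // Shared construction logic, written once.
  default Injector createInjector(Module persistenceModule) {
    return Guice.createInjector(persistenceModule, testBindings());
  }
}
{code}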
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 271945)
Time Spent: 2h 10m  (was: 2h)

> Recon: Add estimatedTotalCount to the response of containers and 
> containers/{id} endpoints
> --
>
> Key: HDDS-1705
> URL: https://issues.apache.org/jira/browse/HDDS-1705
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Recon
>Affects Versions: 0.4.0
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h 10m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1705) Recon: Add estimatedTotalCount to the response of containers and containers/{id} endpoints

2019-07-03 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1705?focusedWorklogId=271943&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-271943
 ]

ASF GitHub Bot logged work on HDDS-1705:


Author: ASF GitHub Bot
Created on: 03/Jul/19 23:46
Start Date: 03/Jul/19 23:46
Worklog Time Spent: 10m 
  Work Description: swagle commented on pull request #1055: HDDS-1705. 
Recon: Add estimatedTotalCount to the response of ...
URL: https://github.com/apache/hadoop/pull/1055#discussion_r300188458
 
 

 ##
 File path: 
hadoop-ozone/ozone-recon/src/main/java/org/apache/hadoop/ozone/recon/tasks/ContainerKeyMapperTask.java
 ##
 @@ -144,24 +150,35 @@ private void  deleteOMKeyFromContainerDB(String key)
 Table.KeyValue> containerIterator =
 containerDBServiceProvider.getContainerTableIterator();
 
-Set keysToDeDeleted = new HashSet<>();
+Set keysToBeDeleted = new HashSet<>();
 
 while (containerIterator.hasNext()) {
   Table.KeyValue keyValue =
   containerIterator.next();
   String keyPrefix = keyValue.getKey().getKeyPrefix();
   if (keyPrefix.equals(key)) {
-keysToDeDeleted.add(keyValue.getKey());
+keysToBeDeleted.add(keyValue.getKey());
   }
 }
 
-for (ContainerKeyPrefix containerKeyPrefix : keysToDeDeleted) {
+for (ContainerKeyPrefix containerKeyPrefix : keysToBeDeleted) {
   containerDBServiceProvider.deleteContainerMapping(containerKeyPrefix);
+
+  // decrement count and update containerKeyCount.
+  Long containerID = containerKeyPrefix.getContainerId();
+  long keyCount =
+  containerDBServiceProvider.getKeyCountForContainer(containerID);
+  if (keyCount > 0) {
+containerDBServiceProvider.storeContainerKeyCount(containerID,
 
 Review comment:
   Is this to keep the counts up to date? It seems this change is not needed for 
this test, right?
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 271943)
Time Spent: 2h  (was: 1h 50m)

> Recon: Add estimatedTotalCount to the response of containers and 
> containers/{id} endpoints
> --
>
> Key: HDDS-1705
> URL: https://issues.apache.org/jira/browse/HDDS-1705
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Recon
>Affects Versions: 0.4.0
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1705) Recon: Add estimatedTotalCount to the response of containers and containers/{id} endpoints

2019-07-03 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1705?focusedWorklogId=271942&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-271942
 ]

ASF GitHub Bot logged work on HDDS-1705:


Author: ASF GitHub Bot
Created on: 03/Jul/19 23:45
Start Date: 03/Jul/19 23:45
Worklog Time Spent: 10m 
  Work Description: swagle commented on pull request #1055: HDDS-1705. 
Recon: Add estimatedTotalCount to the response of ...
URL: https://github.com/apache/hadoop/pull/1055#discussion_r300188348
 
 

 ##
 File path: 
hadoop-ozone/ozone-recon/src/test/java/org/apache/hadoop/ozone/recon/api/TestContainerKeyService.java
 ##
 @@ -80,41 +92,77 @@
 @PrepareForTest(ReconUtils.class)
 public class TestContainerKeyService extends AbstractOMMetadataManagerTest {
 
+  @Rule
+  public TemporaryFolder temporaryFolder = new TemporaryFolder();
   private ContainerDBServiceProvider containerDbServiceProvider;
   private OMMetadataManager omMetadataManager;
   private ReconOMMetadataManager reconOMMetadataManager;
   private Injector injector;
   private OzoneManagerServiceProviderImpl ozoneManagerServiceProvider;
   private ContainerKeyService containerKeyService;
+  private boolean setUpIsDone = false;
+
+  private Injector getInjector() {
+return injector;
+  }
 
   @Before
   public void setUp() throws Exception {
 omMetadataManager = initializeNewOmMetadataManager();
-injector = Guice.createInjector(new AbstractModule() {
-  @Override
-  protected void configure() {
-try {
-  bind(OzoneConfiguration.class).toInstance(
-  getTestOzoneConfiguration());
-  reconOMMetadataManager = getTestMetadataManager(omMetadataManager);
-  
bind(ReconOMMetadataManager.class).toInstance(reconOMMetadataManager);
-  bind(DBStore.class).toProvider(ReconContainerDBProvider.class).
-  in(Singleton.class);
-  bind(ContainerDBServiceProvider.class).to(
-  ContainerDBServiceProviderImpl.class).in(Singleton.class);
-  ozoneManagerServiceProvider = new OzoneManagerServiceProviderImpl(
-  getTestOzoneConfiguration());
-  bind(OzoneManagerServiceProvider.class)
-  .toInstance(ozoneManagerServiceProvider);
-  containerKeyService = new ContainerKeyService();
-  bind(ContainerKeyService.class).toInstance(containerKeyService);
-} catch (IOException e) {
-  Assert.fail();
+File tempDir = temporaryFolder.newFolder();
+AbstractSqlDatabaseTest.DataSourceConfigurationProvider
+configurationProvider =
+new AbstractSqlDatabaseTest.DataSourceConfigurationProvider(tempDir);
+
+JooqPersistenceModule jooqPersistenceModule =
+new JooqPersistenceModule(configurationProvider);
+
+injector = Guice.createInjector(jooqPersistenceModule,
+new AbstractModule() {
+@Override
+public void configure() {
+  try {
+bind(DataSourceConfiguration.class)
+.toProvider(configurationProvider);
+OzoneConfiguration configuration = getTestOzoneConfiguration();
+bind(OzoneConfiguration.class).toInstance(configuration);
+
+ozoneManagerServiceProvider = new OzoneManagerServiceProviderImpl(
+configuration);
+
+reconOMMetadataManager = getTestMetadataManager(omMetadataManager);
+bind(ReconOMMetadataManager.class)
+.toInstance(reconOMMetadataManager);
+
+bind(DBStore.class).toProvider(ReconContainerDBProvider.class).
+in(Singleton.class);
+bind(ContainerDBServiceProvider.class)
+.to(ContainerDBServiceProviderImpl.class).in(Singleton.class);
+
+bind(OzoneManagerServiceProvider.class)
+.toInstance(ozoneManagerServiceProvider);
+containerKeyService = new ContainerKeyService();
+bind(ContainerKeyService.class).toInstance(containerKeyService);
+  } catch (IOException e) {
+Assert.fail();
+  }
 }
-  }
-});
-containerDbServiceProvider = injector.getInstance(
-ContainerDBServiceProvider.class);
+  });
+
+// The following setup is run only once
+if (!setUpIsDone) {
 
 Review comment:
   @BeforeClass-annotated methods are guaranteed to be run once by the JUnit 
framework.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 271942)
Time Spent: 1h 50m  (was: 1h 40m)

> Recon: Add estimatedTotalCount to the response of containers 

[jira] [Work logged] (HDDS-1705) Recon: Add estimatedTotalCount to the response of containers and containers/{id} endpoints

2019-07-03 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1705?focusedWorklogId=271941&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-271941
 ]

ASF GitHub Bot logged work on HDDS-1705:


Author: ASF GitHub Bot
Created on: 03/Jul/19 23:44
Start Date: 03/Jul/19 23:44
Worklog Time Spent: 10m 
  Work Description: vivekratnavel commented on pull request #1055: 
HDDS-1705. Recon: Add estimatedTotalCount to the response of ...
URL: https://github.com/apache/hadoop/pull/1055#discussion_r300188150
 
 

 ##
 File path: 
hadoop-ozone/ozone-recon/src/main/java/org/apache/hadoop/ozone/recon/spi/impl/ContainerDBServiceProviderImpl.java
 ##
 @@ -271,8 +332,54 @@ public void deleteContainerMapping(ContainerKeyPrefix 
containerKeyPrefix)
 containerKeyTable.delete(containerKeyPrefix);
   }
 
+  /**
+   * Get total count of containers.
+   *
+   * @return total count of containers.
+   */
+  @Override
+  public long getCountForContainers() {
+GlobalStats containerCountRecord =
+globalStatsDao.fetchOneByKey(CONTAINER_COUNT_KEY);
+
+return (containerCountRecord == null) ? 0L :
+containerCountRecord.getValue();
+  }
+
   @Override
   public TableIterator getContainerTableIterator() {
 return containerKeyTable.iterator();
   }
+
+  /**
+   * Store the total count of containers into the container DB store.
+   *
+   * @param count count of the containers present in the system.
+   */
+  @Override
+  public void storeContainerCount(Long count) {
+// Get the current timestamp
+Timestamp now =
 
 Review comment:
   Yes. The assumption here is that the SQL database will be running in the UTC 
timezone, as is standard practice for production databases, and we don't want 
to mess with the timezone by using System.currentTimeMillis().
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 271941)
Time Spent: 1h 40m  (was: 1.5h)

> Recon: Add estimatedTotalCount to the response of containers and 
> containers/{id} endpoints
> --
>
> Key: HDDS-1705
> URL: https://issues.apache.org/jira/browse/HDDS-1705
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Recon
>Affects Versions: 0.4.0
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1705) Recon: Add estimatedTotalCount to the response of containers and containers/{id} endpoints

2019-07-03 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1705?focusedWorklogId=271940&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-271940
 ]

ASF GitHub Bot logged work on HDDS-1705:


Author: ASF GitHub Bot
Created on: 03/Jul/19 23:42
Start Date: 03/Jul/19 23:42
Worklog Time Spent: 10m 
  Work Description: swagle commented on pull request #1055: HDDS-1705. 
Recon: Add estimatedTotalCount to the response of ...
URL: https://github.com/apache/hadoop/pull/1055#discussion_r300187951
 
 

 ##
 File path: 
hadoop-ozone/ozone-recon/src/main/java/org/apache/hadoop/ozone/recon/spi/impl/ContainerDBServiceProviderImpl.java
 ##
 @@ -271,8 +332,54 @@ public void deleteContainerMapping(ContainerKeyPrefix 
containerKeyPrefix)
 containerKeyTable.delete(containerKeyPrefix);
   }
 
+  /**
+   * Get total count of containers.
+   *
+   * @return total count of containers.
+   */
+  @Override
+  public long getCountForContainers() {
+GlobalStats containerCountRecord =
+globalStatsDao.fetchOneByKey(CONTAINER_COUNT_KEY);
+
+return (containerCountRecord == null) ? 0L :
+containerCountRecord.getValue();
+  }
+
   @Override
   public TableIterator getContainerTableIterator() {
 return containerKeyTable.iterator();
   }
+
+  /**
+   * Store the total count of containers into the container DB store.
+   *
+   * @param count count of the containers present in the system.
+   */
+  @Override
+  public void storeContainerCount(Long count) {
+// Get the current timestamp
+Timestamp now =
+using(sqlConfiguration).fetchValue(select(currentTimestamp()));
+GlobalStats containerCountRecord =
+globalStatsDao.fetchOneByKey(CONTAINER_COUNT_KEY);
+GlobalStats globalStatsRecord =
+new GlobalStats(CONTAINER_COUNT_KEY, count, now);
+
+// Insert a new record for CONTAINER_COUNT_KEY if it does not exist
+if (containerCountRecord == null) {
+  globalStatsDao.insert(globalStatsRecord);
+} else {
+  globalStatsDao.update(globalStatsRecord);
+}
+  }
+
+  /**
+   * Increment the total count for containers in the system.
+   */
+  @Override
+  public void incrementContainerCount() {
+long containersCount = getCountForContainers();
+storeContainerCount(++containersCount);
 
 Review comment:
   Since we are counting the total, we should do a batch commit here, if Jooq 
has this functionality. Maybe a future Jira.
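
jOOQ does expose batching through DSLContext.batch(...); a rough sketch of what 
such a follow-up could look like (helper and variable names hypothetical):
{code:java}
import java.util.List;
import org.jooq.DSLContext;
import org.jooq.Query;

// Accumulate the per-increment UPDATE statements and flush them to the
// database in a single JDBC batch, instead of one round-trip per
// increment.
final class BatchedGlobalStats {
  private BatchedGlobalStats() { }

  static void flush(DSLContext ctx, List<Query> pendingUpdates) {
    ctx.batch(pendingUpdates).execute();
  }
}
{code}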
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 271940)
Time Spent: 1.5h  (was: 1h 20m)

> Recon: Add estimatedTotalCount to the response of containers and 
> containers/{id} endpoints
> --
>
> Key: HDDS-1705
> URL: https://issues.apache.org/jira/browse/HDDS-1705
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Recon
>Affects Versions: 0.4.0
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1705) Recon: Add estimatedTotalCount to the response of containers and containers/{id} endpoints

2019-07-03 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1705?focusedWorklogId=271939&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-271939
 ]

ASF GitHub Bot logged work on HDDS-1705:


Author: ASF GitHub Bot
Created on: 03/Jul/19 23:40
Start Date: 03/Jul/19 23:40
Worklog Time Spent: 10m 
  Work Description: swagle commented on pull request #1055: HDDS-1705. 
Recon: Add estimatedTotalCount to the response of ...
URL: https://github.com/apache/hadoop/pull/1055#discussion_r300187614
 
 

 ##
 File path: 
hadoop-ozone/ozone-recon/src/main/java/org/apache/hadoop/ozone/recon/spi/impl/ContainerDBServiceProviderImpl.java
 ##
 @@ -271,8 +332,54 @@ public void deleteContainerMapping(ContainerKeyPrefix 
containerKeyPrefix)
 containerKeyTable.delete(containerKeyPrefix);
   }
 
+  /**
+   * Get total count of containers.
+   *
+   * @return total count of containers.
+   */
+  @Override
+  public long getCountForContainers() {
+GlobalStats containerCountRecord =
+globalStatsDao.fetchOneByKey(CONTAINER_COUNT_KEY);
+
+return (containerCountRecord == null) ? 0L :
+containerCountRecord.getValue();
+  }
+
   @Override
   public TableIterator getContainerTableIterator() {
 return containerKeyTable.iterator();
   }
+
+  /**
+   * Store the total count of containers into the container DB store.
+   *
+   * @param count count of the containers present in the system.
+   */
+  @Override
+  public void storeContainerCount(Long count) {
+// Get the current timestamp
+Timestamp now =
 
 Review comment:
   Any reason to ask for the SQL timestamp vs. System.currentTimeMillis()?
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 271939)
Time Spent: 1h 20m  (was: 1h 10m)

> Recon: Add estimatedTotalCount to the response of containers and 
> containers/{id} endpoints
> --
>
> Key: HDDS-1705
> URL: https://issues.apache.org/jira/browse/HDDS-1705
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Recon
>Affects Versions: 0.4.0
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1705) Recon: Add estimatedTotalCount to the response of containers and containers/{id} endpoints

2019-07-03 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1705?focusedWorklogId=271937&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-271937
 ]

ASF GitHub Bot logged work on HDDS-1705:


Author: ASF GitHub Bot
Created on: 03/Jul/19 23:38
Start Date: 03/Jul/19 23:38
Worklog Time Spent: 10m 
  Work Description: swagle commented on pull request #1055: HDDS-1705. 
Recon: Add estimatedTotalCount to the response of ...
URL: https://github.com/apache/hadoop/pull/1055#discussion_r300187343
 
 

 ##
 File path: 
hadoop-ozone/ozone-recon/src/main/java/org/apache/hadoop/ozone/recon/spi/impl/ContainerDBServiceProviderImpl.java
 ##
 @@ -112,16 +134,55 @@ public void storeContainerKeyMapping(ContainerKeyPrefix 
containerKeyPrefix,
 containerKeyTable.put(containerKeyPrefix, count);
   }
 
+  /**
+   * Store the containerID -> no. of keys count into the container DB store.
+   *
+   * @param containerID the containerID.
+   * @param count count of the keys within the given containerID.
+   * @throws IOException
+   */
+  @Override
+  public void storeContainerKeyCount(Long containerID, Long count)
+  throws IOException {
+containerKeyCountTable.put(containerID, count);
+  }
+
+  /**
+   * Get the total count of keys within the given containerID.
+   *
+   * @param containerID the given containerID.
+   * @return count of keys within the given containerID.
+   * @throws IOException
+   */
+  @Override
+  public long getKeyCountForContainer(Long containerID) throws IOException {
+Long keyCount = containerKeyCountTable.get(containerID);
+return keyCount == null ? 0L : keyCount;
+  }
+
+  /**
+   * Get if a containerID exists or not.
+   *
+   * @param containerID the given containerID.
+   * @return if the given ContainerID exists or not.
+   * @throws IOException
+   */
+  @Override
+  public boolean isContainerExists(Long containerID) throws IOException {
 
 Review comment:
   doesContainerExists sounds better IMO.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 271937)
Time Spent: 1h  (was: 50m)

> Recon: Add estimatedTotalCount to the response of containers and 
> containers/{id} endpoints
> --
>
> Key: HDDS-1705
> URL: https://issues.apache.org/jira/browse/HDDS-1705
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Recon
>Affects Versions: 0.4.0
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1705) Recon: Add estimatedTotalCount to the response of containers and containers/{id} endpoints

2019-07-03 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1705?focusedWorklogId=271938&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-271938
 ]

ASF GitHub Bot logged work on HDDS-1705:


Author: ASF GitHub Bot
Created on: 03/Jul/19 23:38
Start Date: 03/Jul/19 23:38
Worklog Time Spent: 10m 
  Work Description: swagle commented on pull request #1055: HDDS-1705. 
Recon: Add estimatedTotalCount to the response of ...
URL: https://github.com/apache/hadoop/pull/1055#discussion_r300187375
 
 

 ##
 File path: 
hadoop-ozone/ozone-recon/src/main/java/org/apache/hadoop/ozone/recon/spi/impl/ContainerDBServiceProviderImpl.java
 ##
 @@ -112,16 +134,55 @@ public void storeContainerKeyMapping(ContainerKeyPrefix 
containerKeyPrefix,
 containerKeyTable.put(containerKeyPrefix, count);
   }
 
+  /**
+   * Store the containerID -> no. of keys count into the container DB store.
+   *
+   * @param containerID the containerID.
+   * @param count count of the keys within the given containerID.
+   * @throws IOException
+   */
+  @Override
+  public void storeContainerKeyCount(Long containerID, Long count)
+  throws IOException {
+containerKeyCountTable.put(containerID, count);
+  }
+
+  /**
+   * Get the total count of keys within the given containerID.
+   *
+   * @param containerID the given containerID.
+   * @return count of keys within the given containerID.
+   * @throws IOException
+   */
+  @Override
+  public long getKeyCountForContainer(Long containerID) throws IOException {
+Long keyCount = containerKeyCountTable.get(containerID);
+return keyCount == null ? 0L : keyCount;
+  }
+
+  /**
+   * Get if a containerID exists or not.
+   *
+   * @param containerID the given containerID.
+   * @return if the given ContainerID exists or not.
+   * @throws IOException
+   */
+  @Override
+  public boolean isContainerExists(Long containerID) throws IOException {
+Long keyCount = containerKeyCountTable.get(containerID);
+return keyCount != null;
 
 Review comment:
   Can be simplified to 1 statement.
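
The single-statement form presumably meant here, folding the temporary into the 
return:
{code:java}
@Override
public boolean isContainerExists(Long containerID) throws IOException {
  // The container exists iff a key-count entry is present for it.
  return containerKeyCountTable.get(containerID) != null;
}
{code}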
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 271938)
Time Spent: 1h 10m  (was: 1h)

> Recon: Add estimatedTotalCount to the response of containers and 
> containers/{id} endpoints
> --
>
> Key: HDDS-1705
> URL: https://issues.apache.org/jira/browse/HDDS-1705
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Recon
>Affects Versions: 0.4.0
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1705) Recon: Add estimatedTotalCount to the response of containers and containers/{id} endpoints

2019-07-03 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1705?focusedWorklogId=271936&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-271936
 ]

ASF GitHub Bot logged work on HDDS-1705:


Author: ASF GitHub Bot
Created on: 03/Jul/19 23:30
Start Date: 03/Jul/19 23:30
Worklog Time Spent: 10m 
  Work Description: swagle commented on pull request #1055: HDDS-1705. 
Recon: Add estimatedTotalCount to the response of ...
URL: https://github.com/apache/hadoop/pull/1055#discussion_r300185879
 
 

 ##
 File path: 
hadoop-ozone/ozone-recon/src/main/java/org/apache/hadoop/ozone/recon/ReconServer.java
 ##
 @@ -63,6 +70,8 @@ public Void call() throws Exception {
 OzoneConfiguration ozoneConfiguration = createOzoneConfiguration();
 OzoneConfigurationProvider.setConfiguration(ozoneConfiguration);
 
+JooqConfigurationProvider.setConfiguration(sqlConfiguration);
 
 Review comment:
   This is an anti-pattern that leads to unclean code. The only reason to do 
this for OzoneConfiguration is the mixing of non-DI code with DI. Since the 
member itself is provided by the injector, the JooqConfiguration could just be 
provided explicitly by a module like ReconController, or by the one created by the 
Sql test classes.
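
A sketch of the module-based alternative described above, assuming a hypothetical 
module that receives the jOOQ Configuration at construction time and exposes it 
via a Guice @Provides method instead of a static setter:
{code:java}
import com.google.inject.AbstractModule;
import com.google.inject.Provides;
import com.google.inject.Singleton;
import org.jooq.Configuration;

class ReconSqlConfigurationModule extends AbstractModule {
  private final Configuration sqlConfiguration;

  ReconSqlConfigurationModule(Configuration sqlConfiguration) {
    this.sqlConfiguration = sqlConfiguration;
  }

  @Override
  protected void configure() {
    // All bindings come from the @Provides method below.
  }

  @Provides
  @Singleton
  Configuration provideSqlConfiguration() {
    return sqlConfiguration;
  }
}
{code}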
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 271936)
Time Spent: 50m  (was: 40m)

> Recon: Add estimatedTotalCount to the response of containers and 
> containers/{id} endpoints
> --
>
> Key: HDDS-1705
> URL: https://issues.apache.org/jira/browse/HDDS-1705
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Recon
>Affects Versions: 0.4.0
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1705) Recon: Add estimatedTotalCount to the response of containers and containers/{id} endpoints

2019-07-03 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1705?focusedWorklogId=271935&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-271935
 ]

ASF GitHub Bot logged work on HDDS-1705:


Author: ASF GitHub Bot
Created on: 03/Jul/19 23:28
Start Date: 03/Jul/19 23:28
Worklog Time Spent: 10m 
  Work Description: swagle commented on pull request #1055: HDDS-1705. 
Recon: Add estimatedTotalCount to the response of ...
URL: https://github.com/apache/hadoop/pull/1055#discussion_r300185879
 
 

 ##
 File path: 
hadoop-ozone/ozone-recon/src/main/java/org/apache/hadoop/ozone/recon/ReconServer.java
 ##
 @@ -63,6 +70,8 @@ public Void call() throws Exception {
 OzoneConfiguration ozoneConfiguration = createOzoneConfiguration();
 OzoneConfigurationProvider.setConfiguration(ozoneConfiguration);
 
+JooqConfigurationProvider.setConfiguration(sqlConfiguration);
 
 Review comment:
   This is an anti-pattern; the only reason to do this for OzoneConfiguration is 
the mixing of non-DI code with DI. This could just be provided explicitly 
by a module like ReconController or the one created by the Sql test classes.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 271935)
Time Spent: 40m  (was: 0.5h)

> Recon: Add estimatedTotalCount to the response of containers and 
> containers/{id} endpoints
> --
>
> Key: HDDS-1705
> URL: https://issues.apache.org/jira/browse/HDDS-1705
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Recon
>Affects Versions: 0.4.0
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1705) Recon: Add estimatedTotalCount to the response of containers and containers/{id} endpoints

2019-07-03 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1705?focusedWorklogId=271934&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-271934
 ]

ASF GitHub Bot logged work on HDDS-1705:


Author: ASF GitHub Bot
Created on: 03/Jul/19 23:24
Start Date: 03/Jul/19 23:24
Worklog Time Spent: 10m 
  Work Description: vivekratnavel commented on issue #1055: HDDS-1705. 
Recon: Add estimatedTotalCount to the response of ...
URL: https://github.com/apache/hadoop/pull/1055#issuecomment-508286423
 
 
   @swagle @elek @avijayanhwx Please review when you find time.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 271934)
Time Spent: 0.5h  (was: 20m)

> Recon: Add estimatedTotalCount to the response of containers and 
> containers/{id} endpoints
> --
>
> Key: HDDS-1705
> URL: https://issues.apache.org/jira/browse/HDDS-1705
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Recon
>Affects Versions: 0.4.0
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1705) Recon: Add estimatedTotalCount to the response of containers and containers/{id} endpoints

2019-07-03 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1705?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-1705:
-
Labels: pull-request-available  (was: )

> Recon: Add estimatedTotalCount to the response of containers and 
> containers/{id} endpoints
> --
>
> Key: HDDS-1705
> URL: https://issues.apache.org/jira/browse/HDDS-1705
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Recon
>Affects Versions: 0.4.0
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>  Labels: pull-request-available
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1705) Recon: Add estimatedTotalCount to the response of containers and containers/{id} endpoints

2019-07-03 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1705?focusedWorklogId=271933&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-271933
 ]

ASF GitHub Bot logged work on HDDS-1705:


Author: ASF GitHub Bot
Created on: 03/Jul/19 23:24
Start Date: 03/Jul/19 23:24
Worklog Time Spent: 10m 
  Work Description: vivekratnavel commented on issue #1055: HDDS-1705. 
Recon: Add estimatedTotalCount to the response of ...
URL: https://github.com/apache/hadoop/pull/1055#issuecomment-508286360
 
 
   /label ozone
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 271933)
Time Spent: 20m  (was: 10m)

> Recon: Add estimatedTotalCount to the response of containers and 
> containers/{id} endpoints
> --
>
> Key: HDDS-1705
> URL: https://issues.apache.org/jira/browse/HDDS-1705
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Recon
>Affects Versions: 0.4.0
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1705) Recon: Add estimatedTotalCount to the response of containers and containers/{id} endpoints

2019-07-03 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1705?focusedWorklogId=271932&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-271932
 ]

ASF GitHub Bot logged work on HDDS-1705:


Author: ASF GitHub Bot
Created on: 03/Jul/19 23:24
Start Date: 03/Jul/19 23:24
Worklog Time Spent: 10m 
  Work Description: vivekratnavel commented on pull request #1055: 
HDDS-1705. Recon: Add estimatedTotalCount to the response of ...
URL: https://github.com/apache/hadoop/pull/1055
 
 
   …containers and containers/{id} endpoints
   
   This PR adds the following features to Recon
   
   - Initialize Recon SQL Schemas while starting up Recon Server
   - Get TotalCount in containers and keys API Response
   - Move /api/containers/{containerId} --> /api/containers/{containerId}/keys
   
   This patch was tested manually by bringing up Ozone in a local dev 
environment and checking whether the sqlite instance gets updated with the total 
number of containers as expected. Also, the API responses of the two endpoints 
were tested in a browser with different limit and prevKey combinations.
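
   For illustration, a minimal sketch of a response wrapper that pairs a page of
containers with an estimated total. The class and field names here
(ContainersResponse, ContainerMetadata, estimatedTotalCount) are assumptions for
this sketch, not necessarily the names used in the patch:

{code:java}
import java.util.Collections;
import java.util.List;

// Hypothetical response wrapper: a page of container records plus an
// estimated total, so UIs can paginate without a full table scan.
public class ContainersResponse {

  // Placeholder for the per-container payload; the real Recon type differs.
  public static class ContainerMetadata {
    public final long containerId;
    public ContainerMetadata(long containerId) {
      this.containerId = containerId;
    }
  }

  private final long estimatedTotalCount; // e.g. maintained in a SQL count table
  private final List<ContainerMetadata> containers; // current page only

  public ContainersResponse(long estimatedTotalCount,
      List<ContainerMetadata> containers) {
    this.estimatedTotalCount = estimatedTotalCount;
    this.containers = Collections.unmodifiableList(containers);
  }

  public long getEstimatedTotalCount() {
    return estimatedTotalCount;
  }

  public List<ContainerMetadata> getContainers() {
    return containers;
  }
}
{code}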
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 271932)
Time Spent: 10m
Remaining Estimate: 0h

> Recon: Add estimatedTotalCount to the response of containers and 
> containers/{id} endpoints
> --
>
> Key: HDDS-1705
> URL: https://issues.apache.org/jira/browse/HDDS-1705
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Recon
>Affects Versions: 0.4.0
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1761) Fix class hierarchy for KeyRequest and FileRequest classes.

2019-07-03 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1761?focusedWorklogId=271922&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-271922
 ]

ASF GitHub Bot logged work on HDDS-1761:


Author: ASF GitHub Bot
Created on: 03/Jul/19 23:13
Start Date: 03/Jul/19 23:13
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1052: HDDS-1761. Fix 
class hierarchy for KeyRequest and FileRequest classes.
URL: https://github.com/apache/hadoop/pull/1052#issuecomment-508284464
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 31 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 517 | trunk passed |
   | +1 | compile | 262 | trunk passed |
   | +1 | checkstyle | 64 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 836 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 145 | trunk passed |
   | 0 | spotbugs | 353 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 554 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 461 | the patch passed |
   | +1 | compile | 275 | the patch passed |
   | +1 | javac | 275 | the patch passed |
   | +1 | checkstyle | 68 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 654 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 157 | the patch passed |
   | +1 | findbugs | 567 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 253 | hadoop-hdds in the patch passed. |
   | -1 | unit | 2232 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 51 | The patch does not generate ASF License warnings. |
   | | | 7286 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.om.TestOzoneManagerHA |
   |   | hadoop.ozone.scm.node.TestQueryNode |
   |   | hadoop.ozone.TestStorageContainerManager |
   |   | hadoop.hdds.scm.pipeline.TestSCMPipelineManager |
   |   | 
hadoop.ozone.container.common.statemachine.commandhandler.TestBlockDeletion |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1052/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1052 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux a601e36bba90 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 8965ddc |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1052/1/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1052/1/testReport/ |
   | Max. process+thread count | 4515 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/ozone-manager U: hadoop-ozone/ozone-manager |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1052/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 271922)
Time Spent: 20m  (was: 10m)

> Fix class hierarchy for KeyRequest and FileRequest classes.
> ---
>
> Key: HDDS-1761
> URL: https://issues.apache.org/jira/browse/HDDS-1761
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> 

[jira] [Commented] (HDFS-14483) Backport HDFS-14111,HDFS-3246 ByteBuffer pread interface to branch-2.9

2019-07-03 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14483?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16878213#comment-16878213
 ] 

Hadoop QA commented on HDFS-14483:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  8m 
36s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
|| || || || {color:brown} branch-2.9 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m  
1s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
 8s{color} | {color:green} branch-2.9 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 
52s{color} | {color:green} branch-2.9 passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
57s{color} | {color:green} branch-2.9 passed with JDK v1.8.0_212 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
34s{color} | {color:green} branch-2.9 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m  
5s{color} | {color:green} branch-2.9 passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-hdfs-project/hadoop-hdfs-native-client {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
37s{color} | {color:red} hadoop-common-project/hadoop-common in branch-2.9 has 
1 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m  
2s{color} | {color:green} branch-2.9 passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
30s{color} | {color:green} branch-2.9 passed with JDK v1.8.0_212 {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
18s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 
28s{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 12m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 12m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
47s{color} | {color:green} the patch passed with JDK v1.8.0_212 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 10m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-hdfs-project/hadoop-hdfs-native-client {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
10s{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
32s{color} | {color:green} the patch passed with JDK v1.8.0_212 {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m 14s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
30s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |

[jira] [Work logged] (HDDS-1712) Remove sudo access from Ozone docker image

2019-07-03 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1712?focusedWorklogId=271901&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-271901
 ]

ASF GitHub Bot logged work on HDDS-1712:


Author: ASF GitHub Bot
Created on: 03/Jul/19 22:28
Start Date: 03/Jul/19 22:28
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1053: HDDS-1712.  Test 
pull request acl.
URL: https://github.com/apache/hadoop/pull/1053#issuecomment-508275804
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 33 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | 0 | yamllint | 0 | yamllint was not available. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 527 | trunk passed |
   | +1 | compile | 248 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 1504 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 154 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 454 | the patch passed |
   | +1 | compile | 257 | the patch passed |
   | +1 | javac | 257 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 695 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 227 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 234 | hadoop-hdds in the patch failed. |
   | -1 | unit | 143 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 36 | The patch does not generate ASF License warnings. |
   | | | 3945 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdds.scm.container.placement.algorithms.TestSCMContainerPlacementRackAware
 |
   |   | hadoop.hdds.scm.block.TestBlockManager |
   |   | hadoop.ozone.om.ratis.TestOzoneManagerDoubleBufferWithOMResponse |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1053/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1053 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient yamllint |
   | uname | Linux 99296647a209 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 8965ddc |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1053/1/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1053/1/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1053/1/testReport/ |
   | Max. process+thread count | 489 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/dist U: hadoop-ozone/dist |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1053/1/console |
   | versions | git=2.7.4 maven=3.3.9 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 271901)
Time Spent: 20m  (was: 10m)

> Remove sudo access from Ozone docker image
> --
>
> Key: HDDS-1712
> URL: https://issues.apache.org/jira/browse/HDDS-1712
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1712.001.patch
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> The Ozone docker image gives unlimited sudo access to the hadoop user. This poses 
> a security risk: a host-level user with uid 1000 can attach a debugger to the 
> container process and obtain root access.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

[jira] [Commented] (HDFS-12748) NameNode memory leak when accessing webhdfs GETHOMEDIRECTORY

2019-07-03 Thread Hanisha Koneru (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-12748?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16878204#comment-16878204
 ] 

Hanisha Koneru commented on HDFS-12748:
---

Hi [~xkrogen], does patch v05 look good to you?

> NameNode memory leak when accessing webhdfs GETHOMEDIRECTORY
> 
>
> Key: HDFS-12748
> URL: https://issues.apache.org/jira/browse/HDFS-12748
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 2.8.2
>Reporter: Jiandan Yang 
>Assignee: Weiwei Yang
>Priority: Major
> Attachments: HDFS-12748.001.patch, HDFS-12748.002.patch, 
> HDFS-12748.003.patch, HDFS-12748.004.patch, HDFS-12748.005.patch
>
>
> In our production environment, the standby NN often does full GC; through MAT we 
> found that the largest object is FileSystem$Cache, which contains 7,844,890 
> DistributedFileSystem instances.
> By viewing the call hierarchy of FileSystem.get(), I found that only 
> NamenodeWebHdfsMethods#get calls FileSystem.get(). It is unclear why a new 
> DistributedFileSystem is created every time instead of getting a FileSystem from 
> the cache.
> {code:java}
> case GETHOMEDIRECTORY: {
>   final String js = JsonUtil.toJsonString("Path",
>   FileSystem.get(conf != null ? conf : new Configuration())
>   .getHomeDirectory().toUri().getPath());
>   return Response.ok(js).type(MediaType.APPLICATION_JSON).build();
> }
> {code}
> When we close the FileSystem in GETHOMEDIRECTORY, the NN no longer does full GC.
> {code:java}
> case GETHOMEDIRECTORY: {
>   FileSystem fs = null;
>   try {
> fs = FileSystem.get(conf != null ? conf : new Configuration());
> final String js = JsonUtil.toJsonString("Path",
> fs.getHomeDirectory().toUri().getPath());
> return Response.ok(js).type(MediaType.APPLICATION_JSON).build();
>   } finally {
> if (fs != null) {
>   fs.close();
> }
>   }
> }
> {code}
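
For what it is worth, the same fix reads a little tighter with try-with-resources,
since FileSystem implements Closeable and close() then runs on every exit path.
This is only a sketch of the equivalent shape, reusing the JsonUtil, Response, and
MediaType types from the snippet above:

{code:java}
case GETHOMEDIRECTORY: {
  // try-with-resources closes fs on every exit path, including exceptions
  try (FileSystem fs =
      FileSystem.get(conf != null ? conf : new Configuration())) {
    final String js = JsonUtil.toJsonString("Path",
        fs.getHomeDirectory().toUri().getPath());
    return Response.ok(js).type(MediaType.APPLICATION_JSON).build();
  }
}
{code}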



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1741) Fix prometheus configuration in ozoneperf example cluster

2019-07-03 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1741?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16878199#comment-16878199
 ] 

Hudson commented on HDDS-1741:
--

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #16860 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/16860/])
HDDS-1741. Fix prometheus configuration in ozoneperf example cluster (elek: rev 
acd2d524e85c226441aa3aeaa72696e9c7347b3b)
* (edit) hadoop-ozone/dist/src/main/compose/common/prometheus/prometheus.yml


> Fix prometheus configuration in ozoneperf example cluster
> -
>
> Key: HDDS-1741
> URL: https://issues.apache.org/jira/browse/HDDS-1741
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: docker
>Affects Versions: 0.4.0
>Reporter: Elek, Marton
>Assignee: Istvan Fajth
>Priority: Trivial
>  Labels: pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> HDDS-1216 renamed the ozoneManager components to om in the docker-compose 
> file, but the prometheus configuration of the compose/ozoneperf environment 
> was not updated.
> We need to update it to get meaningful metrics from om.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1550) MiniOzoneCluster is not shutting down all the threads during shutdown.

2019-07-03 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1550?focusedWorklogId=271894&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-271894
 ]

ASF GitHub Bot logged work on HDDS-1550:


Author: ASF GitHub Bot
Created on: 03/Jul/19 21:54
Start Date: 03/Jul/19 21:54
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #1050: 
HDDS-1550. MiniOzoneCluster is not shutting down all the threads during 
shutdown. Contributed by Mukul Kumar Singh.
URL: https://github.com/apache/hadoop/pull/1050#discussion_r300166984
 
 

 ##
 File path: 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/MiniOzoneChaosCluster.java
 ##
 @@ -203,7 +203,7 @@ void initializeConfiguration() throws IOException {
   1, TimeUnit.SECONDS);
   conf.setTimeDuration(HddsConfigKeys.HDDS_HEARTBEAT_INTERVAL, 1,
   TimeUnit.SECONDS);
-  conf.setInt(OzoneConfigKeys.OZONE_CONTAINER_CACHE_SIZE, 8);
+  conf.setInt(OzoneConfigKeys.OZONE_CONTAINER_CACHE_SIZE, 2);
 
 Review comment:
   One question: Any reason for changing this value?
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 271894)
Time Spent: 2h 40m  (was: 2.5h)

> MiniOzoneCluster is not shutting down all the threads during shutdown.
> --
>
> Key: HDDS-1550
> URL: https://issues.apache.org/jira/browse/HDDS-1550
> Project: Hadoop Distributed Data Store
>  Issue Type: Test
>  Components: test
>Affects Versions: 0.3.0
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h 40m
>  Remaining Estimate: 0h
>
> MiniOzoneCluster does not shut down all of its threads during shutdown. All 
> threads must be shut down to close the cluster correctly.
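
As a general illustration of the pattern involved (not the MiniOzoneCluster code
itself), an orderly shutdown typically asks each executor to stop and then waits
for termination before declaring the cluster closed. A minimal sketch:

{code:java}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.TimeUnit;

public final class ShutdownUtil {

  private ShutdownUtil() { }

  // Graceful stop first, forced stop second, so no stray threads
  // outlive the cluster shutdown.
  public static void shutdownAndWait(ExecutorService executor) {
    executor.shutdown(); // stop accepting new tasks
    try {
      if (!executor.awaitTermination(10, TimeUnit.SECONDS)) {
        executor.shutdownNow(); // interrupt tasks that did not finish
      }
    } catch (InterruptedException e) {
      executor.shutdownNow();
      Thread.currentThread().interrupt(); // preserve the interrupt status
    }
  }
}
{code}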



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1731) Implement File CreateFile Request to use Cache and DoubleBuffer

2019-07-03 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1731?focusedWorklogId=271892&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-271892
 ]

ASF GitHub Bot logged work on HDDS-1731:


Author: ASF GitHub Bot
Created on: 03/Jul/19 21:51
Start Date: 03/Jul/19 21:51
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1044: HDDS-1731. 
Implement File CreateFile Request to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/1044#issuecomment-508267354
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 98 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 2 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 27 | Maven dependency ordering for branch |
   | +1 | mvninstall | 594 | trunk passed |
   | +1 | compile | 277 | trunk passed |
   | +1 | checkstyle | 77 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 935 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 169 | trunk passed |
   | 0 | spotbugs | 338 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 555 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 27 | Maven dependency ordering for patch |
   | +1 | mvninstall | 528 | the patch passed |
   | +1 | compile | 283 | the patch passed |
   | +1 | cc | 283 | the patch passed |
   | +1 | javac | 283 | the patch passed |
   | +1 | checkstyle | 84 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 730 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 170 | the patch passed |
   | +1 | findbugs | 552 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 393 | hadoop-hdds in the patch passed. |
   | -1 | unit | 2734 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 49 | The patch does not generate ASF License warnings. |
   | | | 8449 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.client.rpc.TestBlockOutputStream |
   |   | hadoop.ozone.client.rpc.TestBCSID |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.hdds.scm.pipeline.TestRatisPipelineProvider |
   |   | hadoop.ozone.scm.node.TestSCMNodeMetrics |
   |   | hadoop.ozone.TestContainerOperations |
   |   | hadoop.ozone.scm.TestAllocateContainer |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.ozone.TestMiniOzoneCluster |
   |   | hadoop.ozone.ozShell.TestOzoneShell |
   |   | hadoop.ozone.web.client.TestKeysRatis |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1044/10/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1044 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle cc |
   | uname | Linux 5a1dd809da58 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 34747c3 |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1044/10/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1044/10/testReport/ |
   | Max. process+thread count | 3650 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/common hadoop-ozone/ozone-manager 
hadoop-ozone/integration-test U: hadoop-ozone |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1044/10/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 271892)
Time Spent: 6h  (was: 5h 50m)

> Implement File CreateFile Request to use Cache and DoubleBuffer
> 

[jira] [Commented] (HDFS-14629) Property value Hard Coded in DNConf.java

2019-07-03 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14629?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16878179#comment-16878179
 ] 

Hadoop QA commented on HDFS-14629:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  6s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
47s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 22s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 85m 19s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
35s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}139m 43s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.datanode.TestDirectoryScanner |
|   | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:bdbca0e |
| JIRA Issue | HDFS-14629 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12973601/HDFS-14629.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 2d2d88e06529 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 34747c3 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_212 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27141/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27141/testReport/ |
| Max. process+thread count | 3665 (vs. ulimit of 1) |
| 

[jira] [Work logged] (HDDS-1741) Fix prometheus configuration in ozoneperf example cluster

2019-07-03 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1741?focusedWorklogId=271885&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-271885
 ]

ASF GitHub Bot logged work on HDDS-1741:


Author: ASF GitHub Bot
Created on: 03/Jul/19 21:38
Start Date: 03/Jul/19 21:38
Worklog Time Spent: 10m 
  Work Description: elek commented on pull request #1045: HDDS-1741 Fix 
prometheus configuration in ozoneperf example cluster
URL: https://github.com/apache/hadoop/pull/1045
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 271885)
Time Spent: 1h  (was: 50m)

> Fix prometheus configuration in ozoneperf example cluster
> -
>
> Key: HDDS-1741
> URL: https://issues.apache.org/jira/browse/HDDS-1741
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: docker
>Affects Versions: 0.4.0
>Reporter: Elek, Marton
>Assignee: Istvan Fajth
>Priority: Trivial
>  Labels: pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> HDDS-1216 renamed the ozoneManager components to om in the docker-compose 
> file, but the prometheus configuration of the compose/ozoneperf environment 
> was not updated.
> We need to update it to get meaningful metrics from om.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1741) Fix prometheus configuration in ozoneperf example cluster

2019-07-03 Thread Elek, Marton (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1741?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-1741:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Fix prometheus configuration in ozoneperf example cluster
> -
>
> Key: HDDS-1741
> URL: https://issues.apache.org/jira/browse/HDDS-1741
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: docker
>Affects Versions: 0.4.0
>Reporter: Elek, Marton
>Assignee: Istvan Fajth
>Priority: Trivial
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> HDDS-1216 renamed the ozoneManager components to om in the docker-compose 
> file, but the prometheus configuration of the compose/ozoneperf environment 
> was not updated.
> We need to update it to get meaningful metrics from om.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1712) Remove sudo access from Ozone docker image

2019-07-03 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1712?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-1712:
-
Labels: pull-request-available  (was: )

> Remove sudo access from Ozone docker image
> --
>
> Key: HDDS-1712
> URL: https://issues.apache.org/jira/browse/HDDS-1712
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1712.001.patch
>
>
> The Ozone docker image gives unlimited sudo access to the hadoop user. This poses 
> a security risk: a host-level user with uid 1000 can attach a debugger to the 
> container process and obtain root access.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1712) Remove sudo access from Ozone docker image

2019-07-03 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1712?focusedWorklogId=271884&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-271884
 ]

ASF GitHub Bot logged work on HDDS-1712:


Author: ASF GitHub Bot
Created on: 03/Jul/19 21:35
Start Date: 03/Jul/19 21:35
Worklog Time Spent: 10m 
  Work Description: eyanghwx commented on issue #1053: HDDS-1712.  Test 
pull request acl.
URL: https://github.com/apache/hadoop/pull/1053#issuecomment-508263214
 
 
   /label ozone
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 271884)
Time Spent: 10m
Remaining Estimate: 0h

> Remove sudo access from Ozone docker image
> --
>
> Key: HDDS-1712
> URL: https://issues.apache.org/jira/browse/HDDS-1712
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1712.001.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The Ozone docker image gives unlimited sudo access to the hadoop user. This poses 
> a security risk: a host-level user with uid 1000 can attach a debugger to the 
> container process and obtain root access.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1731) Implement File CreateFile Request to use Cache and DoubleBuffer

2019-07-03 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1731?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16878160#comment-16878160
 ] 

Hudson commented on HDDS-1731:
--

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #16859 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/16859/])
HDDS-1731. Implement File CreateFile Request to use Cache and (github: rev 
8965ddcf59eeebe1b26c7581607b86477bfb276a)
* (edit) hadoop-ozone/common/src/main/proto/OzoneManagerProtocol.proto
* (add) 
hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/request/file/TestOMFileCreateRequest.java
* (add) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/file/OMFileCreateRequest.java
* (add) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/response/file/OMFileCreateResponse.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/OMKeyCreateRequest.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ratis/utils/OzoneManagerRatisUtils.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/OMClientRequest.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/protocolPB/OzoneManagerHARequestHandlerImpl.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/om/TestOzoneManagerHA.java
* (add) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/file/OMFileRequest.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/file/OMDirectoryCreateRequest.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/response/key/OMKeyCreateResponse.java


> Implement File CreateFile Request to use Cache and DoubleBuffer
> ---
>
> Key: HDDS-1731
> URL: https://issues.apache.org/jira/browse/HDDS-1731
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 5h 50m
>  Remaining Estimate: 0h
>
> In this Jira, we shall implement the createFile request according to the HA 
> model, and use the cache and double buffer.
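
For readers unfamiliar with the double-buffer idea referenced here, a minimal
sketch of the core mechanism follows. The real OzoneManagerDoubleBuffer adds
batching to RocksDB and flush notifications, so this shows only the essence,
not the actual implementation:

{code:java}
import java.util.ArrayList;
import java.util.List;

// Writers append to the current buffer while the flusher drains a
// swapped-out snapshot, so flushing never blocks incoming writes.
public class DoubleBuffer<T> {

  private List<T> current = new ArrayList<>();

  public synchronized void add(T entry) {
    current.add(entry);
  }

  // Called by the flusher thread: atomically take the filled buffer
  // and hand writers a fresh one.
  public synchronized List<T> swap() {
    List<T> toFlush = current;
    current = new ArrayList<>();
    return toFlush;
  }
}
{code}

The flusher thread loops over swap() and persists each drained batch, so writers
only contend on the brief synchronized append.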



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1761) Fix class hierarchy for KeyRequest and FileRequest classes.

2019-07-03 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1761?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-1761:
-
Fix Version/s: 0.5.0
   Status: Patch Available  (was: In Progress)

> Fix class hierarchy for KeyRequest and FileRequest classes.
> ---
>
> Key: HDDS-1761
> URL: https://issues.apache.org/jira/browse/HDDS-1761
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The patch looks mostly fine to me. A few minor comments. -and one type error 
> that needs to be fixed.-
> I would like to see the class hierarchy refactored in a follow-up patch. 
> {{OMFileCreateRequest}} should not extend {{OMKeyCreateRequest}}. Instead they 
> should both extend an abstract class that encapsulates the common 
> functionality.
> Generally, deriving from _concrete_ classes is a bad idea.
>  
> This Jira was created based on [~arp]'s comment during the review of HDDS-1731.
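
A minimal sketch of the suggested shape. The abstract class name used here
(OMKeyFileRequestBase) and its method are assumptions for illustration, not
names from the eventual patch:

{code:java}
// Common behavior lives in an abstract base class...
abstract class OMKeyFileRequestBase {
  // shared validation and key-path handling would go here
  abstract void validateAndUpdateCache();
}

// ...and both concrete requests extend the base, instead of
// OMFileCreateRequest deriving from the concrete OMKeyCreateRequest.
class OMKeyCreateRequest extends OMKeyFileRequestBase {
  @Override
  void validateAndUpdateCache() { /* key-specific logic */ }
}

class OMFileCreateRequest extends OMKeyFileRequestBase {
  @Override
  void validateAndUpdateCache() { /* file-specific logic */ }
}
{code}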



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1603) Handle Ratis Append Failure in Container State Machine

2019-07-03 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1603?focusedWorklogId=271877&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-271877
 ]

ASF GitHub Bot logged work on HDDS-1603:


Author: ASF GitHub Bot
Created on: 03/Jul/19 20:53
Start Date: 03/Jul/19 20:53
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1019: HDDS-1603. 
Handle Ratis Append Failure in Container State Machine. Contributed by Supratim 
Deka
URL: https://github.com/apache/hadoop/pull/1019#issuecomment-508250959
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 37 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 78 | Maven dependency ordering for branch |
   | +1 | mvninstall | 505 | trunk passed |
   | +1 | compile | 252 | trunk passed |
   | +1 | checkstyle | 63 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 833 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 155 | trunk passed |
   | 0 | spotbugs | 320 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 511 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 30 | Maven dependency ordering for patch |
   | +1 | mvninstall | 435 | the patch passed |
   | +1 | compile | 270 | the patch passed |
   | +1 | cc | 270 | the patch passed |
   | +1 | javac | 270 | the patch passed |
   | +1 | checkstyle | 62 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 607 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 140 | the patch passed |
   | +1 | findbugs | 500 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 233 | hadoop-hdds in the patch passed. |
   | -1 | unit | 1398 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 39 | The patch does not generate ASF License warnings. |
   | | | 6303 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.client.rpc.TestWatchForCommit |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestReadRetries |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.hdds.scm.pipeline.TestRatisPipelineCreateAndDestory |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.om.TestScmSafeMode |
   |   | hadoop.ozone.client.rpc.TestContainerStateMachine |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1019/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1019 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle cc |
   | uname | Linux eead6f9ba932 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 34747c3 |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1019/3/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1019/3/testReport/ |
   | Max. process+thread count | 5367 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/container-service hadoop-ozone/integration-test 
U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1019/3/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 271877)
Time Spent: 40m  (was: 0.5h)

> Handle Ratis Append Failure in Container State Machine
> --
>
> Key: HDDS-1603
> URL: 

[jira] [Work logged] (HDDS-1761) Fix class hierarchy for KeyRequest and FileRequest classes.

2019-07-03 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1761?focusedWorklogId=271878&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-271878
 ]

ASF GitHub Bot logged work on HDDS-1761:


Author: ASF GitHub Bot
Created on: 03/Jul/19 20:53
Start Date: 03/Jul/19 20:53
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #1052: 
HDDS-1761. Fix class hierarchy for KeyRequest and FileRequest classes.
URL: https://github.com/apache/hadoop/pull/1052
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 271878)
Time Spent: 10m
Remaining Estimate: 0h

> Fix class hierarchy for KeyRequest and FileRequest classes.
> ---
>
> Key: HDDS-1761
> URL: https://issues.apache.org/jira/browse/HDDS-1761
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The patch looks mostly fine to me. A few minor comments. -and one type error 
> that needs to be fixed.-
> I would like to see the class hierarchy refactored in a follow-up patch. 
> {{OMFileCreateRequest}} should not extend {{OMKeyCreateRequest}}. Instead they 
> should both extend an abstract class that encapsulates the common 
> functionality.
> Generally, deriving from _concrete_ classes is a bad idea.
>  
> This Jira was created based on [~arp]'s comment during the review of HDDS-1731.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1761) Fix class hierarchy for KeyRequest and FileRequest classes.

2019-07-03 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1761?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-1761:
-
Labels: pull-request-available  (was: )

> Fix class hierarchy for KeyRequest and FileRequest classes.
> ---
>
> Key: HDDS-1761
> URL: https://issues.apache.org/jira/browse/HDDS-1761
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>
> The patch looks mostly fine to me. A few minor comments. -and one type error 
> that needs to be fixed.-
> I would like to see the class hierarchy refactored in a follow-up patch. 
> {{OMFileCreateRequest}} should not extend {{OMKeyCreateRequest}}. Instead they 
> should both extend an abstract class that encapsulates the common 
> functionality.
> Generally, deriving from _concrete_ classes is a bad idea.
>  
> This Jira was created based on [~arp]'s comment during the review of HDDS-1731.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1554) Create disk tests for fault injection test

2019-07-03 Thread Eric Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1554?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated HDDS-1554:

Attachment: HDDS-1554.011.patch

> Create disk tests for fault injection test
> --
>
> Key: HDDS-1554
> URL: https://issues.apache.org/jira/browse/HDDS-1554
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: build
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1554.001.patch, HDDS-1554.002.patch, 
> HDDS-1554.003.patch, HDDS-1554.004.patch, HDDS-1554.005.patch, 
> HDDS-1554.006.patch, HDDS-1554.007.patch, HDDS-1554.008.patch, 
> HDDS-1554.009.patch, HDDS-1554.010.patch, HDDS-1554.011.patch
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> The current plan for the fault injection disk tests is as follows (a sketch of the scenario 2 verification step appears after the list):
>  # Scenario 1 - Read/Write test
>  ## Run docker-compose to bring up a cluster
>  ## Initialize scm and om
>  ## Upload data to Ozone cluster
>  ## Verify data is correct
>  ## Shutdown cluster
>  # Scenario 2 - Read-only test
>  ## Repeat Scenario 1
>  ## Mount data disk as read only
>  ## Try to write data to Ozone cluster
>  ## Validate error message is correct
>  ## Shutdown cluster
>  # Scenario 3 - Corruption test
>  ## Repeat Scenario 2
>  ## Shutdown cluster
>  ## Modify data disk data
>  ## Restart cluster
>  ## Validate error message for read from corrupted data
>  ## Validate error message for write to corrupted volume
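
As a rough JUnit-style illustration of the scenario 2 verification step: the
writeKey helper and its client plumbing are hypothetical placeholders, not the
actual fault-injection test code, and the stub must be filled in with real client
calls before the test is meaningful.

{code:java}
import static org.junit.Assert.fail;

import java.io.IOException;
import org.junit.Test;

public class ReadOnlyDiskIT {

  // Hypothetical helper that writes one key through an Ozone client.
  // Stub only: replace the elided body with real client calls.
  private void writeKey(String volume, String bucket, String key,
      byte[] data) throws IOException {
    // ... client calls elided ...
  }

  @Test
  public void writeFailsWhenDataDiskIsReadOnly() {
    try {
      writeKey("vol1", "bucket1", "key1", "payload".getBytes());
      fail("Write should fail when the data disk is mounted read-only");
    } catch (IOException expected) {
      // Step 4 of scenario 2: validate the error message here.
    }
  }
}
{code}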



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1741) Fix prometheus configuration in ozoneperf example cluster

2019-07-03 Thread Elek, Marton (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1741?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-1741:
---
Status: Patch Available  (was: Open)

> Fix prometheus configuration in ozoneperf example cluster
> -
>
> Key: HDDS-1741
> URL: https://issues.apache.org/jira/browse/HDDS-1741
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: docker
>Affects Versions: 0.4.0
>Reporter: Elek, Marton
>Assignee: Istvan Fajth
>Priority: Trivial
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> HDDS-1216 renamed the ozoneManager components to om in the docker-compose 
> file, but the prometheus configuration of the compose/ozoneperf environment 
> was not updated.
> We need to update it to get meaningful metrics from om.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1731) Implement File CreateFile Request to use Cache and DoubleBuffer

2019-07-03 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1731?focusedWorklogId=271866&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-271866
 ]

ASF GitHub Bot logged work on HDDS-1731:


Author: ASF GitHub Bot
Created on: 03/Jul/19 20:15
Start Date: 03/Jul/19 20:15
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #1044: 
HDDS-1731. Implement File CreateFile Request to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/1044
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 271866)
Time Spent: 5h 50m  (was: 5h 40m)

> Implement File CreateFile Request to use Cache and DoubleBuffer
> ---
>
> Key: HDDS-1731
> URL: https://issues.apache.org/jira/browse/HDDS-1731
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 5h 50m
>  Remaining Estimate: 0h
>
> In this Jira, we shall implement createFile request according to the HA 
> model, and use cache and double buffer.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1731) Implement File CreateFile Request to use Cache and DoubleBuffer

2019-07-03 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1731?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-1731:
-
   Resolution: Fixed
Fix Version/s: 0.5.0
   Status: Resolved  (was: Patch Available)

> Implement File CreateFile Request to use Cache and DoubleBuffer
> ---
>
> Key: HDDS-1731
> URL: https://issues.apache.org/jira/browse/HDDS-1731
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 5h 50m
>  Remaining Estimate: 0h
>
> In this Jira, we shall implement createFile request according to the HA 
> model, and use cache and double buffer.
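
Since several of these HA requests share the same cache-plus-double-buffer pattern described above, here is a self-contained toy sketch of the idea; the class and method names are illustrative, not the OM's actual API. Writes land in an in-memory cache immediately so later requests see them, while a background flusher swaps buffers and writes batches to the backing store:

{code}
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Toy model of the pattern: apply() is the per-request path, flushOnce()
// would run repeatedly on a background thread.
public class DoubleBufferSketch {
  private final Map<String, String> cache = new ConcurrentHashMap<>();
  private List<String[]> currentBuffer = new ArrayList<>();
  private final Object swapLock = new Object();

  /** Per-request path: update the cache now, defer the durable write. */
  public void apply(String key, String value) {
    cache.put(key, value); // later requests read this entry immediately
    synchronized (swapLock) {
      currentBuffer.add(new String[] {key, value});
    }
  }

  /** Background flusher: swap buffers, then persist the drained batch. */
  public void flushOnce(Map<String, String> backingStore) {
    List<String[]> toFlush;
    synchronized (swapLock) {
      toFlush = currentBuffer;
      currentBuffer = new ArrayList<>(); // new requests fill the fresh buffer
    }
    for (String[] kv : toFlush) {
      backingStore.put(kv[0], kv[1]); // a real store would use one batched write
    }
  }
}
{code}

In the OM itself, the in-memory side is the table cache and the durable side is RocksDB, but the buffering shape sketched above is what "cache and double buffer" refers to.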



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1731) Implement File CreateFile Request to use Cache and DoubleBuffer

2019-07-03 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1731?focusedWorklogId=271865=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-271865
 ]

ASF GitHub Bot logged work on HDDS-1731:


Author: ASF GitHub Bot
Created on: 03/Jul/19 20:15
Start Date: 03/Jul/19 20:15
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on issue #1044: HDDS-1731. 
Implement File CreateFile Request to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/1044#issuecomment-508239467
 
 
   Test failures are not related to this patch.
   I will commit this to the trunk.
   Thank You @arp7 for the review.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 271865)
Time Spent: 5h 40m  (was: 5.5h)

> Implement File CreateFile Request to use Cache and DoubleBuffer
> ---
>
> Key: HDDS-1731
> URL: https://issues.apache.org/jira/browse/HDDS-1731
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 5h 40m
>  Remaining Estimate: 0h
>
> In this Jira, we shall implement createFile request according to the HA 
> model, and use cache and double buffer.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14621) Distcp can not preserve timestamp with -delete option

2019-07-03 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14621?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16878133#comment-16878133
 ] 

Hadoop QA commented on HDFS-14621:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
38s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m  1s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m  2s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 13m  
1s{color} | {color:green} hadoop-distcp in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 66m 16s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=18.09.5 Server=18.09.5 Image:yetus/hadoop:bdbca0e53b4 |
| JIRA Issue | HDFS-14621 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12973584/HDFS-14621.003.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 5be2e4ffe93d 4.15.0-52-generic #56-Ubuntu SMP Tue Jun 4 
22:49:08 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 34747c3 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_212 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27138/testReport/ |
| Max. process+thread count | 309 (vs. ulimit of 5500) |
| modules | C: hadoop-tools/hadoop-distcp U: hadoop-tools/hadoop-distcp |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27138/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> Distcp can not preserve timestamp with -delete  option
> --
>
> Key: HDFS-14621
> URL: 

[jira] [Commented] (HDFS-14625) Make DefaultAuditLogger class in FSnamesystem to Abstract

2019-07-03 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-14625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16878124#comment-16878124
 ] 

Íñigo Goiri commented on HDFS-14625:


It would be nice to extend the description of the DefaultAuditLogger.
Not sure what the issue with Yetus was either.
It looks like it didn't do anything:
https://builds.apache.org/job/PreCommit-HDFS-Build/27140/console

> Make DefaultAuditLogger class in FSnamesystem to Abstract 
> --
>
> Key: HDFS-14625
> URL: https://issues.apache.org/jira/browse/HDFS-14625
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: hemanthboyina
>Assignee: hemanthboyina
>Priority: Major
> Attachments: HDFS-14625.patch
>
>
> As per +HDFS-13270+ (audit logger for the Router), we can make the 
> DefaultAuditLogger in FSNamesystem abstract and common (see the sketch below).
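
For illustration, a hedged sketch of what the shared abstraction could look like; the class and method names here are hypothetical, not the actual FSNamesystem inner class or Router code:

{code}
// Hypothetical shared parent for NameNode and Router audit loggers.
abstract class AbstractAuditLogger {

  /** Common audit-line formatting both services can reuse. */
  protected String formatAuditLine(String user, String cmd, String src) {
    return "ugi=" + user + "\tcmd=" + cmd + "\tsrc=" + src;
  }

  /** Each service decides where the audit line is emitted. */
  abstract void logAuditEvent(boolean succeeded, String user, String cmd,
      String src);
}

// Router-side subclass reusing the common formatting.
class RouterAuditLogger extends AbstractAuditLogger {
  @Override
  void logAuditEvent(boolean succeeded, String user, String cmd, String src) {
    System.out.println("[router audit] allowed=" + succeeded + " "
        + formatAuditLine(user, cmd, src));
  }
}
{code}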



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14625) Make DefaultAuditLogger class in FSnamesystem to Abstract

2019-07-03 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16878103#comment-16878103
 ] 

Hadoop QA commented on HDFS-14625:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  6s{color} 
| {color:red} HDFS-14625 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDFS-14625 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12973602/HDFS-14625.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27140/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> Make DefaultAuditLogger class in FSnamesystem to Abstract 
> --
>
> Key: HDFS-14625
> URL: https://issues.apache.org/jira/browse/HDFS-14625
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: hemanthboyina
>Assignee: hemanthboyina
>Priority: Major
> Attachments: HDFS-14625.patch
>
>
> As per +HDFS-13270+ (audit logger for the Router), we can make the 
> DefaultAuditLogger in FSNamesystem abstract and common



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1761) Fix class hierarchy for KeyRequest and FileRequest classes.

2019-07-03 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1761?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-1761:
-
Summary: Fix class hierarchy for KeyRequest and FileRequest classes.  (was: 
Fix class hierarchy for OMKeyCreateRequest and OMFileCreateRequest)

> Fix class hierarchy for KeyRequest and FileRequest classes.
> ---
>
> Key: HDDS-1761
> URL: https://issues.apache.org/jira/browse/HDDS-1761
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>
> The patch looks mostly fine to me. A few minor comments. -and one type error 
> that needs to be fixed.-
> I would like to see the class hierarchy refactored in a follow-up patch. 
> {{OMFileCreateRequest}} should not extend {{OMKeyCreateRequest}}. Instead they 
> should both extend an abstract class that encapsulates the common 
> functionality.
> Generally deriving from _concrete_ classes is a bad idea.
>  
> This Jira is created based on [~arp]'s comment during the review of HDDS-1731



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-1761) Fix class hierarchy for OMKeyCreateRequest and OMFileCreateRequest

2019-07-03 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1761?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham reassigned HDDS-1761:


Assignee: Bharat Viswanadham

> Fix class hierarchy for OMKeyCreateRequest and OMFileCreateRequest
> --
>
> Key: HDDS-1761
> URL: https://issues.apache.org/jira/browse/HDDS-1761
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>
> The patch looks mostly fine to me. A few minor comments. -and one type error 
> that needs to be fixed.-
> I would like to see the class hierarchy refactored in a follow-up patch. 
> {{OMFileCreateRequest}} should not extend {{OMKeyCreateRequest}}. Instead they 
> should both extend an abstract class that encapsulates the common 
> functionality.
> Generally deriving from _concrete_ classes is a bad idea.
>  
> This Jira is created based on [~arp]'s comment during the review of HDDS-1731



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work started] (HDDS-1761) Fix class hierarchy for OMKeyCreateRequest and OMFileCreateRequest

2019-07-03 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1761?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDDS-1761 started by Bharat Viswanadham.

> Fix class hierarchy for OMKeyCreateRequest and OMFileCreateRequest
> --
>
> Key: HDDS-1761
> URL: https://issues.apache.org/jira/browse/HDDS-1761
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>
> The patch looks mostly fine to me. A few minor comments. -and one type error 
> that needs to be fixed.-
> I would like to see the class hierarchy refactored in a follow-up patch. 
> {{OMFileCreateRequest}} should not extend {{OMKeyCreateRequest}}. Instead they 
> should both extend an abstract class that encapsulates the common 
> functionality.
> Generally deriving from _concrete_ classes is a bad idea.
>  
> This Jira is created based on [~arp]'s comment during the review of HDDS-1731



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1760) Avoid using DFS_CONTAINER_RATIS_ENABLED_KEY to find replication factor and type

2019-07-03 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1760?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-1760:
-
Description: 
In OzoneManager, when handling createFile or createKey and the user has not 
passed a replication type and factor, we use dfs.container.ratis.enabled to set 
the default replication type and factor.

 

If this property (dfs.container.ratis.enabled) is set to false, we use 
replication type standalone and replication factor 1. If it is true, we use 
replication type RATIS and factor 3.

 

This is no longer correct, as with the RATIS type we can now have both 
replication factor 1 and 3.

 

*The proposal is to change as below.*

1. Eliminate the above property.

2. If the replication type and replication factor are not set, default to 
RATIS and one.

 

The second alternative is to make these parameters mandatory, but that changes 
the current behavior and breaks compatibility with older releases.

 

This Jira is created based on [~arpaga]'s comment during the review of 
HDDS-1731.

Any other suggestions to solve this issue are welcome.

  was:
In OzoneManager, when trying to createFile, createKey when the user has not 
passed replication type and factor, we use dfs.container.ratis.enabled to set 
default replication type and factor.

 

If this property (dfs.container.ratis.enabled) is set to false, we use 
replication type as standalone and replication factor as 1. If it is true, use 
replication type as RATIS and factor as 3.

 

I think now this is not correct, as now with RATIS type we can have both 
replication factor 1 and 3. 

 

*The proposal is to change as below.*

1. Eliminate the above property.

2. If the replication type and replication factor are not set the default to 
RATIS and one. 

 

The second alternative is to make these parameters mandatory. But that changes 
the current behavior. And this breaks compatibility between old release.

 

Or any other suggestions to solve this issue.


> Avoid using DFS_CONTAINER_RATIS_ENABLED_KEY to find replication factor and 
> type
> ---
>
> Key: HDDS-1760
> URL: https://issues.apache.org/jira/browse/HDDS-1760
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>
> In OzoneManager, when handling createFile or createKey and the user has not 
> passed a replication type and factor, we use dfs.container.ratis.enabled to 
> set the default replication type and factor.
>  
> If this property (dfs.container.ratis.enabled) is set to false, we use 
> replication type standalone and replication factor 1. If it is true, we use 
> replication type RATIS and factor 3.
>  
> This is no longer correct, as with the RATIS type we can now have both 
> replication factor 1 and 3.
>  
> *The proposal is to change as below.*
> 1. Eliminate the above property.
> 2. If the replication type and replication factor are not set, default to 
> RATIS and one (see the sketch after this description).
>  
> The second alternative is to make these parameters mandatory, but that 
> changes the current behavior and breaks compatibility with older releases.
>  
> This Jira is created based on [~arpaga]'s comment during the review of 
> HDDS-1731.
> Any other suggestions to solve this issue are welcome.
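
As a concrete illustration of option 2, here is a minimal sketch of the proposed defaulting, assuming the {{HddsProtos}} replication enums; the helper class and method names are illustrative, not existing OM code:

{code}
import org.apache.hadoop.hdds.protocol.proto.HddsProtos.ReplicationFactor;
import org.apache.hadoop.hdds.protocol.proto.HddsProtos.ReplicationType;

// Illustrative helper: fall back to RATIS / ONE when the client did not
// pass a replication type or factor, without consulting
// dfs.container.ratis.enabled at all.
final class ReplicationDefaults {
  private ReplicationDefaults() {
  }

  static ReplicationType typeOrDefault(ReplicationType requested) {
    return requested != null ? requested : ReplicationType.RATIS;
  }

  static ReplicationFactor factorOrDefault(ReplicationFactor requested) {
    return requested != null ? requested : ReplicationFactor.ONE;
  }
}
{code}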



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-1761) Fix class hierarchy for OMKeyCreateRequest and OMFileCreateRequest

2019-07-03 Thread Bharat Viswanadham (JIRA)
Bharat Viswanadham created HDDS-1761:


 Summary: Fix class hierarchy for OMKeyCreateRequest and 
OMFileCreateRequest
 Key: HDDS-1761
 URL: https://issues.apache.org/jira/browse/HDDS-1761
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Bharat Viswanadham


The patch looks mostly fine to me. A few minor comments. -and one type error 
that needs to be fixed.-

I would like to see the class hierarchy refactored in a follow-up patch. 
{{OMFileCreateRequest}} should not extend {{OMKeyCreateRequest}}. Instead they 
should both extend an abstract class that encapsulates the common functionality 
(see the sketch below).

Generally deriving from _concrete_ classes is a bad idea.

 

This Jira is created based on [~arp]'s comment during the review of HDDS-1731
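
For illustration, a hedged sketch of the proposed shape; {{OMKeyCreateRequestBase}} is a hypothetical name for the shared abstract parent, and the bodies are placeholders rather than the actual OM logic:

{code}
// Hypothetical abstract parent holding the logic common to both requests.
abstract class OMKeyCreateRequestBase {

  /** Validation shared by key creation and file creation. */
  protected void validateCommonArgs(String volume, String bucket, String key) {
    if (volume == null || bucket == null || key == null) {
      throw new IllegalArgumentException("volume/bucket/key must be set");
    }
  }
}

// Key creation: only key-specific behavior lives here.
class OMKeyCreateRequestSketch extends OMKeyCreateRequestBase {
}

// File creation no longer inherits from the concrete key-create class;
// file-specific checks (e.g. parent directory handling) live here.
class OMFileCreateRequestSketch extends OMKeyCreateRequestBase {
}
{code}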



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDDS-1661) Consolidate hadoop-hdds and hadoop-ozone into hadoop-ozone-project

2019-07-03 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16878045#comment-16878045
 ] 

Eric Yang edited comment on HDDS-1661 at 7/3/19 6:11 PM:
-

[~anu] When doing a directory moving, the better strategy is to provide a shell 
script that runs git mv command to avoid posting large patch and provide better 
history for the linear-age of the code.  Besides the git mv script, there are 
some shell integration scripts that should be modified because the integration 
script are semi-hard coded relative path to reference hadoop-hdds project.
If we can avoid generating shell scripts as wrapper on top of maven, it will 
save us a lot of work.  This is the reason that I voiced by concern in 
HDDS-1735 about the shell script wrappers.  If the project continue in current 
direction, it will be hard to move later on.  I hope you would reconsider.


was (Author: eyang):
[~anu] When doing a directory moving, the better strategy is to provide a shell 
script that runs git mv command to avoid posting large patch and provide better 
history for the linear-age of the code.  Besides the git hv script, there are 
some shell integration scripts that should be modified because the integration 
script are semi-hard coded to reference hadoop-hdds project.If we can avoid 
generating shell scripts as wrapper on top of maven, it will save us a lot of 
work.  This is the reason that I voiced by concern in HDDS-1735 about the shell 
script wrappers.

> Consolidate hadoop-hdds and hadoop-ozone into hadoop-ozone-project
> --
>
> Key: HDDS-1661
> URL: https://issues.apache.org/jira/browse/HDDS-1661
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Eric Yang
>Priority: Major
>
> Ozone source code is somewhat fragmented within the Hadoop source tree.  The 
> current code looks like:
> {code}
> hadoop/pom.ozone.xml
> ├── hadoop-hdds
> └── hadoop-ozone
> {code}
> It is helpful to consolidate the project into a high-level grouping such as:
> {code}
> hadoop
> └── hadoop-ozone-project/pom.xml
> └── hadoop-ozone-project/hadoop-hdds
> └── hadoop-ozone-project/hadoop-ozone
> {code}
> This allows users to build Ozone from the hadoop-ozone-project directory.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDDS-1661) Consolidate hadoop-hdds and hadoop-ozone into hadoop-ozone-project

2019-07-03 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16878045#comment-16878045
 ] 

Eric Yang edited comment on HDDS-1661 at 7/3/19 6:11 PM:
-

[~anu] When doing a directory move, the better strategy is to provide a shell 
script that runs git mv commands, to avoid posting a large patch and to provide 
better history for the lineage of the code.  Besides the git mv script, there 
are some shell integration scripts that need to be modified, because they 
semi-hard-code relative paths that reference the hadoop-hdds project.
If we can avoid generating shell scripts as wrappers on top of maven, it will 
save us a lot of work.  This is the reason that I voiced my concern in 
HDDS-1735 about the shell script wrappers.  If the project continues in its 
current direction, it will be hard to move later on.  I hope you would 
reconsider.


was (Author: eyang):
[~anu] When doing a directory moving, the better strategy is to provide a shell 
script that runs git mv command to avoid posting large patch and provide better 
history for the linear-age of the code.  Besides the git mv script, there are 
some shell integration scripts that should be modified because the integration 
script are semi-hard coded relative path to reference hadoop-hdds project.
If we can avoid generating shell scripts as wrapper on top of maven, it will 
save us a lot of work.  This is the reason that I voiced by concern in 
HDDS-1735 about the shell script wrappers.  If the project continue in current 
direction, it will be hard to move later on.  I hope you would reconsider.

> Consolidate hadoop-hdds and hadoop-ozone into hadoop-ozone-project
> --
>
> Key: HDDS-1661
> URL: https://issues.apache.org/jira/browse/HDDS-1661
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Eric Yang
>Priority: Major
>
> Ozone source code is somewhat fragmented within the Hadoop source tree.  The 
> current code looks like:
> {code}
> hadoop/pom.ozone.xml
> ├── hadoop-hdds
> └── hadoop-ozone
> {code}
> It is helpful to consolidate the project into a high-level grouping such as:
> {code}
> hadoop
> └── hadoop-ozone-project/pom.xml
> └── hadoop-ozone-project/hadoop-hdds
> └── hadoop-ozone-project/hadoop-ozone
> {code}
> This allows users to build Ozone from the hadoop-ozone-project directory.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1661) Consolidate hadoop-hdds and hadoop-ozone into hadoop-ozone-project

2019-07-03 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16878045#comment-16878045
 ] 

Eric Yang commented on HDDS-1661:
-

[~anu] When doing a directory moving, the better strategy is to provide a shell 
script that runs git mv command to avoid posting large patch and provide better 
history for the linear-age of the code.  Besides the git hv script, there are 
some shell integration scripts that should be modified because the integration 
script are semi-hard coded to reference hadoop-hdds project.If we can avoid 
generating shell scripts as wrapper on top of maven, it will save us a lot of 
work.  This is the reason that I voiced by concern in HDDS-1735 about the shell 
script wrappers.

> Consolidate hadoop-hdds and hadoop-ozone into hadoop-ozone-project
> --
>
> Key: HDDS-1661
> URL: https://issues.apache.org/jira/browse/HDDS-1661
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Eric Yang
>Priority: Major
>
> Ozone source code is somewhat fragmented within the Hadoop source tree.  The 
> current code looks like:
> {code}
> hadoop/pom.ozone.xml
> ├── hadoop-hdds
> └── hadoop-ozone
> {code}
> It is helpful to consolidate the project into a high-level grouping such as:
> {code}
> hadoop
> └── hadoop-ozone-project/pom.xml
> └── hadoop-ozone-project/hadoop-hdds
> └── hadoop-ozone-project/hadoop-ozone
> {code}
> This allows users to build Ozone from the hadoop-ozone-project directory.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1760) Avoid using DFS_CONTAINER_RATIS_ENABLED_KEY to find replication factor and type

2019-07-03 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1760?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-1760:
-
Description: 
In OzoneManager, when trying to createFile, createKey when the user has not 
passed replication type and factor, we use dfs.container.ratis.enabled to set 
default replication type and factor.

 

If this property (dfs.container.ratis.enabled) is set to false, we use 
replication type as standalone and replication factor as 1. If it is true, use 
replication type as RATIS and factor as 3.

 

I think now this is not correct, as now with RATIS type we can have both 
replication factor 1 and 3. 

 

*The proposal is to change as below.*

1. Eliminate the above property.

2. If the replication type and replication factor are not set the default to 
RATIS and one. 

 

The second alternative is to make these parameters mandatory. But that changes 
the current behavior. And this breaks compatibility between old release.

 

Or any other suggestions to solve this issue.

  was:
In OzoneManager, when trying to createFile, createKey when the user has not 
passed replication type and factor, we use dfs.container.ratis.enabled to set 
default replication type and factor.

 

If this property (dfs.container.ratis.enabled) is set to false, we use 
replication type as standalone and replication factor as 1. If it is true, use 
replication type as RATIS and factor as 3.

 

I think now this is not correct, as now with RATIS type we can have both 
replication factor 1 and 3. 

 

*The proposal is to change as below.*

1. Eliminate the above property.

2. If the replication type and replication factor are not set the default to 
RATIS and one. 

 

Or any other suggestions to solve this issue.


> Avoid using DFS_CONTAINER_RATIS_ENABLED_KEY to find replication factor and 
> type
> ---
>
> Key: HDDS-1760
> URL: https://issues.apache.org/jira/browse/HDDS-1760
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>
> In OzoneManager, when trying to createFile, createKey when the user has not 
> passed replication type and factor, we use dfs.container.ratis.enabled to set 
> default replication type and factor.
>  
> If this property (dfs.container.ratis.enabled) is set to false, we use 
> replication type as standalone and replication factor as 1. If it is true, 
> use replication type as RATIS and factor as 3.
>  
> I think now this is not correct, as now with RATIS type we can have both 
> replication factor 1 and 3. 
>  
> *The proposal is to change as below.*
> 1. Eliminate the above property.
> 2. If the replication type and replication factor are not set the default to 
> RATIS and one. 
>  
> The second alternative is to make these parameters mandatory. But that 
> changes the current behavior. And this breaks compatibility between old 
> release.
>  
> Or any other suggestions to solve this issue.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1760) Avoid using DFS_CONTAINER_RATIS_ENABLED_KEY to find replication factor and type

2019-07-03 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1760?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-1760:
-
Summary: Avoid using DFS_CONTAINER_RATIS_ENABLED_KEY to find replication 
factor and type  (was: Using DFS_CONTAINER_RATIS_ENABLED_KEY to find 
replication factor and type)

> Avoid using DFS_CONTAINER_RATIS_ENABLED_KEY to find replication factor and 
> type
> ---
>
> Key: HDDS-1760
> URL: https://issues.apache.org/jira/browse/HDDS-1760
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>
> In OzoneManager, when trying to createFile, createKey when the user has not 
> passed replication type and factor, we use dfs.container.ratis.enabled to set 
> default replication type and factor.
>  
> If this property (dfs.container.ratis.enabled) is set to false, we use 
> replication type as standalone and replication factor as 1. If it is true, 
> use replication type as RATIS and factor as 3.
>  
> I think now this is not correct, as now with RATIS type we can have both 
> replication factor 1 and 3. 
>  
> *The proposal is to change as below.*
> 1. Eliminate the above property.
> 2. If the replication type and replication factor are not set the default to 
> RATIS and one. 
>  
> Or any other suggestions to solve this issue.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1760) Using DFS_CONTAINER_RATIS_ENABLED_KEY to find replication factor and type

2019-07-03 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1760?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-1760:
-
Description: 
In OzoneManager, when trying to createFile, createKey when the user has not 
passed replication type and factor, we use dfs.container.ratis.enabled to set 
default replication type and factor.

 

If this property (dfs.container.ratis.enabled) is set to false, we use 
replication type as standalone and replication factor as 1. If it is true, use 
replication type as RATIS and factor as 3.

 

I think now this is not correct, as now with RATIS type we can have both 
replication factor 1 and 3. 

 

*The proposal is to change as below.*

1. Eliminate the above property.

2. If the replication type and replication factor are not set the default to 
RATIS and one. 

 

Any other suggestions to solve this issue.

  was:
In OzoneManager, when trying to createFile, createKey when the user has not 
passed replication type and factor, we use dfs.container.ratis.enabled to set 
default replication type and factor.

 

If this property (dfs.container.ratis.enabled) is set to false, we use 
replication type as standalone and replication factor as 1. If it is true, use 
replication type as RATIS and factor as 3.

 

I think now this is not correct, as now with RATIS type we can have both 
replication factor 1 and 3. 

 

*The proposal is to change as below.*

1. Eliminate the above property.

2. If the replication type and replication factor are not set the default to 
RATIS and one. 


> Using DFS_CONTAINER_RATIS_ENABLED_KEY to find replication factor and type
> -
>
> Key: HDDS-1760
> URL: https://issues.apache.org/jira/browse/HDDS-1760
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>
> In OzoneManager, when trying to createFile, createKey when the user has not 
> passed replication type and factor, we use dfs.container.ratis.enabled to set 
> default replication type and factor.
>  
> If this property (dfs.container.ratis.enabled) is set to false, we use 
> replication type as standalone and replication factor as 1. If it is true, 
> use replication type as RATIS and factor as 3.
>  
> I think now this is not correct, as now with RATIS type we can have both 
> replication factor 1 and 3. 
>  
> *The proposal is to change as below.*
> 1. Eliminate the above property.
> 2. If the replication type and replication factor are not set the default to 
> RATIS and one. 
>  
> Any other suggestions to solve this issue.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1760) Using DFS_CONTAINER_RATIS_ENABLED_KEY to find replication factor and type

2019-07-03 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1760?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-1760:
-
Description: 
In OzoneManager, when trying to createFile, createKey when the user has not 
passed replication type and factor, we use dfs.container.ratis.enabled to set 
default replication type and factor.

 

If this property (dfs.container.ratis.enabled) is set to false, we use 
replication type as standalone and replication factor as 1. If it is true, use 
replication type as RATIS and factor as 3.

 

I think now this is not correct, as now with RATIS type we can have both 
replication factor 1 and 3. 

 

*The proposal is to change as below.*

1. Eliminate the above property.

2. If the replication type and replication factor are not set the default to 
RATIS and one. 

 

Or any other suggestions to solve this issue.

  was:
In OzoneManager, when trying to createFile, createKey when the user has not 
passed replication type and factor, we use dfs.container.ratis.enabled to set 
default replication type and factor.

 

If this property (dfs.container.ratis.enabled) is set to false, we use 
replication type as standalone and replication factor as 1. If it is true, use 
replication type as RATIS and factor as 3.

 

I think now this is not correct, as now with RATIS type we can have both 
replication factor 1 and 3. 

 

*The proposal is to change as below.*

1. Eliminate the above property.

2. If the replication type and replication factor are not set the default to 
RATIS and one. 

 

Any other suggestions to solve this issue.


> Using DFS_CONTAINER_RATIS_ENABLED_KEY to find replication factor and type
> -
>
> Key: HDDS-1760
> URL: https://issues.apache.org/jira/browse/HDDS-1760
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>
> In OzoneManager, when trying to createFile, createKey when the user has not 
> passed replication type and factor, we use dfs.container.ratis.enabled to set 
> default replication type and factor.
>  
> If this property (dfs.container.ratis.enabled) is set to false, we use 
> replication type as standalone and replication factor as 1. If it is true, 
> use replication type as RATIS and factor as 3.
>  
> I think now this is not correct, as now with RATIS type we can have both 
> replication factor 1 and 3. 
>  
> *The proposal is to change as below.*
> 1. Eliminate the above property.
> 2. If the replication type and replication factor are not set the default to 
> RATIS and one. 
>  
> Or any other suggestions to solve this issue.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-1760) Using DFS_CONTAINER_RATIS_ENABLED_KEY to find replication factor and type

2019-07-03 Thread Bharat Viswanadham (JIRA)
Bharat Viswanadham created HDDS-1760:


 Summary: Using DFS_CONTAINER_RATIS_ENABLED_KEY to find replication 
factor and type
 Key: HDDS-1760
 URL: https://issues.apache.org/jira/browse/HDDS-1760
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Bharat Viswanadham
Assignee: Bharat Viswanadham


In OzoneManager, when trying to createFile, createKey when the user has not 
passed replication type and factor, we use dfs.container.ratis.enabled to set 
default replication type and factor.

 

If this property (dfs.container.ratis.enabled) is set to false, we use 
replication type as standalone and replication factor as 1. If it is true, use 
replication type as RATIS and factor as 3.

 

I think now this is not correct, as now with RATIS type we can have both 
replication factor 1 and 3. 

 

*The proposal is to change as below.*

1. Eliminate the above property.

2. If the replication type and replication factor are not set the default to 
RATIS and one. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1731) Implement File CreateFile Request to use Cache and DoubleBuffer

2019-07-03 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1731?focusedWorklogId=271760=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-271760
 ]

ASF GitHub Bot logged work on HDDS-1731:


Author: ASF GitHub Bot
Created on: 03/Jul/19 17:35
Start Date: 03/Jul/19 17:35
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on issue #1044: HDDS-1731. 
Implement File CreateFile Request to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/1044#issuecomment-508187695
 
 
   > The patch looks mostly fine to me. A few minor comments. ~and one type 
error that needs to be fixed.~
   > 
   > I would like to see the class hierarchy refactored in a follow up patch. 
`OMFileCreateRequest` should not extend `OMKeyCreateRequest`. Instead they 
should both extend an abstract class that encapsulates the common functionality.
   > 
   > Generally deriving from _concrete_ classes is a bad idea.
   
   Will open a new Jira to address this.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 271760)
Time Spent: 5.5h  (was: 5h 20m)

> Implement File CreateFile Request to use Cache and DoubleBuffer
> ---
>
> Key: HDDS-1731
> URL: https://issues.apache.org/jira/browse/HDDS-1731
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 5.5h
>  Remaining Estimate: 0h
>
> In this Jira, we shall implement createFile request according to the HA 
> model, and use cache and double buffer.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1731) Implement File CreateFile Request to use Cache and DoubleBuffer

2019-07-03 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1731?focusedWorklogId=271759=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-271759
 ]

ASF GitHub Bot logged work on HDDS-1731:


Author: ASF GitHub Bot
Created on: 03/Jul/19 17:34
Start Date: 03/Jul/19 17:34
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #1044: 
HDDS-1731. Implement File CreateFile Request to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/1044#discussion_r300078451
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/file/OMFileCreateRequest.java
 ##
 @@ -0,0 +1,348 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om.request.file;
+
+import java.io.IOException;
+import java.nio.file.Path;
+import java.nio.file.Paths;
+import java.util.ArrayList;
+import java.util.Iterator;
+import java.util.List;
+import java.util.Map;
+import java.util.stream.Collectors;
+import javax.annotation.Nonnull;
+
+import com.google.common.base.Preconditions;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.hadoop.fs.FileEncryptionInfo;
+import org.apache.hadoop.hdds.protocol.proto.HddsProtos;
+import org.apache.hadoop.hdds.scm.container.common.helpers.ExcludeList;
+import org.apache.hadoop.ozone.audit.OMAction;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.OMMetrics;
+import org.apache.hadoop.ozone.om.OzoneManager;
+import org.apache.hadoop.ozone.om.exceptions.OMException;
+import org.apache.hadoop.ozone.om.helpers.OmBucketInfo;
+import org.apache.hadoop.ozone.om.helpers.OmKeyInfo;
+import org.apache.hadoop.ozone.om.helpers.OmKeyLocationInfo;
+import org.apache.hadoop.ozone.om.request.key.OMKeyCreateRequest;
+import org.apache.hadoop.ozone.om.request.key.OMKeyRequest;
+import org.apache.hadoop.ozone.om.response.OMClientResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.CreateFileRequest;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.KeyArgs;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.OMRequest;
+import org.apache.hadoop.ozone.security.acl.IAccessAuthorizer;
+import org.apache.hadoop.ozone.security.acl.OzoneObj;
+import org.apache.hadoop.util.Time;
+import org.apache.hadoop.utils.UniqueId;
+import org.apache.hadoop.utils.db.Table;
+import org.apache.hadoop.utils.db.TableIterator;
+import org.apache.hadoop.utils.db.cache.CacheKey;
+import org.apache.hadoop.utils.db.cache.CacheValue;
+
+
+import static 
org.apache.hadoop.ozone.om.request.file.OMFileRequest.OMDirectoryResult.DIRECTORY_EXISTS;
+import static 
org.apache.hadoop.ozone.om.request.file.OMFileRequest.OMDirectoryResult.DIRECTORY_EXISTS_IN_GIVENPATH;
+import static 
org.apache.hadoop.ozone.om.request.file.OMFileRequest.OMDirectoryResult.FILE_EXISTS_IN_GIVENPATH;
+import static 
org.apache.hadoop.ozone.om.request.file.OMFileRequest.OMDirectoryResult.FILE_EXISTS;
+import static 
org.apache.hadoop.ozone.om.lock.OzoneManagerLock.Resource.BUCKET_LOCK;
+import static 
org.apache.hadoop.ozone.om.request.file.OMFileRequest.OMDirectoryResult.NONE;
+
+/**
+ * Handles create file request.
+ */
+public class OMFileCreateRequest extends OMKeyCreateRequest
+implements OMKeyRequest {
+
+  private static final Logger LOG =
+  LoggerFactory.getLogger(OMFileCreateRequest.class);
+  public OMFileCreateRequest(OMRequest omRequest) {
+super(omRequest);
+  }
+
+
+  @Override
+  public OMRequest preExecute(OzoneManager ozoneManager) throws IOException {
+CreateFileRequest createFileRequest = 
getOmRequest().getCreateFileRequest();
+Preconditions.checkNotNull(createFileRequest);
+
+KeyArgs keyArgs = createFileRequest.getKeyArgs();
+
+if (keyArgs.getKeyName().length() == 0) {
+  // Check if this is the root of the filesystem.
+  // Not throwing exception here, as need to throw exception after
+  // checking volume/bucket exists.
+  return 

[jira] [Work logged] (HDDS-1731) Implement File CreateFile Request to use Cache and DoubleBuffer

2019-07-03 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1731?focusedWorklogId=271757=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-271757
 ]

ASF GitHub Bot logged work on HDDS-1731:


Author: ASF GitHub Bot
Created on: 03/Jul/19 17:32
Start Date: 03/Jul/19 17:32
Worklog Time Spent: 10m 
  Work Description: arp7 commented on issue #1044: HDDS-1731. Implement 
File CreateFile Request to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/1044#issuecomment-508186773
 
 
   +1
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 271757)
Time Spent: 5h 10m  (was: 5h)

> Implement File CreateFile Request to use Cache and DoubleBuffer
> ---
>
> Key: HDDS-1731
> URL: https://issues.apache.org/jira/browse/HDDS-1731
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 5h 10m
>  Remaining Estimate: 0h
>
> In this Jira, we shall implement createFile request according to the HA 
> model, and use cache and double buffer.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1731) Implement File CreateFile Request to use Cache and DoubleBuffer

2019-07-03 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1731?focusedWorklogId=271752=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-271752
 ]

ASF GitHub Bot logged work on HDDS-1731:


Author: ASF GitHub Bot
Created on: 03/Jul/19 17:26
Start Date: 03/Jul/19 17:26
Worklog Time Spent: 10m 
  Work Description: arp7 commented on issue #1044: HDDS-1731. Implement 
File CreateFile Request to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/1044#issuecomment-508183660
 
 
   The patch looks mostly fine to me. A few minor comments. ~~and one type 
error that needs to be fixed.~~
   
   I would like to see the class hierarchy refactored in a follow up patch. 
`OMFileCreateRequest` should not extend `OMKeyCreateRequest`. Instead they 
should both extend an abstract class that encapsulates the common functionality.
   
   Generally deriving from _concrete_ classes is a bad idea.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 271752)
Time Spent: 5h  (was: 4h 50m)

> Implement File CreateFile Request to use Cache and DoubleBuffer
> ---
>
> Key: HDDS-1731
> URL: https://issues.apache.org/jira/browse/HDDS-1731
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 5h
>  Remaining Estimate: 0h
>
> In this Jira, we shall implement createFile request according to the HA 
> model, and use cache and double buffer.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1731) Implement File CreateFile Request to use Cache and DoubleBuffer

2019-07-03 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1731?focusedWorklogId=271751=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-271751
 ]

ASF GitHub Bot logged work on HDDS-1731:


Author: ASF GitHub Bot
Created on: 03/Jul/19 17:25
Start Date: 03/Jul/19 17:25
Worklog Time Spent: 10m 
  Work Description: arp7 commented on pull request #1044: HDDS-1731. 
Implement File CreateFile Request to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/1044#discussion_r300072511
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/file/OMFileCreateRequest.java
 ##
 @@ -0,0 +1,348 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om.request.file;
+
+import java.io.IOException;
+import java.nio.file.Path;
+import java.nio.file.Paths;
+import java.util.ArrayList;
+import java.util.Iterator;
+import java.util.List;
+import java.util.Map;
+import java.util.stream.Collectors;
+import javax.annotation.Nonnull;
+
+import com.google.common.base.Preconditions;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.hadoop.fs.FileEncryptionInfo;
+import org.apache.hadoop.hdds.protocol.proto.HddsProtos;
+import org.apache.hadoop.hdds.scm.container.common.helpers.ExcludeList;
+import org.apache.hadoop.ozone.audit.OMAction;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.OMMetrics;
+import org.apache.hadoop.ozone.om.OzoneManager;
+import org.apache.hadoop.ozone.om.exceptions.OMException;
+import org.apache.hadoop.ozone.om.helpers.OmBucketInfo;
+import org.apache.hadoop.ozone.om.helpers.OmKeyInfo;
+import org.apache.hadoop.ozone.om.helpers.OmKeyLocationInfo;
+import org.apache.hadoop.ozone.om.request.key.OMKeyCreateRequest;
+import org.apache.hadoop.ozone.om.request.key.OMKeyRequest;
+import org.apache.hadoop.ozone.om.response.OMClientResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.CreateFileRequest;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.KeyArgs;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.OMRequest;
+import org.apache.hadoop.ozone.security.acl.IAccessAuthorizer;
+import org.apache.hadoop.ozone.security.acl.OzoneObj;
+import org.apache.hadoop.util.Time;
+import org.apache.hadoop.utils.UniqueId;
+import org.apache.hadoop.utils.db.Table;
+import org.apache.hadoop.utils.db.TableIterator;
+import org.apache.hadoop.utils.db.cache.CacheKey;
+import org.apache.hadoop.utils.db.cache.CacheValue;
+
+
+import static 
org.apache.hadoop.ozone.om.request.file.OMFileRequest.OMDirectoryResult.DIRECTORY_EXISTS;
+import static 
org.apache.hadoop.ozone.om.request.file.OMFileRequest.OMDirectoryResult.DIRECTORY_EXISTS_IN_GIVENPATH;
+import static 
org.apache.hadoop.ozone.om.request.file.OMFileRequest.OMDirectoryResult.FILE_EXISTS_IN_GIVENPATH;
+import static 
org.apache.hadoop.ozone.om.request.file.OMFileRequest.OMDirectoryResult.FILE_EXISTS;
+import static 
org.apache.hadoop.ozone.om.lock.OzoneManagerLock.Resource.BUCKET_LOCK;
+import static 
org.apache.hadoop.ozone.om.request.file.OMFileRequest.OMDirectoryResult.NONE;
+
+/**
+ * Handles create file request.
+ */
+public class OMFileCreateRequest extends OMKeyCreateRequest
+implements OMKeyRequest {
+
+  private static final Logger LOG =
+  LoggerFactory.getLogger(OMFileCreateRequest.class);
+  public OMFileCreateRequest(OMRequest omRequest) {
+super(omRequest);
+  }
+
+
+  @Override
+  public OMRequest preExecute(OzoneManager ozoneManager) throws IOException {
+CreateFileRequest createFileRequest = 
getOmRequest().getCreateFileRequest();
+Preconditions.checkNotNull(createFileRequest);
+
+KeyArgs keyArgs = createFileRequest.getKeyArgs();
+
+if (keyArgs.getKeyName().length() == 0) {
+  // Check if this is the root of the filesystem.
+  // Not throwing exception here, as need to throw exception after
+  // checking volume/bucket exists.
+  return 

[jira] [Assigned] (HDDS-1586) Allow Ozone RPC client to read with topology awareness

2019-07-03 Thread Jitendra Nath Pandey (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1586?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jitendra Nath Pandey reassigned HDDS-1586:
--

Assignee: Sammi Chen

> Allow Ozone RPC client to read with topology awareness
> --
>
> Key: HDDS-1586
> URL: https://issues.apache.org/jira/browse/HDDS-1586
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Sammi Chen
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h 40m
>  Remaining Estimate: 0h
>
> The idea is to leverage the node location from the block locations and prefer 
> reading from closer block replicas (see the sketch below). 
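
For illustration, a hedged sketch using hadoop-common's {{NetworkTopology}} to order replicas by distance from the reader; Ozone's client has its own topology classes, so treat the exact types and node locations here as assumptions:

{code}
import org.apache.hadoop.net.NetworkTopology;
import org.apache.hadoop.net.Node;
import org.apache.hadoop.net.NodeBase;

public class TopologyReadSketch {
  public static void main(String[] args) {
    NetworkTopology topology = new NetworkTopology();

    // Hypothetical locations: the reader shares /rack1 with dn1.
    Node reader = new NodeBase("client1", "/rack1");
    Node dnSameRack = new NodeBase("dn1", "/rack1");
    Node dnOtherRack = new NodeBase("dn2", "/rack2");
    topology.add(reader);
    topology.add(dnSameRack);
    topology.add(dnOtherRack);

    // Sort the replica list so the closest node comes first.
    Node[] replicas = {dnOtherRack, dnSameRack};
    topology.sortByDistance(reader, replicas, replicas.length);
    System.out.println("Read from: " + replicas[0].getName()); // expect dn1
  }
}
{code}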



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1731) Implement File CreateFile Request to use Cache and DoubleBuffer

2019-07-03 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1731?focusedWorklogId=271747=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-271747
 ]

ASF GitHub Bot logged work on HDDS-1731:


Author: ASF GitHub Bot
Created on: 03/Jul/19 17:22
Start Date: 03/Jul/19 17:22
Worklog Time Spent: 10m 
  Work Description: arp7 commented on issue #1044: HDDS-1731. Implement 
File CreateFile Request to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/1044#issuecomment-508183660
 
 
   The patch looks mostly fine to me. A few minor comments and one type error 
that needs to be fixed.
   
   I would like to see the class hierarchy refactored in a follow up patch. 
`OMFileCreateRequest` should not extend `OMKeyCreateRequest`. Instead they 
should both extend an abstract class that encapsulates the common functionality.
   
   Generally deriving from _concrete_ classes is a bad idea.
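
A sketch of the suggested shape, assuming a new abstract base class (the
abstract class name below is an assumption, not something from the patch):

{code:java}
// Both request types extend an abstract base instead of one concrete class
// deriving from the other; everything except the two concrete class names
// is illustrative.
abstract class AbstractOMKeyCreateRequest {
  // shared preExecute/validation/key-allocation plumbing would live here
}

class OMKeyCreateRequest extends AbstractOMKeyCreateRequest {
  // key-specific handling
}

class OMFileCreateRequest extends AbstractOMKeyCreateRequest {
  // file-specific handling (directory existence checks, etc.)
}
{code}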
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 271747)
Time Spent: 4h 40m  (was: 4.5h)

> Implement File CreateFile Request to use Cache and DoubleBuffer
> ---
>
> Key: HDDS-1731
> URL: https://issues.apache.org/jira/browse/HDDS-1731
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 4h 40m
>  Remaining Estimate: 0h
>
> In this Jira, we shall implement createFile request according to the HA 
> model, and use cache and double buffer.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1731) Implement File CreateFile Request to use Cache and DoubleBuffer

2019-07-03 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1731?focusedWorklogId=271746&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-271746
 ]

ASF GitHub Bot logged work on HDDS-1731:


Author: ASF GitHub Bot
Created on: 03/Jul/19 17:20
Start Date: 03/Jul/19 17:20
Worklog Time Spent: 10m 
  Work Description: arp7 commented on pull request #1044: HDDS-1731. 
Implement File CreateFile Request to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/1044#discussion_r300072511
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/file/OMFileCreateRequest.java
 ##
 @@ -0,0 +1,348 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om.request.file;
+
+import java.io.IOException;
+import java.nio.file.Path;
+import java.nio.file.Paths;
+import java.util.ArrayList;
+import java.util.Iterator;
+import java.util.List;
+import java.util.Map;
+import java.util.stream.Collectors;
+import javax.annotation.Nonnull;
+
+import com.google.common.base.Preconditions;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.hadoop.fs.FileEncryptionInfo;
+import org.apache.hadoop.hdds.protocol.proto.HddsProtos;
+import org.apache.hadoop.hdds.scm.container.common.helpers.ExcludeList;
+import org.apache.hadoop.ozone.audit.OMAction;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.OMMetrics;
+import org.apache.hadoop.ozone.om.OzoneManager;
+import org.apache.hadoop.ozone.om.exceptions.OMException;
+import org.apache.hadoop.ozone.om.helpers.OmBucketInfo;
+import org.apache.hadoop.ozone.om.helpers.OmKeyInfo;
+import org.apache.hadoop.ozone.om.helpers.OmKeyLocationInfo;
+import org.apache.hadoop.ozone.om.request.key.OMKeyCreateRequest;
+import org.apache.hadoop.ozone.om.request.key.OMKeyRequest;
+import org.apache.hadoop.ozone.om.response.OMClientResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.CreateFileRequest;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.KeyArgs;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.OMRequest;
+import org.apache.hadoop.ozone.security.acl.IAccessAuthorizer;
+import org.apache.hadoop.ozone.security.acl.OzoneObj;
+import org.apache.hadoop.util.Time;
+import org.apache.hadoop.utils.UniqueId;
+import org.apache.hadoop.utils.db.Table;
+import org.apache.hadoop.utils.db.TableIterator;
+import org.apache.hadoop.utils.db.cache.CacheKey;
+import org.apache.hadoop.utils.db.cache.CacheValue;
+
+
+import static org.apache.hadoop.ozone.om.request.file.OMFileRequest.OMDirectoryResult.DIRECTORY_EXISTS;
+import static org.apache.hadoop.ozone.om.request.file.OMFileRequest.OMDirectoryResult.DIRECTORY_EXISTS_IN_GIVENPATH;
+import static org.apache.hadoop.ozone.om.request.file.OMFileRequest.OMDirectoryResult.FILE_EXISTS_IN_GIVENPATH;
+import static org.apache.hadoop.ozone.om.request.file.OMFileRequest.OMDirectoryResult.FILE_EXISTS;
+import static org.apache.hadoop.ozone.om.lock.OzoneManagerLock.Resource.BUCKET_LOCK;
+import static org.apache.hadoop.ozone.om.request.file.OMFileRequest.OMDirectoryResult.NONE;
+
+/**
+ * Handles create file request.
+ */
+public class OMFileCreateRequest extends OMKeyCreateRequest
+implements OMKeyRequest {
+
+  private static final Logger LOG =
+  LoggerFactory.getLogger(OMFileCreateRequest.class);
+  public OMFileCreateRequest(OMRequest omRequest) {
+super(omRequest);
+  }
+
+
+  @Override
+  public OMRequest preExecute(OzoneManager ozoneManager) throws IOException {
+CreateFileRequest createFileRequest = getOmRequest().getCreateFileRequest();
+Preconditions.checkNotNull(createFileRequest);
+
+KeyArgs keyArgs = createFileRequest.getKeyArgs();
+
+if (keyArgs.getKeyName().length() == 0) {
+  // Check if this is the root of the filesystem.
+  // Not throwing exception here, as need to throw exception after
+  // checking volume/bucket exists.
+  return 
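
A hedged sketch of what the comment above describes (an assumption, not the
patch text): an empty key name denotes the filesystem root, so preExecute
lets the request through and the proper error is raised only after the
volume and bucket checks run later in the request pipeline:

{code:java}
// Sketch under the stated assumption; not the actual patch body.
if (keyArgs.getKeyName().length() == 0) {
  // Root of the filesystem: defer the failure so validateAndUpdateCache can
  // first verify that the volume and bucket exist, then fail with the
  // appropriate error code.
  return getOmRequest();
}
{code}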

[jira] [Commented] (HDDS-1661) Consolidate hadoop-hdds and hadoop-ozone into hadoop-ozone-project

2019-07-03 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16878000#comment-16878000
 ] 

Anu Engineer commented on HDDS-1661:


Sorry, I stand corrected. The rename and separation of HDDS into an independent 
project was an 808KB patch, not 400KB as I mentioned in the previous comment, 
and it was 5 committers instead of 6. I just checked.

> Consolidate hadoop-hdds and hadoop-ozone into hadoop-ozone-project
> --
>
> Key: HDDS-1661
> URL: https://issues.apache.org/jira/browse/HDDS-1661
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Eric Yang
>Priority: Major
>
> Ozone source code is somewhat fragmented in the Hadoop source tree.  The 
> current code looks like:
> {code}
> hadoop/pom.ozone.xml
> ├── hadoop-hdds
> └── hadoop-ozone
> {code}
> It is helpful to consolidate the project into a high-level grouping such as:
> {code}
> hadoop
> └── hadoop-ozone-project/pom.xml
> └── hadoop-ozone-project/hadoop-hdds
> └── hadoop-ozone-project/hadoop-ozone
> {code}
> This allows users to build Ozone from the hadoop-ozone-project directory.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14625) Make DefaultAuditLogger class in FSnamesystem to Abstract

2019-07-03 Thread hemanthboyina (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14625?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

hemanthboyina updated HDFS-14625:
-
Attachment: HDFS-14625.patch
Status: Patch Available  (was: Open)

> Make DefaultAuditLogger class in FSnamesystem to Abstract 
> --
>
> Key: HDFS-14625
> URL: https://issues.apache.org/jira/browse/HDFS-14625
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: hemanthboyina
>Assignee: hemanthboyina
>Priority: Major
> Attachments: HDFS-14625.patch
>
>
> As per +HDFS-13270+ (Audit logger for Router), we can make the 
> DefaultAuditLogger in FSNamesystem abstract and common.
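
A minimal sketch of the idea, with illustrative names rather than the
committed design: hoist the shared formatting out of FSNamesystem's
DefaultAuditLogger so both the NameNode and the Router can extend it.

{code:java}
// Illustrative shape only, not the actual HDFS classes.
abstract class AbstractAuditLogger {
  /** Subclasses decide where the audit record goes. */
  abstract void logAuditEvent(boolean succeeded, String user, String cmd,
      String src, String dst);

  /** Common key=value formatting that all implementations can reuse. */
  String format(boolean succeeded, String user, String cmd, String src,
      String dst) {
    return "allowed=" + succeeded + "\tugi=" + user + "\tcmd=" + cmd
        + "\tsrc=" + src + "\tdst=" + dst;
  }
}
{code}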



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14629) Property value Hard Coded in DNConf.java

2019-07-03 Thread hemanthboyina (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14629?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

hemanthboyina updated HDFS-14629:
-
Attachment: HDFS-14629.patch
Status: Patch Available  (was: Open)

> Property value Hard Coded in DNConf.java
> 
>
> Key: HDFS-14629
> URL: https://issues.apache.org/jira/browse/HDFS-14629
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: hemanthboyina
>Assignee: hemanthboyina
>Priority: Trivial
> Attachments: HDFS-14629.patch
>
>
> {code:java}
> DataNode.LOG.info("dfs.blockreport.initialDelay is "
> + "greater than or equal to" + "dfs.blockreport.intervalMsec."
> + " Setting initial delay to 0 msec:");{code}
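
A minimal sketch of the likely fix, assuming the existing DFSConfigKeys
constants (HDFS defines DFS_BLOCKREPORT_INITIAL_DELAY_KEY and
DFS_BLOCKREPORT_INTERVAL_MSEC_KEY); the exact message wording is illustrative:

{code:java}
// Reference the constants from org.apache.hadoop.hdfs.DFSConfigKeys instead
// of hard-coding the property names; this also restores the spaces missing
// from the original message.
DataNode.LOG.info(DFSConfigKeys.DFS_BLOCKREPORT_INITIAL_DELAY_KEY
    + " is greater than or equal to "
    + DFSConfigKeys.DFS_BLOCKREPORT_INTERVAL_MSEC_KEY
    + ". Setting initial delay to 0 msec.");
{code}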



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1661) Consolidate hadoop-hdds and hadoop-ozone into hadoop-ozone-project

2019-07-03 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16877980#comment-16877980
 ] 

Anu Engineer commented on HDDS-1661:


{quote}I think this is an odd behavior for open source development. I respect 
your reasons to keep the status quo, but it is awful for people who like to 
contribute, but can't.
{quote}
As usual, you go on a tangent again. A complete mischaracterization of Ozone. 
We have 1700 JIRAs under the HDDS project and almost 600 under HDFS-7240; do 
you think those 2300 JIRAs do not represent open development and community 
involvement? You come to the community call every week; you are present; you 
know how deeply we discuss each issue, including listening to you; yet you 
think we are "awful for people who like to contribute". Seriously?

Let me tell you how I see it from my perspective. You will file a JIRA and 
insist that it has to be done in a specific way. For example, this JIRA: we 
are saying that we did this separation of Ozone and HDDS specifically because 
the community voted to do so and asked us to merge them as different projects. 
They even asked us to rename HDSL to HDDS. So the community was more focused on 
HDDS. They want us to prove that HDDS can be a viable substrate for HDFS.

We told you the historical origins of why we separated out HDDS (there is a 
400KB patch with 6 committers' names on that separation; we worked around the 
clock, over a video link that ran for more than 3 days, to make that happen); 
but you want to go back. I am of the opinion that the community asked us to 
prove first that HDDS is a viable substrate; let us do that with Ozone GA and 
then come back with proof of what we have.

OK, at that point in the technical discussion, you start attacking us, saying 
things like "we are brainwashing people", and now you accuse us of not allowing 
you to contribute. It is very difficult to have a technical discussion with 
you. From the moment you hear a dissenting technical opinion, you start 
resorting to personal attacks. In fact, your technical argument disappears 
completely.

Then you will comment about something that is not related to the original 
question; I am just waiting for something like findbugs/checkstyle/docker/shell 
scripts, something irrelevant to this discussion, along with a healthy dose of 
insulting verbiage.

That will trigger a long reply from me... and the cycle continues... 

Eric, this is NOT a productive way for community development. If you like, I 
can summarize my interactions with you JIRA by JIRA; all of them devolve to 
this pattern. I am writing this down with the hope that perhaps you don't 
realize what is happening, and that this will help you become a more 
effective contributor.

> Consolidate hadoop-hdds and hadoop-ozone into hadoop-ozone-project
> --
>
> Key: HDDS-1661
> URL: https://issues.apache.org/jira/browse/HDDS-1661
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Eric Yang
>Priority: Major
>
> Ozone source code is somewhat fragmented in the Hadoop source tree.  The 
> current code looks like:
> {code}
> hadoop/pom.ozone.xml
> ├── hadoop-hdds
> └── hadoop-ozone
> {code}
> It is helpful to consolidate the project into a high-level grouping such as:
> {code}
> hadoop
> └── hadoop-ozone-project/pom.xml
> └── hadoop-ozone-project/hadoop-hdds
> └── hadoop-ozone-project/hadoop-ozone
> {code}
> This allows users to build Ozone from the hadoop-ozone-project directory.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14483) Backport HDFS-14111,HDFS-3246 ByteBuffer pread interface to branch-2.9

2019-07-03 Thread stack (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14483?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HDFS-14483:
-
Attachment: HDFS-14483.branch-2.9.v2 (2).patch

> Backport HDFS-14111,HDFS-3246 ByteBuffer pread interface to branch-2.9
> --
>
> Key: HDFS-14483
> URL: https://issues.apache.org/jira/browse/HDFS-14483
> Project: Hadoop HDFS
>  Issue Type: Task
>Reporter: Zheng Hu
>Assignee: Lisheng Sun
>Priority: Major
> Attachments: HDFS-14483.branch-2.8.v1.patch, 
> HDFS-14483.branch-2.9.v1.patch, HDFS-14483.branch-2.9.v1.patch, 
> HDFS-14483.branch-2.9.v2 (2).patch, HDFS-14483.branch-2.9.v2.patch, 
> HDFS-14483.branch-2.9.v2.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14483) Backport HDFS-14111,HDFS-3246 ByteBuffer pread interface to branch-2.9

2019-07-03 Thread stack (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14483?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16877963#comment-16877963
 ] 

stack commented on HDFS-14483:
--

I looked back over recent hdfs qa builds 
https://builds.apache.org/job/PreCommit-HDFS-Build/. I see that

 TestWebHdfsTimeouts
 TestDirectoryScanner
 
... are definitely flaky.

The others I am not so sure about. If I go back in build history, it seems 
they fail only w/ this patch in place (I went back through all builds before 
the first build above, up here 
https://builds.apache.org/job/PreCommit-HDFS-Build/). Let me retry the patch.



> Backport HDFS-14111,HDFS-3246 ByteBuffer pread interface to branch-2.9
> --
>
> Key: HDFS-14483
> URL: https://issues.apache.org/jira/browse/HDFS-14483
> Project: Hadoop HDFS
>  Issue Type: Task
>Reporter: Zheng Hu
>Assignee: Lisheng Sun
>Priority: Major
> Attachments: HDFS-14483.branch-2.8.v1.patch, 
> HDFS-14483.branch-2.9.v1.patch, HDFS-14483.branch-2.9.v1.patch, 
> HDFS-14483.branch-2.9.v2.patch, HDFS-14483.branch-2.9.v2.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDDS-1735) Create separate unit and integration test executor dev-support script

2019-07-03 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16877944#comment-16877944
 ] 

Eric Yang edited comment on HDDS-1735 at 7/3/19 4:13 PM:
-

[~elek] rat.sh runs the maven command in hadoop-hdds and hadoop-ozone 
separately, but license errors are only checked for the hadoop-ozone project.  
This ignores anything detected in the hadoop-hdds project.  Can you confirm 
this is correct?



was (Author: eyang):
[~elek] rat.sh runs maven command in hadoop-hdds and hadoop-ozone separately, 
but only the copy in hadoop-ozone is checked for error.  This means to ignore 
any detection in hadoop-hdds project.  Can you confirm this is correct?


> Create separate unit and integration test executor dev-support script
> -
>
> Key: HDDS-1735
> URL: https://issues.apache.org/jira/browse/HDDS-1735
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
> Attachments: Screen Shot 2019-07-02 at 3.25.33 PM.png
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> The hadoop-ozone/dev-support/checks directory contains multiple helper 
> scripts to execute different types of testing (findbugs, rat, unit, build).
> They define how tests should be executed, with the following contract:
>  * problems should be printed out to the console
>  * in case of a test failure, a non-zero exit code should be used
>  
> The tests are working well (in fact I have some experiments with executing 
> these scripts on k8s and argo where all the shell scripts are executed in 
> parallel), but we need some updates:
>  1. Most important: the unit tests and integration tests should be separated. 
> Integration tests are more flaky, and it's better to have a way to run only 
> the normal unit tests.
>  2. As HDDS-1115 introduced a pom.ozone.xml, it's better to use it instead 
> of the magical "am pl hadoop-ozone-dist" trick.
>  3. To make it possible to run blockade tests in containers, we should use 
> the -T flag with docker-compose.
>  4. checkstyle violations should be printed out to the console.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1735) Create separate unit and integration test executor dev-support script

2019-07-03 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16877944#comment-16877944
 ] 

Eric Yang commented on HDDS-1735:
-

[~elek] rat.sh runs the maven command in hadoop-hdds and hadoop-ozone 
separately, but only the copy in hadoop-ozone is checked for errors.  This 
means anything detected in the hadoop-hdds project is ignored.  Can you 
confirm this is correct?


> Create separate unit and integration test executor dev-support script
> -
>
> Key: HDDS-1735
> URL: https://issues.apache.org/jira/browse/HDDS-1735
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
> Attachments: Screen Shot 2019-07-02 at 3.25.33 PM.png
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> The hadoop-ozone/dev-support/checks directory contains multiple helper 
> scripts to execute different types of testing (findbugs, rat, unit, build).
> They define how tests should be executed, with the following contract:
>  * problems should be printed out to the console
>  * in case of a test failure, a non-zero exit code should be used
>  
> The tests are working well (in fact I have some experiments with executing 
> these scripts on k8s and argo where all the shell scripts are executed in 
> parallel), but we need some updates:
>  1. Most important: the unit tests and integration tests should be separated. 
> Integration tests are more flaky, and it's better to have a way to run only 
> the normal unit tests.
>  2. As HDDS-1115 introduced a pom.ozone.xml, it's better to use it instead 
> of the magical "am pl hadoop-ozone-dist" trick.
>  3. To make it possible to run blockade tests in containers, we should use 
> the -T flag with docker-compose.
>  4. checkstyle violations should be printed out to the console.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1661) Consolidate hadoop-hdds and hadoop-ozone into hadoop-ozone-project

2019-07-03 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16877933#comment-16877933
 ] 

Eric Yang commented on HDDS-1661:
-

{quote}What do you mean? "brain wash current developers with odd behaviors" I 
do not even understand what you are trying to insinuate here. If you cannot 
communicate respectfully, please stop trolling and go away. {quote}

My definition of odd behavior in this case is firmly believing in something 
that will not happen, and also preventing people from making code changes to 
make the project better.  I am not trolling, and I am happy to provide a patch 
to make this happen.  However, if no Ozone developer agrees with me, then that 
exactly proves my point that you like to build Ozone on your own, and you 
don't want new Ozone developer adoption.  I think this is an odd behavior for 
open source development.  I respect your reasons to keep the status quo, but 
it is awful for people who would like to contribute, but can't.

> Consolidate hadoop-hdds and hadoop-ozone into hadoop-ozone-project
> --
>
> Key: HDDS-1661
> URL: https://issues.apache.org/jira/browse/HDDS-1661
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Eric Yang
>Priority: Major
>
> Ozone source code is somewhat fragmented in the Hadoop source tree.  The 
> current code looks like:
> {code}
> hadoop/pom.ozone.xml
> ├── hadoop-hdds
> └── hadoop-ozone
> {code}
> It is helpful to consolidate the project into a high-level grouping such as:
> {code}
> hadoop
> └── hadoop-ozone-project/pom.xml
> └── hadoop-ozone-project/hadoop-hdds
> └── hadoop-ozone-project/hadoop-ozone
> {code}
> This allows users to build Ozone from the hadoop-ozone-project directory.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14621) Distcp can not preserve timestamp with -delete option

2019-07-03 Thread ludun (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14621?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ludun updated HDFS-14621:
-
Attachment: HDFS-14621.003.patch

> Distcp can not preserve timestamp with -delete  option
> --
>
> Key: HDFS-14621
> URL: https://issues.apache.org/jira/browse/HDFS-14621
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: distcp
>Affects Versions: 2.7.7, 3.1.2
>Reporter: ludun
>Priority: Major
> Attachments: HDFS-14261.001.patch, HDFS-14621.002.patch, 
> HDFS-14621.003.patch
>
>
> Use distcp with -prbugpcaxt and -delete to copy data between clusters.
> hadoop distcp -Dmapreduce.job.queuename="QueueA" -prbugpcaxt -update -delete  
> hdfs://sourcecluster/user/hive/warehouse/sum.db 
> hdfs://destcluster/user/hive/warehouse/sum.db
> After distcp, we found the timestamps at the destination differ from the 
> source, and the timestamp of some directories was the time distcp ran.
> Checking the distcp code: in CopyCommitter, it preserves times first and then 
> processes the -delete option, which changes the timestamps of destination 
> directories. So we should process the -delete option first.
>  
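
A hedged sketch of the proposed reordering in CopyCommitter#commitJob; the
method and constant names below match the real distcp classes to the best of
my knowledge, but treat the snippet as illustrative rather than the actual
patch:

{code:java}
// Run the -delete pass first, then restore directory attributes, so that
// -delete cannot clobber the preserved timestamps afterwards.
if (conf.getBoolean(DistCpConstants.CONF_LABEL_DELETE_MISSING, false)) {
  deleteMissing(conf);                         // may touch dest directories
}
String attributes = conf.get(DistCpConstants.CONF_LABEL_PRESERVE_STATUS);
if (attributes != null && !attributes.isEmpty()) {
  preserveFileAttributesForDirectories(conf);  // timestamps applied last
}
{code}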



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1586) Allow Ozone RPC client to read with topology awareness

2019-07-03 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1586?focusedWorklogId=271685&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-271685
 ]

ASF GitHub Bot logged work on HDDS-1586:


Author: ASF GitHub Bot
Created on: 03/Jul/19 15:16
Start Date: 03/Jul/19 15:16
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #931: HDDS-1586. Allow 
Ozone RPC client to read with topology awareness.
URL: https://github.com/apache/hadoop/pull/931#issuecomment-508137612
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 73 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | 0 | shelldocs | 1 | Shelldocs was not available. |
   | 0 | yamllint | 1 | yamllint was not available. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 9 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 74 | Maven dependency ordering for branch |
   | +1 | mvninstall | 605 | trunk passed |
   | +1 | compile | 269 | trunk passed |
   | +1 | checkstyle | 83 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 860 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 176 | trunk passed |
   | 0 | spotbugs | 327 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 530 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 39 | Maven dependency ordering for patch |
   | +1 | mvninstall | 450 | the patch passed |
   | +1 | compile | 278 | the patch passed |
   | +1 | cc | 278 | the patch passed |
   | +1 | javac | 278 | the patch passed |
   | +1 | checkstyle | 86 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | shellcheck | 32 | There were no new shellcheck issues. |
   | -1 | whitespace | 0 | The patch has 6 line(s) with tabs. |
   | +1 | xml | 1 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 745 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 171 | the patch passed |
   | +1 | findbugs | 547 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 280 | hadoop-hdds in the patch failed. |
   | -1 | unit | 1731 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 48 | The patch does not generate ASF License warnings. |
   | | | 7349 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdds.scm.block.TestBlockManager |
   |   | hadoop.ozone.client.rpc.TestWatchForCommit |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.hdds.scm.pipeline.TestNodeFailure |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.ozone.om.TestSecureOzoneManager |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.ozone.client.rpc.TestFailureHandlingByClient |
   |   | hadoop.ozone.TestMiniChaosOzoneCluster |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=18.09.5 Server=18.09.5 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-931/9/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/931 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle cc xml shellcheck shelldocs 
yamllint |
   | uname | Linux e1d558648b61 4.15.0-52-generic #56-Ubuntu SMP Tue Jun 4 
22:49:08 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / a2a8be1 |
   | Default Java | 1.8.0_212 |
   | whitespace | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-931/9/artifact/out/whitespace-tabs.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-931/9/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-931/9/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-931/9/testReport/ |
   | Max. process+thread count | 5045 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/client hadoop-hdds/common hadoop-hdds/server-scm 
hadoop-ozone/common hadoop-ozone/dist hadoop-ozone/integration-test 
hadoop-ozone/ozone-manager U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-931/9/console |
   | versions | git=2.7.4 maven=3.3.9 shellcheck=0.4.6 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

[jira] [Commented] (HDFS-14621) Distcp can not preserve timestamp with -delete option

2019-07-03 Thread Ayush Saxena (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14621?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16877830#comment-16877830
 ] 

Ayush Saxena commented on HDFS-14621:
-

There are a bunch of checkstyle warnings:

https://builds.apache.org/job/PreCommit-HDFS-Build/27137/artifact/out/diff-checkstyle-hadoop-tools_hadoop-distcp.txt

Give them a check; most are for line length and can easily be resolved.

> Distcp can not preserve timestamp with -delete  option
> --
>
> Key: HDFS-14621
> URL: https://issues.apache.org/jira/browse/HDFS-14621
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: distcp
>Affects Versions: 2.7.7, 3.1.2
>Reporter: ludun
>Priority: Major
> Attachments: HDFS-14261.001.patch, HDFS-14621.002.patch
>
>
> Use distcp with -prbugpcaxt and -delete to copy data between clusters.
> hadoop distcp -Dmapreduce.job.queuename="QueueA" -prbugpcaxt -update -delete  
> hdfs://sourcecluster/user/hive/warehouse/sum.db 
> hdfs://destcluster/user/hive/warehouse/sum.db
> After distcp, we found the timestamps at the destination differ from the 
> source, and the timestamp of some directories was the time distcp ran.
> Checking the distcp code: in CopyCommitter, it preserves times first and then 
> processes the -delete option, which changes the timestamps of destination 
> directories. So we should process the -delete option first.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org


