[jira] [Updated] (HDFS-14369) RBF: Fix trailing "/" for webhdfs

2019-04-04 Thread Akira Ajisaka (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14369?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HDFS-14369:
-
Attachment: HDFS-14369-HDFS-13891.005.patch

> RBF: Fix trailing "/" for webhdfs
> -
>
> Key: HDFS-14369
> URL: https://issues.apache.org/jira/browse/HDFS-14369
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: CR Hota
>Assignee: Akira Ajisaka
>Priority: Major
> Attachments: HDFS-14369-HDFS-13891-regressiontest-001.patch, 
> HDFS-14369-HDFS-13891.001.patch, HDFS-14369-HDFS-13891.002.patch, 
> HDFS-14369-HDFS-13891.003.patch, HDFS-14369-HDFS-13891.004.patch, 
> HDFS-14369-HDFS-13891.005.patch
>
>
> WebHDFS doesn't trim the trailing slash, causing a discrepancy in operations.
> Example below
> --
> Using the HDFS API, two directories are listed.
> {code}
> $ hdfs dfs -ls hdfs://:/tmp/
> Found 2 items
> drwxrwxrwx   - hdfs supergroup          0 2018-11-09 17:50 hdfs://:/tmp/tmp1
> drwxrwxrwx   - hdfs supergroup          0 2018-11-09 17:50 hdfs://:/tmp/tmp2
> {code}
> Using the WebHDFS API, only one directory is listed.
> {code}
> $ curl -u : --negotiate -i "http://:50071/webhdfs/v1/tmp/?op=LISTSTATUS"
> (snip)
> {"FileStatuses":{"FileStatus":[
> {"accessTime":0,"blockSize":0,"childrenNum":0,"fileId":16387,"group":"supergroup","length":0,"modificationTime":1552016766769,"owner":"hdfs","pathSuffix":"tmp1","permission":"755","replication":0,"storagePolicy":0,"type":"DIRECTORY"}
> ]}}
> {code}
> The mount table is as follows:
> {code}
> $ hdfs dfsrouteradmin -ls /tmp
> Mount Table Entries:
> Source     Destinations    Owner     Group  Mode       Quota/Usage
> /tmp       ns1->/tmp       aajisaka  users  rwxr-xr-x  [NsQuota: -/-, SsQuota: -/-]
> /tmp/tmp1  ns1->/tmp/tmp1  aajisaka  users  rwxr-xr-x  [NsQuota: -/-, SsQuota: -/-]
> /tmp/tmp2  ns2->/tmp/tmp2  aajisaka  users  rwxr-xr-x  [NsQuota: -/-, SsQuota: -/-]
> {code}
> Without the trailing slash, two directories are listed.
> {code}
> $ curl -u : --negotiate -i "http://:50071/webhdfs/v1/tmp?op=LISTSTATUS"
> (snip)
> {"FileStatuses":{"FileStatus":[
> {"accessTime":1541753421917,"blockSize":0,"childrenNum":0,"fileId":0,"group":"supergroup","length":0,"modificationTime":1541753421917,"owner":"hdfs","pathSuffix":"tmp1","permission":"777","replication":0,"storagePolicy":0,"symlink":"","type":"DIRECTORY"},
> {"accessTime":1541753429812,"blockSize":0,"childrenNum":0,"fileId":0,"group":"supergroup","length":0,"modificationTime":1541753429812,"owner":"hdfs","pathSuffix":"tmp2","permission":"777","replication":0,"storagePolicy":0,"symlink":"","type":"DIRECTORY"}
> ]}}
> {code}
> [~ajisakaa] Thanks for reporting this, I borrowed the text from 
> HDFS-13972
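
A minimal sketch of the kind of normalization the fix needs (hypothetical code, not the attached patch): trim trailing slashes from the WebHDFS path before it is resolved against the mount table, so that "/tmp/" and "/tmp" hit the same entries.
{code:java}
// Hypothetical sketch, not the HDFS-14369 patch itself: normalize a WebHDFS
// path by trimming trailing slashes so "/tmp/" and "/tmp" resolve to the
// same mount table entries.
private static String trimTrailingSlashes(String path) {
  String p = path;
  while (p.length() > 1 && p.endsWith("/")) {
    p = p.substring(0, p.length() - 1);
  }
  return p;  // "/tmp/" -> "/tmp"; the root "/" is left untouched
}
{code}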



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1390) Remove hk2 dependency exclusions from ozone s3gateway module.

2019-04-04 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1390?focusedWorklogId=223455&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-223455
 ]

ASF GitHub Bot logged work on HDDS-1390:


Author: ASF GitHub Bot
Created on: 05/Apr/19 05:47
Start Date: 05/Apr/19 05:47
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #698: HDDS-1390 - 
Remove hk2 dependency exclusions from ozone s3gateway mod…
URL: https://github.com/apache/hadoop/pull/698#issuecomment-480155785
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:--------|
   | 0 | reexec | 28 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 49 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1125 | trunk passed |
   | +1 | compile | 117 | trunk passed |
   | +1 | mvnsite | 71 | trunk passed |
   | +1 | shadedclient | 1994 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 38 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 14 | Maven dependency ordering for patch |
   | +1 | mvninstall | 64 | the patch passed |
   | +1 | compile | 121 | the patch passed |
   | +1 | javac | 121 | the patch passed |
   | +1 | mvnsite | 59 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 3 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 765 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 33 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 35 | s3gateway in the patch passed. |
   | -1 | unit | 731 | integration-test in the patch failed. |
   | +1 | asflicense | 30 | The patch does not generate ASF License warnings. |
   | | | 4041 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.om.TestOmMetrics |
   |   | hadoop.ozone.om.TestScmChillMode |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-698/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/698 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  xml  |
   | uname | Linux 11ba88b816af 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 77fe51e |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-698/2/artifact/out/patch-unit-hadoop-ozone_integration-test.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-698/2/testReport/ |
   | Max. process+thread count | 4128 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/s3gateway hadoop-ozone/integration-test U: 
hadoop-ozone |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-698/2/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 223455)
Time Spent: 0.5h  (was: 20m)

> Remove hk2 dependency exclusions from ozone s3gateway module.
> -
>
> Key: HDDS-1390
> URL: https://issues.apache.org/jira/browse/HDDS-1390
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Aravindan Vijayan
>Assignee: Aravindan Vijayan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
> Attachments: HDDS-1390-000.patch
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Some hk2 transitive dependencies were mistakenly excluded in HDDS-1358 to 
> solve maven enforcer plugin issues. This jira cleans that up. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Work logged] (HDDS-1379) Convert all OM Volume related operations to HA model

2019-04-04 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1379?focusedWorklogId=223450&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-223450
 ]

ASF GitHub Bot logged work on HDDS-1379:


Author: ASF GitHub Bot
Created on: 05/Apr/19 05:29
Start Date: 05/Apr/19 05:29
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #689: HDDS-1379. 
Convert all OM Volume related operations to HA model.
URL: https://github.com/apache/hadoop/pull/689#issuecomment-480153136
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:--------|
   | 0 | reexec | 0 | Docker mode activated. |
   | -1 | patch | 9 | https://github.com/apache/hadoop/pull/689 does not apply 
to trunk. Rebase required? Wrong Branch? See 
https://wiki.apache.org/hadoop/HowToContribute for help. |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | GITHUB PR | https://github.com/apache/hadoop/pull/689 |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-689/7/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 223450)
Time Spent: 4h 50m  (was: 4h 40m)

> Convert all OM Volume related operations to HA model
> 
>
> Key: HDDS-1379
> URL: https://issues.apache.org/jira/browse/HDDS-1379
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 4h 50m
>  Remaining Estimate: 0h
>
> In this jira, we shall convert all OM volume related operations to the OM HA 
> model, which is a two-step process:
>  # StartTransaction, where we validate the request, check for any errors, and 
> return the response.
>  # ApplyTransaction, where the original OM request has a response which needs 
> to be applied to the OM DB. This step just applies the response to the OM DB.
> This way, any request that fails validation (for example, a volume that does 
> not exist, or an unsatisfied precondition such as a volume having to be empty 
> before deletion) is rejected during StartTransaction, so failed requests are 
> never written to the Raft log.
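
A minimal sketch of the two-step flow described above (method and type names are illustrative, not the actual Ozone Manager API):
{code:java}
// Hypothetical sketch of the two-step HA flow; names are illustrative.
Response startTransaction(Request request) throws ValidationException {
  // Step 1: validate up front (e.g. volume exists, volume is empty before
  // delete) so invalid requests are rejected before reaching the Raft log.
  validate(request);
  return prepareResponse(request);
}

void applyTransaction(Response response) throws IOException {
  // Step 2: after Raft replication, apply the already-validated response
  // to the OM DB; no further validation happens here.
  writeToOmDb(response);
}
{code}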



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1394) Thread in MiniOzoneCluster not exiting

2019-04-04 Thread Mukul Kumar Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1394?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mukul Kumar Singh updated HDDS-1394:

Labels: MiniOzoneChaosCluster  (was: )

> Thread in MiniOzoneCluster not exiting
> --
>
> Key: HDDS-1394
> URL: https://issues.apache.org/jira/browse/HDDS-1394
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.3.0
>Reporter: Mukul Kumar Singh
>Priority: Major
>  Labels: MiniOzoneChaosCluster
> Attachments: threaddump.log
>
>
> Threads in MiniOzoneChaosCluster are not exiting.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1394) Thread in MiniOzoneCluster not exiting

2019-04-04 Thread Mukul Kumar Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1394?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mukul Kumar Singh updated HDDS-1394:

Attachment: threaddump.log

> Thread in MiniOzoneCluster not exiting
> --
>
> Key: HDDS-1394
> URL: https://issues.apache.org/jira/browse/HDDS-1394
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.3.0
>Reporter: Mukul Kumar Singh
>Priority: Major
> Attachments: threaddump.log
>
>
> Threads in MiniOzoneChaosCluster are not exiting.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1379) Convert all OM Volume related operations to HA model

2019-04-04 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1379?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16810496#comment-16810496
 ] 

Hudson commented on HDDS-1379:
--

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #16354 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/16354/])
HDDS-1379. Convert all OM Volume related operations to HA model. (#689) 
(github: rev 77fe51e13666f7e10ce5fa7bf53b35cdcd4602b6)
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/S3BucketManagerImpl.java
* (add) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/helpers/OmVolumeOwnerChangeResponse.java
* (edit) 
hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/ratis/TestOzoneManagerStateMachine.java
* (add) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/helpers/OmDeleteVolumeResponse.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/VolumeManagerImpl.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/om/TestOmMetrics.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ratis/OzoneManagerStateMachine.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/protocolPB/RequestHandler.java
* (edit) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/protocol/OzoneManagerHAProtocol.java
* (edit) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/helpers/OmVolumeArgs.java
* (add) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/protocolPB/OzoneManagerHARequestHandler.java
* (add) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/protocolPB/OzoneManagerHARequestHandlerImpl.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/om/TestOzoneManagerHA.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/VolumeManager.java
* (edit) hadoop-ozone/common/src/main/proto/OzoneManagerProtocol.proto
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneManager.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/protocolPB/OzoneManagerRequestHandler.java


> Convert all OM Volume related operations to HA model
> 
>
> Key: HDDS-1379
> URL: https://issues.apache.org/jira/browse/HDDS-1379
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 4h 40m
>  Remaining Estimate: 0h
>
> In this jira, we shall convert all OM volume related operations to the OM HA 
> model, which is a two-step process:
>  # StartTransaction, where we validate the request, check for any errors, and 
> return the response.
>  # ApplyTransaction, where the original OM request has a response which needs 
> to be applied to the OM DB. This step just applies the response to the OM DB.
> This way, any request that fails validation (for example, a volume that does 
> not exist, or an unsatisfied precondition such as a volume having to be empty 
> before deletion) is rejected during StartTransaction, so failed requests are 
> never written to the Raft log.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14407) Fix misuse of SLF4j logging API in DatasetVolumeChecker#checkAllVolumes

2019-04-04 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14407?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16810495#comment-16810495
 ] 

Hudson commented on HDFS-14407:
---

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #16354 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/16354/])
HDFS-14407. Fix misuse of SLF4j logging API in (aajisaka: rev 
67020f09502a4f07342dee457e47bb52b03441ae)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/checker/DatasetVolumeChecker.java


> Fix misuse of SLF4j logging API in DatasetVolumeChecker#checkAllVolumes
> ---
>
> Key: HDFS-14407
> URL: https://issues.apache.org/jira/browse/HDFS-14407
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Wanqiang Ji
>Assignee: Wanqiang Ji
>Priority: Minor
> Fix For: 2.10.0, 3.0.4, 3.3.0, 3.2.1, 2.9.3, 3.1.3
>
> Attachments: HDFS-14407.001.patch
>
>
>  Wrong:
> {code:java}
> LOG.warn("checkAllVolumes timed out after {} ms" +
> maxAllowedTimeForCheckMs);
> {code}
> Correct:
> {code:java}
> LOG.warn("checkAllVolumes timed out after {} ms",
> maxAllowedTimeForCheckMs);
> {code}
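
A self-contained illustration of why the first variant is wrong: the {{+}} concatenates the value onto the format string, so SLF4J never substitutes the placeholder; passing the value as a parameter (with a comma) fills it in.
{code:java}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class PlaceholderDemo {
  private static final Logger LOG =
      LoggerFactory.getLogger(PlaceholderDemo.class);

  public static void main(String[] args) {
    long maxAllowedTimeForCheckMs = 1000L;
    // Concatenation: logs "checkAllVolumes timed out after {} ms1000" --
    // the literal "{}" survives and the value is glued onto the end.
    LOG.warn("checkAllVolumes timed out after {} ms" + maxAllowedTimeForCheckMs);
    // Parameter passing: logs "checkAllVolumes timed out after 1000 ms".
    LOG.warn("checkAllVolumes timed out after {} ms", maxAllowedTimeForCheckMs);
  }
}
{code}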



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-14220) Enable Replica Placement Value Per Rack

2019-04-04 Thread Amithsha (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14220?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16810493#comment-16810493
 ] 

Amithsha edited comment on HDFS-14220 at 4/5/19 5:06 AM:
-

Yes, we had a scenario where 2 writes were happening on the same rack, which 
went bad.

So both writing processes for the block were slow. In that case, we modified 
the code to specify the number of replicas per rack.
If this were configurable, it would be easy for users to specify the number 
based on their environment.


was (Author: amithsha):
Yes, we had a scenario where 2 writes were happening on the same rack, which 
went bad.

So both writing processes for the block were slow. In that case, we modified 
the code to specify the number of replicas per rack.

> Enable Replica Placement Value Per Rack
> ---
>
> Key: HDFS-14220
> URL: https://issues.apache.org/jira/browse/HDFS-14220
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode
>Affects Versions: 2.7.1
>Reporter: Amithsha
>Priority: Trivial
>
> By default, replica placement per rack is handled by 
> BlockPlacementPolicyDefault.java, with two if conditions:
>  # numOfRacks < 1
>  # numOfRacks > 1
> and placement happens as 1 replica on the local rack and 2 on a remote rack.
> If a user needs at most 1 replica per rack, BlockPlacementPolicyDefault.java 
> itself has to be modified; instead, we could add a property to specify the 
> placement policy and the replica value per rack.
>  
>  
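
A minimal sketch of the proposed knob (the property key and helper are invented for illustration; no such configuration exists in HDFS today):
{code:java}
import java.util.Map;

public class RackLimitSketch {
  // Hypothetical property key, invented for illustration.
  static final String MAX_REPLICAS_PER_RACK_KEY =
      "dfs.namenode.replica-placement.max-per-rack";

  /** Returns true if one more replica on this rack would exceed the cap. */
  static boolean exceedsRackLimit(Map<String, Integer> replicasPerRack,
      String rack, int maxPerRack) {
    return replicasPerRack.getOrDefault(rack, 0) + 1 > maxPerRack;
  }
}
{code}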



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-1394) Thread in MiniOzoneCluster not exiting

2019-04-04 Thread Mukul Kumar Singh (JIRA)
Mukul Kumar Singh created HDDS-1394:
---

 Summary: Thread in MiniOzoneCluster not exiting
 Key: HDDS-1394
 URL: https://issues.apache.org/jira/browse/HDDS-1394
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: test
Affects Versions: 0.3.0
Reporter: Mukul Kumar Singh


Threads in MiniOzoneChaosCluster are not exiting.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14220) Enable Replica Placement Value Per Rack

2019-04-04 Thread Amithsha (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14220?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16810493#comment-16810493
 ] 

Amithsha commented on HDFS-14220:
-

Yes, we had a scenario where 2 writes were happening on the same rack, which 
went bad.

So both writing processes for the block were slow. In that case, we modified 
the code to specify the number of replicas per rack.

> Enable Replica Placement Value Per Rack
> ---
>
> Key: HDFS-14220
> URL: https://issues.apache.org/jira/browse/HDFS-14220
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode
>Affects Versions: 2.7.1
>Reporter: Amithsha
>Priority: Trivial
>
> By default, replica placement per rack is handled by 
> BlockPlacementPolicyDefault.java, with two if conditions:
>  # numOfRacks < 1
>  # numOfRacks > 1
> and placement happens as 1 replica on the local rack and 2 on a remote rack.
> If a user needs at most 1 replica per rack, BlockPlacementPolicyDefault.java 
> itself has to be modified; instead, we could add a property to specify the 
> placement policy and the replica value per rack.
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1332) Add some logging for flaky test testStartStopDatanodeStateMachine

2019-04-04 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1332?focusedWorklogId=223435&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-223435
 ]

ASF GitHub Bot logged work on HDDS-1332:


Author: ASF GitHub Bot
Created on: 05/Apr/19 04:37
Start Date: 05/Apr/19 04:37
Worklog Time Spent: 10m 
  Work Description: adoroszlai commented on issue #697: [HDDS-1332] Attempt 
to fix flaky test testStartStopDatanodeStateMachine
URL: https://github.com/apache/hadoop/pull/697#issuecomment-480145646
 
 
   @arp7 @nandakumar131 please review
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 223435)
Time Spent: 1h  (was: 50m)

> Add some logging for flaky test testStartStopDatanodeStateMachine
> -
>
> Key: HDDS-1332
> URL: https://issues.apache.org/jira/browse/HDDS-1332
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> testStartStopDatanodeStateMachine fails frequently in Jenkins. It also seems 
> to have a timing issue which may be different from the Jenkins failure.
> E.g. If I add a 10 second sleep as below I can get the test to fail 100%.
> {code}
> @@ -163,6 +163,7 @@ public void testStartStopDatanodeStateMachine() throws 
> IOException,
>  try (DatanodeStateMachine stateMachine =
>  new DatanodeStateMachine(getNewDatanodeDetails(), conf, null)) {
>stateMachine.startDaemon();
> +  Thread.sleep(10_000L);
> {code}
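
A common way to de-flake such a test is to poll for the expected state instead of sleeping for a fixed interval. A sketch (not the committed fix), assuming the state machine exposes its current state through its context:
{code:java}
// Hypothetical sketch: poll until the daemon reaches the expected state or a
// deadline passes, instead of racing it with a fixed Thread.sleep.
long deadline = System.currentTimeMillis() + 10_000L;
while (stateMachine.getContext().getState() != expectedState) {
  if (System.currentTimeMillis() > deadline) {
    Assert.fail("Timed out waiting for state " + expectedState);
  }
  Thread.sleep(100L);
}
{code}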



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1379) Convert all OM Volume related operations to HA model

2019-04-04 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1379?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-1379:
-
Fix Version/s: 0.5.0

> Convert all OM Volume related operations to HA model
> 
>
> Key: HDDS-1379
> URL: https://issues.apache.org/jira/browse/HDDS-1379
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 4h 40m
>  Remaining Estimate: 0h
>
> In this jira, we shall convert all OM volume related operations to the OM HA 
> model, which is a two-step process:
>  # StartTransaction, where we validate the request, check for any errors, and 
> return the response.
>  # ApplyTransaction, where the original OM request has a response which needs 
> to be applied to the OM DB. This step just applies the response to the OM DB.
> This way, any request that fails validation (for example, a volume that does 
> not exist, or an unsatisfied precondition such as a volume having to be empty 
> before deletion) is rejected during StartTransaction, so failed requests are 
> never written to the Raft log.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1379) Convert all OM Volume related operations to HA model

2019-04-04 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1379?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-1379:
-
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Thank You [~arpitagarwal] for the review.

I have committed this to trunk.

> Convert all OM Volume related operations to HA model
> 
>
> Key: HDDS-1379
> URL: https://issues.apache.org/jira/browse/HDDS-1379
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 4h 40m
>  Remaining Estimate: 0h
>
> In this jira, we shall convert all OM volume related operations to the OM HA 
> model, which is a two-step process:
>  # StartTransaction, where we validate the request, check for any errors, and 
> return the response.
>  # ApplyTransaction, where the original OM request has a response which needs 
> to be applied to the OM DB. This step just applies the response to the OM DB.
> This way, any request that fails validation (for example, a volume that does 
> not exist, or an unsatisfied precondition such as a volume having to be empty 
> before deletion) is rejected during StartTransaction, so failed requests are 
> never written to the Raft log.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1379) Convert all OM Volume related operations to HA model

2019-04-04 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1379?focusedWorklogId=223429&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-223429
 ]

ASF GitHub Bot logged work on HDDS-1379:


Author: ASF GitHub Bot
Created on: 05/Apr/19 04:09
Start Date: 05/Apr/19 04:09
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #689: 
HDDS-1379. Convert all OM Volume related operations to HA model.
URL: https://github.com/apache/hadoop/pull/689
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 223429)
Time Spent: 4h 40m  (was: 4.5h)

> Convert all OM Volume related operations to HA model
> 
>
> Key: HDDS-1379
> URL: https://issues.apache.org/jira/browse/HDDS-1379
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 4h 40m
>  Remaining Estimate: 0h
>
> In this jira, we shall convert all OM volume related operations to the OM HA 
> model, which is a two-step process:
>  # StartTransaction, where we validate the request, check for any errors, and 
> return the response.
>  # ApplyTransaction, where the original OM request has a response which needs 
> to be applied to the OM DB. This step just applies the response to the OM DB.
> This way, any request that fails validation (for example, a volume that does 
> not exist, or an unsatisfied precondition such as a volume having to be empty 
> before deletion) is rejected during StartTransaction, so failed requests are 
> never written to the Raft log.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1379) Convert all OM Volume related operations to HA model

2019-04-04 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1379?focusedWorklogId=223427&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-223427
 ]

ASF GitHub Bot logged work on HDDS-1379:


Author: ASF GitHub Bot
Created on: 05/Apr/19 04:06
Start Date: 05/Apr/19 04:06
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on issue #689: HDDS-1379. 
Convert all OM Volume related operations to HA model.
URL: https://github.com/apache/hadoop/pull/689#issuecomment-480141525
 
 
   For TestScmChillMode we have a jira open to fix the issue. The other 
failure is due to a cluster startup failure ("Failed to bind"); I think it is 
a random failure and not related to this patch.
   
   I will commit this.
   Thank You @arp7  for the review.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 223427)
Time Spent: 4.5h  (was: 4h 20m)

> Convert all OM Volume related operations to HA model
> 
>
> Key: HDDS-1379
> URL: https://issues.apache.org/jira/browse/HDDS-1379
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 4.5h
>  Remaining Estimate: 0h
>
> In this jira, we shall convert all OM volume related operations to the OM HA 
> model, which is a two-step process:
>  # StartTransaction, where we validate the request, check for any errors, and 
> return the response.
>  # ApplyTransaction, where the original OM request has a response which needs 
> to be applied to the OM DB. This step just applies the response to the OM DB.
> This way, any request that fails validation (for example, a volume that does 
> not exist, or an unsatisfied precondition such as a volume having to be empty 
> before deletion) is rejected during StartTransaction, so failed requests are 
> never written to the Raft log.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1379) Convert all OM Volume related operations to HA model

2019-04-04 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1379?focusedWorklogId=223426&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-223426
 ]

ASF GitHub Bot logged work on HDDS-1379:


Author: ASF GitHub Bot
Created on: 05/Apr/19 04:05
Start Date: 05/Apr/19 04:05
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on issue #689: HDDS-1379. 
Convert all OM Volume related operations to HA model.
URL: https://github.com/apache/hadoop/pull/689#issuecomment-480141525
 
 
   For TestScmChillMode we have a jira open to fix the issue. The other 
failure is due to a cluster startup failure ("Failed to bind"); I think it is 
a random failure and not related to this patch.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 223426)
Time Spent: 4h 20m  (was: 4h 10m)

> Convert all OM Volume related operations to HA model
> 
>
> Key: HDDS-1379
> URL: https://issues.apache.org/jira/browse/HDDS-1379
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 4h 20m
>  Remaining Estimate: 0h
>
> In this jira, we shall convert all OM volume related operations to the OM HA 
> model, which is a two-step process:
>  # StartTransaction, where we validate the request, check for any errors, and 
> return the response.
>  # ApplyTransaction, where the original OM request has a response which needs 
> to be applied to the OM DB. This step just applies the response to the OM DB.
> This way, any request that fails validation (for example, a volume that does 
> not exist, or an unsatisfied precondition such as a volume having to be empty 
> before deletion) is rejected during StartTransaction, so failed requests are 
> never written to the Raft log.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1379) Convert all OM Volume related operations to HA model

2019-04-04 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1379?focusedWorklogId=223425&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-223425
 ]

ASF GitHub Bot logged work on HDDS-1379:


Author: ASF GitHub Bot
Created on: 05/Apr/19 04:00
Start Date: 05/Apr/19 04:00
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #689: HDDS-1379. 
Convert all OM Volume related operations to HA model.
URL: https://github.com/apache/hadoop/pull/689#issuecomment-480140918
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:--------|
   | 0 | reexec | 25 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 3 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 26 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1016 | trunk passed |
   | +1 | compile | 102 | trunk passed |
   | +1 | checkstyle | 45 | trunk passed |
   | +1 | mvnsite | 97 | trunk passed |
   | +1 | shadedclient | 732 | branch has no errors when building and testing 
our client artifacts. |
   | 0 | findbugs | 0 | Skipped patched modules with no Java source: 
hadoop-ozone/integration-test |
   | +1 | findbugs | 93 | trunk passed |
   | +1 | javadoc | 66 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 11 | Maven dependency ordering for patch |
   | +1 | mvninstall | 92 | the patch passed |
   | +1 | compile | 99 | the patch passed |
   | +1 | cc | 99 | the patch passed |
   | +1 | javac | 99 | the patch passed |
   | +1 | checkstyle | 21 | the patch passed |
   | +1 | mvnsite | 80 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 668 | patch has no errors when building and testing 
our client artifacts. |
   | 0 | findbugs | 0 | Skipped patched modules with no Java source: 
hadoop-ozone/integration-test |
   | +1 | findbugs | 109 | the patch passed |
   | -1 | javadoc | 34 | hadoop-ozone_common generated 1 new + 1 unchanged - 0 
fixed = 2 total (was 1) |
   ||| _ Other Tests _ |
   | +1 | unit | 32 | common in the patch passed. |
   | +1 | unit | 42 | ozone-manager in the patch passed. |
   | -1 | unit | 1308 | integration-test in the patch failed. |
   | +1 | asflicense | 32 | The patch does not generate ASF License warnings. |
   | | | 4778 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.om.TestOzoneManagerHA |
   |   | hadoop.ozone.om.TestScmChillMode |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-689/6/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/689 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  findbugs  checkstyle  cc  |
   | uname | Linux d24b809b7c10 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / d2141ef |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | findbugs | v3.1.0-RC1 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-689/6/artifact/out/diff-javadoc-javadoc-hadoop-ozone_common.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-689/6/artifact/out/patch-unit-hadoop-ozone_integration-test.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-689/6/testReport/ |
   | Max. process+thread count | 4263 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/common hadoop-ozone/ozone-manager 
hadoop-ozone/integration-test U: hadoop-ozone |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-689/6/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 223425)
Time Spent: 4h 10m  (was: 4h)

> Convert all OM Volume related operations to HA model
> 
>
> Key: HDDS-1379
> URL: https://issues.apache.org/jira/browse/HDDS-1379
> Project: Hadoop Distributed Data Store
>  

[jira] [Updated] (HDFS-14407) Fix misuse of SLF4j logging API in DatasetVolumeChecker#checkAllVolumes

2019-04-04 Thread Akira Ajisaka (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14407?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HDFS-14407:
-
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.1.3
   2.9.3
   3.2.1
   3.3.0
   3.0.4
   2.10.0
   Status: Resolved  (was: Patch Available)

Committed this to trunk, branch-3.2, branch-3.1, branch-3.0, branch-2, and 
branch-2.9. Thanks [~jiwq] for the contribution and thanks [~giovanni.fumarola] 
for the review.

> Fix misuse of SLF4j logging API in DatasetVolumeChecker#checkAllVolumes
> ---
>
> Key: HDFS-14407
> URL: https://issues.apache.org/jira/browse/HDFS-14407
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Wanqiang Ji
>Assignee: Wanqiang Ji
>Priority: Minor
> Fix For: 2.10.0, 3.0.4, 3.3.0, 3.2.1, 2.9.3, 3.1.3
>
> Attachments: HDFS-14407.001.patch
>
>
>  Wrong:
> {code:java}
> LOG.warn("checkAllVolumes timed out after {} ms" +
> maxAllowedTimeForCheckMs);
> {code}
> Correct:
> {code:java}
> LOG.warn("checkAllVolumes timed out after {} ms",
> maxAllowedTimeForCheckMs);
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14407) Fix misuse of SLF4j logging API in DatasetVolumeChecker#checkAllVolumes

2019-04-04 Thread Akira Ajisaka (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14407?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HDFS-14407:
-
Summary: Fix misuse of SLF4j logging API in 
DatasetVolumeChecker#checkAllVolumes  (was: Fix mistakenly used of SLF4j 
logging in DatasetVolumeChecker#checkAllVolumes)

> Fix misuse of SLF4j logging API in DatasetVolumeChecker#checkAllVolumes
> ---
>
> Key: HDFS-14407
> URL: https://issues.apache.org/jira/browse/HDFS-14407
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Wanqiang Ji
>Assignee: Wanqiang Ji
>Priority: Minor
> Attachments: HDFS-14407.001.patch
>
>
>  Wrong:
> {code:java}
> LOG.warn("checkAllVolumes timed out after {} ms" +
> maxAllowedTimeForCheckMs);
> {code}
> Correct:
> {code:java}
> LOG.warn("checkAllVolumes timed out after {} ms",
> maxAllowedTimeForCheckMs);
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14407) Fix mistakenly used of SLF4j logging in DatasetVolumeChecker#checkAllVolumes

2019-04-04 Thread Akira Ajisaka (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14407?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HDFS-14407:
-
Summary: Fix mistakenly used of SLF4j logging in 
DatasetVolumeChecker#checkAllVolumes  (was: Fix mistakenly used of SLF4j or 
Log4j2 logging in DatasetVolumeChecker#checkAllVolumes)

> Fix mistakenly used of SLF4j logging in DatasetVolumeChecker#checkAllVolumes
> 
>
> Key: HDFS-14407
> URL: https://issues.apache.org/jira/browse/HDFS-14407
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Wanqiang Ji
>Assignee: Wanqiang Ji
>Priority: Minor
> Attachments: HDFS-14407.001.patch
>
>
>  Wrong:
> {code:java}
> LOG.warn("checkAllVolumes timed out after {} ms" +
> maxAllowedTimeForCheckMs);
> {code}
> Correct:
> {code:java}
> LOG.warn("checkAllVolumes timed out after {} ms",
> maxAllowedTimeForCheckMs);
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13853) RBF: RouterAdmin update cmd is overwriting the entry not updating the existing

2019-04-04 Thread Ayush Saxena (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13853?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-13853:

   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: HDFS-13891
   Status: Resolved  (was: Patch Available)

> RBF: RouterAdmin update cmd is overwriting the entry not updating the existing
> --
>
> Key: HDFS-13853
> URL: https://issues.apache.org/jira/browse/HDFS-13853
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Dibyendu Karmakar
>Assignee: Ayush Saxena
>Priority: Major
> Fix For: HDFS-13891
>
> Attachments: HDFS-13853-HDFS-13891-01.patch, 
> HDFS-13853-HDFS-13891-02.patch, HDFS-13853-HDFS-13891-03.patch, 
> HDFS-13853-HDFS-13891-04.patch, HDFS-13853-HDFS-13891-05.patch, 
> HDFS-13853-HDFS-13891-06.patch, HDFS-13853-HDFS-13891-07.patch, 
> HDFS-13853-HDFS-13891-08.patch, HDFS-13853-HDFS-13891-09.patch
>
>
> {code:java}
> // Create a new entry
> Map destMap = new LinkedHashMap<>();
> for (String ns : nss) {
>   destMap.put(ns, dest);
> }
> MountTable newEntry = MountTable.newInstance(mount, destMap);
> {code}
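
The snippet above always builds a brand-new entry, which is why "update" overwrites the existing one. A hedged sketch of the intended merge semantics (getExistingDestinations is a made-up stand-in for the real mount table lookup):
{code:java}
// Hypothetical sketch, not the committed patch: start from the destinations
// already stored for this mount point and only overwrite the namespaces
// being updated, so untouched destinations survive the update.
Map<String, String> destMap =
    new LinkedHashMap<>(getExistingDestinations(mount));  // made-up helper
for (String ns : nss) {
  destMap.put(ns, dest);
}
MountTable updated = MountTable.newInstance(mount, destMap);
{code}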



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13853) RBF: RouterAdmin update cmd is overwriting the entry not updating the existing

2019-04-04 Thread Ayush Saxena (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13853?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16810467#comment-16810467
 ] 

Ayush Saxena commented on HDFS-13853:
-

Thanx [~elgoiri] for the review.

Committed!!!

 

> RBF: RouterAdmin update cmd is overwriting the entry not updating the existing
> --
>
> Key: HDFS-13853
> URL: https://issues.apache.org/jira/browse/HDFS-13853
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Dibyendu Karmakar
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-13853-HDFS-13891-01.patch, 
> HDFS-13853-HDFS-13891-02.patch, HDFS-13853-HDFS-13891-03.patch, 
> HDFS-13853-HDFS-13891-04.patch, HDFS-13853-HDFS-13891-05.patch, 
> HDFS-13853-HDFS-13891-06.patch, HDFS-13853-HDFS-13891-07.patch, 
> HDFS-13853-HDFS-13891-08.patch, HDFS-13853-HDFS-13891-09.patch
>
>
> {code:java}
> // Create a new entry
> Map destMap = new LinkedHashMap<>();
> for (String ns : nss) {
>   destMap.put(ns, dest);
> }
> MountTable newEntry = MountTable.newInstance(mount, destMap);
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1189) Recon Aggregate DB schema and ORM

2019-04-04 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1189?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16810464#comment-16810464
 ] 

Hudson commented on HDDS-1189:
--

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #16353 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/16353/])
HDDS-1189. Build failing due to rat check failure introduced by (bharat: rev 
d2141ef886632d5ce53deb98bfbfb01fc24eff4c)
* (edit) hadoop-ozone/ozone-recon-codegen/pom.xml


> Recon Aggregate DB schema and ORM
> -
>
> Key: HDDS-1189
> URL: https://issues.apache.org/jira/browse/HDDS-1189
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Affects Versions: 0.5.0
>Reporter: Siddharth Wagle
>Assignee: Siddharth Wagle
>Priority: Major
> Fix For: 0.5.0
>
> Attachments: HDDS-1189.01.patch, HDDS-1189.02.patch, 
> HDDS-1189.03.patch, HDDS-1189.04.patch, HDDS-1189.05.patch, 
> HDDS-1189.06.patch, HDDS-1189.08.patch
>
>
> _Objectives_
> - Define V1 of the DB schema for the Recon service.
> - The current proposal is to use jOOQ as the ORM for SQL interaction, for two 
> main reasons: a) a powerful DSL for querying that abstracts out SQL dialects; 
> b) seamless code-to-schema and schema-to-code transitions, critical for 
> creating DDL through code and for unit testing across versions of the 
> application.
> - Add an e2e unit test suite for Recon entities, created based on the design doc.
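
A small jOOQ flavor of the code-to-schema direction mentioned above (the table and column names are illustrative, not the actual Recon schema):
{code:java}
import java.sql.Connection;
import java.sql.DriverManager;
import org.jooq.DSLContext;
import org.jooq.SQLDialect;
import org.jooq.impl.DSL;
import org.jooq.impl.SQLDataType;

public class ReconSchemaSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn =
        DriverManager.getConnection("jdbc:sqlite::memory:")) {
      DSLContext ctx = DSL.using(conn, SQLDialect.SQLITE);
      // DDL through code: jOOQ's DSL abstracts the SQL dialect, and the same
      // definition can be generated back into Java classes (schema to code).
      ctx.createTableIfNotExists("recon_task_status")
          .column("task_name", SQLDataType.VARCHAR(100))
          .column("last_updated_ts", SQLDataType.BIGINT)
          .constraints(DSL.constraint("pk_task").primaryKey("task_name"))
          .execute();
    }
  }
}
{code}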



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1390) Remove hk2 dependency exclusions from ozone s3gateway module.

2019-04-04 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1390?focusedWorklogId=223413&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-223413
 ]

ASF GitHub Bot logged work on HDDS-1390:


Author: ASF GitHub Bot
Created on: 05/Apr/19 02:47
Start Date: 05/Apr/19 02:47
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #698: HDDS-1390 - 
Remove hk2 dependency exclusions from ozone s3gateway mod…
URL: https://github.com/apache/hadoop/pull/698#issuecomment-480130702
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:--------|
   | 0 | reexec | 29 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 49 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1044 | trunk passed |
   | +1 | compile | 109 | trunk passed |
   | +1 | mvnsite | 77 | trunk passed |
   | +1 | shadedclient | 1915 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 39 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 10 | Maven dependency ordering for patch |
   | +1 | mvninstall | 66 | the patch passed |
   | +1 | compile | 103 | the patch passed |
   | +1 | javac | 103 | the patch passed |
   | +1 | mvnsite | 53 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 3 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 694 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 40 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 33 | s3gateway in the patch passed. |
   | -1 | unit | 1423 | integration-test in the patch failed. |
   | +1 | asflicense | 31 | The patch does not generate ASF License warnings. |
   | | | 4564 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.ozone.TestSecureOzoneCluster |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestOzoneClientRetriesOnException |
   |   | hadoop.ozone.client.rpc.TestCloseContainerHandlingByClient |
   |   | hadoop.hdds.scm.pipeline.TestPipelineClose |
   |   | hadoop.ozone.om.TestScmChillMode |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-698/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/698 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  xml  |
   | uname | Linux 808918324bc2 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / d2141ef |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-698/1/artifact/out/patch-unit-hadoop-ozone_integration-test.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-698/1/testReport/ |
   | Max. process+thread count | 4998 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/s3gateway hadoop-ozone/integration-test U: 
hadoop-ozone |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-698/1/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 223413)
Time Spent: 20m  (was: 10m)

> Remove hk2 dependency exclusions from ozone s3gateway module.
> -
>
> Key: HDDS-1390
> URL: https://issues.apache.org/jira/browse/HDDS-1390
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Aravindan Vijayan
>Assignee: Aravindan Vijayan
>Priority: Major
>  Labels: 

[jira] [Commented] (HDDS-1333) OzoneFileSystem can't work with spark/hadoop2.7 because incompatible security classes

2019-04-04 Thread Ajay Kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1333?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16810457#comment-16810457
 ] 

Ajay Kumar commented on HDDS-1333:
--

[~elek] thanks for the patch, I have committed it to ozone-0.4. The cherry-pick 
to trunk is not clean due to HDDS-1300 and HDDS-1185. Will resolve this jira 
once the cherry-pick to trunk is done.

> OzoneFileSystem can't work with spark/hadoop2.7 because incompatible security 
> classes
> -
>
> Key: HDDS-1333
> URL: https://issues.apache.org/jira/browse/HDDS-1333
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 7h
>  Remaining Estimate: 0h
>
> The current ozonefs compatibility layer is broken by HDDS-1299.
> Spark jobs (including hadoop 2.7) can't be executed any more:
> {code}
> 2019-03-25 09:50:08 INFO  StateStoreCoordinatorRef:54 - Registered 
> StateStoreCoordinator endpoint
> Exception in thread "main" java.lang.NoClassDefFoundError: 
> org/apache/hadoop/crypto/key/KeyProviderTokenIssuer
> at java.lang.ClassLoader.defineClass1(Native Method)
> at java.lang.ClassLoader.defineClass(ClassLoader.java:763)
> at 
> java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
> at java.net.URLClassLoader.defineClass(URLClassLoader.java:468)
> at java.net.URLClassLoader.access$100(URLClassLoader.java:74)
> at java.net.URLClassLoader$1.run(URLClassLoader.java:369)
> at java.net.URLClassLoader$1.run(URLClassLoader.java:363)
> at java.security.AccessController.doPrivileged(Native Method)
> at java.net.URLClassLoader.findClass(URLClassLoader.java:362)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
> at java.lang.Class.forName0(Native Method)
> at java.lang.Class.forName(Class.java:348)
> at 
> org.apache.hadoop.conf.Configuration.getClassByNameOrNull(Configuration.java:2134)
> at 
> org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:2099)
> at 
> org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2193)
> at 
> org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2654)
> at 
> org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2667)
> at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:94)
> at 
> org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2703)
> at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2685)
> at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:373)
> at org.apache.hadoop.fs.Path.getFileSystem(Path.java:295)
> at 
> org.apache.spark.sql.execution.streaming.FileStreamSink$.hasMetadata(FileStreamSink.scala:45)
> at 
> org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:332)
> at 
> org.apache.spark.sql.DataFrameReader.loadV1Source(DataFrameReader.scala:223)
> at 
> org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:211)
> at 
> org.apache.spark.sql.DataFrameReader.text(DataFrameReader.scala:715)
> at 
> org.apache.spark.sql.DataFrameReader.textFile(DataFrameReader.scala:757)
> at 
> org.apache.spark.sql.DataFrameReader.textFile(DataFrameReader.scala:724)
> at org.apache.spark.examples.JavaWordCount.main(JavaWordCount.java:45)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
> at 
> org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:849)
> at 
> org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:167)
> at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:195)
> at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:86)
> at 
> org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:924)
> at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:933)
> at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
> Caused by: java.lang.ClassNotFoundException: 
> org.apache.hadoop.crypto.key.KeyProviderTokenIssuer
> at 

[jira] [Commented] (HDFS-14416) Fix TestHdfsConfigFields for field dfs.client.failover.resolver.useFQDN

2019-04-04 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14416?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16810450#comment-16810450
 ] 

Hadoop QA commented on HDFS-14416:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
 6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 17s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 26s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
52s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 52m 20s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-14416 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12964913/HDFS-14416.001.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux b9d961559149 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / d2141ef |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26585/testReport/ |
| Max. process+thread count | 414 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-client U: 
hadoop-hdfs-project/hadoop-hdfs-client |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26585/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> Fix TestHdfsConfigFields for 

[jira] [Assigned] (HDDS-1315) datanode process dies if it runs out of disk space

2019-04-04 Thread Supratim Deka (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1315?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Supratim Deka reassigned HDDS-1315:
---

Assignee: Supratim Deka

> datanode process dies if it runs out of disk space
> --
>
> Key: HDDS-1315
> URL: https://issues.apache.org/jira/browse/HDDS-1315
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Reporter: Sandeep Nemuri
>Assignee: Supratim Deka
>Priority: Major
>
> As of now, the datanode process dies if it runs out of disk space, which 
> makes the data present in that DN inaccessible.
> datanode logs: 
> {code:java}
> 2019-03-11 04:01:27,141 ERROR org.apache.ratis.server.storage.RaftLogWorker: 
> Terminating with exit status 1: 
> fb635e52-e2eb-46b1-b109-a831c10d3bf8-RaftLogWorker failed.
> java.io.FileNotFoundException: 
> /opt/data/meta/ratis/68e315f3-312c-4c9f-a7bd-590194deb5e7/current/log_inprogress_8705582
>  (No space left on device)
>   at java.io.RandomAccessFile.open0(Native Method)
>   at java.io.RandomAccessFile.open(RandomAccessFile.java:316)
>   at java.io.RandomAccessFile.<init>(RandomAccessFile.java:243)
>   at 
> org.apache.ratis.server.storage.LogOutputStream.<init>(LogOutputStream.java:66)
>   at 
> org.apache.ratis.server.storage.RaftLogWorker$StartLogSegment.execute(RaftLogWorker.java:436)
>   at 
> org.apache.ratis.server.storage.RaftLogWorker.run(RaftLogWorker.java:219)
>   at java.lang.Thread.run(Thread.java:745)
> {code}
> {code:java}
> 2019-03-11 04:01:25,531 [grpc-default-executor-9192] INFO   - Operation: 
> WriteChunk : Trace ID:  : Message: java.nio.file.FileSystemException: 
> /opt/data/hdds/a83a7108-91c7-4357-9f68-46753641d429/current/containerDir0/88/chunks/ba29bb91559179cbf7ab5d86cac47ba1_stream_9fb1e802-dca6-46e0-be12-5ac743d8563d_chunk_1.tmp.11076.8705539:
>  No space left on device : Result: IO_EXCEPTION
> 2019-03-11 04:01:25,543 [grpc-default-executor-9192] INFO   - Operation: 
> WriteChunk : Trace ID:  : Message: java.nio.file.FileSystemException: 
> /opt/data/hdds/a83a7108-91c7-4357-9f68-46753641d429/current/containerDir0/86/chunks/19ef3c1d36eadbc9538116c68c6e494f_stream_c58e8b91-dc18-4b61-918f-ab1eeda41c02_chunk_1.tmp.11076.8705540:
>  No space left on device : Result: IO_EXCEPTION
> 2019-03-11 04:01:25,546 [grpc-default-executor-9192] INFO   - Operation: 
> WriteChunk : Trace ID:  : Message: java.nio.file.FileSystemException: 
> /opt/data/hdds/a83a7108-91c7-4357-9f68-46753641d429/current/containerDir0/87/chunks/83a6a81f2f703f49a7e0a1413eebfc4c_stream_cae1ed30-c613-4278-8404-c9e37d0b690f_chunk_1.tmp.11076.8705541:
>  No space left on device : Result: IO_EXCEPTION
> {code}
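> A graceful-degradation sketch (illustrative only; the names and the safety 
> margin below are assumptions, not the actual Ratis/datanode API) would check 
> usable space before opening a new log segment and fail the single write 
> instead of terminating the whole process:
> {code:java}
> import java.io.File;
> import java.io.IOException;
> 
> public class DiskSpaceGuard {
>   // Assumed safety margin; a real implementation would make it configurable.
>   private static final long MIN_FREE_BYTES = 100L * 1024 * 1024;
> 
>   /** Reject the request up front instead of letting the process die later. */
>   public static void checkUsableSpace(File dir) throws IOException {
>     if (dir.getUsableSpace() < MIN_FREE_BYTES) {
>       throw new IOException("Low disk space on " + dir
>           + "; rejecting write instead of terminating the datanode");
>     }
>   }
> }
> {code}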



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1315) datanode process dies if it runs out of disk space

2019-04-04 Thread Supratim Deka (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1315?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16810444#comment-16810444
 ] 

Supratim Deka commented on HDDS-1315:
-

This is related to disk-full handling across Ozone components.

> datanode process dies if it runs out of disk space
> --
>
> Key: HDDS-1315
> URL: https://issues.apache.org/jira/browse/HDDS-1315
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Reporter: Sandeep Nemuri
>Priority: Major
>
> As of now, the datanode process dies if it runs out of disk space, which 
> makes the data present in that DN inaccessible.
> datanode logs: 
> {code:java}
> 2019-03-11 04:01:27,141 ERROR org.apache.ratis.server.storage.RaftLogWorker: 
> Terminating with exit status 1: 
> fb635e52-e2eb-46b1-b109-a831c10d3bf8-RaftLogWorker failed.
> java.io.FileNotFoundException: 
> /opt/data/meta/ratis/68e315f3-312c-4c9f-a7bd-590194deb5e7/current/log_inprogress_8705582
>  (No space left on device)
>   at java.io.RandomAccessFile.open0(Native Method)
>   at java.io.RandomAccessFile.open(RandomAccessFile.java:316)
>   at java.io.RandomAccessFile.<init>(RandomAccessFile.java:243)
>   at 
> org.apache.ratis.server.storage.LogOutputStream.<init>(LogOutputStream.java:66)
>   at 
> org.apache.ratis.server.storage.RaftLogWorker$StartLogSegment.execute(RaftLogWorker.java:436)
>   at 
> org.apache.ratis.server.storage.RaftLogWorker.run(RaftLogWorker.java:219)
>   at java.lang.Thread.run(Thread.java:745)
> {code}
> {code:java}
> 2019-03-11 04:01:25,531 [grpc-default-executor-9192] INFO   - Operation: 
> WriteChunk : Trace ID:  : Message: java.nio.file.FileSystemException: 
> /opt/data/hdds/a83a7108-91c7-4357-9f68-46753641d429/current/containerDir0/88/chunks/ba29bb91559179cbf7ab5d86cac47ba1_stream_9fb1e802-dca6-46e0-be12-5ac743d8563d_chunk_1.tmp.11076.8705539:
>  No space left on device : Result: IO_EXCEPTION
> 2019-03-11 04:01:25,543 [grpc-default-executor-9192] INFO   - Operation: 
> WriteChunk : Trace ID:  : Message: java.nio.file.FileSystemException: 
> /opt/data/hdds/a83a7108-91c7-4357-9f68-46753641d429/current/containerDir0/86/chunks/19ef3c1d36eadbc9538116c68c6e494f_stream_c58e8b91-dc18-4b61-918f-ab1eeda41c02_chunk_1.tmp.11076.8705540:
>  No space left on device : Result: IO_EXCEPTION
> 2019-03-11 04:01:25,546 [grpc-default-executor-9192] INFO   - Operation: 
> WriteChunk : Trace ID:  : Message: java.nio.file.FileSystemException: 
> /opt/data/hdds/a83a7108-91c7-4357-9f68-46753641d429/current/containerDir0/87/chunks/83a6a81f2f703f49a7e0a1413eebfc4c_stream_cae1ed30-c613-4278-8404-c9e37d0b690f_chunk_1.tmp.11076.8705541:
>  No space left on device : Result: IO_EXCEPTION
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-1206) need to handle in the client when one of the datanode disk goes out of space

2019-04-04 Thread Supratim Deka (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1206?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Supratim Deka reassigned HDDS-1206:
---

Assignee: Supratim Deka  (was: Shashikant Banerjee)

> need to handle in the client when one of the datanode disk goes out of space
> 
>
> Key: HDDS-1206
> URL: https://issues.apache.org/jira/browse/HDDS-1206
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Reporter: Nilotpal Nandi
>Assignee: Supratim Deka
>Priority: Major
>
> Steps taken:
> 
>  # Create a 40-datanode cluster.
>  # One of the datanodes has less than 5 GB of space.
>  # Start writing a key of size 600 MB.
> The operation failed:
> Error on the client:
> 
> {noformat}
> Fri Mar 1 09:05:28 UTC 2019 Running 
> /root/hadoop_trunk/ozone-0.4.0-SNAPSHOT/bin/ozone sh key put 
> testvol172275910-1551431122-1/testbuck172275910-1551431122-1/test_file24 
> /root/test_files/test_file24
> original md5sum a6de00c9284708585f5a99b0490b0b23
> 2019-03-01 09:05:39,142 ERROR storage.BlockOutputStream: Unexpected Storage 
> Container Exception:
> org.apache.hadoop.hdds.scm.container.common.helpers.StorageContainerException:
>  ContainerID 79 creation failed
>  at 
> org.apache.hadoop.hdds.scm.storage.ContainerProtocolCalls.validateContainerResponse(ContainerProtocolCalls.java:568)
>  at 
> org.apache.hadoop.hdds.scm.storage.BlockOutputStream.validateResponse(BlockOutputStream.java:535)
>  at 
> org.apache.hadoop.hdds.scm.storage.BlockOutputStream.lambda$writeChunkToContainer$5(BlockOutputStream.java:613)
>  at 
> java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:602)
>  at 
> java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:577)
>  at 
> java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:442)
>  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  at java.lang.Thread.run(Thread.java:748)
> 2019-03-01 09:05:39,578 ERROR storage.BlockOutputStream: Unexpected Storage 
> Container Exception:
> org.apache.hadoop.hdds.scm.container.common.helpers.StorageContainerException:
>  ContainerID 79 creation failed
>  at 
> org.apache.hadoop.hdds.scm.storage.ContainerProtocolCalls.validateContainerResponse(ContainerProtocolCalls.java:568)
>  at 
> org.apache.hadoop.hdds.scm.storage.BlockOutputStream.validateResponse(BlockOutputStream.java:535)
>  at 
> org.apache.hadoop.hdds.scm.storage.BlockOutputStream.lambda$writeChunkToContainer$5(BlockOutputStream.java:613)
>  at 
> java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:602)
>  at 
> java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:577)
>  at 
> java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:442)
>  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  at java.lang.Thread.run(Thread.java:748)
> 2019-03-01 09:05:40,368 ERROR storage.BlockOutputStream: Unexpected Storage 
> Container Exception:
> org.apache.hadoop.hdds.scm.container.common.helpers.StorageContainerException:
>  ContainerID 79 creation failed
>  at 
> org.apache.hadoop.hdds.scm.storage.ContainerProtocolCalls.validateContainerResponse(ContainerProtocolCalls.java:568)
>  at 
> org.apache.hadoop.hdds.scm.storage.BlockOutputStream.validateResponse(BlockOutputStream.java:535)
>  at 
> org.apache.hadoop.hdds.scm.storage.BlockOutputStream.lambda$writeChunkToContainer$5(BlockOutputStream.java:613)
>  at 
> java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:602)
>  at 
> java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:577)
>  at 
> java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:442)
>  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  at java.lang.Thread.run(Thread.java:748)
> 2019-03-01 09:05:40,450 ERROR storage.BlockOutputStream: Unexpected Storage 
> Container Exception:
> org.apache.hadoop.hdds.scm.container.common.helpers.StorageContainerException:
>  ContainerID 79 creation failed
>  at 
> org.apache.hadoop.hdds.scm.storage.ContainerProtocolCalls.validateContainerResponse(ContainerProtocolCalls.java:568)
>  at 
> org.apache.hadoop.hdds.scm.storage.BlockOutputStream.validateResponse(BlockOutputStream.java:535)
>  at 
> 

[jira] [Updated] (HDDS-1339) Implement Ratis Snapshots on OM

2019-04-04 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1339?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-1339:
-
Fix Version/s: 0.5.0

> Implement Ratis Snapshots on OM
> ---
>
> Key: HDDS-1339
> URL: https://issues.apache.org/jira/browse/HDDS-1339
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 5h
>  Remaining Estimate: 0h
>
> For bootstrapping and restarting OMs, we need to implement snapshots in OM. 
> The OM state maintained by RocksDB will be checkpointed on demand. Ratis 
> snapshots will only preserve, on disk, the last log index applied by the 
> State Machine. This index will be stored in a file in the OM metadata dir.
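> A minimal sketch of persisting that index (the file name and on-disk format 
> below are assumptions for illustration, not necessarily what the patch does):
> {code:java}
> import java.io.IOException;
> import java.nio.charset.StandardCharsets;
> import java.nio.file.Files;
> import java.nio.file.Path;
> import java.nio.file.Paths;
> 
> public class SnapshotIndexFile {
>   private final Path indexFile;
> 
>   public SnapshotIndexFile(String omMetadataDir) {
>     // Hypothetical file name inside the OM metadata dir.
>     this.indexFile = Paths.get(omMetadataDir, "om.snapshot.index");
>   }
> 
>   /** Record the last log index applied by the State Machine. */
>   public void write(long lastAppliedIndex) throws IOException {
>     Files.write(indexFile,
>         Long.toString(lastAppliedIndex).getBytes(StandardCharsets.UTF_8));
>   }
> 
>   /** On restart, resume from the recorded index; -1 means no snapshot yet. */
>   public long read() throws IOException {
>     if (!Files.exists(indexFile)) {
>       return -1L;
>     }
>     return Long.parseLong(new String(Files.readAllBytes(indexFile),
>         StandardCharsets.UTF_8).trim());
>   }
> }
> {code}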



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDDS-1339) Implement Ratis Snapshots on OM

2019-04-04 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1339?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham resolved HDDS-1339.
--
Resolution: Fixed

> Implement Ratis Snapshots on OM
> ---
>
> Key: HDDS-1339
> URL: https://issues.apache.org/jira/browse/HDDS-1339
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 5h
>  Remaining Estimate: 0h
>
> For bootstrapping and restarting OMs, we need to implement snapshots in OM. 
> The OM state maintained by RocksDB will be checkpointed on demand. Ratis 
> snapshots will only preserve, on disk, the last log index applied by the 
> State Machine. This index will be stored in a file in the OM metadata dir.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1386) Build failing due to rat check failure introduced by HDDS-1189

2019-04-04 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1386?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-1386:
-
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Thank you [~swagle] for the contribution and [~arpitagarwal] for the review.

I have committed this to trunk.

> Build failing due to rat check failure introduced by HDDS-1189
> --
>
> Key: HDDS-1386
> URL: https://issues.apache.org/jira/browse/HDDS-1386
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Recon
>Affects Versions: 0.5.0
>Reporter: Siddharth Wagle
>Assignee: Siddharth Wagle
>Priority: Major
> Fix For: 0.5.0
>
> Attachments: HDDS-1386.01.patch
>
>
> Build failure: 
> https://ci.anzix.net/job/ozone-nightly/52//artifact/build/rat.out/*view*/
> *
> Files with unapproved licenses:
>   
> /Users/swagle/Documents/workspace/hadoop/hadoop-ozone/ozone-recon-codegen/pom.xml
> *



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1390) Remove hk2 dependency exclusions from ozone s3gateway module.

2019-04-04 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1390?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16810434#comment-16810434
 ] 

Hadoop QA commented on HDDS-1390:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
31s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
22m 11s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
55s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
12s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  3m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 48s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
9s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  1m 58s{color} 
| {color:red} hadoop-hdds in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 15m 14s{color} 
| {color:red} hadoop-ozone in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
37s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 67m 11s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.TestMiniChaosOzoneCluster |
|   | hadoop.ozone.om.TestScmChillMode |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/PreCommit-HDDS-Build/2638/artifact/out/Dockerfile 
|
| JIRA Issue | HDDS-1390 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12964911/HDDS-1390-000.patch |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite 
unit shadedclient xml |
| uname | Linux b66d130fd016 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HDDS-Build/ozone.sh |
| git revision | trunk / 215ffc7 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| unit | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2638/artifact/out/patch-unit-hadoop-hdds.txt
 |
| unit | 

[jira] [Commented] (HDDS-1386) Build failing due to rat check failure introduced by HDDS-1189

2019-04-04 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1386?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16810432#comment-16810432
 ] 

Bharat Viswanadham commented on HDDS-1386:
--

+1 LGTM.

I will commit this shortly.

> Build failing due to rat check failure introduced by HDDS-1189
> --
>
> Key: HDDS-1386
> URL: https://issues.apache.org/jira/browse/HDDS-1386
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Recon
>Affects Versions: 0.5.0
>Reporter: Siddharth Wagle
>Assignee: Siddharth Wagle
>Priority: Major
> Fix For: 0.5.0
>
> Attachments: HDDS-1386.01.patch
>
>
> Build failure: 
> https://ci.anzix.net/job/ozone-nightly/52//artifact/build/rat.out/*view*/
> *
> Files with unapproved licenses:
>   
> /Users/swagle/Documents/workspace/hadoop/hadoop-ozone/ozone-recon-codegen/pom.xml
> *



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1379) Convert all OM Volume related operations to HA model

2019-04-04 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1379?focusedWorklogId=223386&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-223386
 ]

ASF GitHub Bot logged work on HDDS-1379:


Author: ASF GitHub Bot
Created on: 05/Apr/19 00:32
Start Date: 05/Apr/19 00:32
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on issue #689: HDDS-1379. 
Convert all OM Volume related operations to HA model.
URL: https://github.com/apache/hadoop/pull/689#issuecomment-480109833
 
 
   Fixed test failure which is related to this patch.
   Other issues are not related to this patch.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 223386)
Time Spent: 4h  (was: 3h 50m)

> Convert all OM Volume related operations to HA model
> 
>
> Key: HDDS-1379
> URL: https://issues.apache.org/jira/browse/HDDS-1379
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 4h
>  Remaining Estimate: 0h
>
> In this jira, we shall convert all OM volume related operations to the OM HA 
> model, which is a two-step process:
>  # StartTransaction, where we validate the request, check for any errors, and 
> return the response.
>  # ApplyTransaction, where the original OM request has a response that needs 
> to be applied to the OM DB. This step only applies that response to the OM DB.
> In this way, requests that fail validation (e.g. volume not found) or a 
> precondition (e.g. a volume must be empty before it is deleted) are rejected 
> during StartTransaction, so failed requests are never written to the Raft log.
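> A rough sketch of that two-step shape (the method and type names here are 
> illustrative assumptions, not the actual OM interfaces):
> {code:java}
> import java.io.IOException;
> 
> public class TwoStepVolumeOp {
>   /** Stand-in for the OM metadata store. */
>   interface OmStore {
>     boolean volumeExists(String volume);
>     void applyDelete(String volume);
>   }
> 
>   private final OmStore store;
> 
>   public TwoStepVolumeOp(OmStore store) {
>     this.store = store;
>   }
> 
>   /** Step 1: validate only; a failure here never reaches the Raft log. */
>   public void startDeleteVolume(String volume) throws IOException {
>     if (!store.volumeExists(volume)) {
>       throw new IOException("Volume not found: " + volume);
>     }
>   }
> 
>   /** Step 2: apply the already-validated response to the OM DB. */
>   public void applyDeleteVolume(String volume) {
>     store.applyDelete(volume);
>   }
> }
> {code}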



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1372) getContainerWithPipeline for a standalone pipeline fails with ConcurrentModificationException

2019-04-04 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1372?focusedWorklogId=223383&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-223383
 ]

ASF GitHub Bot logged work on HDDS-1372:


Author: ASF GitHub Bot
Created on: 05/Apr/19 00:21
Start Date: 05/Apr/19 00:21
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #682: HDDS-1372. 
getContainerWithPipeline for a standalone pipeline fails with 
ConcurrentModificationException.
URL: https://github.com/apache/hadoop/pull/682#issuecomment-480108119
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 24 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 1015 | trunk passed |
   | +1 | compile | 27 | trunk passed |
   | +1 | checkstyle | 18 | trunk passed |
   | +1 | mvnsite | 30 | trunk passed |
   | +1 | shadedclient | 670 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 51 | trunk passed |
   | +1 | javadoc | 21 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 30 | the patch passed |
   | +1 | compile | 23 | the patch passed |
   | +1 | javac | 23 | the patch passed |
   | +1 | checkstyle | 13 | the patch passed |
   | +1 | mvnsite | 28 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 726 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 47 | the patch passed |
   | +1 | javadoc | 17 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 105 | server-scm in the patch passed. |
   | +1 | asflicense | 25 | The patch does not generate ASF License warnings. |
   | | | 2935 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-682/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/682 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  findbugs  checkstyle  |
   | uname | Linux c9c093e50fd4 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 215ffc7 |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | findbugs | v3.1.0-RC1 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-682/2/testReport/ |
   | Max. process+thread count | 450 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/server-scm U: hadoop-hdds/server-scm |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-682/2/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 223383)
Time Spent: 1h 20m  (was: 1h 10m)

> getContainerWithPipeline for a standalone pipeline fails with 
> ConcurrentModificationException
> -
>
> Key: HDDS-1372
> URL: https://issues.apache.org/jira/browse/HDDS-1372
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Affects Versions: 0.5.0
>Reporter: Mukul Kumar Singh
>Assignee: Nanda kumar
>Priority: Major
>  Labels: MiniOzoneChaosCluster, pull-request-available
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> The excception is hit while fetching a pipeline during read.
> {code}
> 2019-04-03 00:52:50,125 WARN  ipc.Server (Server.java:logException(2724)) - 
> IPC Server handler 16 on 59758, call Call#2270 Retry#0 
> org.apache.hadoop.hdds.scm.protocol.StorageContainerLocationProtocol.getC
> ontainerWithPipeline from 192.168.0.108:60011
> java.util.ConcurrentModificationException
> at 
> java.util.HashMap$KeySpliterator.forEachRemaining(HashMap.java:1558)
> at 
> java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481)
> at 
> 

[jira] [Work logged] (HDDS-1332) Add some logging for flaky test testStartStopDatanodeStateMachine

2019-04-04 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1332?focusedWorklogId=223348&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-223348
 ]

ASF GitHub Bot logged work on HDDS-1332:


Author: ASF GitHub Bot
Created on: 04/Apr/19 23:18
Start Date: 04/Apr/19 23:18
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #697: [HDDS-1332] 
Attempt to fix flaky test testStartStopDatanodeStateMachine
URL: https://github.com/apache/hadoop/pull/697#issuecomment-480096553
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 25 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 998 | trunk passed |
   | +1 | compile | 39 | trunk passed |
   | +1 | checkstyle | 18 | trunk passed |
   | +1 | mvnsite | 30 | trunk passed |
   | +1 | shadedclient | 676 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 51 | trunk passed |
   | +1 | javadoc | 26 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 39 | the patch passed |
   | +1 | compile | 28 | the patch passed |
   | +1 | javac | 28 | the patch passed |
   | +1 | checkstyle | 13 | the patch passed |
   | +1 | mvnsite | 32 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 723 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 54 | the patch passed |
   | +1 | javadoc | 21 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 50 | container-service in the patch failed. |
   | +1 | asflicense | 25 | The patch does not generate ASF License warnings. |
   | | | 2915 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-697/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/697 |
   | JIRA Issue | HDDS-1332 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  findbugs  checkstyle  |
   | uname | Linux b4c485abc744 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 215ffc7 |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | findbugs | v3.1.0-RC1 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-697/1/artifact/out/patch-unit-hadoop-hdds_container-service.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-697/1/testReport/ |
   | Max. process+thread count | 415 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/container-service U: 
hadoop-hdds/container-service |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-697/1/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 223348)
Time Spent: 50m  (was: 40m)

> Add some logging for flaky test testStartStopDatanodeStateMachine
> -
>
> Key: HDDS-1332
> URL: https://issues.apache.org/jira/browse/HDDS-1332
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> testStartStopDatanodeStateMachine fails frequently in Jenkins. It also seems 
> to have a timing issue which may be different from the Jenkins failure.
> E.g. If I add a 10 second sleep as below I can get the test to fail 100%.
> {code}
> @@ -163,6 +163,7 @@ public void testStartStopDatanodeStateMachine() throws 
> IOException,
>  try (DatanodeStateMachine stateMachine =
>  new DatanodeStateMachine(getNewDatanodeDetails(), conf, null)) {
>stateMachine.startDaemon();
> +  Thread.sleep(10_000L);
> {code}
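> One common remedy for this kind of timing flakiness, sketched below only as 
> an illustration (the polled condition is a placeholder, not the actual fix 
> in this jira), is to poll for the expected state instead of sleeping:
> {code:java}
> import java.util.concurrent.TimeoutException;
> import org.apache.hadoop.test.GenericTestUtils;
> 
> public class WaitForStateExample {
>   /** Placeholder for whatever state the test needs to observe. */
>   interface Daemon {
>     boolean isRunning();
>   }
> 
>   static void awaitRunning(Daemon daemon)
>       throws TimeoutException, InterruptedException {
>     // Poll every 100 ms and give up after 30 s instead of a fixed sleep.
>     GenericTestUtils.waitFor(daemon::isRunning, 100, 30_000);
>   }
> }
> {code}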



--
This message was sent by 

[jira] [Commented] (HDDS-1332) Add some logging for flaky test testStartStopDatanodeStateMachine

2019-04-04 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16810401#comment-16810401
 ] 

Hadoop QA commented on HDDS-1332:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
25s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 16s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  3s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 50s{color} 
| {color:red} container-service in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 48m 35s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-697/1/artifact/out/Dockerfile
 |
| GITHUB PR | https://github.com/apache/hadoop/pull/697 |
| JIRA Issue | HDDS-1332 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux b4c485abc744 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / 215ffc7 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-697/1/artifact/out/patch-unit-hadoop-hdds_container-service.txt
 |
|  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-697/1/testReport/ |
| Max. process+thread count | 415 (vs. ulimit of 5500) |
| modules | C: hadoop-hdds/container-service U: hadoop-hdds/container-service |
| Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-697/1/console |
| Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |


This message was automatically generated.



> Add some logging for flaky test 

[jira] [Work logged] (HDDS-1389) Fix testSCMChillModeRestrictedOp

2019-04-04 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1389?focusedWorklogId=223340&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-223340
 ]

ASF GitHub Bot logged work on HDDS-1389:


Author: ASF GitHub Bot
Created on: 04/Apr/19 22:56
Start Date: 04/Apr/19 22:56
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #696: 
HDDS-1389. Fix testSCMChillModeRestrictedOp.
URL: https://github.com/apache/hadoop/pull/696#discussion_r272396982
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/chillmode/ChillModeHandler.java
 ##
 @@ -88,16 +88,25 @@ public ChillModeHandler(Configuration configuration,
   @Override
   public void onMessage(ChillModeStatus chillModeStatus,
   EventPublisher publisher) {
-try {
-  isInChillMode.set(chillModeStatus.getChillModeStatus());
-  scmClientProtocolServer.setChillModeStatus(isInChillMode.get());
-  scmBlockManager.setChillModeStatus(isInChillMode.get());
-  Thread.sleep(waitTime);
-} catch (InterruptedException e) {
-  Thread.currentThread().interrupt();
-} finally {
-  replicationManager.start();
+
+isInChillMode.set(chillModeStatus.getChillModeStatus());
+scmClientProtocolServer.setChillModeStatus(isInChillMode.get());
+scmBlockManager.setChillModeStatus(isInChillMode.get());
+
+if (!isInChillMode.get()) {
+  final Thread chillModeExitThread = new Thread(() -> {
+try {
+  Thread.sleep(waitTime);
+} catch (InterruptedException e) {
 
 Review comment:
   Had an offline discussion; we shall make this a daemon thread.
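   A minimal runnable sketch of that suggestion (illustrative only; it mirrors 
   the diff above rather than the final patch):
   
   ```java
   public class DaemonThreadSketch {
     public static void main(String[] args) {
       final Thread chillModeExitThread = new Thread(() -> {
         // the delayed work (e.g. the waitTime sleep) would go here
       });
       // Daemon, so a pending wait can never block JVM shutdown.
       chillModeExitThread.setDaemon(true);
       chillModeExitThread.start();
     }
   }
   ```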
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 223340)
Time Spent: 0.5h  (was: 20m)

> Fix testSCMChillModeRestrictedOp
> 
>
> Key: HDDS-1389
> URL: https://issues.apache.org/jira/browse/HDDS-1389
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Nanda kumar
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> This test failure is caused by HDDS-1207: the same ChillModeHandler thread 
> is used for the waitTime sleep, which causes this test to fail.
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1389) Fix testSCMChillModeRestrictedOp

2019-04-04 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1389?focusedWorklogId=223341&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-223341
 ]

ASF GitHub Bot logged work on HDDS-1389:


Author: ASF GitHub Bot
Created on: 04/Apr/19 22:56
Start Date: 04/Apr/19 22:56
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #696: 
HDDS-1389. Fix testSCMChillModeRestrictedOp.
URL: https://github.com/apache/hadoop/pull/696#discussion_r272396982
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/chillmode/ChillModeHandler.java
 ##
 @@ -88,16 +88,25 @@ public ChillModeHandler(Configuration configuration,
   @Override
   public void onMessage(ChillModeStatus chillModeStatus,
   EventPublisher publisher) {
-try {
-  isInChillMode.set(chillModeStatus.getChillModeStatus());
-  scmClientProtocolServer.setChillModeStatus(isInChillMode.get());
-  scmBlockManager.setChillModeStatus(isInChillMode.get());
-  Thread.sleep(waitTime);
-} catch (InterruptedException e) {
-  Thread.currentThread().interrupt();
-} finally {
-  replicationManager.start();
+
+isInChillMode.set(chillModeStatus.getChillModeStatus());
+scmClientProtocolServer.setChillModeStatus(isInChillMode.get());
+scmBlockManager.setChillModeStatus(isInChillMode.get());
+
+if (!isInChillMode.get()) {
+  final Thread chillModeExitThread = new Thread(() -> {
+try {
+  Thread.sleep(waitTime);
+} catch (InterruptedException e) {
 
 Review comment:
   Had an offline discussion; we shall make this a daemon thread.
   Adding this here so we don't miss it.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 223341)
Time Spent: 40m  (was: 0.5h)

> Fix testSCMChillModeRestrictedOp
> 
>
> Key: HDDS-1389
> URL: https://issues.apache.org/jira/browse/HDDS-1389
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Nanda kumar
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> This test failure is caused by HDDS-1207: the same ChillModeHandler thread 
> is used for the waitTime sleep, which causes this test to fail.
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1389) Fix testSCMChillModeRestrictedOp

2019-04-04 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1389?focusedWorklogId=223334&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-223334
 ]

ASF GitHub Bot logged work on HDDS-1389:


Author: ASF GitHub Bot
Created on: 04/Apr/19 22:53
Start Date: 04/Apr/19 22:53
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #696: HDDS-1389. Fix 
testSCMChillModeRestrictedOp.
URL: https://github.com/apache/hadoop/pull/696#issuecomment-480091773
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 27 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 1087 | trunk passed |
   | +1 | compile | 50 | trunk passed |
   | +1 | checkstyle | 21 | trunk passed |
   | +1 | mvnsite | 31 | trunk passed |
   | +1 | shadedclient | 664 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 37 | trunk passed |
   | +1 | javadoc | 20 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 34 | the patch passed |
   | +1 | compile | 23 | the patch passed |
   | +1 | javac | 23 | the patch passed |
   | +1 | checkstyle | 13 | the patch passed |
   | +1 | mvnsite | 25 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 684 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 41 | the patch passed |
   | +1 | javadoc | 21 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 93 | server-scm in the patch passed. |
   | +1 | asflicense | 25 | The patch does not generate ASF License warnings. |
   | | | 2962 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-696/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/696 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  findbugs  checkstyle  |
   | uname | Linux f0d0f7d52b4d 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 215ffc7 |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | findbugs | v3.1.0-RC1 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-696/2/testReport/ |
   | Max. process+thread count | 445 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/server-scm U: hadoop-hdds/server-scm |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-696/2/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 223334)
Time Spent: 20m  (was: 10m)

> Fix testSCMChillModeRestrictedOp
> 
>
> Key: HDDS-1389
> URL: https://issues.apache.org/jira/browse/HDDS-1389
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Nanda kumar
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> This test failure is caused by HDDS-1207: the same ChillModeHandler thread 
> is used for the waitTime sleep, which causes this test to fail.
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1390) Remove hk2 dependency exclusions from ozone s3gateway module.

2019-04-04 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1390?focusedWorklogId=223325&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-223325
 ]

ASF GitHub Bot logged work on HDDS-1390:


Author: ASF GitHub Bot
Created on: 04/Apr/19 22:51
Start Date: 04/Apr/19 22:51
Worklog Time Spent: 10m 
  Work Description: avijayanhwx commented on pull request #698: HDDS-1390 - 
Remove hk2 dependency exclusions from ozone s3gateway mod…
URL: https://github.com/apache/hadoop/pull/698
 
 
   …ule.
   
   Some hk2 transitive dependencies were mistakenly excluded in HDDS-1358 to 
solve maven enforcer plugin issues. This jira cleans that up.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 223325)
Time Spent: 10m
Remaining Estimate: 0h

> Remove hk2 dependency exclusions from ozone s3gateway module.
> -
>
> Key: HDDS-1390
> URL: https://issues.apache.org/jira/browse/HDDS-1390
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Aravindan Vijayan
>Assignee: Aravindan Vijayan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
> Attachments: HDDS-1390-000.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Some hk2 transitive dependencies were mistakenly excluded in HDDS-1358 to 
> solve maven enforcer plugin issues. This jira cleans that up. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1390) Remove hk2 dependency exclusions from ozone s3gateway module.

2019-04-04 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1390?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-1390:
-
Labels: pull-request-available  (was: )

> Remove hk2 dependency exclusions from ozone s3gateway module.
> -
>
> Key: HDDS-1390
> URL: https://issues.apache.org/jira/browse/HDDS-1390
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Aravindan Vijayan
>Assignee: Aravindan Vijayan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
> Attachments: HDDS-1390-000.patch
>
>
> Some hk2 transitive dependencies were mistakenly excluded in HDDS-1358 to 
> solve maven enforcer plugin issues. This jira cleans that up. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14415) Backport HDFS-13799 to branch-2

2019-04-04 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14415?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16810364#comment-16810364
 ] 

Hadoop QA commented on HDFS-14415:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 10m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} branch-2 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
15s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
50s{color} | {color:green} branch-2 passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
42s{color} | {color:green} branch-2 passed with JDK v1.8.0_191 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
28s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
53s{color} | {color:green} branch-2 passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
46s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in branch-2 has 1 
extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
0s{color} | {color:green} branch-2 passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
40s{color} | {color:green} branch-2 passed with JDK v1.8.0_191 {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed with JDK v1.8.0_191 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed with JDK v1.8.0_191 {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 53m 48s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 91m 56s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys |
|   | hadoop.hdfs.tools.TestDFSZKFailoverController |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:da67579 |
| JIRA Issue | HDFS-14415 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12964894/HDFS-14415-branch-2.000.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux fccf5e7dabec 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 

[jira] [Work logged] (HDDS-1329) Update documentation for Ozone-0.4.0 release

2019-04-04 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1329?focusedWorklogId=223304=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-223304
 ]

ASF GitHub Bot logged work on HDDS-1329:


Author: ASF GitHub Bot
Created on: 04/Apr/19 22:32
Start Date: 04/Apr/19 22:32
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #687: HDDS-1329. Update 
documentation for Ozone-0.4.0 release. Contributed By Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/687#issuecomment-480087223
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 24 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 975 | trunk passed |
   | +1 | mvnsite | 20 | trunk passed |
   | +1 | shadedclient | 1584 | branch has no errors when building and testing 
our client artifacts. |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 18 | the patch passed |
   | +1 | mvnsite | 15 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 687 | patch has no errors when building and testing 
our client artifacts. |
   ||| _ Other Tests _ |
   | +1 | asflicense | 22 | The patch does not generate ASF License warnings. |
   | | | 2430 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-687/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/687 |
   | Optional Tests |  dupname  asflicense  mvnsite  |
   | uname | Linux 52e1826f9230 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 215ffc7 |
   | maven | version: Apache Maven 3.3.9 |
   | Max. process+thread count | 443 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/docs U: hadoop-hdds/docs |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-687/3/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 223304)
Time Spent: 7h 10m  (was: 7h)

> Update documentation for Ozone-0.4.0 release
> 
>
> Key: HDDS-1329
> URL: https://issues.apache.org/jira/browse/HDDS-1329
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 7h 10m
>  Remaining Estimate: 0h
>
> We need to update the documentation of Ozone for all the new features that 
> are part of the 0.4.0 release. This is a 0.4.0 blocker JIRA.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1386) Build failing due to rat check failure introduced by HDDS-1189

2019-04-04 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1386?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16810335#comment-16810335
 ] 

Hadoop QA commented on HDDS-1386:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
29s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
 0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
21m 52s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
5s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  3m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m  2s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
50s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  2m 27s{color} 
| {color:red} hadoop-hdds in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 14m 44s{color} 
| {color:red} hadoop-ozone in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
28s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 65m 10s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.container.common.TestDatanodeStateMachine |
|   | hadoop.ozone.om.TestScmChillMode |
|   | hadoop.ozone.client.rpc.TestCloseContainerHandlingByClient |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/PreCommit-HDDS-Build/2637/artifact/out/Dockerfile 
|
| JIRA Issue | HDDS-1386 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12964884/HDDS-1386.01.patch |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite 
unit shadedclient xml |
| uname | Linux a7d2c07a007c 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HDDS-Build/ozone.sh |
| git revision | trunk / 215ffc7 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| unit | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2637/artifact/out/patch-unit-hadoop-hdds.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2637/artifact/out/patch-unit-hadoop-ozone.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2637/testReport/ |
| Max. process+thread count | 5008 (vs. ulimit of 

[jira] [Updated] (HDDS-1393) Convert all OM Bucket related operations to HA model

2019-04-04 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1393?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-1393:
-
Labels:   (was: pull-request-available)

> Convert all OM Bucket related operations to HA model
> 
>
> Key: HDDS-1393
> URL: https://issues.apache.org/jira/browse/HDDS-1393
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>
> In this jira, we shall convert all OM Bucket related operations to the OM HA 
> model, which is a two-step process.
>  # StartTransaction, where we validate the request, check for any errors, and 
> return the response.
>  # ApplyTransaction, where the original OM request has a response that needs 
> to be applied to the OM DB. This step only applies that response to the OM DB.
> In this way, requests that fail validation (for example, bucket not found, or 
> an unsatisfied precondition such as a bucket having to be empty before 
> deletion) are rejected during StartTransaction, and failed requests are never 
> written to the Raft log.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-1393) Convert all OM Bucket related operations to HA model

2019-04-04 Thread Bharat Viswanadham (JIRA)
Bharat Viswanadham created HDDS-1393:


 Summary: Convert all OM Bucket related operations to HA model
 Key: HDDS-1393
 URL: https://issues.apache.org/jira/browse/HDDS-1393
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Bharat Viswanadham
Assignee: Bharat Viswanadham


In this jira, we shall convert all OM related operations to the OM HA model, 
which is a two-step process.
 # StartTransaction, where we validate the request, check for any errors, and 
return the response.
 # ApplyTransaction, where the original OM request has a response that needs 
to be applied to the OM DB. This step only applies that response to the OM DB.

In this way, requests that fail validation (for example, volume not found, or 
an unsatisfied precondition such as a volume having to be empty before 
deletion) are rejected during StartTransaction, and failed requests are never 
written to the Raft log.
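
To make the two-step flow concrete, here is a minimal, self-contained sketch. 
Every name in it (TwoStepVolumeOp, the table-key layout, the String "response") 
is an illustrative assumption, not the actual Ozone Manager API:

{code:java}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hedged sketch of the two-step flow described above. All names are
// illustrative; the real Ozone Manager interfaces differ.
public final class TwoStepVolumeOp {

  private final Map<String, String> omDb = new ConcurrentHashMap<>();

  // Step 1: validate only. A failed request throws here and is therefore
  // never replicated to the Raft log.
  public String startCreateVolume(String volume, String owner) {
    if (omDb.containsKey("/volumeTable/" + volume)) {
      throw new IllegalStateException("Volume already exists: " + volume);
    }
    return owner; // the precomputed "response" that Ratis would replicate
  }

  // Step 2: apply the pre-validated response to the OM DB, without
  // re-running any checks.
  public void applyCreateVolume(String volume, String response) {
    omDb.put("/volumeTable/" + volume, response);
  }
}
{code}

The point of the split is that validation failures happen in step 1, before 
replication, so they never consume space in the Raft log.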



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1393) Convert all OM Bucket related operations to HA model

2019-04-04 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1393?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-1393:
-
Description: 
In this jira, we shall convert all OM Bucket related operations to the OM HA 
model, which is a two-step process.
 # StartTransaction, where we validate the request, check for any errors, and 
return the response.
 # ApplyTransaction, where the original OM request has a response that needs 
to be applied to the OM DB. This step only applies that response to the OM DB.

In this way, requests that fail validation (for example, bucket not found, or 
an unsatisfied precondition such as a bucket having to be empty before 
deletion) are rejected during StartTransaction, and failed requests are never 
written to the Raft log.

  was:
In this jira, we shall convert all OM related operations to the OM HA model, 
which is a two-step process.
 # StartTransaction, where we validate the request, check for any errors, and 
return the response.
 # ApplyTransaction, where the original OM request has a response that needs 
to be applied to the OM DB. This step only applies that response to the OM DB.

In this way, requests that fail validation (for example, volume not found, or 
an unsatisfied precondition such as a volume having to be empty before 
deletion) are rejected during StartTransaction, and failed requests are never 
written to the Raft log.


> Convert all OM Bucket related operations to HA model
> 
>
> Key: HDDS-1393
> URL: https://issues.apache.org/jira/browse/HDDS-1393
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>
> In this jira, we shall convert all OM Bucket related operations to the OM HA 
> model, which is a two-step process.
>  # StartTransaction, where we validate the request, check for any errors, and 
> return the response.
>  # ApplyTransaction, where the original OM request has a response that needs 
> to be applied to the OM DB. This step only applies that response to the OM DB.
> In this way, requests that fail validation (for example, bucket not found, or 
> an unsatisfied precondition such as a bucket having to be empty before 
> deletion) are rejected during StartTransaction, and failed requests are never 
> written to the Raft log.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1379) Convert all OM Volume related operations to HA model

2019-04-04 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1379?focusedWorklogId=223278=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-223278
 ]

ASF GitHub Bot logged work on HDDS-1379:


Author: ASF GitHub Bot
Created on: 04/Apr/19 21:50
Start Date: 04/Apr/19 21:50
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on issue #689: HDDS-1379. 
Convert all OM Volume related operations to HA model.
URL: https://github.com/apache/hadoop/pull/689#issuecomment-480076561
 
 
   Thank you @arp7 for the review.
   Posted a patch to fix the ASF license errors and the Jenkins findbugs issue.
   I will commit this after a clean Jenkins run.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 223278)
Time Spent: 3h 50m  (was: 3h 40m)

> Convert all OM Volume related operations to HA model
> 
>
> Key: HDDS-1379
> URL: https://issues.apache.org/jira/browse/HDDS-1379
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h 50m
>  Remaining Estimate: 0h
>
> In this jira, we shall convert all OM related operations to the OM HA model, 
> which is a two-step process.
>  # StartTransaction, where we validate the request, check for any errors, and 
> return the response.
>  # ApplyTransaction, where the original OM request has a response that needs 
> to be applied to the OM DB. This step only applies that response to the OM DB.
> In this way, requests that fail validation (for example, volume not found, or 
> an unsatisfied precondition such as a volume having to be empty before 
> deletion) are rejected during StartTransaction, and failed requests are never 
> written to the Raft log.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-1133) Create robot test for Ozone TDE support

2019-04-04 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1133?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao reassigned HDDS-1133:


Assignee: Xiaoyu Yao

> Create robot test for Ozone TDE support
> ---
>
> Key: HDDS-1133
> URL: https://issues.apache.org/jira/browse/HDDS-1133
> Project: Hadoop Distributed Data Store
>  Issue Type: Test
>Reporter: Elek, Marton
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: beta1, newbie
>
> HDDS-1041 implemented TDE for Ozone and added the KMS server to the 
> compose/ozonesecure cluster definition.
> We need a simple Robot Framework based test to try out TDE from the command 
> line.
> This task requires a working ozonesecure docker-compose cluster first.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1385) Make the ozonesecure-mr environment definition version independent

2019-04-04 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1385?focusedWorklogId=223253=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-223253
 ]

ASF GitHub Bot logged work on HDDS-1385:


Author: ASF GitHub Bot
Created on: 04/Apr/19 21:10
Start Date: 04/Apr/19 21:10
Worklog Time Spent: 10m 
  Work Description: xiaoyuyao commented on issue #695: HDDS-1385. Make the 
ozonesecure-mr environment definition version
URL: https://github.com/apache/hadoop/pull/695#issuecomment-480064919
 
 
   @elek can you rebase against latest ozone-0.4 after HDDS-1333 is in?
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 223253)
Time Spent: 20m  (was: 10m)

> Make the ozonesecure-mr environment definition version independent
> --
>
> Key: HDDS-1385
> URL: https://issues.apache.org/jira/browse/HDDS-1385
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> The MapReduce example project on branch ozone-0.4 contains 0.5.0-SNAPSHOT 
> references in the dir:
> hadoop-ozone/dist/target/ozone-0.4.0-SNAPSHOT/compose/ozonesecure-mr
> After HDDS-1333 (which introduces filtering) it will be straightforward to 
> always use the current version.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1379) Convert all OM Volume related operations to HA model

2019-04-04 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1379?focusedWorklogId=223250=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-223250
 ]

ASF GitHub Bot logged work on HDDS-1379:


Author: ASF GitHub Bot
Created on: 04/Apr/19 21:04
Start Date: 04/Apr/19 21:04
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on issue #689: HDDS-1379. 
Convert all OM Volume related operations to HA model.
URL: https://github.com/apache/hadoop/pull/689#issuecomment-480063273
 
 
   Hi @arp7 
   Thanks for the review.
   Fixed Jenkins issues and addressed review comments.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 223250)
Time Spent: 3h 40m  (was: 3.5h)

> Convert all OM Volume related operations to HA model
> 
>
> Key: HDDS-1379
> URL: https://issues.apache.org/jira/browse/HDDS-1379
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h 40m
>  Remaining Estimate: 0h
>
> In this jira, we shall convert all OM related operations to the OM HA model, 
> which is a two-step process.
>  # StartTransaction, where we validate the request, check for any errors, and 
> return the response.
>  # ApplyTransaction, where the original OM request has a response that needs 
> to be applied to the OM DB. This step only applies that response to the OM DB.
> In this way, requests that fail validation (for example, volume not found, or 
> an unsatisfied precondition such as a volume having to be empty before 
> deletion) are rejected during StartTransaction, and failed requests are never 
> written to the Raft log.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1379) Convert all OM Volume related operations to HA model

2019-04-04 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1379?focusedWorklogId=223249=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-223249
 ]

ASF GitHub Bot logged work on HDDS-1379:


Author: ASF GitHub Bot
Created on: 04/Apr/19 21:03
Start Date: 04/Apr/19 21:03
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #689: 
HDDS-1379. Convert all OM Volume related operations to HA model.
URL: https://github.com/apache/hadoop/pull/689#discussion_r272366957
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneManager.java
 ##
 @@ -1594,6 +1598,107 @@ public void createVolume(OmVolumeArgs args) throws 
IOException {
 }
   }
 
+  @Override
+  public VolumeList startCreateVolume(OmVolumeArgs args) throws IOException {
+try {
+  // TODO: Need to add metrics and Audit log for HA requests
 
 Review comment:
   Filed https://issues.apache.org/jira/browse/HDDS-1392
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 223249)
Time Spent: 3.5h  (was: 3h 20m)

> Convert all OM Volume related operations to HA model
> 
>
> Key: HDDS-1379
> URL: https://issues.apache.org/jira/browse/HDDS-1379
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3.5h
>  Remaining Estimate: 0h
>
> In this jira, we shall convert all OM related operations to the OM HA model, 
> which is a two-step process.
>  # StartTransaction, where we validate the request, check for any errors, and 
> return the response.
>  # ApplyTransaction, where the original OM request has a response that needs 
> to be applied to the OM DB. This step only applies that response to the OM DB.
> In this way, requests that fail validation (for example, volume not found, or 
> an unsatisfied precondition such as a volume having to be empty before 
> deletion) are rejected during StartTransaction, and failed requests are never 
> written to the Raft log.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1392) Add metrics and AuditLogging for newly added OM HA methods

2019-04-04 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1392?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-1392:
-
Issue Type: Sub-task  (was: Bug)
Parent: HDDS-505

> Add metrics and AuditLogging for newly added OM HA methods
> --
>
> Key: HDDS-1392
> URL: https://issues.apache.org/jira/browse/HDDS-1392
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Priority: Major
>
> Refer HDDS-1379 Jira for which methods need this support.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1392) Add metrics and AuditLogging for newly added OM HA methods

2019-04-04 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1392?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-1392:
-
Labels: newbie  (was: )

> Add metrics and AuditLogging for newly added OM HA methods
> --
>
> Key: HDDS-1392
> URL: https://issues.apache.org/jira/browse/HDDS-1392
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Priority: Major
>  Labels: newbie
>
> Refer HDDS-1379 Jira for which methods need this support.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-1392) Add metrics and AuditLogging for newly added OM HA methods

2019-04-04 Thread Bharat Viswanadham (JIRA)
Bharat Viswanadham created HDDS-1392:


 Summary: Add metrics and AuditLogging for newly added OM HA methods
 Key: HDDS-1392
 URL: https://issues.apache.org/jira/browse/HDDS-1392
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Bharat Viswanadham


Refer HDDS-1379 Jira for which methods need this support.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1379) Convert all OM Volume related operations to HA model

2019-04-04 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1379?focusedWorklogId=223247=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-223247
 ]

ASF GitHub Bot logged work on HDDS-1379:


Author: ASF GitHub Bot
Created on: 04/Apr/19 21:01
Start Date: 04/Apr/19 21:01
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #689: 
HDDS-1379. Convert all OM Volume related operations to HA model.
URL: https://github.com/apache/hadoop/pull/689#discussion_r272366237
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneManager.java
 ##
 @@ -1594,6 +1598,107 @@ public void createVolume(OmVolumeArgs args) throws 
IOException {
 }
   }
 
+  @Override
+  public VolumeList startCreateVolume(OmVolumeArgs args) throws IOException {
+try {
+  // TODO: Need to add metrics and Audit log for HA requests
+  if(isAclEnabled) {
 
 Review comment:
   Done
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 223247)
Time Spent: 3h 20m  (was: 3h 10m)

> Convert all OM Volume related operations to HA model
> 
>
> Key: HDDS-1379
> URL: https://issues.apache.org/jira/browse/HDDS-1379
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h 20m
>  Remaining Estimate: 0h
>
> In this jira, we shall convert all OM related operations to the OM HA model, 
> which is a two-step process.
>  # StartTransaction, where we validate the request, check for any errors, and 
> return the response.
>  # ApplyTransaction, where the original OM request has a response that needs 
> to be applied to the OM DB. This step only applies that response to the OM DB.
> In this way, requests that fail validation (for example, volume not found, or 
> an unsatisfied precondition such as a volume having to be empty before 
> deletion) are rejected during StartTransaction, and failed requests are never 
> written to the Raft log.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1379) Convert all OM Volume related operations to HA model

2019-04-04 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1379?focusedWorklogId=223246=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-223246
 ]

ASF GitHub Bot logged work on HDDS-1379:


Author: ASF GitHub Bot
Created on: 04/Apr/19 21:01
Start Date: 04/Apr/19 21:01
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #689: 
HDDS-1379. Convert all OM Volume related operations to HA model.
URL: https://github.com/apache/hadoop/pull/689#discussion_r272366183
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneManager.java
 ##
 @@ -1594,6 +1598,107 @@ public void createVolume(OmVolumeArgs args) throws 
IOException {
 }
   }
 
+  @Override
+  public VolumeList startCreateVolume(OmVolumeArgs args) throws IOException {
+try {
+  // TODO: Need to add metrics and Audit log for HA requests
+  if(isAclEnabled) {
+checkAcls(ResourceType.VOLUME, StoreType.OZONE,
+ACLType.CREATE, args.getVolume(), null, null);
+  }
+  VolumeList volumeList = volumeManager.createVolume(args);
+  return volumeList;
+} catch (Exception ex) {
+  throw ex;
 
 Review comment:
   Done.
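
   For readers following the thread: a catch block that only rethrows adds 
nothing, so "Done" here presumably means the method was reduced to something 
like the sketch below. This is a hypothetical simplification of the quoted 
diff, not the actual committed code:

{code:java}
// Hypothetical simplification of the snippet under review: the redundant
// catch/rethrow is dropped. A catch is only worth keeping once it does real
// work first (e.g. the metrics/audit logging mentioned in the TODO).
@Override
public VolumeList startCreateVolume(OmVolumeArgs args) throws IOException {
  if (isAclEnabled) {
    checkAcls(ResourceType.VOLUME, StoreType.OZONE,
        ACLType.CREATE, args.getVolume(), null, null);
  }
  return volumeManager.createVolume(args);
}
{code}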
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 223246)
Time Spent: 3h 10m  (was: 3h)

> Convert all OM Volume related operations to HA model
> 
>
> Key: HDDS-1379
> URL: https://issues.apache.org/jira/browse/HDDS-1379
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h 10m
>  Remaining Estimate: 0h
>
> In this jira, we shall convert all OM related operations to the OM HA model, 
> which is a two-step process.
>  # StartTransaction, where we validate the request, check for any errors, and 
> return the response.
>  # ApplyTransaction, where the original OM request has a response that needs 
> to be applied to the OM DB. This step only applies that response to the OM DB.
> In this way, requests that fail validation (for example, volume not found, or 
> an unsatisfied precondition such as a volume having to be empty before 
> deletion) are rejected during StartTransaction, and failed requests are never 
> written to the Raft log.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1333) OzoneFileSystem can't work with spark/hadoop2.7 because incompatible security classes

2019-04-04 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1333?focusedWorklogId=223243=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-223243
 ]

ASF GitHub Bot logged work on HDDS-1333:


Author: ASF GitHub Bot
Created on: 04/Apr/19 21:00
Start Date: 04/Apr/19 21:00
Worklog Time Spent: 10m 
  Work Description: ajayydv commented on issue #653: HDDS-1333. 
OzoneFileSystem can't work with spark/hadoop2.7 because incompatible security 
classes
URL: https://github.com/apache/hadoop/pull/653#issuecomment-480061876
 
 
   +1
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 223243)
Time Spent: 6h 50m  (was: 6h 40m)

> OzoneFileSystem can't work with spark/hadoop2.7 because incompatible security 
> classes
> -
>
> Key: HDDS-1333
> URL: https://issues.apache.org/jira/browse/HDDS-1333
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 6h 50m
>  Remaining Estimate: 0h
>
> The current ozonefs compatibility layer is broken by: HDDS-1299.
> The spark jobs (including hadoop 2.7) can't be executed any more:
> {code}
> 2019-03-25 09:50:08 INFO  StateStoreCoordinatorRef:54 - Registered 
> StateStoreCoordinator endpoint
> Exception in thread "main" java.lang.NoClassDefFoundError: 
> org/apache/hadoop/crypto/key/KeyProviderTokenIssuer
> at java.lang.ClassLoader.defineClass1(Native Method)
> at java.lang.ClassLoader.defineClass(ClassLoader.java:763)
> at 
> java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
> at java.net.URLClassLoader.defineClass(URLClassLoader.java:468)
> at java.net.URLClassLoader.access$100(URLClassLoader.java:74)
> at java.net.URLClassLoader$1.run(URLClassLoader.java:369)
> at java.net.URLClassLoader$1.run(URLClassLoader.java:363)
> at java.security.AccessController.doPrivileged(Native Method)
> at java.net.URLClassLoader.findClass(URLClassLoader.java:362)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
> at java.lang.Class.forName0(Native Method)
> at java.lang.Class.forName(Class.java:348)
> at 
> org.apache.hadoop.conf.Configuration.getClassByNameOrNull(Configuration.java:2134)
> at 
> org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:2099)
> at 
> org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2193)
> at 
> org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2654)
> at 
> org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2667)
> at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:94)
> at 
> org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2703)
> at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2685)
> at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:373)
> at org.apache.hadoop.fs.Path.getFileSystem(Path.java:295)
> at 
> org.apache.spark.sql.execution.streaming.FileStreamSink$.hasMetadata(FileStreamSink.scala:45)
> at 
> org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:332)
> at 
> org.apache.spark.sql.DataFrameReader.loadV1Source(DataFrameReader.scala:223)
> at 
> org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:211)
> at 
> org.apache.spark.sql.DataFrameReader.text(DataFrameReader.scala:715)
> at 
> org.apache.spark.sql.DataFrameReader.textFile(DataFrameReader.scala:757)
> at 
> org.apache.spark.sql.DataFrameReader.textFile(DataFrameReader.scala:724)
> at org.apache.spark.examples.JavaWordCount.main(JavaWordCount.java:45)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
> at 
> 

[jira] [Work logged] (HDDS-1333) OzoneFileSystem can't work with spark/hadoop2.7 because incompatible security classes

2019-04-04 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1333?focusedWorklogId=223244=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-223244
 ]

ASF GitHub Bot logged work on HDDS-1333:


Author: ASF GitHub Bot
Created on: 04/Apr/19 21:00
Start Date: 04/Apr/19 21:00
Worklog Time Spent: 10m 
  Work Description: ajayydv commented on pull request #653: HDDS-1333. 
OzoneFileSystem can't work with spark/hadoop2.7 because incompatible security 
classes
URL: https://github.com/apache/hadoop/pull/653
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 223244)
Time Spent: 7h  (was: 6h 50m)

> OzoneFileSystem can't work with spark/hadoop2.7 because incompatible security 
> classes
> -
>
> Key: HDDS-1333
> URL: https://issues.apache.org/jira/browse/HDDS-1333
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 7h
>  Remaining Estimate: 0h
>
> The current ozonefs compatibility layer is broken by: HDDS-1299.
> The spark jobs (including hadoop 2.7) can't be executed any more:
> {code}
> 2019-03-25 09:50:08 INFO  StateStoreCoordinatorRef:54 - Registered 
> StateStoreCoordinator endpoint
> Exception in thread "main" java.lang.NoClassDefFoundError: 
> org/apache/hadoop/crypto/key/KeyProviderTokenIssuer
> at java.lang.ClassLoader.defineClass1(Native Method)
> at java.lang.ClassLoader.defineClass(ClassLoader.java:763)
> at 
> java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
> at java.net.URLClassLoader.defineClass(URLClassLoader.java:468)
> at java.net.URLClassLoader.access$100(URLClassLoader.java:74)
> at java.net.URLClassLoader$1.run(URLClassLoader.java:369)
> at java.net.URLClassLoader$1.run(URLClassLoader.java:363)
> at java.security.AccessController.doPrivileged(Native Method)
> at java.net.URLClassLoader.findClass(URLClassLoader.java:362)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
> at java.lang.Class.forName0(Native Method)
> at java.lang.Class.forName(Class.java:348)
> at 
> org.apache.hadoop.conf.Configuration.getClassByNameOrNull(Configuration.java:2134)
> at 
> org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:2099)
> at 
> org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2193)
> at 
> org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2654)
> at 
> org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2667)
> at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:94)
> at 
> org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2703)
> at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2685)
> at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:373)
> at org.apache.hadoop.fs.Path.getFileSystem(Path.java:295)
> at 
> org.apache.spark.sql.execution.streaming.FileStreamSink$.hasMetadata(FileStreamSink.scala:45)
> at 
> org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:332)
> at 
> org.apache.spark.sql.DataFrameReader.loadV1Source(DataFrameReader.scala:223)
> at 
> org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:211)
> at 
> org.apache.spark.sql.DataFrameReader.text(DataFrameReader.scala:715)
> at 
> org.apache.spark.sql.DataFrameReader.textFile(DataFrameReader.scala:757)
> at 
> org.apache.spark.sql.DataFrameReader.textFile(DataFrameReader.scala:724)
> at org.apache.spark.examples.JavaWordCount.main(JavaWordCount.java:45)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
> at 
> org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:849)
> at 
> 

[jira] [Updated] (HDFS-14416) Fix TestHdfsConfigFields for field dfs.client.failover.resolver.useFQDN

2019-04-04 Thread Fengnan Li (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14416?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fengnan Li updated HDFS-14416:
--
Attachment: HDFS-14416.001.patch

> Fix TestHdfsConfigFields for field dfs.client.failover.resolver.useFQDN
> ---
>
> Key: HDFS-14416
> URL: https://issues.apache.org/jira/browse/HDFS-14416
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.3.0
>Reporter: Íñigo Goiri
>Assignee: Fengnan Li
>Priority: Major
> Attachments: HDFS-14416.001.patch
>
>
> HDFS-14327 added dfs.client.failover.resolver.useFQDN and it's breaking 
> TestHdfsConfigFields.
> An example report is:
> https://builds.apache.org/job/PreCommit-HADOOP-Build/16118/testReport/



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14416) Fix TestHdfsConfigFields for field dfs.client.failover.resolver.useFQDN

2019-04-04 Thread Fengnan Li (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14416?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fengnan Li updated HDFS-14416:
--
Status: Patch Available  (was: Open)

> Fix TestHdfsConfigFields for field dfs.client.failover.resolver.useFQDN
> ---
>
> Key: HDFS-14416
> URL: https://issues.apache.org/jira/browse/HDFS-14416
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.3.0
>Reporter: Íñigo Goiri
>Assignee: Fengnan Li
>Priority: Major
> Attachments: HDFS-14416.001.patch
>
>
> HDFS-14327 added dfs.client.failover.resolver.useFQDN and it's breaking 
> TestHdfsConfigFields.
> An example report is:
> https://builds.apache.org/job/PreCommit-HADOOP-Build/16118/testReport/



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1389) Fix testSCMChillModeRestrictedOp

2019-04-04 Thread Nanda kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1389?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HDDS-1389:
--
Target Version/s: 0.5.0

> Fix testSCMChillModeRestrictedOp
> 
>
> Key: HDDS-1389
> URL: https://issues.apache.org/jira/browse/HDDS-1389
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Nanda kumar
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> This test failure is caused by HDDS-1207: the same ChillModeHandler thread is 
> used for the waitTime sleep, which blocks the handler and makes the test fail.
>  
>  
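
A hedged sketch of the kind of change this description implies, assuming the 
handler currently blocks its shared event thread with Thread.sleep(waitTime); 
all names below are illustrative, not the actual HDDS classes:

{code:java}
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Illustrative only: schedule the post-chill-mode action instead of sleeping
// on the shared event-handler thread, so other events (and the test) are not
// stalled behind the waitTime.
public final class ChillModeExitScheduler {

  private final ScheduledExecutorService scheduler =
      Executors.newSingleThreadScheduledExecutor();

  public void onChillModeExit(long waitTimeMs, Runnable unblockClientOps) {
    // Returns immediately; the handler thread stays free to process events.
    scheduler.schedule(unblockClientOps, waitTimeMs, TimeUnit.MILLISECONDS);
  }

  public void stop() {
    scheduler.shutdownNow();
  }
}
{code}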



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1389) Fix testSCMChillModeRestrictedOp

2019-04-04 Thread Nanda kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1389?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HDDS-1389:
--
Status: Patch Available  (was: Open)

> Fix testSCMChillModeRestrictedOp
> 
>
> Key: HDDS-1389
> URL: https://issues.apache.org/jira/browse/HDDS-1389
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Nanda kumar
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> This test failure is caused by HDDS-1207: the same ChillModeHandler thread is 
> used for the waitTime sleep, which blocks the handler and makes the test fail.
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1372) getContainerWithPipeline for a standalone pipeline fails with ConcurrentModificationException

2019-04-04 Thread Nanda kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1372?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HDDS-1372:
--
Status: Patch Available  (was: Open)

> getContainerWithPipeline for a standalone pipeline fails with 
> ConcurrentModificationException
> -
>
> Key: HDDS-1372
> URL: https://issues.apache.org/jira/browse/HDDS-1372
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Affects Versions: 0.5.0
>Reporter: Mukul Kumar Singh
>Assignee: Nanda kumar
>Priority: Major
>  Labels: MiniOzoneChaosCluster, pull-request-available
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> The excception is hit while fetching a pipeline during read.
> {code}
> 2019-04-03 00:52:50,125 WARN  ipc.Server (Server.java:logException(2724)) - 
> IPC Server handler 16 on 59758, call Call#2270 Retry#0 
> org.apache.hadoop.hdds.scm.protocol.StorageContainerLocationProtocol.getC
> ontainerWithPipeline from 192.168.0.108:60011
> java.util.ConcurrentModificationException
> at 
> java.util.HashMap$KeySpliterator.forEachRemaining(HashMap.java:1558)
> at 
> java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481)
> at 
> java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471)
> at 
> java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:708)
> at 
> java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
> at 
> java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:499)
> at 
> org.apache.hadoop.hdds.scm.server.SCMClientProtocolServer.getContainerWithPipeline(SCMClientProtocolServer.java:252)
> at 
> org.apache.hadoop.ozone.protocolPB.StorageContainerLocationProtocolServerSideTranslatorPB.getContainerWithPipeline(StorageContainerLocationProtocolServerSideTranslatorPB.java:144)
> at 
> org.apache.hadoop.hdds.protocol.proto.StorageContainerLocationProtocolProtos$StorageContainerLocationProtocolService$2.callBlockingMethod(StorageContainerLocationProtocolProtos.java:16390)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1025)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:876)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:822)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2682)
> {code}
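
For context, the failure mode in the stack trace is easy to reproduce: 
streaming over a plain HashMap while another thread mutates it is undefined 
behavior and typically throws ConcurrentModificationException. A minimal 
standalone demo (not the SCM code itself):

{code:java}
import java.util.HashMap;
import java.util.Map;

// Minimal illustration of the failure mode in the stack trace above:
// streaming a plain HashMap while another thread mutates it.
public final class CmeDemo {
  public static void main(String[] args) throws InterruptedException {
    Map<Integer, String> pipelines = new HashMap<>();
    for (int i = 0; i < 1_000; i++) {
      pipelines.put(i, "pipeline-" + i);
    }
    Thread writer = new Thread(() -> {
      for (int i = 1_000; i < 2_000; i++) {
        pipelines.put(i, "pipeline-" + i);
      }
    });
    writer.start();
    // Typically throws ConcurrentModificationException under the race.
    long count = pipelines.keySet().stream().filter(k -> k % 2 == 0).count();
    writer.join();
    System.out.println(count); // often never reached
  }
}
{code}

The usual remedies are to stream over a snapshot taken under the appropriate 
lock (for example new ArrayList<>(map.values())) or to switch to a concurrent 
collection.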



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1390) Remove hk2 dependency exclusions from ozone s3gateway module.

2019-04-04 Thread Aravindan Vijayan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1390?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aravindan Vijayan updated HDDS-1390:

Attachment: HDDS-1390-000.patch
Status: Patch Available  (was: Open)

[~bharatviswa] Can you review this change? 

> Remove hk2 dependency exclusions from ozone s3gateway module.
> -
>
> Key: HDDS-1390
> URL: https://issues.apache.org/jira/browse/HDDS-1390
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Aravindan Vijayan
>Assignee: Aravindan Vijayan
>Priority: Major
> Fix For: 0.5.0
>
> Attachments: HDDS-1390-000.patch
>
>
> Some hk2 transitive dependencies were mistakenly excluded in HDDS-1358 to 
> solve maven enforcer plugin issues. This jira cleans that up. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1390) Remove hk2 dependency exclusions from ozone s3gateway module.

2019-04-04 Thread Aravindan Vijayan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1390?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aravindan Vijayan updated HDDS-1390:

Description: Some hk2 transitive dependencies were mistakenly excluded in 
HDDS-1358 to solve maven enforcer plugin issues. This jira cleans that up. 

> Remove hk2 dependency exclusions from ozone s3gateway module.
> -
>
> Key: HDDS-1390
> URL: https://issues.apache.org/jira/browse/HDDS-1390
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Aravindan Vijayan
>Assignee: Aravindan Vijayan
>Priority: Major
> Fix For: 0.5.0
>
>
> Some hk2 transitive dependencies were mistakenly excluded in HDDS-1358 to 
> solve maven enforcer plugin issues. This jira cleans that up. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13853) RBF: RouterAdmin update cmd is overwriting the entry not updating the existing

2019-04-04 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-13853?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16810238#comment-16810238
 ] 

Íñigo Goiri commented on HDFS-13853:


{{TestHdfsConfigFields}} is already being tracked in HDFS-14416.
+1 on  [^HDFS-13853-HDFS-13891-09.patch].

> RBF: RouterAdmin update cmd is overwriting the entry not updating the existing
> --
>
> Key: HDFS-13853
> URL: https://issues.apache.org/jira/browse/HDFS-13853
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Dibyendu Karmakar
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-13853-HDFS-13891-01.patch, 
> HDFS-13853-HDFS-13891-02.patch, HDFS-13853-HDFS-13891-03.patch, 
> HDFS-13853-HDFS-13891-04.patch, HDFS-13853-HDFS-13891-05.patch, 
> HDFS-13853-HDFS-13891-06.patch, HDFS-13853-HDFS-13891-07.patch, 
> HDFS-13853-HDFS-13891-08.patch, HDFS-13853-HDFS-13891-09.patch
>
>
> {code:java}
> // Create a new entry
> Map destMap = new LinkedHashMap<>();
> for (String ns : nss) {
>   destMap.put(ns, dest);
> }
> MountTable newEntry = MountTable.newInstance(mount, destMap);
> {code}
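
A hedged sketch of the update-instead-of-overwrite idea (illustrative only; 
the actual fix is in the attached patches and goes through the Router's mount 
table API):

{code:java}
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative merge logic: start from the existing entry's destinations and
// apply only the namespaces the update command actually specifies, instead of
// building a brand-new entry that silently drops everything else.
public final class MountEntryMergeSketch {

  public static Map<String, String> mergeDestinations(
      Map<String, String> existing, Map<String, String> updates) {
    Map<String, String> merged = new LinkedHashMap<>(existing);
    merged.putAll(updates); // update only the provided namespaces
    return merged;
  }
}
{code}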



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14327) Using FQDN instead of IP to access servers with DNS resolving

2019-04-04 Thread Fengnan Li (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14327?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16810202#comment-16810202
 ] 

Fengnan Li commented on HDFS-14327:
---

Thanks for the notice, [~elgoiri]. I will start solving this.

> Using FQDN instead of IP to access servers with DNS resolving
> -
>
> Key: HDFS-14327
> URL: https://issues.apache.org/jira/browse/HDFS-14327
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Fengnan Li
>Assignee: Fengnan Li
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14327.001.patch, HDFS-14327.002.patch
>
>
> With [HDFS-14118|https://issues.apache.org/jira/browse/HDFS-14118], clients 
> can get the IPs of the servers (NN/Routers) and use those IP addresses to 
> access the machines. This will fail in a secure environment, as Kerberos uses 
> the domain name (FQDN) in the principal, so it won't recognize IP addresses.
> This task mainly adds a reverse lookup on top of the current mechanism to get 
> the domain name after the IP is fetched. After that, clients will still use 
> the domain name to access the servers.
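
The reverse lookup itself is standard JDK functionality. A minimal sketch of 
the idea, not the patch itself:

{code:java}
import java.net.InetAddress;
import java.net.UnknownHostException;

// Minimal sketch: after resolving a NN/Router IP, do a reverse lookup so the
// client keeps using the FQDN, which is what a Kerberos principal such as
// nn/_HOST@REALM expects.
public final class FqdnLookup {

  public static String toFqdn(String ip) throws UnknownHostException {
    InetAddress addr = InetAddress.getByName(ip);
    return addr.getCanonicalHostName(); // reverse DNS; falls back to the IP
  }

  public static void main(String[] args) throws UnknownHostException {
    System.out.println(toFqdn("127.0.0.1")); // typically "localhost"
  }
}
{code}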



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1363) ozone.metadata.dirs doesn't pick multiple dirs

2019-04-04 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1363?focusedWorklogId=223194=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-223194
 ]

ASF GitHub Bot logged work on HDDS-1363:


Author: ASF GitHub Bot
Created on: 04/Apr/19 18:46
Start Date: 04/Apr/19 18:46
Worklog Time Spent: 10m 
  Work Description: adoroszlai commented on issue #691: [HDDS-1363] 
ozone.metadata.dirs doesn't pick multiple dirs
URL: https://github.com/apache/hadoop/pull/691#issuecomment-480018016
 
 
   @nandakumar131 @hanishakoneru please review
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 223194)
Time Spent: 0.5h  (was: 20m)

> ozone.metadata.dirs doesn't pick multiple dirs
> --
>
> Key: HDDS-1363
> URL: https://issues.apache.org/jira/browse/HDDS-1363
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Reporter: Sandeep Nemuri
>Assignee: Doroszlai, Attila
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> {{ozone.metadata.dirs}} doesn't pick up comma-separated paths.
>  It only picks one path, despite the plural property name 
> _ozone.metadata.dir{color:#FF0000}s{color}_
> {code:xml}
> <property>
>   <name>ozone.metadata.dirs</name>
>   <value>/data/data1/meta,/home/hdfs/data/meta</value>
> </property>
> {code}
> {code:java}
> 2019-03-31 18:44:54,824 WARN server.ServerUtils: ozone.scm.db.dirs is not 
> configured. We recommend adding this setting. Falling back to 
> ozone.metadata.dirs instead.
> SCM initialization succeeded.Current cluster id for 
> sd=/data/data1/meta,/home/hdfs/data/meta/scm;cid=CID-1ad502d1-0104-4055-838b-1208ab78f35c
> 2019-03-31 18:44:55,079 INFO server.StorageContainerManager: SHUTDOWN_MSG:
> {code}
> {code:java}
> [hdfs@localhost ozone-0.5.0-SNAPSHOT]$ ls 
> //data/data1/meta,/home/hdfs/data/meta/scm/current/VERSION
> VERSION
> {code}
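
For reference, a hedged sketch of reading the value as multiple paths with 
Hadoop's {{Configuration#getTrimmedStrings}}, which splits on commas; the 
failing code path evidently treats the whole value as a single path:

{code:java}
import java.io.File;
import org.apache.hadoop.conf.Configuration;

public class MetadataDirs {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    conf.set("ozone.metadata.dirs", "/data/data1/meta,/home/hdfs/data/meta");
    // getTrimmedStrings splits the comma-separated value into entries.
    for (String dir : conf.getTrimmedStrings("ozone.metadata.dirs")) {
      System.out.println(new File(dir).getPath());
    }
  }
}
{code}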



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1332) Add some logging for flaky test testStartStopDatanodeStateMachine

2019-04-04 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1332?focusedWorklogId=223193=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-223193
 ]

ASF GitHub Bot logged work on HDDS-1332:


Author: ASF GitHub Bot
Created on: 04/Apr/19 18:44
Start Date: 04/Apr/19 18:44
Worklog Time Spent: 10m 
  Work Description: adoroszlai commented on pull request #697: [HDDS-1332] 
Attempt to fix flaky test testStartStopDatanodeStateMachine
URL: https://github.com/apache/hadoop/pull/697
 
 
   ## What changes were proposed in this pull request?
   
   `testStartStopDatanodeStateMachine` is flaky, causing [occasional pre-commit 
build 
failures](https://builds.apache.org/job/hadoop-multibranch/job/PR-691/1/artifact/out/patch-unit-hadoop-hdds_container-service.txt).
  [HDDS-1332](https://issues.apache.org/jira/browse/HDDS-1332) added some 
logging to find out more about the cause.
   
   I think the problem is not test-specific, and is caused by the following: 
`SCMConnectionManager#scmMachines` is a plain `HashMap`, guarded by a 
`ReadWriteLock` in most places where it's used, except `getValues()`.  The 
method also returns the values collection without any write protection (though 
currently none of the callers modify it).
   
   This is an attempt to fix the cause by acquiring the read lock and creating 
a read-only copy.
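
   A minimal, self-contained sketch of that pattern, assuming a registry 
shaped like `SCMConnectionManager` (names and signatures here are 
illustrative, not the actual patch):

{code:java}
import java.util.ArrayList;
import java.util.Collection;
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

class ConnectionRegistry<K, V> {
  private final Map<K, V> machines = new HashMap<>();
  private final ReadWriteLock mapLock = new ReentrantReadWriteLock();

  Collection<V> getValues() {
    mapLock.readLock().lock();
    try {
      // Copy under the read lock so callers never iterate the live
      // HashMap while a writer mutates it, and hand it back read-only.
      return Collections.unmodifiableList(new ArrayList<>(machines.values()));
    } finally {
      mapLock.readLock().unlock();
    }
  }
}
{code}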
   
   https://issues.apache.org/jira/browse/HDDS-1332
   
   ## How was this patch tested?
   
   Ran affected unit tests several times, plus tried `ozone` docker-compose 
cluster.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 223193)
Time Spent: 40m  (was: 0.5h)

> Add some logging for flaky test testStartStopDatanodeStateMachine
> -
>
> Key: HDDS-1332
> URL: https://issues.apache.org/jira/browse/HDDS-1332
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> testStartStopDatanodeStateMachine fails frequently in Jenkins. It also seems 
> to have a timing issue which may be different from the Jenkins failure.
> E.g. If I add a 10 second sleep as below I can get the test to fail 100%.
> {code}
> @@ -163,6 +163,7 @@ public void testStartStopDatanodeStateMachine() throws 
> IOException,
>  try (DatanodeStateMachine stateMachine =
>  new DatanodeStateMachine(getNewDatanodeDetails(), conf, null)) {
>    stateMachine.startDaemon();
> +  Thread.sleep(10_000L);
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1329) Update documentation for Ozone-0.4.0 release

2019-04-04 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1329?focusedWorklogId=223192=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-223192
 ]

ASF GitHub Bot logged work on HDDS-1329:


Author: ASF GitHub Bot
Created on: 04/Apr/19 18:31
Start Date: 04/Apr/19 18:31
Worklog Time Spent: 10m 
  Work Description: ajayydv commented on pull request #687: HDDS-1329. 
Update documentation for Ozone-0.4.0 release. Contributed By Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/687#discussion_r272311176
 
 

 ##
 File path: hadoop-hdds/docs/content/OzoneSecurityArchitecture.md
 ##
 @@ -0,0 +1,82 @@
+---
+title: "Ozone Security Overview"
+date: "2019-April-03"
+menu:
+   main:
+   parent: Architecture
+weight: 11
+---
+
+
+# Security in Ozone #
+
+Starting with the Badlands release (ozone-0.4.0-alpha), an Ozone cluster can 
be secured against external threats. Specifically, it can be configured for 
the following security features:
+
+1. Authentication
+2. Authorization
+3. Audit
+4. Transparent Data Encryption (TDE)
+
+## Authentication ##
+
+### Kerberos ###
+Similar to Hadoop, Ozone allows Kerberos-based authentication. So one way to 
set up identities for all the daemons and clients is to create Kerberos 
keytabs and configure them like any other service in Hadoop.
+
+### Tokens ###
+Tokens are widely used in Hadoop to achieve lightweight authentication 
without compromising on security. The main motivation for using tokens inside 
Ozone is to prevent unauthorized access while keeping the protocol lightweight 
and without sharing secrets over the wire. Ozone utilizes three types of 
token:
+
+#### Delegation token ####
+Once a client establishes its identity via Kerberos, it can request a 
delegation token from OzoneManager. This token can be used by the client to 
prove its identity until the token expires. Like Hadoop delegation tokens, an 
Ozone delegation token has 3 important fields:
+
+Renewer: User responsible for renewing the token.
+Issue date: Time at which the token was issued.
+Max date: Time after which the token can't be renewed.
+
+Token operations like get, renew and cancel can only be performed over a 
Kerberos-authenticated connection. Clients can use a delegation token to 
establish a connection with OzoneManager and perform any file system/object 
store related operation, like listing the objects in a bucket or creating a 
volume.
+
+#### Block Tokens ####
+Block tokens are similar to delegation tokens in the sense that they are 
signed by OzoneManager. Block tokens are created by OM (OzoneManager) when a 
client request involves interaction with DataNodes, such as reading/writing 
Ozone keys. Unlike delegation tokens, there is no client API to request block 
tokens. Instead, they are handed transparently to the client along with 
key/block locations. Block tokens are validated by Datanodes when receiving 
read/write requests from clients. A block token can't be renewed explicitly 
by the client; a client with an expired block token will need to refetch the 
key/block locations to get new block tokens.
+#### S3Token ####
+Like block tokens, S3Tokens are handled transparently for clients. An S3Token 
is signed by the S3 secret created by the client. S3Gateway creates this token 
for every S3 client request. To create an S3Token, the user must have an S3 
secret.
+
+### Certificates ###
+Apart from Kerberos and tokens, Ozone utilizes certificate-based 
authentication for Ozone service components. To enable this, SCM 
(StorageContainerManager) bootstraps itself as a Certificate Authority when 
security is enabled. This allows all daemons inside Ozone to have an 
SCM-signed certificate. Below is a brief description of the steps involved:
+Datanodes and OzoneManagers submit a CSR (certificate signing request) to 
SCM.
+SCM verifies the identity of the DN (Datanode) or OM via Kerberos and 
generates a certificate.
+This certificate is used by OM and DN to prove their identities.
+Datanodes use the OzoneManager certificate to validate block tokens. This is 
possible because both of them (i.e. OzoneManager and Datanodes) trust 
SCM-signed certificates.
+
+## Authorization ##
+Ozone provides a pluggable API to control authorization of all client-related 
operations. The default implementation allows every request; clearly it is 
not meant for production environments. To configure a more fine-grained 
policy, one may configure the Ranger plugin for Ozone. Since it is a 
pluggable module, clients can also implement their own custom authorization 
policy and configure it using [ozone.acl.authorizer.class].
+
+## Audit ##
+Ozone provides the ability to audit all read & write operations to OM, SCM 
and Datanodes. Ozone audit leverages the Marker feature, which enables the 
user to selectively audit only READ or WRITE operations with a simple config 
change, without restarting the service(s).
+To enable/disable audit of READ operations, set filter.read.onMatch to NEUTRAL 
or 

[jira] [Work logged] (HDDS-1329) Update documentation for Ozone-0.4.0 release

2019-04-04 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1329?focusedWorklogId=223191=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-223191
 ]

ASF GitHub Bot logged work on HDDS-1329:


Author: ASF GitHub Bot
Created on: 04/Apr/19 18:31
Start Date: 04/Apr/19 18:31
Worklog Time Spent: 10m 
  Work Description: ajayydv commented on pull request #687: HDDS-1329. 
Update documentation for Ozone-0.4.0 release. Contributed By Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/687#discussion_r272311088
 
 

 ##
 File path: hadoop-hdds/docs/content/OzoneSecurityArchitecture.md
 ##
 @@ -0,0 +1,82 @@
+---
+title: "Ozone Security Overview"
+date: "2019-April-03"
+menu:
+   main:
+   parent: Architecture
+weight: 11
+---
+
+
+# Security in Ozone #
+
+Starting with the Badlands release (ozone-0.4.0-alpha), an Ozone cluster can 
be secured against external threats. Specifically, it can be configured for 
the following security features:
+
+1. Authentication
+2. Authorization
+3. Audit
+4. Transparent Data Encryption (TDE)
+
+## Authentication ##
+
+### Kerberos ###
+Similar to Hadoop, Ozone allows Kerberos-based authentication. So one way to 
set up identities for all the daemons and clients is to create Kerberos 
keytabs and configure them like any other service in Hadoop.
+
+### Tokens ###
+Tokens are widely used in Hadoop to achieve lightweight authentication 
without compromising on security. The main motivation for using tokens inside 
Ozone is to prevent unauthorized access while keeping the protocol lightweight 
and without sharing secrets over the wire. Ozone utilizes three types of 
token:
+
+#### Delegation token ####
+Once a client establishes its identity via Kerberos, it can request a 
delegation token from OzoneManager. This token can be used by the client to 
prove its identity until the token expires. Like Hadoop delegation tokens, an 
Ozone delegation token has 3 important fields:
+
+Renewer: User responsible for renewing the token.
+Issue date: Time at which the token was issued.
+Max date: Time after which the token can't be renewed.
+
+Token operations like get, renew and cancel can only be performed over a 
Kerberos-authenticated connection. Clients can use a delegation token to 
establish a connection with OzoneManager and perform any file system/object 
store related operation, like listing the objects in a bucket or creating a 
volume.
+
+#### Block Tokens ####
+Block tokens are similar to delegation tokens in the sense that they are 
signed by OzoneManager. Block tokens are created by OM (OzoneManager) when a 
client request involves interaction with DataNodes, such as reading/writing 
Ozone keys. Unlike delegation tokens, there is no client API to request block 
tokens. Instead, they are handed transparently to the client along with 
key/block locations. Block tokens are validated by Datanodes when receiving 
read/write requests from clients. A block token can't be renewed explicitly 
by the client; a client with an expired block token will need to refetch the 
key/block locations to get new block tokens.
+#### S3Token ####
+Like block tokens, S3Tokens are handled transparently for clients. An S3Token 
is signed by the S3 secret created by the client. S3Gateway creates this token 
for every S3 client request. To create an S3Token, the user must have an S3 
secret.
+
+### Certificates ###
+Apart from Kerberos and tokens, Ozone utilizes certificate-based 
authentication for Ozone service components. To enable this, SCM 
(StorageContainerManager) bootstraps itself as a Certificate Authority when 
security is enabled. This allows all daemons inside Ozone to have an 
SCM-signed certificate. Below is a brief description of the steps involved:
+Datanodes and OzoneManagers submit a CSR (certificate signing request) to 
SCM.
+SCM verifies the identity of the DN (Datanode) or OM via Kerberos and 
generates a certificate.
+This certificate is used by OM and DN to prove their identities.
+Datanodes use the OzoneManager certificate to validate block tokens. This is 
possible because both of them (i.e. OzoneManager and Datanodes) trust 
SCM-signed certificates.
+
+## Authorization ##
+Ozone provides a pluggable API to control authorization of all client-related 
operations. The default implementation allows every request; clearly it is 
not meant for production environments. To configure a more fine-grained 
policy, one may configure the Ranger plugin for Ozone. Since it is a 
pluggable module, clients can also implement their own custom authorization 
policy and configure it using [ozone.acl.authorizer.class].
+
+## Audit ##
 
 Review comment:
   thanks @dineshchitlangia 
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:

[jira] [Commented] (HDFS-13853) RBF: RouterAdmin update cmd is overwriting the entry not updating the existing

2019-04-04 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13853?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16810176#comment-16810176
 ] 

Hadoop QA commented on HDFS-13853:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-13891 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
41s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
 8s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m  
4s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 0s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
32s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 14s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
48s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
19s{color} | {color:green} HDFS-13891 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 32s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
14s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 78m 14s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 22m 
10s{color} | {color:green} hadoop-hdfs-rbf in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
30s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}166m 21s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.tools.TestHdfsConfigFields |
|   | hadoop.hdfs.TestClientProtocolForPipelineRecovery |
|   | hadoop.hdfs.server.datanode.TestDirectoryScanner |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.server.datanode.TestDataNodeMetrics |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-13853 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12964873/HDFS-13853-HDFS-13891-09.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux e0b357490a39 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 

[jira] [Updated] (HDDS-1389) Fix testSCMChillModeRestrictedOp

2019-04-04 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1389?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-1389:
-
Labels: pull-request-available  (was: )

> Fix testSCMChillModeRestrictedOp
> 
>
> Key: HDDS-1389
> URL: https://issues.apache.org/jira/browse/HDDS-1389
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Nanda kumar
>Priority: Major
>  Labels: pull-request-available
>
> This test failure is caused by HDDS-1207, as we use the same ChillModeHandler 
> thread for the waitTime sleep; because of that, the test fails.
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1389) Fix testSCMChillModeRestrictedOp

2019-04-04 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1389?focusedWorklogId=223187=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-223187
 ]

ASF GitHub Bot logged work on HDDS-1389:


Author: ASF GitHub Bot
Created on: 04/Apr/19 18:25
Start Date: 04/Apr/19 18:25
Worklog Time Spent: 10m 
  Work Description: nandakumar131 commented on pull request #696: 
HDDS-1389. Fix testSCMChillModeRestrictedOp.
URL: https://github.com/apache/hadoop/pull/696
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 223187)
Time Spent: 10m
Remaining Estimate: 0h

> Fix testSCMChillModeRestrictedOp
> 
>
> Key: HDDS-1389
> URL: https://issues.apache.org/jira/browse/HDDS-1389
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Nanda kumar
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> This test failure is caused by HDDS-1207, as we use the same ChillModeHandler 
> thread for the waitTime sleep; because of that, the test fails.
>  
>  
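
As a general illustration (not the committed fix; the class and method names 
here are invented), the usual way to avoid sleeping on a shared handler thread 
is to schedule the delayed action instead:

{code:java}
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

class ChillModeExitScheduler {
  private final ScheduledExecutorService scheduler =
      Executors.newSingleThreadScheduledExecutor();

  // Instead of Thread.sleep(waitTimeMs) on the handler thread, which
  // blocks every other event the handler must process, run the exit
  // action after the delay on a dedicated scheduler thread.
  void scheduleExit(Runnable exitAction, long waitTimeMs) {
    scheduler.schedule(exitAction, waitTimeMs, TimeUnit.MILLISECONDS);
  }
}
{code}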



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-1391) Add ability in OM to perform DB backups and serve delta backup files through an API.

2019-04-04 Thread Aravindan Vijayan (JIRA)
Aravindan Vijayan created HDDS-1391:
---

 Summary: Add ability in OM to perform DB backups and serve delta 
backup files through an API.
 Key: HDDS-1391
 URL: https://issues.apache.org/jira/browse/HDDS-1391
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
  Components: Ozone Manager
Reporter: Aravindan Vijayan
Assignee: Aravindan Vijayan
 Fix For: 0.5.0






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1329) Update documentation for Ozone-0.4.0 release

2019-04-04 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1329?focusedWorklogId=223175=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-223175
 ]

ASF GitHub Bot logged work on HDDS-1329:


Author: ASF GitHub Bot
Created on: 04/Apr/19 18:02
Start Date: 04/Apr/19 18:02
Worklog Time Spent: 10m 
  Work Description: dineshchitlangia commented on pull request #687: 
HDDS-1329. Update documentation for Ozone-0.4.0 release. Contributed By Ajay 
Kumar.
URL: https://github.com/apache/hadoop/pull/687#discussion_r272300064
 
 

 ##
 File path: hadoop-hdds/docs/content/OzoneSecurityArchitecture.md
 ##
 @@ -0,0 +1,82 @@
+---
+title: "Ozone Security Overview"
+date: "2019-April-03"
+menu:
+   main:
+   parent: Architecture
+weight: 11
+---
+
+
+# Security in Ozone #
+
+Starting with the Badlands release (ozone-0.4.0-alpha), an Ozone cluster can 
be secured against external threats. Specifically, it can be configured for 
the following security features:
+
+1. Authentication
+2. Authorization
+3. Audit
+4. Transparent Data Encryption (TDE)
+
+## Authentication ##
+
+### Kerberos ###
+Similar to Hadoop, Ozone allows Kerberos-based authentication. So one way to 
set up identities for all the daemons and clients is to create Kerberos 
keytabs and configure them like any other service in Hadoop.
+
+### Tokens ###
+Tokens are widely used in Hadoop to achieve lightweight authentication 
without compromising on security. The main motivation for using tokens inside 
Ozone is to prevent unauthorized access while keeping the protocol lightweight 
and without sharing secrets over the wire. Ozone utilizes three types of 
token:
+
+#### Delegation token ####
+Once a client establishes its identity via Kerberos, it can request a 
delegation token from OzoneManager. This token can be used by the client to 
prove its identity until the token expires. Like Hadoop delegation tokens, an 
Ozone delegation token has 3 important fields:
+
+Renewer: User responsible for renewing the token.
+Issue date: Time at which the token was issued.
+Max date: Time after which the token can't be renewed.
+
+Token operations like get, renew and cancel can only be performed over a 
Kerberos-authenticated connection. Clients can use a delegation token to 
establish a connection with OzoneManager and perform any file system/object 
store related operation, like listing the objects in a bucket or creating a 
volume.
+
+#### Block Tokens ####
+Block tokens are similar to delegation tokens in the sense that they are 
signed by OzoneManager. Block tokens are created by OM (OzoneManager) when a 
client request involves interaction with DataNodes, such as reading/writing 
Ozone keys. Unlike delegation tokens, there is no client API to request block 
tokens. Instead, they are handed transparently to the client along with 
key/block locations. Block tokens are validated by Datanodes when receiving 
read/write requests from clients. A block token can't be renewed explicitly 
by the client; a client with an expired block token will need to refetch the 
key/block locations to get new block tokens.
+#### S3Token ####
+Like block tokens, S3Tokens are handled transparently for clients. An S3Token 
is signed by the S3 secret created by the client. S3Gateway creates this token 
for every S3 client request. To create an S3Token, the user must have an S3 
secret.
+
+### Certificates ###
+Apart from Kerberos and tokens, Ozone utilizes certificate-based 
authentication for Ozone service components. To enable this, SCM 
(StorageContainerManager) bootstraps itself as a Certificate Authority when 
security is enabled. This allows all daemons inside Ozone to have an 
SCM-signed certificate. Below is a brief description of the steps involved:
+Datanodes and OzoneManagers submit a CSR (certificate signing request) to 
SCM.
+SCM verifies the identity of the DN (Datanode) or OM via Kerberos and 
generates a certificate.
+This certificate is used by OM and DN to prove their identities.
+Datanodes use the OzoneManager certificate to validate block tokens. This is 
possible because both of them (i.e. OzoneManager and Datanodes) trust 
SCM-signed certificates.
+
+## Authorization ##
+Ozone provides a pluggable API to control authorization of all client-related 
operations. The default implementation allows every request; clearly it is 
not meant for production environments. To configure a more fine-grained 
policy, one may configure the Ranger plugin for Ozone. Since it is a 
pluggable module, clients can also implement their own custom authorization 
policy and configure it using [ozone.acl.authorizer.class].
+
+## Audit ##
 
 Review comment:
   +1 for Audit section. Thanks @ajayydv !
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please 

[jira] [Updated] (HDDS-1389) Fix testSCMChillModeRestrictedOp

2019-04-04 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1389?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-1389:
-
Description: 
This test failure is caused by HDDS-1207, as we use the same ChillModeHandler 
thread for the waitTime sleep; because of that, the test fails.

 

 

  was:
This test failure is caused by HDDS-1285, as we use the same ChillModeHandler 
thread for the waitTime sleep; because of that, the test fails.

 

 


> Fix testSCMChillModeRestrictedOp
> 
>
> Key: HDDS-1389
> URL: https://issues.apache.org/jira/browse/HDDS-1389
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Nanda kumar
>Priority: Major
>
> This test failure is caused by HDDS-1207, as we use the same ChillModeHandler 
> thread for the waitTime sleep; because of that, the test fails.
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-14416) Fix TestHdfsConfigFields for field dfs.client.failover.resolver.useFQDN

2019-04-04 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/HDFS-14416?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri reassigned HDFS-14416:
--

Assignee: Fengnan Li

> Fix TestHdfsConfigFields for field dfs.client.failover.resolver.useFQDN
> ---
>
> Key: HDFS-14416
> URL: https://issues.apache.org/jira/browse/HDFS-14416
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.3.0
>Reporter: Íñigo Goiri
>Assignee: Fengnan Li
>Priority: Major
>
> HDFS-14327 added dfs.client.failover.resolver.useFQDN and it's breaking 
> TestHdfsConfigFields.
> An example report is:
> https://builds.apache.org/job/PreCommit-HADOOP-Build/16118/testReport/



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1329) Update documentation for Ozone-0.4.0 release

2019-04-04 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1329?focusedWorklogId=223161=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-223161
 ]

ASF GitHub Bot logged work on HDDS-1329:


Author: ASF GitHub Bot
Created on: 04/Apr/19 17:48
Start Date: 04/Apr/19 17:48
Worklog Time Spent: 10m 
  Work Description: xiaoyuyao commented on pull request #687: HDDS-1329. 
Update documentation for Ozone-0.4.0 release. Contributed By Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/687#discussion_r271927212
 
 

 ##
 File path: hadoop-hdds/docs/content/OzoneSecurityArchitecture.md
 ##
 @@ -0,0 +1,82 @@
+---
+title: "Ozone Security Overview"
+date: "2019-April-03"
+menu:
+   main:
+   parent: Architecture
+weight: 11
+---
+
+
+# Security in Ozone #
+
+Starting with the Badlands release (ozone-0.4.0-alpha), an Ozone cluster can 
be secured against external threats. Specifically, it can be configured for 
the following security features:
+
+1. Authentication
+2. Authorization
+3. Audit
+4. Transparent Data Encryption (TDE)
+
+## Authentication ##
+
+### Kerberos ###
+Similar to Hadoop, Ozone allows Kerberos-based authentication. So one way to 
set up identities for all the daemons and clients is to create Kerberos 
keytabs and configure them like any other service in Hadoop.
+
+### Tokens ###
+Tokens are widely used in Hadoop to achieve lightweight authentication 
without compromising on security. The main motivation for using tokens inside 
Ozone is to prevent unauthorized access while keeping the protocol lightweight 
and without sharing secrets over the wire. Ozone utilizes three types of 
token:
+
+#### Delegation token ####
+Once a client establishes its identity via Kerberos, it can request a 
delegation token from OzoneManager. This token can be used by the client to 
prove its identity until the token expires. Like Hadoop delegation tokens, an 
Ozone delegation token has 3 important fields:
+
+Renewer: User responsible for renewing the token.
+Issue date: Time at which the token was issued.
+Max date: Time after which the token can't be renewed.
+
+Token operations like get, renew and cancel can only be performed over a 
Kerberos-authenticated connection. Clients can use a delegation token to 
establish a connection with OzoneManager and perform any file system/object 
store related operation, like listing the objects in a bucket or creating a 
volume.
+
+#### Block Tokens ####
+Block tokens are similar to delegation tokens in the sense that they are 
signed by OzoneManager. Block tokens are created by OM (OzoneManager) when a 
client request involves interaction with DataNodes, such as reading/writing 
Ozone keys. Unlike delegation tokens, there is no client API to request block 
tokens. Instead, they are handed transparently to the client along with 
key/block locations. Block tokens are validated by Datanodes when receiving 
read/write requests from clients. A block token can't be renewed explicitly 
by the client; a client with an expired block token will need to refetch the 
key/block locations to get new block tokens.
+#### S3Token ####
+Like block tokens, S3Tokens are handled transparently for clients. An S3Token 
is signed by the S3 secret created by the client. S3Gateway creates this token 
for every S3 client request. To create an S3Token, the user must have an S3 
secret.
+
+### Certificates ###
+Apart from Kerberos and tokens, Ozone utilizes certificate-based 
authentication for Ozone service components. To enable this, SCM 
(StorageContainerManager) bootstraps itself as a Certificate Authority when 
security is enabled. This allows all daemons inside Ozone to have an 
SCM-signed certificate. Below is a brief description of the steps involved:
+Datanodes and OzoneManagers submit a CSR (certificate signing request) to 
SCM.
+SCM verifies the identity of the DN (Datanode) or OM via Kerberos and 
generates a certificate.
+This certificate is used by OM and DN to prove their identities.
+Datanodes use the OzoneManager certificate to validate block tokens. This is 
possible because both of them (i.e. OzoneManager and Datanodes) trust 
SCM-signed certificates.
+
+## Authorization ##
+Ozone provides a pluggable API to control authorization of all client-related 
operations. The default implementation allows every request; clearly it is 
not meant for production environments. To configure a more fine-grained 
policy, one may configure the Ranger plugin for Ozone. Since it is a 
pluggable module, clients can also implement their own custom authorization 
policy and configure it using [ozone.acl.authorizer.class].
+
+## Audit ##
 
 Review comment:
   cc: @dineshchitlangia  to review the section related to Audit.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this 

[jira] [Work logged] (HDDS-1329) Update documentation for Ozone-0.4.0 release

2019-04-04 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1329?focusedWorklogId=223160=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-223160
 ]

ASF GitHub Bot logged work on HDDS-1329:


Author: ASF GitHub Bot
Created on: 04/Apr/19 17:48
Start Date: 04/Apr/19 17:48
Worklog Time Spent: 10m 
  Work Description: xiaoyuyao commented on pull request #687: HDDS-1329. 
Update documentation for Ozone-0.4.0 release. Contributed By Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/687#discussion_r271926885
 
 

 ##
 File path: hadoop-hdds/docs/content/OzoneSecurityArchitecture.md
 ##
 @@ -0,0 +1,82 @@
+---
+title: "Ozone Security Overview"
+date: "2019-April-03"
+menu:
+   main:
+   parent: Architecture
+weight: 11
+---
+
+
+# Security in Ozone #
+
+Starting with the Badlands release (ozone-0.4.0-alpha), an Ozone cluster can 
be secured against external threats. Specifically, it can be configured for 
the following security features:
+
+1. Authentication
+2. Authorization
+3. Audit
+4. Transparent Data Encryption (TDE)
+
+## Authentication ##
+
+### Kerberos ###
+Similar to Hadoop, Ozone allows Kerberos-based authentication. So one way to 
set up identities for all the daemons and clients is to create Kerberos 
keytabs and configure them like any other service in Hadoop.
+
+### Tokens ###
+Tokens are widely used in Hadoop to achieve lightweight authentication 
without compromising on security. The main motivation for using tokens inside 
Ozone is to prevent unauthorized access while keeping the protocol lightweight 
and without sharing secrets over the wire. Ozone utilizes three types of 
token:
+
+#### Delegation token ####
+Once a client establishes its identity via Kerberos, it can request a 
delegation token from OzoneManager. This token can be used by the client to 
prove its identity until the token expires. Like Hadoop delegation tokens, an 
Ozone delegation token has 3 important fields:
+
+Renewer: User responsible for renewing the token.
+Issue date: Time at which the token was issued.
+Max date: Time after which the token can't be renewed.
+
+Token operations like get, renew and cancel can only be performed over a 
Kerberos-authenticated connection. Clients can use a delegation token to 
establish a connection with OzoneManager and perform any file system/object 
store related operation, like listing the objects in a bucket or creating a 
volume.
+
+#### Block Tokens ####
+Block tokens are similar to delegation tokens in the sense that they are 
signed by OzoneManager. Block tokens are created by OM (OzoneManager) when a 
client request involves interaction with DataNodes, such as reading/writing 
Ozone keys. Unlike delegation tokens, there is no client API to request block 
tokens. Instead, they are handed transparently to the client along with 
key/block locations. Block tokens are validated by Datanodes when receiving 
read/write requests from clients. A block token can't be renewed explicitly 
by the client; a client with an expired block token will need to refetch the 
key/block locations to get new block tokens.
+#### S3Token ####
+Like block tokens, S3Tokens are handled transparently for clients. An S3Token 
is signed by the S3 secret created by the client. S3Gateway creates this token 
for every S3 client request. To create an S3Token, the user must have an S3 
secret.
+
+### Certificates ###
+Apart from Kerberos and tokens, Ozone utilizes certificate-based 
authentication for Ozone service components. To enable this, SCM 
(StorageContainerManager) bootstraps itself as a Certificate Authority when 
security is enabled. This allows all daemons inside Ozone to have an 
SCM-signed certificate. Below is a brief description of the steps involved:
+Datanodes and OzoneManagers submit a CSR (certificate signing request) to 
SCM.
+SCM verifies the identity of the DN (Datanode) or OM via Kerberos and 
generates a certificate.
+This certificate is used by OM and DN to prove their identities.
+Datanodes use the OzoneManager certificate to validate block tokens. This is 
possible because both of them (i.e. OzoneManager and Datanodes) trust 
SCM-signed certificates.
+
+## Authorization ##
+Ozone provides a pluggable API to control authorization of all client-related 
operations. The default implementation allows every request; clearly it is 
not meant for production environments. To configure a more fine-grained 
policy, one may configure the Ranger plugin for Ozone. Since it is a 
pluggable module, clients can also implement their own custom authorization 
policy and configure it using [ozone.acl.authorizer.class].
+
+## Audit ##
+Ozone provides the ability to audit all read & write operations to OM, SCM 
and Datanodes. Ozone audit leverages the Marker feature, which enables the 
user to selectively audit only READ or WRITE operations with a simple config 
change, without restarting the service(s).
+To enable/disable audit of READ operations, set filter.read.onMatch to NEUTRAL 

[jira] [Commented] (HDFS-14416) Fix TestHdfsConfigFields for field dfs.client.failover.resolver.useFQDN

2019-04-04 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-14416?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16810151#comment-16810151
 ] 

Íñigo Goiri commented on HDFS-14416:


I'm not sure why it didn't show in HDFS-14327.
[~fengnanli], do you mind taking a look?

> Fix TestHdfsConfigFields for field dfs.client.failover.resolver.useFQDN
> ---
>
> Key: HDFS-14416
> URL: https://issues.apache.org/jira/browse/HDFS-14416
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.3.0
>Reporter: Íñigo Goiri
>Assignee: Fengnan Li
>Priority: Major
>
> HDFS-14327 added dfs.client.failover.resolver.useFQDN and it's breaking 
> TestHdfsConfigFields.
> An example report is:
> https://builds.apache.org/job/PreCommit-HADOOP-Build/16118/testReport/



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-14416) Fix TestHdfsConfigFields for field dfs.client.failover.resolver.useFQDN

2019-04-04 Thread JIRA
Íñigo Goiri created HDFS-14416:
--

 Summary: Fix TestHdfsConfigFields for field 
dfs.client.failover.resolver.useFQDN
 Key: HDFS-14416
 URL: https://issues.apache.org/jira/browse/HDFS-14416
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.3.0
Reporter: Íñigo Goiri


HDFS-14327 added dfs.client.failover.resolver.useFQDN and it's breaking 
TestHdfsConfigFields.
An example report is:
https://builds.apache.org/job/PreCommit-HADOOP-Build/16118/testReport/
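
The usual remedy for a TestHdfsConfigFields failure like this is to give the 
new key a matching entry in hdfs-default.xml; a hedged sketch follows, where 
the value and description are placeholders rather than the actual patch:

{code:xml}
<property>
  <name>dfs.client.failover.resolver.useFQDN</name>
  <value>true</value>
  <description>Placeholder: whether the failover resolver should return
  FQDNs instead of IP addresses.</description>
</property>
{code}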



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14327) Using FQDN instead of IP to access servers with DNS resolving

2019-04-04 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-14327?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16810148#comment-16810148
 ] 

Íñigo Goiri commented on HDFS-14327:


[~fengnanli], it looks like now we are having errors in the HDFS configs:
https://builds.apache.org/job/PreCommit-HADOOP-Build/16118/testReport/
hdfs-default.xml has 1 properties missing in
  interface org.apache.hadoop.hdfs.client.HdfsClientConfigKeys
  interface org.apache.hadoop.hdfs.client.HdfsClientConfigKeys$Failover
  interface org.apache.hadoop.hdfs.client.HdfsClientConfigKeys$StripedRead
  class org.apache.hadoop.hdfs.DFSConfigKeys
  interface org.apache.hadoop.hdfs.client.HdfsClientConfigKeys$BlockWrite
  interface org.apache.hadoop.hdfs.client.HdfsClientConfigKeys$BlockWrite$ReplaceDatanodeOnFailure
Entries: dfs.client.failover.resolver.useFQDN expected:<0> but was:<1>

I'll open a JIRA to follow up.

> Using FQDN instead of IP to access servers with DNS resolving
> -
>
> Key: HDFS-14327
> URL: https://issues.apache.org/jira/browse/HDFS-14327
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Fengnan Li
>Assignee: Fengnan Li
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14327.001.patch, HDFS-14327.002.patch
>
>
> With [HDFS-14118|https://issues.apache.org/jira/browse/HDFS-14118], clients 
> can get the IP of the servers (NN/Routers) and use the IP addresses to access 
> the machines. This will fail in a secure environment, as Kerberos uses the 
> domain name (FQDN) in the principal, so it won't recognize the IP addresses.
> This task mainly adds a reverse lookup on top of the current flow to get the 
> domain name after the IP is fetched. After that, clients will still use the 
> domain name to access the servers.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-1390) Remove hk2 dependency exclusions from ozone s3gateway module.

2019-04-04 Thread Aravindan Vijayan (JIRA)
Aravindan Vijayan created HDDS-1390:
---

 Summary: Remove hk2 dependency exclusions from ozone s3gateway 
module.
 Key: HDDS-1390
 URL: https://issues.apache.org/jira/browse/HDDS-1390
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Aravindan Vijayan
Assignee: Aravindan Vijayan
 Fix For: 0.5.0






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-1389) Fix testSCMChillModeRestrictedOp

2019-04-04 Thread Bharat Viswanadham (JIRA)
Bharat Viswanadham created HDDS-1389:


 Summary: Fix testSCMChillModeRestrictedOp
 Key: HDDS-1389
 URL: https://issues.apache.org/jira/browse/HDDS-1389
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Bharat Viswanadham
Assignee: Nanda kumar


This test failure is caused by HDDS-1285, as we use the same ChillModeHandler 
thread for the waitTime sleep; because of that, the test fails.

 

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-1388) Add a shell script to run MiniOzoneChaosCluster using mvn exec

2019-04-04 Thread Mukul Kumar Singh (JIRA)
Mukul Kumar Singh created HDDS-1388:
---

 Summary: Add a shell script to run MiniOzoneChaosCluster using mvn 
exec
 Key: HDDS-1388
 URL: https://issues.apache.org/jira/browse/HDDS-1388
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: test
Affects Versions: 0.5.0
Reporter: Mukul Kumar Singh
Assignee: Mukul Kumar Singh


This JIRA adds a shell script to run MiniOzoneChaosCluster.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-1387) ConcurrentModificationException in TestMiniChaosOzoneCluster

2019-04-04 Thread Nanda kumar (JIRA)
Nanda kumar created HDDS-1387:
-

 Summary: ConcurrentModificationException in 
TestMiniChaosOzoneCluster
 Key: HDDS-1387
 URL: https://issues.apache.org/jira/browse/HDDS-1387
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: test
Reporter: Nanda kumar
Assignee: Nanda kumar


TestMiniChaosOzoneCluster is failing with the below exception
{noformat}
[ERROR] org.apache.hadoop.ozone.TestMiniChaosOzoneCluster  Time elapsed: 
265.679 s  <<< ERROR!
java.util.ConcurrentModificationException
at java.util.ArrayList$Itr.checkForComodification(ArrayList.java:909)
at java.util.ArrayList$Itr.next(ArrayList.java:859)
at 
org.apache.hadoop.ozone.MiniOzoneClusterImpl.stop(MiniOzoneClusterImpl.java:350)
at 
org.apache.hadoop.ozone.MiniOzoneClusterImpl.shutdown(MiniOzoneClusterImpl.java:325)
at 
org.apache.hadoop.ozone.MiniOzoneChaosCluster.shutdown(MiniOzoneChaosCluster.java:130)
at 
org.apache.hadoop.ozone.TestMiniChaosOzoneCluster.shutdown(TestMiniChaosOzoneCluster.java:92)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33)
at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
at 
org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
at 
org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
at 
org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
at 
org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
{noformat}
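
The trace points at iterating a live list while another thread mutates it. A 
hedged sketch of the common remedy, with illustrative names rather than the 
actual MiniOzoneClusterImpl fields:

{code:java}
import java.util.ArrayList;
import java.util.List;

interface Stoppable {
  void stop();
}

class ClusterStopper {
  private final List<Stoppable> datanodes = new ArrayList<>();

  void shutdown() {
    // Iterate over a snapshot so a concurrent add/remove on the live
    // list cannot invalidate this iterator (the CME seen above).
    for (Stoppable dn : new ArrayList<>(datanodes)) {
      dn.stop();
    }
  }
}
{code}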



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13853) RBF: RouterAdmin update cmd is overwriting the entry not updating the existing

2019-04-04 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-13853?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16810138#comment-16810138
 ] 

Íñigo Goiri commented on HDFS-13853:


[^HDFS-13853-HDFS-13891-09.patch] LGTM, let's see what Yetus comes with.

> RBF: RouterAdmin update cmd is overwriting the entry not updating the existing
> --
>
> Key: HDFS-13853
> URL: https://issues.apache.org/jira/browse/HDFS-13853
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Dibyendu Karmakar
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-13853-HDFS-13891-01.patch, 
> HDFS-13853-HDFS-13891-02.patch, HDFS-13853-HDFS-13891-03.patch, 
> HDFS-13853-HDFS-13891-04.patch, HDFS-13853-HDFS-13891-05.patch, 
> HDFS-13853-HDFS-13891-06.patch, HDFS-13853-HDFS-13891-07.patch, 
> HDFS-13853-HDFS-13891-08.patch, HDFS-13853-HDFS-13891-09.patch
>
>
> {code:java}
> // Create a new entry
> Map<String, String> destMap = new LinkedHashMap<>();
> for (String ns : nss) {
>   destMap.put(ns, dest);
> }
> MountTable newEntry = MountTable.newInstance(mount, destMap);
> {code}
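
A hedged sketch of the "update" semantics the title asks for: start from the 
existing entry and overlay only the requested destinations, instead of 
building a fresh entry that drops the other attributes. It extends the 
fragment above; {{getMountEntry}} and the copied attributes are assumptions 
for illustration, not the committed patch:

{code:java}
// Look up the existing entry first (hypothetical helper).
MountTable existing = getMountEntry(mount);

// Overlay only the destinations the user asked to change.
Map<String, String> destMap = new LinkedHashMap<>();
for (String ns : nss) {
  destMap.put(ns, dest);
}
MountTable updated = MountTable.newInstance(mount, destMap);

// Carry over attributes that were not part of the update request.
updated.setOwnerName(existing.getOwnerName());
updated.setGroupName(existing.getGroupName());
updated.setMode(existing.getMode());
{code}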



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1372) getContainerWithPipeline for a standalone pipeline fails with ConcurrentModificationException

2019-04-04 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1372?focusedWorklogId=223139=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-223139
 ]

ASF GitHub Bot logged work on HDDS-1372:


Author: ASF GitHub Bot
Created on: 04/Apr/19 17:29
Start Date: 04/Apr/19 17:29
Worklog Time Spent: 10m 
  Work Description: nandakumar131 commented on pull request #682: 
HDDS-1372. getContainerWithPipeline for a standalone pipeline fails with 
ConcurrentModificationException.
URL: https://github.com/apache/hadoop/pull/682#discussion_r272286786
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/container/TestSCMContainerManager.java
 ##
 @@ -194,6 +197,49 @@ public void testGetContainerWithPipeline() throws 
Exception {
 Assert.assertTrue(replicaNodes.contains(dn1));
   }
 
+  @Test
+  public void testGetContainerReplicaWithParallelUpdate() throws Exception {
+testGetContainerWithPipeline();
+final Optional<ContainerID> id = containerManager.getContainerIDs()
+.stream().findFirst();
+Assert.assertTrue(id.isPresent());
+final ContainerID cId = id.get();
+final Optional<ContainerReplica> replica = containerManager
+.getContainerReplicas(cId).stream().findFirst();
+Assert.assertTrue(replica.isPresent());
+final ContainerReplica cReplica = replica.get();
+final AtomicBoolean runUpdaterThread =
+new AtomicBoolean(true);
+
+Thread updaterThread = new Thread(() -> {
+  while (runUpdaterThread.get()) {
+try {
+  containerManager.removeContainerReplica(cId, cReplica);
+  containerManager.updateContainerReplica(cId, cReplica);
+} catch (ContainerNotFoundException e) {
+  e.printStackTrace();
+} catch (ContainerReplicaNotFoundException e) {
+  Assert.fail("Missing Container Replica for " + id);
+}
+  }
+});
+
+updaterThread.setDaemon(true);
+updaterThread.start();
+
+IntStream.range(0, 10).forEach(i -> {
 
 Review comment:
1000 will be very high; let's make it 100. I was able to reproduce the issue 
every time with just 10 iterations.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 223139)
Time Spent: 1h 10m  (was: 1h)

> getContainerWithPipeline for a standalone pipeline fails with 
> ConcurrentModificationException
> -
>
> Key: HDDS-1372
> URL: https://issues.apache.org/jira/browse/HDDS-1372
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Affects Versions: 0.5.0
>Reporter: Mukul Kumar Singh
>Assignee: Nanda kumar
>Priority: Major
>  Labels: MiniOzoneChaosCluster, pull-request-available
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> The exception is hit while fetching a pipeline during read.
> {code}
> 2019-04-03 00:52:50,125 WARN  ipc.Server (Server.java:logException(2724)) - 
> IPC Server handler 16 on 59758, call Call#2270 Retry#0 
> org.apache.hadoop.hdds.scm.protocol.StorageContainerLocationProtocol.getC
> ontainerWithPipeline from 192.168.0.108:60011
> java.util.ConcurrentModificationException
> at 
> java.util.HashMap$KeySpliterator.forEachRemaining(HashMap.java:1558)
> at 
> java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481)
> at 
> java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471)
> at 
> java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:708)
> at 
> java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
> at 
> java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:499)
> at 
> org.apache.hadoop.hdds.scm.server.SCMClientProtocolServer.getContainerWithPipeline(SCMClientProtocolServer.java:252)
> at 
> org.apache.hadoop.ozone.protocolPB.StorageContainerLocationProtocolServerSideTranslatorPB.getContainerWithPipeline(StorageContainerLocationProtocolServerSideTranslatorPB.java:144)
> at 
> org.apache.hadoop.hdds.protocol.proto.StorageContainerLocationProtocolProtos$StorageContainerLocationProtocolService$2.callBlockingMethod(StorageContainerLocationProtocolProtos.java:16390)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1025)
> at 

[jira] [Work logged] (HDDS-1372) getContainerWithPipeline for a standalone pipeline fails with ConcurrentModificationException

2019-04-04 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1372?focusedWorklogId=223115=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-223115
 ]

ASF GitHub Bot logged work on HDDS-1372:


Author: ASF GitHub Bot
Created on: 04/Apr/19 17:10
Start Date: 04/Apr/19 17:10
Worklog Time Spent: 10m 
  Work Description: mukul1987 commented on pull request #682: HDDS-1372. 
getContainerWithPipeline for a standalone pipeline fails with 
ConcurrentModificationException.
URL: https://github.com/apache/hadoop/pull/682#discussion_r272279486
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/container/TestSCMContainerManager.java
 ##
 @@ -194,6 +197,49 @@ public void testGetContainerWithPipeline() throws 
Exception {
 Assert.assertTrue(replicaNodes.contains(dn1));
   }
 
+  @Test
+  public void testGetContainerReplicaWithParallelUpdate() throws Exception {
+testGetContainerWithPipeline();
+final Optional<ContainerID> id = containerManager.getContainerIDs()
+.stream().findFirst();
+Assert.assertTrue(id.isPresent());
+final ContainerID cId = id.get();
+final Optional<ContainerReplica> replica = containerManager
+.getContainerReplicas(cId).stream().findFirst();
+Assert.assertTrue(replica.isPresent());
+final ContainerReplica cReplica = replica.get();
+final AtomicBoolean runUpdaterThread =
+new AtomicBoolean(true);
+
+Thread updaterThread = new Thread(() -> {
+  while (runUpdaterThread.get()) {
+try {
+  containerManager.removeContainerReplica(cId, cReplica);
+  containerManager.updateContainerReplica(cId, cReplica);
+} catch (ContainerNotFoundException e) {
+  e.printStackTrace();
+} catch (ContainerReplicaNotFoundException e) {
+  Assert.fail("Missing Container Replica for " + id);
+}
+  }
+});
+
+updaterThread.setDaemon(true);
+updaterThread.start();
+
+IntStream.range(0, 10).forEach(i -> {
 
 Review comment:
Let's have a few more iterations to make sure we have a higher probability of 
hitting this race. How about a value of 1000?
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 223115)
Time Spent: 50m  (was: 40m)

> getContainerWithPipeline for a standalone pipeline fails with 
> ConcurrentModificationException
> -
>
> Key: HDDS-1372
> URL: https://issues.apache.org/jira/browse/HDDS-1372
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Affects Versions: 0.5.0
>Reporter: Mukul Kumar Singh
>Assignee: Nanda kumar
>Priority: Major
>  Labels: MiniOzoneChaosCluster, pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> The exception is hit while fetching a pipeline during read.
> {code}
> 2019-04-03 00:52:50,125 WARN  ipc.Server (Server.java:logException(2724)) - 
> IPC Server handler 16 on 59758, call Call#2270 Retry#0 
> org.apache.hadoop.hdds.scm.protocol.StorageContainerLocationProtocol.getContainerWithPipeline from 192.168.0.108:60011
> java.util.ConcurrentModificationException
> at 
> java.util.HashMap$KeySpliterator.forEachRemaining(HashMap.java:1558)
> at 
> java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481)
> at 
> java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471)
> at 
> java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:708)
> at 
> java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
> at 
> java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:499)
> at 
> org.apache.hadoop.hdds.scm.server.SCMClientProtocolServer.getContainerWithPipeline(SCMClientProtocolServer.java:252)
> at 
> org.apache.hadoop.ozone.protocolPB.StorageContainerLocationProtocolServerSideTranslatorPB.getContainerWithPipeline(StorageContainerLocationProtocolServerSideTranslatorPB.java:144)
> at 
> org.apache.hadoop.hdds.protocol.proto.StorageContainerLocationProtocolProtos$StorageContainerLocationProtocolService$2.callBlockingMethod(StorageContainerLocationProtocolProtos.java:16390)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1025)
> at 
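
For context, the failure mode in the trace above is easy to reproduce in isolation. A minimal, self-contained sketch, assuming (as the trace suggests) a hash-based collection being streamed without synchronization; the usual fixes are to snapshot the collection before streaming or to guard it with a lock:

{code}
import java.util.HashSet;
import java.util.Set;
import java.util.stream.Collectors;

public class CmeRepro {
  public static void main(String[] args) {
    final Set<Long> ids = new HashSet<>();
    for (long i = 0; i < 1024; i++) {
      ids.add(i);
    }

    // Mutator: remove and re-add entries, standing in for the replica
    // updates that race with getContainerWithPipeline in the report above.
    Thread mutator = new Thread(() -> {
      for (long i = 0; i < 1_000_000; i++) {
        ids.remove(i % 1024);
        ids.add(i % 1024);
      }
    });
    mutator.setDaemon(true);
    mutator.start();

    // Reader: streaming a HashSet while another thread mutates it throws
    // ConcurrentModificationException, matching the stack trace above.
    for (int i = 0; i < 10_000; i++) {
      try {
        ids.stream().collect(Collectors.toList());
      } catch (java.util.ConcurrentModificationException e) {
        System.out.println("Reproduced: " + e);
        return;
      }
    }
    System.out.println("Race not hit this run; try again.");
  }
}
{code}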

[jira] [Work logged] (HDDS-1372) getContainerWithPipeline for a standalone pipeline fails with ConcurrentModificationException

2019-04-04 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1372?focusedWorklogId=223116&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-223116
 ]

ASF GitHub Bot logged work on HDDS-1372:


Author: ASF GitHub Bot
Created on: 04/Apr/19 17:10
Start Date: 04/Apr/19 17:10
Worklog Time Spent: 10m 
  Work Description: mukul1987 commented on pull request #682: HDDS-1372. 
getContainerWithPipeline for a standalone pipeline fails with 
ConcurrentModificationException.
URL: https://github.com/apache/hadoop/pull/682#discussion_r272278477
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/container/TestSCMContainerManager.java
 ##
 @@ -194,6 +197,49 @@ public void testGetContainerWithPipeline() throws 
Exception {
 Assert.assertTrue(replicaNodes.contains(dn1));
   }
 
+  @Test
+  public void testGetContainerReplicaWithParallelUpdate() throws Exception {
+testGetContainerWithPipeline();
+final Optional<ContainerID> id = containerManager.getContainerIDs()
+.stream().findFirst();
+Assert.assertTrue(id.isPresent());
+final ContainerID cId = id.get();
+final Optional<ContainerReplica> replica = containerManager
+.getContainerReplicas(cId).stream().findFirst();
+Assert.assertTrue(replica.isPresent());
+final ContainerReplica cReplica = replica.get();
+final AtomicBoolean runUpdaterThread =
+new AtomicBoolean(true);
+
+Thread updaterThread = new Thread(() -> {
+  while (runUpdaterThread.get()) {
+try {
+  containerManager.removeContainerReplica(cId, cReplica);
+  containerManager.updateContainerReplica(cId, cReplica);
+} catch (ContainerNotFoundException e) {
+  e.printStackTrace();
 
 Review comment:
   This should be converted to an Assert.fail
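
Concretely, the requested change applied to the catch blocks in the diff above would look something like this (the failure messages are illustrative):

{code}
try {
  containerManager.removeContainerReplica(cId, cReplica);
  containerManager.updateContainerReplica(cId, cReplica);
} catch (ContainerNotFoundException e) {
  // Fail the test loudly instead of swallowing the error.
  Assert.fail("Container " + cId + " not found: " + e.getMessage());
} catch (ContainerReplicaNotFoundException e) {
  Assert.fail("Missing Container Replica for " + cId);
}
{code}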
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 223116)
Time Spent: 1h  (was: 50m)

> getContainerWithPipeline for a standalone pipeline fails with 
> ConcurrentModificationException
> -
>
> Key: HDDS-1372
> URL: https://issues.apache.org/jira/browse/HDDS-1372
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Affects Versions: 0.5.0
>Reporter: Mukul Kumar Singh
>Assignee: Nanda kumar
>Priority: Major
>  Labels: MiniOzoneChaosCluster, pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> The exception is hit while fetching a pipeline during read.
> {code}
> 2019-04-03 00:52:50,125 WARN  ipc.Server (Server.java:logException(2724)) - 
> IPC Server handler 16 on 59758, call Call#2270 Retry#0 
> org.apache.hadoop.hdds.scm.protocol.StorageContainerLocationProtocol.getContainerWithPipeline from 192.168.0.108:60011
> java.util.ConcurrentModificationException
> at 
> java.util.HashMap$KeySpliterator.forEachRemaining(HashMap.java:1558)
> at 
> java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481)
> at 
> java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471)
> at 
> java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:708)
> at 
> java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
> at 
> java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:499)
> at 
> org.apache.hadoop.hdds.scm.server.SCMClientProtocolServer.getContainerWithPipeline(SCMClientProtocolServer.java:252)
> at 
> org.apache.hadoop.ozone.protocolPB.StorageContainerLocationProtocolServerSideTranslatorPB.getContainerWithPipeline(StorageContainerLocationProtocolServerSideTranslatorPB.java:144)
> at 
> org.apache.hadoop.hdds.protocol.proto.StorageContainerLocationProtocolProtos$StorageContainerLocationProtocolService$2.callBlockingMethod(StorageContainerLocationProtocolProtos.java:16390)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1025)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:876)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:822)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
> at 

[jira] [Work logged] (HDDS-1372) getContainerWithPipeline for a standalone pipeline fails with ConcurrentModificationException

2019-04-04 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1372?focusedWorklogId=223111&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-223111
 ]

ASF GitHub Bot logged work on HDDS-1372:


Author: ASF GitHub Bot
Created on: 04/Apr/19 17:07
Start Date: 04/Apr/19 17:07
Worklog Time Spent: 10m 
  Work Description: mukul1987 commented on pull request #682: HDDS-1372. 
getContainerWithPipeline for a standalone pipeline fails with 
ConcurrentModificationException.
URL: https://github.com/apache/hadoop/pull/682#discussion_r272278218
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/container/TestSCMContainerManager.java
 ##
 @@ -194,6 +197,49 @@ public void testGetContainerWithPipeline() throws 
Exception {
 Assert.assertTrue(replicaNodes.contains(dn1));
   }
 
+  @Test
+  public void testGetContainerReplicaWithParallelUpdate() throws Exception {
+testGetContainerWithPipeline();
+final Optional<ContainerID> id = containerManager.getContainerIDs()
+.stream().findFirst();
+Assert.assertTrue(id.isPresent());
+final ContainerID cId = id.get();
+final Optional<ContainerReplica> replica = containerManager
+.getContainerReplicas(cId).stream().findFirst();
+Assert.assertTrue(replica.isPresent());
+final ContainerReplica cReplica = replica.get();
+final AtomicBoolean runUpdaterThread =
+new AtomicBoolean(true);
+
+Thread updaterThread = new Thread(() -> {
+  while (runUpdaterThread.get()) {
+try {
+  containerManager.removeContainerReplica(cId, cReplica);
+  containerManager.updateContainerReplica(cId, cReplica);
+} catch (ContainerNotFoundException e) {
+  e.printStackTrace();
 
 Review comment:
   This should be changed to an Assert
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 223111)
Time Spent: 0.5h  (was: 20m)

> getContainerWithPipeline for a standalone pipeline fails with 
> ConcurrentModificationException
> -
>
> Key: HDDS-1372
> URL: https://issues.apache.org/jira/browse/HDDS-1372
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Affects Versions: 0.5.0
>Reporter: Mukul Kumar Singh
>Assignee: Nanda kumar
>Priority: Major
>  Labels: MiniOzoneChaosCluster, pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> The exception is hit while fetching a pipeline during read.
> {code}
> 2019-04-03 00:52:50,125 WARN  ipc.Server (Server.java:logException(2724)) - 
> IPC Server handler 16 on 59758, call Call#2270 Retry#0 
> org.apache.hadoop.hdds.scm.protocol.StorageContainerLocationProtocol.getContainerWithPipeline from 192.168.0.108:60011
> java.util.ConcurrentModificationException
> at 
> java.util.HashMap$KeySpliterator.forEachRemaining(HashMap.java:1558)
> at 
> java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481)
> at 
> java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471)
> at 
> java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:708)
> at 
> java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
> at 
> java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:499)
> at 
> org.apache.hadoop.hdds.scm.server.SCMClientProtocolServer.getContainerWithPipeline(SCMClientProtocolServer.java:252)
> at 
> org.apache.hadoop.ozone.protocolPB.StorageContainerLocationProtocolServerSideTranslatorPB.getContainerWithPipeline(StorageContainerLocationProtocolServerSideTranslatorPB.java:144)
> at 
> org.apache.hadoop.hdds.protocol.proto.StorageContainerLocationProtocolProtos$StorageContainerLocationProtocolService$2.callBlockingMethod(StorageContainerLocationProtocolProtos.java:16390)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1025)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:876)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:822)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
> at 

[jira] [Work logged] (HDDS-1372) getContainerWithPipeline for a standalone pipeline fails with ConcurrentModificationException

2019-04-04 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1372?focusedWorklogId=223112&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-223112
 ]

ASF GitHub Bot logged work on HDDS-1372:


Author: ASF GitHub Bot
Created on: 04/Apr/19 17:07
Start Date: 04/Apr/19 17:07
Worklog Time Spent: 10m 
  Work Description: mukul1987 commented on pull request #682: HDDS-1372. 
getContainerWithPipeline for a standalone pipeline fails with 
ConcurrentModificationException.
URL: https://github.com/apache/hadoop/pull/682#discussion_r272278218
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/container/TestSCMContainerManager.java
 ##
 @@ -194,6 +197,49 @@ public void testGetContainerWithPipeline() throws 
Exception {
 Assert.assertTrue(replicaNodes.contains(dn1));
   }
 
+  @Test
+  public void testGetContainerReplicaWithParallelUpdate() throws Exception {
+testGetContainerWithPipeline();
+final Optional<ContainerID> id = containerManager.getContainerIDs()
+.stream().findFirst();
+Assert.assertTrue(id.isPresent());
+final ContainerID cId = id.get();
+final Optional<ContainerReplica> replica = containerManager
+.getContainerReplicas(cId).stream().findFirst();
+Assert.assertTrue(replica.isPresent());
+final ContainerReplica cReplica = replica.get();
+final AtomicBoolean runUpdaterThread =
+new AtomicBoolean(true);
+
+Thread updaterThread = new Thread(() -> {
+  while (runUpdaterThread.get()) {
+try {
+  containerManager.removeContainerReplica(cId, cReplica);
+  containerManager.updateContainerReplica(cId, cReplica);
+} catch (ContainerNotFoundException e) {
+  e.printStackTrace();
 
 Review comment:
   This should be changed to an Assert
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 223112)
Time Spent: 40m  (was: 0.5h)

> getContainerWithPipeline for a standalone pipeline fails with 
> ConcurrentModificationException
> -
>
> Key: HDDS-1372
> URL: https://issues.apache.org/jira/browse/HDDS-1372
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Affects Versions: 0.5.0
>Reporter: Mukul Kumar Singh
>Assignee: Nanda kumar
>Priority: Major
>  Labels: MiniOzoneChaosCluster, pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> The exception is hit while fetching a pipeline during read.
> {code}
> 2019-04-03 00:52:50,125 WARN  ipc.Server (Server.java:logException(2724)) - 
> IPC Server handler 16 on 59758, call Call#2270 Retry#0 
> org.apache.hadoop.hdds.scm.protocol.StorageContainerLocationProtocol.getContainerWithPipeline from 192.168.0.108:60011
> java.util.ConcurrentModificationException
> at 
> java.util.HashMap$KeySpliterator.forEachRemaining(HashMap.java:1558)
> at 
> java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481)
> at 
> java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471)
> at 
> java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:708)
> at 
> java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
> at 
> java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:499)
> at 
> org.apache.hadoop.hdds.scm.server.SCMClientProtocolServer.getContainerWithPipeline(SCMClientProtocolServer.java:252)
> at 
> org.apache.hadoop.ozone.protocolPB.StorageContainerLocationProtocolServerSideTranslatorPB.getContainerWithPipeline(StorageContainerLocationProtocolServerSideTranslatorPB.java:144)
> at 
> org.apache.hadoop.hdds.protocol.proto.StorageContainerLocationProtocolProtos$StorageContainerLocationProtocolService$2.callBlockingMethod(StorageContainerLocationProtocolProtos.java:16390)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1025)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:876)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:822)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
> at 

[jira] [Work logged] (HDDS-1333) OzoneFileSystem can't work with spark/hadoop2.7 because incompatible security classes

2019-04-04 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1333?focusedWorklogId=223106&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-223106
 ]

ASF GitHub Bot logged work on HDDS-1333:


Author: ASF GitHub Bot
Created on: 04/Apr/19 17:00
Start Date: 04/Apr/19 17:00
Worklog Time Spent: 10m 
  Work Description: xiaoyuyao commented on pull request #653: HDDS-1333. 
OzoneFileSystem can't work with spark/hadoop2.7 because incompatible security 
classes
URL: https://github.com/apache/hadoop/pull/653#discussion_r272275936
 
 

 ##
 File path: hadoop-ozone/dist/src/main/smoketest/env-compose.robot
 ##
 @@ -13,4 +13,20 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 
-org.apache.hadoop.fs.ozone.OzoneFileSystem
 
 Review comment:
   Makes sense. +1 for this patch. 
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 223106)
Time Spent: 6h 40m  (was: 6.5h)

> OzoneFileSystem can't work with spark/hadoop2.7 because incompatible security 
> classes
> -
>
> Key: HDDS-1333
> URL: https://issues.apache.org/jira/browse/HDDS-1333
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 6h 40m
>  Remaining Estimate: 0h
>
> The current ozonefs compatibility layer is broken by: HDDS-1299.
> The spark jobs (including hadoop 2.7) can't be executed any more:
> {code}
> 2019-03-25 09:50:08 INFO  StateStoreCoordinatorRef:54 - Registered 
> StateStoreCoordinator endpoint
> Exception in thread "main" java.lang.NoClassDefFoundError: 
> org/apache/hadoop/crypto/key/KeyProviderTokenIssuer
> at java.lang.ClassLoader.defineClass1(Native Method)
> at java.lang.ClassLoader.defineClass(ClassLoader.java:763)
> at 
> java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
> at java.net.URLClassLoader.defineClass(URLClassLoader.java:468)
> at java.net.URLClassLoader.access$100(URLClassLoader.java:74)
> at java.net.URLClassLoader$1.run(URLClassLoader.java:369)
> at java.net.URLClassLoader$1.run(URLClassLoader.java:363)
> at java.security.AccessController.doPrivileged(Native Method)
> at java.net.URLClassLoader.findClass(URLClassLoader.java:362)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
> at java.lang.Class.forName0(Native Method)
> at java.lang.Class.forName(Class.java:348)
> at 
> org.apache.hadoop.conf.Configuration.getClassByNameOrNull(Configuration.java:2134)
> at 
> org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:2099)
> at 
> org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2193)
> at 
> org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2654)
> at 
> org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2667)
> at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:94)
> at 
> org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2703)
> at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2685)
> at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:373)
> at org.apache.hadoop.fs.Path.getFileSystem(Path.java:295)
> at 
> org.apache.spark.sql.execution.streaming.FileStreamSink$.hasMetadata(FileStreamSink.scala:45)
> at 
> org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:332)
> at 
> org.apache.spark.sql.DataFrameReader.loadV1Source(DataFrameReader.scala:223)
> at 
> org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:211)
> at 
> org.apache.spark.sql.DataFrameReader.text(DataFrameReader.scala:715)
> at 
> org.apache.spark.sql.DataFrameReader.textFile(DataFrameReader.scala:757)
> at 
> org.apache.spark.sql.DataFrameReader.textFile(DataFrameReader.scala:724)
> at org.apache.spark.examples.JavaWordCount.main(JavaWordCount.java:45)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> 
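
For context, the trace shows a Hadoop 3.x-only class being linked on a hadoop-2.7 classpath. A hypothetical probe (the class and method names below are invented for illustration and are not the patch's approach) can detect the missing class without triggering the NoClassDefFoundError:

{code}
/** Hypothetical helper; probes for the Hadoop 3.x security class by name. */
public final class OzoneFsCompat {
  private OzoneFsCompat() {
  }

  public static boolean hasKeyProviderTokenIssuer(ClassLoader cl) {
    try {
      // A reflective lookup avoids linking against the class at compile
      // time, so the check itself runs fine on a hadoop-2.7 classpath.
      Class.forName("org.apache.hadoop.crypto.key.KeyProviderTokenIssuer",
          false, cl);
      return true;
    } catch (ClassNotFoundException e) {
      return false; // hadoop-2.x era classpath
    }
  }
}
{code}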

[jira] [Updated] (HDFS-14415) Backport HDFS-13799 to branch-2

2019-04-04 Thread Chao Sun (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14415?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Sun updated HDFS-14415:

Attachment: HDFS-14415-branch-2.000.patch

> Backport HDFS-13799 to branch-2
> ---
>
> Key: HDFS-14415
> URL: https://issues.apache.org/jira/browse/HDFS-14415
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Chao Sun
>Assignee: Chao Sun
>Priority: Trivial
> Attachments: HDFS-14415-branch-2.000.patch
>
>
> As the multi-SBN feature is already backported to branch-2, this is a follow-up 
> to backport HDFS-13799.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14415) Backport HDFS-13799 to branch-2

2019-04-04 Thread Chao Sun (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14415?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Sun updated HDFS-14415:

Status: Patch Available  (was: Open)

> Backport HDFS-13799 to branch-2
> ---
>
> Key: HDFS-14415
> URL: https://issues.apache.org/jira/browse/HDFS-14415
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Chao Sun
>Assignee: Chao Sun
>Priority: Trivial
> Attachments: HDFS-14415-branch-2.000.patch
>
>
> As the multi-SBN feature is already backported to branch-2, this is a follow-up 
> to backport HDFS-13799.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-14415) Backport HDFS-13799 to branch-2

2019-04-04 Thread Chao Sun (JIRA)
Chao Sun created HDFS-14415:
---

 Summary: Backport HDFS-13799 to branch-2
 Key: HDFS-14415
 URL: https://issues.apache.org/jira/browse/HDFS-14415
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Chao Sun
Assignee: Chao Sun


As the multi-SBN feature is already backported to branch-2, this is a follow-up to 
backport HDFS-13799.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org


