[jira] [Commented] (HDFS-13398) Hdfs recursive listing operation is very slow

2018-05-11 Thread Mukul Kumar Singh (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13398?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16472922#comment-16472922
 ] 

Mukul Kumar Singh commented on HDFS-13398:
--

Hi [~ajaysachdev], I cannot see a patch uploaded to the jira. Can you please 
upload the patch to the jira? :)


> Hdfs recursive listing operation is very slow
> -
>
> Key: HDFS-13398
> URL: https://issues.apache.org/jira/browse/HDFS-13398
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 2.7.1
> Environment: HCFS file system where HDP 2.6.1 is connected to ECS 
> (Object Store).
>Reporter: Ajay Sachdev
>Assignee: Ajay Sachdev
>Priority: Major
> Fix For: 2.7.1
>
> Attachments: HDFS-13398.001.patch, parallelfsPatch
>
>
> The hdfs dfs -ls -R command is sequential in nature and is very slow for an 
> HCFS system. We have seen around 6 mins for a 40K directory/file structure.
> The proposal is to use a multithreading approach to speed up the recursive 
> list, du and count operations.
> We have tried a ForkJoinPool implementation to improve the performance of the 
> recursive listing operation.
> [https://github.com/jasoncwik/hadoop-release/tree/parallel-fs-cli]
> commit id : 
> 82387c8cd76c2e2761bd7f651122f83d45ae8876
> Another implementation uses the Java Executor Service to run the listing 
> operation in multiple threads in parallel. This has significantly reduced the 
> time from 6 mins to 40 secs.
>  
>  
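A minimal sketch of the ForkJoinPool approach described in the jira (class and 
method names here are illustrative, not taken from the linked branch):

{code:java}
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

/** Lists a tree in parallel: each directory fans out as a new subtask. */
class ParallelListTask extends RecursiveTask<List<FileStatus>> {
  private final FileSystem fs;
  private final Path dir;

  ParallelListTask(FileSystem fs, Path dir) {
    this.fs = fs;
    this.dir = dir;
  }

  @Override
  protected List<FileStatus> compute() {
    List<FileStatus> results = new ArrayList<>();
    List<ParallelListTask> subTasks = new ArrayList<>();
    try {
      for (FileStatus st : fs.listStatus(dir)) {
        results.add(st);
        if (st.isDirectory()) {
          ParallelListTask task = new ParallelListTask(fs, st.getPath());
          task.fork();              // descend into the subtree on another thread
          subTasks.add(task);
        }
      }
    } catch (IOException e) {
      throw new RuntimeException("listStatus failed for " + dir, e);
    }
    for (ParallelListTask task : subTasks) {
      results.addAll(task.join()); // collect the subtree listings
    }
    return results;
  }
}

// Usage: new ForkJoinPool(16).invoke(new ParallelListTask(fs, new Path("/")));
{code}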






[jira] [Updated] (HDDS-50) EventQueue: Add a priority based execution model for events in eventqueue.

2018-05-11 Thread Mukul Kumar Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-50?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mukul Kumar Singh updated HDDS-50:
--
Attachment: HDDS-50.001.patch

> EventQueue: Add a priority based execution model for events in eventqueue.
> --
>
> Key: HDDS-50
> URL: https://issues.apache.org/jira/browse/HDDS-50
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Affects Versions: 0.2.1
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-50.001.patch
>
>
> Currently all the events in SCM are executed with the same priority. This 
> jira will add a priority-based execution model where the "niceness" value of 
> an event will determine the priority of the execution of the event.
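
One way such a model could look, as a minimal sketch on top of the JDK's 
{{PriorityBlockingQueue}} (the names and the "lower niceness runs first" 
convention are assumptions, not the attached patch):

{code:java}
import java.util.Comparator;
import java.util.concurrent.PriorityBlockingQueue;

/** Event wrapper carrying a "niceness" value; lower niceness runs first. */
class PrioritizedEvent {
  final int niceness;
  final Runnable handler;

  PrioritizedEvent(int niceness, Runnable handler) {
    this.niceness = niceness;
    this.handler = handler;
  }
}

/** Single-consumer queue that always executes the lowest-niceness event. */
class PriorityEventQueue {
  private final PriorityBlockingQueue<PrioritizedEvent> queue =
      new PriorityBlockingQueue<>(64,
          Comparator.comparingInt((PrioritizedEvent e) -> e.niceness));

  void fireEvent(int niceness, Runnable handler) {
    queue.put(new PrioritizedEvent(niceness, handler));
  }

  void processLoop() throws InterruptedException {
    while (!Thread.currentThread().isInterrupted()) {
      queue.take().handler.run();  // blocks until an event is available
    }
  }
}
{code}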






[jira] [Updated] (HDDS-50) EventQueue: Add a priority based execution model for events in eventqueue.

2018-05-11 Thread Mukul Kumar Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-50?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mukul Kumar Singh updated HDDS-50:
--
Status: Patch Available  (was: Open)

> EventQueue: Add a priority based execution model for events in eventqueue.
> --
>
> Key: HDDS-50
> URL: https://issues.apache.org/jira/browse/HDDS-50
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Affects Versions: 0.2.1
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-50.001.patch
>
>
> Currently all the events in SCM are executed with the same priority. This 
> jira will add a priority-based execution model where the "niceness" value of 
> an event will determine the priority of the execution of the event.






[jira] [Commented] (HDDS-25) Simple async event processing for SCM

2018-05-11 Thread Mukul Kumar Singh (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-25?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16472916#comment-16472916
 ] 

Mukul Kumar Singh commented on HDDS-25:
---

[~anu] [~xyao], I will take care of the review comments in HDDS-50.

> Simple async event processing for SCM
> -
>
> Key: HDDS-25
> URL: https://issues.apache.org/jira/browse/HDDS-25
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Fix For: 0.2.1, Acadia
>
> Attachments: HDDS-25.001.patch, HDDS-25.003.patch, HDDS-25.004.patch, 
> HDDS-25.005.patch
>
>
> For implementing all the SCM status changes we need simple async event 
> processing.
> Our use case is very similar to an actor-based system: we would like to 
> communicate with fully async events/messages, process the different events on 
> different threads, and so on.
> But a full actor framework (such as Akka) would be overkill for this use 
> case. We don't need distributed actor systems, actor hierarchies or complex 
> resiliency.
> As a first approach we can use a very simple system where a common EventQueue 
> entry point routes events to the async event handlers.
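
A toy sketch of that shape, using nothing beyond the JDK (one entry point 
routing typed events to handlers on a shared executor; all names here are 
illustrative, not the HDDS code):

{code:java}
import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.function.Consumer;

/** Single entry point that routes typed events to async handlers. */
class SimpleEventQueue {
  private final Map<Class<?>, List<Consumer<Object>>> handlers =
      new ConcurrentHashMap<>();
  private final ExecutorService executor = Executors.newFixedThreadPool(4);

  <T> void addHandler(Class<T> eventType, Consumer<T> handler) {
    handlers.computeIfAbsent(eventType, k -> new CopyOnWriteArrayList<>())
        .add(event -> handler.accept(eventType.cast(event)));
  }

  /** Fire-and-forget: each matching handler runs on the executor threads. */
  void fireEvent(Object event) {
    List<Consumer<Object>> list =
        handlers.getOrDefault(event.getClass(), Collections.emptyList());
    for (Consumer<Object> h : list) {
      executor.submit(() -> h.accept(event));
    }
  }
}
{code}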






[jira] [Commented] (HDFS-13339) Volume reference can't release when testVolFailureStatsPreservedOnNNRestart

2018-05-11 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16472903#comment-16472903
 ] 

Xiao Chen commented on HDFS-13339:
--

Thanks for the ping, will find cycles to review next week.

Could you please update the title / description of the jira as Daryn suggested?

> Volume reference can't release when testVolFailureStatsPreservedOnNNRestart
> ---
>
> Key: HDFS-13339
> URL: https://issues.apache.org/jira/browse/HDFS-13339
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
> Environment: os: Linux 2.6.32-358.el6.x86_64
> hadoop version: hadoop-3.2.0-SNAPSHOT
> unit: mvn test -Pnative 
> -Dtest=TestDataNodeVolumeFailureReporting#testVolFailureStatsPreservedOnNNRestart
>Reporter: liaoyuxiangqin
>Assignee: liaoyuxiangqin
>Priority: Critical
>  Labels: DataNode, volumes
> Attachments: HDFS-13339.001.patch
>
>
> When I execute the unit test
>  TestDataNodeVolumeFailureReporting#testVolFailureStatsPreservedOnNNRestart, 
> the process blocks on waitReplication; detailed information follows:
> [INFO] ---
>  [INFO] T E S T S
>  [INFO] ---
>  [INFO] Running 
> org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting
>  [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 
> 307.492 s <<< FAILURE! - in 
> org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting
>  [ERROR] 
> testVolFailureStatsPreservedOnNNRestart(org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting)
>  Time elapsed: 307.206 s <<< ERROR!
>  java.util.concurrent.TimeoutException: Timed out waiting for /test1 to reach 
> 2 replicas
>  at org.apache.hadoop.hdfs.DFSTestUtil.waitReplication(DFSTestUtil.java:800)
>  at 
> org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting.testVolFailureStatsPreservedOnNNRestart(TestDataNodeVolumeFailureReporting.java:283)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498)
>  at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>  at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>  at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>  at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)






[jira] [Updated] (HDDS-49) Standalone protocol should use grpc in place of netty.

2018-05-11 Thread Mukul Kumar Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-49?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mukul Kumar Singh updated HDDS-49:
--
Attachment: HDDS-49.001.patch

> Standalone protocol should use grpc in place of netty.
> --
>
> Key: HDDS-49
> URL: https://issues.apache.org/jira/browse/HDDS-49
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-49.001.patch
>
>
> Currently an Ozone client in standalone mode communicates with the datanode 
> over netty. However, while using ratis, grpc is the default protocol.
> In order to reduce the number of rpc protocols and their handling, this jira 
> aims to convert the standalone protocol to use grpc.






[jira] [Updated] (HDDS-49) Standalone protocol should use grpc in place of netty.

2018-05-11 Thread Mukul Kumar Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-49?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mukul Kumar Singh updated HDDS-49:
--
Status: Patch Available  (was: Open)

> Standalone protocol should use grpc in place of netty.
> --
>
> Key: HDDS-49
> URL: https://issues.apache.org/jira/browse/HDDS-49
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-49.001.patch
>
>
> Currently an Ozone client in standalone mode communicates with the datanode 
> over netty. However, while using ratis, grpc is the default protocol.
> In order to reduce the number of rpc protocols and their handling, this jira 
> aims to convert the standalone protocol to use grpc.






[jira] [Commented] (HDDS-17) Add node to container map class to simplify state in SCM

2018-05-11 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-17?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16472899#comment-16472899
 ] 

genericqa commented on HDDS-17:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
51s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 28m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 49s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m  
2s{color} | {color:red} hadoop-hdds/common in trunk has 19 extant Findbugs 
warnings. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
41s{color} | {color:red} hadoop-hdds/server-scm in trunk has 1 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
49s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
20s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 27m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 27m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
 8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 33s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
49s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
5s{color} | {color:green} common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  2m 24s{color} 
| {color:red} server-scm in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
31s{color} | {color:green} tools in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
40s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}134m 12s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdds.scm.container.closer.TestContainerCloser |
|   | hadoop.hdds.scm.block.TestDeletedBlockLog |
|   | hadoop.ozone.container.replication.TestContainerSupervisor |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDDS-17 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12923122/HDDS-17.003.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  

[jira] [Updated] (HDDS-50) EventQueue: Add a priority based execution model for events in eventqueue.

2018-05-11 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-50?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-50:
-
Sprint: HDDS Acadia

> EventQueue: Add a priority based execution model for events in eventqueue.
> --
>
> Key: HDDS-50
> URL: https://issues.apache.org/jira/browse/HDDS-50
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Affects Versions: 0.2.1
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
>Priority: Major
> Fix For: 0.2.1
>
>
> Currently all the events in SCM are executed with the same priority. This 
> jira will add a priority-based execution model where the "niceness" value of 
> an event will determine the priority of the execution of the event.






[jira] [Updated] (HDDS-24) Ozone: Rename directory in ozonefs should be atomic

2018-05-11 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-24?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-24:
-
Sprint: HDDS Acadia

> Ozone: Rename directory in ozonefs should be atomic
> ---
>
> Key: HDDS-24
> URL: https://issues.apache.org/jira/browse/HDDS-24
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
>
> Currently rename in ozonefs is not atomic. While a rename takes place, another 
> client might be adding a new file to the directory. Further, if the rename 
> fails midway, the directory will be left in an inconsistent state.






[jira] [Updated] (HDDS-48) ContainerIO - Storage Management

2018-05-11 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-48?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-48:
-
Sprint: HDDS Acadia

> ContainerIO - Storage Management
> 
>
> Key: HDDS-48
> URL: https://issues.apache.org/jira/browse/HDDS-48
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Attachments: ContainerIO-StorageManagement-DesignDoc.pdf, HDDS 
> DataNode Disk Layout.pdf
>
>
> We propose refactoring the HDDS DataNode IO path to enforce clean separation 
> between the Container management and the Storage layers. All components 
> requiring access to HDDS containers on a Datanode should do so via this 
> Storage layer.
> The proposed Storage layer would be responsible for end-to-end disk and 
> volume management. This involves running disk checks and detecting disk 
> failures, distributing data across disks as per the configured policy, 
> collecting performance statistics and verifying the integrity of the data. 
> The attached design doc gives an overview of the proposed class diagram.






[jira] [Updated] (HDDS-49) Standalone protocol should use grpc in place of netty.

2018-05-11 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-49?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-49:
-
Sprint: HDDS Acadia

> Standalone protocol should use grpc in place of netty.
> --
>
> Key: HDDS-49
> URL: https://issues.apache.org/jira/browse/HDDS-49
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
>Priority: Major
> Fix For: 0.2.1
>
>
> Currently an Ozone client in standalone mode communicates with the datanode 
> over netty. However, while using ratis, grpc is the default protocol.
> In order to reduce the number of rpc protocols and their handling, this jira 
> aims to convert the standalone protocol to use grpc.






[jira] [Updated] (HDDS-26) Fix Ozone Unit Test Failures

2018-05-11 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-26?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-26:
-
Sprint: HDDS Acadia

> Fix Ozone Unit Test Failures
> 
>
> Key: HDDS-26
> URL: https://issues.apache.org/jira/browse/HDDS-26
> Project: Hadoop Distributed Data Store
>  Issue Type: Test
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
>
> This is an umbrella JIRA to fix unit test failures, related or unrelated to 
> HDDS-1.






[jira] [Updated] (HDDS-19) Update ozone to latest ratis snapshot build (0.1.1-alpha-4309324-SNAPSHOT)

2018-05-11 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-19?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-19:
-
Sprint: HDDS Acadia

> Update ozone to latest ratis snapshot build (0.1.1-alpha-4309324-SNAPSHOT)
> --
>
> Key: HDDS-19
> URL: https://issues.apache.org/jira/browse/HDDS-19
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.2.1
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Blocker
> Fix For: 0.2.1
>
> Attachments: HDDS-19.001.patch, HDFS-13456-HDFS-7240.001.patch, 
> HDFS-13456-HDFS-7240.002.patch
>
>







[jira] [Updated] (HDDS-22) Restructure SCM - Datanode protocol

2018-05-11 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-22?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-22:
-
Sprint: HDDS Acadia

> Restructure SCM - Datanode protocol
> ---
>
> Key: HDDS-22
> URL: https://issues.apache.org/jira/browse/HDDS-22
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Datanode, SCM
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
>
> This jira aims at properly defining the SCM - Datanode protocol.
> *EBNF of Heartbeat*
> {noformat}
> Heartbeat ::= DatanodeDetails | NodeReport | ContainerReports | 
> DeltaContainerReports | PipelineReports
>   DatanodeDetails ::= UUID | IpAddress | Hostname | Port
> Port ::= Type | Value
>   NodeReport ::= NodeIOStats | StorageReports
> NodeIOStats ::= ContainerOps | KeyOps | ChunkOps
>   ContainerOps ::= CreateCount | DeleteCount| GetInfoCount
>   KeyOps ::= putKeyCount | getKeyCount | DeleteKeyCount | ListKeyCount
>   ChunkOps ::= WriteChunkCount | ReadChunkCount | DeleteChunkCount
> StorageReports ::= zero or more StorageReport 
>   StorageReport ::= StorageID | Health | Used | Available | VolumeIOStats
> Health ::= Status | ErrorCode | Message
> VolumeIOStats ::= ReadBytes | ReadOpCount | WriteBytes | WriteOpCount 
> | ReadTime | WriteTime
>   ContainerReports ::= zero or more ContainerReport
> ContainerReport ::= ContainerID | finalHash | size | used | keyCount |  
> Name |  LifeCycleState | ContainerIOStats 
>   ContainerIOStats ::= readCount| writeCount| readBytes| writeBytes
>   DeltaContainerReports ::= ContainerID | Used
>   PipelineReport ::= PipelineID | Members | RatisChange | ChangeTimeStamp | 
> EpochID | LogStats | LogFailed
> RatisChange ::= NodeAdded | NodeRemoved | DeadNode | NewLeaderElected | 
> EpochChanged
> {noformat}






[jira] [Updated] (HDDS-2) Chill Mode to consider percentage of container reports

2018-05-11 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-2?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-2:

Sprint: HDDS Acadia

> Chill Mode to consider percentage of container reports
> --
>
> Key: HDDS-2
> URL: https://issues.apache.org/jira/browse/HDDS-2
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>  Components: SCM
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: Chill Mode.pdf, HDDS-02.002.patch, HDDS-02.003.patch, 
> HDDS-2.004.patch, HDFS-13500.00.patch, HDFS-13500.01.patch, 
> HDFS-13500.02.patch
>
>
> Currently, SCM comes out of chill mode as soon as one datanode is registered.
> This needs to be changed to consider the percentage of container reports.
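
A sketch of what a percentage-based exit rule could look like (the threshold 
and names are assumptions for illustration, not taken from the attached 
patches):

{code:java}
/** Illustrative chill-mode exit rule based on container report coverage. */
class ChillModeCheck {
  // Assumed threshold: leave chill mode once 99% of the containers known
  // to SCM have been covered by datanode container reports.
  private static final double REPORT_THRESHOLD = 0.99;

  static boolean canExitChillMode(long containersReported,
                                  long containersKnownToScm) {
    if (containersKnownToScm == 0) {
      return true;  // nothing to wait for on a fresh cluster
    }
    double fraction = (double) containersReported / containersKnownToScm;
    return fraction >= REPORT_THRESHOLD;
  }
}
{code}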






[jira] [Updated] (HDDS-38) Add SCMNodeStorage map in SCM class to store storage statistics per Datanode

2018-05-11 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-38?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-38:
-
Sprint: HDDS Acadia

> Add SCMNodeStorage map in SCM class to store storage statistics per Datanode
> 
>
> Key: HDDS-38
> URL: https://issues.apache.org/jira/browse/HDDS-38
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-38.00.patch
>
>
> Currently, the storage stats per Datanode are maintained inside 
> scmNodeManager. This will move the scmNodeStats for storage outside 
> SCMNodeManager to simplify refactoring.






[jira] [Updated] (HDDS-46) Add unit test to verify loading of HddsDatanodeService as part of datanode startup when ozone is enabled

2018-05-11 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-46?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-46:
-
Sprint: HDDS Acadia

> Add unit test to verify loading of HddsDatanodeService as part of datanode 
> startup when ozone is enabled
> 
>
> Key: HDDS-46
> URL: https://issues.apache.org/jira/browse/HDDS-46
> Project: Hadoop Distributed Data Store
>  Issue Type: Test
>  Components: Ozone Datanode
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
>
> When {{ozone}} is enabled and the {{HddsDatanodeService}} plugin is 
> configured, the datanode should load the {{HddsDatanodeService}} plugin and 
> start Hdds services. We have to add a unit test case inside 
> {{hadoop/hadoop-ozone/integration-test}} using {{MiniDFSCluster}} to verify 
> the following scenarios (see the sketch after this list):
>  * HddsDatanodeService is loaded as part of datanode startup.
>  * run a Freon test against {{MiniDFSCluster}} and validate the data that has 
> been written.
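
A minimal sketch of the first scenario, assuming the standard datanode plugin 
config key and the {{HddsDatanodeService}} class name (both would need to 
match the actual module):

{code:java}
import static org.junit.Assert.assertEquals;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.MiniDFSCluster;
import org.junit.Test;

public class TestHddsDatanodePluginLoading {
  @Test
  public void testPluginLoadedOnDatanodeStartup() throws Exception {
    Configuration conf = new Configuration();
    // Assumed wiring: point the DataNode plugin key at HddsDatanodeService.
    conf.set("dfs.datanode.plugins",
        "org.apache.hadoop.ozone.HddsDatanodeService");
    MiniDFSCluster cluster =
        new MiniDFSCluster.Builder(conf).numDataNodes(1).build();
    try {
      cluster.waitActive();
      // Reaching an active cluster is the smoke check here; a plugin class
      // that fails to load would surface during datanode startup.
      assertEquals(1, cluster.getDataNodes().size());
    } finally {
      cluster.shutdown();
    }
  }
}
{code}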






[jira] [Updated] (HDDS-3) When datanodes register, send NodeReport and ContainerReport

2018-05-11 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-3?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-3:

Sprint: HDDS Acadia

> When datanodes register, send NodeReport and ContainerReport
> 
>
> Key: HDDS-3
> URL: https://issues.apache.org/jira/browse/HDDS-3
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>  Components: Ozone Datanode, SCM
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-3.001.patch, HDFS-13432-HDFS-7240.00.patch, 
> HDFS-13432.01.patch
>
>
> From the chillmode design notes:
> As part of this Jira, we will update register to send NodeReport and 
> ContainerReport.
> Currently Datanodes send one heartbeat per 30 seconds. That means that even if 
> the datanode is ready, it will take around 1 min or longer before the SCM 
> sees the datanode container reports. We can address this partially by making 
> sure that the Register call contains both NodeReport and ContainerReport.
>  
>  






[jira] [Updated] (HDDS-44) Ozone: start-ozone.sh fail to start datanode because of incomplete classpaths

2018-05-11 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-44?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-44:
-
Sprint: HDDS Acadia

> Ozone: start-ozone.sh fail to start datanode because of incomplete classpaths
> -
>
> Key: HDDS-44
> URL: https://issues.apache.org/jira/browse/HDDS-44
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Tools
>Affects Versions: 0.2.1
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDFS-13383-HDFS-7240.001.patch, 
> HDFS-13383-HDFS-7240.002.patch
>
>
> start-ozone.sh calls start-dfs.sh to start the NN and DN in an ozone cluster. 
> Starting the datanode fails because of incomplete classpaths, as the datanode 
> is unable to load all the plugins.
> Setting the classpath to the following value does resolve the issue:
> {code}
> export 
> HADOOP_CLASSPATH=$HADOOP_CLASSPATH:/opt/hadoop/hadoop-3.2.0-SNAPSHOT/share/hadoop/ozone/*:/opt/hadoop/hadoop-3.2.0-SNAPSHOT/share/hadoop/hdsl/*:/opt/hadoop/hadoop-3.2.0-SNAPSHOT/share/hadoop/ozone/lib/*:/opt/hadoop/hadoop-3.2.0-SNAPSHOT/share/hadoop/hdsl/lib/*:/opt/hadoop/hadoop-3.2.0-SNAPSHOT/share/hadoop/cblock/*:/opt/hadoop/hadoop-3.2.0-SNAPSHOT/share/hadoop/cblock/lib/*
> {code}






[jira] [Updated] (HDDS-17) Add node to container map class to simplify state in SCM

2018-05-11 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-17?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-17:
-
Sprint: HDDS Acadia

> Add node to container map class to simplify state in SCM
> 
>
> Key: HDDS-17
> URL: https://issues.apache.org/jira/browse/HDDS-17
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-17.001.patch, HDDS-17.002.patch, HDDS-17.003.patch
>
>
> The current SCM state map is maintained in nodeStateManager. This is the 
> first of several refactorings to split it into independent, small classes.






[jira] [Updated] (HDDS-14) Ozone: freon should not retry creating keys immediately after chill mode failures

2018-05-11 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-14?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-14:
-
Sprint: HDDS Acadia

> Ozone: freon should not retry creating keys immediately after chill mode 
> failures
> --
>
> Key: HDDS-14
> URL: https://issues.apache.org/jira/browse/HDDS-14
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
>
> I've seen many create key failures immediately after spinning up the docker 
> based ozone cluster. The error stack does not reveal that this is caused by 
> chill mode (the SCM log has it).
> freon could handle chill mode better, without too many create key retry 
> failures in a short period of time.






[jira] [Updated] (HDDS-21) Ozone: Add support for rename key within a bucket for rest client

2018-05-11 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-21?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-21:
-
Sprint: HDDS Acadia

> Ozone: Add support for rename key within a bucket for rest client
> -
>
> Key: HDDS-21
> URL: https://issues.apache.org/jira/browse/HDDS-21
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-21.001.patch, HDDS-21.002.patch, HDDS-21.003.patch, 
> HDFS-13229-HDFS-7240.001.patch
>
>
> This jira aims to add support for renaming a key within a bucket for the rest 
> client.






[jira] [Updated] (HDDS-45) Removal of old OzoneRestClient

2018-05-11 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-45?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-45:
-
Sprint: HDDS Acadia

> Removal of old OzoneRestClient
> --
>
> Key: HDDS-45
> URL: https://issues.apache.org/jira/browse/HDDS-45
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
>
> Once the new REST-based OzoneClient is ready, the old OzoneRestClient can be 
> removed. This jira is to track that removal.






[jira] [Created] (HDDS-50) EventQueue: Add a priority based execution model for events in eventqueue.

2018-05-11 Thread Mukul Kumar Singh (JIRA)
Mukul Kumar Singh created HDDS-50:
-

 Summary: EventQueue: Add a priority based execution model for 
events in eventqueue.
 Key: HDDS-50
 URL: https://issues.apache.org/jira/browse/HDDS-50
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: SCM
Affects Versions: 0.2.1
Reporter: Mukul Kumar Singh
Assignee: Mukul Kumar Singh
 Fix For: 0.2.1


Currently all the events in SCM are executed with the same priority. This jira 
will add a priority-based execution model where the "niceness" value of an 
event will determine the priority of the execution of the event.






[jira] [Created] (HDDS-49) Standalone protocol should use grpc in place of netty.

2018-05-11 Thread Mukul Kumar Singh (JIRA)
Mukul Kumar Singh created HDDS-49:
-

 Summary: Standalone protocol should use grpc in place of netty.
 Key: HDDS-49
 URL: https://issues.apache.org/jira/browse/HDDS-49
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: Ozone Datanode
Reporter: Mukul Kumar Singh
Assignee: Mukul Kumar Singh
 Fix For: 0.2.1


Currently an Ozone client in standalone mode communicates with the datanode 
over netty. However, while using ratis, grpc is the default protocol.

In order to reduce the number of rpc protocols and their handling, this jira 
aims to convert the standalone protocol to use grpc.






[jira] [Commented] (HDDS-17) Add node to container map class to simplify state in SCM

2018-05-11 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-17?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16472870#comment-16472870
 ] 

Anu Engineer commented on HDDS-17:
--

[~xyao] Thank you for the comments. Patch v3 addresses all the comments. Please 
see below for the details.
{quote}Line 41: "positive int" should be "positive long"
{quote}
Good catch, fixed.
{quote}Line 70: Should we use atomic APIs offered by ConcurrentHashMap like 
putIfAbsent, etc.
{quote}
Fixed.
{quote}Line 99: How do we plan to share the cycle for further report (like 
size/stats update)
{quote}
It will be managed by the {{ContainerStateMap}}.
{quote}Line 155: same as Line 70.
{quote}
Fixed.
{quote}  Line 159: Should we return an immutable collection here?
{quote}
Done.

 

Patch v3 also fixes the findbugs issues, and a big test case has been broken 
into 4 smaller test cases to make failures easier to reason about.
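
For readers following along, the two ConcurrentHashMap-related review points 
translate roughly into the following JDK usage (a generic sketch, not the 
patch itself):

{code:java}
import java.util.Collections;
import java.util.Map;
import java.util.Set;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

/** Sketch of a node-to-container map built on atomic ConcurrentHashMap APIs. */
class Node2ContainerMapSketch {
  private final Map<UUID, Set<Long>> map = new ConcurrentHashMap<>();

  /** putIfAbsent-style insert: refuses to clobber an existing datanode entry. */
  boolean insertNewDatanode(UUID dnId, Set<Long> containerIds) {
    Set<Long> fresh = ConcurrentHashMap.newKeySet();
    fresh.addAll(containerIds);
    return map.putIfAbsent(dnId, fresh) == null;  // atomic check-and-insert
  }

  /** Returns an immutable view so callers cannot mutate internal state. */
  Set<Long> getContainers(UUID dnId) {
    Set<Long> containers = map.get(dnId);
    return containers == null
        ? Collections.emptySet()
        : Collections.unmodifiableSet(containers);
  }
}
{code}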

> Add node to container map class to simplify state in SCM
> 
>
> Key: HDDS-17
> URL: https://issues.apache.org/jira/browse/HDDS-17
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-17.001.patch, HDDS-17.002.patch, HDDS-17.003.patch
>
>
> The current SCM state map is maintained in nodeStateManager. This is the 
> first of several refactorings to split it into independent, small classes.






[jira] [Updated] (HDDS-17) Add node to container map class to simplify state in SCM

2018-05-11 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-17?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-17:
-
Attachment: HDDS-17.003.patch

> Add node to container map class to simplify state in SCM
> 
>
> Key: HDDS-17
> URL: https://issues.apache.org/jira/browse/HDDS-17
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-17.001.patch, HDDS-17.002.patch, HDDS-17.003.patch
>
>
> The current SCM state map is maintained in nodeStateManager. This is the 
> first of several refactorings to split it into independent, small classes.






[jira] [Commented] (HDFS-12615) Router-based HDFS federation phase 2

2018-05-11 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-12615?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16472868#comment-16472868
 ] 

Íñigo Goiri commented on HDFS-12615:


I will be presenting this work at the Dataworks Summit.
Is there anyone interested in sharing their experience deploying RBF?

> Router-based HDFS federation phase 2
> 
>
> Key: HDFS-12615
> URL: https://issues.apache.org/jira/browse/HDFS-12615
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
>  Labels: RBF
>
> This umbrella JIRA tracks a set of improvements over the Router-based HDFS 
> federation (HDFS-10467).






[jira] [Updated] (HDDS-10) docker changes to test secure ozone cluster

2018-05-11 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-10?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-10:
-
Fix Version/s: 0.3.0

> docker changes to test secure ozone cluster
> ---
>
> Key: HDDS-10
> URL: https://issues.apache.org/jira/browse/HDDS-10
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Security
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 0.3.0
>
> Attachments: HDDS-10-HDDS-4.00.patch
>
>
> Update docker compose and settings to test secure ozone cluster.






[jira] [Updated] (HDDS-21) Ozone: Add support for rename key within a bucket for rest client

2018-05-11 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-21?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-21:
-
Fix Version/s: 0.2.1

> Ozone: Add support for rename key within a bucket for rest client
> -
>
> Key: HDDS-21
> URL: https://issues.apache.org/jira/browse/HDDS-21
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-21.001.patch, HDDS-21.002.patch, HDDS-21.003.patch, 
> HDFS-13229-HDFS-7240.001.patch
>
>
> This jira aims to add support for renaming a key within a bucket for the rest 
> client.






[jira] [Resolved] (HDDS-41) Ozone: C/C++ implementation of ozone client using curl

2018-05-11 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-41?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer resolved HDDS-41.
--
Resolution: Incomplete

> Ozone: C/C++ implementation of ozone client using curl
> --
>
> Key: HDDS-41
> URL: https://issues.apache.org/jira/browse/HDDS-41
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Native
>Affects Versions: 0.2.1
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
>  Labels: OzonePostMerge
> Fix For: 0.2.1
>
> Attachments: HDFS-12340-HDFS-7240.001.patch, 
> HDFS-12340-HDFS-7240.002.patch, HDFS-12340-HDFS-7240.003.patch, 
> HDFS-12340-HDFS-7240.004.patch, main.C, ozoneClient.C, ozoneClient.h
>
>
> This Jira is introduced for the implementation of an ozone client in C/C++ 
> using the curl library.
> All these calls will make use of the HTTP protocol and will require libcurl. 
> The libcurl API is referenced here:
> https://curl.haxx.se/libcurl/
> Additional details will be posted along with the patches.






[jira] [Commented] (HDDS-41) Ozone: C/C++ implementation of ozone client using curl

2018-05-11 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-41?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16472834#comment-16472834
 ] 

Anu Engineer commented on HDDS-41:
--

I am going to close this Jira for the time being; please re-open it if we 
decide to continue work on this.

> Ozone: C/C++ implementation of ozone client using curl
> --
>
> Key: HDDS-41
> URL: https://issues.apache.org/jira/browse/HDDS-41
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Native
>Affects Versions: 0.2.1
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
>  Labels: OzonePostMerge
> Fix For: 0.2.1
>
> Attachments: HDFS-12340-HDFS-7240.001.patch, 
> HDFS-12340-HDFS-7240.002.patch, HDFS-12340-HDFS-7240.003.patch, 
> HDFS-12340-HDFS-7240.004.patch, main.C, ozoneClient.C, ozoneClient.h
>
>
> This Jira is introduced for the implementation of an ozone client in C/C++ 
> using the curl library.
> All these calls will make use of the HTTP protocol and will require libcurl. 
> The libcurl API is referenced here:
> https://curl.haxx.se/libcurl/
> Additional details will be posted along with the patches.






[jira] [Commented] (HDDS-19) Update ozone to latest ratis snapshot build (0.1.1-alpha-4309324-SNAPSHOT)

2018-05-11 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-19?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16472809#comment-16472809
 ] 

Tsz Wo Nicholas Sze commented on HDDS-19:
-

[~ljain],  RATIS-237 is now committed.  I also have deployed a new snapshot 
0.1.1-alpha-d7d7061-SNAPSHOT.  Please update your patch accordingly.  Thanks.

> Update ozone to latest ratis snapshot build (0.1.1-alpha-4309324-SNAPSHOT)
> --
>
> Key: HDDS-19
> URL: https://issues.apache.org/jira/browse/HDDS-19
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.2.1
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Blocker
> Fix For: 0.2.1
>
> Attachments: HDDS-19.001.patch, HDFS-13456-HDFS-7240.001.patch, 
> HDFS-13456-HDFS-7240.002.patch
>
>







[jira] [Commented] (HDFS-13547) Add ingress port based sasl resolver

2018-05-11 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13547?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16472800#comment-16472800
 ] 

genericqa commented on HDFS-13547:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 28m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 32s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
57s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 26m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 26m 
54s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 50s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch 
generated 9 new + 31 unchanged - 0 fixed = 40 total (was 31) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 25s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
45s{color} | {color:red} hadoop-common-project/hadoop-common generated 1 new + 
0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
52s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
39s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}123m 24s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-common-project/hadoop-common |
|  |  Redundant nullcheck of props, which is known to be non-null in 
org.apache.hadoop.security.IngressPortBasedResolver.setConf(Configuration)  
Redundant null check at IngressPortBasedResolver.java:is known to be non-null 
in org.apache.hadoop.security.IngressPortBasedResolver.setConf(Configuration)  
Redundant null check at IngressPortBasedResolver.java:[line 75] |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDFS-13547 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12923090/HDFS-13547.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 51ea968c21fa 3.13.0-141-generic #190-Ubuntu SMP Fri Jan 19 
12:52:38 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 4b4f24a |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| findbugs | 

[jira] [Commented] (HDFS-13544) Improve logging for JournalNode in federated cluster

2018-05-11 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-13544?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16472790#comment-16472790
 ] 

Íñigo Goiri commented on HDFS-13544:


Thanks [~hanishakoneru] for the log lines, looks good.
The unit tests are the usual suspects and there's nothing that should break 
those.
+1 on  [^HDFS-13544.001.patch].

> Improve logging for JournalNode in federated cluster
> 
>
> Key: HDFS-13544
> URL: https://issues.apache.org/jira/browse/HDFS-13544
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: federation, hdfs
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Attachments: HDFS-13544.001.patch
>
>
> In a federated cluster, when two namespaces utilize the same JournalSet, it 
> is difficult to tell from some of the log statements which Namespace they 
> were logged for.
> For example, the following two log statements do not tell us which Namespace 
> the edit log belongs to.
> {code:java}
> INFO  server.Journal (Journal.java:prepareRecovery(773)) - Prepared recovery 
> for segment 1: segmentState { startTxId: 1 endTxId: 10 isInProgress: true } 
> lastWriterEpoch: 1 lastCommittedTxId: 10
> INFO  server.Journal (Journal.java:acceptRecovery(826)) - Synchronizing log 
> startTxId: 1 endTxId: 11 isInProgress: true: old segment startTxId: 1 
> endTxId: 10 isInProgress: true is not the right length{code}
> We should add the NameserviceID or the JournalID to appropriate JournalNode 
> logs to help with debugging.
>  
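
A sketch of the kind of change being suggested, with hypothetical names (the 
actual patch may tag the lines differently):

{code:java}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

/**
 * Hypothetical sketch: tag Journal log lines with the journal id so that
 * interleaved statements from two namespaces can be told apart.
 */
class JournalLoggingSketch {
  private static final Logger LOG =
      LoggerFactory.getLogger(JournalLoggingSketch.class);

  private final String journalId;  // e.g. the nameservice this journal serves

  JournalLoggingSketch(String journalId) {
    this.journalId = journalId;
  }

  void logPreparedRecovery(long segmentTxId) {
    // The "[journalId]" prefix makes the line attributable to one namespace.
    LOG.info("[{}] Prepared recovery for segment {}", journalId, segmentTxId);
  }
}
{code}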






[jira] [Commented] (HDDS-3) When datanodes register, send NodeReport and ContainerReport

2018-05-11 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-3?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16472783#comment-16472783
 ] 

genericqa commented on HDDS-3:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 6 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 20s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
32s{color} | {color:red} hadoop-hdds/server-scm in trunk has 1 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 31s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
21s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
25s{color} | {color:red} hadoop-hdds_container-service generated 1 new + 1 
unchanged - 0 fixed = 2 total (was 1) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
26s{color} | {color:green} container-service in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  2m 13s{color} 
| {color:red} server-scm in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 61m 42s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdds.scm.container.closer.TestContainerCloser |
|   | hadoop.hdds.scm.block.TestDeletedBlockLog |
|   | hadoop.ozone.container.replication.TestContainerSupervisor |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDDS-3 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12923100/HDDS-3.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  cc  |
| uname | Linux 5aca353c7988 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 

[jira] [Updated] (HDDS-48) ContainerIO - Storage Management

2018-05-11 Thread Bharat Viswanadham (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-48?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-48:
---
Attachment: HDDS DataNode Disk Layout.pdf

> ContainerIO - Storage Management
> 
>
> Key: HDDS-48
> URL: https://issues.apache.org/jira/browse/HDDS-48
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Attachments: ContainerIO-StorageManagement-DesignDoc.pdf, HDDS 
> DataNode Disk Layout.pdf
>
>
> We propose refactoring the HDDS DataNode IO path to enforce clean separation 
> between the Container management and the Storage layers. All components 
> requiring access to HDDS containers on a Datanode should do so via this 
> Storage layer.
> The proposed Storage layer would be responsible for end-to-end disk and 
> volume management. This involves running disk checks and detecting disk 
> failures, distributing data across disks as per the configured policy, 
> collecting performance statistics and verifying the integrity of the data. 
> Attached Design Doc gives an overview of the proposed class diagram.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-48) ContainerIO - Storage Management

2018-05-11 Thread Hanisha Koneru (JIRA)
Hanisha Koneru created HDDS-48:
--

 Summary: ContainerIO - Storage Management
 Key: HDDS-48
 URL: https://issues.apache.org/jira/browse/HDDS-48
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
Reporter: Hanisha Koneru
Assignee: Hanisha Koneru
 Attachments: ContainerIO-StorageManagement-DesignDoc.pdf

We propose refactoring the HDDS DataNode IO path to enforce clean separation 
between the Container management and the Storage layers. All components 
requiring access to HDDS containers on a Datanode should do so via this Storage 
layer.

The proposed Storage layer would be responsible for end-to-end disk and volume 
management. This involves running disk checks and detecting disk failures, 
distributing data across disks as per the configured policy, collecting 
performance statistics and verifying the integrity of the data. 

Attached Design Doc gives an overview of the proposed class diagram.
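
To make the division of responsibilities concrete, here is a purely illustrative Java interface; the names below are placeholders of mine, not taken from the design doc, which remains the authoritative class diagram.

{code:java}
// Illustrative only -- the real class diagram is in the attached PDF.
// Each method names one responsibility listed in the description above.
public interface StorageLayerSketch {
  /** Run disk checks and mark failed volumes as unusable. */
  void checkDisks();

  /** Pick a volume for new data according to the configured policy. */
  String chooseVolume(long estimatedSizeBytes);

  /** Verify the integrity of a container's on-disk data. */
  void verifyIntegrity(String containerName);

  /** Return collected per-volume performance statistics. */
  java.util.Map<String, Long> getVolumeStats();
}
{code}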



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13544) Improve logging for JournalNode in federated cluster

2018-05-11 Thread Hanisha Koneru (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13544?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16472747#comment-16472747
 ] 

Hanisha Koneru commented on HDFS-13544:
---

Thanks [~elgoiri]. Will go ahead with journal id.

Here are some examples of new logs from JournalNode
{code:java}
INFO org.apache.hadoop.hdfs.qjournal.server.Journal: Updating lastPromisedEpoch 
from 0 to 1 for client /127.0.0.1 ; journal id: ns1

INFO org.apache.hadoop.hdfs.qjournal.server.Journal: getSegmentInfo(19): 
EditLogFile(file=/data/jn/ns1/current/edits_inprogress_019,first=019,last=019,inProgress=true,hasCorruptHeader=false)
 -> startTxId: 19 endTxId: 19 isInProgress: true ; journal id: ns1

INFO org.apache.hadoop.hdfs.qjournal.server.Journal: Accepted recovery for 
segment 19: segmentState { startTxId: 19 endTxId: 19 isInProgress: true } 
acceptedInEpoch: 2 ; journal id: ns1{code}
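For readers following along, a minimal self-contained sketch of the pattern behind these lines; the class, field, and helper names are illustrative, not necessarily what the patch uses.

{code:java}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Hypothetical sketch, not the actual Journal.java change: suffix every
// log message with the journal id so federated logs are distinguishable.
public class JournalLogExample {
  private static final Logger LOG =
      LoggerFactory.getLogger(JournalLogExample.class);

  private final String journalId;

  public JournalLogExample(String journalId) {
    this.journalId = journalId;
  }

  // Append "; journal id: <id>", matching the sample log lines above.
  private String withJournalId(String msg) {
    return msg + " ; journal id: " + journalId;
  }

  public void logEpochUpdate(long oldEpoch, long newEpoch, String client) {
    LOG.info(withJournalId("Updating lastPromisedEpoch from " + oldEpoch
        + " to " + newEpoch + " for client " + client));
  }

  public static void main(String[] args) {
    new JournalLogExample("ns1").logEpochUpdate(0, 1, "/127.0.0.1");
  }
}
{code}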

> Improve logging for JournalNode in federated cluster
> 
>
> Key: HDFS-13544
> URL: https://issues.apache.org/jira/browse/HDFS-13544
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: federation, hdfs
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Attachments: HDFS-13544.001.patch
>
>
> In a federated cluster, when two namespaces utilize the same JournalSet, it is 
> difficult to decode some of the log statements as to which Namespace it is 
> logging for. 
> For example, the following two log statements do not tell us which Namespace 
> the edit log belongs to.
> {code:java}
> INFO  server.Journal (Journal.java:prepareRecovery(773)) - Prepared recovery 
> for segment 1: segmentState { startTxId: 1 endTxId: 10 isInProgress: true } 
> lastWriterEpoch: 1 lastCommittedTxId: 10
> INFO  server.Journal (Journal.java:acceptRecovery(826)) - Synchronizing log 
> startTxId: 1 endTxId: 11 isInProgress: true: old segment startTxId: 1 
> endTxId: 10 isInProgress: true is not the right length{code}
> We should add the NameserviceID or the JournalID to appropriate JournalNode 
> logs to help with debugging.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13502) Utility to resolve NameServiceId in federated cluster

2018-05-11 Thread Hanisha Koneru (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13502?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16472729#comment-16472729
 ] 

Hanisha Koneru commented on HDFS-13502:
---

Thanks for this improvement [~apoorvnaik]

Some suggestions:
 # We should reuse the keys from {{HdfsClientConfigKeys}} instead of defining 
them again in {{HdfsNameServiceResolver}}. We already have all the keys defined 
in that class except for {{dfs.internal.nameservices}}. We can also define this 
key in {{HdfsClientConfigKeys}}.
 # We can make the methods in {{HdfsNameServiceResolver}} as static. And avoid 
initializing an instance of this class in HdfsUtils.
 # Please resolve the checkstyle errors. I think most of them are for 
indentation.
 # Do we need to add hdfs-site.xml here? I don’t see it being used anywhere.
 # Can you please rename HdfsNameServiceResolverTest to 
TestHdfsNameServiceResolver to be consistent with Hdfs naming convention.
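
As a concrete illustration of the reverse-lookup idea in this JIRA, a hedged sketch built only on the standard configuration keys (dfs.nameservices, dfs.ha.namenodes.<ns>, dfs.namenode.rpc-address.<ns>.<nn>); the class and method names below are placeholders, not the patch's API.

{code:java}
import java.util.HashMap;
import java.util.Map;
import org.apache.hadoop.conf.Configuration;

// Placeholder class -- a sketch of the reverse-lookup idea, not the patch.
public final class NameServiceLookupSketch {
  private NameServiceLookupSketch() { }

  /** Builds a map from "host:port" to nameservice id. */
  public static Map<String, String> buildReverseMap(Configuration conf) {
    Map<String, String> addrToNs = new HashMap<>();
    for (String ns : conf.getTrimmedStrings("dfs.nameservices")) {
      String[] nnIds = conf.getTrimmedStrings("dfs.ha.namenodes." + ns);
      if (nnIds.length == 0) {
        // Non-HA nameservice: a single per-nameservice rpc-address key.
        String addr = conf.get("dfs.namenode.rpc-address." + ns);
        if (addr != null) {
          addrToNs.put(addr, ns);
        }
      } else {
        // HA nameservice: one rpc-address key per namenode id.
        for (String nn : nnIds) {
          String addr = conf.get("dfs.namenode.rpc-address." + ns + "." + nn);
          if (addr != null) {
            addrToNs.put(addr, ns);
          }
        }
      }
    }
    return addrToNs;
  }
}
{code}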

> Utility to resolve NameServiceId in federated cluster
> -
>
> Key: HDFS-13502
> URL: https://issues.apache.org/jira/browse/HDFS-13502
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Apoorv Naik
>Assignee: Apoorv Naik
>Priority: Major
> Attachments: 
> 0001-HDFS-13502-Utility-to-resolve-NameServiceId-in-feder.patch
>
>
> A utility class in HDFS that would act as a reverse lookup for : 
> HDFS URLs would be beneficial for deployments having multiple namenodes and 
> nameservices.
>  
> Consumers would benefit by having a unified namespace across the Federated 
> cluster.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-40) Separating packaging of Ozone/HDDS from the main Hadoop

2018-05-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-40?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16472719#comment-16472719
 ] 

Hudson commented on HDDS-40:


SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14179 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14179/])
HDDS-40. Separating packaging of Ozone/HDDS from the main Hadoop. (aengineer: 
rev 4b4f24ad5f2b457ad215d469bf28cf9a799812bc)
* (add) hadoop-ozone/acceptance-test/dev-support/bin/robot-all.sh
* (edit) hadoop-dist/src/main/compose/ozone/.env
* (edit) hadoop-dist/src/main/compose/ozone/docker-compose.yaml
* (add) dev-support/bin/ozone-dist-layout-stitching
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeUtils.java
* (add) hadoop-ozone/acceptance-test/dev-support/bin/robot.sh
* (add) dev-support/bin/ozone-dist-tar-stitching
* (edit) hadoop-dist/pom.xml
* (edit) hadoop-ozone/acceptance-test/pom.xml
* (edit) dev-support/bin/dist-layout-stitching
* (edit) .gitignore
* (edit) hadoop-ozone/acceptance-test/README.md
* (edit) hadoop-ozone/acceptance-test/src/test/compose/.env
* (edit) hadoop-ozone/acceptance-test/src/test/compose/docker-compose.yaml
* (edit) 
hadoop-ozone/acceptance-test/src/test/robotframework/acceptance/ozone.robot


> Separating packaging of Ozone/HDDS from the main Hadoop
> ---
>
> Key: HDDS-40
> URL: https://issues.apache.org/jira/browse/HDDS-40
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-40.001.patch, HDDS-40.002.patch, HDDS-40.003.patch, 
> HDDS-40.004.patch
>
>
> According to the community vote, Ozone/Hdds release cycle should be 
> independent from the Hadoop release cycle.
> To make it possible we need a separated ozone package.
> *The current state:*
> We have just one output tar/directory under hadoop-dist (hadoop-3.2.0). It 
> includes all the hdfs/yarn/mapreduce/hdds binaries and libraries. (Jar files 
> are put in separated directory).
> The hdds components and the hdfs components could all be started from the bin. 
> *Proposed version*
> Create a separated hadoop-dist/ozone-2.1.0 which contains only the hdfs AND 
> hdds components. Both the hdfs namenode and the hdds datanode/scm/ksm could be 
> started from the ozone-2.1.0 package. 
> Hdds packages would be removed from the original hadoop-3.2.0 directory.
> This is a relatively small change. On further JIRAs we need to:
>  * Create a shaded datanode plugin which could be used with any existing 
> hadoop cluster
>  * Use a standalone ObjectStore/Ozone server instead of the Namenode+Datanode 
> plugin.
>  * Add test cases for both the ozone-only and the mixed clusters (ozone + 
> hdfs)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13502) Utility to resolve NameServiceId in federated cluster

2018-05-11 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13502?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16472700#comment-16472700
 ] 

genericqa commented on HDFS-13502:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
29s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 32m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 22s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 24s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs-client: The 
patch generated 182 new + 1 unchanged - 0 fixed = 183 total (was 1) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 52s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
48s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
28s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 72m  6s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDFS-13502 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12923089/0001-HDFS-13502-Utility-to-resolve-NameServiceId-in-feder.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  xml  |
| uname | Linux 40299fcfd8b5 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / ca612e3 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24184/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs-client.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24184/testReport/ |
| Max. process+thread count | 303 (vs. ulimit of 1) |
| modules | C: 

[jira] [Comment Edited] (HDDS-3) When datanodes register, send NodeReport and ContainerReport

2018-05-11 Thread Bharat Viswanadham (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-3?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16472694#comment-16472694
 ] 

Bharat Viswanadham edited comment on HDDS-3 at 5/11/18 9:30 PM:


[~xyao] Rebased the patch.

 *Ran Ozone Acceptance Test:*

==
 Acceptance 
 ==
 Acceptance.Ozone :: Smoke test to start cluster with docker-compose environ...
 ==
 Daemons are running without error | PASS |
 --
 Check if datanode is connected to the scm | PASS |
 --
 Scale it up to 5 datanodes | PASS |
 --
 Test rest interface | PASS |
 --
 Test ozone cli | PASS |
 --
 Check webui static resources | PASS |
 --
 Start freon testing | PASS |
 --
 Acceptance.Ozone :: Smoke test to start cluster with docker-compos... | PASS |
 7 critical tests, 7 passed, 0 failed
 7 tests total, 7 passed, 0 failed
 ==
 Acceptance | PASS |
 7 critical tests, 7 passed, 0 failed
 7 tests total, 7 passed, 0 failed
 =


was (Author: bharatviswa):
[~xyao] Rebased the patch.

 

==
Acceptance 
==
Acceptance.Ozone :: Smoke test to start cluster with docker-compose environ...
==
Daemons are running without error | PASS |
--
Check if datanode is connected to the scm | PASS |
--
Scale it up to 5 datanodes | PASS |
--
Test rest interface | PASS |
--
Test ozone cli | PASS |
--
Check webui static resources | PASS |
--
Start freon testing | PASS |
--
Acceptance.Ozone :: Smoke test to start cluster with docker-compos... | PASS |
7 critical tests, 7 passed, 0 failed
7 tests total, 7 passed, 0 failed
==
Acceptance | PASS |
7 critical tests, 7 passed, 0 failed
7 tests total, 7 passed, 0 failed
=

> When datanodes register, send NodeReport and ContainerReport
> 
>
> Key: HDDS-3
> URL: https://issues.apache.org/jira/browse/HDDS-3
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>  Components: Ozone Datanode, SCM
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-3.001.patch, HDFS-13432-HDFS-7240.00.patch, 
> HDFS-13432.01.patch
>
>
> From chillmode Design Notes:
> As part of this Jira, register will be updated to send NodeReport and 
> ContainerReport.
> Currently, datanodes send one heartbeat per 30 seconds. That means that even if 
> the datanode is ready, it will take around 1 min or longer before the SCM 
> sees the datanode container reports. We can address this partially by making 
> sure that the Register call contains both NodeReport and ContainerReport.
>  
>  
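
To illustrate the proposed change, a self-contained sketch; all types below are stand-ins for illustration, not the real HDDS protobuf messages.

{code:java}
// Stand-in types only -- not the real HDDS protobufs.
public class RegisterWithReportsSketch {
  static class NodeReport { }
  static class ContainerReport { }

  static class RegisterRequest {
    final String datanodeId;
    final NodeReport nodeReport;           // previously arrived only with a
    final ContainerReport containerReport; // later heartbeat (up to ~30s)

    RegisterRequest(String id, NodeReport nr, ContainerReport cr) {
      this.datanodeId = id;
      this.nodeReport = nr;
      this.containerReport = cr;
    }
  }

  // After this JIRA, registration itself carries both reports, so SCM can
  // evaluate chill-mode exit without waiting for the first heartbeat.
  static RegisterRequest buildRegister(String datanodeId) {
    return new RegisterRequest(datanodeId, new NodeReport(),
        new ContainerReport());
  }

  public static void main(String[] args) {
    RegisterRequest req = buildRegister("dn-1");
    System.out.println("register carries reports: "
        + (req.nodeReport != null && req.containerReport != null));
  }
}
{code}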



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-3) When datanodes register, send NodeReport and ContainerReport

2018-05-11 Thread Bharat Viswanadham (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-3?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16472694#comment-16472694
 ] 

Bharat Viswanadham commented on HDDS-3:
---

[~xyao] Rebased the patch.

 

==
Acceptance 
==
Acceptance.Ozone :: Smoke test to start cluster with docker-compose environ...
==
Daemons are running without error | PASS |
--
Check if datanode is connected to the scm | PASS |
--
Scale it up to 5 datanodes | PASS |
--
Test rest interface | PASS |
--
Test ozone cli | PASS |
--
Check webui static resources | PASS |
--
Start freon testing | PASS |
--
Acceptance.Ozone :: Smoke test to start cluster with docker-compos... | PASS |
7 critical tests, 7 passed, 0 failed
7 tests total, 7 passed, 0 failed
==
Acceptance | PASS |
7 critical tests, 7 passed, 0 failed
7 tests total, 7 passed, 0 failed
=

> When datanodes register, send NodeReport and ContainerReport
> 
>
> Key: HDDS-3
> URL: https://issues.apache.org/jira/browse/HDDS-3
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>  Components: Ozone Datanode, SCM
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-3.001.patch, HDFS-13432-HDFS-7240.00.patch, 
> HDFS-13432.01.patch
>
>
> From chillmode Design Notes:
> As part of this Jira, register will be updated to send NodeReport and 
> ContainerReport.
> Currently, datanodes send one heartbeat per 30 seconds. That means that even if 
> the datanode is ready, it will take around 1 min or longer before the SCM 
> sees the datanode container reports. We can address this partially by making 
> sure that the Register call contains both NodeReport and ContainerReport.
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-3) When datanodes register, send NodeReport and ContainerReport

2018-05-11 Thread Bharat Viswanadham (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-3?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-3:
--
Attachment: HDDS-3.001.patch

> When datanodes register, send NodeReport and ContainerReport
> 
>
> Key: HDDS-3
> URL: https://issues.apache.org/jira/browse/HDDS-3
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>  Components: Ozone Datanode, SCM
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-3.001.patch, HDFS-13432-HDFS-7240.00.patch, 
> HDFS-13432.01.patch
>
>
> From chillmode Design Notes:
> As part of this Jira, register will be updated to send NodeReport and 
> ContainerReport.
> Currently, datanodes send one heartbeat per 30 seconds. That means that even if 
> the datanode is ready, it will take around 1 min or longer before the SCM 
> sees the datanode container reports. We can address this partially by making 
> sure that the Register call contains both NodeReport and ContainerReport.
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-40) Separating packaging of Ozone/HDDS from the main Hadoop

2018-05-11 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-40?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-40:
-
   Resolution: Fixed
Fix Version/s: 0.2.1
   Status: Resolved  (was: Patch Available)

[~elek] Thank you for the contribution, and especially for taking care of 
README.md. I have committed this to trunk.

> Separating packaging of Ozone/HDDS from the main Hadoop
> ---
>
> Key: HDDS-40
> URL: https://issues.apache.org/jira/browse/HDDS-40
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-40.001.patch, HDDS-40.002.patch, HDDS-40.003.patch, 
> HDDS-40.004.patch
>
>
> According to the community vote, Ozone/Hdds release cycle should be 
> independent from the Hadoop release cycle.
> To make it possible we need a separated ozone package.
> *The current state:*
> We have just one output tar/directory under hadoop-dist (hadoop-3.2.0). It 
> includes all the hdfs/yarn/mapreduce/hdds binaries and libraries. (Jar files 
> are put in separated directory).
> The hdds components and the hdfs components could all be started from the bin. 
> *Proposed version*
> Create a separated hadoop-dist/ozone-2.1.0 which contains only the hdfs AND 
> hdds components. Both the hdfs namenode and the hdds datanode/scm/ksm could be 
> started from the ozone-2.1.0 package. 
> Hdds packages would be removed from the original hadoop-3.2.0 directory.
> This is a relatively small change. On further JIRAs we need to:
>  * Create a shaded datanode plugin which could be used with any existing 
> hadoop cluster
>  * Use a standalone ObjectStore/Ozone server instead of the Namenode+Datanode 
> plugin.
>  * Add test cases for both the ozone-only and the mixed clusters (ozone + 
> hdfs)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-40) Separating packaging of Ozone/HDDS from the main Hadoop

2018-05-11 Thread Elek, Marton (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-40?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16472634#comment-16472634
 ] 

Elek, Marton commented on HDDS-40:
--

No more hdsl in the latest patch + rebased on the apache/trunk.

> Separating packaging of Ozone/HDDS from the main Hadoop
> ---
>
> Key: HDDS-40
> URL: https://issues.apache.org/jira/browse/HDDS-40
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Attachments: HDDS-40.001.patch, HDDS-40.002.patch, HDDS-40.003.patch, 
> HDDS-40.004.patch
>
>
> According to the community vote, Ozone/Hdds release cycle should be 
> independent from the Hadoop release cycle.
> To make it possible we need a separated ozone package.
> *The current state:*
> We have just one output tar/directory under hadoop-dist (hadoop-3.2.0). It 
> includes all the hdfs/yarn/mapreduce/hdds binaries and libraries. (Jar files 
> are put in separated directory).
> The hdds components and the hdfs components could all be started from the bin. 
> *Proposed version*
> Create a separated hadoop-dist/ozone-2.1.0 which contains only the hdfs AND 
> hdds components. Both the hdfs namenode and the hdds datanode/scm/ksm could be 
> started from the ozone-2.1.0 package. 
> Hdds packages would be removed from the original hadoop-3.2.0 directory.
> This is a relatively small change. On further JIRAs we need to:
>  * Create a shaded datanode plugin which could be used with any existing 
> hadoop cluster
>  * Use a standalone ObjectStore/Ozone server instead of the Namenode+Datanode 
> plugin.
>  * Add test cases for both the ozone-only and the mixed clusters (ozone + 
> hdfs)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-40) Separating packaging of Ozone/HDDS from the main Hadoop

2018-05-11 Thread Elek, Marton (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-40?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-40:
-
Attachment: HDDS-40.004.patch

> Separating packaging of Ozone/HDDS from the main Hadoop
> ---
>
> Key: HDDS-40
> URL: https://issues.apache.org/jira/browse/HDDS-40
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Attachments: HDDS-40.001.patch, HDDS-40.002.patch, HDDS-40.003.patch, 
> HDDS-40.004.patch
>
>
> According to the community vote, Ozone/Hdds release cycle should be 
> independent from the Hadoop release cycle.
> To make it possible we need a separated ozone package.
> *The current state:*
> We have just one output tar/directory under hadoop-dist (hadoop-3.2.0). It 
> includes all the hdfs/yarn/mapreduce/hdds binaries and libraries. (Jar files 
> are put in separated directory).
> The hdds components and the hdfs components could all be started from the bin. 
> *Proposed version*
> Create a separated hadoop-dist/ozone-2.1.0 which contains only the hdfs AND 
> hdds components. Both the hdfs namenode and the hdds datanode/scm/ksm could be 
> started from the ozone-2.1.0 package. 
> Hdds packages would be removed from the original hadoop-3.2.0 directory.
> This is a relatively small change. On further JIRAs we need to:
>  * Create a shaded datanode plugin which could be used with any existing 
> hadoop cluster
>  * Use a standalone ObjectStore/Ozone server instead of the Namenode+Datanode 
> plugin.
>  * Add test cases for both the ozone-only and the mixed clusters (ozone + 
> hdfs)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13547) Add ingress port based sasl resolver

2018-05-11 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13547?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-13547:
--
Status: Patch Available  (was: Open)

> Add ingress port based sasl resolver
> 
>
> Key: HDFS-13547
> URL: https://issues.apache.org/jira/browse/HDFS-13547
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: security
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Major
> Attachments: HDFS-13547.001.patch
>
>
> This Jira extends the SASL properties resolver interface to take an ingress 
> port parameter, and also adds an implementation based on this.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13547) Add ingress port based sasl resolver

2018-05-11 Thread Chen Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13547?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16472588#comment-16472588
 ] 

Chen Liang commented on HDFS-13547:
---

The v001 patch adds only the ingress port to the interface for now.

[~benoyantony] I looked at what other parameters we might add to the interface, 
but I found them either potentially unavailable (such as user name) or lacking 
a valid use case that I can see (such as client port). It is difficult for me 
to justify adding the candidate parameters I looked at, so without another 
actual use case I think we should refrain from adding more parameters for now. 
What do you think?
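
For illustration, a hedged sketch of what a port-based resolver could look like; the class name and the port-aware method signature below are assumptions for this sketch, not the patch's confirmed API.

{code:java}
import java.net.InetAddress;
import java.util.Map;
import java.util.TreeMap;
import javax.security.sasl.Sasl;
import org.apache.hadoop.security.SaslPropertiesResolver;

// Hedged sketch: vary SASL properties by the server-side (ingress) port the
// client connected to, e.g. force stricter QOP on one port.
public class IngressPortBasedResolverSketch extends SaslPropertiesResolver {

  // Illustrative choice: force privacy (auth-conf) on this port.
  private static final int PRIVACY_PORT = 8020;

  // Hypothetical port-aware overload of the InetAddress-only method in
  // SaslPropertiesResolver; setConf() must have initialized the defaults.
  public Map<String, String> getServerProperties(InetAddress clientAddress,
      int ingressPort) {
    // Copy the defaults so callers never mutate shared state.
    Map<String, String> props = new TreeMap<>(getDefaultProperties());
    if (ingressPort == PRIVACY_PORT) {
      props.put(Sasl.QOP, "auth-conf"); // require encryption on this port
    }
    return props;
  }
}
{code}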

> Add ingress port based sasl resolver
> 
>
> Key: HDFS-13547
> URL: https://issues.apache.org/jira/browse/HDFS-13547
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: security
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Major
> Attachments: HDFS-13547.001.patch
>
>
> This Jira extends the SASL properties resolver interface to take an ingress 
> port parameter, and also adds an implementation based on this.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13547) Add ingress port based sasl resolver

2018-05-11 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13547?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-13547:
--
Attachment: HDFS-13547.001.patch

> Add ingress port based sasl resolver
> 
>
> Key: HDFS-13547
> URL: https://issues.apache.org/jira/browse/HDFS-13547
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: security
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Major
> Attachments: HDFS-13547.001.patch
>
>
> This Jira extends the SASL properties resolver interface to take an ingress 
> port parameter, and also adds an implementation based on this.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-13398) Hdfs recursive listing operation is very slow

2018-05-11 Thread Ajay Sachdev (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13398?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16472582#comment-16472582
 ] 

Ajay Sachdev edited comment on HDFS-13398 at 5/11/18 8:14 PM:
--

I have attached the Apache Hadoop trunk multithreading diff. Please take a look 
at the code and provide review comments.

 

Thanks

Ajay


was (Author: ajaynmalu23):
I have attached the Apache Hadoop trunk multithreading diff. Please take a look 
at the code and provide review comments.

 

Thanks

Ajay

> Hdfs recursive listing operation is very slow
> -
>
> Key: HDFS-13398
> URL: https://issues.apache.org/jira/browse/HDFS-13398
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 2.7.1
> Environment: HCFS file system where HDP 2.6.1 is connected to ECS 
> (Object Store).
>Reporter: Ajay Sachdev
>Assignee: Ajay Sachdev
>Priority: Major
> Fix For: 2.7.1
>
> Attachments: HDFS-13398.001.patch, parallelfsPatch
>
>
> The hdfs dfs -ls -R command is sequential in nature and is very slow for an 
> HCFS system. We have seen around 6 mins for a 40K directory/file structure.
> The proposal is to use a multithreading approach to speed up recursive list, du 
> and count operations.
> We have tried a ForkJoinPool implementation to improve performance for 
> the recursive listing operation.
> [https://github.com/jasoncwik/hadoop-release/tree/parallel-fs-cli]
> commit id : 
> 82387c8cd76c2e2761bd7f651122f83d45ae8876
> Another implementation uses the Java Executor Service to run the listing 
> operation in multiple threads in parallel. This has 
> significantly reduced the time from 6 mins to 40 secs.
>  
>  
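
A minimal sketch of the ForkJoinPool approach mentioned above, built only on the public FileSystem#listStatus API; this is illustrative, not the attached patch.

{code:java}
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Each directory becomes a subtask, so deep/wide trees on a high-latency
// store (e.g. an object store behind HCFS) are listed in parallel.
public class ParallelListSketch {

  static class ListTask extends RecursiveTask<List<FileStatus>> {
    private final FileSystem fs;
    private final Path dir;

    ListTask(FileSystem fs, Path dir) {
      this.fs = fs;
      this.dir = dir;
    }

    @Override
    protected List<FileStatus> compute() {
      List<FileStatus> results = new ArrayList<>();
      List<ListTask> subTasks = new ArrayList<>();
      try {
        for (FileStatus status : fs.listStatus(dir)) {
          results.add(status);
          if (status.isDirectory()) {
            ListTask sub = new ListTask(fs, status.getPath());
            sub.fork(); // list child directories concurrently
            subTasks.add(sub);
          }
        }
      } catch (IOException e) {
        throw new RuntimeException("listing failed for " + dir, e);
      }
      for (ListTask sub : subTasks) {
        results.addAll(sub.join());
      }
      return results;
    }
  }

  public static void main(String[] args) throws IOException {
    FileSystem fs = FileSystem.get(new Configuration());
    ForkJoinPool pool = new ForkJoinPool(16); // parallelism is tunable
    List<FileStatus> all = pool.invoke(new ListTask(fs, new Path("/")));
    System.out.println("entries: " + all.size());
    pool.shutdown();
  }
}
{code}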



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13398) Hdfs recursive listing operation is very slow

2018-05-11 Thread Ajay Sachdev (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13398?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16472582#comment-16472582
 ] 

Ajay Sachdev commented on HDFS-13398:
-

I have attached the Apache Hadoop trunk multithreading diff. Please take a look 
at the code and provide review comments.

 

Thanks

Ajay

> Hdfs recursive listing operation is very slow
> -
>
> Key: HDFS-13398
> URL: https://issues.apache.org/jira/browse/HDFS-13398
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 2.7.1
> Environment: HCFS file system where HDP 2.6.1 is connected to ECS 
> (Object Store).
>Reporter: Ajay Sachdev
>Assignee: Ajay Sachdev
>Priority: Major
> Fix For: 2.7.1
>
> Attachments: HDFS-13398.001.patch, parallelfsPatch
>
>
> The hdfs dfs -ls -R command is sequential in nature and is very slow for an 
> HCFS system. We have seen around 6 mins for a 40K directory/file structure.
> The proposal is to use a multithreading approach to speed up recursive list, du 
> and count operations.
> We have tried a ForkJoinPool implementation to improve performance for 
> the recursive listing operation.
> [https://github.com/jasoncwik/hadoop-release/tree/parallel-fs-cli]
> commit id : 
> 82387c8cd76c2e2761bd7f651122f83d45ae8876
> Another implementation uses the Java Executor Service to run the listing 
> operation in multiple threads in parallel. This has 
> significantly reduced the time from 6 mins to 40 secs.
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13502) Utility to resolve NameServiceId in federated cluster

2018-05-11 Thread Apoorv Naik (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13502?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Apoorv Naik updated HDFS-13502:
---
Attachment: 0001-HDFS-13502-Utility-to-resolve-NameServiceId-in-feder.patch

> Utility to resolve NameServiceId in federated cluster
> -
>
> Key: HDFS-13502
> URL: https://issues.apache.org/jira/browse/HDFS-13502
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Apoorv Naik
>Assignee: Apoorv Naik
>Priority: Major
> Attachments: 
> 0001-HDFS-13502-Utility-to-resolve-NameServiceId-in-feder.patch
>
>
> A utility class in HDFS that would act as a reverse lookup for : 
> HDFS URLs would be beneficial for deployments having multiple namenodes and 
> nameservices.
>  
> Consumers would benefit by having a unified namespace across the Federated 
> cluster.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13502) Utility to resolve NameServiceId in federated cluster

2018-05-11 Thread Apoorv Naik (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13502?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Apoorv Naik updated HDFS-13502:
---
Attachment: (was: 
0001-HDFS-13502-Utility-to-resolve-NameServiceId-in-feder.patch)

> Utility to resolve NameServiceId in federated cluster
> -
>
> Key: HDFS-13502
> URL: https://issues.apache.org/jira/browse/HDFS-13502
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Apoorv Naik
>Assignee: Apoorv Naik
>Priority: Major
> Attachments: 
> 0001-HDFS-13502-Utility-to-resolve-NameServiceId-in-feder.patch
>
>
> A utility class in HDFS that would act as a reverse lookup for : 
> HDFS URLs would be beneficial for deployments having multiple namenodes and 
> nameservices.
>  
> Consumers would benefit by having a unified namespace across the Federated 
> cluster.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-13547) Add ingress port based sasl resolver

2018-05-11 Thread Chen Liang (JIRA)
Chen Liang created HDFS-13547:
-

 Summary: Add ingress port based sasl resolver
 Key: HDFS-13547
 URL: https://issues.apache.org/jira/browse/HDFS-13547
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: security
Reporter: Chen Liang
Assignee: Chen Liang


This Jira extends the SASL properties resolver interface to take an ingress 
port parameter, and also adds an implementation based on this.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-3) When datanodes register, send NodeReport and ContainerReport

2018-05-11 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-3?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16472547#comment-16472547
 ] 

Xiaoyu Yao commented on HDDS-3:
---

[~bharatviswa], thanks for working on this. Can you rebase the patch to trunk 
as the latest one does not apply any more?

> When datanodes register, send NodeReport and ContainerReport
> 
>
> Key: HDDS-3
> URL: https://issues.apache.org/jira/browse/HDDS-3
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>  Components: Ozone Datanode, SCM
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDFS-13432-HDFS-7240.00.patch, HDFS-13432.01.patch
>
>
> From chillmode Design Notes:
> As part of this Jira, register will be updated to send NodeReport and 
> ContainerReport.
> Currently, datanodes send one heartbeat per 30 seconds. That means that even if 
> the datanode is ready, it will take around 1 min or longer before the SCM 
> sees the datanode container reports. We can address this partially by making 
> sure that the Register call contains both NodeReport and ContainerReport.
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-2) Chill Mode to consider percentage of container reports

2018-05-11 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-2?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16472546#comment-16472546
 ] 

Xiaoyu Yao commented on HDDS-2:
---

[~bharatviswa], thanks for working on this. Can you rebase the patch to trunk 
as the latest one does not apply any more?

> Chill Mode to consider percentage of container reports
> --
>
> Key: HDDS-2
> URL: https://issues.apache.org/jira/browse/HDDS-2
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>  Components: SCM
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: Chill Mode.pdf, HDDS-02.002.patch, HDDS-02.003.patch, 
> HDDS-2.004.patch, HDFS-13500.00.patch, HDFS-13500.01.patch, 
> HDFS-13500.02.patch
>
>
> Currently, SCM comes out of chill mode as soon as one datanode is registered.
> This needs to be changed to consider the percentage of container reports.
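
A hedged sketch of the proposed rule; the names below are illustrative, not SCM's actual API.

{code:java}
// Illustrative rule: exit chill mode only when the fraction of containers
// covered by datanode container reports crosses a configurable threshold.
public class ChillModeRuleSketch {

  private final double threshold; // e.g. 0.99 for 99%

  public ChillModeRuleSketch(double threshold) {
    this.threshold = threshold;
  }

  public boolean canExitChillMode(long containersReported,
      long totalContainers) {
    if (totalContainers == 0) {
      return true; // empty cluster: nothing to wait for
    }
    return (double) containersReported / totalContainers >= threshold;
  }

  public static void main(String[] args) {
    ChillModeRuleSketch rule = new ChillModeRuleSketch(0.99);
    System.out.println(rule.canExitChillMode(990, 1000)); // true
    System.out.println(rule.canExitChillMode(1, 1000));   // false
  }
}
{code}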



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-5) Enable OzoneManager kerberos auth

2018-05-11 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-5?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16472529#comment-16472529
 ] 

Xiaoyu Yao commented on HDDS-5:
---

Thanks [~ajayydv] for working on this. The patch looks good to me. 

Can you fix the Jenkins unit test failures related to this change? +1 after 
that.

 

> Enable OzoneManager kerberos auth
> -
>
> Key: HDDS-5
> URL: https://issues.apache.org/jira/browse/HDDS-5
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager, Security
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 0.3.0
>
> Attachments: HDDS-5-HDDS-4.00.patch, HDDS-5-HDDS-4.01.patch, 
> initial-patch.patch
>
>
> enable KSM kerberos auth



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-25) Simple async event processing for SCM

2018-05-11 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-25?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16472528#comment-16472528
 ] 

Anu Engineer commented on HDDS-25:
--

[~msingh] I know you have a follow up patch in works, Do you want to take care 
of [~xyao] 's comments in that patch?

 

> Simple async event processing for SCM
> -
>
> Key: HDDS-25
> URL: https://issues.apache.org/jira/browse/HDDS-25
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-25.001.patch, HDDS-25.003.patch, HDDS-25.004.patch, 
> HDDS-25.005.patch
>
>
> For implementing all the SCM status changes we need simple async event 
> processing. 
> Our use-case is very similar to an actor-based system: we would like to 
> communicate with fully async events/messages, process the different events on 
> different threads, ...
> But a full Actor framework (such as Akka) would be overkill for this use 
> case. We don't need distributed actor systems, actor hierarchy or complex 
> resiliency.
> As a first approach we can use a very simple system where a common EventQueue 
> entry point could route events to the async event handlers.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-25) Simple async event processing for SCM

2018-05-11 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-25?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16472513#comment-16472513
 ] 

Xiaoyu Yao commented on HDDS-25:


Thanks [~elek] for working on this. Sorry, I'm late in the review.

The patch looks good to me. I just have a few comments that we can address in 
follow-up JIRAs.

 

EventQueue.java

Line 144/166: the {{processed}} counter is not updated during the loop.

We can calculate and update it around line 148. Otherwise, it is always 0.

 

Line 179: you might want to try it multiple times and enforce a timeout for the 
executor close. 

Some badly designed JVM shutdown hooks could make the executor close wait 
forever.
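
A small sketch of the bounded-shutdown idea from that last comment; the helper name is mine, not EventQueue's actual code.

{code:java}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.TimeUnit;

// Never block forever on close: retry awaitTermination a few times,
// then force shutdownNow().
public final class BoundedShutdownSketch {
  private BoundedShutdownSketch() { }

  public static void closeWithTimeout(ExecutorService executor,
      int attempts, long waitPerAttemptMs) {
    executor.shutdown(); // stop accepting new events
    try {
      for (int i = 0; i < attempts; i++) {
        if (executor.awaitTermination(waitPerAttemptMs,
            TimeUnit.MILLISECONDS)) {
          return; // clean shutdown
        }
      }
    } catch (InterruptedException e) {
      Thread.currentThread().interrupt();
    }
    executor.shutdownNow(); // give up and cancel queued work
  }
}
{code}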

> Simple async event processing for SCM
> -
>
> Key: HDDS-25
> URL: https://issues.apache.org/jira/browse/HDDS-25
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-25.001.patch, HDDS-25.003.patch, HDDS-25.004.patch, 
> HDDS-25.005.patch
>
>
> For implementing all the SCM status changes we need simple async event 
> processing. 
> Our use-case is very similar to an actor-based system: we would like to 
> communicate with fully async events/messages, process the different events on 
> different threads, ...
> But a full Actor framework (such as Akka) would be overkill for this use 
> case. We don't need distributed actor systems, actor hierarchy or complex 
> resiliency.
> As a first approach we can use a very simple system where a common EventQueue 
> entry point could route events to the async event handlers.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-25) Simple async event processing for SCM

2018-05-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-25?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16472503#comment-16472503
 ] 

Hudson commented on HDDS-25:


SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14175 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14175/])
HDDS-25. Simple async event processing for SCM. Contributed by Elek, 
(aengineer: rev ba12e8805e2ae6f125042bfb1d6b3cfc10faf9ed)
* (add) 
hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/server/events/SingleThreadExecutor.java
* (add) 
hadoop-hdds/framework/src/test/java/org/apache/hadoop/hdds/server/events/TestEventQueueChain.java
* (add) 
hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/server/events/TypedEvent.java
* (add) 
hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/server/events/EventQueue.java
* (add) 
hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/server/events/package-info.java
* (add) 
hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/server/events/Event.java
* (add) 
hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/server/events/EventPublisher.java
* (add) 
hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/server/events/EventHandler.java
* (add) 
hadoop-hdds/framework/src/test/java/org/apache/hadoop/hdds/server/events/TestEventQueue.java
* (add) 
hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/server/events/EventExecutor.java


> Simple async event processing for SCM
> -
>
> Key: HDDS-25
> URL: https://issues.apache.org/jira/browse/HDDS-25
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-25.001.patch, HDDS-25.003.patch, HDDS-25.004.patch, 
> HDDS-25.005.patch
>
>
> For implementing all the SCM status changes we need simple async event 
> processing. 
> Our use-case is very similar to an actor-based system: we would like to 
> communicate with fully async events/messages, process the different events on 
> different threads, ...
> But a full Actor framework (such as Akka) would be overkill for this use 
> case. We don't need distributed actor systems, actor hierarchy or complex 
> resiliency.
> As a first approach we can use a very simple system where a common EventQueue 
> entry point could route events to the async event handlers.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-39) Ozone: Compile Ozone/HDFS/Cblock protobuf files with proto3 compiler using maven protoc plugin

2018-05-11 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-39?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16472468#comment-16472468
 ] 

genericqa commented on HDDS-39:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
28s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 9 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
21s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 29m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
17m 52s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project hadoop-ozone/integration-test {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
58s{color} | {color:red} hadoop-hdds/common in trunk has 1 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  4m 
25s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
24s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
 2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 27m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 27m 
50s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 27m 50s{color} 
| {color:red} root generated 11 new + 1477 unchanged - 0 fixed = 1488 total 
(was 1477) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
 1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
3s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 55s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project hadoop-ozone/integration-test {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
13s{color} | {color:red} hadoop-hdds/common generated 18 new + 1 unchanged - 0 
fixed = 19 total (was 1) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  4m 
34s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
27s{color} | {color:green} hadoop-project in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
15s{color} | {color:green} common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
41s{color} | {color:green} container-service in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
31s{color} | {color:green} client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 

[jira] [Updated] (HDDS-25) Simple async event processing for SCM

2018-05-11 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-25?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-25:
-
Resolution: Fixed
Status: Resolved  (was: Patch Available)

[~msingh], [~shashikant] Thanks for the review comments. [~elek] Thanks for the 
contribution. The patch v5 is beautiful and a pleasure to read. Thanks for 
getting this done. Really appreciate it.

 

> Simple async event processing for SCM
> -
>
> Key: HDDS-25
> URL: https://issues.apache.org/jira/browse/HDDS-25
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-25.001.patch, HDDS-25.003.patch, HDDS-25.004.patch, 
> HDDS-25.005.patch
>
>
> For implementing all the SCM status changes we need simple async event 
> processing. 
> Our use-case is very similar to an actor-based system: we would like to 
> communicate with fully async events/messages, process the different events on 
> different threads, ...
> But a full Actor framework (such as Akka) would be overkill for this use 
> case. We don't need distributed actor systems, actor hierarchy or complex 
> resiliency.
> As a first approach we can use a very simple system where a common EventQueue 
> entry point could route events to the async event handlers.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-39) Ozone: Compile Ozone/HDFS/Cblock protobuf files with proto3 compiler using maven protoc plugin

2018-05-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-39?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16472444#comment-16472444
 ] 

Hudson commented on HDDS-39:


SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14173 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14173/])
HDDS-39. Ozone: Compile Ozone/HDFS/Cblock protobuf files with proto3 
(aengineer: rev c1d64d60f6ef3cb9ed89669501ca5b1efbab3c28)
* (delete) 
hadoop-hdds/common/src/main/java/org/apache/ratis/shaded/com/google/protobuf/ShadedProtoUtil.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/scm/TestContainerSmallFile.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/common/TestBlockDeletingService.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/statemachine/background/BlockDeletingService.java
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/client/BlockID.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/helpers/KeyUtils.java
* (edit) hadoop-hdds/common/src/main/proto/DatanodeContainerProtocol.proto
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/helpers/FileUtils.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/impl/ChunkManagerImpl.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/server/TestContainerServer.java
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/storage/ContainerProtocolCalls.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/common/impl/TestContainerPersistence.java
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/client/ScmClient.java
* (edit) hadoop-project/pom.xml
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/helpers/ContainerData.java
* (edit) 
hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/XceiverClientHandler.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/transport/server/XceiverServer.java
* (edit) 
hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/io/ChunkGroupInputStream.java
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/container/common/helpers/KeyData.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/impl/Dispatcher.java
* (edit) 
hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/XceiverClientRatis.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/interfaces/ChunkManager.java
* (edit) 
hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/XceiverClient.java
* (edit) 
hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/storage/ChunkInputStream.java
* (edit) 
hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/client/ContainerOperationClient.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/helpers/ChunkUtils.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/impl/KeyManagerImpl.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/web/client/TestKeys.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/transport/server/ratis/ContainerStateMachine.java
* (edit) 
hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/genesis/BenchMarkDatanodeDispatcher.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/interfaces/ContainerDispatcher.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/transport/server/XceiverServerHandler.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/scm/TestXceiverClientMetrics.java
* (edit) 
hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/XceiverClientMetrics.java
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/container/common/helpers/StorageContainerException.java
* (edit) 
hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/XceiverClientInitializer.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/helpers/ContainerUtils.java
* (edit) 
hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/storage/ChunkOutputStream.java
* (delete) 
hadoop-hdds/common/src/main/java/org/apache/ratis/shaded/com/google/protobuf/package-info.java
* (edit) hadoop-hdds/common/pom.xml
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/impl/ContainerManagerImpl.java
* (edit) 

[jira] [Commented] (HDFS-13502) Utility to resolve NameServiceId in federated cluster

2018-05-11 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13502?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16472441#comment-16472441
 ] 

genericqa commented on HDFS-13502:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
37s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 31m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 42s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 23s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs-client: The 
patch generated 185 new + 1 unchanged - 0 fixed = 186 total (was 1) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 42s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  2m 
21s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs-client generated 1 new 
+ 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
50s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
28s{color} | {color:red} The patch generated 3 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 71m 30s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs-project/hadoop-hdfs-client |
|  |  Private method 
org.apache.hadoop.hdfs.client.HdfsNameServiceResolver.getNameServiceID(String) 
is never called  At HdfsNameServiceResolver.java:called  At 
HdfsNameServiceResolver.java:[line 130] |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDFS-13502 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12923063/0001-HDFS-13502-Utility-to-resolve-NameServiceId-in-feder.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  xml  |
| uname | Linux 358e7ca54e89 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / d50c4d7 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 

[jira] [Commented] (HDFS-13539) DFSInputStream NPE when reportCheckSumFailure

2018-05-11 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13539?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16472403#comment-16472403
 ] 

Xiao Chen commented on HDFS-13539:
--

Ping [~ajayydv], any comments?

> DFSInputStream NPE when reportCheckSumFailure
> -
>
> Key: HDFS-13539
> URL: https://issues.apache.org/jira/browse/HDFS-13539
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Major
> Attachments: HDFS-13539.01.patch, HDFS-13539.02.patch
>
>
> We have seen the following exception with DFSStripedInputStream.
> {noformat}
> readDirect: FSDataInputStream#read error:
> NullPointerException: java.lang.NullPointerException
> at 
> org.apache.hadoop.hdfs.DFSStripedInputStream.readWithStrategy(DFSStripedInputStream.java:402)
> at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:831)
> at 
> org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:147)
> {noformat}
> Line 402 is {{reportCheckSumFailure}}, and {{currentLocatedBlock}} is the 
> only possible null object. (Because {{currentLocatedBlock.getLocations()}} 
> cannot be null: the {{LocatedBlock}} constructor checks {{locs}} and 
> assigns {{EMPTY_LOCS}} if it is null.)
> The original exception is masked by the NPE.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-39) Ozone: Compile Ozone/HDFS/Cblock protobuf files with proto3 compiler using maven protoc plugin

2018-05-11 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-39?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-39:
-
Resolution: Fixed
Status: Resolved  (was: Patch Available)

+1, I have committed this change to the trunk. [~msingh] Thanks for the 
contribution.

> Ozone: Compile Ozone/HDFS/Cblock protobuf files with proto3 compiler using 
> maven protoc plugin
> --
>
> Key: HDDS-39
> URL: https://issues.apache.org/jira/browse/HDDS-39
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Native
>Affects Versions: 0.2.1
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-39.003.patch, HDDS-39.004.patch, 
> HDFS-13389-HDFS-7240.001.patch, HDFS-13389-HDFS-7240.002.patch
>
>
> Currently all the Ozone/HDFS/Cblock proto files are compiled using proto 
> 2.5; this can be changed to use the proto3 compiler.
> This change will also improve performance, because in the client path the 
> xceiver client ratis currently converts proto2 classes to proto3 using byte 
> string manipulation.
> Please note that for the rest of Hadoop (except Ozone/Cblock/HDSL) the 
> protoc version will remain 2.5, as this proto compilation will be done 
> through the following plugin: 
> https://www.xolstice.org/protobuf-maven-plugin/



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13494) Configure serialFilter to avoid UnrecoverableKeyException caused by JDK-8189997

2018-05-11 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13494?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HDFS-13494:
-
 Environment: JDK 8u171
Hadoop Flags:   (was: Reviewed)
 Summary: Configure serialFilter to avoid UnrecoverableKeyException 
caused by JDK-8189997  (was: Set empty value to serialFilter to avoid 
UnrecoverableKeyException on JDK 8u171)

> Configure serialFilter to avoid UnrecoverableKeyException caused by 
> JDK-8189997
> ---
>
> Key: HDFS-13494
> URL: https://issues.apache.org/jira/browse/HDFS-13494
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 2.7.6, 3.0.2
> Environment: JDK 8u171
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Critical
> Attachments: HDFS-13494.001.patch, HDFS-13494.002.patch, 
> org.apache.hadoop.crypto.key.TestKeyProviderFactory.txt
>
>
> There is a new feature in JDK 8u171 called Enhanced KeyStore Mechanisms 
> (http://www.oracle.com/technetwork/java/javase/8u171-relnotes-430.html#JDK-8189997).
> This could be the cause of the following errors in TestKeyProviderFactory:
> {noformat}
> Caused by: java.security.UnrecoverableKeyException: Rejected by the 
> jceks.key.serialFilter or jdk.serialFilter property
>   at com.sun.crypto.provider.KeyProtector.unseal(KeyProtector.java:352)
>   at 
> com.sun.crypto.provider.JceKeyStore.engineGetKey(JceKeyStore.java:136)
>   at java.security.KeyStore.getKey(KeyStore.java:1023)
>   at 
> org.apache.hadoop.crypto.key.JavaKeyStoreProvider.getMetadata(JavaKeyStoreProvider.java:410)
>   ... 28 more
> {noformat}
> This issue causes errors and failures in hbase tests right now (using hdfs) 
> and could affect other products running on this new Java version.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13494) Set empty value to serialFilter to avoid UnrecoverableKeyException on JDK 8u171

2018-05-11 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13494?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16472381#comment-16472381
 ] 

Akira Ajisaka commented on HDFS-13494:
--

bq. The filter pattern uses the same format as jdk.serialFilter. The default 
pattern allows java.lang.Enum, java.security.KeyRep, java.security.KeyRep$Type, 
and javax.crypto.spec.SecretKeySpec but rejects all the others.

In Apache Hadoop, JavaKeyStoreProvider stores the secret key inside 
org.apache.hadoop.crypto.key.JavaKeyStoreProvider$KeyMetadata and 
serializes/deserializes it, and that class is not allowed by the default 
filter. Therefore the test fails.

This configuration passes the test.
{code}
System.setProperty("jceks.key.serialFilter", 
"java.lang.Enum;java.security.KeyRep;java.security.KeyRep$Type;javax.crypto.spec.SecretKeySpec;org.apache.hadoop.crypto.key.JavaKeyStoreProvider$KeyMetadata;!*");
{code}

It would be better if we could configure this parameter instead of using a 
hard-coded value in KeyProvider.java. What do you think, [~gabor.bota]?
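For illustration, a minimal sketch of such a configurable filter. The key 
name {{hadoop.security.crypto.jceks.key.serialfilter}} and the helper class 
are assumptions for this sketch, not the committed change:
{code:java}
import org.apache.hadoop.conf.Configuration;

public final class SerialFilterConfigurer {
  // Default filter from the comment above, extended with KeyMetadata.
  private static final String DEFAULT_SERIAL_FILTER =
      "java.lang.Enum;java.security.KeyRep;java.security.KeyRep$Type;"
      + "javax.crypto.spec.SecretKeySpec;"
      + "org.apache.hadoop.crypto.key.JavaKeyStoreProvider$KeyMetadata;!*";

  private SerialFilterConfigurer() {}

  // Sets jceks.key.serialFilter from the Hadoop configuration, unless the
  // user has already set it on the JVM command line.
  public static void configure(Configuration conf) {
    if (System.getProperty("jceks.key.serialFilter") == null) {
      System.setProperty("jceks.key.serialFilter",
          conf.get("hadoop.security.crypto.jceks.key.serialfilter",
              DEFAULT_SERIAL_FILTER));
    }
  }
}
{code}
Deferring to the JVM property when it is already set keeps the existing 
jceks.key.serialFilter override path working.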

> Set empty value to serialFilter to avoid UnrecoverableKeyException on JDK 
> 8u171
> ---
>
> Key: HDFS-13494
> URL: https://issues.apache.org/jira/browse/HDFS-13494
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 2.7.6, 3.0.2
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Critical
> Attachments: HDFS-13494.001.patch, HDFS-13494.002.patch, 
> org.apache.hadoop.crypto.key.TestKeyProviderFactory.txt
>
>
> There is a new feature in JDK 8u171 called Enhanced KeyStore Mechanisms 
> (http://www.oracle.com/technetwork/java/javase/8u171-relnotes-430.html#JDK-8189997).
> This could be the cause of the following errors in TestKeyProviderFactory:
> {noformat}
> Caused by: java.security.UnrecoverableKeyException: Rejected by the 
> jceks.key.serialFilter or jdk.serialFilter property
>   at com.sun.crypto.provider.KeyProtector.unseal(KeyProtector.java:352)
>   at 
> com.sun.crypto.provider.JceKeyStore.engineGetKey(JceKeyStore.java:136)
>   at java.security.KeyStore.getKey(KeyStore.java:1023)
>   at 
> org.apache.hadoop.crypto.key.JavaKeyStoreProvider.getMetadata(JavaKeyStoreProvider.java:410)
>   ... 28 more
> {noformat}
> This issue causes errors and failures in hbase tests right now (using hdfs) 
> and could affect other products running on this new Java version.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-47) Add acceptance tests for Ozone Shell

2018-05-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-47?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16472337#comment-16472337
 ] 

Hudson commented on HDDS-47:


SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14172 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14172/])
HDDS-47. Add acceptance tests for Ozone Shell. Contributed by Lokesh 
(aengineer: rev 3a93af731ee09307b6f07e0fc739d1b5653cf69d)
* (edit) 
hadoop-ozone/acceptance-test/src/test/robotframework/acceptance/ozone.robot
* (edit) 
hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/OzoneClientUtils.java


> Add acceptance tests for Ozone Shell
> 
>
> Key: HDDS-47
> URL: https://issues.apache.org/jira/browse/HDDS-47
> Project: Hadoop Distributed Data Store
>  Issue Type: Test
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-47.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-40) Separating packaging of Ozone/HDDS from the main Hadoop

2018-05-11 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-40?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16472327#comment-16472327
 ] 

Anu Engineer commented on HDDS-40:
--

One of the earlier commits has broken this patch; can you please rebase and 
post it again?

> Separating packaging of Ozone/HDDS from the main Hadoop
> ---
>
> Key: HDDS-40
> URL: https://issues.apache.org/jira/browse/HDDS-40
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Attachments: HDDS-40.001.patch, HDDS-40.002.patch, HDDS-40.003.patch
>
>
> According to the community vote, the Ozone/Hdds release cycle should be 
> independent of the Hadoop release cycle.
> To make that possible we need a separate ozone package.
> *The current state:*
> We have just one output tar/directory under hadoop-dist (hadoop-3.2.0). It 
> includes all the hdfs/yarn/mapreduce/hdds binaries and libraries. (Jar 
> files are put in a separate directory.)
> The hdds components and hdfs components can all be started from the bin 
> directory.
> *Proposed version*
> Create a separate hadoop-dist/ozone-2.1.0 which contains only the hdfs AND 
> hdds components. Both the hdfs namenode and the hdds datanode/scm/ksm could 
> be started from the ozone-2.1.0 package.
> Hdds packages would be removed from the original hadoop-3.2.0 directory.
> This is a relatively small change. In further JIRAs we need to:
>  * Create a shaded datanode plugin which could be used with any existing 
> hadoop cluster
>  * Use a standalone ObjectStore/Ozone server instead of the 
> Namenode+Datanode plugin.
>  * Add test cases for both the ozone-only and the mixed clusters (ozone + 
> hdfs)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-40) Separating packaging of Ozone/HDDS from the main Hadoop

2018-05-11 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-40?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16472324#comment-16472324
 ] 

Anu Engineer commented on HDDS-40:
--

+1, I agree that this new approach is better and more user-friendly. They just 
have to invoke a script. There is a minor typo: -Phdsl instead of -Phdds in 
the readme. I will fix it while committing.

 

> Separating packaging of Ozone/HDDS from the main Hadoop
> ---
>
> Key: HDDS-40
> URL: https://issues.apache.org/jira/browse/HDDS-40
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Attachments: HDDS-40.001.patch, HDDS-40.002.patch, HDDS-40.003.patch
>
>
> According to the community vote, the Ozone/Hdds release cycle should be 
> independent of the Hadoop release cycle.
> To make that possible we need a separate ozone package.
> *The current state:*
> We have just one output tar/directory under hadoop-dist (hadoop-3.2.0). It 
> includes all the hdfs/yarn/mapreduce/hdds binaries and libraries. (Jar 
> files are put in a separate directory.)
> The hdds components and hdfs components can all be started from the bin 
> directory.
> *Proposed version*
> Create a separate hadoop-dist/ozone-2.1.0 which contains only the hdfs AND 
> hdds components. Both the hdfs namenode and the hdds datanode/scm/ksm could 
> be started from the ozone-2.1.0 package.
> Hdds packages would be removed from the original hadoop-3.2.0 directory.
> This is a relatively small change. In further JIRAs we need to:
>  * Create a shaded datanode plugin which could be used with any existing 
> hadoop cluster
>  * Use a standalone ObjectStore/Ozone server instead of the 
> Namenode+Datanode plugin.
>  * Add test cases for both the ozone-only and the mixed clusters (ozone + 
> hdfs)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13079) Provide a config to start namenode in safemode state upto a certain transaction id

2018-05-11 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16472323#comment-16472323
 ] 

Daryn Sharp commented on HDFS-13079:


This appears to be a "feature" that is tricky to wield correctly (stop 
services, set the conf, start one, save the namespace, stop services, remove 
the conf, purge edits, start the service, re-bootstrap the other, etc.) and 
that will be of little use to production ops.  By the time someone tells you 
about their oopsy, the blocks are already invalidated.

Does it make more sense to provide a dfsadmin command to effectively truncate 
the edit logs and/or move newer images out of the way?  Non-destructively, of 
course.  That effectively achieves what you want without disturbing the 
normal code paths.
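Purely as an illustration of that alternative, such a command might be 
invoked as follows; both flags are hypothetical and do not exist in dfsadmin 
today:
{noformat}
# Hypothetical flags -- not part of today's dfsadmin.
hdfs dfsadmin -truncateEditLogs -beyondTxid 123456
hdfs dfsadmin -archiveNewerImages -beyondTxid 123456
{noformat}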

> Provide a config to start namenode in safemode state upto a certain 
> transaction id
> --
>
> Key: HDFS-13079
> URL: https://issues.apache.org/jira/browse/HDFS-13079
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: Mukul Kumar Singh
>Assignee: Shashikant Banerjee
>Priority: Major
> Attachments: HDFS-13079.001.patch, HDFS-13079.002.patch, 
> HDFS-13079.003.patch
>
>
> In some cases it is necessary to roll back the Namenode to a certain 
> transaction id. This is especially needed when the user issues a {{rm -Rf 
> -skipTrash}} by mistake.
> Rolling back to a transaction id helps in taking a peek at the filesystem 
> at a particular instant. This jira proposes to provide a configuration 
> variable with which the namenode can be started up to a certain transaction 
> id. The filesystem will be in a read-only safemode which cannot be 
> overridden manually; it can only be overridden by removing the config value 
> from the config file. Please also note that this will not cause any changes 
> in the filesystem state: the filesystem will be in safemode and no changes 
> to the filesystem state will be allowed.
> Please note that in case a checkpoint has already happened and the 
> requested transaction id has been subsumed in an FSImage, the namenode will 
> be started with the next nearest transaction id. Further FSImage files and 
> edits will be ignored.
> If the checkpoint hasn't happened, the namenode will be started with the 
> exact transaction id.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-25) Simple async event processing for SCM

2018-05-11 Thread Mukul Kumar Singh (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-25?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16472307#comment-16472307
 ] 

Mukul Kumar Singh commented on HDDS-25:
---

Thanks for the updated patch. +1, the patch looks good to me. I will spawn 
off another bug to address prioritized execution of events.

> Simple async event processing for SCM
> -
>
> Key: HDDS-25
> URL: https://issues.apache.org/jira/browse/HDDS-25
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-25.001.patch, HDDS-25.003.patch, HDDS-25.004.patch, 
> HDDS-25.005.patch
>
>
> For implementing all the SCM status changes we need simple async event 
> processing.
> Our use case is very similar to an actor-based system: we would like to 
> communicate with fully async events/messages, process the different events 
> on different threads, etc.
> But a full actor framework (such as Akka) would be overkill for this use 
> case. We don't need distributed actor systems, actor hierarchies or complex 
> resiliency.
> As a first approach we can use a very simple system where a common 
> EventQueue entry point routes events to the async event handlers.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-47) Add acceptance tests for Ozone Shell

2018-05-11 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-47?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-47:
-
   Resolution: Fixed
Fix Version/s: 0.2.1
   Status: Resolved  (was: Patch Available)

[~ljain] Thank you for the contribution. I have committed this to trunk.

> Add acceptance tests for Ozone Shell
> 
>
> Key: HDDS-47
> URL: https://issues.apache.org/jira/browse/HDDS-47
> Project: Hadoop Distributed Data Store
>  Issue Type: Test
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-47.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13502) Utility to resolve NameServiceId in federated cluster

2018-05-11 Thread Apoorv Naik (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13502?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Apoorv Naik updated HDFS-13502:
---
Attachment: 0001-HDFS-13502-Utility-to-resolve-NameServiceId-in-feder.patch

> Utility to resolve NameServiceId in federated cluster
> -
>
> Key: HDFS-13502
> URL: https://issues.apache.org/jira/browse/HDFS-13502
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Apoorv Naik
>Assignee: Apoorv Naik
>Priority: Major
> Attachments: 
> 0001-HDFS-13502-Utility-to-resolve-NameServiceId-in-feder.patch
>
>
> A utility class in HDFS that would act as a reverse lookup for : 
> HDFS URLs would be beneficial for deployments having multiple namenodes 
> and nameservices.
>  
> Consumers would benefit from having a unified namespace across the 
> Federated cluster.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13502) Utility to resolve NameServiceId in federated cluster

2018-05-11 Thread Apoorv Naik (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13502?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Apoorv Naik updated HDFS-13502:
---
Status: Patch Available  (was: In Progress)

> Utility to resolve NameServiceId in federated cluster
> -
>
> Key: HDFS-13502
> URL: https://issues.apache.org/jira/browse/HDFS-13502
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Apoorv Naik
>Assignee: Apoorv Naik
>Priority: Major
> Attachments: 
> 0001-HDFS-13502-Utility-to-resolve-NameServiceId-in-feder.patch
>
>
> A utility class in HDFS that would act as a reverse lookup for : 
> HDFS URLs would be beneficial for deployments having multiple namenodes 
> and nameservices.
>  
> Consumers would benefit from having a unified namespace across the 
> Federated cluster.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work started] (HDFS-13502) Utility to resolve NameServiceId in federated cluster

2018-05-11 Thread Apoorv Naik (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13502?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDFS-13502 started by Apoorv Naik.
--
> Utility to resolve NameServiceId in federated cluster
> -
>
> Key: HDFS-13502
> URL: https://issues.apache.org/jira/browse/HDFS-13502
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Apoorv Naik
>Assignee: Apoorv Naik
>Priority: Major
> Attachments: 
> 0001-HDFS-13502-Utility-to-resolve-NameServiceId-in-feder.patch
>
>
> A utility class in HDFS that would act as a reverse lookup for : 
> HDFS URLs would be beneficial for deployments having multiple namenodes 
> and nameservices.
>  
> Consumers would benefit from having a unified namespace across the 
> Federated cluster.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13542) TestBlockManager#testNeededReplicationWhileAppending fails due to improper cluster shutdown in TestBlockManager#testBlockManagerMachinesArray on Windows

2018-05-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13542?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16472293#comment-16472293
 ] 

Hudson commented on HDFS-13542:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14171 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14171/])
HDFS-13542. TestBlockManager#testNeededReplicationWhileAppending fails 
(inigoiri: rev d50c4d71dc42576f96ae5c268856fd1a7795f936)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockManager.java


> TestBlockManager#testNeededReplicationWhileAppending fails due to improper 
> cluster shutdown in TestBlockManager#testBlockManagerMachinesArray on Windows
> 
>
> Key: HDFS-13542
> URL: https://issues.apache.org/jira/browse/HDFS-13542
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Anbang Hu
>Assignee: Anbang Hu
>Priority: Minor
>  Labels: windows
> Attachments: HDFS-13542-branch-2.000.patch, 
> HDFS-13542-branch-2.001.patch, HDFS-13542.000.patch, HDFS-13542.001.patch
>
>
> branch-2.9 has failure message on Windows:
> {code:java}
> 2018-05-09 16:26:03,014 [Thread-3533] ERROR hdfs.MiniDFSCluster 
> (MiniDFSCluster.java:initMiniDFSCluster(884)) - IOE creating namenodes. 
> Permissions dump:
> path 
> 'E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target\test\data\dfs\data': 
>  
> absolute:E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target\test\data\dfs\data
>  permissions: drwx
> path 'E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target\test\data\dfs': 
>  absolute:E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target\test\data\dfs
>  permissions: drwx
> path 'E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target\test\data': 
>  absolute:E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target\test\data
>  permissions: drwx
> path 'E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target\test': 
>  absolute:E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target\test
>  permissions: drwx
> path 'E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target': 
>  absolute:E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target
>  permissions: drwx
> path 'E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs': 
>  absolute:E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs
>  permissions: drwx
> path 'E:\OSSHadoop\hadoop-hdfs-project': 
>  absolute:E:\OSSHadoop\hadoop-hdfs-project
>  permissions: drwx
> path 'E:\OSSHadoop': 
>  absolute:E:\OSSHadoop
>  permissions: drwx
> path 'E:\': 
>  absolute:E:\
>  permissions: drwxjava.io.IOException: Could not fully delete 
> E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target\test\data\dfs\name-0-1
>  at 
> org.apache.hadoop.hdfs.MiniDFSCluster.configureNameService(MiniDFSCluster.java:1026)
>  at 
> org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:982)
>  at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:879)
>  at org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:515)
>  at 
> org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:474)
>  at 
> org.apache.hadoop.hdfs.server.blockmanagement.TestBlockManager.testNeededReplicationWhileAppending(TestBlockManager.java:465){code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13480) RBF: Separate namenodeHeartbeat and routerHeartbeat to different config key

2018-05-11 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-13480?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16472288#comment-16472288
 ] 

Íñigo Goiri commented on HDFS-13480:


Agree with [~linyiqun]; we shouldn't change {{DFS_ROUTER_HEARTBEAT_ENABLE}}.
I think we should have:
*  dfs.federation.router.heartbeat.enable: heartbeat the state of the Router.
*  dfs.federation.router.namenode.heartbeat.enable: heartbeat the state of 
the Namenode.
We should update all the documentation, including the md and the xml.
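For illustration, a minimal sketch of how {{Router#serviceInit}} could 
consume the two keys. The key strings and defaults below are assumptions 
(neither key exists in RBFConfigKeys yet); defaulting the namenode key to 
the router value would preserve today's behavior:
{code:java}
// Sketch only: hypothetical keys, shown inline in Router#serviceInit.
boolean routerHeartbeat = conf.getBoolean(
    "dfs.federation.router.heartbeat.enable", true);
boolean namenodeHeartbeat = conf.getBoolean(
    "dfs.federation.router.namenode.heartbeat.enable", routerHeartbeat);

if (namenodeHeartbeat) {
  // Create a status updater for each monitored Namenode.
  for (NamenodeHeartbeatService service : createNamenodeHeartbeatServices()) {
    addService(service);
  }
}
if (routerHeartbeat) {
  // Periodically update the Router state, independently of whether any
  // Namenodes are monitored.
  this.routerHeartbeatService = new RouterHeartbeatService(this);
  addService(this.routerHeartbeatService);
}
{code}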


> RBF: Separate namenodeHeartbeat and routerHeartbeat to different config key
> ---
>
> Key: HDFS-13480
> URL: https://issues.apache.org/jira/browse/HDFS-13480
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: maobaolong
>Assignee: maobaolong
>Priority: Major
> Attachments: HDFS-13480.001.patch
>
>
> Now, if I enable heartbeat.enable but do not want to monitor any namenode, 
> I get an ERROR log like:
> {code:java}
> [2018-04-19T14:00:03.057+08:00] [ERROR] 
> federation.router.Router.serviceInit(Router.java 214) [main] : Heartbeat is 
> enabled but there are no namenodes to monitor
> {code}
> and if I disable heartbeat.enable, we cannot get any mount table updates, 
> because of the following logic in Router.java:
> {code:java}
> if (conf.getBoolean(
> RBFConfigKeys.DFS_ROUTER_HEARTBEAT_ENABLE,
> RBFConfigKeys.DFS_ROUTER_HEARTBEAT_ENABLE_DEFAULT)) {
>   // Create status updater for each monitored Namenode
>   this.namenodeHeartbeatServices = createNamenodeHeartbeatServices();
>   for (NamenodeHeartbeatService hearbeatService :
>   this.namenodeHeartbeatServices) {
> addService(hearbeatService);
>   }
>   if (this.namenodeHeartbeatServices.isEmpty()) {
> LOG.error("Heartbeat is enabled but there are no namenodes to 
> monitor");
>   }
>   // Periodically update the router state
>   this.routerHeartbeatService = new RouterHeartbeatService(this);
>   addService(this.routerHeartbeatService);
> }
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-21) Ozone: Add support for rename key within a bucket for rest client

2018-05-11 Thread Mukul Kumar Singh (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-21?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16472278#comment-16472278
 ] 

Mukul Kumar Singh commented on HDDS-21:
---

Thanks for the updated patch, [~ljain]. The v3 patch looks really good to me. 
Some minor comments; I am +1 after they are addressed.

1) Header.java:68 - let's name the variable OZONE_RENAME_TO_KEY_PARAM_NAME.
2) TestOzoneRestClient:392 - let's move lines 418-419 before the expected 
exception message, so that all the statements are executed.
3) KeyHandler.java:254 - please remove the extra "-" in the javadoc.



> Ozone: Add support for rename key within a bucket for rest client
> -
>
> Key: HDDS-21
> URL: https://issues.apache.org/jira/browse/HDDS-21
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Attachments: HDDS-21.001.patch, HDDS-21.002.patch, HDDS-21.003.patch, 
> HDFS-13229-HDFS-7240.001.patch
>
>
> This jira aims to add support for rename key within a bucket for rest client.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13494) Set empty value to serialFilter to avoid UnrecoverableKeyException on JDK 8u171

2018-05-11 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13494?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HDFS-13494:
-
Priority: Critical  (was: Major)
Hadoop Flags: Reviewed
 Summary: Set empty value to serialFilter to avoid 
UnrecoverableKeyException on JDK 8u171  (was: TestKeyProviderFactory test 
errors on JDK 8u171 (java.security.UnrecoverableKeyException))

> Set empty value to serialFilter to avoid UnrecoverableKeyException on JDK 
> 8u171
> ---
>
> Key: HDFS-13494
> URL: https://issues.apache.org/jira/browse/HDFS-13494
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 2.7.6, 3.0.2
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Critical
> Attachments: HDFS-13494.001.patch, HDFS-13494.002.patch, 
> org.apache.hadoop.crypto.key.TestKeyProviderFactory.txt
>
>
> There is a new feature in JDK 8u171 called Enhanced KeyStore Mechanisms 
> (http://www.oracle.com/technetwork/java/javase/8u171-relnotes-430.html#JDK-8189997).
> This could be the cause of the following errors in TestKeyProviderFactory:
> {noformat}
> Caused by: java.security.UnrecoverableKeyException: Rejected by the 
> jceks.key.serialFilter or jdk.serialFilter property
>   at com.sun.crypto.provider.KeyProtector.unseal(KeyProtector.java:352)
>   at 
> com.sun.crypto.provider.JceKeyStore.engineGetKey(JceKeyStore.java:136)
>   at java.security.KeyStore.getKey(KeyStore.java:1023)
>   at 
> org.apache.hadoop.crypto.key.JavaKeyStoreProvider.getMetadata(JavaKeyStoreProvider.java:410)
>   ... 28 more
> {noformat}
> This issue causes errors and failures in hbase tests right now (using hdfs) 
> and could affect other products running on this new Java version.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13542) TestBlockManager#testNeededReplicationWhileAppending fails due to improper cluster shutdown in TestBlockManager#testBlockManagerMachinesArray on Windows

2018-05-11 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-13542?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16472264#comment-16472264
 ] 

Íñigo Goiri commented on HDFS-13542:


Committed to trunk, branch-3.1, branch-3.0, branch-2, and branch-2.9.
Thank you very much [~huanbang1993].

> TestBlockManager#testNeededReplicationWhileAppending fails due to improper 
> cluster shutdown in TestBlockManager#testBlockManagerMachinesArray on Windows
> 
>
> Key: HDFS-13542
> URL: https://issues.apache.org/jira/browse/HDFS-13542
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Anbang Hu
>Assignee: Anbang Hu
>Priority: Minor
>  Labels: windows
> Attachments: HDFS-13542-branch-2.000.patch, 
> HDFS-13542-branch-2.001.patch, HDFS-13542.000.patch, HDFS-13542.001.patch
>
>
> branch-2.9 has failure message on Windows:
> {code:java}
> 2018-05-09 16:26:03,014 [Thread-3533] ERROR hdfs.MiniDFSCluster 
> (MiniDFSCluster.java:initMiniDFSCluster(884)) - IOE creating namenodes. 
> Permissions dump:
> path 
> 'E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target\test\data\dfs\data': 
>  
> absolute:E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target\test\data\dfs\data
>  permissions: drwx
> path 'E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target\test\data\dfs': 
>  absolute:E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target\test\data\dfs
>  permissions: drwx
> path 'E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target\test\data': 
>  absolute:E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target\test\data
>  permissions: drwx
> path 'E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target\test': 
>  absolute:E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target\test
>  permissions: drwx
> path 'E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target': 
>  absolute:E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target
>  permissions: drwx
> path 'E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs': 
>  absolute:E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs
>  permissions: drwx
> path 'E:\OSSHadoop\hadoop-hdfs-project': 
>  absolute:E:\OSSHadoop\hadoop-hdfs-project
>  permissions: drwx
> path 'E:\OSSHadoop': 
>  absolute:E:\OSSHadoop
>  permissions: drwx
> path 'E:\': 
>  absolute:E:\
>  permissions: drwxjava.io.IOException: Could not fully delete 
> E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target\test\data\dfs\name-0-1
>  at 
> org.apache.hadoop.hdfs.MiniDFSCluster.configureNameService(MiniDFSCluster.java:1026)
>  at 
> org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:982)
>  at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:879)
>  at org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:515)
>  at 
> org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:474)
>  at 
> org.apache.hadoop.hdfs.server.blockmanagement.TestBlockManager.testNeededReplicationWhileAppending(TestBlockManager.java:465){code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13542) TestBlockManager#testNeededReplicationWhileAppending fails due to improper cluster shutdown in TestBlockManager#testBlockManagerMachinesArray on Windows

2018-05-11 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-13542?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16472245#comment-16472245
 ] 

Íñigo Goiri commented on HDFS-13542:


Thanks [~huanbang1993] for [^HDFS-13542.001.patch].
The change seems straightforward and only makes the tests more resilient.
LGTM, +1.
Committing all the way to branch-2.9.

> TestBlockManager#testNeededReplicationWhileAppending fails due to improper 
> cluster shutdown in TestBlockManager#testBlockManagerMachinesArray on Windows
> 
>
> Key: HDFS-13542
> URL: https://issues.apache.org/jira/browse/HDFS-13542
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Anbang Hu
>Assignee: Anbang Hu
>Priority: Minor
>  Labels: windows
> Attachments: HDFS-13542-branch-2.000.patch, 
> HDFS-13542-branch-2.001.patch, HDFS-13542.000.patch, HDFS-13542.001.patch
>
>
> branch-2.9 has failure message on Windows:
> {code:java}
> 2018-05-09 16:26:03,014 [Thread-3533] ERROR hdfs.MiniDFSCluster 
> (MiniDFSCluster.java:initMiniDFSCluster(884)) - IOE creating namenodes. 
> Permissions dump:
> path 
> 'E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target\test\data\dfs\data': 
>  
> absolute:E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target\test\data\dfs\data
>  permissions: drwx
> path 'E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target\test\data\dfs': 
>  absolute:E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target\test\data\dfs
>  permissions: drwx
> path 'E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target\test\data': 
>  absolute:E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target\test\data
>  permissions: drwx
> path 'E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target\test': 
>  absolute:E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target\test
>  permissions: drwx
> path 'E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target': 
>  absolute:E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target
>  permissions: drwx
> path 'E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs': 
>  absolute:E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs
>  permissions: drwx
> path 'E:\OSSHadoop\hadoop-hdfs-project': 
>  absolute:E:\OSSHadoop\hadoop-hdfs-project
>  permissions: drwx
> path 'E:\OSSHadoop': 
>  absolute:E:\OSSHadoop
>  permissions: drwx
> path 'E:\': 
>  absolute:E:\
>  permissions: drwxjava.io.IOException: Could not fully delete 
> E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target\test\data\dfs\name-0-1
>  at 
> org.apache.hadoop.hdfs.MiniDFSCluster.configureNameService(MiniDFSCluster.java:1026)
>  at 
> org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:982)
>  at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:879)
>  at org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:515)
>  at 
> org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:474)
>  at 
> org.apache.hadoop.hdfs.server.blockmanagement.TestBlockManager.testNeededReplicationWhileAppending(TestBlockManager.java:465){code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13339) Volume reference can't release when testVolFailureStatsPreservedOnNNRestart

2018-05-11 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16472223#comment-16472223
 ] 

Daryn Sharp commented on HDFS-13339:


Creating new thread pool/factory instances is going to cause thread leaks, at 
least until the threads time out, which is likely to cause premature 
promotion of the objects and increase GC pressure later.  Use a shared 
instance.

This also looks like a legitimate, non-test-related bug.  If so, the 
description is misleading and should be revised to remove the reference to a 
test.
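For illustration, a minimal sketch of the shared-instance pattern. The holder 
class and names are hypothetical; Guava's {{ThreadFactoryBuilder}} is already 
on Hadoop's classpath:
{code:java}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import com.google.common.util.concurrent.ThreadFactoryBuilder;

public final class VolumeReferenceExecutor {
  // One shared, lazily-growing pool for the whole process instead of a new
  // pool per volume-reference release; idle threads die after 60 seconds.
  private static final ExecutorService SHARED =
      Executors.newCachedThreadPool(new ThreadFactoryBuilder()
          .setDaemon(true)
          .setNameFormat("volume-ref-release-%d")
          .build());

  private VolumeReferenceExecutor() {}

  public static ExecutorService get() {
    return SHARED;
  }
}
{code}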

> Volume reference can't release when testVolFailureStatsPreservedOnNNRestart
> ---
>
> Key: HDFS-13339
> URL: https://issues.apache.org/jira/browse/HDFS-13339
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
> Environment: os: Linux 2.6.32-358.el6.x86_64
> hadoop version: hadoop-3.2.0-SNAPSHOT
> unit: mvn test -Pnative 
> -Dtest=TestDataNodeVolumeFailureReporting#testVolFailureStatsPreservedOnNNRestart
>Reporter: liaoyuxiangqin
>Assignee: liaoyuxiangqin
>Priority: Critical
>  Labels: DataNode, volumes
> Attachments: HDFS-13339.001.patch
>
>
> When I execute the unit test
>  TestDataNodeVolumeFailureReporting#testVolFailureStatsPreservedOnNNRestart, 
> the process blocks on waitReplication; detailed information follows:
> [INFO] ---
>  [INFO] T E S T S
>  [INFO] ---
>  [INFO] Running 
> org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting
>  [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 
> 307.492 s <<< FAILURE! - in 
> org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting
>  [ERROR] 
> testVolFailureStatsPreservedOnNNRestart(org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting)
>  Time elapsed: 307.206 s <<< ERROR!
>  java.util.concurrent.TimeoutException: Timed out waiting for /test1 to reach 
> 2 replicas
>  at org.apache.hadoop.hdfs.DFSTestUtil.waitReplication(DFSTestUtil.java:800)
>  at 
> org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting.testVolFailureStatsPreservedOnNNRestart(TestDataNodeVolumeFailureReporting.java:283)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498)
>  at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>  at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>  at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>  at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-21) Ozone: Add support for rename key within a bucket for rest client

2018-05-11 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-21?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16472192#comment-16472192
 ] 

genericqa commented on HDDS-21:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
38s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 28m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  5s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
29s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 24s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
16s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
26s{color} | {color:green} common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
21s{color} | {color:green} client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
21s{color} | {color:green} objectstore-service in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 19m 35s{color} 
| {color:red} integration-test in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 86m 51s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.TestOzoneConfigurationFields |
|   | hadoop.ozone.container.common.impl.TestContainerDeletionChoosingPolicy |
|   | hadoop.ozone.TestStorageContainerManager |
|   | 

[jira] [Commented] (HDFS-13489) Get base snapshotable path if exists for a given path

2018-05-11 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13489?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16472166#comment-16472166
 ] 

Daryn Sharp commented on HDFS-13489:


On further inspection, this method appears to be at odds with the semantics 
of {{getSnapshottableDirListing}}.  That method returns only the 
snapshottable directories owned by the user.  I don't know if there is 
perceived security value in not letting a non-owner detect snapshot roots, 
but someone with snapshot expertise should comment.

Otherwise, why not have the client call {{getSnapshottableDirListing}} and 
prefix-match against the given path?
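For illustration, a minimal client-side sketch of that alternative. 
{{SnapshotRootFinder}} and {{findSnapshotRoot}} are hypothetical names; only 
{{getSnapshottableDirListing}} and the standard {{Path}} utilities are 
assumed:
{code:java}
import java.io.IOException;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.protocol.SnapshottableDirectoryStatus;

public class SnapshotRootFinder {
  // Returns the snapshottable root containing 'path', or null if none is
  // visible to the calling user.
  public static Path findSnapshotRoot(DistributedFileSystem fs, Path path)
      throws IOException {
    SnapshottableDirectoryStatus[] dirs = fs.getSnapshottableDirListing();
    if (dirs == null) {
      return null; // no snapshottable directories owned by this user
    }
    String p = Path.getPathWithoutSchemeAndAuthority(path).toString();
    for (SnapshottableDirectoryStatus status : dirs) {
      String root = status.getFullPath().toString();
      if (p.equals(root) || p.startsWith(root + Path.SEPARATOR)) {
        return status.getFullPath();
      }
    }
    return null;
  }
}
{code}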

> Get base snapshotable path if exists for a given path
> -
>
> Key: HDFS-13489
> URL: https://issues.apache.org/jira/browse/HDFS-13489
> Project: Hadoop HDFS
>  Issue Type: Task
>  Components: hdfs
>Reporter: Harkrishn Patro
>Assignee: Harkrishn Patro
>Priority: Major
> Attachments: HDFS-13489.001.patch, HDFS-13489.002.patch, 
> HDFS-13489.003.patch, HDFS-13489.004.patch, HDFS-13489.005.patch, 
> HDFS-13489.006.patch, HDFS-13489.007.patch
>
>
> Currently, HDFS only lists the snapshottable paths in the filesystem. This 
> feature would add the ability to determine whether a given path is 
> snapshottable; if so, it would return the base snapshottable path.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-39) Ozone: Compile Ozone/HDFS/Cblock protobuf files with proto3 compiler using maven protoc plugin

2018-05-11 Thread Mukul Kumar Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-39?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mukul Kumar Singh updated HDDS-39:
--
Attachment: HDDS-39.004.patch

> Ozone: Compile Ozone/HDFS/Cblock protobuf files with proto3 compiler using 
> maven protoc plugin
> --
>
> Key: HDDS-39
> URL: https://issues.apache.org/jira/browse/HDDS-39
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Native
>Affects Versions: 0.2.1
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-39.003.patch, HDDS-39.004.patch, 
> HDFS-13389-HDFS-7240.001.patch, HDFS-13389-HDFS-7240.002.patch
>
>
> Currently all the Ozone/HDFS/Cblock proto files are compiled using proto 2.5; 
> this can be changed to use the proto3 compiler.
> This change will also improve performance, because in the client path the 
> xceiver client ratis currently converts proto2 classes to proto3 using byte 
> string manipulation.
> Please note that for the rest of hadoop (except Ozone/Cblock/HDSL) the protoc 
> version will still remain 2.5, as this proto compilation will be done through 
> the following plugin:
> https://www.xolstice.org/protobuf-maven-plugin/
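
Illustrative sketch of the byte-string bridge mentioned above (the message 
class names are hypothetical; {{toByteArray()}} and {{parseFrom()}} are 
standard protobuf APIs):

{code}
// Bridging a proto2 message to its proto3/shaded counterpart today means a
// full serialize/re-parse round trip on every call:
byte[] bytes = proto2Request.toByteArray();        // copy #1: serialize
ShadedRequestProto proto3Request =
    ShadedRequestProto.parseFrom(bytes);           // copy #2: re-parse
{code}

Compiling the Ozone protos with proto3 directly would remove this per-call 
round trip.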



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13489) Get base snapshotable path if exists for a given path

2018-05-11 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13489?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16472145#comment-16472145
 ] 

Daryn Sharp commented on HDFS-13489:


*Please* pay attention to security. Do not call {{getINodesInPath}}! It 
bypasses all permission checks. See the javadoc:
{code}
* Resolves the given path into inodes.  Reserved paths are not handled and
* permissions are not verified.  Client supplied paths should be
* resolved via {@link #resolvePath(FSPermissionChecker, String, DirOp)}.
* This method should only be used by internal methods.
{code}
Use {{fsd.resolvePath(pc, src, DirOp.READ_LINK)}}.
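
A hedged sketch of what that permission-checked resolution could look like for 
this feature (these are namenode-internal classes, so this is illustrative 
only, not the attached patch):

{code}
// Sketch: resolve the client-supplied path with permission checks enforced,
// then walk the resolved ancestors from leaf to root looking for a
// snapshottable directory.
static String findBaseSnapshottablePath(FSDirectory fsd,
    FSPermissionChecker pc, String src) throws IOException {
  INodesInPath iip = fsd.resolvePath(pc, src, DirOp.READ_LINK);
  for (int i = iip.length() - 1; i >= 0; i--) {
    INode inode = iip.getINode(i);
    if (inode != null && inode.isDirectory()
        && inode.asDirectory().isSnapshottable()) {
      return inode.getFullPathName(); // base snapshottable path
    }
  }
  return null; // no snapshottable ancestor
}
{code}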


> Get base snapshotable path if exists for a given path
> -
>
> Key: HDFS-13489
> URL: https://issues.apache.org/jira/browse/HDFS-13489
> Project: Hadoop HDFS
>  Issue Type: Task
>  Components: hdfs
>Reporter: Harkrishn Patro
>Assignee: Harkrishn Patro
>Priority: Major
> Attachments: HDFS-13489.001.patch, HDFS-13489.002.patch, 
> HDFS-13489.003.patch, HDFS-13489.004.patch, HDFS-13489.005.patch, 
> HDFS-13489.006.patch, HDFS-13489.007.patch
>
>
> Currently, HDFS only lists the snapshottable paths in the filesystem. This 
> feature would add the ability to determine whether a given path is 
> snapshottable; if so, it would return the base snapshottable path.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-25) Simple async event processing for SCM

2018-05-11 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-25?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16472138#comment-16472138
 ] 

genericqa commented on HDDS-25:
---

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
35s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 28m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 55s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 46s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
21s{color} | {color:green} framework in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 56m 58s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDDS-25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12923031/HDDS-25.005.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 5bd903538532 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / a922b9c |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDDS-Build/82/testReport/ |
| Max. process+thread count | 341 (vs. ulimit of 1) |
| modules | C: hadoop-hdds/framework U: hadoop-hdds/framework |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/82/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Simple async event processing for SCM
> -
>
> Key: HDDS-25
> URL: https://issues.apache.org/jira/browse/HDDS-25
> Project: Hadoop Distributed 

[jira] [Commented] (HDDS-47) Add acceptance tests for Ozone Shell

2018-05-11 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-47?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16472139#comment-16472139
 ] 

genericqa commented on HDDS-47:
---

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
37s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  6s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/acceptance-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
35s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 26s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/acceptance-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
20s{color} | {color:green} client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
13s{color} | {color:green} acceptance-test in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 57m 12s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDDS-47 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12923022/HDDS-47.001.patch |
| Optional Tests |  asflicense  unit  compile  javac  javadoc  mvninstall  
mvnsite  shadedclient  findbugs  checkstyle  |
| uname | Linux 56c95f381c90 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 

[jira] [Commented] (HDDS-40) Separating packaging of Ozone/HDDS from the main Hadoop

2018-05-11 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-40?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16472120#comment-16472120
 ] 

genericqa commented on HDDS-40:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
35s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m  
1s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
 7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 29m  
4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 20m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m  3s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-dist hadoop-ozone/acceptance-test . {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  6m 
45s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
27s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 27m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 28m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 28m  
7s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m 55s{color} | {color:orange} root: The patch generated 1 new + 0 unchanged - 
0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 20m 
42s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} shellcheck {color} | {color:red}  0m  
0s{color} | {color:red} The patch generated 6 new + 0 unchanged - 0 fixed = 6 
total (was 0) {color} |
| {color:orange}-0{color} | {color:orange} shelldocs {color} | {color:orange}  
0m 12s{color} | {color:orange} The patch generated 10 new + 104 unchanged - 0 
fixed = 114 total (was 104) {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 4 line(s) that end in whitespace. Use git 
apply --whitespace=fix <patch_file>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
3s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m  1s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-dist hadoop-ozone/acceptance-test . {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  6m 
43s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 14m 19s{color} 
| {color:red} root in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
46s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}216m 25s{color} | 
{color:black} {color} |
\\
\\

[jira] [Commented] (HDFS-13494) TestKeyProviderFactory test errors on JDK 8u171 (java.security.UnrecoverableKeyException)

2018-05-11 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13494?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16472112#comment-16472112
 ] 

Akira Ajisaka commented on HDFS-13494:
--

+1, thanks!

> TestKeyProviderFactory test errors on JDK 8u171 
> (java.security.UnrecoverableKeyException)
> -
>
> Key: HDFS-13494
> URL: https://issues.apache.org/jira/browse/HDFS-13494
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 2.7.6, 3.0.2
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Major
> Attachments: HDFS-13494.001.patch, HDFS-13494.002.patch, 
> org.apache.hadoop.crypto.key.TestKeyProviderFactory.txt
>
>
> There is a new feature in JDK 8u171 called Enhanced KeyStore Mechanisms 
> (http://www.oracle.com/technetwork/java/javase/8u171-relnotes-430.html#JDK-8189997).
> This could be the cause of the following errors in TestKeyProviderFactory:
> {noformat}
> Caused by: java.security.UnrecoverableKeyException: Rejected by the 
> jceks.key.serialFilter or jdk.serialFilter property
>   at com.sun.crypto.provider.KeyProtector.unseal(KeyProtector.java:352)
>   at 
> com.sun.crypto.provider.JceKeyStore.engineGetKey(JceKeyStore.java:136)
>   at java.security.KeyStore.getKey(KeyStore.java:1023)
>   at 
> org.apache.hadoop.crypto.key.JavaKeyStoreProvider.getMetadata(JavaKeyStoreProvider.java:410)
>   ... 28 more
> {noformat}
> This issue currently causes errors and failures in HBase tests (using HDFS) 
> and could affect other products running on this new Java version.
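
A hedged sketch of a test-side workaround, using the {{jceks.key.serialFilter}} 
property from the linked release notes (the filter pattern below is 
illustrative, not necessarily the committed fix):

{code}
// Relax the JDK's keystore deserialization filter so the classes a JCEKS
// keystore needs (SecretKeySpec, KeyRep, ...) can be read back; the
// trailing "!*" keeps everything else rejected. This must run before the
// keystore is read.
System.setProperty("jceks.key.serialFilter",
    "java.lang.Enum;java.security.KeyRep;java.security.KeyRep$Type;"
        + "javax.crypto.spec.SecretKeySpec;!*");
{code}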



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-25) Simple async event processing for SCM

2018-05-11 Thread Elek, Marton (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-25?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16472012#comment-16472012
 ] 

Elek, Marton commented on HDDS-25:
--

Thanks a lot [~shashikant] for checking the comments. I fixed them in the 
last patch.

Regarding the tracing: yes, it's a good question. I think some kind of timeout 
could be added to the EventExecutor if needed, and it could be transparent to 
the EventHandlers. But the promise of the actor-based architecture is that you 
can't create a deadlock: if there is no Future and every actor (= handler) has 
its own threadpool/mailbox, it is very hard to produce one. We don't even need 
synchronization in the EventHandlers if the EventExecutor can guarantee that 
every handler is executed on only one thread. 
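
A minimal sketch of that guarantee (names are hypothetical, not the attached 
patch): each handler gets its own single-threaded executor as a mailbox, so 
its events are processed strictly one at a time and the handler body needs no 
synchronization.

{code}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.function.Consumer;

// Hypothetical illustration: events fired for a handler are queued on that
// handler's own single-threaded executor (its "mailbox").
public class MailboxEventQueue {
  private final Map<Consumer<?>, ExecutorService> mailboxes =
      new ConcurrentHashMap<>();

  public <P> void fireEvent(Consumer<P> handler, P payload) {
    mailboxes
        .computeIfAbsent(handler, h -> Executors.newSingleThreadExecutor())
        .submit(() -> handler.accept(payload));
  }
}
{code}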

> Simple async event processing for SCM
> -
>
> Key: HDDS-25
> URL: https://issues.apache.org/jira/browse/HDDS-25
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-25.001.patch, HDDS-25.003.patch, HDDS-25.004.patch, 
> HDDS-25.005.patch
>
>
> For implementing all the SCM status changes we need simple async event 
> processing. 
> Our use-case is very similar to an actor-based system: we would like to 
> communicate with fully async events/messages, process the different events on 
> different threads, and so on.
> But a full actor framework (such as Akka) would be overkill for this use 
> case. We don't need distributed actor systems, actor hierarchies or complex 
> resiliency.
> As a first approach we can use a very simple system where a common EventQueue 
> entry point routes events to the async event handlers.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-25) Simple async event processing for SCM

2018-05-11 Thread Elek, Marton (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-25?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-25:
-
Attachment: HDDS-25.005.patch

> Simple async event processing for SCM
> -
>
> Key: HDDS-25
> URL: https://issues.apache.org/jira/browse/HDDS-25
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-25.001.patch, HDDS-25.003.patch, HDDS-25.004.patch, 
> HDDS-25.005.patch
>
>
> For implementing all the SCM status changes we need simple async event 
> processing. 
> Our use-case is very similar to an actor-based system: we would like to 
> communicate with fully async events/messages, process the different events on 
> different threads, and so on.
> But a full actor framework (such as Akka) would be overkill for this use 
> case. We don't need distributed actor systems, actor hierarchies or complex 
> resiliency.
> As a first approach we can use a very simple system where a common EventQueue 
> entry point routes events to the async event handlers.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-21) Ozone: Add support for rename key within a bucket for rest client

2018-05-11 Thread Lokesh Jain (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-21?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16471909#comment-16471909
 ] 

Lokesh Jain commented on HDDS-21:
-

[~msingh] Thanks for reviewing the patch! The v3 patch addresses your comments.

> Ozone: Add support for rename key within a bucket for rest client
> -
>
> Key: HDDS-21
> URL: https://issues.apache.org/jira/browse/HDDS-21
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Attachments: HDDS-21.001.patch, HDDS-21.002.patch, HDDS-21.003.patch, 
> HDFS-13229-HDFS-7240.001.patch
>
>
> This jira aims to add support for renaming a key within a bucket for the rest client.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-21) Ozone: Add support for rename key within a bucket for rest client

2018-05-11 Thread Lokesh Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-21?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lokesh Jain updated HDDS-21:

Attachment: HDDS-21.003.patch

> Ozone: Add support for rename key within a bucket for rest client
> -
>
> Key: HDDS-21
> URL: https://issues.apache.org/jira/browse/HDDS-21
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Attachments: HDDS-21.001.patch, HDDS-21.002.patch, HDDS-21.003.patch, 
> HDFS-13229-HDFS-7240.001.patch
>
>
> This jira aims to add support for renaming a key within a bucket for the rest client.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-47) Add acceptance tests for Ozone Shell

2018-05-11 Thread Lokesh Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-47?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lokesh Jain updated HDDS-47:

Status: Patch Available  (was: Open)

> Add acceptance tests for Ozone Shell
> 
>
> Key: HDDS-47
> URL: https://issues.apache.org/jira/browse/HDDS-47
> Project: Hadoop Distributed Data Store
>  Issue Type: Test
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Attachments: HDDS-47.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org


