[jira] [Commented] (HDFS-9941) Do not log StandbyException on NN, other minor logging fixes

2016-03-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9941?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15190259#comment-15190259
 ] 

ASF GitHub Bot commented on HDFS-9941:
--

GitHub user arp7 opened a pull request:

https://github.com/apache/hadoop/pull/83

HDFS-9941. Do not log StandbyException on NN, other minor logging fixes.

The v1 patch includes the following fixes:
# Suppress StandbyException log messages for the NameNode.
# {{saveAllocatedBlock}} logs the block locations (DN transfer addresses).
# {{logBlockReplicationInfo}} logs to the blockStateChangeLog instead of 
{{DecommissionManager#LOG}}. Also added a log level guard.
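The log level guard mentioned in item 3 follows a standard pattern; here is a minimal sketch using java.util.logging (the class and message names are illustrative, not the actual HDFS code):

```java
import java.util.logging.Level;
import java.util.logging.Logger;

public class LogGuardExample {
    private static final Logger LOG = Logger.getLogger("BlockStateChange");

    // Building the message can be expensive (string concatenation, lookups),
    // so it should happen only when the target level is actually enabled.
    static String describeBlock(String blockId, int liveReplicas) {
        return "Block " + blockId + " has " + liveReplicas + " live replicas";
    }

    public static void main(String[] args) {
        if (LOG.isLoggable(Level.FINE)) {          // the level guard
            LOG.fine(describeBlock("blk_1001", 3)); // built only when needed
        }
        System.out.println(describeBlock("blk_1001", 3));
    }
}
```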

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/arp7/hadoop HDFS-9941

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/hadoop/pull/83.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #83


commit fbf1bf7fd6d530687b2e9610b8c160e9c7a3fdcb
Author: Arpit Agarwal 
Date:   2016-03-10T19:17:17Z

HDFS-9941. Do not log StandbyException on NN, other minor logging fixes.




> Do not log StandbyException on NN, other minor logging fixes
> 
>
> Key: HDFS-9941
> URL: https://issues.apache.org/jira/browse/HDFS-9941
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.8.0
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>
> The NameNode can skip logging StandbyException messages. These are seen 
> regularly in normal operation and convey no useful information.
> We no longer log the locations of newly allocated blocks in 2.8.0. The DN IDs 
> can be useful for debugging so let's add that back.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-10256) Use GenericTestUtils.getTestDir method in tests for temporary directories

2016-04-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10256?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15224092#comment-15224092
 ] 

ASF GitHub Bot commented on HDFS-10256:
---

GitHub user vinayakumarb opened a pull request:

https://github.com/apache/hadoop/pull/90

HDFS-10256. Use GenericTestUtils.getTestDir method in tests for temporary 
directories

Separated the HDFS changes from HADOOP-12984
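The idea behind GenericTestUtils.getTestDir can be sketched as follows; this is an illustrative reimplementation of the pattern, not the actual Hadoop method:

```java
import java.io.File;

public class TestDirExample {
    // Resolve a per-test temporary directory from the "test.build.data"
    // system property, with a default under target/, instead of hard-coding
    // paths like test/build/data in each individual test.
    static File getTestDir(String subdir) {
        String base = System.getProperty("test.build.data",
                "target" + File.separator + "test" + File.separator + "data");
        return new File(base, subdir);
    }

    public static void main(String[] args) {
        System.out.println(getTestDir("TestMyFeature").getPath());
    }
}
```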

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/vinayakumarb/hadoop features/HDFS-10256

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/hadoop/pull/90.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #90


commit c029cee003492b68954be8a20cfc23ad22bedaa5
Author: Vinayakumar B 
Date:   2016-04-04T12:54:37Z

HADOOP-12984. Add GenericTestUtils.getTestDir method and use it for 
temporary directory in tests

commit 1e7c43187deb18fe3bd56250fc6620d82d0602bb
Author: Vinayakumar B 
Date:   2016-04-04T13:04:24Z

HDFS-10256. Use GenericTestUtils.getTestDir method in tests for temporary 
directories




> Use GenericTestUtils.getTestDir method in tests for temporary directories
> -
>
> Key: HDFS-10256
> URL: https://issues.apache.org/jira/browse/HDFS-10256
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: build, test
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
>






[jira] [Commented] (HDFS-9895) Push up DataNode#conf to base class

2016-04-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9895?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15254907#comment-15254907
 ] 

ASF GitHub Bot commented on HDFS-9895:
--

GitHub user xiaobingo opened a pull request:

https://github.com/apache/hadoop/pull/92

HDFS-9895. Push up DataNode#conf to base class

Please review patch v001; see also 
https://issues.apache.org/jira/browse/HDFS-9895.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/xiaobingo/hadoop HDFS-9895

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/hadoop/pull/92.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #92


commit 2103ab5d37dde7ed807c9a3447a089f96071c595
Author: Xiaobing Zhou 
Date:   2016-04-22T23:30:00Z

HDFS-9895. Push up DataNode#conf to base class




> Push up DataNode#conf to base class
> ---
>
> Key: HDFS-9895
> URL: https://issues.apache.org/jira/browse/HDFS-9895
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
> Attachments: HDFS-9895.000.patch
>
>
> Since DataNode inherits ReconfigurableBase with Configured as base class 
> where configuration is maintained, DataNode#conf should be removed for the 
> purpose of brevity.
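The duplication being removed can be sketched like this (simplified stand-in types, not the actual Hadoop classes):

```java
// A Configured-style base class that already owns the configuration.
class ConfiguredBase {
    private String conf;
    public void setConf(String conf) { this.conf = conf; }
    public String getConf() { return conf; }
}

// Before the change, the subclass kept its own conf field, which could
// drift out of sync with the base class's copy; after it, callers simply
// use the inherited getConf().
class DataNodeLike extends ConfiguredBase {
    String describe() { return "conf=" + getConf(); }
}

public class PushUpConfExample {
    public static void main(String[] args) {
        DataNodeLike dn = new DataNodeLike();
        dn.setConf("hdfs-site");
        System.out.println(dn.describe());
    }
}
```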





[jira] [Commented] (HDFS-10224) Implement asynchronous rename for DistributedFileSystem

2016-04-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10224?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15254940#comment-15254940
 ] 

ASF GitHub Bot commented on HDFS-10224:
---

GitHub user xiaobingo opened a pull request:

https://github.com/apache/hadoop/pull/93

HDFS-10224. Implement asynchronous rename for DistributedFileSystem

Please review patch v007, thanks.
See also patch v007 in https://issues.apache.org/jira/browse/HDFS-10224

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/xiaobingo/hadoop HDFS-10224

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/hadoop/pull/93.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #93


commit 87037ee1c65e02908d12a9db03bf4463e4c66041
Author: Xiaobing Zhou 
Date:   2016-04-22T23:56:06Z

HDFS-10224. Implement asynchronous rename for DistributedFileSystem




> Implement asynchronous rename for DistributedFileSystem
> ---
>
> Key: HDFS-10224
> URL: https://issues.apache.org/jira/browse/HDFS-10224
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: fs
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
> Attachments: HDFS-10224-HDFS-9924.000.patch, 
> HDFS-10224-HDFS-9924.001.patch, HDFS-10224-HDFS-9924.002.patch, 
> HDFS-10224-HDFS-9924.003.patch, HDFS-10224-HDFS-9924.004.patch, 
> HDFS-10224-HDFS-9924.005.patch, HDFS-10224-HDFS-9924.006.patch, 
> HDFS-10224-HDFS-9924.007.patch, HDFS-10224-and-HADOOP-12909.000.patch
>
>
> This is proposed to implement an asynchronous DistributedFileSystem based on 
> AsyncFileSystem APIs in HADOOP-12910. In addition, rename is implemented in 
> this JIRA.





[jira] [Commented] (HDFS-9895) Push up DataNode#conf to base class

2016-04-23 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9895?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15255303#comment-15255303
 ] 

ASF GitHub Bot commented on HDFS-9895:
--

Github user arp7 commented on a diff in the pull request:

https://github.com/apache/hadoop/pull/92#discussion_r60831863
  
--- Diff: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DNConf.java
 ---
@@ -113,71 +112,71 @@
 
   // Allow LAZY_PERSIST writes from non-local clients?
   private final boolean allowNonLocalLazyPersist;
-
+  private final DataNode dn;
   private final int volFailuresTolerated;
   private final int volsConfigured;
 
-  public DNConf(Configuration conf) {
-this.conf = conf;
-socketTimeout = conf.getInt(DFS_CLIENT_SOCKET_TIMEOUT_KEY,
+  public DNConf(final DataNode dn) {
--- End diff --

The dn.getConf() object is not referenced outside the constructor so you 
can just pass a reference to that object. Also DNConf need not keep a reference 
to the dn. I think you can just revert all changes to this file.
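The suggested constructor shape might look like this (hypothetical, heavily simplified types; the real DNConf reads many more configuration keys):

```java
// Minimal stand-in for Hadoop's Configuration.
class Conf {
    int getInt(String key, int defaultValue) { return defaultValue; }
}

// DNConf takes the Configuration directly, so it holds no reference back
// to the DataNode and reads everything it needs in the constructor.
class DNConfSketch {
    private final int socketTimeout;

    DNConfSketch(Conf conf) {  // pass dn.getConf(), not dn itself
        this.socketTimeout = conf.getInt("dfs.client.socket-timeout", 60_000);
    }

    int getSocketTimeout() { return socketTimeout; }
}

public class DNConfExample {
    public static void main(String[] args) {
        System.out.println(new DNConfSketch(new Conf()).getSocketTimeout());
    }
}
```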


> Push up DataNode#conf to base class
> ---
>
> Key: HDFS-9895
> URL: https://issues.apache.org/jira/browse/HDFS-9895
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
> Attachments: HDFS-9895.000.patch, HDFS-9895.001.patch
>
>
> Since DataNode inherits ReconfigurableBase with Configured as base class 
> where configuration is maintained, DataNode#conf should be removed for the 
> purpose of brevity.





[jira] [Commented] (HDFS-9895) Push up DataNode#conf to base class

2016-04-23 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9895?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15255304#comment-15255304
 ] 

ASF GitHub Bot commented on HDFS-9895:
--

Github user arp7 commented on a diff in the pull request:

https://github.com/apache/hadoop/pull/92#discussion_r60831889
  
--- Diff: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
 ---
@@ -1239,9 +1236,8 @@ void startDataNode(Configuration conf,
 synchronized (this) {
   this.dataDirs = dataDirs;
 }
-this.conf = conf;
-this.dnConf = new DNConf(conf);
-checkSecureConfig(dnConf, conf, resources);
+this.dnConf = new DNConf(this);
--- End diff --

If we revert changes to DNConf we can just replace this with `this.dnConf = 
new DNConf(getConf())`.


> Push up DataNode#conf to base class
> ---
>
> Key: HDFS-9895
> URL: https://issues.apache.org/jira/browse/HDFS-9895
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
> Attachments: HDFS-9895.000.patch, HDFS-9895.001.patch
>
>
> Since DataNode inherits ReconfigurableBase with Configured as base class 
> where configuration is maintained, DataNode#conf should be removed for the 
> purpose of brevity.





[jira] [Commented] (HDFS-10224) Implement asynchronous rename for DistributedFileSystem

2016-04-24 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10224?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15255870#comment-15255870
 ] 

ASF GitHub Bot commented on HDFS-10224:
---

Github user xiaobingo closed the pull request at:

https://github.com/apache/hadoop/pull/93


> Implement asynchronous rename for DistributedFileSystem
> ---
>
> Key: HDFS-10224
> URL: https://issues.apache.org/jira/browse/HDFS-10224
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: fs, hdfs-client
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
> Attachments: HDFS-10224-HDFS-9924.000.patch, 
> HDFS-10224-HDFS-9924.001.patch, HDFS-10224-HDFS-9924.002.patch, 
> HDFS-10224-HDFS-9924.003.patch, HDFS-10224-HDFS-9924.004.patch, 
> HDFS-10224-HDFS-9924.005.patch, HDFS-10224-HDFS-9924.006.patch, 
> HDFS-10224-HDFS-9924.007.patch, HDFS-10224-HDFS-9924.008.patch, 
> HDFS-10224-and-HADOOP-12909.000.patch
>
>
> This is proposed to implement an asynchronous DistributedFileSystem based on 
> AsyncFileSystem APIs in HADOOP-12910. In addition, rename is implemented in 
> this JIRA.





[jira] [Commented] (HDFS-10382) In WebHDFS numeric usernames do not work with DataNode

2016-05-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15277390#comment-15277390
 ] 

ASF GitHub Bot commented on HDFS-10382:
---

GitHub user ramtinb opened a pull request:

https://github.com/apache/hadoop/pull/94

HDFS-10382 In WebHDFS numeric usernames do not work with DataNode

In WebHDFS, a cat operation issues two sequential HTTP requests:
the first is handled by the NN and the second by the DN.
Unlike the NN, the DN does not use the username pattern suggested by the 
configuration!
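The failure mode can be reproduced with the username pattern alone. The first regex below mirrors the shape of the historical default pattern (illustrative; check hdfs-default.xml for the authoritative value), and the second is a variant that also admits numeric usernames:

```java
import java.util.regex.Pattern;

public class UserNamePatternExample {
    // A default-style pattern: first character must be a letter or '_',
    // so a purely numeric username like "0123" is rejected.
    static final Pattern DEFAULT =
        Pattern.compile("^[A-Za-z_][A-Za-z0-9._-]*[$]?$");

    // A configured variant that also allows a leading digit
    // (illustrative value, not the HDFS default).
    static final Pattern NUMERIC_OK =
        Pattern.compile("^[A-Za-z0-9_][A-Za-z0-9._-]*[$]?$");

    public static void main(String[] args) {
        System.out.println(DEFAULT.matcher("0123").matches());    // false
        System.out.println(NUMERIC_OK.matcher("0123").matches()); // true
    }
}
```

If only the NN reads the configured pattern, the first request succeeds but the DN rejects the redirected second request, which is the bug described here.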

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ramtinb/hadoop feature/HDFS-10382

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/hadoop/pull/94.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #94


commit 6cbb8f60702b8c3211e52e71681fc8d981fe0525
Author: Ramtin Boustani 
Date:   2016-05-10T00:05:05Z

HDFS-10382 In WebHDFS numeric usernames do not work with DataNode




> In WebHDFS numeric usernames do not work with DataNode
> --
>
> Key: HDFS-10382
> URL: https://issues.apache.org/jira/browse/HDFS-10382
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Reporter: ramtin
>Assignee: ramtin
>
> Operations like {code:java}curl -i 
> -L "http://:/webhdfs/v1/?user.name=0123&op=OPEN"{code} that are 
> redirected to the DataNode fail because the DataNode does not read the 
> suggested username pattern from the configuration.




-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-8562) HDFS Performance is impacted by FileInputStream Finalizer

2015-11-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14991047#comment-14991047
 ] 

ASF GitHub Bot commented on HDFS-8562:
--

GitHub user hash-X opened a pull request:

https://github.com/apache/hadoop/pull/42

AltFileInputStream.java replace FileInputStream.java in apache/hadoop/HDFS



A brief description:
Long Stop-The-World GC pauses due to Final Reference processing are 
observed. Where do those Final References come from?

1 : `Finalizer`
2 : `FileInputStream`

How can this problem be solved?

A detailed description and a proposed solution are here:
https://issues.apache.org/jira/browse/HDFS-8562

FileInputStream has a finalize method, which can cause long GC pauses; we 
observed this in our tests with G1 as the collector. AltFileInputStream has 
no finalizer and is a new stream design usable on both Windows and 
non-Windows platforms.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/hash-X/hadoop AltFileInputStream

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/hadoop/pull/42.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #42


commit 8d64ef0feb8c8d8f5d5823ccaa428a1b58f6fd04
Author: zhangminglei 
Date:   2015-07-19T09:50:19Z

Add some code.

commit 3ccf4c70c40cf1ba921d76b949317b5fd6752e3c
Author: zhangminglei 
Date:   2015-07-19T09:56:49Z

I cannot replace FileInputStream with NewFileInputStream casually, because 
the change can damage other parts of HDFS. For example, when I tested my 
code using a single-node (pseudo-distributed) cluster, "Failed to load an 
FSImage file." happened when I started the HDFS daemons. At first I replaced 
many FileInputStream occurrences (as arguments or in constructors) with 
NewFileInputStream, but that seems wrong, so I have to take another approach.

commit 4da55130586ee9803a09162f7e2482b533aa12d9
Author: zhangminglei 
Date:   2015-07-19T10:30:11Z

Replacing FIS with NFIS (NewFileInputStream) wholesale is not recommended, I
think, although Alan Bateman suggested it in
https://bugs.openjdk.java.net/browse/JDK-8080225
Tests show it is not good; some problems can happen, and the tests take a
long time. Every time I change the source code I need to build the whole
project (maybe that is not strictly needed, but I install the new Hadoop
version on my computer, so a full build is needed). There should be a better
way to do this, I think.

commit 06b1509e0ad6dd74cf7c903e6ed6f2ec74d9b341
Author: zhangminglei 
Date:   2015-07-19T11:06:37Z

Replace FIS with NFIS: if the tests succeed, just do these first. It is not
as simple as that.

commit 2a79cd9c3b012556af7db5bdbf96663a1c30dcc4
Author: zhangminglei 
Date:   2015-07-20T02:36:55Z

Add a LOG info in DataXceiver for test.

commit 436c998ae21b3fe843b2d5ba6506e37ff2a34ab2
Author: zhangminglei 
Date:   2015-07-20T06:01:41Z

Rename NewFileInputStream to AltFileInputStream.

commit 14de2788ea2407c6ee252a69cfd3b4f6132c6faa
Author: zhangminglei 
Date:   2015-07-20T06:16:32Z

replace License header to Apache.

commit 387f7624a96716abef2062986f05523199e1927e
Author: zhangminglei 
Date:   2015-07-20T07:16:25Z

Remove open method in AltFileInputStream.java.

commit 52b029fac56bc054add1eac836e6cf71a0735304
Author: zhangminglei 
Date:   2015-07-20T10:14:09Z

A performance comparison between AltFileInputStream and FileInputStream is
not done in this commit. The important question, I think, is whether an
AltFileInputStream can be converted to a FileInputStream safely. I defined a
framework plan for it, but I don't know whether this is correct for the
problem; in the HDFS code, forced conversion to FileInputStream happens
everywhere.

commit e76d5eb4bf0145a4b28c581ecec07dcee7bae4e5
Author: zhangminglei 
Date:   2015-07-20T13:11:24Z

I think the forced conversion is safe, because AltFileInputStream is a
subclass of InputStream. In the previous version of HDFS, the conversions to
FileInputStream I see are safe because those methods return InputStream, the
superclass of FileInputStream. In my version of HDFS, InputStream is also
the superclass of AltFileInputStream, so an AltFileInputStream is an
InputStream just as a FileInputStream is. So I think it is safe. Does
everyone agree? If not, please give your opinion and tell me what's wrong
with that. Thank you.

commit 959e91c05c11cc1445513d36ec083707f0bba4e1
Author: zhangminglei 
Date:   2015-07-20T14:08:51Z

channel.close() is not needed, because closing the stream will in turn
cause the channel to be closed.

commit aa7f82efb29d6ff457dcf6e5b2a74af663682106
Author: zha

[jira] [Commented] (HDFS-9144) Refactor libhdfs into stateful/ephemeral objects

2015-11-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9144?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14994458#comment-14994458
 ] 

ASF GitHub Bot commented on HDFS-9144:
--

GitHub user bobhansen opened a pull request:

https://github.com/apache/hadoop/pull/43

HDFS-9144: libhdfs++ refactoring

Code changes for HDFS-9144 as described in the JIRA.  Removing some 
templates and traits and restructuring the code for more modularity.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/bobhansen/hadoop HDFS-9144-merge

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/hadoop/pull/43.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #43


commit 1fb1ea527c9b5321e6da6c2543859db2ec3eaf7c
Author: Bob Hansen 
Date:   2015-10-22T11:58:41Z

Refactored NameNodeConnection

commit c6cf5175b9c21561bdcbd22be27f50e22a1d3ebd
Author: Bob Hansen 
Date:   2015-10-22T12:01:36Z

Removed fs_ from InputStream

commit 8b8190d334224d8acec9a4bef97d5e0226c1045a
Author: Bob Hansen 
Date:   2015-10-22T13:05:53Z

Moved GetBlockInfo to NN connection

commit 108b54f3079ed21149a59b9222d6d9832ee05d79
Author: Bob Hansen 
Date:   2015-10-22T13:20:56Z

Moved GetBlockLocations to std::function

commit 6d112a17048bcec437701b422209641e56f6196e
Author: Bob Hansen 
Date:   2015-10-22T13:48:02Z

Added comments

commit e57b0ed02e29781f347499f0f3546659870aabab
Author: Bob Hansen 
Date:   2015-10-22T13:52:39Z

Stripped whitespace

commit c9c82125e8c0b742ee3a70d6fdbdedca180cdd4f
Author: Bob Hansen 
Date:   2015-10-27T16:07:33Z

Renamed NameNodeConnection to NameNodeOperations

commit 01499b6027ec771ebf04d4723899ee976b2a6044
Author: Bob Hansen 
Date:   2015-10-27T23:26:26Z

Renamed input_stream and asio_continuation

commit 02c67837fe832e45286a675f1a27fa29e1b80a9a
Author: Bob Hansen 
Date:   2015-10-27T23:30:44Z

Renamed CreatePipeline to Connect

commit 5d28d02e1752be74975647f8dc656776ab9e2cbf
Author: Bob Hansen 
Date:   2015-10-27T23:58:18Z

Rename async_connect to async_request

commit 9d98bf41091c923103cbeeadb5459c3119b50584
Author: Bob Hansen 
Date:   2015-10-28T13:01:38Z

Renamed read_some to read_packet

commit 6ced4a97e297ce0e833db8dbd4b38c91c966d71c
Author: Bob Hansen 
Date:   2015-10-28T13:15:50Z

Renamed async_request to async_request_block

commit f05a771e578969b9b281de4e0c97887f98b0f2cf
Author: Bob Hansen 
Date:   2015-10-28T13:19:09Z

Renamed BlockReader::request to request_block

commit fcf1585bf67f84ef8c0acc72660d2ad250005e3b
Author: Bob Hansen 
Date:   2015-10-28T19:12:39Z

Moved to file_info

commit a3fd975285b25a3eae448e5ac46d0118a14d6610
Author: Bob Hansen 
Date:   2015-10-28T19:16:20Z

Made file_info pointers const

commit 366f488b8e8364eba3f1966b931216d2bf404ae1
Author: Bob Hansen 
Date:   2015-10-28T21:37:46Z

Refactored DataNodeConnection, etc.

commit 418799feb8d12181d9e5bd6b6aa94333bb21e126
Author: Bob Hansen 
Date:   2015-10-29T13:53:46Z

Added shared_ptr to DN_Connection

commit f043e154a261e9ff64f1ead450e3a256ecd023a2
Author: Bob Hansen 
Date:   2015-10-29T15:31:28Z

Moved DNConnection into trait

commit aea859ff34a6768c7df29ec25f1abd2b92835b9e
Author: Bob Hansen 
Date:   2015-10-29T15:32:12Z

Trimmed whitespace

commit 55d7b5dcd92b0fd9d0011e97d8f47e78c3316205
Author: Bob Hansen 
Date:   2015-10-29T17:23:30Z

Re-enabled IS tests

commit 142efabbda38852b431d94096d6cef69f5c96393
Author: Bob Hansen 
Date:   2015-10-29T17:31:05Z

Cleaned up some tests

commit 4bc0f448fe52a762a242428a1331272c9fee3247
Author: Bob Hansen 
Date:   2015-10-29T21:53:57Z

Working on less templates

commit dd16d4fa9f08f55f9d4140219471f002eca5a8ed
Author: Bob Hansen 
Date:   2015-10-29T23:28:01Z

Compiles!

commit 2b14efa8277c66a3e9e0fb67af925501757d39f8
Author: Bob Hansen 
Date:   2015-10-30T20:46:52Z

Fixed DNconnection signature

commit 8d143e789a98431f8cd2cb08db37a0a05f4d9c77
Author: Bob Hansen 
Date:   2015-11-02T16:35:54Z

Fixed segfault in ReadData

commit b6f5454e626c1caa1b76398c9edf220fc1252be9
Author: Bob Hansen 
Date:   2015-11-02T18:36:15Z

Removed BlockReader callback templates

commit 3b5d712b454f5b817c22909bac2f3477a64624fe
Author: Bob Hansen 
Date:   2015-11-02T18:52:16Z

Removed last templates from BlockReader

commit d9b9241f12a957226df7ccacad07d8e1a0d98cca
Author: Bob Hansen 
Date:   2015-11-02T20:56:43Z

Moved entirely over to BlockReader w/out templates

commit 5de0bce35fb52b7a688d3fc4ad02748106fca38e
Author: Bob Hansen 
Date:   2015-11-02T21:06:25Z

Removed unnecessary impls

commit d5baa8784643bdfed454c8a4ba0edb102d73f40a
Author: Bob Hansen 
Date:   2015-11-03T15:00:50Z

Moved DN to its own file




> Refactor libhdfs int

[jira] [Commented] (HDFS-9443) Disabling HDFS client socket cache causes logging message printed to console for CLI commands.

2015-11-19 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15014719#comment-15014719
 ] 

ASF GitHub Bot commented on HDFS-9443:
--

GitHub user cnauroth opened a pull request:

https://github.com/apache/hadoop/pull/49

HDFS-9443. Disabling HDFS client socket cache causes logging message …

…printed to console for CLI commands.

This is a trivial patch to change the log statement to debug level.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/cnauroth/hadoop-1 HDFS-9443

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/hadoop/pull/49.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #49


commit 06fbad10968b112dbd2981ce80747db1c261eafa
Author: cnauroth 
Date:   2015-11-19T23:03:27Z

HDFS-9443. Disabling HDFS client socket cache causes logging message 
printed to console for CLI commands.




> Disabling HDFS client socket cache causes logging message printed to console 
> for CLI commands.
> --
>
> Key: HDFS-9443
> URL: https://issues.apache.org/jira/browse/HDFS-9443
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
>Priority: Trivial
>
> The HDFS client's socket cache can be disabled by setting 
> {{dfs.client.socketcache.capacity}} to {{0}}.  When this is done, the 
> {{PeerCache}} class logs an info-level message stating that the cache is 
> disabled.  This message is getting printed to the console for CLI commands, 
> which disrupts CLI output.  This issue proposes to downgrade to debug-level 
> logging for this message.
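The effect of the downgrade can be illustrated with plain log levels; a sketch using java.util.logging's numeric levels (FINE plays the role of debug here, the helper is hypothetical):

```java
import java.util.logging.Level;

public class DebugLevelExample {
    // A handler at INFO passes INFO-and-above messages through; a FINE
    // (debug) message is filtered out, so CLI console output stays clean.
    static boolean wouldPrint(Level handlerLevel, Level messageLevel) {
        return messageLevel.intValue() >= handlerLevel.intValue();
    }

    public static void main(String[] args) {
        System.out.println(wouldPrint(Level.INFO, Level.INFO)); // true
        System.out.println(wouldPrint(Level.INFO, Level.FINE)); // false
    }
}
```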





[jira] [Commented] (HDFS-9263) tests are using /test/build/data; breaking Jenkins

2015-11-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15021121#comment-15021121
 ] 

ASF GitHub Bot commented on HDFS-9263:
--

GitHub user steveloughran opened a pull request:

https://github.com/apache/hadoop/pull/53

HDFS-9263



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/steveloughran/hadoop 
jenkins/HDFS-9263_build_test_data

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/hadoop/pull/53.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #53


commit a2a7df4f521602fef3bd49c708b0fecb080c50a1
Author: Steve Loughran 
Date:   2015-10-19T14:06:08Z

HDFS-9263: set up GTU for single place for test dir/path setup

commit c2391515fc2eed5fa1249020585cccd94a6b523d
Author: Steve Loughran 
Date:   2015-10-19T14:07:19Z

HDFS-9263 factoring out refs to build test propertyes & paths in 
hadoop-common

commit 646811a57765532c6b2be701b67a964435b1ed50
Author: Steve Loughran 
Date:   2015-10-19T14:07:37Z

HDFS-9263 hdft-tests

commit cac0e86d6fa51ae97df9995ddea3f0df3ad6601d
Author: Steve Loughran 
Date:   2015-10-19T19:03:35Z

HDFS-9263 mapreduce tests

commit ce93191dbf3e6d631736561d795573369286de88
Author: Steve Loughran 
Date:   2015-10-19T19:03:53Z

HDFS-9263 hadoop archive test

commit 6eb8f0ecb9e220779740e263cdcd68b23ef9e681
Author: Steve Loughran 
Date:   2015-11-22T18:34:40Z

HDFS-9263 MiniDFS cluster to use hardcoded subdir under test dir.




> tests are using /test/build/data; breaking Jenkins
> --
>
> Key: HDFS-9263
> URL: https://issues.apache.org/jira/browse/HDFS-9263
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.0.0
> Environment: Jenkins
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Blocker
> Attachments: HDFS-9263-001.patch, HDFS-9263-002.patch
>
>
> Some of the HDFS tests are using the path {{test/build/data}} to store files, 
> leaking files which fail the new post-build RAT checks on Jenkins (and 
> dirtying all development systems with paths which {{mvn clean}} will miss).
> fix





[jira] [Commented] (HDFS-9459) hadoop-hdfs-native-client fails test build on Windows after transition to ctest.

2015-11-24 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9459?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15025232#comment-15025232
 ] 

ASF GitHub Bot commented on HDFS-9459:
--

GitHub user cnauroth opened a pull request:

https://github.com/apache/hadoop/pull/58

HDFS-9459. hadoop-hdfs-native-client fails test build on Windows afte…

…r transition to ctest.

I verified that the hadoop-hdfs-native-client tests build correctly and run 
successfully after applying this patch.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/cnauroth/hadoop-1 HDFS-9459

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/hadoop/pull/58.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #58


commit d583992956eb65fc4d1ba93c5c81da85c4592778
Author: cnauroth 
Date:   2015-11-24T19:54:51Z

HDFS-9459. hadoop-hdfs-native-client fails test build on Windows after 
transition to ctest.




> hadoop-hdfs-native-client fails test build on Windows after transition to 
> ctest.
> 
>
> Key: HDFS-9459
> URL: https://issues.apache.org/jira/browse/HDFS-9459
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: build, test
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
>Priority: Blocker
>
> HDFS-9369 transitioned to usage of {{ctest}} for running the HDFS native 
> tests.  This broke the {{mvn test}} build on Windows.





[jira] [Commented] (HDFS-9263) tests are using /test/build/data; breaking Jenkins

2015-11-24 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15025260#comment-15025260
 ] 

ASF GitHub Bot commented on HDFS-9263:
--

Github user cnauroth commented on a diff in the pull request:

https://github.com/apache/hadoop/pull/53#discussion_r45787996
  
--- Diff: 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/test/GenericTestUtils.java
 ---
@@ -59,6 +59,22 @@
 
   private static final AtomicInteger sequence = new AtomicInteger();
 
+  /**
+   * system property for test data: {@value}
+   */
+  public static final String SYSPROP_TEST_DATA_DIR = "test.build.data";
+
+  /**
+   * Default path for test data: {@value}
+   */
+  public static final String DEFAULT_TEST_DATA_DIR =
+  "target " + File.pathSeparator + "test" + File.pathSeparator + 
"data";
--- End diff --

I think you'll still need to incorporate the fixes I suggested earlier in a 
JIRA comment: remove the extra space character at the end of the `"target "` 
string literal, and switch from `File.pathSeparator` to `File.separator`.
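The distinction the review points at: File.separator joins components within one path, while File.pathSeparator separates entries in a path list (like a classpath). A quick demonstration:

```java
import java.io.File;

public class SeparatorExample {
    public static void main(String[] args) {
        // On Unix: separator is "/", pathSeparator is ":".
        // On Windows: separator is "\", pathSeparator is ";".
        String asPath = "target" + File.separator + "test"
                + File.separator + "data";               // one path
        String asList = "a.jar" + File.pathSeparator + "b.jar"; // a path list
        System.out.println(asPath);
        System.out.println(asList);
    }
}
```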


> tests are using /test/build/data; breaking Jenkins
> --
>
> Key: HDFS-9263
> URL: https://issues.apache.org/jira/browse/HDFS-9263
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.0.0
> Environment: Jenkins
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Blocker
> Attachments: HDFS-9263-001.patch, HDFS-9263-002.patch
>
>
> Some of the HDFS tests are using the path {{test/build/data}} to store files, 
> leaking files which fail the new post-build RAT checks on Jenkins (and 
> dirtying all development systems with paths which {{mvn clean}} will miss).
> fix





[jira] [Commented] (HDFS-9801) ReconfigurableBase should update the cached configuration

2016-02-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9801?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15145269#comment-15145269
 ] 

ASF GitHub Bot commented on HDFS-9801:
--

Github user arp7 closed the pull request at:

https://github.com/apache/hadoop/pull/73


> ReconfigurableBase should update the cached configuration
> -
>
> Key: HDFS-9801
> URL: https://issues.apache.org/jira/browse/HDFS-9801
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.8.0
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Fix For: 2.8.0
>
> Attachments: HADOOP-12476.02.patch, HADOOP-12746.01.patch, 
> HADOOP-12746.03.patch, HADOOP-12746.04.patch
>
>
> {{ReconfigurableBase#startReconfigurationTask}} does not update its cached 
> configuration after a property is reconfigured. This means that configuration 
> values queried via {{getConf().get(...)}} can be outdated. One way to fix it 
> is to have {{ReconfigurableBase#reconfigurePropertyImpl}} return the new 
> effective value of the config setting; the caller, i.e. ReconfigurableBase, 
> will then use it to update the configuration.
> See discussion on HDFS-7035 for more background.
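The pattern described above can be sketched with simplified stand-ins (illustrative names only, not the actual Hadoop `Configuration` or `ReconfigurableBase` classes):

```java
import java.util.HashMap;
import java.util.Map;

// Minimal stand-in for Hadoop's Configuration (illustrative only).
class Conf {
    private final Map<String, String> props = new HashMap<>();
    String get(String key) { return props.get(key); }
    void set(String key, String value) { props.put(key, value); }
}

// Sketch of the proposed pattern: the reconfigure hook returns the new
// effective value, and the base class writes it back into the cached Conf.
abstract class ReconfigurableBaseSketch {
    private final Conf conf = new Conf();
    Conf getConf() { return conf; }

    /** Subclasses apply the change and return the effective value. */
    protected abstract String reconfigurePropertyImpl(String key, String newVal);

    final void reconfigureProperty(String key, String newVal) {
        String effective = reconfigurePropertyImpl(key, newVal);
        // The fix: refresh the cached configuration so that subsequent
        // getConf().get(key) calls see the reconfigured value.
        getConf().set(key, effective);
    }
}

// Example subclass standing in for e.g. the DataNode.
class Node extends ReconfigurableBaseSketch {
    @Override
    protected String reconfigurePropertyImpl(String key, String newVal) {
        return newVal != null ? newVal : "default";
    }
}
```

With this shape, a reconfiguration followed by `getConf().get(key)` returns the new effective value rather than the stale cached one.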



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9839) Reduce verbosity of processReport logging

2016-02-20 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9839?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15155732#comment-15155732
 ] 

ASF GitHub Bot commented on HDFS-9839:
--

GitHub user arp7 opened a pull request:

https://github.com/apache/hadoop/pull/78

HDFS-9839. Reduce verbosity of processReport logging



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/arp7/hadoop HDFS-9839

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/hadoop/pull/78.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #78


commit 8b23b41ada23168fe2cb71f4a3b920c68e66ee74
Author: Arpit Agarwal 
Date:   2016-02-20T18:43:14Z

HDFS-9839. Reduce verbosity of processReport logging




> Reduce verbosity of processReport logging
> -
>
> Key: HDFS-9839
> URL: https://issues.apache.org/jira/browse/HDFS-9839
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.8.0
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>
> {{BlockManager#processReport}} logs one line for each invalidated block at 
> INFO. HDFS-7503 moved this logging outside the NameSystem write lock but we 
> still see the NameNode being slowed down when the number of block 
> invalidations is very large e.g. just after a large amount of data is deleted.
> {code}
>   for (Block b : invalidatedBlocks) {
> blockLog.info("BLOCK* processReport: {} on node {} size {} does not " 
> +
> "belong to any file", b, node, b.getNumBytes());
>   }
> {code}
> We can change this statement to DEBUG and just log the number of block 
> invalidations at INFO.
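The proposed change can be sketched as follows, using `java.util.logging` as a stand-in for the SLF4J `blockLog` (method and logger names are illustrative, not the actual BlockManager API):

```java
import java.util.Arrays;
import java.util.List;
import java.util.logging.Level;
import java.util.logging.Logger;

public class ProcessReportLogging {
    private static final Logger BLOCK_LOG = Logger.getLogger("BlockStateChange");

    // Per-block messages drop to debug level (FINE here), guarded so the
    // loop's string building is skipped when debug is off; only a one-line
    // summary is emitted at INFO. Returns the summary for illustration.
    static String logInvalidatedBlocks(List<String> invalidatedBlocks, String node) {
        if (BLOCK_LOG.isLoggable(Level.FINE)) {
            for (String b : invalidatedBlocks) {
                BLOCK_LOG.fine("BLOCK* processReport: " + b + " on node " + node
                    + " does not belong to any file");
            }
        }
        String summary = "processReport: invalidated " + invalidatedBlocks.size()
            + " block(s) on node " + node;
        BLOCK_LOG.info(summary);
        return summary;
    }

    public static void main(String[] args) {
        logInvalidatedBlocks(Arrays.asList("blk_1", "blk_2"), "dn-1:9866");
    }
}
```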



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9839) Reduce verbosity of processReport logging

2016-02-20 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9839?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15155925#comment-15155925
 ] 

ASF GitHub Bot commented on HDFS-9839:
--

Github user asfgit closed the pull request at:

https://github.com/apache/hadoop/pull/78


> Reduce verbosity of processReport logging
> -
>
> Key: HDFS-9839
> URL: https://issues.apache.org/jira/browse/HDFS-9839
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.8.0
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Fix For: 2.8.0
>
> Attachments: HDFS-9839.01.patch
>
>
> {{BlockManager#processReport}} logs one line for each invalidated block at 
> INFO. HDFS-7503 moved this logging outside the NameSystem write lock but we 
> still see the NameNode being slowed down when the number of block 
> invalidations is very large e.g. just after a large amount of data is deleted.
> {code}
>   for (Block b : invalidatedBlocks) {
> blockLog.info("BLOCK* processReport: {} on node {} size {} does not " 
> +
> "belong to any file", b, node, b.getNumBytes());
>   }
> {code}
> We can change this statement to DEBUG and just log the number of block 
> invalidations at INFO.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6914) Resolve huge memory consumption Issue with OIV processing PB-based fsimages

2014-10-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14166460#comment-14166460
 ] 

ASF GitHub Bot commented on HDFS-6914:
--

Github user haoch closed the pull request at:

https://github.com/apache/hadoop-common/pull/25


> Resolve huge memory consumption Issue with OIV processing PB-based fsimages
> ---
>
> Key: HDFS-6914
> URL: https://issues.apache.org/jira/browse/HDFS-6914
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.4.1
>Reporter: Hao Chen
>  Labels: hdfs
> Attachments: HDFS-6914.patch, HDFS-6914.v2.patch
>
>
> To better manage and support many large Hadoop clusters in production, we 
> need to automatically export the fsimage to delimited text files in LSR 
> style and then analyse them with Hive or Pig, or build system metrics for 
> real-time analysis. 
> However, due to the internal layout changes introduced by the protobuf-based 
> fsimage, the OIV processing program consumes an excessive amount of memory. 
> For example, exporting an 8GB fsimage took about 85GB of memory, which is 
> unreasonable and badly impacted the performance of other services on the 
> same server.
> To resolve this problem, I submit this patch, which reduces the memory 
> consumption of OIV LSR processing by 50%.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6914) Resolve huge memory consumption Issue with OIV processing PB-based fsimages

2014-09-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14148963#comment-14148963
 ] 

ASF GitHub Bot commented on HDFS-6914:
--

GitHub user haoch opened a pull request:

https://github.com/apache/hadoop-common/pull/25

HDFS-6914. Resolve huge memory consumption Issue with OIV processing 
PB-based fsimages

To better manage and support many large Hadoop clusters in production, we 
need to automatically export the fsimage to delimited text files in LSR 
style and then analyse them with Hive or Pig, or build system metrics for 
real-time analysis. 

However, due to the internal layout changes introduced by the 
protobuf-based fsimage, the OIV processing program consumes an excessive 
amount of memory. For example, exporting an 8GB fsimage took about 85GB of 
memory, which is unreasonable and badly impacted the performance of other 
services on the same server.

To resolve this problem, I submit this patch, which reduces the memory 
consumption of OIV LSR processing by 50%.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/haoch/hadoop-common HDFS-6914-branch-2.4.1

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/hadoop-common/pull/25.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #25


commit 340e333f627b3d64c9463819da04f97a3deecb6b
Author: Chen,Hao 
Date:   2014-09-26T09:23:24Z

HDFS-6914. Resolve huge memory consumption Issue with OIV processing 
PB-based fsimages




> Resolve huge memory consumption Issue with OIV processing PB-based fsimages
> ---
>
> Key: HDFS-6914
> URL: https://issues.apache.org/jira/browse/HDFS-6914
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.4.1
>Reporter: Hao Chen
>  Labels: hdfs
> Attachments: HDFS-6914.patch, HDFS-6914.v2.patch
>
>
> To better manage and support many large Hadoop clusters in production, we 
> need to automatically export the fsimage to delimited text files in LSR 
> style and then analyse them with Hive or Pig, or build system metrics for 
> real-time analysis. 
> However, due to the internal layout changes introduced by the protobuf-based 
> fsimage, the OIV processing program consumes an excessive amount of memory. 
> For example, exporting an 8GB fsimage took about 85GB of memory, which is 
> unreasonable and badly impacted the performance of other services on the 
> same server.
> To resolve this problem, I submit this patch, which reduces the memory 
> consumption of OIV LSR processing by 50%.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Work logged] (HDDS-1649) On installSnapshot notification from OM leader, download checkpoint and reload OM state

2019-07-18 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1649?focusedWorklogId=279376&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-279376
 ]

ASF GitHub Bot logged work on HDDS-1649:


Author: ASF GitHub Bot
Created on: 18/Jul/19 22:09
Start Date: 18/Jul/19 22:09
Worklog Time Spent: 10m 
  Work Description: hanishakoneru commented on pull request #948: 
HDDS-1649. On installSnapshot notification from OM leader, download checkpoint 
and reload OM state
URL: https://github.com/apache/hadoop/pull/948#discussion_r305134924
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneManager.java
 ##
 @@ -3150,6 +3169,195 @@ public boolean setAcl(OzoneObj obj, List<OzoneAcl> acls) throws IOException {
 }
   }
 
+  /**
+   * Download and install latest checkpoint from leader OM.
+   * If the download checkpoints snapshot index is greater than this OM's
+   * last applied transaction index, then re-initialize the OM state via this
+   * checkpoint. Before re-initializing OM state, the OM Ratis server should
+   * be stopped so that no new transactions can be applied.
+   * @param leaderId peerNodeID of the leader OM
+   * @return If checkpoint is installed, return the corresponding termIndex.
+   * Otherwise, return null.
+   */
+  public TermIndex installSnapshot(String leaderId) {
+if (omSnapshotProvider == null) {
+  LOG.error("OM Snapshot Provider is not configured as there are no peer " 
+
+  "nodes.");
+  return null;
+}
+
+DBCheckpoint omDBcheckpoint = getDBCheckpointFromLeader(leaderId);
+
+// Check if current ratis log index is smaller than the downloaded
+// snapshot index. If yes, proceed by stopping the ratis server so that
+// the OM state can be re-initialized. If no, then do not proceed with
+// installSnapshot.
+long lastAppliedIndex = omRatisServer.getStateMachineLastAppliedIndex();
+long checkpointSnapshotIndex = omDBcheckpoint.getRatisSnapshotIndex();
+if (checkpointSnapshotIndex <= lastAppliedIndex) {
+  LOG.error("Failed to install checkpoint from OM leader: {}. The last " +
+  "applied index: {} is greater than or equal to the checkpoint's " +
+  "snapshot index: {}", leaderId, lastAppliedIndex,
+  checkpointSnapshotIndex);
+  return null;
+}
+
+// Pause the State Machine so that no new transactions can be applied.
+// This action also clears the OM Double Buffer so that if there are any
+// pending transactions in the buffer, they are discarded.
+// TODO: The Ratis server should also be paused here. This is required
+//  because a leader election might happen while the snapshot
+//  installation is in progress and the new leader might start sending
+//  append log entries to the ratis server.
+omRatisServer.getOmStateMachine().pause();
+
+try {
+  replaceOMDBWithCheckpoint(lastAppliedIndex, omDBcheckpoint);
+} catch (Exception e) {
+  LOG.error("OM DB checkpoint replacement with new downloaded checkpoint " 
+
+  "failed.", e);
+  return null;
+}
+
+// Reload the OM DB store with the new checkpoint.
+// Restart (unpause) the state machine and update its last applied index
+// to the installed checkpoint's snapshot index.
+try {
+  reloadOMState();
+  omRatisServer.getOmStateMachine().unpause(checkpointSnapshotIndex);
+} catch (IOException e) {
+  LOG.error("Failed to reload OM state with new DB checkpoint.", e);
+  return null;
+}
+
+// TODO: We should only return the snapshotIndex to the leader.
+//  Should be fixed after RATIS-586
+TermIndex newTermIndex = TermIndex.newTermIndex(0,
+checkpointSnapshotIndex);
+
+return newTermIndex;
+  }
+
+  /**
+   * Download the latest OM DB checkpoint from the leader OM.
+   * @param leaderId OMNodeID of the leader OM node.
+   * @return latest DB checkpoint from leader OM.
+   */
+  private DBCheckpoint getDBCheckpointFromLeader(String leaderId) {
+LOG.info("Downloading checkpoint from leader OM {} and reloading state " +
+"from the checkpoint.", leaderId);
+
+try {
+  return omSnapshotProvider.getOzoneManagerDBSnapshot(leaderId);
+} catch (IOException e) {
+  LOG.error("Failed to download checkpoint from OM leader {}", leaderId, 
e);
+}
+return null;
+  }
+
+  /**
+   * Replace the current OM DB with the new DB checkpoint.
+   * @param lastAppliedIndex the last applied index in the current OM DB.
+   * @param omDBcheckpoint the new DB checkpoint
+   * @throws Exception
+   */
+  void replaceOMDBWithCheckpoint(long lastAppliedIndex,
+  DBCheckpoint omDBcheckpoint) throws Exception {
+// Stop the DB first
+DBStore store = metadataManager.getStore();
+store.close();
+
+// Take a backup of the current DB
+File db = store.

[jira] [Work logged] (HDDS-1649) On installSnapshot notification from OM leader, download checkpoint and reload OM state

2019-07-18 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1649?focusedWorklogId=279378&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-279378
 ]

ASF GitHub Bot logged work on HDDS-1649:


Author: ASF GitHub Bot
Created on: 18/Jul/19 22:10
Start Date: 18/Jul/19 22:10
Worklog Time Spent: 10m 
  Work Description: hanishakoneru commented on pull request #948: 
HDDS-1649. On installSnapshot notification from OM leader, download checkpoint 
and reload OM state
URL: https://github.com/apache/hadoop/pull/948#discussion_r305135149
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneManager.java
 ##
 @@ -3150,6 +3169,195 @@ public boolean setAcl(OzoneObj obj, List<OzoneAcl> acls) throws IOException {
 }
   }
 
+  /**
+   * Download and install latest checkpoint from leader OM.
+   * If the download checkpoints snapshot index is greater than this OM's
+   * last applied transaction index, then re-initialize the OM state via this
+   * checkpoint. Before re-initializing OM state, the OM Ratis server should
+   * be stopped so that no new transactions can be applied.
+   * @param leaderId peerNodeID of the leader OM
+   * @return If checkpoint is installed, return the corresponding termIndex.
+   * Otherwise, return null.
+   */
+  public TermIndex installSnapshot(String leaderId) {
+if (omSnapshotProvider == null) {
+  LOG.error("OM Snapshot Provider is not configured as there are no peer " 
+
+  "nodes.");
+  return null;
+}
+
+DBCheckpoint omDBcheckpoint = getDBCheckpointFromLeader(leaderId);
+
+// Check if current ratis log index is smaller than the downloaded
+// snapshot index. If yes, proceed by stopping the ratis server so that
+// the OM state can be re-initialized. If no, then do not proceed with
+// installSnapshot.
+long lastAppliedIndex = omRatisServer.getStateMachineLastAppliedIndex();
+long checkpointSnapshotIndex = omDBcheckpoint.getRatisSnapshotIndex();
+if (checkpointSnapshotIndex <= lastAppliedIndex) {
+  LOG.error("Failed to install checkpoint from OM leader: {}. The last " +
+  "applied index: {} is greater than or equal to the checkpoint's " +
+  "snapshot index: {}", leaderId, lastAppliedIndex,
 
 Review comment:
   Done.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 279378)
Time Spent: 10.5h  (was: 10h 20m)

> On installSnapshot notification from OM leader, download checkpoint and 
> reload OM state
> ---
>
> Key: HDDS-1649
> URL: https://issues.apache.org/jira/browse/HDDS-1649
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10.5h
>  Remaining Estimate: 0h
>
> Installing a DB checkpoint on the OM involves following steps:
>  1. When an OM follower receives installSnapshot notification from OM leader, 
> it should initiate a new checkpoint on the OM leader and download that 
> checkpoint through Http. 
>  2. After downloading the checkpoint, the StateMachine must be paused so that 
> the old OM DB can be replaced with the new downloaded checkpoint. 
>  3. The OM should be reloaded with the new state. All the services having a 
> dependency on the OM DB (such as MetadataManager, KeyManager etc.) must be 
> re-initialized/restarted. 
>  4. Once the OM is ready with the new state, the state machine must be 
> unpaused to resume participating in the Ratis ring.
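The four steps above can be modeled with a small sketch (all class and method names are illustrative stand-ins, not the actual Ozone APIs):

```java
// Models the stale-checkpoint check and the pause/replace/reload/unpause
// sequence for installing a downloaded checkpoint. Names are illustrative.
public class InstallSnapshotModel {
    private boolean paused = false;
    private long lastAppliedIndex;

    public InstallSnapshotModel(long lastAppliedIndex) {
        this.lastAppliedIndex = lastAppliedIndex;
    }

    /** Returns the new applied index, or -1 if the checkpoint is stale. */
    public long installCheckpoint(long checkpointIndex) {
        // Only install a checkpoint strictly ahead of the local state.
        if (checkpointIndex <= lastAppliedIndex) {
            return -1;
        }
        paused = true;   // step 2: pause the state machine
        // step 3: the old OM DB would be replaced and state reloaded here
        lastAppliedIndex = checkpointIndex;
        paused = false;  // step 4: unpause and resume the Ratis ring
        return lastAppliedIndex;
    }

    public boolean isPaused() { return paused; }
}
```

A stale checkpoint (index at or below the last applied index) is rejected without ever pausing the state machine.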



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1649) On installSnapshot notification from OM leader, download checkpoint and reload OM state

2019-07-18 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1649?focusedWorklogId=279379&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-279379
 ]

ASF GitHub Bot logged work on HDDS-1649:


Author: ASF GitHub Bot
Created on: 18/Jul/19 22:17
Start Date: 18/Jul/19 22:17
Worklog Time Spent: 10m 
  Work Description: hanishakoneru commented on pull request #948: 
HDDS-1649. On installSnapshot notification from OM leader, download checkpoint 
and reload OM state
URL: https://github.com/apache/hadoop/pull/948#discussion_r305137102
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneManager.java
 ##
 @@ -3150,6 +3169,195 @@ public boolean setAcl(OzoneObj obj, List<OzoneAcl> acls) throws IOException {
 }
   }
 
+  /**
+   * Download and install latest checkpoint from leader OM.
+   * If the download checkpoints snapshot index is greater than this OM's
+   * last applied transaction index, then re-initialize the OM state via this
+   * checkpoint. Before re-initializing OM state, the OM Ratis server should
+   * be stopped so that no new transactions can be applied.
+   * @param leaderId peerNodeID of the leader OM
+   * @return If checkpoint is installed, return the corresponding termIndex.
+   * Otherwise, return null.
+   */
+  public TermIndex installSnapshot(String leaderId) {
+if (omSnapshotProvider == null) {
+  LOG.error("OM Snapshot Provider is not configured as there are no peer " 
+
+  "nodes.");
+  return null;
+}
+
+DBCheckpoint omDBcheckpoint = getDBCheckpointFromLeader(leaderId);
+
+// Check if current ratis log index is smaller than the downloaded
+// snapshot index. If yes, proceed by stopping the ratis server so that
+// the OM state can be re-initialized. If no, then do not proceed with
+// installSnapshot.
+long lastAppliedIndex = omRatisServer.getStateMachineLastAppliedIndex();
+long checkpointSnapshotIndex = omDBcheckpoint.getRatisSnapshotIndex();
+if (checkpointSnapshotIndex <= lastAppliedIndex) {
+  LOG.error("Failed to install checkpoint from OM leader: {}. The last " +
+  "applied index: {} is greater than or equal to the checkpoint's " +
+  "snapshot index: {}", leaderId, lastAppliedIndex,
+  checkpointSnapshotIndex);
+  return null;
+}
+
+// Pause the State Machine so that no new transactions can be applied.
+// This action also clears the OM Double Buffer so that if there are any
+// pending transactions in the buffer, they are discarded.
+// TODO: The Ratis server should also be paused here. This is required
+//  because a leader election might happen while the snapshot
+//  installation is in progress and the new leader might start sending
+//  append log entries to the ratis server.
+omRatisServer.getOmStateMachine().pause();
+
+try {
+  replaceOMDBWithCheckpoint(lastAppliedIndex, omDBcheckpoint);
+} catch (Exception e) {
+  LOG.error("OM DB checkpoint replacement with new downloaded checkpoint " 
+
+  "failed.", e);
+  return null;
+}
+
+// Reload the OM DB store with the new checkpoint.
+// Restart (unpause) the state machine and update its last applied index
+// to the installed checkpoint's snapshot index.
+try {
+  reloadOMState();
+  omRatisServer.getOmStateMachine().unpause(checkpointSnapshotIndex);
+} catch (IOException e) {
+  LOG.error("Failed to reload OM state with new DB checkpoint.", e);
+  return null;
+}
+
+// TODO: We should only return the snapshotIndex to the leader.
+//  Should be fixed after RATIS-586
+TermIndex newTermIndex = TermIndex.newTermIndex(0,
+checkpointSnapshotIndex);
+
+return newTermIndex;
+  }
+
+  /**
+   * Download the latest OM DB checkpoint from the leader OM.
+   * @param leaderId OMNodeID of the leader OM node.
+   * @return latest DB checkpoint from leader OM.
+   */
+  private DBCheckpoint getDBCheckpointFromLeader(String leaderId) {
+LOG.info("Downloading checkpoint from leader OM {} and reloading state " +
+"from the checkpoint.", leaderId);
+
+try {
+  return omSnapshotProvider.getOzoneManagerDBSnapshot(leaderId);
+} catch (IOException e) {
+  LOG.error("Failed to download checkpoint from OM leader {}", leaderId, 
e);
+}
+return null;
+  }
+
+  /**
+   * Replace the current OM DB with the new DB checkpoint.
+   * @param lastAppliedIndex the last applied index in the current OM DB.
+   * @param omDBcheckpoint the new DB checkpoint
+   * @throws Exception
+   */
+  void replaceOMDBWithCheckpoint(long lastAppliedIndex,
+  DBCheckpoint omDBcheckpoint) throws Exception {
+// Stop the DB first
+DBStore store = metadataManager.getStore();
+store.close();
+
+// Take a backup of the current DB
+File db = store.

[jira] [Work logged] (HDDS-1805) Implement S3 Initiate MPU request to use Cache and DoubleBuffer

2019-07-18 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1805?focusedWorklogId=279382&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-279382
 ]

ASF GitHub Bot logged work on HDDS-1805:


Author: ASF GitHub Bot
Created on: 18/Jul/19 22:46
Start Date: 18/Jul/19 22:46
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1108: HDDS-1805. 
Implement S3 Initiate MPU request to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/1108#issuecomment-513019346
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 39 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 11 new or modified test files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 16 | Maven dependency ordering for branch |
   | +1 | mvninstall | 483 | trunk passed |
   | +1 | compile | 269 | trunk passed |
   | +1 | checkstyle | 79 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 863 | branch has no errors when building and testing our client artifacts. |
   | +1 | javadoc | 166 | trunk passed |
   | 0 | spotbugs | 321 | Used deprecated FindBugs config; considering switching to SpotBugs. |
   | +1 | findbugs | 503 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 26 | Maven dependency ordering for patch |
   | +1 | mvninstall | 451 | the patch passed |
   | +1 | compile | 269 | the patch passed |
   | +1 | cc | 269 | the patch passed |
   | +1 | javac | 269 | the patch passed |
   | +1 | checkstyle | 87 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 692 | patch has no errors when building and testing our client artifacts. |
   | +1 | javadoc | 166 | the patch passed |
   | +1 | findbugs | 527 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 294 | hadoop-hdds in the patch passed. |
   | -1 | unit | 1616 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 56 | The patch does not generate ASF License warnings. |
   | | | 6791 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.container.server.TestSecureContainerServer |
   |   | hadoop.ozone.client.rpc.TestOzoneClientRetriesOnException |
   |   | hadoop.ozone.client.rpc.TestFailureHandlingByClient |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestCloseContainerHandlingByClient |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.hdds.scm.pipeline.TestRatisPipelineProvider |
   |   | hadoop.ozone.client.rpc.TestBlockOutputStreamWithFailures |
   |   | hadoop.ozone.container.ozoneimpl.TestSecureOzoneContainer |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=18.09.8 Server=18.09.8 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1108/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1108 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle cc |
   | uname | Linux cb07dd5ad9f5 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / d5ef38b |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1108/3/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1108/3/testReport/ |
   | Max. process+thread count | 5157 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/common hadoop-ozone/ozone-manager U: 
hadoop-ozone |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1108/3/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 279382)
Time Spent: 40m  (was: 0.5h)

> Implement S3 Initiat

[jira] [Work logged] (HDDS-1649) On installSnapshot notification from OM leader, download checkpoint and reload OM state

2019-07-18 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1649?focusedWorklogId=279396&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-279396
 ]

ASF GitHub Bot logged work on HDDS-1649:


Author: ASF GitHub Bot
Created on: 18/Jul/19 23:35
Start Date: 18/Jul/19 23:35
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #948: HDDS-1649. On 
installSnapshot notification from OM leader, download checkpoint and reload OM 
state
URL: https://github.com/apache/hadoop/pull/948#issuecomment-513032608
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 70 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 4 new or modified test files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 22 | Maven dependency ordering for branch |
   | +1 | mvninstall | 487 | trunk passed |
   | +1 | compile | 305 | trunk passed |
   | +1 | checkstyle | 74 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 965 | branch has no errors when building and testing our client artifacts. |
   | +1 | javadoc | 164 | trunk passed |
   | 0 | spotbugs | 349 | Used deprecated FindBugs config; considering switching to SpotBugs. |
   | +1 | findbugs | 553 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 32 | Maven dependency ordering for patch |
   | -1 | mvninstall | 160 | hadoop-ozone in the patch failed. |
   | -1 | compile | 62 | hadoop-ozone in the patch failed. |
   | -1 | cc | 62 | hadoop-ozone in the patch failed. |
   | -1 | javac | 62 | hadoop-ozone in the patch failed. |
   | +1 | checkstyle | 84 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 2 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 756 | patch has no errors when building and testing our client artifacts. |
   | +1 | javadoc | 180 | the patch passed |
   | -1 | findbugs | 108 | hadoop-ozone in the patch failed. |
   ||| _ Other Tests _ |
   | +1 | unit | 339 | hadoop-hdds in the patch passed. |
   | -1 | unit | 118 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 38 | The patch does not generate ASF License warnings. |
   | | | 5149 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=18.09.7 Server=18.09.7 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-948/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/948 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle xml cc |
   | uname | Linux 5fd557cb2f6c 4.15.0-52-generic #56-Ubuntu SMP Tue Jun 4 
22:49:08 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / d5ef38b |
   | Default Java | 1.8.0_212 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-948/3/artifact/out/patch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-948/3/artifact/out/patch-compile-hadoop-ozone.txt
 |
   | cc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-948/3/artifact/out/patch-compile-hadoop-ozone.txt
 |
   | javac | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-948/3/artifact/out/patch-compile-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-948/3/artifact/out/patch-findbugs-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-948/3/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-948/3/testReport/ |
   | Max. process+thread count | 411 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-ozone/common 
hadoop-ozone/integration-test hadoop-ozone/ozone-manager U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-948/3/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 279396)
Time S

[jira] [Work logged] (HDDS-1649) On installSnapshot notification from OM leader, download checkpoint and reload OM state

2019-07-18 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1649?focusedWorklogId=279404&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-279404
 ]

ASF GitHub Bot logged work on HDDS-1649:


Author: ASF GitHub Bot
Created on: 19/Jul/19 00:29
Start Date: 19/Jul/19 00:29
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #948: HDDS-1649. On 
installSnapshot notification from OM leader, download checkpoint and reload OM 
state
URL: https://github.com/apache/hadoop/pull/948#issuecomment-513043559
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 98 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 4 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 24 | Maven dependency ordering for branch |
   | +1 | mvninstall | 527 | trunk passed |
   | +1 | compile | 269 | trunk passed |
   | +1 | checkstyle | 76 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 959 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 182 | trunk passed |
   | 0 | spotbugs | 375 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 606 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 33 | Maven dependency ordering for patch |
   | +1 | mvninstall | 442 | the patch passed |
   | +1 | compile | 279 | the patch passed |
   | +1 | cc | 279 | the patch passed |
   | +1 | javac | 279 | the patch passed |
   | +1 | checkstyle | 85 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 1 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 764 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 169 | the patch passed |
   | +1 | findbugs | 537 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 336 | hadoop-hdds in the patch passed. |
   | -1 | unit | 2175 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 55 | The patch does not generate ASF License warnings. |
   | | | 7789 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.ozone.client.rpc.TestMultiBlockWritesWithDnFailures |
   |   | hadoop.ozone.client.rpc.TestOzoneClientRetriesOnException |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.om.snapshot.TestOzoneManagerSnapshotProvider |
   |   | hadoop.ozone.client.rpc.TestFailureHandlingByClient |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestBlockOutputStreamWithFailures |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.ozone.client.rpc.TestCloseContainerHandlingByClient |
   |   | hadoop.ozone.TestSecureOzoneCluster |
   |   | hadoop.ozone.container.server.TestSecureContainerServer |
   |   | hadoop.ozone.client.rpc.Test2WayCommitInRatis |
   |   | hadoop.ozone.container.ozoneimpl.TestSecureOzoneContainer |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.hdds.scm.pipeline.TestRatisPipelineCreateAndDestory |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=18.09.7 Server=18.09.7 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-948/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/948 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle xml cc |
   | uname | Linux f2938c9c233e 4.15.0-52-generic #56-Ubuntu SMP Tue Jun 4 
22:49:08 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / d545f9c |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-948/4/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-948/4/testReport/ |
   | Max. process+thread count | 4968 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-ozone/common 
hadoop-ozone/integration-test hadoop-ozone/ozone-manager U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-948/4/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   
   
 


[jira] [Work logged] (HDDS-1649) On installSnapshot notification from OM leader, download checkpoint and reload OM state

2019-07-18 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1649?focusedWorklogId=279426&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-279426
 ]

ASF GitHub Bot logged work on HDDS-1649:


Author: ASF GitHub Bot
Created on: 19/Jul/19 01:24
Start Date: 19/Jul/19 01:24
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #948: HDDS-1649. On 
installSnapshot notification from OM leader, download checkpoint and reload OM 
state
URL: https://github.com/apache/hadoop/pull/948#issuecomment-513052953
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 38 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 4 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 66 | Maven dependency ordering for branch |
   | +1 | mvninstall | 482 | trunk passed |
   | +1 | compile | 265 | trunk passed |
   | +1 | checkstyle | 77 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 865 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 187 | trunk passed |
   | 0 | spotbugs | 316 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 509 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 74 | Maven dependency ordering for patch |
   | +1 | mvninstall | 435 | the patch passed |
   | +1 | compile | 271 | the patch passed |
   | +1 | cc | 271 | the patch passed |
   | +1 | javac | 271 | the patch passed |
   | +1 | checkstyle | 73 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 1 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 648 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 159 | the patch passed |
   | +1 | findbugs | 536 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 277 | hadoop-hdds in the patch passed. |
   | -1 | unit | 1639 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 50 | The patch does not generate ASF License warnings. |
   | | | 6814 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.TestSecureOzoneCluster |
   |   | hadoop.ozone.om.snapshot.TestOzoneManagerSnapshotProvider |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.ozone.client.rpc.TestCloseContainerHandlingByClient |
   |   | hadoop.ozone.container.ozoneimpl.TestSecureOzoneContainer |
   |   | hadoop.ozone.client.rpc.TestFailureHandlingByClient |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestMultiBlockWritesWithDnFailures |
   |   | hadoop.ozone.container.server.TestSecureContainerServer |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=18.09.8 Server=18.09.8 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-948/5/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/948 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle xml cc |
   | uname | Linux 0196a0d19d2c 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / d545f9c |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-948/5/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-948/5/testReport/ |
   | Max. process+thread count | 5388 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-ozone/common 
hadoop-ozone/integration-test hadoop-ozone/ozone-manager U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-948/5/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   
   
 



Issue Time Tracking

[jira] [Work logged] (HDDS-1649) On installSnapshot notification from OM leader, download checkpoint and reload OM state

2019-07-18 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1649?focusedWorklogId=279434&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-279434
 ]

ASF GitHub Bot logged work on HDDS-1649:


Author: ASF GitHub Bot
Created on: 19/Jul/19 02:16
Start Date: 19/Jul/19 02:16
Worklog Time Spent: 10m 
  Work Description: arp7 commented on issue #948: HDDS-1649. On 
installSnapshot notification from OM leader, download checkpoint and reload OM 
state
URL: https://github.com/apache/hadoop/pull/948#issuecomment-513063053
 
 
   /retest
 



Issue Time Tracking
---

Worklog Id: (was: 279434)
Time Spent: 11h 20m  (was: 11h 10m)

> On installSnapshot notification from OM leader, download checkpoint and 
> reload OM state
> ---
>
> Key: HDDS-1649
> URL: https://issues.apache.org/jira/browse/HDDS-1649
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 11h 20m
>  Remaining Estimate: 0h
>
> Installing a DB checkpoint on the OM involves the following steps:
>  1. When an OM follower receives an installSnapshot notification from the OM 
> leader, it should initiate a new checkpoint on the OM leader and download that 
> checkpoint over HTTP. 
>  2. After downloading the checkpoint, the StateMachine must be paused so that 
> the old OM DB can be replaced with the newly downloaded checkpoint. 
>  3. The OM should be reloaded with the new state. All the services having a 
> dependency on the OM DB (such as MetadataManager, KeyManager, etc.) must be 
> re-initialized/restarted. 
>  4. Once the OM is ready with the new state, the state machine must be 
> unpaused to resume participating in the Ratis ring.
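The four steps above can be sketched as a tiny state machine. Everything below, class and method names included, is an illustrative Java sketch, not the actual OzoneManager/OzoneManagerStateMachine implementation:

```java
// Hypothetical sketch of the checkpoint-install flow described above.
// Method names (downloadCheckpoint, installCheckpoint, reloadServices)
// are illustrative only, not the real OzoneManager API.
public class InstallSnapshotSketch {
  enum State { RUNNING, PAUSED }

  private State state = State.RUNNING;
  private String dbLocation;

  InstallSnapshotSketch(String dbLocation) {
    this.dbLocation = dbLocation;
  }

  /** Step 1: the follower downloads a fresh checkpoint from the leader over HTTP. */
  String downloadCheckpoint(String leaderHttpAddr) {
    // The real flow performs an HTTP transfer from leaderHttpAddr; this
    // sketch just returns a local path standing in for the downloaded checkpoint.
    return "/tmp/om-checkpoint";
  }

  /** Steps 2-4: pause the state machine, swap the DB, reload services, unpause. */
  void installCheckpoint(String checkpointLocation) {
    state = State.PAUSED;            // Step 2: pause the StateMachine
    dbLocation = checkpointLocation; // Step 2: replace the old OM DB
    reloadServices();                // Step 3: re-init MetadataManager, KeyManager, ...
    state = State.RUNNING;           // Step 4: unpause, rejoin the Ratis ring
  }

  private void reloadServices() { /* re-initialize services bound to the OM DB */ }

  State getState() { return state; }
  String getDbLocation() { return dbLocation; }
}
```

The key invariant is that the DB swap and service reload happen only while the state machine is paused, so no Ratis transactions are applied against a half-replaced DB.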



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1649) On installSnapshot notification from OM leader, download checkpoint and reload OM state

2019-07-18 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1649?focusedWorklogId=279439&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-279439
 ]

ASF GitHub Bot logged work on HDDS-1649:


Author: ASF GitHub Bot
Created on: 19/Jul/19 02:44
Start Date: 19/Jul/19 02:44
Worklog Time Spent: 10m 
  Work Description: anuengineer commented on pull request #948: HDDS-1649. 
On installSnapshot notification from OM leader, download checkpoint and reload 
OM state
URL: https://github.com/apache/hadoop/pull/948#discussion_r305183995
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/OMConfigKeys.java
 ##
 @@ -123,6 +123,9 @@ private OMConfigKeys() {
   "ozone.om.ratis.log.appender.queue.byte-limit";
   public static final String
   OZONE_OM_RATIS_LOG_APPENDER_QUEUE_BYTE_LIMIT_DEFAULT = "32MB";
+  public static final String OZONE_OM_RATIS_LOG_PURGE_GAP =
+  "ozone.om.ratis.log.purge.gap";
+  public static final int OZONE_OM_RATIS_LOG_PURGE_GAP_DEFAULT = 100;
 
 
 Review comment:
   Can we please use the new format for configs? Here are some examples: 
https://cwiki.apache.org/confluence/display/HADOOP/Java-based+configuration+API
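The "new format" referenced above is the annotation-based configuration API (`@Config`/`@ConfigGroup` in `org.apache.hadoop.hdds.conf`). As a hedged illustration only, with attribute values assumed for this example and not taken from the patch, the purge-gap key could look roughly like:

```java
// Illustrative fragment of the annotation-based config style suggested above.
// Attribute names/values here are assumptions for this sketch.
@ConfigGroup(prefix = "ozone.om.ratis")
public class OmRatisServerConfig {

  @Config(key = "log.purge.gap",
      defaultValue = "100",
      type = ConfigType.INT,
      tags = {ConfigTag.OM, ConfigTag.RATIS},
      description = "Raft log purge gap for the OM Ratis server.")
  private int logPurgeGap = 100;

  public int getLogPurgeGap() {
    return logPurgeGap;
  }
}
```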

 



Issue Time Tracking
---

Worklog Id: (was: 279439)
Time Spent: 11.5h  (was: 11h 20m)

> On installSnapshot notification from OM leader, download checkpoint and 
> reload OM state
> ---
>
> Key: HDDS-1649
> URL: https://issues.apache.org/jira/browse/HDDS-1649
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 11.5h
>  Remaining Estimate: 0h
>
> Installing a DB checkpoint on the OM involves the following steps:
>  1. When an OM follower receives an installSnapshot notification from the OM 
> leader, it should initiate a new checkpoint on the OM leader and download that 
> checkpoint over HTTP. 
>  2. After downloading the checkpoint, the StateMachine must be paused so that 
> the old OM DB can be replaced with the newly downloaded checkpoint. 
>  3. The OM should be reloaded with the new state. All the services having a 
> dependency on the OM DB (such as MetadataManager, KeyManager, etc.) must be 
> re-initialized/restarted. 
>  4. Once the OM is ready with the new state, the state machine must be 
> unpaused to resume participating in the Ratis ring.






[jira] [Work logged] (HDDS-1749) Ozone Client should randomize the list of nodes in pipeline for reads

2019-07-18 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1749?focusedWorklogId=279440&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-279440
 ]

ASF GitHub Bot logged work on HDDS-1749:


Author: ASF GitHub Bot
Created on: 19/Jul/19 02:45
Start Date: 19/Jul/19 02:45
Worklog Time Spent: 10m 
  Work Description: xiaoyuyao commented on issue #1124: HDDS-1749 : Ozone 
Client should randomize the list of nodes in pipeli…
URL: https://github.com/apache/hadoop/pull/1124#issuecomment-513068886
 
 
   Randomizing is good for balancing the load. However:
   For writes, we still must go through the leader (the first node).
   For reads, we can only use the random optimization for closed containers.
 



Issue Time Tracking
---

Worklog Id: (was: 279440)
Time Spent: 40m  (was: 0.5h)

> Ozone Client should randomize the list of nodes in pipeline for reads
> -
>
> Key: HDDS-1749
> URL: https://issues.apache.org/jira/browse/HDDS-1749
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Affects Versions: 0.4.0
>Reporter: Mukul Kumar Singh
>Assignee: Aravindan Vijayan
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Currently the list of nodes returned by SCM is static and is returned in 
> the same order to all clients. Ideally these should be sorted by the 
> network topology and then returned to the client.
> However, even when network topology is not available, the SCM/client should 
> randomly sort the nodes before choosing the replicas to connect to.
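A minimal sketch of the randomized ordering described above, using a hypothetical helper rather than the real Ozone client code. Per the review comment earlier in the thread, the shuffle applies only to reads from closed containers; writes keep the leader first:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;
import java.util.Random;

public class ReplicaOrder {
  /**
   * Return the pipeline's nodes in the order a read should try them.
   * Only reads from closed containers may use a random order; otherwise
   * the pipeline order (leader first) is preserved.
   */
  static List<String> nodesForRead(List<String> pipelineNodes,
                                   boolean containerClosed, long seed) {
    List<String> nodes = new ArrayList<>(pipelineNodes);
    if (containerClosed) {
      Collections.shuffle(nodes, new Random(seed)); // spread read load across replicas
    }
    return nodes;
  }

  /** Convenience wrapper for the example: joins the chosen order with commas. */
  static String orderFor(boolean containerClosed, long seed, String... nodes) {
    return String.join(",",
        nodesForRead(Arrays.asList(nodes), containerClosed, seed));
  }
}
```

A seeded `Random` is used here only to make the sketch deterministic for testing; a real client would use an unseeded source.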






[jira] [Work logged] (HDDS-1653) Add option to "ozone scmcli printTopology" to order the output according to topology layer

2019-07-18 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1653?focusedWorklogId=279444&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-279444
 ]

ASF GitHub Bot logged work on HDDS-1653:


Author: ASF GitHub Bot
Created on: 19/Jul/19 02:55
Start Date: 19/Jul/19 02:55
Worklog Time Spent: 10m 
  Work Description: ChenSammi commented on issue #1067: HDDS-1653. Add 
option to "ozone scmcli printTopology" to order the ou…
URL: https://github.com/apache/hadoop/pull/1067#issuecomment-513070699
 
 
   +1, will commit shortly. 
 



Issue Time Tracking
---

Worklog Id: (was: 279444)
Time Spent: 1h  (was: 50m)

> Add option to "ozone scmcli printTopology" to order the output according to 
> topology layer
> ---
>
> Key: HDDS-1653
> URL: https://issues.apache.org/jira/browse/HDDS-1653
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Sammi Chen
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> Add option to order the output according to topology layer.
> For example, for /rack/node topology, we can show:
> State = HEALTHY
> /default-rack:
> ozone_datanode_1.ozone_default/172.18.0.3
> ozone_datanode_2.ozone_default/172.18.0.2
> ozone_datanode_3.ozone_default/172.18.0.4
> /rack1:
> ozone_datanode_4.ozone_default/172.18.0.5
> ozone_datanode_5.ozone_default/172.18.0.6
> For /dc/rack/node topology, we can either show
> State = HEALTHY
> /default-dc/default-rack:
> ozone_datanode_1.ozone_default/172.18.0.3
> ozone_datanode_2.ozone_default/172.18.0.2
> ozone_datanode_3.ozone_default/172.18.0.4
> /dc1/rack1:
> ozone_datanode_4.ozone_default/172.18.0.5
> ozone_datanode_5.ozone_default/172.18.0.6
> or
> State = HEALTHY
> default-dc:
> default-rack:
> ozone_datanode_1.ozone_default/172.18.0.3
> ozone_datanode_2.ozone_default/172.18.0.2
> ozone_datanode_3.ozone_default/172.18.0.4
> dc1:
> rack1:
> ozone_datanode_4.ozone_default/172.18.0.5
> ozone_datanode_5.ozone_default/172.18.0.6
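Grouping datanodes under their topology path, as in the sample output above, can be sketched as follows. The `path|host` input format is an assumption made for this illustration, not the real scmcli data model:

```java
import java.util.List;
import java.util.Map;
import java.util.TreeMap;
import java.util.stream.Collectors;

public class TopologyPrinter {
  /**
   * Group "topologyPath|hostname" entries by topology path, mimicking the
   * ordered printTopology output shown above.
   */
  static Map<String, List<String>> groupByLocation(List<String> entries) {
    return entries.stream().collect(Collectors.groupingBy(
        e -> e.substring(0, e.indexOf('|')),   // topology path, e.g. /rack1
        TreeMap::new,                          // keeps locations sorted
        Collectors.mapping(e -> e.substring(e.indexOf('|') + 1),
            Collectors.toList())));
  }

  /** Render the grouped view as "path: host host ..." lines, one per location. */
  static String render(String... entries) {
    return groupByLocation(List.of(entries)).entrySet().stream()
        .map(en -> en.getKey() + ": " + String.join(" ", en.getValue()))
        .collect(Collectors.joining("\n"));
  }
}
```

Using a `TreeMap` as the group map gives the per-layer ordering the option asks for without a separate sort pass.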






[jira] [Work logged] (HDDS-1653) Add option to "ozone scmcli printTopology" to order the output according to topology layer

2019-07-18 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1653?focusedWorklogId=279445&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-279445
 ]

ASF GitHub Bot logged work on HDDS-1653:


Author: ASF GitHub Bot
Created on: 19/Jul/19 03:00
Start Date: 19/Jul/19 03:00
Worklog Time Spent: 10m 
  Work Description: ChenSammi commented on pull request #1067: HDDS-1653. 
Add option to "ozone scmcli printTopology" to order the ou…
URL: https://github.com/apache/hadoop/pull/1067
 
 
   
 



Issue Time Tracking
---

Worklog Id: (was: 279445)
Time Spent: 1h 10m  (was: 1h)

> Add option to "ozone scmcli printTopology" to order the output according to 
> topology layer
> ---
>
> Key: HDDS-1653
> URL: https://issues.apache.org/jira/browse/HDDS-1653
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Sammi Chen
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> Add option to order the output according to topology layer.
> For example, for /rack/node topology, we can show:
> State = HEALTHY
> /default-rack:
> ozone_datanode_1.ozone_default/172.18.0.3
> ozone_datanode_2.ozone_default/172.18.0.2
> ozone_datanode_3.ozone_default/172.18.0.4
> /rack1:
> ozone_datanode_4.ozone_default/172.18.0.5
> ozone_datanode_5.ozone_default/172.18.0.6
> For /dc/rack/node topology, we can either show
> State = HEALTHY
> /default-dc/default-rack:
> ozone_datanode_1.ozone_default/172.18.0.3
> ozone_datanode_2.ozone_default/172.18.0.2
> ozone_datanode_3.ozone_default/172.18.0.4
> /dc1/rack1:
> ozone_datanode_4.ozone_default/172.18.0.5
> ozone_datanode_5.ozone_default/172.18.0.6
> or
> State = HEALTHY
> default-dc:
> default-rack:
> ozone_datanode_1.ozone_default/172.18.0.3
> ozone_datanode_2.ozone_default/172.18.0.2
> ozone_datanode_3.ozone_default/172.18.0.4
> dc1:
> rack1:
> ozone_datanode_4.ozone_default/172.18.0.5
> ozone_datanode_5.ozone_default/172.18.0.6






[jira] [Work logged] (HDDS-1649) On installSnapshot notification from OM leader, download checkpoint and reload OM state

2019-07-18 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1649?focusedWorklogId=279446&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-279446
 ]

ASF GitHub Bot logged work on HDDS-1649:


Author: ASF GitHub Bot
Created on: 19/Jul/19 03:02
Start Date: 19/Jul/19 03:02
Worklog Time Spent: 10m 
  Work Description: arp7 commented on pull request #948: HDDS-1649. On 
installSnapshot notification from OM leader, download checkpoint and reload OM 
state
URL: https://github.com/apache/hadoop/pull/948#discussion_r305186358
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/OMConfigKeys.java
 ##
 @@ -123,6 +123,9 @@ private OMConfigKeys() {
   "ozone.om.ratis.log.appender.queue.byte-limit";
   public static final String
   OZONE_OM_RATIS_LOG_APPENDER_QUEUE_BYTE_LIMIT_DEFAULT = "32MB";
+  public static final String OZONE_OM_RATIS_LOG_PURGE_GAP =
+  "ozone.om.ratis.log.purge.gap";
+  public static final int OZONE_OM_RATIS_LOG_PURGE_GAP_DEFAULT = 100;
 
 
 Review comment:
   Good suggestion! Let me file a follow-up JIRA to fix that. I want to get this 
patch committed today; it's been hanging around for over a month.
 



Issue Time Tracking
---

Worklog Id: (was: 279446)
Time Spent: 11h 40m  (was: 11.5h)

> On installSnapshot notification from OM leader, download checkpoint and 
> reload OM state
> ---
>
> Key: HDDS-1649
> URL: https://issues.apache.org/jira/browse/HDDS-1649
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 11h 40m
>  Remaining Estimate: 0h
>
> Installing a DB checkpoint on the OM involves the following steps:
>  1. When an OM follower receives an installSnapshot notification from the OM 
> leader, it should initiate a new checkpoint on the OM leader and download that 
> checkpoint over HTTP. 
>  2. After downloading the checkpoint, the StateMachine must be paused so that 
> the old OM DB can be replaced with the newly downloaded checkpoint. 
>  3. The OM should be reloaded with the new state. All the services having a 
> dependency on the OM DB (such as MetadataManager, KeyManager, etc.) must be 
> re-initialized/restarted. 
>  4. Once the OM is ready with the new state, the state machine must be 
> unpaused to resume participating in the Ratis ring.






[jira] [Work logged] (HDDS-1649) On installSnapshot notification from OM leader, download checkpoint and reload OM state

2019-07-18 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1649?focusedWorklogId=279450&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-279450
 ]

ASF GitHub Bot logged work on HDDS-1649:


Author: ASF GitHub Bot
Created on: 19/Jul/19 03:06
Start Date: 19/Jul/19 03:06
Worklog Time Spent: 10m 
  Work Description: arp7 commented on pull request #948: HDDS-1649. On 
installSnapshot notification from OM leader, download checkpoint and reload OM 
state
URL: https://github.com/apache/hadoop/pull/948#discussion_r305187064
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/OMConfigKeys.java
 ##
 @@ -123,6 +123,9 @@ private OMConfigKeys() {
   "ozone.om.ratis.log.appender.queue.byte-limit";
   public static final String
   OZONE_OM_RATIS_LOG_APPENDER_QUEUE_BYTE_LIMIT_DEFAULT = "32MB";
+  public static final String OZONE_OM_RATIS_LOG_PURGE_GAP =
+  "ozone.om.ratis.log.purge.gap";
+  public static final int OZONE_OM_RATIS_LOG_PURGE_GAP_DEFAULT = 100;
 
 
 Review comment:
   Filed [HDDS-1831](https://issues.apache.org/jira/browse/HDDS-1831).
 



Issue Time Tracking
---

Worklog Id: (was: 279450)
Time Spent: 11h 50m  (was: 11h 40m)

> On installSnapshot notification from OM leader, download checkpoint and 
> reload OM state
> ---
>
> Key: HDDS-1649
> URL: https://issues.apache.org/jira/browse/HDDS-1649
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 11h 50m
>  Remaining Estimate: 0h
>
> Installing a DB checkpoint on the OM involves the following steps:
>  1. When an OM follower receives an installSnapshot notification from the OM 
> leader, it should initiate a new checkpoint on the OM leader and download that 
> checkpoint over HTTP. 
>  2. After downloading the checkpoint, the StateMachine must be paused so that 
> the old OM DB can be replaced with the newly downloaded checkpoint. 
>  3. The OM should be reloaded with the new state. All the services having a 
> dependency on the OM DB (such as MetadataManager, KeyManager, etc.) must be 
> re-initialized/restarted. 
>  4. Once the OM is ready with the new state, the state machine must be 
> unpaused to resume participating in the Ratis ring.






[jira] [Work logged] (HDDS-1713) ReplicationManager fail to find proper node topology based on Datanode details from heartbeat

2019-07-18 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1713?focusedWorklogId=279482&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-279482
 ]

ASF GitHub Bot logged work on HDDS-1713:


Author: ASF GitHub Bot
Created on: 19/Jul/19 03:32
Start Date: 19/Jul/19 03:32
Worklog Time Spent: 10m 
  Work Description: ChenSammi commented on pull request #1112: HDDS-1713. 
ReplicationManager fail to find proper node topology based…
URL: https://github.com/apache/hadoop/pull/1112#discussion_r305190794
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/container/placement/algorithms/TestSCMContainerPlacementRackAware.java
 ##
 @@ -137,10 +137,6 @@ public void chooseNodeWithNoExcludedNodes() throws 
SCMException {
 datanodeDetails.get(2)));
 Assert.assertFalse(cluster.isSameParent(datanodeDetails.get(1),
 datanodeDetails.get(2)));
-Assert.assertFalse(cluster.isSameParent(datanodeDetails.get(0),
-datanodeDetails.get(3)));
-Assert.assertFalse(cluster.isSameParent(datanodeDetails.get(2),
-datanodeDetails.get(3)));
 
 Review comment:
   Thanks for the comments. Will remove last two assertions in testFallback.
 



Issue Time Tracking
---

Worklog Id: (was: 279482)
Time Spent: 2h 50m  (was: 2h 40m)

> ReplicationManager fail to find proper node topology based on Datanode 
> details from heartbeat
> -
>
> Key: HDDS-1713
> URL: https://issues.apache.org/jira/browse/HDDS-1713
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Sammi Chen
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 2h 50m
>  Remaining Estimate: 0h
>
> The DN does not include topology info in its heartbeat message for the 
> container report/pipeline report.
> The SCM is where the topology information is available. While processing a 
> heartbeat, we should not rely on the DatanodeDetails from the report to choose 
> datanodes for closing a container. Otherwise, all the datanode locations of 
> existing container replicas will fall back to /default-rack.
>  
> The fix is to retrieve the corresponding datanode locations from the SCM 
> NodeManager, which has authoritative network topology information. 
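The fix described above amounts to resolving locations from the NodeManager's authoritative registration data instead of the heartbeat payload. A hedged sketch with hypothetical names:

```java
import java.util.HashMap;
import java.util.Map;

public class TopologyLookupSketch {
  /** Authoritative topology map held at registration time (illustrative). */
  private final Map<String, String> uuidToLocation = new HashMap<>();

  void register(String dnUuid, String networkLocation) {
    uuidToLocation.put(dnUuid, networkLocation);
  }

  /**
   * Resolve the DN's network location from registration data rather than
   * trusting the topology-free DatanodeDetails carried in heartbeat reports,
   * which would otherwise make every replica appear under /default-rack.
   */
  String resolveLocation(String dnUuidFromHeartbeat) {
    return uuidToLocation.getOrDefault(dnUuidFromHeartbeat, "/default-rack");
  }
}
```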






[jira] [Work logged] (HDDS-1827) Load Snapshot info when OM Ratis server starts

2019-07-19 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1827?focusedWorklogId=279565&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-279565
 ]

ASF GitHub Bot logged work on HDDS-1827:


Author: ASF GitHub Bot
Created on: 19/Jul/19 07:44
Start Date: 19/Jul/19 07:44
Worklog Time Spent: 10m 
  Work Description: hanishakoneru commented on pull request #1130: 
HDDS-1827. Load Snapshot info when OM Ratis server starts.
URL: https://github.com/apache/hadoop/pull/1130
 
 
   
 



Issue Time Tracking
---

Worklog Id: (was: 279565)
Time Spent: 10m
Remaining Estimate: 0h

> Load Snapshot info when OM Ratis server starts
> --
>
> Key: HDDS-1827
> URL: https://issues.apache.org/jira/browse/HDDS-1827
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> When the Ratis server starts, it looks for the latest snapshot to load. 
> Even though OM does not save snapshots via Ratis, we need to load the saved 
> snapshot index into Ratis so that the LogAppender knows not to look for logs 
> before the snapshot index. Otherwise, Ratis will replay the logs from the 
> beginning every time it starts up.
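Loading a persisted snapshot index at startup can be sketched as below. This is an illustrative stand-alone sketch, not the real `OMRatisSnapshotInfo` code; it also shows the pattern the later findbugs report asks for (an explicit charset instead of the platform default, and a null check on `readLine()`):

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;

public class SnapshotIndexLoader {
  // Load the last persisted snapshot index so Ratis can skip replaying
  // log entries at or below it. Returns -1 (no snapshot) when the file
  // is absent or empty.
  static long loadSnapshotIndex(Path snapshotFile) throws IOException {
    if (!Files.exists(snapshotFile)) {
      return -1;
    }
    // Explicit charset avoids reliance on the platform default encoding.
    try (BufferedReader r =
        Files.newBufferedReader(snapshotFile, StandardCharsets.UTF_8)) {
      String line = r.readLine();
      // Guard against an empty file instead of dereferencing null.
      return (line == null || line.isEmpty()) ? -1 : Long.parseLong(line.trim());
    }
  }

  public static void main(String[] args) throws IOException {
    Path f = Files.createTempFile("omRatisSnapshot", ".index");
    Files.write(f, "42".getBytes(StandardCharsets.UTF_8));
    System.out.println(loadSnapshotIndex(f)); // prints 42
  }
}
```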






[jira] [Updated] (HDDS-1827) Load Snapshot info when OM Ratis server starts

2019-07-19 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1827?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-1827:
-
Labels: pull-request-available  (was: )

> Load Snapshot info when OM Ratis server starts
> --
>
> Key: HDDS-1827
> URL: https://issues.apache.org/jira/browse/HDDS-1827
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
>  Labels: pull-request-available
>
> When the Ratis server starts, it looks for the latest snapshot to load. 
> Even though OM does not save snapshots via Ratis, we need to load the saved 
> snapshot index into Ratis so that the LogAppender knows not to look for logs 
> before the snapshot index. Otherwise, Ratis will replay the logs from the 
> beginning every time it starts up.






[jira] [Work logged] (HDDS-1713) ReplicationManager fail to find proper node topology based on Datanode details from heartbeat

2019-07-19 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1713?focusedWorklogId=279618&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-279618
 ]

ASF GitHub Bot logged work on HDDS-1713:


Author: ASF GitHub Bot
Created on: 19/Jul/19 09:51
Start Date: 19/Jul/19 09:51
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1112: HDDS-1713. 
ReplicationManager fail to find proper node topology based…
URL: https://github.com/apache/hadoop/pull/1112#issuecomment-513169185
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 101 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 8 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 23 | Maven dependency ordering for branch |
   | +1 | mvninstall | 515 | trunk passed |
   | +1 | compile | 261 | trunk passed |
   | +1 | checkstyle | 71 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 951 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 172 | trunk passed |
   | 0 | spotbugs | 334 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 543 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 31 | Maven dependency ordering for patch |
   | +1 | mvninstall | 450 | the patch passed |
   | +1 | compile | 265 | the patch passed |
   | +1 | cc | 265 | the patch passed |
   | +1 | javac | 265 | the patch passed |
   | +1 | checkstyle | 80 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 758 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 163 | the patch passed |
   | +1 | findbugs | 553 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 340 | hadoop-hdds in the patch passed. |
   | -1 | unit | 2201 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 45 | The patch does not generate ASF License warnings. |
   | | | 7695 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.ozone.container.server.TestSecureContainerServer |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.ozone.container.ozoneimpl.TestSecureOzoneContainer |
   |   | hadoop.ozone.client.rpc.TestCloseContainerHandlingByClient |
   |   | hadoop.ozone.TestStorageContainerManager |
   |   | hadoop.ozone.client.rpc.TestOzoneClientRetriesOnException |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.ozone.client.rpc.TestMultiBlockWritesWithDnFailures |
   |   | hadoop.ozone.client.rpc.TestFailureHandlingByClient |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=18.09.7 Server=18.09.7 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1112/5/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1112 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle cc |
   | uname | Linux c1ab3b738dac 4.15.0-52-generic #56-Ubuntu SMP Tue Jun 4 
22:49:08 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 4e66cb9 |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1112/5/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1112/5/testReport/ |
   | Max. process+thread count | 4581 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-hdds/container-service 
hadoop-hdds/server-scm hadoop-ozone/integration-test hadoop-ozone/ozone-manager 
U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1112/5/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 



Issue Time Tracking
---

Worklog Id: (was: 279618)

[jira] [Work logged] (HDDS-1827) Load Snapshot info when OM Ratis server starts

2019-07-19 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1827?focusedWorklogId=279616&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-279616
 ]

ASF GitHub Bot logged work on HDDS-1827:


Author: ASF GitHub Bot
Created on: 19/Jul/19 09:51
Start Date: 19/Jul/19 09:51
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1130: HDDS-1827. Load 
Snapshot info when OM Ratis server starts.
URL: https://github.com/apache/hadoop/pull/1130#issuecomment-513169099
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 68 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 4 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 31 | Maven dependency ordering for branch |
   | +1 | mvninstall | 478 | trunk passed |
   | +1 | compile | 262 | trunk passed |
   | +1 | checkstyle | 76 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 873 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 165 | trunk passed |
   | 0 | spotbugs | 319 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 513 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 25 | Maven dependency ordering for patch |
   | +1 | mvninstall | 446 | the patch passed |
   | +1 | compile | 269 | the patch passed |
   | +1 | javac | 269 | the patch passed |
   | -0 | checkstyle | 42 | hadoop-ozone: The patch generated 7 new + 0 
unchanged - 0 fixed = 7 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 694 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 93 | hadoop-ozone generated 1 new + 13 unchanged - 0 fixed 
= 14 total (was 13) |
   | -1 | findbugs | 325 | hadoop-ozone generated 4 new + 0 unchanged - 0 fixed 
= 4 total (was 0) |
   ||| _ Other Tests _ |
   | -1 | unit | 218 | hadoop-hdds in the patch failed. |
   | -1 | unit | 2429 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 50 | The patch does not generate ASF License warnings. |
   | | | 7552 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | FindBugs | module:hadoop-ozone |
   |  |  Found reliance on default encoding in 
org.apache.hadoop.ozone.om.ratis.OMRatisSnapshotInfo.loadRatisSnapshotIndex():in
 org.apache.hadoop.ozone.om.ratis.OMRatisSnapshotInfo.loadRatisSnapshotIndex(): 
new java.io.FileReader(File)  At OMRatisSnapshotInfo.java:[line 82] |
   |  |  Found reliance on default encoding in 
org.apache.hadoop.ozone.om.ratis.OMRatisSnapshotInfo.saveRatisSnapshotToDisk(long):in
 
org.apache.hadoop.ozone.om.ratis.OMRatisSnapshotInfo.saveRatisSnapshotToDisk(long):
 new java.io.FileWriter(File)  At OMRatisSnapshotInfo.java:[line 104] |
   |  |  Dereference of the result of readLine() without nullcheck in 
org.apache.hadoop.ozone.om.ratis.OMRatisSnapshotInfo.loadRatisSnapshotIndex()  
At OMRatisSnapshotInfo.java:readLine() without nullcheck in 
org.apache.hadoop.ozone.om.ratis.OMRatisSnapshotInfo.loadRatisSnapshotIndex()  
At OMRatisSnapshotInfo.java:[line 88] |
   |  |  Dereference of the result of readLine() without nullcheck in 
org.apache.hadoop.ozone.om.ratis.OMRatisSnapshotInfo.loadRatisSnapshotIndex()  
At OMRatisSnapshotInfo.java:readLine() without nullcheck in 
org.apache.hadoop.ozone.om.ratis.OMRatisSnapshotInfo.loadRatisSnapshotIndex()  
At OMRatisSnapshotInfo.java:[line 87] |
   | Failed junit tests | hadoop.ozone.container.ozoneimpl.TestOzoneContainer |
   |   | hadoop.ozone.container.server.TestSecureContainerServer |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.container.ozoneimpl.TestSecureOzoneContainer |
   |   | hadoop.ozone.client.rpc.TestMultiBlockWritesWithDnFailures |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.hdds.scm.pipeline.TestRatisPipelineCreateAndDestory |
   |   | hadoop.ozone.client.rpc.TestFailureHandlingByClient |
   |   | hadoop.ozone.client.rpc.TestOzoneClientRetriesOnException |
   |   | hadoop.ozone.client.rpc.TestBlockOutputStreamWithFailures |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.ozone.client.rpc.TestCloseContainerHandlingByClient |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=18.09.8 Server=18.09.8 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1130/1/artifact/out/Dockerfile
 |
   | GITHUB PR | h

[jira] [Updated] (HDDS-1836) Change the default value of ratis leader election min timeout to a lower value

2019-07-19 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1836?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-1836:
-
Labels: pull-request-available  (was: )

> Change the default value of ratis leader election min timeout to a lower value
> --
>
> Key: HDDS-1836
> URL: https://issues.apache.org/jira/browse/HDDS-1836
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.5.0
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>
> The default value of the min leader election timeout is currently 5s (set by 
> HDDS-1718), which leads to leader election taking much longer to time out in 
> case of network failures and delays the creation of pipelines in the system. 
> The idea is to change the default to a lower value of "2s" for now.






[jira] [Work logged] (HDDS-1836) Change the default value of ratis leader election min timeout to a lower value

2019-07-19 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1836?focusedWorklogId=279633&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-279633
 ]

ASF GitHub Bot logged work on HDDS-1836:


Author: ASF GitHub Bot
Created on: 19/Jul/19 10:22
Start Date: 19/Jul/19 10:22
Worklog Time Spent: 10m 
  Work Description: bshashikant commented on pull request #1133: HDDS-1836. 
Change the default value of ratis leader election min timeout to a lower value
URL: https://github.com/apache/hadoop/pull/1133
 
 
   
 



Issue Time Tracking
---

Worklog Id: (was: 279633)
Time Spent: 10m
Remaining Estimate: 0h

> Change the default value of ratis leader election min timeout to a lower value
> --
>
> Key: HDDS-1836
> URL: https://issues.apache.org/jira/browse/HDDS-1836
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.5.0
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The default value of the min leader election timeout is currently 5s (set by 
> HDDS-1718), which leads to leader election taking much longer to time out in 
> case of network failures and delays the creation of pipelines in the system. 
> The idea is to change the default to a lower value of "2s" for now.






[jira] [Work logged] (HDDS-1803) shellcheck.sh does not work on Mac

2019-07-19 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1803?focusedWorklogId=279644&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-279644
 ]

ASF GitHub Bot logged work on HDDS-1803:


Author: ASF GitHub Bot
Created on: 19/Jul/19 11:06
Start Date: 19/Jul/19 11:06
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1102: HDDS-1803. 
shellcheck.sh does not work on Mac
URL: https://github.com/apache/hadoop/pull/1102#issuecomment-513189261
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 37 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | 0 | shelldocs | 1 | Shelldocs was not available. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 551 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 777 | branch has no errors when building and testing 
our client artifacts. |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 471 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | shellcheck | 0 | There were no new shellcheck issues. |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 673 | patch has no errors when building and testing 
our client artifacts. |
   ||| _ Other Tests _ |
   | +1 | unit | 98 | hadoop-hdds in the patch passed. |
   | +1 | unit | 170 | hadoop-ozone in the patch passed. |
   | +1 | asflicense | 40 | The patch does not generate ASF License warnings. |
   | | | 2989 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=18.09.8 Server=18.09.8 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1102/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1102 |
   | Optional Tests | dupname asflicense mvnsite unit shellcheck shelldocs |
   | uname | Linux 054075b12d03 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 4e66cb9 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1102/3/testReport/ |
   | Max. process+thread count | 412 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone U: hadoop-ozone |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1102/3/console |
   | versions | git=2.7.4 maven=3.3.9 shellcheck=0.4.6 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 



Issue Time Tracking
---

Worklog Id: (was: 279644)
Time Spent: 50m  (was: 40m)

> shellcheck.sh does not work on Mac
> --
>
> Key: HDDS-1803
> URL: https://issues.apache.org/jira/browse/HDDS-1803
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Affects Versions: 0.4.1
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> # {{shellcheck.sh}} does not work on Mac
> {code}
> find: -executable: unknown primary or operator
> {code}
> # {{$OUTPUT_FILE}} only contains problems from {{hadoop-ozone}}, not from 
> {{hadoop-hdds}}
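Both problems in the issue can be sketched in a portable way. The `find -executable` primary is GNU-only (BSD/macOS `find` rejects it), while `-perm -u+x` is understood by both; and appending to one report file per tree keeps `hadoop-hdds` findings from being lost. This is an assumed, simplified sketch, not the actual `shellcheck.sh` contents:

```shell
#!/usr/bin/env bash
# Portable replacement for GNU-only `find -executable`:
# `-perm -u+x` (owner-executable bit) works on both GNU and BSD find.
find_executable_shell_scripts() {
  local dir="$1"
  find "$dir" -type f -name '*.sh' -perm -u+x
}

# Collect results from both trees into one report file, so findings from
# hadoop-hdds are not overwritten by hadoop-ozone's. OUTPUT_FILE is a
# hypothetical name mirroring the variable mentioned in the issue.
OUTPUT_FILE="${OUTPUT_FILE:-/tmp/shellcheck-report.txt}"
: > "$OUTPUT_FILE"
for d in hadoop-hdds hadoop-ozone; do
  [ -d "$d" ] || continue
  find_executable_shell_scripts "$d" | while read -r f; do
    shellcheck "$f" >> "$OUTPUT_FILE" || true
  done
done
```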






[jira] [Work logged] (HDDS-1749) Ozone Client should randomize the list of nodes in pipeline for reads

2019-07-19 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1749?focusedWorklogId=279647&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-279647
 ]

ASF GitHub Bot logged work on HDDS-1749:


Author: ASF GitHub Bot
Created on: 19/Jul/19 11:09
Start Date: 19/Jul/19 11:09
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1124: HDDS-1749 : 
Ozone Client should randomize the list of nodes in pipeli…
URL: https://github.com/apache/hadoop/pull/1124#issuecomment-513190201
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 90 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 516 | trunk passed |
   | +1 | compile | 257 | trunk passed |
   | +1 | checkstyle | 63 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 791 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 154 | trunk passed |
   | 0 | spotbugs | 311 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 493 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 427 | the patch passed |
   | +1 | compile | 245 | the patch passed |
   | +1 | javac | 245 | the patch passed |
   | +1 | checkstyle | 65 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 636 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 154 | the patch passed |
   | +1 | findbugs | 537 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 348 | hadoop-hdds in the patch passed. |
   | -1 | unit | 241 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 35 | The patch does not generate ASF License warnings. |
   | | | 5214 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.ozone.security.TestOzoneDelegationTokenSecretManager |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=18.09.8 Server=18.09.8 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1124/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1124 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 194dd3b71806 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 4e66cb9 |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1124/2/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1124/2/testReport/ |
   | Max. process+thread count | 1351 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/client U: hadoop-hdds/client |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1124/2/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 



Issue Time Tracking
---

Worklog Id: (was: 279647)
Time Spent: 50m  (was: 40m)

> Ozone Client should randomize the list of nodes in pipeline for reads
> -
>
> Key: HDDS-1749
> URL: https://issues.apache.org/jira/browse/HDDS-1749
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Affects Versions: 0.4.0
>Reporter: Mukul Kumar Singh
>Assignee: Aravindan Vijayan
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Currently the list of nodes returned by SCM is static and is returned in 
> the same order to all the clients.
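The per-client randomization the issue asks for can be sketched as below. This is an illustrative stand-in, not the actual Ozone client code; node names and the method name are hypothetical:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class ReadNodeShuffler {
  // Shuffle the pipeline's node list per read so that read load spreads
  // across replicas instead of always hitting the first node SCM returned.
  static List<String> shuffledForRead(List<String> pipelineNodes) {
    // Copy first: the pipeline's canonical ordering must not be mutated.
    List<String> copy = new ArrayList<>(pipelineNodes);
    Collections.shuffle(copy);
    return copy;
  }

  public static void main(String[] args) {
    List<String> nodes = Arrays.asList("dn1", "dn2", "dn3");
    // Order is random per call; membership is unchanged.
    System.out.println(shuffledForRead(nodes));
  }
}
```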

[jira] [Work logged] (HDDS-1391) Add ability in OM to serve delta updates through an API.

2019-07-19 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1391?focusedWorklogId=279649&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-279649
 ]

ASF GitHub Bot logged work on HDDS-1391:


Author: ASF GitHub Bot
Created on: 19/Jul/19 11:11
Start Date: 19/Jul/19 11:11
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1033: HDDS-1391 : Add 
ability in OM to serve delta updates through an API.
URL: https://github.com/apache/hadoop/pull/1033#issuecomment-513190749
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 0 | Docker mode activated. |
   | -1 | patch | 14 | https://github.com/apache/hadoop/pull/1033 does not 
apply to trunk. Rebase required? Wrong Branch? See 
https://wiki.apache.org/hadoop/HowToContribute for help. |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | GITHUB PR | https://github.com/apache/hadoop/pull/1033 |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1033/4/console |
   | versions | git=2.7.4 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 



Issue Time Tracking
---

Worklog Id: (was: 279649)
Time Spent: 1.5h  (was: 1h 20m)

> Add ability in OM to serve delta updates through an API.
> 
>
> Key: HDDS-1391
> URL: https://issues.apache.org/jira/browse/HDDS-1391
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Aravindan Vijayan
>Assignee: Aravindan Vijayan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.1
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> Added an RPC endpoint to serve the set of updates in OM RocksDB from a given 
> sequence number.
> This will be used by Recon (HDDS-1105) to push the data to all the tasks that 
> will keep their aggregate data up to date. 
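The delta-update idea — serve everything after the sequence number the caller has already applied — can be illustrated with a plain-Java sketch. This deliberately avoids the RocksDB dependency; the real endpoint would sit on top of RocksDB's write-ahead log, and all names here are hypothetical:

```java
import java.util.ArrayList;
import java.util.List;

public class DeltaUpdateServer {
  static class Update {
    final long sequence;
    final String payload;
    Update(long sequence, String payload) {
      this.sequence = sequence;
      this.payload = payload;
    }
  }

  private final List<Update> log = new ArrayList<>();

  // Every write gets the next monotonically increasing sequence number.
  void append(String payload) {
    log.add(new Update(log.size() + 1, payload));
  }

  // Return the tail of the log after `sinceSequence`, i.e. the delta a
  // follower (Recon, in the original design) still needs to apply.
  List<Update> updatesSince(long sinceSequence) {
    List<Update> delta = new ArrayList<>();
    for (Update u : log) {
      if (u.sequence > sinceSequence) {
        delta.add(u);
      }
    }
    return delta;
  }

  public static void main(String[] args) {
    DeltaUpdateServer s = new DeltaUpdateServer();
    s.append("putKey /vol/bucket/k1");
    s.append("putKey /vol/bucket/k2");
    s.append("deleteKey /vol/bucket/k1");
    System.out.println(s.updatesSince(1).size()); // prints 2
  }
}
```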






[jira] [Work logged] (HDDS-1805) Implement S3 Initiate MPU request to use Cache and DoubleBuffer

2019-07-19 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1805?focusedWorklogId=279688&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-279688
 ]

ASF GitHub Bot logged work on HDDS-1805:


Author: ASF GitHub Bot
Created on: 19/Jul/19 11:48
Start Date: 19/Jul/19 11:48
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1108: HDDS-1805. 
Implement S3 Initiate MPU request to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/1108#issuecomment-513199934
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 36 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 11 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 27 | Maven dependency ordering for branch |
   | +1 | mvninstall | 462 | trunk passed |
   | +1 | compile | 254 | trunk passed |
   | +1 | checkstyle | 69 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 785 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 160 | trunk passed |
   | 0 | spotbugs | 312 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 500 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 26 | Maven dependency ordering for patch |
   | +1 | mvninstall | 434 | the patch passed |
   | +1 | compile | 268 | the patch passed |
   | +1 | cc | 268 | the patch passed |
   | +1 | javac | 268 | the patch passed |
   | +1 | checkstyle | 66 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 614 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 146 | the patch passed |
   | +1 | findbugs | 521 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 276 | hadoop-hdds in the patch failed. |
   | -1 | unit | 1459 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 38 | The patch does not generate ASF License warnings. |
   | | | 6294 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdds.scm.container.placement.algorithms.TestSCMContainerPlacementRackAware
 |
   |   | hadoop.ozone.container.ozoneimpl.TestSecureOzoneContainer |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestOzoneClientRetriesOnException |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.ozone.client.rpc.TestFailureHandlingByClient |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestBlockOutputStreamWithFailures |
   |   | hadoop.ozone.client.rpc.TestMultiBlockWritesWithDnFailures |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.hdds.scm.pipeline.TestRatisPipelineCreateAndDestory |
   |   | hadoop.ozone.client.rpc.TestCloseContainerHandlingByClient |
   |   | hadoop.ozone.container.server.TestSecureContainerServer |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=18.09.8 Server=18.09.8 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1108/4/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1108 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle cc |
   | uname | Linux 60075cfa943e 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 4e66cb9 |
   | Default Java | 1.8.0_212 |
   | unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-1108/4/artifact/out/patch-unit-hadoop-hdds.txt |
   | unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-1108/4/artifact/out/patch-unit-hadoop-ozone.txt |
   | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1108/4/testReport/ |
   | Max. process+thread count | 5283 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/common hadoop-ozone/ozone-manager U: hadoop-ozone |
   | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1108/4/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

[jira] [Work logged] (HDDS-1799) Add goofyfs to the ozone-runner docker image

2019-07-19 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1799?focusedWorklogId=279692&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-279692
 ]

ASF GitHub Bot logged work on HDDS-1799:


Author: ASF GitHub Bot
Created on: 19/Jul/19 11:56
Start Date: 19/Jul/19 11:56
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1105: HDDS-1799. Add 
goofyfs to the ozone-runner docker image
URL: https://github.com/apache/hadoop/pull/1105#issuecomment-513201916
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 57 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | 0 | shelldocs | 0 | Shelldocs was not available. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or modified tests.  Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 534 | trunk passed |
   | +1 | compile | 254 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 818 | branch has no errors when building and testing our client artifacts. |
   | +1 | javadoc | 164 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 469 | the patch passed |
   | +1 | compile | 275 | the patch passed |
   | +1 | javac | 275 | the patch passed |
   | +1 | hadolint | 2 | The patch generated 0 new + 1 unchanged - 3 fixed = 1 total (was 4) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | shellcheck | 0 | There were no new shellcheck issues. |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 1 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 694 | patch has no errors when building and testing our client artifacts. |
   | +1 | javadoc | 165 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 330 | hadoop-hdds in the patch passed. |
   | -1 | unit | 2156 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 54 | The patch does not generate ASF License warnings. |
   | | | 6175 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.container.common.statemachine.commandhandler.TestBlockDeletion |
   |   | hadoop.ozone.client.rpc.TestFailureHandlingByClient |
   |   | hadoop.hdds.scm.pipeline.TestRatisPipelineCreateAndDestory |
   |   | hadoop.ozone.client.rpc.TestMultiBlockWritesWithDnFailures |
   |   | hadoop.ozone.container.ozoneimpl.TestSecureOzoneContainer |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.ozone.client.rpc.TestReadRetries |
   |   | hadoop.ozone.client.rpc.TestOzoneClientRetriesOnException |
   |   | hadoop.ozone.container.server.TestSecureContainerServer |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=18.09.8 Server=18.09.8 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1105/3/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1105 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient xml hadolint shellcheck shelldocs |
   | uname | Linux 258f7a8088f7 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 4e66cb9 |
   | Default Java | 1.8.0_212 |
   | unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-1105/3/artifact/out/patch-unit-hadoop-ozone.txt |
   | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1105/3/testReport/ |
   | Max. process+thread count | 4912 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/dist U: hadoop-ozone/dist |
   | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1105/3/console |
   | versions | git=2.7.4 maven=3.3.9 shellcheck=0.4.6 hadolint=1.11.1-0-g0e692dd |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 



Issue Time Tracking
---

Worklog Id: (was: 279692)
Time Sp

[jira] [Work logged] (HDDS-1811) Prometheus metrics are broken for datanodes due to an invalid metric

2019-07-19 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1811?focusedWorklogId=279691&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-279691
 ]

ASF GitHub Bot logged work on HDDS-1811:


Author: ASF GitHub Bot
Created on: 19/Jul/19 11:56
Start Date: 19/Jul/19 11:56
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1118: HDDS-1811. 
Prometheus metrics are broken
URL: https://github.com/apache/hadoop/pull/1118#issuecomment-513201944
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 523 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 20 | Maven dependency ordering for branch |
   | +1 | mvninstall | 484 | trunk passed |
   | +1 | compile | 245 | trunk passed |
   | +1 | checkstyle | 60 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 796 | branch has no errors when building and testing our client artifacts. |
   | +1 | javadoc | 147 | trunk passed |
   | 0 | spotbugs | 317 | Used deprecated FindBugs config; considering switching to SpotBugs. |
   | +1 | findbugs | 518 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 18 | Maven dependency ordering for patch |
   | +1 | mvninstall | 426 | the patch passed |
   | +1 | compile | 254 | the patch passed |
   | +1 | javac | 254 | the patch passed |
   | +1 | checkstyle | 66 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 654 | patch has no errors when building and testing our client artifacts. |
   | +1 | javadoc | 154 | the patch passed |
   | +1 | findbugs | 541 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 282 | hadoop-hdds in the patch passed. |
   | -1 | unit | 2074 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 42 | The patch does not generate ASF License warnings. |
   | | | 7461 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.client.rpc.TestBlockOutputStreamWithFailures |
   |   | hadoop.ozone.client.rpc.TestCloseContainerHandlingByClient |
   |   | hadoop.ozone.client.rpc.TestOzoneClientRetriesOnException |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestMultiBlockWritesWithDnFailures |
   |   | hadoop.ozone.container.server.TestSecureContainerServer |
   |   | hadoop.ozone.container.ozoneimpl.TestSecureOzoneContainer |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestFailureHandlingByClient |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=18.09.8 Server=18.09.8 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1118/3/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1118 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 2a9658119c44 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 4e66cb9 |
   | Default Java | 1.8.0_212 |
   | unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-1118/3/artifact/out/patch-unit-hadoop-ozone.txt |
   | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1118/3/testReport/ |
   | Max. process+thread count | 5008 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/framework hadoop-hdds/container-service U: hadoop-hdds |
   | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1118/3/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 



Issue Time Tracking
---

Worklog Id: (was: 279691)
Time Spent: 1h  (was: 50m)

> Prometheus metrics are broken for datanodes due to an invalid metric
> -

[jira] [Work logged] (HDDS-1798) Propagate failure in writeStateMachineData to Ratis

2019-07-19 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1798?focusedWorklogId=279696&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-279696
 ]

ASF GitHub Bot logged work on HDDS-1798:


Author: ASF GitHub Bot
Created on: 19/Jul/19 12:03
Start Date: 19/Jul/19 12:03
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1113: HDDS-1798. 
Propagate failure in writeStateMachineData to Ratis. Contributed by Supratim 
Deka
URL: https://github.com/apache/hadoop/pull/1113#issuecomment-513203691
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 75 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or modified tests.  Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 513 | trunk passed |
   | +1 | compile | 272 | trunk passed |
   | +1 | checkstyle | 70 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 948 | branch has no errors when building and testing our client artifacts. |
   | +1 | javadoc | 170 | trunk passed |
   | 0 | spotbugs | 349 | Used deprecated FindBugs config; considering switching to SpotBugs. |
   | +1 | findbugs | 555 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 470 | the patch passed |
   | +1 | compile | 277 | the patch passed |
   | +1 | javac | 277 | the patch passed |
   | +1 | checkstyle | 80 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 741 | patch has no errors when building and testing our client artifacts. |
   | +1 | javadoc | 174 | the patch passed |
   | +1 | findbugs | 579 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 347 | hadoop-hdds in the patch passed. |
   | -1 | unit | 2362 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 43 | The patch does not generate ASF License warnings. |
   | | | 7830 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdds.scm.pipeline.TestRatisPipelineCreateAndDestory |
   |   | hadoop.ozone.container.server.TestSecureContainerServer |
   |   | hadoop.ozone.client.rpc.TestWatchForCommit |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.ozone.om.TestSecureOzoneManager |
   |   | hadoop.ozone.container.ozoneimpl.TestSecureOzoneContainer |
   |   | hadoop.ozone.client.rpc.TestCloseContainerHandlingByClient |
   |   | hadoop.ozone.client.rpc.TestContainerStateMachineFailures |
   |   | hadoop.ozone.client.rpc.TestOzoneClientRetriesOnException |
   |   | hadoop.ozone.om.TestOzoneManagerHA |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.ozone.client.rpc.TestMultiBlockWritesWithDnFailures |
   |   | hadoop.ozone.client.rpc.TestFailureHandlingByClient |
   |   | hadoop.ozone.client.rpc.TestContainerStateMachine |
   |   | hadoop.ozone.client.rpc.TestBlockOutputStreamWithFailures |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=18.09.7 Server=18.09.7 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1113/1/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1113 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux f4c8d4834b55 4.15.0-52-generic #56-Ubuntu SMP Tue Jun 4 22:49:08 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 4e66cb9 |
   | Default Java | 1.8.0_212 |
   | unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-1113/1/artifact/out/patch-unit-hadoop-ozone.txt |
   | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1113/1/testReport/ |
   | Max. process+thread count | 5057 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/container-service U: hadoop-hdds/container-service |
   | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1113/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 


[jira] [Work logged] (HDDS-1795) Implement S3 Delete Bucket request to use Cache and DoubleBuffer

2019-07-19 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1795?focusedWorklogId=279697&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-279697
 ]

ASF GitHub Bot logged work on HDDS-1795:


Author: ASF GitHub Bot
Created on: 19/Jul/19 12:11
Start Date: 19/Jul/19 12:11
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1097: HDDS-1795. 
Implement S3 Delete Bucket request to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/1097#issuecomment-513205733
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 40 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 5 new or modified test files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 30 | Maven dependency ordering for branch |
   | +1 | mvninstall | 529 | trunk passed |
   | +1 | compile | 259 | trunk passed |
   | +1 | checkstyle | 74 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 881 | branch has no errors when building and testing our client artifacts. |
   | +1 | javadoc | 152 | trunk passed |
   | 0 | spotbugs | 306 | Used deprecated FindBugs config; considering switching to SpotBugs. |
   | +1 | findbugs | 500 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 18 | Maven dependency ordering for patch |
   | +1 | mvninstall | 456 | the patch passed |
   | +1 | compile | 247 | the patch passed |
   | +1 | javac | 247 | the patch passed |
   | +1 | checkstyle | 68 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 626 | patch has no errors when building and testing our client artifacts. |
   | +1 | javadoc | 143 | the patch passed |
   | +1 | findbugs | 521 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 274 | hadoop-hdds in the patch passed. |
   | -1 | unit | 1783 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 44 | The patch does not generate ASF License warnings. |
   | | | 6793 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.container.ozoneimpl.TestSecureOzoneContainer |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.ozone.client.rpc.TestCloseContainerHandlingByClient |
   |   | hadoop.ozone.client.rpc.TestCommitWatcher |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.ozone.container.server.TestSecureContainerServer |
   |   | hadoop.ozone.client.rpc.TestOzoneClientRetriesOnException |
   |   | hadoop.ozone.client.rpc.TestMultiBlockWritesWithDnFailures |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=18.09.8 Server=18.09.8 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1097/3/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1097 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux d13df8489eaf 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 4e66cb9 |
   | Default Java | 1.8.0_212 |
   | unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-1097/3/artifact/out/patch-unit-hadoop-ozone.txt |
   | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1097/3/testReport/ |
   | Max. process+thread count | 4768 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/common hadoop-ozone/ozone-manager U: hadoop-ozone |
   | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1097/3/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 



Issue Time Tracking
---

Worklog Id: (was: 279697)
Time Spent: 50m  (was: 40m)

> Implement S3 Delete Bucket request to use Cache and DoubleBuffer
> 
>
>

[jira] [Work logged] (HDDS-1585) Add LICENSE.txt and NOTICE.txt to Ozone Recon Web

2019-07-19 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1585?focusedWorklogId=279707&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-279707
 ]

ASF GitHub Bot logged work on HDDS-1585:


Author: ASF GitHub Bot
Created on: 19/Jul/19 12:31
Start Date: 19/Jul/19 12:31
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #1064: 
HDDS-1585. Add LICENSE.txt and NOTICE.txt to Ozone Recon Web
URL: https://github.com/apache/hadoop/pull/1064#discussion_r305335385
 
 

 ##
 File path: 
hadoop-ozone/ozone-recon/src/main/resources/webapps/recon/ozone-recon-web/LICENSE
 ##
 @@ -0,0 +1,17280 @@
+
+
+
+THE FOLLOWING SETS FORTH ATTRIBUTION NOTICES FOR THIRD PARTY SOFTWARE THAT MAY 
BE CONTAINED IN PORTIONS OF THE OZONE RECON PRODUCT.
+
+-
+
+The following software may be included in this product: 
@ant-design/create-react-context, create-react-context. A copy of the source 
code may be downloaded from https://github.com/ant-design/create-react-context 
(@ant-design/create-react-context), 
https://github.com/thejameskyle/create-react-context (create-react-context). 
This software contains the following license and notice below:
+
+Copyright (c) 2017-present James Kyle 
+
+Permission is hereby granted, free of charge, to any person obtaining a copy
+of this software and associated documentation files (the "Software"), to deal
+in the Software without restriction, including without limitation the rights
+to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+copies of the Software, and to permit persons to whom the Software is
+furnished to do so, subject to the following conditions:
+
+The above copyright notice and this permission notice shall be included in all
+copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+SOFTWARE.
+
+-
+
+The following software may be included in this product: @babel/code-frame, 
@babel/helper-annotate-as-pure, @babel/helper-get-function-arity, 
@babel/helper-member-expression-to-functions, @babel/helper-module-imports, 
@babel/helper-optimise-call-expression, @babel/helper-plugin-utils, 
@babel/highlight, @babel/preset-react. A copy of the source code may be 
downloaded from 
https://github.com/babel/babel/tree/master/packages/babel-code-frame 
(@babel/code-frame), 
https://github.com/babel/babel/tree/master/packages/babel-helper-annotate-as-pure
 (@babel/helper-annotate-as-pure), 
https://github.com/babel/babel/tree/master/packages/babel-helper-get-function-arity
 (@babel/helper-get-function-arity), 
https://github.com/babel/babel/tree/master/packages/babel-helper-member-expression-to-functions
 (@babel/helper-member-expression-to-functions), 
https://github.com/babel/babel/tree/master/packages/babel-helper-module-imports 
(@babel/helper-module-imports), 
https://github.com/babel/babel/tree/master/packages/babel-helper-optimise-call-expression
 (@babel/helper-optimise-call-expression), 
https://github.com/babel/babel/tree/master/packages/babel-helper-plugin-utils 
(@babel/helper-plugin-utils), 
https://github.com/babel/babel/tree/master/packages/babel-highlight 
(@babel/highlight), 
https://github.com/babel/babel/tree/master/packages/babel-preset-react 
(@babel/preset-react). This software contains the following license and notice 
below:
+
+MIT License
+
+Copyright (c) 2014-2018 Sebastian McKenzie 
+
+Permission is hereby granted, free of charge, to any person obtaining
+a copy of this software and associated documentation files (the
+"Software"), to deal in the Software without restriction, including
+without limitation the rights to use, copy, modify, merge, publish,
+distribute, sublicense, and/or sell copies of the Software, and to
+permit persons to whom the Software is furnished to do so, subject to
+the following conditions:
+
+The above copyright notice and this permission notice shall be
+included in all copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
+LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
+OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
+WITH THE SOFTWARE OR THE

[jira] [Work logged] (HDDS-1585) Add LICENSE.txt and NOTICE.txt to Ozone Recon Web

2019-07-19 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1585?focusedWorklogId=279705&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-279705
 ]

ASF GitHub Bot logged work on HDDS-1585:


Author: ASF GitHub Bot
Created on: 19/Jul/19 12:31
Start Date: 19/Jul/19 12:31
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #1064: 
HDDS-1585. Add LICENSE.txt and NOTICE.txt to Ozone Recon Web
URL: https://github.com/apache/hadoop/pull/1064#discussion_r305335360
 
 

 ##
 File path: 
hadoop-ozone/ozone-recon/src/main/resources/webapps/recon/ozone-recon-web/LICENSE
 ##
 @@ -0,0 +1,17280 @@
+
+
+

[jira] [Work logged] (HDDS-1585) Add LICENSE.txt and NOTICE.txt to Ozone Recon Web

2019-07-19 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1585?focusedWorklogId=279706&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-279706
 ]

ASF GitHub Bot logged work on HDDS-1585:


Author: ASF GitHub Bot
Created on: 19/Jul/19 12:31
Start Date: 19/Jul/19 12:31
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #1064: 
HDDS-1585. Add LICENSE.txt and NOTICE.txt to Ozone Recon Web
URL: https://github.com/apache/hadoop/pull/1064#discussion_r305335374
 
 

 ##
 File path: 
hadoop-ozone/ozone-recon/src/main/resources/webapps/recon/ozone-recon-web/LICENSE
 ##
 @@ -0,0 +1,17280 @@
+
+
+
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
+LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
+OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
+WITH THE SOFTWARE OR THE
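The attribution blocks quoted above all follow one pattern: a list of package names, the source-download URLs, then the verbatim license text, separated by dash rules. As a hypothetical illustration (this is not the actual Recon build tooling; all names and the entry structure are assumptions), such a file can be assembled mechanically from dependency metadata:

```python
# Hypothetical sketch of rendering third-party attribution blocks in the
# style of the LICENSE file quoted above. The entry structure is assumed,
# not taken from the actual Ozone Recon build.

SEPARATOR = "-" * 77


def render_attribution(entries):
    """Render attribution blocks separated by dash rules.

    Each entry is a dict with:
      'packages'     - list of package names covered by one license text
      'urls'         - list of (url, package) pairs for source downloads
      'license_text' - the verbatim license and notice text
    """
    parts = []
    for entry in entries:
        names = ", ".join(entry["packages"])
        sources = ", ".join(
            f"{url} ({pkg})" for url, pkg in entry["urls"]
        )
        parts.append(
            "The following software may be included in this product: "
            f"{names}. A copy of the source code may be downloaded from "
            f"{sources}. This software contains the following license "
            f"and notice below:\n\n{entry['license_text']}"
        )
    # Join the blocks with a dash rule, matching the quoted layout.
    return ("\n\n" + SEPARATOR + "\n\n").join(parts)
```

Tools such as `license-checker` for npm projects emit dependency metadata in roughly this shape, which is one common way files like this 17,280-line LICENSE are produced.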

[jira] [Work logged] (HDDS-1585) Add LICENSE.txt and NOTICE.txt to Ozone Recon Web

2019-07-19 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1585?focusedWorklogId=279719&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-279719
 ]

ASF GitHub Bot logged work on HDDS-1585:


Author: ASF GitHub Bot
Created on: 19/Jul/19 12:32
Start Date: 19/Jul/19 12:32
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #1064: 
HDDS-1585. Add LICENSE.txt and NOTICE.txt to Ozone Recon Web
URL: https://github.com/apache/hadoop/pull/1064#discussion_r305335736
 
 

 ##
 File path: 
hadoop-ozone/ozone-recon/src/main/resources/webapps/recon/ozone-recon-web/LICENSE
 ##
 @@ -0,0 +1,17280 @@

[jira] [Work logged] (HDDS-1585) Add LICENSE.txt and NOTICE.txt to Ozone Recon Web

2019-07-19 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1585?focusedWorklogId=279713&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-279713
 ]

ASF GitHub Bot logged work on HDDS-1585:


Author: ASF GitHub Bot
Created on: 19/Jul/19 12:31
Start Date: 19/Jul/19 12:31
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #1064: 
HDDS-1585. Add LICENSE.txt and NOTICE.txt to Ozone Recon Web
URL: https://github.com/apache/hadoop/pull/1064#discussion_r305335454
 
 

 ##
 File path: 
hadoop-ozone/ozone-recon/src/main/resources/webapps/recon/ozone-recon-web/LICENSE
 ##
 @@ -0,0 +1,17280 @@

[jira] [Work logged] (HDDS-1585) Add LICENSE.txt and NOTICE.txt to Ozone Recon Web

2019-07-19 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1585?focusedWorklogId=279709&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-279709
 ]

ASF GitHub Bot logged work on HDDS-1585:


Author: ASF GitHub Bot
Created on: 19/Jul/19 12:31
Start Date: 19/Jul/19 12:31
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #1064: 
HDDS-1585. Add LICENSE.txt and NOTICE.txt to Ozone Recon Web
URL: https://github.com/apache/hadoop/pull/1064#discussion_r305335404
 
 

 ##
 File path: 
hadoop-ozone/ozone-recon/src/main/resources/webapps/recon/ozone-recon-web/LICENSE
 ##
 @@ -0,0 +1,17280 @@

[jira] [Work logged] (HDDS-1585) Add LICENSE.txt and NOTICE.txt to Ozone Recon Web

2019-07-19 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1585?focusedWorklogId=279717&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-279717
 ]

ASF GitHub Bot logged work on HDDS-1585:


Author: ASF GitHub Bot
Created on: 19/Jul/19 12:32
Start Date: 19/Jul/19 12:32
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #1064: 
HDDS-1585. Add LICENSE.txt and NOTICE.txt to Ozone Recon Web
URL: https://github.com/apache/hadoop/pull/1064#discussion_r305335515
 
 

 ##
 File path: 
hadoop-ozone/ozone-recon/src/main/resources/webapps/recon/ozone-recon-web/LICENSE
 ##
 @@ -0,0 +1,17280 @@

[jira] [Work logged] (HDDS-1585) Add LICENSE.txt and NOTICE.txt to Ozone Recon Web

2019-07-19 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1585?focusedWorklogId=279714&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-279714
 ]

ASF GitHub Bot logged work on HDDS-1585:


Author: ASF GitHub Bot
Created on: 19/Jul/19 12:31
Start Date: 19/Jul/19 12:31
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #1064: 
HDDS-1585. Add LICENSE.txt and NOTICE.txt to Ozone Recon Web
URL: https://github.com/apache/hadoop/pull/1064#discussion_r305335468
 
 

 ##
 File path: 
hadoop-ozone/ozone-recon/src/main/resources/webapps/recon/ozone-recon-web/LICENSE
 ##
 @@ -0,0 +1,17280 @@

[jira] [Work logged] (HDDS-1585) Add LICENSE.txt and NOTICE.txt to Ozone Recon Web

2019-07-19 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1585?focusedWorklogId=279711&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-279711
 ]

ASF GitHub Bot logged work on HDDS-1585:


Author: ASF GitHub Bot
Created on: 19/Jul/19 12:31
Start Date: 19/Jul/19 12:31
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #1064: 
HDDS-1585. Add LICENSE.txt and NOTICE.txt to Ozone Recon Web
URL: https://github.com/apache/hadoop/pull/1064#discussion_r305335422
 
 

 ##
 File path: 
hadoop-ozone/ozone-recon/src/main/resources/webapps/recon/ozone-recon-web/LICENSE
 ##
 @@ -0,0 +1,17280 @@

[jira] [Work logged] (HDDS-1585) Add LICENSE.txt and NOTICE.txt to Ozone Recon Web

2019-07-19 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1585?focusedWorklogId=279715&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-279715
 ]

ASF GitHub Bot logged work on HDDS-1585:


Author: ASF GitHub Bot
Created on: 19/Jul/19 12:32
Start Date: 19/Jul/19 12:32
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #1064: 
HDDS-1585. Add LICENSE.txt and NOTICE.txt to Ozone Recon Web
URL: https://github.com/apache/hadoop/pull/1064#discussion_r305335480
 
 

 ##
 File path: 
hadoop-ozone/ozone-recon/src/main/resources/webapps/recon/ozone-recon-web/LICENSE
 ##
 @@ -0,0 +1,17280 @@
+
+
+
+THE FOLLOWING SETS FORTH ATTRIBUTION NOTICES FOR THIRD PARTY SOFTWARE THAT MAY 
BE CONTAINED IN PORTIONS OF THE OZONE RECON PRODUCT.
+
+-----------------------------------------------------------------------------
+
+The following software may be included in this product: 
@ant-design/create-react-context, create-react-context. A copy of the source 
code may be downloaded from https://github.com/ant-design/create-react-context 
(@ant-design/create-react-context), 
https://github.com/thejameskyle/create-react-context (create-react-context). 
This software contains the following license and notice below:
+
+Copyright (c) 2017-present James Kyle 
+
+Permission is hereby granted, free of charge, to any person obtaining a copy
+of this software and associated documentation files (the "Software"), to deal
+in the Software without restriction, including without limitation the rights
+to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+copies of the Software, and to permit persons to whom the Software is
+furnished to do so, subject to the following conditions:
+
+The above copyright notice and this permission notice shall be included in all
+copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+SOFTWARE.
+
+-----------------------------------------------------------------------------
+
+The following software may be included in this product: @babel/code-frame, 
@babel/helper-annotate-as-pure, @babel/helper-get-function-arity, 
@babel/helper-member-expression-to-functions, @babel/helper-module-imports, 
@babel/helper-optimise-call-expression, @babel/helper-plugin-utils, 
@babel/highlight, @babel/preset-react. A copy of the source code may be 
downloaded from 
https://github.com/babel/babel/tree/master/packages/babel-code-frame 
(@babel/code-frame), 
https://github.com/babel/babel/tree/master/packages/babel-helper-annotate-as-pure
 (@babel/helper-annotate-as-pure), 
https://github.com/babel/babel/tree/master/packages/babel-helper-get-function-arity
 (@babel/helper-get-function-arity), 
https://github.com/babel/babel/tree/master/packages/babel-helper-member-expression-to-functions
 (@babel/helper-member-expression-to-functions), 
https://github.com/babel/babel/tree/master/packages/babel-helper-module-imports 
(@babel/helper-module-imports), 
https://github.com/babel/babel/tree/master/packages/babel-helper-optimise-call-expression
 (@babel/helper-optimise-call-expression), 
https://github.com/babel/babel/tree/master/packages/babel-helper-plugin-utils 
(@babel/helper-plugin-utils), 
https://github.com/babel/babel/tree/master/packages/babel-highlight 
(@babel/highlight), 
https://github.com/babel/babel/tree/master/packages/babel-preset-react 
(@babel/preset-react). This software contains the following license and notice 
below:
+
+MIT License
+
+Copyright (c) 2014-2018 Sebastian McKenzie 
+
+Permission is hereby granted, free of charge, to any person obtaining
+a copy of this software and associated documentation files (the
+"Software"), to deal in the Software without restriction, including
+without limitation the rights to use, copy, modify, merge, publish,
+distribute, sublicense, and/or sell copies of the Software, and to
+permit persons to whom the Software is furnished to do so, subject to
+the following conditions:
+
+The above copyright notice and this permission notice shall be
+included in all copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
+LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
+OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
+WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

[jira] [Work logged] (HDDS-1585) Add LICENSE.txt and NOTICE.txt to Ozone Recon Web

2019-07-19 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1585?focusedWorklogId=279718&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-279718
 ]

ASF GitHub Bot logged work on HDDS-1585:


Author: ASF GitHub Bot
Created on: 19/Jul/19 12:32
Start Date: 19/Jul/19 12:32
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #1064: 
HDDS-1585. Add LICENSE.txt and NOTICE.txt to Ozone Recon Web
URL: https://github.com/apache/hadoop/pull/1064#discussion_r305335719
 
 


[jira] [Work logged] (HDDS-1585) Add LICENSE.txt and NOTICE.txt to Ozone Recon Web

2019-07-19 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1585?focusedWorklogId=279708&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-279708
 ]

ASF GitHub Bot logged work on HDDS-1585:


Author: ASF GitHub Bot
Created on: 19/Jul/19 12:31
Start Date: 19/Jul/19 12:31
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #1064: 
HDDS-1585. Add LICENSE.txt and NOTICE.txt to Ozone Recon Web
URL: https://github.com/apache/hadoop/pull/1064#discussion_r305335391
 
 


[jira] [Work logged] (HDDS-1585) Add LICENSE.txt and NOTICE.txt to Ozone Recon Web

2019-07-19 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1585?focusedWorklogId=279710&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-279710
 ]

ASF GitHub Bot logged work on HDDS-1585:


Author: ASF GitHub Bot
Created on: 19/Jul/19 12:31
Start Date: 19/Jul/19 12:31
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #1064: 
HDDS-1585. Add LICENSE.txt and NOTICE.txt to Ozone Recon Web
URL: https://github.com/apache/hadoop/pull/1064#discussion_r305335459
 
 


[jira] [Work logged] (HDDS-1585) Add LICENSE.txt and NOTICE.txt to Ozone Recon Web

2019-07-19 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1585?focusedWorklogId=279712&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-279712
 ]

ASF GitHub Bot logged work on HDDS-1585:


Author: ASF GitHub Bot
Created on: 19/Jul/19 12:31
Start Date: 19/Jul/19 12:31
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #1064: 
HDDS-1585. Add LICENSE.txt and NOTICE.txt to Ozone Recon Web
URL: https://github.com/apache/hadoop/pull/1064#discussion_r305335447
 
 


[jira] [Work logged] (HDDS-1585) Add LICENSE.txt and NOTICE.txt to Ozone Recon Web

2019-07-19 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1585?focusedWorklogId=279722&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-279722
 ]

ASF GitHub Bot logged work on HDDS-1585:


Author: ASF GitHub Bot
Created on: 19/Jul/19 12:33
Start Date: 19/Jul/19 12:33
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #1064: 
HDDS-1585. Add LICENSE.txt and NOTICE.txt to Ozone Recon Web
URL: https://github.com/apache/hadoop/pull/1064#discussion_r305335995
 
 


[jira] [Work logged] (HDDS-1585) Add LICENSE.txt and NOTICE.txt to Ozone Recon Web

2019-07-19 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1585?focusedWorklogId=279720&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-279720
 ]

ASF GitHub Bot logged work on HDDS-1585:


Author: ASF GitHub Bot
Created on: 19/Jul/19 12:33
Start Date: 19/Jul/19 12:33
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #1064: 
HDDS-1585. Add LICENSE.txt and NOTICE.txt to Ozone Recon Web
URL: https://github.com/apache/hadoop/pull/1064#discussion_r305335934
 
 

+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
+LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
+OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
+WITH THE SOFTWARE OR THE

[jira] [Work logged] (HDDS-1585) Add LICENSE.txt and NOTICE.txt to Ozone Recon Web

2019-07-19 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1585?focusedWorklogId=279723&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-279723
 ]

ASF GitHub Bot logged work on HDDS-1585:


Author: ASF GitHub Bot
Created on: 19/Jul/19 12:33
Start Date: 19/Jul/19 12:33
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #1064: 
HDDS-1585. Add LICENSE.txt and NOTICE.txt to Ozone Recon Web
URL: https://github.com/apache/hadoop/pull/1064#discussion_r305335985
 
 


[jira] [Work logged] (HDDS-1585) Add LICENSE.txt and NOTICE.txt to Ozone Recon Web

2019-07-19 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1585?focusedWorklogId=279724&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-279724
 ]

ASF GitHub Bot logged work on HDDS-1585:


Author: ASF GitHub Bot
Created on: 19/Jul/19 12:33
Start Date: 19/Jul/19 12:33
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #1064: 
HDDS-1585. Add LICENSE.txt and NOTICE.txt to Ozone Recon Web
URL: https://github.com/apache/hadoop/pull/1064#discussion_r305335950
 
 


[jira] [Work logged] (HDDS-1585) Add LICENSE.txt and NOTICE.txt to Ozone Recon Web

2019-07-19 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1585?focusedWorklogId=279721&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-279721
 ]

ASF GitHub Bot logged work on HDDS-1585:


Author: ASF GitHub Bot
Created on: 19/Jul/19 12:33
Start Date: 19/Jul/19 12:33
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #1064: 
HDDS-1585. Add LICENSE.txt and NOTICE.txt to Ozone Recon Web
URL: https://github.com/apache/hadoop/pull/1064#discussion_r305335980
 
 


[jira] [Work logged] (HDDS-1585) Add LICENSE.txt and NOTICE.txt to Ozone Recon Web

2019-07-19 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1585?focusedWorklogId=279726&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-279726
 ]

ASF GitHub Bot logged work on HDDS-1585:


Author: ASF GitHub Bot
Created on: 19/Jul/19 12:33
Start Date: 19/Jul/19 12:33
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #1064: 
HDDS-1585. Add LICENSE.txt and NOTICE.txt to Ozone Recon Web
URL: https://github.com/apache/hadoop/pull/1064#discussion_r305336085
 
 


[jira] [Work logged] (HDDS-1585) Add LICENSE.txt and NOTICE.txt to Ozone Recon Web

2019-07-19 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1585?focusedWorklogId=279725&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-279725
 ]

ASF GitHub Bot logged work on HDDS-1585:


Author: ASF GitHub Bot
Created on: 19/Jul/19 12:33
Start Date: 19/Jul/19 12:33
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #1064: 
HDDS-1585. Add LICENSE.txt and NOTICE.txt to Ozone Recon Web
URL: https://github.com/apache/hadoop/pull/1064#discussion_r305335966
 
 


[jira] [Work logged] (HDDS-1659) Define the process to add proposal/design docs to the Ozone subproject

2019-07-19 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1659?focusedWorklogId=279731&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-279731
 ]

ASF GitHub Bot logged work on HDDS-1659:


Author: ASF GitHub Bot
Created on: 19/Jul/19 12:56
Start Date: 19/Jul/19 12:56
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #950: HDDS-1659. Define 
the process to add proposal/design docs to the Ozone subproject
URL: https://github.com/apache/hadoop/pull/950#issuecomment-513218441
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 35 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 488 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 1260 | branch has no errors when building and testing 
our client artifacts. |
   | -0 | patch | 1342 | Used diff version of patch file. Binary files and 
potentially other changes not applied. Please rebase and squash commits if 
necessary. |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 446 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 673 | patch has no errors when building and testing 
our client artifacts. |
   ||| _ Other Tests _ |
   | +1 | asflicense | 56 | The patch does not generate ASF License warnings. |
   | | | 2636 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=18.09.8 Server=18.09.8 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-950/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/950 |
   | Optional Tests | dupname asflicense mvnsite |
   | uname | Linux d1de1989d91f 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 4e66cb9 |
   | Max. process+thread count | 447 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/docs U: hadoop-hdds/docs |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-950/2/console |
   | versions | git=2.7.4 maven=3.3.9 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 279731)
Time Spent: 2h 40m  (was: 2.5h)

> Define the process to add proposal/design docs to the Ozone subproject
> --
>
> Key: HDDS-1659
> URL: https://issues.apache.org/jira/browse/HDDS-1659
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.1
>
>  Time Spent: 2h 40m
>  Remaining Estimate: 0h
>
> We think that it would be more effective to collect all the design docs in 
> one place and make it easier to review them by the community.
> We propose to follow an approach where the proposals are committed to the 
> hadoop-hdds/docs project and the review can be the same as a review of a PR



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1725) pv-test example to test csi is not working

2019-07-19 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1725?focusedWorklogId=279734&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-279734
 ]

ASF GitHub Bot logged work on HDDS-1725:


Author: ASF GitHub Bot
Created on: 19/Jul/19 13:00
Start Date: 19/Jul/19 13:00
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1070: HDDS-1725. 
pv-test example to test csi is not working
URL: https://github.com/apache/hadoop/pull/1070#issuecomment-513219383
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 35 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | 0 | yamllint | 0 | yamllint was not available. |
   | 0 | shelldocs | 1 | Shelldocs was not available. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 498 | trunk passed |
   | +1 | compile | 280 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 767 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 177 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 474 | the patch passed |
   | +1 | compile | 280 | the patch passed |
   | +1 | javac | 280 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | shellcheck | 0 | There were no new shellcheck issues. |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | -1 | shadedclient | 652 | patch has errors when building and testing our 
client artifacts. |
   | -1 | javadoc | 35 | hadoop-hdds in the patch failed. |
   ||| _ Other Tests _ |
   | -1 | unit | 295 | hadoop-hdds in the patch failed. |
   | -1 | unit | 4118 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 48 | The patch does not generate ASF License warnings. |
   | | | 7952 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdds.scm.container.TestReplicationManager |
   |   | hadoop.ozone.om.TestOzoneManagerHA |
   |   | hadoop.ozone.scm.TestContainerSmallFile |
   |   | 
hadoop.ozone.container.common.statemachine.commandhandler.TestBlockDeletion |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.ozone.client.rpc.TestCloseContainerHandlingByClient |
   |   | hadoop.ozone.container.ozoneimpl.TestSecureOzoneContainer |
   |   | hadoop.ozone.web.client.TestKeys |
   |   | hadoop.ozone.TestStorageContainerManager |
   |   | hadoop.ozone.client.rpc.Test2WayCommitInRatis |
   |   | hadoop.ozone.client.rpc.TestFailureHandlingByClient |
   |   | hadoop.ozone.client.rpc.TestOzoneClientRetriesOnException |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.scm.TestAllocateContainer |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.scm.TestGetCommittedBlockLengthAndPutKey |
   |   | hadoop.ozone.client.rpc.TestMultiBlockWritesWithDnFailures |
   |   | hadoop.ozone.container.server.TestSecureContainerServer |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=18.09.8 Server=18.09.8 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1070/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1070 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient yamllint shellcheck shelldocs |
   | uname | Linux 923ac5fd36aa 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 4e66cb9 |
   | Default Java | 1.8.0_212 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1070/2/artifact/out/patch-javadoc-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1070/2/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1070/2/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1070/2/testReport/ |
   | Max. process+thread count | 4789 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/dist U: hadoop-ozone/dist |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1070/2/console |
   | versions | git=2.7.4 maven=3.3.9 shellcheck=0.4.6 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.

[jira] [Work logged] (HDDS-1710) Publish JVM metrics via Hadoop metrics

2019-07-19 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1710?focusedWorklogId=279742&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-279742
 ]

ASF GitHub Bot logged work on HDDS-1710:


Author: ASF GitHub Bot
Created on: 19/Jul/19 13:20
Start Date: 19/Jul/19 13:20
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #994: HDDS-1710. 
Publish JVM metrics via Hadoop metrics
URL: https://github.com/apache/hadoop/pull/994#issuecomment-513225921
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 61 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 23 | Maven dependency ordering for branch |
   | +1 | mvninstall | 542 | trunk passed |
   | +1 | compile | 272 | trunk passed |
   | +1 | checkstyle | 64 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 818 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 148 | trunk passed |
   | 0 | spotbugs | 330 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 540 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 31 | Maven dependency ordering for patch |
   | -1 | mvninstall | 331 | hadoop-ozone in the patch failed. |
   | -1 | compile | 63 | hadoop-ozone in the patch failed. |
   | -1 | javac | 63 | hadoop-ozone in the patch failed. |
   | -0 | checkstyle | 43 | hadoop-hdds: The patch generated 5 new + 0 
unchanged - 0 fixed = 5 total (was 0) |
   | -0 | checkstyle | 39 | hadoop-ozone: The patch generated 2 new + 0 
unchanged - 0 fixed = 2 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 669 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 67 | hadoop-hdds generated 1 new + 14 unchanged - 0 fixed = 
15 total (was 14) |
   | -1 | findbugs | 108 | hadoop-ozone in the patch failed. |
   ||| _ Other Tests _ |
   | +1 | unit | 328 | hadoop-hdds in the patch passed. |
   | -1 | unit | 109 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 36 | The patch does not generate ASF License warnings. |
   | | | 4960 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=18.09.8 Server=18.09.8 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-994/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/994 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 72f2d85f4faf 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 4e66cb9 |
   | Default Java | 1.8.0_212 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-994/4/artifact/out/patch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-994/4/artifact/out/patch-compile-hadoop-ozone.txt
 |
   | javac | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-994/4/artifact/out/patch-compile-hadoop-ozone.txt
 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-994/4/artifact/out/diff-checkstyle-hadoop-hdds.txt
 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-994/4/artifact/out/diff-checkstyle-hadoop-ozone.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-994/4/artifact/out/diff-javadoc-javadoc-hadoop-hdds.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-994/4/artifact/out/patch-findbugs-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-994/4/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-994/4/testReport/ |
   | Max. process+thread count | 507 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-hdds/container-service 
hadoop-hdds/server-scm hadoop-ozone/ozone-manager hadoop-ozone/tools U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-994/4/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |

[jira] [Work logged] (HDDS-1713) ReplicationManager fail to find proper node topology based on Datanode details from heartbeat

2019-07-19 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1713?focusedWorklogId=279751&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-279751
 ]

ASF GitHub Bot logged work on HDDS-1713:


Author: ASF GitHub Bot
Created on: 19/Jul/19 13:39
Start Date: 19/Jul/19 13:39
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1008: HDDS-1713. 
ReplicationManager fail to find proper node topology based…
URL: https://github.com/apache/hadoop/pull/1008#issuecomment-513232255
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 39 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 508 | trunk passed |
   | +1 | compile | 259 | trunk passed |
   | +1 | checkstyle | 64 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 823 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 152 | trunk passed |
   | 0 | spotbugs | 332 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 528 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 452 | the patch passed |
   | +1 | compile | 261 | the patch passed |
   | +1 | javac | 261 | the patch passed |
   | +1 | checkstyle | 71 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 637 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 161 | the patch passed |
   | +1 | findbugs | 566 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 287 | hadoop-hdds in the patch passed. |
   | -1 | unit | 1571 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 43 | The patch does not generate ASF License warnings. |
   | | | 6561 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.ozone.container.ozoneimpl.TestSecureOzoneContainer |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestOzoneClientRetriesOnException |
   |   | hadoop.ozone.scm.node.TestQueryNode |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.ozone.client.rpc.TestFailureHandlingByClient |
   |   | 
hadoop.ozone.container.common.statemachine.commandhandler.TestCloseContainerByPipeline
 |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestBlockOutputStream |
   |   | hadoop.ozone.client.rpc.TestBlockOutputStreamWithFailures |
   |   | hadoop.ozone.client.rpc.TestCommitWatcher |
   |   | hadoop.ozone.client.rpc.TestMultiBlockWritesWithDnFailures |
   |   | hadoop.ozone.client.rpc.TestKeyInputStream |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.ozone.web.client.TestKeysRatis |
   |   | hadoop.hdds.scm.pipeline.TestRatisPipelineCreateAndDestory |
   |   | hadoop.ozone.client.rpc.TestCloseContainerHandlingByClient |
   |   | hadoop.ozone.container.server.TestSecureContainerServer |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=18.09.8 Server=18.09.8 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1008/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1008 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 3786908b30e1 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 4e66cb9 |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1008/2/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1008/2/testReport/ |
   | Max. process+thread count | 4839 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/server-scm U: hadoop-hdds/server-scm |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1008/2/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

[jira] [Work logged] (HDDS-1682) TestEventWatcher.testMetrics is flaky

2019-07-19 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1682?focusedWorklogId=279758&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-279758
 ]

ASF GitHub Bot logged work on HDDS-1682:


Author: ASF GitHub Bot
Created on: 19/Jul/19 13:51
Start Date: 19/Jul/19 13:51
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #962: HDDS-1682. 
TestEventWatcher.testMetrics is flaky
URL: https://github.com/apache/hadoop/pull/962#issuecomment-513236492
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 36 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 469 | trunk passed |
   | +1 | compile | 253 | trunk passed |
   | +1 | checkstyle | 63 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 787 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 147 | trunk passed |
   | 0 | spotbugs | 302 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 488 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 433 | the patch passed |
   | +1 | compile | 256 | the patch passed |
   | +1 | javac | 256 | the patch passed |
   | +1 | checkstyle | 80 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 661 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 154 | the patch passed |
   | +1 | findbugs | 513 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 278 | hadoop-hdds in the patch failed. |
   | -1 | unit | 1498 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 40 | The patch does not generate ASF License warnings. |
   | | | 6314 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdds.scm.container.placement.algorithms.TestSCMContainerPlacementRackAware
 |
   |   | hadoop.hdds.scm.block.TestBlockManager |
   |   | hadoop.ozone.client.rpc.TestBlockOutputStreamWithFailures |
   |   | hadoop.ozone.client.rpc.TestCloseContainerHandlingByClient |
   |   | hadoop.ozone.client.rpc.TestOzoneClientRetriesOnException |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestMultiBlockWritesWithDnFailures |
   |   | hadoop.ozone.container.server.TestSecureContainerServer |
   |   | hadoop.ozone.container.ozoneimpl.TestSecureOzoneContainer |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.hdds.scm.pipeline.TestRatisPipelineCreateAndDestory |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestFailureHandlingByClient |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=18.09.8 Server=18.09.8 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-962/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/962 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux f1116a5f31ca 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 4e66cb9 |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-962/2/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-962/2/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-962/2/testReport/ |
   | Max. process+thread count | 5050 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/framework U: hadoop-hdds/framework |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-962/2/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 279

[jira] [Work logged] (HDDS-1686) Remove check to get from openKeyTable in acl implementation for Keys

2019-07-19 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1686?focusedWorklogId=279767&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-279767
 ]

ASF GitHub Bot logged work on HDDS-1686:


Author: ASF GitHub Bot
Created on: 19/Jul/19 14:01
Start Date: 19/Jul/19 14:01
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #966: HDDS-1686. Remove 
check to get from openKeyTable in acl implementatio…
URL: https://github.com/apache/hadoop/pull/966#issuecomment-513239998
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 61 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 540 | trunk passed |
   | +1 | compile | 255 | trunk passed |
   | +1 | checkstyle | 64 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 829 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 157 | trunk passed |
   | 0 | spotbugs | 332 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 540 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 477 | the patch passed |
   | +1 | compile | 257 | the patch passed |
   | +1 | javac | 257 | the patch passed |
   | +1 | checkstyle | 68 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 643 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 156 | the patch passed |
   | +1 | findbugs | 551 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 236 | hadoop-hdds in the patch failed. |
   | -1 | unit | 2237 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 39 | The patch does not generate ASF License warnings. |
   | | | 7257 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.ozone.container.common.TestDatanodeStateMachine |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestFailureHandlingByClient |
   |   | hadoop.hdds.scm.pipeline.TestRatisPipelineCreateAndDestory |
   |   | hadoop.ozone.client.rpc.TestMultiBlockWritesWithDnFailures |
   |   | hadoop.ozone.client.rpc.Test2WayCommitInRatis |
   |   | hadoop.ozone.container.ozoneimpl.TestSecureOzoneContainer |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.ozone.client.rpc.TestOzoneClientRetriesOnException |
   |   | hadoop.ozone.container.server.TestSecureContainerServer |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=18.09.8 Server=18.09.8 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-966/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/966 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux adc33fbfedcc 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 4e66cb9 |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-966/2/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-966/2/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-966/2/testReport/ |
   | Max. process+thread count | 4955 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/ozone-manager U: hadoop-ozone/ozone-manager |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-966/2/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Work logged] (HDDS-1679) TestBCSID failing because of dangling db references

2019-07-19 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1679?focusedWorklogId=279777&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-279777
 ]

ASF GitHub Bot logged work on HDDS-1679:


Author: ASF GitHub Bot
Created on: 19/Jul/19 14:24
Start Date: 19/Jul/19 14:24
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #960: HDDS-1679. debug 
patch
URL: https://github.com/apache/hadoop/pull/960#issuecomment-513248631
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 119 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 666 | trunk passed |
   | +1 | compile | 322 | trunk passed |
   | +1 | checkstyle | 90 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 1078 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 189 | trunk passed |
   | 0 | spotbugs | 363 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 605 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 457 | the patch passed |
   | +1 | compile | 273 | the patch passed |
   | +1 | javac | 273 | the patch passed |
   | +1 | checkstyle | 77 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 742 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 170 | the patch passed |
   | +1 | findbugs | 580 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 367 | hadoop-hdds in the patch failed. |
   | -1 | unit | 2235 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 41 | The patch does not generate ASF License warnings. |
   | | | 8160 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdds.scm.container.placement.algorithms.TestSCMContainerPlacementRackAware
 |
   |   | hadoop.ozone.client.rpc.TestMultiBlockWritesWithDnFailures |
   |   | hadoop.ozone.client.rpc.TestOzoneClientRetriesOnException |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestFailureHandlingByClient |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestBlockOutputStreamWithFailures |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.ozone.client.rpc.TestCloseContainerHandlingByClient |
   |   | hadoop.ozone.container.server.TestSecureContainerServer |
   |   | hadoop.ozone.container.ozoneimpl.TestSecureOzoneContainer |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.hdds.scm.pipeline.TestRatisPipelineCreateAndDestory |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=18.09.7 Server=18.09.7 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-960/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/960 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 055d2e75b1de 4.15.0-52-generic #56-Ubuntu SMP Tue Jun 4 
22:49:08 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 4e66cb9 |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-960/2/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-960/2/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-960/2/testReport/ |
   | Max. process+thread count | 4883 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/container-service U: 
hadoop-hdds/container-service |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-960/2/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

[jira] [Work logged] (HDDS-1649) On installSnapshot notification from OM leader, download checkpoint and reload OM state

2019-07-19 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1649?focusedWorklogId=279782&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-279782
 ]

ASF GitHub Bot logged work on HDDS-1649:


Author: ASF GitHub Bot
Created on: 19/Jul/19 14:39
Start Date: 19/Jul/19 14:39
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #948: HDDS-1649. On 
installSnapshot notification from OM leader, download checkpoint and reload OM 
state
URL: https://github.com/apache/hadoop/pull/948#issuecomment-513255104
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 83 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 4 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 23 | Maven dependency ordering for branch |
   | +1 | mvninstall | 524 | trunk passed |
   | +1 | compile | 254 | trunk passed |
   | +1 | checkstyle | 66 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 837 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 149 | trunk passed |
   | 0 | spotbugs | 326 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 522 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 31 | Maven dependency ordering for patch |
   | +1 | mvninstall | 438 | the patch passed |
   | +1 | compile | 274 | the patch passed |
   | +1 | cc | 274 | the patch passed |
   | +1 | javac | 274 | the patch passed |
   | +1 | checkstyle | 76 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 2 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 638 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 154 | the patch passed |
   | +1 | findbugs | 571 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 239 | hadoop-hdds in the patch failed. |
   | -1 | unit | 2317 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 46 | The patch does not generate ASF License warnings. |
   | | | 7392 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.container.ozoneimpl.TestOzoneContainer |
   |   | hadoop.ozone.container.server.TestSecureContainerServer |
   |   | hadoop.ozone.om.snapshot.TestOzoneManagerSnapshotProvider |
   |   | hadoop.ozone.TestSecureOzoneCluster |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.container.ozoneimpl.TestSecureOzoneContainer |
   |   | hadoop.ozone.client.rpc.TestMultiBlockWritesWithDnFailures |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.hdds.scm.pipeline.TestRatisPipelineCreateAndDestory |
   |   | hadoop.ozone.client.rpc.TestFailureHandlingByClient |
   |   | hadoop.ozone.client.rpc.TestOzoneClientRetriesOnException |
   |   | hadoop.ozone.client.rpc.TestBlockOutputStreamWithFailures |
   |   | hadoop.hdds.scm.pipeline.TestPipelineClose |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.ozone.client.rpc.TestCloseContainerHandlingByClient |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=18.09.8 Server=18.09.8 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-948/6/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/948 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle xml cc |
   | uname | Linux 2bc98f91d488 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 6282c02 |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-948/6/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-948/6/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-948/6/testReport/ |
   | Max. process+thread count | 4916 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-ozone/common 
hadoop-ozone/integration-test hadoop-ozone/ozone-manager U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-948/6/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |

[jira] [Work logged] (HDDS-1713) ReplicationManager fail to find proper node topology based on Datanode details from heartbeat

2019-07-19 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1713?focusedWorklogId=279793&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-279793
 ]

ASF GitHub Bot logged work on HDDS-1713:


Author: ASF GitHub Bot
Created on: 19/Jul/19 15:34
Start Date: 19/Jul/19 15:34
Worklog Time Spent: 10m 
  Work Description: nandakumar131 commented on issue #1112: HDDS-1713. 
ReplicationManager fail to find proper node topology based…
URL: https://github.com/apache/hadoop/pull/1112#issuecomment-513275086
 
 
   Thanks @ChenSammi for working on this. Overall the change looks good to me. 
A few of the test failures seem related; can you please take a look at them?
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 279793)
Time Spent: 3h 20m  (was: 3h 10m)

> ReplicationManager fail to find proper node topology based on Datanode 
> details from heartbeat
> -
>
> Key: HDDS-1713
> URL: https://issues.apache.org/jira/browse/HDDS-1713
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Sammi Chen
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 3h 20m
>  Remaining Estimate: 0h
>
> The DN does not include topology info in the heartbeat messages that carry 
> its container reports/pipeline reports.
> SCM is where the topology information is available. When processing a 
> heartbeat, we should not rely on the DatanodeDetails from the report to choose 
> datanodes for closing containers. Otherwise, all the datanode locations of 
> existing container replicas will fall back to /default-rack.
>  
> The fix is to retrieve the corresponding datanode locations from the SCM 
> NodeManager, which has the authoritative network topology information.
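The lookup described in the fix can be sketched as follows. This is a minimal illustration with hypothetical stand-in types (`NodeResolver`, a trimmed `DatanodeDetails`), not SCM's actual NodeManager API: the topology-less record carried in a heartbeat report is swapped for the authoritative record registered with the node manager.

```java
import java.util.HashMap;
import java.util.Map;

// Minimal sketch with hypothetical stand-in types (not SCM's real API):
// prefer the node manager's registered, topology-aware record over the
// topology-less DatanodeDetails carried in a heartbeat report.
public class NodeResolver {
    static class DatanodeDetails {
        final String uuid;
        final String networkLocation; // "/default-rack" when topology is unknown
        DatanodeDetails(String uuid, String networkLocation) {
            this.uuid = uuid;
            this.networkLocation = networkLocation;
        }
    }

    // Stands in for the node manager's registry of heartbeating datanodes.
    private final Map<String, DatanodeDetails> registeredNodes = new HashMap<>();

    void register(DatanodeDetails dn) {
        registeredNodes.put(dn.uuid, dn);
    }

    // Resolve the authoritative record; fall back to the report's copy only
    // if the node is somehow unknown to the node manager.
    DatanodeDetails resolve(DatanodeDetails fromReport) {
        DatanodeDetails known = registeredNodes.get(fromReport.uuid);
        return known != null ? known : fromReport;
    }
}
```

With this shape, replicas reported with `/default-rack` resolve to the rack recorded at registration time instead of polluting replication decisions.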



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1391) Add ability in OM to serve delta updates through an API.

2019-07-19 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1391?focusedWorklogId=279830&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-279830
 ]

ASF GitHub Bot logged work on HDDS-1391:


Author: ASF GitHub Bot
Created on: 19/Jul/19 16:52
Start Date: 19/Jul/19 16:52
Worklog Time Spent: 10m 
  Work Description: avijayanhwx commented on pull request #1033: HDDS-1391 
: Add ability in OM to serve delta updates through an API.
URL: https://github.com/apache/hadoop/pull/1033#discussion_r305440288
 
 

 ##
 File path: 
hadoop-hdds/common/src/main/java/org/apache/hadoop/utils/db/DataNotFoundException.java
 ##
 @@ -0,0 +1,37 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ *  with the License.  You may obtain a copy of the License at
+ *
+ *  http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ */
+
+package org.apache.hadoop.utils.db;
+
+import java.io.IOException;
+
+/**
+ * Thrown if RocksDB is unable to find requested data from WAL file.
+ */
+public class DataNotFoundException extends IOException {
 
 Review comment:
   Will rename this.
 



Issue Time Tracking
---

Worklog Id: (was: 279830)
Time Spent: 1h 40m  (was: 1.5h)

> Add ability in OM to serve delta updates through an API.
> 
>
> Key: HDDS-1391
> URL: https://issues.apache.org/jira/browse/HDDS-1391
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Aravindan Vijayan
>Assignee: Aravindan Vijayan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.1
>
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> Added an RPC endpoint to serve the set of updates in the OM RocksDB from a 
> given sequence number.
> This will be used by Recon (HDDS-1105) to push the data to all the tasks that 
> will keep their aggregate data up to date. 







[jira] [Work logged] (HDDS-1391) Add ability in OM to serve delta updates through an API.

2019-07-19 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1391?focusedWorklogId=279831&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-279831
 ]

ASF GitHub Bot logged work on HDDS-1391:


Author: ASF GitHub Bot
Created on: 19/Jul/19 16:52
Start Date: 19/Jul/19 16:52
Worklog Time Spent: 10m 
  Work Description: avijayanhwx commented on pull request #1033: HDDS-1391 
: Add ability in OM to serve delta updates through an API.
URL: https://github.com/apache/hadoop/pull/1033#discussion_r305440235
 
 

 ##
 File path: 
hadoop-hdds/common/src/main/java/org/apache/hadoop/utils/db/RDBStore.java
 ##
 @@ -318,6 +320,44 @@ public CodecRegistry getCodecRegistry() {
 return codecRegistry;
   }
 
+  @Override
+  public DBUpdatesWrapper getUpdatesSince(long sequenceNumber)
+  throws DataNotFoundException {
+
+DBUpdatesWrapper dbUpdatesWrapper = new DBUpdatesWrapper();
+try {
+  TransactionLogIterator transactionLogIterator =
+  db.getUpdatesSince(sequenceNumber);
+
+  boolean flag = true;
+
+  while (transactionLogIterator.isValid()) {
 
 Review comment:
   yes, flushes from the memtables to SSTs can happen at any time. If the WAL 
is deleted while it is being read, we will still handle it through a retry 
mechanism on the Recon side. 
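The retry approach mentioned in this comment can be sketched as below. This is a hypothetical helper, not Recon's actual code: a delta fetch can fail if the RocksDB WAL segment holding the requested sequence number was flushed and deleted mid-read, so the caller retries a bounded number of times before giving up (in practice Recon would eventually re-bootstrap from a full snapshot instead of looping).

```java
import java.io.IOException;
import java.util.concurrent.Callable;

// Hypothetical sketch of a Recon-side retry: the fetch callable stands in
// for an RPC that pulls DB updates since a sequence number and may throw
// (e.g. the DataNotFoundException in this patch) when the WAL is gone.
public class DeltaFetcher {
    static <T> T fetchWithRetry(Callable<T> fetch, int maxAttempts) throws Exception {
        Exception last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return fetch.call();
            } catch (IOException e) {
                last = e; // WAL segment deleted mid-read; try again
            }
        }
        throw last; // exhausted retries; caller falls back to a full snapshot
    }
}
```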
 



Issue Time Tracking
---

Worklog Id: (was: 279831)
Time Spent: 1h 50m  (was: 1h 40m)

> Add ability in OM to serve delta updates through an API.
> 
>
> Key: HDDS-1391
> URL: https://issues.apache.org/jira/browse/HDDS-1391
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Aravindan Vijayan
>Assignee: Aravindan Vijayan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.1
>
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> Added an RPC endpoint to serve the set of updates in the OM RocksDB from a 
> given sequence number.
> This will be used by Recon (HDDS-1105) to push the data to all the tasks that 
> will keep their aggregate data up to date. 







[jira] [Work logged] (HDDS-1782) Add an option to MiniOzoneChaosCluster to read files multiple times.

2019-07-19 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1782?focusedWorklogId=279865&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-279865
 ]

ASF GitHub Bot logged work on HDDS-1782:


Author: ASF GitHub Bot
Created on: 19/Jul/19 17:44
Start Date: 19/Jul/19 17:44
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1076: HDDS-1782. Add 
an option to MiniOzoneChaosCluster to read files multiple times. Contributed by 
Mukul Kumar Singh.
URL: https://github.com/apache/hadoop/pull/1076#issuecomment-513316421
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 34 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | 0 | shelldocs | 0 | Shelldocs was not available. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 5 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 486 | trunk passed |
   | +1 | compile | 249 | trunk passed |
   | +1 | checkstyle | 71 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 725 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 163 | trunk passed |
   | 0 | spotbugs | 315 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 519 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 434 | the patch passed |
   | +1 | compile | 250 | the patch passed |
   | +1 | javac | 250 | the patch passed |
   | +1 | checkstyle | 71 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | shellcheck | 0 | There were no new shellcheck issues. |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 644 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 165 | the patch passed |
   | +1 | findbugs | 553 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 314 | hadoop-hdds in the patch passed. |
   | -1 | unit | 2070 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 46 | The patch does not generate ASF License warnings. |
   | | | 7045 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.ozone.client.rpc.TestFailureHandlingByClient |
   |   | hadoop.ozone.client.rpc.TestBlockOutputStreamWithFailures |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.container.ozoneimpl.TestSecureOzoneContainer |
   |   | hadoop.ozone.client.rpc.TestOzoneClientRetriesOnException |
   |   | hadoop.ozone.client.rpc.TestCloseContainerHandlingByClient |
   |   | hadoop.ozone.client.rpc.TestMultiBlockWritesWithDnFailures |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.container.server.TestSecureContainerServer |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=18.09.8 Server=18.09.8 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1076/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1076 |
   | Optional Tests | dupname asflicense mvnsite unit shellcheck shelldocs 
compile javac javadoc mvninstall shadedclient findbugs checkstyle |
   | uname | Linux c617861b2312 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / cd967c7 |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1076/4/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1076/4/testReport/ |
   | Max. process+thread count | 4808 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/integration-test U: 
hadoop-ozone/integration-test |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1076/4/console |
   | versions | git=2.7.4 maven=3.3.9 shellcheck=0.4.6 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 



Issue Time Tracking
---

Worklog Id: (was: 279865)
Time Spent: 1

[jira] [Work logged] (HDDS-1649) On installSnapshot notification from OM leader, download checkpoint and reload OM state

2019-07-19 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1649?focusedWorklogId=279918&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-279918
 ]

ASF GitHub Bot logged work on HDDS-1649:


Author: ASF GitHub Bot
Created on: 19/Jul/19 19:52
Start Date: 19/Jul/19 19:52
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on issue #948: HDDS-1649. On 
installSnapshot notification from OM leader, download checkpoint and reload OM 
state
URL: https://github.com/apache/hadoop/pull/948#issuecomment-513356566
 
 
   /retest
 



Issue Time Tracking
---

Worklog Id: (was: 279918)
Time Spent: 12h 10m  (was: 12h)

> On installSnapshot notification from OM leader, download checkpoint and 
> reload OM state
> ---
>
> Key: HDDS-1649
> URL: https://issues.apache.org/jira/browse/HDDS-1649
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 12h 10m
>  Remaining Estimate: 0h
>
> Installing a DB checkpoint on the OM involves the following steps:
>  1. When an OM follower receives an installSnapshot notification from the OM 
> leader, it should initiate a new checkpoint on the OM leader and download that 
> checkpoint over HTTP.
>  2. After downloading the checkpoint, the StateMachine must be paused so that 
> the old OM DB can be replaced with the newly downloaded checkpoint.
>  3. The OM should be reloaded with the new state. All the services having a 
> dependency on the OM DB (such as MetadataManager, KeyManager, etc.) must be 
> re-initialized/restarted.
>  4. Once the OM is ready with the new state, the state machine must be 
> unpaused to resume participating in the Ratis ring.
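The four-step sequence in the issue description can be sketched as below. The method names are hypothetical (the real logic lives in the OM and its Ratis state machine); the point is the strict ordering: the state machine must be paused before the DB swap and unpaused only after the reload.

```java
// Hypothetical sketch of the checkpoint-install ordering; the trace string
// exists only to make the sequence observable in this illustration.
public class CheckpointInstaller {
    private final StringBuilder trace = new StringBuilder();

    void installSnapshot() {
        downloadCheckpointFromLeader(); // 1. fetch the leader's checkpoint over HTTP
        pauseStateMachine();            // 2a. stop applying Ratis transactions
        replaceDbWithCheckpoint();      // 2b. swap the old OM DB for the checkpoint
        reloadOmState();                // 3. re-init MetadataManager, KeyManager, ...
        unpauseStateMachine();          // 4. resume participating in the Ratis ring
    }

    private void downloadCheckpointFromLeader() { trace.append("download "); }
    private void pauseStateMachine()            { trace.append("pause "); }
    private void replaceDbWithCheckpoint()      { trace.append("replace "); }
    private void reloadOmState()                { trace.append("reload "); }
    private void unpauseStateMachine()          { trace.append("unpause"); }

    String trace() { return trace.toString(); }
}
```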







[jira] [Work logged] (HDDS-1795) Implement S3 Delete Bucket request to use Cache and DoubleBuffer

2019-07-19 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1795?focusedWorklogId=279929&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-279929
 ]

ASF GitHub Bot logged work on HDDS-1795:


Author: ASF GitHub Bot
Created on: 19/Jul/19 19:54
Start Date: 19/Jul/19 19:54
Worklog Time Spent: 10m 
  Work Description: arp7 commented on pull request #1097: HDDS-1795. 
Implement S3 Delete Bucket request to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/1097#discussion_r305503603
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/s3/bucket/S3BucketDeleteRequest.java
 ##
 @@ -0,0 +1,193 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om.request.s3.bucket;
+
+import java.io.IOException;
+import java.util.HashMap;
+import java.util.Map;
+
+import com.google.common.base.Optional;
+import com.google.common.base.Preconditions;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.hadoop.ozone.OzoneConsts;
+import org.apache.hadoop.ozone.audit.OMAction;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.OMMetrics;
+import org.apache.hadoop.ozone.om.OzoneManager;
+import org.apache.hadoop.ozone.om.exceptions.OMException;
+import org.apache.hadoop.ozone.om.request.volume.OMVolumeRequest;
+import org.apache.hadoop.ozone.om.response.OMClientResponse;
+import org.apache.hadoop.ozone.om.response.s3.bucket.S3BucketDeleteResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.OMRequest;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.OMResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.S3DeleteBucketRequest;
+import org.apache.hadoop.ozone.security.acl.IAccessAuthorizer;
+import org.apache.hadoop.ozone.security.acl.OzoneObj;
+import org.apache.hadoop.utils.db.cache.CacheKey;
+import org.apache.hadoop.utils.db.cache.CacheValue;
+
+import static 
org.apache.hadoop.ozone.om.lock.OzoneManagerLock.Resource.BUCKET_LOCK;
+import static 
org.apache.hadoop.ozone.om.lock.OzoneManagerLock.Resource.S3_BUCKET_LOCK;
+
+/**
+ * Handle Delete S3Bucket request.
+ */
+public class S3BucketDeleteRequest extends OMVolumeRequest {
+
+  private static final Logger LOG =
+  LoggerFactory.getLogger(S3BucketDeleteRequest.class);
+
+  public S3BucketDeleteRequest(OMRequest omRequest) {
+super(omRequest);
+  }
+
+  public OMRequest preExecute(OzoneManager ozoneManager) throws IOException {
+S3DeleteBucketRequest s3DeleteBucketRequest =
+getOmRequest().getDeleteS3BucketRequest();
+Preconditions.checkNotNull(s3DeleteBucketRequest);
 
 Review comment:
   Is this checkNotNull for findbugs?
 



Issue Time Tracking
---

Worklog Id: (was: 279929)
Time Spent: 1h  (was: 50m)

> Implement S3 Delete Bucket request to use Cache and DoubleBuffer
> 
>
> Key: HDDS-1795
> URL: https://issues.apache.org/jira/browse/HDDS-1795
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> Implement S3 Bucket write requests to use OM Cache, double buffer.
>  
> In this Jira, we will add the changes to implement S3 bucket operations; 
> HA/Non-HA will have different code paths, but once all requests are 
> implemented w

[jira] [Work logged] (HDDS-1795) Implement S3 Delete Bucket request to use Cache and DoubleBuffer

2019-07-19 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1795?focusedWorklogId=279931&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-279931
 ]

ASF GitHub Bot logged work on HDDS-1795:


Author: ASF GitHub Bot
Created on: 19/Jul/19 19:55
Start Date: 19/Jul/19 19:55
Worklog Time Spent: 10m 
  Work Description: arp7 commented on pull request #1097: HDDS-1795. 
Implement S3 Delete Bucket request to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/1097#discussion_r305503939
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/s3/bucket/S3BucketDeleteRequest.java
 ##
 @@ -0,0 +1,193 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om.request.s3.bucket;
+
+import java.io.IOException;
+import java.util.HashMap;
+import java.util.Map;
+
+import com.google.common.base.Optional;
+import com.google.common.base.Preconditions;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.hadoop.ozone.OzoneConsts;
+import org.apache.hadoop.ozone.audit.OMAction;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.OMMetrics;
+import org.apache.hadoop.ozone.om.OzoneManager;
+import org.apache.hadoop.ozone.om.exceptions.OMException;
+import org.apache.hadoop.ozone.om.request.volume.OMVolumeRequest;
+import org.apache.hadoop.ozone.om.response.OMClientResponse;
+import org.apache.hadoop.ozone.om.response.s3.bucket.S3BucketDeleteResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.OMRequest;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.OMResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.S3DeleteBucketRequest;
+import org.apache.hadoop.ozone.security.acl.IAccessAuthorizer;
+import org.apache.hadoop.ozone.security.acl.OzoneObj;
+import org.apache.hadoop.utils.db.cache.CacheKey;
+import org.apache.hadoop.utils.db.cache.CacheValue;
+
+import static 
org.apache.hadoop.ozone.om.lock.OzoneManagerLock.Resource.BUCKET_LOCK;
+import static 
org.apache.hadoop.ozone.om.lock.OzoneManagerLock.Resource.S3_BUCKET_LOCK;
+
+/**
+ * Handle Delete S3Bucket request.
+ */
+public class S3BucketDeleteRequest extends OMVolumeRequest {
+
+  private static final Logger LOG =
+  LoggerFactory.getLogger(S3BucketDeleteRequest.class);
+
+  public S3BucketDeleteRequest(OMRequest omRequest) {
+super(omRequest);
+  }
+
+  public OMRequest preExecute(OzoneManager ozoneManager) throws IOException {
+S3DeleteBucketRequest s3DeleteBucketRequest =
+getOmRequest().getDeleteS3BucketRequest();
+Preconditions.checkNotNull(s3DeleteBucketRequest);
+
+// TODO: Do we need to enforce the bucket rules in this code path?
+// https://docs.aws.amazon.com/AmazonS3/latest/dev/BucketRestrictions.html
+
+// For now only checked the length.
+int bucketLength = s3DeleteBucketRequest.getS3BucketName().length();
+if (bucketLength < 3 || bucketLength >= 64) {
 
 Review comment:
   Can you use the constants here instead of magic numbers?
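The constants the reviewer asks for could look like the sketch below (hypothetical names, not Ozone's actual `OzoneConsts` fields). The patch's rejection test `length < 3 || length >= 64` is equivalent to requiring 3 to 63 characters, matching the AWS bucket-name restrictions linked in the patch.

```java
// Hypothetical constant names replacing the magic numbers 3 and 64.
public class BucketNameCheck {
    static final int S3_BUCKET_MIN_LENGTH = 3;
    static final int S3_BUCKET_MAX_LENGTH = 63;

    // Equivalent to rejecting (length < 3 || length >= 64) as in the patch.
    static boolean hasValidLength(String bucketName) {
        int len = bucketName.length();
        return len >= S3_BUCKET_MIN_LENGTH && len <= S3_BUCKET_MAX_LENGTH;
    }
}
```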
 



Issue Time Tracking
---

Worklog Id: (was: 279931)
Time Spent: 1h 10m  (was: 1h)

> Implement S3 Delete Bucket request to use Cache and DoubleBuffer
> 
>
> Key: HDDS-1795
> URL: https://issues.apache.org/jira/browse/HDDS-1795
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major

[jira] [Work logged] (HDDS-1795) Implement S3 Delete Bucket request to use Cache and DoubleBuffer

2019-07-19 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1795?focusedWorklogId=279932&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-279932
 ]

ASF GitHub Bot logged work on HDDS-1795:


Author: ASF GitHub Bot
Created on: 19/Jul/19 19:57
Start Date: 19/Jul/19 19:57
Worklog Time Spent: 10m 
  Work Description: arp7 commented on pull request #1097: HDDS-1795. 
Implement S3 Delete Bucket request to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/1097#discussion_r305504491
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/s3/bucket/S3BucketDeleteRequest.java
 ##
 @@ -0,0 +1,193 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om.request.s3.bucket;
+
+import java.io.IOException;
+import java.util.HashMap;
+import java.util.Map;
+
+import com.google.common.base.Optional;
+import com.google.common.base.Preconditions;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.hadoop.ozone.OzoneConsts;
+import org.apache.hadoop.ozone.audit.OMAction;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.OMMetrics;
+import org.apache.hadoop.ozone.om.OzoneManager;
+import org.apache.hadoop.ozone.om.exceptions.OMException;
+import org.apache.hadoop.ozone.om.request.volume.OMVolumeRequest;
+import org.apache.hadoop.ozone.om.response.OMClientResponse;
+import org.apache.hadoop.ozone.om.response.s3.bucket.S3BucketDeleteResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.OMRequest;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.OMResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.S3DeleteBucketRequest;
+import org.apache.hadoop.ozone.security.acl.IAccessAuthorizer;
+import org.apache.hadoop.ozone.security.acl.OzoneObj;
+import org.apache.hadoop.utils.db.cache.CacheKey;
+import org.apache.hadoop.utils.db.cache.CacheValue;
+
+import static 
org.apache.hadoop.ozone.om.lock.OzoneManagerLock.Resource.BUCKET_LOCK;
+import static 
org.apache.hadoop.ozone.om.lock.OzoneManagerLock.Resource.S3_BUCKET_LOCK;
+
+/**
+ * Handle Delete S3Bucket request.
+ */
+public class S3BucketDeleteRequest extends OMVolumeRequest {
+
+  private static final Logger LOG =
+  LoggerFactory.getLogger(S3BucketDeleteRequest.class);
+
+  public S3BucketDeleteRequest(OMRequest omRequest) {
+super(omRequest);
+  }
+
+  public OMRequest preExecute(OzoneManager ozoneManager) throws IOException {
+S3DeleteBucketRequest s3DeleteBucketRequest =
+getOmRequest().getDeleteS3BucketRequest();
+Preconditions.checkNotNull(s3DeleteBucketRequest);
+
+// TODO: Do we need to enforce the bucket rules in this code path?
+// https://docs.aws.amazon.com/AmazonS3/latest/dev/BucketRestrictions.html
+
+// For now only checked the length.
+int bucketLength = s3DeleteBucketRequest.getS3BucketName().length();
+if (bucketLength < 3 || bucketLength >= 64) {
 
 Review comment:
   Do we need to have this check for delete bucket request? If the bucket does 
not exist we will get the correct error later.
 



Issue Time Tracking
---

Worklog Id: (was: 279932)
Time Spent: 1h 20m  (was: 1h 10m)

> Implement S3 Delete Bucket request to use Cache and DoubleBuffer
> 
>
> Key: HDDS-1795
> URL: https://issues.apache.org/jira/browse/HDDS-1795
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Bharat Viswanadham

[jira] [Work logged] (HDDS-1795) Implement S3 Delete Bucket request to use Cache and DoubleBuffer

2019-07-19 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1795?focusedWorklogId=279934&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-279934
 ]

ASF GitHub Bot logged work on HDDS-1795:


Author: ASF GitHub Bot
Created on: 19/Jul/19 19:59
Start Date: 19/Jul/19 19:59
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #1097: 
HDDS-1795. Implement S3 Delete Bucket request to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/1097#discussion_r305505264
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/s3/bucket/S3BucketDeleteRequest.java
 ##
 @@ -0,0 +1,193 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om.request.s3.bucket;
+
+import java.io.IOException;
+import java.util.HashMap;
+import java.util.Map;
+
+import com.google.common.base.Optional;
+import com.google.common.base.Preconditions;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.hadoop.ozone.OzoneConsts;
+import org.apache.hadoop.ozone.audit.OMAction;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.OMMetrics;
+import org.apache.hadoop.ozone.om.OzoneManager;
+import org.apache.hadoop.ozone.om.exceptions.OMException;
+import org.apache.hadoop.ozone.om.request.volume.OMVolumeRequest;
+import org.apache.hadoop.ozone.om.response.OMClientResponse;
+import org.apache.hadoop.ozone.om.response.s3.bucket.S3BucketDeleteResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.OMRequest;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.OMResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.S3DeleteBucketRequest;
+import org.apache.hadoop.ozone.security.acl.IAccessAuthorizer;
+import org.apache.hadoop.ozone.security.acl.OzoneObj;
+import org.apache.hadoop.utils.db.cache.CacheKey;
+import org.apache.hadoop.utils.db.cache.CacheValue;
+
+import static 
org.apache.hadoop.ozone.om.lock.OzoneManagerLock.Resource.BUCKET_LOCK;
+import static 
org.apache.hadoop.ozone.om.lock.OzoneManagerLock.Resource.S3_BUCKET_LOCK;
+
+/**
+ * Handle Delete S3Bucket request.
+ */
+public class S3BucketDeleteRequest extends OMVolumeRequest {
+
+  private static final Logger LOG =
+      LoggerFactory.getLogger(S3BucketDeleteRequest.class);
+
+  public S3BucketDeleteRequest(OMRequest omRequest) {
+    super(omRequest);
+  }
+
+  public OMRequest preExecute(OzoneManager ozoneManager) throws IOException {
+    S3DeleteBucketRequest s3DeleteBucketRequest =
+        getOmRequest().getDeleteS3BucketRequest();
+    Preconditions.checkNotNull(s3DeleteBucketRequest);
 
 Review comment:
  The intention here is to catch the case where someone calls 
S3BucketDeleteRequest with an OMRequest that does not have the 
S3DeleteBucketRequest field set.
 



Issue Time Tracking
---

Worklog Id: (was: 279934)
Time Spent: 1.5h  (was: 1h 20m)

> Implement S3 Delete Bucket request to use Cache and DoubleBuffer
> 
>
> Key: HDDS-1795
> URL: https://issues.apache.org/jira/browse/HDDS-1795
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> Implement S3 Bucket write requests to use OM Cache, double buffer.
>  
> In this Jira will add the changes to implem

[jira] [Work logged] (HDDS-1795) Implement S3 Delete Bucket request to use Cache and DoubleBuffer

2019-07-19 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1795?focusedWorklogId=279935&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-279935
 ]

ASF GitHub Bot logged work on HDDS-1795:


Author: ASF GitHub Bot
Created on: 19/Jul/19 20:00
Start Date: 19/Jul/19 20:00
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #1097: 
HDDS-1795. Implement S3 Delete Bucket request to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/1097#discussion_r305505474
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/s3/bucket/S3BucketDeleteRequest.java
 ##
 @@ -0,0 +1,193 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om.request.s3.bucket;
+
+import java.io.IOException;
+import java.util.HashMap;
+import java.util.Map;
+
+import com.google.common.base.Optional;
+import com.google.common.base.Preconditions;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.hadoop.ozone.OzoneConsts;
+import org.apache.hadoop.ozone.audit.OMAction;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.OMMetrics;
+import org.apache.hadoop.ozone.om.OzoneManager;
+import org.apache.hadoop.ozone.om.exceptions.OMException;
+import org.apache.hadoop.ozone.om.request.volume.OMVolumeRequest;
+import org.apache.hadoop.ozone.om.response.OMClientResponse;
+import org.apache.hadoop.ozone.om.response.s3.bucket.S3BucketDeleteResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.OMRequest;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.OMResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.S3DeleteBucketRequest;
+import org.apache.hadoop.ozone.security.acl.IAccessAuthorizer;
+import org.apache.hadoop.ozone.security.acl.OzoneObj;
+import org.apache.hadoop.utils.db.cache.CacheKey;
+import org.apache.hadoop.utils.db.cache.CacheValue;
+
+import static 
org.apache.hadoop.ozone.om.lock.OzoneManagerLock.Resource.BUCKET_LOCK;
+import static 
org.apache.hadoop.ozone.om.lock.OzoneManagerLock.Resource.S3_BUCKET_LOCK;
+
+/**
+ * Handle Delete S3Bucket request.
+ */
+public class S3BucketDeleteRequest extends OMVolumeRequest {
+
+  private static final Logger LOG =
+      LoggerFactory.getLogger(S3BucketDeleteRequest.class);
+
+  public S3BucketDeleteRequest(OMRequest omRequest) {
+    super(omRequest);
+  }
+
+  public OMRequest preExecute(OzoneManager ozoneManager) throws IOException {
+    S3DeleteBucketRequest s3DeleteBucketRequest =
+        getOmRequest().getDeleteS3BucketRequest();
+    Preconditions.checkNotNull(s3DeleteBucketRequest);
+
+    // TODO: Do we need to enforce the bucket rules in this code path?
+    // https://docs.aws.amazon.com/AmazonS3/latest/dev/BucketRestrictions.html
+
+    // For now we only check the length.
+    int bucketLength = s3DeleteBucketRequest.getS3BucketName().length();
+    if (bucketLength < 3 || bucketLength >= 64) {
 
 Review comment:
  This returns the error to the user: S3 does not accept bucket names shorter 
than 3 characters or longer than 63, so the check matches AWS S3 behavior.
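  The length rule described above can be sketched as a small standalone check. The class and constant names below are illustrative stand-ins, not the actual Ozone identifiers:

```java
public class S3BucketNameValidator {
    // AWS S3 allows bucket names of 3 to 63 characters (hypothetical constant names).
    static final int S3_BUCKET_MIN_LENGTH = 3;
    static final int S3_BUCKET_MAX_LENGTH = 63;

    // Mirrors the patch's check: reject len < 3 or len >= 64.
    static boolean isValidLength(String bucketName) {
        int len = bucketName.length();
        return len >= S3_BUCKET_MIN_LENGTH && len <= S3_BUCKET_MAX_LENGTH;
    }

    public static void main(String[] args) {
        System.out.println(isValidLength("ab"));           // false: too short
        System.out.println(isValidLength("my-bucket"));    // true
        System.out.println(isValidLength("a".repeat(64))); // false: too long
    }
}
```

  Note that `bucketLength >= 64` and `bucketLength > 63` are equivalent, which is why the valid upper bound is 63, matching the AWS limit.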
 



Issue Time Tracking
---

Worklog Id: (was: 279935)
Time Spent: 1h 40m  (was: 1.5h)

> Implement S3 Delete Bucket request to use Cache and DoubleBuffer
> 
>
> Key: HDDS-1795
> URL: https://issues.apache.org/jira/browse/HDDS-1795
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone M

[jira] [Work logged] (HDDS-1795) Implement S3 Delete Bucket request to use Cache and DoubleBuffer

2019-07-19 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1795?focusedWorklogId=279936&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-279936
 ]

ASF GitHub Bot logged work on HDDS-1795:


Author: ASF GitHub Bot
Created on: 19/Jul/19 20:01
Start Date: 19/Jul/19 20:01
Worklog Time Spent: 10m 
  Work Description: arp7 commented on pull request #1097: HDDS-1795. 
Implement S3 Delete Bucket request to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/1097#discussion_r305505870
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/s3/bucket/S3BucketDeleteRequest.java
 ##
 @@ -0,0 +1,193 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om.request.s3.bucket;
+
+import java.io.IOException;
+import java.util.HashMap;
+import java.util.Map;
+
+import com.google.common.base.Optional;
+import com.google.common.base.Preconditions;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.hadoop.ozone.OzoneConsts;
+import org.apache.hadoop.ozone.audit.OMAction;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.OMMetrics;
+import org.apache.hadoop.ozone.om.OzoneManager;
+import org.apache.hadoop.ozone.om.exceptions.OMException;
+import org.apache.hadoop.ozone.om.request.volume.OMVolumeRequest;
+import org.apache.hadoop.ozone.om.response.OMClientResponse;
+import org.apache.hadoop.ozone.om.response.s3.bucket.S3BucketDeleteResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.OMRequest;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.OMResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.S3DeleteBucketRequest;
+import org.apache.hadoop.ozone.security.acl.IAccessAuthorizer;
+import org.apache.hadoop.ozone.security.acl.OzoneObj;
+import org.apache.hadoop.utils.db.cache.CacheKey;
+import org.apache.hadoop.utils.db.cache.CacheValue;
+
+import static 
org.apache.hadoop.ozone.om.lock.OzoneManagerLock.Resource.BUCKET_LOCK;
+import static 
org.apache.hadoop.ozone.om.lock.OzoneManagerLock.Resource.S3_BUCKET_LOCK;
+
+/**
+ * Handle Delete S3Bucket request.
+ */
+public class S3BucketDeleteRequest extends OMVolumeRequest {
+
+  private static final Logger LOG =
+      LoggerFactory.getLogger(S3BucketDeleteRequest.class);
+
+  public S3BucketDeleteRequest(OMRequest omRequest) {
+    super(omRequest);
+  }
+
+  public OMRequest preExecute(OzoneManager ozoneManager) throws IOException {
+    S3DeleteBucketRequest s3DeleteBucketRequest =
+        getOmRequest().getDeleteS3BucketRequest();
+    Preconditions.checkNotNull(s3DeleteBucketRequest);
 
 Review comment:
  We would get an NPE on the next line anyway, so we should not add assertions 
for references we are about to dereference. Not a blocker for commit; something 
we can clean up later.
 



Issue Time Tracking
---

Worklog Id: (was: 279936)
Time Spent: 1h 50m  (was: 1h 40m)

> Implement S3 Delete Bucket request to use Cache and DoubleBuffer
> 
>
> Key: HDDS-1795
> URL: https://issues.apache.org/jira/browse/HDDS-1795
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> Implement S3 Bucket write requests to use OM Cache, double buffer.
>  
> In this Jira will add th

[jira] [Work logged] (HDDS-1795) Implement S3 Delete Bucket request to use Cache and DoubleBuffer

2019-07-19 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1795?focusedWorklogId=279937&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-279937
 ]

ASF GitHub Bot logged work on HDDS-1795:


Author: ASF GitHub Bot
Created on: 19/Jul/19 20:01
Start Date: 19/Jul/19 20:01
Worklog Time Spent: 10m 
  Work Description: arp7 commented on pull request #1097: HDDS-1795. 
Implement S3 Delete Bucket request to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/1097#discussion_r305505903
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/s3/bucket/S3BucketDeleteRequest.java
 ##
 @@ -0,0 +1,193 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om.request.s3.bucket;
+
+import java.io.IOException;
+import java.util.HashMap;
+import java.util.Map;
+
+import com.google.common.base.Optional;
+import com.google.common.base.Preconditions;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.hadoop.ozone.OzoneConsts;
+import org.apache.hadoop.ozone.audit.OMAction;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.OMMetrics;
+import org.apache.hadoop.ozone.om.OzoneManager;
+import org.apache.hadoop.ozone.om.exceptions.OMException;
+import org.apache.hadoop.ozone.om.request.volume.OMVolumeRequest;
+import org.apache.hadoop.ozone.om.response.OMClientResponse;
+import org.apache.hadoop.ozone.om.response.s3.bucket.S3BucketDeleteResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.OMRequest;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.OMResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.S3DeleteBucketRequest;
+import org.apache.hadoop.ozone.security.acl.IAccessAuthorizer;
+import org.apache.hadoop.ozone.security.acl.OzoneObj;
+import org.apache.hadoop.utils.db.cache.CacheKey;
+import org.apache.hadoop.utils.db.cache.CacheValue;
+
+import static 
org.apache.hadoop.ozone.om.lock.OzoneManagerLock.Resource.BUCKET_LOCK;
+import static 
org.apache.hadoop.ozone.om.lock.OzoneManagerLock.Resource.S3_BUCKET_LOCK;
+
+/**
+ * Handle Delete S3Bucket request.
+ */
+public class S3BucketDeleteRequest extends OMVolumeRequest {
+
+  private static final Logger LOG =
+      LoggerFactory.getLogger(S3BucketDeleteRequest.class);
+
+  public S3BucketDeleteRequest(OMRequest omRequest) {
+    super(omRequest);
+  }
+
+  public OMRequest preExecute(OzoneManager ozoneManager) throws IOException {
+    S3DeleteBucketRequest s3DeleteBucketRequest =
+        getOmRequest().getDeleteS3BucketRequest();
+    Preconditions.checkNotNull(s3DeleteBucketRequest);
+
+    // TODO: Do we need to enforce the bucket rules in this code path?
+    // https://docs.aws.amazon.com/AmazonS3/latest/dev/BucketRestrictions.html
+
+    // For now we only check the length.
+    int bucketLength = s3DeleteBucketRequest.getS3BucketName().length();
+    if (bucketLength < 3 || bucketLength >= 64) {
 
 Review comment:
   Thanks.
 



Issue Time Tracking
---

Worklog Id: (was: 279937)
Time Spent: 2h  (was: 1h 50m)

> Implement S3 Delete Bucket request to use Cache and DoubleBuffer
> 
>
> Key: HDDS-1795
> URL: https://issues.apache.org/jira/browse/HDDS-1795
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-av

[jira] [Work logged] (HDDS-1805) Implement S3 Initiate MPU request to use Cache and DoubleBuffer

2019-07-19 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1805?focusedWorklogId=279940&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-279940
 ]

ASF GitHub Bot logged work on HDDS-1805:


Author: ASF GitHub Bot
Created on: 19/Jul/19 20:05
Start Date: 19/Jul/19 20:05
Worklog Time Spent: 10m 
  Work Description: arp7 commented on pull request #1108: HDDS-1805. 
Implement S3 Initiate MPU request to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/1108#discussion_r305506924
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/OmUtils.java
 ##
 @@ -221,7 +221,6 @@ public static boolean isReadOnly(
 case GetDelegationToken:
 case RenewDelegationToken:
 case CancelDelegationToken:
-case ApplyInitiateMultiPartUpload:
 
 Review comment:
   Why are we removing it from here? Is this intentional?
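  For context, OmUtils.isReadOnly (the switch shown in this diff) classifies request types so that read-only commands can skip the write path. The sketch below is illustrative only; the enum is a stand-in for the real OzoneManagerProtocolProtos command types. It shows why a mutating request such as initiate-MPU must not be listed as read-only once it goes through the cache/double-buffer write path:

```java
public class ReadOnlyDemo {
    // Hypothetical stand-in for the real OzoneManagerProtocolProtos types.
    enum CmdType { GetDelegationToken, InitiateMultiPartUpload, DeleteS3Bucket }

    // Read-only commands bypass the write (cache/double-buffer) path;
    // anything that mutates OM state must return false here.
    static boolean isReadOnly(CmdType cmdType) {
        switch (cmdType) {
        case GetDelegationToken:
            return true;
        default:
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(isReadOnly(CmdType.GetDelegationToken));      // true
        System.out.println(isReadOnly(CmdType.InitiateMultiPartUpload)); // false
    }
}
```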
 



Issue Time Tracking
---

Worklog Id: (was: 279940)
Time Spent: 1h  (was: 50m)

> Implement S3 Initiate MPU request to use Cache and DoubleBuffer
> ---
>
> Key: HDDS-1805
> URL: https://issues.apache.org/jira/browse/HDDS-1805
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> Implement S3 Initiate MPU request to use OM Cache, double buffer.
>  
> In this Jira will add the changes to implement S3 bucket operations, and 
> HA/Non-HA will have a different code path, but once all requests are 
> implemented will have a single code path.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1795) Implement S3 Delete Bucket request to use Cache and DoubleBuffer

2019-07-19 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1795?focusedWorklogId=279943&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-279943
 ]

ASF GitHub Bot logged work on HDDS-1795:


Author: ASF GitHub Bot
Created on: 19/Jul/19 20:29
Start Date: 19/Jul/19 20:29
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #1097: 
HDDS-1795. Implement S3 Delete Bucket request to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/1097#discussion_r305514231
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/s3/bucket/S3BucketDeleteRequest.java
 ##
 @@ -0,0 +1,193 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om.request.s3.bucket;
+
+import java.io.IOException;
+import java.util.HashMap;
+import java.util.Map;
+
+import com.google.common.base.Optional;
+import com.google.common.base.Preconditions;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.hadoop.ozone.OzoneConsts;
+import org.apache.hadoop.ozone.audit.OMAction;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.OMMetrics;
+import org.apache.hadoop.ozone.om.OzoneManager;
+import org.apache.hadoop.ozone.om.exceptions.OMException;
+import org.apache.hadoop.ozone.om.request.volume.OMVolumeRequest;
+import org.apache.hadoop.ozone.om.response.OMClientResponse;
+import org.apache.hadoop.ozone.om.response.s3.bucket.S3BucketDeleteResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.OMRequest;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.OMResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.S3DeleteBucketRequest;
+import org.apache.hadoop.ozone.security.acl.IAccessAuthorizer;
+import org.apache.hadoop.ozone.security.acl.OzoneObj;
+import org.apache.hadoop.utils.db.cache.CacheKey;
+import org.apache.hadoop.utils.db.cache.CacheValue;
+
+import static 
org.apache.hadoop.ozone.om.lock.OzoneManagerLock.Resource.BUCKET_LOCK;
+import static 
org.apache.hadoop.ozone.om.lock.OzoneManagerLock.Resource.S3_BUCKET_LOCK;
+
+/**
+ * Handle Delete S3Bucket request.
+ */
+public class S3BucketDeleteRequest extends OMVolumeRequest {
+
+  private static final Logger LOG =
+      LoggerFactory.getLogger(S3BucketDeleteRequest.class);
+
+  public S3BucketDeleteRequest(OMRequest omRequest) {
+    super(omRequest);
+  }
+
+  public OMRequest preExecute(OzoneManager ozoneManager) throws IOException {
+    S3DeleteBucketRequest s3DeleteBucketRequest =
+        getOmRequest().getDeleteS3BucketRequest();
+    Preconditions.checkNotNull(s3DeleteBucketRequest);
 
 Review comment:
   Done. Removed Precondition check.
 



Issue Time Tracking
---

Worklog Id: (was: 279943)
Time Spent: 2h 10m  (was: 2h)

> Implement S3 Delete Bucket request to use Cache and DoubleBuffer
> 
>
> Key: HDDS-1795
> URL: https://issues.apache.org/jira/browse/HDDS-1795
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h 10m
>  Remaining Estimate: 0h
>
> Implement S3 Bucket write requests to use OM Cache, double buffer.
>  
> In this Jira will add the changes to implement S3 bucket operations, and 
> HA/Non-HA will have a different code path, but once all requests are 

[jira] [Work logged] (HDDS-1795) Implement S3 Delete Bucket request to use Cache and DoubleBuffer

2019-07-19 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1795?focusedWorklogId=279944&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-279944
 ]

ASF GitHub Bot logged work on HDDS-1795:


Author: ASF GitHub Bot
Created on: 19/Jul/19 20:30
Start Date: 19/Jul/19 20:30
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #1097: 
HDDS-1795. Implement S3 Delete Bucket request to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/1097#discussion_r305514532
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/s3/bucket/S3BucketDeleteRequest.java
 ##
 @@ -0,0 +1,193 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om.request.s3.bucket;
+
+import java.io.IOException;
+import java.util.HashMap;
+import java.util.Map;
+
+import com.google.common.base.Optional;
+import com.google.common.base.Preconditions;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.hadoop.ozone.OzoneConsts;
+import org.apache.hadoop.ozone.audit.OMAction;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.OMMetrics;
+import org.apache.hadoop.ozone.om.OzoneManager;
+import org.apache.hadoop.ozone.om.exceptions.OMException;
+import org.apache.hadoop.ozone.om.request.volume.OMVolumeRequest;
+import org.apache.hadoop.ozone.om.response.OMClientResponse;
+import org.apache.hadoop.ozone.om.response.s3.bucket.S3BucketDeleteResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.OMRequest;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.OMResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.S3DeleteBucketRequest;
+import org.apache.hadoop.ozone.security.acl.IAccessAuthorizer;
+import org.apache.hadoop.ozone.security.acl.OzoneObj;
+import org.apache.hadoop.utils.db.cache.CacheKey;
+import org.apache.hadoop.utils.db.cache.CacheValue;
+
+import static 
org.apache.hadoop.ozone.om.lock.OzoneManagerLock.Resource.BUCKET_LOCK;
+import static 
org.apache.hadoop.ozone.om.lock.OzoneManagerLock.Resource.S3_BUCKET_LOCK;
+
+/**
+ * Handle Delete S3Bucket request.
+ */
+public class S3BucketDeleteRequest extends OMVolumeRequest {
+
+  private static final Logger LOG =
+      LoggerFactory.getLogger(S3BucketDeleteRequest.class);
+
+  public S3BucketDeleteRequest(OMRequest omRequest) {
+    super(omRequest);
+  }
+
+  public OMRequest preExecute(OzoneManager ozoneManager) throws IOException {
+    S3DeleteBucketRequest s3DeleteBucketRequest =
+        getOmRequest().getDeleteS3BucketRequest();
+    Preconditions.checkNotNull(s3DeleteBucketRequest);
+
+    // TODO: Do we need to enforce the bucket rules in this code path?
+    // https://docs.aws.amazon.com/AmazonS3/latest/dev/BucketRestrictions.html
+
+    // For now we only check the length.
+    int bucketLength = s3DeleteBucketRequest.getS3BucketName().length();
+    if (bucketLength < 3 || bucketLength >= 64) {
 
 Review comment:
   Updated to use constants.
 



Issue Time Tracking
---

Worklog Id: (was: 279944)
Time Spent: 2h 20m  (was: 2h 10m)

> Implement S3 Delete Bucket request to use Cache and DoubleBuffer
> 
>
> Key: HDDS-1795
> URL: https://issues.apache.org/jira/browse/HDDS-1795
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> 

[jira] [Work logged] (HDDS-1795) Implement S3 Delete Bucket request to use Cache and DoubleBuffer

2019-07-19 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1795?focusedWorklogId=279945&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-279945
 ]

ASF GitHub Bot logged work on HDDS-1795:


Author: ASF GitHub Bot
Created on: 19/Jul/19 20:30
Start Date: 19/Jul/19 20:30
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on issue #1097: HDDS-1795. 
Implement S3 Delete Bucket request to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/1097#issuecomment-513367472
 
 
   Thank you @arp7 for the review.
   I have addressed the review comments.
 



Issue Time Tracking
---

Worklog Id: (was: 279945)
Time Spent: 2.5h  (was: 2h 20m)

> Implement S3 Delete Bucket request to use Cache and DoubleBuffer
> 
>
> Key: HDDS-1795
> URL: https://issues.apache.org/jira/browse/HDDS-1795
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2.5h
>  Remaining Estimate: 0h
>
> Implement S3 Bucket write requests to use OM Cache, double buffer.
>  
> In this Jira will add the changes to implement S3 bucket operations, and 
> HA/Non-HA will have a different code path, but once all requests are 
> implemented will have a single code path.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1795) Implement S3 Delete Bucket request to use Cache and DoubleBuffer

2019-07-19 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1795?focusedWorklogId=279946&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-279946
 ]

ASF GitHub Bot logged work on HDDS-1795:


Author: ASF GitHub Bot
Created on: 19/Jul/19 20:31
Start Date: 19/Jul/19 20:31
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on issue #1097: HDDS-1795. 
Implement S3 Delete Bucket request to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/1097#issuecomment-513367797
 
 
   /retest
 



Issue Time Tracking
---

Worklog Id: (was: 279946)
Time Spent: 2h 40m  (was: 2.5h)

> Implement S3 Delete Bucket request to use Cache and DoubleBuffer
> 
>
> Key: HDDS-1795
> URL: https://issues.apache.org/jira/browse/HDDS-1795
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h 40m
>  Remaining Estimate: 0h
>
> Implement S3 Bucket write requests to use OM Cache, double buffer.
>  
> This Jira will add the changes to implement S3 bucket operations. HA and
> non-HA will initially have different code paths, but once all requests are
> implemented there will be a single code path.






[jira] [Work logged] (HDDS-1805) Implement S3 Initiate MPU request to use Cache and DoubleBuffer

2019-07-19 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1805?focusedWorklogId=279947&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-279947
 ]

ASF GitHub Bot logged work on HDDS-1805:


Author: ASF GitHub Bot
Created on: 19/Jul/19 20:37
Start Date: 19/Jul/19 20:37
Worklog Time Spent: 10m 
  Work Description: arp7 commented on pull request #1108: HDDS-1805. 
Implement S3 Initiate MPU request to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/1108#discussion_r305516630
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/s3/multipart/S3InitiateMultipartUploadRequest.java
 ##
 @@ -0,0 +1,214 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om.request.s3.multipart;
+
+import com.google.common.base.Optional;
+import com.google.common.base.Preconditions;
+import org.apache.hadoop.ozone.audit.OMAction;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.OzoneManager;
+import org.apache.hadoop.ozone.om.helpers.OmKeyInfo;
+import org.apache.hadoop.ozone.om.helpers.OmKeyLocationInfoGroup;
+import org.apache.hadoop.ozone.om.helpers.OmMultipartKeyInfo;
+import org.apache.hadoop.ozone.om.request.key.OMKeyRequest;
+import org.apache.hadoop.ozone.om.response.OMClientResponse;
+import org.apache.hadoop.ozone.om.response.s3.multipart.S3InitiateMultipartUploadResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.MultipartInfoInitiateRequest;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.MultipartInfoInitiateResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.OMRequest;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.OMResponse;
+import org.apache.hadoop.ozone.security.acl.IAccessAuthorizer;
+import org.apache.hadoop.ozone.security.acl.OzoneObj;
+import org.apache.hadoop.util.Time;
+import org.apache.hadoop.utils.UniqueId;
+import org.apache.hadoop.utils.db.cache.CacheKey;
+import org.apache.hadoop.utils.db.cache.CacheValue;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.UUID;
+
+import static org.apache.hadoop.ozone.om.lock.OzoneManagerLock.Resource.BUCKET_LOCK;
+
+/**
+ * Handles initiate multipart upload request.
+ */
+public class S3InitiateMultipartUploadRequest extends OMKeyRequest {
+
+  private static final Logger LOG =
+      LoggerFactory.getLogger(S3InitiateMultipartUploadRequest.class);
+
+  public S3InitiateMultipartUploadRequest(OMRequest omRequest) {
+    super(omRequest);
+  }
+
+  @Override
+  public OMRequest preExecute(OzoneManager ozoneManager) {
+    MultipartInfoInitiateRequest multipartInfoInitiateRequest =
+        getOmRequest().getInitiateMultiPartUploadRequest();
+    Preconditions.checkNotNull(multipartInfoInitiateRequest);
+
+    OzoneManagerProtocolProtos.KeyArgs.Builder newKeyArgs =
+        multipartInfoInitiateRequest.getKeyArgs().toBuilder()
+            .setMultipartUploadID(UUID.randomUUID().toString() + "-" +
+                UniqueId.next()).setModificationTime(Time.now());
+
+    return getOmRequest().toBuilder()
+        .setUserInfo(getUserInfo())
+        .setInitiateMultiPartUploadRequest(
+            multipartInfoInitiateRequest.toBuilder().setKeyArgs(newKeyArgs))
+        .build();
+  }
+
+  @Override
+  public OMClientResponse validateAndUpdateCache(OzoneManager ozoneManager,
+      long transactionLogIndex) {
+    MultipartInfoInitiateRequest multipartInfoInitiateRequest =
+        getOmRequest().getInitiateMultiPartUploadRequest();
+
+    OzoneManagerProtocolProtos.KeyArgs keyArgs =
+        multipartInfoInitiateRequest.getKeyArgs();
+
+    Preconditions.checkNotNull(keyArgs.getMultipartUploadID());
+
+    String volumeName = keyArgs.getVolumeName();
+    String bucketName = keyArgs.getBucketName();
+    Stri
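
The quoted hunk above is cut off mid-statement by the archiver. For orientation only, here is a sketch of how the pattern this request family follows typically continues past that point: take the bucket lock, build the multipart key, put the entry into the table cache tagged with the transaction log index, and return a response for the double buffer to flush later. All names below (`MpuCacheSketch`, `initiateMpu`, `multipartInfoTableCache`) are hypothetical simplifications, not the real OM classes.

```java
import java.util.HashMap;
import java.util.Map;

/** Hypothetical simplified stand-in for the cache-update half of an OM
 *  multipart-initiate write request. */
class MpuCacheSketch {

  /** Stand-in for the multipart-info table cache (key -> txn log index). */
  static final Map<String, Long> multipartInfoTableCache = new HashMap<>();

  private static final Object BUCKET_LOCK = new Object();

  static String initiateMpu(String volume, String bucket, String key,
      String uploadId, long transactionLogIndex) {
    // The real code derives this key via its metadata manager.
    String multipartKey = "/" + volume + "/" + bucket + "/" + key + "/" + uploadId;
    synchronized (BUCKET_LOCK) { // real code takes a per-bucket lock instead
      // The cache entry carries the transaction log index so the double
      // buffer can later flush entries in log order.
      multipartInfoTableCache.put(multipartKey, transactionLogIndex);
    }
    return multipartKey; // real code wraps the result in a response object
  }

  public static void main(String[] args) {
    String k = initiateMpu("vol1", "buck1", "key1", "upload-1", 42L);
    System.out.println(k);                              // /vol1/buck1/key1/upload-1
    System.out.println(multipartInfoTableCache.get(k)); // 42
  }
}
```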

[jira] [Work logged] (HDDS-1805) Implement S3 Initiate MPU request to use Cache and DoubleBuffer

2019-07-19 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1805?focusedWorklogId=279948&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-279948
 ]

ASF GitHub Bot logged work on HDDS-1805:


Author: ASF GitHub Bot
Created on: 19/Jul/19 20:41
Start Date: 19/Jul/19 20:41
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #1108: 
HDDS-1805. Implement S3 Initiate MPU request to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/1108#discussion_r305517722
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/s3/multipart/S3InitiateMultipartUploadRequest.java
 ##
 @@ -0,0 +1,214 @@

[jira] [Work logged] (HDDS-1836) Change the default value of ratis leader election min timeout to a lower value

2019-07-19 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1836?focusedWorklogId=279949&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-279949
 ]

ASF GitHub Bot logged work on HDDS-1836:


Author: ASF GitHub Bot
Created on: 19/Jul/19 20:43
Start Date: 19/Jul/19 20:43
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1133: HDDS-1836. 
Change the default value of ratis leader election min timeout to a lower value
URL: https://github.com/apache/hadoop/pull/1133#issuecomment-513371180
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 38 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 531 | trunk passed |
   | +1 | compile | 254 | trunk passed |
   | +1 | checkstyle | 68 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 821 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 159 | trunk passed |
   | 0 | spotbugs | 319 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 511 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 481 | the patch passed |
   | +1 | compile | 285 | the patch passed |
   | +1 | javac | 285 | the patch passed |
   | +1 | checkstyle | 70 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 1 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 639 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 163 | the patch passed |
   | +1 | findbugs | 513 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 311 | hadoop-hdds in the patch passed. |
   | -1 | unit | 2081 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 54 | The patch does not generate ASF License warnings. |
   | | | 7135 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.client.rpc.TestCommitWatcher |
   |   | hadoop.ozone.client.rpc.TestCloseContainerHandlingByClient |
   |   | hadoop.ozone.client.rpc.TestBlockOutputStreamWithFailures |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestFailureHandlingByClient |
   |   | hadoop.hdds.scm.pipeline.TestRatisPipelineProvider |
   |   | hadoop.ozone.container.server.TestSecureContainerServer |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestOzoneClientRetriesOnException |
   |   | hadoop.ozone.container.ozoneimpl.TestSecureOzoneContainer |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=18.09.8 Server=18.09.8 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1133/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1133 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle xml |
   | uname | Linux 66a2ab8ea4d9 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 7f1b76c |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1133/1/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1133/1/testReport/ |
   | Max. process+thread count | 5119 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common U: hadoop-hdds/common |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1133/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 



Issue Time Tracking
---

Worklog I

[jira] [Work logged] (HDDS-1795) Implement S3 Delete Bucket request to use Cache and DoubleBuffer

2019-07-19 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1795?focusedWorklogId=279950&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-279950
 ]

ASF GitHub Bot logged work on HDDS-1795:


Author: ASF GitHub Bot
Created on: 19/Jul/19 20:45
Start Date: 19/Jul/19 20:45
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on issue #1097: HDDS-1795. 
Implement S3 Delete Bucket request to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/1097#issuecomment-513371794
 
 
   /retest
 



Issue Time Tracking
---

Worklog Id: (was: 279950)
Time Spent: 2h 50m  (was: 2h 40m)

> Implement S3 Delete Bucket request to use Cache and DoubleBuffer
> 
>
> Key: HDDS-1795
> URL: https://issues.apache.org/jira/browse/HDDS-1795
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h 50m
>  Remaining Estimate: 0h
>
> Implement S3 Bucket write requests to use OM Cache, double buffer.
>  
> This Jira will add the changes to implement S3 bucket operations. HA and
> non-HA will initially have different code paths, but once all requests are
> implemented there will be a single code path.






[jira] [Work logged] (HDDS-1805) Implement S3 Initiate MPU request to use Cache and DoubleBuffer

2019-07-19 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1805?focusedWorklogId=279952&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-279952
 ]

ASF GitHub Bot logged work on HDDS-1805:


Author: ASF GitHub Bot
Created on: 19/Jul/19 21:05
Start Date: 19/Jul/19 21:05
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on issue #1108: HDDS-1805. 
Implement S3 Initiate MPU request to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/1108#issuecomment-513378046
 
 
   Test failures are not related to this patch.
   I will commit this shortly.
 



Issue Time Tracking
---

Worklog Id: (was: 279952)
Time Spent: 1.5h  (was: 1h 20m)

> Implement S3 Initiate MPU request to use Cache and DoubleBuffer
> ---
>
> Key: HDDS-1805
> URL: https://issues.apache.org/jira/browse/HDDS-1805
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> Implement S3 Initiate MPU request to use OM Cache, double buffer.
>  
> This Jira will add the changes to implement S3 bucket operations. HA and
> non-HA will initially have different code paths, but once all requests are
> implemented there will be a single code path.






[jira] [Work logged] (HDDS-1805) Implement S3 Initiate MPU request to use Cache and DoubleBuffer

2019-07-19 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1805?focusedWorklogId=279953&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-279953
 ]

ASF GitHub Bot logged work on HDDS-1805:


Author: ASF GitHub Bot
Created on: 19/Jul/19 21:06
Start Date: 19/Jul/19 21:06
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on issue #1108: HDDS-1805. 
Implement S3 Initiate MPU request to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/1108#issuecomment-513378046
 
 
   Thank You @arp7 for the review.
   Test failures are not related to this patch.
   I will commit this shortly.
 



Issue Time Tracking
---

Worklog Id: (was: 279953)
Time Spent: 1h 40m  (was: 1.5h)

> Implement S3 Initiate MPU request to use Cache and DoubleBuffer
> ---
>
> Key: HDDS-1805
> URL: https://issues.apache.org/jira/browse/HDDS-1805
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> Implement S3 Initiate MPU request to use OM Cache, double buffer.
>  
> This Jira will add the changes to implement S3 bucket operations. HA and
> non-HA will initially have different code paths, but once all requests are
> implemented there will be a single code path.





