[jira] [Created] (HDDS-1227) Avoid extra buffer copy during checksum computation in write Path

2019-03-05 Thread Shashikant Banerjee (JIRA)
Shashikant Banerjee created HDDS-1227:
-

 Summary: Avoid extra buffer copy during checksum computation in 
write Path
 Key: HDDS-1227
 URL: https://issues.apache.org/jira/browse/HDDS-1227
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
  Components: Ozone Client
Affects Versions: 0.4.0
Reporter: Shashikant Banerjee
Assignee: Shashikant Banerjee
 Fix For: 0.4.0


The code here does a buffer copy to compute the checksum. This needs to be 
avoided.
{code:java}
/**
 * Computes checksum for given data.
 * @param byteString input data in the form of ByteString.
 * @return ChecksumData computed for input data.
 */
public ChecksumData computeChecksum(ByteString byteString)
throws OzoneChecksumException {
  return computeChecksum(byteString.toByteArray());
}

{code}
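
A minimal sketch of one way to avoid the copy, assuming a ByteBuffer-based 
computeChecksum overload exists or is added; asReadOnlyByteBuffer() wraps the 
ByteString's backing bytes without materializing a new byte[]:

{code:java}
/**
 * Sketch: computes checksum for given data without copying it into a byte
 * array. Assumes a computeChecksum(ByteBuffer) overload is available.
 */
public ChecksumData computeChecksum(ByteString byteString)
    throws OzoneChecksumException {
  // read-only view over the existing bytes; no defensive copy is made
  return computeChecksum(byteString.asReadOnlyByteBuffer());
}
{code}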






[jira] [Created] (HDFS-14338) TestPread timeouts in branch-2.8

2019-03-05 Thread Akira Ajisaka (JIRA)
Akira Ajisaka created HDFS-14338:


 Summary: TestPread timeouts in branch-2.8
 Key: HDFS-14338
 URL: https://issues.apache.org/jira/browse/HDFS-14338
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Reporter: Akira Ajisaka


TestPread timeouts in branch-2.8.
{noformat}
---
 T E S T S
---
OpenJDK 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was 
removed in 8.0
Running org.apache.hadoop.hdfs.TestPread

Results :

Tests run: 0, Failures: 0, Errors: 0, Skipped: 0
{noformat}
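
As an illustrative aside (not part of the report): a class-level JUnit 4 
timeout rule is one way to make such hangs fail fast with a stack trace 
instead of stalling the surefire fork. The test name and budget below are 
assumptions:

{code:java}
import org.junit.Rule;
import org.junit.Test;
import org.junit.rules.Timeout;

public class TestPreadTimeoutGuard {
  // Applies to every test in the class; 120 seconds is an illustrative budget.
  @Rule
  public Timeout globalTimeout = Timeout.seconds(120);

  @Test
  public void preadSmokeTest() throws Exception {
    // placeholder for the actual pread scenario
  }
}
{code}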






[jira] [Resolved] (HDFS-7134) Replication count for a block should not update till the blocks have settled on Datanodes

2019-03-05 Thread gurmukh singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-7134?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

gurmukh singh resolved HDFS-7134.
-
   Resolution: Fixed
Fix Version/s: 3.1.0
 Release Note: This is resolved in 3.1

> Replication count for a block should not update till the blocks have settled 
> on Datanodes
> -
>
> Key: HDFS-7134
> URL: https://issues.apache.org/jira/browse/HDFS-7134
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, hdfs
>Affects Versions: 1.2.1, 2.6.0, 2.7.3
> Environment: Linux nn1.cluster1.com 2.6.32-431.20.3.el6.x86_64 #1 SMP 
> Thu Jun 19 21:14:45 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
> [hadoop@nn1 conf]$ cat /etc/redhat-release
> CentOS release 6.5 (Final)
>Reporter: gurmukh singh
>Priority: Critical
>  Labels: HDFS
> Fix For: 3.1.0
>
>
> The count of replicas for a block should not change until the blocks have
> settled on the datanodes.
> Test Case:
> Hadoop Cluster with 1 namenode and 3 datanodes.
> nn1.cluster1.com(192.168.1.70)
> dn1.cluster1.com(192.168.1.72)
> dn2.cluster1.com(192.168.1.73)
> dn3.cluster1.com(192.168.1.74)
> Cluster is up and running fine with replication set to "1" for the parameter
> "dfs.replication" on all nodes:
> <property>
>   <name>dfs.replication</name>
>   <value>1</value>
> </property>
> To reduce the wait time, I have reduced the dfs.heartbeat and recheck
> parameters.
> on datanode2 (192.168.1.72)
> [hadoop@dn2 ~]$ hadoop fs -Ddfs.replication=2 -put from_dn2 /
> [hadoop@dn2 ~]$ hadoop fs -ls /from_dn2
> Found 1 items
> -rw-r--r--   2 hadoop supergroup 17 2014-09-23 13:33 /from_dn2
> On Namenode
> ===
> As expected, since the copy was done from datanode2, one copy goes locally.
> [hadoop@nn1 conf]$ hadoop fsck /from_dn2 -files -blocks -locations
> FSCK started by hadoop from /192.168.1.70 for path /from_dn2 at Tue Sep 23 
> 13:53:16 IST 2014
> /from_dn2 17 bytes, 1 block(s):  OK
> 0. blk_8132629811771280764_1175 len=17 repl=2 [192.168.1.74:50010, 
> 192.168.1.73:50010]
> I can see the blocks on the datanodes' disks as well, under the "current"
> directory.
> Now, shut down datanode2 (192.168.1.73), and as expected the block moves to
> another datanode to maintain a replication of 2:
> [hadoop@nn1 conf]$ hadoop fsck /from_dn2 -files -blocks -locations
> FSCK started by hadoop from /192.168.1.70 for path /from_dn2 at Tue Sep 23 
> 13:54:21 IST 2014
> /from_dn2 17 bytes, 1 block(s):  OK
> 0. blk_8132629811771280764_1175 len=17 repl=2 [192.168.1.74:50010, 
> 192.168.1.72:50010]
> But now if I bring back datanode2, the namenode sees that this block is at 3
> places and fires an invalidate command for datanode1 (192.168.1.72), yet the
> replication count on the namenode is bumped to 3 immediately.
> [hadoop@nn1 conf]$ hadoop fsck /from_dn2 -files -blocks -locations
> FSCK started by hadoop from /192.168.1.70 for path /from_dn2 at Tue Sep 23 
> 13:56:12 IST 2014
> /from_dn2 17 bytes, 1 block(s):  OK
> 0. blk_8132629811771280764_1175 len=17 repl=3 [192.168.1.74:50010, 
> 192.168.1.72:50010, 192.168.1.73:50010]
> on Datanode1 - The invalidate command has been fired immediately and the 
> block deleted.
> =
> 2014-09-23 13:54:17,483 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: 
> Receiving blk_8132629811771280764_1175 src: /192.168.1.74:38099 dest: 
> /192.168.1.72:50010
> 2014-09-23 13:54:17,502 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: 
> Received blk_8132629811771280764_1175 src: /192.168.1.74:38099 dest: 
> /192.168.1.72:50010 size 17
> 2014-09-23 13:55:28,720 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: 
> Scheduling blk_8132629811771280764_1175 file 
> /space/disk1/current/blk_8132629811771280764 for deletion
> 2014-09-23 13:55:28,721 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: 
> Deleted blk_8132629811771280764_1175 at file 
> /space/disk1/current/blk_8132629811771280764
> The namenode still shows 3 replicas even though one has been deleted, even
> after more than 30 minutes.
> [hadoop@nn1 conf]$ hadoop fsck /from_dn2 -files -blocks -locations
> FSCK started by hadoop from /192.168.1.70 for path /from_dn2 at Tue Sep 23 
> 14:21:27 IST 2014
> /from_dn2 17 bytes, 1 block(s):  OK
> 0. blk_8132629811771280764_1175 len=17 repl=3 [192.168.1.74:50010, 
> 192.168.1.72:50010, 192.168.1.73:50010]
> This could be dangerous if someone removes a datanode or the other 2
> datanodes fail.
> On Datanode 1
> =
> Before datanode2 is brought back:
> [hadoop@dn1 conf]$ ls -l /space/disk*/current
> /space/disk1/current:
> total 28
> -rw-rw-r-- 1 hadoop hadoop   13 Sep 21 09:09 blk_2278001646987517832
> -rw-rw-r-- 1 hadoop hadoop   11 Sep 21 09:09 blk_2278001646987517832_1171.meta
> -rw-rw-r-- 1 hadoop hadoop   17 Sep 23 13:54 blk_8132629811771280764
> 
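
A small sketch, using only the public FileSystem API (the path is taken from 
the report; everything else is illustrative), that makes the mismatch visible 
by printing the file's target replication factor next to the hosts the 
NameNode actually reports for each block:

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.BlockLocation;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ReplicaCheck {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    FileStatus st = fs.getFileStatus(new Path("/from_dn2"));
    // the replication factor the NameNode reports for the file
    System.out.println("target replication = " + st.getReplication());
    for (BlockLocation loc : fs.getFileBlockLocations(st, 0, st.getLen())) {
      // the datanodes that actually hold this block right now
      System.out.println("block hosts = " + String.join(",", loc.getHosts()));
    }
  }
}
{code}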

erasure coding with 2.x clients

2019-03-05 Thread Steven Rand
Hi all,

I wanted to suggest the possibility of backporting client-side erasure
coding changes to branch-2, and get feedback on whether this is (1)
desirable, and (2) feasible without also backporting the server-side
changes.

Currently, client-side code to support erasure coding hasn't been
backported to branch-2, and as a result, both reads and writes of
erasure-coded data with 2.x clients fail:

* Running "hdfs dfs -get" on an erasure-coded file with a 2.9 client fails
with "java.io.IOException: Unexpected EOS from the reader" coming
from DFSInputStream.readWithStrategy(DFSInputStream.java:964).
* Writing to an erasure-coded directory via "hdfs dfs -put" with a Hadoop
2.9 client fails with a NotReplicatedYetException. (Writing the same file
to a directory that doesn't use erasure coding succeeds with the 2.9
client, and writing the file to the directory with erasure coding succeeds
using a 3.2 client.)

I think it's desirable to backport the client-side erasure coding support
to branch-2. Currently we have wire compatibility that allows 2.x clients
to run on 3.x clusters; however, these clients can't make use of one of the
most compelling features of Hadoop 3.

However, I don't know the code well enough to say whether it's possible to
backport the client-side changes without also pulling in the server-side
changes, at which point the scope of the backport increases dramatically.

I'm hoping people can weigh in on whether this is something we want to do,
and also on whether it's something we can do without backporting the
server-side changes as well.

If this is a reasonable request, I'll file a JIRA for it.

Thanks,
Steve


[jira] [Created] (HDFS-14337) WebHdfsFileSystem#create should throw an exception when the quota is exceeded

2019-03-05 Thread Takanobu Asanuma (JIRA)
Takanobu Asanuma created HDFS-14337:
---

 Summary: WebHdfsFileSystem#create should throw an exception when 
the quota is exceeded
 Key: HDFS-14337
 URL: https://issues.apache.org/jira/browse/HDFS-14337
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Takanobu Asanuma
Assignee: Takanobu Asanuma









[jira] [Created] (HDDS-1226) ozone-filesystem jar missing in hadoop classpath

2019-03-05 Thread Vivek Ratnavel Subramanian (JIRA)
Vivek Ratnavel Subramanian created HDDS-1226:


 Summary: ozone-filesystem jar missing in hadoop classpath
 Key: HDDS-1226
 URL: https://issues.apache.org/jira/browse/HDDS-1226
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: Ozone Filesystem, Ozone Manager
Reporter: Vivek Ratnavel Subramanian
Assignee: Vivek Ratnavel Subramanian


hadoop-ozone-filesystem-lib-*.jar is missing in hadoop classpath.






Re: [ANNOUNCE] Eric Badger is now a committer!

2019-03-05 Thread Wangda Tan
Congratulations, Eric.

Welcome aboard!

Best,
Wangda


On Tue, Mar 5, 2019 at 2:26 PM Sree V 
wrote:

> Congratulations, Eric.
>
>
>
> Thank you./Sree
>
>
>
> On Tuesday, March 5, 2019, 12:50:20 PM PST, Ayush Saxena <
> ayush...@gmail.com> wrote:
>
>  Congratulations Eric!!!
>
> -Ayush
>
> > On 05-Mar-2019, at 11:34 PM, Chandni Singh 
> wrote:
> >
> > Congratulations Eric!
> >
> > On Tue, Mar 5, 2019 at 9:32 AM Jim Brennan
> >  wrote:
> >
> >> Congratulations Eric!
> >>
> >> On Tue, Mar 5, 2019 at 11:20 AM Eric Payne  >> .invalid>
> >> wrote:
> >>
> >>> It is my pleasure to announce that Eric Badger has accepted an
> invitation
> >>> to become a Hadoop Core committer.
> >>>
> >>> Congratulations, Eric! This is well-deserved!
> >>>
> >>> -Eric Payne
> >>>
> >>
>
>


Re: [ANNOUNCE] Eric Badger is now a committer!

2019-03-05 Thread Sree V
Congratulations, Eric.



Thank you./Sree

 

On Tuesday, March 5, 2019, 12:50:20 PM PST, Ayush Saxena 
 wrote:  
 
 Congratulations Eric!!!

-Ayush

> On 05-Mar-2019, at 11:34 PM, Chandni Singh  wrote:
> 
> Congratulations Eric!
> 
> On Tue, Mar 5, 2019 at 9:32 AM Jim Brennan
>  wrote:
> 
>> Congratulations Eric!
>> 
>> On Tue, Mar 5, 2019 at 11:20 AM Eric Payne > .invalid>
>> wrote:
>> 
>>> It is my pleasure to announce that Eric Badger has accepted an invitation
>>> to become a Hadoop Core committer.
>>> 
>>> Congratulations, Eric! This is well-deserved!
>>> 
>>> -Eric Payne
>>> 
>> 


  

[jira] [Resolved] (HDDS-725) Exception thrown in loop while trying to write a file in ozonefs

2019-03-05 Thread Jitendra Nath Pandey (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-725?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jitendra Nath Pandey resolved HDDS-725.
---
   Resolution: Fixed
Fix Version/s: 0.4.0

I think this has been fixed. Please re-open if the issue re-surfaces.

> Exception thrown in loop while trying to write a file in ozonefs
> 
>
> Key: HDDS-725
> URL: https://issues.apache.org/jira/browse/HDDS-725
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Affects Versions: 0.3.0
> Environment:  
>  
>Reporter: Nilotpal Nandi
>Priority: Blocker
>  Labels: test-badlands
> Fix For: 0.4.0
>
> Attachments: all-node-ozone-logs-1540375264.tar.gz
>
>
> Ran the following command :
> 
> ozone fs -put 2GB /testdir5/
> Exceptions are thrown continuously in a loop. Please note that there are 8
> datanodes alive in the cluster.
> {noformat}
> root@ctr-e138-1518143905142-53-01-08 logs]# /root/allssh.sh 'jps -l | 
> grep Datanode'
> 
> Host::172.27.20.96
> 
> 411564 org.apache.hadoop.ozone.HddsDatanodeService
> 
> Host::172.27.20.91
> 
> 472897 org.apache.hadoop.ozone.HddsDatanodeService
> 
> Host::172.27.38.9
> 
> 351139 org.apache.hadoop.ozone.HddsDatanodeService
> 
> Host::172.27.24.90
> 
> 314304 org.apache.hadoop.ozone.HddsDatanodeService
> 
> Host::172.27.15.139
> 
> 324820 org.apache.hadoop.ozone.HddsDatanodeService
> 
> Host::172.27.10.199
> 
> 
> Host::172.27.15.131
> 
> 
> Host::172.27.57.0
> 
> 
> Host::172.27.23.139
> 
> 627053 org.apache.hadoop.ozone.HddsDatanodeService
> 
> Host::172.27.68.65
> 
> 557443 org.apache.hadoop.ozone.HddsDatanodeService
> 
> Host::172.27.19.74
> 
> 
> Host::172.27.85.64
> 
> 508121 org.apache.hadoop.ozone.HddsDatanodeService{noformat}
>  
> {noformat}
>  
> 2018-10-24 09:49:47,093 INFO org.apache.ratis.server.impl.LeaderElection: 
> 7c3b2fb1-cf16-4e5f-94dc-8a089492ad57: Election REJECTED; received 0 
> response(s) [] and 2 exception(s); 
> 7c3b2fb1-cf16-4e5f-94dc-8a089492ad57:t16296, leader=null, 
> voted=7c3b2fb1-cf16-4e5f-94dc-8a089492ad57, raftlog=[(t:37, i:271)], 
> conf=271: [7c3b2fb1-cf16-4e5f-94dc-8a089492ad57:172.27.85.64:9858, 
> 86f9e313-ae49-4675-95d7-27856641aee1:172.27.15.131:9858, 
> 9524f4e2-9031-4852-ab7c-11c2da3460db:172.27.57.0:9858], old=null
> 2018-10-24 09:49:47,093 INFO org.apache.ratis.server.impl.LeaderElection: 0: 
> java.util.concurrent.ExecutionException: 
> org.apache.ratis.thirdparty.io.grpc.StatusRuntimeException: UNAVAILABLE: io 
> exception
> 2018-10-24 09:49:47,093 INFO org.apache.ratis.server.impl.LeaderElection: 1: 
> java.util.concurrent.ExecutionException: 
> org.apache.ratis.thirdparty.io.grpc.StatusRuntimeException: UNAVAILABLE: io 
> exception
> 2018-10-24 09:49:47,093 INFO org.apache.ratis.server.impl.RaftServerImpl: 
> 7c3b2fb1-cf16-4e5f-94dc-8a089492ad57 changes role from CANDIDATE to FOLLOWER 
> at term 16296 for changeToFollower
> 2018-10-24 09:49:47,093 INFO org.apache.ratis.server.impl.RoleInfo: 
> 7c3b2fb1-cf16-4e5f-94dc-8a089492ad57: shutdown LeaderElection
> 2018-10-24 09:49:47,093 INFO org.apache.ratis.server.impl.RoleInfo: 
> 7c3b2fb1-cf16-4e5f-94dc-8a089492ad57: start FollowerState
> 2018-10-24 09:49:48,171 INFO org.apache.ratis.server.impl.FollowerState: 
> 7c3b2fb1-cf16-4e5f-94dc-8a089492ad57 changes to CANDIDATE, lastRpcTime:1078, 
> electionTimeout:1078ms
> 2018-10-24 09:49:48,171 INFO org.apache.ratis.server.impl.RoleInfo: 
> 7c3b2fb1-cf16-4e5f-94dc-8a089492ad57: shutdown FollowerState
> 2018-10-24 09:49:48,171 INFO org.apache.ratis.server.impl.RaftServerImpl: 
> 7c3b2fb1-cf16-4e5f-94dc-8a089492ad57 changes role from FOLLOWER to CANDIDATE 
> at term 16296 for changeToCandidate
> 2018-10-24 09:49:48,172 INFO org.apache.ratis.server.impl.RoleInfo: 
> 7c3b2fb1-cf16-4e5f-94dc-8a089492ad57: start LeaderElection
> 2018-10-24 09:49:48,173 INFO org.apache.ratis.server.impl.LeaderElection: 
> 7c3b2fb1-cf16-4e5f-94dc-8a089492ad57: begin an election in Term 16297
> 2018-10-24 09:49:48,174 INFO org.apache.ratis.server.impl.LeaderElection: 
> 7c3b2fb1-cf16-4e5f-94dc-8a089492ad57 got exception when requesting votes: {}
> 

[jira] [Created] (HDDS-1225) Provide docker-compose for OM HA

2019-03-05 Thread Hanisha Koneru (JIRA)
Hanisha Koneru created HDDS-1225:


 Summary: Provide docker-compose for OM HA
 Key: HDDS-1225
 URL: https://issues.apache.org/jira/browse/HDDS-1225
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Hanisha Koneru
Assignee: Hanisha Koneru


This Jira proposes to add a docker-compose file to run a local pseudo cluster 
with OM HA (3 OM nodes).






Re: [ANNOUNCE] Eric Badger is now a committer!

2019-03-05 Thread Ayush Saxena
Congratulations Eric!!!

-Ayush

> On 05-Mar-2019, at 11:34 PM, Chandni Singh  wrote:
> 
> Congratulations Eric!
> 
> On Tue, Mar 5, 2019 at 9:32 AM Jim Brennan
>  wrote:
> 
>> Congratulations Eric!
>> 
>> On Tue, Mar 5, 2019 at 11:20 AM Eric Payne > .invalid>
>> wrote:
>> 
>>> It is my pleasure to announce that Eric Badger has accepted an invitation
>>> to become a Hadoop Core committer.
>>> 
>>> Congratulations, Eric! This is well-deserved!
>>> 
>>> -Eric Payne
>>> 
>> 




Re: [ANNOUNCE] Eric Badger is now a committer!

2019-03-05 Thread Chandni Singh
Congratulations Eric!

On Tue, Mar 5, 2019 at 9:32 AM Jim Brennan
 wrote:

> Congratulations Eric!
>
> On Tue, Mar 5, 2019 at 11:20 AM Eric Payne  .invalid>
> wrote:
>
> > It is my pleasure to announce that Eric Badger has accepted an invitation
> > to become a Hadoop Core committer.
> >
> > Congratulations, Eric! This is well-deserved!
> >
> > -Eric Payne
> >
>


Re: [ANNOUNCE] Eric Badger is now a committer!

2019-03-05 Thread Jim Brennan
Congratulations Eric!

On Tue, Mar 5, 2019 at 11:20 AM Eric Payne 
wrote:

> It is my pleasure to announce that Eric Badger has accepted an invitation
> to become a Hadoop Core committer.
>
> Congratulations, Eric! This is well-deserved!
>
> -Eric Payne
>


[ANNOUNCE] Eric Badger is now a committer!

2019-03-05 Thread Eric Payne
It is my pleasure to announce that Eric Badger has accepted an invitation to 
become a Hadoop Core committer.

Congratulations, Eric! This is well-deserved!

-Eric Payne


[jira] [Created] (HDDS-1224) Restructure code to validate the response from server in the Read path

2019-03-05 Thread Shashikant Banerjee (JIRA)
Shashikant Banerjee created HDDS-1224:
-

 Summary: Restructure code to validate the response from server in 
the Read path
 Key: HDDS-1224
 URL: https://issues.apache.org/jira/browse/HDDS-1224
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
  Components: Ozone Client
Affects Versions: 0.4.0
Reporter: Shashikant Banerjee
Assignee: Shashikant Banerjee
 Fix For: 0.4.0


In the read path, validation of the response while reading data from the 
datanodes happens in XceiverClientGrpc, and additional checksum verification 
happens in the Ozone client to verify the read chunk response. The aim of this 
Jira is to modify the read call to take a validator function as part of 
reading data, so that all validation can happen in a single, unified place.
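
A rough sketch of the proposed shape; ResponseValidator and readWithValidation 
are illustrative names, not the actual Ozone API:

{code:java}
import java.io.IOException;
import java.util.List;

// Illustrative callback type; not the actual Ozone interface.
@FunctionalInterface
interface ResponseValidator<T> {
  void validate(T response) throws IOException;
}

class ReadPathSketch {
  // All checks (status validation, checksum verification, ...) are passed in
  // as validators, so every one of them runs in one unified place.
  static <T> T readWithValidation(T response, List<ResponseValidator<T>> validators)
      throws IOException {
    for (ResponseValidator<T> v : validators) {
      v.validate(response);
    }
    return response;
  }
}
{code}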






[jira] [Created] (HDDS-1223) Add more unit tests to verify persistence of container info in Ratis Snapshots

2019-03-05 Thread Shashikant Banerjee (JIRA)
Shashikant Banerjee created HDDS-1223:
-

 Summary: Add more unit tests to verify persistence of container 
info in Ratis Snapshots
 Key: HDDS-1223
 URL: https://issues.apache.org/jira/browse/HDDS-1223
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
Reporter: Shashikant Banerjee
Assignee: Shashikant Banerjee


As per the review comments by [~arpaga] here:

https://issues.apache.org/jira/browse/HDDS-935?focusedCommentId=16782888&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16782888

This Jira aims to add more unit tests to verify the container info persistence 
behaviour in the Ratis snapshot file.






Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2019-03-05 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1066/

[Mar 1, 2019 6:49:39 PM] (xyao) HDDS-1204. Fix ClassNotFound issue with 
javax.xml.bind.DatatypeConverter
[Mar 1, 2019 11:41:09 PM] (xyao) HDDS-134. SCM CA: OM sends CSR and uses 
certificate issued by SCM.
[Mar 2, 2019 12:15:20 AM] (7813154+ajayydv) HDDS-1183. Override 
getDelegationToken API for OzoneFileSystem.
[Mar 2, 2019 12:15:20 AM] (7813154+ajayydv) Fix checkstyle issue
[Mar 2, 2019 12:19:43 AM] (xyao) Revert "HDDS-1183. Override getDelegationToken 
API for OzoneFileSystem.
[Mar 4, 2019 7:52:04 AM] (aajisaka) HDFS-14272. [SBN read] Make 
ObserverReadProxyProvider initialize its
[Mar 4, 2019 7:58:59 AM] (aajisaka) Revert "HDDS-1072. Implement RetryProxy and 
FailoverProxy for OM
[Mar 4, 2019 7:59:20 AM] (aajisaka) YARN-9332. RackResolver tool should accept 
multiple hosts. Contributed
[Mar 4, 2019 7:59:20 AM] (aajisaka) Revert "HDFS-14261. Kerberize 
JournalNodeSyncer unit test. Contributed
[Mar 4, 2019 7:59:20 AM] (aajisaka) YARN-7477. Moving logging APIs over to 
slf4j in hadoop-yarn-common.
[Mar 4, 2019 9:34:24 AM] (yqlin) HDFS-14182. Datanode usage histogram is 
clicked to show ip list.
[Mar 4, 2019 11:27:31 AM] (bibinchundatt) Revert "YARN-8132. Final Status of 
applications shown as UNDEFINED in
[Mar 4, 2019 6:37:26 PM] (7813154+ajayydv) HDDS-1183. Override 
getDelegationToken API for OzoneFileSystem.
[Mar 4, 2019 6:43:44 PM] (weichiu) HDFS-14314. fullBlockReportLeaseId should be 
reset after registering to
[Mar 4, 2019 8:00:16 PM] (aengineer) HDDS-1136 : Add metric counters to capture 
the RocksDB checkpointing
[Mar 4, 2019 8:37:57 PM] (weichiu) HDFS-14321. Fix -Xcheck:jni issues in 
libhdfs, run ctest with
[Mar 4, 2019 10:35:00 PM] (7813154+ajayydv) HDDS-623. On SCM UI, Node Manager 
info is empty (#523)
[Mar 4, 2019 11:08:12 PM] (stevel) HADOOP-16148. Cleanup LineReader Unit Test.




-1 overall


The following subsystems voted -1:
asflicense findbugs hadolint pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   hadoop-build-tools/src/main/resources/checkstyle/checkstyle.xml 
   hadoop-build-tools/src/main/resources/checkstyle/suppressions.xml 
   hadoop-tools/hadoop-azure/src/config/checkstyle-suppressions.xml 
   hadoop-tools/hadoop-azure/src/config/checkstyle.xml 
   hadoop-tools/hadoop-resourceestimator/src/config/checkstyle.xml 

Failed junit tests :

   hadoop.hdfs.server.namenode.TestReconstructStripedBlocks 
   hadoop.fs.viewfs.TestViewFileSystemHdfs 
   hadoop.hdfs.web.TestWebHdfsTimeouts 
   hadoop.hdfs.server.datanode.fsdataset.impl.TestSpaceReservation 
   
hadoop.yarn.server.resourcemanager.metrics.TestSystemMetricsPublisherForV2 
   hadoop.yarn.server.resourcemanager.rmapp.TestRMAppTransitions 
   
hadoop.yarn.server.resourcemanager.metrics.TestCombinedSystemMetricsPublisher 
   hadoop.yarn.server.timelineservice.security.TestTimelineAuthFilterForV2 
   hadoop.yarn.applications.distributedshell.TestDistributedShell 
   hadoop.mapred.TestMRTimelineEventHandling 
   hadoop.hdds.scm.block.TestBlockManager 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1066/artifact/out/diff-compile-cc-root.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1066/artifact/out/diff-compile-javac-root.txt
  [336K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1066/artifact/out/diff-checkstyle-root.txt
  [17M]

   hadolint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1066/artifact/out/diff-patch-hadolint.txt
  [4.0K]

   pathlen:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1066/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1066/artifact/out/diff-patch-pylint.txt
  [144K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1066/artifact/out/diff-patch-shellcheck.txt
  [20K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1066/artifact/out/diff-patch-shelldocs.txt
  [12K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1066/artifact/out/whitespace-eol.txt
  [9.6M]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1066/artifact/out/whitespace-tabs.txt
  [1.1M]

   xml:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1066/artifact/out/xml.txt
  [16K]

   findbugs:

   

[jira] [Created] (HDDS-1222) Remove TestContainerSQLCli unit test stub

2019-03-05 Thread Elek, Marton (JIRA)
Elek, Marton created HDDS-1222:
--

 Summary: Remove TestContainerSQLCli unit test stub
 Key: HDDS-1222
 URL: https://issues.apache.org/jira/browse/HDDS-1222
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Elek, Marton
Assignee: Elek, Marton


In HDDS-447 we removed support for the 'ozone noz' CLI tool, which was a 
rocksdb/leveldb-to-SQL exporter.

But we still have the unit test for the tool (in fact only the skeleton of the 
unit test, as the main logic is removed). Even worse, this unit test is failing 
as it calls System.exit:

{code}
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:3.0.0-M1:test (default-test) on 
project hadoop-ozone-tools: There are test failures.
[ERROR] 
[ERROR] Please refer to 
/testptch/hadoop/hadoop-ozone/tools/target/surefire-reports for the individual 
test results.
[ERROR] Please refer to dump files (if any exist) [date].dump, 
[date]-jvmRun[N].dump and [date].dumpstream.
[ERROR] ExecutionException The forked VM terminated without properly saying 
goodbye. VM crash or System.exit called?
{code}

I think this test can be deleted.






Apache Hadoop qbt Report: branch2+JDK7 on Linux/x86

2019-03-05 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/251/

[Mar 4, 2019 7:30:56 PM] (weichiu) HDFS-14314. fullBlockReportLeaseId should be 
reset after registering to




-1 overall


The following subsystems voted -1:
asflicense findbugs hadolint pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   hadoop-build-tools/src/main/resources/checkstyle/checkstyle.xml 
   hadoop-build-tools/src/main/resources/checkstyle/suppressions.xml 
   
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/empty-configuration.xml
 
   hadoop-tools/hadoop-azure/src/config/checkstyle.xml 
   hadoop-tools/hadoop-resourceestimator/src/config/checkstyle.xml 
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/public/crossdomain.xml 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/public/crossdomain.xml
 

FindBugs :

   module:hadoop-common-project/hadoop-common 
   Class org.apache.hadoop.fs.GlobalStorageStatistics defines non-transient 
non-serializable instance field map In GlobalStorageStatistics.java:instance 
field map In GlobalStorageStatistics.java 

FindBugs :

   module:hadoop-hdfs-project/hadoop-hdfs 
   Dead store to state in 
org.apache.hadoop.hdfs.server.namenode.FSImageFormatPBINode$Saver.save(OutputStream,
 INodeSymlink) At 
FSImageFormatPBINode.java:org.apache.hadoop.hdfs.server.namenode.FSImageFormatPBINode$Saver.save(OutputStream,
 INodeSymlink) At FSImageFormatPBINode.java:[line 623] 

FindBugs :

   
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/hadoop-yarn-server-timelineservice-hbase-client
 
   Boxed value is unboxed and then immediately reboxed in 
org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result,
 byte[], byte[], KeyConverter, ValueConverter, boolean) At 
ColumnRWHelper.java:then immediately reboxed in 
org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result,
 byte[], byte[], KeyConverter, ValueConverter, boolean) At 
ColumnRWHelper.java:[line 335] 

Failed junit tests :

   hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys 
   hadoop.hdfs.security.TestDelegationTokenForProxyUser 
   hadoop.hdfs.TestDistributedFileSystem 
   hadoop.hdfs.TestDFSShell 
   hadoop.hdfs.TestTrashWithSecureEncryptionZones 
   hadoop.registry.secure.TestSecureLogins 
   hadoop.yarn.server.timelineservice.security.TestTimelineAuthFilterForV2 
   hadoop.yarn.client.api.impl.TestAMRMProxy 
   hadoop.yarn.sls.TestSLSRunner 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/251/artifact/out/diff-compile-cc-root-jdk1.7.0_95.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/251/artifact/out/diff-compile-javac-root-jdk1.7.0_95.txt
  [328K]

   cc:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/251/artifact/out/diff-compile-cc-root-jdk1.8.0_191.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/251/artifact/out/diff-compile-javac-root-jdk1.8.0_191.txt
  [308K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/251/artifact/out/diff-checkstyle-root.txt
  [16M]

   hadolint:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/251/artifact/out/diff-patch-hadolint.txt
  [4.0K]

   pathlen:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/251/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/251/artifact/out/diff-patch-pylint.txt
  [24K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/251/artifact/out/diff-patch-shellcheck.txt
  [72K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/251/artifact/out/diff-patch-shelldocs.txt
  [8.0K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/251/artifact/out/whitespace-eol.txt
  [12M]
   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/251/artifact/out/whitespace-tabs.txt
  [1.2M]

   xml:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/251/artifact/out/xml.txt
  [20K]

   findbugs:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/251/artifact/out/branch-findbugs-hadoop-common-project_hadoop-common-warnings.html
  [8.0K]
   

[jira] [Created] (HDDS-1221) Introduce fine grained lock in Ozone Manager for key operations

2019-03-05 Thread Lokesh Jain (JIRA)
Lokesh Jain created HDDS-1221:
-

 Summary: Introduce fine grained lock in Ozone Manager for key 
operations
 Key: HDDS-1221
 URL: https://issues.apache.org/jira/browse/HDDS-1221
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Lokesh Jain
Assignee: Lokesh Jain


Currently the Ozone Manager acquires a bucket lock for key operations. We can 
introduce a fine-grained lock for key operations in the Ozone Manager. This 
would help increase throughput for key operations within a bucket.
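
One possible shape, sketched under the assumption that a lock striped by the 
full key name is acceptable; all names and the stripe count are illustrative, 
not the actual OM implementation:

{code:java}
import java.util.concurrent.locks.ReentrantReadWriteLock;

class KeyLockManager {
  // Illustrative stripe count; a fixed pool bounds memory use.
  private static final int STRIPES = 1024;
  private final ReentrantReadWriteLock[] locks = new ReentrantReadWriteLock[STRIPES];

  KeyLockManager() {
    for (int i = 0; i < STRIPES; i++) {
      locks[i] = new ReentrantReadWriteLock();
    }
  }

  private ReentrantReadWriteLock lockFor(String volume, String bucket, String key) {
    // different keys usually map to different stripes, so writers to the
    // same bucket no longer serialize on a single bucket lock
    int h = (volume + "/" + bucket + "/" + key).hashCode();
    return locks[Math.floorMod(h, STRIPES)];
  }

  void withWriteLock(String volume, String bucket, String key, Runnable op) {
    ReentrantReadWriteLock l = lockFor(volume, bucket, key);
    l.writeLock().lock();
    try {
      op.run();
    } finally {
      l.writeLock().unlock();
    }
  }
}
{code}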






[jira] [Created] (HDDS-1220) KeyManager#openKey should release the bucket lock before doing an allocateBlock

2019-03-05 Thread Lokesh Jain (JIRA)
Lokesh Jain created HDDS-1220:
-

 Summary: KeyManager#openKey should release the bucket lock before 
doing an allocateBlock
 Key: HDDS-1220
 URL: https://issues.apache.org/jira/browse/HDDS-1220
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Lokesh Jain
Assignee: Lokesh Jain


Currently KeyManager#openKey makes an allocateBlock call without releasing the 
bucket lock. Since allocateBlock requires an RPC connection to SCM, the handler 
thread in OM would hold the bucket lock until the RPC is complete. Since the 
allocateBlock call does not require the bucket lock to be held, it can be made 
after releasing the bucket lock.
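
A minimal sketch of the proposed reordering; the class, lock, and client below 
are illustrative stand-ins for the actual OM code:

{code:java}
import java.io.IOException;
import java.util.concurrent.locks.ReentrantLock;

class OpenKeySketch {
  // Illustrative stand-in for the SCM block client.
  interface ScmClient { long allocateBlock(long size) throws IOException; }

  private final ReentrantLock bucketLock = new ReentrantLock();
  private final ScmClient scmClient;

  OpenKeySketch(ScmClient scmClient) { this.scmClient = scmClient; }

  long openKey(String key, long size) throws IOException {
    bucketLock.lock();
    try {
      // only the checks that genuinely need the bucket lock run here
    } finally {
      bucketLock.unlock(); // released before the remote call
    }
    // the RPC to SCM no longer blocks other key operations on this bucket
    return scmClient.allocateBlock(size);
  }
}
{code}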






[jira] [Resolved] (HDDS-1046) TestCloseContainerByPipeline#testIfCloseContainerCommandHandlerIsInvoked fails intermittently

2019-03-05 Thread Shashikant Banerjee (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1046?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shashikant Banerjee resolved HDDS-1046.
---
Resolution: Fixed
  Assignee: Shashikant Banerjee

> TestCloseContainerByPipeline#testIfCloseContainerCommandHandlerIsInvoked 
> fails intermittently
> -
>
> Key: HDDS-1046
> URL: https://issues.apache.org/jira/browse/HDDS-1046
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Affects Versions: 0.4.0
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
>  Labels: newbie
> Fix For: 0.4.0
>
>
>  
> {code:java}
> java.lang.StackOverflowError
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.getSubject(Subject.java:297)
> at 
> org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:569)
> at 
> org.apache.hadoop.hdds.scm.storage.ContainerProtocolCalls.getEncodedBlockToken(ContainerProtocolCalls.java:578)
> at 
> org.apache.hadoop.hdds.scm.storage.ContainerProtocolCalls.writeChunkAsync(ContainerProtocolCalls.java:318)
> at 
> org.apache.hadoop.hdds.scm.storage.BlockOutputStream.writeChunkToContainer(BlockOutputStream.java:602)
> at 
> org.apache.hadoop.hdds.scm.storage.BlockOutputStream.writeChunk(BlockOutputStream.java:464)
> at 
> org.apache.hadoop.hdds.scm.storage.BlockOutputStream.close(BlockOutputStream.java:480)
> at 
> org.apache.hadoop.ozone.client.io.BlockOutputStreamEntry.close(BlockOutputStreamEntry.java:137)
> at 
> org.apache.hadoop.ozone.client.io.KeyOutputStream.handleFlushOrClose(KeyOutputStream.java:489)
> at 
> org.apache.hadoop.ozone.client.io.KeyOutputStream.handleFlushOrClose(KeyOutputStream.java:501)
> at 
> org.apache.hadoop.ozone.client.io.KeyOutputStream.handleFlushOrClose(KeyOutputStream.java:501)
> {code}
> The failure is happening because the ozone client receives a
> CONTAINER_NOT_OPEN exception from the datanode, allocates a new block, and
> retries the write. But every allocateBlock call to SCM allocates a block on
> the same quasi-closed container, hence the client retries indefinitely and
> ultimately runs out of stack space.
> Logs below indicate 3 successive block allocations from SCM on the
> quasi-closed container.
> {code:java}
> 15:15:26.812 [grpc-default-executor-3] ERROR DNAudit - user=null | ip=null | 
> op=WRITE_CHUNK {blockData=conID: 2 locID: 101533189852894070 bcId: 0} | 
> ret=FAILURE
> org.apache.hadoop.hdds.scm.container.common.helpers.ContainerNotOpenException:
>  Container 2 in QUASI_CLOSED state
> 15:15:26.818 [grpc-default-executor-3] ERROR DNAudit - user=null | ip=null | 
> op=WRITE_CHUNK {blockData=conID: 2 locID: 101533189853352823 bcId: 0} | 
> ret=FAILURE
> org.apache.hadoop.hdds.scm.container.common.helpers.ContainerNotOpenException:
>  Container 2 in QUASI_CLOSED state
> 15:15:26.825 [grpc-default-executor-3] ERROR DNAudit - user=null | ip=null | 
> op=WRITE_CHUNK {blockData=conID: 2 locID: 101533189853746040 bcId: 0} | 
> ret=FAILURE
> org.apache.hadoop.hdds.scm.container.common.helpers.ContainerNotOpenException:
>  Container 2 in QUASI_CLOSED state
> {code}
>  
>  
>  
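
A minimal sketch of the kind of guard that prevents the unbounded recursion: 
turn the recursive retry into a loop with a retry budget. All names and the 
budget are illustrative, not the actual BlockOutputStream code:

{code:java}
import java.io.IOException;

class BoundedRetryWriter {
  private static final int MAX_RETRIES = 5; // illustrative budget

  void writeWithRetry(byte[] chunk) throws IOException {
    IOException last = null;
    for (int attempt = 0; attempt <= MAX_RETRIES; attempt++) {
      try {
        writeChunkOnce(chunk); // may fail with a container-not-open error
        return;
      } catch (IOException e) {
        last = e; // reallocate a block and retry, but only MAX_RETRIES times
      }
    }
    throw new IOException("retries exhausted", last);
  }

  void writeChunkOnce(byte[] chunk) throws IOException {
    // placeholder for the actual chunk write to the datanode
  }
}
{code}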






[jira] [Created] (HDDS-1219) TestContainerActionsHandler.testCloseContainerAction has an intermittent failure

2019-03-05 Thread Elek, Marton (JIRA)
Elek, Marton created HDDS-1219:
--

 Summary: TestContainerActionsHandler.testCloseContainerAction has 
an intermittent failure
 Key: HDDS-1219
 URL: https://issues.apache.org/jira/browse/HDDS-1219
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Elek, Marton
Assignee: Elek, Marton
 Fix For: 0.4.0


It has failed multiple times during CI builds:

{code}
Error Message

Wanted but not invoked:
closeContainerEventHandler.onMessage(
#1,
org.apache.hadoop.hdds.server.events.EventQueue@3d3fcdb0
);
-> at 
org.apache.hadoop.hdds.scm.container.TestContainerActionsHandler.testCloseContainerAction(TestContainerActionsHandler.java:64)
Actually, there were zero interactions with this mock.
{code}

The fix is easy: we should call queue.processAll(1000L) to wait for the 
processing of all the events.
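
The race and the fix can be shown in miniature with a plain single-threaded 
executor standing in for the EventQueue; the drain step below plays the role 
of the queue.processAll(1000L) call mentioned above (everything else is 
illustrative):

{code:java}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class DrainBeforeAssert {
  public static void main(String[] args) throws Exception {
    // toy executor standing in for the real EventQueue
    ExecutorService queue = Executors.newSingleThreadExecutor();
    AtomicInteger handled = new AtomicInteger();

    queue.submit(handled::incrementAndGet); // the asynchronous "event"

    // Without this drain the assertion can run before the handler does,
    // which is exactly the intermittent failure seen in the mock test.
    queue.shutdown();
    queue.awaitTermination(1, TimeUnit.SECONDS);

    if (handled.get() != 1) {
      throw new AssertionError("event was not processed");
    }
    System.out.println("handled = " + handled.get());
  }
}
{code}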







[jira] [Created] (HDDS-1218) Do the dist-layout-stitching for Ozone after the test-compile phase

2019-03-05 Thread Elek, Marton (JIRA)
Elek, Marton created HDDS-1218:
--

 Summary: Do the dist-layout-stitching for Ozone after the 
test-compile phase 
 Key: HDDS-1218
 URL: https://issues.apache.org/jira/browse/HDDS-1218
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
Reporter: Elek, Marton


HDDS-1135 fixed the order of the maven goal executions to include all the 
required jar files in the distribution package.

It turned out that the suggested compile phase is too early for the 
dist-layout-stitching.

In case of test-compile execution, the shaded ozone datanode service plugin is 
not yet created (it is created in the package phase), but hadoop-ozone/dist 
tries to use it (copying it from the source tree):

The error (from Yetus) is:

{code}
[INFO] --- exec-maven-plugin:1.3.1:exec (dist) @ hadoop-ozone-dist ---
cp: cannot stat 
'/testptch/hadoop/hadoop-ozone/objectstore-service/target/hadoop-ozone-objectstore-service-0.4.0-SNAPSHOT-plugin.jar':
 No such file or directory

Current directory /testptch/hadoop/hadoop-ozone/dist/target

$ rm -rf ozone-0.4.0-SNAPSHOT
$ mkdir ozone-0.4.0-SNAPSHOT
$ cd ozone-0.4.0-SNAPSHOT
$ cp -p /testptch/hadoop/LICENSE.txt .
$ cp -p /testptch/hadoop/NOTICE.txt .
$ cp -p /testptch/hadoop/README.txt .
$ mkdir -p ./share/hadoop/mapreduce
$ mkdir -p ./share/hadoop/ozone
$ mkdir -p ./share/hadoop/hdds
$ mkdir -p ./share/hadoop/yarn
$ mkdir -p ./share/hadoop/hdfs
$ mkdir -p ./share/hadoop/common
$ mkdir -p ./share/ozone/web
$ mkdir -p ./bin
$ mkdir -p ./sbin
$ mkdir -p ./etc
$ mkdir -p ./libexec
$ cp -r /testptch/hadoop/hadoop-common-project/hadoop-common/src/main/conf 
etc/hadoop
$ cp 
/testptch/hadoop/hadoop-ozone/dist/src/main/conf/om-audit-log4j2.properties 
etc/hadoop
$ cp 
/testptch/hadoop/hadoop-ozone/dist/src/main/conf/dn-audit-log4j2.properties 
etc/hadoop
$ cp 
/testptch/hadoop/hadoop-ozone/dist/src/main/conf/scm-audit-log4j2.properties 
etc/hadoop
$ cp /testptch/hadoop/hadoop-ozone/dist/src/main/conf/ozone-site.xml etc/hadoop
$ cp -f /testptch/hadoop/hadoop-ozone/dist/src/main/conf/log4j.properties 
etc/hadoop
$ cp /testptch/hadoop/hadoop-common-project/hadoop-common/src/main/bin/hadoop 
bin/
$ cp 
/testptch/hadoop/hadoop-common-project/hadoop-common/src/main/bin/hadoop.cmd 
bin/
$ cp /testptch/hadoop/hadoop-ozone/common/src/main/bin/ozone bin/
$ cp 
/testptch/hadoop/hadoop-common-project/hadoop-common/src/main/bin/hadoop-config.sh
 libexec/
$ cp 
/testptch/hadoop/hadoop-common-project/hadoop-common/src/main/bin/hadoop-config.cmd
 libexec/
$ cp 
/testptch/hadoop/hadoop-common-project/hadoop-common/src/main/bin/hadoop-functions.sh
 libexec/
$ cp /testptch/hadoop/hadoop-ozone/common/src/main/bin/ozone-config.sh libexec/
$ cp -r /testptch/hadoop/hadoop-ozone/common/src/main/shellprofile.d libexec/
$ cp 
/testptch/hadoop/hadoop-common-project/hadoop-common/src/main/bin/hadoop-daemons.sh
 sbin/
$ cp 
/testptch/hadoop/hadoop-common-project/hadoop-common/src/main/bin/workers.sh 
sbin/
$ cp /testptch/hadoop/hadoop-ozone/common/src/main/bin/start-ozone.sh sbin/
$ cp /testptch/hadoop/hadoop-ozone/common/src/main/bin/stop-ozone.sh sbin/
$ mkdir -p ./share/hadoop/ozoneplugin
$ cp 
/testptch/hadoop/hadoop-ozone/objectstore-service/target/hadoop-ozone-objectstore-service-0.4.0-SNAPSHOT-plugin.jar
 ./share/hadoop/ozoneplugin/hadoop-ozone-datanode-plugin-0.4.0-SNAPSHOT.jar

Failed!

[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop Ozone  SUCCESS [  5.471 s]
[INFO] Apache Hadoop Ozone Common . SUCCESS [ 13.453 s]
[INFO] Apache Hadoop Ozone Client . SUCCESS [  6.271 s]
[INFO] Apache Hadoop Ozone Manager Server . SUCCESS [  8.081 s]
[INFO] Apache Hadoop Ozone Object Store REST Service .. SUCCESS [  5.535 s]
[INFO] Apache Hadoop Ozone S3 Gateway . SUCCESS [  7.966 s]
[INFO] Apache Hadoop Ozone Integration Tests .. SUCCESS [  7.708 s]
[INFO] Apache Hadoop Ozone FileSystem . SUCCESS [  7.121 s]
[INFO] Apache Hadoop Ozone FileSystem Single Jar Library .. SUCCESS [  3.041 s]
[INFO] Apache Hadoop Ozone FileSystem Legacy Jar Library .. SUCCESS [  3.217 s]
[INFO] Apache Hadoop Ozone Tools .. SUCCESS [  7.720 s]
[INFO] Apache Hadoop Ozone Datanode ... SUCCESS [  1.912 s]
[INFO] Apache Hadoop Ozone Distribution ... FAILURE [  2.260 s]
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 01:21 min
[INFO] Finished at: 2019-03-05T10:22:14+00:00
[INFO] Final Memory: 71M/1852M
[INFO] 
[WARNING] The requested profile