Re: [VOTE] Moving Submarine to a separate Apache project proposal

2019-09-06 Thread Dinesh Chitlangia
+1

-Dinesh




On Fri, Sep 6, 2019 at 11:23 PM 俊平堵  wrote:

> +1. Please include me also.
>
> Thanks,
>
> Junping
>
> Wangda Tan wrote on Sunday, September 1, 2019, at 1:19 PM:
>
> > Hi all,
> >
> > As we discussed in the previous thread [1],
> >
> > I just moved the spin-off proposal to CWIKI and completed all TODO parts.
> >
> >
> >
> https://cwiki.apache.org/confluence/display/HADOOP/Submarine+Project+Spin-Off+to+TLP+Proposal
> >
> > If you'd like to learn more, please review the proposal and let me know
> > if you have any questions or suggestions. The proposal will be sent to
> > the board once the vote passes. (Please note that the previous vote [2]
> > to move Submarine to a separate GitHub repo is a necessary step toward
> > moving Submarine to a separate Apache project, but not a sufficient one,
> > which is why I sent two separate voting threads.)
> >
> > Please let me know if I missed anyone in the proposal, and reply if you'd
> > like to be included in the project.
> >
> > This voting runs for 7 days and will be concluded at Sep 7th, 11 PM PDT.
> >
> > Thanks,
> > Wangda Tan
> >
> > [1]
> >
> >
> https://lists.apache.org/thread.html/4a2210d567cbc05af92c12aa6283fd09b857ce209d537986ed800029@%3Cyarn-dev.hadoop.apache.org%3E
> > [2]
> >
> >
> https://lists.apache.org/thread.html/6e94469ca105d5a15dc63903a541bd21c7ef70b8bcff475a16b5ed73@%3Cyarn-dev.hadoop.apache.org%3E
> >
>


[jira] [Created] (HDFS-14830) The calculation of DataXceiver count is not accurate

2019-09-06 Thread Chen Zhang (Jira)
Chen Zhang created HDFS-14830:
-

 Summary: The calculation of DataXceiver count is not accurate
 Key: HDFS-14830
 URL: https://issues.apache.org/jira/browse/HDFS-14830
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Chen Zhang


The DataNode uses threadGroup.activeCount() as the number of DataXceivers. This 
is not accurate, because the threadGroup contains more than just DataXceiver 
threads: the DataXceiverServer thread, PacketResponder threads, and 
BlockRecoveryWorker threads are all in the same threadGroup.

In the worst case, the reported DataXceiver count may be double the actual 
count (e.g., when every DataXceiver is processing a write-block operation, each 
creates a PacketResponder thread at the same time).
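The overcount is easy to reproduce outside HDFS. The sketch below (illustration only, not the actual DataNode code; all names are made up) parks two "DataXceiver" and two "PacketResponder" threads in one ThreadGroup: activeCount() reports 4, while enumerating the group and filtering by thread type gives the accurate count of 2.

```java
import java.util.concurrent.CountDownLatch;

// Illustration only (not the actual DataNode code): ThreadGroup.activeCount()
// counts every live thread in the group, so using it as the DataXceiver count
// overcounts once PacketResponder threads share the same group.
public class XceiverCountSketch {
    public static void main(String[] args) throws InterruptedException {
        ThreadGroup group = new ThreadGroup("dataXceiverServer");
        CountDownLatch started = new CountDownLatch(4);
        CountDownLatch finish = new CountDownLatch(1);
        Runnable park = () -> {
            started.countDown();
            try { finish.await(); } catch (InterruptedException ignored) { }
        };
        // Two "DataXceiver" threads and two "PacketResponder" threads.
        for (int i = 0; i < 2; i++) new Thread(group, park, "DataXceiver-" + i).start();
        for (int i = 0; i < 2; i++) new Thread(group, park, "PacketResponder-" + i).start();
        started.await(); // all four threads are now live

        System.out.println("activeCount = " + group.activeCount()); // prints 4, not 2

        // A more accurate count: enumerate the group and filter by thread type.
        Thread[] threads = new Thread[group.activeCount() * 2];
        int n = group.enumerate(threads);
        long xceivers = java.util.Arrays.stream(threads, 0, n)
            .filter(t -> t.getName().startsWith("DataXceiver")).count();
        System.out.println("xceivers = " + xceivers); // prints 2

        finish.countDown(); // release the parked threads
    }
}
```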



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDDS-2101) Ozone filesystem provider doesn't exist

2019-09-06 Thread Jitendra Nath Pandey (Jira)
Jitendra Nath Pandey created HDDS-2101:
--

 Summary: Ozone filesystem provider doesn't exist
 Key: HDDS-2101
 URL: https://issues.apache.org/jira/browse/HDDS-2101
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: Ozone Filesystem
Reporter: Jitendra Nath Pandey


We don't have a filesystem provider in META-INF, i.e., the following file 
doesn't exist:
{{hadoop-ozone/ozonefs/src/main/resources/META-INF/services/org.apache.hadoop.fs.FileSystem}}

See, for example:
{{hadoop-tools/hadoop-aws/src/main/resources/META-INF/services/org.apache.hadoop.fs.FileSystem}}
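For reference, a Java ServiceLoader provider file like this simply lists the fully qualified names of the FileSystem implementations, one per line. Assuming the Ozone filesystem class is {{org.apache.hadoop.fs.ozone.OzoneFileSystem}} (the class name used in the ozonefs module), the missing file would contain a single line:

```
org.apache.hadoop.fs.ozone.OzoneFileSystem
```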






[jira] [Created] (HDDS-2100) Ozone TokenRenewer provider is incorrectly configured

2019-09-06 Thread Jitendra Nath Pandey (Jira)
Jitendra Nath Pandey created HDDS-2100:
--

 Summary: Ozone TokenRenewer provider is incorrectly configured
 Key: HDDS-2100
 URL: https://issues.apache.org/jira/browse/HDDS-2100
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Jitendra Nath Pandey









3.2.1 branch is closed for commits //Re: [DISCUSS] Hadoop-3.2.1 release proposal

2019-09-06 Thread Rohith Sharma K S
I have created branch *branch-3.2.1* for the release; hence, the 3.2.1 branch
is closed for commits. I will be creating the RC from this branch.

Kindly use *branch-3.2* for any commits and set "*Fix Version/s*" to *3.2.2*.

-Rohith Sharma K S


On Sat, 7 Sep 2019 at 08:39, Rohith Sharma K S 
wrote:

> Hi Folks
>
> Given that all the blocker/critical issues [1] are resolved, I will be cutting
> branch-3.2.1 soon.
> Thanks all for your support in pushing the JIRAs to closure.
>
> [1] https://s.apache.org/7yjh5
>
>
> -Rohith Sharma K S
>
> On Thu, 29 Aug 2019 at 11:21, Rohith Sharma K S 
> wrote:
>
>> [Update]
>> We are ramping down the critical/blocker list (https://s.apache.org/7yjh5);
>> three issues are left!
>>
>> YARN-9785 - A solution is under discussion; hopefully we can wrap it up soon.
>> HADOOP-15998 - To be committed.
>> YARN-9796 - Patch available, to be committed.
>>
>> I am closely monitoring for these issues, and will update once these are
>> fixed.
>>
>> -Rohith Sharma K S
>>
>>
>> On Wed, 21 Aug 2019 at 13:42, Bibinchundatt 
>> wrote:
>>
>>> Hi Rohith
>>>
>>> Thank you for initiating this
>>>
>>> A few critical/blocker JIRAs we could consider:
>>>
>>> YARN-9714
>>> YARN-9642
>>> YARN-9640
>>>
>>> Regards
>>> Bibin
>>> -Original Message-
>>> From: Rohith Sharma K S [mailto:rohithsharm...@apache.org]
>>> Sent: 21 August 2019 11:42
>>> To: Wei-Chiu Chuang 
>>> Cc: Hdfs-dev ; yarn-dev <
>>> yarn-...@hadoop.apache.org>; mapreduce-dev <
>>> mapreduce-...@hadoop.apache.org>; Hadoop Common <
>>> common-...@hadoop.apache.org>
>>> Subject: Re: [DISCUSS] Hadoop-3.2.1 release proposal
>>>
>>> On Tue, 20 Aug 2019 at 22:28, Wei-Chiu Chuang 
>>> wrote:
>>>
>>> > Hi Rohith,
>>> > Thanks for initiating this.
>>> > I want to bring up one blocker issue: HDFS-13596
>>> >  (NN restart fails
>>> > after RollingUpgrade from 2.x to 3.x)
>>> >
>>>
>>> > This should be a blocker for all active Hadoop 3.x releases: 3.3.0,
>>> > 3.2.1, 3.1.3. Hopefully we can get this fixed within this week.
>>> > Additionally, HDFS-14396
>>> >  (Failed to load
>>> > image from FSImageFile when downgrading from 3.x to 2.x). Probably not a
>>> > blocker, but nice to have.
>>> >
>>>
>>>  Please set the target version so that I don't miss anything in the
>>> blockers/critical list for 3.2.1: https://s.apache.org/7yjh5.
>>>
>>>
>>> >
>>> > On Tue, Aug 20, 2019 at 3:22 AM Rohith Sharma K S <
>>> > rohithsharm...@apache.org> wrote:
>>> >
>>> >> Hello folks,
>>> >>
>>> >> It has been more than six months since Hadoop 3.2.0 was released on 16th
>>> >> Jan 2019.
>>> >> We have several important fixes landed in branch-3.2 (around 48
>>> >> blockers/critical https://s.apache.org/ozd6o).
>>> >>
>>> >> I am planning to do a maintenance release, 3.2.1, in the next few weeks,
>>> >> i.e., around the 1st week of September.
>>> >>
>>> >> So far I don't see any blockers/criticals for 3.2.1. The few pending
>>> >> issues on 3.2.1 are listed at https://s.apache.org/ni6v7.
>>> >>
>>> >> *Proposal*:
>>> >> Code Freeze Date: 30th August 2019
>>> >> Release Date: 7th Sept 2019
>>> >>
>>> >> Please let me know if you have any thoughts or comments on this plan.
>>> >>
>>> >> Thanks & Regards
>>> >> Rohith Sharma K S
>>> >>
>>> >
>>>
>>


Re: [VOTE] Moving Submarine to a separate Apache project proposal

2019-09-06 Thread Wangda Tan
Thanks, everyone, for voting! Anyone interested in joining Submarine is always
welcome!

And thanks to Owen for the kind offer of help; I have added you to the PMC
list in the proposal. It will be a great help to the community if you join!

For existing Hadoop committers interested in joining, I plan to add you to the
initial list after discussing it with the other proposed initial Submarine PMC
members. The list so far is:

* Naganarasimha Garla (naganarasimha_gr at apache dot org) (Hadoop PMC)
* Devaraj K (devaraj at apache dot org) (Hadoop PMC)
* Rakesh Radhakrishnan (rakeshr at apache dot org) (bookkeeper PMC, Hadoop
PMC, incubator, Mnemonic PMC, Zookeeper PMC)
* Vinayakumar B (vinayakumarb at apache dot org) (Hadoop PMC, incubator PMC)
* Ayush Saxena (ayushsaxena at apache dot org) (Hadoop Committer)
* Bibin Chundatt (bibinchundatt at apache dot org) (Hadoop PMC)
* Bharat Viswanadham (bharat at apache dot org) (Hadoop)
* Brahma Reddy Battula (brahma at apache dot org) (Hadoop PMC)
* Abhishek Modi (abmodi at apache dot org) (Hadoop Committer)
* Wei-Chiu Chuang (weichiu at apache dot org) (Hadoop PMC)
* Junping Du (junping_du at apache dot org) (Hadoop PMC, member)

We'd like to see reasonable contributions to the project from all committers
who join now. Please join the weekly call or mailing lists (once established)
and share your input on the project. Members of Submarine will reach out to
each of you individually to understand the areas you wish to contribute to and
will help accordingly. Please let me know if you DON'T want to be added to the
committer list.

Best,
Wangda Tan

On Fri, Sep 6, 2019 at 3:54 PM Wei-Chiu Chuang  wrote:

> +1
> I've been involved in Submarine development and I'd like to be included
> going forward.
>
> Thanks
>
> On Sat, Sep 7, 2019 at 5:27 AM Owen O'Malley 
> wrote:
>
>> Since you don't have any Apache Members, I'll join to provide Apache
>> oversight.
>>
>> .. Owen
>>
>> On Fri, Sep 6, 2019 at 1:38 PM Owen O'Malley 
>> wrote:
>>
>> > +1 for moving to a new project.
>> >
>> > On Sat, Aug 31, 2019 at 10:19 PM Wangda Tan 
>> wrote:
>> >
>> >> Hi all,
>> >>
>> >> As we discussed in the previous thread [1],
>> >>
>> >> I just moved the spin-off proposal to CWIKI and completed all TODO
>> parts.
>> >>
>> >>
>> >>
>> https://cwiki.apache.org/confluence/display/HADOOP/Submarine+Project+Spin-Off+to+TLP+Proposal
>> >>
>> >> If you'd like to learn more, please review the proposal and let me know
>> >> if you have any questions or suggestions. The proposal will be sent to
>> >> the board once the vote passes. (Please note that the previous vote [2]
>> >> to move Submarine to a separate GitHub repo is a necessary step toward
>> >> moving Submarine to a separate Apache project, but not a sufficient one,
>> >> which is why I sent two separate voting threads.)
>> >>
>> >> Please let me know if I missed anyone in the proposal, and reply if
>> you'd
>> >> like to be included in the project.
>> >>
>> >> This voting runs for 7 days and will be concluded at Sep 7th, 11 PM
>> PDT.
>> >>
>> >> Thanks,
>> >> Wangda Tan
>> >>
>> >> [1]
>> >>
>> >>
>> https://lists.apache.org/thread.html/4a2210d567cbc05af92c12aa6283fd09b857ce209d537986ed800029@%3Cyarn-dev.hadoop.apache.org%3E
>> >> [2]
>> >>
>> >>
>> https://lists.apache.org/thread.html/6e94469ca105d5a15dc63903a541bd21c7ef70b8bcff475a16b5ed73@%3Cyarn-dev.hadoop.apache.org%3E
>> >>
>> >
>>
>


Re: [VOTE] Moving Submarine to a separate Apache project proposal

2019-09-06 Thread 俊平堵
+1. Please include me also.

Thanks,

Junping

Wangda Tan wrote on Sunday, September 1, 2019, at 1:19 PM:

> Hi all,
>
> As we discussed in the previous thread [1],
>
> I just moved the spin-off proposal to CWIKI and completed all TODO parts.
>
>
> https://cwiki.apache.org/confluence/display/HADOOP/Submarine+Project+Spin-Off+to+TLP+Proposal
>
> If you'd like to learn more, please review the proposal and let me know if
> you have any questions or suggestions. The proposal will be sent to the
> board once the vote passes. (Please note that the previous vote [2] to move
> Submarine to a separate GitHub repo is a necessary step toward moving
> Submarine to a separate Apache project, but not a sufficient one, which is
> why I sent two separate voting threads.)
>
> Please let me know if I missed anyone in the proposal, and reply if you'd
> like to be included in the project.
>
> This voting runs for 7 days and will be concluded at Sep 7th, 11 PM PDT.
>
> Thanks,
> Wangda Tan
>
> [1]
>
> https://lists.apache.org/thread.html/4a2210d567cbc05af92c12aa6283fd09b857ce209d537986ed800029@%3Cyarn-dev.hadoop.apache.org%3E
> [2]
>
> https://lists.apache.org/thread.html/6e94469ca105d5a15dc63903a541bd21c7ef70b8bcff475a16b5ed73@%3Cyarn-dev.hadoop.apache.org%3E
>


Re: [DISCUSS] Hadoop-3.2.1 release proposal

2019-09-06 Thread Rohith Sharma K S
Hi Folks

Given that all the blocker/critical issues [1] are resolved, I will be cutting
branch-3.2.1 soon.
Thanks all for your support in pushing the JIRAs to closure.

[1] https://s.apache.org/7yjh5


-Rohith Sharma K S

On Thu, 29 Aug 2019 at 11:21, Rohith Sharma K S 
wrote:

> [Update]
> We are ramping down the critical/blocker list (https://s.apache.org/7yjh5);
> three issues are left!
>
> YARN-9785 - A solution is under discussion; hopefully we can wrap it up soon.
> HADOOP-15998 - To be committed.
> YARN-9796 - Patch available, to be committed.
>
> I am closely monitoring for these issues, and will update once these are
> fixed.
>
> -Rohith Sharma K S
>
>
> On Wed, 21 Aug 2019 at 13:42, Bibinchundatt 
> wrote:
>
>> Hi Rohith
>>
>> Thank you for initiating this
>>
>> A few critical/blocker JIRAs we could consider:
>>
>> YARN-9714
>> YARN-9642
>> YARN-9640
>>
>> Regards
>> Bibin
>> -Original Message-
>> From: Rohith Sharma K S [mailto:rohithsharm...@apache.org]
>> Sent: 21 August 2019 11:42
>> To: Wei-Chiu Chuang 
>> Cc: Hdfs-dev ; yarn-dev <
>> yarn-...@hadoop.apache.org>; mapreduce-dev <
>> mapreduce-...@hadoop.apache.org>; Hadoop Common <
>> common-...@hadoop.apache.org>
>> Subject: Re: [DISCUSS] Hadoop-3.2.1 release proposal
>>
>> On Tue, 20 Aug 2019 at 22:28, Wei-Chiu Chuang  wrote:
>>
>> > Hi Rohith,
>> > Thanks for initiating this.
>> > I want to bring up one blocker issue: HDFS-13596
>> >  (NN restart fails
>> > after RollingUpgrade from 2.x to 3.x)
>> >
>>
>> > This should be a blocker for all active Hadoop 3.x releases: 3.3.0,
>> > 3.2.1, 3.1.3. Hopefully we can get this fixed within this week.
>> > Additionally, HDFS-14396
>> >  (Failed to load
>> > image from FSImageFile when downgrading from 3.x to 2.x). Probably not a
>> > blocker, but nice to have.
>> >
>>
>>  Please set the target version so that I don't miss anything in the
>> blockers/critical list for 3.2.1: https://s.apache.org/7yjh5.
>>
>>
>> >
>> > On Tue, Aug 20, 2019 at 3:22 AM Rohith Sharma K S <
>> > rohithsharm...@apache.org> wrote:
>> >
>> >> Hello folks,
>> >>
>> >> It has been more than six months since Hadoop 3.2.0 was released on 16th
>> >> Jan 2019.
>> >> We have several important fixes landed in branch-3.2 (around 48
>> >> blockers/critical https://s.apache.org/ozd6o).
>> >>
>> >> I am planning to do a maintenance release, 3.2.1, in the next few weeks,
>> >> i.e., around the 1st week of September.
>> >>
>> >> So far I don't see any blockers/criticals for 3.2.1. The few pending
>> >> issues on 3.2.1 are listed at https://s.apache.org/ni6v7.
>> >>
>> >> *Proposal*:
>> >> Code Freeze Date: 30th August 2019
>> >> Release Date: 7th Sept 2019
>> >>
>> >> Please let me know if you have any thoughts or comments on this plan.
>> >>
>> >> Thanks & Regards
>> >> Rohith Sharma K S
>> >>
>> >
>>
>


[jira] [Resolved] (HDDS-1553) Add metrics in rack aware container placement policy

2019-09-06 Thread Xiaoyu Yao (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1553?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao resolved HDDS-1553.
--
Fix Version/s: 0.5.0
   Resolution: Fixed

Thanks [~Sammi] for the contribution. I merged the change to trunk.

> Add metrics in rack aware container placement policy
> 
>
> Key: HDDS-1553
> URL: https://issues.apache.org/jira/browse/HDDS-1553
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Sammi Chen
>Assignee: Sammi Chen
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 3h 20m
>  Remaining Estimate: 0h
>
> To collect the following statistics:
> 1. total requested datanode count (A)
> 2. successfully allocated datanode count without constraint compromise (B)
> 3. successfully allocated datanode count with some constraint compromise (C)
> B includes C; failed allocations = (A - B)
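As a sketch of the relationship between these counters (illustration only; the class and method names here are hypothetical, not the actual HDDS code):

```java
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical sketch of the placement-policy metrics described above;
// names are illustrative, not the actual HDDS metrics classes.
class PlacementMetricsSketch {
    final AtomicLong requested = new AtomicLong();   // A: total requested datanodes
    final AtomicLong allocated = new AtomicLong();   // B: successful allocations (includes C)
    final AtomicLong compromised = new AtomicLong(); // C: successful with constraint compromise

    void onRequest(int count) { requested.addAndGet(count); }

    void onSuccess(boolean constraintCompromised) {
        allocated.incrementAndGet();
        if (constraintCompromised) compromised.incrementAndGet();
    }

    // Failed allocations = A - B, per the description above.
    long failedAllocations() { return requested.get() - allocated.get(); }
}
```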






[jira] [Created] (HDDS-2099) Refactor to create pipeline via DN heartbeat response

2019-09-06 Thread Xiaoyu Yao (Jira)
Xiaoyu Yao created HDDS-2099:


 Summary: Refactor to create pipeline via DN heartbeat response
 Key: HDDS-2099
 URL: https://issues.apache.org/jira/browse/HDDS-2099
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Xiaoyu Yao


Currently, SCM talks directly to the DN gRPC server to create pipelines in a 
background thread. We should avoid direct communication from SCM to the DNs 
for better scalability of Ozone.






Re: [VOTE] Moving Submarine to a separate Apache project proposal

2019-09-06 Thread Wei-Chiu Chuang
+1
I've been involved in Submarine development and I'd like to be included
going forward.

Thanks

On Sat, Sep 7, 2019 at 5:27 AM Owen O'Malley  wrote:

> Since you don't have any Apache Members, I'll join to provide Apache
> oversight.
>
> .. Owen
>
> On Fri, Sep 6, 2019 at 1:38 PM Owen O'Malley 
> wrote:
>
> > +1 for moving to a new project.
> >
> > On Sat, Aug 31, 2019 at 10:19 PM Wangda Tan  wrote:
> >
> >> Hi all,
> >>
> >> As we discussed in the previous thread [1],
> >>
> >> I just moved the spin-off proposal to CWIKI and completed all TODO
> parts.
> >>
> >>
> >>
> https://cwiki.apache.org/confluence/display/HADOOP/Submarine+Project+Spin-Off+to+TLP+Proposal
> >>
> >> If you'd like to learn more, please review the proposal and let me know
> >> if you have any questions or suggestions. The proposal will be sent to
> >> the board once the vote passes. (Please note that the previous vote [2]
> >> to move Submarine to a separate GitHub repo is a necessary step toward
> >> moving Submarine to a separate Apache project, but not a sufficient one,
> >> which is why I sent two separate voting threads.)
> >>
> >> Please let me know if I missed anyone in the proposal, and reply if
> you'd
> >> like to be included in the project.
> >>
> >> This voting runs for 7 days and will be concluded at Sep 7th, 11 PM PDT.
> >>
> >> Thanks,
> >> Wangda Tan
> >>
> >> [1]
> >>
> >>
> https://lists.apache.org/thread.html/4a2210d567cbc05af92c12aa6283fd09b857ce209d537986ed800029@%3Cyarn-dev.hadoop.apache.org%3E
> >> [2]
> >>
> >>
> https://lists.apache.org/thread.html/6e94469ca105d5a15dc63903a541bd21c7ef70b8bcff475a16b5ed73@%3Cyarn-dev.hadoop.apache.org%3E
> >>
> >
>


Re: [VOTE] Moving Submarine to a separate Apache project proposal

2019-09-06 Thread Owen O'Malley
Since you don't have any Apache Members, I'll join to provide Apache
oversight.

.. Owen

On Fri, Sep 6, 2019 at 1:38 PM Owen O'Malley  wrote:

> +1 for moving to a new project.
>
> On Sat, Aug 31, 2019 at 10:19 PM Wangda Tan  wrote:
>
>> Hi all,
>>
>> As we discussed in the previous thread [1],
>>
>> I just moved the spin-off proposal to CWIKI and completed all TODO parts.
>>
>>
>> https://cwiki.apache.org/confluence/display/HADOOP/Submarine+Project+Spin-Off+to+TLP+Proposal
>>
>> If you'd like to learn more, please review the proposal and let me know if
>> you have any questions or suggestions. The proposal will be sent to the
>> board once the vote passes. (Please note that the previous vote [2] to move
>> Submarine to a separate GitHub repo is a necessary step toward moving
>> Submarine to a separate Apache project, but not a sufficient one, which is
>> why I sent two separate voting threads.)
>>
>> Please let me know if I missed anyone in the proposal, and reply if you'd
>> like to be included in the project.
>>
>> This voting runs for 7 days and will be concluded at Sep 7th, 11 PM PDT.
>>
>> Thanks,
>> Wangda Tan
>>
>> [1]
>>
>> https://lists.apache.org/thread.html/4a2210d567cbc05af92c12aa6283fd09b857ce209d537986ed800029@%3Cyarn-dev.hadoop.apache.org%3E
>> [2]
>>
>> https://lists.apache.org/thread.html/6e94469ca105d5a15dc63903a541bd21c7ef70b8bcff475a16b5ed73@%3Cyarn-dev.hadoop.apache.org%3E
>>
>


Re: [VOTE] Moving Submarine to a separate Apache project proposal

2019-09-06 Thread Owen O'Malley
+1 for moving to a new project.

On Sat, Aug 31, 2019 at 10:19 PM Wangda Tan  wrote:

> Hi all,
>
> As we discussed in the previous thread [1],
>
> I just moved the spin-off proposal to CWIKI and completed all TODO parts.
>
>
> https://cwiki.apache.org/confluence/display/HADOOP/Submarine+Project+Spin-Off+to+TLP+Proposal
>
> If you'd like to learn more, please review the proposal and let me know if
> you have any questions or suggestions. The proposal will be sent to the
> board once the vote passes. (Please note that the previous vote [2] to move
> Submarine to a separate GitHub repo is a necessary step toward moving
> Submarine to a separate Apache project, but not a sufficient one, which is
> why I sent two separate voting threads.)
>
> Please let me know if I missed anyone in the proposal, and reply if you'd
> like to be included in the project.
>
> This voting runs for 7 days and will be concluded at Sep 7th, 11 PM PDT.
>
> Thanks,
> Wangda Tan
>
> [1]
>
> https://lists.apache.org/thread.html/4a2210d567cbc05af92c12aa6283fd09b857ce209d537986ed800029@%3Cyarn-dev.hadoop.apache.org%3E
> [2]
>
> https://lists.apache.org/thread.html/6e94469ca105d5a15dc63903a541bd21c7ef70b8bcff475a16b5ed73@%3Cyarn-dev.hadoop.apache.org%3E
>


[jira] [Resolved] (HDDS-1970) Upgrade Bootstrap and jQuery versions of Ozone web UIs

2019-09-06 Thread Vivek Ratnavel Subramanian (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1970?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vivek Ratnavel Subramanian resolved HDDS-1970.
--
Resolution: Fixed

> Upgrade Bootstrap and jQuery versions of Ozone web UIs 
> ---
>
> Key: HDDS-1970
> URL: https://issues.apache.org/jira/browse/HDDS-1970
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>  Components: website
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>
> The current versions of bootstrap and jquery used by Ozone web UIs are 
> reported to have known medium severity CVEs and need to be updated to the 
> latest versions.
>  
> I suggest updating bootstrap and jQuery to 3.4.1.






[jira] [Created] (HDDS-2098) Ozone shell command prints out ERROR when the log4j file is not present.

2019-09-06 Thread Aravindan Vijayan (Jira)
Aravindan Vijayan created HDDS-2098:
---

 Summary: Ozone shell command prints out ERROR when the log4j file 
is not present.
 Key: HDDS-2098
 URL: https://issues.apache.org/jira/browse/HDDS-2098
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: Ozone CLI
Affects Versions: 0.5.0
Reporter: Aravindan Vijayan
Assignee: Aravindan Vijayan
 Fix For: 0.5.0


When a log4j file is not present, logging should default to the console.
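For illustration, a minimal log4j 1.x configuration that defaults everything to the console could look like the following (the appender name and pattern here are illustrative, not the actual Ozone defaults):

```properties
# Minimal console-only log4j 1.x configuration (illustrative default).
log4j.rootLogger=INFO, stdout
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%d{ISO8601} %-5p %c{1}: %m%n
```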






[jira] [Created] (HDFS-14829) [Dynamometer] Update TestDynamometerInfra to be Hadoop 3.2+ compatible

2019-09-06 Thread Erik Krogen (Jira)
Erik Krogen created HDFS-14829:
--

 Summary: [Dynamometer] Update TestDynamometerInfra to be Hadoop 
3.2+ compatible
 Key: HDFS-14829
 URL: https://issues.apache.org/jira/browse/HDFS-14829
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Erik Krogen


Currently the integration test included with Dynamometer, 
{{TestDynamometerInfra}}, is executing against version 3.1.2 of Hadoop. We 
should update it to run against a more recent version by default (3.2.x) and 
add support for 3.3 in anticipation of HDFS-14412.






[jira] [Created] (HDFS-14828) Add TeraSort to acceptance test

2019-09-06 Thread Xiaoyu Yao (Jira)
Xiaoyu Yao created HDFS-14828:
-

 Summary: Add TeraSort to acceptance test
 Key: HDFS-14828
 URL: https://issues.apache.org/jira/browse/HDFS-14828
 Project: Hadoop HDFS
  Issue Type: Test
Reporter: Xiaoyu Yao


We may begin with 1GB teragen/terasort/teravalidate.






Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2019-09-06 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1251/

[Sep 5, 2019 8:20:05 AM] (taoyang) YARN-8995. Log events info in 
AsyncDispatcher when event queue size
[Sep 5, 2019 12:42:36 PM] (github) HDDS-1898. GrpcReplicationService#download 
cannot replicate the
[Sep 5, 2019 1:25:15 PM] (stevel) HADOOP-16430. S3AFilesystem.delete to 
incrementally update s3guard with
[Sep 5, 2019 6:44:02 PM] (inigoiri) HDFS-12904. Add DataTransferThrottler to 
the Datanode transfers.
[Sep 5, 2019 7:49:58 PM] (billie) YARN-9718. Fixed yarn.service.am.java.opts 
shell injection. Contributed
[Sep 5, 2019 9:01:42 PM] (jhung) YARN-9810. Add queue capacity/maxcapacity 
percentage metrics.
[Sep 5, 2019 9:33:06 PM] (aengineer) HDDS-1708. Add container scrubber metrics. 
Contributed by Hrishikesh




-1 overall


The following subsystems voted -1:
asflicense findbugs hadolint mvnsite pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
 

FindBugs :

   
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-mawo/hadoop-yarn-applications-mawo-core
 
   Class org.apache.hadoop.applications.mawo.server.common.TaskStatus 
implements Cloneable but does not define or use clone method At 
TaskStatus.java:does not define or use clone method At TaskStatus.java:[lines 
39-346] 
   Equals method for 
org.apache.hadoop.applications.mawo.server.worker.WorkerId assumes the argument 
is of type WorkerId At WorkerId.java:the argument is of type WorkerId At 
WorkerId.java:[line 114] 
   
org.apache.hadoop.applications.mawo.server.worker.WorkerId.equals(Object) does 
not check for null argument At WorkerId.java:null argument At 
WorkerId.java:[lines 114-115] 

Failed CTEST tests :

   test_test_libhdfs_ops_hdfs_static 
   test_test_libhdfs_threaded_hdfs_static 
   test_test_libhdfs_zerocopy_hdfs_static 
   test_test_native_mini_dfs 
   test_libhdfs_threaded_hdfspp_test_shim_static 
   test_hdfspp_mini_dfs_smoke_hdfspp_test_shim_static 
   libhdfs_mini_stress_valgrind_hdfspp_test_static 
   memcheck_libhdfs_mini_stress_valgrind_hdfspp_test_static 
   test_libhdfs_mini_stress_hdfspp_test_shim_static 
   test_hdfs_ext_hdfspp_test_shim_static 

Failed junit tests :

   hadoop.hdfs.server.federation.router.TestRouterWithSecureStartup 
   hadoop.hdfs.server.federation.security.TestRouterHttpDelegationToken 
   hadoop.mapreduce.v2.app.TestRuntimeEstimators 
   hadoop.mapreduce.v2.app.job.impl.TestJobImpl 
   hadoop.mapreduce.v2.app.TestMRApp 
   hadoop.yarn.sls.TestSLSStreamAMSynth 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1251/artifact/out/diff-compile-cc-root.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1251/artifact/out/diff-compile-javac-root.txt
  [332K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1251/artifact/out/diff-checkstyle-root.txt
  [17M]

   hadolint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1251/artifact/out/diff-patch-hadolint.txt
  [4.0K]

   mvnsite:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1251/artifact/out/patch-mvnsite-root.txt
  [468K]

   pathlen:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1251/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1251/artifact/out/diff-patch-pylint.txt
  [220K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1251/artifact/out/diff-patch-shellcheck.txt
  [24K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1251/artifact/out/diff-patch-shelldocs.txt
  [88K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1251/artifact/out/whitespace-eol.txt
  [9.6M]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1251/artifact/out/whitespace-tabs.txt
  [1.1M]

   xml:

   

[jira] [Created] (HDDS-2096) Ozone ACL document missing AddAcl API

2019-09-06 Thread Xiaoyu Yao (Jira)
Xiaoyu Yao created HDDS-2096:


 Summary: Ozone ACL document missing AddAcl API
 Key: HDDS-2096
 URL: https://issues.apache.org/jira/browse/HDDS-2096
 Project: Hadoop Distributed Data Store
  Issue Type: Test
Reporter: Xiaoyu Yao


The current Ozone Native ACL APIs document looks like the excerpt below; the AddAcl API is missing.

 
h3. Ozone Native ACL APIs

The ACLs can be manipulated by a set of APIs supported by Ozone. The supported 
APIs are:
 # *SetAcl* – This API takes the user principal, the name and type of the ozone 
object, and a list of ACLs.
 # *GetAcl* – This API takes the name and type of the ozone object and returns 
a list of ACLs.
 # *RemoveAcl* - This API takes the name and type of the ozone object and the 
ACL that has to be removed.
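To make the gap concrete, the four APIs, including the missing AddAcl, could be sketched as below (illustration only, with a toy in-memory store; the signatures are hypothetical, not the actual Ozone interfaces):

```java
import java.util.*;

// Hypothetical sketch of the four ACL APIs the document should describe,
// backed by a toy in-memory store; not the actual Ozone implementation.
class OzoneAclSketch {
    private final Map<String, List<String>> store = new HashMap<>();

    private String key(String name, String type) { return type + ":" + name; }

    // SetAcl: replaces the ACL list on the object identified by (name, type).
    void setAcl(String user, String name, String type, List<String> acls) {
        store.put(key(name, type), new ArrayList<>(acls));
    }

    // GetAcl: returns the ACL list for the object identified by (name, type).
    List<String> getAcl(String name, String type) {
        return store.getOrDefault(key(name, type), Collections.emptyList());
    }

    // AddAcl: adds a single ACL -- the API missing from the document.
    void addAcl(String name, String type, String acl) {
        store.computeIfAbsent(key(name, type), k -> new ArrayList<>()).add(acl);
    }

    // RemoveAcl: removes a single ACL from the object.
    void removeAcl(String name, String type, String acl) {
        List<String> acls = store.get(key(name, type));
        if (acls != null) acls.remove(acl);
    }
}
```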






Apache Hadoop qbt Report: branch2+JDK7 on Linux/x86

2019-09-06 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/436/

[Sep 5, 2019 4:35:47 PM] (ayushsaxena) HDFS-14276. [SBN read] Reduce tailing 
overhead. Contributed by Wei-Chiu
[Sep 5, 2019 9:09:08 PM] (jhung) YARN-9810. Add queue capacity/maxcapacity 
percentage metrics.
[Sep 5, 2019 11:22:15 PM] (xyao) Revert "HDFS-14633. The StorageType quota and 
consume in QuotaFeature is
[Sep 5, 2019 11:24:17 PM] (xyao) HDFS-14633. The StorageType quota and consume 
in QuotaFeature is not




-1 overall


The following subsystems voted -1:
asflicense findbugs hadolint pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/empty-configuration.xml
 
   hadoop-tools/hadoop-azure/src/config/checkstyle-suppressions.xml 
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/public/crossdomain.xml 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/public/crossdomain.xml
 

FindBugs :

   
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/hadoop-yarn-server-timelineservice-hbase-client
 
   Boxed value is unboxed and then immediately reboxed in 
org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result,
 byte[], byte[], KeyConverter, ValueConverter, boolean) At 
ColumnRWHelper.java:then immediately reboxed in 
org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result,
 byte[], byte[], KeyConverter, ValueConverter, boolean) At 
ColumnRWHelper.java:[line 335] 

Failed junit tests :

   hadoop.util.TestReadWriteDiskValidator 
   hadoop.fs.sftp.TestSFTPFileSystem 
   hadoop.hdfs.TestAbandonBlock 
   hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys 
   hadoop.hdfs.server.datanode.TestDirectoryScanner 
   hadoop.registry.secure.TestSecureLogins 
   
hadoop.yarn.server.resourcemanager.metrics.TestCombinedSystemMetricsPublisher 
   hadoop.yarn.server.resourcemanager.TestLeaderElectorService 
   hadoop.yarn.server.timelineservice.security.TestTimelineAuthFilterForV2 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/436/artifact/out/diff-compile-cc-root-jdk1.7.0_95.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/436/artifact/out/diff-compile-javac-root-jdk1.7.0_95.txt
  [328K]

   cc:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/436/artifact/out/diff-compile-cc-root-jdk1.8.0_222.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/436/artifact/out/diff-compile-javac-root-jdk1.8.0_222.txt
  [308K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/436/artifact/out/diff-checkstyle-root.txt
  [16M]

   hadolint:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/436/artifact/out/diff-patch-hadolint.txt
  [4.0K]

   pathlen:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/436/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/436/artifact/out/diff-patch-pylint.txt
  [24K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/436/artifact/out/diff-patch-shellcheck.txt
  [72K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/436/artifact/out/diff-patch-shelldocs.txt
  [8.0K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/436/artifact/out/whitespace-eol.txt
  [12M]
   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/436/artifact/out/whitespace-tabs.txt
  [1.3M]

   xml:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/436/artifact/out/xml.txt
  [12K]

   findbugs:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/436/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice-hbase_hadoop-yarn-server-timelineservice-hbase-client-warnings.html
  [8.0K]

   javadoc:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/436/artifact/out/diff-javadoc-javadoc-root-jdk1.7.0_95.txt
  [16K]
   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/436/artifact/out/diff-javadoc-javadoc-root-jdk1.8.0_222.txt
  [1.1M]

   unit:

   

[jira] [Created] (HDFS-14827) RBF: Shared DN should display all info in Router DataNode UI

2019-09-06 Thread Ranith Sardar (Jira)
Ranith Sardar created HDFS-14827:


 Summary: RBF: Shared DN should display all info in Router DataNode UI
 Key: HDFS-14827
 URL: https://issues.apache.org/jira/browse/HDFS-14827
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Ranith Sardar
Assignee: Ranith Sardar









[jira] [Created] (HDDS-2095) Submit MR job to YARN failed, error message is "Provider org.apache.hadoop.fs.ozone.OzoneClientAdapterImpl$Renewer not found"

2019-09-06 Thread luhuachao (Jira)
luhuachao created HDDS-2095:
---

 Summary: Submit MR job to YARN failed, error message is 
"Provider org.apache.hadoop.fs.ozone.OzoneClientAdapterImpl$Renewer not found"
 Key: HDDS-2095
 URL: https://issues.apache.org/jira/browse/HDDS-2095
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: Ozone Filesystem
Affects Versions: 0.4.1
Reporter: luhuachao


Below is the submit command:
{code:java}
hadoop jar hadoop-mapreduce-client-jobclient-3.2.0-tests.jar  nnbench 
-Dfs.defaultFS=o3fs://buc.volume-test  -maps 3   -bytesToWrite 1 -numberOfFiles 
1000  -blockSize 16  -operation create_write
{code}
The client fails with the following message:
{code:java}
19/09/06 15:26:52 INFO mapreduce.JobSubmitter: Cleaning up the staging area /user/hdfs/.staging/job_1567754782562_0001
19/09/06 15:26:52 INFO mapreduce.JobSubmitter: Cleaning up the staging area /user/hdfs/.staging/job_1567754782562_0001
java.io.IOException: org.apache.hadoop.yarn.exceptions.YarnException: Failed to submit application_1567754782562_0001 to YARN : org.apache.hadoop.security.token.TokenRenewer: Provider org.apache.hadoop.fs.ozone.OzoneClientAdapterImpl$Renewer not found
    at org.apache.hadoop.mapred.YARNRunner.submitJob(YARNRunner.java:345)
    at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:254)
    at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1570)
    at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1567)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1685)
    at org.apache.hadoop.mapreduce.Job.submit(Job.java:1567)
    at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:576)
    at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:571)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1685)
    at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:571)
    at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:562)
    at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:873)
    at org.apache.hadoop.hdfs.NNBench.runTests(NNBench.java:487)
    at org.apache.hadoop.hdfs.NNBench.run(NNBench.java:604)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
    at org.apache.hadoop.hdfs.NNBench.main(NNBench.java:579)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:71)
    at org.apache.hadoop.util.ProgramDriver.run(ProgramDriver.java:144)
    at org.apache.hadoop.test.MapredTestDriver.run(MapredTestDriver.java:144)
    at org.apache.hadoop.test.MapredTestDriver.main(MapredTestDriver.java:152)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.util.RunJar.run(RunJar.java:308)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:222)
Caused by: org.apache.hadoop.yarn.exceptions.YarnException: Failed to submit application_1567754782562_0001 to YARN : org.apache.hadoop.security.token.TokenRenewer: Provider org.apache.hadoop.fs.ozone.OzoneClientAdapterImpl$Renewer not found
    at org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.submitApplication(YarnClientImpl.java:304)
    at org.apache.hadoop.mapred.ResourceMgrDelegate.submitApplication(ResourceMgrDelegate.java:299)
    at org.apache.hadoop.mapred.YARNRunner.submitJob(YARNRunner.java:330)
    ... 34 more
{code}
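The "Provider ... not found" error comes from Java's ServiceLoader mechanism: it reads fully-qualified provider class names from `META-INF/services/org.apache.hadoop.security.token.TokenRenewer` files on the classpath and throws `ServiceConfigurationError` when a declared class cannot be loaded, which suggests the ozone-filesystem jar that declares `OzoneClientAdapterImpl$Renewer` is not on the ResourceManager's classpath. A rough Python analogue of the lookup (the provider names passed in are hypothetical):

```python
import importlib

def load_providers(declared_names):
    """Mimic java.util.ServiceLoader: resolve each declared provider
    class name, failing loudly when one cannot be loaded."""
    providers = []
    for name in declared_names:
        module_name, _, cls_name = name.rpartition(".")
        try:
            module = importlib.import_module(module_name)
            providers.append(getattr(module, cls_name))
        except (ImportError, AttributeError) as err:
            # Java raises ServiceConfigurationError: "Provider <name> not found"
            raise RuntimeError(f"Provider {name} not found") from err
    return providers
```

Because the declaration lives in a resource file rather than in code, the submitting client can succeed while a different daemon (here the RM's DelegationTokenRenewer) fails on the same token type.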
The corresponding log in the ResourceManager:
{code:java}
2019-09-06 15:26:51,836 WARN  security.DelegationTokenRenewer (DelegationTokenRenewer.java:handleDTRenewerAppSubmitEvent(923)) - Unable to add the application to the delegation token renewer.
java.util.ServiceConfigurationError: org.apache.hadoop.security.token.TokenRenewer: Provider org.apache.hadoop.fs.ozone.OzoneClientAdapterImpl$Renewer not found
    at java.util.ServiceLoader.fail(ServiceLoader.java:239)
    at java.util.ServiceLoader.access$300(ServiceLoader.java:185)
    at java.util.ServiceLoader$LazyIterator.nextService(ServiceLoader.java:372)
    at java.util.ServiceLoader$LazyIterator.next(ServiceLoader.java:404)
    at java.util.ServiceLoader$1.next(ServiceLoader.java:480)
at 

[jira] [Created] (HDFS-14826) dfs.ha.zkfc.port property duplicated in hdfs-default.xml

2019-09-06 Thread Renukaprasad C (Jira)
Renukaprasad C created HDFS-14826:
-

 Summary: dfs.ha.zkfc.port property duplicated in hdfs-default.xml
 Key: HDFS-14826
 URL: https://issues.apache.org/jira/browse/HDFS-14826
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs
Reporter: Renukaprasad C


The "dfs.ha.zkfc.port" property is defined twice in the hdfs-default.xml file, both times with the same value (port 8019) but with different descriptions.

The redundant entry should be removed.
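A quick way to confirm the duplicate is to count property names in the file. The sketch below inlines a minimal stand-in for the two hdfs-default.xml entries (the sample descriptions are invented); in practice you would pass the real file's contents:

```python
import xml.etree.ElementTree as ET
from collections import Counter

def duplicate_properties(xml_text):
    """Return property names that appear more than once in a
    Hadoop *-default.xml configuration document."""
    root = ET.fromstring(xml_text)
    names = Counter(p.findtext("name") for p in root.iter("property"))
    return sorted(n for n, count in names.items() if count > 1)

# Minimal stand-in for the duplicated hdfs-default.xml entries.
SAMPLE = """<configuration>
  <property><name>dfs.ha.zkfc.port</name><value>8019</value>
    <description>ZKFC RPC server port.</description></property>
  <property><name>dfs.ha.zkfc.port</name><value>8019</value>
    <description>RPC port of the ZK Failover Controller.</description></property>
</configuration>"""
```

The duplication is harmless at runtime (Hadoop's Configuration keeps the last value seen, and both values agree here), but it is confusing documentation and worth cleaning up.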






[jira] [Created] (HDDS-2094) TestOzoneManagerRatisServer is failing

2019-09-06 Thread Nanda kumar (Jira)
Nanda kumar created HDDS-2094:
-

 Summary: TestOzoneManagerRatisServer is failing
 Key: HDDS-2094
 URL: https://issues.apache.org/jira/browse/HDDS-2094
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: test
Reporter: Nanda kumar


{{TestOzoneManagerRatisServer}} is failing on trunk with the following error
{noformat}
[ERROR] verifyRaftGroupIdGenerationWithCustomOmServiceId(org.apache.hadoop.ozone.om.ratis.TestOzoneManagerRatisServer)  Time elapsed: 0.418 s  <<< ERROR!
org.apache.hadoop.metrics2.MetricsException: Metrics source OzoneManagerDoubleBufferMetrics already exists!
    at org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.newSourceName(DefaultMetricsSystem.java:152)
    at org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.sourceName(DefaultMetricsSystem.java:125)
    at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.register(MetricsSystemImpl.java:229)
    at org.apache.hadoop.ozone.om.ratis.metrics.OzoneManagerDoubleBufferMetrics.create(OzoneManagerDoubleBufferMetrics.java:50)
    at org.apache.hadoop.ozone.om.ratis.OzoneManagerDoubleBuffer.<init>(OzoneManagerDoubleBuffer.java:110)
    at org.apache.hadoop.ozone.om.ratis.OzoneManagerDoubleBuffer.<init>(OzoneManagerDoubleBuffer.java:88)
    at org.apache.hadoop.ozone.om.ratis.OzoneManagerStateMachine.<init>(OzoneManagerStateMachine.java:87)
    at org.apache.hadoop.ozone.om.ratis.OzoneManagerRatisServer.getStateMachine(OzoneManagerRatisServer.java:314)
    at org.apache.hadoop.ozone.om.ratis.OzoneManagerRatisServer.<init>(OzoneManagerRatisServer.java:244)
    at org.apache.hadoop.ozone.om.ratis.OzoneManagerRatisServer.newOMRatisServer(OzoneManagerRatisServer.java:302)
    at org.apache.hadoop.ozone.om.ratis.TestOzoneManagerRatisServer.verifyRaftGroupIdGenerationWithCustomOmServiceId(TestOzoneManagerRatisServer.java:209)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
    at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
    at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
    at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
    at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
    at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
    at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
    at org.junit.rules.RunRules.evaluate(RunRules.java:20)
    at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
{noformat}
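The error is the usual pattern of a process-wide metrics registry rejecting a second registration under the same source name: the test constructs a second OzoneManagerDoubleBuffer (and with it its metrics source) before the first one is unregistered. A simplified sketch of that check, not the actual Hadoop metrics2 code:

```python
class MetricsSystem:
    """Simplified model of a process-global metrics registry that,
    like Hadoop's DefaultMetricsSystem, forbids duplicate source names."""

    def __init__(self):
        self._sources = {}

    def register(self, name, source):
        if name in self._sources:
            raise RuntimeError(f"Metrics source {name} already exists!")
        self._sources[name] = source
        return source

    def unregister(self, name):
        # Tests must do this in teardown, or the next case fails as above.
        self._sources.pop(name, None)
```

The likely fix directions are either unregistering the metrics source when the double buffer is stopped or resetting the metrics system between test cases.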


