Re: [VOTE] Release Apache Hadoop Submarine 0.2.0 - RC0

2019-06-18 Thread Xun Liu
+1 (non-binding)


> On Jun 17, 2019, at 10:31 PM, Wanqiang Ji  wrote:
> 
> +1 (non-binding)
> 
> On Mon, Jun 17, 2019 at 3:51 PM runlin zhang  wrote:
> 
>> +1 , I'm looking forward to it ~
>> 
>>> 在 2019年6月6日,下午9:23,Zhankun Tang  写道:
>>> 
>>> Hi folks,
>>> 
>>> Thanks to all of you who have contributed to this Submarine 0.2.0
>>> release.
>>> We now have a release candidate (RC0) for Apache Hadoop Submarine 0.2.0.
>>> 
>>> 
>>> The artifacts for this Submarine 0.2.0 RC0 are available here:
>>> 
>>> https://home.apache.org/~ztang/submarine-0.2.0-rc0/
>>> 
>>> 
>>> Its RC tag in git is "submarine-0.2.0-RC0".
>>> 
>>> 
>>> 
>>> The maven artifacts are available via repository.apache.org at
>>> https://repository.apache.org/content/repositories/orgapachehadoop-1221/
>>> 
>>> 
>>> This vote will run 7 days (5 weekdays), ending on 13th June at 11:59 pm
>> PST.
>>> 
>>> 
>>> 
>>> The highlights of this release:
>>> 
>>> 1. LinkedIn's TonY runtime support in Submarine
>>> 
>>> 2. PyTorch enabled in Submarine with both the YARN native service runtime
>>> (single node) and the TonY runtime
>>> 
>>> 3. Support for submitting jobs via a Submarine uber jar
>>> 
>>> 4. YAML file support for describing a job
>>> 
>>> 5. Notebook support (via the Apache Zeppelin Submarine interpreter)
>>> 
>>> 
>>> Thanks to Sunil, Wangda, Xun, Zac, Keqiu, and Szilard for helping me
>>> prepare the release.
>>> 
>>> I have done some testing on my pseudo cluster. My +1 (non-binding) to
>>> start.
>>> 
>>> 
>>> 
>>> Regards,
>>> Zhankun
>> 
>> 
>> -
>> To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
>> For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org
>> 
>> 



-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



Re: [DISCUSS] A unified and open Hadoop community sync up schedule?

2019-06-18 Thread Wangda Tan
Thanks @Wei-Chiu Chuang. Updated the gdoc.

On Tue, Jun 18, 2019 at 7:35 PM Wei-Chiu Chuang  wrote:

> Thanks Wangda,
>
> I'd just like to make a correction -- the .ics calendar file says the
> HDFS/cloud connector sync in Mandarin is on the first Wednesday, whereas
> the gdoc says to host it on the third Wednesday.
>
> On Tue, Jun 18, 2019 at 5:29 PM Wangda Tan  wrote:
>
> > Hi Folks,
> >
> > I just updated doc:
> >
> >
> https://docs.google.com/document/d/1GfNpYKhNUERAEH7m3yx6OfleoF3MqoQk3nJ7xqHD9nY/edit#
> > with
> > dial-in information, notes, etc.
> >
> > Here's a calendar to subscribe:
> >
> >
> https://calendar.google.com/calendar/ical/hadoop.community.sync.up%40gmail.com/public/basic.ics
> >
> > I'm thinking of giving it a try starting next week; any suggestions?
> >
> > Thanks,
> > Wangda
> >
> > On Fri, Jun 14, 2019 at 4:02 PM Wangda Tan  wrote:
> >
> > > And please let me know if you can help with coordinating logistics,
> > > cross-checking, etc. Let's spend some time next week to get it
> > > finalized.
> > >
> > > Thanks,
> > > Wangda
> > >
> > > On Fri, Jun 14, 2019 at 4:00 PM Wangda Tan 
> wrote:
> > >
> > >> Hi Folks,
> > >>
> > >> Yufei: Agree with all your opinions.
> > >>
> > >> Anu: it might be more efficient to use a Google doc to track meeting
> > >> minutes so we can put them together.
> > >>
> > >> I just put the proposal to
> > >>
> >
> https://calendar.google.com/calendar/b/3?cid=aGFkb29wLmNvbW11bml0eS5zeW5jLnVwQGdtYWlsLmNvbQ
> > ,
> > >> you can check whether the proposed time works. If you agree, we can
> > >> go ahead and add the meeting link, Google doc, etc.
> > >>
> > >> If you want edit permissions, please drop me a private email so I can
> > >> add you.
> > >>
> > >> We still need more hosts in each track; ideally we should have at
> > >> least 3 hosts per track, just like HDFS blocks :). Please volunteer so
> > >> we can have enough members to run the meetings.
> > >>
> > >> Let's shoot for the end of next week to get all logistics done and
> > >> start the community sync-up series from the week of Jun 25th.
> > >>
> > >> Thanks,
> > >> Wangda
> > >>
> > >>
> > >>
> > >> On Tue, Jun 11, 2019 at 10:23 AM Anu Engineer  >
> > >> wrote:
> > >>
> > >>> For Ozone, we have started using the Wiki itself as the agenda, and
> > >>> after the meeting is over, we convert it into the meeting notes.
> > >>> Here is an example. The project owner can edit and maintain it; it is
> > >>> about 10 minutes of work and allows anyone to add items to the agenda
> > >>> too.
> > >>>
> > >>>
> > >>>
> >
> https://cwiki.apache.org/confluence/display/HADOOP/2019-06-10+Meeting+notes
> > >>>
> > >>> --Anu
> > >>>
> > >>> On Tue, Jun 11, 2019 at 10:20 AM Yufei Gu 
> > wrote:
> > >>>
> >  +1 for this idea. Thanks Wangda for bringing this up.
> > 
> >  Some comments to share:
> > 
> > - The agenda needs to be posted ahead of the meeting, and any
> > interested party is welcome to contribute topics.
> > - We should encourage more people to attend. That's the whole point
> > of the meeting.
> > - Hopefully, this can mitigate the situation that some patches are
> > waiting for review forever, which turns away new contributors.
> > - 30m per session sounds a little bit short; we can try it out and
> > see if an extension is needed.
> > 
> >  Best,
> > 
> >  Yufei
> > 
> >  `This is not a contribution`
> > 
> > 
> >  On Fri, Jun 7, 2019 at 4:39 PM Wangda Tan 
> > wrote:
> > 
> >  > Hi Hadoop-devs,
> >  >
> >  > Previously we had a regular YARN community sync-up (1 hr, biweekly,
> >  > but not open to the public). Recently, because of changes in our
> >  > schedules, fewer folks have shown up at the sync-up over the last
> >  > several months.
> >  >
> >  > I saw the K8s community do a pretty good job of running their SIG
> >  > meetings: there are regular meetings for different topics, with notes,
> >  > agendas, etc. Such as
> >  >
> >  >
> > 
> >
> https://docs.google.com/document/d/13mwye7nvrmV11q9_Eg77z-1w3X7Q1GTbslpml4J7F3A/edit
> >  >
> >  >
> >  > For the Hadoop community, there are fewer such regular meetings open
> >  > to the public, except for the Ozone project and offline meetups or
> >  > Birds-of-a-Feather sessions at Hadoop/DataWorks Summit. Recently a few
> >  > folks joined DataWorks Summit at Washington DC and Barcelona, and lots
> >  > (50+) of folks joined the Ozone/Hadoop/YARN BoFs, asking (good)
> >  > questions about roadmaps. I think it is important to open such
> >  > conversations to the public and let more folks/companies join.
> >  >
> >  > A small group of community members discussed and wrote a short
> >  > proposal about the form, time and topics of the community sync-up;
> >  > thanks for 

[jira] [Created] (HDFS-14583) FileStatus#toString() will throw IllegalArgumentException

2019-06-18 Thread xuzq (JIRA)
xuzq created HDFS-14583:
---

 Summary: FileStatus#toString() will throw IllegalArgumentException
 Key: HDFS-14583
 URL: https://issues.apache.org/jira/browse/HDFS-14583
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: xuzq


FileStatus#toString() throws an IllegalArgumentException; the stack trace and 
error message look like this:
{code:java}
java.lang.IllegalArgumentException: Can not create a Path from an empty string
  at org.apache.hadoop.fs.Path.checkPathArg(Path.java:172)
  at org.apache.hadoop.fs.Path.<init>(Path.java:184)
  at 
org.apache.hadoop.hdfs.protocol.HdfsLocatedFileStatus.getSymlink(HdfsLocatedFileStatus.java:117)
  at org.apache.hadoop.fs.FileStatus.toString(FileStatus.java:462)
  at 
org.apache.hadoop.hdfs.web.TestJsonUtil.testHdfsFileStatus(TestJsonUtil.java:123)
{code}
Test code to reproduce it:
{code:java}
@Test
public void testHdfsFileStatus() throws IOException {
  // An empty (non-null) symlink byte array is enough to trigger the
  // IllegalArgumentException when toString() is called below.
  HdfsFileStatus hdfsFileStatus = new HdfsFileStatus.Builder()
  .replication(1)
  .blocksize(1024)
  .perm(new FsPermission((short) 777))
  .owner("owner")
  .group("group")
  .symlink(new byte[0])
  .path(new byte[0])
  .fileId(1010)
  .isdir(true)
  .build();
  System.out.println("HdfsFileStatus = " + hdfsFileStatus.toString());
}{code}
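
One way to avoid the exception (a hedged sketch, not necessarily the committed 
fix) is to treat an empty symlink byte array as "not a symlink", so that 
FileStatus#toString() never builds a Path from an empty string. The field name 
uSymlink below is illustrative:
{code:java}
// Hypothetical guard (illustration only): report isSymlink() == true only
// when there are actual symlink bytes, so toString() never calls getSymlink()
// and therefore never reaches new Path("").
@Override
public boolean isSymlink() {
  return uSymlink != null && uSymlink.length > 0;
}
{code}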
 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



Re: [DISCUSS] A unified and open Hadoop community sync up schedule?

2019-06-18 Thread Wangda Tan
Hi Folks,

I just updated doc:
https://docs.google.com/document/d/1GfNpYKhNUERAEH7m3yx6OfleoF3MqoQk3nJ7xqHD9nY/edit#
with
dial-in information, notes, etc.

Here's a calendar to subscribe:
https://calendar.google.com/calendar/ical/hadoop.community.sync.up%40gmail.com/public/basic.ics

I'm thinking of giving it a try starting next week; any suggestions?

Thanks,
Wangda

On Fri, Jun 14, 2019 at 4:02 PM Wangda Tan  wrote:

> And please let me know if you can help with coordinating logistics,
> cross-checking, etc. Let's spend some time next week to get it finalized.
>
> Thanks,
> Wangda
>
> On Fri, Jun 14, 2019 at 4:00 PM Wangda Tan  wrote:
>
>> Hi Folks,
>>
>> Yufei: Agree with all your opinions.
>>
>> Anu: it might be more efficient to use a Google doc to track meeting
>> minutes so we can put them together.
>>
>> I just put the proposal to
>> https://calendar.google.com/calendar/b/3?cid=aGFkb29wLmNvbW11bml0eS5zeW5jLnVwQGdtYWlsLmNvbQ,
>> you can check whether the proposed time works. If you agree, we can go
>> ahead and add the meeting link, Google doc, etc.
>>
>> If you want edit permissions, please drop me a private email so I can
>> add you.
>>
>> We still need more hosts in each track; ideally we should have at least
>> 3 hosts per track, just like HDFS blocks :). Please volunteer so we can
>> have enough members to run the meetings.
>>
>> Let's shoot for the end of next week to get all logistics done and
>> start the community sync-up series from the week of Jun 25th.
>>
>> Thanks,
>> Wangda
>>
>>
>>
>> On Tue, Jun 11, 2019 at 10:23 AM Anu Engineer 
>> wrote:
>>
>>> For Ozone, we have started using the Wiki itself as the agenda, and after
>>> the meeting is over, we convert it into the meeting notes.
>>> Here is an example. The project owner can edit and maintain it; it is
>>> about 10 minutes of work and allows anyone to add items to the agenda too.
>>>
>>>
>>> https://cwiki.apache.org/confluence/display/HADOOP/2019-06-10+Meeting+notes
>>>
>>> --Anu
>>>
>>> On Tue, Jun 11, 2019 at 10:20 AM Yufei Gu  wrote:
>>>
 +1 for this idea. Thanks Wangda for bringing this up.

 Some comments to share:

- The agenda needs to be posted ahead of the meeting, and any interested
party is welcome to contribute topics.
- We should encourage more people to attend. That's the whole point of
the meeting.
- Hopefully, this can mitigate the situation that some patches are
waiting for review forever, which turns away new contributors.
- 30m per session sounds a little bit short; we can try it out and see
if an extension is needed.

 Best,

 Yufei

 `This is not a contribution`


 On Fri, Jun 7, 2019 at 4:39 PM Wangda Tan  wrote:

 > Hi Hadoop-devs,
 >
 > Previously we had a regular YARN community sync-up (1 hr, biweekly, but
 > not open to the public). Recently, because of changes in our schedules,
 > fewer folks have shown up at the sync-up over the last several months.
 >
 > I saw the K8s community do a pretty good job of running their SIG
 > meetings: there are regular meetings for different topics, with notes,
 > agendas, etc. Such as
 >
 >
 https://docs.google.com/document/d/13mwye7nvrmV11q9_Eg77z-1w3X7Q1GTbslpml4J7F3A/edit
 >
 >
 > For the Hadoop community, there are fewer such regular meetings open to
 > the public, except for the Ozone project and offline meetups or
 > Birds-of-a-Feather sessions at Hadoop/DataWorks Summit. Recently a few
 > folks joined DataWorks Summit at Washington DC and Barcelona, and lots
 > (50+) of folks joined the Ozone/Hadoop/YARN BoFs, asking (good) questions
 > about roadmaps. I think it is important to open such conversations to the
 > public and let more folks/companies join.
 >
 > A small group of community members discussed and wrote a short proposal
 > about the form, time and topics of the community sync-up; thanks to
 > everybody who has contributed to the proposal! Please feel free to add
 > your thoughts to the Proposal Google doc
 > <
 >
 https://docs.google.com/document/d/1GfNpYKhNUERAEH7m3yx6OfleoF3MqoQk3nJ7xqHD9nY/edit#
 > >
 > .
 >
 > Especially for the following parts:
 > - If you are interested in running any of the community sync-ups, please
 > put your name in the table inside the proposal. We need more volunteers
 > to help run the sync-ups in different timezones.
 > - Please add suggestions on the time, frequency, and themes, and feel
 > free to share your thoughts on whether we should do sync-ups for other
 > topics not covered by the proposal.
 >
 > Link to the Proposal Google doc
 > <
 >
 https://docs.google.com/document/d/1GfNpYKhNUERAEH7m3yx6OfleoF3MqoQk3nJ7xqHD9nY/edit#
 > >
 >
 > Thanks,
 > Wangda Tan
 >

>>>


[jira] [Created] (HDDS-1705) Recon: Add estimatedTotalCount to the response of containers and containers/{id} endpoints

2019-06-18 Thread Vivek Ratnavel Subramanian (JIRA)
Vivek Ratnavel Subramanian created HDDS-1705:


 Summary: Recon: Add estimatedTotalCount to the response of 
containers and containers/{id} endpoints
 Key: HDDS-1705
 URL: https://issues.apache.org/jira/browse/HDDS-1705
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
  Components: Ozone Recon
Affects Versions: 0.4.0
Reporter: Vivek Ratnavel Subramanian
Assignee: Vivek Ratnavel Subramanian






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Resolved] (HDDS-1702) Optimize Ozone Recon build time

2019-06-18 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1702?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer resolved HDDS-1702.

   Resolution: Fixed
Fix Version/s: 0.4.1

Thanks for the contribution. I have committed the patch to the trunk.

> Optimize Ozone Recon build time 
> 
>
> Key: HDDS-1702
> URL: https://issues.apache.org/jira/browse/HDDS-1702
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>Affects Versions: 0.4.0
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.1
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Currently, the hadoop-ozone-recon node_modules folder is copied to the 
> target folder, and this takes a lot of time while building the hadoop-ozone 
> project. Reduce the build time by excluding the node_modules folder.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



Re: [ANNOUNCE] Aaron Fabbri as Hadoop PMC

2019-06-18 Thread Aaron Fabbri
Thank you everybody. Appreciate it.

On Mon, Jun 17, 2019 at 8:43 PM Sree V 
wrote:

> Congratulations, Aaron.
>
>
>
> Thank you.
> /Sree
>
>
>
> On Monday, June 17, 2019, 7:31:47 PM PDT, Dinesh Chitlangia <
> dchitlan...@cloudera.com.INVALID> wrote:
>
>  Congratulations Aaron!
>
> -Dinesh
>
>
> On Mon, Jun 17, 2019 at 9:29 PM Wanqiang Ji  wrote:
>
> > Congratulations!
> >
> > On Tue, Jun 18, 2019 at 8:29 AM Da Zhou  wrote:
> >
> > > Congratulations!
> > >
> > > Regards,
> > > Da
> > >
> > > > On Mon, Jun 17, 2019 at 5:14 PM Ajay Kumar wrote:
> > >
> > > > Congrats Aaron!!
> > > >
> > > > On Mon, Jun 17, 2019 at 4:00 PM Daniel Templeton
> > > >  wrote:
> > > >
> > > > > I am very pleased to announce that Aaron Fabbri has now been added
> to
> > > > > the Hadoop PMC.  Welcome aboard, Aaron, and Congratulations!
> > > > >
> > > > > Daniel
> > > > >
> > > > >
> -
> > > > > To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
> > > > > For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org
> > > > >
> > > > >
> > > >
> > >
> >
>


[jira] [Resolved] (HDDS-1699) Update RocksDB version to 6.0.1.

2019-06-18 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1699?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham resolved HDDS-1699.
--
   Resolution: Fixed
Fix Version/s: (was: 0.5.0)
   0.4.1

> Update RocksDB version to 6.0.1.
> 
>
> Key: HDDS-1699
> URL: https://issues.apache.org/jira/browse/HDDS-1699
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>Reporter: Aravindan Vijayan
>Assignee: Aravindan Vijayan
>Priority: Major
> Fix For: 0.4.1
>
>
> In RocksDB 6.0.1, some useful tuning features were brought into the JNI API. 
> We need to upgrade the version in Ozone to pick those up. 
> https://github.com/facebook/rocksdb/pull/4833



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDDS-1704) Exercise Ozone tests in maven build

2019-06-18 Thread Eric Yang (JIRA)
Eric Yang created HDDS-1704:
---

 Summary: Exercise Ozone tests in maven build
 Key: HDDS-1704
 URL: https://issues.apache.org/jira/browse/HDDS-1704
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
Reporter: Eric Yang


Ozone trunk regressions happen often, and the acceptance tests are not part of 
the maven build that would inform developers how to fix the problems. It would 
be great to include the acceptance tests as part of the maven verify lifecycle 
to reduce the noise happening on trunk.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-14487) Missing Space in Client Error Message

2019-06-18 Thread Daniel Templeton (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14487?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Templeton resolved HDFS-14487.
-
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.3.0

Thanks for the patch, [~shwetayakkali], and the review, [~sodonnell].  +1  
Committed to trunk.

> Missing Space in Client Error Message
> -
>
> Key: HDFS-14487
> URL: https://issues.apache.org/jira/browse/HDFS-14487
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client
>Affects Versions: 3.2.0
>Reporter: David Mollitor
>Assignee: Shweta
>Priority: Minor
>  Labels: newbie, noob
> Fix For: 3.3.0
>
> Attachments: HDFS-14487.001.patch
>
>
> {code:java}
>   if (retries == 0) {
>     throw new IOException("Unable to close file because the last block"
>         + last + " does not have enough number of replicas.");
>   }
> {code}
> Note the missing space after "last block".
> https://github.com/apache/hadoop/blob/f940ab242da80a22bae95509d5c282d7e2f7ecdb/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java#L968-L969
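> For reference, the corrected message simply restores the missing space (a 
> sketch of the one-character fix):
> {code:java}
> throw new IOException("Unable to close file because the last block "
>     + last + " does not have enough number of replicas.");
> {code}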



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-13841) RBF: After Click on another Tab in Router Federation UI page it's not redirecting to new tab

2019-06-18 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/HDFS-13841?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri resolved HDFS-13841.

Resolution: Duplicate

> RBF: After Click on another Tab in Router Federation UI page it's not 
> redirecting to new tab 
> -
>
> Key: HDFS-13841
> URL: https://issues.apache.org/jira/browse/HDFS-13841
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: federation
>Affects Versions: 3.1.0
>Reporter: Harshakiran Reddy
>Priority: Major
>  Labels: RBF
>
> {{Scenario :-}}
> 1. Open the Federation UI.
> 2. Click on different tabs like {{Mount Tables}}, {{Router}} or 
> {{Subcluster}}.
> 3. Clicking on the mount table tab (or any other tab) does not redirect to 
> that page; it is only redirected to the clicked tab's page after refreshing 
> the page or pressing Enter.
> {{Expected Result :-}}
> It should redirect to the clicked tab's page immediately.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



Apache Hadoop qbt Report: branch2+JDK7 on Linux/x86

2019-06-18 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/356/

[Jun 17, 2019 2:22:01 PM] (weichiu) HDFS-14535. The default 8KB buffer in
[Jun 18, 2019 12:04:38 AM] (weichiu) HDFS-11950. Disable libhdfs zerocopy test 
on Mac. Contributed by Akira




-1 overall


The following subsystems voted -1:
asflicense findbugs hadolint pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/empty-configuration.xml
 
   hadoop-tools/hadoop-azure/src/config/checkstyle-suppressions.xml 
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/public/crossdomain.xml 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/public/crossdomain.xml
 

FindBugs :

   module:hadoop-common-project/hadoop-common 
   Class org.apache.hadoop.fs.GlobalStorageStatistics defines non-transient 
non-serializable instance field map In GlobalStorageStatistics.java:instance 
field map In GlobalStorageStatistics.java 

FindBugs :

   
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/hadoop-yarn-server-timelineservice-hbase-client
 
   Boxed value is unboxed and then immediately reboxed in 
org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result,
 byte[], byte[], KeyConverter, ValueConverter, boolean) At 
ColumnRWHelper.java:then immediately reboxed in 
org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result,
 byte[], byte[], KeyConverter, ValueConverter, boolean) At 
ColumnRWHelper.java:[line 335] 

Failed junit tests :

   hadoop.contrib.bkjournal.TestBookKeeperJournalManager 
   hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys 
   hadoop.hdfs.web.TestWebHdfsTimeouts 
   hadoop.contrib.bkjournal.TestBookKeeperJournalManager 
   hadoop.yarn.sls.TestSLSRunner 
   hadoop.yarn.client.api.impl.TestAMRMProxy 
   hadoop.registry.secure.TestSecureLogins 
   hadoop.yarn.server.nodemanager.containermanager.TestContainerManager 
   hadoop.yarn.server.timelineservice.security.TestTimelineAuthFilterForV2 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/356/artifact/out/diff-compile-cc-root-jdk1.7.0_95.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/356/artifact/out/diff-compile-javac-root-jdk1.7.0_95.txt
  [328K]

   cc:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/356/artifact/out/diff-compile-cc-root-jdk1.8.0_212.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/356/artifact/out/diff-compile-javac-root-jdk1.8.0_212.txt
  [308K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/356/artifact/out/diff-checkstyle-root.txt
  [16M]

   hadolint:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/356/artifact/out/diff-patch-hadolint.txt
  [4.0K]

   pathlen:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/356/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/356/artifact/out/diff-patch-pylint.txt
  [24K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/356/artifact/out/diff-patch-shellcheck.txt
  [72K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/356/artifact/out/diff-patch-shelldocs.txt
  [8.0K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/356/artifact/out/whitespace-eol.txt
  [12M]
   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/356/artifact/out/whitespace-tabs.txt
  [1.2M]

   xml:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/356/artifact/out/xml.txt
  [12K]

   findbugs:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/356/artifact/out/branch-findbugs-hadoop-common-project_hadoop-common-warnings.html
  [8.0K]
   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/356/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice-hbase_hadoop-yarn-server-timelineservice-hbase-client-warnings.html
  [8.0K]

   javadoc:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/356/artifact/out/diff-javadoc-javadoc-root-jdk1.7.0_95.txt
  [16K]
   

[jira] [Created] (HDFS-14582) Failed to start DN with ArithmeticException when NULL checksum used

2019-06-18 Thread Surendra Singh Lilhore (JIRA)
Surendra Singh Lilhore created HDFS-14582:
-

 Summary: Failed to start DN with ArithmeticException when NULL 
checksum used
 Key: HDFS-14582
 URL: https://issues.apache.org/jira/browse/HDFS-14582
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: 3.1.1
 Environment: {noformat}
Caused by: java.lang.ArithmeticException: / by zero
at 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.validateIntegrityAndSetLength(BlockPoolSlice.java:823)
at 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.addReplicaToReplicasMap(BlockPoolSlice.java:627)
at 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.addToReplicasMap(BlockPoolSlice.java:702)
at 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice$AddReplicaProcessor.compute(BlockPoolSlice.java:1047)
at java.util.concurrent.RecursiveAction.exec(RecursiveAction.java:189)
at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289)
at 
java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1056)
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1692)
at 
java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:157){noformat}
Reporter: Surendra Singh Lilhore
Assignee: Surendra Singh Lilhore
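
The division by zero comes from the chunk arithmetic in 
BlockPoolSlice#validateIntegrityAndSetLength. A hedged sketch of a possible 
guard, assuming the NULL checksum type reports getChecksumSize() == 0 (the 
surrounding variable names are illustrative, not the committed fix):
{code:java}
// Illustration only: with DataChecksum.Type.NULL there is no checksum data,
// so the division by getChecksumSize() must be short-circuited.
int checksumSize = checksum.getChecksumSize();
if (checksumSize == 0) {
  // Nothing to validate against; trust the on-disk block file length.
  return blockFileLen;
}
long numChunks = Math.min(
    (blockFileLen + bytesPerChecksum - 1) / bytesPerChecksum,
    (metaFileLen - crcHeaderLen) / checksumSize);
{code}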






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-14580) RBF: LS command for paths shows wrong owner and permission information

2019-06-18 Thread xuzq (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14580?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xuzq resolved HDFS-14580.
-
Resolution: Abandoned

> RBF: LS command for paths shows wrong owner and permission information
> --
>
> Key: HDFS-14580
> URL: https://issues.apache.org/jira/browse/HDFS-14580
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: rbf
>Reporter: xuzq
>Priority: Major
>  Labels: RBF
>
> RouterClientProtocol#getMountPointStatus returns the wrong owner, group and 
> permission for certain paths.
>  
> The mount table looks like this:
> {code:java}
> Mount Table Entries:
> SourceDestinations   OwnerGroup   Mode   Quota/Usage
> /home ns1 -> /home   home homerwxrwxrwx  NsQuota: 
> -/-, SsQuota: -/-
> /home/test1   ns2 -> /home/test1 htest1   htest1  rwxr-xr-x  NsQuota: 
> -/-, SsQuota: -/-
> /test1ns3 -> /test1  test1test1   rwxrwxrwx  NsQuota: 
> -/-, SsQuota: -/-
> {code}
>  
>  
> RouterClientProtocol#getMountPointStatus("/home/test1", 
> HdfsFileStatus.EMPTY_NAME, false) returns null, which is fine.
> But RouterClientProtocol#getMountPointStatus("/home/", 
> HdfsFileStatus.EMPTY_NAME, false) returns a status with 0777 permissions:
> {code:java}
> [HdfsLocatedFileStatus{path=null; isDirectory=true; 
> modification_time=1560857909200; access_time=1560857909200; owner=xuzq; 
> group=supergroup; permission=rwxrwxrwx; isSymlink=false; hasAcl=false; 
> isEncrypted=false; isErasureCoded=false}]
> {code}
>  
> It is clear that the result should look like:
>  
> {code:java}
> [HdfsLocatedFileStatus{path=/home/test1; isDirectory=true; 
> modification_time=2; access_time=2; owner=htest1; group=htest1; 
> permission=rwxr-xr-x; isSymlink=false; hasAcl=false; isEncrypted=false; 
> isErasureCoded=false}]
> {code}
>  
>  
>  
>  
>  
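> A hedged sketch of the expected lookup (names illustrative, not necessarily 
> the committed change): when building the status for a mount point, take 
> owner, group and mode from the matching child mount table entry instead of 
> falling back to the synthetic 0777 permissions.
> {code:java}
> // Illustration only: resolve the child mount entry (e.g. /home/test1) and
> // reuse its ownership and permissions in the returned status.
> MountTable entry = mountTable.getMountPoint(parentPath + "/" + childName);
> if (entry != null) {
>   builder.owner(entry.getOwnerName())
>       .group(entry.getGroupName())
>       .perm(entry.getMode());
> }
> {code}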



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDFS-14581) EC: Append validation should be done before getting lease on file

2019-06-18 Thread Surendra Singh Lilhore (JIRA)
Surendra Singh Lilhore created HDFS-14581:
-

 Summary: EC: Append validation should be done before getting lease 
on file
 Key: HDFS-14581
 URL: https://issues.apache.org/jira/browse/HDFS-14581
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: erasure-coding
Affects Versions: 3.3.0
Reporter: Surendra Singh Lilhore
Assignee: Surendra Singh Lilhore


*org.apache.hadoop.hdfs.server.namenode.FSDirAppendOp.prepareFileForAppend(..)@189*
{noformat}
file.recordModification(iip.getLatestSnapshotId());
file.toUnderConstruction(leaseHolder, clientMachine);

fsn.getLeaseManager().addLease(
    file.getFileUnderConstructionFeature().getClientName(), file.getId());

LocatedBlock ret = null;
if (!newBlock) {
  if (file.isStriped()) {
    throw new UnsupportedOperationException(
        "Append on EC file without new block is not supported.");
  }{noformat}
In this code, the UnsupportedOperationException is thrown after the file has 
been marked under construction. In that case the file is opened without any 
"Open" edit log entry; after some time the lease manager closes the file and 
adds a close edit log entry.

When the SBN tails this edit log, it fails with this exception:
{noformat}
2019-06-13 19:17:51,513 ERROR 
org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader: Encountered exception 
on operation CloseOp [length=0, inodeId=0, path=/ECtest/, 
replication=1, mtime=1560261947480, atime=1560258249117, blockSize=134217728, 
blocks=[blk_-9223372036854775792_1005], permissions=root:hadoop:rw-r--r--, 
aclEntries=null, clientName=, clientMachine=, overwrite=false, 
storagePolicyId=0, erasureCodingPolicyId=0, opCode=OP_CLOSE, txid=1363]
java.io.IOException: File is not under construction: /ECtest/container-executor
at 
org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.applyEditLogOp(FSEditLogLoader.java:504)
at 
org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:286)
at 
org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:181)
at org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:924)
at 
org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer.doTailEdits(EditLogTailer.java:329)
at 
org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.doWork(EditLogTailer.java:460)
at 
org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.access$300(EditLogTailer.java:410)
at 
org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread$1.run(EditLogTailer.java:427)
at 
org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:485)
at 
org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.run(EditLogTailer.java:423)
{noformat}
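
A hedged sketch of the reordering the summary suggests (illustration only, not 
the committed patch): perform the EC append validation before the file is 
marked under construction and before the lease is added, so a rejected append 
leaves no half-open state behind.
{code:java}
// Validate first: an unsupported EC append should fail before any state
// (under-construction flag, lease) is attached to the file.
if (!newBlock && file.isStriped()) {
  throw new UnsupportedOperationException(
      "Append on EC file without new block is not supported.");
}

file.recordModification(iip.getLatestSnapshotId());
file.toUnderConstruction(leaseHolder, clientMachine);
fsn.getLeaseManager().addLease(
    file.getFileUnderConstructionFeature().getClientName(), file.getId());
{code}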



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDFS-14580) LS command for paths shows wrong owner and permission information

2019-06-18 Thread xuzq (JIRA)
xuzq created HDFS-14580:
---

 Summary: LS command for paths shows wrong owner and permission 
information
 Key: HDFS-14580
 URL: https://issues.apache.org/jira/browse/HDFS-14580
 Project: Hadoop HDFS
  Issue Type: Test
  Components: rbf
Reporter: xuzq






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDFS-14579) In refreshNodes, avoid performing a DNS lookup while holding the write lock

2019-06-18 Thread Stephen O'Donnell (JIRA)
Stephen O'Donnell created HDFS-14579:


 Summary: In refreshNodes, avoid performing a DNS lookup while 
holding the write lock
 Key: HDFS-14579
 URL: https://issues.apache.org/jira/browse/HDFS-14579
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 3.3.0
Reporter: Stephen O'Donnell
Assignee: Stephen O'Donnell


When refreshNodes is called on a large cluster, or a cluster where DNS is not 
performing well, it can cause the namenode to hang for a long time. This is 
because the refreshNodes operation holds the global write lock while it is 
running. Most of the refreshNodes code is simple and hence fast, but unfortunately 
it performs a DNS lookup for each host in the cluster while the lock is held. 

Right now, it calls:

{code}
  public void refreshNodes(final Configuration conf) throws IOException {
refreshHostsReader(conf);
namesystem.writeLock();
try {
  refreshDatanodes();
  countSoftwareVersions();
} finally {
  namesystem.writeUnlock();
}
  }
{code}

The line refreshHostsReader(conf); reads the new config file and does a DNS 
lookup on each entry - the write lock is not held here. Then the main work is 
done here:

{code}
  private void refreshDatanodes() {
    final Map<String, DatanodeDescriptor> copy;
synchronized (this) {
  copy = new HashMap<>(datanodeMap);
}
for (DatanodeDescriptor node : copy.values()) {
  // Check if not include.
  if (!hostConfigManager.isIncluded(node)) {
node.setDisallowed(true);
  } else {
long maintenanceExpireTimeInMS =
hostConfigManager.getMaintenanceExpirationTimeInMS(node);
if (node.maintenanceNotExpired(maintenanceExpireTimeInMS)) {
  datanodeAdminManager.startMaintenance(
  node, maintenanceExpireTimeInMS);
} else if (hostConfigManager.isExcluded(node)) {
  datanodeAdminManager.startDecommission(node);
} else {
  datanodeAdminManager.stopMaintenance(node);
  datanodeAdminManager.stopDecommission(node);
}
  }
  node.setUpgradeDomain(hostConfigManager.getUpgradeDomain(node));
}
  }
{code}

All the isIncluded(), isExcluded() methods call node.getResolvedAddress() which 
does the DNS lookup. We could probably change things to perform all the DNS 
lookups outside of the write lock, and then take the lock and process the 
nodes. Also change or overload isIncluded() etc to take the inetAddress rather 
than the datanode descriptor.

It would not shorten the time the operation takes to run overall, but it would 
move the long duration out of the write lock and avoid blocking the namenode 
for the entire time.
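
A hedged sketch of that restructuring (the helper resolveAllDatanodes() and 
the address-based overloads are hypothetical, not the committed patch):

{code:java}
public void refreshNodes(final Configuration conf) throws IOException {
  refreshHostsReader(conf);  // reads the new config and does DNS, no lock held
  // Resolve every datanode address up front, still outside the write lock.
  Map<DatanodeDescriptor, InetSocketAddress> resolved = resolveAllDatanodes();
  namesystem.writeLock();
  try {
    // Pure in-memory processing: isIncluded()/isExcluded() overloads take
    // the pre-resolved addresses instead of doing DNS under the lock.
    refreshDatanodes(resolved);
    countSoftwareVersions();
  } finally {
    namesystem.writeUnlock();
  }
}
{code}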



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDDS-1703) Freon uses wait/notify instead of polling to eliminate the test result error.

2019-06-18 Thread Xudong Cao (JIRA)
Xudong Cao created HDDS-1703:


 Summary: Freon uses wait/notify instead of polling to eliminate 
the test result error.
 Key: HDDS-1703
 URL: https://issues.apache.org/jira/browse/HDDS-1703
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
  Components: test
Affects Versions: 0.4.0
Reporter: Xudong Cao
Assignee: Xudong Cao


After HDDS-1532, Freon has an efficient concurrent testing framework. In the 
new framework, the main thread checks every 5s whether the test has completed 
(or an exception has occurred), which introduces a maximum error of 5s into 
the reported result.

In most cases, Freon's test runs last minutes or tens of minutes, so a 5s 
error is not significant; but in some particularly short tests, a 5s error may 
have a significant impact.

Therefore, we can use the combination of Object.wait() + Object.notify()  
instead of polling to completely eliminate this error.
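
A minimal sketch of the idea (names hypothetical, not the actual Freon 
classes): the worker that finishes last notifies a shared monitor, so the main 
thread wakes up immediately instead of on the next 5s poll.

{code:java}
// Illustration only: replace "sleep 5s and re-check" with wait/notify.
private final Object monitor = new Object();
private volatile boolean done = false;

void awaitCompletion() throws InterruptedException {
  synchronized (monitor) {
    while (!done) {
      monitor.wait();  // no polling, so no 5s error in the measured duration
    }
  }
}

void signalCompletion() {  // called when the test completes or fails
  synchronized (monitor) {
    done = true;
    monitor.notifyAll();
  }
}
{code}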



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Resolved] (HDDS-1694) TestNodeReportHandler is failing with NPE

2019-06-18 Thread Elek, Marton (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1694?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton resolved HDDS-1694.

Resolution: Fixed

> TestNodeReportHandler is failing with NPE
> -
>
> Key: HDDS-1694
> URL: https://issues.apache.org/jira/browse/HDDS-1694
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Critical
>  Labels: pull-request-available
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> {code:java}
> FAILURE in 
> ozone-unit-076618677d39x4h9/unit/hadoop-hdds/server-scm/org.apache.hadoop.hdds.scm.node.TestNodeReportHandler.txt
> ---
> Test set: org.apache.hadoop.hdds.scm.node.TestNodeReportHandler
> ---
> Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.43 s <<< 
> FAILURE! - in org.apache.hadoop.hdds.scm.node.TestNodeReportHandler
> testNodeReport(org.apache.hadoop.hdds.scm.node.TestNodeReportHandler)  Time 
> elapsed: 0.288 s  <<< ERROR!
> java.lang.NullPointerException
>     at 
> org.apache.hadoop.hdds.scm.node.SCMNodeManager.<init>(SCMNodeManager.java:122)
>     at 
> org.apache.hadoop.hdds.scm.node.TestNodeReportHandler.resetEventCollector(TestNodeReportHandler.java:53)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>     at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:498)
>     at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>     at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>     at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>     at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
>     at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
>     at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
>     at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
>     at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
>     at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
>     at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
>     at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
>     at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
>     at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
>     at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>     at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>     at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
>     at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>     at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
>     at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
>     at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
>     at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
> 2019-06-16 23:52:29,345 INFO  node.SCMNodeManager 
> (SCMNodeManager.java:<init>(119)) - Entering startup safe mode.
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org