Re: [VOTE] Moving Submarine to a separate Apache project proposal

2019-09-04 Thread Uma Maheswara Rao Gangumalla
+1

Regards,
Uma

On Sat, Aug 31, 2019, 10:19 PM Wangda Tan  wrote:

> Hi all,
>
> As we discussed in the previous thread [1],
>
> I just moved the spin-off proposal to CWIKI and completed all TODO parts.
>
>
> https://cwiki.apache.org/confluence/display/HADOOP/Submarine+Project+Spin-Off+to+TLP+Proposal
>
> If you are interested in learning more, please review the proposal and
> let me know if you have any questions or suggestions. The proposal
> will be sent to the board after the vote passes. (Please note that the
> previous voting thread [2] to move Submarine to a separate GitHub repo is a
> necessary step toward moving Submarine to a separate Apache project, but
> not a sufficient one, so I sent two separate voting threads.)
>
> Please let me know if I missed anyone in the proposal, and reply if you'd
> like to be included in the project.
>
> This vote runs for 7 days and will conclude on Sep 7th at 11 PM PDT.
>
> Thanks,
> Wangda Tan
>
> [1]
>
> https://lists.apache.org/thread.html/4a2210d567cbc05af92c12aa6283fd09b857ce209d537986ed800029@%3Cyarn-dev.hadoop.apache.org%3E
> [2]
>
> https://lists.apache.org/thread.html/6e94469ca105d5a15dc63903a541bd21c7ef70b8bcff475a16b5ed73@%3Cyarn-dev.hadoop.apache.org%3E
>


Re: [VOTE] Moving Submarine to a separate Apache project proposal

2019-09-04 Thread Rohith Sharma K S
+1, great to see Submarine's progress.
I am interested in participating in this project. Please include me as well.

-Rohith Sharma K S

On Sun, 1 Sep 2019 at 10:49, Wangda Tan  wrote:

> Hi all,
>
> As we discussed in the previous thread [1],
>
> I just moved the spin-off proposal to CWIKI and completed all TODO parts.
>
>
> https://cwiki.apache.org/confluence/display/HADOOP/Submarine+Project+Spin-Off+to+TLP+Proposal
>
> If you are interested in learning more, please review the proposal and
> let me know if you have any questions or suggestions. The proposal
> will be sent to the board after the vote passes. (Please note that the
> previous voting thread [2] to move Submarine to a separate GitHub repo is a
> necessary step toward moving Submarine to a separate Apache project, but
> not a sufficient one, so I sent two separate voting threads.)
>
> Please let me know if I missed anyone in the proposal, and reply if you'd
> like to be included in the project.
>
> This vote runs for 7 days and will conclude on Sep 7th at 11 PM PDT.
>
> Thanks,
> Wangda Tan
>
> [1]
>
> https://lists.apache.org/thread.html/4a2210d567cbc05af92c12aa6283fd09b857ce209d537986ed800029@%3Cyarn-dev.hadoop.apache.org%3E
> [2]
>
> https://lists.apache.org/thread.html/6e94469ca105d5a15dc63903a541bd21c7ef70b8bcff475a16b5ed73@%3Cyarn-dev.hadoop.apache.org%3E
>


Apache Hadoop qbt Report: branch2+JDK7 on Linux/x86

2019-09-04 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/434/

[Sep 4, 2019 12:11:28 AM] (cliang) HDFS-13547. Add ingress port based sasl 
resolver. Contributed by Chen
[Sep 4, 2019 12:11:28 AM] (cliang) HDFS-13566. Add configurable additional RPC 
listener to NameNode.
[Sep 4, 2019 12:11:29 AM] (cliang) HDFS-13617. Allow wrapping NN QOP into token 
in encrypted message.
[Sep 4, 2019 12:11:29 AM] (cliang) HDFS-13699. Add DFSClient sending handshake 
token to DataNode, and allow
[Sep 4, 2019 12:18:41 AM] (cliang) HDFS-14611. Move handshake secret field from 
Token to BlockAccessToken.
[Sep 4, 2019 8:26:33 AM] (iwasakims) HADOOP-16439. Upgrade bundled Tomcat in 
branch-2.




-1 overall


The following subsystems voted -1:
asflicense findbugs hadolint pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/empty-configuration.xml
 
   hadoop-tools/hadoop-azure/src/config/checkstyle-suppressions.xml 
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/public/crossdomain.xml 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/public/crossdomain.xml
 

FindBugs :

   
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/hadoop-yarn-server-timelineservice-hbase-client
 
   Boxed value is unboxed and then immediately reboxed in 
org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result,
 byte[], byte[], KeyConverter, ValueConverter, boolean) At 
ColumnRWHelper.java:[line 335] 

Failed junit tests :

   hadoop.hdfs.TestQuota 
   hadoop.hdfs.server.datanode.TestDirectoryScanner 
   hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys 
   hadoop.registry.secure.TestSecureLogins 
   hadoop.yarn.server.timelineservice.security.TestTimelineAuthFilterForV2 
   hadoop.mapreduce.v2.app.TestKill 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/434/artifact/out/diff-compile-cc-root-jdk1.7.0_95.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/434/artifact/out/diff-compile-javac-root-jdk1.7.0_95.txt
  [328K]

   cc:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/434/artifact/out/diff-compile-cc-root-jdk1.8.0_222.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/434/artifact/out/diff-compile-javac-root-jdk1.8.0_222.txt
  [308K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/434/artifact/out/diff-checkstyle-root.txt
  [16M]

   hadolint:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/434/artifact/out/diff-patch-hadolint.txt
  [4.0K]

   pathlen:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/434/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/434/artifact/out/diff-patch-pylint.txt
  [24K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/434/artifact/out/diff-patch-shellcheck.txt
  [72K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/434/artifact/out/diff-patch-shelldocs.txt
  [8.0K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/434/artifact/out/whitespace-eol.txt
  [12M]
   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/434/artifact/out/whitespace-tabs.txt
  [1.3M]

   xml:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/434/artifact/out/xml.txt
  [12K]

   findbugs:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/434/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice-hbase_hadoop-yarn-server-timelineservice-hbase-client-warnings.html
  [8.0K]

   javadoc:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/434/artifact/out/diff-javadoc-javadoc-root-jdk1.7.0_95.txt
  [16K]
   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/434/artifact/out/diff-javadoc-javadoc-root-jdk1.8.0_222.txt
  [1.1M]

   unit:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/434/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 

Re: Hadoop Storage online sync in an hour

2019-09-04 Thread Wei-Chiu Chuang
Here it is:
https://docs.google.com/document/d/1GfNpYKhNUERAEH7m3yx6OfleoF3MqoQk3nJ7xqHD9nY/edit?ts=5d609315#heading=h.xh4zfwj8ppmn

I think I made a mistake:

Storage: HDFS, Cloud connectors

 * North America, EMEA, India | English | Every 2 weeks (even week) | 10 AM (GMT-8) | Brahma Reddy Battula
 * APAC, North America | English | Every 4 weeks (4th week) | 1 PM (GMT+8) | Weichiu Chuang
 * APAC, North America | Mandarin 中文 | Every 4 weeks (3rd week) | 1 PM (GMT+8) | Weichiu Chuang

On Wed, Sep 4, 2019 at 6:07 PM Aaron Fabbri  wrote:

> Hi Wei-Chiu,
>
> Can you share the calendar link again for this meeting?
>
> Thanks,
> Aaron
>
> On Wed, Sep 4, 2019 at 9:31 AM Matt Foley  wrote:
>
>> Sorry I won’t be able to come today; a work meeting interferes.
>> —Matt
>>
>> On Sep 4, 2019, at 9:10 AM, Wei-Chiu Chuang  wrote:
>>
>> It's a short week so I didn't set up a predefined topic to discuss.
>>
>> What should we be discussing? How about Erasure Coding? I'm starting to
>> see
>> tricky EC bug reports coming in lately, so looks like folks are using it
>> in
>> production. Should we be thinking about the next step for EC in addition
>> to
>> bug fixes?
>>
>> Feel free to contribute other topics. We could also continue discussing
>> NameNode Fine-Grained Locking from the last time.
>>
>> Weichiu
>>
>>
>> -
>> To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
>> For additional commands, e-mail: common-dev-h...@hadoop.apache.org
>>
>>


Re: Hadoop Storage online sync in an hour

2019-09-04 Thread Aaron Fabbri
Hi Wei-Chiu,

Can you share the calendar link again for this meeting?

Thanks,
Aaron

On Wed, Sep 4, 2019 at 9:31 AM Matt Foley  wrote:

> Sorry I won’t be able to come today; a work meeting interferes.
> —Matt
>
> On Sep 4, 2019, at 9:10 AM, Wei-Chiu Chuang  wrote:
>
> It's a short week so I didn't set up a predefined topic to discuss.
>
> What should we be discussing? How about Erasure Coding? I'm starting to see
> tricky EC bug reports coming in lately, so looks like folks are using it in
> production. Should we be thinking about the next step for EC in addition to
> bug fixes?
>
> Feel free to contribute other topics. We could also continue discussing
> NameNode Fine-Grained Locking from the last time.
>
> Weichiu
>
>
> -
> To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
> For additional commands, e-mail: common-dev-h...@hadoop.apache.org
>
>


Re: [DISCUSS] GitHub PRs without JIRA number

2019-09-04 Thread Sean Busbey
We should add a Pull Request Template that specifically calls out the
expectation that folks need to have a JIRA associated with their PR
for it to get reviewed. Expectations around response times and how
to go about getting attention when things lag would also be good to
include (e.g. are folks expected to ping on the JIRA? are folks
expected to email a relevant *-dev list?).

If anyone is interested in doing the work to make it so "test this" /
"retest this" / etc work, open a jira and I'll give you some pointers
of examples to go off of. We use a plugin to do this for yetus based
tests in some HBase repos.
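As a strawman, such a template might look something like the sketch below. The `.github/PULL_REQUEST_TEMPLATE.md` path is GitHub's standard location for these files; the wording is only an illustration of the expectations discussed above, not agreed text:

```
<!-- .github/PULL_REQUEST_TEMPLATE.md (strawman sketch) -->
### JIRA
HADOOP-XXXXX: <one-line summary matching the JIRA title>
PRs without an associated JIRA will not be reviewed.

### Description of the change
What does this change do, and why?

### How was this tested?
Unit tests run, manual verification, etc.

<!-- If a review lags, ping on the JIRA first; if there is still no
     response after a week or so, email the relevant *-dev list. -->
```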

On Wed, Sep 4, 2019 at 1:59 PM Wei-Chiu Chuang
 wrote:
>
> +general@
>
>
> On Wed, Aug 28, 2019 at 6:42 AM Wei-Chiu Chuang  wrote:
>
> > I don't think our GitHub integration supports those commands. Ozone has
> > its own github integration that can test individual PRs though.
> >
> >
> >
> > On Tue, Aug 27, 2019 at 12:40 PM Iñigo Goiri  wrote:
> >
> >> I wouldn't go for #3 and always require a JIRA for a PR.
> >>
> >> In general, I think we should state the best practices for using GitHub
> >> PRs.
> >> There were some guidelines, but they were somewhat open-ended.
> >> For example, always adding a link to the JIRA in the description.
> >> I think PRs can have a template as a start.
> >>
> >> The other thing I would do is to disable the automatic Jenkins trigger.
> >> I've seen the "retest this" and others:
> >> https://wiki.jenkins.io/display/JENKINS/GitHub+pull+request+builder+plugin
> >> https://github.com/jenkinsci/ghprb-plugin/blob/master/README.md
> >>
> >>
> >>
> >> On Tue, Aug 27, 2019 at 10:47 AM Wei-Chiu Chuang 
> >> wrote:
> >>
> >> > Hi,
> >> > There are hundreds of GitHub PRs pending review. Many of them just sit
> >> > there wasting Jenkins resources.
> >> >
> >> > I suggest:
> >> > (1) Close PRs that have gone stale (i.e. no longer compile), or even
> >> > close PRs that haven't been reviewed for more than a year.
> >> > (1) Close PRs that don't have a JIRA number. No one is going to
> >> > review a big PR that doesn't have a JIRA anyway.
> >> > (2) For PRs without a JIRA number, file JIRAs for the PR on behalf of
> >> > the reporter.
> >> > (3) For typo fixes, merge the PRs directly without a JIRA. IMO, this
> >> > is the best use of GitHub PRs.
> >> >
> >> > Thoughts?
> >> >
> >>
> >



-- 
busbey

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Re: [DISCUSS] GitHub PRs without JIRA number

2019-09-04 Thread Wei-Chiu Chuang
+general@


On Wed, Aug 28, 2019 at 6:42 AM Wei-Chiu Chuang  wrote:

> I don't think our GitHub integration supports those commands. Ozone has
> its own github integration that can test individual PRs though.
>
>
>
> On Tue, Aug 27, 2019 at 12:40 PM Iñigo Goiri  wrote:
>
>> I wouldn't go for #3 and always require a JIRA for a PR.
>>
>> In general, I think we should state the best practices for using GitHub
>> PRs.
>> There were some guidelines, but they were somewhat open-ended.
>> For example, always adding a link to the JIRA in the description.
>> I think PRs can have a template as a start.
>>
>> The other thing I would do is to disable the automatic Jenkins trigger.
>> I've seen the "retest this" and others:
>> https://wiki.jenkins.io/display/JENKINS/GitHub+pull+request+builder+plugin
>> https://github.com/jenkinsci/ghprb-plugin/blob/master/README.md
>>
>>
>>
>> On Tue, Aug 27, 2019 at 10:47 AM Wei-Chiu Chuang 
>> wrote:
>>
>> > Hi,
>> > There are hundreds of GitHub PRs pending review. Many of them just sit
>> > there wasting Jenkins resources.
>> >
>> > I suggest:
>> > (1) Close PRs that have gone stale (i.e. no longer compile), or even
>> > close PRs that haven't been reviewed for more than a year.
>> > (1) Close PRs that don't have a JIRA number. No one is going to
>> > review a big PR that doesn't have a JIRA anyway.
>> > (2) For PRs without a JIRA number, file JIRAs for the PR on behalf of
>> > the reporter.
>> > (3) For typo fixes, merge the PRs directly without a JIRA. IMO, this
>> > is the best use of GitHub PRs.
>> >
>> > Thoughts?
>> >
>>
>


[jira] [Created] (HADOOP-16548) ABFS: Config to enable/disable flush operation

2019-09-04 Thread Bilahari T H (Jira)
Bilahari T H created HADOOP-16548:
-

 Summary: ABFS: Config to enable/disable flush operation
 Key: HADOOP-16548
 URL: https://issues.apache.org/jira/browse/HADOOP-16548
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/azure
Reporter: Bilahari T H
Assignee: Bilahari T H


Make the flush operation enabled/disabled through configuration. This is part of 
the performance improvements for the ABFS driver.
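For illustration, such a switch might surface as a core-site.xml property along these lines. The property name and default below are assumptions for this sketch, not the final design:

```xml
<!-- Hypothetical configuration sketch; name and default are assumptions. -->
<property>
  <name>fs.azure.enable.flush</name>
  <value>false</value>
  <description>
    When false, the ABFS driver ignores hflush()/hsync() calls to save
    round trips to the store; buffered data is still persisted when the
    stream is closed. Only suitable for workloads that do not depend on
    flush durability guarantees.
  </description>
</property>
```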



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Re: Hadoop Storage online sync in an hour

2019-09-04 Thread Matt Foley
Sorry I won’t be able to come today; a work meeting interferes.
—Matt

On Sep 4, 2019, at 9:10 AM, Wei-Chiu Chuang  wrote:

It's a short week so I didn't set up a predefined topic to discuss.

What should we be discussing? How about Erasure Coding? I'm starting to see
tricky EC bug reports coming in lately, so looks like folks are using it in
production. Should we be thinking about the next step for EC in addition to
bug fixes?

Feel free to contribute other topics. We could also continue discussing
NameNode Fine-Grained Locking from the last time.

Weichiu


-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-16547) s3guard prune command doesn't get AWS auth chain from FS

2019-09-04 Thread Steve Loughran (Jira)
Steve Loughran created HADOOP-16547:
---

 Summary: s3guard prune command doesn't get AWS auth chain from FS
 Key: HADOOP-16547
 URL: https://issues.apache.org/jira/browse/HADOOP-16547
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Affects Versions: 3.3.0
Reporter: Steve Loughran
Assignee: Steve Loughran


The s3guard prune command doesn't get the AWS auth chain from any FS, so it just 
drives the DDB store from the conf settings. If S3A is set up to use delegation 
tokens then the DTs/custom AWS auth sequence is not picked up, so you get an 
auth failure.

Fix:

# instantiate the FS before calling initMetadataStore
# review other commands to make sure the problem isn't replicated




-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Hadoop Storage online sync in an hour

2019-09-04 Thread Wei-Chiu Chuang
It's a short week so I didn't set up a predefined topic to discuss.

What should we be discussing? How about Erasure Coding? I'm starting to see
tricky EC bug reports coming in lately, so looks like folks are using it in
production. Should we be thinking about the next step for EC in addition to
bug fixes?

Feel free to contribute other topics. We could also continue discussing
NameNode Fine-Grained Locking from the last time.

Weichiu


RE: [VOTE] Moving Submarine to a separate Apache project proposal

2019-09-04 Thread Brahma Reddy Battula
+1, thanks to Wangda for the proposal.

I am interested in participating in this project. Please include me.


-Original Message-
From: Wanqiang Ji [mailto:wanqiang...@gmail.com] 
Sent: Wednesday, September 04, 2019 6:53 PM
To: Wangda Tan 
Cc: submarine-dev ; yarn-dev 
; Hdfs-dev ; 
mapreduce-dev ; Hadoop Common 
; private 
Subject: Re: [VOTE] Moving Submarine to a separate Apache project proposal

+1

Thanks for Wangda's proposal.

It is indeed amazing to see the growth and development of Submarine. Becoming a 
TLP will attract more developers to join.
I will put more energy into it and contribute more features. I look forward to 
the next stage for Submarine.

Thanks,
Wanqiang Ji

On Wed, Sep 4, 2019 at 3:09 PM Bibin Chundatt 
wrote:

> +1
> Thank you  for the proposal.
> I am interested in the project. Please include me as well.
>
> Thanks,
> Bibin
>
> On Tue, Sep 3, 2019 at 6:41 PM Ayush Saxena 
> wrote:
>
> > +1
> > Thanx for the proposal.
> >
> > I would even like to participate in the project.
> > Please add me as well.
> >
> > -Ayush
> >
> >
> > > On 03-Sep-2019, at 6:00 PM, Vinayakumar B 
> > > 
> > wrote:
> > >
> > > +1
> > >
> > > Thanks for the proposal.
> > > It's a very interesting project and looks very promising, judging by
> > > the participation from various companies and the speed of development.
> > >
> > > I would also like to participate in the project.
> > > Please add me as well.
> > >
> > > Thanks,
> > > -Vinay
> > >
> > > On Tue, 3 Sep 2019, 12:38 pm Rakesh Radhakrishnan, 
> > >  >
> > > wrote:
> > >
> > >> +1, Thanks for the proposal.
> > >>
> > >> I am interested to participate in this project. Please include me 
> > >> as
> > well
> > >> in the project.
> > >>
> > >> Thanks,
> > >> Rakesh
> > >>
> > >> On Tue, Sep 3, 2019 at 11:59 AM zhankun tang 
> > >> 
> > >> wrote:
> > >>
> > >>> +1
> > >>>
> > >>> Thanks for Wangda's proposal.
> > >>>
> > >>> The Submarine project was born within Hadoop, but is not limited to
> > >>> Hadoop. It
> > >>> began with a trainer on YARN, but it was quickly realized that only a
> > >>> trainer is
> > >>> not enough to meet the AI platform requirements. But currently no
> > >>> user-friendly open-source solution covers the whole AI
> > >>> pipeline, including
> > data
> > >>> engineering, training, and serving. And the underlying data
> > >> infrastructure
> > >>> itself is also evolving; for instance, many people love k8s. Not to
> > >>> mention there are many AI domain problems in this area to be solved.
> > >>> It is almost certain that building such an ML platform would
> > >>> utilize various other open-source components, with ML taken into
> > >>> consideration from the start.
> > >>>
> > >>> I see Submarine growing rapidly towards an enterprise-grade ML
> > >>> platform which could potentially enable AI capability for data
> > >>> engineers and scientists.
> > This
> > >>> is an exciting thing for both the community and the industry.
> > >>>
> > >>> BR,
> > >>> Zhankun
> > >>>
> > >>>
> >  On Tue, 3 Sep 2019 at 13:34, Xun Liu  wrote:
> > 
> >  +1
> > 
> >  Hello everyone, I am a member of the submarine development team.
> >  I have been contributing to submarine for more than a year.
> >  I have seen Submarine development progress very quickly.
> >  In just over a year, 9 long-term developers from different
> >  companies have been contributing; Submarine now has more than
> >  200,000 lines of code, is growing very fast,
> >  and is used in the production environments of multiple companies.
> > 
> >  In the submarine development group, there are 5 PMC members and 7
> >  committers from the Hadoop, Spark, and Zeppelin projects.
> >  They are very familiar with the development processes and
> >  conventions of the Apache community, and can manage the project's
> >  development progress and quality well.
> >  So I recommend that Submarine become a TLP directly.
> > 
> >  We will continue to contribute to the submarine project. :-)
> > 
> >  Xun Liu
> >  Regards
> > 
> > > On Tue, 3 Sep 2019 at 12:01, Devaraj K  wrote:
> > >
> > > +1
> > >
> > > Thanks Wangda for the proposal.
> > > I would like to participate in this project. Please add me to the
> > > project as well.
> > >
> > > Regards
> > > Devaraj K
> > >
> > > On Mon, Sep 2, 2019 at 8:50 PM zac yuan 
> > > 
> > >> wrote:
> > >
> > >> +1
> > >>
> > >> Submarine will be a complete solution for AI service development.
> > >> It can take advantage of the two best cluster systems, YARN and
> > >> k8s, which will help more and more people gain AI capability.
> > >> Becoming a separate Apache project will clearly accelerate
> > >> development.
> > >>
> > >> 

Re: [VOTE] Moving Submarine to a separate Apache project proposal

2019-09-04 Thread Wanqiang Ji
+1

Thanks for Wangda's proposal.

It is indeed amazing to see the growth and development of Submarine. Becoming a
TLP will attract more developers to join.
I will put more energy into it and contribute more features. I look forward to
the next stage for Submarine.

Thanks,
Wanqiang Ji

On Wed, Sep 4, 2019 at 3:09 PM Bibin Chundatt 
wrote:

> +1
> Thank you  for the proposal.
> I am interested in the project. Please include me as well.
>
> Thanks,
> Bibin
>
> On Tue, Sep 3, 2019 at 6:41 PM Ayush Saxena 
> wrote:
>
> > +1
> > Thanx for the proposal.
> >
> > I would even like to participate in the project.
> > Please add me as well.
> >
> > -Ayush
> >
> >
> > > On 03-Sep-2019, at 6:00 PM, Vinayakumar B 
> > wrote:
> > >
> > > +1
> > >
> > > Thanks for the proposal.
> > > It's a very interesting project and looks very promising, judging by
> > > the participation from various companies and the speed of development.
> > >
> > > I would also like to participate in the project.
> > > Please add me as well.
> > >
> > > Thanks,
> > > -Vinay
> > >
> > > On Tue, 3 Sep 2019, 12:38 pm Rakesh Radhakrishnan,  >
> > > wrote:
> > >
> > >> +1, Thanks for the proposal.
> > >>
> > >> I am interested to participate in this project. Please include me as
> > well
> > >> in the project.
> > >>
> > >> Thanks,
> > >> Rakesh
> > >>
> > >> On Tue, Sep 3, 2019 at 11:59 AM zhankun tang 
> > >> wrote:
> > >>
> > >>> +1
> > >>>
> > >>> Thanks for Wangda's proposal.
> > >>>
> > >>> The Submarine project was born within Hadoop, but is not limited to
> > >>> Hadoop. It
> > >>> began with a trainer on YARN, but it was quickly realized that only a
> > >>> trainer is
> > >>> not enough to meet the AI platform requirements. But currently no
> > >>> user-friendly open-source solution covers the whole AI pipeline, including
> > data
> > >>> engineering, training, and serving. And the underlying data
> > >> infrastructure
> > >>> itself is also evolving; for instance, many people love k8s. Not to
> > >>> mention there are many AI domain problems in this area to be solved.
> > >>> It is almost certain that building such an ML platform would
> > >>> utilize various other open-source components, with ML taken into
> > >>> consideration from the start.
> > >>>
> > >>> I see Submarine growing rapidly towards an enterprise-grade ML
> > >>> platform which could potentially enable AI capability for data
> > >>> engineers and scientists.
> > This
> > >>> is an exciting thing for both the community and the industry.
> > >>>
> > >>> BR,
> > >>> Zhankun
> > >>>
> > >>>
> >  On Tue, 3 Sep 2019 at 13:34, Xun Liu  wrote:
> > 
> >  +1
> > 
> >  Hello everyone, I am a member of the submarine development team.
> >  I have been contributing to submarine for more than a year.
> >  I have seen Submarine development progress very quickly.
> >  In just over a year, 9 long-term developers from different
> >  companies have been contributing; Submarine now has more than
> >  200,000 lines of code, is growing very fast,
> >  and is used in the production environments of multiple companies.
> > 
> >  In the submarine development group, there are 5 PMC members and 7
> >  committers from the Hadoop, Spark, and Zeppelin projects.
> >  They are very familiar with the development processes and
> >  conventions of the Apache community, and can manage the project's
> >  development progress and quality well.
> >  So I recommend that Submarine become a TLP directly.
> > 
> >  We will continue to contribute to the submarine project. :-)
> > 
> >  Xun Liu
> >  Regards
> > 
> > > On Tue, 3 Sep 2019 at 12:01, Devaraj K  wrote:
> > >
> > > +1
> > >
> > > Thanks Wangda for the proposal.
> > > I would like to participate in this project. Please add me to the
> > > project as well.
> > >
> > > Regards
> > > Devaraj K
> > >
> > > On Mon, Sep 2, 2019 at 8:50 PM zac yuan 
> > >> wrote:
> > >
> > >> +1
> > >>
> > >> Submarine will be a complete solution for AI service development.
> > >> It can take advantage of the two best cluster systems, YARN and
> > >> k8s, which will help more and more people gain AI capability.
> > >> Becoming a separate Apache project will clearly accelerate
> > >> development.
> > >>
> > >> Look forward to a big success in submarine project~
> > >>
> > >> 朱林浩  于2019年9月3日周二 上午10:38写道:
> > >>
> > >>> +1,
> > >>> Hopefully it will become a top-level project.
> > >>>
> > >>> I also hope to make more contributions to this project.
> > >>>
> > >>> At 2019-09-03 09:26:53, "Naganarasimha Garla" <
> > >> naganarasimha...@apache.org>
> > >>> wrote:
> >  + 1,
> > 

[jira] [Created] (HADOOP-16546) make sure staging committers collect DTs for the staging FS

2019-09-04 Thread Steve Loughran (Jira)
Steve Loughran created HADOOP-16546:
---

 Summary: make sure staging committers collect DTs for the staging 
FS
 Key: HADOOP-16546
 URL: https://issues.apache.org/jira/browse/HADOOP-16546
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Affects Versions: 3.2.0
Reporter: Steve Loughran


This is not a problem I've seen in the wild, but I've now encountered a problem 
with Hive doing something like this.

We need to (somehow) make sure that the staging committers collect DTs for the 
staging dir FS. If this is the default FS or the same as a source or dest FS, 
this is handled elsewhere, but otherwise we need to add the staging FS.

I don't see an easy way to do this, but we could add a new method to 
PathOutputCommitter to collect DTs; FileOutputFormat can invoke this alongside 
its ongoing collection of tokens for the output FS. The base implementation 
would be a no-op, obviously.
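A pseudocode-style sketch of that idea follows; the method name and signature are invented for illustration and are not an agreed Hadoop API:

```java
// Sketch only: a hypothetical hook added to PathOutputCommitter. The
// method name and signature here are assumptions for illustration.
public void addDelegationTokens(Credentials credentials, Configuration conf)
    throws IOException {
  // Base implementation: no-op.
  // The staging committers would override this to resolve the staging
  // directory's FileSystem and call fs.addDelegationTokens(...), so that
  // FileOutputFormat can invoke it alongside its existing token
  // collection for the output FS.
}
```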




-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Re: [DISCUSS] ARM/aarch64 support for Hadoop

2019-09-04 Thread Zhenyu Zheng
BTW, I also noticed that the Hadoop-trunk-Commit job has been failing for
over 2 months due to the Protobuf problem.
According to the latest successful build log:
https://builds.apache.org/job/Hadoop-trunk-Commit/lastSuccessfulBuild/consoleFull
the OS version was Ubuntu 14.04, while for the jobs that are failing now, such as:
https://builds.apache.org/job/Hadoop-trunk-Commit/17222/console,
the OS version is 18.04. I'm not very familiar with how the OS version
for the jobs changed, but I did a little searching. According to:
https://packages.ubuntu.com/search?keywords=protobuf-compiler=names
&
https://packages.ubuntu.com/search?suite=default=all=any=libprotoc-dev=names
the version of libprotoc-dev and protobuf-compiler
available for Ubuntu 18.04 is 3.0.0.
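As a generic sanity check (a sketch only; the authoritative check is the protoc version pinned in the Hadoop build itself), one can compare version strings such as the one reported by the `protoc` on the PATH against the pinned version with plain shell:

```shell
# Compare two dotted version strings with sort -V; succeeds when $1 >= $2.
version_ge() {
  [ "$(printf '%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

pinned="2.5.0"    # version the build expects (assumed for this sketch)
shipped="3.0.0"   # e.g. captured from: protoc --version | awk '{print $2}'

if [ "$shipped" != "$pinned" ]; then
  echo "protoc $shipped does not match pinned $pinned"
fi
```

Running this with the values above prints a mismatch warning, matching the symptom of an 18.04 worker building a branch that expects an older protoc.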


On Wed, Sep 4, 2019 at 4:39 PM Ayush Saxena  wrote:

> Thanx Vinay for the initiative, Makes sense to add support for different
> architectures.
>
> +1, for the branch idea.
> Good Luck!!!
>
> -Ayush
>
> > On 03-Sep-2019, at 6:19 AM, 张铎(Duo Zhang)  wrote:
> >
> > For HBase, we purged all the protobuf related things from the public API,
> > and then upgraded to a shaded and relocated version of protobuf. We have
> > created a repo for this:
> >
> > https://github.com/apache/hbase-thirdparty
> >
> > But since the hadoop dependencies still pull in the protobuf 2.5 jars,
> our
> > coprocessors are still on protobuf 2.5. Recently we have opened a discuss
> > on how to deal with the upgrading of coprocessor. Glad to see that the
> > hadoop community is also willing to solve the problem.
> >
> > Anu Engineer  于2019年9月3日周二 上午1:23写道:
> >
> >> +1, for the branch idea. Just FYI, Your biggest problem is proving that
> >> Hadoop and the downstream projects work correctly after you upgrade core
> >> components like Protobuf.
> >> So while branching and working on a branch is easy, merging back after
> you
> >> upgrade some of these core components is insanely hard. You might want
> >> to make sure that the community buys into upgrading these components
> >> in trunk.
> >> That way we will get testing and downstream components will notice when
> >> things break.
> >>
> >> That said, I have lobbied for the upgrade of Protobuf for a really long
> >> time; I have argued that 2.5 is out of support and we cannot stay on
> >> that version forever, or else we need to take ownership of the
> >> Protobuf 2.5 code base.
> >> It has been rightly pointed out to me that while all the arguments I
> >> make are correct, it is a very complicated task to upgrade Protobuf,
> >> and the worst part is we will not even know what breaks until
> >> downstream projects pick up these changes and work against us.
> >>
> >> If we work off the Hadoop version 3 — and assume that we have "shading"
> in
> >> place for all deployments; it might be possible to get there; still a
> >> daunting task.
> >>
> >> So best of luck with the branch approach — But please remember, Merging
> >> back will be hard, Just my 2 cents.
> >>
> >> — Anu
> >>
> >>
> >>
> >>
> >> On Sun, Sep 1, 2019 at 7:40 PM Zhenyu Zheng 
> >> wrote:
> >>
> >>> Hi,
> >>>
> >>> Thanks Vinay for bringing this up, and thanks Sheng for the idea. A
> >>> separate branch with its own ARM CI seems a really good idea.
> >>> By doing this we won't break any of the undergoing development in trunk
> >> and
> >>> a CI can be a very good way to show what are the
> >>> current problems and what have been fixed, it will also provide a very
> >> good
> >>> view for contributors that are interested in working on
> >>> this. We can finally merge the branch back to trunk once the community
> >>> thinks it is good enough and stable enough. We can donate
> >>> ARM machines to the existing CI system for the job.
> >>>
> >>> I wonder if this approach is possible?
> >>>
> >>> BR,
> >>>
>  On Thu, Aug 29, 2019 at 11:29 AM Sheng Liu 
> >>> wrote:
> >>>
>  Hi,
> 
>  Thanks Vinay for bringing this up. I am a member of the "Openlab" community
>  mentioned by Vinay. I am working on building and
>  testing Hadoop components on aarch64 servers these days. Besides the
> >>> missing
>  dependencies of ARM platform issues #1 #2 #3
>  mentioned by Vinay, other similar issues have also been found, such as the
>  "PhantomJS" dependent package, which is also missing for aarch64.
> 
>  To promote the ARM support for Hadoop, we have discussed and hoped to
> >> add
>  an ARM-specific CI to the Hadoop repo. We are not
>  sure if there is any potential effect or conflict on the trunk
>  branch, so maybe creating an ARM-specific branch for doing this work
>  is a better choice. What do you think?
> 
>  Hope to hear thoughts from you :)
> 
>  BR,
>  Liu sheng
> 
>  Vinayakumar B  于2019年8月27日周二 上午5:34写道:
> 
> > Hi Folks,
> >
> > ARM has lately become prominent for its processing capability and has
> >> got
>  the
> > potential to run Big Data workloads.
> > Many users have been moving to ARM 

[jira] [Resolved] (HADOOP-16545) Update the release year to 2019

2019-09-04 Thread Zhankun Tang (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16545?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhankun Tang resolved HADOOP-16545.
---
Resolution: Duplicate

Closing this as a duplicate of HADOOP-16025.

> Update the release year to 2019
> ---
>
> Key: HADOOP-16545
> URL: https://issues.apache.org/jira/browse/HADOOP-16545
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Zhankun Tang
>Priority: Critical
>
> While doing the release, we noticed the release year needs to be updated from 2018 to 2019.
> {code:java}
> $ find . -name "pom.xml" | xargs grep -n 2018
> ./hadoop-project/pom.xml:34:2018
> {code}



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-16545) Update the release year to 2019

2019-09-04 Thread Zhankun Tang (Jira)
Zhankun Tang created HADOOP-16545:
-

 Summary: Update the release year to 2019
 Key: HADOOP-16545
 URL: https://issues.apache.org/jira/browse/HADOOP-16545
 Project: Hadoop Common
  Issue Type: Task
Reporter: Zhankun Tang


While doing the release, we noticed the release year needs to be updated from 2018 to 2019.

{code:java}
$ find . -name "pom.xml" | xargs grep -n 2018
./hadoop-project/pom.xml:34:2018
{code}
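For illustration, the fix this issue asks for can be rehearsed as a sed edit. This is a sketch against a scratch copy: the real property at hadoop-project/pom.xml:34 is elided by the archive, so the placeholder element name below is an assumption.

```shell
# Sketch only: rehearse the year bump on a scratch copy of the pom.
# The <release-year> element is a placeholder, not the real pom.xml entry.
mkdir -p /tmp/hadoop-project
printf '<release-year>2018</release-year>\n' > /tmp/hadoop-project/pom.xml
# -i.bak works with both GNU and BSD sed; review the .bak diff before committing.
sed -i.bak 's/2018/2019/g' /tmp/hadoop-project/pom.xml
grep -n '2019' /tmp/hadoop-project/pom.xml
# prints: 1:<release-year>2019</release-year>
```

Running the same sed against the real `hadoop-project/pom.xml` would need a targeted pattern, since a blanket `2018` substitution could touch unrelated values.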







Re: [DISCUSS] ARM/aarch64 support for Hadoop

2019-09-04 Thread Ayush Saxena
Thanx Vinay for the initiative. Makes sense to add support for different
architectures.

+1, for the branch idea.
Good Luck!!!

-Ayush

> On 03-Sep-2019, at 6:19 AM, 张铎(Duo Zhang)  wrote:
> 
> For HBase, we purged all the protobuf related things from the public API,
> and then upgraded to a shaded and relocated version of protobuf. We have
> created a repo for this:
> 
> https://github.com/apache/hbase-thirdparty
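The "shaded and relocated" approach described here can be sketched as a maven-shade-plugin relocation rule. This fragment is illustrative only; the shaded package prefix follows hbase-thirdparty's convention but is an assumption, not copied from that repo.

```xml
<!-- Illustrative fragment: relocate protobuf classes under a project-private
     package so downstream code never collides with the com.google.protobuf
     pulled in from Hadoop's classpath. The prefix below is an assumption. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <configuration>
    <relocations>
      <relocation>
        <pattern>com.google.protobuf</pattern>
        <shadedPattern>org.apache.hbase.thirdparty.com.google.protobuf</shadedPattern>
      </relocation>
    </relocations>
  </configuration>
</plugin>
```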
> 
> But since the Hadoop dependencies still pull in the protobuf 2.5 jars, our
> coprocessors are still on protobuf 2.5. Recently we have opened a discussion
> on how to deal with the upgrading of coprocessors. Glad to see that the
> Hadoop community is also willing to solve the problem.
> 
> Anu Engineer  于2019年9月3日周二 上午1:23写道:
> 
>> +1, for the branch idea. Just FYI, your biggest problem is proving that
>> Hadoop and the downstream projects work correctly after you upgrade core
>> components like Protobuf.
>> So while branching and working on a branch is easy, merging back after you
>> upgrade some of these core components is insanely hard. You might want to
>> make sure the community buys into upgrading these components in the trunk.
>> That way we will get testing and downstream components will notice when
>> things break.
>> 
>> That said, I have lobbied for the upgrade of Protobuf for a really long
>> time; I have argued that 2.5 is out of support and we cannot stay on that
>> branch forever; or we need to take ownership of the Protobuf 2.5 code base.
>> It has been rightly pointed out to me that while all the arguments I make are
>> correct; it is a very complicated task to upgrade Protobuf, and the worst
>> part is we will not even know what breaks until downstream projects pick up
>> these changes and work against us.
>> 
>> If we work off Hadoop version 3 — and assume that we have "shading" in
>> place for all deployments; it might be possible to get there; still a
>> daunting task.
>> 
>> So best of luck with the branch approach — but please remember, merging
>> back will be hard. Just my 2 cents.
>> 
>> — Anu
>> 
>> 
>> 
>> 
>> On Sun, Sep 1, 2019 at 7:40 PM Zhenyu Zheng 
>> wrote:
>> 
>>> Hi,
>>> 
>>> Thanks Vinay for bringing this up and thanks Sheng for the idea. A separate
>>> branch with it's own ARM CI seems a really good idea.
>>> By doing this we won't break any of the ongoing development in trunk
>> and
>>> a CI can be a very good way to show what the
>>> current problems are and what has been fixed. It will also provide a very
>> good
>>> view for contributors that are interested in working on
>>> this. We can finally merge the branch back to trunk once the community
>>> thinks it is good enough and stable enough. We can donate
>>> ARM machines to the existing CI system for the job.
>>> 
>>> I wonder if this approach is possible?
>>> 
>>> BR,
>>> 
 On Thu, Aug 29, 2019 at 11:29 AM Sheng Liu 
>>> wrote:
>>> 
 Hi,
 
 Thanks Vinay for bringing this up. I am a member of the "Openlab" community
 mentioned by Vinay. I am working on building and
 testing Hadoop components on aarch64 servers these days. Besides the
>>> missing
 dependencies of ARM platform issues #1 #2 #3
 mentioned by Vinay, other similar issues have also been found, such as the
 "PhantomJS" dependent package, which is also missing for aarch64.
 
 To promote the ARM support for Hadoop, we have discussed and hoped to
>> add
 an ARM-specific CI to the Hadoop repo. We are not
 sure if there is any potential effect or conflict on the trunk
 branch, so maybe creating an ARM-specific branch for doing this work
 is a better choice. What do you think?
 
 Hope to hear thoughts from you :)
 
 BR,
 Liu sheng
 
 Vinayakumar B  于2019年8月27日周二 上午5:34写道:
 
> Hi Folks,
> 
> ARM has lately become prominent for its processing capability and has
>> got
 the
> potential to run Big Data workloads.
> Many users have been moving to ARM machines due to their low cost.
> 
> In the past there were attempts to compile Hadoop on ARM (Raspberry
>> Pi)
 for
> experimental purposes. Today the ARM architecture is taking on some of the
> server-side processing as well. So there is a real need for
>>> Hadoop
 to
> support the ARM architecture as well.
> 
> There are a bunch of users who are trying out building Hadoop on ARM,
 trying
> to add an ARM CI to Hadoop and facing issues [1]. Also some
> 
> As of today, Hadoop does not compile on ARM due to the below issues,
>> found
 from
> testing done in openlab in [2].
> 
> 1. Protobuf :
> ---
> The Hadoop project (and some downstream projects) is stuck on protobuf
 2.5.0
> due to backward-compatibility reasons. Protobuf 2.5.0 is not
 being
> maintained in the community. While protobuf 3.x is being actively
>>> adopted
> widely, it still provides wire compatibility for proto2
 messages.
> Due to some 
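The missing-dependency issues reported in this thread (protoc, PhantomJS) are per-architecture at heart: an ARM-aware CI job has to select artifacts that match the host CPU. A minimal sketch of that dispatch, with directory names invented for illustration (they are not real Hadoop build locations):

```shell
# Pick a tool directory based on the host architecture, as an ARM-aware
# CI job would. The /opt paths are assumptions for illustration only.
ARCH="$(uname -m)"
case "$ARCH" in
  x86_64)  PROTOC_DIR=/opt/protoc-x86_64 ;;
  aarch64) PROTOC_DIR=/opt/protoc-aarch64 ;;
  *)       PROTOC_DIR="" ; echo "no prebuilt protoc for $ARCH" >&2 ;;
esac
echo "selected: ${PROTOC_DIR:-none}"
```

A real CI job would add a fallback that builds the tool from source when no prebuilt binary exists for the architecture, which is exactly the gap the thread describes for aarch64.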

Hadoop 3.1.3 is ready to cut branch Re: [Release Plan] Hadoop-3.1.3 discussion

2019-09-04 Thread zhankun tang
Hi all,

Thanks to everyone who helped resolve all the blockers targeting Hadoop
3.1.3 [1]. We've cleared all the blockers and moved the non-blocker issues
out to 3.1.4.

I'll cut the branch today and call a release vote soon. Thanks!


[1]. https://s.apache.org/5hj5i

BR,
Zhankun


On Wed, 21 Aug 2019 at 12:38, Zhankun Tang  wrote:

> Hi folks,
>
> We have Apache Hadoop 3.1.2 released on Feb 2019.
>
> More than 6 months have passed, and there are
>
> 246 fixes [1], plus 2 blocker and 4 critical issues [2]
>
> (As Wei-Chiu Chuang mentioned, HDFS-13596 will be another blocker)
>
>
> I propose my plan to do a maintenance release of 3.1.3 in the next few
> (one or two) weeks.
>
> Hadoop 3.1.3 release plan:
>
> Code Freezing Date: *25th August 2019 PDT*
>
> Release Date: *31st August 2019 PDT*
>
>
> Please feel free to share your insights on this. Thanks!
>
>
> [1] https://s.apache.org/zw8l5
>
> [2] https://s.apache.org/fjol5
>
>
> BR,
>
> Zhankun
>


Re: [VOTE] Moving Submarine to a separate Apache project proposal

2019-09-04 Thread Bibin Chundatt
+1
Thank you for the proposal.
I am interested in the project. Please include me as well.

Thanks,
Bibin

On Tue, Sep 3, 2019 at 6:41 PM Ayush Saxena  wrote:

> +1
> Thanx for the proposal.
>
> I would even like to participate in the project.
> Please add me as well.
>
> -Ayush
>
>
> > On 03-Sep-2019, at 6:00 PM, Vinayakumar B 
> wrote:
> >
> > +1
> >
> > Thanks for the proposal.
> > It's a very interesting project and looks very promising, judging by
> the
> > participation from various companies and the speed of development.
> >
> > I would also like to participate in the project.
> > Please add me as well.
> >
> > Thanks,
> > -Vinay
> >
> > On Tue, 3 Sep 2019, 12:38 pm Rakesh Radhakrishnan, 
> > wrote:
> >
> >> +1, Thanks for the proposal.
> >>
> >> I am interested to participate in this project. Please include me as
> well
> >> in the project.
> >>
> >> Thanks,
> >> Rakesh
> >>
> >> On Tue, Sep 3, 2019 at 11:59 AM zhankun tang 
> >> wrote:
> >>
> >>> +1
> >>>
> >>> Thanks for Wangda's proposal.
> >>>
> >>> The submarine project was born within Hadoop, but is not limited to Hadoop.
> >> It
> >>> began with a trainer on YARN, but we quickly realized that only a
> trainer
> >> is
> >>> not enough to meet AI platform requirements. But currently no
> >>> user-friendly open-source solution covers the whole AI pipeline, like
> data
> >>> engineering, training, and serving. And the underlying data
> >> infrastructure
> >>> itself is also evolving; for instance, many people love k8s. Not to
> >> mention
> >>> that there are many AI domain problems in this area to be solved.
> >>> It's almost certain that building such an ML platform would utilize
> >>> various other open-source components, taking ML into consideration
> >>> from the start.
> >>>
> >>> I see submarine growing rapidly towards an enterprise-grade ML platform
> >> which
> >>> could potentially enable AI capabilities for data engineers and scientists.
> This
> >>> is an exciting thing for both the community and the industry.
> >>>
> >>> BR,
> >>> Zhankun
> >>>
> >>>
>  On Tue, 3 Sep 2019 at 13:34, Xun Liu  wrote:
> 
>  +1
> 
 Hello everyone, I am a member of the submarine development team.
 I have been contributing to submarine for more than a year,
 and I have seen submarine development progress very quickly.
 In more than a year, 9 long-term developers from different
 companies have been contributing;
 submarine has accumulated more than 200,000 lines of code, is
>> growing
 very fast,
 and is used in the production environments of multiple companies.
> 
 In the submarine development group, there are 5 PMC and 7 committer
>>> members
 from the Hadoop, Spark, and Zeppelin projects.
 They are very familiar with the development process and conventions
>> of
 the Apache community,
 and can manage the project's development progress and
>> quality well.
 So I recommend that submarine become a TLP directly.
> 
>  We will continue to contribute to the submarine project. :-)
> 
>  Xun Liu
>  Regards
> 
> > On Tue, 3 Sep 2019 at 12:01, Devaraj K  wrote:
> >
> > +1
> >
> > Thanks Wangda for the proposal.
> > I would like to participate in this project. Please also add me to
> >> the
> > project.
> >
> > Regards
> > Devaraj K
> >
> > On Mon, Sep 2, 2019 at 8:50 PM zac yuan 
> >> wrote:
> >
> >> +1
> >>
> >> Submarine will be a complete solution for AI service development.
> >> It
>  can
> >> take advantage of the two best cluster systems, YARN and k8s, which
> >> will
>  help
> >> more and more people gain AI capabilities. Becoming a separate Apache
> >> project
> > will
> >> clearly accelerate development.
> >>
> >> Look forward to a big success in submarine project~
> >>
> >> 朱林浩  于2019年9月3日周二 上午10:38写道:
> >>
> >>> +1,
> >>> Hopefully, it will become a top-level project.
> >>>
> >>> I also hope to make more contributions to this project.
> >>>
> >>>
> >>>
> >>>
> >>>
> >>>
> >>>
> >>>
> >>> At 2019-09-03 09:26:53, "Naganarasimha Garla" <
> >> naganarasimha...@apache.org>
> >>> wrote:
>  + 1,
>    I would also like to start participating in this project; hope to
> >>> get
> >>> myself
>  added to the project.
> 
>  Thanks and Regards,
>  + Naga
> 
>  On Tue, Sep 3, 2019 at 8:35 AM Wangda Tan 
> > wrote:
> 
> > Hi Sree,
> >
> > I put it to the proposal, please let me know what you think:
> >
> > The traditional path at Apache would have been to create an
> > incubator
> >> project, but the code is already being released by Apache
> >> and
>  most
> >> of
> >>> the
> >> developers are familiar with Apache rules 

Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2019-09-04 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1248/

[Sep 2, 2019 4:45:59 AM] (abmodi) YARN-7982. Do ACLs check while retrieving 
entity-types per application.
[Sep 2, 2019 5:15:59 AM] (abmodi) YARN-8174. Add containerId to 
ResourceLocalizationService fetch failure
[Sep 2, 2019 5:28:23 AM] (abmodi) YARN-9400. Remove unnecessary if at
[Sep 2, 2019 7:31:52 AM] (ayushsaxena) HDFS-14654. RBF: 
TestRouterRpc#testNamenodeMetrics is flaky. Contributed
[Sep 2, 2019 4:43:44 PM] (weichiu) Revert "HDFS-14706. Checksums are not 
checked if block meta file is less
[Sep 2, 2019 4:47:04 PM] (weichiu) HDFS-14706. Checksums are not checked if 
block meta file is less than 7
[Sep 3, 2019 6:23:34 AM] (bibinchundatt) YARN-9797. 
LeafQueue#activateApplications should use
[Sep 3, 2019 6:55:15 AM] (ztang) YARN-9785. Fix DominantResourceCalculator when 
one resource is zero.
[Sep 3, 2019 7:07:09 AM] (surendralilhore) HDFS-14630. 
Configuration.getTimeDurationHelper() should not log time
[Sep 3, 2019 9:48:50 AM] (31469764+bshashikant) HDDS-1783 : Latency metric for 
applyTransaction in ContainerStateMachine
[Sep 3, 2019 11:20:57 AM] (nanda) HDDS-1810. SCM command to Activate and 
Deactivate pipelines. (#1224)
[Sep 3, 2019 12:10:38 PM] (github) HADOOP-16534. Exclude submarine from hadoop 
source build. (#1356)
[Sep 3, 2019 12:28:48 PM] (nanda) HDDS-2069. Default values of properties
[Sep 3, 2019 12:38:42 PM] (ayushsaxena) HDFS-14807. SetTimes updates all 
negative values apart from -1.
[Sep 3, 2019 4:29:58 PM] (xyao) HDFS-14633. The StorageType quota and consume 
in QuotaFeature is not



-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org