Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86_64

2020-07-10 Thread Apache Jenkins Server
For more details, see 
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/200/

No changes


[Error replacing 'FILE' - Workspace is not accessible]


[jira] [Created] (HADOOP-17122) Bug in preserving Directory Attributes in DistCp with Atomic Copy

2020-07-10 Thread Swaminathan Balachandran (Jira)
Swaminathan Balachandran created HADOOP-17122:
-

 Summary: Bug in preserving Directory Attributes in DistCp with 
Atomic Copy
 Key: HADOOP-17122
 URL: https://issues.apache.org/jira/browse/HADOOP-17122
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Swaminathan Balachandran


Description:

In the case of an atomic copy, the copied data is committed first, and only after
that does the preserve-directory-attributes step run. Preserving directory
attributes is done over the work path rather than the final path. I have fixed
the base directory to point to the final path.
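
As a minimal, hypothetical sketch of the ordering described above (this is not the
actual DistCp patch; the class name and paths are made up for illustration), the idea
is that the atomic commit renames the work directory to the final directory first, and
the directory attributes are then preserved against the final path rather than the
no-longer-existing work path:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Illustrative sketch only -- not the actual DistCp committer code.
public class AtomicCopyAttributeSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);

    Path workDir = new Path("/tmp/.distcp.work");   // hypothetical atomic work path
    Path finalDir = new Path("/data/target");       // hypothetical final path
    Path sourceDir = new Path("/data/source");      // hypothetical source directory

    // 1. Atomic commit: the work directory is renamed to its final location.
    if (!fs.rename(workDir, finalDir)) {
      throw new java.io.IOException("commit rename failed");
    }

    // 2. Preserve directory attributes *after* the commit, resolving them
    //    against the final path (the reported bug resolved them against the
    //    work path, which no longer exists at this point).
    FileStatus src = fs.getFileStatus(sourceDir);
    fs.setPermission(finalDir, src.getPermission());
    fs.setOwner(finalDir, src.getOwner(), src.getGroup());
    fs.setTimes(finalDir, src.getModificationTime(), -1);
  }
}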







Re: A more inclusive elephant...

2020-07-10 Thread Ahmed Hussein
+1, this is great folks.

In addition to that initiative, do you think there is a chance to
launch a "*Hadoop Mentorship Program for Minority Students*"?

*The program will work as follows:*

   - Define a programme committee to administer and mentor candidates.
   - The committee defines a timeline for applications and projects. Let's
   say roughly 3 months (similar to an internship).
   - Define a list of ideas/projects that can be picked by the candidates
   - Candidates can propose their idea as well. This can be a good way to
   inject new blood and research ideas into Hadoop.
   - Pick the top applications and assign them to mentors.
   - If sponsors can allocate money, then candidates with good evaluations
   can get some sort of prize. If no money is allocated, then we can discuss
   any other kind of motivation.

I remember there were student mentorship programmes in open-source projects
like "JikesRVM", and several proposals were actually merged and/or
transformed into publications.
There are still many gaps to fill, such as how to define the target and the
audience of the programme.

Let me know WDYT guys.

On Fri, Jul 10, 2020 at 1:45 PM Wei-Chiu Chuang  wrote:

> Thanks Carlo and Eric for the initiative.
>
> I am all for it and I'll do my part to mind the code. This is a small yet
> meaningful step we can take. Meanwhile, I'd like to take this opportunity
> to open up conversation around the Diversity & Inclusion within the
> community.
>
> If you read this quarter's Hadoop board report, I am starting to collect
> metrics about the composition of our community in order to understand if we
> are building a diverse & inclusive community. Things that are obvious to me
> that I thought I should report are the following: affiliation among
> committers, and demographics of committers. As of last quarter, 4 out of 7
> newly minted committers are affiliated with Cloudera. 4 out of the 7 said
> committers are located in Asia. Those facts suggest we have good
> international participation (I am being US-centric), which is good.
> However, having half of the active committers affiliated with one company
> is a potential problem.
>
> I'd like to hear your thoughts on this. What other metrics should we
> collect, and what actions can we take.
>
>
>
> On Fri, Jul 10, 2020 at 11:29 AM Carlo Aldo Curino  >
> wrote:
>
> > Eric,
> >
> > Thank you so much for the support and for stepping up offering to work on
> > this. I am super +1 on this. Let's give folks a few more days to chime
> in,
> > in case there is anything to discuss before we get cracking!
> >
> > (Really) Thanks,
> > Carlo
> >
> > On Fri, Jul 10, 2020, 10:38 AM Eric Badger 
> > wrote:
> >
> > > Thanks for writing this up, Carlo. I'm +1 (idk if I'm technically
> binding
> > > on this or not) for the changes moving forward and I think we refactor
> > away
> > > any instances that are internal to the code (i.e. not APIs or other
> > things
> > > that would break compatibility) in all active branches and then also
> > change
> > > the APIs in trunk (an incompatible change).
> > >
> > > I just came across an internal issue related to the NM
> > > whitelist/blacklist. I would be happy to go refactor the code and look
> > for
> > > instances of these and replace them with allowlist/blocklist. Doing a
> > quick
> > > "git grep" of trunk, I see 270 instances of "whitelist" and 1318
> > instances
> > > of "blacklist".
> > >
> > > If there are no objections, I'll create a JIRA to clean this specific
> > > stuff up. It would be wonderful if others could pick up a different
> > portion
> > > (e.g. master/slave) so that we can spread the work out.
> > >
> > > Eric
> > >
> > > On Tue, Jul 7, 2020 at 6:27 PM Carlo Aldo Curino <
> carlo.cur...@gmail.com
> > >
> > > wrote:
> > >
> > >> Hello Folks,
> > >>
> > >> I hope you are all doing well...
> > >>
> > >> *The problem*
> > >> The recent protests made me realize that we are not just bystanders
> of
> > >> the systematic racism that affects our society, but we are active
> > >> participants of it. Being "non-racist" is not enough, I strongly feel
> we
> > >> should be actively "anti-racist" in our day to day lives, and
> > continuously
> > >> check our biases. I assume most of you will agree with the general
> > >> sentiment, but based on your exposure to the recent events and US
> > >> culture/history you might have more or less strong feelings about your
> role
> > in
> > >> the problem and potential solution.
> > >>
> > >> *What can we do about it?* I think a simple action we can take is to
> > work
> > >> on our code/comments/documentation/websites and remove racist
> > terminology.
> > >> Here is an IETF draft to fix up some of the most egregious examples
> > >> (master/slave, whitelist/blacklist) with proposed alternatives.
> > >>
> > >>
> >
> https://tools.ietf.org/id/draft-knodel-terminology-00.html#rfc.section.1.1.1
> > >> Also as we go about this effort, we should also consider other

Re: A more inclusive elephant...

2020-07-10 Thread Wei-Chiu Chuang
Thanks Carlo and Eric for the initiative.

I am all for it and I'll do my part to mind the code. This is a small yet
meaningful step we can take. Meanwhile, I'd like to take this opportunity
to open up conversation around the Diversity & Inclusion within the
community.

If you read this quarter's Hadoop board report, I am starting to collect
metrics about the composition of our community in order to understand if we
are building a diverse & inclusive community. Things that are obvious to me
that I thought I should report are the following: affiliation among
committers, and demographics of committers. As of last quarter, 4 out of 7
newly minted committers are affiliated with Cloudera. 4 out of the 7 said
committers are located in Asia. Those facts suggest we have good
international participation (I am being US-centric), which is good.
However, having half of the active committers affiliated with one company
is a potential problem.

I'd like to hear your thoughts on this. What other metrics should we
collect, and what actions can we take.



On Fri, Jul 10, 2020 at 11:29 AM Carlo Aldo Curino 
wrote:

> Eric,
>
> Thank you so much for the support and for stepping up offering to work on
> this. I am super +1 on this. Let's give folks a few more days to chime in,
> in case there is anything to discuss before we get cracking!
>
> (Really) Thanks,
> Carlo
>
> On Fri, Jul 10, 2020, 10:38 AM Eric Badger 
> wrote:
>
> > Thanks for writing this up, Carlo. I'm +1 (idk if I'm technically binding
> > on this or not) for the changes moving forward and I think we refactor
> away
> > any instances that are internal to the code (i.e. not APIs or other
> things
> > that would break compatibility) in all active branches and then also
> change
> > the APIs in trunk (an incompatible change).
> >
> > I just came across an internal issue related to the NM
> > whitelist/blacklist. I would be happy to go refactor the code and look
> for
> > instances of these and replace them with allowlist/blocklist. Doing a
> quick
> > "git grep" of trunk, I see 270 instances of "whitelist" and 1318
> instances
> > of "blacklist".
> >
> > If there are no objections, I'll create a JIRA to clean this specific
> > stuff up. It would be wonderful if others could pick up a different
> portion
> > (e.g. master/slave) so that we can spread the work out.
> >
> > Eric
> >
> > On Tue, Jul 7, 2020 at 6:27 PM Carlo Aldo Curino  >
> > wrote:
> >
> >> Hello Folks,
> >>
> >> I hope you are all doing well...
> >>
> >> *The problem*
> >> The recent protests made me realize that we are not just bystanders of
> >> the systematic racism that affects our society, but we are active
> >> participants of it. Being "non-racist" is not enough, I strongly feel we
> >> should be actively "anti-racist" in our day to day lives, and
> continuously
> >> check our biases. I assume most of you will agree with the general
> >> sentiment, but based on your exposure to the recent events and US
> >> culture/history you might have more or less strong feelings about your role
> in
> >> the problem and potential solution.
> >>
> >> *What can we do about it?* I think a simple action we can take is to
> work
> >> on our code/comments/documentation/websites and remove racist
> terminology.
> >> Here is an IETF draft to fix up some of the most egregious examples
> >> (master/slave, whitelist/blacklist) with proposed alternatives.
> >>
> >>
> https://tools.ietf.org/id/draft-knodel-terminology-00.html#rfc.section.1.1.1
> >> Also as we go about this effort, we should also consider other
> >> "non-inclusive" terminology issues around gender (e.g., binary gendered
> >> examples, "Alice" doing the wrong security thing systematically), and
> >> ableism (e.g., referring to misbehaving hardware as "lame" or "limping",
> >> etc.).
> >> The easiest action item is to avoid this going forward (ideally adding
> it
> >> to the checkstyles if possible), a more costly one is to start going
> back
> >> and refactor away existing instances.
> >>
> >> I know this requires a bunch of work as refactorings might break dev
> >> branches and non-committed patches, possibly scripts, etc. but I think
> >> this
> >> is something important and relatively simple we can do. The effect goes
> >> well beyond some text in github, it signals what we believe in, and
> forces
> >> hundreds of users and contributors to notice and think about it. Our
> >> force-multiplier is huge and it matches our responsibility.
> >>
> >> What do you folks think?
> >>
> >> Thanks,
> >> Carlo
> >>
> >
>


Re: [VOTE] Release Apache Hadoop 3.3.0 - RC0

2020-07-10 Thread Iñigo Goiri
+1 (Binding)

Deployed a cluster on Azure VMs with:
* 3 VMs with HDFS Namenodes and Routers
* 2 VMs with YARN Resource Managers
* 5 VMs with HDFS Datanodes and Node Managers

Tests:
* Executed TeraGen+TeraSort+TeraValidate.
* Executed wordcount.
* Browsed through the Web UI.



On Fri, Jul 10, 2020 at 1:06 AM Vinayakumar B 
wrote:

> +1 (Binding)
>
> -Verified all checksums and Signatures.
> -Verified site, Release notes and Change logs
>   + Maybe the changelog and release notes could be grouped based on the
> project at the second level for a better look (this needs to be supported
> by Yetus)
> -Tested in x86 local 3-node docker cluster.
>   + Built from source with OpenJdk 8 and Ubuntu 18.04
>   + Deployed 3 node docker cluster
>   + Ran various Jobs (wordcount, Terasort, Pi, etc)
>
> No Issues reported.
>
> -Vinay
>
> On Fri, Jul 10, 2020 at 1:19 PM Sheng Liu  wrote:
>
> > +1 (non-binding)
> >
> > - checked out the "3.3.0-aarch64-RC0" binary packages
> >
> > - started a cluster with 3 node VMs running Ubuntu 18.04 ARM/aarch64 and
> > openjdk-11-jdk
> >
> > - checked some web UIs (NN, DN, RM, NM)
> >
> > - Executed a wordcount, TeraGen, TeraSort and TeraValidate
> >
> > - Executed a TestDFSIO job
> >
> > - Executed a Pi job
> >
> > BR,
> > Liusheng
> >
> > Zhenyu Zheng  于2020年7月10日周五 下午3:45写道:
> >
> > > +1 (non-binding)
> > >
> > > - Verified all hashes and checksums
> > > - Tested on ARM platform for the following actions:
> > >   + Built from source on Ubuntu 18.04, OpenJDK 8
> > >   + Deployed a pseudo cluster
> > >   + Ran some example jobs(grep, wordcount, pi)
> > >   + Ran teragen/terasort/teravalidate
> > >   + Ran TestDFSIO job
> > >
> > > BR,
> > >
> > > Zhenyu
> > >
> > > On Fri, Jul 10, 2020 at 2:40 PM Akira Ajisaka 
> > wrote:
> > >
> > > > +1 (binding)
> > > >
> > > > - Verified checksums and signatures.
> > > > - Built from the source with CentOS 7 and OpenJDK 8.
> > > > - Successfully upgraded HDFS to 3.3.0-RC0 in our development cluster
> > > (with
> > > > RBF, security, and OpenJDK 11) for end-users. No issues reported.
> > > > - The document looks good.
> > > > - Deployed pseudo cluster and ran some MapReduce jobs.
> > > >
> > > > Thanks,
> > > > Akira
> > > >
> > > >
> > > > On Tue, Jul 7, 2020 at 7:27 AM Brahma Reddy Battula <
> bra...@apache.org
> > >
> > > > wrote:
> > > >
> > > > > Hi folks,
> > > > >
> > > > > This is the first release candidate for the first release of Apache
> > > > > Hadoop 3.3.0
> > > > > line.
> > > > >
> > > > > It contains *1644[1]* fixed jira issues since 3.2.1 which include a
> > lot
> > > > of
> > > > > features and improvements (read the full set of release notes).
> > > > >
> > > > > Below feature additions are the highlights of the release.
> > > > >
> > > > > - ARM Support
> > > > > - Enhancements and new features on S3a,S3Guard,ABFS
> > > > > - Java 11 Runtime support and TLS 1.3.
> > > > > - Support Tencent Cloud COS File System.
> > > > > - Added security to HDFS Router.
> > > > > - Support non-volatile storage class memory(SCM) in HDFS cache
> > > directives
> > > > > - Support Interactive Docker Shell for running Containers.
> > > > > - Scheduling of opportunistic containers
> > > > > - A pluggable device plugin framework to ease vendor plugin
> > development
> > > > >
> > > > > *The RC0 artifacts are at*:
> > > > > http://home.apache.org/~brahma/Hadoop-3.3.0-RC0/
> > > > >
> > > > > *First release to include ARM binary, Have a check.*
> > > > > *RC tag is *release-3.3.0-RC0.
> > > > >
> > > > >
> > > > > *The maven artifacts are hosted here:*
> > > > >
> > >
> https://repository.apache.org/content/repositories/orgapachehadoop-1271/
> > > > >
> > > > > *My public key is available here:*
> > > > > https://dist.apache.org/repos/dist/release/hadoop/common/KEYS
> > > > >
> > > > > The vote will run for 5 weekdays, until Tuesday, July 13 at 3:50 AM
> > > IST.
> > > > >
> > > > >
> > > > > I have done some testing with my pseudo cluster. My +1 to start.
> > > > >
> > > > >
> > > > >
> > > > > Regards,
> > > > > Brahma Reddy Battula
> > > > >
> > > > >
> > > > > 1. project in (YARN, HADOOP, MAPREDUCE, HDFS) AND fixVersion in
> > (3.3.0)
> > > > AND
> > > > > fixVersion not in (3.2.0, 3.2.1, 3.1.3) AND status = Resolved ORDER
> > BY
> > > > > fixVersion ASC
> > > > >
> > > >
> > >
> >
>


Re: A more inclusive elephant...

2020-07-10 Thread Carlo Aldo Curino
Eric,

Thank you so much for the support and for stepping up offering to work on
this. I am super +1 on this. Let's give folks a few more days to chime in,
in case there is anything to discuss before we get cracking!

(Really) Thanks,
Carlo

On Fri, Jul 10, 2020, 10:38 AM Eric Badger  wrote:

> Thanks for writing this up, Carlo. I'm +1 (idk if I'm technically binding
> on this or not) for the changes moving forward and I think we refactor away
> any instances that are internal to the code (i.e. not APIs or other things
> that would break compatibility) in all active branches and then also change
> the APIs in trunk (an incompatible change).
>
> I just came across an internal issue related to the NM
> whitelist/blacklist. I would be happy to go refactor the code and look for
> instances of these and replace them with allowlist/blocklist. Doing a quick
> "git grep" of trunk, I see 270 instances of "whitelist" and 1318 instances
> of "blacklist".
>
> If there are no objections, I'll create a JIRA to clean this specific
> stuff up. It would be wonderful if others could pick up a different portion
> (e.g. master/slave) so that we can spread the work out.
>
> Eric
>
> On Tue, Jul 7, 2020 at 6:27 PM Carlo Aldo Curino 
> wrote:
>
>> Hello Folks,
>>
>> I hope you are all doing well...
>>
>> *The problem*
> >> The recent protests made me realize that we are not just bystanders of
> >> the systematic racism that affects our society, but we are active
>> participants of it. Being "non-racist" is not enough, I strongly feel we
>> should be actively "anti-racist" in our day to day lives, and continuously
>> check our biases. I assume most of you will agree with the general
>> sentiment, but based on your exposure to the recent events and US
> >> culture/history you might have more or less strong feelings about your role in
>> the problem and potential solution.
>>
>> *What can we do about it?* I think a simple action we can take is to work
>> on our code/comments/documentation/websites and remove racist terminology.
> >> Here is an IETF draft to fix up some of the most egregious examples
> >> (master/slave, whitelist/blacklist) with proposed alternatives.
>>
>> https://tools.ietf.org/id/draft-knodel-terminology-00.html#rfc.section.1.1.1
>> Also as we go about this effort, we should also consider other
>> "non-inclusive" terminology issues around gender (e.g., binary gendered
>> examples, "Alice" doing the wrong security thing systematically), and
>> ableism (e.g., referring to misbehaving hardware as "lame" or "limping",
>> etc.).
>> The easiest action item is to avoid this going forward (ideally adding it
>> to the checkstyles if possible), a more costly one is to start going back
>> and refactor away existing instances.
>>
>> I know this requires a bunch of work as refactorings might break dev
>> branches and non-committed patches, possibly scripts, etc. but I think
>> this
>> is something important and relatively simple we can do. The effect goes
>> well beyond some text in github, it signals what we believe in, and forces
>> hundreds of users and contributors to notice and think about it. Our
>> force-multiplier is huge and it matches our responsibility.
>>
>> What do you folks think?
>>
>> Thanks,
>> Carlo
>>
>


Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86_64

2020-07-10 Thread Apache Jenkins Server
For more details, see 
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/199/

[Jul 9, 2020 4:59:47 AM] (noreply) YARN-10344. Sync netty versions in 
hadoop-yarn-csi. (#2126)
[Jul 9, 2020 7:04:52 AM] (Brahma Reddy Battula) YARN-10341. Yarn Service 
Container Completed event doesn't get processed. Contributed by Bilwa S T.
[Jul 9, 2020 7:20:25 AM] (Sunil G) YARN-10333. YarnClient obtain Delegation 
Token for Log Aggregation Path. Contributed by Prabhu Joseph.
[Jul 9, 2020 6:33:37 PM] (noreply) HADOOP-17079. Optimize UGI#getGroups by 
adding UGI#getGroupsSet. (#2085)
[Jul 9, 2020 7:38:52 PM] (noreply) HDFS-15462. Add 
fs.viewfs.overload.scheme.target.ofs.impl to core-default.xml (#2131)




-1 overall


The following subsystems voted -1:
asflicense findbugs pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml
 

findbugs :

   module:hadoop-yarn-project/hadoop-yarn 
   Uncallable method 
org.apache.hadoop.yarn.server.timelineservice.reader.TestTimelineReaderWebServicesHBaseStorage$1.getInstance()
 defined in anonymous class At 
TestTimelineReaderWebServicesHBaseStorage.java:anonymous class At 
TestTimelineReaderWebServicesHBaseStorage.java:[line 87] 
   Dead store to entities in 
org.apache.hadoop.yarn.server.timelineservice.storage.TestTimelineReaderHBaseDown.checkQuery(HBaseTimelineReaderImpl)
 At 
TestTimelineReaderHBaseDown.java:org.apache.hadoop.yarn.server.timelineservice.storage.TestTimelineReaderHBaseDown.checkQuery(HBaseTimelineReaderImpl)
 At TestTimelineReaderHBaseDown.java:[line 190] 

findbugs :

   module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server 
   Uncallable method 
org.apache.hadoop.yarn.server.timelineservice.reader.TestTimelineReaderWebServicesHBaseStorage$1.getInstance()
 defined in anonymous class At 
TestTimelineReaderWebServicesHBaseStorage.java:anonymous class At 
TestTimelineReaderWebServicesHBaseStorage.java:[line 87] 
   Dead store to entities in 
org.apache.hadoop.yarn.server.timelineservice.storage.TestTimelineReaderHBaseDown.checkQuery(HBaseTimelineReaderImpl)
 At 
TestTimelineReaderHBaseDown.java:org.apache.hadoop.yarn.server.timelineservice.storage.TestTimelineReaderHBaseDown.checkQuery(HBaseTimelineReaderImpl)
 At TestTimelineReaderHBaseDown.java:[line 190] 

findbugs :

   
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests
 
   Uncallable method 
org.apache.hadoop.yarn.server.timelineservice.reader.TestTimelineReaderWebServicesHBaseStorage$1.getInstance()
 defined in anonymous class At 
TestTimelineReaderWebServicesHBaseStorage.java:anonymous class At 
TestTimelineReaderWebServicesHBaseStorage.java:[line 87] 
   Dead store to entities in 
org.apache.hadoop.yarn.server.timelineservice.storage.TestTimelineReaderHBaseDown.checkQuery(HBaseTimelineReaderImpl)
 At 
TestTimelineReaderHBaseDown.java:org.apache.hadoop.yarn.server.timelineservice.storage.TestTimelineReaderHBaseDown.checkQuery(HBaseTimelineReaderImpl)
 At TestTimelineReaderHBaseDown.java:[line 190] 

findbugs :

   module:hadoop-yarn-project 
   Uncallable method 
org.apache.hadoop.yarn.server.timelineservice.reader.TestTimelineReaderWebServicesHBaseStorage$1.getInstance()
 defined in anonymous class At 
TestTimelineReaderWebServicesHBaseStorage.java:anonymous class At 
TestTimelineReaderWebServicesHBaseStorage.java:[line 87] 
   Dead store to entities in 
org.apache.hadoop.yarn.server.timelineservice.storage.TestTimelineReaderHBaseDown.checkQuery(HBaseTimelineReaderImpl)
 At 
TestTimelineReaderHBaseDown.java:org.apache.hadoop.yarn.server.timelineservice.storage.TestTimelineReaderHBaseDown.checkQuery(HBaseTimelineReaderImpl)
 At TestTimelineReaderHBaseDown.java:[line 190] 

findbugs :

   module:hadoop-cloud-storage-project/hadoop-cos 
   

Re: A more inclusive elephant...

2020-07-10 Thread Eric Badger
Thanks for writing this up, Carlo. I'm +1 (idk if I'm technically binding
on this or not) for the changes moving forward and I think we refactor away
any instances that are internal to the code (i.e. not APIs or other things
that would break compatibility) in all active branches and then also change
the APIs in trunk (an incompatible change).
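
To make the compatibility approach above concrete, here is a minimal, hypothetical Java
sketch (not actual Hadoop/YARN code; the class and method names are made up) of renaming
an internal term while keeping the old public method as a deprecated delegate in active
release lines, so that only trunk needs the incompatible removal:

import java.util.Collections;
import java.util.HashSet;
import java.util.Set;

// Hypothetical example -- not a real Hadoop/YARN class.
public class NodeFilter {
  private final Set<String> allowlist = new HashSet<>();

  /** New, preferred API using the updated terminology. */
  public Set<String> getAllowlist() {
    return Collections.unmodifiableSet(allowlist);
  }

  /**
   * Old API kept for compatibility in maintenance branches; it simply
   * delegates to the new name and can be dropped in trunk.
   */
  @Deprecated
  public Set<String> getWhitelist() {
    return getAllowlist();
  }
}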

I just came across an internal issue related to the NM whitelist/blacklist.
I would be happy to go refactor the code and look for instances of these
and replace them with allowlist/blocklist. Doing a quick "git grep" of
trunk, I see 270 instances of "whitelist" and 1318 instances of
"blacklist".

If there are no objections, I'll create a JIRA to clean this specific stuff
up. It would be wonderful if others could pick up a different portion (e.g.
master/slave) so that we can spread the work out.

Eric

On Tue, Jul 7, 2020 at 6:27 PM Carlo Aldo Curino 
wrote:

> Hello Folks,
>
> I hope you are all doing well...
>
> *The problem*
> The recent protests made me realize that we are not just bystanders of
> the systematic racism that affects our society, but we are active
> participants of it. Being "non-racist" is not enough, I strongly feel we
> should be actively "anti-racist" in our day to day lives, and continuously
> check our biases. I assume most of you will agree with the general
> sentiment, but based on your exposure to the recent events and US
> culture/history you might have more or less strong feelings about your role in
> the problem and potential solution.
>
> *What can we do about it?* I think a simple action we can take is to work
> on our code/comments/documentation/websites and remove racist terminology.
> Here is an IETF draft to fix up some of the most egregious examples
> (master/slave, whitelist/blacklist) with proposed alternatives.
>
> https://tools.ietf.org/id/draft-knodel-terminology-00.html#rfc.section.1.1.1
> Also as we go about this effort, we should also consider other
> "non-inclusive" terminology issues around gender (e.g., binary gendered
> examples, "Alice" doing the wrong security thing systematically), and
> ableism (e.g., referring to misbehaving hardware as "lame" or "limping",
> etc.).
> The easiest action item is to avoid this going forward (ideally adding it
> to the checkstyles if possible), a more costly one is to start going back
> and refactor away existing instances.
>
> I know this requires a bunch of work as refactorings might break dev
> branches and non-committed patches, possibly scripts, etc. but I think this
> is something important and relatively simple we can do. The effect goes
> well beyond some text in github, it signals what we believe in, and forces
> hundreds of users and contributors to notice and think about it. Our
> force-multiplier is huge and it matches our responsibility.
>
> What do you folks think?
>
> Thanks,
> Carlo
>


Apache Hadoop qbt Report: branch2.10+JDK7 on Linux/x86

2020-07-10 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/743/

[Jul 9, 2020 3:43:22 AM] (iwasakims) HADOOP-17120. Fix failure of docker image 
creation due to pip2 install




-1 overall


The following subsystems voted -1:
asflicense findbugs hadolint jshint pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/empty-configuration.xml
 
   hadoop-tools/hadoop-azure/src/config/checkstyle-suppressions.xml 
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/public/crossdomain.xml 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/public/crossdomain.xml
 

findbugs :

   module:hadoop-common-project/hadoop-minikdc 
   Possible null pointer dereference in 
org.apache.hadoop.minikdc.MiniKdc.delete(File) due to return value of called 
method Dereferenced at 
MiniKdc.java:org.apache.hadoop.minikdc.MiniKdc.delete(File) due to return value 
of called method Dereferenced at MiniKdc.java:[line 515] 

findbugs :

   module:hadoop-common-project/hadoop-auth 
   
org.apache.hadoop.security.authentication.server.MultiSchemeAuthenticationHandler.authenticate(HttpServletRequest,
 HttpServletResponse) makes inefficient use of keySet iterator instead of 
entrySet iterator At MultiSchemeAuthenticationHandler.java:of keySet iterator 
instead of entrySet iterator At MultiSchemeAuthenticationHandler.java:[line 
192] 

findbugs :

   module:hadoop-common-project/hadoop-common 
   org.apache.hadoop.crypto.CipherSuite.setUnknownValue(int) 
unconditionally sets the field unknownValue At CipherSuite.java:unknownValue At 
CipherSuite.java:[line 44] 
   org.apache.hadoop.crypto.CryptoProtocolVersion.setUnknownValue(int) 
unconditionally sets the field unknownValue At 
CryptoProtocolVersion.java:unknownValue At CryptoProtocolVersion.java:[line 67] 
   Possible null pointer dereference in 
org.apache.hadoop.fs.FileUtil.fullyDeleteOnExit(File) due to return value of 
called method Dereferenced at 
FileUtil.java:org.apache.hadoop.fs.FileUtil.fullyDeleteOnExit(File) due to 
return value of called method Dereferenced at FileUtil.java:[line 118] 
   Possible null pointer dereference in 
org.apache.hadoop.fs.RawLocalFileSystem.handleEmptyDstDirectoryOnWindows(Path, 
File, Path, File) due to return value of called method Dereferenced at 
RawLocalFileSystem.java:org.apache.hadoop.fs.RawLocalFileSystem.handleEmptyDstDirectoryOnWindows(Path,
 File, Path, File) due to return value of called method Dereferenced at 
RawLocalFileSystem.java:[line 383] 
   Useless condition:lazyPersist == true at this point At 
CommandWithDestination.java:[line 502] 
   org.apache.hadoop.io.DoubleWritable.compareTo(DoubleWritable) 
incorrectly handles double value At DoubleWritable.java: At 
DoubleWritable.java:[line 78] 
   org.apache.hadoop.io.DoubleWritable$Comparator.compare(byte[], int, int, 
byte[], int, int) incorrectly handles double value At DoubleWritable.java:int) 
incorrectly handles double value At DoubleWritable.java:[line 97] 
   org.apache.hadoop.io.FloatWritable.compareTo(FloatWritable) incorrectly 
handles float value At FloatWritable.java: At FloatWritable.java:[line 71] 
   org.apache.hadoop.io.FloatWritable$Comparator.compare(byte[], int, int, 
byte[], int, int) incorrectly handles float value At FloatWritable.java:int) 
incorrectly handles float value At FloatWritable.java:[line 89] 
   Possible null pointer dereference in 
org.apache.hadoop.io.IOUtils.listDirectory(File, FilenameFilter) due to return 
value of called method Dereferenced at 
IOUtils.java:org.apache.hadoop.io.IOUtils.listDirectory(File, FilenameFilter) 
due to return value of called method Dereferenced at IOUtils.java:[line 389] 
   Possible bad parsing of shift operation in 
org.apache.hadoop.io.file.tfile.Utils$Version.hashCode() At 
Utils.java:operation in 
org.apache.hadoop.io.file.tfile.Utils$Version.hashCode() At Utils.java:[line 
398] 
   
org.apache.hadoop.metrics2.lib.DefaultMetricsFactory.setInstance(MutableMetricsFactory)
 unconditionally sets the field mmfImpl At DefaultMetricsFactory.java:mmfImpl 
At DefaultMetricsFactory.java:[line 49] 
   
org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.setMiniClusterMode(boolean) 
unconditionally sets the field miniClusterMode At 
DefaultMetricsSystem.java:miniClusterMode At DefaultMetricsSystem.java:[line 
92] 
   Useless object stored in variable seqOs of method 
org.apache.hadoop.security.token.delegation.ZKDelegationTokenSecretManager.addOrUpdateToken(AbstractDelegationTokenIdentifier,
 

Re: [VOTE] Release Apache Hadoop 3.1.4 (RC2)

2020-07-10 Thread Gabor Bota
Yes, sure. I'll do another RC for next week.

Thank you all for working on this!

On Thu, Jul 9, 2020 at 8:20 AM Masatake Iwasaki
 wrote:
>
> Hi Gabor Bota,
>
> I committed the fix of YARN-10347 to branch-3.1.
> I think this should be blocker for 3.1.4.
> Could you cherry-pick it to branch-3.1.4 and cut a new RC?
>
> Thanks,
> Masatake Iwasaki
>
> On 2020/07/08 23:31, Masatake Iwasaki wrote:
> > Thanks Steve and Prabhu for the information.
> >
> > The cause turned out to be locking in CapacityScheduler#reinitialize.
> > I think the method is called after transitioning to active state if
> > RM-HA is enabled.
> >
> > I filed YARN-10347 and created PR.
> >
> >
> > Masatake Iwasaki
> >
> >
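
As a standalone illustration of the locking pattern described above (this is not the
real CapacityScheduler code; the sketch only mirrors the shape of the problem), a thread
that holds the write side of a ReentrantReadWriteLock for the duration of a blocked
reinitialize will stall every reader, which is why the submitApplication handler parks
on the read lock in the jstack quoted further down:

import java.util.concurrent.locks.ReentrantReadWriteLock;

// Illustrative only -- not the real CapacityScheduler.
public class SchedulerLockSketch {
  private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();

  void reinitialize() {                  // e.g. on transition to active
    lock.writeLock().lock();
    try {
      // If anything in here blocks (the sleep below is a stand-in),
      // the write lock stays held for the whole duration...
      Thread.sleep(60_000);
    } catch (InterruptedException e) {
      Thread.currentThread().interrupt();
    } finally {
      lock.writeLock().unlock();
    }
  }

  int checkAndGetApplicationPriority() { // read path used by submitApplication
    lock.readLock().lock();              // ...so readers park here, as in the jstack
    try {
      return 0;
    } finally {
      lock.readLock().unlock();
    }
  }
}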
> > On 2020/07/08 16:33, Prabhu Joseph wrote:
> >> Hi Masatake,
> >>
> >>   The thread is waiting for a ReadLock; we need to check what the
> >> other
> >> thread holding the WriteLock is blocked on.
> >> Can you get three consecutive complete jstack of ResourceManager
> >> during the
> >> issue.
> >>
>  I got no issue if RM-HA is disabled.
> >> Looks like RM is not able to access the Zookeeper State Store. Can you
> >> check if there is any connectivity issue between RM and Zookeeper.
> >>
> >> Thanks,
> >> Prabhu Joseph
> >>
> >>
> >> On Mon, Jul 6, 2020 at 2:44 AM Masatake Iwasaki
> >> 
> >> wrote:
> >>
> >>> Thanks for putting this up, Gabor Bota.
> >>>
> >>> I'm testing the RC2 on 3 node docker cluster with NN-HA and RM-HA
> >>> enabled.
> >>> ResourceManager reproducibly blocks on submitApplication while
> >>> launching
> >>> example MR jobs.
> >>> Does anyone run into the same issue?
> >>>
> >>> The same configuration worked for 3.1.3.
> >>> I got no issue if RM-HA is disabled.
> >>>
> >>>
> >>> "IPC Server handler 1 on default port 8032" #167 daemon prio=5
> >>> os_prio=0
> >>> tid=0x7fe91821ec50 nid=0x3b9 waiting on condition
> >>> [0x7fe901bac000]
> >>>  java.lang.Thread.State: WAITING (parking)
> >>>   at sun.misc.Unsafe.park(Native Method)
> >>>   - parking to wait for  <0x85d37a40> (a
> >>> java.util.concurrent.locks.ReentrantReadWriteLock$NonfairSync)
> >>>   at
> >>> java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
> >>>   at
> >>>
> >>> java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
> >>>
> >>>   at
> >>>
> >>> java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireShared(AbstractQueuedSynchronizer.java:967)
> >>>
> >>>   at
> >>>
> >>> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireShared(AbstractQueuedSynchronizer.java:1283)
> >>>
> >>>   at
> >>>
> >>> java.util.concurrent.locks.ReentrantReadWriteLock$ReadLock.lock(ReentrantReadWriteLock.java:727)
> >>>
> >>>   at
> >>>
> >>> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.checkAndGetApplicationPriority(CapacityScheduler.java:2521)
> >>>
> >>>   at
> >>>
> >>> org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.createAndPopulateNewRMApp(RMAppManager.java:417)
> >>>
> >>>   at
> >>>
> >>> org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.submitApplication(RMAppManager.java:342)
> >>>
> >>>   at
> >>>
> >>> org.apache.hadoop.yarn.server.resourcemanager.ClientRMService.submitApplication(ClientRMService.java:678)
> >>>
> >>>   at
> >>>
> >>> org.apache.hadoop.yarn.api.impl.pb.service.ApplicationClientProtocolPBServiceImpl.submitApplication(ApplicationClientProtocolPBServiceImpl.java:277)
> >>>
> >>>   at
> >>>
> >>> org.apache.hadoop.yarn.proto.ApplicationClientProtocol$ApplicationClientProtocolService$2.callBlockingMethod(ApplicationClientProtocol.java:563)
> >>>
> >>>   at
> >>>
> >>> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:527)
> >>>
> >>>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1036)
> >>>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1015)
> >>>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:943)
> >>>   at java.security.AccessController.doPrivileged(Native Method)
> >>>   at javax.security.auth.Subject.doAs(Subject.java:422)
> >>>   at
> >>>
> >>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
> >>>
> >>>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2943)
> >>>
> >>>
> >>> Masatake Iwasaki
> >>>
> >>> On 2020/06/26 22:51, Gabor Bota wrote:
>  Hi folks,
> 
>  I have put together a release candidate (RC2) for Hadoop 3.1.4.
> 
>  The RC is available at:
> >>> http://people.apache.org/~gabota/hadoop-3.1.4-RC2/
>  The RC tag in git is here:
>  https://github.com/apache/hadoop/releases/tag/release-3.1.4-RC2
>  The maven artifacts are staged at
>  https://repository.apache.org/content/repositories/orgapachehadoop-1269/
> 
> 
>  You can find my 

Re: [VOTE] Release Apache Hadoop 3.3.0 - RC0

2020-07-10 Thread Tianhua huang
  +1 (non-binding)

- Verified signatures and checksums
- Checked the documents including changes and release notes
- Native built from source on Ubuntu 18.04 aarch64 with openjdk-11-jdk
- Setup a pseudo-distributed cluster
- Ran some example jobs(grep, wordcount, pi)
- Executed TeraGen, TeraSort and TeraValidate
- Executed a TestDFSIO job

On Fri, Jul 10, 2020 at 5:26 PM Yikun Jiang  wrote:

> +1 (non-binding)
>
> - Verified signatures and checksums
> - Checked the documents including changes and release notes
> Tested on CentOS 7.8 aarch64 with JDK 1.8.0_252:
> - Deployed a pseudo-distributed cluster(single node)
> - Ran some examples (wordcount, Tera*, Pi, TestDFSIO...)
>
> Regards,
> Yikun
>


Re: [VOTE] Release Apache Hadoop 3.3.0 - RC0

2020-07-10 Thread Yikun Jiang
+1 (non-binding)

- Verified signatures and checksums
- Checked the documents including changes and release notes
Tested on CentOS 7.8 aarch64 with JDK 1.8.0_252:
- Deployed a pseudo-distributed cluster(single node)
- Ran some examples (wordcount, Tera*, Pi, TestDFSIO...)

Regards,
Yikun


Re: [VOTE] Release Apache Hadoop 3.3.0 - RC0

2020-07-10 Thread Vinayakumar B
+1 (Binding)

-Verified all checksums and Signatures.
-Verified site, Release notes and Change logs
  + Maybe the changelog and release notes could be grouped based on the
project at the second level for a better look (this needs to be supported
by Yetus)
-Tested in x86 local 3-node docker cluster.
  + Built from source with OpenJdk 8 and Ubuntu 18.04
  + Deployed 3 node docker cluster
  + Ran various Jobs (wordcount, Terasort, Pi, etc)

No Issues reported.

-Vinay

On Fri, Jul 10, 2020 at 1:19 PM Sheng Liu  wrote:

> +1 (non-binding)
>
> - checked out the "3.3.0-aarch64-RC0" binary packages
>
> - started a cluster with 3 node VMs running Ubuntu 18.04 ARM/aarch64 and
> openjdk-11-jdk
>
> - checked some web UIs (NN, DN, RM, NM)
>
> - Executed a wordcount, TeraGen, TeraSort and TeraValidate
>
> - Executed a TestDFSIO job
>
> - Executed a Pi job
>
> BR,
> Liusheng
>
> Zhenyu Zheng  于2020年7月10日周五 下午3:45写道:
>
> > +1 (non-binding)
> >
> > - Verified all hashes and checksums
> > - Tested on ARM platform for the following actions:
> >   + Built from source on Ubuntu 18.04, OpenJDK 8
> >   + Deployed a pseudo cluster
> >   + Ran some example jobs(grep, wordcount, pi)
> >   + Ran teragen/terasort/teravalidate
> >   + Ran TestDFSIO job
> >
> > BR,
> >
> > Zhenyu
> >
> > On Fri, Jul 10, 2020 at 2:40 PM Akira Ajisaka 
> wrote:
> >
> > > +1 (binding)
> > >
> > > - Verified checksums and signatures.
> > > - Built from the source with CentOS 7 and OpenJDK 8.
> > > - Successfully upgraded HDFS to 3.3.0-RC0 in our development cluster
> > (with
> > > RBF, security, and OpenJDK 11) for end-users. No issues reported.
> > > - The document looks good.
> > > - Deployed pseudo cluster and ran some MapReduce jobs.
> > >
> > > Thanks,
> > > Akira
> > >
> > >
> > > On Tue, Jul 7, 2020 at 7:27 AM Brahma Reddy Battula  >
> > > wrote:
> > >
> > > > Hi folks,
> > > >
> > > > This is the first release candidate for the first release of Apache
> > > > Hadoop 3.3.0
> > > > line.
> > > >
> > > > It contains *1644[1]* fixed jira issues since 3.2.1 which include a
> lot
> > > of
> > > > features and improvements (read the full set of release notes).
> > > >
> > > > Below feature additions are the highlights of the release.
> > > >
> > > > - ARM Support
> > > > - Enhancements and new features on S3a,S3Guard,ABFS
> > > > - Java 11 Runtime support and TLS 1.3.
> > > > - Support Tencent Cloud COS File System.
> > > > - Added security to HDFS Router.
> > > > - Support non-volatile storage class memory(SCM) in HDFS cache
> > directives
> > > > - Support Interactive Docker Shell for running Containers.
> > > > - Scheduling of opportunistic containers
> > > > - A pluggable device plugin framework to ease vendor plugin
> development
> > > >
> > > > *The RC0 artifacts are at*:
> > > > http://home.apache.org/~brahma/Hadoop-3.3.0-RC0/
> > > >
> > > > *First release to include ARM binary, Have a check.*
> > > > *RC tag is *release-3.3.0-RC0.
> > > >
> > > >
> > > > *The maven artifacts are hosted here:*
> > > >
> > https://repository.apache.org/content/repositories/orgapachehadoop-1271/
> > > >
> > > > *My public key is available here:*
> > > > https://dist.apache.org/repos/dist/release/hadoop/common/KEYS
> > > >
> > > > The vote will run for 5 weekdays, until Tuesday, July 13 at 3:50 AM
> > IST.
> > > >
> > > >
> > > > I have done some testing with my pseudo cluster. My +1 to start.
> > > >
> > > >
> > > >
> > > > Regards,
> > > > Brahma Reddy Battula
> > > >
> > > >
> > > > 1. project in (YARN, HADOOP, MAPREDUCE, HDFS) AND fixVersion in
> (3.3.0)
> > > AND
> > > > fixVersion not in (3.2.0, 3.2.1, 3.1.3) AND status = Resolved ORDER
> BY
> > > > fixVersion ASC
> > > >
> > >
> >
>


Re: [VOTE] Release Apache Hadoop 3.3.0 - RC0

2020-07-10 Thread Sheng Liu
+1 (non-binding)

- checked out the "3.3.0-aarch64-RC0" binary packages

- started a cluster with 3 node VMs running Ubuntu 18.04 ARM/aarch64 and
openjdk-11-jdk

- checked some web UIs (NN, DN, RM, NM)

- Executed a wordcount, TeraGen, TeraSort and TeraValidate

- Executed a TestDFSIO job

- Executed a Pi job

BR,
Liusheng

Zhenyu Zheng  于2020年7月10日周五 下午3:45写道:

> +1 (non-binding)
>
> - Verified all hashes and checksums
> - Tested on ARM platform for the following actions:
>   + Built from source on Ubuntu 18.04, OpenJDK 8
>   + Deployed a pseudo cluster
>   + Ran some example jobs(grep, wordcount, pi)
>   + Ran teragen/terasort/teravalidate
>   + Ran TestDFSIO job
>
> BR,
>
> Zhenyu
>
> On Fri, Jul 10, 2020 at 2:40 PM Akira Ajisaka  wrote:
>
> > +1 (binding)
> >
> > - Verified checksums and signatures.
> > - Built from the source with CentOS 7 and OpenJDK 8.
> > - Successfully upgraded HDFS to 3.3.0-RC0 in our development cluster
> (with
> > RBF, security, and OpenJDK 11) for end-users. No issues reported.
> > - The document looks good.
> > - Deployed pseudo cluster and ran some MapReduce jobs.
> >
> > Thanks,
> > Akira
> >
> >
> > On Tue, Jul 7, 2020 at 7:27 AM Brahma Reddy Battula 
> > wrote:
> >
> > > Hi folks,
> > >
> > > This is the first release candidate for the first release of Apache
> > > Hadoop 3.3.0
> > > line.
> > >
> > > It contains *1644[1]* fixed jira issues since 3.2.1 which include a lot
> > of
> > > features and improvements (read the full set of release notes).
> > >
> > > Below feature additions are the highlights of the release.
> > >
> > > - ARM Support
> > > - Enhancements and new features on S3a,S3Guard,ABFS
> > > - Java 11 Runtime support and TLS 1.3.
> > > - Support Tencent Cloud COS File System.
> > > - Added security to HDFS Router.
> > > - Support non-volatile storage class memory(SCM) in HDFS cache
> directives
> > > - Support Interactive Docker Shell for running Containers.
> > > - Scheduling of opportunistic containers
> > > - A pluggable device plugin framework to ease vendor plugin development
> > >
> > > *The RC0 artifacts are at*:
> > > http://home.apache.org/~brahma/Hadoop-3.3.0-RC0/
> > >
> > > *First release to include ARM binary, Have a check.*
> > > *RC tag is *release-3.3.0-RC0.
> > >
> > >
> > > *The maven artifacts are hosted here:*
> > >
> https://repository.apache.org/content/repositories/orgapachehadoop-1271/
> > >
> > > *My public key is available here:*
> > > https://dist.apache.org/repos/dist/release/hadoop/common/KEYS
> > >
> > > The vote will run for 5 weekdays, until Tuesday, July 13 at 3:50 AM
> IST.
> > >
> > >
> > > I have done some testing with my pseudo cluster. My +1 to start.
> > >
> > >
> > >
> > > Regards,
> > > Brahma Reddy Battula
> > >
> > >
> > > 1. project in (YARN, HADOOP, MAPREDUCE, HDFS) AND fixVersion in (3.3.0)
> > AND
> > > fixVersion not in (3.2.0, 3.2.1, 3.1.3) AND status = Resolved ORDER BY
> > > fixVersion ASC
> > >
> >
>


Re: [VOTE] Release Apache Hadoop 3.3.0 - RC0

2020-07-10 Thread Zhenyu Zheng
+1 (non-binding)

- Verified all hashes and checksums
- Tested on ARM platform for the following actions:
  + Built from source on Ubuntu 18.04, OpenJDK 8
  + Deployed a pseudo cluster
  + Ran some example jobs(grep, wordcount, pi)
  + Ran teragen/terasort/teravalidate
  + Ran TestDFSIO job

BR,

Zhenyu

On Fri, Jul 10, 2020 at 2:40 PM Akira Ajisaka  wrote:

> +1 (binding)
>
> - Verified checksums and signatures.
> - Built from the source with CentOS 7 and OpenJDK 8.
> - Successfully upgraded HDFS to 3.3.0-RC0 in our development cluster (with
> RBF, security, and OpenJDK 11) for end-users. No issues reported.
> - The document looks good.
> - Deployed pseudo cluster and ran some MapReduce jobs.
>
> Thanks,
> Akira
>
>
> On Tue, Jul 7, 2020 at 7:27 AM Brahma Reddy Battula 
> wrote:
>
> > Hi folks,
> >
> > This is the first release candidate for the first release of Apache
> > Hadoop 3.3.0
> > line.
> >
> > It contains *1644[1]* fixed jira issues since 3.2.1 which include a lot
> of
> > features and improvements (read the full set of release notes).
> >
> > Below feature additions are the highlights of the release.
> >
> > - ARM Support
> > - Enhancements and new features on S3a,S3Guard,ABFS
> > - Java 11 Runtime support and TLS 1.3.
> > - Support Tencent Cloud COS File System.
> > - Added security to HDFS Router.
> > - Support non-volatile storage class memory(SCM) in HDFS cache directives
> > - Support Interactive Docker Shell for running Containers.
> > - Scheduling of opportunistic containers
> > - A pluggable device plugin framework to ease vendor plugin development
> >
> > *The RC0 artifacts are at*:
> > http://home.apache.org/~brahma/Hadoop-3.3.0-RC0/
> >
> > *First release to include ARM binary, Have a check.*
> > *RC tag is *release-3.3.0-RC0.
> >
> >
> > *The maven artifacts are hosted here:*
> > https://repository.apache.org/content/repositories/orgapachehadoop-1271/
> >
> > *My public key is available here:*
> > https://dist.apache.org/repos/dist/release/hadoop/common/KEYS
> >
> > The vote will run for 5 weekdays, until Tuesday, July 13 at 3:50 AM IST.
> >
> >
> > I have done some testing with my pseudo cluster. My +1 to start.
> >
> >
> >
> > Regards,
> > Brahma Reddy Battula
> >
> >
> > 1. project in (YARN, HADOOP, MAPREDUCE, HDFS) AND fixVersion in (3.3.0)
> AND
> > fixVersion not in (3.2.0, 3.2.1, 3.1.3) AND status = Resolved ORDER BY
> > fixVersion ASC
> >
>


Re: [VOTE] Release Apache Hadoop 3.3.0 - RC0

2020-07-10 Thread Akira Ajisaka
+1 (binding)

- Verified checksums and signatures.
- Built from the source with CentOS 7 and OpenJDK 8.
- Successfully upgraded HDFS to 3.3.0-RC0 in our development cluster (with
RBF, security, and OpenJDK 11) for end-users. No issues reported.
- The document looks good.
- Deployed pseudo cluster and ran some MapReduce jobs.

Thanks,
Akira


On Tue, Jul 7, 2020 at 7:27 AM Brahma Reddy Battula 
wrote:

> Hi folks,
>
> This is the first release candidate for the first release of Apache
> Hadoop 3.3.0
> line.
>
> It contains *1644[1]* fixed jira issues since 3.2.1 which include a lot of
> features and improvements (read the full set of release notes).
>
> Below feature additions are the highlights of the release.
>
> - ARM Support
> - Enhancements and new features on S3a,S3Guard,ABFS
> - Java 11 Runtime support and TLS 1.3.
> - Support Tencent Cloud COS File System.
> - Added security to HDFS Router.
> - Support non-volatile storage class memory(SCM) in HDFS cache directives
> - Support Interactive Docker Shell for running Containers.
> - Scheduling of opportunistic containers
> - A pluggable device plugin framework to ease vendor plugin development
>
> *The RC0 artifacts are at*:
> http://home.apache.org/~brahma/Hadoop-3.3.0-RC0/
>
> *First release to include ARM binary, Have a check.*
> *RC tag is *release-3.3.0-RC0.
>
>
> *The maven artifacts are hosted here:*
> https://repository.apache.org/content/repositories/orgapachehadoop-1271/
>
> *My public key is available here:*
> https://dist.apache.org/repos/dist/release/hadoop/common/KEYS
>
> The vote will run for 5 weekdays, until Tuesday, July 13 at 3:50 AM IST.
>
>
> I have done some testing with my pseudo cluster. My +1 to start.
>
>
>
> Regards,
> Brahma Reddy Battula
>
>
> 1. project in (YARN, HADOOP, MAPREDUCE, HDFS) AND fixVersion in (3.3.0) AND
> fixVersion not in (3.2.0, 3.2.1, 3.1.3) AND status = Resolved ORDER BY
> fixVersion ASC
>