Re: [DISCUSS] Hadoop 3.3.1 release

2021-04-19 Thread Viraj Jasani
Thanks Wei-Chiu.

> If you believe there are more features/bug fixes we should include in
3.3.1
(I spent the past few weeks backporting jiras but I'm sure I missed some)
please shout out.

Although not marked critical, I believe HDFS-15982 could get into 3.3.1 if
you are fine with it. Thanks to Bhavik for filing this.

> Meanwhile, I believe we need to release hadoop-thirdparty 1.1.0 too.

I agree we should work on this to upgrade some dependencies affected by
known CVEs (e.g. guava). However, does this mean Hadoop 3.3.1 has a
dependency on hadoop-thirdparty 1.1.0? (as we would like to upgrade guava
and other shaded thirdparty dependencies for 3.3.1 as well)


On Mon, Apr 19, 2021 at 11:04 AM Wei-Chiu Chuang wrote:

> Hello, reviving this thread.
>
> I created a dashboard for Hadoop 3.3.1 release.
> https://issues.apache.org/jira/secure/Dashboard.jspa?selectPageId=12336122
> Also a jira to track the release work: HADOOP-17647
> 
>
> We are currently at 5 release blockers and 3 critical issues for Hadoop
> 3.3.1. I'll go through each of them and push out the ones that aren't
> really blocking us.
>
> If you believe there are more features/bug fixes we should include in 3.3.1
> (I spent the past few weeks backporting jiras but I'm sure I missed some)
> please shout out.
>
> Meanwhile, I believe we need to release hadoop-thirdparty 1.1.0 too. There
> are a number of tasks to be done there too. Let's start another thread for
> hadoop-thirdparty 1.1.0 release.
>
> On Mon, Mar 15, 2021 at 7:04 PM hemanth boyina wrote:
>
> > Hi Steve and Wei-Chiu
> >
> > Regarding IPv6: a few years back we rebased HADOOP-11890 onto trunk and
> > tried to get IPv6 working. We faced some issues and made the required
> > changes for IPv6 to work. After the changes were made, we tested them
> > rigorously on both IPv4 and IPv6 machines. It has been quite some time
> > since these changes were deployed in a production cluster, and they have
> > been in extensive use.
> >
> > I think it's a good time to add this feature.
> >
> > Thanks
> > Hemanth Boyina
> >
> >
> >
> > On Thu, 11 Mar 2021, 10:22 Vinayakumar B wrote:
> >
> > > Hi David,
> > >
> > > >> Still hoping for help here:
> > >
> > > >> https://issues.apache.org/jira/browse/HDFS-15790
> > >
> > > I will raise a PR for the said solution soon (in a day or two).
> > >
> > > -Vinay
> > >
> > > On Thu, 11 Mar 2021 at 5:39 AM, David  wrote:
> > >
> > > > Hello,
> > > >
> > > > Still hoping for help here:
> > > >
> > > > https://issues.apache.org/jira/browse/HDFS-15790
> > > >
> > > > Looks like it has been worked on, not sure how to best move it
> forward.
> > > >
> > > > On Wed, Mar 10, 2021, 12:21 PM Steve Loughran wrote:
> > > >
> > > > > I'm going to argue it's too late to do IPv6 support close to a
> > > > > release, as it's best if it's on developer machines for some time to
> > > > > let all the quirks surface. It's not so much IPv6 itself, but do we
> > > > > cause any regressions on IPv4?
> > > > >
> > > > > But: it can/should go into trunk and stabilize there
> > > > >
> > > > > On Thu, 4 Mar 2021 at 03:52, Muralikrishna Dmmkr <
> > > > > muralikrishna.dm...@gmail.com> wrote:
> > > > >
> > > > > > Hi Brahma,
> > > > > >
> > > > > > I missed mentioning the IPv6 feature in the last mail. Support
> > > > > > for IPv6 has been in development since 2015, and we have done a
> > > > > > good amount of testing at our organisation; the feature is stable
> > > > > > and has been used extensively by our customers over the last year.
> > > > > > I think it is a good time to add IPv6 support to 3.3.1.
> > > > > >
> > > > > > https://issues.apache.org/jira/browse/HADOOP-11890
> > > > > >
> > > > > > Thanks
> > > > > > D M Murali Krishna Reddy
> > > > > >
> > > > > > On Wed, Feb 24, 2021 at 9:13 AM Muralikrishna Dmmkr <
> > > > > > muralikrishna.dm...@gmail.com> wrote:
> > > > > >
> > > > > > > Hi Brahma,
> > > > > > >
> > > > > > > Can we have this new feature, "YARN Registry based AM discovery
> > > > > > > with retry and in-flight task persistence via JHS", in the
> > > > > > > upcoming 3.3.1 release? I have also attached a test report in
> > > > > > > the jira below.
> > > > > > >
> > > > > > > https://issues.apache.org/jira/browse/MAPREDUCE-6726
> > > > > > >
> > > > > > >
> > > > > > > Thanks,
> > > > > > > D M Murali Krishna Reddy
> > > > > > >
> > > > > > > On Tue, Feb 23, 2021 at 10:11 AM Brahma Reddy Battula <
> > > > > bra...@apache.org
> > > > > > >
> > > > > > > wrote:
> > > > > > >
> > > > > > >> Hi Bilwa,
> > > > > > >>
> > > > > > >> I have commented on the jiras you mentioned. Based on the
> > > > > > >> stability we can plan this, but it needs to be merged ASAP.
> > > > > > >>
> > > > > > >>
> > > > > > >>
> > > > > > >> On Fri, Feb 19, 2021 at 5:20 PM bilw

Re: Error while running github feature from .asf.yaml in hadoop!

2021-05-04 Thread Viraj Jasani
Looks like this is related to the missing 'features' tag? I am not sure
whether this tag is mandatory, but it is worth adding:
https://cwiki.apache.org/confluence/display/INFRA/Git+-+.asf.yaml+features#Git.asf.yamlfeatures-Repositoryfeatures

github:
  features:
    # Enable wiki for documentation
    wiki: true
    # Enable issue management
    issues: true
    # Enable projects for project management boards
    projects: true
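Before pushing a change like this, a quick local sanity check that the expected keys are present can save a round trip. This grep-based check is only a sketch (the temp-file setup is for illustration); the authoritative validation happens server-side at ASF infra.

```shell
# Write a candidate .asf.yaml locally and sanity-check that the keys we
# care about are present before pushing. Sketch only; real validation is
# done server-side by ASF infra when the file lands on the repo.
yamlfile="$(mktemp)"
cat > "${yamlfile}" <<'EOF'
github:
  features:
    # Enable wiki for documentation
    wiki: true
    # Enable issue management
    issues: true
    # Enable projects for project management boards
    projects: true
EOF

ok=1
for key in 'github:' 'features:' 'wiki:' 'issues:' 'projects:'; do
  grep -q "${key}" "${yamlfile}" || ok=0
done
echo "required keys present: ${ok}"
```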


On Tue, May 4, 2021 at 1:57 PM Wei-Chiu Chuang  wrote:

> I received this message but it's impossible to tell what went wrong.
>
> My guess is that this PR, HADOOP-17623
> https://github.com/apache/hadoop-site/commit/d3f5b0bc70c2c4fdb542c4a0bdfc2ed9c11edfa6
> caused the error, though.
> Any idea what we can do to fix this? I can easily revert the change
> if needed.
>
> -- Forwarded message -
> From: Apache Infrastructure 
> Date: Tue, May 4, 2021 at 4:21 PM
> Subject: Error while running github feature from .asf.yaml in hadoop!
> To: , 
>
>
>
> An error occurred while running github feature in .asf.yaml!:
> 'next'
>


Re: [VOTE] hadoop-thirdparty 1.1.0-RC0

2021-05-14 Thread Viraj Jasani
+1 (non-binding)

* Signature: ok
* Checksum : ok
* CHANGELOG / RELEASENOTES: ok
* Rat check (1.8.0_171): ok
 - mvn clean apache-rat:check
* Built from source (1.8.0_171): ok
 - mvn clean install  -DskipTests
 - mvn clean install -DskipTests -Psrc
* Built Hadoop trunk against hadoop-thirdparty 1.1.0: ok
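For reference, the signature and checksum lines above boil down to checking the downloaded artifact against its published .asc and .sha512 files. A minimal, self-contained sketch of the checksum step, using a stand-in file rather than the real release tarball (the filename is illustrative):

```shell
# Minimal sketch of the checksum step from the vote checklist, using a
# stand-in file instead of the real hadoop-thirdparty tarball.
workdir="$(mktemp -d)"
cd "${workdir}"

# Stand-in for the downloaded release artifact.
printf 'release bits\n' > hadoop-thirdparty-1.1.0-src.tar.gz

# The release manager publishes a digest file alongside the artifact:
sha512sum hadoop-thirdparty-1.1.0-src.tar.gz \
  > hadoop-thirdparty-1.1.0-src.tar.gz.sha512

# A voter re-checks it; "OK" means the artifact matches the digest.
result="$(sha512sum -c hadoop-thirdparty-1.1.0-src.tar.gz.sha512)"
echo "${result}"
```

The signature check is analogous: import the published KEYS file and run gpg --verify against the .asc file next to the artifact.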


On Thu, May 13, 2021 at 5:25 PM Wei-Chiu Chuang  wrote:

> Hello my fellow Hadoop developers,
>
> I am putting together the first release candidate (RC0) for
> Hadoop-thirdparty 1.1.0. This is going to be consumed by the upcoming
> Hadoop 3.3.1 release.
>
> The RC is available at:
> https://people.apache.org/~weichiu/hadoop-thirdparty-1.1.0-RC0/
> The RC tag in github is here:
> https://github.com/apache/hadoop-thirdparty/tree/release-1.1.0-RC0
> The maven artifacts are staged at:
> https://repository.apache.org/content/repositories/orgapachehadoop-1309/
>
> You can find my public key at:
> https://dist.apache.org/repos/dist/release/hadoop/common/KEYS or
> https://people.apache.org/keys/committer/weichiu.asc
>
>
> Please try the release and vote. The vote will run for 5 days until
> 2021/05/19 at 00:00 CST.
>
> Note: Our post commit automation builds the code, and pushes the SNAPSHOT
> artifacts to central Maven, which is consumed by Hadoop trunk and
> branch-3.3, so it is a good validation that things are working properly in
> hadoop-thirdparty.
>
> Thanks,
> Wei-Chiu
>


Re: [DISCUSS] Change project style guidelines to allow line length 100

2021-05-20 Thread Viraj Jasani
+1 (non-binding) to increasing line length to 100 instead of 80 and
applying this to all active branches (for the ease of backport as Stephen
mentioned).
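As a side note, a rough way to see how many lines in a file each limit would flag (a generic awk sketch over a generated sample file, not the project's actual checkstyle invocation):

```shell
# Count lines longer than a given limit, the way a line-length rule would
# flag them. Generic sketch; Hadoop's real enforcement is via checkstyle.
src="$(mktemp)"
printf '%s\n' \
  "short line" \
  "$(printf 'x%.0s' $(seq 1 90))" \
  "$(printf 'y%.0s' $(seq 1 120))" > "${src}"

over80="$(awk 'length > 80' "${src}" | wc -l)"
over100="$(awk 'length > 100' "${src}" | wc -l)"
echo "over 80: ${over80}, over 100: ${over100}"
```

Here the 90- and 120-character lines are flagged under the 80 limit, but only the 120-character line under 100.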


On Thu, 20 May 2021 at 3:30 PM, Stephen O'Donnell wrote:

> I am +1 on increasing the line length to 100.
>
> As for changes to address existing style issues - I think that is more pain
> than it's worth. It will make backports much harder, and we have quite a
> few active branches, not to mention those who maintain custom builds.
>
> To ease the backport problems the style fixes would need to be pushed down
> all the branches, otherwise if we start making them on trunk only, future
> changes on trunk will not be able to be cherry-picked cleanly to branch-3.3
> etc.
>
>
> On Thu, May 20, 2021 at 8:28 AM Bhavik Patel 
> wrote:
>
> > I am just worried about backporting jiras to child branches. How are we
> > planning to handle this?
> >
> > On Thu, May 20, 2021, 11:09 AM Qi Zhu <821684...@qq.com> wrote:
> >
> > > +1 100 is reasonable.
> > >
> > >
> > >
> > > ---Original---
> > > From: "Xiaoqiao He"
> > > Date: Thu, May 20, 2021 13:35
> > > To: "Masatake Iwasaki"
> > > Cc: "Akira Ajisaka"; common-dev@hadoop.apache.org; "Hdfs-dev"; "yarn-dev"; mapreduce-...@hadoop.apache.org
> > > Subject: Re: [DISCUSS] Change project style guidelines to allow line length 100
> > >
> > >
> > > +1 for <= 100 chars per line.
> > >
> > > On Thu, May 20, 2021 at 10:28 AM Masatake Iwasaki <
> > > iwasak...@oss.nttdata.co.jp> wrote:
> > >
> > > > I'm +1 too.
> > > > I feel the 80-character limit tends to degrade readability by
> > > > introducing useless line breaks.
> > > >
> > > > > https://lists.apache.org/thread.html/7813c2f8a49b1d1e7655dad180f2d915a280b2f4d562cfe981e1dd4e%401406489966%40%3Ccommon-dev.hadoop.apache.org%3E
> > > > I have no inconvenience with 100 characters, using Emacs and
> > > > side-by-side diff even on a 13-inch MBP.
> > > >
> > > > Masatake Iwasaki
> > > >
> > > > On 2021/05/20 11:00, Akira Ajisaka wrote:
> > > > > I'm +1 to allow <= 100 chars.
> > > > >
> > > > > FYI: There were some discussions long before:
> > > > > - https://lists.apache.org/thread.html/7813c2f8a49b1d1e7655dad180f2d915a280b2f4d562cfe981e1dd4e%401406489966%40%3Ccommon-dev.hadoop.apache.org%3E
> > > > > - https://lists.apache.org/thread.html/3e1785cbbe14dcab9bb970fa0f534811cfe00795a8cd1100580f27dc%401430849118%40%3Ccommon-dev.hadoop.apache.org%3E
> > > >
> > > > > Thanks,
> > > > > Akira
> > > > >
> > > > > On Thu, May 20, 2021 at 6:36 AM Sean Busbey wrote:
> > > > >>
> > > > >> Hello!
> > > > >>
> > > > >> What do folks think about changing our line length guidelines to
> > > > >> allow for 100-character width?
> > > > >>
> > > > >> Currently, we tell folks to follow the Sun style guide with some
> > > > >> exceptions unrelated to line length. That guide says a width of 80
> > > > >> is the standard, and our current checkstyle rules act as
> > > > >> enforcement.
> > > > >>
> > > > >> Looking at the current trunk codebase, our nightly build shows a
> > > > >> total of ~15k line length violations; that is about 18% of
> > > > >> identified checkstyle issues.
> > > > >>
> > > > >> The vast majority of those line length violations are <= 100
> > > > >> characters long. 100 characters happens to be the limit in the
> > > > >> Google Java Style Guide, another commonly adopted style guide for
> > > > >> Java projects, so I suspect these longer lines leaking past the
> > > > >> checkstyle precommit warning might be a reflection of committers
> > > > >> working across multiple Java codebases.
> > > > >>
> > > > >> I don’t feel strongly about lines being longer, but I would like
> > > > >> to move towards more consistent style enforcement as a project.
> > > > >> Updating our project guidance to allow for 100-character lines
> > > > >> would reduce the likelihood that folks bringing in new
> > > > >> contributions need a precommit test cycle to get the formatting
> > > > >> correct.
> > > > >>
> > > > >> Does anyone feel strongly about keeping the line length limit at
> > > > >> 80 characters?
> > > > >>
> > > > >> Does anyone feel strongly about contributions coming in that clear
> > > > >> up line length violations?
> > > > >>
> > > > >>
> > > > >>

Re: [VOTE] Release Apache Hadoop Thirdparty 1.1.1 RC0

2021-05-26 Thread Viraj Jasani
+1 (non-binding)

* Signature: ok
* Checksum : ok
* Rat check (1.8.0_171): ok
 - mvn clean apache-rat:check
* Built from source (1.8.0_171): ok
 - mvn clean install
* Built Hadoop trunk with updated thirdparty: ok
* CL/RN: ok


On Wed, 26 May 2021 at 1:59 PM, Wei-Chiu Chuang  wrote:

> Hi folks,
>
> I have put together a release candidate (RC0) for Hadoop Thirdparty
> 1.1.1 which will be consumed by Hadoop 3.3.1 RC2.
>
>
> The RC is available at:
> https://people.apache.org/~weichiu/hadoop-thirdparty-1.1.1-RC0/
>
>
> The RC tag in github is here:
> https://github.com/apache/hadoop-thirdparty/releases/tag/release-1.1.1-RC0
>
> The maven artifacts are staged at
>
> https://repository.apache.org/content/repositories/orgapachehadoop-1316/
>
>
> Compared to 1.1.0, there are two additional fixes:
>
> HADOOP-17707. Remove jaeger document from site index.
> https://github.com/apache/hadoop-thirdparty/commit/e1db87b85117b5694972f2725aa32c9975a83b5b
>
> HADOOP-17730. Add back error_prone
> https://github.com/apache/hadoop-thirdparty/commit/db2fc27e2f53637a06c36c3a9d8dae0a8c894cd8
>
> You can find my public key at:
> https://dist.apache.org/repos/dist/release/hadoop/common/KEYS
>
> Please try the release and vote. The vote will run for 5 days.
>
> Thanks
> Weichiu
>


Re: [VOTE] Hadoop 3.1.x EOL

2021-06-03 Thread Viraj Jasani
+1 (non-binding)

On Thu, 3 Jun 2021 at 12:21 PM, Wei-Chiu Chuang  wrote:

> +1
>
> On Thu, Jun 3, 2021 at 2:14 PM Akira Ajisaka  wrote:
>
> > Dear Hadoop developers,
> >
> > Given the feedback from the discussion thread [1], I'd like to start
> > an official vote
> > thread for the community to vote and start the 3.1 EOL process.
> >
> > What this entails:
> >
> > (1) an official announcement that no further regular Hadoop 3.1.x
> releases
> > will be made after 3.1.4.
> > (2) resolve JIRAs that specifically target 3.1.5 as won't fix.
> >
> > This vote will run for 7 days and conclude by June 10th, 16:00 JST [2].
> >
> > Committers are eligible to cast binding votes. Non-committers are
> welcomed
> > to cast non-binding votes.
> >
> > Here is my vote, +1
> >
> > [1] https://s.apache.org/w9ilb
> > [2]
> >
> https://www.timeanddate.com/worldclock/fixedtime.html?msg=4&iso=20210610T16&p1=248
> >
> > Regards,
> > Akira
> >
> > -
> > To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
> > For additional commands, e-mail: common-dev-h...@hadoop.apache.org
> >
> >
>


Re: [VOTE] Release Apache Hadoop 3.3.1 RC3

2021-06-10 Thread Viraj Jasani
+1 (non-binding)

* Signature: ok
* Checksum : ok
* Rat check (1.8.0_171): ok
 - mvn clean apache-rat:check
* Built from source (1.8.0_171): ok
 - mvn clean install  -DskipTests
* HDFS basic testing in pseudo-distributed mode: ok
* Built HBase 2.4.4 with Hadoop 3.3.1 RC and tested some basic scenarios,
looks good

On Wed, Jun 9, 2021 at 10:55 PM Stack  wrote:

> +1
>
>
>
> * Signature: ok
>
> * Checksum : ok
>
> * Rat check (1.8.0_191): ok
>
>  - mvn clean apache-rat:check
>
> * Built from source (1.8.0_191): ok
>
>  - mvn clean install -DskipTests
>
>
> Ran a ten-node cluster w/ hbase on top running its verification loadings
> w/ (gentle) chaos. Had trouble getting the rig running, but it was mostly
> pilot error and nothing that I could particularly attribute to hdfs after
> poking in logs.
>
> Messed in UI and shell some. Nothing untoward.
>
> Wei-Chiu fixed broken tests over in hbase, and complete runs are pretty
> much there (a classic flakie seems more flaky on 3.3.1... will dig in
> more on why).
>
>
> Thanks,
>
> S
>
>
> On Tue, Jun 1, 2021 at 3:29 AM Wei-Chiu Chuang  wrote:
>
> > Hi community,
> >
> > This is release candidate RC3 of the Apache Hadoop 3.3.1 line. All
> > blocker issues have been resolved [1] again.
> >
> > There are 2 additional issues resolved for RC3:
> > * Revert "MAPREDUCE-7303. Fix TestJobResourceUploader failures after
> > HADOOP-16878"
> > * Revert "HADOOP-16878. FileUtil.copy() to throw IOException if the
> > source and destination are the same"
> >
> > There are 4 issues resolved for RC2:
> > * HADOOP-17666. Update LICENSE for 3.3.1
> > * MAPREDUCE-7348. TestFrameworkUploader#testNativeIO fails. (#3053)
> > * Revert "HADOOP-17563. Update Bouncy Castle to 1.68. (#2740)" (#3055)
> > * HADOOP-17739. Use hadoop-thirdparty 1.1.1. (#3064)
> >
> > The Hadoop-thirdparty 1.1.1, as previously mentioned, contains two extra
> > fixes compared to hadoop-thirdparty 1.1.0:
> > * HADOOP-17707. Remove jaeger document from site index.
> > * HADOOP-17730. Add back error_prone
> >
> > *RC tag is release-3.3.1-RC3
> > https://github.com/apache/hadoop/releases/tag/release-3.3.1-RC3
> >
> > *The RC3 artifacts are at*:
> > https://home.apache.org/~weichiu/hadoop-3.3.1-RC3/
> > ARM artifacts: https://home.apache.org/~weichiu/hadoop-3.3.1-RC3-arm/
> >
> > *The maven artifacts are hosted here:*
> > https://repository.apache.org/content/repositories/orgapachehadoop-1320/
> >
> > *My public key is available here:*
> > https://dist.apache.org/repos/dist/release/hadoop/common/KEYS
> >
> >
> > Things I've verified:
> > * all blocker issues targeting 3.3.1 have been resolved.
> > * stable/evolving API changes between 3.3.0 and 3.3.1 are compatible.
> > * LICENSE and NOTICE files checked
> > * RELEASENOTES and CHANGELOG
> > * rat check passed.
> > * Built HBase master branch on top of Hadoop 3.3.1 RC2, ran unit tests.
> > * Built Ozone master on top of Hadoop 3.3.1 RC2, ran unit tests.
> > * Extra: built 50 other open source projects on top of Hadoop 3.3.1 RC2.
> > Had to patch some of them due to the commons-lang migration (Hadoop
> > 3.2.0) and dependency divergence. Issues are being identified, but so
> > far nothing is a blocker for Hadoop itself.
> >
> > Please try the release and vote. The vote will run for 5 days.
> >
> > My +1 to start,
> >
> > [1] https://issues.apache.org/jira/issues/?filter=12350491
> > [2]
> >
> >
> https://github.com/apache/hadoop/compare/release-3.3.1-RC1...release-3.3.1-RC3
> >
>


Re: [DISCUSS] Hadoop 3.3.2 release?

2021-09-08 Thread Viraj Jasani
+1 (non-binding) for the release.


On Tue, 7 Sep 2021 at 10:36 PM, Chao Sun  wrote:

> Hi all,
>
> It has been almost 3 months since the 3.3.1 release and branch-3.3 has
> accumulated quite a few commits (118 atm). In particular, Spark community
> recently found an issue which prevents one from using the shaded Hadoop
> client together with certain compression codecs such as lz4 and snappy
> codec. The details are recorded in HADOOP-17891 and SPARK-36669.
>
> Therefore, I'm wondering if anyone is also interested in a 3.3.2 release.
> If there is no objection, I'd like to volunteer myself for the work as
> well.
>
> Best Regards,
> Chao
>


[DISCUSS] Checkin Hadoop code formatter

2021-09-11 Thread Viraj Jasani
+ common-dev@hadoop.apache.org

-- Forwarded message -
From: Viraj Jasani 
Date: Tue, Sep 7, 2021 at 6:18 PM
Subject: Checkin Hadoop code formatter
To: common-dev@hadoop.apache.org 


It seems some recent new devs are not familiar with the common code
formatter that we use for our codebase.
We already have a Wiki page [1] for new contributors which says "Code must
be formatted according to Sun's conventions
<http://www.oracle.com/technetwork/java/javase/documentation/codeconvtoc-136057.html>",
but Oracle's code conventions page is not being actively maintained (it has
not been updated since 1999). Hence, I believe we should check in and
maintain code formatter xmls for supported IDEs in our codebase (under
dev-support) so that all devs can import them into their respective IDEs.
Keeping this in mind, I have created PR 3387
<https://github.com/apache/hadoop/pull/3387>. Please take a look; if the PR
receives sufficient +1s, we might want to update our Wiki page to refer
directly to the code formatters that we maintain in our own codebase.
Thoughts?


1.
https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute#HowToContribute-MakingChanges


Re: [DISCUSS] Checkin Hadoop code formatter

2021-09-23 Thread Viraj Jasani
PR 3387 <https://github.com/apache/hadoop/pull/3387> has been merged and
the Wiki page is also updated for devs to refer to the IDEA code formatter
xml. If any Eclipse user would like to contribute the formatter xml, please
feel free to comment on HADOOP-17892
<https://issues.apache.org/jira/browse/HADOOP-17892> or create a sub-task.

Thanks to everyone who helped with reviews, merging the PR and updating the
Wiki page.

On Tue, Sep 14, 2021 at 5:29 PM Hui Fei  wrote:

> Thanks Viraj.
> It does make sense.
>
> Viraj Jasani  于2021年9月12日周日 上午2:58写道:
>
>> + common-dev@hadoop.apache.org
>>
>> ------ Forwarded message -
>> From: Viraj Jasani 
>> Date: Tue, Sep 7, 2021 at 6:18 PM
>> Subject: Checkin Hadoop code formatter
>> To: common-dev@hadoop.apache.org 
>>
>>
>> It seems some recent new devs are not familiar with the common code
>> formatter that we use for our codebase.
>> While we already have Wiki page [1] for new contributors and it mentions:
>> "Code must be formatted according to Sun's conventions
>> <
>> http://www.oracle.com/technetwork/java/javase/documentation/codeconvtoc-136057.html
>> >"
>> but this Oracle's code conventions page is not being actively maintained
>> (no update has been received after 1999) and hence, I believe we should
>> check-in and maintain code formatter xmls for supported IDEs in our
>> codebase only (under dev-support) for all devs to be able to import it in
>> the respective IDE.
>> Keeping this in mind, I have created this PR 3387
>> <https://github.com/apache/hadoop/pull/3387>. If you could please take a
>> look and if the PR receives sufficient +1s, we might want to update our
>> Wiki page to directly refer to our own codebase for code formatters that
>> we
>> maintain. Thoughts?
>>
>>
>> 1.
>>
>> https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute#HowToContribute-MakingChanges
>>
>


[DISCUSS] Migrate to Yetus Interface classification annotations

2021-09-27 Thread Viraj Jasani
Since the early days, Hadoop has provided interface classification
annotations to represent the scope and stability of its APIs, so that
downstream applications can select Hadoop APIs carefully. These annotations
(InterfaceAudience and InterfaceStability) were later migrated to Apache
Yetus. As of today, with an increasing number of Hadoop ecosystem
applications using (or starting to use) the Yetus stability annotations for
their own downstreamers, we should also consider using the IA/IS
annotations provided by *org.apache.yetus.audience* directly in our
codebase and retire our *org.apache.hadoop.classification* package, for
better separation of concerns and a single source.

I believe we can go with this migration to maintain compatibility for
Hadoop downstreamers:

   1. In Hadoop trunk (3.4.0+ releases), replace all usages of o.a.h.c
   stability annotations with o.a.y.a annotations.
   2. Deprecate o.a.h.c annotations, and provide deprecation warning that
   we will remove o.a.h.c in 4.0.0 (or 5.0.0) release and the only source for
   these annotations should be o.a.y.a.

Any thoughts?
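Step 1 above is largely a mechanical package rename. A sketch of what that bulk rewrite could look like — using an illustrative sample file rather than real Hadoop source, so the file and class names here are assumptions:

```shell
# Sketch of the mechanical part of step 1: rewriting
# org.apache.hadoop.classification imports to org.apache.yetus.audience.
# Sample.java below is illustrative, not real Hadoop source.
srcdir="$(mktemp -d)"
cat > "${srcdir}/Sample.java" <<'EOF'
import org.apache.hadoop.classification.InterfaceAudience;
import org.apache.hadoop.classification.InterfaceStability;

@InterfaceAudience.Public
@InterfaceStability.Stable
public class Sample {}
EOF

# Rewrite the package prefix in every .java file under srcdir.
find "${srcdir}" -name '*.java' -exec \
  sed -i 's/org\.apache\.hadoop\.classification/org.apache.yetus.audience/g' {} +

# Show the rewritten imports.
grep 'import org.apache.yetus.audience' "${srcdir}/Sample.java"
```

A real migration would of course also need the deprecation shims of step 2 so that existing downstream code compiling against o.a.h.c keeps working.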


Re: [DISCUSS] Migrate to Yetus Interface classification annotations

2021-09-28 Thread Viraj Jasani
> The problem comes from the removal of com.sun.tools.doclets.* packages

Agree. Here is the summary
<https://docs.oracle.com/en/java/javase/11/docs/api/jdk.javadoc/jdk/javadoc/doclet/package-summary.html>
of the replacement package *jdk.javadoc.doclet*.
Here is the migration guide
<https://docs.oracle.com/en/java/javase/11/docs/api/jdk.javadoc/jdk/javadoc/doclet/package-summary.html#migration>
for the same.

On Tue, Sep 28, 2021 at 1:06 PM Akira Ajisaka  wrote:

> Hi Masatake,
>
> The problem comes from the removal of com.sun.tools.doclets.* packages in
> Java 10.
> In Apache Hadoop, I removed the doclet support for filtering javadocs when
> the environment is Java 10 or upper.
> https://issues.apache.org/jira/browse/HADOOP-15304
>
> Thanks,
> Akira
>
> On Tue, Sep 28, 2021 at 10:27 AM Masatake Iwasaki <
> iwasak...@oss.nttdata.co.jp> wrote:
>
> > > In particular, there has been an outstanding problem with doclet
> > > support for filtering javadocs by annotation since JDK9 came out.
> >
> > Could you give me a pointer to relevant Yetus JIRA or ML thread?
> >
> > On 2021/09/28 1:17, Sean Busbey wrote:
> > > I think consolidating on a common library and tooling for defining API
> > > expectations for Hadoop would be great.
> > >
> > > Unfortunately, the Apache Yetus community recently started a discussion
> > > around dropping their maintenance of the audience annotations
> > > codebase[1] due to lack of community interest. In particular, there has
> > > been an outstanding problem with doclet support for filtering javadocs
> > > by annotation since JDK9 came out.
> > >
> > > I think that means a necessary first step here would be to determine
> > > if we have contributors willing to show up over in that project to get
> > > things into a good state for future JDK adoption.
> > >
> > >
> > >
> > > [1]:
> > > https://s.apache.org/ybdl6
> > > "[DISCUSS] Drop JDK8; audience-annotations" from d...@yetus.apache.org
> > >
> > >> On Sep 27, 2021, at 2:46 AM, Viraj Jasani  wrote:
> > >>
> > >> Since the early days, Hadoop has provided Interface classification
> > >> annotations to represent the scope and stability for downstream
> > >> applications to select Hadoop APIs carefully. After some time, these
> > >> annotations (InterfaceAudience and InterfaceStability) have been
> > migrated
> > >> to Apache Yetus. As of today, with increasing number of Hadoop
> ecosystem
> > >> applications using (or starting to use) Yetus stability annotations
> for
> > >> their own downstreamers, we should also consider using IA/IS
> annotations
> > >> provided by *org.apache.yetus.audience *directly in our codebase and
> > retire
> > >> our *org.apache.hadoop.classification* package for the better
> > separation of
> > >> concern and single source.
> > >>
> > >> I believe we can go with this migration to maintain compatibility for
> > >> Hadoop downstreamers:
> > >>
> > >>1. In Hadoop trunk (3.4.0+ releases), replace all usages of o.a.h.c
> > >>stability annotations with o.a.y.a annotations.
> > >>2. Deprecate o.a.h.c annotations, and provide deprecation warning
> > that
> > >>we will remove o.a.h.c in 4.0.0 (or 5.0.0) release and the only
> > source for
> > >>these annotations should be o.a.y.a.
> > >>
> > >> Any thoughts?
> > >
> > >
> > >
> > > -
> > > To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
> > > For additional commands, e-mail: common-dev-h...@hadoop.apache.org
> > >
> >
> > -
> > To unsubscribe, e-mail: yarn-dev-unsubscr...@hadoop.apache.org
> > For additional commands, e-mail: yarn-dev-h...@hadoop.apache.org
> >
> >
>


Re: [DISCUSS] Migrate to Yetus Interface classification annotations

2021-09-29 Thread Viraj Jasani
Thanks Masatake for the suggestions. I agree that until Yetus comes to
final conclusion on whether to keep or drop IA/IS annotations for higher
JDK versions (or fix/drop doclet support), we should hold on for now.
Thanks Sean and Akira for providing the context.


On Tue, 28 Sep 2021 at 6:55 PM, Masatake Iwasaki <
iwasak...@oss.nttdata.co.jp> wrote:

> Thanks, Akira and Viraj.
>
> My understanding is that we have options like
>
> 1. migrate org.apache.yetus:audience-annotations to Java >= 9 then
> migrate Hadoop to the new org.apache.yetus:audience-annotations.
>
> 2. "use the Jigsaw feature to export only @Public elements to other
> projects
> and create javadoc by new --show-packages=exported option instead of
> relying on the annotations." as mentioned by Akira[1].
>
> Both require dropping Java 8 support.
>
> If the current org.apache.yetus:audience-annotations (:0.13.0) for Java 8
> no longer evolves, migrating to it in the short term is not very useful?
>
> [1]
> https://issues.apache.org/jira/browse/HADOOP-15304?focusedCommentId=16418072&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16418072
>
> On 2021/09/28 18:38, Viraj Jasani wrote:
> >> The problem comes from the removal of com.sun.tools.doclets.* packages
> >
> > Agree. Here is the summary
> > <
> https://docs.oracle.com/en/java/javase/11/docs/api/jdk.javadoc/jdk/javadoc/doclet/package-summary.html
> >
> > of the replacement package *jdk.javadoc.doclet*.
> > Here is the migration guide
> > <
> https://docs.oracle.com/en/java/javase/11/docs/api/jdk.javadoc/jdk/javadoc/doclet/package-summary.html#migration
> >
> > for the same.
> >
> > On Tue, Sep 28, 2021 at 1:06 PM Akira Ajisaka 
> wrote:
> >
> >> Hi Masatake,
> >>
> >> The problem comes from the removal of com.sun.tools.doclets.* packages
> in
> >> Java 10.
> >> In Apache Hadoop, I removed the doclet support for filtering javadocs
> when
> >> the environment is Java 10 or upper.
> >> https://issues.apache.org/jira/browse/HADOOP-15304
> >>
> >> Thanks,
> >> Akira
> >>
> >> On Tue, Sep 28, 2021 at 10:27 AM Masatake Iwasaki <
> >> iwasak...@oss.nttdata.co.jp> wrote:
> >>
> >>>> In particular, there has been an outstanding problem with doclet
> >> support
> >>> for filtering javadocs by annotation since JDK9 came out.
> >>>
> >>> Could you give me a pointer to relevant Yetus JIRA or ML thread?
> >>>
> >>> On 2021/09/28 1:17, Sean Busbey wrote:
> >>>> I think consolidating on a common library and tooling for defining API
> >>> expectations for Hadoop would be great.
> >>>>
> >>>> Unfortunately, the Apache Yetus community recently started a
> discussion
> >>> around dropping their maintenance of the audience annotations
> codebase[1]
> >>> due to lack of community interest. In particular, there has been an
> >>> outstanding problem with doclet support for filtering javadocs by
> >>> annotation since JDK9 came out.
> >>>>
> >>>> I think that means a necessary first step here would be to determine
> if
> >>> we have contributors willing to show up over in that project to get
> >> things
> >>> into a good state for future JDK adoption.
> >>>>
> >>>>
> >>>>
> >>>> [1]:
> >>>> https://s.apache.org/ybdl6
> >>>> "[DISCUSS] Drop JDK8; audience-annotations" from d...@yetus.apache.org
> >>>>
> >>>>> On Sep 27, 2021, at 2:46 AM, Viraj Jasani 
> wrote:
> >>>>>
> >>>>> Since the early days, Hadoop has provided Interface classification
> >>>>> annotations to represent the scope and stability for downstream
> >>>>> applications to select Hadoop APIs carefully. After some time, these
> >>>>> annotations (InterfaceAudience and InterfaceStability) have been
> >>> migrated
> >>>>> to Apache Yetus. As of today, with increasing number of Hadoop
> >> ecosystem
> >>>>> applications using (or starting to use) Yetus stability annotations
> >> for
> >>>>> their own downstreamers, we should also consider using IA/IS
> >> annotations
> >>>>> provided by *org.apache.yetus.audience *directly in our 

Re: yetus isn't working

2021-11-24 Thread Viraj Jasani
This does seem to be happening intermittently. For one of the recent
builds, 4 of 5 runs failed for the same reason and only 1 of 5 was able to
proceed further.

https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3711/


On Wed, 24 Nov 2021 at 11:28 PM, Ayush Saxena  wrote:

> This was happening with this PR only; others seem to work.
>
> I was trying to fix, by then you merged it and the build got disabled.
>
> Took me 10 mins to realise that the PR got merged, that is why the build
> got disabled and it is not me who broke it in experimenting. :-)
>
> I will try to fix it next time….
>
>
> On 24-Nov-2021, at 10:36 PM, Steve Loughran 
> wrote:
> >
> > 
> https://ci-hadoop.apache.org/blue/organizations/jenkins/hadoop-multibranch/detail/PR-3633/9/pipeline
> >
> > Check out from version control
> > <1s
> > /home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-3633:
> Read-only
> > file system
> >
> > #!/usr/bin/env bash # See HADOOP-13951 chmod -R u+rxw "${WORKSPACE}"
> > — Shell Script
> > <1s
> > Required context class hudson.FilePath is missing
> >
> > Perhaps you forgot to surround the code with a step that provides this,
> > such as: node
> >
> > anyone know how to fix this?
>
> -
> To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
> For additional commands, e-mail: common-dev-h...@hadoop.apache.org
>
>


Re: [VOTE] Release Apache Hadoop 3.3.2 - RC3

2022-01-27 Thread Viraj Jasani
+1 (non-binding)

* Signature: ok
* Checksum: ok
* Build from source: ok
* Ran some large tests from hbase 2.5 branch against RC2: looks good (carry
forward from previous RC)
* HDFS functional tests: ok
* Ran a couple of MapReduce Jobs: ok
* ATSv2 functional tests: ok
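
[Editorial note: the "Signature: ok / Checksum: ok" items above boil down to a couple of standard commands. A minimal, self-contained illustration follows, run on a throwaway file so the commands work anywhere; the gpg step is shown only as a comment since it needs the published KEYS file and the real artifacts.]

```shell
# Create a stand-in artifact and verify it the same way an RC tarball is checked.
printf 'hadoop-rc-demo' > artifact.bin
sha512sum artifact.bin > artifact.bin.sha512   # release managers publish this file
# "Checksum: ok" means this exits 0 and prints "artifact.bin: OK":
sha512sum -c artifact.bin.sha512
# "Signature: ok" is the analogous gpg check against the real artifacts, e.g.:
#   curl -O https://downloads.apache.org/hadoop/common/KEYS && gpg --import KEYS
#   gpg --verify hadoop-3.3.2.tar.gz.asc hadoop-3.3.2.tar.gz
```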


On Thu, Jan 27, 2022 at 12:47 AM Chao Sun  wrote:

> Hi all,
>
> I've put together Hadoop 3.3.2 RC3 below:
>
> The RC is available at:
> http://people.apache.org/~sunchao/hadoop-3.3.2-RC3/
> The RC tag is at:
> https://github.com/apache/hadoop/releases/tag/release-3.3.2-RC3
> The Maven artifacts are staged at:
> https://repository.apache.org/content/repositories/orgapachehadoop-1333
>
> You can find my public key at:
> https://downloads.apache.org/hadoop/common/KEYS
>
> The only delta between this and RC2 is the addition of the following fix:
>   - HADOOP-18094. Disable S3A auditing by default.
>
> I've done the same tests as in RC2 and they look good:
> - Ran all the unit tests
> - Started a single node HDFS cluster and tested a few simple commands
> - Ran all the tests in Spark using the RC2 artifacts
>
> Please evaluate the RC and vote, thanks!
>
> Best,
> Chao
>


Re: [VOTE] Release Apache Hadoop 3.3.2 - RC3

2022-01-27 Thread Viraj Jasani
> * ATSv2 functional tests: ok

Ran against hbase 1.7 cluster

On Thu, Jan 27, 2022 at 5:03 PM Viraj Jasani  wrote:

> +1 (non-binding)
>
> * Signature: ok
> * Checksum: ok
> * Build from source: ok
> * Ran some large tests from hbase 2.5 branch against RC2: looks good
> (carry forward from previous RC)
> * HDFS functional tests: ok
> * Ran a couple of MapReduce Jobs: ok
> * ATSv2 functional tests: ok
>
>
> On Thu, Jan 27, 2022 at 12:47 AM Chao Sun  wrote:
>
>> Hi all,
>>
>> I've put together Hadoop 3.3.2 RC3 below:
>>
>> The RC is available at:
>> http://people.apache.org/~sunchao/hadoop-3.3.2-RC3/
>> The RC tag is at:
>> https://github.com/apache/hadoop/releases/tag/release-3.3.2-RC3
>> The Maven artifacts are staged at:
>> https://repository.apache.org/content/repositories/orgapachehadoop-1333
>>
>> You can find my public key at:
>> https://downloads.apache.org/hadoop/common/KEYS
>>
>> The only delta between this and RC2 is the addition of the following fix:
>>   - HADOOP-18094. Disable S3A auditing by default.
>>
>> I've done the same tests as in RC2 and they look good:
>> - Ran all the unit tests
>> - Started a single node HDFS cluster and tested a few simple commands
>> - Ran all the tests in Spark using the RC2 artifacts
>>
>> Please evaluate the RC and vote, thanks!
>>
>> Best,
>> Chao
>>
>


Re: [VOTE] Release Apache Hadoop 3.3.2 - RC4

2022-02-13 Thread Viraj Jasani
> The RC is available at:
http://people.apache.org/~sunchao/hadoop-3.3.2-RC4/

Chao, the RC folder seems empty as of now.


On Sat, Feb 12, 2022 at 3:12 AM Chao Sun  wrote:

> Hi all,
>
> Sorry for the delay! I was waiting for a few fixes that people have
> requested to backport. I've just put together Hadoop 3.3.2 RC4 below:
>
> The RC is available at:
> http://people.apache.org/~sunchao/hadoop-3.3.2-RC4/
> The RC tag is at:
> https://github.com/apache/hadoop/releases/tag/release-3.3.2-RC4
> The Maven artifacts are staged at:
> https://repository.apache.org/content/repositories/orgapachehadoop-1334/
>
> You can find my public key at:
> https://downloads.apache.org/hadoop/common/KEYS
>
> The deltas between this and RC3 are the addition of the following fixes:
>   - HADOOP-17198. Support S3 Access Points
>   - HADOOP-18096. Distcp: Sync moves filtered file to home directory rather
> than deleting.
>   - YARN-10561. Upgrade node.js to 12.22.1 and yarn to 1.22.5 in YARN
> application catalog webapp
>
> Same as before, I've done the following verification and they look good:
> - Ran all the unit tests
> - Started a single node HDFS cluster and tested a few simple commands
> - Ran all the tests in Spark using the RC4 artifacts
>
> Please evaluate the RC and vote, thanks!
>
> Best,
> Chao
>


Re: [VOTE] Release Apache Hadoop 3.3.2 - RC4

2022-02-13 Thread Viraj Jasani
Thanks Chao. Yes contents are now present, I will start the testing.


On Mon, 14 Feb 2022 at 2:13 AM, Chao Sun  wrote:

> Oops, sorry about that. They should be there now. Can you check again?
> Thanks!
>
> Chao
>
> On Sun, Feb 13, 2022 at 6:47 AM Viraj Jasani  wrote:
>
>> > The RC is available at:
>> http://people.apache.org/~sunchao/hadoop-3.3.2-RC4/
>>
>> Chao, the RC folder seems empty as of now.
>>
>>
>> On Sat, Feb 12, 2022 at 3:12 AM Chao Sun  wrote:
>>
>>> Hi all,
>>>
>>> Sorry for the delay! was waiting for a few fixes that people have
>>> requested
>>> to backport. I've just put together Hadoop 3.3.2 RC4 below:
>>>
>>> The RC is available at:
>>> http://people.apache.org/~sunchao/hadoop-3.3.2-RC4/
>>> The RC tag is at:
>>> https://github.com/apache/hadoop/releases/tag/release-3.3.2-RC4
>>> The Maven artifacts are staged at:
>>> https://repository.apache.org/content/repositories/orgapachehadoop-1334/
>>>
>>> You can find my public key at:
>>> https://downloads.apache.org/hadoop/common/KEYS
>>>
>>> The deltas between this and RC3 are the addition of the following fixes:
>>>   - HADOOP-17198. Support S3 Access Points
>>>   - HADOOP-18096. Distcp: Sync moves filtered file to home directory
>>> rather
>>> than deleting.
>>>   - YARN-10561. Upgrade node.js to 12.22.1 and yarn to 1.22.5 in YARN
>>> application catalog webapp
>>>
>>> Same as before, I've done the following verification and they look good:
>>> - Ran all the unit tests
>>> - Started a single node HDFS cluster and tested a few simple commands
>>> - Ran all the tests in Spark using the RC4 artifacts
>>>
>>> Please evaluate the RC and vote, thanks!
>>>
>>> Best,
>>> Chao
>>>
>>


Re: [VOTE] Release Apache Hadoop 3.3.2 - RC4

2022-02-14 Thread Viraj Jasani
-0 (non-binding), due to git/jira version discrepancies. Once resolved and
changelist updated, will change my vote to +1 (non-binding).

Basic RC verification using hadoop-vote.sh
<https://github.com/apache/hadoop/blob/trunk/dev-support/hadoop-vote.sh>:

* Signature: ok
* Checksum : ok
* Rat check (1.8.0_301): ok
 - mvn clean apache-rat:check
* Built from source (1.8.0_301): ok
 - mvn clean install  -DskipTests
* Built tar from source (1.8.0_301): ok
 - mvn clean package  -Pdist -DskipTests -Dtar
-Dmaven.javadoc.skip=true

Functional validation of HDFS, MapReduce and ATSv2 carry-forwarded from
previous RC.

Found some Git/Jira version discrepancies (script used from PR
<https://github.com/apache/hadoop/pull/3991>):

Jiras with missing fixVersion: 3.3.2

HADOOP-17198
HDFS-16344
HDFS-16339
YARN-11007
HDFS-16171
HDFS-16350
HDFS-16336
HADOOP-17975
YARN-10991
HDFS-16332
HADOOP-17857
HDFS-16271
HADOOP-17195
HADOOP-17290
HADOOP-17819
YARN-9551
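
[Editorial note: a sketch of the kind of cross-check the referenced script performs, shown on simulated inputs. The real ids would come from `git log` on the release branch and from a JIRA fixVersion query; the file contents and branch name below are illustrative, not taken from the actual script.]

```shell
# Stand-ins for "JIRA ids found in git log" and "JIRA ids tagged fixVersion 3.3.2".
printf 'HADOOP-17198\nHDFS-16344\nHDFS-16350\n' | sort > in_git.txt
printf 'HDFS-16344\nHADOOP-17936\n' | sort > in_jira.txt
# Committed on the branch but missing the fixVersion in JIRA:
comm -23 in_git.txt in_jira.txt
# Tagged with the fixVersion but with no corresponding commit on the branch:
comm -13 in_git.txt in_jira.txt
# Against a real checkout, in_git.txt would be built with something like:
#   git log --oneline origin/branch-3.3.2 \
#     | grep -oE '(HADOOP|HDFS|YARN|MAPREDUCE)-[0-9]+' | sort -u > in_git.txt
```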



On Mon, Feb 14, 2022 at 11:10 AM Viraj Jasani  wrote:

> Thanks Chao. Yes contents are now present, I will start the testing.
>
>
> On Mon, 14 Feb 2022 at 2:13 AM, Chao Sun  wrote:
>
>> Oops, sorry about that. They should be there now. Can you check again?
>> Thanks!
>>
>> Chao
>>
>> On Sun, Feb 13, 2022 at 6:47 AM Viraj Jasani  wrote:
>>
>>> > The RC is available at:
>>> http://people.apache.org/~sunchao/hadoop-3.3.2-RC4/
>>>
>>> Chao, the RC folder seems empty as of now.
>>>
>>>
>>> On Sat, Feb 12, 2022 at 3:12 AM Chao Sun  wrote:
>>>
>>>> Hi all,
>>>>
>>>> Sorry for the delay! was waiting for a few fixes that people have
>>>> requested
>>>> to backport. I've just put together Hadoop 3.3.2 RC4 below:
>>>>
>>>> The RC is available at:
>>>> http://people.apache.org/~sunchao/hadoop-3.3.2-RC4/
>>>> The RC tag is at:
>>>> https://github.com/apache/hadoop/releases/tag/release-3.3.2-RC4
>>>> The Maven artifacts are staged at:
>>>> https://repository.apache.org/content/repositories/orgapachehadoop-1334/
>>>>
>>>> You can find my public key at:
>>>> https://downloads.apache.org/hadoop/common/KEYS
>>>>
>>>> The deltas between this and RC3 are the addition of the following fixes:
>>>>   - HADOOP-17198. Support S3 Access Points
>>>>   - HADOOP-18096. Distcp: Sync moves filtered file to home directory
>>>> rather
>>>> than deleting.
>>>>   - YARN-10561. Upgrade node.js to 12.22.1 and yarn to 1.22.5 in YARN
>>>> application catalog webapp
>>>>
>>>> Same as before, I've done the following verification and they look good:
>>>> - Ran all the unit tests
>>>> - Started a single node HDFS cluster and tested a few simple commands
>>>> - Ran all the tests in Spark using the RC4 artifacts
>>>>
>>>> Please evaluate the RC and vote, thanks!
>>>>
>>>> Best,
>>>> Chao
>>>>
>>>


Re: [VOTE] Release Apache Hadoop 3.3.2 - RC4

2022-02-14 Thread Viraj Jasani
Thanks Chao. I just re-executed the script and found very minor differences:

1. Incorrect fix versions:
HADOOP-17988
HADOOP-17728

2. HADOOP-17873 is not yet resolved but the corresponding commit is present.

3. Resolved with 3.3.2 fixVersion but no corresponding commit found on
3.3.2:
HADOOP-17936
HADOOP-18066



On Mon, Feb 14, 2022 at 11:49 PM Chao Sun  wrote:

> Thank you so much Viraj! I had no idea that I missed tagging this many
> JIRAs. The script is super useful!
>
> I just fixed the "fix version" of these. Could you double check? Really
> appreciate you putting effort to do a thorough check on this.
>
> Chao
>
> On Mon, Feb 14, 2022 at 3:10 AM Viraj Jasani  wrote:
>
>> -0 (non-binding), due to git/jira version discrepancies. Once resolved and
>> changelist updated, will change my vote to +1 (non-binding).
>>
>> Basic RC verification using hadoop-vote.sh
>> <https://github.com/apache/hadoop/blob/trunk/dev-support/hadoop-vote.sh>:
>>
>> * Signature: ok
>> * Checksum : ok
>> * Rat check (1.8.0_301): ok
>>  - mvn clean apache-rat:check
>> * Built from source (1.8.0_301): ok
>>  - mvn clean install  -DskipTests
>> * Built tar from source (1.8.0_301): ok
>>  - mvn clean package  -Pdist -DskipTests -Dtar
>> -Dmaven.javadoc.skip=true
>>
>> Functional validation of HDFS, MapReduce and ATSv2 carry-forwarded from
>> previous RC.
>>
>> Found some Git/Jira version discrepancies (script used from PR
>> <https://github.com/apache/hadoop/pull/3991>):
>>
>> Jiras with missing fixVersion: 3.3.2
>>
>> HADOOP-17198
>> HDFS-16344
>> HDFS-16339
>> YARN-11007
>> HDFS-16171
>> HDFS-16350
>> HDFS-16336
>> HADOOP-17975
>> YARN-10991
>> HDFS-16332
>> HADOOP-17857
>> HDFS-16271
>> HADOOP-17195
>> HADOOP-17290
>> HADOOP-17819
>> YARN-9551
>>
>>
>>
>> On Mon, Feb 14, 2022 at 11:10 AM Viraj Jasani  wrote:
>>
>> > Thanks Chao. Yes contents are now present, I will start the testing.
>> >
>> >
>> > On Mon, 14 Feb 2022 at 2:13 AM, Chao Sun  wrote:
>> >
>> >> Oops, sorry about that. They should be there now. Can you check again?
>> >> Thanks!
>> >>
>> >> Chao
>> >>
>> >> On Sun, Feb 13, 2022 at 6:47 AM Viraj Jasani 
>> wrote:
>> >>
>> >>> > The RC is available at:
>> >>> http://people.apache.org/~sunchao/hadoop-3.3.2-RC4/
>> >>>
>> >>> Chao, the RC folder seems empty as of now.
>> >>>
>> >>>
>> >>> On Sat, Feb 12, 2022 at 3:12 AM Chao Sun  wrote:
>> >>>
>> >>>> Hi all,
>> >>>>
>> >>>> Sorry for the delay! was waiting for a few fixes that people have
>> >>>> requested
>> >>>> to backport. I've just put together Hadoop 3.3.2 RC4 below:
>> >>>>
>> >>>> The RC is available at:
>> >>>> http://people.apache.org/~sunchao/hadoop-3.3.2-RC4/
>> >>>> The RC tag is at:
>> >>>> https://github.com/apache/hadoop/releases/tag/release-3.3.2-RC4
>> >>>> The Maven artifacts are staged at:
>> >>>>
>> https://repository.apache.org/content/repositories/orgapachehadoop-1334/
>> >>>>
>> >>>> You can find my public key at:
>> >>>> https://downloads.apache.org/hadoop/common/KEYS
>> >>>>
>> >>>> The deltas between this and RC3 are the addition of the following
>> fixes:
>> >>>>   - HADOOP-17198. Support S3 Access Points
>> >>>>   - HADOOP-18096. Distcp: Sync moves filtered file to home directory
>> >>>> rather
>> >>>> than deleting.
>> >>>>   - YARN-10561. Upgrade node.js to 12.22.1 and yarn to 1.22.5 in YARN
>> >>>> application catalog webapp
>> >>>>
>> >>>> Same as before, I've done the following verification and they look
>> good:
>> >>>> - Ran all the unit tests
>> >>>> - Started a single node HDFS cluster and tested a few simple commands
>> >>>> - Ran all the tests in Spark using the RC4 artifacts
>> >>>>
>> >>>> Please evaluate the RC and vote, thanks!
>> >>>>
>> >>>> Best,
>> >>>> Chao
>> >>>>
>> >>>
>>
>


Re: [VOTE] Release Apache Hadoop 3.3.2 - RC4

2022-02-15 Thread Viraj Jasani
Thanks Chao, things look good with the Jira fixVersion discrepancies.
I am curious whether this requires a new RC or you would like to update the
changelog in the existing RC. Either way, once the changelog is updated, and
since other functional testing looks good anyway, my vote would be +1
(non-binding).


On Tue, 15 Feb 2022 at 11:27 PM, Chao Sun  wrote:

> Thanks again Viraj!
>
> > 1. Incorrect fix versions: HADOOP-17988, HADOOP-17728
>
> I fixed HADOOP-17988. HADOOP-17728 is resolved as invalid so I removed the
> fix version tag.
>
> 2. HADOOP-17873 is not yet resolved but the corresponding commit is
> present.
>
> Removed the fix version tag for this too.
>
> > 3. Resolved with 3.3.2 fixVersion but no corresponding commit found
> on 3.3.2: HADOOP-17936, HADOOP-18066
>
> I think HADOOP-17936 is already in branch-3.3.2?
> https://github.com/apache/hadoop/commit/6931b70a004de2c84b0630bdb92a2c9f62879c24
> HADOOP-18066 is resolved as invalid so I just removed the fix version tag.
>
> Please check again when you have a chance. Thanks.
>
> Best,
> Chao
>
>
>
>
>
>
>
>
>
>
> On Mon, Feb 14, 2022 at 11:51 PM Viraj Jasani  wrote:
>
>> Thanks Chao. I just re-executed the script and found very minor
>> differences:
>>
>> 1. Incorrect fix versions:
>> HADOOP-17988
>> HADOOP-17728
>>
>> 2. HADOOP-17873 is not yet resolved but the corresponding commit is
>> present.
>>
>> 3. Resolved with 3.3.2 fixVersion but no corresponding commit found on
>> 3.3.2:
>> HADOOP-17936
>> HADOOP-18066
>>
>>
>>
>> On Mon, Feb 14, 2022 at 11:49 PM Chao Sun  wrote:
>>
>>> Thank you so much Viraj! I had no idea that I missed tagging this many
>>> JIRAs. The script is super useful!
>>>
>>> I just fixed the "fix version" of these. Could you double check? Really
>>> appreciate you putting effort to do a thorough check on this.
>>>
>>> Chao
>>>
>>> On Mon, Feb 14, 2022 at 3:10 AM Viraj Jasani  wrote:
>>>
>>>> -0 (non-binding), due to git/jira version discrepancies. Once resolved
>>>> and
>>>> changelist updated, will change my vote to +1 (non-binding).
>>>>
>>>> Basic RC verification using hadoop-vote.sh
>>>> <https://github.com/apache/hadoop/blob/trunk/dev-support/hadoop-vote.sh
>>>> >:
>>>>
>>>> * Signature: ok
>>>> * Checksum : ok
>>>> * Rat check (1.8.0_301): ok
>>>>  - mvn clean apache-rat:check
>>>> * Built from source (1.8.0_301): ok
>>>>  - mvn clean install  -DskipTests
>>>> * Built tar from source (1.8.0_301): ok
>>>>  - mvn clean package  -Pdist -DskipTests -Dtar
>>>> -Dmaven.javadoc.skip=true
>>>>
>>>> Functional validation of HDFS, MapReduce and ATSv2 carry-forwarded from
>>>> previous RC.
>>>>
>>>> Found some Git/Jira version discrepancies (script used from PR
>>>> <https://github.com/apache/hadoop/pull/3991>):
>>>>
>>>> Jiras with missing fixVersion: 3.3.2
>>>>
>>>> HADOOP-17198
>>>> HDFS-16344
>>>> HDFS-16339
>>>> YARN-11007
>>>> HDFS-16171
>>>> HDFS-16350
>>>> HDFS-16336
>>>> HADOOP-17975
>>>> YARN-10991
>>>> HDFS-16332
>>>> HADOOP-17857
>>>> HDFS-16271
>>>> HADOOP-17195
>>>> HADOOP-17290
>>>> HADOOP-17819
>>>> YARN-9551
>>>>
>>>>
>>>>
>>>> On Mon, Feb 14, 2022 at 11:10 AM Viraj Jasani 
>>>> wrote:
>>>>
>>>> > Thanks Chao. Yes contents are now present, I will start the testing.
>>>> >
>>>> >
>>>> > On Mon, 14 Feb 2022 at 2:13 AM, Chao Sun  wrote:
>>>> >
>>>> >> Oops, sorry about that. They should be there now. Can you check
>>>> again?
>>>> >> Thanks!
>>>> >>
>>>> >> Chao
>>>> >>
>>>> >> On Sun, Feb 13, 2022 at 6:47 AM Viraj Jasani 
>>>> wrote:
>>>> >>
>>>> >>> > The RC is available at:
>>>> >>> http://people.apache.org/~sunchao/hadoop-3.3.2-RC4/
>>>> >>>
>>>> >>> Chao, the RC folder seems empty as of now.
>>>> >>>
>>>> >>

Re: [VOTE] Release Apache Hadoop 3.3.2 - RC5

2022-02-24 Thread Viraj Jasani
+1 (non-binding)

Using hadoop-vote.sh:

* Signature: ok
* Checksum : ok
* Rat check (1.8.0_301): ok
 - mvn clean apache-rat:check
* Built from source (1.8.0_301): ok
 - mvn clean install  -DskipTests
* Built tar from source (1.8.0_301): ok
 - mvn clean package  -Pdist -DskipTests -Dtar -Dmaven.javadoc.skip=true


* Basic functional testing on pseudo distributed cluster (carry-forwarded
from RC4): HDFS, MapReduce, ATSv2, HBase (2.x)
* Jira fixVersions seem consistent with git commits



On Tue, Feb 22, 2022 at 10:47 AM Chao Sun  wrote:

> Hi all,
>
> Here's Hadoop 3.3.2 release candidate #5:
>
> The RC is available at: http://people.apache.org/~sunchao/hadoop-3.3.2-RC5
> The RC tag is at:
> https://github.com/apache/hadoop/releases/tag/release-3.3.2-RC5
> The Maven artifacts are staged at:
> https://repository.apache.org/content/repositories/orgapachehadoop-1335
>
> You can find my public key at:
> https://downloads.apache.org/hadoop/common/KEYS
>
> CHANGELOG is the only difference between this and RC4. Therefore, the tests
> I've done in RC4 are still valid:
> - Ran all the unit tests
> - Started a single node HDFS cluster and tested a few simple commands
> - Ran all the tests in Spark using the RC5 artifacts
>
> Please evaluate the RC and vote, thanks!
>
> Best,
> Chao
>


Re: [VOTE] Release Apache Hadoop 3.2.3 - RC0

2022-03-16 Thread Viraj Jasani
+1 (non-binding)

Using hadoop-vote.sh

* Signature: ok
* Checksum : ok
* Rat check (1.8.0_301): ok
 - mvn clean apache-rat:check
* Built from source (1.8.0_301): ok
 - mvn clean install  -DskipTests
* Built tar from source (1.8.0_301): ok
 - mvn clean package  -Pdist -DskipTests -Dtar -Dmaven.javadoc.skip=true

Tested HDFS in pseudo-distributed mode with an HBase 2.4 pseudo-distributed
cluster (1M rows ingested); all good.

Test PR to run the full build and track UT failures:
https://github.com/apache/hadoop/pull/4073. A few tests are flaky, but they
pass locally.


On Mon, Mar 14, 2022 at 12:45 PM Masatake Iwasaki <
iwasak...@oss.nttdata.co.jp> wrote:

> Hi all,
>
> Here's Hadoop 3.2.3 release candidate #0:
>
> The RC is available at:
>https://home.apache.org/~iwasakims/hadoop-3.2.3-RC0/
>
> The RC tag is at:
>https://github.com/apache/hadoop/releases/tag/release-3.2.3-RC0
>
> The Maven artifacts are staged at:
>https://repository.apache.org/content/repositories/orgapachehadoop-1339
>
> You can find my public key at:
>https://downloads.apache.org/hadoop/common/KEYS
>
> Please evaluate the RC and vote.
>
> Thanks,
> Masatake Iwasaki
>
> -
> To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
> For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org
>
>


Re: [VOTE] Release Apache Hadoop 3.2.3 - RC1

2022-03-26 Thread Viraj Jasani
+1 (non-binding)

* Signature: ok
* Checksum : ok
* Rat check (1.8.0_301): ok
 - mvn clean apache-rat:check
* Built from source (1.8.0_301): ok
 - mvn clean install  -DskipTests
* Built tar from source (1.8.0_301): ok
 - mvn clean package  -Pdist -DskipTests -Dtar -Dmaven.javadoc.skip=true

* Functional testing of HDFS with HBase looks good (from RC0)
* MapReduce functional testing looks good
* Re-run of UTs for all modules looks good


On Sun, Mar 20, 2022 at 11:03 AM Masatake Iwasaki <
iwasak...@oss.nttdata.co.jp> wrote:

> Hi all,
>
> Here's Hadoop 3.2.3 release candidate #1:
>
> The RC is available at:
>https://home.apache.org/~iwasakims/hadoop-3.2.3-RC1/
>
> The RC tag is at:
>https://github.com/apache/hadoop/releases/tag/release-3.2.3-RC1
>
> The Maven artifacts are staged at:
>https://repository.apache.org/content/repositories/orgapachehadoop-1342
>
> You can find my public key at:
>https://downloads.apache.org/hadoop/common/KEYS
>
> Please evaluate the RC and vote.
> The vote will be open for (at least) 5 days.
>
> Thanks,
> Masatake Iwasaki
>
>
> -
> To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
> For additional commands, e-mail: common-dev-h...@hadoop.apache.org
>
>


Re: [VOTE] Release Apache Hadoop 3.3.3

2022-05-06 Thread Viraj Jasani
+1 (non-binding),

With a minor change in hadoop-vote.sh,

* Signature: ok
* Checksum : ok
* Rat check (1.8.0_301): ok
 - mvn clean apache-rat:check
* Built from source (1.8.0_301): ok
 - mvn clean install  -DskipTests
* Built tar from source (1.8.0_301): ok
 - mvn clean package  -Pdist -DskipTests -Dtar -Dmaven.javadoc.skip=true

HDFS and MapReduce functional testing looks good.

As per PR#4268, except for a few flakes, TestDistributedShell and
TestCsiClient are consistently failing.


On Tue, May 3, 2022 at 4:24 AM Steve Loughran 
wrote:

> I have put together a release candidate (rc0) for Hadoop 3.3.3
>
> The RC is available at:
> https://dist.apache.org/repos/dist/dev/hadoop/3.3.3-RC0/
>
> The git tag is release-3.3.3-RC0, commit d37586cbda3
>
> The maven artifacts are staged at
> https://repository.apache.org/content/repositories/orgapachehadoop-1348/
>
> You can find my public key at:
> https://dist.apache.org/repos/dist/release/hadoop/common/KEYS
>
> Change log
> https://dist.apache.org/repos/dist/dev/hadoop/3.3.3-RC0/CHANGELOG.md
>
> Release notes
> https://dist.apache.org/repos/dist/dev/hadoop/3.3.3-RC0/RELEASENOTES.md
>
> There's a very small number of changes, primarily critical code/packaging
> issues and security fixes.
>
>
>- The critical fixes which shipped in the 3.2.3 release.
>-  CVEs in our code and dependencies
>- Shaded client packaging issues.
>- A switch from log4j to reload4j
>
>
> reload4j is an active fork of the log4j 1.2.17 library with the classes which
> contain CVEs removed. Even though hadoop never used those classes, they
> regularly raised alerts on security scans and concern from users. Switching
> to the forked project allows us to ship a secure logging framework. It will
> complicate the builds of downstream maven/ivy/gradle projects which exclude
> our log4j artifacts, as they need to cut the new dependency instead/as
> well.
>
> See the release notes for details.
>
> This is my first release through the new docker build process, do please
> validate artifact signing &c to make sure it is good. I'll be trying builds
> of downstream projects.
>
> We know there are some outstanding issues with at least one library we are
> shipping (okhttp), but I don't want to hold this release up for it. If the
> docker based release process works smoothly enough we can do a followup
> security release in a few weeks.
>
> Please try the release and vote. The vote will run for 5 days.
>
> -Steve
>
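
[Editorial note: a hedged sketch of the downstream adjustment the release note describes. Projects that previously excluded `log4j:log4j` from Hadoop artifacts would exclude `ch.qos.reload4j:reload4j` as well; the artifact coordinates are taken from the reload4j project, not from this thread, and the client artifact/version shown are illustrative.]

```xml
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-client</artifactId>
  <version>3.3.3</version>
  <exclusions>
    <!-- previously sufficient on its own -->
    <exclusion>
      <groupId>log4j</groupId>
      <artifactId>log4j</artifactId>
    </exclusion>
    <!-- needed as well after the switch to reload4j -->
    <exclusion>
      <groupId>ch.qos.reload4j</groupId>
      <artifactId>reload4j</artifactId>
    </exclusion>
  </exclusions>
</dependency>
```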


Re: [VOTE] Release Apache Hadoop 3.3.3 (RC1)

2022-05-15 Thread Viraj Jasani
+1 (non-binding)

* Signature: ok
* Checksum : ok
* Rat check (1.8.0_301): ok
 - mvn clean apache-rat:check
* Built from source (1.8.0_301): ok
 - mvn clean install  -DskipTests
* Built tar from source (1.8.0_301): ok
 - mvn clean package  -Pdist -DskipTests -Dtar -Dmaven.javadoc.skip=true

HDFS, MapReduce and HBase (2.5) CRUD functional testing on
pseudo-distributed mode looks good.


On Wed, May 11, 2022 at 10:26 AM Steve Loughran 
wrote:

> I have put together a release candidate (RC1) for Hadoop 3.3.3
>
> The RC is available at:
> https://dist.apache.org/repos/dist/dev/hadoop/3.3.3-RC1/
>
> The git tag is release-3.3.3-RC1, commit d37586cbda3
>
> The maven artifacts are staged at
> https://repository.apache.org/content/repositories/orgapachehadoop-1349/
>
> You can find my public key at:
> https://dist.apache.org/repos/dist/release/hadoop/common/KEYS
>
> Change log
> https://dist.apache.org/repos/dist/dev/hadoop/3.3.3-RC1/CHANGELOG.md
>
> Release notes
> https://dist.apache.org/repos/dist/dev/hadoop/3.3.3-RC1/RELEASENOTES.md
>
> There's a very small number of changes, primarily critical code/packaging
> issues and security fixes.
>
> * The critical fixes which shipped in the 3.2.3 release.
> * CVEs in our code and dependencies
> * Shaded client packaging issues.
> * A switch from log4j to reload4j
>
> reload4j is an active fork of the log4j 1.2.17 library with the classes
> which contain CVEs removed. Even though hadoop never used those classes,
> they regularly raised alerts on security scans and concern from users.
> Switching to the forked project allows us to ship a secure logging
> framework. It will complicate the builds of downstream
> maven/ivy/gradle projects which exclude our log4j artifacts, as they
> need to cut the new dependency instead/as well.
>
> See the release notes for details.
>
> This is the second release attempt. It is the same git commit as before,
> but
> fully recompiled with another republish to maven staging, which has been
> verified by building spark, as well as a minimal test project.
>
> Please try the release and vote. The vote will run for 5 days.
>
> -Steve
>


Re: [VOTE] Release Apache Hadoop 2.10.2 - RC0

2022-05-27 Thread Viraj Jasani
+0 (non-binding),

* Signature/Checksum looks good, though I am not sure where
"target/artifacts" is coming from for the tars; here is the diff (this was
the case for 2.10.1 as well, but the checksums were correct):

1c1
< SHA512 (hadoop-2.10.2-site.tar.gz) =
3055a830003f5012660d92da68a317e15da5b73301c2c73cf618e724c67b7d830551b16928e0c28c10b66f04567e4b6f0b564647015bacc4677e232c0011537f
---
> SHA512 (target/artifacts/hadoop-2.10.2-site.tar.gz) =
3055a830003f5012660d92da68a317e15da5b73301c2c73cf618e724c67b7d830551b16928e0c28c10b66f04567e4b6f0b564647015bacc4677e232c0011537f
1c1
< SHA512 (hadoop-2.10.2-src.tar.gz) =
483b6a4efd44234153e21ffb63a9f551530a1627f983a8837c655ce1b8ef13486d7178a7917ed3f35525c338e7df9b23404f4a1b0db186c49880448988b88600
---
> SHA512 (target/artifacts/hadoop-2.10.2-src.tar.gz) =
483b6a4efd44234153e21ffb63a9f551530a1627f983a8837c655ce1b8ef13486d7178a7917ed3f35525c338e7df9b23404f4a1b0db186c49880448988b88600
1c1
< SHA512 (hadoop-2.10.2.tar.gz) =
13e95907073d815e3f86cdcc24193bb5eec0374239c79151923561e863326988c7f32a05fb7a1e5bc962728deb417f546364c2149541d6234221b00459154576
---
> SHA512 (target/artifacts/hadoop-2.10.2.tar.gz) =
13e95907073d815e3f86cdcc24193bb5eec0374239c79151923561e863326988c7f32a05fb7a1e5bc962728deb417f546364c2149541d6234221b00459154576

However, checksums are correct.
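
[Editorial note: the mismatch above is purely a path-prefix difference in the published .sha512 files; normalizing the prefix before diffing makes them compare clean. A small illustration, with the sample line mirroring the RC's format and the digest shortened:]

```shell
# A published line carries the build-time path prefix:
line='SHA512 (target/artifacts/hadoop-2.10.2.tar.gz) = 13e95907...'
# Strip the prefix so only the file name and digest are compared:
echo "$line" | sed 's|(target/artifacts/|(|'
# -> SHA512 (hadoop-2.10.2.tar.gz) = 13e95907...
```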

* Builds from source look good
 - mvn clean install  -DskipTests
 - mvn clean package  -Pdist -DskipTests -Dtar -Dmaven.javadoc.skip=true

* Rat check, if run before building from source locally, fails with error:

[ERROR] Plugin org.apache.hadoop:hadoop-maven-plugins:2.10.2 or one of its
dependencies could not be resolved: Could not find artifact
org.apache.hadoop:hadoop-maven-plugins:jar:2.10.2 in central (
https://repo.maven.apache.org/maven2) -> [Help 1]
[ERROR]

However, once we build locally, rat check passes (because
hadoop-maven-plugins 2.10.2 would be present in local .m2).
Also, hadoop-maven-plugins:2.10.2 is available here
https://repository.apache.org/content/repositories/orgapachehadoop-1350/org/apache/hadoop/hadoop-maven-plugins/2.10.2/

* Ran sample HDFS and MapReduce commands, look good.

Until we release the Hadoop artifacts, hadoop-maven-plugins for that release
would not be present in the central maven repository, hence I am still
wondering how the rat check failed only for this RC and not for any of the
previous release RCs. hadoop-vote.sh always runs the rat check before
building from source locally.


On Tue, May 24, 2022 at 7:41 PM Masatake Iwasaki <
iwasak...@oss.nttdata.co.jp> wrote:

> Hi all,
>
> Here's Hadoop 2.10.2 release candidate #0:
>
> The RC is available at:
>https://home.apache.org/~iwasakims/hadoop-2.10.2-RC0/
>
> The RC tag is at:
>https://github.com/apache/hadoop/releases/tag/release-2.10.2-RC0
>
> The Maven artifacts are staged at:
>https://repository.apache.org/content/repositories/orgapachehadoop-1350
>
> You can find my public key at:
>https://downloads.apache.org/hadoop/common/KEYS
>
> Please evaluate the RC and vote.
> The vote will be open for (at least) 5 days.
>
> Thanks,
> Masatake Iwasaki
>
> -
> To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
> For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org
>
>


Re: [VOTE] Release Apache Hadoop 2.10.2 - RC0

2022-05-27 Thread Viraj Jasani
Sounds good; yes, neither of them is a release blocker anyway (hence +0), I
just wanted to get more insights. Thanks for the detailed clarifications,
Ayush!

Changing my vote to +1 (non-binding).


On Fri, May 27, 2022 at 1:24 PM Ayush Saxena  wrote:

> The checksum stuff was addressed in HADOOP-16985, so that filename stuff
> is sorted only post 3.3.x
> BTW it is a known issue:
>
> https://issues.apache.org/jira/browse/HADOOP-16494?focusedCommentId=16927236&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16927236
>
> Must not be a blocker for us
>
> The RAT check failing with a dependency issue: that should also work post
> 3.3.x because there is no hadoop-maven-plugins dependency in the
> hadoop-yarn-api module post 3.3.x; HADOOP-16560 removed it.
> Ref:
> https://github.com/apache/hadoop/pull/1496/files#diff-f5d219eaf211871f9527ae48da59586e7e9958ea7649de74a1393e599caa6dd6L121-R122
>
> So, that is why the RAT check passes for 3.3.x+ without the need of this
> module. Committing HADOOP-16663, should solve this though.(I haven't tried
> though, just by looking at the problem)
>
> Good-to-have patches, but they don't look like blockers to me; kind of
> build-related stuff only, nothing bad with our core Hadoop code.
>
> -Ayush
>
> On Sat, 28 May 2022 at 01:04, Viraj Jasani  wrote:
>
>> +0 (non-binding),
>>
>> * Signature/Checksum looks good, though I am not sure where
>> "target/artifacts" is coming from for the tars, here is the diff (this was
>> the case for 2.10.1 as well but checksum was correct):
>>
>> 1c1
>> < SHA512 (hadoop-2.10.2-site.tar.gz) =
>>
>> 3055a830003f5012660d92da68a317e15da5b73301c2c73cf618e724c67b7d830551b16928e0c28c10b66f04567e4b6f0b564647015bacc4677e232c0011537f
>> ---
>> > SHA512 (target/artifacts/hadoop-2.10.2-site.tar.gz) =
>>
>> 3055a830003f5012660d92da68a317e15da5b73301c2c73cf618e724c67b7d830551b16928e0c28c10b66f04567e4b6f0b564647015bacc4677e232c0011537f
>> 1c1
>> < SHA512 (hadoop-2.10.2-src.tar.gz) =
>>
>> 483b6a4efd44234153e21ffb63a9f551530a1627f983a8837c655ce1b8ef13486d7178a7917ed3f35525c338e7df9b23404f4a1b0db186c49880448988b88600
>> ---
>> > SHA512 (target/artifacts/hadoop-2.10.2-src.tar.gz) =
>>
>> 483b6a4efd44234153e21ffb63a9f551530a1627f983a8837c655ce1b8ef13486d7178a7917ed3f35525c338e7df9b23404f4a1b0db186c49880448988b88600
>> 1c1
>> < SHA512 (hadoop-2.10.2.tar.gz) =
>>
>> 13e95907073d815e3f86cdcc24193bb5eec0374239c79151923561e863326988c7f32a05fb7a1e5bc962728deb417f546364c2149541d6234221b00459154576
>> ---
>> > SHA512 (target/artifacts/hadoop-2.10.2.tar.gz) =
>>
>> 13e95907073d815e3f86cdcc24193bb5eec0374239c79151923561e863326988c7f32a05fb7a1e5bc962728deb417f546364c2149541d6234221b00459154576
>>
>> However, checksums are correct.
>>
>> * Builds from source look good
>>  - mvn clean install  -DskipTests
>>  - mvn clean package  -Pdist -DskipTests -Dtar -Dmaven.javadoc.skip=true
>>
>> * Rat check, if run before building from source locally, fails with error:
>>
>> [ERROR] Plugin org.apache.hadoop:hadoop-maven-plugins:2.10.2 or one of its
>> dependencies could not be resolved: Could not find artifact
>> org.apache.hadoop:hadoop-maven-plugins:jar:2.10.2 in central (
>> https://repo.maven.apache.org/maven2) -> [Help 1]
>> [ERROR]
>>
>> However, once we build locally, rat check passes (because
>> hadoop-maven-plugins 2.10.2 would be present in local .m2).
>> Also, hadoop-maven-plugins:2.10.2 is available here
>>
>> https://repository.apache.org/content/repositories/orgapachehadoop-1350/org/apache/hadoop/hadoop-maven-plugins/2.10.2/
>>
>> * Ran sample HDFS and MapReduce commands, look good.
>>
>> Until we release the Hadoop artifacts, hadoop-maven-plugins for that
>> release will not be present in the central Maven repository, hence I am
>> still wondering how the rat check failed only for this RC and not for any
>> of the previous release RCs. hadoop-vote.sh always runs the rat check
>> before building from source locally.
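>> A pre-flight check along these lines could make the required ordering
>> explicit in verification scripts. This is only a sketch: the repository
>> layout is simulated in a temp dir, and the paths/version are illustrative.

```shell
#!/bin/sh
# Sketch: hadoop-maven-plugins:<rc-version> only exists in the local repo
# after "mvn clean install", since it is not in Central until the release
# is published. Check for it before attempting apache-rat:check on an RC
# source tree.
set -e
version=2.10.2
m2repo=$(mktemp -d)   # stand-in for ~/.m2/repository
plugin="$m2repo/org/apache/hadoop/hadoop-maven-plugins/$version/hadoop-maven-plugins-$version.jar"

rat_precheck() {
  if [ -f "$plugin" ]; then
    echo "plugin present: 'mvn clean apache-rat:check' can run directly"
  else
    echo "plugin missing: run 'mvn clean install -DskipTests' first"
  fi
}

rat_precheck                                      # fresh machine: missing
mkdir -p "$(dirname "$plugin")" && : > "$plugin"  # simulate local install
rat_precheck                                      # now resolvable
```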
>>
>>
>> On Tue, May 24, 2022 at 7:41 PM Masatake Iwasaki <
>> iwasak...@oss.nttdata.co.jp> wrote:
>>
>> > Hi all,
>> >
>> > Here's Hadoop 2.10.2 release candidate #0:
>> >
>> > The RC is available at:
>> >https://home.apache.org/~iwasakims/hadoop-2.10.2-RC0/
>> >
>> > The RC tag is at:
>> >https://github.com/apache/hadoop/releases/tag/release-2.10.2-RC0
>> >
>> > The Maven artifacts are staged at:
>> >
>> https://repository.apache.org/content/repositories/orgapachehadoop-1350
>> >
>> > You can find my public key at:
>> >https://downloads.apache.org/hadoop/common/KEYS
>> >
>> > Please evaluate the RC and vote.
>> > The vote will be open for (at least) 5 days.
>> >
>> > Thanks,
>> > Masatake Iwasaki
>> >
>> > -
>> > To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
>> > For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org
>> >
>> >
>>
>


Re: [VOTE] Release Apache Hadoop 3.3.5

2022-12-27 Thread Viraj Jasani
-0 (non-binding)

Output of hadoop-vote.sh:

* Signature: ok
* Checksum : ok
* Rat check (1.8.0_341): ok
 - mvn clean apache-rat:check
* Built from source (1.8.0_341): ok
 - mvn clean install  -DskipTests
* Built tar from source (1.8.0_341): ok
 - mvn clean package  -Pdist -DskipTests -Dtar -Dmaven.javadoc.skip=true

Manual testing on local mini cluster:
* Basic CRUD tests on Hdfs look good
* Sample MapReduce job looks good
* S3A tests look good with scale profile (ITestS3AContractUnbuffer is
flaky, but when run individually, it passes)

Full build with all modules UT results for branch-3.3.5 latest HEAD are
available on
https://ci-hadoop.apache.org/view/Hadoop/job/hadoop-qbt-3.3.5-java8-linux-x86_64/

From the above build, there are some consistently failing tests. Of these,
only TestDataNodeRollingUpgrade passed locally; the rest fail consistently
locally as well. We might want to fix (or ignore, if required) them:

org.apache.hadoop.hdfs.TestErasureCodingPolicyWithSnapshot#testSnapshotsOnErasureCodingDirAfterNNRestart
org.apache.hadoop.hdfs.TestFileLengthOnClusterRestart#testFileLengthWithHSyncAndClusterRestartWithOutDNsRegister
org.apache.hadoop.hdfs.TestLeaseRecovery2#testHardLeaseRecoveryAfterNameNodeRestart
org.apache.hadoop.hdfs.TestLeaseRecovery2#testHardLeaseRecoveryAfterNameNodeRestart2
org.apache.hadoop.hdfs.TestLeaseRecovery2#testHardLeaseRecoveryWithRenameAfterNameNodeRestart
org.apache.hadoop.hdfs.server.datanode.TestDataNodeRollingUpgrade#testWithLayoutChangeAndFinalize
org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshot#testSnapshotOpsOnRootReservedPath
org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotBlocksMap#testReadRenamedSnapshotFileWithCheckpoint
org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotDeletion#testApplyEditLogForDeletion
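To double-check each of the tests above in isolation, something like the
following can generate per-test Surefire invocations. The module path is an
assumption (the listed tests live under hadoop-hdfs); the commands are
printed rather than run here, and only two tests are shown as examples.

```shell
#!/bin/sh
# Sketch: emit one "mvn test" command per consistently failing test,
# using Surefire's Class#method selector for -Dtest.
set -e
tests="
org.apache.hadoop.hdfs.TestLeaseRecovery2#testHardLeaseRecoveryAfterNameNodeRestart
org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshot#testSnapshotOpsOnRootReservedPath
"
for t in $tests; do
  # ${t##*.} drops the package, leaving Class#method for -Dtest.
  echo "mvn -pl hadoop-hdfs-project/hadoop-hdfs test -Dtest=${t##*.}"
done
```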



On Wed, Dec 21, 2022 at 11:29 AM Steve Loughran 
wrote:

> Mukund and I have put together a release candidate (RC0) for Hadoop 3.3.5.
>
> Given the time of year it's a bit unrealistic to run a 5 day vote and
> expect people to be able to test it thoroughly enough to make this the one
> we can ship.
>
> What we would like is for anyone who can to verify the tarballs and test
> the binaries, especially anyone who can try the arm64 binaries. We've got
> the building of those done, and the build file will now incorporate them
> into the release -but neither of us has actually tested that yet. Maybe I
> should try it on my pi400 over xmas.
>
> The maven artifacts are up on the apache staging repo -they are the ones
> from x86 build. Building and testing downstream apps will be incredibly
> helpful.
>
> The RC is available at:
> https://dist.apache.org/repos/dist/dev/hadoop/hadoop-3.3.5-RC0/
>
> The git tag is release-3.3.5-RC0, commit 3262495904d
>
> The maven artifacts are staged at
> https://repository.apache.org/content/repositories/orgapachehadoop-1365/
>
> You can find my public key at:
> https://dist.apache.org/repos/dist/release/hadoop/common/KEYS
>
> Change log
> https://dist.apache.org/repos/dist/dev/hadoop/hadoop-3.3.5-RC0/CHANGELOG.md
>
> Release notes
>
> https://dist.apache.org/repos/dist/dev/hadoop/hadoop-3.3.5-RC0/RELEASENOTES.md
>
> This is off branch-3.3 and is the first big release since 3.3.2.
>
> Key changes include
>
> * Big update of dependencies to try and keep those reports of
>   transitive CVEs under control -both genuine and false positive.
> * HDFS RBF enhancements
> * Critical fix to ABFS input stream prefetching for correct reading.
> * Vectored IO API for all FSDataInputStream implementations, with
>   high-performance versions for file:// and s3a:// filesystems.
>   file:// through java native io
>   s3a:// parallel GET requests.
> * This release includes Arm64 binaries. Please can anyone with
>   compatible systems validate these.
>
>
> Please try the release and vote on it, even though I don't know what a
> good timeline is here... I'm actually going on holiday in early Jan. Mukund
> is around and so can drive the process while I'm offline.
>
> Assuming we do have another iteration, the RC1 will not be before mid jan
> for that reason
>
> Steve (and mukund)
>


Socket timeout settings

2023-02-24 Thread Viraj Jasani
We have a specific environment where we need to harmonize socket connection
timeouts for all Hadoop daemons and some downstream projects too. While
reviewing the socket connection timeouts set in NetUtils and UrlConnection
(HttpURLConnection), I compiled the following list of configurations:

   - ipc.client.connect.timeout
   - dfs.client.socket-timeout
   - dfs.datanode.socket.write.timeout
   - dfs.client.fsck.connect.timeout
   - dfs.client.fsck.read.timeout
   - dfs.federation.router.connect.timeout
   - dfs.qjournal.http.open.timeout.ms
   - dfs.qjournal.http.read.timeout.ms
   - dfs.checksum.ec.socket-timeout
   - hadoop.security.kms.client.timeout
   - mapreduce.reduce.shuffle.connect.timeout
   - mapreduce.reduce.shuffle.read.timeout


Moreover, although “dfs.datanode.socket.reuse.keepalive” is not a direct
socket timeout, we set it as SocketOptions#SO_TIMEOUT when opsProcessed != 0
(so that a read on the InputStream blocks only for this timeout, beyond
which it results in a SocketTimeoutException). Similarly,
“ipc.ping.interval” and “ipc.client.rpc-timeout.ms” are also used to set
SocketOptions#SO_TIMEOUT on the socket.
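One way to harmonize these is to derive every override from a single value
at launch time and pass them as generic -D options (or bake them into a
site config). A sketch; the 60000 ms value and the subset of keys are
illustrative assumptions, not recommendations:

```shell
#!/bin/sh
# Sketch: build a set of -D overrides from one timeout value so every
# daemon/command sees harmonized socket-related timeouts.
set -e
timeout_ms=60000
keys="
ipc.client.connect.timeout
dfs.client.socket-timeout
dfs.datanode.socket.write.timeout
dfs.qjournal.http.open.timeout.ms
dfs.qjournal.http.read.timeout.ms
"
opts=""
for k in $keys; do
  opts="$opts -D$k=$timeout_ms"
done
# e.g. pass $opts to Tool-based commands as generic options (not JVM flags)
echo "overrides:$opts"
```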

It's possible that I may have missed some socket timeout configs in the
above list. If anyone could provide feedback on this list or suggest any
missing configs, it would be greatly appreciated.


Re: [VOTE] Release Apache Hadoop 3.3.5 (RC2)

2023-03-02 Thread Viraj Jasani
While this RC is not going to be final, I just wanted to share the results
of the testing I have done so far with RC1 and RC2.

* Signature: ok
* Checksum : ok
* Rat check (1.8.0_341): ok
 - mvn clean apache-rat:check
* Built from source (1.8.0_341): ok
 - mvn clean install  -DskipTests
* Built tar from source (1.8.0_341): ok
 - mvn clean package  -Pdist -DskipTests -Dtar -Dmaven.javadoc.skip=true

* Built images using the tarball, installed and started all of Hdfs, JHS
and Yarn components
* Ran Hbase (latest 2.5) tests against Hdfs, ran RowCounter Mapreduce job
* Hdfs CRUD tests
* MapReduce wordcount job

* Ran S3A tests with scale profile against us-west-2:
mvn clean verify -Dparallel-tests -DtestsThreadCount=8 -Dscale

ITestS3AConcurrentOps#testParallelRename is timing out after ~960s. It is
failing consistently and looks like a recent regression.
I was also able to reproduce it on trunk; will create a Jira.


On Mon, Feb 27, 2023 at 9:59 AM Steve Loughran 
wrote:

> Mukund and I have put together a release candidate (RC2) for Hadoop 3.3.5.
>
> We need anyone who can to verify the source and binary artifacts,
> including those JARs staged on maven, the site documentation and the arm64
> tar file.
>
> The RC is available at:
> https://dist.apache.org/repos/dist/dev/hadoop/hadoop-3.3.5-RC2/
>
> The git tag is release-3.3.5-RC2, commit 72f8c2a4888
>
> The maven artifacts are staged at
> https://repository.apache.org/content/repositories/orgapachehadoop-1369/
>
> You can find my public key at:
> https://dist.apache.org/repos/dist/release/hadoop/common/KEYS
>
> Change log
> https://dist.apache.org/repos/dist/dev/hadoop/hadoop-3.3.5-RC2/CHANGELOG.md
>
> Release notes
>
> https://dist.apache.org/repos/dist/dev/hadoop/hadoop-3.3.5-RC2/RELEASENOTES.md
>
> This is off branch-3.3 and is the first big release since 3.3.2.
>
> As to what changed since the RC1 attempt last week
>
>
>1. Version fixup in JIRA (credit due to Takanobu Asanuma there)
>2. HADOOP-18470. Remove HDFS RBF text in the 3.3.5 index.md file
>3. Revert "HADOOP-18590. Publish SBOM artifacts (#5281)" (creating build
>issues in maven 3.9.0)
>4. HADOOP-18641. Cloud connector dependency and LICENSE fixup. (#5429)
>
>
> Note, because the arm64 binaries are built separately on a different
> platform and JVM, their jar files may not match those of the x86
> release -and therefore the maven artifacts. I don't think this is
> an issue (the ASF actually releases source tarballs, the binaries are
> there for help only, though with the maven repo that's a bit blurred).
>
> The only way to be consistent would be to actually untar the x86.tar.gz,
> overwrite its binaries with the arm artifacts, retar, sign and push out
> for the vote. Even automating that would be risky.
>
> Please try the release and vote. The vote will run for 5 days.
>
> Steve and Mukund
>


Re: [VOTE] Release Apache Hadoop 3.3.5 (RC2)

2023-03-04 Thread Viraj Jasani
A minor update on ITestS3AConcurrentOps#testParallelRename

I was previously connected to a VPN, which was throttling my bandwidth. I ran
the test again today without the VPN and had no issues (earlier, only 40% of
the overall putObject calls completed within the timeout).


On Sat, Mar 4, 2023 at 4:29 AM Steve Loughran 
wrote:

> On Sat, 4 Mar 2023 at 01:47, Erik Krogen  wrote:
>
> > Thanks Steve. I see now that the branch cut was way back in October so I
> > definitely understand your frustration here!
> >
> > This made me realize that HDFS-16832
> > <https://issues.apache.org/jira/browse/HDFS-16832>, which resolves a
> very
> > similar issue as the aforementioned HDFS-16923, is also missing from the
> > RC. I erroneously marked it with a fix version of 3.3.5 -- it was before
> > the initial 3.3.5 RC was made and I didn't notice the branch was cut. My
> > apologies for that. I've pushed both HDFS-16832 and HDFS-16932 to
> > branch-3.3.5, so they are ready if/when an RC3 is cut.
> >
>
> thanks.
>
> >
> > In the meantime, I tested for RC2 that a local cluster of NN + standby +
> > observer + QJM works as expected for some basic HDFS commands.
> >
>
> OK. Could you have a go with a (locally built) patch release
>
> >
> > On Fri, Mar 3, 2023 at 2:52 AM Steve Loughran
> 
> > wrote:
> >
> >> shipping broken hdfs isn't something we'd want to do, but if we can be
> >> confident that all other issues can be addressed in RC3 then I'd be
> happy.
> >>
> >> On Fri, 3 Mar 2023 at 05:09, Ayush Saxena  wrote:
> >>
> >> >> I will highlight that I am completely fed up with doing this release
> >> >> and really want to get it out of the way -for which I depend on
> >> >> support from as many other developers as possible.
> >> >
> >> >
> >> > hmm, I can feel the pain. I tried to find any config or workaround
> >> > which can dodge this HDFS issue, but unfortunately couldn't find any.
> >> > If someone does a getListing with needLocation and the file doesn't
> >> > exist at the Observer, he is gonna get an NPE rather than an FNF. It
> >> > isn't just the exception: AFAIK Observer reads have some logic around
> >> > handling FNF specifically, where it attempts the Active NN or
> >> > something like that in such cases. So, that will be broken as well for
> >> > this use case.
> >> >
> >> > Now, there is no denying the fact that there is an issue on the HDFS
> >> > side, and it has already been too much work on your side, so you can
> >> > argue that it might not be a very frequent use case. It's your call.
> >> >
> >> > Just sharing, with no intention of saying you should do that. But as
> >> > an RM, "nobody" can force you into a new iteration of an RC; it is
> >> > gonna be your call and discretion. As far as I know, a release cannot
> >> > be vetoed by "anybody" as per the Apache bylaws
> >> > (https://www.apache.org/legal/release-policy.html#release-approval).
> >> > Even our bylaws say that a product release requires a Lazy Majority,
> >> > not a Consensus Approval.
> >> >
> >> > So, you have a way out. You guys are 2 already, and the 1 more I will
> >> > give you as a pass in case you are really in a "get me out of this
> >> > mess" state; my basic validations on x86 & Aarch64 are both passing as
> >> > of now, though I couldn't reach the end for any of the RCs.
> >> >
> >> > -Ayush
> >> >
> >> > On Fri, 3 Mar 2023 at 08:41, Viraj Jasani  wrote:
> >> >
> >> >> While this RC is not going to be final, I just wanted to share the
> >> results
> >> >> of the testing I have done so far with RC1 and RC2.
> >> >>
> >> >> * Signature: ok
> >> >> * Checksum : ok
> >> >> * Rat check (1.8.0_341): ok
> >> >>  - mvn clean apache-rat:check
> >> >> * Built from source (1.8.0_341): ok
> >> >>  - mvn clean install  -DskipTests
> >> >> * Built tar from source (1.8.0_341): ok
> >> >>  - mvn clean package  -Pdist -DskipTests -Dtar
> >> -Dmaven.javadoc.skip=true
> >> >>
> >> >> * Built images using the tarball, installed and started all of Hdfs,
> >> >> JHS and Yarn components

Re: [VOTE] Release Apache Hadoop 3.3.5 (RC3)

2023-03-17 Thread Viraj Jasani
+1 (non-binding)

* Signature/Checksum: ok
* Rat check (1.8.0_341): ok
 - mvn clean apache-rat:check
* Built from source (1.8.0_341): ok
 - mvn clean install  -DskipTests
* Built tar from source (1.8.0_341): ok
 - mvn clean package  -Pdist -DskipTests -Dtar -Dmaven.javadoc.skip=true

Containerized deployments:
* Deployed and started Hdfs - NN, DN, JN with Hbase 2.5 and Zookeeper 3.7
* Deployed and started JHS, RM, NM
* Hbase and hdfs CRUD operations look good
* Sample RowCount MapReduce job looks good

* S3A tests with scale profile look good


On Wed, Mar 15, 2023 at 12:48 PM Steve Loughran 
wrote:

> Apache Hadoop 3.3.5
>
> Mukund and I have put together a release candidate (RC3) for Hadoop 3.3.5.
>
> What we would like is for anyone who can to verify the tarballs, especially
> anyone who can try the arm64 binaries as we want to include them too.
>
> The RC is available at:
> https://dist.apache.org/repos/dist/dev/hadoop/hadoop-3.3.5-RC3/
>
> The git tag is release-3.3.5-RC3, commit 706d88266ab
>
> The maven artifacts are staged at
> https://repository.apache.org/content/repositories/orgapachehadoop-1369/
>
> You can find my public key at:
> https://dist.apache.org/repos/dist/release/hadoop/common/KEYS
>
> Change log
> https://dist.apache.org/repos/dist/dev/hadoop/hadoop-3.3.5-RC3/CHANGELOG.md
>
> Release notes
>
> https://dist.apache.org/repos/dist/dev/hadoop/hadoop-3.3.5-RC3/RELEASENOTES.md
>
> This is off branch-3.3 and is the first big release since 3.3.2.
>
> Key changes include
>
> * Big update of dependencies to try and keep those reports of
>   transitive CVEs under control -both genuine and false positives.
> * HDFS RBF enhancements
> * Critical fix to ABFS input stream prefetching for correct reading.
> * Vectored IO API for all FSDataInputStream implementations, with
>   high-performance versions for file:// and s3a:// filesystems.
>   file:// through java native io
>   s3a:// parallel GET requests.
> * This release includes Arm64 binaries. Please can anyone with
>   compatible systems validate these.
> * and compared to the previous RC, all the major changes are
>   HDFS issues.
>
> Note, because the arm64 binaries are built separately on a different
> platform and JVM, their jar files may not match those of the x86
> release -and therefore the maven artifacts. I don't think this is
> an issue (the ASF actually releases source tarballs, the binaries are
> there for help only, though with the maven repo that's a bit blurred).
>
> The only way to be consistent would be to actually untar the x86.tar.gz,
> overwrite its binaries with the arm artifacts, retar, sign and push out
> for the vote. Even automating that would be risky.
>
> Please try the release and vote. The vote will run for 5 days.
>
> -Steve
>


Re: [DISCUSS] hadoop branch-3.3+ going to java11 only

2023-03-28 Thread Viraj Jasani
IIRC some of the ongoing major dependency upgrades (log4j 1 to 2, jersey 1
to 2 and junit 4 to 5) are blockers for java 11 compile + test stability.


On Tue, Mar 28, 2023 at 4:55 AM Steve Loughran 
wrote:

>  Now that hadoop 3.3.5 is out, i want to propose something new
>
> we switch branch-3.3 and trunk to being java11 only
>
>
>1. java 11 has been out for years
>2. oracle java 8 is no longer available under "premier support"; you
>can't really get upgrades
>https://www.oracle.com/java/technologies/java-se-support-roadmap.html
>3. openJDK 8 releases != oracle ones, and things you compile with them
>don't always link to oracle java 8 (some classes in java.nio have added
>more overrides)
>4. more and more libraries we want to upgrade to/bundle are java 11 only
>5. moving to java 11 would cut our yetus build workload in half, and
>line up for adding java 17 builds instead.
>
>
> I know there are still some outstanding issues in
> https://issues.apache.org/jira/browse/HADOOP-16795 -but are they blockers?
> Could we just move to java11 and enhance at our leisure, once java8 is no
> longer a concern?
>


Re: [VOTE] Release Apache Hadoop 3.3.6 RC1

2023-06-21 Thread Viraj Jasani
+1 (non-binding)

* Signature: ok
* Checksum: ok
* Rat check (1.8.0_362): ok
- mvn clean apache-rat:check
* Built from source (1.8.0_362): ok
- mvn clean install -DskipTests
* Built tar from source (1.8.0_362): ok
- mvn clean package -Pdist -DskipTests -Dtar -Dmaven.javadoc.skip=true

* Brought up pseudo-distributed mode cluster with hdfs, zk 3.7 and hbase 2.5
* Basic RBF tests look good
* S3A tests with input stream prefetch enabled: looks good


On Sun, Jun 18, 2023 at 5:53 PM Wei-Chiu Chuang  wrote:

> I am inviting anyone to try and vote on this release candidate.
>
> Note:
> This is exactly the same as RC0, except the CHANGELOG.
>
> The RC is available at:
> https://home.apache.org/~weichiu/hadoop-3.3.6-RC1-amd64/ (for amd64)
> https://home.apache.org/~weichiu/hadoop-3.3.6-RC1-arm64/ (for arm64)
>
> Git tag: release-3.3.6-RC1
> https://github.com/apache/hadoop/releases/tag/release-3.3.6-RC1
>
> Maven artifacts is built by x86 machine and are staged at
> https://repository.apache.org/content/repositories/orgapachehadoop-1380/
>
> My public key:
> https://dist.apache.org/repos/dist/release/hadoop/common/KEYS
>
> Changelog:
> https://home.apache.org/~weichiu/hadoop-3.3.6-RC1-amd64/CHANGELOG.md
>
> Release notes:
> https://home.apache.org/~weichiu/hadoop-3.3.6-RC1-amd64/RELEASENOTES.md
>
> This is a relatively small release (by Hadoop standards) containing about
> 120 commits.
> Please give it a try, this RC vote will run for 7 days.
>
>
> Feature highlights:
>
> SBOM artifacts
> 
> Starting from this release, Hadoop publishes Software Bill of Materials
> (SBOM) using
> CycloneDX Maven plugin. For more information about SBOM, please go to
> [SBOM](https://cwiki.apache.org/confluence/display/COMDEV/SBOM).
>
> HDFS RBF: RDBMS based token storage support
> 
> HDFS Router-Router Based Federation now supports storing delegation tokens
> on MySQL,
> [HADOOP-18535](https://issues.apache.org/jira/browse/HADOOP-18535)
> which improves token operation throughput over the original Zookeeper-based
> implementation.
>
>
> New File System APIs
> 
> [HADOOP-18671](https://issues.apache.org/jira/browse/HADOOP-18671) moved a
> number of
> HDFS-specific APIs to Hadoop Common to make it possible for certain
> applications that
> depend on HDFS semantics to run on other Hadoop compatible file systems.
>
> In particular, recoverLease() and isFileClosed() are exposed through the
> LeaseRecoverable interface, while setSafeMode() is exposed through the
> SafeMode interface.
>


Re: HADOOP-18207 hadoop-logging module about to land

2023-07-27 Thread Viraj Jasani
Thank you Wei-Chiu for the thread and extensive help with reviews! Thank
you Ayush for responding to the thread!
Let me try to address some points.

Please pardon my ignorance if I am not supposed to respond to any of the
questions.

> Regarding this entire activity including the parent tickets: Do we have
any dev list agreement for this?

HADOOP-16206 was created back in March 2019 and there has been a ton of
discussion on the Jira since then. Duo is an expert and he has also worked
with our esteemed Log4j community to introduce changes that promise great
benefits for both the hbase and hadoop projects (for instance, [1]). He has
laid out a plan to tackle the whole migration one small piece at a time,
and there has been enough agreement on the Jira from Hadoop experts; some
Log4j community members also chimed in and provided their feedback, and it
was agreed to proceed with Duo's proposed plan and integrate the changes
into trunk. This will enable us to stabilize the work gradually over time.
The Jira has received many interactions over the past few years.


> What incompatibilities have been introduced till now for this, and what
are planned?

Let me list what has been done so far; that might make it easier to discuss:


   - HADOOP-18206  removed
   commons-logging references; the project is no longer under active
   development (last release in 2014,
   https://github.com/apache/commons-logging/tags), and without this
   cleanup it becomes very difficult to chase log4j changes. No direct
   incompatibility involved.
   - HADOOP-18653 
   follow-up to use the slf4j log4j adaptor, so that slf4j is in the
   classpath before we update the log level (dynamically changing the log
   level using the servlet). No incompatibility introduced.
   - HADOOP-18648  kms
   log4j properties should not be loaded dynamically, as this is no longer
   supported by log4j2; instead, use HADOOP_OPTS to provide the log4j
   properties location. No incompatibility introduced.
   - HADOOP-18654 
   TaskLogAppender is not being used; remove it. It was marked IA.Private
   and IS.Unstable. No incompatibility introduced.
   - HADOOP-18669  removed the
   Log4Json layout, as it is more suitable to be part of the Log4j project
   than of Hadoop, and it is not being used anywhere. For each appender we
   maintain, we pay a maintenance cost. No incompatibility introduced.
   - HADOOP-18649  CLA
   and CLRA appenders to be replaced with the log4j RFA appender. Both CLA
   and CLRA have been our custom appenders, and since they provide the same
   capabilities as RFA, maintaining them in our project would add cost to
   any future upgrades of log4j. This was also agreed upon on the parent
   Jira well before the work started.
   - HADOOP-18631  Migrate
   dynamic async appenders to log4j properties. This is *an incompatible
   change* because we replace "hadoop site configs" with "log4j
   properties". We are not losing our capability to generate async logs
   for the namenode audit, but the way to configure it is now different. The
   release notes have been updated to reflect the same. For the log4j
   upgrade, we don't have a choice here: log4j2 only supports async loggers
   via configuration, not as programmatically loaded appenders. The log4j
   properties to configure this are provided at
   
https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/resources/log4j.properties#L64-L81


As for the current task that introduced the hadoop-logging module, HADOOP-18207
, we don't have any
incompatibility yet, because our direct usage of log4j APIs and custom
appenders has not been marked IA.Public.

The major incompatibility is going to be introduced when we add log4j2 in
our classpath and remove log4j1 from our dependencies. This work has not
yet begun and it's going to take a while.


> What does this activity bring up for the downstream folks adapting this?
Upgrading Hadoop is indeed important for a lot of projects

Downstream projects should expect to no longer get log4j1 as a transitive
dependency from hadoop; instead, they would get log4j2 as a transitive
dependency (only after the whole upgrade is done -- the log4j2 upgrade has
not even started, as I mentioned above :)).

This brings up an interesting question: why do we need this upgrade? For us,
almost all of the hadoop ecosystem projects that we use have migrated to
log4j2, and while we keep common thirdparty dependencies to be used by all
hadoop downstreamers, we can still not use log4j2 because hado

Full builds not reporting status back to Github PRs

2024-05-28 Thread Viraj Jasani
For some of the recent PRs that require full build runs, it seems the
builds are not able to get their results posted back to the PRs. Just
wanted to check if anyone has any ideas regarding this.

Sample recent PRs:

   - https://github.com/apache/hadoop/pull/6830
   - https://github.com/apache/hadoop/pull/6247
   - https://github.com/apache/hadoop/pull/6842


Re: Full builds not reporting status back to Github PRs

2024-05-28 Thread Viraj Jasani
Most importantly, the Yetus report is usually not available in such cases, e.g.
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6830/2/Yetus_20Report/


On Tue, May 28, 2024 at 2:05 PM Viraj Jasani  wrote:

> Some of the recent PRs that require full build runs, it seems that the
> builds are not able to get the build results posted to the PRs. Just wanted
> to check if anyone has any ideas reg this.
>
> Sample recent PRs:
>
>- https://github.com/apache/hadoop/pull/6830
>- https://github.com/apache/hadoop/pull/6247
>- https://github.com/apache/hadoop/pull/6842
>
>
>


[jira] [Created] (HADOOP-17642) Could not instantiate class org.apache.hadoop.log.metrics.EventCounter

2021-04-16 Thread Viraj Jasani (Jira)
Viraj Jasani created HADOOP-17642:
-

 Summary: Could not instantiate class 
org.apache.hadoop.log.metrics.EventCounter
 Key: HADOOP-17642
 URL: https://issues.apache.org/jira/browse/HADOOP-17642
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Viraj Jasani
Assignee: Viraj Jasani


After the removal of the EventCounter class, we are not able to bring up an HDFS cluster.
{code:java}
log4j:ERROR Could not instantiate class 
[org.apache.hadoop.log.metrics.EventCounter].
java.lang.ClassNotFoundException: org.apache.hadoop.log.metrics.EventCounter
at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
at java.lang.ClassLoader.loadClass(ClassLoader.java:418)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:355)
at java.lang.ClassLoader.loadClass(ClassLoader.java:351)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:264)
at org.apache.log4j.helpers.Loader.loadClass(Loader.java:198)
at 
org.apache.log4j.helpers.OptionConverter.instantiateByClassName(OptionConverter.java:327)
at 
org.apache.log4j.helpers.OptionConverter.instantiateByKey(OptionConverter.java:124)
at 
org.apache.log4j.PropertyConfigurator.parseAppender(PropertyConfigurator.java:785)
at 
org.apache.log4j.PropertyConfigurator.parseCategory(PropertyConfigurator.java:768)
at 
org.apache.log4j.PropertyConfigurator.configureRootCategory(PropertyConfigurator.java:648)
at 
org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:514)
at 
org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:580)
at 
org.apache.log4j.helpers.OptionConverter.selectAndConfigure(OptionConverter.java:526)
at org.apache.log4j.LogManager.(LogManager.java:127)
at org.slf4j.impl.Log4jLoggerFactory.(Log4jLoggerFactory.java:66)
at org.slf4j.impl.StaticLoggerBinder.(StaticLoggerBinder.java:72)
at 
org.slf4j.impl.StaticLoggerBinder.(StaticLoggerBinder.java:45)
at org.slf4j.LoggerFactory.bind(LoggerFactory.java:150)
at org.slf4j.LoggerFactory.performInitialization(LoggerFactory.java:124)
at org.slf4j.LoggerFactory.getILoggerFactory(LoggerFactory.java:417)
at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:362)
at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:388)
at org.apache.hadoop.conf.Configuration.(Configuration.java:229)
at org.apache.hadoop.hdfs.tools.GetConf.(GetConf.java:131)
log4j:ERROR Could not instantiate appender named "EventCounter".
{code}
We need to clean up log4j.properties to avoid instantiating the EventCounter 
appender.
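The cleanup can be mechanical. A sketch against a made-up minimal
log4j.properties (the real file has many more entries):

```shell
#!/bin/sh
# Sketch: remove the EventCounter appender definition and its reference
# from a log4j.properties. Sample content is illustrative only.
set -e
cd "$(mktemp -d)"
cat > log4j.properties <<'EOF'
log4j.rootLogger=INFO,console,EventCounter
log4j.appender.EventCounter=org.apache.hadoop.log.metrics.EventCounter
log4j.appender.console=org.apache.log4j.ConsoleAppender
EOF

# Drop the appender definition, and its name from the root logger list.
sed -i -e '/^log4j\.appender\.EventCounter=/d' \
       -e 's/,EventCounter//' log4j.properties
cat log4j.properties
```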



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-17676) Restrict imports from org.apache.curator.shaded

2021-04-29 Thread Viraj Jasani (Jira)
Viraj Jasani created HADOOP-17676:
-

 Summary: Restrict imports from org.apache.curator.shaded
 Key: HADOOP-17676
 URL: https://issues.apache.org/jira/browse/HADOOP-17676
 Project: Hadoop Common
  Issue Type: Task
Reporter: Viraj Jasani
Assignee: Viraj Jasani


Once HADOOP-17653 gets in, we should ban "org.apache.curator.shaded" imports as 
discussed on PR#2945. We can use an enforcer rule to restrict imports such that 
the mvn build fails if they are ever used.

Thanks for the suggestion [~weichiu] [~aajisaka] [~ste...@apache.org]
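Until the enforcer rule lands, a grep-based check conveys the idea. This is
only a stand-in for the Maven enforcer rule, run against a made-up offending
file:

```shell
#!/bin/sh
# Sketch: fail when any source file imports from org.apache.curator.shaded,
# approximating what the Maven enforcer rule would do at build time.
set -e
src=$(mktemp -d)
cat > "$src/Bad.java" <<'EOF'
import org.apache.curator.shaded.com.google.common.collect.Sets;
public class Bad {}
EOF

check_imports() {
  if grep -R -l 'import org\.apache\.curator\.shaded' "$1" >/dev/null; then
    echo "banned import found under $1"
    return 1
  fi
  echo "clean"
}

check_imports "$src" || echo "build would fail here"
```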



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-17700) ExitUtil#halt info log with incorrect placeholders

2021-05-16 Thread Viraj Jasani (Jira)
Viraj Jasani created HADOOP-17700:
-

 Summary: ExitUtil#halt info log with incorrect placeholders
 Key: HADOOP-17700
 URL: https://issues.apache.org/jira/browse/HADOOP-17700
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Viraj Jasani
Assignee: Viraj Jasani


ExitUtil#halt with a non-zero exit status code produces an info log with an 
incorrect number of placeholders. We should include the HaltException in the log.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-17720) Replace Guava Sets usage by Hadoop's own Sets in HDFS

2021-05-20 Thread Viraj Jasani (Jira)
Viraj Jasani created HADOOP-17720:
-

 Summary: Replace Guava Sets usage by Hadoop's own Sets in HDFS
 Key: HADOOP-17720
 URL: https://issues.apache.org/jira/browse/HADOOP-17720
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Viraj Jasani
Assignee: Viraj Jasani






--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-17721) Replace Guava Sets usage by Hadoop's own Sets in Yarn

2021-05-20 Thread Viraj Jasani (Jira)
Viraj Jasani created HADOOP-17721:
-

 Summary: Replace Guava Sets usage by Hadoop's own Sets in Yarn
 Key: HADOOP-17721
 URL: https://issues.apache.org/jira/browse/HADOOP-17721
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Viraj Jasani
Assignee: Viraj Jasani









[jira] [Created] (HADOOP-17722) Replace Guava Sets usage by Hadoop's own Sets in MapReduce

2021-05-20 Thread Viraj Jasani (Jira)
Viraj Jasani created HADOOP-17722:
-

 Summary: Replace Guava Sets usage by Hadoop's own Sets in MapReduce
 Key: HADOOP-17722
 URL: https://issues.apache.org/jira/browse/HADOOP-17722
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Viraj Jasani
Assignee: Viraj Jasani









[jira] [Created] (HADOOP-17726) Replace Sets#newHashSet() and newTreeSet() with constructors directly

2021-05-21 Thread Viraj Jasani (Jira)
Viraj Jasani created HADOOP-17726:
-

 Summary: Replace Sets#newHashSet() and newTreeSet() with 
constructors directly
 Key: HADOOP-17726
 URL: https://issues.apache.org/jira/browse/HADOOP-17726
 Project: Hadoop Common
  Issue Type: Task
Reporter: Viraj Jasani


As per the guidance in Guava's own documentation for Sets#newHashSet() and 
Sets#newTreeSet(), we should get rid of them and use the new HashSet<>() and 
new TreeSet<>() constructors directly.

Once HADOOP-17115, HADOOP-17721, HADOOP-17722 and HADOOP-17720 are fixed, 
please feel free to take this up.
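The replacement is mechanical; here is a minimal sketch of the before/after (the call sites are illustrative, not actual Hadoop code):

```java
import java.util.HashSet;
import java.util.Set;
import java.util.TreeSet;

public class SetsMigration {

    // Before (Guava):  Set<String> names = Sets.newHashSet();
    // After (plain JDK): the diamond constructor is equivalent on Java 7+.
    static Set<String> names() {
        return new HashSet<>();
    }

    // Before (Guava):  TreeSet<String> sorted = Sets.newTreeSet();
    static Set<String> sorted() {
        return new TreeSet<>();
    }

    public static void main(String[] args) {
        Set<String> s = names();
        s.add("a");
        System.out.println(s.size()); // prints 1
        System.out.println(sorted().isEmpty()); // prints true
    }
}
```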






[jira] [Created] (HADOOP-17732) Keep restrict-imports-enforcer-rule for Guava Sets in hadoop-main pom

2021-05-25 Thread Viraj Jasani (Jira)
Viraj Jasani created HADOOP-17732:
-

 Summary: Keep restrict-imports-enforcer-rule for Guava Sets in 
hadoop-main pom
 Key: HADOOP-17732
 URL: https://issues.apache.org/jira/browse/HADOOP-17732
 Project: Hadoop Common
  Issue Type: Task
Reporter: Viraj Jasani
Assignee: Viraj Jasani


Now that all sub-tasks to remove the dependency on Guava Sets are completed, we 
should move the restrict-imports-enforcer-rule for the Guava Sets import into 
the hadoop-main pom and remove it from the individual project poms.






[jira] [Created] (HADOOP-17743) Replace Guava Lists usage by Hadoop's own Lists in hadoop-common, hadoop-tools and cloud-storage projects

2021-06-03 Thread Viraj Jasani (Jira)
Viraj Jasani created HADOOP-17743:
-

 Summary: Replace Guava Lists usage by Hadoop's own Lists in 
hadoop-common, hadoop-tools and cloud-storage projects
 Key: HADOOP-17743
 URL: https://issues.apache.org/jira/browse/HADOOP-17743
 Project: Hadoop Common
  Issue Type: Task
Reporter: Viraj Jasani
Assignee: Viraj Jasani









[jira] [Created] (HADOOP-17753) Keep restrict-imports-enforcer-rule for Guava Lists in hadoop-main pom

2021-06-09 Thread Viraj Jasani (Jira)
Viraj Jasani created HADOOP-17753:
-

 Summary: Keep restrict-imports-enforcer-rule for Guava Lists in 
hadoop-main pom
 Key: HADOOP-17753
 URL: https://issues.apache.org/jira/browse/HADOOP-17753
 Project: Hadoop Common
  Issue Type: Task
Reporter: Viraj Jasani
Assignee: Viraj Jasani









[jira] [Resolved] (HADOOP-17114) Replace Guava initialization of Lists.newArrayList

2021-06-18 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17114?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani resolved HADOOP-17114.
---
Resolution: Duplicate

> Replace Guava initialization of Lists.newArrayList
> --
>
> Key: HADOOP-17114
> URL: https://issues.apache.org/jira/browse/HADOOP-17114
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Ahmed Hussein
>Priority: Major
>
> There are unjustified uses of Guava APIs to initialize LinkedLists and 
> ArrayLists. These could simply be replaced by the Java API.
> By analyzing the hadoop code, the best way to replace guava is to do the 
> following steps:
>  * create a wrapper class org.apache.hadoop.util.unguava.Lists 
>  * implement the following interfaces in Lists:
>  ** public static <E> ArrayList<E> newArrayList()
>  ** public static <E> ArrayList<E> newArrayList(E... elements)
>  ** public static <E> ArrayList<E> newArrayList(Iterable<? extends E> 
> elements)
>  ** public static <E> ArrayList<E> newArrayList(Iterator<? extends E> 
> elements)
>  ** public static <E> ArrayList<E> newArrayListWithCapacity(int 
> initialArraySize)
>  ** public static <E> LinkedList<E> newLinkedList()
>  ** public static <E> LinkedList<E> newLinkedList(Iterable<? extends E> 
> elements)
>  ** public static <E> List<E> asList(@Nullable E first, E[] rest)
>  
> After this class is created, we can simply replace the import statement in 
> all the source code.
>  
> {code:java}
> Targets
> Occurrences of 'com.google.common.collect.Lists;' in project with mask 
> '*.java'
> Found Occurrences  (246 usages found)
> org.apache.hadoop.conf  (1 usage found)
> TestReconfiguration.java  (1 usage found)
> 22 import com.google.common.collect.Lists;
> org.apache.hadoop.crypto  (1 usage found)
> CryptoCodec.java  (1 usage found)
> 35 import com.google.common.collect.Lists;
> org.apache.hadoop.fs.azurebfs  (3 usages found)
> ITestAbfsIdentityTransformer.java  (1 usage found)
> 25 import com.google.common.collect.Lists;
> ITestAzureBlobFilesystemAcl.java  (1 usage found)
> 21 import com.google.common.collect.Lists;
> ITestAzureBlobFileSystemCheckAccess.java  (1 usage found)
> 20 import com.google.common.collect.Lists;
> org.apache.hadoop.fs.http.client  (2 usages found)
> BaseTestHttpFSWith.java  (1 usage found)
> 77 import com.google.common.collect.Lists;
> HttpFSFileSystem.java  (1 usage found)
> 75 import com.google.common.collect.Lists;
> org.apache.hadoop.fs.permission  (2 usages found)
> AclStatus.java  (1 usage found)
> 27 import com.google.common.collect.Lists;
> AclUtil.java  (1 usage found)
> 26 import com.google.common.collect.Lists;
> org.apache.hadoop.fs.s3a  (3 usages found)
> ITestS3AFailureHandling.java  (1 usage found)
> 23 import com.google.common.collect.Lists;
> ITestS3GuardListConsistency.java  (1 usage found)
> 34 import com.google.common.collect.Lists;
> S3AUtils.java  (1 usage found)
> 57 import com.google.common.collect.Lists;
> org.apache.hadoop.fs.s3a.auth  (1 usage found)
> RolePolicies.java  (1 usage found)
> 26 import com.google.common.collect.Lists;
> org.apache.hadoop.fs.s3a.commit  (2 usages found)
> ITestCommitOperations.java  (1 usage found)
> 28 import com.google.common.collect.Lists;
> TestMagicCommitPaths.java  (1 usage found)
> 25 import com.google.common.collect.Lists;
> org.apache.hadoop.fs.s3a.commit.staging  (3 usages found)
> StagingTestBase.java  (1 usage found)
> 47 import com.google.common.collect.Lists;
> TestStagingPartitionedFileListing.java  (1 usage found)
> 31 import com.google.common.collect.Lists;
> TestStagingPartitionedTaskCommit.java  (1 usage found)
> 28 import com.google.common.collect.Lists;
> org.apache.hadoop.fs.s3a.impl  (2 usages found)
> RenameOperation.java  (1 usage found)
> 30 import com.google.common.collect.Lists;
> TestPartialDeleteFailures.java  (1 usage found)
> 37 import com.google.common.collect.Lists;
> org.apache.hadoop.fs.s3a.s3guard  (3 usages found)
> DumpS3GuardDynamoTable.java  (1 usage found)
> 38 import com.google.common.collect.Lists;
> DynamoDBMetadataStore.java  (1 usage found)
> 67 import co

[jira] [Created] (HADOOP-17788) Replace IOUtils#closeQuietly usages

2021-07-02 Thread Viraj Jasani (Jira)
Viraj Jasani created HADOOP-17788:
-

 Summary: Replace IOUtils#closeQuietly usages
 Key: HADOOP-17788
 URL: https://issues.apache.org/jira/browse/HADOOP-17788
 Project: Hadoop Common
  Issue Type: Task
Reporter: Viraj Jasani
Assignee: Viraj Jasani


IOUtils#closeQuietly has been deprecated since the 2.6 release of commons-io 
without any direct replacement. Since we already have a good replacement 
available in Hadoop's own IOUtils, we should use it.
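A sketch of the migration: prefer try-with-resources, or a null-safe cleanup helper in the spirit of Hadoop's own IOUtils (the `cleanup` helper below is a local illustration, not the actual Hadoop API):

```java
import java.io.ByteArrayInputStream;
import java.io.Closeable;
import java.io.IOException;
import java.io.InputStream;

public class CloseQuietlyMigration {

    // Before: org.apache.commons.io.IOUtils.closeQuietly(stream);
    // After: either try-with-resources, or a logging cleanup helper.
    // Unlike closeQuietly, this logs close failures instead of silently
    // swallowing them; it stays null-safe like the deprecated method.
    static void cleanup(Closeable... closeables) {
        for (Closeable c : closeables) {
            if (c == null) {
                continue;
            }
            try {
                c.close();
            } catch (IOException e) {
                System.err.println("Exception in closing " + c + ": " + e);
            }
        }
    }

    public static void main(String[] args) throws IOException {
        // Preferred form: try-with-resources closes automatically.
        try (InputStream in = new ByteArrayInputStream(new byte[] {1, 2, 3})) {
            System.out.println(in.available()); // prints 3
        }
        cleanup((Closeable) null); // null-safe, like closeQuietly
    }
}
```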






[jira] [Created] (HADOOP-17795) Provide fallbacks for callqueue.impl and scheduler.impl

2021-07-09 Thread Viraj Jasani (Jira)
Viraj Jasani created HADOOP-17795:
-

 Summary: Provide fallbacks for callqueue.impl and scheduler.impl
 Key: HADOOP-17795
 URL: https://issues.apache.org/jira/browse/HADOOP-17795
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Viraj Jasani
Assignee: Viraj Jasani


As mentioned in the parent Jira, we should provide default properties for 
callqueue.impl and scheduler.impl such that if the port-specific property is 
not configured, we can fall back to the default property. For example, if 
"ipc.8020.callqueue.impl" is not present, the fallback property could be 
"ipc.callqueue.impl" (without the port). We can take up the rest of the 
callqueue properties in separate sub-tasks.
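The proposed lookup can be sketched as follows (the key names follow the Jira; the lookup helper and its use of a plain Map instead of Hadoop's Configuration are illustrative assumptions):

```java
import java.util.HashMap;
import java.util.Map;

public class CallQueueFallback {

    // Try the port-specific key first, then fall back to the port-less
    // default key, e.g. ipc.8020.callqueue.impl -> ipc.callqueue.impl.
    static String lookup(Map<String, String> conf, int port, String suffix) {
        String portKey = "ipc." + port + "." + suffix;
        String defaultKey = "ipc." + suffix;
        String v = conf.get(portKey);
        return v != null ? v : conf.get(defaultKey);
    }

    public static void main(String[] args) {
        Map<String, String> conf = new HashMap<>();
        conf.put("ipc.callqueue.impl", "FifoCallQueue");
        // No port-specific key configured: falls back to the default.
        System.out.println(lookup(conf, 8020, "callqueue.impl")); // FifoCallQueue
        conf.put("ipc.8020.callqueue.impl", "FairCallQueue");
        // Port-specific key wins once it is present.
        System.out.println(lookup(conf, 8020, "callqueue.impl")); // FairCallQueue
    }
}
```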






[jira] [Created] (HADOOP-17808) ipc.Client not setting interrupt flag after catching InterruptedException

2021-07-20 Thread Viraj Jasani (Jira)
Viraj Jasani created HADOOP-17808:
-

 Summary: ipc.Client not setting interrupt flag after catching 
InterruptedException
 Key: HADOOP-17808
 URL: https://issues.apache.org/jira/browse/HADOOP-17808
 Project: Hadoop Common
  Issue Type: Task
Reporter: Viraj Jasani
Assignee: Viraj Jasani


ipc.Client is swallowing InterruptedException in a couple of places:
 # While waiting for all connections to be closed
 # While waiting to retrieve an RPC response

We should at least restore the interrupt flag and also log the caught 
InterruptedException.
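The general pattern being proposed is to restore the thread's interrupt status rather than swallow the exception; a minimal self-contained sketch (not the actual ipc.Client code):

```java
public class InterruptRestore {

    // Waits briefly on the lock; if interrupted, restores the thread's
    // interrupt flag so that callers up the stack can still observe it.
    static boolean waitBriefly(Object lock) {
        synchronized (lock) {
            try {
                lock.wait(10);
                return false;
            } catch (InterruptedException e) {
                // Restore the flag instead of swallowing the exception.
                Thread.currentThread().interrupt();
                return true;
            }
        }
    }

    public static void main(String[] args) throws Exception {
        Object lock = new Object();
        Thread t = new Thread(() -> {
            // Arrange a pending interrupt, so wait() throws immediately.
            Thread.currentThread().interrupt();
            boolean interrupted = waitBriefly(lock);
            // Both are true: we saw the interrupt AND the flag is restored.
            System.out.println(interrupted + " " + Thread.currentThread().isInterrupted());
        });
        t.start();
        t.join();
    }
}
```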






[jira] [Created] (HADOOP-17814) Provide fallbacks for identity/cost providers and backoff enable

2021-07-24 Thread Viraj Jasani (Jira)
Viraj Jasani created HADOOP-17814:
-

 Summary: Provide fallbacks for identity/cost providers and backoff 
enable
 Key: HADOOP-17814
 URL: https://issues.apache.org/jira/browse/HADOOP-17814
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Viraj Jasani
Assignee: Viraj Jasani


This sub-task is to provide default properties for identity-provider.impl, 
cost-provider.impl and backoff.enable such that if the port-specific property 
is not configured, we can fall back to the default (port-less) property.






[jira] [Created] (HADOOP-17835) Use CuratorCache implementation instead of PathChildrenCache / TreeCache

2021-08-04 Thread Viraj Jasani (Jira)
Viraj Jasani created HADOOP-17835:
-

 Summary: Use CuratorCache implementation instead of 
PathChildrenCache / TreeCache
 Key: HADOOP-17835
 URL: https://issues.apache.org/jira/browse/HADOOP-17835
 Project: Hadoop Common
  Issue Type: Task
Reporter: Viraj Jasani
Assignee: Viraj Jasani


As we have moved to Curator 5.2.0 for Hadoop 3.4.0, we should start using the 
new CuratorCache service implementation in place of the deprecated 
PathChildrenCache and TreeCache usages.






[jira] [Reopened] (HADOOP-17808) ipc.Client not setting interrupt flag after catching InterruptedException

2021-08-04 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17808?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani reopened HADOOP-17808:
---

Reopening for an addendum to remove excessive logging.

> ipc.Client not setting interrupt flag after catching InterruptedException
> -
>
> Key: HADOOP-17808
> URL: https://issues.apache.org/jira/browse/HADOOP-17808
> Project: Hadoop Common
>  Issue Type: Task
>        Reporter: Viraj Jasani
>    Assignee: Viraj Jasani
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.2.3, 3.3.2
>
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> ipc.Client is swallowing InterruptedException in a couple of places:
>  # While waiting for all connections to be closed
>  # While waiting to retrieve an RPC response
> We should at least restore the interrupt flag and also log the caught 
> InterruptedException.






[jira] [Created] (HADOOP-17841) Remove ListenerHandle from Hadoop registry

2021-08-08 Thread Viraj Jasani (Jira)
Viraj Jasani created HADOOP-17841:
-

 Summary: Remove ListenerHandle from Hadoop registry
 Key: HADOOP-17841
 URL: https://issues.apache.org/jira/browse/HADOOP-17841
 Project: Hadoop Common
  Issue Type: Task
Reporter: Viraj Jasani
Assignee: Viraj Jasani


As part of HADOOP-17835 (replacing PathChildrenCache/TreeCache with 
CuratorCache), we realized that although registerPathListener() of 
CuratorService returns a ListenerHandle, it is not used by RegistryDNSServer. 
We can remove ListenerHandle from hadoop-registry as it is not a 
Public/LimitedPrivate interface.






[jira] [Created] (HADOOP-17858) Avoid possible class loading deadlock with VerifierNone initialization

2021-08-23 Thread Viraj Jasani (Jira)
Viraj Jasani created HADOOP-17858:
-

 Summary: Avoid possible class loading deadlock with VerifierNone 
initialization
 Key: HADOOP-17858
 URL: https://issues.apache.org/jira/browse/HADOOP-17858
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Viraj Jasani
Assignee: Viraj Jasani


The superclass Verifier has a static field VERIFIER_NONE whose initializer 
instantiates the subclass VerifierNone. Such a circular reference can result in 
deadlock during class loading, as per 
[https://docs.oracle.com/javase/specs/jls/se8/html/jls-12.html#jls-12.4.2].

As of today, only RpcProgram uses this instance, so it is safe, but if more 
clients start using it (specifically from static contexts), it has the 
potential to cause deadlock. We should break this reference before it is too 
late.
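One standard way to break a superclass-to-subclass static reference is the lazy holder idiom: the subclass is only touched on first use, never during the superclass's own initialization. The classes below are a hypothetical mirror of the Verifier/VerifierNone shape, not the actual Hadoop code:

```java
public class VerifierHolderSketch {

    abstract static class Verifier {
        // Hazardous form (avoid): a static field in the superclass that
        // instantiates its own subclass, e.g.
        //   static final Verifier VERIFIER_NONE = new VerifierNone();
        // Two threads triggering the two class initializations in opposite
        // order can deadlock (JLS 12.4.2).

        // Safe form: defer the subclass reference to a holder class,
        // initialized only when none() is first called.
        private static final class Holder {
            static final Verifier NONE = new VerifierNone();
        }

        static Verifier none() {
            return Holder.NONE;
        }
    }

    static final class VerifierNone extends Verifier { }

    public static void main(String[] args) {
        System.out.println(Verifier.none().getClass().getSimpleName()); // VerifierNone
    }
}
```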






[jira] [Created] (HADOOP-17874) ExceptionsHandler to add terse/suppressed Exceptions in thread-safe manner

2021-08-26 Thread Viraj Jasani (Jira)
Viraj Jasani created HADOOP-17874:
-

 Summary: ExceptionsHandler to add terse/suppressed Exceptions in 
thread-safe manner
 Key: HADOOP-17874
 URL: https://issues.apache.org/jira/browse/HADOOP-17874
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Viraj Jasani
Assignee: Viraj Jasani


Even though explicit comments state that we have thread-safe replacements for 
terseExceptions and suppressedExceptions, in reality we do not. Since we cannot 
guarantee that Exceptions are added only non-concurrently from any Server 
implementation, we should make these additions thread-safe.
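A concurrent set makes concurrent additions safe without an explicit copy-on-write step; a minimal sketch under the assumption that the handler only needs add/contains semantics (names are illustrative, not the actual ExceptionsHandler API):

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

public class ExceptionsRegistrySketch {

    // ConcurrentHashMap.newKeySet() gives a thread-safe Set, so two Server
    // implementations may register exceptions concurrently without racing.
    private static final Set<String> TERSE = ConcurrentHashMap.newKeySet();

    static void addTerseExceptions(Class<?>... exceptions) {
        for (Class<?> e : exceptions) {
            TERSE.add(e.getName());
        }
    }

    static boolean isTerse(Class<?> e) {
        return TERSE.contains(e.getName());
    }

    public static void main(String[] args) throws Exception {
        Thread a = new Thread(() -> addTerseExceptions(IllegalArgumentException.class));
        Thread b = new Thread(() -> addTerseExceptions(IllegalStateException.class));
        a.start(); b.start(); a.join(); b.join();
        System.out.println(isTerse(IllegalArgumentException.class)
            && isTerse(IllegalStateException.class)); // prints true
    }
}
```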






[jira] [Created] (HADOOP-17892) Add Hadoop code formatter in dev-support

2021-09-05 Thread Viraj Jasani (Jira)
Viraj Jasani created HADOOP-17892:
-

 Summary: Add Hadoop code formatter in dev-support
 Key: HADOOP-17892
 URL: https://issues.apache.org/jira/browse/HADOOP-17892
 Project: Hadoop Common
  Issue Type: Task
Reporter: Viraj Jasani


We should add the Hadoop code formatter XML to dev-support, specifically for 
new developers to refer to.






[jira] [Created] (HADOOP-17947) Provide alternative to Guava VisibleForTesting in Hadoop common

2021-09-30 Thread Viraj Jasani (Jira)
Viraj Jasani created HADOOP-17947:
-

 Summary: Provide alternative to Guava VisibleForTesting in Hadoop 
common
 Key: HADOOP-17947
 URL: https://issues.apache.org/jira/browse/HADOOP-17947
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Viraj Jasani
Assignee: Viraj Jasani


In an attempt to reduce the dependency on Guava, we should remove usages of its 
VisibleForTesting annotation, which appears very frequently in our codebase. 
This Jira is to provide Hadoop's own alternative and use it in 
hadoop-common-project modules.






[jira] [Created] (HADOOP-17950) Provide replacement for deprecated APIs of commons-io IOUtils

2021-10-03 Thread Viraj Jasani (Jira)
Viraj Jasani created HADOOP-17950:
-

 Summary: Provide replacement for deprecated APIs of commons-io 
IOUtils
 Key: HADOOP-17950
 URL: https://issues.apache.org/jira/browse/HADOOP-17950
 Project: Hadoop Common
  Issue Type: Task
Reporter: Viraj Jasani









[jira] [Created] (HADOOP-17952) Replace Guava VisibleForTesting by Hadoop's own annotation in hadoop-common-project modules

2021-10-05 Thread Viraj Jasani (Jira)
Viraj Jasani created HADOOP-17952:
-

 Summary: Replace Guava VisibleForTesting by Hadoop's own 
annotation in hadoop-common-project modules
 Key: HADOOP-17952
 URL: https://issues.apache.org/jira/browse/HADOOP-17952
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Viraj Jasani
Assignee: Viraj Jasani









[jira] [Reopened] (HADOOP-17947) Provide alternative to Guava VisibleForTesting

2021-10-05 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17947?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani reopened HADOOP-17947:
---

Reopening for a minor addendum.

> Provide alternative to Guava VisibleForTesting
> --
>
> Key: HADOOP-17947
> URL: https://issues.apache.org/jira/browse/HADOOP-17947
> Project: Hadoop Common
>  Issue Type: Sub-task
>        Reporter: Viraj Jasani
>    Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.3.2, 3.2.4
>
>  Time Spent: 4h 50m
>  Remaining Estimate: 0h
>
> In an attempt to reduce the dependency on Guava, we should remove usages of 
> its VisibleForTesting annotation, which appears very frequently in our 
> codebase. This Jira is to provide Hadoop's own alternative and use it in 
> hadoop-common-project modules.






[jira] [Created] (HADOOP-17956) Replace all default Charset usage with UTF-8

2021-10-07 Thread Viraj Jasani (Jira)
Viraj Jasani created HADOOP-17956:
-

 Summary: Replace all default Charset usage with UTF-8
 Key: HADOOP-17956
 URL: https://issues.apache.org/jira/browse/HADOOP-17956
 Project: Hadoop Common
  Issue Type: Task
Reporter: Viraj Jasani
Assignee: Viraj Jasani


As discussed on PR#3515, creating this sub-task to replace all default-charset 
usages with UTF-8, since relying on the platform default charset has some 
potential problems (e.g. HADOOP-11379, HADOOP-11389).

FYI [~aajisaka]
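The replacement pattern is to pass the charset explicitly wherever the platform default was previously implied; a minimal sketch:

```java
import java.nio.charset.StandardCharsets;

public class CharsetMigration {

    // Before (platform-dependent): byte[] b = s.getBytes();
    // After (deterministic on every platform): pass UTF-8 explicitly.
    static byte[] encode(String s) {
        return s.getBytes(StandardCharsets.UTF_8);
    }

    // Before: new String(b);  After: new String(b, StandardCharsets.UTF_8).
    static String decode(byte[] b) {
        return new String(b, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        String s = "héllo";
        System.out.println(decode(encode(s)).equals(s)); // prints true
        System.out.println(encode(s).length); // prints 6: 'é' is two bytes in UTF-8
    }
}
```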






[jira] [Created] (HADOOP-17957) Replace Guava VisibleForTesting by Hadoop's own annotation in hadoop-hdfs-project modules

2021-10-07 Thread Viraj Jasani (Jira)
Viraj Jasani created HADOOP-17957:
-

 Summary: Replace Guava VisibleForTesting by Hadoop's own 
annotation in hadoop-hdfs-project modules
 Key: HADOOP-17957
 URL: https://issues.apache.org/jira/browse/HADOOP-17957
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Viraj Jasani
Assignee: Viraj Jasani









[jira] [Created] (HADOOP-17959) Replace Guava VisibleForTesting by Hadoop's own annotation in hadoop-cloud-storage-project and hadoop-mapreduce-project modules

2021-10-08 Thread Viraj Jasani (Jira)
Viraj Jasani created HADOOP-17959:
-

 Summary: Replace Guava VisibleForTesting by Hadoop's own 
annotation in hadoop-cloud-storage-project and hadoop-mapreduce-project modules
 Key: HADOOP-17959
 URL: https://issues.apache.org/jira/browse/HADOOP-17959
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Viraj Jasani
Assignee: Viraj Jasani









[jira] [Created] (HADOOP-17962) Replace Guava VisibleForTesting by Hadoop's own annotation in hadoop-tools modules

2021-10-11 Thread Viraj Jasani (Jira)
Viraj Jasani created HADOOP-17962:
-

 Summary: Replace Guava VisibleForTesting by Hadoop's own 
annotation in hadoop-tools modules
 Key: HADOOP-17962
 URL: https://issues.apache.org/jira/browse/HADOOP-17962
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Viraj Jasani
Assignee: Viraj Jasani









[jira] [Created] (HADOOP-17963) Replace Guava VisibleForTesting by Hadoop's own annotation in hadoop-yarn-project modules

2021-10-11 Thread Viraj Jasani (Jira)
Viraj Jasani created HADOOP-17963:
-

 Summary: Replace Guava VisibleForTesting by Hadoop's own 
annotation in hadoop-yarn-project modules
 Key: HADOOP-17963
 URL: https://issues.apache.org/jira/browse/HADOOP-17963
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Viraj Jasani
Assignee: Viraj Jasani









[jira] [Created] (HADOOP-17967) Keep restrict-imports-enforcer-rule for Guava VisibleForTesting in hadoop-main pom

2021-10-14 Thread Viraj Jasani (Jira)
Viraj Jasani created HADOOP-17967:
-

 Summary: Keep restrict-imports-enforcer-rule for Guava 
VisibleForTesting in hadoop-main pom
 Key: HADOOP-17967
 URL: https://issues.apache.org/jira/browse/HADOOP-17967
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Viraj Jasani
Assignee: Viraj Jasani









[jira] [Created] (HADOOP-17968) Migrate checkstyle IllegalImport to banned-illegal-imports enforcer

2021-10-14 Thread Viraj Jasani (Jira)
Viraj Jasani created HADOOP-17968:
-

 Summary: Migrate checkstyle IllegalImport to 
banned-illegal-imports enforcer
 Key: HADOOP-17968
 URL: https://issues.apache.org/jira/browse/HADOOP-17968
 Project: Hadoop Common
  Issue Type: Task
Reporter: Viraj Jasani
Assignee: Viraj Jasani


As discussed on PR [3503|https://github.com/apache/hadoop/pull/3503], we should 
migrate the imports currently listed in the IllegalImport tag of checkstyle.xml 
to the maven-enforcer-plugin's banned-illegal-imports enforcer rule, so that 
the build never succeeds in the presence of any of the illegal imports.






[jira] [Created] (HADOOP-18006) maven-enforcer-plugin's execution of banned-illegal-imports gets overridden in child poms

2021-11-11 Thread Viraj Jasani (Jira)
Viraj Jasani created HADOOP-18006:
-

 Summary: maven-enforcer-plugin's execution of 
banned-illegal-imports gets overridden in child poms
 Key: HADOOP-18006
 URL: https://issues.apache.org/jira/browse/HADOOP-18006
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Viraj Jasani
Assignee: Viraj Jasani


When we specify a maven plugin with an execution tag in both the parent and a 
child module, the child module's plugin configuration overrides the parent's. 
For instance, when {{banned-illegal-imports}} is applied for a child module 
with only one banned import (let's say {{{}Preconditions{}}}), then only that 
banned import is covered for that child module, and all imports defined in the 
parent module (e.g. Sets, Lists etc.) are overridden and no longer applied.
After this 
[commit|https://github.com/apache/hadoop/commit/62c86eaa0e539a4307ca794e0fcd502a77ebceb8],
 the hadoop-hdfs module will not complain about {{Sets}} even if I import the 
banned guava class, but on the other hand, the hadoop-yarn modules don't define 
any child-level {{banned-illegal-imports}}, so yarn modules will fail if the 
{{Sets}} guava import is used.
So going forward, it would be good to replace guava imports with Hadoop's own 
imports module-by-module, and only at the end should we add a new entry to the 
parent pom's {{banned-illegal-imports}} list.






[jira] [Created] (HADOOP-18017) unguava: remove Preconditions from hadoop-yarn-project modules

2021-11-19 Thread Viraj Jasani (Jira)
Viraj Jasani created HADOOP-18017:
-

 Summary: unguava: remove Preconditions from hadoop-yarn-project 
modules
 Key: HADOOP-18017
 URL: https://issues.apache.org/jira/browse/HADOOP-18017
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Viraj Jasani
Assignee: Viraj Jasani


Replace Guava Preconditions with internal implementations in hadoop.util that 
rely on Java 8+ APIs, for all modules in hadoop-yarn-project.
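The shape of such a Java-8-only replacement can be sketched as below; the class and method names are illustrative, not the actual hadoop.util API (note the Supplier overloads, which defer message construction until a check actually fails):

```java
import java.util.function.Supplier;

public class PreconditionsSketch {

    // Java-8-only stand-in for Guava's Preconditions.checkNotNull: the
    // message Supplier is only evaluated when the check fails.
    static <T> T checkNotNull(T obj, Supplier<String> msg) {
        if (obj == null) {
            throw new NullPointerException(msg.get());
        }
        return obj;
    }

    // Stand-in for Guava's Preconditions.checkArgument.
    static void checkArgument(boolean expr, Supplier<String> msg) {
        if (!expr) {
            throw new IllegalArgumentException(msg.get());
        }
    }

    public static void main(String[] args) {
        System.out.println(checkNotNull("ok", () -> "must not be null")); // prints ok
        try {
            checkArgument(1 > 2, () -> "bad argument");
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage()); // prints bad argument
        }
    }
}
```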






[jira] [Created] (HADOOP-18018) unguava: remove Preconditions from hadoop-tools modules

2021-11-19 Thread Viraj Jasani (Jira)
Viraj Jasani created HADOOP-18018:
-

 Summary: unguava: remove Preconditions from hadoop-tools modules
 Key: HADOOP-18018
 URL: https://issues.apache.org/jira/browse/HADOOP-18018
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Viraj Jasani
Assignee: Viraj Jasani


Replace Guava Preconditions with internal implementations in hadoop.util that 
rely on Java 8+ APIs, for all modules in hadoop-tools.






[jira] [Created] (HADOOP-18022) Add restrict-imports-enforcer-rule for Guava Preconditions in hadoop-main pom

2021-11-23 Thread Viraj Jasani (Jira)
Viraj Jasani created HADOOP-18022:
-

 Summary: Add restrict-imports-enforcer-rule for Guava 
Preconditions in hadoop-main pom
 Key: HADOOP-18022
 URL: https://issues.apache.org/jira/browse/HADOOP-18022
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Viraj Jasani
Assignee: Viraj Jasani


Add the restrict-imports-enforcer-rule for Guava Preconditions in the 
hadoop-main pom to restrict any new imports in the future, and remove any 
remaining usages of Guava Preconditions from the codebase.






[jira] [Created] (HADOOP-18025) Upgrade HBase version to 1.7.1 for hbase1 profile

2021-11-25 Thread Viraj Jasani (Jira)
Viraj Jasani created HADOOP-18025:
-

 Summary: Upgrade HBase version to 1.7.1 for hbase1 profile
 Key: HADOOP-18025
 URL: https://issues.apache.org/jira/browse/HADOOP-18025
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Viraj Jasani
Assignee: Viraj Jasani









[jira] [Created] (HADOOP-18027) Include static imports in the maven plugin rules

2021-11-29 Thread Viraj Jasani (Jira)
Viraj Jasani created HADOOP-18027:
-

 Summary: Include static imports in the maven plugin rules
 Key: HADOOP-18027
 URL: https://issues.apache.org/jira/browse/HADOOP-18027
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Viraj Jasani
Assignee: Viraj Jasani


The maven-enforcer-plugin rule that bans illegal imports requires explicit 
mention of static imports in order to evaluate whether any publicly accessible 
static entities of the banned classes are directly imported by Hadoop code.






[jira] [Created] (HADOOP-18039) Upgrade hbase2 version and fix TestTimelineWriterHBaseDown

2021-12-08 Thread Viraj Jasani (Jira)
Viraj Jasani created HADOOP-18039:
-

 Summary: Upgrade hbase2 version and fix TestTimelineWriterHBaseDown
 Key: HADOOP-18039
 URL: https://issues.apache.org/jira/browse/HADOOP-18039
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Viraj Jasani
Assignee: Viraj Jasani


As mentioned on the parent Jira, we can't upgrade the hbase2 profile version 
beyond 2.2.4 until we either have hbase 2 artifacts built with the hadoop 3 
profile by default, or hbase 3 is rolled out (hbase 3 is compatible with hadoop 
3 versions only).

Let's upgrade the hbase2 profile version to 2.2.4 as part of this Jira and also 
fix TestTimelineWriterHBaseDown to create the connection only after the mini 
cluster is up.






[jira] [Created] (HADOOP-18055) Async Profiler endpoint for Hadoop daemons

2021-12-22 Thread Viraj Jasani (Jira)
Viraj Jasani created HADOOP-18055:
-

 Summary: Async Profiler endpoint for Hadoop daemons
 Key: HADOOP-18055
 URL: https://issues.apache.org/jira/browse/HADOOP-18055
 Project: Hadoop Common
  Issue Type: New Feature
Reporter: Viraj Jasani
Assignee: Viraj Jasani


Async profiler ([https://github.com/jvm-profiling-tools/async-profiler]) is a 
low overhead sampling profiler for Java that does not suffer from Safepoint 
bias problem. It features HotSpot-specific APIs to collect stack traces and to 
track memory allocations. The profiler works with OpenJDK, Oracle JDK and other 
Java runtimes based on the HotSpot JVM.

Async profiler can also profile heap allocations, lock contention, and HW 
performance counters in addition to CPU.

We have an httpserver-based servlet stack, hence we can use HIVE-20202 as an 
implementation template to provide the async profiler as a servlet for Hadoop 
daemons. Ideally we achieve these requirements:
 * Retrieve the flamegraph SVG generated from the latest profile trace.
 * Enable and disable profiling activity online. (async-profiler does not do 
instrumentation-based profiling, so this should not cause the code-gen-related 
perf problems of that other approach and can be safely toggled on and off while 
under production load.)
 * CPU profiling.
 * ALLOCATION profiling.
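A client for such an endpoint could be sketched as follows; the `/prof` path and the `event`/`duration`/`output` parameter names are illustrative assumptions, not the final servlet API:

```python
from urllib.parse import urlencode

# Hypothetical profiler endpoint; path and parameter names are assumptions.
PROF_ENDPOINT = "/prof"

def build_profile_url(host, port, event="cpu", duration=30, output="flamegraph"):
    """Build a URL that would ask the profiler servlet to run one profiling
    session for `duration` seconds on the given `event` (cpu, alloc, lock)."""
    query = urlencode({"event": event, "duration": duration, "output": output})
    return f"http://{host}:{port}{PROF_ENDPOINT}?{query}"

url = build_profile_url("nn1.example.com", 9870, event="alloc", duration=60)
```

A second request to the same endpoint (or a hypothetical disable parameter) would then toggle profiling off, matching the "online enable and disable" requirement above.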






[jira] [Created] (HADOOP-18077) ProfileOutputServlet unable to proceed due to NPE

2022-01-10 Thread Viraj Jasani (Jira)
Viraj Jasani created HADOOP-18077:
-

 Summary: ProfileOutputServlet unable to proceed due to NPE
 Key: HADOOP-18077
 URL: https://issues.apache.org/jira/browse/HADOOP-18077
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Viraj Jasani
Assignee: Viraj Jasani


The ProfileOutputServlet context doesn't have Hadoop configs available, hence the 
async profiler redirection to the output servlet fails to identify whether admin 
access is allowed:
{code:java}
HTTP ERROR 500 java.lang.NullPointerException
URI:    /prof-output-hadoop/async-prof-pid-98613-cpu-2.html
STATUS:    500
MESSAGE:    java.lang.NullPointerException
SERVLET:    org.apache.hadoop.http.ProfileOutputServlet-58c34bb3
CAUSED BY:    java.lang.NullPointerException
Caused by:
java.lang.NullPointerException
    at 
org.apache.hadoop.http.HttpServer2.isInstrumentationAccessAllowed(HttpServer2.java:1619)
    at 
org.apache.hadoop.http.ProfileOutputServlet.doGet(ProfileOutputServlet.java:51)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:687)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
    at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:799)
    at 
org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:550)
    at 
org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:233)
    at 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1434)
    at 
org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:188)
    at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:501)
    at 
org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:186)
    at 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1349)
    at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
    at 
org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:234)
    at 
org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:146)
    at 
org.eclipse.jetty.server.handler.StatisticsHandler.handle(StatisticsHandler.java:179)
    at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127)
    at org.eclipse.jetty.server.Server.handle(Server.java:516)
    at 
org.eclipse.jetty.server.HttpChannel.lambda$handle$1(HttpChannel.java:400)
    at org.eclipse.jetty.server.HttpChannel.dispatch(HttpChannel.java:645)
    at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:392)
    at 
org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:277)
    at 
org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:311)
    at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:105)
    at org.eclipse.jetty.io.ChannelEndPoint$1.run(ChannelEndPoint.java:104)
    at 
org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.runTask(EatWhatYouKill.java:338)
    at 
org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:315)
    at 
org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173)
    at 
org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.run(EatWhatYouKill.java:131)
    at 
org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:409)
    at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883)
    at 
org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034)
    at java.lang.Thread.run(Thread.java:748){code}






[jira] [Created] (HADOOP-18089) Test coverage for Async profiler servlets

2022-01-21 Thread Viraj Jasani (Jira)
Viraj Jasani created HADOOP-18089:
-

 Summary: Test coverage for Async profiler servlets
 Key: HADOOP-18089
 URL: https://issues.apache.org/jira/browse/HADOOP-18089
 Project: Hadoop Common
  Issue Type: Test
Reporter: Viraj Jasani
Assignee: Viraj Jasani


As discussed in HADOOP-18077, we should provide sufficient test coverage to 
discover any potential regression in the async profiler servlets: ProfileServlet 
and ProfileOutputServlet.






[jira] [Created] (HADOOP-18098) Basic verification of release candidates

2022-01-28 Thread Viraj Jasani (Jira)
Viraj Jasani created HADOOP-18098:
-

 Summary: Basic verification of release candidates
 Key: HADOOP-18098
 URL: https://issues.apache.org/jira/browse/HADOOP-18098
 Project: Hadoop Common
  Issue Type: Task
Reporter: Viraj Jasani
Assignee: Viraj Jasani


We should provide a script for basic sanity checks of Hadoop release candidates. It 
should include:
 * Signature
 * Checksum
 * Rat check
 * Build from src
 * Build tarball from src

 

Although we could include unit tests as well, the overall unit test runtime would 
be significantly higher, and precommit Jenkins builds provide a better view of 
UT sanity.
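The checksum step, for example, might be sketched like this in Python; the `.sha512` sidecar layout (hex digest as the first token) is an assumption about the release artifacts:

```python
import hashlib

def sha512_of(path, chunk_size=1 << 20):
    """Stream the file and return its hex SHA-512 digest."""
    digest = hashlib.sha512()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_checksum(tarball_path, sha512_file_path):
    """Compare the computed digest against the published checksum file,
    assumed to contain the hex digest as its first whitespace-separated token."""
    with open(sha512_file_path) as f:
        expected = f.read().split()[0].lower()
    return sha512_of(tarball_path) == expected
```

Signature verification would be delegated to `gpg --verify`, and the rat/build steps to `mvn`, so the script stays a thin wrapper over existing tooling.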






[jira] [Created] (HADOOP-18125) Utility to identify git commit / Jira fixVersion discrepancies for RC preparation

2022-02-14 Thread Viraj Jasani (Jira)
Viraj Jasani created HADOOP-18125:
-

 Summary: Utility to identify git commit / Jira fixVersion 
discrepancies for RC preparation
 Key: HADOOP-18125
 URL: https://issues.apache.org/jira/browse/HADOOP-18125
 Project: Hadoop Common
  Issue Type: Task
Reporter: Viraj Jasani
Assignee: Viraj Jasani


As part of RC preparation, we need to identify all git commits that landed on the 
release branch whose corresponding Jira is either not yet resolved or does not 
contain the expected fixVersion. Only when the Jiras corresponding to the commits 
are resolved with the expected fixVersion do they all get included in the 
auto-generated CHANGES.md produced by the Yetus changelog generator.

The proposal of this Jira is to provide a script that can be used for all 
upcoming RC preparations and that lists all Jiras where we need manual 
intervention. This utility script should use the Jira API to retrieve individual 
fields and use git log to loop through the commit history.

The script should identify these issues:
 # the commit is reverted, as per the commit message
 # the commit message does not contain a Jira number (e.g. HADOOP- / HDFS- etc.)
 # the Jira does not have the expected fixVersion
 # the Jira has the expected fixVersion, but it is not yet resolved
 # the Jira has the expected fixVersion and is resolved, but no corresponding commit is found

It can take the following inputs:
 # first commit hash, to start excluding commits from history
 # fix version
 # Jira project name
 # path of the project's working dir
 # Jira server URL
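Checks 1 and 2 above could be sketched roughly as follows; `classify_commit` and its return values are hypothetical names for illustration, not part of the proposed script:

```python
import re

# Matches Jira keys such as HADOOP-18125 or HDFS-123 in a commit subject.
JIRA_KEY = re.compile(r"\b([A-Z][A-Z0-9]+-\d+)\b")

def classify_commit(subject):
    """Return ("reverted", key), ("jira", key), or ("no-jira", None)
    for a single git commit subject line."""
    match = JIRA_KEY.search(subject)
    if subject.startswith("Revert"):
        return ("reverted", match.group(1) if match else None)
    if match:
        return ("jira", match.group(1))
    return ("no-jira", None)
```

The remaining checks (3 to 5) would then query the Jira REST API for each extracted key and compare its status and fixVersion fields against the release being prepared.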






[jira] [Created] (HADOOP-18131) Upgrade maven enforcer plugin and relevant dependencies

2022-02-18 Thread Viraj Jasani (Jira)
Viraj Jasani created HADOOP-18131:
-

 Summary: Upgrade maven enforcer plugin and relevant dependencies
 Key: HADOOP-18131
 URL: https://issues.apache.org/jira/browse/HADOOP-18131
 Project: Hadoop Common
  Issue Type: Task
Reporter: Viraj Jasani
Assignee: Viraj Jasani


Maven enforcer plugin's latest version, 3.0.0, has some noticeable improvements 
(e.g. MENFORCER-350, MENFORCER-388, MENFORCER-353) and fixes for us to 
incorporate. Besides, some of the relevant enforcer dependencies (e.g. the extra 
enforcer rules and the restrict-imports enforcer rule) also have good improvements.

We should upgrade maven enforcer plugin and the relevant dependencies.






[jira] [Created] (HADOOP-18142) Increase precommit job timeout from 24 hr to 30 hr

2022-02-23 Thread Viraj Jasani (Jira)
Viraj Jasani created HADOOP-18142:
-

 Summary: Increase precommit job timeout from 24 hr to 30 hr
 Key: HADOOP-18142
 URL: https://issues.apache.org/jira/browse/HADOOP-18142
 Project: Hadoop Common
  Issue Type: Task
Reporter: Viraj Jasani
Assignee: Viraj Jasani


As per some recent precommit build results, the full build QA is not completing 
within 24 hr (a recent example 
[here|https://github.com/apache/hadoop/pull/4000], where more than 5 builds 
timed out after 24 hr). We should increase the timeout to 30 hr.






[jira] [Created] (HADOOP-18191) Log retry count while handling exceptions in RetryInvocationHandler

2022-04-04 Thread Viraj Jasani (Jira)
Viraj Jasani created HADOOP-18191:
-

 Summary: Log retry count while handling exceptions in 
RetryInvocationHandler
 Key: HADOOP-18191
 URL: https://issues.apache.org/jira/browse/HADOOP-18191
 Project: Hadoop Common
  Issue Type: Task
Reporter: Viraj Jasani
Assignee: Viraj Jasani


As part of failure handling in RetryInvocationHandler, we log details of the 
exception with which the API was invoked, the failover attempts, and the delay.

For better debugging as well as fine-tuning of retry params, it would be good to 
also log the retry count that we already maintain in the Counter object.






[jira] [Created] (HADOOP-18196) Remove replace-guava from replacer plugin

2022-04-08 Thread Viraj Jasani (Jira)
Viraj Jasani created HADOOP-18196:
-

 Summary: Remove replace-guava from replacer plugin
 Key: HADOOP-18196
 URL: https://issues.apache.org/jira/browse/HADOOP-18196
 Project: Hadoop Common
  Issue Type: Task
Reporter: Viraj Jasani


While running the build, I realized that all replacer plugin executions run only 
after the "banned-illegal-imports" enforcer plugin execution.

For instance,
{code:java}
[INFO] --- maven-enforcer-plugin:3.0.0:enforce (banned-illegal-imports) @ 
hadoop-cloud-storage ---
[INFO] 
[INFO] --- replacer:1.5.3:replace (replace-generated-sources) @ 
hadoop-cloud-storage ---
[INFO] Skipping
[INFO] 
[INFO] --- replacer:1.5.3:replace (replace-sources) @ hadoop-cloud-storage ---
[INFO] Skipping
[INFO] 
[INFO] --- replacer:1.5.3:replace (replace-guava) @ hadoop-cloud-storage ---
[INFO] Replacement run on 0 file.
[INFO]  {code}
Hence, if our source code uses com.google.common, banned-illegal-imports will 
cause a build failure and the replacer plugin would not even get executed.

We should remove it, as it is a redundant execution step.






[jira] [Created] (HADOOP-18224) Upgrade maven compiler plugin

2022-05-05 Thread Viraj Jasani (Jira)
Viraj Jasani created HADOOP-18224:
-

 Summary: Upgrade maven compiler plugin
 Key: HADOOP-18224
 URL: https://issues.apache.org/jira/browse/HADOOP-18224
 Project: Hadoop Common
  Issue Type: Task
Reporter: Viraj Jasani
Assignee: Viraj Jasani


Currently we are using maven-compiler-plugin version 3.1, which is quite old 
(2013) and also pulls in a vulnerable log4j dependency:
{code:java}
[INFO]
org.apache.maven.plugins:maven-compiler-plugin:maven-plugin:3.1:runtime
[INFO]   org.apache.maven.plugins:maven-compiler-plugin:jar:3.1
[INFO]   org.apache.maven:maven-plugin-api:jar:2.0.9
[INFO]   org.apache.maven:maven-artifact:jar:2.0.9
[INFO]   org.codehaus.plexus:plexus-utils:jar:1.5.1
[INFO]   org.apache.maven:maven-core:jar:2.0.9
[INFO]   org.apache.maven:maven-settings:jar:2.0.9
[INFO]   org.apache.maven:maven-plugin-parameter-documenter:jar:2.0.9
...
...
...
[INFO]   log4j:log4j:jar:1.2.12
[INFO]   commons-logging:commons-logging-api:jar:1.1
[INFO]   com.google.collections:google-collections:jar:1.0
[INFO]   junit:junit:jar:3.8.2
 {code}
 

We should upgrade to maven-compiler-plugin 3.10.1 (the latest version as of March 2022).






[jira] [Created] (HADOOP-18228) Update hadoop-vote to use HADOOP_RC_VERSION dir

2022-05-06 Thread Viraj Jasani (Jira)
Viraj Jasani created HADOOP-18228:
-

 Summary: Update hadoop-vote to use HADOOP_RC_VERSION dir
 Key: HADOOP-18228
 URL: https://issues.apache.org/jira/browse/HADOOP-18228
 Project: Hadoop Common
  Issue Type: Task
Reporter: Viraj Jasani
Assignee: Viraj Jasani


The recent changes in the release script require a minor change in hadoop-vote to 
use the Hadoop RC version dir before verifying the signature and checksum of the 
.tar.gz files.






[jira] [Created] (HADOOP-18288) Total requests and total requests per sec served by RPC servers

2022-06-11 Thread Viraj Jasani (Jira)
Viraj Jasani created HADOOP-18288:
-

 Summary: Total requests and total requests per sec served by RPC 
servers
 Key: HADOOP-18288
 URL: https://issues.apache.org/jira/browse/HADOOP-18288
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Viraj Jasani
Assignee: Viraj Jasani


RPC servers provide a bunch of useful information like the number of open 
connections, slow requests, number of in-progress handlers, RPC processing time, 
queue time etc. However, so far they don't provide the accumulated total of all 
requests, nor a current snapshot of requests per second served by the server. 
Exposing these would be beneficial from an operational viewpoint for identifying 
how busy the servers have been and how much load they are currently serving 
during cluster-wide high load.
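The two proposed metrics can be illustrated with a minimal sketch; the class and method names below are made up for illustration, not the actual Hadoop metrics2 API:

```python
import threading

class RpcRequestMetrics:
    """Sketch: a monotonically increasing total-request counter plus a
    per-interval snapshot of requests per second."""

    def __init__(self):
        self._lock = threading.Lock()
        self.total_requests = 0
        self._last_total = 0

    def record_request(self, n=1):
        """Called by handlers as requests are served."""
        with self._lock:
            self.total_requests += n

    def snapshot_requests_per_second(self, interval_seconds):
        """Called by a periodic updater every `interval_seconds`; returns the
        rate observed since the previous snapshot."""
        with self._lock:
            delta = self.total_requests - self._last_total
            self._last_total = self.total_requests
            return delta / interval_seconds
```

The total is cumulative since server start, while the per-second value is a windowed rate, so the two answer different operational questions ("how busy has it been" vs "how loaded is it now").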






[jira] [Created] (HADOOP-18303) Remove shading exclusion of javax.ws.rs-api from hadoop-client-runtime

2022-06-19 Thread Viraj Jasani (Jira)
Viraj Jasani created HADOOP-18303:
-

 Summary: Remove shading exclusion of javax.ws.rs-api from 
hadoop-client-runtime
 Key: HADOOP-18303
 URL: https://issues.apache.org/jira/browse/HADOOP-18303
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Viraj Jasani
Assignee: Viraj Jasani


As part of HADOOP-18033, we excluded shading of javax.ws.rs-api from both 
hadoop-client-runtime and hadoop-client-minicluster. This has caused issues for 
downstreamers; see [https://github.com/apache/incubator-kyuubi/issues/2904] for 
more discussion.

We should put the shading back in hadoop-client-runtime to fix CNFE issues for 
downstreamers.

cc [~ayushsaxena] [~pan3793] 






[jira] [Resolved] (HADOOP-18303) Remove shading exclusion of javax.ws.rs-api from hadoop-client-runtime

2022-07-23 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18303?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani resolved HADOOP-18303.
---
Resolution: Won't Fix

> Remove shading exclusion of javax.ws.rs-api from hadoop-client-runtime
> --
>
> Key: HADOOP-18303
> URL: https://issues.apache.org/jira/browse/HADOOP-18303
> Project: Hadoop Common
>  Issue Type: Bug
>    Reporter: Viraj Jasani
>    Assignee: Viraj Jasani
>Priority: Critical
>  Labels: pull-request-available
>  Time Spent: 3h 10m
>  Remaining Estimate: 0h
>
> As part of HADOOP-18033, we have excluded shading of javax.ws.rs-api from 
> both hadoop-client-runtime and hadoop-client-minicluster. This has caused 
> issues for downstreamers e.g. 
> [https://github.com/apache/incubator-kyuubi/issues/2904], more discussions.
> We should put the shading back in hadoop-client-runtime to fix CNFE issues 
> for downstreamers.
> cc [~ayushsaxena] [~pan3793] 






[jira] [Created] (HADOOP-18397) Shutdown AWSSecurityTokenService when its resources are no longer in use

2022-08-08 Thread Viraj Jasani (Jira)
Viraj Jasani created HADOOP-18397:
-

 Summary: Shutdown AWSSecurityTokenService when its resources are 
no longer in use
 Key: HADOOP-18397
 URL: https://issues.apache.org/jira/browse/HADOOP-18397
 Project: Hadoop Common
  Issue Type: Task
  Components: fs/s3
Reporter: Viraj Jasani
Assignee: Viraj Jasani


AWSSecurityTokenService resources can be released whenever they are no longer 
in use. The documentation of AWSSecurityTokenService#shutdown says that while it 
is not mandatory for the client to shut down the token service, the client can 
perform an early release whenever it no longer requires the token service 
resources. We achieve this by making the STS client closable, so we can 
utilize it in all places where it's suitable.






[jira] [Created] (HADOOP-18403) Fix FileSystem leak in ITestS3AAWSCredentialsProvider

2022-08-11 Thread Viraj Jasani (Jira)
Viraj Jasani created HADOOP-18403:
-

 Summary: Fix FileSystem leak in ITestS3AAWSCredentialsProvider
 Key: HADOOP-18403
 URL: https://issues.apache.org/jira/browse/HADOOP-18403
 Project: Hadoop Common
  Issue Type: Test
Reporter: Viraj Jasani
Assignee: Viraj Jasani


ITestS3AAWSCredentialsProvider#testAnonymousProvider has a FileSystem leak that 
should be fixed.






[jira] [Created] (HADOOP-18435) Remove usage of fs.s3a.executor.capacity

2022-08-31 Thread Viraj Jasani (Jira)
Viraj Jasani created HADOOP-18435:
-

 Summary: Remove usage of fs.s3a.executor.capacity
 Key: HADOOP-18435
 URL: https://issues.apache.org/jira/browse/HADOOP-18435
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Reporter: Viraj Jasani
Assignee: Viraj Jasani


When s3guard was part of s3a, DynamoDBMetadataStore was the only consumer of 
StoreContext that used the throttled executor provided by StoreContext, which 
internally uses fs.s3a.executor.capacity to determine the executor capacity for 
SemaphoredDelegatingExecutor. With the removal of s3guard from s3a, we should 
also remove fs.s3a.executor.capacity and its usages, as it is no longer used 
by any StoreContext consumers. The config's existence and its description 
can be really confusing for users.






[jira] [Reopened] (HADOOP-18186) s3a prefetching to use SemaphoredDelegatingExecutor for submitting work

2022-09-11 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18186?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani reopened HADOOP-18186:
---

Re-opening for an addendum

> s3a prefetching to use SemaphoredDelegatingExecutor for submitting work
> ---
>
> Key: HADOOP-18186
> URL: https://issues.apache.org/jira/browse/HADOOP-18186
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.4.0
>Reporter: Steve Loughran
>    Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>
> Use SemaphoredDelegatingExecutor for each stream to submit work, if 
> possible, for better fairness in processes with many streams.
> This also takes a DurationTrackerFactory to count how long was spent in the 
> queue, something we would want to know.






[jira] [Resolved] (HADOOP-18186) s3a prefetching to use SemaphoredDelegatingExecutor for submitting work

2022-09-16 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18186?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani resolved HADOOP-18186.
---
Resolution: Fixed

> s3a prefetching to use SemaphoredDelegatingExecutor for submitting work
> ---
>
> Key: HADOOP-18186
> URL: https://issues.apache.org/jira/browse/HADOOP-18186
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.4.0
>Reporter: Steve Loughran
>    Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>
> Use SemaphoredDelegatingExecutor for each stream to submit work, if 
> possible, for better fairness in processes with many streams.
> This also takes a DurationTrackerFactory to count how long was spent in the 
> queue, something we would want to know.






[jira] [Created] (HADOOP-18455) s3a prefetching Executor should be closed

2022-09-16 Thread Viraj Jasani (Jira)
Viraj Jasani created HADOOP-18455:
-

 Summary: s3a prefetching Executor should be closed
 Key: HADOOP-18455
 URL: https://issues.apache.org/jira/browse/HADOOP-18455
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Reporter: Viraj Jasani
Assignee: Viraj Jasani


This is the follow-up work for HADOOP-18186. The new executor service we use 
for s3a prefetching should be closed while shutting down the file system.
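The intended lifecycle can be sketched with a stand-in class; `PrefetchingFileSystem` below is purely illustrative, not the real S3AFileSystem:

```python
from concurrent.futures import ThreadPoolExecutor

class PrefetchingFileSystem:
    """Illustrative stand-in: the filesystem owns the prefetching executor,
    so close() must shut the executor down too."""

    def __init__(self, workers=4):
        self.prefetch_pool = ThreadPoolExecutor(max_workers=workers)

    def prefetch(self, block_id):
        # Submit background block-fetch work to the owned pool.
        return self.prefetch_pool.submit(lambda: f"fetched block {block_id}")

    def close(self):
        # Without this shutdown the pool threads would outlive the filesystem,
        # which is exactly the leak this follow-up addresses.
        self.prefetch_pool.shutdown(wait=True)
```

After `close()`, further submissions fail fast instead of silently keeping threads alive.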






[jira] [Created] (HADOOP-18466) Limit the findbugs suppression IS2_INCONSISTENT_SYNC to S3AFileSystem field

2022-09-22 Thread Viraj Jasani (Jira)
Viraj Jasani created HADOOP-18466:
-

 Summary: Limit the findbugs suppression IS2_INCONSISTENT_SYNC to 
S3AFileSystem field
 Key: HADOOP-18466
 URL: https://issues.apache.org/jira/browse/HADOOP-18466
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs/s3
Reporter: Viraj Jasani
Assignee: Viraj Jasani


Limit the findbugs suppression IS2_INCONSISTENT_SYNC to the S3AFileSystem field 
futurePool, so that the blanket suppression does not hide other synchronization bugs.






[jira] [Created] (HADOOP-18592) Sasl connection failure should log remote address

2023-01-11 Thread Viraj Jasani (Jira)
Viraj Jasani created HADOOP-18592:
-

 Summary: Sasl connection failure should log remote address
 Key: HADOOP-18592
 URL: https://issues.apache.org/jira/browse/HADOOP-18592
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 3.3.4
Reporter: Viraj Jasani
Assignee: Viraj Jasani


If the Sasl connection fails with some generic error, we miss logging the remote 
server that the client was trying to connect to.

Sample log:
{code:java}
2023-01-12 00:22:28,148 WARN  [20%2C1673404849949,1] ipc.Client - Exception 
encountered while connecting to the server 
java.io.IOException: Connection reset by peer
    at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
    at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
    at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
    at sun.nio.ch.IOUtil.read(IOUtil.java:197)
    at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:379)
    at 
org.apache.hadoop.net.SocketInputStream$Reader.performIO(SocketInputStream.java:57)
    at 
org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:141)
    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
    at java.io.FilterInputStream.read(FilterInputStream.java:133)
    at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
    at java.io.BufferedInputStream.read(BufferedInputStream.java:265)
    at java.io.DataInputStream.readInt(DataInputStream.java:387)
    at org.apache.hadoop.ipc.Client$IpcStreams.readResponse(Client.java:1950)
    at 
org.apache.hadoop.security.SaslRpcClient.saslConnect(SaslRpcClient.java:367)
    at 
org.apache.hadoop.ipc.Client$Connection.setupSaslConnection(Client.java:623)
    at org.apache.hadoop.ipc.Client$Connection.access$2300(Client.java:414)
...
... {code}
We should log the remote server address.
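A minimal sketch of the proposed message format (the function name and wording here are illustrative, not the actual ipc.Client change):

```python
def connection_failure_message(server_address, cause):
    """Compose the warning so the remote endpoint is visible, instead of only
    'Exception encountered while connecting to the server'."""
    return ("Exception encountered while connecting to the server "
            f"{server_address}: {cause}")

msg = connection_failure_message(
    "namenode01:8020", "java.io.IOException: Connection reset by peer")
```

With the address embedded, the log line alone is enough to identify which endpoint reset the connection.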






[jira] [Created] (HADOOP-18620) Avoid using grizzly-http classes

2023-02-06 Thread Viraj Jasani (Jira)
Viraj Jasani created HADOOP-18620:
-

 Summary: Avoid using grizzly-http classes
 Key: HADOOP-18620
 URL: https://issues.apache.org/jira/browse/HADOOP-18620
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Viraj Jasani
Assignee: Viraj Jasani


As discussed on the parent Jira HADOOP-15984, we do not have any 
grizzly-http-servlet version available that uses Jersey 2 dependencies. 

Version 2.4.4 contains Jersey 1 artifacts: 
[https://repo1.maven.org/maven2/org/glassfish/grizzly/grizzly-http-servlet/2.4.4/grizzly-http-servlet-2.4.4.pom]

The next higher version available is 3.0.0-M1 and it contains Jersey 3 
artifacts: 
[https://repo1.maven.org/maven2/org/glassfish/grizzly/grizzly-http-servlet/3.0.0-M1/grizzly-http-servlet-3.0.0-M1.pom]

 

Moreover, we do not use the grizzly-http-* modules extensively. We use them only 
for a few tests, so that we don't have to implement all the methods of 
HttpServletResponse in our custom test classes.

We should get rid of grizzly-http-servlet, grizzly-http and grizzly-http-server 
artifacts of org.glassfish.grizzly and rather implement HttpServletResponse 
directly to avoid having to depend on grizzly upgrades as part of overall 
Jersey upgrade.





