[DISCUSS] Hadoop 2019 Release Planning

2019-08-09 Thread Wangda Tan
Hi all,

Hope this email finds you well.

I want to hear your thoughts about what should be the release plan for
2019.

In 2018, we released:
- 1 maintenance release of 2.6
- 3 maintenance releases of 2.7
- 3 maintenance releases of 2.8
- 3 releases of 2.9
- 4 releases of 3.0
- 2 releases of 3.1

Total 16 releases in 2018.

In 2019, so far we have had only two releases:
- 1 maintenance release of 3.1
- 1 minor release of 3.2.

However, the community has put a lot of effort into stabilizing features on
various release branches.
There are:
- 217 fixed patches in 3.1.3 [1]
- 388 fixed patches in 3.2.1 [2]
- 1172 fixed patches in 3.3.0 [3] (OMG!)

I think it is time to do maintenance releases of 3.1/3.2 and a minor
release of 3.3.0.

In addition, I saw community discussion to do a 2.8.6 release for security
fixes.

Any other releases? I think there are release plans for Ozone as well.
Please add your thoughts.

Volunteers welcome! If you are interested in running a release as Release
Manager (or co-Release Manager), please respond to this email thread so we
can coordinate.

Thanks,
Wangda Tan

[1] project in (YARN, HADOOP, MAPREDUCE, HDFS) AND resolution = Fixed AND
fixVersion = 3.1.3
[2] project in (YARN, HADOOP, MAPREDUCE, HDFS) AND resolution = Fixed AND
fixVersion = 3.2.1
[3] project in (YARN, HADOOP, MAPREDUCE, HDFS) AND resolution = Fixed AND
fixVersion = 3.3.0
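The JQL filters above can also be run against the ASF JIRA REST search endpoint (`/rest/api/2/search`). A minimal sketch of building those query URLs; the class and method names here are mine, and `maxResults=0` is used only because the response's `total` field is all that's needed for the patch counts:

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

// Sketch: build the JIRA REST search URL for the fixVersion filters above.
public class JqlUrlBuilder {
    static final String BASE = "https://issues.apache.org/jira/rest/api/2/search";

    static String searchUrl(String fixVersion) {
        String jql = "project in (YARN, HADOOP, MAPREDUCE, HDFS)"
            + " AND resolution = Fixed AND fixVersion = " + fixVersion;
        // maxResults=0 asks for no issue bodies; the JSON response still
        // carries a "total" count, which is the number of fixed patches.
        return BASE + "?maxResults=0&jql="
            + URLEncoder.encode(jql, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        for (String v : new String[] {"3.1.3", "3.2.1", "3.3.0"}) {
            System.out.println(v + " -> " + searchUrl(v));
        }
    }
}
```

Fetching each URL (curl, browser, etc.) and reading `total` reproduces the [1]-[3] counts at the time of writing.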


Re: [DISCUSS] Release 3.0.4 or branch-3.0 EOL

2019-08-09 Thread Wei-Chiu Chuang
Erik,
I've just got this message now. Something funky is going on with my
workplace mailbox.

With my Apache hat on, I find it too costly to maintain a 3.0.x release
line: it takes my personal time to backport commits and resolve conflicts.
Additionally, we would be forced to make additional 3.0.x releases if a new
CVE is discovered.

With my Cloudera hat on, the most recent downstream release, CDH 6.3.0, is
the last release on Apache Hadoop 3.0.x. The next release, CDP 1.0, is on
3.1.x and may at some point rebase onto 3.2.x. The timeline to rebase onto
3.2 is unclear (at least 6 months out), so I am still actively maintaining
this branch as much as possible.

That is to say, assuming Cloudera's the only corporate sponsor for
branch-3.0, this branch is all but dead.

On another note, I recently learned that Apache Spark 3 will only support
Hadoop 3.2, skipping Hadoop 3.0/3.1. I don't know what you all think, but I
feel that would expedite the death of branch-3.1, which is why we are
thinking about a 3.2 rebase in CDP.


Re: [DISCUSS] Release 3.0.4 or branch-3.0 EOL

2019-08-09 Thread Erik Krogen
I'd like to revive this discussion. Today I see committers variously
skipping backports to branch-3.0 and/or branch-3.1 (when backporting to
branch-2, for example) and I'm concerned that we will have divergence
between what commits are present in which branches.

Wei-Chiu, is Cloudera still rebasing on top of branch-3.1? If so then it
seems we should be continuing to actively maintain it and not skipping any
backports here.

> If HDFS-13596 is fixed in branch-3.1/3.2, will it be possible to do a
> rolling upgrade from Hadoop 2 to Hadoop 3.1/3.2? If the answer is yes,
> I'm thinking we can drop 3.0 and consider 3.1+ only.


I agree on this front. I don't see any reason that we can't encourage
upgrades from 2.x to 3.1+.

Should we initiate a vote to officially EOL the 3.0 line?

Thanks
Erik

P.S.: Sorry if you received this email twice, I was informed by members of
this team that my original email was not received by most folks.

On Sun, May 26, 2019 at 11:56 PM Akira Ajisaka  wrote:

> Thank you for your feedback,
>
> > IMHO, I'd love to see HDFS-13596 <
> https://issues.apache.org/jira/browse/HDFS-13596> fixed to make it
> possible to do a rolling upgrade from Hadoop 2 to Hadoop 3.0
>
> If HDFS-13596 is fixed in branch-3.1/3.2, will it be possible to do a
> rolling upgrade from Hadoop 2 to Hadoop 3.1/3.2? If the answer is yes,
> I'm thinking we can drop 3.0 and consider 3.1+ only.
>
> Thanks,
> Akira
>
> On Wed, May 22, 2019 at 4:39 AM Ajay Kumar 
> wrote:
> >
> > +1 for keeping the number of active branches low, especially when they
> are not actively used.
> >
> > On Mon, May 20, 2019 at 3:33 AM Steve Loughran
>  wrote:
> >>
> >> I've been intermittently backporting things to it, but like you say: not
> >> getting much active use.
> >>
> >>
> >>1. I'd like to view the 3.1 branch as the main "ready to play with"
> >>Hadoop 3.x release, with 3.2 adding some new features.
> >>2. I'm planning on backporting some of the Hadoop 3.x ABFS and S3A
> >>work to that 3.x release, and for S3A, some to 3.1.x (and indeed,
> >>maybe we should do the ABFS connector too; after all, it's not going
> >>to cause any regressions).
> >>3. The hadoop-aws changes will include HADOOP-16117, the AWS SDK
> >>update - Sean Mackrory has been advocating "keep all active branches
> >>current with the AWS SDKs", and I've come to agree. The testing there
> >>didn't find any regressions, which was a pleasant surprise.
> >>4. For S3A, a big patch has just gone in, HADOOP-16085, which adds
> >>etag and version columns to the S3Guard DDB tables, and uses these at
> >>load time. Even if we don't backport that patch to the 3.1 line, it
> >>makes sense for all 3.1.x clients to update the DB with the relevant
> >>columns as they write, so that in a mixed-client deployment, everyone
> >>keeps the table up to date.
> >>
> >>
> >> As usual, help welcome, especially with testing
> >>
> >> -steve
> >>
> >> On Fri, May 17, 2019 at 12:15 PM Wei-Chiu Chuang 
> wrote:
> >>
> >> > Thanks for initiating the discussion, Akira.
> >> >
> >> > I've given similar thoughts on the possible EOL of Hadoop branch-3.0
> line.
> >> > IMHO, I'd love to see HDFS-13596
> >> >  fixed to make it
> >> > possible to do a rolling upgrade from Hadoop 2 to Hadoop 3.0, and then roll
> >> > another maint. release before we declare EOL.
> >> >
> >> > But in all seriousness, with my Cloudera hat on, CDH will soon rebase
> >> > onto branch-3.1 so it's unlikely Cloudera can sponsor another 3.0
> >> > maint release. With my Apache hat on, Apache is an all-volunteer
> >> > organization and we all act individually, but I am just being
> >> > realistic that there would not be much motivation to roll another
> >> > release in the future.
> >> >
> >> > Thoughts?
> >> >
> >> > On Fri, May 17, 2019 at 8:31 AM Akira Ajisaka 
> wrote:
> >> >
> >> > > Hi folks,
> >> > >
> >> > > On branch-3.0, it has been almost a year since 3.0.3 was released.
> >> > > Is there any committer who wants to be the release manager of
> >> > > 3.0.4?
> >> > >
> >> > > If there are no users running Apache Hadoop 3.0.x in production,
> >> > > I'm thinking we can stop maintaining branch-3.0. Please let me
> >> > > know if there are any users running Apache Hadoop 3.0.x.
> >> > >
> >> > > Any thoughts?
> >> > >
> >> > > -Akira
> >> > >
> >> > >
> -
> >> > > To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
> >> > > For additional commands, e-mail: common-dev-h...@hadoop.apache.org
> >> > >
> >> > >
> >> >
>
> -
> To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
> For additional commands, e-mail: common-dev-h...@hadoop.apache.org
>
>


Re: Aug Hadoop Community Meetup in China

2019-08-09 Thread 俊平堵
Hi all,
 A kind reminder that we will have the Beijing meetup soon, on 8/10, and
the event will start at 10 am (CST).
 The link below has the details, including the address:
https://docs.google.com/document/d/1wDfyuQv6PDeKZGVdYJw6IfXUa_MIzIstbb8iva6Zt-U/edit?usp=sharing

Thanks,

Junping

俊平堵  于2019年7月23日周二 下午5:07写道:

> Thanks for all the positive feedback! The local community has voted for
> the date and location: 8/10 in Beijing. Please set aside the time if you
> would like to join.
> I have gathered a few topics, and some candidate venues for hosting this
> meetup. If you would like to propose more topics, please nominate them here
> or ping me before this weekend (7/28, CST time).
> I will post an update here when I have more to share. Thanks!
>
>
>
>
> [image: Screen Shot 2019-07-23 at 10.15.54.png]
>
> [image: Screen Shot 2019-07-23 at 10.16.06.png]
>
>
>
> Thanks,
>
> Junping
>
> 俊平堵  于2019年7月18日周四 下午3:28写道:
>
>> Hi, all!
>>
>> I am glad to let you know that we are organizing a
>> Hadoop Contributors Meetup in China in August.
>>
>>
>> This could be the first Hadoop community meetup in China, and many
>> attendees are expected from big data pioneers such as Cloudera,
>> Tencent, Alibaba, Xiaomi, Didi, JD, Meituan, Toutiao, Sina, etc.
>>
>>
>> We're still working out the details, such as dates, contents and
>> locations. Here is a quick survey: https://www.surveymonkey.com/r/Y99RT3W
>> where you can vote for your preferred dates and locations if you would
>> like to attend - the survey will close on July 21 at 12 PM China Standard
>> Time, and results will be published the next day.
>>
>>
>> Also, please feel free to reach out to me if you have a topic to propose
>> for the meetup. I will send out an update with more details when I have
>> more to share. Thanks!
>>
>>
>> Cheers,
>>
>>
>> Junping
>>
>


Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2019-08-09 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1223/

[Aug 8, 2019 12:37:50 PM] (ericp) YARN-9685: NPE when rendering the info table 
of leaf queue in
[Aug 8, 2019 1:41:04 PM] (weichiu) YARN-9711. Missing spaces in NMClientImpl 
(#1177) Contributed by Charles
[Aug 8, 2019 1:52:04 PM] (elek) HDDS-1926. The new caching layer is used for 
old OM requests but not
[Aug 8, 2019 4:55:46 PM] (github) HDDS-1619. Support volume acl operations for 
OM HA. Contributed by…
[Aug 8, 2019 6:08:48 PM] (stevel) HADOOP-16479. ABFS 
FileStatus.getModificationTime returns localized time
[Aug 8, 2019 8:36:39 PM] (weichiu) HDFS-14459. ClosedChannelException silently 
ignored in
[Aug 8, 2019 8:38:10 PM] (arp7) HDDS-1829 On OM reload/restart 
OmMetrics#numKeys should be updated.
[Aug 8, 2019 8:45:29 PM] (weichiu) HDFS-14662. Document the usage of the new 
Balancer "asService"
[Aug 8, 2019 8:46:31 PM] (weichiu) HDFS-14701. Change Log Level to warn in 
SlotReleaser. Contributed by
[Aug 8, 2019 8:48:29 PM] (weichiu) HDFS-14705. Remove unused configuration 
dfs.min.replication. Contributed
[Aug 8, 2019 8:50:30 PM] (weichiu) HDFS-14693. NameNode should log a warning 
when EditLog IPC logger's
[Aug 8, 2019 10:40:19 PM] (github) HDDS-1863. Freon RandomKeyGenerator even if 
keySize is set to 0, it




-1 overall


The following subsystems voted -1:
asflicense findbugs hadolint pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
 

FindBugs :

   
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-mawo/hadoop-yarn-applications-mawo-core
 
   Class org.apache.hadoop.applications.mawo.server.common.TaskStatus 
implements Cloneable but does not define or use clone method At 
TaskStatus.java:does not define or use clone method At TaskStatus.java:[lines 
39-346] 
   Equals method for 
org.apache.hadoop.applications.mawo.server.worker.WorkerId assumes the argument 
is of type WorkerId At WorkerId.java:the argument is of type WorkerId At 
WorkerId.java:[line 114] 
   
org.apache.hadoop.applications.mawo.server.worker.WorkerId.equals(Object) does 
not check for null argument At WorkerId.java:null argument At 
WorkerId.java:[lines 114-115] 

Failed junit tests :

   hadoop.util.TestDiskCheckerWithDiskIo 
   hadoop.util.TestBasicDiskValidator 
   hadoop.util.TestReadWriteDiskValidator 
   hadoop.hdfs.server.datanode.TestLargeBlockReport 
   hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes 
   hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks 
   hadoop.hdfs.server.federation.router.TestRouterWithSecureStartup 
   hadoop.hdfs.server.federation.security.TestRouterHttpDelegationToken 
   hadoop.tools.dynamometer.TestDynamometerInfra 
   hadoop.tools.dynamometer.TestDynamometerInfra 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1223/artifact/out/diff-compile-cc-root.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1223/artifact/out/diff-compile-javac-root.txt
  [332K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1223/artifact/out/diff-checkstyle-root.txt
  [17M]

   hadolint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1223/artifact/out/diff-patch-hadolint.txt
  [4.0K]

   pathlen:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1223/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1223/artifact/out/diff-patch-pylint.txt
  [220K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1223/artifact/out/diff-patch-shellcheck.txt
  [20K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1223/artifact/out/diff-patch-shelldocs.txt
  [44K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1223/artifact/out/whitespace-eol.txt
  [9.6M]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1223/artifa

[jira] [Resolved] (HADOOP-16481) ITestS3GuardDDBRootOperations.test_300_MetastorePrune needs to set region

2019-08-09 Thread Gabor Bota (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16481?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota resolved HADOOP-16481.
-
  Resolution: Fixed
   Fix Version/s: 3.3.0
Target Version/s: 3.3.0

> ITestS3GuardDDBRootOperations.test_300_MetastorePrune needs to set region
> -
>
> Key: HADOOP-16481
> URL: https://issues.apache.org/jira/browse/HADOOP-16481
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Fix For: 3.3.0
>
>
> The new  test {{ITestS3GuardDDBRootOperations.test_300_MetastorePrune}} fails 
> if you don't explicitly set the region
> {code}
> [ERROR] 
> test_300_MetastorePrune(org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardDDBRootOperations)
>   Time elapsed: 0.845 s  <<< ERROR!
> org.apache.hadoop.util.ExitUtil$ExitException: No region found from -region 
> flag, config, or S3 bucket
>   at 
> org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardDDBRootOperations.test_300_MetastorePrune(ITestS3GuardDDBRootOperations.java:186)
> {code}
> It should be picked up from the test filesystem.
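Until the test picks the region up automatically, one workaround is to set it explicitly in the test configuration; `fs.s3a.s3guard.ddb.region` is the relevant S3Guard key (the region value below is only an example; use the region your test bucket lives in):

```xml
<property>
  <name>fs.s3a.s3guard.ddb.region</name>
  <!-- example value: set to the AWS region of your test DynamoDB table -->
  <value>eu-west-1</value>
</property>
```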



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-16499) S3A retry policy to be exponential

2019-08-09 Thread Gabor Bota (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16499?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota resolved HADOOP-16499.
-
  Resolution: Fixed
   Fix Version/s: 3.3.0
Target Version/s: 3.3.0

> S3A retry policy to be exponential
> --
>
> Key: HADOOP-16499
> URL: https://issues.apache.org/jira/browse/HADOOP-16499
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.0, 3.1.2
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Critical
> Fix For: 3.3.0
>
>
> The fixed S3A retry policy doesn't leave big enough gaps for cached 404s to 
> expire; we can't recover from this.
> HADOOP-16490 is a full fix for this, but one we can backport is moving from 
> fixed to exponential retries.
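To illustrate why exponential backoff helps here (this is a standalone sketch, not the actual S3A retry code; the class name, base interval, and cap are all made up), compare the total wait the two policies accumulate over the same number of attempts:

```java
// Sketch: fixed-interval vs exponential retry delays. With a fixed policy,
// n retries only ever wait n * base; with exponential backoff the total wait
// grows quickly, giving a cached negative (404) entry time to expire.
public class RetryBackoffSketch {

    // Fixed policy: the same delay on every attempt.
    static long fixedDelayMillis(int attempt, long baseMillis) {
        return baseMillis;
    }

    // Exponential policy: delay doubles each attempt, capped for safety.
    static long exponentialDelayMillis(int attempt, long baseMillis, long capMillis) {
        long delay = baseMillis << Math.min(attempt, 20); // bound the shift
        return Math.min(delay, capMillis);
    }

    public static void main(String[] args) {
        long base = 500;  // hypothetical 500 ms base interval
        long totalFixed = 0, totalExp = 0;
        for (int attempt = 0; attempt < 7; attempt++) {
            totalFixed += fixedDelayMillis(attempt, base);
            totalExp += exponentialDelayMillis(attempt, base, 20_000);
        }
        // 7 fixed retries wait 3.5 s in total; the exponential schedule
        // (0.5, 1, 2, 4, 8, 16, then capped 20 s) waits 51.5 s in total.
        System.out.println("fixed total ms = " + totalFixed);
        System.out.println("exponential total ms = " + totalExp);
    }
}
```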



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Apache Hadoop qbt Report: branch2+JDK7 on Linux/x86

2019-08-09 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/408/

[Aug 8, 2019 5:18:00 PM] (ekrogen) HDFS-14034. Support getQuotaUsage API in 
WebHDFS. Contributed by Chao
[Aug 8, 2019 9:55:41 PM] (weichiu) HDFS-14696. Backport HDFS-11273 to branch-2 
(Move




-1 overall


The following subsystems voted -1:
asflicense findbugs hadolint pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/empty-configuration.xml
 
   hadoop-tools/hadoop-azure/src/config/checkstyle-suppressions.xml 
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/public/crossdomain.xml 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/public/crossdomain.xml
 

FindBugs :

   
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/hadoop-yarn-server-timelineservice-hbase-client
 
   Boxed value is unboxed and then immediately reboxed in 
org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result,
 byte[], byte[], KeyConverter, ValueConverter, boolean) At 
ColumnRWHelper.java:then immediately reboxed in 
org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result,
 byte[], byte[], KeyConverter, ValueConverter, boolean) At 
ColumnRWHelper.java:[line 335] 

Failed junit tests :

   hadoop.hdfs.TestSafeMode 
   hadoop.hdfs.server.balancer.TestBalancerRPCDelay 
   hadoop.hdfs.server.datanode.TestFsDatasetCache 
   hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys 
   hadoop.registry.secure.TestSecureLogins 
   hadoop.yarn.server.timelineservice.security.TestTimelineAuthFilterForV2 
   hadoop.yarn.client.api.impl.TestNMClient 
   
hadoop.yarn.server.timelineservice.storage.TestHBaseTimelineStorageSchema 
   hadoop.mapreduce.v2.TestMRJobs 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/408/artifact/out/diff-compile-cc-root-jdk1.7.0_95.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/408/artifact/out/diff-compile-javac-root-jdk1.7.0_95.txt
  [328K]

   cc:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/408/artifact/out/diff-compile-cc-root-jdk1.8.0_212.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/408/artifact/out/diff-compile-javac-root-jdk1.8.0_212.txt
  [308K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/408/artifact/out/diff-checkstyle-root.txt
  [16M]

   hadolint:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/408/artifact/out/diff-patch-hadolint.txt
  [4.0K]

   pathlen:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/408/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/408/artifact/out/diff-patch-pylint.txt
  [24K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/408/artifact/out/diff-patch-shellcheck.txt
  [72K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/408/artifact/out/diff-patch-shelldocs.txt
  [8.0K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/408/artifact/out/whitespace-eol.txt
  [12M]
   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/408/artifact/out/whitespace-tabs.txt
  [1.2M]

   xml:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/408/artifact/out/xml.txt
  [12K]

   findbugs:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/408/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
  [8.0K]
   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/408/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice-hbase_hadoop-yarn-server-timelineservice-hbase-client-warnings.html
  [8.0K]

   javadoc:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/408/artifact/out/diff-javadoc-javadoc-root-jdk1.7.0_95.txt
  [16K]
   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/408/artifact/out/diff-javadoc-javadoc-root-jdk1.8.0_212.txt
  [1.1M]

   unit:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/408/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
  [344K]
   
https://builds.

[jira] [Created] (HADOOP-16502) Add fsck to S3A tests where additional diagnosis is needed

2019-08-09 Thread Gabor Bota (JIRA)
Gabor Bota created HADOOP-16502:
---

 Summary: Add fsck to S3A tests where additional diagnosis is needed
 Key: HADOOP-16502
 URL: https://issues.apache.org/jira/browse/HADOOP-16502
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Reporter: Gabor Bota


Extend 
{{org.apache.hadoop.fs.s3a.s3guard.ITestDynamoDBMetadataStore#testPruneTombstoneUnderTombstone}}

{code:java}
// the child2 entry is still there, though it's now orphan (the store isn't
// meeting the rule "all entries must have a parent which exists")
getFile(child2);

+ // todo create a raw fs
+ S3GuardFsck fsck = new S3GuardFsck(rawFs, ms);

// a full prune will still find and delete it, as this
// doesn't walk the tree
getDynamoMetadataStore().prune(PruneMode.ALL_BY_MODTIME,
now + MINUTE);
{code}

Extend 
{{org.apache.hadoop.fs.s3a.s3guard.ITestDynamoDBMetadataStore#testPutFileDeepUnderTombstone}}:

{code:java}
// now put the tombstone
putTombstone(base, now, null);
assertIsTombstone(base);

+ // todo create a raw fs for checking
+ S3GuardFsck fsck = new S3GuardFsck(rawFs, ms);

/*- */
/* Begin S3FileSystem.finishedWrite() sequence. */
/* -*/
AncestorState ancestorState = getDynamoMetadataStore()
.initiateBulkWrite(BulkOperationState.OperationType.Put,
childPath);
{code}



Add new test: 
{{org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardDDBRootOperations#test_070_run_fsck_on_store}}
{code:java}

  @Test
  public void test_070_run_fsck_on_store() throws Throwable {
    // TODO: create a raw (non-S3Guard) fs; null placeholder so this compiles
    S3AFileSystem rawFs = null;
    S3GuardFsck s3GuardFsck = new S3GuardFsck(rawFs, metastore);
  }
{code}





--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org