[jira] [Created] (HDFS-15185) StartupProgress reports edits segments until the entire startup completes

2020-02-19 Thread Konstantin Shvachko (Jira)
Konstantin Shvachko created HDFS-15185:
--

 Summary: StartupProgress reports edits segments until the entire 
startup completes
 Key: HDFS-15185
 URL: https://issues.apache.org/jira/browse/HDFS-15185
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.10.0
Reporter: Konstantin Shvachko
Assignee: Konstantin Shvachko


The Startup Progress page keeps reporting edits segments after the {{LOAD_EDITS}} 
stage is complete. New steps are added to StartupProgress during journal tailing 
until all startup phases have completed. This accumulates a large number of edits 
steps, since the {{SAFEMODE}} phase can take a long time on a large cluster.
With fast tailing the individual segments are small, but there are a lot of them 
(160K in our case), which makes the page take forever to load.
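
A minimal illustrative sketch (not the committed patch) of the kind of guard this 
calls for: once every startup phase has completed, StartupProgress should ignore 
step updates that arrive from edit tailing. The class and method names below are 
assumptions for illustration only.
{code:java}
// Hypothetical sketch; the real StartupProgress lives in
// org.apache.hadoop.hdfs.server.namenode.startupprogress and differs.
public class GuardedStartupProgress {
  private volatile boolean startupComplete = false;

  /** Call once, after the last startup phase (e.g. SAFEMODE) finishes. */
  public void markStartupComplete() {
    startupComplete = true;
  }

  /** Record a new edits-segment step only while startup is in progress. */
  public void beginStep(String phase, String step) {
    if (startupComplete) {
      return; // segments created by tailing no longer pile up on the page
    }
    // ... existing step-recording logic ...
  }
}
{code}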






Re: [Discuss] Ozone moving to Beta tag

2020-02-19 Thread Dinesh Chitlangia
+1 for beta given major improvements.



On Wed, Feb 19, 2020 at 4:17 PM Jitendra Pandey  wrote:

> +1. Given massive improvements in performance and stability, Ozone is ready
> for beta.


Re: [DISCUSS] EOL Hadoop branch-2.8

2020-02-19 Thread Dinesh Chitlangia
Thanks Wei-Chiu for initiating this.

+1 for 2.8 EOL.

On Tue, Feb 18, 2020 at 10:48 PM Akira Ajisaka  wrote:

> Thanks Wei-Chiu for starting the discussion,
>
> +1 for the EoL.
>
> -Akira
>
> On Tue, Feb 18, 2020 at 4:59 PM Ayush Saxena  wrote:
>
> > Thanks Wei-Chiu for initiating this
> > +1 for marking 2.8 EOL
> >
> > -Ayush
> >
> > > On 17-Feb-2020, at 11:14 PM, Wei-Chiu Chuang wrote:
> > >
> > > The last Hadoop 2.8.x release, 2.8.5, was GA on September 15th 2018.
> > >
> > > It's been 17 months since the release, and the community has by and
> > > large moved up to 2.9/2.10/3.x.
> > >
> > > With Hadoop 3.3.0 on the horizon, is it time to start the EOL
> > > discussion and reduce the number of active branches?
> >


Re: [Discuss] Ozone moving to Beta tag

2020-02-19 Thread Jitendra Pandey
+1. Given massive improvements in performance and stability, Ozone is ready
for beta.

On Wed, Feb 19, 2020 at 1:04 PM Salvatore LaMendola (BLOOMBERG/ 919 3RD A) <
slamendo...@bloomberg.net> wrote:

> +1 on moving to beta. This move makes sense to me.
>
> I've tested each point release so far in 0.4, including building several
> master snapshots along the way, and I agree with what Anu is saying below
> with regard to Ozone's stability having improved beyond "alpha" state.


Re: [Discuss] Ozone moving to Beta tag

2020-02-19 Thread Salvatore LaMendola (BLOOMBERG/ 919 3RD A)
+1 on moving to beta. This move makes sense to me.

I've tested each point release so far in 0.4, including building several master 
snapshots along the way, and I agree with what Anu is saying below with regard 
to Ozone's stability having improved beyond "alpha" state.




[Discuss] Ozone moving to Beta tag

2020-02-19 Thread Anu Engineer
Hi All,


I would like to propose moving Ozone from 'Alpha' tags to 'Beta' tags when
we do future releases. Here are a few reasons why I think we should make
this move.



   1. Ozone Manager, the Namenode for Ozone, scales to more than 1 billion
   keys. We tested this in our labs in an organic fashion; that is, we were
   able to create more than 1 billion keys from external clients with no loss
   in performance.
   2. The Ozone Manager meets the performance and resource constraints that
   we set out to achieve. We were able to sustain the same throughput at the
   Ozone Manager for the three days it took us to create these 1 billion keys.
   That is, we did not have to shut down or resize memory for the namenode as
   we went through this exercise.
   3. Most critical, we ran this experiment with 64 GB of JVM heap and 64 GB
   of off-heap RAM. That is, the Ozone Manager was able to achieve this scale
   with a far smaller memory footprint than HDFS.
   4. Ozone's performance is on par with HDFS when running workloads like
   Hive (
   https://blog.cloudera.com/benchmarking-ozone-clouderas-next-generation-storage-for-cdp/
   ).
   5. We have been able to run long-running clusters with Ozone.


Having achieved these goals, I propose that we move from the planned
0.4.2-Alpha release to 0.5.0-Beta as our next release. If we hear no
concerns about this, we would like to move Ozone from Alpha to Beta
releases.


Thanks

Anu


P.S. I am CC-ing HDFS dev since many people who are interested in Ozone
still have not subscribed to Ozone dev lists. My apologies if it feels like
spam; I promise that over time we will become less noisy in the HDFS
channel.


P.P.S. I know lots of you will want to know more specifics; our blog presses
are working overtime, and I promise you will get to see all the details
pretty soon.


Re: This week's Hadoop storage community online meetup (APAC)

2020-02-19 Thread Ahmed Hussein
Hi Wei-Chiu,

I joined the meeting on Feb 19th via the Zoom link sent in the original
message, but no one was there.
Is the schedule up to date? (
https://calendar.google.com/calendar/b/3?cid=aGFkb29wLmNvbW11bml0eS5zeW5jLnVwQGdtYWlsLmNvbQ
)

On Thu, Feb 13, 2020 at 11:47 AM Wei-Chiu Chuang wrote:

> Thanks for joining the call last night/yesterday.
>
> Please find the video recording here:
>
> https://cloudera.zoom.us/rec/play/tMYpdeyp_Ts3EoaR5ASDVPV7W429Kays03Id-PMPzxq9WyZQZlGkbrtBM-Hk48mM3YD3z9xi2zWexHZz?continueMode=true
>
> The presentation slides are available in my personal Google Drive:
> https://drive.google.com/open?id=1IUPtknaPUeKIL74TpNt-R6CK5ICz5veW
>
> On Wed, Feb 12, 2020 at 9:50 PM Wei-Chiu Chuang wrote:
>
> > Gentle reminder for this event.
> > Siyao will lead the session first, followed by a demo.
> >
> > Zoom link: https://cloudera.zoom.us/j/880548968
> >
> > Past meeting minutes:
> >
> >
> https://docs.google.com/document/d/1jXM5Ujvf-zhcyw_5kiQVx6g-HeKe-YGnFS_1-qFXomI/edit?usp=sharing
> >
> >
> > On Mon, Feb 10, 2020 at 1:42 PM Wei-Chiu Chuang wrote:
> >
> >> Hi!
> >>
> >> I would like to use this week's session to discuss Distributed Tracing
> >> in Hadoop.
> >>
> >> We had a session at the last Bay Area Hadoop meetup back in June
> >> discussing the Distributed Tracing work we have been doing. I'd like to
> >> share our latest update with the APAC community.
> >>
> >> Zoom link: https://cloudera.zoom.us/j/880548968
> >>
> >> Time/Date:
> >> Feb 12 10PM (US west coast PST) / Feb 13 2pm (Beijing) / Feb 13 3pm
> >> (Tokyo) / Feb 13 11:30am (New Delhi)
> >>
> >
>


[jira] [Created] (HDFS-15184) Failed to execute goal org.apache.maven.plugins:maven-antrun-plugin:1.7:run (make) on project hadoop-hdfs-native-client: An Ant BuildException has occured: exec returned: 1

2020-02-19 Thread Jira
任建亭 created HDFS-15184:
--

 Summary: Failed to execute goal 
org.apache.maven.plugins:maven-antrun-plugin:1.7:run (make) on project 
hadoop-hdfs-native-client: An Ant BuildException has occured: exec returned: 1
 Key: HDFS-15184
 URL: https://issues.apache.org/jira/browse/HDFS-15184
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs
Affects Versions: 3.2.1
 Environment: Windows 10

JDK 1.8

Maven 3.6.1

ProtocolBuffer 2.5.0

CMake 3.1.3

Git 2.25.0

zlib 1.2.5

Visual Studio 2010 Professional
Reporter: 任建亭
 Fix For: 3.2.1


When I build Hadoop 3.2.1 on Windows 10, it fails. My command is 'mvn clean 
package -Pdist,native-win -DskipTests -Dtar'.
{code:java}
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-antrun-plugin:1.7:run (make) on project 
hadoop-hdfs-native-client: An Ant BuildException has occured: exec returned: 1
[ERROR] around Ant part .. @ 9:122 in 
D:\h3s\hadoop-hdfs-project\hadoop-hdfs-native-client\target\antrun\build-main.xml
{code}






[jira] [Created] (HDFS-15183) For AzureNativeFS, when BlockCompaction is enabled, FileSystem.create(path).close() throws an exception.

2020-02-19 Thread Xiaolei Liu (Jira)
Xiaolei Liu created HDFS-15183:
--

 Summary: For AzureNativeFS, when BlockCompaction is enabled, 
FileSystem.create(path).close() throws an exception.
 Key: HDFS-15183
 URL: https://issues.apache.org/jira/browse/HDFS-15183
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: fs/azure
Affects Versions: 3.2.1, 2.9.2
 Environment: macOS Mojave 10.14.6
Reporter: Xiaolei Liu


For AzureNativeFS, when BlockCompaction is enabled, 
FileSystem.create(path).close() throws a "blob does not exist" exception.

Block Compaction setting: fs.azure.block.blob.with.compaction.dir
The exception is thrown from close() and only occurs when nothing was written. 
When any content is actually written to the file, the same close() call does 
not trigger the exception.

When BlockCompaction is not enabled, this issue does not happen.
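
A minimal repro sketch based on the description above, assuming a placeholder 
wasb URI and compaction directory; the only non-default setting is the 
compaction directory named above.
{code:java}
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class CompactionCloseRepro {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Enable block compaction for blobs under /compaction (placeholder dir).
    conf.set("fs.azure.block.blob.with.compaction.dir", "/compaction");
    FileSystem fs = FileSystem.get(
        URI.create("wasb://CONTAINER@ACCOUNT.blob.core.windows.net"), conf);
    // Per the report: create() followed immediately by close(), with nothing
    // written, throws AzureException ("Source blob ... does not exist");
    // writing at least one byte before close() avoids it.
    fs.create(new Path("/compaction/test.log")).close();
  }
}
{code}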

Call Stack:

org.apache.hadoop.fs.azure.AzureException: Source blob 
_$azuretmpfolder$/956457df-4a3e-4285-bc68-29f68b9b36c4test1911.log does not 
exist.
at org.apache.hadoop.fs.azure.AzureNativeFileSystemStore.rename(AzureNativeFileSystemStore.java:2648)
at org.apache.hadoop.fs.azure.AzureNativeFileSystemStore.rename(AzureNativeFileSystemStore.java:2608)
at org.apache.hadoop.fs.azure.NativeAzureFileSystem$NativeAzureFsOutputStream.restoreKey(NativeAzureFileSystem.java:1199)
at org.apache.hadoop.fs.azure.NativeAzureFileSystem$NativeAzureFsOutputStream.close(NativeAzureFileSystem.java:1068)
at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:72)
at org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:106)






Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2020-02-19 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1415/

[Feb 18, 2020 5:50:11 PM] (aagarwal) HADOOP-16833. InstrumentedLock should log 
lock queue time. Contributed
[Feb 19, 2020 12:50:37 AM] (github) YARN-8374. Upgrade objenesis to 2.6 (#1798)




-1 overall


The following subsystems voted -1:
asflicense compile findbugs mvninstall mvnsite pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml
 

FindBugs :

   
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-mawo/hadoop-yarn-applications-mawo-core
 
   Class org.apache.hadoop.applications.mawo.server.common.TaskStatus 
implements Cloneable but does not define or use clone method At 
TaskStatus.java:does not define or use clone method At TaskStatus.java:[lines 
39-346] 
   Equals method for 
org.apache.hadoop.applications.mawo.server.worker.WorkerId assumes the argument 
is of type WorkerId At WorkerId.java:the argument is of type WorkerId At 
WorkerId.java:[line 114] 
   
org.apache.hadoop.applications.mawo.server.worker.WorkerId.equals(Object) does 
not check for null argument At WorkerId.java:null argument At 
WorkerId.java:[lines 114-115] 

FindBugs :

   module:hadoop-cloud-storage-project/hadoop-cos 
   Redundant nullcheck of dir, which is known to be non-null in 
org.apache.hadoop.fs.cosn.BufferPool.createDir(String) Redundant null check at 
BufferPool.java:is known to be non-null in 
org.apache.hadoop.fs.cosn.BufferPool.createDir(String) Redundant null check at 
BufferPool.java:[line 66] 
   org.apache.hadoop.fs.cosn.CosNInputStream$ReadBuffer.getBuffer() may 
expose internal representation by returning CosNInputStream$ReadBuffer.buffer 
At CosNInputStream.java:by returning CosNInputStream$ReadBuffer.buffer At 
CosNInputStream.java:[line 87] 
   Found reliance on default encoding in 
org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFile(String, File, 
byte[]):in org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFile(String, 
File, byte[]): new String(byte[]) At CosNativeFileSystemStore.java:[line 199] 
   Found reliance on default encoding in 
org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFileWithRetry(String, 
InputStream, byte[], long):in 
org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFileWithRetry(String, 
InputStream, byte[], long): new String(byte[]) At 
CosNativeFileSystemStore.java:[line 178] 
   org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.uploadPart(File, 
String, String, int) may fail to clean up java.io.InputStream Obligation to 
clean up resource created at CosNativeFileSystemStore.java:fail to clean up 
java.io.InputStream Obligation to clean up resource created at 
CosNativeFileSystemStore.java:[line 252] is not discharged 

Failed junit tests :

   hadoop.yarn.server.resourcemanager.metrics.TestSystemMetricsPublisher 
   hadoop.yarn.applications.distributedshell.TestDistributedShell 
   hadoop.mapreduce.v2.app.rm.TestRMCommunicator 
   hadoop.mapreduce.v2.app.rm.TestRMContainerAllocator 
   hadoop.mapreduce.v2.app.TestStagingCleanup 
   hadoop.yarn.sls.TestReservationSystemInvariants 
  

   mvninstall:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1415/artifact/out/patch-mvninstall-root.txt
  [1.2M]

   compile:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1415/artifact/out/patch-compile-root.txt
  [20K]

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1415/artifact/out/patch-compile-root.txt
  [20K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1415/artifact/out/patch-compile-root.txt
  [20K]

   checkstyle:

   

Apache Hadoop qbt Report: branch2.10+JDK7 on Linux/x86

2020-02-19 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/601/

No changes




-1 overall


The following subsystems voted -1:
asflicense compile findbugs hadolint mvninstall mvnsite pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/empty-configuration.xml
 
   hadoop-tools/hadoop-azure/src/config/checkstyle-suppressions.xml 
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/public/crossdomain.xml 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/public/crossdomain.xml
 

FindBugs :

   
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/hadoop-yarn-server-timelineservice-hbase-client
 
   Boxed value is unboxed and then immediately reboxed in 
org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result,
 byte[], byte[], KeyConverter, ValueConverter, boolean) At 
ColumnRWHelper.java:then immediately reboxed in 
org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result,
 byte[], byte[], KeyConverter, ValueConverter, boolean) At 
ColumnRWHelper.java:[line 335] 

Failed junit tests :

   hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys 
   hadoop.hdfs.server.namenode.TestNameNodeMXBean 
   hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints 
   hadoop.fs.http.client.TestHttpFSFWithWebhdfsFileSystem 
   hadoop.fs.http.client.TestHttpFSFWithSWebhdfsFileSystem 
   hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints 
   hadoop.yarn.client.api.impl.TestAMRMProxy 
   hadoop.registry.secure.TestSecureLogins 
   hadoop.yarn.server.timelineservice.security.TestTimelineAuthFilterForV2 
  

   mvninstall:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/601/artifact/out/patch-mvninstall-root.txt
  [0]

   compile:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/601/artifact/out/patch-compile-root-jdk1.7.0_95.txt
  [64K]

   cc:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/601/artifact/out/patch-compile-root-jdk1.7.0_95.txt
  [64K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/601/artifact/out/patch-compile-root-jdk1.7.0_95.txt
  [64K]

   compile:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/601/artifact/out/patch-compile-root-jdk1.8.0_242.txt
  [0]

   cc:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/601/artifact/out/patch-compile-root-jdk1.8.0_242.txt
  [0]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/601/artifact/out/patch-compile-root-jdk1.8.0_242.txt
  [0]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/601/artifact/out//testptch/patchprocess/maven-patch-checkstyle-root.txt
  []

   hadolint:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/601/artifact/out/diff-patch-hadolint.txt
  [4.0K]

   mvnsite:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/601/artifact/out/patch-mvnsite-root.txt
  [52K]

   pathlen:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/601/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/601/artifact/out/diff-patch-pylint.txt
  [24K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/601/artifact/out/diff-patch-shellcheck.txt
  [56K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/601/artifact/out/diff-patch-shelldocs.txt
  [8.0K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/601/artifact/out/whitespace-eol.txt
  [12M]
   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/601/artifact/out/whitespace-tabs.txt
  [1.3M]

   xml:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/601/artifact/out/xml.txt
  [12K]

   findbugs:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/601/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice-hbase_hadoop-yarn-server-timelineservice-hbase-client-warnings.html
  [8.0K]

   javadoc: