[
https://issues.apache.org/jira/browse/HADOOP-13514?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15611719#comment-15611719
]
Steve Loughran commented on HADOOP-13514:
-----------------------------------------
OK, I see it in
[https://builds.apache.org/job/PreCommit-HDFS-Build/17313/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt]
{code}
Exception in thread "Thread-1246" Exception in thread "Thread-1267"
java.lang.OutOfMemoryError: GC overhead limit exceeded
    at java.util.Arrays.copyOfRange(Arrays.java:3664)
    at java.lang.String.<init>(String.java:207)
    at java.io.BufferedReader.readLine(BufferedReader.java:356)
    at java.io.BufferedReader.readLine(BufferedReader.java:389)
    at org.apache.maven.surefire.shade.org.apache.maven.shared.utils.cli.StreamPumper.run(StreamPumper.java:76)
java.lang.OutOfMemoryError: GC overhead limit exceeded
Exception in thread "Thread-1249" java.lang.OutOfMemoryError: GC overhead limit exceeded
    at java.util.Arrays.copyOfRange(Arrays.java:3664)
    at java.lang.String.<init>(String.java:207)
    at java.io.BufferedReader.readLine(BufferedReader.java:356)
    at java.io.BufferedReader.readLine(BufferedReader.java:389)
    at org.apache.maven.surefire.shade.org.apache.maven.shared.utils.cli.StreamPumper.run(StreamPumper.java:76)
Exception in thread "Thread-1264" java.lang.OutOfMemoryError: GC overhead limit exceeded
Exception in thread "ping-timer-10sec" java.lang.OutOfMemoryError: GC overhead limit exceeded
{code}
Maybe it's asking for even more memory than usual. A search for the error
online brings up a StackOverflow topic placing this near the Xerces code, and
remember: to generate the XML result, surefire has to build up the entire DOM
of the output. If we've got a test which generates lots of console output, it
may already be close to the edge in memory use, with the surefire update
tipping it over.
# I'm going to see if I can tweak the memory settings of the HDFS precommit
runs: increase the heap, rerun the failed build. If the problem goes away,
roll the change out to the (many) other builds. If it doesn't, roll it back.
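A minimal sketch of the kind of tweak meant here. Note the StreamPumper frames
in the trace run inside the Maven JVM itself (it buffers the forked tests'
console output), so it is Maven's own heap that would be raised; the 2g figure
is illustrative, not the value the builds actually use:
{code}
# Illustrative only: give the Maven/surefire JVM more headroom before the
# precommit run. MAVEN_OPTS sizes the Maven process, which hosts the
# StreamPumper threads seen in the stack trace above.
export MAVEN_OPTS="-Xmx2g"
{code}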
> Upgrade maven surefire plugin to 2.19.1
> ---------------------------------------
>
> Key: HADOOP-13514
> URL: https://issues.apache.org/jira/browse/HADOOP-13514
> Project: Hadoop Common
> Issue Type: Improvement
> Components: build
> Affects Versions: 2.8.0
> Reporter: Ewan Higgs
> Assignee: Ewan Higgs
> Priority: Minor
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HADOOP-13514-addendum.01.patch, HADOOP-13514.002.patch,
> surefire-2.19.patch
>
>
> A lot of people working on Hadoop don't want to run all the tests when they
> develop; only the bits they're working on. Surefire 2.19 introduced more
> useful test filters which let us run a subset of the tests that brings the
> build time down from 'come back tomorrow' to 'grab a coffee'.
> For instance, if I only care about the S3 adaptor, I might run:
> {code}
> mvn test -Dmaven.javadoc.skip=true -Pdist,native -Djava.awt.headless=true \
>   "-Dtest=org.apache.hadoop.fs.*, org.apache.hadoop.hdfs.*, org.apache.hadoop.fs.s3a.*"
> {code}
> We can work around this by specifying the surefire version on the command
> line, but it would be better, IMO, to just update the default surefire
> version used.
> {code}
> mvn test -Dmaven.javadoc.skip=true -Pdist,native -Djava.awt.headless=true \
>   "-Dtest=org.apache.hadoop.fs.*, org.apache.hadoop.hdfs.*, org.apache.hadoop.fs.s3a.*" \
>   -Dmaven-surefire-plugin.version=2.19.1
> {code}
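> Updating the default would presumably just mean bumping the surefire version
> property in the build's parent pom. A sketch, where the property name is
> taken from the -D override above and the file location is an assumption:
> {code}
> <!-- hadoop-project/pom.xml (location assumed) -->
> <properties>
>   <maven-surefire-plugin.version>2.19.1</maven-surefire-plugin.version>
> </properties>
> {code}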
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)