What does /tmp/jvm-21940/hs_error.log tell you? It might give hints as to
which threads are allocating the extra off-heap memory.
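
If you haven't dug through one of these before: the hs_err file has a
"Current thread" line naming the thread that hit the failed allocation and a
"Memory:" line showing what the JVM thought physical memory and swap looked
like. Something along these lines pulls both out (the heredoc below fakes a
minimal log purely for illustration; on your box, point LOG at the real
/tmp/jvm-21940/hs_error.log instead):

```shell
# Sketch only: the heredoc creates a made-up hs_err fragment so the grep
# commands are runnable anywhere; substitute the real log path in practice.
LOG=$(mktemp)
cat > "$LOG" <<'EOF'
Current thread (0x00007f1a2c0a9800):  JavaThread "Executor task launch worker-1"
Memory: 4k page, physical 2048000k(10240k free), swap 0k(0k free)
EOF
CUR=$(grep '^Current thread' "$LOG")   # thread that hit the failed allocation
MEM=$(grep '^Memory:' "$LOG")          # physical memory / swap as the JVM saw it
echo "$CUR"
echo "$MEM"
rm -f "$LOG"
```

A "swap 0k" line in the real log would itself be a clue, since a container
with no swap fails hard the moment commit_memory can't be satisfied.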


On Fri, Nov 21, 2014 at 1:50 PM, Nicholas Chammas <
nicholas.cham...@gmail.com> wrote:

> Howdy folks,
>
> I’m trying to understand why I’m getting “insufficient memory” errors when
> trying to run Spark unit tests within a CentOS Docker container.
>
> I’m building Spark and running the tests as follows:
>
> # build
> sbt/sbt -Pyarn -Phadoop-2.3 -Dhadoop.version=2.3.0 -Pkinesis-asl
> -Phive -Phive-thriftserver package assembly/assembly
>
> # Scala unit tests
> sbt/sbt -Pyarn -Phadoop-2.3 -Dhadoop.version=2.3.0 -Pkinesis-asl
> -Phive -Phive-thriftserver catalyst/test sql/test hive/test mllib/test
>
> The build completes successfully. After humming along for many minutes, the
> unit tests fail with this:
>
> OpenJDK 64-Bit Server VM warning: INFO:
> os::commit_memory(0x000000074a580000, 30932992, 0) failed;
> error='Cannot allocate memory' (errno=12)
> #
> # There is insufficient memory for the Java Runtime Environment to
> continue.
> # Native memory allocation (malloc) failed to allocate 30932992 bytes
> for committing reserved memory.
> # An error report file with more information is saved as:
> # /tmp/jvm-21940/hs_error.log
> Exception in thread "Thread-20" Exception in thread "Thread-16"
> java.io.EOFException
>     at
> java.io.ObjectInputStream$BlockDataInputStream.peekByte(ObjectInputStream.java:2598)
>     at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1318)
>     at java.io.ObjectInputStream.readObject(ObjectInputStream.java:370)
>     at
> org.scalatest.tools.Framework$ScalaTestRunner$Skeleton$1$React.react(Framework.scala:945)
>     at
> org.scalatest.tools.Framework$ScalaTestRunner$Skeleton$1.run(Framework.scala:934)
>     at java.lang.Thread.run(Thread.java:745)
> java.net.SocketException: Connection reset
>     at java.net.SocketInputStream.read(SocketInputStream.java:196)
>     at java.net.SocketInputStream.read(SocketInputStream.java:122)
>     at java.net.SocketInputStream.read(SocketInputStream.java:210)
>     at
> java.io.ObjectInputStream$PeekInputStream.peek(ObjectInputStream.java:2293)
>     at
> java.io.ObjectInputStream$BlockDataInputStream.peek(ObjectInputStream.java:2586)
>     at
> java.io.ObjectInputStream$BlockDataInputStream.peekByte(ObjectInputStream.java:2596)
>     at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1318)
>     at java.io.ObjectInputStream.readObject(ObjectInputStream.java:370)
>     at sbt.React.react(ForkTests.scala:114)
>     at
> sbt.ForkTests$$anonfun$mainTestTask$1$Acceptor$2$.run(ForkTests.scala:74)
>     at java.lang.Thread.run(Thread.java:745)
>
> Here are some (I think) relevant environment variables I have set:
>
> export
> JAVA_HOME="/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.71-2.5.3.1.el7_0.x86_64"
> export JAVA_OPTS="-Xms128m -Xmx1g -XX:MaxPermSize=128m"
> export MAVEN_OPTS="-Xmx512m -XX:MaxPermSize=128m"
>
> How do I narrow down why this is happening? I know that running this thing
> within a Docker container may be playing a role here, but before poking
> around with Docker configs I want to make an effort at getting the Java
> setup right within the container.
>
> I’ve already tried giving the container 2GB of memory, so at this point I
> don’t think the container’s memory limit itself is the problem.
>
> Any pointers on how to narrow the problem down?
>
> Nick
>
> P.S. If you’re wondering why I’m trying to run unit tests within a Docker
> container, I’m exploring a different angle on SPARK-3431
> <https://issues.apache.org/jira/browse/SPARK-3431>.
>

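One thing worth sanity-checking on the 2GB point: the stack trace shows
sbt.ForkTests, so sbt is forking a separate JVM for the tests, and the
container has to hold the sbt JVM plus each forked test JVM at once. A rough
back-of-the-envelope, assuming the forks inherit something like your
-Xmx1g -XX:MaxPermSize=128m (the native-overhead number is a guess on my
part, not from your log):

```shell
# Rough per-JVM memory budget in MB. NATIVE is a guessed allowance for
# thread stacks, code cache, and other off-heap allocations.
HEAP=1024      # -Xmx1g
PERM=128       # -XX:MaxPermSize=128m
NATIVE=256     # guess: stacks + code cache + NIO buffers
PER_JVM=$((HEAP + PERM + NATIVE))
echo "worst case per JVM: ${PER_JVM} MB"
# sbt itself plus a single forked test JVM already overshoots a 2GB cap:
echo "sbt + 1 forked test JVM: $((2 * PER_JVM)) MB"
```

If that arithmetic is in the right ballpark, either bumping the container
well past 2GB or shrinking -Xmx for the forked test JVMs would be the next
experiment.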