Are you using Spark 1.6+?

See SPARK-11293.
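
That issue covers shuffle memory that spillable collections fail to
release under the unified memory manager, which would match both the
"Managed memory leak detected" warning and the failed allocation in
your trace.

In the meantime, giving execution memory more headroom sometimes works
around it. A rough sketch, not a confirmed fix; the app name and values
are illustrative and need tuning for your cluster:

  import org.apache.spark.{SparkConf, SparkContext}

  // Illustrative values only; tune for your cluster. With the unified
  // memory manager (Spark 1.6+), execution (shuffle) and storage share
  // spark.memory.fraction of the heap, and a lower storage fraction
  // leaves more of that share free for shuffle sorts.
  val conf = new SparkConf()
    .setAppName("shuffle-heavy-job")
    .set("spark.executor.memory", "8g")          // more heap per executor
    .set("spark.memory.fraction", "0.75")        // heap share for execution + storage
    .set("spark.memory.storageFraction", "0.3")  // smaller storage floor
  val sc = new SparkContext(conf)

Increasing parallelism so that each task shuffles less data (for
example, a repartition() to more partitions before the failing stage)
is another common workaround for this class of OOM.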

On Wed, Aug 3, 2016 at 5:03 AM, Rychnovsky, Dusan <
dusan.rychnov...@firma.seznam.cz> wrote:

> Hi,
>
> I have a Spark workflow that works fine when run on a relatively small
> portion of the data, but fails with strange errors when run on big data.
> In the log files of the failed executors I found the following:
>
> First, this warning:
>
> > Managed memory leak detected; size = 263403077 bytes, TID = 6524
>
> And then a series of errors like:
>
> > java.lang.OutOfMemoryError: Unable to acquire 241 bytes of memory, got 0
> >   at org.apache.spark.memory.MemoryConsumer.allocatePage(MemoryConsumer.java:120)
> >   at org.apache.spark.shuffle.sort.ShuffleExternalSorter.acquireNewPageIfNecessary(ShuffleExternalSorter.java:346)
> >   at org.apache.spark.shuffle.sort.ShuffleExternalSorter.insertRecord(ShuffleExternalSorter.java:367)
> >   at org.apache.spark.shuffle.sort.UnsafeShuffleWriter.insertRecordIntoSorter(UnsafeShuffleWriter.java:237)
> >   at org.apache.spark.shuffle.sort.UnsafeShuffleWriter.write(UnsafeShuffleWriter.java:164)
> >   at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73)
> >   at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
> >   at org.apache.spark.scheduler.Task.run(Task.scala:89)
> >   at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
> >   at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> >   at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> >   at java.lang.Thread.run(Thread.java:745)
>
> The job keeps failing in the same way (I have tried a few times).
>
> What could be causing this error?
>
> I have a feeling that I'm not providing all the context necessary to
> understand the issue. Please ask for any other information you need.
>
>
> Thank you,
>
> Dusan
>