Not quite sure if the error is resolved. Upon further probing, the setting
spark.memory.offHeap.enabled is not getting applied in this build. When I
print its value from
core/src/main/scala/org/apache/spark/memory/MemoryManager.scala, it returns
false even though the web UI is indicating that it's been enabled.
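For anyone hitting the same mismatch: a minimal spark-defaults.conf fragment showing both the old 1.5.x name discussed in this thread and the renamed 1.6.x settings read by MemoryManager. Note the 1.6 flag is only meaningful when a positive off-heap size is also set; the 1 GB size below is purely illustrative.

```properties
# Spark 1.5.x name (boolean):
spark.unsafe.offHeap             true

# Spark 1.6.x names; enabled is ignored unless size (in bytes) is positive:
spark.memory.offHeap.enabled     true
spark.memory.offHeap.size        1073741824
```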
Thanks Ted. That stack trace is from the 1.5.1 build.
I tried on the latest code as you suggested. Memory management seems to
have changed quite a bit and this error has been fixed as well. :)
Thanks for the help!
Regards,
~Mayuresh
On Mon, Dec 21, 2015 at 10:10 AM, Ted Yu wrote:
Any intuition on this?
~Mayuresh
On Thu, Dec 17, 2015 at 8:04 PM, Mayuresh Kunjir
wrote:
> I am testing a simple Sort program written using the DataFrame API. When I
> enable spark.unsafe.offHeap, the output stage fails with an NPE. The
> exception when run on spark-1.5.1 is copied below.
w.r.t. the frame
    at org.apache.spark.sql.execution.UnsafeExternalRowSorter$RowComparator.compare(UnsafeExternalRowSorter.java:202)
I looked at UnsafeExternalRowSorter.java in 1.6.0, which has only 192 lines
of code.
Can you run with latest RC of 1.6.0 and paste the stack trace ?
Thanks
On Thu, Dec 17,
I am testing a simple Sort program written using the DataFrame API. When I
enable spark.unsafe.offHeap, the output stage fails with an NPE. The
exception when run on spark-1.5.1 is copied below.
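For context, a minimal sketch of the kind of Sort program described above, targeting a 1.5/1.6-era SQLContext. The app name, data, and output path are illustrative, not taken from the original report.

```scala
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SQLContext

object SimpleSort {
  def main(args: Array[String]): Unit = {
    // Off-heap setting under test (1.5.x name; renamed to
    // spark.memory.offHeap.enabled in 1.6).
    val conf = new SparkConf()
      .setAppName("SimpleSort")
      .set("spark.unsafe.offHeap", "true")
    val sc = new SparkContext(conf)
    val sqlContext = new SQLContext(sc)
    import sqlContext.implicits._

    // Illustrative input: a DataFrame of (key, value) pairs.
    val df = sc.parallelize(1 to 1000000)
      .map(i => (i % 1000, i))
      .toDF("key", "value")

    // The output stage of a sort like this is where the NPE was reported.
    df.sort($"key").write.parquet("/tmp/sorted-output")

    sc.stop()
  }
}
```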
Job aborted due to stage failure: Task 23 in stage 3.0 failed 4 times, most
recent failure: Lost task