Github user JoshRosen commented on the pull request:

    https://github.com/apache/spark/pull/5725#issuecomment-97274034
  
    The most recent test failure looks like a possible double-free somewhere.  
Here's what I saw when I ran that test locally:
    
    ```
    [info] Test 
org.apache.spark.unsafe.map.BytesToBytesMapOffHeapSuite.randomizedStressTest 
started
    java(7099,0x111e21000) malloc: *** error for object 0x7fafec90e800: pointer 
being freed was not allocated
    *** set a breakpoint in malloc_error_break to debug
    ```
    
    This error is non-deterministic.  Based on some local debugging with 
`Thread.dumpStack()`, I think that it was due to a race between 
`BytesToBytesMap`'s finalizer and the `tearDown()` method in 
`AbstractBytesToBytesMapSuite`.  I think that we should remove the finalizers, 
since they add complexity and are no longer necessary now that we have leak 
detection / cleanup at higher levels.
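
    As an illustration of the hazard (a hypothetical class and names, not the 
actual Spark code): an explicit `free()` from `tearDown()` racing a finalizer 
that releases the same native allocation. Guarding `free()` with a flag makes 
it idempotent; removing the finalizer entirely, as proposed above, removes 
the race altogether.
    
    ```java
    // Hypothetical sketch, not the actual BytesToBytesMap implementation.
    public class FinalizerRace {
        static class OffHeapBuffer {
            private long address;            // stand-in for a native pointer
            private boolean freed = false;
    
            OffHeapBuffer(long address) { this.address = address; }
    
            // Returns true only on the call that actually releases the memory.
            synchronized boolean free() {
                if (freed) {
                    return false;            // second caller (e.g. finalizer) is a no-op
                }
                freed = true;
                address = 0;                 // stand-in for freeing the native allocation
                return true;
            }
    
            @Override
            protected void finalize() {
                free();                      // without the guard, this could double-free
            }
        }
    
        public static void main(String[] args) {
            OffHeapBuffer buf = new OffHeapBuffer(0xCAFE);
            System.out.println(buf.free()); // explicit cleanup, as tearDown() would do
            System.out.println(buf.free()); // simulated finalizer call: safely ignored
        }
    }
    ```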


