GitHub user mateiz commented on a diff in the pull request:

    https://github.com/apache/spark/pull/1722#discussion_r15725785
  
    --- Diff: core/src/test/scala/org/apache/spark/util/collection/ExternalAppendOnlyMapSuite.scala ---
    @@ -30,8 +30,19 @@ class ExternalAppendOnlyMapSuite extends FunSuite with LocalSparkContext {
       private def mergeValue(buffer: ArrayBuffer[Int], i: Int) = buffer += i
       private def mergeCombiners(buf1: ArrayBuffer[Int], buf2: ArrayBuffer[Int]) = buf1 ++= buf2
     
    +  private def createSparkConf(loadDefaults: Boolean): SparkConf = {
    +    val conf = new SparkConf(loadDefaults)
    +    // Make the Java serializer write a reset instruction (TC_RESET) after each object to test
    +    // for a bug we had with bytes written past the last object in a batch (SPARK-2792)
    +    conf.set("spark.serializer.objectStreamReset", "0")
    +    conf.set("spark.serializer", "org.apache.spark.serializer.JavaSerializer")
    +    // Ensure that we actually have multiple batches per spill file
    +    conf.set("spark.shuffle.spill.batchSize", "10")
    --- End diff ---
    
    Are you sure about that? The spark.serializer.objectStreamReset above is set
    to 0, and that's what causes a TC_RESET after each object written; the
    batchSize controls how many objects are written before a batch is closed. I
    definitely saw crashes when I set this to 10 and did not have the fixes in
    ExternalSorter. There were more crashes when it was 1, so I can also set it
    to that. I don't think 0 will work with the code we have written.
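    
    As an aside, here is a minimal standalone sketch of the TC_RESET behavior
    (not part of this PR; the object name and the record being written are made
    up for illustration). It shows that java.io.ObjectOutputStream.reset() emits
    a TC_RESET marker into the stream, so resetting after every object leaves
    bytes past the last object of a batch, which is the SPARK-2792 symptom:
    
        import java.io.{ByteArrayOutputStream, ObjectOutputStream}
    
        object TcResetSketch {
          def main(args: Array[String]): Unit = {
            val bytes = new ByteArrayOutputStream()
            val out = new ObjectOutputStream(bytes)
            out.writeObject("some record")
            out.flush()
            val before = bytes.size()
            out.reset()   // emits a TC_RESET marker after the object just written
            out.flush()
            val after = bytes.size()
            // The marker sits outside any serialized object, so a reader that
            // stops after the last object of a batch leaves it unconsumed
            println(s"reset() wrote ${after - before} byte(s) past the last object")
            out.close()
          }
        }
    
    The marker lands outside any object, which is why a stream read that stops
    exactly at the last object of a batch can leave stray bytes behind.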

