Github user srowen commented on a diff in the pull request:

    https://github.com/apache/spark/pull/11250#discussion_r53296925
  
    --- Diff: extras/kinesis-asl/src/test/scala/org/apache/spark/streaming/kinesis/KinesisStreamSuite.scala ---
    @@ -241,13 +243,13 @@ abstract class KinesisStreamTests(aggregateTestData: Boolean) extends KinesisFun
         kinesisStream.foreachRDD((rdd: RDD[Array[Byte]], time: Time) => {
           val kRdd = rdd.asInstanceOf[KinesisBackedBlockRDD[Array[Byte]]]
          val data = rdd.map { bytes => new String(bytes).toInt }.collect().toSeq
    -      collectedData(time) = (kRdd.arrayOfseqNumberRanges, data)
    +      collectedData.put(time, (kRdd.arrayOfseqNumberRanges, data))
         })
     
        ssc.remember(Minutes(60)) // remember all the batches so that they are all saved in checkpoint
         ssc.start()
     
    -    def numBatchesWithData: Int = collectedData.count(_._2._2.nonEmpty)
    +    def numBatchesWithData: Int = collectedData.asScala.count(_._2._2.nonEmpty)
    --- End diff ---
    
    I think we still have a problem in lines like this, and I think it's what @holdenk was alluding to. `asScala` returns a wrapper on the collection, and the count then iterates over it to count non-empty elements while the `put` above may be modifying the map. Note that `ConcurrentHashMap` iterators are weakly consistent, so this won't throw `ConcurrentModificationException`, but the count can still observe an inconsistent view of the map. We'd have to clone it, or synchronize on the whole object while counting (the latter is probably better).
    
    In that case, it may not add any value to use Java's `ConcurrentHashMap`. 
Synchronizing access to `mutable.HashMap` is the same and doesn't require using 
a Java type.
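
    To make that concrete, here is a minimal sketch of the suggested alternative: a plain `mutable.HashMap` with both the write and the count done under `synchronized` on the map itself. The object and method names here are hypothetical simplifications (using `Long` keys and `Seq[Int]` values rather than the test's `Time` and sequence-number-range tuples), not the actual test code:

    ```scala
    import scala.collection.mutable

    object CollectedDataSketch {
      // Plain Scala mutable map; all access goes through synchronized blocks below.
      private val collectedData = new mutable.HashMap[Long, Seq[Int]]()

      // Writer side: equivalent role to the foreachRDD body's put.
      def record(time: Long, data: Seq[Int]): Unit =
        collectedData.synchronized {
          collectedData(time) = data
        }

      // Reader side: the whole iteration happens under the same lock,
      // so no writer can modify the map mid-count.
      def numBatchesWithData: Int =
        collectedData.synchronized {
          collectedData.count(_._2.nonEmpty)
        }
    }
    ```

    Because both sides lock the same object, the count always sees a consistent snapshot, which a weakly consistent `ConcurrentHashMap` iteration does not guarantee.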

