GitHub user srowen commented on a diff in the pull request:

    https://github.com/apache/spark/pull/16114#discussion_r90756693
  
    --- Diff: external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisRecordProcessor.scala ---
    @@ -56,6 +56,27 @@ private[kinesis] class KinesisRecordProcessor[T](receiver: KinesisReceiver[T], w
         logInfo(s"Initialized workerId $workerId with shardId $shardId")
       }
     
    +  private def addRecords(batch: List[Record], checkpointer: IRecordProcessorCheckpointer): Unit = {
    +    receiver.addRecords(shardId, batch)
    +    logDebug(s"Stored: Worker $workerId stored ${batch.size} records for shardId $shardId")
    +    receiver.setCheckpointer(shardId, checkpointer)
    +  }
    +
    +  /**
    +   * Limit the number of records processed from the Kinesis stream. This is because the KCL
    +   * cannot control the number of aggregated records fetched even if we set `MaxRecords`
    +   * in `KinesisClientLibConfiguration`. For example, if we set the max number of records
    +   * in a worker to 10 and a producer aggregates two records into one message, the worker
    +   * could receive up to 20 records each time the callback function is called.
    +   */
    +  private def processRecordsWithLimit(
    +      batch: List[Record], checkpointer: IRecordProcessorCheckpointer): Unit = {
    +    val maxRecords = receiver.getCurrentLimit
    +    for (start <- 0 until batch.size by maxRecords) {
    --- End diff --
    
    Hm, it just occurred to me that you would have a problem here if `batch.size` and `maxRecords` were both over `Int.MaxValue / 2` and `maxRecords` were a bit smaller than `batch.size`: the addition below overflows `Int` and wraps negative.
    
    It seems like a corner case, but I note above that you already defensively capped `maxRecords` at `Int.MaxValue`, so maybe it's less unlikely than it sounds.
    
    You can fix it by doing the addition and the min comparison over `Long` values and then converting back to `Int`.
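    
    For illustration, here's a minimal sketch of that fix, assuming the loop body slices `batch` from `start` to `start + maxRecords` capped at `batch.size` (the exact body isn't visible in this hunk):
    
    ```
    for (start <- 0 until batch.size by maxRecords) {
      // Do the sum and the min over Long so that start + maxRecords cannot
      // overflow Int; the result is bounded by batch.size, so the cast back
      // to Int is safe.
      val end = math.min(start.toLong + maxRecords, batch.size.toLong).toInt
      addRecords(batch.slice(start, end), checkpointer)
    }
    ```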
    
    Alternatively I think this is even simpler in Scala, though I imagine 
there's some extra overhead here:
    
    ```
    batch.grouped(maxRecords).foreach(group => addRecords(group, checkpointer))
    ```
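    
    For reference, `grouped(n)` splits the list into chunks of at most `n` elements, so it sidesteps the index arithmetic (and the overflow) entirely:
    
    ```
    List(1, 2, 3, 4, 5).grouped(2).toList
    // => List(List(1, 2), List(3, 4), List(5))
    ```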
    
    I don't know of a good reviewer for this component but I think I'm 
comfortable merging a straightforward change like this.

