Github user tdas commented on a diff in the pull request:
https://github.com/apache/spark/pull/7825#discussion_r36041346
--- Diff: extras/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisRecordProcessor.scala ---
@@ -75,21 +76,13 @@ private[kinesis] class KinesisRecordProcessor(
   override def processRecords(batch: List[Record], checkpointer: IRecordProcessorCheckpointer) {
     if (!receiver.isStopped()) {
       try {
-        /*
-         * Notes:
-         * 1) If we try to store the raw ByteBuffer from record.getData(), the Spark Streaming
-         *    Receiver.store(ByteBuffer) attempts to deserialize the ByteBuffer using the
-         *    internally-configured Spark serializer (kryo, etc).
-         * 2) This is not desirable, so we instead store a raw Array[Byte] and decouple
-         *    ourselves from Spark's internal serialization strategy.
-         * 3) For performance, the BlockGenerator is asynchronously queuing elements within its
-         *    memory before creating blocks. This prevents the small block scenario, but requires
-         *    that you register callbacks to know when a block has been generated and stored
-         *    (WAL is sufficient for storage) before can checkpoint back to the source.
-         */
-        batch.foreach(record => receiver.store(record.getData().array()))
-
-        logDebug(s"Stored: Worker $workerId stored ${batch.size} records for shardId $shardId")
+        if (batch.size() > 0) {
+          val dataIterator = batch.iterator().map { _.getData().array() }
--- End diff --
Oops, this problem once again. Sorry!
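The comment does not name the problem, but a likely candidate in the added line is that `batch.iterator()` returns a `java.util.Iterator`, which has no `.map` method in Scala, so `batch.iterator().map { _.getData().array() }` would not compile without first converting to a Scala iterator. A minimal sketch of that conversion, using a hypothetical stand-in for the Kinesis `Record` batch (a `java.util.List` of `ByteBuffer`s) rather than the real KCL types:

```scala
import java.nio.ByteBuffer
import scala.collection.JavaConverters._

object IteratorMapSketch {
  def main(args: Array[String]): Unit = {
    // Hypothetical stand-in for the Kinesis batch: a java.util.List of ByteBuffers.
    val batch: java.util.List[ByteBuffer] = java.util.Arrays.asList(
      ByteBuffer.wrap(Array[Byte](1, 2)),
      ByteBuffer.wrap(Array[Byte](3)))

    // batch.iterator() is a java.util.Iterator and has no .map in Scala;
    // .asScala wraps it as a scala.collection.Iterator, which does.
    val dataIterator: Iterator[Array[Byte]] =
      batch.iterator().asScala.map(_.array())

    // The wrapper is still lazy: elements are pulled from the Java
    // iterator only as the Scala iterator is consumed.
    println(dataIterator.map(_.length).sum) // prints 3
  }
}
```

`JavaConverters` (explicit `.asScala`) is generally preferred over the implicit `JavaConversions`, since it makes the boundary between Java and Scala collections visible at the call site.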