Github user jerryshao commented on the pull request:
https://github.com/apache/spark/pull/3798#issuecomment-68093197
Hi @koeninger, a few quick questions:
1. How is each RDD partition mapped to a Kafka partition? Is it a
one-to-one mapping, with each Kafka partition becoming one RDD partition?
2. How is ingestion rate control done on the receiving side; in other words,
how does the current task decide which offsets it should read?
3. Have you given any consideration to fault tolerance?
In general this is quite similar to a Kafka InputFormat I wrote a while ago
(https://github.com/jerryshao/kafka-input-format), which can be loaded by
HadoopRDD. I'm not sure whether this is the streaming way of achieving
exactly-once semantics?
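To make question 1 concrete: the mapping being asked about could be sketched as each RDD partition wrapping exactly one Kafka topic-partition plus an explicit offset interval. This is only an illustrative model of that idea, not the PR's actual API; the names `OffsetRange` and `KafkaRDDPartition` here are assumed for the sketch.

```java
import java.util.ArrayList;
import java.util.List;

public class OffsetRangeSketch {
    // One Kafka topic-partition plus the half-open offset interval
    // [fromOffset, untilOffset) that the corresponding task would read.
    record OffsetRange(String topic, int partition,
                       long fromOffset, long untilOffset) {}

    // Each RDD partition wraps exactly one offset range (one-to-one
    // Kafka partition -> RDD partition mapping).
    record KafkaRDDPartition(int index, OffsetRange range) {}

    static List<KafkaRDDPartition> makePartitions(List<OffsetRange> ranges) {
        List<KafkaRDDPartition> parts = new ArrayList<>();
        for (int i = 0; i < ranges.size(); i++) {
            parts.add(new KafkaRDDPartition(i, ranges.get(i)));
        }
        return parts;
    }

    public static void main(String[] args) {
        List<OffsetRange> ranges = List.of(
            new OffsetRange("events", 0, 100L, 200L),
            new OffsetRange("events", 1, 150L, 250L));
        for (KafkaRDDPartition p : makePartitions(ranges)) {
            System.out.println("rdd partition " + p.index()
                + " -> kafka partition " + p.range().partition()
                + " offsets [" + p.range().fromOffset()
                + ", " + p.range().untilOffset() + ")");
        }
    }
}
```

Under this model, rate control (question 2) reduces to how `untilOffset` is chosen for each batch, and fault tolerance (question 3) to whether the offset ranges can be deterministically recomputed on failure.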