Here I use 'KafkaUtils.createDirectStream' to integrate Kafka with Spark 
Streaming. I submitted the app, and after it had been running for a while I 
increased Kafka's partition count. When I then check the input offsets with 
'rdd.asInstanceOf[HasOffsetRanges].offsetRanges', only the offsets of the 
initial partitions are returned; the new partitions never appear.
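
For context, a minimal sketch of the setup described above, using the old spark-streaming-kafka (Kafka 0.8) direct API. The broker address, topic name, and batch interval are placeholders, not taken from the original post:

```scala
import kafka.serializer.StringDecoder
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka.{HasOffsetRanges, KafkaUtils}

object DirectStreamOffsets {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("direct-stream-offsets")
    val ssc = new StreamingContext(conf, Seconds(10))

    // Placeholder broker and topic; substitute your own
    val kafkaParams = Map("metadata.broker.list" -> "localhost:9092")
    val stream = KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder](
      ssc, kafkaParams, Set("mytopic"))

    stream.foreachRDD { rdd =>
      // Prints one line per Kafka partition consumed in this batch.
      // After repartitioning the topic, only the partitions that existed
      // when the stream was created show up here.
      val ranges = rdd.asInstanceOf[HasOffsetRanges].offsetRanges
      ranges.foreach { o =>
        println(s"topic=${o.topic} partition=${o.partition} " +
          s"from=${o.fromOffset} until=${o.untilOffset}")
      }
    }

    ssc.start()
    ssc.awaitTermination()
  }
}
```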


Does this mean Spark Streaming's Kafka integration can't adjust its parallelism 
when the number of Kafka partitions changes?
