Hi,
To debug this further, please check the following config properties:
- max.partition.fetch.bytes on the Spark Kafka consumer. If it is not set
for the consumer, the broker-level global config applies.
- spark.streaming.kafka.consumer.poll.ms
- spark.network.timeout (if poll.ms is not set, it defaults to
spark.network.timeout)
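As a minimal sketch, these properties could be passed on the command line like the snippet below. The values shown are illustrative assumptions for debugging, not recommendations, and the application jar path is a placeholder:

```shell
# Spark-level configs go through --conf; raise poll.ms past the default
# taken from spark.network.timeout to see whether the poll deadline moves.
spark-submit \
  --conf spark.streaming.kafka.consumer.poll.ms=120000 \
  --conf spark.network.timeout=300s \
  your-streaming-app.jar
```

Note that max.partition.fetch.bytes is a Kafka consumer property, so it is set in the Kafka consumer params your application passes when creating the stream, not via --conf.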
Akshay Bhardwaj
+91-97111-33849
On Wed, Mar 6, 2019 at 8:39 AM JF Chen <[email protected]> wrote:
> When my Kafka executor reads data from Kafka, it sometimes throws the
> error "java.lang.AssertionError: assertion failed: Failed to get records
> for **** after polling for 180000", which appears after 3 minutes of
> execution. The data waiting to be read is not large, about 1 GB, and the
> partitions read by other tasks are very fast; the error always occurs on
> certain specific executors.
>
> Regards,
> Junfeng Chen
>