Thanks Cody,

It worked for me by keeping the number of executors, each with 1 core, equal to
the number of Kafka partitions.
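
For reference, a minimal sketch of that configuration (the count of 4 is only an
assumption and should match your topic's partition count; spark.executor.instances
applies when running on YARN):

// One executor per Kafka partition, one core per executor, so the cached
// KafkaConsumer for a partition is only ever touched by a single task thread.
val conf = new org.apache.spark.SparkConf()
  .setAppName("kafka-dstream-app")        // hypothetical app name
  .set("spark.executor.instances", "4")   // assumed: topic has 4 partitions
  .set("spark.executor.cores", "1")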



On Mon, Sep 18, 2017 at 8:47 PM Cody Koeninger <c...@koeninger.org> wrote:

> Have you searched in JIRA, e.g.
>
> https://issues.apache.org/jira/browse/SPARK-19185
>
> On Mon, Sep 18, 2017 at 1:56 AM, HARSH TAKKAR <takkarha...@gmail.com>
> wrote:
> > Hi
> >
> > Changing the Spark version is my last resort; is there any other workaround
> > for this problem?
> >
> >
> > On Mon, Sep 18, 2017 at 11:43 AM pandees waran <pande...@gmail.com>
> wrote:
> >>
> >> All, may I know what exactly changed in 2.1.1 that solved this problem?
> >>
> >> Sent from my iPhone
> >>
> >> On Sep 17, 2017, at 11:08 PM, Anastasios Zouzias <zouz...@gmail.com>
> >> wrote:
> >>
> >> Hi,
> >>
> >> I had a similar issue using 2.1.0 but not with Kafka. Updating to 2.1.1
> >> solved my issue. Can you try with 2.1.1 as well and report back?
> >>
> >> Best,
> >> Anastasios
> >>
> >> On 17.09.2017 16:48, "HARSH TAKKAR" <takkarha...@gmail.com> wrote:
> >>
> >>
> >> Hi
> >>
> >> I am using Spark 2.1.0 with Scala 2.11.8, and while iterating over the
> >> partitions of each RDD in a DStream created with KafkaUtils, I am getting
> >> the exception below. Please suggest a fix.
> >>
> >> I have the following config:
> >>
> >> Kafka:
> >> enable.auto.commit:"true",
> >> auto.commit.interval.ms:"1000",
> >> session.timeout.ms:"30000",
> >>
> >> Spark:
> >>
> >> spark.streaming.backpressure.enabled=true
> >>
> >> spark.streaming.kafka.maxRatePerPartition=200
> >>
> >>
> >> Exception in task 0.2 in stage 3236.0 (TID 77795)
> >> java.util.ConcurrentModificationException: KafkaConsumer is not safe for
> >> multi-threaded access
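> >>
> >> For context, a minimal sketch of the kind of per-partition iteration that
> >> hits this, using the 0-10 direct stream API (broker, group id and topic
> >> name are hypothetical, and an existing StreamingContext ssc is assumed):
> >>
> >> import org.apache.kafka.common.serialization.StringDeserializer
> >> import org.apache.spark.streaming.kafka010.KafkaUtils
> >> import org.apache.spark.streaming.kafka010.LocationStrategies.PreferConsistent
> >> import org.apache.spark.streaming.kafka010.ConsumerStrategies.Subscribe
> >>
> >> val kafkaParams = Map[String, Object](
> >>   "bootstrap.servers" -> "broker:9092",               // hypothetical broker
> >>   "key.deserializer" -> classOf[StringDeserializer],
> >>   "value.deserializer" -> classOf[StringDeserializer],
> >>   "group.id" -> "example-group",                      // hypothetical group id
> >>   "enable.auto.commit" -> (true: java.lang.Boolean),
> >>   "auto.commit.interval.ms" -> "1000",
> >>   "session.timeout.ms" -> "30000"
> >> )
> >>
> >> val stream = KafkaUtils.createDirectStream[String, String](
> >>   ssc, PreferConsistent,
> >>   Subscribe[String, String](Seq("example-topic"), kafkaParams))
> >>
> >> stream.foreachRDD { rdd =>
> >>   rdd.foreachPartition { records =>
> >>     records.foreach(r => println(r.value))            // per-partition processing
> >>   }
> >> }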
> >>
> >> --
> >> Kind Regards
> >> Harsh
> >>
> >>
> >
>
