These issues have likely been fixed in later releases. Please upgrade to the
latest release, Spark 2.3.0.

On Tue, Mar 6, 2018 at 5:11 PM, Junfeng Chen <darou...@gmail.com> wrote:

> Spark 2.1.1.
>
> Actually it is a warning rather than an exception, so there is no stack
> trace, just many repetitions of this line:
>
>> CachedKafkaConsumer: CachedKafkaConsumer is not running in
>> UninterruptibleThread. It may hang when CachedKafkaConsumer's methods are
>> interrupted because of KAFKA-1894.
>
>
>
> Regards,
> Junfeng Chen
>
> On Wed, Mar 7, 2018 at 3:34 AM, Tathagata Das <tathagata.das1...@gmail.com> wrote:
>
>> Which version of Spark are you using? And can you give us the full stack
>> trace of the exception?
>>
>> On Tue, Mar 6, 2018 at 1:53 AM, Junfeng Chen <darou...@gmail.com> wrote:
>>
>>> I am trying to read from Kafka and save the data as Parquet files on HDFS,
>>> following this post:
>>> https://stackoverflow.com/questions/45827664/read-from-kafka-and-write-to-hdfs-in-parquet
>>>
>>>
>>> The code is similar to:
>>>
>>> val df = spark
>>>   .read
>>>   .format("kafka")
>>>   .option("kafka.bootstrap.servers", "host1:port1,host2:port2")
>>>   .option("subscribe", "topic1")
>>>   .load()
>>>
>>> though in my case I am writing it in Java.
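>>>
>>> A rough Java sketch of what I mean is below (simplified, not my actual job;
>>> the app name and the HDFS output path are placeholders, and it assumes the
>>> spark-sql-kafka-0-10 package is on the classpath):
>>>
>>> import org.apache.spark.sql.Dataset;
>>> import org.apache.spark.sql.Row;
>>> import org.apache.spark.sql.SparkSession;
>>>
>>> public class KafkaToParquet {
>>>   public static void main(String[] args) {
>>>     SparkSession spark = SparkSession.builder()
>>>         .appName("KafkaToParquet")   // placeholder app name
>>>         .getOrCreate();
>>>
>>>     // Same Kafka source options as the Scala snippet above.
>>>     Dataset<Row> df = spark
>>>         .read()
>>>         .format("kafka")
>>>         .option("kafka.bootstrap.servers", "host1:port1,host2:port2")
>>>         .option("subscribe", "topic1")
>>>         .load();
>>>
>>>     // Kafka delivers key/value as binary, so cast them before saving.
>>>     Dataset<Row> values =
>>>         df.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)");
>>>
>>>     // Write the result out as Parquet on HDFS (placeholder path).
>>>     values.write().parquet("hdfs:///tmp/kafka_parquet_output");
>>>
>>>     spark.stop();
>>>   }
>>> }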
>>>
>>> However, I keep getting the following warning:
>>> CachedKafkaConsumer: CachedKafkaConsumer is not running in
>>> UninterruptibleThread. It may hang when CachedKafkaConsumer's methods are
>>> interrupted because of KAFKA-1894.
>>>
>>> How can I solve this? Thanks!
>>>
>>>
>>> Regards,
>>> Junfeng Chen
>>>
>>
>>
>
