Do you have any script or steps to write the broker topic messages into
HDFS? Please help me with this.
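A common approach for copying topic messages into HDFS is Kafka Connect with the Confluent HDFS sink connector. A minimal properties sketch, assuming that connector is installed on the Connect worker; the topic name, NameNode URL, and flush size below are placeholders, not values from this thread:

```properties
# Placeholders: replace the topic and NameNode host for your cluster.
name=hdfs-sink
connector.class=io.confluent.connect.hdfs.HdfsSinkConnector
tasks.max=1
# topic(s) to copy into HDFS
topics=my-topic
# HDFS NameNode URL
hdfs.url=hdfs://namenode:8020
# number of records to accumulate per output file
flush.size=1000
```

This can then be started in standalone mode with `connect-standalone worker.properties hdfs-sink.properties`.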
On Tue, Sep 5, 2017 at 11:33 AM, Sagar Nadagoud <
sagar.nadag...@wildjasmine.com> wrote:
> Thank you for the reply. The issue got solved; I had a mistake in the
> broker port configuration. I just fixed it.
>
Thank you for the reply. The issue got solved; I had a mistake in the broker
port configuration. I just fixed it.
On Mon, Sep 4, 2017 at 9:10 PM, Ted Yu wrote:
> Please give us more information:
>
> release of Kafka
>
> Did your consumer get any error ?
> Have you inspected broker log(s) ?
>
> Cheers
>
Hi,
Please help me with this issue; I am unable to read messages from the topic.
Please give us more information:
release of Kafka
Did your consumer get any error ?
Have you inspected broker log(s) ?
Cheers
On Sun, Sep 3, 2017 at 11:08 PM, Sagar Nadagoud <
sagar.nadag...@wildjasmine.com> wrote:
> Hi,
>
> Please help me with this issue; I am unable to read messages from the topic.
Hi,
We are planning to expand the cluster from 2 nodes to 8 nodes. The partition
reassignment tool has the option to move a topic or a partition,
irrespective of the number of nodes added. If I give all the topics in
topic-to-move.json and all the brokers in the below command, will it
give an equal distribution
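For the reassignment question above, the tool does spread the given topics over the given broker list. A toy Python sketch of a round-robin spread that emits JSON in the shape the reassignment tool consumes — this is an illustration of the idea, not the tool's actual algorithm, and the topic name, partition count, and broker IDs are made up:

```python
import json

def build_reassignment(topic_partitions, brokers, replication_factor=2):
    """topic_partitions: {topic: partition_count}; brokers: list of broker IDs."""
    entries = []
    i = 0
    for topic, count in sorted(topic_partitions.items()):
        for p in range(count):
            # pick `replication_factor` consecutive brokers, wrapping around the list
            replicas = [brokers[(i + r) % len(brokers)]
                        for r in range(replication_factor)]
            entries.append({"topic": topic, "partition": p, "replicas": replicas})
            i += 1
    return {"version": 1, "partitions": entries}

# Hypothetical topic "events" with 4 partitions, spread over 8 brokers.
plan = build_reassignment({"events": 4}, brokers=[1, 2, 3, 4, 5, 6, 7, 8])
print(json.dumps(plan, indent=2))
```

Each partition gets a distinct preferred leader (brokers 1-4 here), which is the "equal distribution" behavior being asked about.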
Thanks Sameer, yes this looks like a bug. Can you file a JIRA?
On Mon, 4 Sep 2017 at 12:23 Sameer Kumar wrote:
> Hi,
>
> I am using InMemoryStore along with GlobalKTable. I came to realize that I
> was losing data once I restarted my stream application while it was
> consuming data from the Kafka t
Hi,
I am using InMemoryStore along with GlobalKTable. I came to realize that I
was losing data once I restarted my stream application while it was
consuming data from the Kafka topic, since it would always start from the last
saved checkpoint. This works fine with RocksDB, it being a persistent store.
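The failure mode described above can be shown with a toy simulation (plain Python, not Kafka Streams code): a checkpointed restore offset is only safe if the store's contents survive the restart. With an in-memory store the restart wipes the data but the on-disk checkpoint remains, so restore skips everything before it:

```python
def restore(log, store, checkpoint):
    """Replay log records from `checkpoint` onward into `store`;
    return the new checkpoint (end of log)."""
    for offset in range(checkpoint, len(log)):
        key, value = log[offset]
        store[key] = value
    return len(log)

log = [("a", 1), ("b", 2), ("c", 3)]

# First run: consume everything; the checkpoint advances to the end of the log.
mem_store = {}
checkpoint = restore(log, mem_store, 0)

# Restart: the in-memory store is wiped, but the checkpoint survives on disk,
# so restore replays nothing and the earlier records are lost.
mem_store = {}
restore(log, mem_store, checkpoint)
print(mem_store)  # {}

# A persistent store (e.g. RocksDB) keeps its contents across the restart,
# so resuming from the checkpoint loses nothing.
persistent_store = {"a": 1, "b": 2, "c": 3}
restore(log, persistent_store, checkpoint)
print(persistent_store)  # all three records still present
```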