I haven't had a chance to go through the acknowledgement process yet. There
is probably an issue with the way acknowledgement is handled in Metron that
hasn't been fixed properly.

I am still facing a high failure rate in the enrichment and indexing
topologies, and I have noticed that in the new version of HCP the default
number of ackers for these two topologies has been set to 0! I would like to
monitor the behaviour through a profiler, but unfortunately I am very busy
these days and haven't had a chance to touch that yet. However, it might be
due to the same type of issue with the old client...
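
For reference, this is how I understand the acker semantics in Storm (a
minimal sketch against the Storm Config API; the values are illustrative,
not Metron's or HCP's defaults): with the acker count set to 0, Storm treats
every tuple as acked the moment it is emitted, so failures and timeouts are
never reported back to the spout and nothing gets replayed.

    import org.apache.storm.Config;

    public class AckerConfigSketch {
        public static void main(String[] args) {
            Config conf = new Config();
            // 0 disables acking entirely: tuples count as acked on emit,
            // which can hide a high failure rate rather than fix it.
            conf.setNumAckers(1);
            // Tuples not fully acked within this window are failed and,
            // with acking enabled, replayed by the spout.
            conf.setMessageTimeoutSecs(60);
        }
    }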

On Wed, May 17, 2017 at 12:41 AM, Casey Stella <[email protected]> wrote:

> Yeah, I've seen the same issue.  It appears that the storm-kafka-client in
> versions < 1.1 has significant throughput problems.  We saw a 10x speedup
> in moving to the 1.1 version.  There is a PR out for this currently:
> https://github.com/apache/metron/pull/584
>
> Casey
>
> On Tue, May 16, 2017 at 4:26 AM, Ali Nazemian <[email protected]>
> wrote:
>
>> I am still facing this issue and haven't managed to fix it. I would be
>> really grateful if somebody could help me.
>>
>> Thanks,
>> Ali
>>
>> On Sun, May 14, 2017 at 1:58 PM, Ali Nazemian <[email protected]>
>> wrote:
>>
>>> I was wrong; I don't think I managed to increase the timeout value for
>>> the Kafka spout properly. How can I increase the timeout value for the
>>> Kafka spout, and what is the right property name to set via "-esc" in
>>> this case? Also, what has changed in the newer version, given that I
>>> didn't have this issue with the previous one?
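>>>
>>> In case it helps: if I am reading the parser topology script's options
>>> correctly, "-esc" (--extra_kafka_spout_config) takes a JSON file whose
>>> map entries are passed through to the consumer as plain Kafka settings.
>>> A sketch in Java of the two settings the exception message below points
>>> at (the values are illustrative, not recommendations):
>>>
>>>     import java.util.Properties;
>>>
>>>     public class SpoutTimeoutProps {
>>>         public static void main(String[] args) {
>>>             Properties kafkaProps = new Properties();
>>>             // More headroom for the poll loop; the broker caps this
>>>             // at group.max.session.timeout.ms.
>>>             kafkaProps.put("session.timeout.ms", "30000");
>>>             // Smaller batches, so less work between poll() calls.
>>>             kafkaProps.put("max.poll.records", "200");
>>>             System.out.println(kafkaProps); // stand-in for handing these to the spout
>>>         }
>>>     }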
>>>
>>>
>>>
>>> On Sun, May 14, 2017 at 3:00 AM, Ali Nazemian <[email protected]>
>>> wrote:
>>>
>>>> Hi,
>>>>
>>>> I recently installed the new version of HCP, and the following error
>>>> has appeared in the Storm UI, in the Kafka spout section of the parser
>>>> topologies:
>>>>
>>>> org.apache.kafka.clients.consumer.CommitFailedException: Commit cannot
>>>> be completed since the group has already rebalanced and assigned the
>>>> partitions to another member. This means that the time between subsequent
>>>> calls to poll() was longer than the configured session.timeout.ms, which
>>>> typically implies that the poll loop is spending too much time message
>>>> processing. You can address this either by increasing the session timeout
>>>> or by reducing the maximum size of batches returned in poll() with
>>>> max.poll.records.
>>>>   at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator$OffsetCommitResponseHandler.handle(ConsumerCoordinator.java:600)
>>>>   at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator$OffsetCommitResponseHandler.handle(ConsumerCoordinator.java:541)
>>>>   at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$CoordinatorResponseHandler.onSuccess(AbstractCoordinator.java:679)
>>>>   at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$CoordinatorResponseHandler.onSuccess(AbstractCoordinator.java:658)
>>>>   at org.apache.kafka.clients.consumer.internals.RequestFuture$1.onSuccess(RequestFuture.java:167)
>>>>   at org.apache.kafka.clients.consumer.internals.RequestFuture.fireSuccess(RequestFuture.java:133)
>>>>   at org.apache.kafka.clients.consumer.internals.RequestFuture.complete(RequestFuture.java:107)
>>>>   at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient$RequestFutureCompletionHandler.onComplete(ConsumerNetworkClient.java:426)
>>>>   at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:278)
>>>>   at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.clientPoll(ConsumerNetworkClient.java:360)
>>>>   at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:224)
>>>>   at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:192)
>>>>   at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:163)
>>>>   at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.commitOffsetsSync(ConsumerCoordinator.java:426)
>>>>   at org.apache.kafka.clients.consumer.KafkaConsumer.commitSync(KafkaConsumer.java:1059)
>>>>   at org.apache.storm.kafka.spout.KafkaSpout.commitOffsetsForAckedTuples(KafkaSpout.java:302)
>>>>   at org.apache.storm.kafka.spout.KafkaSpout.nextTuple(KafkaSpout.java:204)
>>>>   at org.apache.storm.daemon.executor$fn__6505$fn__6520$fn__6551.invoke(executor.clj:651)
>>>>   at org.apache.storm.util$async_loop$fn__554.invoke(util.clj:484)
>>>>   at clojure.lang.AFn.run(AFn.java:22)
>>>>   at java.lang.Thread.run(Thread.java:748)
>>>>
>>>>
>>>> This error has affected the parsers' throughput significantly!
>>>>
>>>> I have tried increasing the session timeout, but it didn't help. I would
>>>> be grateful if you could help me find the source of this issue. Please
>>>> note that I did not have this issue with the previous version of Metron
>>>> (0.3.1).
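>>>>
>>>> For completeness, my reading of the exception: if the work done between
>>>> two consecutive poll() calls takes longer than session.timeout.ms, the
>>>> broker evicts the consumer from the group, and the next commitSync()
>>>> fails exactly as in the trace above. A minimal standalone Java sketch of
>>>> that failure mode (plain Kafka consumer API, not Metron's spout code;
>>>> broker, group, and topic names are placeholders):
>>>>
>>>>     import java.util.Collections;
>>>>     import java.util.Properties;
>>>>     import org.apache.kafka.clients.consumer.ConsumerRecord;
>>>>     import org.apache.kafka.clients.consumer.ConsumerRecords;
>>>>     import org.apache.kafka.clients.consumer.KafkaConsumer;
>>>>
>>>>     public class PollLoopSketch {
>>>>         public static void main(String[] args) {
>>>>             Properties props = new Properties();
>>>>             props.put("bootstrap.servers", "localhost:9092"); // placeholder
>>>>             props.put("group.id", "demo-group");              // placeholder
>>>>             props.put("key.deserializer",
>>>>                 "org.apache.kafka.common.serialization.StringDeserializer");
>>>>             props.put("value.deserializer",
>>>>                 "org.apache.kafka.common.serialization.StringDeserializer");
>>>>             props.put("enable.auto.commit", "false");
>>>>             props.put("session.timeout.ms", "30000"); // raise this, within broker limits
>>>>             props.put("max.poll.records", "200");     // or shrink the batches
>>>>
>>>>             try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
>>>>                 consumer.subscribe(Collections.singletonList("demo-topic")); // placeholder
>>>>                 while (true) {
>>>>                     ConsumerRecords<String, String> records = consumer.poll(1000L);
>>>>                     for (ConsumerRecord<String, String> record : records) {
>>>>                         process(record); // if one pass over the batch exceeds
>>>>                                          // session.timeout.ms, the group rebalances
>>>>                     }
>>>>                     consumer.commitSync(); // throws CommitFailedException after a rebalance
>>>>                 }
>>>>             }
>>>>         }
>>>>
>>>>         private static void process(ConsumerRecord<String, String> record) {
>>>>             // stand-in for real per-record work
>>>>         }
>>>>     }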
>>>>
>>>> Regards,
>>>> Ali
>>>>
>>>>
>>>
>>>
>>> --
>>> A.Nazemian
>>>
>>
>>
>>
>> --
>> A.Nazemian
>>
>
>


-- 
A.Nazemian
