in
https://cwiki.apache.org/confluence/display/KAFKA/KIP-62%3A+Allow+consumer+to+send+heartbeats+from+a+background+thread#KIP-62:Allowconsumertosendheartbeatsfromabackgroundthread-PublicInterfaces
in the line:
"We've reduced the default session timeout, but actually increased the
amount of time
Hello,
If a KafkaConsumer is subscribed to more than one topic, or is assigned
more than one partition of the same topic, what is the behavior of
KafkaConsumer.poll()?
In our use case, we would like to use, for example, "user id" as the key
for records in our topics. Naturally, for som
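As background on the keying part of the question: a single poll() returns a batch that may span every assigned partition, and Kafka's default partitioner hashes the record key (with murmur2) so that all records with the same key land in the same partition. A minimal sketch of that guarantee, assuming an md5-based stand-in hash instead of murmur2; `partition_for` is a hypothetical helper, not a Kafka API:

```python
import hashlib

def partition_for(key: bytes, num_partitions: int) -> int:
    # Stand-in for Kafka's default partitioner, which uses murmur2 on
    # the serialized key. Any deterministic hash shows the same
    # guarantee: equal keys always map to the same partition.
    digest = int.from_bytes(hashlib.md5(key).digest()[:4], "big")
    return digest % num_partitions

# All records keyed by the same "user id" land in one partition, so they
# are consumed in order by whichever consumer owns that partition.
same = partition_for(b"user-42", 6) == partition_for(b"user-42", 6)
print(same)  # True
```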
Hi Marina,
We hit a similar problem with our S3 connectors. We added a level of
indirection, a JSON-validating microservice, before publishing to the Kafka
topic. The microservice published non-JSON-formatted messages to a separate
Kafka topic called error-jsons, and we flushed those messages using
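The validate-then-route step described above can be sketched as follows. This is a minimal illustration: the main topic name "events" is made up, "error-jsons" is the dead-letter topic from the thread, and the actual Kafka produce call is left out.

```python
import json

def route_topic(message: str) -> str:
    # Well-formed JSON goes to the main topic; anything else is
    # diverted to the dead-letter topic so the sink connector never
    # sees a "poison" record.
    try:
        json.loads(message)
        return "events"       # hypothetical main topic name
    except ValueError:
        return "error-jsons"  # dead-letter topic from the thread

print(route_topic('{"id": 1}'))  # events
print(route_topic("not json"))   # error-jsons
```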
Thanks for pointing that out, Jun, Ismael.
Will update the statement.
Guozhang
On Wed, Oct 18, 2017 at 9:51 AM, Ismael Juma wrote:
> Also, only part 1 of KIP-113 landed. The release planning page has the
> correct info for what it's worth.
>
> Ismael
>
> On 18 Oct 2017 5:42 pm, "Jun Rao" wrote:
>
Folks,
I am having trouble enabling the SASL_PLAINTEXT protocol for the Kafka REST
component to work with a secure cluster (which also uses the same protocol).
I am sure I am missing something trivial. If someone can help, I'd really
appreciate it.
Here're my configs:
Startup script:
cat /bin/kafka-rest-start
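For comparison, a sketch of the kind of settings involved. Property names vary by REST proxy version, so treat every line below as an assumption to check against your version's documentation rather than a known-good config:

```properties
# kafka-rest.properties (sketch; host/port are illustrative)
bootstrap.servers=SASL_PLAINTEXT://broker1:9092
# "client."-prefixed settings are passed through to the embedded
# producers/consumers that talk to the secure cluster
client.security.protocol=SASL_PLAINTEXT
client.sasl.mechanism=PLAIN
```

The JAAS login module is typically supplied separately, e.g. via a
-Djava.security.auth.login.config=... JVM option in the startup environment.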
Hi,
Is there any configuration that allows the group coordinator (in the new
client design) to recover after a crash?
I have a test topology with 3 brokers, and once the group coordinator
crashes, the topic gets correctly rebalanced and the producer is not
affected, but the consumer group stops receiving messages.
Monit
Considering Ewen's response, you can open a JIRA for applying the
suggestion to FileStreamSinkConnector.
Cheers
On Wed, Oct 18, 2017 at 10:39 AM, Marina Popova wrote:
> Hi,
> I wanted to give this question a second try as I feel it is very
> important to understand how to control error
Hi,
I wanted to give this question a second try, as I feel it is very important
to understand how to control error cases with Connectors.
Any advice on how to handle "poison" messages in connectors?
Thanks!
Marina
> Hi,
> I have the FileStreamSinkConnector working perfec
Also, only part 1 of KIP-113 landed. The release planning page has the
correct info for what it's worth.
Ismael
On 18 Oct 2017 5:42 pm, "Jun Rao" wrote:
> Hi, Guozhang,
>
> Thanks for running the release. Just a quick clarification. The statement
> that "* Controller improvements: async ZK acce
The images didn't come through.
Consider using a third-party image-hosting website.
On Tue, Oct 17, 2017 at 9:36 PM, Pavan Patani wrote:
> Hello,
>
> Previously I was using old version of Kafka-manager and it was showing
> "Producer Message/Sec and Summed Recent Offsets" parameters in topics as
> below.
>
> [image: I
Hi, Guozhang,
Thanks for running the release. Just a quick clarification. The statement
that "* Controller improvements: async ZK access for faster administrative
request handling" is not accurate. What's included in 1.0.0 is a logging
change improvement in the controller, which does give signific
Hi Tony,
The issue is that the GlobalStore doesn't use the Processor when restoring
the state; it just reads the raw records from the underlying topic. You
could work around this by doing the processing first and writing the
results to another topic, then using that other topic as the source for
your global store.
It
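The workaround can be pictured as a small pipeline (topic names here are made up for illustration):

```
"raw" topic ──> regular Streams processor (the logic that used to live
                in the global store's Processor)
            ──> "processed" topic
"processed" topic ──> global store (restore now just replays the topic
                      verbatim, so no processing step is skipped)
```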
I can confirm that the message size check in the producer works on the
uncompressed size as of 0.11.0.1, as I had to investigate this internally.
:)
I've got a similar problem with messages that can occasionally exceed this
limit. We're taking the approach of enforcing a hard size limit when event
Hello All,
I have been trying to create an application on top of Kafka Streams. I am
a newbie to Kafka & Kafka Streams, so please excuse me if my understanding
is wrong.
I got the application running fine on a single EC2 instance in AWS. Now I
am looking at scaling and ran into some issue
Hello,
Previously I was using old version of Kafka-manager and it was showing
"Producer Message/Sec and Summed Recent Offsets" parameters in topics as
below.
[image: Inline image 1]
Currently I have installed kafka-manager-1.3.3.14, and now I cannot see
these two "Producer Message/Sec and Summed
Hi all,
As far as I understand this Jira,
https://issues.apache.org/jira/browse/KAFKA-4169, unfortunately
max.request.size works on *uncompressed* messages, which means the producer
won't be able to send messages larger than that limit regardless of
compression. The fragment above is still right though, be
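The two limits being discussed, shown with their roughly 1 MB defaults from the 0.11/1.0 era; the values are the defaults, included for illustration only:

```properties
# Producer: maximum size of a request, checked against the
# *uncompressed* serialized record size (see KAFKA-4169)
max.request.size=1048576
# Broker: largest record batch the broker will accept; here the size
# is measured as received, i.e. after compression
message.max.bytes=1000012
```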
> On Oct 18, 2017, at 7:05 AM, Jehan Bruggeman wrote:
>
> Hi all,
>
> quick follow-up: thank you Philip, you were right! Indeed, I got rid of
> this error by putting the converter in the connector's folder.
>
> I also tried something else: create a custom connector and use that custom
> conn
Hi all,
quick follow-up: thank you Philip, you were right! Indeed, I got rid of
this error by putting the converter in the connector's folder.
I also tried something else: creating a custom connector and using that
custom connector with the custom converter, both loaded as plugins. It
also works.
S
Hi Ahmad,
> 1. Given SessionTime can continue to expand the window that is
>    considered part of the same session, i.e., it's based on data
>    arriving for that key. What happens with retention time?
As the session expands, the data for the session will continue to be
retained as it is
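In the Streams API of that era, retention for session windows was set with `until()`, which gives a lower bound on how long window data is kept; the values below are illustrative only:

```
// sketch: a session window with a 5-minute inactivity gap,
// retained for at least 1 hour
SessionWindows.with(5 * 60 * 1000L).until(60 * 60 * 1000L)
```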
Hi John,
It looks like this is an issue. Guozhang has provided a fix for it here:
https://github.com/apache/kafka/pull/4085
You can try it by cloning his repository and building the streams jar.
Thanks,
Damian
On Tue, 17 Oct 2017 at 23:07 Johan Genberg wrote:
> Yes, it's noticeably faster in my t