Hi,
I've read the guide below and filed a PR:
https://github.com/apache/kafka/pull/5809
I started without creating a JIRA ticket.
https://cwiki.apache.org/confluence/display/KAFKA/Contributing+Code+Changes
Thank you,
Tomoyuki
On Wed, Oct 17, 2018 at 9:19 AM Tomoyuki Saito wrote:
> Hi,
>
>
Hi,
> Would you like to contribute a PR?
Yes! Sounds great.
Should I file a JIRA ticket first?
Tomoyuki
On Wed, Oct 17, 2018 at 12:19 AM Guozhang Wang wrote:
> I think we should not allow negative values, and today it seems that this
> is not checked.
>
> In fact, it should be a
Producer and consumer logs would be in the respective client applications.
To enable them, configure logging for the Kafka packages in the client
application.
For example, if you were using log4j, you would add something like:
log4j.logger.org.apache.kafka.clients=INFO
regards
On Tue, Oct 16, 2018
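To flesh out the suggestion above, a minimal log4j.properties for a client application might look like the sketch below. The appender setup and the commented-out DEBUG override are illustrative choices, not requirements from the thread:

```properties
# Root logger: everything at INFO to the console
log4j.rootLogger=INFO, stdout

log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=[%d] %p %m (%c)%n

# Kafka client packages: producer and consumer internals
log4j.logger.org.apache.kafka.clients=INFO
# Bump a single package to DEBUG when troubleshooting, e.g. the consumer:
#log4j.logger.org.apache.kafka.clients.consumer=DEBUG
```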
Thanks for managing the release, Manikumar!
Ismael
On Tue, 16 Oct 2018, 12:13 Manikumar, wrote:
> Hi all,
>
> I would like to volunteer to be the release manager for 2.0.1 bug fix
> release.
> 2.0 was released on July 30, 2018, and 44 issues have been fixed so far.
>
> Please find all the resolved
Hi,
I have a use case where I stream messages from Kafka, buffer them in memory
until a message count is reached, and then write them to an output file. I
am using manual commit. I have a question: what is the maximum time I can
wait after consuming a message before committing back to Kafka? Is there
Hi all,
I would like to volunteer to be the release manager for 2.0.1 bug fix
release.
2.0 was released on July 30, 2018, and 44 issues have been fixed so far.
Please find all the resolved tickets here:
Hi,
I am using openjdk-jre 8.
https://pkgs.alpinelinux.org/package/v3.6/community/x86/openjdk8-jre
I also noticed that this error occurs when Kafka tries to rebuild corrupt
indices.
The code flow looks like this:
ProducerStateManager.readSnapshot() -> Crc32C.compute() -> SIGSEGV
Thanks,
Looks like a JVM bug. What Java distribution and version are you using?
Ismael
On Mon, Oct 15, 2018 at 6:34 PM Vishnu Viswanath <
vishnu.viswanat...@gmail.com> wrote:
> Hello Kafka experts,
>
> I am running Kafka version 1.1.0 in docker container and the container is
> exiting with SIGSEGV.
>
Hey everyone,
Thanks for all the contributions! Just a kind reminder that the code is now
frozen for 2.1.0 release.
Thanks,
Dong
On Mon, Oct 1, 2018 at 4:31 PM Dong Lin wrote:
> Hey everyone,
>
> Hope things are going well!
>
> Just a kind reminder that the feature freeze time is end of day
Hi Niklas,
By default, the retention.ms config is set to 7 days, and currently Streams
does not try to override this value. What you observed is probably because
your application was reset and hence resumed from some very old offsets,
which are more than 7 days old. As a result the log
I think we should not allow negative values, and today it seems that this
is not checked.
In fact, it should be a one-liner fix in the `config.define` function call
to constrain its possible value range. Would you like to contribute a PR?
Guozhang
On Fri, Oct 12, 2018 at 10:56 PM
Hello Tom,
Thanks for reporting the observed issue. I've looked into the source code
and I agree with you that there is indeed a mistake in interpreting the
parameters.
I'll file a JIRA to keep track, and fix it ASAP.
Guozhang
On Fri, Oct 12, 2018 at 10:06 AM Thomas Becker
wrote:
> I've been
Hello,
I am using Kafka version 0.11.x.x.
We are producing and consuming messages via different applications.
But for some reason, we are able to produce messages on a particular
topic but not able to consume them.
The issue is with a specific topic; for the others I'm able to produce