Re: [VOTE] 1.0.0 RC0

2017-10-11 Thread Vahid S Hashemian
Hi Guozhang, Thanks for running the release. I tested building from source and the quickstarts on Linux, Mac, and 64-bit Windows (with Java 8 and Gradle 4.2.1). Everything worked well on Linux and Mac, but I ran into some issues on my 64-bit Windows VM: I reported one issue in KAFKA-6055, but it's

Re: kafka broker losing offsets?

2017-10-11 Thread Vincent Dautremont
Hi, We have 4 different Kafka clusters running: 2 on 0.10.1.0, 1 on 0.10.0.1, and 1 that was on 0.11.0.0 and was updated last week to 0.11.0.1. I've only seen the issue happen 2 times in production on the 0.11.0.0 cluster since it has been running (about 3 months). But I'll monitor and report it here if

Re: [VOTE] 1.0.0 RC0

2017-10-11 Thread Guozhang Wang
Thanks for the check Ted. I just made the jars available at mvn staging now: https://repository.apache.org/content/groups/staging/org/apache/kafka/ Guozhang On Tue, Oct 10, 2017 at 6:43 PM, Ted Yu wrote: > Guozhang: > I took a brief look under the staging tree. > e.g. >

Re: Getting started with stream processing

2017-10-11 Thread Matthias J. Sax
Glad it works. If you want to use windows, which seems more natural and also allows you to "expire old windows eventually" (with your current approach, you never delete old windows, and thus each window creates a new entry in the internal key-value store; as a result, your store grows unbounded over time)
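[Editor's note: a minimal sketch, not the original poster's code, of the windowed aggregation being suggested here, written against the pre-1.0 (0.11-era) Streams API; the topic name, serdes, window size, and retention are assumptions. Windows older than the retention are dropped, so the state store stays bounded.]

    import java.util.Properties;
    import org.apache.kafka.common.serialization.Serdes;
    import org.apache.kafka.streams.KafkaStreams;
    import org.apache.kafka.streams.StreamsConfig;
    import org.apache.kafka.streams.kstream.KStream;
    import org.apache.kafka.streams.kstream.KStreamBuilder;
    import org.apache.kafka.streams.kstream.TimeWindows;

    public class WindowedAggregationSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(StreamsConfig.APPLICATION_ID_CONFIG, "windowed-aggregation-sketch");
            props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder address

            KStreamBuilder builder = new KStreamBuilder();
            // Key: deviceId, value: some numeric reading (topic and types are placeholders).
            KStream<String, Long> readings = builder.stream(Serdes.String(), Serdes.Long(), "device-readings");

            readings
                .groupByKey(Serdes.String(), Serdes.Long())
                // 1-hour windows retained for 24 hours; expired windows are removed from the store.
                .aggregate(
                    () -> 0L,
                    (deviceId, value, agg) -> agg + value,
                    TimeWindows.of(60 * 60 * 1000L).until(24 * 60 * 60 * 1000L),
                    Serdes.Long(),
                    "per-device-hourly-store");

            new KafkaStreams(builder, props).start();
        }
    }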

Kafka Consumer - org.apache.kafka.common.errors.TimeoutException: Failed to get offsets by times in 305000 ms

2017-10-11 Thread SenthilKumar K
Hi All, Recently we started seeing a Kafka Consumer error with a Timeout. What could be the cause here? Version: kafka_2.11-0.11.0.0 Consumer Properties: bootstrap.servers, enable.auto.commit, auto.commit.interval.ms, session.timeout.ms
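[Editor's note: a minimal sketch of a consumer configured with the properties listed above; the broker address, group id, and property values are placeholders. The 305000 ms in the error message matches the consumer's default request.timeout.ms, so that is the setting worth checking first, though raising it is not a confirmed fix.]

    import java.util.Properties;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.serialization.StringDeserializer;

    public class ConsumerConfigSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");   // placeholder
            props.put("group.id", "example-group");             // placeholder
            props.put("enable.auto.commit", "true");
            props.put("auto.commit.interval.ms", "5000");
            props.put("session.timeout.ms", "10000");
            // Offset lookups (e.g. offsetsForTimes) fail with a TimeoutException when they
            // cannot complete within request.timeout.ms; 305000 ms is the consumer default.
            props.put("request.timeout.ms", "305000");
            props.put("key.deserializer", StringDeserializer.class.getName());
            props.put("value.deserializer", StringDeserializer.class.getName());

            KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
            consumer.close();
        }
    }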

Re: Kafka cluster Error

2017-10-11 Thread Kannappan, Saravanan (Contractor)
Can you please help me to resolve this? Thanks, Saravanan From: "Kannappan, Saravanan (Contractor)" Date: Tuesday, October 10, 2017 at 6:35 PM To: "users@kafka.apache.org" Subject: Kafka cluster Error Hello, Can someone help me

Re: Getting started with stream processing

2017-10-11 Thread Guozhang Wang
Regarding windowing: actually the window boundaries are aligned at the epoch (i.e. UTC 1970-01-01 00:00:00), so the latest window is not NOW - 1 hour. Guozhang On Wed, Oct 11, 2017 at 1:42 AM, RedShift wrote: > Matthias > > > Thanks, using grouping key of "deviceId + timestamp"
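[Editor's note: a small standalone sketch of what epoch alignment means for window boundaries; the window size and timestamp are examples only.]

    public class EpochAlignedWindowSketch {
        public static void main(String[] args) {
            long windowSizeMs = 60 * 60 * 1000L;               // 1-hour windows (example)
            long recordTimestamp = System.currentTimeMillis(); // timestamp of some record
            // Window boundaries are multiples of the window size counted from the epoch,
            // so the current window generally does not start exactly one hour ago.
            long windowStart = recordTimestamp - (recordTimestamp % windowSizeMs);
            long windowEnd = windowStart + windowSizeMs;
            System.out.println("window: [" + windowStart + ", " + windowEnd + ")");
        }
    }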

Re: kafka broker losing offsets?

2017-10-11 Thread Michal Michalski
Hi Dmitriy, I didn't follow the whole thread, but if it's not an issue with Kafka 0.11.0.0 (there was another thread about it recently), make sure your Replication Factor for the offsets topic is 3 (you mentioned "RF=3 for all topics", but I wasn't sure it includes the offsets one). There was a
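[Editor's note: a minimal sketch of one way to verify the offsets topic's replication factor programmatically, using the AdminClient introduced in 0.11; the broker address is a placeholder.]

    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.admin.AdminClient;
    import org.apache.kafka.clients.admin.TopicDescription;

    public class CheckOffsetsTopicRf {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092"); // placeholder
            AdminClient admin = AdminClient.create(props);
            TopicDescription desc = admin
                    .describeTopics(Collections.singleton("__consumer_offsets"))
                    .all().get().get("__consumer_offsets");
            // With RF=3, every partition of the offsets topic should list 3 replicas.
            desc.partitions().forEach(p ->
                    System.out.println("partition " + p.partition() + " replicas=" + p.replicas().size()));
            admin.close();
        }
    }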

Retrieve unacknowledged messages from partition

2017-10-11 Thread Tarik Courdy
Good morning - First off, thank you for Apache Kafka. I have been having fun learning it and getting it set up. I do have a couple of questions that I haven't been able to find the answer to yet that I'm hoping someone can help with. 1.) Is there a programmatic API way to retrieve the list of
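[Editor's note: the preview above is cut off before the question is complete, so the following is only a sketch of one common approach to the thread's subject: re-reading the messages a consumer group has not yet committed (acknowledged) by seeking back to the last committed offset. The topic, partition, group id, and broker address are placeholders.]

    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.clients.consumer.OffsetAndMetadata;
    import org.apache.kafka.common.TopicPartition;
    import org.apache.kafka.common.serialization.StringDeserializer;

    public class ReadUncommittedBacklog {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092"); // placeholder
            props.put("group.id", "example-group");           // placeholder
            props.put("enable.auto.commit", "false");
            props.put("key.deserializer", StringDeserializer.class.getName());
            props.put("value.deserializer", StringDeserializer.class.getName());

            KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
            TopicPartition tp = new TopicPartition("example-topic", 0); // placeholder topic/partition
            consumer.assign(Collections.singleton(tp));

            // Messages after the last committed offset are the ones this group has not yet acknowledged.
            OffsetAndMetadata committed = consumer.committed(tp);
            consumer.seek(tp, committed == null ? 0L : committed.offset());

            ConsumerRecords<String, String> backlog = consumer.poll(1000);
            backlog.forEach(r -> System.out.println(r.offset() + ": " + r.value()));
            consumer.close();
        }
    }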

Re: kafka broker losing offsets?

2017-10-11 Thread Dmitriy Vsekhvalnov
Yeah, it just popped up in my list. Thanks, I'll take a look. Vincent Dautremont, if you're still reading this, did you try upgrading to 0.11.0.1? Did it fix the issue? On Wed, Oct 11, 2017 at 6:46 PM, Ben Davison wrote: > Hi Dmitriy, > > Did you check out this thread "Incorrect consumer

Re: kafka broker losing offsets?

2017-10-11 Thread Ben Davison
Hi Dmitriy, Did you check out this thread "Incorrect consumer offsets after broker restart 0.11.0.0" from Phil Luckhurst, it sounds similar. Thanks, Ben On Wed, Oct 11, 2017 at 4:44 PM Dmitriy Vsekhvalnov wrote: > Hey, want to resurrect this thread. > > Decided to do

Re: kafka broker losing offsets?

2017-10-11 Thread Dmitriy Vsekhvalnov
Hey, I want to resurrect this thread. We decided to do an idle test, where no data is produced to the topic at all. When we kill #101 or #102, nothing happens. But when we kill #200, consumers start to re-consume old events from a random position. Anybody have ideas what to check? I really

Questions about Apache Kafka message types/sizes and publish/subscribe

2017-10-11 Thread Heloise Chevalier
Hi, I'm not entirely certain this is the right place to ask, but I have questions about how Apache Kafka works for implementing a publish/subscribe messaging system. I am investigating Kafka to see if it fits the needs of the company I work for, and I have quite a few questions I can't find

Re: Incorrect consumer offsets after broker restart 0.11.0.0

2017-10-11 Thread Vincent Dautremont
I would also like to know the related Jira ticket, if any, to check whether what I experience is the same phenomenon. I see this happening even without restarting the Kafka broker process: I sometimes have a Zookeeper socket that fails, and the Kafka broker then steps down from its leader duties for a few

RE: Incorrect consumer offsets after broker restart 0.11.0.0

2017-10-11 Thread Phil Luckhurst
Upgrading the broker to version 0.11.0.1 has fixed the problem. Thanks.

Re: Getting started with stream processing

2017-10-11 Thread RedShift
Matthias, Thanks, using a grouping key of "deviceId + timestamp" with *aggregation* _instead of_ reducing solved it: KGroupedStream grouped = data.groupBy( (k, v) -> { Date dt =
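[Editor's note: the code preview above is truncated mid-statement. As a rough sketch of the pattern it describes — re-keying records to a composite "deviceId + truncated timestamp" key and aggregating rather than reducing — written against the pre-1.0 Streams API; the topic, serdes, value types, and one-hour bucket size are assumptions.]

    import java.util.Properties;
    import org.apache.kafka.common.serialization.Serdes;
    import org.apache.kafka.streams.KafkaStreams;
    import org.apache.kafka.streams.StreamsConfig;
    import org.apache.kafka.streams.kstream.KStream;
    import org.apache.kafka.streams.kstream.KStreamBuilder;

    public class CompositeKeyAggregationSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(StreamsConfig.APPLICATION_ID_CONFIG, "composite-key-aggregation-sketch");
            props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder

            KStreamBuilder builder = new KStreamBuilder();
            // Key: deviceId, value: record timestamp in epoch millis (topic and types are placeholders).
            KStream<String, Long> readings = builder.stream(Serdes.String(), Serdes.Long(), "device-readings");

            long bucketMs = 60 * 60 * 1000L; // truncate timestamps to 1-hour buckets
            readings
                // Re-key each record to "deviceId + truncated timestamp" so every device/hour pair
                // forms its own group.
                .groupBy((deviceId, ts) -> deviceId + "@" + (ts - ts % bucketMs), Serdes.String(), Serdes.Long())
                // Aggregate (a count here) instead of reduce, since the result type differs from the value type.
                .aggregate(() -> 0L, (key, ts, count) -> count + 1, Serdes.Long(), "per-device-bucket-counts");

            new KafkaStreams(builder, props).start();
        }
    }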

Re: Add Kafka user list

2017-10-11 Thread Jaikiran Pai
I'm curious how these emails even get delivered to this list (and sometimes the dev list) if the user isn't yet subscribed (through the users-subscribe address)? Is this mailing list set up to accept mails from unsubscribed users? -Jaikiran On 11/10/17 12:32 PM, Jakub Scholz wrote: Out of

Re: Add Kafka user list

2017-10-11 Thread Jakub Scholz
Out of curiosity ... there seem to be quite a lot of these emails. I wonder if we can do something to improve on this. Has anyone thought about changing the UX on the Kafka website? Maybe removing the links from the users@kafka... email? Or rephrasing the sentence to make the subscribe email be