Hi all,
I have a Kafka 0.9.0.0 cluster with 11 nodes.
First, I found server logs like the ones below:
server.log.2016-10-17-22:[2016-10-17 22:22:13,885] WARN
[ReplicaFetcherThread-0-4], Error in fetch
kafka.server.ReplicaFetcherThread$FetchRequest@367c9f98. Possible cause:
org.apache.kafka.common.pr
Thanks. I patched it, and everything works OK now.
> On Oct 9, 2016, at 12:39 PM, Becket Qin wrote:
>
> Can you check if you have KAFKA-3003 when you run the code?
>
> On Sat, Oct 8, 2016 at 12:52 AM, Kafka wrote:
>
>> Hi all,
>> we found our consumers have a high CPU load in our production
>> environment, as we
Glad to know :)
On Tue, Oct 18, 2016 at 1:24 AM, Json Tu wrote:
> Thanks. I patched it, and everything works OK now.
> > On Oct 9, 2016, at 12:39 PM, Becket Qin wrote:
> >
> > Can you check if you have KAFKA-3003 when you run the code?
> >
> > On Sat, Oct 8, 2016 at 12:52 AM, Kafka wrote:
> >
> >> Hi all,
> >>
I might have run into a related problem:
[StreamThread-1] ERROR
org.apache.kafka.streams.processor.internals.StreamThread - stream-thread
[StreamThread-1] Failed to close state manager for StreamTask 0_0:
org.apache.kafka.streams.errors.ProcessorStateException: task [0_0] Failed
to close state sto
Hi,
I can run Kafka successfully in my environment. I want to monitor queries
per second (QPS) in my search application with Kafka. Whenever a search
request is made, I create a ProducerRecord which holds the current nano
time of the system.
I know that I have to use a streaming API for the calculation, i.e. K
Is there a maven artifact that can be used to create instances
of EmbeddedSingleNodeKafkaCluster for unit / integration tests?
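As far as I know, EmbeddedSingleNodeKafkaCluster itself lives in Confluent's examples source rather than in a published artifact, but Kafka's own test sources ship a similar EmbeddedKafkaCluster that can be pulled in via the test classifier. A hedged sketch of the Maven coordinates (verify the exact artifacts and versions against the Kafka release you use):

```xml
<!-- hypothetical coordinates: check against your Kafka version -->
<dependency>
  <groupId>org.apache.kafka</groupId>
  <artifactId>kafka-streams</artifactId>
  <version>0.10.1.0</version>
  <classifier>test</classifier>
  <scope>test</scope>
</dependency>
<dependency>
  <groupId>org.apache.kafka</groupId>
  <artifactId>kafka_2.11</artifactId>
  <version>0.10.1.0</version>
  <classifier>test</classifier>
  <scope>test</scope>
</dependency>
```

Alternatively, copying the embedded-cluster classes into your own test sources is a common workaround.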
Hi Frank,
Are you able to reproduce this? I'll have a look into it, but it is not
immediately clear how it could get into this state.
Thanks,
Damian
On Tue, 18 Oct 2016 at 11:08 Frank Lyaruu wrote:
> I might have run into a related problem:
>
> [StreamThread-1] ERROR
> org.apache.kafka.stream
Also, it'd be great if you could share your streams topology.
Thanks,
Damian
On Tue, 18 Oct 2016 at 15:48 Damian Guy wrote:
> Hi Frank,
>
> Are you able to reproduce this? I'll have a look into it, but it is not
> immediately clear how it could get into this state.
>
> Thanks,
> Damian
>
>
> On
Hi Team,
We are using Kafka 0.10 with Kerberos security. We have a use case where we
want to use a DNS alias name instead of the physical hostnames in the
"bootstrap.servers" property. Using a DNS alias name is helpful from an
operational perspective (ex: it's easy to add/remove new brokers in t
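For illustration, a sketch of what such a client config might look like (the alias kafka.example.com is hypothetical). One caveat to check: with Kerberos, the client typically builds the broker's service principal from the hostname it connects to, so the alias has to resolve in a way that matches the brokers' kafka/<host>@REALM principals:

```properties
# hypothetical DNS alias that resolves to the broker pool
bootstrap.servers=kafka.example.com:9092
security.protocol=SASL_PLAINTEXT
sasl.kerberos.service.name=kafka
```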
Hi Frank,
Which version of Kafka are you running? The line numbers in the stack trace
don't match up with what I am seeing on 0.10.1 or on trunk.
FYI - I created a JIRA for this here:
https://issues.apache.org/jira/browse/KAFKA-4311
Thanks,
Damian
On Tue, 18 Oct 2016 at 15:52 Damian Guy wrote:
Hi,
You just need to read your stream and apply a (windowed) aggregation
to it.
If you use a non-windowed aggregation you will get counts "since the
beginning". If you use a windowed aggregation you can specify the window
size as 1 hour and get those results.
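Independent of the Streams DSL, the windowed count boils down to bucketing each event timestamp into a fixed-size window whose start is the timestamp rounded down to the window size. A minimal plain-Java sketch of that idea (class and method names are illustrative, not part of any Kafka API):

```java
import java.util.HashMap;
import java.util.Map;

public class WindowedCount {
    // Count events per fixed window: window start = ts - (ts % windowSizeMs)
    static Map<Long, Long> countPerWindow(long[] timestampsMs, long windowSizeMs) {
        Map<Long, Long> counts = new HashMap<>();
        for (long ts : timestampsMs) {
            long windowStart = ts - (ts % windowSizeMs);
            counts.merge(windowStart, 1L, Long::sum);
        }
        return counts;
    }

    public static void main(String[] args) {
        long hour = 3_600_000L;
        // three requests in the first hour, one in the second (hypothetical timestamps)
        long[] ts = {1_000L, 2_000L, 3_000L, hour + 500L};
        Map<Long, Long> counts = countPerWindow(ts, hour);
        System.out.println(counts.get(0L) + " " + counts.get(hour)); // prints "3 1"
    }
}
```

The Streams windowed aggregation maintains exactly this kind of per-window count for you, keyed additionally by the record key.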
Hi Joel,
Would it be possible to stream the presentations ?
Cheers,
João Reis
From: Joel Koshy
Sent: Monday, October 17, 2016 10:25:10 PM
Cc: eyakabo...@linkedin.com
Subject: Stream processing meetup at LinkedIn (Sunnyvale) on Wednesday,
November 2 at 6pm
Hi e
Hi Matthias,
Thanks for your detailed answer. By the way, I couldn't find "KGroupedStream"
in version 0.10.0.1?
Kind Regards,
Furkan KAMACI
On Tue, Oct 18, 2016 at 8:41 PM, Matthias J. Sax
wrote:
>
> Hi,
>
> You just need to read your stream
I see. KGroupedStream will be part of 0.10.1.0 (it should be released in
the next few weeks).
So, instead of
> .groupByKey().count()
you need to do
> .countByKey()
- -Matthias
On 10/18/16 12:05 PM, Furkan KAMACI wrote:
> Hi Matthias,
>
> Thanks for you
Yes it will be streamed and archived. The streaming link and subsequent
recording will be posted in the comments on the meetup page.
Thanks,
Joel
On Tue, Oct 18, 2016 at 11:25 AM, João Reis wrote:
> Hi Joel,
>
> Would it be possible to stream the presentations ?
>
> Cheers,
> João Reis
>
Hi All
Was testing with the 0.10.1.0 rc3 build for my new Streams app.
Seeing issues starting my Kafka Streams app (0.10.1.0) against the older
broker version 0.10.0.1. I don't know if it is supposed to work as is. I
will upgrade the broker to the same version and see whether the issue goes
away.
Client-side issues
==
Hi Matthias,
I've tried this code:
final Properties streamsConfiguration = new Properties();
streamsConfiguration.put(StreamsConfig.APPLICATION_ID_CONFIG,
"myapp");
streamsConfiguration.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG,
"localhost:9092");
streamsCo
Two things:
1) You should not apply the window to the first count, but to the base
stream, to get correct results.
2) Your windowed aggregation does not just return String type, but
Windowed type. Thus, you need to either insert a .map() to transform
Sorry about the concurrent questions. I tried the code below and didn't get
any error, but the output topic was not created:
Properties props = new Properties();
props.put("bootstrap.servers", "localhost:9092");
props.put("acks", "all");
props.put("retries", 0);
props.pu
You should create input, intermediate, and output topics manually before
you start your Kafka Streams application.
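For reference, a sketch of creating such a topic with the CLI that ships with Kafka 0.10 (topic name, partition count, and ZooKeeper address are placeholders):

```
bin/kafka-topics.sh --create --zookeeper localhost:2181 \
  --replication-factor 1 --partitions 1 --topic my-output-topic
```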
- -Matthias
On 10/18/16 3:34 PM, Furkan KAMACI wrote:
> Sorry about concurrent questions. Tried below code, didn't get any
> error but c
Hello Kafka/Cassandra experts,
I am getting the error below….
org.apache.kafka.common.network.NetworkReceive.readFromReadableChannel(
NetworkReceive.java:91)
at org.apache.kafka.common.network.NetworkReceive.
readFrom(NetworkReceive.java:71)
at org.apache.kafka.common.network.Kaf
I have tested my Kafka program using kafka-console-producer.sh and
kafka-console-consumer.sh.
Step 1:
[oracle@bd02 bin]$ ./kafka-console-producer.sh --broker-list
133.224.217.175:9092 --topic oggtopic
Step 2:
[oracle@bd02 bin]$ ./kafka-console-consumer.sh --zookeeper
133.224.217.
Hi Yifan,
You can try this procedure with Kafka 0.10; stop the consumer group before
doing that.
consumer.subscribe(Arrays.asList(topic)); // the consumer has "enable.auto.commit" set to false
consumer.poll(1000);
consumer.commitSync(offsets); // offsets holds the consumer offsets to be updated
Yuanjia L
Hi,
Could anybody help answer the question below? Can the compression type be
modified through the command "bin/kafka-console-producer.sh --producer.config
"? Thanks!
Best Regards
Johnny
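For what it's worth, a hedged sketch of how that might look (the file name is hypothetical, and gzip is just one of the supported codecs):

```properties
# producer.properties (hypothetical file passed via --producer.config)
compression.type=gzip
```

It would then be passed on the command line, e.g. `bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test --producer.config producer.properties`; the console producer has also historically accepted a `--compression-codec` flag directly.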
-Original Message-
From: ZHU Hua B
Sent: October 17, 2016 14:52
To: users@kafka.apache.org; Radosla
Hi,
If I delete a topic on the target Kafka cluster, can Kafka MirrorMaker
mirror the same topic again from the source cluster? Thanks!
Best Regards
Johnny
+1 (binding)
Verified quick start and artifacts.
On Mon, Oct 17, 2016 at 10:39 PM Dana Powers wrote:
> +1 -- passes kafka-python integration tests
>
> On Mon, Oct 17, 2016 at 1:28 PM, Jun Rao wrote:
> > Thanks for preparing the release. Verified quick start on scala 2.11
> binary.
> > +1
> >
>