Hi,
you should be able to retrieve your store with
https://github.com/apache/kafka/blob/trunk/streams/src/main/java/org/apache/kafka/streams/KafkaStreams.java#L1021
This would give you access to the store from inside your current
application. In your Streams application you could then
expose
Hi Kafka Community,
Has anyone taken a look at this blog post comparing Pulsar and Kafka from an
architectural point of view? I am wondering what you all think about
segment-centric vs. partition-centric designs.
https://streaml.io/blog/pulsar-segment-based-architecture/
- KN
Andrew,
Thank you! Is there any estimate of when I can try out Kafka Connect with
Pulsar?
Can you also point me to where I can find the Kafka-to-Pulsar source and sink?
- KN
On Wed, Dec 6, 2017 at 2:48 AM, Andrew Stevenson wrote:
> In terms of building out the Apache Pulsar
Hello Guozhang,
Thanks for your reply.
I created the following issue: https://issues.apache.org/jira/browse/KAFKA-6323
I also further analyzed the memory problem and found out that it is a
non-issue. It was only a consequence of the above issue and happened
because at each punctuation I
I've written a Streams application which creates a KTable like this:
val myTable: KTable[String, GenericRecord] = myStream
  .groupByKey()
  .aggregate(myInitializer, myAdder, myStore)
where myStore was configured like this:
val myStore: Materialized[String, GenericRecord,
Hello Fred,
Thanks for reporting the issue.
1) Nice find about the punctuation start time with the WALL_CLOCK_TIME type. I
agree with you that it would be better to initialize it as current time
+ interval.
Do you mind creating a JIRA for Kafka? And if you'd like to submit a patch
for it that would be
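The scheduling point discussed above can be sketched in miniature (plain Python, not the actual Streams internals; the function and its names are illustrative only):

```python
# Minimal sketch of the proposed fix: with wall-clock punctuation, the first
# fire should land at now + interval rather than firing immediately at
# registration time. Times are in milliseconds; names are illustrative.
def first_punctuation_time(now_ms, interval_ms, deferred=True):
    return now_ms + interval_ms if deferred else now_ms

# registering at t=1000 with a 60 ms interval:
print(first_punctuation_time(1000, 60))         # 1060 (proposed behavior)
print(first_punctuation_time(1000, 60, False))  # 1000 (fires immediately)
```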
This kind of sounds to me like there’s packet loss somewhere and TCP is closing
the window to try to limit congestion. But from the snippets you posted, I
didn’t see any SACKs in the tcpdump output. If there *are* SACKs, that’d be a
strong indicator of loss somewhere, whether it’s in the
MirrorMaker is placed close to the target, and the send/receive buffer size is
set to 10MB, which is the result of the bandwidth-delay product. The OS-level
TCP buffer has also been increased to a 16MB max.
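For reference, a 10MB buffer figure like this falls out of the bandwidth-delay product directly; the link speed and RTT below are assumptions chosen to illustrate the arithmetic, not the poster's actual numbers:

```python
# Bandwidth-delay product: the amount of data "in flight" on a link, and a
# common sizing target for TCP send/receive buffers on long-haul connections.
def bdp_bytes(bandwidth_bits_per_sec, rtt_seconds):
    return int(bandwidth_bits_per_sec * rtt_seconds / 8)

# e.g. a hypothetical 800 Mbit/s WAN link with 100 ms RTT:
print(bdp_bytes(800e6, 0.100))  # 10000000 bytes, i.e. ~10 MB
```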
On Wed, 6 Dec 2017 at 15:19 Jan Filipiak wrote:
> Hi,
>
> two questions. Is your
Running Streams 1.0.0 should just work with 0.10.2 brokers. Of course,
you can't use the EOS feature.
Not sure how your app got into this bad state. But if it does not
recover from it, deleting the store directory seems reasonable --
also for a stateful application, you won't lose data, as we
I'll piggyback on this thread to post a question, if that's OK.
What is the best solution: starting X instances with
num.stream.threads=1, or one instance with X threads, given that I have
only 2 dedicated servers at my disposal?
The best would be to have more hosts, but currently
Thanks, Manikumar, for replying. One more query regarding your first reply:
what if I set both inter.broker.protocol.version and log.message.format.version
to 0.10 and update the binaries? How is Kafka supposed to behave, and what are
we going to miss?
On Wed, Dec 6, 2017 at 12:34 PM, Manikumar
> Sometimes it seems to receive messages from all partitions (0, 1, 2).
> At other times, it seems to start up only listening to just one
> partition.
Eh, never mind. Looks like pilot error. I accidentally had a stalled
client still running. ("Stalled", as in had been doing something
useful, but
I'm using the Python API in my clients (Conda kafka-python package, v
1.3.3). I have one client which is kind of a catch-all, subscribing to
several topics. There is just one instance, but I pass the group_id
arg to the constructor because I want to control commit points. As I
understand it, use
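A minimal sketch of that setup with kafka-python (broker address, group id, and topic names below are assumptions; the actual connection is left as comments so the configuration stands on its own):

```python
# Hypothetical catch-all consumer setup: one instance subscribing to several
# topics, with auto-commit disabled so commit points are controlled explicitly.
consumer_config = dict(
    bootstrap_servers="localhost:9092",  # assumption
    group_id="catch-all",                # assumption: any stable group id works
    enable_auto_commit=False,            # commit manually at chosen points
)
topics = ["topic-a", "topic-b"]          # hypothetical topic names

# With a broker available, this would become:
#   from kafka import KafkaConsumer
#   consumer = KafkaConsumer(*topics, **consumer_config)
#   for msg in consumer:
#       handle(msg)          # process the record
#       consumer.commit()    # explicit commit point
print(sorted(consumer_config))
```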
Hi Artur,
What’s your Kubernetes setup? Azure, AWS, GKE? You should be able to hit your
brokers from any node in the cluster, but it’s best to abstract them via a
Kubernetes service.
BTW, Lenses will support deployment into Kubernetes in our next release.
Andrew Stevenson
https://landoop.com
In terms of building out the Apache Pulsar ecosystem, Landoop is working on
porting our Kafka Connect Connectors to Pulsar's framework.
We already have a Kafka to Pulsar source and sink.
On 05/12/2017, 19:59, "Jason Gustafson" wrote:
> I believe a lot of users are using
And Lenses - https://www.landoop.com/kafka-lenses/
On 06/12/2017, 11:21, "Abhimanyu Nagrath" wrote:
There are a couple of tools for this.
1. Linkedin Burrow - https://github.com/linkedin/Burrow
2. Yahoo Kafka Monitoring Tool -
Hi,
Two questions. Is your MirrorMaker colocated with the source or the target?
What are the send and receive buffer sizes on the connections that span
the WAN?
Hope we can get you some help.
Best jan
On 06.12.2017 14:36, Xu, Zhaohui wrote:
Any update on this issue?
We also run
This problem was solved by upgrading from 0.10 to 0.11 (broker + client).
Thanks for your feedback.
On Thu, Nov 30, 2017 at 10:03 AM, Tom van den Berge <
tom.vandenbe...@gmail.com> wrote:
> The consumers are using default settings, which means that
> enable.auto.commit=true and
Hi Irtiza,
We're using Jolokia and we had no problems with it.
It would be useful to know what exactly you did (how you "plugged in"
Jolokia, how you configured it, which endpoint you are querying, etc.) so we
can help you.
On 6 December 2017 at 10:36, Irtiza Ali wrote:
> Hello everyone,
I’ll give a +1 for jmxtrans here. We use it with great success paired with
Graphite and Grafana for monitoring.
On December 6, 2017 at 7:43:31 AM, Subhash Sriram (subhash.sri...@gmail.com)
wrote:
Hi Irtiza,
Have you looked at jmxtrans? It has multiple output writers for the metrics
and one of
Any update on this issue?
We also ran into a similar situation recently. We use MirrorMaker to
replicate messages between clusters in different DCs, but sometimes a portion
of the partitions shows high consumer lag, and tcpdump also shows a similar
packet delivery pattern. The behavior is
Hi Irtiza,
Have you looked at jmxtrans? It has multiple output writers for the metrics and
one of them is the KeyOutWriter which just writes to disk.
https://github.com/jmxtrans/jmxtrans/wiki
Hope that helps!
Thanks,
Subhash
Sent from my iPhone
> On Dec 6, 2017, at 5:36 AM, Irtiza Ali
Thanks Matthias. Please find my response inline.
On Wed, Dec 6, 2017 at 12:34 AM, Matthias J. Sax
wrote:
> Hard to say.
>
> However, deleting state directories will not have any negative impact as
> you don't use stores. Thus, why do you not want to do this?
> We are
Hello everyone,
I am working on a Python-based Kafka monitoring application. I am unable to
figure out how to retrieve the metrics using Jolokia. I have enabled the
port for metrics retrieval to .
I have two questions:
1) Is there something that I am not doing correctly?
2) Is there some other way
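For what it's worth, Jolokia exposes JMX reads over plain HTTP, so a raw GET against its read endpoint is enough; the host, port, and MBean below are assumptions for illustration (8778 is Jolokia's default agent port):

```python
# Build a Jolokia read URL for a Kafka broker MBean. Jolokia's read endpoint
# has the shape /jolokia/read/<mbean>/<attribute>; fetching it (e.g. with
# urllib) returns JSON with a "value" field containing the metric.
def jolokia_read_url(host, port, mbean, attribute):
    return "http://%s:%d/jolokia/read/%s/%s" % (host, port, mbean, attribute)

url = jolokia_read_url(
    "localhost", 8778,
    "kafka.server:type=BrokerTopicMetrics,name=MessagesInPerSec",
    "OneMinuteRate")
print(url)
```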
Hi,
I'd like to measure the performance of my KStreams app. As a first step,
brutally simple: when it starts reading from a topic and when it's done
processing the data and writing the output to a topic.
What is a simple way of achieving this?
/Artur
There are a couple of tools for this.
1. Linkedin Burrow - https://github.com/linkedin/Burrow
2. Yahoo Kafka Monitoring Tool - https://github.com/yahoo/kafka-manager
3. Uber Kafka Monitoring Tool - https://github.com/uber/chaperone
Regards,
Abhimanyu
On Wed, Dec 6, 2017 at 3:14 PM,
Hello,
I am working on Kafka monitoring and writing an application for it. Can
anyone tell me how to get the metrics in raw form? I don't want to use
Datadog, JConsole, or other tools like these.
I am currently using jolokia to retrieve metrics.
With Regards
Irtiza Ali