Re: Event sourcing with Kafka and Kafka Streams. How to deal with atomicity

2017-07-21 Thread Garrett Barton
Could you take in both topics via the same stream? Meaning don't do a Kafka Streams join, literally just read both streams. If KStreams can't do this (dunno, haven't tried), then a simple upstream merge job to throw them into 1 topic with the same partitioning scheme. I'd assume you would have the products
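A minimal sketch of the read-both-topics approach described above, assuming the StreamsBuilder API from more recent Kafka Streams releases; topic names and serdes are placeholders:

```java
import java.util.Arrays;
import java.util.Properties;

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;

public class MergedTopicsExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "merged-reader");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();
        // One KStream subscribed to both topics; records from either topic
        // flow through the same processing logic.
        KStream<String, String> merged = builder.stream(Arrays.asList("orders", "products"));
        merged.foreach((key, value) -> System.out.println(key + " -> " + value));

        new KafkaStreams(builder.build(), props).start();
    }
}
```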

Joining two topics and emitting each key only once within a sliding window.

2017-07-21 Thread Leif Wickland
Howdy, I'm trying to make a Kafka application which will join two topics with different semantics than the KStreams join has natively. Specifically I'd like to: - Immediately emit matches to a topic. - Track when matches are found so subsequent matches (which within the join window would be
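One possible shape for the "emit each key only once" part is a Transformer backed by a key-value store that remembers which keys have already produced a match. This is only a sketch under assumptions: it operates on an already-joined KStream, the store and method names are made up, and expiring old entries once the join window closes (e.g. via a punctuator or a windowed store) is left out:

```java
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.Transformer;
import org.apache.kafka.streams.processor.ProcessorContext;
import org.apache.kafka.streams.state.KeyValueStore;
import org.apache.kafka.streams.state.Stores;

public class EmitOnceJoin {

    /** Forward only the first join result seen for each key; drop later duplicates. */
    static KStream<String, String> emitFirstMatchOnly(StreamsBuilder builder,
                                                      KStream<String, String> joined) {
        builder.addStateStore(Stores.keyValueStoreBuilder(
                Stores.persistentKeyValueStore("emitted-keys"),
                Serdes.String(), Serdes.Long()));

        return joined.transform(() -> new Transformer<String, String, KeyValue<String, String>>() {
            private ProcessorContext context;
            private KeyValueStore<String, Long> emitted;

            @Override
            @SuppressWarnings("unchecked")
            public void init(ProcessorContext context) {
                this.context = context;
                this.emitted = (KeyValueStore<String, Long>) context.getStateStore("emitted-keys");
            }

            @Override
            public KeyValue<String, String> transform(String key, String value) {
                if (emitted.get(key) != null) {
                    return null;                          // already emitted for this key: suppress
                }
                emitted.put(key, context.timestamp());    // remember when the first match went out
                return KeyValue.pair(key, value);         // emit the first match immediately
            }

            @Override
            public void close() { }
        }, "emitted-keys");
    }
}
```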

Re: Event sourcing with Kafka and Kafka Streams. How to deal with atomicity

2017-07-21 Thread José Antonio Iñigo
Hi Chris, *"if I understand your problem correctly, the issue is that you need todecrement the stock count when you reserve it, rather than splitting it* *into a second phase."* That's exactly the problem, I would need to: 1) Read the OrderPlaced event from Kafka in ProductService... 2)

Fwd: Spark Structured Streaming - Spark Consumer does not display messages

2017-07-21 Thread Cassa L
Hi, This is the first time I am trying structured streaming with Kafka. I have simple code to read from Kafka and display it on the console. The message is in JSON format. However, when I run my code, nothing after the line below gets printed. 17/07/21 13:43:41 INFO AppInfoParser: Kafka commitId :
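For reference, a minimal Java sketch of reading from Kafka and writing to the console with Structured Streaming (topic name and bootstrap servers are placeholders, and the spark-sql-kafka-0-10 package must be on the classpath). Note that a streaming query's startingOffsets defaults to "latest", so nothing appears on the console unless new messages arrive after the query starts, which is a common reason for seemingly empty output:

```java
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;
import org.apache.spark.sql.streaming.StreamingQuery;

public class KafkaToConsole {
    public static void main(String[] args) throws Exception {
        SparkSession spark = SparkSession.builder()
                .appName("kafka-to-console")
                .getOrCreate();

        Dataset<Row> messages = spark.readStream()
                .format("kafka")
                .option("kafka.bootstrap.servers", "localhost:9092")
                .option("subscribe", "json-topic")        // placeholder topic name
                .option("startingOffsets", "earliest")    // default is "latest"
                .load()
                .selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)");

        StreamingQuery query = messages.writeStream()
                .format("console")
                .outputMode("append")
                .start();
        query.awaitTermination();
    }
}
```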

Re: [DISCUSS] KIP-175: Additional '--describe' views for ConsumerGroupCommand

2017-07-21 Thread Vahid S Hashemian
Hi Jason, Yes, I meant as a separate KIP. I can start a KIP for that sometime soon. Thanks. --Vahid From: Jason Gustafson To: d...@kafka.apache.org Cc: Kafka Users Date: 07/21/2017 11:37 AM Subject: Re: [DISCUSS] KIP-175:

Re: Event sourcing with Kafka and Kafka Streams. How to deal with atomicity

2017-07-21 Thread Jay Kreps
Hey Chris, I heard a similar complaint from a few people. I am quite ignorant about event sourcing and don't feel I understand the relationship fully but I am interested in understanding a little better what you are saying. I think we see the world this way: 1. You store the log of primary

Re: Using JMXMP to access Kafka metrics

2017-07-21 Thread Fernando Vega
We use jmxtrans and it works pretty well. *Fernando Vega* Sr. Operations Engineer, turn.com | @TurnPlatform

Re: Help please: Topics deleted, but Kafka continues to try and sync them with Zookeeper

2017-07-21 Thread Chris Neal
Gotcha. Thanks again. Will post back with an update once I've tried this! Chris On Fri, Jul 21, 2017 at 1:12 PM, Carl Haferd wrote: > I would recommend allowing each broker enough time to catch up before > starting the next, but this may be less of a concern if

Re: [DISCUSS] KIP-175: Additional '--describe' views for ConsumerGroupCommand

2017-07-21 Thread Jason Gustafson
> > Regarding your comment about the current limitation on the information > returned for a consumer group, do you think it's worth expanding the API > to return some additional info (e.g. generation id, group leader, ...)? Seems outside the scope of this KIP. Up to you, but I'd probably leave

Re: Help please: Topics deleted, but Kafka continues to try and sync them with Zookeeper

2017-07-21 Thread Carl Haferd
I would recommend allowing each broker enough time to catch up before starting the next, but this may be less of a concern if the entire cluster is being brought down and then started from scratch. To automate, we poll until the Kafka process binds to its configured port (9092), and then once all
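As an illustration of that kind of port check, a small hypothetical Java helper that polls until a broker's listener accepts TCP connections (host, port, and timeouts are placeholders; a bound port only shows the process is up, not that its replicas have caught up):

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class WaitForBroker {
    /** Poll until the broker's listener accepts TCP connections, or time out. */
    public static boolean waitForPort(String host, int port, long timeoutMs) throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (System.currentTimeMillis() < deadline) {
            try (Socket socket = new Socket()) {
                socket.connect(new InetSocketAddress(host, port), 2000);
                return true;                  // broker is listening
            } catch (IOException notYetUp) {
                Thread.sleep(5000);           // not up yet: wait and retry
            }
        }
        return false;
    }
}
```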

Re: Help please: Topics deleted, but Kafka continues to try and sync them with Zookeeper

2017-07-21 Thread Chris Neal
Thanks Carl. Always fun to do this stuff in production... ;) Appreciate the input. I'll try a full cycle and see how that works. In your opinion, if I stop all brokers and all Zookeeper nodes, then restart all Zookeepers...at that point can I start both brokers at the same time, or should I

Re: Help please: Topics deleted, but Kafka continues to try and sync them with Zookeeper

2017-07-21 Thread Carl Haferd
I have encountered similar difficulties in a test environment and it may be necessary to stop the Kafka process on each broker and take Zookeeper offline before removing the files and zookeeper paths. Otherwise there may be a race condition between brokers which could cause the cluster to retain

Tuning up mirror maker for high thruput

2017-07-21 Thread Sunil Parmar
We're trying to set up mirror maker to mirror data from the EU dc to the US dc. The network delay is ~150 ms. In a recent test, we realized that mirror maker is not keeping up with the load and has a lag trending upward all the time. What are the configurations that can be tuned to make it work for the higher
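For a high-latency link, the knobs usually worth looking at are the MirrorMaker producer's batching, compression, and TCP buffer settings, plus the consumer's fetch sizes. A sketch of illustrative starting values (broker addresses and the numbers are assumptions to be tested, not recommendations; note that max.in.flight.requests.per.connection above 1 can reorder messages on retries):

```java
import java.util.Properties;

public class MirrorMakerTuning {
    public static void main(String[] args) {
        // Producer side (producer.properties passed to MirrorMaker).
        Properties producer = new Properties();
        producer.put("bootstrap.servers", "us-dc-broker:9092");
        producer.put("compression.type", "lz4");           // fewer bytes over the WAN
        producer.put("batch.size", "262144");              // larger batches amortize the ~150 ms RTT
        producer.put("linger.ms", "100");                  // wait a little to fill batches
        producer.put("buffer.memory", "134217728");
        producer.put("send.buffer.bytes", "1048576");      // larger TCP buffers for the long link
        producer.put("max.in.flight.requests.per.connection", "10");

        // Consumer side (consumer.properties passed to MirrorMaker).
        Properties consumer = new Properties();
        consumer.put("bootstrap.servers", "eu-dc-broker:9092");
        consumer.put("group.id", "mirror-maker-eu-to-us");
        consumer.put("fetch.min.bytes", "1048576");        // fetch in larger chunks
        consumer.put("receive.buffer.bytes", "1048576");
        consumer.put("max.partition.fetch.bytes", "4194304");
    }
}
```

Raising MirrorMaker's --num.streams option to run more consumer threads can also help saturate the link.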

Re: Help please: Topics deleted, but Kafka continues to try and sync them with Zookeeper

2017-07-21 Thread Chris Neal
Welp. Surprisingly, that did not fix the problem. :( I cleaned out all the entries for these topics from /config/topics, and removed the logs from the file system for those topics, and the messages are still flying by in the server.log file. Also, more concerning, when I was looking through the

Re: Help please: Topics deleted, but Kafka continues to try and sync them with Zookeeper

2017-07-21 Thread M. Manna
Just to add (in case the platform is Windows): for Windows-based cluster implementations, log/topic cleanup doesn't work out of the box. Users are more or less aware of it and do their own maintenance as a workaround. If you have issues with topic deletion not working properly on Windows (i.e.

Re: Help please: Topics deleted, but Kafka continues to try and sync them with Zookeeper

2017-07-21 Thread Chris Neal
@Carl, There is nothing under /admin/delete_topics other than [] And nothing under /admin other than delete_topics :) The topics DO exist, however, under /config/topics! We may be on to something. I will remove them here and see if that clears it up. Thanks so much for all the help! Chris

Re: Event sourcing with Kafka and Kafka Streams. How to deal with atomicity

2017-07-21 Thread Ben Stopford
Hi Jose If I understand your problem correctly, the issue is that you need to decrement the stock count when you reserve it, rather than splitting it into a second phase. You can do this via the DSL with a Transformer. There's a related example below. Alternatively you could do it with the
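A sketch of what such a Transformer might look like, assuming a recent Kafka Streams release; topic names, the store name, and the string-encoded quantities are placeholders, and populating the stock store (e.g. from a compacted stock topic) is omitted:

```java
import java.util.Properties;

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.Transformer;
import org.apache.kafka.streams.processor.ProcessorContext;
import org.apache.kafka.streams.state.KeyValueStore;
import org.apache.kafka.streams.state.StoreBuilder;
import org.apache.kafka.streams.state.Stores;

public class StockReservationExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "stock-reservation");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();

        // Local store holding the current stock per productId.
        StoreBuilder<KeyValueStore<String, Long>> stockStore =
                Stores.keyValueStoreBuilder(Stores.persistentKeyValueStore("stock"),
                        Serdes.String(), Serdes.Long());
        builder.addStateStore(stockStore);

        // key = productId, value = quantity ordered (as a String)
        builder.<String, String>stream("orders")
                .transform(() -> new Transformer<String, String, KeyValue<String, String>>() {
                    private KeyValueStore<String, Long> stock;

                    @Override
                    @SuppressWarnings("unchecked")
                    public void init(ProcessorContext context) {
                        stock = (KeyValueStore<String, Long>) context.getStateStore("stock");
                    }

                    @Override
                    public KeyValue<String, String> transform(String productId, String value) {
                        long requested = Long.parseLong(value);
                        Long available = stock.get(productId);
                        if (available != null && available >= requested) {
                            // Check and decrement in the same step -- no separate reservation phase.
                            stock.put(productId, available - requested);
                            return KeyValue.pair(productId, "RESERVED:" + requested);
                        }
                        return KeyValue.pair(productId, "REJECTED:" + requested);
                    }

                    @Override
                    public void close() { }
                }, "stock")
                .to("order-results");

        new KafkaStreams(builder.build(), props).start();
    }
}
```

Because the check and the decrement happen in the same task against a local store, there is no second phase to coordinate.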

Re: Spring release using apache clients 11

2017-07-21 Thread David Espinosa
Hi Gary, The feature I'm looking to use from apache clients 11 is custom header creation (if I'm not wrong you have already added support for it). Up to now, I was populating a list of headers (metadata) into the payload of the message itself, so as soon as I can move them to the kafka
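For reference, record headers on the 0.11 producer look roughly like this; topic, key/value, and header names are placeholders:

```java
import java.nio.charset.StandardCharsets;
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class HeaderProducerExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        try (Producer<String, String> producer = new KafkaProducer<>(props)) {
            ProducerRecord<String, String> record =
                    new ProducerRecord<>("events", "order-42", "{\"total\": 12.5}");
            // Metadata travels in record headers instead of inside the payload.
            record.headers().add("trace-id", "abc-123".getBytes(StandardCharsets.UTF_8));
            record.headers().add("schema-version", "2".getBytes(StandardCharsets.UTF_8));
            producer.send(record);
        }
    }
}
```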

Re: Event sourcing with Kafka and Kafka Streams. How to deal with atomicity

2017-07-21 Thread Debasish Ghosh
Kafka has quite a few tricks up its sleeve that can help in implementing event sourced systems .. - application state - one of the things that you may want to do in an event sourced system is manage and query the state of the application. If you use Kafka Streams, you get the full
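For example, querying local application state through the interactive-queries API (as it looked around the 0.11 timeframe) is roughly as below; the store name and types are placeholders, and with multiple instances the key may live on another instance:

```java
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.state.QueryableStoreTypes;
import org.apache.kafka.streams.state.ReadOnlyKeyValueStore;

public class StateQueryExample {
    /** Look up the latest value for a key directly from the local state store. */
    static Long currentStock(KafkaStreams streams, String productId) {
        ReadOnlyKeyValueStore<String, Long> store =
                streams.store("stock", QueryableStoreTypes.keyValueStore());
        return store.get(productId);   // null if this instance does not hold the key
    }
}
```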

UnknownServerException on multiple broker registration with 3 node ZK

2017-07-21 Thread M. Manna
Hello, I suppose I must confirm that I have read the following: https://cwiki.apache.org/confluence/display/KAFKA/FAQ#FAQ-WhydoIseeerror%22Shouldnotsetlogendoffsetonpartition%22inthebrokerlog? I have a 3 node cluster (3 zookeepers and 3 brokers - 3 different physical servers). my

Re: Event sourcing with Kafka and Kafka Streams. How to deal with atomicity

2017-07-21 Thread Michal Borowiecki
With Kafka Streams you get those and atomicity via Exactly-once-Semantics. Michał On 21/07/17 14:51, Chris Richardson wrote: Hi, I like Kafka but I don't understand the claim that it can be used for Event Sourcing (here and here
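Enabling exactly-once processing in Kafka Streams is a single config switch (it requires 0.11+ brokers); a minimal sketch with placeholder application id and servers:

```java
import java.util.Properties;

import org.apache.kafka.streams.StreamsConfig;

public class ExactlyOnceConfig {
    static Properties exactlyOnceProps() {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "order-processor");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        // State updates and output records are then committed atomically
        // using transactions.
        props.put(StreamsConfig.PROCESSING_GUARANTEE_CONFIG, StreamsConfig.EXACTLY_ONCE);
        return props;
    }
}
```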

Re: Spring release using apache clients 11

2017-07-21 Thread Gary Russell
We are also considering releasing a 1.3 version, which will have a subset of the 2.0 features, and also support the 0.11 clients, while not requiring Spring Framework 5.0 and Java 8. On 2017-07-20 15:58 (-0400), David Espinosa wrote: > Thanks Rajini! > > El dia 20 jul.

Re: Consumer throughput drop

2017-07-21 Thread Ismael Juma
Thanks for reporting the results. Maybe you could submit a PR that updates the ops section? https://github.com/apache/kafka/blob/trunk/docs/ops.html Ismael On Fri, Jul 21, 2017 at 2:49 PM, Ovidiu-Cristian MARCU < ovidiu-cristian.ma...@inria.fr> wrote: > After some tuning, I got better results.

Re: Event sourcing with Kafka and Kafka Streams. How to deal with atomicity

2017-07-21 Thread Chris Richardson
Hi, I like Kafka but I don't understand the claim that it can be used for Event Sourcing (here and here). One part of event sourcing is the ability to subscribe to events published

Re: Consumer throughput drop

2017-07-21 Thread Ovidiu-Cristian MARCU
After some tuning, I got better results. What I changed, as suggested: dirty_ratio = 10 (previously 20), dirty_background_ratio = 3 (previously 10). The result is that disk read I/O is almost completely 0 (I have enough cache; the consumer is keeping up with the producer). - producer throughput remains

Re: Kafka Streams: why aren't offsets being committed?

2017-07-21 Thread Matthias J. Sax
>>> My guess is that offsets are committed only when all tasks in the topology have received input. Is this what's happening? No. Task offsets are committed independently from each other. You can double-check the logs in DEBUG mode; they indicate when offsets get committed. Also
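The commit cadence itself is controlled by commit.interval.ms; a minimal sketch of setting it (application id and servers are placeholders):

```java
import java.util.Properties;

import org.apache.kafka.streams.StreamsConfig;

public class CommitIntervalConfig {
    static Properties props() {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-streams-app");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        // Offsets are committed on this interval (default 30000 ms, or 100 ms
        // when exactly-once is enabled), independently per task.
        props.put(StreamsConfig.COMMIT_INTERVAL_MS_CONFIG, 10_000);
        return props;
    }
}
```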

Event sourcing with Kafka and Kafka Streams. How to deal with atomicity

2017-07-21 Thread José Antonio Iñigo
Hi everybody, I have been struggling with this problem for quite a while now, resorting to stackoverflow for some help with no success. I am hoping that here I'll get a more

Kafka Upgrade from 8.2 to 10.2

2017-07-21 Thread Kiran Singh
Hi Team, I have a Kafka setup as follows: Kafka server version: 10.2 inter.broker.protocol.version: 8.2 log.message.format.version: 8.2 Kafka client version: 8.2 Now I need to change the following properties: inter.broker.protocol.version: 10.2 log.message.format.version: 10.2 Kafka Client