Different Schemas on same Kafka Topic

2017-08-16 Thread Shajahan, Nishanth
Hello, Does Kafka support writing different Avro record types (with very different schemas) to the same topic? I guess we would have to write our own Avro serializer and deserializer to do this? Is there a preferred way to do this? It would be great if someone can point me in the right direc
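A minimal sketch (not from this thread) of what such a custom serializer could look like, using Avro's GenericDatumWriter so that records with different schemas can share one topic. The class name is hypothetical, and the sketch omits how a consumer would recover the writer schema (e.g. a schema registry or an embedded schema id):

    import org.apache.avro.generic.GenericDatumWriter;
    import org.apache.avro.generic.GenericRecord;
    import org.apache.avro.io.BinaryEncoder;
    import org.apache.avro.io.EncoderFactory;
    import org.apache.kafka.common.serialization.Serializer;

    import java.io.ByteArrayOutputStream;
    import java.io.IOException;
    import java.util.Map;

    // Serializes any Avro GenericRecord, regardless of its schema, so records
    // with different schemas can be written to one topic.
    public class MultiSchemaAvroSerializer implements Serializer<GenericRecord> {

        @Override
        public void configure(Map<String, ?> configs, boolean isKey) { }

        @Override
        public byte[] serialize(String topic, GenericRecord record) {
            if (record == null) return null;
            try {
                ByteArrayOutputStream out = new ByteArrayOutputStream();
                BinaryEncoder encoder = EncoderFactory.get().binaryEncoder(out, null);
                new GenericDatumWriter<GenericRecord>(record.getSchema()).write(record, encoder);
                encoder.flush();
                return out.toByteArray();
            } catch (IOException e) {
                throw new RuntimeException("Avro serialization failed", e);
            }
        }

        @Override
        public void close() { }
    }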

Re: Altered retention.ms not working

2017-08-16 Thread Vinay Gulani
I am also facing the same issue. Could you please share a solution if you found one? Thanks, Vinay

Re: Few questions about how Kafka Streams manages tasks

2017-08-16 Thread Guozhang Wang
I see. For normal maintenance operations, before you kill your container you could shut down the Streams application by calling `KafkaStreams#close()`. Upon shutting down it will write a local checkpoint file indicating the offsets at which it stopped. So on resuming, if the
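A minimal sketch, assuming the application runs in a container that receives a normal termination signal: a JVM shutdown hook calls `KafkaStreams#close()` so the checkpoint file gets written before the process exits. The createStreams() helper is hypothetical:

    import org.apache.kafka.streams.KafkaStreams;

    public class GracefulShutdownExample {
        public static void main(String[] args) {
            // createStreams() is a hypothetical factory returning a configured KafkaStreams.
            KafkaStreams streams = createStreams();
            streams.start();

            // Closing cleanly before the container is killed flushes state and writes
            // the local checkpoint file, so the next start can resume from the recorded
            // offsets instead of restoring everything from the changelog topics.
            Runtime.getRuntime().addShutdownHook(new Thread(() -> streams.close()));
        }

        private static KafkaStreams createStreams() {
            throw new UnsupportedOperationException("build your topology and config here");
        }
    }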

Re: [kafka streams] discuss: dynamically update subscription pattern

2017-08-16 Thread Guozhang Wang
Bart, Thanks for providing your observations and conclusions. Stay tuned for further discussions on adding dynamic subscriptions in Streams. Guozhang On Wed, Aug 16, 2017 at 1:37 AM, Bart Vercammen wrote: > Hi Guozhang, > > In the end I opted for a native Kafka consumer/producer application i

Avro With Kafka

2017-08-16 Thread Nishanth S
Hello, We are investigating ingesting Avro records into Kafka using the Avro Kafka serializer. Our schemas are nested and are of type record. Does the current Avro Kafka serializer support the Avro record type? If not, is there a way to ingest records and consume using a consumer without using a
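A sketch of producing a nested record type, assuming the "Avro Kafka serializer" in question is Confluent's KafkaAvroSerializer backed by a schema registry; Avro itself handles record-typed fields recursively, so nesting needs no special treatment on the producer side. Broker address, registry URL, topic, and field names below are placeholders:

    import java.util.Properties;
    import org.apache.avro.Schema;
    import org.apache.avro.SchemaBuilder;
    import org.apache.avro.generic.GenericData;
    import org.apache.avro.generic.GenericRecord;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.Producer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    public class NestedAvroProducerSketch {
        public static void main(String[] args) {
            // Nested schema: an "Order" record containing a "Customer" record field.
            Schema customer = SchemaBuilder.record("Customer").fields()
                    .requiredString("name").endRecord();
            Schema order = SchemaBuilder.record("Order").fields()
                    .requiredString("id")
                    .name("customer").type(customer).noDefault()
                    .endRecord();

            GenericRecord cust = new GenericData.Record(customer);
            cust.put("name", "alice");
            GenericRecord rec = new GenericData.Record(order);
            rec.put("id", "o-1");
            rec.put("customer", cust);

            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");              // placeholder
            props.put("key.serializer",
                    "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer",
                    "io.confluent.kafka.serializers.KafkaAvroSerializer");  // assumed serializer
            props.put("schema.registry.url", "http://localhost:8081");      // placeholder

            try (Producer<String, GenericRecord> producer = new KafkaProducer<>(props)) {
                producer.send(new ProducerRecord<>("orders", rec.get("id").toString(), rec));
            }
        }
    }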

Re: Synchronized methods in RocksDB key store

2017-08-16 Thread Guozhang Wang
One rationale behind it is the implementation of the windowed store, which may span multiple RocksDB instances. When you have a range query over a window store, we need to make sure that the underlying stores provide a consistent "snapshot" at the time when the query is issued. Such synchroniz

Re: We use Apache Kafka!

2017-08-16 Thread Guozhang Wang
Hello John, Please feel free to submit a PR to the kafka-site repo: https://github.com/apache/kafka-site. Particularly for this file: https://github.com/apache/kafka-site/blob/asf-site/powered-by.html Some committer will then come to review the PR and merge it afterwards. Guozhang 2017-08-1

Re: Kafka Producer Errors

2017-08-16 Thread Saladi Naidu
I thought the same and checked; the GC pause was a maximum of 8 seconds. Naidu Saladi On Monday, August 14, 2017 1:59 AM, Kamal C wrote: I think your application (where the producer resides) is facing GC issues. The time taken for the GC might be higher than the `request.timeout.ms`. Check
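For reference, a sketch of where `request.timeout.ms` is set on the producer so it can be compared against observed GC pause times; the values below are placeholders:

    import java.util.Properties;
    import org.apache.kafka.clients.producer.ProducerConfig;

    // Illustrative only: an 8 second pause is well under the 30 s default, so
    // raising the timeout would not be the fix here; this just shows the knob.
    Properties props = new Properties();
    props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");  // placeholder
    props.put(ProducerConfig.REQUEST_TIMEOUT_MS_CONFIG, 60000);            // default is 30000 ms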

Re: Upgrade to Kafka 11 and Zookeeper 3.4.10

2017-08-16 Thread Carmen Molatch
Thank you Ismael Carmen Molatch Software Quality Engineer 4, iDDS cell: 303-506-8849 carmen.mola...@jeppesen.com 55 Inverness Dr East | Englewood, CO 80112 | www.jeppesen.com On 8/16/17, 6:08 AM, "Ismael Juma" wrote: >Yes, Kafka 0.11.x is compatible with ZooKee

We use Apache Kafka!

2017-08-16 Thread John Medeiros
Hi! We would like to be shown on your site (Powered By page) as a company that uses Apache Kafka in production. *Usage*: We use Kafka in production for online and near real-time solutions. Kafka is a core part of many products, such as our Credit Card System. *Our main website*: http://www.por

RE: New Partition Strategy for Even Disk Usage

2017-08-16 Thread Tauzell, Dave
What sort of skew do you expect? For example, do you expect one key to have 1000x as many messages as others? The consumer API allows you to pick a partition, so if you know that you have N partition groups then you could set up N consumers, each pulling from one partition in the group. You could
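A minimal sketch of that "pick a partition" approach with the plain consumer API, using `assign()` instead of group-managed `subscribe()`; topic name, partition number, and broker address are placeholders:

    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.TopicPartition;

    public class SinglePartitionConsumerSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");   // placeholder
            props.put("key.deserializer",
                    "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("value.deserializer",
                    "org.apache.kafka.common.serialization.StringDeserializer");

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                // assign() pins this consumer to one partition instead of relying on
                // group rebalancing, so each of the N consumers owns exactly one
                // partition of its group.
                consumer.assign(Collections.singletonList(new TopicPartition("my-topic", 0)));
                ConsumerRecords<String, String> records = consumer.poll(1000);
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("partition=%d offset=%d value=%s%n",
                            record.partition(), record.offset(), record.value());
                }
            }
        }
    }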

New Partition Strategy for Even Disk Usage

2017-08-16 Thread Matt Andruff
Good Day, I'm looking for someone to poke holes in my theory. I want to balance my disk usage across brokers while maintaining order per partition. Yes, there are tools, but they require manual intervention. What if I created a custom partition strategy? The strategy is to take the existing part
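The proposal itself is truncated above, so as a neutral illustration only, here is a skeleton of a pluggable producer Partitioner where such a balancing strategy could live (enabled via the producer's partitioner.class setting); the class name and fallback logic are placeholders:

    import java.util.Map;
    import org.apache.kafka.clients.producer.Partitioner;
    import org.apache.kafka.common.Cluster;
    import org.apache.kafka.common.utils.Utils;

    public class DiskAwarePartitioner implements Partitioner {

        @Override
        public void configure(Map<String, ?> configs) { }

        @Override
        public int partition(String topic, Object key, byte[] keyBytes,
                             Object value, byte[] valueBytes, Cluster cluster) {
            int numPartitions = cluster.partitionsForTopic(topic).size();
            if (keyBytes == null) {
                return 0; // placeholder for keyless records
            }
            // Placeholder: hash the key as the default partitioner does; a custom
            // strategy could instead weight partitions by broker disk usage, as long
            // as a given key always maps to the same partition to preserve ordering.
            return Utils.toPositive(Utils.murmur2(keyBytes)) % numPartitions;
        }

        @Override
        public void close() { }
    }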

Re: Querying consumer groups programmatically (from Golang)

2017-08-16 Thread Gabriel Machado
Hi Jens and Ian, Very useful projects :). What's the difference between the two tools? Do they support Kafka SSL clusters? Thanks, Gabriel. 2017-08-13 3:29 GMT+02:00 Ian Duffy : > Hi Jens, > > We did something similar to this at Zalando. > > https://github.com/zalando-incubator/remora > >

Re: Upgrade to Kafka 11 and Zookeeper 3.4.10

2017-08-16 Thread Ismael Juma
Yes, Kafka 0.11.x is compatible with ZooKeeper 3.4.10. Ismael On Tue, Aug 15, 2017 at 3:38 PM, Carmen Molatch wrote: > Hello > > I've been asked to upgrade kafka (2.10.8.2.0) and zookeeper (3.4.8). Are > Kafka 11 and Zookeeper 3.4.10 compatible? Are there some gotchas? > > Thanks > Carmen > >

Re: RocksDB error

2017-08-16 Thread Sameer Kumar
OK, got it. Thanks... changed it, and it works. -Sameer. On Wed, Aug 16, 2017 at 4:06 PM, Damian Guy wrote: > I see. It is the same issue, though. The problem is that Long.MAX_VALUE is > actually too large, it causes an overflow so the task will still run, i.e, > in this bit of code: > > if (n

Re: Synchronized methods in RocksDB key store

2017-08-16 Thread Sameer Kumar
From the RocksDB writeup, it doesn't seem so. I am interested to know if there were any issues we faced that led to adding the synchronization. -Sameer. On Wed, Aug 16, 2017 at 2:01 PM, Damian Guy wrote: > Sameer, > It might be that put, delete, putIfAbsent etc operations can be > non-synchronized. How

Re: RocksDB error

2017-08-16 Thread Damian Guy
I see. It is the same issue, though. The problem is that Long.MAX_VALUE is actually too large: it causes an overflow, so the task will still run, i.e., in this bit of code: if (now > lastCleanMs + cleanTimeMs) { stateDirectory.cleanRemovedTasks(cleanTimeMs); lastCleanMs = now; } So, you wil
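A tiny standalone illustration (not Streams code) of the overflow being described: adding anything positive to Long.MAX_VALUE wraps to a negative number, so the comparison still passes and cleanup still runs:

    public class OverflowSketch {
        public static void main(String[] args) {
            long now = System.currentTimeMillis();
            long lastCleanMs = now;
            long cleanTimeMs = Long.MAX_VALUE;

            // lastCleanMs + Long.MAX_VALUE wraps around to a large negative number,
            // so "now > lastCleanMs + cleanTimeMs" evaluates to true.
            long threshold = lastCleanMs + cleanTimeMs;
            System.out.println(threshold);         // negative due to overflow
            System.out.println(now > threshold);   // true
        }
    }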

Re: RocksDB error

2017-08-16 Thread Sameer Kumar
I have already set this configuration; it shows up in the logs as well. state.cleanup.delay.ms = 9223372036854775807 -Sameer. On Wed, Aug 16, 2017 at 1:56 PM, Damian Guy wrote: > I believe it is related to a bug in the state directory cleanup. This has > been fixed on trunk and also on the

Re: [kafka streams] discuss: dynamically update subscription pattern

2017-08-16 Thread Bart Vercammen
Hi Guozhang, In the end I opted for a native Kafka consumer/producer application instead of using Kafka Streams for this. The overhead of creating new Streams applications for each update of the metadata was a bit too cumbersome. But still, the issue remains that, although this works (thanks for th

Re: Synchronized methods in RocksDB key store

2017-08-16 Thread Damian Guy
Sameer, It might be that put, delete, putIfAbsent etc. operations could be non-synchronized. However, for get and range operations that can be performed by IQ, i.e., other threads, we need to guard against the store being closed by the StreamThread, hence the synchronization. Thanks, Damian On Wed, 1
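A simplified sketch, not the actual RocksDB store source, of the pattern being described: reads from interactive-query threads and close() from the StreamThread synchronize on the same lock, so a reader never touches a closed store:

    // Simplified illustration; the Map stands in for the RocksDB handle.
    public class GuardedStoreSketch {
        private boolean open = false;
        private java.util.Map<String, String> delegate;

        public synchronized void openStore() {
            delegate = new java.util.HashMap<>();
            open = true;
        }

        public synchronized String get(String key) {   // called by IQ threads
            if (!open) {
                throw new IllegalStateException("Store is closed");
            }
            return delegate.get(key);
        }

        public synchronized void close() {             // called by the StreamThread
            open = false;
            delegate = null;
        }
    }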

Re: RocksDB error

2017-08-16 Thread Damian Guy
I believe it is related to a bug in the state directory cleanup. This has been fixed on trunk and also on the 0.11 branch (it will be part of 0.11.0.1, which will hopefully be released soon). The fix is in this JIRA: https://issues.apache.org/jira/browse/KAFKA-5562 To work around it you should set Stre
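The workaround text is cut off above, but based on the config quoted elsewhere in this thread (state.cleanup.delay.ms), a sketch of setting it through StreamsConfig; the exact value is an assumption, chosen large but small enough not to overflow the way Long.MAX_VALUE does:

    import java.util.Properties;
    import java.util.concurrent.TimeUnit;
    import org.apache.kafka.streams.StreamsConfig;

    // Push the state-directory cleanup interval far into the future without
    // using Long.MAX_VALUE, which overflows as discussed earlier in the thread.
    Properties props = new Properties();
    props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-app");              // placeholder
    props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");   // placeholder
    props.put(StreamsConfig.STATE_CLEANUP_DELAY_MS_CONFIG, TimeUnit.DAYS.toMillis(365));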