> On Sep 15, 2017, at 23:08, Amir Nagri wrote:
>
> Were you able to resolve the above?
No, not yet. And I haven’t had a chance to open that JIRA ticket… sorry about
that. Will try to get to it soon.
Software Architect @ Park Assist » http://tech.parkassist.com/
> On Sep 11, 2017, at 14:31, Bill Bejeck wrote:
>
> I've done a little digging. Unfortunately, I don't have any answers at this
> point.
>
> ...
>
> For us to go further, I think you'll need to put a JIRA ticket in, sorry this
> isn't very helpful at this point.
Thanks
> On Sep 8, 2017, at 17:57, Bill Bejeck wrote:
>
> I'm not super familiar with Kotlin, but I'm taking a look.
Thanks Bill! I appreciate the help, and I look forward to your findings.
Avi
Software Architect @ Park Assist » http://tech.parkassist.com/
Hi all, I’m trying to experiment with Kotlin and Streams, and I’m encountering
an error:
$ kotlinc -cp
kafka-streams-0.11.0.0-cp1.jar:kafka-clients-0.11.0.0-cp1.jar:slf4j-api-1.7.25.jar
-jvm-target 1.8
Welcome to Kotlin version 1.1.4-3 (JRE 1.8.0_141-b15)
Type :help for help, :quit for quit
> On Jan 24, 2017, at 14:17, Jon Yeargers wrote:
>
> It may be picking a random partition but it sticks with it indefinitely
> despite there being a significant disparity in traffic.
Ah, I forgot to mention that IIRC the default Partitioner impl doesn’t choose a
> On Jan 24, 2017, at 11:18, Jon Yeargers wrote:
>
> If I don't specify a key when I call send a value to kafka (something akin
> to 'kafkaProducer.send(new ProducerRecord<>(TOPIC_PRODUCE, jsonView))') how
> is it keyed?
IIRC, in this case the key is null; i.e. there
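To make the null-key case concrete: as I recall, the new producer's DefaultPartitioner has nothing to hash when the key is null, so it falls back to cycling a counter over the partitions rather than hashing. A standalone sketch of that counter logic (class and method names are mine, not Kafka's):

```java
import java.util.concurrent.atomic.AtomicInteger;

// Standalone sketch of round-robin partition selection for records sent
// with a null key, loosely modeled on my recollection of the new
// producer's DefaultPartitioner. Names here are mine, not Kafka's.
class NullKeyPartitioner {
    private final AtomicInteger counter = new AtomicInteger(0);

    // With no key there is nothing to hash, so successive records cycle
    // through the partitions; with a non-null key, Kafka would instead
    // hash the key bytes modulo the partition count.
    int partitionForNullKey(int numPartitions) {
        // the mask keeps the counter non-negative after integer overflow
        return (counter.getAndIncrement() & Integer.MAX_VALUE) % numPartitions;
    }
}
```

So with a 3-partition topic, successive null-key sends would land on partitions 0, 1, 2, 0, 1, 2, and so on.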
> On Dec 19, 2016, at 16:53, Guozhang Wang wrote:
>
> Similar questions have been discussed previously:
>
> https://issues.apache.org/jira/browse/KAFKA-3775
>
> One concern of doing this, though is that if you want to do the change
> on-the-fly by rolling bounce (i.e.
> On Dec 15, 2016, at 16:39, Avi Flax <avi.f...@parkassist.com> wrote:
>
> Hi all,
>
> I apologize for flooding the list with questions lately. I guess I’m having a
> rough week.
>
> I thought my app was finally running fine after Damian’s help on Monday, but
Hi all,
I apologize for flooding the list with questions lately. I guess I’m having a
rough week.
I thought my app was finally running fine after Damian’s help on Monday, but it
turns out that it hasn’t been (successfully) consuming 2 of the topics it
should be (out of 11 total).
I’ve been
> On Dec 15, 2016, at 11:33, Damian Guy wrote:
>
> Technically you can, but not without writing some code. If you want to use
> consumer groups then you would need to write a custom PartitionAssignor and
> configure it in your Consumer Config, like so:
>
I’m trying to debug something about a Kafka Streams app, and it would be
helpful to me to start up a new instance of the app that’s only consuming from
a subset of the topics from which this app consumes. I’m hesitating though
because I don’t know if the consumer group scheme will support this
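For context, the consumer-config wiring Damian describes would look roughly like this — a sketch, where `com.example.MyPartitionAssignor` is a hypothetical placeholder for the custom assignor class:

```java
import java.util.Properties;

// Sketch of the consumer-config wiring for a custom PartitionAssignor.
// "com.example.MyPartitionAssignor" is a hypothetical placeholder name.
class AssignorConfigExample {
    static Properties consumerProps() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "debug-subset-app");
        // the assignor is named by its fully qualified class name
        props.put("partition.assignment.strategy",
                  "com.example.MyPartitionAssignor");
        return props;
    }
}
```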
> On Dec 12, 2016, at 12:29, Damian Guy wrote:
>
> Yep - that looks correct
Fantastic! Thanks again!
Software Architect @ Park Assist
We’re hiring! http://tech.parkassist.com/jobs/
> On Dec 12, 2016, at 11:42, Damian Guy wrote:
>
> If you want to split these out so that they can run in parallel, then you
> will need to create a new stream for each topic.
Just to make sure I’m understanding this, does this code change look right?
From this:
> On Dec 12, 2016, at 11:42, Damian Guy wrote:
>
> If you want to split these out so that they can run in parallel, then you
> will need to create a new stream for each topic.
Super, super helpful — thanks so much!
Avi
> On Dec 12, 2016, at 10:24, Damian Guy wrote:
>
> The code for your Streams Application. Doesn't have to be the actual code,
> but an example of how you are using Kafka Streams.
OK, I’ve prepared a gist with (I hope) the relevant code, and also some log
records just in
> On Dec 12, 2016, at 10:08, Damian Guy wrote:
>
> Can you provide an example of your topology?
I’d be happy to, but I’m not sure what you mean? Are you interested in the
implementation at the code level? You want a better understanding of the work
being done? The
Hi all,
I’m running Kafka 0.10.0.1 on Java 8 on Linux — same for brokers and streams
nodes.
I’m attempting to scale up a Streams app that does a lot of I/O — in
particular, I’m hoping to isolate each partition into its own thread — and I’m
confused:
* I recently created 11 new topics and
> On Oct 6, 2016, at 09:59, Ismael Juma <ism...@juma.me.uk> wrote:
>
> On Thu, Oct 6, 2016 at 2:51 PM, Avi Flax <avi.f...@parkassist.com> wrote:
>>
>> Does this mean that the next release (after 0.10.1.0, maybe ~Feb?) might
>> remove altogether th
> On Oct 6, 2016, at 06:24, Ismael Juma wrote:
>
> It's worth mentioning that Streams is in the process of transitioning from
> updating ZooKeeper directly to using the newly introduced create topics and
> delete topics protocol requests. It was too late for 0.10.1.0, but
> On Sep 29, 2016, at 16:39, Ali Akhtar wrote:
>
> Why did you choose Druid over Postgres / Cassandra / Elasticsearch?
Well, to be clear, we haven’t chosen it yet — we’re evaluating it.
That said, it is looking quite promising for our use case.
The Druid docs say it
> On Sep 29, 2016, at 09:54, Ali Akhtar wrote:
>
> I'd appreciate some thoughts / suggestions on which of these alternatives I
> should go with (e.g, using raw Kafka consumers vs Spark for ETL, which
> persistent data store to use, and how to query that data store in the
>
On 7/14/16, 04:35, "Michael Noll" wrote:
> Also, a heads up: It turned out that user questions around joins in Kafka
> Streams are pretty common. We are currently working on improving the
> documentation for joins to make this more clear.
Excellent!
On Jun 29, 2016, at 22:44, Guozhang Wang wrote:
>
> One way to mentally quantify your state store usage is to consider the
> total key space in your reduceByKey() operator, and multiply by the average
> key-value pair size. Then you need to consider the RocksDB write / space
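Guozhang's back-of-the-envelope math, with purely hypothetical numbers plugged in:

```java
// Rough state-store sizing per Guozhang's suggestion: the total key
// space in reduceByKey() times the average key-value pair size.
class StateStoreSizing {
    static long estimateBytes(long totalKeys, long avgPairBytes) {
        return totalKeys * avgPairBytes;
    }

    public static void main(String[] args) {
        // e.g. 1M distinct keys at ~200 bytes per key-value pair
        long bytes = estimateBytes(1_000_000L, 200L);
        // roughly 190 MB, before any RocksDB write/space amplification
        System.out.println(bytes / (1024 * 1024) + " MB");
    }
}
```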
On Jun 29, 2016, at 14:15, Matthias J. Sax wrote:
>
> If you use window-operations, windows are kept until their retention
> time expires. Thus, reducing the retention time should decrease the
> memory RocksDB needs to preserve windows.
Thanks Matthias, that makes sense
On Jun 29, 2016, at 11:49, Eno Thereska wrote:
> These are internal files to RocksDB.
Yeah, that makes sense.
However, since Streams is encapsulating/employing RocksDB, in my view it’s
Streams’ responsibility to configure RocksDB well with good defaults and/or at
least
Hello all,
I’ve been working on a Streams app and so far it’s going quite well. I’ve had
the app up and running in a staging environment for a few weeks, processing
real data.
Today I logged into the server to check in on some things, and I found the disk
was full.
I managed (with ncdu) to
> On Jun 10, 2016, at 18:47, Guozhang Wang wrote:
>
> Yes, this is possible
OK, good to know — thanks!
I just checked the code in the Deserializers included with Kafka and I see that
they check for null values and simply pass them through; I guess that’s the
correct
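The pattern I saw in those built-in Deserializers boils down to something like this (a standalone sketch, not Kafka's actual class):

```java
import java.nio.charset.StandardCharsets;

// Sketch of the null-passthrough convention used by Kafka's built-in
// deserializers: a null payload deserializes to null rather than
// throwing, so tombstones / absent values flow through untouched.
class NullSafeStringDeserializer {
    static String deserialize(byte[] data) {
        if (data == null) {
            return null; // pass nulls straight through
        }
        return new String(data, StandardCharsets.UTF_8);
    }
}
```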
> On Jun 10, 2016, at 14:24, Avi Flax <avi.f...@parkassist.com> wrote:
>
> Hi, I’m using Kafka Streams (0.10.0) with JRuby, most of my scripts/nodes are
> working well at this point, except for one which is using reduceByKey.
Whoops, I should probably share my code
Hi, I’m using Kafka Streams (0.10.0) with JRuby, most of my scripts/nodes are
working well at this point, except for one which is using reduceByKey.
This is the first time I’m trying to use the local state store so it’s possible
there’s something misconfigured, I’m not sure. My config is pretty
On 6/3/16, 05:24, "Michael Noll" wrote:
> If you are using the DSL, you can use the `process()` method to "do
> whatever you want".
Ah, perfect — thank you!
> Alternatively, you can also use the low-level Processor API directly.
> Here, you'd implement the `Processor`
On 6/2/16, 22:26, "Christian Posta" wrote:
> Hate to bring up "non-flashy" technology... but Apache Camel would be a
> great fit for something like this. Two java libraries each with very strong
> suits.
Thanks for the tip!
BTW, I love boring technology!
One of my
On 6/2/16, 07:03, "Eno Thereska" wrote:
> Using the low-level streams API you can definitely read or write to arbitrary
> locations inside the process() method.
Ah, good to know — thank you!
> However, back to your original question: even with the low-level streams
>
On 6/1/16, 18:59, "Gwen Shapira" wrote:
> This would be Michael and Guozhang's job to answer
I look forward to hearing from them ;)
> but I'd look at two options if I were you:
>
> 1) If the connector you need exists (
> http://www.confluent.io/product/connectors), then you
On 6/1/16, 11:53, "Gwen Shapira" wrote:
> Currently this is not part of the DSL and needs to be done separately
> through KafkaConnect. Here's an example:
> http://www.confluent.io/blog/hello-world-kafka-connect-kafka-streams
Ah, I see! Thank you! And thanks for the
Hi all,
Is it possible to use the Kafka Streams DSL to build a topology that has a
source and/or sink that is/are not Kafka Topic(s)?
As an example, I would like to consume some events from an API via WebSockets
and use that as a source in a Kafka Streams topology — ideally one defined with
Mike, the schema registry you’re using is a part of Confluent’s platform, which
builds upon but is not affiliated with Kafka itself. You might want to post
your question to Confluent’s mailing list:
https://groups.google.com/d/forum/confluent-platform
HTH,
Avi
On 5/20/16, 21:40, "Mike
On Thu, Feb 18, 2016 at 8:03 PM, Jason Gustafson wrote:
> The consumer is single-threaded, so we only trigger commits in the call to
> poll(). As long as you consume all the records returned from each poll
> call, the committed offset will never get ahead of the consumed
On Thu, Feb 18, 2016 at 4:26 PM, Jay Kreps wrote:
> The default semantics of the new consumer with auto commit are
> at-least-once-delivery. Basically during the poll() call the commit will be
> triggered and will commit the offset for the messages consumed during the
>
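For reference, the consumer settings behind those semantics would look roughly like this (a sketch using the stock config names):

```java
import java.util.Properties;

// Sketch of the consumer configs behind the auto-commit behavior Jay
// describes: commits are triggered from within poll(), roughly every
// auto.commit.interval.ms, for offsets of records already returned.
class AutoCommitConfigExample {
    static Properties consumerProps() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "my-group");
        props.put("enable.auto.commit", "true");       // auto commit on
        props.put("auto.commit.interval.ms", "5000");  // commit cadence
        return props;
    }
}
```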
Hello all, I have a question about Kafka Streams, which I’m evaluating
for a new project. (I know it’s still a work in progress but it might
work anyway for this project.)
I’m new to Kafka in particular and distributed systems in general so
please forgive me if I’m confused about any of these
On Thursday, February 11, 2016 at 19:53, Jason Gustafson wrote:
> We have them in the Confluent docs:
> http://docs.confluent.io/2.0.0/kafka/monitoring.html#new-consumer-metrics.
Thanks Jason, that’s very helpful!
Any chance this could be copied over to the canonical Kafka docs?
On Thursday, December 17, 2015 at 18:08, Guozhang Wang wrote:
> We should add a section for that. Siyuan can you file a JIRA?
Did this ever happen? This documentation would be very helpful.
Hi all!
I’m working on a project that would really benefit from Kafka Streams,
so I’m avidly following its development and eagerly awaiting its
release and I’m considering how to approach my project such that we
could migrate to Kafka Streams as soon as it’s ready.
In looking into this, I had a
On Wed, Feb 10, 2016 at 5:19 PM, Guozhang Wang wrote:
> I have updated the corresponding wiki pages for the release plan of Kafka
> Streams. Hope it helps for you.
Thanks, that is helpful!
I hope this question makes sense, I’m kinda a newbie when it comes to
reasoning about distributed systems.
Let’s say I have a consumer that needs to be able to detect when a
given message was delayed by some period of time (could be due to
network partition or producer errors or whatever). By
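One simple shape for that check, assuming each message carries an event timestamp (a sketch; the threshold is arbitrary):

```java
// Sketch: flag a message as delayed when the gap between its event
// timestamp and the consumer's clock exceeds some threshold.
class DelayDetector {
    static boolean isDelayed(long eventTimestampMs, long nowMs, long thresholdMs) {
        return nowMs - eventTimestampMs > thresholdMs;
    }
}
```

The catch, of course, is whose clock produced the event timestamp; if the producer's clock is skewed this misfires, which is part of what makes the distributed-systems reasoning here tricky.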