I don’t think it would be a big performance hit, because Kafka is very
fast; the speed difference should be negligible. Why are you worried
about stability? I’m just curious, because it doesn’t seem like it would be
unstable, though it might be a bit overkill for one app and some
> ...your table
> represents the number of views per page, so you try to compute the number
> of views per page category.
>
> Just like when you use aggregation in SQL using some aggregation
> function, in order for this to make sense you must map several rows in
> the source into one row in the
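The re-keying idea above can be sketched in plain Scala; the page names, categories, and counts below are made up for illustration, and plain collections stand in for the Kafka Streams tables:

```scala
// Source "table": page -> view count (illustrative data).
val viewsPerPage = Map("home" -> 10L, "pricing" -> 3L, "docs" -> 7L)
// Hypothetical mapping from a page to its category.
val categoryOf = Map("home" -> "marketing", "pricing" -> "marketing", "docs" -> "support")

// Map each row to a new key (its category), then fold all rows sharing
// a key into one output row -- the same shape as SQL GROUP BY.
val viewsPerCategory: Map[String, Long] =
  viewsPerPage.toSeq
    .map { case (page, views) => (categoryOf(page), views) }
    .groupBy { case (category, _) => category }
    .map { case (category, rows) => category -> rows.map(_._2).sum }

println(viewsPerCategory)
```

Several source rows (home, pricing) collapse into the single "marketing" row, which is exactly the many-to-one mapping described above.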
Hope that clears it up.
On Mon, Sep 24, 2018 at 12:03 PM Michael Eugene wrote:
>
> First off thanks for taking the time out of your schedule to respond.
>
> You lost me at almost the beginning, specifically at mapping to a different
> key. If those records come in...
>
KStream[Change[T]], where the change carries both the new and the old value;
over the wire this change is transmitted as two separate Kafka
messages.
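The two-message changelog above also explains what the subtractor is for. Here is an illustrative plain-Scala model (not the actual internal API): the change carries both values, the subtractor retracts the old row's contribution, and the adder applies the new one:

```scala
// Hypothetical stand-in for the internal change record.
case class Change[T](newValue: Option[T], oldValue: Option[T])

// Running aggregate: category -> total views.
var totals = Map.empty[String, Long].withDefaultValue(0L)

def adder(key: String, views: Long): Unit =
  totals = totals.updated(key, totals(key) + views)

def subtractor(key: String, views: Long): Unit =
  totals = totals.updated(key, totals(key) - views)

def applyChange(key: String, change: Change[Long]): Unit = {
  change.oldValue.foreach(v => subtractor(key, v)) // retract the old row
  change.newValue.foreach(v => adder(key, v))      // apply the updated row
}

applyChange("marketing", Change(Some(10L), None))      // row first seen
applyChange("marketing", Change(Some(12L), Some(10L))) // row updated 10 -> 12
println(totals("marketing")) // 12
```

Without the subtractor, the second update would leave the stale 10 in the total; that is why a KGroupedTable aggregation requires one even for a "normal" aggregation.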
On Mon, Sep 24, 2018 at 10:56 AM Michael Eugene wrote:
>
> Can someone explain to me the point of the Subtractor in an aggregator? I
> have to have
Can someone explain to me the point of the Subtractor in an aggregator? I have
to have one, because there is no concrete default implementation of it, but I am
just trying to get a "normal" aggregation working and I don't see why I need a
subtractor. Other than of course I need to make the
compiler can't tell if "scala" references the
top-level package or the intermediate package inside Streams.
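One standard fix for that kind of shadowing is the `_root_` prefix, which forces the compiler to resolve from the top-level package; this is a generic illustration, not the exact imports from the original code:

```scala
// Inside a package that contains its own `scala` sub-package, a bare
// `scala.collection` reference becomes ambiguous; `_root_` pins the
// lookup to the top-level package.
import _root_.scala.collection.immutable.TreeMap

val m = TreeMap(2 -> "b", 1 -> "a")
println(m.keys.toList) // List(1, 2)
```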
Hope this helps!
-John
On Wed, Sep 12, 2018 at 3:02 PM Michael Eugene wrote:
> Hey thanks for the help everyone, I’m gonna use the new scala 2.0
> libraries. I’m getting the cra
> Hi Michael Eugene,
>
> You're correct - you only need to upgrade your Kafka Streams dependencies
> in your build file. Looking at MVN Repository, the streams lib will
> implicitly bring in its dependency on kafka-clients, but you can always
> include your own explicit dependency; you don't need to get
> everyone else to upgrade their clients or your cluster.
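For example, with sbt the upgrade could look like the fragment below; the version number is a placeholder, not a recommendation from the thread:

```scala
// build.sbt -- only the Streams artifact changes; kafka-clients comes
// in transitively at the matching version.
libraryDependencies += "org.apache.kafka" %% "kafka-streams-scala" % "2.0.0"
```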
>
> Regards
> Sean
>
> On Sun, Sep 9, 2018 at 3:59 PM Michael Eugene wrote:
>
> > I’m integrating with lot of other applications and it would be a little
> > cowboy to choose my own ver
ow deprecated
>>> and I remember we ran into some SAM related issues with Scala 2.11 (which
>>> worked fine with 2.12). They were finally fixed in the Kafka distribution -
>>> there are some differences in the APIs as well ..
>>>
>>> regards.
>>>
>> On Sun, Sep 9, 2018 at 11:32 PM Michael Eugene wrote:
>>
>> I’m using 2.11.11
>>
>> Sent from my iPhone
>>
>>> On Sep 9, 2018, at 12:13 PM, Debasish Ghosh
>> wrote:
I’m using 2.11.11
Sent from my iPhone
> On Sep 9, 2018, at 12:13 PM, Debasish Ghosh wrote:
>
> Which version of Scala are you using?
>
>> On Sun, 9 Sep 2018 at 10:44 AM, Michael Eugene wrote:
>>
>> Hi,
>>
>> I am using kafka-streams-scala
>>
k, v, vr don't match those expected by the
> aggregate() function. Add explicit types to your code and you'll find the
> problem. You'll probably find that Scala is inferring an Any somewhere.
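A plain-Scala illustration of that failure mode, with generic code rather than the original: without an annotation the compiler can silently widen to `Any` and complain somewhere else entirely, while an explicit type puts the error on the right line.

```scala
// Without an annotation, mixing element types widens the inference to
// Any and this line still compiles -- the real error surfaces far away,
// e.g. at an overloaded call like aggregate().
val mixed = List(1, "two", 3) // inferred as List[Any]

// Writing the intended type out moves the failure to the right line:
// val ints: List[Int] = List(1, "two", 3)  // would not compile
val ints: List[Int] = List(1, 2, 3)
println(ints.sum) // 6
```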
>
> Ryanne
>
>> On Sun, Sep 9, 2018, 12:14 AM Michael Eugene wrote:
>>
>>
Hi,
I am using kafka-streams-scala
https://github.com/lightbend/kafka-streams-scala, and I am trying to implement
something very simple and I am getting a compilation error by the "aggregate"
method. The error is "Cannot resolve overload method 'aggregate'" and
"Unspecified value parameters:
When you start the application, it should print the config in the logs. Can
> you double check if it did pick up the handler there?
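Registering the handler is plain configuration; here is a minimal sketch where the property key and handler class are the real Kafka Streams names, while the application id and broker address are illustrative:

```scala
import java.util.Properties

val props = new Properties()
props.put("application.id", "my-app")            // illustrative id
props.put("bootstrap.servers", "localhost:9092") // illustrative broker
// Streams logs its full config at startup, so this class name should
// appear in the log output if the handler was picked up.
props.put(
  "default.deserialization.exception.handler",
  "org.apache.kafka.streams.errors.LogAndContinueExceptionHandler"
)
println(props.getProperty("default.deserialization.exception.handler"))
```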
>
> -Matthias
>
>> On 6/24/18 6:42 PM, Michael Eugene wrote:
>> The thing about that is, when I try to register the handler, it doesn’t
>> work.
ble thought.
>
>
> For exactly-once, producer retries are set to MAX_VALUE and thus the
> application would re-try practically forever.
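The exactly-once mode described above is a single config switch; `processing.guarantee` is the real Streams config key, the rest of the snippet is illustrative:

```scala
import java.util.Properties

// With "exactly_once" enabled, Streams sets producer retries to
// Integer.MAX_VALUE internally, so sends are retried practically forever.
val eosProps = new Properties()
eosProps.put("processing.guarantee", "exactly_once")
println(eosProps.getProperty("processing.guarantee")) // exactly_once
```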
>
>
> -Matthias
>
>> On 6/16/18 1:52 PM, Michael Eugene wrote:
>> Hi I am trying to understand when to retry sending messages to topic
Hi, two questions. Does it make sense to have an error topic for “retryable”
errors if the retries parameter is already set to something like 10 (the default)?
For non retryable errors, is the approach to try to send to the topic, catch
the specific exception and send to another topic (just send the whole
Hi I am trying to understand when to retry sending messages to topics and when
to start trying to send to "retry" topics. The scenario is basically
1. A KafkaStreams application is consuming from a topic and sending to a topic.
The "retries" is set at the default of 10.
2a. After 10 retries,
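One common shape for that flow, sketched in plain Scala with a stubbed send; the exception split, record, and simulated error topic are assumptions for illustration, not from the thread:

```scala
import scala.util.{Failure, Success, Try}

// RetriableError stands in for transient broker errors,
// FatalError for non-retryable ones.
final case class RetriableError(msg: String) extends Exception(msg)
final case class FatalError(msg: String) extends Exception(msg)

var errorTopic = Vector.empty[String] // simulated "retry"/error topic

def sendWithRetry(record: String, send: String => Unit, retries: Int = 10): Unit = {
  def attempt(left: Int): Unit = Try(send(record)) match {
    case Success(_) => ()
    case Failure(_: RetriableError) if left > 0 =>
      attempt(left - 1)     // transient failure: retry, up to `retries` times
    case Failure(_) =>
      errorTopic :+= record // retries exhausted or fatal: forward the whole record
  }
  attempt(retries)
}

// A send that always fails retriably: after the retries are used up,
// the record lands on the simulated error topic.
sendWithRetry("page-view-42", _ => throw RetriableError("broker unavailable"))
println(errorTopic) // Vector(page-view-42)
```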