This looks more like a Spark issue than a Kafka one, judging by the stack
trace. Are you using Spark Structured Streaming with Kafka integration, by
chance?
On Mon, Apr 9, 2018 at 8:47 AM, M Singh
wrote:
> Hi Folks:
> Just wanted to see if anyone has any suggestions on this issue.
> Thanks
>
>
Show some code, Rahul.
On Mon, Jan 21, 2019 at 11:02 AM Rahul Singh <
rahul.si...@smartsensesolutions.com> wrote:
> I am using node-kafka. I have used consumer.commit to commit offsets, but
> I don't know why, when I restart the consumer, it consumes the committed
> offsets again.
>
> Thanks
>
> On Mon, Jan 2
I would go with Kafka Streams. Kafka Streams is just a library, so you can
build a simple or a complex application. You can include a rules engine or
a machine-learning model as a dependency, alongside the Kafka dependency, and
route each message to a different topic based on its content. Run a number of your
app
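The routing idea above can be sketched without the Kafka Streams dependency.
In a real topology you would embed a decision like this in the stream (for
example via `KStream#to` with a `TopicNameExtractor`); the rules and topic
names here are purely hypothetical stand-ins for a rules engine or model:

```java
// Minimal sketch: a rules-engine-style decision that maps each message
// to a destination topic. Rules and topic names are hypothetical.
public class TopicRouter {

    static String chooseTopic(String message) {
        // Rule 1: suspicious content goes to an alerts topic.
        if (message.contains("fraud")) return "alerts";
        // Rule 2: oversized payloads are diverted to their own topic.
        if (message.length() > 100) return "large-messages";
        // Everything else takes the default path.
        return "default-topic";
    }

    public static void main(String[] args) {
        System.out.println(chooseTopic("possible fraud detected"));
        System.out.println(chooseTopic("hello"));
    }
}
```

In Kafka Streams the same function would be the body of the lambda you hand
to the topology, so the library stays a thin wrapper around your own logic.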
Hope this helps. I tried copying your code into a sample application and got
it to compile with all the implicits resolving. I think the trick was that
there were two implementations for windowed Serdes; you just need to block one
of them from the imports. See if that fits with what you are doing. Oh also, I
m John Roesler, wrote:
> >
> > > Oh, nice. Thanks, Daniel!
> > >
> > > That’s much nicer than my ham-handed approach.
> > >
> > > Thanks,
> > > John
> > >
> > > On Thu, Nov 19, 2020, at 17:44, Daniel Hinojosa wrote:
>
FYI, my goal was to get it to compile, since that is where
implicit resolution takes place. Obviously, you would need to plug in the
properties, start the stream, add a shutdown hook, etc.
On Fri, Nov 20, 2020 at 12:56 AM Daniel Hinojosa <
dhinoj...@evolutionnext.com> wrote:
> Done. I
import org.apache.kafka.streams.scala.ImplicitConversions._
import org.apache.kafka.streams.scala.Serdes.{timeWindowedSerde => _, _}
Check the code in the repository; I just did a push. Let me know what you
think.
Danno!
On Fri, Nov 20, 2020 at 12:58 AM Daniel Hinojosa <
dhinoj...@evolutionnext.com> wrote:
> FYI,
Yes, murmur2(key) % number of partitions.
On Fri, May 7, 2021 at 7:24 PM sunil chaudhari
wrote:
> Hi Neeraj,
> I don't think there is a relation between the key and the partition in that sense.
>
>
>
> On Sat, 8 May 2021 at 3:16 AM, Neeraj Vaidya
> wrote:
>
> > Hi all,
> > I think I kind of know the answe
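The formula above, murmur2(key) % number of partitions, can be sketched as
follows. The murmur2 body below paraphrases the hash used by Kafka's default
partitioner (`org.apache.kafka.common.utils.Utils#murmur2`); treat it as an
illustration rather than the canonical implementation:

```java
import java.nio.charset.StandardCharsets;

// Sketch of Kafka's default partitioning:
//   partition = toPositive(murmur2(keyBytes)) % numPartitions
public class PartitionSketch {

    // 32-bit murmur2 with Kafka's seed, paraphrased from
    // org.apache.kafka.common.utils.Utils#murmur2.
    static int murmur2(byte[] data) {
        int length = data.length;
        int seed = 0x9747b28c;
        final int m = 0x5bd1e995;
        final int r = 24;
        int h = seed ^ length;
        int length4 = length / 4;
        for (int i = 0; i < length4; i++) {
            final int i4 = i * 4;
            int k = (data[i4] & 0xff) + ((data[i4 + 1] & 0xff) << 8)
                  + ((data[i4 + 2] & 0xff) << 16) + ((data[i4 + 3] & 0xff) << 24);
            k *= m;
            k ^= k >>> r;
            k *= m;
            h *= m;
            h ^= k;
        }
        // Handle the remaining 1-3 bytes (intentional fall-through).
        switch (length % 4) {
            case 3: h ^= (data[(length & ~3) + 2] & 0xff) << 16;
            case 2: h ^= (data[(length & ~3) + 1] & 0xff) << 8;
            case 1: h ^= data[length & ~3] & 0xff;
                    h *= m;
        }
        h ^= h >>> 13;
        h *= m;
        h ^= h >>> 15;
        return h;
    }

    static int partitionFor(String key, int numPartitions) {
        // Mask off the sign bit so the modulo result is non-negative.
        return (murmur2(key.getBytes(StandardCharsets.UTF_8)) & 0x7fffffff) % numPartitions;
    }

    public static void main(String[] args) {
        System.out.println(partitionFor("user-42", 3));
    }
}
```

The key point for the thread: the same key always hashes to the same
partition (for a fixed partition count), which is exactly the relation the
question was about.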
The keys are already part of the stream. When you run builder.stream or
builder.table, it returns a KStream or a KTable. From there, every
operation has a lambda that accepts both key and value. You can use map, for
example, to accept the key and do something with it. Let me know if you
have any other
Ah, nevermind, just saw the title that this is KSQLDB.
On Tue, Aug 10, 2021 at 12:09 PM Daniel Hinojosa <
dhinoj...@evolutionnext.com> wrote:
> The keys are already part of the stream. When you run builder.stream or
> builder.table it returns a Stream or a Table. From there every
>
189976158142 |FEMALE |
On Tue, Aug 10, 2021 at 12:11 PM Daniel Hinojosa <
dhinoj...@evolutionnext.com> wrote:
> Ah, nevermind, just saw the title that this is KSQLDB.
>
> On Tue, Aug 10, 2021 at 12:09 PM Daniel Hinojosa <
> dhinoj...@evolutionnext.com> wrote:
>
>
I can only assume that no one has come forward to translate. In most
browsers, you can translate the page. In Chrome, right-click on the
page, select "Translate to", and change it to whatever language you need. It
may not be perfect, but it would get you going. Here is a snapshot (not sure
i
As a stop-gap you can create something like the following.
import java.lang.reflect.Field;

private static boolean isClosed(KafkaConsumer<?, ?> consumer) {
    try {
        var consumerClass = consumer.getClass();
        Field field = consumerClass.getDeclaredField("closed");
        field.setAccessible(true);
        return (Boolean) field.get(consumer);
    } catch (NoSuchFieldException | IllegalAccessException e) {
        throw new RuntimeException(e);
    }
}
Warning, though. `closed` is volatile. That shouldn't be too much of a
problem, but if something is odd, make a mental bookmark. Consumer and
KafkaConsumer should have a permanent isClosed() method. It sure would be
handy.
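The reflection trick above can be exercised without a broker by peeking at a
private flag on a toy class. FakeConsumer here is a hypothetical stand-in for
KafkaConsumer, which keeps its `closed` flag private:

```java
import java.lang.reflect.Field;

// Demonstrates reading a private (volatile) flag via reflection.
// FakeConsumer is a hypothetical stand-in for KafkaConsumer.
public class ReflectionPeek {

    static class FakeConsumer {
        private volatile boolean closed = false;
        void close() { closed = true; }
    }

    static boolean isClosed(FakeConsumer consumer) {
        try {
            Field field = consumer.getClass().getDeclaredField("closed");
            field.setAccessible(true);   // bypass private access for the read
            return (Boolean) field.get(consumer);
        } catch (NoSuchFieldException | IllegalAccessException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        FakeConsumer c = new FakeConsumer();
        System.out.println(isClosed(c));
        c.close();
        System.out.println(isClosed(c));
    }
}
```

As the warning notes, reading a volatile field reflectively still gives you a
plain snapshot of its current value; the fragility is that the field name is
an internal detail and could change between Kafka versions.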
I'll put it as an issue.
On Mon, May 29, 2023 at 8:47
Hey all,
Question. I have three brokers. I also have 3 consumers, each on its own
thread, consuming 3 partitions of the same topic "scaled-states". Here are
the configs when I run:
kafka-topics.sh --describe --topic 'scaled-cities' --zookeeper zoo2:2181
Topic:scaled-cities PartitionCount:3 Replicatio
Kafka uses the Linux OS page cache and flushing techniques, as well as
in-sync replicas, to bring down latency, since storing to the hard drive is
likely the slowest part. It would be defeating the point to install it on
Windows, just as everyone here mentioned. ;)
Just wanted to