I have just run the WikiFeed example and fed its output to another
topology processor to find the even numbers. While testing with multiple
consumers in three JVMs, the output topic's partitions were revoked and
rebalanced across the three JVMs. I got 10 tasks (the max number of
partitions).
I have three
Hi:
I use Kafka Streams for real-time data analysis, and I have met a problem.
Right now I process each record from Kafka, compute on it, and send the
result to a DB, but the DB's concurrency level is not suitable for me.
So I want:
1) when there is no data in Kafka, the state store holds no results;
2) when there is a lot of data
Hello Nicu,
For your aggregation application, is it windowed or non-windowed? If it
is a windowed aggregation, then you can specify your window specs so that
the underlying RocksDB state store only keeps the most recent windows,
while your Cassandra keeps the full history of all past windows.
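A minimal sketch of what such a window spec could look like in the
Streams DSL of that era; the stream name, types, and durations are
illustrative, not from the original thread:
```java
import java.util.concurrent.TimeUnit;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.TimeWindows;
import org.apache.kafka.streams.kstream.Windowed;

StreamsBuilder builder = new StreamsBuilder();
// Hypothetical input stream of (key, value) events.
KStream<String, Long> events = builder.stream("events");

KTable<Windowed<String>, Long> counts = events
        .groupByKey()
        .windowedBy(TimeWindows.of(TimeUnit.MINUTES.toMillis(5))  // 5-minute windows
                .until(TimeUnit.HOURS.toMillis(1)))               // keep ~1h of windows locally
        .count();
// Windows older than the retention period are dropped from the local
// RocksDB store; the full history would live only in Cassandra.
```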
Hi Arunkumar,
Have you checked yet whether your MBeans are exposed, along with
ZooKeeper's, using JVisualVM? As in the attached screenshot.
Regards, Adrien
From: Arunkumar
Sent: Tuesday, 27 February 2018 23:19:33
To:
Hi Jason,
Ok - thanks. Let me know how you get on.
Cheers,
Damian
On Wed, 28 Feb 2018 at 19:23 Jason Gustafson wrote:
> Hey Damian,
>
> I think we should consider
> https://issues.apache.org/jira/browse/KAFKA-6593
> for the release. I have a patch available, but still
Hey Damian,
I think we should consider https://issues.apache.org/jira/browse/KAFKA-6593
for the release. I have a patch available, but still working on validating
both the bug and the fix.
-Jason
On Wed, Feb 28, 2018 at 9:34 AM, Matthias J. Sax
wrote:
> No. Both will be
We can use the "kafka-consumer-groups.sh --reset-offsets" option to reset
offsets. This is available from Kafka 0.11.0.0 onwards.
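For example, something along these lines (broker address, group, and
topic are placeholders):
```
kafka-consumer-groups.sh --bootstrap-server localhost:9092 \
  --group my-group --topic my-topic \
  --reset-offsets --to-earliest --execute
```
Running with --dry-run instead of --execute previews the new offsets
without committing them.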
On Wed, Feb 28, 2018 at 2:59 PM, UMESH CHAUDHARY
wrote:
> You might want to set group.id config in kafka-console-consumer (or in any
> other consumer) to
No. Both will be released.
-Matthias
On 2/28/18 6:32 AM, Marina Popova wrote:
> Sorry, maybe a stupid question, but:
> I see that Kafka 1.0.1 RC2 is still not released, but now 1.1.0 RC0 is
> coming up...
> Does it mean 1.0.1 will be abandoned and we should be looking forward to
> 1.1.0
Why not just mock out the Kafka client in your tests and have it call a
function which yields a kafka message every call?
```
# Generator standing in for the real Kafka client; KafkaMessage and
# foo are the poster's own names, and mocker is pytest-mock's fixture.
def consumer():
    for _ in range(99):
        yield KafkaMessage('key', 'value')

mock_consumer = mocker.patch.object(foo, 'consumer', consumer())
```
Is there any
I think I've seen mention of a MockConsumer class, but the
kafka-python package I have available in my Conda setup doesn't seem
to have anything like that. Is this a Java-only thing or is there a
Python MockConsumer class I just haven't yet encountered in the wild?
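For what it's worth, MockConsumer is a class in the Java clients
(org.apache.kafka.clients.consumer.MockConsumer); I am not aware of an
equivalent in kafka-python. A minimal Java sketch, with a hypothetical
topic name and record values:
```java
import java.util.Collections;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.MockConsumer;
import org.apache.kafka.clients.consumer.OffsetResetStrategy;
import org.apache.kafka.common.TopicPartition;

MockConsumer<String, String> consumer =
        new MockConsumer<>(OffsetResetStrategy.EARLIEST);
TopicPartition tp = new TopicPartition("test-topic", 0);
consumer.assign(Collections.singletonList(tp));
consumer.updateBeginningOffsets(Collections.singletonMap(tp, 0L));
// Hand-feed a record; a subsequent poll() returns it with no broker involved.
consumer.addRecord(new ConsumerRecord<>("test-topic", 0, 0L, "key", "value"));
```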
I see this:
Sorry, maybe a stupid question, but:
I see that Kafka 1.0.1 RC2 is still not released, but now 1.1.0 RC0 is coming
up...
Does it mean 1.0.1 will be abandoned and we should be looking forward to 1.1.0
instead?
thanks!
Hi,
Currently we have an aggregation system (without Kafka) where events are
aggregated into Cassandra tables holding the aggregate results.
We are considering moving to a Kafka Streams solution with exactly-once
processing, but in this case it seems that all the aggregation tables
(reaching TBs) need
Hi Nicu,
I think the best you can do is start with an empty dir; that will give
you a non-layered filesystem. But you need to take into consideration
that you will re-stream the data every time the container restarts, so a
rolling update can be slow.
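For reference, the directory Streams writes its RocksDB state to is
configurable, so it can point at a mounted volume rather than the
container's layered filesystem; a sketch (the path is illustrative):
```java
import java.util.Properties;
import org.apache.kafka.streams.StreamsConfig;

Properties props = new Properties();
// Point local state at a persistent volume mount so a restart does not
// have to restore everything from the changelog topics.
props.put(StreamsConfig.STATE_DIR_CONFIG, "/mnt/streams-state");
```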
My suggestion is to check with you
You might want to set the group.id config in kafka-console-consumer (or
in any other consumer) to a value which you haven't used before. This
will replay all available messages in the topic from the start if you
use --from-beginning in the console consumer.
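For example (broker, topic, and group name are placeholders):
```
kafka-console-consumer.sh --bootstrap-server localhost:9092 \
  --topic my-topic --from-beginning \
  --consumer-property group.id=fresh-group-1
```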
On Wed, 28 Feb 2018 at 14:19 Zoran
Hi,
If I have a topic that has been fully read by consumers, how can I set
the offset from the shell to some previous value in order to reread
several messages?
Regards.
Hi,
When using Kafka Streams with stateful transformations (RocksDB) on
Docker and Kubernetes, what are the concerns for storage? I know that
writing to disk inside the Docker container is less performant than
mounting a direct volume. However, in our setup a separate team is
handling,