Hi,
I have two streams, and I want to enrich stream2 records based on stream1
records.
I cannot join those two streams directly because there is no common key
between them.
Hence, the only way I can do that is by using a timestamp field.
This is how I have built my pipeline.
//create and
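As a rough plain-Python sketch of the idea (not the Kafka Streams API; all field names and the skew threshold are invented for illustration), enriching one stream from another by closest timestamp could look like:

```python
import bisect

def enrich_by_timestamp(stream1, stream2, max_skew_ms=500):
    """Enrich each stream2 record with the stream1 record whose
    timestamp is closest, within max_skew_ms. Records are dicts
    with a 'ts' field (epoch millis)."""
    # Sort stream1 once so we can binary-search by timestamp.
    s1 = sorted(stream1, key=lambda r: r["ts"])
    ts_index = [r["ts"] for r in s1]
    enriched = []
    for rec in stream2:
        i = bisect.bisect_left(ts_index, rec["ts"])
        # Candidates: the stream1 records just before and after rec's timestamp.
        candidates = [s1[j] for j in (i - 1, i) if 0 <= j < len(s1)]
        best = min(candidates, key=lambda r: abs(r["ts"] - rec["ts"]), default=None)
        if best is not None and abs(best["ts"] - rec["ts"]) <= max_skew_ms:
            enriched.append({**rec, "enrichment": best})
        else:
            enriched.append(rec)  # no stream1 record close enough in time
    return enriched
```

In Kafka Streams terms this corresponds to a windowed correlation on event time rather than a key-based join.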
Hello Kafka Community,
Kindly help me on this.
Thanks and regards,
Naveen
On Wed, Feb 19, 2020, 1:41 PM Naveen Kumar M
wrote:
> Hello Team,
>
> Kindly help me to understand the hardware assessment details to set up Kafka
> as a messaging broker to decouple applications/systems.
>
> Thanks and
Hi Sunil,
Producers emit metrics via JMX that will help you. Assuming that your
producers are using a round-robin partition assignment strategy, you could
divide this metric by your number of partitions:
kafka.producer:type=producer-topic-metrics,client-id=(.+),topic=(.+) (attribute: record-send-rate)
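As a quick illustration of that division (the send rate and partition count below are made-up example numbers, not from any real deployment):

```python
# Hypothetical values: per-topic record-send-rate read from the producer's
# JMX metrics, and the topic's partition count.
record_send_rate = 1200.0   # records/s for the whole topic
num_partitions = 12

# With round-robin assignment, the load spreads evenly, so the
# per-partition rate is simply the total rate over the partition count.
per_partition_rate = record_send_rate / num_partitions
print(per_partition_rate)  # 100.0 records/s per partition
```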
Kind
Hi
I was referring to the article by Jun Rao about partitions in a Kafka
cluster.
https://www.confluent.io/blog/how-choose-number-topics-partitions-kafka-cluster/
"A rough formula for picking the number of partitions is based on throughput.
You measure the throughput that you can achieve
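The rough formula from that article (at least max(t/p, t/c) partitions, where t is the target throughput, p the measured per-partition producer throughput, and c the per-partition consumer throughput) can be sketched as follows; the example numbers are made up:

```python
import math

def rough_partition_count(target, producer_per_partition, consumer_per_partition):
    """Rough formula from the Confluent blog post: you need at least
    max(t/p, t/c) partitions, rounded up to whole partitions."""
    return max(math.ceil(target / producer_per_partition),
               math.ceil(target / consumer_per_partition))

# Example: target 100 MB/s, producers achieve 10 MB/s per partition,
# consumers achieve 20 MB/s per partition.
print(rough_partition_count(100, 10, 20))  # 10
```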
Not ideal...
but that way we can use a single combined development/testing cluster for
development and 90% of the flows, reuse the scripts that deploy the
environment, test the replication configuration, etc., and build some
experience in how to manage it.
G
On Thu, Feb 20, 2020 at 6:32 AM Peter
That is possible as long as you include a topic.rename.format argument in the
replication.properties file. The origin and destination cluster configs can
point to the same cluster.
See the example here
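A minimal sketch of what that could look like in replication.properties (property names as in Confluent Replicator; the bootstrap address and the .replica suffix here are just example values):

```properties
# Origin and destination both point at the same cluster for this test.
src.kafka.bootstrap.servers=localhost:9092
dest.kafka.bootstrap.servers=localhost:9092

# Which topics to replicate.
topic.whitelist=topicA

# Rename on the destination so the copy lands in a new topic
# instead of writing back into the source topic.
topic.rename.format=${topic}.replica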
Hi all.
is it possible, for testing purposes, to replicate topic A from cluster 1 to
topic B on cluster 1, i.e. the same cluster?
G
--
You have the obligation to inform one honestly of the risk, and as a person
you are committed to educate yourself to the total risk in any activity!
Once informed &
That was it, thanks!
-
Maurício Linhares
http://mauricio.github.io/ - http://twitter.com/#!/mauriciojr
On Wed, Feb 19, 2020 at 7:55 PM Brian Sang wrote:
>
> There's lots of open source tooling for this! Some examples, but there's
> plenty more:
>
> https://github.com/Yelp/kafka-utils (more easy
There's lots of open source tooling for this! Some examples, but there's
plenty more:
https://github.com/Yelp/kafka-utils (easy-to-use scripts)
https://github.com/linkedin/cruise-control (automated system)
https://github.com/DataDog/kafka-kit
On Wed, Feb 19, 2020 at 4:52 PM Maurício
Imagine I have a Kafka node that holds a lot of topics. Its disk is
degraded, so I have to start moving topics out of it to the other nodes
in the cluster. Kafka Manager allows me to do that, but the process is
cumbersome and requires a lot of clicks.
The simplest solution to prevent most clicking
Hi John,
Thank you for your reply.
Let me clarify.
I used the word aggregate, but we are not using aggregate functions. Our
case is a whole-part relationship between messageA and message1, 2, n. Like
an order and its order items.
So, translating our case: messageA is the order, and message1 and 2 are
Hi Renato,
Can you describe a little more about the nature of the join+aggregation
logic? It sounds a little like the KTable represents the result of aggregating
messages from the KStream?
If that's the case, the operation you probably wanted was like:
> KStream.groupBy().aggregate()
which
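The shape of that groupBy-then-aggregate suggestion, sketched in plain Python rather than the Kafka Streams API (the order/item data is invented for illustration):

```python
from collections import defaultdict

# Simulate KStream.groupBy().aggregate(): group order-item events by
# order id, then fold them one record at a time into a per-key aggregate,
# the way KGroupedStream.aggregate() maintains per-key state.
events = [
    {"order_id": "A", "item": "book"},
    {"order_id": "A", "item": "pen"},
    {"order_id": "B", "item": "mug"},
]

aggregates = defaultdict(list)   # per-key aggregate state (the "KTable")
for event in events:             # records arrive one at a time, like a stream
    aggregates[event["order_id"]].append(event["item"])

print(dict(aggregates))  # {'A': ['book', 'pen'], 'B': ['mug']}
```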
Hi Kafka Community,
Please take a look into my use case:
First, message1:
1. We have a KStream joined to a KTable(Compact Topic).
2. We receive a message1 from the KStream and aggregate the message1 into
the joined messageA from the KTable.
3. We push the messageA, with the aggregated message1, back into
You can either use the built-in kafka-reassign-partitions.sh script (
https://cwiki.apache.org/confluence/display/KAFKA/Replication+tools#Replicationtools-4.ReassignPartitionsTool
)
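For reference, the reassignment JSON that the script consumes looks roughly like this (topic name and broker ids below are made up):

```json
{
  "version": 1,
  "partitions": [
    {"topic": "my-topic", "partition": 0, "replicas": [2, 3]},
    {"topic": "my-topic", "partition": 1, "replicas": [3, 4]}
  ]
}
```

You would pass a file like this to kafka-reassign-partitions.sh via --reassignment-json-file with --execute, after generating a candidate plan with --generate.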
Or, as others in industry do, use tooling such as
https://github.com/Yelp/kafka-utils (easy-to-use scripts) or
How did you reassign partitions? If the reassignment JSON used while
reassigning mentioned broker 0, the topic could be in this state. Could you
share the output of describing the topic from the console?
On Wed, Feb 19, 2020 at 5:11 AM Bhat, Avinash
wrote:
> Hi Ivan,
>
> This is probably by design,
Hello Team,
Kindly help me to understand the hardware assessment details to set up Kafka
as a messaging broker to decouple applications/systems.
Thanks and regards,
Naveen
The physical memory you need depends on the type of workload you are
running and the particular setup for this workload, e.g. number of
partitions, and production and consumption patterns.
Assuming you have 90 MB/s of *production* throughput, a replication
factor of 3 across topics, even
Hi George,
The 90 MB/s, how did you calculate that?
We are expecting 360 messages (events) per second across all topics we
have. One event size = 256 KB.
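Assuming that the event size is 256 kilobytes, those two numbers do work out to the 90 MB/s figure:

```python
events_per_sec = 360
event_size_kb = 256            # assuming KB means kilobytes per event

# Total produce throughput across all topics.
throughput_kb_s = events_per_sec * event_size_kb   # 92160 KB/s
throughput_mb_s = throughput_kb_s / 1024

print(throughput_mb_s)  # 90.0 MB/s
```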
With regards,
Gowtham S, MCA
On Wed, 19 Feb 2020 at 11:01, George wrote:
> Hi there
>
> with regard to "> Can you please suggest me how to