Thanks Sharninder!
Adding the dev group to ask whether anyone has done benchmarking of a single
consumer group vs. multiple consumer groups on the same topic.
Cheers,
Senthil
On Jan 24, 2017 10:48 PM, "Sharninder Khera" wrote:
I don't have benchmarks but multiple consumer groups are
Hi ,
Has anybody tested special characters inside Kafka?
I am a little worried about serialization and deserialization of special
characters.
Regards,
Laxmi Narayan Patel
MCA NIT Durgapur (2011-2014)
Mob: 9741292048, 8345847473
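Kafka itself treats keys and values as opaque bytes, so special characters are only at risk at the (de)serialization boundary. The commonly used StringSerializer/StringDeserializer encode via UTF-8; a minimal stdlib sketch of that round trip (class and method names are mine, not Kafka's):

```java
import java.nio.charset.StandardCharsets;

public class SpecialCharRoundTrip {
    // Mirrors what Kafka's StringSerializer/StringDeserializer do:
    // encode to UTF-8 bytes on the wire, decode back on consumption.
    static byte[] serialize(String value) {
        return value.getBytes(StandardCharsets.UTF_8);
    }

    static String deserialize(byte[] bytes) {
        return new String(bytes, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        String payload = "héllo → 世界 \"quotes\" \\backslash";
        String roundTripped = deserialize(serialize(payload));
        System.out.println(payload.equals(roundTripped)); // true
    }
}
```

As long as the producer and consumer agree on the charset, any character survives; problems usually come from mixing encodings, not from Kafka.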
Thank you very much, both suggestions are wonderful, and I will try them.
Have a great day!
Kind regards,
Nick
On 24 January 2017 at 19:46, Matthias J. Sax wrote:
> If your data is already partitioned by key, you can save writing to a
> topic by doing a dummy reduce
Hi All,
I am running a Kafka Streams application with a simple pipeline:
source topic -> group -> aggregate by key -> for each -> save to a sink.
The source topic gets messages at a rate of 5000 - 1 messages per second.
During peak load we see the lag reaching 3 million messages.
So I
Hi everyone,
We would like to invite you to a Stream Processing Meetup at LinkedIn’s
Sunnyvale campus on Thursday, February 16 at 6pm.
Please RSVP here (*only if you intend to attend in person*):
https://www.meetup.com/Stream-Processing-Meetup-LinkedIn/events/237171557/
Could someone please let me know what is going wrong in my Kafka cluster? I
would greatly appreciate a response.
Thanks,
Sri
On Mon, Jan 23, 2017 at 1:47 PM, Srikrishna Alla
wrote:
> Hi,
>
> I am running a Kafka Sink Connector with Kafka 0.9.0.2. I am seeing that
> my
Sorry, wrong link: http://docs.confluent.io/2.0.1/kafka/deployment.html
On 1/24/17, 2:13 PM, "David Garcia" wrote:
This should give you an idea:
https://www.confluent.io/blog/design-and-deployment-considerations-for-deploying-apache-kafka-on-aws/
On 1/23/17,
This should give you an idea:
https://www.confluent.io/blog/design-and-deployment-considerations-for-deploying-apache-kafka-on-aws/
On 1/23/17, 10:25 PM, "Ewen Cheslack-Postava" wrote:
Smaller servers/instances work fine for tests, as long as the workload is
scaled
> On Jan 24, 2017, at 14:17, Jon Yeargers wrote:
>
> It may be picking a random partition but it sticks with it indefinitely
> despite there being a significant disparity in traffic.
Ah, I forgot to mention that IIRC the default Partitioner impl doesn’t choose a
(cont'd) I meant to say System.currentTimeMillis() mod the partition count.
Having said that - is there any disadvantage to true random distribution of
traffic for a topic?
On Tue, Jan 24, 2017 at 11:17 AM, Jon Yeargers
wrote:
> It may be picking a random partition but
It may be picking a random partition but it sticks with it indefinitely
despite there being a significant disparity in traffic. I need to break it
up in some different fashion. Maybe just a hash of
System.currentTimeMillis()?
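Picking a partition from the send timestamp, as suggested, can be sketched as a plain function (names are hypothetical, not a Kafka API); note that every send within the same millisecond still lands on the same partition, so this is coarser than true random assignment:

```java
public class TimeBasedPartitionPicker {
    // Map a millisecond timestamp onto a partition index by taking it
    // modulo the partition count. All sends within the same millisecond
    // map to the same partition, so the spread is only as fine as the clock.
    static int pickPartition(long timestampMillis, int numPartitions) {
        return (int) (Math.abs(timestampMillis) % numPartitions);
    }

    public static void main(String[] args) {
        // With 8 partitions, successive milliseconds cycle through 0..7.
        System.out.println(pickPartition(System.currentTimeMillis(), 8));
    }
}
```

Hashing the timestamp instead of using it raw would break up the strict cycling but keeps the same same-millisecond clustering.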
On Tue, Jan 24, 2017 at 10:52 AM, Avi Flax
> On Jan 24, 2017, at 11:18, Jon Yeargers wrote:
>
> If I don't specify a key when I call send a value to kafka (something akin
> to 'kafkaProducer.send(new ProducerRecord<>(TOPIC_PRODUCE, jsonView))') how
> is it keyed?
IIRC, in this case the key is null; i.e. there
If your data is already partitioned by key, you can save writing to a
topic by doing a dummy reduce instead:
stream
    .groupByKey()
    .reduce(new Reducer<V>() {
        @Override
        public V apply(V value1, V value2) {
            return value2; // always keep the newer value
        }
    }, "yourStoreName");
(replace V with your actual value type)
-Matthias
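For intuition, the dummy reduce materializes a latest-value-per-key view; its semantics can be sketched with a plain map (stdlib only, no Streams API — names are mine):

```java
import java.util.HashMap;
import java.util.Map;

public class LatestValueStore {
    // Equivalent semantics to the dummy reduce: for each key,
    // the newer value simply replaces the older one, i.e. (v1, v2) -> v2.
    static <K, V> Map<K, V> reduceToLatest(Map<K, V> store, K key, V value) {
        store.put(key, value);
        return store;
    }

    public static void main(String[] args) {
        Map<String, String> store = new HashMap<>();
        reduceToLatest(store, "user-1", "click");
        reduceToLatest(store, "user-1", "purchase");
        System.out.println(store.get("user-1")); // purchase
    }
}
```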
No, I don’t think we have any orphaned process. Could you please elaborate a
bit on what you are trying to explain and what the solution would be?
Thanks
Achintya
-Original Message-
From: Jon Yeargers [mailto:jon.yearg...@cedexis.com]
Sent: Tuesday, January 24, 2017 11:07 AM
To:
Hi, Mark,
Thanks for pointing this out. This issue is fixed in 0.10.0.0 in
https://issues.apache.org/jira/browse/KAFKA-725.
In 0.9.0, what's going to happen is the consumer will get an unknown error.
Normally, the consumer will only reset the offset if it gets an
OffsetOutOfRangeException. If it
I don't have benchmarks, but multiple consumer groups are possible. For Kafka,
the performance should be similar or close to having multiple consumers
using a single group.
From: Senthil Kumar
Sent: Tuesday, January 24,
Hi Team, sorry if the same question was asked already in this group!
Say we have a topic => ad_events .. I want to read events from the ad_events
topic and send them to two different systems... This can be achieved by
creating two Consumer Groups..
Example : Consumer Group SYS1 with 10 threads
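For what it's worth, two independent consumer groups are just two consumers configured with different `group.id` values; each group then receives the full ad_events stream. A configuration sketch using only stdlib `Properties` (broker address and group names are placeholders):

```java
import java.util.Properties;

public class ConsumerGroupConfigs {
    // Consumers subscribed to the same topic fan out independently
    // when they use different group.id values: each group gets every message.
    static Properties consumerConfig(String groupId) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder
        props.put("group.id", groupId);
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        return props;
    }

    public static void main(String[] args) {
        // SYS1 and SYS2 would each receive the full ad_events stream.
        Properties sys1 = consumerConfig("SYS1");
        Properties sys2 = consumerConfig("SYS2");
        System.out.println(sys1.getProperty("group.id") + " / "
                + sys2.getProperty("group.id")); // SYS1 / SYS2
    }
}
```

Within one group, threads/instances split the partitions among themselves; across groups, everything is duplicated.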
Hi Nick,
I guess there is some reason why you can't just build it as a table to
begin with?
There isn't a convenient method for doing this right now, but you could do
something like:
stream.to("some-other-topic");
builder.table("some-other-topic");
Thanks,
Damian
On Tue, 24 Jan 2017 at 16:32
Hello,
How can I simply convert a Kafka Stream into a table? I have a Kafka Stream, and I want to
create a table from it, backed by a state store. The key of the stream would
be the same as the table's.
I've tried following the examples, but it seems they all use `groupBy` or
`count` to convert `KStream`s into
If I don't specify a key when I call send a value to kafka (something akin
to 'kafkaProducer.send(new ProducerRecord<>(TOPIC_PRODUCE, jsonView))') how
is it keyed?
I am producing to a topic from an external feed. It appears to be heavily
biased towards certain values and as a result I have 2-3
Make sure you don't have an orphaned process holding onto the various
kafka/zk folders. If it won't respond and you can't kill it then this might
have happened.
On Tue, Jan 24, 2017 at 6:46 AM, Ghosh, Achintya (Contractor) <
achintya_gh...@comcast.com> wrote:
> Can anyone please answer this?
>
>
Hi Kafka-users,
We've set up a Kafka (0.10.0.0) topic to feed Elasticsearch so that we can
view the data on a Kibana dashboard, but the problem I'm running into is that
the topic apparently gets flooded with loads and loads of data and we've got
limited disk space on the server. So in
Can anyone please answer this?
Thanks
Achintya
-Original Message-
From: Ghosh, Achintya (Contractor) [mailto:achintya_gh...@comcast.com]
Sent: Monday, January 23, 2017 1:51 PM
To: users@kafka.apache.org
Subject: RE: Messages are lost
Version 0.10 and I don’t have the thread dump but
Hi all,
I am working on a Go program that needs to create topics and consume
messages from kafka. Currently, I am creating a topic by setting up the
appropriate nodes in zookeeper.
Once the changes are committed to zookeeper, how long does it take for
kafka to see the topic? I am noticing in
Hi *,
we are seeing unexpected log-cleaning behavior. We tried to configure log
cleaning so that all rolled log segments are deleted.
Expectation:
- One active log present
- all rolled logs will be deleted
Configuration of server.properties
log.retention.bytes=4
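For reference, a size-based deletion setup usually combines these broker settings (a sketch; the values below are illustrative, not recommendations):

```properties
log.cleanup.policy=delete
# roll a new segment once the active one reaches this size
log.segment.bytes=1073741824
# per-partition cap; the oldest rolled segments are deleted beyond it
log.retention.bytes=4294967296
# how often the cleaner checks for deletable segments
log.retention.check.interval.ms=300000
```

Note that the active segment is never deleted; only rolled segments are eligible, so retention is enforced at segment granularity.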
Hi, all:
Sorry for describing the problem in my poor English; I hope you can understand:
I have a 4-node Kafka cluster (kafka_2.1.1_0.9.0.0), one topic with 12
partitions. I then created 3 consumers on 3 servers to consume this topic.
However, messages in several partitions might not be consumed (