wondering whether Apache Kafka is actually suited for my scenario? Since we
will be using an Internet connection, should I be worried about
network-related problems such as performance and latency?
Thank you
Ali
that the issue is related to the
client upgrade. Previously, Kafka and the Storm supervisor were co-located
on the same host.
Regards,
Ali
Sometimes I see warnings in my logs if I create a consumer for a topic
which doesn't exist, such as:
org.apache.kafka.clients.NetworkClient - Error while fetching metadata
with correlation id 1 : {example_topic=LEADER_NOT_AVAILABLE}
If messages are later posted to that topic (which will create it, since it
does-not-exist
On 7 Jul 2017 9:46 pm, "Ali Akhtar" wrote:
> Sometimes I see warnings in my logs if i create a consumer for a topic
> which doesn't exist. Such as:
>
> org.apache.kafka.clients.NetworkClient - Error while fetching metadata
> with correlation id 1
Oh gotcha, thanks. So a topic will be created if topic creation is enabled.
On Sat, Jul 8, 2017 at 8:14 PM, M. Manna wrote:
> Please check my previous email.
>
> On Sat, 8 Jul 2017 at 2:32 am, Ali Akhtar wrote:
>
> > What happens if auto creation is enabled but the t
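(For reference, the broker-side settings involved; a server.properties
sketch, where the partition and replication values shown are the defaults:)

    auto.create.topics.enable=true   # default: true
    num.partitions=1                 # partitions given to auto-created topics
    default.replication.factor=1     # replication factor for auto-created topics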
Not too familiar with that error, but I do have Kafka working on
Kubernetes. I'll share my files here in case that helps:
Zookeeper:
https://gist.github.com/aliakhtar/812974c35cf2658022fca55cc83f4b1d
Kafka: https://gist.github.com/aliakhtar/724fbee6910dec7263ab70332386af33
Essentially I have 3 k
How do you know that the brokers don't talk to each other?
On Thu, Sep 14, 2017 at 4:32 PM, Yongtao You
wrote:
> Hi,
> I would like to know the right way to setup a Kafka cluster with Nginx in
> front of it as a reverse proxy. Let's say I have 2 Kafka brokers running on
> 2 different hosts; and
L. But I could be wrong.
>
>
> Thanks!
>
> -Yongtao
>
>
> On Thursday, September 14, 2017, 8:07:38 PM GMT+8, Ali Akhtar <
> ali.rac...@gmail.com> wrote:
>
>
> How do you know that the brokers don't talk to each other?
>
> On Thu, Sep 14, 2017 at 4:
r listens on. It's an [info] message so I'm not sure how
> serious it is, but I don't see messages sent from filebeat in Kafka. :(
>
> Thanks!
> -Yongtao
>
> On Thursday, September 14, 2017, 8:31:31 PM GMT+8, Ali Akhtar <
> ali.rac...@gmail.com> wrote:
parties = ports *
On Thu, Sep 14, 2017 at 8:04 PM, Ali Akhtar wrote:
> I would try to put the SSL on different ports than what you're sending
> kafka to. Make sure the kafka ports don't do anything except communicate in
> plaintext, put all 3rd parties on different parties.
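Concretely, something like this in server.properties: one plaintext listener
for internal traffic and a separate SSL listener for third parties (the host
names below are made up; adjust to your setup):

    listeners=PLAINTEXT://0.0.0.0:9092,SSL://0.0.0.0:9093
    advertised.listeners=PLAINTEXT://broker1.internal:9092,SSL://broker1.example.com:9093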
the
new-consumer mode with this version of Kafka. Since I am using the
new consumer, Kafka does not maintain offset info in ZooKeeper.
2017-11-23 07:55:52.870 o.a.s.k.s.i.OffsetManager [WARN]
topic-partition [indexing-1] has unexpected offset [7259]. Current
committed Offset [8855387]
Regards,
Ali
>
> On Thu, Nov 23, 2017 at 7:02 PM, Ali Nazemian
> wrote:
>
> > Hi All,
> >
> > I am using Kafka 0.10.0.2 and I am not able to upgrade my Kafka version.
> I
> > have a situation that after removing Kafka topic, I am getting the
> > following error in Kaf
Thanks, Brett. We will work on writing a client to do this for us, then.
Regards,
Ali
On Thu, Nov 23, 2017 at 11:02 PM, Brett Rann
wrote:
> Ah apologies.
>
> Found the KIP where the one I suggested was added:
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> 122%3A+Add
Hello,
I am working on Kafka monitoring and writing an application for it. Can
anyone tell me how to get the metrics in raw form? I don't want to use
Datadog or JConsole or tools like those.
I am currently using Jolokia to retrieve metrics.
With Regards
Irtiza Ali
Hello everyone,
I am working on a Python-based Kafka monitoring application. I am unable to
figure out how to retrieve the metrics using Jolokia. I have enabled a port
for metrics retrieval.
I have two questions:
1) Is there something that I am not doing correctly?
2) Is there some other way
but not for the Kafka broker.
Has anyone here done something like this before? Any help will be
appreciated.
Thanks in advance!
With Regards
Irtiza Ali
/jmxtrans/jmxtrans/wiki
>
> Hope that helps!
>
> Thanks,
> Subhash
>
> Sent from my iPhone
>
> > On Dec 6, 2017, at 5:36 AM, Irtiza Ali wrote:
> >
> > Hello everyone,
> >
> > I am working python based Kafka monitoring application. I am unable to
>
problems with it.
> It would be useful to know what exactly you did (how you "plugged in"
> Jolokia, how you configured it, what endpoint you're querying, etc.) to
> help you.
>
> On 6 December 2017 at 10:36, Irtiza Ali wrote:
>
> > Hello everyone,
> >
work:*
> :8074/jolokia/read/java.lang:type=Memory
>
>
> And that's it, we get a nice JSON that we can use the way we want :-)
> I hope I didn't miss anything, but this should be it.
>
> M.
>
>
> On 8 December 2017 at 09:28, Irtiza Ali wrote:
>
> > Hel
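For what it's worth, a minimal sketch of reading that endpoint from Java
(plain HttpURLConnection, so it runs on Java 8; it assumes Jolokia is
listening on port 8074 as in the URL above):

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.HttpURLConnection;
    import java.net.URL;

    public class JolokiaRead {
        public static void main(String[] args) throws Exception {
            // Read the Memory MBean through Jolokia's HTTP bridge
            URL url = new URL("http://localhost:8074/jolokia/read/java.lang:type=Memory");
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            StringBuilder json = new StringBuilder();
            try (BufferedReader in = new BufferedReader(
                    new InputStreamReader(conn.getInputStream()))) {
                String line;
                while ((line = in.readLine()) != null) {
                    json.append(line);
                }
            }
            System.out.println(json); // raw JSON with the MBean's attributes
        }
    }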
> answer your question:
> https://docs.confluent.io/3.0.0/kafka/monitoring.html
> https://www.datadoghq.com/blog/monitoring-kafka-performance-metrics/
>
>
> On 8 December 2017 at 14:56, Irtiza Ali wrote:
>
> > Thanks Michal, can you kindly send me your kafka-run-class.sh and
know if there are any...).
> Reading the Kafka documentation I'm under the impression that all the metrics
> should be available to you out-of-the-box, so I'm not sure why you can't
> see them, sorry :-(
>
> M.
>
>
> On 11 December 2017 at 14:11, Irtiza Ali wro
2017-12-13 9:53 GMT+02:00 Irtiza Ali :
>
> > Ok thank you Michal
> >
> > On Tue, Dec 12, 2017 at 9:30 PM, Michal Michalski <
> > michal.michal...@zalando.ie> wrote:
> >
> > > Hi Irtiza,
> > >
> > > Unfortunately I don't know what could be
Hi All,
I was wondering whether there is any best practice/recommendation for
publishing byte messages to Kafka. Is there any specific Serializer that is
recommended for this matter?
Cheers,
Ali
Thanks, Matt. Have you done any benchmarking to see how using different
Serializers may impact throughput/latency?
Regards,
Ali
On Wed, Jan 10, 2018 at 7:55 AM, Matt Farmer wrote:
> We use the default byte array serializer provided with Kafka and it works
> great for us.
>
> >
is sent as a byte array, so the default byte array serializer is as
> "efficient" as it gets, as it's just sending your byte array through as the
> message... there's no serialization happening.
> -Thunder
>
> On Tue, Jan 9, 2018 at 8:17 PM Ali Nazemian wrote:
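In other words, something like this (a sketch; the topic name and bootstrap
address are placeholders):

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.Producer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    Properties props = new Properties();
    props.put("bootstrap.servers", "localhost:9092");
    props.put("key.serializer",
            "org.apache.kafka.common.serialization.ByteArraySerializer");
    props.put("value.serializer",
            "org.apache.kafka.common.serialization.ByteArraySerializer");

    // The bytes go onto the wire as-is; no serialization work happens.
    Producer<byte[], byte[]> producer = new KafkaProducer<>(props);
    byte[] payload = new byte[]{1, 2, 3};
    producer.send(new ProducerRecord<>("my-topic", payload));
    producer.close();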
Using version 0.10.0.1, how can I create Kafka topics using the Java client
/ API?
Stackoverflow answers describe using kafka.admin.AdminUtils, but this class
is not included in the kafka-clients maven dependency. I also don't see the
package kafka.admin in the javadocs: http://kafka.apache.org/0
Do
I need to talk to the bash script?
On Wed, Sep 14, 2016 at 8:45 AM, Ali Akhtar wrote:
> Using version 0.10.0.1, how can I create kafka topics using the java
> client / API?
>
> Stackoverflow answers describe using kafka.admin.AdminUtils, but this
> class is not included in the
Thank you Martin
On 15 Sep 2016 3:05 am, "Mathieu Fenniak"
wrote:
> Hey Ali,
>
> If you have auto create turned on, which it sounds like you do, and you're
> happy with using the broker's configured partition count and replication
> factor, then you can ca
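(If you do need to create topics programmatically on 0.10.x, a rough sketch
using AdminUtils; note it needs the full broker artifact, e.g.
org.apache.kafka:kafka_2.11, not just kafka-clients, and it talks to
ZooKeeper rather than to the brokers:)

    import java.util.Properties;
    import kafka.admin.AdminUtils;
    import kafka.admin.RackAwareMode;
    import kafka.utils.ZkUtils;

    // Session/connection timeouts in ms; the ZK address is a placeholder.
    ZkUtils zkUtils = ZkUtils.apply("localhost:2181", 30000, 30000, false);
    try {
        // topic, partitions, replication factor, per-topic config
        AdminUtils.createTopic(zkUtils, "my-topic", 3, 2, new Properties(),
                RackAwareMode.Enforced$.MODULE$);
    } finally {
        zkUtils.close();
    }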
If so, can you please share if you're using a publicly available
deployment, or if you created your own, how you did it? (I.e. which services
/ replication controllers you have)
Also, how has the performance been for you? I've read a report which said
the performance suffered running Kafka as a Docker container
I'm guessing it's not possible to delete topics?
On Thu, Sep 15, 2016 at 5:43 AM, Ali Akhtar wrote:
> Thank you Martin
>
> On 15 Sep 2016 3:05 am, "Mathieu Fenniak"
> wrote:
>
>> Hey Ali,
>>
>> If you have auto create turned on, which it sounds
I've noticed that, on my own machine, if I start a kafka broker, then
create a topic, then I stop that server and restart it, the topic I created
is still kept.
However, on restarts, it looks like the topic is deleted.
It's also possible that the default retention policy of 24 hours causes the
mes
It sounds like a network issue. Where are the 3 servers located / hosted?
On Thu, Sep 15, 2016 at 11:51 AM, kant kodali wrote:
> Hi,
> I have the following setup:
> Single Kafka broker and Zookeeper on Machine 1
> Single Kafka producer on Machine 2
> Single Kafka consumer on Machine 3
> When a pr
writes data logs to the /tmp/ folder. /tmp gets
> cleared
> on system reboots. Change the log.dirs config property to some other directory.
>
> On Thu, Sep 15, 2016 at 11:46 AM, Ali Akhtar wrote:
>
> > I've noticed that, on my own machine, if I start a kafka broker, then
> >
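(i.e. something like this in server.properties; the path is just an example:)

    log.dirs=/var/lib/kafka-logs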
); process.exit();
> } else if (received % hash === 0) {
>     process.stdout.write(received + '\n');
>   }
> });
> consumer.on('error', function (err) { console.log('error', err); });
but very
> late. I
> send about 300K messages using the node.js client and I am receiving at a
> very
> low rate. Really not sure what is going on.
>
>
>
>
>
>
> On Thu, Sep 15, 2016 12:06 AM, Ali Akhtar ali.rac...@gmail.com
> wrote:
> Your code seems to
>
>
> On Thu, Sep 15, 2016 12:33 AM, Ali Akhtar ali.rac...@gmail.com
>
> wrote:
> What's the instance size that you're using? With 300k messages your single
>
> broker might not be able to handle it.
>
>
>
>
> On Thu, Sep 15, 2016 at 12:30 PM,
throughput was 2K messages/sec. I am unable to push 300K messages
> with
> Kafka with the above configuration and environment, so at this point my
> biggest
> question is: what is the fair setup for Kafka so it's comparable with NATS
> and
> NSQ?
> kant
>
>
>
>
>
> Again the big question is: what is the right setup for Kafka to be
> comparable
> with the others I mentioned in my previous email?
>
>
>
>
>
>
> On Thu, Sep 15, 2016 1:47 AM, Ali Akhtar ali.rac...@gmail.com
> wrote:
> The issue is clearly that you're running out of r
It sounds like you can implement the 'mapping service' component yourself
using Kafka.
Have all of your messages go to one kafka topic. Have one consumer group
listening to this 'everything goes here' topic. This consumer group acts as
your mapping service. It looks at each message, and based on
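A rough sketch of that mapping-service consumer (the topic and group names
and the route() dispatch are placeholders for your own logic):

    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    Properties props = new Properties();
    props.put("bootstrap.servers", "localhost:9092");
    props.put("group.id", "mapping-service"); // one group reading everything
    props.put("key.deserializer",
            "org.apache.kafka.common.serialization.StringDeserializer");
    props.put("value.deserializer",
            "org.apache.kafka.common.serialization.StringDeserializer");

    KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
    consumer.subscribe(Collections.singletonList("everything"));
    while (true) {
        for (ConsumerRecord<String, String> record : consumer.poll(1000)) {
            // Inspect each message and dispatch it based on its content.
            route(record.key(), record.value());
        }
    }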
Examine server.properties and see which port you're using in there
On Thu, Sep 15, 2016 at 3:52 PM, kant kodali wrote:
> Which port should I use to send messages through Kafka when using a
> client library: 9091, 9092, or 2181?
> I start kafka as follows:
> sudo bin/zookeeper-server-start.sh con
I've created a 3 broker kafka cluster, changing only the config values for
broker id, log.dirs, and zookeeper connect. I left the remaining fields as
default.
The broker ids are 1, 2, 3. I opened the port 9092 on AWS.
I then created a topic 'test' with replication factor of 2, and 3
partitions.
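(For reference, a topic like that is created with something along the lines
of:)

    bin/kafka-topics.sh --create --zookeeper <zk-host>:2181 \
      --topic test --partitions 3 --replication-factor 2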
other using the
private IPs.
Shouldn't that be enough? I don't want to expose Kafka publicly.
On Fri, Sep 16, 2016 at 10:48 PM, Ali Akhtar wrote:
> I've created a 3 broker kafka cluster, changing only the config values for
> broker id, log.dirs, and zookeeper connect. I left th
You can create multiple partitions of a topic and kafka will attempt to
distribute them evenly.
E.g. if you have 3 brokers and you create 3 partitions of a topic, each
broker will be the leader of 1 of the 3 partitions.
P.S. how did the benchmarking go?
On Sat, Sep 17, 2016 at 1:36 PM, kant kodali
Perhaps if you add 1 node, take down existing node, etc?
On Sun, Sep 25, 2016 at 10:37 PM, brenfield111
wrote:
> I need to change the hostnames and IPs for the Zookeeper ensemble
> serving my Kafka cluster.
>
> Will Kafka carry on as usual, along with its existing ZK nodes, after
> making the c
I have a somewhat tricky use case, and I'm looking for ideas.
I have 5-6 Kafka producers, reading various APIs, and writing their raw
data into Kafka.
I need to:
- Do ETL on the data, and standardize it.
- Store the standardized data somewhere (HBase / Cassandra / Raw HDFS /
ElasticSearch / Pos
It needs to be able to scale to a very large amount of data, yes.
On Thu, Sep 29, 2016 at 7:00 PM, Deepak Sharma
wrote:
> What is the message inflow?
> If it's really high, Spark will definitely be of great use.
>
> Thanks
> Deepak
>
> On Sep 29, 2016 19:24, "
On Thu, Sep 29, 2016 at 8:24 PM, Ali Akhtar wrote:
>
>> I don't think I need a different speed storage and batch storage. Just
>> taking in raw data from Kafka, standardizing, and storing it somewhere
>> where the web UI can query it, seems like it will be enough.
>>
lost /
> >> duplicated data? Are your writes idempotent?
> >>
> >> Absent any other information about the problem, I'd stay away from
> >> cassandra/flume/hdfs/hbase/whatever, and use a spark direct stream
> >> feeding postgres.
> >>
>
> On Thu, Sep 29, 2016 at 10:40 AM, Deepak Sharma
> wrote:
> > If you use spark direct streams , it ensure end to end guarantee for
> > messages.
> >
> >
> > On Thu, Sep 29, 2016 at 9:05 PM, Ali Akhtar
> wrote:
> >>
> >> My concern with Post
Avi,
Why did you choose Druid over Postgres / Cassandra / Elasticsearch?
On Fri, Sep 30, 2016 at 1:09 AM, Avi Flax wrote:
>
> > On Sep 29, 2016, at 09:54, Ali Akhtar wrote:
> >
> > I'd appreciate some thoughts / suggestions on which of these
> alternatives I
>
Newbie question, but what exactly does log.cleaner.enable=true do, and how
do I know if I need to set it to true?
Also, if config changes like that need to be made once a cluster is up and
running, what's the recommended way to do that? Do you killall -12 kafka
and then make the change, and the
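(In short, as I understand it: log.cleaner.enable controls the log cleaner
threads that perform log compaction, so it only matters once a topic uses
cleanup.policy=compact. In server.properties:)

    log.cleaner.enable=true   # run the compaction (log cleaner) threads
    # compaction is then enabled per topic via: cleanup.policy=compact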
You may be able to control the starting offset, but if you try to control
which instance gets offset 4, you'll lose all the benefits of parallelism.
On 4 Oct 2016 3:02 pm, "Kaushil Rambhia/ MUM/CORP/ ENGINEERING" <
kaushi...@pepperfry.com> wrote:
> Hi guys,
> I am using Apache Kafka with php-rdkafka
I need to consume a large number of topics, and handle each topic in a
different way.
I was thinking about creating a different KStream for each topic, and doing
KStream.foreach for each stream, to process incoming messages.
However, it's unclear if this will be handled in a parallel way by default
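Something like this is what I mean (a sketch against the 0.10.1 Streams API;
the handler methods are placeholders for my per-topic logic):

    import java.util.Properties;
    import org.apache.kafka.streams.KafkaStreams;
    import org.apache.kafka.streams.StreamsConfig;
    import org.apache.kafka.streams.kstream.KStream;
    import org.apache.kafka.streams.kstream.KStreamBuilder;

    KStreamBuilder builder = new KStreamBuilder();
    // One stream per topic, each with its own handling logic.
    KStream<String, String> a = builder.stream("topic-a");
    a.foreach((key, value) -> handleTopicA(value)); // handleTopicA: your code
    KStream<String, String> b = builder.stream("topic-b");
    b.foreach((key, value) -> handleTopicB(value)); // handleTopicB: your code

    Properties props = new Properties(); // application.id, bootstrap.servers, serdes...
    KafkaStreams streams = new KafkaStreams(builder, new StreamsConfig(props));
    streams.start();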
are
> distributed over the running instances.
>
> Please see here for more details
> http://docs.confluent.io/current/streams/architecture.html#parallelism-model
>
>
> - -Matthias
>
> On 10/4/16 1:27 PM, Ali Akhtar wrote:
> > I need to consume a large number of t
That's awesome. Thanks.
On Wed, Oct 5, 2016 at 2:19 AM, Matthias J. Sax
wrote:
>
> Yes.
>
> On 10/4/16 1:47 PM, Ali Akhtar wrote:
> > Hey Matthias,
> >
> > All my topics have 3 partitions each, and I
<3
On Wed, Oct 5, 2016 at 2:31 AM, Ali Akhtar wrote:
> That's awesome. Thanks.
>
> On Wed, Oct 5, 2016 at 2:19 AM, Matthias J. Sax
> wrote:
>
>>
>> Yes.
>>
>> On 10/4/16 1:47 PM, A
Just noticed this on pulling up the documentation. Oh yeah! This new look
is fantastic.
On Wed, Oct 5, 2016 at 4:31 AM, Vahid S Hashemian wrote:
> +1
>
> Thank you for the much needed new design.
> At first glance, it looks great, and more professional.
>
> --Vahid
>
>
>
> From: Gwen Shapira
I don't see a space in that topic name
On Wed, Oct 5, 2016 at 6:42 PM, Hamza HACHANI
wrote:
> Hi,
>
> I created a topic called device-connection-invert-key-value-the
> metric-changelog.
>
> I insist that there is a space in it.
>
>
>
> Now that I want to delete it because my cluster can no longe
> It's often a good
idea to over-partition your topics. For example, even if today 10 machines
(and thus 10 partitions) would be sufficient, pick a higher number of
partitions (say, 50) so you have some wiggle room to add more machines
(11...50) later if need be.
If you create e.g. 30 partitions,
Heya,
I have some Kafka producers, which are listening to webhook events, and for
each webhook event, they post its payload to a Kafka topic.
Each payload contains a timestamp from the webhook source.
This timestamp is the source of truth about which events happened first,
which happened last, e
attach a state store to your processor and compare the
> timestamps of the current record with the timestamp of the one in your
> store.
>
> - -Matthias
>
> On 10/6/16 8:52 AM, Ali Akhtar wrote:
> > Heya,
> >
> > I have some Kafka producers, which are listening to web
product, go to
> the same instance. You can ensure this by giving all records of the
> same product the same key and "groupByKey" before processing the data.
>
> - -Matthias
>
> On 10/6/16 10:55 AM, Ali Akhtar wrote:
> > Thank you, State Store seems promising.
(Assuming 'last one' can be
determined using the timestamp in the JSON of the message)
On Fri, Oct 7, 2016 at 2:54 AM, Ali Akhtar wrote:
> Thanks for the reply.
>
> Its not possible to provide keys, unfortunately. (Producer is written by a
> colleague, and said colleague jus
that
> returns the JSON-embedded TS instead of the record TS (as
> DefaultTimestampExtractor does)
>
> See
> http://docs.confluent.io/3.0.1/streams/developer-guide.html#timestamp-extractor
>
>
> - -Matthias
>
> On 10/6/16 2:59 PM, Ali Akhtar wrote:
> > Sorry, to
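A sketch of such an extractor (against the 0.10.x API, where extract() takes
just the record; newer versions also pass a previousTimestamp argument). The
JSON parsing is left as a stub:

    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.streams.processor.TimestampExtractor;

    public class PayloadTimestampExtractor implements TimestampExtractor {
        @Override
        public long extract(ConsumerRecord<Object, Object> record) {
            Long embedded = parseEmbeddedTimestamp(record.value());
            // Fall back to the record's own timestamp if the payload has none.
            return embedded != null ? embedded : record.timestamp();
        }

        private Long parseEmbeddedTimestamp(Object value) {
            // Stub: parse the JSON payload and return its timestamp field.
            return null;
        }
    }

    // Registered via:
    // props.put(StreamsConfig.TIMESTAMP_EXTRACTOR_CLASS_CONFIG,
    //           PayloadTimestampExtractor.class.getName());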
What the subject says. For dev, it would be a lot easier if debugging info
could be printed to stdout instead of another topic, where it will persist.
Any ideas if this is possible?
print() or #writeAsText()
>
>
> - -Matthias
>
> On 10/6/16 6:25 PM, Ali Akhtar wrote:
> > What the subject says. For dev, it would be a lot easier if
> > debugging info could be printed to stdout instead of another topic,
> > where it will persist.
> >
> >
Is it possible to have kafka-streams-reset be automatically called during
development? Something like streams.cleanUp() but which also does reset?
On Fri, Oct 7, 2016 at 2:45 PM, Michael Noll wrote:
> Ali,
>
> adding to what Matthias said:
>
> Kafka 0.10 changed the message f
s for
production)
On Fri, Oct 7, 2016 at 8:05 PM, Michael Noll wrote:
> > Is it possible to have kafka-streams-reset be automatically called during
> > development? Something like streams.cleanUp() but which also does reset?
>
> Unfortunately this isn't possible (yet), Ali.
Since we're using Java 8 in most cases anyway, Serdes / Serializers should
use Optionals, to avoid having to deal with the lovely nulls.
When I'm reading this stream, in my callback, how would I prevent having to
check whether the deserialized value is null?
On Sat, Oct 8, 2016 at 1:07 AM, Guozhang Wang wrote:
> Hello Ali,
>
> We do have corresponding overloaded functions for most of KStream / KTable
> operators to
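The usual workaround is to filter the nulls out before the rest of the
pipeline; a sketch (handle() stands in for your own code):

    // Drop records whose value failed to deserialize (null), so downstream
    // operators never see a null value.
    stream.filter((key, value) -> value != null)
          .mapValues(value -> handle(value));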
Also, you can set a retention period and have messages get auto deleted
after a certain time (default 1 week)
On Sat, Oct 8, 2016 at 3:21 AM, Hans Jespersen wrote:
> Kafka doesn’t work that way. Kafka is “Publish-subscribe messaging
> rethought as a distributed commit log”. The messages in the l
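(For reference, retention is set like this, either per topic or broker-wide:)

    # per-topic override (at topic creation, or via alter):
    retention.ms=86400000      # keep messages ~1 day
    # broker-wide default:
    log.retention.hours=168    # 1 week (the default)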
compatibility w/ the bad old days soon and move into
the future.
(If there's a way to do the null check automatically, i.e. before calling
the lambda, please let me know.)
On Sun, Oct 9, 2016 at 11:14 PM, Guozhang Wang wrote:
> Ali,
>
> In your scenario, if serde fails to parse the b
A Kafka producer written elsewhere that I'm using, which uses the Go Kafka
driver, is sending messages where the key is null.
Is this OK, or will this cause issues due to partitioning not happening
correctly?
What would be a good way to generate keys in this case, to ensure even
partition spread
(https://github.com/confluentinc/confluent-kafka-go/).
>
> If you decide to generate keys and you want even spread, a random
> number generator is probably your best bet.
>
> Gwen
>
> On Sun, Oct 9, 2016 at 6:05 PM, Ali Akhtar wrote:
> > A kafka producer written elsew
can't tell whether your
> Go client follows the behavior of Kafka's Java producer.
>
> -Michael
>
>
>
>
> [1]
> https://github.com/apache/kafka/blob/trunk/clients/src/
> main/java/org/apache/kafka/clients/producer/internals/
> DefaultPartitioner.java
&
So it should be okay to have null keys, I'm guessing.
On Mon, Oct 10, 2016 at 11:51 AM, Ali Akhtar wrote:
> Hey Michael,
>
> We're using this one: https://github.com/Shopify/sarama
>
> Any ideas how that one works?
>
> On Mon, Oct 10, 2016 at 11:48 AM, Michael No
They both have a lot of the same methods, and yet they can't be used
polymorphically because they don't share the same parent interface.
I think KIterable or something like that should be used as their base
interface w/ shared methods.
In development, I often need to delete all existing data in all topics, and
start over.
My process for this currently is: stop ZooKeeper, stop the Kafka broker, rm -rf
~/kafka/data/*
But when I bring the broker back on, it often prints a bunch of errors and
needs to be restarted before it actually wo
Heya,
Say I'm building a live auction site, with different products. Different
users will bid on different products. And each time they do, I want to
update the product's price, so it should always have the latest price in
place.
Example: Person 1 bids $3 on Product A, and Person 2 bids $5 on the
Thanks. That filter() method is a good solution. But whenever I look at it,
I feel an empty spot in my heart which can only be filled by:
filter(Optional::isPresent)
On Wed, Oct 12, 2016 at 12:15 AM, Guozhang Wang wrote:
> Ali,
>
> We are working on moving from Java7 to Java8 in Apa
P.S, does my scenario require using windows, or can it be achieved using
just KTable?
On Wed, Oct 12, 2016 at 8:56 AM, Ali Akhtar wrote:
> Heya,
>
> Say I'm building a live auction site, with different products. Different
> users will bid on different products. And each time t
The last time I tried, I couldn't find a way to do it, other than to
trigger the bash script for topic deletion programmatically.
On Wed, Oct 12, 2016 at 9:18 AM, Ratha v wrote:
> Hi all;
>
> I have two topics(source and target). I do some processing on the message
> available in the source topic
timestamp of the current record.
>
> Does this make sense?
>
> To fix your issue, you could add a .transformValues() before your KTable,
> which allows you to access the timestamp of a record. If you add this
> timestamp to your value and pass it to the KTable afterwards, you can
> access i
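A rough sketch of the end result, assuming a Bid type whose ts field carries
the embedded timestamp (API names per 0.10.1; newer versions phrase this
with Materialized instead of a store name):

    // Keep, per product, whichever bid has the newest embedded timestamp,
    // regardless of the order the records arrived in.
    KTable<String, Bid> latestBids = builder
            .stream(stringSerde, bidSerde, "bids")
            .groupByKey(stringSerde, bidSerde)
            .reduce((current, incoming) ->
                    incoming.ts >= current.ts ? incoming : current,
                    "latest-bids-store");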
s, then you do not need to do anything special.
>
>
>
>
>
> On Wed, Oct 12, 2016 at 10:22 PM, Ali Akhtar wrote:
>
> > Thanks Matthias.
> >
> > So, if I'm understanding this right, Kafka will not discard messages
> > which arrive out of orde
I am probably being too OCD anyway. It will almost never happen that
messages from another VM in the same network on EC2 arrive out of order,
right?
On 13 Oct 2016 8:47 pm, "Ali Akhtar" wrote:
> Makes sense. Thanks
>
> On 13 Oct 2016 12:42 pm, "Michael Noll" wro
Is there a maven artifact that can be used to create instances
of EmbeddedSingleNodeKafkaCluster for unit / integration tests?
I'm using Kafka Streams, and I'm attempting to write integration tests for
a stream processor.
The processor listens to a topic, processes incoming messages, and writes
some data to Cassandra tables.
I'm attempting to write a test which produces some test data, and then
checks whether or not the
Please change that.
On Thu, Oct 20, 2016 at 1:53 AM, Eno Thereska
wrote:
> I'm afraid we haven't released this as a maven artefact yet :(
>
> Eno
>
> > On 18 Oct 2016, at 13:22, Ali Akhtar wrote:
> >
> > Is there a maven artifact tha
similar queue-related tests we put the check in a loop. Check every
> second until either the result is found or a timeout happens.
>
> -Dave
>
> -Original Message-
> From: Ali Akhtar [mailto:ali.rac...@gmail.com]
> Sent: Wednesday, October 19, 2016 3:38 PM
> To: users
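i.e. something along these lines (hand-rolled; the Cassandra check is a
placeholder for your own query):

    // Poll for the expected row until it shows up or we give up.
    long deadline = System.currentTimeMillis() + 30_000; // 30s timeout, arbitrary
    boolean found = false;
    while (!found && System.currentTimeMillis() < deadline) {
        found = expectedRowExists(); // placeholder: query Cassandra here
        if (!found) {
            Thread.sleep(1000); // re-check every second
        }
    }
    org.junit.Assert.assertTrue("row was not written before the timeout", found);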
Michael,
Would there be any advantage to using the Kafka Connect method? It seems like
it'd just add an extra step of overhead.
On Thu, Oct 20, 2016 at 12:35 PM, Michael Noll wrote:
> Ali,
>
> my main feedback is similar to what Eno and Dave have already said. In
> your situat
There isn't a Java API for this; you'd have to mess around with bash
scripts, which I haven't found to be worth it.
Just let the data expire and get deleted. Set a short expiry time for the
topic if necessary.
On Mon, Oct 24, 2016 at 6:30 PM, Demian Calcaprina
wrote:
> Hi Guys,
>
> Is there a w
+1. I hope there will be a corresponding Java library for doing admin
functionality.
On Wed, Oct 26, 2016 at 4:10 AM, Jungtaek Lim wrote:
> +1
>
>
> On Wed, 26 Oct 2016 at 8:00 AM craig w wrote:
>
> > -1
> >
> > On Tuesday, October 25, 2016, Sriram Subramanian
> wrote:
> >
> > > -1 for all the
And this will make adding health checks via Kubernetes easy.
On Wed, Oct 26, 2016 at 4:12 AM, Ali Akhtar wrote:
> +1. I hope there will be a corresponding Java library for doing admin
> functionality.
>
> On Wed, Oct 26, 2016 at 4:10 AM, Jungtaek Lim wrote:
>
>> +1
>>
Kafka would be recommended for cloud providers?
What about using RAID 0 vs. JBOD for Kafka brokers? I can see various
recommendations to use RAID 0 or JBOD, but I am not really sure which one is
recommended, especially for a cloud environment.
Regards,
Ali
shers?
Cheers,
Ali
On Wed, 11 Jul. 2018, 04:48 Dan Rosanova,
wrote:
> In Azure we recommend using managed disks for Kafka. HD Insight Kafka uses
> them. I generally see SSD for Kafka, but I guess part of that could depend
> on if you write larger writes from fewer publishers or sma
version of Kafka does not fully support JBOD, as even a single disk failure
can stop a Kafka broker, so we would not get the actual benefit of using
JBOD anyway. However, I am not quite sure how software RAID acts in this
situation, as there is no option to use hardware RAID in the cloud.
Regards,
Ali
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-112%3A+Handle+disk+failure+for+JBOD
>
> Should give you some ideas.
>
> On 11 July 2018 at 14:31, Ali Nazemian wrote:
>
> > Hi All,
> >
> > I was wondering what the disk recommendation is for Kafka cluster? Is
RAID 10
> has been the choice. Also, the replication you are mentioning is the s/w
> replication; it has nothing to do with the RAID 0 setup.
>
>
>
> On 11 July 2018 at 23:59, Ali Nazemian wrote:
>
> > Thanks. As this proposal is not available for the version of Kafka that
> we
>