Hi David,
Are you running in Docker? Are you trying to connect from a remote box?
We found we could connect locally but couldn't connect from another remote
host.
(I've just started using kafka also)
We had the same issue and found that host.name=<%=@ipaddress%> needed to be
the FQDN of the host.
Hi All,
On our dev Kafka environment a broker went down with a topic on it. Can we
just reassign the partitions to another broker?
Kafka 0.9
Thanks
Ben
--
This email, including attachments, is private and confidential. If you have
received this email in error please notify the sender
> > Hello Ben,
> >
> > Yes, you can apply a different replica assignment. See the related docs:
> >
> >
> http://kafka.apache.org/documentation.html#basic_ops_increase_replication_factor
> >
> > Kind regards,
> > Stevo Slavic.
> >
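A minimal sketch of what that reassignment looks like in practice: a JSON file mapping each partition to its new replica list, fed to the `kafka-reassign-partitions.sh` tool that ships with Kafka 0.9. The topic name and broker ids here are made-up examples.

```json
{
  "version": 1,
  "partitions": [
    {"topic": "my-topic", "partition": 0, "replicas": [2, 3]},
    {"topic": "my-topic", "partition": 1, "replicas": [3, 2]}
  ]
}
```

Saved as e.g. `reassign.json`, it would be applied with something like `bin/kafka-reassign-partitions.sh --zookeeper zk:2181 --reassignment-json-file reassign.json --execute` and later checked with `--verify` (the ZooKeeper address is a placeholder; `--generate` can also propose the JSON for you).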
Hi David,
Do you have advertised.host.name set in kafka properties?
On AWS (might be the same for DO) we had to set this to the public DNS of
the box (you might be able to get away with setting it to the public IP)
Thanks,
Ben
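For reference, a hedged sketch of the `server.properties` lines being discussed; the hostname below is a made-up example, and in Kafka versions after 0.10 these settings are superseded by `advertised.listeners`.

```properties
# server.properties (0.8/0.9-era settings)
# What the broker tells clients to connect to -- must be resolvable
# and reachable from the client's side of the network:
advertised.host.name=ec2-54-0-0-1.compute-1.amazonaws.com
advertised.port=9092
```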
On Fri, Jan 8, 2016 at 10:52 AM, David Montgomery
Hi Marco,
We use the public DNS hostname that you can get from the AWS metadata
service.
Thanks,
Ben
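A sketch of fetching that hostname from the metadata service; the `169.254.169.254` endpoint only answers when run on the EC2 instance itself.

```shell
# Ask the EC2 instance metadata service for the instance's public DNS name.
# Only works from inside EC2; prints nothing useful elsewhere.
curl -s http://169.254.169.254/latest/meta-data/public-hostname
```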
On Wed, Jun 1, 2016 at 1:54 PM, Marco B. wrote:
> Hello everyone,
>
> I am trying to setup a MirrorMaker between my company's local cluster and
> another cluster in AWS to
+ 1
On Thursday, 16 June 2016, Craig Swift
wrote:
> +1
>
> Craig J. Swift
> Principal Software Engineer - Data Pipeline
> ReturnPath Inc.
> Work: 303-999-3220 Cell: 720-560-7038
>
> On Thu, Jun 16, 2016 at 2:50 PM, Henry Cai
>
Try mounting it to /tmp/ then you will see if it's a file permission issue.
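A sketch of that debugging step, assuming a hypothetical image name and host directory (both placeholders, not from the thread):

```shell
# Mount the host directory at /tmp inside the container instead of the
# usual Kafka data path; if writes succeed there, the original failure
# was likely a permission problem on the original mount point.
docker run -v /var/kafka-data:/tmp my-kafka-image
```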
On Fri, Jun 17, 2016 at 1:27 PM, Gerard Klijs
wrote:
> What do you mean by a *docker volume*? It's best to use a data container,
> and use the volumes in your broker container, this way you
Hi Vince,
I think this might help you: https://github.com/edenhill/librdkafka
("High-level balanced KafkaConsumer: supported (requires broker >= 0.9)")
Regards,
Ben
On Wed, Apr 6, 2016 at 3:16 PM, Vince Deters wrote:
> Hello, we have a requirement to run redundant
Hi Stephen,
Have you checked out https://github.com/edenhill/librdkafka ? It might be
what you need (I don't do C, so it might not be right for you)
Regards,
Ben
On Thu, Mar 3, 2016 at 10:47 AM, Hopson, Stephen <
stephen.hop...@gb.unisys.com> wrote:
> Hi,
>
> Not sure if this is the right
Hi Jahn,
I'm assuming you're using Java; maybe try another Java client? I know you
said that CPU was nominal but maybe this client from blackberry will help:
https://github.com/blackberry/Krackle
Thanks,
Ben
On Tue, May 24, 2016 at 8:34 AM, Jahn Roux wrote:
> Thanks for the
, so you can go watch it :)
>
> On Fri, May 13, 2016 at 10:05 PM, Ben Davison <ben.davi...@7digital.com>
> wrote:
Hi Gwen,
Awesome stuff, can't wait to go through these -- I noticed that one link
didn't go anywhere.
Real-Time Analytics Visualized w/ Kafka + Streamliner + MemSQL + ZoomData
Will this be posted anytime?
Thanks,
Ben
On Fri, May 13, 2016 at 8:05 PM, Gwen Shapira wrote:
>
Hi Paolo,
We just jump on the box (we don't use Kubernetes) and change the
metadata.properties file manually, then restart. We are only doing a small
amount of non-prod traffic, so it's easy for us to manage.
Thanks,
Ben
On Tuesday, 10 May 2016, Paolo Patierno wrote:
> Hi
Hi Paolo,
Would be interested to hear the answer also. We've been getting around this
ourselves by setting the new broker that comes up back to id 1001.
Thanks,
Ben
On Tuesday, 10 May 2016, Paolo Patierno wrote:
> Hello,
>
> I'm experiencing the usage of Kafka with
Does anyone have any opinions on this?
https://aws.amazon.com/blogs/aws/amazon-elastic-file-system-production-ready-in-three-regions/
Looks interesting, just wondering if anyone else uses NFS mounts with Kafka?
Thanks,
Ben
Try putting "" or '' around the string when running the command.
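The quoting advice in shell terms: without quotes the shell splits the name on the space, so the command sees two arguments. A generic sketch with a placeholder name, not the exact command from the thread:

```shell
name='the metric'
# Unquoted: word splitting turns the one value into two arguments,
# so printf emits two lines ("the" and "metric"):
printf '%s\n' $name
# Quoted: the value stays a single argument, one line:
printf '%s\n' "$name"
```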
On Wed, Oct 5, 2016 at 3:29 PM, Hamza HACHANI
wrote:
> It's between "the" and "metric"
>
>
> From: Ali Akhtar
> Sent: Wednesday, 5 October 2016
Hi Kant,
I was following the other thread; could you try using a different
benchmarking client for a test?
https://grey-boundary.io/load-testing-apache-kafka-on-aws/
Ben
On Thursday, 15 September 2016, kant kodali wrote:
> with Kafka I tried it with 10 messages with single
ld be
> comparable to the setup I had with NATS and NSQ, except you suspect the
> client library or something?
> Thanks, Kant
> On Thu, Sep 15, 2016 2:16 AM, Ben Davison ben.davi...@7digital.com
> wrote:
>
> Hi Kant,
>
>
>
>
> I was follow
I'm going to change my vote to a -1, great points made by all.
(Although I would want to have another discussion around a "management"
REST endpoint for Kafka, I think there would be value in that)
On Wed, Oct 26, 2016 at 3:56 PM, Samuel Taylor
wrote:
> -1
>
> I don't
+ 1
On Tue, Oct 25, 2016 at 10:16 PM, Harsha Chintalapani
wrote:
> Hi All,
>We are proposing to have a REST Server as part of Apache Kafka
> to provide producer/consumer/admin APIs. We strongly believe having
> REST server functionality with Apache Kafka will help
you explain to me, with an example or something, what would be
> feasible
>
> On Oct 17, 2016 2:08 PM, "Ben Davison" <ben.davi...@7digital.com> wrote:
We have it set up so that both the log retention ms is set to 7 days and the
log delete bytes setting is capped (can't remember exactly what the setting
is called), so we never run out of space. (Don't set the value to something
like 99% of your disk, as the log cleaner thread might not kick in in time;
we leave it at 90% of disk space.)
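The thread doesn't name the exact keys, so this is a best-guess sketch of the `server.properties` settings being described; the byte value is purely illustrative, and note that the size-based limit applies per partition, not per disk.

```properties
# Time-based retention: delete segments older than 7 days
log.retention.ms=604800000
# Size-based retention: cap each partition's log (illustrative value, ~10 GiB)
log.retention.bytes=10737418240
```

Because `log.retention.bytes` is per partition, sizing it against total disk space means factoring in how many partitions the broker hosts.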
I *think* Spark 2.0.0 has a Kafka 0.8 consumer, which would still use the
old Zookeeper method.
To use the new consumer offsets, the consumer needs to be at least Kafka 0.9
compatible.
On Thu, Oct 13, 2016 at 1:55 PM, Samy Dindane wrote:
> Hi,
>
> I use Kafka 0.10 with ZK
This one works fine, and has StatefulSets built in.
https://github.com/Yolean/kubernetes-kafka/
Ben
On Tue, Aug 22, 2017 at 4:41 PM, Ali Akhtar wrote:
> Not too familiar with that error, but I do have Kafka working on
> Kubernetes. I'll share my files here in case that
Hi Dmitriy,
Did you check out this thread "Incorrect consumer offsets after broker
restart 0.11.0.0" from Phil Luckhurst, it sounds similar.
Thanks,
Ben
On Wed, Oct 11, 2017 at 4:44 PM Dmitriy Vsekhvalnov
wrote:
> Hey, want to resurrect this thread.
>
> Decided to do
If you set these configs at broker start, newly created topics will be
replicated automatically:
default.replication.factor
num.partitions=3
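As a concrete `server.properties` sketch; the replication factor value is an assumption on my part, since the message above only names the setting:

```properties
# Defaults applied to topics that are auto-created or created
# without explicit overrides:
default.replication.factor=3
num.partitions=3
```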
Ben
On Fri, Dec 1, 2017 at 1:41 PM Skip Montanaro
wrote:
> Are dynamically created topics thus never replicated?
>
> On Nov