I believe the wire format has changed between 0.8 and 0.9. It might be
necessary to update your clients. I'd try that first before doing any further
debugging / tracing.
--
Best regards,
Rad
On Wed, Jun 8, 2016 at 4:40 PM +0200, "Chris Barlock"
wrote:
Kiran,
If you’re using Docker, you can run Kafka on Mesos: use constraints to force a
relaunched Kafka broker to always come back on the same agent, and use Docker
volumes to persist the data.
Not sure if https://github.com/mesos/kafka provides these capabilities.
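Sketched as a hypothetical Marathon app definition (the image name, agent hostname, and volume paths below are made up), the three pieces fit together like this:

```json
{
  "id": "/kafka/broker-0",
  "instances": 1,
  "container": {
    "type": "DOCKER",
    "docker": { "image": "my-org/kafka:0.10" },
    "volumes": [
      { "containerPath": "/var/lib/kafka", "hostPath": "/data/kafka-broker-0", "mode": "RW" }
    ]
  },
  "constraints": [["hostname", "CLUSTER", "agent-17.example.com"]]
}
```

The CLUSTER constraint pins every relaunch to the same agent, so the host-path volume keeps the broker's log directories across restarts.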
–
Best regards,
First result in Google for “kafka udp listener” brings this:
https://github.com/agaoglu/udp-kafka-bridge
–
Best regards,
Radek Gruchalski
ra...@gruchalski.com
de.linkedin.com/in/radgruchalski
Confidentiality:
This communication is intended for the above-named person and may be
confidential
You can’t. I have filed a wish for something like this:
https://issues.apache.org/jira/browse/KAFKA-3726.
–
Best regards,
Radek Gruchalski
ra...@gruchalski.com
de.linkedin.com/in/radgruchalski
Hey, you should have a look at Apache Samza. You put Samza on top of Kafka and
you can inject content-filtering rules into a Samza job. This will give you
the "content subscription" system you intend to build.
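As a minimal, hypothetical illustration of the idea (plain Java, not the Samza API): a registry of filtering rules through which each message passes before delivery, which is the shape such a filtering job would take:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Predicate;

// Hypothetical sketch of content-based subscription. In a real deployment the
// rules would live inside a Samza job consuming from Kafka; here they are plain
// predicates applied to an in-memory stream of messages.
public class ContentFilterSketch {
    private final List<Predicate<String>> rules = new ArrayList<>();

    // Register a subscription rule, e.g. injected at runtime.
    public void addRule(Predicate<String> rule) {
        rules.add(rule);
    }

    // A message is delivered only if at least one subscription rule matches it.
    public boolean shouldDeliver(String message) {
        return rules.stream().anyMatch(r -> r.test(message));
    }
}
```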
On Thu, May 19, 2016 at 1:56 AM -0700, "Janagan
I have described a cold storage solution for Kafka:
https://medium.com/@rad_g/the-case-for-kafka-cold-storage-32929d0a57b2#.kf0jf8cwv.
Also described it here a couple of times. The potential solution seems rather
straightforward.
From: Luke
:-/
On Tue, May 17, 2016 at 4:53 PM -0700, "Christian Posta"
<christian.po...@gmail.com> wrote:
+1 to your solution of log.cleanup.policy. Other brokers (e.g. ActiveMQ)
have a similar feature.
Is there a JIRA for this?
On Tue, May 17, 2016 at 4
On Tue, May 17, 2016 at 4:57 PM, Radoslaw Gruchalski
wrote:
> Not as far as I'm aware. I'd be happy to contribute if there is a desire
> to have such a feature. From experience with other projects, I know that
> without the initial pitch / discussion, it could be difficult to get such
On May 18, 2016 at 3:57:43 PM, Radoslaw Gruchalski (ra...@gruchalski.com) wrote:
Hi Tom,
There is, indeed, the problem with replication in case of the leader change for
the partition.
Hence, I think the best approach would be to have Kafka emit events in case
of:
- partition leader change
- offset file to be cleaned up
This still leaves a lot of work for the ops people
Are you sure you’re getting the same IP address?
Regarding zookeeper connection being closed, is kubernetes doing a soft
shutdown of your container? If so, zookeeper is asked politely to stop.
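If Kubernetes is the one doing the stopping, the relevant knobs live in the pod spec; a hypothetical fragment (container name and timings are made up):

```yaml
# Hypothetical pod spec fragment controlling soft shutdown.
spec:
  # How long Kubernetes waits after SIGTERM before force-killing the container.
  terminationGracePeriodSeconds: 60
  containers:
    - name: zookeeper
      lifecycle:
        preStop:
          exec:
            # Give clients a moment to disconnect before the process stops.
            command: ["sh", "-c", "sleep 10"]
```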
–
Best regards,
Radek Gruchalski
radek@gruchalski.com
Chris,
There is a .topic() method available on that object:
https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/common/requests/MetadataResponse.java#L323
–
Best regards,
Radek Gruchalski
ra...@gruchalski.com
de.linkedin.com/in/radgruchalski
Marcin,
DNS seems to be your friend. /etc/hosts should be sufficient but it
might be an operational hassle.
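A sketch of that setup (the IP address and hostname are made up; advertised.listeners assumes a 0.9+ broker, older versions use advertised.host.name):

```
# /etc/hosts on each client host: resolve the advertised broker name
# to the NAT address.
203.0.113.10   kafka-nat

# server.properties on the broker: advertise that same name, so the
# metadata the broker hands back resolves correctly for every client.
# advertised.listeners=PLAINTEXT://kafka-nat:9092
```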
–
Best regards,
Radek Gruchalski
ra...@gruchalski.com
On August 10, 2016 at 10:03:16 PM, Marcin (kan...@o2.pl) wrote:
We have kafka behind NAT with *only one broker*.
Let’s say we
Is there a JIRA for it? Could you point to where the issue exists in the
code?
–
Best regards,
Radek Gruchalski
ra...@gruchalski.com
On August 12, 2016 at 5:15:33 PM, Oleg Zhurakousky (
ozhurakou...@hortonworks.com) wrote:
It hangs indefinitely in any container. It’s a known issue and has been
The exception is:
Caused by: kafka.common.KafkaException: Unable to parse PLAINTEXT://sven:9092
to a broker endpoint
And it happens
here: https://github.com/apache/kafka/blob/0.10.0/core/src/main/scala/kafka/cluster/EndPoint.scala#L47
Do you have any non-ASCII characters in your URI? Something
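The failure is a regex mismatch. The sketch below uses an approximation of the listener-URI pattern in EndPoint.scala (not the exact Kafka source) to show how a single non-ASCII character in the hostname breaks the parse:

```java
import java.util.regex.Pattern;

// Approximation of the listener-URI regex applied in kafka.cluster.EndPoint
// (Kafka 0.10); the real pattern may differ in detail. Hostname characters
// outside the bracketed class make the whole parse fail, which surfaces as
// "Unable to parse ... to a broker endpoint".
public class EndpointParseSketch {
    private static final Pattern LISTENER =
        Pattern.compile("^(.*)://\\[?([0-9a-zA-Z\\-%._:]*)\\]?:(-?[0-9]+)");

    public static boolean parses(String listener) {
        return LISTENER.matcher(listener).matches();
    }
}
```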
Out of curiosity, are you aware of kafka.utils.TestUtils and Apache Curator
TestingServer?
I’m using this successfully to test publish / consume scenarios with things
like Flink, Spark and custom apps.
What would stop you from taking the same approach?
–
Best regards,
Radek Gruchalski
pick poll()? Or do they plan on
introducing reactive streams?
Thanks,
kant
On Sat, Sep 17, 2016 5:14 AM, Radoslaw Gruchalski ra...@gruchalski.com
wrote:
On Sat, Sep 17, 2016 12:39 PM, Radoslaw Gruchalski ra...@gruchalski.com
wrote:
Kafka is not a queue. It’s a distributed commit log.
–
Best regards,
Radek Gruchalski
ra...@gruchalski.com
On September 17, 2016 at 9:23:09 PM, kant kodali (kanth...@gmail.com)
wrote:
Hmm
Kafka uses murmur2 key hashing by default. You can also create your own custom
partitioner. The partitioner is set on a per-producer basis.
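A sketch of that default key-to-partition mapping; the murmur2 body mirrors the hash in Kafka's org.apache.kafka.common.utils.Utils, but treat it as illustrative rather than a drop-in replacement for the client's own hashing:

```java
// Sketch of the Java producer's default mapping from a message key to a partition.
public class DefaultPartitionerSketch {

    // 32-bit MurmurHash2, as used by the Java producer for keyed messages.
    public static int murmur2(final byte[] data) {
        int length = data.length;
        int seed = 0x9747b28c;
        final int m = 0x5bd1e995;
        final int r = 24;
        int h = seed ^ length;
        int length4 = length / 4;
        for (int i = 0; i < length4; i++) {
            final int i4 = i * 4;
            int k = (data[i4] & 0xff) + ((data[i4 + 1] & 0xff) << 8)
                  + ((data[i4 + 2] & 0xff) << 16) + ((data[i4 + 3] & 0xff) << 24);
            k *= m;
            k ^= k >>> r;
            k *= m;
            h *= m;
            h ^= k;
        }
        // Handle the trailing 1-3 bytes (intentional fall-through).
        switch (length % 4) {
            case 3: h ^= (data[(length & ~3) + 2] & 0xff) << 16;
            case 2: h ^= (data[(length & ~3) + 1] & 0xff) << 8;
            case 1: h ^= data[length & ~3] & 0xff;
                    h *= m;
        }
        h ^= h >>> 13;
        h *= m;
        h ^= h >>> 15;
        return h;
    }

    // Non-negative hash modulo the partition count picks the partition,
    // so the same key always lands on the same partition.
    public static int partitionFor(byte[] keyBytes, int numPartitions) {
        return (murmur2(keyBytes) & 0x7fffffff) % numPartitions;
    }
}
```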
--
Best regards,
Rad
On Sat, Sep 17, 2016 at 11:01 AM +0200, "kant kodali"
wrote:
so Zookeeper will select
I'm only guessing whether this is the reason:
Pull is much more sensible when a lot of data is pushed through. It allows
consumers to consume at their own pace, so slow consumers do not slow the whole
system down.
--
Best regards,
Rad
On Sat, Sep 17, 2016 at 11:18 AM +0200, "kant
John,
AFAIK no; however, this was suggested as part of the following JIRA:
https://issues.apache.org/jira/browse/KAFKA-3726
Feel free to upvote.
–
Best regards,
Radek Gruchalski
ra...@gruchalski.com
On October 4, 2016 at 12:17:08 AM, John Vines (jvi...@gmail.com) wrote:
Obligatory sorry if I
regards,
Radek Gruchalski
ra...@gruchalski.com
On November 9, 2016 at 12:27:53 PM, Ali Akhtar (ali.rac...@gmail.com) wrote:
It's probably not UTF-8 if it contains Turkish characters. That's why base64
encoding / decoding it might help.
On Wed, Nov 9, 2016 at 4:22 PM, Radoslaw Gruchalski <
props.put("value.serializer",
"org.apache.kafka.common.serialization.StringSerializer");
props.put("value.serializer.encoding", "ISO-8859-9");
My message is:
{"TW_USER_LOCATION":"Antalya,Türkiye"}
I have a problem with the "ü" character.
Is there an
Baris,
Kafka does not care about encoding, everything is transported as bytes.
What’s the configuration of your producer / consumer?
Are you using Java / JVM?
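A quick JVM sketch of why the charset matters even though the broker only moves bytes: the same string becomes different bytes under UTF-8 and ISO-8859-9 (Turkish, Latin-5), and decoding with the wrong charset garbles it.

```java
import java.nio.charset.Charset;
import java.nio.charset.StandardCharsets;

// Kafka stores and forwards raw bytes; any mangling of "ü" happens at
// serialization or deserialization time, on the client side.
public class CharsetRoundTrip {

    public static byte[] encode(String s, Charset cs) {
        return s.getBytes(cs);
    }

    public static String decode(byte[] bytes, Charset cs) {
        return new String(bytes, cs);
    }

    public static void main(String[] args) {
        Charset latin5 = Charset.forName("ISO-8859-9");
        String msg = "Türkiye";
        byte[] utf8 = encode(msg, StandardCharsets.UTF_8); // "ü" -> two bytes
        byte[] l5 = encode(msg, latin5);                   // "ü" -> one byte
        // Encoding with one charset and decoding with the other
        // does not round-trip: both of these print garbled text.
        System.out.println(decode(utf8, latin5));
        System.out.println(decode(l5, StandardCharsets.UTF_8));
    }
}
```

So the producer's value.serializer.encoding and whatever the consumer decodes with must agree; base64 only sidesteps the problem by making the payload pure ASCII.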
–
Best regards,
Radek Gruchalski
ra...@gruchalski.com
On November 9, 2016 at 11:42:02 AM, Baris Akgun (Garanti Teknoloji) (
.acks", "1");
Consumer side:
I am using the Spark Streaming Kafka API; I also tried the Kafka CLI and the
Java Kafka API, but I always face the same issue.
Thanks
*From:* Radoslaw Gruchalski [mailto:ra...@gruchalski.com]
*Sent:* Wednesday, November 9, 2016 1:49 PM
*To:
You can try cleanup.policy=compact.
But be careful with a large number of keys.
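A hypothetical per-topic sketch (topic name is made up):

```
# Topic-level override, e.g. passed at creation time with kafka-topics.sh:
#   --config cleanup.policy=compact
cleanup.policy=compact
# Compaction keeps the latest record per key, so the retained size grows
# with the number of distinct keys, not with throughput.
```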
–
Best regards,
Radek Gruchalski
ra...@gruchalski.com
On October 19, 2016 at 11:44:39 PM, Jesus Cabrera Reveles (
jesus.cabr...@encontrack.com) wrote:
Hello,
We are a company of IoT and we are trying to implement
Hi Raghav,
Have a look at AdminUtils:
https://github.com/apache/kafka/blob/trunk/core/src/main/scala/kafka/admin/AdminUtils.scala
–
Best regards,
Radek Gruchalski
ra...@gruchalski.com
On November 17, 2016 at 5:58:34 PM, Raghav (raghavas...@gmail.com) wrote:
Hi
I want to be able to create a
Banias,
This is a property for producers / consumers. Your producers / consumers
may not necessarily (and probably should not) have access to the ZooKeeper
cluster your Kafka cluster uses. That’s why you give them a list of Kafka
nodes with bootstrap.servers.
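A minimal client-side sketch (host names are made up):

```
# Producer/consumer config: a few brokers to bootstrap from; the client
# discovers the rest of the cluster from these.
bootstrap.servers=kafka-1:9092,kafka-2:9092
# No zookeeper.connect here: only the brokers talk to ZooKeeper.
```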
–
Best regards,
Radek Gruchalski
Hi Krystian,
I have no experience with the setup itself, but I know that any VMware
product will offer an API to fetch metrics.
You could write a client to fetch these and publish them to Kafka.
Kind regards,
Radek Gruchalski
ra...@gruchalski.com
+4917685656526
Hi,
I believe the answer is in the code. This is where the --compression-codec
is processed:
https://github.com/apache/kafka/blob/trunk/core/src/main/scala/kafka/tools/ConsoleProducer.scala#L143
and this is --producer-property: