Hi all,
I'm trying to produce messages to a secured HDP 2.3 Kafka broker from a different node
using a Java producer client, but I'm facing the issue below. Any ideas on how to configure
Kerberos so that I can produce messages to Kafka? Thank you.
[2015-11-06 07:18:57,435] INFO Closing socket connection to /10.0.2.17.
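A hedged sketch of the usual client-side Kerberos wiring for Apache Kafka 0.9+ clients follows; paths, keytab, principal, and realm are placeholders, not taken from this thread, and HDP's patched 0.8.2 broker may use a different protocol name (check the Hortonworks docs for your exact version):

```
# producer.properties (client side) -- example values only
security.protocol=SASL_PLAINTEXT
sasl.kerberos.service.name=kafka

# kafka_client_jaas.conf, passed to the JVM via
# -Djava.security.auth.login.config=/path/to/kafka_client_jaas.conf
KafkaClient {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  storeKey=true
  keyTab="/etc/security/keytabs/kafka_client.keytab"
  principal="kafkaclient@EXAMPLE.COM";
};
```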
Hi Gwen,
Thanks a lot for saving my time.
The problem was that under "Modules", the 'output path' and 'test output
path' for Kafka-0.8.2.1 (i.e. the main source directory) were the same:
/Users/pbharaj/Desktop/Dev/OpenSource4/Kafka-0.8.2.1/build
After changing both, it's working fine now.
Thank
In "Project Structure", under "Modules", look at the "Paths" tab for the modules.
You should see separate paths for main and test. For example:
Output: /Users/gwen/workspaces/kafka/clients/build/classes/main
Test output: /Users/gwen/workspaces/kafka/clients/build/classes/test
Make sure you have somethi
Hi Gwen,
After your email, I tried debugging the test in
unit.kafka/producer/ProducerTest.scala; I can debug it within IntelliJ.
I'm not trying to debug the tests, but tools like
ConsumerOffsetChecker, as it has a main method.
When I do that, I get this error:
Error:scalac: Output path
/Use
I also have a gradle plugin, but I don't think it's used for debugging.
What prevents you from debugging? What happens when you right-click on a
test and select "debug"?
On Thu, Nov 5, 2015 at 7:15 PM, Prabhjot Bharaj
wrote:
> Hi Jeff,
>
> So, if you've the Kafka source in a directory, how have
Hi Jeff,
So, if you have the Kafka source in a directory, how have you set up your
IntelliJ?
I have used ./gradlew idea, as mentioned in some posts.
But how do you debug in IntelliJ?
I use the bare-bones IntelliJ with the Scala plugin. Do you have any other
plugin to support debugging?
Regards,
Prabhj
For what it's worth, I've never done the gradlew idea thing and debug/unit
testing in Intellij works fine for me.
On Thu, Nov 5, 2015 at 6:26 PM, Rad Gruchalski wrote:
> It never worked for me. I might have to try again. And, yes, that was
> after generating the intellij stuff with gradle.
> W
I am not sure about this. It might be related to your GC settings. But I am
not sure why it only occurs on Friday night.
Thanks,
Mayuresh
On Tue, Nov 3, 2015 at 3:01 AM, Gleb Zhukov wrote:
> Hi, Mayuresh. No, this log before restart 61.
> But I found some interesting logs about ZK on problem b
It never worked for me. I might have to try again. And, yes, that was after
generating the intellij stuff with gradle.
Will give it a shot again and, if I still have issues, ask here.
Kind regards,
Radek Gruchalski
ra...@gruchalski.com
Running tests from IntelliJ is fairly easy: you click on the test name and
select "run" or "debug"; if you select "debug", it honors breakpoints.
Rad, what happens when you try to run a test within IntelliJ?
On Thu, Nov 5, 2015 at 2:55 PM, Dong Lin wrote:
> Hi Rad,
>
> I never use intellij to r
Hi Rad,
I never use IntelliJ to run tests for Kafka. It is probably easier to run them
via the command line. You can check README.md for more information on how to
run tests.
Dong
On Thu, Nov 5, 2015 at 2:33 PM, Rad Gruchalski wrote:
> Dong,
>
> Does it allow running, say, tests in debug? I tried tha
Dong,
Does it allow running, say, tests in debug? I tried that and never managed to
get any test to run in IntelliJ. Say, to set some breakpoints and debug...
Kind regards,
Radek Gruchalski
ra...@gruchalski.com
Wow, that's interesting info, thanks for the tip!
On Fri, Nov 6, 2015 at 1:27 AM Dong Lin wrote:
> Hi,
>
> If you want to browse the Kafka code in IntelliJ, you can set up the IntelliJ
> project by running ./gradlew idea.
>
> Hope it helps,
> Dong
>
> On Thu, Nov 5, 2015 at 3:16 AM, Prabhjot Bharaj
> wrot
Hi,
If you want to browse the Kafka code in IntelliJ, you can set up the IntelliJ
project by running ./gradlew idea.
Hope it helps,
Dong
On Thu, Nov 5, 2015 at 3:16 AM, Prabhjot Bharaj
wrote:
> Hi,
>
> I'm using kafka 0.8.2.1 version with IntelliJ.
> Sometimes, I change the code and build it using this c
Hi all
I have a Docker environment; my HDP 2.3 cluster, which includes Kafka, runs on 4
nodes, and I have another two nodes where my application, with its Kafka producer
and consumer, sits. I need to talk to the Hortonworks HDP 2.3 Kafka topics and
produce and consume messages. I have modified kafka-server proper
Hi Jeff,
The java doc is very nice, thank you and thanks to whoever wrote it.
I do have one question about the API. For what we're doing, it's important
for us to calculate the "lag", or pending message count. Today we do that by
using the simple consumer to ask Kafka for the committed offset (beca
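The arithmetic behind the lag number itself is simple once both offsets are in hand; a minimal sketch follows (the helper class below is hypothetical, not part of any Kafka API): per partition, lag is the log-end offset minus the committed offset.

```java
import java.util.HashMap;
import java.util.Map;

public class LagCalc {
    // Pending-message count per partition: log-end offset minus committed
    // offset, floored at zero in case a stale committed offset exceeds the end.
    static Map<Integer, Long> lag(Map<Integer, Long> logEnd, Map<Integer, Long> committed) {
        Map<Integer, Long> out = new HashMap<>();
        for (Map.Entry<Integer, Long> e : logEnd.entrySet()) {
            long c = committed.getOrDefault(e.getKey(), 0L);
            out.put(e.getKey(), Math.max(0L, e.getValue() - c));
        }
        return out;
    }

    public static void main(String[] args) {
        Map<Integer, Long> end = new HashMap<>();
        end.put(0, 100L);
        end.put(1, 50L);
        Map<Integer, Long> committed = new HashMap<>();
        committed.put(0, 90L);
        // partition 0 has consumed up to 90 of 100; partition 1 has consumed nothing
        System.out.println(lag(end, committed));
    }
}
```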
The most severe issue I've run into is that a poorly timed GC pause can
lead to a situation where rebalancing leaves a partition completely
un-owned. It's important to make sure that rebalance.max.retries *
rebalance.backoff.ms is longer than any GC pause that your consumers
experience.
A mor
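With the old high-level consumer's documented defaults (rebalance.max.retries=4, rebalance.backoff.ms=2000), that retry window is only about 8 seconds, so a 10-second GC pause can exhaust it. A quick sketch of the check (class and variable names are illustrative):

```java
public class RebalanceWindow {
    // Total time the high-level consumer keeps retrying a rebalance, in ms:
    // it attempts maxRetries times, backing off backoffMs between attempts.
    static long retryWindowMs(int maxRetries, long backoffMs) {
        return maxRetries * backoffMs;
    }

    public static void main(String[] args) {
        long window = retryWindowMs(4, 2000L); // 0.8.x documented defaults
        long worstGcPauseMs = 10_000L;         // example observed pause
        System.out.println(window > worstGcPauseMs
            ? "defaults cover the pause"
            : "raise retries/backoff to exceed " + worstGcPauseMs + " ms");
    }
}
```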
Hello Folks,
Requesting your expertise on this
Thanks,
Prabhjot
On Thu, Nov 5, 2015 at 4:46 PM, Prabhjot Bharaj
wrote:
> Hi,
>
> I'm using kafka 0.8.2.1 version with IntelliJ.
> Sometimes, I change the code and build it using this command:
>
> ./gradlew -PscalaVersion=2.11.7 releaseTarGz
>
> I
Hi Jeff,
Thanks for your response.
On the Scala side, is there a Producer implementation that I could use? Is the
Java-based KafkaProducer (org.apache.kafka.clients.producer.KafkaProducer) the
same as the Producer in Producer.scala?
Thanks,
Prabhjot
On Thu, Nov 5, 2015 at 11:28 PM, Jeff Holoman wrote:
>
Hello Folks,
I am evaluating some failure scenarios during consumer rebalance in the
high-level consumer.
The idea of this test is to learn the pain points, from an
operational/maintenance standpoint, that I need to consider when a consumer
rebalance takes place.
Also, if there are any known
The best thing that I know is the latest javadoc that's committed to trunk:
https://github.com/apache/kafka/blob/ef5d168cc8f10ad4f0efe9df4cbe849a4b35496e/clients/src/main/java/org/apache/kafka/clients/consumer/KafkaConsumer.java
Thanks
Jeff
On Thu, Nov 5, 2015 at 12:51 PM, Cliff Rhyne wrote:
Hi Jeff,
Is there a writeup of how to use the new consumer API (either in general or
for Java)? I've seen various proposals but I don't see a recent one on the
actual implementation. My team wants to start the development work to
migrate to 0.9.
Thanks,
Cliff
On Thu, Nov 5, 2015 at 11:18 AM, J
Prabhjot,
The answer changes slightly for the Producer and Consumer and depends on
your timeline and comfort with using new APIs.
Today and in the future, for the Producer, you should be using the "new"
producer, which isn't all that new anymore:
org.apache.kafka.clients.producer.KafkaProducer;
Hello Folks,
Requesting your expertise on this.
I see that under core/src/main/scala/kafka/producer/, there are many
implementations - Producer.scala and SyncProducer.scala
Also, going by ProducerPerformance.scala, there are 2 implementations
- NewShinyProducer (which points to KafkaProducer
Hi Scott,
Added it here https://cwiki.apache.org/confluence/display/KAFKA/Powered+By.
Thank you,
Grant
On Thu, Nov 5, 2015 at 8:52 AM, Scott Krueger
wrote:
> Dear Kafka users,
>
>
> Could someone kindly add the following to the Kafka "Powered by" page,
> please?
>
>
> [skyscanner](http://www.s
I'm seeing a few (50+ in a couple of hours) warning messages like this:
2015-10-30 06:22:11,086 WARN kafka.utils.Logging$class:83
[kafka-request-handler-0] [warn] Broker 175 ignoring LeaderAndIsr request
from controller 175 with correlation id 18359 epoch 11 for partition
[mytpoic,1337] since its
Dear Kafka users,
Could someone kindly add the following to the Kafka "Powered by" page, please?
[skyscanner](http://www.skyscanner.net/) |
[skyscanner](http://www.skyscanner.net/), the world's travel search engine,
uses Kafka for real-time log and event ingestion. It is the integration point
Thanks, Prabhjot
I know that running out of space on disks can cause a Kafka shutdown, but that
is not the case here; there is a lot of free space.
On Thu, Nov 5, 2015 at 6:08 AM, Prabhjot Bharaj
wrote:
> Hi Vadim,
>
> Did you see your hard disk partition getting full where kafka data
> directory i
Hi Gleb,
No, no ZooKeeper-related errors. The only suspicious line I see immediately
preceding the shutdown is this:
2015-11-03 01:53:20,810] INFO Reconnect due to socket error:
java.nio.channels.ClosedChannelException (kafka.consumer.SimpleConsumer)
which makes me think it could be some networ
Hi,
I'm using kafka 0.8.2.1 version with IntelliJ.
Sometimes, I change the code and build it using this command:
./gradlew -PscalaVersion=2.11.7 releaseTarGz
In some cases, I feel the need for debugging the code within IntelliJ.
e.g. I currently want to debug the ConsumerOffsetChecker to see h
Hi Vadim,
Did you see the hard disk partition where the Kafka data
directory is present getting full?
It could be because you have set log retention to a large value, whereas
your input data may be taking up the full disk space. In that case, move some
data out of that disk partition, set log retenti
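For reference, the broker settings involved look roughly like this; the values below are examples only, to be tuned to the actual disk budget (whichever of the time and size limits is hit first wins):

```
# server.properties -- example retention settings
log.retention.hours=48
# size-based cap per partition
log.retention.bytes=10737418240
# how often the retention checker runs
log.retention.check.interval.ms=300000
```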
Hi, Vadim. Do you see something like this: "zookeeper state changed
(Expired)" in kafka's logs?
On Wed, Nov 4, 2015 at 6:33 PM, Vadim Bobrov wrote:
> Hi,
>
> does anyone know in what cases Kafka will take itself down? I have a
> cluster of 2 nodes that went down (not crashed) this night in a con
Adding users as well
On Thu, Nov 5, 2015 at 3:37 PM, Prabhjot Bharaj
wrote:
> Hi,
>
> I'm using the latest update: 0.8.2.2
> I would like to use the latest producer and consumer APIs.
> Over the past few weeks, I have tried to do some performance benchmarking
> using the producer and consumer scr
Hi,
My 3-node cluster (2x4-core Xeon, 8 GB RAM per machine, striped HDD) had
crashed with OOM when I had some 5-6 topics with 256 partitions per topic. I
don't remember the heap size I used; I think it was the default one that is
there in the 0.8.2.1 bundle.
I have been trying to come up with the numb
Hello Prabhjot,
Actually, what I meant in the previous email is that there are 200 topics with
1 partition each, so there are 200 total partitions. Is there any rule of
thumb regarding this matter? Can you share your configuration (including
spec, JVM memory, etc.) for the Kafka cluster?
Thank you,
On Thu
Hi,
Not sure. But I had hit OOM when using too many partitions and many topics
with a smaller heap size assigned to the JVM.
Regards,
Prabhjot
On Thu, Nov 5, 2015 at 10:40 AM, Muqtafi Akhmad
wrote:
> Hello all,
> Recently I had an incident with our Kafka cluster; I found OutOfMemoryError in
> kafka se
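One rough rule of thumb (an assumption, not an official formula): each partition being fetched can buffer up to one fetch-sized message batch, so worst-case fetch-buffer memory grows roughly linearly with partition count. A sketch of that back-of-the-envelope estimate:

```java
public class FetchMemoryEstimate {
    // Rough worst-case fetch-buffer memory: one fetch-sized buffer per
    // partition (e.g. fetch.message.max.bytes, default 1 MB in 0.8.x).
    static long estimateBytes(int partitions, long fetchMaxBytes) {
        return (long) partitions * fetchMaxBytes;
    }

    public static void main(String[] args) {
        // 200 topics x 1 partition each, default 1 MB fetch size
        long bytes = estimateBytes(200, 1_048_576L);
        System.out.println(bytes / (1024 * 1024) + " MB of fetch buffers");
    }
}
```

By this estimate, 5-6 topics with 256 partitions each would need well over 1 GB of fetch buffers alone, which fits the OOM reports above when the default heap is used.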