Hi, I've searched the mailing list archive but found nothing. I'm
wondering how to prevent a test producer from sending dirty messages to a
production broker?
--
zhaown
There is a security proposal in the works
(https://cwiki.apache.org/confluence/display/KAFKA/Security) but nothing yet.
How to prevent your scenario is going to depend a little on the
circumstances under which test messages end up in production,
but it is something you have to take care of
Hi, I'm still using Kafka 0.7, and was wondering if there's a way I can
manually commit offsets per Topic/Partition.
I was expecting something like KafkaMessageStream#commitOffset()
I couldn't find an API for this. If there isn't any, is there a reason?
Regards,
Roman
I am sorry, Neha, for replying so late. I will try that and let you know
either way.
Thanks Neha,
2013/10/31 Neha Narkhede neha.narkh...@gmail.com
The default replication factor is 1, so with even one broker failure you
can run into a situation where some partitions have no leader.
You
The API you are looking for is ConsumerConnector.commitOffsets(). However,
note that it will commit offsets for all partitions that the particular
consumer connector owns at that time, and the set of partitions owned by a
particular consumer connector can change over time.
Thanks,
Neha
On Mon,
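A minimal sketch of the commitOffsets() usage described above, against the
0.8-era high-level consumer Java API (the property values are placeholders):

import java.util.Properties;
import kafka.consumer.Consumer;
import kafka.consumer.ConsumerConfig;
import kafka.javaapi.consumer.ConsumerConnector;

public class CommitOffsetsSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("zookeeper.connect", "localhost:2181"); // placeholder
        props.put("group.id", "my-group");                // placeholder
        ConsumerConnector connector =
            Consumer.createJavaConsumerConnector(new ConsumerConfig(props));
        // ... consume from the streams this connector owns ...
        // Commits offsets for ALL partitions the connector owns at this moment,
        // which is the caveat Neha points out above.
        connector.commitOffsets();
        connector.shutdown();
    }
}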
Hi everyone,
I'm using Kafka 0.8.0 from the git repository.
I'm trying the following command:
bin/kafka-reassign-partitions.sh --topics-to-move-json-file topics-to-move.json
--zookeeper zk-qa.us-e.roomkey.net:2181/kafka_stage/kafka_qa --broker-list 1
--execute
where topics-to-move.json is:
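(For reference, the format this tool expects looks like the following, with
a hypothetical topic name:)

{"topics": [{"topic": "my-topic"}],
 "version": 1}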
Thanks, Kane. I had the same problem with assembly-package-dependency and
this fixes it.
Jun, will scalaVersion be changed to 2.10.x? Currently it's set to 2.8.0
and crossScalaVersions includes 2.10.1, but I believe that's to publish to
the local ivy repo; I publish to the local maven repo and I need
Right Neha, thanks. I was aware of that particular API, but I process
topics within separate threads (actors), so I cannot assume every
topic/partition is ready to commit its offset at the same time. Instead, I
could use a per-stream (topic/partition) commit.
Is this use case valid? Should there be a ticket to
Ok,
after adding a delay before enabling a freshly started broker in the
metadata VIP that clients use, the number of these topic creation requests
has dropped drastically. However, not all of them are gone.
I still see (only sometimes) a handful of topic creation log messages
that happen
Hello All,
I am getting this error and I am not sure what it means. Specifically the
"Wrong request type 768" part:
[2013-11-04 16:56:47,540] ERROR Closing socket for /0:0:0:0:0:0:0:1 because
of error (kafka.network.Processor)
kafka.common.KafkaException: Wrong request type 768
at
Good afternoon. I was under the impression that if auto commit is set to
false, then once the consumer is restarted, the logs would be replayed from
the beginning. Is that correct?
Thanks,
Vadim
That is correct. If auto.commit.enable is set to false, the offsets will
not be committed at all unless the consumer calls the commit function
explicitly.
Guozhang
On Mon, Nov 4, 2013 at 2:42 PM, Vadim Keylis vkeylis2...@gmail.com wrote:
Good afternoon. I was under the impression that if auto commit
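To illustrate Guozhang's point, a minimal sketch of disabling auto commit on
the 0.8-era high-level consumer (values are placeholders; imports as in the
commitOffsets sketch earlier in this digest):

Properties props = new Properties();
props.put("zookeeper.connect", "localhost:2181"); // placeholder
props.put("group.id", "my-group");                // placeholder
props.put("auto.commit.enable", "false");         // offsets persist only on explicit commitOffsets()
ConsumerConnector connector =
    Consumer.createJavaConsumerConnector(new ConsumerConfig(props));
// process a batch of messages, then:
connector.commitOffsets();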
Hello Ahmed,
Currently we have only 10 Kafka request types in total, including the
produce request, fetch request, and topic metadata request, and these
requests are encoded with a key currently ranging from 0 to 9. When the
Kafka server receives a request with a key it cannot recognize, that
exception will
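As an aside, 768 is 0x0300, which hints at how this happens: the server
reads a 2-byte request key from the head of each request, so stray bytes
from a non-Kafka (or misconfigured) client decode to an arbitrary short. A
small standalone illustration (a sketch, not Kafka's actual networking
code):

import java.nio.ByteBuffer;

public class RequestKeySketch {
    public static void main(String[] args) {
        // Suppose a client sent the bytes 0x03 0x00 where the request key belongs.
        ByteBuffer buffer = ByteBuffer.wrap(new byte[] {0x03, 0x00});
        short requestKey = buffer.getShort(); // Kafka expects a value in 0..9 here
        System.out.println(requestKey);       // prints 768 -- hence "Wrong request type 768"
    }
}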
Hello Joe,
Do you see any exceptions in the controller or state-change logs?
Where are these two topics originally located?
Guozhang
On Mon, Nov 4, 2013 at 9:47 AM, Joseph Lawson jlaw...@roomkey.com wrote:
Hi everyone,
I'm using Kafka 0.8.0 from the git repository.
I'm trying the
Thanks for confirming, but that is not the behavior I observe. My consumer
does not commit data to Kafka; it gets the messages sent to Kafka. Once
restarted, I should have gotten the messages previously received by the
consumer, but on the contrary I got none. The logs confirm the initial
offset was -1. What am I
Any exceptions you saw from the broker end, in the server log?
Guozhang
On Mon, Nov 4, 2013 at 4:27 PM, Vadim Keylis vkeylis2...@gmail.com wrote:
Thanks for confirming, but that is not the behavior I observe. My consumer
does not commit data to Kafka; it gets the messages sent to Kafka. Once restarted, I
I am trying to send Kafka metrics to a Ganglia server for display, using
the latest download from https://github.com/adambarthelson/kafka-ganglia.
Here's my kafka_metrics.json:
{
  "servers" : [ {
    "port" : ,
    "host" : "ecokaf1",
    "queries" : [ {
      "outputWriters" : [ {
        "@class" :
It looks like you are missing quotes in the object name. Here is a snippet
from our jmxtrans configs:
"resultAlias": "ReplicaManager",
"obj": "\"kafka.server\":type=\"ReplicaManager\",name=\"*\"",
"attr": [
  "Count",
  "OneMinuteRate",
  "MeanRate",
  "Value"
]
Unless more recent
Hi, Joe
Thanks for replying. I've found that proposal, which was last updated 2
months ago, and I think maybe I don't need that much security. A simple way
to keep arbitrary messages out is enough for me.
For your solution, how do you ensure isProduction=0 in the test environment?
What if huge amounts of
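(One way to read the isProduction idea is as a purely client-side guard. The
sketch below is hypothetical from end to end -- the APP_ENV variable and the
"prod" host-naming convention are invented for illustration, and nothing
like this exists in Kafka itself:)

public class ProducerGuard {
    // Refuse to let a test process target production brokers.
    static void assertAllowedTarget(String brokerList) {
        boolean isTestEnv = "test".equals(System.getenv("APP_ENV")); // hypothetical env var
        if (isTestEnv && brokerList.contains("prod")) {              // hypothetical naming convention
            throw new IllegalStateException(
                "Test producer must not target production brokers: " + brokerList);
        }
    }

    public static void main(String[] args) {
        assertAllowedTarget("prod-broker1:9092"); // throws when APP_ENV=test
    }
}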
I am using:
<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka_2.9.2</artifactId>
    <version>0.8.0-beta1</version>
</dependency>
I am trying to rebuild iron-count
https://github.com/edwardcapriolo/IronCount/tree/iron-ng
against kafka 0.8.0
I
Hi Ed:
Regarding testing the TestTopology example program on my GitHub ...
do you see the words successful completion printed out towards the
end of the test run?
I am assuming that you ran mvn exec:java ...etc.. as specified in the README
file.. is that correct?
On Nov 4, 2013 7:52 PM,
I have success when the number of messages is less than ~1200. With more
than 1200 it never completes.
Try changing the program to this:
tkp = new TestKafkaProducer(
    theTopic,
    "localhost:" + zookeeperTestServer.getPort(),
    4000);
You need to set auto.offset.reset=smallest. By default, the consumer
will start consuming the latest messages.
Thanks,
Neha
On Mon, Nov 4, 2013 at 4:38 PM, Guozhang Wang wangg...@gmail.com wrote:
Any exceptions you saw from the broker end, in the server log?
Guozhang
On Mon, Nov 4, 2013 at
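For reference, the property Neha mentions goes into the high-level
consumer's config (0.8-era property name; the surrounding setup is the same
as in the consumer sketches earlier in this digest):

// With no committed offset, start from the earliest available message
// rather than the latest.
props.put("auto.offset.reset", "smallest");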
Joe,
Could you try -
bin/kafka-reassign-partitions.sh --topics-to-move-json-file
topics-to-move.json --zookeeper
zk-qa.us-e.roomkey.net:2181/kafka_stage/kafka_qa --broker-list 1
Store the partition assignment output in a file partition-assignment.json,
then use that to execute the assignment -
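(A sketch of the execute step, assuming the --reassignment-json-file flag
from later 0.8.x builds of the tool -- the exact flag name may differ in the
version you are running:)

bin/kafka-reassign-partitions.sh --zookeeper
zk-qa.us-e.roomkey.net:2181/kafka_stage/kafka_qa
--reassignment-json-file partition-assignment.json --execute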
Currently, the only way to achieve that is to use the SimpleConsumer API.
We are considering the feature you mentioned for the 0.9 release -
https://cwiki.apache.org/confluence/display/KAFKA/Client+Rewrite#ClientRewrite-ConsumerAPI
Thanks,
Neha
On Mon, Nov 4, 2013 at 9:57 AM, Roman Garcia
Are you using a non-java client to send these requests?
Thanks,
Neha
On Mon, Nov 4, 2013 at 3:36 PM, Guozhang Wang wangg...@gmail.com wrote:
Hello Ahmed,
Currently we only have 10 total Kafka request types, including produce
request, fetch request and topic metadata request. And these
Ok, so this can happen even if the node has not been placed back into
rotation at the metadata VIP?
On Tue, Nov 5, 2013 at 12:11 AM, Neha Narkhede neha.narkh...@gmail.com wrote:
It is probably due to the same metadata propagation issue.
https://issues.apache.org/jira/browse/KAFKA- should
Do you think our cross compilation can be extended to scala 2.10.2?
Thanks,
Jun
On Mon, Nov 4, 2013 at 9:48 AM, Ngu, Bob bob@intel.com wrote:
Thanks, Kane. I had the same problem with assembly-package-dependency and
this fixes it.
Jun, will scalaVersion be changed to 2.10.x? Currently
I'm using it with scala 2.10.2.
On Mon, Nov 4, 2013 at 9:41 PM, Jun Rao jun...@gmail.com wrote:
Do you think our cross compilation can be extended to scala 2.10.2?
Thanks,
Jun
On Mon, Nov 4, 2013 at 9:48 AM, Ngu, Bob bob@intel.com wrote:
Thanks, Kane. I had the same problem with
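(For anyone following along, a sketch of the sbt settings under discussion,
assuming Kafka's sbt build of that era -- the exact file and syntax may
differ:)

scalaVersion := "2.8.0"
// Extending this list to 2.10.2 is what is being proposed above.
crossScalaVersions := Seq("2.8.0", "2.9.2", "2.10.1")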
Paul,
Thank you! I didn't realize the quotes are important to keep.
I have a question on name=\"*\":
for type=ReplicaManager, there are at least 4 MBean names, e.g.
kafka.server:name=PartitionCount,type=ReplicaManager
kafka.server:name=LeaderCount,type=ReplicaManager
Thanks so much, Neha. That did the trick.
On Mon, Nov 4, 2013 at 8:43 PM, Neha Narkhede neha.narkh...@gmail.com wrote:
You need to set auto.offset.reset=smallest. By default, the consumer
will start consuming the latest messages.
Thanks,
Neha
On Mon, Nov 4, 2013 at 4:38
Can this unintentional topic creation be avoided by setting
auto.create.topics.enable=false?
On Mon, Nov 4, 2013 at 9:40 PM, Jason Rosenberg j...@squareup.com wrote:
Ok, so this can happen even if the node has not been placed back into
rotation at the metadata VIP?
On Tue, Nov 5, 2013 at
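(For reference, the broker-side setting in question; it goes in
server.properties and defaults to true in 0.8:)

auto.create.topics.enable=false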
Hi,
I want to control the commitOffset signal to ZK from the high-level Kafka
consumer. This will let the consumer process the message, and in case
of failure the offset is not moved.
If I do
auto.commit.enable=false
and then use the commitOffsets API of the high-level consumer, is that the way to