[jira] [Commented] (KAFKA-3129) Console producer issue when request-required-acks=0

2017-06-12 Thread Vahid Hashemian (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16047074#comment-16047074
 ] 

Vahid Hashemian commented on KAFKA-3129:


[~pmishra01] I tried this on Ubuntu, Windows 7, and Windows 10 but was not able 
to reproduce it after a few tries.
Please note that the default {{acks}} value has changed from 0 to 1 as part of 
[this PR|https://github.com/apache/kafka/pull/1795]. So if you'd like to try 
producing with {{acks=0}} you'll have to override the default.
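
For example (assuming the console producer option named in the issue title is still available in your build; treat the exact flag as an assumption), the default can be overridden on the command line:

{code}
# explicitly ask for the old fire-and-forget behavior
bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test --request-required-acks 0 < messages.txt
{code}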

> Console producer issue when request-required-acks=0
> ---
>
> Key: KAFKA-3129
> URL: https://issues.apache.org/jira/browse/KAFKA-3129
> Project: Kafka
>  Issue Type: Bug
>  Components: producer 
>Affects Versions: 0.9.0.0, 0.10.0.0
>Reporter: Vahid Hashemian
>Assignee: Dustin Cote
> Attachments: kafka-3129.mov, server.log.abnormal.txt, 
> server.log.normal.txt
>
>
> I have been running a simple test case in which I have a text file 
> {{messages.txt}} with 1,000,000 lines (lines contain numbers from 1 to 
> 1,000,000 in ascending order). I run the console consumer like this:
> {{$ bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic test}}
> Topic {{test}} is on 1 partition with a replication factor of 1.
> Then I run the console producer like this:
> {{$ bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test < 
> messages.txt}}
> Then the console consumer starts receiving the messages. About half the time it 
> goes all the way to 1,000,000, but in other cases it stops short, usually 
> at 999,735.
> I tried running another console consumer on another machine and both 
> consumers behave the same way. I can't see anything related to this in the 
> logs.
> I also ran the same experiment with a similar file of 10,000 lines and saw 
> similar behavior: when the consumer does not receive all 10,000 messages it 
> usually stops at 9,864.





[jira] [Commented] (KAFKA-3129) Console producer issue when request-required-acks=0

2017-06-10 Thread Pankaj (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16045812#comment-16045812
 ] 

Pankaj commented on KAFKA-3129:
---

[~cotedm], [~ijuma], [~vahid] I am also facing the same issue: my Kafka console 
producer fails to write all the data. Please find below the steps to reproduce.

1- Make a text file (my_file.txt) with 800 records, one per line: "Message 1", 
"Message 2", ..., "Message 800"

2- Start the ZooKeeper server
.\bin\windows\zookeeper-server-start.bat .\config\zookeeper.properties

3- Start the broker
.\bin\windows\kafka-server-start.bat .\config\server.properties

4- Create topic "test"
.\bin\windows\kafka-topics.bat --create --topic test --zookeeper localhost:2181 --partitions 1 --replication-factor 1

5- Start the consumer
.\bin\windows\kafka-console-consumer.bat --topic test --zookeeper localhost:2181

6- Send the file to the producer
.\bin\windows\kafka-console-producer.bat --broker-list localhost:9092 --topic test < my_file.txt

I have executed all the above steps with the following configurations (Windows 7, 
kafka_2.11-0.10.0.1):
1- Default Kafka configuration - NOK (sometimes the consumer receives 366 messages, 
sometimes 700).
2- Updated kafka producer.properties with acks=1 and acks=all - neither worked, 
still NOK (same behavior as above); see the note sketched below.
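
A side note on step 2 above (this is my understanding, so treat it as an assumption 
rather than a confirmed fact): the console producer does not automatically read 
config\producer.properties, so editing that file alone may have no effect. The acks 
setting would need to be passed on the command line instead, e.g.:

{code}
.\bin\windows\kafka-console-producer.bat --broker-list localhost:9092 --topic test --request-required-acks 1 < my_file.txt
{code}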

Please suggest whether this issue has been fixed. I'm facing a critical problem in 
production.



[jira] [Commented] (KAFKA-3129) Console producer issue when request-required-acks=0

2016-09-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15471028#comment-15471028
 ] 

ASF GitHub Bot commented on KAFKA-3129:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1795




[jira] [Commented] (KAFKA-3129) Console producer issue when request-required-acks=0

2016-08-29 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15445771#comment-15445771
 ] 

ASF GitHub Bot commented on KAFKA-3129:
---

GitHub user cotedm opened a pull request:

https://github.com/apache/kafka/pull/1795

KAFKA-3129: Console producer issue when request-required-acks=0

change console producer default acks to 1, update acks docs.  Also added 
the -1 config to the acks docs since that question comes up often.  @ijuma and 
@vahidhashemian, does this look reasonable to you?
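
For anyone following along, here is a minimal sketch (not the PR itself; the class 
name, topic, and bootstrap address are placeholders) of what the three acks settings 
being documented look like on the Java producer:

{code}
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class AcksExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        // acks=0  : fire and forget, no broker acknowledgement (the old console producer default)
        // acks=1  : the partition leader acknowledges the write (the default this PR proposes)
        // acks=all: same as -1, the leader waits for the full ISR before acknowledging
        props.put(ProducerConfig.ACKS_CONFIG, "1");

        // send a single record and close; close() flushes what the producer believes is outstanding
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("test", "Message 1"));
        }
    }
}
{code}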

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/cotedm/kafka KAFKA-3129

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1795.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1795


commit bec755ffc4e4b779a6c6d45b144a7e3a87dc64d7
Author: Dustin Cote 
Date:   2016-08-29T12:44:37Z

change console producer default acks to 1, update acks docs






[jira] [Commented] (KAFKA-3129) Console producer issue when request-required-acks=0

2016-08-26 Thread Vahid Hashemian (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15439764#comment-15439764
 ] 

Vahid Hashemian commented on KAFKA-3129:


[~cotedm] Thanks for looking into this. I think if we are going to accept the 
current behavior (which is fine by me), this defect should be documented, and the 
default acks (as you mentioned) should be set to something other than 0 so this 
issue does not surface with default settings.



[jira] [Commented] (KAFKA-3129) Console producer issue when request-required-acks=0

2016-08-26 Thread Dustin Cote (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15439670#comment-15439670
 ] 

Dustin Cote commented on KAFKA-3129:


What I'm seeing is that we are faking the callback 
{code}org.apache.kafka.clients.producer.internals.Sender#handleProduceResponse{code}
 for the case where acks=0.  This is a problem because the callback gets 
generated when we do 
{code}org.apache.kafka.clients.producer.internals.Sender#createProduceRequests{code}
 in the run loop but the actual send happens a bit later.  When close() comes 
in that window between createProduceRequests and the send, you get messages 
that are lost.  Funny thing is that if you slow the call stack down a bit by 
turning on something like strace, the issue goes away, so it's hard to tell 
exactly which layer is buffering the requests.
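
To put that window in user-level terms, here is a rough sketch of the reporter's 
scenario with the Java producer (acks=0, write a counter, close immediately). Treat 
it as an illustration of the race described above rather than a guaranteed 
reproduction; the class name, topic, and bootstrap address are placeholders:

{code}
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class AcksZeroRepro {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        // fire and forget: with acks=0 the produce "response" is faked when the request is created
        props.put(ProducerConfig.ACKS_CONFIG, "0");

        KafkaProducer<String, String> producer = new KafkaProducer<>(props);
        for (int i = 1; i <= 1_000_000; i++) {
            producer.send(new ProducerRecord<>("test", Integer.toString(i)));
        }
        // Because the callbacks already completed, close() may return while the last
        // batches are still waiting to hit the network; those are the messages that vanish.
        producer.close();
    }
}
{code}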

So my question is: do we want to risk a small performance hit for all producers 
in order to guarantee that all messages sent with acks=0 actually make it out of 
the producer, knowing full well that they won't be verified to have made it to 
the broker?  I personally don't feel it's worth the extra locking complexity; it 
could be documented as a known durability issue for when you aren't using 
durability settings.  If we go that route, I feel the console producer should 
have acks=1 by default.  That way, users who are getting started with the 
built-in tools have a basic durability guarantee and can tune for performance 
instead.  What do you think, [~ijuma] and [~vahid]?
