[jira] [Commented] (KAFKA-4430) Broker logging "Topic and partition to exceptions: [topic,6] -> kafka.common.MessageSizeTooLargeException"

2016-11-21 Thread Srinivas Dhruvakumar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4430?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15685998#comment-15685998
 ] 

Srinivas Dhruvakumar commented on KAFKA-4430:
-

I am still a noob. How do I confirm the above? Are there any debug statements I 
can enable? Thanks, and sorry for the inconvenience.

> Broker logging "Topic and partition to exceptions: [topic,6] -> 
> kafka.common.MessageSizeTooLargeException"
> --
>
> Key: KAFKA-4430
> URL: https://issues.apache.org/jira/browse/KAFKA-4430
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.9.0.1
> Environment: Production 
>Reporter: Srinivas Dhruvakumar
>  Labels: newbie
>
> I have a setup as below: 
> DC Kafka -> MirrorMaker -> Aggregate Kafka
> Here are the settings: I have set max.message.bytes to 1 MB on both the DC 
> and AGG Kafka sides. MirrorMaker producer settings: batch.size is set to 
> 500 KB, max.request.size is set to 1 MB, acks=0, and compression is gzip.
> However, on the Aggregate Kafka I get the following exception: 
> Closing connection due to error during produce request with correlation id 
> 414156659 from client id producer-1 with ack=0
> Topic and partition to exceptions: [topic1,6] -> 
> kafka.common.MessageSizeTooLargeException
> Is this a bug, or why would this happen? I have configured MirrorMaker to 
> send messages smaller than 1 MB. Are the messages getting dropped? Under what 
> circumstances does this error occur?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-4430) Broker logging "Topic and partition to exceptions: [topic,6] -> kafka.common.MessageSizeTooLargeException"

2016-11-21 Thread huxi (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4430?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15685911#comment-15685911
 ] 

huxi commented on KAFKA-4430:
-

Maybe you could confirm this: the complete serialized size of the message + 12 <= 
message.max.bytes
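The arithmetic behind that check can be sketched as follows. This is a hedged illustration, not Kafka's source: the 12-byte figure is the old (0.9.x) message format's per-message log overhead, an 8-byte offset plus a 4-byte size field, so a message only fits if its serialized size plus that overhead is within message.max.bytes.

```java
// Sketch of the size check huxi describes (illustrative, not Kafka's code).
public class MessageSizeCheck {
    static final int LOG_OVERHEAD = 12; // 8-byte offset + 4-byte size field

    static boolean fitsBroker(int serializedSize, int messageMaxBytes) {
        // The broker stores each message with LOG_OVERHEAD extra bytes,
        // so the effective payload limit is messageMaxBytes - 12.
        return serializedSize + LOG_OVERHEAD <= messageMaxBytes;
    }

    public static void main(String[] args) {
        int limit = 1_000_000; // the 1 MB limit described in this setup
        System.out.println(fitsBroker(999_988, limit));   // true: 999988 + 12 == limit
        System.out.println(fitsBroker(1_000_000, limit)); // false: overhead pushes it over
    }
}
```

Note that a producer capped at exactly max.request.size = 1 MB can still produce messages the broker rejects, because of this overhead.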




[jira] [Commented] (KAFKA-4430) Broker logging "Topic and partition to exceptions: [topic,6] -> kafka.common.MessageSizeTooLargeException"

2016-11-21 Thread Srinivas Dhruvakumar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4430?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15685885#comment-15685885
 ] 

Srinivas Dhruvakumar commented on KAFKA-4430:
-

Sorry, I forgot to mention: I have set message.max.bytes on the Aggregate 
Kafka cluster to 1 MB. I could narrow it down to ProducerResponseCallback() in 
KafkaApis.scala, where it is logged at INFO. So I was wondering whether this is 
an actual error and whether it is dropping messages.



[jira] [Commented] (KAFKA-4430) Broker logging "Topic and partition to exceptions: [topic,6] -> kafka.common.MessageSizeTooLargeException"

2016-11-21 Thread huxi (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4430?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15685848#comment-15685848
 ] 

huxi commented on KAFKA-4430:
-

Check the broker config 'message.max.bytes' for the Aggregate Kafka cluster. The 
default value is 1,000,012 bytes, which is less than 1 MB. Try increasing this 
value to see if it helps.
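A hedged example of the broker-side settings involved (property names are from the broker configuration docs; the values are illustrative, not a recommendation):

```properties
# server.properties on each Aggregate Kafka broker (illustrative values)
# Largest message the broker will accept; default is 1000012 bytes.
message.max.bytes=1000012
# Should be >= message.max.bytes, or replication of large messages can stall.
replica.fetch.max.bytes=1048576
```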



[jira] [Created] (KAFKA-4431) HeartbeatThread should be a daemon thread

2016-11-21 Thread David Judd (JIRA)
David Judd created KAFKA-4431:
-

 Summary: HeartbeatThread should be a daemon thread
 Key: KAFKA-4431
 URL: https://issues.apache.org/jira/browse/KAFKA-4431
 Project: Kafka
  Issue Type: Bug
  Components: clients
Affects Versions: 0.10.1.0
Reporter: David Judd


We're seeing an issue where an exception inside the main processing loop of a 
consumer doesn't cause the JVM to exit, as expected (and, in our case, 
desired). From the thread dump, it appears that what's blocking exit is the 
"kafka-coordinator-heartbeat-thread", which is not currently a daemon thread. 
Per the mailing list, it sounds like this is a bug.
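The behavior can be reproduced with a plain Java thread. This is a minimal sketch; the thread name is borrowed from the dump above, everything else is illustrative. A non-daemon thread keeps the JVM alive after main() returns, while a daemon thread does not block exit, which is the behavior this issue requests.

```java
public class DaemonHeartbeatSketch {
    // Starts a background loop similar in spirit to a heartbeat thread.
    static Thread startHeartbeat(boolean daemon) {
        Thread t = new Thread(() -> {
            while (!Thread.currentThread().isInterrupted()) {
                try { Thread.sleep(100); } catch (InterruptedException e) { return; }
            }
        }, "kafka-coordinator-heartbeat-thread");
        t.setDaemon(daemon); // false = JVM cannot exit while this thread runs
        t.start();
        return t;
    }

    public static void main(String[] args) {
        Thread hb = startHeartbeat(true);
        System.out.println(hb.isDaemon()); // true
        // main() returning now terminates the JVM even though hb still runs,
        // because daemon threads do not keep the JVM alive.
    }
}
```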





Re: should HeartbeatThread be a daemon thread?

2016-11-21 Thread Jason Gustafson
Hey David,

It probably should be a daemon thread. Perhaps open a JIRA?

Thanks,
Jason

On Mon, Nov 21, 2016 at 2:03 PM, David Judd 
wrote:

> Hi folks,
>
> We're seeing an issue where an exception inside the main processing loop of
> a consumer doesn't cause the JVM to exit, as expected (and, in our case,
> desired). From the thread dump, it appears that what's blocking exit is the
> "kafka-coordinator-heartbeat-thread". From what I understand of what it
> does, it seems to me like this should be a daemon thread, but it's not. Is
> this a bug, or deliberate?
>
> Thanks,
> David
>


[jira] [Updated] (KAFKA-4430) Broker logging "Topic and partition to exceptions: [topic,6] -> kafka.common.MessageSizeTooLargeException"

2016-11-21 Thread Srinivas Dhruvakumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4430?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Srinivas Dhruvakumar updated KAFKA-4430:

Description: 
I have a setup as below: 
DC Kafka -> MirrorMaker -> Aggregate Kafka
Here are the settings: I have set max.message.bytes to 1 MB on both the DC and 
AGG Kafka sides. MirrorMaker producer settings: batch.size is set to 500 KB, 
max.request.size is set to 1 MB, acks=0, and compression is gzip.
However, on the Aggregate Kafka I get the following exception:

Closing connection due to error during produce request with correlation id 
414156659 from client id producer-1 with ack=0
Topic and partition to exceptions: [topic1,6] -> 
kafka.common.MessageSizeTooLargeException

Is this a bug, or why would this happen? I have configured MirrorMaker to send 
messages smaller than 1 MB. Are the messages getting dropped, and why is it 
logged at INFO? Shouldn't it at least be logged as a warning?




  was:
I have a setup as below 
DC Kafka 
Mirrormaker 
Aggregate Kafka
Here is the following settings. I have set the max.message.bytes to 1M Bytes on 
DC and AGG kafka side. Mirrormaker producer settings --  batch.size is set to 
500 K Bytes and max.request.size is set to 1 M Byte and ack to 0 , 
compression-> gzip . 
However on the Aggregate Kafka I get the following exception 

Closing connection due to error during produce request with correlation id 
414156659 from client id producer-1 with ack=0
Topic and partition to exceptions: [topic1,6] -> 
kafka.common.MessageSizeTooLargeException

Is this a bug or why would this happen. Noticed that this happens in the Kafka 
API class. I have configured mirrormaker to send messages less than 1 M Bytes . 
Is the messages getting dropped and why is it logged at info. Shouldnt it 
atleast logged as warning ? 







[jira] [Updated] (KAFKA-4430) Broker logging "Topic and partition to exceptions: [topic,6] -> kafka.common.MessageSizeTooLargeException"

2016-11-21 Thread Srinivas Dhruvakumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4430?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Srinivas Dhruvakumar updated KAFKA-4430:

Description: 
I have a setup as below: 
DC Kafka -> MirrorMaker -> Aggregate Kafka
Here are the settings: I have set max.message.bytes to 1 MB on both the DC and 
AGG Kafka sides. MirrorMaker producer settings: batch.size is set to 500 KB, 
max.request.size is set to 1 MB, acks=0, and compression is gzip.
However, on the Aggregate Kafka I get the following exception:

Closing connection due to error during produce request with correlation id 
414156659 from client id producer-1 with ack=0
Topic and partition to exceptions: [topic1,6] -> 
kafka.common.MessageSizeTooLargeException

Is this a bug, or why would this happen? I have configured MirrorMaker to send 
messages smaller than 1 MB. Are the messages getting dropped? Under what 
circumstances does this error occur?



  was:
I have a setup as below 
DC Kafka 
Mirrormaker 
Aggregate Kafka
Here is the following settings. I have set the max.message.bytes to 1M Bytes on 
DC and AGG kafka side. Mirrormaker producer settings --  batch.size is set to 
500 K Bytes and max.request.size is set to 1 M Byte and ack to 0 , 
compression-> gzip . 
However on the Aggregate Kafka I get the following exception 

Closing connection due to error during produce request with correlation id 
414156659 from client id producer-1 with ack=0
Topic and partition to exceptions: [topic1,6] -> 
kafka.common.MessageSizeTooLargeException

Is this a bug or why would this happen. I have configured mirrormaker to send 
messages less than 1 M Bytes . Is the messages getting dropped and why is it 
logged at info. Shouldnt it atleast logged as warning ? 







[jira] [Updated] (KAFKA-4430) Broker logging "Topic and partition to exceptions: [topic,6] -> kafka.common.MessageSizeTooLargeException"

2016-11-21 Thread Srinivas Dhruvakumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4430?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Srinivas Dhruvakumar updated KAFKA-4430:

Description: 
I have a setup as below: 
DC Kafka -> MirrorMaker -> Aggregate Kafka
Here are the settings: I have set max.message.bytes to 1 MB on both the DC and 
AGG Kafka sides. MirrorMaker producer settings: batch.size is set to 500 KB, 
max.request.size is set to 1 MB, acks=0, and compression is gzip.
However, on the Aggregate Kafka I get the following exception:

Closing connection due to error during produce request with correlation id 
414156659 from client id producer-1 with ack=0
Topic and partition to exceptions: [topic1,6] -> 
kafka.common.MessageSizeTooLargeException

Is this a bug, or why would this happen? I noticed that this happens in the 
Kafka API class. I have configured MirrorMaker to send messages smaller than 
1 MB. Are the messages getting dropped, and why is it logged at INFO? Shouldn't 
it at least be logged as a warning?




  was:
I have a setup as below 
DC Kafka 
Mirrormaker 
Aggregate Kafka
Here is the following settings. I have set the max.message.bytes to 1M Bytes on 
DC and AGG kafka side. Mirrormaker producer settings --  batch.size is set to 
500 K Bytes and max.request.size is set to 1 M Byte and ack to 0 , 
compression-> gzip . 
However on the Aggregate Kafka I get the following exception 

Closing connection due to error during produce request with correlation id 
414156659 from client id producer-1 with ack=0
Topic and partition to exceptions: [topic1,6] -> 
kafka.common.MessageSizeTooLargeException

Is this a bug or why would this happen. Noticed that this happens in the Kafka 
API class. I have configured mirrormaker to send messages less than 1 M Bytes







[jira] [Updated] (KAFKA-4430) Broker logging "Topic and partition to exceptions: [topic,6] -> kafka.common.MessageSizeTooLargeException"

2016-11-21 Thread Srinivas Dhruvakumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4430?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Srinivas Dhruvakumar updated KAFKA-4430:

Description: 
I have a setup as below: 
DC Kafka -> MirrorMaker -> Aggregate Kafka
Here are the settings: I have set max.message.bytes to 1 MB on both the DC and 
AGG Kafka sides. MirrorMaker producer settings: batch.size is set to 500 KB, 
max.request.size is set to 1 MB, acks=0, and compression is gzip.
However, on the Aggregate Kafka I get the following exception:

Closing connection due to error during produce request with correlation id 
414156659 from client id producer-1 with ack=0
Topic and partition to exceptions: [topic1,6] -> 
kafka.common.MessageSizeTooLargeException

Is this a bug, or why would this happen? I noticed that this happens in the 
Kafka API class. I have configured MirrorMaker to send messages smaller than 1 MB.




  was:
I have a setup as below 
DC Kafka 
Mirrormaker 
Aggregate Kafka
Here is the following settings. I have set the max.message.bytes to 1M Bytes on 
DC and AGG kafka side. Mirrormaker producer settings --  batch.size is set to 
500 K Bytes and max.request.size is set to 1 M Byte and ack to 0 , 
compression-> gzip . 
However on the Aggregate Kafka I get the following exception 

Closing connection due to error during produce request with correlation id 
414156659 from client id producer-1 with ack=0
KafkaServer.log.3:Topic and partition to exceptions: [topic1,6] -> 
kafka.common.MessageSizeTooLargeException

Is this a bug or why would this happen. Noticed that this happens in the Kafka 
API class. I have configured mirrormaker to send messages less than 1 M Bytes







[jira] [Updated] (KAFKA-4430) Broker logging "Topic and partition to exceptions: [topic,6] -> kafka.common.MessageSizeTooLargeException"

2016-11-21 Thread Srinivas Dhruvakumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4430?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Srinivas Dhruvakumar updated KAFKA-4430:

Priority: Major  (was: Minor)



KafkaProducer

2016-11-21 Thread 揣立武

Hi, all. I use Kafka version 0.8.2.1. When a partition is being transferred, 
KafkaProducer needs about 5 minutes to recover. Do you know how to fix this?
Thanks!



[GitHub] kafka pull request #2156: kafka-4428: Kafka does not exit when it receives "...

2016-11-21 Thread amethystic
GitHub user amethystic opened a pull request:

https://github.com/apache/kafka/pull/2156

kafka-4428: Kafka does not exit when it receives "Address already in use" 
error during startup

kafka-4428: Kafka does not exit when it receives "Address already in use" 
error during startup
@author: hux...@gmail.com
During Acceptor initialization, if an "Address already in use" error is 
encountered, the countdown latches for all Processors have no chance to be 
counted down, so the Kafka server fails to exit, blocking when 
Processor.shutdown is invoked.
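The hang pattern described above can be sketched with a plain CountDownLatch. The names here are illustrative, not Kafka's actual classes: shutdown waits on a latch that only the processor's run loop counts down, so if startup dies before the processor ever runs, the await blocks. A timeout is used here only to make the stuck case observable; the real code awaited indefinitely.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

public class LatchHangSketch {
    static class Processor {
        final CountDownLatch startupLatch = new CountDownLatch(1);

        void run() { startupLatch.countDown(); /* ...poll sockets... */ }

        // Kafka's shutdown awaited with no timeout; the timeout here only
        // turns the hang into an observable `false` result.
        boolean shutdown(long timeoutMs) throws InterruptedException {
            return startupLatch.await(timeoutMs, TimeUnit.MILLISECONDS);
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Processor never = new Processor();       // startup failed: run() never called
        System.out.println(never.shutdown(100)); // false: latch never counted down

        Processor ok = new Processor();
        ok.run();                                // normal startup path
        System.out.println(ok.shutdown(100));    // true: latch already released
    }
}
```

The fix direction is to ensure the latch is counted down (or the await skipped) even when the Acceptor fails before the Processors start.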

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/amethystic/kafka 
kafka-4428_Kafka_noexit_for_port_already_use

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2156.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2156


commit 9bcfbaffaa98ad577d2d3f59a8162d1bc33b7d57
Author: huxi 
Date:   2016-11-22T03:27:11Z

kafka-4428: Kafka does not exit when it receives "Address already in use" 
error during startup
@author: hux...@gmail.com
During Acceptor initialization, if an "Address already in use" error is 
encountered, the countdown latches for all Processors have no chance to be 
counted down, so the Kafka server fails to exit, blocking when 
Processor.shutdown is invoked.




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Comment Edited] (KAFKA-4428) Kafka does not exit when it receives "Address already in use" error during startup

2016-11-21 Thread huxi (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4428?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15685562#comment-15685562
 ] 

huxi edited comment on KAFKA-4428 at 11/22/16 3:39 AM:
---

During Acceptor initialization, if an "Address already in use" error is 
encountered, the countdown latches for all Processors have no chance to be 
counted down, so the Kafka server fails to exit, blocking when 
Processor.shutdown is invoked.


was (Author: huxi_2b):
During Acceptor initialization, if "Address already in use" error is 
encountered, the countdown latch for all Processors instsance have no chance to 
be counted down, hence Kafka server fails to exit, pending when invoking 
Processor.shutdown

> Kafka does not exit when it receives "Address already in use" error during 
> startup
> --
>
> Key: KAFKA-4428
> URL: https://issues.apache.org/jira/browse/KAFKA-4428
> Project: Kafka
>  Issue Type: Bug
>  Components: network
>Affects Versions: 0.10.1.0
>Reporter: Zeynep Arikoglu
>Assignee: huxi
>
> [2016-11-21 14:58:04,136] FATAL [Kafka Server 0], Fatal error during 
> KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
> kafka.common.KafkaException: Socket server failed to bind to 0.0.0.0:5000: 
> Address already in use.
> at kafka.network.Acceptor.openServerSocket(SocketServer.scala:316)
> at kafka.network.Acceptor.(SocketServer.scala:242)
> at 
> kafka.network.SocketServer$$anonfun$startup$1.apply(SocketServer.scala:96)
> at 
> kafka.network.SocketServer$$anonfun$startup$1.apply(SocketServer.scala:89)
> at scala.collection.Iterator$class.foreach(Iterator.scala:893)
> at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
> at 
> scala.collection.MapLike$DefaultValuesIterable.foreach(MapLike.scala:206)
> at kafka.network.SocketServer.startup(SocketServer.scala:89)
> at kafka.server.KafkaServer.startup(KafkaServer.scala:219)
> at 
> kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:39)
> at kafka.Kafka$.main(Kafka.scala:67)
> at kafka.Kafka.main(Kafka.scala)
> Caused by: java.net.BindException: Address already in use
> at sun.nio.ch.Net.bind0(Native Method)
> at sun.nio.ch.Net.bind(Net.java:433)
> at sun.nio.ch.Net.bind(Net.java:425)
> at 
> sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
> at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
> at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:67)
> at kafka.network.Acceptor.openServerSocket(SocketServer.scala:312)
> ... 11 more
> [2016-11-21 14:58:04,138] INFO [Kafka Server 0], shutting down 
> (kafka.server.KafkaServer)
> [2016-11-21 14:58:04,146] INFO [Socket Server on Broker 0], Shutting down 
> (kafka.network.SocketServer)





[jira] [Comment Edited] (KAFKA-4428) Kafka does not exit when it receives "Address already in use" error during startup

2016-11-21 Thread huxi (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4428?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15685562#comment-15685562
 ] 

huxi edited comment on KAFKA-4428 at 11/22/16 3:32 AM:
---

During Acceptor initialization, if an "Address already in use" error is 
encountered, the countdown latches for all Processors have no chance to be 
counted down, so the Kafka server fails to exit, blocking when 
Processor.shutdown is invoked.


was (Author: huxi_2b):
During Acceptor initialization, if "Address already in use" error is 
encountered, the countdown latch for all Processors instsance have no change to 
be counted down, hence Kafka server fails to exit, pending when invoking 
Processor.shutdown



[jira] [Commented] (KAFKA-4428) Kafka does not exit when it receives "Address already in use" error during startup

2016-11-21 Thread huxi (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4428?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15685562#comment-15685562
 ] 

huxi commented on KAFKA-4428:
-

During Acceptor initialization, if an "Address already in use" error is 
encountered, the countdown latches for all Processors have no chance to be 
counted down, so the Kafka server fails to exit, blocking when 
Processor.shutdown is invoked.

> Kafka does not exit when it receives "Address already in use" error during 
> startup
> --
>
> Key: KAFKA-4428
> URL: https://issues.apache.org/jira/browse/KAFKA-4428
> Project: Kafka
>  Issue Type: Bug
>  Components: network
>Affects Versions: 0.10.1.0
>Reporter: Zeynep Arikoglu
>Assignee: huxi
>
> [2016-11-21 14:58:04,136] FATAL [Kafka Server 0], Fatal error during 
> KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
> kafka.common.KafkaException: Socket server failed to bind to 0.0.0.0:5000: 
> Address already in use.
> at kafka.network.Acceptor.openServerSocket(SocketServer.scala:316)
> at kafka.network.Acceptor.(SocketServer.scala:242)
> at 
> kafka.network.SocketServer$$anonfun$startup$1.apply(SocketServer.scala:96)
> at 
> kafka.network.SocketServer$$anonfun$startup$1.apply(SocketServer.scala:89)
> at scala.collection.Iterator$class.foreach(Iterator.scala:893)
> at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
> at 
> scala.collection.MapLike$DefaultValuesIterable.foreach(MapLike.scala:206)
> at kafka.network.SocketServer.startup(SocketServer.scala:89)
> at kafka.server.KafkaServer.startup(KafkaServer.scala:219)
> at 
> kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:39)
> at kafka.Kafka$.main(Kafka.scala:67)
> at kafka.Kafka.main(Kafka.scala)
> Caused by: java.net.BindException: Address already in use
> at sun.nio.ch.Net.bind0(Native Method)
> at sun.nio.ch.Net.bind(Net.java:433)
> at sun.nio.ch.Net.bind(Net.java:425)
> at 
> sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
> at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
> at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:67)
> at kafka.network.Acceptor.openServerSocket(SocketServer.scala:312)
> ... 11 more
> [2016-11-21 14:58:04,138] INFO [Kafka Server 0], shutting down 
> (kafka.server.KafkaServer)
> [2016-11-21 14:58:04,146] INFO [Socket Server on Broker 0], Shutting down 
> (kafka.network.SocketServer)





[jira] [Updated] (KAFKA-4430) Broker logging "Topic and partition to exceptions: [topic,6] -> kafka.common.MessageSizeTooLargeException"

2016-11-21 Thread Srinivas Dhruvakumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4430?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Srinivas Dhruvakumar updated KAFKA-4430:

Description: 
I have a setup as below 
DC Kafka 
Mirrormaker 
Aggregate Kafka
Here are the settings: I have set max.message.bytes to 1 MB on both the DC and 
AGG Kafka sides. MirrorMaker producer settings: batch.size is set to 500 KB, 
max.request.size is set to 1 MB, acks is 0, and compression is gzip. 
However, on the Aggregate Kafka I get the following exception: 

Closing connection due to error during produce request with correlation id 
414156659 from client id producer-1 with ack=0
KafkaServer.log.3:Topic and partition to exceptions: [topic1,6] -> 
kafka.common.MessageSizeTooLargeException

Is this a bug, or why would this happen? I noticed that this happens in the 
Kafka API class. I have configured MirrorMaker to send messages smaller than 1 MB.




  was:
I have a setup as below 
DC Kafka 
Mirrormaker 
Aggregate Kafka
Here is the following settings. I have set the max.message.bytes to 1 M on DC 
and AGG kafka. Mirrormaker producer batch setting is set to 500 KBytes and 
max.request.size is set to 1 M Byte and ack to 0 , compression-> gzip . 
However on the Aggregate Kafka I get the following exception 

Closing connection due to error during produce request with correlation id 
414156659 from client id producer-1 with ack=0
KafkaServer.log.3:Topic and partition to exceptions: [topic1,6] -> 
kafka.common.MessageSizeTooLargeException

Is this a bug or why would this happen. Noticed that this happens in the Kafka 
API class. I have configured mirrormaker to send messages less than 1 M Bytes





> Broker logging "Topic and partition to exceptions: [topic,6] -> 
> kafka.common.MessageSizeTooLargeException"
> --
>
> Key: KAFKA-4430
> URL: https://issues.apache.org/jira/browse/KAFKA-4430
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.9.0.1
> Environment: Production 
>Reporter: Srinivas Dhruvakumar
>Priority: Minor
>  Labels: newbie
>
> I have a setup as below 
> DC Kafka 
> Mirrormaker 
> Aggregate Kafka
> Here is the following settings. I have set the max.message.bytes to 1M Bytes 
> on DC and AGG kafka side. Mirrormaker producer settings --  batch.size is set 
> to 500 K Bytes and max.request.size is set to 1 M Byte and ack to 0 , 
> compression-> gzip . 
> However on the Aggregate Kafka I get the following exception 
> Closing connection due to error during produce request with correlation id 
> 414156659 from client id producer-1 with ack=0
> KafkaServer.log.3:Topic and partition to exceptions: [topic1,6] -> 
> kafka.common.MessageSizeTooLargeException
> Is this a bug or why would this happen. Noticed that this happens in the 
> Kafka API class. I have configured mirrormaker to send messages less than 1 M 
> Bytes
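For reference, the reported MirrorMaker producer settings expressed as standard producer config keys (the wrapper class below is illustrative; the keys themselves are the standard Kafka producer configuration names):

```java
import java.util.Properties;

// The MirrorMaker producer settings from the report, as standard producer
// config keys (illustrative class; the keys are real producer config names).
public class ReportedMirrorMakerConfig {
    public static Properties settings() {
        Properties props = new Properties();
        props.put("batch.size", "500000");        // ~500 KB batches
        props.put("max.request.size", "1000000"); // 1 MB cap per request
        props.put("acks", "0");                   // fire-and-forget: errors surface only broker-side
        props.put("compression.type", "gzip");
        return props;
    }
}
```

With acks=0 the producer receives no error back, which matches the broker logging the exception while the client stays silent. One commonly cited explanation for this class of failure is that, with compression, the broker applies max.message.bytes to the whole compressed message set rather than to each individual message, so a large batch can exceed the 1 MB limit even when every message in it is smaller.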





[jira] [Updated] (KAFKA-4430) Broker logging "Topic and partition to exceptions: [topic,6] -> kafka.common.MessageSizeTooLargeException"

2016-11-21 Thread Srinivas Dhruvakumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4430?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Srinivas Dhruvakumar updated KAFKA-4430:

Description: 
I have a setup as below 
DC Kafka 
Mirrormaker 
Aggregate Kafka
Here is the following settings. I have set the max.message.bytes to 1 M on DC 
and AGG kafka. Mirrormaker producer batch setting is set to 500 KBytes and 
max.request.size is set to 1 M Byte and ack to 0 , compression-> gzip . 
However on the Aggregate Kafka I get the following exception 

Closing connection due to error during produce request with correlation id 
414156659 from client id producer-1 with ack=0
KafkaServer.log.3:Topic and partition to exceptions: [topic1,6] -> 
kafka.common.MessageSizeTooLargeException

Is this a bug or why would this happen. Noticed that this happens in the Kafka 
API class. I have configured mirrormaker to send messages less than 1 M Bytes




  was:
I have a setup as below 
DC Kafka 
Mirrormaker 
Aggregate Kafka
Here is the following settings. I have set the max.message.bytes to 1 M on DC 
and AGG kafka. Mirrormaker producer batch setting is set to 500 KBytes and 
max.request.size is set to 1 M Byte and ack -> 0 , compression-> gzip . 
However on the Aggregate Kafka I get the following exception 

Closing connection due to error during produce request with correlation id 
414156659 from client id producer-1 with ack=0
KafkaServer.log.3:Topic and partition to exceptions: [topic1,6] -> 
kafka.common.MessageSizeTooLargeException

Is this a bug or why would this happen. Noticed that this happens in the Kafka 
API class. I have configured mirrormaker to send messages less than 1 M Bytes





> Broker logging "Topic and partition to exceptions: [topic,6] -> 
> kafka.common.MessageSizeTooLargeException"
> --
>
> Key: KAFKA-4430
> URL: https://issues.apache.org/jira/browse/KAFKA-4430
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.9.0.1
> Environment: Production 
>Reporter: Srinivas Dhruvakumar
>Priority: Minor
>  Labels: newbie
>
> I have a setup as below 
> DC Kafka 
> Mirrormaker 
> Aggregate Kafka
> Here is the following settings. I have set the max.message.bytes to 1 M on DC 
> and AGG kafka. Mirrormaker producer batch setting is set to 500 KBytes and 
> max.request.size is set to 1 M Byte and ack to 0 , compression-> gzip . 
> However on the Aggregate Kafka I get the following exception 
> Closing connection due to error during produce request with correlation id 
> 414156659 from client id producer-1 with ack=0
> KafkaServer.log.3:Topic and partition to exceptions: [topic1,6] -> 
> kafka.common.MessageSizeTooLargeException
> Is this a bug or why would this happen. Noticed that this happens in the 
> Kafka API class. I have configured mirrormaker to send messages less than 1 M 
> Bytes





[jira] [Updated] (KAFKA-4430) Broker logging "Topic and partition to exceptions: [topic,6] -> kafka.common.MessageSizeTooLargeException"

2016-11-21 Thread Srinivas Dhruvakumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4430?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Srinivas Dhruvakumar updated KAFKA-4430:

Summary: Broker logging "Topic and partition to exceptions: [topic,6] -> 
kafka.common.MessageSizeTooLargeException"  (was: Broker logging Topic and 
partition to exceptions: [topic,6] -> kafka.common.MessageSizeTooLargeException)

> Broker logging "Topic and partition to exceptions: [topic,6] -> 
> kafka.common.MessageSizeTooLargeException"
> --
>
> Key: KAFKA-4430
> URL: https://issues.apache.org/jira/browse/KAFKA-4430
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.9.0.1
> Environment: Production 
>Reporter: Srinivas Dhruvakumar
>Priority: Minor
>  Labels: newbie
>
> I have a setup as below 
> DC Kafka 
> Mirrormaker 
> Aggregate Kafka
> Here is the following settings. I have set the max.message.bytes to 1 M on DC 
> and AGG kafka. Mirrormaker producer batch setting is set to 500 KBytes and 
> max.request.size is set to 1 M Byte and ack -> 0 and compression-> gzip . 
> However on the Aggregate Kafka I get the following exception 
> Closing connection due to error during produce request with correlation id 
> 414156659 from client id producer-1 with ack=0
> KafkaServer.log.3:Topic and partition to exceptions: [topic1,6] -> 
> kafka.common.MessageSizeTooLargeException
> Is this a bug or why would this happen. Noticed that this happens in the 
> Kafka API class. I have configured mirrormaker to send messages less than 1 M 
> Bytes





[jira] [Updated] (KAFKA-4430) Broker logging "Topic and partition to exceptions: [topic,6] -> kafka.common.MessageSizeTooLargeException"

2016-11-21 Thread Srinivas Dhruvakumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4430?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Srinivas Dhruvakumar updated KAFKA-4430:

Description: 
I have a setup as below 
DC Kafka 
Mirrormaker 
Aggregate Kafka
Here is the following settings. I have set the max.message.bytes to 1 M on DC 
and AGG kafka. Mirrormaker producer batch setting is set to 500 KBytes and 
max.request.size is set to 1 M Byte and ack -> 0 , compression-> gzip . 
However on the Aggregate Kafka I get the following exception 

Closing connection due to error during produce request with correlation id 
414156659 from client id producer-1 with ack=0
KafkaServer.log.3:Topic and partition to exceptions: [topic1,6] -> 
kafka.common.MessageSizeTooLargeException

Is this a bug or why would this happen. Noticed that this happens in the Kafka 
API class. I have configured mirrormaker to send messages less than 1 M Bytes




  was:
I have a setup as below 
DC Kafka 
Mirrormaker 
Aggregate Kafka
Here is the following settings. I have set the max.message.bytes to 1 M on DC 
and AGG kafka. Mirrormaker producer batch setting is set to 500 KBytes and 
max.request.size is set to 1 M Byte and ack -> 0 and compression-> gzip . 
However on the Aggregate Kafka I get the following exception 

Closing connection due to error during produce request with correlation id 
414156659 from client id producer-1 with ack=0
KafkaServer.log.3:Topic and partition to exceptions: [topic1,6] -> 
kafka.common.MessageSizeTooLargeException

Is this a bug or why would this happen. Noticed that this happens in the Kafka 
API class. I have configured mirrormaker to send messages less than 1 M Bytes





> Broker logging "Topic and partition to exceptions: [topic,6] -> 
> kafka.common.MessageSizeTooLargeException"
> --
>
> Key: KAFKA-4430
> URL: https://issues.apache.org/jira/browse/KAFKA-4430
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.9.0.1
> Environment: Production 
>Reporter: Srinivas Dhruvakumar
>Priority: Minor
>  Labels: newbie
>
> I have a setup as below 
> DC Kafka 
> Mirrormaker 
> Aggregate Kafka
> Here is the following settings. I have set the max.message.bytes to 1 M on DC 
> and AGG kafka. Mirrormaker producer batch setting is set to 500 KBytes and 
> max.request.size is set to 1 M Byte and ack -> 0 , compression-> gzip . 
> However on the Aggregate Kafka I get the following exception 
> Closing connection due to error during produce request with correlation id 
> 414156659 from client id producer-1 with ack=0
> KafkaServer.log.3:Topic and partition to exceptions: [topic1,6] -> 
> kafka.common.MessageSizeTooLargeException
> Is this a bug or why would this happen. Noticed that this happens in the 
> Kafka API class. I have configured mirrormaker to send messages less than 1 M 
> Bytes





[jira] [Updated] (KAFKA-4430) Broker logging Topic and partition to exceptions: [topic,6] -> kafka.common.MessageSizeTooLargeException

2016-11-21 Thread Srinivas Dhruvakumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4430?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Srinivas Dhruvakumar updated KAFKA-4430:

Description: 
I have a setup as below 
DC Kafka 
Mirrormaker 
Aggregate Kafka
Here is the following settings. I have set the max.message.bytes to 1 M on DC 
and AGG kafka. Mirrormaker producer batch setting is set to 500 KBytes and 
max.request.size is set to 1 M Byte and ack -> 0 and compression-> gzip . 
However on the Aggregate Kafka I get the following exception 

Closing connection due to error during produce request with correlation id 
414156659 from client id producer-1 with ack=0
KafkaServer.log.3:Topic and partition to exceptions: [topic1,6] -> 
kafka.common.MessageSizeTooLargeException

Is this a bug or why would this happen. Noticed that this happens in the Kafka 
API class. I have configured mirrormaker to send messages less than 1 M Bytes




  was:
I have a setup as below 
DC Kafka 
Mirrormaker 
AGG Kafka
Here is the following settings. I have set the max.message.bytes to 1 M on DC 
and AGG kafka. Mirrormaker producer batch setting is set to 500 KBytes and 
max.request.size is set to 1 M Byte and ack -> 0. 
However on the AGG Kafka I get the following exception 
Closing connection due to error during produce request with correlation id 
414156659 from client id producer-1 with ack=0
KafkaServer.log.3:Topic and partition to exceptions: [topic1,6] -> 
kafka.common.MessageSizeTooLargeException

Is this a bug or why would this happen. Noticed that this happens in the Kafka 
API class. 





> Broker logging Topic and partition to exceptions: [topic,6] -> 
> kafka.common.MessageSizeTooLargeException
> 
>
> Key: KAFKA-4430
> URL: https://issues.apache.org/jira/browse/KAFKA-4430
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.9.0.1
> Environment: Production 
>Reporter: Srinivas Dhruvakumar
>Priority: Minor
>  Labels: newbie
>
> I have a setup as below 
> DC Kafka 
> Mirrormaker 
> Aggregate Kafka
> Here is the following settings. I have set the max.message.bytes to 1 M on DC 
> and AGG kafka. Mirrormaker producer batch setting is set to 500 KBytes and 
> max.request.size is set to 1 M Byte and ack -> 0 and compression-> gzip . 
> However on the Aggregate Kafka I get the following exception 
> Closing connection due to error during produce request with correlation id 
> 414156659 from client id producer-1 with ack=0
> KafkaServer.log.3:Topic and partition to exceptions: [topic1,6] -> 
> kafka.common.MessageSizeTooLargeException
> Is this a bug or why would this happen. Noticed that this happens in the 
> Kafka API class. I have configured mirrormaker to send messages less than 1 M 
> Bytes





[jira] [Created] (KAFKA-4430) Broker logging Topic and partition to exceptions: [topic,6] -> kafka.common.MessageSizeTooLargeException

2016-11-21 Thread Srinivas Dhruvakumar (JIRA)
Srinivas Dhruvakumar created KAFKA-4430:
---

 Summary: Broker logging Topic and partition to exceptions: 
[topic,6] -> kafka.common.MessageSizeTooLargeException
 Key: KAFKA-4430
 URL: https://issues.apache.org/jira/browse/KAFKA-4430
 Project: Kafka
  Issue Type: Bug
  Components: core
Affects Versions: 0.9.0.1
 Environment: Production 
Reporter: Srinivas Dhruvakumar
Priority: Minor


I have a setup as below 
DC Kafka 
Mirrormaker 
AGG Kafka
Here is the following settings. I have set the max.message.bytes to 1 M on DC 
and AGG kafka. Mirrormaker producer batch setting is set to 500 KBytes and 
max.request.size is set to 1 M Byte and ack -> 0. 
However on the AGG Kafka I get the following exception 
Closing connection due to error during produce request with correlation id 
414156659 from client id producer-1 with ack=0
KafkaServer.log.3:Topic and partition to exceptions: [topic1,6] -> 
kafka.common.MessageSizeTooLargeException

Is this a bug or why would this happen. Noticed that this happens in the Kafka 
API class. 








[jira] [Assigned] (KAFKA-4428) Kafka does not exit when it receives "Address already in use" error during startup

2016-11-21 Thread huxi (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4428?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

huxi reassigned KAFKA-4428:
---

Assignee: huxi

> Kafka does not exit when it receives "Address already in use" error during 
> startup
> --
>
> Key: KAFKA-4428
> URL: https://issues.apache.org/jira/browse/KAFKA-4428
> Project: Kafka
>  Issue Type: Bug
>  Components: network
>Affects Versions: 0.10.1.0
>Reporter: Zeynep Arikoglu
>Assignee: huxi
>
> [2016-11-21 14:58:04,136] FATAL [Kafka Server 0], Fatal error during 
> KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
> kafka.common.KafkaException: Socket server failed to bind to 0.0.0.0:5000: 
> Address already in use.
> at kafka.network.Acceptor.openServerSocket(SocketServer.scala:316)
> at kafka.network.Acceptor.(SocketServer.scala:242)
> at 
> kafka.network.SocketServer$$anonfun$startup$1.apply(SocketServer.scala:96)
> at 
> kafka.network.SocketServer$$anonfun$startup$1.apply(SocketServer.scala:89)
> at scala.collection.Iterator$class.foreach(Iterator.scala:893)
> at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
> at 
> scala.collection.MapLike$DefaultValuesIterable.foreach(MapLike.scala:206)
> at kafka.network.SocketServer.startup(SocketServer.scala:89)
> at kafka.server.KafkaServer.startup(KafkaServer.scala:219)
> at 
> kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:39)
> at kafka.Kafka$.main(Kafka.scala:67)
> at kafka.Kafka.main(Kafka.scala)
> Caused by: java.net.BindException: Address already in use
> at sun.nio.ch.Net.bind0(Native Method)
> at sun.nio.ch.Net.bind(Net.java:433)
> at sun.nio.ch.Net.bind(Net.java:425)
> at 
> sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
> at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
> at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:67)
> at kafka.network.Acceptor.openServerSocket(SocketServer.scala:312)
> ... 11 more
> [2016-11-21 14:58:04,138] INFO [Kafka Server 0], shutting down 
> (kafka.server.KafkaServer)
> [2016-11-21 14:58:04,146] INFO [Socket Server on Broker 0], Shutting down 
> (kafka.network.SocketServer)





[jira] [Commented] (KAFKA-4429) records-lag should be zero if FetchResponse is empty

2016-11-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4429?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15685240#comment-15685240
 ] 

ASF GitHub Bot commented on KAFKA-4429:
---

GitHub user lindong28 opened a pull request:

https://github.com/apache/kafka/pull/2155

KAFKA-4429; records-lag should be zero if FetchResponse is empty



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/lindong28/kafka KAFKA-4429

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2155.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2155


commit 0dbcf2c1357861606e5e7bcf5ee59d29bb1641f3
Author: Dong Lin 
Date:   2016-11-22T00:38:20Z

KAFKA-4429; records-lag should be zero if FetchResponse is empty




> records-lag should be zero if FetchResponse is empty
> 
>
> Key: KAFKA-4429
> URL: https://issues.apache.org/jira/browse/KAFKA-4429
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Dong Lin
>Assignee: Dong Lin
>
> In Fetcher we record records-lag, in terms of number of records, for each 
> partition. Currently this metric value is updated only if the number of parsed 
> records is non-zero. This means that if the consumer has already fully caught 
> up and no new data is arriving in the topic, this metric's value will be 
> negative infinity, and users cannot rely on it to know whether their consumer 
> has caught up.
> We can fix this problem by recording the lag as zero if the FetchResponse is 
> empty.





[GitHub] kafka pull request #2155: KAFKA-4429; records-lag should be zero if FetchRes...

2016-11-21 Thread lindong28
GitHub user lindong28 opened a pull request:

https://github.com/apache/kafka/pull/2155

KAFKA-4429; records-lag should be zero if FetchResponse is empty



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/lindong28/kafka KAFKA-4429

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2155.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2155


commit 0dbcf2c1357861606e5e7bcf5ee59d29bb1641f3
Author: Dong Lin 
Date:   2016-11-22T00:38:20Z

KAFKA-4429; records-lag should be zero if FetchResponse is empty




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Created] (KAFKA-4429) records-lag should be zero if FetchResponse is empty

2016-11-21 Thread Dong Lin (JIRA)
Dong Lin created KAFKA-4429:
---

 Summary: records-lag should be zero if FetchResponse is empty
 Key: KAFKA-4429
 URL: https://issues.apache.org/jira/browse/KAFKA-4429
 Project: Kafka
  Issue Type: Improvement
Reporter: Dong Lin
Assignee: Dong Lin


In Fetcher we record records-lag, in terms of number of records, for each 
partition. Currently this metric value is updated only if the number of parsed 
records is non-zero. This means that if the consumer has already fully caught up 
and no new data is arriving in the topic, this metric's value will be negative 
infinity, and users cannot rely on it to know whether their consumer has 
caught up.

We can fix this problem by recording the lag as zero if the FetchResponse is empty.
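A sketch of the proposed fix (hypothetical names; the real change is in the consumer's Fetcher): record zero lag when a fetch returns no records, instead of leaving the sensor at its initial value.

```java
// Hypothetical sketch of the proposed fix: record a lag of zero on an empty
// FetchResponse instead of skipping the update, so a caught-up consumer
// reports 0 rather than the sensor's initial negative infinity.
public class RecordsLagSketch {
    private double recordsLag = Double.NEGATIVE_INFINITY; // sensor never updated yet

    public double onFetch(int parsedRecordCount, long highWatermark, long nextFetchOffset) {
        if (parsedRecordCount == 0) {
            recordsLag = 0;                               // caught up: no new data
        } else {
            recordsLag = highWatermark - nextFetchOffset; // records still ahead of us
        }
        return recordsLag;
    }
}
```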





should HeartbeatThread be a daemon thread?

2016-11-21 Thread David Judd
Hi folks,

We're seeing an issue where an exception inside the main processing loop of
a consumer doesn't cause the JVM to exit, as expected (and, in our case,
desired). From the thread dump, it appears that what's blocking exit is the
"kafka-coordinator-heartbeat-thread". From what I understand of what it
does, it seems to me like this should be a daemon thread, but it's not. Is
this a bug, or deliberate?

Thanks,
David
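A minimal illustration of the behavior described above (the thread name is taken from the thread dump; the body is a stand-in): the JVM exits only once every non-daemon thread has finished, so a non-daemon heartbeat thread keeps the process alive after the main loop dies.

```java
// A non-daemon background thread prevents JVM exit; a daemon thread does not.
public class DaemonSketch {
    public static Thread heartbeatThread(boolean daemon) {
        Thread t = new Thread(() -> {
            try {
                Thread.sleep(Long.MAX_VALUE); // stand-in for the heartbeat loop
            } catch (InterruptedException ie) {
                Thread.currentThread().interrupt(); // allow clean interruption
            }
        }, "kafka-coordinator-heartbeat-thread");
        t.setDaemon(daemon); // true would let the JVM exit despite this thread
        return t;
    }
}
```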


[jira] [Commented] (KAFKA-4270) ClassCast for Agregation

2016-11-21 Thread Damian Guy (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4270?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15684523#comment-15684523
 ] 

Damian Guy commented on KAFKA-4270:
---

Hi,

I still don't really follow what the issue is. 
Have you tried providing the serdes to the {{groupBy}} method, as I suggested 
above? The {{ClassCastException}} suggests that the default serdes were used, 
which would be the case if you don't provide serdes to {{groupBy}}.

Also which version of streams are you running?

> ClassCast for Agregation
> 
>
> Key: KAFKA-4270
> URL: https://issues.apache.org/jira/browse/KAFKA-4270
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Mykola Polonskyi
>Assignee: Damian Guy
>Priority: Critical
>  Labels: architecture
>
> With serdes defined for the intermediate topic in an aggregation, I get a 
> ClassCastException from the custom class to ByteArray.
> In debug I saw that the defined serde isn't used when creating the sink node 
> (inside `org.apache.kafka.streams.kstream.internals.KGroupedTableImpl#doAggregate`). 
> Instead of the serde defined inside the aggregation call, a default 
> implementation with empty stubs is used. 
> {code:kotlin} 
> userTable.join(
> skicardsTable.groupBy { key, value -> 
> KeyValue(value.skicardInfo.ownerId, value.skicardInfo) }
> .aggregate(
> { mutableSetOf() }, 
> { ownerId, skicardInfo, accumulator -> 
> accumulator.put(skicardInfo) },
> { ownerId, skicardInfo, accumulator -> 
> accumulator }, 
> skicardByOwnerIdSerde,
> skicardByOwnerIdTopicName
> ),
> { userCreatedOrUpdated, skicardInfoSet -> 
> UserWithSkicardsCreatedOrUpdated(userCreatedOrUpdated.user, skicardInfoSet) }
> ).to(
> userWithSkicardsTable
> )
> {code}
> I think the current behavior of `doAggregate` with respect to setting up 
> serdes and/or stores should be changed, because it is incorrect in release 
> 0.10.0.1-cp1 too.





[jira] [Commented] (KAFKA-4271) The console consumer fails on Windows with new consumer is used

2016-11-21 Thread Vahid Hashemian (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4271?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15684468#comment-15684468
 ] 

Vahid Hashemian commented on KAFKA-4271:


[~huxi] Thanks for responding. I checked the RAM, and out of 4 GB available 
there's plenty left (about half) when I run the consumer.
I also set the config you mentioned to 4 times the default size, but no luck. I 
still get the same error.

> The console consumer fails on Windows with new consumer is used 
> 
>
> Key: KAFKA-4271
> URL: https://issues.apache.org/jira/browse/KAFKA-4271
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.0.1
>Reporter: Vahid Hashemian
>
> When I try to consume messages using the new consumer (Quickstart Step 5), I 
> get an exception on the broker side. The old consumer works fine.
> {code}
> java.io.IOException: Map failed
> at sun.nio.ch.FileChannelImpl.map(Unknown Source)
> at kafka.log.AbstractIndex.(AbstractIndex.scala:61)
> at kafka.log.OffsetIndex.(OffsetIndex.scala:51)
> at kafka.log.LogSegment.(LogSegment.scala:67)
> at kafka.log.Log.loadSegments(Log.scala:255)
> at kafka.log.Log.(Log.scala:108)
> at kafka.log.LogManager.createLog(LogManager.scala:362)
> at kafka.cluster.Partition.getOrCreateReplica(Partition.scala:94)
> at 
> kafka.cluster.Partition$$anonfun$4$$anonfun$apply$2.apply(Partition.scala:174)
> at 
> kafka.cluster.Partition$$anonfun$4$$anonfun$apply$2.apply(Partition.scala:174)
> at scala.collection.mutable.HashSet.foreach(HashSet.scala:79)
> at kafka.cluster.Partition$$anonfun$4.apply(Partition.scala:174)
> at kafka.cluster.Partition$$anonfun$4.apply(Partition.scala:168)
> at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:234)
> at kafka.utils.CoreUtils$.inWriteLock(CoreUtils.scala:242)
> at kafka.cluster.Partition.makeLeader(Partition.scala:168)
> at 
> kafka.server.ReplicaManager$$anonfun$makeLeaders$4.apply(ReplicaManager.scala:740)
> at 
> kafka.server.ReplicaManager$$anonfun$makeLeaders$4.apply(ReplicaManager.scala:739)
> at 
> scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:98)
> at 
> scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:98)
> at 
> scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:226)
> at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:39)
> at scala.collection.mutable.HashMap.foreach(HashMap.scala:98)
> at kafka.server.ReplicaManager.makeLeaders(ReplicaManager.scala:739)
> at 
> kafka.server.ReplicaManager.becomeLeaderOrFollower(ReplicaManager.scala:685)
> at 
> kafka.server.KafkaApis.handleLeaderAndIsrRequest(KafkaApis.scala:148)
> at kafka.server.KafkaApis.handle(KafkaApis.scala:82)
> at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:60)
> at java.lang.Thread.run(Unknown Source)
> Caused by: java.lang.OutOfMemoryError: Map failed
> at sun.nio.ch.FileChannelImpl.map0(Native Method)
> ... 29 more
> {code}
> This issue seems to break the broker and I have to clear out the logs so I 
> can bring the broker back up again.
> Update: This issue seems to occur on 32-bit Windows only. I tried this on a 
> Windows 32-bit VM. On a 64-bit machine I did not get the error.





Re: [DISCUSS] KIP-87 - Add Compaction Tombstone Flag

2016-11-21 Thread Mayuresh Gharat
Hi Michael,

I have updated the migration section of the KIP. Can you please take a look?

Thanks,

Mayuresh

On Fri, Nov 18, 2016 at 9:07 AM, Mayuresh Gharat  wrote:

> Hi Michael,
>
> That whilst sending a tombstone and a non-null value, the consumer can expect
> to receive the non-null message only in step (3): is this correct?
> ---> I do agree with you here.
>
> Becket, Ismael : can you guys review the migration plan listed above using
> magic byte?
>
> Thanks,
>
> Mayuresh
>
> On Fri, Nov 18, 2016 at 8:58 AM, Michael Pearce 
> wrote:
>
>> Many thanks for this Mayuresh. I don't have any objections.
>>
>> I assume we should state:
>>
>> That whilst sending a tombstone and a non-null value, the consumer can expect
>> to receive the non-null message only in step (3): is this correct?
>>
>> Cheers
>> Mike
>>
>>
>>
>> Sent using OWA for iPhone
>> 
>> From: Mayuresh Gharat 
>> Sent: Thursday, November 17, 2016 5:18:41 PM
>> To: dev@kafka.apache.org
>> Subject: Re: [DISCUSS] KIP-87 - Add Compaction Tombstone Flag
>>
>> Hi Ismael,
>>
>> Thanks for the explanation.
>> Specially I like this part where in you mentioned we can get rid of the
>> older null value support for log compaction later on, here :
>> We can't change semantics of the message format without having a long
>> transition period. And we can't rely
>> on people reading documentation or acting on a warning for something so
>> fundamental. As such, my take is that we need to bump the magic byte. The
>> good news is
>> that we don't have to support all versions forever. We have said that we
>> will support direct upgrades for 2 years. That means that message format
>> version n could, in theory, be removed 2 years after it's introduced.
>>
>> Just a heads up, I would like to mention that even without bumping the magic
>> byte, we will *NOT* lose zero copy, as the client (x+1) in my explanation
>> above will internally convert a null value to have the tombstone bit set,
>> and a set tombstone bit to have a null value, and by the time we move to
>> version (x+2), the clients would have upgraded. Obviously, if we support a
>> request from a consumer (x), we will lose zero copy, but that is the same
>> case with the magic byte.
>>
>> But if magic byte bump makes life easier for transition for the above
>> reasons that you explained, I am OK with it since we are going to meet the
>> end goal down the road :)
>>
>> On a side note can we update the doc here on magic byte to say that "*it
>> should be bumped whenever the message format is changed or the
>> interpretation of message format (usage of the reserved bits as well) is
>> changed*".
>>
>>
>> Hi Michael,
>>
>> Here is the updated plan that we discussed offline yesterday:
>>
>> Currently the magic-byte which corresponds to the "message.format.version"
>> is set to 1.
>>
>> 1) On broker it will be set to 1 initially.
>>
>> 2) When a producer client sends a message with magic-byte = 2, since the
>> broker is on magic-byte = 1, we will down-convert it, which means that if
>> the tombstone bit is set, the value will be set to null. A consumer
>> understanding magic-byte = 1 will still work with this. A consumer working
>> with magic-byte = 2 will also be able to understand this, since it
>> understands the tombstone.
>> Now there is still the question of supporting a non-tombstone and null
>> value from a producer client with magic-byte = 2.* (I am not sure if we
>> should support this. Ismael/Becket can comment here)*
>>
>> 3) When almost all the clients have upgraded, the message.format.version
>> on the broker can be changed to 2, wherein the down-conversion in the
>> above step will not happen. If at this point we get a consumer request
>> from an older consumer, we might have to down-convert, in which case we
>> lose zero copy, but these cases should be rare.
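As a rough illustration of step 2, a hypothetical Python sketch (this is not Kafka's actual broker code; the message dict, field names, and `down_convert_to_v1` helper are invented for illustration) of the mapping the plan implies: a set tombstone bit is rewritten as a null value so that magic-byte-1 consumers still see a delete marker.

```python
# Hypothetical sketch of the down-conversion described in step 2 above;
# NOT Kafka's actual implementation, just the mapping it implies.

def down_convert_to_v1(msg):
    """Map a magic-byte-2 message (with a tombstone flag) to v1 semantics,
    where a delete marker is represented only by a null value."""
    if msg["magic"] != 2:
        return msg  # already v1 or older; nothing to do
    converted = {"magic": 1, "key": msg["key"]}
    # A set tombstone bit becomes a null value for old consumers.
    converted["value"] = None if msg.get("tombstone") else msg["value"]
    return converted

v2_delete = {"magic": 2, "key": b"user-1", "value": b"last-state", "tombstone": True}
print(down_convert_to_v1(v2_delete))
```

Note that a magic-byte-2 message with the tombstone bit clear and a null value has no distinct v1 representation, which is the open question flagged in step 2.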
>>
>> Becket, can you review this plan and add more details if I have missed or
>> gotten something wrong, before we put it on the KIP?
>>
>> Thanks,
>>
>> Mayuresh
>>
>> On Wed, Nov 16, 2016 at 11:07 PM, Michael Pearce 
>> wrote:
>>
>> > Thanks guys, for discussing this offline and getting some consensus.
>> >
>> > So it's clear for myself and others what is proposed now (I think I
>> > understand, but want to make sure).
>> >
>> > Could I ask that you either directly update the KIP to detail the
>> > migration strategy, or (re-)state your offline-discussed and agreed
>> > migration strategy based on a magic byte in this thread.
>> >
>> >
>> > The main original driver for the KIP was to support compaction where
>> > value isn't null, based off the discussions on the KIP-82 thread.
>> >
>> > We should be able to support non-tombstone + null value by the
>> > completion of the KIP. As we noted when discussing this KIP, having
>> > logic based on a null value isn't very clean and also separates the 

[jira] [Commented] (KAFKA-3994) Deadlock between consumer heartbeat expiration and offset commit.

2016-11-21 Thread mjuarez (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3994?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15684271#comment-15684271
 ] 

mjuarez commented on KAFKA-3994:


I've been running ConsumerBounceTest multiple times, both with and without 
the patch, but the test always succeeds for me.

How were you able to trigger the deadlock on this test?

> Deadlock between consumer heartbeat expiration and offset commit.
> -
>
> Key: KAFKA-3994
> URL: https://issues.apache.org/jira/browse/KAFKA-3994
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.10.0.0
>Reporter: Jiangjie Qin
>Assignee: Jason Gustafson
>Priority: Critical
> Fix For: 0.10.1.1
>
>
> I got the following stacktraces from ConsumerBounceTest
> {code}
> ...
> "Test worker" #12 prio=5 os_prio=0 tid=0x7fbb28b7f000 nid=0x427c runnable 
> [0x7fbb06445000]
>java.lang.Thread.State: RUNNABLE
> at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
> at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
> at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79)
> at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
> - locked <0x0003d48bcbc0> (a sun.nio.ch.Util$2)
> - locked <0x0003d48bcbb0> (a 
> java.util.Collections$UnmodifiableSet)
> - locked <0x0003d48bbd28> (a sun.nio.ch.EPollSelectorImpl)
> at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
> at org.apache.kafka.common.network.Selector.select(Selector.java:454)
> at org.apache.kafka.common.network.Selector.poll(Selector.java:277)
> at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:260)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.clientPoll(ConsumerNetworkClient.java:360)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:224)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:192)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:163)
> at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureCoordinatorReady(AbstractCoordinator.java:179)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.commitOffsetsSync(ConsumerCoordinator.java:411)
> at 
> org.apache.kafka.clients.consumer.KafkaConsumer.commitSync(KafkaConsumer.java:1086)
> at 
> org.apache.kafka.clients.consumer.KafkaConsumer.commitSync(KafkaConsumer.java:1054)
> at 
> kafka.api.ConsumerBounceTest.consumeWithBrokerFailures(ConsumerBounceTest.scala:103)
> at 
> kafka.api.ConsumerBounceTest.testConsumptionWithBrokerFailures(ConsumerBounceTest.scala:70)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:483)
> at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
> at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
> at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
> at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
> at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
> at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
> at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
> at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
> at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
> at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.runTestClass(JUnitTestClassExecuter.java:105)
> at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.execute(JUnitTestClassExecuter.java:56)
>  

[jira] [Commented] (KAFKA-4307) Inconsistent parameters between console producer and consumer

2016-11-21 Thread Balint Molnar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4307?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15683783#comment-15683783
 ] 

Balint Molnar commented on KAFKA-4307:
--

Hi [~manasvigupta], can I take this?

> Inconsistent parameters between console producer and consumer
> -
>
> Key: KAFKA-4307
> URL: https://issues.apache.org/jira/browse/KAFKA-4307
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.1.0
>Reporter: Gwen Shapira
>Assignee: Manasvi Gupta
>  Labels: newbie
>
> kafka-console-producer uses --broker-list while kafka-console-consumer uses 
> --bootstrap-server.
> Let's add --bootstrap-server to the producer for some consistency?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-4428) Kafka does not exit when it receives "Address already in use" error during startup

2016-11-21 Thread Zeynep Arikoglu (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4428?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zeynep Arikoglu updated KAFKA-4428:
---
Description: 
[2016-11-21 14:58:04,136] FATAL [Kafka Server 0], Fatal error during 
KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
kafka.common.KafkaException: Socket server failed to bind to 0.0.0.0:5000: 
Address already in use.
at kafka.network.Acceptor.openServerSocket(SocketServer.scala:316)
at kafka.network.Acceptor.<init>(SocketServer.scala:242)
at 
kafka.network.SocketServer$$anonfun$startup$1.apply(SocketServer.scala:96)
at 
kafka.network.SocketServer$$anonfun$startup$1.apply(SocketServer.scala:89)
at scala.collection.Iterator$class.foreach(Iterator.scala:893)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
at 
scala.collection.MapLike$DefaultValuesIterable.foreach(MapLike.scala:206)
at kafka.network.SocketServer.startup(SocketServer.scala:89)
at kafka.server.KafkaServer.startup(KafkaServer.scala:219)
at 
kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:39)
at kafka.Kafka$.main(Kafka.scala:67)
at kafka.Kafka.main(Kafka.scala)
Caused by: java.net.BindException: Address already in use
at sun.nio.ch.Net.bind0(Native Method)
at sun.nio.ch.Net.bind(Net.java:433)
at sun.nio.ch.Net.bind(Net.java:425)
at 
sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:67)
at kafka.network.Acceptor.openServerSocket(SocketServer.scala:312)
... 11 more
[2016-11-21 14:58:04,138] INFO [Kafka Server 0], shutting down 
(kafka.server.KafkaServer)
[2016-11-21 14:58:04,146] INFO [Socket Server on Broker 0], Shutting down 
(kafka.network.SocketServer)
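For reference, the underlying condition is easy to reproduce outside Kafka. A minimal Python sketch (illustrative only, unrelated to the broker's code) showing that a second bind to an occupied port fails with EADDRINUSE, the same OS error that surfaces above as java.net.BindException:

```python
import errno
import socket

# Occupy a port, then try to bind a second listener to it.
first = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
first.bind(("127.0.0.1", 0))   # port 0: let the OS pick a free port
first.listen()
port = first.getsockname()[1]

raised = False
second = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    second.bind(("127.0.0.1", port))  # same port -> "Address already in use"
except OSError as e:
    raised = (e.errno == errno.EADDRINUSE)
finally:
    second.close()
    first.close()

print("second bind failed with EADDRINUSE:", raised)
```

The report's point is that after this FATAL error the broker logs a shutdown of the socket server but the JVM process keeps running instead of exiting.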


> Kafka does not exit when it receives "Address already in use" error during 
> startup
> --
>
> Key: KAFKA-4428
> URL: https://issues.apache.org/jira/browse/KAFKA-4428
> Project: Kafka
>  Issue Type: Bug
>  Components: network
>Affects Versions: 0.10.1.0
>Reporter: Zeynep Arikoglu
>





[jira] [Created] (KAFKA-4428) Kafka does not exit when it receives "Address already in use" error during startup

2016-11-21 Thread Zeynep Arikoglu (JIRA)
Zeynep Arikoglu created KAFKA-4428:
--

 Summary: Kafka does not exit when it receives "Address already in 
use" error during startup
 Key: KAFKA-4428
 URL: https://issues.apache.org/jira/browse/KAFKA-4428
 Project: Kafka
  Issue Type: Bug
  Components: network
Affects Versions: 0.10.1.0
Reporter: Zeynep Arikoglu








[jira] [Updated] (KAFKA-3701) Expose KafkaStreams metrics in public API

2016-11-21 Thread Mitch Seymour (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3701?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mitch Seymour updated KAFKA-3701:
-
Assignee: (was: Mitch Seymour)

> Expose KafkaStreams metrics in public API
> -
>
> Key: KAFKA-3701
> URL: https://issues.apache.org/jira/browse/KAFKA-3701
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Jeff Klukas
>Priority: Minor
>  Labels: user-experience
> Fix For: 0.10.2.0
>
>
> The Kafka clients expose their metrics registries through a `metrics` method 
> presenting an unmodifiable collection, but `KafkaStreams` does not expose its 
> registry. Currently, applications can access a StreamsMetrics instance via 
> the ProcessorContext within a Processor, but this limits flexibility.
> Having read-only access to a KafkaStreams.metrics() method would allow a 
> developer to define a health check for their application based on the metrics 
> that KafkaStreams is collecting. Or a developer might want to define a metric 
> in some other framework based on KafkaStreams' metrics.
> I am imagining that an application would build and register 
> KafkaStreams-based health checks after building a KafkaStreams instance but 
> before calling the start() method. Are metrics added to the registry at the 
> time a KafkaStreams instance is constructed, or only after calling the 
> start() method? If metrics are registered only after application startup, 
> then this approach may not be sufficient.





[jira] [Commented] (KAFKA-4425) Topic created with CreateTopic command, does not list partitions in metadata

2016-11-21 Thread Mark de Jong (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4425?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15683522#comment-15683522
 ] 

Mark de Jong commented on KAFKA-4425:
-

Great, that works! I also added a test for deleting topics, but with a 
timeout of 60 seconds it returns RequestTimedOut (7), and the topic is still 
listed when I run a metadata call against the controller.

> Topic created with CreateTopic command, does not list partitions in metadata
> 
>
> Key: KAFKA-4425
> URL: https://issues.apache.org/jira/browse/KAFKA-4425
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.1.0
>Reporter: Mark de Jong
>
> With the release of Kafka 0.10.1.0 there are now commands to delete and create 
> topics via the TCP protocol (see KIP-4 and KAFKA-2945).
> I've implemented this myself in my own driver. 
> When I send the following command to the current controller 
> "TopicDescriptor(topic = topic, nrPartitions = Some(3), replicationFactor = 
> Some(1), replicaAssignment = Seq.empty, config = Map())"
> I'll get back a response with NoError
> The server says this on the command line:
> [2016-11-20 17:54:19,599] INFO Topic creation 
> {"version":1,"partitions":{"2":[1003],"1":[1001],"0":[1002]}} 
> (kafka.admin.AdminUtils$)
> [2016-11-20 17:54:19,765] INFO [ReplicaFetcherManager on broker 1001] Removed 
> fetcher for partitions test212880282004727-1 
> (kafka.server.ReplicaFetcherManager)
> [2016-11-20 17:54:19,789] INFO Completed load of log test212880282004727-1 
> with 1 log segments and log end offset 0 in 17 ms (kafka.log.Log)
> [2016-11-20 17:54:19,791] INFO Created log for partition 
> [test212880282004727,1] in /kafka/kafka-logs-842dcf19f587 with properties 
> {compression.type -> producer, message.format.version -> 0.10.1-IV2, 
> file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, 
> min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, 
> min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, 
> min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, 
> unclean.leader.election.enable -> true, retention.bytes -> -1, 
> delete.retention.ms -> 86400000, cleanup.policy -> [delete], flush.ms -> 
> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 1073741824, 
> retention.ms -> 604800000, message.timestamp.difference.max.ms -> 
> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 
> 9223372036854775807}. (kafka.log.LogManager)
> However, when I immediately fetch metadata (v2 call), I either get:
> - The topic entry, but with no partition data in it.
> - The topic entry, but with the status NotLeaderForPartition.
> Is this a bug, or am I missing something in my client?





[jira] [Commented] (KAFKA-4425) Topic created with CreateTopic command, does not list partitions in metadata

2016-11-21 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4425?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15683246#comment-15683246
 ] 

Ismael Juma commented on KAFKA-4425:


You have to pass `null` to get all topics. An empty list returns no topics.
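The distinction comes from the wire format: in the Kafka protocol a nullable ARRAY is length-prefixed, with -1 denoting null and 0 denoting empty. A small Python sketch of the classic (pre-flexible-versions) encoding, using a hypothetical `encode_topic_array` helper for illustration:

```python
import struct

def encode_topic_array(topics):
    """Encode a metadata-request topics array: None (null) -> all topics,
    [] (empty) -> no topics. Classic, non-flexible protocol encoding."""
    if topics is None:
        return struct.pack(">i", -1)          # null array: length -1
    out = struct.pack(">i", len(topics))      # element count
    for name in topics:
        raw = name.encode("utf-8")
        out += struct.pack(">h", len(raw)) + raw  # STRING: int16 length + bytes
    return out

print(encode_topic_array(None).hex())  # ffffffff
print(encode_topic_array([]).hex())    # 00000000
```

A client that conflates the two encodings will ask for "no topics" when it meant "all topics".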






[jira] [Commented] (KAFKA-4425) Topic created with CreateTopic command, does not list partitions in metadata

2016-11-21 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4425?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15683245#comment-15683245
 ] 

Ismael Juma commented on KAFKA-4425:


You have to pass `null` to get all topics. An empty list returns no topics.






[jira] [Commented] (KAFKA-4425) Topic created with CreateTopic command, does not list partitions in metadata

2016-11-21 Thread Mark de Jong (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4425?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15683238#comment-15683238
 ] 

Mark de Jong commented on KAFKA-4425:
-

Interesting: with a metadata v2 request listing no topics, I would assume all 
topics are sent back, but nothing is returned instead. Bug?






[jira] [Comment Edited] (KAFKA-4425) Topic created with CreateTopic command, does not list partitions in metadata

2016-11-21 Thread Mark de Jong (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4425?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15683132#comment-15683132
 ] 

Mark de Jong edited comment on KAFKA-4425 at 11/21/16 10:29 AM:


It seems I've solved the issue. My codec for booleans was wrong. It is 
encoded as an int8, but I assumed that it was a bit (0 | 1). Contacting the 
controller about metadata is enough :-)

It was not listed in the primitive types section of the docs: 
https://kafka.apache.org/protocol.html
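For anyone hitting the same thing: the Kafka protocol serializes BOOLEAN as a single int8 byte (0 is false, any non-zero value reads as true). A minimal sketch of a correct codec (the helper names here are illustrative, not from any particular client library):

```python
import struct

def encode_bool(value):
    # BOOLEAN is one int8 byte on the wire: 0 for false, 1 for true.
    return struct.pack(">b", 1 if value else 0)

def decode_bool(data):
    (raw,) = struct.unpack(">b", data[:1])
    return raw != 0  # any non-zero byte reads as true

print(encode_bool(True))      # b'\x01'
print(decode_bool(b"\x00"))   # False
```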


was (Author: fristi):
It seems I've solved the issue. My codec for booleans was wrong. It seems it is 
encoded as a int8, but I assumed that it was a bit (0 | 1). Contacting the 
controller about metadata is enough :-)







[jira] [Commented] (KAFKA-4425) Topic created with CreateTopic command, does not list partitions in metadata

2016-11-21 Thread Mark de Jong (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4425?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15683132#comment-15683132
 ] 

Mark de Jong commented on KAFKA-4425:
-

It seems I've solved the issue. My codec for booleans was wrong. It is 
encoded as an int8, but I assumed that it was a bit (0 | 1). Contacting the 
controller about metadata is enough :-)







[jira] [Commented] (KAFKA-4425) Topic created with CreateTopic command, does not list partitions in metadata

2016-11-21 Thread Mark de Jong (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4425?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15683086#comment-15683086
 ] 

Mark de Jong commented on KAFKA-4425:
-

I had the same idea about sending a metadata request to the current controller, 
but it doesn't seem to work. After that, I tried sending a metadata request 
to all brokers (first running a metadata call to list all brokers, then sending 
a metadata call with the topic to each broker listed), but all of them return 
the topic without partition data in it.

> Topic created with CreateTopic command, does not list partitions in metadata
> 
>
> Key: KAFKA-4425
> URL: https://issues.apache.org/jira/browse/KAFKA-4425
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.1.0
>Reporter: Mark de Jong
>
> With the release of Kafka 0.10.1.0 there are now commands to delete and create 
> topics via the TCP protocol (see KIP-4 and KAFKA-2945).
> I've implemented this myself in my own driver. 
> When I send the following command to the current controller 
> "TopicDescriptor(topic = topic, nrPartitions = Some(3), replicationFactor = 
> Some(1), replicaAssignment = Seq.empty, config = Map())"
> I'll get back a response with NoError
> The server says this on the command line:
> [2016-11-20 17:54:19,599] INFO Topic creation 
> {"version":1,"partitions":{"2":[1003],"1":[1001],"0":[1002]}} 
> (kafka.admin.AdminUtils$)
> [2016-11-20 17:54:19,765] INFO [ReplicaFetcherManager on broker 1001] Removed 
> fetcher for partitions test212880282004727-1 
> (kafka.server.ReplicaFetcherManager)
> [2016-11-20 17:54:19,789] INFO Completed load of log test212880282004727-1 
> with 1 log segments and log end offset 0 in 17 ms (kafka.log.Log)
> [2016-11-20 17:54:19,791] INFO Created log for partition 
> [test212880282004727,1] in /kafka/kafka-logs-842dcf19f587 with properties 
> {compression.type -> producer, message.format.version -> 0.10.1-IV2, 
> file.delete.delay.ms -> 6, max.message.bytes -> 112, 
> min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, 
> min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, 
> min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, 
> unclean.leader.election.enable -> true, retention.bytes -> -1, 
> delete.retention.ms -> 8640, cleanup.policy -> [delete], flush.ms -> 
> 9223372036854775807, segment.ms -> 60480, segment.bytes -> 1073741824, 
> retention.ms -> 60480, message.timestamp.difference.max.ms -> 
> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 
> 9223372036854775807}. (kafka.log.LogManager)
> However, when I immediately fetch metadata (v2 call), I either get:
> - The topic entry, but with no partition data in it.
> - The topic entry, but with the status NotLeaderForPartition.
> Is this a bug, or am I missing something in my client?





[jira] [Created] (KAFKA-4427) Skip topicGroups with no tasks

2016-11-21 Thread Eno Thereska (JIRA)
Eno Thereska created KAFKA-4427:
---

 Summary: Skip topicGroups with no tasks
 Key: KAFKA-4427
 URL: https://issues.apache.org/jira/browse/KAFKA-4427
 Project: Kafka
  Issue Type: Bug
  Components: streams
Affects Versions: 0.10.2.0
Reporter: Eno Thereska
Assignee: Eno Thereska
 Fix For: 0.10.2.0


Currently the StreamPartitionAssignor's "assign" method does not handle cases 
where we don't have tasks for a particular topic group. E.g., code like this 
might give an NPE:
"for (TaskId task : tasksByTopicGroup.get(topicGroupId))" 

We need to handle the above cases.
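A minimal sketch of the missing guard, in plain Java with mock data (this is not the actual StreamPartitionAssignor code; the map shape and task names are assumed from the snippet above):

```java
import java.util.*;

public class AssignSketch {
    // Count tasks across topic groups, skipping groups that have no tasks.
    // A bare tasksByTopicGroup.get(id) returns null for such groups, and
    // iterating over that null is exactly the NPE described in the ticket.
    static int countTasks(Map<Integer, Set<String>> tasksByTopicGroup,
                          List<Integer> groupIds) {
        int assigned = 0;
        for (int topicGroupId : groupIds) {
            Set<String> tasks = tasksByTopicGroup.get(topicGroupId);
            if (tasks == null || tasks.isEmpty()) {
                continue; // no tasks for this topic group: skip instead of throwing
            }
            assigned += tasks.size();
        }
        return assigned;
    }

    public static void main(String[] args) {
        Map<Integer, Set<String>> tasksByTopicGroup = new HashMap<>();
        tasksByTopicGroup.put(0, new HashSet<>(Arrays.asList("0_0", "0_1")));
        // topic group 1 exists in the topology but received no tasks
        System.out.println(countTasks(tasksByTopicGroup, Arrays.asList(0, 1))); // prints 2
    }
}
```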





[jira] [Updated] (KAFKA-3038) Speeding up partition reassignment after broker failure

2016-11-21 Thread Eno Thereska (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3038?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eno Thereska updated KAFKA-3038:

Assignee: (was: Eno Thereska)

> Speeding up partition reassignment after broker failure
> ---
>
> Key: KAFKA-3038
> URL: https://issues.apache.org/jira/browse/KAFKA-3038
> Project: Kafka
>  Issue Type: Improvement
>  Components: controller, core
>Affects Versions: 0.9.0.0
>Reporter: Eno Thereska
> Fix For: 0.11.0.0
>
>
> After a broker failure the controller does several writes to ZooKeeper for 
> each partition on the failed broker. Writes are done one at a time, in a 
> closed loop, which is slow, especially over high-latency networks. ZooKeeper 
> has support for batching operations (the "multi" API). It is expected that 
> substituting serial writes with batched ones should reduce failure-handling 
> time by an order of magnitude.
> This is identified as an issue in 
> https://cwiki.apache.org/confluence/display/KAFKA/kafka+Detailed+Replication+Design+V3
>  (section End-to-end latency during a broker failure)
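A back-of-the-envelope sketch of the expected win from batching, in plain Java (the batch size and RTT below are assumed figures for illustration, not measurements from Kafka or a proposed configuration):

```java
public class BatchingSketch {
    // Serial writes: one ZooKeeper round trip per partition znode.
    static int serialRoundTrips(int partitions) {
        return partitions;
    }

    // Batched writes: one "multi" call per batch of updates.
    static int batchedRoundTrips(int partitions, int batchSize) {
        return (partitions + batchSize - 1) / batchSize; // ceiling division
    }

    public static void main(String[] args) {
        int partitions = 2000; // assumed partition count on the failed broker
        int batch = 100;       // assumed updates packed into one multi op
        // At an assumed 50 ms RTT: 2000 round trips cost ~100 s serially,
        // versus 20 round trips (~1 s) batched, a 100x reduction here.
        System.out.println(serialRoundTrips(partitions));
        System.out.println(batchedRoundTrips(partitions, batch));
    }
}
```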





[jira] [Commented] (KAFKA-3514) Stream timestamp computation needs some further thoughts

2016-11-21 Thread Michal Borowiecki (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3514?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15683056#comment-15683056
 ] 

Michal Borowiecki commented on KAFKA-3514:
--

IMO, 2) *is* a severe problem. Punctuate methods (as described by their API) 
are meant to perform periodic operations. As it currently stands, if any of the 
input topics doesn't receive messages regularly, the punctuate method won't be 
called regularly either (because the minimum timestamp across all partitions 
does not advance), which violates what the API promises. 
We've worked around it in our app by creating an independent stream and a 
scheduler that sends ticks regularly to an input topic feeding a Transformer, 
so that its punctuate method is called predictably, but this is far from ideal.

> Stream timestamp computation needs some further thoughts
> 
>
> Key: KAFKA-3514
> URL: https://issues.apache.org/jira/browse/KAFKA-3514
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Guozhang Wang
>  Labels: architecture
> Fix For: 0.10.2.0
>
>
> Our current stream task's timestamp is used for punctuate function as well as 
> selecting which stream to process next (i.e. best effort stream 
> synchronization). And it is defined as the smallest timestamp over all 
> partitions in the task's partition group. This results in two unintuitive 
> corner cases:
> 1) observing a late-arrived record will keep that stream's timestamp low for 
> a period of time, and hence that stream keeps being processed until the late 
> record is handled. For example, take two partitions within the same task, 
> annotated by their timestamps:
> {code}
> Stream A: 5, 6, 7, 8, 9, 1, 10
> {code}
> {code}
> Stream B: 2, 3, 4, 5
> {code}
> The late-arrived record with timestamp "1" will cause stream A to be selected 
> continuously in the thread loop, i.e. messages with timestamps 5, 6, 7, 8, 9, 
> until the record itself is dequeued and processed; then stream B will be 
> selected, starting with timestamp 2.
> 2) an empty buffered partition will cause its timestamp not to advance, and 
> hence the task timestamp as well, since it is the smallest among all 
> partitions. This may not be as severe a problem as 1) above, though.
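The selection behaviour described in corner case 1) can be simulated in a few lines of plain Java (a deliberate simplification of the real partition-group logic, using bare integers as timestamps; a queue's timestamp is the minimum over its buffered records):

```java
import java.util.*;

public class StreamSelectSketch {
    // A partition queue's timestamp: the smallest timestamp among its
    // buffered records (MAX_VALUE when the queue is empty).
    static long queueTimestamp(Deque<Integer> q) {
        long min = Long.MAX_VALUE;
        for (int ts : q) min = Math.min(min, ts);
        return min;
    }

    // Drain both queues, always dequeuing from the queue whose buffered
    // minimum timestamp is smallest, and record the processing order.
    static String drain(Deque<Integer> a, Deque<Integer> b) {
        StringBuilder order = new StringBuilder();
        while (!a.isEmpty() || !b.isEmpty()) {
            boolean pickA = queueTimestamp(a) <= queueTimestamp(b);
            Deque<Integer> q = pickA ? a : b;
            order.append(pickA ? 'A' : 'B').append(q.pollFirst()).append(' ');
        }
        return order.toString().trim();
    }

    public static void main(String[] args) {
        Deque<Integer> a = new ArrayDeque<>(Arrays.asList(5, 6, 7, 8, 9, 1, 10));
        Deque<Integer> b = new ArrayDeque<>(Arrays.asList(2, 3, 4, 5));
        // The late record "1" pins stream A's timestamp below B's until dequeued,
        // so A is chosen continuously even though B's records are older.
        System.out.println(drain(a, b)); // prints A5 A6 A7 A8 A9 A1 B2 B3 B4 B5 A10
    }
}
```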





[GitHub] kafka pull request #2154: HOTFIX: Increased wait time

2016-11-21 Thread enothereska
GitHub user enothereska opened a pull request:

https://github.com/apache/kafka/pull/2154

HOTFIX: Increased wait time



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/enothereska/kafka hotfix_streams

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2154.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2154


commit fcf24cc607d3760648668bff4f7527fdd9cecf04
Author: Eno Thereska 
Date:   2016-11-21T09:46:18Z

Increased wait time




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Created] (KAFKA-4426) Add consumer.close(timeout, unit) for graceful close with timeout

2016-11-21 Thread Rajini Sivaram (JIRA)
Rajini Sivaram created KAFKA-4426:
-

 Summary: Add consumer.close(timeout, unit) for graceful close with 
timeout
 Key: KAFKA-4426
 URL: https://issues.apache.org/jira/browse/KAFKA-4426
 Project: Kafka
  Issue Type: Improvement
  Components: consumer
Affects Versions: 0.10.1.0
Reporter: Rajini Sivaram
Assignee: Rajini Sivaram


KAFKA-3703 implements graceful close of consumers with a hard-coded timeout of 
5 seconds. For consistency with the producer, add a close method with 
configurable timeout for Consumer.

{code}
public void close(long timeout, TimeUnit unit);
{code}

Since this is a public interface change, this change requires a KIP.
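A sketch of how the new overload could sit next to the existing no-arg close (the interface below is hypothetical and exists only for illustration; the 5-second figure is the current hard-coded timeout from KAFKA-3703):

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;
import java.util.function.Consumer;

public class CloseSketch {
    // Hypothetical shape of the proposal: close(timeout, unit) is the new
    // overload; the no-arg close() keeps today's 5-second default.
    interface GracefulClose {
        void close(long timeout, TimeUnit unit);

        default void close() {
            close(5, TimeUnit.SECONDS);
        }
    }

    // Capture the millisecond budget a given close(...) call hands down.
    static long closeBudgetMs(Consumer<GracefulClose> caller) {
        AtomicLong budget = new AtomicLong(-1);
        GracefulClose consumer = (timeout, unit) -> budget.set(unit.toMillis(timeout));
        caller.accept(consumer);
        return budget.get();
    }

    public static void main(String[] args) {
        System.out.println(closeBudgetMs(GracefulClose::close));               // 5000
        System.out.println(closeBudgetMs(c -> c.close(30, TimeUnit.SECONDS))); // 30000
    }
}
```

Existing callers of the no-arg close keep the old behaviour unchanged, which is why an overload (rather than a signature change) fits a compatible KIP.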





[jira] [Commented] (KAFKA-4425) Topic created with CreateTopic command, does not list partitions in metadata

2016-11-21 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4425?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15682981#comment-15682981
 ] 

Ismael Juma commented on KAFKA-4425:


[~Fristi], which broker are you sending the metadata request to? The create 
topic call doesn't ensure that the data has been propagated to every broker in 
the cluster. It just ensures that the controller's metadata cache has been 
updated.
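A client can therefore treat empty partition metadata right after topic creation as transient and re-fetch. A minimal self-contained sketch of that retry loop (the metadata source is mocked; all names here are illustrative, not part of any Kafka client API):

```java
import java.util.*;
import java.util.function.Supplier;

public class MetadataRetrySketch {
    // Poll a metadata source until the topic reports partitions, or give up.
    static List<Integer> awaitPartitions(Supplier<List<Integer>> fetch, int maxAttempts) {
        for (int attempt = 0; attempt < maxAttempts; attempt++) {
            List<Integer> partitions = fetch.get();
            if (!partitions.isEmpty()) return partitions;
            // a real client would sleep with backoff here before re-fetching
        }
        return Collections.emptyList();
    }

    public static void main(String[] args) {
        // Mock: metadata is empty for the first two fetches (not yet propagated
        // to the broker we asked), then the partitions appear.
        Iterator<List<Integer>> responses = Arrays.asList(
                Collections.<Integer>emptyList(),
                Collections.<Integer>emptyList(),
                Arrays.asList(0, 1, 2)).iterator();
        System.out.println(awaitPartitions(responses::next, 5)); // prints [0, 1, 2]
    }
}
```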

> Topic created with CreateTopic command, does not list partitions in metadata
> 
>
> Key: KAFKA-4425
> URL: https://issues.apache.org/jira/browse/KAFKA-4425
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.1.0
>Reporter: Mark de Jong
>
> With the release of Kafka 0.10.1.0 there are now commands to delete and create 
> topics via the TCP protocol (see KIP-4 and KAFKA-2945).
> I've implemented this myself in my own driver. 
> When I send the following command to the current controller 
> "TopicDescriptor(topic = topic, nrPartitions = Some(3), replicationFactor = 
> Some(1), replicaAssignment = Seq.empty, config = Map())"
> I'll get back a response with NoError
> The server says this on the command line:
> [2016-11-20 17:54:19,599] INFO Topic creation 
> {"version":1,"partitions":{"2":[1003],"1":[1001],"0":[1002]}} 
> (kafka.admin.AdminUtils$)
> [2016-11-20 17:54:19,765] INFO [ReplicaFetcherManager on broker 1001] Removed 
> fetcher for partitions test212880282004727-1 
> (kafka.server.ReplicaFetcherManager)
> [2016-11-20 17:54:19,789] INFO Completed load of log test212880282004727-1 
> with 1 log segments and log end offset 0 in 17 ms (kafka.log.Log)
> [2016-11-20 17:54:19,791] INFO Created log for partition 
> [test212880282004727,1] in /kafka/kafka-logs-842dcf19f587 with properties 
> {compression.type -> producer, message.format.version -> 0.10.1-IV2, 
> file.delete.delay.ms -> 6, max.message.bytes -> 112, 
> min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, 
> min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, 
> min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, 
> unclean.leader.election.enable -> true, retention.bytes -> -1, 
> delete.retention.ms -> 8640, cleanup.policy -> [delete], flush.ms -> 
> 9223372036854775807, segment.ms -> 60480, segment.bytes -> 1073741824, 
> retention.ms -> 60480, message.timestamp.difference.max.ms -> 
> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 
> 9223372036854775807}. (kafka.log.LogManager)
> However, when I immediately fetch metadata (v2 call), I either get:
> - The topic entry, but with no partition data in it.
> - The topic entry, but with the status NotLeaderForPartition.
> Is this a bug, or am I missing something in my client?


