[jira] [Commented] (KAFKA-1688) Add authorization interface and naive implementation

2015-01-19 Thread Joe Stein (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1688?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14283108#comment-14283108
 ] 

Joe Stein commented on KAFKA-1688:
--

what is your confluence username so I can grant you permission?

 Add authorization interface and naive implementation
 

 Key: KAFKA-1688
 URL: https://issues.apache.org/jira/browse/KAFKA-1688
 Project: Kafka
  Issue Type: Sub-task
  Components: security
Reporter: Jay Kreps
Assignee: Sriharsha Chintalapani
 Fix For: 0.8.3


 Add a PermissionManager interface as described here:
 https://cwiki.apache.org/confluence/display/KAFKA/Security
 (possibly there is a better name?)
 Implement calls to the PermissionsManager in KafkaApis for the main requests 
 (FetchRequest, ProduceRequest, etc). We will need to add a new error code and 
 exception to the protocol to indicate permission denied.
 Add a server configuration to give the class you want to instantiate that 
 implements that interface. That class can define its own configuration 
 properties from the main config file.
 Provide a simple implementation of this interface which just takes a user and 
 ip whitelist and permits those in either of the whitelists to do anything, 
 and denies all others.
 Rather than writing an integration test for this class we can probably just 
 use this class for the TLS and SASL authentication testing.
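
A minimal Scala sketch of what such an interface and its naive whitelist implementation could look like; the trait name, method signatures, and property keys below are assumptions for illustration, not the agreed API:

{code}
// Hypothetical sketch only -- the real interface is still being designed.
trait PermissionManager {
  // Let the implementation read its own settings from the broker config file.
  def configure(props: java.util.Properties): Unit
  // True if the given user or client IP may perform the operation on the resource.
  def isPermitted(user: String, clientIp: String, operation: String, resource: String): Boolean
}

// Naive implementation: permit anyone on either whitelist, deny everyone else.
class WhitelistPermissionManager extends PermissionManager {
  private var users: Set[String] = Set.empty
  private var ips: Set[String] = Set.empty

  override def configure(props: java.util.Properties): Unit = {
    def csv(key: String): Set[String] =
      Option(props.getProperty(key)).map(_.split(",").map(_.trim).toSet).getOrElse(Set.empty)
    users = csv("permission.manager.user.whitelist") // assumed property names
    ips = csv("permission.manager.ip.whitelist")
  }

  override def isPermitted(user: String, clientIp: String, operation: String, resource: String): Boolean =
    users.contains(user) || ips.contains(clientIp)
}
{code}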



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[VOTE] 0.8.2.0 Candidate 2

2015-01-19 Thread Jun Rao
This is the second candidate for release of Apache Kafka 0.8.2.0. There have
been some changes since the 0.8.2 beta release, especially in the new Java
producer API and JMX MBean names. It would be great if people can test this
out thoroughly.

Release Notes for the 0.8.2.0 release
https://people.apache.org/~junrao/kafka-0.8.2.0-candidate2/RELEASE_NOTES.html

*** Please download, test and vote by Monday, Jan 26th, 7pm PT

Kafka's KEYS file containing PGP keys we use to sign the release:
http://kafka.apache.org/KEYS, in addition
to the md5, sha1 and sha2 (SHA256) checksums.

* Release artifacts to be voted upon (source and binary):
https://people.apache.org/~junrao/kafka-0.8.2.0-candidate2/

* Maven artifacts to be voted upon prior to release:
https://repository.apache.org/content/groups/staging/

* scala-doc
https://people.apache.org/~junrao/kafka-0.8.2.0-candidate2/scaladoc/

* java-doc
https://people.apache.org/~junrao/kafka-0.8.2.0-candidate2/javadoc/

* The tag to be voted upon (off the 0.8.2 branch) is the 0.8.2.0 tag
https://git-wip-us.apache.org/repos/asf?p=kafka.git;a=tag;h=058d58adef2ab2787e49d8efeefd61bb3d32f99c
(commit 0b312a6b9f0833d38eec434bfff4c647c1814564)


Thanks,

Jun


[jira] [Updated] (KAFKA-1840) Add a simple message handler in Mirror Maker

2015-01-19 Thread Jiangjie Qin (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1840?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiangjie Qin updated KAFKA-1840:

Assignee: Jiangjie Qin
  Status: Patch Available  (was: Open)

 Add a simple message handler in Mirror Maker
 

 Key: KAFKA-1840
 URL: https://issues.apache.org/jira/browse/KAFKA-1840
 Project: Kafka
  Issue Type: Improvement
Reporter: Jiangjie Qin
Assignee: Jiangjie Qin
 Attachments: KAFKA-1840.patch


 Currently mirror maker simply mirrors all the messages it consumes from the 
 source cluster to the target cluster. It would be useful to allow users to do 
 some simple processing such as filtering/reformatting in mirror maker. We can 
 allow users to wire in a message handler to handle messages. The default 
 handler could just do nothing.
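
As a rough illustration of the idea, a pluggable handler could look like the Scala sketch below; the trait name and signature are assumptions for this sketch, not the attached patch:

{code}
// Sketch only: a handler maps each consumed record to zero or more records to
// produce, so it can filter (return nothing) or reformat (return modified copies).
trait MirrorMakerMessageHandler {
  def handle(key: Array[Byte], value: Array[Byte]): Seq[(Array[Byte], Array[Byte])]
}

// Default handler: pass every message through unchanged.
object DefaultMirrorMakerMessageHandler extends MirrorMakerMessageHandler {
  override def handle(key: Array[Byte], value: Array[Byte]): Seq[(Array[Byte], Array[Byte])] =
    Seq((key, value))
}

// Example custom handler: drop messages with empty payloads.
object DropEmptyMessagesHandler extends MirrorMakerMessageHandler {
  override def handle(key: Array[Byte], value: Array[Byte]): Seq[(Array[Byte], Array[Byte])] =
    if (value == null || value.isEmpty) Seq.empty else Seq((key, value))
}
{code}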



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1840) Add a simple message handler in Mirror Maker

2015-01-19 Thread Jiangjie Qin (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1840?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14283390#comment-14283390
 ] 

Jiangjie Qin commented on KAFKA-1840:
-

Created reviewboard https://reviews.apache.org/r/30063/diff/
 against branch origin/trunk

 Add a simple message handler in Mirror Maker
 

 Key: KAFKA-1840
 URL: https://issues.apache.org/jira/browse/KAFKA-1840
 Project: Kafka
  Issue Type: Improvement
Reporter: Jiangjie Qin
 Attachments: KAFKA-1840.patch


 Currently mirror maker simply mirrors all the messages it consumes from the 
 source cluster to the target cluster. It would be useful to allow users to do 
 some simple processing such as filtering/reformatting in mirror maker. We can 
 allow users to wire in a message handler to handle messages. The default 
 handler could just do nothing.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1840) Add a simple message handler in Mirror Maker

2015-01-19 Thread Jiangjie Qin (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1840?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiangjie Qin updated KAFKA-1840:

Attachment: KAFKA-1840.patch

 Add a simple message handler in Mirror Maker
 

 Key: KAFKA-1840
 URL: https://issues.apache.org/jira/browse/KAFKA-1840
 Project: Kafka
  Issue Type: Improvement
Reporter: Jiangjie Qin
 Attachments: KAFKA-1840.patch


 Currently mirror maker simply mirrors all the messages it consumes from the 
 source cluster to the target cluster. It would be useful to allow users to do 
 some simple processing such as filtering/reformatting in mirror maker. We can 
 allow users to wire in a message handler to handle messages. The default 
 handler could just do nothing.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1882) Create extendable channel interface and default implementations

2015-01-19 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1882?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-1882:

Description: 
For the security infrastructure, we need an extendible interface to replace 
SocketChannel.

KAFKA-1684 suggests extending SocketChannel itself, but since SocketChannel is 
part of Java's standard library, the interface changes between different Java 
versions, so extending it directly can become a compatibility issue.

Instead, we can implement a KafkaChannel interface, which will implement 
connect(), read(), write() and possibly other methods we use. 

We will replace direct use of SocketChannel in our code with use of 
KafkaChannel.

Different implementations of KafkaChannel will be instantiated based on the 
port/SecurityProtocol configuration. 

This patch will provide at least the PLAINTEXT implementation for KafkaChannel.

I will validate that the SSL implementation in KAFKA-1684 can be refactored to 
use a KafkaChannel interface rather than extend SocketChannel directly. 
However, the patch will not include the SSL channel itself.

The interface should also include setters/getters for principal and remote IP, 
which will be used for the authentication code.

  was:
For the security infrastructure, we need an extendible interface to replace 
SocketChannel.

KAFKA-1684 suggests extending SocketChannel itself, but since SocketChannel is 
part of Java's standard library, the interface changes between different Java 
versions, so extending it directly can become a compatibility issue.

Instead, we can implement a KafkaChannel interface, which will implement 
connect(), read(), write() and possibly other methods we use. 

We will replace direct use of SocketChannel in our code with use of 
KafkaChannel.

Different implementations of KafkaChannel will be instantiated based on the 
port/SecurityProtocol configuration. 

This patch will provide at least the PLAINTEXT implementation for KafkaChannel.

I will validate that the SSL implementation in KAFKA-1684 can be refactored to 
use a KafkaChannel interface rather than extend SocketChannel directly. 
However, the patch will not include the SSL channel itself.


 Create extendable channel interface and default implementations
 ---

 Key: KAFKA-1882
 URL: https://issues.apache.org/jira/browse/KAFKA-1882
 Project: Kafka
  Issue Type: Sub-task
  Components: security
Reporter: Gwen Shapira
Assignee: Gwen Shapira

 For the security infrastructure, we need an extendible interface to replace 
 SocketChannel.
 KAFKA-1684 suggests extending SocketChannel itself, but since SocketChannel 
 is part of Java's standard library, the interface changes between different 
 Java versions, so extending it directly can become a compatibility issue.
 Instead, we can implement a KafkaChannel interface, which will implement 
 connect(), read(), write() and possibly other methods we use. 
 We will replace direct use of SocketChannel in our code with use of 
 KafkaChannel.
 Different implementations of KafkaChannel will be instantiated based on the 
 port/SecurityProtocol configuration. 
 This patch will provide at least the PLAINTEXT implementation for 
 KafkaChannel.
 I will validate that the SSL implementation in KAFKA-1684 can be refactored 
 to use a KafkaChannel interface rather than extend SocketChannel directly. 
 However, the patch will not include the SSL channel itself.
 The interface should also include setters/getters for principal and remote 
 IP, which will be used for the authentication code.
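
A rough Scala sketch of the wrapping idea described above; the method names, the principal accessors, and the PlaintextChannel class are assumptions used for illustration, not the contents of the patch:

{code}
import java.net.InetSocketAddress
import java.nio.ByteBuffer
import java.nio.channels.SocketChannel

// Sketch only: a Kafka-owned channel abstraction so the rest of the code never
// touches SocketChannel directly.
trait KafkaChannel {
  def connect(address: InetSocketAddress): Boolean
  def read(dst: ByteBuffer): Int
  def write(src: ByteBuffer): Int
  def close(): Unit
  // Getter/setter used by the authentication code.
  def principal: Option[String]
  def setPrincipal(p: Option[String]): Unit
  def remoteIp: String
}

// PLAINTEXT implementation: a thin pass-through over a plain SocketChannel.
class PlaintextChannel(underlying: SocketChannel) extends KafkaChannel {
  @volatile private var currentPrincipal: Option[String] = None
  override def principal: Option[String] = currentPrincipal
  override def setPrincipal(p: Option[String]): Unit = { currentPrincipal = p }
  override def connect(address: InetSocketAddress): Boolean = underlying.connect(address)
  override def read(dst: ByteBuffer): Int = underlying.read(dst)
  override def write(src: ByteBuffer): Int = underlying.write(src)
  override def close(): Unit = underlying.close()
  override def remoteIp: String =
    Option(underlying.socket().getInetAddress).map(_.getHostAddress).getOrElse("unknown")
}
{code}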



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Kafka Out of Memory error

2015-01-19 Thread Pranay Agarwal
Thanks a lot Natty.

I am using this Ruby gem on the client side with all the default config
https://github.com/joekiller/jruby-kafka/blob/master/lib/jruby-kafka/group.rb
and the value fetch.message.max.bytes is set to 1MB.

Currently I only have 3 nodes set up in the Kafka cluster (with 8 GB RAM),
and if 1 consumer is going to take 1000 partitions x 1MB ~ 1GB, does it
mean 1 Kafka node can at best support 8 consumers only? Also, when I do
top/free on the Kafka cluster nodes (both ZooKeeper and Kafka are deployed
on each of the 3 nodes of the cluster) I don't see lots of memory being used on
the machine. Also, even with this calculation, I shouldn't be facing any
issue with only 1 consumer, as I have 8GB of JVM space given to the Kafka
nodes, right?

Thanks
-Pranay

On Mon, Jan 19, 2015 at 2:53 PM, Jonathan Natkins na...@streamsets.com
wrote:

 The fetch.message.max.size is actually a client-side configuration. With
 regard to increasing the number of threads, I think the calculation may be
 a little more subtle than what you're proposing, and frankly, it's unlikely
 that your servers can handle allocating 200MB x 1000 threads = 200GB of
 memory at a single time.

 I believe that if you have every partition on a single broker, and all of
 your consumer threads are requesting data simultaneously, then yes, the
 broker would attempt to allocate 200GB of heap, and probably you'll hit an
 OOME. However, since each consumer is only reading from one partition,
 those 1000 threads should be making requests that are spread out over the
 entire Kafka cluster. Depending on the memory on your servers, you may need
 to increase the number of brokers in your cluster to support the 1000
 threads. For example, I would expect that you can support this with 10
 brokers if each broker has something north of 20GB of heap allocated.

 Some of this is a little bit of guess work on my part, and I'm not super
 confident of my numbers...Can anybody else on the list validate my math?

 Thanks,
 Natty

 Jonathan Natty Natkins
 StreamSets | Customer Engagement Engineer
 mobile: 609.577.1600 | linkedin http://www.linkedin.com/in/nattyice


 On Mon, Jan 19, 2015 at 2:34 PM, Pranay Agarwal agarwalpran...@gmail.com
 wrote:

  Thanks Natty.
 
  Is there any config which I need to change on the client side as well?
  Also, currently I am trying with only 1 consumer thread. Does the
 equation
  changes to
  (#partitions)*(fetchsize)*(#consumer_threads) in case I try to read with
  1000 threads from from topic2(1000 partitions)?
 
  -Pranay
 
  On Mon, Jan 19, 2015 at 2:26 PM, Jonathan Natkins na...@streamsets.com
  wrote:
 
   Hi Pranay,
  
   I think the JIRA you're referencing is a bit orthogonal to the OOME
 that
   you're experiencing. Based on the stacktrace, it looks like your OOME
 is
   coming from a consumer request, which is attempting to allocate 200MB.
   There was a thread (relatively recently) that discussed what I think is
   your issue:
  
  
  
 
 http://mail-archives.apache.org/mod_mbox/kafka-users/201412.mbox/%3CCAG1fNJDHHGSL-x3wp=pPZS1asOdOBrQ-Ge3kiA3Bk_iz7o=5...@mail.gmail.com%3E
  
   I suspect that the takeaway is that the way Kafka determines the
 required
   memory for a consumer request is (#partitions in the topic) x
   (replica.fetch.max.bytes), and seemingly you don't have enough memory
   allocated to handle that request. The solution is likely to increase
 the
   heap size on your brokers or to decrease your max fetch size.
  
   Thanks,
   Natty
  
   Jonathan Natty Natkins
   StreamSets | Customer Engagement Engineer
   mobile: 609.577.1600 | linkedin http://www.linkedin.com/in/nattyice
  
  
   On Mon, Jan 19, 2015 at 2:10 PM, Pranay Agarwal 
  agarwalpran...@gmail.com
   wrote:
  
Hi All,
   
I have a kafka cluster setup which has 2 topics
   
topic1 with 10 partitions
topic2 with 1000 partitions.
   
While, I am able to consume messages from topic1 just fine, I get
   following
error from the topic2. There is a resolved issue here on the same
 thing
https://issues.apache.org/jira/browse/KAFKA-664
   
I am using latest kafka server version, and I have used kafka command
   line
tools to consumer from the topics.
   

[2015-01-19 22:08:10,758] ERROR OOME with size 201332748
(kafka.network.BoundedByteBufferReceive)
   
java.lang.OutOfMemoryError: Java heap space
at java.nio.HeapByteBuffer.init(HeapByteBuffer.java:57)
at java.nio.ByteBuffer.allocate(ByteBuffer.java:331)
at
   
   
  
 
 kafka.network.BoundedByteBufferReceive.byteBufferAllocate(BoundedByteBufferReceive.scala:80)
at
   
   
  
 
 kafka.network.BoundedByteBufferReceive.readFrom(BoundedByteBufferReceive.scala:63)
at kafka.network.Receive$class.readCompletely(Transmission.scala:56)
at
   
   
  
 
 kafka.network.BoundedByteBufferReceive.readCompletely(BoundedByteBufferReceive.scala:29)
at kafka.network.BlockingChannel.receive(BlockingChannel.scala:100)
at
 

[jira] [Commented] (KAFKA-1688) Add authorization interface and naive implementation

2015-01-19 Thread Joe Stein (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1688?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14283196#comment-14283196
 ] 

Joe Stein commented on KAFKA-1688:
--

you can create a child page or such; just checked your perms, looks ok to do so

 Add authorization interface and naive implementation
 

 Key: KAFKA-1688
 URL: https://issues.apache.org/jira/browse/KAFKA-1688
 Project: Kafka
  Issue Type: Sub-task
  Components: security
Reporter: Jay Kreps
Assignee: Sriharsha Chintalapani
 Fix For: 0.8.3


 Add a PermissionManager interface as described here:
 https://cwiki.apache.org/confluence/display/KAFKA/Security
 (possibly there is a better name?)
 Implement calls to the PermissionsManager in KafkaApis for the main requests 
 (FetchRequest, ProduceRequest, etc). We will need to add a new error code and 
 exception to the protocol to indicate permission denied.
 Add a server configuration to give the class you want to instantiate that 
 implements that interface. That class can define its own configuration 
 properties from the main config file.
 Provide a simple implementation of this interface which just takes a user and 
 ip whitelist and permits those in either of the whitelists to do anything, 
 and denies all others.
 Rather than writing an integration test for this class we can probably just 
 use this class for the TLS and SASL authentication testing.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1872) Update Developer Setup

2015-01-19 Thread Sree Vaddi (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1872?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14283236#comment-14283236
 ] 

Sree Vaddi commented on KAFKA-1872:
---

[~omkreddy]
Thanks for the pointer. I changed the other way.
gradle.properties
scalaVersion=2.11

project 'core' -> right click -> Scala -> Restart Presentation Compiler.



 Update Developer Setup
 --

 Key: KAFKA-1872
 URL: https://issues.apache.org/jira/browse/KAFKA-1872
 Project: Kafka
  Issue Type: Improvement
  Components: build
Affects Versions: 0.8.2
 Environment: Mac OSX Yosemite
 eclipse Mars M4
 Gradle 2
 Scala 2
 Git
Reporter: Sree Vaddi
Assignee: Sree Vaddi
  Labels: cwiki, development_environment, eclipse, git, gradle, 
 scala, setup
 Fix For: 0.8.2

   Original Estimate: 2h
  Remaining Estimate: 2h

 I setup my developer environment today and came up with an updated document.
 Update the CWiki page at 
 https://cwiki.apache.org/confluence/display/KAFKA/Developer+Setup
 OR create a new page:
 Update the site page at http://kafka.apache.org/code.html
 with the one created in previous step.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Hi Kafka

2015-01-19 Thread Guozhang Wang
Hi Sree,

I saw you have created a new wiki for this, thanks!

Could you check to see if it can be merged into this page?

https://cwiki.apache.org/confluence/display/KAFKA/Developer+Setup

Guozhang

On Sat, Jan 17, 2015 at 6:55 AM, Sree V sree_at_ch...@yahoo.com.invalid
wrote:

 Hi All,
 I am new to Apache Kafka. I use it at work. I have been working with Apache
 Drill and Apache Calcite, as well.
 I have made an updated doc for dev setup on Eclipse, if anyone needs it.


 Thanking you.

 With Regards
 Sree




-- 
-- Guozhang


[jira] [Commented] (KAFKA-1873) scalatest_2.10-1.9.1.jar of core build path is cross-compiled with an incompatible version of Scala (2.10.0)

2015-01-19 Thread Sree Vaddi (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14283245#comment-14283245
 ] 

Sree Vaddi commented on KAFKA-1873:
---

[~junrao] This fixes it.

kafka/gradle.properties
scalaVersion=2.11.5 === change it from 2.10.4

project 'core' -> right click -> Scala -> Restart Presentation Compiler.

 scalatest_2.10-1.9.1.jar of core build path is cross-compiled with an 
 incompatible version of Scala (2.10.0)
 

 Key: KAFKA-1873
 URL: https://issues.apache.org/jira/browse/KAFKA-1873
 Project: Kafka
  Issue Type: Bug
  Components: core
Affects Versions: 0.8.2
 Environment: Mac OSX Yosemite
 Oracle JDK 1.7.0_72
 eclipse Mars M4
Reporter: Sree Vaddi
Priority: Minor
  Labels: 2.10.0, core, incompatible, scala, scalatest
 Fix For: 0.8.2

   Original Estimate: 2h
  Remaining Estimate: 2h

 When you set up your development environment, you see this in Problems for the 
 project 'core':
 Description: scalatest_2.10-1.9.1.jar of core build path is cross-compiled with an 
 incompatible version of Scala (2.10.0). In case this report is mistaken, this 
 check can be disabled in the compiler preference page.
 Resource: core
 Type: Unknown Scala Version Problem



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1688) Add authorization interface and naive implementation

2015-01-19 Thread Don Bosco Durai (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1688?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14283104#comment-14283104
 ] 

Don Bosco Durai commented on KAFKA-1688:


[~charmalloc], can you help me create a new Wiki page to put the high level 
design for this feature? It will be easier to discuss based on that.

Thanks


 Add authorization interface and naive implementation
 

 Key: KAFKA-1688
 URL: https://issues.apache.org/jira/browse/KAFKA-1688
 Project: Kafka
  Issue Type: Sub-task
  Components: security
Reporter: Jay Kreps
Assignee: Sriharsha Chintalapani
 Fix For: 0.8.3


 Add a PermissionManager interface as described here:
 https://cwiki.apache.org/confluence/display/KAFKA/Security
 (possibly there is a better name?)
 Implement calls to the PermissionsManager in KafkaApis for the main requests 
 (FetchRequest, ProduceRequest, etc). We will need to add a new error code and 
 exception to the protocol to indicate permission denied.
 Add a server configuration to give the class you want to instantiate that 
 implements that interface. That class can define its own configuration 
 properties from the main config file.
 Provide a simple implementation of this interface which just takes a user and 
 ip whitelist and permits those in either of the whitelists to do anything, 
 and denies all others.
 Rather than writing an integration test for this class we can probably just 
 use this class for the TLS and SASL authentication testing.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1874) missing import util.parsing.json.JSON

2015-01-19 Thread Sree Vaddi (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1874?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14283247#comment-14283247
 ] 

Sree Vaddi commented on KAFKA-1874:
---

[~junrao] [~omkreddy] This fixes it.

kafka/gradle.properties
scalaVersion=2.11.5 === change it from 2.10.4

project 'core' -> right click -> Scala -> Restart Presentation Compiler.


 missing import util.parsing.json.JSON
 -

 Key: KAFKA-1874
 URL: https://issues.apache.org/jira/browse/KAFKA-1874
 Project: Kafka
  Issue Type: Bug
  Components: core
Affects Versions: 0.8.2
 Environment: Mac OSX Yosemite
 Oracle JDK 1.7.0_72
 eclipse Mars M4
 Scala 2.11.5
Reporter: Sree Vaddi
  Labels: class, missing, scala
 Fix For: 0.8.2

 Attachments: Screen Shot 2015-01-17 at 3.14.33 PM.png

   Original Estimate: 1h
  Remaining Estimate: 1h

 core project
 main scala folder
 kafka.utils.Json.scala file
 line#21
 import util.parsing.json.JSON
 this class is missing.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (KAFKA-1873) scalatest_2.10-1.9.1.jar of core build path is cross-compiled with an incompatible version of Scala (2.10.0)

2015-01-19 Thread Sree Vaddi (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1873?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sree Vaddi resolved KAFKA-1873.
---
Resolution: Fixed
  Reviewer: Manikumar Reddy

 scalatest_2.10-1.9.1.jar of core build path is cross-compiled with an 
 incompatible version of Scala (2.10.0)
 

 Key: KAFKA-1873
 URL: https://issues.apache.org/jira/browse/KAFKA-1873
 Project: Kafka
  Issue Type: Bug
  Components: core
Affects Versions: 0.8.2
 Environment: Mac OSX Yosemite
 Oracle JDK 1.7.0_72
 eclipse Mars M4
Reporter: Sree Vaddi
Priority: Minor
  Labels: 2.10.0, core, incompatible, scala, scalatest
 Fix For: 0.8.2

   Original Estimate: 2h
  Remaining Estimate: 2h

 When you set up your development environment, you see this in Problems for the 
 project 'core':
 Description: scalatest_2.10-1.9.1.jar of core build path is cross-compiled with an 
 incompatible version of Scala (2.10.0). In case this report is mistaken, this 
 check can be disabled in the compiler preference page.
 Resource: core
 Type: Unknown Scala Version Problem



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1872) Update Developer Setup

2015-01-19 Thread Sree Vaddi (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1872?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14283253#comment-14283253
 ] 

Sree Vaddi commented on KAFKA-1872:
---

[~junrao]
Would you move
https://issues.apache.org/jira/browse/KAFKA-1873
https://issues.apache.org/jira/browse/KAFKA-1874
to be sub-tasks of
https://issues.apache.org/jira/browse/KAFKA-1872,

and close them as well.


 Update Developer Setup
 --

 Key: KAFKA-1872
 URL: https://issues.apache.org/jira/browse/KAFKA-1872
 Project: Kafka
  Issue Type: Improvement
  Components: build
Affects Versions: 0.8.2
 Environment: Mac OSX Yosemite
 eclipse Mars M4
 Gradle 2
 Scala 2
 Git
Reporter: Sree Vaddi
Assignee: Sree Vaddi
  Labels: cwiki, development_environment, eclipse, git, gradle, 
 scala, setup
 Fix For: 0.8.2

   Original Estimate: 2h
  Remaining Estimate: 2h

 I setup my developer environment today and came up with an updated document.
 Update the CWiki page at 
 https://cwiki.apache.org/confluence/display/KAFKA/Developer+Setup
 OR create a new page:
 Update the site page at http://kafka.apache.org/code.html
 with the one created in previous step.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1872) Update Developer Setup

2015-01-19 Thread Sree Vaddi (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1872?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14283248#comment-14283248
 ] 

Sree Vaddi commented on KAFKA-1872:
---

Updated the wiki, as well.

 Update Developer Setup
 --

 Key: KAFKA-1872
 URL: https://issues.apache.org/jira/browse/KAFKA-1872
 Project: Kafka
  Issue Type: Improvement
  Components: build
Affects Versions: 0.8.2
 Environment: Mac OSX Yosemite
 eclipse Mars M4
 Gradle 2
 Scala 2
 Git
Reporter: Sree Vaddi
Assignee: Sree Vaddi
  Labels: cwiki, development_environment, eclipse, git, gradle, 
 scala, setup
 Fix For: 0.8.2

   Original Estimate: 2h
  Remaining Estimate: 2h

 I setup my developer environment today and came up with an updated document.
 Update the CWiki page at 
 https://cwiki.apache.org/confluence/display/KAFKA/Developer+Setup
 OR create a new page:
 Update the site page at http://kafka.apache.org/code.html
 with the one created in previous step.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1872) Update Developer Setup

2015-01-19 Thread Sree Vaddi (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1872?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sree Vaddi updated KAFKA-1872:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

 Update Developer Setup
 --

 Key: KAFKA-1872
 URL: https://issues.apache.org/jira/browse/KAFKA-1872
 Project: Kafka
  Issue Type: Improvement
  Components: build
Affects Versions: 0.8.2
 Environment: Mac OSX Yosemite
 eclipse Mars M4
 Gradle 2
 Scala 2
 Git
Reporter: Sree Vaddi
Assignee: Sree Vaddi
  Labels: cwiki, development_environment, eclipse, git, gradle, 
 scala, setup
 Fix For: 0.8.2

   Original Estimate: 2h
  Remaining Estimate: 2h

 I setup my developer environment today and came up with an updated document.
 Update the CWiki page at 
 https://cwiki.apache.org/confluence/display/KAFKA/Developer+Setup
 OR create a new page:
 Update the site page at http://kafka.apache.org/code.html
 with the one created in previous step.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1824) in ConsoleProducer - properties key.separator and parse.key no longer work

2015-01-19 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1824?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-1824:

Status: Patch Available  (was: Reopened)

 in ConsoleProducer - properties key.separator and parse.key no longer work
 --

 Key: KAFKA-1824
 URL: https://issues.apache.org/jira/browse/KAFKA-1824
 Project: Kafka
  Issue Type: Bug
Reporter: Gwen Shapira
Assignee: Gwen Shapira
 Fix For: 0.8.3

 Attachments: KAFKA-1824.patch, KAFKA-1824.patch, 
 KAFKA-1824_2014-12-22_16:17:42.patch


 Looks like the change in KAFKA-1711 breaks them accidentally.
 reader.init is called with readerProps, which is initialized with command-line 
 properties as defaults.
 The problem is that reader.init checks:
 if (props.containsKey("parse.key"))
 and defaults don't return true in this case.
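
For reference, a minimal standalone Scala snippet (not the ConsoleProducer code itself) showing why containsKey misses values that only arrive through the defaults of a java.util.Properties:

{code}
val defaults = new java.util.Properties()
defaults.setProperty("parse.key", "true")

// Defaults are only consulted by getProperty(), not by the Hashtable lookups.
val props = new java.util.Properties(defaults)
println(props.getProperty("parse.key")) // "true" -- found via the defaults
println(props.containsKey("parse.key")) // false  -- defaults are invisible here
{code}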



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1824) in ConsoleProducer - properties key.separator and parse.key no longer work

2015-01-19 Thread Gwen Shapira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1824?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14283351#comment-14283351
 ] 

Gwen Shapira commented on KAFKA-1824:
-

Thanks, Jun. 
I changed status to Patch Available, so once the 0.8.2 madness calms down a 
bit, we can review and get it into trunk.

 in ConsoleProducer - properties key.separator and parse.key no longer work
 --

 Key: KAFKA-1824
 URL: https://issues.apache.org/jira/browse/KAFKA-1824
 Project: Kafka
  Issue Type: Bug
Reporter: Gwen Shapira
Assignee: Gwen Shapira
 Fix For: 0.8.3

 Attachments: KAFKA-1824.patch, KAFKA-1824.patch, 
 KAFKA-1824_2014-12-22_16:17:42.patch


 Looks like the change in KAFKA-1711 breaks them accidentally.
 reader.init is called with readerProps, which is initialized with command-line 
 properties as defaults.
 The problem is that reader.init checks:
 if (props.containsKey("parse.key"))
 and defaults don't return true in this case.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-1882) Create extendable channel interface and default implementations

2015-01-19 Thread Gwen Shapira (JIRA)
Gwen Shapira created KAFKA-1882:
---

 Summary: Create extendable channel interface and default 
implementations
 Key: KAFKA-1882
 URL: https://issues.apache.org/jira/browse/KAFKA-1882
 Project: Kafka
  Issue Type: Sub-task
Reporter: Gwen Shapira
Assignee: Gwen Shapira


For the security infrastructure, we need an extendible interface to replace 
SocketChannel.

KAFKA-1684 suggests extending SocketChannel itself, but since SocketChannel is 
part of Java's standard library, the interface changes between different Java 
versions, so extending it directly can become a compatibility issue.

Instead, we can implement a KafkaChannel interface, which will implement 
connect(), read(), write() and possibly other methods we use. 

We will replace direct use of SocketChannel in our code with use of 
KafkaChannel.

Different implementations of KafkaChannel will be instantiated based on the 
port/SecurityProtocol configuration. 

This patch will provide at least the PLAINTEXT implementation for KafkaChannel.

I will validate that the SSL implementation in KAFKA-1684 can be refactored to 
use a KafkaChannel interface rather than extend SocketChannel directly. 
However, the patch will not include the SSL channel itself.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Kafka Out of Memory error

2015-01-19 Thread Jonathan Natkins
The fetch.message.max.size is actually a client-side configuration. With
regard to increasing the number of threads, I think the calculation may be
a little more subtle than what you're proposing, and frankly, it's unlikely
that your servers can handle allocating 200MB x 1000 threads = 200GB of
memory at a single time.

I believe that if you have every partition on a single broker, and all of
your consumer threads are requesting data simultaneously, then yes, the
broker would attempt to allocate 200GB of heap, and probably you'll hit an
OOME. However, since each consumer is only reading from one partition,
those 1000 threads should be making requests that are spread out over the
entire Kafka cluster. Depending on the memory on your servers, you may need
to increase the number of brokers in your cluster to support the 1000
threads. For example, I would expect that you can support this with 10
brokers if each broker has something north of 20GB of heap allocated.

Some of this is a little bit of guess work on my part, and I'm not super
confident of my numbers...Can anybody else on the list validate my math?

Thanks,
Natty

Jonathan Natty Natkins
StreamSets | Customer Engagement Engineer
mobile: 609.577.1600 | linkedin http://www.linkedin.com/in/nattyice


On Mon, Jan 19, 2015 at 2:34 PM, Pranay Agarwal agarwalpran...@gmail.com
wrote:

 Thanks Natty.

 Is there any config which I need to change on the client side as well?
 Also, currently I am trying with only 1 consumer thread. Does the equation
 changes to
 (#partitions)*(fetchsize)*(#consumer_threads) in case I try to read with
 1000 threads from from topic2(1000 partitions)?

 -Pranay

 On Mon, Jan 19, 2015 at 2:26 PM, Jonathan Natkins na...@streamsets.com
 wrote:

  Hi Pranay,
 
  I think the JIRA you're referencing is a bit orthogonal to the OOME that
  you're experiencing. Based on the stacktrace, it looks like your OOME is
  coming from a consumer request, which is attempting to allocate 200MB.
  There was a thread (relatively recently) that discussed what I think is
  your issue:
 
 
 
 http://mail-archives.apache.org/mod_mbox/kafka-users/201412.mbox/%3CCAG1fNJDHHGSL-x3wp=pPZS1asOdOBrQ-Ge3kiA3Bk_iz7o=5...@mail.gmail.com%3E
 
  I suspect that the takeaway is that the way Kafka determines the required
  memory for a consumer request is (#partitions in the topic) x
  (replica.fetch.max.bytes), and seemingly you don't have enough memory
  allocated to handle that request. The solution is likely to increase the
  heap size on your brokers or to decrease your max fetch size.
 
  Thanks,
  Natty
 
  Jonathan Natty Natkins
  StreamSets | Customer Engagement Engineer
  mobile: 609.577.1600 | linkedin http://www.linkedin.com/in/nattyice
 
 
  On Mon, Jan 19, 2015 at 2:10 PM, Pranay Agarwal 
 agarwalpran...@gmail.com
  wrote:
 
   Hi All,
  
   I have a kafka cluster setup which has 2 topics
  
   topic1 with 10 partitions
   topic2 with 1000 partitions.
  
   While, I am able to consume messages from topic1 just fine, I get
  following
   error from the topic2. There is a resolved issue here on the same thing
   https://issues.apache.org/jira/browse/KAFKA-664
  
   I am using latest kafka server version, and I have used kafka command
  line
   tools to consumer from the topics.
  
   
   [2015-01-19 22:08:10,758] ERROR OOME with size 201332748
   (kafka.network.BoundedByteBufferReceive)
  
   java.lang.OutOfMemoryError: Java heap space
   at java.nio.HeapByteBuffer.init(HeapByteBuffer.java:57)
   at java.nio.ByteBuffer.allocate(ByteBuffer.java:331)
   at
  
  
 
 kafka.network.BoundedByteBufferReceive.byteBufferAllocate(BoundedByteBufferReceive.scala:80)
   at
  
  
 
 kafka.network.BoundedByteBufferReceive.readFrom(BoundedByteBufferReceive.scala:63)
   at kafka.network.Receive$class.readCompletely(Transmission.scala:56)
   at
  
  
 
 kafka.network.BoundedByteBufferReceive.readCompletely(BoundedByteBufferReceive.scala:29)
   at kafka.network.BlockingChannel.receive(BlockingChannel.scala:100)
   at kafka.consumer.SimpleConsumer.liftedTree1$1(SimpleConsumer.scala:81)
   at
  
  
 
 kafka.consumer.SimpleConsumer.kafka$consumer$SimpleConsumer$$sendRequest(SimpleConsumer.scala:71)
   at
  
  
 
 kafka.consumer.SimpleConsumer$$anonfun$fetch$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(SimpleConsumer.scala:109)
   at
  
  
 
 kafka.consumer.SimpleConsumer$$anonfun$fetch$1$$anonfun$apply$mcV$sp$1.apply(SimpleConsumer.scala:109)
   at
  
  
 
 kafka.consumer.SimpleConsumer$$anonfun$fetch$1$$anonfun$apply$mcV$sp$1.apply(SimpleConsumer.scala:109)
   at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:33)
   at
  
  
 
 kafka.consumer.SimpleConsumer$$anonfun$fetch$1.apply$mcV$sp(SimpleConsumer.scala:108)
   at
  
  
 
 kafka.consumer.SimpleConsumer$$anonfun$fetch$1.apply(SimpleConsumer.scala:108)
   at
  
  
 
 kafka.consumer.SimpleConsumer$$anonfun$fetch$1.apply(SimpleConsumer.scala:108)
   at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:33)
   at 

[jira] [Commented] (KAFKA-1824) in ConsoleProducer - properties key.separator and parse.key no longer work

2015-01-19 Thread Jun Rao (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1824?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14283216#comment-14283216
 ] 

Jun Rao commented on KAFKA-1824:


Gwen, thanks for pointing this out. I am reverting KAFKA-1711 from the 0.8.2 
branch before cutting RC1.

 in ConsoleProducer - properties key.separator and parse.key no longer work
 --

 Key: KAFKA-1824
 URL: https://issues.apache.org/jira/browse/KAFKA-1824
 Project: Kafka
  Issue Type: Bug
Reporter: Gwen Shapira
Assignee: Gwen Shapira
 Fix For: 0.8.3

 Attachments: KAFKA-1824.patch, KAFKA-1824.patch, 
 KAFKA-1824_2014-12-22_16:17:42.patch


 Looks like the change in KAFKA-1711 breaks them accidentally.
 reader.init is called with readerProps, which is initialized with command-line 
 properties as defaults.
 The problem is that reader.init checks:
 if (props.containsKey("parse.key"))
 and defaults don't return true in this case.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Kafka Out of Memory error

2015-01-19 Thread Gwen Shapira
Two things:
1. The OOM happened on the consumer, right? So the memory that matters
is the RAM on the consumer machine, not on the Kafka cluster nodes.

2. If the consumers belong to the same consumer group, each will
consume a subset of the partitions and will only need to allocate
memory for those partitions.

So, assuming all your consumers belong to the same group:
2 consumers  - each has 500 partitions - each uses 500MB.

The total remains 1GB no matter how many consumers you have, as long
as they are all in the same group.

If the consumers belong to different groups (i.e. they read copies of
the same messages from the same partitions), then yes, you are limited
to 8 per server (probably less because there is other stuff on the
server).

Gwen
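
A back-of-the-envelope version of that arithmetic, using the numbers from this thread (1000 partitions on topic2, the 1MB fetch.message.max.bytes default), just to make the scaling explicit:

{code}
// Rough per-consumer fetch-buffer estimate:
// (#partitions assigned to this consumer) x (fetch.message.max.bytes)
val fetchMaxBytes = 1 * 1024 * 1024 // the 1MB client default mentioned above
val totalPartitions = 1000          // topic2 in this thread

def perConsumerBytes(consumersInGroup: Int): Long =
  (totalPartitions.toLong / consumersInGroup) * fetchMaxBytes

println(perConsumerBytes(1)) // ~1 GB: a single consumer owns all 1000 partitions
println(perConsumerBytes(2)) // ~500 MB each: partitions are split across the group
{code}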

On Mon, Jan 19, 2015 at 3:06 PM, Pranay Agarwal
agarwalpran...@gmail.com wrote:
 Thanks a lot Natty.

 I am using this Ruby gem on the client side with all the default config
 https://github.com/joekiller/jruby-kafka/blob/master/lib/jruby-kafka/group.rb
 and the value fetch.message.max.bytes is set to 1MB.

 Currently I only have 3 nodes setup in the Kafka cluster (with 8 GB RAM)
 and if 1 consumer if going to take 1000 partitions X 1mb ~ 1GB, does it
 mean 1 kafka node can at best support 8 consumer only? Also, when I do
 top/free on the Kafka cluster nodes (Both zookeeper and kafka is deployed
 on each 3 nodes of the cluster) I don't see lots of memory being used on
 the machine. Also, even with this calculation, I shouldn't be facing any
 issue with only 1 consumer, as I have 8GB of JVM space given to Kafka
 nodes, right?

 Thanks
 -Pranay

 On Mon, Jan 19, 2015 at 2:53 PM, Jonathan Natkins na...@streamsets.com
 wrote:

 The fetch.message.max.size is actually a client-side configuration. With
 regard to increasing the number of threads, I think the calculation may be
 a little more subtle than what you're proposing, and frankly, it's unlikely
 that your servers can handle allocating 200MB x 1000 threads = 200GB of
 memory at a single time.

 I believe that if you have every partition on a single broker, and all of
 your consumer threads are requesting data simultaneously, then yes, the
 broker would attempt to allocate 200GB of heap, and probably you'll hit an
 OOME. However, since each consumer is only reading from one partition,
 those 1000 threads should be making requests that are spread out over the
 entire Kafka cluster. Depending on the memory on your servers, you may need
 to increase the number of brokers in your cluster to support the 1000
 threads. For example, I would expect that you can support this with 10
 brokers if each broker has something north of 20GB of heap allocated.

 Some of this is a little bit of guess work on my part, and I'm not super
 confident of my numbers...Can anybody else on the list validate my math?

 Thanks,
 Natty

 Jonathan Natty Natkins
 StreamSets | Customer Engagement Engineer
 mobile: 609.577.1600 | linkedin http://www.linkedin.com/in/nattyice


 On Mon, Jan 19, 2015 at 2:34 PM, Pranay Agarwal agarwalpran...@gmail.com
 wrote:

  Thanks Natty.
 
  Is there any config which I need to change on the client side as well?
  Also, currently I am trying with only 1 consumer thread. Does the
 equation
  changes to
  (#partitions)*(fetchsize)*(#consumer_threads) in case I try to read with
  1000 threads from from topic2(1000 partitions)?
 
  -Pranay
 
  On Mon, Jan 19, 2015 at 2:26 PM, Jonathan Natkins na...@streamsets.com
  wrote:
 
   Hi Pranay,
  
   I think the JIRA you're referencing is a bit orthogonal to the OOME
 that
   you're experiencing. Based on the stacktrace, it looks like your OOME
 is
   coming from a consumer request, which is attempting to allocate 200MB.
   There was a thread (relatively recently) that discussed what I think is
   your issue:
  
  
  
 
 http://mail-archives.apache.org/mod_mbox/kafka-users/201412.mbox/%3CCAG1fNJDHHGSL-x3wp=pPZS1asOdOBrQ-Ge3kiA3Bk_iz7o=5...@mail.gmail.com%3E
  
   I suspect that the takeaway is that the way Kafka determines the
 required
   memory for a consumer request is (#partitions in the topic) x
   (replica.fetch.max.bytes), and seemingly you don't have enough memory
   allocated to handle that request. The solution is likely to increase
 the
   heap size on your brokers or to decrease your max fetch size.
  
   Thanks,
   Natty
  
   Jonathan Natty Natkins
   StreamSets | Customer Engagement Engineer
   mobile: 609.577.1600 | linkedin http://www.linkedin.com/in/nattyice
  
  
   On Mon, Jan 19, 2015 at 2:10 PM, Pranay Agarwal 
  agarwalpran...@gmail.com
   wrote:
  
Hi All,
   
I have a kafka cluster setup which has 2 topics
   
topic1 with 10 partitions
topic2 with 1000 partitions.
   
While, I am able to consume messages from topic1 just fine, I get
   following
error from the topic2. There is a resolved issue here on the same
 thing
https://issues.apache.org/jira/browse/KAFKA-664
   
I am using latest kafka server version, and I 

[jira] [Updated] (KAFKA-1883) NullPointerException in RequestSendThread

2015-01-19 Thread jaikiran pai (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1883?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jaikiran pai updated KAFKA-1883:

Status: Patch Available  (was: Open)

 NullPointerException in RequestSendThread
 -

 Key: KAFKA-1883
 URL: https://issues.apache.org/jira/browse/KAFKA-1883
 Project: Kafka
  Issue Type: Bug
  Components: core
Reporter: jaikiran pai
Assignee: jaikiran pai
 Attachments: KAFKA-1883.patch


 I often see the following exception while running some tests
 (ProducerFailureHandlingTest.testNoResponse is one such instance):
 {code}
 [2015-01-19 22:30:24,257] ERROR [Controller-0-to-broker-1-send-thread],
 Controller 0 fails to send a request to broker
 id:1,host:localhost,port:56729 (kafka.controller.RequestSendThread:103)
 java.lang.NullPointerException
 at kafka.controller.RequestSendThread.doWork(ControllerChannelManager.
 scala:150)
 at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:60)
 {code}
 Looking at the code in question, I can see that the NPE can be triggered
 when the receive is null, which can happen if isRunning is false
 (i.e. a shutdown has been requested).
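
A minimal illustration of the guard the description implies (stand-in names, not the actual ControllerChannelManager code):

{code}
// Skip processing when nothing was received, which can happen once isRunning has
// been flipped to false during shutdown, instead of dereferencing a null receive.
def handleReceive(receive: Array[Byte], isRunning: Boolean): Unit =
  if (receive != null && isRunning)
    println(s"processing ${receive.length} bytes")
  // else: shutdown in progress or nothing received -- do not touch receive
{code}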



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1883) NullPointerException in RequestSendThread

2015-01-19 Thread jaikiran pai (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1883?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jaikiran pai updated KAFKA-1883:

Attachment: KAFKA-1883.patch

 NullPointerException in RequestSendThread
 -

 Key: KAFKA-1883
 URL: https://issues.apache.org/jira/browse/KAFKA-1883
 Project: Kafka
  Issue Type: Bug
  Components: core
Reporter: jaikiran pai
Assignee: jaikiran pai
 Attachments: KAFKA-1883.patch


 I often see the following exception while running some tests
 (ProducerFailureHandlingTest.testNoResponse is one such instance):
 {code}
 [2015-01-19 22:30:24,257] ERROR [Controller-0-to-broker-1-send-thread],
 Controller 0 fails to send a request to broker
 id:1,host:localhost,port:56729 (kafka.controller.RequestSendThread:103)
 java.lang.NullPointerException
 at kafka.controller.RequestSendThread.doWork(ControllerChannelManager.
 scala:150)
 at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:60)
 {code}
 Looking at the code in question, I can see that the NPE can be triggered
 when the receive is null, which can happen if isRunning is false
 (i.e. a shutdown has been requested).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1883) NullPointerException in RequestSendThread

2015-01-19 Thread jaikiran pai (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1883?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14283366#comment-14283366
 ] 

jaikiran pai commented on KAFKA-1883:
-

Created reviewboard https://reviews.apache.org/r/30062/diff/
 against branch origin/trunk

 NullPointerException in RequestSendThread
 -

 Key: KAFKA-1883
 URL: https://issues.apache.org/jira/browse/KAFKA-1883
 Project: Kafka
  Issue Type: Bug
  Components: core
Reporter: jaikiran pai
Assignee: jaikiran pai
 Attachments: KAFKA-1883.patch


 I often see the following exception while running some tests
 (ProducerFailureHandlingTest.testNoResponse is one such instance):
 {code}
 [2015-01-19 22:30:24,257] ERROR [Controller-0-to-broker-1-send-thread],
 Controller 0 fails to send a request to broker
 id:1,host:localhost,port:56729 (kafka.controller.RequestSendThread:103)
 java.lang.NullPointerException
 at kafka.controller.RequestSendThread.doWork(ControllerChannelManager.
 scala:150)
 at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:60)
 {code}
 Looking at the code in question, I can see that the NPE can be triggered
 when the receive is null, which can happen if isRunning is false
 (i.e. a shutdown has been requested).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1688) Add authorization interface and naive implementation

2015-01-19 Thread Don Bosco Durai (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1688?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14283155#comment-14283155
 ] 

Don Bosco Durai commented on KAFKA-1688:


My id is bosco. I think I might already have write access to the Wiki. It 
would be good if you can create a new page where I can start putting content, 
which can be linked from the requirement document 
https://cwiki.apache.org/confluence/display/KAFKA/Security. If there is already 
a design page for security, then I can use that also.

Thanks

 Add authorization interface and naive implementation
 

 Key: KAFKA-1688
 URL: https://issues.apache.org/jira/browse/KAFKA-1688
 Project: Kafka
  Issue Type: Sub-task
  Components: security
Reporter: Jay Kreps
Assignee: Sriharsha Chintalapani
 Fix For: 0.8.3


 Add a PermissionManager interface as described here:
 https://cwiki.apache.org/confluence/display/KAFKA/Security
 (possibly there is a better name?)
 Implement calls to the PermissionsManager in KafkaApis for the main requests 
 (FetchRequest, ProduceRequest, etc). We will need to add a new error code and 
 exception to the protocol to indicate permission denied.
 Add a server configuration to give the class you want to instantiate that 
 implements that interface. That class can define its own configuration 
 properties from the main config file.
 Provide a simple implementation of this interface which just takes a user and 
 ip whitelist and permits those in either of the whitelists to do anything, 
 and denies all others.
 Rather than writing an integration test for this class we can probably just 
 use this class for the TLS and SASL authentication testing.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Review Request 30062: Patch for KAFKA-1883

2015-01-19 Thread Jaikiran Pai

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/30062/
---

Review request for kafka.


Bugs: KAFKA-1883
https://issues.apache.org/jira/browse/KAFKA-1883


Repository: kafka


Description
---

KAFKA-1883 Fix NullPointerException in RequestSendThread


Diffs
-

  core/src/main/scala/kafka/controller/ControllerChannelManager.scala 
eb492f00449744bc8d63f55b393e2a1659d38454 

Diff: https://reviews.apache.org/r/30062/diff/


Testing
---


Thanks,

Jaikiran Pai



Re: Review Request 29647: Patch for KAFKA-1697

2015-01-19 Thread Eric Olander

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/29647/#review68687
---



core/src/main/scala/kafka/cluster/Partition.scala
https://reviews.apache.org/r/29647/#comment113038

I realize this has absolutely nothing to do with the code under review, but 
ouch, this is a lot of unnecessary work.  This boolean expression is all that 
is needed:

leaderReplicaIfLocal().isDefined && inSyncReplicas.size < assignedReplicas.size



core/src/main/scala/kafka/cluster/Partition.scala
https://reviews.apache.org/r/29647/#comment113039

Again, totally unrelated, but all this needs is:

Option(assignedReplicaMap.get(replicaId))

Option.apply() already does what this code is doing.


- Eric Olander


On Jan. 14, 2015, 11:41 p.m., Gwen Shapira wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/29647/
 ---
 
 (Updated Jan. 14, 2015, 11:41 p.m.)
 
 
 Review request for kafka.
 
 
 Bugs: KAFKA-1697
 https://issues.apache.org/jira/browse/KAFKA-1697
 
 
 Repository: kafka
 
 
 Description
 ---
 
 trivial change to add byte serializer to ProducerPerformance; patched by Jun 
 Rao
 
 
 KAFKA-1723 (delta patch to fix javadoc); make the metrics name in new 
 producer more standard; patched by Manikumar Reddy; reviewed by Jun Rao
 
 
 removed broker code for handling acks > 1 and made 
 NotEnoughReplicasAfterAppendException non-retriable
 
 
 added early handling of invalid number of acks to handler and a test
 
 
 Diffs
 -
 
   
 clients/src/main/java/org/apache/kafka/common/errors/InvalidRequiredAcksException.java
  PRE-CREATION 
   
 clients/src/main/java/org/apache/kafka/common/errors/NotEnoughReplicasAfterAppendException.java
  75c80a97e43089cb3f924a38f86d67b5a8dd2b89 
   clients/src/main/java/org/apache/kafka/common/protocol/Errors.java 
 3316b6a1098311b8603a4a5893bf57b75d2e43cb 
   core/src/main/scala/kafka/cluster/Partition.scala 
 b230e9a1fb1a3161f4c9d164e4890a16eceb2ad4 
   core/src/main/scala/kafka/server/KafkaApis.scala 
 c011a1b79bd6c4e832fe7d097daacb0d647d1cd4 
   core/src/test/scala/unit/kafka/api/RequestResponseSerializationTest.scala 
 cd16ced5465d098be7a60498326b2a98c248f343 
   core/src/test/scala/unit/kafka/api/testKafkaApis.scala PRE-CREATION 
 
 Diff: https://reviews.apache.org/r/29647/diff/
 
 
 Testing
 ---
 
 
 Thanks,
 
 Gwen Shapira
 




Review Request 30063: Patch for KAFKA-1840

2015-01-19 Thread Jiangjie Qin

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/30063/
---

Review request for kafka.


Bugs: KAFKA-1840
https://issues.apache.org/jira/browse/KAFKA-1840


Repository: kafka


Description
---

Merge branch 'trunk' of http://git-wip-us.apache.org/repos/asf/kafka into 
KAFKA-1840


Diffs
-

  core/src/main/scala/kafka/tools/MirrorMaker.scala 
5cbc8103e33a0a234d158c048e5314e841da6249 

Diff: https://reviews.apache.org/r/30063/diff/


Testing
---


Thanks,

Jiangjie Qin



[jira] [Reopened] (KAFKA-1824) in ConsoleProducer - properties key.separator and parse.key no longer work

2015-01-19 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1824?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao reopened KAFKA-1824:


Reopening the issue since the followup patch hasn't been committed yet.

 in ConsoleProducer - properties key.separator and parse.key no longer work
 --

 Key: KAFKA-1824
 URL: https://issues.apache.org/jira/browse/KAFKA-1824
 Project: Kafka
  Issue Type: Bug
Reporter: Gwen Shapira
Assignee: Gwen Shapira
 Fix For: 0.8.3

 Attachments: KAFKA-1824.patch, KAFKA-1824.patch, 
 KAFKA-1824_2014-12-22_16:17:42.patch


 Looks like the change in KAFKA-1711 breaks them accidentally.
 reader.init is called with readerProps, which is initialized with command-line 
 properties as defaults.
 The problem is that reader.init checks:
 if (props.containsKey("parse.key"))
 and defaults don't return true in this case.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Hi Kafka

2015-01-19 Thread Sree V
Hi Guozhang,
I sure will.
For now, I have put a new link with the date on top of the previous entries, so
people know where to go first.

Thanking you.

With Regards
Sree 

 On Monday, January 19, 2015 4:58 PM, Guozhang Wang wangg...@gmail.com 
wrote:
   

 Hi Sree,

I saw you have created a new wiki for this, thanks!

Could you check the see if it can be merged into this page?

https://cwiki.apache.org/confluence/display/KAFKA/Developer+Setup

Guozhang

On Sat, Jan 17, 2015 at 6:55 AM, Sree V sree_at_ch...@yahoo.com.invalid
wrote:

 Hi All,
 I am new to Apache Kafka. I use it at work.I have been working with Apache
 Drill and Apache Calcite, as well.
 I have made an updated doc for, dev setup on eclipse, if anyone needs.


 Thanking you.

 With Regards
 Sree




-- 
-- Guozhang




Re: Kafka Out of Memory error

2015-01-19 Thread Pranay Agarwal
Thanks Natty.

Is there any config which I need to change on the client side as well?
Also, currently I am trying with only 1 consumer thread. Does the equation
change to
(#partitions)*(fetchsize)*(#consumer_threads) in case I try to read with
1000 threads from topic2 (1000 partitions)?

-Pranay

On Mon, Jan 19, 2015 at 2:26 PM, Jonathan Natkins na...@streamsets.com
wrote:

 Hi Pranay,

 I think the JIRA you're referencing is a bit orthogonal to the OOME that
 you're experiencing. Based on the stacktrace, it looks like your OOME is
 coming from a consumer request, which is attempting to allocate 200MB.
 There was a thread (relatively recently) that discussed what I think is
 your issue:


 http://mail-archives.apache.org/mod_mbox/kafka-users/201412.mbox/%3CCAG1fNJDHHGSL-x3wp=pPZS1asOdOBrQ-Ge3kiA3Bk_iz7o=5...@mail.gmail.com%3E

 I suspect that the takeaway is that the way Kafka determines the required
 memory for a consumer request is (#partitions in the topic) x
 (replica.fetch.max.bytes), and seemingly you don't have enough memory
 allocated to handle that request. The solution is likely to increase the
 heap size on your brokers or to decrease your max fetch size.

 Thanks,
 Natty

 Jonathan Natty Natkins
 StreamSets | Customer Engagement Engineer
 mobile: 609.577.1600 | linkedin http://www.linkedin.com/in/nattyice


 On Mon, Jan 19, 2015 at 2:10 PM, Pranay Agarwal agarwalpran...@gmail.com
 wrote:

  Hi All,
 
  I have a kafka cluster setup which has 2 topics
 
  topic1 with 10 partitions
  topic2 with 1000 partitions.
 
  While, I am able to consume messages from topic1 just fine, I get
 following
  error from the topic2. There is a resolved issue here on the same thing
  https://issues.apache.org/jira/browse/KAFKA-664
 
  I am using latest kafka server version, and I have used kafka command
 line
  tools to consumer from the topics.
 
  
  [2015-01-19 22:08:10,758] ERROR OOME with size 201332748
  (kafka.network.BoundedByteBufferReceive)
 
  java.lang.OutOfMemoryError: Java heap space
  at java.nio.HeapByteBuffer.init(HeapByteBuffer.java:57)
  at java.nio.ByteBuffer.allocate(ByteBuffer.java:331)
  at
 
 
 kafka.network.BoundedByteBufferReceive.byteBufferAllocate(BoundedByteBufferReceive.scala:80)
  at
 
 
 kafka.network.BoundedByteBufferReceive.readFrom(BoundedByteBufferReceive.scala:63)
  at kafka.network.Receive$class.readCompletely(Transmission.scala:56)
  at
 
 
 kafka.network.BoundedByteBufferReceive.readCompletely(BoundedByteBufferReceive.scala:29)
  at kafka.network.BlockingChannel.receive(BlockingChannel.scala:100)
  at kafka.consumer.SimpleConsumer.liftedTree1$1(SimpleConsumer.scala:81)
  at
 
 
 kafka.consumer.SimpleConsumer.kafka$consumer$SimpleConsumer$$sendRequest(SimpleConsumer.scala:71)
  at
 
 
 kafka.consumer.SimpleConsumer$$anonfun$fetch$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(SimpleConsumer.scala:109)
  at
 
 
 kafka.consumer.SimpleConsumer$$anonfun$fetch$1$$anonfun$apply$mcV$sp$1.apply(SimpleConsumer.scala:109)
  at
 
 
 kafka.consumer.SimpleConsumer$$anonfun$fetch$1$$anonfun$apply$mcV$sp$1.apply(SimpleConsumer.scala:109)
  at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:33)
  at
 
 
 kafka.consumer.SimpleConsumer$$anonfun$fetch$1.apply$mcV$sp(SimpleConsumer.scala:108)
  at
 
 
 kafka.consumer.SimpleConsumer$$anonfun$fetch$1.apply(SimpleConsumer.scala:108)
  at
 
 
 kafka.consumer.SimpleConsumer$$anonfun$fetch$1.apply(SimpleConsumer.scala:108)
  at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:33)
  at kafka.consumer.SimpleConsumer.fetch(SimpleConsumer.scala:107)
  at
 
 
 kafka.server.AbstractFetcherThread.processFetchRequest(AbstractFetcherThread.scala:96)
  at
  kafka.server.AbstractFetcherThread.doWork(AbstractFetcherThread.scala:88)
  at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:51)
  
 
 
  Thanks
  -Pranay
 



Re: Detecting lost connection in high level consumer

2015-01-19 Thread Guozhang Wang
Hi Hari,

For the high-level consumer the fetching logic is handled by a background
fetcher thread and is hidden from the user. In either case, 1) the broker being
down or 2) no message being available, the fetcher thread will keep retrying,
while the user thread waits on the fetcher thread to put some data into the
buffer until the timeout. So, in a sentence, the high-level consumer is designed
so that users do not have to worry about connect / reconnect issues.
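
To make that concrete, here is a minimal sketch of the pattern being discussed,
assuming the 0.8.x Scala high-level consumer API (the ZooKeeper address and
group id below are placeholders):

import java.util.Properties
import kafka.consumer.{Consumer, ConsumerConfig, ConsumerTimeoutException}

val props = new Properties()
props.put("zookeeper.connect", "localhost:2181")   // placeholder
props.put("group.id", "example-group")             // placeholder
props.put("consumer.timeout.ms", "5000")           // hasNext() gives up after 5s of no data

val connector = Consumer.create(new ConsumerConfig(props))
val stream = connector.createMessageStreams(Map("topic1" -> 1))("topic1").head
val it = stream.iterator()

try {
  while (it.hasNext()) {
    val msg = it.next()
    // process msg.message() here
  }
} catch {
  case _: ConsumerTimeoutException =>
    // thrown both when the broker is unreachable and when no message arrived
    // within consumer.timeout.ms, so the timeout alone cannot tell the two apart
} finally {
  connector.shutdown()
}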

Guozhang

On Mon, Jan 19, 2015 at 1:26 AM, harikiran harihawk...@gmail.com wrote:

 Hi

 I am using the 0811 Kafka High level consumer and I have configured 
 consumer.timeout.ms to a value that is not -1, say 5000ms.

 I create the consumer iterator and invoke hasNext() method on it.

 Irrespective of whether kafka broker was shutdown or there was no message
 written to kafka, I see a ConsumerTimeOut exception after 5000ms.

 My goal is to detect lost connection and reconnect but I cannot figure out
 a way.

 Any kind of help is appreciated.

 Thanks
 Hari




-- 
-- Guozhang


[jira] [Comment Edited] (KAFKA-1872) Update Developer Setup

2015-01-19 Thread Sree Vaddi (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1872?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14283236#comment-14283236
 ] 

Sree Vaddi edited comment on KAFKA-1872 at 1/20/15 12:51 AM:
-

[~omkreddy]
Thanks for the pointer. I changed the other way.
gradle.properties
scalaVersion=2.11.5

project 'core' -> right click -> Scala -> Restart Presentation Compiler.




was (Author: sreevaddi):
[~omkreddy]
Thanks for the pointer. I changed the other way.
gradle.properties
scalaVersion=2.11

project 'core' -> right click -> Scala -> Restart Presentation Compiler.



 Update Developer Setup
 --

 Key: KAFKA-1872
 URL: https://issues.apache.org/jira/browse/KAFKA-1872
 Project: Kafka
  Issue Type: Improvement
  Components: build
Affects Versions: 0.8.2
 Environment: Mac OSX Yosemite
 eclipse Mars M4
 Gradle 2
 Scala 2
 Git
Reporter: Sree Vaddi
Assignee: Sree Vaddi
  Labels: cwiki, development_environment, eclipse, git, gradle, 
 scala, setup
 Fix For: 0.8.2

   Original Estimate: 2h
  Remaining Estimate: 2h

 I setup my developer environment today and came up with an updated document.
 Update the CWiki page at 
 https://cwiki.apache.org/confluence/display/KAFKA/Developer+Setup
 OR create a new page:
 Update the site page at http://kafka.apache.org/code.html
 with the one created in previous step.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1874) missing import util.parsing.json.JSON

2015-01-19 Thread Sree Vaddi (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1874?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sree Vaddi updated KAFKA-1874:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

 missing import util.parsing.json.JSON
 -

 Key: KAFKA-1874
 URL: https://issues.apache.org/jira/browse/KAFKA-1874
 Project: Kafka
  Issue Type: Bug
  Components: core
Affects Versions: 0.8.2
 Environment: Mac OSX Yosemite
 Oracle JDK 1.7.0_72
 eclipse Mars M4
 Scala 2.11.5
Reporter: Sree Vaddi
  Labels: class, missing, scala
 Fix For: 0.8.2

 Attachments: Screen Shot 2015-01-17 at 3.14.33 PM.png

   Original Estimate: 1h
  Remaining Estimate: 1h

 core project
 main scala folder
 kafka.utils.Json.scala file
 line#21
 import util.parsing.json.JSON
 this class is missing.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1874) missing import util.parsing.json.JSON

2015-01-19 Thread Sree Vaddi (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1874?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sree Vaddi updated KAFKA-1874:
--
Reviewer: Manikumar Reddy
  Status: Patch Available  (was: Open)

 missing import util.parsing.json.JSON
 -

 Key: KAFKA-1874
 URL: https://issues.apache.org/jira/browse/KAFKA-1874
 Project: Kafka
  Issue Type: Bug
  Components: core
Affects Versions: 0.8.2
 Environment: Mac OSX Yosemite
 Oracle JDK 1.7.0_72
 eclipse Mars M4
 Scala 2.11.5
Reporter: Sree Vaddi
  Labels: class, missing, scala
 Fix For: 0.8.2

 Attachments: Screen Shot 2015-01-17 at 3.14.33 PM.png

   Original Estimate: 1h
  Remaining Estimate: 1h

 core project
 main scala folder
 kafka.utils.Json.scala file
 line#21
 import util.parsing.json.JSON
 this class is missing.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: NullPointerException in RequestSendThread

2015-01-19 Thread Guozhang Wang
Hi Jaikiran,

This is a real bug, could you file a JIRA?

As for the fix, I think your proposal would be the right way to fix it.


Guozhang

On Mon, Jan 19, 2015 at 9:07 AM, Jaikiran Pai jai.forums2...@gmail.com
wrote:

 I often see the following exception while running some tests
 (ProducerFailureHandlingTest.testNoResponse is one such instance):


 [2015-01-19 22:30:24,257] ERROR [Controller-0-to-broker-1-send-thread],
 Controller 0 fails to send a request to broker
 id:1,host:localhost,port:56729 (kafka.controller.RequestSendThread:103)
 java.lang.NullPointerException
 at kafka.controller.RequestSendThread.doWork(ControllerChannelManager.
 scala:150)
 at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:60)


 Looking at the code in question, I can see that the NPE can be triggered
 when receive is null, which can happen if isRunning is false
 (i.e. a shutdown has been requested). The fix to prevent this seems
 straightforward:

 diff --git 
 a/core/src/main/scala/kafka/controller/ControllerChannelManager.scala
 b/core/src/main/scala/kafka/controller/ControllerChannelManager.scala
 index eb492f0..10f4c5a 100644
 --- a/core/src/main/scala/kafka/controller/ControllerChannelManager.scala
 +++ b/core/src/main/scala/kafka/controller/ControllerChannelManager.scala
 @@ -144,20 +144,22 @@ class RequestSendThread(val controllerId: Int,
            Utils.swallow(Thread.sleep(300))
          }
        }
 -      var response: RequestOrResponse = null
 -      request.requestId.get match {
 -        case RequestKeys.LeaderAndIsrKey =>
 -          response = LeaderAndIsrResponse.readFrom(receive.buffer)
 -        case RequestKeys.StopReplicaKey =>
 -          response = StopReplicaResponse.readFrom(receive.buffer)
 -        case RequestKeys.UpdateMetadataKey =>
 -          response = UpdateMetadataResponse.readFrom(receive.buffer)
 -      }
 -      stateChangeLogger.trace("Controller %d epoch %d received response %s for a request sent to broker %s"
 -        .format(controllerId, controllerContext.epoch, response.toString, toBroker.toString))
 +      if (receive != null) {
 +        var response: RequestOrResponse = null
 +        request.requestId.get match {
 +          case RequestKeys.LeaderAndIsrKey =>
 +            response = LeaderAndIsrResponse.readFrom(receive.buffer)
 +          case RequestKeys.StopReplicaKey =>
 +            response = StopReplicaResponse.readFrom(receive.buffer)
 +          case RequestKeys.UpdateMetadataKey =>
 +            response = UpdateMetadataResponse.readFrom(receive.buffer)
 +        }
 +        stateChangeLogger.trace("Controller %d epoch %d received response %s for a request sent to broker %s"
 +          .format(controllerId, controllerContext.epoch, response.toString, toBroker.toString))
 
 -      if(callback != null) {
 -        callback(response)
 +        if (callback != null) {
 +          callback(response)
 +        }
        }
      }


 However can this really be considered a fix or would this just be hiding
 the real issue and would there be something more that will have to be done
 in this case? I'm on trunk FWIW.


 -Jaikiran




-- 
-- Guozhang


[jira] [Created] (KAFKA-1883) NullPointerException in RequestSendThread

2015-01-19 Thread jaikiran pai (JIRA)
jaikiran pai created KAFKA-1883:
---

 Summary: NullPointerException in RequestSendThread
 Key: KAFKA-1883
 URL: https://issues.apache.org/jira/browse/KAFKA-1883
 Project: Kafka
  Issue Type: Bug
  Components: core
Reporter: jaikiran pai
Assignee: jaikiran pai


I often see the following exception while running some tests
(ProducerFailureHandlingTest.testNoResponse is one such instance):

{code}
[2015-01-19 22:30:24,257] ERROR [Controller-0-to-broker-1-send-thread],
Controller 0 fails to send a request to broker
id:1,host:localhost,port:56729 (kafka.controller.RequestSendThread:103)
java.lang.NullPointerException
at kafka.controller.RequestSendThread.doWork(ControllerChannelManager.
scala:150)
at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:60)

{code}

Looking at the code in question, I can see that the NPE can be triggered
when receive is null, which can happen if isRunning is false
(i.e. a shutdown has been requested).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: NullPointerException in RequestSendThread

2015-01-19 Thread Jaikiran Pai
JIRA created https://issues.apache.org/jira/browse/KAFKA-1883 and patch 
submitted for review. Thanks Guozhang.


-Jaikiran

On Tuesday 20 January 2015 05:53 AM, Guozhang Wang wrote:

Hi Jaikiran,

This is a real bug, could you file a JIRA?

As for the fix, I think your proposal would be the right way to fix it.


Guozhang

On Mon, Jan 19, 2015 at 9:07 AM, Jaikiran Pai jai.forums2...@gmail.com
wrote:


I often see the following exception while running some tests
(ProducerFailureHandlingTest.testNoResponse is one such instance):


[2015-01-19 22:30:24,257] ERROR [Controller-0-to-broker-1-send-thread],
Controller 0 fails to send a request to broker
id:1,host:localhost,port:56729 (kafka.controller.RequestSendThread:103)
java.lang.NullPointerException
 at kafka.controller.RequestSendThread.doWork(ControllerChannelManager.
scala:150)
 at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:60)


Looking at the code in question, I can see that the NPE can be triggered
when receive is null, which can happen if isRunning is false
(i.e. a shutdown has been requested). The fix to prevent this seems
straightforward:

diff --git a/core/src/main/scala/kafka/controller/ControllerChannelManager.scala
b/core/src/main/scala/kafka/controller/ControllerChannelManager.scala
index eb492f0..10f4c5a 100644
--- a/core/src/main/scala/kafka/controller/ControllerChannelManager.scala
+++ b/core/src/main/scala/kafka/controller/ControllerChannelManager.scala
@@ -144,20 +144,22 @@ class RequestSendThread(val controllerId: Int,
            Utils.swallow(Thread.sleep(300))
          }
        }
-      var response: RequestOrResponse = null
-      request.requestId.get match {
-        case RequestKeys.LeaderAndIsrKey =>
-          response = LeaderAndIsrResponse.readFrom(receive.buffer)
-        case RequestKeys.StopReplicaKey =>
-          response = StopReplicaResponse.readFrom(receive.buffer)
-        case RequestKeys.UpdateMetadataKey =>
-          response = UpdateMetadataResponse.readFrom(receive.buffer)
-      }
-      stateChangeLogger.trace("Controller %d epoch %d received response %s for a request sent to broker %s"
-        .format(controllerId, controllerContext.epoch, response.toString, toBroker.toString))
+      if (receive != null) {
+        var response: RequestOrResponse = null
+        request.requestId.get match {
+          case RequestKeys.LeaderAndIsrKey =>
+            response = LeaderAndIsrResponse.readFrom(receive.buffer)
+          case RequestKeys.StopReplicaKey =>
+            response = StopReplicaResponse.readFrom(receive.buffer)
+          case RequestKeys.UpdateMetadataKey =>
+            response = UpdateMetadataResponse.readFrom(receive.buffer)
+        }
+        stateChangeLogger.trace("Controller %d epoch %d received response %s for a request sent to broker %s"
+          .format(controllerId, controllerContext.epoch, response.toString, toBroker.toString))

-      if(callback != null) {
-        callback(response)
+        if (callback != null) {
+          callback(response)
+        }
       }
     }


However can this really be considered a fix or would this just be hiding
the real issue and would there be something more that will have to be done
in this case? I'm on trunk FWIW.


-Jaikiran








[jira] [Created] (KAFKA-1877) Expose version via JMX for 'new' producer

2015-01-19 Thread Vladimir Tretyakov (JIRA)
Vladimir Tretyakov created KAFKA-1877:
-

 Summary: Expose version via JMX for 'new' producer 
 Key: KAFKA-1877
 URL: https://issues.apache.org/jira/browse/KAFKA-1877
 Project: Kafka
  Issue Type: Bug
  Components: clients, producer 
Affects Versions: 0.8.2
Reporter: Vladimir Tretyakov
 Fix For: 0.8.3


Add version of Kafka to jmx (monitoring tool can use this info).
Something like that
{code}
kafka.common:type=AppInfo,name=Version
  Value java.lang.Object = 0.8.2-beta
{code}
we already have this in core Kafka module (see kafka.common.AppInfo object).
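For illustration, a monitoring tool running in the same JVM could then read it with plain JMX 
(object and attribute names taken from the snippet above; this is only a sketch, not existing code):
{code}
import java.lang.management.ManagementFactory
import javax.management.ObjectName

val mbs = ManagementFactory.getPlatformMBeanServer
// "Value" is the attribute shown above; a remote tool would go through a JMXConnector instead
val version = mbs.getAttribute(new ObjectName("kafka.common:type=AppInfo,name=Version"), "Value")
println("Kafka version reported via JMX: " + version)
{code}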




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [DISCUSS] KIPs

2015-01-19 Thread Gwen Shapira
I created a KIP for the multi-port broker change.
https://cwiki.apache.org/confluence/display/KAFKA/KIP-2+-+Refactor+brokers+to+allow+listening+on+multiple+ports+and+IPs

I'm not re-opening the discussion, since it was agreed on over a month
ago and implementation is close to complete (I hope!). Lets consider
this voted and accepted?

Gwen

On Sun, Jan 18, 2015 at 10:31 AM, Jay Kreps jay.kr...@gmail.com wrote:
 Great! Sounds like everyone is on the same page

- I created a template page to make things easier. If you do Tools -> Copy
on this page you can just fill in the italic portions with your details.
- I retrofitted KIP-1 to match this formatting
- I added the metadata section people asked for (a link to the
discussion, the JIRA, and the current status). Let's make sure we remember
to update the current status as things are figured out.
- Let's keep the discussion on the mailing list rather than on the wiki
pages. It makes sense to do one or the other so all the comments are in one
place, and I think prior experience is that wiki comments are the worse of the
two.

 I think it would be great do KIPs for some of the in-flight items folks
 mentioned.

 -Jay

 On Sat, Jan 17, 2015 at 8:23 AM, Gwen Shapira gshap...@cloudera.com wrote:

 +1

 Will be happy to provide a KIP for the multiple-listeners patch.

 Gwen

 On Sat, Jan 17, 2015 at 8:10 AM, Joe Stein joe.st...@stealth.ly wrote:
  +1 to everything we have been saying and where this (has settled to)/(is
  settling to).
 
  I am sure other folks have some more feedback and think we should try to
  keep this discussion going if need be. I am also a firm believer of form
  following function so kicking the tires some to flesh out the details of
  this and have some organic growth with the process will be healthy and
  beneficial to the community.
 
  For my part, what I will do is open a few KIP based on some of the work I
  have been involved with for 0.8.3. Off the top of my head this would
  include 1) changes to re-assignment of partitions 2) kafka cli 3) global
  configs 4) security white list black list by ip 5) SSL 6) We probably
 will
  have lots of Security related KIPs and should treat them all individually
  when the time is appropriate 7) Kafka on Mesos.
 
  If someone else wants to jump in to start getting some of the security
 KIP
  that we are going to have in 0.8.3 I think that would be great (e.g.
  Multiple Listeners for Kafka Brokers). There are also a few other
 tickets I
  can think of that are important to have in the code in 0.8.3 that should
  have KIP also that I haven't really been involved in. I will take a few
  minutes and go through JIRA (one I can think of like auto assign id that
 is
  already committed I think) and ask for a KIP if appropriate or if I feel
  that I can write it up (both from a time and understanding perspective)
 do
  so.
 
  long story short, I encourage folks to start moving ahead with the KIP
 for
  0.8.3 as how we operate. any objections?
 
  On Fri, Jan 16, 2015 at 2:40 PM, Guozhang Wang wangg...@gmail.com
 wrote:
 
  +1 on the idea, and we could mutually link the KIP wiki page with the
 the
  created JIRA ticket (i.e. include the JIRA number on the page and the
 KIP
  url on the ticket description).
 
  Regarding the KIP process, we probably do not need two-phase communication
  of a [DISCUSS] followed by a [VOTE]; as Jay said, the vote should become
  clear while people discuss it.
 
  About who should trigger the process, I think the people involved would be:
  1) the assignee, who could choose to start the KIP process when the patch is
  submitted (or even when the ticket is created) if she thinks it is necessary;
  2) the reviewer of the patch, who can also suggest starting a KIP
  discussion.
 
  On Fri, Jan 16, 2015 at 10:49 AM, Gwen Shapira gshap...@cloudera.com
  wrote:
 
   +1 to Ewen's suggestions: Deprecation, status and version.
  
   Perhaps add the JIRA where the KIP was implemented to the metadata.
   This will help tie things together.
  
   On Fri, Jan 16, 2015 at 9:35 AM, Ewen Cheslack-Postava
   e...@confluent.io wrote:
I think adding a section about deprecation would be helpful. A good
fraction of the time I would expect the goal of a KIP is to fix or
   replace
older functionality that needs continued support for compatibility,
 but
should eventually be phased out. This helps Kafka devs understand
 how
   long
they'll end up supporting multiple versions of features and helps
 users
understand when they're going to have to make updates to their
   applications.
   
Less important but useful -- having a bit of standard metadata like
  PEPs
do. Two I think are important are status (if someone lands on the
 KIP
   page,
can they tell whether this KIP was ever completed?) and/or the
 version
   the
KIP was first released in.
   
   
   
On Fri, Jan 16, 2015 at 9:20 AM, Joel Koshy jjkosh...@gmail.com
  wrote:
   
 

[jira] [Updated] (KAFKA-1884) New Producer blocks forever for Invalid topic names

2015-01-19 Thread Manikumar Reddy (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1884?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar Reddy updated KAFKA-1884:
---
Assignee: (was: Jun Rao)

 New Producer blocks forever for Invalid topic names
 ---

 Key: KAFKA-1884
 URL: https://issues.apache.org/jira/browse/KAFKA-1884
 Project: Kafka
  Issue Type: Bug
  Components: producer 
Affects Versions: 0.8.2
Reporter: Manikumar Reddy
 Fix For: 0.8.3


 New producer blocks forever for invalid topic names
 producer logs:
 DEBUG [2015-01-20 12:46:13,406] NetworkClient: maybeUpdateMetadata(): Trying 
 to send metadata request to node -1
 DEBUG [2015-01-20 12:46:13,406] NetworkClient: maybeUpdateMetadata(): Sending 
 metadata request ClientRequest(expectResponse=true, payload=null, 
 request=RequestSend(header={api_key=3,api_version=0,correlation_id=50845,client_id=my-producer},
  body={topics=[TOPIC=]})) to node -1
 TRACE [2015-01-20 12:46:13,416] NetworkClient: handleMetadataResponse(): 
 Ignoring empty metadata response with correlation id 50845.
 DEBUG [2015-01-20 12:46:13,417] NetworkClient: maybeUpdateMetadata(): Trying 
 to send metadata request to node -1
 DEBUG [2015-01-20 12:46:13,417] NetworkClient: maybeUpdateMetadata(): Sending 
 metadata request ClientRequest(expectResponse=true, payload=null, 
 request=RequestSend(header={api_key=3,api_version=0,correlation_id=50846,client_id=my-producer},
  body={topics=[TOPIC=]})) to node -1
 TRACE [2015-01-20 12:46:13,417] NetworkClient: handleMetadataResponse(): 
 Ignoring empty metadata response with correlation id 50846.
 DEBUG [2015-01-20 12:46:13,417] NetworkClient: maybeUpdateMetadata(): Trying 
 to send metadata request to node -1
 DEBUG [2015-01-20 12:46:13,418] NetworkClient: maybeUpdateMetadata(): Sending 
 metadata request ClientRequest(expectResponse=true, payload=null, 
 request=RequestSend(header={api_key=3,api_version=0,correlation_id=50847,client_id=my-producer},
  body={topics=[TOPIC=]})) to node -1
 TRACE [2015-01-20 12:46:13,418] NetworkClient: handleMetadataResponse(): 
 Ignoring empty metadata response with correlation id 50847.
 Broker logs:
 [2015-01-20 12:46:14,074] ERROR [KafkaApi-0] error when handling request 
 Name: TopicMetadataRequest; Version: 0; CorrelationId: 51020; ClientId: 
 my-producer; Topics: TOPIC= (kafka.server.KafkaApis)
 kafka.common.InvalidTopicException: topic name TOPIC= is illegal, contains a 
 character other than ASCII alphanumerics, '.', '_' and '-'
   at kafka.common.Topic$.validate(Topic.scala:42)
   at 
 kafka.admin.AdminUtils$.createOrUpdateTopicPartitionAssignmentPathInZK(AdminUtils.scala:186)
   at kafka.admin.AdminUtils$.createTopic(AdminUtils.scala:177)
   at kafka.server.KafkaApis$$anonfun$5.apply(KafkaApis.scala:367)
   at kafka.server.KafkaApis$$anonfun$5.apply(KafkaApis.scala:350)
   at 
 scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
   at 
 scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
   at scala.collection.immutable.Set$Set1.foreach(Set.scala:74)
   at scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
   at 
 scala.collection.AbstractSet.scala$collection$SetLike$$super$map(Set.scala:47)
   at scala.collection.SetLike$class.map(SetLike.scala:93)
   at scala.collection.AbstractSet.map(Set.scala:47)
   at kafka.server.KafkaApis.getTopicMetadata(KafkaApis.scala:350)
   at 
 kafka.server.KafkaApis.handleTopicMetadataRequest(KafkaApis.scala:389)
   at kafka.server.KafkaApis.handle(KafkaApis.scala:57)
   at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:59)
   at java.lang.Thread.run(Thread.java:722)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-1884) New Producer blocks forever for Invalid topic names

2015-01-19 Thread Manikumar Reddy (JIRA)
Manikumar Reddy created KAFKA-1884:
--

 Summary: New Producer blocks forever for Invalid topic names
 Key: KAFKA-1884
 URL: https://issues.apache.org/jira/browse/KAFKA-1884
 Project: Kafka
  Issue Type: Bug
  Components: producer 
Affects Versions: 0.8.2
Reporter: Manikumar Reddy
Assignee: Jun Rao
 Fix For: 0.8.3


New producer blocks forever for invalid topic names

producer logs:

{code}
DEBUG [2015-01-20 12:46:13,406] NetworkClient: maybeUpdateMetadata(): Trying to 
send metadata request to node -1
DEBUG [2015-01-20 12:46:13,406] NetworkClient: maybeUpdateMetadata(): Sending 
metadata request ClientRequest(expectResponse=true, payload=null, 
request=RequestSend(header={api_key=3,api_version=0,correlation_id=50845,client_id=my-producer},
 body={topics=[TOPIC=]})) to node -1
TRACE [2015-01-20 12:46:13,416] NetworkClient: handleMetadataResponse(): 
Ignoring empty metadata response with correlation id 50845.
DEBUG [2015-01-20 12:46:13,417] NetworkClient: maybeUpdateMetadata(): Trying to 
send metadata request to node -1
DEBUG [2015-01-20 12:46:13,417] NetworkClient: maybeUpdateMetadata(): Sending 
metadata request ClientRequest(expectResponse=true, payload=null, 
request=RequestSend(header={api_key=3,api_version=0,correlation_id=50846,client_id=my-producer},
 body={topics=[TOPIC=]})) to node -1
TRACE [2015-01-20 12:46:13,417] NetworkClient: handleMetadataResponse(): 
Ignoring empty metadata response with correlation id 50846.
DEBUG [2015-01-20 12:46:13,417] NetworkClient: maybeUpdateMetadata(): Trying to 
send metadata request to node -1
DEBUG [2015-01-20 12:46:13,418] NetworkClient: maybeUpdateMetadata(): Sending 
metadata request ClientRequest(expectResponse=true, payload=null, 
request=RequestSend(header={api_key=3,api_version=0,correlation_id=50847,client_id=my-producer},
 body={topics=[TOPIC=]})) to node -1
TRACE [2015-01-20 12:46:13,418] NetworkClient: handleMetadataResponse(): 
Ignoring empty metadata response with correlation id 50847.
{code}
Broker logs:

[2015-01-20 12:46:14,074] ERROR [KafkaApi-0] error when handling request Name: 
TopicMetadataRequest; Version: 0; CorrelationId: 51020; ClientId: my-producer; 
Topics: TOPIC= (kafka.server.KafkaApis)
kafka.common.InvalidTopicException: topic name TOPIC= is illegal, contains a 
character other than ASCII alphanumerics, '.', '_' and '-'
at kafka.common.Topic$.validate(Topic.scala:42)
at 
kafka.admin.AdminUtils$.createOrUpdateTopicPartitionAssignmentPathInZK(AdminUtils.scala:186)
at kafka.admin.AdminUtils$.createTopic(AdminUtils.scala:177)
at kafka.server.KafkaApis$$anonfun$5.apply(KafkaApis.scala:367)
at kafka.server.KafkaApis$$anonfun$5.apply(KafkaApis.scala:350)
at 
scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
at 
scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
at scala.collection.immutable.Set$Set1.foreach(Set.scala:74)
at scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
at 
scala.collection.AbstractSet.scala$collection$SetLike$$super$map(Set.scala:47)
at scala.collection.SetLike$class.map(SetLike.scala:93)
at scala.collection.AbstractSet.map(Set.scala:47)
at kafka.server.KafkaApis.getTopicMetadata(KafkaApis.scala:350)
at 
kafka.server.KafkaApis.handleTopicMetadataRequest(KafkaApis.scala:389)
at kafka.server.KafkaApis.handle(KafkaApis.scala:57)
at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:59)
at java.lang.Thread.run(Thread.java:722)
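
A minimal sketch of a reproduction with the 0.8.2 new producer (the bootstrap address and 
serializers are illustrative; the topic name is the same invalid one as in the logs above):
{code}
import java.util.Properties
import org.apache.kafka.clients.producer.{KafkaProducer, ProducerRecord}

val props = new Properties()
props.put("bootstrap.servers", "localhost:9092")
props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer")
props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer")

val producer = new KafkaProducer[String, String](props)
// "TOPIC=" is rejected by the broker, so metadata for it never becomes available
// and send() keeps waiting for metadata instead of failing with an error
producer.send(new ProducerRecord[String, String]("TOPIC=", "key", "value"))
{code}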




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1881) transient unit test failure in testDeleteTopicWithCleaner due to OOME

2015-01-19 Thread Jun Rao (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1881?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14283419#comment-14283419
 ] 

Jun Rao commented on KAFKA-1881:


Not sure if writeDups is causing the issue. I just ran gradlew test. It seems 
this started to happen after KAFKA-1819 was committed.

 transient unit test failure in testDeleteTopicWithCleaner due to OOME
 -

 Key: KAFKA-1881
 URL: https://issues.apache.org/jira/browse/KAFKA-1881
 Project: Kafka
  Issue Type: Bug
  Components: core
Affects Versions: 0.8.3
Reporter: Jun Rao
Assignee: Gwen Shapira

 kafka.admin.DeleteTopicTest  testDeleteTopicWithCleaner FAILED
 java.lang.OutOfMemoryError: Java heap space
 at java.nio.HeapByteBuffer.init(HeapByteBuffer.java:39)
 at java.nio.ByteBuffer.allocate(ByteBuffer.java:312)
 at kafka.log.SkimpyOffsetMap.init(OffsetMap.scala:42)
 at kafka.log.LogCleaner$CleanerThread.init(LogCleaner.scala:177)
 at kafka.log.LogCleaner$$anonfun$1.apply(LogCleaner.scala:86)
 at kafka.log.LogCleaner$$anonfun$1.apply(LogCleaner.scala:86)
 at 
 scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
 at 
 scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
 at scala.collection.immutable.Range.foreach(Range.scala:141)
 at 
 scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
 at scala.collection.AbstractTraversable.map(Traversable.scala:105)
 at kafka.log.LogCleaner.init(LogCleaner.scala:86)
 at kafka.log.LogManager.init(LogManager.scala:64)
 at kafka.server.KafkaServer.createLogManager(KafkaServer.scala:337)
 at kafka.server.KafkaServer.startup(KafkaServer.scala:85)
 at kafka.utils.TestUtils$.createServer(TestUtils.scala:134)
 at 
 kafka.admin.DeleteTopicTest$$anonfun$10.apply(DeleteTopicTest.scala:272)
 at 
 kafka.admin.DeleteTopicTest$$anonfun$10.apply(DeleteTopicTest.scala:272)
 at 
 scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
 at 
 scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
 at scala.collection.immutable.List.foreach(List.scala:318)
 at 
 scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
 at scala.collection.AbstractTraversable.map(Traversable.scala:105)
 at 
 kafka.admin.DeleteTopicTest.createTestTopicAndCluster(DeleteTopicTest.scala:272)
 at 
 kafka.admin.DeleteTopicTest.testDeleteTopicWithCleaner(DeleteTopicTest.scala:241)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Review Request 27799: Patch for KAFKA-1760

2015-01-19 Thread Guozhang Wang

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/27799/#review68519
---



clients/src/main/java/org/apache/kafka/clients/RequestCompletionHandler.java
https://reviews.apache.org/r/27799/#comment112829

a request is complete = the corresponding response is received for this 
request.



clients/src/main/java/org/apache/kafka/common/Cluster.java
https://reviews.apache.org/r/27799/#comment112819

Wondering if we should create a new metadata sub-directory of common, and 
move Cluster / Node / TopicPartition / PartitionInfo to it.



clients/src/main/java/org/apache/kafka/common/config/ConfigDef.java
https://reviews.apache.org/r/27799/#comment112820

Strictly speaking this is a public API since ConfigDef is defined as public.



clients/src/main/java/org/apache/kafka/common/protocol/Errors.java
https://reviews.apache.org/r/27799/#comment112822

Should we keep this TODO comment?



clients/src/main/java/org/apache/kafka/common/protocol/Errors.java
https://reviews.apache.org/r/27799/#comment112823

Shall we define those specific exceptions and replace the general 
ApiException here?



clients/src/main/java/org/apache/kafka/common/record/MemoryRecords.java
https://reviews.apache.org/r/27799/#comment112826

I think this case is covered by catching EOFException?


- Guozhang Wang


On Jan. 19, 2015, 3:10 a.m., Jay Kreps wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/27799/
 ---
 
 (Updated Jan. 19, 2015, 3:10 a.m.)
 
 
 Review request for kafka.
 
 
 Bugs: KAFKA-1760
 https://issues.apache.org/jira/browse/KAFKA-1760
 
 
 Repository: kafka
 
 
 Description
 ---
 
 New consumer.
 
 
 Diffs
 -
 
   build.gradle c9ac43378c3bf5443f0f47c8ba76067237ecb348 
   clients/src/main/java/org/apache/kafka/clients/ClientRequest.java 
 d32c319d8ee4c46dad309ea54b136ea9798e2fd7 
   clients/src/main/java/org/apache/kafka/clients/ClusterConnectionStates.java 
 8aece7e81a804b177a6f2c12e2dc6c89c1613262 
   clients/src/main/java/org/apache/kafka/clients/CommonClientConfigs.java 
 PRE-CREATION 
   clients/src/main/java/org/apache/kafka/clients/ConnectionState.java 
 ab7e3220f9b76b92ef981d695299656f041ad5ed 
   clients/src/main/java/org/apache/kafka/clients/KafkaClient.java 
 397695568d3fd8e835d8f923a89b3b00c96d0ead 
   clients/src/main/java/org/apache/kafka/clients/NetworkClient.java 
 6746275d0b2596cd6ff7ce464a3a8225ad75ef00 
   
 clients/src/main/java/org/apache/kafka/clients/RequestCompletionHandler.java 
 PRE-CREATION 
   clients/src/main/java/org/apache/kafka/clients/consumer/CommitType.java 
 PRE-CREATION 
   clients/src/main/java/org/apache/kafka/clients/consumer/Consumer.java 
 c0c636b3e1ba213033db6d23655032c9bbd5e378 
   clients/src/main/java/org/apache/kafka/clients/consumer/ConsumerConfig.java 
 57c1807ccba9f264186f83e91f37c34b959c8060 
   
 clients/src/main/java/org/apache/kafka/clients/consumer/ConsumerRebalanceCallback.java
  e4cf7d1cfa01c2844b53213a7b539cdcbcbeaa3a 
   clients/src/main/java/org/apache/kafka/clients/consumer/ConsumerRecord.java 
 16af70a5de52cca786fdea147a6a639b7dc4a311 
   
 clients/src/main/java/org/apache/kafka/clients/consumer/ConsumerRecords.java 
 bdf4b26942d5a8c8a9503e05908e9a9eff6228a7 
   clients/src/main/java/org/apache/kafka/clients/consumer/KafkaConsumer.java 
 76efc216c9e6c3ab084461d792877092a189ad0f 
   clients/src/main/java/org/apache/kafka/clients/consumer/MockConsumer.java 
 fa88ac1a8b19b4294f211c4467fe68c7707ddbae 
   
 clients/src/main/java/org/apache/kafka/clients/consumer/NoOffsetForPartitionException.java
  PRE-CREATION 
   clients/src/main/java/org/apache/kafka/clients/consumer/OffsetMetadata.java 
 ea423ad15eebd262d20d5ec05d592cc115229177 
   
 clients/src/main/java/org/apache/kafka/clients/consumer/internals/Heartbeat.java
  PRE-CREATION 
   
 clients/src/main/java/org/apache/kafka/clients/consumer/internals/NoOpConsumerRebalanceCallback.java
  PRE-CREATION 
   
 clients/src/main/java/org/apache/kafka/clients/consumer/internals/SubscriptionState.java
  PRE-CREATION 
   clients/src/main/java/org/apache/kafka/clients/producer/KafkaProducer.java 
 fc71710dd5997576d3841a1c3b0f7e19a8c9698e 
   clients/src/main/java/org/apache/kafka/clients/producer/MockProducer.java 
 904976fadf0610982958628eaee810b60a98d725 
   clients/src/main/java/org/apache/kafka/clients/producer/ProducerConfig.java 
 8b3e565edd1ae04d8d34bd9f1a41e9fa8c880a75 
   
 clients/src/main/java/org/apache/kafka/clients/producer/internals/Metadata.java
  dcf46581b912cfb1b5c8d4cbc293d2d1444b7740 
   
 clients/src/main/java/org/apache/kafka/clients/producer/internals/Partitioner.java
  483899d2e69b33655d0e08949f5f64af2519660a 
   
 

[jira] [Assigned] (KAFKA-1877) Expose version via JMX for 'new' producer

2015-01-19 Thread Manikumar Reddy (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1877?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar Reddy reassigned KAFKA-1877:
--

Assignee: Manikumar Reddy

 Expose version via JMX for 'new' producer 
 --

 Key: KAFKA-1877
 URL: https://issues.apache.org/jira/browse/KAFKA-1877
 Project: Kafka
  Issue Type: Bug
  Components: clients, producer 
Affects Versions: 0.8.2
Reporter: Vladimir Tretyakov
Assignee: Manikumar Reddy
 Fix For: 0.8.3


 Add version of Kafka to jmx (monitoring tool can use this info).
 Something like that
 {code}
 kafka.common:type=AppInfo,name=Version
   Value java.lang.Object = 0.8.2-beta
 {code}
 we already have this in core Kafka module (see kafka.common.AppInfo object).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (KAFKA-1334) Add failure detection capability to the coordinator / consumer

2015-01-19 Thread Onur Karaman (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1334?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Onur Karaman reassigned KAFKA-1334:
---

Assignee: Onur Karaman

 Add failure detection capability to the coordinator / consumer
 --

 Key: KAFKA-1334
 URL: https://issues.apache.org/jira/browse/KAFKA-1334
 Project: Kafka
  Issue Type: Sub-task
  Components: consumer
Affects Versions: 0.9.0
Reporter: Neha Narkhede
Assignee: Onur Karaman

 1) Add coordinator discovery and failure detection to the consumer.
 2) Add failure detection capability to the coordinator when group management 
 is used.
 This will not include any rebalancing logic, just the logic to detect 
 consumer failures using session.timeout.ms. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Review Request 30046: Patch for KAFKA-1879

2015-01-19 Thread Gwen Shapira


 On Jan. 19, 2015, 7:22 p.m., Jun Rao wrote:
  core/src/main/scala/kafka/server/KafkaApis.scala, lines 197-201
  https://reviews.apache.org/r/30046/diff/1/?file=825225#file825225line197
 
  Should we include the actual requiredAcks used in the request?
 
 Jun Rao wrote:
 Also, we are not deprecating the parameter. We are just deprecating some 
 values for this parameter. So, perhaps we can reword the logging message a 
 bit.

True, it is misleading. How about:

Client %s (%s) sent a request with request.required.acks = %d.
In Kafka 0.8.2 use of values other than -1, 0, 1 for this parameter is 
deprecated and will be removed in later releases.
Please consult Kafka documentation for supported and recommended configuration.

?


- Gwen


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/30046/#review68659
---


On Jan. 19, 2015, 6:58 p.m., Gwen Shapira wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/30046/
 ---
 
 (Updated Jan. 19, 2015, 6:58 p.m.)
 
 
 Review request for kafka.
 
 
 Bugs: KAFKA-1879
 https://issues.apache.org/jira/browse/KAFKA-1879
 
 
 Repository: kafka
 
 
 Description
 ---
 
 fix string format
 
 
 Diffs
 -
 
   core/src/main/scala/kafka/server/KafkaApis.scala 
 7def85254b1a17bfcaef96e49c3c98bc5a93c423 
 
 Diff: https://reviews.apache.org/r/30046/diff/
 
 
 Testing
 ---
 
 
 Thanks,
 
 Gwen Shapira
 




[jira] [Updated] (KAFKA-1879) Log warning when receiving produce requests with acks > 1

2015-01-19 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1879?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao updated KAFKA-1879:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Thanks for the patch. +1. Committed to 0.8.2 with minor tweak on the logging 
message.

 Log warning when receiving produce requests with acks > 1
 -

 Key: KAFKA-1879
 URL: https://issues.apache.org/jira/browse/KAFKA-1879
 Project: Kafka
  Issue Type: Improvement
Reporter: Gwen Shapira
Assignee: Gwen Shapira
 Fix For: 0.8.2

 Attachments: KAFKA-1879.patch


 0.8.2 deprecates support for acks > 1.
 We want to start logging warnings when client use this deprecated behavior, 
 so we can safely drop it in the next release (see KAFKA-1697 for more 
 details).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (KAFKA-1881) transient unit test failure in testDeleteTopicWithCleaner due to OOME

2015-01-19 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1881?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira reassigned KAFKA-1881:
---

Assignee: Gwen Shapira

 transient unit test failure in testDeleteTopicWithCleaner due to OOME
 -

 Key: KAFKA-1881
 URL: https://issues.apache.org/jira/browse/KAFKA-1881
 Project: Kafka
  Issue Type: Bug
  Components: core
Affects Versions: 0.8.3
Reporter: Jun Rao
Assignee: Gwen Shapira

 kafka.admin.DeleteTopicTest  testDeleteTopicWithCleaner FAILED
 java.lang.OutOfMemoryError: Java heap space
 at java.nio.HeapByteBuffer.init(HeapByteBuffer.java:39)
 at java.nio.ByteBuffer.allocate(ByteBuffer.java:312)
 at kafka.log.SkimpyOffsetMap.init(OffsetMap.scala:42)
 at kafka.log.LogCleaner$CleanerThread.init(LogCleaner.scala:177)
 at kafka.log.LogCleaner$$anonfun$1.apply(LogCleaner.scala:86)
 at kafka.log.LogCleaner$$anonfun$1.apply(LogCleaner.scala:86)
 at 
 scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
 at 
 scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
 at scala.collection.immutable.Range.foreach(Range.scala:141)
 at 
 scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
 at scala.collection.AbstractTraversable.map(Traversable.scala:105)
 at kafka.log.LogCleaner.init(LogCleaner.scala:86)
 at kafka.log.LogManager.init(LogManager.scala:64)
 at kafka.server.KafkaServer.createLogManager(KafkaServer.scala:337)
 at kafka.server.KafkaServer.startup(KafkaServer.scala:85)
 at kafka.utils.TestUtils$.createServer(TestUtils.scala:134)
 at 
 kafka.admin.DeleteTopicTest$$anonfun$10.apply(DeleteTopicTest.scala:272)
 at 
 kafka.admin.DeleteTopicTest$$anonfun$10.apply(DeleteTopicTest.scala:272)
 at 
 scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
 at 
 scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
 at scala.collection.immutable.List.foreach(List.scala:318)
 at 
 scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
 at scala.collection.AbstractTraversable.map(Traversable.scala:105)
 at 
 kafka.admin.DeleteTopicTest.createTestTopicAndCluster(DeleteTopicTest.scala:272)
 at 
 kafka.admin.DeleteTopicTest.testDeleteTopicWithCleaner(DeleteTopicTest.scala:241)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Kafka Out of Memory error

2015-01-19 Thread Pranay Agarwal
Hi All,

I have a kafka cluster setup which has 2 topics

topic1 with 10 partitions
topic2 with 1000 partitions.

While I am able to consume messages from topic1 just fine, I get the following
error from topic2. There is a resolved issue here on the same thing:
https://issues.apache.org/jira/browse/KAFKA-664

I am using the latest Kafka server version, and I have used the Kafka command line
tools to consume from the topics.


[2015-01-19 22:08:10,758] ERROR OOME with size 201332748
(kafka.network.BoundedByteBufferReceive)

java.lang.OutOfMemoryError: Java heap space
at java.nio.HeapByteBuffer.init(HeapByteBuffer.java:57)
at java.nio.ByteBuffer.allocate(ByteBuffer.java:331)
at
kafka.network.BoundedByteBufferReceive.byteBufferAllocate(BoundedByteBufferReceive.scala:80)
at
kafka.network.BoundedByteBufferReceive.readFrom(BoundedByteBufferReceive.scala:63)
at kafka.network.Receive$class.readCompletely(Transmission.scala:56)
at
kafka.network.BoundedByteBufferReceive.readCompletely(BoundedByteBufferReceive.scala:29)
at kafka.network.BlockingChannel.receive(BlockingChannel.scala:100)
at kafka.consumer.SimpleConsumer.liftedTree1$1(SimpleConsumer.scala:81)
at
kafka.consumer.SimpleConsumer.kafka$consumer$SimpleConsumer$$sendRequest(SimpleConsumer.scala:71)
at
kafka.consumer.SimpleConsumer$$anonfun$fetch$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(SimpleConsumer.scala:109)
at
kafka.consumer.SimpleConsumer$$anonfun$fetch$1$$anonfun$apply$mcV$sp$1.apply(SimpleConsumer.scala:109)
at
kafka.consumer.SimpleConsumer$$anonfun$fetch$1$$anonfun$apply$mcV$sp$1.apply(SimpleConsumer.scala:109)
at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:33)
at
kafka.consumer.SimpleConsumer$$anonfun$fetch$1.apply$mcV$sp(SimpleConsumer.scala:108)
at
kafka.consumer.SimpleConsumer$$anonfun$fetch$1.apply(SimpleConsumer.scala:108)
at
kafka.consumer.SimpleConsumer$$anonfun$fetch$1.apply(SimpleConsumer.scala:108)
at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:33)
at kafka.consumer.SimpleConsumer.fetch(SimpleConsumer.scala:107)
at
kafka.server.AbstractFetcherThread.processFetchRequest(AbstractFetcherThread.scala:96)
at kafka.server.AbstractFetcherThread.doWork(AbstractFetcherThread.scala:88)
at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:51)



Thanks
-Pranay


[jira] [Commented] (KAFKA-1881) transient unit test failure in testDeleteTopicWithCleaner due to OOME

2015-01-19 Thread Gwen Shapira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1881?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14283087#comment-14283087
 ] 

Gwen Shapira commented on KAFKA-1881:
-

We can (although we are looking at around 2K of memory here... 100 records * 3 
dupes * ~6 bytes each).

But it looks like we are running out of memory before writeDups is even called 
- the error is in createTestTopicAndCluster:
at 
kafka.admin.DeleteTopicTest.createTestTopicAndCluster(DeleteTopicTest.scala:272)
at 
kafka.admin.DeleteTopicTest.testDeleteTopicWithCleaner(DeleteTopicTest.scala:241)

What makes you suspect writeDups in this issue? Does the OOM only happen when 
running the test in a loop?

 transient unit test failure in testDeleteTopicWithCleaner due to OOME
 -

 Key: KAFKA-1881
 URL: https://issues.apache.org/jira/browse/KAFKA-1881
 Project: Kafka
  Issue Type: Bug
  Components: core
Affects Versions: 0.8.3
Reporter: Jun Rao
Assignee: Gwen Shapira

 kafka.admin.DeleteTopicTest  testDeleteTopicWithCleaner FAILED
 java.lang.OutOfMemoryError: Java heap space
 at java.nio.HeapByteBuffer.init(HeapByteBuffer.java:39)
 at java.nio.ByteBuffer.allocate(ByteBuffer.java:312)
 at kafka.log.SkimpyOffsetMap.init(OffsetMap.scala:42)
 at kafka.log.LogCleaner$CleanerThread.init(LogCleaner.scala:177)
 at kafka.log.LogCleaner$$anonfun$1.apply(LogCleaner.scala:86)
 at kafka.log.LogCleaner$$anonfun$1.apply(LogCleaner.scala:86)
 at 
 scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
 at 
 scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
 at scala.collection.immutable.Range.foreach(Range.scala:141)
 at 
 scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
 at scala.collection.AbstractTraversable.map(Traversable.scala:105)
 at kafka.log.LogCleaner.init(LogCleaner.scala:86)
 at kafka.log.LogManager.init(LogManager.scala:64)
 at kafka.server.KafkaServer.createLogManager(KafkaServer.scala:337)
 at kafka.server.KafkaServer.startup(KafkaServer.scala:85)
 at kafka.utils.TestUtils$.createServer(TestUtils.scala:134)
 at 
 kafka.admin.DeleteTopicTest$$anonfun$10.apply(DeleteTopicTest.scala:272)
 at 
 kafka.admin.DeleteTopicTest$$anonfun$10.apply(DeleteTopicTest.scala:272)
 at 
 scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
 at 
 scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
 at scala.collection.immutable.List.foreach(List.scala:318)
 at 
 scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
 at scala.collection.AbstractTraversable.map(Traversable.scala:105)
 at 
 kafka.admin.DeleteTopicTest.createTestTopicAndCluster(DeleteTopicTest.scala:272)
 at 
 kafka.admin.DeleteTopicTest.testDeleteTopicWithCleaner(DeleteTopicTest.scala:241)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[VOTE CANCELLED] 0.8.2.0 Candidate 1

2015-01-19 Thread Jun Rao
Thanks for reporting the issues in RC1. I will prepare RC2 and start a new
vote.

Jun

On Tue, Jan 13, 2015 at 7:16 PM, Jun Rao j...@confluent.io wrote:

 This is the first candidate for release of Apache Kafka 0.8.2.0. There
 has been some changes since the 0.8.2 beta release, especially in the new
 java producer api and jmx mbean names. It would be great if people can test
 this out thoroughly. We are giving people 10 days for testing and voting.

 Release Notes for the 0.8.2.0 release
 *https://people.apache.org/~junrao/kafka-0.8.2.0-candidate1/RELEASE_NOTES.html
 https://people.apache.org/~junrao/kafka-0.8.2.0-candidate1/RELEASE_NOTES.html*

 *** Please download, test and vote by Friday, Jan 23h, 7pm PT

 Kafka's KEYS file containing PGP keys we use to sign the release:
 *https://people.apache.org/~junrao/kafka-0.8.2.0-candidate1/KEYS
 https://people.apache.org/~junrao/kafka-0.8.2.0-candidate1/KEYS* in
 addition to the md5, sha1
 and sha2 (SHA256) checksum.

 * Release artifacts to be voted upon (source and binary):
 *https://people.apache.org/~junrao/kafka-0.8.2.0-candidate1/
 https://people.apache.org/~junrao/kafka-0.8.2.0-candidate1/*

 * Maven artifacts to be voted upon prior to release:
 *https://people.apache.org/~junrao/kafka-0.8.2.0-candidate1/maven_staging/
 https://people.apache.org/~junrao/kafka-0.8.2.0-candidate1/maven_staging/*

 * scala-doc
 *https://people.apache.org/~junrao/kafka-0.8.2.0-candidate1/scaladoc/#package
 https://people.apache.org/~junrao/kafka-0.8.2.0-candidate1/scaladoc/#package*

 * java-doc
 *https://people.apache.org/~junrao/kafka-0.8.2.0-candidate1/javadoc/
 https://people.apache.org/~junrao/kafka-0.8.2.0-candidate1/javadoc/*

 * The tag to be voted upon (off the 0.8.2 branch) is the 0.8.2.0 tag
 *https://git-wip-us.apache.org/repos/asf?p=kafka.git;a=tag;h=b0c7d579f8aeb5750573008040a42b7377a651d5
 https://git-wip-us.apache.org/repos/asf?p=kafka.git;a=tag;h=b0c7d579f8aeb5750573008040a42b7377a651d5*

 /***

 Thanks,

 Jun



Re: Kafka Out of Memory error

2015-01-19 Thread Jonathan Natkins
Hi Pranay,

I think the JIRA you're referencing is a bit orthogonal to the OOME that
you're experiencing. Based on the stacktrace, it looks like your OOME is
coming from a consumer request, which is attempting to allocate 200MB.
There was a thread (relatively recently) that discussed what I think is
your issue:

http://mail-archives.apache.org/mod_mbox/kafka-users/201412.mbox/%3CCAG1fNJDHHGSL-x3wp=pPZS1asOdOBrQ-Ge3kiA3Bk_iz7o=5...@mail.gmail.com%3E

I suspect that the takeaway is that the way Kafka determines the required
memory for a consumer request is (#partitions in the topic) x
(replica.fetch.max.bytes), and seemingly you don't have enough memory
allocated to handle that request. The solution is likely to increase the
heap size on your brokers or to decrease your max fetch size.
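
As a rough back-of-the-envelope check (the per-partition fetch size below is an
assumption for illustration, not a value taken from your setup):

// worst-case fetch response ≈ (#partitions in the topic) x (per-partition fetch size)
val partitions    = 1000              // topic2 in this thread
val fetchBytes    = 200 * 1024        // assumed ~200 KB per partition
val responseBytes = partitions.toLong * fetchBytes
println((responseBytes / (1024 * 1024)) + " MB")  // ~195 MB, the same ballpark as the ~201 MB OOME in your stack trace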

Thanks,
Natty

Jonathan Natty Natkins
StreamSets | Customer Engagement Engineer
mobile: 609.577.1600 | linkedin http://www.linkedin.com/in/nattyice


On Mon, Jan 19, 2015 at 2:10 PM, Pranay Agarwal agarwalpran...@gmail.com
wrote:

 Hi All,

 I have a kafka cluster setup which has 2 topics

 topic1 with 10 partitions
 topic2 with 1000 partitions.

 While I am able to consume messages from topic1 just fine, I get the following
 error from topic2. There is a resolved issue here on the same thing:
 https://issues.apache.org/jira/browse/KAFKA-664

 I am using the latest Kafka server version, and I have used the Kafka command line
 tools to consume from the topics.

 
 [2015-01-19 22:08:10,758] ERROR OOME with size 201332748
 (kafka.network.BoundedByteBufferReceive)

 java.lang.OutOfMemoryError: Java heap space
 at java.nio.HeapByteBuffer.init(HeapByteBuffer.java:57)
 at java.nio.ByteBuffer.allocate(ByteBuffer.java:331)
 at

 kafka.network.BoundedByteBufferReceive.byteBufferAllocate(BoundedByteBufferReceive.scala:80)
 at

 kafka.network.BoundedByteBufferReceive.readFrom(BoundedByteBufferReceive.scala:63)
 at kafka.network.Receive$class.readCompletely(Transmission.scala:56)
 at

 kafka.network.BoundedByteBufferReceive.readCompletely(BoundedByteBufferReceive.scala:29)
 at kafka.network.BlockingChannel.receive(BlockingChannel.scala:100)
 at kafka.consumer.SimpleConsumer.liftedTree1$1(SimpleConsumer.scala:81)
 at

 kafka.consumer.SimpleConsumer.kafka$consumer$SimpleConsumer$$sendRequest(SimpleConsumer.scala:71)
 at

 kafka.consumer.SimpleConsumer$$anonfun$fetch$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(SimpleConsumer.scala:109)
 at

 kafka.consumer.SimpleConsumer$$anonfun$fetch$1$$anonfun$apply$mcV$sp$1.apply(SimpleConsumer.scala:109)
 at

 kafka.consumer.SimpleConsumer$$anonfun$fetch$1$$anonfun$apply$mcV$sp$1.apply(SimpleConsumer.scala:109)
 at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:33)
 at

 kafka.consumer.SimpleConsumer$$anonfun$fetch$1.apply$mcV$sp(SimpleConsumer.scala:108)
 at

 kafka.consumer.SimpleConsumer$$anonfun$fetch$1.apply(SimpleConsumer.scala:108)
 at

 kafka.consumer.SimpleConsumer$$anonfun$fetch$1.apply(SimpleConsumer.scala:108)
 at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:33)
 at kafka.consumer.SimpleConsumer.fetch(SimpleConsumer.scala:107)
 at

 kafka.server.AbstractFetcherThread.processFetchRequest(AbstractFetcherThread.scala:96)
 at
 kafka.server.AbstractFetcherThread.doWork(AbstractFetcherThread.scala:88)
 at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:51)
 


 Thanks
 -Pranay



[jira] [Commented] (KAFKA-1824) in ConsoleProducer - properties key.separator and parse.key no longer work

2015-01-19 Thread Gwen Shapira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1824?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14283097#comment-14283097
 ] 

Gwen Shapira commented on KAFKA-1824:
-

I noticed that [~junrao] committed KAFKA-1711 to 0.8.2 branch last week, so we 
need this patch on 0.8.2 as well.

 in ConsoleProducer - properties key.separator and parse.key no longer work
 --

 Key: KAFKA-1824
 URL: https://issues.apache.org/jira/browse/KAFKA-1824
 Project: Kafka
  Issue Type: Bug
Reporter: Gwen Shapira
Assignee: Gwen Shapira
 Fix For: 0.8.3

 Attachments: KAFKA-1824.patch, KAFKA-1824.patch, 
 KAFKA-1824_2014-12-22_16:17:42.patch


 Looks like the change in KAFKA-1711 breaks them accidentally.
 reader.init is called with readerProps, which is initialized with the command-line
 properties as defaults.
 The problem is that reader.init checks:
 if(props.containsKey("parse.key"))
 and keys that are only present in the defaults don't return true here.
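 A tiny sketch of the underlying java.util.Properties behavior (values are illustrative):
{code}
val defaults = new java.util.Properties()
defaults.put("parse.key", "true")
val props = new java.util.Properties(defaults)   // "parse.key" exists only as a default

props.getProperty("parse.key")   // "true" -- getProperty falls back to the defaults
props.containsKey("parse.key")   // false  -- containsKey does not consult the defaults
{code}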



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1869) Openning some random ports while running kafka service

2015-01-19 Thread QianHu (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1869?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14282350#comment-14282350
 ] 

QianHu commented on KAFKA-1869:
---

As you said, one random port is for the JMX RMI registry of the local-only 
server, and  is the JMX port. So I think the other random port is 
opened along with the Kafka port (9092). What do you think? Thank you!

 Openning some random ports while running kafka service 
 ---

 Key: KAFKA-1869
 URL: https://issues.apache.org/jira/browse/KAFKA-1869
 Project: Kafka
  Issue Type: Bug
 Environment: kafka_2.9.2-0.8.1.1
Reporter: QianHu
Assignee: Manikumar Reddy
 Fix For: 0.8.2


 While running the Kafka service, four random ports have been opened. Of these, 
  and 9092 are set by myself, but 28538 and 16650 are opened 
 randomly. Can you help me understand why these random ports are opened, and 
 how we can give them constant values? Thank you very much.
 [work@02 kafka]$ jps
 8400 Jps
 727 Kafka
 [work@02 kafka]$ netstat -tpln|grep 727
 (Not all processes could be identified, non-owned process info
  will not be shown, you would have to be root to see it all.)
 tcp0  0 0.0.0.0:0.0.0.0:*   
 LISTEN  727/./bin/../jdk1.7 
 tcp0  0 0.0.0.0:28538   0.0.0.0:*   
 LISTEN  727/./bin/../jdk1.7 
 tcp0  0 0.0.0.0:90920.0.0.0:*   
 LISTEN  727/./bin/../jdk1.7 
 tcp0  0 0.0.0.0:16650   0.0.0.0:*   
 LISTEN  727/./bin/../jdk1.7 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1869) Openning some random ports while running kafka service

2015-01-19 Thread Manikumar Reddy (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1869?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14282386#comment-14282386
 ] 

Manikumar Reddy commented on KAFKA-1869:



1. -Dcom.sun.management.jmxremote.port=
2. -Dcom.sun.management.jmxremote.rmi.port=
3. (From the link) The additional ephemeral port. It is implementation-specific, 
used for JRMP.
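
If all of them need to be deterministic, both JMX-related ports can be pinned explicitly 
(the value is a placeholder; jmxremote.rmi.port needs a reasonably recent JDK 7 or later), 
so no extra ephemeral JMX port is chosen at random:
{code}
-Dcom.sun.management.jmxremote.port=9999
-Dcom.sun.management.jmxremote.rmi.port=9999
{code}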

 Openning some random ports while running kafka service 
 ---

 Key: KAFKA-1869
 URL: https://issues.apache.org/jira/browse/KAFKA-1869
 Project: Kafka
  Issue Type: Bug
 Environment: kafka_2.9.2-0.8.1.1
Reporter: QianHu
Assignee: Manikumar Reddy
 Fix For: 0.8.2


 While running the Kafka service, four random ports have been opened. Of these, 
  and 9092 are set by myself, but 28538 and 16650 are opened 
 randomly. Can you help me understand why these random ports are opened, and 
 how we can give them constant values? Thank you very much.
 [work@02 kafka]$ jps
 8400 Jps
 727 Kafka
 [work@02 kafka]$ netstat -tpln|grep 727
 (Not all processes could be identified, non-owned process info
  will not be shown, you would have to be root to see it all.)
 tcp0  0 0.0.0.0:0.0.0.0:*   
 LISTEN  727/./bin/../jdk1.7 
 tcp0  0 0.0.0.0:28538   0.0.0.0:*   
 LISTEN  727/./bin/../jdk1.7 
 tcp0  0 0.0.0.0:90920.0.0.0:*   
 LISTEN  727/./bin/../jdk1.7 
 tcp0  0 0.0.0.0:16650   0.0.0.0:*   
 LISTEN  727/./bin/../jdk1.7 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Review Request 30026: Patch for KAFKA-1878

2015-01-19 Thread Jun Rao

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/30026/#review68620
---



core/src/test/scala/integration/kafka/api/ProducerFailureHandlingTest.scala
https://reviews.apache.org/r/30026/#comment112947

It seems it's simpler to just change the defaultOffsetPartition in the 
broker config to 1. The test will run faster that way.
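
A minimal sketch of that alternative, assuming the broker property behind it is offsets.topic.num.partitions:

// in the test's broker Properties, before starting the server
val brokerProps = new java.util.Properties()
brokerProps.put("offsets.topic.num.partitions", "1")   // a 1-partition offsets topic is created almost instantly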


- Jun Rao


On Jan. 19, 2015, 10:09 a.m., Jaikiran Pai wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/30026/
 ---
 
 (Updated Jan. 19, 2015, 10:09 a.m.)
 
 
 Review request for kafka.
 
 
 Bugs: KAFKA-1878
 https://issues.apache.org/jira/browse/KAFKA-1878
 
 
 Repository: kafka
 
 
 Description
 ---
 
 KAFKA-1878 Increase metadata fetch timeout for the producer targeting the 
 offsets topic, because of the amount of time it takes to initialize the 
 number of partitions of that topic
 
 
 Diffs
 -
 
   core/src/test/scala/integration/kafka/api/ProducerFailureHandlingTest.scala 
 420a1dd30264c72704cc383a4161034c7922177d 
 
 Diff: https://reviews.apache.org/r/30026/diff/
 
 
 Testing
 ---
 
 
 Thanks,
 
 Jaikiran Pai
 




Detecting lost connection in high level consumer

2015-01-19 Thread harikiran
Hi

I am using the 0.8.1.1 Kafka high-level consumer and I have configured 
consumer.timeout.ms to a value that is not -1, say 5000 ms.

I create the consumer iterator and invoke the hasNext() method on it.

Irrespective of whether the Kafka broker was shut down or there was simply no
message written to Kafka, I see a ConsumerTimeoutException after 5000 ms.

My goal is to detect a lost connection and reconnect, but I cannot figure out
a way to do that.

Any kind of help is appreciated.

Thanks
Hari
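
For reference, a minimal sketch (Scala, 0.8.x high-level consumer API) of the loop 
being described; the ZooKeeper address, group id, and topic name are placeholders. 
As noted above, the same ConsumerTimeoutException fires whether the broker is down 
or the topic is simply idle, so this only shows where the timeout surfaces, not how 
to tell the two cases apart:

{code}
import java.util.Properties
import kafka.consumer.{Consumer, ConsumerConfig, ConsumerTimeoutException}

object TimeoutAwareConsumer extends App {
  val props = new Properties()
  props.put("zookeeper.connect", "localhost:2181")   // placeholder
  props.put("group.id", "example-group")             // placeholder
  props.put("consumer.timeout.ms", "5000")           // hasNext() throws after 5s without data

  val connector = Consumer.create(new ConsumerConfig(props))
  val streams = connector.createMessageStreams(Map("my-topic" -> 1))
  val it = streams("my-topic").head.iterator()

  try {
    while (it.hasNext()) {
      val msg = it.next()
      println(new String(msg.message(), "UTF-8"))
    }
  } catch {
    case _: ConsumerTimeoutException =>
      // Reached after consumer.timeout.ms regardless of whether the broker is
      // unreachable or there was just nothing to consume.
      println("No message within consumer.timeout.ms")
  } finally {
    connector.shutdown()
  }
}
{code}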


Re: Review Request 30026: Patch for KAFKA-1878

2015-01-19 Thread Jaikiran Pai


 On Jan. 19, 2015, 4:14 p.m., Jun Rao wrote:
  core/src/test/scala/integration/kafka/api/ProducerFailureHandlingTest.scala,
   lines 310-320
  https://reviews.apache.org/r/30026/diff/1/?file=824938#file824938line310
 
  It seems it's simpler to just change the defaultOffsetPartition in the 
  broker config to 1. The test will run faster that way.

That sounds good too. I don't yet have enough knowledge of the code to be sure it 
wouldn't introduce other issues, so I went with this very isolated change. 
I'll update this patch (and run the test) with the change you suggest. Thanks, 
Jun Rao!


- Jaikiran


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/30026/#review68620
---


On Jan. 19, 2015, 10:09 a.m., Jaikiran Pai wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/30026/
 ---
 
 (Updated Jan. 19, 2015, 10:09 a.m.)
 
 
 Review request for kafka.
 
 
 Bugs: KAFKA-1878
 https://issues.apache.org/jira/browse/KAFKA-1878
 
 
 Repository: kafka
 
 
 Description
 ---
 
 KAFKA-1878 Increase the metadata fetch timeout for the producer targeting the 
 offsets topic, because of the time it takes to initialize all partitions of 
 that topic
 
 
 Diffs
 -
 
   core/src/test/scala/integration/kafka/api/ProducerFailureHandlingTest.scala 
 420a1dd30264c72704cc383a4161034c7922177d 
 
 Diff: https://reviews.apache.org/r/30026/diff/
 
 
 Testing
 ---
 
 
 Thanks,
 
 Jaikiran Pai
 




[jira] [Commented] (KAFKA-1878) ProducerFailureHandlingTest.testCannotSendToInternalTopic fails with TimeoutException while trying to fetch metadata for topic

2015-01-19 Thread jaikiran pai (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14282338#comment-14282338
 ] 

jaikiran pai commented on KAFKA-1878:
-

Created reviewboard https://reviews.apache.org/r/30026/diff/
 against branch origin/trunk

 ProducerFailureHandlingTest.testCannotSendToInternalTopic fails with 
 TimeoutException while trying to fetch metadata for topic
 --

 Key: KAFKA-1878
 URL: https://issues.apache.org/jira/browse/KAFKA-1878
 Project: Kafka
  Issue Type: Bug
  Components: system tests
Reporter: jaikiran pai
Assignee: jaikiran pai
 Attachments: KAFKA-1878.patch


 The testCannotSendToInternalTopic test method in ProducerFailureHandlingTest 
 fails consistently with the following exception:
 {code}
 Unexpected exception while seding to an invalid topic 
 org.apache.kafka.common.errors.TimeoutException: Failed to update metadata 
 after 3000 ms.
 java.lang.AssertionError: Unexpected exception while seding to an invalid 
 topic org.apache.kafka.common.errors.TimeoutException: Failed to update 
 metadata after 3000 ms.
   at org.junit.Assert.fail(Assert.java:91)
   at org.junit.Assert.assertTrue(Assert.java:43)
   at 
 kafka.api.test.ProducerFailureHandlingTest.testCannotSendToInternalTopic(ProducerFailureHandlingTest.scala:317)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:601)
   at junit.framework.TestCase.runTest(TestCase.java:168)
   at junit.framework.TestCase.runBare(TestCase.java:134)
   at junit.framework.TestResult$1.protect(TestResult.java:110)
   at junit.framework.TestResult.runProtected(TestResult.java:128)
   at junit.framework.TestResult.run(TestResult.java:113)
   at junit.framework.TestCase.run(TestCase.java:124)
   at org.scalatest.junit.JUnit3Suite.run(JUnit3Suite.scala:321)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:601)
   at 
 org.jetbrains.plugins.scala.testingSupport.scalaTest.ScalaTestRunner.runSingleTest(ScalaTestRunner.java:245)
   at 
 org.jetbrains.plugins.scala.testingSupport.scalaTest.ScalaTestRunner.runScalaTest1(ScalaTestRunner.java:213)
   at 
 org.jetbrains.plugins.scala.testingSupport.scalaTest.ScalaTestRunner.main(ScalaTestRunner.java:30)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:601)
   at com.intellij.rt.execution.application.AppMain.main(AppMain.java:134)
 {code}
 This failure appears like it's intermittent when the 
 ProducerFailureHandlingTest is run as whole because it hides the timing issue 
 involved in the testCannotSendToInternalTopic test method. Running only that 
 testCannotSendToInternalTopic test method (I did it from IntelliJ IDE) 
 consistently reproduces this failure.
 The real issue is that the initialization of the  __consumer_offset topic 
 (being accessed in the testCannotSendToInternalTopic test method) is time 
 consuming because that topic is backed by 50 partitions (default) and it 
 takes a while for each of them to be assigned a leader and do other 
 initialization. This times out the metadata fetch (3 seconds) being done by 
 the producer during a send(), which causes the test method to fail.
 I've a patch to fix that test method which I'll send shortly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1878) ProducerFailureHandlingTest.testCannotSendToInternalTopic fails with TimeoutException while trying to fetch metadata for topic

2015-01-19 Thread jaikiran pai (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1878?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jaikiran pai updated KAFKA-1878:

Status: Patch Available  (was: Open)

 ProducerFailureHandlingTest.testCannotSendToInternalTopic fails with 
 TimeoutException while trying to fetch metadata for topic
 --

 Key: KAFKA-1878
 URL: https://issues.apache.org/jira/browse/KAFKA-1878
 Project: Kafka
  Issue Type: Bug
  Components: system tests
Reporter: jaikiran pai
Assignee: jaikiran pai
 Attachments: KAFKA-1878.patch


 The testCannotSendToInternalTopic test method in ProducerFailureHandlingTest 
 fails consistently with the following exception:
 {code}
 Unexpected exception while seding to an invalid topic 
 org.apache.kafka.common.errors.TimeoutException: Failed to update metadata 
 after 3000 ms.
 java.lang.AssertionError: Unexpected exception while seding to an invalid 
 topic org.apache.kafka.common.errors.TimeoutException: Failed to update 
 metadata after 3000 ms.
   at org.junit.Assert.fail(Assert.java:91)
   at org.junit.Assert.assertTrue(Assert.java:43)
   at 
 kafka.api.test.ProducerFailureHandlingTest.testCannotSendToInternalTopic(ProducerFailureHandlingTest.scala:317)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:601)
   at junit.framework.TestCase.runTest(TestCase.java:168)
   at junit.framework.TestCase.runBare(TestCase.java:134)
   at junit.framework.TestResult$1.protect(TestResult.java:110)
   at junit.framework.TestResult.runProtected(TestResult.java:128)
   at junit.framework.TestResult.run(TestResult.java:113)
   at junit.framework.TestCase.run(TestCase.java:124)
   at org.scalatest.junit.JUnit3Suite.run(JUnit3Suite.scala:321)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:601)
   at 
 org.jetbrains.plugins.scala.testingSupport.scalaTest.ScalaTestRunner.runSingleTest(ScalaTestRunner.java:245)
   at 
 org.jetbrains.plugins.scala.testingSupport.scalaTest.ScalaTestRunner.runScalaTest1(ScalaTestRunner.java:213)
   at 
 org.jetbrains.plugins.scala.testingSupport.scalaTest.ScalaTestRunner.main(ScalaTestRunner.java:30)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:601)
   at com.intellij.rt.execution.application.AppMain.main(AppMain.java:134)
 {code}
 This failure appears like it's intermittent when the 
 ProducerFailureHandlingTest is run as whole because it hides the timing issue 
 involved in the testCannotSendToInternalTopic test method. Running only that 
 testCannotSendToInternalTopic test method (I did it from IntelliJ IDE) 
 consistently reproduces this failure.
 The real issue is that the initialization of the  __consumer_offset topic 
 (being accessed in the testCannotSendToInternalTopic test method) is time 
 consuming because that topic is backed by 50 partitions (default) and it 
 takes a while for each of them to be assigned a leader and do other 
 initialization. This times out the metadata fetch (3 seconds) being done by 
 the producer during a send(), which causes the test method to fail.
 I've a patch to fix that test method which I'll send shortly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Review Request 30026: Patch for KAFKA-1878

2015-01-19 Thread Jaikiran Pai

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/30026/
---

Review request for kafka.


Bugs: KAFKA-1878
https://issues.apache.org/jira/browse/KAFKA-1878


Repository: kafka


Description
---

KAFKA-1878 Increase the metadata fetch timeout for the producer targeting the 
offsets topic, because of the time it takes to initialize all partitions of 
that topic


Diffs
-

  core/src/test/scala/integration/kafka/api/ProducerFailureHandlingTest.scala 
420a1dd30264c72704cc383a4161034c7922177d 

Diff: https://reviews.apache.org/r/30026/diff/


Testing
---


Thanks,

Jaikiran Pai



[jira] [Updated] (KAFKA-1878) ProducerFailureHandlingTest.testCannotSendToInternalTopic fails with TimeoutException while trying to fetch metadata for topic

2015-01-19 Thread jaikiran pai (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1878?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jaikiran pai updated KAFKA-1878:

Attachment: KAFKA-1878.patch

 ProducerFailureHandlingTest.testCannotSendToInternalTopic fails with 
 TimeoutException while trying to fetch metadata for topic
 --

 Key: KAFKA-1878
 URL: https://issues.apache.org/jira/browse/KAFKA-1878
 Project: Kafka
  Issue Type: Bug
  Components: system tests
Reporter: jaikiran pai
Assignee: jaikiran pai
 Attachments: KAFKA-1878.patch


 The testCannotSendToInternalTopic test method in ProducerFailureHandlingTest 
 fails consistently with the following exception:
 {code}
 Unexpected exception while seding to an invalid topic 
 org.apache.kafka.common.errors.TimeoutException: Failed to update metadata 
 after 3000 ms.
 java.lang.AssertionError: Unexpected exception while seding to an invalid 
 topic org.apache.kafka.common.errors.TimeoutException: Failed to update 
 metadata after 3000 ms.
   at org.junit.Assert.fail(Assert.java:91)
   at org.junit.Assert.assertTrue(Assert.java:43)
   at 
 kafka.api.test.ProducerFailureHandlingTest.testCannotSendToInternalTopic(ProducerFailureHandlingTest.scala:317)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:601)
   at junit.framework.TestCase.runTest(TestCase.java:168)
   at junit.framework.TestCase.runBare(TestCase.java:134)
   at junit.framework.TestResult$1.protect(TestResult.java:110)
   at junit.framework.TestResult.runProtected(TestResult.java:128)
   at junit.framework.TestResult.run(TestResult.java:113)
   at junit.framework.TestCase.run(TestCase.java:124)
   at org.scalatest.junit.JUnit3Suite.run(JUnit3Suite.scala:321)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:601)
   at 
 org.jetbrains.plugins.scala.testingSupport.scalaTest.ScalaTestRunner.runSingleTest(ScalaTestRunner.java:245)
   at 
 org.jetbrains.plugins.scala.testingSupport.scalaTest.ScalaTestRunner.runScalaTest1(ScalaTestRunner.java:213)
   at 
 org.jetbrains.plugins.scala.testingSupport.scalaTest.ScalaTestRunner.main(ScalaTestRunner.java:30)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:601)
   at com.intellij.rt.execution.application.AppMain.main(AppMain.java:134)
 {code}
 This failure appears like it's intermittent when the 
 ProducerFailureHandlingTest is run as whole because it hides the timing issue 
 involved in the testCannotSendToInternalTopic test method. Running only that 
 testCannotSendToInternalTopic test method (I did it from IntelliJ IDE) 
 consistently reproduces this failure.
 The real issue is that the initialization of the  __consumer_offset topic 
 (being accessed in the testCannotSendToInternalTopic test method) is time 
 consuming because that topic is backed by 50 partitions (default) and it 
 takes a while for each of them to be assigned a leader and do other 
 initialization. This times out the metadata fetch (3 seconds) being done by 
 the producer during a send(), which causes the test method to fail.
 I've a patch to fix that test method which I'll send shortly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-1878) ProducerFailureHandlingTest.testCannotSendToInternalTopic fails with TimeoutException while trying to fetch metadata for topic

2015-01-19 Thread jaikiran pai (JIRA)
jaikiran pai created KAFKA-1878:
---

 Summary: ProducerFailureHandlingTest.testCannotSendToInternalTopic 
fails with TimeoutException while trying to fetch metadata for topic
 Key: KAFKA-1878
 URL: https://issues.apache.org/jira/browse/KAFKA-1878
 Project: Kafka
  Issue Type: Bug
  Components: system tests
Reporter: jaikiran pai
Assignee: jaikiran pai


The testCannotSendToInternalTopic test method in ProducerFailureHandlingTest 
fails consistently with the following exception:

{code}
Unexpected exception while seding to an invalid topic 
org.apache.kafka.common.errors.TimeoutException: Failed to update metadata 
after 3000 ms.
java.lang.AssertionError: Unexpected exception while seding to an invalid topic 
org.apache.kafka.common.errors.TimeoutException: Failed to update metadata 
after 3000 ms.
at org.junit.Assert.fail(Assert.java:91)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
kafka.api.test.ProducerFailureHandlingTest.testCannotSendToInternalTopic(ProducerFailureHandlingTest.scala:317)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at junit.framework.TestCase.runTest(TestCase.java:168)
at junit.framework.TestCase.runBare(TestCase.java:134)
at junit.framework.TestResult$1.protect(TestResult.java:110)
at junit.framework.TestResult.runProtected(TestResult.java:128)
at junit.framework.TestResult.run(TestResult.java:113)
at junit.framework.TestCase.run(TestCase.java:124)
at org.scalatest.junit.JUnit3Suite.run(JUnit3Suite.scala:321)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at 
org.jetbrains.plugins.scala.testingSupport.scalaTest.ScalaTestRunner.runSingleTest(ScalaTestRunner.java:245)
at 
org.jetbrains.plugins.scala.testingSupport.scalaTest.ScalaTestRunner.runScalaTest1(ScalaTestRunner.java:213)
at 
org.jetbrains.plugins.scala.testingSupport.scalaTest.ScalaTestRunner.main(ScalaTestRunner.java:30)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at com.intellij.rt.execution.application.AppMain.main(AppMain.java:134)
{code}
This failure appears to be intermittent when the 
ProducerFailureHandlingTest is run as a whole, because the full run masks the 
timing issue in the testCannotSendToInternalTopic test method. Running only that 
test method (I did it from the IntelliJ IDE) consistently reproduces the failure.

The real issue is that the initialization of the __consumer_offsets topic 
(being accessed in the testCannotSendToInternalTopic test method) is time 
consuming, because that topic is backed by 50 partitions (the default) and it takes 
a while for each of them to be assigned a leader and complete other initialization. 
This times out the metadata fetch (3 seconds) done by the producer during 
a send(), which causes the test method to fail.

I've a patch to fix that test method which I'll send shortly.
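
For context, the first patch takes the producer-side route: give the new producer more 
time to fetch metadata. A minimal sketch of that idea, not the actual test code; the 
broker address, topic, and timeout value are placeholders, and the config names are the 
0.8.2-era new-producer ones:

{code}
import java.util.Properties
import org.apache.kafka.clients.producer.{KafkaProducer, ProducerRecord}

object MetadataTimeoutExample extends App {
  val props = new Properties()
  props.put("bootstrap.servers", "localhost:9092")   // placeholder
  // The failing test hits a 3000 ms metadata window (see the stack trace above),
  // which is not enough for 50 offsets-topic partitions to elect leaders.
  props.put("metadata.fetch.timeout.ms", "30000")
  props.put("key.serializer", "org.apache.kafka.common.serialization.ByteArraySerializer")
  props.put("value.serializer", "org.apache.kafka.common.serialization.ByteArraySerializer")

  val producer = new KafkaProducer[Array[Byte], Array[Byte]](props)
  try {
    // Block on the future so a metadata timeout surfaces as an exception here.
    producer.send(new ProducerRecord[Array[Byte], Array[Byte]]("some-topic", "hello".getBytes("UTF-8"))).get()
  } finally {
    producer.close()
  }
}
{code}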



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: ProducerFailureHandlingTest.testCannotSendToInternalTopic is failing

2015-01-19 Thread Jaikiran Pai

Hi Harsha,

I've now created a new JIRA, 
https://issues.apache.org/jira/browse/KAFKA-1878, which explains the 
issue in a bit more detail (sorry, I initially thought this change could 
be tracked under that existing JIRA, but you are right that it needed a 
different one). The JIRA includes the exception that is causing the 
test to fail. I've opened a review request with a proposed fix: 
https://reviews.apache.org/r/30026/diff/


-Jaikiran
On Sunday 18 January 2015 09:37 PM, Harsha wrote:

Jaikiran,
I can't reproduce the failure of the ProducerFailureHandlingTest.
I ran the single test. You are probably seeing some errors
written to the console when you use ./gradlew -i -Dsingle.test .
Those errors are expected in some unit tests, as some of them
exercise failure cases.
If you can reproduce this, or even an intermittent test failure, can you
please open up a new JIRA and attach your patch there?
Your review patch is attached to KAFKA-1867, which is a different issue.
Thanks,
Harsha

On Sun, Jan 18, 2015, at 07:16 AM, Jaikiran Pai wrote:

I could reproduce this consistently when that test *method* is run
individually. From what I could gather, the __consumer_offsets topic
(being accessed in that test) had 50 partitions (the default); it took a
while for each of them to be assigned a leader and finish other
initialization, and that timed out the metadata update wait during the
producer.send. I increased the metadata fetch timeout specifically for
that producer in that test method and was able to get past this. I've
sent a patch here: https://reviews.apache.org/r/30013/


-Jaikiran

On Sunday 18 January 2015 12:30 AM, Manikumar Reddy wrote:

   I am consistently getting these errors. Maybe they are transient errors.

On Sun, Jan 18, 2015 at 12:05 AM, Harsha ka...@harsha.io wrote:


I don't see any failures in tests with the latest trunk or 0.8.2. I ran
it few times in a loop.
-Harsha

On Sat, Jan 17, 2015, at 08:38 AM, Manikumar Reddy wrote:

ProducerFailureHandlingTest.testCannotSendToInternalTopic is failing on
both 0.8.2 and trunk.

Error on 0.8.2:
kafka.api.ProducerFailureHandlingTest  testCannotSendToInternalTopic
FAILED
  java.util.concurrent.ExecutionException:
org.apache.kafka.common.errors.TimeoutException: Failed to update
metadata
after 3000 ms.
  at


org.apache.kafka.clients.producer.KafkaProducer$FutureFailure.init(KafkaProducer.java:437)

  at


org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:352)

  at


org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:248)

  at


kafka.api.ProducerFailureHandlingTest.testCannotSendToInternalTopic(ProducerFailureHandlingTest.scala:309)

  Caused by:
  org.apache.kafka.common.errors.TimeoutException: Failed to update
metadata after 3000 ms.


Error on Trunk:
kafka.api.test.ProducerFailureHandlingTest 
testCannotSendToInternalTopic
FAILED
  java.lang.AssertionError: null
  at org.junit.Assert.fail(Assert.java:69)
  at org.junit.Assert.assertTrue(Assert.java:32)
  at org.junit.Assert.assertTrue(Assert.java:41)
  at


kafka.api.test.ProducerFailureHandlingTest.testCannotSendToInternalTopic(ProducerFailureHandlingTest.scala:312)





[jira] [Updated] (KAFKA-1878) ProducerFailureHandlingTest.testCannotSendToInternalTopic fails with TimeoutException while trying to fetch metadata for topic

2015-01-19 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1878?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao updated KAFKA-1878:
---
   Resolution: Fixed
Fix Version/s: 0.8.3
   Status: Resolved  (was: Patch Available)

Thanks for the patch. +1. Committed to trunk with a minor change to the comment.

 ProducerFailureHandlingTest.testCannotSendToInternalTopic fails with 
 TimeoutException while trying to fetch metadata for topic
 --

 Key: KAFKA-1878
 URL: https://issues.apache.org/jira/browse/KAFKA-1878
 Project: Kafka
  Issue Type: Bug
  Components: system tests
Reporter: jaikiran pai
Assignee: jaikiran pai
 Fix For: 0.8.3

 Attachments: KAFKA-1878.patch, KAFKA-1878_2015-01-19_22:02:54.patch


 The testCannotSendToInternalTopic test method in ProducerFailureHandlingTest 
 fails consistently with the following exception:
 {code}
 Unexpected exception while seding to an invalid topic 
 org.apache.kafka.common.errors.TimeoutException: Failed to update metadata 
 after 3000 ms.
 java.lang.AssertionError: Unexpected exception while seding to an invalid 
 topic org.apache.kafka.common.errors.TimeoutException: Failed to update 
 metadata after 3000 ms.
   at org.junit.Assert.fail(Assert.java:91)
   at org.junit.Assert.assertTrue(Assert.java:43)
   at 
 kafka.api.test.ProducerFailureHandlingTest.testCannotSendToInternalTopic(ProducerFailureHandlingTest.scala:317)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:601)
   at junit.framework.TestCase.runTest(TestCase.java:168)
   at junit.framework.TestCase.runBare(TestCase.java:134)
   at junit.framework.TestResult$1.protect(TestResult.java:110)
   at junit.framework.TestResult.runProtected(TestResult.java:128)
   at junit.framework.TestResult.run(TestResult.java:113)
   at junit.framework.TestCase.run(TestCase.java:124)
   at org.scalatest.junit.JUnit3Suite.run(JUnit3Suite.scala:321)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:601)
   at 
 org.jetbrains.plugins.scala.testingSupport.scalaTest.ScalaTestRunner.runSingleTest(ScalaTestRunner.java:245)
   at 
 org.jetbrains.plugins.scala.testingSupport.scalaTest.ScalaTestRunner.runScalaTest1(ScalaTestRunner.java:213)
   at 
 org.jetbrains.plugins.scala.testingSupport.scalaTest.ScalaTestRunner.main(ScalaTestRunner.java:30)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:601)
   at com.intellij.rt.execution.application.AppMain.main(AppMain.java:134)
 {code}
 This failure appears like it's intermittent when the 
 ProducerFailureHandlingTest is run as whole because it hides the timing issue 
 involved in the testCannotSendToInternalTopic test method. Running only that 
 testCannotSendToInternalTopic test method (I did it from IntelliJ IDE) 
 consistently reproduces this failure.
 The real issue is that the initialization of the  __consumer_offset topic 
 (being accessed in the testCannotSendToInternalTopic test method) is time 
 consuming because that topic is backed by 50 partitions (default) and it 
 takes a while for each of them to be assigned a leader and do other 
 initialization. This times out the metadata fetch (3 seconds) being done by 
 the producer during a send(), which causes the test method to fail.
 I've a patch to fix that test method which I'll send shortly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: Kafka-trunk #373

2015-01-19 Thread Apache Jenkins Server
See https://builds.apache.org/job/Kafka-trunk/373/changes

Changes:

[junrao] KAFKA-1723; num.partitions documented default is 1 while actual 
default is 2; patched by Manikumar Reddy; reviewed by Jun Rao

[junrao] KAFKA-1878; ProducerFailureHandlingTest.testCannotSendToInternalTopic 
fails with TimeoutException while trying to fetch metadata for topic; patched 
by jaikiran pai; reviewed by Jun Rao

--
[...truncated 2158 lines...]

kafka.admin.AdminTest  testReassigningNonExistingPartition FAILED
java.net.BindException: Address already in use
at sun.nio.ch.Net.bind(Native Method)
at 
sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:124)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:59)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:52)
at 
org.apache.zookeeper.server.NIOServerCnxnFactory.configure(NIOServerCnxnFactory.java:95)
at kafka.zk.EmbeddedZookeeper.init(EmbeddedZookeeper.scala:33)
at 
kafka.zk.ZooKeeperTestHarness$class.setUp(ZooKeeperTestHarness.scala:33)
at kafka.admin.AdminTest.setUp(AdminTest.scala:33)

kafka.admin.AdminTest  testResumePartitionReassignmentThatWasCompleted FAILED
java.net.BindException: Address already in use
at sun.nio.ch.Net.bind(Native Method)
at 
sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:124)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:59)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:52)
at 
org.apache.zookeeper.server.NIOServerCnxnFactory.configure(NIOServerCnxnFactory.java:95)
at kafka.zk.EmbeddedZookeeper.init(EmbeddedZookeeper.scala:33)
at 
kafka.zk.ZooKeeperTestHarness$class.setUp(ZooKeeperTestHarness.scala:33)
at kafka.admin.AdminTest.setUp(AdminTest.scala:33)

kafka.admin.AdminTest  testPreferredReplicaJsonData FAILED
java.net.BindException: Address already in use
at sun.nio.ch.Net.bind(Native Method)
at 
sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:124)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:59)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:52)
at 
org.apache.zookeeper.server.NIOServerCnxnFactory.configure(NIOServerCnxnFactory.java:95)
at kafka.zk.EmbeddedZookeeper.init(EmbeddedZookeeper.scala:33)
at 
kafka.zk.ZooKeeperTestHarness$class.setUp(ZooKeeperTestHarness.scala:33)
at kafka.admin.AdminTest.setUp(AdminTest.scala:33)

kafka.admin.AdminTest  testBasicPreferredReplicaElection FAILED
java.net.BindException: Address already in use
at sun.nio.ch.Net.bind(Native Method)
at 
sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:124)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:59)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:52)
at 
org.apache.zookeeper.server.NIOServerCnxnFactory.configure(NIOServerCnxnFactory.java:95)
at kafka.zk.EmbeddedZookeeper.init(EmbeddedZookeeper.scala:33)
at 
kafka.zk.ZooKeeperTestHarness$class.setUp(ZooKeeperTestHarness.scala:33)
at kafka.admin.AdminTest.setUp(AdminTest.scala:33)

kafka.admin.AdminTest  testShutdownBroker FAILED
java.net.BindException: Address already in use
at sun.nio.ch.Net.bind(Native Method)
at 
sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:124)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:59)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:52)
at 
org.apache.zookeeper.server.NIOServerCnxnFactory.configure(NIOServerCnxnFactory.java:95)
at kafka.zk.EmbeddedZookeeper.init(EmbeddedZookeeper.scala:33)
at 
kafka.zk.ZooKeeperTestHarness$class.setUp(ZooKeeperTestHarness.scala:33)
at kafka.admin.AdminTest.setUp(AdminTest.scala:33)

kafka.admin.AdminTest  testTopicConfigChange FAILED
java.net.BindException: Address already in use
at sun.nio.ch.Net.bind(Native Method)
at 
sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:124)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:59)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:52)
at 
org.apache.zookeeper.server.NIOServerCnxnFactory.configure(NIOServerCnxnFactory.java:95)
at kafka.zk.EmbeddedZookeeper.init(EmbeddedZookeeper.scala:33)
at 
kafka.zk.ZooKeeperTestHarness$class.setUp(ZooKeeperTestHarness.scala:33)
at kafka.admin.AdminTest.setUp(AdminTest.scala:33)

kafka.admin.TopicCommandTest  testConfigPreservationAcrossPartitionAlteration 
FAILED
java.net.BindException: Address already in use
at sun.nio.ch.Net.bind(Native Method)
at 
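
The BindExceptions above are the usual symptom of a test ZooKeeper port still being in 
use from an earlier run. A common mitigation, sketched below (and still subject to a 
small race between choosing the port and actually binding it), is to ask the OS for a 
free ephemeral port:

{code}
import java.net.ServerSocket

object PortChooser {
  // Bind to port 0 so the OS picks a free port, record it, then release it
  // for the test server to bind shortly afterwards.
  def chooseEphemeralPort(): Int = {
    val socket = new ServerSocket(0)
    try socket.getLocalPort finally socket.close()
  }
}
{code}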

Re: Review Request 30026: Patch for KAFKA-1878

2015-01-19 Thread Jaikiran Pai

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/30026/
---

(Updated Jan. 19, 2015, 4:33 p.m.)


Review request for kafka.


Bugs: KAFKA-1878
https://issues.apache.org/jira/browse/KAFKA-1878


Repository: kafka


Description (updated)
---

KAFKA-1878 Set a smaller value for the number of partitions for the offset 
commit topic in the test, to prevent timeouts while fetching metadata for the 
topic


Diffs (updated)
-

  core/src/test/scala/integration/kafka/api/ProducerFailureHandlingTest.scala 
420a1dd30264c72704cc383a4161034c7922177d 

Diff: https://reviews.apache.org/r/30026/diff/


Testing
---


Thanks,

Jaikiran Pai



[jira] [Updated] (KAFKA-1878) ProducerFailureHandlingTest.testCannotSendToInternalTopic fails with TimeoutException while trying to fetch metadata for topic

2015-01-19 Thread jaikiran pai (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1878?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jaikiran pai updated KAFKA-1878:

Attachment: KAFKA-1878_2015-01-19_22:02:54.patch

 ProducerFailureHandlingTest.testCannotSendToInternalTopic fails with 
 TimeoutException while trying to fetch metadata for topic
 --

 Key: KAFKA-1878
 URL: https://issues.apache.org/jira/browse/KAFKA-1878
 Project: Kafka
  Issue Type: Bug
  Components: system tests
Reporter: jaikiran pai
Assignee: jaikiran pai
 Attachments: KAFKA-1878.patch, KAFKA-1878_2015-01-19_22:02:54.patch


 The testCannotSendToInternalTopic test method in ProducerFailureHandlingTest 
 fails consistently with the following exception:
 {code}
 Unexpected exception while seding to an invalid topic 
 org.apache.kafka.common.errors.TimeoutException: Failed to update metadata 
 after 3000 ms.
 java.lang.AssertionError: Unexpected exception while seding to an invalid 
 topic org.apache.kafka.common.errors.TimeoutException: Failed to update 
 metadata after 3000 ms.
   at org.junit.Assert.fail(Assert.java:91)
   at org.junit.Assert.assertTrue(Assert.java:43)
   at 
 kafka.api.test.ProducerFailureHandlingTest.testCannotSendToInternalTopic(ProducerFailureHandlingTest.scala:317)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:601)
   at junit.framework.TestCase.runTest(TestCase.java:168)
   at junit.framework.TestCase.runBare(TestCase.java:134)
   at junit.framework.TestResult$1.protect(TestResult.java:110)
   at junit.framework.TestResult.runProtected(TestResult.java:128)
   at junit.framework.TestResult.run(TestResult.java:113)
   at junit.framework.TestCase.run(TestCase.java:124)
   at org.scalatest.junit.JUnit3Suite.run(JUnit3Suite.scala:321)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:601)
   at 
 org.jetbrains.plugins.scala.testingSupport.scalaTest.ScalaTestRunner.runSingleTest(ScalaTestRunner.java:245)
   at 
 org.jetbrains.plugins.scala.testingSupport.scalaTest.ScalaTestRunner.runScalaTest1(ScalaTestRunner.java:213)
   at 
 org.jetbrains.plugins.scala.testingSupport.scalaTest.ScalaTestRunner.main(ScalaTestRunner.java:30)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:601)
   at com.intellij.rt.execution.application.AppMain.main(AppMain.java:134)
 {code}
 This failure appears like it's intermittent when the 
 ProducerFailureHandlingTest is run as whole because it hides the timing issue 
 involved in the testCannotSendToInternalTopic test method. Running only that 
 testCannotSendToInternalTopic test method (I did it from IntelliJ IDE) 
 consistently reproduces this failure.
 The real issue is that the initialization of the  __consumer_offset topic 
 (being accessed in the testCannotSendToInternalTopic test method) is time 
 consuming because that topic is backed by 50 partitions (default) and it 
 takes a while for each of them to be assigned a leader and do other 
 initialization. This times out the metadata fetch (3 seconds) being done by 
 the producer during a send(), which causes the test method to fail.
 I've a patch to fix that test method which I'll send shortly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1878) ProducerFailureHandlingTest.testCannotSendToInternalTopic fails with TimeoutException while trying to fetch metadata for topic

2015-01-19 Thread jaikiran pai (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14282672#comment-14282672
 ] 

jaikiran pai commented on KAFKA-1878:
-

Updated reviewboard https://reviews.apache.org/r/30026/diff/
 against branch origin/trunk

 ProducerFailureHandlingTest.testCannotSendToInternalTopic fails with 
 TimeoutException while trying to fetch metadata for topic
 --

 Key: KAFKA-1878
 URL: https://issues.apache.org/jira/browse/KAFKA-1878
 Project: Kafka
  Issue Type: Bug
  Components: system tests
Reporter: jaikiran pai
Assignee: jaikiran pai
 Attachments: KAFKA-1878.patch, KAFKA-1878_2015-01-19_22:02:54.patch


 The testCannotSendToInternalTopic test method in ProducerFailureHandlingTest 
 fails consistently with the following exception:
 {code}
 Unexpected exception while seding to an invalid topic 
 org.apache.kafka.common.errors.TimeoutException: Failed to update metadata 
 after 3000 ms.
 java.lang.AssertionError: Unexpected exception while seding to an invalid 
 topic org.apache.kafka.common.errors.TimeoutException: Failed to update 
 metadata after 3000 ms.
   at org.junit.Assert.fail(Assert.java:91)
   at org.junit.Assert.assertTrue(Assert.java:43)
   at 
 kafka.api.test.ProducerFailureHandlingTest.testCannotSendToInternalTopic(ProducerFailureHandlingTest.scala:317)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:601)
   at junit.framework.TestCase.runTest(TestCase.java:168)
   at junit.framework.TestCase.runBare(TestCase.java:134)
   at junit.framework.TestResult$1.protect(TestResult.java:110)
   at junit.framework.TestResult.runProtected(TestResult.java:128)
   at junit.framework.TestResult.run(TestResult.java:113)
   at junit.framework.TestCase.run(TestCase.java:124)
   at org.scalatest.junit.JUnit3Suite.run(JUnit3Suite.scala:321)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:601)
   at 
 org.jetbrains.plugins.scala.testingSupport.scalaTest.ScalaTestRunner.runSingleTest(ScalaTestRunner.java:245)
   at 
 org.jetbrains.plugins.scala.testingSupport.scalaTest.ScalaTestRunner.runScalaTest1(ScalaTestRunner.java:213)
   at 
 org.jetbrains.plugins.scala.testingSupport.scalaTest.ScalaTestRunner.main(ScalaTestRunner.java:30)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:601)
   at com.intellij.rt.execution.application.AppMain.main(AppMain.java:134)
 {code}
 This failure appears like it's intermittent when the 
 ProducerFailureHandlingTest is run as whole because it hides the timing issue 
 involved in the testCannotSendToInternalTopic test method. Running only that 
 testCannotSendToInternalTopic test method (I did it from IntelliJ IDE) 
 consistently reproduces this failure.
 The real issue is that the initialization of the  __consumer_offset topic 
 (being accessed in the testCannotSendToInternalTopic test method) is time 
 consuming because that topic is backed by 50 partitions (default) and it 
 takes a while for each of them to be assigned a leader and do other 
 initialization. This times out the metadata fetch (3 seconds) being done by 
 the producer during a send(), which causes the test method to fail.
 I've a patch to fix that test method which I'll send shortly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


NullPointerException in RequestSendThread

2015-01-19 Thread Jaikiran Pai
I often see the following exception while running some tests 
(ProducerFailureHandlingTest.testNoResponse is one such instance):



[2015-01-19 22:30:24,257] ERROR [Controller-0-to-broker-1-send-thread], 
Controller 0 fails to send a request to broker 
id:1,host:localhost,port:56729 (kafka.controller.RequestSendThread:103)

java.lang.NullPointerException
at 
kafka.controller.RequestSendThread.doWork(ControllerChannelManager.scala:150)

at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:60)


Looking at the code in question, I can see that the NPE can be triggered 
when receive is null, which can happen if isRunning is false 
(i.e. a shutdown has been requested). The fix to prevent this seems 
straightforward:


diff --git 
a/core/src/main/scala/kafka/controller/ControllerChannelManager.scala 
b/core/src/main/scala/kafka/controller/ControllerChannelManager.scala

index eb492f0..10f4c5a 100644
--- a/core/src/main/scala/kafka/controller/ControllerChannelManager.scala
+++ b/core/src/main/scala/kafka/controller/ControllerChannelManager.scala
@@ -144,20 +144,22 @@ class RequestSendThread(val controllerId: Int,
   Utils.swallow(Thread.sleep(300))
   }
 }
-      var response: RequestOrResponse = null
-      request.requestId.get match {
-        case RequestKeys.LeaderAndIsrKey =>
-          response = LeaderAndIsrResponse.readFrom(receive.buffer)
-        case RequestKeys.StopReplicaKey =>
-          response = StopReplicaResponse.readFrom(receive.buffer)
-        case RequestKeys.UpdateMetadataKey =>
-          response = UpdateMetadataResponse.readFrom(receive.buffer)
-      }
-      stateChangeLogger.trace("Controller %d epoch %d received response %s for a request sent to broker %s"
-        .format(controllerId, controllerContext.epoch, response.toString, toBroker.toString))
-
-      if(callback != null) {
-        callback(response)
+      if (receive != null) {
+        var response: RequestOrResponse = null
+        request.requestId.get match {
+          case RequestKeys.LeaderAndIsrKey =>
+            response = LeaderAndIsrResponse.readFrom(receive.buffer)
+          case RequestKeys.StopReplicaKey =>
+            response = StopReplicaResponse.readFrom(receive.buffer)
+          case RequestKeys.UpdateMetadataKey =>
+            response = UpdateMetadataResponse.readFrom(receive.buffer)
+        }
+        stateChangeLogger.trace("Controller %d epoch %d received response %s for a request sent to broker %s"
+          .format(controllerId, controllerContext.epoch, response.toString, toBroker.toString))
+
+        if (callback != null) {
+          callback(response)
+        }
       }
     }


However, can this really be considered a fix, or would it just be hiding 
the real issue? Is there something more that would have to be 
done in this case? I'm on trunk, FWIW.



-Jaikiran


Re: [kafka-clients] Re: Heads up: KAFKA-1697 - remove code related to ack&gt;1 on the broker

2015-01-19 Thread Joe Stein
 For 2, how about we make a change to log a warning for ack > 1 in 0.8.2
and then drop the ack > 1 support in trunk (w/o bumping up the protocol
version)?

+1
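
A minimal sketch of what the 0.8.2 compromise could look like on the broker side; the 
object, method, and message below are illustrative only, not the actual KafkaApis change:

{code}
object AcksCompatibility {
  // 0.8.2: keep accepting acks > 1 but warn loudly; trunk would then reject it outright.
  def checkRequiredAcks(clientId: String, requiredAcks: Short): Unit = {
    if (requiredAcks > 1)
      System.err.println(s"WARN client $clientId sent a produce request with acks=$requiredAcks; " +
        "acks > 1 is deprecated and will be removed, use acks=-1 (all in-sync replicas) instead")
  }
}
{code}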

On Mon, Jan 19, 2015 at 12:35 PM, Jun Rao j...@confluent.io wrote:

  For 2, how about we make a change to log a warning for ack > 1 in 0.8.2
  and then drop the ack > 1 support in trunk (w/o bumping up the protocol
  version)? Thanks,

 Jun

 On Sun, Jan 18, 2015 at 8:24 PM, Gwen Shapira gshap...@cloudera.com
 wrote:

 Overall, agree on point #1, less sure on point #2.

 1. Some protocols never ever add new errors, while others add errors
 without bumping versions. HTTP is a good example of the second type.
 HTTP-451 was added fairly recently, there are some errors specific to
 NGINX, etc. No one cares. I think we should properly document in the
 wire-protocol doc that new errors can be added, and I think we should
 strongly suggest (and implement ourselves) that unknown error codes
 should be shown to users (or at least logged), so they can be googled
 and understood through our documentation.
 In addition, hierarchy of error codes, so clients will know if an
 error is retry-able just by looking at the code could be nice. Same
 for adding an error string to the protocol. These are future
 enhancements that should be discussed separately.

 2. I think we want to allow admins to upgrade their Kafka brokers
 without having to chase down clients in their organization and without
 getting blamed if clients break. I think it makes sense to have one
 version that will support existing behavior, but log warnings, so
 admins will know about misbehaving clients and can track them down
 before an upgrade that breaks them (or before the broken config causes
 them to lose data!). Hopefully this is indeed a very rare behavior and
 we are taking extra precaution for nothing, but I have customers where
 one traumatic upgrade means they will never upgrade a Kafka again, so
 I'm being conservative.

 Gwen
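
A minimal sketch of the blanket handling for unrecognized error codes argued for in 
point 1 above; the code-to-name table is a tiny illustrative subset of the real 
protocol errors, and the fallback message format is only meant to be easy to search for:

{code}
object ErrorCodes {
  // Small sample of known codes; anything else falls through to the generic branch.
  private val known: Map[Short, String] = Map(
    0.toShort -> "NONE",
    1.toShort -> "OFFSET_OUT_OF_RANGE",
    3.toShort -> "UNKNOWN_TOPIC_OR_PARTITION",
    6.toShort -> "NOT_LEADER_FOR_PARTITION"
  )

  def describe(code: Short): String =
    known.getOrElse(code, s"KAFKA-UNKNOWN-ERROR-$code (see the protocol guide for newer codes)")
}
{code}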


 On Sun, Jan 18, 2015 at 3:50 PM, Jun Rao j...@confluent.io wrote:
  Overall, I agree with Jay on both points.
 
  1. I think it's reasonable to add new error codes w/o bumping up the
  protocol version. In most cases, by adding new error codes, we are just
  refining the categorization of those unknown errors. So, a client
 shouldn't
  behave worse than before as long as unknown errors have been properly
  handled.
 
  2. I think it's reasonable to just document that 0.8.2 will be the last
  release that will support ack > 1 and remove the support completely in trunk
  w/o bumping up the protocol. This is because (a) we never included ack > 1
  explicitly in the documentation and so the usage should be limited; (2) ack
  > 1 doesn't provide the guarantee that people really want and so it
  shouldn't really be used.
 
  Thanks,
 
  Jun
 
 
  On Sun, Jan 18, 2015 at 11:03 AM, Jay Kreps jay.kr...@gmail.com
 wrote:
 
  Hey guys,
 
  I really think we are discussing two things here:
 
  How should we generally handle changes to the set of errors? Should
  introducing new errors be considered a protocol change or should we
 reserve
  the right to introduce new error codes?
  Given that this particular change is possibly incompatible, how should
 we
  handle it?
 
  I think it would be good for people who are responding here to be
 specific
  about which they are addressing.
 
  Here is what I think:
 
  1. Errors should be extensible within a protocol version.
 
  We should change the protocol documentation to list the errors that
 can be
  given back from each api, their meaning, and how to handle them, BUT we
  should explicitly state that the set of errors are open ended. That is
 we
  should reserve the right to introduce new errors and explicitly state
 that
  clients need a blanket unknown error handling mechanism. The error
 can
   link to the protocol definition (something like "Unknown error 42, see
   protocol definition at http://link"). We could make this work really
 well by
  instructing all the clients to report the error in a very googlable
 way as
  Oracle does with their error format (e.g. ORA-32) so that if you
 ever get
  the raw error google will take you to the definition.
 
  I agree that a more rigid definition seems like right thing, but
 having
  just implemented two clients and spent a bunch of time on the server
 side, I
  think, it will work out poorly in practice. Here is why:
 
  I think we will make a lot of mistakes in nailing down the set of error
  codes up front and we will end up going through 3-4 churns of the
 protocol
  definition just realizing the set of errors that can be thrown. I
 think this
  churn will actually make life worse for clients that now have to
 figure out
  7 identical versions of the protocol and will be a mess in terms of
 testing
  on the server side. I actually know this to be true because while
  implementing the clients I tried to guess the errors that could be
 thrown,
  then checked my guess by close code 

Re: [kafka-clients] Re: Heads up: KAFKA-1697 - remove code related to ack&gt;1 on the broker

2015-01-19 Thread Jun Rao
For 2, how about we make a change to log a warning for ack > 1 in 0.8.2 and
then drop the ack > 1 support in trunk (w/o bumping up the protocol
version)? Thanks,

Jun

On Sun, Jan 18, 2015 at 8:24 PM, Gwen Shapira gshap...@cloudera.com wrote:

 Overall, agree on point #1, less sure on point #2.

 1. Some protocols never ever add new errors, while others add errors
 without bumping versions. HTTP is a good example of the second type.
 HTTP-451 was added fairly recently, there are some errors specific to
 NGINX, etc. No one cares. I think we should properly document in the
 wire-protocol doc that new errors can be added, and I think we should
 strongly suggest (and implement ourselves) that unknown error codes
 should be shown to users (or at least logged), so they can be googled
 and understood through our documentation.
 In addition, hierarchy of error codes, so clients will know if an
 error is retry-able just by looking at the code could be nice. Same
 for adding an error string to the protocol. These are future
 enhancements that should be discussed separately.

 2. I think we want to allow admins to upgrade their Kafka brokers
 without having to chase down clients in their organization and without
 getting blamed if clients break. I think it makes sense to have one
 version that will support existing behavior, but log warnings, so
 admins will know about misbehaving clients and can track them down
 before an upgrade that breaks them (or before the broken config causes
 them to lose data!). Hopefully this is indeed a very rare behavior and
 we are taking extra precaution for nothing, but I have customers where
 one traumatic upgrade means they will never upgrade a Kafka again, so
 I'm being conservative.

 Gwen


 On Sun, Jan 18, 2015 at 3:50 PM, Jun Rao j...@confluent.io wrote:
  Overall, I agree with Jay on both points.
 
  1. I think it's reasonable to add new error codes w/o bumping up the
  protocol version. In most cases, by adding new error codes, we are just
  refining the categorization of those unknown errors. So, a client
 shouldn't
  behave worse than before as long as unknown errors have been properly
  handled.
 
  2. I think it's reasonable to just document that 0.8.2 will be the last
  release that will support ack > 1 and remove the support completely in trunk
  w/o bumping up the protocol. This is because (a) we never included ack > 1
  explicitly in the documentation and so the usage should be limited; (2) ack
  > 1 doesn't provide the guarantee that people really want and so it
  shouldn't really be used.
 
  Thanks,
 
  Jun
 
 
  On Sun, Jan 18, 2015 at 11:03 AM, Jay Kreps jay.kr...@gmail.com wrote:
 
  Hey guys,
 
  I really think we are discussing two things here:
 
  How should we generally handle changes to the set of errors? Should
  introducing new errors be considered a protocol change or should we
 reserve
  the right to introduce new error codes?
  Given that this particular change is possibly incompatible, how should
 we
  handle it?
 
  I think it would be good for people who are responding here to be
 specific
  about which they are addressing.
 
  Here is what I think:
 
  1. Errors should be extensible within a protocol version.
 
  We should change the protocol documentation to list the errors that can
 be
  given back from each api, their meaning, and how to handle them, BUT we
  should explicitly state that the set of errors are open ended. That is
 we
  should reserve the right to introduce new errors and explicitly state
 that
  clients need a blanket unknown error handling mechanism. The error can
  link to the protocol definition (something like "Unknown error 42, see
  protocol definition at http://link"). We could make this work really
 well by
  instructing all the clients to report the error in a very googlable way
 as
  Oracle does with their error format (e.g. ORA-32) so that if you ever
 get
  the raw error google will take you to the definition.
 
  I agree that a more rigid definition seems like right thing, but
 having
  just implemented two clients and spent a bunch of time on the server
 side, I
  think, it will work out poorly in practice. Here is why:
 
  I think we will make a lot of mistakes in nailing down the set of error
  codes up front and we will end up going through 3-4 churns of the
 protocol
  definition just realizing the set of errors that can be thrown. I think
 this
  churn will actually make life worse for clients that now have to figure
 out
  7 identical versions of the protocol and will be a mess in terms of
 testing
  on the server side. I actually know this to be true because while
  implementing the clients I tried to guess the errors that could be
 thrown,
  then checked my guess by close code inspection. It turned out that I
 always
  missed things in my belief about errors, but more importantly even after
  close code inspection I found tons of other errors in my stress testing.
  In practice error handling always involves 

Re: Review Request 30019: Patch for kafka-1876

2015-01-19 Thread Sriharsha Chintalapani

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/30019/#review68631
---

Ship it!


Ship It!

- Sriharsha Chintalapani


On Jan. 19, 2015, 1:51 a.m., Jun Rao wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/30019/
 ---
 
 (Updated Jan. 19, 2015, 1:51 a.m.)
 
 
 Review request for kafka.
 
 
 Bugs: kafka-1876
 https://issues.apache.org/jira/browse/kafka-1876
 
 
 Repository: kafka
 
 
 Description
 ---
 
 bind to a specific version of scala 2.11
 
 
 Diffs
 -
 
   build.gradle c9ac43378c3bf5443f0f47c8ba76067237ecb348 
 
 Diff: https://reviews.apache.org/r/30019/diff/
 
 
 Testing
 ---
 
 
 Thanks,
 
 Jun Rao
 




Re: Review Request 30026: Patch for KAFKA-1878

2015-01-19 Thread Jaikiran Pai


 On Jan. 19, 2015, 4:14 p.m., Jun Rao wrote:
  core/src/test/scala/integration/kafka/api/ProducerFailureHandlingTest.scala,
   lines 310-320
  https://reviews.apache.org/r/30026/diff/1/?file=824938#file824938line310
 
  It seems it's simpler to just change the defaultOffsetPartition in the 
  broker config to 1. The test will run faster that way.
 
 Jaikiran Pai wrote:
 That sounds good too. I don't yet have enough knowledge of the code to be 
 sure it wouldn't introduce other issues, so went with this very isolated 
 change. I'll update this patch (and run the test) with the change you 
 suggest. Thanks Jun Rao!

Patch updated and ran the tests a few times with this change and it passed.
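
For readers following along, a minimal sketch of the broker-side override being 
discussed here, assuming a test harness that lets you adjust the broker Properties 
before startup; the property name is the standard broker config, while the other 
values are placeholders:

{code}
import java.util.Properties
import kafka.server.KafkaConfig

object SmallOffsetsTopicBrokerConfig {
  // offsets.topic.num.partitions defaults to 50; a single partition means only one
  // leader election before metadata for __consumer_offsets is complete.
  def brokerProps: Properties = {
    val props = new Properties()
    props.put("broker.id", "0")
    props.put("port", "9092")                          // placeholder
    props.put("zookeeper.connect", "localhost:2181")   // placeholder
    props.put("log.dirs", "/tmp/kafka-test-logs")      // placeholder
    props.put("offsets.topic.num.partitions", "1")
    props
  }

  def config: KafkaConfig = new KafkaConfig(brokerProps)
}
{code}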


- Jaikiran


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/30026/#review68620
---


On Jan. 19, 2015, 4:33 p.m., Jaikiran Pai wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/30026/
 ---
 
 (Updated Jan. 19, 2015, 4:33 p.m.)
 
 
 Review request for kafka.
 
 
 Bugs: KAFKA-1878
 https://issues.apache.org/jira/browse/KAFKA-1878
 
 
 Repository: kafka
 
 
 Description
 ---
 
 KAFKA-1878 Set a smaller value for the number of partitions for the offset 
 commit topic in the test, to prevent timeouts while fetching metadata for the 
 topic
 
 
 Diffs
 -
 
   core/src/test/scala/integration/kafka/api/ProducerFailureHandlingTest.scala 
 420a1dd30264c72704cc383a4161034c7922177d 
 
 Diff: https://reviews.apache.org/r/30026/diff/
 
 
 Testing
 ---
 
 
 Thanks,
 
 Jaikiran Pai
 




[jira] [Updated] (KAFKA-1761) num.partitions documented default is 1 while actual default is 2

2015-01-19 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1761?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao updated KAFKA-1761:
---
   Resolution: Fixed
Fix Version/s: 0.8.2
   Status: Resolved  (was: Patch Available)

Thanks for the patch. Committed to 0.8.2 and trunk.

 num.partitions documented default is 1 while actual default is 2
 -

 Key: KAFKA-1761
 URL: https://issues.apache.org/jira/browse/KAFKA-1761
 Project: Kafka
  Issue Type: Bug
  Components: log
Affects Versions: 0.8.1.1
Reporter: Stevo Slavic
Assignee: Manikumar Reddy
Priority: Minor
 Fix For: 0.8.2

 Attachments: KAFKA-1761.patch, KAFKA-1761_2015-01-19_11:51:58.patch


 The default {{num.partitions}} documented in 
 http://kafka.apache.org/08/configuration.html is 1, while the server 
 configuration defaults the same parameter to 2 (see 
 https://github.com/apache/kafka/blob/0.8.1/config/server.properties#L63 ).
 Please have this inconsistency fixed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1728) update 082 docs

2015-01-19 Thread Jun Rao (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1728?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14282685#comment-14282685
 ] 

Jun Rao commented on KAFKA-1728:


Thanks for the update. Could you attach a patch against 
https://svn.apache.org/repos/asf/kafka/site/082 ? Thanks,

 update 082 docs
 ---

 Key: KAFKA-1728
 URL: https://issues.apache.org/jira/browse/KAFKA-1728
 Project: Kafka
  Issue Type: Task
Affects Versions: 0.8.2
Reporter: Jun Rao
Priority: Blocker
 Fix For: 0.8.2


 We need to update the docs for 082 release.
 https://svn.apache.org/repos/asf/kafka/site/082
 http://kafka.apache.org/082/documentation.html



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Review Request 30046: Patch for KAFKA-1879

2015-01-19 Thread Jun Rao

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/30046/#review68659
---



core/src/main/scala/kafka/server/KafkaApis.scala
https://reviews.apache.org/r/30046/#comment113005

Should we include the actual requiredAcks used in the request?


- Jun Rao


On Jan. 19, 2015, 6:58 p.m., Gwen Shapira wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/30046/
 ---
 
 (Updated Jan. 19, 2015, 6:58 p.m.)
 
 
 Review request for kafka.
 
 
 Bugs: KAFKA-1879
 https://issues.apache.org/jira/browse/KAFKA-1879
 
 
 Repository: kafka
 
 
 Description
 ---
 
 fix string format
 
 
 Diffs
 -
 
   core/src/main/scala/kafka/server/KafkaApis.scala 
 7def85254b1a17bfcaef96e49c3c98bc5a93c423 
 
 Diff: https://reviews.apache.org/r/30046/diff/
 
 
 Testing
 ---
 
 
 Thanks,
 
 Gwen Shapira
 




[jira] [Updated] (KAFKA-1144) commitOffsets can be passed the offsets to commit

2015-01-19 Thread Tony Stevenson (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1144?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tony Stevenson updated KAFKA-1144:
--
Reporter: Imran Rashid  (was: Imran Rashid)

 commitOffsets can be passed the offsets to commit
 -

 Key: KAFKA-1144
 URL: https://issues.apache.org/jira/browse/KAFKA-1144
 Project: Kafka
  Issue Type: Improvement
  Components: consumer
Affects Versions: 0.8.0
Reporter: Imran Rashid
Assignee: Neha Narkhede
 Attachments: 
 0001-allow-committing-of-arbitrary-offsets-to-facilitate-.patch, 
 0002-add-protection-against-backward-commits.patch, 
 0003-dont-do-conditional-update-check-if-the-path-doesnt-.patch


 This adds another version of commitOffsets that takes the offsets to commit 
 as a parameter.
 Without this change, getting correct user code is very hard. Despite kafka's 
 at-least-once guarantees, most user code doesn't actually have that 
 guarantee, and is almost certainly wrong if doing batch processing. Getting 
 it right requires some very careful synchronization between all consumer 
 threads, which is both:
 1) painful to get right
 2) slow b/c of the need to stop all workers during a commit.
 This small change simplifies a lot of this. This was discussed extensively on 
 the user mailing list, on the thread "are kafka consumer apps guaranteed to 
 see msgs at least once?"
 You can also see an example implementation of a user api which makes use of 
 this, to get proper at-least-once guarantees by user code, even for batches:
 https://github.com/quantifind/kafka-utils/pull/1
 I'm open to any suggestions on how to add unit tests for this.
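
As an illustration of the proposed API shape, a minimal standalone sketch (the types and names below are placeholders, not the actual Kafka consumer interface):

// Placeholder type for illustration; not kafka.common.TopicAndPartition or the real consumer connector.
case class TopicAndPartition(topic: String, partition: Int)

trait OffsetCommitter {
  /** Commit exactly the offsets the caller has finished processing, not everything consumed so far. */
  def commitOffsets(offsets: Map[TopicAndPartition, Long]): Unit
}

object InMemoryCommitter extends OffsetCommitter {
  private var committed = Map.empty[TopicAndPartition, Long]

  def commitOffsets(offsets: Map[TopicAndPartition, Long]): Unit =
    // Ignore commits that would move an offset backwards, mirroring the
    // "protection against backward commits" patch attached above.
    committed = committed ++ offsets.filter { case (tp, off) => off >= committed.getOrElse(tp, -1L) }

  def main(args: Array[String]): Unit = {
    commitOffsets(Map(TopicAndPartition("events", 0) -> 42L))
    println(committed)
  }
}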



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-1880) Add support for checking binary/source compatibility

2015-01-19 Thread Ashish Kumar Singh (JIRA)
Ashish Kumar Singh created KAFKA-1880:
-

 Summary: Add support for checking binary/source compatibility
 Key: KAFKA-1880
 URL: https://issues.apache.org/jira/browse/KAFKA-1880
 Project: Kafka
  Issue Type: New Feature
Reporter: Ashish Kumar Singh
Assignee: Ashish Kumar Singh


Recent discussions around compatibility show how important compatibility is to 
users. [Java API Compliance 
Checker|http://ispras.linuxbase.org/index.php/Java_API_Compliance_Checker] is a 
tool for checking backward binary and source-level compatibility of a Java 
library API. Kafka can leverage the tool to find and fix existing 
incompatibility issues and prevent new issues from getting into the product.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-1881) transient unit test failure in testDeleteTopicWithCleaner due to OOME

2015-01-19 Thread Jun Rao (JIRA)
Jun Rao created KAFKA-1881:
--

 Summary: transient unit test failure in testDeleteTopicWithCleaner 
due to OOME
 Key: KAFKA-1881
 URL: https://issues.apache.org/jira/browse/KAFKA-1881
 Project: Kafka
  Issue Type: Bug
  Components: core
Affects Versions: 0.8.3
Reporter: Jun Rao


kafka.admin.DeleteTopicTest > testDeleteTopicWithCleaner FAILED
java.lang.OutOfMemoryError: Java heap space
at java.nio.HeapByteBuffer.<init>(HeapByteBuffer.java:39)
at java.nio.ByteBuffer.allocate(ByteBuffer.java:312)
at kafka.log.SkimpyOffsetMap.<init>(OffsetMap.scala:42)
at kafka.log.LogCleaner$CleanerThread.<init>(LogCleaner.scala:177)
at kafka.log.LogCleaner$$anonfun$1.apply(LogCleaner.scala:86)
at kafka.log.LogCleaner$$anonfun$1.apply(LogCleaner.scala:86)
at 
scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
at 
scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
at scala.collection.immutable.Range.foreach(Range.scala:141)
at scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
at scala.collection.AbstractTraversable.map(Traversable.scala:105)
at kafka.log.LogCleaner.<init>(LogCleaner.scala:86)
at kafka.log.LogManager.<init>(LogManager.scala:64)
at kafka.server.KafkaServer.createLogManager(KafkaServer.scala:337)
at kafka.server.KafkaServer.startup(KafkaServer.scala:85)
at kafka.utils.TestUtils$.createServer(TestUtils.scala:134)
at 
kafka.admin.DeleteTopicTest$$anonfun$10.apply(DeleteTopicTest.scala:272)
at 
kafka.admin.DeleteTopicTest$$anonfun$10.apply(DeleteTopicTest.scala:272)
at 
scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
at 
scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
at scala.collection.immutable.List.foreach(List.scala:318)
at scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
at scala.collection.AbstractTraversable.map(Traversable.scala:105)
at 
kafka.admin.DeleteTopicTest.createTestTopicAndCluster(DeleteTopicTest.scala:272)
at 
kafka.admin.DeleteTopicTest.testDeleteTopicWithCleaner(DeleteTopicTest.scala:241)




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1876) pom file for scala 2.11 should reference a specific version

2015-01-19 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1876?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao updated KAFKA-1876:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Thanks for the reviews. Committed to 0.8.2 and trunk after fixing the README 
accordingly.

 pom file for scala 2.11 should reference a specific version
 ---

 Key: KAFKA-1876
 URL: https://issues.apache.org/jira/browse/KAFKA-1876
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.8.2
Reporter: Jun Rao
Assignee: Jun Rao
Priority: Blocker
 Fix For: 0.8.2

 Attachments: kafka-1876.patch, twoeleven.tgz


 Currently, the pom file specifies the following scala dependency for 2.11.
 <dependency>
   <groupId>org.scala-lang</groupId>
   <artifactId>scala-library</artifactId>
   <version>2.11</version>
   <scope>compile</scope>
 </dependency>
 However, there is no 2.11 in maven central (there are only 2.11.1, 2.11.2, 
 etc).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Review Request 30046: Patch for KAFKA-1879

2015-01-19 Thread Jun Rao


 On Jan. 19, 2015, 7:22 p.m., Jun Rao wrote:
  core/src/main/scala/kafka/server/KafkaApis.scala, lines 197-201
  https://reviews.apache.org/r/30046/diff/1/?file=825225#file825225line197
 
  Should we include the actual requiredAcks used in the request?

Also, we are not deprecating the parameter. We are just deprecating some values 
for this parameter. So, perhaps we can reword the logging message a bit.
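
For illustration only, a standalone sketch of the kind of check and wording under discussion (ProduceMeta and the message text are placeholders, not the actual KafkaApis code):

object AckDeprecationWarning {
  // Placeholder for the few request fields the warning needs.
  case class ProduceMeta(clientId: String, requiredAcks: Short)

  // Only acks values greater than 1 are deprecated; the parameter itself stays.
  def deprecationWarning(meta: ProduceMeta): Option[String] =
    if (meta.requiredAcks > 1)
      Some(s"Client ${meta.clientId} sent a produce request with request.required.acks=${meta.requiredAcks}. " +
        "Values greater than 1 are deprecated and support for them will be removed in a future release.")
    else None

  def main(args: Array[String]): Unit =
    deprecationWarning(ProduceMeta("my-producer", 3)).foreach(println)
}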


- Jun


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/30046/#review68659
---


On Jan. 19, 2015, 6:58 p.m., Gwen Shapira wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/30046/
 ---
 
 (Updated Jan. 19, 2015, 6:58 p.m.)
 
 
 Review request for kafka.
 
 
 Bugs: KAFKA-1879
 https://issues.apache.org/jira/browse/KAFKA-1879
 
 
 Repository: kafka
 
 
 Description
 ---
 
 fix string format
 
 
 Diffs
 -
 
   core/src/main/scala/kafka/server/KafkaApis.scala 
 7def85254b1a17bfcaef96e49c3c98bc5a93c423 
 
 Diff: https://reviews.apache.org/r/30046/diff/
 
 
 Testing
 ---
 
 
 Thanks,
 
 Gwen Shapira
 




[jira] [Commented] (KAFKA-1881) transient unit test failure in testDeleteTopicWithCleaner due to OOME

2015-01-19 Thread Jun Rao (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1881?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14282917#comment-14282917
 ] 

Jun Rao commented on KAFKA-1881:


Perhaps we can just reduce the amount of data written in writeDups().
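
A rough sketch of that idea, purely illustrative (writeDups below is a stand-in, not the actual test helper):

object WriteDupsSketch {
  // Generate (key, value) pairs with duplicate keys; fewer keys and fewer duplicates
  // means less data for the log cleaner to process in the test.
  def writeDups(numKeys: Int, numDups: Int): Seq[(Int, Int)] =
    for (dup <- 0 until numDups; key <- 0 until numKeys) yield (key, dup)

  def main(args: Array[String]): Unit =
    // e.g. shrink from 100 keys x 3 dups to 20 keys x 2 dups to keep heap pressure low
    println(writeDups(numKeys = 20, numDups = 2).size)
}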


 transient unit test failure in testDeleteTopicWithCleaner due to OOME
 -

 Key: KAFKA-1881
 URL: https://issues.apache.org/jira/browse/KAFKA-1881
 Project: Kafka
  Issue Type: Bug
  Components: core
Affects Versions: 0.8.3
Reporter: Jun Rao

 kafka.admin.DeleteTopicTest > testDeleteTopicWithCleaner FAILED
 java.lang.OutOfMemoryError: Java heap space
 at java.nio.HeapByteBuffer.<init>(HeapByteBuffer.java:39)
 at java.nio.ByteBuffer.allocate(ByteBuffer.java:312)
 at kafka.log.SkimpyOffsetMap.<init>(OffsetMap.scala:42)
 at kafka.log.LogCleaner$CleanerThread.<init>(LogCleaner.scala:177)
 at kafka.log.LogCleaner$$anonfun$1.apply(LogCleaner.scala:86)
 at kafka.log.LogCleaner$$anonfun$1.apply(LogCleaner.scala:86)
 at 
 scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
 at 
 scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
 at scala.collection.immutable.Range.foreach(Range.scala:141)
 at 
 scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
 at scala.collection.AbstractTraversable.map(Traversable.scala:105)
 at kafka.log.LogCleaner.<init>(LogCleaner.scala:86)
 at kafka.log.LogManager.<init>(LogManager.scala:64)
 at kafka.server.KafkaServer.createLogManager(KafkaServer.scala:337)
 at kafka.server.KafkaServer.startup(KafkaServer.scala:85)
 at kafka.utils.TestUtils$.createServer(TestUtils.scala:134)
 at 
 kafka.admin.DeleteTopicTest$$anonfun$10.apply(DeleteTopicTest.scala:272)
 at 
 kafka.admin.DeleteTopicTest$$anonfun$10.apply(DeleteTopicTest.scala:272)
 at 
 scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
 at 
 scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
 at scala.collection.immutable.List.foreach(List.scala:318)
 at 
 scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
 at scala.collection.AbstractTraversable.map(Traversable.scala:105)
 at 
 kafka.admin.DeleteTopicTest.createTestTopicAndCluster(DeleteTopicTest.scala:272)
 at 
 kafka.admin.DeleteTopicTest.testDeleteTopicWithCleaner(DeleteTopicTest.scala:241)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1876) pom file for scala 2.11 should reference a specific version

2015-01-19 Thread Joe Stein (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1876?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Stein updated KAFKA-1876:
-
Attachment: twoeleven.tgz

gradle works, yes; however maven fails since it doesn't have unicorns and pixie 
dust built in. 

I attached the project I used to test gradle and maven w/ 2.11 support.

We need this patch, otherwise the pom won't work for anyone wanting to use kafka with 2.11.

patch LGTM +1

 pom file for scala 2.11 should reference a specific version
 ---

 Key: KAFKA-1876
 URL: https://issues.apache.org/jira/browse/KAFKA-1876
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.8.2
Reporter: Jun Rao
Assignee: Jun Rao
Priority: Blocker
 Fix For: 0.8.2

 Attachments: kafka-1876.patch, twoeleven.tgz


 Currently, the pom file specifies the following scala dependency for 2.11.
 <dependency>
   <groupId>org.scala-lang</groupId>
   <artifactId>scala-library</artifactId>
   <version>2.11</version>
   <scope>compile</scope>
 </dependency>
 However, there is no 2.11 in maven central (there are only 2.11.1, 2.11.2, 
 etc).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [kafka-clients] Re: Heads up: KAFKA-1697 - remove code related to ack>1 on the broker

2015-01-19 Thread Gwen Shapira
Sounds good to me.
I'll open a new JIRA for 0.8.2 with just an extra log warning, to
avoid making KAFKA-1697 any more confusing.

On Mon, Jan 19, 2015 at 9:46 AM, Joe Stein joe.st...@stealth.ly wrote:
  For 2, how about we make a change to log a warning for ack > 1 in 0.8.2
 and then drop the ack > 1 support in trunk (w/o bumping up the protocol
 version)?

 +1


 On Mon, Jan 19, 2015 at 12:35 PM, Jun Rao j...@confluent.io wrote:

 For 2, how about we make a change to log a warning for ack > 1 in 0.8.2
 and then drop the ack > 1 support in trunk (w/o bumping up the protocol
 version)? Thanks,

 Jun

 On Sun, Jan 18, 2015 at 8:24 PM, Gwen Shapira gshap...@cloudera.com
 wrote:

 Overall, agree on point #1, less sure on point #2.

 1. Some protocols never ever add new errors, while others add errors
 without bumping versions. HTTP is a good example of the second type.
 HTTP-451 was added fairly recently, there are some errors specific to
 NGINX, etc. No one cares. I think we should properly document in the
 wire-protocol doc that new errors can be added, and I think we should
 strongly suggest (and implement ourselves) that unknown error codes
 should be shown to users (or at least logged), so they can be googled
 and understood through our documentation.
 In addition, hierarchy of error codes, so clients will know if an
 error is retry-able just by looking at the code could be nice. Same
 for adding an error string to the protocol. These are future
 enhancements that should be discussed separately.

 2. I think we want to allow admins to upgrade their Kafka brokers
 without having to chase down clients in their organization and without
 getting blamed if clients break. I think it makes sense to have one
 version that will support existing behavior, but log warnings, so
 admins will know about misbehaving clients and can track them down
 before an upgrade that breaks them (or before the broken config causes
 them to lose data!). Hopefully this is indeed a very rare behavior and
 we are taking extra precaution for nothing, but I have customers where
 one traumatic upgrade means they will never upgrade a Kafka again, so
 I'm being conservative.

 Gwen


 On Sun, Jan 18, 2015 at 3:50 PM, Jun Rao j...@confluent.io wrote:
  Overall, I agree with Jay on both points.
 
  1. I think it's reasonable to add new error codes w/o bumping up the
  protocol version. In most cases, by adding new error codes, we are just
  refining the categorization of those unknown errors. So, a client
  shouldn't
  behave worse than before as long as unknown errors have been properly
  handled.
 
  2. I think it's reasonable to just document that 0.8.2 will be the last
  release that will support ack > 1 and remove the support completely in
  trunk w/o bumping up the protocol. This is because (a) we never included
  ack > 1 explicitly in the documentation and so the usage should be limited;
  (b) ack > 1 doesn't provide the guarantee that people really want and so it
  shouldn't really be used.
 
  Thanks,
 
  Jun
 
 
  On Sun, Jan 18, 2015 at 11:03 AM, Jay Kreps jay.kr...@gmail.com
  wrote:
 
  Hey guys,
 
  I really think we are discussing two things here:
 
  How should we generally handle changes to the set of errors? Should
  introducing new errors be considered a protocol change or should we
  reserve
  the right to introduce new error codes?
  Given that this particular change is possibly incompatible, how should
  we
  handle it?
 
  I think it would be good for people who are responding here to be
  specific
  about which they are addressing.
 
  Here is what I think:
 
  1. Errors should be extensible within a protocol version.
 
  We should change the protocol documentation to list the errors that
  can be
  given back from each api, their meaning, and how to handle them, BUT
  we
  should explicitly state that the set of errors are open ended. That is
  we
  should reserve the right to introduce new errors and explicitly state
  that
  clients need a blanket unknown error handling mechanism. The error can
  link to the protocol definition (something like "Unknown error 42, see
  protocol definition at http://link"). We could make this work really
  well by
  instructing all the clients to report the error in a very googlable
  way as
  Oracle does with their error format (e.g. ORA-32) so that if you
  ever get
  the raw error google will take you to the definition.
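
  A small client-side sketch of that pattern (the listed codes are examples only; the real protocol defines the full set):

  object ErrorCodeLookup {
    // A few example codes; real clients would map the documented set here.
    private val known: Map[Short, String] = Map(
      0.toShort -> "NONE",
      1.toShort -> "OFFSET_OUT_OF_RANGE",
      6.toShort -> "NOT_LEADER_FOR_PARTITION")

    // Blanket fallback: unknown codes still produce a stable, googlable message.
    def describe(code: Short): String =
      known.getOrElse(code, s"Unknown Kafka protocol error $code, see the protocol definition")

    def main(args: Array[String]): Unit = {
      println(describe(6))
      println(describe(42)) // handled by the blanket fallback
    }
  }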
 
  I agree that a more rigid definition seems like right thing, but
  having
  just implemented two clients and spent a bunch of time on the server
  side, I
  think, it will work out poorly in practice. Here is why:
 
  I think we will make a lot of mistakes in nailing down the set of
  error
  codes up front and we will end up going through 3-4 churns of the
  protocol
  definition just realizing the set of errors that can be thrown. I
  think this
  churn will actually make life worse for clients that now have to
  figure out
  7 identical versions of the 

[jira] [Commented] (KAFKA-1876) pom file for scala 2.11 should reference a specific version

2015-01-19 Thread Ewen Cheslack-Postava (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1876?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14282784#comment-14282784
 ] 

Ewen Cheslack-Postava commented on KAFKA-1876:
--

Based on the output of {{./gradlew -PscalaVersion=2.11 core:dependencies}}, it 
looks like this happens to work because the scala dependencies end up being 
pulled in by scala-xml_2.11/scala-parser-combinators_2.11. Specifying an exact 
version gets the right behavior instead of defaulting to 2.11.1, which is what 
pulling it in transitively does.

The patch looks fine for making releaseTarGzAll and uploadArchives use the 
2.11.5 instead of using whatever version the other dependencies happen to pull 
in. We might also want validation of the scala version in scala.gradle. With 
this patch, I can still run {{./gradlew -PscalaVersion=2.11 jar}} and end up 
with the same behavior where the specific scala version is chosen implicitly 
via transitive dependencies.
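
A rough sketch of the kind of validation being suggested, written in Scala here purely for illustration (the real check would live in scala.gradle):

object ScalaVersionCheck {
  private val FullyQualified = """\d+\.\d+\.\d+""".r

  // Reject a bare major.minor version like "2.11", which otherwise lets the concrete
  // patch release be chosen implicitly via transitive dependencies.
  def validate(version: String): Either[String, String] =
    if (FullyQualified.pattern.matcher(version).matches()) Right(version)
    else Left(s"scalaVersion '$version' must be fully qualified, e.g. 2.11.5")

  def main(args: Array[String]): Unit = {
    println(validate("2.11"))   // Left(...)
    println(validate("2.11.5")) // Right(2.11.5)
  }
}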

 pom file for scala 2.11 should reference a specific version
 ---

 Key: KAFKA-1876
 URL: https://issues.apache.org/jira/browse/KAFKA-1876
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.8.2
Reporter: Jun Rao
Assignee: Jun Rao
Priority: Blocker
 Fix For: 0.8.2

 Attachments: kafka-1876.patch


 Currently, the pom file specifies the following scala dependency for 2.11.
 <dependency>
   <groupId>org.scala-lang</groupId>
   <artifactId>scala-library</artifactId>
   <version>2.11</version>
   <scope>compile</scope>
 </dependency>
 However, there is no 2.11 in maven central (there are only 2.11.1, 2.11.2, 
 etc).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-1879) Log warning when receiving produce requests with acks > 1

2015-01-19 Thread Gwen Shapira (JIRA)
Gwen Shapira created KAFKA-1879:
---

 Summary: Log warning when receiving produce requests with acks > 1
 Key: KAFKA-1879
 URL: https://issues.apache.org/jira/browse/KAFKA-1879
 Project: Kafka
  Issue Type: Improvement
Reporter: Gwen Shapira
Assignee: Gwen Shapira
 Fix For: 0.8.2


0.8.2 deprecates support for acks > 1.
We want to start logging warnings when clients use this deprecated behavior, so 
we can safely drop it in the next release (see KAFKA-1697 for more details).





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1728) update 082 docs

2015-01-19 Thread Manikumar Reddy (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1728?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar Reddy updated KAFKA-1728:
---
Attachment: default-config-value-0.8.2.patch

Uploaded patch to correct default config values in 0.8.2 docs.

1. log.segment.delete.delay.ms, num.consumer.fetchers, 
partition.assignment.strategy config props are not available in docs. Can I 
include them?

2. Offset Management props need to be merged from KAFKA-1729

 update 082 docs
 ---

 Key: KAFKA-1728
 URL: https://issues.apache.org/jira/browse/KAFKA-1728
 Project: Kafka
  Issue Type: Task
Affects Versions: 0.8.2
Reporter: Jun Rao
Priority: Blocker
 Fix For: 0.8.2

 Attachments: default-config-value-0.8.2.patch


 We need to update the docs for 082 release.
 https://svn.apache.org/repos/asf/kafka/site/082
 http://kafka.apache.org/082/documentation.html



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1852) OffsetCommitRequest can commit offset on unknown topic

2015-01-19 Thread Sriharsha Chintalapani (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1852?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sriharsha Chintalapani updated KAFKA-1852:
--
Attachment: KAFKA-1852_2015-01-19_10:44:01.patch

 OffsetCommitRequest can commit offset on unknown topic
 --

 Key: KAFKA-1852
 URL: https://issues.apache.org/jira/browse/KAFKA-1852
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.8.3
Reporter: Jun Rao
Assignee: Sriharsha Chintalapani
 Attachments: KAFKA-1852.patch, KAFKA-1852_2015-01-19_10:44:01.patch


 Currently, we allow an offset to be committed to Kafka, even when the 
 topic/partition for the offset doesn't exist. We probably should disallow 
 that and send an error back in that case.
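
 For illustration, a standalone sketch of the validation idea (names are placeholders, not Kafka's OffsetManager API):

 object OffsetCommitValidation {
   case class TopicPartition(topic: String, partition: Int)

   // Split requested commits into those for known topic-partitions and those to reject.
   def validate(requested: Map[TopicPartition, Long],
                existing: Set[TopicPartition]): (Map[TopicPartition, Long], Set[TopicPartition]) = {
     val (ok, unknown) = requested.partition { case (tp, _) => existing.contains(tp) }
     (ok, unknown.keySet)
   }

   def main(args: Array[String]): Unit = {
     val existing = Set(TopicPartition("events", 0))
     val req = Map(TopicPartition("events", 0) -> 10L, TopicPartition("ghost", 0) -> 5L)
     println(validate(req, existing)) // the "ghost" commit would get an error response instead of being stored
   }
 }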



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Review Request 29912: Patch for KAFKA-1852

2015-01-19 Thread Sriharsha Chintalapani

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/29912/
---

(Updated Jan. 19, 2015, 6:44 p.m.)


Review request for kafka.


Bugs: KAFKA-1852
https://issues.apache.org/jira/browse/KAFKA-1852


Repository: kafka


Description
---

KAFKA-1852. OffsetCommitRequest can commit offset on unknown topic.


Diffs (updated)
-

  core/src/main/scala/kafka/server/KafkaApis.scala 
ec8d9f7ba44741db40875458bd524c4062ad6a26 
  core/src/main/scala/kafka/server/OffsetManager.scala 
0bdd42fea931cddd072c0fff765b10526db6840a 
  core/src/test/scala/unit/kafka/server/OffsetCommitTest.scala 
5b93239cdc26b5be7696f4e7863adb9fbe5f0ed5 

Diff: https://reviews.apache.org/r/29912/diff/


Testing
---


Thanks,

Sriharsha Chintalapani



[jira] [Commented] (KAFKA-1852) OffsetCommitRequest can commit offset on unknown topic

2015-01-19 Thread Sriharsha Chintalapani (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14282844#comment-14282844
 ] 

Sriharsha Chintalapani commented on KAFKA-1852:
---

Updated reviewboard https://reviews.apache.org/r/29912/diff/
 against branch origin/trunk

 OffsetCommitRequest can commit offset on unknown topic
 --

 Key: KAFKA-1852
 URL: https://issues.apache.org/jira/browse/KAFKA-1852
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.8.3
Reporter: Jun Rao
Assignee: Sriharsha Chintalapani
 Attachments: KAFKA-1852.patch, KAFKA-1852_2015-01-19_10:44:01.patch


 Currently, we allow an offset to be committed to Kafka, even when the 
 topic/partition for the offset doesn't exist. We probably should disallow 
 that and send an error back in that case.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1729) add doc for Kafka-based offset management in 0.8.2

2015-01-19 Thread Manikumar Reddy (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1729?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar Reddy updated KAFKA-1729:
---
Fix Version/s: 0.8.2

 add doc for Kafka-based offset management in 0.8.2
 --

 Key: KAFKA-1729
 URL: https://issues.apache.org/jira/browse/KAFKA-1729
 Project: Kafka
  Issue Type: Sub-task
Reporter: Jun Rao
Assignee: Joel Koshy
 Fix For: 0.8.2

 Attachments: KAFKA-1782-doc-v1.patch, KAFKA-1782-doc-v2.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1879) Log warning when receiving produce requests with acks > 1

2015-01-19 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1879?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-1879:

Status: Patch Available  (was: Open)

 Log warning when receiving produce requests with acks > 1
 -

 Key: KAFKA-1879
 URL: https://issues.apache.org/jira/browse/KAFKA-1879
 Project: Kafka
  Issue Type: Improvement
Reporter: Gwen Shapira
Assignee: Gwen Shapira
 Fix For: 0.8.2

 Attachments: KAFKA-1879.patch


 0.8.2 deprecates support for acks > 1.
 We want to start logging warnings when clients use this deprecated behavior, 
 so we can safely drop it in the next release (see KAFKA-1697 for more 
 details).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1879) Log warning when receiving produce requests with acks > 1

2015-01-19 Thread Gwen Shapira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14282866#comment-14282866
 ] 

Gwen Shapira commented on KAFKA-1879:
-

Created reviewboard https://reviews.apache.org/r/30046/diff/
 against branch origin/0.8.2

 Log warning when receiving produce requests with acks > 1
 -

 Key: KAFKA-1879
 URL: https://issues.apache.org/jira/browse/KAFKA-1879
 Project: Kafka
  Issue Type: Improvement
Reporter: Gwen Shapira
Assignee: Gwen Shapira
 Fix For: 0.8.2

 Attachments: KAFKA-1879.patch


 0.8.2 deprecates support for acks > 1.
 We want to start logging warnings when clients use this deprecated behavior, 
 so we can safely drop it in the next release (see KAFKA-1697 for more 
 details).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1879) Log warning when receiving produce requests with acks > 1

2015-01-19 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1879?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-1879:

Attachment: KAFKA-1879.patch

 Log warning when receiving produce requests with acks > 1
 -

 Key: KAFKA-1879
 URL: https://issues.apache.org/jira/browse/KAFKA-1879
 Project: Kafka
  Issue Type: Improvement
Reporter: Gwen Shapira
Assignee: Gwen Shapira
 Fix For: 0.8.2

 Attachments: KAFKA-1879.patch


 0.8.2 deprecates support for acks > 1.
 We want to start logging warnings when clients use this deprecated behavior, 
 so we can safely drop it in the next release (see KAFKA-1697 for more 
 details).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Review Request 30046: Patch for KAFKA-1879

2015-01-19 Thread Gwen Shapira

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/30046/
---

Review request for kafka.


Bugs: KAFKA-1879
https://issues.apache.org/jira/browse/KAFKA-1879


Repository: kafka


Description
---

fix string format


Diffs
-

  core/src/main/scala/kafka/server/KafkaApis.scala 
7def85254b1a17bfcaef96e49c3c98bc5a93c423 

Diff: https://reviews.apache.org/r/30046/diff/


Testing
---


Thanks,

Gwen Shapira



[jira] [Commented] (KAFKA-1874) missing import util.parsing.json.JSON

2015-01-19 Thread Jun Rao (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1874?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14282868#comment-14282868
 ] 

Jun Rao commented on KAFKA-1874:


You need to add the dependency to scala-parser-combinators_2.11.
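
For reference, in Scala 2.11 that import only resolves when the parser-combinators module is on the classpath; an illustrative snippet:

// Needs the org.scala-lang.modules:scala-parser-combinators_2.11 artifact on the classpath
// when building against Scala 2.11, since the package was split out of the standard library.
import scala.util.parsing.json.JSON

object JsonParseCheck {
  def main(args: Array[String]): Unit =
    // parseFull returns Option[Any]: Some(Map(...)) if the JSON is valid
    println(JSON.parseFull("""{"key": "value"}"""))
}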

 missing import util.parsing.json.JSON
 -

 Key: KAFKA-1874
 URL: https://issues.apache.org/jira/browse/KAFKA-1874
 Project: Kafka
  Issue Type: Bug
  Components: core
Affects Versions: 0.8.2
 Environment: Mac OSX Yosemite
 Oracle JDK 1.7.0_72
 eclipse Mars M4
 Scala 2.11.5
Reporter: Sree Vaddi
  Labels: class, missing, scala
 Fix For: 0.8.2

 Attachments: Screen Shot 2015-01-17 at 3.14.33 PM.png

   Original Estimate: 1h
  Remaining Estimate: 1h

 core project
 main scala folder
 kafka.utils.Json.scala file
 line#21
 import util.parsing.json.JSON
 this class is missing.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

