[jira] [Commented] (KAFKA-853) Allow OffsetFetchRequest to initialize offsets

2015-01-20 Thread Balaji Seshadri (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-853?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14285163#comment-14285163
 ] 

Balaji Seshadri commented on KAFKA-853:
---

[~mgharat] I need to analyze more before I answer your question; I just now took 
this JIRA.

> Allow OffsetFetchRequest to initialize offsets
> --
>
> Key: KAFKA-853
> URL: https://issues.apache.org/jira/browse/KAFKA-853
> Project: Kafka
>  Issue Type: Improvement
>  Components: core
>Affects Versions: 0.8.1
>Reporter: David Arthur
>Assignee: Balaji Seshadri
> Fix For: 0.9.0
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> It would be nice for the OffsetFetchRequest API to have the option to 
> initialize offsets instead of returning unknown_topic_or_partition. It could 
> mimic the Offsets API by adding the "time" field and then follow the same 
> code path on the server as the Offset API. 
> In this case, the response would need a boolean to indicate whether the 
> returned offset was initialized or fetched from ZK.
> This would simplify the client logic when dealing with new topics.
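A rough sketch of the shapes this proposal implies may help; everything below (class names, fields, and the fetchOffsets logic) is illustrative only, not the actual Kafka wire protocol or server code:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical model of the proposal: OffsetFetchRequest gains a "time"
// field (as in the Offsets API) and each returned offset carries a flag
// saying whether it was freshly initialized or fetched from ZK.
class OffsetFetchSketch {
    static final long UNKNOWN = -1L;

    static class OffsetResult {
        final long offset;
        final boolean initialized; // true if initialized rather than read from ZK
        OffsetResult(long offset, boolean initialized) {
            this.offset = offset;
            this.initialized = initialized;
        }
    }

    // Toy broker-side handling: known partitions come from the offset store
    // (ZK in 0.8); unknown partitions are initialized via the Offsets-API
    // code path when a time is supplied.
    static Map<String, OffsetResult> fetchOffsets(List<String> partitions,
                                                  Long time, // null = old behavior
                                                  Map<String, Long> zkOffsets,
                                                  long offsetForTime) {
        Map<String, OffsetResult> out = new HashMap<>();
        for (String tp : partitions) {
            Long known = zkOffsets.get(tp);
            if (known != null) {
                out.put(tp, new OffsetResult(known, false));
            } else {
                long off = (time != null) ? offsetForTime : UNKNOWN;
                out.put(tp, new OffsetResult(off, true));
            }
        }
        return out;
    }
}
```

With this shape, a client fetching offsets for a new topic gets back a usable starting offset plus the initialized flag, instead of having to handle unknown_topic_or_partition itself.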



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-853) Allow OffsetFetchRequest to initialize offsets

2015-01-20 Thread Mayuresh Gharat (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-853?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14285162#comment-14285162
 ] 

Mayuresh Gharat commented on KAFKA-853:
---

Hi Balaji Seshadri,

I am trying to understand what you mean by "It could mimic the Offsets API 
by adding the "time" field and then follow the same code path on the server as 
the Offset API." 

Can you give a brief workflow?






[jira] [Assigned] (KAFKA-853) Allow OffsetFetchRequest to initialize offsets

2015-01-20 Thread Balaji Seshadri (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-853?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Balaji Seshadri reassigned KAFKA-853:
-

Assignee: Balaji Seshadri






[jira] [Commented] (KAFKA-1809) Refactor brokers to allow listening on multiple ports and IPs

2015-01-20 Thread Gwen Shapira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14285116#comment-14285116
 ] 

Gwen Shapira commented on KAFKA-1809:
-

Pinging [~junrao] for review :)

I believe the recent patch fixes the concerns raised in previous reviews.

> Refactor brokers to allow listening on multiple ports and IPs 
> --
>
> Key: KAFKA-1809
> URL: https://issues.apache.org/jira/browse/KAFKA-1809
> Project: Kafka
>  Issue Type: Sub-task
>  Components: security
>Reporter: Gwen Shapira
>Assignee: Gwen Shapira
> Attachments: KAFKA-1809.patch, KAFKA-1809_2014-12-25_11:04:17.patch, 
> KAFKA-1809_2014-12-27_12:03:35.patch, KAFKA-1809_2014-12-27_12:22:56.patch, 
> KAFKA-1809_2014-12-29_12:11:36.patch, KAFKA-1809_2014-12-30_11:25:29.patch, 
> KAFKA-1809_2014-12-30_18:48:46.patch, KAFKA-1809_2015-01-05_14:23:57.patch, 
> KAFKA-1809_2015-01-05_20:25:15.patch, KAFKA-1809_2015-01-05_21:40:14.patch, 
> KAFKA-1809_2015-01-06_11:46:22.patch, KAFKA-1809_2015-01-13_18:16:21.patch
>
>
> The goal is to eventually support different security mechanisms on different 
> ports. 
> Currently brokers are defined as host+port pair, and this definition exists 
> throughout the code-base, therefore some refactoring is needed to support 
> multiple ports for a single broker.
> The detailed design is here: 
> https://cwiki.apache.org/confluence/display/KAFKA/Multiple+Listeners+for+Kafka+Brokers
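As a rough illustration of the endpoint refactor (the exact config format and class names come from the design wiki and the patch; these are simplified stand-ins): a broker becomes a set of endpoints keyed by security protocol instead of a single host+port pair.

```java
import java.util.HashMap;
import java.util.Map;

// Simplified sketch of the multi-listener idea. The "PROTOCOL://host:port"
// list format follows the design wiki; this class is an illustrative
// stand-in, not the exact one in the patch.
class EndPoint {
    final String protocol;
    final String host;
    final int port;

    EndPoint(String protocol, String host, int port) {
        this.protocol = protocol;
        this.host = host;
        this.port = port;
    }

    // Parse e.g. "PLAINTEXT://broker1:9092,SSL://broker1:9093" into a map
    // of security protocol -> endpoint.
    static Map<String, EndPoint> parseListeners(String listeners) {
        Map<String, EndPoint> endpoints = new HashMap<>();
        for (String entry : listeners.split(",")) {
            String s = entry.trim();
            int sep = s.indexOf("://");
            String protocol = s.substring(0, sep);
            String hostPort = s.substring(sep + 3);
            int colon = hostPort.lastIndexOf(':');
            endpoints.put(protocol, new EndPoint(
                    protocol,
                    hostPort.substring(0, colon),
                    Integer.parseInt(hostPort.substring(colon + 1))));
        }
        return endpoints;
    }
}
```

The refactoring burden in the patch comes from replacing every place that assumes "one host+port per broker" with a lookup into a map like this one.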





[jira] [Commented] (KAFKA-1856) Add PreCommit Patch Testing

2015-01-20 Thread Gwen Shapira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1856?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14285105#comment-14285105
 ] 

Gwen Shapira commented on KAFKA-1856:
-

Non-binding +1 from me :)
Looks good and I'm excited about getting automated results on the JIRA.

[~joestein]:
If this also looks good to you, perhaps you can commit and create a jenkins job 
as described here:  http://wiki.apache.org/general/PreCommitBuilds ?

Once we ask Apache Infra to automatically run this when patches become 
available, test results will be very visible on every JIRA. We should probably 
warn the community before doing that.



> Add PreCommit Patch Testing
> ---
>
> Key: KAFKA-1856
> URL: https://issues.apache.org/jira/browse/KAFKA-1856
> Project: Kafka
>  Issue Type: Task
>Reporter: Ashish Kumar Singh
>Assignee: Ashish Kumar Singh
> Attachments: KAFKA-1856.patch, KAFKA-1856_2015-01-18_21:43:56.patch
>
>
> h1. Kafka PreCommit Patch Testing - *Don't wait for it to break*
> h2. Motivation
> *With great power comes great responsibility* - Uncle Ben. As the Kafka user 
> list is growing, a mechanism to ensure the quality of the product is 
> required. Quality becomes hard to measure and maintain in an open source 
> project with a wide community of contributors. Luckily, Kafka is not the 
> first open source project and can benefit from the learnings of prior 
> projects.
> PreCommit tests are tests that are run for each patch that gets attached to 
> an open JIRA. Based on the test results, the test execution framework (a 
> test bot) +1s or -1s the patch. Having PreCommit tests takes the load off 
> committers to look at or test each patch.
> h2. Tests in Kafka
> h3. Unit and Integration Tests
> [Unit and Integration 
> tests|https://cwiki.apache.org/confluence/display/KAFKA/Kafka+0.9+Unit+and+Integration+Tests]
>  are cardinal to helping contributors avoid breaking existing functionality 
> while adding new functionality or fixing older ones. These tests, at least 
> the ones relevant to the changes, must be run by contributors before 
> attaching a patch to a JIRA.
> h3. System Tests
> [System 
> tests|https://cwiki.apache.org/confluence/display/KAFKA/Kafka+System+Tests] 
> are much wider tests that, unlike unit tests, focus on end-to-end scenarios 
> and not some specific method or class.
> h2. Apache PreCommit tests
> Apache provides a mechanism to automatically build a project and run a series 
> of tests whenever a patch is uploaded to a JIRA. Based on test execution, the 
> test framework will comment with a +1 or -1 on the JIRA.
> You can read more about the framework here:
> http://wiki.apache.org/general/PreCommitBuilds
> h2. Plan
> # Create a test-patch.py script (similar to the one used in Flume, Sqoop and 
> other projects) that will take a JIRA as a parameter, apply the patch on the 
> appropriate branch, build the project, run tests, and report results. This 
> script should be committed into the Kafka code-base. To begin with, this will 
> only run unit tests. We can add code sanity checks, system_tests, etc in the 
> future.
> # Create a jenkins job for running the test (as described in 
> http://wiki.apache.org/general/PreCommitBuilds) and validate that it works 
> manually. This must be done by a committer with Jenkins access.
> # Ask someone with access to https://builds.apache.org/job/PreCommit-Admin/ 
> to add Kafka to the list of projects PreCommit-Admin triggers.
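Step 1 essentially boils down to an ordered command plan like the sketch below. The commands and flags are illustrative guesses only; the real logic would live in the proposed test-patch.py:

```java
import java.util.List;

// Rough sketch of the commands a test-patch script might run for an
// attached patch. Every command here is illustrative, not a committed
// design.
class PreCommitPlan {
    static List<String> steps(String patchFile) {
        return List.of(
                "git checkout trunk",              // start from the target branch
                "git apply --check " + patchFile,  // verify the patch applies cleanly
                "git apply " + patchFile,
                "./gradlew build",                 // compile
                "./gradlew test"                   // unit tests; more checks later
        );
    }
}
```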





Re: [KIP-DISCUSSION] Mirror Maker Enhancement

2015-01-20 Thread Gwen Shapira
Thanks for the detailed document, Jiangjie. Super helpful.

Few questions:

1. You mention that "A ConsumerRebalanceListener class is created and
could be wired into ZookeeperConsumerConnector to avoid duplicate
messages when consumer rebalance occurs in mirror maker."

Is this something the user needs to do or configure? Or will the wiring
of the rebalance listener into the ZookeeperConsumerConnector be part of
the enhancement?
In other words, will we need to do anything extra to avoid duplicates
during rebalance in MirrorMaker?

2. "The only source of truth for offsets in consume-then-send pattern
is end user." - I assume you don't mean an actual person, right? So
what does "end user" refer to? Can you clarify when will the offset
commit thread commit offsets? And which JIRA implements this?

3. Maintaining message order - In which JIRA do we implement this part?

Again, thanks a lot for documenting this and even more for the
implementation - it is super important for many use cases.

Gwen



On Tue, Jan 20, 2015 at 4:31 PM, Jiangjie Qin  wrote:
> Hi Kafka Devs,
>
> We are working on Kafka Mirror Maker enhancement. A KIP is posted to document 
> and discuss on the followings:
> 1. KAFKA-1650: No Data loss mirror maker change
> 2. KAFKA-1839: To allow partition aware mirror.
> 3. KAFKA-1840: To allow message filtering/format conversion
> Feedback is welcome. Please let us know if you have any questions or 
> concerns.
>
> Thanks.
>
> Jiangjie (Becket) Qin


[jira] [Resolved] (KAFKA-1674) auto.create.topics.enable docs are misleading

2015-01-20 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1674?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao resolved KAFKA-1674.

Resolution: Duplicate

This is fixed in KAFKA-1728.

> auto.create.topics.enable docs are misleading
> -
>
> Key: KAFKA-1674
> URL: https://issues.apache.org/jira/browse/KAFKA-1674
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.1.1
>Reporter: Stevo Slavic
>Priority: Minor
>  Labels: newbie
> Fix For: 0.8.2
>
>
> {{auto.create.topics.enable}} is currently 
> [documented|http://kafka.apache.org/08/configuration.html] with
> {quote}
> Enable auto creation of topic on the server. If this is set to true then 
> attempts to produce, consume, or fetch metadata for a non-existent topic will 
> automatically create it with the default replication factor and number of 
> partitions.
> {quote}
> In Kafka 0.8.1.1, in reality, topics are only created when trying to publish 
> a message to a non-existent topic.
> After 
> [discussion|http://mail-archives.apache.org/mod_mbox/kafka-users/201410.mbox/%3CCAFbh0Q1WXLUDO-im1fQ1yEvrMduxmXbj5HXVc3Cq8B%3DfeMso9g%40mail.gmail.com%3E]
>  with [~junrao] conclusion was that it's documentation issue which needs to 
> be fixed.
> Before fixing the docs, please check once more whether this is simply 
> non-working functionality.





[jira] [Commented] (KAFKA-1728) update 082 docs

2015-01-20 Thread Jun Rao (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1728?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14285029#comment-14285029
 ] 

Jun Rao commented on KAFKA-1728:


Thanks for the patch. +1 and committed to site.

1. Yes, could you create a new patch to add them?

> update 082 docs
> ---
>
> Key: KAFKA-1728
> URL: https://issues.apache.org/jira/browse/KAFKA-1728
> Project: Kafka
>  Issue Type: Task
>Affects Versions: 0.8.2
>Reporter: Jun Rao
>Priority: Blocker
> Fix For: 0.8.2
>
> Attachments: default-config-value-0.8.2.patch
>
>
> We need to update the docs for 082 release.
> https://svn.apache.org/repos/asf/kafka/site/082
> http://kafka.apache.org/082/documentation.html





[jira] [Commented] (KAFKA-1830) Closing socket connection to /10.118.192.104. (kafka.network.Processor)

2015-01-20 Thread Mingjie Lai (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1830?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14284975#comment-14284975
 ] 

Mingjie Lai commented on KAFKA-1830:


[~guozhang] Can you tell me which JIRA or git commit fixed the issue? Can you 
help to port it to 0.8.2 branch?

> Closing socket connection to /10.118.192.104. (kafka.network.Processor)
> ---
>
> Key: KAFKA-1830
> URL: https://issues.apache.org/jira/browse/KAFKA-1830
> Project: Kafka
>  Issue Type: Test
>  Components: log
>Affects Versions: 0.8.1
> Environment: Linux OS, 5 node CDH5.12 cluster, Scala 2.10.4
>Reporter: Tapas Swain
>Priority: Critical
>
> I was testing Spark-Kafka integration. I created one producer that pushes 
> data to a Kafka topic. One consumer reads that data, processes it, and 
> publishes the results to another Kafka topic. Suddenly the following log is 
> seen in the console.
> [2014-12-26 15:20:04,643] INFO Closing socket connection to /10.118.192.104. 
> (kafka.network.Processor)
> [2014-12-26 15:20:04,848] INFO Closing socket connection to /10.118.192.104. 
> (kafka.network.Processor)
> [2014-12-26 15:20:05,053] INFO Closing socket connection to /10.118.192.104. 
> (kafka.network.Processor)
> [... the same INFO line repeats roughly every 200 ms for the rest of the 
> log ...]

New consumer plans

2015-01-20 Thread Jay Kreps
There is a draft patch for the new consumer up on KAFKA-1760:
  https://issues.apache.org/jira/browse/KAFKA-1760

I chatted with Guozhang earlier today and here was our thought on how to
proceed:
1. There are changes to NetworkClient  and Sender that I'll describe below.
These should be closely reviewed as (a) NetworkClient is an important
interface and we should want to get it right, and (b) these changes may
break the new producer if there is any problem with them.
2. The rest of the consumer we will do a couple of rounds of high-level review
on, but probably not as deep. We will check it in and then proceed to add
more system and integration tests of consumer functionality.
3. In parallel a few of the LI folks will take up the consumer co-ordinator
server-side implementation.

So right now what would be helpful would be for people to take a look at
the networkclient and sender changes. There are some annoying javadoc
auto-formatting changes which I'll try to get out of there, so ignore those
for now.

Let me try to motivate the new NetworkClient changes so people can
understand them:
1. Added a method to check the number of in-flight requests per node; it
matches the existing in-flight method but is scoped to a single node.
2. Added a completeAll() and completeAll(node) method that blocks until all
requests (or all requests for a given node) have completed. This is added
to help implement blocking requests in the co-ordinator. There are
corresponding methods in the selector to allow muting individual
connections so that you no longer select on them.
3. Separated poll into a poll method and a send method. Previously to
initiate a new request you had to also poll, which returned responses. This
was great if you were ready to process responses, but actually these two
things are somewhat separate. Now you always initiate requests with send
and actual I/O is always done by poll(). This makes it possible to initiate
non-blocking requests without needing to process responses.
4. Added a new RequestCompletionHandler callback interface. This can
optionally be provided when you initiate a request and will be invoked on
the response when the request is complete. The rationale for this is to
make it easier to implement asynchronous processing when it is possible for
requests to be initiated from many places in the code. This makes it a lot
easier to ensure the response is always handled and also to define the
request and response in the same place.
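To make the control flow concrete, here is a toy sketch of the send()/poll() split and the completion-handler idea. The "I/O" is simulated and the method bodies are made up; only the names and the flow mirror the changes described above:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;
import java.util.function.Consumer;

// Toy client: send() only queues a request; poll() performs all the "I/O"
// and fires completion callbacks. Not the real NetworkClient.
class ToyClient {
    private static class Pending {
        final String request;
        final Consumer<String> handler; // may be null
        Pending(String request, Consumer<String> handler) {
            this.request = request;
            this.handler = handler;
        }
    }

    private final Deque<Pending> pending = new ArrayDeque<>();

    // Initiate a request without doing any I/O or processing responses.
    void send(String request, Consumer<String> completionHandler) {
        pending.add(new Pending(request, completionHandler));
    }

    int inFlightRequestCount() {
        return pending.size();
    }

    // All actual "I/O" happens here: complete pending requests, fire callbacks.
    List<String> poll() {
        List<String> responses = new ArrayList<>();
        while (!pending.isEmpty()) {
            Pending p = pending.poll();
            String response = "response-to-" + p.request;
            if (p.handler != null) p.handler.accept(response);
            responses.add(response);
        }
        return responses;
    }

    // Loop until nothing is in flight, as a blocking completeAll() would.
    void completeAll() {
        while (inFlightRequestCount() > 0) poll();
    }
}
```

The point of the callback is visible here: whoever calls send() can attach its own response handling at the call site, even though the actual completion happens later inside poll().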

Cheers,

-Jay


[jira] [Commented] (KAFKA-1886) SimpleConsumer swallowing ClosedByInterruptException

2015-01-20 Thread Aditya A Auradkar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1886?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14284963#comment-14284963
 ] 

Aditya A Auradkar commented on KAFKA-1886:
--

If interested, I hacked an existing test for this.

def testConsumerEmptyTopic() {
  val newTopic = "new-topic"
  TestUtils.createTopic(zkClient, newTopic, numPartitions = 1,
    replicationFactor = 1, servers = servers)
  val thread = new Thread {
    override def run {
      System.out.println("Starting the fetch")
      val start = System.currentTimeMillis()
      try {
        val fetchResponse = consumer.fetch(new FetchRequestBuilder()
          .minBytes(10).maxWait(3000).addFetch(newTopic, 0, 0, 1).build())
      } catch {
        case e: Throwable =>
          val end = System.currentTimeMillis()
          System.out.println("Caught exception " + e + ". Took " + (end - start))
          System.out.println("Fetch interrupted " +
            Thread.currentThread().isInterrupted)
      }
    }
  }

  thread.start()
  Thread.sleep(1000)
  thread.interrupt()
  thread.join()
  System.out.println("Ending test")
}

> SimpleConsumer swallowing ClosedByInterruptException
> 
>
> Key: KAFKA-1886
> URL: https://issues.apache.org/jira/browse/KAFKA-1886
> Project: Kafka
>  Issue Type: Bug
>  Components: producer 
>Reporter: Aditya A Auradkar
>Assignee: Jun Rao
>
> This issue was originally reported by a Samza developer. I've included an 
> exchange of mine with Chris Riccomini. I'm trying to reproduce the problem on 
> my dev setup.
> From: criccomi
> Hey all,
> Samza's BrokerProxy [1] threads appear to be wedging randomly when we try to 
> interrupt its fetcher thread. I noticed that SimpleConsumer.scala catches 
> Throwable in its sendRequest method [2]. I'm wondering: if 
> blockingChannel.send/receive throws a ClosedByInterruptException
> when the thread is interrupted, what happens? It looks like sendRequest will 
> catch the exception (which I
> think clears the thread's interrupted flag), and then retries the send. If 
> the send succeeds on the retry, I think that the ClosedByInterruptException 
> exception is effectively swallowed, and the BrokerProxy will continue
> fetching messages as though its thread was never interrupted.
> Am I misunderstanding how things work?
> Cheers,
> Chris
> [1] 
> https://github.com/apache/incubator-samza/blob/master/samza-kafka/src/main/scala/org/apache/samza/system/kafka/BrokerProxy.scala#L126
> [2] 
> https://github.com/apache/kafka/blob/0.8.1/core/src/main/scala/kafka/consumer/SimpleConsumer.scala#L75





[jira] [Commented] (KAFKA-1886) SimpleConsumer swallowing ClosedByInterruptException

2015-01-20 Thread Aditya A Auradkar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1886?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14284961#comment-14284961
 ] 

Aditya A Auradkar commented on KAFKA-1886:
--

Did a good bit of poking around. Basically, I wrote a test that runs a 
SimpleConsumer inside a thread and interrupts that thread from the main thread. 
This forces a ClosedByInterruptException that we catch in the 
SimpleConsumer.sendRequest method. Catching this exception does not reset the 
interrupt status of the thread. The exception returned to the caller is a 
ClosedChannelException, and the original exception is swallowed.

I can't spot any bug in Kafka here. I can suggest a couple of improvements:
- Don't retry inside SimpleConsumer if we catch a ClosedByInterruptException. 
Seems like extra work for nothing.
- Inspect code to check if we are catching InterruptedException somewhere. 
Based on a cursory inspection, I couldn't find anything.
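For reference, here is a small generic-JVM sketch (not Kafka code) of the flag behavior involved: InterruptedException is thrown with the interrupt flag already cleared, so a broad catch that swallows it loses the interrupt unless the flag is explicitly restored. ClosedByInterruptException, by contrast, is thrown with the flag still set, which matches what I observed above.

```java
// Generic JVM demo: an interrupt delivered to an interruptible blocking
// call is lost by a swallowing catch block unless the flag is restored.
class InterruptDemo {
    // Returns true if the interrupt survived the catch block.
    static boolean interruptSurvives(boolean restoreFlag) {
        Thread.currentThread().interrupt();         // pretend someone interrupted us
        try {
            Thread.sleep(10);                       // interruptible: throws immediately
        } catch (InterruptedException e) {
            // The flag is now cleared; a catch-all that just retries here
            // would swallow the interrupt completely.
            if (restoreFlag) {
                Thread.currentThread().interrupt(); // the conventional fix
            }
        }
        return Thread.interrupted();                // read and clear the flag
    }
}
```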






[KIP-DISCUSSION] Mirror Maker Enhancement

2015-01-20 Thread Jiangjie Qin
Hi Kafka Devs,

We are working on Kafka Mirror Maker enhancement. A KIP is posted to document 
and discuss on the followings:
1. KAFKA-1650: No Data loss mirror maker change
2. KAFKA-1839: To allow partition aware mirror.
3. KAFKA-1840: To allow message filtering/format conversion
Feedback is welcome. Please let us know if you have any questions or concerns.

Thanks.

Jiangjie (Becket) Qin


[jira] [Created] (KAFKA-1888) Add a "rolling upgrade" system test

2015-01-20 Thread Gwen Shapira (JIRA)
Gwen Shapira created KAFKA-1888:
---

 Summary: Add a "rolling upgrade" system test
 Key: KAFKA-1888
 URL: https://issues.apache.org/jira/browse/KAFKA-1888
 Project: Kafka
  Issue Type: Improvement
  Components: system tests
Reporter: Gwen Shapira
Assignee: Gwen Shapira
 Fix For: 0.9.0


To help test upgrades and compatibility between versions, it will be cool to 
add a rolling-upgrade test to system tests:
Given two versions (just a path to the jars?), check that you can do a
rolling upgrade of the brokers from one version to another (using clients from 
the old version) without losing data.
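As a sketch, the test could drive a plan like the following; step names are illustrative, not the system-test DSL:

```java
import java.util.ArrayList;
import java.util.List;

// Pure plan for a rolling upgrade: restart brokers one at a time on the
// new version while old-version clients keep running, then verify nothing
// was lost.
class RollingUpgradePlan {
    static List<String> steps(int brokerCount, String newVersion) {
        List<String> plan = new ArrayList<>();
        for (int id = 0; id < brokerCount; id++) {
            plan.add("stop broker " + id);
            plan.add("install " + newVersion + " on broker " + id);
            plan.add("start broker " + id);
            plan.add("wait for broker " + id + " to rejoin the ISR");
        }
        plan.add("verify consumed messages match produced messages");
        return plan;
    }
}
```

The "wait to rejoin the ISR" step is the important one: restarting the next broker before replicas catch up is exactly how a rolling upgrade loses data.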





Build failed in Jenkins: Kafka-trunk #375

2015-01-20 Thread Apache Jenkins Server
See 

Changes:

[jjkoshy] KAFKA-1823; Fix transient failure in PartitionAssignorTest; reviewed 
by Guozhang Wang and Neha Narkhede

--
[...truncated 496 lines...]
kafka.log.LogTest > testAsyncDelete PASSED

kafka.log.LogTest > testOpenDeletesObsoleteFiles PASSED

kafka.log.LogTest > testAppendMessageWithNullPayload PASSED

kafka.log.LogTest > testCorruptLog PASSED

kafka.log.LogTest > testCleanShutdownFile PASSED

kafka.log.LogTest > testParseTopicPartitionName PASSED

kafka.log.LogTest > testParseTopicPartitionNameForEmptyName PASSED

kafka.log.LogTest > testParseTopicPartitionNameForNull PASSED

kafka.log.LogTest > testParseTopicPartitionNameForMissingSeparator PASSED

kafka.log.LogTest > testParseTopicPartitionNameForMissingTopic PASSED

kafka.log.LogTest > testParseTopicPartitionNameForMissingPartition PASSED

kafka.log.LogSegmentTest > testReadOnEmptySegment PASSED

kafka.log.LogSegmentTest > testReadBeforeFirstOffset PASSED

kafka.log.LogSegmentTest > testMaxOffset PASSED

kafka.log.LogSegmentTest > testReadAfterLast PASSED

kafka.log.LogSegmentTest > testReadFromGap PASSED

kafka.log.LogSegmentTest > testTruncate PASSED

kafka.log.LogSegmentTest > testTruncateFull PASSED

kafka.log.LogSegmentTest > testNextOffsetCalculation PASSED

kafka.log.LogSegmentTest > testChangeFileSuffixes PASSED

kafka.log.LogSegmentTest > testRecoveryFixesCorruptIndex PASSED

kafka.log.LogSegmentTest > testRecoveryWithCorruptMessage PASSED

kafka.log.LogConfigTest > testFromPropsDefaults PASSED

kafka.log.LogConfigTest > testFromPropsEmpty PASSED

kafka.log.LogConfigTest > testFromPropsToProps PASSED

kafka.log.LogConfigTest > testFromPropsInvalid PASSED

kafka.log.CleanerTest > testCleanSegments PASSED

kafka.log.CleanerTest > testCleaningWithDeletes PASSED

kafka.log.CleanerTest > testCleanSegmentsWithAbort PASSED

kafka.log.CleanerTest > testSegmentGrouping PASSED

kafka.log.CleanerTest > testBuildOffsetMap PASSED

kafka.log.FileMessageSetTest > testWrittenEqualsRead PASSED

kafka.log.FileMessageSetTest > testIteratorIsConsistent PASSED

kafka.log.FileMessageSetTest > testSizeInBytes PASSED

kafka.log.FileMessageSetTest > testWriteTo PASSED

kafka.log.FileMessageSetTest > testTruncate PASSED

kafka.log.FileMessageSetTest > testFileSize PASSED

kafka.log.FileMessageSetTest > testIterationOverPartialAndTruncation PASSED

kafka.log.FileMessageSetTest > testIterationDoesntChangePosition PASSED

kafka.log.FileMessageSetTest > testRead PASSED

kafka.log.FileMessageSetTest > testSearch PASSED

kafka.log.FileMessageSetTest > testIteratorWithLimits PASSED

kafka.log.OffsetMapTest > testBasicValidation PASSED

kafka.log.OffsetMapTest > testClear PASSED

kafka.log.OffsetIndexTest > truncate PASSED

kafka.log.OffsetIndexTest > randomLookupTest PASSED

kafka.log.OffsetIndexTest > lookupExtremeCases PASSED

kafka.log.OffsetIndexTest > appendTooMany PASSED

kafka.log.OffsetIndexTest > appendOutOfOrder PASSED

kafka.log.OffsetIndexTest > testReopen PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[0] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[1] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[2] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[3] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[4] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[5] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[6] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[7] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[8] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[9] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[10] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[11] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[12] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[13] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[14] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[15] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[16] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[17] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[18] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[19] PASSED

kafka.log.LogManagerTest > testCreateLog PASSED

kafka.log.LogManagerTest > testGetNonExistentLog PASSED

kafka.log.LogManagerTest > testCleanupExpiredSegments PASSED

kafka.log.LogManagerTest > testCleanupSegmentsToMaintainSize PASSED

kafka.log.LogManagerTest > testTimeBasedFlush PASSED

kafka.log.LogManagerTest > testLeastLoadedAssignment PASSED

kafka.log.LogManagerTest > testTwoLogManagersUsingSameDirFails PASSED

kafka.log.LogManagerTest > testCheckpointRecoveryPoints PASSED

kafka.

[jira] [Commented] (KAFKA-1887) controller error message on shutting the last broker

2015-01-20 Thread Jun Rao (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1887?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14284502#comment-14284502
 ] 

Jun Rao commented on KAFKA-1887:


When shutting down the broker, we unregister the broker from ZK first, followed 
by shutting down the controller. It seems that we should be shutting down the 
controller first.

> controller error message on shutting the last broker
> 
>
> Key: KAFKA-1887
> URL: https://issues.apache.org/jira/browse/KAFKA-1887
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Reporter: Jun Rao
>Priority: Minor
> Fix For: 0.8.3
>
>
> We always see the following error in state-change log on shutting down the 
> last broker.
> [2015-01-20 13:21:04,036] ERROR Controller 0 epoch 3 initiated state change 
> for partition [test,0] from OfflinePartition to OnlinePartition failed 
> (state.change.logger)
> kafka.common.NoReplicaOnlineException: No replica for partition [test,0] is 
> alive. Live brokers are: [Set()], Assigned replicas are: [List(0)]
> at 
> kafka.controller.OfflinePartitionLeaderSelector.selectLeader(PartitionLeaderSelector.scala:75)
> at 
> kafka.controller.PartitionStateMachine.electLeaderForPartition(PartitionStateMachine.scala:357)
> at 
> kafka.controller.PartitionStateMachine.kafka$controller$PartitionStateMachine$$handleStateChange(PartitionStateMachine.scala:206)
> at 
> kafka.controller.PartitionStateMachine$$anonfun$triggerOnlinePartitionStateChange$3.apply(PartitionStateMachine.scala:120)
> at 
> kafka.controller.PartitionStateMachine$$anonfun$triggerOnlinePartitionStateChange$3.apply(PartitionStateMachine.scala:117)
> at 
> scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:772)
> at 
> scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:98)
> at 
> scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:98)
> at 
> scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:226)
> at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:39)
> at scala.collection.mutable.HashMap.foreach(HashMap.scala:98)
> at 
> scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:771)
> at 
> kafka.controller.PartitionStateMachine.triggerOnlinePartitionStateChange(PartitionStateMachine.scala:117)
> at 
> kafka.controller.KafkaController.onBrokerFailure(KafkaController.scala:446)
> at 
> kafka.controller.ReplicaStateMachine$BrokerChangeListener$$anonfun$handleChildChange$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(ReplicaStateMachine.scala:373)
> at 
> kafka.controller.ReplicaStateMachine$BrokerChangeListener$$anonfun$handleChildChange$1$$anonfun$apply$mcV$sp$1.apply(ReplicaStateMachine.scala:359)
> at 
> kafka.controller.ReplicaStateMachine$BrokerChangeListener$$anonfun$handleChildChange$1$$anonfun$apply$mcV$sp$1.apply(ReplicaStateMachine.scala:359)
> at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:33)
> at 
> kafka.controller.ReplicaStateMachine$BrokerChangeListener$$anonfun$handleChildChange$1.apply$mcV$sp(ReplicaStateMachine.scala:358)
> at 
> kafka.controller.ReplicaStateMachine$BrokerChangeListener$$anonfun$handleChildChange$1.apply(ReplicaStateMachine.scala:357)
> at 
> kafka.controller.ReplicaStateMachine$BrokerChangeListener$$anonfun$handleChildChange$1.apply(ReplicaStateMachine.scala:357)
> at kafka.utils.Utils$.inLock(Utils.scala:535)
> at 
> kafka.controller.ReplicaStateMachine$BrokerChangeListener.handleChildChange(ReplicaStateMachine.scala:356)
> at org.I0Itec.zkclient.ZkClient$7.run(ZkClient.java:568)
> at org.I0Itec.zkclient.ZkEventThread.run(ZkEventThread.java:71)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-1887) controller error message on shutting the last broker

2015-01-20 Thread Jun Rao (JIRA)
Jun Rao created KAFKA-1887:
--

 Summary: controller error message on shutting the last broker
 Key: KAFKA-1887
 URL: https://issues.apache.org/jira/browse/KAFKA-1887
 Project: Kafka
  Issue Type: Bug
  Components: core
Reporter: Jun Rao
Priority: Minor
 Fix For: 0.8.3


We always see the following error in state-change log on shutting down the last 
broker.

[2015-01-20 13:21:04,036] ERROR Controller 0 epoch 3 initiated state change for 
partition [test,0] from OfflinePartition to OnlinePartition failed 
(state.change.logger)
kafka.common.NoReplicaOnlineException: No replica for partition [test,0] is 
alive. Live brokers are: [Set()], Assigned replicas are: [List(0)]
at 
kafka.controller.OfflinePartitionLeaderSelector.selectLeader(PartitionLeaderSelector.scala:75)
at 
kafka.controller.PartitionStateMachine.electLeaderForPartition(PartitionStateMachine.scala:357)
at 
kafka.controller.PartitionStateMachine.kafka$controller$PartitionStateMachine$$handleStateChange(PartitionStateMachine.scala:206)
at 
kafka.controller.PartitionStateMachine$$anonfun$triggerOnlinePartitionStateChange$3.apply(PartitionStateMachine.scala:120)
at 
kafka.controller.PartitionStateMachine$$anonfun$triggerOnlinePartitionStateChange$3.apply(PartitionStateMachine.scala:117)
at 
scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:772)
at 
scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:98)
at 
scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:98)
at 
scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:226)
at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:39)
at scala.collection.mutable.HashMap.foreach(HashMap.scala:98)
at 
scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:771)
at 
kafka.controller.PartitionStateMachine.triggerOnlinePartitionStateChange(PartitionStateMachine.scala:117)
at 
kafka.controller.KafkaController.onBrokerFailure(KafkaController.scala:446)
at 
kafka.controller.ReplicaStateMachine$BrokerChangeListener$$anonfun$handleChildChange$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(ReplicaStateMachine.scala:373)
at 
kafka.controller.ReplicaStateMachine$BrokerChangeListener$$anonfun$handleChildChange$1$$anonfun$apply$mcV$sp$1.apply(ReplicaStateMachine.scala:359)
at 
kafka.controller.ReplicaStateMachine$BrokerChangeListener$$anonfun$handleChildChange$1$$anonfun$apply$mcV$sp$1.apply(ReplicaStateMachine.scala:359)
at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:33)
at 
kafka.controller.ReplicaStateMachine$BrokerChangeListener$$anonfun$handleChildChange$1.apply$mcV$sp(ReplicaStateMachine.scala:358)
at 
kafka.controller.ReplicaStateMachine$BrokerChangeListener$$anonfun$handleChildChange$1.apply(ReplicaStateMachine.scala:357)
at 
kafka.controller.ReplicaStateMachine$BrokerChangeListener$$anonfun$handleChildChange$1.apply(ReplicaStateMachine.scala:357)
at kafka.utils.Utils$.inLock(Utils.scala:535)
at 
kafka.controller.ReplicaStateMachine$BrokerChangeListener.handleChildChange(ReplicaStateMachine.scala:356)
at org.I0Itec.zkclient.ZkClient$7.run(ZkClient.java:568)
at org.I0Itec.zkclient.ZkEventThread.run(ZkEventThread.java:71)






[jira] [Commented] (KAFKA-1839) Adding an afterRebalance callback to the ConsumerRebalanceListener of ZookeeperConsumerConnector

2015-01-20 Thread Jiangjie Qin (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1839?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14284449#comment-14284449
 ] 

Jiangjie Qin commented on KAFKA-1839:
-

Updated reviewboard https://reviews.apache.org/r/30083/diff/
 against branch origin/trunk

> Adding an afterRebalance callback to the ConsumerRebalanceListener of 
> ZookeeperConsumerConnector
> 
>
> Key: KAFKA-1839
> URL: https://issues.apache.org/jira/browse/KAFKA-1839
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Jiangjie Qin
>Assignee: Jiangjie Qin
> Attachments: KAFKA-1839_2015-01-20_11:49:12.patch, 
> KAFKA-1839_2015-01-20_13:28:25.patch
>
>
> Currently there is only a beforeReleasePartition callback in the consumer 
> rebalance listener of ZookeeperConsumerConnector. It would be useful to have 
> an afterRebalance callback with the new partition assignment passed in. There 
> are use cases where knowing about topic partition changes is useful (e.g. 
> mirror maker can create the same number of partitions for a new topic in the 
> target cluster). The new consumer will have such a callback but probably 
> won't be production ready very soon. Since adding this callback is not much 
> work, we can do it as a short-term solution.
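The callback shape being proposed can be sketched as follows; the interface and method names here are illustrative assumptions, not the committed Kafka API:

```java
import java.util.List;
import java.util.Map;

// Illustrative sketch of the rebalance listener; names and signatures are
// hypothetical, not the actual Kafka ConsumerRebalanceListener API.
interface ConsumerRebalanceListenerSketch {
    // Existing-style hook: invoked before the consumer releases its partitions.
    void beforeReleasingPartitions(Map<String, List<Integer>> ownedPartitions);

    // Proposed hook: invoked after a rebalance with the new assignment, so a
    // client (e.g. mirror maker) can react to topic/partition changes.
    default void afterRebalance(Map<String, List<Integer>> newAssignment) { }
}
```

A mirror maker could override afterRebalance to create a matching number of partitions in the target cluster whenever a new topic shows up in the assignment.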





[jira] [Updated] (KAFKA-1839) Adding an afterRebalance callback to the ConsumerRebalanceListener of ZookeeperConsumerConnector

2015-01-20 Thread Jiangjie Qin (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1839?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiangjie Qin updated KAFKA-1839:

Attachment: KAFKA-1839_2015-01-20_13:28:25.patch

> Adding an afterRebalance callback to the ConsumerRebalanceListener of 
> ZookeeperConsumerConnector
> 
>
> Key: KAFKA-1839
> URL: https://issues.apache.org/jira/browse/KAFKA-1839
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Jiangjie Qin
>Assignee: Jiangjie Qin
> Attachments: KAFKA-1839_2015-01-20_11:49:12.patch, 
> KAFKA-1839_2015-01-20_13:28:25.patch
>
>
> Currently there is only a beforeReleasePartition callback in the consumer 
> rebalance listener of ZookeeperConsumerConnector. It would be useful to have 
> an afterRebalance callback with the new partition assignment passed in. There 
> are use cases where knowing about topic partition changes is useful (e.g. 
> mirror maker can create the same number of partitions for a new topic in the 
> target cluster). The new consumer will have such a callback but probably 
> won't be production ready very soon. Since adding this callback is not much 
> work, we can do it as a short-term solution.





Re: Review Request 30083: Patch for KAFKA-1839

2015-01-20 Thread Jiangjie Qin

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/30083/
---

(Updated Jan. 20, 2015, 9:28 p.m.)


Review request for kafka.


Bugs: KAFKA-1839
https://issues.apache.org/jira/browse/KAFKA-1839


Repository: kafka


Description (updated)
---

Modified unit test.


Minor code change


Diffs (updated)
-

  core/src/main/scala/kafka/consumer/PartitionAssignor.scala 
e6ff7683a0df4a7d221e949767e57c34703d5aad 
  core/src/main/scala/kafka/consumer/ZookeeperConsumerConnector.scala 
191a8677444e53b043e9ad6e94c5a9191c32599e 
  core/src/main/scala/kafka/javaapi/consumer/ConsumerRebalanceListener.java 
facf509841918bdcf92a271d29f6c825ff99345a 
  core/src/main/scala/kafka/tools/MirrorMaker.scala 
5cbc8103e33a0a234d158c048e5314e841da6249 
  core/src/test/scala/unit/kafka/consumer/PartitionAssignorTest.scala 
24954de66ccc5158696166b7e2aabad0f1b1f287 
  core/src/test/scala/unit/kafka/consumer/ZookeeperConsumerConnectorTest.scala 
a17e8532c44aadf84b8da3a57bcc797a848b5020 

Diff: https://reviews.apache.org/r/30083/diff/


Testing
---


Thanks,

Jiangjie Qin



[jira] [Commented] (KAFKA-1823) transient unit test failure in PartitionAssignorTest

2015-01-20 Thread Joel Koshy (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1823?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14284424#comment-14284424
 ] 

Joel Koshy commented on KAFKA-1823:
---

Committed to trunk. Thanks for the review.

> transient unit test failure in PartitionAssignorTest
> 
>
> Key: KAFKA-1823
> URL: https://issues.apache.org/jira/browse/KAFKA-1823
> Project: Kafka
>  Issue Type: Improvement
>  Components: core
>Affects Versions: 0.8.3
>Reporter: Jun Rao
>Assignee: Joel Koshy
> Attachments: KAFKA-1823.patch
>
>
> Saw the following transient unit test failure.
> unit.kafka.consumer.PartitionAssignorTest > testRoundRobinPartitionAssignor 
> FAILED
> java.lang.UnsupportedOperationException: empty.max
> at 
> scala.collection.TraversableOnce$class.max(TraversableOnce.scala:216)
> at scala.collection.AbstractIterator.max(Iterator.scala:1157)
> at 
> unit.kafka.consumer.PartitionAssignorTest$.unit$kafka$consumer$PartitionAssignorTest$$assignAndVerify(PartitionAssignorTest.scala:190)
> at 
> unit.kafka.consumer.PartitionAssignorTest$$anonfun$testRoundRobinPartitionAssignor$1.apply$mcVI$sp(PartitionAssignorTest.scala:54)
> at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:141)
> at 
> unit.kafka.consumer.PartitionAssignorTest.testRoundRobinPartitionAssignor(PartitionAssignorTest.scala:39)
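The root cause is calling max on an empty collection. A minimal Java illustration of the failure mode and the usual guard (the actual fix is in the Scala test code; MaxGuard and its method names are hypothetical):

```java
import java.util.Collections;
import java.util.List;

class MaxGuard {
    // Throws on an empty list, mirroring Scala's empty.max
    // (java.util.NoSuchElementException here vs UnsupportedOperationException in Scala).
    static int unsafeMax(List<Integer> xs) {
        return xs.stream().mapToInt(Integer::intValue).max().getAsInt();
    }

    // Guarded version: fall back to a default when the collection is empty.
    static int safeMax(List<Integer> xs, int ifEmpty) {
        return xs.stream().mapToInt(Integer::intValue).max().orElse(ifEmpty);
    }
}
```

In the test, the equivalent guard is to skip (or default) the max computation when no consumer received any partitions.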





[jira] [Updated] (KAFKA-1823) transient unit test failure in PartitionAssignorTest

2015-01-20 Thread Joel Koshy (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1823?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Koshy updated KAFKA-1823:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> transient unit test failure in PartitionAssignorTest
> 
>
> Key: KAFKA-1823
> URL: https://issues.apache.org/jira/browse/KAFKA-1823
> Project: Kafka
>  Issue Type: Improvement
>  Components: core
>Affects Versions: 0.8.3
>Reporter: Jun Rao
>Assignee: Joel Koshy
> Attachments: KAFKA-1823.patch
>
>
> Saw the following transient unit test failure.
> unit.kafka.consumer.PartitionAssignorTest > testRoundRobinPartitionAssignor 
> FAILED
> java.lang.UnsupportedOperationException: empty.max
> at 
> scala.collection.TraversableOnce$class.max(TraversableOnce.scala:216)
> at scala.collection.AbstractIterator.max(Iterator.scala:1157)
> at 
> unit.kafka.consumer.PartitionAssignorTest$.unit$kafka$consumer$PartitionAssignorTest$$assignAndVerify(PartitionAssignorTest.scala:190)
> at 
> unit.kafka.consumer.PartitionAssignorTest$$anonfun$testRoundRobinPartitionAssignor$1.apply$mcVI$sp(PartitionAssignorTest.scala:54)
> at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:141)
> at 
> unit.kafka.consumer.PartitionAssignorTest.testRoundRobinPartitionAssignor(PartitionAssignorTest.scala:39)





[jira] [Created] (KAFKA-1886) SimpleConsumer swallowing ClosedByInterruptException

2015-01-20 Thread Aditya A Auradkar (JIRA)
Aditya A Auradkar created KAFKA-1886:


 Summary: SimpleConsumer swallowing ClosedByInterruptException
 Key: KAFKA-1886
 URL: https://issues.apache.org/jira/browse/KAFKA-1886
 Project: Kafka
  Issue Type: Bug
  Components: producer 
Reporter: Aditya A Auradkar
Assignee: Jun Rao


This issue was originally reported by a Samza developer. I've included an 
exchange of mine with Chris Riccomini. I'm trying to reproduce the problem on 
my dev setup.

From: criccomi
Hey all,
Samza's BrokerProxy [1] threads appear to be wedging randomly when we try to 
interrupt its fetcher thread. I noticed that SimpleConsumer.scala catches 
Throwable in its sendRequest method [2]. I'm wondering: if 
blockingChannel.send/receive throws a ClosedByInterruptException when the 
thread is interrupted, what happens? It looks like sendRequest will catch the 
exception (which I think clears the thread's interrupted flag) and then retry 
the send. If the retry succeeds, I think the ClosedByInterruptException is 
effectively swallowed, and the BrokerProxy will continue fetching messages as 
though its thread was never interrupted.
Am I misunderstanding how things work?
Cheers,
Chris
[1] 
https://github.com/apache/incubator-samza/blob/master/samza-kafka/src/main/scala/org/apache/samza/system/kafka/BrokerProxy.scala#L126
[2] 
https://github.com/apache/kafka/blob/0.8.1/core/src/main/scala/kafka/consumer/SimpleConsumer.scala#L75
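A minimal Java sketch of the failure mode Chris describes, and the usual remedy: if a broad catch swallows an interrupt-driven exception and retries, the caller never observes the interrupt; catching it specifically and re-asserting the thread's interrupt status preserves it. Class and method names are illustrative, not SimpleConsumer's actual code.

```java
import java.nio.channels.ClosedByInterruptException;

class SendRetry {
    interface Channel { String send(String req) throws Exception; }

    // Problematic pattern: catch Throwable and retry once. An interrupt
    // surfacing as ClosedByInterruptException is swallowed if the retry succeeds.
    static String sendSwallowing(Channel ch, String req) throws Exception {
        try {
            return ch.send(req);
        } catch (Throwable t) {
            return ch.send(req); // retry; the interrupt is lost
        }
    }

    // Remedy: detect the interrupt-driven failure, re-assert the thread's
    // interrupt status, and propagate instead of retrying.
    static String sendPreservingInterrupt(Channel ch, String req) throws Exception {
        try {
            return ch.send(req);
        } catch (ClosedByInterruptException e) {
            Thread.currentThread().interrupt();
            throw e;
        } catch (Throwable t) {
            return ch.send(req);
        }
    }
}
```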





[jira] [Commented] (KAFKA-1596) Exception in KafkaScheduler while shutting down

2015-01-20 Thread Sriharsha Chintalapani (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1596?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14284409#comment-14284409
 ] 

Sriharsha Chintalapani commented on KAFKA-1596:
---

[~nehanarkhede] [~guozhang] It looks like KAFKA-1724 is a duplicate of this. I 
have details on how it happens, and steps to reproduce it, on that JIRA. This 
is easy to reproduce in a single-host env. We have two instances of 
KafkaScheduler being initiated: one in KafkaBroker and another in 
KafkaController.autoRebalanceScheduler. KafkaBroker.kafkaScheduler starts up 
normally, but if the broker is the controller and onControllerResignation is 
triggered by a zookeeper watcher before startup completes, it calls 
KafkaScheduler.shutdown() on a scheduler that has not been started yet, which 
throws the exception above. More details are on the review board here: 
https://reviews.apache.org/r/28027/ . Please check my comments there.
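The race described above reduces to shutdown() being invoked on a scheduler that was never started. A minimal sketch of treating that case as a harmless no-op; GuardedScheduler is a hypothetical simplification, not the actual KafkaScheduler fix:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;

class GuardedScheduler {
    private ScheduledExecutorService executor; // null until startup()

    synchronized void startup() {
        executor = Executors.newSingleThreadScheduledExecutor();
    }

    // Throwing when shutdown() is called before startup() is what produces
    // the exception in the report; a no-op is safe because there is nothing
    // to stop yet.
    synchronized boolean shutdown() {
        if (executor == null) return false; // never started: nothing to do
        executor.shutdownNow();
        executor = null;
        return true;
    }
}
```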

> Exception in KafkaScheduler while shutting down
> ---
>
> Key: KAFKA-1596
> URL: https://issues.apache.org/jira/browse/KAFKA-1596
> Project: Kafka
>  Issue Type: Bug
>Reporter: Joel Koshy
>  Labels: newbie
> Attachments: kafka-1596.patch
>
>
> Saw this while trying to reproduce KAFKA-1577. It is very minor and won't 
> happen in practice but annoying nonetheless.
> {code}
> [2014-08-14 18:03:56,686] INFO zookeeper state changed (SyncConnected) 
> (org.I0Itec.zkclient.ZkClient)
> [2014-08-14 18:03:56,776] INFO Loading logs. (kafka.log.LogManager)
> [2014-08-14 18:03:56,783] INFO Logs loading complete. (kafka.log.LogManager)
> [2014-08-14 18:03:57,120] INFO Starting log cleanup with a period of 30 
> ms. (kafka.log.LogManager)
> [2014-08-14 18:03:57,124] INFO Starting log flusher with a default period of 
> 9223372036854775807 ms. (kafka.log.LogManager)
> [2014-08-14 18:03:57,158] INFO Awaiting socket connections on 0.0.0.0:9092. 
> (kafka.network.Acceptor)
> [2014-08-14 18:03:57,160] INFO [Socket Server on Broker 0], Started 
> (kafka.network.SocketServer)
> ^C[2014-08-14 18:03:57,203] INFO [Kafka Server 0], shutting down 
> (kafka.server.KafkaServer)
> [2014-08-14 18:03:57,211] INFO [Socket Server on Broker 0], Shutting down 
> (kafka.network.SocketServer)
> [2014-08-14 18:03:57,222] INFO [Socket Server on Broker 0], Shutdown 
> completed (kafka.network.SocketServer)
> [2014-08-14 18:03:57,226] INFO [Replica Manager on Broker 0]: Shut down 
> (kafka.server.ReplicaManager)
> [2014-08-14 18:03:57,228] INFO [ReplicaFetcherManager on broker 0] shutting 
> down (kafka.server.ReplicaFetcherManager)
> [2014-08-14 18:03:57,233] INFO [ReplicaFetcherManager on broker 0] shutdown 
> completed (kafka.server.ReplicaFetcherManager)
> [2014-08-14 18:03:57,274] INFO [Replica Manager on Broker 0]: Shut down 
> completely (kafka.server.ReplicaManager)
> [2014-08-14 18:03:57,276] INFO Shutting down. (kafka.log.LogManager)
> [2014-08-14 18:03:57,296] INFO Will not load MX4J, mx4j-tools.jar is not in 
> the classpath (kafka.utils.Mx4jLoader$)
> [2014-08-14 18:03:57,297] INFO Shutdown complete. (kafka.log.LogManager)
> [2014-08-14 18:03:57,301] FATAL Fatal error during KafkaServerStable startup. 
> Prepare to shutdown (kafka.server.KafkaServerStartable)
> java.lang.IllegalStateException: Kafka scheduler has not been started
> at kafka.utils.KafkaScheduler.ensureStarted(KafkaScheduler.scala:114)
> at kafka.utils.KafkaScheduler.schedule(KafkaScheduler.scala:95)
> at kafka.server.ReplicaManager.startup(ReplicaManager.scala:138)
> at kafka.server.KafkaServer.startup(KafkaServer.scala:112)
> at 
> kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:28)
> at kafka.Kafka$.main(Kafka.scala:46)
> at kafka.Kafka.main(Kafka.scala)
> [2014-08-14 18:03:57,324] INFO [Kafka Server 0], shutting down 
> (kafka.server.KafkaServer)
> [2014-08-14 18:03:57,326] INFO Terminate ZkClient event thread. 
> (org.I0Itec.zkclient.ZkEventThread)
> [2014-08-14 18:03:57,329] INFO Session: 0x147d5b0a51a closed 
> (org.apache.zookeeper.ZooKeeper)
> [2014-08-14 18:03:57,329] INFO EventThread shut down 
> (org.apache.zookeeper.ClientCnxn)
> [2014-08-14 18:03:57,329] INFO [Kafka Server 0], shut down completed 
> (kafka.server.KafkaServer)
> {code}





[jira] [Resolved] (KAFKA-1294) Enable leadership balancing by default

2015-01-20 Thread Sriharsha Chintalapani (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1294?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sriharsha Chintalapani resolved KAFKA-1294.
---
   Resolution: Fixed
Fix Version/s: 0.8.2

This is fixed in KAFKA-1305.

> Enable leadership balancing by default
> --
>
> Key: KAFKA-1294
> URL: https://issues.apache.org/jira/browse/KAFKA-1294
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Jay Kreps
> Fix For: 0.8.2
>
>
> We should change 
>   auto.leader.rebalance.enable=true
> The current behavior, where traffic gets all lopsided any time nodes are 
> restarted, is very unintuitive.
> It sounds like the only reason we haven't done this is because we aren't sure 
> it works. We should just try it in an integration env and the system tests 
> and then make it the default.





[jira] [Commented] (KAFKA-1866) LogStartOffset gauge throws exceptions after log.delete()

2015-01-20 Thread Sriharsha Chintalapani (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14284313#comment-14284313
 ] 

Sriharsha Chintalapani commented on KAFKA-1866:
---

The above patch was tested by running a metrics reporter and deleting a topic.

> LogStartOffset gauge throws exceptions after log.delete()
> -
>
> Key: KAFKA-1866
> URL: https://issues.apache.org/jira/browse/KAFKA-1866
> Project: Kafka
>  Issue Type: Bug
>Reporter: Gian Merlino
>Assignee: Sriharsha Chintalapani
> Attachments: KAFKA-1866.patch
>
>
> The LogStartOffset gauge does "logSegments.head.baseOffset", which throws 
> NoSuchElementException on an empty list, which can occur after a delete() of 
> the log. This makes life harder for custom MetricsReporters, since they have 
> to deal with .value() possibly throwing an exception.
> Locally we're dealing with this by having Log.delete() also call removeMetric 
> on all the gauges. That also has the benefit of not having a bunch of metrics 
> floating around for logs that the broker is not actually handling.
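The two gauge behaviors can be sketched minimally in Java (class and method names are hypothetical; Kafka's actual gauge lives in the Scala Log class):

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.function.Supplier;

class LogStartOffsetGauge {
    // Base offsets of the log's segments, in order; empty after delete().
    private final Deque<Long> segmentBaseOffsets = new ArrayDeque<>();

    // Fragile: mirrors logSegments.head.baseOffset, which throws
    // NoSuchElementException when the log has no segments.
    Supplier<Long> fragileGauge() {
        return segmentBaseOffsets::getFirst;
    }

    // Safe: fall back to a sentinel when there are no segments, so a
    // metrics reporter calling value() never sees an exception.
    Supplier<Long> safeGauge() {
        return () -> segmentBaseOffsets.isEmpty() ? -1L : segmentBaseOffsets.getFirst();
    }

    void addSegment(long baseOffset) { segmentBaseOffsets.addLast(baseOffset); }
    void delete() { segmentBaseOffsets.clear(); }
}
```

Removing the gauge from the metrics registry on delete(), as described above, avoids the problem entirely and also drops metrics for logs the broker no longer hosts.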





[jira] [Commented] (KAFKA-1866) LogStartOffset gauge throws exceptions after log.delete()

2015-01-20 Thread Sriharsha Chintalapani (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14284310#comment-14284310
 ] 

Sriharsha Chintalapani commented on KAFKA-1866:
---

Created reviewboard https://reviews.apache.org/r/30084/diff/
 against branch origin/trunk

> LogStartOffset gauge throws exceptions after log.delete()
> -
>
> Key: KAFKA-1866
> URL: https://issues.apache.org/jira/browse/KAFKA-1866
> Project: Kafka
>  Issue Type: Bug
>Reporter: Gian Merlino
>Assignee: Sriharsha Chintalapani
> Attachments: KAFKA-1866.patch
>
>
> The LogStartOffset gauge does "logSegments.head.baseOffset", which throws 
> NoSuchElementException on an empty list, which can occur after a delete() of 
> the log. This makes life harder for custom MetricsReporters, since they have 
> to deal with .value() possibly throwing an exception.
> Locally we're dealing with this by having Log.delete() also call removeMetric 
> on all the gauges. That also has the benefit of not having a bunch of metrics 
> floating around for logs that the broker is not actually handling.





[jira] [Updated] (KAFKA-1866) LogStartOffset gauge throws exceptions after log.delete()

2015-01-20 Thread Sriharsha Chintalapani (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1866?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sriharsha Chintalapani updated KAFKA-1866:
--
Status: Patch Available  (was: Open)

> LogStartOffset gauge throws exceptions after log.delete()
> -
>
> Key: KAFKA-1866
> URL: https://issues.apache.org/jira/browse/KAFKA-1866
> Project: Kafka
>  Issue Type: Bug
>Reporter: Gian Merlino
>Assignee: Sriharsha Chintalapani
> Attachments: KAFKA-1866.patch
>
>
> The LogStartOffset gauge does "logSegments.head.baseOffset", which throws 
> NoSuchElementException on an empty list, which can occur after a delete() of 
> the log. This makes life harder for custom MetricsReporters, since they have 
> to deal with .value() possibly throwing an exception.
> Locally we're dealing with this by having Log.delete() also call removeMetric 
> on all the gauges. That also has the benefit of not having a bunch of metrics 
> floating around for logs that the broker is not actually handling.





[jira] [Updated] (KAFKA-1866) LogStartOffset gauge throws exceptions after log.delete()

2015-01-20 Thread Sriharsha Chintalapani (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1866?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sriharsha Chintalapani updated KAFKA-1866:
--
Attachment: KAFKA-1866.patch

> LogStartOffset gauge throws exceptions after log.delete()
> -
>
> Key: KAFKA-1866
> URL: https://issues.apache.org/jira/browse/KAFKA-1866
> Project: Kafka
>  Issue Type: Bug
>Reporter: Gian Merlino
>Assignee: Sriharsha Chintalapani
> Attachments: KAFKA-1866.patch
>
>
> The LogStartOffset gauge does "logSegments.head.baseOffset", which throws 
> NoSuchElementException on an empty list, which can occur after a delete() of 
> the log. This makes life harder for custom MetricsReporters, since they have 
> to deal with .value() possibly throwing an exception.
> Locally we're dealing with this by having Log.delete() also call removeMetric 
> on all the gauges. That also has the benefit of not having a bunch of metrics 
> floating around for logs that the broker is not actually handling.





Review Request 30084: Patch for KAFKA-1866

2015-01-20 Thread Sriharsha Chintalapani

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/30084/
---

Review request for kafka.


Bugs: KAFKA-1866
https://issues.apache.org/jira/browse/KAFKA-1866


Repository: kafka


Description
---

KAFKA-1866. LogStartOffset gauge throws exceptions after log.delete().


Diffs
-

  core/src/main/scala/kafka/log/Log.scala 
846023bb98d0fa0603016466360c97071ac935ea 

Diff: https://reviews.apache.org/r/30084/diff/


Testing
---


Thanks,

Sriharsha Chintalapani



Re: Review Request 30083: Patch for KAFKA-1839

2015-01-20 Thread Jiangjie Qin

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/30083/
---

(Updated Jan. 20, 2015, 7:49 p.m.)


Review request for kafka.


Bugs: KAFKA-1839
https://issues.apache.org/jira/browse/KAFKA-1839


Repository: kafka


Description (updated)
---

Modified unit test.


Diffs (updated)
-

  core/src/main/scala/kafka/consumer/PartitionAssignor.scala 
e6ff7683a0df4a7d221e949767e57c34703d5aad 
  core/src/main/scala/kafka/consumer/ZookeeperConsumerConnector.scala 
191a8677444e53b043e9ad6e94c5a9191c32599e 
  core/src/main/scala/kafka/javaapi/consumer/ConsumerRebalanceListener.java 
facf509841918bdcf92a271d29f6c825ff99345a 
  core/src/main/scala/kafka/tools/MirrorMaker.scala 
5cbc8103e33a0a234d158c048e5314e841da6249 
  core/src/test/scala/unit/kafka/consumer/PartitionAssignorTest.scala 
24954de66ccc5158696166b7e2aabad0f1b1f287 
  core/src/test/scala/unit/kafka/consumer/ZookeeperConsumerConnectorTest.scala 
a17e8532c44aadf84b8da3a57bcc797a848b5020 

Diff: https://reviews.apache.org/r/30083/diff/


Testing
---


Thanks,

Jiangjie Qin



[jira] [Commented] (KAFKA-1839) Adding an afterRebalance callback to the ConsumerRebalanceListener of ZookeeperConsumerConnector

2015-01-20 Thread Jiangjie Qin (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1839?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14284298#comment-14284298
 ] 

Jiangjie Qin commented on KAFKA-1839:
-

Updated reviewboard https://reviews.apache.org/r/30083/diff/
 against branch origin/trunk

> Adding an afterRebalance callback to the ConsumerRebalanceListener of 
> ZookeeperConsumerConnector
> 
>
> Key: KAFKA-1839
> URL: https://issues.apache.org/jira/browse/KAFKA-1839
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Jiangjie Qin
> Attachments: KAFKA-1839_2015-01-20_11:49:12.patch
>
>
> Currently there is only a beforeReleasePartition callback in the consumer 
> rebalance listener of ZookeeperConsumerConnector. It would be useful to have 
> an afterRebalance callback with the new partition assignment passed in. There 
> are use cases where knowing about topic partition changes is useful (e.g. 
> mirror maker can create the same number of partitions for a new topic in the 
> target cluster). The new consumer will have such a callback but probably 
> won't be production ready very soon. Since adding this callback is not much 
> work, we can do it as a short-term solution.





[jira] [Updated] (KAFKA-1839) Adding an afterRebalance callback to the ConsumerRebalanceListener of ZookeeperConsumerConnector

2015-01-20 Thread Jiangjie Qin (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1839?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiangjie Qin updated KAFKA-1839:

Assignee: Jiangjie Qin
  Status: Patch Available  (was: Open)

> Adding an afterRebalance callback to the ConsumerRebalanceListener of 
> ZookeeperConsumerConnector
> 
>
> Key: KAFKA-1839
> URL: https://issues.apache.org/jira/browse/KAFKA-1839
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Jiangjie Qin
>Assignee: Jiangjie Qin
> Attachments: KAFKA-1839_2015-01-20_11:49:12.patch
>
>
> Currently there is only a beforeReleasePartition callback in the consumer 
> rebalance listener of ZookeeperConsumerConnector. It would be useful to have 
> an afterRebalance callback with the new partition assignment passed in. There 
> are use cases where knowing about topic partition changes is useful (e.g. 
> mirror maker can create the same number of partitions for a new topic in the 
> target cluster). The new consumer will have such a callback but probably 
> won't be production ready very soon. Since adding this callback is not much 
> work, we can do it as a short-term solution.





[jira] [Updated] (KAFKA-1839) Adding an afterRebalance callback to the ConsumerRebalanceListener of ZookeeperConsumerConnector

2015-01-20 Thread Jiangjie Qin (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1839?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiangjie Qin updated KAFKA-1839:

Attachment: KAFKA-1839_2015-01-20_11:49:12.patch

> Adding an afterRebalance callback to the ConsumerRebalanceListener of 
> ZookeeperConsumerConnector
> 
>
> Key: KAFKA-1839
> URL: https://issues.apache.org/jira/browse/KAFKA-1839
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Jiangjie Qin
> Attachments: KAFKA-1839_2015-01-20_11:49:12.patch
>
>
> Currently there is only a beforeReleasePartition callback in the consumer 
> rebalance listener of ZookeeperConsumerConnector. It would be useful to have 
> an afterRebalance callback with the new partition assignment passed in. There 
> are use cases where knowing about topic partition changes is useful (e.g. 
> mirror maker can create the same number of partitions for a new topic in the 
> target cluster). The new consumer will have such a callback but probably 
> won't be production ready very soon. Since adding this callback is not much 
> work, we can do it as a short-term solution.





[jira] [Updated] (KAFKA-1840) Add a simple message handler in Mirror Maker

2015-01-20 Thread Jiangjie Qin (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1840?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiangjie Qin updated KAFKA-1840:

Attachment: KAFKA-1840_2015-01-20_11:36:14.patch

> Add a simple message handler in Mirror Maker
> 
>
> Key: KAFKA-1840
> URL: https://issues.apache.org/jira/browse/KAFKA-1840
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Jiangjie Qin
>Assignee: Jiangjie Qin
> Attachments: KAFKA-1840.patch, KAFKA-1840_2015-01-20_11:36:14.patch
>
>
> Currently mirror maker simply mirrors all the messages it consumes from the 
> source cluster to the target cluster. It would be useful to allow users to do 
> some simple processing, such as filtering/reformatting, in mirror maker. We 
> can allow users to wire in a message handler to handle messages. The default 
> handler could just do nothing.
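As a rough illustration, such a pluggable handler could be shaped like the sketch below. The interface and class names are hypothetical; the actual interface wired into MirrorMaker.scala may differ:

```java
import java.util.Collections;
import java.util.List;
import java.util.function.Predicate;

// Hypothetical handler interface: for each consumed message, return the
// messages to produce to the target cluster; an empty list drops the message.
interface MirrorMakerMessageHandler {
    List<byte[]> handle(byte[] message);
}

// Default handler: no processing, every message passes through unchanged.
class PassThroughHandler implements MirrorMakerMessageHandler {
    @Override
    public List<byte[]> handle(byte[] message) {
        return Collections.singletonList(message);
    }
}

// Example of simple filtering: keep only messages matching a predicate.
class FilteringHandler implements MirrorMakerMessageHandler {
    private final Predicate<byte[]> accept;

    FilteringHandler(Predicate<byte[]> accept) {
        this.accept = accept;
    }

    @Override
    public List<byte[]> handle(byte[] message) {
        return accept.test(message)
                ? Collections.singletonList(message)
                : Collections.emptyList();
    }
}
```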





[jira] [Commented] (KAFKA-1840) Add a simple message handler in Mirror Maker

2015-01-20 Thread Jiangjie Qin (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1840?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14284284#comment-14284284
 ] 

Jiangjie Qin commented on KAFKA-1840:
-

Updated reviewboard https://reviews.apache.org/r/30063/diff/
 against branch origin/trunk

> Add a simple message handler in Mirror Maker
> 
>
> Key: KAFKA-1840
> URL: https://issues.apache.org/jira/browse/KAFKA-1840
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Jiangjie Qin
>Assignee: Jiangjie Qin
> Attachments: KAFKA-1840.patch, KAFKA-1840_2015-01-20_11:36:14.patch
>
>
> Currently mirror maker simply mirrors all the messages it consumes from the 
> source cluster to the target cluster. It would be useful to allow users to do 
> some simple processing, such as filtering/reformatting, in mirror maker. We 
> can allow users to wire in a message handler to handle messages. The default 
> handler could just do nothing.





Re: Review Request 30063: Patch for KAFKA-1840

2015-01-20 Thread Jiangjie Qin

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/30063/
---

(Updated Jan. 20, 2015, 7:36 p.m.)


Review request for kafka.


Bugs: KAFKA-1840
https://issues.apache.org/jira/browse/KAFKA-1840


Repository: kafka


Description (updated)
---

Merge branch 'trunk' of http://git-wip-us.apache.org/repos/asf/kafka into 
KAFKA-1840


Addressed Joel's comments


Diffs (updated)
-

  core/src/main/scala/kafka/tools/MirrorMaker.scala 
5cbc8103e33a0a234d158c048e5314e841da6249 

Diff: https://reviews.apache.org/r/30063/diff/


Testing
---


Thanks,

Jiangjie Qin



Re: Review Request 30083: Patch for KAFKA-1839

2015-01-20 Thread Jiangjie Qin

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/30083/
---

(Updated Jan. 20, 2015, 6:54 p.m.)


Review request for kafka.


Bugs: KAFKA-1839
https://issues.apache.org/jira/browse/KAFKA-1839


Repository: kafka


Description (updated)
---

Added a beforeStartingFetchers hook to consumer rebalance listener.
Modified unit test.


Diffs
-

  core/src/main/scala/kafka/consumer/PartitionAssignor.scala 
e6ff7683a0df4a7d221e949767e57c34703d5aad 
  core/src/main/scala/kafka/consumer/ZookeeperConsumerConnector.scala 
191a8677444e53b043e9ad6e94c5a9191c32599e 
  core/src/main/scala/kafka/javaapi/consumer/ConsumerRebalanceListener.java 
facf509841918bdcf92a271d29f6c825ff99345a 
  core/src/main/scala/kafka/tools/MirrorMaker.scala 
5cbc8103e33a0a234d158c048e5314e841da6249 
  core/src/test/scala/unit/kafka/consumer/PartitionAssignorTest.scala 
24954de66ccc5158696166b7e2aabad0f1b1f287 
  core/src/test/scala/unit/kafka/consumer/ZookeeperConsumerConnectorTest.scala 
a17e8532c44aadf84b8da3a57bcc797a848b5020 

Diff: https://reviews.apache.org/r/30083/diff/


Testing
---


Thanks,

Jiangjie Qin



Review Request 30083: Patch for KAFKA-1839

2015-01-20 Thread Jiangjie Qin

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/30083/
---

Review request for kafka.


Bugs: KAFKA-1839
https://issues.apache.org/jira/browse/KAFKA-1839


Repository: kafka


Description
---

Modified unit test.


Diffs
-

  core/src/main/scala/kafka/consumer/PartitionAssignor.scala 
e6ff7683a0df4a7d221e949767e57c34703d5aad 
  core/src/main/scala/kafka/consumer/ZookeeperConsumerConnector.scala 
191a8677444e53b043e9ad6e94c5a9191c32599e 
  core/src/main/scala/kafka/javaapi/consumer/ConsumerRebalanceListener.java 
facf509841918bdcf92a271d29f6c825ff99345a 
  core/src/main/scala/kafka/tools/MirrorMaker.scala 
5cbc8103e33a0a234d158c048e5314e841da6249 
  core/src/test/scala/unit/kafka/consumer/PartitionAssignorTest.scala 
24954de66ccc5158696166b7e2aabad0f1b1f287 
  core/src/test/scala/unit/kafka/consumer/ZookeeperConsumerConnectorTest.scala 
a17e8532c44aadf84b8da3a57bcc797a848b5020 

Diff: https://reviews.apache.org/r/30083/diff/


Testing
---


Thanks,

Jiangjie Qin



Re: Review Request 30063: Patch for KAFKA-1840

2015-01-20 Thread Joel Koshy

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/30063/#review68750
---



core/src/main/scala/kafka/tools/MirrorMaker.scala


```
val customRebalanceListenerOpt = if (customRebalanceListenerClass != null)
  Some(Utils.createObject...)
else
  None
...
consumerRebalanceListener =
  new InternalRebalanceListener(mirrorDataChannel, customRebalanceListenerOpt)
```



core/src/main/scala/kafka/tools/MirrorMaker.scala


See comment above


- Joel Koshy


On Jan. 20, 2015, 4:26 a.m., Jiangjie Qin wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/30063/
> ---
> 
> (Updated Jan. 20, 2015, 4:26 a.m.)
> 
> 
> Review request for kafka.
> 
> 
> Bugs: KAFKA-1840
> https://issues.apache.org/jira/browse/KAFKA-1840
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
> Merge branch 'trunk' of http://git-wip-us.apache.org/repos/asf/kafka into 
> KAFKA-1840
> 
> 
> Diffs
> -
> 
>   core/src/main/scala/kafka/tools/MirrorMaker.scala 
> 5cbc8103e33a0a234d158c048e5314e841da6249 
> 
> Diff: https://reviews.apache.org/r/30063/diff/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Jiangjie Qin
> 
>



Re: Review Request 27799: Patch for KAFKA-1760

2015-01-20 Thread Guozhang Wang

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/27799/#review68699
---



clients/src/main/java/org/apache/kafka/clients/consumer/internals/Heartbeat.java


Magic number.



clients/src/main/java/org/apache/kafka/clients/consumer/internals/SubscriptionState.java


Should this be a private class?



clients/src/main/java/org/apache/kafka/clients/consumer/internals/SubscriptionState.java


Function names of partitionAutoAssigned and needsPartitionAssignment are a 
bit confusing. Probably rename partitionAutoAssigned to topicSubscribed?



clients/src/main/java/org/apache/kafka/clients/producer/internals/Partitioner.java


Needs to be updated.


- Guozhang Wang


On Jan. 19, 2015, 3:10 a.m., Jay Kreps wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/27799/
> ---
> 
> (Updated Jan. 19, 2015, 3:10 a.m.)
> 
> 
> Review request for kafka.
> 
> 
> Bugs: KAFKA-1760
> https://issues.apache.org/jira/browse/KAFKA-1760
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
> New consumer.
> 
> 
> Diffs
> -
> 
>   build.gradle c9ac43378c3bf5443f0f47c8ba76067237ecb348 
>   clients/src/main/java/org/apache/kafka/clients/ClientRequest.java 
> d32c319d8ee4c46dad309ea54b136ea9798e2fd7 
>   clients/src/main/java/org/apache/kafka/clients/ClusterConnectionStates.java 
> 8aece7e81a804b177a6f2c12e2dc6c89c1613262 
>   clients/src/main/java/org/apache/kafka/clients/CommonClientConfigs.java 
> PRE-CREATION 
>   clients/src/main/java/org/apache/kafka/clients/ConnectionState.java 
> ab7e3220f9b76b92ef981d695299656f041ad5ed 
>   clients/src/main/java/org/apache/kafka/clients/KafkaClient.java 
> 397695568d3fd8e835d8f923a89b3b00c96d0ead 
>   clients/src/main/java/org/apache/kafka/clients/NetworkClient.java 
> 6746275d0b2596cd6ff7ce464a3a8225ad75ef00 
>   
> clients/src/main/java/org/apache/kafka/clients/RequestCompletionHandler.java 
> PRE-CREATION 
>   clients/src/main/java/org/apache/kafka/clients/consumer/CommitType.java 
> PRE-CREATION 
>   clients/src/main/java/org/apache/kafka/clients/consumer/Consumer.java 
> c0c636b3e1ba213033db6d23655032c9bbd5e378 
>   clients/src/main/java/org/apache/kafka/clients/consumer/ConsumerConfig.java 
> 57c1807ccba9f264186f83e91f37c34b959c8060 
>   
> clients/src/main/java/org/apache/kafka/clients/consumer/ConsumerRebalanceCallback.java
>  e4cf7d1cfa01c2844b53213a7b539cdcbcbeaa3a 
>   clients/src/main/java/org/apache/kafka/clients/consumer/ConsumerRecord.java 
> 16af70a5de52cca786fdea147a6a639b7dc4a311 
>   
> clients/src/main/java/org/apache/kafka/clients/consumer/ConsumerRecords.java 
> bdf4b26942d5a8c8a9503e05908e9a9eff6228a7 
>   clients/src/main/java/org/apache/kafka/clients/consumer/KafkaConsumer.java 
> 76efc216c9e6c3ab084461d792877092a189ad0f 
>   clients/src/main/java/org/apache/kafka/clients/consumer/MockConsumer.java 
> fa88ac1a8b19b4294f211c4467fe68c7707ddbae 
>   
> clients/src/main/java/org/apache/kafka/clients/consumer/NoOffsetForPartitionException.java
>  PRE-CREATION 
>   clients/src/main/java/org/apache/kafka/clients/consumer/OffsetMetadata.java 
> ea423ad15eebd262d20d5ec05d592cc115229177 
>   
> clients/src/main/java/org/apache/kafka/clients/consumer/internals/Heartbeat.java
>  PRE-CREATION 
>   
> clients/src/main/java/org/apache/kafka/clients/consumer/internals/NoOpConsumerRebalanceCallback.java
>  PRE-CREATION 
>   
> clients/src/main/java/org/apache/kafka/clients/consumer/internals/SubscriptionState.java
>  PRE-CREATION 
>   clients/src/main/java/org/apache/kafka/clients/producer/KafkaProducer.java 
> fc71710dd5997576d3841a1c3b0f7e19a8c9698e 
>   clients/src/main/java/org/apache/kafka/clients/producer/MockProducer.java 
> 904976fadf0610982958628eaee810b60a98d725 
>   clients/src/main/java/org/apache/kafka/clients/producer/ProducerConfig.java 
> 8b3e565edd1ae04d8d34bd9f1a41e9fa8c880a75 
>   
> clients/src/main/java/org/apache/kafka/clients/producer/internals/Metadata.java
>  dcf46581b912cfb1b5c8d4cbc293d2d1444b7740 
>   
> clients/src/main/java/org/apache/kafka/clients/producer/internals/Partitioner.java
>  483899d2e69b33655d0e08949f5f64af2519660a 
>   
> clients/src/main/java/org/apache/kafka/clients/producer/internals/Sender.java 
> ccc03d8447ebba40131a70e16969686ac4aab58a 
>   clients/src/main/java/org/apache/kafka/common/Cluster.java 
> d3299b944062d96852452de455902659ad8af757 
>   clients/src/main/java/org/apache/kafka/common/PartitionInfo.java 
> b15aa2c3ef2d7c4b24618ff42fd4da324237a813 
>   clients/src/main/java/org/apache/ka

[jira] [Commented] (KAFKA-1853) Unsuccessful suffix rename of expired LogSegment can leak open files and also leave the LogSegment in an invalid state

2015-01-20 Thread jaikiran pai (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1853?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14283989#comment-14283989
 ] 

jaikiran pai commented on KAFKA-1853:
-

Hi [~jkreps], I've updated the patch to take into account your review comments. 
It took me a while to do this because I was planning to see if this could be 
done in a different manner, without using the File.rename() Java API (as it 
appears to be more of an implementation detail). But I haven't been able to 
think of any other approach for now, so I decided to go ahead with this change 
(pending approval, of course).
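For context, the failure mode being discussed is that File.renameTo reports failure by returning false rather than throwing, so a caller that ignores the return value can silently keep stale open handles. A minimal sketch of surfacing that failure (the helper name is illustrative, not the patched Kafka code):

```java
import java.io.File;
import java.io.IOException;
import java.io.UncheckedIOException;

// Hypothetical helper: turn the boolean result of File.renameTo into an
// exception so the caller can close underlying channels/indexes on failure
// instead of leaking them.
final class SegmentRename {
    private SegmentRename() {}

    static void renameOrThrow(File from, File to) {
        if (!from.renameTo(to)) {
            throw new UncheckedIOException(
                new IOException("Failed to rename " + from + " to " + to));
        }
    }
}
```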


> Unsuccessful suffix rename of expired LogSegment can leak open files and also 
> leave the LogSegment in an invalid state
> --
>
> Key: KAFKA-1853
> URL: https://issues.apache.org/jira/browse/KAFKA-1853
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.8.1.1
>Reporter: jaikiran pai
> Fix For: 0.8.3
>
> Attachments: KAFKA-1853_2015-01-20_22:04:29.patch
>
>
> As noted in this discussion in the user mailing list 
> http://mail-archives.apache.org/mod_mbox/kafka-users/201501.mbox/%3C54AE3661.8080007%40gmail.com%3E
>  an unsuccessful attempt at renaming the underlying files of a LogSegment can 
> lead to file leaks and also leave the LogSegment in an invalid state.





[jira] [Commented] (KAFKA-1853) Unsuccessful suffix rename of expired LogSegment can leak open files and also leave the LogSegment in an invalid state

2015-01-20 Thread jaikiran pai (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1853?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14283981#comment-14283981
 ] 

jaikiran pai commented on KAFKA-1853:
-

Updated reviewboard https://reviews.apache.org/r/29755/diff/
 against branch origin/trunk

> Unsuccessful suffix rename of expired LogSegment can leak open files and also 
> leave the LogSegment in an invalid state
> --
>
> Key: KAFKA-1853
> URL: https://issues.apache.org/jira/browse/KAFKA-1853
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.8.1.1
>Reporter: jaikiran pai
> Fix For: 0.8.3
>
> Attachments: KAFKA-1853_2015-01-20_22:04:29.patch
>
>
> As noted in this discussion in the user mailing list 
> http://mail-archives.apache.org/mod_mbox/kafka-users/201501.mbox/%3C54AE3661.8080007%40gmail.com%3E
>  an unsuccessful attempt at renaming the underlying files of a LogSegment can 
> lead to file leaks and also leave the LogSegment in an invalid state.





Re: Review Request 27391: Fix KAFKA-1634

2015-01-20 Thread Joel Koshy

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/27391/#review68729
---



core/src/main/scala/kafka/server/KafkaApis.scala


I just realized that if we have a v0 or v1 request then we use the offset 
manager's default retention, which is one day.

However, if it is v2 and the user does not override it in the offset commit 
request, then the retention defaults to Long.MaxValue. I think that default 
makes sense for OffsetCommitRequest. However, I think the broker needs to 
protect itself and have an upper threshold for retention, i.e., maybe we should 
have a maxRetentionMs config in the broker.

What do you think?
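The clamping suggested above could be sketched as follows. maxRetentionMs is the hypothetical config name proposed in this comment, and the method is illustrative, not the actual KafkaApis code:

```java
// Hypothetical sketch: pick the retention from the request (falling back to
// the offset-manager default for v0/v1 or when the v2 request leaves it
// unset), then cap it at a broker-side maxRetentionMs.
final class OffsetRetentionPolicy {
    private OffsetRetentionPolicy() {}

    static long effectiveRetentionMs(Long requestedMs, long defaultMs, long maxMs) {
        // null models "not set in the request" in this sketch.
        long requested = (requestedMs == null) ? defaultMs : requestedMs;
        // The broker protects itself with an upper bound.
        return Math.min(requested, maxMs);
    }
}
```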



core/src/main/scala/kafka/server/KafkaApis.scala


if it is _after_ v2



core/src/test/scala/unit/kafka/server/OffsetCommitTest.scala


This file needs to be rebased.


- Joel Koshy


On Jan. 14, 2015, 11:50 p.m., Guozhang Wang wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/27391/
> ---
> 
> (Updated Jan. 14, 2015, 11:50 p.m.)
> 
> 
> Review request for kafka.
> 
> 
> Bugs: KAFKA-1634
> https://issues.apache.org/jira/browse/KAFKA-1634
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
> Incorporated Joel and Jun's comments
> 
> 
> Diffs
> -
> 
>   clients/src/main/java/org/apache/kafka/common/protocol/Protocol.java 
> 7517b879866fc5dad5f8d8ad30636da8bbe7784a 
>   clients/src/main/java/org/apache/kafka/common/protocol/types/Struct.java 
> 121e880a941fcd3e6392859edba11a94236494cc 
>   
> clients/src/main/java/org/apache/kafka/common/requests/OffsetCommitRequest.java
>  3ee5cbad55ce836fd04bb954dcf6ef2f9bc3288f 
>   
> clients/src/test/java/org/apache/kafka/common/requests/RequestResponseTest.java
>  df37fc6d8f0db0b8192a948426af603be3444da4 
>   core/src/main/scala/kafka/api/OffsetCommitRequest.scala 
> 050615c72efe7dbaa4634f53943bd73273d20ffb 
>   core/src/main/scala/kafka/api/OffsetFetchRequest.scala 
> c7604b9cdeb8f46507f04ed7d35fcb3d5f150713 
>   core/src/main/scala/kafka/common/OffsetMetadataAndError.scala 
> 4cabffeacea09a49913505db19a96a55d58c0909 
>   core/src/main/scala/kafka/consumer/ZookeeperConsumerConnector.scala 
> 191a8677444e53b043e9ad6e94c5a9191c32599e 
>   core/src/main/scala/kafka/server/KafkaApis.scala 
> c011a1b79bd6c4e832fe7d097daacb0d647d1cd4 
>   core/src/main/scala/kafka/server/KafkaServer.scala 
> a069eb9272c92ef62387304b60de1fe473d7ff49 
>   core/src/main/scala/kafka/server/OffsetManager.scala 
> 3c79428962604800983415f6f705e04f52acb8fb 
>   core/src/main/scala/kafka/server/ReplicaManager.scala 
> e58fbb922e93b0c31dff04f187fcadb4ece986d7 
>   core/src/test/scala/unit/kafka/api/RequestResponseSerializationTest.scala 
> cd16ced5465d098be7a60498326b2a98c248f343 
>   core/src/test/scala/unit/kafka/server/OffsetCommitTest.scala 
> 4a3a5b264a021e55c39f4d7424ce04ee591503ef 
>   core/src/test/scala/unit/kafka/server/ServerShutdownTest.scala 
> ba1e48e4300c9fb32e36e7266cb05294f2a481e5 
> 
> Diff: https://reviews.apache.org/r/27391/diff/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Guozhang Wang
> 
>



[jira] [Updated] (KAFKA-1853) Unsuccessful suffix rename of expired LogSegment can leak open files and also leave the LogSegment in an invalid state

2015-01-20 Thread jaikiran pai (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1853?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jaikiran pai updated KAFKA-1853:

Attachment: KAFKA-1853_2015-01-20_22:04:29.patch

> Unsuccessful suffix rename of expired LogSegment can leak open files and also 
> leave the LogSegment in an invalid state
> --
>
> Key: KAFKA-1853
> URL: https://issues.apache.org/jira/browse/KAFKA-1853
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.8.1.1
>Reporter: jaikiran pai
> Fix For: 0.8.3
>
> Attachments: KAFKA-1853_2015-01-20_22:04:29.patch
>
>
> As noted in this discussion in the user mailing list 
> http://mail-archives.apache.org/mod_mbox/kafka-users/201501.mbox/%3C54AE3661.8080007%40gmail.com%3E
>  an unsuccessful attempt at renaming the underlying files of a LogSegment can 
> lead to file leaks and also leave the LogSegment in an invalid state.





Re: Review Request 29755: Patch for KAFKA-1853

2015-01-20 Thread Jaikiran Pai

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/29755/
---

(Updated Jan. 20, 2015, 4:35 p.m.)


Review request for kafka.


Bugs: KAFKA-1853
https://issues.apache.org/jira/browse/KAFKA-1853


Repository: kafka


Description (updated)
---

KAFKA-1853 Prevent leaking open file resources when renaming of the LogSegment 
fails


Diffs (updated)
-

  core/src/main/scala/kafka/log/FileMessageSet.scala 
b2652ddbe2f857028d5980e29a008b2c614694a3 
  core/src/main/scala/kafka/log/Log.scala 
846023bb98d0fa0603016466360c97071ac935ea 
  core/src/main/scala/kafka/log/OffsetIndex.scala 
1c4c7bd89e19ea942cf1d01eafe502129e97f535 

Diff: https://reviews.apache.org/r/29755/diff/


Testing
---

Have run the existing tests in LogManagerTest (which includes a test for 
cleaning of expired LogSegments) and those have passed with this change. I did 
give a thought of trying to replicate a failed rename scenario and then to 
ensure that we don't leak resources anymore, but that's not straightforward to 
do in the tests, so haven't added any new tests.


Thanks,

Jaikiran Pai



Re: Review Request 30078: Patch for KAFKA-1885

2015-01-20 Thread Jaikiran Pai

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/30078/
---

(Updated Jan. 20, 2015, 4:15 p.m.)


Review request for kafka.


Changes
---

An attempt at better formatting the test run details


Bugs: KAFKA-1885
https://issues.apache.org/jira/browse/KAFKA-1885


Repository: kafka


Description
---

KAFKA-1885 Upgrade junit dependency in core to 4.6 version to allow running 
individual test methods via gradle command line


Diffs
-

  build.gradle 1cbab29ce83e20dae0561b51eed6fdb86d522f28 

Diff: https://reviews.apache.org/r/30078/diff/


Testing (updated)
---

Tested that existing support to run an entire individual test case from the 
command line works as advertised:


```
./gradlew -Dtest.single=ProducerFailureHandlingTest core:test

kafka.api.test.ProducerFailureHandlingTest > testTooLargeRecordWithAckZero 
PASSED

kafka.api.test.ProducerFailureHandlingTest > testTooLargeRecordWithAckOne PASSED

kafka.api.test.ProducerFailureHandlingTest > testNonExistentTopic PASSED

kafka.api.test.ProducerFailureHandlingTest > testWrongBrokerList PASSED

kafka.api.test.ProducerFailureHandlingTest > testNoResponse PASSED

kafka.api.test.ProducerFailureHandlingTest > testInvalidPartition PASSED

kafka.api.test.ProducerFailureHandlingTest > testSendAfterClosed PASSED

kafka.api.test.ProducerFailureHandlingTest > testBrokerFailure PASSED

kafka.api.test.ProducerFailureHandlingTest > testCannotSendToInternalTopic 
PASSED

kafka.api.test.ProducerFailureHandlingTest > testNotEnoughReplicas PASSED

kafka.api.test.ProducerFailureHandlingTest > 
testNotEnoughReplicasAfterBrokerShutdown PASSED

BUILD SUCCESSFUL

```

Also tested that with this change it is now possible to run individual test 
methods as follows:


```
./gradlew clients:test --tests 
org.apache.kafka.clients.producer.MetadataTest.testMetadataUpdateWaitTime

org.apache.kafka.clients.producer.MetadataTest > testMetadataUpdateWaitTime 
PASSED
```


```
./gradlew core:test --tests 
kafka.api.test.ProducerFailureHandlingTest.testCannotSendToInternalTopic

kafka.api.test.ProducerFailureHandlingTest > testCannotSendToInternalTopic 
PASSED
```


Thanks,

Jaikiran Pai



Re: Review Request 30078: Patch for KAFKA-1885

2015-01-20 Thread Jaikiran Pai

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/30078/
---

(Updated Jan. 20, 2015, 4:11 p.m.)


Review request for kafka.


Changes
---

Added details about the tests done locally


Bugs: KAFKA-1885
https://issues.apache.org/jira/browse/KAFKA-1885


Repository: kafka


Description
---

KAFKA-1885 Upgrade junit dependency in core to 4.6 version to allow running 
individual test methods via gradle command line


Diffs
-

  build.gradle 1cbab29ce83e20dae0561b51eed6fdb86d522f28 

Diff: https://reviews.apache.org/r/30078/diff/


Testing (updated)
---

Tested that existing support to run an entire individual test case from the 
command line works as advertised:

```
./gradlew -Dtest.single=ProducerFailureHandlingTest core:test

kafka.api.test.ProducerFailureHandlingTest > testTooLargeRecordWithAckZero PASSED

kafka.api.test.ProducerFailureHandlingTest > testTooLargeRecordWithAckOne PASSED

kafka.api.test.ProducerFailureHandlingTest > testNonExistentTopic PASSED

kafka.api.test.ProducerFailureHandlingTest > testWrongBrokerList PASSED

kafka.api.test.ProducerFailureHandlingTest > testNoResponse PASSED

kafka.api.test.ProducerFailureHandlingTest > testInvalidPartition PASSED

kafka.api.test.ProducerFailureHandlingTest > testSendAfterClosed PASSED

kafka.api.test.ProducerFailureHandlingTest > testBrokerFailure PASSED

kafka.api.test.ProducerFailureHandlingTest > testCannotSendToInternalTopic PASSED

kafka.api.test.ProducerFailureHandlingTest > testNotEnoughReplicas PASSED

kafka.api.test.ProducerFailureHandlingTest > testNotEnoughReplicasAfterBrokerShutdown PASSED

BUILD SUCCESSFUL
```

Also tested that with this change it's now possible to run individual test 
methods as follows:

```
./gradlew clients:test --tests 
org.apache.kafka.clients.producer.MetadataTest.testMetadataUpdateWaitTime

org.apache.kafka.clients.producer.MetadataTest > testMetadataUpdateWaitTime PASSED
```

```
./gradlew core:test --tests 
kafka.api.test.ProducerFailureHandlingTest.testCannotSendToInternalTopic

kafka.api.test.ProducerFailureHandlingTest > testCannotSendToInternalTopic PASSED
```


Thanks,

Jaikiran Pai



[jira] [Updated] (KAFKA-1885) Allow test methods in "core" to be individually run from outside of the IDE

2015-01-20 Thread jaikiran pai (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1885?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jaikiran pai updated KAFKA-1885:

Status: Patch Available  (was: Open)

> Allow test methods in "core" to be individually run from outside of the IDE
> ---
>
> Key: KAFKA-1885
> URL: https://issues.apache.org/jira/browse/KAFKA-1885
> Project: Kafka
>  Issue Type: Improvement
>  Components: system tests
>Reporter: jaikiran pai
>Assignee: jaikiran pai
> Attachments: KAFKA-1885.patch
>
>
> Gradle in combination with Java plugin allows test "filtering" which lets 
> users run select test classes or even select test methods from the command 
> line. See "Test filtering" section here 
> http://www.gradle.org/docs/2.0/userguide/java_plugin.html#sec:java_test which 
> has examples of the commands. Currently we have this working in the "clients" 
> and I can run something like:
> {code}
> ./gradlew clients:test --tests 
> org.apache.kafka.clients.producer.MetadataTest.testMetadataUpdateWaitTime
> {code}
> and that command then only runs that specific test method 
> (testMetadataUpdateWaitTime) from the MetadataTest class.
> {code}
> To honour the JVM settings for this build a new JVM will be forked. Please 
> consider using the daemon: 
> http://gradle.org/docs/2.0/userguide/gradle_daemon.html.
> Building project 'core' with Scala version 2.10.4
> :clients:compileJava UP-TO-DATE
> :clients:processResources UP-TO-DATE
> :clients:classes UP-TO-DATE
> :clients:compileTestJava UP-TO-DATE
> :clients:processTestResources UP-TO-DATE
> :clients:testClasses UP-TO-DATE
> :clients:test
> org.apache.kafka.clients.producer.MetadataTest > testMetadataUpdateWaitTime 
> PASSED
> BUILD SUCCESSFUL
> Total time: 12.714 secs
> {code}
> I've found this useful when I need to do some quick tests and also reproduce 
> issues that aren't noticed sometimes if the whole test class is run.
> This currently only works for the "clients" and not for "core" --because the 
> "core" doesn't have the Java plugin applied to it in the gradle build--. I've 
> a patch which does that (and one other related thing) which then allowed me 
> to run individual test methods even for the core tests. I will create a 
> review request for it.
> Edit: I was wrong about the java plugin not being applied to "core". It is 
> indeed already applied but my attempt to get test methods running 
> individually for "core" were failing for a different reason related to JUnit 
> version dependency. I'll be addressing that in the patch and uploading for 
> review.





[jira] [Commented] (KAFKA-1885) Allow test methods in "core" to be individually run from outside of the IDE

2015-01-20 Thread jaikiran pai (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14283961#comment-14283961
 ] 

jaikiran pai commented on KAFKA-1885:
-

Created reviewboard https://reviews.apache.org/r/30078/diff/
 against branch origin/trunk

> Allow test methods in "core" to be individually run from outside of the IDE
> ---
>
> Key: KAFKA-1885
> URL: https://issues.apache.org/jira/browse/KAFKA-1885
> Project: Kafka
>  Issue Type: Improvement
>  Components: system tests
>Reporter: jaikiran pai
>Assignee: jaikiran pai
> Attachments: KAFKA-1885.patch
>
>
> Gradle in combination with Java plugin allows test "filtering" which lets 
> users run select test classes or even select test methods from the command 
> line. See "Test filtering" section here 
> http://www.gradle.org/docs/2.0/userguide/java_plugin.html#sec:java_test which 
> has examples of the commands. Currently we have this working in the "clients" 
> and I can run something like:
> {code}
> ./gradlew clients:test --tests 
> org.apache.kafka.clients.producer.MetadataTest.testMetadataUpdateWaitTime
> {code}
> and that command then only runs that specific test method 
> (testMetadataUpdateWaitTime) from the MetadataTest class.
> {code}
> To honour the JVM settings for this build a new JVM will be forked. Please 
> consider using the daemon: 
> http://gradle.org/docs/2.0/userguide/gradle_daemon.html.
> Building project 'core' with Scala version 2.10.4
> :clients:compileJava UP-TO-DATE
> :clients:processResources UP-TO-DATE
> :clients:classes UP-TO-DATE
> :clients:compileTestJava UP-TO-DATE
> :clients:processTestResources UP-TO-DATE
> :clients:testClasses UP-TO-DATE
> :clients:test
> org.apache.kafka.clients.producer.MetadataTest > testMetadataUpdateWaitTime 
> PASSED
> BUILD SUCCESSFUL
> Total time: 12.714 secs
> {code}
> I've found this useful when I need to do some quick tests and also reproduce 
> issues that aren't noticed sometimes if the whole test class is run.
> This currently only works for the "clients" and not for "core" --because the 
> "core" doesn't have the Java plugin applied to it in the gradle build--. I've 
> a patch which does that (and one other related thing) which then allowed me 
> to run individual test methods even for the core tests. I will create a 
> review request for it.
> Edit: I was wrong about the java plugin not being applied to "core". It is 
> indeed already applied but my attempt to get test methods running 
> individually for "core" were failing for a different reason related to JUnit 
> version dependency. I'll be addressing that in the patch and uploading for 
> review.





[jira] [Updated] (KAFKA-1885) Allow test methods in "core" to be individually run from outside of the IDE

2015-01-20 Thread jaikiran pai (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1885?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jaikiran pai updated KAFKA-1885:

Attachment: KAFKA-1885.patch

> Allow test methods in "core" to be individually run from outside of the IDE
> ---
>
> Key: KAFKA-1885
> URL: https://issues.apache.org/jira/browse/KAFKA-1885
> Project: Kafka
>  Issue Type: Improvement
>  Components: system tests
>Reporter: jaikiran pai
>Assignee: jaikiran pai
> Attachments: KAFKA-1885.patch
>
>
> Gradle in combination with Java plugin allows test "filtering" which lets 
> users run select test classes or even select test methods from the command 
> line. See "Test filtering" section here 
> http://www.gradle.org/docs/2.0/userguide/java_plugin.html#sec:java_test which 
> has examples of the commands. Currently we have this working in the "clients" 
> and I can run something like:
> {code}
> ./gradlew clients:test --tests 
> org.apache.kafka.clients.producer.MetadataTest.testMetadataUpdateWaitTime
> {code}
> and that command then only runs that specific test method 
> (testMetadataUpdateWaitTime) from the MetadataTest class.
> {code}
> To honour the JVM settings for this build a new JVM will be forked. Please 
> consider using the daemon: 
> http://gradle.org/docs/2.0/userguide/gradle_daemon.html.
> Building project 'core' with Scala version 2.10.4
> :clients:compileJava UP-TO-DATE
> :clients:processResources UP-TO-DATE
> :clients:classes UP-TO-DATE
> :clients:compileTestJava UP-TO-DATE
> :clients:processTestResources UP-TO-DATE
> :clients:testClasses UP-TO-DATE
> :clients:test
> org.apache.kafka.clients.producer.MetadataTest > testMetadataUpdateWaitTime 
> PASSED
> BUILD SUCCESSFUL
> Total time: 12.714 secs
> {code}
> I've found this useful when I need to do some quick tests and also reproduce 
> issues that aren't noticed sometimes if the whole test class is run.
> This currently only works for the "clients" and not for "core" --because the 
> "core" doesn't have the Java plugin applied to it in the gradle build--. I've 
> a patch which does that (and one other related thing) which then allowed me 
> to run individual test methods even for the core tests. I will create a 
> review request for it.
> Edit: I was wrong about the java plugin not being applied to "core". It is 
> indeed already applied but my attempt to get test methods running 
> individually for "core" were failing for a different reason related to JUnit 
> version dependency. I'll be addressing that in the patch and uploading for 
> review.
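The attached patch revolves around the core module's JUnit dependency. A minimal sketch of the kind of build.gradle change involved (the dependency coordinates, version, and block placement here are assumptions for illustration; see the attached KAFKA-1885.patch for the real diff):

```groovy
// Illustrative sketch only -- not the actual patch contents.
project(':core') {
    dependencies {
        // Older JUnit versions interact badly with Gradle's test filtering;
        // a newer JUnit lets `--tests SomeClass.someMethod` resolve correctly.
        testCompile 'junit:junit:4.6'
    }
}
```

With something like that in place, a single core test method can be run the same way as in clients, e.g. `./gradlew core:test --tests kafka.log.LogTest` (the class name here is illustrative).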



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Review Request 30078: Patch for KAFKA-1885

2015-01-20 Thread Jaikiran Pai

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/30078/
---

Review request for kafka.


Bugs: KAFKA-1885
https://issues.apache.org/jira/browse/KAFKA-1885


Repository: kafka


Description
---

KAFKA-1885 Upgrade junit dependency in core to 4.6 version to allow running 
individual test methods via gradle command line


Diffs
-

  build.gradle 1cbab29ce83e20dae0561b51eed6fdb86d522f28 

Diff: https://reviews.apache.org/r/30078/diff/


Testing
---


Thanks,

Jaikiran Pai



[jira] [Updated] (KAFKA-1885) Allow test methods in "core" to be individually run from outside of the IDE

2015-01-20 Thread jaikiran pai (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1885?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jaikiran pai updated KAFKA-1885:

Description: 
Gradle in combination with the Java plugin allows test "filtering", which lets users 
run select test classes or even select test methods from the command line. See 
"Test filtering" section here 
http://www.gradle.org/docs/2.0/userguide/java_plugin.html#sec:java_test which 
has examples of the commands. Currently we have this working in the "clients" 
and I can run something like:

{code}
./gradlew clients:test --tests 
org.apache.kafka.clients.producer.MetadataTest.testMetadataUpdateWaitTime
{code}

and that command then only runs that specific test method 
(testMetadataUpdateWaitTime) from the MetadataTest class.

{code}
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
http://gradle.org/docs/2.0/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.4
:clients:compileJava UP-TO-DATE
:clients:processResources UP-TO-DATE
:clients:classes UP-TO-DATE
:clients:compileTestJava UP-TO-DATE
:clients:processTestResources UP-TO-DATE
:clients:testClasses UP-TO-DATE
:clients:test

org.apache.kafka.clients.producer.MetadataTest > testMetadataUpdateWaitTime 
PASSED

BUILD SUCCESSFUL

Total time: 12.714 secs
{code}

I've found this useful when I need to run some quick tests and also reproduce 
issues that sometimes go unnoticed when the whole test class is run.

This currently only works for the "clients" and not for "core" --because the 
"core" doesn't have the Java plugin applied to it in the gradle build--. I've a 
patch which does that (and one other related thing) which then allowed me to 
run individual test methods even for the core tests. I will create a review 
request for it.

Edit: I was wrong about the java plugin not being applied to "core". It is 
indeed already applied but my attempt to get test methods running individually 
for "core" were failing for a different reason related to JUnit version 
dependency. I'll be addressing that in the patch and uploading for review.


  was:
Gradle in combination with the Java plugin allows test "filtering", which lets users 
run select test classes or even select test methods from the command line. See 
"Test filtering" section here 
http://www.gradle.org/docs/2.0/userguide/java_plugin.html#sec:java_test which 
has examples of the commands. Currently we have this working in the "clients" 
and I can run something like:

{code}
./gradlew clients:test --tests 
org.apache.kafka.clients.producer.MetadataTest.testMetadataUpdateWaitTime
{code}

and that command then only runs that specific test method 
(testMetadataUpdateWaitTime) from the MetadataTest class.

{code}
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
http://gradle.org/docs/2.0/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.4
:clients:compileJava UP-TO-DATE
:clients:processResources UP-TO-DATE
:clients:classes UP-TO-DATE
:clients:compileTestJava UP-TO-DATE
:clients:processTestResources UP-TO-DATE
:clients:testClasses UP-TO-DATE
:clients:test

org.apache.kafka.clients.producer.MetadataTest > testMetadataUpdateWaitTime 
PASSED

BUILD SUCCESSFUL

Total time: 12.714 secs
{code}

I've found this useful when I need to run some quick tests and also reproduce 
issues that sometimes go unnoticed when the whole test class is run.

This currently only works for the "clients" and not for "core" because the 
"core" doesn't have the Java plugin applied to it in the gradle build. I've a 
patch which does that (and one other related thing) which then allowed me to 
run individual test methods even for the core tests. I will create a review 
request for it.



> Allow test methods in "core" to be individually run from outside of the IDE
> ---
>
> Key: KAFKA-1885
> URL: https://issues.apache.org/jira/browse/KAFKA-1885
> Project: Kafka
>  Issue Type: Improvement
>  Components: system tests
>Reporter: jaikiran pai
>Assignee: jaikiran pai
>
> Gradle in combination with the Java plugin allows test "filtering", which lets 
> users run select test classes or even select test methods from the command 
> line. See "Test filtering" section here 
> http://www.gradle.org/docs/2.0/userguide/java_plugin.html#sec:java_test which 
> has examples of the commands. Currently we have this working in the "clients" 
> and I can run something like:
> {code}
> ./gradlew clients:test --tests 
> org.apache.kafka.clients.producer.MetadataTest.testMetadataUpdateWaitTime
> {code}
> and that command then only runs that specific test method 
> (testMetadataUpdateWaitTime) from the MetadataTest class.
> {code}
> To honour the JVM settings for this buil

[jira] [Updated] (KAFKA-1885) Allow test methods in "core" to be individually run from outside of the IDE

2015-01-20 Thread jaikiran pai (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1885?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jaikiran pai updated KAFKA-1885:

Description: 
Gradle in combination with the Java plugin allows test "filtering", which lets users 
run select test classes or even select test methods from the command line. See 
"Test filtering" section here 
http://www.gradle.org/docs/2.0/userguide/java_plugin.html#sec:java_test which 
has examples of the commands. Currently we have this working in the "clients" 
and I can run something like:

{code}
./gradlew clients:test --tests 
org.apache.kafka.clients.producer.MetadataTest.testMetadataUpdateWaitTime
{code}

and that command then only runs that specific test method 
(testMetadataUpdateWaitTime) from the MetadataTest class.

{code}
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
http://gradle.org/docs/2.0/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.4
:clients:compileJava UP-TO-DATE
:clients:processResources UP-TO-DATE
:clients:classes UP-TO-DATE
:clients:compileTestJava UP-TO-DATE
:clients:processTestResources UP-TO-DATE
:clients:testClasses UP-TO-DATE
:clients:test

org.apache.kafka.clients.producer.MetadataTest > testMetadataUpdateWaitTime 
PASSED

BUILD SUCCESSFUL

Total time: 12.714 secs
{code}

I've found this useful when I need to run some quick tests and also reproduce 
issues that sometimes go unnoticed when the whole test class is run.

This currently only works for the "clients" and not for "core" because the 
"core" doesn't have the Java plugin applied to it in the gradle build. I've a 
patch which does that (and one other related thing) which then allowed me to 
run individual test methods even for the core tests. I will create a review 
request for it.


  was:
Gradle in combination with the Java plugin allows test "filtering", which lets users 
run select test classes or even select test methods from the command line. See 
"Test filtering" section here 
http://www.gradle.org/docs/2.0/userguide/java_plugin.html#sec:java_test which 
has examples of the commands. Currently we have this working in the "clients" 
and I can run something like:

{code}
./gradlew clients:test --tests 
org.apache.kafka.clients.producer.MetadataTest.testMetadataUpdateWaitTime
{code}

and that command then only runs that specific test method 
(testMetadataUpdateWaitTime) from the MetadataTest class.

{code}
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
http://gradle.org/docs/2.0/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.4
:clients:compileJava UP-TO-DATE
:clients:processResources UP-TO-DATE
:clients:classes UP-TO-DATE
:clients:compileTestJava UP-TO-DATE
:clients:processTestResources UP-TO-DATE
:clients:testClasses UP-TO-DATE
:clients:test

org.apache.kafka.clients.producer.MetadataTest > testMetadataUpdateWaitTime 
PASSED

BUILD SUCCESSFUL

Total time: 12.714 secs
{code}

I've found this useful when I need to run some quick tests and also reproduce 
issues that sometimes go unnoticed when the whole test class is run.

This currently only works for the "clients" because the "core" doesn't have the 
Java plugin applied to it in the gradle build. I've a patch which does that 
(and one other related thing) which then allowed me to run individual test 
methods even for the core tests. I will create a review request for it.



> Allow test methods in "core" to be individually run from outside of the IDE
> ---
>
> Key: KAFKA-1885
> URL: https://issues.apache.org/jira/browse/KAFKA-1885
> Project: Kafka
>  Issue Type: Improvement
>  Components: system tests
>Reporter: jaikiran pai
>Assignee: jaikiran pai
>
> Gradle in combination with the Java plugin allows test "filtering", which lets 
> users run select test classes or even select test methods from the command 
> line. See "Test filtering" section here 
> http://www.gradle.org/docs/2.0/userguide/java_plugin.html#sec:java_test which 
> has examples of the commands. Currently we have this working in the "clients" 
> and I can run something like:
> {code}
> ./gradlew clients:test --tests 
> org.apache.kafka.clients.producer.MetadataTest.testMetadataUpdateWaitTime
> {code}
> and that command then only runs that specific test method 
> (testMetadataUpdateWaitTime) from the MetadataTest class.
> {code}
> To honour the JVM settings for this build a new JVM will be forked. Please 
> consider using the daemon: 
> http://gradle.org/docs/2.0/userguide/gradle_daemon.html.
> Building project 'core' with Scala version 2.10.4
> :clients:compileJava UP-TO-DATE
> :clients:processResources UP-TO-DATE
> :clients:classes UP-TO-DATE
> :clients:compileTestJava UP-TO-DATE
> :clients:

[jira] [Created] (KAFKA-1885) Allow test methods in "core" to be individually run from outside of the IDE

2015-01-20 Thread jaikiran pai (JIRA)
jaikiran pai created KAFKA-1885:
---

 Summary: Allow test methods in "core" to be individually run from 
outside of the IDE
 Key: KAFKA-1885
 URL: https://issues.apache.org/jira/browse/KAFKA-1885
 Project: Kafka
  Issue Type: Improvement
  Components: system tests
Reporter: jaikiran pai
Assignee: jaikiran pai


Gradle in combination with the Java plugin allows test "filtering", which lets users 
run select test classes or even select test methods from the command line. See 
"Test filtering" section here 
http://www.gradle.org/docs/2.0/userguide/java_plugin.html#sec:java_test which 
has examples of the commands. Currently we have this working in the "clients" 
and I can run something like:

{code}
./gradlew clients:test --tests 
org.apache.kafka.clients.producer.MetadataTest.testMetadataUpdateWaitTime
{code}

and that command then only runs that specific test method 
(testMetadataUpdateWaitTime) from the MetadataTest class.

{code}
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
http://gradle.org/docs/2.0/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.4
:clients:compileJava UP-TO-DATE
:clients:processResources UP-TO-DATE
:clients:classes UP-TO-DATE
:clients:compileTestJava UP-TO-DATE
:clients:processTestResources UP-TO-DATE
:clients:testClasses UP-TO-DATE
:clients:test

org.apache.kafka.clients.producer.MetadataTest > testMetadataUpdateWaitTime 
PASSED

BUILD SUCCESSFUL

Total time: 12.714 secs
{code}

I've found this useful when I need to run some quick tests and also reproduce 
issues that sometimes go unnoticed when the whole test class is run.

This currently only works for the "clients" because the "core" doesn't have the 
Java plugin applied to it in the gradle build. I've a patch which does that 
(and one other related thing) which then allowed me to run individual test 
methods even for the core tests. I will create a review request for it.






[jira] [Resolved] (KAFKA-1337) Rationalize new producer configs

2015-01-20 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1337?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao resolved KAFKA-1337.

   Resolution: Fixed
Fix Version/s: 0.8.2

The followup patch has been committed to 0.8.2. Resolving the jira.

> Rationalize new producer configs
> 
>
> Key: KAFKA-1337
> URL: https://issues.apache.org/jira/browse/KAFKA-1337
> Project: Kafka
>  Issue Type: Sub-task
>  Components: producer 
>Affects Versions: 0.8.2
>Reporter: Jay Kreps
>Assignee: Jay Kreps
> Fix For: 0.8.2
>
> Attachments: KAFKA-1337.patch, KAFKA-1337.patch, KAFKA-1337.patch, 
> KAFKA-1337.patch
>
>
> New producer configs have been added somewhat haphazardly. Before we consider 
> the producer final, let's do the following:
> 1. Go back and think about all the config names and make sure they are really 
> what we want.
> 2. Add doc strings for all the configs
> 3. Make the config keys non-public.
> 4. Add a feature to differentiate important from non-important configs.
> 5. Add code to generate the website documentation off the internal config 
> docs.





[jira] [Updated] (KAFKA-1000) Inbuilt consumer offset management feature for kafka

2015-01-20 Thread Manikumar Reddy (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1000?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar Reddy updated KAFKA-1000:
---
Fix Version/s: 0.8.2

> Inbuilt consumer offset management feature for kafka
> 
>
> Key: KAFKA-1000
> URL: https://issues.apache.org/jira/browse/KAFKA-1000
> Project: Kafka
>  Issue Type: New Feature
>  Components: consumer
>Affects Versions: 0.8.1
>Reporter: Tejas Patil
>Assignee: Tejas Patil
>Priority: Minor
>  Labels: features
> Fix For: 0.8.2
>
>
> Kafka currently stores offsets in ZooKeeper. This is a problem for several 
> reasons. First, it means the consumer must embed the ZooKeeper client, which is 
> not available in all languages. Secondly, offset commits are actually quite 
> frequent, and ZooKeeper does not scale to this kind of high-write load. 
> This Jira is for tracking the phase #2 of Offset Management [0]. Joel and I 
> have been working on this. [1] is the overall design of the feature.
> [0] : https://cwiki.apache.org/confluence/display/KAFKA/Offset+Management
> [1] : 
> https://cwiki.apache.org/confluence/display/KAFKA/Inbuilt+Consumer+Offset+Management





[jira] [Resolved] (KAFKA-1045) producer zk.connect config

2015-01-20 Thread Manikumar Reddy (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1045?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar Reddy resolved KAFKA-1045.

   Resolution: Fixed
Fix Version/s: 0.8.2

ZooKeeper configs are not required for either the new or the old producer 
implementation. Hence closing the issue.

> producer zk.connect config
> --
>
> Key: KAFKA-1045
> URL: https://issues.apache.org/jira/browse/KAFKA-1045
> Project: Kafka
>  Issue Type: Bug
>Reporter: sjk
> Fix For: 0.8.2
>
>
> java.lang.IllegalArgumentException: requirement failed: Missing required 
> property 'metadata.broker.list'
> props.put("zk.connect", KafkaConfig.getZooAddress());
> When I configure only the ZooKeeper address, why does the above error appear?





[jira] [Updated] (KAFKA-1109) Need to fix GC log configuration code, not able to override KAFKA_GC_LOG_OPTS

2015-01-20 Thread Manikumar Reddy (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1109?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar Reddy updated KAFKA-1109:
---
Attachment: KAFKA-1109.patch

> Need to fix GC log configuration code, not able to override KAFKA_GC_LOG_OPTS
> -
>
> Key: KAFKA-1109
> URL: https://issues.apache.org/jira/browse/KAFKA-1109
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.0, 0.8.1
> Environment: *nix
>Reporter: Viktor Kolodrevskiy
> Attachments: KAFKA-1109.patch, KAFKA_1109_fix.patch
>
>
> kafka-run-class.sh contains GC log code:
> # GC options
> GC_FILE_SUFFIX='-gc.log'
> GC_LOG_FILE_NAME=''
> if [ "$1" = "daemon" ] && [ -z "$KAFKA_GC_LOG_OPTS"] ; then
>   shift
>   GC_LOG_FILE_NAME=$1$GC_FILE_SUFFIX
>   shift
>   KAFKA_GC_LOG_OPTS="-Xloggc:$LOG_DIR/$GC_LOG_FILE_NAME -verbose:gc 
> -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps "
> fi
> So when in my scripts I start kafka and want to override KAFKA_GC_LOG_OPTS by 
> exporting new values I get:
> Exception in thread "main" java.lang.NoClassDefFoundError: daemon
> Caused by: java.lang.ClassNotFoundException: daemon
> That's because the shift is not performed when KAFKA_GC_LOG_OPTS is already 
> set, so "daemon" ends up being passed as the main class.
> I suggest replacing it with this code:
> # GC options
> GC_FILE_SUFFIX='-gc.log'
> GC_LOG_FILE_NAME=''
> if [ "$1" = "daemon" ] && [ -z "$KAFKA_GC_LOG_OPTS" ] ; then
>   shift
>   GC_LOG_FILE_NAME=$1$GC_FILE_SUFFIX
>   shift
>   KAFKA_GC_LOG_OPTS="-Xloggc:$LOG_DIR/$GC_LOG_FILE_NAME -verbose:gc 
> -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps "
> else
> if [ "$1" = "daemon" ] && [ "$KAFKA_GC_LOG_OPTS" != "" ] ; then
>   shift 2
> fi
> fi
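The effect of the missing else branch can be sketched with a few lines of plain shell. This is an illustrative reduction, not the real kafka-run-class.sh; the argument values are made up to mirror how the script is invoked:

```shell
# Simulate: caller pre-sets KAFKA_GC_LOG_OPTS, script is invoked as
#   kafka-run-class.sh daemon kafkaServer kafka.Kafka ...
KAFKA_GC_LOG_OPTS="-verbose:gc"
set -- daemon kafkaServer kafka.Kafka

if [ "$1" = "daemon" ] && [ -z "$KAFKA_GC_LOG_OPTS" ] ; then
  shift                          # consume "daemon"
  GC_LOG_FILE_NAME="$1-gc.log"
  shift                          # consume the log-file-name argument
elif [ "$1" = "daemon" ] ; then
  shift 2                        # the proposed fix: still consume both arguments
fi

echo "main class: $1"            # prints "main class: kafka.Kafka"
```

Without the elif branch, "$1" would still be "daemon" after the if, which is exactly how the JVM ends up trying to load a class named daemon.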





[jira] [Updated] (KAFKA-1109) Need to fix GC log configuration code, not able to override KAFKA_GC_LOG_OPTS

2015-01-20 Thread Manikumar Reddy (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1109?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar Reddy updated KAFKA-1109:
---
Assignee: Manikumar Reddy
  Status: Patch Available  (was: Open)

> Need to fix GC log configuration code, not able to override KAFKA_GC_LOG_OPTS
> -
>
> Key: KAFKA-1109
> URL: https://issues.apache.org/jira/browse/KAFKA-1109
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.0, 0.8.1
> Environment: *nix
>Reporter: Viktor Kolodrevskiy
>Assignee: Manikumar Reddy
> Attachments: KAFKA-1109.patch, KAFKA_1109_fix.patch
>
>
> kafka-run-class.sh contains GC log code:
> # GC options
> GC_FILE_SUFFIX='-gc.log'
> GC_LOG_FILE_NAME=''
> if [ "$1" = "daemon" ] && [ -z "$KAFKA_GC_LOG_OPTS"] ; then
>   shift
>   GC_LOG_FILE_NAME=$1$GC_FILE_SUFFIX
>   shift
>   KAFKA_GC_LOG_OPTS="-Xloggc:$LOG_DIR/$GC_LOG_FILE_NAME -verbose:gc 
> -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps "
> fi
> So when in my scripts I start kafka and want to override KAFKA_GC_LOG_OPTS by 
> exporting new values I get:
> Exception in thread "main" java.lang.NoClassDefFoundError: daemon
> Caused by: java.lang.ClassNotFoundException: daemon
> That's because the shift is not performed when KAFKA_GC_LOG_OPTS is already 
> set, so "daemon" ends up being passed as the main class.
> I suggest replacing it with this code:
> # GC options
> GC_FILE_SUFFIX='-gc.log'
> GC_LOG_FILE_NAME=''
> if [ "$1" = "daemon" ] && [ -z "$KAFKA_GC_LOG_OPTS" ] ; then
>   shift
>   GC_LOG_FILE_NAME=$1$GC_FILE_SUFFIX
>   shift
>   KAFKA_GC_LOG_OPTS="-Xloggc:$LOG_DIR/$GC_LOG_FILE_NAME -verbose:gc 
> -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps "
> else
> if [ "$1" = "daemon" ] && [ "$KAFKA_GC_LOG_OPTS" != "" ] ; then
>   shift 2
> fi
> fi





Review Request 30073: Patch for KAFKA-1109

2015-01-20 Thread Manikumar Reddy O

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/30073/
---

Review request for kafka.


Bugs: KAFKA-1109
https://issues.apache.org/jira/browse/KAFKA-1109


Repository: kafka


Description
---

Corrected kafka-run-class.sh script to override KAFKA_GC_LOG_OPTS environment 
property


Diffs
-

  bin/kafka-run-class.sh 22a9865b5939450a9d7f4ea2eee5eba2c1ec758c 

Diff: https://reviews.apache.org/r/30073/diff/


Testing
---


Thanks,

Manikumar Reddy O



[jira] [Commented] (KAFKA-1109) Need to fix GC log configuration code, not able to override KAFKA_GC_LOG_OPTS

2015-01-20 Thread Manikumar Reddy (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1109?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14283755#comment-14283755
 ] 

Manikumar Reddy commented on KAFKA-1109:


Created reviewboard https://reviews.apache.org/r/30073/diff/
 against branch origin/trunk

> Need to fix GC log configuration code, not able to override KAFKA_GC_LOG_OPTS
> -
>
> Key: KAFKA-1109
> URL: https://issues.apache.org/jira/browse/KAFKA-1109
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.0, 0.8.1
> Environment: *nix
>Reporter: Viktor Kolodrevskiy
> Attachments: KAFKA-1109.patch, KAFKA_1109_fix.patch
>
>
> kafka-run-class.sh contains GC log code:
> # GC options
> GC_FILE_SUFFIX='-gc.log'
> GC_LOG_FILE_NAME=''
> if [ "$1" = "daemon" ] && [ -z "$KAFKA_GC_LOG_OPTS"] ; then
>   shift
>   GC_LOG_FILE_NAME=$1$GC_FILE_SUFFIX
>   shift
>   KAFKA_GC_LOG_OPTS="-Xloggc:$LOG_DIR/$GC_LOG_FILE_NAME -verbose:gc 
> -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps "
> fi
> So when in my scripts I start kafka and want to override KAFKA_GC_LOG_OPTS by 
> exporting new values I get:
> Exception in thread "main" java.lang.NoClassDefFoundError: daemon
> Caused by: java.lang.ClassNotFoundException: daemon
> That's because the shift is not performed when KAFKA_GC_LOG_OPTS is already 
> set, so "daemon" ends up being passed as the main class.
> I suggest replacing it with this code:
> # GC options
> GC_FILE_SUFFIX='-gc.log'
> GC_LOG_FILE_NAME=''
> if [ "$1" = "daemon" ] && [ -z "$KAFKA_GC_LOG_OPTS" ] ; then
>   shift
>   GC_LOG_FILE_NAME=$1$GC_FILE_SUFFIX
>   shift
>   KAFKA_GC_LOG_OPTS="-Xloggc:$LOG_DIR/$GC_LOG_FILE_NAME -verbose:gc 
> -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps "
> else
> if [ "$1" = "daemon" ] && [ "$KAFKA_GC_LOG_OPTS" != "" ] ; then
>   shift 2
> fi
> fi





[jira] [Resolved] (KAFKA-1124) Sending to a new topic (with auto.create.topics.enable) returns ERROR

2015-01-20 Thread Manikumar Reddy (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1124?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar Reddy resolved KAFKA-1124.

   Resolution: Fixed
Fix Version/s: 0.8.2

This got fixed in new producer.

> Sending to a new topic (with auto.create.topics.enable) returns ERROR
> -
>
> Key: KAFKA-1124
> URL: https://issues.apache.org/jira/browse/KAFKA-1124
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.0, 0.8.1
>Reporter: Jason Rosenberg
>  Labels: usability
> Fix For: 0.8.2
>
>
> I had thought this was a reported issue, but I can't seem to find a previous 
> report for it.
> If auto.create.topics.enable is true, a producer still gets an ERROR logged 
> on the first attempt to send a message to a new topic, e.g.:
> 2013-11-06 03:00:08,638 ERROR [Thread-1] async.DefaultEventHandler - Failed 
> to collate messages by topic, partition due to: Failed to fetch topic 
> metadata for topic: mynewtopic
> 2013-11-06 03:00:08,638  INFO [Thread-1] async.DefaultEventHandler - Back off 
> for 100 ms before retrying send. Remaining retries = 3
> This usually clears itself up immediately on retry (after 100 ms), as handled 
> by the kafka.producer.async.DefaultEventHandler (with retries enabled).
> However, this is logged to the client as an ERROR, and looks scary, when in 
> fact it should have been a normal operation (since we have 
> auto.create.topics.enable=true).
> There should be a better interaction here between the producer client and the 
> server.
> Perhaps the server can create the topic in flight before returning the 
> metadata request.
> Or, if it needs to be asynchronous, it could return a code which indicates 
> something like: "The topic doesn't exist yet, it is being created, try again 
> shortly", and have the client automatically retry (even if retries are not 
> enabled, since it's not really an ERROR condition).
> The ERROR log level is a problem since apps often have alert systems set up 
> to notify when any ERROR happens, etc.





[jira] [Updated] (KAFKA-1337) Rationalize new producer configs

2015-01-20 Thread Manikumar Reddy (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1337?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar Reddy updated KAFKA-1337:
---
Affects Version/s: 0.8.2

> Rationalize new producer configs
> 
>
> Key: KAFKA-1337
> URL: https://issues.apache.org/jira/browse/KAFKA-1337
> Project: Kafka
>  Issue Type: Sub-task
>  Components: producer 
>Affects Versions: 0.8.2
>Reporter: Jay Kreps
>Assignee: Jay Kreps
> Attachments: KAFKA-1337.patch, KAFKA-1337.patch, KAFKA-1337.patch, 
> KAFKA-1337.patch
>
>
> New producer configs have been added somewhat haphazardly. Before we consider 
> the producer final, let's do the following:
> 1. Go back and think about all the config names and make sure they are really 
> what we want.
> 2. Add doc strings for all the configs
> 3. Make the config keys non-public.
> 4. Add a feature to differentiate important from non-important configs.
> 5. Add code to generate the website documentation off the internal config 
> docs.





[jira] [Updated] (KAFKA-1239) New producer checklist

2015-01-20 Thread Manikumar Reddy (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1239?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar Reddy updated KAFKA-1239:
---
Affects Version/s: (was: 0.9.0)
   0.8.2

> New producer checklist
> --
>
> Key: KAFKA-1239
> URL: https://issues.apache.org/jira/browse/KAFKA-1239
> Project: Kafka
>  Issue Type: New Feature
>  Components: producer 
>Affects Versions: 0.8.2
>Reporter: Neha Narkhede
>
> Here is the list of todo items we have for the new producer (in no
> particular order):
> 1. Rename to org.apache.* package
> 2. Discuss config approach
> 3. Finalize config approach
> 4. Add slf4j logging for debugging purposes
> 5. Discuss metrics approach
> 6. Add metrics
> 7. Convert perf test to optionally use new producer
> 8. Get system tests passing with new producer
> 9. Write integration tests that test the producer against the real server
> 10. Expand unit test coverage a bit
> 11. Performance testing and analysis.
> 12. Add compression support
> 13. Discuss and perhaps add retry support
> 14. Discuss the approach to protocol definition and perhaps refactor a bit
> 15. Deeper code review
> 16. Convert mirror maker
> This doesn't count general bug fixes, which I assume we will do as we find
> them.
> Let's file subtasks for each of the above, so there is a single place to 
> track what's outstanding on the new producer. 





[jira] [Resolved] (KAFKA-1434) kafka.admin.TopicCommand missing --delete topic command

2015-01-20 Thread Manikumar Reddy (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1434?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar Reddy resolved KAFKA-1434.

   Resolution: Fixed
Fix Version/s: 0.8.2

This issue is fixed in 0.8.2. Please enable the "delete.topic.enable" broker 
config property to enable the topic deletion feature.
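For reference, a minimal server.properties fragment that turns the feature on. The property name comes from this resolution comment; the surrounding file layout is assumed:

```properties
# server.properties -- topic deletion is disabled by default in 0.8.2
delete.topic.enable=true
```

Once each broker is restarted with this setting, kafka-topics.sh --delete should actually remove the topic rather than only marking it for deletion.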

> kafka.admin.TopicCommand missing --delete topic command
> ---
>
> Key: KAFKA-1434
> URL: https://issues.apache.org/jira/browse/KAFKA-1434
> Project: Kafka
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 0.8.1.1
>Reporter: Michał Czerwiński
> Fix For: 0.8.2
>
>
> It is not possible to delete a topic, as the --delete command is not available:
> ~/kafka_2.10-0.8.1.1/bin# ./kafka-run-class.sh kafka.admin.TopicCommand
> Command must include exactly one action: --list, --describe, --create or 
> --alter
> Option                                  Description
> ------                                  -----------
> --alter                                 Alter the configuration for the topic.
> --config <name=value>                   A topic configuration override for the
>                                           topic being created or altered.
> --create                                Create a new topic.
> --deleteConfig <name>                   A topic configuration override to be
>                                           removed for an existing topic
> --describe                              List details for the given topics.
> --help                                  Print usage information.
> --list                                  List all available topics.
> --partitions <# of partitions>          The number of partitions for the topic
>                                           being created or altered (WARNING:
>                                           If partitions are increased for a
>                                           topic that has a key, the partition
>                                           logic or ordering of the messages
>                                           will be affected
> --replica-assignment                    A list of manual partition-to-broker
>   <broker_id_for_part1_replica1 :         assignments for the topic being
>    broker_id_for_part1_replica2 ,         created or altered.
>    broker_id_for_part2_replica1 :
>    broker_id_for_part2_replica2 , ...>
> --replication-factor <replication       The replication factor for each
>   factor>                                 partition in the topic being
>                                           created.
> --topic <topic>                         The topic to be create, alter or
>                                           describe. Can also accept a regular
>                                           expression except for --create option
> --topics-with-overrides                 if set when describing topics, only
>                                           show topics that have overridden
>                                           configs
> --unavailable-partitions                if set when describing topics, only
>                                           show partitions whose leader is not
>                                           available
> --under-replicated-partitions           if set when describing topics, only
>                                           show under replicated partitions
> --zookeeper <urls>                      REQUIRED: The connection string for
>                                           the zookeeper connection in the form
>                                           host:port. Multiple URLS can be
>                                           given to allow fail-over.





Re: Review Request 27799: Patch for KAFKA-1760

2015-01-20 Thread Guozhang Wang

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/27799/#review68693
---



clients/src/main/java/org/apache/kafka/clients/CommonClientConfigs.java


"Data will be load balanced ..." I do not understand what this means.



clients/src/main/java/org/apache/kafka/clients/CommonClientConfigs.java


"If not server ..." Remove this sentence?



clients/src/main/java/org/apache/kafka/clients/KafkaClient.java


Wondering why we put @param on a new line but keep @return on the same line?



clients/src/main/java/org/apache/kafka/clients/NetworkClient.java


Is this function really private? If so, we do not need to keep the javadoc for 
it.



clients/src/main/java/org/apache/kafka/clients/consumer/Consumer.java


The javadocs here are a little confusing: users looking for its function 
APIs need to look into KafkaConsumer, an implementation of the interface 
Consumer.



clients/src/main/java/org/apache/kafka/clients/consumer/ConsumerConfig.java


"Partitions .." Remove this line, as it only makes sense when the consumer 
subscribes by topics.



clients/src/main/java/org/apache/kafka/clients/consumer/ConsumerConfig.java


"if the current offset is smaller than the oldest ..." => "the current 
offset does not exist on the server"



clients/src/main/java/org/apache/kafka/clients/consumer/ConsumerConfig.java


Should this inherit from CommonClientConfig?



clients/src/main/java/org/apache/kafka/clients/consumer/ConsumerConfig.java


"The maximum amount of data per-partition ... The maximum total memory used 
..."



clients/src/main/java/org/apache/kafka/clients/consumer/ConsumerConfig.java


Could these two be moved to CommonClientConfig?



clients/src/main/java/org/apache/kafka/clients/consumer/ConsumerConfig.java


"none" => "disable" according to the docs?



clients/src/main/java/org/apache/kafka/clients/consumer/KafkaConsumer.java


Maybe add some comments on why consumerId / generationId are initialized as 
such and when they will be updated and used.



clients/src/main/java/org/apache/kafka/clients/consumer/KafkaConsumer.java


There is a potential risk that a topic is deleted and the consumer 
unsubscribes from it but does not remove it from its metadata topic list, 
causing the underlying network client to keep refreshing metadata.



clients/src/main/java/org/apache/kafka/clients/consumer/KafkaConsumer.java


Should the ordering of these two "else if" be swapped?



clients/src/main/java/org/apache/kafka/clients/consumer/KafkaConsumer.java


Do we need to back off here?



clients/src/main/java/org/apache/kafka/clients/consumer/KafkaConsumer.java


Incomplete comments.



clients/src/main/java/org/apache/kafka/clients/consumer/KafkaConsumer.java


Rename this variable?



clients/src/main/java/org/apache/kafka/clients/consumer/KafkaConsumer.java


Should we change the "this.client.inFlightRequestCount .. " condition to 
just "node.ready()"?



clients/src/main/java/org/apache/kafka/clients/consumer/KafkaConsumer.java


Magic number "-1": should we define something like "OrdinaryConsumerReplicaId"?
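
A hedged sketch of the named constant this comment proposes in place of the 
magic number; the class and constant names here are assumptions, not Kafka's 
actual identifiers.

```java
// Sketch of replacing the magic number -1 with a named constant, as the
// review suggests. Names are assumptions for illustration.
class RequestIds {
    /** Replica id that marks a fetch request as coming from an ordinary consumer. */
    static final int ORDINARY_CONSUMER_REPLICA_ID = -1;

    /** True when the replica id identifies an ordinary consumer rather than a broker. */
    static boolean isOrdinaryConsumer(int replicaId) {
        return replicaId == ORDINARY_CONSUMER_REPLICA_ID;
    }
}
```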



clients/src/main/java/org/apache/kafka/clients/consumer/KafkaConsumer.java


Do not understand the TODO statement.



clients/src/main/java/org/apache/kafka/clients/consumer/KafkaConsumer.java


Rename the function name?



clients/src/main/java/org/apache/kafka/clients/consumer/NoOffsetForPartitionException.java


This Kafka exception could be thrown in other places besides committed() and 
position(); it could also be thrown in:

private resetOffset() ->

  private fetchMissingPositionsOrResetThem() ->

public position()
public poll()
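
The call chain above can be illustrated with a simplified sketch of why such an 
exception surfaces from position() and poll(): when there is no committed 
offset and no reset policy, there is nothing to reset to. The enum and method 
names below are assumptions for illustration, not Kafka's internals.

```java
// Simplified model (names are assumptions, not Kafka internals) of the
// reset step at the bottom of the call chain: with no committed offset and
// a "none" policy, resetting must fail -- analogous to throwing
// NoOffsetForPartitionException up through position()/poll().
enum OffsetResetStrategy { EARLIEST, LATEST, NONE }

class OffsetReset {
    static long resetOffset(OffsetResetStrategy strategy, long earliest, long latest) {
        switch (strategy) {
            case EARLIEST:
                return earliest;
            case LATEST:
                return latest;
            default:
                // No committed offset and no policy to fall back on.
                throw new IllegalStateException("no offset and no reset policy");
        }
    }
}
```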