[jira] [Commented] (KAFKA-1387) Kafka getting stuck creating ephemeral node it has already created when two zookeeper sessions are established in a very short period of time

2015-08-19 Thread James Lent (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14702992#comment-14702992
 ] 

James Lent commented on KAFKA-1387:
---

Your approach sounds much simpler than mine (which I like).  It is similar to what 
I proposed doing only at startup (the ensureNodeDoesNotExist method).  I am, however, 
not sure I understand the exact change you propose.  As I remember, 
createEphemeralPathExpectConflictHandleZKBug is called by three code paths:

- Register Broker
- Register Consumer
- Leadership election  

In my change I specifically tried to avoid changing the Leadership election logic.

Is your change basically to implement your new logic (delete if the node already 
exists) instead of calling createEphemeralPathExpectConflictHandleZKBug in the first 
two cases?  If so, I agree it sounds reasonable.  I suppose in a 
misconfiguration case two nodes might get into a registration war over the 
Broker node, but that could (perhaps) be handled at startup (the second one fails 
to start up).
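
For concreteness, here is roughly what I understand the first-two-cases change 
to look like (a hypothetical sketch against the I0Itec ZkClient API, not an 
actual patch):

{code}
import org.I0Itec.zkclient.ZkClient;
import org.I0Itec.zkclient.exception.ZkNodeExistsException;

public class EphemeralRegistration {
    // Hypothetical helper: delete any node left by a previous session,
    // then (re)create the registration under the current session.
    public static void register(ZkClient zkClient, String path, String data) {
        // ZkClient.delete returns false if the path does not exist.
        zkClient.delete(path);
        try {
            zkClient.createEphemeral(path, data);
        } catch (ZkNodeExistsException e) {
            // Someone re-registered between our delete and create, e.g. a
            // second broker misconfigured with the same id ("registration war").
            throw new IllegalStateException("Conflicting registration at " + path, e);
        }
    }
}
{code}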

If you propose replacing createEphemeralPathExpectConflictHandleZKBug for 
the Leadership election case too, then I am less comfortable making (and 
testing) that change.  I have never really dug into that logic very much.

One other factor to consider is that I am a bit backed up at work right now, and 
this issue will not be my highest priority.


 Kafka getting stuck creating ephemeral node it has already created when two 
 zookeeper sessions are established in a very short period of time
 -

 Key: KAFKA-1387
 URL: https://issues.apache.org/jira/browse/KAFKA-1387
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.8.1.1
Reporter: Fedor Korotkiy
Priority: Blocker
  Labels: newbie, patch, zkclient-problems
 Attachments: kafka-1387.patch


 The Kafka broker re-registers itself in ZooKeeper every time the 
 handleNewSession() callback is invoked.
 https://github.com/apache/kafka/blob/0.8.1/core/src/main/scala/kafka/server/KafkaHealthcheck.scala
  
 Now imagine the following sequence of events.
 1) Zookeeper session reestablishes. handleNewSession() callback is queued by 
 the zkClient, but not invoked yet.
 2) Zookeeper session reestablishes again, queueing callback second time.
 3) First callback is invoked, creating /broker/[id] ephemeral path.
 4) Second callback is invoked and tries to create the /broker/[id] path using 
 the createEphemeralPathExpectConflictHandleZKBug() function. But the path 
 already exists, so createEphemeralPathExpectConflictHandleZKBug() gets 
 stuck in an infinite loop.
 The controller election code seems to have the same issue.
 I'm able to reproduce this issue on the 0.8.1 branch from GitHub using the 
 following configs.
 # zookeeper
 tickTime=10
 dataDir=/tmp/zk/
 clientPort=2101
 maxClientCnxns=0
 # kafka
 broker.id=1
 log.dir=/tmp/kafka
 zookeeper.connect=localhost:2101
 zookeeper.connection.timeout.ms=100
 zookeeper.sessiontimeout.ms=100
 Just start kafka and zookeeper and then pause zookeeper several times using 
 Ctrl-Z.
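 For context, a simplified paraphrase of the helper's retry logic (not the 
 actual Kafka source) shows why step 4) never terminates:
 {code}
 import org.I0Itec.zkclient.ZkClient;
 import org.I0Itec.zkclient.exception.ZkNodeExistsException;

 public class HandleZKBugLoop {
     // Paraphrase only: the real helper assumes a conflicting node with
     // identical data was written by an *expired* session and will soon be
     // deleted by ZooKeeper. In the race above, the node belongs to the
     // *current* session, so it never goes away and the loop never exits.
     public static void createEphemeralWithRetry(ZkClient zkClient, String path,
                                                 String data) throws InterruptedException {
         while (true) {
             try {
                 zkClient.createEphemeral(path, data);
                 return; // created successfully
             } catch (ZkNodeExistsException e) {
                 String stored = zkClient.readData(path, true); // null if node is gone
                 if (data.equals(stored)) {
                     Thread.sleep(1000); // wait for the "old" node to expire -- it never does
                 } else {
                     throw e; // different data: a genuine conflict
                 }
             }
         }
     }
 }
 {code}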



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (KAFKA-2441) SSL/TLS in official docs

2015-08-19 Thread Sriharsha Chintalapani (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2441?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sriharsha Chintalapani reassigned KAFKA-2441:
-

Assignee: Sriharsha Chintalapani

 SSL/TLS in official docs
 

 Key: KAFKA-2441
 URL: https://issues.apache.org/jira/browse/KAFKA-2441
 Project: Kafka
  Issue Type: Sub-task
  Components: security
Reporter: Ismael Juma
Assignee: Sriharsha Chintalapani
 Fix For: 0.8.3


 We need to add a section in the official documentation regarding SSL/TLS:
 http://kafka.apache.org/documentation.html
 There is already a wiki page where some of the information is present:
 https://cwiki.apache.org/confluence/display/KAFKA/Deploying+SSL+for+Kafka



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (KAFKA-1685) Implement TLS/SSL tests

2015-08-19 Thread Sriharsha Chintalapani (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1685?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sriharsha Chintalapani resolved KAFKA-1685.
---
Resolution: Fixed

 Implement TLS/SSL tests
 ---

 Key: KAFKA-1685
 URL: https://issues.apache.org/jira/browse/KAFKA-1685
 Project: Kafka
  Issue Type: Sub-task
  Components: security
Affects Versions: 0.8.2.1
Reporter: Jay Kreps
Assignee: Sriharsha Chintalapani
 Fix For: 0.8.3


 We need to write a suite of unit tests for TLS authentication. This should be 
 doable with a JUnit integration test. We can use the simple authorization 
 plugin with only a single user whitelisted. The test can start the server, 
 then connect with and without TLS, and validate that access is only possible 
 when authenticated. 
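 As a rough illustration, the core TLS check such a test might perform could 
 look like this (a standalone JSSE sketch with placeholder host/port, not 
 Kafka's actual test harness):
 {code}
 import javax.net.ssl.SSLSocket;
 import javax.net.ssl.SSLSocketFactory;

 public class TlsHandshakeCheck {
     // Placeholder host/port; a real test would start the broker first and
     // assert handshake success on the TLS port and failure on plaintext-only.
     public static boolean canHandshake(String host, int port) {
         SSLSocketFactory factory = (SSLSocketFactory) SSLSocketFactory.getDefault();
         try (SSLSocket socket = (SSLSocket) factory.createSocket(host, port)) {
             socket.startHandshake(); // fails if the endpoint does not speak TLS
             return true;
         } catch (Exception e) {
             return false;
         }
     }
 }
 {code}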



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1070) Auto-assign node id

2015-08-19 Thread Alex Etling (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14703049#comment-14703049
 ] 

Alex Etling commented on KAFKA-1070:


Thank you for the help!   I appreciate it.   

 Auto-assign node id
 ---

 Key: KAFKA-1070
 URL: https://issues.apache.org/jira/browse/KAFKA-1070
 Project: Kafka
  Issue Type: Bug
Reporter: Jay Kreps
Assignee: Sriharsha Chintalapani
  Labels: usability
 Fix For: 0.8.3

 Attachments: KAFKA-1070.patch, KAFKA-1070_2014-07-19_16:06:13.patch, 
 KAFKA-1070_2014-07-22_11:34:18.patch, KAFKA-1070_2014-07-24_20:58:17.patch, 
 KAFKA-1070_2014-07-24_21:05:33.patch, KAFKA-1070_2014-08-21_10:26:20.patch, 
 KAFKA-1070_2014-11-20_10:50:04.patch, KAFKA-1070_2014-11-25_20:29:37.patch, 
 KAFKA-1070_2015-01-01_17:39:30.patch, KAFKA-1070_2015-01-12_10:46:54.patch, 
 KAFKA-1070_2015-01-12_18:30:17.patch


 It would be nice to have Kafka brokers auto-assign node ids rather than 
 having that be a configuration. Having a configuration is irritating because 
 (1) you have to generate a custom config for each broker and (2) even though 
 it is in configuration, changing the node id can cause all kinds of bad 
 things to happen.
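 One common way to implement auto-assignment (a hedged sketch using a 
 ZooKeeper persistent sequential node; the path and suffix parsing are 
 illustrative assumptions, not necessarily what the attached patches do):
 {code}
 import org.I0Itec.zkclient.ZkClient;

 public class BrokerIdGenerator {
     public static int nextBrokerId(ZkClient zkClient) {
         // createPersistentSequential returns the created path with a
         // monotonically increasing counter appended, e.g. ".../seqid0000000042".
         String created = zkClient.createPersistentSequential("/brokers/seqid", "");
         return Integer.parseInt(created.substring(created.length() - 10));
     }
 }
 {code}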



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (KAFKA-1691) new java consumer needs ssl support as a client

2015-08-19 Thread Sriharsha Chintalapani (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1691?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sriharsha Chintalapani resolved KAFKA-1691.
---
Resolution: Fixed

 new java consumer needs ssl support as a client
 ---

 Key: KAFKA-1691
 URL: https://issues.apache.org/jira/browse/KAFKA-1691
 Project: Kafka
  Issue Type: Sub-task
  Components: security
Reporter: Joe Stein
Assignee: Sriharsha Chintalapani
 Fix For: 0.8.3






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1691) new java consumer needs ssl support as a client

2015-08-19 Thread Jun Rao (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1691?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14703134#comment-14703134
 ] 

Jun Rao commented on KAFKA-1691:


This is done as part of KAFKA-1690.

 new java consumer needs ssl support as a client
 ---

 Key: KAFKA-1691
 URL: https://issues.apache.org/jira/browse/KAFKA-1691
 Project: Kafka
  Issue Type: Sub-task
  Components: security
Reporter: Joe Stein
Assignee: Sriharsha Chintalapani
 Fix For: 0.8.3






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1756) never allow the replica fetch size to be less than the max message size

2015-08-19 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1756?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-1756:

Fix Version/s: (was: 0.8.3)

 never allow the replica fetch size to be less than the max message size
 ---

 Key: KAFKA-1756
 URL: https://issues.apache.org/jira/browse/KAFKA-1756
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.8.1.1, 0.8.2.0
Reporter: Joe Stein
Priority: Blocker

 There exists a very hazardous scenario: if max.message.bytes is 
 greater than replica.fetch.max.bytes, the message will never replicate. 
 This will bring the ISR down to 1 (eventually/quickly once 
 replica.lag.max.messages is reached). If during this window the leader itself 
 goes out of the ISR, then the new leader will commit the last offset it 
 replicated. This is also bad for sync producers with -1 ack because they will 
 all block (a herd effect caused upstream) in this scenario too.
 The fix here is twofold:
 1) when setting max.message.bytes using kafka-topics we must first check on each 
 and every broker (which will need some thought about how to do this because of 
 the topiccommand zk notification) that max.message.bytes <= 
 replica.fetch.max.bytes, and if it is NOT then DO NOT create the topic
 2) if you change this in server.properties then the broker should not start 
 if max.message.bytes > replica.fetch.max.bytes
 This does beg the question/issue somewhat about centralizing certain/some/all 
 configurations so that inconsistencies do not occur (where broker 1 has 
 max.message.bytes > replica.fetch.max.bytes but broker 2 has max.message.bytes <= 
 replica.fetch.max.bytes because of an error in properties). I do not want to 
 conflate this ticket, but I think it is worth mentioning/bringing up here as 
 it is a good example where it could make sense. 
 I set this as BLOCKER for 0.8.2-beta because we did so much work to enable 
 consistency vs availability and 0 data loss; this corner case should be part 
 of 0.8.2-final.
 Also, I could go one step further (though I would not consider this part a 
 blocker for 0.8.2, but I am interested in what other folks think) with a consumer 
 replica fetch size check, so that if the message max is increased, messages will 
 no longer be consumed (since the consumer fetch max would be < max.message.bytes).
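 A minimal sketch of the startup guard in fix 2) (illustrative Java; the 
 default values shown are examples, not authoritative):
 {code}
 import java.util.Properties;

 public class StartupConfigGuard {
     // Refuse to start if the broker could accept messages larger than
     // followers are able to fetch.
     public static void validate(Properties serverProps) {
         int maxMessageBytes = Integer.parseInt(
                 serverProps.getProperty("max.message.bytes", "1000000"));
         int replicaFetchMaxBytes = Integer.parseInt(
                 serverProps.getProperty("replica.fetch.max.bytes", "1048576"));
         if (maxMessageBytes > replicaFetchMaxBytes) {
             throw new IllegalArgumentException("max.message.bytes ("
                     + maxMessageBytes + ") must be <= replica.fetch.max.bytes ("
                     + replicaFetchMaxBytes + "): larger messages would never replicate");
         }
     }
 }
 {code}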



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2411) remove usage of BlockingChannel in the broker

2015-08-19 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2411?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14703184#comment-14703184
 ] 

Ismael Juma commented on KAFKA-2411:


[~gwenshap], [~junrao] and [~harsha_ch], I filed a PR 
(https://github.com/apache/kafka/pull/151) for this issue. It would be really 
helpful to get some feedback. I added a number of questions to the PR itself.

 remove usage of BlockingChannel in the broker
 -

 Key: KAFKA-2411
 URL: https://issues.apache.org/jira/browse/KAFKA-2411
 Project: Kafka
  Issue Type: Sub-task
  Components: security
Reporter: Jun Rao
Assignee: Ismael Juma
 Fix For: 0.8.3


 In KAFKA-1690, we are adding SSL support at the Selector. However, there are 
 still a few places where we use BlockingChannel for inter-broker 
 communication. We need to replace those usages with Selector/NetworkClient to 
 enable inter-broker communication over SSL. Specifically, BlockingChannel is 
 currently used in the following places.
 1. ControllerChannelManager: for the controller to propagate metadata to the 
 brokers.
 2. KafkaServer: for the broker to send controlled shutdown request to the 
 controller.
 3. -AbstractFetcherThread: for the follower to fetch data from the leader 
 (through SimpleConsumer)- moved to KAFKA-2440
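 For illustration, the migration pattern for callers that need blocking 
 semantics might look like this (the AsyncClient interface below is 
 hypothetical and stands in for Selector/NetworkClient, whose real signatures 
 differ):
 {code}
 import java.util.concurrent.CompletableFuture;
 import java.util.concurrent.TimeUnit;

 public class BlockingFacade {
     // Hypothetical stand-in for the real async client.
     interface AsyncClient {
         CompletableFuture<byte[]> send(byte[] request);
     }

     // Callers that previously used BlockingChannel keep blocking semantics
     // by waiting on the async client's response future.
     public static byte[] sendAndReceive(AsyncClient client, byte[] request,
                                         long timeoutMs) throws Exception {
         return client.send(request).get(timeoutMs, TimeUnit.MILLISECONDS);
     }
 }
 {code}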



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2444) Fail test: kafka.api.QuotasTest > testThrottledProducerConsumer FAILED

2015-08-19 Thread Gwen Shapira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2444?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14703183#comment-14703183
 ] 

Gwen Shapira commented on KAFKA-2444:
-

[~aauradkar] - can you take a look?

 Fail test: kafka.api.QuotasTest > testThrottledProducerConsumer FAILED
 --

 Key: KAFKA-2444
 URL: https://issues.apache.org/jira/browse/KAFKA-2444
 Project: Kafka
  Issue Type: Bug
Reporter: Gwen Shapira

 This test has been failing on Jenkins builds several times in the last few 
 days. For example: https://builds.apache.org/job/Kafka-trunk/591/console
 kafka.api.QuotasTest > testThrottledProducerConsumer FAILED
 junit.framework.AssertionFailedError: Should have been throttled
 at junit.framework.Assert.fail(Assert.java:47)
 at junit.framework.Assert.assertTrue(Assert.java:20)
 at 
 kafka.api.QuotasTest.testThrottledProducerConsumer(QuotasTest.scala:136)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1489) Global threshold on data retention size

2015-08-19 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1489?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-1489:

Labels:   (was: newbie)

 Global threshold on data retention size
 ---

 Key: KAFKA-1489
 URL: https://issues.apache.org/jira/browse/KAFKA-1489
 Project: Kafka
  Issue Type: New Feature
  Components: log
Affects Versions: 0.8.1.1
Reporter: Andras Sereny

 Currently, Kafka has per-topic settings to control the size of a single log 
 (log.retention.bytes). With lots of topics of different volumes, and as they 
 grow in number, it can become tedious to maintain topic-level settings that 
 apply to a single log. 
 Often, a chunk of disk space that hosts all stored logs is dedicated to 
 Kafka, so it'd make sense to have a configurable threshold to control how 
 much space *all* data in one Kafka log data directory can take up.
 See also:
 http://mail-archives.apache.org/mod_mbox/kafka-users/201406.mbox/browser
 http://mail-archives.apache.org/mod_mbox/kafka-users/201311.mbox/%3c20131107015125.gc9...@jkoshy-ld.linkedin.biz%3E
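 A minimal sketch of what such a global check could compute (the config name 
 log.retention.bytes.total is hypothetical, and what to delete when over the 
 limit is out of scope):
 {code}
 import java.io.IOException;
 import java.nio.file.Files;
 import java.nio.file.Path;
 import java.nio.file.Paths;
 import java.util.stream.Stream;

 public class LogDirRetentionCheck {
     // Sum the bytes under one Kafka log data directory and compare
     // against a global threshold.
     public static boolean overThreshold(String logDir, long thresholdBytes)
             throws IOException {
         try (Stream<Path> files = Files.walk(Paths.get(logDir))) {
             long total = files.filter(Files::isRegularFile)
                               .mapToLong(p -> p.toFile().length())
                               .sum();
             return total > thresholdBytes;
         }
     }
 }
 {code}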



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1489) Global threshold on data retention size

2015-08-19 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1489?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-1489:

Fix Version/s: (was: 0.8.3)

 Global threshold on data retention size
 ---

 Key: KAFKA-1489
 URL: https://issues.apache.org/jira/browse/KAFKA-1489
 Project: Kafka
  Issue Type: New Feature
  Components: log
Affects Versions: 0.8.1.1
Reporter: Andras Sereny

 Currently, Kafka has per-topic settings to control the size of a single log 
 (log.retention.bytes). With lots of topics of different volumes, and as they 
 grow in number, it can become tedious to maintain topic-level settings that 
 apply to a single log. 
 Often, a chunk of disk space that hosts all stored logs is dedicated to 
 Kafka, so it'd make sense to have a configurable threshold to control how 
 much space *all* data in one Kafka log data directory can take up.
 See also:
 http://mail-archives.apache.org/mod_mbox/kafka-users/201406.mbox/browser
 http://mail-archives.apache.org/mod_mbox/kafka-users/201311.mbox/%3c20131107015125.gc9...@jkoshy-ld.linkedin.biz%3E



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-934) kafka hadoop consumer and producer use older 0.19.2 hadoop api's

2015-08-19 Thread Gwen Shapira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14703240#comment-14703240
 ] 

Gwen Shapira commented on KAFKA-934:


I'm really not sure. I removed it from 0.8.3 since it is not one of the 0.8.3 
targets. I don't think there are specific future plans, although I agree that 
once Copycat has an HDFS connector, it may be silly to maintain both.

 kafka hadoop consumer and producer use older 0.19.2 hadoop api's
 

 Key: KAFKA-934
 URL: https://issues.apache.org/jira/browse/KAFKA-934
 Project: Kafka
  Issue Type: Bug
  Components: contrib
Affects Versions: 0.8.0
 Environment: [amilkowski@localhost impl]$ uname -a
 Linux localhost.localdomain 3.9.4-200.fc18.x86_64 #1 SMP Fri May 24 20:10:49 
 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux
Reporter: Andrew Milkowski
Assignee: Sriharsha Chintalapani
  Labels: hadoop, hadoop-2.0, newbie

 The new Hadoop API present in 0.20.1, especially the package 
 org.apache.hadoop.mapreduce.lib, is not used. 
 The affected code is both the consumer and the producer in Kafka's contrib package.
 [amilkowski@localhost contrib]$ pwd
 /opt/local/git/kafka/contrib
 [amilkowski@localhost contrib]$ ls -lt
 total 12
 drwxrwxr-x 8 amilkowski amilkowski 4096 May 30 11:14 hadoop-consumer
 drwxrwxr-x 6 amilkowski amilkowski 4096 May 29 19:31 hadoop-producer
 drwxrwxr-x 6 amilkowski amilkowski 4096 May 29 16:43 target
 [amilkowski@localhost contrib]$ 
 in example
 import org.apache.hadoop.mapred.JobClient;
 import org.apache.hadoop.mapred.JobConf;
 import org.apache.hadoop.mapred.RunningJob;
 import org.apache.hadoop.mapred.TextOutputFormat;
 These use the 0.19.2 Hadoop API format, which prevents merging this Hadoop feature 
 into more modern Hadoop implementations, 
 instead of drawing from the 0.20.1 API set in org.apache.hadoop.mapreduce.
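 For comparison, the equivalent setup against the newer 
 org.apache.hadoop.mapreduce API might look like this (a sketch; 
 Job.getInstance is the Hadoop 2 form, while 0.20.x used new Job(conf, name)):
 {code}
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.mapreduce.Job;
 import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;

 public class NewApiJobSetup {
     // Job name is a placeholder; mapper/reducer/input setup is omitted.
     public static Job configure(Configuration conf) throws Exception {
         Job job = Job.getInstance(conf, "kafka-hadoop-example");
         job.setOutputFormatClass(TextOutputFormat.class);
         return job;
     }
 }
 {code}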



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: KAFKA-2364 migrate docs from SVN to git

2015-08-19 Thread Guozhang Wang
Even under the second option, it sounds like we still cannot include the
code and doc changes in one commit?

Guozhang

On Wed, Aug 19, 2015 at 8:56 AM, Manikumar Reddy ku...@nmsworks.co.in
wrote:

 oops.. I did not check Ismael's mail.

 On Wed, Aug 19, 2015 at 9:25 PM, Manikumar Reddy ku...@nmsworks.co.in
 wrote:

  Hi,
 
  We have raised an Apache Infra ticket for migrating site docs from svn
    -> git.
  Currently, the gitwcsub client only supports using the asf-site
   branch for site docs.
  The Infra team is suggesting creating a new git repo for site docs.
 
 Infra ticket here:
 https://issues.apache.org/jira/browse/INFRA-10143
 
 Possible Options:
 1. Maintain code and docs in same repo, but on different branches
  (trunk and asf-site)
 2. Create a new git repo for docs and integrate with gitwcsub.
 
  I vote for the second option.
 
 
  Kumar
 
  On Wed, Aug 12, 2015 at 3:51 PM, Edward Ribeiro 
 edward.ribe...@gmail.com
  wrote:
 
  FYI, I created a tiny trivial patch to address a typo in the web site
  (KAFKA-2418), so maybe you can review it and eventually commit before
  moving to github. ;)
 
  Cheers,
  Eddie
  Em 12/08/2015 06:01, Ismael Juma ism...@juma.me.uk escreveu:
 
   Hi Gwen,
  
   I filed KAFKA-2425 as KAFKA-2364 is about improving the website
   documentation. Aseem Bansal seemed interested in helping us with the
  move
   so I pinged him in the issue.
  
   Best,
   Ismael
  
   On Wed, Aug 12, 2015 at 1:51 AM, Gwen Shapira g...@confluent.io
  wrote:
  
Ah, there is already a JIRA in the title. Never mind :)
   
On Tue, Aug 11, 2015 at 5:51 PM, Gwen Shapira g...@confluent.io
  wrote:
   
 The vote opened 5 days ago. I believe we can conclude with 3
 binding
   +1,
3
 non-binding +1 and no -1.

  Ismael, are you opening a JIRA and migrating? Or are we looking
  for a
 volunteer?

 On Tue, Aug 11, 2015 at 5:46 PM, Ashish Singh 
 asi...@cloudera.com
wrote:

 +1 on same repo.

 On Tue, Aug 11, 2015 at 12:21 PM, Edward Ribeiro 
 edward.ribe...@gmail.com
 wrote:

  +1. As soon as possible, please. :)
 
  On Sat, Aug 8, 2015 at 4:05 PM, Neha Narkhede 
 n...@confluent.io
  
 wrote:
 
   +1 on the same repo for code and website. It helps to keep
  both in
 sync.
  
   On Thu, Aug 6, 2015 at 1:52 PM, Grant Henke 
  ghe...@cloudera.com
 wrote:
  
+1 for the same repo. The closer docs can be to code the
 more
 accurate
   they
are likely to be. The same way we encourage unit tests for
 a
  new
feature/patch. Updating the docs can be the same.
   
If we follow Sqoop's process for example, how would small
fixes/adjustments/additions to the live documentation occur
without
 a
  new
release?
   
On Thu, Aug 6, 2015 at 3:33 PM, Guozhang Wang 
   wangg...@gmail.com

   wrote:
   
 I am +1 on same repo too. I think keeping one git history
  of
code
 /
  doc
 change may actually be beneficial for this approach as
  well.

 Guozhang

 On Thu, Aug 6, 2015 at 9:16 AM, Gwen Shapira 
   g...@confluent.io

   wrote:

  I prefer same repo for one-commit / lower-barrier
  benefits.
 
  Sqoop has the following process, which decouples
   documentation
   changes
 from
  website changes:
 
  1. Code github repo contains a doc directory, with the
  documentation
  written and maintained in AsciiDoc. Only one version of
  the
 documentation,
  since it is source controlled with the code. (unlike
  current
SVN
   where
we
  have directories per version)
 
  2. Build process compiles the AsciiDoc to HTML and PDF
 
  3. When releasing, we post the documentation of the new
release
 to
   the
  website
 
  Gwen
 
  On Thu, Aug 6, 2015 at 12:20 AM, Ismael Juma 
ism...@juma.me.uk
 
wrote:
 
   Hi,
  
   For reference, here is the previous discussion on
  moving
   the
   website
to
   Git:
  
   http://search-hadoop.com/m/uyzND11JliU1E8QU92
  
   People were positive to the idea as Jay said. I would
  like
to
  see a
bit
  of
   a discussion around whether the website should be
 part
  of
the
  same
repo
  as
   the code or not. I'll get the ball rolling.
  
   Pros for same repo:
   * One commit can update the code and website, which
  means:
   ** Lower barrier for updating docs along with
 relevant
   code
  changes
   ** Easier to require that both are updated at the
 same
   time
   * More eyeballs on the website changes

[jira] [Commented] (KAFKA-2411) remove usage of BlockingChannel in the broker

2015-08-19 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2411?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14703163#comment-14703163
 ] 

ASF GitHub Bot commented on KAFKA-2411:
---

GitHub user ijuma opened a pull request:

https://github.com/apache/kafka/pull/151

KAFKA-2411; remove usage of blocking channel



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ijuma/kafka 
kafka-2411-remove-usage-of-blocking-channel

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/151.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #151


commit dbcde7e828a250708752866c4610298773dea006
Author: Ismael Juma ism...@juma.me.uk
Date:   2015-08-19T13:30:35Z

Introduce `ChannelBuilders.create` and use it in `ClientUtils` and 
`SocketServer`

commit 6de8b9b18c6bfb67e72a4fccc10768dff15098f8
Author: Ismael Juma ism...@juma.me.uk
Date:   2015-08-19T14:22:55Z

Use `Selector` instead of `BlockingChannel` for controlled shutdown

commit da7a980887ab2b5d007ddf80c3059b6619d52f99
Author: Ismael Juma ism...@juma.me.uk
Date:   2015-08-19T14:23:11Z

Use `Selector` instead of `BlockingChannel` in `ControllerChannelManager`




 remove usage of BlockingChannel in the broker
 -

 Key: KAFKA-2411
 URL: https://issues.apache.org/jira/browse/KAFKA-2411
 Project: Kafka
  Issue Type: Sub-task
  Components: security
Reporter: Jun Rao
Assignee: Ismael Juma
 Fix For: 0.8.3


 In KAFKA-1690, we are adding SSL support at the Selector. However, there are 
 still a few places where we use BlockingChannel for inter-broker 
 communication. We need to replace those usages with Selector/NetworkClient to 
 enable inter-broker communication over SSL. Specifically, BlockingChannel is 
 currently used in the following places.
 1. ControllerChannelManager: for the controller to propagate metadata to the 
 brokers.
 2. KafkaServer: for the broker to send controlled shutdown request to the 
 controller.
 3. -AbstractFetcherThread: for the follower to fetch data from the leader 
 (through SimpleConsumer)- moved to KAFKA-2440



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[DISCUSS] KIP-28 - Add a transform client for data processing

2015-08-19 Thread Yan Fang
Hi Guozhang,

Thank you for writing up KIP-28. (I hope this is the right thread for me to 
post some comments.) :)

I still have some confusion about the implementation of the Processor:

1. why do we maintain a separate consumer and producer for each worker thread?
— from my understanding, the new consumer api will be able to fetch specific 
topic-partitions. Is one consumer enough for one Kafka.process (shared 
among worker threads)? The same goes for the producer: is one producer enough 
for sending out messages to the brokers? Would this give better performance?

2. how is the “Stream Synchronization” achieved?
— you talked about “pause” and “notify” the consumer. Still not very clear. 
If worker thread has group_1 {topicA-0, topicB-0} and group_2 {topicA-1, 
topicB-1}, and topicB is much slower. How can we pause the consumer to sync 
topicA and topicB if there is only one consumer?

3. how does the partition timestamp monotonically increase?
— “When the lowest timestamp corresponding record gets processed by the 
thread, the partition time possibly gets advanced.” How does this “gets 
advanced” step work? Do we get another “lowest message timestamp value”? But doing 
that may not yield an “advanced” timestamp.

4. thoughts about the local state management.
— from the description, I think there is one kv store per partition-group. 
That means if one worker thread is assigned more than one partition group, it 
will have more than one kv-store connection. How can we avoid mis-operation? 
Because one partition group can easily write to another partition group’s kv 
store (they are in the same thread). 

5. do we plan to implement throttling?
— since we are “forwarding” the messages, it is very possible that the 
upstream processor is much faster than the downstream processor; how do we plan 
to deal with this?

6. how does the parallelism work?
— do we achieve this by simply adding more threads? Or do we plan to have a 
mechanism which can deploy different threads to different machines? It is easy 
to imagine that we can deploy different processors to different machines, but then 
what about the worker threads? And how is fault tolerance handled? Maybe this is 
out of scope for the KIP?

Two nits in the KIP-28 doc:

1. the “close” method is missing from interface Processor<K1,V1,K2,V2>. We have the 
“override close()” in KafkaProcessor.

2. “punctuate” does not accept a parameter, while StatefulProcessJob has a 
punctuate method that accepts one.
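
For reference, the combined shape these two nits suggest might be (a 
hypothetical sketch, not the KIP's actual interface):

public interface Processor<K1, V1, K2, V2> {
    void process(K1 key, V1 value);  // handle one record
    void punctuate();                // periodic callback, no parameter (nit 2)
    void close();                    // missing from the doc (nit 1)
}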

Thanks,
Yan

[jira] [Updated] (KAFKA-1904) run sanity failed test

2015-08-19 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1904?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-1904:

Fix Version/s: (was: 0.8.3)

 run sanity failed test
 --

 Key: KAFKA-1904
 URL: https://issues.apache.org/jira/browse/KAFKA-1904
 Project: Kafka
  Issue Type: Bug
  Components: system tests
Reporter: Joe Stein
Priority: Blocker
 Attachments: run_sanity.log.gz


 _test_case_name  :  testcase_1
 _test_class_name  :  ReplicaBasicTest
 arg : bounce_broker  :  true
 arg : broker_type  :  leader
 arg : message_producing_free_time_sec  :  15
 arg : num_iteration  :  2
 arg : num_messages_to_produce_per_producer_call  :  50
 arg : num_partition  :  2
 arg : replica_factor  :  3
 arg : sleep_seconds_between_producer_calls  :  1
 validation_status  : 
  Test completed  :  FAILED



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1678) add new options for reassign partition to better manager dead brokers

2015-08-19 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1678?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-1678:

Fix Version/s: (was: 0.8.3)

 add new options for reassign partition to better manager dead brokers
 -

 Key: KAFKA-1678
 URL: https://issues.apache.org/jira/browse/KAFKA-1678
 Project: Kafka
  Issue Type: Bug
  Components: tools
Reporter: Joe Stein
Assignee: Dmitry Pekar
  Labels: operations

 Four changes here, each requiring a) system tests, b) unit tests, c) code to do 
 the actual work, and d) we should run it on dexter too (we should post the 
 patch before running in the test lab so others can do the same at that time).
  --replace-broker
  --decommission-broker
 fix two bugs
 1) do not allow the user to start reassignment for a topic that doesn't exist
 2) do not allow reassignment to brokers that don't exist.
 There could be other reassign-like issues that come up from others as well. My 
 initial preference is one patch, depending on what the issues/changes are and 
 where in the code we are.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1714) more better bootstrapping of the gradle-wrapper.jar

2015-08-19 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1714?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-1714:

Fix Version/s: (was: 0.8.3)

 more better bootstrapping of the gradle-wrapper.jar 
 

 Key: KAFKA-1714
 URL: https://issues.apache.org/jira/browse/KAFKA-1714
 Project: Kafka
  Issue Type: Bug
  Components: build
Affects Versions: 0.8.2.0
Reporter: Joe Stein

 In https://issues.apache.org/jira/browse/KAFKA-1490 we moved the 
 gradle-wrapper.jar out of the source tree for maintenance reasons. This makes the 
 first build step somewhat problematic for newcomers. A bootstrap step is 
 required; if this could somehow be incorporated, that would be great.
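 For anyone hitting this today, the bootstrap is roughly (assuming a locally 
 installed Gradle):
 $ gradle          # generates gradlew, gradlew.bat and gradle/wrapper/
 $ ./gradlew build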



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1665) controller state gets stuck in message after execute

2015-08-19 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1665?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-1665:

Fix Version/s: (was: 0.8.3)

 controller state gets stuck in message after execute
 

 Key: KAFKA-1665
 URL: https://issues.apache.org/jira/browse/KAFKA-1665
 Project: Kafka
  Issue Type: Bug
Reporter: Joe Stein

 I had a 0.8.1.1 Kafka Broker go down, and I was trying to use the reassign 
 partition script to move topics off that broker. When I describe the topics, 
 I see the following:
 Topic: mini__022active_120__33__mini Partition: 0 Leader: 2131118 
 Replicas: 2131118,2166601,2163421 Isr: 2131118,2166601
 This shows that the broker “2163421” is down. So I create the following file 
 /tmp/move_topic.json:
 {
   "version": 1,
   "partitions": [
     {
       "topic": "mini__022active_120__33__mini",
       "partition": 0,
       "replicas": [
         2131118, 2166601, 2156998
       ]
     }
   ]
 }
 And then do this:
 ./kafka-reassign-partitions.sh --execute --reassignment-json-file 
 /tmp/move_topic.json
 Successfully started reassignment of partitions 
 {"version":1,"partitions":[{"topic":"mini__022active_120__33__mini","partition":0,"replicas":[2131118,2166601,2156998]}]}
 However, when I try to verify this, I get the following error:
 ./kafka-reassign-partitions.sh --verify --reassignment-json-file 
 /tmp/move_topic.json
 Status of partition reassignment:
 ERROR: Assigned replicas (2131118,2166601,2156998,2163421) don't match the 
 list of replicas for reassignment (2131118,2166601,2156998) for partition 
 [mini__022active_120__33__mini,0]
 Reassignment of partition [mini__022active_120__33__mini,0] failed
 If I describe the topics, I now see there are 4 replicas. This has been like 
 this for many hours now, so it seems to have permanently moved to 4 replicas 
 for some reason.
 Topic:mini__022active_120__33__mini PartitionCount:1 ReplicationFactor:4 
 Configs:
 Topic: mini__022active_120__33__mini Partition: 0 Leader: 2131118 
 Replicas: 2131118,2166601,2156998,2163421 Isr: 2131118,2166601
 If I re-execute and re-verify, I get the same error. So it seems to be wedged.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1659) Ability to cleanly abort the KafkaProducer

2015-08-19 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-1659:

Fix Version/s: (was: 0.8.3)

 Ability to cleanly abort the KafkaProducer
 --

 Key: KAFKA-1659
 URL: https://issues.apache.org/jira/browse/KAFKA-1659
 Project: Kafka
  Issue Type: Improvement
  Components: clients, producer 
Affects Versions: 0.8.2.0
Reporter: Andrew Stein
Assignee: Jun Rao

 I would like the ability to abort the Java client's KafkaProducer. This 
 includes stopping the writing of buffered records.
 The motivation for this is described 
 [here|http://mail-archives.apache.org/mod_mbox/kafka-dev/201409.mbox/%3CCAOk4UxB7BJm6HSgLXrR01sksB2dOC3zdt0NHaKHz1EALR6%3DCTQ%40mail.gmail.com%3E].
 A sketch of this method is:
 {code}
 public void abort() {
     try {
         ioThread.interrupt();
         ioThread.stop(new ThreadDeath());
     } catch (SecurityException e) {
         // Thread.stop/interrupt may be denied by a security manager.
     }
 }
 {code}
 but of course it is preferable to stop the {{ioThread}} by cooperation, 
 rather than use the deprecated {{Thread.stop(new ThreadDeath())}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1675) bootstrapping tidy-up

2015-08-19 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1675?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-1675:

Fix Version/s: (was: 0.8.3)

 bootstrapping tidy-up
 -

 Key: KAFKA-1675
 URL: https://issues.apache.org/jira/browse/KAFKA-1675
 Project: Kafka
  Issue Type: Bug
Reporter: Szczepan Faber
Assignee: Ivan Lyutov
 Attachments: KAFKA-1675.patch


 I'd like to suggest the following changes:
 1. remove the 'gradlew' and 'gradlew.bat' scripts from the source tree. Those 
 scripts don't work, e.g. they fail with an exception when invoked. I just got a 
 user report where those scripts were invoked by the user and it led to an 
 exception that was not easy to grasp. The bootstrapping step will generate those 
 files anyway.
 2. move the 'gradleVersion' extra property from 'build.gradle' into 
 'gradle.properties'. Otherwise it is hard to automate the bootstrapping 
 process - in order to find out the gradle version, I need to evaluate the 
 build script, and for that I need gradle with the correct version (kind of a 
 vicious circle). Project properties declared in the gradle.properties file 
 can be accessed exactly the same as the 'ext' properties, for example: 
 'project.gradleVersion'.
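 For illustration, the proposed layout would be something like (the version 
 value is an example):
 {code}
 # gradle.properties
 gradleVersion=2.4
 {code}
 build.gradle and any automation can then read it as project.gradleVersion.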



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: KAFKA-2364 migrate docs from SVN to git

2015-08-19 Thread Manikumar Reddy
oops.. I did not check Ismael's mail.

On Wed, Aug 19, 2015 at 9:25 PM, Manikumar Reddy ku...@nmsworks.co.in
wrote:

 Hi,

   We have raised an Apache Infra ticket for migrating site docs from svn
   -> git.
   Currently, the gitwcsub client only supports using the asf-site
  branch for site docs.
   The Infra team is suggesting creating a new git repo for site docs.

Infra ticket here:
https://issues.apache.org/jira/browse/INFRA-10143

Possible Options:
1. Maintain code and docs in same repo, but on different branches
 (trunk and asf-site)
2. Create a new git repo for docs and integrate with gitwcsub.

I vote for the second option.


 Kumar

 On Wed, Aug 12, 2015 at 3:51 PM, Edward Ribeiro edward.ribe...@gmail.com
 wrote:

 FYI, I created a tiny trivial patch to address a typo in the web site
 (KAFKA-2418), so maybe you can review it and eventually commit before
 moving to github. ;)

 Cheers,
 Eddie
 Em 12/08/2015 06:01, Ismael Juma ism...@juma.me.uk escreveu:

  Hi Gwen,
 
  I filed KAFKA-2425 as KAFKA-2364 is about improving the website
  documentation. Aseem Bansal seemed interested in helping us with the
 move
  so I pinged him in the issue.
 
  Best,
  Ismael
 
  On Wed, Aug 12, 2015 at 1:51 AM, Gwen Shapira g...@confluent.io
 wrote:
 
   Ah, there is already a JIRA in the title. Never mind :)
  
   On Tue, Aug 11, 2015 at 5:51 PM, Gwen Shapira g...@confluent.io
 wrote:
  
The vote opened 5 days ago. I believe we can conclude with 3 binding
  +1,
   3
non-binding +1 and no -1.
   
 Ismael, are you opening a JIRA and migrating? Or are we looking
 for a
volunteer?
   
On Tue, Aug 11, 2015 at 5:46 PM, Ashish Singh asi...@cloudera.com
   wrote:
   
+1 on same repo.
   
On Tue, Aug 11, 2015 at 12:21 PM, Edward Ribeiro 
edward.ribe...@gmail.com
wrote:
   
 +1. As soon as possible, please. :)

 On Sat, Aug 8, 2015 at 4:05 PM, Neha Narkhede n...@confluent.io
 
wrote:

  +1 on the same repo for code and website. It helps to keep
 both in
sync.
 
  On Thu, Aug 6, 2015 at 1:52 PM, Grant Henke 
 ghe...@cloudera.com
wrote:
 
   +1 for the same repo. The closer docs can be to code the more
accurate
  they
   are likely to be. The same way we encourage unit tests for a
 new
   feature/patch. Updating the docs can be the same.
  
   If we follow Sqoop's process for example, how would small
   fixes/adjustments/additions to the live documentation occur
   without
a
 new
   release?
  
   On Thu, Aug 6, 2015 at 3:33 PM, Guozhang Wang 
  wangg...@gmail.com
   
  wrote:
  
I am +1 on same repo too. I think keeping one git history
 of
   code
/
 doc
change may actually be beneficial for this approach as
 well.
   
Guozhang
   
On Thu, Aug 6, 2015 at 9:16 AM, Gwen Shapira 
  g...@confluent.io
   
  wrote:
   
 I prefer same repo for one-commit / lower-barrier
 benefits.

 Sqoop has the following process, which decouples
  documentation
  changes
from
 website changes:

 1. Code github repo contains a doc directory, with the
 documentation
 written and maintained in AsciiDoc. Only one version of
 the
documentation,
 since it is source controlled with the code. (unlike
 current
   SVN
  where
   we
 have directories per version)

 2. Build process compiles the AsciiDoc to HTML and PDF

 3. When releasing, we post the documentation of the new
   release
to
  the
 website

 Gwen

 On Thu, Aug 6, 2015 at 12:20 AM, Ismael Juma 
   ism...@juma.me.uk

   wrote:

  Hi,
 
  For reference, here is the previous discussion on
 moving
  the
  website
   to
  Git:
 
  http://search-hadoop.com/m/uyzND11JliU1E8QU92
 
  People were positive to the idea as Jay said. I would
 like
   to
 see a
   bit
 of
  a discussion around whether the website should be part
 of
   the
 same
   repo
 as
  the code or not. I'll get the ball rolling.
 
  Pros for same repo:
  * One commit can update the code and website, which
 means:
  ** Lower barrier for updating docs along with relevant
  code
 changes
  ** Easier to require that both are updated at the same
  time
  * More eyeballs on the website changes
  * Automatically branched with the relevant code
 
  Pros for separate repo:
  * Potentially simpler for website-only changes (smaller
   repo,
 less
  verification needed)
  * Website changes don't clutter the code Git history
  * No risk of website change affecting the code
 
  Your thoughts, please.
   

[jira] [Updated] (KAFKA-1694) kafka command line and centralized operations

2015-08-19 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1694?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-1694:

Fix Version/s: (was: 0.8.3)

 kafka command line and centralized operations
 -

 Key: KAFKA-1694
 URL: https://issues.apache.org/jira/browse/KAFKA-1694
 Project: Kafka
  Issue Type: Bug
Reporter: Joe Stein
Assignee: Andrii Biletskyi
Priority: Critical
 Attachments: KAFKA-1694.patch, KAFKA-1694_2014-12-24_21:21:51.patch, 
 KAFKA-1694_2015-01-12_15:28:41.patch, KAFKA-1694_2015-01-12_18:54:48.patch, 
 KAFKA-1694_2015-01-13_19:30:11.patch, KAFKA-1694_2015-01-14_15:42:12.patch, 
 KAFKA-1694_2015-01-14_18:07:39.patch, KAFKA-1694_2015-03-12_13:04:37.patch, 
 KAFKA-1772_1802_1775_1774_v2.patch


 https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Command+Line+and+Related+Improvements



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1825) leadership election state is stale and never recovers without all brokers restarting

2015-08-19 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1825?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-1825:

Fix Version/s: (was: 0.8.3)

 leadership election state is stale and never recovers without all brokers 
 restarting
 

 Key: KAFKA-1825
 URL: https://issues.apache.org/jira/browse/KAFKA-1825
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.8.1.1, 0.8.2.0
Reporter: Joe Stein
Priority: Critical
 Attachments: KAFKA-1825.executable.tgz


 I am not sure what the cause is here, but I can succinctly and repeatedly 
 reproduce this issue. I tried with 0.8.1.1 and 0.8.2-beta and both behave in 
 the same manner.
 The code to reproduce this is here: 
 https://github.com/stealthly/go_kafka_client/tree/wipAsyncSaramaProducer/producers
 scenario: 3 brokers, 1 zookeeper, 1 client (each an AWS c3.2xlarge instance)
 create topic 
 producer client sends in 380,000 messages/sec (attached executable)
 everything is fine until you kill -SIGTERM broker #2; 
 at that point the state goes bad for that topic. Even trying to use the 
 console producer (with the sarama producer off) doesn't work.
 Doing a describe, the yoyoma topic looks fine; I ran preferred leadership 
 election with lots of issues... still can't produce... the only resolution is 
 bouncing all brokers :(
 root@ip-10-233-52-139:/opt/kafka_2.10-0.8.1.1# bin/kafka-topics.sh 
 --zookeeper 10.218.189.234:2181 --describe
 Topic:yoyoma  PartitionCount:36   ReplicationFactor:3 Configs:
   Topic: yoyoma   Partition: 0Leader: 1   Replicas: 1,2,3 Isr: 1,3
   Topic: yoyoma   Partition: 1Leader: 1   Replicas: 2,3,1 Isr: 1,3
   Topic: yoyoma   Partition: 2Leader: 1   Replicas: 3,1,2 Isr: 1,3
   Topic: yoyoma   Partition: 3Leader: 1   Replicas: 1,3,2 Isr: 1,3
   Topic: yoyoma   Partition: 4Leader: 1   Replicas: 2,1,3 Isr: 1,3
   Topic: yoyoma   Partition: 5Leader: 1   Replicas: 3,2,1 Isr: 1,3
   Topic: yoyoma   Partition: 6Leader: 1   Replicas: 1,2,3 Isr: 1,3
   Topic: yoyoma   Partition: 7Leader: 1   Replicas: 2,3,1 Isr: 1,3
   Topic: yoyoma   Partition: 8Leader: 1   Replicas: 3,1,2 Isr: 1,3
   Topic: yoyoma   Partition: 9Leader: 1   Replicas: 1,3,2 Isr: 1,3
   Topic: yoyoma   Partition: 10   Leader: 1   Replicas: 2,1,3 Isr: 1,3
   Topic: yoyoma   Partition: 11   Leader: 1   Replicas: 3,2,1 Isr: 1,3
   Topic: yoyoma   Partition: 12   Leader: 1   Replicas: 1,2,3 Isr: 1,3
   Topic: yoyoma   Partition: 13   Leader: 1   Replicas: 2,3,1 Isr: 1,3
   Topic: yoyoma   Partition: 14   Leader: 1   Replicas: 3,1,2 Isr: 1,3
   Topic: yoyoma   Partition: 15   Leader: 1   Replicas: 1,3,2 Isr: 1,3
   Topic: yoyoma   Partition: 16   Leader: 1   Replicas: 2,1,3 Isr: 1,3
   Topic: yoyoma   Partition: 17   Leader: 1   Replicas: 3,2,1 Isr: 1,3
   Topic: yoyoma   Partition: 18   Leader: 1   Replicas: 1,2,3 Isr: 1,3
   Topic: yoyoma   Partition: 19   Leader: 1   Replicas: 2,3,1 Isr: 1,3
   Topic: yoyoma   Partition: 20   Leader: 1   Replicas: 3,1,2 Isr: 1,3
   Topic: yoyoma   Partition: 21   Leader: 1   Replicas: 1,3,2 Isr: 1,3
   Topic: yoyoma   Partition: 22   Leader: 1   Replicas: 2,1,3 Isr: 1,3
   Topic: yoyoma   Partition: 23   Leader: 1   Replicas: 3,2,1 Isr: 1,3
   Topic: yoyoma   Partition: 24   Leader: 1   Replicas: 1,2,3 Isr: 1,3
   Topic: yoyoma   Partition: 25   Leader: 1   Replicas: 2,3,1 Isr: 1,3
   Topic: yoyoma   Partition: 26   Leader: 1   Replicas: 3,1,2 Isr: 1,3
   Topic: yoyoma   Partition: 27   Leader: 1   Replicas: 1,3,2 Isr: 1,3
   Topic: yoyoma   Partition: 28   Leader: 1   Replicas: 2,1,3 Isr: 1,3
   Topic: yoyoma   Partition: 29   Leader: 1   Replicas: 3,2,1 Isr: 1,3
   Topic: yoyoma   Partition: 30   Leader: 1   Replicas: 1,2,3 Isr: 1,3
   Topic: yoyoma   Partition: 31   Leader: 1   Replicas: 2,3,1 Isr: 1,3
   Topic: yoyoma   Partition: 32   Leader: 1   Replicas: 3,1,2 Isr: 1,3
   Topic: yoyoma   Partition: 33   Leader: 1   Replicas: 1,3,2 Isr: 1,3
   Topic: yoyoma   Partition: 34   Leader: 1   Replicas: 2,1,3 Isr: 1,3
   Topic: yoyoma   Partition: 35   Leader: 1   Replicas: 3,2,1 Isr: 1,3
 root@ip-10-233-52-139:/opt/kafka_2.10-0.8.1.1# 
 bin/kafka-preferred-replica-election.sh --zookeeper 10.218.189.234:2181
 Successfully started preferred replica election for partitions 
 Set([yoyoma,29], [yoyoma,14], [yoyoma,22], [yoyoma,15], [yoyoma,3], 
 [yoyoma,11], [yoyoma,32], [yoyoma,23], [yoyoma,18], [yoyoma,25], [yoyoma,26], 
 [yoyoma,1], [yoyoma,9], [yoyoma,33], [yoyoma,5], 

[jira] [Updated] (KAFKA-686) 0.8 Kafka broker should give a better error message when running against 0.7 zookeeper

2015-08-19 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-686?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-686:
---
Fix Version/s: (was: 0.8.3)

 0.8 Kafka broker should give a better error message when running against 0.7 
 zookeeper
 --

 Key: KAFKA-686
 URL: https://issues.apache.org/jira/browse/KAFKA-686
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.8.0
Reporter: Jay Kreps
  Labels: newbie, patch
 Attachments: KAFAK-686-null-pointer-fix.patch, 
 KAFKA-686-null-pointer-fix-2.patch


 People will not know that the zookeeper paths are not compatible. When you 
 try to start the 0.8 broker pointed at a 0.7 zookeeper you get a 
 NullPointerException. We should detect this and give a more sane error.
 Error:
 kafka.common.KafkaException: Can't parse json string: null
 at kafka.utils.Json$.liftedTree1$1(Json.scala:20)
 at kafka.utils.Json$.parseFull(Json.scala:16)
 at 
 kafka.utils.ZkUtils$$anonfun$getReplicaAssignmentForTopics$2.apply(ZkUtils.scala:498)
 at 
 kafka.utils.ZkUtils$$anonfun$getReplicaAssignmentForTopics$2.apply(ZkUtils.scala:494)
 at 
 scala.collection.LinearSeqOptimized$class.foreach(LinearSeqOptimized.scala:61)
 at scala.collection.immutable.List.foreach(List.scala:45)
 at 
 kafka.utils.ZkUtils$.getReplicaAssignmentForTopics(ZkUtils.scala:494)
 at 
 kafka.controller.KafkaController.initializeControllerContext(KafkaController.scala:446)
 at 
 kafka.controller.KafkaController.onControllerFailover(KafkaController.scala:220)
 at 
 kafka.controller.KafkaController$$anonfun$1.apply$mcV$sp(KafkaController.scala:85)
 at 
 kafka.server.ZookeeperLeaderElector.elect(ZookeeperLeaderElector.scala:53)
 at 
 kafka.server.ZookeeperLeaderElector.startup(ZookeeperLeaderElector.scala:43)
 at kafka.controller.KafkaController.startup(KafkaController.scala:381)
 at kafka.server.KafkaServer.startup(KafkaServer.scala:90)
 at 
 kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:34)
 at kafka.Kafka$.main(Kafka.scala:46)
 at kafka.Kafka.main(Kafka.scala)
 Caused by: java.lang.NullPointerException
 at 
 scala.util.parsing.combinator.lexical.Scanners$Scanner.init(Scanners.scala:52)
 at scala.util.parsing.json.JSON$.parseRaw(JSON.scala:71)
 at scala.util.parsing.json.JSON$.parseFull(JSON.scala:85)
 at kafka.utils.Json$.liftedTree1$1(Json.scala:17)
 ... 16 more
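 One possible shape for the guard (a hypothetical helper, not the attached 
 patches):
 {code}
 public class ZkLayoutGuard {
     // If the registration JSON read from ZooKeeper is null, the cluster is
     // probably using the 0.7 path layout, so fail with an actionable message
     // instead of the NullPointerException above.
     public static String requireJson(String path, String json) {
         if (json == null) {
             throw new IllegalStateException("No data at " + path
                     + "; this ZooKeeper appears to use the 0.7 layout, "
                     + "which is incompatible with 0.8 brokers");
         }
         return json;
     }
 }
 {code}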



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1561) Data Loss for Incremented Replica Factor and Leader Election

2015-08-19 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1561?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-1561:

Fix Version/s: (was: 0.8.3)

 Data Loss for Incremented Replica Factor and Leader Election
 

 Key: KAFKA-1561
 URL: https://issues.apache.org/jira/browse/KAFKA-1561
 Project: Kafka
  Issue Type: Bug
Reporter: Guozhang Wang
Assignee: Guozhang Wang
 Attachments: broker0.log, broker2.log, consumer.log, producer.log


 This is reported on the mailing list (thanks to Jad).
 {quote}
 Hi,
 I have a test that continuously sends messages to one broker, brings up
 another broker, and adds it as a replica for all partitions, with it being
 the preferred replica for some. I have auto.leader.rebalance.enable=true,
 so replica election gets triggered. Data is being pumped to the old broker
 all the while. It seems that some data gets lost while switching over to
 the new leader. Is this a bug, or do I have something misconfigured? I also
 have request.required.acks=-1 on the producer.
 Here's what I think is happening:
 1. Producer writes message to broker 0, [EventServiceUpsertTopic,13], w/
 broker 0 currently leader, with ISR=(0), so write returns successfully,
 even when acks = -1. Correlation id 35836
 Producer log:
 [2014-07-24 14:44:26,991]  [DEBUG]  [dw-97 - PATCH
 /v1/events/type_for_test_bringupNewBroker_shouldRebalance_shouldNotLoseData/event?_idPath=idField_mergeFields=field1]
 [kafka.producer.BrokerPartitionInfo]  Partition
 [EventServiceUpsertTopic,13] has leader 0
 [2014-07-24 14:44:26,993]  [DEBUG]  [dw-97 - PATCH
 /v1/events/type_for_test_bringupNewBroker_shouldRebalance_shouldNotLoseData/event?_idPath=idField_mergeFields=field1]
 [k.producer.async.DefaultEventHandler]  Producer sent messages with
 correlation id 35836 for topics [EventServiceUpsertTopic,13] to broker 0 on
 localhost:56821
 2. Broker 1 is still catching up
 Broker 0 Log:
 [2014-07-24 14:44:26,992]  [DEBUG]  [kafka-request-handler-3]
 [kafka.cluster.Partition]  Partition [EventServiceUpsertTopic,13] on broker
 0: Old hw for partition [EventServiceUpsertTopic,13] is 971. New hw is 971.
 All leo's are 975,971
 [2014-07-24 14:44:26,992]  [DEBUG]  [kafka-request-handler-3]
 [kafka.server.KafkaApis]  [KafkaApi-0] Produce to local log in 0 ms
 [2014-07-24 14:44:26,992]  [DEBUG]  [kafka-processor-56821-0]
 [kafka.request.logger]  Completed request:Name: ProducerRequest; Version:
 0; CorrelationId: 35836; ClientId: ; RequiredAcks: -1; AckTimeoutMs: 1
 ms from client /127.0.0.1:57086
 ;totalTime:0,requestQueueTime:0,localTime:0,remoteTime:0,responseQueueTime:0,sendTime:0
 3. Leader election is triggered by the scheduler:
 Broker 0 Log:
 [2014-07-24 14:44:26,991]  [INFO ]  [kafka-scheduler-0]
 [k.c.PreferredReplicaPartitionLeaderSelector]
 [PreferredReplicaPartitionLeaderSelector]: Current leader 0 for partition [
 EventServiceUpsertTopic,13] is not the preferred replica. Trigerring
 preferred replica leader election
 [2014-07-24 14:44:26,993]  [DEBUG]  [kafka-scheduler-0]
 [kafka.utils.ZkUtils$]  Conditional update of path
 /brokers/topics/EventServiceUpsertTopic/partitions/13/state with value
 {"controller_epoch":1,"leader":1,"version":1,"leader_epoch":3,"isr":[0,1]}
 and expected version 3 succeeded, returning the new version: 4
 [2014-07-24 14:44:26,994]  [DEBUG]  [kafka-scheduler-0]
 [k.controller.PartitionStateMachine]  [Partition state machine on
 Controller 0]: After leader election, leader cache is updated to
 Map(Snipped(Leader:1,ISR:0,1,LeaderEpoch:3,ControllerEpoch:1),EndSnip)
 [2014-07-24 14:44:26,994]  [INFO ]  [kafka-scheduler-0]
 [kafka.controller.KafkaController]  [Controller 0]: Partition [
 EventServiceUpsertTopic,13] completed preferred replica leader election.
 New leader is 1
 4. Broker 1 is still behind, but it sets the high water mark to 971!!!
 Broker 1 Log:
 [2014-07-24 14:44:26,999]  [INFO ]  [kafka-request-handler-6]
 [kafka.server.ReplicaFetcherManager]  [ReplicaFetcherManager on broker 1]
 Removed fetcher for partitions [EventServiceUpsertTopic,13]
 [2014-07-24 14:44:27,000]  [DEBUG]  [kafka-request-handler-6]
 [kafka.cluster.Partition]  Partition [EventServiceUpsertTopic,13] on broker
 1: Old hw for partition [EventServiceUpsertTopic,13] is 970. New hw is -1.
 All leo's are -1,971
 [2014-07-24 14:44:27,098]  [DEBUG]  [kafka-request-handler-3]
 [kafka.server.KafkaApis]  [KafkaApi-1] Maybe update partition HW due to
 fetch request: Name: FetchRequest; Version: 0; CorrelationId: 1; ClientId:
 ReplicaFetcherThread-0-1; ReplicaId: 0; MaxWait: 500 ms; MinBytes: 1 bytes;
 RequestInfo: [EventServiceUpsertTopic,13] ->
 PartitionFetchInfo(971,1048576), Snipped
 [2014-07-24 14:44:27,098]  [DEBUG]  [kafka-request-handler-3]
 [kafka.cluster.Partition]  Partition [EventServiceUpsertTopic,13] on 

Re: KAFKA-2364 migrate docs from SVN to git

2015-08-19 Thread Ismael Juma
Hi all,

It looks like it's not feasible to update the code and website in the same
commit given existing limitations of the Apache infra:

https://issues.apache.org/jira/browse/INFRA-10143?focusedCommentId=14703175&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14703175

Best,
Ismael

On Wed, Aug 12, 2015 at 10:00 AM, Ismael Juma ism...@juma.me.uk wrote:

 Hi Gwen,

 I filed KAFKA-2425 as KAFKA-2364 is about improving the website
 documentation. Aseem Bansal seemed interested in helping us with the move
 so I pinged him in the issue.

 Best,
 Ismael

 On Wed, Aug 12, 2015 at 1:51 AM, Gwen Shapira g...@confluent.io wrote:

 Ah, there is already a JIRA in the title. Never mind :)

 On Tue, Aug 11, 2015 at 5:51 PM, Gwen Shapira g...@confluent.io wrote:

  The vote opened 5 days ago. I believe we can conclude with 3 binding
 +1, 3
  non-binding +1 and no -1.
 
  Ismael, are you opening a JIRA and migrating? Or are we looking for a
  volunteer?
 
  On Tue, Aug 11, 2015 at 5:46 PM, Ashish Singh asi...@cloudera.com
 wrote:
 
  +1 on same repo.
 
  On Tue, Aug 11, 2015 at 12:21 PM, Edward Ribeiro 
  edward.ribe...@gmail.com
  wrote:
 
   +1. As soon as possible, please. :)
  
   On Sat, Aug 8, 2015 at 4:05 PM, Neha Narkhede n...@confluent.io
  wrote:
  
+1 on the same repo for code and website. It helps to keep both in
  sync.
   
On Thu, Aug 6, 2015 at 1:52 PM, Grant Henke ghe...@cloudera.com
  wrote:
   
 +1 for the same repo. The closer docs can be to code the more
  accurate
they
 are likely to be. The same way we encourage unit tests for a new
 feature/patch. Updating the docs can be the same.

 If we follow Sqoop's process for example, how would small
 fixes/adjustments/additions to the live documentation occur
 without
  a
   new
 release?

 On Thu, Aug 6, 2015 at 3:33 PM, Guozhang Wang 
 wangg...@gmail.com
wrote:

  I am +1 on same repo too. I think keeping one git history of
 code
  /
   doc
  change may actually be beneficial for this approach as well.
 
  Guozhang
 
  On Thu, Aug 6, 2015 at 9:16 AM, Gwen Shapira 
 g...@confluent.io
wrote:
 
   I prefer same repo for one-commit / lower-barrier benefits.
  
   Sqoop has the following process, which decouples
 documentation
changes
  from
   website changes:
  
   1. Code github repo contains a doc directory, with the
   documentation
   written and maintained in AsciiDoc. Only one version of the
  documentation,
   since it is source controlled with the code. (unlike current
 SVN
where
 we
   have directories per version)
  
   2. Build process compiles the AsciiDoc to HTML and PDF
  
   3. When releasing, we post the documentation of the new
 release
  to
the
   website
  
   Gwen
  
   On Thu, Aug 6, 2015 at 12:20 AM, Ismael Juma 
 ism...@juma.me.uk
  
 wrote:
  
Hi,
   
For reference, here is the previous discussion on moving
 the
website
 to
Git:
   
http://search-hadoop.com/m/uyzND11JliU1E8QU92
   
People were positive to the idea as Jay said. I would like
 to
   see a
 bit
   of
a discussion around whether the website should be part of
 the
   same
 repo
   as
the code or not. I'll get the ball rolling.
   
Pros for same repo:
* One commit can update the code and website, which means:
** Lower barrier for updating docs along with relevant code
   changes
** Easier to require that both are updated at the same time
* More eyeballs on the website changes
* Automatically branched with the relevant code
   
Pros for separate repo:
* Potentially simpler for website-only changes (smaller
 repo,
   less
verification needed)
* Website changes don't clutter the code Git history
* No risk of website change affecting the code
   
Your thoughts, please.
   
Best,
Ismael
   
On Fri, Jul 31, 2015 at 6:15 PM, Aseem Bansal 
asmbans...@gmail.com
wrote:
   
 Hi

 When discussing on KAFKA-2364 migrating docs from svn to
 git
   came
 up.
That
 would make contributing to docs much easier. I have
  contributed
to
 groovy/grails via github so I think having mirror on
 github
   could
 be
 useful.

 Also I think unless there is some good reason it should
 be a
 separate
repo.
 No need to mix docs and code.

 I can try that out.

 Thoughts?

   
  
 
 
 
  --
  -- Guozhang
 



 --
 Grant Henke
 Software Engineer | Cloudera
 gr...@cloudera.com | twitter.com/gchenke |
  linkedin.com/in/granthenke

   
   
 

[jira] [Commented] (KAFKA-934) kafka hadoop consumer and producer use older 0.19.2 hadoop api's

2015-08-19 Thread Sriharsha Chintalapani (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14703214#comment-14703214
 ] 

Sriharsha Chintalapani commented on KAFKA-934:
--

[~gwenshap] Are we still planning on maintaining these in light of copycat?

 kafka hadoop consumer and producer use older 0.19.2 hadoop api's
 

 Key: KAFKA-934
 URL: https://issues.apache.org/jira/browse/KAFKA-934
 Project: Kafka
  Issue Type: Bug
  Components: contrib
Affects Versions: 0.8.0
 Environment: [amilkowski@localhost impl]$ uname -a
 Linux localhost.localdomain 3.9.4-200.fc18.x86_64 #1 SMP Fri May 24 20:10:49 
 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux
Reporter: Andrew Milkowski
Assignee: Sriharsha Chintalapani
  Labels: hadoop, hadoop-2.0, newbie

 The new hadoop API present in 0.20.1, especially the package 
 org.apache.hadoop.mapreduce.lib, is not used. 
 The code affected is both the consumer and the producer in the kafka contrib 
 package.
 [amilkowski@localhost contrib]$ pwd
 /opt/local/git/kafka/contrib
 [amilkowski@localhost contrib]$ ls -lt
 total 12
 drwxrwxr-x 8 amilkowski amilkowski 4096 May 30 11:14 hadoop-consumer
 drwxrwxr-x 6 amilkowski amilkowski 4096 May 29 19:31 hadoop-producer
 drwxrwxr-x 6 amilkowski amilkowski 4096 May 29 16:43 target
 [amilkowski@localhost contrib]$ 
 For example:
 import org.apache.hadoop.mapred.JobClient;
 import org.apache.hadoop.mapred.JobConf;
 import org.apache.hadoop.mapred.RunningJob;
 import org.apache.hadoop.mapred.TextOutputFormat;
 They use the 0.19.2 hadoop API format instead of drawing from the 0.20.1 API 
 set in org.apache.hadoop.mapreduce; this prevents merging the hadoop feature 
 into more modern hadoop implementations.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2218) reassignment tool needs to parse and validate the json

2015-08-19 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2218?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-2218:

Fix Version/s: (was: 0.8.3)

 reassignment tool needs to parse and validate the json
 --

 Key: KAFKA-2218
 URL: https://issues.apache.org/jira/browse/KAFKA-2218
 Project: Kafka
  Issue Type: Bug
Reporter: Joe Stein
Priority: Critical

 Ran into a production issue with broker.id being set to a string instead of 
 an integer; the controller logged nothing and stayed stuck. Eventually we saw 
 this in the logs of the brokers:
 [2015-05-23 15:41:05,863] 67396362 [ZkClient-EventThread-14-ERROR 
 org.I0Itec.zkclient.ZkEventThread - Error handling event ZkEvent[Data of 
 /admin/reassign_partitions changed sent to 
 kafka.controller.PartitionsReassignedListener@78c6aab8]
 java.lang.ClassCastException: java.lang.String cannot be cast to 
 java.lang.Integer
  at scala.runtime.BoxesRunTime.unboxToInt(Unknown Source)
  at 
 kafka.controller.KafkaController$$anonfun$4.apply(KafkaController.scala:579)
 We then had to delete the znode from zookeeper (/admin/reassign_partitions), 
 fix the json, and try again.
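
 A minimal sketch of the kind of up-front check the tool could do (the method 
 name and structure are illustrative, using Scala's bundled JSON parser rather 
 than whatever the tool would actually use): reject the file when any replica 
 id is not an integer, before anything is written to /admin/reassign_partitions.
 {code}
 import scala.util.parsing.json.JSON

 // Hypothetical pre-check: every replica id in the reassignment JSON must be
 // an integer. This parser returns JSON numbers as Double and quoted ids
 // (e.g. "1") as String, so a String here is exactly the bad input above.
 def replicaIdsAreInts(jsonText: String): Boolean =
   JSON.parseFull(jsonText) match {
     case Some(root: Map[String, Any] @unchecked) =>
       root.get("partitions") match {
         case Some(parts: List[Map[String, Any]] @unchecked) =>
           parts.forall { p =>
             p.get("replicas") match {
               case Some(rs: List[Any] @unchecked) =>
                 rs.forall {
                   case d: Double => d.isWhole
                   case _         => false
                 }
               case _ => false
             }
           }
         case _ => false
       }
     case _ => false
   }
 {code}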



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2219) reassign partition fails with offset 0 being asked for but obviously not there

2015-08-19 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2219?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-2219:

Fix Version/s: (was: 0.8.3)

 reassign partition fails with offset 0 being asked for but obviously not there
 --

 Key: KAFKA-2219
 URL: https://issues.apache.org/jira/browse/KAFKA-2219
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.8.1.1
Reporter: Joe Stein
Priority: Critical

 May be related to there being no data left in the partition.
 [2015-05-23 15:51:05,506]  122615762 [request-expiration-task] ERROR 
 kafka.server.KafkaApis  - [KafkaApi-10206101] Error when processing fetch 
 request for partition [cs.sensor.wrapped,44] offset 0 from fo
 llower with correlation id 3
 kafka.common.OffsetOutOfRangeException: Request for offset 0 but we only have 
 log segments in the range 1339916216 to 1339916216.
 at kafka.log.Log.read(Log.scala:380)
 at 
 kafka.server.KafkaApis.kafka$server$KafkaApis$$readMessageSet(KafkaApis.scala:530)
 at 
 kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$readMessageSets$1.apply(KafkaApis.scala:476)
 at 
 kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$readMessageSets$1.apply(KafkaApis.scala:471)
 at 
 scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
 at 
 scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
 at scala.collection.immutable.Map$Map2.foreach(Map.scala:130)
 at 
 scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
 at scala.collection.AbstractTraversable.map(Traversable.scala:105)
 at 
 kafka.server.KafkaApis.kafka$server$KafkaApis$$readMessageSets(KafkaApis.scala:471)
 at 
 kafka.server.KafkaApis$FetchRequestPurgatory.expire(KafkaApis.scala:783)
 at 
 kafka.server.KafkaApis$FetchRequestPurgatory.expire(KafkaApis.scala:765)
 at 
 kafka.server.RequestPurgatory$ExpiredRequestReaper.run(RequestPurgatory.scala:216)
 at java.lang.Thread.run(Thread.java:722)
 Production to this topic was stopped; we waited for log cleanup (so there was 
 no data in the partitions) and then reassigned. It may also be related to 
 some brokers being in a bad state.
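
 For context, the failing check is essentially a bounds test; an illustrative 
 sketch (not Kafka's actual Log.read code):
 {code}
 // Illustrative sketch of the bounds check behind OffsetOutOfRangeException.
 case class LogRange(startOffset: Long, endOffset: Long) {
   def read(offset: Long): Either[String, Long] =
     if (offset < startOffset || offset > endOffset)
       Left(s"Request for offset $offset but we only have log segments " +
            s"in the range $startOffset to $endOffset.")
     else
       Right(offset)
 }

 // A fetch for offset 0 fails once retention has moved the log start forward:
 // LogRange(1339916216L, 1339916216L).read(0) yields the error quoted above.
 {code}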



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2180) topics never create on brokers though it succeeds in tool and is in zookeeper

2015-08-19 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2180?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-2180:

Fix Version/s: (was: 0.8.3)

 topics never create on brokers though it succeeds in tool and is in zookeeper
 -

 Key: KAFKA-2180
 URL: https://issues.apache.org/jira/browse/KAFKA-2180
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.8.1.2
Reporter: Joe Stein
Priority: Critical

 Ran into an issue with a 0.8.2.1 cluster where topic creation was succeeding 
 when running bin/kafka-topics.sh --create, and the topic was seen in 
 zookeeper, but the brokers never got updated. 
 We ended up fixing this by deleting the /controller znode so a controller 
 leader election would occur. We really should have some better way to make 
 the controller fail over ( KAFKA-1778 ) than rmr /controller in the zookeeper 
 shell.
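
 For reference, the manual workaround boils down to deleting a single leaf 
 znode; a sketch using the ZkClient library Kafka already depends on (the 
 connection string and timeouts are illustrative):
 {code}
 import org.I0Itec.zkclient.ZkClient

 // Delete /controller so the controller-path watchers fire and a new
 // election takes place. /controller is a leaf znode, so a plain delete
 // is equivalent to `rmr /controller` here.
 val zkClient = new ZkClient("localhost:2181", 30000, 30000)
 try {
   zkClient.delete("/controller")
 } finally {
   zkClient.close()
 }
 {code}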



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1753) add --decommission-broker option

2015-08-19 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1753?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-1753:

Fix Version/s: (was: 0.8.3)

 add --decommission-broker option
 

 Key: KAFKA-1753
 URL: https://issues.apache.org/jira/browse/KAFKA-1753
 Project: Kafka
  Issue Type: Sub-task
  Components: tools
Reporter: Dmitry Pekar
Assignee: Dmitry Pekar





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1715) better advertising of the bound and working interface

2015-08-19 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1715?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-1715:

Fix Version/s: (was: 0.8.3)

 better advertising of the bound and working interface
 -

 Key: KAFKA-1715
 URL: https://issues.apache.org/jira/browse/KAFKA-1715
 Project: Kafka
  Issue Type: Bug
Reporter: Joe Stein
  Labels: newbie

 As part of the auto discovery of brokers and meta data messaging we should 
 try to advertise the interface that is bound and working better. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: KAFKA-2364 migrate docs from SVN to git

2015-08-19 Thread Gwen Shapira
Yeah, so the way this works in a few other projects I worked on is:

* The code repo has a /docs directory with the latest revision of the docs
(not multiple versions, just one that matches the latest state of code)
* When you submit a patch that requires doc modification, you modify all
relevant files in same patch and they get reviewed and committed together
(ideally)
* When we release, we copy the docs matching the release and commit to SVN
website. We also do this occasionally to fix bugs in earlier docs.
* Release artifacts include a copy of the docs

Nice to have:
* Docs are in Asciidoc and build generates the HTML. Asciidoc is easier to
edit and review.

I suggest a similar process for Kafka.

On Wed, Aug 19, 2015 at 8:53 AM, Ismael Juma ism...@juma.me.uk wrote:

 I should clarify: it's not possible unless we add an additional step that
 moves the docs from the code repo to the website repo.

 Ismael

 On Wed, Aug 19, 2015 at 4:42 PM, Ismael Juma ism...@juma.me.uk wrote:

  Hi all,
 
  It looks like it's not feasible to update the code and website in the
 same
  commit given existing limitations of the Apache infra:
 
 
 
 https://issues.apache.org/jira/browse/INFRA-10143?focusedCommentId=14703175page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14703175
 
  Best,
  Ismael
 
  On Wed, Aug 12, 2015 at 10:00 AM, Ismael Juma ism...@juma.me.uk wrote:
 
  Hi Gwen,
 
  I filed KAFKA-2425 as KAFKA-2364 is about improving the website
  documentation. Aseem Bansal seemed interested in helping us with the
 move
  so I pinged him in the issue.
 
  Best,
  Ismael
 
  On Wed, Aug 12, 2015 at 1:51 AM, Gwen Shapira g...@confluent.io
 wrote:
 
  Ah, there is already a JIRA in the title. Never mind :)
 
  On Tue, Aug 11, 2015 at 5:51 PM, Gwen Shapira g...@confluent.io
 wrote:
 
   The vote opened 5 days ago. I believe we can conclude with 3 binding
  +1, 3
   non-binding +1 and no -1.
  
   Ismael, are you opening and JIRA and migrating? Or are we looking
 for a
   volunteer?
  
   On Tue, Aug 11, 2015 at 5:46 PM, Ashish Singh asi...@cloudera.com
  wrote:
  
   +1 on same repo.
  
   On Tue, Aug 11, 2015 at 12:21 PM, Edward Ribeiro 
   edward.ribe...@gmail.com
   wrote:
  
+1. As soon as possible, please. :)
   
On Sat, Aug 8, 2015 at 4:05 PM, Neha Narkhede n...@confluent.io
   wrote:
   
 +1 on the same repo for code and website. It helps to keep both
 in
   sync.

 On Thu, Aug 6, 2015 at 1:52 PM, Grant Henke 
 ghe...@cloudera.com
   wrote:

  +1 for the same repo. The closer docs can be to code the more
   accurate
 they
  are likely to be. The same way we encourage unit tests for a
 new
  feature/patch. Updating the docs can be the same.
 
  If we follow Sqoop's process for example, how would small
  fixes/adjustments/additions to the live documentation occur
  without
   a
new
  release?
 
  On Thu, Aug 6, 2015 at 3:33 PM, Guozhang Wang 
  wangg...@gmail.com
 wrote:
 
   I am +1 on same repo too. I think keeping one git history of
  code
   /
doc
   change may actually be beneficial for this approach as well.
  
   Guozhang
  
   On Thu, Aug 6, 2015 at 9:16 AM, Gwen Shapira 
  g...@confluent.io
 wrote:
  
I prefer same repo for one-commit / lower-barrier
 benefits.
   
Sqoop has the following process, which decouples
  documentation
 changes
   from
website changes:
   
1. Code github repo contains a doc directory, with the
documentation
written and maintained in AsciiDoc. Only one version of
 the
   documentation,
since it is source controlled with the code. (unlike
  current SVN
 where
  we
have directories per version)
   
2. Build process compiles the AsciiDoc to HTML and PDF
   
3. When releasing, we post the documentation of the new
  release
   to
 the
website
   
Gwen
   
On Thu, Aug 6, 2015 at 12:20 AM, Ismael Juma 
  ism...@juma.me.uk
   
  wrote:
   
 Hi,

 For reference, here is the previous discussion on moving
  the
 website
  to
 Git:

 http://search-hadoop.com/m/uyzND11JliU1E8QU92

 People were positive to the idea as Jay said. I would
  like to
see a
  bit
of
 a discussion around whether the website should be part
 of
  the
same
  repo
as
 the code or not. I'll get the ball rolling.

 Pros for same repo:
 * One commit can update the code and website, which
 means:
 ** Lower barrier for updating docs along with relevant
  code
changes
 ** Easier to require that both are updated at the same
  time
 * More eyeballs on the website changes
 * Automatically branched with the relevant code

 Pros for separate repo:
  

[jira] [Created] (KAFKA-2445) Failed test: kafka.producer.ProducerTest > testSendWithDeadBroker FAILED

2015-08-19 Thread Gwen Shapira (JIRA)
Gwen Shapira created KAFKA-2445:
---

 Summary: Failed test: kafka.producer.ProducerTest > testSendWithDeadBroker FAILED
 Key: KAFKA-2445
 URL: https://issues.apache.org/jira/browse/KAFKA-2445
 Project: Kafka
  Issue Type: Bug
Reporter: Gwen Shapira


This test failed on Jenkins build: 
https://builds.apache.org/job/Kafka-trunk/590/console

kafka.producer.ProducerTest > testSendWithDeadBroker FAILED
java.lang.AssertionError: Message set should have 1 message
at org.junit.Assert.fail(Assert.java:92)
at org.junit.Assert.assertTrue(Assert.java:44)
at 
kafka.producer.ProducerTest.testSendWithDeadBroker(ProducerTest.scala:260)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1420) Replace AdminUtils.createOrUpdateTopicPartitionAssignmentPathInZK with TestUtils.createTopic in unit tests

2015-08-19 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1420?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-1420:

Fix Version/s: (was: 0.8.3)

 Replace AdminUtils.createOrUpdateTopicPartitionAssignmentPathInZK with 
 TestUtils.createTopic in unit tests
 --

 Key: KAFKA-1420
 URL: https://issues.apache.org/jira/browse/KAFKA-1420
 Project: Kafka
  Issue Type: Bug
Reporter: Guozhang Wang
Assignee: Jonathan Natkins
  Labels: newbie
 Attachments: KAFKA-1420.patch, KAFKA-1420_2014-07-30_11:18:26.patch, 
 KAFKA-1420_2014-07-30_11:24:55.patch, KAFKA-1420_2014-08-02_11:04:15.patch, 
 KAFKA-1420_2014-08-10_14:12:05.patch, KAFKA-1420_2014-08-10_23:03:46.patch


 This is a follow-up JIRA from KAFKA-1389.
 There are a bunch of places in the unit tests where we misuse 
 AdminUtils.createOrUpdateTopicPartitionAssignmentPathInZK to create topics, 
 where TestUtils.createTopic needs to be used instead.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1351) String.format is very expensive in Scala

2015-08-19 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1351?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-1351:

Fix Version/s: (was: 0.8.3)

 String.format is very expensive in Scala
 

 Key: KAFKA-1351
 URL: https://issues.apache.org/jira/browse/KAFKA-1351
 Project: Kafka
  Issue Type: Bug
  Components: core
Affects Versions: 0.7.2, 0.8.0, 0.8.1
Reporter: Neha Narkhede
  Labels: newbie
 Attachments: KAFKA-1351.patch, KAFKA-1351_2014-04-07_18:02:18.patch, 
 KAFKA-1351_2014-04-09_15:40:11.patch


 As found in KAFKA-1350, logging is causing significant overhead in the 
 performance of a Kafka server. There are several info statements that use 
 String.format which is particularly expensive. We should investigate adding 
 our own version of String.format that merely uses string concatenation under 
 the covers.
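
 A minimal sketch of that idea (illustrative names, not a proposed patch): 
 build the message with plain concatenation, and only when the level is 
 enabled, so the formatting cost disappears when logging is off.
 {code}
 object CheapLog {
   var infoEnabled = true // stand-in for a real logger's level check

   // By-name parameter: the message string is only built if info is enabled.
   def info(msg: => String): Unit =
     if (infoEnabled) println(msg)
 }

 // Instead of: log.info("Rolled segment %s at offset %d".format(name, offset))
 def example(name: String, offset: Long): Unit =
   CheapLog.info("Rolled segment " + name + " at offset " + offset)
 {code}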



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1751) handle broker not exists and topic not exists scenarios

2015-08-19 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1751?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-1751:

Fix Version/s: (was: 0.8.3)

 handle broker not exists and topic not exists scenarios
 ---

 Key: KAFKA-1751
 URL: https://issues.apache.org/jira/browse/KAFKA-1751
 Project: Kafka
  Issue Type: Sub-task
  Components: tools
Reporter: Dmitry Pekar
Assignee: Dmitry Pekar
 Attachments: KAFKA-1751.patch, KAFKA-1751_2014-11-17_16:25:14.patch, 
 KAFKA-1751_2014-11-17_16:33:43.patch, KAFKA-1751_2014-11-19_11:56:57.patch, 
 KAFKA-1751_2014-11-27_13:39:24.patch, kafka-1751.patch


 Merged with KAFKA-1750 so that both pass through a single code review.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: KAFKA-2364 migrate docs from SVN to git

2015-08-19 Thread Ismael Juma
I should clarify: it's not possible unless we add an additional step that
moves the docs from the code repo to the website repo.

Ismael

On Wed, Aug 19, 2015 at 4:42 PM, Ismael Juma ism...@juma.me.uk wrote:

 Hi all,

 It looks like it's not feasible to update the code and website in the same
 commit given existing limitations of the Apache infra:


 https://issues.apache.org/jira/browse/INFRA-10143?focusedCommentId=14703175page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14703175

 Best,
 Ismael

 On Wed, Aug 12, 2015 at 10:00 AM, Ismael Juma ism...@juma.me.uk wrote:

 Hi Gwen,

 I filed KAFKA-2425 as KAFKA-2364 is about improving the website
 documentation. Aseem Bansal seemed interested in helping us with the move
 so I pinged him in the issue.

 Best,
 Ismael

 On Wed, Aug 12, 2015 at 1:51 AM, Gwen Shapira g...@confluent.io wrote:

 Ah, there is already a JIRA in the title. Never mind :)

 On Tue, Aug 11, 2015 at 5:51 PM, Gwen Shapira g...@confluent.io wrote:

  The vote opened 5 days ago. I believe we can conclude with 3 binding
 +1, 3
  non-binding +1 and no -1.
 
  Ismael, are you opening and JIRA and migrating? Or are we looking for a
  volunteer?
 
  On Tue, Aug 11, 2015 at 5:46 PM, Ashish Singh asi...@cloudera.com
 wrote:
 
  +1 on same repo.
 
  On Tue, Aug 11, 2015 at 12:21 PM, Edward Ribeiro 
  edward.ribe...@gmail.com
  wrote:
 
   +1. As soon as possible, please. :)
  
   On Sat, Aug 8, 2015 at 4:05 PM, Neha Narkhede n...@confluent.io
  wrote:
  
+1 on the same repo for code and website. It helps to keep both in
  sync.
   
On Thu, Aug 6, 2015 at 1:52 PM, Grant Henke ghe...@cloudera.com
  wrote:
   
 +1 for the same repo. The closer docs can be to code the more
  accurate
they
 are likely to be. The same way we encourage unit tests for a new
 feature/patch. Updating the docs can be the same.

 If we follow Sqoop's process for example, how would small
 fixes/adjustments/additions to the live documentation occur
 without
  a
   new
 release?

 On Thu, Aug 6, 2015 at 3:33 PM, Guozhang Wang 
 wangg...@gmail.com
wrote:

  I am +1 on same repo too. I think keeping one git history of
 code
  /
   doc
  change may actually be beneficial for this approach as well.
 
  Guozhang
 
  On Thu, Aug 6, 2015 at 9:16 AM, Gwen Shapira 
 g...@confluent.io
wrote:
 
   I prefer same repo for one-commit / lower-barrier benefits.
  
   Sqoop has the following process, which decouples
 documentation
changes
  from
   website changes:
  
   1. Code github repo contains a doc directory, with the
   documentation
   written and maintained in AsciiDoc. Only one version of the
  documentation,
   since it is source controlled with the code. (unlike
 current SVN
where
 we
   have directories per version)
  
   2. Build process compiles the AsciiDoc to HTML and PDF
  
   3. When releasing, we post the documentation of the new
 release
  to
the
   website
  
   Gwen
  
   On Thu, Aug 6, 2015 at 12:20 AM, Ismael Juma 
 ism...@juma.me.uk
  
 wrote:
  
Hi,
   
For reference, here is the previous discussion on moving
 the
website
 to
Git:
   
http://search-hadoop.com/m/uyzND11JliU1E8QU92
   
People were positive to the idea as Jay said. I would
 like to
   see a
 bit
   of
a discussion around whether the website should be part of
 the
   same
 repo
   as
the code or not. I'll get the ball rolling.
   
Pros for same repo:
* One commit can update the code and website, which means:
** Lower barrier for updating docs along with relevant
 code
   changes
** Easier to require that both are updated at the same
 time
* More eyeballs on the website changes
* Automatically branched with the relevant code
   
Pros for separate repo:
* Potentially simpler for website-only changes (smaller
 repo,
   less
verification needed)
* Website changes don't clutter the code Git history
* No risk of website change affecting the code
   
Your thoughts, please.
   
Best,
Ismael
   
On Fri, Jul 31, 2015 at 6:15 PM, Aseem Bansal 
asmbans...@gmail.com
wrote:
   
 Hi

 When discussing on KAFKA-2364 migrating docs from svn
 to git
   came
 up.
That
 would make contributing to docs much easier. I have
  contributed
to
 groovy/grails via github so I think having mirror on
 github
   could
 be
 useful.

 Also I think unless there is some good reason it should
 be a
 separate
repo.
 No need to mix docs and code.

 I can try that out.

 Thoughts?

   

Re: KAFKA-2364 migrate docs from SVN to git

2015-08-19 Thread Manikumar Reddy
Hi,

  We have raised an Apache Infra ticket for migrating the site docs from svn
to git.
  Currently, the gitwcsub client only supports using the asf-site branch
for site docs.
  The Infra team is suggesting creating a new git repo for the site docs.

   Infra ticket here:
   https://issues.apache.org/jira/browse/INFRA-10143

   Possible Options:
   1. Maintain code and docs in the same repo, but on different branches
(trunk and asf-site).
   2. Create a new git repo for the docs and integrate it with gitwcsub.

   I vote for the second option.


Kumar

On Wed, Aug 12, 2015 at 3:51 PM, Edward Ribeiro edward.ribe...@gmail.com
wrote:

 FYI, I created a tiny trivial patch to address a typo in the web site
 (KAFKA-2418), so maybe you can review it and eventually commit before
 moving to github. ;)

 Cheers,
 Eddie
 Em 12/08/2015 06:01, Ismael Juma ism...@juma.me.uk escreveu:

  Hi Gwen,
 
  I filed KAFKA-2425 as KAFKA-2364 is about improving the website
  documentation. Aseem Bansal seemed interested in helping us with the move
  so I pinged him in the issue.
 
  Best,
  Ismael
 
  On Wed, Aug 12, 2015 at 1:51 AM, Gwen Shapira g...@confluent.io wrote:
 
   Ah, there is already a JIRA in the title. Never mind :)
  
   On Tue, Aug 11, 2015 at 5:51 PM, Gwen Shapira g...@confluent.io
 wrote:
  
The vote opened 5 days ago. I believe we can conclude with 3 binding
  +1,
   3
non-binding +1 and no -1.
   
Ismael, are you opening and JIRA and migrating? Or are we looking
 for a
volunteer?
   
On Tue, Aug 11, 2015 at 5:46 PM, Ashish Singh asi...@cloudera.com
   wrote:
   
+1 on same repo.
   
On Tue, Aug 11, 2015 at 12:21 PM, Edward Ribeiro 
edward.ribe...@gmail.com
wrote:
   
 +1. As soon as possible, please. :)

 On Sat, Aug 8, 2015 at 4:05 PM, Neha Narkhede n...@confluent.io
wrote:

  +1 on the same repo for code and website. It helps to keep both
 in
sync.
 
  On Thu, Aug 6, 2015 at 1:52 PM, Grant Henke 
 ghe...@cloudera.com
wrote:
 
   +1 for the same repo. The closer docs can be to code the more
accurate
  they
   are likely to be. The same way we encourage unit tests for a
 new
   feature/patch. Updating the docs can be the same.
  
   If we follow Sqoop's process for example, how would small
   fixes/adjustments/additions to the live documentation occur
   without
a
 new
   release?
  
   On Thu, Aug 6, 2015 at 3:33 PM, Guozhang Wang 
  wangg...@gmail.com
   
  wrote:
  
I am +1 on same repo too. I think keeping one git history of
   code
/
 doc
change may actually be beneficial for this approach as well.
   
Guozhang
   
On Thu, Aug 6, 2015 at 9:16 AM, Gwen Shapira 
  g...@confluent.io
   
  wrote:
   
 I prefer same repo for one-commit / lower-barrier
 benefits.

 Sqoop has the following process, which decouples
  documentation
  changes
from
 website changes:

 1. Code github repo contains a doc directory, with the
 documentation
 written and maintained in AsciiDoc. Only one version of
 the
documentation,
 since it is source controlled with the code. (unlike
 current
   SVN
  where
   we
 have directories per version)

 2. Build process compiles the AsciiDoc to HTML and PDF

 3. When releasing, we post the documentation of the new
   release
to
  the
 website

 Gwen

 On Thu, Aug 6, 2015 at 12:20 AM, Ismael Juma 
   ism...@juma.me.uk

   wrote:

  Hi,
 
  For reference, here is the previous discussion on moving
  the
  website
   to
  Git:
 
  http://search-hadoop.com/m/uyzND11JliU1E8QU92
 
  People were positive to the idea as Jay said. I would
 like
   to
 see a
   bit
 of
  a discussion around whether the website should be part
 of
   the
 same
   repo
 as
  the code or not. I'll get the ball rolling.
 
  Pros for same repo:
  * One commit can update the code and website, which
 means:
  ** Lower barrier for updating docs along with relevant
  code
 changes
  ** Easier to require that both are updated at the same
  time
  * More eyeballs on the website changes
  * Automatically branched with the relevant code
 
  Pros for separate repo:
  * Potentially simpler for website-only changes (smaller
   repo,
 less
  verification needed)
  * Website changes don't clutter the code Git history
  * No risk of website change affecting the code
 
  Your thoughts, please.
 
  Best,
  Ismael
 
  On Fri, Jul 31, 2015 at 6:15 PM, Aseem Bansal 
  asmbans...@gmail.com
  

[GitHub] kafka pull request: KAFKA-2411; remove usage of blocking channel

2015-08-19 Thread ijuma
GitHub user ijuma opened a pull request:

https://github.com/apache/kafka/pull/151

KAFKA-2411; remove usage of blocking channel



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ijuma/kafka 
kafka-2411-remove-usage-of-blocking-channel

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/151.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #151


commit dbcde7e828a250708752866c4610298773dea006
Author: Ismael Juma ism...@juma.me.uk
Date:   2015-08-19T13:30:35Z

Introduce `ChannelBuilders.create` and use it in `ClientUtils` and 
`SocketServer`

commit 6de8b9b18c6bfb67e72a4fccc10768dff15098f8
Author: Ismael Juma ism...@juma.me.uk
Date:   2015-08-19T14:22:55Z

Use `Selector` instead of `BlockingChannel` for controlled shutdown

commit da7a980887ab2b5d007ddf80c3059b6619d52f99
Author: Ismael Juma ism...@juma.me.uk
Date:   2015-08-19T14:23:11Z

Use `Selector` instead of `BlockingChannel` in `ControllerChannelManager`




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Created] (KAFKA-2444) Fail test: kafka.api.QuotasTest > testThrottledProducerConsumer FAILED

2015-08-19 Thread Gwen Shapira (JIRA)
Gwen Shapira created KAFKA-2444:
---

 Summary: Fail test: kafka.api.QuotasTest > testThrottledProducerConsumer FAILED
 Key: KAFKA-2444
 URL: https://issues.apache.org/jira/browse/KAFKA-2444
 Project: Kafka
  Issue Type: Bug
Reporter: Gwen Shapira


This test has been failing on Jenkins builds several times in the last few 
days. For example: https://builds.apache.org/job/Kafka-trunk/591/console

kafka.api.QuotasTest > testThrottledProducerConsumer FAILED
junit.framework.AssertionFailedError: Should have been throttled
at junit.framework.Assert.fail(Assert.java:47)
at junit.framework.Assert.assertTrue(Assert.java:20)
at 
kafka.api.QuotasTest.testThrottledProducerConsumer(QuotasTest.scala:136)




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2444) Fail test: kafka.api.QuotasTest > testThrottledProducerConsumer FAILED

2015-08-19 Thread Aditya Auradkar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2444?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14703192#comment-14703192
 ] 

Aditya Auradkar commented on KAFKA-2444:


[~gwenshap] will do

 Fail test: kafka.api.QuotasTest > testThrottledProducerConsumer FAILED
 --

 Key: KAFKA-2444
 URL: https://issues.apache.org/jira/browse/KAFKA-2444
 Project: Kafka
  Issue Type: Bug
Reporter: Gwen Shapira
Assignee: Aditya Auradkar

 This test has been failing on Jenkins builds several times in the last few 
 days. For example: https://builds.apache.org/job/Kafka-trunk/591/console
 kafka.api.QuotasTest > testThrottledProducerConsumer FAILED
 junit.framework.AssertionFailedError: Should have been throttled
 at junit.framework.Assert.fail(Assert.java:47)
 at junit.framework.Assert.assertTrue(Assert.java:20)
 at 
 kafka.api.QuotasTest.testThrottledProducerConsumer(QuotasTest.scala:136)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2339) broker becomes unavailable if bad data is passed through the protocol

2015-08-19 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2339?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-2339:

Fix Version/s: (was: 0.8.3)

 broker becomes unavailable if bad data is passed through the protocol
 -

 Key: KAFKA-2339
 URL: https://issues.apache.org/jira/browse/KAFKA-2339
 Project: Kafka
  Issue Type: Bug
Reporter: Joe Stein
Assignee: Timothy Chen
Priority: Critical

 I ran into a situation where a non-integer value got passed for the partition 
 and the brokers went bonkers.
 Reproducible:
 {code}
 ah=1..2   # assigns the literal string "1..2", which is not a valid partition
 echo "don't do this in production" | kafkacat -b localhost:9092 -p $ah
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1685) Implement TLS/SSL tests

2015-08-19 Thread Jun Rao (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1685?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14703132#comment-14703132
 ] 

Jun Rao commented on KAFKA-1685:


This is done as part of KAFKA-1690.

 Implement TLS/SSL tests
 ---

 Key: KAFKA-1685
 URL: https://issues.apache.org/jira/browse/KAFKA-1685
 Project: Kafka
  Issue Type: Sub-task
  Components: security
Affects Versions: 0.8.2.1
Reporter: Jay Kreps
Assignee: Sriharsha Chintalapani
 Fix For: 0.8.3


 We need to write a suite of unit tests for TLS authentication. This should be 
 doable with a junit integration test. We can use the simple authorization 
 plugin with only a single user whitelisted. The test can start the server, 
 then connect with and without TLS, and validate that access is only possible 
 when authenticated. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: KAFKA-2364 migrate docs from SVN to git

2015-08-19 Thread Manikumar Reddy
Yes, we cannot. We need two separate GitHub PRs for code and doc changes.

On Wed, Aug 19, 2015 at 9:35 PM, Guozhang Wang wangg...@gmail.com wrote:

 Even under the second option, it sounds like we still cannot include the
 code and doc changes in one commit?

 Guozhang

 On Wed, Aug 19, 2015 at 8:56 AM, Manikumar Reddy ku...@nmsworks.co.in
 wrote:

  oops.. i did not check Ismail's mail.
 
  On Wed, Aug 19, 2015 at 9:25 PM, Manikumar Reddy ku...@nmsworks.co.in
  wrote:
 
   Hi,
  
 We have raised a Apache Infra ticket for migrating site docs from svn
- git .
 Currently, the gitwcsub client only supports using the asf-site
   branch for site docs.
 Infra team is suggesting to create  new git repo for site docs.
  
  Infra ticket here:
  https://issues.apache.org/jira/browse/INFRA-10143
  
  Possible Options:
  1. Maintain code and docs in same repo, but on different branches
   (trunk and asf-site)
  2. Create a new git repo for docs and integrate with gitwcsub.
  
  I vote for second option.
  
  
   Kumar
  
   On Wed, Aug 12, 2015 at 3:51 PM, Edward Ribeiro 
  edward.ribe...@gmail.com
   wrote:
  
   FYI, I created a tiny trivial patch to address a typo in the web site
   (KAFKA-2418), so maybe you can review it and eventually commit before
   moving to github. ;)
  
   Cheers,
   Eddie
   Em 12/08/2015 06:01, Ismael Juma ism...@juma.me.uk escreveu:
  
Hi Gwen,
   
I filed KAFKA-2425 as KAFKA-2364 is about improving the website
documentation. Aseem Bansal seemed interested in helping us with the
   move
so I pinged him in the issue.
   
Best,
Ismael
   
On Wed, Aug 12, 2015 at 1:51 AM, Gwen Shapira g...@confluent.io
   wrote:
   
 Ah, there is already a JIRA in the title. Never mind :)

 On Tue, Aug 11, 2015 at 5:51 PM, Gwen Shapira g...@confluent.io
   wrote:

  The vote opened 5 days ago. I believe we can conclude with 3
  binding
+1,
 3
  non-binding +1 and no -1.
 
  Ismael, are you opening and JIRA and migrating? Or are we
 looking
   for a
  volunteer?
 
  On Tue, Aug 11, 2015 at 5:46 PM, Ashish Singh 
  asi...@cloudera.com
 wrote:
 
  +1 on same repo.
 
  On Tue, Aug 11, 2015 at 12:21 PM, Edward Ribeiro 
  edward.ribe...@gmail.com
  wrote:
 
   +1. As soon as possible, please. :)
  
   On Sat, Aug 8, 2015 at 4:05 PM, Neha Narkhede 
  n...@confluent.io
   
  wrote:
  
+1 on the same repo for code and website. It helps to keep
   both in
  sync.
   
On Thu, Aug 6, 2015 at 1:52 PM, Grant Henke 
   ghe...@cloudera.com
  wrote:
   
 +1 for the same repo. The closer docs can be to code the
  more
  accurate
they
 are likely to be. The same way we encourage unit tests
 for
  a
   new
 feature/patch. Updating the docs can be the same.

 If we follow Sqoop's process for example, how would small
 fixes/adjustments/additions to the live documentation
 occur
 without
  a
   new
 release?

 On Thu, Aug 6, 2015 at 3:33 PM, Guozhang Wang 
wangg...@gmail.com
 
wrote:

  I am +1 on same repo too. I think keeping one git
 history
   of
 code
  /
   doc
  change may actually be beneficial for this approach as
   well.
 
  Guozhang
 
  On Thu, Aug 6, 2015 at 9:16 AM, Gwen Shapira 
g...@confluent.io
 
wrote:
 
   I prefer same repo for one-commit / lower-barrier
   benefits.
  
   Sqoop has the following process, which decouples
documentation
changes
  from
   website changes:
  
   1. Code github repo contains a doc directory, with
 the
   documentation
   written and maintained in AsciiDoc. Only one version
 of
   the
  documentation,
   since it is source controlled with the code. (unlike
   current
 SVN
where
 we
   have directories per version)
  
   2. Build process compiles the AsciiDoc to HTML and
 PDF
  
   3. When releasing, we post the documentation of the
 new
 release
  to
the
   website
  
   Gwen
  
   On Thu, Aug 6, 2015 at 12:20 AM, Ismael Juma 
 ism...@juma.me.uk
  
 wrote:
  
Hi,
   
For reference, here is the previous discussion on
   moving
the
website
 to
Git:
   
http://search-hadoop.com/m/uyzND11JliU1E8QU92
   
People were positive to the idea as Jay said. I
 would
   like
 to
   see a
 bit
   of
a discussion around whether the website should be
  part
   of
 the
   same
 repo
   as
the 

[jira] [Comment Edited] (KAFKA-2425) Migrate website from SVN to Git

2015-08-19 Thread Aseem Bansal (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2425?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14703327#comment-14703327
 ] 

Aseem Bansal edited comment on KAFKA-2425 at 8/19/15 4:47 PM:
--

Sorry [~omkreddy], but I'm not getting the time currently; I've been busy for 
the past several days. I checked the INFRA team ticket and it says that only 
the asf-site branch is supported. I understand that is definitely a bummer. 
I'm just wondering whether it would be possible to use Travis or something 
else to auto-cherry-pick from trunk/master to this branch. Then commits could 
be made to master and the script would do the cherry-picks. I don't know how 
to do it, but I will look into whether it is possible.

Something like 
http://lea.verou.me/2011/10/easily-keep-gh-pages-in-sync-with-master/


was (Author: anshbansal):
Sorry [~omkreddy], but I'm not getting the time currently; I've been busy for 
the past several days. I checked the INFRA team ticket and it says that only 
the asf-site branch is supported. I understand that is definitely a bummer. 
I'm just wondering whether it would be possible to use Travis or something 
else to auto-cherry-pick from trunk/master to this branch. Then commits could 
be made to master and the script would do the cherry-picks. I don't know how 
to do it, but I will look into whether it is possible.

 Migrate website from SVN to Git 
 

 Key: KAFKA-2425
 URL: https://issues.apache.org/jira/browse/KAFKA-2425
 Project: Kafka
  Issue Type: Improvement
Reporter: Ismael Juma
Assignee: Manikumar Reddy

 The preference is to share the same Git repo for the code and website as per 
 discussion in the mailing list:
 http://search-hadoop.com/m/uyzND1Dux842dm7vg2
 Useful reference:
 https://blogs.apache.org/infra/entry/git_based_websites_available



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Problem with Kafka Review Board

2015-08-19 Thread Mayuresh Gharat
The Kafka review board returns a 500.


https://reviews.apache.org/r/36652

returns this:
Something broke! (Error 500) It appears something broke when you tried to
go to here. This is either a bug in Review Board or a server configuration
error. Please report this to your administrator


-Regards,
Mayuresh R. Gharat
(862) 250-7125


[jira] [Created] (KAFKA-2447) Add capability to KafkaLog4jAppender to be able to use SSL

2015-08-19 Thread Ashish K Singh (JIRA)
Ashish K Singh created KAFKA-2447:
-

 Summary: Add capability to KafkaLog4jAppender to be able to use SSL
 Key: KAFKA-2447
 URL: https://issues.apache.org/jira/browse/KAFKA-2447
 Project: Kafka
  Issue Type: Improvement
Reporter: Ashish K Singh
Assignee: Ashish K Singh


With Kafka supporting SSL, it makes sense to augment KafkaLog4jAppender to be 
able to use SSL.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Review Request 36652: Patch for KAFKA-2351

2015-08-19 Thread Mayuresh Gharat


 On Aug. 19, 2015, 5:48 a.m., Joel Koshy wrote:
  core/src/main/scala/kafka/network/SocketServer.scala, line 265
  https://reviews.apache.org/r/36652/diff/4/?file=1039545#file1039545line265
 
  I'm also unclear at this point on what the right thing to do here would 
  be - i.e., log and continue or make it fatal as Becket suggested. I'm 
  leaning toward the latter but I agree we could revisit this.

By fatal, do you mean that we should shut down the broker?


- Mayuresh


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/36652/#review95824
---


On Aug. 13, 2015, 8:10 p.m., Mayuresh Gharat wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/36652/
 ---
 
 (Updated Aug. 13, 2015, 8:10 p.m.)
 
 
 Review request for kafka.
 
 
 Bugs: KAFKA-2351
 https://issues.apache.org/jira/browse/KAFKA-2351
 
 
 Repository: kafka
 
 
 Description
 ---
 
 Added a try-catch to catch any exceptions thrown by the nioSelector
 
 
 Addressed comments on the Jira ticket
 
 
 Addressed Jun's comments
 
 
 Diffs
 -
 
   core/src/main/scala/kafka/network/SocketServer.scala 
 dbe784b63817fd94e1593136926db17fac6fa3d7 
 
 Diff: https://reviews.apache.org/r/36652/diff/
 
 
 Testing
 ---
 
 
 Thanks,
 
 Mayuresh Gharat
 




Re: Review Request 36652: Patch for KAFKA-2351

2015-08-19 Thread Mayuresh Gharat

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/36652/#review95868
---



core/src/main/scala/kafka/network/SocketServer.scala (line 264)
https://reviews.apache.org/r/36652/#comment151020

    I referred to this (from the KAFKA-2353 patch): 
    https://www.sumologic.com/2014/05/05/why-you-should-never-catch-throwable-in-scala/

http://www.tzavellas.com/techblog/2010/09/20/catching-throwable-in-scala/
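
The linked articles boil down to: don't catch Throwable directly in Scala; 
match with scala.util.control.NonFatal so fatal errors still propagate. A 
minimal sketch (names are illustrative, not the actual SocketServer code):
{code}
import scala.util.control.NonFatal

// Catch NonFatal instead of Throwable so VirtualMachineError, ThreadDeath,
// etc. still propagate instead of being silently swallowed.
def pollLoop(poll: () => Unit, logError: String => Unit): Unit =
  try {
    poll() // e.g. a selector.poll() call that may throw
  } catch {
    case NonFatal(e) => logError("Error in processor loop: " + e)
  }
{code}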


- Mayuresh Gharat


On Aug. 13, 2015, 8:10 p.m., Mayuresh Gharat wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/36652/
 ---
 
 (Updated Aug. 13, 2015, 8:10 p.m.)
 
 
 Review request for kafka.
 
 
 Bugs: KAFKA-2351
 https://issues.apache.org/jira/browse/KAFKA-2351
 
 
 Repository: kafka
 
 
 Description
 ---
 
 Added a try-catch to catch any exceptions thrown by the nioSelector
 
 
 Addressed comments on the Jira ticket
 
 
 Addressed Jun's comments
 
 
 Diffs
 -
 
   core/src/main/scala/kafka/network/SocketServer.scala 
 dbe784b63817fd94e1593136926db17fac6fa3d7 
 
 Diff: https://reviews.apache.org/r/36652/diff/
 
 
 Testing
 ---
 
 
 Thanks,
 
 Mayuresh Gharat
 




[jira] [Commented] (KAFKA-2425) Migrate website from SVN to Git

2015-08-19 Thread Aseem Bansal (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2425?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14703327#comment-14703327
 ] 

Aseem Bansal commented on KAFKA-2425:
-

Sorry [~omkreddy], but I'm not getting the time currently; I've been busy for 
the past several days. I checked the INFRA team ticket and it says that only 
the asf-site branch is supported. I understand that is definitely a bummer. 
I'm just wondering whether it would be possible to use Travis or something 
else to auto-cherry-pick from trunk/master to this branch. Then commits could 
be made to master and the script would do the cherry-picks. I don't know how 
to do it, but I will look into whether it is possible.

 Migrate website from SVN to Git 
 

 Key: KAFKA-2425
 URL: https://issues.apache.org/jira/browse/KAFKA-2425
 Project: Kafka
  Issue Type: Improvement
Reporter: Ismael Juma
Assignee: Manikumar Reddy

 The preference is to share the same Git repo for the code and website as per 
 discussion in the mailing list:
 http://search-hadoop.com/m/uyzND1Dux842dm7vg2
 Useful reference:
 https://blogs.apache.org/infra/entry/git_based_websites_available



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[DISCUSS] KIP-28 - Add a transform client for data processing

2015-08-19 Thread Yan Fang
Hi Guozhang,

Thank you for writing the KIP-28 up. (Hope this is the right thread for me to 
post some comments. :) 

I still have some confusion about the implementation of the Processor:

1. Why do we maintain a separate consumer and producer for each worker thread?
— From my understanding, the new consumer API will be able to fetch specific 
topic-partitions. Is one consumer enough for one Kafka process (shared among 
worker threads)? The same question for the producer: is one producer enough 
for sending messages out to the brokers? Would that give better performance?

2. How is the “Stream Synchronization” achieved?
— You talked about “pausing” and “notifying” the consumer, but this is still 
not very clear to me. If a worker thread has group_1 {topicA-0, topicB-0} and 
group_2 {topicA-1, topicB-1}, and topicB is much slower, how can we pause the 
consumer to sync topicA and topicB if there is only one consumer?

3. How does the partition timestamp monotonically increase?
— “When the lowest timestamp corresponding record gets processed by the 
thread, the partition time possibly gets advanced.” How does “gets advanced” 
work? Do we take another “lowest message timestamp” value? Doing that may not 
yield an advanced timestamp.

4. Thoughts about the local state management.
— From the description, I think there is one kv store per partition group. 
That means if one worker thread is assigned more than one partition group, it 
will have more than one kv-store connection. How can we avoid mis-operation? 
One partition group could easily write to another partition group’s kv store 
(they are in the same thread).

5. Do we plan to implement throttling?
— Since we are “forwarding” messages, it is very possible that an upstream 
processor is much faster than a downstream processor. How do we plan to deal 
with this?

6. How does the parallelism work?
— Do we achieve this simply by adding more threads? Or do we plan to have a 
mechanism that can deploy different threads to different machines? It is easy 
to imagine deploying different processors to different machines, but then what 
about the worker threads? And how is fault tolerance handled? Maybe this is 
out of scope for the KIP?

Two nits in the KIP-28 doc (a rough sketch of the interface shape follows 
below):

1. The “close” method is missing from interface Processor<K1,V1,K2,V2>, yet 
we have “override close()” in KafkaProcessor.

2. “punctuate” does not accept a parameter, while StatefulProcessJob has a 
punctuate method that does accept one.
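
For readers without the KIP handy, a rough Scala rendering of the interface 
shape under discussion (signatures approximate the KIP text, not the final 
API):
{code}
// Rough sketch only; the KIP's actual signatures may differ.
trait Processor[K1, V1, K2, V2] {
  def process(key: K1, value: V1): Unit // called once per input record
  def punctuate(): Unit                 // called periodically by the framework
  def close(): Unit                     // the method the first nit points at
}
{code}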

Thanks,
Yan



[jira] [Created] (KAFKA-2446) KAFKA-2205 causes existing Topic config changes to be lost

2015-08-19 Thread Aditya Auradkar (JIRA)
Aditya Auradkar created KAFKA-2446:
--

 Summary: KAFKA-2205 causes existing Topic config changes to be lost
 Key: KAFKA-2446
 URL: https://issues.apache.org/jira/browse/KAFKA-2446
 Project: Kafka
  Issue Type: Bug
Reporter: Aditya Auradkar
Assignee: Aditya Auradkar


The path was changed from /config/topics/ to /config/topic. This causes 
existing config overrides not to be read.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: KAFKA-2364 migrate docs from SVN to git

2015-08-19 Thread Ashish
+1 to what Gwen has suggested. This is what we follow in Flume.

All the latest doc changes are in git; once ready, you move the changes to
svn to update the website.
The only catch is that when you need to push specific changes to the website
outside the release cycle, you need to be a bit careful :)

On Wed, Aug 19, 2015 at 9:06 AM, Gwen Shapira g...@confluent.io wrote:
 Yeah, so the way this works in few other projects I worked on is:

 * The code repo has a /docs directory with the latest revision of the docs
 (not multiple versions, just one that matches the latest state of code)
 * When you submit a patch that requires doc modification, you modify all
 relevant files in same patch and they get reviewed and committed together
 (ideally)
 * When we release, we copy the docs matching the release and commit to SVN
 website. We also do this occasionally to fix bugs in earlier docs.
 * Release artifacts include a copy of the docs

 Nice to have:
 * Docs are in Asciidoc and build generates the HTML. Asciidoc is easier to
 edit and review.

 I suggest a similar process for Kafka.

 On Wed, Aug 19, 2015 at 8:53 AM, Ismael Juma ism...@juma.me.uk wrote:

 I should clarify: it's not possible unless we add an additional step that
 moves the docs from the code repo to the website repo.

 Ismael

 On Wed, Aug 19, 2015 at 4:42 PM, Ismael Juma ism...@juma.me.uk wrote:

  Hi all,
 
  It looks like it's not feasible to update the code and website in the
 same
  commit given existing limitations of the Apache infra:
 
 
 
 https://issues.apache.org/jira/browse/INFRA-10143?focusedCommentId=14703175page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14703175
 
  Best,
  Ismael
 
  On Wed, Aug 12, 2015 at 10:00 AM, Ismael Juma ism...@juma.me.uk wrote:
 
  Hi Gwen,
 
  I filed KAFKA-2425 as KAFKA-2364 is about improving the website
  documentation. Aseem Bansal seemed interested in helping us with the
 move
  so I pinged him in the issue.
 
  Best,
  Ismael
 
  On Wed, Aug 12, 2015 at 1:51 AM, Gwen Shapira g...@confluent.io
 wrote:
 
  Ah, there is already a JIRA in the title. Never mind :)
 
  On Tue, Aug 11, 2015 at 5:51 PM, Gwen Shapira g...@confluent.io
 wrote:
 
   The vote opened 5 days ago. I believe we can conclude with 3 binding
  +1, 3
   non-binding +1 and no -1.
  
   Ismael, are you opening and JIRA and migrating? Or are we looking
 for a
   volunteer?
  
   On Tue, Aug 11, 2015 at 5:46 PM, Ashish Singh asi...@cloudera.com
  wrote:
  
   +1 on same repo.
  
   On Tue, Aug 11, 2015 at 12:21 PM, Edward Ribeiro 
   edward.ribe...@gmail.com
   wrote:
  
+1. As soon as possible, please. :)
   
On Sat, Aug 8, 2015 at 4:05 PM, Neha Narkhede n...@confluent.io
   wrote:
   
 +1 on the same repo for code and website. It helps to keep both
 in
   sync.

 On Thu, Aug 6, 2015 at 1:52 PM, Grant Henke 
 ghe...@cloudera.com
   wrote:

  +1 for the same repo. The closer docs can be to code the more
   accurate
 they
  are likely to be. The same way we encourage unit tests for a
 new
  feature/patch. Updating the docs can be the same.
 
  If we follow Sqoop's process for example, how would small
  fixes/adjustments/additions to the live documentation occur
  without
   a
new
  release?
 
  On Thu, Aug 6, 2015 at 3:33 PM, Guozhang Wang 
  wangg...@gmail.com
 wrote:
 
   I am +1 on same repo too. I think keeping one git history of
  code
   /
doc
   change may actually be beneficial for this approach as well.
  
   Guozhang
  
   On Thu, Aug 6, 2015 at 9:16 AM, Gwen Shapira 
  g...@confluent.io
 wrote:
  
I prefer same repo for one-commit / lower-barrier
 benefits.
   
Sqoop has the following process, which decouples
  documentation
 changes
   from
website changes:
   
1. Code github repo contains a doc directory, with the
documentation
written and maintained in AsciiDoc. Only one version of
 the
   documentation,
since it is source controlled with the code. (unlike
  current SVN
 where
  we
have directories per version)
   
2. Build process compiles the AsciiDoc to HTML and PDF
   
3. When releasing, we post the documentation of the new
  release
   to
 the
website
   
Gwen
   
On Thu, Aug 6, 2015 at 12:20 AM, Ismael Juma 
  ism...@juma.me.uk
   
  wrote:
   
 Hi,

 For reference, here is the previous discussion on moving
  the
 website
  to
 Git:

 http://search-hadoop.com/m/uyzND11JliU1E8QU92

 People were positive to the idea as Jay said. I would
  like to
see a
  bit
of
 a discussion around whether the website should be part
 of
  the
same
  repo
as
 the code or not. I'll get the ball rolling.

 Pros for same repo:
 

[jira] [Commented] (KAFKA-2417) Ducktape tests for SSL/TLS

2015-08-19 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14703437#comment-14703437
 ] 

Ismael Juma commented on KAFKA-2417:


I'm listing what is already tested below:

* SSLSelectorTest 
(https://github.com/apache/kafka/blob/trunk/clients/src/test/java/org/apache/kafka/common/network/SSLSelectorTest.java):
** testSendLargeRequest
** testServerDisconnect
** testClientDisconnect
** testLargeMessageSequence
** testEmptyRequest
** testMute
** testRenegotiation

* SSLProducerSendTest 
(https://github.com/apache/kafka/blob/trunk/core/src/test/scala/integration/kafka/api/SSLProducerSendTest.scala)
** testSendOffset
** testClose
** testSendToPartition

* SSLConsumerTest 
(https://github.com/apache/kafka/blob/trunk/core/src/test/scala/integration/kafka/api/SSLConsumerTest.scala)
** testSimpleConsumption
** testAutoOffsetReset
** testSeek
** testGroupConsumption
** testPositionAndCommit
** testPartitionsFor

* SocketServerTest 
(https://github.com/apache/kafka/blob/trunk/core/src/test/scala/unit/kafka/network/SocketServerTest.scala)
** testSSLSocketServer

 Ducktape tests for SSL/TLS
 --

 Key: KAFKA-2417
 URL: https://issues.apache.org/jira/browse/KAFKA-2417
 Project: Kafka
  Issue Type: Sub-task
  Components: security
Reporter: Ismael Juma
 Fix For: 0.8.3


 The tests should be complementary to the unit/integration tests written as 
 part of KAFKA-1685.
 Things to consider:
 * Upgrade/downgrade to turn SSL on/off
 * Failure testing
 * Expired/revoked certificates
 * Renegotiation
 Some changes to ducktape may be required for upgrade scenarios.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2145) An option to add topic owners.

2015-08-19 Thread Parth Brahmbhatt (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2145?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14703539#comment-14703539
 ] 

Parth Brahmbhatt commented on KAFKA-2145:
-

[~junrao] 3-week reminder ping to request a review.

 An option to add topic owners. 
 ---

 Key: KAFKA-2145
 URL: https://issues.apache.org/jira/browse/KAFKA-2145
 Project: Kafka
  Issue Type: Improvement
Reporter: Parth Brahmbhatt
Assignee: Parth Brahmbhatt

 We need to expose a way for users to identify the users/groups that share 
 ownership of a topic. We discussed adding this as part of 
 https://issues.apache.org/jira/browse/KAFKA-2035 and agreed that it will be 
 simpler to add the owner as a LogConfig. 
 The owner field can be used for auditing, and also by the authorization layer 
 to grant access without having to explicitly configure ACLs. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2084) byte rate metrics per client ID (producer and consumer)

2015-08-19 Thread Guozhang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14703691#comment-14703691
 ] 

Guozhang Wang commented on KAFKA-2084:
--

[~auradkar] [~jjkoshy] ClientQuotaManagerTest and 
ThrottledResponseExpirationTest re-introduce the 
org.scalatest.junit.JUnit3Suite, which we removed in KAFKA-1782. Could you 
submit a follow-up patch to remove them?

 byte rate metrics per client ID (producer and consumer)
 ---

 Key: KAFKA-2084
 URL: https://issues.apache.org/jira/browse/KAFKA-2084
 Project: Kafka
  Issue Type: Sub-task
Reporter: Aditya Auradkar
Assignee: Aditya Auradkar
  Labels: quotas
 Attachments: KAFKA-2084.patch, KAFKA-2084_2015-04-09_18:10:56.patch, 
 KAFKA-2084_2015-04-10_17:24:34.patch, KAFKA-2084_2015-04-21_12:21:18.patch, 
 KAFKA-2084_2015-04-21_12:28:05.patch, KAFKA-2084_2015-05-05_15:27:35.patch, 
 KAFKA-2084_2015-05-05_17:52:02.patch, KAFKA-2084_2015-05-11_16:16:01.patch, 
 KAFKA-2084_2015-05-26_11:50:50.patch, KAFKA-2084_2015-06-02_17:02:00.patch, 
 KAFKA-2084_2015-06-02_17:09:28.patch, KAFKA-2084_2015-06-02_17:10:52.patch, 
 KAFKA-2084_2015-06-04_16:31:22.patch, KAFKA-2084_2015-06-12_10:39:35.patch, 
 KAFKA-2084_2015-06-29_17:53:44.patch, KAFKA-2084_2015-08-04_18:50:51.patch, 
 KAFKA-2084_2015-08-04_19:07:46.patch, KAFKA-2084_2015-08-07_11:27:51.patch, 
 KAFKA-2084_2015-08-10_13:48:50.patch, KAFKA-2084_2015-08-10_21:57:48.patch, 
 KAFKA-2084_2015-08-12_12:02:33.patch, KAFKA-2084_2015-08-12_12:04:51.patch, 
 KAFKA-2084_2015-08-12_12:08:17.patch, KAFKA-2084_2015-08-12_21:24:07.patch, 
 KAFKA-2084_2015-08-13_19:08:27.patch, KAFKA-2084_2015-08-13_19:19:16.patch, 
 KAFKA-2084_2015-08-14_17:43:00.patch


 We need to be able to track the bytes-in/bytes-out rate on a per-client ID 
 basis. This is necessary for quotas.
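
 As a self-contained illustration of the idea (this is not Kafka's metrics 
 code), a windowed byte-rate tracker keyed by client id might look like:
 {code}
 import scala.collection.mutable

 // Toy per-clientId byte-rate tracker: record byte counts per client and
 // report bytes/sec over the elapsed window.
 class ByteRates {
   private case class Window(var bytes: Long, start: Long)
   private val windows = mutable.Map.empty[String, Window]

   def record(clientId: String, bytes: Long): Unit = synchronized {
     val w = windows.getOrElseUpdate(clientId, Window(0L, System.nanoTime()))
     w.bytes += bytes
   }

   def rate(clientId: String): Double = synchronized {
     windows.get(clientId).map { w =>
       val elapsedSec = (System.nanoTime() - w.start) / 1e9
       if (elapsedSec > 0) w.bytes / elapsedSec else 0.0
     }.getOrElse(0.0)
   }
 }
 {code}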



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2210) KafkaAuthorizer: Add all public entities, config changes and changes to KafkaAPI and kafkaServer to allow pluggable authorizer implementation.

2015-08-19 Thread Parth Brahmbhatt (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2210?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14703537#comment-14703537
 ] 

Parth Brahmbhatt commented on KAFKA-2210:
-

[~junrao] [~ijuma] [~eribeiro] One-week reminder ping for the review request. 

 KafkaAuthorizer: Add all public entities, config changes and changes to 
 KafkaAPI and kafkaServer to allow pluggable authorizer implementation.
 --

 Key: KAFKA-2210
 URL: https://issues.apache.org/jira/browse/KAFKA-2210
 Project: Kafka
  Issue Type: Sub-task
  Components: security
Reporter: Parth Brahmbhatt
Assignee: Parth Brahmbhatt
 Fix For: 0.8.3

 Attachments: KAFKA-2210.patch, KAFKA-2210_2015-06-03_16:36:11.patch, 
 KAFKA-2210_2015-06-04_16:07:39.patch, KAFKA-2210_2015-07-09_18:00:34.patch, 
 KAFKA-2210_2015-07-14_10:02:19.patch, KAFKA-2210_2015-07-14_14:13:19.patch, 
 KAFKA-2210_2015-07-20_16:42:18.patch, KAFKA-2210_2015-07-21_17:08:21.patch, 
 KAFKA-2210_2015-08-10_18:31:54.patch


 This is the first subtask for KAFKA-1688. As part of this jira we intend to 
 agree on all the public entities, configs, and changes to existing kafka 
 classes needed to allow a pluggable authorizer implementation.
 Please see KIP-11 
 https://cwiki.apache.org/confluence/display/KAFKA/KIP-11+-+Authorization+Interface
  for detailed design. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)



[jira] [Commented] (KAFKA-1387) Kafka getting stuck creating ephemeral node it has already created when two zookeeper sessions are established in a very short period of time

2015-08-19 Thread Guozhang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14703532#comment-14703532
 ] 

Guozhang Wang commented on KAFKA-1387:
--

[~jwlent55] I agree that this fix may be appropriate just for broker / 
consumer registration; i.e., ZK should not be used to detect the 
misconfiguration where two brokers / clients use the same id. For that case, 
under the new approach they may end up in a delete-and-write war; we should 
consider fixing such mis-operation in a different manner, which is orthogonal 
to this JIRA. For leader election, one should not simply delete the path upon 
conflict; we should leave it as is. In the future, we should either fix the 
root cause in ZkClient or move on to a different client, as KIP-30 is 
currently discussing.

If you do not have time this week and feel it is OK, [~fpj] could you help 
taking it over?
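
For reference, a minimal sketch of the "delete if already exists" registration 
idea discussed above, written against the zkclient API (helper name 
hypothetical, not the actual patch):
{noformat}
import org.I0Itec.zkclient.ZkClient
import org.I0Itec.zkclient.exception.{ZkNoNodeException, ZkNodeExistsException}

object EphemeralRegistrationSketch {
  // Broker/consumer registration only: if the node is left over from a
  // previous session of this same process, delete it and retry once instead
  // of looping forever the way createEphemeralPathExpectConflictHandleZKBug can.
  def registerEphemeral(zk: ZkClient, path: String, data: String): Unit = {
    try zk.createEphemeral(path, data)
    catch {
      case _: ZkNodeExistsException =>
        try zk.delete(path) catch { case _: ZkNoNodeException => } // already gone
        zk.createEphemeral(path, data)
    }
  }
}
{noformat}
As noted above, this is only safe where the path is owned by a single process; 
leader election must not blindly delete a conflicting node.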

 Kafka getting stuck creating ephemeral node it has already created when two 
 zookeeper sessions are established in a very short period of time
 -

 Key: KAFKA-1387
 URL: https://issues.apache.org/jira/browse/KAFKA-1387
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.8.1.1
Reporter: Fedor Korotkiy
Priority: Blocker
  Labels: newbie, patch, zkclient-problems
 Attachments: kafka-1387.patch


 Kafka broker re-registers itself in zookeeper every time handleNewSession() 
 callback is invoked.
 https://github.com/apache/kafka/blob/0.8.1/core/src/main/scala/kafka/server/KafkaHealthcheck.scala
  
 Now imagine the following sequence of events.
 1) Zookeeper session reestablishes. handleNewSession() callback is queued by 
 the zkClient, but not invoked yet.
 2) Zookeeper session reestablishes again, queueing the callback a second time.
 3) First callback is invoked, creating /broker/[id] ephemeral path.
 4) Second callback is invoked and it tries to create /broker/[id] path using 
 createEphemeralPathExpectConflictHandleZKBug() function. But the path 
 already exists, so createEphemeralPathExpectConflictHandleZKBug() gets 
 stuck in an infinite loop.
 Seems like the controller election code has the same issue.
 I'm able to reproduce this issue on the 0.8.1 branch from github using the 
 following configs.
 # zookeeper
 tickTime=10
 dataDir=/tmp/zk/
 clientPort=2101
 maxClientCnxns=0
 # kafka
 broker.id=1
 log.dir=/tmp/kafka
 zookeeper.connect=localhost:2101
 zookeeper.connection.timeout.ms=100
 zookeeper.sessiontimeout.ms=100
 Just start kafka and zookeeper and then pause zookeeper several times using 
 Ctrl-Z.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [DISCUSSION] Kafka 0.8.2.2 release?

2015-08-19 Thread Jun Rao
Ok, thanks everyone for the feedback. Based on the feedback, I'd recommend
that we do an 0.8.2.2 release including just the two snappy bug fixes.

Jun

On Tue, Aug 18, 2015 at 12:59 PM, Gwen Shapira g...@confluent.io wrote:

 Any objections if I leave KAFKA-2114 (setting min.insync.replicas default)
 out?

 The test code is using changes that were done after 0.8.2.x cut-off, which
 makes it difficult to cherry-pick.

 Gwen



 On Tue, Aug 18, 2015 at 12:16 PM, Gwen Shapira g...@confluent.io wrote:

  Jun,
 
  KAFKA-2147 doesn't seem to have a commit associated with it, so I can't
  cherrypick just this fix.
  I suggest leaving this out since there is a 0.8.2.x workaround in the
 JIRA.
 
  Gwen
 
  On Mon, Aug 17, 2015 at 5:24 PM, Jun Rao j...@confluent.io wrote:
 
  Gwen,
 
  Thanks for putting the list together.
 
  I'd recommend that we exclude the following:
  KAFKA-1702: This is for the old producer and is only a problem if there are
  some unexpected exceptions (e.g. UnknownClass).
  KAFKA-2336: Most people don't change offsets.topic.num.partitions.
  KAFKA-1724: The patch there was never committed since the fix is included in
  another jira (a much larger patch).
  KAFKA-2241: This doesn't seem to be a common problem. It only happens when
  the fetch request blocks on the broker for an extended period of time, which
  should be rare.
 
  I'd also recommend that we include the following:
  KAFKA-2147: This impacts the memory size of the purgatory and a number of
  people have experienced that. The fix is small and has been tested in
  production usage. It hasn't been committed though since the issue is
  already fixed in trunk and we weren't planning for an 0.8.2.2 release then.
 
  Thanks,
 
  Jun
 
  On Mon, Aug 17, 2015 at 2:56 PM, Gwen Shapira g...@confluent.io
 wrote:
 
   Thanks for creating a list, Grant!
  
   I placed it on the wiki with a quick evaluation of the content and
  whether
   it should be in 0.8.2.2:
  
  
 
 https://cwiki.apache.org/confluence/display/KAFKA/Proposed+patches+for+0.8.2.2
  
   I'm attempting to only cherrypick fixes that are both important for
  large
   number of users (or very critical to some users) and very safe (mostly
   judged by the size of the change, but not only)
  
   If your favorite bugfix is missing from the list, or is there but
 marked
   No, please let us know (in this thread) what we are missing and why
  it is
   both important and safe.
   Also, if I accidentally included something you consider unsafe, speak
  up!
  
   Gwen
  
   On Mon, Aug 17, 2015 at 8:17 AM, Grant Henke ghe...@cloudera.com
  wrote:
  
+dev
   
Adding dev list back in. Somehow it got dropped.
   
   
On Mon, Aug 17, 2015 at 10:16 AM, Grant Henke ghe...@cloudera.com
   wrote:
   
  Below is a list of candidate bug fix jiras marked fixed for 0.8.3. I don't
  suspect all of these will (or should) make it into the release but this
  should be a relatively complete list to work from:

  - KAFKA-2114 https://issues.apache.org/jira/browse/KAFKA-2114: Unable to change min.insync.replicas default
  - KAFKA-1702 https://issues.apache.org/jira/browse/KAFKA-1702: Messages silently Lost by producer
  - KAFKA-2012 https://issues.apache.org/jira/browse/KAFKA-2012: Broker should automatically handle corrupt index files
  - KAFKA-2406 https://issues.apache.org/jira/browse/KAFKA-2406: ISR propagation should be throttled to avoid overwhelming controller.
  - KAFKA-2336 https://issues.apache.org/jira/browse/KAFKA-2336: Changing offsets.topic.num.partitions after the offset topic is created breaks consumer group partition assignment
  - KAFKA-2337 https://issues.apache.org/jira/browse/KAFKA-2337: Verify that metric names will not collide when creating new topics
  - KAFKA-2393 https://issues.apache.org/jira/browse/KAFKA-2393: Correctly Handle InvalidTopicException in KafkaApis.getTopicMetadata()
  - KAFKA-2189 https://issues.apache.org/jira/browse/KAFKA-2189: Snappy compression of message batches less efficient in 0.8.2.1
  - KAFKA-2308 https://issues.apache.org/jira/browse/KAFKA-2308: New producer + Snappy face un-compression errors after broker restart
  - KAFKA-2042 https://issues.apache.org/jira/browse/KAFKA-2042: New producer metadata update always get all topics.
  - KAFKA-1367 https://issues.apache.org/jira/browse/KAFKA-1367: Broker topic metadata not kept in sync with ZooKeeper
  - KAFKA-972 https://issues.apache.org/jira/browse/KAFKA-972: MetadataRequest returns stale list of brokers
  - KAFKA-1867 https://issues.apache.org/jira/browse/KAFKA-1867: liveBroker list not updated on a cluster with no topics
  - KAFKA-1650 https://issues.apache.org/jira/browse/KAFKA-1650: Mirror 

[jira] [Commented] (KAFKA-2084) byte rate metrics per client ID (producer and consumer)

2015-08-19 Thread Aditya Auradkar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14703701#comment-14703701
 ] 

Aditya Auradkar commented on KAFKA-2084:


[~guozhang] Will do

 byte rate metrics per client ID (producer and consumer)
 ---

 Key: KAFKA-2084
 URL: https://issues.apache.org/jira/browse/KAFKA-2084
 Project: Kafka
  Issue Type: Sub-task
Reporter: Aditya Auradkar
Assignee: Aditya Auradkar
  Labels: quotas
 Attachments: KAFKA-2084.patch, KAFKA-2084_2015-04-09_18:10:56.patch, 
 KAFKA-2084_2015-04-10_17:24:34.patch, KAFKA-2084_2015-04-21_12:21:18.patch, 
 KAFKA-2084_2015-04-21_12:28:05.patch, KAFKA-2084_2015-05-05_15:27:35.patch, 
 KAFKA-2084_2015-05-05_17:52:02.patch, KAFKA-2084_2015-05-11_16:16:01.patch, 
 KAFKA-2084_2015-05-26_11:50:50.patch, KAFKA-2084_2015-06-02_17:02:00.patch, 
 KAFKA-2084_2015-06-02_17:09:28.patch, KAFKA-2084_2015-06-02_17:10:52.patch, 
 KAFKA-2084_2015-06-04_16:31:22.patch, KAFKA-2084_2015-06-12_10:39:35.patch, 
 KAFKA-2084_2015-06-29_17:53:44.patch, KAFKA-2084_2015-08-04_18:50:51.patch, 
 KAFKA-2084_2015-08-04_19:07:46.patch, KAFKA-2084_2015-08-07_11:27:51.patch, 
 KAFKA-2084_2015-08-10_13:48:50.patch, KAFKA-2084_2015-08-10_21:57:48.patch, 
 KAFKA-2084_2015-08-12_12:02:33.patch, KAFKA-2084_2015-08-12_12:04:51.patch, 
 KAFKA-2084_2015-08-12_12:08:17.patch, KAFKA-2084_2015-08-12_21:24:07.patch, 
 KAFKA-2084_2015-08-13_19:08:27.patch, KAFKA-2084_2015-08-13_19:19:16.patch, 
 KAFKA-2084_2015-08-14_17:43:00.patch


 We need to be able to track the bytes-in/bytes-out rate on a per-client ID 
 basis. This is necessary for quotas.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (KAFKA-2444) Fail test: kafka.api.QuotasTest testThrottledProducerConsumer FAILED

2015-08-19 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2444?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava resolved KAFKA-2444.
--
Resolution: Duplicate

The other one is already assigned; not sure if anyone was actively working 
on either of these yet.

 Fail test: kafka.api.QuotasTest  testThrottledProducerConsumer FAILED
 --

 Key: KAFKA-2444
 URL: https://issues.apache.org/jira/browse/KAFKA-2444
 Project: Kafka
  Issue Type: Bug
Reporter: Gwen Shapira
Assignee: Aditya Auradkar

 This test has been failing on Jenkins builds several times in the last few 
 days. For example: https://builds.apache.org/job/Kafka-trunk/591/console
 kafka.api.QuotasTest  testThrottledProducerConsumer FAILED
 junit.framework.AssertionFailedError: Should have been throttled
 at junit.framework.Assert.fail(Assert.java:47)
 at junit.framework.Assert.assertTrue(Assert.java:20)
 at 
 kafka.api.QuotasTest.testThrottledProducerConsumer(QuotasTest.scala:136)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1690) Add SSL support to Kafka Broker, Producer and Consumer

2015-08-19 Thread Jun Rao (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1690?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14703751#comment-14703751
 ] 

Jun Rao commented on KAFKA-1690:


[~sriharsha], a couple of other followup items. (1) 
SSLTransportLayer.handshake(): For the NEED_UNWRAP case, we want to assert that 
Status.BUFFER_OVERFLOW can never happen. (2) We want to add a comment that 
renegotiation doesn't fully work yet.
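
A hedged sketch of the NEED_UNWRAP assertion described in (1): if the 
application read buffer is always allocated at session.getApplicationBufferSize(), 
unwrap during the handshake should never report BUFFER_OVERFLOW, so it can be 
asserted rather than handled (method name illustrative):
{noformat}
import java.nio.ByteBuffer
import javax.net.ssl.{SSLEngine, SSLEngineResult}

object HandshakeUnwrapSketch {
  def handshakeUnwrap(engine: SSLEngine, netRead: ByteBuffer, appRead: ByteBuffer): SSLEngineResult = {
    val result = engine.unwrap(netRead, appRead)
    // appRead is sized to session.getApplicationBufferSize, so overflow
    // cannot legitimately happen while handshaking.
    assert(result.getStatus != SSLEngineResult.Status.BUFFER_OVERFLOW,
      "unexpected BUFFER_OVERFLOW during handshake unwrap")
    result
  }
}
{noformat}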

 Add SSL support to Kafka Broker, Producer and Consumer
 --

 Key: KAFKA-1690
 URL: https://issues.apache.org/jira/browse/KAFKA-1690
 Project: Kafka
  Issue Type: Sub-task
Reporter: Joe Stein
Assignee: Sriharsha Chintalapani
 Fix For: 0.8.3

 Attachments: KAFKA-1690.patch, KAFKA-1690.patch, 
 KAFKA-1690_2015-05-10_23:20:30.patch, KAFKA-1690_2015-05-10_23:31:42.patch, 
 KAFKA-1690_2015-05-11_16:09:36.patch, KAFKA-1690_2015-05-12_16:20:08.patch, 
 KAFKA-1690_2015-05-15_07:18:21.patch, KAFKA-1690_2015-05-20_14:54:35.patch, 
 KAFKA-1690_2015-05-21_10:37:08.patch, KAFKA-1690_2015-06-03_18:52:29.patch, 
 KAFKA-1690_2015-06-23_13:18:20.patch, KAFKA-1690_2015-07-20_06:10:42.patch, 
 KAFKA-1690_2015-07-20_11:59:57.patch, KAFKA-1690_2015-07-25_12:10:55.patch, 
 KAFKA-1690_2015-08-16_20:41:02.patch, KAFKA-1690_2015-08-17_08:12:50.patch, 
 KAFKA-1690_2015-08-17_09:28:52.patch, KAFKA-1690_2015-08-17_12:20:53.patch, 
 KAFKA-1690_2015-08-18_11:24:46.patch, KAFKA-1690_2015-08-18_17:24:48.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1387) Kafka getting stuck creating ephemeral node it has already created when two zookeeper sessions are established in a very short period of time

2015-08-19 Thread Flavio Junqueira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14703767#comment-14703767
 ] 

Flavio Junqueira commented on KAFKA-1387:
-

I'm indeed proposing to get rid of 
createEphemeralPathExpectConflictHandleZKBug. I can investigate the impact on 
leadership election.

 Kafka getting stuck creating ephemeral node it has already created when two 
 zookeeper sessions are established in a very short period of time
 -

 Key: KAFKA-1387
 URL: https://issues.apache.org/jira/browse/KAFKA-1387
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.8.1.1
Reporter: Fedor Korotkiy
Priority: Blocker
  Labels: newbie, patch, zkclient-problems
 Attachments: kafka-1387.patch


 Kafka broker re-registers itself in zookeeper every time handleNewSession() 
 callback is invoked.
 https://github.com/apache/kafka/blob/0.8.1/core/src/main/scala/kafka/server/KafkaHealthcheck.scala
  
 Now imagine the following sequence of events.
 1) Zookeeper session reestablishes. handleNewSession() callback is queued by 
 the zkClient, but not invoked yet.
 2) Zookeeper session reestablishes again, queueing the callback a second time.
 3) First callback is invoked, creating /broker/[id] ephemeral path.
 4) Second callback is invoked and it tries to create /broker/[id] path using 
 createEphemeralPathExpectConflictHandleZKBug() function. But the path 
 already exists, so createEphemeralPathExpectConflictHandleZKBug() gets 
 stuck in an infinite loop.
 Seems like the controller election code has the same issue.
 I'm able to reproduce this issue on the 0.8.1 branch from github using the 
 following configs.
 # zookeeper
 tickTime=10
 dataDir=/tmp/zk/
 clientPort=2101
 maxClientCnxns=0
 # kafka
 broker.id=1
 log.dir=/tmp/kafka
 zookeeper.connect=localhost:2101
 zookeeper.connection.timeout.ms=100
 zookeeper.sessiontimeout.ms=100
 Just start kafka and zookeeper and then pause zookeeper several times using 
 Ctrl-Z.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1690) Add SSL support to Kafka Broker, Producer and Consumer

2015-08-19 Thread Sriharsha Chintalapani (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1690?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14703771#comment-14703771
 ] 

Sriharsha Chintalapani commented on KAFKA-1690:
---

[~junrao] do you want to send that patch with comments on this JIRA?

 Add SSL support to Kafka Broker, Producer and Consumer
 --

 Key: KAFKA-1690
 URL: https://issues.apache.org/jira/browse/KAFKA-1690
 Project: Kafka
  Issue Type: Sub-task
Reporter: Joe Stein
Assignee: Sriharsha Chintalapani
 Fix For: 0.8.3

 Attachments: KAFKA-1690.patch, KAFKA-1690.patch, 
 KAFKA-1690_2015-05-10_23:20:30.patch, KAFKA-1690_2015-05-10_23:31:42.patch, 
 KAFKA-1690_2015-05-11_16:09:36.patch, KAFKA-1690_2015-05-12_16:20:08.patch, 
 KAFKA-1690_2015-05-15_07:18:21.patch, KAFKA-1690_2015-05-20_14:54:35.patch, 
 KAFKA-1690_2015-05-21_10:37:08.patch, KAFKA-1690_2015-06-03_18:52:29.patch, 
 KAFKA-1690_2015-06-23_13:18:20.patch, KAFKA-1690_2015-07-20_06:10:42.patch, 
 KAFKA-1690_2015-07-20_11:59:57.patch, KAFKA-1690_2015-07-25_12:10:55.patch, 
 KAFKA-1690_2015-08-16_20:41:02.patch, KAFKA-1690_2015-08-17_08:12:50.patch, 
 KAFKA-1690_2015-08-17_09:28:52.patch, KAFKA-1690_2015-08-17_12:20:53.patch, 
 KAFKA-1690_2015-08-18_11:24:46.patch, KAFKA-1690_2015-08-18_17:24:48.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2367) Add Copycat runtime data API

2015-08-19 Thread Gwen Shapira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14703881#comment-14703881
 ] 

Gwen Shapira commented on KAFKA-2367:
-

We haven't really seen an alternative to Avro. Currently all we have is an Avro 
copy-paste (an interim phase which we clearly don't want) or Avro itself.

If [~ewencp] prefers not to go with Avro, then the next phase is to review the 
proposed solution. Even though I have concerns regarding re-solving a problem 
that is already solved, I will not downvote a reasonable proposal.

 Add Copycat runtime data API
 

 Key: KAFKA-2367
 URL: https://issues.apache.org/jira/browse/KAFKA-2367
 Project: Kafka
  Issue Type: Sub-task
  Components: copycat
Reporter: Ewen Cheslack-Postava
Assignee: Ewen Cheslack-Postava
 Fix For: 0.8.3


 Design the API used for runtime data in Copycat. This API is used to 
 construct schemas and records that Copycat processes. This needs to be a 
 fairly general data model (think Avro, JSON, Protobufs, Thrift) in order to 
 support complex, varied data types that may be input from/output to many data 
 systems.
 This issue should also address the serialization interfaces used 
 within Copycat, which translate the runtime data into serialized byte[] form. 
 It is important that these be considered together because the data format can 
 be used in multiple ways (records, partition IDs, partition offsets), so it 
 and the corresponding serializers must be sufficient for all these use cases.
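
 Purely as an illustration of the kind of serializer-agnostic runtime data 
 model being discussed (all names hypothetical; this is not the proposed 
 Copycat API):
{noformat}
object RuntimeDataSketch {
  sealed trait Schema
  case object Int64Schema extends Schema
  case object StringSchema extends Schema
  final case class StructSchema(fields: Map[String, Schema]) extends Schema

  // A generic record: a schema plus field values, independent of Avro/JSON/etc.
  final case class Struct(schema: StructSchema, values: Map[String, Any])

  // The same model has to serve records, partition ids and offsets:
  val offsetSchema = StructSchema(Map("partition" -> Int64Schema, "offset" -> Int64Schema))
  val offset = Struct(offsetSchema, Map("partition" -> 0L, "offset" -> 42L))
}
{noformat}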



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2367) Add Copycat runtime data API

2015-08-19 Thread Jay Kreps (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14703890#comment-14703890
 ] 

Jay Kreps commented on KAFKA-2367:
--

Yeah makes sense. Obviously no one is advocating the current kludge. [~ewencp] 
does that seem reasonable?

 Add Copycat runtime data API
 

 Key: KAFKA-2367
 URL: https://issues.apache.org/jira/browse/KAFKA-2367
 Project: Kafka
  Issue Type: Sub-task
  Components: copycat
Reporter: Ewen Cheslack-Postava
Assignee: Ewen Cheslack-Postava
 Fix For: 0.8.3


 Design the API used for runtime data in Copycat. This API is used to 
 construct schemas and records that Copycat processes. This needs to be a 
 fairly general data model (think Avro, JSON, Protobufs, Thrift) in order to 
 support complex, varied data types that may be input from/output to many data 
 systems.
 This issue should also address the serialization interfaces used 
 within Copycat, which translate the runtime data into serialized byte[] form. 
 It is important that these be considered together because the data format can 
 be used in multiple ways (records, partition IDs, partition offsets), so it 
 and the corresponding serializers must be sufficient for all these use cases.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2084) byte rate metrics per client ID (producer and consumer)

2015-08-19 Thread Joel Koshy (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14703921#comment-14703921
 ] 

Joel Koshy commented on KAFKA-2084:
---

I pushed a trivial commit to address this. However, it would be best if 
KAFKA-1782 is followed up with a checkstyle-type check to prevent people from 
extending or using JUnit3Suite.

 byte rate metrics per client ID (producer and consumer)
 ---

 Key: KAFKA-2084
 URL: https://issues.apache.org/jira/browse/KAFKA-2084
 Project: Kafka
  Issue Type: Sub-task
Reporter: Aditya Auradkar
Assignee: Aditya Auradkar
  Labels: quotas
 Attachments: KAFKA-2084.patch, KAFKA-2084_2015-04-09_18:10:56.patch, 
 KAFKA-2084_2015-04-10_17:24:34.patch, KAFKA-2084_2015-04-21_12:21:18.patch, 
 KAFKA-2084_2015-04-21_12:28:05.patch, KAFKA-2084_2015-05-05_15:27:35.patch, 
 KAFKA-2084_2015-05-05_17:52:02.patch, KAFKA-2084_2015-05-11_16:16:01.patch, 
 KAFKA-2084_2015-05-26_11:50:50.patch, KAFKA-2084_2015-06-02_17:02:00.patch, 
 KAFKA-2084_2015-06-02_17:09:28.patch, KAFKA-2084_2015-06-02_17:10:52.patch, 
 KAFKA-2084_2015-06-04_16:31:22.patch, KAFKA-2084_2015-06-12_10:39:35.patch, 
 KAFKA-2084_2015-06-29_17:53:44.patch, KAFKA-2084_2015-08-04_18:50:51.patch, 
 KAFKA-2084_2015-08-04_19:07:46.patch, KAFKA-2084_2015-08-07_11:27:51.patch, 
 KAFKA-2084_2015-08-10_13:48:50.patch, KAFKA-2084_2015-08-10_21:57:48.patch, 
 KAFKA-2084_2015-08-12_12:02:33.patch, KAFKA-2084_2015-08-12_12:04:51.patch, 
 KAFKA-2084_2015-08-12_12:08:17.patch, KAFKA-2084_2015-08-12_21:24:07.patch, 
 KAFKA-2084_2015-08-13_19:08:27.patch, KAFKA-2084_2015-08-13_19:19:16.patch, 
 KAFKA-2084_2015-08-14_17:43:00.patch


 We need to be able to track the bytes-in/bytes-out rate on a per-client ID 
 basis. This is necessary for quotas.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2084) byte rate metrics per client ID (producer and consumer)

2015-08-19 Thread Guozhang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14703940#comment-14703940
 ] 

Guozhang Wang commented on KAFKA-2084:
--

That would be best. The problem is that we currently do not have checkstyle for 
Scala, and from what people have discussed it seems a bit tricky to add.

 byte rate metrics per client ID (producer and consumer)
 ---

 Key: KAFKA-2084
 URL: https://issues.apache.org/jira/browse/KAFKA-2084
 Project: Kafka
  Issue Type: Sub-task
Reporter: Aditya Auradkar
Assignee: Aditya Auradkar
  Labels: quotas
 Attachments: KAFKA-2084.patch, KAFKA-2084_2015-04-09_18:10:56.patch, 
 KAFKA-2084_2015-04-10_17:24:34.patch, KAFKA-2084_2015-04-21_12:21:18.patch, 
 KAFKA-2084_2015-04-21_12:28:05.patch, KAFKA-2084_2015-05-05_15:27:35.patch, 
 KAFKA-2084_2015-05-05_17:52:02.patch, KAFKA-2084_2015-05-11_16:16:01.patch, 
 KAFKA-2084_2015-05-26_11:50:50.patch, KAFKA-2084_2015-06-02_17:02:00.patch, 
 KAFKA-2084_2015-06-02_17:09:28.patch, KAFKA-2084_2015-06-02_17:10:52.patch, 
 KAFKA-2084_2015-06-04_16:31:22.patch, KAFKA-2084_2015-06-12_10:39:35.patch, 
 KAFKA-2084_2015-06-29_17:53:44.patch, KAFKA-2084_2015-08-04_18:50:51.patch, 
 KAFKA-2084_2015-08-04_19:07:46.patch, KAFKA-2084_2015-08-07_11:27:51.patch, 
 KAFKA-2084_2015-08-10_13:48:50.patch, KAFKA-2084_2015-08-10_21:57:48.patch, 
 KAFKA-2084_2015-08-12_12:02:33.patch, KAFKA-2084_2015-08-12_12:04:51.patch, 
 KAFKA-2084_2015-08-12_12:08:17.patch, KAFKA-2084_2015-08-12_21:24:07.patch, 
 KAFKA-2084_2015-08-13_19:08:27.patch, KAFKA-2084_2015-08-13_19:19:16.patch, 
 KAFKA-2084_2015-08-14_17:43:00.patch


 We need to be able to track the bytes-in/bytes-out rate on a per-client ID 
 basis. This is necessary for quotas.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2084) byte rate metrics per client ID (producer and consumer)

2015-08-19 Thread Joel Koshy (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14703965#comment-14703965
 ] 

Joel Koshy commented on KAFKA-2084:
---

Actually I did not mean full-fledged 
[scalastyle|https://github.com/ngbinh/gradle-scalastyle-plugin], but a very 
simple inline gradle plugin (in our build.gradle) that does this sort of 
check.

 byte rate metrics per client ID (producer and consumer)
 ---

 Key: KAFKA-2084
 URL: https://issues.apache.org/jira/browse/KAFKA-2084
 Project: Kafka
  Issue Type: Sub-task
Reporter: Aditya Auradkar
Assignee: Aditya Auradkar
  Labels: quotas
 Attachments: KAFKA-2084.patch, KAFKA-2084_2015-04-09_18:10:56.patch, 
 KAFKA-2084_2015-04-10_17:24:34.patch, KAFKA-2084_2015-04-21_12:21:18.patch, 
 KAFKA-2084_2015-04-21_12:28:05.patch, KAFKA-2084_2015-05-05_15:27:35.patch, 
 KAFKA-2084_2015-05-05_17:52:02.patch, KAFKA-2084_2015-05-11_16:16:01.patch, 
 KAFKA-2084_2015-05-26_11:50:50.patch, KAFKA-2084_2015-06-02_17:02:00.patch, 
 KAFKA-2084_2015-06-02_17:09:28.patch, KAFKA-2084_2015-06-02_17:10:52.patch, 
 KAFKA-2084_2015-06-04_16:31:22.patch, KAFKA-2084_2015-06-12_10:39:35.patch, 
 KAFKA-2084_2015-06-29_17:53:44.patch, KAFKA-2084_2015-08-04_18:50:51.patch, 
 KAFKA-2084_2015-08-04_19:07:46.patch, KAFKA-2084_2015-08-07_11:27:51.patch, 
 KAFKA-2084_2015-08-10_13:48:50.patch, KAFKA-2084_2015-08-10_21:57:48.patch, 
 KAFKA-2084_2015-08-12_12:02:33.patch, KAFKA-2084_2015-08-12_12:04:51.patch, 
 KAFKA-2084_2015-08-12_12:08:17.patch, KAFKA-2084_2015-08-12_21:24:07.patch, 
 KAFKA-2084_2015-08-13_19:08:27.patch, KAFKA-2084_2015-08-13_19:19:16.patch, 
 KAFKA-2084_2015-08-14_17:43:00.patch


 We need to be able to track the bytes-in/bytes-out rate on a per-client ID 
 basis. This is necessary for quotas.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (KAFKA-2085) Return delay time in QuotaViolationException

2015-08-19 Thread Aditya Auradkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2085?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aditya Auradkar resolved KAFKA-2085.

Resolution: Fixed

Resolved in KAFKA-2084

 Return delay time in QuotaViolationException
 

 Key: KAFKA-2085
 URL: https://issues.apache.org/jira/browse/KAFKA-2085
 Project: Kafka
  Issue Type: Sub-task
Reporter: Aditya Auradkar
Assignee: Aditya Auradkar

 As described in KIP-13, we need to be able to return a delay in 
 QuotaViolationException. Compute delay in Sensor and return in the thrown 
 exception.
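
 A minimal sketch of the shape this could take (field and method names 
 hypothetical, not the committed code):
{noformat}
class QuotaViolationException(message: String, val throttleTimeMs: Long)
  extends RuntimeException(message)

object DelaySketch {
  // Delay proportional to how far the client overshot the quota, computed
  // where the rate is measured and carried in the thrown exception.
  def checkQuota(observedRate: Double, bound: Double, windowMs: Long): Unit =
    if (observedRate > bound) {
      val delayMs = ((observedRate - bound) / bound * windowMs).toLong
      throw new QuotaViolationException(
        s"rate $observedRate exceeds quota $bound", delayMs)
    }
}
{noformat}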



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Work started] (KAFKA-2097) Implement request delays for quota violations

2015-08-19 Thread Aditya Auradkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2097?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on KAFKA-2097 started by Aditya Auradkar.
--
 Implement request delays for quota violations
 -

 Key: KAFKA-2097
 URL: https://issues.apache.org/jira/browse/KAFKA-2097
 Project: Kafka
  Issue Type: Sub-task
Reporter: Aditya Auradkar
Assignee: Aditya Auradkar

 As defined in the KIP, implement delays on a per-request basis for both 
 producer and consumer. This involves either modifying the existing purgatory 
 or adding a new delay queue.
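
 For the "new delay queue" option, a minimal sketch using 
 java.util.concurrent.DelayQueue (names hypothetical): a throttled response is 
 parked until its computed delay elapses, then sent.
{noformat}
import java.util.concurrent.{DelayQueue, Delayed, TimeUnit}

class ThrottledResponse(val requestId: Long, delayMs: Long) extends Delayed {
  private val dueNs = System.nanoTime + TimeUnit.MILLISECONDS.toNanos(delayMs)
  override def getDelay(unit: TimeUnit): Long =
    unit.convert(dueNs - System.nanoTime, TimeUnit.NANOSECONDS)
  override def compareTo(other: Delayed): Int =
    java.lang.Long.compare(getDelay(TimeUnit.NANOSECONDS), other.getDelay(TimeUnit.NANOSECONDS))
}

object DelayQueueSketch extends App {
  val throttled = new DelayQueue[ThrottledResponse]()
  throttled.put(new ThrottledResponse(1L, 250L)) // hold the response for 250 ms
  val ready = throttled.take()                   // blocks until the delay elapses
  println(s"send response for request ${ready.requestId}")
}
{noformat}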



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2330) Vagrantfile sets global configs instead of per-provider override configs

2015-08-19 Thread Geoff Anderson (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2330?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14704002#comment-14704002
 ] 

Geoff Anderson commented on KAFKA-2330:
---

rsync_excludes should maybe include 'tests/results'

 Vagrantfile sets global configs instead of per-provider override configs
 

 Key: KAFKA-2330
 URL: https://issues.apache.org/jira/browse/KAFKA-2330
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.8.2.1
Reporter: Ewen Cheslack-Postava
Assignee: Ewen Cheslack-Postava
Priority: Minor
 Fix For: 0.8.3

 Attachments: KAFKA-2330.patch


 There are a couple of minor incorrect usages of the global configuration object 
 in the Vagrantfile inside provider-specific override blocks where we should 
 be using the override config object. Two end up being harmless since they 
 have no effect on other providers (but should still be corrected). The third 
 results in using rsync when using Virtualbox, which is unnecessary, slower, 
 and requires copying the entire kafka directory to every VM.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-2449) Update mirror maker (MirrorMaker) docs

2015-08-19 Thread Geoff Anderson (JIRA)
Geoff Anderson created KAFKA-2449:
-

 Summary: Update mirror maker (MirrorMaker) docs
 Key: KAFKA-2449
 URL: https://issues.apache.org/jira/browse/KAFKA-2449
 Project: Kafka
  Issue Type: Bug
Reporter: Geoff Anderson
 Fix For: 0.8.3


The Kafka docs on Mirror Maker state that it mirrors from N source clusters to 
1 destination, but this is no longer the case. Docs should be updated to 
reflect that it mirrors from a single source cluster to a single target cluster.

Docs I've found where this should be updated:
http://kafka.apache.org/documentation.html#basic_ops_mirror_maker
https://cwiki.apache.org/confluence/display/KAFKA/Kafka+mirroring+(MirrorMaker)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: KAFKA-2364 migrate docs from SVN to git

2015-08-19 Thread Guozhang Wang
Gwen: I remembered it wrong. We would not need another round of voting.

On Wed, Aug 19, 2015 at 3:06 PM, Gwen Shapira g...@confluent.io wrote:

 Looking back at this thread, the +1 mention same repo, so I'm not sure a
 new vote is required.

 On Wed, Aug 19, 2015 at 3:00 PM, Guozhang Wang wangg...@gmail.com wrote:

  So I think we have two different approaches here. The original proposal
  from Aseem is to move website from SVN to a separate Git repo, and hence
  have separate commits on code / doc changes. For that we have accumulated
  enough binding +1s to move on; Gwen's proposal is to move website into
 the
  same repo under a different folder. If people feel they prefer this over
  the previous approach I would like to call for another round of voting.
 
  Guozhang
 
  On Wed, Aug 19, 2015 at 10:24 AM, Ashish paliwalash...@gmail.com
 wrote:
 
   +1 to what Gwen has suggested. This is what we follow in Flume.
  
   All the latest doc changes are in git, once ready you move changes to
   svn to update website.
   The only catch is, when you need to update specific changes to website
   outside release cycle, need to be a bit careful :)
  
   On Wed, Aug 19, 2015 at 9:06 AM, Gwen Shapira g...@confluent.io
 wrote:
 Yeah, so the way this works in few other projects I worked on is:

 * The code repo has a /docs directory with the latest revision of the docs
 (not multiple versions, just one that matches the latest state of code)
 * When you submit a patch that requires doc modification, you modify all
 relevant files in same patch and they get reviewed and committed together
 (ideally)
 * When we release, we copy the docs matching the release and commit to SVN
 website. We also do this occasionally to fix bugs in earlier docs.
 * Release artifacts include a copy of the docs

 Nice to have:
 * Docs are in Asciidoc and build generates the HTML. Asciidoc is easier to
 edit and review.

 I suggest a similar process for Kafka.
   
 On Wed, Aug 19, 2015 at 8:53 AM, Ismael Juma ism...@juma.me.uk wrote:

  I should clarify: it's not possible unless we add an additional step that
  moves the docs from the code repo to the website repo.

  Ismael

  On Wed, Aug 19, 2015 at 4:42 PM, Ismael Juma ism...@juma.me.uk wrote:

   Hi all,

   It looks like it's not feasible to update the code and website in the
   same commit given existing limitations of the Apache infra:

   https://issues.apache.org/jira/browse/INFRA-10143?focusedCommentId=14703175page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14703175

   Best,
   Ismael

   On Wed, Aug 12, 2015 at 10:00 AM, Ismael Juma ism...@juma.me.uk wrote:

    Hi Gwen,

    I filed KAFKA-2425 as KAFKA-2364 is about improving the website
    documentation. Aseem Bansal seemed interested in helping us with the
    move so I pinged him in the issue.

    Best,
    Ismael

    On Wed, Aug 12, 2015 at 1:51 AM, Gwen Shapira g...@confluent.io wrote:

     Ah, there is already a JIRA in the title. Never mind :)

     On Tue, Aug 11, 2015 at 5:51 PM, Gwen Shapira g...@confluent.io wrote:

      The vote opened 5 days ago. I believe we can conclude with 3 binding
      +1, 3 non-binding +1 and no -1.

      Ismael, are you opening a JIRA and migrating? Or are we looking for a
      volunteer?

      On Tue, Aug 11, 2015 at 5:46 PM, Ashish Singh asi...@cloudera.com wrote:

       +1 on same repo.

       On Tue, Aug 11, 2015 at 12:21 PM, Edward Ribeiro edward.ribe...@gmail.com wrote:

        +1. As soon as possible, please. :)

        On Sat, Aug 8, 2015 at 4:05 PM, Neha Narkhede n...@confluent.io wrote:

         +1 on the same repo for code and website. It helps to keep both
         in sync.

         On Thu, Aug 6, 2015 at 1:52 PM, Grant Henke ghe...@cloudera.com wrote:

          +1 for the same repo. The closer docs can be to code the more
          accurate they are likely to be. The same way we encourage unit
          tests for a new feature/patch. Updating the docs can be the same.

          If we follow Sqoop's process for example, how would small
          fixes/adjustments/additions to the live documentation occur
          without a new release?

          On Thu, Aug 6, 2015 at 3:33 PM, Guozhang Wang wangg...@gmail.com wrote:

           I am +1 on same repo too. I think keeping one git history of
           code / doc change may actually be beneficial for this approach
           as well.

           Guozhang

           On Thu, Aug 6, 2015 at 9:16 AM, Gwen Shapira g...@confluent.io wrote:

            I prefer 

[jira] [Commented] (KAFKA-2367) Add Copycat runtime data API

2015-08-19 Thread Jay Kreps (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14703871#comment-14703871
 ] 

Jay Kreps commented on KAFKA-2367:
--

Okay there are clearly divergent opinions here. Presumably everyone prefers a 
copycat that exists with either API to one that doesn't. How do we move forward 
and make a decision?

 Add Copycat runtime data API
 

 Key: KAFKA-2367
 URL: https://issues.apache.org/jira/browse/KAFKA-2367
 Project: Kafka
  Issue Type: Sub-task
  Components: copycat
Reporter: Ewen Cheslack-Postava
Assignee: Ewen Cheslack-Postava
 Fix For: 0.8.3


 Design the API used for runtime data in Copycat. This API is used to 
 construct schemas and records that Copycat processes. This needs to be a 
 fairly general data model (think Avro, JSON, Protobufs, Thrift) in order to 
 support complex, varied data types that may be input from/output to many data 
 systems.
 This issue should also address the serialization interfaces used 
 within Copycat, which translate the runtime data into serialized byte[] form. 
 It is important that these be considered together because the data format can 
 be used in multiple ways (records, partition IDs, partition offsets), so it 
 and the corresponding serializers must be sufficient for all these use cases.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (KAFKA-2293) IllegalFormatConversionException in Partition.scala

2015-08-19 Thread Aditya Auradkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2293?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aditya Auradkar resolved KAFKA-2293.

Resolution: Fixed

Committed to trunk

 IllegalFormatConversionException in Partition.scala
 ---

 Key: KAFKA-2293
 URL: https://issues.apache.org/jira/browse/KAFKA-2293
 Project: Kafka
  Issue Type: Bug
Reporter: Aditya Auradkar
Assignee: Aditya Auradkar
 Attachments: KAFKA-2293.patch


 ERROR [KafkaApis] [kafka-request-handler-9] [kafka-server] [] [KafkaApi-306] 
 error when handling request Name: 
 java.util.IllegalFormatConversionException: d != 
 kafka.server.LogOffsetMetadata
 at 
 java.util.Formatter$FormatSpecifier.failConversion(Formatter.java:4302)
 at 
 java.util.Formatter$FormatSpecifier.printInteger(Formatter.java:2793)
 at java.util.Formatter$FormatSpecifier.print(Formatter.java:2747)
 at java.util.Formatter.format(Formatter.java:2520)
 at java.util.Formatter.format(Formatter.java:2455)
 at java.lang.String.format(String.java:2925)
 at 
 scala.collection.immutable.StringLike$class.format(StringLike.scala:266)
 at scala.collection.immutable.StringOps.format(StringOps.scala:31)
 at 
 kafka.cluster.Partition.updateReplicaLogReadResult(Partition.scala:253)
 at 
 kafka.server.ReplicaManager$$anonfun$updateFollowerLogReadResults$2.apply(ReplicaManager.scala:791)
 at 
 kafka.server.ReplicaManager$$anonfun$updateFollowerLogReadResults$2.apply(ReplicaManager.scala:788)
 at scala.collection.immutable.Map$Map1.foreach(Map.scala:109)
 at 
 kafka.server.ReplicaManager.updateFollowerLogReadResults(ReplicaManager.scala:788)
 at kafka.server.ReplicaManager.fetchMessages(ReplicaManager.scala:433)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (KAFKA-2097) Implement request delays for quota violations

2015-08-19 Thread Aditya Auradkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2097?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aditya Auradkar resolved KAFKA-2097.

Resolution: Fixed

Resolved in KAFKA-2084

 Implement request delays for quota violations
 -

 Key: KAFKA-2097
 URL: https://issues.apache.org/jira/browse/KAFKA-2097
 Project: Kafka
  Issue Type: Sub-task
Reporter: Aditya Auradkar
Assignee: Aditya Auradkar

 As defined in the KIP, implement delays on a per-request basis for both 
 producer and consumer. This involves either modifying the existing purgatory 
 or adding a new delay queue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2367) Add Copycat runtime data API

2015-08-19 Thread Ewen Cheslack-Postava (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14703901#comment-14703901
 ] 

Ewen Cheslack-Postava commented on KAFKA-2367:
--

Yes.

 Add Copycat runtime data API
 

 Key: KAFKA-2367
 URL: https://issues.apache.org/jira/browse/KAFKA-2367
 Project: Kafka
  Issue Type: Sub-task
  Components: copycat
Reporter: Ewen Cheslack-Postava
Assignee: Ewen Cheslack-Postava
 Fix For: 0.8.3


 Design the API used for runtime data in Copycat. This API is used to 
 construct schemas and records that Copycat processes. This needs to be a 
 fairly general data model (think Avro, JSON, Protobufs, Thrift) in order to 
 support complex, varied data types that may be input from/output to many data 
 systems.
 This issue should also address the serialization interfaces used 
 within Copycat, which translate the runtime data into serialized byte[] form. 
 It is important that these be considered together because the data format can 
 be used in multiple ways (records, partition IDs, partition offsets), so it 
 and the corresponding serializers must be sufficient for all these use cases.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-873) Consider replacing zkclient with curator (with zkclient-bridge)

2015-08-19 Thread Flavio Junqueira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14703763#comment-14703763
 ] 

Flavio Junqueira commented on KAFKA-873:


hey [~granthenke], I was thinking we could try to have this for 0.8.3, what do 
you think? If we use the bridge, I wanted to check if we can solve the problem 
I'm reporting in KAFKA-1387. It is related to the fact that zkclient retries 
the creation of ephemerals across sessions, which can be problematic. In any 
case, I can definitely help here.
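
For reference, ephemeral registration via Curator looks roughly like this 
(connection string and path illustrative); whether it avoids the cross-session 
retry behaviour behind KAFKA-1387 is exactly what would need to be verified:
{noformat}
import org.apache.curator.framework.CuratorFrameworkFactory
import org.apache.curator.retry.ExponentialBackoffRetry
import org.apache.zookeeper.CreateMode

object CuratorRegistrationSketch extends App {
  val client = CuratorFrameworkFactory.newClient(
    "localhost:2181", new ExponentialBackoffRetry(1000, 3))
  client.start()

  // Ephemeral node tied to this client's current session.
  client.create()
    .withMode(CreateMode.EPHEMERAL)
    .forPath("/brokers/ids/1", "{\"host\":\"localhost\",\"port\":9092}".getBytes("UTF-8"))
}
{noformat}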

 Consider replacing zkclient with curator (with zkclient-bridge)
 ---

 Key: KAFKA-873
 URL: https://issues.apache.org/jira/browse/KAFKA-873
 Project: Kafka
  Issue Type: Improvement
Affects Versions: 0.8.0
Reporter: Scott Clasen
Assignee: Grant Henke

 If zkclient was replaced with curator and curator-x-zkclient-bridge it would 
 be initially a drop-in replacement
 https://github.com/Netflix/curator/wiki/ZKClient-Bridge
 With the addition of a few more props to ZkConfig, and a bit of code this 
 would open up the possibility of using ACLs in zookeeper (which aren't 
 supported directly by zkclient), as well as integrating with netflix 
 exhibitor for those of us using that.
 Looks like KafkaZookeeperClient needs some love anyhow...



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [DISCUSSION] Interface change notes in commit messages?

2015-08-19 Thread Mayuresh Gharat
+1 for this. Also, we could include a description of what changes might be
required in the currently running release. But that might be cumbersome. Just
a thought.


Thanks,

Mayuresh

On Wed, Aug 19, 2015 at 2:29 PM, Jiangjie Qin j...@linkedin.com.invalid
wrote:

 I am thinking: can we put some notes in the commit message when committing a
 patch which introduces an API change or backward incompatible change?

 It mainly serves two purposes:
 1. Easier for people to track the changes they need to make to run a new
 version
 2. Easier for us to write the release note.

 If we assume all API changes or backward incompatible changes are going
 through a KIP then we can put the KIP number in the commit message.
 Otherwise, maybe we can put [INTERFACE CHANGE] in the commit message?

 Thoughts?

 Jiangjie (Becket) Qin




-- 
-Regards,
Mayuresh R. Gharat
(862) 250-7125


Re: KAFKA-2364 migrate docs from SVN to git

2015-08-19 Thread Gwen Shapira
Looking back at this thread, the +1 mention same repo, so I'm not sure a
new vote is required.

On Wed, Aug 19, 2015 at 3:00 PM, Guozhang Wang wangg...@gmail.com wrote:

 So I think we have two different approaches here. The original proposal
 from Aseem is to move website from SVN to a separate Git repo, and hence
 have separate commits on code / doc changes. For that we have accumulated
 enough binding +1s to move on; Gwen's proposal is to move website into the
 same repo under a different folder. If people feel they prefer this over
 the previous approach I would like to call for another round of voting.

 Guozhang

 On Wed, Aug 19, 2015 at 10:24 AM, Ashish paliwalash...@gmail.com wrote:

  +1 to what Gwen has suggested. This is what we follow in Flume.
 
  All the latest doc changes are in git, once ready you move changes to
  svn to update website.
  The only catch is, when you need to update specific changes to website
  outside release cycle, need to be a bit careful :)
 
  On Wed, Aug 19, 2015 at 9:06 AM, Gwen Shapira g...@confluent.io wrote:
   Yeah, so the way this works in few other projects I worked on is:
  
   * The code repo has a /docs directory with the latest revision of the
   docs (not multiple versions, just one that matches the latest state of code)
   * When you submit a patch that requires doc modification, you modify all
   relevant files in same patch and they get reviewed and committed together
   (ideally)
   * When we release, we copy the docs matching the release and commit to
   SVN website. We also do this occasionally to fix bugs in earlier docs.
   * Release artifacts include a copy of the docs
  
   Nice to have:
   * Docs are in Asciidoc and build generates the HTML. Asciidoc is easier
   to edit and review.
  
   I suggest a similar process for Kafka.
  
   On Wed, Aug 19, 2015 at 8:53 AM, Ismael Juma ism...@juma.me.uk wrote:
  
    I should clarify: it's not possible unless we add an additional step
    that moves the docs from the code repo to the website repo.
  
    Ismael
  
    On Wed, Aug 19, 2015 at 4:42 PM, Ismael Juma ism...@juma.me.uk wrote:
  
     Hi all,
  
     It looks like it's not feasible to update the code and website in the
     same commit given existing limitations of the Apache infra:
  
     https://issues.apache.org/jira/browse/INFRA-10143?focusedCommentId=14703175page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14703175
  
     Best,
     Ismael
  
     On Wed, Aug 12, 2015 at 10:00 AM, Ismael Juma ism...@juma.me.uk wrote:
  
      Hi Gwen,
  
      I filed KAFKA-2425 as KAFKA-2364 is about improving the website
      documentation. Aseem Bansal seemed interested in helping us with the
      move so I pinged him in the issue.
  
      Best,
      Ismael
  
      On Wed, Aug 12, 2015 at 1:51 AM, Gwen Shapira g...@confluent.io wrote:
  
       Ah, there is already a JIRA in the title. Never mind :)
  
       On Tue, Aug 11, 2015 at 5:51 PM, Gwen Shapira g...@confluent.io wrote:
  
        The vote opened 5 days ago. I believe we can conclude with 3
        binding +1, 3 non-binding +1 and no -1.
  
        Ismael, are you opening a JIRA and migrating? Or are we looking
        for a volunteer?
  
        On Tue, Aug 11, 2015 at 5:46 PM, Ashish Singh asi...@cloudera.com wrote:
  
         +1 on same repo.
  
         On Tue, Aug 11, 2015 at 12:21 PM, Edward Ribeiro edward.ribe...@gmail.com wrote:
  
          +1. As soon as possible, please. :)
  
          On Sat, Aug 8, 2015 at 4:05 PM, Neha Narkhede n...@confluent.io wrote:
  
           +1 on the same repo for code and website. It helps to keep both
           in sync.
  
           On Thu, Aug 6, 2015 at 1:52 PM, Grant Henke ghe...@cloudera.com wrote:
  
            +1 for the same repo. The closer docs can be to code the more
            accurate they are likely to be. The same way we encourage unit
            tests for a new feature/patch. Updating the docs can be the same.
  
            If we follow Sqoop's process for example, how would small
            fixes/adjustments/additions to the live documentation occur
            without a new release?
  
            On Thu, Aug 6, 2015 at 3:33 PM, Guozhang Wang wangg...@gmail.com wrote:
  
             I am +1 on same repo too. I think keeping one git history of
             code / doc change may actually be beneficial for this approach
             as well.
  
             Guozhang
  
             On Thu, Aug 6, 2015 at 9:16 AM, Gwen Shapira g...@confluent.io wrote:
  
              I prefer same repo for one-commit / lower-barrier benefits.
  
              Sqoop has the following process, which decouples documentation
              changes from website changes:
  
              1. Code github repo contains a doc directory, with the
              documentation written and maintained 

[jira] [Updated] (KAFKA-2448) BrokerChangeListener missed broker id path ephemeral node deletion event.

2015-08-19 Thread Jiangjie Qin (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2448?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiangjie Qin updated KAFKA-2448:

Assignee: Flavio Junqueira

 BrokerChangeListener missed broker id path ephemeral node deletion event.
 -

 Key: KAFKA-2448
 URL: https://issues.apache.org/jira/browse/KAFKA-2448
 Project: Kafka
  Issue Type: Bug
Reporter: Jiangjie Qin
Assignee: Flavio Junqueira

 When a broker gets bounced, ideally the sequence should be like this:
 1.1. Broker shuts down resources.
 1.2. Broker closes zkClient (this will cause the ephemeral node 
 /brokers/ids/BROKER_ID to be deleted)
 1.3. Broker restarts and loads the log segments
 1.4. Broker creates the ephemeral node /brokers/ids/BROKER_ID
 The broker-side logs are:
 {noformat}
 ...
 2015/08/17 22:42:37.663 INFO [SocketServer] [Thread-1] [kafka-server] [] 
 [Socket Server on Broker 1140], Shutting down
 2015/08/17 22:42:37.735 INFO [SocketServer] [Thread-1] [kafka-server] [] 
 [Socket Server on Broker 1140], Shutdown completed
 ...
 2015/08/17 22:42:53.898 INFO [ZooKeeper] [Thread-1] [kafka-server] [] 
 Session: 0x14d43fd905f68d7 closed
 2015/08/17 22:42:53.898 INFO [ClientCnxn] [main-EventThread] [kafka-server] 
 [] EventThread shut down
 2015/08/17 22:42:53.898 INFO [KafkaServer] [Thread-1] [kafka-server] [] 
 [Kafka Server 1140], shut down completed
 ...
 2015/08/17 22:43:03.306 INFO [ClientCnxn] 
 [main-SendThread(zk-ei1-kafkatest.stg.linkedin.com:12913)] [kafka-server] [] 
 Session establishment complete on server zk-ei1-kafkatest.stg.linkedin
 .com/172.20.73.211:12913, sessionid = 0x24d43fd93d96821, negotiated timeout = 
 12000
 2015/08/17 22:43:03.306 INFO [ZkClient] [main-EventThread] [kafka-server] [] 
 zookeeper state changed (SyncConnected)
 ...
 {noformat}
 On the controller side, the sequence should be:
 2.1. Controlled shutdown of the broker
 2.2. BrokerChangeListener fired for /brokers/ids child change because the 
 ephemeral node is deleted in step 1.2
 2.3. BrokerChangeListener fired again for /brokers/ids child change because 
 the ephemeral node is created in step 1.4
 The issue I saw was that on the controller side, the broker change listener 
 only fired once, after step 1.4. So the controller did not see any broker change.
 {noformat}
 2015/08/17 22:41:46.189 [KafkaController] [Controller 1507]: Shutting down 
 broker 1140
 ...
 2015/08/17 22:42:38.031 [RequestSendThread] 
 [Controller-1507-to-broker-1140-send-thread], Controller 1507 epoch 799 fails 
 to send request Name: StopReplicaRequest; Version: 0; CorrelationId: 5334; 
 ClientId: ; DeletePartitions: false; ControllerId: 1507; ControllerEpoch: 
 799; Partitions: [seas-decisionboard-searcher-service_call,1] to broker 1140 
 : (EndPoint(eat1-app1140.corp.linkedin.com,10251,PLAINTEXT)). Reconnecting to 
 broker.
 java.nio.channels.ClosedChannelException
 at kafka.network.BlockingChannel.send(BlockingChannel.scala:110)
 at 
 kafka.controller.RequestSendThread.liftedTree1$1(ControllerChannelManager.scala:132)
 at 
 kafka.controller.RequestSendThread.doWork(ControllerChannelManager.scala:131)
 at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:60)
 2015/08/17 22:42:38.031 [RequestSendThread] 
 [Controller-1507-to-broker-1140-send-thread], Controller 1507 connected to 
 1140 : (EndPoint(eat1-app1140.corp.linkedin.com,10251,PLAINTEXT)) for sending 
 state change requests
 2015/08/17 22:42:38.332 [RequestSendThread] 
 [Controller-1507-to-broker-1140-send-thread], Controller 1507 epoch 799 fails 
 to send request Name: StopReplicaRequest; Version: 0; CorrelationId: 5334; 
 ClientId: ; DeletePartitions: false; ControllerId: 1507; ControllerEpoch: 
 799; Partitions: [seas-decisionboard-searcher-service_call,1] to broker 1140 
 : (EndPoint(eat1-app1140.corp.linkedin.com,10251,PLAINTEXT)). Reconnecting to 
 broker.
 java.nio.channels.ClosedChannelException
 at kafka.network.BlockingChannel.send(BlockingChannel.scala:110)
 at 
 kafka.controller.RequestSendThread.liftedTree1$1(ControllerChannelManager.scala:132)
 at 
 kafka.controller.RequestSendThread.doWork(ControllerChannelManager.scala:131)
 at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:60)
 
 2015/08/17 22:43:09.035 [ReplicaStateMachine$BrokerChangeListener] 
 [BrokerChangeListener on Controller 1507]: Broker change listener fired for 
 path /brokers/ids with children 
 1140,1282,1579,871,1556,872,1511,873,874,852,1575,875,1574,1530,854,857,858,859,1493,1272,880,1547,1568,1500,1521,863,864,865,867,1507
 2015/08/17 22:43:09.082 [ReplicaStateMachine$BrokerChangeListener] 
 [BrokerChangeListener on Controller 1507]: Newly added brokers: , deleted 
 brokers: , all live brokers: 
 

[jira] [Updated] (KAFKA-2448) BrokerChangeListener missed broker id path ephemeral node deletion event.

2015-08-19 Thread Jiangjie Qin (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2448?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiangjie Qin updated KAFKA-2448:

Description: 
When a broker gets bounced, ideally the sequence should be like this:
1.1. Broker shuts down resources.
1.2. Broker closes zkClient (this will cause the ephemeral node 
/brokers/ids/BROKER_ID to be deleted)
1.3. Broker restarts and loads the log segments
1.4. Broker creates the ephemeral node /brokers/ids/BROKER_ID
The broker-side logs are:
{noformat}
...
2015/08/17 22:42:37.663 INFO [SocketServer] [Thread-1] [kafka-server] [] 
[Socket Server on Broker 1140], Shutting down
2015/08/17 22:42:37.735 INFO [SocketServer] [Thread-1] [kafka-server] [] 
[Socket Server on Broker 1140], Shutdown completed
...
2015/08/17 22:42:53.898 INFO [ZooKeeper] [Thread-1] [kafka-server] [] Session: 
0x14d43fd905f68d7 closed
2015/08/17 22:42:53.898 INFO [ClientCnxn] [main-EventThread] [kafka-server] [] 
EventThread shut down
2015/08/17 22:42:53.898 INFO [KafkaServer] [Thread-1] [kafka-server] [] [Kafka 
Server 1140], shut down completed
...
2015/08/17 22:43:03.306 INFO [ClientCnxn] 
[main-SendThread(zk-ei1-kafkatest.stg.linkedin.com:12913)] [kafka-server] [] 
Session establishment complete on server zk-ei1-kafkatest.stg.linkedin
.com/172.20.73.211:12913, sessionid = 0x24d43fd93d96821, negotiated timeout = 
12000
2015/08/17 22:43:03.306 INFO [ZkClient] [main-EventThread] [kafka-server] [] 
zookeeper state changed (SyncConnected)
...
{noformat}


On the controller side, the sequence should be:
2.1. Controlled shutdown of the broker
2.2. BrokerChangeListener fired for /brokers/ids child change because the ephemeral 
node is deleted in step 1.2
2.3. BrokerChangeListener fired again for /brokers/ids child change because the 
ephemeral node is created in step 1.4

The issue I saw was that on the controller side, the broker change listener only 
fired once, after step 1.4. So the controller did not see any broker change.

{noformat}
2015/08/17 22:41:46.189 [KafkaController] [Controller 1507]: Shutting down 
broker 1140
...
2015/08/17 22:42:38.031 [RequestSendThread] 
[Controller-1507-to-broker-1140-send-thread], Controller 1507 epoch 799 fails 
to send request Name: StopReplicaRequest; Version: 0; CorrelationId: 5334; 
ClientId: ; DeletePartitions: false; ControllerId: 1507; ControllerEpoch: 799; 
Partitions: [seas-decisionboard-searcher-service_call,1] to broker 1140 : 
(EndPoint(eat1-app1140.corp.linkedin.com,10251,PLAINTEXT)). Reconnecting to 
broker.
java.nio.channels.ClosedChannelException
at kafka.network.BlockingChannel.send(BlockingChannel.scala:110)
at 
kafka.controller.RequestSendThread.liftedTree1$1(ControllerChannelManager.scala:132)
at 
kafka.controller.RequestSendThread.doWork(ControllerChannelManager.scala:131)
at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:60)
2015/08/17 22:42:38.031 [RequestSendThread] 
[Controller-1507-to-broker-1140-send-thread], Controller 1507 connected to 1140 
: (EndPoint(eat1-app1140.corp.linkedin.com,10251,PLAINTEXT)) for sending state 
change requests
2015/08/17 22:42:38.332 [RequestSendThread] 
[Controller-1507-to-broker-1140-send-thread], Controller 1507 epoch 799 fails 
to send request Name: StopReplicaRequest; Version: 0; CorrelationId: 5334; 
ClientId: ; DeletePartitions: false; ControllerId: 1507; ControllerEpoch: 799; 
Partitions: [seas-decisionboard-searcher-service_call,1] to broker 1140 : 
(EndPoint(eat1-app1140.corp.linkedin.com,10251,PLAINTEXT)). Reconnecting to 
broker.
java.nio.channels.ClosedChannelException
at kafka.network.BlockingChannel.send(BlockingChannel.scala:110)
at 
kafka.controller.RequestSendThread.liftedTree1$1(ControllerChannelManager.scala:132)
at 
kafka.controller.RequestSendThread.doWork(ControllerChannelManager.scala:131)
at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:60)


2015/08/17 22:43:09.035 [ReplicaStateMachine$BrokerChangeListener] 
[BrokerChangeListener on Controller 1507]: Broker change listener fired for 
path /brokers/ids with children 
1140,1282,1579,871,1556,872,1511,873,874,852,1575,875,1574,1530,854,857,858,859,1493,1272,880,1547,1568,1500,1521,863,864,865,867,1507
2015/08/17 22:43:09.082 [ReplicaStateMachine$BrokerChangeListener] 
[BrokerChangeListener on Controller 1507]: Newly added brokers: , deleted 
brokers: , all live brokers: 
873,1507,1511,1568,1521,852,874,857,1493,1530,875,1282,1574,880,863,858,1556,1547,872,1579,864,1272,859,1575,854,867,865,1500,871
{noformat}

From the ZK transaction log, the zk session in step 1.4 had already been closed.
{noformat}
2015-08-17T22:42:53.899Z, s:0x14d43fd905f68d7, zx:0x26088e0cda UNKNOWN(null)
{noformat}

In this case, there were 10 seconds between step 1.2 and step 1.4, and the
ZK session timeout was set to 12 seconds.
According to our test, the ephemeral node in zookeeper will be deleted after 
the session is explicitly closed. But it 
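
To illustrate the listener semantics at play here (a sketch, not the 
controller code): zkclient's IZkChildListener only delivers the *current* 
children, so a delete followed by a quick re-create can surface as a single 
callback whose child list looks unchanged, matching the "Newly added 
brokers: , deleted brokers: " log above.
{noformat}
import java.util
import org.I0Itec.zkclient.IZkChildListener
import scala.collection.JavaConverters._

class BrokerChangeListenerSketch(var liveBrokers: Set[String])
  extends IZkChildListener {

  override def handleChildChange(parentPath: String,
                                 currentChilds: util.List[String]): Unit = {
    val current = currentChilds.asScala.toSet
    val added = current -- liveBrokers
    val deleted = liveBrokers -- current
    // If a bounce completes between two notifications, added and deleted
    // are both empty even though the ephemeral node went away in between.
    liveBrokers = current
    println(s"Newly added brokers: ${added.mkString(",")}, " +
      s"deleted brokers: ${deleted.mkString(",")}")
  }
}
{noformat}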

[DISCUSSION] Interface change notes in commit messages?

2015-08-19 Thread Jiangjie Qin
I am thinking: can we put some notes in the commit message when committing a
patch which introduces an API change or backward incompatible change?

It mainly serves two purposes:
1. Easier for people to track the changes they need to make to run a new
version
2. Easier for us to write the release note.

If we assume all API changes or backward incompatible changes are going
through a KIP then we can put the KIP number in the commit message.
Otherwise, maybe we can put [INTERFACE CHANGE] in the commit message?

Thoughts?

Jiangjie (Becket) Qin


Re: KAFKA-2364 migrate docs from SVN to git

2015-08-19 Thread Guozhang Wang
So I think we have two different approaches here. The original proposal
from Aseem is to move website from SVN to a separate Git repo, and hence
have separate commits on code / doc changes. For that we have accumulated
enough binding +1s to move on; Gwen's proposal is to move website into the
same repo under a different folder. If people feel they prefer this over
the previous approach I would like to call for another round of voting.

Guozhang

On Wed, Aug 19, 2015 at 10:24 AM, Ashish paliwalash...@gmail.com wrote:

 +1 to what Gwen has suggested. This is what we follow in Flume.

 All the latest doc changes are in git, once ready you move changes to
 svn to update website.
 The only catch is, when you need to update specific changes to website
 outside release cycle, need to be a bit careful :)

 On Wed, Aug 19, 2015 at 9:06 AM, Gwen Shapira g...@confluent.io wrote:
   Yeah, so the way this works in few other projects I worked on is:
  
   * The code repo has a /docs directory with the latest revision of the
   docs (not multiple versions, just one that matches the latest state of code)
   * When you submit a patch that requires doc modification, you modify all
   relevant files in same patch and they get reviewed and committed together
   (ideally)
   * When we release, we copy the docs matching the release and commit to
   SVN website. We also do this occasionally to fix bugs in earlier docs.
   * Release artifacts include a copy of the docs
  
   Nice to have:
   * Docs are in Asciidoc and build generates the HTML. Asciidoc is easier
   to edit and review.
  
   I suggest a similar process for Kafka.
 
  On Wed, Aug 19, 2015 at 8:53 AM, Ismael Juma ism...@juma.me.uk wrote:
 
  I should clarify: it's not possible unless we add an additional step that
  moves the docs from the code repo to the website repo.
 
  Ismael
 
  On Wed, Aug 19, 2015 at 4:42 PM, Ismael Juma ism...@juma.me.uk wrote:
 
   Hi all,
  
   It looks like it's not feasible to update the code and website in the
   same commit given existing limitations of the Apache infra:
  
  https://issues.apache.org/jira/browse/INFRA-10143?focusedCommentId=14703175page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14703175
  
   Best,
   Ismael
  
   On Wed, Aug 12, 2015 at 10:00 AM, Ismael Juma ism...@juma.me.uk wrote:
  
    Hi Gwen,
   
    I filed KAFKA-2425, as KAFKA-2364 is about improving the website
    documentation. Aseem Bansal seemed interested in helping us with the
    move, so I pinged him in the issue.
   
    Best,
    Ismael
   
    On Wed, Aug 12, 2015 at 1:51 AM, Gwen Shapira g...@confluent.io wrote:
   
     Ah, there is already a JIRA in the title. Never mind :)
    
     On Tue, Aug 11, 2015 at 5:51 PM, Gwen Shapira g...@confluent.io wrote:
    
      The vote opened 5 days ago. I believe we can conclude with 3 binding
      +1s, 3 non-binding +1s, and no -1s.
     
      Ismael, are you opening a JIRA and migrating? Or are we looking for a
      volunteer?
     
      On Tue, Aug 11, 2015 at 5:46 PM, Ashish Singh asi...@cloudera.com wrote:
   
+1 on same repo.
   
On Tue, Aug 11, 2015 at 12:21 PM, Edward Ribeiro edward.ribe...@gmail.com wrote:

 +1. As soon as possible, please. :)

 On Sat, Aug 8, 2015 at 4:05 PM, Neha Narkhede n...@confluent.io wrote:

  +1 on the same repo for code and website. It helps to keep both in sync.

  On Thu, Aug 6, 2015 at 1:52 PM, Grant Henke ghe...@cloudera.com wrote:

   +1 for the same repo. The closer docs can be to code, the more accurate
   they are likely to be. The same way we encourage unit tests for a new
   feature/patch, updating the docs can be the same.

   If we follow Sqoop's process, for example, how would small
   fixes/adjustments/additions to the live documentation occur without a new
   release?

   On Thu, Aug 6, 2015 at 3:33 PM, Guozhang Wang wangg...@gmail.com wrote:

    I am +1 on same repo too. I think keeping one git history of code / doc
    changes may actually be beneficial for this approach as well.

    Guozhang

    On Thu, Aug 6, 2015 at 9:16 AM, Gwen Shapira g...@confluent.io wrote:

     I prefer the same repo for the one-commit / lower-barrier benefits.

     Sqoop has the following process, which decouples documentation changes
     from website changes:

     1. The code github repo contains a doc directory, with the documentation
     written and maintained in AsciiDoc. Only one version of the
     documentation, since it is source controlled with the code (unlike the
     current SVN, where we have directories per version).

     2. The build process compiles the AsciiDoc to HTML and PDF.

     3. When releasing, we post the documentation of 

[jira] [Created] (KAFKA-2448) BrokerChangeListener missed broker id path ephemeral node deletion event.

2015-08-19 Thread Jiangjie Qin (JIRA)
Jiangjie Qin created KAFKA-2448:
---

 Summary: BrokerChangeListener missed broker id path ephemeral node 
deletion event.
 Key: KAFKA-2448
 URL: https://issues.apache.org/jira/browse/KAFKA-2448
 Project: Kafka
  Issue Type: Bug
Reporter: Jiangjie Qin


When a broker gets bounced, ideally the sequence should be like this:
1.1. Broker shuts down resources.
1.2. Broker closes its zkClient (this causes the ephemeral node 
/brokers/ids/BROKER_ID to be deleted)
1.3. Broker restarts and loads the log segments
1.4. Broker creates the ephemeral node /brokers/ids/BROKER_ID
The broker side logs are:
{noformat}
...
2015/08/17 22:42:37.663 INFO [SocketServer] [Thread-1] [kafka-server] [] 
[Socket Server on Broker 1140], Shutting down
2015/08/17 22:42:37.735 INFO [SocketServer] [Thread-1] [kafka-server] [] 
[Socket Server on Broker 1140], Shutdown completed
...
2015/08/17 22:42:53.898 INFO [ZooKeeper] [Thread-1] [kafka-server] [] Session: 
0x14d43fd905f68d7 closed
2015/08/17 22:42:53.898 INFO [ClientCnxn] [main-EventThread] [kafka-server] [] 
EventThread shut down
2015/08/17 22:42:53.898 INFO [KafkaServer] [Thread-1] [kafka-server] [] [Kafka 
Server 1140], shut down completed
...
2015/08/17 22:43:03.306 INFO [ClientCnxn] 
[main-SendThread(zk-ei1-kafkatest.stg.linkedin.com:12913)] [kafka-server] [] 
Session establishment complete on server zk-ei1-kafkatest.stg.linkedin
.com/172.20.73.211:12913, sessionid = 0x24d43fd93d96821, negotiated timeout = 
12000
2015/08/17 22:43:03.306 INFO [ZkClient] [main-EventThread] [kafka-server] [] 
zookeeper state changed (SyncConnected)
...
{noformat}


On the controller side, the sequence should be:
2.1. Controlled shutdown of the broker
2.2. BrokerChangeListener fires for a /brokers/ids child change because the 
ephemeral node is deleted in step 1.2
2.3. BrokerChangeListener fires again for a /brokers/ids child change because 
the ephemeral node is created in step 1.4

The issue I saw was that, on the controller side, the broker change listener 
fired only once, after step 1.4, so the controller did not see any broker change.

{noformat}
2015/08/17 22:41:46.189 [KafkaController] [Controller 1507]: Shutting down 
broker 1140
...
2015/08/17 22:42:38.031 [RequestSendThread] 
[Controller-1507-to-broker-1140-send-thread], Controller 1507 epoch 799 fails 
to send request Name: StopReplicaRequest; Version: 0; CorrelationId: 5334; 
ClientId: ; DeletePartitions: false; ControllerId: 1507; ControllerEpoch: 799; 
Partitions: [seas-decisionboard-searcher-service_call,1] to broker 1140 : 
(EndPoint(eat1-app1140.corp.linkedin.com,10251,PLAINTEXT)). Reconnecting to 
broker.
java.nio.channels.ClosedChannelException
at kafka.network.BlockingChannel.send(BlockingChannel.scala:110)
at 
kafka.controller.RequestSendThread.liftedTree1$1(ControllerChannelManager.scala:132)
at 
kafka.controller.RequestSendThread.doWork(ControllerChannelManager.scala:131)
at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:60)
2015/08/17 22:42:38.031 [RequestSendThread] 
[Controller-1507-to-broker-1140-send-thread], Controller 1507 connected to 1140 
: (EndPoint(eat1-app1140.corp.linkedin.com,10251,PLAINTEXT)) for sending state 
change requests
2015/08/17 22:42:38.332 [RequestSendThread] 
[Controller-1507-to-broker-1140-send-thread], Controller 1507 epoch 799 fails 
to send request Name: StopReplicaRequest; Version: 0; CorrelationId: 5334; 
ClientId: ; DeletePartitions: false; ControllerId: 1507; ControllerEpoch: 799; 
Partitions: [seas-decisionboard-searcher-service_call,1] to broker 1140 : 
(EndPoint(eat1-app1140.corp.linkedin.com,10251,PLAINTEXT)). Reconnecting to 
broker.
java.nio.channels.ClosedChannelException
at kafka.network.BlockingChannel.send(BlockingChannel.scala:110)
at 
kafka.controller.RequestSendThread.liftedTree1$1(ControllerChannelManager.scala:132)
at 
kafka.controller.RequestSendThread.doWork(ControllerChannelManager.scala:131)
at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:60)


2015/08/17 22:43:09.035 [ReplicaStateMachine$BrokerChangeListener] 
[BrokerChangeListener on Controller 1507]: Broker change listener fired for 
path /brokers/ids with children 
1140,1282,1579,871,1556,872,1511,873,874,852,1575,875,1574,1530,854,857,858,859,1493,1272,880,1547,1568,1500,1521,863,864,865,867,1507
2015/08/17 22:43:09.082 [ReplicaStateMachine$BrokerChangeListener] 
[BrokerChangeListener on Controller 1507]: Newly added brokers: , deleted 
brokers: , all live brokers: 
873,1507,1511,1568,1521,852,874,857,1493,1530,875,1282,1574,880,863,858,1556,1547,872,1579,864,1272,859,1575,854,867,865,1500,871
{noformat}

From the ZK transaction log, the ZK session in step 1.4 has already been closed.
{noformat}
2015-08-17T22:42:53.899Z, s:0x14d43fd905f68d7, zx:0x26088e0cda UNKNOWN(null)
{noformat}

In this case, there were 10 seconds between step 1.2 and step 1.4; the ZK 
session timeout was set to 12 seconds.
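
To make the failure mode concrete, here is a minimal, self-contained Scala
sketch of a diff-based child listener (hypothetical names; not the actual
ReplicaStateMachine code). ZooKeeper watches are one-shot and the child list is
only re-read when the listener runs, so a delete followed quickly by a
re-create can surface as a single event whose diff is empty, matching the
controller log above:
{noformat}
object BrokerChangeDiff {
  // Last known live broker set, as the controller would cache it.
  var liveBrokerIds: Set[Int] = Set(1140, 1507)

  // Called once per fired watch with the children of /brokers/ids.
  def handleChildChange(currentChildren: Seq[String]): Unit = {
    val curBrokerIds  = currentChildren.map(_.toInt).toSet
    val newBrokerIds  = curBrokerIds -- liveBrokerIds   // "Newly added brokers"
    val deadBrokerIds = liveBrokerIds -- curBrokerIds   // "deleted brokers"
    println(s"Newly added brokers: $newBrokerIds, deleted brokers: $deadBrokerIds")
    liveBrokerIds = curBrokerIds
  }

  def main(args: Array[String]): Unit = {
    // Broker 1140 bounces, but the delete and re-create of its ephemeral
    // node both happen before the listener re-reads the path, so the one
    // event that fires sees no difference at all.
    handleChildChange(Seq("1140", "1507")) // both diffs print as empty sets
  }
}
{noformat}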

[jira] [Commented] (KAFKA-2448) BrokerChangeListener missed broker id path ephemeral node deletion event.

2015-08-19 Thread Jiangjie Qin (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14703831#comment-14703831
 ] 

Jiangjie Qin commented on KAFKA-2448:
-

[~fpj] This issue is potentially related to zookeeper. Will you be able to take 
a look? Thanks.

 BrokerChangeListener missed broker id path ephemeral node deletion event.
 -

 Key: KAFKA-2448
 URL: https://issues.apache.org/jira/browse/KAFKA-2448
 Project: Kafka
  Issue Type: Bug
Reporter: Jiangjie Qin
Assignee: Flavio Junqueira

 When a broker gets bounced, ideally the sequence should be like this:
 1.1. Broker shuts down resources.
 1.2. Broker closes its zkClient (this causes the ephemeral node 
 /brokers/ids/BROKER_ID to be deleted)
 1.3. Broker restarts and loads the log segments
 1.4. Broker creates the ephemeral node /brokers/ids/BROKER_ID
 The broker side logs are:
 {noformat}
 ...
 2015/08/17 22:42:37.663 INFO [SocketServer] [Thread-1] [kafka-server] [] 
 [Socket Server on Broker 1140], Shutting down
 2015/08/17 22:42:37.735 INFO [SocketServer] [Thread-1] [kafka-server] [] 
 [Socket Server on Broker 1140], Shutdown completed
 ...
 2015/08/17 22:42:53.898 INFO [ZooKeeper] [Thread-1] [kafka-server] [] 
 Session: 0x14d43fd905f68d7 closed
 2015/08/17 22:42:53.898 INFO [ClientCnxn] [main-EventThread] [kafka-server] 
 [] EventThread shut down
 2015/08/17 22:42:53.898 INFO [KafkaServer] [Thread-1] [kafka-server] [] 
 [Kafka Server 1140], shut down completed
 ...
 2015/08/17 22:43:03.306 INFO [ClientCnxn] 
 [main-SendThread(zk-ei1-kafkatest.stg.linkedin.com:12913)] [kafka-server] [] 
 Session establishment complete on server zk-ei1-kafkatest.stg.linkedin
 .com/172.20.73.211:12913, sessionid = 0x24d43fd93d96821, negotiated timeout = 
 12000
 2015/08/17 22:43:03.306 INFO [ZkClient] [main-EventThread] [kafka-server] [] 
 zookeeper state changed (SyncConnected)
 ...
 {noformat}
 On the controller side, the sequence should be:
 2.1. Controlled shutdown of the broker
 2.2. BrokerChangeListener fires for a /brokers/ids child change because the 
 ephemeral node is deleted in step 1.2
 2.3. BrokerChangeListener fires again for a /brokers/ids child change because 
 the ephemeral node is created in step 1.4
 The issue I saw was that, on the controller side, the broker change listener 
 fired only once, after step 1.4, so the controller did not see any broker change.
 {noformat}
 2015/08/17 22:41:46.189 [KafkaController] [Controller 1507]: Shutting down 
 broker 1140
 ...
 2015/08/17 22:42:38.031 [RequestSendThread] 
 [Controller-1507-to-broker-1140-send-thread], Controller 1507 epoch 799 fails 
 to send request Name: StopReplicaRequest; Version: 0; CorrelationId: 5334; 
 ClientId: ; DeletePartitions: false; ControllerId: 1507; ControllerEpoch: 
 799; Partitions: [seas-decisionboard-searcher-service_call,1] to broker 1140 
 : (EndPoint(eat1-app1140.corp.linkedin.com,10251,PLAINTEXT)). Reconnecting to 
 broker.
 java.nio.channels.ClosedChannelException
 at kafka.network.BlockingChannel.send(BlockingChannel.scala:110)
 at 
 kafka.controller.RequestSendThread.liftedTree1$1(ControllerChannelManager.scala:132)
 at 
 kafka.controller.RequestSendThread.doWork(ControllerChannelManager.scala:131)
 at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:60)
 2015/08/17 22:42:38.031 [RequestSendThread] 
 [Controller-1507-to-broker-1140-send-thread], Controller 1507 connected to 
 1140 : (EndPoint(eat1-app1140.corp.linkedin.com,10251,PLAINTEXT)) for sending 
 state change requests
 2015/08/17 22:42:38.332 [RequestSendThread] 
 [Controller-1507-to-broker-1140-send-thread], Controller 1507 epoch 799 fails 
 to send request Name: StopReplicaRequest; Version: 0; CorrelationId: 5334; 
 ClientId: ; DeletePartitions: false; ControllerId: 1507; ControllerEpoch: 
 799; Partitions: [seas-decisionboard-searcher-service_call,1] to broker 1140 
 : (EndPoint(eat1-app1140.corp.linkedin.com,10251,PLAINTEXT)). Reconnecting to 
 broker.
 java.nio.channels.ClosedChannelException
 at kafka.network.BlockingChannel.send(BlockingChannel.scala:110)
 at 
 kafka.controller.RequestSendThread.liftedTree1$1(ControllerChannelManager.scala:132)
 at 
 kafka.controller.RequestSendThread.doWork(ControllerChannelManager.scala:131)
 at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:60)
 
 2015/08/17 22:43:09.035 [ReplicaStateMachine$BrokerChangeListener] 
 [BrokerChangeListener on Controller 1507]: Broker change listener fired for 
 path /brokers/ids with children 
 1140,1282,1579,871,1556,872,1511,873,874,852,1575,875,1574,1530,854,857,858,859,1493,1272,880,1547,1568,1500,1521,863,864,865,867,1507
 2015/08/17 22:43:09.082 [ReplicaStateMachine$BrokerChangeListener] 
 [BrokerChangeListener on Controller 1507]: Newly 

[jira] [Commented] (KAFKA-2330) Vagrantfile sets global configs instead of per-provider override configs

2015-08-19 Thread Gwen Shapira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2330?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14704011#comment-14704011
 ] 

Gwen Shapira commented on KAFKA-2330:
-

Please add to the README that we need HostManager 1.5.0 (but no higher):
vagrant plugin install vagrant-hostmanager --plugin-version 1.5.0

 Vagrantfile sets global configs instead of per-provider override configs
 

 Key: KAFKA-2330
 URL: https://issues.apache.org/jira/browse/KAFKA-2330
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.8.2.1
Reporter: Ewen Cheslack-Postava
Assignee: Ewen Cheslack-Postava
Priority: Minor
 Fix For: 0.8.3

 Attachments: KAFKA-2330.patch


 There are a couple of minor incorrect usages of the global configuration object 
 in the Vagrantfile inside provider-specific override blocks where we should 
 be using the override config object. Two end up being harmless since they 
 have no effect on other providers (but should still be corrected). The third 
 results in using rsync when using VirtualBox, which is unnecessary, slower, 
 and requires copying the entire kafka directory to every VM.
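
For illustration, a minimal Vagrantfile sketch of the pattern in question (not
the actual Kafka Vagrantfile): inside a provider block, Vagrant passes an
override config whose settings apply only to that provider, while mutating the
outer config object leaks the setting into every provider.
{noformat}
Vagrant.configure("2") do |config|
  config.vm.box = "hashicorp/precise64"   # global: applies to all providers

  config.vm.provider :virtualbox do |vb, override|
    # Wrong: this mutates the global config, so the rsync synced folder
    # would apply even when another provider is used:
    #   config.vm.synced_folder ".", "/vagrant", type: "rsync"

    # Right: settings on the override object are scoped to VirtualBox only.
    override.vm.synced_folder ".", "/vagrant"
    vb.memory = 2048
  end
end
{noformat}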



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2330) Vagrantfile sets global configs instead of per-provider override configs

2015-08-19 Thread Gwen Shapira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2330?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14704112#comment-14704112
 ] 

Gwen Shapira commented on KAFKA-2330:
-

+1 and pushed to trunk.

Thanks for the improvement, [~ewencp]

 Vagrantfile sets global configs instead of per-provider override configs
 

 Key: KAFKA-2330
 URL: https://issues.apache.org/jira/browse/KAFKA-2330
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.8.2.1
Reporter: Ewen Cheslack-Postava
Assignee: Ewen Cheslack-Postava
Priority: Minor
 Fix For: 0.8.3

 Attachments: KAFKA-2330.patch, KAFKA-2330_2015-08-19_17:50:17.patch


 There are a couple of minor incorrect usages of the global configuration object 
 in the Vagrantfile inside provider-specific override blocks where we should 
 be using the override config object. Two end up being harmless since they 
 have no effect on other providers (but should still be corrected). The third 
 results in using rsync when using VirtualBox, which is unnecessary, slower, 
 and requires copying the entire kafka directory to every VM.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2330) Vagrantfile sets global configs instead of per-provider override configs

2015-08-19 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2330?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-2330:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

 Vagrantfile sets global configs instead of per-provider override configs
 

 Key: KAFKA-2330
 URL: https://issues.apache.org/jira/browse/KAFKA-2330
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.8.2.1
Reporter: Ewen Cheslack-Postava
Assignee: Ewen Cheslack-Postava
Priority: Minor
 Fix For: 0.8.3

 Attachments: KAFKA-2330.patch, KAFKA-2330_2015-08-19_17:50:17.patch


 There are a couple of minor incorrect usages of the global configuration object 
 in the Vagrantfile inside provider-specific override blocks where we should 
 be using the override config object. Two end up being harmless since they 
 have no effect on other providers (but should still be corrected). The third 
 results in using rsync when using VirtualBox, which is unnecessary, slower, 
 and requires copying the entire kafka directory to every VM.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (KAFKA-1688) Add authorization interface and naive implementation

2015-08-19 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1688?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao resolved KAFKA-1688.

Resolution: Duplicate

Closing this since it's being handled in KAFKA-2210, KAFKA-2211, and KAFKA-2212.

 Add authorization interface and naive implementation
 

 Key: KAFKA-1688
 URL: https://issues.apache.org/jira/browse/KAFKA-1688
 Project: Kafka
  Issue Type: Sub-task
  Components: security
Reporter: Jay Kreps
Assignee: Parth Brahmbhatt
 Fix For: 0.8.3

 Attachments: KAFKA-1688.patch, KAFKA-1688_2015-04-10_11:08:39.patch


 Add a PermissionManager interface as described here:
 https://cwiki.apache.org/confluence/display/KAFKA/Security
 (possibly there is a better name?)
 Implement calls to the PermissionManager in KafkaApis for the main requests 
 (FetchRequest, ProduceRequest, etc.). We will need to add a new error code and 
 exception to the protocol to indicate permission denied.
 Add a server configuration that specifies the class to instantiate as the 
 implementation of the interface. That class can define its own configuration 
 properties from the main config file.
 Provide a simple implementation of this interface which just takes a user 
 whitelist and an IP whitelist and permits anyone in either whitelist to do 
 anything, and denies all others.
 Rather than writing an integration test for this class, we can probably just 
 use this class for the TLS and SASL authentication testing.
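
As a rough Scala sketch of that proposal (hypothetical names and config keys;
not the interface that was ultimately committed, since the work moved to
KAFKA-2210, KAFKA-2211, and KAFKA-2212):
{noformat}
// Hypothetical sketch of the proposed interface and naive implementation.
trait PermissionManager {
  def configure(props: java.util.Map[String, String]): Unit
  def authorize(user: String, clientIp: String,
                operation: String, resource: String): Boolean
}

// Naive whitelist implementation: permit a request if either the user or
// the client IP is whitelisted; deny everything else.
class WhitelistPermissionManager extends PermissionManager {
  private var users: Set[String] = Set.empty
  private var ips: Set[String] = Set.empty

  override def configure(props: java.util.Map[String, String]): Unit = {
    def parse(key: String): Set[String] =
      Option(props.get(key)).map(_.split(",").map(_.trim).toSet).getOrElse(Set.empty)
    // Config key names are made up for illustration.
    users = parse("permission.manager.user.whitelist")
    ips = parse("permission.manager.ip.whitelist")
  }

  override def authorize(user: String, clientIp: String,
                         operation: String, resource: String): Boolean =
    users.contains(user) || ips.contains(clientIp)
}
{noformat}
KafkaApis would then call authorize() before serving each request and return
the new permission-denied error code when it evaluates to false.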



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Review Request 35867: Patch for KAFKA-1901

2015-08-19 Thread Joel Koshy


 On July 21, 2015, 1:58 p.m., Ismael Juma wrote:
  clients/src/main/java/org/apache/kafka/common/utils/AppInfoParser.java, 
  line 27
  https://reviews.apache.org/r/35867/diff/4/?file=1011317#file1011317line27
 
  Why isn't this unknown like `version`?
 
 Manikumar Reddy O wrote:
 yes we can set unknown here.  Joel suggested to use  as the 
 unknown commit ID. Joel, can we change this?

Sure - I originally thought we could make the fingerprint == the numeric value 
associated with the leading bytes. That would have made it easy to just take 
the fingerprint, convert it to hex, and get the git hash prefix. However, I 
think we can just drop the fingerprint altogether for now.


- Joel


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/35867/#review92404
---


On Aug. 9, 2015, 9:37 a.m., Manikumar Reddy O wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/35867/
 ---
 
 (Updated Aug. 9, 2015, 9:37 a.m.)
 
 
 Review request for kafka.
 
 
 Bugs: KAFKA-1901
 https://issues.apache.org/jira/browse/KAFKA-1901
 
 
 Repository: kafka
 
 
 Description
 ---
 
 Addressing Joel's comments
 
 
 Diffs
 -
 
   build.gradle 1b67e628c2fca897177c12b6afad9a8700fffd1f 
   clients/src/main/java/org/apache/kafka/clients/consumer/KafkaConsumer.java 
 ed99e9bdf7c4ea7a6d4555d4488cf8ed0b80641b 
   clients/src/main/java/org/apache/kafka/clients/producer/KafkaProducer.java 
 03b8dd23df63a8d8a117f02eabcce4a2d48c44f7 
   clients/src/main/java/org/apache/kafka/common/utils/AppInfoParser.java 
 PRE-CREATION 
   core/src/main/scala/kafka/common/AppInfo.scala 
 d642ca555f83c41451d4fcaa5c01a1f86eff0a1c 
   core/src/main/scala/kafka/server/KafkaServer.scala 
 84d4730ac634f9a5bf12a656e422fea03ad72da8 
   core/src/main/scala/kafka/server/KafkaServerStartable.scala 
 1c1b75b4137a8b233b61739018e9cebcc3a34343 
 
 Diff: https://reviews.apache.org/r/35867/diff/
 
 
 Testing
 ---
 
 
 Thanks,
 
 Manikumar Reddy O
 




Re: Review Request 35867: Patch for KAFKA-1901

2015-08-19 Thread Joel Koshy

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/35867/#review95902
---


Can you rebase?


build.gradle (line 389)
https://reviews.apache.org/r/35867/#comment151107

This gives an error in detached mode (i.e., not on any branch).



clients/src/main/java/org/apache/kafka/common/utils/AppInfoParser.java (line 75)
https://reviews.apache.org/r/35867/#comment151102

Sorry about the misdirection here, but I think we may as well drop this 
fingerprint. If we want to add it later, we can - and it may be useful to take 
the actual numeric value of the leading eight bytes since it makes it easier to 
instantly associate with a commit hash (otherwise we would need to tabulate 
commits to their hashCode for easy lookup).
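
For reference, a small Scala sketch of the fingerprint scheme described above
(an illustration of the idea only, not the patch under review): take the
unsigned numeric value of the leading eight bytes (16 hex characters) of the
commit hash, which converts straight back to the hash prefix with no lookup
table.
{noformat}
object CommitFingerprint {
  // Numeric value of the first eight bytes (16 hex chars) of a git hash.
  // Assumes a full 40-character hex commit id.
  def fingerprint(commitId: String): Long =
    java.lang.Long.parseUnsignedLong(commitId.take(16), 16)

  // Converting back to zero-padded hex recovers the hash prefix directly.
  def hashPrefix(fp: Long): String = f"$fp%016x"

  def main(args: Array[String]): Unit = {
    val commit = "4bb7adeb27673145dcb735f9e2039a05d94faea8" // sample hash
    val fp = fingerprint(commit)
    println(s"fingerprint=$fp prefix=${hashPrefix(fp)}")
  }
}
{noformat}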


- Joel Koshy


On Aug. 9, 2015, 9:37 a.m., Manikumar Reddy O wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/35867/
 ---
 
 (Updated Aug. 9, 2015, 9:37 a.m.)
 
 
 Review request for kafka.
 
 
 Bugs: KAFKA-1901
 https://issues.apache.org/jira/browse/KAFKA-1901
 
 
 Repository: kafka
 
 
 Description
 ---
 
 Addressing Joel's comments
 
 
 Diffs
 -
 
   build.gradle 1b67e628c2fca897177c12b6afad9a8700fffd1f 
   clients/src/main/java/org/apache/kafka/clients/consumer/KafkaConsumer.java 
 ed99e9bdf7c4ea7a6d4555d4488cf8ed0b80641b 
   clients/src/main/java/org/apache/kafka/clients/producer/KafkaProducer.java 
 03b8dd23df63a8d8a117f02eabcce4a2d48c44f7 
   clients/src/main/java/org/apache/kafka/common/utils/AppInfoParser.java 
 PRE-CREATION 
   core/src/main/scala/kafka/common/AppInfo.scala 
 d642ca555f83c41451d4fcaa5c01a1f86eff0a1c 
   core/src/main/scala/kafka/server/KafkaServer.scala 
 84d4730ac634f9a5bf12a656e422fea03ad72da8 
   core/src/main/scala/kafka/server/KafkaServerStartable.scala 
 1c1b75b4137a8b233b61739018e9cebcc3a34343 
 
 Diff: https://reviews.apache.org/r/35867/diff/
 
 
 Testing
 ---
 
 
 Thanks,
 
 Manikumar Reddy O
 




[jira] [Commented] (KAFKA-2446) KAFKA-2205 causes existing Topic config changes to be lost

2015-08-19 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2446?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14704033#comment-14704033
 ] 

ASF GitHub Bot commented on KAFKA-2446:
---

GitHub user auradkar opened a pull request:

https://github.com/apache/kafka/pull/152

Fix for KAFKA-2446

This bug was introduced while committing KAFKA-2205. Basically, the path 
for topic overrides was renamed from topics to topic. However, this causes 
existing topic config overrides to break, because they will no longer be read 
from ZK since the path is different.

https://reviews.apache.org/r/34554/

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/auradkar/kafka 2446

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/152.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #152


commit 4bb7adeb27673145dcb735f9e2039a05d94faea8
Author: Aditya Auradkar aaurad...@linkedin.com
Date:   2015-08-20T00:12:04Z

Fix for 2446




 KAFKA-2205 causes existing Topic config changes to be lost
 --

 Key: KAFKA-2446
 URL: https://issues.apache.org/jira/browse/KAFKA-2446
 Project: Kafka
  Issue Type: Bug
Reporter: Aditya Auradkar
Assignee: Aditya Auradkar

 The path was changed from /config/topics/ to /config/topic. This causes 
 existing config overrides not to be read.
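
To illustrate the failure mode, a toy Scala sketch (hypothetical helper names,
not the actual AdminUtils/ZkUtils code): overrides persisted under the old
parent znode are simply never visited by a reader that lists the new one.
{noformat}
object TopicConfigPath {
  val oldParent = "/config/topics" // where pre-KAFKA-2205 overrides live
  val newParent = "/config/topic"  // parent introduced by the rename

  def path(parent: String, topic: String): String = s"$parent/$topic"

  def main(args: Array[String]): Unit = {
    val written = path(oldParent, "my-topic")
    val read = path(newParent, "my-topic")
    // Different znodes, so the override written earlier is silently ignored.
    println(s"written=$written, read=$read, same znode=${written == read}")
  }
}
{noformat}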



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: Fix for KAFKA-2446

2015-08-19 Thread auradkar
GitHub user auradkar opened a pull request:

https://github.com/apache/kafka/pull/152

Fix for KAFKA-2446

This bug was introduced while committing KAFKA-2205. Basically, the path 
for topic overrides was renamed from topics to topic. However, this causes 
existing topic config overrides to break, because they will no longer be read 
from ZK since the path is different.

https://reviews.apache.org/r/34554/

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/auradkar/kafka 2446

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/152.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #152


commit 4bb7adeb27673145dcb735f9e2039a05d94faea8
Author: Aditya Auradkar aaurad...@linkedin.com
Date:   2015-08-20T00:12:04Z

Fix for 2446




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Re: Review Request 36427: Patch for KAFKA-2330

2015-08-19 Thread Ewen Cheslack-Postava

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/36427/
---

(Updated Aug. 20, 2015, 12:50 a.m.)


Review request for kafka.


Bugs: KAFKA-2330
https://issues.apache.org/jira/browse/KAFKA-2330


Repository: kafka


Description
---

KAFKA-2330: Make provider-specific overrides in Vagrantfile apply only to the 
targeted provider.


Diffs (updated)
-

  Vagrantfile 28bf24ae8b59d18e8f1574c4b78d18863c00398f 
  vagrant/README.md 73cf0390bc4c76be310c09ddedf91de5ddf1b473 

Diff: https://reviews.apache.org/r/36427/diff/


Testing
---


Thanks,

Ewen Cheslack-Postava



[jira] [Commented] (KAFKA-2330) Vagrantfile sets global configs instead of per-provider override configs

2015-08-19 Thread Ewen Cheslack-Postava (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2330?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14704066#comment-14704066
 ] 

Ewen Cheslack-Postava commented on KAFKA-2330:
--

Updated reviewboard https://reviews.apache.org/r/36427/diff/
 against branch origin/trunk

 Vagrantfile sets global configs instead of per-provider override configs
 

 Key: KAFKA-2330
 URL: https://issues.apache.org/jira/browse/KAFKA-2330
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.8.2.1
Reporter: Ewen Cheslack-Postava
Assignee: Ewen Cheslack-Postava
Priority: Minor
 Fix For: 0.8.3

 Attachments: KAFKA-2330.patch, KAFKA-2330_2015-08-19_17:50:17.patch


 There are a couple of minor incorrect usages of the global configuration object 
 in the Vagrantfile inside provider-specific override blocks where we should 
 be using the override config object. Two end up being harmless since they 
 have no effect on other providers (but should still be corrected). The third 
 results in using rsync when using VirtualBox, which is unnecessary, slower, 
 and requires copying the entire kafka directory to every VM.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2330) Vagrantfile sets global configs instead of per-provider override configs

2015-08-19 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2330?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava updated KAFKA-2330:
-
Attachment: KAFKA-2330_2015-08-19_17:50:17.patch

 Vagrantfile sets global configs instead of per-provider override configs
 

 Key: KAFKA-2330
 URL: https://issues.apache.org/jira/browse/KAFKA-2330
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.8.2.1
Reporter: Ewen Cheslack-Postava
Assignee: Ewen Cheslack-Postava
Priority: Minor
 Fix For: 0.8.3

 Attachments: KAFKA-2330.patch, KAFKA-2330_2015-08-19_17:50:17.patch


 There are a couple of minor incorrect usages of the global configuration object 
 in the Vagrantfile inside provider-specific override blocks where we should 
 be using the override config object. Two end up being harmless since they 
 have no effect on other providers (but should still be corrected). The third 
 results in using rsync when using VirtualBox, which is unnecessary, slower, 
 and requires copying the entire kafka directory to every VM.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

