Re: Review Request 35867: Patch for KAFKA-1901

2015-08-20 Thread Manikumar Reddy O


 On Aug. 20, 2015, 2:03 a.m., Joel Koshy wrote:
  build.gradle, line 389
  https://reviews.apache.org/r/35867/diff/5/?file=1035356#file1035356line389
 
  This gives an error in detached mode (i.e., not on any branch).

Updated the code to handle detached mode. Thanks for the review.
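
For context, the fallback usually works like this: `git symbolic-ref -q HEAD` prints the branch ref when HEAD is on a branch and nothing when detached, so the build can fall back to a short commit hash. A minimal sketch of that logic (illustrative only, not the actual build.gradle patch; inputs stand in for the git command output):

```java
// Illustrative sketch (not the actual build.gradle change): derive a version
// suffix from git output, falling back to a short commit SHA when HEAD is
// detached. Inputs stand in for `git symbolic-ref -q HEAD` and `git rev-parse HEAD`.
public class GitRef {
    public static String describe(String symbolicRefOutput, String headCommit) {
        String ref = symbolicRefOutput == null ? "" : symbolicRefOutput.trim();
        if (ref.isEmpty()) // detached HEAD: symbolic-ref prints nothing
            return headCommit.substring(0, Math.min(7, headCommit.length()));
        return ref.replaceFirst("^refs/heads/", ""); // on a branch: use its name
    }

    public static void main(String[] args) {
        System.out.println(describe("refs/heads/trunk\n", "17fc223907f6b55f"));
        System.out.println(describe("", "17fc223907f6b55f"));
    }
}
```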


- Manikumar Reddy


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/35867/#review95902
---


On Aug. 20, 2015, 7:08 a.m., Manikumar Reddy O wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/35867/
 ---
 
 (Updated Aug. 20, 2015, 7:08 a.m.)
 
 
 Review request for kafka.
 
 
 Bugs: KAFKA-1901
 https://issues.apache.org/jira/browse/KAFKA-1901
 
 
 Repository: kafka
 
 
 Description
 ---
 
 Addressing Joel's comments; rebase
 
 
 Diffs
 -
 
   build.gradle 17fc223907f6b55f2d730be81ddb8d8e07a2c7ad 
   clients/src/main/java/org/apache/kafka/clients/consumer/KafkaConsumer.java 
 3749880b765f74af117d6c44705daf170095a1b7 
   clients/src/main/java/org/apache/kafka/clients/producer/KafkaProducer.java 
 c4621e22c32c1a1fb23726d7f56004845def96ef 
   clients/src/main/java/org/apache/kafka/common/utils/AppInfoParser.java 
 PRE-CREATION 
   core/src/main/scala/kafka/common/AppInfo.scala 
 d642ca555f83c41451d4fcaa5c01a1f86eff0a1c 
   core/src/main/scala/kafka/server/KafkaServer.scala 
 0e7ba3ede78dbc995c404e0387a6be687703836a 
   core/src/main/scala/kafka/server/KafkaServerStartable.scala 
 1c1b75b4137a8b233b61739018e9cebcc3a34343 
 
 Diff: https://reviews.apache.org/r/35867/diff/
 
 
 Testing
 ---
 
 
 Thanks,
 
 Manikumar Reddy O
 




[jira] [Commented] (KAFKA-2084) byte rate metrics per client ID (producer and consumer)

2015-08-20 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14704563#comment-14704563
 ] 

Ismael Juma commented on KAFKA-2084:


I noticed that a number of classes inherit from `JUnitSuite` and others don't 
inherit from anything. Is there a preferred style, or is it decided on a 
case-by-case basis?

[~jjkoshy], I intend to look at scalastyle soon and see how well it works for 
our codebase (KAFKA-2423).

 byte rate metrics per client ID (producer and consumer)
 ---

 Key: KAFKA-2084
 URL: https://issues.apache.org/jira/browse/KAFKA-2084
 Project: Kafka
  Issue Type: Sub-task
Reporter: Aditya Auradkar
Assignee: Aditya Auradkar
  Labels: quotas
 Attachments: KAFKA-2084.patch, KAFKA-2084_2015-04-09_18:10:56.patch, 
 KAFKA-2084_2015-04-10_17:24:34.patch, KAFKA-2084_2015-04-21_12:21:18.patch, 
 KAFKA-2084_2015-04-21_12:28:05.patch, KAFKA-2084_2015-05-05_15:27:35.patch, 
 KAFKA-2084_2015-05-05_17:52:02.patch, KAFKA-2084_2015-05-11_16:16:01.patch, 
 KAFKA-2084_2015-05-26_11:50:50.patch, KAFKA-2084_2015-06-02_17:02:00.patch, 
 KAFKA-2084_2015-06-02_17:09:28.patch, KAFKA-2084_2015-06-02_17:10:52.patch, 
 KAFKA-2084_2015-06-04_16:31:22.patch, KAFKA-2084_2015-06-12_10:39:35.patch, 
 KAFKA-2084_2015-06-29_17:53:44.patch, KAFKA-2084_2015-08-04_18:50:51.patch, 
 KAFKA-2084_2015-08-04_19:07:46.patch, KAFKA-2084_2015-08-07_11:27:51.patch, 
 KAFKA-2084_2015-08-10_13:48:50.patch, KAFKA-2084_2015-08-10_21:57:48.patch, 
 KAFKA-2084_2015-08-12_12:02:33.patch, KAFKA-2084_2015-08-12_12:04:51.patch, 
 KAFKA-2084_2015-08-12_12:08:17.patch, KAFKA-2084_2015-08-12_21:24:07.patch, 
 KAFKA-2084_2015-08-13_19:08:27.patch, KAFKA-2084_2015-08-13_19:19:16.patch, 
 KAFKA-2084_2015-08-14_17:43:00.patch


 We need to be able to track the bytes-in/bytes-out rate on a per-client ID 
 basis. This is necessary for quotas.
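
As a rough illustration of the requirement (hypothetical sketch, not Kafka's actual Metrics API): accumulate bytes per client ID and compute a rate over elapsed time, so a quota check can compare the rate to a limit. The clock is supplied by the caller to keep the example deterministic.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of per-client byte-rate tracking (not Kafka's actual
// Metrics API): accumulate bytes per clientId and compute a rate over the
// elapsed time supplied by the caller, so quota checks can compare it to a limit.
public class ClientByteRate {
    private final Map<String, Long> bytesByClient = new HashMap<>();
    private final long startNanos;

    public ClientByteRate(long startNanos) {
        this.startNanos = startNanos;
    }

    public void record(String clientId, long bytes) {
        bytesByClient.merge(clientId, bytes, Long::sum);
    }

    public double bytesPerSec(String clientId, long nowNanos) {
        long total = bytesByClient.getOrDefault(clientId, 0L);
        double elapsedSec = (nowNanos - startNanos) / 1e9;
        return elapsedSec > 0 ? total / elapsedSec : 0.0;
    }
}
```

The real implementation also needs windowing (so old traffic ages out) and thread safety, which this sketch omits.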



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2015) Enable ConsoleConsumer to use new consumer

2015-08-20 Thread Ben Stopford (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14705310#comment-14705310
 ] 

Ben Stopford commented on KAFKA-2015:
-

I think we should move this to 0.8.3. It would mean users could easily test SSL 
using the ConsoleConsumer, which would be useful.

 Enable ConsoleConsumer to use new consumer
 --

 Key: KAFKA-2015
 URL: https://issues.apache.org/jira/browse/KAFKA-2015
 Project: Kafka
  Issue Type: Sub-task
  Components: consumer
Reporter: Guozhang Wang
Assignee: Ben Stopford
 Fix For: 0.9.0

 Attachments: KAFKA-2015.patch


 As titled, enable ConsoleConsumer to use new consumer.





Re: Review Request 35867: Patch for KAFKA-1901

2015-08-20 Thread Manikumar Reddy O

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/35867/
---

(Updated Aug. 20, 2015, 7:08 a.m.)


Review request for kafka.


Bugs: KAFKA-1901
https://issues.apache.org/jira/browse/KAFKA-1901


Repository: kafka


Description (updated)
---

Addressing Joel's comments; rebase


Diffs (updated)
-

  build.gradle 17fc223907f6b55f2d730be81ddb8d8e07a2c7ad 
  clients/src/main/java/org/apache/kafka/clients/consumer/KafkaConsumer.java 
3749880b765f74af117d6c44705daf170095a1b7 
  clients/src/main/java/org/apache/kafka/clients/producer/KafkaProducer.java 
c4621e22c32c1a1fb23726d7f56004845def96ef 
  clients/src/main/java/org/apache/kafka/common/utils/AppInfoParser.java 
PRE-CREATION 
  core/src/main/scala/kafka/common/AppInfo.scala 
d642ca555f83c41451d4fcaa5c01a1f86eff0a1c 
  core/src/main/scala/kafka/server/KafkaServer.scala 
0e7ba3ede78dbc995c404e0387a6be687703836a 
  core/src/main/scala/kafka/server/KafkaServerStartable.scala 
1c1b75b4137a8b233b61739018e9cebcc3a34343 

Diff: https://reviews.apache.org/r/35867/diff/


Testing
---


Thanks,

Manikumar Reddy O



[jira] [Commented] (KAFKA-1901) Move Kafka version to be generated in code by build (instead of in manifest)

2015-08-20 Thread Manikumar Reddy (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1901?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14704414#comment-14704414
 ] 

Manikumar Reddy commented on KAFKA-1901:


Updated reviewboard https://reviews.apache.org/r/35867/diff/
 against branch origin/trunk

 Move Kafka version to be generated in code by build (instead of in manifest)
 

 Key: KAFKA-1901
 URL: https://issues.apache.org/jira/browse/KAFKA-1901
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.8.2.0
Reporter: Jason Rosenberg
Assignee: Manikumar Reddy
 Attachments: KAFKA-1901.patch, KAFKA-1901_2015-06-26_13:16:29.patch, 
 KAFKA-1901_2015-07-10_16:42:53.patch, KAFKA-1901_2015-07-14_17:59:56.patch, 
 KAFKA-1901_2015-08-09_15:04:39.patch, KAFKA-1901_2015-08-20_12:35:00.patch


 With 0.8.2 (rc2), I've started seeing this warning in the logs of apps 
 deployed to our staging (both server and client):
 {code}
 2015-01-23 00:55:25,273  WARN [async-message-sender-0] common.AppInfo$ - 
 Can't read Kafka version from MANIFEST.MF. Possible cause: 
 java.lang.NullPointerException
 {code}
 The issue is that in our deployment, apps are deployed as single 'shaded' 
 jars (e.g. using the Maven Shade plugin).  This means the MANIFEST.MF file 
 won't have a Kafka version.  Instead, I suggest the Kafka build generate the 
 proper version in code, as part of the build.
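
The failure mode is easy to illustrate: the manifest lookup returns null when the merged manifest lacks the version entry. A minimal sketch (illustrative, not Kafka's actual AppInfo code):

```java
// Illustrative sketch of why the manifest lookup fails: with a shaded jar the
// merged MANIFEST.MF typically lacks Kafka's Implementation-Version entry, so
// getImplementationVersion() returns null and naive callers hit a NullPointerException.
public class VersionProbe {
    public static String kafkaVersion() {
        Package p = VersionProbe.class.getPackage();
        String v = (p == null) ? null : p.getImplementationVersion();
        return (v == null) ? "unknown" : v; // guard instead of NPE-ing
    }

    public static void main(String[] args) {
        System.out.println(kafkaVersion());
    }
}
```

Generating the version string in code at build time, as the JIRA proposes, sidesteps the manifest entirely.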





[jira] [Updated] (KAFKA-1901) Move Kafka version to be generated in code by build (instead of in manifest)

2015-08-20 Thread Manikumar Reddy (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1901?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar Reddy updated KAFKA-1901:
---
Attachment: KAFKA-1901_2015-08-20_12:35:00.patch

 Move Kafka version to be generated in code by build (instead of in manifest)
 

 Key: KAFKA-1901
 URL: https://issues.apache.org/jira/browse/KAFKA-1901
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.8.2.0
Reporter: Jason Rosenberg
Assignee: Manikumar Reddy
 Attachments: KAFKA-1901.patch, KAFKA-1901_2015-06-26_13:16:29.patch, 
 KAFKA-1901_2015-07-10_16:42:53.patch, KAFKA-1901_2015-07-14_17:59:56.patch, 
 KAFKA-1901_2015-08-09_15:04:39.patch, KAFKA-1901_2015-08-20_12:35:00.patch


 With 0.8.2 (rc2), I've started seeing this warning in the logs of apps 
 deployed to our staging (both server and client):
 {code}
 2015-01-23 00:55:25,273  WARN [async-message-sender-0] common.AppInfo$ - 
 Can't read Kafka version from MANIFEST.MF. Possible cause: 
 java.lang.NullPointerException
 {code}
 The issue is that in our deployment, apps are deployed as single 'shaded' 
 jars (e.g. using the Maven Shade plugin).  This means the MANIFEST.MF file 
 won't have a Kafka version.  Instead, I suggest the Kafka build generate the 
 proper version in code, as part of the build.





[jira] [Updated] (KAFKA-2015) Enable ConsoleConsumer to use new consumer

2015-08-20 Thread Ben Stopford (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2015?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ben Stopford updated KAFKA-2015:

Status: Patch Available  (was: In Progress)

 Enable ConsoleConsumer to use new consumer
 --

 Key: KAFKA-2015
 URL: https://issues.apache.org/jira/browse/KAFKA-2015
 Project: Kafka
  Issue Type: Sub-task
  Components: consumer
Reporter: Guozhang Wang
Assignee: Ben Stopford
 Fix For: 0.9.0

 Attachments: KAFKA-2015.patch


 As titled, enable ConsoleConsumer to use new consumer.





[jira] [Commented] (KAFKA-2454) Dead lock between delete log segment and shutting down.

2015-08-20 Thread Joel Koshy (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2454?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14706019#comment-14706019
 ] 

Joel Koshy commented on KAFKA-2454:
---

I'm a bit unclear on the root cause you describe. The thread dump shows 
that the deletion task has not even entered the executor at this point. It is 
definitely blocked on entering the executor, but that means 
{{executor.awaitTermination}} must be waiting on some other already executing 
task, right?

 Dead lock between delete log segment and shutting down.
 ---

 Key: KAFKA-2454
 URL: https://issues.apache.org/jira/browse/KAFKA-2454
 Project: Kafka
  Issue Type: Bug
Reporter: Jiangjie Qin
Assignee: Jiangjie Qin

 When the broker shuts down, it shuts down the scheduler, which grabs the 
 scheduler lock and then waits for all the threads in the scheduler to shut down.
 The deadlock happens when a scheduled task tries to delete an old log 
 segment: it schedules a log-deletion task, which also needs to acquire the 
 scheduler lock. In this case the shutdown thread holds the scheduler lock and 
 waits for the log deletion thread to finish, but the log deletion thread 
 blocks waiting for the scheduler lock.
 Related stack trace:
 {noformat}
 Thread-1 #21 prio=5 os_prio=0 tid=0x7fe7601a7000 nid=0x1a4e waiting on 
 condition [0x7fe7cf698000]
java.lang.Thread.State: TIMED_WAITING (parking)
 at sun.misc.Unsafe.park(Native Method)
 - parking to wait for  0x000640d53540 (a 
 java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
 at 
 java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
 at 
 java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
 at 
 java.util.concurrent.ThreadPoolExecutor.awaitTermination(ThreadPoolExecutor.java:1465)
 at kafka.utils.KafkaScheduler.shutdown(KafkaScheduler.scala:94)
 - locked 0x000640b6d480 (a kafka.utils.KafkaScheduler)
 at 
 kafka.server.KafkaServer$$anonfun$shutdown$4.apply$mcV$sp(KafkaServer.scala:352)
 at kafka.utils.CoreUtils$.swallow(CoreUtils.scala:79)
 at kafka.utils.Logging$class.swallowWarn(Logging.scala:92)
 at kafka.utils.CoreUtils$.swallowWarn(CoreUtils.scala:51)
 at kafka.utils.Logging$class.swallow(Logging.scala:94)
 at kafka.utils.CoreUtils$.swallow(CoreUtils.scala:51)
 at kafka.server.KafkaServer.shutdown(KafkaServer.scala:352)
 at 
 kafka.server.KafkaServerStartable.shutdown(KafkaServerStartable.scala:42)
 at com.linkedin.kafka.KafkaServer.notifyShutdown(KafkaServer.java:99)
 at 
 com.linkedin.util.factory.lifecycle.LifeCycleMgr.notifyShutdownListener(LifeCycleMgr.java:123)
 at 
 com.linkedin.util.factory.lifecycle.LifeCycleMgr.notifyListeners(LifeCycleMgr.java:102)
 at 
 com.linkedin.util.factory.lifecycle.LifeCycleMgr.notifyStop(LifeCycleMgr.java:82)
 - locked 0x000640b77bb0 (a java.util.ArrayDeque)
 at com.linkedin.util.factory.Generator.stop(Generator.java:177)
 - locked 0x000640b77bc8 (a java.lang.Object)
 at 
 com.linkedin.offspring.servlet.OffspringServletRuntime.destroy(OffspringServletRuntime.java:82)
 at 
 com.linkedin.offspring.servlet.OffspringServletContextListener.contextDestroyed(OffspringServletContextListener.java:51)
 at 
 org.eclipse.jetty.server.handler.ContextHandler.doStop(ContextHandler.java:813)
 at 
 org.eclipse.jetty.servlet.ServletContextHandler.doStop(ServletContextHandler.java:160)
 at 
 org.eclipse.jetty.webapp.WebAppContext.doStop(WebAppContext.java:516)
 at com.linkedin.emweb.WebappContext.doStop(WebappContext.java:35)
 at 
 org.eclipse.jetty.util.component.AbstractLifeCycle.stop(AbstractLifeCycle.java:89)
 - locked 0x0006400018b8 (a java.lang.Object)
 at 
 com.linkedin.emweb.ContextBasedHandlerImpl.doStop(ContextBasedHandlerImpl.java:112)
 at 
 org.eclipse.jetty.util.component.AbstractLifeCycle.stop(AbstractLifeCycle.java:89)
 - locked 0x000640001900 (a java.lang.Object)
 at 
 com.linkedin.emweb.WebappDeployerImpl.stop(WebappDeployerImpl.java:349)
 at 
 com.linkedin.emweb.WebappDeployerImpl.doStop(WebappDeployerImpl.java:414)
 - locked 0x0006400019c0 (a 
 com.linkedin.emweb.MapBasedHandlerImpl)
 at 
 org.eclipse.jetty.util.component.AbstractLifeCycle.stop(AbstractLifeCycle.java:89)
 - locked 0x0006404ee8e8 (a java.lang.Object)
 at 
 org.eclipse.jetty.util.component.AggregateLifeCycle.doStop(AggregateLifeCycle.java:107)
 at 
 org.eclipse.jetty.server.handler.AbstractHandler.doStop(AbstractHandler.java:69)
   

[jira] [Commented] (KAFKA-2454) Dead lock between delete log segment and shutting down.

2015-08-20 Thread Jiangjie Qin (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2454?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14706021#comment-14706021
 ] 

Jiangjie Qin commented on KAFKA-2454:
-

[~jjkoshy] Here are the two threads:
Thread 1: The log deletion task that runs periodically to delete log 
segments. This task was scheduled when the broker started up. The problem is 
that it deletes logs asynchronously by submitting another file-deletion task to 
the scheduler. That is where the deadlock happens.
Thread 2: The thread executing KafkaServer.shutdown(); it calls 
KafkaScheduler.shutdown(), which grabs the lock and calls executor.awaitTermination().
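
A minimal standalone sketch of this lock-ordering problem (illustrative, not Kafka's KafkaScheduler): shutdown holds the object's monitor while awaiting task completion, while a running task needs the same monitor to submit follow-up work. A bounded awaitTermination is used so the deadlock shows up as a timeout instead of a hang; the 300 ms sleep only gives the main thread time to enter shutdown() first.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class SchedulerDeadlockSketch {
    private final ScheduledExecutorService executor = Executors.newScheduledThreadPool(1);

    // Task submission and shutdown share this object's monitor, mirroring the
    // synchronized methods on the scheduler described above.
    public synchronized void schedule(Runnable task) {
        executor.submit(task);
    }

    public synchronized boolean shutdown(long timeoutMs) throws InterruptedException {
        executor.shutdown();
        // The monitor is still held here, so no task can enter schedule() until we return.
        return executor.awaitTermination(timeoutMs, TimeUnit.MILLISECONDS);
    }

    public static void main(String[] args) throws Exception {
        SchedulerDeadlockSketch s = new SchedulerDeadlockSketch();
        CountDownLatch taskRunning = new CountDownLatch(1);
        s.schedule(() -> {
            taskRunning.countDown();
            try { Thread.sleep(300); } catch (InterruptedException e) { return; }
            s.schedule(() -> { }); // blocks here: shutdown() holds the monitor
        });
        taskRunning.await();
        // With an unbounded wait this would hang forever; the timeout makes the
        // deadlock observable: awaitTermination returns false.
        System.out.println("terminated=" + s.shutdown(1000));
    }
}
```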

 Dead lock between delete log segment and shutting down.
 ---

 Key: KAFKA-2454
 URL: https://issues.apache.org/jira/browse/KAFKA-2454
 Project: Kafka
  Issue Type: Bug
Reporter: Jiangjie Qin
Assignee: Jiangjie Qin

 When the broker shuts down, it shuts down the scheduler, which grabs the 
 scheduler lock and then waits for all the threads in the scheduler to shut down.
 The deadlock happens when a scheduled task tries to delete an old log 
 segment: it schedules a log-deletion task, which also needs to acquire the 
 scheduler lock. In this case the shutdown thread holds the scheduler lock and 
 waits for the log deletion thread to finish, but the log deletion thread 
 blocks waiting for the scheduler lock.
 Related stack trace:
 {noformat}
 Thread-1 #21 prio=5 os_prio=0 tid=0x7fe7601a7000 nid=0x1a4e waiting on 
 condition [0x7fe7cf698000]
java.lang.Thread.State: TIMED_WAITING (parking)
 at sun.misc.Unsafe.park(Native Method)
 - parking to wait for  0x000640d53540 (a 
 java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
 at 
 java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
 at 
 java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
 at 
 java.util.concurrent.ThreadPoolExecutor.awaitTermination(ThreadPoolExecutor.java:1465)
 at kafka.utils.KafkaScheduler.shutdown(KafkaScheduler.scala:94)
 - locked 0x000640b6d480 (a kafka.utils.KafkaScheduler)
 at 
 kafka.server.KafkaServer$$anonfun$shutdown$4.apply$mcV$sp(KafkaServer.scala:352)
 at kafka.utils.CoreUtils$.swallow(CoreUtils.scala:79)
 at kafka.utils.Logging$class.swallowWarn(Logging.scala:92)
 at kafka.utils.CoreUtils$.swallowWarn(CoreUtils.scala:51)
 at kafka.utils.Logging$class.swallow(Logging.scala:94)
 at kafka.utils.CoreUtils$.swallow(CoreUtils.scala:51)
 at kafka.server.KafkaServer.shutdown(KafkaServer.scala:352)
 at 
 kafka.server.KafkaServerStartable.shutdown(KafkaServerStartable.scala:42)
 at com.linkedin.kafka.KafkaServer.notifyShutdown(KafkaServer.java:99)
 at 
 com.linkedin.util.factory.lifecycle.LifeCycleMgr.notifyShutdownListener(LifeCycleMgr.java:123)
 at 
 com.linkedin.util.factory.lifecycle.LifeCycleMgr.notifyListeners(LifeCycleMgr.java:102)
 at 
 com.linkedin.util.factory.lifecycle.LifeCycleMgr.notifyStop(LifeCycleMgr.java:82)
 - locked 0x000640b77bb0 (a java.util.ArrayDeque)
 at com.linkedin.util.factory.Generator.stop(Generator.java:177)
 - locked 0x000640b77bc8 (a java.lang.Object)
 at 
 com.linkedin.offspring.servlet.OffspringServletRuntime.destroy(OffspringServletRuntime.java:82)
 at 
 com.linkedin.offspring.servlet.OffspringServletContextListener.contextDestroyed(OffspringServletContextListener.java:51)
 at 
 org.eclipse.jetty.server.handler.ContextHandler.doStop(ContextHandler.java:813)
 at 
 org.eclipse.jetty.servlet.ServletContextHandler.doStop(ServletContextHandler.java:160)
 at 
 org.eclipse.jetty.webapp.WebAppContext.doStop(WebAppContext.java:516)
 at com.linkedin.emweb.WebappContext.doStop(WebappContext.java:35)
 at 
 org.eclipse.jetty.util.component.AbstractLifeCycle.stop(AbstractLifeCycle.java:89)
 - locked 0x0006400018b8 (a java.lang.Object)
 at 
 com.linkedin.emweb.ContextBasedHandlerImpl.doStop(ContextBasedHandlerImpl.java:112)
 at 
 org.eclipse.jetty.util.component.AbstractLifeCycle.stop(AbstractLifeCycle.java:89)
 - locked 0x000640001900 (a java.lang.Object)
 at 
 com.linkedin.emweb.WebappDeployerImpl.stop(WebappDeployerImpl.java:349)
 at 
 com.linkedin.emweb.WebappDeployerImpl.doStop(WebappDeployerImpl.java:414)
 - locked 0x0006400019c0 (a 
 com.linkedin.emweb.MapBasedHandlerImpl)
 at 
 org.eclipse.jetty.util.component.AbstractLifeCycle.stop(AbstractLifeCycle.java:89)
 - locked 0x0006404ee8e8 (a java.lang.Object)
 at 
 

[jira] [Commented] (KAFKA-1683) Implement a session concept in the socket server

2015-08-20 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1683?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14704649#comment-14704649
 ] 

Ismael Juma commented on KAFKA-1683:


[~gwenshap], KAFKA-1690 has been merged (as you know), so it may be time to 
start thinking about this again. :)

 Implement a session concept in the socket server
 --

 Key: KAFKA-1683
 URL: https://issues.apache.org/jira/browse/KAFKA-1683
 Project: Kafka
  Issue Type: Sub-task
  Components: security
Affects Versions: 0.9.0
Reporter: Jay Kreps
Assignee: Gwen Shapira
 Fix For: 0.8.3

 Attachments: KAFKA-1683.patch, KAFKA-1683.patch


 To implement authentication we need a way to keep track of some things 
 between requests. The initial use for this would be remembering the 
 authenticated user/principle info, but likely more uses would come up (for 
 example we will also need to remember whether and which encryption or 
 integrity measures are in place on the socket so we can wrap and unwrap 
 writes and reads).
 I was thinking we could just add a Session object that might have a user 
 field. The session object would need to get added to RequestChannel.Request 
 so it is passed down to the API layer with each request.
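
As a rough illustration of the proposal (names and fields are hypothetical, not a final API), the Session could be as small as:

```java
// Hypothetical sketch of the proposed Session object: it remembers the
// authenticated principal, and could later grow fields for the negotiated
// encryption/integrity state of the socket mentioned above.
public final class Session {
    private final String principal; // authenticated user/principal

    public Session(String principal) {
        this.principal = principal;
    }

    public String principal() {
        return principal;
    }
}
```

RequestChannel.Request would then carry a Session field so each request reaches the API layer with its authentication context attached.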





[jira] [Commented] (KAFKA-2213) Log cleaner should write compacted messages using configured compression type

2015-08-20 Thread Manikumar Reddy (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2213?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14704713#comment-14704713
 ] 

Manikumar Reddy commented on KAFKA-2213:


Updated reviewboard https://reviews.apache.org/r/34805/diff/
 against branch origin/trunk

 Log cleaner should write compacted messages using configured compression type
 -

 Key: KAFKA-2213
 URL: https://issues.apache.org/jira/browse/KAFKA-2213
 Project: Kafka
  Issue Type: Bug
Reporter: Joel Koshy
Assignee: Manikumar Reddy
 Attachments: KAFKA-2213.patch, KAFKA-2213_2015-05-30_00:23:01.patch, 
 KAFKA-2213_2015-06-17_16:05:53.patch, KAFKA-2213_2015-07-10_20:18:06.patch, 
 KAFKA-2213_2015-08-20_17:04:28.patch


 In KAFKA-1374 the log cleaner was improved to handle compressed messages. 
 There were a couple of follow-ups from that:
 * We write compacted messages using the original compression type in the 
 compressed message-set. We should instead append all retained messages with 
 the configured broker compression type of the topic.
 * While compressing messages we should ideally do some batching before 
 compression.
 * Investigate the use of the client compressor. (See the discussion in the 
 RBs for KAFKA-1374)
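
The first follow-up amounts to a codec-selection rule. A hedged sketch (names are illustrative, and the "producer" semantics of the broker `compression.type` setting are an assumption here, not taken from this thread):

```java
// Illustrative codec selection for re-compacted messages: use the topic's
// configured compression type, except the (assumed) "producer" setting, which
// preserves whatever codec the source message-set used.
public class CompactionCodec {
    public static String outputCodec(String topicCompressionType, String sourceCodec) {
        return "producer".equals(topicCompressionType) ? sourceCodec : topicCompressionType;
    }
}
```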





Re: Review Request 36578: Patch for KAFKA-2338

2015-08-20 Thread Manikumar Reddy O

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/36578/#review95943
---



core/src/main/scala/kafka/admin/TopicCommand.scala (line 89)
https://reviews.apache.org/r/36578/#comment151144

The warning message could be: "WARNING: %s has been increased beyond the 
default max value of %d; update producer and consumer settings as well."


Also, do you want to include a patch for the second point given in the JIRA description?

- Manikumar Reddy O


On July 21, 2015, 4:21 p.m., Edward Ribeiro wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/36578/
 ---
 
 (Updated July 21, 2015, 4:21 p.m.)
 
 
 Review request for kafka.
 
 
 Bugs: KAFKA-2338
 https://issues.apache.org/jira/browse/KAFKA-2338
 
 
 Repository: kafka
 
 
 Description
 ---
 
 KAFKA-2338 Warn users if they change max.message.bytes that they also need to 
 update broker and consumer settings
 
 
 Diffs
 -
 
   core/src/main/scala/kafka/admin/TopicCommand.scala 
 4e28bf1c08414e8e96e6ca639b927d51bfeb4616 
 
 Diff: https://reviews.apache.org/r/36578/diff/
 
 
 Testing
 ---
 
 
 Thanks,
 
 Edward Ribeiro
 




[jira] [Commented] (KAFKA-2295) Dynamically loaded classes (encoders, etc.) may not be found by Kafka Producer

2015-08-20 Thread Manikumar Reddy (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2295?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14704757#comment-14704757
 ] 

Manikumar Reddy commented on KAFKA-2295:


[~guozhang]  Can we go ahead and commit this minor patch?

 Dynamically loaded classes (encoders, etc.) may not be found by Kafka 
 Producer 
 ---

 Key: KAFKA-2295
 URL: https://issues.apache.org/jira/browse/KAFKA-2295
 Project: Kafka
  Issue Type: Bug
  Components: producer 
Reporter: Tathagata Das
Assignee: Manikumar Reddy
 Fix For: 0.9.0

 Attachments: KAFKA-2295.patch, KAFKA-2295_2015-07-06_11:32:58.patch, 
 KAFKA-2295_2015-08-20_17:44:56.patch


 Kafka Producer (via CoreUtils.createObject) effectively uses Class.forName to 
 load encoder classes. Class.forName, by design, finds classes only in the 
 defining classloader of the enclosing class (which is often the bootstrap 
 class loader). It does not use the current thread's context class loader. This 
 can lead to problems in environments where classes are loaded dynamically and 
 therefore may not be present in the bootstrap classloader.
 This leads to ClassNotFoundExceptions in environments like Spark where 
 classes are loaded dynamically using custom classloaders. Issues like this 
 have been reported, e.g.: 
 https://www.mail-archive.com/user@spark.apache.org/msg30951.html
 Other references regarding this issue with Class.forName: 
 http://stackoverflow.com/questions/21749741/though-my-class-was-loaded-class-forname-throws-classnotfoundexception
 This is a problem we have faced repeatedly in Apache Spark, and we solved it 
 by explicitly specifying the class loader to use. See 
 https://github.com/apache/spark/blob/master/core/src/main/scala/org/apache/spark/util/Utils.scala#L178
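
A sketch of the direction the JIRA suggests (illustrative, not Kafka's actual CoreUtils code): prefer the thread context classloader, which dynamic environments like Spark set, and fall back to the defining one.

```java
// Illustrative sketch of classloader-aware loading: try the thread context
// classloader first (set by containers and frameworks that load classes
// dynamically), falling back to this class's own classloader.
public class ClassLoading {
    public static Class<?> loadClass(String name) throws ClassNotFoundException {
        ClassLoader ctx = Thread.currentThread().getContextClassLoader();
        ClassLoader cl = (ctx != null) ? ctx : ClassLoading.class.getClassLoader();
        return Class.forName(name, true, cl);
    }
}
```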





Re: Review Request 36578: Patch for KAFKA-2338

2015-08-20 Thread Manikumar Reddy O


On Aug. 20, 2015, 11:07 a.m., Edward Ribeiro wrote:
  Also, do you want to include a patch for the second point given in the JIRA description?

Just read the previous reviews; please ignore my comment.


- Manikumar Reddy


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/36578/#review95943
---


On July 21, 2015, 4:21 p.m., Edward Ribeiro wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/36578/
 ---
 
 (Updated July 21, 2015, 4:21 p.m.)
 
 
 Review request for kafka.
 
 
 Bugs: KAFKA-2338
 https://issues.apache.org/jira/browse/KAFKA-2338
 
 
 Repository: kafka
 
 
 Description
 ---
 
 KAFKA-2338 Warn users if they change max.message.bytes that they also need to 
 update broker and consumer settings
 
 
 Diffs
 -
 
   core/src/main/scala/kafka/admin/TopicCommand.scala 
 4e28bf1c08414e8e96e6ca639b927d51bfeb4616 
 
 Diff: https://reviews.apache.org/r/36578/diff/
 
 
 Testing
 ---
 
 
 Thanks,
 
 Edward Ribeiro
 




Re: Review Request 34805: Patch for KAFKA-2213

2015-08-20 Thread Manikumar Reddy O

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/34805/
---

(Updated Aug. 20, 2015, 11:37 a.m.)


Review request for kafka.


Bugs: KAFKA-2213
https://issues.apache.org/jira/browse/KAFKA-2213


Repository: kafka


Description (updated)
---

Rebase


Diffs (updated)
-

  clients/src/main/java/org/apache/kafka/common/record/Compressor.java 
e570b29d5ffba5d3754c46670b708f7d511086f3 
  clients/src/main/java/org/apache/kafka/common/record/MemoryRecords.java 
5f1b45c2970e4de53bd04595afe8486b5ec49e5d 
  core/src/main/scala/kafka/log/LogCleaner.scala 
b36ea0dd7f954c2a0eb3270535fc9a8df7488d59 
  core/src/test/scala/unit/kafka/log/LogCleanerIntegrationTest.scala 
70beb5f71204ae9386484f6a559e0e7138497483 

Diff: https://reviews.apache.org/r/34805/diff/


Testing
---


Thanks,

Manikumar Reddy O



[jira] [Updated] (KAFKA-2213) Log cleaner should write compacted messages using configured compression type

2015-08-20 Thread Manikumar Reddy (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2213?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar Reddy updated KAFKA-2213:
---
Attachment: KAFKA-2213_2015-08-20_17:04:28.patch

 Log cleaner should write compacted messages using configured compression type
 -

 Key: KAFKA-2213
 URL: https://issues.apache.org/jira/browse/KAFKA-2213
 Project: Kafka
  Issue Type: Bug
Reporter: Joel Koshy
Assignee: Manikumar Reddy
 Attachments: KAFKA-2213.patch, KAFKA-2213_2015-05-30_00:23:01.patch, 
 KAFKA-2213_2015-06-17_16:05:53.patch, KAFKA-2213_2015-07-10_20:18:06.patch, 
 KAFKA-2213_2015-08-20_17:04:28.patch


 In KAFKA-1374 the log cleaner was improved to handle compressed messages. 
 There were a couple of follow-ups from that:
 * We write compacted messages using the original compression type in the 
 compressed message-set. We should instead append all retained messages with 
 the configured broker compression type of the topic.
 * While compressing messages we should ideally do some batching before 
 compression.
 * Investigate the use of the client compressor. (See the discussion in the 
 RBs for KAFKA-1374)





[jira] [Commented] (KAFKA-873) Consider replacing zkclient with curator (with zkclient-bridge)

2015-08-20 Thread Flavio Junqueira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14704731#comment-14704731
 ] 

Flavio Junqueira commented on KAFKA-873:


[~granthenke] I'm more than happy to let you take care of this one, but I'd 
like to be involved, because I want to make sure we fix KAFKA-1387 properly. 
Switching to Curator by simply using the bridge might either break Kafka, if 
the semantics of the API are different, or not fix KAFKA-1387. What I may need 
to do is wait until this issue is fixed and then fix KAFKA-1387, so I'd like 
to get this one in soon if possible. I'll have a look at the Curator bridge 
today in any case.

 Consider replacing zkclient with curator (with zkclient-bridge)
 ---

 Key: KAFKA-873
 URL: https://issues.apache.org/jira/browse/KAFKA-873
 Project: Kafka
  Issue Type: Improvement
Affects Versions: 0.8.0
Reporter: Scott Clasen
Assignee: Grant Henke

 If zkclient were replaced with curator and curator-x-zkclient-bridge, it would 
 initially be a drop-in replacement:
 https://github.com/Netflix/curator/wiki/ZKClient-Bridge
 With the addition of a few more props to ZkConfig, and a bit of code, this 
 would open up the possibility of using ACLs in ZooKeeper (which aren't 
 supported directly by zkclient), as well as integrating with Netflix 
 Exhibitor for those of us using it.
 Looks like KafkaZookeeperClient needs some love anyhow...





Re: Review Request 35880: Patch for KAFKA-2295

2015-08-20 Thread Manikumar Reddy O

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/35880/
---

(Updated Aug. 20, 2015, 12:17 p.m.)


Review request for kafka.


Bugs: KAFKA-2295
https://issues.apache.org/jira/browse/KAFKA-2295


Repository: kafka


Description (updated)
---

Code Rebase


Diffs (updated)
-

  clients/src/main/java/org/apache/kafka/common/config/AbstractConfig.java 
156ec14c9c099448cc1d8347191eef4c02591faa 
  clients/src/main/java/org/apache/kafka/common/config/ConfigDef.java 
4170bcc7def5b50d8aa20e8e84089c35b705b527 
  clients/src/main/java/org/apache/kafka/common/utils/Utils.java 
c58b74144ba01f04ec17d8ca7bd3d231fd6fb3a8 
  clients/src/test/java/org/apache/kafka/common/config/AbstractConfigTest.java 
db1b0ee9113215b5ad7fda0f93915f3bdd34ac55 
  core/src/main/scala/kafka/utils/CoreUtils.scala 
168a18d380c200ee566eccb6988dd1ae85ed5b09 

Diff: https://reviews.apache.org/r/35880/diff/


Testing
---


Thanks,

Manikumar Reddy O



Re: Review Request 34492: Patch for KAFKA-2210

2015-08-20 Thread Ismael Juma

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/34492/#review95934
---


Thanks for addressing the issues raised in the previous review. We are getting 
closer. I left a few comments (many of them minor style issues).

The important question for me is how we can simplify the authorization logic 
that is spread over many methods, perhaps by introducing some utility 
methods. I provided a suggestion for one of the cases, but I didn't spend the 
time to work it out for the `partition` case, which is more important (and 
probably harder). I would be interested in your thoughts, as you are more 
familiar with the code. I'd be happy to spend a bit more time on this to see if 
I can come up with something, if you prefer. Just let me know.


core/src/main/scala/kafka/network/RequestChannel.scala (line 48)
https://reviews.apache.org/r/34492/#comment151127

Normally one would use `Option[Session]` here. Are we using `null` due to 
efficiency concerns? Sorry if I am missing some context.



core/src/main/scala/kafka/security/auth/Acl.scala (line 116)
https://reviews.apache.org/r/34492/#comment151130

Nitpick: no need for `()` and space before `:` should be removed.



core/src/main/scala/kafka/security/auth/KafkaPrincipal.scala (line 25)
https://reviews.apache.org/r/34492/#comment151131

Nitpick: space before `:` should be removed.



core/src/main/scala/kafka/security/auth/Operation.scala (line 42)
https://reviews.apache.org/r/34492/#comment151129

Generally a good idea to set the result type for public methods. This makes 
it possible to change the underlying implementation without affecting binary 
compatibility. For example, here we may set the result type as 
`Seq[Operation]`, which would give us the option of changing the underlying 
implementation to `Vector` if that turned out to be better. In Scala, `List` 
is a concrete type, unlike in Java.

Not sure what the usual policy for Kafka is, though; it would be useful to have 
some input from Jun. If we decide to change it, there are other places where 
the same comment would apply.
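A Java analogue of the point above (names here are illustrative, not Kafka's actual code): declaring the abstract collection type in the public signature keeps the concrete implementation swappable without breaking callers.

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class Operations {
    // Declared as List (the abstract type), so the backing implementation
    // could later change (e.g. to an array-backed or immutable list) without
    // affecting binary compatibility for callers.
    public static List<String> values() {
        return Collections.unmodifiableList(
                Arrays.asList("Read", "Write", "ClusterAction"));
    }
}
```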



core/src/main/scala/kafka/security/auth/Resource.scala (line 24)
https://reviews.apache.org/r/34492/#comment151132

Nitpick: space before `:` should be removed.



core/src/main/scala/kafka/server/KafkaApis.scala (line 101)
https://reviews.apache.org/r/34492/#comment151134

This code exists in 4 places, how about we introduce a method like:

```
def authorizeClusterAction(authorizer: Option[Authorizer], request: RequestChannel.Request): Unit = {
  if (!authorizer.map(_.authorize(request.session, ClusterAction, Resource.ClusterResource)).getOrElse(true))
    throw new AuthorizationException(s"Request $request is not authorized.")
}
```

And then callers can just do (as an example):

`authorizeClusterAction(authorizer, leaderAndIsrRequest)`

Am I missing something?



core/src/main/scala/kafka/server/KafkaApis.scala (lines 186 - 189)
https://reviews.apache.org/r/34492/#comment151135

Nitpick: space after `case`. There are a number of other cases like this.



core/src/main/scala/kafka/server/KafkaApis.scala (line 282)
https://reviews.apache.org/r/34492/#comment151140

We have a lot of these `partition` calls like this. Is there no way to 
extract a utility method so that the call is simpler and there is less 
duplication?
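One way to factor out the repeated `partition` logic, sketched in Java with hypothetical names (the actual KafkaApis code is Scala, so this is only an illustration of the shape of such a helper):

```java
import java.util.List;
import java.util.Map;
import java.util.function.Predicate;
import java.util.stream.Collectors;

public class AuthFilter {
    // Hypothetical helper: split the requested resources into authorized
    // (key true) and unauthorized (key false) groups, so each request
    // handler shares one partition call instead of duplicating it inline.
    public static Map<Boolean, List<String>> partitionByAuthorization(
            List<String> resources, Predicate<String> isAuthorized) {
        return resources.stream().collect(Collectors.partitioningBy(isAuthorized));
    }
}
```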



core/src/main/scala/kafka/server/KafkaApis.scala (line 545)
https://reviews.apache.org/r/34492/#comment151141

Nitpick: val instead of var.



core/src/test/scala/unit/kafka/security/auth/AclTest.scala (line 24)
https://reviews.apache.org/r/34492/#comment151142

We don't use `JUnit3Suite` anymore. Either use `JUnitSuite` or don't 
inherit from anything (we have both examples in the codebase now). This applies 
to all the tests. I noticed that Jun already mentioned this in another test.



core/src/test/scala/unit/kafka/security/auth/AclTest.scala (line 30)
https://reviews.apache.org/r/34492/#comment151143

@Test annotation is needed after you remove `JUnit3Suite`. This applies to 
all the tests.


- Ismael Juma


On Aug. 11, 2015, 1:32 a.m., Parth Brahmbhatt wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/34492/
 ---
 
 (Updated Aug. 11, 2015, 1:32 a.m.)
 
 
 Review request for kafka.
 
 
 Bugs: KAFKA-2210
 https://issues.apache.org/jira/browse/KAFKA-2210
 
 
 Repository: kafka
 
 
 Description
 ---
 
 Addressing review comments from Jun.
 
 
 Adding CREATE check for offset topic only if the topic does not exist already.
 
 
 Addressing some more comments.
 
 
 Removing acl.json file
 
 
 Moving PermissionType to trait instead of enum. Following the convention for 
 defining constants.

[jira] [Commented] (KAFKA-2210) KafkaAuthorizer: Add all public entities, config changes and changes to KafkaAPI and kafkaServer to allow pluggable authorizer implementation.

2015-08-20 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2210?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14704662#comment-14704662
 ] 

Ismael Juma commented on KAFKA-2210:


[~parth.brahmbhatt], I also reviewed it. In KAFKA-1683, you mentioned that the 
authorizer work depended on it. Would it be correct to add a dependency on 
this JIRA to that one?

 KafkaAuthorizer: Add all public entities, config changes and changes to 
 KafkaAPI and kafkaServer to allow pluggable authorizer implementation.
 --

 Key: KAFKA-2210
 URL: https://issues.apache.org/jira/browse/KAFKA-2210
 Project: Kafka
  Issue Type: Sub-task
  Components: security
Reporter: Parth Brahmbhatt
Assignee: Parth Brahmbhatt
 Fix For: 0.8.3

 Attachments: KAFKA-2210.patch, KAFKA-2210_2015-06-03_16:36:11.patch, 
 KAFKA-2210_2015-06-04_16:07:39.patch, KAFKA-2210_2015-07-09_18:00:34.patch, 
 KAFKA-2210_2015-07-14_10:02:19.patch, KAFKA-2210_2015-07-14_14:13:19.patch, 
 KAFKA-2210_2015-07-20_16:42:18.patch, KAFKA-2210_2015-07-21_17:08:21.patch, 
 KAFKA-2210_2015-08-10_18:31:54.patch


 This is the first subtask for Kafka-1688. As Part of this jira we intend to 
 agree on all the public entities, configs and changes to existing kafka 
 classes to allow pluggable authorizer implementation.
 Please see KIP-11 
 https://cwiki.apache.org/confluence/display/KAFKA/KIP-11+-+Authorization+Interface
  for detailed design. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2295) Dynamically loaded classes (encoders, etc.) may not be found by Kafka Producer

2015-08-20 Thread Manikumar Reddy (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2295?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar Reddy updated KAFKA-2295:
---
Attachment: KAFKA-2295_2015-08-20_17:44:56.patch

 Dynamically loaded classes (encoders, etc.) may not be found by Kafka 
 Producer 
 ---

 Key: KAFKA-2295
 URL: https://issues.apache.org/jira/browse/KAFKA-2295
 Project: Kafka
  Issue Type: Bug
  Components: producer 
Reporter: Tathagata Das
Assignee: Manikumar Reddy
 Fix For: 0.9.0

 Attachments: KAFKA-2295.patch, KAFKA-2295_2015-07-06_11:32:58.patch, 
 KAFKA-2295_2015-08-20_17:44:56.patch


 Kafka Producer (via CoreUtils.createObject) effectively uses Class.forName to 
 load encoder classes. Class.forName by design finds classes only in the 
 defining classloader of the enclosing class (which is often the bootstrap 
 class loader). It does not use the current thread context class loader. This 
 can lead to problems in environments where classes are dynamically loaded and 
 therefore may not be present in the bootstrap classloader.
 This leads to ClassNotFound Exceptions in environments like Spark where 
 classes are loaded dynamically using custom classloaders. Issues like this 
 have been reported. E.g. - 
 https://www.mail-archive.com/user@spark.apache.org/msg30951.html
 Other references regarding this issue with Class.forName 
 http://stackoverflow.com/questions/21749741/though-my-class-was-loaded-class-forname-throws-classnotfoundexception
 This is a problem we have faced repeatedly in Apache Spark and we solved it 
 by explicitly specifying the class loader to use. See 
 https://github.com/apache/spark/blob/master/core/src/main/scala/org/apache/spark/util/Utils.scala#L178



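A minimal sketch of the classloader-aware loading described in the ticket (helper names are illustrative, not Kafka's actual CoreUtils/Utils API): try the thread context classloader first and fall back to this class's defining loader.

```java
public class ContextClassLoading {
    // Prefer the current thread's context classloader (set by containers
    // such as Spark), falling back to the loader of this class, instead of
    // a plain Class.forName(name) lookup, which only consults the defining
    // classloader and caused the reported ClassNotFoundExceptions.
    public static Class<?> loadClass(String name) throws ClassNotFoundException {
        ClassLoader cl = Thread.currentThread().getContextClassLoader();
        if (cl == null)
            cl = ContextClassLoading.class.getClassLoader();
        return Class.forName(name, true, cl);
    }

    public static <T> T newInstance(String name, Class<T> type)
            throws ReflectiveOperationException {
        return type.cast(loadClass(name).getDeclaredConstructor().newInstance());
    }
}
```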


[jira] [Commented] (KAFKA-2295) Dynamically loaded classes (encoders, etc.) may not be found by Kafka Producer

2015-08-20 Thread Manikumar Reddy (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2295?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14704749#comment-14704749
 ] 

Manikumar Reddy commented on KAFKA-2295:


Updated reviewboard https://reviews.apache.org/r/35880/diff/
 against branch origin/trunk



Re: Review Request 34492: Patch for KAFKA-2210

2015-08-20 Thread Ismael Juma

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/34492/#review95942
---


One more thing: it may be a good idea to rebase against trunk since the large 
SSL/TLS patch has now been merged (not sure if there are any conflicts).

- Ismael Juma


On Aug. 11, 2015, 1:32 a.m., Parth Brahmbhatt wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/34492/
 ---
 
 (Updated Aug. 11, 2015, 1:32 a.m.)
 
 
 Review request for kafka.
 
 
 Bugs: KAFKA-2210
 https://issues.apache.org/jira/browse/KAFKA-2210
 
 
 Repository: kafka
 
 
 Description
 ---
 
 Addressing review comments from Jun.
 
 
 Adding CREATE check for offset topic only if the topic does not exist already.
 
 
 Addressing some more comments.
 
 
 Removing acl.json file
 
 
 Moving PermissionType to trait instead of enum. Following the convention for 
 defining constants.
 
 
 Adding authorizer.config.path back.
 
 
 Addressing more comments from Jun.
 
 
 Addressing more comments.
 
 
 Now addressing Ismael's comments. Case sensitive checks.
 
 
 Diffs
 -
 
   core/src/main/scala/kafka/api/OffsetRequest.scala 
 f418868046f7c99aefdccd9956541a0cb72b1500 
   core/src/main/scala/kafka/common/AuthorizationException.scala PRE-CREATION 
   core/src/main/scala/kafka/common/ErrorMapping.scala 
 c75c68589681b2c9d6eba2b440ce5e58cddf6370 
   core/src/main/scala/kafka/network/RequestChannel.scala 
 20741281dcaa76374ea6f86a2185dad27b515339 
   core/src/main/scala/kafka/security/auth/Acl.scala PRE-CREATION 
   core/src/main/scala/kafka/security/auth/Authorizer.scala PRE-CREATION 
   core/src/main/scala/kafka/security/auth/KafkaPrincipal.scala PRE-CREATION 
   core/src/main/scala/kafka/security/auth/Operation.scala PRE-CREATION 
   core/src/main/scala/kafka/security/auth/PermissionType.scala PRE-CREATION 
   core/src/main/scala/kafka/security/auth/Resource.scala PRE-CREATION 
   core/src/main/scala/kafka/security/auth/ResourceType.scala PRE-CREATION 
   core/src/main/scala/kafka/server/KafkaApis.scala 
 7ea509c2c41acc00430c74e025e069a833aac4e7 
   core/src/main/scala/kafka/server/KafkaConfig.scala 
 dbe170f87331f43e2dc30165080d2cb7dfe5fdbf 
   core/src/main/scala/kafka/server/KafkaServer.scala 
 84d4730ac634f9a5bf12a656e422fea03ad72da8 
   core/src/test/scala/unit/kafka/security/auth/AclTest.scala PRE-CREATION 
   core/src/test/scala/unit/kafka/security/auth/KafkaPrincipalTest.scala 
 PRE-CREATION 
   core/src/test/scala/unit/kafka/security/auth/OperationTest.scala 
 PRE-CREATION 
   core/src/test/scala/unit/kafka/security/auth/PermissionTypeTest.scala 
 PRE-CREATION 
   core/src/test/scala/unit/kafka/security/auth/ResourceTypeTest.scala 
 PRE-CREATION 
   core/src/test/scala/unit/kafka/server/KafkaConfigConfigDefTest.scala 
 PRE-CREATION 
 
 Diff: https://reviews.apache.org/r/34492/diff/
 
 
 Testing
 ---
 
 
 Thanks,
 
 Parth Brahmbhatt
 




[jira] [Commented] (KAFKA-1683) Implement a session concept in the socket server

2015-08-20 Thread Sriharsha Chintalapani (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1683?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14706103#comment-14706103
 ] 

Sriharsha Chintalapani commented on KAFKA-1683:
---

[~gwenshap] Got your point. Agreed on not throwing an exception and returning 
ANONYMOUS.

 Implement a session concept in the socket server
 --

 Key: KAFKA-1683
 URL: https://issues.apache.org/jira/browse/KAFKA-1683
 Project: Kafka
  Issue Type: Sub-task
  Components: security
Affects Versions: 0.9.0
Reporter: Jay Kreps
Assignee: Gwen Shapira
 Fix For: 0.8.3

 Attachments: KAFKA-1683.patch, KAFKA-1683.patch


 To implement authentication we need a way to keep track of some things 
 between requests. The initial use for this would be remembering the 
 authenticated user/principle info, but likely more uses would come up (for 
 example we will also need to remember whether and which encryption or 
 integrity measures are in place on the socket so we can wrap and unwrap 
 writes and reads).
 I was thinking we could just add a Session object that might have a user 
 field. The session object would need to get added to RequestChannel.Request 
 so it is passed down to the API layer with each request.



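The Session idea described in the ticket could look roughly like this (an illustrative sketch, not the actual RequestChannel code):

```java
public final class Session {
    private final String principal;      // authenticated user, e.g. "ANONYMOUS"
    private final String clientAddress;  // remote host the request came from

    public Session(String principal, String clientAddress) {
        this.principal = principal;
        this.clientAddress = clientAddress;
    }

    public String principal() { return principal; }
    public String clientAddress() { return clientAddress; }
}
```

Each RequestChannel.Request would then carry one of these down to the API layer, and later additions (encryption/integrity state) could live alongside the principal.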


[jira] [Updated] (KAFKA-1897) Enhance MockProducer for more sophisticated tests

2015-08-20 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1897?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-1897:

Fix Version/s: (was: 0.8.3)

 Enhance MockProducer for more sophisticated tests
 -

 Key: KAFKA-1897
 URL: https://issues.apache.org/jira/browse/KAFKA-1897
 Project: Kafka
  Issue Type: Bug
  Components: producer 
Reporter: Navina Ramesh
Assignee: Jun Rao

 Based on the experience of upgrading the kafka producer in Samza, we faced 
 two main constraints when using MockProducer: 
 1. The constructor requires a cluster specification, and the tools to create a 
 test cluster are not exposed. They are available from TestUtils in Kafka; 
 however, that jar is not published. This issue is currently being addressed in 
 KAFKA-1861.
 2. No support for testing a blocking client call. For example, flush in 
 Samza blocks on the future returned by the latest send request. To test this, 
 the MockProducer that buffers the send should run in a concurrent mode. 
 There is currently no provision to do this. We want the MockProducer to 
 buffer the send and then complete the callback concurrently while we wait 
 for flush to unblock. 
 We can write unit tests that have improved coverage if we can add support for 
 concurrent execution of the MockProducer and unit test thread. For example 
 implementation, please refer to the latest version of 
 KafkaSystemProducer.scala in the Apache Samza repository.



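The concurrent-completion pattern requested above can be sketched with plain futures (names here are hypothetical; this is not the MockProducer API):

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ConcurrentFlushDemo {
    // The test thread blocks on the pending send future (the "flush") while
    // a second thread completes the callback concurrently, which is the
    // behavior the ticket wants MockProducer to support.
    public static String flushAfterAsyncCompletion() throws Exception {
        CompletableFuture<String> pendingSend = new CompletableFuture<>();
        ExecutorService completer = Executors.newSingleThreadExecutor();
        try {
            completer.submit(() -> pendingSend.complete("acked"));
            return pendingSend.get(5, TimeUnit.SECONDS); // blocks like flush
        } finally {
            completer.shutdown();
        }
    }
}
```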


[jira] [Resolved] (KAFKA-1855) Topic unusable after unsuccessful UpdateMetadataRequest

2015-08-20 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1855?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira resolved KAFKA-1855.
-
Resolution: Cannot Reproduce

 Topic unusable after unsuccessful UpdateMetadataRequest
 ---

 Key: KAFKA-1855
 URL: https://issues.apache.org/jira/browse/KAFKA-1855
 Project: Kafka
  Issue Type: Bug
  Components: controller
Affects Versions: 0.8.2.0
Reporter: Henri Pihkala
 Fix For: 0.8.3


 Sometimes, seemingly randomly, topic creation/initialization might fail with 
 the following lines in controller.log. Other logs show no errors. When this 
 happens, the topic is unusable (gives UnknownTopicOrPartition for all 
 requests).
 For me this happens 5-10% of the time. Feels like it's more likely to happen 
 if there is time between topic creations. Observed on 0.8.2-beta, have not 
 tried previous versions.
 [2015-01-09 16:15:27,153] WARN [Controller-0-to-broker-0-send-thread], 
 Controller 0 fails to send a request to broker 
 id:0,host:192.168.10.21,port:9092 (kafka.controller.RequestSendThread)
 java.io.EOFException: Received -1 when reading from channel, socket has 
 likely been closed.
   at kafka.utils.Utils$.read(Utils.scala:381)
   at 
 kafka.network.BoundedByteBufferReceive.readFrom(BoundedByteBufferReceive.scala:54)
   at kafka.network.Receive$class.readCompletely(Transmission.scala:56)
   at 
 kafka.network.BoundedByteBufferReceive.readCompletely(BoundedByteBufferReceive.scala:29)
   at kafka.network.BlockingChannel.receive(BlockingChannel.scala:108)
   at 
 kafka.controller.RequestSendThread.doWork(ControllerChannelManager.scala:146)
   at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:60)
 [2015-01-09 16:15:27,156] ERROR [Controller-0-to-broker-0-send-thread], 
 Controller 0 epoch 6 failed to send request 
 Name:UpdateMetadataRequest;Version:0;Controller:0;ControllerEpoch:6;CorrelationId:48;ClientId:id_0-host_192.168.10.21-port_9092;AliveBrokers:id:0,host:192.168.10.21,port:9092;PartitionState:[40963064-cdd2-4cd1-937a-9827d3ab77ad,0]
  - 
 (LeaderAndIsrInfo:(Leader:0,ISR:0,LeaderEpoch:0,ControllerEpoch:6),ReplicationFactor:1),AllReplicas:0)
  to broker id:0,host:192.168.10.21,port:9092. Reconnecting to broker. 
 (kafka.controller.RequestSendThread)
 java.nio.channels.ClosedChannelException
   at kafka.network.BlockingChannel.send(BlockingChannel.scala:97)
   at 
 kafka.controller.RequestSendThread.liftedTree1$1(ControllerChannelManager.scala:132)
   at 
 kafka.controller.RequestSendThread.doWork(ControllerChannelManager.scala:131)
   at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:60)





[jira] [Updated] (KAFKA-1877) Expose version via JMX for 'new' producer

2015-08-20 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1877?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-1877:

Fix Version/s: (was: 0.8.3)

 Expose version via JMX for 'new' producer 
 --

 Key: KAFKA-1877
 URL: https://issues.apache.org/jira/browse/KAFKA-1877
 Project: Kafka
  Issue Type: Bug
  Components: clients, producer 
Affects Versions: 0.8.2.0
Reporter: Vladimir Tretyakov
Assignee: Manikumar Reddy

 Add the Kafka version to JMX (monitoring tools can use this info).
 Something like this:
 {code}
 kafka.common:type=AppInfo,name=Version
   Value java.lang.Object = 0.8.2-beta
 {code}
 we already have this in core Kafka module (see kafka.common.AppInfo object).



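A hedged sketch of what exposing the version over JMX could look like (class and method names are illustrative; only the ObjectName and the "Value" attribute mirror what is quoted in the ticket):

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class AppInfoJmx {
    public interface InfoMBean { String getValue(); }

    public static class Info implements InfoMBean {
        private final String version;
        public Info(String version) { this.version = version; }
        @Override public String getValue() { return version; }
    }

    // Register the version under the ObjectName quoted in the ticket; a
    // monitoring tool can then read the "Value" attribute over JMX.
    public static ObjectName register(String version) throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        ObjectName name = new ObjectName("kafka.common:type=AppInfo,name=Version");
        if (!server.isRegistered(name))
            server.registerMBean(new Info(version), name);
        return name;
    }
}
```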


[jira] [Commented] (KAFKA-2170) 10 LogTest cases failed for file.renameTo failed under windows

2015-08-20 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2170?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14705922#comment-14705922
 ] 

ASF GitHub Bot commented on KAFKA-2170:
---

GitHub user mpoindexter opened a pull request:

https://github.com/apache/kafka/pull/154

KAFKA-2170, KAFKA-1194: Fixes for Windows

This branch fixes several Windows specific issues, both in the code and in 
the tests.  With these changes the whole test suite passes on my Windows 
machine.  I found the following issues that were relevant in Jira: KAFKA-2170 
and KAFKA-1194, but there may be some others.  I also have a branch with these 
changes done against 0.8.2.1 if there's any interest in merging to the 0.8 
series.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/mpoindexter/kafka fix-windows

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/154.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #154


commit 19ae8ac6a4c8ceef6451635055f75bd038fe25ae
Author: Mike Poindexter statics...@gmail.com
Date:   2015-08-20T02:01:25Z

Fix Windows failures when renaming/deleting files

- Windows will not allow a file that is mmap'd to be renamed or
  deleted.  To work around this ensure that forceUnmap is called
  on close, delete and rename.  For the rename case, make sure that
  the file is reopened after the rename completes

- Windows will not allow a file that has an open FileChannel to be
  renamed.  This causes breakage in FileMessageSet.renameTo since
  it holds the FileChannel open during rename.  This can be worked
  around by changing how we open the FileChannel to use
  FileChannel.open instead of new FileInputStream(file).toChannel.
  This causes the file to be opened with the FILE_SHARE_DELETE flag
  which will allow the file to be renamed while open.  See this JDK
  bug for details: http://bugs.java.com/view_bug.do?bug_id=6357433

- Fix a bug in LogTest that caused a race between the next iteration
  of a test loop and the asynchronous delete of old segments

- Fix a bug in LogTest where the log was not closed leading to leftover
  garbage at the next run of a test loop

- Ensure that any time forceUnmap is called we set mmap to null.  This
  will ensure that invalid use after forceUnmap causes a NPE instead of
  JVM memory corruption

commit 2066306285738d42be74c7987ee0ef91b8a6d7ee
Author: Mike Poindexter statics...@gmail.com
Date:   2015-08-20T22:26:14Z

Fixes for load cleaning tests to ensure the files
in a segment are only open once so renames, etc.
do not fail on windows
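The FileChannel.open workaround from the commit message above, as a small sketch (the helper name is illustrative, not the patch's actual code):

```java
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class ShareDeleteOpen {
    // FileChannel.open requests FILE_SHARE_DELETE on Windows, so the
    // underlying file can still be renamed or deleted while the channel is
    // open; a channel from new FileInputStream(file) does not get that flag.
    public static FileChannel openReadable(Path file) throws IOException {
        return FileChannel.open(file, StandardOpenOption.READ);
    }
}
```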




 10 LogTest cases failed for  file.renameTo failed under windows
 ---

 Key: KAFKA-2170
 URL: https://issues.apache.org/jira/browse/KAFKA-2170
 Project: Kafka
  Issue Type: Bug
  Components: log
Affects Versions: 0.9.0
 Environment: Windows
Reporter: Honghai Chen
Assignee: Jay Kreps

 get latest code from trunk, then run test 
 gradlew  -i core:test --tests kafka.log.LogTest
 Got 10 cases failed for same reason:
 kafka.common.KafkaStorageException: Failed to change the log file suffix from 
  to .deleted for log segment 0
   at kafka.log.LogSegment.changeFileSuffixes(LogSegment.scala:259)
   at kafka.log.Log.kafka$log$Log$$asyncDeleteSegment(Log.scala:756)
   at kafka.log.Log.kafka$log$Log$$deleteSegment(Log.scala:747)
   at kafka.log.Log$$anonfun$deleteOldSegments$1.apply(Log.scala:514)
   at kafka.log.Log$$anonfun$deleteOldSegments$1.apply(Log.scala:514)
   at scala.collection.immutable.List.foreach(List.scala:318)
   at kafka.log.Log.deleteOldSegments(Log.scala:514)
   at kafka.log.LogTest.testAsyncDelete(LogTest.scala:633)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:601)
   at 
 org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)
   at 
 org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
   at 
 org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)
   at 
 org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
   at 
 org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
   at 
 

[GitHub] kafka pull request: KAFKA-2170, KAFKA-1194: Fixes for Windows

2015-08-20 Thread mpoindexter
GitHub user mpoindexter opened a pull request:

https://github.com/apache/kafka/pull/154

KAFKA-2170, KAFKA-1194: Fixes for Windows

This branch fixes several Windows specific issues, both in the code and in 
the tests.  With these changes the whole test suite passes on my Windows 
machine.  I found the following issues that were relevant in Jira: KAFKA-2170 
and KAFKA-1194, but there may be some others.  I also have a branch with these 
changes done against 0.8.2.1 if there's any interest in merging to the 0.8 
series.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/mpoindexter/kafka fix-windows

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/154.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #154


commit 19ae8ac6a4c8ceef6451635055f75bd038fe25ae
Author: Mike Poindexter statics...@gmail.com
Date:   2015-08-20T02:01:25Z

Fix Windows failures when renaming/deleting files

- Windows will not allow a file that is mmap'd to be renamed or
  deleted.  To work around this ensure that forceUnmap is called
  on close, delete and rename.  For the rename case, make sure that
  the file is reopened after the rename completes

- Windows will not allow a file that has an open FileChannel to be
  renamed.  This causes breakage in FileMessageSet.renameTo since
  it holds the FileChannel open during rename.  This can be worked
  around by changing how we open the FileChannel to use
  FileChannel.open instead of new FileInputStream(file).toChannel.
  This causes the file to be opened with the FILE_SHARE_DELETE flag
  which will allow the file to be renamed while open.  See this JDK
  bug for details: http://bugs.java.com/view_bug.do?bug_id=6357433

- Fix a bug in LogTest that caused a race between the next iteration
  of a test loop and the asynchronous delete of old segments

- Fix a bug in LogTest where the log was not closed leading to leftover
  garbage at the next run of a test loop

- Ensure that any time forceUnmap is called we set mmap to null.  This
  will ensure that invalid use after forceUnmap causes a NPE instead of
  JVM memory corruption

commit 2066306285738d42be74c7987ee0ef91b8a6d7ee
Author: Mike Poindexter statics...@gmail.com
Date:   2015-08-20T22:26:14Z

Fixes for load cleaning tests to ensure the files
in a segment are only open once so renames, etc.
do not fail on windows




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] kafka pull request: KAFKA-1683: persisting session information in ...

2015-08-20 Thread gwenshap
GitHub user gwenshap opened a pull request:

https://github.com/apache/kafka/pull/155

KAFKA-1683: persisting session information in Requests



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gwenshap/kafka KAFKA-1683

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/155.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #155


commit d97449d3626c1a209eefa2eb01e011ecdec4f147
Author: Gwen Shapira csh...@gmail.com
Date:   2015-08-21T01:46:49Z

persisting session information in Requests




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-1683) Implement a session concept in the socket server

2015-08-20 Thread Sriharsha Chintalapani (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1683?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14706096#comment-14706096
 ] 

Sriharsha Chintalapani commented on KAFKA-1683:
---

[~gwenshap] Hmm, if the handshake succeeds then they are technically 
authenticated. In this case it will give the hostname, and we do have a 
pluggable principal builder if the user is inclined to read the X509Cert and 
make up their own principal. Do you see any issues with this? 



[jira] [Commented] (KAFKA-1898) compatibility testing framework

2015-08-20 Thread Gwen Shapira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1898?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14706121#comment-14706121
 ] 

Gwen Shapira commented on KAFKA-1898:
-

Looping back here [~granders], is this something our Ducktape tests address at 
the moment?

 compatibility testing framework 
 

 Key: KAFKA-1898
 URL: https://issues.apache.org/jira/browse/KAFKA-1898
 Project: Kafka
  Issue Type: Bug
Reporter: Joe Stein
 Fix For: 0.8.3

 Attachments: cctk.png


 There are a few different scenarios where you want/need to know the 
 status/state of a client library that works with Kafka. Client library 
 development is not just about supporting the wire protocol but also the 
 implementations around specific interactions of the API.  The API has 
 blossomed into a robust set of producer, consumer, broker and administrative 
 calls all of which have layers of logic above them.  A Client Library may 
 choose to deviate from the path the project sets out and that is ok. The goal 
 of this ticket is to have a system for Kafka that can help to explain what 
 the library is or isn't doing (regardless of what it claims).
 The idea behind this stems from being able to quickly/easily/succinctly 
 analyze the topic message data. Once you can analyze the topic(s)' messages, 
 you can gather lots of information about what the client library is doing, is 
 not doing, and so on. There are a few components to this.
 1) dataset-generator 
 Test Kafka dataset generation tool. Generates a random text file with given 
 params:
 --filename, -f - output file name.
 --filesize, -s - desired size of output file. The actual size will always be 
 a bit larger (with a maximum size of $filesize + $max.length - 1)
 --min.length, -l - minimum generated entry length.
 --max.length, -h - maximum generated entry length.
 Usage:
 ./gradlew build
 java -jar dataset-generator/build/libs/dataset-generator-*.jar -s 10 -l 2 
 -h 20
 2) dataset-producer
 Test Kafka dataset producer tool. Able to produce the given dataset to Kafka 
 or Syslog server.  The idea here is you already have lots of data sets that 
 you want to test different things for. You might have different sized 
 messages, formats, etc and want a repeatable benchmark to run and re-run the 
 testing on. You could just have a day's worth of data and just choose to 
 replay it.  The CCTK idea is that you are always starting from CONSUME in 
 the state of your library. If your library is only producing then you will fail a 
 bunch of tests and that might be ok for people.
 Accepts following params:
 {code}
 --filename, -f - input file name.
 --kafka, -k - Kafka broker address in host:port format. If this parameter is 
 set, --producer.config and --topic must be set too (otherwise they're 
 ignored).
 --producer.config, -p - Kafka producer properties file location.
 --topic, -t - Kafka topic to produce to.
 --syslog, -s - Syslog server address. Format: protocol://host:port 
 (tcp://0.0.0.0:5140 or udp://0.0.0.0:5141 for example)
 --loop, -l - flag to loop through file until shut off manually. False by 
 default.
 Usage:
 ./gradlew build
 java -jar dataset-producer/build/libs/dataset-producer-*.jar --filename 
 dataset --syslog tcp://0.0.0.0:5140 --loop true
 {code}
 3) extract
 This step is good so you can save data and compare tests. It could also be 
 removed if folks are just looking for a real live test (and we could support 
 that too).  Here we are taking data out of Kafka and putting it into 
 Cassandra (but other data stores can be used too, and we should come up with a 
 way to abstract this out completely so folks could implement whatever they 
 wanted).
 {code}
 package ly.stealth.shaihulud.reader
 import java.util.UUID
 import com.datastax.spark.connector._
 import com.datastax.spark.connector.cql.CassandraConnector
 import consumer.kafka.MessageAndMetadata
 import consumer.kafka.client.KafkaReceiver
 import org.apache.spark._
 import org.apache.spark.storage.StorageLevel
 import org.apache.spark.streaming._
 import org.apache.spark.streaming.dstream.DStream
 object Main extends App with Logging {
    val parser = new scopt.OptionParser[ReaderConfiguration]("spark-reader") {
  head("Spark Reader for Kafka client applications", "1.0")
  opt[String]("testId") unbounded() optional() action { (x, c) =>
    c.copy(testId = x)
  } text ("Source topic with initial set of data")
  opt[String]("source") unbounded() required() action { (x, c) =>
    c.copy(sourceTopic = x)
  } text ("Source topic with initial set of data")
  opt[String]("destination") unbounded() required() action { (x, c) =>
    c.copy(destinationTopic = x)
  } text ("Destination topic with processed set of data")
  opt[Int]("partitions") unbounded() optional() action { (x, c) =>
    c.copy(partitions = x)
  } text 

[jira] [Updated] (KAFKA-1912) Create a simple request re-routing facility

2015-08-20 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1912?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-1912:

Fix Version/s: (was: 0.8.3)

 Create a simple request re-routing facility
 ---

 Key: KAFKA-1912
 URL: https://issues.apache.org/jira/browse/KAFKA-1912
 Project: Kafka
  Issue Type: Improvement
Reporter: Jay Kreps

 We are accumulating a lot of requests that have to be directed to the correct 
 server. This makes sense for high volume produce or fetch requests. But it is 
 silly to put the extra burden on the client for the many miscellaneous 
 requests such as fetching or committing offsets and so on.
 This adds a ton of practical complexity to the clients with little or no 
 payoff in performance.
 We should add a generic request-type agnostic re-routing facility on the 
 server. This would allow any server to accept a request and forward it to the 
 correct destination, proxying the response back to the user. Naturally it 
 needs to do this without blocking the thread.
 The result is that a client implementation can choose to be optimally 
 efficient and manage a local cache of cluster state and attempt to always 
 direct its requests to the proper server OR it can choose simplicity and just 
 send things all to a single host and let that host figure out where to 
 forward it.
 I actually think we should implement this more or less across the board, but 
 some requests such as produce and fetch require more logic to proxy since 
 they have to be scattered out to multiple servers and gathered back to create 
 the response. So these could be done in a second phase.
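The forward-or-handle decision described above can be illustrated with a few lines. This is an invented sketch (names like `RequestRouter` and `forwardTarget` are not actual Kafka code); the real facility would also have to proxy the response back to the client without blocking the thread:

```java
import java.util.Map;
import java.util.Optional;

// Sketch of the routing decision: a broker receiving a request either handles
// it locally or names the broker it should be forwarded to.
class RequestRouter {
    private final int localBrokerId;
    private final Map<String, Integer> partitionLeaders; // partition key -> leader broker id

    RequestRouter(int localBrokerId, Map<String, Integer> partitionLeaders) {
        this.localBrokerId = localBrokerId;
        this.partitionLeaders = partitionLeaders;
    }

    /** Empty if the request should be handled locally; otherwise the broker to forward to. */
    Optional<Integer> forwardTarget(String partitionKey) {
        Integer leader = partitionLeaders.get(partitionKey);
        if (leader == null || leader == localBrokerId) {
            return Optional.empty(); // handle locally (or fail with unknown partition)
        }
        return Optional.of(leader); // proxy to the leader, response comes back asynchronously
    }
}
```

A simple client can then send everything to one broker and let this logic fan requests out, while a smarter client caches the leader map and hits the right broker directly.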



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1911) Log deletion on stopping replicas should be async

2015-08-20 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1911?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-1911:

Fix Version/s: (was: 0.8.3)

 Log deletion on stopping replicas should be async
 -

 Key: KAFKA-1911
 URL: https://issues.apache.org/jira/browse/KAFKA-1911
 Project: Kafka
  Issue Type: Bug
  Components: log, replication
Reporter: Joel Koshy
Assignee: Geoff Anderson
  Labels: newbie++

 If a StopReplicaRequest sets delete=true then we do a file.delete on the file 
 message sets. I was under the impression that this is fast but it does not 
 seem to be the case.
 On a partition reassignment in our cluster the local time for stop replica 
 took nearly 30 seconds.
 {noformat}
 Completed request:Name: StopReplicaRequest; Version: 0; CorrelationId: 467; 
 ClientId: ;DeletePartitions: true; ControllerId: 1212; ControllerEpoch: 
 53 from 
 client/...:45964;totalTime:29191,requestQueueTime:1,localTime:29190,remoteTime:0,responseQueueTime:0,sendTime:0
 {noformat}
 This ties up one API thread for the duration of the request.
 Specifically in our case, the queue times for other requests also went up and 
 producers to the partition that was just deleted on the old leader took a 
 while to refresh their metadata (see KAFKA-1303) and eventually ran out of 
 retries on some messages leading to data loss.
 I think the log deletion in this case should be fully asynchronous although 
 we need to handle the case when a broker may respond immediately to the 
 stop-replica-request but then go down after deleting only some of the log 
 segments.
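The rename-then-delete-in-the-background pattern proposed above can be sketched as follows. This is illustrative only (not the actual LogManager code), and as the last paragraph notes, a real implementation must also recover correctly if the broker dies after renaming but before deleting:

```java
import java.io.File;
import java.io.IOException;
import java.nio.file.Files;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Respond to the stop-replica request quickly by renaming the segment (cheap),
// then delete the actual bytes on a background thread.
class AsyncSegmentDeleter {
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();

    /** Rename the segment out of the way synchronously, then delete it asynchronously. */
    File scheduleDelete(File segment, long delayMs) throws IOException {
        File marked = new File(segment.getPath() + ".deleted");
        Files.move(segment.toPath(), marked.toPath()); // fast, done on the request thread
        scheduler.schedule(() -> marked.delete(), delayMs, TimeUnit.MILLISECONDS);
        return marked;
    }

    void shutdown() throws InterruptedException {
        scheduler.shutdown();
        scheduler.awaitTermination(10, TimeUnit.SECONDS);
    }
}
```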



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (KAFKA-1913) App hungs when calls producer.send to wrong IP of Kafka broker

2015-08-20 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1913?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira resolved KAFKA-1913.
-
Resolution: Duplicate

 App hungs when calls producer.send to wrong IP of Kafka broker
 --

 Key: KAFKA-1913
 URL: https://issues.apache.org/jira/browse/KAFKA-1913
 Project: Kafka
  Issue Type: Bug
  Components: producer 
Affects Versions: 0.8.1.1
 Environment: OS X 10.10.1, Java 7, AWS Linux
Reporter: Igor Khomenko
Assignee: Jun Rao
 Fix For: 0.8.3


 I have the following test code to check Kafka functionality:
 {code}
 package com.company;
 import kafka.common.FailedToSendMessageException;
 import kafka.javaapi.producer.Producer;
 import kafka.producer.KeyedMessage;
 import kafka.producer.ProducerConfig;
 import java.util.Date;
 import java.util.Properties;
 public class Main {
 public static void main(String[] args) {
  Properties props = new Properties();
  props.put("metadata.broker.list", "192.168.9.3:9092");
  props.put("serializer.class", "com.company.KafkaMessageSerializer");
  props.put("request.required.acks", "1");
  ProducerConfig config = new ProducerConfig(props);
  // The first is the type of the Partition key, the second the type of the message.
  Producer<String, String> messagesProducer = new Producer<String, String>(config);
  // Send
  String topicName = "my_messages";
  String message = "hello world";
  KeyedMessage<String, String> data = new KeyedMessage<String, String>(topicName, message);
  try {
      System.out.println(new Date() + ": sending...");
      messagesProducer.send(data);
      System.out.println(new Date() + " : sent");
  } catch (FailedToSendMessageException e) {
      System.out.println("e: " + e);
      e.printStackTrace();
  } catch (Exception exc) {
      System.out.println("e: " + exc);
      exc.printStackTrace();
  }
 }
 }
 {code}
 {code}
 package com.company;
 import kafka.serializer.Encoder;
 import kafka.utils.VerifiableProperties;
 /**
  * Created by igorkhomenko on 2/2/15.
  */
 public class KafkaMessageSerializer implements Encoder<String> {
 public KafkaMessageSerializer(VerifiableProperties verifiableProperties) {
 /* This constructor must be present for successful compile. */
 }
 @Override
 public byte[] toBytes(String entity) {
 byte [] serializedMessage = doCustomSerialization(entity);
 return serializedMessage;
 }
 private byte[] doCustomSerialization(String entity) {
 return entity.getBytes();
 }
 }
 {code}
 Here is also GitHub version https://github.com/soulfly/Kafka-java-producer
 So it just hangs on the next line:
 {code}
 messagesProducer.send(data)
 {code}
 When I replaced the broker list with
 {code}
 props.put("metadata.broker.list", "localhost:9092");
 {code}
 then I got an exception:
 {code}
 kafka.common.FailedToSendMessageException: Failed to send messages after 3 
 tries.
 {code}
 so that case is okay.
 Why does it hang with a wrong broker list? Any ideas?
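The hang comes from the blocking connect inside `send()` when the broker IP is routable but unresponsive. Until the client bounds that internally, a caller can bound any blocking call from the outside. This is a generic sketch, not part of the Kafka producer API:

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Run a potentially-blocking task with a hard timeout by executing it on a
// separate thread and bounding the Future.get().
class TimeoutCall {
    static <T> T callWithTimeout(Callable<T> task, long timeoutMs) throws Exception {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        try {
            return pool.submit(task).get(timeoutMs, TimeUnit.MILLISECONDS);
        } finally {
            pool.shutdownNow(); // interrupt the stuck call on timeout
        }
    }
}
```

With the producer call wrapped this way, a wrong broker address surfaces as a `TimeoutException` instead of an indefinite hang.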



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1566) Kafka environment configuration (kafka-env.sh)

2015-08-20 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1566?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14706128#comment-14706128
 ] 

ASF GitHub Bot commented on KAFKA-1566:
---

GitHub user harshach opened a pull request:

https://github.com/apache/kafka/pull/156

KAFKA-1566: Kafka environment configuration (kafka-env.sh)



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/harshach/kafka KAFKA-1566

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/156.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #156


commit 42b0acdb392494984b6928d94f0c611d4e1925de
Author: Sriharsha Chintalapani har...@hortonworks.com
Date:   2015-01-08T15:50:20Z

KAFKA-1566. Kafka environment configuration (kafka-env.sh).

commit 31d0dcab655b37864c207a08d5c77b9d27fff7bc
Author: Sriharsha Chintalapani har...@hortonworks.com
Date:   2015-03-18T00:14:41Z

KAFKA-1566. Kafka environment configuration (kafka-env.sh).




 Kafka environment configuration (kafka-env.sh)
 --

 Key: KAFKA-1566
 URL: https://issues.apache.org/jira/browse/KAFKA-1566
 Project: Kafka
  Issue Type: Improvement
  Components: tools
Reporter: Cosmin Lehene
Assignee: Sriharsha Chintalapani
  Labels: newbie
 Fix For: 0.8.3

 Attachments: KAFKA-1566.patch, KAFKA-1566_2015-02-21_21:57:02.patch, 
 KAFKA-1566_2015-03-17_17:01:38.patch, KAFKA-1566_2015-03-17_17:19:23.patch


 It would be useful (especially for automated deployments) to have an 
 environment configuration file that could be sourced from the launcher files 
 (e.g. kafka-run-server.sh). 
 This is how this could look like kafka-env.sh 
 {code}
 export KAFKA_JVM_PERFORMANCE_OPTS='-XX:+UseCompressedOops -XX:+DisableExplicitGC -Djava.awt.headless=true -XX:+UseG1GC -XX:PermSize=48m -XX:MaxPermSize=48m -XX:MaxGCPauseMillis=20 -XX:InitiatingHeapOccupancyPercent=35'
 export KAFKA_HEAP_OPTS='-Xmx1G -Xms1G'
 export KAFKA_LOG4J_OPTS='-Dkafka.logs.dir=/var/log/kafka'
 {code} 
 kafka-server-start.sh 
 {code} 
 ... 
 source $base_dir/config/kafka-env.sh 
 ... 
 {code} 
 This approach is consistent with Hadoop and HBase. However the idea here is 
 to be able to set these values in a single place without having to edit 
 startup scripts.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-1566: Kafka environment configuration (k...

2015-08-20 Thread harshach
GitHub user harshach opened a pull request:

https://github.com/apache/kafka/pull/156

KAFKA-1566: Kafka environment configuration (kafka-env.sh)



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/harshach/kafka KAFKA-1566

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/156.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #156


commit 42b0acdb392494984b6928d94f0c611d4e1925de
Author: Sriharsha Chintalapani har...@hortonworks.com
Date:   2015-01-08T15:50:20Z

KAFKA-1566. Kafka environment configuration (kafka-env.sh).

commit 31d0dcab655b37864c207a08d5c77b9d27fff7bc
Author: Sriharsha Chintalapani har...@hortonworks.com
Date:   2015-03-18T00:14:41Z

KAFKA-1566. Kafka environment configuration (kafka-env.sh).




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Updated] (KAFKA-1893) Allow regex subscriptions in the new consumer

2015-08-20 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1893?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-1893:

Reviewer: Gwen Shapira

 Allow regex subscriptions in the new consumer
 -

 Key: KAFKA-1893
 URL: https://issues.apache.org/jira/browse/KAFKA-1893
 Project: Kafka
  Issue Type: Sub-task
  Components: consumer
Reporter: Jay Kreps
Assignee: Ashish K Singh
Priority: Critical
 Fix For: 0.8.3


 The consumer needs to handle subscribing to regular expressions. Presumably 
 this would be done as a new API,
 {code}
   void subscribe(java.util.regex.Pattern pattern);
 {code}
 Some questions/thoughts to work out:
  - It should not be possible to mix pattern subscription with partition 
 subscription.
  - Is it allowable to mix this with normal topic subscriptions? Logically 
 this is okay but a bit complex to implement.
  - We need to ensure we regularly update the metadata and recheck our regexes 
 against the metadata to update subscriptions for new topics that are created 
 or old topics that are deleted.
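The "recheck regexes against metadata" step can be sketched as a pure function over the current topic list. This is illustrative, not the actual consumer internals:

```java
import java.util.List;
import java.util.regex.Pattern;
import java.util.stream.Collectors;

// On each metadata refresh, re-evaluate the subscribed pattern against the
// cluster's current topics so new topics are picked up and deleted ones dropped.
class PatternSubscription {
    static List<String> matchingTopics(Pattern pattern, List<String> clusterTopics) {
        return clusterTopics.stream()
                .filter(t -> pattern.matcher(t).matches())
                .collect(Collectors.toList());
    }
}
```

The consumer would diff the result against its previous subscription set and trigger a rebalance only when the set actually changes.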



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2007) update offsetrequest for more useful information we have on broker about partition

2015-08-20 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2007?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-2007:

Fix Version/s: (was: 0.8.3)

 update offsetrequest for more useful information we have on broker about 
 partition
 --

 Key: KAFKA-2007
 URL: https://issues.apache.org/jira/browse/KAFKA-2007
 Project: Kafka
  Issue Type: Bug
Reporter: Joe Stein
Assignee: Sriharsha Chintalapani

 this will need a KIP
 via [~jkreps] in KIP-6 discussion about KAFKA-1694
 The other information that would be really useful to get would be
 information about partitions--how much data is in the partition, what are
 the segment offsets, what is the log-end offset (i.e. last offset), what is
 the compaction point, etc. I think that done right this would be the
 successor to the very awkward OffsetRequest we have today.
 This is not really blocking that ticket and could happen before/after and has 
 a lot of other useful purposes and is important to get done so tracking it 
 here in this JIRA.
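The richer per-partition metadata the comment asks for might look like the following. All field names here are invented for illustration; this is not an actual Kafka API:

```java
// Hypothetical successor to the OffsetRequest's per-partition payload: how much
// data is in the partition, segment boundaries, log-end offset, compaction point.
class PartitionMetadataSketch {
    final long logStartOffset;   // first available offset
    final long logEndOffset;     // last offset + 1
    final long sizeInBytes;      // data currently held by the partition
    final long compactionPoint;  // offset up to which the log has been compacted

    PartitionMetadataSketch(long logStartOffset, long logEndOffset,
                            long sizeInBytes, long compactionPoint) {
        this.logStartOffset = logStartOffset;
        this.logEndOffset = logEndOffset;
        this.sizeInBytes = sizeInBytes;
        this.compactionPoint = compactionPoint;
    }

    /** Number of offsets between the first and last available message. */
    long messageCount() {
        return logEndOffset - logStartOffset;
    }
}
```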



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1683) Implement a session concept in the socket server

2015-08-20 Thread Parth Brahmbhatt (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1683?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14706135#comment-14706135
 ] 

Parth Brahmbhatt commented on KAFKA-1683:
-

[~gwenshap] I agree; the original KIP also assumes that when no authentication 
is done we will still set the Principal to ANONYMOUS. My only suggestion is: 
let's declare this constant in some config class in case we need to access the 
value from somewhere else in the code.

 Implement a session concept in the socket server
 --

 Key: KAFKA-1683
 URL: https://issues.apache.org/jira/browse/KAFKA-1683
 Project: Kafka
  Issue Type: Sub-task
  Components: security
Affects Versions: 0.9.0
Reporter: Jay Kreps
Assignee: Gwen Shapira
 Fix For: 0.8.3

 Attachments: KAFKA-1683.patch, KAFKA-1683.patch


 To implement authentication we need a way to keep track of some things 
 between requests. The initial use for this would be remembering the 
 authenticated user/principle info, but likely more uses would come up (for 
 example we will also need to remember whether and which encryption or 
 integrity measures are in place on the socket so we can wrap and unwrap 
 writes and reads).
 I was thinking we could just add a Session object that might have a user 
 field. The session object would need to get added to RequestChannel.Request 
 so it is passed down to the API layer with each request.
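A minimal sketch of such a Session object follows. Field names are illustrative, not the final API; the real object would also carry per-connection security state (encryption and integrity settings) as the description notes:

```java
import java.net.InetAddress;

// Immutable holder for the authenticated principal, attached to each request
// as it crosses into the API layer.
final class Session {
    static final String ANONYMOUS = "ANONYMOUS";

    private final String principal;
    private final InetAddress clientAddress;

    Session(String principal, InetAddress clientAddress) {
        // Unauthenticated connections get a well-known anonymous principal.
        this.principal = (principal == null) ? ANONYMOUS : principal;
        this.clientAddress = clientAddress;
    }

    String principal() { return principal; }
    InetAddress clientAddress() { return clientAddress; }
}
```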



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-873) Consider replacing zkclient with curator (with zkclient-bridge)

2015-08-20 Thread Edward Ribeiro (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14705911#comment-14705911
 ] 

Edward Ribeiro commented on KAFKA-873:
--

Sorry, I didn't follow here. Guava is not already bundled with Kafka, so there 
are no clients relying on it to pull guava, right?

 Consider replacing zkclient with curator (with zkclient-bridge)
 ---

 Key: KAFKA-873
 URL: https://issues.apache.org/jira/browse/KAFKA-873
 Project: Kafka
  Issue Type: Improvement
Affects Versions: 0.8.0
Reporter: Scott Clasen
Assignee: Grant Henke

 If zkclient was replaced with curator and curator-x-zkclient-bridge it would 
 be initially a drop-in replacement
 https://github.com/Netflix/curator/wiki/ZKClient-Bridge
 With the addition of a few more props to ZkConfig, and a bit of code this 
 would open up the possibility of using ACLs in zookeeper (which arent 
 supported directly by zkclient), as well as integrating with netflix 
 exhibitor for those of us using that.
 Looks like KafkaZookeeperClient needs some love anyhow...



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)



[jira] [Updated] (KAFKA-1566) Kafka environment configuration (kafka-env.sh)

2015-08-20 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1566?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-1566:

Reviewer: Gwen Shapira

 Kafka environment configuration (kafka-env.sh)
 --

 Key: KAFKA-1566
 URL: https://issues.apache.org/jira/browse/KAFKA-1566
 Project: Kafka
  Issue Type: Improvement
  Components: tools
Reporter: Cosmin Lehene
Assignee: Sriharsha Chintalapani
  Labels: newbie
 Fix For: 0.8.3

 Attachments: KAFKA-1566.patch, KAFKA-1566_2015-02-21_21:57:02.patch, 
 KAFKA-1566_2015-03-17_17:01:38.patch, KAFKA-1566_2015-03-17_17:19:23.patch


 It would be useful (especially for automated deployments) to have an 
 environment configuration file that could be sourced from the launcher files 
 (e.g. kafka-run-server.sh). 
 This is how this could look like kafka-env.sh 
 {code}
 export KAFKA_JVM_PERFORMANCE_OPTS='-XX:+UseCompressedOops -XX:+DisableExplicitGC -Djava.awt.headless=true -XX:+UseG1GC -XX:PermSize=48m -XX:MaxPermSize=48m -XX:MaxGCPauseMillis=20 -XX:InitiatingHeapOccupancyPercent=35'
 export KAFKA_HEAP_OPTS='-Xmx1G -Xms1G'
 export KAFKA_LOG4J_OPTS='-Dkafka.logs.dir=/var/log/kafka'
 {code} 
 kafka-server-start.sh 
 {code} 
 ... 
 source $base_dir/config/kafka-env.sh 
 ... 
 {code} 
 This approach is consistent with Hadoop and HBase. However the idea here is 
 to be able to set these values in a single place without having to edit 
 startup scripts.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-873) Consider replacing zkclient with curator (with zkclient-bridge)

2015-08-20 Thread Parth Brahmbhatt (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14705917#comment-14705917
 ] 

Parth Brahmbhatt commented on KAFKA-873:


Sorry, I misunderstood the original question. You are right: Guava is not 
bundled with Kafka but with Curator, and Flavio is correct that a client 
attempting to use Guava with Kafka could use shading to avoid conflicts or 
incompatibilities.

 Consider replacing zkclient with curator (with zkclient-bridge)
 ---

 Key: KAFKA-873
 URL: https://issues.apache.org/jira/browse/KAFKA-873
 Project: Kafka
  Issue Type: Improvement
Affects Versions: 0.8.0
Reporter: Scott Clasen
Assignee: Grant Henke

 If zkclient was replaced with curator and curator-x-zkclient-bridge it would 
 be initially a drop-in replacement
 https://github.com/Netflix/curator/wiki/ZKClient-Bridge
 With the addition of a few more props to ZkConfig, and a bit of code this 
 would open up the possibility of using ACLs in zookeeper (which arent 
 supported directly by zkclient), as well as integrating with netflix 
 exhibitor for those of us using that.
 Looks like KafkaZookeeperClient needs some love anyhow...



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Review Request 33378: Patch for KAFKA-2136

2015-08-20 Thread Joel Koshy

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/33378/#review96009
---



core/src/main/scala/kafka/api/FetchResponse.scala (line 172)
https://reviews.apache.org/r/33378/#comment151206

Since (in the event of multiple calls) this grouping would be repeated, 
should we just have `responseSize` take the `FetchResponse` object and have 
that look up the `lazy val dataGroupedByTopic`? That said, I think the original 
version should have had `sizeInBytes` as a `lazy val` as well.



core/src/main/scala/kafka/server/ClientQuotaManager.scala (line 115)
https://reviews.apache.org/r/33378/#comment151207

`throttleTimeMs` for consistency



core/src/main/scala/kafka/server/ClientQuotaManager.scala (line 119)
https://reviews.apache.org/r/33378/#comment151209

Maybe make this explicitly zero, and `delayTime` can move below as a `val`



core/src/main/scala/kafka/server/ClientQuotaManager.scala (line 142)
https://reviews.apache.org/r/33378/#comment151208

any specific reason for this change?


- Joel Koshy


On Aug. 18, 2015, 8:24 p.m., Aditya Auradkar wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/33378/
 ---
 
 (Updated Aug. 18, 2015, 8:24 p.m.)
 
 
 Review request for kafka, Joel Koshy and Jun Rao.
 
 
 Bugs: KAFKA-2136
 https://issues.apache.org/jira/browse/KAFKA-2136
 
 
 Repository: kafka
 
 
 Description
 ---
 
 Changes are
 - Addressing Joel's comments
 - protocol changes to the fetch request and response to return the 
 throttle_time_ms to clients
 - New producer/consumer metrics to expose the avg and max delay time for a 
 client
 - Test cases.
 - Addressed Joel and Juns comments
 
 
 Diffs
 -
 
   
 clients/src/main/java/org/apache/kafka/clients/consumer/internals/Fetcher.java
  9dc669728e6b052f5c6686fcf1b5696a50538ab4 
   
 clients/src/main/java/org/apache/kafka/clients/producer/internals/Sender.java 
 0baf16e55046a2f49f6431e01d52c323c95eddf0 
   clients/src/main/java/org/apache/kafka/common/protocol/Protocol.java 
 3dc8b015afd2347a41c9a9dbc02b8e367da5f75f 
   clients/src/main/java/org/apache/kafka/common/requests/FetchRequest.java 
 df073a0e76cc5cc731861b9604d0e19a928970e0 
   clients/src/main/java/org/apache/kafka/common/requests/FetchResponse.java 
 eb8951fba48c335095cc43fc3672de1c733e07ff 
   clients/src/main/java/org/apache/kafka/common/requests/ProduceRequest.java 
 715504b32950666e9aa5a260fa99d5f897b2007a 
   clients/src/main/java/org/apache/kafka/common/requests/ProduceResponse.java 
 febfc70dabc23671fd8a85cf5c5b274dff1e10fb 
   
 clients/src/test/java/org/apache/kafka/clients/consumer/internals/FetcherTest.java
  a7c83cac47d41138d47d7590a3787432d675c1b0 
   
 clients/src/test/java/org/apache/kafka/clients/producer/internals/SenderTest.java
  8b1805d3d2bcb9fe2bacb37d870c3236aa9532c4 
   
 clients/src/test/java/org/apache/kafka/common/requests/RequestResponseTest.java
  8b2aca85fa738180e5420985fddc39a4bf9681ea 
   core/src/main/scala/kafka/api/FetchRequest.scala 
 5b38f8554898e54800abd65a7415dd0ac41fd958 
   core/src/main/scala/kafka/api/FetchResponse.scala 
 0b6b33ab6f7a732ff1322b6f48abd4c43e0d7215 
   core/src/main/scala/kafka/api/ProducerRequest.scala 
 c866180d3680da03e48d374415f10220f6ca68c4 
   core/src/main/scala/kafka/api/ProducerResponse.scala 
 5d1fac4cb8943f5bfaa487f8e9d9d2856efbd330 
   core/src/main/scala/kafka/consumer/SimpleConsumer.scala 
 7ebc0405d1f309bed9943e7116051d1d8276f200 
   core/src/main/scala/kafka/server/AbstractFetcherThread.scala 
 f84306143c43049e3aa44e42beaffe7eb2783163 
   core/src/main/scala/kafka/server/ClientQuotaManager.scala 
 9f8473f1c64d10c04cf8cc91967688e29e54ae2e 
   core/src/main/scala/kafka/server/KafkaApis.scala 
 67f0cad802f901f255825aa2158545d7f5e7cc3d 
   core/src/main/scala/kafka/server/ReplicaFetcherThread.scala 
 fae22d2af8daccd528ac24614290f46ae8f6c797 
   core/src/main/scala/kafka/server/ReplicaManager.scala 
 d829e180c3943a90861a12ec184f9b4e4bbafe7d 
   core/src/main/scala/kafka/server/ThrottledResponse.scala 
 1f80d5480ccf7c411a02dd90296a7046ede0fae2 
   core/src/test/scala/unit/kafka/api/RequestResponseSerializationTest.scala 
 b4c2a228c3c9872e5817ac58d3022e4903e317b7 
   core/src/test/scala/unit/kafka/server/ClientQuotaManagerTest.scala 
 97dcca8c96f955acb3d92b29d7faa1e031ba71d4 
   core/src/test/scala/unit/kafka/server/ThrottledResponseExpirationTest.scala 
 14a7f4538041d557c190127e3d5f85edf2a0e7c1 
 
 Diff: https://reviews.apache.org/r/33378/diff/
 
 
 Testing
 ---
 
 New tests added
 
 
 Thanks,
 
 Aditya Auradkar
 




[jira] [Created] (KAFKA-2455) Test Failure: kafka.consumer.MetricsTest testMetricsLeak

2015-08-20 Thread Gwen Shapira (JIRA)
Gwen Shapira created KAFKA-2455:
---

 Summary: Test Failure: kafka.consumer.MetricsTest > testMetricsLeak
 Key: KAFKA-2455
 URL: https://issues.apache.org/jira/browse/KAFKA-2455
 Project: Kafka
  Issue Type: Bug
Reporter: Gwen Shapira


I've seen this failure in builds twice recently:

kafka.consumer.MetricsTest > testMetricsLeak FAILED
    java.lang.AssertionError: expected:<174> but was:<176>
at org.junit.Assert.fail(Assert.java:92)
at org.junit.Assert.failNotEquals(Assert.java:689)
at org.junit.Assert.assertEquals(Assert.java:127)
at org.junit.Assert.assertEquals(Assert.java:514)
at org.junit.Assert.assertEquals(Assert.java:498)
at 
kafka.consumer.MetricsTest$$anonfun$testMetricsLeak$1.apply$mcVI$sp(MetricsTest.scala:65)
at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:141)
at kafka.consumer.MetricsTest.testMetricsLeak(MetricsTest.scala:63)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1683) Implement a session concept in the socket server

2015-08-20 Thread Gwen Shapira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1683?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14706099#comment-14706099
 ] 

Gwen Shapira commented on KAFKA-1683:
-

[~harsha_ch] mmm... I'm talking about returning ANONYMOUS instead of throwing 
an unauthenticated exception.

You can check my pull request to see what I mean.

Since we need to have a session object anyway, I need to put some principal 
there, so if getPrincipal throws I'll need to handle it when creating a 
session. I think it is cleaner not to throw, but I may be missing SSL context.
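The non-throwing variant being discussed can be sketched in a few lines, with the constant declared once in a shared config class as Parth suggests. Names here are illustrative, not the actual patch:

```java
// Instead of getPrincipal() throwing for unauthenticated channels, map the
// "no authentication performed" case to a well-known constant so session
// creation never has to handle an exception.
class ChannelPrincipals {
    static final String ANONYMOUS_PRINCIPAL = "ANONYMOUS";

    /** authenticatedName is null when no authentication was performed. */
    static String principalFor(String authenticatedName) {
        return authenticatedName != null ? authenticatedName : ANONYMOUS_PRINCIPAL;
    }
}
```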

 Implement a session concept in the socket server
 --

 Key: KAFKA-1683
 URL: https://issues.apache.org/jira/browse/KAFKA-1683
 Project: Kafka
  Issue Type: Sub-task
  Components: security
Affects Versions: 0.9.0
Reporter: Jay Kreps
Assignee: Gwen Shapira
 Fix For: 0.8.3

 Attachments: KAFKA-1683.patch, KAFKA-1683.patch


 To implement authentication we need a way to keep track of some things 
 between requests. The initial use for this would be remembering the 
 authenticated user/principle info, but likely more uses would come up (for 
 example we will also need to remember whether and which encryption or 
 integrity measures are in place on the socket so we can wrap and unwrap 
 writes and reads).
 I was thinking we could just add a Session object that might have a user 
 field. The session object would need to get added to RequestChannel.Request 
 so it is passed down to the API layer with each request.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1194) The kafka broker cannot delete the old log files after the configured time

2015-08-20 Thread Michael Poindexter (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1194?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14705947#comment-14705947
 ] 

Michael Poindexter commented on KAFKA-1194:
---

I have created a PR that should address this: 
https://github.com/apache/kafka/pull/154

 The kafka broker cannot delete the old log files after the configured time
 --

 Key: KAFKA-1194
 URL: https://issues.apache.org/jira/browse/KAFKA-1194
 Project: Kafka
  Issue Type: Bug
  Components: log
Affects Versions: 0.8.1
 Environment: window
Reporter: Tao Qin
Assignee: Jay Kreps
  Labels: features, patch
 Fix For: 0.9.0

 Attachments: KAFKA-1194.patch, kafka-1194-v1.patch

   Original Estimate: 72h
  Remaining Estimate: 72h

 We tested it in windows environment, and set the log.retention.hours to 24 
 hours.
 # The minimum age of a log file to be eligible for deletion
 log.retention.hours=24
 After several days, the kafka broker still cannot delete the old log file. 
 And we get the following exceptions:
 [2013-12-19 01:57:38,528] ERROR Uncaught exception in scheduled task 
 'kafka-log-retention' (kafka.utils.KafkaScheduler)
 kafka.common.KafkaStorageException: Failed to change the log file suffix from 
  to .deleted for log segment 1516723
  at kafka.log.LogSegment.changeFileSuffixes(LogSegment.scala:249)
  at kafka.log.Log.kafka$log$Log$$asyncDeleteSegment(Log.scala:638)
  at kafka.log.Log.kafka$log$Log$$deleteSegment(Log.scala:629)
  at kafka.log.Log$$anonfun$deleteOldSegments$1.apply(Log.scala:418)
  at kafka.log.Log$$anonfun$deleteOldSegments$1.apply(Log.scala:418)
  at 
 scala.collection.LinearSeqOptimized$class.foreach(LinearSeqOptimized.scala:59)
  at scala.collection.immutable.List.foreach(List.scala:76)
  at kafka.log.Log.deleteOldSegments(Log.scala:418)
  at 
 kafka.log.LogManager.kafka$log$LogManager$$cleanupExpiredSegments(LogManager.scala:284)
  at 
 kafka.log.LogManager$$anonfun$cleanupLogs$3.apply(LogManager.scala:316)
  at 
 kafka.log.LogManager$$anonfun$cleanupLogs$3.apply(LogManager.scala:314)
  at 
 scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:743)
  at scala.collection.Iterator$class.foreach(Iterator.scala:772)
  at 
 scala.collection.JavaConversions$JIteratorWrapper.foreach(JavaConversions.scala:573)
  at scala.collection.IterableLike$class.foreach(IterableLike.scala:73)
  at 
 scala.collection.JavaConversions$JListWrapper.foreach(JavaConversions.scala:615)
  at 
 scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:742)
  at kafka.log.LogManager.cleanupLogs(LogManager.scala:314)
  at 
 kafka.log.LogManager$$anonfun$startup$1.apply$mcV$sp(LogManager.scala:143)
  at kafka.utils.KafkaScheduler$$anon$1.run(KafkaScheduler.scala:100)
  at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
  at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:304)
  at 
 java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178)
  at 
 java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
  at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
  at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
  at java.lang.Thread.run(Thread.java:724)
 I think this error happens because kafka tries to rename the log file while it 
 is still open, so we should close the file before renaming it.
 The index file uses a special data structure, the MappedByteBuffer. Javadoc 
 describes it as:
 A mapped byte buffer and the file mapping that it represents remain valid 
 until the buffer itself is garbage-collected.
 Fortunately, I found a forceUnmap function in the kafka code, and perhaps it 
 can be used to free the MappedByteBuffer.
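The close-before-rename idea suggested above can be sketched as a small stand-alone Java example. This is illustrative only (class and method names are not Kafka's): on Windows, an open FileChannel keeps the file locked, and a still-referenced MappedByteBuffer keeps it locked until the buffer is garbage-collected or force-unmapped, which is exactly why the rename to .deleted fails in the report.

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;
import java.nio.file.StandardOpenOption;

public class SafeRename {
    // Close the channel (and drop all references to the mapped buffer) before
    // renaming. On Windows, renaming while the channel is open, or while a
    // live mapping exists, fails with an access error like the one reported.
    static Path closeThenRename(Path src, Path dst) {
        try (FileChannel ch = FileChannel.open(src, StandardOpenOption.READ)) {
            MappedByteBuffer buf = ch.map(FileChannel.MapMode.READ_ONLY, 0, ch.size());
            // ...use buf here, then let go of it before the move below...
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
        try {
            return Files.move(src, dst, StandardCopyOption.REPLACE_EXISTING);
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) throws IOException {
        Path src = Files.createTempFile("segment", ".log");
        Files.write(src, "segment data".getBytes());
        Path dst = src.resolveSibling(src.getFileName() + ".deleted");
        closeThenRename(src, dst);
        System.out.println(Files.exists(dst) && !Files.exists(src)); // true
    }
}
```

On Windows the mapping must additionally be released (e.g. via the forceUnmap helper mentioned above) before the move succeeds; closing the channel alone is not enough while the buffer is still reachable.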





[jira] [Commented] (KAFKA-1683) Implement a session concept in the socket server

2015-08-20 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1683?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14706097#comment-14706097
 ] 

ASF GitHub Bot commented on KAFKA-1683:
---

GitHub user gwenshap opened a pull request:

https://github.com/apache/kafka/pull/155

KAFKA-1683: persisting session information in Requests



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gwenshap/kafka KAFKA-1683

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/155.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #155


commit d97449d3626c1a209eefa2eb01e011ecdec4f147
Author: Gwen Shapira csh...@gmail.com
Date:   2015-08-21T01:46:49Z

persisting session information in Requests




 Implement a session concept in the socket server
 --

 Key: KAFKA-1683
 URL: https://issues.apache.org/jira/browse/KAFKA-1683
 Project: Kafka
  Issue Type: Sub-task
  Components: security
Affects Versions: 0.9.0
Reporter: Jay Kreps
Assignee: Gwen Shapira
 Fix For: 0.8.3

 Attachments: KAFKA-1683.patch, KAFKA-1683.patch


 To implement authentication we need a way to keep track of some things 
 between requests. The initial use for this would be remembering the 
 authenticated user/principle info, but likely more uses would come up (for 
 example we will also need to remember whether and which encryption or 
 integrity measures are in place on the socket so we can wrap and unwrap 
 writes and reads).
 I was thinking we could just add a Session object that might have a user 
 field. The session object would need to get added to RequestChannel.Request 
 so it is passed down to the API layer with each request.
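The Session object described above can be sketched as a plain value carried on each request. This is a hypothetical illustration of the proposal, not Kafka's actual classes; the field names (principal, security protocol) are the uses the ticket mentions.

```java
public class SessionSketch {
    // A per-connection session: remembers who authenticated and which
    // encryption/integrity measures are in place on the socket.
    static final class Session {
        final String principal;          // authenticated user/principal
        final String securityProtocol;   // e.g. PLAINTEXT, SSL
        Session(String principal, String securityProtocol) {
            this.principal = principal;
            this.securityProtocol = securityProtocol;
        }
    }

    // The session rides along with each request so the API layer can see it.
    static final class Request {
        final Session session;
        final byte[] payload;
        Request(Session session, byte[] payload) {
            this.session = session;
            this.payload = payload;
        }
    }

    public static void main(String[] args) {
        Session s = new Session("alice", "SSL");
        Request r = new Request(s, new byte[0]);
        System.out.println(r.session.principal); // alice
    }
}
```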





[jira] [Created] (KAFKA-2456) Disable SSLv3 for ssl.enabled.protocols config on client and broker side

2015-08-20 Thread Sriharsha Chintalapani (JIRA)
Sriharsha Chintalapani created KAFKA-2456:
-

 Summary: Disable SSLv3 for ssl.enabled.protocols config on client and 
broker side
 Key: KAFKA-2456
 URL: https://issues.apache.org/jira/browse/KAFKA-2456
 Project: Kafka
  Issue Type: Bug
Reporter: Sriharsha Chintalapani
Assignee: Sriharsha Chintalapani
 Fix For: 0.8.3


This is a follow-up on KAFKA-1690. Currently users have the option to pass in 
SSLv3; we should not allow this, as it is deprecated.
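A minimal sketch of the intended validation, assuming the change simply drops deprecated protocol names from the user-supplied enabled-protocols list (the method name and error message are illustrative, not the actual SSL config code):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class ProtocolValidator {
    // Drop deprecated SSL protocols from a user-supplied list and fail fast
    // if nothing usable remains.
    static List<String> sanitize(List<String> requested) {
        List<String> allowed = new ArrayList<>();
        for (String p : requested) {
            if (p.equalsIgnoreCase("SSLv3") || p.equalsIgnoreCase("SSLv2")) {
                continue; // deprecated, never enable
            }
            allowed.add(p);
        }
        if (allowed.isEmpty()) {
            throw new IllegalArgumentException("no usable protocols in " + requested);
        }
        return allowed;
    }

    public static void main(String[] args) {
        System.out.println(sanitize(Arrays.asList("SSLv3", "TLSv1.2"))); // [TLSv1.2]
    }
}
```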





[jira] [Updated] (KAFKA-1792) change behavior of --generate to produce assignment config with fair replica distribution and minimal number of reassignments

2015-08-20 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1792?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-1792:

Fix Version/s: (was: 0.8.3)

 change behavior of --generate to produce assignment config with fair replica 
 distribution and minimal number of reassignments
 -

 Key: KAFKA-1792
 URL: https://issues.apache.org/jira/browse/KAFKA-1792
 Project: Kafka
  Issue Type: Sub-task
  Components: tools
Reporter: Dmitry Pekar
Assignee: Dmitry Pekar
 Attachments: KAFKA-1792.patch, KAFKA-1792_2014-12-03_19:24:56.patch, 
 KAFKA-1792_2014-12-08_13:42:43.patch, KAFKA-1792_2014-12-19_16:48:12.patch, 
 KAFKA-1792_2015-01-14_12:54:52.patch, KAFKA-1792_2015-01-27_19:09:27.patch, 
 KAFKA-1792_2015-02-13_21:07:06.patch, KAFKA-1792_2015-02-26_16:58:23.patch, 
 generate_alg_tests.txt, rebalance_use_cases.txt


 The current implementation produces a fair replica distribution across the 
 specified list of brokers. Unfortunately, it doesn't take the current replica 
 assignment into account.
 So if we have, for instance, 3 brokers id=[0..2] and are going to add fourth 
 broker id=3, 
 generate will create an assignment config which will redistribute replicas 
 fairly across brokers [0..3] 
 in the same way as those partitions were created from scratch. It will not 
 take into consideration current replica 
 assignment and accordingly will not try to minimize number of replica moves 
 between brokers.
 As proposed by [~charmalloc] this should be improved. The new output of the 
 improved --generate algorithm should satisfy the following requirements:
 - fairness of replica distribution - every broker will have R or R+1 replicas 
 assigned;
 - minimum of reassignments - number of replica moves between brokers will be 
 minimal;
 Example.
 Consider following replica distribution per brokers [0..3] (we just added 
 brokers 2 and 3):
 - broker - 0, 1, 2, 3 
 - replicas - 7, 6, 0, 0
 The new algorithm will produce following assignment:
 - broker - 0, 1, 2, 3 
 - replicas - 4, 3, 3, 3
 - moves - -3, -3, +3, +3
 It will be fair and number of moves will be 6, which is minimal for specified 
 initial distribution.
 The scope of this issue is:
 - design an algorithm matching the above requirements;
 - implement this algorithm and unit tests;
 - test it manually using different initial assignments;
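The fairness and minimal-move invariants above can be sketched on replica counts alone (which partitions actually move is a separate problem, so this is only an illustration of the target computation, not the proposed algorithm): give R+1 replicas to the brokers that already hold the most, and R to the rest, so the number of moves is minimal.

```java
import java.util.Arrays;

public class FairAssignment {
    // Target counts: every broker ends with R or R+1 replicas; the +1 goes to
    // the most-loaded brokers so as few replicas as possible have to move.
    static int[] targets(int[] current) {
        int n = current.length, total = 0;
        for (int c : current) total += c;
        int base = total / n, extra = total % n;
        Integer[] order = new Integer[n];
        for (int i = 0; i < n; i++) order[i] = i;
        Arrays.sort(order, (a, b) -> current[b] - current[a]); // most-loaded first
        int[] t = new int[n];
        for (int i = 0; i < n; i++) t[order[i]] = base + (i < extra ? 1 : 0);
        return t;
    }

    // Moves = replicas that must leave an over-target broker.
    static int moves(int[] current, int[] target) {
        int m = 0;
        for (int i = 0; i < current.length; i++) {
            if (current[i] > target[i]) m += current[i] - target[i];
        }
        return m;
    }

    public static void main(String[] args) {
        int[] current = {7, 6, 0, 0};      // just added brokers 2 and 3
        int[] target = targets(current);
        System.out.println(Arrays.toString(target) + " moves=" + moves(current, target));
        // [4, 3, 3, 3] moves=6 -- matching the example in the ticket
    }
}
```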





[jira] [Updated] (KAFKA-1802) Add a new type of request for the discovery of the controller

2015-08-20 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1802?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-1802:

Fix Version/s: (was: 0.8.3)

 Add a new type of request for the discovery of the controller
 -

 Key: KAFKA-1802
 URL: https://issues.apache.org/jira/browse/KAFKA-1802
 Project: Kafka
  Issue Type: Sub-task
Reporter: Joe Stein
Assignee: Andrii Biletskyi

 The goal here is similar to metadata discovery for the producer: the CLI needs 
 to find which broker it should send the rest of its admin requests to. Any 
 broker can respond to this specific AdminMeta RQ/RP, but only the controller 
 broker should respond to admin messages; any other broker should respond to an 
 admin message with a response indicating which broker the controller is.





[jira] [Updated] (KAFKA-1842) New producer/consumer should support configurable connection timeouts

2015-08-20 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1842?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-1842:

Fix Version/s: (was: 0.8.3)

 New producer/consumer should support configurable connection timeouts
 -

 Key: KAFKA-1842
 URL: https://issues.apache.org/jira/browse/KAFKA-1842
 Project: Kafka
  Issue Type: Bug
  Components: clients, config
Affects Versions: 0.8.2.0
Reporter: Ewen Cheslack-Postava

 During discussion of KAFKA-1642 it became clear that the current connection 
 handling code for the new clients doesn't give enough flexibility in some 
 failure cases. We need to support connection timeouts that are configurable 
 via Kafka configs rather than relying on the underlying TCP stack's default 
 settings. This would give the user control over how aggressively they want to 
 try new servers when trying to fetch metadata (currently dependent on the 
 underlying OS timeouts and some implementation details of 
 NetworkClient.maybeUpdateMetadata and NetworkClient.leastLoadedNode), which 
 is the specific issue that came up in KAFKA-1642. More generally it gives 
 better control over how fast the user sees failures when there are network 
 failures.





[jira] [Updated] (KAFKA-1817) AdminUtils.createTopic vs kafka-topics.sh --create with partitions

2015-08-20 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1817?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-1817:

Fix Version/s: (was: 0.8.3)

 AdminUtils.createTopic vs kafka-topics.sh --create with partitions
 --

 Key: KAFKA-1817
 URL: https://issues.apache.org/jira/browse/KAFKA-1817
 Project: Kafka
  Issue Type: Sub-task
Affects Versions: 0.8.2.0
 Environment: debian linux current version  up to date
Reporter: Jason Kania

 When topics are created using AdminUtils.createTopic in code, no partitions 
 folder is created. The zookeeper shell shows this:
 ls /brokers/topics/foshizzle
 []
 However, when kafka-topics.sh --create is run, the partitions folder is 
 created:
 ls /brokers/topics/foshizzle
 [partitions]
 The unfortunately useless error message KeeperErrorCode = NoNode for 
 /brokers/topics/periodicReading/partitions makes it unclear what to do. When 
 the topics are listed via kafka-topics.sh, they appear to have been created 
 fine. It would be good if the exception were wrapped by Kafka to suggest 
 looking in the zookeeper shell, so a person didn't have to dig around to 
 understand the meaning of this path...
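The exception-wrapping the reporter asks for could look roughly like the following. This is a hypothetical stand-alone sketch: the exception type is a stand-in for ZooKeeper's NoNodeException, the missing znode is simulated, and the wording is illustrative.

```java
public class ZkErrorContext {
    // Stand-in for org.apache.zookeeper.KeeperException.NoNodeException.
    static class NoNodeException extends RuntimeException {
        NoNodeException(String path) { super("KeeperErrorCode = NoNode for " + path); }
    }

    // Wrap the low-level error with actionable context instead of letting the
    // bare KeeperErrorCode message surface to the user.
    static int fetchPartitionCount(String topic) {
        String path = "/brokers/topics/" + topic + "/partitions";
        try {
            throw new NoNodeException(path); // simulate the missing znode
        } catch (NoNodeException e) {
            throw new IllegalStateException(
                "Topic '" + topic + "' exists but has no '" + path + "' znode; "
                + "it may have been created without partition metadata. "
                + "Inspect it with the zookeeper shell: ls " + path, e);
        }
    }

    public static void main(String[] args) {
        try {
            fetchPartitionCount("periodicReading");
        } catch (IllegalStateException e) {
            System.out.println(e.getMessage());
        }
    }
}
```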





[jira] [Updated] (KAFKA-1796) Sanity check partition command line tools

2015-08-20 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1796?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-1796:

Fix Version/s: (was: 0.8.3)

 Sanity check partition command line tools
 -

 Key: KAFKA-1796
 URL: https://issues.apache.org/jira/browse/KAFKA-1796
 Project: Kafka
  Issue Type: Bug
Reporter: Guozhang Wang
Assignee: Mayuresh Gharat
  Labels: newbie

 We need to sanity-check that the input json has valid values before triggering 
 the admin process. For example, we have seen a scenario where the json input 
 for the partition reassignment tool had partition replica info as {broker-1, 
 broker-1, broker-2}, and it was still accepted into ZK and eventually led to an 
 under-replicated count, etc. This is partially because we use a Map rather 
 than a Set when reading the json input for this case; but in general we need to 
 make sure input parameters like the Json are valid before writing them 
 to ZK.
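The duplicate-replica case above can be caught with a simple check before anything is written to ZK. A sketch of the idea (names are illustrative): a Set surfaces duplicates that a Map keyed by broker would silently collapse.

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class ReassignmentSanity {
    // Reject a replica list like [1, 1, 2] before it reaches ZK.
    static void validateReplicas(String topic, int partition, List<Integer> replicas) {
        if (replicas.isEmpty()) {
            throw new IllegalArgumentException(
                "Empty replica list for " + topic + "-" + partition);
        }
        Set<Integer> unique = new HashSet<>(replicas);
        if (unique.size() != replicas.size()) {
            throw new IllegalArgumentException(
                "Duplicate broker ids in replica list " + replicas
                + " for " + topic + "-" + partition);
        }
    }

    public static void main(String[] args) {
        validateReplicas("topic1", 0, Arrays.asList(1, 2, 3)); // ok
        try {
            validateReplicas("topic1", 1, Arrays.asList(1, 1, 2));
        } catch (IllegalArgumentException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```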





[jira] [Updated] (KAFKA-1772) Add an Admin message type for request response

2015-08-20 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1772?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-1772:

Fix Version/s: (was: 0.8.3)

 Add an Admin message type for request response
 --

 Key: KAFKA-1772
 URL: https://issues.apache.org/jira/browse/KAFKA-1772
 Project: Kafka
  Issue Type: Sub-task
Reporter: Joe Stein
Assignee: Andrii Biletskyi

 - utility int8
 - command int8
 - format int8
 - args variable length bytes
 utility 
 0 - Broker
 1 - Topic
 2 - Replication
 3 - Controller
 4 - Consumer
 5 - Producer
 Command
 0 - Create
 1 - Alter
 3 - Delete
 4 - List
 5 - Audit
 format
 0 - JSON
 args e.g. (which would equate to the data structure values == 2,1,0)
 meta-store: {
 {"zookeeper": "localhost:12913/kafka"}
 }
 args: {
  "partitions":
   [
 {"topic": "topic1", "partition": 0},
 {"topic": "topic1", "partition": 1},
 {"topic": "topic1", "partition": 2},
 {"topic": "topic2", "partition": 0},
 {"topic": "topic2", "partition": 1}
   ]
 }
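The proposed header layout (utility int8, command int8, format int8, variable-length args) can be sketched with a ByteBuffer. This is a hedged illustration of the proposal only; the int32 length prefix for args is an assumption the ticket leaves unspecified, not a decided wire format.

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

public class AdminMessageCodec {
    // Header: utility int8, command int8, format int8, then args prefixed by
    // an assumed int32 length.
    static ByteBuffer encode(byte utility, byte command, byte format, byte[] args) {
        ByteBuffer buf = ByteBuffer.allocate(3 + 4 + args.length);
        buf.put(utility).put(command).put(format);
        buf.putInt(args.length);
        buf.put(args);
        buf.flip();
        return buf;
    }

    public static void main(String[] args) {
        // utility=2 (Replication), command=1 (Alter), format=0 (JSON)
        byte[] json = "{\"partitions\":[{\"topic\":\"topic1\",\"partition\":0}]}"
                .getBytes(StandardCharsets.UTF_8);
        ByteBuffer buf = encode((byte) 2, (byte) 1, (byte) 0, json);
        System.out.println(buf.remaining() == 3 + 4 + json.length); // true
    }
}
```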





[jira] [Updated] (KAFKA-1775) Re-factor TopicCommand into the handleAdminMessage call

2015-08-20 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1775?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-1775:

Fix Version/s: (was: 0.8.3)

 Re-factor TopicCommand into the handleAdminMessage call 
 -

 Key: KAFKA-1775
 URL: https://issues.apache.org/jira/browse/KAFKA-1775
 Project: Kafka
  Issue Type: Sub-task
Reporter: Joe Stein
Assignee: Andrii Biletskyi

 kafka-topic.sh should become
 kafka --topic --everything else the same from the CLI perspective, so we need 
 to have the calls from the byte layer fed into that same code (with as few 
 changes as possible), called from the handleAdmin call after deducing which 
 Utility[1] it is operating for.
 I think we should not remove the existing kafka-topic.sh, and should preserve 
 the existing functionality (with as little code duplication as possible) until 
 0.9 (where we can remove it after folks have used the new tool for a release or 
 two and given feedback)[2]
 [1] https://issues.apache.org/jira/browse/KAFKA-1772
 [2] https://issues.apache.org/jira/browse/KAFKA-1776





[jira] [Updated] (KAFKA-1786) implement a global configuration feature for brokers

2015-08-20 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1786?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-1786:

Fix Version/s: (was: 0.8.3)

 implement a global configuration feature for brokers
 

 Key: KAFKA-1786
 URL: https://issues.apache.org/jira/browse/KAFKA-1786
 Project: Kafka
  Issue Type: Sub-task
Reporter: Joe Stein
Assignee: Andrii Biletskyi
 Attachments: KAFKA_1786.patch


 Global level configurations (much like topic level ones) for brokers are 
 managed by humans and automation systems through server.properties.
 Some configurations make sense to use as defaults (like it is now) or to 
 override from a central location (zookeeper for now). We can modify these 
 through the new CLI tool so that every broker can have the exact same setting. 
 Some configurations we should allow to be overridden from server.properties 
 (like port), but for others we should use the global store as the source of 
 truth (e.g. auto topic enable, fetch replica message size, etc). Since most 
 configurations, I believe, are going to fall into this category, we should keep 
 in the code a list of the server.properties settings that can override the 
 global config, which we can manage... for everything else the global takes 
 precedence.
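The precedence rule described here can be sketched as a merge with an override allow-list. This is an illustrative sketch under stated assumptions: the key names are made up, and the real implementation would read the global values from zookeeper rather than a Map.

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class GlobalConfigMerge {
    // Keys that server.properties may override; for everything else the
    // global (zookeeper-backed) value is the source of truth. Illustrative.
    static final Set<String> LOCAL_OVERRIDABLE =
            new HashSet<>(Arrays.asList("port", "broker.id", "log.dirs"));

    static Map<String, String> effectiveConfig(Map<String, String> global,
                                               Map<String, String> local) {
        Map<String, String> merged = new HashMap<>(global);
        for (Map.Entry<String, String> e : local.entrySet()) {
            if (LOCAL_OVERRIDABLE.contains(e.getKey()) || !global.containsKey(e.getKey())) {
                merged.put(e.getKey(), e.getValue());
            }
            // otherwise keep the global value: the central store wins
        }
        return merged;
    }

    public static void main(String[] args) {
        Map<String, String> global = new HashMap<>();
        global.put("auto.create.topics.enable", "false");
        global.put("port", "9092");
        Map<String, String> local = new HashMap<>();
        local.put("auto.create.topics.enable", "true"); // ignored: not overridable
        local.put("port", "9093");                      // honored: in allow-list
        System.out.println(effectiveConfig(global, local));
    }
}
```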





[jira] [Updated] (KAFKA-1774) REPL and Shell Client for Admin Message RQ/RP

2015-08-20 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1774?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-1774:

Fix Version/s: (was: 0.8.3)

 REPL and Shell Client for Admin Message RQ/RP
 -

 Key: KAFKA-1774
 URL: https://issues.apache.org/jira/browse/KAFKA-1774
 Project: Kafka
  Issue Type: Sub-task
Reporter: Joe Stein
Assignee: Andrii Biletskyi

 We should have a REPL we can work in and execute the commands with their 
 arguments. With this we can do:
 ./kafka.sh --shell 
 kafka> attach cluster -b localhost:9092;
 kafka> describe topic sampleTopicNameForExample;
 The command line version can work like it does now, so folks don't have to 
 re-write all of their tooling:
 kafka.sh --topics --everything the same as kafka-topics.sh is 
 kafka.sh --reassign --everything the same as kafka-reassign-partitions.sh 
 is 





[jira] [Updated] (KAFKA-1777) Re-factor reassign-partitions into CLI

2015-08-20 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1777?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-1777:

Fix Version/s: (was: 0.8.3)

 Re-factor reassign-partitions into CLI
 -

 Key: KAFKA-1777
 URL: https://issues.apache.org/jira/browse/KAFKA-1777
 Project: Kafka
  Issue Type: Sub-task
Reporter: Joe Stein
Assignee: Andrii Biletskyi







[jira] [Updated] (KAFKA-1778) Create new re-elect controller admin function

2015-08-20 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1778?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-1778:

Fix Version/s: (was: 0.8.3)

 Create new re-elect controller admin function
 -

 Key: KAFKA-1778
 URL: https://issues.apache.org/jira/browse/KAFKA-1778
 Project: Kafka
  Issue Type: Sub-task
Reporter: Joe Stein
Assignee: Abhishek Nigam

 kafka --controller --elect





[jira] [Created] (KAFKA-2454) Dead lock between delete log segment and shutting down.

2015-08-20 Thread Jiangjie Qin (JIRA)
Jiangjie Qin created KAFKA-2454:
---

 Summary: Dead lock between delete log segment and shutting down.
 Key: KAFKA-2454
 URL: https://issues.apache.org/jira/browse/KAFKA-2454
 Project: Kafka
  Issue Type: Bug
Reporter: Jiangjie Qin
Assignee: Jiangjie Qin


When the broker shuts down, it shuts down the scheduler, which grabs the 
scheduler lock and then waits for all the threads in the scheduler to shut down.

The deadlock happens when a scheduled task tries to delete an old log segment: 
it schedules a log delete task, which also needs to acquire the scheduler lock. 
In this case the shutdown thread holds the scheduler lock and waits for the log 
deletion thread to finish, but the log deletion thread blocks waiting for the 
scheduler lock.

Related stack trace:
{noformat}
Thread-1 #21 prio=5 os_prio=0 tid=0x7fe7601a7000 nid=0x1a4e waiting on 
condition [0x7fe7cf698000]
   java.lang.Thread.State: TIMED_WAITING (parking)
at sun.misc.Unsafe.park(Native Method)
- parking to wait for  0x000640d53540 (a 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
at 
java.util.concurrent.ThreadPoolExecutor.awaitTermination(ThreadPoolExecutor.java:1465)
at kafka.utils.KafkaScheduler.shutdown(KafkaScheduler.scala:94)
- locked 0x000640b6d480 (a kafka.utils.KafkaScheduler)
at 
kafka.server.KafkaServer$$anonfun$shutdown$4.apply$mcV$sp(KafkaServer.scala:352)
at kafka.utils.CoreUtils$.swallow(CoreUtils.scala:79)
at kafka.utils.Logging$class.swallowWarn(Logging.scala:92)
at kafka.utils.CoreUtils$.swallowWarn(CoreUtils.scala:51)
at kafka.utils.Logging$class.swallow(Logging.scala:94)
at kafka.utils.CoreUtils$.swallow(CoreUtils.scala:51)
at kafka.server.KafkaServer.shutdown(KafkaServer.scala:352)
at 
kafka.server.KafkaServerStartable.shutdown(KafkaServerStartable.scala:42)
at com.linkedin.kafka.KafkaServer.notifyShutdown(KafkaServer.java:99)
at 
com.linkedin.util.factory.lifecycle.LifeCycleMgr.notifyShutdownListener(LifeCycleMgr.java:123)
at 
com.linkedin.util.factory.lifecycle.LifeCycleMgr.notifyListeners(LifeCycleMgr.java:102)
at 
com.linkedin.util.factory.lifecycle.LifeCycleMgr.notifyStop(LifeCycleMgr.java:82)
- locked 0x000640b77bb0 (a java.util.ArrayDeque)
at com.linkedin.util.factory.Generator.stop(Generator.java:177)
- locked 0x000640b77bc8 (a java.lang.Object)
at 
com.linkedin.offspring.servlet.OffspringServletRuntime.destroy(OffspringServletRuntime.java:82)
at 
com.linkedin.offspring.servlet.OffspringServletContextListener.contextDestroyed(OffspringServletContextListener.java:51)
at 
org.eclipse.jetty.server.handler.ContextHandler.doStop(ContextHandler.java:813)
at 
org.eclipse.jetty.servlet.ServletContextHandler.doStop(ServletContextHandler.java:160)
at org.eclipse.jetty.webapp.WebAppContext.doStop(WebAppContext.java:516)
at com.linkedin.emweb.WebappContext.doStop(WebappContext.java:35)
at 
org.eclipse.jetty.util.component.AbstractLifeCycle.stop(AbstractLifeCycle.java:89)
- locked 0x0006400018b8 (a java.lang.Object)
at 
com.linkedin.emweb.ContextBasedHandlerImpl.doStop(ContextBasedHandlerImpl.java:112)
at 
org.eclipse.jetty.util.component.AbstractLifeCycle.stop(AbstractLifeCycle.java:89)
- locked 0x000640001900 (a java.lang.Object)
at 
com.linkedin.emweb.WebappDeployerImpl.stop(WebappDeployerImpl.java:349)
at 
com.linkedin.emweb.WebappDeployerImpl.doStop(WebappDeployerImpl.java:414)
- locked 0x0006400019c0 (a com.linkedin.emweb.MapBasedHandlerImpl)
at 
org.eclipse.jetty.util.component.AbstractLifeCycle.stop(AbstractLifeCycle.java:89)
- locked 0x0006404ee8e8 (a java.lang.Object)
at 
org.eclipse.jetty.util.component.AggregateLifeCycle.doStop(AggregateLifeCycle.java:107)
at 
org.eclipse.jetty.server.handler.AbstractHandler.doStop(AbstractHandler.java:69)
at 
org.eclipse.jetty.server.handler.HandlerWrapper.doStop(HandlerWrapper.java:108)
at org.eclipse.jetty.server.Server.doStop(Server.java:338)
at 
org.eclipse.jetty.util.component.AbstractLifeCycle.stop(AbstractLifeCycle.java:89)
- locked 0x000640476ab0 (a java.lang.Object)
at com.linkedin.emweb.BaseRunner.destroy(BaseRunner.java:162)
at 
com.linkedin.spring.core.TerminationHandler.destroy(TerminationHandler.java:151)
at 
com.linkedin.spring.core.TerminationHandler.runTermination(TerminationHandler.java:113)
at 
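A minimal sketch of the fix pattern for this kind of deadlock (this is not KafkaScheduler's code; class and method names are illustrative): do the hand-off under the lock, but call shutdown()/awaitTermination() outside it, so an in-flight task that needs the same lock to schedule the async segment delete can still acquire it and finish.

```java
import java.util.concurrent.ScheduledThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class SchedulerShutdownFix {
    private final Object lock = new Object();
    private ScheduledThreadPoolExecutor executor = new ScheduledThreadPoolExecutor(1);

    void schedule(Runnable task) {
        synchronized (lock) {
            if (executor != null) executor.submit(task); // no-op after shutdown
        }
    }

    boolean shutdown() throws InterruptedException {
        ScheduledThreadPoolExecutor e;
        synchronized (lock) {              // only the hand-off holds the lock
            e = executor;
            executor = null;
        }
        if (e == null) return true;
        e.shutdown();                      // lock released: running tasks may
        return e.awaitTermination(30, TimeUnit.SECONDS); // still call schedule()
    }

    public static void main(String[] args) throws InterruptedException {
        SchedulerShutdownFix s = new SchedulerShutdownFix();
        // A task that re-enters the scheduler, like the log-deletion task does.
        s.schedule(() -> s.schedule(() -> { }));
        System.out.println(s.shutdown()); // true: terminates instead of deadlocking
    }
}
```

Had shutdown() kept the lock across awaitTermination, the re-entrant schedule() call would block on the lock while the shutdown thread waited for it to finish, reproducing the stack trace above.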

Re: KAFKA-2364 migrate docs from SVN to git

2015-08-20 Thread Manikumar Reddy
  Also, can we migrate the svn repo to a git repo? This will help us make
occasional doc changes/bug fixes through github PRs.

On Thu, Aug 20, 2015 at 4:04 AM, Guozhang Wang wangg...@gmail.com wrote:

 Gwen: I remembered it wrong. We would not need another round of voting.

 On Wed, Aug 19, 2015 at 3:06 PM, Gwen Shapira g...@confluent.io wrote:

   Looking back at this thread, the +1s mention the same repo, so I'm not sure a
   new vote is required.
 
  On Wed, Aug 19, 2015 at 3:00 PM, Guozhang Wang wangg...@gmail.com
 wrote:
 
   So I think we have two different approaches here. The original proposal
   from Aseem is to move website from SVN to a separate Git repo, and
 hence
   have separate commits on code / doc changes. For that we have
 accumulated
    enough binding +1s to move on; Gwen's proposal is to move website into
  the
   same repo under a different folder. If people feel they prefer this
 over
   the previous approach I would like to call for another round of voting.
  
   Guozhang
  
   On Wed, Aug 19, 2015 at 10:24 AM, Ashish paliwalash...@gmail.com
  wrote:
  
+1 to what Gwen has suggested. This is what we follow in Flume.
   
All the latest doc changes are in git, once ready you move changes to
svn to update website.
The only catch is, when you need to update specific changes to
 website
outside release cycle, need to be a bit careful :)
   
On Wed, Aug 19, 2015 at 9:06 AM, Gwen Shapira g...@confluent.io
  wrote:
 Yeah, so the way this works in few other projects I worked on is:

 * The code repo has a /docs directory with the latest revision of
 the
docs
 (not multiple versions, just one that matches the latest state of
  code)
 * When you submit a patch that requires doc modification, you
 modify
   all
 relevant files in same patch and they get reviewed and committed
   together
 (ideally)
 * When we release, we copy the docs matching the release and commit
  to
SVN
 website. We also do this occasionally to fix bugs in earlier docs.
 * Release artifacts include a copy of the docs

 Nice to have:
 * Docs are in Asciidoc and build generates the HTML. Asciidoc is
  easier
to
 edit and review.

 I suggest a similar process for Kafka.

 On Wed, Aug 19, 2015 at 8:53 AM, Ismael Juma ism...@juma.me.uk
   wrote:

 I should clarify: it's not possible unless we add an additional
 step
that
 moves the docs from the code repo to the website repo.

 Ismael

 On Wed, Aug 19, 2015 at 4:42 PM, Ismael Juma ism...@juma.me.uk
   wrote:

  Hi all,
 
  It looks like it's not feasible to update the code and website
 in
   the
 same
  commit given existing limitations of the Apache infra:
 
 
 

   
  
 
 https://issues.apache.org/jira/browse/INFRA-10143?focusedCommentId=14703175page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14703175
 
  Best,
  Ismael
 
  On Wed, Aug 12, 2015 at 10:00 AM, Ismael Juma 
 ism...@juma.me.uk
wrote:
 
  Hi Gwen,
 
  I filed KAFKA-2425 as KAFKA-2364 is about improving the website
  documentation. Aseem Bansal seemed interested in helping us
 with
   the
 move
  so I pinged him in the issue.
 
  Best,
  Ismael
 
  On Wed, Aug 12, 2015 at 1:51 AM, Gwen Shapira 
 g...@confluent.io
  
 wrote:
 
  Ah, there is already a JIRA in the title. Never mind :)
 
  On Tue, Aug 11, 2015 at 5:51 PM, Gwen Shapira 
  g...@confluent.io
 wrote:
 
   The vote opened 5 days ago. I believe we can conclude with 3
binding
  +1, 3
   non-binding +1 and no -1.
  
   Ismael, are you opening and JIRA and migrating? Or are we
   looking
 for a
   volunteer?
  
   On Tue, Aug 11, 2015 at 5:46 PM, Ashish Singh 
asi...@cloudera.com
  wrote:
  
   +1 on same repo.
  
   On Tue, Aug 11, 2015 at 12:21 PM, Edward Ribeiro 
   edward.ribe...@gmail.com
   wrote:
  
+1. As soon as possible, please. :)
   
On Sat, Aug 8, 2015 at 4:05 PM, Neha Narkhede 
n...@confluent.io
   wrote:
   
 +1 on the same repo for code and website. It helps to
  keep
both
 in
   sync.

 On Thu, Aug 6, 2015 at 1:52 PM, Grant Henke 
 ghe...@cloudera.com
   wrote:

  +1 for the same repo. The closer docs can be to code
  the
more
   accurate
 they
  are likely to be. The same way we encourage unit
 tests
   for
a
 new
  feature/patch. Updating the docs can be the same.
 
  If we follow Sqoop's process for example, how would
  small
  fixes/adjustments/additions to the live documentation
   occur
  without
   a
new
  release?
 
  On Thu, Aug 6, 2015 at 3:33 PM, Guozhang Wang 
  

[jira] [Commented] (KAFKA-873) Consider replacing zkclient with curator (with zkclient-bridge)

2015-08-20 Thread Edward Ribeiro (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14705442#comment-14705442
 ] 

Edward Ribeiro commented on KAFKA-873:
--

[~granthenke], [~fpj], [~ijuma] Please count on me to help with whatever you 
need to make this port happen. :) I am working on a ticket that could be 
simplified a lot just by using Curator.

 Consider replacing zkclient with curator (with zkclient-bridge)
 ---

 Key: KAFKA-873
 URL: https://issues.apache.org/jira/browse/KAFKA-873
 Project: Kafka
  Issue Type: Improvement
Affects Versions: 0.8.0
Reporter: Scott Clasen
Assignee: Grant Henke

 If zkclient was replaced with curator and curator-x-zkclient-bridge it would 
 be initially a drop-in replacement
 https://github.com/Netflix/curator/wiki/ZKClient-Bridge
 With the addition of a few more props to ZkConfig, and a bit of code this 
 would open up the possibility of using ACLs in zookeeper (which aren't 
 supported directly by zkclient), as well as integrating with netflix 
 exhibitor for those of us using that.
 Looks like KafkaZookeeperClient needs some love anyhow...





[jira] [Commented] (KAFKA-2454) Dead lock between delete log segment and shutting down.

2015-08-20 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2454?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14705491#comment-14705491
 ] 

ASF GitHub Bot commented on KAFKA-2454:
---

GitHub user becketqin opened a pull request:

https://github.com/apache/kafka/pull/153

KAFKA-2454: Deadlock between log segment deletion and server shutdown.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/becketqin/kafka KAFKA-2454

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/153.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #153


commit 27c6a6e70f969ee13d857213d5a0aa3e40b1c19f
Author: Jiangjie Qin becket@gmail.com
Date:   2015-08-20T18:21:32Z

KAFKA-2454: Deadlock between log segment deletion and server shutdown.




 Dead lock between delete log segment and shutting down.
 ---

 Key: KAFKA-2454
 URL: https://issues.apache.org/jira/browse/KAFKA-2454
 Project: Kafka
  Issue Type: Bug
Reporter: Jiangjie Qin
Assignee: Jiangjie Qin

 When the broker shutdown, it will shutdown scheduler which grabs the 
 scheduler lock then wait for all the threads in scheduler to shutdown.
 The dead lock will happen when the scheduled task try to delete old log 
 segment, it will schedule a log delete task which also needs to acquire the 
 scheduler lock. In this case the shutdown thread will hold scheduler lock and 
 wait for the the log deletion thread to finish, but the log deletion thread 
 will block on waiting for the scheduler lock.
 Related stack trace:
 {noformat}
 Thread-1 #21 prio=5 os_prio=0 tid=0x7fe7601a7000 nid=0x1a4e waiting on 
 condition [0x7fe7cf698000]
java.lang.Thread.State: TIMED_WAITING (parking)
 at sun.misc.Unsafe.park(Native Method)
 - parking to wait for  0x000640d53540 (a 
 java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
 at 
 java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
 at 
 java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
 at 
 java.util.concurrent.ThreadPoolExecutor.awaitTermination(ThreadPoolExecutor.java:1465)
 at kafka.utils.KafkaScheduler.shutdown(KafkaScheduler.scala:94)
 - locked 0x000640b6d480 (a kafka.utils.KafkaScheduler)
 at 
 kafka.server.KafkaServer$$anonfun$shutdown$4.apply$mcV$sp(KafkaServer.scala:352)
 at kafka.utils.CoreUtils$.swallow(CoreUtils.scala:79)
 at kafka.utils.Logging$class.swallowWarn(Logging.scala:92)
 at kafka.utils.CoreUtils$.swallowWarn(CoreUtils.scala:51)
 at kafka.utils.Logging$class.swallow(Logging.scala:94)
 at kafka.utils.CoreUtils$.swallow(CoreUtils.scala:51)
 at kafka.server.KafkaServer.shutdown(KafkaServer.scala:352)
 at 
 kafka.server.KafkaServerStartable.shutdown(KafkaServerStartable.scala:42)
 at com.linkedin.kafka.KafkaServer.notifyShutdown(KafkaServer.java:99)
 at 
 com.linkedin.util.factory.lifecycle.LifeCycleMgr.notifyShutdownListener(LifeCycleMgr.java:123)
 at 
 com.linkedin.util.factory.lifecycle.LifeCycleMgr.notifyListeners(LifeCycleMgr.java:102)
 at 
 com.linkedin.util.factory.lifecycle.LifeCycleMgr.notifyStop(LifeCycleMgr.java:82)
 - locked 0x000640b77bb0 (a java.util.ArrayDeque)
 at com.linkedin.util.factory.Generator.stop(Generator.java:177)
 - locked 0x000640b77bc8 (a java.lang.Object)
 at 
 com.linkedin.offspring.servlet.OffspringServletRuntime.destroy(OffspringServletRuntime.java:82)
 at 
 com.linkedin.offspring.servlet.OffspringServletContextListener.contextDestroyed(OffspringServletContextListener.java:51)
 at 
 org.eclipse.jetty.server.handler.ContextHandler.doStop(ContextHandler.java:813)
 at 
 org.eclipse.jetty.servlet.ServletContextHandler.doStop(ServletContextHandler.java:160)
 at 
 org.eclipse.jetty.webapp.WebAppContext.doStop(WebAppContext.java:516)
 at com.linkedin.emweb.WebappContext.doStop(WebappContext.java:35)
 at 
 org.eclipse.jetty.util.component.AbstractLifeCycle.stop(AbstractLifeCycle.java:89)
 - locked 0x0006400018b8 (a java.lang.Object)
 at 
 com.linkedin.emweb.ContextBasedHandlerImpl.doStop(ContextBasedHandlerImpl.java:112)
 at 
 org.eclipse.jetty.util.component.AbstractLifeCycle.stop(AbstractLifeCycle.java:89)
 - locked 0x000640001900 (a java.lang.Object)
 at 
 com.linkedin.emweb.WebappDeployerImpl.stop(WebappDeployerImpl.java:349)
 at 
 
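The lock ordering above reduces to a small, self-contained pattern. The sketch below is a toy model (it is NOT Kafka's actual KafkaScheduler): shutdown() holds the scheduler's monitor while awaiting in-flight tasks, and any task that calls schedule() needs that same monitor, so neither side can make progress.

```java
import java.util.concurrent.ScheduledThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// Toy model of the deadlock pattern (NOT Kafka's actual KafkaScheduler).
// Both methods synchronize on the scheduler instance, so a task that calls
// schedule() while shutdown() is parked in awaitTermination() blocks forever,
// and shutdown() in turn never sees that task finish.
class ToyScheduler {
    private final ScheduledThreadPoolExecutor executor =
            new ScheduledThreadPoolExecutor(1);

    // Grabs the scheduler lock to submit work.
    synchronized void schedule(Runnable task) {
        executor.execute(task);
    }

    // Grabs the same lock, then waits for in-flight tasks to finish.
    synchronized void shutdown() {
        executor.shutdown();
        try {
            executor.awaitTermination(10, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
```

The happy path (no re-entrant scheduling during shutdown) works fine; the hang appears only when a running task calls schedule() while shutdown() is waiting, which is exactly the log-segment-deletion case described above.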

[GitHub] kafka pull request: KAFKA-2454: Deadlock between log segment delet...

2015-08-20 Thread becketqin
GitHub user becketqin opened a pull request:

https://github.com/apache/kafka/pull/153

KAFKA-2454: Deadlock between log segment deletion and server shutdown.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/becketqin/kafka KAFKA-2454

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/153.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #153


commit 27c6a6e70f969ee13d857213d5a0aa3e40b1c19f
Author: Jiangjie Qin becket@gmail.com
Date:   2015-08-20T18:21:32Z

KAFKA-2454: Deadlock between log segment deletion and server shutdown.




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-2210) KafkaAuthorizer: Add all public entities, config changes and changes to KafkaAPI and kafkaServer to allow pluggable authorizer implementation.

2015-08-20 Thread Parth Brahmbhatt (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2210?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14705537#comment-14705537
 ] 

Parth Brahmbhatt commented on KAFKA-2210:
-

[~ijuma] [~junrao] Thanks for the review, and sorry for the bad quality of the 
last patch. The code won't compile at this point, as I have removed the 
KAFKA-1683 changes from this patch and this patch depends on changes from 
KAFKA-1683. If you apply the old patch from KAFKA-1683 it should compile, and 
locally all the tests passed for me. 

[~ijuma] I had every intention of making the authorizer call at the top level in 
the switch case where we multiplex. I realized early on that it won't be 
possible, given that we don't have a common TopicRequest method, and even though 
we have a top-level response class, some APIs return their extensions even for 
error cases. In addition, some of the methods need to verify topics, some topics 
and groups, and some topics and cluster, and sometimes there are side effects 
like topic creation. For the partition method, the best we might be able to do 
is create a method that would look awfully similar to the authorize method but 
would take care of not duplicating the boilerplate az.map().getOrElse(), which 
IMO is not vastly different nor does it make the code more readable.  

Let me know if you come up with something better and I will be happy to 
incorporate your suggestion, I will also try the same on my end.
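For readers unfamiliar with the `az.map(...).getOrElse(...)` idiom mentioned above: the authorizer is optional, so every call site has to supply a default ("allowed" when no authorizer is configured). A rough Java analogue of the kind of shared helper being discussed; the type, method, and resource names here are illustrative assumptions, not Kafka's actual API:

```java
import java.util.Optional;

// Illustrative only: Authorizer and the operation/resource strings stand in
// for the Scala types under review; they are not Kafka's actual API.
interface Authorizer {
    boolean authorize(String session, String operation, String resource);
}

class AuthUtils {
    // When no authorizer is configured, every request is allowed -- the
    // getOrElse(true) default discussed in the review. Otherwise delegate.
    static void authorizeClusterAction(Optional<Authorizer> authorizer,
                                       String session, String request) {
        boolean allowed = authorizer
                .map(a -> a.authorize(session, "ClusterAction", "Cluster"))
                .orElse(true);
        if (!allowed)
            throw new SecurityException("Request " + request + " is not authorized.");
    }
}
```

Each call site then shrinks to a single line such as `AuthUtils.authorizeClusterAction(authorizer, session, "LeaderAndIsrRequest")`.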

 KafkaAuthorizer: Add all public entities, config changes and changes to 
 KafkaAPI and kafkaServer to allow pluggable authorizer implementation.
 --

 Key: KAFKA-2210
 URL: https://issues.apache.org/jira/browse/KAFKA-2210
 Project: Kafka
  Issue Type: Sub-task
  Components: security
Reporter: Parth Brahmbhatt
Assignee: Parth Brahmbhatt
 Fix For: 0.8.3

 Attachments: KAFKA-2210.patch, KAFKA-2210_2015-06-03_16:36:11.patch, 
 KAFKA-2210_2015-06-04_16:07:39.patch, KAFKA-2210_2015-07-09_18:00:34.patch, 
 KAFKA-2210_2015-07-14_10:02:19.patch, KAFKA-2210_2015-07-14_14:13:19.patch, 
 KAFKA-2210_2015-07-20_16:42:18.patch, KAFKA-2210_2015-07-21_17:08:21.patch, 
 KAFKA-2210_2015-08-10_18:31:54.patch, KAFKA-2210_2015-08-20_11:27:18.patch


 This is the first subtask for KAFKA-1688. As part of this JIRA we intend to 
 agree on all the public entities, configs and changes to existing Kafka 
 classes to allow a pluggable authorizer implementation.
 Please see KIP-11 
 https://cwiki.apache.org/confluence/display/KAFKA/KIP-11+-+Authorization+Interface
  for detailed design. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1683) Implement a session concept in the socket server

2015-08-20 Thread Parth Brahmbhatt (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1683?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14705538#comment-14705538
 ] 

Parth Brahmbhatt commented on KAFKA-1683:
-

[~gwenshap] Given this blocks KAFKA-2210, if you are busy I can submit a PR for 
this one. 

 Implement a session concept in the socket server
 --

 Key: KAFKA-1683
 URL: https://issues.apache.org/jira/browse/KAFKA-1683
 Project: Kafka
  Issue Type: Sub-task
  Components: security
Affects Versions: 0.9.0
Reporter: Jay Kreps
Assignee: Gwen Shapira
 Fix For: 0.8.3

 Attachments: KAFKA-1683.patch, KAFKA-1683.patch


 To implement authentication we need a way to keep track of some things 
 between requests. The initial use for this would be remembering the 
 authenticated user/principal info, but likely more uses would come up (for 
 example we will also need to remember whether and which encryption or 
 integrity measures are in place on the socket so we can wrap and unwrap 
 writes and reads).
 I was thinking we could just add a Session object that might have a user 
 field. The session object would need to get added to RequestChannel.Request 
 so it is passed down to the API layer with each request.
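The idea in the description can be sketched in a few lines. This is a hypothetical illustration of the shape of the change, not the actual patch; the field names are assumptions:

```java
// Hypothetical sketch of the Session idea described above; field names are
// assumptions, not the actual patch.
final class Session {
    final String principal;   // authenticated user, filled in by the socket server
    final boolean encrypted;  // whether socket reads/writes must be (un)wrapped

    Session(String principal, boolean encrypted) {
        this.principal = principal;
        this.encrypted = encrypted;
    }
}

// RequestChannel.Request (modeled here as a plain class) would carry the
// session down to the API layer with each request.
final class Request {
    final Session session;
    final byte[] body;

    Request(Session session, byte[] body) {
        this.session = session;
        this.body = body;
    }
}
```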





Re: Review Request 34492: Patch for KAFKA-2210

2015-08-20 Thread Parth Brahmbhatt

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/34492/
---

(Updated Aug. 20, 2015, 6:27 p.m.)


Review request for kafka.


Bugs: KAFKA-2210
https://issues.apache.org/jira/browse/KAFKA-2210


Repository: kafka


Description (updated)
---

Addressing review comments from Jun.


Adding CREATE check for offset topic only if the topic does not exist already.


Addressing some more comments.


Removing acl.json file


Moving PermissionType to trait instead of enum. Following the convention for 
defining constants.


Adding authorizer.config.path back.


Addressing more comments from Jun.


Addressing more comments.


Now addressing Ismael's comments. Case sensitive checks.


Addressing Jun's comments.


Merge remote-tracking branch 'origin/trunk' into az

Conflicts:
core/src/main/scala/kafka/server/KafkaApis.scala
core/src/main/scala/kafka/server/KafkaServer.scala

Deleting KafkaConfigDefTest


Addressing comments from Ismael.


Diffs (updated)
-

  core/src/main/scala/kafka/api/OffsetRequest.scala 
f418868046f7c99aefdccd9956541a0cb72b1500 
  core/src/main/scala/kafka/common/AuthorizationException.scala PRE-CREATION 
  core/src/main/scala/kafka/common/ErrorMapping.scala 
c75c68589681b2c9d6eba2b440ce5e58cddf6370 
  core/src/main/scala/kafka/security/auth/Acl.scala PRE-CREATION 
  core/src/main/scala/kafka/security/auth/Authorizer.scala PRE-CREATION 
  core/src/main/scala/kafka/security/auth/KafkaPrincipal.scala PRE-CREATION 
  core/src/main/scala/kafka/security/auth/Operation.scala PRE-CREATION 
  core/src/main/scala/kafka/security/auth/PermissionType.scala PRE-CREATION 
  core/src/main/scala/kafka/security/auth/Resource.scala PRE-CREATION 
  core/src/main/scala/kafka/security/auth/ResourceType.scala PRE-CREATION 
  core/src/main/scala/kafka/server/KafkaApis.scala 
67f0cad802f901f255825aa2158545d7f5e7cc3d 
  core/src/main/scala/kafka/server/KafkaConfig.scala 
d547a01cf7098f216a3775e1e1901c5794e1b24c 
  core/src/main/scala/kafka/server/KafkaServer.scala 
0e7ba3ede78dbc995c404e0387a6be687703836a 
  core/src/test/scala/unit/kafka/security/auth/AclTest.scala PRE-CREATION 
  core/src/test/scala/unit/kafka/security/auth/KafkaPrincipalTest.scala 
PRE-CREATION 
  core/src/test/scala/unit/kafka/security/auth/OperationTest.scala PRE-CREATION 
  core/src/test/scala/unit/kafka/security/auth/PermissionTypeTest.scala 
PRE-CREATION 
  core/src/test/scala/unit/kafka/security/auth/ResourceTypeTest.scala 
PRE-CREATION 
  core/src/test/scala/unit/kafka/server/KafkaConfigTest.scala 
3da666f73227fc7ef7093e3790546344065f6825 

Diff: https://reviews.apache.org/r/34492/diff/


Testing
---


Thanks,

Parth Brahmbhatt



Re: Review Request 34492: Patch for KAFKA-2210

2015-08-20 Thread Parth Brahmbhatt


 On Aug. 20, 2015, 10:29 a.m., Ismael Juma wrote:
  core/src/main/scala/kafka/network/RequestChannel.scala, line 48
  https://reviews.apache.org/r/34492/diff/10/?file=1037026#file1037026line48
 
  Normally one would use `Option[Session]` here. Are we using `null` due 
  to efficiency concerns? Sorry if I am missing some context.

My bad, this class is not supposed to be part of this PR but of the dependent 
JIRA. Removed this class.


 On Aug. 20, 2015, 10:29 a.m., Ismael Juma wrote:
  core/src/main/scala/kafka/security/auth/Operation.scala, line 42
  https://reviews.apache.org/r/34492/diff/10/?file=1037030#file1037030line42
 
  Generally a good idea to set the result type for public methods. This 
  makes it possible to change the underlying implementation without affecting 
  binary compatibility. For example, here we may set the result type as 
  `Seq[Operation]`, which would give us the option of changing the underlying 
  implementation to `Vector` if that turned out to be better. In Scala, 
  `List` is a concrete type, unlike in Java.
  
  Not sure what the usual policy for Kafka is, though; it would be useful to 
  have some input from Jun. If we decide to change it, there are other places 
  where the same comment would apply.
 
 Jun Rao wrote:
 We don't have a policy on that yet. I think explicitly defining return 
 types in this case makes sense.

Fixed in Operation, PermissionType and ResourceType.
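The same principle holds in Java, which may make it easier to see: declare the abstract collection type so the backing implementation can change without affecting callers. A small illustration (not Kafka code):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Not Kafka code -- just the return-type principle from the review: callers
// compile against List (the interface), so the body can later switch from
// ArrayList to any other List implementation without breaking them.
class Operations {
    static List<String> values() {
        return new ArrayList<>(Arrays.asList("Read", "Write", "Create", "Delete"));
    }
}
```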


 On Aug. 20, 2015, 10:29 a.m., Ismael Juma wrote:
  core/src/main/scala/kafka/security/auth/Resource.scala, line 24
  https://reviews.apache.org/r/34492/diff/10/?file=1037032#file1037032line24
 
  Nitpick: space before `:` should be removed.

Fixed.


 On Aug. 20, 2015, 10:29 a.m., Ismael Juma wrote:
  core/src/main/scala/kafka/security/auth/KafkaPrincipal.scala, line 25
  https://reviews.apache.org/r/34492/diff/10/?file=1037029#file1037029line25
 
  Nitpick: space before `:` should be removed.

Fixed.


 On Aug. 20, 2015, 10:29 a.m., Ismael Juma wrote:
  core/src/main/scala/kafka/security/auth/Acl.scala, line 116
  https://reviews.apache.org/r/34492/diff/10/?file=1037027#file1037027line116
 
  Nitpick: no need for `()` and space before `:` should be removed.

Fixed.


 On Aug. 20, 2015, 10:29 a.m., Ismael Juma wrote:
  core/src/main/scala/kafka/server/KafkaApis.scala, line 104
  https://reviews.apache.org/r/34492/diff/10/?file=1037034#file1037034line104
 
  This code exists in 4 places, how about we introduce a method like:
  
  ```
  def authorizeClusterAction(authorizer: Option[Authorizer], request: RequestChannel.Request): Unit = {
    if (!authorizer.map(_.authorize(request.session, ClusterAction, Resource.ClusterResource)).getOrElse(true))
      throw new AuthorizationException(s"Request $request is not authorized.")
  }
  ```
  
  And then callers can just do (as an example):
  
  `authorizeClusterAction(authorizer, leaderAndIsrRequest)`
  
  
  Am I missing something?

Fixed.


 On Aug. 20, 2015, 10:29 a.m., Ismael Juma wrote:
  core/src/main/scala/kafka/server/KafkaApis.scala, lines 189-192
  https://reviews.apache.org/r/34492/diff/10/?file=1037034#file1037034line189
 
  Nitpick: space after `case`. There are a number of other cases like 
  this.

Fixed.


 On Aug. 20, 2015, 10:29 a.m., Ismael Juma wrote:
  core/src/main/scala/kafka/server/KafkaApis.scala, line 549
  https://reviews.apache.org/r/34492/diff/10/?file=1037034#file1037034line549
 
  Nitpick: val instead of var.

I am actually changing these values later so they need to be vars.


 On Aug. 20, 2015, 10:29 a.m., Ismael Juma wrote:
  core/src/test/scala/unit/kafka/security/auth/AclTest.scala, line 24
  https://reviews.apache.org/r/34492/diff/10/?file=1037037#file1037037line24
 
  We don't use `JUnit3Suite` anymore. Either use `JUnitSuite` or don't 
  inherit from anything (we have both examples in the codebase now). This 
  applies to all the tests. I noticed that Jun already mentioned this in 
  another test.

Fixed.


 On Aug. 20, 2015, 10:29 a.m., Ismael Juma wrote:
  core/src/test/scala/unit/kafka/security/auth/AclTest.scala, line 30
  https://reviews.apache.org/r/34492/diff/10/?file=1037037#file1037037line30
 
  @Test annotation is needed after you remove `JUnit3Suite`. This applies 
  to all the tests.

Fixed.


- Parth


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/34492/#review95934
---


On Aug. 20, 2015, 6:27 p.m., Parth Brahmbhatt wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/34492/
 ---
 
 (Updated Aug. 20, 2015, 6:27 p.m.)
 
 
 Review request for kafka.
 
 
 Bugs: KAFKA-2210
 https://issues.apache.org/jira/browse/KAFKA-2210
 

[jira] [Updated] (KAFKA-2210) KafkaAuthorizer: Add all public entities, config changes and changes to KafkaAPI and kafkaServer to allow pluggable authorizer implementation.

2015-08-20 Thread Parth Brahmbhatt (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2210?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Parth Brahmbhatt updated KAFKA-2210:

Attachment: KAFKA-2210_2015-08-20_11:27:18.patch

 KafkaAuthorizer: Add all public entities, config changes and changes to 
 KafkaAPI and kafkaServer to allow pluggable authorizer implementation.
 --

 Key: KAFKA-2210
 URL: https://issues.apache.org/jira/browse/KAFKA-2210
 Project: Kafka
  Issue Type: Sub-task
  Components: security
Reporter: Parth Brahmbhatt
Assignee: Parth Brahmbhatt
 Fix For: 0.8.3

 Attachments: KAFKA-2210.patch, KAFKA-2210_2015-06-03_16:36:11.patch, 
 KAFKA-2210_2015-06-04_16:07:39.patch, KAFKA-2210_2015-07-09_18:00:34.patch, 
 KAFKA-2210_2015-07-14_10:02:19.patch, KAFKA-2210_2015-07-14_14:13:19.patch, 
 KAFKA-2210_2015-07-20_16:42:18.patch, KAFKA-2210_2015-07-21_17:08:21.patch, 
 KAFKA-2210_2015-08-10_18:31:54.patch, KAFKA-2210_2015-08-20_11:27:18.patch


 This is the first subtask for KAFKA-1688. As part of this JIRA we intend to 
 agree on all the public entities, configs and changes to existing Kafka 
 classes to allow a pluggable authorizer implementation.
 Please see KIP-11 
 https://cwiki.apache.org/confluence/display/KAFKA/KIP-11+-+Authorization+Interface
  for detailed design. 





[jira] [Updated] (KAFKA-2454) Dead lock between delete log segment and shutting down.

2015-08-20 Thread Jiangjie Qin (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2454?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiangjie Qin updated KAFKA-2454:

Status: Patch Available  (was: Open)

 Dead lock between delete log segment and shutting down.
 ---

 Key: KAFKA-2454
 URL: https://issues.apache.org/jira/browse/KAFKA-2454
 Project: Kafka
  Issue Type: Bug
Reporter: Jiangjie Qin
Assignee: Jiangjie Qin

 When the broker shuts down, it shuts down the scheduler, which grabs the 
 scheduler lock and then waits for all the threads in the scheduler to shut down.
 The deadlock happens when a scheduled task tries to delete an old log 
 segment: it schedules a log delete task which also needs to acquire the 
 scheduler lock. In this case the shutdown thread holds the scheduler lock and 
 waits for the log deletion thread to finish, but the log deletion thread 
 blocks waiting for the scheduler lock.
 Related stack trace:
 {noformat}
 Thread-1 #21 prio=5 os_prio=0 tid=0x7fe7601a7000 nid=0x1a4e waiting on 
 condition [0x7fe7cf698000]
java.lang.Thread.State: TIMED_WAITING (parking)
 at sun.misc.Unsafe.park(Native Method)
 - parking to wait for  0x000640d53540 (a 
 java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
 at 
 java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
 at 
 java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
 at 
 java.util.concurrent.ThreadPoolExecutor.awaitTermination(ThreadPoolExecutor.java:1465)
 at kafka.utils.KafkaScheduler.shutdown(KafkaScheduler.scala:94)
 - locked 0x000640b6d480 (a kafka.utils.KafkaScheduler)
 at 
 kafka.server.KafkaServer$$anonfun$shutdown$4.apply$mcV$sp(KafkaServer.scala:352)
 at kafka.utils.CoreUtils$.swallow(CoreUtils.scala:79)
 at kafka.utils.Logging$class.swallowWarn(Logging.scala:92)
 at kafka.utils.CoreUtils$.swallowWarn(CoreUtils.scala:51)
 at kafka.utils.Logging$class.swallow(Logging.scala:94)
 at kafka.utils.CoreUtils$.swallow(CoreUtils.scala:51)
 at kafka.server.KafkaServer.shutdown(KafkaServer.scala:352)
 at 
 kafka.server.KafkaServerStartable.shutdown(KafkaServerStartable.scala:42)
 at com.linkedin.kafka.KafkaServer.notifyShutdown(KafkaServer.java:99)
 at 
 com.linkedin.util.factory.lifecycle.LifeCycleMgr.notifyShutdownListener(LifeCycleMgr.java:123)
 at 
 com.linkedin.util.factory.lifecycle.LifeCycleMgr.notifyListeners(LifeCycleMgr.java:102)
 at 
 com.linkedin.util.factory.lifecycle.LifeCycleMgr.notifyStop(LifeCycleMgr.java:82)
 - locked 0x000640b77bb0 (a java.util.ArrayDeque)
 at com.linkedin.util.factory.Generator.stop(Generator.java:177)
 - locked 0x000640b77bc8 (a java.lang.Object)
 at 
 com.linkedin.offspring.servlet.OffspringServletRuntime.destroy(OffspringServletRuntime.java:82)
 at 
 com.linkedin.offspring.servlet.OffspringServletContextListener.contextDestroyed(OffspringServletContextListener.java:51)
 at 
 org.eclipse.jetty.server.handler.ContextHandler.doStop(ContextHandler.java:813)
 at 
 org.eclipse.jetty.servlet.ServletContextHandler.doStop(ServletContextHandler.java:160)
 at 
 org.eclipse.jetty.webapp.WebAppContext.doStop(WebAppContext.java:516)
 at com.linkedin.emweb.WebappContext.doStop(WebappContext.java:35)
 at 
 org.eclipse.jetty.util.component.AbstractLifeCycle.stop(AbstractLifeCycle.java:89)
 - locked 0x0006400018b8 (a java.lang.Object)
 at 
 com.linkedin.emweb.ContextBasedHandlerImpl.doStop(ContextBasedHandlerImpl.java:112)
 at 
 org.eclipse.jetty.util.component.AbstractLifeCycle.stop(AbstractLifeCycle.java:89)
 - locked 0x000640001900 (a java.lang.Object)
 at 
 com.linkedin.emweb.WebappDeployerImpl.stop(WebappDeployerImpl.java:349)
 at 
 com.linkedin.emweb.WebappDeployerImpl.doStop(WebappDeployerImpl.java:414)
 - locked 0x0006400019c0 (a 
 com.linkedin.emweb.MapBasedHandlerImpl)
 at 
 org.eclipse.jetty.util.component.AbstractLifeCycle.stop(AbstractLifeCycle.java:89)
 - locked 0x0006404ee8e8 (a java.lang.Object)
 at 
 org.eclipse.jetty.util.component.AggregateLifeCycle.doStop(AggregateLifeCycle.java:107)
 at 
 org.eclipse.jetty.server.handler.AbstractHandler.doStop(AbstractHandler.java:69)
 at 
 org.eclipse.jetty.server.handler.HandlerWrapper.doStop(HandlerWrapper.java:108)
 at org.eclipse.jetty.server.Server.doStop(Server.java:338)
 at 
 org.eclipse.jetty.util.component.AbstractLifeCycle.stop(AbstractLifeCycle.java:89)
 - locked 0x000640476ab0 (a java.lang.Object)
 

[jira] [Commented] (KAFKA-2210) KafkaAuthorizer: Add all public entities, config changes and changes to KafkaAPI and kafkaServer to allow pluggable authorizer implementation.

2015-08-20 Thread Parth Brahmbhatt (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2210?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14705505#comment-14705505
 ] 

Parth Brahmbhatt commented on KAFKA-2210:
-

Updated reviewboard https://reviews.apache.org/r/34492/diff/
 against branch origin/trunk

 KafkaAuthorizer: Add all public entities, config changes and changes to 
 KafkaAPI and kafkaServer to allow pluggable authorizer implementation.
 --

 Key: KAFKA-2210
 URL: https://issues.apache.org/jira/browse/KAFKA-2210
 Project: Kafka
  Issue Type: Sub-task
  Components: security
Reporter: Parth Brahmbhatt
Assignee: Parth Brahmbhatt
 Fix For: 0.8.3

 Attachments: KAFKA-2210.patch, KAFKA-2210_2015-06-03_16:36:11.patch, 
 KAFKA-2210_2015-06-04_16:07:39.patch, KAFKA-2210_2015-07-09_18:00:34.patch, 
 KAFKA-2210_2015-07-14_10:02:19.patch, KAFKA-2210_2015-07-14_14:13:19.patch, 
 KAFKA-2210_2015-07-20_16:42:18.patch, KAFKA-2210_2015-07-21_17:08:21.patch, 
 KAFKA-2210_2015-08-10_18:31:54.patch, KAFKA-2210_2015-08-20_11:27:18.patch


 This is the first subtask for KAFKA-1688. As part of this JIRA we intend to 
 agree on all the public entities, configs and changes to existing Kafka 
 classes to allow a pluggable authorizer implementation.
 Please see KIP-11 
 https://cwiki.apache.org/confluence/display/KAFKA/KIP-11+-+Authorization+Interface
  for detailed design. 





Re: Review Request 34492: Patch for KAFKA-2210

2015-08-20 Thread Parth Brahmbhatt


 On Aug. 20, 2015, 1:07 a.m., Jun Rao wrote:
  core/src/main/scala/kafka/security/auth/Acl.scala, lines 84-87
  https://reviews.apache.org/r/34492/diff/10/?file=1037027#file1037027line84
 
  Do we need the case match here since acls is always a Set?

Fixed.


 On Aug. 20, 2015, 1:07 a.m., Jun Rao wrote:
  core/src/main/scala/kafka/security/auth/ResourceType.scala, line 21
  https://reviews.apache.org/r/34492/diff/10/?file=1037033#file1037033line21
 
  Not needed.

Fixed.


 On Aug. 20, 2015, 1:07 a.m., Jun Rao wrote:
  core/src/main/scala/kafka/server/KafkaApis.scala, line 104
  https://reviews.apache.org/r/34492/diff/10/?file=1037034#file1037034line104
 
  If authorizer is not specified, getOrElse() should return true, right? 
  There are a few other places like that.

Fixed.


 On Aug. 20, 2015, 1:07 a.m., Jun Rao wrote:
  core/src/main/scala/kafka/server/KafkaApis.scala, lines 189-192
  https://reviews.apache.org/r/34492/diff/10/?file=1037034#file1037034line189
 
  For coding style, to be consistent with most existing code, it seems 
  it's better to do authorizer.map{ ... }.getOrElse().

Done.


 On Aug. 20, 2015, 1:07 a.m., Jun Rao wrote:
  core/src/main/scala/kafka/server/KafkaApis.scala, lines 641-644
  https://reviews.apache.org/r/34492/diff/10/?file=1037034#file1037034line641
 
  The current logic requires that we grant CREATE on CLUSTER to the 
  consumer, which is a bit weird. Perhaps we should just always allow the 
  consumer to create the offset topic as long as it has the permission to 
  read the topic and the consumer group. That way, we don't have to grant 
  CREATE permission to the consumer.

Done.
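A sketch of the resulting permission decision, under the assumption that it takes roughly this shape (the Check interface and the operation/resource names are illustrative, not Kafka's API): the internal offsets topic may be auto-created by a consumer that can already read the topic and the consumer group, while ordinary topics still require CREATE on the cluster.

```java
// Hypothetical sketch of the permission decision discussed above; the Check
// interface and the operation/resource names are assumptions, not Kafka's API.
class OffsetTopicPolicy {
    interface Check {
        boolean authorize(String operation, String resource);
    }

    static boolean mayCreate(Check az, boolean isOffsetsTopic,
                             String topic, String group) {
        if (isOffsetsTopic)
            // The consumer may implicitly create the offsets topic if it can
            // already read both the topic and the consumer group.
            return az.authorize("Read", topic) && az.authorize("Read", group);
        // Ordinary topics still require CREATE on the cluster.
        return az.authorize("Create", "Cluster");
    }
}
```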


 On Aug. 20, 2015, 1:07 a.m., Jun Rao wrote:
  core/src/main/scala/kafka/server/KafkaApis.scala, line 677
  https://reviews.apache.org/r/34492/diff/10/?file=1037034#file1037034line677
 
  It seems the test on errorCode == ErrorMapping.NoError is unnecessary.

The code tries to prioritize non-authorization errors above authorization errors. 
We can only send one error code, and IMO if we have an error other than an 
authorization error we should propagate that to the user.


 On Aug. 20, 2015, 1:07 a.m., Jun Rao wrote:
  core/src/main/scala/kafka/server/KafkaApis.scala, line 553
  https://reviews.apache.org/r/34492/diff/10/?file=1037034#file1037034line553
 
  Currently, metadataCache.getTopicMetadata() will return all the 
  metadata of all topics if the input topic is empty. This causes a couple of 
  issues here. (1) If authorizedTopics is empty, we end up returning more 
  topics than needed. (2) If the original request has an empty topic list, we 
  will return the metadata of all topics whether the client has the DESCRIBE 
  permission or not.

Fixed.
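Both problems stem from the "empty topic list means all topics" convention. A hedged sketch of one way to structure the fix (hypothetical names, not the actual patch): expand the empty request exactly once, before filtering by DESCRIBE permission, so an empty authorized set is never re-expanded into all topics.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Set;
import java.util.function.Predicate;
import java.util.stream.Collectors;

// Illustrative sketch only -- names do not match Kafka's actual classes.
class MetadataFilter {
    // requested: topics named in the request (empty meaning "all topics");
    // canDescribe: per-topic DESCRIBE permission check for this client.
    static List<String> authorizedTopics(Set<String> allTopics,
                                         List<String> requested,
                                         Predicate<String> canDescribe) {
        // Expand "empty means all" exactly once, BEFORE filtering, so an
        // empty *authorized* result is never re-expanded into all topics.
        List<String> candidates = requested.isEmpty()
                ? new ArrayList<>(allTopics)
                : requested;
        return candidates.stream().filter(canDescribe).collect(Collectors.toList());
    }
}
```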


 On Aug. 20, 2015, 1:07 a.m., Jun Rao wrote:
  core/src/test/scala/unit/kafka/server/KafkaConfigConfigDefTest.scala, line 
  30
  https://reviews.apache.org/r/34492/diff/10/?file=1037042#file1037042line30
 
  We removed this class in KAFKA-2288 since it's no longer necessary.

Removed the class.


- Parth


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/34492/#review95801
---


On Aug. 20, 2015, 6:27 p.m., Parth Brahmbhatt wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/34492/
 ---
 
 (Updated Aug. 20, 2015, 6:27 p.m.)
 
 
 Review request for kafka.
 
 
 Bugs: KAFKA-2210
 https://issues.apache.org/jira/browse/KAFKA-2210
 
 
 Repository: kafka
 
 
 Description
 ---
 
 Addressing review comments from Jun.
 
 
 Adding CREATE check for offset topic only if the topic does not exist already.
 
 
 Addressing some more comments.
 
 
 Removing acl.json file
 
 
 Moving PermissionType to trait instead of enum. Following the convention for 
 defining constants.
 
 
 Adding authorizer.config.path back.
 
 
 Addressing more comments from Jun.
 
 
 Addressing more comments.
 
 
 Now addressing Ismael's comments. Case sensitive checks.
 
 
 Addressing Jun's comments.
 
 
 Merge remote-tracking branch 'origin/trunk' into az
 
 Conflicts:
   core/src/main/scala/kafka/server/KafkaApis.scala
   core/src/main/scala/kafka/server/KafkaServer.scala
 
 Deleting KafkaConfigDefTest
 
 
 Addressing comments from Ismael.
 
 
 Diffs
 -
 
   core/src/main/scala/kafka/api/OffsetRequest.scala 
 f418868046f7c99aefdccd9956541a0cb72b1500 
   core/src/main/scala/kafka/common/AuthorizationException.scala PRE-CREATION 
   core/src/main/scala/kafka/common/ErrorMapping.scala 
 c75c68589681b2c9d6eba2b440ce5e58cddf6370 
   core/src/main/scala/kafka/security/auth/Acl.scala PRE-CREATION 
   core/src/main/scala/kafka/security/auth/Authorizer.scala PRE-CREATION 
   

[jira] [Updated] (KAFKA-2210) KafkaAuthorizer: Add all public entities, config changes and changes to KafkaAPI and kafkaServer to allow pluggable authorizer implementation.

2015-08-20 Thread Parth Brahmbhatt (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2210?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Parth Brahmbhatt updated KAFKA-2210:

Status: Patch Available  (was: In Progress)

 KafkaAuthorizer: Add all public entities, config changes and changes to 
 KafkaAPI and kafkaServer to allow pluggable authorizer implementation.
 --

 Key: KAFKA-2210
 URL: https://issues.apache.org/jira/browse/KAFKA-2210
 Project: Kafka
  Issue Type: Sub-task
  Components: security
Reporter: Parth Brahmbhatt
Assignee: Parth Brahmbhatt
 Fix For: 0.8.3

 Attachments: KAFKA-2210.patch, KAFKA-2210_2015-06-03_16:36:11.patch, 
 KAFKA-2210_2015-06-04_16:07:39.patch, KAFKA-2210_2015-07-09_18:00:34.patch, 
 KAFKA-2210_2015-07-14_10:02:19.patch, KAFKA-2210_2015-07-14_14:13:19.patch, 
 KAFKA-2210_2015-07-20_16:42:18.patch, KAFKA-2210_2015-07-21_17:08:21.patch, 
 KAFKA-2210_2015-08-10_18:31:54.patch, KAFKA-2210_2015-08-20_11:27:18.patch


 This is the first subtask for KAFKA-1688. As part of this JIRA we intend to 
 agree on all the public entities, configs and changes to existing Kafka 
 classes to allow a pluggable authorizer implementation.
 Please see KIP-11 
 https://cwiki.apache.org/confluence/display/KAFKA/KIP-11+-+Authorization+Interface
  for detailed design. 





Re: Review Request 34492: Patch for KAFKA-2210

2015-08-20 Thread Parth Brahmbhatt


 On Aug. 20, 2015, 10:30 a.m., Ismael Juma wrote:
  One more thing: it may be a good idea to rebase against trunk since the 
  large SSL/TLS patch has now been merged (not sure if there are any 
  conflicts).

Done.


- Parth


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/34492/#review95942
---


On Aug. 20, 2015, 6:27 p.m., Parth Brahmbhatt wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/34492/
 ---
 
 (Updated Aug. 20, 2015, 6:27 p.m.)
 
 
 Review request for kafka.
 
 
 Bugs: KAFKA-2210
 https://issues.apache.org/jira/browse/KAFKA-2210
 
 
 Repository: kafka
 
 
 Description
 ---
 
 Addressing review comments from Jun.
 
 
 Adding CREATE check for offset topic only if the topic does not exist already.
 
 
 Addressing some more comments.
 
 
 Removing acl.json file
 
 
 Moving PermissionType to trait instead of enum. Following the convention for 
 defining constants.
 
 
 Adding authorizer.config.path back.
 
 
 Addressing more comments from Jun.
 
 
 Addressing more comments.
 
 
 Now addressing Ismael's comments. Case sensitive checks.
 
 
 Addressing Jun's comments.
 
 
 Merge remote-tracking branch 'origin/trunk' into az
 
 Conflicts:
   core/src/main/scala/kafka/server/KafkaApis.scala
   core/src/main/scala/kafka/server/KafkaServer.scala
 
 Deleting KafkaConfigDefTest
 
 
 Addressing comments from Ismael.
 
 
 Diffs
 -
 
   core/src/main/scala/kafka/api/OffsetRequest.scala 
 f418868046f7c99aefdccd9956541a0cb72b1500 
   core/src/main/scala/kafka/common/AuthorizationException.scala PRE-CREATION 
   core/src/main/scala/kafka/common/ErrorMapping.scala 
 c75c68589681b2c9d6eba2b440ce5e58cddf6370 
   core/src/main/scala/kafka/security/auth/Acl.scala PRE-CREATION 
   core/src/main/scala/kafka/security/auth/Authorizer.scala PRE-CREATION 
   core/src/main/scala/kafka/security/auth/KafkaPrincipal.scala PRE-CREATION 
   core/src/main/scala/kafka/security/auth/Operation.scala PRE-CREATION 
   core/src/main/scala/kafka/security/auth/PermissionType.scala PRE-CREATION 
   core/src/main/scala/kafka/security/auth/Resource.scala PRE-CREATION 
   core/src/main/scala/kafka/security/auth/ResourceType.scala PRE-CREATION 
   core/src/main/scala/kafka/server/KafkaApis.scala 
 67f0cad802f901f255825aa2158545d7f5e7cc3d 
   core/src/main/scala/kafka/server/KafkaConfig.scala 
 d547a01cf7098f216a3775e1e1901c5794e1b24c 
   core/src/main/scala/kafka/server/KafkaServer.scala 
 0e7ba3ede78dbc995c404e0387a6be687703836a 
   core/src/test/scala/unit/kafka/security/auth/AclTest.scala PRE-CREATION 
   core/src/test/scala/unit/kafka/security/auth/KafkaPrincipalTest.scala 
 PRE-CREATION 
   core/src/test/scala/unit/kafka/security/auth/OperationTest.scala 
 PRE-CREATION 
   core/src/test/scala/unit/kafka/security/auth/PermissionTypeTest.scala 
 PRE-CREATION 
   core/src/test/scala/unit/kafka/security/auth/ResourceTypeTest.scala 
 PRE-CREATION 
   core/src/test/scala/unit/kafka/server/KafkaConfigTest.scala 
 3da666f73227fc7ef7093e3790546344065f6825 
 
 Diff: https://reviews.apache.org/r/34492/diff/
 
 
 Testing
 ---
 
 
 Thanks,
 
 Parth Brahmbhatt
 




[jira] [Commented] (KAFKA-873) Consider replacing zkclient with curator (with zkclient-bridge)

2015-08-20 Thread Jay Kreps (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14705513#comment-14705513
 ] 

Jay Kreps commented on KAFKA-873:
-

Curator will add a transitive dependency on Guava for the Scala clients, right? 
Guava is heavily used; how good is its compatibility story? If there are any 
Guava incompatibilities over time, any app that uses Kafka and Guava (but not 
whatever version Netflix likes) will be unable to upgrade.

 Consider replacing zkclient with curator (with zkclient-bridge)
 ---

 Key: KAFKA-873
 URL: https://issues.apache.org/jira/browse/KAFKA-873
 Project: Kafka
  Issue Type: Improvement
Affects Versions: 0.8.0
Reporter: Scott Clasen
Assignee: Grant Henke

 If zkclient was replaced with curator and curator-x-zkclient-bridge it would 
 be initially a drop-in replacement
 https://github.com/Netflix/curator/wiki/ZKClient-Bridge
 With the addition of a few more props to ZkConfig, and a bit of code this 
 would open up the possibility of using ACLs in zookeeper (which arent 
 supported directly by zkclient), as well as integrating with netflix 
 exhibitor for those of us using that.
 Looks like KafkaZookeeperClient needs some love anyhow...



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (KAFKA-2452) enable new consumer in mirror maker

2015-08-20 Thread Jiangjie Qin (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2452?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiangjie Qin reassigned KAFKA-2452:
---

Assignee: Jiangjie Qin

 enable new consumer in mirror maker
 ---

 Key: KAFKA-2452
 URL: https://issues.apache.org/jira/browse/KAFKA-2452
 Project: Kafka
  Issue Type: Sub-task
  Components: core
Affects Versions: 0.8.3
Reporter: Jun Rao
Assignee: Jiangjie Qin
 Fix For: 0.8.3


 We need to add an option to enable the new consumer in mirror maker.





[jira] [Commented] (KAFKA-1683) Implement a session concept in the socket server

2015-08-20 Thread Gwen Shapira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1683?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14705540#comment-14705540
 ] 

Gwen Shapira commented on KAFKA-1683:
-

I think I'm good, expect a patch by Friday.

 Implement a session concept in the socket server
 --

 Key: KAFKA-1683
 URL: https://issues.apache.org/jira/browse/KAFKA-1683
 Project: Kafka
  Issue Type: Sub-task
  Components: security
Affects Versions: 0.9.0
Reporter: Jay Kreps
Assignee: Gwen Shapira
 Fix For: 0.8.3

 Attachments: KAFKA-1683.patch, KAFKA-1683.patch


 To implement authentication we need a way to keep track of some things 
 between requests. The initial use for this would be remembering the 
 authenticated user/principle info, but likely more uses would come up (for 
 example we will also need to remember whether and which encryption or 
 integrity measures are in place on the socket so we can wrap and unwrap 
 writes and reads).
 I was thinking we could just add a Session object that might have a user 
 field. The session object would need to get added to RequestChannel.Request 
 so it is passed down to the API layer with each request.
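
The proposal above can be sketched roughly as follows. This is an illustrative sketch only; the class and field names (`Session`, `principal`, `cipherSuite`) are assumptions, not the committed Kafka API:

```java
// Minimal sketch of a per-connection session carrying the authenticated
// principal plus any negotiated security properties, as described above.
// All names here are illustrative, not the actual Kafka implementation.
public final class Session {
    private final String principal;   // authenticated user/principal
    private final String cipherSuite; // negotiated encryption, or null if none

    public Session(String principal, String cipherSuite) {
        this.principal = principal;
        this.cipherSuite = cipherSuite;
    }

    public String principal() { return principal; }

    // Lets the socket layer decide whether reads/writes need unwrapping.
    public boolean isEncrypted() { return cipherSuite != null; }

    public static void main(String[] args) {
        Session s = new Session("alice", null);
        System.out.println(s.principal() + " encrypted=" + s.isEncrypted());
    }
}
```

Attaching such an object to each `RequestChannel.Request` would make the authenticated identity available to the API layer on every request.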





Re: Review Request 34492: Patch for KAFKA-2210

2015-08-20 Thread Jun Rao


 On Aug. 20, 2015, 10:29 a.m., Ismael Juma wrote:
  core/src/main/scala/kafka/security/auth/Operation.scala, line 42
  https://reviews.apache.org/r/34492/diff/10/?file=1037030#file1037030line42
 
  It is generally a good idea to set the result type for public methods. This 
  makes it possible to change the underlying implementation without affecting 
  binary compatibility. For example, here we may set the result type to 
  `Seq[Operation]`, which would give us the option of changing the underlying 
  implementation to `Vector` if that turned out to be better. In Scala, 
  `List` is a concrete type, unlike in Java.
  
  Not sure what the usual policy for Kafka is, though; it would be useful to 
  have some input from Jun. If we decide to change it, there are other places 
  where the same comment would apply.

We don't have a policy on that yet. I think explicitly defining return types in 
this case makes sense.
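
The binary-compatibility point being discussed translates directly to Java terms; a minimal sketch (illustrative names only, not the `Operation.scala` code under review):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public final class Operations {
    // Declaring List (not ArrayList) as the result type is the Java analogue
    // of declaring Seq[Operation] rather than List[Operation] in Scala:
    // callers compile against the abstraction, so the method body can later
    // switch to a different collection without breaking binary compatibility.
    public static List<String> values() {
        return new ArrayList<>(Arrays.asList("Read", "Write", "Create"));
    }

    public static void main(String[] args) {
        System.out.println(Operations.values().size()); // prints 3
    }
}
```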


- Jun


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/34492/#review95934
---


On Aug. 11, 2015, 1:32 a.m., Parth Brahmbhatt wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/34492/
 ---
 
 (Updated Aug. 11, 2015, 1:32 a.m.)
 
 
 Review request for kafka.
 
 
 Bugs: KAFKA-2210
 https://issues.apache.org/jira/browse/KAFKA-2210
 
 
 Repository: kafka
 
 
 Description
 ---
 
 Addressing review comments from Jun.
 
 
 Adding CREATE check for offset topic only if the topic does not exist already.
 
 
 Addressing some more comments.
 
 
 Removing acl.json file
 
 
 Moving PermissionType to trait instead of enum. Following the convention for 
 defining constants.
 
 
 Adding authorizer.config.path back.
 
 
 Addressing more comments from Jun.
 
 
 Addressing more comments.
 
 
 Now addressing Ismael's comments. Case sensitive checks.
 
 
 Diffs
 -
 
   core/src/main/scala/kafka/api/OffsetRequest.scala 
 f418868046f7c99aefdccd9956541a0cb72b1500 
   core/src/main/scala/kafka/common/AuthorizationException.scala PRE-CREATION 
   core/src/main/scala/kafka/common/ErrorMapping.scala 
 c75c68589681b2c9d6eba2b440ce5e58cddf6370 
   core/src/main/scala/kafka/network/RequestChannel.scala 
 20741281dcaa76374ea6f86a2185dad27b515339 
   core/src/main/scala/kafka/security/auth/Acl.scala PRE-CREATION 
   core/src/main/scala/kafka/security/auth/Authorizer.scala PRE-CREATION 
   core/src/main/scala/kafka/security/auth/KafkaPrincipal.scala PRE-CREATION 
   core/src/main/scala/kafka/security/auth/Operation.scala PRE-CREATION 
   core/src/main/scala/kafka/security/auth/PermissionType.scala PRE-CREATION 
   core/src/main/scala/kafka/security/auth/Resource.scala PRE-CREATION 
   core/src/main/scala/kafka/security/auth/ResourceType.scala PRE-CREATION 
   core/src/main/scala/kafka/server/KafkaApis.scala 
 7ea509c2c41acc00430c74e025e069a833aac4e7 
   core/src/main/scala/kafka/server/KafkaConfig.scala 
 dbe170f87331f43e2dc30165080d2cb7dfe5fdbf 
   core/src/main/scala/kafka/server/KafkaServer.scala 
 84d4730ac634f9a5bf12a656e422fea03ad72da8 
   core/src/test/scala/unit/kafka/security/auth/AclTest.scala PRE-CREATION 
   core/src/test/scala/unit/kafka/security/auth/KafkaPrincipalTest.scala 
 PRE-CREATION 
   core/src/test/scala/unit/kafka/security/auth/OperationTest.scala 
 PRE-CREATION 
   core/src/test/scala/unit/kafka/security/auth/PermissionTypeTest.scala 
 PRE-CREATION 
   core/src/test/scala/unit/kafka/security/auth/ResourceTypeTest.scala 
 PRE-CREATION 
   core/src/test/scala/unit/kafka/server/KafkaConfigConfigDefTest.scala 
 PRE-CREATION 
 
 Diff: https://reviews.apache.org/r/34492/diff/
 
 
 Testing
 ---
 
 
 Thanks,
 
 Parth Brahmbhatt
 




[jira] [Comment Edited] (KAFKA-873) Consider replacing zkclient with curator (with zkclient-bridge)

2015-08-20 Thread Jordan Zimmerman (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14705003#comment-14705003
 ] 

Jordan Zimmerman edited comment on KAFKA-873 at 8/20/15 2:30 PM:
-

NOTE: Curator's PersistentEphemeralNode would likely resolve KAFKA-1387 
http://curator.apache.org/curator-recipes/persistent-ephemeral-node.html


was (Author: randgalt):
NOTE: Curator's PersistenEphemeralNode would likely resolve KAFKA-1387

 Consider replacing zkclient with curator (with zkclient-bridge)
 ---

 Key: KAFKA-873
 URL: https://issues.apache.org/jira/browse/KAFKA-873
 Project: Kafka
  Issue Type: Improvement
Affects Versions: 0.8.0
Reporter: Scott Clasen
Assignee: Grant Henke

 If zkclient was replaced with curator and curator-x-zkclient-bridge it would 
 be initially a drop-in replacement
 https://github.com/Netflix/curator/wiki/ZKClient-Bridge
 With the addition of a few more props to ZkConfig, and a bit of code this 
 would open up the possibility of using ACLs in zookeeper (which arent 
 supported directly by zkclient), as well as integrating with netflix 
 exhibitor for those of us using that.
 Looks like KafkaZookeeperClient needs some love anyhow...





[jira] [Commented] (KAFKA-873) Consider replacing zkclient with curator (with zkclient-bridge)

2015-08-20 Thread Jordan Zimmerman (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14705003#comment-14705003
 ] 

Jordan Zimmerman commented on KAFKA-873:


NOTE: Curator's PersistenEphemeralNode would likely resolve KAFKA-1387

 Consider replacing zkclient with curator (with zkclient-bridge)
 ---

 Key: KAFKA-873
 URL: https://issues.apache.org/jira/browse/KAFKA-873
 Project: Kafka
  Issue Type: Improvement
Affects Versions: 0.8.0
Reporter: Scott Clasen
Assignee: Grant Henke

 If zkclient was replaced with curator and curator-x-zkclient-bridge it would 
 be initially a drop-in replacement
 https://github.com/Netflix/curator/wiki/ZKClient-Bridge
 With the addition of a few more props to ZkConfig, and a bit of code this 
 would open up the possibility of using ACLs in zookeeper (which arent 
 supported directly by zkclient), as well as integrating with netflix 
 exhibitor for those of us using that.
 Looks like KafkaZookeeperClient needs some love anyhow...





[jira] [Commented] (KAFKA-873) Consider replacing zkclient with curator (with zkclient-bridge)

2015-08-20 Thread Flavio Junqueira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14705044#comment-14705044
 ] 

Flavio Junqueira commented on KAFKA-873:


Ah! That's what I was looking for, thanks Jordan. What I've noticed is that the 
ZkClient bridge alone is not going to do it because it is only a different 
implementation of IZkConnection, so the client code is still the same. I'd need 
to remove the ZkClient listeners to use the recipe you suggest, but it sounds 
ok because I just need to initialize PersistentEphemeralNode once (funny name, 
btw). For this jira here, I'm trying to determine if it makes sense to do this 
intermediate step through the bridge or go directly into removing the listeners 
and add calls to curator framework directly.  

 Consider replacing zkclient with curator (with zkclient-bridge)
 ---

 Key: KAFKA-873
 URL: https://issues.apache.org/jira/browse/KAFKA-873
 Project: Kafka
  Issue Type: Improvement
Affects Versions: 0.8.0
Reporter: Scott Clasen
Assignee: Grant Henke

 If zkclient was replaced with curator and curator-x-zkclient-bridge it would 
 be initially a drop-in replacement
 https://github.com/Netflix/curator/wiki/ZKClient-Bridge
 With the addition of a few more props to ZkConfig, and a bit of code this 
 would open up the possibility of using ACLs in zookeeper (which arent 
 supported directly by zkclient), as well as integrating with netflix 
 exhibitor for those of us using that.
 Looks like KafkaZookeeperClient needs some love anyhow...





[jira] [Assigned] (KAFKA-1543) Changing replication factor

2015-08-20 Thread Alexander Pakulov (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1543?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Pakulov reassigned KAFKA-1543:


Assignee: Alexander Pakulov

 Changing replication factor
 ---

 Key: KAFKA-1543
 URL: https://issues.apache.org/jira/browse/KAFKA-1543
 Project: Kafka
  Issue Type: Improvement
Reporter: Alexey Ozeritskiy
Assignee: Alexander Pakulov
 Attachments: can-change-replication.patch


 It is difficult to change the replication factor by manually editing the json 
 config.
 I propose to add an option to the kafka-reassign-partitions.sh command to 
 automatically create the json config.
 Example of usage:
 {code}
 kafka-reassign-partitions.sh --zookeeper zk --replicas new-replication-factor 
 --topics-to-move-json-file topics-file --broker-list 1,2,3,4 --generate  
 output
 {code}
 {code}





[jira] [Updated] (KAFKA-2411) remove usage of BlockingChannel in the broker

2015-08-20 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2411?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-2411:
---
Status: Patch Available  (was: In Progress)

Addressed feedback from Gwen, so setting status to Patch Available.

 remove usage of BlockingChannel in the broker
 -

 Key: KAFKA-2411
 URL: https://issues.apache.org/jira/browse/KAFKA-2411
 Project: Kafka
  Issue Type: Sub-task
  Components: security
Reporter: Jun Rao
Assignee: Ismael Juma
 Fix For: 0.8.3


 In KAFKA-1690, we are adding the SSL support at Selector. However, there are 
 still a few places where we use BlockingChannel for inter-broker 
 communication. We need to replace those usages with Selector/NetworkClient to 
 enable inter-broker communication over SSL. Specifically, BlockingChannel is 
 currently used in the following places.
 1. ControllerChannelManager: for the controller to propagate metadata to the 
 brokers.
 2. KafkaServer: for the broker to send controlled shutdown request to the 
 controller.
 3. -AbstractFetcherThread: for the follower to fetch data from the leader 
 (through SimpleConsumer)- moved to KAFKA-2440





[jira] [Created] (KAFKA-2451) Exception logged but not managed

2015-08-20 Thread JIRA
Gwenhaël PASQUIERS created KAFKA-2451:
-

 Summary: Exception logged but not managed
 Key: KAFKA-2451
 URL: https://issues.apache.org/jira/browse/KAFKA-2451
 Project: Kafka
  Issue Type: Bug
  Components: producer 
Affects Versions: 0.8.1
 Environment: Windows + Java
Reporter: Gwenhaël PASQUIERS
Assignee: Jun Rao


We've been having issues with java-snappy and its native DLL.
To make it short: we get exceptions when serializing the message.

We are using the Kafka producer in Camel.

The problem is that Kafka thinks the message was correctly sent and returns no 
error: Camel consumes the files even though Kafka could not send the messages.

Where the issue lies (if I'm correct):

In DefaultEventHandler, line 115 at tag 0.8.1, the exception thrown by 
groupMessageToSet() is caught and logged. The return value of 
dispatchSerializedData() is then used to determine whether the send was 
successful (if (outstandingProduceRequest.size > 0) { ... }).

BUT in this case I suspect that not even one message could be serialized and 
added to failedProduceRequests, so the code that called dispatchSerializedData 
thinks everything is OK even though it's not.

The producer should propagate the error properly, since this can lead to pure 
data loss.
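
The suspected failure mode can be sketched in isolation (hypothetical method names standing in for groupMessageToSet/dispatchSerializedData; this is not the actual DefaultEventHandler code): when serialization throws before any message reaches the failed list, an emptiness check on that list wrongly reports success.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public final class SwallowedErrorSketch {
    // Hypothetical stand-in for the grouping/serialization step: it fails
    // outright, e.g. because a native compression library could not load.
    static List<String> serialize(List<String> messages) {
        throw new RuntimeException("snappy native library failed");
    }

    // Mirrors the suspected bug: the exception is caught and only logged, so
    // the failed list stays empty and the caller concludes the send succeeded.
    static boolean dispatch(List<String> messages) {
        List<String> failed = new ArrayList<>();
        try {
            serialize(messages); // throws before anything can be queued
        } catch (RuntimeException e) {
            System.err.println("logged: " + e.getMessage()); // logged, not rethrown
        }
        return failed.isEmpty(); // empty == "success", despite total failure
    }

    public static void main(String[] args) {
        System.out.println(dispatch(Arrays.asList("m1")) ? "reported OK" : "reported failure");
    }
}
```

Propagating the exception (or adding the unserialized messages to the failed list) would let the caller see the data loss.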





[GitHub] kafka pull request: KAFKA-2411; remove usage of blocking channel

2015-08-20 Thread ijuma
Github user ijuma closed the pull request at:

https://github.com/apache/kafka/pull/151


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] kafka pull request: KAFKA-2411; remove usage of blocking channel

2015-08-20 Thread ijuma
GitHub user ijuma reopened a pull request:

https://github.com/apache/kafka/pull/151

KAFKA-2411; remove usage of blocking channel



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ijuma/kafka 
kafka-2411-remove-usage-of-blocking-channel

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/151.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #151


commit dbcde7e828a250708752866c4610298773dea006
Author: Ismael Juma ism...@juma.me.uk
Date:   2015-08-19T13:30:35Z

Introduce `ChannelBuilders.create` and use it in `ClientUtils` and 
`SocketServer`

commit 6de8b9b18c6bfb67e72a4fccc10768dff15098f8
Author: Ismael Juma ism...@juma.me.uk
Date:   2015-08-19T14:22:55Z

Use `Selector` instead of `BlockingChannel` for controlled shutdown

commit da7a980887ab2b5d007ddf80c3059b6619d52f99
Author: Ismael Juma ism...@juma.me.uk
Date:   2015-08-19T14:23:11Z

Use `Selector` instead of `BlockingChannel` in `ControllerChannelManager`

commit 2b258901929e24fce2329bc85e650e4ca022bca0
Author: Ismael Juma ism...@juma.me.uk
Date:   2015-08-20T11:10:53Z

Move `readCompletely` from `NetworkReceive` to `BlockingChannel`

It is now a private method since it's not used anywhere else and it's
been changed slightly to match the use-case better.

commit f804f633d93d2beea94017bba9225504c2f9cea4
Author: Ismael Juma ism...@juma.me.uk
Date:   2015-08-20T12:29:55Z

Adjust buffer and max request size to match `BlockingChannel` behaviour

Based on feedback from Gwen.

commit c71aab9b6e4c6172615a125d2406ff6f3d668996
Author: Ismael Juma ism...@juma.me.uk
Date:   2015-08-20T12:53:39Z

Introduce specific methods in `SelectorUtils` and make the generic ones 
private

As suggested by Gwen.

commit 1de16166232e0b9a4b0798de493869d3ce23964c
Author: Ismael Juma ism...@juma.me.uk
Date:   2015-08-20T13:17:45Z

Reuse `Selector` when removing and re-adding brokers in 
`ControllerChannelManager`

As suggested by Gwen.

commit bf5b9c81fa59efa2429a409d3872f4f6f0d5d589
Author: Ismael Juma ism...@juma.me.uk
Date:   2015-08-20T14:55:03Z

Merge remote-tracking branch 'apache/trunk' into 
kafka-2411-remove-usage-of-blocking-channel

* apache/trunk:
  KAFKA-2330: Vagrantfile sets global configs instead of per-provider 
override configs; patched by Ewen Cheslack-Postava, reviewed by Geoff Anderson 
and Gwen Shapira
  KAFKA-2246; Fix incorrect config ZK path.
  KAFKA-2084; trivial follow-up (remove JUnit3Suite dependency)






[jira] [Commented] (KAFKA-2411) remove usage of BlockingChannel in the broker

2015-08-20 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2411?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14705145#comment-14705145
 ] 

ASF GitHub Bot commented on KAFKA-2411:
---

GitHub user ijuma reopened a pull request:

https://github.com/apache/kafka/pull/151

KAFKA-2411; remove usage of blocking channel



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ijuma/kafka 
kafka-2411-remove-usage-of-blocking-channel

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/151.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #151


commit dbcde7e828a250708752866c4610298773dea006
Author: Ismael Juma ism...@juma.me.uk
Date:   2015-08-19T13:30:35Z

Introduce `ChannelBuilders.create` and use it in `ClientUtils` and 
`SocketServer`

commit 6de8b9b18c6bfb67e72a4fccc10768dff15098f8
Author: Ismael Juma ism...@juma.me.uk
Date:   2015-08-19T14:22:55Z

Use `Selector` instead of `BlockingChannel` for controlled shutdown

commit da7a980887ab2b5d007ddf80c3059b6619d52f99
Author: Ismael Juma ism...@juma.me.uk
Date:   2015-08-19T14:23:11Z

Use `Selector` instead of `BlockingChannel` in `ControllerChannelManager`

commit 2b258901929e24fce2329bc85e650e4ca022bca0
Author: Ismael Juma ism...@juma.me.uk
Date:   2015-08-20T11:10:53Z

Move `readCompletely` from `NetworkReceive` to `BlockingChannel`

It is now a private method since it's not used anywhere else and it's
been changed slightly to match the use-case better.

commit f804f633d93d2beea94017bba9225504c2f9cea4
Author: Ismael Juma ism...@juma.me.uk
Date:   2015-08-20T12:29:55Z

Adjust buffer and max request size to match `BlockingChannel` behaviour

Based on feedback from Gwen.

commit c71aab9b6e4c6172615a125d2406ff6f3d668996
Author: Ismael Juma ism...@juma.me.uk
Date:   2015-08-20T12:53:39Z

Introduce specific methods in `SelectorUtils` and make the generic ones 
private

As suggested by Gwen.

commit 1de16166232e0b9a4b0798de493869d3ce23964c
Author: Ismael Juma ism...@juma.me.uk
Date:   2015-08-20T13:17:45Z

Reuse `Selector` when removing and re-adding brokers in 
`ControllerChannelManager`

As suggested by Gwen.

commit bf5b9c81fa59efa2429a409d3872f4f6f0d5d589
Author: Ismael Juma ism...@juma.me.uk
Date:   2015-08-20T14:55:03Z

Merge remote-tracking branch 'apache/trunk' into 
kafka-2411-remove-usage-of-blocking-channel

* apache/trunk:
  KAFKA-2330: Vagrantfile sets global configs instead of per-provider 
override configs; patched by Ewen Cheslack-Postava, reviewed by Geoff Anderson 
and Gwen Shapira
  KAFKA-2246; Fix incorrect config ZK path.
  KAFKA-2084; trivial follow-up (remove JUnit3Suite dependency)




 remove usage of BlockingChannel in the broker
 -

 Key: KAFKA-2411
 URL: https://issues.apache.org/jira/browse/KAFKA-2411
 Project: Kafka
  Issue Type: Sub-task
  Components: security
Reporter: Jun Rao
Assignee: Ismael Juma
 Fix For: 0.8.3


 In KAFKA-1690, we are adding the SSL support at Selector. However, there are 
 still a few places where we use BlockingChannel for inter-broker 
 communication. We need to replace those usages with Selector/NetworkClient to 
 enable inter-broker communication over SSL. Specifically, BlockingChannel is 
 currently used in the following places.
 1. ControllerChannelManager: for the controller to propagate metadata to the 
 brokers.
 2. KafkaServer: for the broker to send controlled shutdown request to the 
 controller.
 3. -AbstractFetcherThread: for the follower to fetch data from the leader 
 (through SimpleConsumer)- moved to KAFKA-2440





[jira] [Commented] (KAFKA-1282) Disconnect idle socket connection in Selector

2015-08-20 Thread nicu marasoiu (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1282?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14705068#comment-14705068
 ] 

nicu marasoiu commented on KAFKA-1282:
--

Hi,

I noticed that the dependencies are done and I will resume this task.
The task contributions had been:
- a fix
- unit test(s)

As far as the fix is concerned, I noticed that it is already present in the 
current Selector: lruConnections is a LinkedHashMap with accessOrder=true. 
This was the only change needed, and I am 100% convinced the fix is already in 
place.
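
The access-order behaviour referred to here can be demonstrated with plain java.util.LinkedHashMap (a generic sketch of the technique, not the Selector code itself):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public final class LruOrderDemo {
    public static void main(String[] args) {
        // accessOrder=true (third constructor argument) makes iteration order
        // follow least-recent access instead of insertion, so the first entry
        // is always the best candidate for an idle-connection disconnect.
        Map<String, Long> lru = new LinkedHashMap<>(16, 0.75f, true);
        lru.put("conn-1", 1L);
        lru.put("conn-2", 2L);
        lru.put("conn-3", 3L);
        lru.get("conn-1"); // touching conn-1 moves it to the most-recent end

        // Least recently used connection now appears first:
        System.out.println(lru.keySet().iterator().next()); // prints conn-2
    }
}
```

An idle-disconnect pass can then simply walk the map from the front and close entries whose last-use timestamp is older than the idle limit.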

I already have a unit test too, I will try to put a patch here this week.

Just wanted to mention that the old connections should be closed by the kafka 
installations using the new reusable network code.

Thanks
Nicu

 Disconnect idle socket connection in Selector
 -

 Key: KAFKA-1282
 URL: https://issues.apache.org/jira/browse/KAFKA-1282
 Project: Kafka
  Issue Type: Bug
  Components: producer 
Affects Versions: 0.8.2.0
Reporter: Jun Rao
Assignee: nicu marasoiu
  Labels: newbie++
 Attachments: 1282_access-order_+_test_(same_class).patch, 
 KAFKA-1282_Disconnect_idle_socket_connection_in_Selector.patch, 
 access-order_+_test.patch, 
 access_order_+_test_waiting_from_350ms_to_1100ms.patch


 To reduce # socket connections, it would be useful for the new producer to 
 close socket connections that are idle. We can introduce a new producer 
 config for the idle time.





[jira] [Commented] (KAFKA-873) Consider replacing zkclient with curator (with zkclient-bridge)

2015-08-20 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14705071#comment-14705071
 ] 

Ismael Juma commented on KAFKA-873:
---

Would there be an advantage to going to the bridge first?

 Consider replacing zkclient with curator (with zkclient-bridge)
 ---

 Key: KAFKA-873
 URL: https://issues.apache.org/jira/browse/KAFKA-873
 Project: Kafka
  Issue Type: Improvement
Affects Versions: 0.8.0
Reporter: Scott Clasen
Assignee: Grant Henke

 If zkclient was replaced with curator and curator-x-zkclient-bridge it would 
 be initially a drop-in replacement
 https://github.com/Netflix/curator/wiki/ZKClient-Bridge
 With the addition of a few more props to ZkConfig, and a bit of code this 
 would open up the possibility of using ACLs in zookeeper (which arent 
 supported directly by zkclient), as well as integrating with netflix 
 exhibitor for those of us using that.
 Looks like KafkaZookeeperClient needs some love anyhow...





[jira] [Commented] (KAFKA-873) Consider replacing zkclient with curator (with zkclient-bridge)

2015-08-20 Thread Flavio Junqueira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14705096#comment-14705096
 ] 

Flavio Junqueira commented on KAFKA-873:


ZkClient is used in a number of places across the code base and using the 
bridge would let us have a smoother transition over a few jiras. But, using the 
bridge still requires ZkClient and Curator co-existing, so the approach of 
replacing ZkClient with Curator gradually without the bridge would also work 
fine. I was thinking that using the bridge would make the transition easier 
because the PR already exists for this issue, but I can be convinced that this 
intermediate step isn't entirely necessary. As I understand it, this issue has 
been proposed so that we could use Exhibitor with Kafka, which depends on 
Curator I believe.

 Consider replacing zkclient with curator (with zkclient-bridge)
 ---

 Key: KAFKA-873
 URL: https://issues.apache.org/jira/browse/KAFKA-873
 Project: Kafka
  Issue Type: Improvement
Affects Versions: 0.8.0
Reporter: Scott Clasen
Assignee: Grant Henke

 If zkclient was replaced with curator and curator-x-zkclient-bridge it would 
 be initially a drop-in replacement
 https://github.com/Netflix/curator/wiki/ZKClient-Bridge
 With the addition of a few more props to ZkConfig, and a bit of code this 
 would open up the possibility of using ACLs in zookeeper (which arent 
 supported directly by zkclient), as well as integrating with netflix 
 exhibitor for those of us using that.
 Looks like KafkaZookeeperClient needs some love anyhow...





[jira] [Commented] (KAFKA-2411) remove usage of BlockingChannel in the broker

2015-08-20 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2411?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14705144#comment-14705144
 ] 

ASF GitHub Bot commented on KAFKA-2411:
---

Github user ijuma closed the pull request at:

https://github.com/apache/kafka/pull/151


 remove usage of BlockingChannel in the broker
 -

 Key: KAFKA-2411
 URL: https://issues.apache.org/jira/browse/KAFKA-2411
 Project: Kafka
  Issue Type: Sub-task
  Components: security
Reporter: Jun Rao
Assignee: Ismael Juma
 Fix For: 0.8.3


 In KAFKA-1690, we are adding the SSL support at Selector. However, there are 
 still a few places where we use BlockingChannel for inter-broker 
 communication. We need to replace those usages with Selector/NetworkClient to 
 enable inter-broker communication over SSL. Specifically, BlockingChannel is 
 currently used in the following places.
 1. ControllerChannelManager: for the controller to propagate metadata to the 
 brokers.
 2. KafkaServer: for the broker to send controlled shutdown request to the 
 controller.
 3. -AbstractFetcherThread: for the follower to fetch data from the leader 
 (through SimpleConsumer)- moved to KAFKA-2440





[jira] [Updated] (KAFKA-1651) Removed some extra whitespace in KafkaServer.scala

2015-08-20 Thread Manikumar Reddy (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1651?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar Reddy updated KAFKA-1651:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

These unwanted whitespace characters were removed in other JIRAs.

 Removed some extra whitespace in KafkaServer.scala
 --

 Key: KAFKA-1651
 URL: https://issues.apache.org/jira/browse/KAFKA-1651
 Project: Kafka
  Issue Type: Improvement
  Components: core
Affects Versions: 0.8.2.0
Reporter: Jonathan Creasy
Priority: Trivial
  Labels: newbie
 Fix For: 0.8.3

 Attachments: KAFKA-1651.patch, KAFKA-1651_2014-09-25_00:49:36.patch, 
 KAFKA-1651_2014-09-25_00:50:11.patch








[jira] [Updated] (KAFKA-1901) Move Kafka version to be generated in code by build (instead of in manifest)

2015-08-20 Thread Joel Koshy (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1901?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Koshy updated KAFKA-1901:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Thanks for the patch - committed to trunk.

 Move Kafka version to be generated in code by build (instead of in manifest)
 

 Key: KAFKA-1901
 URL: https://issues.apache.org/jira/browse/KAFKA-1901
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.8.2.0
Reporter: Jason Rosenberg
Assignee: Manikumar Reddy
 Attachments: KAFKA-1901.patch, KAFKA-1901_2015-06-26_13:16:29.patch, 
 KAFKA-1901_2015-07-10_16:42:53.patch, KAFKA-1901_2015-07-14_17:59:56.patch, 
 KAFKA-1901_2015-08-09_15:04:39.patch, KAFKA-1901_2015-08-20_12:35:00.patch


 With 0.8.2 (rc2), I've started seeing this warning in the logs of apps 
 deployed to our staging (both server and client):
 {code}
 2015-01-23 00:55:25,273  WARN [async-message-sender-0] common.AppInfo$ - 
 Can't read Kafka version from MANIFEST.MF. Possible cause: 
 java.lang.NullPointerException
 {code}
 The issue is that in our deployment, apps are deployed as single 'shaded' 
 jars (e.g. using the maven shade plugin). This means the MANIFEST.MF file 
 won't have a Kafka version. Instead, I suggest the Kafka build generate the 
 proper version in code, as part of the build.
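
The failure mode above follows from how manifest-based version lookup works; a minimal sketch (the class name here is illustrative): `Implementation-Version` comes from the jar's MANIFEST.MF, so when classes are loaded from a shaded jar whose manifest was rewritten (or from plain class files) the lookup yields null, and a code-generated version constant avoids the problem.

```java
public final class VersionSketch {
    // Fallback used when the manifest attribute is missing, which is exactly
    // the shaded-jar case described above.
    static String versionOrUnknown(String manifestVersion) {
        return manifestVersion == null ? "unknown" : manifestVersion;
    }

    public static void main(String[] args) {
        // getPackage() can itself be null for classes in the default package,
        // so guard before asking for Implementation-Version.
        Package p = VersionSketch.class.getPackage();
        String fromManifest = (p == null) ? null : p.getImplementationVersion();
        System.out.println(versionOrUnknown(fromManifest));
    }
}
```

Generating the version string into a source file at build time sidesteps the manifest entirely, which is what this JIRA implemented.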





[GitHub] kafka pull request: KAFKA-2015: Enable ConsoleConsumer to use new ...

2015-08-20 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/144




[jira] [Updated] (KAFKA-2015) Enable ConsoleConsumer to use new consumer

2015-08-20 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2015?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-2015:
-
   Resolution: Fixed
Fix Version/s: (was: 0.9.0)
   0.8.3
   Status: Resolved  (was: Patch Available)

Issue resolved by pull request 144
[https://github.com/apache/kafka/pull/144]

 Enable ConsoleConsumer to use new consumer
 --

 Key: KAFKA-2015
 URL: https://issues.apache.org/jira/browse/KAFKA-2015
 Project: Kafka
  Issue Type: Sub-task
  Components: consumer
Reporter: Guozhang Wang
Assignee: Ben Stopford
 Fix For: 0.8.3

 Attachments: KAFKA-2015.patch


 As titled, enable ConsoleConsumer to use new consumer.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2015) Enable ConsoleConsumer to use new consumer

2015-08-20 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14705664#comment-14705664
 ] 

ASF GitHub Bot commented on KAFKA-2015:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/144


 Enable ConsoleConsumer to use new consumer
 --

 Key: KAFKA-2015
 URL: https://issues.apache.org/jira/browse/KAFKA-2015
 Project: Kafka
  Issue Type: Sub-task
  Components: consumer
Reporter: Guozhang Wang
Assignee: Ben Stopford
 Fix For: 0.8.3

 Attachments: KAFKA-2015.patch


 As titled, enable ConsoleConsumer to use new consumer.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Review Request 35867: Patch for KAFKA-1901

2015-08-20 Thread Joel Koshy

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/35867/#review96003
---

Ship it!


Ship It!

- Joel Koshy


On Aug. 20, 2015, 7:08 a.m., Manikumar Reddy O wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/35867/
 ---
 
 (Updated Aug. 20, 2015, 7:08 a.m.)
 
 
 Review request for kafka.
 
 
 Bugs: KAFKA-1901
 https://issues.apache.org/jira/browse/KAFKA-1901
 
 
 Repository: kafka
 
 
 Description
 ---
 
 Addressing Joel's comments, rebase
 
 
 Diffs
 -
 
   build.gradle 17fc223907f6b55f2d730be81ddb8d8e07a2c7ad 
   clients/src/main/java/org/apache/kafka/clients/consumer/KafkaConsumer.java 
 3749880b765f74af117d6c44705daf170095a1b7 
   clients/src/main/java/org/apache/kafka/clients/producer/KafkaProducer.java 
 c4621e22c32c1a1fb23726d7f56004845def96ef 
   clients/src/main/java/org/apache/kafka/common/utils/AppInfoParser.java 
 PRE-CREATION 
   core/src/main/scala/kafka/common/AppInfo.scala 
 d642ca555f83c41451d4fcaa5c01a1f86eff0a1c 
   core/src/main/scala/kafka/server/KafkaServer.scala 
 0e7ba3ede78dbc995c404e0387a6be687703836a 
   core/src/main/scala/kafka/server/KafkaServerStartable.scala 
 1c1b75b4137a8b233b61739018e9cebcc3a34343 
 
 Diff: https://reviews.apache.org/r/35867/diff/
 
 
 Testing
 ---
 
 
 Thanks,
 
 Manikumar Reddy O
 




[jira] [Commented] (KAFKA-873) Consider replacing zkclient with curator (with zkclient-bridge)

2015-08-20 Thread Flavio Junqueira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14705842#comment-14705842
 ] 

Flavio Junqueira commented on KAFKA-873:


can we use shading to solve this problem?

 Consider replacing zkclient with curator (with zkclient-bridge)
 ---

 Key: KAFKA-873
 URL: https://issues.apache.org/jira/browse/KAFKA-873
 Project: Kafka
  Issue Type: Improvement
Affects Versions: 0.8.0
Reporter: Scott Clasen
Assignee: Grant Henke

 If zkclient was replaced with curator and curator-x-zkclient-bridge it would 
 be initially a drop-in replacement
 https://github.com/Netflix/curator/wiki/ZKClient-Bridge
 With the addition of a few more props to ZkConfig, and a bit of code this 
 would open up the possibility of using ACLs in zookeeper (which arent 
 supported directly by zkclient), as well as integrating with netflix 
 exhibitor for those of us using that.
 Looks like KafkaZookeeperClient needs some love anyhow...
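
 If shading were used, a relocation setup could look roughly like this 
 under the Gradle shadow plugin (a sketch only; the plugin choice and the 
 target package names are assumptions, not an agreed approach):

```groovy
// Hypothetical build.gradle fragment: bundle and relocate the ZooKeeper
// client dependencies so they cannot clash with a user's own versions.
plugins {
    id 'com.github.johnrengelman.shadow' version '7.1.2'
}

shadowJar {
    // Move the shaded classes under a Kafka-private namespace.
    relocate 'org.apache.curator', 'org.apache.kafka.shaded.curator'
    relocate 'org.I0Itec.zkclient', 'org.apache.kafka.shaded.zkclient'
}
```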



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

