[jira] [Commented] (KAFKA-2695) Handle null string/bytes protocol primitives

2016-01-14 Thread Jun Rao (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2695?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15098250#comment-15098250
 ] 

Jun Rao commented on KAFKA-2695:


[~hachikuji], when using a compacted topic, we use a null value in the message 
to indicate a delete. Are you saying the new consumer won't be able to parse 
that?

> Handle null string/bytes protocol primitives
> 
>
> Key: KAFKA-2695
> URL: https://issues.apache.org/jira/browse/KAFKA-2695
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
>
> The kafka protocol supports null bytes and string primitives by passing -1 as 
> the size, but the current deserializers implemented in 
> o.a.k.common.protocol.types.Type do not handle them.
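
For illustration only (a made-up sketch, not the patch for this ticket; the class and method names are invented), a nullable string read following that size-of-minus-one convention could look like:

{code}
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

public class NullableStringExample {
    // Reads a string prefixed by an INT16 length; a length of -1 means null.
    public static String readNullableString(ByteBuffer buffer) {
        short size = buffer.getShort();          // size prefix from the wire
        if (size < 0) {
            return null;                         // -1 signals a null string
        }
        byte[] bytes = new byte[size];
        buffer.get(bytes);
        return new String(bytes, StandardCharsets.UTF_8);
    }
}
{code}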



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-3100; Broker.createBroker should work if...

2016-01-14 Thread ijuma
GitHub user ijuma opened a pull request:

https://github.com/apache/kafka/pull/773

KAFKA-3100; Broker.createBroker should work if json is version > 2 and 
still compatible



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ijuma/kafka 
kafka-3100-create-broker-version-check

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/773.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #773


commit cab20ca25a317a04818acf3ec81ea8e082ce9b7a
Author: Ismael Juma 
Date:   2016-01-14T16:07:26Z

Broker.createBroker should work if json is version > 2 and still compatible




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Updated] (KAFKA-3100) Broker.createBroker should work if json is version > 2, but still compatible

2016-01-14 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3100?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-3100:
---
Reviewer: Jun Rao
  Status: Patch Available  (was: Open)

> Broker.createBroker should work if json is version > 2, but still compatible
> 
>
> Key: KAFKA-3100
> URL: https://issues.apache.org/jira/browse/KAFKA-3100
> Project: Kafka
>  Issue Type: Improvement
>  Components: core
>Affects Versions: 0.9.0.0
>Reporter: Ismael Juma
>Assignee: Ismael Juma
> Fix For: 0.9.0.1
>
>
> Description from Jun:
> In 0.9.0.0, the old consumer reads broker info directly from ZK and the code 
> throws an exception if the version in json is not 1 or 2. This old consumer 
> will break when we upgrade the broker json to version 3 in ZK in 0.9.1, which 
> will be an issue. We overlooked this issue in 0.9.0.0. The easiest fix is 
> probably not to check the version in ZkUtils.getBrokerInfo().
> This way, as long as we are only adding new fields in broker json, we can 
> preserve the compatibility.
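
As a rough illustration of the lenient-parsing idea (a Java sketch using Jackson; the real code is Scala in ZkUtils, and the class below is made up), reading only the known fields and ignoring the version could look like:

{code}
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

public class LenientBrokerJsonExample {
    private static final ObjectMapper MAPPER = new ObjectMapper();

    // Parse broker registration JSON without rejecting unknown versions:
    // only the fields we understand are read, extra fields are ignored.
    public static String hostAndPort(String brokerJson) throws Exception {
        JsonNode node = MAPPER.readTree(brokerJson);
        String host = node.get("host").asText();
        int port = node.get("port").asInt();
        return host + ":" + port;
    }
}
{code}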





[jira] [Commented] (KAFKA-3100) Broker.createBroker should work if json is version > 2, but still compatible

2016-01-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3100?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15098316#comment-15098316
 ] 

ASF GitHub Bot commented on KAFKA-3100:
---

GitHub user ijuma opened a pull request:

https://github.com/apache/kafka/pull/773

KAFKA-3100; Broker.createBroker should work if json is version > 2 and 
still compatible



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ijuma/kafka 
kafka-3100-create-broker-version-check

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/773.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #773


commit cab20ca25a317a04818acf3ec81ea8e082ce9b7a
Author: Ismael Juma 
Date:   2016-01-14T16:07:26Z

Broker.createBroker should work if json is version > 2 and still compatible




> Broker.createBroker should work if json is version > 2, but still compatible
> 
>
> Key: KAFKA-3100
> URL: https://issues.apache.org/jira/browse/KAFKA-3100
> Project: Kafka
>  Issue Type: Improvement
>  Components: core
>Affects Versions: 0.9.0.0
>Reporter: Ismael Juma
>Assignee: Ismael Juma
> Fix For: 0.9.0.1
>
>
> Description from Jun:
> In 0.9.0.0, the old consumer reads broker info directly from ZK and the code 
> throws an exception if the version in json is not 1 or 2. This old consumer 
> will break when we upgrade the broker json to version 3 in ZK in 0.9.1, which 
> will be an issue. We overlooked this issue in 0.9.0.0. The easiest fix is 
> probably not to check the version in ZkUtils.getBrokerInfo().
> This way, as long as we are only adding new fields in broker json, we can 
> preserve the compatibility.





[jira] [Commented] (KAFKA-3100) Broker.createBroker should work if json is version > 2, but still compatible

2016-01-14 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3100?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15098314#comment-15098314
 ] 

Ismael Juma commented on KAFKA-3100:


After some thought, it makes more sense to update the upgrade notes once the 
rack information is added (and this is mentioned in the KIP).

> Broker.createBroker should work if json is version > 2, but still compatible
> 
>
> Key: KAFKA-3100
> URL: https://issues.apache.org/jira/browse/KAFKA-3100
> Project: Kafka
>  Issue Type: Improvement
>  Components: core
>Affects Versions: 0.9.0.0
>Reporter: Ismael Juma
>Assignee: Ismael Juma
> Fix For: 0.9.0.1
>
>
> Description from Jun:
> In 0.9.0.0, the old consumer reads broker info directly from ZK and the code 
> throws an exception if the version in json is not 1 or 2. This old consumer 
> will break when we upgrade the broker json to version 3 in ZK in 0.9.1, which 
> will be an issue. We overlooked this issue in 0.9.0.0. The easiest fix is 
> probably not to check the version in ZkUtils.getBrokerInfo().
> This way, as long as we are only adding new fields in broker json, we can 
> preserve the compatibility.





[jira] [Updated] (KAFKA-1464) Add a throttling option to the Kafka replication tool

2016-01-14 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1464?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-1464:
---
Fix Version/s: 0.9.1.0

> Add a throttling option to the Kafka replication tool
> -
>
> Key: KAFKA-1464
> URL: https://issues.apache.org/jira/browse/KAFKA-1464
> Project: Kafka
>  Issue Type: New Feature
>  Components: replication
>Affects Versions: 0.8.0
>Reporter: mjuarez
>Assignee: Ismael Juma
>Priority: Minor
>  Labels: replication, replication-tools
> Fix For: 0.9.1.0
>
>
> When performing replication on new nodes of a Kafka cluster, the replication 
> process will use all available resources to replicate as fast as possible.  
> This causes performance issues (mostly disk IO and sometimes network 
> bandwidth) in a production environment, where you're trying to serve 
> downstream applications while performing maintenance on the Kafka cluster.
> An option to throttle the replication to a specific rate (in either MB/s or 
> activities/second) would help production systems to better handle maintenance 
> tasks while still serving downstream applications.





[jira] [Commented] (KAFKA-3102) Kafka server unable to connect to zookeeper

2016-01-14 Thread Mohit Anchlia (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15098512#comment-15098512
 ] 

Mohit Anchlia commented on KAFKA-3102:
--

I enabled debug and still not much info:

[2016-01-14 12:52:47,541] DEBUG sessionid:0x1524142e5c2 type:closeSession 
cxid:0x1 zxid:0x2 txntype:-11 reqpath:n/a 
(org.apache.zookeeper.server.FinalRequestProcessor)
[2016-01-14 12:52:47,543] INFO Closed socket connection for client 
/0:0:0:0:0:0:0:1:52904 which had sessionid 0x1524142e5c2 
(org.apache.zookeeper.server.NIOServerCnxn)
[2016-01-14 12:52:47,543] DEBUG Reading reply sessionid:0x1524142e5c2, 
packet:: clientPath:null serverPath:null finished:false header:: 1,-11  
replyHeader:: 1,2,0  request:: null response:: null 
(org.apache.zookeeper.ClientCnxn)
[2016-01-14 12:52:47,543] DEBUG Disconnecting client for session: 
0x1524142e5c2 (org.apache.zookeeper.ClientCnxn)
[2016-01-14 12:52:47,544] INFO Session: 0x1524142e5c2 closed 
(org.apache.zookeeper.ZooKeeper)
[2016-01-14 12:52:47,544] DEBUG Closing ZkClient...done 
(org.I0Itec.zkclient.ZkClient)
[2016-01-14 12:52:47,544] DEBUG ignoring event '{None | null}' since shutdown 
triggered (org.I0Itec.zkclient.ZkClient)
[2016-01-14 12:52:47,544] DEBUG Leaving process event 
(org.I0Itec.zkclient.ZkClient)
[2016-01-14 12:52:47,544] DEBUG Received event: WatchedEvent 
state:SyncConnected type:None path:null (org.I0Itec.zkclient.ZkClient)
[2016-01-14 12:52:47,544] DEBUG ignoring event '{None | null}' since shutdown 
triggered (org.I0Itec.zkclient.ZkClient)
[2016-01-14 12:52:47,544] DEBUG Leaving process event 
(org.I0Itec.zkclient.ZkClient)
[2016-01-14 12:52:47,544] INFO EventThread shut down 
(org.apache.zookeeper.ClientCnxn)
[2016-01-14 12:52:47,545] FATAL Fatal error during KafkaServer startup. Prepare 
to shutdown (kafka.server.KafkaServer)
org.I0Itec.zkclient.exception.ZkTimeoutException: Unable to connect to 
zookeeper server within timeout: 6000
at org.I0Itec.zkclient.ZkClient.connect(ZkClient.java:1223)


> Kafka server unable to connect to zookeeper
> ---
>
> Key: KAFKA-3102
> URL: https://issues.apache.org/jira/browse/KAFKA-3102
> Project: Kafka
>  Issue Type: Bug
>  Components: security
> Environment: RHEL 6
>Reporter: Mohit Anchlia
>
> Server disconnects from the zookeeper with the following log, and logs are 
> not indicative of any problem. It works without the security setup however. 
> I followed the security configuration steps from this site: 
> http://docs.confluent.io/2.0.0/kafka/sasl.html
> In here find the list of principals, logs and Jaas file:
> 1) Jaas file 
> KafkaServer {
> com.sun.security.auth.module.Krb5LoginModule required
> useKeyTab=true
> storeKey=true
> keyTab="/mnt/kafka/kafka/kafka.keytab"
> principal="kafka/10.24.251@example.com";
> };
> Client {
> com.sun.security.auth.module.Krb5LoginModule required
> useKeyTab=true
> storeKey=true
> keyTab="/mnt/kafka/kafka/kafka.keytab"
> principal="kafka/10.24.251@example.com";
> };
> 2) Principles from krb admin
> kadmin.local:  list_principals
> K/m...@example.com
> kadmin/ad...@example.com
> kadmin/chang...@example.com
> kadmin/ip-10-24-251-175.us-west-2.compute.inter...@example.com
> kafka/10.24.251@example.com
> krbtgt/example@example.com
> [2016-01-13 16:26:00,551] INFO starting (kafka.server.KafkaServer)
> [2016-01-13 16:26:00,557] INFO Connecting to zookeeper on localhost:2181 
> (kafka.server.KafkaServer)
> [2016-01-13 16:27:30,718] FATAL Fatal error during KafkaServer startup. 
> Prepare to shutdown (kafka.server.KafkaServer)
> org.I0Itec.zkclient.exception.ZkTimeoutException: Unable to connect to 
> zookeeper server within timeout: 6000
> at org.I0Itec.zkclient.ZkClient.connect(ZkClient.java:1223)
> at org.I0Itec.zkclient.ZkClient.(ZkClient.java:155)
> at org.I0Itec.zkclient.ZkClient.(ZkClient.java:129)
> at kafka.utils.ZkUtils$.createZkClientAndConnection(ZkUtils.scala:89)
> at kafka.utils.ZkUtils$.apply(ZkUtils.scala:71)
> at kafka.server.KafkaServer.initZk(KafkaServer.scala:278)
> at kafka.server.KafkaServer.startup(KafkaServer.scala:168)
> at 
> kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:37)
> at kafka.Kafka$.main(Kafka.scala:67)
> at kafka.Kafka.main(Kafka.scala)
> [2016-01-13 16:27:30,721] INFO shutting down (kafka.server.KafkaServer)
> [2016-01-13 16:27:30,727] INFO shut down completed (kafka.server.KafkaServer)
> [2016-01-13 16:27:30,728] FATAL Fatal error during KafkaServerStartable 
> startup. Prepare to shutdown (kafka.server.KafkaServerStartable)
> org.I0Itec.zkclient.exception.ZkTimeoutException: Unable to connect to 
> zookeeper server within timeout: 6000
> at 

[jira] [Updated] (KAFKA-3098) "partition.assignment.strategy" appears twice in documentation

2016-01-14 Thread David Jacot (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3098?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Jacot updated KAFKA-3098:
---
Assignee: David Jacot
  Status: Patch Available  (was: Open)

> "partition.assignment.strategy" appears twice in documentation
> --
>
> Key: KAFKA-3098
> URL: https://issues.apache.org/jira/browse/KAFKA-3098
> Project: Kafka
>  Issue Type: Bug
>Reporter: Gwen Shapira
>Assignee: David Jacot
>
> In the old consumer docs, "partition.assignment.strategy" appears twice; the 
> second occurrence has much better details, so keep that one :)





[jira] [Commented] (KAFKA-3106) After PUT a connector config from REST API, GET a connector config will fail

2016-01-14 Thread Ewen Cheslack-Postava (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15098458#comment-15098458
 ] 

Ewen Cheslack-Postava commented on KAFKA-3106:
--

Rebalances are going to be pretty expensive since all the connectors need to 
flush, commit offsets, and be stopped before you can safely rebalance if you 
don't want to lose any work you've already done. Therefore we should be very 
careful to only rebalance when absolutely necessary. So, generally you should 
*not* rebalance when updating an existing connector config, since the 
connector can pick up the config change without a rebalance and, depending on 
what changed, the update may never even need to trigger one.

> After  PUT a connector config from REST API, GET a connector config will fail
> -
>
> Key: KAFKA-3106
> URL: https://issues.apache.org/jira/browse/KAFKA-3106
> Project: Kafka
>  Issue Type: Bug
>  Components: copycat
>Reporter: jin xing
>Assignee: jin xing
>
> If there is already a connector in Connect and we PUT a connector config via the 
> REST API, the assignment.offset of the DistributedHerder will be below the 
> configStat.offset, so a GET of the connector config through the REST API will fail 
> because it does not pass "checkConfigSynced";
> The failure message is "Cannot get config data because config is not in sync 
> and this is not the leader";
> There needs to be a rebalance process for PUT to update the assignment.offset;





[jira] [Comment Edited] (KAFKA-3102) Kafka server unable to connect to zookeeper

2016-01-14 Thread Mohit Anchlia (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15098512#comment-15098512
 ] 

Mohit Anchlia edited comment on KAFKA-3102 at 1/14/16 5:54 PM:
---

I enabled debug and still not much info:

[2016-01-14 12:51:17,404] DEBUG zookeeper.disableAutoWatchReset is false 
(org.apache.zookeeper.ClientCnxn)
[2016-01-14 12:51:17,418] DEBUG Awaiting connection to Zookeeper server 
(org.I0Itec.zkclient.ZkClient)
[2016-01-14 12:51:17,418] INFO Waiting for keeper state SaslAuthenticated 
(org.I0Itec.zkclient.ZkClient)
[2016-01-14 12:51:17,420] DEBUG JAAS loginContext is: Client 
(org.apache.zookeeper.client.ZooKeeperSaslClient)
[2016-01-14 12:51:23,419] DEBUG Closing ZkClient... 
(org.I0Itec.zkclient.ZkClient)
[2016-01-14 12:51:23,419] INFO Terminate ZkClient event thread. 
(org.I0Itec.zkclient.ZkEventThread)
[2016-01-14 12:51:23,419] DEBUG Closing ZooKeeper connected to localhost:2181 
(org.I0Itec.zkclient.ZkConnection)
[2016-01-14 12:51:23,419] DEBUG Closing session: 0x0 
(org.apache.zookeeper.ZooKeeper)
[2016-01-14 12:51:23,419] DEBUG Closing client for session: 0x0 
(org.apache.zookeeper.ClientCnxn)
[2016-01-14 12:52:47,501] WARN SASL configuration failed: 
javax.security.auth.login.LoginException: Receive timed out Will continue 
connection to Zookeeper server without SASL authentication, if Zookeeper server 
allows it. (org.apache.zookeeper.ClientCnxn)
[2016-01-14 12:52:47,503] INFO Opening socket connection to server 
localhost/0:0:0:0:0:0:0:1:2181 (org.apache.zookeeper.ClientCnxn)
[2016-01-14 12:52:47,503] DEBUG Received event: WatchedEvent state:AuthFailed 
type:None path:null (org.I0Itec.zkclient.ZkClient)
[2016-01-14 12:52:47,509] INFO Accepted socket connection from 
/0:0:0:0:0:0:0:1:52904 (org.apache.zookeeper.server.NIOServerCnxnFactory)
[2016-01-14 12:52:47,514] INFO Socket connection established to 
localhost/0:0:0:0:0:0:0:1:2181, initiating session 
(org.apache.zookeeper.ClientCnxn)
[2016-01-14 12:52:47,515] DEBUG Session establishment request sent on 
localhost/0:0:0:0:0:0:0:1:2181 (org.apache.zookeeper.ClientCnxn)
[2016-01-14 12:52:47,519] DEBUG Session establishment request from client 
/0:0:0:0:0:0:0:1:52904 client's lastZxid is 0x0 
(org.apache.zookeeper.server.ZooKeeperServer)
[2016-01-14 12:52:47,521] INFO Client attempting to establish new session at 
/0:0:0:0:0:0:0:1:52904 (org.apache.zookeeper.server.ZooKeeperServer)
[2016-01-14 12:52:47,524] INFO Creating new log file: log.1 
(org.apache.zookeeper.server.persistence.FileTxnLog)
[2016-01-14 12:52:47,528] DEBUG Processing request:: 
sessionid:0x1524142e5c2 type:createSession cxid:0x0 zxid:0x1 txntype:-10 
reqpath:n/a (org.apache.zookeeper.server.FinalRequestProcessor)
[2016-01-14 12:52:47,533] DEBUG sessionid:0x1524142e5c2 type:createSession 
cxid:0x0 zxid:0x1 txntype:-10 reqpath:n/a 
(org.apache.zookeeper.server.FinalRequestProcessor)
[2016-01-14 12:52:47,537] INFO Established session 0x1524142e5c2 with 
negotiated timeout 6000 for client /0:0:0:0:0:0:0:1:52904 
(org.apache.zookeeper.server.ZooKeeperServer)
[2016-01-14 12:52:47,539] INFO Session establishment complete on server 
localhost/0:0:0:0:0:0:0:1:2181, sessionid = 0x1524142e5c2, negotiated 
timeout = 6000 (org.apache.zookeeper.ClientCnxn)
[2016-01-14 12:52:47,541] INFO Processed session termination for sessionid: 
0x1524142e5c2 (org.apache.zookeeper.server.PrepRequestProcessor)
[2016-01-14 12:52:47,541] DEBUG Processing request:: 
sessionid:0x1524142e5c2 type:closeSession cxid:0x1 zxid:0x2 txntype:-11 
reqpath:n/a (org.apache.zookeeper.server.FinalRequestProcessor)
[2016-01-14 12:52:47,541] DEBUG sessionid:0x1524142e5c2 type:closeSession 
cxid:0x1 zxid:0x2 txntype:-11 reqpath:n/a 
(org.apache.zookeeper.server.FinalRequestProcessor)
[2016-01-14 12:52:47,543] INFO Closed socket connection for client 
/0:0:0:0:0:0:0:1:52904 which had sessionid 0x1524142e5c2 
(org.apache.zookeeper.server.NIOServerCnxn)
[2016-01-14 12:52:47,543] DEBUG Reading reply sessionid:0x1524142e5c2, 
packet:: clientPath:null serverPath:null finished:false header:: 1,-11  
replyHeader:: 1,2,0  request:: null response:: null 
(org.apache.zookeeper.ClientCnxn)

[2016-01-14 12:52:47,541] DEBUG sessionid:0x1524142e5c2 type:closeSession 
cxid:0x1 zxid:0x2 txntype:-11 reqpath:n/a 
(org.apache.zookeeper.server.FinalRequestProcessor)
[2016-01-14 12:52:47,543] INFO Closed socket connection for client 
/0:0:0:0:0:0:0:1:52904 which had sessionid 0x1524142e5c2 
(org.apache.zookeeper.server.NIOServerCnxn)
[2016-01-14 12:52:47,543] DEBUG Reading reply sessionid:0x1524142e5c2, 
packet:: clientPath:null serverPath:null finished:false header:: 1,-11  
replyHeader:: 1,2,0  request:: null response:: null 
(org.apache.zookeeper.ClientCnxn)
[2016-01-14 12:52:47,543] DEBUG Disconnecting client for session: 
0x1524142e5c2 (org.apache.zookeeper.ClientCnxn)

[jira] [Commented] (KAFKA-3098) "partition.assignment.strategy" appears twice in documentation

2016-01-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15098531#comment-15098531
 ] 

ASF GitHub Bot commented on KAFKA-3098:
---

GitHub user dajac opened a pull request:

https://github.com/apache/kafka/pull/774

KAFKA-3098: "partition.assignment.strategy" appears twice in documentation



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/dajac/kafka KAFKA-3098

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/774.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #774


commit ef5fd379f90ed621e1a2bf08be707235eedaa877
Author: David Jacot 
Date:   2016-01-14T18:05:51Z

"partition.assignment.strategy" appears twice in documentation




> "partition.assignment.strategy" appears twice in documentation
> --
>
> Key: KAFKA-3098
> URL: https://issues.apache.org/jira/browse/KAFKA-3098
> Project: Kafka
>  Issue Type: Bug
>Reporter: Gwen Shapira
>
> In the old consumer docs, "partition.assignment.strategy" appears twice; the 
> second occurrence has much better details, so keep that one :)





[GitHub] kafka pull request: KAFKA-3098: "partition.assignment.strategy" ap...

2016-01-14 Thread dajac
GitHub user dajac opened a pull request:

https://github.com/apache/kafka/pull/774

KAFKA-3098: "partition.assignment.strategy" appears twice in documentation



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/dajac/kafka KAFKA-3098

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/774.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #774


commit ef5fd379f90ed621e1a2bf08be707235eedaa877
Author: David Jacot 
Date:   2016-01-14T18:05:51Z

"partition.assignment.strategy" appears twice in documentation






[jira] [Assigned] (KAFKA-3050) Space in the value for "host.name" causes "Unresolved address"

2016-01-14 Thread Sylwester (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3050?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylwester reassigned KAFKA-3050:


Assignee: Sylwester

> Space in the value for "host.name" causes "Unresolved address"
> --
>
> Key: KAFKA-3050
> URL: https://issues.apache.org/jira/browse/KAFKA-3050
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.2.1
>Reporter: Navin Markandeya
>Assignee: Sylwester
>  Labels: newbie
>
> In {{/config/server.properties}}, after updating {{host.name}} to a value with 
> a space after "localhost", I received
> {code}
> [2015-12-30 11:13:43,014] FATAL Fatal error during KafkaServerStartable 
> startup. Prepare to shutdown (kafka.server.KafkaServerStartable)
> kafka.common.KafkaException: Socket server failed to bind to localhost :9092: 
> Unresolved address.
>   at kafka.network.Acceptor.openServerSocket(SocketServer.scala:260)
>   at kafka.network.Acceptor.(SocketServer.scala:205)
>   at kafka.network.SocketServer.startup(SocketServer.scala:86)
>   at kafka.server.KafkaServer.startup(KafkaServer.scala:99)
>   at 
> kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:29)
>   at kafka.Kafka$.main(Kafka.scala:46)
>   at kafka.Kafka.main(Kafka.scala)
> Caused by: java.net.SocketException: Unresolved address
>   at sun.nio.ch.Net.translateToSocketException(Net.java:131)
>   at sun.nio.ch.Net.translateException(Net.java:157)
>   at sun.nio.ch.Net.translateException(Net.java:163)
>   at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:76)
>   at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:67)
>   at kafka.network.Acceptor.openServerSocket(SocketServer.scala:256)
>   ... 6 more
> Caused by: java.nio.channels.UnresolvedAddressException
>   at sun.nio.ch.Net.checkAddress(Net.java:101)
>   at 
> sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:218)
>   at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
>   ... 8 more
> {code}
> I am running {{kafka_2.9.1-0.8.2.2}} on CentOS 6.5 with Java 8
> {code}
> java version "1.8.0_60"
> Java(TM) SE Runtime Environment (build 1.8.0_60-b27)
> Java HotSpot(TM) 64-Bit Server VM (build 25.60-b23, mixed mode)
> {code}
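
A hypothetical defensive sketch (not the actual fix for this ticket; the helper below is invented for illustration): trimming the configured host name before building the bind address avoids the unresolved-address failure shown above.

{code}
import java.net.InetSocketAddress;

public class HostNameTrimExample {
    // Trim whitespace from the configured host so "localhost " still resolves.
    public static InetSocketAddress bindAddress(String configuredHost, int port) {
        String host = configuredHost == null ? null : configuredHost.trim();
        return (host == null || host.isEmpty())
                ? new InetSocketAddress(port)          // bind to all interfaces
                : new InetSocketAddress(host, port);
    }

    public static void main(String[] args) {
        System.out.println(bindAddress("localhost ", 9092));
    }
}
{code}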





[jira] [Created] (KAFKA-3108) KStream custom Partitioner for windowed key

2016-01-14 Thread Yasuhiro Matsuda (JIRA)
Yasuhiro Matsuda created KAFKA-3108:
---

 Summary: KStream custom Partitioner for windowed key
 Key: KAFKA-3108
 URL: https://issues.apache.org/jira/browse/KAFKA-3108
 Project: Kafka
  Issue Type: Sub-task
  Components: kafka streams
Affects Versions: 0.9.1.0
Reporter: Yasuhiro Matsuda
Assignee: Yasuhiro Matsuda








[GitHub] kafka pull request: KAFKA-3080: Fix ConsoleConsumerTest by checkin...

2016-01-14 Thread ewencp
GitHub user ewencp opened a pull request:

https://github.com/apache/kafka/pull/770

KAFKA-3080: Fix ConsoleConsumerTest by checking version when service is 
started

The MessageFormatter being used was only introduced as of 0.9.0.0. The Kafka
version in some tests is changed dynamically, sometimes from trunk back to 
an
earlier version, so this option must be set based on the version used when 
the
service is started, not when it is created.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ewencp/kafka 
kafka-3080-system-test-console-consumer-version-failure

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/770.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #770


commit 0f0d33fe251f7dd94bd580c7e368f763ea680aea
Author: Ewen Cheslack-Postava 
Date:   2016-01-14T07:54:55Z

KAFKA-3080: Fix ConsoleConsumerTest by checking version when service is 
started

The MessageFormatter being used was only introduced as of 0.9.0.0. The Kafka
version in some tests is changed dynamically, sometimes from trunk back to 
an
earlier version, so this option must be set based on the version used when 
the
service is started, not when it is created.






[jira] [Commented] (KAFKA-3080) ConsoleConsumerTest.test_version system test fails consistently

2016-01-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3080?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15097796#comment-15097796
 ] 

ASF GitHub Bot commented on KAFKA-3080:
---

GitHub user ewencp opened a pull request:

https://github.com/apache/kafka/pull/770

KAFKA-3080: Fix ConsoleConsumerTest by checking version when service is 
started

The MessageFormatter being used was only introduced as of 0.9.0.0. The Kafka
version in some tests is changed dynamically, sometimes from trunk back to 
an
earlier version, so this option must be set based on the version used when 
the
service is started, not when it is created.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ewencp/kafka 
kafka-3080-system-test-console-consumer-version-failure

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/770.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #770


commit 0f0d33fe251f7dd94bd580c7e368f763ea680aea
Author: Ewen Cheslack-Postava 
Date:   2016-01-14T07:54:55Z

KAFKA-3080: Fix ConsoleConsumerTest by checking version when service is 
started

The MessageFormatter being used was only introduced as of 0.9.0.0. The Kafka
version in some tests is changed dynamically, sometimes from trunk back to 
an
earlier version, so this option must be set based on the version used when 
the
service is started, not when it is created.




> ConsoleConsumerTest.test_version system test fails consistently
> ---
>
> Key: KAFKA-3080
> URL: https://issues.apache.org/jira/browse/KAFKA-3080
> Project: Kafka
>  Issue Type: Bug
>  Components: system tests
>Reporter: Ewen Cheslack-Postava
>
> This test on trunk is failing consistently:
> {quote}
> test_id:
> 2016-01-07--001.kafkatest.sanity_checks.test_console_consumer.ConsoleConsumerTest.test_version
> status: FAIL
> run time:   38.451 seconds
> num_produced: 1000, num_consumed: 0
> Traceback (most recent call last):
>   File 
> "/var/lib/jenkins/workspace/kafka_system_tests/kafka/venv/local/lib/python2.7/site-packages/ducktape-0.3.8-py2.7.egg/ducktape/tests/runner.py",
>  line 101, in run_all_tests
> result.data = self.run_single_test()
>   File 
> "/var/lib/jenkins/workspace/kafka_system_tests/kafka/venv/local/lib/python2.7/site-packages/ducktape-0.3.8-py2.7.egg/ducktape/tests/runner.py",
>  line 151, in run_single_test
> return self.current_test_context.function(self.current_test)
>   File 
> "/var/lib/jenkins/workspace/kafka_system_tests/kafka/tests/kafkatest/sanity_checks/test_console_consumer.py",
>  line 93, in test_version
> assert num_produced == num_consumed, "num_produced: %d, num_consumed: %d" 
> % (num_produced, num_consumed)
> AssertionError: num_produced: 1000, num_consumed: 0
> {quote}
> Example run where it fails: 
> http://jenkins.confluent.io/job/kafka_system_tests/79/console





Re: [VOTE] KIP-41: Consumer Max Records

2016-01-14 Thread Ismael Juma
+1 (non-binding)

On Thu, Jan 14, 2016 at 6:26 AM, Neha Narkhede  wrote:

> +1 (binding)
>
> On Wed, Jan 13, 2016 at 9:10 PM -0800, "Joel Koshy" 
> wrote:
>
> +1
>
> On Wed, Jan 13, 2016 at 3:18 PM, Jason Gustafson  wrote:
>
> > Hi All,
> >
> > I'd like to open up the vote on KIP-41. This KIP adds a new consumer
> > configuration option "max.poll.records" which sets an upper bound on the
> > number of records returned in a call to poll(). This gives users a way to
> > limit message processing time to avoid unexpected rebalancing. This change
> > is backwards compatible with the default implementing the current behavior.
> >
> > Here's a link to the KIP wiki:
> >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-41%3A+KafkaConsumer+Max+Records
> >
> > Thanks,
> > Jason
> >
>

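For reference, an illustrative consumer setup using the proposed "max.poll.records" option once the KIP lands (the bootstrap servers, topic, and group id below are placeholders):

```java
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class MaxPollRecordsExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "example-group");
        props.put("key.deserializer",
                  "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                  "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("max.poll.records", "100");   // cap records returned per poll()

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("example-topic"));
            ConsumerRecords<String, String> records = consumer.poll(1000);
            System.out.println("Fetched " + records.count() + " records");
        }
    }
}
```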

[jira] [Updated] (KAFKA-3106) After PUT a connector config from REST API, GET a connector config will fail

2016-01-14 Thread jin xing (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3106?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jin xing updated KAFKA-3106:

Description: 
If there is already a connector in Connect and we PUT a connector config via the 
REST API, the assignment.offset of the DistributedHerder will be below the 
configStat.offset, so a GET of the connector config through the REST API will fail 
because it does not pass "checkConfigSynced";
The failure message is "Cannot get config data because config is not in sync and 
this is not the leader";
There needs to be a rebalance process for PUT to update the assignment.offset;

  was:
If there is already a connector in Connect and we PUT a connector config via the 
REST API, the assignment.offset of the DistributedHerder will be below the 
configStat.offset, so a GET of the connector config through the REST API will fail 
because it does not pass "checkConfigSynced";
The failure message is "Cannot get config data because config is not in sync and 
this is not the leader";
There is a rebalance process for a PUT of a connector to update the 
assignment.offset;


> After  PUT a connector config from REST API, GET a connector config will fail
> -
>
> Key: KAFKA-3106
> URL: https://issues.apache.org/jira/browse/KAFKA-3106
> Project: Kafka
>  Issue Type: Bug
>  Components: copycat
>Reporter: jin xing
>Assignee: jin xing
>
> If there is already a connector in Connect and we PUT a connector config via the 
> REST API, the assignment.offset of the DistributedHerder will be below the 
> configStat.offset, so a GET of the connector config through the REST API will fail 
> because it does not pass "checkConfigSynced";
> The failure message is "Cannot get config data because config is not in sync 
> and this is not the leader";
> There needs to be a rebalance process for PUT to update the assignment.offset;





[jira] [Commented] (KAFKA-3105) Use `Utils.atomicMoveWithFallback` instead of `File.rename`

2016-01-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15097841#comment-15097841
 ] 

ASF GitHub Bot commented on KAFKA-3105:
---

GitHub user ijuma opened a pull request:

https://github.com/apache/kafka/pull/771

KAFKA-3105: Use `Utils.atomicMoveWithFallback` instead of `File.rename`

It behaves better on Windows and provides more useful error messages.

Also:
* Minor inconsistency fix in `kafka.server.OffsetCheckpoint`.
* Remove delete from `streams.state.OffsetCheckpoint` constructor (similar 
to the change in `kafka.server.OffsetCheckpoint` in 
https://github.com/apache/kafka/commit/836cb1963330a9e342379899e0fe52b72347736e#diff-2503b32f29cbbd61ed8316f127829455L29).
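
For reference, the pattern behind an atomic move with fallback can be sketched like this (illustrative only, not the actual `Utils.atomicMoveWithFallback` implementation):

```java
import java.io.IOException;
import java.nio.file.AtomicMoveNotSupportedException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class AtomicMoveExample {
    // Try an atomic rename first; fall back to a plain replace-existing move
    // when the filesystem does not support atomic moves.
    public static void moveAtomicallyIfPossible(Path source, Path target) throws IOException {
        try {
            Files.move(source, target, StandardCopyOption.ATOMIC_MOVE);
        } catch (AtomicMoveNotSupportedException e) {
            Files.move(source, target, StandardCopyOption.REPLACE_EXISTING);
        }
    }
}
```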


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ijuma/kafka 
kafka-3105-use-atomic-move-with-fallback-instead-of-rename

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/771.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #771


commit e4c94793bd0bd82baf9c3be61046b719182878bf
Author: Ismael Juma 
Date:   2016-01-14T08:45:49Z

Use `Utils.atomicMoveWithFallback` instead of `File.rename`

It behaves better on Windows and provides more useful error
messages.

commit e9894b9691f8d865b8c1a3afe989ae17ccbf15fe
Author: Ismael Juma 
Date:   2016-01-14T08:47:08Z

Minor inconsistency fix in `OffsetCheckpoint.malformedLineException`

commit 29372fa2d3fbe4cfb5b4b88184539e5c9ac405b2
Author: Ismael Juma 
Date:   2016-01-14T08:48:44Z

Remove delete from `streams.state.OffsetCheckpoint` constructor

This is similar to the change in kafka.server.OffsetCheckpoint.




> Use `Utils.atomicMoveWithFallback` instead of `File.rename`
> ---
>
> Key: KAFKA-3105
> URL: https://issues.apache.org/jira/browse/KAFKA-3105
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Ismael Juma
>Assignee: Ismael Juma
> Fix For: 0.9.1.0
>
>
> It behaves better on Windows and provides more useful error messages.





[jira] [Commented] (KAFKA-3094) Kafka process 100% CPU when no message in topic

2016-01-14 Thread Jens Rantil (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3094?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15097948#comment-15097948
 ] 

Jens Rantil commented on KAFKA-3094:


Also, are you seeing anything specific in the Kafka logs?

> Kafka process 100% CPU when no message in topic
> ---
>
> Key: KAFKA-3094
> URL: https://issues.apache.org/jira/browse/KAFKA-3094
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.0
>Reporter: Omar AL Zabir
>
> When there's no message in a kafka topic and it is not getting any traffic 
> for some time, all the kafka nodes go 100% CPU. 
> As soon as I post a message, the CPU comes back to normal. 





[jira] [Commented] (KAFKA-3094) Kafka process 100% CPU when no message in topic

2016-01-14 Thread Jens Rantil (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3094?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15097946#comment-15097946
 ] 

Jens Rantil commented on KAFKA-3094:


Omar: Can you recreate this issue? If so, would it be possible for you to see 
which thread in a `jstack` dump is using the CPU? Here's a link describing how to 
do this: 
http://code.nomad-labs.com/2010/11/18/identifying-which-java-thread-is-consuming-most-cpu/
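
As a self-contained alternative to that recipe (shown only as an illustration; the class name is made up), per-thread CPU time can also be queried in-process via ThreadMXBean:

{code}
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

public class HotThreadFinder {
    public static void main(String[] args) {
        ThreadMXBean threads = ManagementFactory.getThreadMXBean();
        long hottestId = -1;
        long hottestCpuNanos = -1;
        for (long id : threads.getAllThreadIds()) {
            long cpu = threads.getThreadCpuTime(id);   // -1 if unsupported or thread gone
            if (cpu > hottestCpuNanos) {
                hottestCpuNanos = cpu;
                hottestId = id;
            }
        }
        if (hottestId > 0) {
            ThreadInfo info = threads.getThreadInfo(hottestId);
            if (info != null) {
                System.out.printf("Hottest thread: %s (%.1f s CPU)%n",
                        info.getThreadName(), hottestCpuNanos / 1e9);
            }
        }
    }
}
{code}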

> Kafka process 100% CPU when no message in topic
> ---
>
> Key: KAFKA-3094
> URL: https://issues.apache.org/jira/browse/KAFKA-3094
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.0
>Reporter: Omar AL Zabir
>
> When there's no message in a kafka topic and it is not getting any traffic 
> for some time, all the kafka nodes go 100% CPU. 
> As soon as I post a message, the CPU comes back to normal. 





[GitHub] kafka pull request: KAFKA-3105: Use `Utils.atomicMoveWithFallback`...

2016-01-14 Thread ijuma
GitHub user ijuma opened a pull request:

https://github.com/apache/kafka/pull/771

KAFKA-3105: Use `Utils.atomicMoveWithFallback` instead of `File.rename`

It behaves better on Windows and provides more useful error messages.

Also:
* Minor inconsistency fix in `kafka.server.OffsetCheckpoint`.
* Remove delete from `streams.state.OffsetCheckpoint` constructor (similar 
to the change in `kafka.server.OffsetCheckpoint` in 
https://github.com/apache/kafka/commit/836cb1963330a9e342379899e0fe52b72347736e#diff-2503b32f29cbbd61ed8316f127829455L29).


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ijuma/kafka 
kafka-3105-use-atomic-move-with-fallback-instead-of-rename

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/771.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #771


commit e4c94793bd0bd82baf9c3be61046b719182878bf
Author: Ismael Juma 
Date:   2016-01-14T08:45:49Z

Use `Utils.atomicMoveWithFallback` instead of `File.rename`

It behaves better on Windows and provides more useful error
messages.

commit e9894b9691f8d865b8c1a3afe989ae17ccbf15fe
Author: Ismael Juma 
Date:   2016-01-14T08:47:08Z

Minor inconsistency fix in `OffsetCheckpoint.malformedLineException`

commit 29372fa2d3fbe4cfb5b4b88184539e5c9ac405b2
Author: Ismael Juma 
Date:   2016-01-14T08:48:44Z

Remove delete from `streams.state.OffsetCheckpoint` constructor

This is similar to the change in kafka.server.OffsetCheckpoint.






[jira] [Commented] (KAFKA-3106) After PUT a connector config from REST API, GET a connector config will fail

2016-01-14 Thread jin xing (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15097951#comment-15097951
 ] 

jin xing commented on KAFKA-3106:
-

[~ewencp]
Is it appropriate to call a rebalance via "member.requestRejoin()" in 
DistributedHerder::tick(), regardless of whether the config is for a new connector 
or an update to an existing one?


> After  PUT a connector config from REST API, GET a connector config will fail
> -
>
> Key: KAFKA-3106
> URL: https://issues.apache.org/jira/browse/KAFKA-3106
> Project: Kafka
>  Issue Type: Bug
>  Components: copycat
>Reporter: jin xing
>Assignee: jin xing
>
> If there is already a connector in Connect and we PUT a connector config via the 
> REST API, the assignment.offset of the DistributedHerder will be below the 
> configStat.offset, so a GET of the connector config through the REST API will fail 
> because it does not pass "checkConfigSynced";
> The failure message is "Cannot get config data because config is not in sync 
> and this is not the leader";
> There is a rebalance process for a PUT of a connector to update the 
> assignment.offset;





[jira] [Updated] (KAFKA-3106) After PUT a connector config from REST API, GET a connector config will fail

2016-01-14 Thread jin xing (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3106?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jin xing updated KAFKA-3106:

Description: 
If there is already a connector in Connect and we PUT a connector config via the 
REST API, the assignment.offset of the DistributedHerder will be below the 
configStat.offset, so a GET of the connector config through the REST API will fail 
because it does not pass "checkConfigSynced";
The failure message is "Cannot get config data because config is not in sync and 
this is not the leader";
There is a rebalance process for a PUT of a connector to update the 
assignment.offset;

  was:
If there is already a connector in Connect and we PUT a connector config via the 
REST API, the assignment.offset of the DistributedHerder will be below the 
configStat.offset, so a GET of the connector config through the REST API will fail 
because it does not pass "checkConfigSynced";
the failure message is "Cannot get config data because config is not in sync and 
this is not the leader".


> After  PUT a connector config from REST API, GET a connector config will fail
> -
>
> Key: KAFKA-3106
> URL: https://issues.apache.org/jira/browse/KAFKA-3106
> Project: Kafka
>  Issue Type: Bug
>  Components: copycat
>Reporter: jin xing
>Assignee: jin xing
>
> If there is already a connector in Connect and we PUT a connector config via the 
> REST API, the assignment.offset of the DistributedHerder will be below the 
> configStat.offset, so a GET of the connector config through the REST API will fail 
> because it does not pass "checkConfigSynced";
> The failure message is "Cannot get config data because config is not in sync 
> and this is not the leader";
> There is a rebalance process for a PUT of a connector to update the 
> assignment.offset;





Re: [VOTE] KIP-41: Consumer Max Records

2016-01-14 Thread Jens Rantil
+1

On Thu, Jan 14, 2016 at 12:18 AM, Jason Gustafson 
wrote:

> Hi All,
>
> I'd like to open up the vote on KIP-41. This KIP adds a new consumer
> configuration option "max.poll.records" which sets an upper bound on the
> number of records returned in a call to poll(). This gives users a way to
> limit message processing time to avoid unexpected rebalancing. This change
> is backwards compatible with the default implementing the current behavior.
>
> Here's a link to the KIP wiki:
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-41%3A+KafkaConsumer+Max+Records
>
> Thanks,
> Jason
>



-- 
Jens Rantil
Backend engineer
Tink AB

Email: jens.ran...@tink.se
Phone: +46 708 84 18 32
Web: www.tink.se



[GitHub] kafka pull request: KAFKA-3087: Fix retention.ms property document...

2016-01-14 Thread rajubairishetti
GitHub user rajubairishetti opened a pull request:

https://github.com/apache/kafka/pull/772

KAFKA-3087: Fix retention.ms property documentation in config docs

Log retention settings can be set at the broker level and some properties can be 
overridden at the topic level.

|Property|Default|Server Default property|Description|
|retention.ms|7 days|log.retention.minutes|This configuration controls the maximum time we will retain a log before we will discard old log segments to free up space if we are using the "delete" retention policy. This represents an SLA on how soon consumers must read their data.|

But retention.ms is in milliseconds, not minutes, so the corresponding 
*Server Default property* should be *log.retention.ms* instead of 
*log.retention.minutes*.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/rajubairishetti/kafka KAFKA-3087

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/772.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #772


commit 7ea672013f5d028986f869fc6bf6a5c409655ef7
Author: raju 
Date:   2016-01-14T08:53:04Z

KAFKA-3087: Fix retention.ms property documentation in config docs






[jira] [Created] (KAFKA-3106) After PUT a connector config from REST API, GET a connector config will fail

2016-01-14 Thread jin xing (JIRA)
jin xing created KAFKA-3106:
---

 Summary: After  PUT a connector config from REST API, GET a 
connector config will fail
 Key: KAFKA-3106
 URL: https://issues.apache.org/jira/browse/KAFKA-3106
 Project: Kafka
  Issue Type: Bug
  Components: copycat
Reporter: jin xing
Assignee: jin xing


If there is already a connector in Connect and we PUT a connector config via the 
REST API, the assignment.offset of the DistributedHerder will be below the 
configStat.offset, so a GET of the connector config through the REST API will fail 
because it does not pass "checkConfigSynced";
the failure message is "Cannot get config data because config is not in sync and 
this is not the leader".





[jira] [Commented] (KAFKA-3087) Fix documentation for retention.ms property and update documentation for LogConfig.scala class

2016-01-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15097844#comment-15097844
 ] 

ASF GitHub Bot commented on KAFKA-3087:
---

GitHub user rajubairishetti opened a pull request:

https://github.com/apache/kafka/pull/772

KAFKA-3087: Fix retention.ms property documentation in config docs

Log retention settings can be set at the broker level and some properties can be 
overridden at the topic level.

|Property|Default|Server Default property|Description|
|retention.ms|7 days|log.retention.minutes|This configuration controls the maximum time we will retain a log before we will discard old log segments to free up space if we are using the "delete" retention policy. This represents an SLA on how soon consumers must read their data.|

But retention.ms is in milliseconds, not minutes, so the corresponding 
*Server Default property* should be *log.retention.ms* instead of 
*log.retention.minutes*.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/rajubairishetti/kafka KAFKA-3087

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/772.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #772


commit 7ea672013f5d028986f869fc6bf6a5c409655ef7
Author: raju 
Date:   2016-01-14T08:53:04Z

KAFKA-3087: Fix retention.ms property documentation in config docs




> Fix documentation for retention.ms property and update documentation for 
> LogConfig.scala class
> --
>
> Key: KAFKA-3087
> URL: https://issues.apache.org/jira/browse/KAFKA-3087
> Project: Kafka
>  Issue Type: Bug
>  Components: log
>Reporter: Raju Bairishetti
>Assignee: Jay Kreps
>Priority: Critical
>  Labels: documentation
>
> Log retention settings can be set at the broker level and some properties can be 
> overridden at the topic level.
> |Property|Default|Server Default property|Description|
> |retention.ms|7 days|log.retention.minutes|This configuration controls the maximum time we will retain a log before we will discard old log segments to free up space if we are using the "delete" retention policy. This represents an SLA on how soon consumers must read their data.|
> But retention.ms is in milliseconds, not minutes, so the corresponding *Server 
> Default property* should be *log.retention.ms* instead of 
> *log.retention.minutes*.
> It would be better to mention whether the time is in millis/minutes/hours on the 
> documentation page and to document it in the code as well (right now the code 
> just says *age*; we should specify the age with its time granularity).





[jira] [Updated] (KAFKA-3105) Use `Utils.atomicMoveWithFallback` instead of `File.rename`

2016-01-14 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3105?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-3105:
---
Status: Patch Available  (was: Open)

> Use `Utils.atomicMoveWithFallback` instead of `File.rename`
> ---
>
> Key: KAFKA-3105
> URL: https://issues.apache.org/jira/browse/KAFKA-3105
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Ismael Juma
>Assignee: Ismael Juma
> Fix For: 0.9.1.0
>
>
> It behaves better on Windows and provides more useful error messages.





[jira] [Created] (KAFKA-3107) Error when trying to shut down auto balancing scheduler of controller

2016-01-14 Thread Flavio Junqueira (JIRA)
Flavio Junqueira created KAFKA-3107:
---

 Summary: Error when trying to shut down auto balancing scheduler 
of controller
 Key: KAFKA-3107
 URL: https://issues.apache.org/jira/browse/KAFKA-3107
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.8.2.1
Reporter: Flavio Junqueira


We observed the following exception when a controller was shutting down:

{noformat}
[run] Error handling event ZkEvent[New session event sent to 
kafka.controller.KafkaController$SessionExpirationListener@3278c211]
java.lang.IllegalStateException: Kafka scheduler has not been started
at kafka.utils.KafkaScheduler.ensureStarted(KafkaScheduler.scala:114)
at kafka.utils.KafkaScheduler.shutdown(KafkaScheduler.scala:86)
at 
kafka.controller.KafkaController.onControllerResignation(KafkaController.scala:350)
at 
kafka.controller.KafkaController$SessionExpirationListener$$anonfun$handleNewSession$1.apply$mcZ$sp(KafkaController.scala:1108)
at 
kafka.controller.KafkaController$SessionExpirationListener$$anonfun$handleNewSession$1.apply(KafkaController.scala:1107)
at 
kafka.controller.KafkaController$SessionExpirationListener$$anonfun$handleNewSession$1.apply(KafkaController.scala:1107)
at kafka.utils.Utils$.inLock(Utils.scala:535)
at 
kafka.controller.KafkaController$SessionExpirationListener.handleNewSession(KafkaController.scala:1107)
at org.I0Itec.zkclient.ZkClient$4.run(ZkClient.java:472)
at org.I0Itec.zkclient.ZkEventThread.run(ZkEventThread.java:71)
{noformat}

The scheduler should have been started.
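
The defensive pattern implied here (shutdown being a no-op when the scheduler was never started) can be sketched generically; this is only an illustration, not the kafka.utils.KafkaScheduler code:

{code}
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class GuardedScheduler {
    private ScheduledExecutorService executor;   // null until startup()

    public synchronized void startup() {
        if (executor == null) {
            executor = Executors.newSingleThreadScheduledExecutor();
        }
    }

    public synchronized void shutdown() {
        if (executor == null) {
            return;                              // never started: nothing to do
        }
        executor.shutdown();
        try {
            executor.awaitTermination(30, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        executor = null;
    }
}
{code}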





[jira] [Commented] (KAFKA-3108) KStream custom StreamPartitioner for windowed key

2016-01-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3108?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15099106#comment-15099106
 ] 

ASF GitHub Bot commented on KAFKA-3108:
---

GitHub user ymatsuda opened a pull request:

https://github.com/apache/kafka/pull/779

KAFKA-3108: custom StreamParitioner for Windowed key

@guozhangwang 

When ```WindowedSerializer``` is specified in ```to(...)``` or 
```through(...)``` for a key, we use ```WindowedStreamPartitioner```.
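
For intuition, the core of such a partitioner can be sketched as hashing only the inner key of the windowed key (illustrative only; this is not the actual ```WindowedStreamPartitioner``` code, and the helper class below is made up):

```java
import org.apache.kafka.common.serialization.Serializer;
import org.apache.kafka.streams.kstream.Windowed;

public class WindowedKeyPartitioning {
    // Choose a partition from the underlying record key only, so every window
    // of the same key lands on the same partition. A plain array hash is used
    // here for simplicity.
    public static <K> int partitionFor(Windowed<K> windowedKey,
                                       Serializer<K> keySerializer,
                                       String topic,
                                       int numPartitions) {
        byte[] keyBytes = keySerializer.serialize(topic, windowedKey.key());
        return (java.util.Arrays.hashCode(keyBytes) & 0x7fffffff) % numPartitions;
    }
}
```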

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ymatsuda/kafka partitioner

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/779.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #779


commit 509bdb2568931365a2a6f912588721213ea6f408
Author: Yasuhiro Matsuda 
Date:   2016-01-14T23:20:37Z

KAFKA-3108: custom StreamParitioner for Windowed key




> KStream custom StreamPartitioner for windowed key
> -
>
> Key: KAFKA-3108
> URL: https://issues.apache.org/jira/browse/KAFKA-3108
> Project: Kafka
>  Issue Type: Sub-task
>  Components: kafka streams
>Affects Versions: 0.9.1.0
>Reporter: Yasuhiro Matsuda
>Assignee: Yasuhiro Matsuda
>Priority: Minor
>






[GitHub] kafka pull request: KAFKA-3108: custom StreamParitioner for Window...

2016-01-14 Thread ymatsuda
GitHub user ymatsuda opened a pull request:

https://github.com/apache/kafka/pull/779

KAFKA-3108: custom StreamParitioner for Windowed key

@guozhangwang 

When ```WindowedSerializer``` is specified in ```to(...)``` or 
```through(...)``` for a key, we use ```WindowedStreamPartitioner```.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ymatsuda/kafka partitioner

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/779.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #779


commit 509bdb2568931365a2a6f912588721213ea6f408
Author: Yasuhiro Matsuda 
Date:   2016-01-14T23:20:37Z

KAFKA-3108: custom StreamParitioner for Windowed key






Build failed in Jenkins: kafka-trunk-jdk7 #961

2016-01-14 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] MINOR: add internal source topic for tracking

[wangguoz] KAFKA-3108: custom StreamParitioner for Windowed key

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on H11 (Ubuntu ubuntu) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url 
 > https://git-wip-us.apache.org/repos/asf/kafka.git # timeout=10
Fetching upstream changes from https://git-wip-us.apache.org/repos/asf/kafka.git
 > git --version # timeout=10
 > git -c core.askpass=true fetch --tags --progress 
 > https://git-wip-us.apache.org/repos/asf/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/trunk^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/trunk^{commit} # timeout=10
Checking out Revision 37be6d98da842512367ab0b31d8f0244afafda92 
(refs/remotes/origin/trunk)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 37be6d98da842512367ab0b31d8f0244afafda92
 > git rev-list 4f22705c7d0c8e8cab68883e76f554439341e34a # timeout=10
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51
[kafka-trunk-jdk7] $ /bin/bash -xe /tmp/hudson6942639497698924529.sh
+ 
/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2/bin/gradle
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
http://gradle.org/docs/2.4-rc-2/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.6
:downloadWrapper

BUILD SUCCESSFUL

Total time: 11.533 secs
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51
[kafka-trunk-jdk7] $ /bin/bash -xe /tmp/hudson817497444529656158.sh
+ export GRADLE_OPTS=-Xmx1024m
+ GRADLE_OPTS=-Xmx1024m
+ ./gradlew -Dorg.gradle.project.maxParallelForks=1 clean jarAll testAll
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
https://docs.gradle.org/2.10/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.6
:clean UP-TO-DATE
:clients:clean UP-TO-DATE
:connect:clean UP-TO-DATE
:core:clean UP-TO-DATE
:examples:clean UP-TO-DATE
:log4j-appender:clean UP-TO-DATE
:streams:clean UP-TO-DATE
:tools:clean UP-TO-DATE
:connect:api:clean UP-TO-DATE
:connect:file:clean UP-TO-DATE
:connect:json:clean UP-TO-DATE
:connect:runtime:clean UP-TO-DATE
:jar_core_2_10
Building project 'core' with Scala version 2.10.6
:kafka-trunk-jdk7:clients:compileJava
:jar_core_2_10 FAILED

FAILURE: Build failed with an exception.

* What went wrong:
Failed to capture snapshot of input files for task 'compileJava' during 
up-to-date check.  See stacktrace for details.
> Could not add entry 
> '/home/jenkins/.gradle/caches/modules-2/files-2.1/net.jpountz.lz4/lz4/1.3.0/c708bb2590c0652a642236ef45d9f99ff842a2ce/lz4-1.3.0.jar'
>  to cache fileHashes.bin 
> (

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug 
option to get more log output.

BUILD FAILED

Total time: 13.963 secs
Build step 'Execute shell' marked build as failure
Recording test results
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51
ERROR: Publisher 'Publish JUnit test result report' failed: No test report 
files were found. Configuration error?
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51


Build failed in Jenkins: kafka-trunk-jdk8 #289

2016-01-14 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] KAFKA-3108: custom StreamParitioner for Windowed key

--
[...truncated 2797 lines...]

kafka.log.BrokerCompressionTest > testBrokerSideCompression[17] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[18] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[19] PASSED

kafka.log.LogSegmentTest > testRecoveryWithCorruptMessage PASSED

kafka.log.LogSegmentTest > testRecoveryFixesCorruptIndex PASSED

kafka.log.LogSegmentTest > testReadFromGap PASSED

kafka.log.LogSegmentTest > testTruncate PASSED

kafka.log.LogSegmentTest > testReadBeforeFirstOffset PASSED

kafka.log.LogSegmentTest > testCreateWithInitFileSizeAppendMessage PASSED

kafka.log.LogSegmentTest > testChangeFileSuffixes PASSED

kafka.log.LogSegmentTest > testMaxOffset PASSED

kafka.log.LogSegmentTest > testNextOffsetCalculation PASSED

kafka.log.LogSegmentTest > testReadOnEmptySegment PASSED

kafka.log.LogSegmentTest > testReadAfterLast PASSED

kafka.log.LogSegmentTest > testCreateWithInitFileSizeClearShutdown PASSED

kafka.log.LogSegmentTest > testTruncateFull PASSED

kafka.log.LogConfigTest > testFromPropsEmpty PASSED

kafka.log.LogConfigTest > testKafkaConfigToProps PASSED

kafka.log.LogConfigTest > testFromPropsInvalid PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[0] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[1] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[2] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[3] PASSED

kafka.log.LogManagerTest > testCleanupSegmentsToMaintainSize PASSED

kafka.log.LogManagerTest > testRecoveryDirectoryMappingWithRelativeDirectory 
PASSED

kafka.log.LogManagerTest > testGetNonExistentLog PASSED

kafka.log.LogManagerTest > testTwoLogManagersUsingSameDirFails PASSED

kafka.log.LogManagerTest > testLeastLoadedAssignment PASSED

kafka.log.LogManagerTest > testCleanupExpiredSegments PASSED

kafka.log.LogManagerTest > testCheckpointRecoveryPoints PASSED

kafka.log.LogManagerTest > testTimeBasedFlush PASSED

kafka.log.LogManagerTest > testCreateLog PASSED

kafka.log.LogManagerTest > testRecoveryDirectoryMappingWithTrailingSlash PASSED

kafka.coordinator.MemberMetadataTest > testMatchesSupportedProtocols PASSED

kafka.coordinator.MemberMetadataTest > testMetadata PASSED

kafka.coordinator.MemberMetadataTest > testMetadataRaisesOnUnsupportedProtocol 
PASSED

kafka.coordinator.MemberMetadataTest > testVoteForPreferredProtocol PASSED

kafka.coordinator.MemberMetadataTest > testVoteRaisesOnNoSupportedProtocols 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testHeartbeatWrongCoordinator 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testDescribeGroupStable PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testHeartbeatIllegalGeneration 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testDescribeGroupWrongCoordinator PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testDescribeGroupRebalancing 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testLeaderFailureInSyncGroup 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testGenerationIdIncrementsOnRebalance PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testSyncGroupFromIllegalGeneration PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testInvalidGroupId PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testHeartbeatUnknownGroup 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testListGroupsIncludesStableGroups PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testHeartbeatDuringRebalanceCausesRebalanceInProgress PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupInconsistentGroupProtocol PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupSessionTimeoutTooLarge PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupSessionTimeoutTooSmall PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testSyncGroupEmptyAssignment 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testCommitOffsetWithDefaultGeneration PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupFromUnchangedLeaderShouldRebalance PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testHeartbeatRebalanceInProgress PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testLeaveGroupUnknownGroup 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testListGroupsIncludesRebalancingGroups PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testSyncGroupFollowerAfterLeader PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testCommitOffsetInAwaitingSync 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testJoinGroupWrongCoordinator 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupUnknownConsumerExistingGroup PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 

[jira] [Resolved] (KAFKA-3108) KStream custom StreamPartitioner for windowed key

2016-01-14 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3108?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang resolved KAFKA-3108.
--
   Resolution: Fixed
Fix Version/s: 0.9.1.0

Issue resolved by pull request 779
[https://github.com/apache/kafka/pull/779]

> KStream custom StreamPartitioner for windowed key
> -
>
> Key: KAFKA-3108
> URL: https://issues.apache.org/jira/browse/KAFKA-3108
> Project: Kafka
>  Issue Type: Sub-task
>  Components: kafka streams
>Affects Versions: 0.9.1.0
>Reporter: Yasuhiro Matsuda
>Assignee: Yasuhiro Matsuda
>Priority: Minor
> Fix For: 0.9.1.0
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3108) KStream custom StreamPartitioner for windowed key

2016-01-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3108?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15099251#comment-15099251
 ] 

ASF GitHub Bot commented on KAFKA-3108:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/779


> KStream custom StreamPartitioner for windowed key
> -
>
> Key: KAFKA-3108
> URL: https://issues.apache.org/jira/browse/KAFKA-3108
> Project: Kafka
>  Issue Type: Sub-task
>  Components: kafka streams
>Affects Versions: 0.9.1.0
>Reporter: Yasuhiro Matsuda
>Assignee: Yasuhiro Matsuda
>Priority: Minor
> Fix For: 0.9.1.0
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-3108: custom StreamParitioner for Window...

2016-01-14 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/779


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] kafka pull request: MINOR: Improve Kafka documentation

2016-01-14 Thread vahidhashemian
GitHub user vahidhashemian opened a pull request:

https://github.com/apache/kafka/pull/778

MINOR: Improve Kafka documentation

Improve the documentation by fixing typos and punctuation, and correcting the 
content.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/vahidhashemian/kafka 
typo05/fix_documentation_typos

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/778.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #778


commit 94f9ba73131470be8bcbf8d9dfcd8f7994e70a7b
Author: Vahid Hashemian 
Date:   2016-01-14T22:19:07Z

MINOR: Improve Kafka documentation

Improve the documentation by fixing typos and punctuation, and correcting
the content.




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Updated] (KAFKA-3108) KStream custom StreamPartitioner for windowed key

2016-01-14 Thread Yasuhiro Matsuda (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3108?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yasuhiro Matsuda updated KAFKA-3108:

Priority: Minor  (was: Major)

> KStream custom StreamPartitioner for windowed key
> -
>
> Key: KAFKA-3108
> URL: https://issues.apache.org/jira/browse/KAFKA-3108
> Project: Kafka
>  Issue Type: Sub-task
>  Components: kafka streams
>Affects Versions: 0.9.1.0
>Reporter: Yasuhiro Matsuda
>Assignee: Yasuhiro Matsuda
>Priority: Minor
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3108) KStream custom StreamPartitioner for windowed key

2016-01-14 Thread Yasuhiro Matsuda (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3108?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yasuhiro Matsuda updated KAFKA-3108:

Summary: KStream custom StreamPartitioner for windowed key  (was: KStream 
custom Partitioner for windowed key)

> KStream custom StreamPartitioner for windowed key
> -
>
> Key: KAFKA-3108
> URL: https://issues.apache.org/jira/browse/KAFKA-3108
> Project: Kafka
>  Issue Type: Sub-task
>  Components: kafka streams
>Affects Versions: 0.9.1.0
>Reporter: Yasuhiro Matsuda
>Assignee: Yasuhiro Matsuda
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (KAFKA-3102) Kafka server unable to connect to zookeeper

2016-01-14 Thread Mohit Anchlia (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15098512#comment-15098512
 ] 

Mohit Anchlia edited comment on KAFKA-3102 at 1/15/16 12:38 AM:


I enabled debug and still not much info:

[2016-01-14 19:36:40,052] ERROR An error: 
(java.security.PrivilegedActionException: javax.security.sasl.SaslException: 
GSS initiate failed [Caused by GSSException: No valid credentials provided 
(Mechanism level: Server not found in Kerberos database (7) - UNKNOWN_SERVER)]) 
occurred when evaluating Zookeeper Quorum Member's  received SASL token. This 
may be caused by Java's being unable to resolve the Zookeeper Quorum Member's 
hostname correctly. You may want to try to adding 
'-Dsun.net.spi.nameservice.provider.1=dns,sun' to your client's JVMFLAGS 
environment. Zookeeper Client will go to AUTH_FAILED state. 
(org.apache.zookeeper.client.ZooKeeperSaslClient)
[2016-01-14 19:36:40,052] ERROR SASL authentication with Zookeeper Quorum 
member failed: javax.security.sasl.SaslException: An error: 
(java.security.PrivilegedActionException: javax.security.sasl.SaslException: 
GSS initiate failed [Caused by GSSException: No valid credentials provided 
(Mechanism level: Server not found in Kerberos database (7) - UNKNOWN_SERVER)]) 
occurred when evaluating Zookeeper Quorum Member's  received SASL token. This 
may be caused by Java's being unable to resolve the Zookeeper Quorum Member's 
hostname correctly. You may want to try to adding 
'-Dsun.net.spi.nameservice.provider.1=dns,sun' to your client's JVMFLAGS 
environment. Zookeeper Client will go to AUTH_FAILED state. 
(org.apache.zookeeper.ClientCnxn)
[2016-01-14 19:36:40,052] DEBUG Received event: WatchedEvent state:AuthFailed 
type:None path:null (org.I0Itec.zkclient.ZkClient)
[2016-01-14 19:36:40,052] INFO zookeeper state changed (AuthFailed) 
(org.I0Itec.zkclient.ZkClient)
[2016-01-14 19:36:40,052] DEBUG Leaving process event 
(org.I0Itec.zkclient.ZkClient)
[2016-01-14 19:36:44,057] WARN caught end of stream exception 
(org.apache.zookeeper.server.NIOServerCnxn)
EndOfStreamException: Unable to read additional data from client sessionid 
0x15242b64cf8, likely client has closed socket
at 
org.apache.zookeeper.server.NIOServerCnxn.doIO(NIOServerCnxn.java:228)
at 
org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:208)
at java.lang.Thread.run(Thread.java:745)

[2016-01-14 12:52:47,541] DEBUG sessionid:0x1524142e5c2 type:closeSession 
cxid:0x1 zxid:0x2 txntype:-11 reqpath:n/a 
(org.apache.zookeeper.server.FinalRequestProcessor)
[2016-01-14 12:52:47,543] INFO Closed socket connection for client 
/0:0:0:0:0:0:0:1:52904 which had sessionid 0x1524142e5c2 
(org.apache.zookeeper.server.NIOServerCnxn)
[2016-01-14 12:52:47,543] DEBUG Reading reply sessionid:0x1524142e5c2, 
packet:: clientPath:null serverPath:null finished:false header:: 1,-11  
replyHeader:: 1,2,0  request:: null response:: null 
(org.apache.zookeeper.ClientCnxn)
[2016-01-14 12:52:47,543] DEBUG Disconnecting client for session: 
0x1524142e5c2 (org.apache.zookeeper.ClientCnxn)
[2016-01-14 12:52:47,544] INFO Session: 0x1524142e5c2 closed 
(org.apache.zookeeper.ZooKeeper)
[2016-01-14 12:52:47,544] DEBUG Closing ZkClient...done 
(org.I0Itec.zkclient.ZkClient)
[2016-01-14 12:52:47,544] DEBUG ignoring event '{None | null}' since shutdown 
triggered (org.I0Itec.zkclient.ZkClient)
[2016-01-14 12:52:47,544] DEBUG Leaving process event 
(org.I0Itec.zkclient.ZkClient)
[2016-01-14 12:52:47,544] DEBUG Received event: WatchedEvent 
state:SyncConnected type:None path:null (org.I0Itec.zkclient.ZkClient)
[2016-01-14 12:52:47,544] DEBUG ignoring event '{None | null}' since shutdown 
triggered (org.I0Itec.zkclient.ZkClient)
[2016-01-14 12:52:47,544] DEBUG Leaving process event 
(org.I0Itec.zkclient.ZkClient)
[2016-01-14 12:52:47,544] INFO EventThread shut down 
(org.apache.zookeeper.ClientCnxn)
[2016-01-14 12:52:47,545] FATAL Fatal error during KafkaServer startup. Prepare 
to shutdown (kafka.server.KafkaServer)
org.I0Itec.zkclient.exception.ZkTimeoutException: Unable to connect to 
zookeeper server within timeout: 6000
at org.I0Itec.zkclient.ZkClient.connect(ZkClient.java:1223)



was (Author: mohitanchlia):
I enabled debug and still not much info:

[2016-01-14 18:57:25,346] DEBUG JAAS loginContext is: Client 
(org.apache.zookeeper.client.ZooKeeperSaslClient)
[2016-01-14 18:57:25,445] WARN SASL configuration failed: 
javax.security.auth.login.LoginException: Checksum failed Will continue 
connection to Zookeeper server without SASL authentication, if Zookeeper server 
allows it. (org.apache.zookeeper.ClientCnxn)
[2016-01-14 18:57:25,447] INFO Opening socket connection to server 
localhost/127.0.0.1:2181 (org.apache.zookeeper.ClientCnxn)
[2016-01-14 18:57:25,447] DEBUG Received event: WatchedEvent state:AuthFailed 

[GitHub] kafka pull request: MINOR: add internal source topic for tracking

2016-01-14 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/775


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Build failed in Jenkins: kafka-trunk-jdk8 #288

2016-01-14 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] MINOR: add internal source topic for tracking

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on H11 (Ubuntu ubuntu) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url 
 > https://git-wip-us.apache.org/repos/asf/kafka.git # timeout=10
Fetching upstream changes from https://git-wip-us.apache.org/repos/asf/kafka.git
 > git --version # timeout=10
 > git -c core.askpass=true fetch --tags --progress 
 > https://git-wip-us.apache.org/repos/asf/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/trunk^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/trunk^{commit} # timeout=10
Checking out Revision a3d3d5379df71e7a2c653d06ebf1b30923dde738 
(refs/remotes/origin/trunk)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f a3d3d5379df71e7a2c653d06ebf1b30923dde738
 > git rev-list 4f22705c7d0c8e8cab68883e76f554439341e34a # timeout=10
Setting 
JDK1_8_0_45_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_45
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
[kafka-trunk-jdk8] $ /bin/bash -xe /tmp/hudson8008829539694329741.sh
+ 
/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2/bin/gradle
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
http://gradle.org/docs/2.4-rc-2/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.6
:downloadWrapper

BUILD SUCCESSFUL

Total time: 9.239 secs
Setting 
JDK1_8_0_45_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_45
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
[kafka-trunk-jdk8] $ /bin/bash -xe /tmp/hudson8339974357914235697.sh
+ export GRADLE_OPTS=-Xmx1024m
+ GRADLE_OPTS=-Xmx1024m
+ ./gradlew -Dorg.gradle.project.maxParallelForks=1 clean jarAll testAll
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
https://docs.gradle.org/2.10/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.6
:clean UP-TO-DATE
:clients:clean UP-TO-DATE
:connect:clean UP-TO-DATE
:core:clean UP-TO-DATE
:examples:clean UP-TO-DATE
:log4j-appender:clean UP-TO-DATE
:streams:clean UP-TO-DATE
:tools:clean UP-TO-DATE
:connect:api:clean UP-TO-DATE
:connect:file:clean UP-TO-DATE
:connect:json:clean UP-TO-DATE
:connect:runtime:clean UP-TO-DATE
:jar_core_2_10
Building project 'core' with Scala version 2.10.6
:kafka-trunk-jdk8:clients:compileJava
:jar_core_2_10 FAILED

FAILURE: Build failed with an exception.

* What went wrong:
Failed to capture snapshot of input files for task 'compileJava' during 
up-to-date check.  See stacktrace for details.
> Could not add entry 
> '/home/jenkins/.gradle/caches/modules-2/files-2.1/net.jpountz.lz4/lz4/1.3.0/c708bb2590c0652a642236ef45d9f99ff842a2ce/lz4-1.3.0.jar'
>  to cache fileHashes.bin 
> (

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug 
option to get more log output.

BUILD FAILED

Total time: 10.189 secs
Build step 'Execute shell' marked build as failure
Recording test results
Setting 
JDK1_8_0_45_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_45
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
ERROR: Publisher 'Publish JUnit test result report' failed: No test report 
files were found. Configuration error?
Setting 
JDK1_8_0_45_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_45
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2


[jira] [Commented] (KAFKA-2695) Handle null string/bytes protocol primitives

2016-01-14 Thread Jason Gustafson (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2695?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15098604#comment-15098604
 ] 

Jason Gustafson commented on KAFKA-2695:


[~junrao] That actually works currently because records are not parsed using 
o.a.k.common.protocol.types.Type. But most other cases will fail because the 
parsing code looks like this:

{code}
int length = buffer.getShort();
byte[] bytes = new byte[length];
{code}

It's not difficult to fix, but we'll probably need to make sure that current 
usage can handle null values without breaking in other ways.
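
For illustration, a minimal sketch of how a size-prefixed reader could tolerate the -1 convention. This is not the patch itself; the class and method names are made up for the example.

{code}
import java.nio.ByteBuffer;

public class NullableBytesSketch {

    // Read a size-prefixed byte array where a size of -1 denotes null.
    static byte[] readNullableBytes(ByteBuffer buffer) {
        short length = buffer.getShort();
        if (length < 0) {
            return null;                       // -1 size means a null value
        }
        byte[] bytes = new byte[length];
        buffer.get(bytes);
        return bytes;
    }

    public static void main(String[] args) {
        ByteBuffer nullValue = ByteBuffer.wrap(new byte[] {(byte) 0xFF, (byte) 0xFF}); // short -1
        System.out.println(readNullableBytes(nullValue));          // null

        ByteBuffer threeBytes = (ByteBuffer) ByteBuffer.allocate(5)
                .putShort((short) 3).put(new byte[] {1, 2, 3}).flip();
        System.out.println(readNullableBytes(threeBytes).length);  // 3
    }
}
{code}

As noted above, callers that currently assume a non-null array would still need to be audited before such a change could be enabled everywhere.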

> Handle null string/bytes protocol primitives
> 
>
> Key: KAFKA-2695
> URL: https://issues.apache.org/jira/browse/KAFKA-2695
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
>
> The kafka protocol supports null bytes and string primitives by passing -1 as 
> the size, but the current deserializers implemented in 
> o.a.k.common.protocol.types.Type do not handle them.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3083) a soft failure in controller may leader a topic partition in an inconsistent state

2016-01-14 Thread Mayuresh Gharat (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3083?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15098607#comment-15098607
 ] 

Mayuresh Gharat commented on KAFKA-3083:


That's a very good point, I will verify if this can happen. 
Moreover, I think the behavior should be :
1) Broker A was the controller.
2) Broker A faces a session expiration, invokes the controllerResignation and 
clears all its caches and also stops all the ongoing controller work.
3) Broker B becomes the controller and proceeds. 

> a soft failure in controller may leader a topic partition in an inconsistent 
> state
> --
>
> Key: KAFKA-3083
> URL: https://issues.apache.org/jira/browse/KAFKA-3083
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.9.0.0
>Reporter: Jun Rao
>Assignee: Mayuresh Gharat
>
> The following sequence can happen.
> 1. Broker A is the controller and is in the middle of processing a broker 
> change event. As part of this process, let's say it's about to shrink the isr 
> of a partition.
> 2. Then broker A's session expires and broker B takes over as the new 
> controller. Broker B sends the initial leaderAndIsr request to all brokers.
> 3. Broker A continues by shrinking the isr of the partition in ZK and sends 
> the new leaderAndIsr request to the broker (say C) that leads the partition. 
> Broker C will reject this leaderAndIsr since the request comes from a 
> controller with an older epoch. Now we could be in a situation that Broker C 
> thinks the isr has all replicas, but the isr stored in ZK is different.
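
For readers following along, the rejection in step 3 works because of controller-epoch fencing on the broker side. The sketch below only illustrates that check; the class and method names are invented for the example and are not Kafka's actual code.

{code}
public class EpochFencingSketch {

    // Highest controller epoch this broker has seen so far.
    private int currentControllerEpoch = 0;

    // Apply a leaderAndIsr-style request only if it comes from a controller
    // whose epoch is at least as new as the latest epoch already seen.
    synchronized boolean maybeApply(int requestControllerEpoch) {
        if (requestControllerEpoch < currentControllerEpoch) {
            return false;                       // stale controller (broker A): reject
        }
        currentControllerEpoch = requestControllerEpoch;
        return true;                            // current controller (broker B): apply
    }

    public static void main(String[] args) {
        EpochFencingSketch brokerC = new EpochFencingSketch();
        System.out.println(brokerC.maybeApply(2));  // true: initial request from new controller B
        System.out.println(brokerC.maybeApply(1));  // false: late request from old controller A
    }
}
{code}

The inconsistency described in the issue arises because the ZK write in step 3 is not fenced the same way, so broker C and the ISR stored in ZK can end up disagreeing.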



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [DISCUSS] KIP-36 - Rack aware replica assignment

2016-01-14 Thread Allen Wang
Thanks Ismael. The KIP is updated to use 0.9.0.0 and now links to the JIRA.


On Thu, Jan 14, 2016 at 8:46 AM, Ismael Juma  wrote:

> On Thu, Jan 14, 2016 at 1:24 AM, Allen Wang  wrote:
>
> > Updated KIP regarding how broker JSON version will be handled and new
> > procedure of upgrade.
>
>
> Thanks Allen. In the following text, I think we should replace 0.9.0 with
> 0.9.0.0:
>
> "Due to a bug introduced in 0.9.0 in ZkUtils.getBrokerInfo(), old clients
> will throw an exception when it sees the broker JSON version is not 1 or 2.
> Therefore, *a minor release 0.9.0.1 is required* to fix the problem first
> so that old clients can parse future version of broker JSON in ZooKeeper.
> That means 0.9.0 clients must be upgraded to 0.9.0.1 before 0.9.1 upgrade
> can start. In addition, since ZkUtils.getBrokerInfo() is also used by
> broker, version specific code has to be used when registering broker with
> ZooKeeper"
>
> Also, I posted a PR for supporting version > 2 in 0.9.0.1 and trunk:
>
> https://github.com/apache/kafka/pull/773
>
> Ismael
>


[jira] [Comment Edited] (KAFKA-3102) Kafka server unable to connect to zookeeper

2016-01-14 Thread Mohit Anchlia (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15098512#comment-15098512
 ] 

Mohit Anchlia edited comment on KAFKA-3102 at 1/15/16 12:00 AM:


I enabled debug and still not much info:

[2016-01-14 18:57:25,346] DEBUG JAAS loginContext is: Client 
(org.apache.zookeeper.client.ZooKeeperSaslClient)
[2016-01-14 18:57:25,445] WARN SASL configuration failed: 
javax.security.auth.login.LoginException: Checksum failed Will continue 
connection to Zookeeper server without SASL authentication, if Zookeeper server 
allows it. (org.apache.zookeeper.ClientCnxn)
[2016-01-14 18:57:25,447] INFO Opening socket connection to server 
localhost/127.0.0.1:2181 (org.apache.zookeeper.ClientCnxn)
[2016-01-14 18:57:25,447] DEBUG Received event: WatchedEvent state:AuthFailed 
type:None path:null (org.I0Itec.zkclient.ZkClient)
[2016-01-14 18:57:25,447] INFO zookeeper state changed (AuthFailed) 
(org.I0Itec.zkclient.ZkClient)
[2016-01-14 18:57:25,447] DEBUG Leaving process event 
(org.I0Itec.zkclient.ZkClient)

[2016-01-14 12:52:47,541] DEBUG sessionid:0x1524142e5c2 type:closeSession 
cxid:0x1 zxid:0x2 txntype:-11 reqpath:n/a 
(org.apache.zookeeper.server.FinalRequestProcessor)
[2016-01-14 12:52:47,543] INFO Closed socket connection for client 
/0:0:0:0:0:0:0:1:52904 which had sessionid 0x1524142e5c2 
(org.apache.zookeeper.server.NIOServerCnxn)
[2016-01-14 12:52:47,543] DEBUG Reading reply sessionid:0x1524142e5c2, 
packet:: clientPath:null serverPath:null finished:false header:: 1,-11  
replyHeader:: 1,2,0  request:: null response:: null 
(org.apache.zookeeper.ClientCnxn)
[2016-01-14 12:52:47,543] DEBUG Disconnecting client for session: 
0x1524142e5c2 (org.apache.zookeeper.ClientCnxn)
[2016-01-14 12:52:47,544] INFO Session: 0x1524142e5c2 closed 
(org.apache.zookeeper.ZooKeeper)
[2016-01-14 12:52:47,544] DEBUG Closing ZkClient...done 
(org.I0Itec.zkclient.ZkClient)
[2016-01-14 12:52:47,544] DEBUG ignoring event '{None | null}' since shutdown 
triggered (org.I0Itec.zkclient.ZkClient)
[2016-01-14 12:52:47,544] DEBUG Leaving process event 
(org.I0Itec.zkclient.ZkClient)
[2016-01-14 12:52:47,544] DEBUG Received event: WatchedEvent 
state:SyncConnected type:None path:null (org.I0Itec.zkclient.ZkClient)
[2016-01-14 12:52:47,544] DEBUG ignoring event '{None | null}' since shutdown 
triggered (org.I0Itec.zkclient.ZkClient)
[2016-01-14 12:52:47,544] DEBUG Leaving process event 
(org.I0Itec.zkclient.ZkClient)
[2016-01-14 12:52:47,544] INFO EventThread shut down 
(org.apache.zookeeper.ClientCnxn)
[2016-01-14 12:52:47,545] FATAL Fatal error during KafkaServer startup. Prepare 
to shutdown (kafka.server.KafkaServer)
org.I0Itec.zkclient.exception.ZkTimeoutException: Unable to connect to 
zookeeper server within timeout: 6000
at org.I0Itec.zkclient.ZkClient.connect(ZkClient.java:1223)



was (Author: mohitanchlia):
I enabled debug and still not much info:

[2016-01-14 12:51:17,404] DEBUG zookeeper.disableAutoWatchReset is false 
(org.apache.zookeeper.ClientCnxn)
[2016-01-14 12:51:17,418] DEBUG Awaiting connection to Zookeeper server 
(org.I0Itec.zkclient.ZkClient)
[2016-01-14 12:51:17,418] INFO Waiting for keeper state SaslAuthenticated 
(org.I0Itec.zkclient.ZkClient)
[2016-01-14 12:51:17,420] DEBUG JAAS loginContext is: Client 
(org.apache.zookeeper.client.ZooKeeperSaslClient)
[2016-01-14 12:51:23,419] DEBUG Closing ZkClient... 
(org.I0Itec.zkclient.ZkClient)
[2016-01-14 12:51:23,419] INFO Terminate ZkClient event thread. 
(org.I0Itec.zkclient.ZkEventThread)
[2016-01-14 12:51:23,419] DEBUG Closing ZooKeeper connected to localhost:2181 
(org.I0Itec.zkclient.ZkConnection)
[2016-01-14 12:51:23,419] DEBUG Closing session: 0x0 
(org.apache.zookeeper.ZooKeeper)
[2016-01-14 12:51:23,419] DEBUG Closing client for session: 0x0 
(org.apache.zookeeper.ClientCnxn)
[2016-01-14 12:52:47,501] WARN SASL configuration failed: 
javax.security.auth.login.LoginException: Receive timed out Will continue 
connection to Zookeeper server without SASL authentication, if Zookeeper server 
allows it. (org.apache.zookeeper.ClientCnxn)
[2016-01-14 12:52:47,503] INFO Opening socket connection to server 
localhost/0:0:0:0:0:0:0:1:2181 (org.apache.zookeeper.ClientCnxn)
[2016-01-14 12:52:47,503] DEBUG Received event: WatchedEvent state:AuthFailed 
type:None path:null (org.I0Itec.zkclient.ZkClient)
[2016-01-14 12:52:47,509] INFO Accepted socket connection from 
/0:0:0:0:0:0:0:1:52904 (org.apache.zookeeper.server.NIOServerCnxnFactory)
[2016-01-14 12:52:47,514] INFO Socket connection established to 
localhost/0:0:0:0:0:0:0:1:2181, initiating session 
(org.apache.zookeeper.ClientCnxn)
[2016-01-14 12:52:47,515] DEBUG Session establishment request sent on 
localhost/0:0:0:0:0:0:0:1:2181 (org.apache.zookeeper.ClientCnxn)
[2016-01-14 12:52:47,519] DEBUG Session establishment request from client 
/0:0:0:0:0:0:0:1:52904 

[jira] [Comment Edited] (KAFKA-3102) Kafka server unable to connect to zookeeper

2016-01-14 Thread Mohit Anchlia (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15098512#comment-15098512
 ] 

Mohit Anchlia edited comment on KAFKA-3102 at 1/15/16 12:46 AM:


I enabled debug and still not much info:

Forwardable Ticket true
Forwarded Ticket false
Proxiable Ticket false
Proxy Ticket false
Postdated Ticket false
Renewable Ticket false
Initial Ticket false
Auth Time = Thu Jan 14 19:44:43 EST 2016
Start Time = Thu Jan 14 19:44:43 EST 2016
End Time = Fri Jan 15 19:44:43 EST 2016
Renew Till = null
Client Addresses  Null . (org.apache.zookeeper.Login)
[2016-01-14 19:44:28,212] INFO TGT valid starting at:Thu Jan 14 
19:44:43 EST 2016 (org.apache.zookeeper.Login)
[2016-01-14 19:44:28,212] INFO TGT expires:  Fri Jan 15 
19:44:43 EST 2016 (org.apache.zookeeper.Login)
[2016-01-14 19:44:28,213] INFO TGT refresh sleeping until: Fri Jan 15 15:53:07 
EST 2016 (org.apache.zookeeper.Login)
[2016-01-14 19:44:28,223] INFO Opening socket connection to server 
localhost/127.0.0.1:2181. Will attempt to SASL-authenticate using Login Context 
section 'Client' (org.apache.zookeeper.ClientCnxn)
[2016-01-14 19:44:28,231] INFO Socket connection established to 
localhost/127.0.0.1:2181, initiating session (org.apache.zookeeper.ClientCnxn)
[2016-01-14 19:44:28,232] INFO Accepted socket connection from /127.0.0.1:53042 
(org.apache.zookeeper.server.NIOServerCnxnFactory)
[2016-01-14 19:44:28,233] DEBUG Session establishment request sent on 
localhost/127.0.0.1:2181 (org.apache.zookeeper.ClientCnxn)
[2016-01-14 19:44:28,242] DEBUG Session establishment request from client 
/127.0.0.1:53042 client's lastZxid is 0x0 
(org.apache.zookeeper.server.ZooKeeperServer)
[2016-01-14 19:44:28,244] INFO Client attempting to establish new session at 
/127.0.0.1:53042 (org.apache.zookeeper.server.ZooKeeperServer)
[2016-01-14 19:44:28,248] INFO Creating new log file: log.1 
(org.apache.zookeeper.server.persistence.FileTxnLog)
[2016-01-14 19:44:28,255] DEBUG Processing request:: 
sessionid:0x15242bd6342 type:createSession cxid:0x0 zxid:0x1 txntype:-10 
reqpath:n/a (org.apache.zookeeper.server.FinalRequestProcessor)
[2016-01-14 19:44:28,261] DEBUG sessionid:0x15242bd6342 type:createSession 
cxid:0x0 zxid:0x1 txntype:-10 reqpath:n/a 
(org.apache.zookeeper.server.FinalRequestProcessor)
[2016-01-14 19:44:28,267] INFO Established session 0x15242bd6342 with 
negotiated timeout 6000 for client /127.0.0.1:53042 
(org.apache.zookeeper.server.ZooKeeperServer)
[2016-01-14 19:44:28,270] INFO Session establishment complete on server 
localhost/127.0.0.1:2181, sessionid = 0x15242bd6342, negotiated timeout = 
6000 (org.apache.zookeeper.ClientCnxn)
[2016-01-14 19:44:28,272] DEBUG ClientCnxn:sendSaslPacket:length=0 
(org.apache.zookeeper.client.ZooKeeperSaslClient)
[2016-01-14 19:44:28,273] DEBUG Received event: WatchedEvent 
state:SyncConnected type:None path:null (org.I0Itec.zkclient.ZkClient)
[2016-01-14 19:44:28,273] INFO zookeeper state changed (SyncConnected) 
(org.I0Itec.zkclient.ZkClient)
[2016-01-14 19:44:28,273] DEBUG Leaving process event 
(org.I0Itec.zkclient.ZkClient)
[2016-01-14 19:44:28,274] DEBUG saslClient.evaluateChallenge(len=0) 
(org.apache.zookeeper.client.ZooKeeperSaslClient)
[2016-01-14 19:44:28,301] DEBUG Responding to client SASL token. 
(org.apache.zookeeper.server.ZooKeeperServer)
[2016-01-14 19:44:28,302] DEBUG Size of client SASL token: 611 
(org.apache.zookeeper.server.ZooKeeperServer)
[2016-01-14 19:44:28,302] ERROR cnxn.saslServer is null: cnxn object did not 
initialize its saslServer properly. 
(org.apache.zookeeper.server.ZooKeeperServer)
[2016-01-14 19:44:28,304] ERROR SASL authentication failed using login context 
'Client'. (org.apache.zookeeper.client.ZooKeeperSaslClient)



was (Author: mohitanchlia):
I enabled debug and still not much info:

[2016-01-14 19:36:40,052] ERROR An error: 
(java.security.PrivilegedActionException: javax.security.sasl.SaslException: 
GSS initiate failed [Caused by GSSException: No valid credentials provided 
(Mechanism level: Server not found in Kerberos database (7) - UNKNOWN_SERVER)]) 
occurred when evaluating Zookeeper Quorum Member's  received SASL token. This 
may be caused by Java's being unable to resolve the Zookeeper Quorum Member's 
hostname correctly. You may want to try to adding 
'-Dsun.net.spi.nameservice.provider.1=dns,sun' to your client's JVMFLAGS 
environment. Zookeeper Client will go to AUTH_FAILED state. 
(org.apache.zookeeper.client.ZooKeeperSaslClient)
[2016-01-14 19:36:40,052] ERROR SASL authentication with Zookeeper Quorum 
member failed: javax.security.sasl.SaslException: An error: 
(java.security.PrivilegedActionException: javax.security.sasl.SaslException: 
GSS initiate failed [Caused by GSSException: No valid credentials provided 
(Mechanism level: Server not found in Kerberos database (7) - UNKNOWN_SERVER)]) 

[jira] [Commented] (KAFKA-2695) Handle null string/bytes protocol primitives

2016-01-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2695?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15099227#comment-15099227
 ] 

ASF GitHub Bot commented on KAFKA-2695:
---

GitHub user hachikuji opened a pull request:

https://github.com/apache/kafka/pull/780

KAFKA-2695: limited support for nullable byte arrays



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/hachikuji/kafka KAFKA-2695

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/780.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #780


commit 770f3d037b5a5190d6ccc187bcbf623ca9240618
Author: Jason Gustafson 
Date:   2016-01-15T00:58:34Z

KAFKA-2695: limited support for nullable byte arrays




> Handle null string/bytes protocol primitives
> 
>
> Key: KAFKA-2695
> URL: https://issues.apache.org/jira/browse/KAFKA-2695
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
>
> The kafka protocol supports null bytes and string primitives by passing -1 as 
> the size, but the current deserializers implemented in 
> o.a.k.common.protocol.types.Type do not handle them.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-2695: limited support for nullable byte ...

2016-01-14 Thread hachikuji
GitHub user hachikuji opened a pull request:

https://github.com/apache/kafka/pull/780

KAFKA-2695: limited support for nullable byte arrays



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/hachikuji/kafka KAFKA-2695

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/780.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #780


commit 770f3d037b5a5190d6ccc187bcbf623ca9240618
Author: Jason Gustafson 
Date:   2016-01-15T00:58:34Z

KAFKA-2695: limited support for nullable byte arrays




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] kafka pull request: KAFKA-3050: Acceptor allows hostnames surround...

2016-01-14 Thread Zixxy
GitHub user Zixxy opened a pull request:

https://github.com/apache/kafka/pull/777

KAFKA-3050: Acceptor allows hostnames surrounded by whitespaces



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/Zixxy/kafka trunk

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/777.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #777


commit 48321d4bb9d0490c3cbb993069b7349013fd42e5
Author: Zixxy 
Date:   2016-01-14T21:13:27Z

KAFKA-3050: Acceptor allows hostnames surrounded by whitespaces




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-3050) Space in the value for "host.name" causes "Unresolved address"

2016-01-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15098883#comment-15098883
 ] 

ASF GitHub Bot commented on KAFKA-3050:
---

GitHub user Zixxy opened a pull request:

https://github.com/apache/kafka/pull/777

KAFKA-3050: Acceptor allows hostnames surrounded by whitespaces



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/Zixxy/kafka trunk

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/777.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #777


commit 48321d4bb9d0490c3cbb993069b7349013fd42e5
Author: Zixxy 
Date:   2016-01-14T21:13:27Z

KAFKA-3050: Acceptor allows hostnames surrounded by whitespaces




> Space in the value for "host.name" causes "Unresolved address"
> --
>
> Key: KAFKA-3050
> URL: https://issues.apache.org/jira/browse/KAFKA-3050
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.2.1
>Reporter: Navin Markandeya
>Assignee: Sylwester
>  Labels: newbie
>
> In {{/config/server.properties}},  after updating the 
> {{host.name}}  to a value with a space after "localhost", received
> {code}
> [2015-12-30 11:13:43,014] FATAL Fatal error during KafkaServerStartable 
> startup. Prepare to shutdown (kafka.server.KafkaServerStartable)
> kafka.common.KafkaException: Socket server failed to bind to localhost :9092: 
> Unresolved address.
>   at kafka.network.Acceptor.openServerSocket(SocketServer.scala:260)
>   at kafka.network.Acceptor.(SocketServer.scala:205)
>   at kafka.network.SocketServer.startup(SocketServer.scala:86)
>   at kafka.server.KafkaServer.startup(KafkaServer.scala:99)
>   at 
> kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:29)
>   at kafka.Kafka$.main(Kafka.scala:46)
>   at kafka.Kafka.main(Kafka.scala)
> Caused by: java.net.SocketException: Unresolved address
>   at sun.nio.ch.Net.translateToSocketException(Net.java:131)
>   at sun.nio.ch.Net.translateException(Net.java:157)
>   at sun.nio.ch.Net.translateException(Net.java:163)
>   at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:76)
>   at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:67)
>   at kafka.network.Acceptor.openServerSocket(SocketServer.scala:256)
>   ... 6 more
> Caused by: java.nio.channels.UnresolvedAddressException
>   at sun.nio.ch.Net.checkAddress(Net.java:101)
>   at 
> sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:218)
>   at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
>   ... 8 more
> {code}
>  I am running {{kafka_2.9.1-0.8.2.2}} on Centos6.5 with Java8
> {code}
> java version "1.8.0_60"
> Java(TM) SE Runtime Environment (build 1.8.0_60-b27)
> Java HotSpot(TM) 64-Bit Server VM (build 25.60-b23, mixed mode)
> {code}
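
For illustration, the failure above can be reproduced with plain java.net APIs; the sketch below only demonstrates the trailing-space problem and a trim-based workaround, and is not the change made in this pull request.

{code}
import java.net.InetSocketAddress;

public class WhitespaceHostSketch {
    public static void main(String[] args) {
        // A trailing space leaves the hostname unresolved, matching the report above.
        InetSocketAddress broken = new InetSocketAddress("localhost ", 9092);
        System.out.println("with space, unresolved = " + broken.isUnresolved());

        // Trimming the configured value before binding avoids the problem.
        InetSocketAddress trimmed = new InetSocketAddress("localhost ".trim(), 9092);
        System.out.println("trimmed, unresolved = " + trimmed.isUnresolved());
    }
}
{code}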



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3095) No documentation on format of sasl.kerberos.principal.to.local.rules

2016-01-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3095?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15098844#comment-15098844
 ] 

ASF GitHub Bot commented on KAFKA-3095:
---

GitHub user tgravescs opened a pull request:

https://github.com/apache/kafka/pull/776

KAFKA-3095: No documentation on format of 
sasl.kerberos.principal.to.local.rules

Add some basic documentation about the format, a link to more detailed 
information, and an example usage. I didn't want to make a huge section on the 
format since it is documented elsewhere, but I can expand it if folks want.

https://issues.apache.org/jira/browse/KAFKA-3095
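
As an illustration of the rule syntax being documented (the exact rules depend on the Kerberos realm in use, so treat this as an assumption-laden example rather than a recommended setting):

{code}
# Map user@MYDOMAIN.COM to the short name "user"; fall back to the default
# mapping for any principal that does not match.
sasl.kerberos.principal.to.local.rules=RULE:[1:$1@$0](.*@MYDOMAIN.COM)s/@.*//,DEFAULT
{code}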

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/tgravescs/kafka KAFKA-3095

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/776.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #776


commit d9b81554c0edc03e877ff752b070711b965f4813
Author: Tom Graves 
Date:   2016-01-14T17:07:25Z

KAFKA-3095. No documentation on format of 
sasl.kerberos.principal.to.local.rules

commit 5babe63e5ba22ea633f927ecd85c9e021cc78b4d
Author: Tom Graves 
Date:   2016-01-14T20:54:09Z

formatting changes




> No documentation on format of sasl.kerberos.principal.to.local.rules
> 
>
> Key: KAFKA-3095
> URL: https://issues.apache.org/jira/browse/KAFKA-3095
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.9.0.0
>Reporter: Thomas Graves
>Assignee: Thomas Graves
>
> The documentation talked about the config 
> sasl.kerberos.principal.to.local.rules and the format of the default but it 
> doesn't say what format the rules should be specified in.  A description and 
> perhaps an example would be very useful here.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-3095: No documentation on format of sasl...

2016-01-14 Thread tgravescs
GitHub user tgravescs opened a pull request:

https://github.com/apache/kafka/pull/776

KAFKA-3095: No documentation on format of 
sasl.kerberos.principal.to.local.rules

Add some basic documentation about the format, a link to more detailed 
information, and an example usage. I didn't want to make a huge section on the 
format since it is documented elsewhere, but I can expand it if folks want.

https://issues.apache.org/jira/browse/KAFKA-3095

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/tgravescs/kafka KAFKA-3095

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/776.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #776


commit d9b81554c0edc03e877ff752b070711b965f4813
Author: Tom Graves 
Date:   2016-01-14T17:07:25Z

KAFKA-3095. No documentation on format of 
sasl.kerberos.principal.to.local.rules

commit 5babe63e5ba22ea633f927ecd85c9e021cc78b4d
Author: Tom Graves 
Date:   2016-01-14T20:54:09Z

formatting changes




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Re: [VOTE] KIP-41: Consumer Max Records

2016-01-14 Thread Liquan Pei
+1

On Thu, Jan 14, 2016 at 2:05 AM, Jens Rantil  wrote:

> +1
>
> On Thu, Jan 14, 2016 at 12:18 AM, Jason Gustafson 
> wrote:
>
> > Hi All,
> >
> > I'd like to open up the vote on KIP-41. This KIP adds a new consumer
> > configuration option "max.poll.records" which sets an upper bound on the
> > number of records returned in a call to poll(). This gives users a way to
> > limit message processing time to avoid unexpected rebalancing. This
> change
> > is backwards compatible with the default implementing the current
> behavior.
> >
> > Here's a link to the KIP wiki:
> >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-41%3A+KafkaConsumer+Max+Records
> >
> > Thanks,
> > Jason
> >
>
>
>
> --
> Jens Rantil
> Backend engineer
> Tink AB
>
> Email: jens.ran...@tink.se
> Phone: +46 708 84 18 32
> Web: www.tink.se
>
> Facebook  Linkedin
> <
> http://www.linkedin.com/company/2735919?trk=vsrp_companies_res_photo=VSRPsearchId%3A1057023381369207406670%2CVSRPtargetId%3A2735919%2CVSRPcmpt%3Aprimary
> >
>  Twitter 
>



-- 
Liquan Pei
Department of Physics
University of Massachusetts Amherst
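
For context on the option under vote, here is a minimal sketch of how a consumer might use it, assuming the configuration lands under the name proposed in the KIP ("max.poll.records"); the topic, group, and bootstrap values are placeholders.

```java
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class MaxPollRecordsSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "example-group");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        // Proposed option: cap the number of records returned by a single poll()
        // so per-iteration processing time stays predictable.
        props.put("max.poll.records", "100");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("example-topic"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(1000); // at most 100 records
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("offset=%d value=%s%n", record.offset(), record.value());
                }
            }
        }
    }
}
```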


[GitHub] kafka pull request: MINOR: add internal source topic for tracking

2016-01-14 Thread guozhangwang
GitHub user guozhangwang opened a pull request:

https://github.com/apache/kafka/pull/775

MINOR: add internal source topic for tracking



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/guozhangwang/kafka KRepartTopic

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/775.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #775


commit bf4c4cb3dbb5b4066d9c3e0ada5b7ffd98eb129a
Author: Guozhang Wang 
Date:   2016-01-14T20:27:58Z

add internal source topic for tracking




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Comment Edited] (KAFKA-3083) a soft failure in controller may leader a topic partition in an inconsistent state

2016-01-14 Thread Mayuresh Gharat (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3083?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15098607#comment-15098607
 ] 

Mayuresh Gharat edited comment on KAFKA-3083 at 1/14/16 6:34 PM:
-

That's a very good point, I will verify if this can happen. 
Moreover, I think the behavior should be :
1) Broker A was the controller.
2) Broker A faces a session expiration, invokes the controllerResignation and 
clears all its caches and also stops all the ongoing controller work.
3) Broker B becomes the controller and proceeds. 

what do you think?


was (Author: mgharat):
That's a very good point, I will verify if this can happen. 
Moreover, I think the behavior should be :
1) Broker A was the controller.
2) Broker A faces a session expiration, invokes the controllerResignation and 
clears all its caches and also stops all the ongoing controller work.
3) Broker B becomes the controller and proceeds. 

> a soft failure in controller may leader a topic partition in an inconsistent 
> state
> --
>
> Key: KAFKA-3083
> URL: https://issues.apache.org/jira/browse/KAFKA-3083
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.9.0.0
>Reporter: Jun Rao
>Assignee: Mayuresh Gharat
>
> The following sequence can happen.
> 1. Broker A is the controller and is in the middle of processing a broker 
> change event. As part of this process, let's say it's about to shrink the isr 
> of a partition.
> 2. Then broker A's session expires and broker B takes over as the new 
> controller. Broker B sends the initial leaderAndIsr request to all brokers.
> 3. Broker A continues by shrinking the isr of the partition in ZK and sends 
> the new leaderAndIsr request to the broker (say C) that leads the partition. 
> Broker C will reject this leaderAndIsr since the request comes from a 
> controller with an older epoch. Now we could be in a situation that Broker C 
> thinks the isr has all replicas, but the isr stored in ZK is different.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [DISCUSS] KIP-36 - Rack aware replica assignment

2016-01-14 Thread Ismael Juma
On Thu, Jan 14, 2016 at 1:24 AM, Allen Wang  wrote:

> Updated KIP regarding how broker JSON version will be handled and new
> procedure of upgrade.


Thanks Allen. In the following text, I think we should replace 0.9.0 with
0.9.0.0:

"Due to a bug introduced in 0.9.0 in ZkUtils.getBrokerInfo(), old clients
will throw an exception when it sees the broker JSON version is not 1 or 2.
Therefore, *a minor release 0.9.0.1 is required* to fix the problem first
so that old clients can parse future version of broker JSON in ZooKeeper.
That means 0.9.0 clients must be upgraded to 0.9.0.1 before 0.9.1 upgrade
can start. In addition, since ZkUtils.getBrokerInfo() is also used by
broker, version specific code has to be used when registering broker with
ZooKeeper"

Also, I posted a PR for supporting version > 2 in 0.9.0.1 and trunk:

https://github.com/apache/kafka/pull/773

Ismael
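
To make the compatibility argument concrete, here is a small sketch of version-tolerant parsing of the broker registration JSON. It is not the code in the PR above; it only illustrates the idea of requiring the fields a client actually needs instead of rejecting unknown version numbers, and the field names ("host", "port", "rack") are assumptions based on the registration format discussed in this thread.

```java
import java.util.HashMap;
import java.util.Map;

public class BrokerJsonSketch {

    // Illustrative only: accept any registration version as long as the fields
    // this client needs are present. Rejecting version > 2 outright is what
    // breaks old clients once the format grows a new field.
    static String endpointFrom(Map<String, Object> brokerJson) {
        Object host = brokerJson.get("host");
        Object port = brokerJson.get("port");
        if (host == null || port == null) {
            throw new IllegalArgumentException("broker registration is missing host/port");
        }
        return host + ":" + port;
    }

    public static void main(String[] args) {
        // A hypothetical version-3 registration carrying an extra rack field.
        Map<String, Object> v3 = new HashMap<>();
        v3.put("version", 3);
        v3.put("host", "broker1");
        v3.put("port", 9092);
        v3.put("rack", "rack-1");
        System.out.println(endpointFrom(v3)); // broker1:9092
    }
}
```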