Jenkins build is back to normal : kafka-trunk-jdk8 #851

2016-08-29 Thread Apache Jenkins Server
See 



[jira] [Updated] (KAFKA-4102) Does Kafka MirrorMaker in 0.10.X not support mirroring Kafka 0.8.X?

2016-08-29 Thread yf (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4102?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

yf updated KAFKA-4102:
--
Description: 
I want this feature 
(https://cwiki.apache.org/confluence/display/KAFKA/KIP-3+-+Mirror+Maker+Enhancement)
in the new Kafka version, but it seems it can't support Kafka 0.8.X?

Cmd: ./kafka_2.11-0.10.0.1/bin/kafka-mirror-maker.sh --producer.config 
svc_run/mirror_kafka_write_online2streaming/producer.properties 
--consumer.config 
svc_run/mirror_kafka_write_online2streaming/consumer.properties --whitelist 
sandbox

Producer config:
queue.buffering.max.messages=16384
bootstrap.servers=XXX
send.buffer.bytes=131072
message.send.max.retries=1048576

Consumer config:
zookeeper.connect=
group.id=test_mirror
consumer.timeout.ms=-1
zookeeper.connection.timeout.ms=6
zookeeper.session.timeout.ms=6
socket.receive.buffer.bytes=-1
auto.commit.interval.ms=5000
auto.commit.enable=false

But I got failures like the following:

FetchRequest@33437902 (kafka.consumer.ConsumerFetcherThread)
java.lang.IllegalArgumentException
at java.nio.Buffer.limit(Buffer.java:275)
at 
kafka.api.FetchResponsePartitionData$.readFrom(FetchResponse.scala:38)
at kafka.api.TopicData$$anonfun$1.apply(FetchResponse.scala:100)
at kafka.api.TopicData$$anonfun$1.apply(FetchResponse.scala:98)
at 
scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
at 
scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
at scala.collection.immutable.Range.foreach(Range.scala:160)
at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
at scala.collection.AbstractTraversable.map(Traversable.scala:104)
at kafka.api.TopicData$.readFrom(FetchResponse.scala:98)
at kafka.api.FetchResponse$$anonfun$4.apply(FetchResponse.scala:170)
at kafka.api.FetchResponse$$anonfun$4.apply(FetchResponse.scala:169)
at 
scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241)
at 
scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241)
at scala.collection.immutable.Range.foreach(Range.scala:160)
at 
scala.collection.TraversableLike$class.flatMap(TraversableLike.scala:241)
at scala.collection.AbstractTraversable.flatMap(Traversable.scala:104)
at kafka.api.FetchResponse$.readFrom(FetchResponse.scala:169)
at kafka.consumer.SimpleConsumer.fetch(SimpleConsumer.scala:135)
at 
kafka.consumer.ConsumerFetcherThread.fetch(ConsumerFetcherThread.scala:108)
at 
kafka.consumer.ConsumerFetcherThread.fetch(ConsumerFetcherThread.scala:29)
at 
kafka.server.AbstractFetcherThread.processFetchRequest(AbstractFetcherThread.scala:107)
at 
kafka.server.AbstractFetcherThread.doWork(AbstractFetcherThread.scala:98)
at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:63)
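
The IllegalArgumentException at Buffer.limit is typically a symptom of a wire 
protocol mismatch: the 0.10.x consumer inside MirrorMaker sends a newer 
FetchRequest version than an 0.8.x broker understands, so the response bytes 
cannot be parsed. Since old clients can generally talk to newer brokers (but 
not the reverse), one possible workaround is to run the MirrorMaker that ships 
with the source cluster's version and produce into the newer cluster. A 
hypothetical invocation (the path and flags are placeholders and may differ 
between releases):

Cmd: ./kafka_2.10-0.8.2.2/bin/kafka-mirror-maker.sh --consumer.config 
consumer.properties --producer.config producer.properties --whitelist sandbox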

  was:
I want this feature 
(https://cwiki.apache.org/confluence/display/KAFKA/KIP-3+-+Mirror+Maker+Enhancement)
in the new Kafka version, but it seems it can't support Kafka 0.8.X?

FetchRequest@33437902 (kafka.consumer.ConsumerFetcherThread)
java.lang.IllegalArgumentException
at java.nio.Buffer.limit(Buffer.java:275)
at 
kafka.api.FetchResponsePartitionData$.readFrom(FetchResponse.scala:38)
at kafka.api.TopicData$$anonfun$1.apply(FetchResponse.scala:100)
at kafka.api.TopicData$$anonfun$1.apply(FetchResponse.scala:98)
at 
scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
at 
scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
at scala.collection.immutable.Range.foreach(Range.scala:160)
at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
at scala.collection.AbstractTraversable.map(Traversable.scala:104)
at kafka.api.TopicData$.readFrom(FetchResponse.scala:98)
at kafka.api.FetchResponse$$anonfun$4.apply(FetchResponse.scala:170)
at kafka.api.FetchResponse$$anonfun$4.apply(FetchResponse.scala:169)
at 
scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241)
at 
scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241)
at scala.collection.immutable.Range.foreach(Range.scala:160)
at 
scala.collection.TraversableLike$class.flatMap(TraversableLike.scala:241)
at scala.collection.AbstractTraversable.flatMap(Traversable.scala:104)
at kafka.api.FetchResponse$.readFrom(FetchResponse.scala:169)
at kafka.consumer.SimpleConsumer.fetch(SimpleConsumer.scala:135)
at 
kafka.consumer.ConsumerFetcherThread.fetch(ConsumerFetcherThread.scala:108)
at 
kafka.consumer.ConsumerFetcherThread.fetch(ConsumerFetcherThread.scala:29)
at 
kafka.server.AbstractFetcherThread.processFetchRequest(AbstractFetcherThread.scala:107)
at 
kafka.server.AbstractFetcherThread.doWork(AbstractFetcherThread.scala:98)
at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:63)

[jira] [Updated] (KAFKA-4102) Does Kafka MirrorMaker in 0.10.X not support mirroring Kafka 0.8.X?

2016-08-29 Thread yf (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4102?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

yf updated KAFKA-4102:
--
Description: 
I want this feature 
(https://cwiki.apache.org/confluence/display/KAFKA/KIP-3+-+Mirror+Maker+Enhancement)
in the new Kafka version, but it seems it can't support Kafka 0.8.X?

FetchRequest@33437902 (kafka.consumer.ConsumerFetcherThread)
java.lang.IllegalArgumentException
at java.nio.Buffer.limit(Buffer.java:275)
at 
kafka.api.FetchResponsePartitionData$.readFrom(FetchResponse.scala:38)
at kafka.api.TopicData$$anonfun$1.apply(FetchResponse.scala:100)
at kafka.api.TopicData$$anonfun$1.apply(FetchResponse.scala:98)
at 
scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
at 
scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
at scala.collection.immutable.Range.foreach(Range.scala:160)
at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
at scala.collection.AbstractTraversable.map(Traversable.scala:104)
at kafka.api.TopicData$.readFrom(FetchResponse.scala:98)
at kafka.api.FetchResponse$$anonfun$4.apply(FetchResponse.scala:170)
at kafka.api.FetchResponse$$anonfun$4.apply(FetchResponse.scala:169)
at 
scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241)
at 
scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241)
at scala.collection.immutable.Range.foreach(Range.scala:160)
at 
scala.collection.TraversableLike$class.flatMap(TraversableLike.scala:241)
at scala.collection.AbstractTraversable.flatMap(Traversable.scala:104)
at kafka.api.FetchResponse$.readFrom(FetchResponse.scala:169)
at kafka.consumer.SimpleConsumer.fetch(SimpleConsumer.scala:135)
at 
kafka.consumer.ConsumerFetcherThread.fetch(ConsumerFetcherThread.scala:108)
at 
kafka.consumer.ConsumerFetcherThread.fetch(ConsumerFetcherThread.scala:29)
at 
kafka.server.AbstractFetcherThread.processFetchRequest(AbstractFetcherThread.scala:107)
at 
kafka.server.AbstractFetcherThread.doWork(AbstractFetcherThread.scala:98)
at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:63)

  was:
FetchRequest@33437902 (kafka.consumer.ConsumerFetcherThread)
java.lang.IllegalArgumentException
at java.nio.Buffer.limit(Buffer.java:275)
at 
kafka.api.FetchResponsePartitionData$.readFrom(FetchResponse.scala:38)
at kafka.api.TopicData$$anonfun$1.apply(FetchResponse.scala:100)
at kafka.api.TopicData$$anonfun$1.apply(FetchResponse.scala:98)
at 
scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
at 
scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
at scala.collection.immutable.Range.foreach(Range.scala:160)
at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
at scala.collection.AbstractTraversable.map(Traversable.scala:104)
at kafka.api.TopicData$.readFrom(FetchResponse.scala:98)
at kafka.api.FetchResponse$$anonfun$4.apply(FetchResponse.scala:170)
at kafka.api.FetchResponse$$anonfun$4.apply(FetchResponse.scala:169)
at 
scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241)
at 
scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241)
at scala.collection.immutable.Range.foreach(Range.scala:160)
at 
scala.collection.TraversableLike$class.flatMap(TraversableLike.scala:241)
at scala.collection.AbstractTraversable.flatMap(Traversable.scala:104)
at kafka.api.FetchResponse$.readFrom(FetchResponse.scala:169)
at kafka.consumer.SimpleConsumer.fetch(SimpleConsumer.scala:135)
at 
kafka.consumer.ConsumerFetcherThread.fetch(ConsumerFetcherThread.scala:108)
at 
kafka.consumer.ConsumerFetcherThread.fetch(ConsumerFetcherThread.scala:29)
at 
kafka.server.AbstractFetcherThread.processFetchRequest(AbstractFetcherThread.scala:107)
at 
kafka.server.AbstractFetcherThread.doWork(AbstractFetcherThread.scala:98)
at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:63)


> Does Kafka MirrorMaker in 0.10.X not support mirroring Kafka 0.8.X?
> -
>
> Key: KAFKA-4102
> URL: https://issues.apache.org/jira/browse/KAFKA-4102
> Project: Kafka
>  Issue Type: Bug
>Reporter: yf
> Fix For: 0.10.1.0
>
>
> I want this feature 
> (https://cwiki.apache.org/confluence/display/KAFKA/KIP-3+-+Mirror+Maker+Enhancement)
> in the new Kafka version, but it seems it can't support Kafka 0.8.X?
> FetchRequest@33437902 (kafka.consumer.ConsumerFetcherThread)

[jira] [Created] (KAFKA-4102) Does Kafka MirrorMaker in 0.10.X not support mirroring Kafka 0.8.X?

2016-08-29 Thread yf (JIRA)
yf created KAFKA-4102:
-

 Summary: Does Kafka MirrorMaker in 0.10.X not support mirroring 
Kafka 0.8.X?
 Key: KAFKA-4102
 URL: https://issues.apache.org/jira/browse/KAFKA-4102
 Project: Kafka
  Issue Type: Bug
Reporter: yf
 Fix For: 0.10.1.0


FetchRequest@33437902 (kafka.consumer.ConsumerFetcherThread)
java.lang.IllegalArgumentException
at java.nio.Buffer.limit(Buffer.java:275)
at 
kafka.api.FetchResponsePartitionData$.readFrom(FetchResponse.scala:38)
at kafka.api.TopicData$$anonfun$1.apply(FetchResponse.scala:100)
at kafka.api.TopicData$$anonfun$1.apply(FetchResponse.scala:98)
at 
scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
at 
scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
at scala.collection.immutable.Range.foreach(Range.scala:160)
at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
at scala.collection.AbstractTraversable.map(Traversable.scala:104)
at kafka.api.TopicData$.readFrom(FetchResponse.scala:98)
at kafka.api.FetchResponse$$anonfun$4.apply(FetchResponse.scala:170)
at kafka.api.FetchResponse$$anonfun$4.apply(FetchResponse.scala:169)
at 
scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241)
at 
scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241)
at scala.collection.immutable.Range.foreach(Range.scala:160)
at 
scala.collection.TraversableLike$class.flatMap(TraversableLike.scala:241)
at scala.collection.AbstractTraversable.flatMap(Traversable.scala:104)
at kafka.api.FetchResponse$.readFrom(FetchResponse.scala:169)
at kafka.consumer.SimpleConsumer.fetch(SimpleConsumer.scala:135)
at 
kafka.consumer.ConsumerFetcherThread.fetch(ConsumerFetcherThread.scala:108)
at 
kafka.consumer.ConsumerFetcherThread.fetch(ConsumerFetcherThread.scala:29)
at 
kafka.server.AbstractFetcherThread.processFetchRequest(AbstractFetcherThread.scala:107)
at 
kafka.server.AbstractFetcherThread.doWork(AbstractFetcherThread.scala:98)
at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:63)





[jira] [Commented] (KAFKA-4100) Connect Struct schemas built using SchemaBuilder with no fields cause NPE in Struct constructor

2016-08-29 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4100?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15447731#comment-15447731
 ] 

ASF GitHub Bot commented on KAFKA-4100:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1800


> Connect Struct schemas built using SchemaBuilder with no fields cause NPE in 
> Struct constructor
> ---
>
> Key: KAFKA-4100
> URL: https://issues.apache.org/jira/browse/KAFKA-4100
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 0.10.0.1
>Reporter: Shikhar Bhushan
>Assignee: Shikhar Bhushan
>Priority: Minor
> Fix For: 0.10.1.0
>
>
> Avro records can legitimately have 0 fields (though arguable how useful that 
> is).
> When using the Confluent Schema Registry's {{AvroConverter}} with such a 
> schema,
> {noformat}
> java.lang.NullPointerException
>   at org.apache.kafka.connect.data.Struct.(Struct.java:56)
>   at io.confluent.connect.avro.AvroData.toConnectData(AvroData.java:980)
>   at io.confluent.connect.avro.AvroData.toConnectData(AvroData.java:782)
>   at 
> io.confluent.connect.avro.AvroConverter.toConnectData(AvroConverter.java:103)
>   at 
> org.apache.kafka.connect.runtime.WorkerSinkTask.convertMessages(WorkerSinkTask.java:358)
>   at 
> org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:227)
>   at 
> org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:171)
>   at 
> org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:143)
>   at 
> org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:140)
>   at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:175)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:745)
> {noformat}
> This is because it is using the {{SchemaBuilder}} to create the Struct 
> schema, which provides a {{field(..)}} builder for each field. If there are 
> no fields, the list stays as null.





[GitHub] kafka pull request #1800: KAFKA-4100: ensure 'fields' and 'fieldsByName' are...

2016-08-29 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1800




[jira] [Resolved] (KAFKA-4100) Connect Struct schemas built using SchemaBuilder with no fields cause NPE in Struct constructor

2016-08-29 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4100?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava resolved KAFKA-4100.
--
Resolution: Fixed

Issue resolved by pull request 1800
[https://github.com/apache/kafka/pull/1800]

> Connect Struct schemas built using SchemaBuilder with no fields cause NPE in 
> Struct constructor
> ---
>
> Key: KAFKA-4100
> URL: https://issues.apache.org/jira/browse/KAFKA-4100
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 0.10.0.1
>Reporter: Shikhar Bhushan
>Assignee: Shikhar Bhushan
>Priority: Minor
> Fix For: 0.10.1.0
>
>
> Avro records can legitimately have 0 fields (though arguable how useful that 
> is).
> When using the Confluent Schema Registry's {{AvroConverter}} with such a 
> schema,
> {noformat}
> java.lang.NullPointerException
>   at org.apache.kafka.connect.data.Struct.(Struct.java:56)
>   at io.confluent.connect.avro.AvroData.toConnectData(AvroData.java:980)
>   at io.confluent.connect.avro.AvroData.toConnectData(AvroData.java:782)
>   at 
> io.confluent.connect.avro.AvroConverter.toConnectData(AvroConverter.java:103)
>   at 
> org.apache.kafka.connect.runtime.WorkerSinkTask.convertMessages(WorkerSinkTask.java:358)
>   at 
> org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:227)
>   at 
> org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:171)
>   at 
> org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:143)
>   at 
> org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:140)
>   at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:175)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:745)
> {noformat}
> This is because it is using the {{SchemaBuilder}} to create the Struct 
> schema, which provides a {{field(..)}} builder for each field. If there are 
> no fields, the list stays as null.





[jira] [Comment Edited] (KAFKA-4101) java.lang.IllegalStateException in org.apache.kafka.common.network.Selector.channelOrFail

2016-08-29 Thread Jason Gustafson (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4101?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15447337#comment-15447337
 ] 

Jason Gustafson edited comment on KAFKA-4101 at 8/29/16 11:03 PM:
--

[~andrey-sra] I think this might be fixed by KAFKA-3341 (which is only in 
0.10.0.0, unfortunately). Was there any additional context in the log? I'm 
looking for an error message saying something like "Closing socket for [host] 
because of error."


was (Author: hachikuji):
[~andrey-sra] I think this might be fixed by KAFKA-3341 (which is only in 
0.10.0.0, unfortunately). Was there any additional context in the log? I'm 
looking for an error message saying something like "Closing socket for {host} 
because of error."

> java.lang.IllegalStateException in 
> org.apache.kafka.common.network.Selector.channelOrFail
> -
>
> Key: KAFKA-4101
> URL: https://issues.apache.org/jira/browse/KAFKA-4101
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.1
> Environment: Ubuntu 14.04, AWS deployment, under heavy network load
>Reporter: Andrey Savov
>
> {code}
>  at org.apache.kafka.common.network.Selector.channelOrFail(Selector.java:467)
> at org.apache.kafka.common.network.Selector.mute(Selector.java:347)
> at 
> kafka.network.Processor$$anonfun$run$11.apply(SocketServer.scala:434)
> at 
> kafka.network.Processor$$anonfun$run$11.apply(SocketServer.scala:421)
> at scala.collection.Iterator$class.foreach(Iterator.scala:742)
> at scala.collection.AbstractIterator.foreach(Iterator.scala:1194)
> at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
> at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
> at kafka.network.Processor.run(SocketServer.scala:421)
> at java.lang.Thread.run(Thread.java:745)
> {code}





[jira] [Commented] (KAFKA-4101) java.lang.IllegalStateException in org.apache.kafka.common.network.Selector.channelOrFail

2016-08-29 Thread Jason Gustafson (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4101?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15447337#comment-15447337
 ] 

Jason Gustafson commented on KAFKA-4101:


[~andrey-sra] I think this might be fixed by KAFKA-3341 (which is only in 
0.10.0.0, unfortunately). Was there any additional context in the log? I'm 
looking for an error message saying something like "Closing socket for {host} 
because of error."

> java.lang.IllegalStateException in 
> org.apache.kafka.common.network.Selector.channelOrFail
> -
>
> Key: KAFKA-4101
> URL: https://issues.apache.org/jira/browse/KAFKA-4101
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.1
> Environment: Ubuntu 14.04, AWS deployment, under heavy network load
>Reporter: Andrey Savov
>
> {code}
>  at org.apache.kafka.common.network.Selector.channelOrFail(Selector.java:467)
> at org.apache.kafka.common.network.Selector.mute(Selector.java:347)
> at 
> kafka.network.Processor$$anonfun$run$11.apply(SocketServer.scala:434)
> at 
> kafka.network.Processor$$anonfun$run$11.apply(SocketServer.scala:421)
> at scala.collection.Iterator$class.foreach(Iterator.scala:742)
> at scala.collection.AbstractIterator.foreach(Iterator.scala:1194)
> at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
> at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
> at kafka.network.Processor.run(SocketServer.scala:421)
> at java.lang.Thread.run(Thread.java:745)
> {code}





[jira] [Work started] (KAFKA-4100) Connect Struct schemas built using SchemaBuilder with no fields cause NPE in Struct constructor

2016-08-29 Thread Shikhar Bhushan (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4100?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on KAFKA-4100 started by Shikhar Bhushan.
--
> Connect Struct schemas built using SchemaBuilder with no fields cause NPE in 
> Struct constructor
> ---
>
> Key: KAFKA-4100
> URL: https://issues.apache.org/jira/browse/KAFKA-4100
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 0.10.0.1
>Reporter: Shikhar Bhushan
>Assignee: Shikhar Bhushan
>Priority: Minor
> Fix For: 0.10.1.0
>
>
> Avro records can legitimately have 0 fields (though arguable how useful that 
> is).
> When using the Confluent Schema Registry's {{AvroConverter}} with such a 
> schema,
> {noformat}
> java.lang.NullPointerException
>   at org.apache.kafka.connect.data.Struct.(Struct.java:56)
>   at io.confluent.connect.avro.AvroData.toConnectData(AvroData.java:980)
>   at io.confluent.connect.avro.AvroData.toConnectData(AvroData.java:782)
>   at 
> io.confluent.connect.avro.AvroConverter.toConnectData(AvroConverter.java:103)
>   at 
> org.apache.kafka.connect.runtime.WorkerSinkTask.convertMessages(WorkerSinkTask.java:358)
>   at 
> org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:227)
>   at 
> org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:171)
>   at 
> org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:143)
>   at 
> org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:140)
>   at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:175)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:745)
> {noformat}
> This is because it is using the {{SchemaBuilder}} to create the Struct 
> schema, which provides a {{field(..)}} builder for each field. If there are 
> no fields, the list stays as null.





[jira] [Commented] (KAFKA-4099) Change the time-based log rolling to be based on the file create time instead of the timestamp of the first message.

2016-08-29 Thread Jiangjie Qin (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4099?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15447253#comment-15447253
 ] 

Jiangjie Qin commented on KAFKA-4099:
-

[~junrao] I am thinking about this solution. It still seems not ideal. For some 
low-volume topics, if we roll the log based on the segment create time, then 
during partition relocation we may keep sensitive data for much longer than we 
wanted to, because all the data may end up in the same segment and the old 
data cannot be deleted while it sits alongside the new data.

It seems the root cause of the unnecessary log rolling is that we are comparing 
the timestamp in the message against the wall clock time, which makes log 
rolling wall-clock sensitive. I am thinking maybe we should always use the 
timestamp in the message, i.e. we roll the log segment if the timestamp of the 
current message is greater than the timestamp of the first message in the 
segment by more than log.roll.ms. This approach is wall-clock independent and 
should solve the problem. Combined with the message.timestamp.difference.max.ms 
configuration, we can achieve: 1) the log segment will be rolled in a bounded 
time, and 2) no excessively large timestamp will be accepted and cause 
frequent log rolling.

What do you think?
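
To make the proposal concrete, here is a minimal sketch of the roll condition 
described above (illustrative Java only; the names are assumptions, not the 
actual LogSegment code):

{code}
// Roll based purely on in-message timestamps, independent of broker
// wall-clock time. firstTimestamp is the timestamp of the first message in
// the active segment, currentTimestamp the timestamp of the message being
// appended, and logRollMs the log.roll.ms setting.
static boolean shouldRoll(long firstTimestamp, long currentTimestamp, long logRollMs) {
    return currentTimestamp - firstTimestamp > logRollMs;
}
{code}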

> Change the time-based log rolling to be based on the file create time instead 
> of the timestamp of the first message.
> 
>
> Key: KAFKA-4099
> URL: https://issues.apache.org/jira/browse/KAFKA-4099
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Reporter: Jiangjie Qin
>Assignee: Jiangjie Qin
> Fix For: 0.10.1.0
>
>
> This is an issue introduced in KAFKA-3163. When partition relocation occurs, 
> the newly created replica may have messages with old timestamps, causing a 
> log segment roll for each message. The fix is to change the log rolling 
> behavior back to being based on segment create time.





[jira] [Created] (KAFKA-4101) java.lang.IllegalStateException in org.apache.kafka.common.network.Selector.channelOrFail

2016-08-29 Thread Andrey Savov (JIRA)
Andrey Savov created KAFKA-4101:
---

 Summary: java.lang.IllegalStateException in 
org.apache.kafka.common.network.Selector.channelOrFail
 Key: KAFKA-4101
 URL: https://issues.apache.org/jira/browse/KAFKA-4101
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.9.0.1
 Environment: Ubuntu 14.04, AWS deployment, under heavy network load
Reporter: Andrey Savov


{code}
 at org.apache.kafka.common.network.Selector.channelOrFail(Selector.java:467)
at org.apache.kafka.common.network.Selector.mute(Selector.java:347)
at kafka.network.Processor$$anonfun$run$11.apply(SocketServer.scala:434)
at kafka.network.Processor$$anonfun$run$11.apply(SocketServer.scala:421)
at scala.collection.Iterator$class.foreach(Iterator.scala:742)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1194)
at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
at kafka.network.Processor.run(SocketServer.scala:421)
at java.lang.Thread.run(Thread.java:745)
{code}





[jira] [Commented] (KAFKA-4100) Connect Struct schemas built using SchemaBuilder with no fields cause NPE in Struct constructor

2016-08-29 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4100?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15447176#comment-15447176
 ] 

ASF GitHub Bot commented on KAFKA-4100:
---

GitHub user shikhar opened a pull request:

https://github.com/apache/kafka/pull/1800

KAFKA-4100: ensure 'fields' and 'fieldsByName' are not null for Struct 
schemas



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/shikhar/kafka kafka-4100

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1800.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1800


commit 1918046f3247d135cbeeddfbadafbe333bde2d55
Author: Shikhar Bhushan 
Date:   2016-08-29T21:55:33Z

KAFKA-4100: ensure 'fields' and 'fieldsByName' are not null for Struct 
schemas




> Connect Struct schemas built using SchemaBuilder with no fields cause NPE in 
> Struct constructor
> ---
>
> Key: KAFKA-4100
> URL: https://issues.apache.org/jira/browse/KAFKA-4100
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 0.10.0.1
>Reporter: Shikhar Bhushan
>Assignee: Shikhar Bhushan
>Priority: Minor
> Fix For: 0.10.1.0
>
>
> Avro records can legitimately have 0 fields (though arguable how useful that 
> is).
> When using the Confluent Schema Registry's {{AvroConverter}} with such a 
> schema,
> {noformat}
> java.lang.NullPointerException
>   at org.apache.kafka.connect.data.Struct.(Struct.java:56)
>   at io.confluent.connect.avro.AvroData.toConnectData(AvroData.java:980)
>   at io.confluent.connect.avro.AvroData.toConnectData(AvroData.java:782)
>   at 
> io.confluent.connect.avro.AvroConverter.toConnectData(AvroConverter.java:103)
>   at 
> org.apache.kafka.connect.runtime.WorkerSinkTask.convertMessages(WorkerSinkTask.java:358)
>   at 
> org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:227)
>   at 
> org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:171)
>   at 
> org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:143)
>   at 
> org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:140)
>   at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:175)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:745)
> {noformat}
> This is because it is using the {{SchemaBuilder}} to create the Struct 
> schema, which provides a {{field(..)}} builder for each field. If there are 
> no fields, the list stays as null.





[GitHub] kafka pull request #1800: KAFKA-4100: ensure 'fields' and 'fieldsByName' are...

2016-08-29 Thread shikhar
GitHub user shikhar opened a pull request:

https://github.com/apache/kafka/pull/1800

KAFKA-4100: ensure 'fields' and 'fieldsByName' are not null for Struct 
schemas



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/shikhar/kafka kafka-4100

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1800.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1800


commit 1918046f3247d135cbeeddfbadafbe333bde2d55
Author: Shikhar Bhushan 
Date:   2016-08-29T21:55:33Z

KAFKA-4100: ensure 'fields' and 'fieldsByName' are not null for Struct 
schemas






[jira] [Created] (KAFKA-4100) Connect Struct schemas built using SchemaBuilder with no fields cause NPE in Struct constructor

2016-08-29 Thread Shikhar Bhushan (JIRA)
Shikhar Bhushan created KAFKA-4100:
--

 Summary: Connect Struct schemas built using SchemaBuilder with no 
fields cause NPE in Struct constructor
 Key: KAFKA-4100
 URL: https://issues.apache.org/jira/browse/KAFKA-4100
 Project: Kafka
  Issue Type: Bug
  Components: KafkaConnect
Affects Versions: 0.10.0.1
Reporter: Shikhar Bhushan
Assignee: Shikhar Bhushan
Priority: Minor
 Fix For: 0.10.1.0


Avro records can legitimately have 0 fields (though arguable how useful that 
is).

When using the Confluent Schema Registry's {{AvroConverter}} with such a schema,
{noformat}
java.lang.NullPointerException
at org.apache.kafka.connect.data.Struct.(Struct.java:56)
at io.confluent.connect.avro.AvroData.toConnectData(AvroData.java:980)
at io.confluent.connect.avro.AvroData.toConnectData(AvroData.java:782)
at 
io.confluent.connect.avro.AvroConverter.toConnectData(AvroConverter.java:103)
at 
org.apache.kafka.connect.runtime.WorkerSinkTask.convertMessages(WorkerSinkTask.java:358)
at 
org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:227)
at 
org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:171)
at 
org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:143)
at 
org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:140)
at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:175)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
{noformat}

This is because it is using the {{SchemaBuilder}} to create the Struct schema, 
which provides a {{field(..)}} builder for each field. If there are no fields, 
the list stays as null.
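
A minimal sketch reproducing the NPE through the public Connect API (the class 
and schema names are placeholders):

{code}
import org.apache.kafka.connect.data.Schema;
import org.apache.kafka.connect.data.SchemaBuilder;
import org.apache.kafka.connect.data.Struct;

public class EmptyStructRepro {
    public static void main(String[] args) {
        // Build a struct schema without ever calling field(..), so the
        // builder's internal field list is never initialized.
        Schema schema = SchemaBuilder.struct().name("empty").build();
        // Throws NullPointerException in the Struct constructor.
        new Struct(schema);
    }
}
{code}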





Jenkins build is back to normal : kafka-trunk-jdk7 #1506

2016-08-29 Thread Apache Jenkins Server
See 



Build failed in Jenkins: kafka-trunk-jdk8 #850

2016-08-29 Thread Apache Jenkins Server
See 

Changes:

[me] KAFKA-4098: NetworkClient should not intercept user metadata requests on

--
[...truncated 336 lines...]
Download 
https://repo1.maven.org/maven2/org/apache/directory/server/apacheds-interceptors-authn/2.0.0-M21/apacheds-interceptors-authn-2.0.0-M21.pom
Download 
https://repo1.maven.org/maven2/org/apache/directory/server/apacheds-interceptors-number/2.0.0-M21/apacheds-interceptors-number-2.0.0-M21.pom
Download 
https://repo1.maven.org/maven2/org/apache/directory/server/apacheds-interceptors-authz/2.0.0-M21/apacheds-interceptors-authz-2.0.0-M21.pom
Download 
https://repo1.maven.org/maven2/org/apache/directory/server/apacheds-interceptors-changelog/2.0.0-M21/apacheds-interceptors-changelog-2.0.0-M21.pom
Download 
https://repo1.maven.org/maven2/org/apache/directory/server/apacheds-interceptors-collective/2.0.0-M21/apacheds-interceptors-collective-2.0.0-M21.pom
Download 
https://repo1.maven.org/maven2/org/apache/directory/server/apacheds-interceptors-event/2.0.0-M21/apacheds-interceptors-event-2.0.0-M21.pom
Download 
https://repo1.maven.org/maven2/org/apache/directory/server/apacheds-interceptors-exception/2.0.0-M21/apacheds-interceptors-exception-2.0.0-M21.pom
Download 
https://repo1.maven.org/maven2/org/apache/directory/server/apacheds-interceptors-journal/2.0.0-M21/apacheds-interceptors-journal-2.0.0-M21.pom
Download 
https://repo1.maven.org/maven2/org/apache/directory/server/apacheds-interceptors-normalization/2.0.0-M21/apacheds-interceptors-normalization-2.0.0-M21.pom
Download 
https://repo1.maven.org/maven2/org/apache/directory/server/apacheds-interceptors-operational/2.0.0-M21/apacheds-interceptors-operational-2.0.0-M21.pom
Download 
https://repo1.maven.org/maven2/org/apache/directory/server/apacheds-interceptors-referral/2.0.0-M21/apacheds-interceptors-referral-2.0.0-M21.pom
Download 
https://repo1.maven.org/maven2/org/apache/directory/server/apacheds-interceptors-schema/2.0.0-M21/apacheds-interceptors-schema-2.0.0-M21.pom
Download 
https://repo1.maven.org/maven2/org/apache/directory/server/apacheds-interceptors-subtree/2.0.0-M21/apacheds-interceptors-subtree-2.0.0-M21.pom
Download 
https://repo1.maven.org/maven2/org/apache/directory/server/apacheds-interceptors-trigger/2.0.0-M21/apacheds-interceptors-trigger-2.0.0-M21.pom
Download 
https://repo1.maven.org/maven2/org/apache/directory/api/api-ldap-extras-trigger/1.0.0-M33/api-ldap-extras-trigger-1.0.0-M33.pom
Download 
https://repo1.maven.org/maven2/org/apache/directory/api/api-all/1.0.0-M33/api-all-1.0.0-M33.jar
Download 
https://repo1.maven.org/maven2/org/apache/directory/server/apacheds-core-api/2.0.0-M21/apacheds-core-api-2.0.0-M21.jar
Download 
https://repo1.maven.org/maven2/org/apache/directory/server/apacheds-interceptor-kerberos/2.0.0-M21/apacheds-interceptor-kerberos-2.0.0-M21.jar
Download 
https://repo1.maven.org/maven2/org/apache/directory/server/apacheds-protocol-shared/2.0.0-M21/apacheds-protocol-shared-2.0.0-M21.jar
Download 
https://repo1.maven.org/maven2/org/apache/directory/server/apacheds-protocol-kerberos/2.0.0-M21/apacheds-protocol-kerberos-2.0.0-M21.jar
Download 
https://repo1.maven.org/maven2/org/apache/directory/server/apacheds-protocol-ldap/2.0.0-M21/apacheds-protocol-ldap-2.0.0-M21.jar
Download 
https://repo1.maven.org/maven2/org/apache/directory/server/apacheds-ldif-partition/2.0.0-M21/apacheds-ldif-partition-2.0.0-M21.jar
Download 
https://repo1.maven.org/maven2/org/apache/directory/server/apacheds-mavibot-partition/2.0.0-M21/apacheds-mavibot-partition-2.0.0-M21.jar
Download 
https://repo1.maven.org/maven2/org/apache/directory/server/apacheds-jdbm-partition/2.0.0-M21/apacheds-jdbm-partition-2.0.0-M21.jar
Download 
https://repo1.maven.org/maven2/org/scalatest/scalatest_2.10/2.2.6/scalatest_2.10-2.2.6.jar
Download 
https://repo1.maven.org/maven2/org/apache/servicemix/bundles/org.apache.servicemix.bundles.xpp3/1.1.4c_6/org.apache.servicemix.bundles.xpp3-1.1.4c_6.jar
Download 
https://repo1.maven.org/maven2/org/apache/servicemix/bundles/org.apache.servicemix.bundles.dom4j/1.6.1_5/org.apache.servicemix.bundles.dom4j-1.6.1_5.jar
Download 
https://repo1.maven.org/maven2/commons-pool/commons-pool/1.6/commons-pool-1.6.jar
Download 
https://repo1.maven.org/maven2/org/apache/mina/mina-core/2.0.10/mina-core-2.0.10.jar
Download 
https://repo1.maven.org/maven2/commons-lang/commons-lang/2.6/commons-lang-2.6.jar
Download 
https://repo1.maven.org/maven2/commons-collections/commons-collections/3.2.2/commons-collections-3.2.2.jar
Download 
https://repo1.maven.org/maven2/org/apache/servicemix/bundles/org.apache.servicemix.bundles.antlr/2.7.7_5/org.apache.servicemix.bundles.antlr-2.7.7_5.jar
Download 
https://repo1.maven.org/maven2/commons-io/commons-io/2.4/commons-io-2.4.jar
Download 
https://repo1.maven.org/maven2/org/apache/directory/server/apacheds-core-constants/2.0.0-M21/apacheds-core-constants-2.0.0-M21.jar
Download 

[GitHub] kafka pull request #1799: Kafka 4060 remove zk client dependency in kafka st...

2016-08-29 Thread hjafarpour
GitHub user hjafarpour opened a pull request:

https://github.com/apache/kafka/pull/1799

Kafka 4060 remove zk client dependency in kafka streams

Removed the ZooKeeper client from Kafka Streams. The internal topic manager 
will now use the Kafka client to create/delete internal topics.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/hjafarpour/kafka 
KAFKA-4060-Remove-ZkClient-dependency-in-Kafka-Streams

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1799.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1799


commit ed3f003ee1851c47df74167ca13874e8e2bd846d
Author: Hojjat Jafarpour 
Date:   2016-08-23T21:24:34Z

Removed the ZooKeeper dependency from Kafka Streams and instead use the 
KafkaClient to create intermediate topics. We will check if a topic exists 
already; if it does we delete it. Then we create the topic.

commit 5fbded64c9230441c8d9c5936fdfee75a23c312d
Author: Hojjat Jafarpour 
Date:   2016-08-24T16:23:59Z

Code style fixes.

commit 9f8fae9135366ef80a61fc049497ad045e76b573
Author: Hojjat Jafarpour 
Date:   2016-08-24T19:56:18Z

Removed ZOOKEEPER_CONNECT_CONFIG from StreamConfig along with its uses.

commit 25a35d8f7f43d42065d59fadafd85acf6b57e4a2
Author: Hojjat Jafarpour 
Date:   2016-08-24T21:16:24Z

Updated the unit test for InternalTopicIntegration to test the new behaviour 
using the KafkaClient.

commit 10fb87c1bb04a5acf1d8c79dd2afe8a359b0be82
Author: Hojjat Jafarpour 
Date:   2016-08-29T18:13:30Z

Merge remote-tracking branch 'upstream/trunk' into 
KAFKA-4060-Remove-ZkClient-dependency-in-Kafka-Streams

commit dfd3e4be313c2a40c802aaef9b7dba03cd6c3682
Author: Hojjat Jafarpour 
Date:   2016-08-29T20:38:04Z

Added StreamsKafkaClient to wrap the kafka client so we can have a proper 
close. Tests are passing now.

commit 2cde963850c62c54e81b6772bb708934e107f62b
Author: Hojjat Jafarpour 
Date:   2016-08-29T20:58:42Z

Fixed the word count example changes.






[jira] [Commented] (KAFKA-4081) Consumer API consumer new interface commitSync does not verify the validity of offset

2016-08-29 Thread Jason Gustafson (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15447017#comment-15447017
 ] 

Jason Gustafson commented on KAFKA-4081:


I think it definitely makes sense to reject negative values (we currently use 
-1 to indicate an invalid offset, which leads to inconsistent behavior as noted 
by [~mimaison]), but rejecting offsets greater than the high watermark seems 
more challenging, since neither the client nor the server accepting the offset 
commit will necessarily have an up-to-date value. We could use the last hw 
returned from a previous fetch, but it might be stale by the time the user 
attempts to commit offsets. Perhaps it would make more sense to reject _any_ 
offset which is greater than the current position? It would still be possible 
to commit an invalid offset, but not without an explicit seek to that offset. 
However, there is a compatibility concern for use cases which use the consumer 
only for access to the offset API, as can be seen in offset tooling.
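
For reference, a minimal sketch of the call being discussed (the consumer 
setup and topic name are placeholders; the lack of client-side validation is 
as reported against the 0.9.0.1 client):

{code}
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

Properties props = new Properties();
props.put("bootstrap.servers", "localhost:9092");
props.put("group.id", "offset-validation-test");
props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
TopicPartition tp = new TopicPartition("test-topic", 0);
consumer.assign(Collections.singletonList(tp));
// As reported: committing a negative offset returns successfully instead of
// being rejected client-side.
consumer.commitSync(Collections.singletonMap(tp, new OffsetAndMetadata(-1L)));
consumer.close();
{code}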

> Consumer API consumer new interface commitSync does not verify the validity of 
> offset
> 
>
> Key: KAFKA-4081
> URL: https://issues.apache.org/jira/browse/KAFKA-4081
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.9.0.1
>Reporter: lifeng
>
> The new consumer API's commitSync updates offsets synchronously, but it 
> returns success even for illegal offsets (offset < 0 or offset > hw) instead 
> of rejecting them.





[jira] [Created] (KAFKA-4099) Change the time-based log rolling to be based on the file create time instead of the timestamp of the first message.

2016-08-29 Thread Jiangjie Qin (JIRA)
Jiangjie Qin created KAFKA-4099:
---

 Summary: Change the time-based log rolling to be based on the file 
create time instead of the timestamp of the first message.
 Key: KAFKA-4099
 URL: https://issues.apache.org/jira/browse/KAFKA-4099
 Project: Kafka
  Issue Type: Bug
  Components: core
Reporter: Jiangjie Qin
Assignee: Jiangjie Qin
 Fix For: 0.10.1.0


This is an issue introduced in KAFKA-3163. When partition relocation occurs, 
the newly created replica may have messages with old timestamps, causing a log 
segment roll for each message. The fix is to change the log rolling behavior 
back to being based on segment create time.





[jira] [Resolved] (KAFKA-4098) NetworkClient should not intercept all metadata requests on disconnect

2016-08-29 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4098?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava resolved KAFKA-4098.
--
   Resolution: Fixed
Fix Version/s: 0.10.1.0

Issue resolved by pull request 1798
[https://github.com/apache/kafka/pull/1798]

> NetworkClient should not intercept all metadata requests on disconnect 
> ---
>
> Key: KAFKA-4098
> URL: https://issues.apache.org/jira/browse/KAFKA-4098
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
> Fix For: 0.10.1.0
>
>
> It looks like we're missing a check in 
> {{DefaultMetadataUpdater.maybeHandleDisconnection}} that the request was 
> initiated by {{NetworkClient}}. We should do the same thing we do in 
> {{maybeHandleCompletedReceive}}.





[jira] [Commented] (KAFKA-4098) NetworkClient should not intercept all metadata requests on disconnect

2016-08-29 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15446938#comment-15446938
 ] 

ASF GitHub Bot commented on KAFKA-4098:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1798


> NetworkClient should not intercept all metadata requests on disconnect 
> ---
>
> Key: KAFKA-4098
> URL: https://issues.apache.org/jira/browse/KAFKA-4098
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
> Fix For: 0.10.1.0
>
>
> It looks like we're missing a check in 
> {{DefaultMetadataUpdater.maybeHandleDisconnection}} that the request was 
> initiated by {{NetworkClient}}. We should do the same thing we do in 
> {{maybeHandleCompletedReceive}}.





[GitHub] kafka pull request #1798: KAFKA-4098: NetworkClient should not intercept use...

2016-08-29 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1798




[jira] [Commented] (KAFKA-4098) NetworkClient should not intercept all metadata requests on disconnect

2016-08-29 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15446753#comment-15446753
 ] 

ASF GitHub Bot commented on KAFKA-4098:
---

GitHub user hachikuji opened a pull request:

https://github.com/apache/kafka/pull/1798

KAFKA-4098: NetworkClient should not intercept user metadata requests on 
disconnect



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/hachikuji/kafka KAFKA-4098

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1798.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1798


commit b94352753090617ff1992769328e484bfbf69407
Author: Jason Gustafson 
Date:   2016-08-29T18:55:58Z

KAFKA-4098: NetworkClient should not intercept user metadata requests on 
disconnect




> NetworkClient should not intercept all metadata requests on disconnect 
> ---
>
> Key: KAFKA-4098
> URL: https://issues.apache.org/jira/browse/KAFKA-4098
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
>
> It looks like we're missing a check in 
> {{DefaultMetadataUpdater.maybeHandleDisconnection}} that the request was 
> initiated by {{NetworkClient}}. We should do the same thing we do in 
> {{maybeHandleCompletedReceive}}.





[GitHub] kafka pull request #1798: KAFKA-4098: NetworkClient should not intercept use...

2016-08-29 Thread hachikuji
GitHub user hachikuji opened a pull request:

https://github.com/apache/kafka/pull/1798

KAFKA-4098: NetworkClient should not intercept user metadata requests on 
disconnect



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/hachikuji/kafka KAFKA-4098

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1798.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1798


commit b94352753090617ff1992769328e484bfbf69407
Author: Jason Gustafson 
Date:   2016-08-29T18:55:58Z

KAFKA-4098: NetworkClient should not intercept user metadata requests on 
disconnect






Re: [DISCUSS] Remove beta label from the new Java consumer

2016-08-29 Thread Jay Kreps
+1 I talk to a lot of kafka users, and I would say > 75% of people doing
new things are on the new consumer despite our warnings :-)

-Jay

On Thu, Aug 25, 2016 at 2:05 PM, Jason Gustafson  wrote:

> I'm +1 also. I feel a lot more confident about this with all of the system
> testing we now have in place (including the tests covering Streams and
> Connect).
>
> -Jason
>
> On Thu, Aug 25, 2016 at 9:57 AM, Gwen Shapira  wrote:
>
> > Makes sense :)
> >
> > On Thu, Aug 25, 2016 at 9:40 AM, Neha Narkhede 
> wrote:
> > > Yeah, I'm supportive of this.
> > >
> > > On Thu, Aug 25, 2016 at 9:26 AM Ismael Juma  wrote:
> > >
> > >> Hi Gwen,
> > >>
> > >> We have a few recent stories of people using Connect and Streams in
> > >> production. That means the new Java Consumer too. :)
> > >>
> > >> Ismael
> > >>
> > >> On Thu, Aug 25, 2016 at 5:09 PM, Gwen Shapira 
> > wrote:
> > >>
> > >> > Originally, we suggested keeping the beta label until we know
> someone
> > >> > successfully uses the new consumer in production.
> > >> >
> > >> > We can consider the recent KIPs enough, but IMO it will be better if
> > >> > someone with production deployment hanging out on our mailing list
> > >> > will confirm good experience with the new consumer.
> > >> >
> > >> > Gwen
> > >> >
> > >> > On Wed, Aug 24, 2016 at 8:45 PM, Ismael Juma 
> > wrote:
> > >> > > Hi all,
> > >> > >
> > >> > > We currently say the following in our documentation:
> > >> > >
> > >> > > "As of the 0.9.0 release we have added a new Java consumer to
> > replace
> > >> our
> > >> > > existing high-level ZooKeeper-based consumer and low-level
> consumer
> > >> APIs.
> > >> > > This client is considered beta quality."[1]
> > >> > >
> > >> > > Since then, Jason and the community have done a lot of work to
> > improve
> > >> it
> > >> > > (including KIP-41 and KIP-62), we declared it API stable in
> 0.10.0.0
> > >> and
> > >> > > it's the only option for those that need security support. Yes, it
> > >> still
> > >> > > has bugs, but so does the old consumer and all development is
> > currently
> > >> > > focused on the new consumer.
> > >> > >
> > >> > > As such, I propose we remove the beta label for the next release
> and
> > >> > switch
> > >> > > our tools to use the new consumer by default unless the zookeeper
> > >> > > command-line option is present (for compatibility). This is
> similar
> > to
> > >> > what
> > >> > > we did it for the new producer in 0.9.0.0, but backwards
> compatible.
> > >> > >
> > >> > > Thoughts?
> > >> > >
> > >> > > Ismael
> > >> > >
> > >> > > [1] http://kafka.apache.org/documentation.html#consumerapi
> > >> >
> > >> >
> > >> >
> > >> > --
> > >> > Gwen Shapira
> > >> > Product Manager | Confluent
> > >> > 650.450.2760 | @gwenshap
> > >> > Follow us: Twitter | blog
> > >> >
> > >>
> > > --
> > > Thanks,
> > > Neha
> >
> >
> >
> > --
> > Gwen Shapira
> > Product Manager | Confluent
> > 650.450.2760 | @gwenshap
> > Follow us: Twitter | blog
> >
>


[jira] [Created] (KAFKA-4098) NetworkClient should not intercept all metadata requests on disconnect

2016-08-29 Thread Jason Gustafson (JIRA)
Jason Gustafson created KAFKA-4098:
--

 Summary: NetworkClient should not intercept all metadata requests 
on disconnect 
 Key: KAFKA-4098
 URL: https://issues.apache.org/jira/browse/KAFKA-4098
 Project: Kafka
  Issue Type: Bug
Reporter: Jason Gustafson
Assignee: Jason Gustafson


It looks like we're missing a check in 
{{DefaultMetadataUpdater.maybeHandleDisconnection}} that the request was 
initiated by {{NetworkClient}}. We should do the same thing we do in 
{{maybeHandleCompletedReceive}}.
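
A rough sketch of the kind of guard being described (the shape below is an 
assumption based on this description, not the actual patch):

{code}
// Inside DefaultMetadataUpdater: only swallow the disconnect when the
// metadata request was initiated by NetworkClient itself; user-initiated
// metadata requests should surface their failure to the caller, mirroring
// the check already done in maybeHandleCompletedReceive.
private boolean maybeHandleDisconnection(ClientRequest request) {
    ApiKeys requestKey = ApiKeys.forId(request.request().header().apiKey());
    if (requestKey == ApiKeys.METADATA && request.isInitiatedByNetworkClient()) {
        metadata.failedUpdate(time.milliseconds());
        return true;
    }
    return false;
}
{code}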





[GitHub] kafka pull request #1787: KAFKA-3940 Log should check the return value of di...

2016-08-29 Thread imandhan
GitHub user imandhan reopened a pull request:

https://github.com/apache/kafka/pull/1787

KAFKA-3940 Log should check the return value of dir.mkdirs()



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/imandhan/kafka KAFKA-3940

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1787.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1787


commit e3f5f293104013da3aa765c9ba7cbdb2553f6485
Author: Ishita Mandhan 
Date:   2016-08-25T21:38:16Z

KAFKA-3940 Log should check the return value of dir.mkdirs()






[jira] [Commented] (KAFKA-3940) Log should check the return value of dir.mkdirs()

2016-08-29 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3940?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15446621#comment-15446621
 ] 

ASF GitHub Bot commented on KAFKA-3940:
---

GitHub user imandhan reopened a pull request:

https://github.com/apache/kafka/pull/1787

KAFKA-3940 Log should check the return value of dir.mkdirs()



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/imandhan/kafka KAFKA-3940

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1787.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1787


commit e3f5f293104013da3aa765c9ba7cbdb2553f6485
Author: Ishita Mandhan 
Date:   2016-08-25T21:38:16Z

KAFKA-3940 Log should check the return value of dir.mkdirs()




> Log should check the return value of dir.mkdirs()
> -
>
> Key: KAFKA-3940
> URL: https://issues.apache.org/jira/browse/KAFKA-3940
> Project: Kafka
>  Issue Type: Bug
>  Components: log
>Affects Versions: 0.10.0.0
>Reporter: Jun Rao
>Assignee: Ishita Mandhan
>  Labels: newbie
>
> In Log.loadSegments(), we call dir.mkdirs() w/o checking the return value and 
> just assume the directory will exist after the call. However, if the 
> directory can't be created (e.g. due to no space), we will hit 
> NullPointerException in the next statement, which will be confusing.
>for(file <- dir.listFiles if file.isFile) {
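
A minimal sketch of the requested check (illustrative Java; the actual 
Log.loadSegments code is Scala, and the exception type here is an assumption):

{code}
import java.io.File;
import java.io.IOException;

static void ensureDirectoryExists(File dir) throws IOException {
    // mkdirs() returns false when creation fails, but also when the directory
    // already exists, so check isDirectory() before treating it as an error.
    if (!dir.mkdirs() && !dir.isDirectory()) {
        throw new IOException("Failed to create log directory " + dir.getAbsolutePath());
    }
}
{code}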





Re: Batch Expired

2016-08-29 Thread Mayuresh Gharat
Hi,

RequestTimeout is used for 2 cases:
1) Timing out the batches sitting in the accumulator.
2) Requests that have already been sent over the wire and for which you have 
not yet heard back from the server.

In the case of a network partition, the client might not detect it until the 
actual TCP timeout, which I think is around 30 minutes or more. The 
request.timeout.ms setting kicks in before the TCP timeout, and the request is 
then retried with fresh metadata or errored out, depending on the retry 
setting.
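
For illustration, a producer config sketch along these lines (the values are 
placeholders, not recommendations; the numbers quoted elsewhere in this thread 
appear truncated in the archive):

acks=1
retries=3
batch.size=16384
linger.ms=5
buffer.memory=33554432
# request.timeout.ms must cover batches queued in the accumulator as well as
# requests already in flight awaiting a broker response
request.timeout.ms=60000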


Thanks,

Mayuresh



On Mon, Aug 29, 2016 at 9:32 AM, Ghosh, Achintya (Contractor) <
achintya_gh...@comcast.com> wrote:

> Hi Krishna,
> Thank you for your response.
>
> Connections are already made, but if we increase the request timeout 5 
> times, let's say request.timeout.ms = 5*6 , then the number of 'Batch 
> Expired' exceptions is lower. So what is the recommended value for 
> request.timeout.ms? If we increase it more, is there any impact?
>
> Thanks
> Achintya
>
> -Original Message-
> From: R Krishna [mailto:krishna...@gmail.com]
> Sent: Friday, August 26, 2016 6:17 PM
> To: us...@kafka.apache.org
> Cc: dev@kafka.apache.org
> Subject: Re: Batch Expired
>
> Are any requests at all making it? That is a pretty big timeout.
>
> However, I noticed that if no connections are made to the broker, you can
> still get batch expiry.
>
>
> On Fri, Aug 26, 2016 at 6:32 AM, Ghosh, Achintya (Contractor) <
> achintya_gh...@comcast.com> wrote:
>
> > Hi there,
> >
> > What is the recommended producer setting, as I see a lot
> > of 'Batch Expired' exceptions even though I put request.timeout=6.
> >
> > Producer settings:
> > acks=1
> > retries=3
> > batch.size=16384
> > linger.ms=5
> > buffer.memory=33554432
> > request.timeout.ms=6
> > timeout.ms=6
> >
> > Thanks
> > Achintya
> >
>
>
>
> --
> Radha Krishna, Proddaturi
> 253-234-5657
>



-- 
-Regards,
Mayuresh R. Gharat
(862) 250-7125


[jira] [Commented] (KAFKA-3940) Log should check the return value of dir.mkdirs()

2016-08-29 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3940?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15446620#comment-15446620
 ] 

ASF GitHub Bot commented on KAFKA-3940:
---

Github user imandhan closed the pull request at:

https://github.com/apache/kafka/pull/1787


> Log should check the return value of dir.mkdirs()
> -
>
> Key: KAFKA-3940
> URL: https://issues.apache.org/jira/browse/KAFKA-3940
> Project: Kafka
>  Issue Type: Bug
>  Components: log
>Affects Versions: 0.10.0.0
>Reporter: Jun Rao
>Assignee: Ishita Mandhan
>  Labels: newbie
>
> In Log.loadSegments(), we call dir.mkdirs() w/o checking the return value and 
> just assume the directory will exist after the call. However, if the 
> directory can't be created (e.g. due to no space), we will hit 
> NullPointerException in the next statement, which will be confusing.
>for(file <- dir.listFiles if file.isFile) {



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request #1787: KAFKA-3940 Log should check the return value of di...

2016-08-29 Thread imandhan
Github user imandhan closed the pull request at:

https://github.com/apache/kafka/pull/1787




[jira] [Commented] (KAFKA-4062) Require --print-data-log if --offsets-decoder is enabled for DumpLogOffsets

2016-08-29 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4062?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15446613#comment-15446613
 ] 

ASF GitHub Bot commented on KAFKA-4062:
---

GitHub user cotedm opened a pull request:

https://github.com/apache/kafka/pull/1797

KAFKA-4062: Require --print-data-log if --offsets-decoder is enabled for 
DumpLogOffsets

set print-data-log option when offset-decoder is set.  @hachikuji, we had 
talked about this one before; does this change look ok to you?

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/cotedm/kafka KAFKA-4062

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1797.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1797


commit beb9735c6a3bdcf9bb50a229caf05314fca11ffc
Author: Dustin Cote 
Date:   2016-08-29T18:09:59Z

set print-data-log option when offset-decoder is set




> Require --print-data-log if --offsets-decoder is enabled for DumpLogOffsets
> ---
>
> Key: KAFKA-4062
> URL: https://issues.apache.org/jira/browse/KAFKA-4062
> Project: Kafka
>  Issue Type: Improvement
>  Components: admin
>Reporter: Dustin Cote
>Assignee: Dustin Cote
>Priority: Minor
>
> When using the DumpLogOffsets tool, if you want to print out contents of 
> __consumer_offsets, you would typically use --offsets-decoder as an option.  
> This option doesn't actually do much without --print-data-log enabled, so we 
> should just require it to prevent user errors.
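
For reference, a typical invocation after this change passes both flags
together (illustrative paths; the tool class itself is
kafka.tools.DumpLogSegments):

$ bin/kafka-run-class.sh kafka.tools.DumpLogSegments --files /tmp/kafka-logs/__consumer_offsets-0/00000000000000000000.log --offsets-decoder --print-data-log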



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request #1797: KAFKA-4062: Require --print-data-log if --offsets-...

2016-08-29 Thread cotedm
GitHub user cotedm opened a pull request:

https://github.com/apache/kafka/pull/1797

KAFKA-4062: Require --print-data-log if --offsets-decoder is enabled for 
DumpLogOffsets

set print-data-log option when offset-decoder is set.  @hachikuji, we had 
talked about this one before; does this change look ok to you?

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/cotedm/kafka KAFKA-4062

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1797.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1797


commit beb9735c6a3bdcf9bb50a229caf05314fca11ffc
Author: Dustin Cote 
Date:   2016-08-29T18:09:59Z

set print-data-log option when offset-decoder is set






[jira] [Commented] (KAFKA-4092) retention.bytes should not be allowed to be less than segment.bytes

2016-08-29 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4092?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15446586#comment-15446586
 ] 

ASF GitHub Bot commented on KAFKA-4092:
---

GitHub user cotedm opened a pull request:

https://github.com/apache/kafka/pull/1796

KAFKA-4092: retention.bytes should not be allowed to be less than 
segment.bytes

adding a LogConfig value validator.  @gwenshap or @junrao would you mind 
taking a look?

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/cotedm/kafka retentionbytesvalidation

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1796.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1796


commit 9312611cb6b92bf17984bf6a499206634526db38
Author: Dustin Cote 
Date:   2016-08-29T18:01:01Z

add a LogConfig value validator




> retention.bytes should not be allowed to be less than segment.bytes
> ---
>
> Key: KAFKA-4092
> URL: https://issues.apache.org/jira/browse/KAFKA-4092
> Project: Kafka
>  Issue Type: Improvement
>  Components: log
>Reporter: Dustin Cote
>Assignee: Dustin Cote
>Priority: Minor
>
> Right now retention.bytes can be as small as the user wants but it doesn't 
> really get acted on for the active segment if retention.bytes is smaller than 
> segment.bytes.  We shouldn't allow retention.bytes to be less than 
> segment.bytes and validate that at startup.
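
A minimal sketch of the kind of check being proposed (hypothetical Java, not
the actual LogConfig validator from the patch; note retention.bytes=-1 means
unlimited and must remain legal):

import org.apache.kafka.common.config.ConfigException;

public class RetentionCheck {
    static void validate(long retentionBytes, long segmentBytes) {
        // retention.bytes < 0 means "no size-based retention", so only
        // reject finite values smaller than a single segment.
        if (retentionBytes >= 0 && retentionBytes < segmentBytes) {
            throw new ConfigException("retention.bytes (" + retentionBytes
                    + ") must be at least segment.bytes (" + segmentBytes + ")");
        }
    }
}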



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request #1796: KAFKA-4092: retention.bytes should not be allowed ...

2016-08-29 Thread cotedm
GitHub user cotedm opened a pull request:

https://github.com/apache/kafka/pull/1796

KAFKA-4092: retention.bytes should not be allowed to be less than 
segment.bytes

adding a LogConfig value validator.  @gwenshap or @junrao would you mind 
taking a look?

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/cotedm/kafka retentionbytesvalidation

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1796.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1796


commit 9312611cb6b92bf17984bf6a499206634526db38
Author: Dustin Cote 
Date:   2016-08-29T18:01:01Z

add a LogConfig value validator






Reg: DefaultPartitioner in Kafka

2016-08-29 Thread BigData dev
Hi All,
In the DefaultPartitioner implementation, when the key is null, we get the
partition number modulo the number of available partitions. Below is the code
snippet.

if (availablePartitions.size() > 0) {
    int part = Utils.toPositive(nextValue) % availablePartitions.size();
    return availablePartitions.get(part).partition();
}
Whereas when the key is not null, we get the partition number modulo the
total number of partitions.

return Utils.toPositive(Utils.murmur2(keyBytes)) % numPartitions;

So if some partitions are not available, the producer will not be able
to publish messages to those partitions.

Shouldn't we do the same here by considering only the available partitions?

https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/clients/producer/internals/DefaultPartitioner.java#L67

Could anyone help clarify this issue?


Thanks,
Bharat
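
For reference, a custom Partitioner along the lines Bharat suggests could look
like the sketch below (a hedged sketch against the 0.10 Java client;
AvailableOnlyPartitioner is a hypothetical class, and note that hashing over
only the currently available partitions remaps keys whenever availability
changes, which is presumably why the default uses the total count for keyed
messages):

import java.util.List;
import java.util.Map;
import java.util.concurrent.ThreadLocalRandom;

import org.apache.kafka.clients.producer.Partitioner;
import org.apache.kafka.common.Cluster;
import org.apache.kafka.common.PartitionInfo;
import org.apache.kafka.common.utils.Utils;

public class AvailableOnlyPartitioner implements Partitioner {

    @Override
    public int partition(String topic, Object key, byte[] keyBytes,
                         Object value, byte[] valueBytes, Cluster cluster) {
        List<PartitionInfo> available = cluster.availablePartitionsForTopic(topic);
        if (available != null && !available.isEmpty()) {
            // Hash keyed messages over available partitions only. This
            // remaps keys whenever availability changes, unlike the default.
            int idx = (keyBytes == null)
                    ? ThreadLocalRandom.current().nextInt(available.size())
                    : Utils.toPositive(Utils.murmur2(keyBytes)) % available.size();
            return available.get(idx).partition();
        }
        // Nothing is available: fall back to the total partition count and
        // let the send fail, as the default partitioner does for keyed messages.
        int numPartitions = cluster.partitionsForTopic(topic).size();
        return (keyBytes == null)
                ? ThreadLocalRandom.current().nextInt(numPartitions)
                : Utils.toPositive(Utils.murmur2(keyBytes)) % numPartitions;
    }

    @Override
    public void configure(Map<String, ?> configs) {}

    @Override
    public void close() {}
}

Such a class would be plugged in via the partitioner.class producer config.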


Re: [DISCUSS] KIP-78: Cluster Id

2016-08-29 Thread Harsha Chintalapani
Ismael,
   What happens when the cluster.id changes from its initial value? For
example, users changed their zookeeper.root and now a new cluster.id is
generated. Do you think it would be useful to store this in meta.properties
along with broker.id, so that we only generate it once and store it on disk?
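
For context, meta.properties today holds just the version and broker.id; the
suggestion amounts to persisting one extra line, something like (a hypothetical
sketch of the resulting file, not a format the KIP currently specifies):

version=0
broker.id=0
cluster.id=<generated-id>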

Thanks,
Harsha

On Sat, Aug 27, 2016 at 4:47 PM Gwen Shapira  wrote:

> Thanks Ismael, this looks great.
>
> One of the things you mentioned is that cluster ID will be useful in
> log aggregation. Perhaps it makes sense to include cluster ID in the
> log? For example, as one of the things a broker logs after startup?
> And ideally clients would log that as well after successful parsing of
> MetadataResponse?
>
> Gwen
>
>
> On Sat, Aug 27, 2016 at 4:39 AM, Ismael Juma  wrote:
> > Hi all,
> >
> > We've posted "KIP-78: Cluster Id" for discussion:
> >
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-78%3A+Cluster+Id
> >
> > Please take a look. Your feedback is appreciated.
> >
> > Thanks,
> > Ismael
>
>
>
> --
> Gwen Shapira
> Product Manager | Confluent
> 650.450.2760 | @gwenshap
> Follow us: Twitter | blog
>


Re: [DISCUSS] Remove beta label from the new Java consumer

2016-08-29 Thread Harsha Chintalapani
The new consumer API hasn't seen wide adoption yet, from my experience with
users. We at Storm recently shipped the new consumer API spout, and it's
probably a good idea to wait one more minor release before removing the beta
label.  I am ok with either way.
Thanks,
Harsha

On Thu, Aug 25, 2016 at 2:05 PM Jason Gustafson  wrote:

> I'm +1 also. I feel a lot more confident about this with all of the system
> testing we now have in place (including the tests covering Streams and
> Connect).
>
> -Jason
>
> On Thu, Aug 25, 2016 at 9:57 AM, Gwen Shapira  wrote:
>
> > Makes sense :)
> >
> > On Thu, Aug 25, 2016 at 9:40 AM, Neha Narkhede  wrote:
> > > Yeah, I'm supportive of this.
> > >
> > > On Thu, Aug 25, 2016 at 9:26 AM Ismael Juma  wrote:
> > >
> > >> Hi Gwen,
> > >>
> > >> We have a few recent stories of people using Connect and Streams in
> > >> production. That means the new Java Consumer too. :)
> > >>
> > >> Ismael
> > >>
> > >> > On Thu, Aug 25, 2016 at 5:09 PM, Gwen Shapira  wrote:
> > >>
> > >> > Originally, we suggested keeping the beta label until we know someone
> > >> > successfully uses the new consumer in production.
> > >> >
> > >> > We can consider the recent KIPs enough, but IMO it would be better if
> > >> > someone with a production deployment hanging out on our mailing list
> > >> > confirmed a good experience with the new consumer.
> > >> >
> > >> > Gwen
> > >> >
> > >> > On Wed, Aug 24, 2016 at 8:45 PM, Ismael Juma  wrote:
> > >> > > Hi all,
> > >> > >
> > >> > > We currently say the following in our documentation:
> > >> > >
> > >> > > "As of the 0.9.0 release we have added a new Java consumer to
> > replace
> > >> our
> > >> > > existing high-level ZooKeeper-based consumer and low-level
> consumer
> > >> APIs.
> > >> > > This client is considered beta quality."[1]
> > >> > >
> > >> > > Since then, Jason and the community have done a lot of work to
> > improve
> > >> it
> > >> > > (including KIP-41 and KIP-62), we declared it API stable in
> 0.10.0.0
> > >> and
> > >> > > it's the only option for those that need security support. Yes, it
> > >> still
> > >> > > has bugs, but so does the old consumer and all development is
> > currently
> > >> > > focused on the new consumer.
> > >> > >
> > >> > > As such, I propose we remove the beta label for the next release
> and
> > >> > switch
> > >> > > our tools to use the new consumer by default unless the zookeeper
> > >> > > command-line option is present (for compatibility). This is
> similar
> > to
> > >> > what
> > >> > > we did it for the new producer in 0.9.0.0, but backwards
> compatible.
> > >> > >
> > >> > > Thoughts?
> > >> > >
> > >> > > Ismael
> > >> > >
> > >> > > [1] http://kafka.apache.org/documentation.html#consumerapi
> > >> >
> > >> >
> > >> >
> > >> > --
> > >> > Gwen Shapira
> > >> > Product Manager | Confluent
> > >> > 650.450.2760 | @gwenshap
> > >> > Follow us: Twitter | blog
> > >> >
> > >>
> > > --
> > > Thanks,
> > > Neha
> >
> >
> >
> > --
> > Gwen Shapira
> > Product Manager | Confluent
> > 650.450.2760 | @gwenshap
> > Follow us: Twitter | blog
> >
>


[jira] [Commented] (KAFKA-156) Messages should not be dropped when brokers are unavailable

2016-08-29 Thread Jason Gustafson (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15446486#comment-15446486
 ] 

Jason Gustafson commented on KAFKA-156:
---

[~dpnchl] KAFKA-789 is already resolved as a duplicate of this issue itself. 
Also, as far as I know, there are no current plans to include this feature in 
0.10.1.

> Messages should not be dropped when brokers are unavailable
> ---
>
> Key: KAFKA-156
> URL: https://issues.apache.org/jira/browse/KAFKA-156
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Sharad Agarwal
>Assignee: Dru Panchal
> Fix For: 0.10.1.0
>
>
> When none of the brokers are available, the producer should spool the messages 
> to disk and keep retrying until brokers come back.
> This will also enable broker upgrades/maintenance without message loss.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3538) Abstract the creation/retrieval of Producer for stream sinks for unit testing

2016-08-29 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3538?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-3538:

Affects Version/s: (was: 0.10.1.0)
   0.10.0.0
   Issue Type: Improvement  (was: New Feature)

> Abstract the creation/retrieval of Producer for stream sinks for unit testing
> -
>
> Key: KAFKA-3538
> URL: https://issues.apache.org/jira/browse/KAFKA-3538
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Affects Versions: 0.10.0.0
>Reporter: Michael Coon
>Assignee: Guozhang Wang
>Priority: Minor
>  Labels: semantics
> Fix For: 0.10.0.1
>
>
> The StreamThread creates producer/consumers directly as KafkaProducer and 
> KafkaConsumer, thus eliminating my ability to unit test my streams code 
> without having an active Kafka nearby. Could this be abstracted in a way that 
> it relies on an optional ProducerProvider or ConsumerProvider implementation 
> that could inject a mock producer/consumer for unit testing? We do this in 
> all our kafka code for unit testing and if a provider is not offered at 
> runtime, we create the concrete KafkaProducer/Consumer components by default.
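
For illustration, the injection pattern being requested looks roughly like the
sketch below, using the MockProducer that ships with kafka-clients (SinkWriter
is a hypothetical example class, not Streams internals):

import org.apache.kafka.clients.producer.MockProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class MockProducerExample {
    // Code under test depends on the Producer interface, not KafkaProducer.
    static class SinkWriter {
        private final Producer<String, String> producer;
        SinkWriter(Producer<String, String> producer) { this.producer = producer; }
        void write(String topic, String value) {
            producer.send(new ProducerRecord<>(topic, value));
        }
    }

    public static void main(String[] args) {
        // In a unit test, inject MockProducer instead of a live KafkaProducer.
        MockProducer<String, String> mock =
                new MockProducer<>(true, new StringSerializer(), new StringSerializer());
        new SinkWriter(mock).write("sink-topic", "hello");
        System.out.println("records captured: " + mock.history().size()); // prints 1
    }
}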



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-4091) Unable to produce or consume on any topic

2016-08-29 Thread Jason Gustafson (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4091?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15446460#comment-15446460
 ] 

Jason Gustafson commented on KAFKA-4091:


[~datawiz...@gmail.com] It seems possible that Kafka had not completed startup 
successfully when you saw those client errors. The log message is saying that 
port 9092 on localhost was inaccessible. If you see this again, you might want 
to verify directly (e.g. using telnet) that Kafka is actually listening on port 
9092 of localhost.
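
For example, from the client host (assuming telnet is installed):

$ telnet localhost 9092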

> Unable to produce or consume on any topic
> -
>
> Key: KAFKA-4091
> URL: https://issues.apache.org/jira/browse/KAFKA-4091
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.10.0.0
> Environment: Amazon Linux, t2.micro
>Reporter: Avi Chopra
>Priority: Critical
>
> While trying to set up Kafka on 2 slave boxes and 1 master box, I got a weird 
> condition where I was not able to consume or produce to a topic.
> I am using Mirror Maker to sync data between slave <--> master, and getting the 
> following logs endlessly:
> [2016-08-26 14:28:33,897] WARN Bootstrap broker localhost:9092 disconnected 
> (org.apache.kafka.clients.NetworkClient) [2016-08-26 14:28:43,515] WARN 
> Bootstrap broker localhost:9092 disconnected 
> (org.apache.kafka.clients.NetworkClient) [2016-08-26 14:28:45,118] WARN 
> Bootstrap broker localhost:9092 disconnected 
> (org.apache.kafka.clients.NetworkClient) [2016-08-26 14:28:46,721] WARN 
> Bootstrap broker localhost:9092 disconnected 
> (org.apache.kafka.clients.NetworkClient) [2016-08-26 14:28:48,324] WARN 
> Bootstrap broker localhost:9092 disconnected 
> (org.apache.kafka.clients.NetworkClient) [2016-08-26 14:28:49,927] WARN 
> Bootstrap broker localhost:9092 disconnected 
> (org.apache.kafka.clients.NetworkClient) [2016-08-26 14:28:53,029] WARN 
> Bootstrap broker localhost:9092 disconnected 
> (org.apache.kafka.clients.NetworkClient)
> The only way I could recover was by restarting Kafka, which produced this kind 
> of logs:
> [2016-08-26 14:30:54,856] WARN Found a corrupted index file, 
> /tmp/kafka-logs/__consumer_offsets-43/.index, deleting 
> and rebuilding index... (kafka.log.Log) [2016-08-26 14:30:54,856] INFO 
> Recovering unflushed segment 0 in log __consumer_offsets-43. (kafka.log.Log) 
> [2016-08-26 14:30:54,857] INFO Completed load of log __consumer_offsets-43 
> with log end offset 0 (kafka.log.Log) [2016-08-26 14:30:54,860] WARN Found a 
> corrupted index file, 
> /tmp/kafka-logs/__consumer_offsets-26/.index, deleting 
> and rebuilding index... (kafka.log.Log) [2016-08-26 14:30:54,860] INFO 
> Recovering unflushed segment 0 in log __consumer_offsets-26. (kafka.log.Log) 
> [2016-08-26 14:30:54,861] INFO Completed load of log __consumer_offsets-26 
> with log end offset 0 (kafka.log.Log) [2016-08-26 14:30:54,864] WARN Found a 
> corrupted index file, 
> /tmp/kafka-logs/__consumer_offsets-35/.index, deleting 
> and rebuilding index... (kafka.log.Log)
> ERROR Error when sending message to topic dr_ubr_analytics_limits with key: 
> null, value: 1 bytes with error: 
> (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback) 
> org.apache.kafka.common.errors.TimeoutException: Failed to update metadata 
> after 6 ms.
> The consumer group command was showing a major lag.
> This is my test phase, so I was able to restart and recover from the master 
> box, but I want to know what caused this issue and how it can be avoided. Is 
> there a way to debug this issue?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [VOTE] KIP-77: Improve Kafka Streams Join Semantics

2016-08-29 Thread Eno Thereska
+1 (non-binding)

> On 29 Aug 2016, at 12:22, Bill Bejeck  wrote:
> 
> +1
> 
> On Mon, Aug 29, 2016 at 5:50 AM, Matthias J. Sax 
> wrote:
> 
>> I’d like to initiate the voting process for KIP-77:
>> 
>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-
>> 77%3A+Improve+Kafka+Streams+Join+Semantics
>> 
>> -Matthias
>> 
>> 
>> 



RE: Batch Expired

2016-08-29 Thread Ghosh, Achintya (Contractor)
Hi Krishna,
Thank you for your response.

Connections are already made, but if we increase the request timeout 5 times, let's 
say  request.timeout.ms= 5*6 , then the number of 'Batch Expired' 
exceptions is lower, so what is the recommended value for request.timeout.ms?
If we increase it more, is there any impact?
 
Thanks
Achintya

-Original Message-
From: R Krishna [mailto:krishna...@gmail.com] 
Sent: Friday, August 26, 2016 6:17 PM
To: us...@kafka.apache.org
Cc: dev@kafka.apache.org
Subject: Re: Batch Expired

Are any requests at all making it? That is a pretty big timeout.

However, I noticed that if no connection is made to the broker, you can still get 
batch expiry.


On Fri, Aug 26, 2016 at 6:32 AM, Ghosh, Achintya (Contractor) < 
achintya_gh...@comcast.com> wrote:

> Hi there,
>
> What is the recommended setting for the producer, as I see a lot 
> of Batch Expired exceptions even though I put request.timeout=6?
>
> Producer settings:
> acks=1
> retries=3
> batch.size=16384
> linger.ms=5
> buffer.memory=33554432
> request.timeout.ms=6
> timeout.ms=6
>
> Thanks
> Achintya
>



--
Radha Krishna, Proddaturi
253-234-5657


[jira] [Assigned] (KAFKA-4095) When a topic is deleted and then created with the same name, 'committed' offsets are not reset

2016-08-29 Thread Vahid Hashemian (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4095?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vahid Hashemian reassigned KAFKA-4095:
--

Assignee: Vahid Hashemian

> When a topic is deleted and then created with the same name, 'committed' 
> offsets are not reset
> --
>
> Key: KAFKA-4095
> URL: https://issues.apache.org/jira/browse/KAFKA-4095
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 0.9.0.1, 0.10.0.0
>Reporter: Alex Glikson
>Assignee: Vahid Hashemian
>
> I encountered a very strange behavior of Kafka, which seems to be a bug.
> After deleting a topic and re-creating it with the same name, I produced a 
> certain amount of new messages, and then opened a consumer with the same ID 
> that I used before re-creating the topic (with auto.commit=false, 
> auto.offset.reset=earliest). While the latest offsets seemed up to date, the 
> *committed* offset (returned by the committed() method) was an *old* offset, from 
> the time before the topic had been deleted and re-created.
> I would have assumed that when a topic is deleted, all the associated 
> topic-partitions and consumer groups are recycled too.
> I am using the Java client version 0.9, with Kafka server 0.10.
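
A minimal sketch of how such a stale committed offset can be observed with the
Java consumer (topic name, group id, and properties are illustrative only):

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

public class CommittedOffsetCheck {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "same-group-id-as-before");   // reused consumer group
        props.put("enable.auto.commit", "false");
        props.put("auto.offset.reset", "earliest");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            TopicPartition tp = new TopicPartition("recreated-topic", 0);
            consumer.assign(Collections.singletonList(tp));
            // Per the report, this can return an offset committed before the
            // topic was deleted and re-created.
            OffsetAndMetadata committed = consumer.committed(tp);
            System.out.println("committed = " + committed);
        }
    }
}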



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-4095) When a topic is deleted and then created with the same name, 'committed' offsets are not reset

2016-08-29 Thread Vahid Hashemian (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4095?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15446292#comment-15446292
 ] 

Vahid Hashemian commented on KAFKA-4095:


I can take a look at this.

> When a topic is deleted and then created with the same name, 'committed' 
> offsets are not reset
> --
>
> Key: KAFKA-4095
> URL: https://issues.apache.org/jira/browse/KAFKA-4095
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 0.9.0.1, 0.10.0.0
>Reporter: Alex Glikson
>
> I encountered a very strange behavior of Kafka, which seems to be a bug.
> After deleting a topic and re-creating it with the same name, I produced a 
> certain amount of new messages, and then opened a consumer with the same ID 
> that I used before re-creating the topic (with auto.commit=false, 
> auto.offset.reset=earliest). While the latest offsets seemed up to date, the 
> *committed* offset (returned by the committed() method) was an *old* offset, from 
> the time before the topic had been deleted and re-created.
> I would have assumed that when a topic is deleted, all the associated 
> topic-partitions and consumer groups are recycled too.
> I am using the Java client version 0.9, with Kafka server 0.10.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-4097) "Server not found in kerberos database" issue while starting a Kafka server in a secured mode

2016-08-29 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4097?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava updated KAFKA-4097:
-
Component/s: (was: KafkaConnect)

> "Server not found in kerberos database" issue while starting a Kafka server 
> in a secured mode
> -
>
> Key: KAFKA-4097
> URL: https://issues.apache.org/jira/browse/KAFKA-4097
> Project: Kafka
>  Issue Type: Test
>Affects Versions: 0.10.0.1
>Reporter: syam prasad
>Assignee: Ewen Cheslack-Postava
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-4097) "Server not found in kerberos database" issue while starting a Kafka server in a secured mode

2016-08-29 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4097?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava updated KAFKA-4097:
-
Assignee: (was: Ewen Cheslack-Postava)

> "Server not found in kerberos database" issue while starting a Kafka server 
> in a secured mode
> -
>
> Key: KAFKA-4097
> URL: https://issues.apache.org/jira/browse/KAFKA-4097
> Project: Kafka
>  Issue Type: Test
>Affects Versions: 0.10.0.1
>Reporter: syam prasad
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3129) Console producer issue when request-required-acks=0

2016-08-29 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15445771#comment-15445771
 ] 

ASF GitHub Bot commented on KAFKA-3129:
---

GitHub user cotedm opened a pull request:

https://github.com/apache/kafka/pull/1795

KAFKA-3129: Console producer issue when request-required-acks=0

change console producer default acks to 1, update acks docs.  Also added 
the -1 config to the acks docs since that question comes up often.  @ijuma and 
@vahidhashemian, does this look reasonable to you?

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/cotedm/kafka KAFKA-3129

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1795.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1795


commit bec755ffc4e4b779a6c6d45b144a7e3a87dc64d7
Author: Dustin Cote 
Date:   2016-08-29T12:44:37Z

change console producer default acks to 1, update acks docs




> Console producer issue when request-required-acks=0
> ---
>
> Key: KAFKA-3129
> URL: https://issues.apache.org/jira/browse/KAFKA-3129
> Project: Kafka
>  Issue Type: Bug
>  Components: producer 
>Affects Versions: 0.9.0.0, 0.10.0.0
>Reporter: Vahid Hashemian
>Assignee: Neha Narkhede
> Attachments: kafka-3129.mov, server.log.abnormal.txt, 
> server.log.normal.txt
>
>
> I have been running a simple test case in which I have a text file 
> {{messages.txt}} with 1,000,000 lines (lines contain numbers from 1 to 
> 1,000,000 in ascending order). I run the console consumer like this:
> {{$ bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic test}}
> Topic {{test}} is on 1 partition with a replication factor of 1.
> Then I run the console producer like this:
> {{$ bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test < 
> messages.txt}}
> Then the console starts receiving the messages. About half the time it 
> goes all the way to 1,000,000, but in other cases it stops short, usually 
> at 999,735.
> I tried running another console consumer on another machine and both 
> consumers behave the same way. I can't see anything related to this in the 
> logs.
> I also ran the same experiment with a similar file of 10,000 lines, and I am 
> getting similar behavior. When the consumer does not receive all 10,000 
> messages, it usually stops at 9,864.
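
Until the default changes, the acks behavior can be pinned explicitly on the
console producer (an illustrative command, assuming the --request-required-acks
option present in these versions):

$ bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test --request-required-acks 1 < messages.txt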



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (KAFKA-3129) Console producer issue when request-required-acks=0

2016-08-29 Thread Dustin Cote (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3129?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dustin Cote reassigned KAFKA-3129:
--

Assignee: Dustin Cote  (was: Neha Narkhede)

> Console producer issue when request-required-acks=0
> ---
>
> Key: KAFKA-3129
> URL: https://issues.apache.org/jira/browse/KAFKA-3129
> Project: Kafka
>  Issue Type: Bug
>  Components: producer 
>Affects Versions: 0.9.0.0, 0.10.0.0
>Reporter: Vahid Hashemian
>Assignee: Dustin Cote
> Attachments: kafka-3129.mov, server.log.abnormal.txt, 
> server.log.normal.txt
>
>
> I have been running a simple test case in which I have a text file 
> {{messages.txt}} with 1,000,000 lines (lines contain numbers from 1 to 
> 1,000,000 in ascending order). I run the console consumer like this:
> {{$ bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic test}}
> Topic {{test}} is on 1 partition with a replication factor of 1.
> Then I run the console producer like this:
> {{$ bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test < 
> messages.txt}}
> Then the console starts receiving the messages. About half the time it 
> goes all the way to 1,000,000, but in other cases it stops short, usually 
> at 999,735.
> I tried running another console consumer on another machine and both 
> consumers behave the same way. I can't see anything related to this in the 
> logs.
> I also ran the same experiment with a similar file of 10,000 lines, and I am 
> getting similar behavior. When the consumer does not receive all 10,000 
> messages, it usually stops at 9,864.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request #1795: KAFKA-3129: Console producer issue when request-re...

2016-08-29 Thread cotedm
GitHub user cotedm opened a pull request:

https://github.com/apache/kafka/pull/1795

KAFKA-3129: Console producer issue when request-required-acks=0

change console producer default acks to 1, update acks docs.  Also added 
the -1 config to the acks docs since that question comes up often.  @ijuma and 
@vahidhashemian, does this look reasonable to you?

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/cotedm/kafka KAFKA-3129

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1795.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1795


commit bec755ffc4e4b779a6c6d45b144a7e3a87dc64d7
Author: Dustin Cote 
Date:   2016-08-29T12:44:37Z

change console producer default acks to 1, update acks docs






[jira] [Commented] (KAFKA-4097) "Server not found in kerberos database" issue while starting a Kafka server in a secured mode

2016-08-29 Thread syam prasad (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15445637#comment-15445637
 ] 

syam prasad commented on KAFKA-4097:


Sorry, I missed adding this property in my last update under 
server.properties:

"zookeeper.connect=archimedes.in.ibm.com:2182"


> "Server not found in kerberos database" issue while starting a Kafka server 
> in a secured mode
> -
>
> Key: KAFKA-4097
> URL: https://issues.apache.org/jira/browse/KAFKA-4097
> Project: Kafka
>  Issue Type: Test
>  Components: KafkaConnect
>Affects Versions: 0.10.0.1
>Reporter: syam prasad
>Assignee: Ewen Cheslack-Postava
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-4097) "Server not found in kerberos database" issue while starting a Kafka server in a secured mode

2016-08-29 Thread syam prasad (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15445644#comment-15445644
 ] 

syam prasad commented on KAFKA-4097:


Using the list_principals command in the kadmin window, I was able to see the 
zookeeper and kafka SPNs under the mentioned KDC.

> "Server not found in kerberos database" issue while starting a Kafka server 
> in a secured mode
> -
>
> Key: KAFKA-4097
> URL: https://issues.apache.org/jira/browse/KAFKA-4097
> Project: Kafka
>  Issue Type: Test
>  Components: KafkaConnect
>Affects Versions: 0.10.0.1
>Reporter: syam prasad
>Assignee: Ewen Cheslack-Postava
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-4097) "Server not found in kerberos database" issue while starting a Kafka server in a secured mode

2016-08-29 Thread syam prasad (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4097?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

syam prasad updated KAFKA-4097:
---
Summary: "Server not found in kerberos database" issue while starting a 
Kafka server in a secured mode  (was: "Server not found in kerberos database" 
issue while starting Kafka broker in secured mode)

> "Server not found in kerberos database" issue while starting a Kafka server 
> in a secured mode
> -
>
> Key: KAFKA-4097
> URL: https://issues.apache.org/jira/browse/KAFKA-4097
> Project: Kafka
>  Issue Type: Test
>  Components: KafkaConnect
>Affects Versions: 0.10.0.1
>Reporter: syam prasad
>Assignee: Ewen Cheslack-Postava
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-4097) "Server not found in kerberos database" issue while starting Kafka broker in secured mode

2016-08-29 Thread syam prasad (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15445624#comment-15445624
 ] 

syam prasad commented on KAFKA-4097:


Hi,
 
 Zookeeper started fine in secured mode (I can see the TGT start time 
and expiry time) with the following properties:
 
 zookeeper properties:
 =
 dataDir=/tmp/zookeeper2
# the port at which the clients will connect
clientPort=2182
# disable the per-ip limit on the number of connections since this is a 
non-production config
maxClientCnxns=0
authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider
requireClientAuthScheme=sasl

jaasLoginRenew=360

 zookeeper_jaas.conf:
 ==
 Server {
com.sun.security.auth.module.Krb5LoginModule required
useKeyTab=true
keyTab="/home/dsadm/syam/zookeeper-service.keytab"
storeKey=true
serviceName="zookeeper"
debug=true
useTicketCache=false
principal="zookeeper/archimedes.in.ibm@hadoopbi.com";
};

When I started the Kafka server with the following properties:

server.properties:
==
listeners=SASL_PLAINTEXT://archimedes.in.ibm.com:9093
security.inter.broker.protocol=SASL_PLAINTEXT
sasl.enabled.mechanisms=GSSAPI
sasl.mechanism.inter.broker.protocol=GSSAPI
sasl.kerberos.service.name=kafka
zookeeper.set.acl=true
authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer

kafka_broker_jaas.conf:
==
KafkaServer {
com.sun.security.auth.module.Krb5LoginModule required
useKeyTab=true
storeKey=true
serviceName="kafka"
keyTab="/home/dsadm/syam/kafka_service.keytab"
principal="kafka/archimedes.in.ibm@hadoopbi.com";
};

// Zookeeper client authentication
Client {
   com.sun.security.auth.module.Krb5LoginModule required
   useKeyTab=true
   storeKey=true
   debug=true
   serviceName="zookeeper"
   keyTab="/home/dsadm/syam/kafka_service.keytab"
   principal="kafka/archimedes.in.ibm@hadoopbi.com";
};

krb5 and jaas files are specified via exporting KAFKA_OPTS:
=

export KAFKA_OPTS="-Djava.security.krb5.conf=/home/dsadm/syam/krb5.conf 
-Djava.security.auth.login.config=/home/dsadm/syam/kafka_broker_jaas.conf"

export KAFKA_OPTS="-Djava.security.krb5.conf=/home/dsadm/syam/krb5.conf 
-Djava.security.auth.login.config=/home/dsadm/syam/zookeeper_jaas.conf"


I was seeing the following issue while starting the Kafka server 
(./bin/kafka-server-start.sh config/server.properties):

[2016-08-29 16:51:27,375] INFO Socket connection established to 
archimedes/9.124.101.5:2182, initiating session 
(org.apache.zookeeper.ClientCnxn)
[2016-08-29 16:51:27,467] INFO Session establishment complete on server 
archimedes/9.124.101.5:2182, sessionid = 0x156d5ffea8a0001, negotiated timeout 
= 6000 (org.apache.zookeeper.ClientCnxn)
[2016-08-29 16:51:27,492] INFO zookeeper state changed (SyncConnected) 
(org.I0Itec.zkclient.ZkClient)
[2016-08-29 16:51:27,614] ERROR An error: 
(java.security.PrivilegedActionException: javax.security.sasl.SaslException: 
GSS initiate failed [Caused by GSSException: No valid credentials provided 
(Mechanism level: Server not found in Kerberos database (7) - UNKNOWN_SERVER)]) 
occurred when evaluating Zookeeper Quorum Member's  received SASL token. This 
may be caused by Java's being unable to resolve the Zookeeper Quorum Member's 
hostname correctly. You may want to try to adding 
'-Dsun.net.spi.nameservice.provider.1=dns,sun' to your client's JVMFLAGS 
environment. Zookeeper Client will go to AUTH_FAILED state. 
(org.apache.zookeeper.client.ZooKeeperSaslClient)
[2016-08-29 16:51:27,615] ERROR SASL authentication with Zookeeper Quorum 
member failed: javax.security.sasl.SaslException: An error: 
(java.security.PrivilegedActionException: javax.security.sasl.SaslException: 
GSS initiate failed [Caused by GSSException: No valid credentials provided 
(Mechanism level: Server not found in Kerberos database (7) - UNKNOWN_SERVER)]) 
occurred when evaluating Zookeeper Quorum Member's  received SASL token. This 
may be caused by Java's being unable to resolve the Zookeeper Quorum Member's 
hostname correctly. You may want to try to adding 
'-Dsun.net.spi.nameservice.provider.1=dns,sun' to your client's JVMFLAGS 
environment. Zookeeper Client will go to AUTH_FAILED state. 
(org.apache.zookeeper.ClientCnxn)
[2016-08-29 16:51:27,617] INFO zookeeper state changed (AuthFailed) 
(org.I0Itec.zkclient.ZkClient)
[2016-08-29 16:51:27,621] INFO Terminate ZkClient event thread. 
(org.I0Itec.zkclient.ZkEventThread)
[2016-08-29 16:51:27,646] FATAL Fatal error during KafkaServer startup. Prepare 
to shutdown (kafka.server.KafkaServer)
org.I0Itec.zkclient.exception.ZkAuthFailedException: Authentication failure




> "Server not found in kerberos 

[jira] [Updated] (KAFKA-4097) "Server not found in kerberos database" issue while starting Kafka broker in secured mode

2016-08-29 Thread syam prasad (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4097?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

syam prasad updated KAFKA-4097:
---
Summary: "Server not found in kerberos database" issue while starting Kafka 
broker in secured mode  (was: Server not found in kerberos database while 
starting Kafka broker in secured mode)

> "Server not found in kerberos database" issue while starting Kafka broker in 
> secured mode
> -
>
> Key: KAFKA-4097
> URL: https://issues.apache.org/jira/browse/KAFKA-4097
> Project: Kafka
>  Issue Type: Test
>  Components: KafkaConnect
>Affects Versions: 0.10.0.1
>Reporter: syam prasad
>Assignee: Ewen Cheslack-Postava
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-4097) Server not found in kerberos database while starting Kafka broker in secured mode

2016-08-29 Thread syam prasad (JIRA)
syam prasad created KAFKA-4097:
--

 Summary: Server not found in kerberos database while starting 
Kafka broker in secured mode
 Key: KAFKA-4097
 URL: https://issues.apache.org/jira/browse/KAFKA-4097
 Project: Kafka
  Issue Type: Test
  Components: KafkaConnect
Affects Versions: 0.10.0.1
Reporter: syam prasad
Assignee: Ewen Cheslack-Postava






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [VOTE] KIP-77: Improve Kafka Streams Join Semantics

2016-08-29 Thread Bill Bejeck
+1

On Mon, Aug 29, 2016 at 5:50 AM, Matthias J. Sax 
wrote:

> I’d like to initiate the voting process for KIP-77:
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> 77%3A+Improve+Kafka+Streams+Join+Semantics
>
> -Matthias
>
>
>


[VOTE] KIP-77: Improve Kafka Streams Join Semantics

2016-08-29 Thread Matthias J. Sax
I’d like to initiate the voting process for KIP-77:

https://cwiki.apache.org/confluence/display/KAFKA/KIP-77%3A+Improve+Kafka+Streams+Join+Semantics

-Matthias



