[jira] [Commented] (KAFKA-5094) Censor SCRAM config change logging

2017-04-19 Thread JIRA

[ 
https://issues.apache.org/jira/browse/KAFKA-5094?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15976067#comment-15976067
 ] 

Johan Ström commented on KAFKA-5094:


Ping [~ijuma], separate ticket as requested!

> Censor SCRAM config change logging
> --
>
> Key: KAFKA-5094
> URL: https://issues.apache.org/jira/browse/KAFKA-5094
> Project: Kafka
>  Issue Type: Improvement
>  Components: log
>Affects Versions: 0.10.2.0
>Reporter: Johan Ström
>
> (As mentioned in comment on KAFKA-4943):
> Another possibly bad thing is that Kafka logs the credentials in the clear 
> too (0.10.2.0):
> {code}
> [2017-04-05 16:29:00,266] INFO Processing notification(s) to /config/changes 
> (kafka.common.ZkNodeChangeNotificationListener)
> [2017-04-05 16:29:00,282] INFO Processing override for entityPath: 
> users/kafka with config: 
> {SCRAM-SHA-512=salt=ZGl6dnRzeWQ5ZjJhNWo1bWdxN2draG96Ng==,stored_key=BEdel+ChGSnpdpV0f8s8J/fWlwZJbUtAD1N6FygpPLK1AiVjg0yiHCvigq1R2x+o72QSvNkyFITuVZMlrj8hZg==,server_key=/RZ/EcGAaXwAKvFknVpsBHzC4tBXBLPJQnN4tM/s0wJpMcR9qvvJTGKM9Nx+zoXCc9buNoCd+/2LpL+yWde+/w==,iterations=4096}
>  (kafka.server.DynamicConfigManager)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (KAFKA-5094) Censor SCRAM config change logging

2017-04-19 Thread JIRA
Johan Ström created KAFKA-5094:
--

 Summary: Censor SCRAM config change logging
 Key: KAFKA-5094
 URL: https://issues.apache.org/jira/browse/KAFKA-5094
 Project: Kafka
  Issue Type: Improvement
  Components: log
Affects Versions: 0.10.2.0
Reporter: Johan Ström


(As mentioned in comment on KAFKA-4943):

Another possibly bad thing is that Kafka logs the credentials in the clear too 
(0.10.2.0):

{code}
[2017-04-05 16:29:00,266] INFO Processing notification(s) to /config/changes 
(kafka.common.ZkNodeChangeNotificationListener)
[2017-04-05 16:29:00,282] INFO Processing override for entityPath: users/kafka 
with config: 
{SCRAM-SHA-512=salt=ZGl6dnRzeWQ5ZjJhNWo1bWdxN2draG96Ng==,stored_key=BEdel+ChGSnpdpV0f8s8J/fWlwZJbUtAD1N6FygpPLK1AiVjg0yiHCvigq1R2x+o72QSvNkyFITuVZMlrj8hZg==,server_key=/RZ/EcGAaXwAKvFknVpsBHzC4tBXBLPJQnN4tM/s0wJpMcR9qvvJTGKM9Nx+zoXCc9buNoCd+/2LpL+yWde+/w==,iterations=4096}
 (kafka.server.DynamicConfigManager)
{code}
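One possible mitigation (a hypothetical sketch, not Kafka's actual fix — key names and the placeholder are assumptions) is to mask values of credential-bearing keys before the config map is logged:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Set;

/** Hypothetical sketch: mask credential-bearing config values before logging.
 *  The key names and "[hidden]" placeholder are illustrative assumptions. */
public class ConfigRedactor {
    private static final Set<String> SENSITIVE_KEYS =
            Set.of("SCRAM-SHA-256", "SCRAM-SHA-512");

    public static Map<String, String> redact(Map<String, String> config) {
        Map<String, String> out = new LinkedHashMap<>();
        for (Map.Entry<String, String> e : config.entrySet()) {
            // Keep the key visible but hide the value for sensitive entries.
            out.put(e.getKey(),
                    SENSITIVE_KEYS.contains(e.getKey()) ? "[hidden]" : e.getValue());
        }
        return out;
    }
}
```

With something like this in place, the INFO line would show `SCRAM-SHA-512=[hidden]` instead of the salt, stored key, and server key.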





[jira] [Commented] (KAFKA-5055) Kafka Streams skipped-records-rate sensor producing nonzero values even when FailOnInvalidTimestamp is used as extractor

2017-04-19 Thread Guozhang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15975899#comment-15975899
 ] 

Guozhang Wang commented on KAFKA-5055:
--

[~nthean] Thanks for reporting this. Could you share your full streams config 
properties in this ticket? Also what's the non-zero value you observed on 
{{skipped-records-rate}}?

> Kafka Streams skipped-records-rate sensor producing nonzero values even when 
> FailOnInvalidTimestamp is used as extractor
> 
>
> Key: KAFKA-5055
> URL: https://issues.apache.org/jira/browse/KAFKA-5055
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Nikki Thean
>
> According to the code and the documentation for this metric, the only reason 
> for a skipped record is an invalid timestamp, except that a) I am reading 
> from a topic that is populated solely by Kafka Connect and b) I am using 
> `FailOnInvalidTimestamp` as the timestamp extractor.
> Either I'm missing something in the documentation (i.e. another reason for 
> skipped records) or there is a bug in the code that calculates this metric.





[jira] [Updated] (KAFKA-5055) Kafka Streams skipped-records-rate sensor producing nonzero values even when FailOnInvalidTimestamp is used as extractor

2017-04-19 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5055?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-5055:
-
Component/s: streams

> Kafka Streams skipped-records-rate sensor producing nonzero values even when 
> FailOnInvalidTimestamp is used as extractor
> 
>
> Key: KAFKA-5055
> URL: https://issues.apache.org/jira/browse/KAFKA-5055
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Nikki Thean
>
> According to the code and the documentation for this metric, the only reason 
> for a skipped record is an invalid timestamp, except that a) I am reading 
> from a topic that is populated solely by Kafka Connect and b) I am using 
> `FailOnInvalidTimestamp` as the timestamp extractor.
> Either I'm missing something in the documentation (i.e. another reason for 
> skipped records) or there is a bug in the code that calculates this metric.





[jira] [Updated] (KAFKA-4850) RocksDb cannot use Bloom Filters

2017-04-19 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4850?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-4850:
-
Priority: Major  (was: Minor)

> RocksDb cannot use Bloom Filters
> 
>
> Key: KAFKA-4850
> URL: https://issues.apache.org/jira/browse/KAFKA-4850
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Affects Versions: 0.10.2.0
>Reporter: Eno Thereska
>Assignee: Bharat Viswanadham
> Fix For: 0.11.0.0
>
>
> Bloom Filters would speed up RocksDb lookups. However they currently do not 
> work in RocksDb 5.0.2. This has been fixed in trunk, but we'll have to wait 
> until that is released and tested. 
> Then we can add the line in RocksDbStore.java in openDb:
> tableConfig.setFilter(new BloomFilter(10));





[jira] [Commented] (KAFKA-4850) RocksDb cannot use Bloom Filters

2017-04-19 Thread Guozhang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15975884#comment-15975884
 ] 

Guozhang Wang commented on KAFKA-4850:
--

Bumping up to "major" as this is a critical perf improvement and we do not want 
to drop it on the floor.

> RocksDb cannot use Bloom Filters
> 
>
> Key: KAFKA-4850
> URL: https://issues.apache.org/jira/browse/KAFKA-4850
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Affects Versions: 0.10.2.0
>Reporter: Eno Thereska
>Assignee: Bharat Viswanadham
> Fix For: 0.11.0.0
>
>
> Bloom Filters would speed up RocksDb lookups. However they currently do not 
> work in RocksDb 5.0.2. This has been fixed in trunk, but we'll have to wait 
> until that is released and tested. 
> Then we can add the line in RocksDbStore.java in openDb:
> tableConfig.setFilter(new BloomFilter(10));
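For background on why this helps: a Bloom filter lets a point lookup skip files that definitely do not contain the key, answering "definitely absent" or "possibly present". A toy sketch of the data structure (illustrative only, not RocksDB's implementation):

```java
import java.util.BitSet;

/** Toy Bloom filter sketch (not RocksDB's implementation):
 *  k hash probes set/check bits in a fixed-size bit array. */
public class ToyBloomFilter {
    private final BitSet bits;
    private final int size;
    private final int hashes;

    public ToyBloomFilter(int size, int hashes) {
        this.bits = new BitSet(size);
        this.size = size;
        this.hashes = hashes;
    }

    private int probe(String key, int i) {
        // Derive the i-th probe position from the key's hash;
        // Math.floorMod keeps the index in [0, size).
        return Math.floorMod(key.hashCode() * 31 + i * 0x9E3779B9, size);
    }

    public void add(String key) {
        for (int i = 0; i < hashes; i++) bits.set(probe(key, i));
    }

    /** false means definitely absent; true means possibly present. */
    public boolean mightContain(String key) {
        for (int i = 0; i < hashes; i++) if (!bits.get(probe(key, i))) return false;
        return true;
    }
}
```

RocksDB applies the same idea per table file, so `new BloomFilter(10)` (10 bits per key) lets most lookups for absent keys avoid a disk read entirely.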





[jira] [Commented] (KAFKA-4848) Stream thread getting into deadlock state while trying to get rocksdb lock in retryWithBackoff

2017-04-19 Thread Guozhang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15975831#comment-15975831
 ] 

Guozhang Wang commented on KAFKA-4848:
--

[~sjmittal] It is cherry-picked into 0.10.2 now I think.

> Stream thread getting into deadlock state while trying to get rocksdb lock in 
> retryWithBackoff
> --
>
> Key: KAFKA-4848
> URL: https://issues.apache.org/jira/browse/KAFKA-4848
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.10.2.0
>Reporter: Sachin Mittal
>Assignee: Sachin Mittal
> Fix For: 0.11.0.0, 0.10.2.1
>
> Attachments: thr-1
>
>
> We see a deadlock state when a streams thread takes longer than 
> MAX_POLL_INTERVAL_MS_CONFIG to process a task. In this case the thread's 
> partitions, including the rocksdb lock, are assigned to some other thread. 
> When it tries to process the next task it cannot get the rocksdb lock and 
> simply keeps waiting for that lock forever.
> In retryWithBackoff for AbstractTaskCreator we have a backoffTimeMs = 50L.
> If it does not get the lock we simply increase the time by 10x and keep 
> trying inside the while-true loop.
> We need an upper bound for this backoffTimeMs. If the time is greater than 
> MAX_POLL_INTERVAL_MS_CONFIG and it still hasn't got the lock, this thread's 
> partitions have moved somewhere else and it may never get the lock again.
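The suggested fix can be sketched as a bounded retry loop (names and structure are illustrative, not the actual patch): grow the backoff 10x as today, but give up once it exceeds a configured maximum, since at that point the partitions have likely migrated.

```java
/** Illustrative sketch of retry-with-bounded-backoff; not the actual Kafka patch. */
public class BoundedRetry {
    public interface Action { boolean tryOnce(); }

    public static boolean retryWithBackoff(Action action, long startBackoffMs, long maxBackoffMs) {
        long backoffMs = startBackoffMs;
        while (true) {
            if (action.tryOnce()) return true;          // e.g. the rocksdb lock was acquired
            if (backoffMs > maxBackoffMs) return false; // give up: partitions likely moved away
            // (a real implementation would sleep for backoffMs here)
            backoffMs *= 10;                            // same 10x growth described above
        }
    }
}
```

The key difference from the described behavior is the second check: instead of looping forever, the caller gets a failure it can handle once the bound is exceeded.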





[jira] [Created] (KAFKA-5093) Load only batch header when rebuilding producer ID map

2017-04-19 Thread Jason Gustafson (JIRA)
Jason Gustafson created KAFKA-5093:
--

 Summary: Load only batch header when rebuilding producer ID map
 Key: KAFKA-5093
 URL: https://issues.apache.org/jira/browse/KAFKA-5093
 Project: Kafka
  Issue Type: Improvement
Reporter: Jason Gustafson
Assignee: Jason Gustafson
 Fix For: 0.11.0.0


When rebuilding the producer ID map for KIP-98, we unnecessarily load the full 
record data into memory when scanning through the log. It would be better to 
only load the batch header since it is all that is needed.
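The idea in miniature (field offsets below are made up for the sketch, not Kafka's real record-batch layout): parse only fixed-size header fields and never materialize the record payload.

```java
import java.nio.ByteBuffer;

/** Illustrative header-only scan; the layout is an assumption for the sketch,
 *  not Kafka's actual record-batch format. */
public class BatchHeaderScan {
    // Pretend layout: bytes [0..7] producerId, bytes [8..11] lastSequence.
    public static long producerId(ByteBuffer batch) {
        return batch.getLong(0);   // absolute read: payload bytes are never touched
    }

    public static int lastSequence(ByteBuffer batch) {
        return batch.getInt(8);
    }
}
```

Reading only the header keeps memory usage proportional to the number of batches rather than the total record data.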





[jira] [Updated] (KAFKA-5092) KIP 141 - ProducerRecordBuilder Interface

2017-04-19 Thread Stephane Maarek (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5092?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephane Maarek updated KAFKA-5092:
---
Summary: KIP 141 - ProducerRecordBuilder Interface  (was: KIP 141- 
ProducerRecordBuilder Interface improvements)

> KIP 141 - ProducerRecordBuilder Interface
> -
>
> Key: KAFKA-5092
> URL: https://issues.apache.org/jira/browse/KAFKA-5092
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Stephane Maarek
> Fix For: 0.11.0.0
>
>
> See KIP here: 
> https://cwiki.apache.org/confluence/display/KAFKA/KIP+141+-+ProducerRecordBuilder+Interface





[jira] [Updated] (KAFKA-5092) KIP 141- ProducerRecordBuilder Interface improvements

2017-04-19 Thread Stephane Maarek (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5092?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephane Maarek updated KAFKA-5092:
---
Summary: KIP 141- ProducerRecordBuilder Interface improvements  (was: 
Producer record interface)

> KIP 141- ProducerRecordBuilder Interface improvements
> -
>
> Key: KAFKA-5092
> URL: https://issues.apache.org/jira/browse/KAFKA-5092
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Stephane Maarek
> Fix For: 0.11.0.0
>
>
> See KIP here: 
> https://cwiki.apache.org/confluence/display/KAFKA/KIP+141+-+ProducerRecordBuilder+Interface





[jira] [Created] (KAFKA-5092) Producer record interface

2017-04-19 Thread Stephane Maarek (JIRA)
Stephane Maarek created KAFKA-5092:
--

 Summary: Producer record interface
 Key: KAFKA-5092
 URL: https://issues.apache.org/jira/browse/KAFKA-5092
 Project: Kafka
  Issue Type: Improvement
Reporter: Stephane Maarek
 Fix For: 0.11.0.0


See KIP here: 
https://cwiki.apache.org/confluence/display/KAFKA/KIP+141+-+ProducerRecordBuilder+Interface





[GitHub] kafka pull request #2800: added interface to allow producers to create a Pro...

2017-04-19 Thread simplesteph
Github user simplesteph closed the pull request at:

https://github.com/apache/kafka/pull/2800


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[DISCUSS] KIP 141 - ProducerRecordBuilder Interface

2017-04-19 Thread Stephane Maarek
Hi all,

My first KIP, let me know your thoughts!
https://cwiki.apache.org/confluence/display/KAFKA/KIP+141+-+ProducerRecordBuilder+Interface


Cheers,
Stephane


[jira] [Commented] (KAFKA-3042) updateIsr should stop after failed several times due to zkVersion issue

2017-04-19 Thread Jeff Widman (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3042?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15975603#comment-15975603
 ] 

Jeff Widman commented on KAFKA-3042:


[~lindong] thanks for the offer to help and sorry for the slow response. 

I'm not exactly sure how to repro, but below I copied a sanitized version of 
our internal wiki page documenting our findings as we tried to figure out what 
was happening and how we got into the state of mismatched controller epoch for 
the controller vs a random partition. It's not the most polished; it's more a 
train of thought put to paper as we debugged.

Reading through it, it appeared that broker 3 lost connection to zookeeper, 
then when it came back, it elected itself controller, but somehow ended up in a 
state where the broker 3 controller had a list of brokers that was completely 
empty. This doesn't make logical sense because if a broker is controller, then 
it should list itself in active brokers. But somehow it happened. Then 
following that, the active epoch for the controller is 134, but the active 
epoch listed by a random partition in zookeeper is 133. So that created the 
version mismatch. 
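The mismatch matters because ISR updates in ZooKeeper are conditional writes keyed on the znode version; in spirit they behave like a compare-and-set (a loose analogy, not the actual ZkUtils code):

```java
import java.util.concurrent.atomic.AtomicInteger;

/** Loose analogy for ZooKeeper's conditional setData: the write succeeds only
 *  when the caller's expected version matches the node's current version. */
public class VersionedNode {
    private final AtomicInteger version;

    public VersionedNode(int initialVersion) {
        this.version = new AtomicInteger(initialVersion);
    }

    /** Returns true and bumps the version on a match; false models a BadVersion error. */
    public boolean conditionalWrite(int expectedVersion) {
        return version.compareAndSet(expectedVersion, expectedVersion + 1);
    }
}
```

Under this model, a writer whose cached version is stale keeps failing its conditional writes until it re-reads the node, which is consistent with the endless zkVersion retries this ticket describes.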

More details below, and I also have access to the detailed Kafka logs (but not 
ZK logs) beyond just the snippets if you need anything else. They will get 
rotated out of elasticsearch within a few months and disappear, so hopefully we 
can get to the bottom of this before that.


{code}
3 node cluster. 
Broker 1 is controller.
Zookeeper GC pause meant that broker 3 lost connection. 
When it came back, broker 3 thought it was controller, but thought there were 
no alive brokers--see the empty set referenced in the logs below. This alone 
seems incorrect because if a broker is a controller, you'd think it would 
include itself in the set.


See the following in the logs:


[2017-03-17 21:32:15,812] ERROR Controller 3 epoch 134 initiated state change 
for partition [topic_name,626] from OfflinePartition to OnlinePartition failed 
(s
tate.change.logger)
kafka.common.NoReplicaOnlineException: No replica for partition 
[topic_name,626] is alive. Live brokers are: [Set()], Assigned replicas are: 
[List(1, 3)]
at 
kafka.controller.OfflinePartitionLeaderSelector.selectLeader(PartitionLeaderSelector.scala:75)
at 
kafka.controller.PartitionStateMachine.electLeaderForPartition(PartitionStateMachine.scala:345)
at 
kafka.controller.PartitionStateMachine.kafka$controller$PartitionStateMachine$$handleStateChange(PartitionStateMachine.scala:205)
at 
kafka.controller.PartitionStateMachine$$anonfun$triggerOnlinePartitionStateChange$3.apply(PartitionStateMachine.scala:120)
at 
kafka.controller.PartitionStateMachine$$anonfun$triggerOnlinePartitionStateChange$3.apply(PartitionStateMachine.scala:117)
at 
scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:772)
at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:98)
at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:98)
at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:226)
at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:39)
at scala.collection.mutable.HashMap.foreach(HashMap.scala:98)
at 
scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:771)
at 
kafka.controller.PartitionStateMachine.triggerOnlinePartitionStateChange(PartitionStateMachine.scala:117)
at 
kafka.controller.PartitionStateMachine.startup(PartitionStateMachine.scala:70)
at 
kafka.controller.KafkaController.onControllerFailover(KafkaController.scala:335)
at 
kafka.controller.KafkaController$$anonfun$1.apply$mcV$sp(KafkaController.scala:166)
at kafka.server.ZookeeperLeaderElector.elect(ZookeeperLeaderElector.scala:84)
at 
kafka.server.ZookeeperLeaderElector$LeaderChangeListener$$anonfun$handleDataDeleted$1.apply$mcZ$sp(ZookeeperLeaderElector.scala:146)
at 
kafka.server.ZookeeperLeaderElector$LeaderChangeListener$$anonfun$handleDataDeleted$1.apply(ZookeeperLeaderElector.scala:141)
at 
kafka.server.ZookeeperLeaderElector$LeaderChangeListener$$anonfun$handleDataDeleted$1.apply(ZookeeperLeaderElector.scala:141)
at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:231)
at 
kafka.server.ZookeeperLeaderElector$LeaderChangeListener.handleDataDeleted(ZookeeperLeaderElector.scala:141)
at org.I0Itec.zkclient.ZkClient$9.run(ZkClient.java:824)
at org.I0Itec.zkclient.ZkEventThread.run(ZkEventThread.java:71)

Looking at the code + error message, the controller is unaware of active 
brokers. However, there are assigned replicas. We checked the log files under 
/data/kafka and they had m_times greater than the exception timestamp, plus our 
producers and consumers seemed to be working, so the cluster is successfully 
passing data around. The controller situation is just screwed up.


[2017-03-17 21:32:43,976] ERROR Controller 3 epoch 134 initiated state 
change for partition 

Re: [DISCUSS] KIP 130: Expose states of active tasks to KafkaStreams public API

2017-04-19 Thread Florian Hussonnois
Hi Matthias,

So sorry for the delay in replying to you. For now, I think we can keep
KafkaStreams#toString() as it is.
It's always preferable to have an implementation for the toString method.

2017-04-14 4:08 GMT+02:00 Matthias J. Sax :

> Florian,
>
> >>> What about KafkaStreams#toString() method?
> >>>
> >>> I think we want to deprecate it, as with KIP-120 and the changes of this
> >>> KIP it gets obsolete.
>
> Any thoughts about this? For me, this is the last open point to discuss
> (or what should be reflected in the KIP in case you agree) before I can
> put my vote on the VOTE thread you did start already.
>
> -Matthias
>
>
> On 4/11/17 12:18 AM, Damian Guy wrote:
> > Hi Florian,
> >
> > Thanks for the updates. The KIP is looking good.
> >
> > Cheers,
> > Damian
> >
> > On Fri, 7 Apr 2017 at 22:41 Matthias J. Sax 
> wrote:
> >
> >> What about KafkaStreams#toString() method?
> >>
> >> I think we want to deprecate it, as with KIP-120 and the changes of this
> >> KIP it gets obsolete.
> >>
> >> If we do so, please update the KIP accordingly.
> >>
> >>
> >> -Matthias
> >>
> >> On 3/28/17 7:00 PM, Matthias J. Sax wrote:
> >>> Thanks for updating the KIP!
> >>>
> >>> I think it's good as is -- I would not add anything more to
> TaskMetadata.
> >>>
> >>> About subtopologies and tasks. We do have the concept of subtopologies
> >>> already in KIP-120. It's only missing an ID that allows linking a
> >>> subtopology to a task.
> >>>
> >>> IMHO, adding a simple variable to `Subtopology` that provides the id
> >>> should be sufficient. We can simply document in the JavaDocs how
> >>> Subtopology and TaskMetadata can be linked to each other.
> >>>
> >>> I did update KIP-120 accordingly.
> >>>
> >>>
> >>> -Matthias
> >>>
> >>> On 3/28/17 3:45 PM, Florian Hussonnois wrote:
>  Hi all,
> 
>  I've updated the KIP and the PR to reflect your suggestions.
> 
> >> https://cwiki.apache.org/confluence/display/KAFKA/KIP+
> 130%3A+Expose+states+of+active+tasks+to+KafkaStreams+public+API
>  https://github.com/apache/kafka/pull/2612
> 
>  Also, I've exposed property StreamThread#state as a string through the
>  new class ThreadMetadata.
> 
>  Thanks,
> 
>  2017-03-27 23:40 GMT+02:00 Florian Hussonnois   >:
> 
>  Hi Guozhang, Matthias,
> 
>  It's a great idea to add sub-topology descriptions. This would
>  help developers better understand the topology concept.
> 
>  I agree that it is not really user-friendly to check if
>  `StreamsMetadata#streamThreads` is not returning null.
> 
>  The method name localThreadsMetadata looks good. In addition, it's
>  simpler to build ThreadMetadata instances from the `StreamTask`
>  class than from the `StreamPartitionAssignor` class.
> 
>  I will work on the modifications. As I understand it, I have to add a
>  subTopologyId property to the TaskMetadata class - am I right?
> 
>  Thanks,
> 
>  2017-03-26 0:25 GMT+01:00 Guozhang Wang   >:
> 
>  Re 1): this is a good point. Maybe we can move
>  `StreamsMetadata#streamThreads` as
>  `KafkaStreams#localThreadsMetadata`?
> 
>  3): this is a minor suggestion: rename `assignedPartitions` to
>  `topicPartitions`, to be consistent with `StreamsMetadata`?
> 
> 
>  Guozhang
> 
>  On Thu, Mar 23, 2017 at 4:30 PM, Matthias J. Sax
>  > wrote:
> 
>  Thanks for the progress on this KIP. I think we are on the
>  right path!
> 
>  Couple of comments/questions:
> 
>  (1) Why do we not consider the "rejected alternative" to
> add
>  the method
>  to KafkaStreams? The comment on #streamThreads() says:
> 
>  "Note this method will return null if called
> on
>  {@link
>  StreamsMetadata} which represent a remote application."
> 
>  Thus, if we cannot get any remote metadata, it seems not
>  straightforward to not add it to KafkaStreams directly -- this would
>  avoid invalid calls and `null` return value in the first place.
> 
>  I like the idea of exposing sub-topologies:
> 
>  (2a) I would recommend to rename `topicsGroupId` to
>  `subTopologyId` :)
> 
>  (2b) We could add this to KIP-120 already. However, I
> would
>  not just
>  link both via name, but leverage KIP-120 

[jira] [Resolved] (KAFKA-4892) kafka 0.8.2.1 getOffsetsBefore API returning correct offset given timestamp in unix epoch format

2017-04-19 Thread Chris Bedford (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4892?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Bedford resolved KAFKA-4892.
--
Resolution: Not A Problem

This was due to cockpit error. I thought the timestamp was based on the create 
time of the log segment, not last-modified. When I started using that attribute, 
everything started to make sense. Sorry for the false alarm!



> kafka 0.8.2.1 getOffsetsBefore API returning correct offset given timestamp 
> in unix epoch format
> 
>
> Key: KAFKA-4892
> URL: https://issues.apache.org/jira/browse/KAFKA-4892
> Project: Kafka
>  Issue Type: Bug
>  Components: clients, offset manager
>Affects Versions: 0.8.2.1
> Environment: ubuntu 16.04
>Reporter: Chris Bedford
>
> I am seeing unexpected behavior in the getOffsetsBefore method of the client 
> API.
> I understand the granularity of 'start-from-offset' via kafka spout is based 
> on how many log segments you have.  
> I have created a demo program that reproduces this on my 
> GitHub account [ 
> https://github.com/buildlackey/kafkaOffsetBug/blob/master/README.md ], 
> and I have also posted this same question to stack overflow with 
>  a detailed set of steps for how to repro this issue 
> (using my test program and scripts).  
> See: 
> http://stackoverflow.com/questions/42775128/kafka-0-8-2-1-getoffsetsbefore-api-returning-correct-offset-given-timestamp-in-u
> Thanks in advance for your help !





Re: [VOTE] KIP-135 : Send of null key to a compacted topic should throw non-retriable error back to user

2017-04-19 Thread Mayuresh Gharat
Hi Ismael,

I went ahead and created a patch so that we get on the same page regarding what
we want to do. We can make changes if you have any comments.

Thanks,

Mayuresh

On Mon, Apr 10, 2017 at 11:32 AM, Mayuresh Gharat <
gharatmayures...@gmail.com> wrote:

> Got it.
> We can probably extend InvalidRecordException with a more specific
> exception for this use case and make it first class on the produce side, OR
> we can add an error code for InvalidRecordException in the Errors class and
> make it first class. I am fine either way.
> What do you prefer?
>
> Thanks,
>
> Mayuresh
>
> On Mon, Apr 10, 2017 at 10:16 AM, Ismael Juma  wrote:
>
>> Hi Mayuresh,
>>
>> I was suggesting that we introduce a new error code for non retriable
>> invalid record exceptions (not sure what's a good name). We would then
>> change LogValidator and Log to use this new exception wherever it makes
>> sense (errors that are not retriable). One of many such cases is
>> https://github.com/apache/kafka/blob/5cf64f06a877a181d12a2ae2390516
>> ba1a572135/core/src/main/scala/kafka/log/LogValidator.scala#L78
>> 
>>
>> Does that make sense?
>>
>> Ismael
>>
>> On Thu, Apr 6, 2017 at 5:50 PM, Mayuresh Gharat <
>> gharatmayures...@gmail.com>
>> wrote:
>>
>> > Hi Ismael,
>> >
>> > Are you suggesting to use the InvalidRecordException when the key is
>> > null?
>> >
>> > Thanks,
>> >
>> > Mayuresh
>> >
>> > On Thu, Apr 6, 2017 at 8:49 AM, Ismael Juma  wrote:
>> >
>> > > Hi Mayuresh,
>> > >
>> > > I took a closer look at the code and we seem to throw
>> > > `InvalidRecordException` in a number of cases where retrying doesn't
>> seem
>> > > to make sense. For example:
>> > >
>> > > throw new InvalidRecordException(s"Log record magic does not match outer
>> > > magic ${batch.magic}")
>> > > throw new InvalidRecordException("Found invalid number of record headers "
>> > > + numHeaders);
>> > > throw new InvalidRecordException("Found invalid record count " + numRecords
>> > > + " in magic v" + magic() + " batch");
>> > >
>> > > It seems like most of the usage of InvalidRecordException is for non
>> > > retriable errors. Maybe we need to introduce a non retriable version of
>> > > this exception and use it in the various places where it makes sense.
>> > >
>> > > Ismael
>> > >
>> > > On Tue, Apr 4, 2017 at 12:22 AM, Mayuresh Gharat <
>> > > gharatmayures...@gmail.com
>> > > > wrote:
>> > >
>> > > > Hi All,
>> > > >
>> > > > It seems that there is no further concern with the KIP-135. At this
>> > point
>> > > > we would like to start the voting process. The KIP can be found at
>> > > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-
>> > > > 135+%3A+Send+of+null+key+to+a+compacted+topic+should+throw+
>> > > > non-retriable+error+back+to+user
>> > > > > > > action?pageId=67638388
>> > > > >
>> > > >
>> > > > Thanks,
>> > > >
>> > > > Mayuresh
>> > > >
>> > >
>> >
>> >
>> >
>> > --
>> > -Regards,
>> > Mayuresh R. Gharat
>> > (862) 250-7125
>> >
>>
>
>
>
> --
> -Regards,
> Mayuresh R. Gharat
> (862) 250-7125
>



-- 
-Regards,
Mayuresh R. Gharat
(862) 250-7125


[jira] [Commented] (KAFKA-4808) send of null key to a compacted topic should throw error back to user

2017-04-19 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4808?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15975450#comment-15975450
 ] 

ASF GitHub Bot commented on KAFKA-4808:
---

GitHub user MayureshGharat opened a pull request:

https://github.com/apache/kafka/pull/2875

KAFKA-4808 : Send of null key to a compacted topic should throw 
non-retriable error back to user



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/MayureshGharat/kafka KAFKA-4808

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2875.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2875


commit 731f1d1fedda5e09a8bd7094baf2e1572a3ba06e
Author: MayureshGharat 
Date:   2017-04-19T17:45:08Z

Added non retriable exception for producing record with null key to a 
compacted topic




> send of null key to a compacted topic should throw error back to user
> -
>
> Key: KAFKA-4808
> URL: https://issues.apache.org/jira/browse/KAFKA-4808
> Project: Kafka
>  Issue Type: Bug
>  Components: producer 
>Affects Versions: 0.10.2.0
>Reporter: Ismael Juma
>Assignee: Mayuresh Gharat
> Fix For: 0.11.0.0
>
>
> If a message with a null key is produced to a compacted topic, the broker 
> returns `CorruptRecordException`, which is a retriable exception. As such, 
> the producer keeps retrying until retries are exhausted or request.timeout.ms 
> expires and eventually throws a TimeoutException. This is confusing and not 
> user-friendly.
> We should throw a meaningful error back to the user. From an implementation 
> perspective, we would have to use a non retriable error code to avoid this 
> issue.
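The implementation direction discussed here can be sketched as an exception hierarchy where retriability is encoded in the type (class names below are illustrative, not Kafka's exact classes):

```java
/** Illustrative sketch: the producer retries only errors whose type marks them retriable. */
public class ErrorClassification {
    static class ApiException extends RuntimeException {
        ApiException(String msg) { super(msg); }
    }
    /** Marker supertype for errors worth retrying (e.g. transient broker issues). */
    static class RetriableException extends ApiException {
        RetriableException(String msg) { super(msg); }
    }
    /** Non-retriable: a null key on a compacted topic can never succeed on retry. */
    static class InvalidRecordException extends ApiException {
        InvalidRecordException(String msg) { super(msg); }
    }

    static boolean shouldRetry(ApiException e) {
        return e instanceof RetriableException;
    }
}
```

Mapping the null-key case to a non-retriable type means the producer surfaces the error immediately instead of burning through retries and finally throwing a TimeoutException.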





[GitHub] kafka pull request #2875: KAFKA-4808 : Send of null key to a compacted topic...

2017-04-19 Thread MayureshGharat
GitHub user MayureshGharat opened a pull request:

https://github.com/apache/kafka/pull/2875

KAFKA-4808 : Send of null key to a compacted topic should throw 
non-retriable error back to user



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/MayureshGharat/kafka KAFKA-4808

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2875.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2875


commit 731f1d1fedda5e09a8bd7094baf2e1572a3ba06e
Author: MayureshGharat 
Date:   2017-04-19T17:45:08Z

Added non retriable exception for producing record with null key to a 
compacted topic






[jira] [Assigned] (KAFKA-5090) Kafka Streams SessionStore.findSessions javadoc broken

2017-04-19 Thread Matthias J. Sax (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5090?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax reassigned KAFKA-5090:
--

Assignee: Michal Borowiecki

> Kafka Streams SessionStore.findSessions javadoc broken
> --
>
> Key: KAFKA-5090
> URL: https://issues.apache.org/jira/browse/KAFKA-5090
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.10.2.0, 0.10.2.1
>Reporter: Michal Borowiecki
>Assignee: Michal Borowiecki
>Priority: Trivial
>
> {code}
> /**
>  * Fetch any sessions with the matching key and the sessions end is &le 
> earliestEndTime and the sessions
>  * start is &ge latestStartTime
>  */
> KeyValueIterator findSessions(final K key, long 
> earliestSessionEndTime, final long latestSessionStartTime);
> {code}
> The conditions in the javadoc comment are inverted (le should be ge and ge 
> should be le), since this is what the code does. They were correct in the 
> original KIP:
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-94+Session+Windows
> {code}
> /**
>  * Find any aggregated session values with the matching key and where the
>  * session’s end time is >= earliestSessionEndTime, i.e, the oldest 
> session to
>  * merge with, and the session’s start time is <= latestSessionStartTime, 
> i.e,
>  * the newest session to merge with.
>  */
>KeyValueIterator findSessionsToMerge(final K key, final 
> long earliestSessionEndTime, final long latestSessionStartTime);
> {code}
> Also, the escaped html character references are missing the trailing 
> semicolon making them render as-is.
> Happy to have this assigned to me to fix as it seems trivial.





Re: Subscribe to mailing list

2017-04-19 Thread Matthias J. Sax
It's self-service:

http://kafka.apache.org/contact


-Matthias

On 4/19/17 7:55 AM, Arunkumar wrote:
> Hi There
> I would like to subscribe to this mailing list and know more about kafka. 
> Please add me to the list. Thanks in advance
> 
> Thanks
> Arunkumar Pichaimuthu, PMP
> 





Jenkins build is back to normal : kafka-trunk-jdk7 #2102

2017-04-19 Thread Apache Jenkins Server
See 




[jira] [Assigned] (KAFKA-5088) some spelling error in code comment

2017-04-19 Thread Matthias J. Sax (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5088?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax reassigned KAFKA-5088:
--

Assignee: Xin

> some spelling error in code comment 
> 
>
> Key: KAFKA-5088
> URL: https://issues.apache.org/jira/browse/KAFKA-5088
> Project: Kafka
>  Issue Type: Bug
>  Components: streams, tools
>Affects Versions: 0.10.2.0
>Reporter: Xin
>Assignee: Xin
>Priority: Trivial
> Fix For: 0.11.0.0
>
>
> some spelling error in code comment :
> metadata==》metatdata...
> metadata==》metatadata
> propogated==》propagated



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (KAFKA-5088) some spelling error in code comment

2017-04-19 Thread Matthias J. Sax (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5088?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-5088:
---
Fix Version/s: 0.11.0.0
   Status: Patch Available  (was: Open)

> some spelling error in code comment 
> 
>
> Key: KAFKA-5088
> URL: https://issues.apache.org/jira/browse/KAFKA-5088
> Project: Kafka
>  Issue Type: Bug
>  Components: streams, tools
>Affects Versions: 0.10.2.0
>Reporter: Xin
>Assignee: Xin
>Priority: Trivial
> Fix For: 0.11.0.0
>
>
> some spelling error in code comment :
> metadata==》metatdata...
> metadata==》metatadata
> propogated==》propagated



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] kafka pull request #2860: kafka-5068: Optionally print out metrics after run...

2017-04-19 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/2860


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Resolved] (KAFKA-5068) Optionally print out metrics after running the perf tests

2017-04-19 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5068?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao resolved KAFKA-5068.

   Resolution: Fixed
Fix Version/s: 0.11.0.0

Issue resolved by pull request 2860
[https://github.com/apache/kafka/pull/2860]

> Optionally print out metrics after running the perf tests
> -
>
> Key: KAFKA-5068
> URL: https://issues.apache.org/jira/browse/KAFKA-5068
> Project: Kafka
>  Issue Type: Improvement
>  Components: tools
>Affects Versions: 0.10.2.0
>Reporter: Jun Rao
>Assignee: huxi
>  Labels: newbie
> Fix For: 0.11.0.0
>
>
> Often, we run ProducerPerformance/ConsumerPerformance tests to investigate 
> performance issues. It's useful for the tool to print out the metrics in the 
> producer/consumer at the end of the tests. We can make this optional to 
> preserve the current behavior by default.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


Build failed in Jenkins: kafka-trunk-jdk8 #1432

2017-04-19 Thread Apache Jenkins Server
See 


Changes:

[ismael] MINOR: Fix some re-raising of exceptions in system tests

--
[...truncated 372.59 KB...]
kafka.admin.AdminRackAwareTest > testAssignmentWith2ReplicasRackAware STARTED

kafka.admin.AdminRackAwareTest > testAssignmentWith2ReplicasRackAware PASSED

kafka.admin.AdminRackAwareTest > testAssignmentWithRackAwareWithUnevenReplicas 
STARTED

kafka.admin.AdminRackAwareTest > testAssignmentWithRackAwareWithUnevenReplicas 
PASSED

kafka.admin.AdminRackAwareTest > testSkipBrokerWithReplicaAlreadyAssigned 
STARTED

kafka.admin.AdminRackAwareTest > testSkipBrokerWithReplicaAlreadyAssigned PASSED

kafka.admin.AdminRackAwareTest > testAssignmentWithRackAware STARTED

kafka.admin.AdminRackAwareTest > testAssignmentWithRackAware PASSED

kafka.admin.AdminRackAwareTest > testRackAwareExpansion STARTED

kafka.admin.AdminRackAwareTest > testRackAwareExpansion PASSED

kafka.admin.AdminRackAwareTest > 
testAssignmentWith2ReplicasRackAwareWith6Partitions STARTED

kafka.admin.AdminRackAwareTest > 
testAssignmentWith2ReplicasRackAwareWith6Partitions PASSED

kafka.admin.AdminRackAwareTest > 
testAssignmentWith2ReplicasRackAwareWith6PartitionsAnd3Brokers STARTED

kafka.admin.AdminRackAwareTest > 
testAssignmentWith2ReplicasRackAwareWith6PartitionsAnd3Brokers PASSED

kafka.admin.AdminRackAwareTest > 
testGetRackAlternatedBrokerListAndAssignReplicasToBrokers STARTED

kafka.admin.AdminRackAwareTest > 
testGetRackAlternatedBrokerListAndAssignReplicasToBrokers PASSED

kafka.admin.AdminRackAwareTest > testMoreReplicasThanRacks STARTED

kafka.admin.AdminRackAwareTest > testMoreReplicasThanRacks PASSED

kafka.admin.AdminRackAwareTest > testSingleRack STARTED

kafka.admin.AdminRackAwareTest > testSingleRack PASSED

kafka.admin.AdminRackAwareTest > 
testAssignmentWithRackAwareWithRandomStartIndex STARTED

kafka.admin.AdminRackAwareTest > 
testAssignmentWithRackAwareWithRandomStartIndex PASSED

kafka.admin.AdminRackAwareTest > testLargeNumberPartitionsAssignment STARTED

kafka.admin.AdminRackAwareTest > testLargeNumberPartitionsAssignment PASSED

kafka.admin.AdminRackAwareTest > testLessReplicasThanRacks STARTED

kafka.admin.AdminRackAwareTest > testLessReplicasThanRacks PASSED

kafka.admin.DeleteConsumerGroupTest > 
testGroupWideDeleteInZKDoesNothingForActiveConsumerGroup STARTED

kafka.admin.DeleteConsumerGroupTest > 
testGroupWideDeleteInZKDoesNothingForActiveConsumerGroup PASSED

kafka.admin.DeleteConsumerGroupTest > 
testGroupTopicWideDeleteInZKDoesNothingForActiveGroupConsumingMultipleTopics 
STARTED

kafka.admin.DeleteConsumerGroupTest > 
testGroupTopicWideDeleteInZKDoesNothingForActiveGroupConsumingMultipleTopics 
PASSED

kafka.admin.DeleteConsumerGroupTest > 
testConsumptionOnRecreatedTopicAfterTopicWideDeleteInZK STARTED

kafka.admin.DeleteConsumerGroupTest > 
testConsumptionOnRecreatedTopicAfterTopicWideDeleteInZK PASSED

kafka.admin.DeleteConsumerGroupTest > testTopicWideDeleteInZK STARTED

kafka.admin.DeleteConsumerGroupTest > testTopicWideDeleteInZK PASSED

kafka.admin.DeleteConsumerGroupTest > 
testGroupTopicWideDeleteInZKForGroupConsumingOneTopic STARTED

kafka.admin.DeleteConsumerGroupTest > 
testGroupTopicWideDeleteInZKForGroupConsumingOneTopic PASSED

kafka.admin.DeleteConsumerGroupTest > 
testGroupTopicWideDeleteInZKForGroupConsumingMultipleTopics STARTED

kafka.admin.DeleteConsumerGroupTest > 
testGroupTopicWideDeleteInZKForGroupConsumingMultipleTopics PASSED

kafka.admin.DeleteConsumerGroupTest > testGroupWideDeleteInZK STARTED

kafka.admin.DeleteConsumerGroupTest > testGroupWideDeleteInZK PASSED

kafka.admin.AddPartitionsTest > testReplicaPlacementAllServers STARTED

kafka.admin.AddPartitionsTest > testReplicaPlacementAllServers PASSED

kafka.admin.AddPartitionsTest > testWrongReplicaCount STARTED

kafka.admin.AddPartitionsTest > testWrongReplicaCount PASSED

kafka.admin.AddPartitionsTest > testReplicaPlacementPartialServers STARTED

kafka.admin.AddPartitionsTest > testReplicaPlacementPartialServers PASSED

kafka.admin.AddPartitionsTest > testTopicDoesNotExist STARTED

kafka.admin.AddPartitionsTest > testTopicDoesNotExist PASSED

kafka.admin.AddPartitionsTest > testIncrementPartitions STARTED

kafka.admin.AddPartitionsTest > testIncrementPartitions PASSED

kafka.admin.AddPartitionsTest > testManualAssignmentOfReplicas STARTED

kafka.admin.AddPartitionsTest > testManualAssignmentOfReplicas PASSED

kafka.admin.ConfigCommandTest > testScramCredentials STARTED

kafka.admin.ConfigCommandTest > testScramCredentials PASSED

kafka.admin.ConfigCommandTest > shouldParseArgumentsForTopicsEntityType STARTED

kafka.admin.ConfigCommandTest > shouldParseArgumentsForTopicsEntityType PASSED

kafka.admin.ConfigCommandTest > testUserClientQuotaOpts STARTED

kafka.admin.ConfigCommandTest > testUserClientQuotaOpts PASSED

kafka.admin.ConfigCommandTest > shouldAddTopicConfig STARTED


[jira] [Updated] (KAFKA-4667) Connect should create internal topics

2017-04-19 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4667?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava updated KAFKA-4667:
-
 Priority: Critical  (was: Major)
Fix Version/s: 0.11.0.0

Also increasing priority and optimistically setting the fix version to 0.11 
since this issue has affected quite a few people now.

> Connect should create internal topics
> -
>
> Key: KAFKA-4667
> URL: https://issues.apache.org/jira/browse/KAFKA-4667
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Reporter: Emanuele Cesena
>Priority: Critical
> Fix For: 0.11.0.0
>
>
> I'm reporting this as an issue but in fact it requires more investigation 
> (which unfortunately I'm not able to perform at this time).
> Repro steps:
> - configure Kafka for consistency, for example:
> default.replication.factor=3
> min.insync.replicas=2
> unclean.leader.election.enable=false
> - run Connect for the first time, which should create its internal topics
> I believe these topics are created with the broker's default, in particular:
> min.insync.replicas=2
> unclean.leader.election.enable=false
> but connect doesn't produce with acks=all, which in turn may cause the 
> cluster to go in a bad state (see, e.g., 
> https://issues.apache.org/jira/browse/KAFKA-4666).
> Solution would be to force availability mode, i.e. force:
> unclean.leader.election.enable=true
> when creating the connect topics, or vice versa: detect availability vs 
> consistency mode and turn on acks=all if needed.
> I assume the same happens with other kafka-based services such as streams.
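
As a rough illustration of the mismatch described above, a producer writing
with acks=1 to a topic whose durability depends on min.insync.replicas=2 gets
weaker guarantees than the topic configuration suggests. The helper below is a
sketch with made-up names, not Connect's actual code:

```python
# Illustrative check only: does the producer's acks setting match the
# durability implied by the topic's min.insync.replicas? (Hypothetical
# helper; Kafka/Connect contain no such function.)
def acks_satisfy_topic(producer_acks, min_insync_replicas):
    """Return True if the acks setting is strong enough for the topic's
    min.insync.replicas requirement (simplified illustrative rule)."""
    if min_insync_replicas <= 1:
        # a single in-sync replica always acknowledges the write
        return True
    # with min.insync.replicas > 1, only acks=all (-1) waits for the ISR
    return producer_acks in ("all", "-1")
```

Under this rule, Connect's acks=1 writes to topics created with
min.insync.replicas=2 would be flagged as inconsistent.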



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Resolved] (KAFKA-5087) Kafka Connect's configuration topics should always be compacted

2017-04-19 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5087?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava resolved KAFKA-5087.
--
Resolution: Duplicate
  Assignee: Ewen Cheslack-Postava

> Kafka Connect's configuration topics should always be compacted
> ---
>
> Key: KAFKA-5087
> URL: https://issues.apache.org/jira/browse/KAFKA-5087
> Project: Kafka
>  Issue Type: Improvement
>  Components: KafkaConnect
>Reporter: Gwen Shapira
>Assignee: Ewen Cheslack-Postava
>Priority: Critical
>
> New users frequently lose their connector configuration because they did not 
> manually set the topic deletion policy to "compact". Not a good first 
> experience with our system.
> 1. If the topics do not exist, Kafka Connect should create them with the 
> correct configuration.
> 2. If the topics do exist, Kafka Connect should check their deletion policy 
> and refuse to start if it isn't "compact"
> I'd love to do it (or have someone else do it) the moment the AdminClient is 
> merged and to have it in 0.11.0.0.
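
A minimal sketch of check (2) above, assuming a hypothetical helper that is
handed the topic's configs at startup (Connect's real startup code differs):

```python
# Illustrative startup check: refuse to start if a Connect internal topic
# is not compacted. Function and error wording are made up for this sketch.
def check_config_topic(topic_name, topic_configs):
    """Raise if the topic's cleanup.policy does not include 'compact'."""
    # Kafka's broker default cleanup.policy is "delete"
    policy = topic_configs.get("cleanup.policy", "delete")
    if "compact" not in policy.split(","):
        raise RuntimeError(
            "topic %s has cleanup.policy=%s; Connect config topics must "
            "use cleanup.policy=compact" % (topic_name, policy))
```

The split on "," allows the combined "compact,delete" policy to pass as well.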



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (KAFKA-4667) Connect should create internal topics

2017-04-19 Thread Ewen Cheslack-Postava (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4667?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15975068#comment-15975068
 ] 

Ewen Cheslack-Postava commented on KAFKA-4667:
--

For potential implementer, see notes in KAFKA-5087 -- setting up compaction 
properly is important, and as soon as the AdminClient is merged someone could 
start work on this.

> Connect should create internal topics
> -
>
> Key: KAFKA-4667
> URL: https://issues.apache.org/jira/browse/KAFKA-4667
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Reporter: Emanuele Cesena
>
> I'm reporting this as an issue but in fact it requires more investigation 
> (which unfortunately I'm not able to perform at this time).
> Repro steps:
> - configure Kafka for consistency, for example:
> default.replication.factor=3
> min.insync.replicas=2
> unclean.leader.election.enable=false
> - run Connect for the first time, which should create its internal topics
> I believe these topics are created with the broker's default, in particular:
> min.insync.replicas=2
> unclean.leader.election.enable=false
> but connect doesn't produce with acks=all, which in turn may cause the 
> cluster to go in a bad state (see, e.g., 
> https://issues.apache.org/jira/browse/KAFKA-4666).
> Solution would be to force availability mode, i.e. force:
> unclean.leader.election.enable=true
> when creating the connect topics, or vice versa: detect availability vs 
> consistency mode and turn on acks=all if needed.
> I assume the same happens with other kafka-based services such as streams.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (KAFKA-5087) Kafka Connect's configuration topics should always be compacted

2017-04-19 Thread Ewen Cheslack-Postava (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15975066#comment-15975066
 ] 

Ewen Cheslack-Postava commented on KAFKA-5087:
--

I think this is really part of the general "create internal topics 
automatically" issue that is already filed in KAFKA-4667. Going to close this 
as duplicate, but copy over some of the notes since compaction wasn't mentioned 
in that JIRA yet.

> Kafka Connect's configuration topics should always be compacted
> ---
>
> Key: KAFKA-5087
> URL: https://issues.apache.org/jira/browse/KAFKA-5087
> Project: Kafka
>  Issue Type: Improvement
>  Components: KafkaConnect
>Reporter: Gwen Shapira
>Priority: Critical
>
> New users frequently lose their connector configuration because they did not 
> manually set the topic deletion policy to "compact". Not a good first 
> experience with our system.
> 1. If the topics do not exist, Kafka Connect should create them with the 
> correct configuration.
> 2. If the topics do exist, Kafka Connect should check their deletion policy 
> and refuse to start if it isn't "compact"
> I'd love to do it (or have someone else do it) the moment the AdminClient is 
> merged and to have it in 0.11.0.0.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (KAFKA-5089) JAR mismatch in KafkaConnect leads to NoSuchMethodError

2017-04-19 Thread Ewen Cheslack-Postava (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15975051#comment-15975051
 ] 

Ewen Cheslack-Postava commented on KAFKA-5089:
--

[~off...@chaosmail.at] Usually this type of error occurs if there is a 
conflicting jar on the CLASSPATH, in this case it looks like it might be a 
conflicting version of Jersey. The CLASSPATH environment variable isn't logged 
here, but you could try editing the start script (it actually delegates to 
kafka-run-class.sh) to echo it or catch the full command being executed with ps.

Just to note, however -- you mention the environment is HDP's Kafka package. 
Since this is a CLASSPATH conflict, the issue may very well be affected by 
other jars in that package or related packages. This issue hasn't popped up 
with the Apache artifacts, so there might not be much we can do to fix it 
directly here. This may end up requiring Hortonworks to make a fix to their 
package.
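
The debugging step above, inspecting the CLASSPATH for conflicting jars, can
be sketched like this (the classpath value is a made-up example, not taken
from this report):

```python
# Sketch: split a classpath string and list jar base names that match
# the suspect keywords (here, conflicting Jersey / javax.ws.rs versions).
import os.path

# hypothetical example classpath with two different Jersey versions
classpath = ":".join([
    "/opt/kafka/libs/jersey-server-2.24.jar",
    "/opt/hdp/lib/jersey-core-1.9.jar",
    "/opt/kafka/libs/javax.ws.rs-api-2.0.1.jar",
])

def suspect_jars(cp, keywords=("jersey", "javax.ws.rs")):
    """Return sorted jar file names on the classpath matching any keyword."""
    names = [os.path.basename(p) for p in cp.split(":")]
    return sorted(n for n in names if any(k in n.lower() for k in keywords))
```

Seeing both a 1.x and a 2.x Jersey jar in the output would point to the kind
of conflict suspected here.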

> JAR mismatch in KafkaConnect leads to NoSuchMethodError
> ---
>
> Key: KAFKA-5089
> URL: https://issues.apache.org/jira/browse/KAFKA-5089
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 0.10.1.1
> Environment: HDP 2.6, Centos 7.3.1611, 
> kafka-0.10.1.2.6.0.3-8.el6.noarch
>Reporter: Christoph Körner
>
> When I follow the steps on the Getting Started Guide of KafkaConnect 
> (https://kafka.apache.org/quickstart#quickstart_kafkaconnect), it throws an 
> NoSuchMethodError error. 
> {code:borderStyle=solid}
> [root@devbox kafka-broker]# ./bin/connect-standalone.sh 
> config/connect-standalone.properties config/connect-file-source.properties 
> config/ connect-file-sink.properties
> [2017-04-19 14:38:36,583] INFO StandaloneConfig values:
> access.control.allow.methods =
> access.control.allow.origin =
> bootstrap.servers = [localhost:6667]
> internal.key.converter = class 
> org.apache.kafka.connect.json.JsonConverter
> internal.value.converter = class 
> org.apache.kafka.connect.json.JsonConverter
> key.converter = class org.apache.kafka.connect.json.JsonConverter
> offset.flush.interval.ms = 1
> offset.flush.timeout.ms = 5000
> offset.storage.file.filename = /tmp/connect.offsets
> rest.advertised.host.name = null
> rest.advertised.port = null
> rest.host.name = null
> rest.port = 8083
> task.shutdown.graceful.timeout.ms = 5000
> value.converter = class org.apache.kafka.connect.json.JsonConverter
>  (org.apache.kafka.connect.runtime.standalone.StandaloneConfig:180)
> [2017-04-19 14:38:36,756] INFO Logging initialized @714ms 
> (org.eclipse.jetty.util.log:186)
> [2017-04-19 14:38:36,871] INFO Kafka Connect starting 
> (org.apache.kafka.connect.runtime.Connect:52)
> [2017-04-19 14:38:36,872] INFO Herder starting 
> (org.apache.kafka.connect.runtime.standalone.StandaloneHerder:70)
> [2017-04-19 14:38:36,872] INFO Worker starting 
> (org.apache.kafka.connect.runtime.Worker:114)
> [2017-04-19 14:38:36,873] INFO Starting FileOffsetBackingStore with file 
> /tmp/connect.offsets 
> (org.apache.kafka.connect.storage.FileOffsetBackingStore:60)
> [2017-04-19 14:38:36,877] INFO Worker started 
> (org.apache.kafka.connect.runtime.Worker:119)
> [2017-04-19 14:38:36,878] INFO Herder started 
> (org.apache.kafka.connect.runtime.standalone.StandaloneHerder:72)
> [2017-04-19 14:38:36,878] INFO Starting REST server 
> (org.apache.kafka.connect.runtime.rest.RestServer:98)
> [2017-04-19 14:38:37,077] INFO jetty-9.2.15.v20160210 
> (org.eclipse.jetty.server.Server:327)
> [2017-04-19 14:38:37,154] WARN FAILED 
> o.e.j.s.ServletContextHandler@3c46e67a{/,null,STARTING}: 
> java.lang.NoSuchMethodError: 
> javax.ws.rs.core.Application.getProperties()Ljava/util/Map; 
> (org.eclipse.jetty.util.component.AbstractLifeCycle:212)
> java.lang.NoSuchMethodError: 
> javax.ws.rs.core.Application.getProperties()Ljava/util/Map;
> at 
> org.glassfish.jersey.server.ApplicationHandler.(ApplicationHandler.java:331)
> at 
> org.glassfish.jersey.servlet.WebComponent.(WebComponent.java:392)
> at 
> org.glassfish.jersey.servlet.ServletContainer.init(ServletContainer.java:177)
> at 
> org.glassfish.jersey.servlet.ServletContainer.init(ServletContainer.java:369)
> at javax.servlet.GenericServlet.init(GenericServlet.java:241)
> at 
> org.eclipse.jetty.servlet.ServletHolder.initServlet(ServletHolder.java:616)
> at 
> org.eclipse.jetty.servlet.ServletHolder.initialize(ServletHolder.java:396)
> at 
> org.eclipse.jetty.servlet.ServletHandler.initialize(ServletHandler.java:871)
> at 
> org.eclipse.jetty.servlet.ServletContextHandler.startContext(ServletContextHandler.java:298)
> at 

[jira] [Commented] (KAFKA-5057) "Big Message Log"

2017-04-19 Thread Colin P. McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5057?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15975037#comment-15975037
 ] 

Colin P. McCabe commented on KAFKA-5057:


I think it makes sense to make this configurable.  If it's not set, perhaps the 
default could be set based on the Java heap size?

> "Big Message Log"
> -
>
> Key: KAFKA-5057
> URL: https://issues.apache.org/jira/browse/KAFKA-5057
> Project: Kafka
>  Issue Type: Bug
>Reporter: Gwen Shapira
>  Labels: Needs-kip
>
> Really large requests can cause significant GC pauses which can cause quite a 
> few other symptoms on a broker. Will be nice to be able to catch them.
> Lets add the option to log details (client id, topic, partition) for every 
> produce request that is larger than a configurable threshold.
> /cc [~apurva]
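
A minimal sketch of the proposed threshold logging; the function name and the
threshold value are hypothetical, not an actual Kafka config:

```python
# Illustrative "big message log": emit a log line with client id, topic,
# and partition for any produce request above a configurable size threshold.
import logging

BIG_REQUEST_THRESHOLD_BYTES = 1_000_000  # hypothetical default

def maybe_log_big_request(client_id, topic, partition, size_bytes,
                          threshold=BIG_REQUEST_THRESHOLD_BYTES):
    """Log details for an oversized produce request.
    Returns True when a log line was emitted."""
    if size_bytes <= threshold:
        return False
    logging.getLogger("big-message-log").warning(
        "large produce request: client_id=%s topic=%s partition=%d size=%d",
        client_id, topic, partition, size_bytes)
    return True
```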



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


Re: [DISCUSS] KIP-126 - Allow KafkaProducer to batch based on uncompressed size

2017-04-19 Thread Becket Qin
Thanks for the comment, Dong. I think the batch-split-ratio makes sense but
is somewhat redundant with batch-split-rate.

Also the batch-split-ratio may be a little more involved to get right:
1. An all-time batch-split ratio is easy to get but not that useful.
2. A time-windowed batch-split-ratio is more complicated to make accurate.
This is because it is a kind of "stateful" metric that relies on the number
of batches sent in a time window and the number of batches split in the same
time window, but the sending and the splitting do not necessarily fall in
the same window.

Besides, a rough estimate of the batch-split ratio can be derived from
the existing metrics. And I think batch-split-rate is already a good
indication of whether batch splitting has caused a performance problem or
not.

So I am not sure if it is worth having an explicit batch-split-ratio metric
in this case.

Thanks,

Jiangjie (Becket) Qin
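
For illustration, the time-windowed batch-split-ratio discussed above could be
sketched as below. Note this naive version still has the caveat raised in the
email: a split is attributed to the window in which it is recorded, not to the
window of its original send. Class and method names are made up:

```python
# Illustrative windowed ratio metric: keep timestamps of sends and splits,
# expire entries older than the window, and report splits/sends.
from collections import deque

class WindowedSplitRatio:
    def __init__(self, window_ms=30_000):
        self.window_ms = window_ms
        self.sends = deque()   # timestamps (ms) of batches sent
        self.splits = deque()  # timestamps (ms) of batches split

    def _expire(self, q, now_ms):
        # drop events that fell out of the sliding window
        while q and q[0] <= now_ms - self.window_ms:
            q.popleft()

    def record_send(self, now_ms):
        self.sends.append(now_ms)

    def record_split(self, now_ms):
        self.splits.append(now_ms)

    def ratio(self, now_ms):
        self._expire(self.sends, now_ms)
        self._expire(self.splits, now_ms)
        if not self.sends:
            return 0.0
        return len(self.splits) / len(self.sends)
```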

On Wed, Mar 22, 2017 at 10:54 AM, Dong Lin  wrote:

> Never mind about my second comment. I misunderstood the semantics of
> producer's batch.size.
>
> On Wed, Mar 22, 2017 at 10:20 AM, Dong Lin  wrote:
>
> > Hey Becket,
> >
> > In addition to the batch-split-rate, should we also add batch-split-ratio
> > sensor to gauge the probability that we have to split batch?
> >
> > Also, in the case that the batch size configured for the producer is
> > smaller than the max message size configured for the broker, why can't we
> > just split the batch if its size exceeds the configured batch size? The
> > benefit of this approach is that the semantics of producer is
> > straightforward because we enforce the batch size that user has
> configured.
> > The implementation would also be simpler because we don't have to reply
> on
> > KIP-4 to fetch the max message size from broker. I guess you are worrying
> > about the overhead of "unnecessary" split if a batch size is between
> > user-configured batch size and broker's max message size. But is overhead
> > really a concern? If overhead is too large because user has configured a
> > very low batch size for producer, shouldn't user adjust produce config?
> >
> > Thanks,
> > Dong
> >
> > On Wed, Mar 15, 2017 at 2:50 PM, Becket Qin 
> wrote:
> >
> >> I see, then we are thinking about the same thing :)
> >>
> >> On Wed, Mar 15, 2017 at 2:26 PM, Ismael Juma  wrote:
> >>
> >> > I meant finishing what's described in the following section and then
> >> > starting a discussion followed by a vote:
> >> >
> >> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> >> > 4+-+Command+line+and+centralized+administrative+operations#KIP-4-
> >> > Commandlineandcentralizedadministrativeoperations-DescribeCo
> >> nfigsRequest
> >> >
> >> > We have only voted on KIP-4 Metadata, KIP-4 Create Topics, KIP-4
> Delete
> >> > Topics so far.
> >> >
> >> > Ismael
> >> >
> >> > On Wed, Mar 15, 2017 at 8:58 PM, Becket Qin 
> >> wrote:
> >> >
> >> > > Hi Ismael,
> >> > >
> >> > > KIP-4 is also the one that I was thinking about. We have introduced
> a
> >> > > DescribeConfigRequest there so the producer can easily get the
> >> > > configurations. By "another KIP" do you mean a new (or maybe
> extended)
> >> > > protocol or using that protocol in clients?
> >> > >
> >> > > Thanks,
> >> > >
> >> > > Jiangjie (Becket) Qin
> >> > >
> >> > > On Wed, Mar 15, 2017 at 1:21 PM, Ismael Juma 
> >> wrote:
> >> > >
> >> > > > Hi Becket,
> >> > > >
> >> > > > How were you thinking of retrieving the configuration items you
> >> > > mentioned?
> >> > > > I am asking because I was planning to post a KIP for Describe
> >> Configs
> >> > > (one
> >> > > > of the protocols in KIP-4), which would expose such information.
> But
> >> > > maybe
> >> > > > you are thinking of extending Metadata request?
> >> > > >
> >> > > > Ismael
> >> > > >
> >> > > > On Wed, Mar 15, 2017 at 7:33 PM, Becket Qin  >
> >> > > wrote:
> >> > > >
> >> > > > > Hi Jason,
> >> > > > >
> >> > > > > Good point. I was thinking about that, too. I was not sure if
> >> that is
> >> > > the
> >> > > > > right thing to do by default.
> >> > > > >
> >> > > > > If we assume people always set the batch size to max message
> size,
> >> > > > > splitting the oversized batch makes a lot of sense. But it seems
> >> > > possible
> >> > > > > that users want to control the memory footprint so they would
> set
> >> the
> >> > > > batch
> >> > > > > size to smaller than the max message size so the producer can
> have
> >> > hold
> >> > > > > batches for more partitions. In this case, splitting the batch
> >> might
> >> > > not
> >> > > > be
> >> > > > > the desired behavior.
> >> > > > >
> >> > > > > I think the most intuitive approach to this is allow the
> producer
> >> to
> >> > > get
> >> > > > > the max message size configuration (as well as some other
> >> > > configurations
> >> > > > > such as 

[jira] [Assigned] (KAFKA-4380) Remove CleanShutdownFile as 0.8.2 has been released

2017-04-19 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4380?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira reassigned KAFKA-4380:
---

Assignee: holdenk

> Remove CleanShutdownFile as 0.8.2 has been released
> ---
>
> Key: KAFKA-4380
> URL: https://issues.apache.org/jira/browse/KAFKA-4380
> Project: Kafka
>  Issue Type: Improvement
>  Components: log
>Reporter: holdenk
>Assignee: holdenk
>Priority: Trivial
>
> There is a TODO in the code to remove CleanShutdownFile after 0.8.2 is 
> shipped, which has happened.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


Build failed in Jenkins: kafka-trunk-jdk7 #2101

2017-04-19 Thread Apache Jenkins Server
See 


Changes:

[ismael] MINOR: Fix some re-raising of exceptions in system tests

--
[...truncated 183.67 KB...]
kafka.log.LogTest > testBogusIndexSegmentsAreRemoved STARTED

kafka.log.LogTest > testBogusIndexSegmentsAreRemoved PASSED

kafka.log.LogTest > testCompressedMessages STARTED

kafka.log.LogTest > testCompressedMessages PASSED

kafka.log.LogTest > testAppendMessageWithNullPayload STARTED

kafka.log.LogTest > testAppendMessageWithNullPayload PASSED

kafka.log.LogTest > testCorruptLog STARTED

kafka.log.LogTest > testCorruptLog PASSED

kafka.log.LogTest > testLogRecoversToCorrectOffset STARTED

kafka.log.LogTest > testLogRecoversToCorrectOffset PASSED

kafka.log.LogTest > testReopenThenTruncate STARTED

kafka.log.LogTest > testReopenThenTruncate PASSED

kafka.log.LogTest > testOldProducerEpoch STARTED

kafka.log.LogTest > testOldProducerEpoch PASSED

kafka.log.LogTest > testParseTopicPartitionNameForMissingPartition STARTED

kafka.log.LogTest > testParseTopicPartitionNameForMissingPartition PASSED

kafka.log.LogTest > testParseTopicPartitionNameForEmptyName STARTED

kafka.log.LogTest > testParseTopicPartitionNameForEmptyName PASSED

kafka.log.LogTest > testOpenDeletesObsoleteFiles STARTED

kafka.log.LogTest > testOpenDeletesObsoleteFiles PASSED

kafka.log.LogTest > shouldUpdateOffsetForLeaderEpochsWhenDeletingSegments 
STARTED

kafka.log.LogTest > shouldUpdateOffsetForLeaderEpochsWhenDeletingSegments PASSED

kafka.log.LogTest > testRebuildTimeIndexForOldMessages STARTED

kafka.log.LogTest > testRebuildTimeIndexForOldMessages PASSED

kafka.log.LogTest > testLogRecoversForLeaderEpoch STARTED

kafka.log.LogTest > testLogRecoversForLeaderEpoch PASSED

kafka.log.LogTest > testSizeBasedLogRoll STARTED

kafka.log.LogTest > testSizeBasedLogRoll PASSED

kafka.log.LogTest > shouldNotDeleteSizeBasedSegmentsWhenUnderRetentionSize 
STARTED

kafka.log.LogTest > shouldNotDeleteSizeBasedSegmentsWhenUnderRetentionSize 
PASSED

kafka.log.LogTest > testTimeBasedLogRollJitter STARTED

kafka.log.LogTest > testTimeBasedLogRollJitter PASSED

kafka.log.LogTest > testParseTopicPartitionName STARTED

kafka.log.LogTest > testParseTopicPartitionName PASSED

kafka.log.LogTest > testTruncateTo STARTED

kafka.log.LogTest > testTruncateTo PASSED

kafka.log.LogTest > shouldApplyEpochToMessageOnAppendIfLeader STARTED

kafka.log.LogTest > shouldApplyEpochToMessageOnAppendIfLeader PASSED

kafka.log.LogTest > testCleanShutdownFile STARTED

kafka.log.LogTest > testCleanShutdownFile PASSED

kafka.log.LogTest > testBuildTimeIndexWhenNotAssigningOffsets STARTED

kafka.log.LogTest > testBuildTimeIndexWhenNotAssigningOffsets PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[0] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[0] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[1] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[1] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[2] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[2] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[3] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[3] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[4] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[4] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[5] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[5] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[6] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[6] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[7] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[7] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[8] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[8] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[9] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[9] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[10] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[10] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[11] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[11] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[12] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[12] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[13] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[13] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[14] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[14] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[15] STARTED


[jira] [Created] (KAFKA-5091) ReassignPartitionsCommand should protect against empty replica list assignment

2017-04-19 Thread Ryan P (JIRA)
Ryan P created KAFKA-5091:
-

 Summary: ReassignPartitionsCommand should protect against empty 
replica list assignment 
 Key: KAFKA-5091
 URL: https://issues.apache.org/jira/browse/KAFKA-5091
 Project: Kafka
  Issue Type: Improvement
  Components: tools
Reporter: Ryan P


Currently it is possible to lower a topic's replication factor to 0 through the 
use of the kafka-reassign-partitions command. 

i.e. 

cat increase-replication-factor.json
  {"version":1,
  "partitions":[{"topic":"foo","partition":0,"replicas":[]}]}

kafka-reassign-partitions --zookeeper localhost:2181 --reassignment-json-file 
increase-replication-factor.json --execute

Topic:test  PartitionCount:1ReplicationFactor:0 Configs:
Topic: foo  Partition: 0Leader: -1  Replicas:   Isr:

I for one can't think of a reason why this is something someone would do 
intentionally. That said I think it's worth validating that at least 1 replica 
remains within the replica list prior to executing the partition reassignment. 
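
The proposed validation could be sketched as a pre-check on the reassignment
JSON before the tool executes it (illustrative only, not the tool's actual
code):

```python
# Illustrative pre-check: reject a reassignment plan in which any
# partition would end up with an empty replica list.
import json

def validate_reassignment(reassignment_json):
    """Parse the plan and raise if any partition has no replicas."""
    plan = json.loads(reassignment_json)
    for p in plan.get("partitions", []):
        if not p.get("replicas"):
            raise ValueError(
                "partition %s-%s has an empty replica list"
                % (p.get("topic"), p.get("partition")))
    return plan
```

With this check, the empty-replica example in the report would be rejected
before any ZooKeeper state is touched.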





--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (KAFKA-5087) Kafka Connect's configuration topics should always be compacted

2017-04-19 Thread Gwen Shapira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15974949#comment-15974949
 ] 

Gwen Shapira commented on KAFKA-5087:
-

The AdminClient pull request, if someone wants an early start :)

https://github.com/apache/kafka/pull/2472

> Kafka Connect's configuration topics should always be compacted
> ---
>
> Key: KAFKA-5087
> URL: https://issues.apache.org/jira/browse/KAFKA-5087
> Project: Kafka
>  Issue Type: Improvement
>  Components: KafkaConnect
>Reporter: Gwen Shapira
>Priority: Critical
>
> New users frequently lose their connector configuration because they did not 
> manually set the topic deletion policy to "compact". Not a good first 
> experience with our system.
> 1. If the topics do not exist, Kafka Connect should create them with the 
> correct configuration.
> 2. If the topics do exist, Kafka Connect should check their deletion policy 
> and refuse to start if it isn't "compact".
> I'd love to do it (or have someone else do it) the moment the AdminClient is 
> merged and to have it in 0.11.0.0.
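The proposed startup check could look roughly like this (a Python sketch assuming the topic's configs have already been fetched, e.g. via the AdminClient mentioned above, into a plain dict; names are illustrative, not Connect's actual code):

```python
def check_compaction(topic, configs):
    """Refuse to start if a Connect config topic is not compacted.
    `configs` is assumed to be the topic's config entries as a dict."""
    policy = configs.get("cleanup.policy", "delete")  # broker default is "delete"
    if "compact" not in policy.split(","):
        raise RuntimeError(
            "topic %r has cleanup.policy=%r; Kafka Connect config topics "
            "must be compacted" % (topic, policy))

check_compaction("connect-configs", {"cleanup.policy": "compact"})  # passes
```

A topic left on the default "delete" policy would fail this check at startup rather than silently expiring connector configurations later.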





[jira] [Updated] (KAFKA-5090) Kafka Streams SessionStore.findSessions javadoc broken

2017-04-19 Thread Michal Borowiecki (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5090?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michal Borowiecki updated KAFKA-5090:
-
Affects Version/s: 0.10.2.1

> Kafka Streams SessionStore.findSessions javadoc broken
> --
>
> Key: KAFKA-5090
> URL: https://issues.apache.org/jira/browse/KAFKA-5090
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.10.2.0, 0.10.2.1
>Reporter: Michal Borowiecki
>Priority: Trivial
>
> {code}
> /**
>  * Fetch any sessions with the matching key and the sessions end is  
> earliestEndTime and the sessions
>  * start is  latestStartTime
>  */
> KeyValueIterator findSessions(final K key, long 
> earliestSessionEndTime, final long latestSessionStartTime);
> {code}
> The conditions in the javadoc comment are inverted (le should be ge and ge 
> should be le), since this is what the code does. They were correct in the 
> original KIP:
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-94+Session+Windows
> {code}
> /**
>  * Find any aggregated session values with the matching key and where the
>  * session’s end time is >= earliestSessionEndTime, i.e, the oldest 
> session to
>  * merge with, and the session’s start time is <= latestSessionStartTime, 
> i.e,
>  * the newest session to merge with.
>  */
>KeyValueIterator findSessionsToMerge(final K key, final 
> long earliestSessionEndTime, final long latestSessionStartTime);
> {code}
> Also, the escaped html character references are missing the trailing 
> semicolon making them render as-is.
> Happy to have this assigned to me to fix as it seems trivial.





[jira] [Issue Comment Deleted] (KAFKA-5090) Kafka Streams SessionStore.findSessions javadoc broken

2017-04-19 Thread Michal Borowiecki (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5090?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michal Borowiecki updated KAFKA-5090:
-
Comment: was deleted

(was: https://github.com/apache/kafka/pull/2874)

> Kafka Streams SessionStore.findSessions javadoc broken
> --
>
> Key: KAFKA-5090
> URL: https://issues.apache.org/jira/browse/KAFKA-5090
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.10.2.0
>Reporter: Michal Borowiecki
>Priority: Trivial
>
> {code}
> /**
>  * Fetch any sessions with the matching key and the sessions end is  
> earliestEndTime and the sessions
>  * start is  latestStartTime
>  */
> KeyValueIterator findSessions(final K key, long 
> earliestSessionEndTime, final long latestSessionStartTime);
> {code}
> The conditions in the javadoc comment are inverted (le should be ge and ge 
> should be le), since this is what the code does. They were correct in the 
> original KIP:
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-94+Session+Windows
> {code}
> /**
>  * Find any aggregated session values with the matching key and where the
>  * session’s end time is >= earliestSessionEndTime, i.e, the oldest 
> session to
>  * merge with, and the session’s start time is <= latestSessionStartTime, 
> i.e,
>  * the newest session to merge with.
>  */
>KeyValueIterator findSessionsToMerge(final K key, final 
> long earliestSessionEndTime, final long latestSessionStartTime);
> {code}
> Also, the escaped html character references are missing the trailing 
> semicolon making them render as-is.
> Happy to have this assigned to me to fix as it seems trivial.





[jira] [Updated] (KAFKA-5090) Kafka Streams SessionStore.findSessions javadoc broken

2017-04-19 Thread Michal Borowiecki (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5090?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michal Borowiecki updated KAFKA-5090:
-
Status: Patch Available  (was: Open)

https://github.com/apache/kafka/pull/2874

> Kafka Streams SessionStore.findSessions javadoc broken
> --
>
> Key: KAFKA-5090
> URL: https://issues.apache.org/jira/browse/KAFKA-5090
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.10.2.0
>Reporter: Michal Borowiecki
>Priority: Trivial
>
> {code}
> /**
>  * Fetch any sessions with the matching key and the sessions end is  
> earliestEndTime and the sessions
>  * start is  latestStartTime
>  */
> KeyValueIterator findSessions(final K key, long 
> earliestSessionEndTime, final long latestSessionStartTime);
> {code}
> The conditions in the javadoc comment are inverted (le should be ge and ge 
> should be le), since this is what the code does. They were correct in the 
> original KIP:
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-94+Session+Windows
> {code}
> /**
>  * Find any aggregated session values with the matching key and where the
>  * session’s end time is >= earliestSessionEndTime, i.e, the oldest 
> session to
>  * merge with, and the session’s start time is <= latestSessionStartTime, 
> i.e,
>  * the newest session to merge with.
>  */
>KeyValueIterator findSessionsToMerge(final K key, final 
> long earliestSessionEndTime, final long latestSessionStartTime);
> {code}
> Also, the escaped html character references are missing the trailing 
> semicolon making them render as-is.
> Happy to have this assigned to me to fix as it seems trivial.





[GitHub] kafka pull request #2852: MINOR: Fix some re-raising of exceptions in system...

2017-04-19 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/2852


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] kafka pull request #2874: KAFKA-5090 Kafka Streams SessionStore.findSessions...

2017-04-19 Thread mihbor
GitHub user mihbor opened a pull request:

https://github.com/apache/kafka/pull/2874

KAFKA-5090 Kafka Streams SessionStore.findSessions javadoc broken



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/mihbor/kafka patch-1

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2874.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2874


commit 0bcb6fba658826964589fe409f80511a31c3164b
Author: mihbor 
Date:   2017-04-19T15:18:04Z

KAFKA-5090 Kafka Streams SessionStore.findSessions javadoc broken






[jira] [Commented] (KAFKA-5090) Kafka Streams SessionStore.findSessions javadoc broken

2017-04-19 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15974897#comment-15974897
 ] 

ASF GitHub Bot commented on KAFKA-5090:
---

GitHub user mihbor opened a pull request:

https://github.com/apache/kafka/pull/2874

KAFKA-5090 Kafka Streams SessionStore.findSessions javadoc broken



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/mihbor/kafka patch-1

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2874.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2874


commit 0bcb6fba658826964589fe409f80511a31c3164b
Author: mihbor 
Date:   2017-04-19T15:18:04Z

KAFKA-5090 Kafka Streams SessionStore.findSessions javadoc broken




> Kafka Streams SessionStore.findSessions javadoc broken
> --
>
> Key: KAFKA-5090
> URL: https://issues.apache.org/jira/browse/KAFKA-5090
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.10.2.0
>Reporter: Michal Borowiecki
>Priority: Trivial
>
> {code}
> /**
>  * Fetch any sessions with the matching key and the sessions end is  
> earliestEndTime and the sessions
>  * start is  latestStartTime
>  */
> KeyValueIterator findSessions(final K key, long 
> earliestSessionEndTime, final long latestSessionStartTime);
> {code}
> The conditions in the javadoc comment are inverted (le should be ge and ge 
> should be le), since this is what the code does. They were correct in the 
> original KIP:
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-94+Session+Windows
> {code}
> /**
>  * Find any aggregated session values with the matching key and where the
>  * session’s end time is >= earliestSessionEndTime, i.e, the oldest 
> session to
>  * merge with, and the session’s start time is <= latestSessionStartTime, 
> i.e,
>  * the newest session to merge with.
>  */
>KeyValueIterator findSessionsToMerge(final K key, final 
> long earliestSessionEndTime, final long latestSessionStartTime);
> {code}
> Also, the escaped html character references are missing the trailing 
> semicolon making them render as-is.
> Happy to have this assigned to me to fix as it seems trivial.





[jira] [Commented] (KAFKA-4912) Add check for topic name length

2017-04-19 Thread Soumabrata Chakraborty (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15974892#comment-15974892
 ] 

Soumabrata Chakraborty commented on KAFKA-4912:
---

Hi [~sharad.develop], have you already started working on the JIRA? Asking 
since the status is still "Open".  In case you haven't, I was planning to pick 
this up.  Let me know.

> Add check for topic name length
> ---
>
> Key: KAFKA-4912
> URL: https://issues.apache.org/jira/browse/KAFKA-4912
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Matthias J. Sax
>Assignee: Soumabrata Chakraborty
>Priority: Minor
>  Labels: newbie
>
> We should check topic name length (if internal topics, and maybe for source 
> topics? -> in case {{topic.auto.create}} is enabled, this might prevent 
> problems), and raise an exception if they are too long. Cf. KAFKA-4893
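The broker rejects topic names longer than 249 characters (the limit leaves room for partition suffixes in log directory names), so the proposed client-side check could be sketched as follows (illustrative Python, not Streams' actual code):

```python
MAX_NAME_LENGTH = 249  # broker-side limit on topic name length

def check_topic_name(name):
    """Fail fast with a clear error instead of letting the broker reject it later."""
    if len(name) > MAX_NAME_LENGTH:
        raise ValueError("topic name is %d characters; the maximum is %d"
                         % (len(name), MAX_NAME_LENGTH))

check_topic_name("my-app-KSTREAM-AGGREGATE-STATE-STORE-repartition")  # fine
```

Internal repartition and changelog topic names are derived from the application id plus generated suffixes, which is how they can silently exceed the limit.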





[jira] [Resolved] (KAFKA-5075) Defer exception to the next pollOnce() if consumer's fetch position has already increased

2017-04-19 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5075?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira resolved KAFKA-5075.
-
Resolution: Fixed

Resolving so it will show up in 0.10.2.1 release notes

> Defer exception to the next pollOnce() if consumer's fetch position has 
> already increased
> -
>
> Key: KAFKA-5075
> URL: https://issues.apache.org/jira/browse/KAFKA-5075
> Project: Kafka
>  Issue Type: Bug
>  Components: clients, consumer
>Affects Versions: 0.10.2.0
>Reporter: Jiangjie Qin
>Assignee: Dong Lin
> Fix For: 0.11.0.0, 0.10.2.1
>
>
> In Fetcher.fetchRecords() we iterate over the partition data to collect the 
> ConsumerRecords; after we collect some consumer records from a partition, we 
> advance the position of that partition and then move on to the next partition. 
> If the next partition throws an exception (e.g. OffsetOutOfRangeException), the 
> messages that have already been read out of the buffer will not be delivered 
> to the users. Since the positions of the previous partitions have already been 
> updated, those messages will not be consumed again either.
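The fix in the title can be sketched as a fetcher that first returns the records it has already advanced past and re-raises the error on the next poll (illustrative Python, not the actual consumer code):

```python
class FetcherSketch:
    """Defer a partition error until the next poll so already-collected
    records are not lost. Names are hypothetical."""

    def __init__(self):
        self._pending_error = None

    def poll(self, partition_data):
        # Re-raise an error deferred from the previous poll.
        if self._pending_error is not None:
            err, self._pending_error = self._pending_error, None
            raise err
        records = []
        for partition, result in partition_data:
            if isinstance(result, Exception):
                # Don't drop records already collected: raise on the *next* poll.
                self._pending_error = result
                break
            records.extend(result)
        return records

f = FetcherSketch()
data = [("p0", [1, 2]), ("p1", KeyError("offset out of range"))]
print(f.poll(data))  # prints [1, 2]: the collected records are delivered first
# A subsequent f.poll([]) would raise the deferred KeyError.
```

This preserves at-least-once delivery for the records whose partition positions were already advanced.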





[jira] [Assigned] (KAFKA-4912) Add check for topic name length

2017-04-19 Thread Soumabrata Chakraborty (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4912?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Soumabrata Chakraborty reassigned KAFKA-4912:
-

Assignee: Soumabrata Chakraborty  (was: Sharad)

> Add check for topic name length
> ---
>
> Key: KAFKA-4912
> URL: https://issues.apache.org/jira/browse/KAFKA-4912
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Matthias J. Sax
>Assignee: Soumabrata Chakraborty
>Priority: Minor
>  Labels: newbie
>
> We should check topic name length (if internal topics, and maybe for source 
> topics? -> in case {{topic.auto.create}} is enabled, this might prevent 
> problems), and raise an exception if they are too long. Cf. KAFKA-4893





[jira] [Created] (KAFKA-5090) Kafka Streams SessionStore.findSessions javadoc broken

2017-04-19 Thread Michal Borowiecki (JIRA)
Michal Borowiecki created KAFKA-5090:


 Summary: Kafka Streams SessionStore.findSessions javadoc broken
 Key: KAFKA-5090
 URL: https://issues.apache.org/jira/browse/KAFKA-5090
 Project: Kafka
  Issue Type: Bug
  Components: streams
Affects Versions: 0.10.2.0
Reporter: Michal Borowiecki
Priority: Trivial


{code}
/**
 * Fetch any sessions with the matching key and the sessions end is  
earliestEndTime and the sessions
 * start is  latestStartTime
 */
KeyValueIterator findSessions(final K key, long 
earliestSessionEndTime, final long latestSessionStartTime);
{code}

The conditions in the javadoc comment are inverted (le should be ge and ge 
should be le), since this is what the code does. They were correct in the 
original KIP:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-94+Session+Windows
{code}

/**
 * Find any aggregated session values with the matching key and where the
 * session’s end time is >= earliestSessionEndTime, i.e, the oldest session 
to
 * merge with, and the session’s start time is <= latestSessionStartTime, 
i.e,
 * the newest session to merge with.
 */
   KeyValueIterator findSessionsToMerge(final K key, final 
long earliestSessionEndTime, final long latestSessionStartTime);
{code}

Also, the escaped html character references are missing the trailing semicolon 
making them render as-is.

Happy to have this assigned to me to fix as it seems trivial.
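The corrected conditions amount to a simple range filter over sessions; a Python sketch (illustrative only, not the store's implementation):

```python
def find_sessions(sessions, key, earliest_end, latest_start):
    """Return sessions with the matching key whose end >= earliest_end
    and whose start <= latest_start (the corrected javadoc conditions)."""
    return [s for s in sessions
            if s["key"] == key
            and s["end"] >= earliest_end
            and s["start"] <= latest_start]

sessions = [{"key": "k", "start": 0, "end": 5},
            {"key": "k", "start": 10, "end": 20}]
print(find_sessions(sessions, "k", 5, 10))  # both sessions match
```

Any session satisfying both bounds overlaps the merge window, which is why the first is "the oldest session to merge with" and the second "the newest".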





[jira] [Commented] (KAFKA-5072) Kafka topics should allow custom metadata configs within some config namespace

2017-04-19 Thread Soumabrata Chakraborty (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15974881#comment-15974881
 ] 

Soumabrata Chakraborty commented on KAFKA-5072:
---

PR created: https://github.com/apache/kafka/pull/2873

The PR has a WIP tag since I am not sure how to add this to the documentation, 
given that the list of Kafka topic properties is auto-generated.

Before the change:
[soumabrata@Krishna bin]$ ./kafka-configs.sh --zookeeper localhost:2181 
--entity-type topics --entity-name demo --alter --add-config 
'metadata.contact.info=soumabr...@gmail.com'
Error while executing config command Unknown Log configuration 
metadata.contact.info.
org.apache.kafka.common.errors.InvalidConfigurationException: Unknown Log 
configuration metadata.contact.info.

After the Change:
[soumabrata@Krishna bin]$ ./kafka-configs.sh --zookeeper localhost:2181 
--entity-type topics --entity-name demo --alter --add-config 
'metadata.contact.info=soumabr...@gmail.com'
Completed Updating config for entity: topic 'demo'.
[soumabrata@Krishna bin]$ ./kafka-configs.sh --zookeeper localhost:2181 
--entity-type topics --entity-name demo --describe
Configs for topic 'demo' are metadata.contact.info=soumabr...@gmail.com
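The behaviour change shown above amounts to whitelisting any key under a fixed prefix during topic config validation; an illustrative sketch (the known-config set below is a made-up subset, and the helper name is hypothetical):

```python
KNOWN_CONFIGS = {"cleanup.policy", "retention.ms", "segment.bytes"}  # subset, for illustration
METADATA_PREFIX = "metadata."

def validate_config_key(key):
    """Accept known log configs plus anything in the metadata.* namespace."""
    if key in KNOWN_CONFIGS or key.startswith(METADATA_PREFIX):
        return True
    raise ValueError("Unknown Log configuration %s." % key)

validate_config_key("metadata.contact.info")  # accepted under the namespace
# validate_config_key("foo.bar")              # would raise ValueError
```

Keys under the reserved prefix are stored and returned as-is but carry no broker semantics, which keeps governance metadata out of the real log config space.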

> Kafka topics should allow custom metadata configs within some config namespace
> --
>
> Key: KAFKA-5072
> URL: https://issues.apache.org/jira/browse/KAFKA-5072
> Project: Kafka
>  Issue Type: Improvement
>  Components: config
>Affects Versions: 0.10.2.0
>Reporter: Soumabrata Chakraborty
>Assignee: Soumabrata Chakraborty
>Priority: Minor
>
> Kafka topics should allow custom metadata configs
> Such config properties may have some fixed namespace e.g. metadata* or custom*
> This is handy for governance.  For example, in large organizations sharing a 
> kafka cluster - it might be helpful to be able to configure properties like 
> metadata.contact.info, metadata.project, metadata.description on a topic.





[jira] [Updated] (KAFKA-5075) Defer exception to the next pollOnce() if consumer's fetch position has already increased

2017-04-19 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5075?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-5075:

Fix Version/s: 0.10.2.1

> Defer exception to the next pollOnce() if consumer's fetch position has 
> already increased
> -
>
> Key: KAFKA-5075
> URL: https://issues.apache.org/jira/browse/KAFKA-5075
> Project: Kafka
>  Issue Type: Bug
>  Components: clients, consumer
>Affects Versions: 0.10.2.0
>Reporter: Jiangjie Qin
>Assignee: Dong Lin
> Fix For: 0.11.0.0, 0.10.2.1
>
>
> In Fetcher.fetchRecords() we iterate over the partition data to collect the 
> ConsumerRecords; after we collect some consumer records from a partition, we 
> advance the position of that partition and then move on to the next partition. 
> If the next partition throws an exception (e.g. OffsetOutOfRangeException), the 
> messages that have already been read out of the buffer will not be delivered 
> to the users. Since the positions of the previous partitions have already been 
> updated, those messages will not be consumed again either.





[jira] [Commented] (KAFKA-5072) Kafka topics should allow custom metadata configs within some config namespace

2017-04-19 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15974878#comment-15974878
 ] 

ASF GitHub Bot commented on KAFKA-5072:
---

GitHub user soumabrata-chakraborty opened a pull request:

https://github.com/apache/kafka/pull/2873

KAFKA-5072[WIP]: Kafka topics should allow custom metadata configs within 
some config namespace

@benstopford @ijuma @granthenke @junrao 

This change allows one to define any topic property within the namespace 
"metadata.*" - e.g. metadata.description, metadata.project, 
metadata.contact.info, etc. (More details on the JIRA.)

Raising a PR with a [WIP] tag since I am not sure how to add this to the 
documentation, given that the list of topic properties is auto-generated.

This contribution is my original work and I license the work to the Kafka 
project under the project's open source license.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/soumabrata-chakraborty/kafka KAFKA-5072

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2873.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2873


commit 6d7abaf33d1d3754b49c599f55c505f3b1929237
Author: Soumabrata Chakraborty 
Date:   2017-04-19T02:55:38Z

Allow custom metadata configs within the namespace "metadata"




> Kafka topics should allow custom metadata configs within some config namespace
> --
>
> Key: KAFKA-5072
> URL: https://issues.apache.org/jira/browse/KAFKA-5072
> Project: Kafka
>  Issue Type: Improvement
>  Components: config
>Affects Versions: 0.10.2.0
>Reporter: Soumabrata Chakraborty
>Assignee: Soumabrata Chakraborty
>Priority: Minor
>
> Kafka topics should allow custom metadata configs
> Such config properties may have some fixed namespace e.g. metadata* or custom*
> This is handy for governance.  For example, in large organizations sharing a 
> kafka cluster - it might be helpful to be able to configure properties like 
> metadata.contact.info, metadata.project, metadata.description on a topic.





[GitHub] kafka pull request #2873: KAFKA-5072[WIP]: Kafka topics should allow custom ...

2017-04-19 Thread soumabrata-chakraborty
GitHub user soumabrata-chakraborty opened a pull request:

https://github.com/apache/kafka/pull/2873

KAFKA-5072[WIP]: Kafka topics should allow custom metadata configs within 
some config namespace

@benstopford @ijuma @granthenke @junrao 

This change allows one to define any topic property within the namespace 
"metadata.*" - e.g. metadata.description, metadata.project, 
metadata.contact.info, etc. (More details on the JIRA.)

Raising a PR with a [WIP] tag since I am not sure how to add this to the 
documentation, given that the list of topic properties is auto-generated.

This contribution is my original work and I license the work to the Kafka 
project under the project's open source license.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/soumabrata-chakraborty/kafka KAFKA-5072

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2873.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2873


commit 6d7abaf33d1d3754b49c599f55c505f3b1929237
Author: Soumabrata Chakraborty 
Date:   2017-04-19T02:55:38Z

Allow custom metadata configs within the namespace "metadata"






Re: [VOTE] 0.10.2.1 RC2

2017-04-19 Thread Swen Moczarski
Hi Gwen,
thanks for the new release candidate. I did a quick test: I used RC2 on the
client side in my recent project, and an integration test against server
version 0.10.1.1 worked well.

+1 (non-binding)

Regards,
Swen

2017-04-19 16:26 GMT+02:00 Gwen Shapira :

> Oops, good catch. I think we mislabeled it. Since it is in the release
> source/binaries, I'll track it down and just re-generate the release notes.
>
> Gwen
>
> On Tue, Apr 18, 2017 at 11:38 AM, Edoardo Comar  wrote:
>
> > Thanks Gwen
> >  KAFKA-5075 is not included in the
> > http://home.apache.org/~gwenshap/kafka-0.10.2.1-rc2/RELEASE_NOTES.html
> >
> > --
> > Edoardo Comar
> > IBM MessageHub
> > eco...@uk.ibm.com
> > IBM UK Ltd, Hursley Park, SO21 2JN
> >
> > IBM United Kingdom Limited Registered in England and Wales with number
> > 741598 Registered office: PO Box 41, North Harbour, Portsmouth, Hants.
> PO6
> > 3AU
> >
> >
> >
> > From:   Gwen Shapira 
> > To: dev@kafka.apache.org, Users , Alexander
> > Ayars 
> > Date:   18/04/2017 15:59
> > Subject:[VOTE] 0.10.2.1 RC2
> >
> >
> >
> > Hello Kafka users, developers and client-developers,
> >
> > This is the third candidate for release of Apache Kafka 0.10.2.1.
> >
> > It is a bug fix release, so we have lots of bug fixes, some super
> > important.
> >
> > Release notes for the 0.10.2.1 release:
> > http://home.apache.org/~gwenshap/kafka-0.10.2.1-rc2/RELEASE_NOTES.html
> >
> > *** Please download, test and vote by Friday, 8am PST. ***
> >
> > Kafka's KEYS file containing PGP keys we use to sign the release:
> > http://kafka.apache.org/KEYS
> >
> > * Release artifacts to be voted upon (source and binary):
> > http://home.apache.org/~gwenshap/kafka-0.10.2.1-rc2/
> >
> > * Maven artifacts to be voted upon:
> > https://repository.apache.org/content/groups/staging/
> >
> > * Javadoc:
> > http://home.apache.org/~gwenshap/kafka-0.10.2.1-rc2/javadoc/
> >
> > * Tag to be voted upon (off 0.10.2 branch) is the 0.10.2.1 tag:
> > https://git-wip-us.apache.org/repos/asf?p=kafka.git;a=tag;h=
> > dea3da5b31cc310974685a8bbccc34a2ec2ac5c8
> >
> >
> >
> > * Documentation:
> > http://kafka.apache.org/0102/documentation.html
> >
> > * Protocol:
> > http://kafka.apache.org/0102/protocol.html
> >
> > /**
> >
> > Your help in validating this bugfix release is super valuable, so
> > please take the time to test and vote!
> >
> > Suggested tests:
> >  * Grab the source archive and make sure it compiles
> >  * Grab one of the binary distros and run the quickstarts against them
> >  * Extract and verify one of the site docs jars
> >  * Build a sample against jars in the staging repo
> >  * Validate GPG signatures on at least one file
> >  * Validate the javadocs look ok
> >  * The 0.10.2 documentation was updated for this bugfix release
> > (especially upgrade, streams and connect portions) - please make sure
> > it looks ok: http://kafka.apache.org/documentation.html
> >
> > Thanks,
> >
> > Gwen
> >
> >
> >
> > Unless stated otherwise above:
> > IBM United Kingdom Limited - Registered in England and Wales with number
> > 741598.
> > Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6
> 3AU
> >
>
>
>
> --
> *Gwen Shapira*
> Product Manager | Confluent
> 650.450.2760 | @gwenshap
> Follow us: Twitter  | blog
> 
>


Subscribe to mailing list

2017-04-19 Thread Arunkumar
Hi There
I would like to subscribe to this mailing list and know more about Kafka. 
Please add me to the list. Thanks in advance.

Thanks
Arunkumar Pichaimuthu, PMP


[jira] [Commented] (KAFKA-3594) Kafka new producer retries doesn't work in 0.9.0.1

2017-04-19 Thread Shannon Carey (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15974827#comment-15974827
 ] 

Shannon Carey commented on KAFKA-3594:
--

Does this problem result in failure to push messages to Kafka? Does the whole 
batch get lost, or what?

Is there a workaround?

> Kafka new producer retries doesn't work in 0.9.0.1
> --
>
> Key: KAFKA-3594
> URL: https://issues.apache.org/jira/browse/KAFKA-3594
> Project: Kafka
>  Issue Type: Bug
>  Components: producer 
>Affects Versions: 0.9.0.1
> Environment: Debian 7.8 Wheezy / Kafka 0.9.0.1 (Confluent 2.0.1)
>Reporter: Nicolas PHUNG
>Assignee: Manikumar
>Priority: Critical
>  Labels: kafka, new, producer, replication, retry
> Fix For: 0.9.0.2, 0.10.0.0
>
>
> Hello,
> I'm encountering an issue with the new Producer on 0.9.0.1 client with a 
> 0.9.0.1 Kafka broker when Kafka broker are offline for example. It seems the 
> retries doesn't work anymore and I got the following error logs :
> {noformat}
> play.api.Application$$anon$1: Execution exception[[IllegalStateException: 
> Memory records is not writable]]
> at play.api.Application$class.handleError(Application.scala:296) 
> ~[com.typesafe.play.play_2.11-2.3.10.jar:2.3.10]
> at play.api.DefaultApplication.handleError(Application.scala:402) 
> [com.typesafe.play.play_2.11-2.3.10.jar:2.3.10]
> at 
> play.core.server.netty.PlayDefaultUpstreamHandler$$anonfun$3$$anonfun$applyOrElse$4.apply(PlayDefaultUpstreamHandler.scala:320)
>  [com.typesafe.play.play_2.11-2.3.10.jar:2.3.10]
> at 
> play.core.server.netty.PlayDefaultUpstreamHandler$$anonfun$3$$anonfun$applyOrElse$4.apply(PlayDefaultUpstreamHandler.scala:320)
>  [com.typesafe.play.play_2.11-2.3.10.jar:2.3.10]
> at scala.Option.map(Option.scala:146) 
> [org.scala-lang.scala-library-2.11.8.jar:na]
> at 
> play.core.server.netty.PlayDefaultUpstreamHandler$$anonfun$3.applyOrElse(PlayDefaultUpstreamHandler.scala:320)
>  [com.typesafe.play.play_2.11-2.3.10.jar:2.3.10]
> at 
> play.core.server.netty.PlayDefaultUpstreamHandler$$anonfun$3.applyOrElse(PlayDefaultUpstreamHandler.scala:316)
>  [com.typesafe.play.play_2.11-2.3.10.jar:2.3.10]
> at 
> scala.concurrent.Future$$anonfun$recoverWith$1.apply(Future.scala:346) 
> [org.scala-lang.scala-library-2.11.8.jar:na]
> at 
> scala.concurrent.Future$$anonfun$recoverWith$1.apply(Future.scala:345) 
> [org.scala-lang.scala-library-2.11.8.jar:na]
> at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:32) 
> [org.scala-lang.scala-library-2.11.8.jar:na]
> at 
> play.api.libs.iteratee.Execution$trampoline$.execute(Execution.scala:46) 
> [com.typesafe.play.play-iteratees_2.11-2.3.10.jar:2.3.10]
> at 
> scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:40) 
> [org.scala-lang.scala-library-2.11.8.jar:na]
> at 
> scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:248) 
> [org.scala-lang.scala-library-2.11.8.jar:na]
> at scala.concurrent.Promise$class.complete(Promise.scala:55) 
> [org.scala-lang.scala-library-2.11.8.jar:na]
> at 
> scala.concurrent.impl.Promise$DefaultPromise.complete(Promise.scala:153) 
> [org.scala-lang.scala-library-2.11.8.jar:na]
> at scala.concurrent.Future$$anonfun$map$1.apply(Future.scala:237) 
> [org.scala-lang.scala-library-2.11.8.jar:na]
> at scala.concurrent.Future$$anonfun$map$1.apply(Future.scala:237) 
> [org.scala-lang.scala-library-2.11.8.jar:na]
> at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:32) 
> [org.scala-lang.scala-library-2.11.8.jar:na]
> at 
> akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.processBatch$1(BatchingExecutor.scala:67)
>  [com.typesafe.akka.akka-actor_2.11-2.3.4.jar:na]
> at 
> akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.apply$mcV$sp(BatchingExecutor.scala:82)
>  [com.typesafe.akka.akka-actor_2.11-2.3.4.jar:na]
> at 
> akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.apply(BatchingExecutor.scala:59)
>  [com.typesafe.akka.akka-actor_2.11-2.3.4.jar:na]
> at 
> akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.apply(BatchingExecutor.scala:59)
>  [com.typesafe.akka.akka-actor_2.11-2.3.4.jar:na]
> at 
> scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:72) 
> [org.scala-lang.scala-library-2.11.8.jar:na]
> at 
> akka.dispatch.BatchingExecutor$Batch.run(BatchingExecutor.scala:58) 
> [com.typesafe.akka.akka-actor_2.11-2.3.4.jar:na]
> at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:41) 
> [com.typesafe.akka.akka-actor_2.11-2.3.4.jar:na]
> at 
> 

Re: [VOTE] 0.10.2.1 RC2

2017-04-19 Thread Gwen Shapira
Oops, good catch. I think we mislabeled it. Since it is in the release
source/binaries, I'll track it down and just re-generate the release notes.

Gwen

On Tue, Apr 18, 2017 at 11:38 AM, Edoardo Comar  wrote:

> Thanks Gwen
>  KAFKA-5075 is not included in the
> http://home.apache.org/~gwenshap/kafka-0.10.2.1-rc2/RELEASE_NOTES.html
>
> --
> Edoardo Comar
> IBM MessageHub
> eco...@uk.ibm.com
> IBM UK Ltd, Hursley Park, SO21 2JN
>
> IBM United Kingdom Limited Registered in England and Wales with number
> 741598 Registered office: PO Box 41, North Harbour, Portsmouth, Hants. PO6
> 3AU
>
>
>
> From:   Gwen Shapira 
> To: dev@kafka.apache.org, Users , Alexander
> Ayars 
> Date:   18/04/2017 15:59
> Subject:[VOTE] 0.10.2.1 RC2
>
>
>
> Hello Kafka users, developers and client-developers,
>
> This is the third candidate for release of Apache Kafka 0.10.2.1.
>
> It is a bug fix release, so we have lots of bug fixes, some super
> important.
>
> Release notes for the 0.10.2.1 release:
> http://home.apache.org/~gwenshap/kafka-0.10.2.1-rc2/RELEASE_NOTES.html
>
> *** Please download, test and vote by Friday, 8am PST. ***
>
> Kafka's KEYS file containing PGP keys we use to sign the release:
> http://kafka.apache.org/KEYS
>
> * Release artifacts to be voted upon (source and binary):
> http://home.apache.org/~gwenshap/kafka-0.10.2.1-rc2/
>
> * Maven artifacts to be voted upon:
> https://repository.apache.org/content/groups/staging/
>
> * Javadoc:
> http://home.apache.org/~gwenshap/kafka-0.10.2.1-rc2/javadoc/
>
> * Tag to be voted upon (off 0.10.2 branch) is the 0.10.2.1 tag:
> https://git-wip-us.apache.org/repos/asf?p=kafka.git;a=tag;h=
> dea3da5b31cc310974685a8bbccc34a2ec2ac5c8
>
>
>
> * Documentation:
> http://kafka.apache.org/0102/documentation.html
>
> * Protocol:
> http://kafka.apache.org/0102/protocol.html
>
> /**
>
> Your help in validating this bugfix release is super valuable, so
> please take the time to test and vote!
>
> Suggested tests:
>  * Grab the source archive and make sure it compiles
>  * Grab one of the binary distros and run the quickstarts against them
>  * Extract and verify one of the site docs jars
>  * Build a sample against jars in the staging repo
>  * Validate GPG signatures on at least one file
>  * Validate the javadocs look ok
>  * The 0.10.2 documentation was updated for this bugfix release
> (especially upgrade, streams and connect portions) - please make sure
> it looks ok: http://kafka.apache.org/documentation.html
>
> Thanks,
>
> Gwen
>
>
>
> Unless stated otherwise above:
> IBM United Kingdom Limited - Registered in England and Wales with number
> 741598.
> Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU
>



-- 
*Gwen Shapira*
Product Manager | Confluent
650.450.2760 | @gwenshap
Follow us: Twitter  | blog



Jenkins build is back to normal : kafka-trunk-jdk8 #1431

2017-04-19 Thread Apache Jenkins Server
See 




[jira] [Commented] (KAFKA-5049) Chroot check should be done for each ZkUtils instance

2017-04-19 Thread anugrah (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15974579#comment-15974579
 ] 

anugrah commented on KAFKA-5049:


[~ijuma] Can we move this to done? It's been merged, right? Thank you

> Chroot check should be done for each ZkUtils instance
> -
>
> Key: KAFKA-5049
> URL: https://issues.apache.org/jira/browse/KAFKA-5049
> Project: Kafka
>  Issue Type: Bug
>Reporter: Ismael Juma
>  Labels: newbie
> Fix For: 0.11.0.0
>
>
> In KAFKA-1994, the check for ZK chroot was moved to ZkPath. However, ZkPath 
> is a JVM singleton and we may use multiple ZkClient instances with multiple 
> ZooKeeper ensembles in the same JVM (for cluster info, authorizer and 
> pluggable code provided by users).
> The right way to do this is to make ZkPath an instance variable in ZkUtils so 
> that we do the check once per ZkUtils instance.
> cc [~gwenshap] [~junrao], who reviewed KAFKA-1994, in case I am missing 
> something.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
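The fix this ticket describes (moving the chroot check from a JVM-wide singleton onto each ZkUtils instance) can be illustrated with a minimal, self-contained Java sketch. The class and method names below are hypothetical stand-ins; the real code is Scala and actually queries ZooKeeper.

```java
// Hypothetical sketch of the fix: the "chroot checked" flag lives on each
// ZkUtils-like instance instead of in a JVM-wide singleton, so two instances
// pointing at different ZooKeeper ensembles each perform their own check.
public class ZkUtilsSketch {
    // Before the fix this was effectively a static (JVM-singleton) flag
    // shared by every instance in the process.
    private boolean chrootChecked = false;
    private final String connectString;

    public ZkUtilsSketch(String connectString) {
        this.connectString = connectString;
    }

    // Returns true the first time the check runs for THIS instance.
    public boolean ensureChrootChecked() {
        if (chrootChecked) {
            return false; // already verified for this instance
        }
        // Real code would verify here that the chroot path exists on the
        // ensemble identified by connectString.
        chrootChecked = true;
        return true;
    }

    public static void main(String[] args) {
        ZkUtilsSketch a = new ZkUtilsSketch("zk1:2181/kafka");
        ZkUtilsSketch b = new ZkUtilsSketch("zk2:2181/other");
        System.out.println(a.ensureChrootChecked()); // true: first check for a
        System.out.println(b.ensureChrootChecked()); // true: b checks independently
        System.out.println(a.ensureChrootChecked()); // false: cached per instance
    }
}
```

With the flag on the instance, two ZkUtils objects pointing at different ensembles no longer share the check result, which is the point of the change.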


Request to be contributor

2017-04-19 Thread Raul Estrada
Hi dev team,

Please add me as a contributor in the Kafka JIRA.
My JIRA ID is uurl

Thanks,
Raul


[jira] [Commented] (KAFKA-5049) Chroot check should be done for each ZkUtils instance

2017-04-19 Thread anugrah (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15974503#comment-15974503
 ] 

anugrah commented on KAFKA-5049:


Ty

> Chroot check should be done for each ZkUtils instance
> -
>
> Key: KAFKA-5049
> URL: https://issues.apache.org/jira/browse/KAFKA-5049
> Project: Kafka
>  Issue Type: Bug
>Reporter: Ismael Juma
>  Labels: newbie
> Fix For: 0.11.0.0
>
>
> In KAFKA-1994, the check for ZK chroot was moved to ZkPath. However, ZkPath 
> is a JVM singleton and we may use multiple ZkClient instances with multiple 
> ZooKeeper ensembles in the same JVM (for cluster info, authorizer and 
> pluggable code provided by users).
> The right way to do this is to make ZkPath an instance variable in ZkUtils so 
> that we do the check once per ZkUtils instance.
> cc [~gwenshap] [~junrao], who reviewed KAFKA-1994, in case I am missing 
> something.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (KAFKA-5089) JAR mismatch in KafkaConnect leads to NoSuchMethodError

2017-04-19 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/KAFKA-5089?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christoph Körner updated KAFKA-5089:

Environment: HDP 2.6, Centos 7.3.1611, kafka-0.10.1.2.6.0.3-8.el6.noarch  
(was: HDP 2.6, Centos 7.3.1611)

> JAR mismatch in KafkaConnect leads to NoSuchMethodError
> ---
>
> Key: KAFKA-5089
> URL: https://issues.apache.org/jira/browse/KAFKA-5089
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 0.10.1.1
> Environment: HDP 2.6, Centos 7.3.1611, 
> kafka-0.10.1.2.6.0.3-8.el6.noarch
>Reporter: Christoph Körner
>
> When I follow the steps on the Getting Started Guide of KafkaConnect 
> (https://kafka.apache.org/quickstart#quickstart_kafkaconnect), it throws a 
> NoSuchMethodError. 
> {code:borderStyle=solid}
> [root@devbox kafka-broker]# ./bin/connect-standalone.sh 
> config/connect-standalone.properties config/connect-file-source.properties 
> config/ connect-file-sink.properties
> [2017-04-19 14:38:36,583] INFO StandaloneConfig values:
> access.control.allow.methods =
> access.control.allow.origin =
> bootstrap.servers = [localhost:6667]
> internal.key.converter = class 
> org.apache.kafka.connect.json.JsonConverter
> internal.value.converter = class 
> org.apache.kafka.connect.json.JsonConverter
> key.converter = class org.apache.kafka.connect.json.JsonConverter
> offset.flush.interval.ms = 1
> offset.flush.timeout.ms = 5000
> offset.storage.file.filename = /tmp/connect.offsets
> rest.advertised.host.name = null
> rest.advertised.port = null
> rest.host.name = null
> rest.port = 8083
> task.shutdown.graceful.timeout.ms = 5000
> value.converter = class org.apache.kafka.connect.json.JsonConverter
>  (org.apache.kafka.connect.runtime.standalone.StandaloneConfig:180)
> [2017-04-19 14:38:36,756] INFO Logging initialized @714ms 
> (org.eclipse.jetty.util.log:186)
> [2017-04-19 14:38:36,871] INFO Kafka Connect starting 
> (org.apache.kafka.connect.runtime.Connect:52)
> [2017-04-19 14:38:36,872] INFO Herder starting 
> (org.apache.kafka.connect.runtime.standalone.StandaloneHerder:70)
> [2017-04-19 14:38:36,872] INFO Worker starting 
> (org.apache.kafka.connect.runtime.Worker:114)
> [2017-04-19 14:38:36,873] INFO Starting FileOffsetBackingStore with file 
> /tmp/connect.offsets 
> (org.apache.kafka.connect.storage.FileOffsetBackingStore:60)
> [2017-04-19 14:38:36,877] INFO Worker started 
> (org.apache.kafka.connect.runtime.Worker:119)
> [2017-04-19 14:38:36,878] INFO Herder started 
> (org.apache.kafka.connect.runtime.standalone.StandaloneHerder:72)
> [2017-04-19 14:38:36,878] INFO Starting REST server 
> (org.apache.kafka.connect.runtime.rest.RestServer:98)
> [2017-04-19 14:38:37,077] INFO jetty-9.2.15.v20160210 
> (org.eclipse.jetty.server.Server:327)
> [2017-04-19 14:38:37,154] WARN FAILED 
> o.e.j.s.ServletContextHandler@3c46e67a{/,null,STARTING}: 
> java.lang.NoSuchMethodError: 
> javax.ws.rs.core.Application.getProperties()Ljava/util/Map; 
> (org.eclipse.jetty.util.component.AbstractLifeCycle:212)
> java.lang.NoSuchMethodError: 
> javax.ws.rs.core.Application.getProperties()Ljava/util/Map;
> at 
> org.glassfish.jersey.server.ApplicationHandler.(ApplicationHandler.java:331)
> at 
> org.glassfish.jersey.servlet.WebComponent.(WebComponent.java:392)
> at 
> org.glassfish.jersey.servlet.ServletContainer.init(ServletContainer.java:177)
> at 
> org.glassfish.jersey.servlet.ServletContainer.init(ServletContainer.java:369)
> at javax.servlet.GenericServlet.init(GenericServlet.java:241)
> at 
> org.eclipse.jetty.servlet.ServletHolder.initServlet(ServletHolder.java:616)
> at 
> org.eclipse.jetty.servlet.ServletHolder.initialize(ServletHolder.java:396)
> at 
> org.eclipse.jetty.servlet.ServletHandler.initialize(ServletHandler.java:871)
> at 
> org.eclipse.jetty.servlet.ServletContextHandler.startContext(ServletContextHandler.java:298)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doStart(ContextHandler.java:741)
> at 
> org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
> at 
> org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:132)
> at 
> org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:114)
> at 
> org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:61)
> at 
> org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
> at 
> org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:132)
> at 
> 

[jira] [Created] (KAFKA-5089) JAR mismatch in KafkaConnect leads to NoSuchMethodError

2017-04-19 Thread JIRA
Christoph Körner created KAFKA-5089:
---

 Summary: JAR mismatch in KafkaConnect leads to NoSuchMethodError
 Key: KAFKA-5089
 URL: https://issues.apache.org/jira/browse/KAFKA-5089
 Project: Kafka
  Issue Type: Bug
  Components: KafkaConnect
Affects Versions: 0.10.1.1
 Environment: HDP 2.6, Centos 7.3.1611
Reporter: Christoph Körner


When I follow the steps on the Getting Started Guide of KafkaConnect 
(https://kafka.apache.org/quickstart#quickstart_kafkaconnect), it throws a 
NoSuchMethodError. 

{code:borderStyle=solid}
[root@devbox kafka-broker]# ./bin/connect-standalone.sh 
config/connect-standalone.properties config/connect-file-source.properties 
config/ connect-file-sink.properties
[2017-04-19 14:38:36,583] INFO StandaloneConfig values:
access.control.allow.methods =
access.control.allow.origin =
bootstrap.servers = [localhost:6667]
internal.key.converter = class 
org.apache.kafka.connect.json.JsonConverter
internal.value.converter = class 
org.apache.kafka.connect.json.JsonConverter
key.converter = class org.apache.kafka.connect.json.JsonConverter
offset.flush.interval.ms = 1
offset.flush.timeout.ms = 5000
offset.storage.file.filename = /tmp/connect.offsets
rest.advertised.host.name = null
rest.advertised.port = null
rest.host.name = null
rest.port = 8083
task.shutdown.graceful.timeout.ms = 5000
value.converter = class org.apache.kafka.connect.json.JsonConverter
 (org.apache.kafka.connect.runtime.standalone.StandaloneConfig:180)
[2017-04-19 14:38:36,756] INFO Logging initialized @714ms 
(org.eclipse.jetty.util.log:186)
[2017-04-19 14:38:36,871] INFO Kafka Connect starting 
(org.apache.kafka.connect.runtime.Connect:52)
[2017-04-19 14:38:36,872] INFO Herder starting 
(org.apache.kafka.connect.runtime.standalone.StandaloneHerder:70)
[2017-04-19 14:38:36,872] INFO Worker starting 
(org.apache.kafka.connect.runtime.Worker:114)
[2017-04-19 14:38:36,873] INFO Starting FileOffsetBackingStore with file 
/tmp/connect.offsets 
(org.apache.kafka.connect.storage.FileOffsetBackingStore:60)
[2017-04-19 14:38:36,877] INFO Worker started 
(org.apache.kafka.connect.runtime.Worker:119)
[2017-04-19 14:38:36,878] INFO Herder started 
(org.apache.kafka.connect.runtime.standalone.StandaloneHerder:72)
[2017-04-19 14:38:36,878] INFO Starting REST server 
(org.apache.kafka.connect.runtime.rest.RestServer:98)
[2017-04-19 14:38:37,077] INFO jetty-9.2.15.v20160210 
(org.eclipse.jetty.server.Server:327)
[2017-04-19 14:38:37,154] WARN FAILED 
o.e.j.s.ServletContextHandler@3c46e67a{/,null,STARTING}: 
java.lang.NoSuchMethodError: 
javax.ws.rs.core.Application.getProperties()Ljava/util/Map; 
(org.eclipse.jetty.util.component.AbstractLifeCycle:212)
java.lang.NoSuchMethodError: 
javax.ws.rs.core.Application.getProperties()Ljava/util/Map;
at 
org.glassfish.jersey.server.ApplicationHandler.(ApplicationHandler.java:331)
at 
org.glassfish.jersey.servlet.WebComponent.(WebComponent.java:392)
at 
org.glassfish.jersey.servlet.ServletContainer.init(ServletContainer.java:177)
at 
org.glassfish.jersey.servlet.ServletContainer.init(ServletContainer.java:369)
at javax.servlet.GenericServlet.init(GenericServlet.java:241)
at 
org.eclipse.jetty.servlet.ServletHolder.initServlet(ServletHolder.java:616)
at 
org.eclipse.jetty.servlet.ServletHolder.initialize(ServletHolder.java:396)
at 
org.eclipse.jetty.servlet.ServletHandler.initialize(ServletHandler.java:871)
at 
org.eclipse.jetty.servlet.ServletContextHandler.startContext(ServletContextHandler.java:298)
at 
org.eclipse.jetty.server.handler.ContextHandler.doStart(ContextHandler.java:741)
at 
org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
at 
org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:132)
at 
org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:114)
at 
org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:61)
at 
org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
at 
org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:132)
at 
org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:114)
at 
org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:61)
at 
org.eclipse.jetty.server.handler.StatisticsHandler.doStart(StatisticsHandler.java:232)
at 
org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
at 
org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:132)
at 
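One common way to track down this kind of JAR mismatch (not taken from the report above, just a standard diagnostic) is to ask the JVM which archive a conflicting class was actually loaded from:

```java
// Prints the code source (typically the JAR path) a class was loaded from.
public class WhichJar {
    public static String locationOf(Class<?> c) {
        java.security.CodeSource src = c.getProtectionDomain().getCodeSource();
        // Bootstrap classes (e.g. java.lang.String) have no code source.
        return src == null ? "(bootstrap classpath)" : src.getLocation().toString();
    }

    public static void main(String[] args) {
        // For the conflict above one would pass javax.ws.rs.core.Application;
        // String is used here only so the sketch runs without extra JARs.
        System.out.println(locationOf(String.class));
    }
}
```

Run on the Connect classpath with `javax.ws.rs.core.Application`, this would likely show an old JAX-RS 1.x API jar shadowing the 2.0 API that Jersey expects, which is consistent with the missing `getProperties()` method.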

Build failed in Jenkins: kafka-trunk-jdk7 #2100

2017-04-19 Thread Apache Jenkins Server
See 


Changes:

[ismael] KAFKA-5049; Chroot check should be done for each ZkUtils instance

--
[...truncated 184.98 KB...]
kafka.admin.ConfigCommandTest > shouldAddClientConfig STARTED

kafka.admin.ConfigCommandTest > shouldAddClientConfig PASSED

kafka.admin.ConfigCommandTest > shouldDeleteBrokerConfig STARTED

kafka.admin.ConfigCommandTest > shouldDeleteBrokerConfig PASSED

kafka.admin.ConfigCommandTest > testQuotaConfigEntity STARTED

kafka.admin.ConfigCommandTest > testQuotaConfigEntity PASSED

kafka.admin.ConfigCommandTest > 
shouldNotUpdateBrokerConfigIfMalformedBracketConfig STARTED

kafka.admin.ConfigCommandTest > 
shouldNotUpdateBrokerConfigIfMalformedBracketConfig PASSED

kafka.admin.ConfigCommandTest > shouldFailIfUnrecognisedEntityType STARTED

kafka.admin.ConfigCommandTest > shouldFailIfUnrecognisedEntityType PASSED

kafka.admin.ConfigCommandTest > 
shouldNotUpdateBrokerConfigIfNonExistingConfigIsDeleted STARTED

kafka.admin.ConfigCommandTest > 
shouldNotUpdateBrokerConfigIfNonExistingConfigIsDeleted PASSED

kafka.admin.ConfigCommandTest > 
shouldNotUpdateBrokerConfigIfMalformedEntityName STARTED

kafka.admin.ConfigCommandTest > 
shouldNotUpdateBrokerConfigIfMalformedEntityName PASSED

kafka.admin.ConfigCommandTest > shouldSupportCommaSeparatedValues STARTED

kafka.admin.ConfigCommandTest > shouldSupportCommaSeparatedValues PASSED

kafka.admin.ConfigCommandTest > shouldNotUpdateBrokerConfigIfMalformedConfig 
STARTED

kafka.admin.ConfigCommandTest > shouldNotUpdateBrokerConfigIfMalformedConfig 
PASSED

kafka.admin.ConfigCommandTest > shouldParseArgumentsForBrokersEntityType STARTED

kafka.admin.ConfigCommandTest > shouldParseArgumentsForBrokersEntityType PASSED

kafka.admin.ConfigCommandTest > shouldAddBrokerConfig STARTED

kafka.admin.ConfigCommandTest > shouldAddBrokerConfig PASSED

kafka.admin.ConfigCommandTest > testQuotaDescribeEntities STARTED

kafka.admin.ConfigCommandTest > testQuotaDescribeEntities PASSED

kafka.admin.ConfigCommandTest > shouldParseArgumentsForClientsEntityType STARTED

kafka.admin.ConfigCommandTest > shouldParseArgumentsForClientsEntityType PASSED

kafka.admin.AddPartitionsTest > testReplicaPlacementAllServers STARTED

kafka.admin.AddPartitionsTest > testReplicaPlacementAllServers PASSED

kafka.admin.AddPartitionsTest > testWrongReplicaCount STARTED

kafka.admin.AddPartitionsTest > testWrongReplicaCount PASSED

kafka.admin.AddPartitionsTest > testReplicaPlacementPartialServers STARTED

kafka.admin.AddPartitionsTest > testReplicaPlacementPartialServers PASSED

kafka.admin.AddPartitionsTest > testTopicDoesNotExist STARTED

kafka.admin.AddPartitionsTest > testTopicDoesNotExist PASSED

kafka.admin.AddPartitionsTest > testIncrementPartitions STARTED

kafka.admin.AddPartitionsTest > testIncrementPartitions PASSED

kafka.admin.AddPartitionsTest > testManualAssignmentOfReplicas STARTED

kafka.admin.AddPartitionsTest > testManualAssignmentOfReplicas PASSED

kafka.admin.ReassignPartitionsIntegrationTest > testRackAwareReassign STARTED

kafka.admin.ReassignPartitionsIntegrationTest > testRackAwareReassign PASSED

kafka.admin.AdminTest > testBasicPreferredReplicaElection STARTED

kafka.admin.AdminTest > testBasicPreferredReplicaElection PASSED

kafka.admin.AdminTest > testPreferredReplicaJsonData STARTED

kafka.admin.AdminTest > testPreferredReplicaJsonData PASSED

kafka.admin.AdminTest > testReassigningNonExistingPartition STARTED

kafka.admin.AdminTest > testReassigningNonExistingPartition PASSED

kafka.admin.AdminTest > testGetBrokerMetadatas STARTED

kafka.admin.AdminTest > testGetBrokerMetadatas PASSED

kafka.admin.AdminTest > testBootstrapClientIdConfig STARTED

kafka.admin.AdminTest > testBootstrapClientIdConfig PASSED

kafka.admin.AdminTest > testPartitionReassignmentNonOverlappingReplicas STARTED

kafka.admin.AdminTest > testPartitionReassignmentNonOverlappingReplicas PASSED

kafka.admin.AdminTest > testReplicaAssignment STARTED

kafka.admin.AdminTest > testReplicaAssignment PASSED

kafka.admin.AdminTest > testPartitionReassignmentWithLeaderNotInNewReplicas 
STARTED

kafka.admin.AdminTest > testPartitionReassignmentWithLeaderNotInNewReplicas 
PASSED

kafka.admin.AdminTest > testTopicConfigChange STARTED

kafka.admin.AdminTest > testTopicConfigChange PASSED

kafka.admin.AdminTest > testResumePartitionReassignmentThatWasCompleted STARTED

kafka.admin.AdminTest > testResumePartitionReassignmentThatWasCompleted PASSED

kafka.admin.AdminTest > testManualReplicaAssignment STARTED

kafka.admin.AdminTest > testManualReplicaAssignment PASSED

kafka.admin.AdminTest > testConcurrentTopicCreation STARTED

kafka.admin.AdminTest > testConcurrentTopicCreation PASSED

kafka.admin.AdminTest > testPartitionReassignmentWithLeaderInNewReplicas STARTED

kafka.admin.AdminTest > testPartitionReassignmentWithLeaderInNewReplicas PASSED

kafka.admin.AdminTest > 

[jira] [Commented] (KAFKA-5049) Chroot check should be done for each ZkUtils instance

2017-04-19 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15974435#comment-15974435
 ] 

ASF GitHub Bot commented on KAFKA-5049:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/2857


> Chroot check should be done for each ZkUtils instance
> -
>
> Key: KAFKA-5049
> URL: https://issues.apache.org/jira/browse/KAFKA-5049
> Project: Kafka
>  Issue Type: Bug
>Reporter: Ismael Juma
>  Labels: newbie
> Fix For: 0.11.0.0
>
>
> In KAFKA-1994, the check for ZK chroot was moved to ZkPath. However, ZkPath 
> is a JVM singleton and we may use multiple ZkClient instances with multiple 
> ZooKeeper ensembles in the same JVM (for cluster info, authorizer and 
> pluggable code provided by users).
> The right way to do this is to make ZkPath an instance variable in ZkUtils so 
> that we do the check once per ZkUtils instance.
> cc [~gwenshap] [~junrao], who reviewed KAFKA-1994, in case I am missing 
> something.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] kafka pull request #2857: KAFKA-5049 Chroot check should be done for each Zk...

2017-04-19 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/2857


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-3083) a soft failure in controller may leave a topic partition in an inconsistent state

2017-04-19 Thread Dibyendu Bhattacharya (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3083?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15974338#comment-15974338
 ] 

Dibyendu Bhattacharya commented on KAFKA-3083:
--

Hi [~junrao], we also had this issue in Kafka 0.9.x. Any idea when this can be 
fixed? 

> a soft failure in controller may leave a topic partition in an inconsistent 
> state
> -
>
> Key: KAFKA-3083
> URL: https://issues.apache.org/jira/browse/KAFKA-3083
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.9.0.0
>Reporter: Jun Rao
>Assignee: Mayuresh Gharat
>  Labels: reliability
>
> The following sequence can happen.
> 1. Broker A is the controller and is in the middle of processing a broker 
> change event. As part of this process, let's say it's about to shrink the isr 
> of a partition.
> 2. Then broker A's session expires and broker B takes over as the new 
> controller. Broker B sends the initial leaderAndIsr request to all brokers.
> 3. Broker A continues by shrinking the isr of the partition in ZK and sends 
> the new leaderAndIsr request to the broker (say C) that leads the partition. 
> Broker C will reject this leaderAndIsr since the request comes from a 
> controller with an older epoch. Now we could be in a situation that Broker C 
> thinks the isr has all replicas, but the isr stored in ZK is different.
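The fencing that makes broker C reject the stale request can be sketched in a few lines of Java (hypothetical names; the real check lives in the broker's LeaderAndIsr handling): each broker tracks the highest controller epoch it has seen and ignores requests carrying an older one. The inconsistency arises because step 3's ZK write still succeeds; only the subsequent request is fenced.

```java
// Hedged sketch of controller-epoch fencing as described in this ticket.
public class EpochFencingSketch {
    private int highestSeenControllerEpoch = 0;

    // Returns true if the request is accepted, false if fenced as stale.
    public boolean onLeaderAndIsr(int requestControllerEpoch) {
        if (requestControllerEpoch < highestSeenControllerEpoch) {
            return false; // request comes from a controller with an older epoch
        }
        highestSeenControllerEpoch = requestControllerEpoch;
        return true;
    }

    public static void main(String[] args) {
        EpochFencingSketch brokerC = new EpochFencingSketch();
        System.out.println(brokerC.onLeaderAndIsr(2)); // controller B, epoch 2: accepted
        System.out.println(brokerC.onLeaderAndIsr(1)); // controller A, epoch 1: rejected
    }
}
```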



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] kafka pull request #2872: MINOR: Update dependencies for 0.11

2017-04-19 Thread ijuma
GitHub user ijuma opened a pull request:

https://github.com/apache/kafka/pull/2872

MINOR: Update dependencies for 0.11

Worth special mention:

1. Update Scala to 2.11.11 and 2.12.2
2. Update Gradle to 3.5
3. Update ZooKeeper to 3.4.10
4. Update reflections to 0.9.11, which:
* Switches to jsr305 annotations with a provided scope
* Updates Guava from 18 to 20
* Updates Javassist from 3.18 to 3.21

There’s a separate PR for updating RocksDb, so
I didn’t include that here.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ijuma/kafka update-deps-for-0.11

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2872.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2872


commit fb337b6c923785af6d0208f1895d3e42fa2ca699
Author: Ismael Juma 
Date:   2017-04-19T07:35:13Z

MINOR: Update dependencies for 0.11

Worth special mention:

1. Update Scala to 2.11.11 and 2.12.2
2. Update Gradle to 3.5
3. Update ZooKeeper to 3.4.10
4. Update reflections to 0.9.11, which:
* Switches to jsr305 annotations with a provided scope
* Updates Guava from 18 to 20
* Updates Javassist from 3.18 to 3.21

There’s a separate PR for updating RocksDb, so
I didn’t include that here.




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Re: Newbie. Want to get started

2017-04-19 Thread Yaswanth Kumar
Thanks Matt! I will find a JIRA and start working on it.

On Mon, Apr 17, 2017 at 10:10 PM, Matthias J. Sax 
wrote:

> Hi Yaswanth,
>
> thanks for your interest in Kafka!
>
> First, I would recommend checking out the wiki:
> https://cwiki.apache.org/confluence/display/KAFKA/
> Contributing+Code+Changes
>
> Here is a list of component maintainers (but it seems not to be up to
> date...)
> https://cwiki.apache.org/confluence/display/KAFKA/Maintainers
>
> Also check out the JIRA boards:
> https://issues.apache.org/jira/browse/KAFKA-1
>
> You can just filter for components you are interested in and also filter
> for JIRAs with label "newbie" (or similar) or priority "trivial"/"minor"
> to get started. If you found something interesting, just assign it to
> yourself if unassigned (I guess you need to get permission first -- what
> is your JIRA ID? -- please share it here on the dev list, so we can add you
> as a contributor). If a ticket is already assigned but it seems that nobody
> is actively working on it, just post a comment and ask if you can take
> it over. All other questions can be discussed in the ticket comments, too.
>
> Hope this helps to get started. Welcome to the community!
>
>
>
> -Matthias
>
> On 4/17/17 1:58 AM, Yaswanth Kumar wrote:
> > Hi,
> >
> > I am new to open source contribution. I want to get started with apache
> > kafka. Can anyone help me find some bugs to work on and a mentor I can
> ask
> > for help?
> >
> > Thanks in advance.
> >
> > Regards,
> > Yaswanth Kumar
> >
>
>


[jira] [Commented] (KAFKA-5088) some spelling error in code comment

2017-04-19 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5088?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15974182#comment-15974182
 ] 

ASF GitHub Bot commented on KAFKA-5088:
---

GitHub user auroraxlh opened a pull request:

https://github.com/apache/kafka/pull/2871

KAFKA-5088: some spelling error in code comment

fix some spelling errors

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/auroraxlh/kafka fix_spellingerror

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2871.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2871


commit 9ad468b401765060c8d116001b73c1c9db0c6e56
Author: xinlihua 
Date:   2017-04-19T06:49:53Z

KAFKA-5088: some spelling error in code comment




> some spelling error in code comment 
> 
>
> Key: KAFKA-5088
> URL: https://issues.apache.org/jira/browse/KAFKA-5088
> Project: Kafka
>  Issue Type: Bug
>  Components: streams, tools
>Affects Versions: 0.10.2.0
>Reporter: Xin
>Priority: Trivial
>
> some spelling error in code comment :
> metadata==》metatdata...
> metadata==》metatadata
> propogated==》propagated



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] kafka pull request #2871: KAFKA-5088: some spelling error in code comment

2017-04-19 Thread auroraxlh
GitHub user auroraxlh opened a pull request:

https://github.com/apache/kafka/pull/2871

KAFKA-5088: some spelling error in code comment

fix some spelling errors

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/auroraxlh/kafka fix_spellingerror

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2871.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2871


commit 9ad468b401765060c8d116001b73c1c9db0c6e56
Author: xinlihua 
Date:   2017-04-19T06:49:53Z

KAFKA-5088: some spelling error in code comment




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Created] (KAFKA-5088) some spelling error in code comment

2017-04-19 Thread Xin (JIRA)
Xin created KAFKA-5088:
--

 Summary: some spelling error in code comment 
 Key: KAFKA-5088
 URL: https://issues.apache.org/jira/browse/KAFKA-5088
 Project: Kafka
  Issue Type: Bug
  Components: streams, tools
Affects Versions: 0.10.2.0
Reporter: Xin
Priority: Trivial


some spelling error in code comment :
metadata==》metatdata...
metadata==》metatadata
propogated==》propagated



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] kafka pull request #2870: MINOR: Fix open file leak in log cleaner integrati...

2017-04-19 Thread hachikuji
GitHub user hachikuji opened a pull request:

https://github.com/apache/kafka/pull/2870

MINOR: Fix open file leak in log cleaner integration tests



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/hachikuji/kafka fix-log-cleaner-test-leak

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2870.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2870


commit 9430a5eb5e792b66e4d71febbbed3e841eb2f94a
Author: Jason Gustafson 
Date:   2017-04-19T06:03:47Z

MINOR: Fix open file leak in log cleaner integration tests




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Updated] (KAFKA-4996) Fix findbugs multithreaded correctness warnings for streams

2017-04-19 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4996?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-4996:
-
Labels: newbie  (was: )

> Fix findbugs multithreaded correctness warnings for streams
> ---
>
> Key: KAFKA-4996
> URL: https://issues.apache.org/jira/browse/KAFKA-4996
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Colin P. McCabe
>  Labels: newbie
>
> Fix findbugs multithreaded correctness warnings for streams
> {code}
> Multithreaded correctness Warnings
>   
>   
> 
>   
>   
>   
> 
>Code Warning   
>   
>   
> 
>AT   Sequence of calls to java.util.concurrent.ConcurrentHashMap may not 
> be atomic in 
> org.apache.kafka.streams.state.internals.Segments.getOrCreateSegment(long, 
> ProcessorContext) 
> 
>IS   Inconsistent synchronization of 
> org.apache.kafka.streams.KafkaStreams.stateListener; locked 66% of time   
>   
>   
> 
>IS   Inconsistent synchronization of 
> org.apache.kafka.streams.processor.internals.StreamThread.stateListener; 
> locked 66% of time
>   
>  
>IS   Inconsistent synchronization of 
> org.apache.kafka.streams.processor.TopologyBuilder.applicationId; locked 50% 
> of time   
>   
>  
>IS   Inconsistent synchronization of 
> org.apache.kafka.streams.state.internals.CachingKeyValueStore.context; locked 
> 66% of time   
>   
> 
>IS   Inconsistent synchronization of 
> org.apache.kafka.streams.state.internals.CachingWindowStore.cache; locked 60% 
> of time   
>   
> 
>IS   Inconsistent synchronization of 
> org.apache.kafka.streams.state.internals.CachingWindowStore.context; locked 
> 66% of time   
>   
>   
>IS   Inconsistent synchronization of 
> org.apache.kafka.streams.state.internals.CachingWindowStore.name; locked 60% 
> of time   
>   
>  
>IS   Inconsistent synchronization of 
> org.apache.kafka.streams.state.internals.CachingWindowStore.serdes; locked 
> 70% of time   
>   
>
>IS   Inconsistent synchronization of 
> org.apache.kafka.streams.state.internals.RocksDBStore.db; locked 63% of time  
>   
>   
> 
>IS   Inconsistent synchronization of 
> org.apache.kafka.streams.state.internals.RocksDBStore.serdes; locked 76% of 
> time  
>   
>   
> {code}
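For the first warning in the list, the usual remedy (a sketch with hypothetical value types, not the actual Segments code) is to replace the non-atomic get-then-put with ConcurrentHashMap.computeIfAbsent, which performs lookup-or-create as a single atomic operation:

```java
import java.util.concurrent.ConcurrentHashMap;

// Sketch of the getOrCreateSegment pattern: check-then-put on a
// ConcurrentHashMap is not atomic; computeIfAbsent is.
public class SegmentsSketch {
    // String stands in for the real segment/store object.
    private final ConcurrentHashMap<Long, String> segments = new ConcurrentHashMap<>();

    // Racy version findbugs flags: two threads can both miss and both create.
    public String getOrCreateSegmentRacy(long id) {
        String s = segments.get(id);
        if (s == null) {
            s = "segment-" + id;
            segments.put(id, s);
        }
        return s;
    }

    // Atomic version: at most one creation per key, even under contention.
    public String getOrCreateSegment(long id) {
        return segments.computeIfAbsent(id, k -> "segment-" + k);
    }
}
```

computeIfAbsent guarantees the mapping function runs at most once per key, so concurrent callers observe the same segment instance.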



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (KAFKA-4994) Fix findbugs warning about OffsetStorageWriter#currentFlushId

2017-04-19 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4994?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-4994:
-
Labels: newbie  (was: )

> Fix findbugs warning about OffsetStorageWriter#currentFlushId
> -
>
> Key: KAFKA-4994
> URL: https://issues.apache.org/jira/browse/KAFKA-4994
> Project: Kafka
>  Issue Type: Bug
>Reporter: Colin P. McCabe
>  Labels: newbie
>
> We should fix the findbugs warning about 
> {{OffsetStorageWriter#currentFlushId}}
> {code}
> Multithreaded correctness Warnings
> IS2_INCONSISTENT_SYNC:   Inconsistent synchronization of 
> org.apache.kafka.connect.storage.OffsetStorageWriter.currentFlushId; locked 
> 83% of time
> IS2_INCONSISTENT_SYNC:   Inconsistent synchronization of 
> org.apache.kafka.connect.storage.OffsetStorageWriter.toFlush; locked 75% of 
> time
> {code}
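An IS2_INCONSISTENT_SYNC warning means a field is accessed under a lock in some code paths but without it in others. One standard fix, sketched here with hypothetical names rather than the actual OffsetStorageWriter code, is to route every read and write through methods synchronized on the same monitor:

```java
// Hedged sketch: make every access to the flush-id field go through
// synchronized methods so findbugs sees 100% locked access.
public class OffsetWriterSketch {
    private long currentFlushId = 0; // only touched while holding 'this'

    public synchronized long beginFlush() {
        return ++currentFlushId; // write under the lock
    }

    public synchronized boolean isCurrent(long flushId) {
        return flushId == currentFlushId; // read under the same lock
    }
}
```

Marking the field volatile is the other common option when the accesses are independent reads and writes, but here the increment-and-compare pattern needs a real lock.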



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)