Re: [DISCUSS] Kafka 0.10.1.0 Release Plan

2016-09-09 Thread Sriram Subramanian
+1

Plan looks good to me. Thanks a lot for putting this together!


On Fri, Sep 9, 2016 at 3:45 PM, Jason Gustafson  wrote:

> Hi All,
>
> I've volunteered to be release manager for the upcoming 0.10.1 release and
> would like to propose the following timeline:
>
> Feature Freeze (Sep. 19): The 0.10.1 release branch will be created.
> Code Freeze (Oct. 3): The first RC will go out.
> Final Release (~Oct. 17): Assuming no blocking issues remain, the final
> release will be cut.
>
> The purpose of the time between the feature freeze and code freeze is to
> stabilize the set of release features. We will continue to accept bug fixes
> during this time and new system tests, but no new features will be merged
> into the release branch (they will continue to be accepted in trunk,
> however). After the code freeze, only blocking bug fixes will be accepted.
> Features which cannot be completed in time will have to await the next
> release cycle.
>
> This is the first iteration of the time-based release plan:
> https://cwiki.apache.org/confluence/display/KAFKA/Time+Based+Release+Plan.
> Note
> that the final release is scheduled for October 17, so we have a little
> more than a month to prepare.
>
> Features which have already been merged to trunk and will be included in
> this release include the following:
>
> KIP-4 (partial): Add request APIs to create and delete topics
> KIP-33: Add time-based index
> KIP-60: Make Java client classloading more flexible
> KIP-62: Allow consumer to send heartbeats from a background thread
> KIP-65: Expose timestamps to Connect
> KIP-67: Queryable state for Kafka Streams
> KIP-71: Enable log compaction and deletion to co-exist
> KIP-75: Add per-connector Converters
>
> Since this is the first time-based release, we propose to also include the
> following KIPs which already have a patch available and have undergone some
> review:
>
> KIP-58: Make log compaction point configurable
> KIP-63: Unify store and downstream caching in streams
> KIP-70: Revise consumer partition assignment semantics
> KIP-73: Replication quotas
> KIP-74: Add fetch response size limit in bytes
> KIP-78: Add clusterId
>
> One of the goals of time-based releases is to avoid the rush to get
> unstable features in before the release deadline. If a feature is not ready
> now, the next release window is never far away. This helps to ensure the
> overall quality of the release. We've drawn the line for this release based
> on feature progress and code review. For features which can't get in this
> time, don't worry since January will be here soon!
>
> Please let me know if you have any feedback on this plan.
>
> Thanks!
> Jason
>


[jira] [Updated] (KAFKA-4131) Multiple Regex KStream-Consumers cause Null pointer exception in addRawRecords in RecordQueue class

2016-09-09 Thread Bill Bejeck (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4131?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck updated KAFKA-4131:
---
Status: Patch Available  (was: In Progress)

Working on getting an integration test for this bug fix.

> Multiple Regex KStream-Consumers cause Null pointer exception in 
> addRawRecords in RecordQueue class
> ---
>
> Key: KAFKA-4131
> URL: https://issues.apache.org/jira/browse/KAFKA-4131
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.10.1.0
> Environment: Servers: Confluent Distribution 3.0.0 (i.e. kafka 0.10.0 release)
> Client: Kafka-streams and Kafka-client... commit: 6fb33afff976e467bfa8e0b29eb82770a2a3aaec
>Reporter: David J. Garcia
>Assignee: Bill Bejeck
>
> When you start two consumer processes with a regex topic (with 2 or more
> partitions for the matching topics), the second (i.e. nonleader) consumer
> will fail with a null pointer exception.
> Exception in thread "StreamThread-4" java.lang.NullPointerException
>  at org.apache.kafka.streams.processor.internals.RecordQueue.addRawRecords(RecordQueue.java:78)
>  at org.apache.kafka.streams.processor.internals.PartitionGroup.addRawRecords(PartitionGroup.java:117)
>  at org.apache.kafka.streams.processor.internals.StreamTask.addRecords(StreamTask.java:139)
>  at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:299)
>  at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:208)
> The issue may be in the TopologyBuilder line 832:
> String[] topics = (sourceNodeFactory.pattern != null) ?
> sourceNodeFactory.getTopics(subscriptionUpdates.getUpdates()) :
> sourceNodeFactory.getTopics();
> Because the 2nd consumer joins as a follower, “getUpdates” returns an
> empty collection and the regular expression doesn’t get applied to any
> topics.
> Steps to Reproduce:
> 1.) Create at least two topics with at least 2 partitions each.  And start 
> sending messages to them.
> 2.) Start a single threaded Regex KStream-Consumer (i.e. becomes the leader)
> 3)  Start a new instance of this consumer (i.e. it should receive some of the 
> partitions)
> The second consumer will die with the above exception.
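
A minimal, hypothetical sketch of the failure mode described above (the class and method names below are illustrative only, not the actual Kafka Streams internals): if the follower's subscription-update set is empty when the regex is applied, it ends up with zero source topics, so the later record lookup has nothing to resolve against.

{code}
import java.util.Arrays;
import java.util.Collection;
import java.util.Collections;

// Illustrative only: shows why applying the pattern to an empty set of
// subscription updates leaves the follower with no source topics.
public class RegexSubscriptionSketch {

    static String[] topicsForPattern(Collection<String> subscriptionUpdates, String pattern) {
        // Analogous to the quoted ternary: the pattern is only ever applied
        // to whatever metadata updates this instance has seen so far.
        return subscriptionUpdates.stream()
                .filter(topic -> topic.matches(pattern))
                .toArray(String[]::new);
    }

    public static void main(String[] args) {
        String[] leaderTopics = topicsForPattern(Arrays.asList("orders", "orders-eu"), "orders.*");
        String[] followerTopics = topicsForPattern(Collections.emptyList(), "orders.*");
        System.out.println(leaderTopics.length);   // 2 -- leader has seen metadata updates
        System.out.println(followerTopics.length); // 0 -- follower joined with no updates
    }
}
{code}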



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-4131) Multiple Regex KStream-Consumers cause Null pointer exception in addRawRecords in RecordQueue class

2016-09-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4131?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15478891#comment-15478891
 ] 

ASF GitHub Bot commented on KAFKA-4131:
---

GitHub user bbejeck opened a pull request:

https://github.com/apache/kafka/pull/1843

KAFKA-4131: moved updates for topics matching regex subscription method, allows for followers to get subscribed topics

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/bbejeck/kafka 
KAFKA-4131_mulitple_regex_consumers_cause_npe

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1843.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1843


commit 0816ecc454cb9d7504fbc2c48549e3f4eb2b5f22
Author: bbejeck 
Date:   2016-09-10T01:40:12Z

KAFKA-4131: moved updates for topics matching regex subscription method, 
allows for followers to get subscribed topics




> Multiple Regex KStream-Consumers cause Null pointer exception in 
> addRawRecords in RecordQueue class
> ---
>
> Key: KAFKA-4131
> URL: https://issues.apache.org/jira/browse/KAFKA-4131
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.10.1.0
> Environment: Servers: Confluent Distribution 3.0.0 (i.e. kafka 0.10.0 release)
> Client: Kafka-streams and Kafka-client... commit: 6fb33afff976e467bfa8e0b29eb82770a2a3aaec
>Reporter: David J. Garcia
>Assignee: Bill Bejeck
>
> When you start two consumer processes with a regex topic (with 2 or more
> partitions for the matching topics), the second (i.e. nonleader) consumer
> will fail with a null pointer exception.
> Exception in thread "StreamThread-4" java.lang.NullPointerException
>  at org.apache.kafka.streams.processor.internals.RecordQueue.addRawRecords(RecordQueue.java:78)
>  at org.apache.kafka.streams.processor.internals.PartitionGroup.addRawRecords(PartitionGroup.java:117)
>  at org.apache.kafka.streams.processor.internals.StreamTask.addRecords(StreamTask.java:139)
>  at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:299)
>  at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:208)
> The issue may be in the TopologyBuilder line 832:
> String[] topics = (sourceNodeFactory.pattern != null) ?
> sourceNodeFactory.getTopics(subscriptionUpdates.getUpdates()) :
> sourceNodeFactory.getTopics();
> Because the 2nd consumer joins as a follower, “getUpdates” returns an
> empty collection and the regular expression doesn’t get applied to any
> topics.
> Steps to Reproduce:
> 1.) Create at least two topics with at least 2 partitions each.  And start 
> sending messages to them.
> 2.) Start a single threaded Regex KStream-Consumer (i.e. becomes the leader)
> 3)  Start a new instance of this consumer (i.e. it should receive some of the 
> partitions)
> The second consumer will die with the above exception.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request #1843: KAFKA-4131: moved updates for topics matching rege...

2016-09-09 Thread bbejeck
GitHub user bbejeck opened a pull request:

https://github.com/apache/kafka/pull/1843

KAFKA-4131: moved updates for topics matching regex subscription method, allows for followers to get subscribed topics

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/bbejeck/kafka 
KAFKA-4131_mulitple_regex_consumers_cause_npe

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1843.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1843


commit 0816ecc454cb9d7504fbc2c48549e3f4eb2b5f22
Author: bbejeck 
Date:   2016-09-10T01:40:12Z

KAFKA-4131: moved updates for topics matching regex subscription method, 
allows for followers to get subscribed topics




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Build failed in Jenkins: kafka-trunk-jdk8 #873

2016-09-09 Thread Apache Jenkins Server
See 

Changes:

[ismael] KAFKA-3807; Fix transient test failure caused by race on future

--
[...truncated 4771 lines...]
kafka.admin.DeleteConsumerGroupTest > testGroupWideDeleteInZK STARTED

kafka.admin.DeleteConsumerGroupTest > testGroupWideDeleteInZK PASSED

kafka.admin.ReassignPartitionsCommandTest > testRackAwareReassign STARTED

kafka.admin.ReassignPartitionsCommandTest > testRackAwareReassign PASSED

kafka.admin.TopicCommandTest > testCreateIfNotExists STARTED

kafka.admin.TopicCommandTest > testCreateIfNotExists PASSED

kafka.admin.TopicCommandTest > testCreateAlterTopicWithRackAware STARTED

kafka.admin.TopicCommandTest > testCreateAlterTopicWithRackAware PASSED

kafka.admin.TopicCommandTest > testTopicDeletion STARTED

kafka.admin.TopicCommandTest > testTopicDeletion PASSED

kafka.admin.TopicCommandTest > testConfigPreservationAcrossPartitionAlteration 
STARTED

kafka.admin.TopicCommandTest > testConfigPreservationAcrossPartitionAlteration 
PASSED

kafka.admin.TopicCommandTest > testAlterIfExists STARTED

kafka.admin.TopicCommandTest > testAlterIfExists PASSED

kafka.admin.TopicCommandTest > testDeleteIfExists STARTED

kafka.admin.TopicCommandTest > testDeleteIfExists PASSED

kafka.admin.AdminTest > testBasicPreferredReplicaElection STARTED

kafka.admin.AdminTest > testBasicPreferredReplicaElection PASSED

kafka.admin.AdminTest > testPreferredReplicaJsonData STARTED

kafka.admin.AdminTest > testPreferredReplicaJsonData PASSED

kafka.admin.AdminTest > testReassigningNonExistingPartition STARTED

kafka.admin.AdminTest > testReassigningNonExistingPartition PASSED

kafka.admin.AdminTest > testGetBrokerMetadatas STARTED

kafka.admin.AdminTest > testGetBrokerMetadatas PASSED

kafka.admin.AdminTest > testBootstrapClientIdConfig STARTED

kafka.admin.AdminTest > testBootstrapClientIdConfig PASSED

kafka.admin.AdminTest > testPartitionReassignmentNonOverlappingReplicas STARTED

kafka.admin.AdminTest > testPartitionReassignmentNonOverlappingReplicas PASSED

kafka.admin.AdminTest > testReplicaAssignment STARTED

kafka.admin.AdminTest > testReplicaAssignment PASSED

kafka.admin.AdminTest > testPartitionReassignmentWithLeaderNotInNewReplicas 
STARTED

kafka.admin.AdminTest > testPartitionReassignmentWithLeaderNotInNewReplicas 
PASSED

kafka.admin.AdminTest > testTopicConfigChange STARTED

kafka.admin.AdminTest > testTopicConfigChange PASSED

kafka.admin.AdminTest > testResumePartitionReassignmentThatWasCompleted STARTED

kafka.admin.AdminTest > testResumePartitionReassignmentThatWasCompleted PASSED

kafka.admin.AdminTest > testManualReplicaAssignment STARTED

kafka.admin.AdminTest > testManualReplicaAssignment PASSED

kafka.admin.AdminTest > testPartitionReassignmentWithLeaderInNewReplicas STARTED

kafka.admin.AdminTest > testPartitionReassignmentWithLeaderInNewReplicas PASSED

kafka.admin.AdminTest > testShutdownBroker STARTED

kafka.admin.AdminTest > testShutdownBroker PASSED

kafka.admin.AdminTest > testTopicCreationWithCollision STARTED

kafka.admin.AdminTest > testTopicCreationWithCollision PASSED

kafka.admin.AdminTest > testTopicCreationInZK STARTED

kafka.admin.AdminTest > testTopicCreationInZK PASSED

kafka.admin.DeleteTopicTest > testDeleteTopicWithCleaner STARTED
ERROR: Could not install GRADLE_2_4_RC_2_HOME
java.lang.NullPointerException
at hudson.plugins.toolenv.ToolEnvBuildWrapper$1.buildEnvVars(ToolEnvBuildWrapper.java:46)
at hudson.model.AbstractBuild.getEnvironment(AbstractBuild.java:931)
at hudson.plugins.git.GitSCM.getParamExpandedRepos(GitSCM.java:404)
at hudson.plugins.git.GitSCM.compareRemoteRevisionWithImpl(GitSCM.java:609)
at hudson.plugins.git.GitSCM.compareRemoteRevisionWith(GitSCM.java:574)
at hudson.scm.SCM.compareRemoteRevisionWith(SCM.java:381)
at hudson.scm.SCM.poll(SCM.java:398)
at hudson.model.AbstractProject._poll(AbstractProject.java:1446)
at hudson.model.AbstractProject.poll(AbstractProject.java:1349)
at hudson.triggers.SCMTrigger$Runner.runPolling(SCMTrigger.java:528)
at hudson.triggers.SCMTrigger$Runner.run(SCMTrigger.java:557)
at hudson.util.SequentialExecutionQueue$QueueEntry.run(SequentialExecutionQueue.java:119)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)

kafka.admin.DeleteTopicTest > testDeleteTopicWithCleaner PASSED

kafka.admin.DeleteTopicTest > testResumeDeleteTopicOnControllerFailover STARTED

kafka.admin.DeleteTopicTest > testResumeDeleteTopicOnControllerFailover PASSED

kafka.admin.DeleteTopicTest > testResumeDeleteTopicWithRecove

[VOTE] KIP-79 - ListOffsetRequest v1 and search by timestamp methods in new consumer.

2016-09-09 Thread Becket Qin
Hi all,

I'd like to start the voting for KIP-79

In short, we propose to:
1. add a ListOffsetRequest/ListOffsetResponse v1, and
2. add earliestOffsets(), latestOffsets() and offsetForTime() methods in the
new consumer.

The KIP wiki is the following:
https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=65868090
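
For illustration, here is a rough sketch of how these methods could look from the caller's side. The names follow the proposal above, but the exact signatures and return types are only defined on the KIP-79 wiki page; treat everything below as an assumption, not the final API.

{code}
import java.util.Collection;
import java.util.Map;

import org.apache.kafka.common.TopicPartition;

// Illustrative sketch only -- see the KIP-79 wiki for the authoritative definitions.
public interface OffsetLookupSketch {

    // Earliest available offset for each of the given partitions.
    Map<TopicPartition, Long> earliestOffsets(Collection<TopicPartition> partitions);

    // Latest offset (high watermark) for each of the given partitions.
    Map<TopicPartition, Long> latestOffsets(Collection<TopicPartition> partitions);

    // For each partition, the earliest offset whose timestamp is >= the
    // target timestamp supplied for that partition.
    Map<TopicPartition, OffsetAndTimestamp> offsetForTime(Map<TopicPartition, Long> timestampsToSearch);

    // Simple offset/timestamp pair; a timestamp of -1 means "unknown".
    class OffsetAndTimestamp {
        public final long offset;
        public final long timestamp;

        public OffsetAndTimestamp(long offset, long timestamp) {
            this.offset = offset;
            this.timestamp = timestamp;
        }
    }
}
{code}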

Thanks,

Jiangjie (Becket) Qin


[jira] [Created] (KAFKA-4148) KIP-79 add ListOffsetRequest v1 and search by timestamp interface to consumer.

2016-09-09 Thread Jiangjie Qin (JIRA)
Jiangjie Qin created KAFKA-4148:
---

 Summary: KIP-79 add ListOffsetRequest v1 and search by timestamp 
interface to consumer.
 Key: KAFKA-4148
 URL: https://issues.apache.org/jira/browse/KAFKA-4148
 Project: Kafka
  Issue Type: Task
Reporter: Jiangjie Qin
Assignee: Jiangjie Qin


This ticket is to implement KIP-79.

https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=65868090



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [DISCUSS] KIP-79 - ListOffsetRequest v1 and offsetForTime() method in new consumer.

2016-09-09 Thread Becket Qin
Hey Jun,

I am thinking about the interface. It seems that returning a timestamp from
earliestOffsets() and latestOffsets() is not that useful, because the
timestamp of the first message is not necessarily the earliest timestamp,
and the timestamp of the last offset might be -1. Also, exposing the
timestamp of the high watermark seems unwanted. It seems better to return the
offsets without timestamps.

There are probably use cases where people want the timestamp of the last
message, but that requires users to consume the last message anyway, even if
we return a timestamp from latestOffsets(), because the returned timestamp
could be -1. So we can probably just make the interface return only the
offsets.

I'll make this change and start a voting thread.

Thanks for all the feedback.

Jiangjie (Becket) Qin

On Fri, Sep 9, 2016 at 10:55 AM, Becket Qin  wrote:

> Completely agree that we should have a consistent representation of
> something missing/unknown in the code.
>
> My understanding of the convention is that -1 means "not
> available"/"unknown". For example, when a producer request fails, the offset
> we use in the callback is -1. And Message.NoTimestamp is also -1. Using -1
> instead of null has at least two benefits:
> 1) it works for primitive types as well as classes.
> 2) it is easy to send via wire protocols.
>
> For the other use cases, Option/Null could be better.
>
> Thanks,
> Jiangjie (Becket) Qin
>
> On Fri, Sep 9, 2016 at 8:06 AM, Ismael Juma  wrote:
>
>> Our inconsistent usage of magic values like `-1` and `null` to represent
>> missing values makes it hard to reason about code. We had a recent major
>> regression in MirrorMaker as a result of this[1], for example. `null` by
>> itself is hard enough (often called the billion dollar mistake), so can we
>> decide what we prefer and stick with that going forward?
>>
>> My take is: `Option`/`Optional` > `null` > `-1` unless it's a particularly
>> performance sensitive area where the boxing incurred by having `null` as a
>> possible value is a problem. Since we're still on Java 7, we can't use
>> `Optional` in Java code.
>>
>> Ismael
>>
>> [1] The fix:
>> https://github.com/apache/kafka/commit/4e4e2fb5085758ee9ccf6
>> 307433ad531a33198d3
>>
>> On Fri, Sep 9, 2016 at 3:48 PM, Jun Rao  wrote:
>>
>> > Jiangjie,
>> >
>> > Returning TimestampOffset(-1, -1) sounds reasonable to me.
>> >
>> > Thanks,
>> >
>> > Jun
>> >
>> > On Thu, Sep 8, 2016 at 8:46 PM, Becket Qin 
>> wrote:
>> >
>> > > Hi Jun,
>> > >
>> > > > 1. latestOffsets() returns the next offset. So there won't be a
>> > > > timestamp associated with it. Would we use something like -1 for
>> > > > timestamp?
>> > >
>> > > The returned value would be the high watermark, so there might be an
>> > > associated timestamp. But if it is the log end offset, it seems that -1
>> > > is a reasonable value.
>> > >
>> > > > 2. Jason mentioned that if no message has timestamp >= the provided
>> > > > timestamp, we return a null value for that partition. Could we
>> > > > document that in the wiki?
>> > >
>> > > I made a minor change to return a TimestampOffset(-1, -1) in that case.
>> > > Not sure which one is better, but there seems to be only a minor
>> > > difference. What do you think?
>> > >
>> > > I haven't seen a planned release date yet, but I can probably get it
>> done
>> > > in 2-3 weeks with reasonable rounds of reviews.
>> > >
>> > > Thanks,
>> > >
>> > > Jiangjie (Becket) Qin
>> > >
>> > >
>> > > On Thu, Sep 8, 2016 at 6:24 PM, Jun Rao  wrote:
>> > >
>> > > > Hi, Jiangjie,
>> > > >
>> > > > Thanks for the updated KIP. A couple of minor comments.
>> > > >
>> > > > 1. latestOffsets() returns the next offset. So there won't be a
>> > timestamp
>> > > > associated with it. Would we use something like -1 for timestamp?
>> > > >
>> > > > 2. Jason mentioned that if no message has timestamp >= the provided
>> > > > timestamp, we return a null value for that partition. Could we
>> document
>> > > > that in the wiki?
>> > > >
>> > > > BTW, we are getting close to the next release. This is a really nice
>> > > > feature to have. Do you think you will have a patch ready for the
>> next
>> > > > release?
>> > > >
>> > > > Thanks.
>> > > >
>> > > > Jun
>> > > >
>> > > > On Wed, Sep 7, 2016 at 2:47 PM, Becket Qin 
>> > wrote:
>> > > >
>> > > > > That sounds reasonable to me. I'll update the KIP wiki page.
>> > > > >
>> > > > > On Wed, Sep 7, 2016 at 1:34 PM, Jason Gustafson <
>> ja...@confluent.io>
>> > > > > wrote:
>> > > > >
>> > > > > > Hey Becket,
>> > > > > >
>> > > > > > I don't have a super strong preference, but I think this
>> > > > > >
>> > > > > > earliestOffset(singleton(partition));
>> > > > > >
>> > > > > > captures the intent more clearly than this:
>> > > > > >
>> > > > > > offsetsForTimes(singletonMap(partition, -1));
>> > > > > >
>> > > > > > I can understand the desire to keep the API footprint small,
>> but I
>> > > > think
>> > > > > > the use case is common enough to justify separate APIs. A coup

[jira] [Updated] (KAFKA-4145) Avoid redundant integration testing in ProducerSendTests

2016-09-09 Thread Vahid Hashemian (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4145?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vahid Hashemian updated KAFKA-4145:
---
Status: Patch Available  (was: In Progress)

> Avoid redundant integration testing in ProducerSendTests
> 
>
> Key: KAFKA-4145
> URL: https://issues.apache.org/jira/browse/KAFKA-4145
> Project: Kafka
>  Issue Type: Improvement
>  Components: unit tests
>Reporter: Jason Gustafson
>Assignee: Vahid Hashemian
>
> We have a few test cases in {{BaseProducerSendTest}} which probably have 
> little value being tested for both Plaintext and SSL. We can move them to 
> {{PlaintextProducerSendTest}} and save a little bit on the build time. The 
> following tests seem like possible candidates:
> 1. testSendCompressedMessageWithCreateTime
> 2. testSendNonCompressedMessageWithCreateTime
> 3. testSendCompressedMessageWithLogAppendTime
> 4. testSendNonCompressedMessageWithLogApendTime
> 5. testAutoCreateTopic
> 6. testFlush
> 7. testSendWithInvalidCreateTime
> 8. testCloseWithZeroTimeoutFromCallerThread
> 9. testCloseWithZeroTimeoutFromSenderThread



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-4145) Avoid redundant integration testing in ProducerSendTests

2016-09-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4145?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15478602#comment-15478602
 ] 

ASF GitHub Bot commented on KAFKA-4145:
---

GitHub user vahidhashemian opened a pull request:

https://github.com/apache/kafka/pull/1842

KAFKA-4145: Avoid redundant integration testing in ProducerSendTests



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/vahidhashemian/kafka KAFKA-4145

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1842.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1842


commit 101ab22d130c9b03ca914edd14f41ada2e7f374b
Author: Vahid Hashemian 
Date:   2016-09-09T20:43:50Z

KAFKA-4145: Avoid redundant integration testing in ProducerSendTests




> Avoid redundant integration testing in ProducerSendTests
> 
>
> Key: KAFKA-4145
> URL: https://issues.apache.org/jira/browse/KAFKA-4145
> Project: Kafka
>  Issue Type: Improvement
>  Components: unit tests
>Reporter: Jason Gustafson
>Assignee: Vahid Hashemian
>
> We have a few test cases in {{BaseProducerSendTest}} which probably have 
> little value being tested for both Plaintext and SSL. We can move them to 
> {{PlaintextProducerSendTest}} and save a little bit on the build time. The 
> following tests seem like possible candidates:
> 1. testSendCompressedMessageWithCreateTime
> 2. testSendNonCompressedMessageWithCreateTime
> 3. testSendCompressedMessageWithLogAppendTime
> 4. testSendNonCompressedMessageWithLogApendTime
> 5. testAutoCreateTopic
> 6. testFlush
> 7. testSendWithInvalidCreateTime
> 8. testCloseWithZeroTimeoutFromCallerThread
> 9. testCloseWithZeroTimeoutFromSenderThread



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request #1842: KAFKA-4145: Avoid redundant integration testing in...

2016-09-09 Thread vahidhashemian
GitHub user vahidhashemian opened a pull request:

https://github.com/apache/kafka/pull/1842

KAFKA-4145: Avoid redundant integration testing in ProducerSendTests



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/vahidhashemian/kafka KAFKA-4145

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1842.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1842


commit 101ab22d130c9b03ca914edd14f41ada2e7f374b
Author: Vahid Hashemian 
Date:   2016-09-09T20:43:50Z

KAFKA-4145: Avoid redundant integration testing in ProducerSendTests




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Resolved] (KAFKA-4146) Kafka Stream ignores Serde exceptions leading to silently broken apps

2016-09-09 Thread Elias Levy (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4146?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elias Levy resolved KAFKA-4146.
---
Resolution: Invalid

> Kafka Stream ignores Serde exceptions leading to silently broken apps
> -
>
> Key: KAFKA-4146
> URL: https://issues.apache.org/jira/browse/KAFKA-4146
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.10.0.1
>Reporter: Elias Levy
>Assignee: Guozhang Wang
>
> It appears that Kafka Streams silently ignores Serde exceptions, leading to 
> apps that are silently broken.
> E.g. if you make use of {{Stream.through("topic")}} and the default Serde is 
> inappropriate for the type, the app will silently drop the data on the floor 
> without even the courtesy of printing a single error message.
> At the very least an initial error message should be generated, with the 
> option to generate messages for each such failure or a sampling of such 
> failures.
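
A rough sketch of the scenario described above, written against the 0.10.0-era Streams API (the overloads are quoted from memory and may differ slightly by version): if the application-wide default serde does not match the actual key/value types, records written via through() can fail to (de)serialize without any error being surfaced. Passing explicit serdes avoids relying on the default.

{code}
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KStreamBuilder;

// Sketch only: shows default-serde reliance vs. explicit serdes on through().
public class SerdeMismatchSketch {

    public static void buildTopology() {
        KStreamBuilder builder = new KStreamBuilder();

        // Source is read with explicit serdes, so its types are known.
        KStream<String, Long> counts =
                builder.stream(Serdes.String(), Serdes.Long(), "input-topic");

        // Relies on the configured default serdes -- silently wrong if they
        // are not String/Long:
        //   counts.through("repartition-topic");

        // Explicit serdes make the intended types (and any mismatch) visible:
        counts.through(Serdes.String(), Serdes.Long(), "repartition-topic");
    }
}
{code}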



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: kafka-trunk-jdk7 #1530

2016-09-09 Thread Apache Jenkins Server
See 

Changes:

[ismael] KAFKA-3807; Fix transient test failure caused by race on future

--
[...truncated 12377 lines...]
org.apache.kafka.streams.kstream.internals.KTableFilterTest > 
testSkipNullOnMaterialization STARTED

org.apache.kafka.streams.kstream.internals.KTableFilterTest > 
testSkipNullOnMaterialization PASSED

org.apache.kafka.streams.kstream.internals.KTableFilterTest > testKTable STARTED

org.apache.kafka.streams.kstream.internals.KTableFilterTest > testKTable PASSED

org.apache.kafka.streams.kstream.internals.KTableFilterTest > testValueGetter 
STARTED

org.apache.kafka.streams.kstream.internals.KTableFilterTest > testValueGetter 
PASSED

org.apache.kafka.streams.kstream.JoinWindowsTest > afterBelowLower STARTED

org.apache.kafka.streams.kstream.JoinWindowsTest > afterBelowLower PASSED

org.apache.kafka.streams.kstream.JoinWindowsTest > beforeOverUpper STARTED

org.apache.kafka.streams.kstream.JoinWindowsTest > beforeOverUpper PASSED

org.apache.kafka.streams.kstream.JoinWindowsTest > 
shouldHaveSaneEqualsAndHashCode STARTED

org.apache.kafka.streams.kstream.JoinWindowsTest > 
shouldHaveSaneEqualsAndHashCode PASSED

org.apache.kafka.streams.kstream.JoinWindowsTest > validWindows STARTED

org.apache.kafka.streams.kstream.JoinWindowsTest > validWindows PASSED

org.apache.kafka.streams.kstream.JoinWindowsTest > 
timeDifferenceMustNotBeNegative STARTED

org.apache.kafka.streams.kstream.JoinWindowsTest > 
timeDifferenceMustNotBeNegative PASSED

org.apache.kafka.streams.kstream.KStreamBuilderTest > 
shouldHaveCorrectSourceTopicsForTableFromMergedStream STARTED

org.apache.kafka.streams.kstream.KStreamBuilderTest > 
shouldHaveCorrectSourceTopicsForTableFromMergedStream PASSED

org.apache.kafka.streams.kstream.KStreamBuilderTest > 
shouldHaveCorrectSourceTopicsForTableFromMergedStreamWithProcessors STARTED

org.apache.kafka.streams.kstream.KStreamBuilderTest > 
shouldHaveCorrectSourceTopicsForTableFromMergedStreamWithProcessors PASSED

org.apache.kafka.streams.kstream.KStreamBuilderTest > testMerge STARTED

org.apache.kafka.streams.kstream.KStreamBuilderTest > testMerge PASSED

org.apache.kafka.streams.kstream.KStreamBuilderTest > testFrom STARTED

org.apache.kafka.streams.kstream.KStreamBuilderTest > testFrom PASSED

org.apache.kafka.streams.kstream.KStreamBuilderTest > 
shouldThrowExceptionWhenTopicNamesAreNull STARTED

org.apache.kafka.streams.kstream.KStreamBuilderTest > 
shouldThrowExceptionWhenTopicNamesAreNull PASSED

org.apache.kafka.streams.kstream.KStreamBuilderTest > 
shouldThrowExceptionWhenNoTopicPresent STARTED

org.apache.kafka.streams.kstream.KStreamBuilderTest > 
shouldThrowExceptionWhenNoTopicPresent PASSED

org.apache.kafka.streams.kstream.KStreamBuilderTest > testNewName STARTED

org.apache.kafka.streams.kstream.KStreamBuilderTest > testNewName PASSED

org.apache.kafka.streams.kstream.TimeWindowsTest > 
shouldHaveSaneEqualsAndHashCode STARTED

org.apache.kafka.streams.kstream.TimeWindowsTest > 
shouldHaveSaneEqualsAndHashCode PASSED

org.apache.kafka.streams.kstream.TimeWindowsTest > windowSizeMustNotBeZero 
STARTED

org.apache.kafka.streams.kstream.TimeWindowsTest > windowSizeMustNotBeZero 
PASSED

org.apache.kafka.streams.kstream.TimeWindowsTest > advanceIntervalMustNotBeZero 
STARTED

org.apache.kafka.streams.kstream.TimeWindowsTest > advanceIntervalMustNotBeZero 
PASSED

org.apache.kafka.streams.kstream.TimeWindowsTest > windowSizeMustNotBeNegative 
STARTED

org.apache.kafka.streams.kstream.TimeWindowsTest > windowSizeMustNotBeNegative 
PASSED

org.apache.kafka.streams.kstream.TimeWindowsTest > 
advanceIntervalMustNotBeNegative STARTED

org.apache.kafka.streams.kstream.TimeWindowsTest > 
advanceIntervalMustNotBeNegative PASSED

org.apache.kafka.streams.kstream.TimeWindowsTest > 
advanceIntervalMustNotBeLargerThanWindowSize STARTED

org.apache.kafka.streams.kstream.TimeWindowsTest > 
advanceIntervalMustNotBeLargerThanWindowSize PASSED

org.apache.kafka.streams.kstream.TimeWindowsTest > windowsForTumblingWindows 
STARTED

org.apache.kafka.streams.kstream.TimeWindowsTest > windowsForTumblingWindows 
PASSED

org.apache.kafka.streams.kstream.TimeWindowsTest > windowsForHoppingWindows 
STARTED

org.apache.kafka.streams.kstream.TimeWindowsTest > windowsForHoppingWindows 
PASSED

org.apache.kafka.streams.kstream.TimeWindowsTest > 
windowsForBarelyOverlappingHoppingWindows STARTED

org.apache.kafka.streams.kstream.TimeWindowsTest > 
windowsForBarelyOverlappingHoppingWindows PASSED

org.apache.kafka.streams.StreamsConfigTest > testGetConsumerConfigs STARTED

org.apache.kafka.streams.StreamsConfigTest > testGetConsumerConfigs PASSED

org.apache.kafka.streams.StreamsConfigTest > defaultSerdeShouldBeConfigured 
STARTED

org.apache.kafka.streams.StreamsConfigTest > defaultSerdeShouldBeConfigured 
PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldSupportPrefixedConsumerConfigs ST

[jira] [Commented] (KAFKA-4133) Provide a configuration to control consumer max in-flight fetches

2016-09-09 Thread Jason Gustafson (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4133?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15478521#comment-15478521
 ] 

Jason Gustafson commented on KAFKA-4133:


[~mimaison] You should have permission now.

> Provide a configuration to control consumer max in-flight fetches
> -
>
> Key: KAFKA-4133
> URL: https://issues.apache.org/jira/browse/KAFKA-4133
> Project: Kafka
>  Issue Type: Improvement
>  Components: consumer
>Reporter: Jason Gustafson
>Assignee: Mickael Maison
>
> With KIP-74, we now have a good way to limit the size of fetch responses, but 
> it may still be difficult for users to control overall memory since the 
> consumer will send fetches in parallel to all the brokers which own 
> partitions that it is subscribed to. To give users finer control, it might 
> make sense to add a `max.in.flight.fetches` setting to limit the total number 
> of concurrent fetches at any time. This would require a KIP since it's a new 
> configuration.
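
For context, a sketch of how such a setting might sit next to the existing fetch limits. Note the hedge: "fetch.max.bytes" is the KIP-74 response-size limit mentioned above (available from 0.10.1), while "max.in.flight.fetches" is only the setting *proposed* in this ticket and is not an existing consumer configuration.

{code}
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;

// Sketch only: the last property is hypothetical and would require a KIP.
public class FetchMemorySketch {

    public static Properties consumerProps() {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "example-group");
        // Cap the size of each fetch response (KIP-74):
        props.put("fetch.max.bytes", 50 * 1024 * 1024);
        // Hypothetical knob from KAFKA-4133 to cap concurrent in-flight fetches:
        props.put("max.in.flight.fetches", 5);
        return props;
    }
}
{code}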



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[DISCUSS] Kafka 0.10.1.0 Release Plan

2016-09-09 Thread Jason Gustafson
Hi All,

I've volunteered to be release manager for the upcoming 0.10.1 release and
would like to propose the following timeline:

Feature Freeze (Sep. 19): The 0.10.1 release branch will be created.
Code Freeze (Oct. 3): The first RC will go out.
Final Release (~Oct. 17): Assuming no blocking issues remain, the final
release will be cut.

The purpose of the time between the feature freeze and code freeze is to
stabilize the set of release features. We will continue to accept bug fixes
during this time and new system tests, but no new features will be merged
into the release branch (they will continue to be accepted in trunk,
however). After the code freeze, only blocking bug fixes will be accepted.
Features which cannot be completed in time will have to await the next
release cycle.

This is the first iteration of the time-based release plan:
https://cwiki.apache.org/confluence/display/KAFKA/Time+Based+Release+Plan. Note
that the final release is scheduled for October 17, so we have a little
more than a month to prepare.

Features which have already been merged to trunk and will be included in
this release include the following:

KIP-4 (partial): Add request APIs to create and delete topics
KIP-33: Add time-based index
KIP-60: Make Java client classloading more flexible
KIP-62: Allow consumer to send heartbeats from a background thread
KIP-65: Expose timestamps to Connect
KIP-67: Queryable state for Kafka Streams
KIP-71: Enable log compaction and deletion to co-exist
KIP-75: Add per-connector Converters

Since this is the first time-based release, we propose to also include the
following KIPs which already have a patch available and have undergone some
review:

KIP-58: Make log compaction point configurable
KIP-63: Unify store and downstream caching in streams
KIP-70: Revise consumer partition assignment semantics
KIP-73: Replication quotas
KIP-74: Add fetch response size limit in bytes
KIP-78: Add clusterId

One of the goals of time-based releases is to avoid the rush to get
unstable features in before the release deadline. If a feature is not ready
now, the next release window is never far away. This helps to ensure the
overall quality of the release. We've drawn the line for this release based
on feature progress and code review. For features which can't get in this
time, don't worry since January will be here soon!

Please let me know if you have any feedback on this plan.

Thanks!
Jason


[jira] [Commented] (KAFKA-4147) Transient failure in ConsumerCoordinatorTest.testAutoCommitDynamicAssignment

2016-09-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4147?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15478406#comment-15478406
 ] 

ASF GitHub Bot commented on KAFKA-4147:
---

GitHub user hachikuji opened a pull request:

https://github.com/apache/kafka/pull/1841

KAFKA-4147: Fix transient failure in 
ConsumerCoordinatorTest.testAutoCommitDynamicAssignment



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/hachikuji/kafka KAFKA-4147

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1841.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1841


commit 18619cb46a82f4835ba63f237b39d62120a6
Author: Jason Gustafson 
Date:   2016-09-09T22:02:41Z

KAFKA-4147: Fix transient failure in 
ConsumerCoordinatorTest.testAutoCommitDynamicAssignment




> Transient failure in ConsumerCoordinatorTest.testAutoCommitDynamicAssignment
> 
>
> Key: KAFKA-4147
> URL: https://issues.apache.org/jira/browse/KAFKA-4147
> Project: Kafka
>  Issue Type: Sub-task
>  Components: unit tests
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
>
> Seen recently: 
> https://jenkins.confluent.io/job/kafka-trunk/1143/testReport/org.apache.kafka.clients.consumer.internals/ConsumerCoordinatorTest/testAutoCommitDynamicAssignment/.
> {code}
> java.lang.NullPointerException
>   at 
> org.apache.kafka.clients.consumer.internals.ConsumerCoordinatorTest.testAutoCommitDynamicAssignment(ConsumerCoordinatorTest.java:821)
> {code}
> Looks like it's caused by a race condition with the heartbeat thread.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request #1841: KAFKA-4147: Fix transient failure in ConsumerCoord...

2016-09-09 Thread hachikuji
GitHub user hachikuji opened a pull request:

https://github.com/apache/kafka/pull/1841

KAFKA-4147: Fix transient failure in 
ConsumerCoordinatorTest.testAutoCommitDynamicAssignment



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/hachikuji/kafka KAFKA-4147

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1841.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1841


commit 18619cb46a82f4835ba63f237b39d62120a6
Author: Jason Gustafson 
Date:   2016-09-09T22:02:41Z

KAFKA-4147: Fix transient failure in 
ConsumerCoordinatorTest.testAutoCommitDynamicAssignment




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-4145) Avoid redundant integration testing in ProducerSendTests

2016-09-09 Thread Ashish K Singh (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4145?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15478402#comment-15478402
 ] 

Ashish K Singh commented on KAFKA-4145:
---

+1, testFlush has been one of the most flaky tests as well.

> Avoid redundant integration testing in ProducerSendTests
> 
>
> Key: KAFKA-4145
> URL: https://issues.apache.org/jira/browse/KAFKA-4145
> Project: Kafka
>  Issue Type: Improvement
>  Components: unit tests
>Reporter: Jason Gustafson
>Assignee: Vahid Hashemian
>
> We have a few test cases in {{BaseProducerSendTest}} which probably have 
> little value being tested for both Plaintext and SSL. We can move them to 
> {{PlaintextProducerSendTest}} and save a little bit on the build time. The 
> following tests seem like possible candidates:
> 1. testSendCompressedMessageWithCreateTime
> 2. testSendNonCompressedMessageWithCreateTime
> 3. testSendCompressedMessageWithLogAppendTime
> 4. testSendNonCompressedMessageWithLogApendTime
> 5. testAutoCreateTopic
> 6. testFlush
> 7. testSendWithInvalidCreateTime
> 8. testCloseWithZeroTimeoutFromCallerThread
> 9. testCloseWithZeroTimeoutFromSenderThread



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-4147) Transient failure in ConsumerCoordinatorTest.testAutoCommitDynamicAssignment

2016-09-09 Thread Jason Gustafson (JIRA)
Jason Gustafson created KAFKA-4147:
--

 Summary: Transient failure in 
ConsumerCoordinatorTest.testAutoCommitDynamicAssignment
 Key: KAFKA-4147
 URL: https://issues.apache.org/jira/browse/KAFKA-4147
 Project: Kafka
  Issue Type: Sub-task
  Components: unit tests
Reporter: Jason Gustafson
Assignee: Jason Gustafson


Seen recently: 
https://jenkins.confluent.io/job/kafka-trunk/1143/testReport/org.apache.kafka.clients.consumer.internals/ConsumerCoordinatorTest/testAutoCommitDynamicAssignment/.

{code}
java.lang.NullPointerException
at 
org.apache.kafka.clients.consumer.internals.ConsumerCoordinatorTest.testAutoCommitDynamicAssignment(ConsumerCoordinatorTest.java:821)
{code}

Looks like it's caused by a race condition with the heartbeat thread.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [ANNOUNCE] New committer: Jason Gustafson

2016-09-09 Thread Aditya Auradkar
Congrats Jason.

On Fri, Sep 9, 2016 at 5:30 AM, Michael Noll  wrote:

> My compliments, Jason -- well deserved! :-)
>
> -Michael
>
>
>
> On Wed, Sep 7, 2016 at 6:49 PM, Grant Henke  wrote:
>
> > Congratulations and thank you for all of your contributions to Apache
> > Kafka Jason!
> >
> > On Wed, Sep 7, 2016 at 10:12 AM, Mayuresh Gharat <
> > gharatmayures...@gmail.com
> > > wrote:
> >
> > > congrats Jason !
> > >
> > > Thanks,
> > >
> > > Mayuresh
> > >
> > > On Wed, Sep 7, 2016 at 5:16 AM, Eno Thereska 
> > > wrote:
> > >
> > > > Congrats Jason!
> > > >
> > > > Eno
> > > > > On 7 Sep 2016, at 10:00, Rajini Sivaram <
> > rajinisiva...@googlemail.com>
> > > > wrote:
> > > > >
> > > > > Congrats, Jason!
> > > > >
> > > > > On Wed, Sep 7, 2016 at 8:29 AM, Flavio P JUNQUEIRA  >
> > > > wrote:
> > > > >
> > > > >> Congrats, Jason. Well done and great to see this project inviting
> > new
> > > > >> committers.
> > > > >>
> > > > >> -Flavio
> > > > >>
> > > > >> On 7 Sep 2016 04:58, "Ashish Singh"  wrote:
> > > > >>
> > > > >>> Congrats, Jason!
> > > > >>>
> > > > >>> On Tuesday, September 6, 2016, Jason Gustafson <
> ja...@confluent.io
> > >
> > > > >> wrote:
> > > > >>>
> > > >  Thanks all!
> > > > 
> > > >  On Tue, Sep 6, 2016 at 5:13 PM, Becket Qin <
> becket@gmail.com
> > > >  > wrote:
> > > > 
> > > > > Congrats, Jason!
> > > > >
> > > > > On Tue, Sep 6, 2016 at 5:09 PM, Onur Karaman
> > > >   > > > >>
> > > > > wrote:
> > > > >
> > > > >> congrats jason!
> > > > >>
> > > > >> On Tue, Sep 6, 2016 at 4:12 PM, Sriram Subramanian <
> > > > >> r...@confluent.io
> > > >  >
> > > > >> wrote:
> > > > >>
> > > > >>> Congratulations Jason!
> > > > >>>
> > > > >>> On Tue, Sep 6, 2016 at 3:40 PM, Vahid S Hashemian <
> > > > >>> vahidhashem...@us.ibm.com 
> > > >  wrote:
> > > > >>>
> > > >  Congratulations Jason on this very well deserved
> recognition.
> > > > 
> > > >  --Vahid
> > > > 
> > > > 
> > > > 
> > > >  From:   Neha Narkhede
> > > >  To: dev@kafka.apache.org, us...@kafka.apache.org
> > > >  Cc: priv...@kafka.apache.org
> > > >  Date:   09/06/2016 03:26 PM
> > > >  Subject: [ANNOUNCE] New committer: Jason Gustafson
> > > > 
> > > > 
> > > >  The PMC for Apache Kafka has invited Jason Gustafson to join as a
> > > >  committer and we are pleased to announce that he has accepted!
> > > > 
> > > >  Jason has contributed numerous patches to a wide range of areas,
> > > >  notably within the new consumer and the Kafka Connect layers. He has
> > > >  displayed great taste and judgement which has been apparent through
> > > >  his involvement across the board from mailing lists, JIRA, code
> > > >  reviews to contributing features, bug fixes and code and
> > > >  documentation improvements.
> > > > 
> > > >  Thank you for your contribution and welcome to Apache Kafka, Jason!
> > > >  --
> > > >  Thanks,
> > > >  Neha
> > > > 
> > > > 
> > > > 
> > > > 
> > > > 
> > > > >>>
> > > > >>
> > > > >
> > > > 
> > > > >>>
> > > > >>>
> > > > >>> --
> > > > >>> Ashish
> > > > >>>
> > > > >>
> > > > >
> > > > >
> > > > >
> > > > > --
> > > > > Regards,
> > > > >
> > > > > Rajini
> > > >
> > > >
> > >
> > >
> > > --
> > > -Regards,
> > > Mayuresh R. Gharat
> > > (862) 250-7125
> > >
> >
> >
> >
> > --
> > Grant Henke
> > Software Engineer | Cloudera
> > gr...@cloudera.com | twitter.com/gchenke | linkedin.com/in/granthenke
> >
>


[jira] [Resolved] (KAFKA-3807) OffsetValidationTest - transient failure on test_broker_rolling_bounce

2016-09-09 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3807?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma resolved KAFKA-3807.

   Resolution: Fixed
 Reviewer: Ismael Juma
Fix Version/s: 0.10.1.0

> OffsetValidationTest - transient failure on test_broker_rolling_bounce
> --
>
> Key: KAFKA-3807
> URL: https://issues.apache.org/jira/browse/KAFKA-3807
> Project: Kafka
>  Issue Type: Bug
>  Components: system tests
>Reporter: Geoff Anderson
>Assignee: Jason Gustafson
> Fix For: 0.10.1.0
>
>
> {code}
> test_id:
> 2016-05-28--001.kafkatest.tests.client.consumer_test.OffsetValidationTest.test_broker_rolling_bounce
> status: FAIL
> run time:   3 minutes 8.042 seconds
> Broker rolling bounce caused 2 unexpected group rebalances
> Traceback (most recent call last):
>   File 
> "/var/lib/jenkins/workspace/system-test-kafka/kafka/venv/local/lib/python2.7/site-packages/ducktape-0.5.1-py2.7.egg/ducktape/tests/runner.py",
>  line 106, in run_all_tests
> data = self.run_single_test()
>   File 
> "/var/lib/jenkins/workspace/system-test-kafka/kafka/venv/local/lib/python2.7/site-packages/ducktape-0.5.1-py2.7.egg/ducktape/tests/runner.py",
>  line 162, in run_single_test
> return self.current_test_context.function(self.current_test)
>   File 
> "/var/lib/jenkins/workspace/system-test-kafka/kafka/tests/kafkatest/tests/client/consumer_test.py",
>  line 108, in test_broker_rolling_bounce
> "Broker rolling bounce caused %d unexpected group rebalances" % 
> unexpected_rebalances
> AssertionError: Broker rolling bounce caused 2 unexpected group rebalances
> {code}
> http://confluent-kafka-system-test-results.s3-us-west-2.amazonaws.com/2016-05-28--001.1464455059--apache--trunk--7b7c4a7/report.html



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request #1821: KAFKA-3807: Fix transient test failure caused by r...

2016-09-09 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1821


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-3807) OffsetValidationTest - transient failure on test_broker_rolling_bounce

2016-09-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3807?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15478255#comment-15478255
 ] 

ASF GitHub Bot commented on KAFKA-3807:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1821


> OffsetValidationTest - transient failure on test_broker_rolling_bounce
> --
>
> Key: KAFKA-3807
> URL: https://issues.apache.org/jira/browse/KAFKA-3807
> Project: Kafka
>  Issue Type: Bug
>  Components: system tests
>Reporter: Geoff Anderson
>Assignee: Jason Gustafson
>
> {code}
> test_id:
> 2016-05-28--001.kafkatest.tests.client.consumer_test.OffsetValidationTest.test_broker_rolling_bounce
> status: FAIL
> run time:   3 minutes 8.042 seconds
> Broker rolling bounce caused 2 unexpected group rebalances
> Traceback (most recent call last):
>   File 
> "/var/lib/jenkins/workspace/system-test-kafka/kafka/venv/local/lib/python2.7/site-packages/ducktape-0.5.1-py2.7.egg/ducktape/tests/runner.py",
>  line 106, in run_all_tests
> data = self.run_single_test()
>   File 
> "/var/lib/jenkins/workspace/system-test-kafka/kafka/venv/local/lib/python2.7/site-packages/ducktape-0.5.1-py2.7.egg/ducktape/tests/runner.py",
>  line 162, in run_single_test
> return self.current_test_context.function(self.current_test)
>   File 
> "/var/lib/jenkins/workspace/system-test-kafka/kafka/tests/kafkatest/tests/client/consumer_test.py",
>  line 108, in test_broker_rolling_bounce
> "Broker rolling bounce caused %d unexpected group rebalances" % 
> unexpected_rebalances
> AssertionError: Broker rolling bounce caused 2 unexpected group rebalances
> {code}
> http://confluent-kafka-system-test-results.s3-us-west-2.amazonaws.com/2016-05-28--001.1464455059--apache--trunk--7b7c4a7/report.html



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-4145) Avoid redundant integration testing in ProducerSendTests

2016-09-09 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4145?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15478241#comment-15478241
 ] 

Ismael Juma commented on KAFKA-4145:


I'd probably remove `testFlush`. The rest sounds good.

> Avoid redundant integration testing in ProducerSendTests
> 
>
> Key: KAFKA-4145
> URL: https://issues.apache.org/jira/browse/KAFKA-4145
> Project: Kafka
>  Issue Type: Improvement
>  Components: unit tests
>Reporter: Jason Gustafson
>Assignee: Vahid Hashemian
>
> We have a few test cases in {{BaseProducerSendTest}} which probably have 
> little value being tested for both Plaintext and SSL. We can move them to 
> {{PlaintextProducerSendTest}} and save a little bit on the build time. The 
> following tests seem like possible candidates:
> 1. testSendCompressedMessageWithCreateTime
> 2. testSendNonCompressedMessageWithCreateTime
> 3. testSendCompressedMessageWithLogAppendTime
> 4. testSendNonCompressedMessageWithLogApendTime
> 5. testAutoCreateTopic
> 6. testFlush
> 7. testSendWithInvalidCreateTime
> 8. testCloseWithZeroTimeoutFromCallerThread
> 9. testCloseWithZeroTimeoutFromSenderThread



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-4145) Avoid redundant integration testing in ProducerSendTests

2016-09-09 Thread Vahid Hashemian (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4145?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15478226#comment-15478226
 ] 

Vahid Hashemian commented on KAFKA-4145:


How about reducing the above list to:

# testSendCompressedMessageWithLogAppendTime
# testSendNonCompressedMessageWithLogApendTime
# testAutoCreateTopic
# testFlush
# testSendWithInvalidCreateTime


> Avoid redundant integration testing in ProducerSendTests
> 
>
> Key: KAFKA-4145
> URL: https://issues.apache.org/jira/browse/KAFKA-4145
> Project: Kafka
>  Issue Type: Improvement
>  Components: unit tests
>Reporter: Jason Gustafson
>Assignee: Vahid Hashemian
>
> We have a few test cases in {{BaseProducerSendTest}} which probably have 
> little value being tested for both Plaintext and SSL. We can move them to 
> {{PlaintextProducerSendTest}} and save a little bit on the build time. The 
> following tests seem like possible candidates:
> 1. testSendCompressedMessageWithCreateTime
> 2. testSendNonCompressedMessageWithCreateTime
> 3. testSendCompressedMessageWithLogAppendTime
> 4. testSendNonCompressedMessageWithLogApendTime
> 5. testAutoCreateTopic
> 6. testFlush
> 7. testSendWithInvalidCreateTime
> 8. testCloseWithZeroTimeoutFromCallerThread
> 9. testCloseWithZeroTimeoutFromSenderThread



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-4145) Avoid redundant integration testing in ProducerSendTests

2016-09-09 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4145?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15478195#comment-15478195
 ] 

Ismael Juma commented on KAFKA-4145:


I am not sure about the `testClose` ones. It seems like it would be good to 
test that with SSL as well. We should have a few `send` tests that are tested 
with multiple protocols.

> Avoid redundant integration testing in ProducerSendTests
> 
>
> Key: KAFKA-4145
> URL: https://issues.apache.org/jira/browse/KAFKA-4145
> Project: Kafka
>  Issue Type: Improvement
>  Components: unit tests
>Reporter: Jason Gustafson
>Assignee: Vahid Hashemian
>
> We have a few test cases in {{BaseProducerSendTest}} which probably have 
> little value being tested for both Plaintext and SSL. We can move them to 
> {{PlaintextProducerSendTest}} and save a little bit on the build time. The 
> following tests seem like possible candidates:
> 1. testSendCompressedMessageWithCreateTime
> 2. testSendNonCompressedMessageWithCreateTime
> 3. testSendCompressedMessageWithLogAppendTime
> 4. testSendNonCompressedMessageWithLogApendTime
> 5. testAutoCreateTopic
> 6. testFlush
> 7. testSendWithInvalidCreateTime
> 8. testCloseWithZeroTimeoutFromCallerThread
> 9. testCloseWithZeroTimeoutFromSenderThread



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Work started] (KAFKA-4145) Avoid redundant integration testing in ProducerSendTests

2016-09-09 Thread Vahid Hashemian (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4145?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on KAFKA-4145 started by Vahid Hashemian.
--
> Avoid redundant integration testing in ProducerSendTests
> 
>
> Key: KAFKA-4145
> URL: https://issues.apache.org/jira/browse/KAFKA-4145
> Project: Kafka
>  Issue Type: Improvement
>  Components: unit tests
>Reporter: Jason Gustafson
>Assignee: Vahid Hashemian
>
> We have a few test cases in {{BaseProducerSendTest}} which probably have 
> little value being tested for both Plaintext and SSL. We can move them to 
> {{PlaintextProducerSendTest}} and save a little bit on the build time. The 
> following tests seem like possible candidates:
> 1. testSendCompressedMessageWithCreateTime
> 2. testSendNonCompressedMessageWithCreateTime
> 3. testSendCompressedMessageWithLogAppendTime
> 4. testSendNonCompressedMessageWithLogApendTime
> 5. testAutoCreateTopic
> 6. testFlush
> 7. testSendWithInvalidCreateTime
> 8. testCloseWithZeroTimeoutFromCallerThread
> 9. testCloseWithZeroTimeoutFromSenderThread



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (KAFKA-4145) Avoid redundant integration testing in ProducerSendTests

2016-09-09 Thread Vahid Hashemian (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4145?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vahid Hashemian reassigned KAFKA-4145:
--

Assignee: Vahid Hashemian

> Avoid redundant integration testing in ProducerSendTests
> 
>
> Key: KAFKA-4145
> URL: https://issues.apache.org/jira/browse/KAFKA-4145
> Project: Kafka
>  Issue Type: Improvement
>  Components: unit tests
>Reporter: Jason Gustafson
>Assignee: Vahid Hashemian
>
> We have a few test cases in {{BaseProducerSendTest}} which probably have 
> little value being tested for both Plaintext and SSL. We can move them to 
> {{PlaintextProducerSendTest}} and save a little bit on the build time. The 
> following tests seem like possible candidates:
> 1. testSendCompressedMessageWithCreateTime
> 2. testSendNonCompressedMessageWithCreateTime
> 3. testSendCompressedMessageWithLogAppendTime
> 4. testSendNonCompressedMessageWithLogApendTime
> 5. testAutoCreateTopic
> 6. testFlush
> 7. testSendWithInvalidCreateTime
> 8. testCloseWithZeroTimeoutFromCallerThread
> 9. testCloseWithZeroTimeoutFromSenderThread



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-4135) Inconsistent javadoc for KafkaConsumer.poll behavior when there is no subscription

2016-09-09 Thread Vahid Hashemian (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4135?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vahid Hashemian updated KAFKA-4135:
---
Status: Patch Available  (was: Open)

> Inconsistent javadoc for KafkaConsumer.poll behavior when there is no 
> subscription
> --
>
> Key: KAFKA-4135
> URL: https://issues.apache.org/jira/browse/KAFKA-4135
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Reporter: Jason Gustafson
>Assignee: Vahid Hashemian
>Priority: Minor
>
> Currently, the javadoc for {{KafkaConsumer.poll}} says the following: 
> "It is an error to not have subscribed to any topics or partitions before 
> polling for data." However, we don't actually raise an exception if this is 
> the case. Perhaps we should?
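
An illustration of the inconsistency described above: no subscribe() or assign() call has been made, yet (per the javadoc discussion) poll() does not raise an error today, it just comes back without records. The change under discussion would make this an explicit exception instead. The bootstrap address and group id below are placeholders.

{code}
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

// Demonstrates calling poll() without any subscription or assignment.
public class PollWithoutSubscription {

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "example-group");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        try {
            // Intentionally no subscribe()/assign() here.
            ConsumerRecords<String, String> records = consumer.poll(100);
            System.out.println("records returned: " + records.count());
        } finally {
            consumer.close();
        }
    }
}
{code}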



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request #1840: MINOR: catch InvalidStateStoreException in Queryab...

2016-09-09 Thread dguy
GitHub user dguy opened a pull request:

https://github.com/apache/kafka/pull/1840

MINOR: catch InvalidStateStoreException in QueryableStateIntegrationTest

A couple of the tests may transiently fail in QueryableStateIntegrationTest 
as they are not catching InvalidStateStoreException. This exception is expected 
during rebalance.
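
For illustration, a minimal sketch of the retry pattern, assuming a running KafkaStreams instance and a key-value store name (both names below are hypothetical, not taken from the PR):

    import org.apache.kafka.streams.KafkaStreams;
    import org.apache.kafka.streams.errors.InvalidStateStoreException;
    import org.apache.kafka.streams.state.QueryableStoreTypes;
    import org.apache.kafka.streams.state.ReadOnlyKeyValueStore;

    public class StoreLookup {

        // Retry the store lookup until the store has been (re-)initialized;
        // InvalidStateStoreException is expected while a rebalance is in progress.
        public static ReadOnlyKeyValueStore<String, Long> waitForStore(KafkaStreams streams,
                                                                       String storeName)
                throws InterruptedException {
            while (true) {
                try {
                    return streams.store(storeName, QueryableStoreTypes.<String, Long>keyValueStore());
                } catch (InvalidStateStoreException e) {
                    Thread.sleep(100); // store not queryable yet, try again
                }
            }
        }
    }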

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/dguy/kafka minor-fix

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1840.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1840


commit 2ad606c77ecebd314b4f2fe2e20cf7038e1ac980
Author: Damian Guy 
Date:   2016-09-09T19:23:59Z

catch InvalidStateStoreException in test




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-4135) Inconsistent javadoc for KafkaConsumer.poll behavior when there is no subscription

2016-09-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4135?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15477973#comment-15477973
 ] 

ASF GitHub Bot commented on KAFKA-4135:
---

GitHub user vahidhashemian opened a pull request:

https://github.com/apache/kafka/pull/1839

KAFKA-4135: Consumer polls when not subscribed to any topic or assigned any 
partition should raise an exception

When the consumer is not subscribed to any topic or, in the case of manual 
assignment, is not assigned any partition, calling `poll()` should raise an 
exception.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/vahidhashemian/kafka KAFKA-4135

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1839.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1839


commit f66891a59f8c5e93fd128cdaae179c6cef4eb025
Author: Vahid Hashemian 
Date:   2016-09-09T18:56:56Z

KAFKA-4135: Consumer polls when not subscribed to any topic or assigned any 
partition should raise an exception

When the consumer is not subscribed to any topic or, in the case of manual 
assignment, is not assigned any partition, calling `poll()` should lead to an 
exception.




> Inconsistent javadoc for KafkaConsumer.poll behavior when there is no 
> subscription
> --
>
> Key: KAFKA-4135
> URL: https://issues.apache.org/jira/browse/KAFKA-4135
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Reporter: Jason Gustafson
>Assignee: Vahid Hashemian
>Priority: Minor
>
> Currently, the javadoc for {{KafkaConsumer.poll}} says the following: 
> "It is an error to not have subscribed to any topics or partitions before 
> polling for data." However, we don't actually raise an exception if this is 
> the case. Perhaps we should?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request #1839: KAFKA-4135: Consumer polls when not subscribed to ...

2016-09-09 Thread vahidhashemian
GitHub user vahidhashemian opened a pull request:

https://github.com/apache/kafka/pull/1839

KAFKA-4135: Consumer polls when not subscribed to any topic or assigned any 
partition should raise an exception

When the consumer is not subscribed to any topic or, in the case of manual 
assignment, is not assigned any partition, calling `poll()` should raise an 
exception.
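
As a rough illustration of the proposed behaviour (not the actual patch), a wrapper around the consumer could guard poll() along these lines; the class and method names are made up:

    import org.apache.kafka.clients.consumer.Consumer;
    import org.apache.kafka.clients.consumer.ConsumerRecords;

    public class GuardedPoll {

        // Hypothetical illustration: fail fast when poll() is called with neither
        // a subscription nor a manual assignment.
        public static <K, V> ConsumerRecords<K, V> poll(Consumer<K, V> consumer, long timeout) {
            if (consumer.subscription().isEmpty() && consumer.assignment().isEmpty()) {
                throw new IllegalStateException(
                        "Consumer is not subscribed to any topics or assigned any partitions");
            }
            return consumer.poll(timeout);
        }
    }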

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/vahidhashemian/kafka KAFKA-4135

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1839.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1839


commit f66891a59f8c5e93fd128cdaae179c6cef4eb025
Author: Vahid Hashemian 
Date:   2016-09-09T18:56:56Z

KAFKA-4135: Consumer polls when not subscribed to any topic or assigned any 
partition should raise an exception

When the consumer is not subscribed to any topic or, in the case of manual 
assignment, is not assigned any partition, calling `poll()` should lead to an 
exception.




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Re: Subscribe me

2016-09-09 Thread Guozhang Wang
Hi Kiran,

You can follow the instructions here to subscribe:
http://kafka.apache.org/contact.html

Guozhang

On Fri, Sep 9, 2016 at 6:57 AM, Kiran Potladurthi <
kiran.potladur...@gmail.com> wrote:

> Please subscribe me for this mailing list.
>
> Regards
> Kiran
>



-- 
-- Guozhang


Re: [DISCUSS] KIP-79 - ListOffsetRequest v1 and offsetForTime() method in new consumer.

2016-09-09 Thread Becket Qin
Completely agree that we should have a consistent representation of
something missing/unknown in the code.

My understanding of the convention is that -1 means "not
available"/"unknown". For example, when a producer request fails, the offset
we use in the callback is -1. And Message.NoTimestamp is also -1. Using -1
instead of null has at least two benefits.
1) it works for primitive types as well as classes.
2) it is easy to send via wire protocols.

For the other use cases, Option/Null could be better.

Thanks,
Jiangjie (Becket) Qin

On Fri, Sep 9, 2016 at 8:06 AM, Ismael Juma  wrote:

> Our inconsistent usage of magic values like `-1` and `null` to represent
> missing values makes it hard to reason about code. We had a recent major
> regression in MirrorMaker as a result of this[1], for example. `null` by
> itself is hard enough (often called the billion dollar mistake), so can we
> decide what we prefer and stick with that going forward?
>
> My take is: `Option`/`Optional` > `null` > `-1` unless it's a particularly
> performance sensitive area where the boxing incurred by having `null` as a
> possible value is a problem. Since we're still on Java 7, we can't use
> `Optional` in Java code.
>
> Ismael
>
> [1] The fix:
> https://github.com/apache/kafka/commit/4e4e2fb5085758ee9ccf6307433ad5
> 31a33198d3
>
> On Fri, Sep 9, 2016 at 3:48 PM, Jun Rao  wrote:
>
> > Jiangjie,
> >
> > Returning TimestampOffset(-1, -1) sounds reasonable to me.
> >
> > Thanks,
> >
> > Jun
> >
> > On Thu, Sep 8, 2016 at 8:46 PM, Becket Qin  wrote:
> >
> > > Hi Jun,
> > >
> > > > 1. latestOffsets() returns the next offset. So there won't be a
> > timestamp
> > > > associated with it. Would we use something like -1 for timestamp?
> > >
> > > The returned value would be the high watermark, so there might be an
> associated
> > > timestamp. But if it is log end offset, it seems that -1 is a
> reasonable
> > > value.
> > >
> > > > 2. Jason mentioned that if no message has timestamp >= the provided
> > > > timestamp, we return a null value for that partition. Could we
> document
> > > > that in the wiki?
> > >
> > > I made a minor change to return a TimestampOffset(-1, -1) in that case.
> > Not
> > > sure which one is better but there seems to be only a minor difference. What do
> > you
> > > think?
> > >
> > > I haven't seen a planned release date yet, but I can probably get it
> done
> > > in 2-3 weeks with reasonable rounds of reviews.
> > >
> > > Thanks,
> > >
> > > Jiangjie (Becket) Qin
> > >
> > >
> > > On Thu, Sep 8, 2016 at 6:24 PM, Jun Rao  wrote:
> > >
> > > > Hi, Jiangjie,
> > > >
> > > > Thanks for the updated KIP. A couple of minor comments.
> > > >
> > > > 1. latestOffsets() returns the next offset. So there won't be a
> > timestamp
> > > > associated with it. Would we use something like -1 for timestamp?
> > > >
> > > > 2. Jason mentioned that if no message has timestamp >= the provided
> > > > timestamp, we return a null value for that partition. Could we
> document
> > > > that in the wiki?
> > > >
> > > > BTW, we are getting close to the next release. This is a really nice
> > > > feature to have. Do you think you will have a patch ready for the
> next
> > > > release?
> > > >
> > > > Thanks.
> > > >
> > > > Jun
> > > >
> > > > On Wed, Sep 7, 2016 at 2:47 PM, Becket Qin 
> > wrote:
> > > >
> > > > > That sounds reasonable to me. I'll update the KIP wiki page.
> > > > >
> > > > > On Wed, Sep 7, 2016 at 1:34 PM, Jason Gustafson <
> ja...@confluent.io>
> > > > > wrote:
> > > > >
> > > > > > Hey Becket,
> > > > > >
> > > > > > I don't have a super strong preference, but I think this
> > > > > >
> > > > > > earliestOffset(singleton(partition));
> > > > > >
> > > > > > captures the intent more clearly than this:
> > > > > >
> > > > > > offsetsForTimes(singletonMap(partition, -1));
> > > > > >
> > > > > > I can understand the desire to keep the API footprint small, but
> I
> > > > think
> > > > > > the use case is common enough to justify separate APIs. A couple
> > > > > additional
> > > > > > points:
> > > > > >
> > > > > > 1. If we had separate methods, it might make sense to treat
> > negative
> > > > > > timestamps as illegal in offsetsForTimes. That seems safer from
> the
> > > > user
> > > > > > perspective since legitimate timestamps should always be
> positive.
> > > > > > 2. The expected behavior of offsetsForTimes is to return the
> > earliest
> > > > > > offset which is greater than or equal to the passed offset, so
> > having
> > > > > > Long.MAX_VALUE return the latest value doesn't seem very
> intuitive
> > to
> > > > > me. I
> > > > > > would actually expect it to return null.
> > > > > >
> > > > > > Given that, I think I prefer having the custom methods. What do
> you
> > > > > think?
> > > > > >
> > > > > > Thanks,
> > > > > > Jason
> > > > > >
> > > > > > On Wed, Sep 7, 2016 at 1:00 PM, Becket Qin  >
> > > > wrote:
> > > > > >
> > > > > > > Hi Jason,
> > > > > > >
> > > > > > > Thanks for the feedback. That is a good point. For

Build failed in Jenkins: kafka-trunk-jdk7 #1529

2016-09-09 Thread Apache Jenkins Server
See 

Changes:

[ismael] HOTFIX: Temporarily ignoring this test until fixed

--
[...truncated 6229 lines...]

kafka.log.BrokerCompressionTest > testBrokerSideCompression[10] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[11] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[11] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[12] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[12] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[13] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[13] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[14] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[14] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[15] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[15] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[16] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[16] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[17] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[17] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[18] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[18] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[19] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[19] PASSED

kafka.log.LogCleanerIntegrationTest > 
testCleansCombinedCompactAndDeleteTopic[0] STARTED

kafka.log.LogCleanerIntegrationTest > 
testCleansCombinedCompactAndDeleteTopic[0] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[0] STARTED

kafka.log.LogCleanerIntegrationTest > cleanerTest[0] PASSED

kafka.log.LogCleanerIntegrationTest > testCleanerWithMessageFormatV0[0] STARTED

kafka.log.LogCleanerIntegrationTest > testCleanerWithMessageFormatV0[0] PASSED

kafka.log.LogCleanerIntegrationTest > 
testCleansCombinedCompactAndDeleteTopic[1] STARTED

kafka.log.LogCleanerIntegrationTest > 
testCleansCombinedCompactAndDeleteTopic[1] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[1] STARTED

kafka.log.LogCleanerIntegrationTest > cleanerTest[1] PASSED

kafka.log.LogCleanerIntegrationTest > testCleanerWithMessageFormatV0[1] STARTED

kafka.log.LogCleanerIntegrationTest > testCleanerWithMessageFormatV0[1] PASSED

kafka.log.LogCleanerIntegrationTest > 
testCleansCombinedCompactAndDeleteTopic[2] STARTED

kafka.log.LogCleanerIntegrationTest > 
testCleansCombinedCompactAndDeleteTopic[2] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[2] STARTED

kafka.log.LogCleanerIntegrationTest > cleanerTest[2] PASSED

kafka.log.LogCleanerIntegrationTest > testCleanerWithMessageFormatV0[2] STARTED

kafka.log.LogCleanerIntegrationTest > testCleanerWithMessageFormatV0[2] PASSED

kafka.log.LogCleanerIntegrationTest > 
testCleansCombinedCompactAndDeleteTopic[3] STARTED

kafka.log.LogCleanerIntegrationTest > 
testCleansCombinedCompactAndDeleteTopic[3] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[3] STARTED

kafka.log.LogCleanerIntegrationTest > cleanerTest[3] PASSED

kafka.log.LogCleanerIntegrationTest > testCleanerWithMessageFormatV0[3] STARTED

kafka.log.LogCleanerIntegrationTest > testCleanerWithMessageFormatV0[3] PASSED

kafka.log.LogManagerTest > testCleanupSegmentsToMaintainSize STARTED

kafka.log.LogManagerTest > testCleanupSegmentsToMaintainSize PASSED

kafka.log.LogManagerTest > testRecoveryDirectoryMappingWithRelativeDirectory 
STARTED

kafka.log.LogManagerTest > testRecoveryDirectoryMappingWithRelativeDirectory 
PASSED

kafka.log.LogManagerTest > testGetNonExistentLog STARTED

kafka.log.LogManagerTest > testGetNonExistentLog PASSED

kafka.log.LogManagerTest > testTwoLogManagersUsingSameDirFails STARTED

kafka.log.LogManagerTest > testTwoLogManagersUsingSameDirFails PASSED

kafka.log.LogManagerTest > testLeastLoadedAssignment STARTED

kafka.log.LogManagerTest > testLeastLoadedAssignment PASSED

kafka.log.LogManagerTest > testCleanupExpiredSegments STARTED

kafka.log.LogManagerTest > testCleanupExpiredSegments PASSED

kafka.log.LogManagerTest > testCheckpointRecoveryPoints STARTED

kafka.log.LogManagerTest > testCheckpointRecoveryPoints PASSED

kafka.log.LogManagerTest > testTimeBasedFlush STARTED

kafka.log.LogManagerTest > testTimeBasedFlush PASSED

kafka.log.LogManagerTest > testCreateLog STARTED

kafka.log.LogManagerTest > testCreateLog PASSED

kafka.log.LogManagerTest > testRecoveryDirectoryMappingWithTrailingSlash STARTED

kafka.log.LogManagerTest > testRecoveryDirectoryMappingWithTrailingSlash PASSED

kafka.log.LogManagerTest > testDoesntCleanLogsWithCompactDeletePolicy STARTED

kafka.log.LogManagerTest > testDoesntCleanLogsWithCompactDeletePolicy PASSED

kafka.log.LogTest > testParseTopicPartitionNameForMissingTopic STARTED

kafk

[jira] [Created] (KAFKA-4146) Kafka Stream ignores Serde exceptions leading to silently broken apps

2016-09-09 Thread Elias Levy (JIRA)
Elias Levy created KAFKA-4146:
-

 Summary: Kafka Stream ignores Serde exceptions leading to silently 
broken apps
 Key: KAFKA-4146
 URL: https://issues.apache.org/jira/browse/KAFKA-4146
 Project: Kafka
  Issue Type: Bug
  Components: streams
Affects Versions: 0.10.0.1
Reporter: Elias Levy
Assignee: Guozhang Wang


It appears that Kafka Streams silently ignores Serde exceptions, leading to apps 
that are silently broken.

E.g. if you make use of {{Stream.through("topic")}} and the default Serde is 
inappropriate for the type, the app will silently drop the data on the floor 
without even the courtesy of printing a single error message.

At the very least an initial error message should be generated, with the option 
to generate a message for each such failure or a sampling of such failures. 
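
For illustration, one way to avoid relying on a mismatched default Serde in the 0.10.0 DSL is to pass the Serdes to through() explicitly; a minimal sketch (the stream, method, and topic names below are made up):

    import org.apache.kafka.common.serialization.Serdes;
    import org.apache.kafka.streams.kstream.KStream;

    public class ExplicitSerdes {

        // Sketch: pass the key/value Serdes to through() explicitly instead of relying
        // on the default Serde, which may not match the stream's actual types.
        public static KStream<String, Long> repartition(KStream<String, Long> counts, String topic) {
            return counts.through(Serdes.String(), Serdes.Long(), topic);
        }
    }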



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Agenda item for next KIP meeting

2016-09-09 Thread radai
Hi,

I'd like to discuss KIP-72. could you please add me?

On Fri, Sep 9, 2016 at 7:48 AM, Ismael Juma  wrote:

> Hi Vahid,
>
> Sounds good. Jun is unavailable on the day, so I'll send the invite. Any
> other KIPs that people would like to discuss on the day (Tuesday at 11am
> PT)? The following KIPs are under discussion and have either been submitted
> or updated recently:
>
> KIP-68: Add a consumed log retention before log retention
> KIP-72: Allow Sizing Incoming Request Queue in Bytes
> KIP-79: ListOffsetRequest/ListOffsetResponse v1 and add timestamp search
> methods to the new consumer
>
> If the authors of any KIPs are available and interested to discuss them on
> the next KIP call, please let me know so that I can add them to the agenda.
>
> Ismael
>
> On Wed, Sep 7, 2016 at 12:14 AM, Vahid S Hashemian <
> vahidhashem...@us.ibm.com> wrote:
>
> > Hi,
> >
> > I'd like to add KIP-54 to the agenda for next KIP meeting.
> > This KIP has been in discussion phase for a long time, and it would be
> > nice to have an online discussion about it, collect additional feedback,
> > and move forward, if possible.
> >
> > Thanks.
> >
> > Regards,
> > --Vahid
> >
> >
>


Re: Queryable state client read guarantees

2016-09-09 Thread Damian Guy
Hi Mikael,

During a rebalance both instances should throw InvalidStateStoreException
until the rebalance has completed. Once the rebalance has completed, if the
key is not found in the local store, then you would get a null value. You
can always find the Kafka Streams instance that will have that key
(assuming it exists) by using:

StreamsMetadata KafkaStreams.metadataForKey(String storeName, K key,
Serializer<K> keySerializer)

The StreamsMetadata will tell you which instance, via HostInfo, has the
given key.
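
For illustration, a minimal sketch of using metadataForKey to decide whether to serve a lookup locally or forward it to the owning instance; the store name and the String key serializer below are assumptions, not taken from this thread:

    import org.apache.kafka.common.serialization.Serdes;
    import org.apache.kafka.streams.KafkaStreams;
    import org.apache.kafka.streams.state.HostInfo;
    import org.apache.kafka.streams.state.StreamsMetadata;

    public class KeyLocator {

        // Sketch: ask the local KafkaStreams instance which host owns the given key,
        // so a query can be answered locally or forwarded to the owning instance.
        public static HostInfo hostFor(KafkaStreams streams, String storeName, String key) {
            StreamsMetadata metadata =
                    streams.metadataForKey(storeName, key, Serdes.String().serializer());
            return metadata == null ? null : metadata.hostInfo();
        }
    }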

HTH,
Damian




On Fri, 9 Sep 2016 at 16:56 Mikael Högqvist  wrote:

> Hi Damian,
>
> thanks for fixing this so quickly, I re-ran the test and it works fine.
>
> The next test I tried was to read from two service instances implementing
> the same string count topology. First, the client is started sending two
> read requests, one per instance, every second. Next, I start the first
> instance and let it complete the store init before the next instance is
> started.
>
> Below is the initial part of the trace when going from 0 to 1 instance. The
> trace log has the following columns: request id, instance, response code
> and value.
>
> 3,localhost:2030,503,
> 3,localhost:2031,503,
> 4,localhost:2030,503,
> 4,localhost:2031,503,
> 5,localhost:2030,200,2
> 5,localhost:2031,503,
> 6,localhost:2030,200,2
> 6,localhost:2031,503,
>
> Before the instance is started, both return 503, the status returned by the
> client when it cannot connect to an instance. When the first instance has
> started it returns the expected value 2 for request pair 5, 6 and so on.
> The trace below is from when the second instance starts.
>
> 18,localhost:2030,200,2
> 18,localhost:2031,503,
> 19,localhost:2030,404,
> 19,localhost:2031,503,
> 20,localhost:2030,404,
> 20,localhost:2031,503,
> 21,localhost:2030,404,
> 21,localhost:2031,200,2
> 22,localhost:2030,404,
> 22,localhost:2031,200,2
>
> The new instance takes over responsibility for the partition containing the
> key "hello". During this period the new instance returns 503 as expected
> until the store is ready. The issue is that the first instance that stored
> the value starts returning 404 from request pair 19. A client doing
> requests for this key would then have the following sequence:
>
> 18 -> 2
> 19 -> Not found
> 20 -> Not found
> 21 -> 2
>
> From the client perspective, I think this violates the guarantee of always
> reading the latest value.
>
> Am I making the wrong assumptions or is there some way to detect that the
> local store is not responsible for the key anymore?
>
> Best,
> Mikael
>
> On Thu, Sep 8, 2016 at 11:03 AM Damian Guy  wrote:
>
> > Hi Mikael,
> >
> > A fix for KAFKA-4123 
> > (the
> > issue you found with receiving null values) has now been committed to
> > trunk. I've tried it with your github repo and it appears to be working.
> > You will have to make a small change to your code as we now throw
> > InvalidStateStoreException when the Stores are unavailable (previously we
> > returned null).
> >
> > We added a test here
> > <
> >
> https://github.com/apache/kafka/blob/trunk/streams/src/test/java/org/apache/kafka/streams/integration/QueryableStateIntegrationTest.java#L431
> > >
> > to
> > make sure we only get a value once the store has been (re-)initialized.
> > Please give it a go and thanks for your help in finding this issue.
> >
> > Thanks,
> > Damian
> >
> > On Mon, 5 Sep 2016 at 22:07 Mikael Högqvist  wrote:
> >
> > > Hi Damian,
> > >
> > > > > Failed to read key hello, org.mkhq.kafka.Topology$StoreUnavailable
> > > > > > Failed to read key hello, org.mkhq.kafka.Topology$KeyNotFound
> > > > > > hello -> 10
> > > > >
> > > > >
> > > > The case where you get KeyNotFound looks like a bug to me. This
> > shouldn't
> > > > happen. I can see why it might happen and we will create a JIRA and
> fix
> > > it
> > > > right away.
> > > >
> > >
> > > Great, thanks for looking into this. I'll try again once the PR is
> > merged.
> > >
> > >
> > > >
> > > > I'm not sure how you end up with (hello -> 10). It could indicate
> that
> > > the
> > > > offsets for the topic you are consuming from weren't committed so the
> > > data
> > > > gets processed again on the restart.
> > > >
> > >
> > > Yes, it didn't commit the offsets since streams.close() was not called
> on
> > > ctrl-c. Fixed by adding a shutdown hook.
> > >
> > > Thanks,
> > > Mikael
> > >
> > >
> > > > Thanks,
> > > > Damian
> > > >
> > >
> >
>


Build failed in Jenkins: kafka-trunk-jdk8 #872

2016-09-09 Thread Apache Jenkins Server
See 

Changes:

[ismael] HOTFIX: Temporarily ignoring this test until fixed

--
[...truncated 6794 lines...]

kafka.api.ProducerBounceTest > testBrokerFailure STARTED

kafka.api.ProducerBounceTest > testBrokerFailure PASSED

kafka.api.ProducerFailureHandlingTest > testCannotSendToInternalTopic STARTED

kafka.api.ProducerFailureHandlingTest > testCannotSendToInternalTopic PASSED

kafka.api.ProducerFailureHandlingTest > testTooLargeRecordWithAckOne STARTED

kafka.api.ProducerFailureHandlingTest > testTooLargeRecordWithAckOne PASSED

kafka.api.ProducerFailureHandlingTest > testWrongBrokerList STARTED

kafka.api.ProducerFailureHandlingTest > testWrongBrokerList PASSED

kafka.api.ProducerFailureHandlingTest > testNotEnoughReplicas STARTED

kafka.api.ProducerFailureHandlingTest > testNotEnoughReplicas PASSED

kafka.api.ProducerFailureHandlingTest > testNonExistentTopic STARTED

kafka.api.ProducerFailureHandlingTest > testNonExistentTopic PASSED

kafka.api.ProducerFailureHandlingTest > testInvalidPartition STARTED

kafka.api.ProducerFailureHandlingTest > testInvalidPartition PASSED

kafka.api.ProducerFailureHandlingTest > testSendAfterClosed STARTED

kafka.api.ProducerFailureHandlingTest > testSendAfterClosed PASSED

kafka.api.ProducerFailureHandlingTest > testTooLargeRecordWithAckZero STARTED

kafka.api.ProducerFailureHandlingTest > testTooLargeRecordWithAckZero PASSED

kafka.api.ProducerFailureHandlingTest > 
testNotEnoughReplicasAfterBrokerShutdown STARTED

kafka.api.ProducerFailureHandlingTest > 
testNotEnoughReplicasAfterBrokerShutdown PASSED

kafka.api.RackAwareAutoTopicCreationTest > testAutoCreateTopic STARTED

kafka.api.RackAwareAutoTopicCreationTest > testAutoCreateTopic PASSED

kafka.api.SaslPlaintextConsumerTest > testCoordinatorFailover STARTED

kafka.api.SaslPlaintextConsumerTest > testCoordinatorFailover PASSED

kafka.api.SaslPlaintextConsumerTest > testSimpleConsumption STARTED

kafka.api.SaslPlaintextConsumerTest > testSimpleConsumption PASSED

kafka.api.SslConsumerTest > testCoordinatorFailover STARTED

kafka.api.SslConsumerTest > testCoordinatorFailover PASSED

kafka.api.SslConsumerTest > testSimpleConsumption STARTED

kafka.api.SslConsumerTest > testSimpleConsumption PASSED

kafka.api.ConsumerBounceTest > testSeekAndCommitWithBrokerFailures STARTED

kafka.api.ConsumerBounceTest > testSeekAndCommitWithBrokerFailures PASSED

kafka.api.ConsumerBounceTest > testConsumptionWithBrokerFailures STARTED

kafka.api.ConsumerBounceTest > testConsumptionWithBrokerFailures PASSED

kafka.api.AuthorizerIntegrationTest > testCommitWithTopicWrite STARTED

kafka.api.AuthorizerIntegrationTest > testCommitWithTopicWrite PASSED

kafka.api.AuthorizerIntegrationTest > testConsumeWithNoTopicAccess STARTED

kafka.api.AuthorizerIntegrationTest > testConsumeWithNoTopicAccess PASSED

kafka.api.AuthorizerIntegrationTest > 
testCreatePermissionNeededToReadFromNonExistentTopic STARTED

kafka.api.AuthorizerIntegrationTest > 
testCreatePermissionNeededToReadFromNonExistentTopic PASSED

kafka.api.AuthorizerIntegrationTest > testDeleteWithWildCardAuth STARTED

kafka.api.AuthorizerIntegrationTest > testDeleteWithWildCardAuth PASSED

kafka.api.AuthorizerIntegrationTest > testOffsetFetchWithNoAccess STARTED

kafka.api.AuthorizerIntegrationTest > testOffsetFetchWithNoAccess PASSED

kafka.api.AuthorizerIntegrationTest > testListOfsetsWithTopicDescribe STARTED

kafka.api.AuthorizerIntegrationTest > testListOfsetsWithTopicDescribe PASSED

kafka.api.AuthorizerIntegrationTest > testProduceWithTopicRead STARTED

kafka.api.AuthorizerIntegrationTest > testProduceWithTopicRead PASSED

kafka.api.AuthorizerIntegrationTest > testListOffsetsWithNoTopicAccess STARTED

kafka.api.AuthorizerIntegrationTest > testListOffsetsWithNoTopicAccess PASSED

kafka.api.AuthorizerIntegrationTest > 
testCreatePermissionNeededForWritingToNonExistentTopic STARTED

kafka.api.AuthorizerIntegrationTest > 
testCreatePermissionNeededForWritingToNonExistentTopic PASSED

kafka.api.AuthorizerIntegrationTest > 
testSimpleConsumeWithOffsetLookupAndNoGroupAccess STARTED

kafka.api.AuthorizerIntegrationTest > 
testSimpleConsumeWithOffsetLookupAndNoGroupAccess PASSED

kafka.api.AuthorizerIntegrationTest > testConsumeWithTopicDescribe STARTED

kafka.api.AuthorizerIntegrationTest > testConsumeWithTopicDescribe PASSED

kafka.api.AuthorizerIntegrationTest > testOffsetFetchWithNoTopicAccess STARTED

kafka.api.AuthorizerIntegrationTest > testOffsetFetchWithNoTopicAccess PASSED

kafka.api.AuthorizerIntegrationTest > testCommitWithNoTopicAccess STARTED

kafka.api.AuthorizerIntegrationTest > testCommitWithNoTopicAccess PASSED

kafka.api.AuthorizerIntegrationTest > testProduceWithNoTopicAccess STARTED

kafka.api.AuthorizerIntegrationTest > testProduceWithNoTopicAccess PASSED

kafka.api.AuthorizerIntegrationTest > testProduceWithTopicWrite STARTED

kafka.api.AuthorizerIntegrationTest > tes

Re: Queryable state client read guarantees

2016-09-09 Thread Mikael Högqvist
Hi Damian,

thanks for fixing this so quickly, I re-ran the test and it works fine.

The next test I tried was to read from two service instances implementing
the same string count topology. First, the client is started sending two
read requests, one per instance, every second. Next, I start the first
instance and let it complete the store init before the next instance is
started.

Below is the initial part of the trace when going from 0 to 1 instance. The
trace log has the following columns: request id, instance, response code
and value.

3,localhost:2030,503,
3,localhost:2031,503,
4,localhost:2030,503,
4,localhost:2031,503,
5,localhost:2030,200,2
5,localhost:2031,503,
6,localhost:2030,200,2
6,localhost:2031,503,

Before the instance is started, both return 503, the status returned by the
client when it cannot connect to an instance. When the first instance has
started it returns the expected value 2 for request pair 5, 6 and so on.
The trace below is from when the second instance starts.

18,localhost:2030,200,2
18,localhost:2031,503,
19,localhost:2030,404,
19,localhost:2031,503,
20,localhost:2030,404,
20,localhost:2031,503,
21,localhost:2030,404,
21,localhost:2031,200,2
22,localhost:2030,404,
22,localhost:2031,200,2

The new instance takes over responsibility for the partition containing the
key "hello". During this period the new instance returns 503 as expected
until the store is ready. The issue is that the first instance that stored
the value starts returning 404 from request pair 19. A client doing
requests for this key would then have the following sequence:

18 -> 2
19 -> Not found
20 -> Not found
21 -> 2

From the client perspective, I think this violates the guarantee of always
reading the latest value.

Am I making the wrong assumptions or is there some way to detect that the
local store is not responsible for the key anymore?

Best,
Mikael

On Thu, Sep 8, 2016 at 11:03 AM Damian Guy  wrote:

> Hi Mikael,
>
> A fix for KAFKA-4123 
> (the
> issue you found with receiving null values) has now been committed to
> trunk. I've tried it with your github repo and it appears to be working.
> You will have to make a small change to your code as we now throw
> InvalidStateStoreException when the Stores are unavailable (previously we
> returned null).
>
> We added a test here
> <
> https://github.com/apache/kafka/blob/trunk/streams/src/test/java/org/apache/kafka/streams/integration/QueryableStateIntegrationTest.java#L431
> >
> to
> make sure we only get a value once the store has been (re-)initialized.
> Please give it a go and thanks for your help in finding this issue.
>
> Thanks,
> Damian
>
> On Mon, 5 Sep 2016 at 22:07 Mikael Högqvist  wrote:
>
> > Hi Damian,
> >
> > > > Failed to read key hello, org.mkhq.kafka.Topology$StoreUnavailable
> > > > > Failed to read key hello, org.mkhq.kafka.Topology$KeyNotFound
> > > > > hello -> 10
> > > >
> > > >
> > > The case where you get KeyNotFound looks like a bug to me. This
> shouldn't
> > > happen. I can see why it might happen and we will create a JIRA and fix
> > it
> > > right away.
> > >
> >
> > Great, thanks for looking into this. I'll try again once the PR is
> merged.
> >
> >
> > >
> > > I'm not sure how you end up with (hello -> 10). It could indicate that
> > the
> > > offsets for the topic you are consuming from weren't committed so the
> > data
> > > gets processed again on the restart.
> > >
> >
> > Yes, it didn't commit the offsets since streams.close() was not called on
> > ctrl-c. Fixed by adding a shutdown hook.
> >
> > Thanks,
> > Mikael
> >
> >
> > > Thanks,
> > > Damian
> > >
> >
>


Re: [DISCUSS] KIP-79 - ListOffsetRequest v1 and offsetForTime() method in new consumer.

2016-09-09 Thread Ismael Juma
Our inconsistent usage of magic values like `-1` and `null` to represent
missing values makes it hard to reason about code. We had a recent major
regression in MirrorMaker as a result of this[1], for example. `null` by
itself is hard enough (often called the billion dollar mistake), so can we
decide what we prefer and stick with that going forward?

My take is: `Option`/`Optional` > `null` > `-1` unless it's a particularly
performance sensitive area where the boxing incurred by having `null` as a
possible value is a problem. Since we're still on Java 7, we can't use
`Optional` in Java code.

Ismael

[1] The fix:
https://github.com/apache/kafka/commit/4e4e2fb5085758ee9ccf6307433ad531a33198d3
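
To make the contrast concrete, a small illustrative sketch of the first two styles (the names are made up, not Kafka APIs); the Optional style is only described in a comment since, as noted above, java.util.Optional is not available on Java 7:

    import java.util.HashMap;
    import java.util.Map;

    public class MissingValueStyles {

        // Illustrative only; none of these names are Kafka APIs.
        private static final Map<String, Long> OFFSETS = new HashMap<String, Long>();

        // Style 1: magic value. The caller has to remember that -1L means "unknown".
        static long offsetOrMinusOne(String partition) {
            Long offset = OFFSETS.get(partition);
            return offset == null ? -1L : offset;
        }

        // Style 2: null. Absence is representable, but nothing forces the caller to check.
        static Long offsetOrNull(String partition) {
            return OFFSETS.get(partition);
        }

        public static void main(String[] args) {
            OFFSETS.put("topic-0", 42L);
            System.out.println(offsetOrMinusOne("topic-1")); // prints -1, easy to misuse downstream
            System.out.println(offsetOrNull("topic-1"));     // prints null, explicit but unchecked
            // Style 3 (preferred where the boxing cost is acceptable): an Optional-like
            // wrapper that makes absence part of the method signature; on Java 7 this
            // means Guava's Optional or a small holder class, not java.util.Optional.
        }
    }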

On Fri, Sep 9, 2016 at 3:48 PM, Jun Rao  wrote:

> Jiangjie,
>
> Returning TimestampOffset(-1, -1) sounds reasonable to me.
>
> Thanks,
>
> Jun
>
> On Thu, Sep 8, 2016 at 8:46 PM, Becket Qin  wrote:
>
> > Hi Jun,
> >
> > > 1. latestOffsets() returns the next offset. So there won't be a
> timestamp
> > > associated with it. Would we use something like -1 for timestamp?
> >
> > The returned value would be the high watermark, so there might be an associated
> > timestamp. But if it is log end offset, it seems that -1 is a reasonable
> > value.
> >
> > > 2. Jason mentioned that if no message has timestamp >= the provided
> > > timestamp, we return a null value for that partition. Could we document
> > > that in the wiki?
> >
> > I made a minor change to return a TimestampOffset(-1, -1) in that case.
> Not
> > sure which one is better but there seems to be only a minor difference. What do
> you
> > think?
> >
> > I haven't seen a planned release date yet, but I can probably get it done
> > in 2-3 weeks with reasonable rounds of reviews.
> >
> > Thanks,
> >
> > Jiangjie (Becket) Qin
> >
> >
> > On Thu, Sep 8, 2016 at 6:24 PM, Jun Rao  wrote:
> >
> > > Hi, Jiangjie,
> > >
> > > Thanks for the updated KIP. A couple of minor comments.
> > >
> > > 1. latestOffsets() returns the next offset. So there won't be a
> timestamp
> > > associated with it. Would we use something like -1 for timestamp?
> > >
> > > 2. Jason mentioned that if no message has timestamp >= the provided
> > > timestamp, we return a null value for that partition. Could we document
> > > that in the wiki?
> > >
> > > BTW, we are getting close to the next release. This is a really nice
> > > feature to have. Do you think you will have a patch ready for the next
> > > release?
> > >
> > > Thanks.
> > >
> > > Jun
> > >
> > > On Wed, Sep 7, 2016 at 2:47 PM, Becket Qin 
> wrote:
> > >
> > > > That sounds reasonable to me. I'll update the KIP wiki page.
> > > >
> > > > On Wed, Sep 7, 2016 at 1:34 PM, Jason Gustafson 
> > > > wrote:
> > > >
> > > > > Hey Becket,
> > > > >
> > > > > I don't have a super strong preference, but I think this
> > > > >
> > > > > earliestOffset(singleton(partition));
> > > > >
> > > > > captures the intent more clearly than this:
> > > > >
> > > > > offsetsForTimes(singletonMap(partition, -1));
> > > > >
> > > > > I can understand the desire to keep the API footprint small, but I
> > > think
> > > > > the use case is common enough to justify separate APIs. A couple
> > > > additional
> > > > > points:
> > > > >
> > > > > 1. If we had separate methods, it might make sense to treat
> negative
> > > > > timestamps as illegal in offsetsForTimes. That seems safer from the
> > > user
> > > > > perspective since legitimate timestamps should always be positive.
> > > > > 2. The expected behavior of offsetsForTimes is to return the
> earliest
> > > > > offset which is greater than or equal to the passed offset, so
> having
> > > > > Long.MAX_VALUE return the latest value doesn't seem very intuitive
> to
> > > > me. I
> > > > > would actually expect it to return null.
> > > > >
> > > > > Given that, I think I prefer having the custom methods. What do you
> > > > think?
> > > > >
> > > > > Thanks,
> > > > > Jason
> > > > >
> > > > > On Wed, Sep 7, 2016 at 1:00 PM, Becket Qin 
> > > wrote:
> > > > >
> > > > > > Hi Jason,
> > > > > >
> > > > > > Thanks for the feedback. That is a good point. For the -1 and -2
> > > > > semantics,
> > > > > > I was just thinking we will preserve the semantics in the wire
> > > > protocol.
> > > > > > For the user facing API, I agree that is not intuitive. We can do
> > one
> > > > of
> > > > > > the following:
> > > > > > 1. Add two separate methods: earliestOffsets() and
> latestOffsets().
> > > > > > 2. just have offsetsForTimes() and return the earliest if the
> > > timestamp
> > > > > is
> > > > > > negative and the latest if the timestamp is Long.MAX_VALUE.
> > > > > >
> > > > > > The good thing about doing (1) is that we kind of have symmetric
> > > > function
> > > > > > signatures like seekToBeginning() and seekToEnd(). However, even
> if
> > > we
> > > > do
> > > > > > (1), we may still need to do (2) to handle the negative timestamp
> > and
> > > > the
> > > > > > Long.MAX_VALUE timestamp in offsetsForTimes(). Then they
> > esse

Re: Agenda item for next KIP meeting

2016-09-09 Thread Ismael Juma
Hi Vahid,

Sounds good. Jun is unavailable on the day, so I'll send the invite. Any
other KIPs that people would like to discuss on the day (Tuesday at 11am
PT)? The following KIPs are under discussion and have either been submitted
or updated recently:

KIP-68: Add a consumed log retention before log retention
KIP-72: Allow Sizing Incoming Request Queue in Bytes
KIP-79: ListOffsetRequest/ListOffsetResponse v1 and add timestamp search
methods to the new consumer

If the authors of any KIPs are available and interested to discuss them on
the next KIP call, please let me know so that I can add them to the agenda.

Ismael

On Wed, Sep 7, 2016 at 12:14 AM, Vahid S Hashemian <
vahidhashem...@us.ibm.com> wrote:

> Hi,
>
> I'd like to add KIP-54 to the agenda for next KIP meeting.
> This KIP has been in discussion phase for a long time, and it would be
> nice to have an online discussion about it, collect additional feedback,
> and move forward, if possible.
>
> Thanks.
>
> Regards,
> --Vahid
>
>


Re: [DISCUSS] KIP-79 - ListOffsetRequest v1 and offsetForTime() method in new consumer.

2016-09-09 Thread Jun Rao
Jiangjie,

Returning TimestampOffset(-1, -1) sounds reasonable to me.

Thanks,

Jun

On Thu, Sep 8, 2016 at 8:46 PM, Becket Qin  wrote:

> Hi Jun,
>
> > 1. latestOffsets() returns the next offset. So there won't be a timestamp
> > associated with it. Would we use something like -1 for timestamp?
>
> The returned value would be the high watermark, so there might be an associated
> timestamp. But if it is log end offset, it seems that -1 is a reasonable
> value.
>
> > 2. Jason mentioned that if no message has timestamp >= the provided
> > timestamp, we return a null value for that partition. Could we document
> > that in the wiki?
>
> I made a minor change to return a TimestampOffset(-1, -1) in that case. Not
> sure which one is better but there seems to be only a minor difference. What do you
> think?
>
> I haven't seen a planned release date yet, but I can probably get it done
> in 2-3 weeks with reasonable rounds of reviews.
>
> Thanks,
>
> Jiangjie (Becket) Qin
>
>
> On Thu, Sep 8, 2016 at 6:24 PM, Jun Rao  wrote:
>
> > Hi, Jiangjie,
> >
> > Thanks for the updated KIP. A couple of minor comments.
> >
> > 1. latestOffsets() returns the next offset. So there won't be a timestamp
> > associated with it. Would we use something like -1 for timestamp?
> >
> > 2. Jason mentioned that if no message has timestamp >= the provided
> > timestamp, we return a null value for that partition. Could we document
> > that in the wiki?
> >
> > BTW, we are getting close to the next release. This is a really nice
> > feature to have. Do you think you will have a patch ready for the next
> > release?
> >
> > Thanks.
> >
> > Jun
> >
> > On Wed, Sep 7, 2016 at 2:47 PM, Becket Qin  wrote:
> >
> > > That sounds reasonable to me. I'll update the KIP wiki page.
> > >
> > > On Wed, Sep 7, 2016 at 1:34 PM, Jason Gustafson 
> > > wrote:
> > >
> > > > Hey Becket,
> > > >
> > > > I don't have a super strong preference, but I think this
> > > >
> > > > earliestOffset(singleton(partition));
> > > >
> > > > captures the intent more clearly than this:
> > > >
> > > > offsetsForTimes(singletonMap(partition, -1));
> > > >
> > > > I can understand the desire to keep the API footprint small, but I
> > think
> > > > the use case is common enough to justify separate APIs. A couple
> > > additional
> > > > points:
> > > >
> > > > 1. If we had separate methods, it might make sense to treat negative
> > > > timestamps as illegal in offsetsForTimes. That seems safer from the
> > user
> > > > perspective since legitimate timestamps should always be positive.
> > > > 2. The expected behavior of offsetsForTimes is to return the earliest
> > > > offset which is greater than or equal to the passed offset, so having
> > > > Long.MAX_VALUE return the latest value doesn't seem very intuitive to
> > > me. I
> > > > would actually expect it to return null.
> > > >
> > > > Given that, I think I prefer having the custom methods. What do you
> > > think?
> > > >
> > > > Thanks,
> > > > Jason
> > > >
> > > > On Wed, Sep 7, 2016 at 1:00 PM, Becket Qin 
> > wrote:
> > > >
> > > > > Hi Jason,
> > > > >
> > > > > Thanks for the feedback. That is a good point. For the -1 and -2
> > > > semantics,
> > > > > I was just thinking we will preserve the semantics in the wire
> > > protocol.
> > > > > For the user facing API, I agree that is not intuitive. We can do
> one
> > > of
> > > > > the following:
> > > > > 1. Add two separate methods: earliestOffsets() and latestOffsets().
> > > > > 2. just have offsetsForTimes() and return the earliest if the
> > timestamp
> > > > is
> > > > > negative and the latest if the timestamp is Long.MAX_VALUE.
> > > > >
> > > > > The good thing about doing (1) is that we kind of have symmetric
> > > function
> > > > > signatures like seekToBeginning() and seekToEnd(). However, even if
> > we
> > > do
> > > > > (1), we may still need to do (2) to handle the negative timestamp
> and
> > > the
> > > > > Long.MAX_VALUE timestamp in offsetsForTimes(). Then they
> essentially
> > > > become
> > > > > redundant to earliestOffsets() and latestOffsets().
> > > > >
> > > > > Personally I prefer option (2) because of the conciseness and it
> > seems
> > > > > intuitive enough. But I am open to option (1) as well.
> > > > >
> > > > > Thanks,
> > > > >
> > > > > Jiangjie (Becket) Qin
> > > > >
> > > > > On Wed, Sep 7, 2016 at 11:06 AM, Jason Gustafson <
> ja...@confluent.io
> > >
> > > > > wrote:
> > > > >
> > > > > > Hey Becket,
> > > > > >
> > > > > > Thanks for the KIP. As I understand, the intention is to preserve
> > the
> > > > > > current behavior with a timestamp of -1 indicating latest
> timestamp
> > > and
> > > > > -2
> > > > > > indicating earliest timestamp. So users can query these offsets
> > using
> > > > the
> > > > > > offsetsForTimes API if they know the magic values. I'm wondering
> if
> > > it
> > > > > > would make the usage a little nicer to have a separate API
> instead
> > > for
> > > > > > these special cases? Sort of in the way that we expose a

[GitHub] kafka pull request #1838: HOTFIX: Temporarily ignoring this test until fixed

2016-09-09 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1838


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Subscribe me

2016-09-09 Thread Kiran Potladurthi
Please subscribe me for this mailing list.

Regards
Kiran


Re: [ANNOUNCE] New committer: Jason Gustafson

2016-09-09 Thread Michael Noll
My compliments, Jason -- well deserved! :-)

-Michael



On Wed, Sep 7, 2016 at 6:49 PM, Grant Henke  wrote:

> Congratulations and thank you for all of your contributions to Apache
> Kafka Jason!
>
> On Wed, Sep 7, 2016 at 10:12 AM, Mayuresh Gharat <
> gharatmayures...@gmail.com
> > wrote:
>
> > congrats Jason !
> >
> > Thanks,
> >
> > Mayuresh
> >
> > On Wed, Sep 7, 2016 at 5:16 AM, Eno Thereska 
> > wrote:
> >
> > > Congrats Jason!
> > >
> > > Eno
> > > > On 7 Sep 2016, at 10:00, Rajini Sivaram <
> rajinisiva...@googlemail.com>
> > > wrote:
> > > >
> > > > Congrats, Jason!
> > > >
> > > > On Wed, Sep 7, 2016 at 8:29 AM, Flavio P JUNQUEIRA 
> > > wrote:
> > > >
> > > >> Congrats, Jason. Well done and great to see this project inviting
> new
> > > >> committers.
> > > >>
> > > >> -Flavio
> > > >>
> > > >> On 7 Sep 2016 04:58, "Ashish Singh"  wrote:
> > > >>
> > > >>> Congrats, Jason!
> > > >>>
> > > >>> On Tuesday, September 6, 2016, Jason Gustafson  >
> > > >> wrote:
> > > >>>
> > >  Thanks all!
> > > 
> > >  On Tue, Sep 6, 2016 at 5:13 PM, Becket Qin  > >  > wrote:
> > > 
> > > > Congrats, Jason!
> > > >
> > > > On Tue, Sep 6, 2016 at 5:09 PM, Onur Karaman
> > >   > > >>
> > > > wrote:
> > > >
> > > >> congrats jason!
> > > >>
> > > >> On Tue, Sep 6, 2016 at 4:12 PM, Sriram Subramanian <
> > > >> r...@confluent.io
> > >  >
> > > >> wrote:
> > > >>
> > > >>> Congratulations Jason!
> > > >>>
> > > >>> On Tue, Sep 6, 2016 at 3:40 PM, Vahid S Hashemian <
> > > >>> vahidhashem...@us.ibm.com 
> > >  wrote:
> > > >>>
> > >  Congratulations Jason on this very well deserved recognition.
> > > 
> > >  --Vahid
> > > 
> > > 
> > > 
> > >  From:   Neha Narkhede >
> > >  To: "dev@kafka.apache.org " <
> > >  dev@kafka.apache.org >,
> > >  "us...@kafka.apache.org " <
> > > >> us...@kafka.apache.org
> > >  >
> > >  Cc: "priv...@kafka.apache.org " <
> > >  priv...@kafka.apache.org >
> > >  Date:   09/06/2016 03:26 PM
> > >  Subject:[ANNOUNCE] New committer: Jason Gustafson
> > > 
> > > 
> > > 
> > >  The PMC for Apache Kafka has invited Jason Gustafson to join
> > > >> as a
> > >  committer and
> > >  we are pleased to announce that he has accepted!
> > > 
> > >  Jason has contributed numerous patches to a wide range of
> > > >> areas,
> > > >> notably
> > >  within the new consumer and the Kafka Connect layers. He has
> > > > displayed
> > >  great taste and judgement which has been apparent through his
> > > >> involvement
> > >  across the board from mailing lists, JIRA, code reviews to
> > > > contributing
> > >  features, bug fixes and code and documentation improvements.
> > > 
> > >  Thank you for your contribution and welcome to Apache Kafka,
> > > >>> Jason!
> > >  --
> > >  Thanks,
> > >  Neha
> > > 
> > > 
> > > 
> > > 
> > > 
> > > >>>
> > > >>
> > > >
> > > 
> > > >>>
> > > >>>
> > > >>> --
> > > >>> Ashish 🎤h
> > > >>>
> > > >>
> > > >
> > > >
> > > >
> > > > --
> > > > Regards,
> > > >
> > > > Rajini
> > >
> > >
> >
> >
> > --
> > -Regards,
> > Mayuresh R. Gharat
> > (862) 250-7125
> >
>
>
>
> --
> Grant Henke
> Software Engineer | Cloudera
> gr...@cloudera.com | twitter.com/gchenke | linkedin.com/in/granthenke
>


[GitHub] kafka pull request #1838: HOTFIX: Temporarily ignoring this test until fixed

2016-09-09 Thread enothereska
GitHub user enothereska opened a pull request:

https://github.com/apache/kafka/pull/1838

HOTFIX: Temporarily ignoring this test until fixed



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/enothereska/kafka hotfix-ignore-smoke-test

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1838.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1838


commit 153b5561782192f36f15032c54bb06791ec8f62c
Author: Eno Thereska 
Date:   2016-09-09T12:05:44Z

Temporarily ignoring this test until fixed




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-4133) Provide a configuration to control consumer max in-flight fetches

2016-09-09 Thread Mickael Maison (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4133?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15476427#comment-15476427
 ] 

Mickael Maison commented on KAFKA-4133:
---

I don't seem to have permissions to edit pages on the wiki. My profile is: 
https://cwiki.apache.org/confluence/display/~mickael.maison

> Provide a configuration to control consumer max in-flight fetches
> -
>
> Key: KAFKA-4133
> URL: https://issues.apache.org/jira/browse/KAFKA-4133
> Project: Kafka
>  Issue Type: Improvement
>  Components: consumer
>Reporter: Jason Gustafson
>Assignee: Mickael Maison
>
> With KIP-74, we now have a good way to limit the size of fetch responses, but 
> it may still be difficult for users to control overall memory since the 
> consumer will send fetches in parallel to all the brokers which own 
> partitions that it is subscribed to. To give users finer control, it might 
> make sense to add a `max.in.flight.fetches` setting to limit the total number 
> of concurrent fetches at any time. This would require a KIP since it's a new 
> configuration.
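
For illustration only, a consumer configuration sketch showing where such a setting would sit next to the existing size limits; `max.in.flight.fetches` is the proposed name from this ticket and does not exist as a real config today, and the values shown are arbitrary:

    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.serialization.StringDeserializer;

    public class BoundedFetchConsumer {

        public static KafkaConsumer<String, String> create() {
            Properties props = new Properties();
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
            props.put(ConsumerConfig.GROUP_ID_CONFIG, "example-group");
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            // Existing KIP-74 style limits on fetch response sizes:
            props.put("max.partition.fetch.bytes", "1048576");
            props.put("fetch.max.bytes", "52428800");
            // Proposed in this JIRA (hypothetical, not an existing config):
            props.put("max.in.flight.fetches", "5");
            return new KafkaConsumer<String, String>(props);
        }
    }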



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Transaction Support in Kafka

2016-09-09 Thread Thamaraikannan Subramanian
Hi,

I have a use case where I want to publish the message only after my
primary transaction has been committed to the SQL store. The update to my
primary store may involve multiple tables under a single transaction.

At the same time, if the publish to Kafka fails for some reason, I should be
able to roll back, and if possible roll back my primary store. I
understand this involves two different stores and transactions across them.
I would like to get suggestions and input from this group about whether
there is any transactional logic in the producer.

Thanks in advance.