[jira] [Updated] (KAFKA-3664) When subscription set changes on new consumer, the partitions may be removed without offset being committed.

2016-09-08 Thread Vahid Hashemian (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3664?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vahid Hashemian updated KAFKA-3664:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> When subscription set changes on new consumer, the partitions may be removed 
> without offset being committed.
> 
>
> Key: KAFKA-3664
> URL: https://issues.apache.org/jira/browse/KAFKA-3664
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jiangjie Qin
>Assignee: Vahid Hashemian
>
> When users are using group management, if they call consumer.subscribe() to 
> change the subscription, the removed subscriptions are dropped immediately 
> and their offsets are not committed. Also, the revoked partitions passed to 
> ConsumerRebalanceListener.onPartitionsRevoked() will not include those 
> partitions. 
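As an illustration of the impact, the following minimal sketch (mine, not taken
from the ticket; broker address, group id, and topic names are illustrative)
shows the workaround available to applications on affected versions: commit
synchronously before shrinking the subscription, since the dropped partitions
never reach onPartitionsRevoked().

import java.util.Arrays;
import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class SubscriptionChangeWorkaround {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");   // illustrative
        props.put("group.id", "example-group");              // illustrative
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Arrays.asList("topicA", "topicB"));
            consumer.poll(1000);                         // process records...
            consumer.commitSync();                       // commit before changing the subscription
            consumer.subscribe(Arrays.asList("topicA")); // topicB's partitions are dropped immediately,
                                                         // without reaching onPartitionsRevoked()
        }
    }
}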



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3664) When subscription set changes on new consumer, the partitions may be removed without offset being committed.

2016-09-08 Thread Vahid Hashemian (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3664?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15475944#comment-15475944
 ] 

Vahid Hashemian commented on KAFKA-3664:


The fix was provided under 
[KIP-70|https://cwiki.apache.org/confluence/display/KAFKA/KIP-70%3A+Revise+Partition+Assignment+Semantics+on+New+Consumer's+Subscription+Change].

> When subscription set changes on new consumer, the partitions may be removed 
> without offset being committed.
> 
>
> Key: KAFKA-3664
> URL: https://issues.apache.org/jira/browse/KAFKA-3664
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jiangjie Qin
>Assignee: Vahid Hashemian
>
> When users are using group management, if they call consumer.subscribe() to 
> change the subscription, the removed subscriptions are dropped immediately 
> and their offsets are not committed. Also, the revoked partitions passed to 
> ConsumerRebalanceListener.onPartitionsRevoked() will not include those 
> partitions. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3664) When subscription set changes on new consumer, the partitions may be removed without offset being committed.

2016-09-08 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3664?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15475941#comment-15475941
 ] 

ASF GitHub Bot commented on KAFKA-3664:
---

Github user vahidhashemian closed the pull request at:

https://github.com/apache/kafka/pull/1363


> When subscription set changes on new consumer, the partitions may be removed 
> without offset being committed.
> 
>
> Key: KAFKA-3664
> URL: https://issues.apache.org/jira/browse/KAFKA-3664
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jiangjie Qin
>Assignee: Vahid Hashemian
>
> When users are using group management, if they call consumer.subscribe() to 
> change the subscription, the removed subscriptions are dropped immediately 
> and their offsets are not committed. Also, the revoked partitions passed to 
> ConsumerRebalanceListener.onPartitionsRevoked() will not include those 
> partitions. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request #1363: KAFKA-3664: Commit offset of unsubscribed partitio...

2016-09-08 Thread vahidhashemian
Github user vahidhashemian closed the pull request at:

https://github.com/apache/kafka/pull/1363


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Build failed in Jenkins: kafka-trunk-jdk7 #1528

2016-09-08 Thread Apache Jenkins Server
See 

Changes:

[jason] KAFKA-4033; Revise partition assignment semantics on consumer

--
[...truncated 12725 lines...]
org.apache.kafka.streams.processor.internals.StreamThreadTest > 
testPartitionAssignmentChange STARTED

org.apache.kafka.streams.processor.internals.StreamThreadTest > 
testPartitionAssignmentChange PASSED

org.apache.kafka.streams.processor.internals.StreamThreadTest > testMaybeClean 
STARTED

org.apache.kafka.streams.processor.internals.StreamThreadTest > testMaybeClean 
PASSED

org.apache.kafka.streams.processor.internals.MinTimestampTrackerTest > 
testTracking STARTED

org.apache.kafka.streams.processor.internals.MinTimestampTrackerTest > 
testTracking PASSED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
shouldThrowExceptionIfApplicationServerConfigIsNotHostPortPair STARTED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
shouldThrowExceptionIfApplicationServerConfigIsNotHostPortPair PASSED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
shouldMapUserEndPointToTopicPartitions STARTED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
shouldMapUserEndPointToTopicPartitions PASSED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
shouldAddUserDefinedEndPointToSubscription STARTED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
shouldAddUserDefinedEndPointToSubscription PASSED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
testAssignWithStandbyReplicas STARTED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
testAssignWithStandbyReplicas PASSED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
testAssignWithNewTasks STARTED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
testAssignWithNewTasks PASSED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
shouldExposeHostStateToTopicPartitionsOnAssignment STARTED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
shouldExposeHostStateToTopicPartitionsOnAssignment PASSED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
testAssignWithStates STARTED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
testAssignWithStates PASSED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
shouldThrowExceptionIfApplicationServerConfigPortIsNotAnInteger STARTED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
shouldThrowExceptionIfApplicationServerConfigPortIsNotAnInteger PASSED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
shouldReturnEmptyClusterMetadataIfItHasntBeenBuilt STARTED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
shouldReturnEmptyClusterMetadataIfItHasntBeenBuilt PASSED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
testAssignBasic STARTED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
testAssignBasic PASSED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
testOnAssignment STARTED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
testOnAssignment PASSED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
testAssignWithInternalTopicThatsSourceIsAnotherInternalTopic STARTED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
testAssignWithInternalTopicThatsSourceIsAnotherInternalTopic PASSED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
testSubscription STARTED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
testSubscription PASSED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
shouldSetClusterMetadataOnAssignment STARTED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
shouldSetClusterMetadataOnAssignment PASSED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
testAssignWithInternalTopics STARTED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
testAssignWithInternalTopics PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldNotAllowNullProcessorSupplier STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldNotAllowNullProcessorSupplier PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldNotSetApplicationIdToNull STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldNotSetApplicationIdToNull PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > testSourceTopics 
STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 

Build failed in Jenkins: kafka-trunk-jdk8 #871

2016-09-08 Thread Apache Jenkins Server
See 

Changes:

[jason] KAFKA-4033; Revise partition assignment semantics on consumer

--
[...truncated 3474 lines...]

kafka.coordinator.GroupMetadataManagerTest > testStoreNonEmptyGroup STARTED

kafka.coordinator.GroupMetadataManagerTest > testStoreNonEmptyGroup PASSED

kafka.coordinator.GroupMetadataManagerTest > testExpireGroup STARTED

kafka.coordinator.GroupMetadataManagerTest > testExpireGroup PASSED

kafka.coordinator.GroupMetadataManagerTest > testAddGroup STARTED

kafka.coordinator.GroupMetadataManagerTest > testAddGroup PASSED

kafka.coordinator.GroupMetadataManagerTest > testCommitOffset STARTED

kafka.coordinator.GroupMetadataManagerTest > testCommitOffset PASSED

kafka.coordinator.GroupMetadataManagerTest > testCommitOffsetFailure STARTED

kafka.coordinator.GroupMetadataManagerTest > testCommitOffsetFailure PASSED

kafka.coordinator.GroupMetadataManagerTest > testExpireOffset STARTED

kafka.coordinator.GroupMetadataManagerTest > testExpireOffset PASSED

kafka.coordinator.GroupMetadataManagerTest > testExpireOffsetsWithActiveGroup 
STARTED

kafka.coordinator.GroupMetadataManagerTest > testExpireOffsetsWithActiveGroup 
PASSED

kafka.coordinator.GroupMetadataManagerTest > testStoreEmptyGroup STARTED

kafka.coordinator.GroupMetadataManagerTest > testStoreEmptyGroup PASSED

kafka.coordinator.MemberMetadataTest > testMatchesSupportedProtocols STARTED

kafka.coordinator.MemberMetadataTest > testMatchesSupportedProtocols PASSED

kafka.coordinator.MemberMetadataTest > testMetadata STARTED

kafka.coordinator.MemberMetadataTest > testMetadata PASSED

kafka.coordinator.MemberMetadataTest > testMetadataRaisesOnUnsupportedProtocol 
STARTED

kafka.coordinator.MemberMetadataTest > testMetadataRaisesOnUnsupportedProtocol 
PASSED

kafka.coordinator.MemberMetadataTest > testVoteForPreferredProtocol STARTED

kafka.coordinator.MemberMetadataTest > testVoteForPreferredProtocol PASSED

kafka.coordinator.MemberMetadataTest > testVoteRaisesOnNoSupportedProtocols 
STARTED

kafka.coordinator.MemberMetadataTest > testVoteRaisesOnNoSupportedProtocols 
PASSED

kafka.coordinator.GroupMetadataTest > testDeadToAwaitingSyncIllegalTransition 
STARTED

kafka.coordinator.GroupMetadataTest > testDeadToAwaitingSyncIllegalTransition 
PASSED

kafka.coordinator.GroupMetadataTest > testOffsetCommitFailure STARTED

kafka.coordinator.GroupMetadataTest > testOffsetCommitFailure PASSED

kafka.coordinator.GroupMetadataTest > 
testPreparingRebalanceToStableIllegalTransition STARTED

kafka.coordinator.GroupMetadataTest > 
testPreparingRebalanceToStableIllegalTransition PASSED

kafka.coordinator.GroupMetadataTest > testStableToDeadTransition STARTED

kafka.coordinator.GroupMetadataTest > testStableToDeadTransition PASSED

kafka.coordinator.GroupMetadataTest > testInitNextGenerationEmptyGroup STARTED

kafka.coordinator.GroupMetadataTest > testInitNextGenerationEmptyGroup PASSED

kafka.coordinator.GroupMetadataTest > testCannotRebalanceWhenDead STARTED

kafka.coordinator.GroupMetadataTest > testCannotRebalanceWhenDead PASSED

kafka.coordinator.GroupMetadataTest > testInitNextGeneration STARTED

kafka.coordinator.GroupMetadataTest > testInitNextGeneration PASSED

kafka.coordinator.GroupMetadataTest > testPreparingRebalanceToEmptyTransition 
STARTED

kafka.coordinator.GroupMetadataTest > testPreparingRebalanceToEmptyTransition 
PASSED

kafka.coordinator.GroupMetadataTest > testSelectProtocol STARTED

kafka.coordinator.GroupMetadataTest > testSelectProtocol PASSED

kafka.coordinator.GroupMetadataTest > testCannotRebalanceWhenPreparingRebalance 
STARTED

kafka.coordinator.GroupMetadataTest > testCannotRebalanceWhenPreparingRebalance 
PASSED

kafka.coordinator.GroupMetadataTest > 
testDeadToPreparingRebalanceIllegalTransition STARTED

kafka.coordinator.GroupMetadataTest > 
testDeadToPreparingRebalanceIllegalTransition PASSED

kafka.coordinator.GroupMetadataTest > testCanRebalanceWhenAwaitingSync STARTED

kafka.coordinator.GroupMetadataTest > testCanRebalanceWhenAwaitingSync PASSED

kafka.coordinator.GroupMetadataTest > 
testAwaitingSyncToPreparingRebalanceTransition STARTED

kafka.coordinator.GroupMetadataTest > 
testAwaitingSyncToPreparingRebalanceTransition PASSED

kafka.coordinator.GroupMetadataTest > testStableToAwaitingSyncIllegalTransition 
STARTED

kafka.coordinator.GroupMetadataTest > testStableToAwaitingSyncIllegalTransition 
PASSED

kafka.coordinator.GroupMetadataTest > testEmptyToDeadTransition STARTED

kafka.coordinator.GroupMetadataTest > testEmptyToDeadTransition PASSED

kafka.coordinator.GroupMetadataTest > testSelectProtocolRaisesIfNoMembers 
STARTED

kafka.coordinator.GroupMetadataTest > testSelectProtocolRaisesIfNoMembers PASSED

kafka.coordinator.GroupMetadataTest > testStableToPreparingRebalanceTransition 
STARTED

kafka.coordinator.GroupMetadataTest > testStableToPreparingRebalanceTransition 
PASSED


Building Kafka from source and publishing to Maven

2016-09-08 Thread Jaikiran Pai
I am trying to build the Kafka 0.10 branch from source to pick up an SSL fix 
and use it for our internal tests. In order to use it internally in our 
application, we'll need the built artifacts to be published to our 
internal Maven repo (backed by Nexus). The build instructions in README.md 
say this:


### Publishing the jar for all versions of Scala and for all projects to 
maven ###

./gradlew uploadArchivesAll

Please note for this to work you should create/update 
`~/.gradle/gradle.properties` and assign the following variables


mavenUrl=
mavenUsername=
mavenPassword=
signing.keyId=
signing.password=
signing.secretKeyRingFile=


I'm skipping signing of the artifacts and the command I use is:

./gradlew -PscalaVersion=2.11 uploadArchivesAll -x signArchives

My gradle.properties in ~/.gradle folder looks like:

mavenUrl=http://:/content/repositories/snapshots
mavenUsername=name
mavenPassword=pass


This fails during the archive upload step of the build (with the command 
I pasted above):


.
:kafka:core:uploadArchives FAILED
:uploadCoreArchives_2_10 FAILED

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':core:uploadArchives'.
> Could not publish configuration 'archives'
   > Failed to deploy artifacts/metadata: No connector available to 
access repository remote () of type default using the available 
factories WagonRepositoryConnectorFactory




Is my mavenUrl property wrong, or is something else wrong? Could one of 
you who has successfully uploaded this help me understand how to do it 
and what the mavenUrl should point to?



-Jaikiran



Re: [DISCUSS] KIP-79 - ListOffsetRequest v1 and offsetForTime() method in new consumer.

2016-09-08 Thread Becket Qin
Hi Jun,

> 1. latestOffsets() returns the next offset. So there won't be a timestamp
> associated with it. Would we use something like -1 for timestamp?

The returned value would be the high watermark, so there might be an associated
timestamp. But if it is the log end offset, it seems that -1 is a reasonable
value.

> 2. Jason mentioned that if no message has timestamp >= the provided
> timestamp, we return a null value for that partition. Could we document
> that in the wiki?

I made a minor change to return a TimestampOffset(-1, -1) in that case. Not
sure which one is better, but the difference seems minor. What do you
think?

I haven't seen a planned release date yet, but I can probably get it done
in 2-3 weeks with reasonable rounds of reviews.

Thanks,

Jiangjie (Becket) Qin
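For readers following the thread, here is a rough usage sketch of the lookup
being discussed. It assumes the draft names from the KIP (offsetsForTimes()
and a TimestampOffset holder with timestamp/offset accessors), which are not
final; it also assumes an existing consumer instance and the usual imports
(java.util.Collections, java.util.Map, org.apache.kafka.common.TopicPartition).
The topic and timestamp are illustrative.

// Hypothetical usage of the proposed lookup, following the KIP-79 draft (names not final):
TopicPartition tp = new TopicPartition("my-topic", 0);          // illustrative
long oneHourAgo = System.currentTimeMillis() - 60 * 60 * 1000L;

Map<TopicPartition, Long> query = Collections.singletonMap(tp, oneHourAgo);
Map<TopicPartition, TimestampOffset> result = consumer.offsetsForTimes(query);

TimestampOffset entry = result.get(tp);
if (entry != null && entry.offset() >= 0) {
    consumer.seek(tp, entry.offset());   // first message with timestamp >= the target
}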


On Thu, Sep 8, 2016 at 6:24 PM, Jun Rao  wrote:

> Hi, Jiangjie,
>
> Thanks for the updated KIP. A couple of minor comments.
>
> 1. latestOffsets() returns the next offset. So there won't be a timestamp
> associated with it. Would we use something like -1 for timestamp?
>
> 2. Jason mentioned that if no message has timestamp >= the provided
> timestamp, we return a null value for that partition. Could we document
> that in the wiki?
>
> BTW, we are getting close to the next release. This is a really nice
> feature to have. Do you think you will have a patch ready for the next
> release?
>
> Thanks.
>
> Jun
>
> On Wed, Sep 7, 2016 at 2:47 PM, Becket Qin  wrote:
>
> > That sounds reasonable to me. I'll update the KIP wiki page.
> >
> > On Wed, Sep 7, 2016 at 1:34 PM, Jason Gustafson 
> > wrote:
> >
> > > Hey Becket,
> > >
> > > I don't have a super strong preference, but I think this
> > >
> > > earliestOffset(singleton(partition));
> > >
> > > captures the intent more clearly than this:
> > >
> > > offsetsForTimes(singletonMap(partition, -1));
> > >
> > > I can understand the desire to keep the API footprint small, but I
> think
> > > the use case is common enough to justify separate APIs. A couple
> > additional
> > > points:
> > >
> > > 1. If we had separate methods, it might make sense to treat negative
> > > timestamps as illegal in offsetsForTimes. That seems safer from the
> user
> > > perspective since legitimate timestamps should always be positive.
> > > 2. The expected behavior of offsetsForTimes is to return the earliest
> > > offset which is greater than or equal to the passed offset, so having
> > > Long.MAX_VALUE return the latest value doesn't seem very intuitive to
> > me. I
> > > would actually expect it to return null.
> > >
> > > Given that, I think I prefer having the custom methods. What do you
> > think?
> > >
> > > Thanks,
> > > Jason
> > >
> > > On Wed, Sep 7, 2016 at 1:00 PM, Becket Qin 
> wrote:
> > >
> > > > Hi Jason,
> > > >
> > > > Thanks for the feedback. That is a good point. For the -1 and -2
> > > semantics,
> > > > I was just thinking we will preserve the semantics in the wire
> > protocol.
> > > > For the user facing API, I agree that is not intuitive. We can do one
> > of
> > > > the following:
> > > > 1. Add two separate methods: earliestOffsets() and latestOffsets().
> > > > 2. just have offsetsForTimes() and return the earliest if the
> timestamp
> > > is
> > > > negative and the latest if the timestamp is Long.MAX_VALUE.
> > > >
> > > > The good thing about doing (1) is that we kind of have symmetric
> > function
> > > > signatures like seekToBeginning() and seekToEnd(). However, even if
> we
> > do
> > > > (1), we may still need to do (2) to handle the negative timestamp and
> > the
> > > > Long.MAX_VALUE timestamp in offsetsForTimes(). Then they essentially
> > > become
> > > > redundant to earliestOffsets() and latestOffsets().
> > > >
> > > > Personally I prefer option (2) because of the conciseness and it
> seems
> > > > intuitive enough. But I am open to option (1) as well.
> > > >
> > > > Thanks,
> > > >
> > > > Jiangjie (Becket) Qin
> > > >
> > > > On Wed, Sep 7, 2016 at 11:06 AM, Jason Gustafson  >
> > > > wrote:
> > > >
> > > > > Hey Becket,
> > > > >
> > > > > Thanks for the KIP. As I understand, the intention is to preserve
> the
> > > > > current behavior with a timestamp of -1 indicating latest timestamp
> > and
> > > > -2
> > > > > indicating earliest timestamp. So users can query these offsets
> using
> > > the
> > > > > offsetsForTimes API if they know the magic values. I'm wondering if
> > it
> > > > > would make the usage a little nicer to have a separate API instead
> > for
> > > > > these special cases? Sort of in the way that we expose a generic
> > seek()
> > > > and
> > > > > a seekToBeginning(), maybe we could have an earliestOffset() in
> > > addition
> > > > to
> > > > > offsetsForTimes()?
> > > > >
> > > > > Thanks,
> > > > > Jason
> > > > >
> > > > > On Wed, Sep 7, 2016 at 10:04 AM, Becket Qin 
> > > > wrote:
> > > > >
> > > > > > Thanks 

[jira] [Updated] (KAFKA-4033) KIP-70: Revise Partition Assignment Semantics on New Consumer's Subscription Change

2016-09-08 Thread Jason Gustafson (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4033?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson updated KAFKA-4033:
---
   Resolution: Fixed
Fix Version/s: 0.10.1.0
   Status: Resolved  (was: Patch Available)

Issue resolved by pull request 1726
[https://github.com/apache/kafka/pull/1726]

> KIP-70: Revise Partition Assignment Semantics on New Consumer's Subscription 
> Change
> ---
>
> Key: KAFKA-4033
> URL: https://issues.apache.org/jira/browse/KAFKA-4033
> Project: Kafka
>  Issue Type: Bug
>Reporter: Vahid Hashemian
>Assignee: Vahid Hashemian
> Fix For: 0.10.1.0
>
>
> Modify the new consumer's implementation of the topic subscribe and unsubscribe 
> interfaces so that they do not cause an immediate assignment update (this is 
> already how the regex subscribe interface is implemented). Instead, the 
> assignment remains valid until it has been revoked in the next rebalance.
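A small sketch (mine, not from the ticket) of what the revised semantics enable
on the application side: a rebalance listener that commits in
onPartitionsRevoked() now also covers partitions dropped by a later subscribe()
call, because they stay assigned until the next rebalance revokes them.

import java.util.Collection;
import org.apache.kafka.clients.consumer.ConsumerRebalanceListener;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

class CommitOnRevokeListener implements ConsumerRebalanceListener {
    private final KafkaConsumer<String, String> consumer;

    CommitOnRevokeListener(KafkaConsumer<String, String> consumer) {
        this.consumer = consumer;
    }

    @Override
    public void onPartitionsRevoked(Collection<TopicPartition> partitions) {
        // With KIP-70, 'partitions' also includes partitions removed by a subscription
        // change, so committing here no longer silently skips them.
        consumer.commitSync();
    }

    @Override
    public void onPartitionsAssigned(Collection<TopicPartition> partitions) { }
}

// Usage: consumer.subscribe(Arrays.asList("topicA", "topicB"), new CommitOnRevokeListener(consumer));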



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-4033) KIP-70: Revise Partition Assignment Semantics on New Consumer's Subscription Change

2016-09-08 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4033?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15475716#comment-15475716
 ] 

ASF GitHub Bot commented on KAFKA-4033:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1726


> KIP-70: Revise Partition Assignment Semantics on New Consumer's Subscription 
> Change
> ---
>
> Key: KAFKA-4033
> URL: https://issues.apache.org/jira/browse/KAFKA-4033
> Project: Kafka
>  Issue Type: Bug
>Reporter: Vahid Hashemian
>Assignee: Vahid Hashemian
> Fix For: 0.10.1.0
>
>
> Modify the new consumer's implementation of the topic subscribe and unsubscribe 
> interfaces so that they do not cause an immediate assignment update (this is 
> already how the regex subscribe interface is implemented). Instead, the 
> assignment remains valid until it has been revoked in the next rebalance.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request #1726: KAFKA-4033: KIP-70: Revise Partition Assignment Se...

2016-09-08 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1726


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Re: [DISCUSS] KIP-72 Allow Sizing Incoming Request Queue in Bytes

2016-09-08 Thread radai
Hi Jun,

1. Yes, it is my own personal opinion that people use queued.max.requests
as an indirect way to bound memory consumption. Once a more direct memory-bound
mechanism exists (and works), I don't think queued.max.requests would be
required. Having said that, I was not planning on making any changes
w.r.t. queued.max.requests support (so I was aiming for a situation
where both configs are supported) to allow gathering enough data/feedback.

2. Selector.poll() calls into KafkaChannel.read() to maybe get a
NetworkReceive; multiple such read() calls may already be required in the
current code base before a Receive is produced. My pool implementation is
non-blocking, so if there's no memory available the read() call will return
null. poll() would then move on to try and service other selection keys.
The pool will be checked for available memory again the next time the
SocketServer.run() loop gets to poll(). So right now I don't communicate
memory becoming available back to the selector - it just goes on to try and
make progress elsewhere and comes back again; I never block it or send it to
sleep. For efficiency, if there's not enough memory to service a readable
selection key, we may want to skip all other read-ready selection keys for
that iteration of pollSelectionKeys(). That would require rather invasive
changes around Selector.pollSelectionKeys() that I'd rather avoid. Also,
different KafkaChannels may be backed by different memory pools (under some
sort of future QoS scheme?), which would complicate such an optimization
further.

3. I added the pool interface and implementation under kafka.common.memory,
and the API is "thin" enough to be generally useful (it is currently
non-blocking only, but a get(long maxWait) is definitely doable). Having
said that, I'm not really familiar enough with the code to say
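To make the contract described above concrete, here is a rough non-blocking
pool sketch of my own; it is not the code on the linked branch, and the method
names are assumptions. tryAllocate() returns null when there is no capacity
(so the caller moves on, as described for read()), and release() returns
capacity once the request has been fully processed.

import java.nio.ByteBuffer;
import java.util.concurrent.atomic.AtomicLong;

// Illustrative non-blocking memory pool (a sketch, not the proposed implementation).
interface MemoryPool {
    ByteBuffer tryAllocate(int sizeBytes);   // null if not enough capacity right now
    void release(ByteBuffer buffer);         // give capacity back once the request is done
}

class SimpleMemoryPool implements MemoryPool {
    private final AtomicLong availableBytes;

    SimpleMemoryPool(long capacityBytes) {
        this.availableBytes = new AtomicLong(capacityBytes);
    }

    @Override
    public ByteBuffer tryAllocate(int sizeBytes) {
        if (availableBytes.addAndGet(-sizeBytes) < 0) {
            availableBytes.addAndGet(sizeBytes);  // undo; the caller retries on a later poll()
            return null;
        }
        return ByteBuffer.allocate(sizeBytes);
    }

    @Override
    public void release(ByteBuffer buffer) {
        availableBytes.addAndGet(buffer.capacity());
    }
}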



On Fri, Sep 2, 2016 at 2:04 PM, Jun Rao  wrote:

> Hi, Radi,
>
> Thanks for the update. At the high level, this looks promising. A few
> comments below.
>
> 1. If we can bound the requests by bytes, it seems that we don't need
> queued.max.requests
> any more? Could we just deprecate the config and make the queue size
> unbounded?
> 2. How do we communicate back to the selector when some memory is freed up?
> We probably need to wake up the selector. For efficiency, perhaps we only
> need to wake up the selector if the bufferpool is full?
> 3. We talked about bounding the consumer's memory before. To fully support
> that, we will need to bound the memory used by different fetch responses in
> the consumer. Do you think the changes that you propose here can be
> leveraged to bound the memory in the consumer as well?
>
> Jun
>
>
> On Tue, Aug 30, 2016 at 10:41 AM, radai 
> wrote:
>
> > My apologies for the delay in response.
> >
> > I agree with the concerns about OOM reading from the actual sockets and
> > blocking the network threads - messing with the request queue itself
> would
> > not do.
> >
> > I propose instead a memory pool approach - the broker would have a non
> > blocking memory pool. upon reading the first 4 bytes out of a socket an
> > attempt would be made to acquire enough memory and if that attempt fails
> > the processing thread will move on to try and make progress with other
> > tasks.
> >
> > I think Its simpler than mute/unmute because using mute/unmute would
> > require differentiating between sockets muted due to a request in
> progress
> > (normal current operation) and sockets muted due to lack of memory.
> sockets
> > of the 1st kind would be unmuted at the end of request processing (as it
> > happens right now) but the 2nd kind would require some sort of "unmute
> > watchdog" which is (i claim) more complicated than a memory pool. also a
> > memory pool is a more generic solution.
> >
> > I've updated the KIP page (
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > 72%3A+Allow+putting+a+bound+on+memory+consumed+by+Incoming+requests)
> > to reflect the new proposed implementation, and i've also put up an
> inital
> > implementation proposal on github -
> > https://github.com/radai-rosenblatt/kafka/commits/broker-memory-pool.
> the
> > proposed code is not complete and tested yet (so probably buggy) but does
> > include the main points of modification.
> >
> > the specific implementation of the pool on that branch also has a built
> in
> > safety net where memory that is acquired but not released (which is a
> bug)
> > is discovered when the garbage collector frees it and the capacity is
> > reclaimed.
> >
> > On Tue, Aug 9, 2016 at 8:14 AM, Jun Rao  wrote:
> >
> > > Radi,
> > >
> > > Yes, I got the benefit of bounding the request queue by bytes. My
> concern
> > > is the following if we don't change the behavior of processor blocking
> on
> > > queue full.
> > >
> > > If the broker truly doesn't have enough memory for buffering
> outstanding
> > > requests 

Re: [DISCUSS] KIP-79 - ListOffsetRequest v1 and offsetForTime() method in new consumer.

2016-09-08 Thread Jun Rao
Hi, Jiangjie,

Thanks for the updated KIP. A couple of minor comments.

1. latestOffsets() returns the next offset. So there won't be a timestamp
associated with it. Would we use something like -1 for timestamp?

2. Jason mentioned that if no message has timestamp >= the provided
timestamp, we return a null value for that partition. Could we document
that in the wiki?

BTW, we are getting close to the next release. This is a really nice
feature to have. Do you think you will have a patch ready for the next
release?

Thanks.

Jun

On Wed, Sep 7, 2016 at 2:47 PM, Becket Qin  wrote:

> That sounds reasonable to me. I'll update the KIP wiki page.
>
> On Wed, Sep 7, 2016 at 1:34 PM, Jason Gustafson 
> wrote:
>
> > Hey Becket,
> >
> > I don't have a super strong preference, but I think this
> >
> > earliestOffset(singleton(partition));
> >
> > captures the intent more clearly than this:
> >
> > offsetsForTimes(singletonMap(partition, -1));
> >
> > I can understand the desire to keep the API footprint small, but I think
> > the use case is common enough to justify separate APIs. A couple
> additional
> > points:
> >
> > 1. If we had separate methods, it might make sense to treat negative
> > timestamps as illegal in offsetsForTimes. That seems safer from the user
> > perspective since legitimate timestamps should always be positive.
> > 2. The expected behavior of offsetsForTimes is to return the earliest
> > offset which is greater than or equal to the passed offset, so having
> > Long.MAX_VALUE return the latest value doesn't seem very intuitive to
> me. I
> > would actually expect it to return null.
> >
> > Given that, I think I prefer having the custom methods. What do you
> think?
> >
> > Thanks,
> > Jason
> >
> > On Wed, Sep 7, 2016 at 1:00 PM, Becket Qin  wrote:
> >
> > > Hi Jason,
> > >
> > > Thanks for the feedback. That is a good point. For the -1 and -2
> > semantics,
> > > I was just thinking we will preserve the semantics in the wire
> protocol.
> > > For the user facing API, I agree that is not intuitive. We can do one
> of
> > > the following:
> > > 1. Add two separate methods: earliestOffsets() and latestOffsets().
> > > 2. just have offsetsForTimes() and return the earliest if the timestamp
> > is
> > > negative and the latest if the timestamp is Long.MAX_VALUE.
> > >
> > > The good thing about doing (1) is that we kind of have symmetric
> function
> > > signatures like seekToBeginning() and seekToEnd(). However, even if we
> do
> > > (1), we may still need to do (2) to handle the negative timestamp and
> the
> > > Long.MAX_VALUE timestamp in offsetsForTimes(). Then they essentially
> > become
> > > redundant to earliestOffsets() and latestOffsets().
> > >
> > > Personally I prefer option (2) because of the conciseness and it seems
> > > intuitive enough. But I am open to option (1) as well.
> > >
> > > Thanks,
> > >
> > > Jiangjie (Becket) Qin
> > >
> > > On Wed, Sep 7, 2016 at 11:06 AM, Jason Gustafson 
> > > wrote:
> > >
> > > > Hey Becket,
> > > >
> > > > Thanks for the KIP. As I understand, the intention is to preserve the
> > > > current behavior with a timestamp of -1 indicating latest timestamp
> and
> > > -2
> > > > indicating earliest timestamp. So users can query these offsets using
> > the
> > > > offsetsForTimes API if they know the magic values. I'm wondering if
> it
> > > > would make the usage a little nicer to have a separate API instead
> for
> > > > these special cases? Sort of in the way that we expose a generic
> seek()
> > > and
> > > > a seekToBeginning(), maybe we could have an earliestOffset() in
> > addition
> > > to
> > > > offsetsForTimes()?
> > > >
> > > > Thanks,
> > > > Jason
> > > >
> > > > On Wed, Sep 7, 2016 at 10:04 AM, Becket Qin 
> > > wrote:
> > > >
> > > > > Thanks everyone for all the feedback.
> > > > >
> > > > > If there is no further concerns or comments I will start a voting
> > > thread
> > > > on
> > > > > this KIP tomorrow.
> > > > >
> > > > > Thanks,
> > > > >
> > > > > Jiangjie (Becket) Qin
> > > > >
> > > > > On Tue, Sep 6, 2016 at 9:48 AM, Becket Qin 
> > > wrote:
> > > > >
> > > > > > Hi Magnus,
> > > > > >
> > > > > > Thanks for the comments. I agree that querying messages within a
> > time
> > > > > > range is a valid use case (actually this is an example use case
> in
> > my
> > > > > > previous email). The current proposal can achieve this by having
> > two
> > > > > > ListOffsetRequest, right? I think the current API already
> supports
> > > the
> > > > > use
> > > > > > cases that require the offsets for multiple timestamps. The
> > question
> > > is
> > > > > > that whether it is worth adding more complexity to the protocol
> to
> > > make
> > > > > it
> > > > > > easier for multiple timestamp query. Personally I think given
> that
> > > > query
> > > > > > multiple timestamps is likely an infrequent operation, 

Re: [VOTE] KIP-78 Cluster Id (second attempt)

2016-09-08 Thread Jun Rao
Thanks for the writeup. +1.

Jun

On Tue, Sep 6, 2016 at 7:46 PM, Ismael Juma  wrote:

> Hi all,
>
> I would like to (re)initiate[1] the voting process for KIP-78 Cluster Id:
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-78%3A+Cluster+Id
>
> As explained in the KIP and discussion thread, we see this as a good first
> step that can serve as a foundation for future improvements.
>
> Thanks,
> Ismael
>
> [1] Even though I created a new vote thread, Gmail placed the messages in
> the discuss thread, making it not as visible as required. It's important to
> mention that two +1s were cast by Gwen and Sriram:
>
> http://mail-archives.apache.org/mod_mbox/kafka-dev/201609.
> mbox/%3CCAD5tkZbLv7fvH4q%2BKe%2B%3DJMgGq%2BZT2t34e0WRUsCT1ErhtKOg1w%
> 40mail.gmail.com%3E
>


Build failed in Jenkins: kafka-trunk-jdk7 #1527

2016-09-08 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] MINOR: fix transient QueryableStateIntegration test failure

--
[...truncated 6861 lines...]

kafka.coordinator.GroupCoordinatorResponseTest > testValidLeaveGroup PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testDescribeGroupInactiveGroup 
STARTED

kafka.coordinator.GroupCoordinatorResponseTest > testDescribeGroupInactiveGroup 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testSyncGroupNotCoordinator 
STARTED

kafka.coordinator.GroupCoordinatorResponseTest > testSyncGroupNotCoordinator 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testHeartbeatUnknownConsumerExistingGroup STARTED

kafka.coordinator.GroupCoordinatorResponseTest > 
testHeartbeatUnknownConsumerExistingGroup PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testValidHeartbeat STARTED

kafka.coordinator.GroupCoordinatorResponseTest > testValidHeartbeat PASSED

kafka.coordinator.GroupMetadataManagerTest > testStoreNonEmptyGroup STARTED

kafka.coordinator.GroupMetadataManagerTest > testStoreNonEmptyGroup PASSED

kafka.coordinator.GroupMetadataManagerTest > testExpireGroup STARTED

kafka.coordinator.GroupMetadataManagerTest > testExpireGroup PASSED

kafka.coordinator.GroupMetadataManagerTest > testAddGroup STARTED

kafka.coordinator.GroupMetadataManagerTest > testAddGroup PASSED

kafka.coordinator.GroupMetadataManagerTest > testCommitOffset STARTED

kafka.coordinator.GroupMetadataManagerTest > testCommitOffset PASSED

kafka.coordinator.GroupMetadataManagerTest > testCommitOffsetFailure STARTED

kafka.coordinator.GroupMetadataManagerTest > testCommitOffsetFailure PASSED

kafka.coordinator.GroupMetadataManagerTest > testExpireOffset STARTED

kafka.coordinator.GroupMetadataManagerTest > testExpireOffset PASSED

kafka.coordinator.GroupMetadataManagerTest > testExpireOffsetsWithActiveGroup 
STARTED

kafka.coordinator.GroupMetadataManagerTest > testExpireOffsetsWithActiveGroup 
PASSED

kafka.coordinator.GroupMetadataManagerTest > testStoreEmptyGroup STARTED

kafka.coordinator.GroupMetadataManagerTest > testStoreEmptyGroup PASSED

kafka.coordinator.MemberMetadataTest > testMatchesSupportedProtocols STARTED

kafka.coordinator.MemberMetadataTest > testMatchesSupportedProtocols PASSED

kafka.coordinator.MemberMetadataTest > testMetadata STARTED

kafka.coordinator.MemberMetadataTest > testMetadata PASSED

kafka.coordinator.MemberMetadataTest > testMetadataRaisesOnUnsupportedProtocol 
STARTED

kafka.coordinator.MemberMetadataTest > testMetadataRaisesOnUnsupportedProtocol 
PASSED

kafka.coordinator.MemberMetadataTest > testVoteForPreferredProtocol STARTED

kafka.coordinator.MemberMetadataTest > testVoteForPreferredProtocol PASSED

kafka.coordinator.MemberMetadataTest > testVoteRaisesOnNoSupportedProtocols 
STARTED

kafka.coordinator.MemberMetadataTest > testVoteRaisesOnNoSupportedProtocols 
PASSED

kafka.coordinator.GroupMetadataTest > testDeadToAwaitingSyncIllegalTransition 
STARTED

kafka.coordinator.GroupMetadataTest > testDeadToAwaitingSyncIllegalTransition 
PASSED

kafka.coordinator.GroupMetadataTest > testOffsetCommitFailure STARTED

kafka.coordinator.GroupMetadataTest > testOffsetCommitFailure PASSED

kafka.coordinator.GroupMetadataTest > 
testPreparingRebalanceToStableIllegalTransition STARTED

kafka.coordinator.GroupMetadataTest > 
testPreparingRebalanceToStableIllegalTransition PASSED

kafka.coordinator.GroupMetadataTest > testStableToDeadTransition STARTED

kafka.coordinator.GroupMetadataTest > testStableToDeadTransition PASSED

kafka.coordinator.GroupMetadataTest > testInitNextGenerationEmptyGroup STARTED

kafka.coordinator.GroupMetadataTest > testInitNextGenerationEmptyGroup PASSED

kafka.coordinator.GroupMetadataTest > testCannotRebalanceWhenDead STARTED

kafka.coordinator.GroupMetadataTest > testCannotRebalanceWhenDead PASSED

kafka.coordinator.GroupMetadataTest > testInitNextGeneration STARTED

kafka.coordinator.GroupMetadataTest > testInitNextGeneration PASSED

kafka.coordinator.GroupMetadataTest > testPreparingRebalanceToEmptyTransition 
STARTED

kafka.coordinator.GroupMetadataTest > testPreparingRebalanceToEmptyTransition 
PASSED

kafka.coordinator.GroupMetadataTest > testSelectProtocol STARTED

kafka.coordinator.GroupMetadataTest > testSelectProtocol PASSED

kafka.coordinator.GroupMetadataTest > testCannotRebalanceWhenPreparingRebalance 
STARTED

kafka.coordinator.GroupMetadataTest > testCannotRebalanceWhenPreparingRebalance 
PASSED

kafka.coordinator.GroupMetadataTest > 
testDeadToPreparingRebalanceIllegalTransition STARTED

kafka.coordinator.GroupMetadataTest > 
testDeadToPreparingRebalanceIllegalTransition PASSED

kafka.coordinator.GroupMetadataTest > testCanRebalanceWhenAwaitingSync STARTED

kafka.coordinator.GroupMetadataTest > testCanRebalanceWhenAwaitingSync PASSED

kafka.coordinator.GroupMetadataTest > 
testAwaitingSyncToPreparingRebalanceTransition STARTED


Re: [VOTE] KIP-63: Unify store and downstream caching in streams

2016-09-08 Thread Neha Narkhede
+1 (binding)

On Thu, Sep 8, 2016 at 2:34 PM Gwen Shapira  wrote:

> +1 (binding)
>
> Looks great and very detailed. Nice job, Eno :)
>
> On Thu, Aug 25, 2016 at 3:57 AM, Eno Thereska 
> wrote:
>
> > Hi folks,
> >
> > We'd like to start the vote for KIP-63. At this point the Wiki addresses
> > all previous questions and we believe the PoC is feature-complete.
> >
> > Thanks
> > Eno
> >
>
>
>
> --
> *Gwen Shapira*
> Product Manager | Confluent
> 650.450.2760 | @gwenshap
> Follow us: Twitter  | blog
> 
>
-- 
Thanks,
Neha


[jira] [Created] (KAFKA-4145) Avoid redundant integration testing in ProducerSendTests

2016-09-08 Thread Jason Gustafson (JIRA)
Jason Gustafson created KAFKA-4145:
--

 Summary: Avoid redundant integration testing in ProducerSendTests
 Key: KAFKA-4145
 URL: https://issues.apache.org/jira/browse/KAFKA-4145
 Project: Kafka
  Issue Type: Improvement
  Components: unit tests
Reporter: Jason Gustafson


We have a few test cases in {{BaseProducerSendTest}} which probably have little 
value being tested for both Plaintext and SSL. We can move them to 
{{PlaintextProducerSendTest}} and save a little bit on the build time. The 
following tests seem like possible candidates:

1. testSendCompressedMessageWithCreateTime
2. testSendNonCompressedMessageWithCreateTime
3. testSendCompressedMessageWithLogAppendTime
4. testSendNonCompressedMessageWithLogAppendTime
5. testAutoCreateTopic
6. testFlush
7. testSendWithInvalidCreateTime
8. testCloseWithZeroTimeoutFromCallerThread
9. testCloseWithZeroTimeoutFromSenderThread



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [VOTE] KIP-63: Unify store and downstream caching in streams

2016-09-08 Thread Gwen Shapira
+1 (binding)

Looks great and very detailed. Nice job, Eno :)

On Thu, Aug 25, 2016 at 3:57 AM, Eno Thereska 
wrote:

> Hi folks,
>
> We'd like to start the vote for KIP-63. At this point the Wiki addresses
> all previous questions and we believe the PoC is feature-complete.
>
> Thanks
> Eno
>



-- 
*Gwen Shapira*
Product Manager | Confluent
650.450.2760 | @gwenshap
Follow us: Twitter  | blog



[GitHub] kafka pull request #1836: KAFKA-3703: Graceful close for consumers and produ...

2016-09-08 Thread rajinisivaram
GitHub user rajinisivaram opened a pull request:

https://github.com/apache/kafka/pull/1836

KAFKA-3703: Graceful close for consumers and producer with acks=0

Process requests received from channels before they were closed. For 
consumers, wait for coordinator requests to complete before returning from 
close.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/rajinisivaram/kafka KAFKA-3703

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1836.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1836


commit 44ac2327c9c6b66de702fccf0b3c9cabcd8bc25b
Author: Rajini Sivaram 
Date:   2016-09-02T07:55:49Z

KAFKA-3703: Graceful close for consumers and producer with acks=0




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-3703) Selector.close() doesn't complete outgoing writes

2016-09-08 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3703?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15474933#comment-15474933
 ] 

ASF GitHub Bot commented on KAFKA-3703:
---

GitHub user rajinisivaram opened a pull request:

https://github.com/apache/kafka/pull/1836

KAFKA-3703: Graceful close for consumers and producer with acks=0

Process requests received from channels before they were closed. For 
consumers, wait for coordinator requests to complete before returning from 
close.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/rajinisivaram/kafka KAFKA-3703

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1836.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1836


commit 44ac2327c9c6b66de702fccf0b3c9cabcd8bc25b
Author: Rajini Sivaram 
Date:   2016-09-02T07:55:49Z

KAFKA-3703: Graceful close for consumers and producer with acks=0




> Selector.close() doesn't complete outgoing writes
> -
>
> Key: KAFKA-3703
> URL: https://issues.apache.org/jira/browse/KAFKA-3703
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.10.0.1
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
>
> Outgoing writes may be discarded when a connection is closed. For instance, 
> with a producer running with acks=0, an application that writes data and then 
> closes the producer would expect all writes to complete if there are no 
> errors. But close() simply closes the channel and socket, which can result 
> in outgoing data being discarded.
> This is also an issue for consumers that use commitAsync to commit offsets: 
> closing the consumer may result in commits being discarded because the writes 
> have not completed before close().
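A short sketch (mine, not from the ticket) of the consumer-side pattern
affected here, and the conservative pattern applications can use on releases
without this fix; it assumes an existing KafkaConsumer instance named consumer.

// Affected pattern: an async commit issued just before close() may never reach the broker.
consumer.commitAsync();
consumer.close();            // the outstanding commit request can be discarded

// Conservative alternative on affected versions: make the final commit synchronous.
try {
    consumer.commitSync();   // blocks until the offsets are written or an error is raised
} finally {
    consumer.close();
}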



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: kafka-trunk-jdk7 #1526

2016-09-08 Thread Apache Jenkins Server
See 

Changes:

[jason] KAFKA-4139; Reset findCoordinatorFuture when brokers are unavailable

--
[...truncated 12297 lines...]
org.apache.kafka.streams.processor.internals.PartitionGroupTest > 
testTimeTracking PASSED

org.apache.kafka.streams.processor.internals.StreamThreadTest > 
testInjectClients STARTED

org.apache.kafka.streams.processor.internals.StreamThreadTest > 
testInjectClients PASSED

org.apache.kafka.streams.processor.internals.StreamThreadTest > testMaybeCommit 
STARTED

org.apache.kafka.streams.processor.internals.StreamThreadTest > testMaybeCommit 
PASSED

org.apache.kafka.streams.processor.internals.StreamThreadTest > 
testPartitionAssignmentChange STARTED

org.apache.kafka.streams.processor.internals.StreamThreadTest > 
testPartitionAssignmentChange PASSED

org.apache.kafka.streams.processor.internals.StreamThreadTest > testMaybeClean 
STARTED

org.apache.kafka.streams.processor.internals.StreamThreadTest > testMaybeClean 
PASSED

org.apache.kafka.streams.processor.internals.MinTimestampTrackerTest > 
testTracking STARTED

org.apache.kafka.streams.processor.internals.MinTimestampTrackerTest > 
testTracking PASSED

org.apache.kafka.streams.processor.internals.StreamTaskTest > testProcessOrder 
STARTED

org.apache.kafka.streams.processor.internals.StreamTaskTest > testProcessOrder 
PASSED

org.apache.kafka.streams.processor.internals.StreamTaskTest > testPauseResume 
STARTED

org.apache.kafka.streams.processor.internals.StreamTaskTest > testPauseResume 
PASSED

org.apache.kafka.streams.processor.internals.StreamTaskTest > 
testMaybePunctuate STARTED

org.apache.kafka.streams.processor.internals.StreamTaskTest > 
testMaybePunctuate PASSED

org.apache.kafka.streams.processor.internals.assignment.AssignmentInfoTest > 
testEncodeDecode STARTED

org.apache.kafka.streams.processor.internals.assignment.AssignmentInfoTest > 
testEncodeDecode PASSED

org.apache.kafka.streams.processor.internals.assignment.AssignmentInfoTest > 
shouldDecodePreviousVersion STARTED

org.apache.kafka.streams.processor.internals.assignment.AssignmentInfoTest > 
shouldDecodePreviousVersion PASSED

org.apache.kafka.streams.processor.internals.assignment.SubscriptionInfoTest > 
testEncodeDecode STARTED

org.apache.kafka.streams.processor.internals.assignment.SubscriptionInfoTest > 
testEncodeDecode PASSED

org.apache.kafka.streams.processor.internals.assignment.SubscriptionInfoTest > 
shouldEncodeDecodeWithUserEndPoint STARTED

org.apache.kafka.streams.processor.internals.assignment.SubscriptionInfoTest > 
shouldEncodeDecodeWithUserEndPoint PASSED

org.apache.kafka.streams.processor.internals.assignment.SubscriptionInfoTest > 
shouldBeBackwardCompatible STARTED

org.apache.kafka.streams.processor.internals.assignment.SubscriptionInfoTest > 
shouldBeBackwardCompatible PASSED

org.apache.kafka.streams.processor.internals.assignment.TaskAssignorTest > 
testStickiness STARTED

org.apache.kafka.streams.processor.internals.assignment.TaskAssignorTest > 
testStickiness PASSED

org.apache.kafka.streams.processor.internals.assignment.TaskAssignorTest > 
testAssignWithStandby STARTED

org.apache.kafka.streams.processor.internals.assignment.TaskAssignorTest > 
testAssignWithStandby PASSED

org.apache.kafka.streams.processor.internals.assignment.TaskAssignorTest > 
testAssignWithoutStandby STARTED

org.apache.kafka.streams.processor.internals.assignment.TaskAssignorTest > 
testAssignWithoutStandby PASSED

org.apache.kafka.streams.processor.internals.ProcessorTopologyTest > 
testDrivingMultiplexingTopology STARTED

org.apache.kafka.streams.processor.internals.ProcessorTopologyTest > 
testDrivingMultiplexingTopology PASSED

org.apache.kafka.streams.processor.internals.ProcessorTopologyTest > 
testDrivingStatefulTopology STARTED

org.apache.kafka.streams.processor.internals.ProcessorTopologyTest > 
testDrivingStatefulTopology PASSED

org.apache.kafka.streams.processor.internals.ProcessorTopologyTest > 
testDrivingSimpleTopology STARTED

org.apache.kafka.streams.processor.internals.ProcessorTopologyTest > 
testDrivingSimpleTopology PASSED

org.apache.kafka.streams.processor.internals.ProcessorTopologyTest > 
testDrivingSimpleMultiSourceTopology STARTED

org.apache.kafka.streams.processor.internals.ProcessorTopologyTest > 
testDrivingSimpleMultiSourceTopology PASSED

org.apache.kafka.streams.processor.internals.ProcessorTopologyTest > 
testTopologyMetadata STARTED

org.apache.kafka.streams.processor.internals.ProcessorTopologyTest > 
testTopologyMetadata PASSED

org.apache.kafka.streams.processor.internals.ProcessorTopologyTest > 
testDrivingMultiplexByNameTopology STARTED

org.apache.kafka.streams.processor.internals.ProcessorTopologyTest > 
testDrivingMultiplexByNameTopology PASSED

org.apache.kafka.streams.processor.internals.RecordCollectorTest > 
testSpecificPartition STARTED


Build failed in Jenkins: kafka-trunk-jdk8 #870

2016-09-08 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] MINOR: fix transient QueryableStateIntegration test failure

--
[...truncated 4896 lines...]

kafka.utils.ReplicationUtilsTest > testUpdateLeaderAndIsr PASSED

kafka.utils.ReplicationUtilsTest > testGetLeaderIsrAndEpochForPartition STARTED

kafka.utils.ReplicationUtilsTest > testGetLeaderIsrAndEpochForPartition PASSED

kafka.utils.timer.TimerTest > testAlreadyExpiredTask STARTED

kafka.utils.timer.TimerTest > testAlreadyExpiredTask PASSED

kafka.utils.timer.TimerTest > testTaskExpiration STARTED

kafka.utils.timer.TimerTest > testTaskExpiration PASSED

kafka.utils.timer.TimerTaskListTest > testAll STARTED

kafka.utils.timer.TimerTaskListTest > testAll PASSED

kafka.utils.SchedulerTest > testMockSchedulerNonPeriodicTask STARTED

kafka.utils.SchedulerTest > testMockSchedulerNonPeriodicTask PASSED

kafka.utils.SchedulerTest > testMockSchedulerPeriodicTask STARTED

kafka.utils.SchedulerTest > testMockSchedulerPeriodicTask PASSED

kafka.utils.SchedulerTest > testNonPeriodicTask STARTED

kafka.utils.SchedulerTest > testNonPeriodicTask PASSED

kafka.utils.SchedulerTest > testRestart STARTED

kafka.utils.SchedulerTest > testRestart PASSED

kafka.utils.SchedulerTest > testReentrantTaskInMockScheduler STARTED

kafka.utils.SchedulerTest > testReentrantTaskInMockScheduler PASSED

kafka.utils.SchedulerTest > testPeriodicTask STARTED

kafka.utils.SchedulerTest > testPeriodicTask PASSED

kafka.utils.CommandLineUtilsTest > testParseEmptyArg STARTED

kafka.utils.CommandLineUtilsTest > testParseEmptyArg PASSED

kafka.utils.CommandLineUtilsTest > testParseSingleArg STARTED

kafka.utils.CommandLineUtilsTest > testParseSingleArg PASSED

kafka.utils.CommandLineUtilsTest > testParseArgs STARTED

kafka.utils.CommandLineUtilsTest > testParseArgs PASSED

kafka.utils.CommandLineUtilsTest > testParseEmptyArgAsValid STARTED

kafka.utils.CommandLineUtilsTest > testParseEmptyArgAsValid PASSED

kafka.utils.UtilsTest > testAbs STARTED

kafka.utils.UtilsTest > testAbs PASSED

kafka.utils.UtilsTest > testReplaceSuffix STARTED

kafka.utils.UtilsTest > testReplaceSuffix PASSED

kafka.utils.UtilsTest > testCircularIterator STARTED

kafka.utils.UtilsTest > testCircularIterator PASSED

kafka.utils.UtilsTest > testReadBytes STARTED

kafka.utils.UtilsTest > testReadBytes PASSED

kafka.utils.UtilsTest > testCsvList STARTED

kafka.utils.UtilsTest > testCsvList PASSED

kafka.utils.UtilsTest > testReadInt STARTED

kafka.utils.UtilsTest > testReadInt PASSED

kafka.utils.UtilsTest > testCsvMap STARTED

kafka.utils.UtilsTest > testCsvMap PASSED

kafka.utils.UtilsTest > testInLock STARTED

kafka.utils.UtilsTest > testInLock PASSED

kafka.utils.UtilsTest > testSwallow STARTED

kafka.utils.UtilsTest > testSwallow PASSED

kafka.utils.JsonTest > testJsonEncoding STARTED

kafka.utils.JsonTest > testJsonEncoding PASSED

kafka.utils.IteratorTemplateTest > testIterator STARTED

kafka.utils.IteratorTemplateTest > testIterator PASSED

kafka.metrics.KafkaTimerTest > testKafkaTimer STARTED

kafka.metrics.KafkaTimerTest > testKafkaTimer PASSED

kafka.metrics.MetricsTest > testMetricsReporterAfterDeletingTopic STARTED

kafka.metrics.MetricsTest > testMetricsReporterAfterDeletingTopic PASSED

kafka.metrics.MetricsTest > 
testBrokerTopicMetricsUnregisteredAfterDeletingTopic STARTED

kafka.metrics.MetricsTest > 
testBrokerTopicMetricsUnregisteredAfterDeletingTopic PASSED

kafka.metrics.MetricsTest > testMetricsLeak STARTED

kafka.metrics.MetricsTest > testMetricsLeak PASSED

kafka.consumer.TopicFilterTest > testWhitelists STARTED

kafka.consumer.TopicFilterTest > testWhitelists PASSED

kafka.consumer.TopicFilterTest > 
testWildcardTopicCountGetTopicCountMapEscapeJson STARTED

kafka.consumer.TopicFilterTest > 
testWildcardTopicCountGetTopicCountMapEscapeJson PASSED

kafka.consumer.TopicFilterTest > testBlacklists STARTED

kafka.consumer.TopicFilterTest > testBlacklists PASSED

kafka.consumer.PartitionAssignorTest > testRoundRobinPartitionAssignor STARTED

kafka.consumer.PartitionAssignorTest > testRoundRobinPartitionAssignor PASSED

kafka.consumer.PartitionAssignorTest > testRangePartitionAssignor STARTED

kafka.consumer.PartitionAssignorTest > testRangePartitionAssignor PASSED

kafka.consumer.ZookeeperConsumerConnectorTest > testBasic STARTED

kafka.consumer.ZookeeperConsumerConnectorTest > testBasic PASSED

kafka.consumer.ZookeeperConsumerConnectorTest > testCompressionSetConsumption 
STARTED

kafka.consumer.ZookeeperConsumerConnectorTest > testCompressionSetConsumption 
PASSED

kafka.consumer.ZookeeperConsumerConnectorTest > testLeaderSelectionForPartition 
STARTED

kafka.consumer.ZookeeperConsumerConnectorTest > testLeaderSelectionForPartition 
PASSED

kafka.consumer.ZookeeperConsumerConnectorTest > testConsumerDecoder STARTED

kafka.consumer.ZookeeperConsumerConnectorTest > testConsumerDecoder PASSED


Build failed in Jenkins: kafka-trunk-jdk8 #869

2016-09-08 Thread Apache Jenkins Server
See 

Changes:

[jason] KAFKA-4139; Reset findCoordinatorFuture when brokers are unavailable

--
[...truncated 12315 lines...]
org.apache.kafka.streams.kstream.internals.KTableKTableLeftJoinTest > 
testNotSendingOldValue STARTED

org.apache.kafka.streams.kstream.internals.KTableKTableLeftJoinTest > 
testNotSendingOldValue PASSED

org.apache.kafka.streams.kstream.internals.KeyValuePrinterProcessorTest > 
testPrintKeyValueWithProvidedSerde STARTED

org.apache.kafka.streams.kstream.internals.KeyValuePrinterProcessorTest > 
testPrintKeyValueWithProvidedSerde PASSED

org.apache.kafka.streams.kstream.internals.KeyValuePrinterProcessorTest > 
testPrintKeyValuesWithName STARTED

org.apache.kafka.streams.kstream.internals.KeyValuePrinterProcessorTest > 
testPrintKeyValuesWithName PASSED

org.apache.kafka.streams.kstream.internals.KeyValuePrinterProcessorTest > 
testPrintKeyValueDefaultSerde STARTED

org.apache.kafka.streams.kstream.internals.KeyValuePrinterProcessorTest > 
testPrintKeyValueDefaultSerde PASSED

org.apache.kafka.streams.kstream.internals.KTableSourceTest > 
testSendingOldValue STARTED

org.apache.kafka.streams.kstream.internals.KTableSourceTest > 
testSendingOldValue PASSED

org.apache.kafka.streams.kstream.internals.KTableSourceTest > 
testNotSendingOldValue STARTED

org.apache.kafka.streams.kstream.internals.KTableSourceTest > 
testNotSendingOldValue PASSED

org.apache.kafka.streams.kstream.internals.KTableSourceTest > testKTable STARTED

org.apache.kafka.streams.kstream.internals.KTableSourceTest > testKTable PASSED

org.apache.kafka.streams.kstream.internals.KTableSourceTest > testValueGetter 
STARTED

org.apache.kafka.streams.kstream.internals.KTableSourceTest > testValueGetter 
PASSED

org.apache.kafka.streams.kstream.internals.KTableMapKeysTest > 
testMapKeysConvertingToStream STARTED

org.apache.kafka.streams.kstream.internals.KTableMapKeysTest > 
testMapKeysConvertingToStream PASSED

org.apache.kafka.streams.kstream.internals.KStreamForeachTest > testForeach 
STARTED

org.apache.kafka.streams.kstream.internals.KStreamForeachTest > testForeach 
PASSED

org.apache.kafka.streams.kstream.internals.KTableKTableOuterJoinTest > 
testSendingOldValue STARTED

org.apache.kafka.streams.kstream.internals.KTableKTableOuterJoinTest > 
testSendingOldValue PASSED

org.apache.kafka.streams.kstream.internals.KTableKTableOuterJoinTest > testJoin 
STARTED

org.apache.kafka.streams.kstream.internals.KTableKTableOuterJoinTest > testJoin 
PASSED

org.apache.kafka.streams.kstream.internals.KTableKTableOuterJoinTest > 
testNotSendingOldValue STARTED

org.apache.kafka.streams.kstream.internals.KTableKTableOuterJoinTest > 
testNotSendingOldValue PASSED

org.apache.kafka.streams.kstream.internals.KStreamKStreamJoinTest > 
testOuterJoin STARTED

org.apache.kafka.streams.kstream.internals.KStreamKStreamJoinTest > 
testOuterJoin PASSED

org.apache.kafka.streams.kstream.internals.KStreamKStreamJoinTest > testJoin 
STARTED

org.apache.kafka.streams.kstream.internals.KStreamKStreamJoinTest > testJoin 
PASSED

org.apache.kafka.streams.kstream.internals.KStreamKStreamJoinTest > 
testWindowing STARTED

org.apache.kafka.streams.kstream.internals.KStreamKStreamJoinTest > 
testWindowing PASSED

org.apache.kafka.streams.kstream.internals.KStreamFlatMapValuesTest > 
testFlatMapValues STARTED

org.apache.kafka.streams.kstream.internals.KStreamFlatMapValuesTest > 
testFlatMapValues PASSED

org.apache.kafka.streams.kstream.internals.KTableKTableJoinTest > testJoin 
STARTED

org.apache.kafka.streams.kstream.internals.KTableKTableJoinTest > testJoin 
PASSED

org.apache.kafka.streams.kstream.internals.KTableKTableJoinTest > 
testNotSendingOldValues STARTED

org.apache.kafka.streams.kstream.internals.KTableKTableJoinTest > 
testNotSendingOldValues PASSED

org.apache.kafka.streams.kstream.internals.KTableKTableJoinTest > 
testSendingOldValues STARTED

org.apache.kafka.streams.kstream.internals.KTableKTableJoinTest > 
testSendingOldValues PASSED

org.apache.kafka.streams.kstream.internals.KTableAggregateTest > testAggBasic 
STARTED

org.apache.kafka.streams.kstream.internals.KTableAggregateTest > testAggBasic 
PASSED

org.apache.kafka.streams.kstream.internals.KTableAggregateTest > testCount 
STARTED

org.apache.kafka.streams.kstream.internals.KTableAggregateTest > testCount 
PASSED

org.apache.kafka.streams.kstream.internals.KTableAggregateTest > 
testAggRepartition STARTED

org.apache.kafka.streams.kstream.internals.KTableAggregateTest > 
testAggRepartition PASSED

org.apache.kafka.streams.kstream.internals.KTableAggregateTest > 
testRemoveOldBeforeAddNew STARTED

org.apache.kafka.streams.kstream.internals.KTableAggregateTest > 
testRemoveOldBeforeAddNew PASSED

org.apache.kafka.streams.kstream.internals.KStreamFilterTest > testFilterNot 
STARTED

org.apache.kafka.streams.kstream.internals.KStreamFilterTest > testFilterNot 
PASSED


Re: [VOTE] KIP-63: Unify store and downstream caching in streams

2016-09-08 Thread Eno Thereska
Guozhang,

I think the info is there. The next sentence says "The optimization is 
applicable to aggregations. It is not applicable to other operators like joins."
Also the bit about the per-thread cache is one paragraph lower: "Specifically, 
for a streams instance with T threads and cache size C, each thread will have 
an even C/T bytes of cache, to use as it sees fit among its tasks. No sharing 
of caches across threads will happen"
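For concreteness, a minimal sketch of that sizing rule; the cache.max.bytes.buffering
name comes from the KIP and num.stream.threads is the existing Streams setting, so
treat both as assumptions to verify against the final implementation.

import java.util.Properties;

public class StreamsCacheSizingSketch {
    public static void main(String[] args) {
        long totalCacheBytes = 10 * 1024 * 1024L; // C: 10 MB for the whole instance
        int numThreads = 5;                       // T: 5 stream threads

        Properties props = new Properties();
        // Setting names assumed from the KIP / StreamsConfig; verify against the release.
        props.put("cache.max.bytes.buffering", Long.toString(totalCacheBytes));
        props.put("num.stream.threads", Integer.toString(numThreads));

        // Per the KIP text quoted above: each thread gets an even, private C/T share.
        long perThreadBytes = totalCacheBytes / numThreads; // 2 MB per thread here
        System.out.println(numThreads + " threads x " + perThreadBytes + " bytes each");
    }
}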

Thanks
Eno


> On 8 Sep 2016, at 18:51, Guozhang Wang  wrote:
> 
> Just read the KIP wiki again:
> 
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-63%3A+Unify+store+and+downstream+caching+in+streams
> 
> Just one minor comment, but otherwise I'm +1:
> 
> In "proposed changes" section: "The cache has two functions. First, it
> continues to serve as a read buffer for data that is sent to the state
> store, just like today. Second, it serves as a write deduplicator for the
> state store (just like today) as well as for the downstream processor
> node(s). "
> 
> I feel this is still a bit confusing. The caching layer is only turned on
> for state stores used in aggregate operators in the DSL, right? For example
> in KStream-KStream joins, we will not turn on caching since there are no
> updates to the state store for the same key.
> 
> So could we just propose "removing the caching layer inside the persistent
> state store engines (i.e., RocksDB), and instead add a per-thread global
> cache which will only be activated for state stores used in the Streams DSL's
> aggregate operator, as a write deduplicator for both the state store and
> the downstream operators"?
> 
> Guozhang
> 
> 
> On Thu, Sep 8, 2016 at 10:07 AM, Eno Thereska 
> wrote:
> 
>> There have been a couple of changes to KIP-63 since the voting started,
>> after more feedback, most notably the fact that this KIP applies to the DSL
>> only, and not to the Processor API.
>> 
>> At this point I'd like to restart the voting process.
>> 
>> Thanks
>> Eno
>> 
>>> On 31 Aug 2016, at 17:16, Jim Jagielski  wrote:
>>> 
>>> +1
 On Aug 25, 2016, at 6:57 AM, Eno Thereska 
>> wrote:
 
 Hi folks,
 
 We'd like to start the vote for KIP-63. At this point the Wiki addresses
 all previous questions and we believe the PoC is feature-complete.
 
 Thanks
 Eno
>>> 
>> 
>> 
> 
> 
> -- 
> -- Guozhang



[GitHub] kafka pull request #1833: MINOR: fix transient QueryableStateIntegration tes...

2016-09-08 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1833


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Re: [VOTE] KIP-63: Unify store and downstream caching in streams

2016-09-08 Thread Guozhang Wang
Just read the KIP wiki again:

https://cwiki.apache.org/confluence/display/KAFKA/KIP-63%3A+Unify+store+and+downstream+caching+in+streams

Just one minor comment, but otherwise I'm +1:

In "proposed changes" section: "The cache has two functions. First, it
continues to serve as a read buffer for data that is sent to the state
store, just like today. Second, it serves as a write deduplicator for the
state store (just like today) as well as for the downstream processor
node(s). "

I feel this is still a bit confusing. The caching layer is only turned on
for state stores used in aggregate operators in the DSL, right? For example
in KStream-KStream joins, we will not turn on caching since there are no
updates to the state store for the same key.

So could we just propose "removing the caching layer inside the persistent
state store engines (i.e., RocksDB), and instead add a per-thread global
cache which will only be activated for state stores used in the Streams DSL's
aggregate operator, as a write deduplicator for both the state store and
the downstream operators"?
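To make the write-deduplication idea concrete, a small self-contained sketch
(plain Java, not the actual Streams cache): successive updates to the same key
are collapsed, and only the latest value per key is forwarded when the cache
flushes.

import java.util.LinkedHashMap;
import java.util.Map;
import java.util.function.BiConsumer;

// Hypothetical illustration of a per-key write-dedup buffer, not Kafka code.
class DedupBuffer<K, V> {
    private final Map<K, V> dirty = new LinkedHashMap<>();

    void put(K key, V value) {          // called once per incoming record
        dirty.put(key, value);          // later updates overwrite earlier ones
    }

    void flush(BiConsumer<K, V> downstream) {  // called on commit / cache eviction
        dirty.forEach(downstream);             // only the latest value per key is forwarded
        dirty.clear();
    }
}

public class DedupSketch {
    public static void main(String[] args) {
        DedupBuffer<String, Long> cache = new DedupBuffer<>();
        // Three updates to the same aggregate key...
        cache.put("page-1", 1L);
        cache.put("page-1", 2L);
        cache.put("page-1", 3L);
        // ...but only one record goes downstream when the cache flushes.
        cache.flush((k, v) -> System.out.println(k + " -> " + v)); // prints: page-1 -> 3
    }
}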

Guozhang


On Thu, Sep 8, 2016 at 10:07 AM, Eno Thereska 
wrote:

> There have been a couple of changes to KIP-63 since the voting started,
> after more feedback, most notably the fact that this KIP applies to the DSL
> only, and not to the Processor API.
>
> At this point I'd like to restart the voting process.
>
> Thanks
> Eno
>
> > On 31 Aug 2016, at 17:16, Jim Jagielski  wrote:
> >
> > +1
> >> On Aug 25, 2016, at 6:57 AM, Eno Thereska 
> wrote:
> >>
> >> Hi folks,
> >>
> >> We'd like to start the vote for KIP-63. At this point the Wiki addresses
> >> all previous questions and we believe the PoC is feature-complete.
> >>
> >> Thanks
> >> Eno
> >
>
>


-- 
-- Guozhang


Re: [VOTE] KIP-63: Unify store and downstream caching in streams

2016-09-08 Thread Eno Thereska
There have been a couple of changes to KIP-63 since the voting started, after 
more feedback, most notably the fact that this KIP applies to the DSL only, and 
not to the Processor API.

At this point I'd like to restart the voting process. 

Thanks
Eno

> On 31 Aug 2016, at 17:16, Jim Jagielski  wrote:
> 
> +1
>> On Aug 25, 2016, at 6:57 AM, Eno Thereska  wrote:
>> 
>> Hi folks,
>> 
>> We'd like to start the vote for KIP-63. At this point the Wiki addresses
>> all previous questions and we believe the PoC is feature-complete.
>> 
>> Thanks
>> Eno
> 



Re: [DISCUSS] KIP-63: Unify store and downstream caching in streams

2016-09-08 Thread Eno Thereska
Hi,

Users have many options for buffering in the Processor API and it doesn't seem 
right we should prescribe a particular one. Also, there is value in continuing 
to keep the Processor API simple.

As such, we'll remove the ".enableCaching" for a store used in the processor 
API from the KIP and simplify the KIP by having it apply to the DSL only.

Thanks
Eno

> On 7 Sep 2016, at 15:41, Damian Guy  wrote:
> 
> Guozhang,
> 
> Some points about what you have mentioned:
> 1. You can't just call context.forward() on the flush listener. You have to
> set some other contextual information (currently ProcessorRecordContext)
> prior to doing this otherwise the nodes you are forwarding to are
> undetermined, i.e, this can be called at any point during the topology or
> on commit.
> 2. It is a bytes cache, so the Processors would need to have the Serdes in
> order to use this pattern.
> 3. The namespace of the cache can't just be processorName or even
> processorName-stateStoreName; it will also need to include something like
> taskId along with it (a rough sketch of points 2 and 3 follows below).
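A rough sketch of what points 2 and 3 mean in practice; the composite namespace
format and the byte conversion are illustrative only, standing in for the real
task id and Serde plumbing.

import java.nio.charset.StandardCharsets;

public class CacheNamespaceSketch {
    public static void main(String[] args) {
        // Point 3: processorName alone is ambiguous across tasks; include the taskId.
        String taskId = "0_1";                 // illustrative task id
        String processorName = "KSTREAM-AGGREGATE-0000000003";
        String storeName = "counts-store";
        String namespace = taskId + "-" + processorName + "-" + storeName;

        // Point 2: it is a bytes cache, so the processor needs Serdes to get bytes in and out.
        String value = "page-1 -> 42";
        byte[] cachedBytes = value.getBytes(StandardCharsets.UTF_8); // stand-in for a Serde's serializer

        System.out.println("cache[" + namespace + "] = " + cachedBytes.length + " bytes");
    }
}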
> 
> Thanks,
> Damian
> 
> 
> On Wed, 7 Sep 2016 at 00:39 Guozhang Wang  wrote:
> 
>> Hi Matthias,
>> 
>> I agree with your concerns about coupling record forwarding with record
>> storing in the state store, and my understanding is that this can (and
>> should) be resolved with the current interface. Here are my thoughts:
>> 
>> 1. The global cache, MemoryLRUCacheBytes, although currently defined as an
>> internal class, is exposed in ProcessorContext anyway, so it should really be
>> a public class that users can access (I have some other comments about the
>> names, but will rather leave them in the PR).
>> 
>> 2. In the processor API, users can choose to store intermediate results in
>> the cache and register the flush listener via addDirtyEntryFlushListener
>> (again, some naming suggestions are in the PR, but use it for discussion for
>> now). As a result, if the old processor code looks
>> like this:
>> 
>> 
>> 
>> process(...) {
>> 
>>  state.put(...);
>>  context.forward(...);
>> }
>> 
>> 
>> Users can now leverage the cache on some of the processors by modifying the
>> code as follows (a fuller standalone sketch follows below):
>> 
>> 
>> 
>> init(...) {
>> 
>>  context.getCache().addDirtyEntryFlushListener(processorName,
>> {state.put(...); context.forward(...)})
>> }
>> 
>> process(...) {
>> 
>>  context.getCache().put(processorName, ..);
>> }
>> 
>> 
>> 
>> 3. Note whether or not to apply caching is optional for each processor node
>> now, and is decoupled with its logic of forwarding / storing in persistent
>> state stores.
>> 
>> One may argue that if users now want to make use of the cache, they will
>> need to make code changes; but I think this is a reasonable requirement for
>> users, since 1) currently we do one update per incoming record, and without
>> code changes this behavior will be preserved, and 2) for the DSL
>> implementation, we can just follow the above pattern to abstract it from
>> users, so they can pick up these changes automatically.
>> 
>> 
>> Guozhang
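
A standalone sketch of the flush-listener pattern from point 2 above. The Cache,
FlushListener and processor types here are made-up stand-ins for the interfaces
still being discussed in the PR; the point is only the wiring, where process()
writes to the cache and the registered listener does the store.put() and
forward() that used to happen inline.

import java.util.HashMap;
import java.util.Map;

public class FlushListenerSketch {

    // Hypothetical stand-ins for the Streams interfaces under discussion.
    interface FlushListener<K, V> { void apply(K key, V value); }

    static class Cache<K, V> {
        private final Map<K, V> dirty = new HashMap<>();
        private FlushListener<K, V> listener;

        void addDirtyEntryFlushListener(String namespace, FlushListener<K, V> l) { this.listener = l; }
        void put(String namespace, K key, V value) { dirty.put(key, value); }
        void flush() { dirty.forEach((k, v) -> listener.apply(k, v)); dirty.clear(); }
    }

    static class StateStore<K, V> extends HashMap<K, V> {}

    // A processor that buffers writes in the cache instead of forwarding per record.
    static class CachingProcessor {
        static final String NAME = "my-processor";
        final Cache<String, Long> cache;
        final StateStore<String, Long> store = new StateStore<>();

        CachingProcessor(Cache<String, Long> cache) {
            this.cache = cache;
            // init(): register what should happen when a dirty entry is flushed.
            cache.addDirtyEntryFlushListener(NAME, (key, value) -> {
                store.put(key, value);                              // state.put(...)
                System.out.println("forward " + key + "=" + value); // context.forward(...)
            });
        }

        void process(String key, long value) {
            cache.put(NAME, key, value);  // no store.put()/forward() per record any more
        }
    }

    public static void main(String[] args) {
        Cache<String, Long> cache = new Cache<>();
        CachingProcessor p = new CachingProcessor(cache);
        p.process("k", 1L);
        p.process("k", 2L);
        cache.flush();  // only the latest value for "k" is stored and forwarded
    }
}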
>> 
>> 
>> On Tue, Sep 6, 2016 at 7:41 AM, Eno Thereska 
>> wrote:
>> 
>>> A small update to the KIP: the deduping of records using the cache does
>>> not affect the .to operator since we'd have already deduped the KTable
>>> before the operator. Adjusting KIP.
>>> 
>>> Thanks
>>> Eno
>>> 
 On 5 Sep 2016, at 12:43, Eno Thereska  wrote:
 
 Hi Matthias,
 
 The motivation for KIP-63 was primarily aggregates and reducing the
>> load
>>> on "both" state stores and downstream. I think there is agreement that
>> for
>>> the DSL the motivation and design make sense.
 
 For the Processor API: caching is a major component in any system, and
>>> it is difficult to continue to operate as before, without fully
>>> understanding the consequences. Hence, I think this is mostly a case of
>>> educating users to understand the boundaries of the solution.
 
 Introducing a cache, either for the state store only, or for downstream
>>> forwarding only, or for both, leads to moving from a model where we
>> process
>>> each request end-to-end (today) to one where a request is temporarily
>>> buffered in a cache. In all cases, this opens up the question of what
>>> to do once the request leaves the cache, and how to express
>> that
>>> (future) behaviour. E.g., even when the cache is just for downstream
>>> forwarding (i.e., decoupled from any state store), the processor API user
>>> might be surprised that context.forward() does not immediately do
>> anything.
 
 I agree that for ultra-flexibility, a processor API user should be able
>>> to choose whether the dedup cache is put 1) on top of a store only, 2) on
>>> forward only, 3) on both store and forward, but given the motivation for
>>> KIP-63 (aggregates), 

[jira] [Updated] (KAFKA-4139) Kafka consumer stuck in ensureCoordinatorReady after broker failure

2016-09-08 Thread Jason Gustafson (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4139?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson updated KAFKA-4139:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Kafka consumer stuck in ensureCoordinatorReady after broker failure
> ---
>
> Key: KAFKA-4139
> URL: https://issues.apache.org/jira/browse/KAFKA-4139
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 0.10.0.1
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
>
> In one of our tests with a single broker, the consumer is stuck waiting to find 
> the coordinator after the broker is restarted. {{findCoordinatorFuture}} is never 
> reset if {{sendGroupCoordinatorRequest()}} returns 
> {{RequestFuture.noBrokersAvailable()}}.
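
A simplified, self-contained illustration of the pattern behind the fix (not the
actual consumer code, whose types differ): a cached coordinator-lookup future has
to be dropped when it fails because no brokers are available, otherwise every
later lookup keeps returning the same failed future and the client never retries.

import java.util.concurrent.CompletableFuture;

public class CoordinatorLookupSketch {
    private CompletableFuture<String> findCoordinatorFuture;

    // Simplified lookup: caches an in-flight/completed future, as the consumer does.
    CompletableFuture<String> lookupCoordinator(boolean brokersAvailable) {
        if (findCoordinatorFuture != null) {
            return findCoordinatorFuture;
        }
        CompletableFuture<String> future = new CompletableFuture<>();
        if (brokersAvailable) {
            findCoordinatorFuture = future;
            future.complete("coordinator-node-1");
        } else {
            // The essence of the fix: a "no brokers available" failure must not stay
            // cached, otherwise every later lookup returns this dead future forever.
            future.completeExceptionally(new RuntimeException("no brokers available"));
        }
        return future;
    }

    public static void main(String[] args) {
        CoordinatorLookupSketch c = new CoordinatorLookupSketch();
        System.out.println(c.lookupCoordinator(false).isCompletedExceptionally()); // true, not cached
        System.out.println(c.lookupCoordinator(true).isCompletedExceptionally());  // false, retried
    }
}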



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (KAFKA-4136) KafkaBasedLog should include offsets it is trying to/successfully read to in log messages

2016-09-08 Thread Liquan Pei (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4136?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Liquan Pei reassigned KAFKA-4136:
-

Assignee: Liquan Pei  (was: Ewen Cheslack-Postava)

> KafkaBasedLog should include offsets it is trying to/successfully read to in 
> log messages
> -
>
> Key: KAFKA-4136
> URL: https://issues.apache.org/jira/browse/KAFKA-4136
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 0.10.0.1
>Reporter: Ewen Cheslack-Postava
>Assignee: Liquan Pei
>Priority: Minor
>
> Including the offset information will aid in debugging. It will need to be 
> logged for all partitions, but the number of partitions is usually not too 
> large, and it helps in understanding, after an issue occurs, which values were 
> being used (e.g. for configs you can normally get this from DistributedHerder, 
> but for the offsets topic it lets you dump the topic and figure out which 
> offset was being used even if the connector doesn't log any information 
> itself).
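
For illustration, the kind of log lines the ticket is asking for, assuming the
slf4j logger that Connect classes already use; the wording and variable names
are only a sketch.

import java.util.HashMap;
import java.util.Map;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class OffsetLoggingSketch {
    private static final Logger log = LoggerFactory.getLogger(OffsetLoggingSketch.class);

    public static void main(String[] args) {
        Map<String, Long> endOffsets = new HashMap<>();
        endOffsets.put("connect-offsets-0", 42L);
        endOffsets.put("connect-offsets-1", 17L);
        // Say which offsets we are trying to read to...
        log.info("Reading KafkaBasedLog to end offsets {}", endOffsets);
        // ...and which offsets were actually reached, so a later dump of the topic can be correlated.
        log.info("Finished reading KafkaBasedLog up to offsets {}", endOffsets);
    }
}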



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request #1831: KAFKA-4139: Reset findCoordinatorFuture when broke...

2016-09-08 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1831


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-4139) Kafka consumer stuck in ensureCoordinatorReady after broker failure

2016-09-08 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15474369#comment-15474369
 ] 

ASF GitHub Bot commented on KAFKA-4139:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1831


> Kafka consumer stuck in ensureCoordinatorReady after broker failure
> ---
>
> Key: KAFKA-4139
> URL: https://issues.apache.org/jira/browse/KAFKA-4139
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 0.10.0.1
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
>
> In one of our tests with a single broker, the consumer is stuck waiting to find 
> the coordinator after the broker is restarted. {{findCoordinatorFuture}} is never 
> reset if {{sendGroupCoordinatorRequest()}} returns 
> {{RequestFuture.noBrokersAvailable()}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-4144) Allow per stream/table timestamp extractor

2016-09-08 Thread Elias Levy (JIRA)
Elias Levy created KAFKA-4144:
-

 Summary: Allow per stream/table timestamp extractor
 Key: KAFKA-4144
 URL: https://issues.apache.org/jira/browse/KAFKA-4144
 Project: Kafka
  Issue Type: Improvement
  Components: streams
Affects Versions: 0.10.0.1
Reporter: Elias Levy
Assignee: Guozhang Wang


At the moment the timestamp extractor is configured via a StreamsConfig value 
passed to KafkaStreams. That means you can only have a single timestamp extractor 
per app, even though you may be joining multiple streams/tables that require 
different timestamp extraction methods.

You should be able to specify a timestamp extractor via 
KStreamBuilder.stream/table, just like you can specify key and value serdes 
that override the StreamsConfig defaults.
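
For context, a sketch of today's application-wide configuration, with the
requested per-stream overload shown only as a hypothetical comment (that API
does not exist at the time of writing); StreamsConfig.TIMESTAMP_EXTRACTOR_CLASS_CONFIG
and WallclockTimestampExtractor are the existing names.

import java.util.Properties;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.processor.WallclockTimestampExtractor;

public class TimestampExtractorSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Today: one extractor for the whole application.
        props.put(StreamsConfig.TIMESTAMP_EXTRACTOR_CLASS_CONFIG,
                  WallclockTimestampExtractor.class.getName());

        // What the ticket asks for (hypothetical API shape, illustrative names only):
        // builder.stream(new MyEventTimeExtractor(), stringSerde, eventSerde, "events");
        // builder.table(new MyChangelogTimeExtractor(), stringSerde, rowSerde, "changelog");
    }
}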



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3590) KafkaConsumer fails with "Messages are rejected since there are fewer in-sync replicas than required." when polling

2016-09-08 Thread Dustin Cote (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3590?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15473820#comment-15473820
 ] 

Dustin Cote commented on KAFKA-3590:


[~hachikuji] is going to take this one over for now.

> KafkaConsumer fails with "Messages are rejected since there are fewer in-sync 
> replicas than required." when polling
> ---
>
> Key: KAFKA-3590
> URL: https://issues.apache.org/jira/browse/KAFKA-3590
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 0.9.0.1
> Environment: JDK1.8 Ubuntu 14.04
>Reporter: Sergey Alaev
>Assignee: Jason Gustafson
>
> KafkaConsumer.poll() fails with "Messages are rejected since there are fewer 
> in-sync replicas than required.". Isn't this message about minimum number of 
> ISR's when *sending* messages?
> Stacktrace:
> org.apache.kafka.common.KafkaException: Unexpected error from SyncGroup: 
> Messages are rejected since there are fewer in-sync replicas than required.
> at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator$SyncGroupRequestHandler.handle(AbstractCoordinator.java:444)
>  ~[kafka-clients-0.9.0.1.jar:na]
> at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator$SyncGroupRequestHandler.handle(AbstractCoordinator.java:411)
>  ~[kafka-clients-0.9.0.1.jar:na]
> at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator$CoordinatorResponseHandler.onSuccess(AbstractCoordinator.java:665)
>  ~[kafka-clients-0.9.0.1.jar:na]
> at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator$CoordinatorResponseHandler.onSuccess(AbstractCoordinator.java:644)
>  ~[kafka-clients-0.9.0.1.jar:na]
> at 
> org.apache.kafka.clients.consumer.internals.RequestFuture$1.onSuccess(RequestFuture.java:167)
>  ~[kafka-clients-0.9.0.1.jar:na]
> at 
> org.apache.kafka.clients.consumer.internals.RequestFuture.fireSuccess(RequestFuture.java:133)
>  ~[kafka-clients-0.9.0.1.jar:na]
> at 
> org.apache.kafka.clients.consumer.internals.RequestFuture.complete(RequestFuture.java:107)
>  ~[kafka-clients-0.9.0.1.jar:na]
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient$RequestFutureCompletionHandler.onComplete(ConsumerNetworkClient.java:380)
>  ~[kafka-clients-0.9.0.1.jar:na]
> at 
> org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:274) 
> ~[kafka-clients-0.9.0.1.jar:na]
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.clientPoll(ConsumerNetworkClient.java:320)
>  ~[kafka-clients-0.9.0.1.jar:na]
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:213)
>  ~[kafka-clients-0.9.0.1.jar:na]
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:193)
>  ~[kafka-clients-0.9.0.1.jar:na]
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:163)
>  ~[kafka-clients-0.9.0.1.jar:na]
> at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureActiveGroup(AbstractCoordinator.java:222)
>  ~[kafka-clients-0.9.0.1.jar:na]
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.ensurePartitionAssignment(ConsumerCoordinator.java:311)
>  ~[kafka-clients-0.9.0.1.jar:na]
> at 
> org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:890)
>  ~[kafka-clients-0.9.0.1.jar:na]
> at 
> org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:853) 
> ~[kafka-clients-0.9.0.1.jar:na]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3590) KafkaConsumer fails with "Messages are rejected since there are fewer in-sync replicas than required." when polling

2016-09-08 Thread Dustin Cote (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3590?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dustin Cote updated KAFKA-3590:
---
Status: Open  (was: Patch Available)

> KafkaConsumer fails with "Messages are rejected since there are fewer in-sync 
> replicas than required." when polling
> ---
>
> Key: KAFKA-3590
> URL: https://issues.apache.org/jira/browse/KAFKA-3590
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 0.9.0.1
> Environment: JDK1.8 Ubuntu 14.04
>Reporter: Sergey Alaev
>Assignee: Dustin Cote
>
> KafkaConsumer.poll() fails with "Messages are rejected since there are fewer 
> in-sync replicas than required.". Isn't this message about minimum number of 
> ISR's when *sending* messages?
> Stacktrace:
> org.apache.kafka.common.KafkaException: Unexpected error from SyncGroup: 
> Messages are rejected since there are fewer in-sync replicas than required.
> at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator$SyncGroupRequestHandler.handle(AbstractCoordinator.java:444)
>  ~[kafka-clients-0.9.0.1.jar:na]
> at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator$SyncGroupRequestHandler.handle(AbstractCoordinator.java:411)
>  ~[kafka-clients-0.9.0.1.jar:na]
> at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator$CoordinatorResponseHandler.onSuccess(AbstractCoordinator.java:665)
>  ~[kafka-clients-0.9.0.1.jar:na]
> at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator$CoordinatorResponseHandler.onSuccess(AbstractCoordinator.java:644)
>  ~[kafka-clients-0.9.0.1.jar:na]
> at 
> org.apache.kafka.clients.consumer.internals.RequestFuture$1.onSuccess(RequestFuture.java:167)
>  ~[kafka-clients-0.9.0.1.jar:na]
> at 
> org.apache.kafka.clients.consumer.internals.RequestFuture.fireSuccess(RequestFuture.java:133)
>  ~[kafka-clients-0.9.0.1.jar:na]
> at 
> org.apache.kafka.clients.consumer.internals.RequestFuture.complete(RequestFuture.java:107)
>  ~[kafka-clients-0.9.0.1.jar:na]
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient$RequestFutureCompletionHandler.onComplete(ConsumerNetworkClient.java:380)
>  ~[kafka-clients-0.9.0.1.jar:na]
> at 
> org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:274) 
> ~[kafka-clients-0.9.0.1.jar:na]
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.clientPoll(ConsumerNetworkClient.java:320)
>  ~[kafka-clients-0.9.0.1.jar:na]
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:213)
>  ~[kafka-clients-0.9.0.1.jar:na]
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:193)
>  ~[kafka-clients-0.9.0.1.jar:na]
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:163)
>  ~[kafka-clients-0.9.0.1.jar:na]
> at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureActiveGroup(AbstractCoordinator.java:222)
>  ~[kafka-clients-0.9.0.1.jar:na]
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.ensurePartitionAssignment(ConsumerCoordinator.java:311)
>  ~[kafka-clients-0.9.0.1.jar:na]
> at 
> org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:890)
>  ~[kafka-clients-0.9.0.1.jar:na]
> at 
> org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:853) 
> ~[kafka-clients-0.9.0.1.jar:na]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3590) KafkaConsumer fails with "Messages are rejected since there are fewer in-sync replicas than required." when polling

2016-09-08 Thread Dustin Cote (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3590?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dustin Cote updated KAFKA-3590:
---
Assignee: Jason Gustafson  (was: Dustin Cote)

> KafkaConsumer fails with "Messages are rejected since there are fewer in-sync 
> replicas than required." when polling
> ---
>
> Key: KAFKA-3590
> URL: https://issues.apache.org/jira/browse/KAFKA-3590
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 0.9.0.1
> Environment: JDK1.8 Ubuntu 14.04
>Reporter: Sergey Alaev
>Assignee: Jason Gustafson
>
> KafkaConsumer.poll() fails with "Messages are rejected since there are fewer 
> in-sync replicas than required.". Isn't this message about minimum number of 
> ISR's when *sending* messages?
> Stacktrace:
> org.apache.kafka.common.KafkaException: Unexpected error from SyncGroup: 
> Messages are rejected since there are fewer in-sync replicas than required.
> at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator$SyncGroupRequestHandler.handle(AbstractCoordinator.java:444)
>  ~[kafka-clients-0.9.0.1.jar:na]
> at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator$SyncGroupRequestHandler.handle(AbstractCoordinator.java:411)
>  ~[kafka-clients-0.9.0.1.jar:na]
> at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator$CoordinatorResponseHandler.onSuccess(AbstractCoordinator.java:665)
>  ~[kafka-clients-0.9.0.1.jar:na]
> at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator$CoordinatorResponseHandler.onSuccess(AbstractCoordinator.java:644)
>  ~[kafka-clients-0.9.0.1.jar:na]
> at 
> org.apache.kafka.clients.consumer.internals.RequestFuture$1.onSuccess(RequestFuture.java:167)
>  ~[kafka-clients-0.9.0.1.jar:na]
> at 
> org.apache.kafka.clients.consumer.internals.RequestFuture.fireSuccess(RequestFuture.java:133)
>  ~[kafka-clients-0.9.0.1.jar:na]
> at 
> org.apache.kafka.clients.consumer.internals.RequestFuture.complete(RequestFuture.java:107)
>  ~[kafka-clients-0.9.0.1.jar:na]
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient$RequestFutureCompletionHandler.onComplete(ConsumerNetworkClient.java:380)
>  ~[kafka-clients-0.9.0.1.jar:na]
> at 
> org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:274) 
> ~[kafka-clients-0.9.0.1.jar:na]
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.clientPoll(ConsumerNetworkClient.java:320)
>  ~[kafka-clients-0.9.0.1.jar:na]
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:213)
>  ~[kafka-clients-0.9.0.1.jar:na]
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:193)
>  ~[kafka-clients-0.9.0.1.jar:na]
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:163)
>  ~[kafka-clients-0.9.0.1.jar:na]
> at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureActiveGroup(AbstractCoordinator.java:222)
>  ~[kafka-clients-0.9.0.1.jar:na]
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.ensurePartitionAssignment(ConsumerCoordinator.java:311)
>  ~[kafka-clients-0.9.0.1.jar:na]
> at 
> org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:890)
>  ~[kafka-clients-0.9.0.1.jar:na]
> at 
> org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:853) 
> ~[kafka-clients-0.9.0.1.jar:na]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3590) KafkaConsumer fails with "Messages are rejected since there are fewer in-sync replicas than required." when polling

2016-09-08 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3590?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15473815#comment-15473815
 ] 

ASF GitHub Bot commented on KAFKA-3590:
---

Github user cotedm closed the pull request at:

https://github.com/apache/kafka/pull/1681


> KafkaConsumer fails with "Messages are rejected since there are fewer in-sync 
> replicas than required." when polling
> ---
>
> Key: KAFKA-3590
> URL: https://issues.apache.org/jira/browse/KAFKA-3590
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 0.9.0.1
> Environment: JDK1.8 Ubuntu 14.04
>Reporter: Sergey Alaev
>Assignee: Dustin Cote
>
> KafkaConsumer.poll() fails with "Messages are rejected since there are fewer 
> in-sync replicas than required.". Isn't this message about minimum number of 
> ISR's when *sending* messages?
> Stacktrace:
> org.apache.kafka.common.KafkaException: Unexpected error from SyncGroup: 
> Messages are rejected since there are fewer in-sync replicas than required.
> at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator$SyncGroupRequestHandler.handle(AbstractCoordinator.java:444)
>  ~[kafka-clients-0.9.0.1.jar:na]
> at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator$SyncGroupRequestHandler.handle(AbstractCoordinator.java:411)
>  ~[kafka-clients-0.9.0.1.jar:na]
> at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator$CoordinatorResponseHandler.onSuccess(AbstractCoordinator.java:665)
>  ~[kafka-clients-0.9.0.1.jar:na]
> at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator$CoordinatorResponseHandler.onSuccess(AbstractCoordinator.java:644)
>  ~[kafka-clients-0.9.0.1.jar:na]
> at 
> org.apache.kafka.clients.consumer.internals.RequestFuture$1.onSuccess(RequestFuture.java:167)
>  ~[kafka-clients-0.9.0.1.jar:na]
> at 
> org.apache.kafka.clients.consumer.internals.RequestFuture.fireSuccess(RequestFuture.java:133)
>  ~[kafka-clients-0.9.0.1.jar:na]
> at 
> org.apache.kafka.clients.consumer.internals.RequestFuture.complete(RequestFuture.java:107)
>  ~[kafka-clients-0.9.0.1.jar:na]
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient$RequestFutureCompletionHandler.onComplete(ConsumerNetworkClient.java:380)
>  ~[kafka-clients-0.9.0.1.jar:na]
> at 
> org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:274) 
> ~[kafka-clients-0.9.0.1.jar:na]
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.clientPoll(ConsumerNetworkClient.java:320)
>  ~[kafka-clients-0.9.0.1.jar:na]
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:213)
>  ~[kafka-clients-0.9.0.1.jar:na]
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:193)
>  ~[kafka-clients-0.9.0.1.jar:na]
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:163)
>  ~[kafka-clients-0.9.0.1.jar:na]
> at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureActiveGroup(AbstractCoordinator.java:222)
>  ~[kafka-clients-0.9.0.1.jar:na]
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.ensurePartitionAssignment(ConsumerCoordinator.java:311)
>  ~[kafka-clients-0.9.0.1.jar:na]
> at 
> org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:890)
>  ~[kafka-clients-0.9.0.1.jar:na]
> at 
> org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:853) 
> ~[kafka-clients-0.9.0.1.jar:na]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request #1681: KAFKA-3590: KafkaConsumer fails with "Messages are...

2016-09-08 Thread cotedm
Github user cotedm closed the pull request at:

https://github.com/apache/kafka/pull/1681


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-4130) [docs] Link to Varnish architect notes is broken

2016-09-08 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15473469#comment-15473469
 ] 

ASF GitHub Bot commented on KAFKA-4130:
---

GitHub user oscerd opened a pull request:

https://github.com/apache/kafka/pull/1835

[DOCS] KAFKA-4130: Link to Varnish architect notes is broken

Hi all,

The PR is related to 

https://issues.apache.org/jira/browse/KAFKA-4130

Thanks,
Andrea

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/oscerd/kafka KAFKA-4130

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1835.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1835


commit d937766d93afe330d742487445baa2e496a08146
Author: Andrea Cosentino 
Date:   2016-09-08T10:13:06Z

KAFKA-4130: [docs] Link to Varnish architect notes is broken




> [docs] Link to Varnish architect notes is broken
> 
>
> Key: KAFKA-4130
> URL: https://issues.apache.org/jira/browse/KAFKA-4130
> Project: Kafka
>  Issue Type: Bug
>  Components: website
>Affects Versions: 0.9.0.1, 0.10.0.1
>Reporter: Stevo Slavic
>Assignee: Andrea Cosentino
>Priority: Trivial
>
> Paragraph in Kafka documentation
> {quote}
> This style of pagecache-centric design is described in an article on the 
> design of Varnish here (along with a healthy dose of arrogance). 
> {quote}
> contains a broken link.
> Should probably link to http://varnish-cache.org/wiki/ArchitectNotes



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request #1835: [DOCS] KAFKA-4130: Link to Varnish architect notes...

2016-09-08 Thread oscerd
GitHub user oscerd opened a pull request:

https://github.com/apache/kafka/pull/1835

[DOCS] KAFKA-4130: Link to Varnish architect notes is broken

Hi all,

The PR is related to 

https://issues.apache.org/jira/browse/KAFKA-4130

Thanks,
Andrea

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/oscerd/kafka KAFKA-4130

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1835.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1835


commit d937766d93afe330d742487445baa2e496a08146
Author: Andrea Cosentino 
Date:   2016-09-08T10:13:06Z

KAFKA-4130: [docs] Link to Varnish architect notes is broken




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Assigned] (KAFKA-4130) [docs] Link to Varnish architect notes is broken

2016-09-08 Thread Andrea Cosentino (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4130?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrea Cosentino reassigned KAFKA-4130:
---

Assignee: Andrea Cosentino

> [docs] Link to Varnish architect notes is broken
> 
>
> Key: KAFKA-4130
> URL: https://issues.apache.org/jira/browse/KAFKA-4130
> Project: Kafka
>  Issue Type: Bug
>  Components: website
>Affects Versions: 0.9.0.1, 0.10.0.1
>Reporter: Stevo Slavic
>Assignee: Andrea Cosentino
>Priority: Trivial
>
> Paragraph in Kafka documentation
> {quote}
> This style of pagecache-centric design is described in an article on the 
> design of Varnish here (along with a healthy dose of arrogance). 
> {quote}
> contains a broken link.
> Should probably link to http://varnish-cache.org/wiki/ArchitectNotes



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Multiple active Kafka producers/consumers with users authentication

2016-09-08 Thread Guillaume Grossetie
Hello,

I'm using Kafka 0.10+ with SASL authentication on the client:

// kafka_client_jaas.conf
KafkaClient {
org.apache.kafka.common.security.plain.PlainLoginModule required
username="guillaume"
password="secret";
};

Everything is working as expected and I want to use the same
authentication mechanism
in the Kafka REST Proxy.
As far as I know this is not yet implemented and in order to do that, each
request to the server will need to be authenticated.

A simple solution is to send a basic auth over HTTPS and to use this
username/password in the PlainLoginModule.

During my initial tests I found out that the LoginManager is cached [1] and
released only when the Kafka producer/consumer is closed. This means that I
can have only one authenticated Kafka producer/consumer (more specifically
one Kafka channel) active at a time.

Am I missing something? How can I have multiple active Kafka
producers/consumers with different credentials?


Thanks,
Guillaume

[1]
https://github.com/apache/kafka/blob/69ebf6f7be2fc0e471ebd5b7a166468017ff2651/clients/src/main/java/org/apache/kafka/common/security/authenticator/LoginManager.java#L35
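
For completeness, a sketch of the client-side properties that go with a JAAS
file like the one above; the broker host/port are placeholders, and the JVM is
assumed to be started with -Djava.security.auth.login.config pointing at that
file.

import java.util.Properties;

public class SaslClientSketch {
    public static void main(String[] args) {
        // JVM assumed started with:
        //   -Djava.security.auth.login.config=/path/to/kafka_client_jaas.conf
        // which is where the single, cached PlainLoginModule login comes from.
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker:9093");  // illustrative host/port
        props.put("security.protocol", "SASL_SSL");
        props.put("sasl.mechanism", "PLAIN");
        // All producers/consumers created in this JVM share the cached login above,
        // which is the behaviour described in the question.
        System.out.println(props);
    }
}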


Re: Queryable state client read guarantees

2016-09-08 Thread Damian Guy
Hi Mikael,

A fix for KAFKA-4123  (the
issue you found with receiving null values) has now been committed to
trunk. I've tried it with your GitHub repo and it appears to be working.
You will have to make a small change to your code, as we now throw
InvalidStateStoreException when the stores are unavailable (previously we
returned null).

We added a test here to make sure we only get a value once the store has been 
(re-)initialized.
Please give it a go and thanks for your help in finding this issue.
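
A small sketch of the calling-side change described above: rather than checking
for a null store, catch InvalidStateStoreException and retry until the store is
queryable again. The store name, key and value types are illustrative.

import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.errors.InvalidStateStoreException;
import org.apache.kafka.streams.state.QueryableStoreTypes;
import org.apache.kafka.streams.state.ReadOnlyKeyValueStore;

public class QueryableStateSketch {
    // Blocks until the named store is queryable, then reads one key.
    static Long readKey(KafkaStreams streams, String storeName, String key)
            throws InterruptedException {
        while (true) {
            try {
                ReadOnlyKeyValueStore<String, Long> store =
                        streams.store(storeName, QueryableStoreTypes.<String, Long>keyValueStore());
                return store.get(key);
            } catch (InvalidStateStoreException e) {
                // Thrown (instead of returning null) while the store is unavailable,
                // e.g. during rebalancing or before initialization; back off and retry.
                Thread.sleep(100);
            }
        }
    }
}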

Thanks,
Damian

On Mon, 5 Sep 2016 at 22:07 Mikael Högqvist  wrote:

> Hi Damian,
>
> > > Failed to read key hello, org.mkhq.kafka.Topology$StoreUnavailable
> > > > Failed to read key hello, org.mkhq.kafka.Topology$KeyNotFound
> > > > hello -> 10
> > >
> > >
> > The case where you get KeyNotFound looks like a bug to me. This shouldn't
> > happen. I can see why it might happen and we will create a JIRA and fix
> it
> > right away.
> >
>
> Great, thanks for looking into this. I'll try again once the PR is merged.
>
>
> >
> > I'm not sure how you end up with (hello -> 10). It could indicate that
> the
> > offsets for the topic you are consuming from weren't committed so the
> data
> > gets processed again on the restart.
> >
>
> Yes, it didn't commit the offsets since streams.close() was not called on
> ctrl-c. Fixed by adding a shutdown hook.
>
> Thanks,
> Mikael
>
>
> > Thanks,
> > Damian
> >
>
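
For reference, the shutdown-hook approach Mikael mentions above is typically
just a couple of lines; "streams" is the application's running KafkaStreams
instance.

import org.apache.kafka.streams.KafkaStreams;

public class ShutdownHookSketch {
    static void addCloseHook(final KafkaStreams streams) {
        // Ensure streams.close() runs on Ctrl-C so state and offsets are cleanly flushed.
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
    }
}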