Re: [DISCUSS] KIP-211: Revise Expiration Semantics of Consumer Group Offsets

2018-02-23 Thread Vahid S Hashemian
Hi Jason,

Thanks a lot for reviewing the KIP.

1. I think my suggestion in the KIP was more towards ignoring the 
client-provided values and using a large enough broker config value 
instead. It seems the question comes down to whether we still want to 
honor the `retention_time` field in the old requests. With the new request 
(as per this KIP) the client would not be able to overwrite the broker 
retention config. Your suggestion provides kind of a back door for the 
overwrite. Also, since different offset commits associated with a group 
can potentially use different `retention_time` values, it's probably 
reasonable to use the maximum of all those values (including the broker 
config) as the group offset retention.
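
As an illustration of the "maximum of all those values" rule, a minimal
sketch (names like effectiveRetentionMs are made up; the actual logic would
live in the broker-side coordinator, not in a standalone class):

import java.util.Collection;

public class OffsetRetentionSketch {

    /**
     * Pick the effective retention for a group as the maximum of the
     * broker-level retention config and any retention_time values supplied
     * by old-format OffsetCommit requests. A non-positive client value is
     * treated as "use the broker default" and ignored.
     */
    static long effectiveRetentionMs(long brokerRetentionMs,
                                     Collection<Long> clientRetentionTimesMs) {
        long result = brokerRetentionMs;
        for (long clientMs : clientRetentionTimesMs) {
            if (clientMs > 0 && clientMs > result) {
                result = clientMs;
            }
        }
        return result;
    }
}

Under this rule an old client could lengthen, but never shorten, the
retention relative to the broker config.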

2. If I'm not mistaken, you are referring to potential changes in 
`GROUP_METADATA_VALUE_SCHEMA`. I saw this as an internal implementation 
matter and, frankly, have not fully thought about it, but I agree that it 
needs to be updated to include either the timestamp at which the group 
becomes `Empty` or maybe the expiration timestamp of the group. And perhaps 
we would no longer need to store per-partition offset expiration 
timestamps. Is there a particular reason for your suggestion of storing 
the timestamp the group becomes `Empty`, vs. the expiration timestamp of 
the group?
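
For illustration, the difference between the two candidate fields could look
roughly like this (a sketch with made-up names, not the actual
GROUP_METADATA_VALUE_SCHEMA):

public class GroupExpirationSketch {

    /**
     * Option A: store the moment the group became Empty. The deadline is
     * derived at expiration-check time, so a later change to the broker's
     * retention config takes effect retroactively.
     */
    static boolean expiredFromEmptyTimestamp(long emptyStateTimestampMs,
                                             long retentionMs,
                                             long nowMs) {
        return nowMs - emptyStateTimestampMs >= retentionMs;
    }

    /**
     * Option B: store an absolute expiration timestamp. Simpler to check,
     * but the deadline is frozen with whatever retention was in force when
     * the group became Empty.
     */
    static boolean expiredFromDeadline(long expirationTimestampMs, long nowMs) {
        return nowMs >= expirationTimestampMs;
    }
}

Either field would let a new coordinator resume the timer after a
coordinator change, which is the property Jason asks about below.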

3. To limit the scope of the KIP, I would prefer to handle this matter 
separately if it doesn't have to be addressed as part of this change. It 
probably needs to be addressed at some point, and I'll mention it in the 
KIP so we have it documented. Do you think my suggestion of manually 
removing topic offsets from the group (as an interim solution) is worth 
additional discussion / implementation?

I'll wait for your feedback and clarification on the above items before 
updating the KIP.

Thanks.
--Vahid



From:   Jason Gustafson 
To: dev@kafka.apache.org
Date:   02/18/2018 01:16 PM
Subject:Re: [DISCUSS] KIP-211: Revise Expiration Semantics of 
Consumer Group Offsets



Hey Vahid,

Sorry for the late response. The KIP looks good. A few comments:

1. I'm not quite sure I understand how you are handling old clients. It
sounds like you are saying that old clients need to change configuration?
I'd suggest 1) if an old client requests the default expiration, then we
use the updated behavior, and 2) if the old client requests a specific
expiration, we enforce it from the time the group becomes Empty.

2. Does this require a new version of the group metadata message format? I
think we need to add a new field to indicate the time that the group state
changed to Empty. This will allow us to resume the expiration timer
correctly after a coordinator change. Alternatively, we could reset the
expiration timeout after every coordinator move, but it would be nice to
have a definite bound on offset expiration.

3. The question about removal of offsets for partitions which are no longer
in use is interesting. At the moment, it's difficult for the coordinator to
know that a partition is no longer being fetched because it is agnostic to
subscription state (the group coordinator is used for more than just
consumer groups). Even if we allow the coordinator to read subscription
state to tell which topics are no longer being consumed, we might need some
additional bookkeeping to keep track of /when/ the consumer stopped
subscribing to a particular topic. Or maybe we can reset this expiration
timer after every coordinator change when the new coordinator reads the
offsets and group metadata? I am not sure how common this use case is and
whether it needs to be solved as part of this KIP.
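
The "additional bookkeeping" could be as small as a per-group map from topic
to the last time it appeared in any member's subscription. A rough sketch,
with invented names, assuming the coordinator were allowed to read
subscription metadata at all:

import java.util.HashMap;
import java.util.Map;
import java.util.Set;

public class TopicSubscriptionTracker {

    // topic -> last time (ms) the topic appeared in any member's subscription
    private final Map<String, Long> lastSubscribedMs = new HashMap<>();

    /** Call on every rebalance with the union of all members' subscriptions. */
    public void onRebalance(Set<String> subscribedTopics, long nowMs) {
        for (String topic : subscribedTopics) {
            lastSubscribedMs.put(topic, nowMs);
        }
    }

    /**
     * A topic's offsets become candidates for removal once the topic has
     * been out of the subscription for longer than the retention period.
     */
    public boolean eligibleForOffsetRemoval(String topic, long retentionMs, long nowMs) {
        Long last = lastSubscribedMs.get(topic);
        return last != null && nowMs - last >= retentionMs;
    }
}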

Thanks,
Jason



On Thu, Feb 1, 2018 at 12:40 PM, Vahid S Hashemian <
vahidhashem...@us.ibm.com> wrote:

> Thanks James for sharing that scenario.
>
> I agree it makes sense to be able to remove offsets for the topics that
> are no longer "active" in the group.
> I think it becomes important to determine what it means for a topic to be
> no longer active: if we use per-partition expiration, we would manually
> choose a retention time that works for the particular scenario.
>
> That works, but since we are manually intervening and specifying a
> per-partition retention, why not do the intervention in some other way:
>
> One alternative for this intervention, to preserve the simplicity of the
> suggested protocol in the KIP, is to improve upon the just-introduced
> DELETE_GROUPS API and allow for deletion of offsets of specific topics in
> the group. This is what the old ZooKeeper-based group management supported
> anyway, and we would just be bringing the group deletion features of the
> Kafka-based group management in line with the ZooKeeper-based one.
>
> So, instead of deciding in advance when the offsets should be removed we
> would instantly remove them when we are sure that they are no longer
> needed.
>
> Let me know what you think.
>
> Thanks.
> --Vahid
>
>
>
> From:   James Cheng 
> To: dev@kafka.apache.org
> 

Build failed in Jenkins: kafka-trunk-jdk8 #2434

2018-02-23 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Revert incompatible behavior change to consumer reset tool

--
[...truncated 3.50 MB...]

[Test log omitted: kafka.zookeeper.ZooKeeperClientTest and kafka.network.SocketServerTest cases listed as STARTED/PASSED; the output is cut off mid-test at kafka.network.SocketServerTest > processNewResponseException]

Build failed in Jenkins: kafka-0.11.0-jdk7 #360

2018-02-23 Thread Apache Jenkins Server
See 


Changes:

[matthias] MINOR: wait for broker startup for system tests (#4363)

--
[...truncated 905.10 KB...]
[Test log omitted: kafka.producer.AsyncProducerTest, kafka.api.LogAppendTimeTest, kafka.api.FetchRequestTest, kafka.api.PlaintextProducerSendTest, and kafka.api.PlaintextConsumerTest cases, all reported STARTED/PASSED]


[jira] [Resolved] (KAFKA-6239) Consume group hung into rebalancing state, now stream not able to poll data

2018-02-23 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6239?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava resolved KAFKA-6239.
--
Resolution: Duplicate

> Consume group hung into rebalancing state, now stream not able to poll data
> ---
>
> Key: KAFKA-6239
> URL: https://issues.apache.org/jira/browse/KAFKA-6239
> Project: Kafka
>  Issue Type: Bug
>Reporter: DHRUV BANSAL
>Priority: Critical
>
> ./bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 --group 
> mitra-log-parser --describe
> Note: This will only show information about consumers that use the Java 
> consumer API (non-ZooKeeper-based consumers).
> Warning: Consumer group 'mitra-log-parser' is rebalancing.
> How to restore the consumer group state?





[jira] [Resolved] (KAFKA-6236) stream not picking data from topic - after rebalancing

2018-02-23 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6236?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava resolved KAFKA-6236.
--
Resolution: Cannot Reproduce

Unresponsive, so we can't track this down. Please re-open if there's more info 
to be shared.

> stream not picking data from topic - after rebalancing 
> ---
>
> Key: KAFKA-6236
> URL: https://issues.apache.org/jira/browse/KAFKA-6236
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.11.0.0
>Reporter: DHRUV BANSAL
>Priority: Critical
>
> The Kafka Streams application is not polling new messages from the topic.
> When querying the consumer group, it shows as being in a rebalancing state.
> Command output:
> ./kafka-consumer-groups.sh --bootstrap-server localhost:9092 --group 
>  --describe
> Warning: Consumer group 'name' is rebalancing.





[jira] [Resolved] (KAFKA-6328) Exclude node groups belonging to global stores in InternalTopologyBuilder#makeNodeGroups

2018-02-23 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6328?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang resolved KAFKA-6328.
--
   Resolution: Fixed
Fix Version/s: 1.1.0

> Exclude node groups belonging to global stores in 
> InternalTopologyBuilder#makeNodeGroups
> 
>
> Key: KAFKA-6328
> URL: https://issues.apache.org/jira/browse/KAFKA-6328
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 1.0.0
>Reporter: Guozhang Wang
>Assignee: Richard Yu
>Priority: Major
>  Labels: newbie
> Fix For: 1.1.0
>
> Attachments: kafka-6328.diff
>
>
> Today when we group processor nodes into groups (i.e. sub-topologies), we 
> assign a sub-topology id to global tables' dummy groups as well. As a 
> result, the sub-topology ids (and hence task ids) are not consecutive anymore. 
> This is quite confusing for users troubleshooting and debugging; in 
> addition, the node groups for global stores are not useful either: we simply 
> exclude them in all the caller functions of makeNodeGroups.
> It would be better to simply exclude the global stores' node groups in this 
> function so that the sub-topology ids and task ids are consecutive.
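
Conceptually, the renumbering described above amounts to skipping
global-store node groups when assigning group ids; a generic sketch only
(method and variable names are invented, this is not the actual
InternalTopologyBuilder code):

{code:java}
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Set;

public class NodeGroupSketch {

    /**
     * Assign consecutive group (sub-topology) ids, skipping any node group
     * that consists only of global-store nodes, so the remaining ids have
     * no gaps. The input map is assumed to preserve the original order.
     */
    static Map<Integer, Set<String>> renumberExcludingGlobal(
            Map<Integer, Set<String>> nodeGroups,
            Set<String> globalStoreNodes) {
        Map<Integer, Set<String>> result = new LinkedHashMap<>();
        int nextId = 0;
        for (Set<String> group : nodeGroups.values()) {
            if (!globalStoreNodes.containsAll(group)) {
                result.put(nextId++, group);
            }
        }
        return result;
    }
}
{code}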





[jira] [Created] (KAFKA-6589) Extract Heartbeat thread from Abstract Coordinator for testability purposes

2018-02-23 Thread Guozhang Wang (JIRA)
Guozhang Wang created KAFKA-6589:


 Summary: Extract Heartbeat thread from Abstract Coordinator for 
testability purposes
 Key: KAFKA-6589
 URL: https://issues.apache.org/jira/browse/KAFKA-6589
 Project: Kafka
  Issue Type: Improvement
  Components: consumer
Reporter: Guozhang Wang


Today we do not have an easy way to instrument the heartbeat thread in our unit 
tests, e.g. to inject a GC pause or make it crash, etc., since it is a private 
class inside AbstractCoordinator.

It would be better to extract this class and also enable AbstractCoordinator to 
take a heartbeat-thread interface that can be mocked in test utils.
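
As a rough illustration of the proposed seam (interface and method names are
hypothetical, not an existing AbstractCoordinator API):

{code:java}
/**
 * Hypothetical interface: AbstractCoordinator would depend on this instead
 * of constructing its private HeartbeatThread directly, so tests can supply
 * a mock that simulates GC pauses or crashes.
 */
public interface HeartbeatRunner extends AutoCloseable {

    void start();

    void enable();   // resume sending heartbeats, e.g. after a rebalance

    void disable();  // pause heartbeats

    @Override
    void close();    // shut the thread down
}
{code}

A test could then inject an implementation whose enable() is a no-op to
emulate a stalled heartbeat thread.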





Re: [VOTE] 1.0.1 RC2

2018-02-23 Thread Guozhang Wang
+1.

Ran the quickstart with the binary release for Scala 2.12.


Guozhang

On Fri, Feb 23, 2018 at 1:02 PM, Vahid S Hashemian <
vahidhashem...@us.ibm.com> wrote:

> +1
>
> Built source and ran quickstarts successfully on Ubuntu and Windows 64bit.
>
> Thanks.
> --Vahid
>
>
>
>
> From:   Damian Guy 
> To: dev@kafka.apache.org
> Date:   02/23/2018 01:42 AM
> Subject:Re: [VOTE] 1.0.1 RC2
>
>
>
> +1
>
> Built src and ran tests
> Ran streams quickstart
>
> On Thu, 22 Feb 2018 at 21:32 Ted Yu  wrote:
>
> > +1
> >
> > MetricsTest#testMetricsLeak failed but it is flaky test
> >
> > On Wed, Feb 21, 2018 at 4:06 PM, Ewen Cheslack-Postava
> 
> > wrote:
> >
> > > Hello Kafka users, developers and client-developers,
> > >
> > > This is the third candidate for release of Apache Kafka 1.0.1.
> > >
> > > This is a bugfix release for the 1.0 branch that was first released
> with
> > > 1.0.0 about 3 months ago. We've fixed 49 issues since that release.
> Most
> > of
> > > these are non-critical, but in aggregate these fixes will have
> > significant
> > > impact. A few of the more significant fixes include:
> > >
> > > * KAFKA-6277: Make loadClass thread-safe for class loaders of Connect
> > > plugins
> > > * KAFKA-6185: Selector memory leak with high likelihood of OOM in case
> of
> > > down conversion
> > > * KAFKA-6269: KTable state restore fails after rebalance
> > > * KAFKA-6190: GlobalKTable never finishes restoring when consuming
> > > transactional messages
> > > * KAFKA-6529: Stop file descriptor leak when client disconnects with
> > staged
> > > receives
> > > * KAFKA-6238: Issues with protocol version when applying a rolling
> > upgrade
> > > to 1.0.0
> > >
> > > Release notes for the 1.0.1 release:
> > > http://home.apache.org/~ewencp/kafka-1.0.1-rc2/RELEASE_NOTES.html
> > >
> > > *** Please download, test and vote by Saturday Feb 24, 9pm PT ***
> > >
> > > Kafka's KEYS file containing PGP keys we use to sign the release:
> > > http://kafka.apache.org/KEYS
> > >
> > > * Release artifacts to be voted upon (source and binary):
> > > http://home.apache.org/~ewencp/kafka-1.0.1-rc2/
> > >
> > > * Maven artifacts to be voted upon:
> > > https://repository.apache.org/content/groups/staging/
> > >
> > > * Javadoc:
> > > http://home.apache.org/~ewencp/kafka-1.0.1-rc2/javadoc/
> > >
> > > * Tag to be voted upon (off 1.0 branch) is the 1.0.1 tag:
> > > https://github.com/apache/kafka/tree/1.0.1-rc2
> > >
> > > * Documentation:
> > > http://kafka.apache.org/10/documentation.html
> > >
> > > * Protocol:
> > > http://kafka.apache.org/10/protocol.html
> > >
> > > /**
> > >
> > > Thanks,
> > > Ewen Cheslack-Postava
> > >
> >
>
>
>
>
>


-- 
-- Guozhang


Jenkins build is back to normal : kafka-1.1-jdk7 #62

2018-02-23 Thread Apache Jenkins Server
See 




Re: [DISCUSS] KIP-261: Add Single Value Fetch in Window Stores

2018-02-23 Thread Guozhang Wang
Hi all,

I'd like to hear more feedback on this KIP. If there is no more, I'm going
to start the vote thread by tonight.


Guozhang

On Fri, Feb 23, 2018 at 8:52 AM, Damian Guy  wrote:

> Thanks Guozhang. +1
>
> On Thu, 22 Feb 2018 at 23:44 Guozhang Wang  wrote:
>
> > Thanks Ted, have updated the wiki page.
> >
> > On Thu, Feb 22, 2018 at 1:48 PM, Ted Yu  wrote:
> >
> > > +1
> > >
> > > There were some typos:
> > > CachingWindowedStore -> CachingWindowStore
> > > RocksDBWindowedStore -> RocksDBWindowStore
> > > KStreamWindowedAggregate -> KStreamWindowAggregate
> > > KStreamWindowedReduce -> KStreamWindowReduce
> > >
> > > Cheers
> > >
> > > On Thu, Feb 22, 2018 at 1:34 PM, Guozhang Wang 
> > wrote:
> > >
> > > > Hi all,
> > > >
> > > > I have submitted KIP-261 to add a new API for window stores in order
> to
> > > > optimize our current windowed aggregation implementations inside
> > Streams
> > > > DSL
> > > > :
> > > >
> > > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > > > 261%3A+Add+Single+Value+Fetch+in+Window+Stores
> > > >
> > > > This change would require people who have customized window store
> > > > implementations to make code changes as part of their upgrade path.
> > > > But I think it is worthwhile given that the fraction of customized
> > > > window store implementations should be very small.
> > > >
> > > >
> > > > Feedback and suggestions are welcome.
> > > >
> > > > Thanks,
> > > > -- Guozhang
> > > >
> > >
> >
> >
> >
> > --
> > -- Guozhang
> >
>



-- 
-- Guozhang
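
For readers who have not seen the KIP, the single-value fetch under
discussion is essentially a point lookup by key and window start time. A
hedged sketch of what such a signature could look like (illustrative only,
not necessarily the exact API in the KIP):

// Sketch of a point-lookup addition to a window store interface.
public interface ReadOnlyWindowStoreSketch<K, V> {

    /**
     * Return the value stored for the given key in the window that starts
     * at windowStartTimestamp, or null if no such entry exists. This avoids
     * the range scan that fetch(key, timeFrom, timeTo) performs when the
     * caller already knows the exact window, which is the common case in
     * the windowed aggregation processors.
     */
    V fetch(K key, long windowStartTimestamp);
}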


[jira] [Resolved] (KAFKA-4498) Extract Streams section as a separate page from documentation.html

2018-02-23 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4498?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang resolved KAFKA-4498.
--
   Resolution: Fixed
 Assignee: Guozhang Wang
Fix Version/s: 0.10.1.0

> Extract Streams section as a separate page from documentation.html
> --
>
> Key: KAFKA-4498
> URL: https://issues.apache.org/jira/browse/KAFKA-4498
> Project: Kafka
>  Issue Type: Bug
>  Components: documentation
>Reporter: Guozhang Wang
>Assignee: Guozhang Wang
>Priority: Major
> Fix For: 0.10.1.0
>
>
> We would like to break the single gigantic {{documentation.html}} into 
> separate pages, where each section lives on its own page for better 
> visibility.
> We will use the last section, {{Kafka Streams}}, as a first step for this 
> purpose in this task.





[jira] [Resolved] (KAFKA-6439) "com.streamsets.pipeline.api.StageException: KAFKA_50 - Error writing data to the Kafka broker: java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.N

2018-02-23 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6439?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava resolved KAFKA-6439.
--
Resolution: Not A Bug

> "com.streamsets.pipeline.api.StageException: KAFKA_50 - Error writing data to 
> the Kafka broker: java.util.concurrent.ExecutionException: 
> org.apache.kafka.common.errors.NetworkException: The server disconnected 
> before a response was received"
> -
>
> Key: KAFKA-6439
> URL: https://issues.apache.org/jira/browse/KAFKA-6439
> Project: Kafka
>  Issue Type: Bug
>  Components: network
>Affects Versions: 0.10.0.1
> Environment: Ubuntu 64bit
>Reporter: srithar durairaj
>Priority: Major
>
> We are using StreamSets to produce data into a Kafka topic (3-node cluster). We 
> are frequently seeing the following error in production.
> "com.streamsets.pipeline.api.StageException: KAFKA_50 - Error writing data to 
> the Kafka broker: java.util.concurrent.ExecutionException: 
> org.apache.kafka.common.errors.NetworkException: The server disconnected 
> before a response was received"





Jenkins build is back to normal : kafka-trunk-jdk8 #2435

2018-02-23 Thread Apache Jenkins Server
See 




Re: [DISCUSS] KIP-253: Support in-order message delivery with partition expansion

2018-02-23 Thread Dong Lin
Hey all,

Thanks much for all the comments. I have a rough idea of how to shrink
partitions of a topic. I need some time to think through and write down the
details of the procedure. Then I will reply to your comments.

Thanks,
Dong



On Thu, Feb 22, 2018 at 7:18 PM, Matthias J. Sax 
wrote:

> One more thought:
>
> What about older Producer/Consumers? They don't understand the new
> protocol. How can we guarantee backward compatibility?
>
> Or would this "only" imply, that there is no ordering guarantee for
> older clients?
>
>
> -Matthias
>
>
> On 2/22/18 6:24 PM, Matthias J. Sax wrote:
> > Dong,
> >
> > thanks a lot for the KIP!
> >
> > Can you elaborate on how this would work for compacted topics? If it does
> > not work for compacted topics, I think the Streams API cannot allow
> > scaling input topics.
> >
> > This question seems to be particularly interesting for deleting
> > partitions: if a key is never (or not for a very long time)
> > updated, its partition cannot be deleted.
> >
> >
> > -Matthias
> >
> >
> > On 2/22/18 5:19 PM, Jay Kreps wrote:
> >> Hey Dong,
> >>
> >> Two questions:
> >> 1. How will this work with Streams and Connect?
> >> 2. How does this compare to a solution where we physically split
> partitions
> >> using a linear hashing approach (the partition number is equivalent to
> the
> >> hash bucket in a hash table)? https://en.wikipedia.org/wiki/
> Linear_hashing
> >>
> >> -Jay
> >>
> >> On Sat, Feb 10, 2018 at 3:35 PM, Dong Lin  wrote:
> >>
> >>> Hi all,
> >>>
> >>> I have created KIP-253: Support in-order message delivery with
> partition
> >>> expansion. See
> >>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> >>> 253%3A+Support+in-order+message+delivery+with+partition+expansion
> >>> .
> >>>
> >>> This KIP provides a way to allow messages of the same key from the same
> >>> producer to be consumed in the same order they are produced even if we
> >>> expand partition of the topic.
> >>>
> >>> Thanks,
> >>> Dong
> >>>
> >>
> >
>
>
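
For context on the linear-hashing idea Jay references above, a minimal
sketch of how a partitioner could split partitions while moving as few keys
as possible (illustrative only, not part of the KIP):

public class LinearHashingPartitionerSketch {

    /**
     * Linear-hashing rule: hash into the next power of two at or above
     * numPartitions, and if that bucket does not exist yet, fall back to
     * the previous power of two. Growing from n to n+1 partitions then
     * moves only the keys of a single old partition, which makes per-key
     * ordering across an expansion easier to reason about.
     */
    static int partition(int keyHash, int numPartitions) {
        int hash = keyHash & 0x7fffffff;             // force non-negative
        int upper = Integer.highestOneBit(numPartitions);
        if (upper < numPartitions) {
            upper <<= 1;                             // next power of two >= numPartitions
        }
        int bucket = hash % upper;
        if (bucket >= numPartitions) {
            bucket = hash % (upper >> 1);            // that bucket is not split yet
        }
        return bucket;
    }
}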


Jenkins build is back to normal : kafka-trunk-jdk9 #428

2018-02-23 Thread Apache Jenkins Server
See 




Build failed in Jenkins: kafka-1.1-jdk7 #61

2018-02-23 Thread Apache Jenkins Server
See 


Changes:

[jason] MINOR: Fix ConcurrentModificationException in TransactionManager (#4608)

[jason] KAFKA-6578: Changed the Connect distributed and standalone main method

--
[...truncated 413.88 KB...]

[Test log omitted: kafka.tools.ConsoleConsumerTest, kafka.message.MessageCompressionTest, kafka.message.MessageTest, and kafka.message.ByteBufferMessageSetTest cases, all reported STARTED/PASSED]


[jira] [Resolved] (KAFKA-2544) Replication tools wiki page needs to be updated

2018-02-23 Thread Jakub Scholz (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2544?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakub Scholz resolved KAFKA-2544.
-
Resolution: Fixed
  Assignee: Jakub Scholz

This issue has been resolved.

> Replication tools wiki page needs to be updated
> ---
>
> Key: KAFKA-2544
> URL: https://issues.apache.org/jira/browse/KAFKA-2544
> Project: Kafka
>  Issue Type: Improvement
>  Components: website
>Affects Versions: 0.8.2.1
>Reporter: Stevo Slavic
>Assignee: Jakub Scholz
>Priority: Minor
>  Labels: documentation, newbie
>
> https://cwiki.apache.org/confluence/display/KAFKA/Replication+tools is 
> outdated; it mentions tools which have been heavily refactored or replaced by 
> other tools, e.g. the add partition tool, the list/create topics tools, etc.
> Please have the replication tools wiki page updated.





Re: [DISCUSS] KIP-261: Add Single Value Fetch in Window Stores

2018-02-23 Thread Damian Guy
Thanks Guozhang. +1

On Thu, 22 Feb 2018 at 23:44 Guozhang Wang  wrote:

> Thanks Ted, have updated the wiki page.
>
> On Thu, Feb 22, 2018 at 1:48 PM, Ted Yu  wrote:
>
> > +1
> >
> > There were some typos:
> > CachingWindowedStore -> CachingWindowStore
> > RocksDBWindowedStore -> RocksDBWindowStore
> > KStreamWindowedAggregate -> KStreamWindowAggregate
> > KStreamWindowedReduce -> KStreamWindowReduce
> >
> > Cheers
> >
> > On Thu, Feb 22, 2018 at 1:34 PM, Guozhang Wang 
> wrote:
> >
> > > Hi all,
> > >
> > > I have submitted KIP-261 to add a new API for window stores in order to
> > > optimize our current windowed aggregation implementations inside
> Streams
> > > DSL
> > > :
> > >
> > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > > 261%3A+Add+Single+Value+Fetch+in+Window+Stores
> > >
> > > This change would require people who have customized window store
> > > implementations to make code changes as part of their upgrade path.
> > > But I think it is worthwhile given that the fraction of customized
> > > window store implementations should be very small.
> > >
> > >
> > > Feedback and suggestions are welcome.
> > >
> > > Thanks,
> > > -- Guozhang
> > >
> >
>
>
>
> --
> -- Guozhang
>


[jira] [Created] (KAFKA-6587) Kafka Streams hangs when not able to access internal topics

2018-02-23 Thread Chris Medved (JIRA)
Chris Medved created KAFKA-6587:
---

 Summary: Kafka Streams hangs when not able to access internal 
topics
 Key: KAFKA-6587
 URL: https://issues.apache.org/jira/browse/KAFKA-6587
 Project: Kafka
  Issue Type: Bug
  Components: security, streams
Affects Versions: 1.0.0
Reporter: Chris Medved


*Expectation:* Kafka Streams client will throw an exception, log errors, or 
crash when a fatal error occurs.

*Observation:* Kafka Streams does not log an error or throw an exception when 
necessary permissions for internal state store topics are not granted. It will 
hang indefinitely and not start running the topology.

*Steps to reproduce:*
 # Create a Kafka cluster with ACLs enabled (allow.everyone.if.no.acl.found 
should be set to false, or deny permissions must be set on the intermediate 
topics).
 # Create a simple streams application that does a stateful operation such as 
count (a sketch follows these steps).
 # Grant ACLs on the source and sink topics to the principal used for testing 
(I would recommend using the ANONYMOUS user if possible for ease of testing).
 # Grant ACLs for the consumer group and cluster create. Add deny permissions 
to the state store topics if the default is "allow". You can run the 
application to create the topics or use the topology describe method to get 
the names.
 # Run the streams application. It should hang on "(Re-)joining group" with no 
errors printed.
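
A minimal count topology along the lines of step 2 might look like the
following (the topic names and bootstrap server are placeholders; the
application id matches the group id seen in the logs below):

{code:java}
import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.Produced;

public class AclReproApp {

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "kafka-consumer-test"); // becomes the consumer group id
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();
        KStream<String, String> input = builder.stream("input-topic");          // placeholder source topic

        // count() backs its state store with an internal
        // ...-KSTREAM-AGGREGATE-STATE-STORE-...-changelog topic; missing
        // Read/Write ACLs on that topic are what trigger the reported hang.
        input.groupByKey()
             .count()
             .toStream()
             .to("output-topic", Produced.with(Serdes.String(), Serdes.Long())); // placeholder sink topic

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();
    }
}
{code}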

*Detailed Explanation*

I spent some time trying to figure out what was wrong with my streams app. I'm 
using ACLs on my Kafka cluster and it turns out I forgot to grant read/write 
access to the internal topic state store for an aggregation.

The streams client would hang on "(Re-)joining group" until killed (note ^C is 
ctrl+c, which I used to kill the app): 
{code:java}
10:29:10.064 [kafka-consumer-client-StreamThread-1] INFO 
org.apache.kafka.clients.consumer.internals.AbstractCoordinator - [Consumer 
clientId=kafka-consumer-client-StreamThread-1-consumer, 
groupId=kafka-consumer-test] Discovered coordinator localhost:9092 (id: 
2147483647 rack: null)
10:29:10.105 [kafka-consumer-client-StreamThread-1] INFO 
org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer 
clientId=kafka-consumer-client-StreamThread-1-consumer, 
groupId=kafka-consumer-test] Revoking previously assigned partitions []
10:29:10.106 [kafka-consumer-client-StreamThread-1] INFO 
org.apache.kafka.streams.processor.internals.StreamThread - stream-thread 
[kafka-consumer-client-StreamThread-1] State transition from RUNNING to 
PARTITIONS_REVOKED
10:29:10.106 [kafka-consumer-client-StreamThread-1] INFO 
org.apache.kafka.streams.KafkaStreams - stream-client 
[kafka-consumer-client]State transition from RUNNING to REBALANCING
10:29:10.107 [kafka-consumer-client-StreamThread-1] INFO 
org.apache.kafka.streams.processor.internals.StreamThread - stream-thread 
[kafka-consumer-client-StreamThread-1] partition revocation took 1 ms.
suspended active tasks: []
suspended standby tasks: []
10:29:10.107 [kafka-consumer-client-StreamThread-1] INFO 
org.apache.kafka.clients.consumer.internals.AbstractCoordinator - [Consumer 
clientId=kafka-consumer-client-StreamThread-1-consumer, 
groupId=kafka-consumer-test] (Re-)joining group
^C
10:34:53.609 [Thread-3] INFO org.apache.kafka.streams.KafkaStreams - 
stream-client [kafka-consumer-client]State transition from REBALANCING to 
PENDING_SHUTDOWN
10:34:53.610 [Thread-3] INFO 
org.apache.kafka.streams.processor.internals.StreamThread - stream-thread 
[kafka-consumer-client-StreamThread-1] Informed to shut down
10:34:53.610 [Thread-3] INFO 
org.apache.kafka.streams.processor.internals.StreamThread - stream-thread 
[kafka-consumer-client-StreamThread-1] State transition from PARTITIONS_REVOKED 
to PENDING_SHUTDOWN{code}
The server log would show:
{code:java}
[2018-02-23 10:29:10,408] INFO [Partition 
kafka-consumer-test-KSTREAM-AGGREGATE-STATE-STORE-05-changelog-0 
broker=0] 
kafka-consumer-test-KSTREAM-AGGREGATE-STATE-STORE-05-changelog-0 starts 
at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 
(kafka.cluster.Partition)
[2018-02-23 10:29:20,143] INFO [GroupCoordinator 0]: Member 
kafka-consumer-client-StreamThread-1-consumer-f86e4ca8-4c
45-4883-bdaa-2383193eabbe in group kafka-consumer-test has failed, removing it 
from the group (kafka.coordinator.group.GroupCoordinator)
[2018-02-23 10:29:20,143] INFO [GroupCoordinator 0]: Preparing to rebalance 
group kafka-consumer-test with old generation 1 (__consumer_offsets-23) 
(kafka.coordinator.group.GroupCoordinator)
[2018-02-23 10:29:20,143] INFO [GroupCoordinator 0]: Group kafka-consumer-test 
with generation 2 is now empty (__consumer_offsets-23) 
(kafka.coordinator.group.GroupCoordinator)
[2018-02-23 10:31:23,448] INFO [GroupMetadataManager brokerId=0] Group 
kafka-consumer-test transitioned to Dead in generation 2 

[jira] [Created] (KAFKA-6588) Add a metric to monitor live log cleaner thread

2018-02-23 Thread Navina Ramesh (JIRA)
Navina Ramesh created KAFKA-6588:


 Summary: Add a metric to monitor live log cleaner thread
 Key: KAFKA-6588
 URL: https://issues.apache.org/jira/browse/KAFKA-6588
 Project: Kafka
  Issue Type: Bug
Reporter: Navina Ramesh


We want to have a more direct metric to monitor the log cleaner thread. Hence, 
we are adding a simple metric in `LogCleaner.scala`.

Additionally, we are making a minor change to make sure the correct offsets are 
logged in `LogCleaner#recordStats`.
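
Conceptually, the gauge value being described could be as simple as counting
cleaner threads that are still alive (a sketch with invented names; the real
change would go through Kafka's metrics plumbing in LogCleaner.scala):

{code:java}
import java.util.List;

public class LogCleanerLivenessSketch {

    /**
     * Value for a hypothetical "live log cleaner threads" gauge: the number
     * of configured cleaner threads that are still running. A value below
     * the configured thread count (or zero) is what an alert would key on.
     */
    static int liveCleanerThreads(List<Thread> cleanerThreads) {
        int alive = 0;
        for (Thread t : cleanerThreads) {
            if (t.isAlive()) {
                alive++;
            }
        }
        return alive;
    }
}
{code}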

 





Re: [VOTE] 1.0.1 RC2

2018-02-23 Thread Vahid S Hashemian
+1

Built source and ran quickstarts successfully on Ubuntu and Windows 64bit.

Thanks.
--Vahid




From:   Damian Guy 
To: dev@kafka.apache.org
Date:   02/23/2018 01:42 AM
Subject:Re: [VOTE] 1.0.1 RC2



+1

Built src and ran tests
Ran streams quickstart

On Thu, 22 Feb 2018 at 21:32 Ted Yu  wrote:

> +1
>
> MetricsTest#testMetricsLeak failed but it is flaky test
>
> On Wed, Feb 21, 2018 at 4:06 PM, Ewen Cheslack-Postava 

> wrote:
>
> > Hello Kafka users, developers and client-developers,
> >
> > This is the third candidate for release of Apache Kafka 1.0.1.
> >
> > This is a bugfix release for the 1.0 branch that was first released 
with
> > 1.0.0 about 3 months ago. We've fixed 49 issues since that release. 
Most
> of
> > these are non-critical, but in aggregate these fixes will have
> significant
> > impact. A few of the more significant fixes include:
> >
> > * KAFKA-6277: Make loadClass thread-safe for class loaders of Connect
> > plugins
> > * KAFKA-6185: Selector memory leak with high likelihood of OOM in case 
of
> > down conversion
> > * KAFKA-6269: KTable state restore fails after rebalance
> > * KAFKA-6190: GlobalKTable never finishes restoring when consuming
> > transactional messages
> > * KAFKA-6529: Stop file descriptor leak when client disconnects with
> staged
> > receives
> > * KAFKA-6238: Issues with protocol version when applying a rolling
> upgrade
> > to 1.0.0
> >
> > Release notes for the 1.0.1 release:
> > http://home.apache.org/~ewencp/kafka-1.0.1-rc2/RELEASE_NOTES.html
> >
> > *** Please download, test and vote by Saturday Feb 24, 9pm PT ***
> >
> > Kafka's KEYS file containing PGP keys we use to sign the release:
> > http://kafka.apache.org/KEYS
> >
> > * Release artifacts to be voted upon (source and binary):
> > http://home.apache.org/~ewencp/kafka-1.0.1-rc2/
> >
> > * Maven artifacts to be voted upon:
> > https://repository.apache.org/content/groups/staging/
> >
> > * Javadoc:
> > http://home.apache.org/~ewencp/kafka-1.0.1-rc2/javadoc/
> >
> > * Tag to be voted upon (off 1.0 branch) is the 1.0.1 tag:
> > https://github.com/apache/kafka/tree/1.0.1-rc2
> >
> > * Documentation:
> > http://kafka.apache.org/10/documentation.html
> >
> > * Protocol:
> > http://kafka.apache.org/10/protocol.html
> >
> > /**
> >
> > Thanks,
> > Ewen Cheslack-Postava
> >
>






Jenkins build is back to normal : kafka-trunk-jdk8 #2433

2018-02-23 Thread Apache Jenkins Server
See 




Re: [VOTE] 1.0.1 RC2

2018-02-23 Thread Damian Guy
+1

Built src and ran tests
Ran streams quickstart

On Thu, 22 Feb 2018 at 21:32 Ted Yu  wrote:

> +1
>
> MetricsTest#testMetricsLeak failed but it is flaky test
>
> On Wed, Feb 21, 2018 at 4:06 PM, Ewen Cheslack-Postava 
> wrote:
>
> > Hello Kafka users, developers and client-developers,
> >
> > This is the third candidate for release of Apache Kafka 1.0.1.
> >
> > This is a bugfix release for the 1.0 branch that was first released with
> > 1.0.0 about 3 months ago. We've fixed 49 issues since that release. Most
> of
> > these are non-critical, but in aggregate these fixes will have
> significant
> > impact. A few of the more significant fixes include:
> >
> > * KAFKA-6277: Make loadClass thread-safe for class loaders of Connect
> > plugins
> > * KAFKA-6185: Selector memory leak with high likelihood of OOM in case of
> > down conversion
> > * KAFKA-6269: KTable state restore fails after rebalance
> > * KAFKA-6190: GlobalKTable never finishes restoring when consuming
> > transactional messages
> > * KAFKA-6529: Stop file descriptor leak when client disconnects with
> staged
> > receives
> > * KAFKA-6238: Issues with protocol version when applying a rolling
> upgrade
> > to 1.0.0
> >
> > Release notes for the 1.0.1 release:
> > http://home.apache.org/~ewencp/kafka-1.0.1-rc2/RELEASE_NOTES.html
> >
> > *** Please download, test and vote by Saturday Feb 24, 9pm PT ***
> >
> > Kafka's KEYS file containing PGP keys we use to sign the release:
> > http://kafka.apache.org/KEYS
> >
> > * Release artifacts to be voted upon (source and binary):
> > http://home.apache.org/~ewencp/kafka-1.0.1-rc2/
> >
> > * Maven artifacts to be voted upon:
> > https://repository.apache.org/content/groups/staging/
> >
> > * Javadoc:
> > http://home.apache.org/~ewencp/kafka-1.0.1-rc2/javadoc/
> >
> > * Tag to be voted upon (off 1.0 branch) is the 1.0.1 tag:
> > https://github.com/apache/kafka/tree/1.0.1-rc2
> >
> > * Documentation:
> > http://kafka.apache.org/10/documentation.html
> >
> > * Protocol:
> > http://kafka.apache.org/10/protocol.html
> >
> > /**
> >
> > Thanks,
> > Ewen Cheslack-Postava
> >
>


Re: [VOTE] 1.0.1 RC2

2018-02-23 Thread Attila Sasvári
Hi there,

Here is what I did:
- Verified signature and hashes
- Built kafka from source
- Executed tests - all passed
- Started Zookeeper, Kafka broker, a console producer and consumer.
Verified that the stream of records published to my test Kafka topic was
processed
- Ran the Kafka Streams demo app - worked as expected

- Tried to validate the release candidate by running the system tests too.
Unfortunately, the tests are failing because they cannot download old binaries
from the specified mirror (http://mirrors.ocf.berkeley.edu/apache/kafka/); this
was fixed by KAFKA-6247.
- version.py in system tests might need to be updated ("1.0.1" is missing)

+0 (non-binding) because of the system test issue

Regards,
- Attila


> On Thu, Feb 22, 2018 at 1:06 AM, Ewen Cheslack-Postava 
>  wrote:
>
>> Hello Kafka users, developers and client-developers,
>>
>> This is the third candidate for release of Apache Kafka 1.0.1.
>>
>> This is a bugfix release for the 1.0 branch that was first released with
>> 1.0.0 about 3 months ago. We've fixed 49 issues since that release. Most
>> of
>> these are non-critical, but in aggregate these fixes will have significant
>> impact. A few of the more significant fixes include:
>>
>> * KAFKA-6277: Make loadClass thread-safe for class loaders of Connect
>> plugins
>> * KAFKA-6185: Selector memory leak with high likelihood of OOM in case of
>> down conversion
>> * KAFKA-6269: KTable state restore fails after rebalance
>> * KAFKA-6190: GlobalKTable never finishes restoring when consuming
>> transactional messages
>> * KAFKA-6529: Stop file descriptor leak when client disconnects with
>> staged
>> receives
>> * KAFKA-6238: Issues with protocol version when applying a rolling upgrade
>> to 1.0.0
>>
>> Release notes for the 1.0.1 release:
>> http://home.apache.org/~ewencp/kafka-1.0.1-rc2/RELEASE_NOTES.html
>>
>> *** Please download, test and vote by Saturday Feb 24, 9pm PT ***
>>
>> Kafka's KEYS file containing PGP keys we use to sign the release:
>> http://kafka.apache.org/KEYS
>>
>> * Release artifacts to be voted upon (source and binary):
>> http://home.apache.org/~ewencp/kafka-1.0.1-rc2/
>>
>> * Maven artifacts to be voted upon:
>> https://repository.apache.org/content/groups/staging/
>>
>> * Javadoc:
>> http://home.apache.org/~ewencp/kafka-1.0.1-rc2/javadoc/
>>
>> * Tag to be voted upon (off 1.0 branch) is the 1.0.1 tag:
>> https://github.com/apache/kafka/tree/1.0.1-rc2
>>
>> * Documentation:
>> http://kafka.apache.org/10/documentation.html
>>
>> * Protocol:
>> http://kafka.apache.org/10/protocol.html
>>
>> /**
>>
>> Thanks,
>> Ewen Cheslack-Postava
>>
>
>