[jira] [Resolved] (KAFKA-6578) Connect distributed and standalone worker 'main()' methods should catch and log all exceptions

2018-02-22 Thread Jason Gustafson (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6578?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson resolved KAFKA-6578.

   Resolution: Fixed
Fix Version/s: 1.1.0

> Connect distributed and standalone worker 'main()' methods should catch and 
> log all exceptions
> --
>
> Key: KAFKA-6578
> URL: https://issues.apache.org/jira/browse/KAFKA-6578
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 0.10.0.0
>Reporter: Randall Hauch
>Priority: Critical
> Fix For: 1.1.0
>
>
> Currently, the {{main}} methods in {{ConnectDistributed}} and 
> {{ConnectStandalone}} do not catch and log most of the potential exceptions. 
> That means that when such an exception occurs, Java terminates the process 
> and reports the exception to stderr, but nothing is written to the log.
> We should wrap most of the existing code in the main method in a try block 
> that catches any Throwable, logs it, and either rethrows it or explicitly 
> exits with a non-zero status code.
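[Editorial note: the fix described above can be sketched roughly as follows. This is an illustrative outline, not the actual Kafka Connect code; the class and method names are hypothetical, and the real worker would log through SLF4J rather than stderr.]

```java
// Hypothetical sketch of wrapping a Connect worker's main() so that any
// Throwable is logged before the process exits with a non-zero code.
public class ConnectMainSketch {
    // Returns the intended exit code instead of calling System.exit()
    // directly, which keeps the logic testable.
    static int runMain(Runnable startConnect) {
        try {
            startConnect.run();
            return 0;
        } catch (Throwable t) {
            // Real code would use the worker's logger, e.g. log.error(...).
            System.err.println("Stopping due to error: " + t);
            return 2; // non-zero exit code signals failure to the caller
        }
    }

    public static void main(String[] args) {
        int exitCode = runMain(() -> { /* start the Connect worker here */ });
        if (exitCode != 0) {
            System.exit(exitCode);
        }
    }
}
```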



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [DISCUSS] KIP-253: Support in-order message delivery with partition expansion

2018-02-22 Thread Matthias J. Sax
One more thought:

What about older producers and consumers? They don't understand the new
protocol. How can we guarantee backward compatibility?

Or would this "only" mean that there is no ordering guarantee for
older clients?


-Matthias


On 2/22/18 6:24 PM, Matthias J. Sax wrote:
> Dong,
> 
> thanks a lot for the KIP!
> 
> Can you elaborate on how this would work for compacted topics? If it does
> not work for compacted topics, I think the Streams API cannot allow input
> topics to be scaled.
> 
> This question seems particularly interesting for deleting
> partitions: if a key is never updated (or not for a very long time),
> its partition cannot be deleted.
> 
> 
> -Matthias
> 
> 
> On 2/22/18 5:19 PM, Jay Kreps wrote:
>> Hey Dong,
>>
>> Two questions:
>> 1. How will this work with Streams and Connect?
>> 2. How does this compare to a solution where we physically split partitions
>> using a linear hashing approach (the partition number is equivalent to the
>> hash bucket in a hash table)? https://en.wikipedia.org/wiki/Linear_hashing
>>
>> -Jay
>>
>> On Sat, Feb 10, 2018 at 3:35 PM, Dong Lin  wrote:
>>
>>> Hi all,
>>>
>>> I have created KIP-253: Support in-order message delivery with partition
>>> expansion. See
>>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-
>>> 253%3A+Support+in-order+message+delivery+with+partition+expansion
>>> .
>>>
>>> This KIP provides a way to allow messages of the same key from the same
>>> producer to be consumed in the same order they are produced, even if we
>>> expand the partitions of the topic.
>>>
>>> Thanks,
>>> Dong
>>>
>>
> 





Re: [DISCUSS] KIP-253: Support in-order message delivery with partition expansion

2018-02-22 Thread Matthias J. Sax
Dong,

thanks a lot for the KIP!

Can you elaborate on how this would work for compacted topics? If it does
not work for compacted topics, I think the Streams API cannot allow input
topics to be scaled.

This question seems particularly interesting for deleting
partitions: if a key is never updated (or not for a very long time),
its partition cannot be deleted.


-Matthias


On 2/22/18 5:19 PM, Jay Kreps wrote:
> Hey Dong,
> 
> Two questions:
> 1. How will this work with Streams and Connect?
> 2. How does this compare to a solution where we physically split partitions
> using a linear hashing approach (the partition number is equivalent to the
> hash bucket in a hash table)? https://en.wikipedia.org/wiki/Linear_hashing
> 
> -Jay
> 
> On Sat, Feb 10, 2018 at 3:35 PM, Dong Lin  wrote:
> 
>> Hi all,
>>
>> I have created KIP-253: Support in-order message delivery with partition
>> expansion. See
>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-
>> 253%3A+Support+in-order+message+delivery+with+partition+expansion
>> .
>>
>> This KIP provides a way to allow messages of the same key from the same
>> producer to be consumed in the same order they are produced, even if we
>> expand the partitions of the topic.
>>
>> Thanks,
>> Dong
>>
> 





Re: [DISCUSS] KIP-253: Support in-order message delivery with partition expansion

2018-02-22 Thread Jay Kreps
Hey Dong,

Two questions:
1. How will this work with Streams and Connect?
2. How does this compare to a solution where we physically split partitions
using a linear hashing approach (the partition number is equivalent to the
hash bucket in a hash table)? https://en.wikipedia.org/wiki/Linear_hashing
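[Editorial note: the linear-hashing idea Jay references can be sketched as below. This is an illustrative toy, not Kafka code: when the partition count grows from n toward 2n, each old bucket p splits into p and p + n, so a key either stays put or moves to exactly one predictable new partition, rather than being rehashed arbitrarily.]

```java
// Toy linear-hashing partitioner: `currentPartitions` is assumed to be
// between `initialPartitions` and 2 * initialPartitions.
public class LinearHashingSketch {
    static int partitionFor(int keyHash, int initialPartitions, int currentPartitions) {
        int splitPointer = currentPartitions - initialPartitions;
        int p = Math.floorMod(keyHash, initialPartitions);
        // Buckets below the split pointer have already been split, so
        // rehash those keys with the doubled modulus.
        if (p < splitPointer) {
            p = Math.floorMod(keyHash, 2 * initialPartitions);
        }
        return p;
    }
}
```

With 4 initial partitions grown to 6, a key hashing to 13 lands in partition 1 before the split of bucket 1 and in partition 5 afterwards, while keys in unsplit buckets stay where they were.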

-Jay

On Sat, Feb 10, 2018 at 3:35 PM, Dong Lin  wrote:

> Hi all,
>
> I have created KIP-253: Support in-order message delivery with partition
> expansion. See
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> 253%3A+Support+in-order+message+delivery+with+partition+expansion
> .
>
> This KIP provides a way to allow messages of the same key from the same
> producer to be consumed in the same order they are produced, even if we
> expand the partitions of the topic.
>
> Thanks,
> Dong
>


Re: [DISCUSS] KIP-261: Add Single Value Fetch in Window Stores

2018-02-22 Thread Guozhang Wang
Thanks Ted, have updated the wiki page.

On Thu, Feb 22, 2018 at 1:48 PM, Ted Yu  wrote:

> +1
>
> There were some typos:
> CachingWindowedStore -> CachingWindowStore
> RocksDBWindowedStore -> RocksDBWindowStore
> KStreamWindowedAggregate -> KStreamWindowAggregate
> KStreamWindowedReduce -> KStreamWindowReduce
>
> Cheers
>
> On Thu, Feb 22, 2018 at 1:34 PM, Guozhang Wang  wrote:
>
> > Hi all,
> >
> > I have submitted KIP-261 to add a new API for window stores in order to
> > optimize our current windowed aggregation implementations inside Streams
> > DSL
> > :
> >
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > 261%3A+Add+Single+Value+Fetch+in+Window+Stores
> >
> > This change would require people who have customized window store
> > implementations to make code changes as part of their upgrade path. But I
> > think it is worthwhile given that the fraction of customized window stores
> > should be very small.
> >
> >
> > Feedback and suggestions are welcome.
> >
> > Thanks,
> > -- Guozhang
> >
>



-- 
-- Guozhang


Re: [DISCUSS] KIP-253: Support in-order message delivery with partition expansion

2018-02-22 Thread Allen Wang
Overall, this is a very useful feature. With it, we can finally scale topics
with keyed messages.

+1 on the ability to remove partitions. This will greatly increase Kafka's
scalability in the cloud.

For example, when traffic increases, we can add brokers and assign
new partitions to the new brokers. When traffic decreases, we can mark
these new partitions as read-only and remove them afterwards, together with
the brokers that host them. This is a lightweight approach to scaling a
Kafka cluster compared to partition reassignment, where you always have to
move data.

I have some suggestions:

- The KIP describes each step in detail, which is great. However, it lacks
the "why" part explaining the high-level goal we want to achieve with each
step. For example, the purpose of step 5 could be described as "make sure
consumers always finish consuming all data written prior to the partition
expansion, to enforce message ordering".

- The rejection of produce requests at partition expansion should be
configurable, because ordering does not matter for non-keyed messages. The
same goes for the consumer behavior in step 5. This ensures that for
non-keyed messages, partition expansion does not add the cost of possible
message drops on the producer or extra latency on the consumer.

- Since we now allow adding partitions for keyed messages while preserving
the message ordering on the consumer side, the default producer partitioner
seems to be inadequate as it rehashes all keys. As part of this KIP, should
we also include a partitioner that better handles partition changes, for
example, with consistent hashing?
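[Editorial note: the consistent-hashing partitioner Allen suggests could look roughly like the sketch below. This is illustrative only, not Kafka's default partitioner; with a hash ring, adding a partition remaps only the keys falling between the new node and its predecessor, instead of rehashing every key the way mod-N hashing does.]

```java
import java.util.SortedMap;
import java.util.TreeMap;

// Toy consistent-hash ring mapping keys to partition numbers.
public class ConsistentHashRing {
    private final TreeMap<Integer, Integer> ring = new TreeMap<>();

    // Several virtual nodes per partition smooth out the key distribution.
    void addPartition(int partition, int virtualNodes) {
        for (int v = 0; v < virtualNodes; v++) {
            ring.put(hash(partition + ":" + v), partition);
        }
    }

    // Walk clockwise from the key's hash to the next virtual node.
    int partitionFor(String key) {
        SortedMap<Integer, Integer> tail = ring.tailMap(hash(key));
        return tail.isEmpty() ? ring.firstEntry().getValue() : tail.get(tail.firstKey());
    }

    private static int hash(String s) {
        return s.hashCode() & 0x7fffffff; // simple stand-in for murmur2
    }
}
```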

Thanks,
Allen


On Thu, Feb 22, 2018 at 11:52 AM, Jun Rao  wrote:

> Hi, Dong,
>
> Regarding deleting partitions, Gwen's point is right on. In some of the
> usage of Kafka, the traffic can be bursty. When the traffic goes up, adding
> partitions is a quick way of shifting some traffic to the newly added
> brokers. Once the traffic goes down, the newly added brokers will be
> reclaimed (potentially by moving replicas off those brokers). However, if
> one can only add partitions without removing, eventually, one will hit the
> limit.
>
> Thanks,
>
> Jun
>
> On Wed, Feb 21, 2018 at 12:23 PM, Dong Lin  wrote:
>
> > Hey Jun,
> >
> > Thanks much for your comments.
> >
> > On Wed, Feb 21, 2018 at 10:17 AM, Jun Rao  wrote:
> >
> > > Hi, Dong,
> > >
> > > Thanks for the KIP. At the high level, this makes sense. A few comments
> > > below.
> > >
> > > 1. It would be useful to support removing partitions as well. The
> > > general idea could be bumping the leader epoch for the remaining
> > > partitions. For the partitions to be removed, we can make them
> > > read-only and remove them after the retention time.
> > >
> >
> > I think we should be able to find a way to delete partitions of an
> > existing topic. But it will also add complexity to our broker and client
> > implementation. I am just not sure whether this feature is worth the
> > complexity. Could you explain a bit more why a user would want to delete
> > partitions of an existing topic? Is it to handle the human error where a
> > topic is created with too many partitions by mistake?
> >
> >
> > >
> > > 2. If we support removing partitions, I am not sure if it's enough to
> > fence
> > > off the producer using total partition number since the total partition
> > > number may remain the same after adding and then removing partitions.
> > > Perhaps we need some notion of partition epoch.
> > >
> > > 3. In step 5) of the Proposed Changes, I am not sure that we can always
> > > rely upon position 0 for dealing with the new partitions. A consumer
> > > will start consuming the new partition when some of the existing
> > > records have been removed due to retention.
> > >
> >
> >
> > You are right. I have updated the KIP to compare the startPosition with
> > the earliest offset of the partition. If the startPosition > earliest
> > offset, then the consumer can consume messages from the given partition
> > directly. This should handle the case where some of the existing records
> > have been removed before the consumer starts consumption.
> >
> >
> > >
> > > 4. When the consumer is allowed to read messages after the partition
> > > expansion point, a key may be moved from one consumer instance to
> > > another. In this case, similar to consumer rebalance, it's useful to
> > > inform the application about this so that the consumer can save and
> > > reload the per-key state. So, we need to either add some new callbacks
> > > or reuse the existing rebalance callbacks.
> > >
> >
> >
> > Good point. I will add the callback later after we discuss the need for
> > partition deletion.
> >
> >
> > >
> > > 5. There is some subtlety in assigning partitions. Currently, the
> > > consumer assigns partitions without needing to know the consumption
> > > offset. This could mean that a particular consumer may be 

Re: Contributor

2018-02-22 Thread Matthias J. Sax
Done.

On 2/22/18 6:44 AM, Sebastian Toader wrote:
> Hi Kafka Dev team,
> 
> Can you please add me to the contributor list, as I would like to
> contribute to the Kafka project?
> 
> My apache username: stoader
> 
> 
> Thank you,
> Sebastian
> 





Re: [DISCUSS] KIP-261: Add Single Value Fetch in Window Stores

2018-02-22 Thread Ted Yu
+1

There were some typos:
CachingWindowedStore -> CachingWindowStore
RocksDBWindowedStore -> RocksDBWindowStore
KStreamWindowedAggregate -> KStreamWindowAggregate
KStreamWindowedReduce -> KStreamWindowReduce

Cheers

On Thu, Feb 22, 2018 at 1:34 PM, Guozhang Wang  wrote:

> Hi all,
>
> I have submitted KIP-261 to add a new API for window stores in order to
> optimize our current windowed aggregation implementations inside Streams
> DSL
> :
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> 261%3A+Add+Single+Value+Fetch+in+Window+Stores
>
> This change would require people who have customized window store
> implementations to make code changes as part of their upgrade path. But I
> think it is worthwhile given that the fraction of customized window stores
> should be very small.
>
>
> Feedback and suggestions are welcome.
>
> Thanks,
> -- Guozhang
>


[DISCUSS] KIP-261: Add Single Value Fetch in Window Stores

2018-02-22 Thread Guozhang Wang
Hi all,

I have submitted KIP-261 to add a new API for window stores in order to
optimize our current windowed aggregation implementations inside Streams DSL
:

https://cwiki.apache.org/confluence/display/KAFKA/KIP-261%3A+Add+Single+Value+Fetch+in+Window+Stores
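[Editorial note: the kind of API KIP-261 describes — fetching a single value for one key and one window start time instead of iterating a (timeFrom, timeTo) range — can be illustrated with the toy in-memory store below. The real interface and method names are the ones on the KIP page above; this sketch only shows why a point lookup beats a range scan when the caller already knows the exact window.]

```java
import java.util.HashMap;
import java.util.Map;

// Toy window store keyed by (key, windowStartTimestamp).
public class WindowStoreSketch<K, V> {
    private final Map<String, V> data = new HashMap<>();

    void put(K key, long windowStart, V value) {
        data.put(key + "@" + windowStart, value);
    }

    // Single-value fetch in the spirit of the KIP: an O(1) lookup with no
    // iterator allocation, returning null when the window has no value.
    V fetch(K key, long windowStart) {
        return data.get(key + "@" + windowStart);
    }
}
```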

This change would require people who have customized window store
implementations to make code changes as part of their upgrade path. But I
think it is worthwhile given that the fraction of customized window stores
should be very small.


Feedback and suggestions are welcome.

Thanks,
-- Guozhang


Re: [VOTE] 1.0.1 RC2

2018-02-22 Thread Ted Yu
+1

MetricsTest#testMetricsLeak failed, but it is a flaky test.

On Wed, Feb 21, 2018 at 4:06 PM, Ewen Cheslack-Postava 
wrote:

> Hello Kafka users, developers and client-developers,
>
> This is the third candidate for release of Apache Kafka 1.0.1.
>
> This is a bugfix release for the 1.0 branch that was first released with
> 1.0.0 about 3 months ago. We've fixed 49 issues since that release. Most of
> these are non-critical, but in aggregate these fixes will have significant
> impact. A few of the more significant fixes include:
>
> * KAFKA-6277: Make loadClass thread-safe for class loaders of Connect
> plugins
> * KAFKA-6185: Selector memory leak with high likelihood of OOM in case of
> down conversion
> * KAFKA-6269: KTable state restore fails after rebalance
> * KAFKA-6190: GlobalKTable never finishes restoring when consuming
> transactional messages
> * KAFKA-6529: Stop file descriptor leak when client disconnects with staged
> receives
> * KAFKA-6238: Issues with protocol version when applying a rolling upgrade
> to 1.0.0
>
> Release notes for the 1.0.1 release:
> http://home.apache.org/~ewencp/kafka-1.0.1-rc2/RELEASE_NOTES.html
>
> *** Please download, test and vote by Saturday Feb 24, 9pm PT ***
>
> Kafka's KEYS file containing PGP keys we use to sign the release:
> http://kafka.apache.org/KEYS
>
> * Release artifacts to be voted upon (source and binary):
> http://home.apache.org/~ewencp/kafka-1.0.1-rc2/
>
> * Maven artifacts to be voted upon:
> https://repository.apache.org/content/groups/staging/
>
> * Javadoc:
> http://home.apache.org/~ewencp/kafka-1.0.1-rc2/javadoc/
>
> * Tag to be voted upon (off 1.0 branch) is the 1.0.1 tag:
> https://github.com/apache/kafka/tree/1.0.1-rc2
>
> * Documentation:
> http://kafka.apache.org/10/documentation.html
>
> * Protocol:
> http://kafka.apache.org/10/protocol.html
>
> /**
>
> Thanks,
> Ewen Cheslack-Postava
>


Build failed in Jenkins: kafka-trunk-jdk9 #427

2018-02-22 Thread Apache Jenkins Server
See 


Changes:

[rajinisivaram] MINOR: Fix javadoc for consumer offsets lookup APIs which do 
not block

--
[...truncated 1.48 MB...]
kafka.zk.KafkaZkClientTest > testBrokerSequenceIdMethods STARTED

kafka.zk.KafkaZkClientTest > testBrokerSequenceIdMethods PASSED

kafka.zk.KafkaZkClientTest > testCreateSequentialPersistentPath STARTED

kafka.zk.KafkaZkClientTest > testCreateSequentialPersistentPath PASSED

kafka.zk.KafkaZkClientTest > testConditionalUpdatePath STARTED

kafka.zk.KafkaZkClientTest > testConditionalUpdatePath PASSED

kafka.zk.KafkaZkClientTest > testTopicAssignmentMethods STARTED

kafka.zk.KafkaZkClientTest > testTopicAssignmentMethods PASSED

kafka.zk.KafkaZkClientTest > testPropagateIsrChanges STARTED

kafka.zk.KafkaZkClientTest > testPropagateIsrChanges PASSED

kafka.zk.KafkaZkClientTest > testDeleteRecursive STARTED

kafka.zk.KafkaZkClientTest > testDeleteRecursive PASSED

kafka.zk.KafkaZkClientTest > testDelegationTokenMethods STARTED

kafka.zk.KafkaZkClientTest > testDelegationTokenMethods PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testSizeInBytesWithCompression 
STARTED

kafka.javaapi.message.ByteBufferMessageSetTest > testSizeInBytesWithCompression 
PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > 
testIteratorIsConsistentWithCompression STARTED

kafka.javaapi.message.ByteBufferMessageSetTest > 
testIteratorIsConsistentWithCompression PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testIteratorIsConsistent 
STARTED

kafka.javaapi.message.ByteBufferMessageSetTest > testIteratorIsConsistent PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testEqualsWithCompression 
STARTED

kafka.javaapi.message.ByteBufferMessageSetTest > testEqualsWithCompression 
PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testWrittenEqualsRead STARTED

kafka.javaapi.message.ByteBufferMessageSetTest > testWrittenEqualsRead PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testEquals STARTED

kafka.javaapi.message.ByteBufferMessageSetTest > testEquals PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testSizeInBytes STARTED

kafka.javaapi.message.ByteBufferMessageSetTest > testSizeInBytes PASSED

kafka.javaapi.consumer.ZookeeperConsumerConnectorTest > testBasic STARTED

kafka.javaapi.consumer.ZookeeperConsumerConnectorTest > testBasic PASSED

kafka.metrics.MetricsTest > testMetricsReporterAfterDeletingTopic STARTED

kafka.metrics.MetricsTest > testMetricsReporterAfterDeletingTopic PASSED

kafka.metrics.MetricsTest > testSessionExpireListenerMetrics STARTED

kafka.metrics.MetricsTest > testSessionExpireListenerMetrics PASSED

kafka.metrics.MetricsTest > 
testBrokerTopicMetricsUnregisteredAfterDeletingTopic STARTED

kafka.metrics.MetricsTest > 
testBrokerTopicMetricsUnregisteredAfterDeletingTopic PASSED

kafka.metrics.MetricsTest > testClusterIdMetric STARTED

kafka.metrics.MetricsTest > testClusterIdMetric PASSED

kafka.metrics.MetricsTest > testControllerMetrics STARTED

kafka.metrics.MetricsTest > testControllerMetrics PASSED

kafka.metrics.MetricsTest > testWindowsStyleTagNames STARTED

kafka.metrics.MetricsTest > testWindowsStyleTagNames PASSED

kafka.metrics.MetricsTest > testMetricsLeak STARTED

kafka.metrics.MetricsTest > testMetricsLeak PASSED

kafka.metrics.MetricsTest > testBrokerTopicMetricsBytesInOut STARTED

kafka.metrics.MetricsTest > testBrokerTopicMetricsBytesInOut PASSED

kafka.metrics.KafkaTimerTest > testKafkaTimer STARTED

kafka.metrics.KafkaTimerTest > testKafkaTimer PASSED

kafka.security.auth.ResourceTypeTest > testJavaConversions STARTED

kafka.security.auth.ResourceTypeTest > testJavaConversions PASSED

kafka.security.auth.ResourceTypeTest > testFromString STARTED

kafka.security.auth.ResourceTypeTest > testFromString PASSED

kafka.security.auth.OperationTest > testJavaConversions STARTED

kafka.security.auth.OperationTest > testJavaConversions PASSED

kafka.security.auth.ZkAuthorizationTest > testIsZkSecurityEnabled STARTED

kafka.security.auth.ZkAuthorizationTest > testIsZkSecurityEnabled PASSED

kafka.security.auth.ZkAuthorizationTest > testZkUtils STARTED

kafka.security.auth.ZkAuthorizationTest > testZkUtils PASSED

kafka.security.auth.ZkAuthorizationTest > testZkAntiMigration STARTED

kafka.security.auth.ZkAuthorizationTest > testZkAntiMigration PASSED

kafka.security.auth.ZkAuthorizationTest > testZkMigration STARTED

kafka.security.auth.ZkAuthorizationTest > testZkMigration PASSED

kafka.security.auth.ZkAuthorizationTest > testChroot STARTED

kafka.security.auth.ZkAuthorizationTest > testChroot PASSED

kafka.security.auth.ZkAuthorizationTest > testDelete STARTED

kafka.security.auth.ZkAuthorizationTest > testDelete PASSED

kafka.security.auth.ZkAuthorizationTest > testDeleteRecursive STARTED

kafka.security.auth.ZkAuthorizationTest > testDeleteRecursive PASSED

kafka.security.auth.AclTest > 

Build failed in Jenkins: kafka-trunk-jdk8 #2432

2018-02-22 Thread Apache Jenkins Server
See 


Changes:

[rajinisivaram] MINOR: Fix javadoc for consumer offsets lookup APIs which do 
not block

--
[...truncated 3.50 MB...]

kafka.utils.json.JsonValueTest > testAsJsonObjectOption STARTED

kafka.utils.json.JsonValueTest > testAsJsonObjectOption PASSED

kafka.utils.json.JsonValueTest > testAsJsonArrayOption STARTED

kafka.utils.json.JsonValueTest > testAsJsonArrayOption PASSED

kafka.utils.json.JsonValueTest > testAsJsonArray STARTED

kafka.utils.json.JsonValueTest > testAsJsonArray PASSED

kafka.utils.json.JsonValueTest > testJsonValueHashCode STARTED

kafka.utils.json.JsonValueTest > testJsonValueHashCode PASSED

kafka.utils.json.JsonValueTest > testDecodeInt STARTED

kafka.utils.json.JsonValueTest > testDecodeInt PASSED

kafka.utils.json.JsonValueTest > testDecodeMap STARTED

kafka.utils.json.JsonValueTest > testDecodeMap PASSED

kafka.utils.json.JsonValueTest > testDecodeSeq STARTED

kafka.utils.json.JsonValueTest > testDecodeSeq PASSED

kafka.utils.json.JsonValueTest > testJsonObjectGet STARTED

kafka.utils.json.JsonValueTest > testJsonObjectGet PASSED

kafka.utils.json.JsonValueTest > testJsonValueEquals STARTED

kafka.utils.json.JsonValueTest > testJsonValueEquals PASSED

kafka.utils.json.JsonValueTest > testJsonArrayIterator STARTED

kafka.utils.json.JsonValueTest > testJsonArrayIterator PASSED

kafka.utils.json.JsonValueTest > testJsonObjectApply STARTED

kafka.utils.json.JsonValueTest > testJsonObjectApply PASSED

kafka.utils.json.JsonValueTest > testDecodeBoolean STARTED

kafka.utils.json.JsonValueTest > testDecodeBoolean PASSED

kafka.utils.PasswordEncoderTest > testEncoderConfigChange STARTED

kafka.utils.PasswordEncoderTest > testEncoderConfigChange PASSED

kafka.utils.PasswordEncoderTest > testEncodeDecodeAlgorithms STARTED

kafka.utils.PasswordEncoderTest > testEncodeDecodeAlgorithms PASSED

kafka.utils.PasswordEncoderTest > testEncodeDecode STARTED

kafka.utils.PasswordEncoderTest > testEncodeDecode PASSED

kafka.producer.AsyncProducerTest > testFailedSendRetryLogic STARTED

kafka.producer.AsyncProducerTest > testFailedSendRetryLogic PASSED

kafka.producer.AsyncProducerTest > testQueueTimeExpired STARTED

kafka.producer.AsyncProducerTest > testQueueTimeExpired PASSED

kafka.producer.AsyncProducerTest > testPartitionAndCollateEvents STARTED

kafka.producer.AsyncProducerTest > testPartitionAndCollateEvents PASSED

kafka.producer.AsyncProducerTest > testBatchSize STARTED

kafka.producer.AsyncProducerTest > testBatchSize PASSED

kafka.producer.AsyncProducerTest > testSerializeEvents STARTED

kafka.producer.AsyncProducerTest > testSerializeEvents PASSED

kafka.producer.AsyncProducerTest > testProducerQueueSize STARTED

kafka.producer.AsyncProducerTest > testProducerQueueSize PASSED

kafka.producer.AsyncProducerTest > testRandomPartitioner STARTED

kafka.producer.AsyncProducerTest > testRandomPartitioner PASSED

kafka.producer.AsyncProducerTest > testInvalidConfiguration STARTED

kafka.producer.AsyncProducerTest > testInvalidConfiguration PASSED

kafka.producer.AsyncProducerTest > testInvalidPartition STARTED

kafka.producer.AsyncProducerTest > testInvalidPartition PASSED

kafka.producer.AsyncProducerTest > testNoBroker STARTED

kafka.producer.AsyncProducerTest > testNoBroker PASSED

kafka.producer.AsyncProducerTest > testProduceAfterClosed STARTED

kafka.producer.AsyncProducerTest > testProduceAfterClosed PASSED

kafka.producer.AsyncProducerTest > testJavaProducer STARTED

kafka.producer.AsyncProducerTest > testJavaProducer PASSED

kafka.producer.AsyncProducerTest > testIncompatibleEncoder STARTED

kafka.producer.AsyncProducerTest > testIncompatibleEncoder PASSED

kafka.producer.SyncProducerTest > testReachableServer STARTED

kafka.producer.SyncProducerTest > testReachableServer PASSED

kafka.producer.SyncProducerTest > testMessageSizeTooLarge STARTED

kafka.producer.SyncProducerTest > testMessageSizeTooLarge PASSED

kafka.producer.SyncProducerTest > testNotEnoughReplicas STARTED

kafka.producer.SyncProducerTest > testNotEnoughReplicas PASSED

kafka.producer.SyncProducerTest > testMessageSizeTooLargeWithAckZero STARTED

kafka.producer.SyncProducerTest > testMessageSizeTooLargeWithAckZero PASSED

kafka.producer.SyncProducerTest > testProducerCanTimeout STARTED

kafka.producer.SyncProducerTest > testProducerCanTimeout PASSED

kafka.producer.SyncProducerTest > testProduceRequestWithNoResponse STARTED

kafka.producer.SyncProducerTest > testProduceRequestWithNoResponse PASSED

kafka.producer.SyncProducerTest > testEmptyProduceRequest STARTED

kafka.producer.SyncProducerTest > testEmptyProduceRequest PASSED

kafka.producer.SyncProducerTest > testProduceCorrectlyReceivesResponse STARTED

kafka.producer.SyncProducerTest > testProduceCorrectlyReceivesResponse PASSED

kafka.producer.ProducerTest > testSendToNewTopic STARTED

kafka.producer.ProducerTest > 

Re: [VOTE] KIP-171 - Extend Consumer Group Reset Offset for Stream Application

2018-02-22 Thread Guozhang Wang
Yup, agreed.

On Thu, Feb 22, 2018 at 11:46 AM, Ismael Juma  wrote:

> Hi Guozhang,
>
> To clarify my comment: any change with a backwards compatibility impact
> should be mentioned in the "Compatibility, Deprecation, and Migration Plan"
> section (in addition to the deprecation period and only happening in a
> major release as you said).
>
> Ismael
>
> On Thu, Feb 22, 2018 at 11:10 AM, Guozhang Wang 
> wrote:
>
> > Just to clarify, the KIP itself mentioned the change, so the PR
> > was not unintentional:
> >
> > "
> >
> > 3. Keep execution parameters uniform between both tools: It will execute
> by
> > default, and have a `dry-run` parameter just show the results. This will
> > involve change current `ConsumerGroupCommand` to change execution
> options.
> >
> > "
> >
> > We agreed that the proposed change is better than the current status,
> > since many people not using "--execute" on the consumer reset tool were
> > actually surprised that nothing got executed. What we are concerned
> > about, in hindsight, is that instead of making such a change in a minor
> > release like 1.1, we should consider only doing it in the next major
> > release, as it breaks compatibility. In the past, when we were going to
> > remove / replace a certain option, we would first add a deprecation
> > warning in earlier releases until it was finally removed. So Jason's
> > suggestion is to do the same: we are not reverting this change forever,
> > but delaying it until after 1.1.
> >
> >
> > Guozhang
> >
> >
> > On Thu, Feb 22, 2018 at 10:56 AM, Colin McCabe 
> wrote:
> >
> > > Perhaps, if the user doesn't pass the --execute flag, the tool should
> > > print a prompt like "would you like to perform this reset?" and wait
> for
> > a
> > > Y / N (or yes or no) input from the command-line.  Then, if the
> --execute
> > > flag is passed, we skip this.  That seems 99% compatible, and also
> > > accomplishes the goal of making the tool less confusing.
> > >
> > > best,
> > > Colin
> > >
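[Editorial note: Colin's prompt-unless-`--execute` suggestion can be sketched as below. This is illustrative, not the actual reset tool code; the decision is separated from I/O so it is testable, and the real tool would read the answer from System.in.]

```java
// Toy decision logic: with --execute, proceed silently; without it,
// prompt and proceed only on an explicit "y"/"yes".
public class ResetPromptSketch {
    // `typedAnswer` stands in for the line the user typed at the prompt.
    static boolean shouldExecute(boolean executeFlag, String typedAnswer) {
        if (executeFlag) {
            return true; // --execute given: skip the prompt entirely
        }
        System.out.print("Would you like to perform this reset? (y/n): ");
        String answer = typedAnswer == null ? "n" : typedAnswer.trim().toLowerCase();
        return answer.equals("y") || answer.equals("yes");
    }
}
```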
> > >
> > > On Thu, Feb 22, 2018, at 10:23, Ismael Juma wrote:
> > > > Yes, let's revert the incompatible changes. There was no mention of
> > > > compatibility impact on the KIP and we should ensure that is the case
> > for
> > > > 1.1.0.
> > > >
> > > > Ismael
> > > >
> > > > On Thu, Feb 22, 2018 at 9:55 AM, Jason Gustafson  >
> > > wrote:
> > > >
> > > > > I know it's a been a while since this vote passed, but I think we
> > need
> > > to
> > > > > reconsider the incompatible changes to the consumer reset tool.
> > > > > Specifically, we have removed the --execute option without
> > deprecating
> > > it
> > > > > first, and we have changed the default behavior to execute rather
> > than
> > > do a
> > > > > dry run. The latter in particular seems dangerous since users who
> > were
> > > > > previously using the default behavior to view offsets will now
> > suddenly
> > > > > find the offsets already committed. As far as I can tell, this
> change
> > > was
> > > > > done mostly for cosmetic reasons. Without a compelling reason, I
> > think
> > > we
> > > > > should err on the side of maintaining compatibility. At a minimum,
> if
> > > we
> > > > > really want to break compatibility, we should wait for the next
> major
> > > > > release.
> > > > >
> > > > > Note that I have submitted a patch to revert this change here:
> > > > > https://github.com/apache/kafka/pull/4611.
> > > > >
> > > > > Thoughts?
> > > > >
> > > > > Thanks,
> > > > > Jason
> > > > >
> > > > >
> > > > >
> > > > > On Tue, Nov 14, 2017 at 3:26 AM, Jorge Esteban Quilcate Otoya <
> > > > > quilcate.jo...@gmail.com> wrote:
> > > > >
> > > > > > Thanks to everyone for your feedback.
> > > > > >
> > > > > > KIP has been accepted and discussion is moved to PR.
> > > > > >
> > > > > > Cheers,
> > > > > > Jorge.
> > > > > >
> > > > > > El lun., 6 nov. 2017 a las 17:31, Rajini Sivaram (<
> > > > > rajinisiva...@gmail.com
> > > > > > >)
> > > > > > escribió:
> > > > > >
> > > > > > > +1 (binding)
> > > > > > > Thanks for the KIP,  Jorge.
> > > > > > >
> > > > > > > Regards,
> > > > > > >
> > > > > > > Rajini
> > > > > > >
> > > > > > > On Tue, Oct 31, 2017 at 9:58 AM, Damian Guy <
> > damian@gmail.com>
> > > > > > wrote:
> > > > > > >
> > > > > > > > Thanks for the KIP - +1 (binding)
> > > > > > > >
> > > > > > > > On Mon, 23 Oct 2017 at 18:39 Guozhang Wang <
> wangg...@gmail.com
> > >
> > > > > wrote:
> > > > > > > >
> > > > > > > > > Thanks Jorge for driving this KIP! +1 (binding).
> > > > > > > > >
> > > > > > > > >
> > > > > > > > > Guozhang
> > > > > > > > >
> > > > > > > > > On Mon, Oct 16, 2017 at 2:11 PM, Bill Bejeck <
> > > bbej...@gmail.com>
> > > > > > > wrote:
> > > > > > > > >
> > > > > > > > > > +1
> > > > > > > > > >
> > > > > > > > > > Thanks,
> > > > > > > > > > Bill
> > > > > > > > > >
> > > > > > > > > > On Fri, Oct 13, 2017 at 6:36 PM, Ted Yu <
> > 

[jira] [Created] (KAFKA-6586) Refactor Connect executables

2018-02-22 Thread Randall Hauch (JIRA)
Randall Hauch created KAFKA-6586:


 Summary: Refactor Connect executables
 Key: KAFKA-6586
 URL: https://issues.apache.org/jira/browse/KAFKA-6586
 Project: Kafka
  Issue Type: Improvement
  Components: KafkaConnect
Reporter: Randall Hauch


The main methods in {{ConnectDistributed}} and {{ConnectStandalone}} have a lot 
of duplication, and it'd be good to refactor to centralize the logic.





Re: [DISCUSS] KIP-253: Support in-order message delivery with partition expansion

2018-02-22 Thread Jun Rao
Hi, Dong,

Regarding deleting partitions, Gwen's point is right on. In some of the
usage of Kafka, the traffic can be bursty. When the traffic goes up, adding
partitions is a quick way of shifting some traffic to the newly added
brokers. Once the traffic goes down, the newly added brokers will be
reclaimed (potentially by moving replicas off those brokers). However, if
one can only add partitions without removing, eventually, one will hit the
limit.

Thanks,

Jun

On Wed, Feb 21, 2018 at 12:23 PM, Dong Lin  wrote:

> Hey Jun,
>
> Thanks much for your comments.
>
> On Wed, Feb 21, 2018 at 10:17 AM, Jun Rao  wrote:
>
> > Hi, Dong,
> >
> > Thanks for the KIP. At the high level, this makes sense. A few comments
> > below.
> >
> > 1. It would be useful to support removing partitions as well. The general
> > idea could be bumping the leader epoch for the remaining partitions. For
> > the partitions to be removed, we can make them read-only and remove them
> > after the retention time.
> >
>
> I think we should be able to find a way to delete partitions of an existing
> topic. But it will also add complexity to our broker and client
> implementation. I am just not sure whether this feature is worth the
> complexity. Could you explain a bit more why user would want to delete
> partitions of an existing topic? Is it to handle the human error where a
> topic is created with too many partitions by mistake?
>
>
> >
> > 2. If we support removing partitions, I am not sure if it's enough to
> fence
> > off the producer using total partition number since the total partition
> > number may remain the same after adding and then removing partitions.
> > Perhaps we need some notion of partition epoch.
> >
> > 3. In step 5) of the Proposed Changes, I am not sure that we can always
> > rely upon position 0 for dealing with the new partitions. A consumer will
> > start consuming the new partition when some of the existing records have
> > been removed due to retention.
> >
>
>
> You are right. I have updated the KIP to compare the startPosition with the
> earliest offset of the partition. If the startPosition > earliest offset,
> then the consumer can consume messages from the given partition directly.
> This should handle the case where some of the existing records have been
> removed before consumer starts consumption.
>
>
> >
> > 4. When the consumer is allowed to read messages after the partition
> > expansion point, a key may be moved from one consumer instance to
> another.
> > In this case, similar to consumer rebalance, it's useful to inform the
> > application about this so that the consumer can save and reload the per
> key
> > state. So, we need to either add some new callbacks or reuse the existing
> > rebalance callbacks.
> >
>
>
> Good point. I will add the callback later after we discuss the need for
> partition deletion.
>
>
> >
> > 5. There is some subtlety in assigning partitions. Currently, the
> consumer
> > assigns partitions without needing to know the consumption offset. This
> > could mean that a particular consumer may be assigned some new partitions
> > that are not consumable yet, which could lead to imbalanced load
> > temporarily. Not sure if this is super important to address though.
> >
>
> Personally I think it is not worth adding more complexity just to optimize
> this scenario. This imbalance should exist only for a short period of time.
> If it is important I can think more about how to handle it.
>
>
> >
> > Thanks,
> >
> > Jun
> >
> >
> >
> > On Sat, Feb 10, 2018 at 3:35 PM, Dong Lin  wrote:
> >
> > > Hi all,
> > >
> > > I have created KIP-253: Support in-order message delivery with
> partition
> > > expansion. See
> > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > > 253%3A+Support+in-order+message+delivery+with+partition+expansion
> > > .
> > >
> > > This KIP provides a way to allow messages of the same key from the same
> > > producer to be consumed in the same order they are produced even if we
> > > expand partition of the topic.
> > >
> > > Thanks,
> > > Dong
> > >
> >
>
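The start-position check discussed in the thread above (consume from the given position only if it is still retained, otherwise fall back to the earliest offset) reduces to a simple clamp. A minimal sketch, with an illustrative helper name not taken from the KIP:

```java
// Sketch of the start-position resolution described in the thread;
// the class/method names are illustrative, not actual Kafka code.
public final class StartOffsets {

    // If records before startPosition were already removed by retention,
    // the consumer must begin at the partition's earliest retained offset;
    // otherwise it can consume from startPosition directly.
    public static long resolve(long startPosition, long earliestOffset) {
        return Math.max(startPosition, earliestOffset);
    }
}
```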


Re: [VOTE] KIP-171 - Extend Consumer Group Reset Offset for Stream Application

2018-02-22 Thread Ismael Juma
Hi Guozhang,

To clarify my comment: any change with a backwards compatibility impact
should be mentioned in the "Compatibility, Deprecation, and Migration Plan"
section (in addition to the deprecation period and only happening in a
major release as you said).

Ismael

On Thu, Feb 22, 2018 at 11:10 AM, Guozhang Wang  wrote:

> Just to clarify, the KIP itself has mentioned about the change so the PR
> was not un-intentional:
>
> "
>
> 3. Keep execution parameters uniform between both tools: It will execute by
> default, and have a `dry-run` parameter just show the results. This will
> involve change current `ConsumerGroupCommand` to change execution options.
>
> "
>
> We were agreed that the proposed change is better than the current status,
> since may people not using "--execute" on consumer reset tool were actually
> surprised that nothing gets executed. What we were concerning as a
> hind-sight is that instead of doing such change in a minor release like
> 1.1, we should consider only doing that in the next major release as it
> breaks compatibility. In the past when we are going to remove / replace
> certain option we would first add a going-to-be-deprecated warning in the
> previous releases until it was finally removed. So Jason's suggestion is to
> do the same: we are not reverting this change forever, but trying to delay
> it after 1.1.
>
>
> Guozhang
>
>
> On Thu, Feb 22, 2018 at 10:56 AM, Colin McCabe  wrote:
>
> > Perhaps, if the user doesn't pass the --execute flag, the tool should
> > print a prompt like "would you like to perform this reset?" and wait for
> a
> > Y / N (or yes or no) input from the command-line.  Then, if the --execute
> > flag is passed, we skip this.  That seems 99% compatible, and also
> > accomplishes the goal of making the tool less confusing.
> >
> > best,
> > Colin
> >
> >
> > On Thu, Feb 22, 2018, at 10:23, Ismael Juma wrote:
> > > Yes, let's revert the incompatible changes. There was no mention of
> > > compatibility impact on the KIP and we should ensure that is the case
> for
> > > 1.1.0.
> > >
> > > Ismael
> > >
> > > On Thu, Feb 22, 2018 at 9:55 AM, Jason Gustafson 
> > wrote:
> > >
> > > > I know it's a been a while since this vote passed, but I think we
> need
> > to
> > > > reconsider the incompatible changes to the consumer reset tool.
> > > > Specifically, we have removed the --execute option without
> deprecating
> > it
> > > > first, and we have changed the default behavior to execute rather
> than
> > do a
> > > > dry run. The latter in particular seems dangerous since users who
> were
> > > > previously using the default behavior to view offsets will now
> suddenly
> > > > find the offsets already committed. As far as I can tell, this change
> > was
> > > > done mostly for cosmetic reasons. Without a compelling reason, I
> think
> > we
> > > > should err on the side of maintaining compatibility. At a minimum, if
> > we
> > > > really want to break compatibility, we should wait for the next major
> > > > release.
> > > >
> > > > Note that I have submitted a patch to revert this change here:
> > > > https://github.com/apache/kafka/pull/4611.
> > > >
> > > > Thoughts?
> > > >
> > > > Thanks,
> > > > Jason
> > > >
> > > >
> > > >
> > > > On Tue, Nov 14, 2017 at 3:26 AM, Jorge Esteban Quilcate Otoya <
> > > > quilcate.jo...@gmail.com> wrote:
> > > >
> > > > > Thanks to everyone for your feedback.
> > > > >
> > > > > KIP has been accepted and discussion is moved to PR.
> > > > >
> > > > > Cheers,
> > > > > Jorge.
> > > > >
> > > > > El lun., 6 nov. 2017 a las 17:31, Rajini Sivaram (<
> > > > rajinisiva...@gmail.com
> > > > > >)
> > > > > escribió:
> > > > >
> > > > > > +1 (binding)
> > > > > > Thanks for the KIP,  Jorge.
> > > > > >
> > > > > > Regards,
> > > > > >
> > > > > > Rajini
> > > > > >
> > > > > > On Tue, Oct 31, 2017 at 9:58 AM, Damian Guy <
> damian@gmail.com>
> > > > > wrote:
> > > > > >
> > > > > > > Thanks for the KIP - +1 (binding)
> > > > > > >
> > > > > > > On Mon, 23 Oct 2017 at 18:39 Guozhang Wang  >
> > > > wrote:
> > > > > > >
> > > > > > > > Thanks Jorge for driving this KIP! +1 (binding).
> > > > > > > >
> > > > > > > >
> > > > > > > > Guozhang
> > > > > > > >
> > > > > > > > On Mon, Oct 16, 2017 at 2:11 PM, Bill Bejeck <
> > bbej...@gmail.com>
> > > > > > wrote:
> > > > > > > >
> > > > > > > > > +1
> > > > > > > > >
> > > > > > > > > Thanks,
> > > > > > > > > Bill
> > > > > > > > >
> > > > > > > > > On Fri, Oct 13, 2017 at 6:36 PM, Ted Yu <
> yuzhih...@gmail.com
> > >
> > > > > wrote:
> > > > > > > > >
> > > > > > > > > > +1
> > > > > > > > > >
> > > > > > > > > > On Fri, Oct 13, 2017 at 3:32 PM, Matthias J. Sax <
> > > > > > > > matth...@confluent.io>
> > > > > > > > > > wrote:
> > > > > > > > > >
> > > > > > > > > > > +1
> > > > > > > > > > >
> > > > > > > > > > >
> > > > > > > > > > >
> > > > > > > > > > > On 9/11/17 3:04 PM, Jorge Esteban Quilcate 

[jira] [Created] (KAFKA-6585) Consolidate duplicated logic on reset tools

2018-02-22 Thread Guozhang Wang (JIRA)
Guozhang Wang created KAFKA-6585:


 Summary: Consolidate duplicated logic on reset tools
 Key: KAFKA-6585
 URL: https://issues.apache.org/jira/browse/KAFKA-6585
 Project: Kafka
  Issue Type: Improvement
Reporter: Guozhang Wang


The consumer reset tool and the streams reset tool today share a lot of common 
logic, such as resetting to a datetime. We can consolidate this logic into a 
common class that depends directly on the admin client, and simply let both 
tools use that class.
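For example, both tools must turn a reset datetime into the epoch-millisecond timestamp that the consumer's offsets-by-timestamp lookup expects. A consolidated helper might look like this (the class name is hypothetical, and the real tools also handle earliest/latest, by-duration, and other reset scenarios):

```java
import java.time.OffsetDateTime;

// Hypothetical shared helper for the reset tools; not an actual Kafka class.
public final class ResetDateTime {

    // Parses an ISO-8601 datetime with an offset (e.g. "2018-02-22T10:00:00Z")
    // into epoch milliseconds, the value KafkaConsumer.offsetsForTimes() takes.
    public static long toEpochMillis(String dateTime) {
        return OffsetDateTime.parse(dateTime).toInstant().toEpochMilli();
    }
}
```

Both reset tools could then call the same helper before performing the offset lookup, removing the duplicated parsing code.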



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [VOTE] KIP-171 - Extend Consumer Group Reset Offset for Stream Application

2018-02-22 Thread Jason Gustafson
I like Colin's suggestion for the longer term. If you don't provide
--dry-run or --execute, then the command will prompt you.

-Jason

On Thu, Feb 22, 2018 at 11:10 AM, Guozhang Wang  wrote:

> Just to clarify, the KIP itself has mentioned about the change so the PR
> was not un-intentional:
>
> "
>
> 3. Keep execution parameters uniform between both tools: It will execute by
> default, and have a `dry-run` parameter just show the results. This will
> involve change current `ConsumerGroupCommand` to change execution options.
>
> "
>
> We were agreed that the proposed change is better than the current status,
> since may people not using "--execute" on consumer reset tool were actually
> surprised that nothing gets executed. What we were concerning as a
> hind-sight is that instead of doing such change in a minor release like
> 1.1, we should consider only doing that in the next major release as it
> breaks compatibility. In the past when we are going to remove / replace
> certain option we would first add a going-to-be-deprecated warning in the
> previous releases until it was finally removed. So Jason's suggestion is to
> do the same: we are not reverting this change forever, but trying to delay
> it after 1.1.
>
>
> Guozhang
>
>
> On Thu, Feb 22, 2018 at 10:56 AM, Colin McCabe  wrote:
>
> > Perhaps, if the user doesn't pass the --execute flag, the tool should
> > print a prompt like "would you like to perform this reset?" and wait for
> a
> > Y / N (or yes or no) input from the command-line.  Then, if the --execute
> > flag is passed, we skip this.  That seems 99% compatible, and also
> > accomplishes the goal of making the tool less confusing.
> >
> > best,
> > Colin
> >
> >
> > On Thu, Feb 22, 2018, at 10:23, Ismael Juma wrote:
> > > Yes, let's revert the incompatible changes. There was no mention of
> > > compatibility impact on the KIP and we should ensure that is the case
> for
> > > 1.1.0.
> > >
> > > Ismael
> > >
> > > On Thu, Feb 22, 2018 at 9:55 AM, Jason Gustafson 
> > wrote:
> > >
> > > > I know it's a been a while since this vote passed, but I think we
> need
> > to
> > > > reconsider the incompatible changes to the consumer reset tool.
> > > > Specifically, we have removed the --execute option without
> deprecating
> > it
> > > > first, and we have changed the default behavior to execute rather
> than
> > do a
> > > > dry run. The latter in particular seems dangerous since users who
> were
> > > > previously using the default behavior to view offsets will now
> suddenly
> > > > find the offsets already committed. As far as I can tell, this change
> > was
> > > > done mostly for cosmetic reasons. Without a compelling reason, I
> think
> > we
> > > > should err on the side of maintaining compatibility. At a minimum, if
> > we
> > > > really want to break compatibility, we should wait for the next major
> > > > release.
> > > >
> > > > Note that I have submitted a patch to revert this change here:
> > > > https://github.com/apache/kafka/pull/4611.
> > > >
> > > > Thoughts?
> > > >
> > > > Thanks,
> > > > Jason
> > > >
> > > >
> > > >
> > > > On Tue, Nov 14, 2017 at 3:26 AM, Jorge Esteban Quilcate Otoya <
> > > > quilcate.jo...@gmail.com> wrote:
> > > >
> > > > > Thanks to everyone for your feedback.
> > > > >
> > > > > KIP has been accepted and discussion is moved to PR.
> > > > >
> > > > > Cheers,
> > > > > Jorge.
> > > > >
> > > > > El lun., 6 nov. 2017 a las 17:31, Rajini Sivaram (<
> > > > rajinisiva...@gmail.com
> > > > > >)
> > > > > escribió:
> > > > >
> > > > > > +1 (binding)
> > > > > > Thanks for the KIP,  Jorge.
> > > > > >
> > > > > > Regards,
> > > > > >
> > > > > > Rajini
> > > > > >
> > > > > > On Tue, Oct 31, 2017 at 9:58 AM, Damian Guy <
> damian@gmail.com>
> > > > > wrote:
> > > > > >
> > > > > > > Thanks for the KIP - +1 (binding)
> > > > > > >
> > > > > > > On Mon, 23 Oct 2017 at 18:39 Guozhang Wang  >
> > > > wrote:
> > > > > > >
> > > > > > > > Thanks Jorge for driving this KIP! +1 (binding).
> > > > > > > >
> > > > > > > >
> > > > > > > > Guozhang
> > > > > > > >
> > > > > > > > On Mon, Oct 16, 2017 at 2:11 PM, Bill Bejeck <
> > bbej...@gmail.com>
> > > > > > wrote:
> > > > > > > >
> > > > > > > > > +1
> > > > > > > > >
> > > > > > > > > Thanks,
> > > > > > > > > Bill
> > > > > > > > >
> > > > > > > > > On Fri, Oct 13, 2017 at 6:36 PM, Ted Yu <
> yuzhih...@gmail.com
> > >
> > > > > wrote:
> > > > > > > > >
> > > > > > > > > > +1
> > > > > > > > > >
> > > > > > > > > > On Fri, Oct 13, 2017 at 3:32 PM, Matthias J. Sax <
> > > > > > > > matth...@confluent.io>
> > > > > > > > > > wrote:
> > > > > > > > > >
> > > > > > > > > > > +1
> > > > > > > > > > >
> > > > > > > > > > >
> > > > > > > > > > >
> > > > > > > > > > > On 9/11/17 3:04 PM, Jorge Esteban Quilcate Otoya wrote:
> > > > > > > > > > > > Hi All,
> > > > > > > > > > > >
> > > > > > > > > > > > It seems that there is no further concern 

Re: [VOTE] KIP-171 - Extend Consumer Group Reset Offset for Stream Application

2018-02-22 Thread Guozhang Wang
Just to clarify, the KIP itself mentioned the change, so the PR was not
unintentional:

"

3. Keep execution parameters uniform between both tools: It will execute by
default, and have a `dry-run` parameter just show the results. This will
involve change current `ConsumerGroupCommand` to change execution options.

"

We agreed that the proposed change is better than the current status, since
many people not using "--execute" on the consumer reset tool were actually
surprised that nothing was executed. What we are concerned about in
hindsight is that instead of making such a change in a minor release like
1.1, we should consider doing it only in the next major release, as it
breaks compatibility. In the past, when we were going to remove or replace
an option, we would first add a deprecation warning in the preceding
releases until the option was finally removed. So Jason's suggestion is to
do the same: we are not reverting this change forever, but delaying it
until after 1.1.


Guozhang


On Thu, Feb 22, 2018 at 10:56 AM, Colin McCabe  wrote:

> Perhaps, if the user doesn't pass the --execute flag, the tool should
> print a prompt like "would you like to perform this reset?" and wait for a
> Y / N (or yes or no) input from the command-line.  Then, if the --execute
> flag is passed, we skip this.  That seems 99% compatible, and also
> accomplishes the goal of making the tool less confusing.
>
> best,
> Colin
>
>
> On Thu, Feb 22, 2018, at 10:23, Ismael Juma wrote:
> > Yes, let's revert the incompatible changes. There was no mention of
> > compatibility impact on the KIP and we should ensure that is the case for
> > 1.1.0.
> >
> > Ismael
> >
> > On Thu, Feb 22, 2018 at 9:55 AM, Jason Gustafson 
> wrote:
> >
> > > I know it's a been a while since this vote passed, but I think we need
> to
> > > reconsider the incompatible changes to the consumer reset tool.
> > > Specifically, we have removed the --execute option without deprecating
> it
> > > first, and we have changed the default behavior to execute rather than
> do a
> > > dry run. The latter in particular seems dangerous since users who were
> > > previously using the default behavior to view offsets will now suddenly
> > > find the offsets already committed. As far as I can tell, this change
> was
> > > done mostly for cosmetic reasons. Without a compelling reason, I think
> we
> > > should err on the side of maintaining compatibility. At a minimum, if
> we
> > > really want to break compatibility, we should wait for the next major
> > > release.
> > >
> > > Note that I have submitted a patch to revert this change here:
> > > https://github.com/apache/kafka/pull/4611.
> > >
> > > Thoughts?
> > >
> > > Thanks,
> > > Jason
> > >
> > >
> > >
> > > On Tue, Nov 14, 2017 at 3:26 AM, Jorge Esteban Quilcate Otoya <
> > > quilcate.jo...@gmail.com> wrote:
> > >
> > > > Thanks to everyone for your feedback.
> > > >
> > > > KIP has been accepted and discussion is moved to PR.
> > > >
> > > > Cheers,
> > > > Jorge.
> > > >
> > > > El lun., 6 nov. 2017 a las 17:31, Rajini Sivaram (<
> > > rajinisiva...@gmail.com
> > > > >)
> > > > escribió:
> > > >
> > > > > +1 (binding)
> > > > > Thanks for the KIP,  Jorge.
> > > > >
> > > > > Regards,
> > > > >
> > > > > Rajini
> > > > >
> > > > > On Tue, Oct 31, 2017 at 9:58 AM, Damian Guy 
> > > > wrote:
> > > > >
> > > > > > Thanks for the KIP - +1 (binding)
> > > > > >
> > > > > > On Mon, 23 Oct 2017 at 18:39 Guozhang Wang 
> > > wrote:
> > > > > >
> > > > > > > Thanks Jorge for driving this KIP! +1 (binding).
> > > > > > >
> > > > > > >
> > > > > > > Guozhang
> > > > > > >
> > > > > > > On Mon, Oct 16, 2017 at 2:11 PM, Bill Bejeck <
> bbej...@gmail.com>
> > > > > wrote:
> > > > > > >
> > > > > > > > +1
> > > > > > > >
> > > > > > > > Thanks,
> > > > > > > > Bill
> > > > > > > >
> > > > > > > > On Fri, Oct 13, 2017 at 6:36 PM, Ted Yu  >
> > > > wrote:
> > > > > > > >
> > > > > > > > > +1
> > > > > > > > >
> > > > > > > > > On Fri, Oct 13, 2017 at 3:32 PM, Matthias J. Sax <
> > > > > > > matth...@confluent.io>
> > > > > > > > > wrote:
> > > > > > > > >
> > > > > > > > > > +1
> > > > > > > > > >
> > > > > > > > > >
> > > > > > > > > >
> > > > > > > > > > On 9/11/17 3:04 PM, Jorge Esteban Quilcate Otoya wrote:
> > > > > > > > > > > Hi All,
> > > > > > > > > > >
> > > > > > > > > > > It seems that there is no further concern with the
> KIP-171.
> > > > > > > > > > > At this point we would like to start the voting
> process.
> > > > > > > > > > >
> > > > > > > > > > > The KIP can be found here:
> > > > > > > > > > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > > > > > > > > > 171+-+Extend+Consumer+Group+Reset+Offset+for+Stream+
> > > > Application
> > > > > > > > > > >
> > > > > > > > > > >
> > > > > > > > > > > Thanks!
> > > > > > > > > > >
> > > > > > > > > >
> > > > > > > > > >
> > > > > > > > >
> > > > 

Re: [VOTE] KIP-171 - Extend Consumer Group Reset Offset for Stream Application

2018-02-22 Thread Colin McCabe
Perhaps, if the user doesn't pass the --execute flag, the tool should print a 
prompt like "would you like to perform this reset?" and wait for a Y / N (or 
yes or no) input from the command-line.  Then, if the --execute flag is passed, 
we skip this.  That seems 99% compatible, and also accomplishes the goal of 
making the tool less confusing.
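Colin's suggestion amounts to a confirmation gate when neither flag is given. A minimal sketch of the input handling (the class and method names are illustrative only):

```java
// Sketch of the Y/N confirmation prompt described above; illustrative only,
// not the actual ConsumerGroupCommand code.
public final class Confirm {

    // Accepts "y" or "yes" (any case, surrounding whitespace ignored) as
    // confirmation; any other input, including null/EOF, counts as a no.
    public static boolean accepted(String input) {
        if (input == null) {
            return false;
        }
        String s = input.trim().toLowerCase();
        return s.equals("y") || s.equals("yes");
    }
}
```

The tool would read one line from stdin, pass it to a helper like this, and proceed with the reset only on a positive answer; passing --execute would skip the prompt entirely.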

best,
Colin


On Thu, Feb 22, 2018, at 10:23, Ismael Juma wrote:
> Yes, let's revert the incompatible changes. There was no mention of
> compatibility impact on the KIP and we should ensure that is the case for
> 1.1.0.
> 
> Ismael
> 
> On Thu, Feb 22, 2018 at 9:55 AM, Jason Gustafson  wrote:
> 
> > I know it's a been a while since this vote passed, but I think we need to
> > reconsider the incompatible changes to the consumer reset tool.
> > Specifically, we have removed the --execute option without deprecating it
> > first, and we have changed the default behavior to execute rather than do a
> > dry run. The latter in particular seems dangerous since users who were
> > previously using the default behavior to view offsets will now suddenly
> > find the offsets already committed. As far as I can tell, this change was
> > done mostly for cosmetic reasons. Without a compelling reason, I think we
> > should err on the side of maintaining compatibility. At a minimum, if we
> > really want to break compatibility, we should wait for the next major
> > release.
> >
> > Note that I have submitted a patch to revert this change here:
> > https://github.com/apache/kafka/pull/4611.
> >
> > Thoughts?
> >
> > Thanks,
> > Jason
> >
> >
> >
> > On Tue, Nov 14, 2017 at 3:26 AM, Jorge Esteban Quilcate Otoya <
> > quilcate.jo...@gmail.com> wrote:
> >
> > > Thanks to everyone for your feedback.
> > >
> > > KIP has been accepted and discussion is moved to PR.
> > >
> > > Cheers,
> > > Jorge.
> > >
> > > El lun., 6 nov. 2017 a las 17:31, Rajini Sivaram (<
> > rajinisiva...@gmail.com
> > > >)
> > > escribió:
> > >
> > > > +1 (binding)
> > > > Thanks for the KIP,  Jorge.
> > > >
> > > > Regards,
> > > >
> > > > Rajini
> > > >
> > > > On Tue, Oct 31, 2017 at 9:58 AM, Damian Guy 
> > > wrote:
> > > >
> > > > > Thanks for the KIP - +1 (binding)
> > > > >
> > > > > On Mon, 23 Oct 2017 at 18:39 Guozhang Wang 
> > wrote:
> > > > >
> > > > > > Thanks Jorge for driving this KIP! +1 (binding).
> > > > > >
> > > > > >
> > > > > > Guozhang
> > > > > >
> > > > > > On Mon, Oct 16, 2017 at 2:11 PM, Bill Bejeck 
> > > > wrote:
> > > > > >
> > > > > > > +1
> > > > > > >
> > > > > > > Thanks,
> > > > > > > Bill
> > > > > > >
> > > > > > > On Fri, Oct 13, 2017 at 6:36 PM, Ted Yu 
> > > wrote:
> > > > > > >
> > > > > > > > +1
> > > > > > > >
> > > > > > > > On Fri, Oct 13, 2017 at 3:32 PM, Matthias J. Sax <
> > > > > > matth...@confluent.io>
> > > > > > > > wrote:
> > > > > > > >
> > > > > > > > > +1
> > > > > > > > >
> > > > > > > > >
> > > > > > > > >
> > > > > > > > > On 9/11/17 3:04 PM, Jorge Esteban Quilcate Otoya wrote:
> > > > > > > > > > Hi All,
> > > > > > > > > >
> > > > > > > > > > It seems that there is no further concern with the KIP-171.
> > > > > > > > > > At this point we would like to start the voting process.
> > > > > > > > > >
> > > > > > > > > > The KIP can be found here:
> > > > > > > > > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > > > > > > > > 171+-+Extend+Consumer+Group+Reset+Offset+for+Stream+
> > > Application
> > > > > > > > > >
> > > > > > > > > >
> > > > > > > > > > Thanks!
> > > > > > > > > >
> > > > > > > > >
> > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > > >
> > > > > >
> > > > > > --
> > > > > > -- Guozhang
> > > > > >
> > > > >
> > > >
> > >
> >


Re: [VOTE] KIP-171 - Extend Consumer Group Reset Offset for Stream Application

2018-02-22 Thread Ismael Juma
Yes, let's revert the incompatible changes. There was no mention of
compatibility impact on the KIP and we should ensure that is the case for
1.1.0.

Ismael

On Thu, Feb 22, 2018 at 9:55 AM, Jason Gustafson  wrote:

> I know it's a been a while since this vote passed, but I think we need to
> reconsider the incompatible changes to the consumer reset tool.
> Specifically, we have removed the --execute option without deprecating it
> first, and we have changed the default behavior to execute rather than do a
> dry run. The latter in particular seems dangerous since users who were
> previously using the default behavior to view offsets will now suddenly
> find the offsets already committed. As far as I can tell, this change was
> done mostly for cosmetic reasons. Without a compelling reason, I think we
> should err on the side of maintaining compatibility. At a minimum, if we
> really want to break compatibility, we should wait for the next major
> release.
>
> Note that I have submitted a patch to revert this change here:
> https://github.com/apache/kafka/pull/4611.
>
> Thoughts?
>
> Thanks,
> Jason
>
>
>
> On Tue, Nov 14, 2017 at 3:26 AM, Jorge Esteban Quilcate Otoya <
> quilcate.jo...@gmail.com> wrote:
>
> > Thanks to everyone for your feedback.
> >
> > KIP has been accepted and discussion is moved to PR.
> >
> > Cheers,
> > Jorge.
> >
> > El lun., 6 nov. 2017 a las 17:31, Rajini Sivaram (<
> rajinisiva...@gmail.com
> > >)
> > escribió:
> >
> > > +1 (binding)
> > > Thanks for the KIP,  Jorge.
> > >
> > > Regards,
> > >
> > > Rajini
> > >
> > > On Tue, Oct 31, 2017 at 9:58 AM, Damian Guy 
> > wrote:
> > >
> > > > Thanks for the KIP - +1 (binding)
> > > >
> > > > On Mon, 23 Oct 2017 at 18:39 Guozhang Wang 
> wrote:
> > > >
> > > > > Thanks Jorge for driving this KIP! +1 (binding).
> > > > >
> > > > >
> > > > > Guozhang
> > > > >
> > > > > On Mon, Oct 16, 2017 at 2:11 PM, Bill Bejeck 
> > > wrote:
> > > > >
> > > > > > +1
> > > > > >
> > > > > > Thanks,
> > > > > > Bill
> > > > > >
> > > > > > On Fri, Oct 13, 2017 at 6:36 PM, Ted Yu 
> > wrote:
> > > > > >
> > > > > > > +1
> > > > > > >
> > > > > > > On Fri, Oct 13, 2017 at 3:32 PM, Matthias J. Sax <
> > > > > matth...@confluent.io>
> > > > > > > wrote:
> > > > > > >
> > > > > > > > +1
> > > > > > > >
> > > > > > > >
> > > > > > > >
> > > > > > > > On 9/11/17 3:04 PM, Jorge Esteban Quilcate Otoya wrote:
> > > > > > > > > Hi All,
> > > > > > > > >
> > > > > > > > > It seems that there is no further concern with the KIP-171.
> > > > > > > > > At this point we would like to start the voting process.
> > > > > > > > >
> > > > > > > > > The KIP can be found here:
> > > > > > > > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > > > > > > > 171+-+Extend+Consumer+Group+Reset+Offset+for+Stream+
> > Application
> > > > > > > > >
> > > > > > > > >
> > > > > > > > > Thanks!
> > > > > > > > >
> > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > > >
> > > > >
> > > > > --
> > > > > -- Guozhang
> > > > >
> > > >
> > >
> >
>


Build failed in Jenkins: kafka-trunk-jdk8 #2431

2018-02-22 Thread Apache Jenkins Server
See 


Changes:

[jason] MINOR: Refactor GroupMetadataManager cleanupGroupMetadata (#4504)

--
[...truncated 416.74 KB...]
kafka.utils.json.JsonValueTest > testJsonValueToString PASSED

kafka.utils.json.JsonValueTest > testAsJsonObjectOption STARTED

kafka.utils.json.JsonValueTest > testAsJsonObjectOption PASSED

kafka.utils.json.JsonValueTest > testAsJsonArrayOption STARTED

kafka.utils.json.JsonValueTest > testAsJsonArrayOption PASSED

kafka.utils.json.JsonValueTest > testAsJsonArray STARTED

kafka.utils.json.JsonValueTest > testAsJsonArray PASSED

kafka.utils.json.JsonValueTest > testJsonValueHashCode STARTED

kafka.utils.json.JsonValueTest > testJsonValueHashCode PASSED

kafka.utils.json.JsonValueTest > testDecodeInt STARTED

kafka.utils.json.JsonValueTest > testDecodeInt PASSED

kafka.utils.json.JsonValueTest > testDecodeMap STARTED

kafka.utils.json.JsonValueTest > testDecodeMap PASSED

kafka.utils.json.JsonValueTest > testDecodeSeq STARTED

kafka.utils.json.JsonValueTest > testDecodeSeq PASSED

kafka.utils.json.JsonValueTest > testJsonObjectGet STARTED

kafka.utils.json.JsonValueTest > testJsonObjectGet PASSED

kafka.utils.json.JsonValueTest > testJsonValueEquals STARTED

kafka.utils.json.JsonValueTest > testJsonValueEquals PASSED

kafka.utils.json.JsonValueTest > testJsonArrayIterator STARTED

kafka.utils.json.JsonValueTest > testJsonArrayIterator PASSED

kafka.utils.json.JsonValueTest > testJsonObjectApply STARTED

kafka.utils.json.JsonValueTest > testJsonObjectApply PASSED

kafka.utils.json.JsonValueTest > testDecodeBoolean STARTED

kafka.utils.json.JsonValueTest > testDecodeBoolean PASSED

kafka.utils.PasswordEncoderTest > testEncoderConfigChange STARTED

kafka.utils.PasswordEncoderTest > testEncoderConfigChange PASSED

kafka.utils.PasswordEncoderTest > testEncodeDecodeAlgorithms STARTED

kafka.utils.PasswordEncoderTest > testEncodeDecodeAlgorithms PASSED

kafka.utils.PasswordEncoderTest > testEncodeDecode STARTED

kafka.utils.PasswordEncoderTest > testEncodeDecode PASSED

kafka.producer.AsyncProducerTest > testFailedSendRetryLogic STARTED

kafka.producer.AsyncProducerTest > testFailedSendRetryLogic PASSED

kafka.producer.AsyncProducerTest > testQueueTimeExpired STARTED

kafka.producer.AsyncProducerTest > testQueueTimeExpired PASSED

kafka.producer.AsyncProducerTest > testPartitionAndCollateEvents STARTED

kafka.producer.AsyncProducerTest > testPartitionAndCollateEvents PASSED

kafka.producer.AsyncProducerTest > testBatchSize STARTED

kafka.producer.AsyncProducerTest > testBatchSize PASSED

kafka.producer.AsyncProducerTest > testSerializeEvents STARTED

kafka.producer.AsyncProducerTest > testSerializeEvents PASSED

kafka.producer.AsyncProducerTest > testProducerQueueSize STARTED

kafka.producer.AsyncProducerTest > testProducerQueueSize PASSED

kafka.producer.AsyncProducerTest > testRandomPartitioner STARTED

kafka.producer.AsyncProducerTest > testRandomPartitioner PASSED

kafka.producer.AsyncProducerTest > testInvalidConfiguration STARTED

kafka.producer.AsyncProducerTest > testInvalidConfiguration PASSED

kafka.producer.AsyncProducerTest > testInvalidPartition STARTED

kafka.producer.AsyncProducerTest > testInvalidPartition PASSED

kafka.producer.AsyncProducerTest > testNoBroker STARTED

kafka.producer.AsyncProducerTest > testNoBroker PASSED

kafka.producer.AsyncProducerTest > testProduceAfterClosed STARTED

kafka.producer.AsyncProducerTest > testProduceAfterClosed PASSED

kafka.producer.AsyncProducerTest > testJavaProducer STARTED

kafka.producer.AsyncProducerTest > testJavaProducer PASSED

kafka.producer.AsyncProducerTest > testIncompatibleEncoder STARTED

kafka.producer.AsyncProducerTest > testIncompatibleEncoder PASSED

kafka.producer.SyncProducerTest > testReachableServer STARTED

kafka.producer.SyncProducerTest > testReachableServer PASSED

kafka.producer.SyncProducerTest > testMessageSizeTooLarge STARTED

kafka.producer.SyncProducerTest > testMessageSizeTooLarge PASSED

kafka.producer.SyncProducerTest > testNotEnoughReplicas STARTED

kafka.producer.SyncProducerTest > testNotEnoughReplicas PASSED

kafka.producer.SyncProducerTest > testMessageSizeTooLargeWithAckZero STARTED

kafka.producer.SyncProducerTest > testMessageSizeTooLargeWithAckZero PASSED

kafka.producer.SyncProducerTest > testProducerCanTimeout STARTED

kafka.producer.SyncProducerTest > testProducerCanTimeout PASSED

kafka.producer.SyncProducerTest > testProduceRequestWithNoResponse STARTED

kafka.producer.SyncProducerTest > testProduceRequestWithNoResponse PASSED

kafka.producer.SyncProducerTest > testEmptyProduceRequest STARTED

kafka.producer.SyncProducerTest > testEmptyProduceRequest PASSED

kafka.producer.SyncProducerTest > testProduceCorrectlyReceivesResponse STARTED

kafka.producer.SyncProducerTest > testProduceCorrectlyReceivesResponse PASSED

kafka.producer.ProducerTest > testSendToNewTopic 

Re: [VOTE] KIP-171 - Extend Consumer Group Reset Offset for Stream Application

2018-02-22 Thread Jason Gustafson
I know it's been a while since this vote passed, but I think we need to
reconsider the incompatible changes to the consumer reset tool.
Specifically, we have removed the --execute option without deprecating it
first, and we have changed the default behavior to execute rather than do a
dry run. The latter in particular seems dangerous since users who were
previously using the default behavior to view offsets will now suddenly
find the offsets already committed. As far as I can tell, this change was
done mostly for cosmetic reasons. Without a compelling reason, I think we
should err on the side of maintaining compatibility. At a minimum, if we
really want to break compatibility, we should wait for the next major
release.

Note that I have submitted a patch to revert this change here:
https://github.com/apache/kafka/pull/4611.

Thoughts?

Thanks,
Jason



On Tue, Nov 14, 2017 at 3:26 AM, Jorge Esteban Quilcate Otoya <
quilcate.jo...@gmail.com> wrote:

> Thanks to everyone for your feedback.
>
> KIP has been accepted and discussion is moved to PR.
>
> Cheers,
> Jorge.
>
> El lun., 6 nov. 2017 a las 17:31, Rajini Sivaram ( >)
> escribió:
>
> > +1 (binding)
> > Thanks for the KIP,  Jorge.
> >
> > Regards,
> >
> > Rajini
> >
> > On Tue, Oct 31, 2017 at 9:58 AM, Damian Guy 
> wrote:
> >
> > > Thanks for the KIP - +1 (binding)
> > >
> > > On Mon, 23 Oct 2017 at 18:39 Guozhang Wang  wrote:
> > >
> > > > Thanks Jorge for driving this KIP! +1 (binding).
> > > >
> > > >
> > > > Guozhang
> > > >
> > > > On Mon, Oct 16, 2017 at 2:11 PM, Bill Bejeck 
> > wrote:
> > > >
> > > > > +1
> > > > >
> > > > > Thanks,
> > > > > Bill
> > > > >
> > > > > On Fri, Oct 13, 2017 at 6:36 PM, Ted Yu 
> wrote:
> > > > >
> > > > > > +1
> > > > > >
> > > > > > On Fri, Oct 13, 2017 at 3:32 PM, Matthias J. Sax <
> > > > matth...@confluent.io>
> > > > > > wrote:
> > > > > >
> > > > > > > +1
> > > > > > >
> > > > > > >
> > > > > > >
> > > > > > > On 9/11/17 3:04 PM, Jorge Esteban Quilcate Otoya wrote:
> > > > > > > > Hi All,
> > > > > > > >
> > > > > > > > It seems that there is no further concern with the KIP-171.
> > > > > > > > At this point we would like to start the voting process.
> > > > > > > >
> > > > > > > > The KIP can be found here:
> > > > > > > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > > > > > > 171+-+Extend+Consumer+Group+Reset+Offset+for+Stream+
> Application
> > > > > > > >
> > > > > > > >
> > > > > > > > Thanks!
> > > > > > > >
> > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > > >
> > > >
> > > > --
> > > > -- Guozhang
> > > >
> > >
> >
>


[jira] [Created] (KAFKA-6584) Session expiration concurrent with ZooKeeper leadership failover may lead to broker registration failure

2018-02-22 Thread Chris Thunes (JIRA)
Chris Thunes created KAFKA-6584:
---

 Summary: Session expiration concurrent with ZooKeeper leadership 
failover may lead to broker registration failure
 Key: KAFKA-6584
 URL: https://issues.apache.org/jira/browse/KAFKA-6584
 Project: Kafka
  Issue Type: Bug
Affects Versions: 1.0.0
Reporter: Chris Thunes


It seems that an edge case exists which can lead to sessions "un-expiring" 
during a ZooKeeper leadership failover. Additional details can be found in 
ZOOKEEPER-2985.

This leads to a NODEXISTS error when attempting to re-create the ephemeral 
brokers/ids/\{id} node in ZkUtils.registerBrokerInZk. We experienced this issue 
on each node within a 3-node Kafka cluster running 1.0.0. All three nodes 
continued running (producers and consumers appeared unaffected), but none of 
the nodes were considered online and partition leadership could not be 
re-assigned.

I took a quick look at trunk and I believe the issue is still present, but has 
moved into KafkaZkClient.checkedEphemeralCreate which will [raise an 
error|https://github.com/apache/kafka/blob/90e0bbe/core/src/main/scala/kafka/zk/KafkaZkClient.scala#L1512]
 when it finds that the brokers/ids/\{id} node exists, but belongs to the old 
(believed expired) session.
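The tolerance the ticket asks for can be illustrated with a toy in-memory model (a sketch only: this stands in for ZooKeeper's session-owner check, not the actual KafkaZkClient code, and the class and method names here are invented):

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of ephemeral-node registration. 'ownerSession' plays the role
// of ZooKeeper's ephemeralOwner session id; everything here is simplified.
public class EphemeralRegistry {
    private final Map<String, Long> ownerSession = new HashMap<>();

    /**
     * Registers {@code path} for {@code session}. If the node already exists
     * but is owned by our previous (believed-expired) session, take it over
     * instead of failing, which is the tolerance this bug suggests the
     * client needs.
     */
    public boolean register(String path, long session, long previousSession) {
        Long owner = ownerSession.get(path);
        if (owner == null || owner == previousSession) {
            ownerSession.put(path, session);
            return true;
        }
        return false; // genuinely owned by a different, live session
    }
}
```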



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-6583) Metadata should include number of state stores for task

2018-02-22 Thread Richard Yu (JIRA)
Richard Yu created KAFKA-6583:
-

 Summary: Metadata should include number of state stores for task
 Key: KAFKA-6583
 URL: https://issues.apache.org/jira/browse/KAFKA-6583
 Project: Kafka
  Issue Type: Improvement
Reporter: Richard Yu


Currently, to balance clients more evenly, stateful tasks should be distributed 
in such a manner that they are spread equally. However, for the assignor to be 
aware of this during task assignment, the rebalance protocol metadata would also 
need to contain the number of state stores in each task. This would allow us to 
"weight" tasks during assignment. 
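With per-task store counts in the metadata, the assignor could balance by weight rather than by raw task count. A minimal greedy sketch of that idea (an illustration only, not Streams' actual assignor; all names here are invented):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class WeightedAssignor {

    /** Greedily hands each task to the currently lightest client,
     *  heaviest tasks first, where weight = number of state stores. */
    public static Map<String, List<String>> assign(Map<String, Integer> taskWeights,
                                                   List<String> clients) {
        Map<String, List<String>> assignment = new HashMap<>();
        Map<String, Integer> load = new HashMap<>();
        for (String c : clients) {
            assignment.put(c, new ArrayList<>());
            load.put(c, 0);
        }
        // Place heavy tasks first so small tasks can fill the gaps afterwards.
        taskWeights.entrySet().stream()
                .sorted((a, b) -> b.getValue() - a.getValue())
                .forEach(e -> {
                    String lightest = Collections.min(load.entrySet(),
                            Map.Entry.comparingByValue()).getKey();
                    assignment.get(lightest).add(e.getKey());
                    load.merge(lightest, e.getValue(), Integer::sum);
                });
        return assignment;
    }

    public static void main(String[] args) {
        Map<String, Integer> tasks = new LinkedHashMap<>();
        tasks.put("0_0", 3);  // a task with 3 state stores
        tasks.put("0_1", 1);
        tasks.put("1_0", 2);
        tasks.put("1_1", 1);
        System.out.println(assign(tasks, Arrays.asList("clientA", "clientB")));
    }
}
```

Without the weights, an assignor that only counts tasks could give one client both heavy tasks; with them, the 3-store and 2-store tasks land on different clients.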





RE: [DISCUSS]KIP-235 DNS alias and secured connections

2018-02-22 Thread Skrzypek, Jonathan
Hi,

Could anyone take a look at the pull request, so that, if it's OK, I can start a 
VOTE thread?

Regards,

Jonathan Skrzypek 

-Original Message-
From: Skrzypek, Jonathan [Tech] 
Sent: 09 February 2018 13:57
To: 'dev@kafka.apache.org'
Subject: RE: [DISCUSS]KIP-235 DNS alias and secured connections

Hi,

I have raised a PR https://github.com/apache/kafka/pull/4485 with suggested 
code changes.
There are, however, reported failures; I don't understand what the issue is, 
since the tests are passing.
Any ideas?


Jonathan Skrzypek 

-Original Message-
From: Skrzypek, Jonathan [Tech]
Sent: 29 January 2018 16:51
To: dev@kafka.apache.org
Subject: RE: [DISCUSS]KIP-235 DNS alias and secured connections

Hi,

Yes I believe this might address what you're seeing as well.

Jonathan Skrzypek
Middleware Engineering
Messaging Engineering
Goldman Sachs International

-Original Message-
From: Stephane Maarek [mailto:steph...@simplemachines.com.au]
Sent: 06 December 2017 10:43
To: dev@kafka.apache.org
Subject: RE: [DISCUSS]KIP-235 DNS alias and secured connections

Hi Jonathan

I think this will be very useful. I reported something similar here:
https://urldefense.proofpoint.com/v2/url?u=https-3A__issues.apache.org_jira_browse_KAFKA-2D4781=DwIFaQ=7563p3e2zaQw0AB1wrFVgyagb2IE5rTZOYPxLxfZlX4=nNmJlu1rR_QFAPdxGlafmDu9_r6eaCbPOM0NM1EHo-E=3R1dVnw5Ttyz1YbVIMSRNMz2gjWsQmbTNXl63kwXvKo=MywacMwh18eVH_NvLY6Ffhc3CKMh43Tai3WMUf9PsjM=
 

Please confirm your KIP will address it?

Stéphane

On 6 Dec. 2017 8:20 pm, "Skrzypek, Jonathan" 
wrote:

> True, amended the KIP, thanks.
>
> Jonathan Skrzypek
> Middleware Engineering
> Messaging Engineering
> Goldman Sachs International
>
>
> -Original Message-
> From: Tom Bentley [mailto:t.j.bent...@gmail.com]
> Sent: 05 December 2017 18:19
> To: dev@kafka.apache.org
> Subject: Re: [DISCUSS]KIP-235 DNS alias and secured connections
>
> Hi Jonathan,
>
> It might be worth mentioning in the KIP that this is necessary only 
> for
> *Kerberos* on SASL, and not other SASL mechanisms. Reading the JIRA it 
> makes sense, but I was confused up until that point.
>
> Cheers,
>
> Tom
>
> On 5 December 2017 at 17:53, Skrzypek, Jonathan 
> 
> wrote:
>
> > Hi,
> >
> > I would like to discuss a KIP I've submitted :
> > https://urldefense.proofpoint.com/v2/url?u=https-3A__cwiki.apache.or
> > g_
> > confluence_display_KAFKA_KIP-2D=DwIBaQ=7563p3e2zaQw0AB1wrFVgyagb
> > 2I
> > E5rTZOYPxLxfZlX4=nNmJlu1rR_QFAPdxGlafmDu9_r6eaCbPOM0NM1EHo-E=GWK
> > XA
> > ILbqxFU2j7LtoOx9MZ00uy_jJcGWWIG92CyAuc=fv5WAkOgLhVOmF4vhEzq_39CWnE
> > o0 q0AJbqhAuDFDT0=
> > 235%3A+Add+DNS+alias+support+for+secured+connection
> >
> > Feedback and suggestions welcome !
> >
> > Regards,
> > Jonathan Skrzypek
> > Middleware Engineering
> > Messaging Engineering
> > Goldman Sachs International
> > Christchurch Court - 10-15 Newgate Street London EC1A 7HD
> > Tel: +442070512977
> >
> >
>


Are there plans to migrate some/all of the command line tools to use the new AdminClient?

2018-02-22 Thread Sönke Liebau
I've dug around jira and the list of KIPs for a bit now, but could not
really find anything specific on plans to move the command line tools over
to the new AdminClient. Did I miss something or is that not currently
planned?

Most of the current command line tools require access to Zookeeper, which
becomes a bit of an issue once you enable Zookeeper ACLs, as you need to
kinit with a broker keytab to be allowed write access, which is somewhat of
a security concern. Also, if you want to firewall Zookeeper off from the
rest of the world, any management command would need to be run from a
cluster machine.
None of this is an actual issue, it just requires some additional effort
for cluster administration; however, in a larger corporate environment I
can't imagine this would go down well with security audit guys and related
persons.

Using the AdminClient, on the other hand, allows giving specific users the
right to create topics/ACLs etc., which is checked by the brokers and
requires no access to Zookeeper by anybody except the brokers.

Maybe we could add a --use-adminclient parameter to the command line tools
sort of similar to the --new-consumer parameter to keep the old
functionality while enabling us to slowly move things over to the
AdminClient implementation?

Best regards,
Sönke


Contributor

2018-02-22 Thread Sebastian Toader
Hi Kafka Dev team,

Can you please add me to the contributor list, as I would like to contribute to
the Kafka project?

My apache username: stoader


Thank you,
Sebastian


[jira] [Created] (KAFKA-6582) Partitions get underreplicated, with a single ISR, and don't recover. Other brokers do not take over and we need to manually restart the broker.

2018-02-22 Thread Jurriaan Pruis (JIRA)
Jurriaan Pruis created KAFKA-6582:
-

 Summary: Partitions get underreplicated, with a single ISR, and 
don't recover. Other brokers do not take over and we need to manually restart 
the broker.
 Key: KAFKA-6582
 URL: https://issues.apache.org/jira/browse/KAFKA-6582
 Project: Kafka
  Issue Type: Bug
  Components: network
Affects Versions: 1.0.0
 Environment: Ubuntu 16.04
Linux kafka04 4.4.0-109-generic #132-Ubuntu SMP Tue Jan 9 19:52:39 UTC 2018 
x86_64 x86_64 x86_64 GNU/Linux

java version "9.0.1"
Java(TM) SE Runtime Environment (build 9.0.1+11)
Java HotSpot(TM) 64-Bit Server VM (build 9.0.1+11, mixed mode) 

but also tried with the latest JVM 8 before with the same result.
Reporter: Jurriaan Pruis


Partitions get underreplicated, with a single ISR, and don't recover. Other 
brokers do not take over, and we need to manually restart the 'single ISR' 
broker (if you describe the partitions of a replicated topic, it is clear that 
some partitions are only in sync on this broker).

This bug resembles KAFKA-4477 a lot, but since that issue is marked as resolved 
this is probably something else but similar.

We have the same issue (or at least it looks pretty similar) on Kafka 1.0. 

Since upgrading to Kafka 1.0 in November 2017 we've had these issues (we've 
upgraded from Kafka 0.10.2.1).

This happens almost every 24-48 hours on a random broker. This is why we 
currently have a cronjob which restarts every broker every 24 hours. 

During this issue the ISR shows the following server log: 
{code:java}
[2018-02-20 12:02:08,342] WARN Attempting to send response via channel for 
which there is no open connection, connection id 
10.132.0.32:9092-10.14.148.20:56352-96708 (kafka.network.Processor)
[2018-02-20 12:02:08,364] WARN Attempting to send response via channel for 
which there is no open connection, connection id 
10.132.0.32:9092-10.14.150.25:54412-96715 (kafka.network.Processor)
[2018-02-20 12:02:08,349] WARN Attempting to send response via channel for 
which there is no open connection, connection id 
10.132.0.32:9092-10.14.149.18:35182-96705 (kafka.network.Processor)
[2018-02-20 12:02:08,379] WARN Attempting to send response via channel for 
which there is no open connection, connection id 
10.132.0.32:9092-10.14.150.25:54456-96717 (kafka.network.Processor)
[2018-02-20 12:02:08,448] WARN Attempting to send response via channel for 
which there is no open connection, connection id 
10.132.0.32:9092-10.14.159.20:36388-96720 (kafka.network.Processor)
[2018-02-20 12:02:08,683] WARN Attempting to send response via channel for 
which there is no open connection, connection id 
10.132.0.32:9092-10.14.157.110:41922-96740 (kafka.network.Processor)
{code}
Also on the ISR broker, the controller log shows this:
{code:java}
[2018-02-20 12:02:14,927] INFO [Controller-3-to-broker-3-send-thread]: 
Controller 3 connected to 10.132.0.32:9092 (id: 3 rack: null) for sending state 
change requests (kafka.controller.RequestSendThread)
[2018-02-20 12:02:14,927] INFO [Controller-3-to-broker-0-send-thread]: 
Controller 3 connected to 10.132.0.10:9092 (id: 0 rack: null) for sending state 
change requests (kafka.controller.RequestSendThread)
[2018-02-20 12:02:14,928] INFO [Controller-3-to-broker-1-send-thread]: 
Controller 3 connected to 10.132.0.12:9092 (id: 1 rack: null) for sending state 
change requests (kafka.controller.RequestSendThread){code}
And the non-ISR brokers show this kind of error:

 
{code:java}
2018-02-20 12:02:29,204] WARN [ReplicaFetcher replicaId=1, leaderId=3, 
fetcherId=0] Error in fetch to broker 3, request (type=FetchRequest, 
replicaId=1, maxWait=500, minBytes=1, maxBytes=10485760, 
fetchData={..}, isolationLevel=READ_UNCOMMITTED) 
(kafka.server.ReplicaFetcherThread)
java.io.IOException: Connection to 3 was disconnected before the response was 
read
 at 
org.apache.kafka.clients.NetworkClientUtils.sendAndReceive(NetworkClientUtils.java:95)
 at 
kafka.server.ReplicaFetcherBlockingSend.sendRequest(ReplicaFetcherBlockingSend.scala:96)
 at kafka.server.ReplicaFetcherThread.fetch(ReplicaFetcherThread.scala:205)
 at kafka.server.ReplicaFetcherThread.fetch(ReplicaFetcherThread.scala:41)
 at 
kafka.server.AbstractFetcherThread.processFetchRequest(AbstractFetcherThread.scala:149)
 at kafka.server.AbstractFetcherThread.doWork(AbstractFetcherThread.scala:113)
 at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:64)
{code}
 





Jenkins build is back to normal : kafka-trunk-jdk8 #2430

2018-02-22 Thread Apache Jenkins Server
See 




Jenkins build is back to normal : kafka-trunk-jdk7 #3205

2018-02-22 Thread Apache Jenkins Server
See 




[jira] [Resolved] (KAFKA-6577) Connect standalone SASL file source and sink test fails without explanation

2018-02-22 Thread Damian Guy (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6577?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Damian Guy resolved KAFKA-6577.
---
   Resolution: Fixed
Fix Version/s: 1.2.0

Issue resolved by pull request 4610
[https://github.com/apache/kafka/pull/4610]

> Connect standalone SASL file source and sink test fails without explanation
> ---
>
> Key: KAFKA-6577
> URL: https://issues.apache.org/jira/browse/KAFKA-6577
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect, system tests
>Affects Versions: 1.1.0
>Reporter: Randall Hauch
>Assignee: Randall Hauch
>Priority: Blocker
> Fix For: 1.2.0, 1.1.0
>
>
> The 
> {{tests/kafkatest/tests/connect/connect_test.py::ConnectStandaloneFileTest.test_file_source_and_sink}}
>  test is failing with the SASL configuration without a sufficient 
> explanation. During the test, the Connect worker fails to start, but the 
> Connect log contains no useful information.





[jira] [Created] (KAFKA-6581) ConsumerGroupCommand hangs if even one of the partitions is unavailable

2018-02-22 Thread Sahil Aggarwal (JIRA)
Sahil Aggarwal created KAFKA-6581:
-

 Summary: ConsumerGroupCommand hangs if even one of the partitions 
is unavailable
 Key: KAFKA-6581
 URL: https://issues.apache.org/jira/browse/KAFKA-6581
 Project: Kafka
  Issue Type: Bug
  Components: admin, core, tools
Affects Versions: 0.10.0.0
Reporter: Sahil Aggarwal
 Fix For: 0.10.0.2


ConsumerGroupCommand.scala uses a consumer internally to get the position for 
each partition, but if a partition is unavailable, the call 
consumer.position(topicPartition) will block indefinitely.
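The defensive pattern a fix would need amounts to bounding every potentially blocking call with a timeout. A generic sketch using only java.util.concurrent (illustrative, not the actual consumer API; `TimeoutGuard` and `callWithTimeout` are invented names):

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class TimeoutGuard {

    /** Runs a potentially blocking call, failing with TimeoutException
     *  after timeoutMs instead of hanging the tool forever. */
    public static <T> T callWithTimeout(Callable<T> call, long timeoutMs) throws Exception {
        ExecutorService exec = Executors.newSingleThreadExecutor();
        try {
            Future<T> future = exec.submit(call);
            return future.get(timeoutMs, TimeUnit.MILLISECONDS);
        } finally {
            exec.shutdownNow(); // interrupt the call if it is still stuck
        }
    }

    public static void main(String[] args) throws Exception {
        // A call that answers promptly goes through unchanged.
        System.out.println(callWithTimeout(() -> 42, 1_000));
        // A call that never returns now fails fast instead of hanging.
        try {
            callWithTimeout(() -> { Thread.sleep(60_000); return -1; }, 200);
        } catch (TimeoutException e) {
            System.out.println("timed out after 200 ms");
        }
    }
}
```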


