[kafka] branch 2.5 updated: KAFKA-9577; SaslClientAuthenticator incorrectly negotiates SASL_HANDSHAKE version (#8142)

2020-02-21 Thread jgus
This is an automated email from the ASF dual-hosted git repository.

jgus pushed a commit to branch 2.5
in repository https://gitbox.apache.org/repos/asf/kafka.git


The following commit(s) were added to refs/heads/2.5 by this push:
 new c6cbdf2  KAFKA-9577; SaslClientAuthenticator incorrectly negotiates 
SASL_HANDSHAKE version (#8142)
c6cbdf2 is described below

commit c6cbdf2be85088d043e40c794000dcdbdac008d9
Author: Lucas Bradstreet 
AuthorDate: Fri Feb 21 21:49:11 2020 -0800

KAFKA-9577; SaslClientAuthenticator incorrectly negotiates SASL_HANDSHAKE 
version (#8142)

The SaslClientAuthenticator incorrectly negotiates the supported SaslHandshakeRequest
version and uses the maximum version supported by the broker, whether or not the
client supports it.

This bug was exposed by a recent version bump in 
https://github.com/apache/kafka/commit/0a2569e2b9907a1217dd50ccbc320f8ad0b42fd0.

This PR rolls back the recent SaslHandshake[Request,Response] bump, fixes 
the version negotiation, and adds a test to prevent anyone from accidentally 
bumping the version without a workaround such as a new ApiKey. The existing key 
will be difficult to support for clients < 2.5 due to the incorrect negotiation.
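
For context on the fix: version negotiation should pick the highest SASL_HANDSHAKE
version supported by both sides, not simply the broker's maximum. A minimal sketch of
that idea (a hypothetical helper for illustration, not the exact patch; the client's
supported range would come from its own knowledge of ApiKeys.SASL_HANDSHAKE):

    // import org.apache.kafka.common.errors.UnsupportedVersionException;
    // brokerMaxVersion comes from the broker's ApiVersionsResponse; clientMinVersion and
    // clientMaxVersion are the oldest/latest handshake versions this client understands.
    private static short negotiateHandshakeVersion(short brokerMaxVersion,
                                                   short clientMinVersion,
                                                   short clientMaxVersion) {
        // Use the highest version supported by both the client and the broker.
        short negotiated = (short) Math.min(brokerMaxVersion, clientMaxVersion);
        if (negotiated < clientMinVersion)
            throw new UnsupportedVersionException("Broker supports SASL_HANDSHAKE only up to version "
                + brokerMaxVersion + ", but this client requires at least " + clientMinVersion);
        return negotiated;
    }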

Reviewers: Ron Dagostino, Rajini Sivaram, Colin P. McCabe, Jason Gustafson
---
 .../authenticator/SaslClientAuthenticator.java | 25 +---
 .../common/message/SaslHandshakeRequest.json   |  7 +-
 .../common/message/SaslHandshakeResponse.json  |  7 +-
 .../authenticator/SaslAuthenticatorTest.java   | 74 +-
 4 files changed, 96 insertions(+), 17 deletions(-)

diff --git a/clients/src/main/java/org/apache/kafka/common/security/authenticator/SaslClientAuthenticator.java b/clients/src/main/java/org/apache/kafka/common/security/authenticator/SaslClientAuthenticator.java
index 6784a01..e972da1 100644
--- a/clients/src/main/java/org/apache/kafka/common/security/authenticator/SaslClientAuthenticator.java
+++ b/clients/src/main/java/org/apache/kafka/common/security/authenticator/SaslClientAuthenticator.java
@@ -133,6 +133,8 @@ public class SaslClientAuthenticator implements Authenticator {
 private RequestHeader currentRequestHeader;
 // Version of SaslAuthenticate request/responses
 private short saslAuthenticateVersion;
+// Version of SaslHandshake request/responses
+private short saslHandshakeVersion;
 
 public SaslClientAuthenticator(Map configs,
AuthenticateCallbackHandler callbackHandler,
@@ -213,13 +215,13 @@ public class SaslClientAuthenticator implements Authenticator {
 if (apiVersionsResponse == null)
 break;
 else {
-saslAuthenticateVersion(apiVersionsResponse);
+setSaslAuthenticateAndHandshakeVersions(apiVersionsResponse);
 reauthInfo.apiVersionsResponseReceivedFromBroker = apiVersionsResponse;
 setSaslState(SaslState.SEND_HANDSHAKE_REQUEST);
 // Fall through to send handshake request with the latest supported version
 }
 case SEND_HANDSHAKE_REQUEST:
-sendHandshakeRequest(reauthInfo.apiVersionsResponseReceivedFromBroker);
+sendHandshakeRequest(saslHandshakeVersion);
 setSaslState(SaslState.RECEIVE_HANDSHAKE_RESPONSE);
 break;
 case RECEIVE_HANDSHAKE_RESPONSE:
@@ -236,11 +238,11 @@ public class SaslClientAuthenticator implements Authenticator {
 setSaslState(SaslState.INTERMEDIATE);
 break;
 case REAUTH_PROCESS_ORIG_APIVERSIONS_RESPONSE:
-saslAuthenticateVersion(reauthInfo.apiVersionsResponseFromOriginalAuthentication);
+setSaslAuthenticateAndHandshakeVersions(reauthInfo.apiVersionsResponseFromOriginalAuthentication);
 setSaslState(SaslState.REAUTH_SEND_HANDSHAKE_REQUEST); // Will set immediately
 // Fall through to send handshake request with the latest supported version
 case REAUTH_SEND_HANDSHAKE_REQUEST:
-sendHandshakeRequest(reauthInfo.apiVersionsResponseFromOriginalAuthentication);
+sendHandshakeRequest(saslHandshakeVersion);
 setSaslState(SaslState.REAUTH_RECEIVE_HANDSHAKE_OR_OTHER_RESPONSE);
 break;
 case REAUTH_RECEIVE_HANDSHAKE_OR_OTHER_RESPONSE:
@@ -285,9 +287,8 @@ public class SaslClientAuthenticator implements Authenticator {
 }
 }
 
-private void sendHandshakeRequest(ApiVersionsResponse apiVersionsResponse) throws IOException {
-SaslHandshakeRequest handshakeRequest = createSaslHandshakeRequest(
-apiVersionsResponse.apiVersion(ApiKeys.SASL_HANDSHAKE.id).maxVersion());
+private void sendHandshakeRequest(short version) throws IOException {
+

[kafka] branch trunk updated (97d107a -> 1a8dcff)

2020-02-21 Thread jgus
This is an automated email from the ASF dual-hosted git repository.

jgus pushed a change to branch trunk
in repository https://gitbox.apache.org/repos/asf/kafka.git.


from 97d107a  KAFKA-9441: Add internal TransactionManager (#8105)
 add 1a8dcff  KAFKA-9577; SaslClientAuthenticator incorrectly negotiates 
SASL_HANDSHAKE version (#8142)

No new revisions were added by this update.

Summary of changes:
 .../authenticator/SaslClientAuthenticator.java | 25 +---
 .../common/message/SaslHandshakeRequest.json   |  7 +-
 .../common/message/SaslHandshakeResponse.json  |  7 +-
 .../authenticator/SaslAuthenticatorTest.java   | 74 +-
 4 files changed, 96 insertions(+), 17 deletions(-)



[kafka] branch trunk updated (bbfecae -> 97d107a)

2020-02-21 Thread mjsax
This is an automated email from the ASF dual-hosted git repository.

mjsax pushed a change to branch trunk
in repository https://gitbox.apache.org/repos/asf/kafka.git.


from bbfecae  MINOR: Document endpoints for connector topic tracking 
(KIP-558)
 add 97d107a  KAFKA-9441: Add internal TransactionManager (#8105)

No new revisions were added by this update.

Summary of changes:
 .../kafka/clients/producer/MockProducer.java   | 103 ++-
 .../kafka/clients/producer/MockProducerTest.java   |   8 +
 .../streams/errors/TaskMigratedException.java  |   2 +-
 .../processor/internals/RecordCollectorImpl.java   | 319 +++--
 .../processor/internals/StoreChangelogReader.java  |  16 +-
 .../streams/processor/internals/StreamTask.java|  15 +-
 .../streams/processor/internals/StreamThread.java  | 132 ++--
 .../processor/internals/StreamsProducer.java   | 227 ++
 .../streams/processor/internals/TaskManager.java   |  40 +-
 .../org/apache/kafka/streams/KafkaStreamsTest.java |   8 +-
 .../internals/ProcessorStateManagerTest.java   |   6 +-
 .../processor/internals/RecordCollectorTest.java   | 797 +++--
 .../processor/internals/StreamThreadTest.java  |  61 +-
 .../processor/internals/StreamsProducerTest.java   | 638 +
 .../processor/internals/TaskManagerTest.java   |   3 +-
 .../streams/state/KeyValueStoreTestDriver.java |  13 +-
 .../StreamThreadStateStoreProviderTest.java|  23 +-
 .../org/apache/kafka/test/MockClientSupplier.java  |   2 +-
 .../org/apache/kafka/test/MockKeyValueStore.java   |  11 +-
 .../kafka/test/MockKeyValueStoreBuilder.java   |   4 +-
 .../apache/kafka/streams/TopologyTestDriver.java   | 143 ++--
 21 files changed, 1615 insertions(+), 956 deletions(-)
 create mode 100644 
streams/src/main/java/org/apache/kafka/streams/processor/internals/StreamsProducer.java
 create mode 100644 
streams/src/test/java/org/apache/kafka/streams/processor/internals/StreamsProducerTest.java



[kafka] branch trunk updated (003dce5 -> bbfecae)

2020-02-21 Thread ewencp
This is an automated email from the ASF dual-hosted git repository.

ewencp pushed a change to branch trunk
in repository https://gitbox.apache.org/repos/asf/kafka.git.


from 003dce5  MINOR: Standby task commit needed when offsets updated (#8146)
 add bbfecae  MINOR: Document endpoints for connector topic tracking 
(KIP-558)

No new revisions were added by this update.

Summary of changes:
 docs/connect.html | 6 ++
 1 file changed, 6 insertions(+)



[kafka] branch 2.5 updated: MINOR: Document endpoints for connector topic tracking (KIP-558)

2020-02-21 Thread ewencp
This is an automated email from the ASF dual-hosted git repository.

ewencp pushed a commit to branch 2.5
in repository https://gitbox.apache.org/repos/asf/kafka.git


The following commit(s) were added to refs/heads/2.5 by this push:
 new 972119e  MINOR: Document endpoints for connector topic tracking 
(KIP-558)
972119e is described below

commit 972119e34c76eb9c936b698b622692002e459402
Author: Konstantine Karantasis 
AuthorDate: Fri Feb 21 12:25:35 2020 -0800

MINOR: Document endpoints for connector topic tracking (KIP-558)

Update the site documentation to include the endpoints introduced with 
KIP-558 and a short paragraph on how this feature is used in Connect.
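
The two endpoints added by KIP-558 can be exercised with any HTTP client. A minimal
sketch using Java 11's built-in HttpClient, assuming (hypothetically) a Connect worker
listening on localhost:8083 and a connector named "my-connector":

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class ConnectorTopicsExample {
        public static void main(String[] args) throws Exception {
            HttpClient client = HttpClient.newHttpClient();
            String base = "http://localhost:8083/connectors/my-connector";

            // GET /connectors/{name}/topics: the set of topics the connector has used
            // since it was created or since its active topics were last reset.
            HttpRequest get = HttpRequest.newBuilder(URI.create(base + "/topics")).GET().build();
            System.out.println(client.send(get, HttpResponse.BodyHandlers.ofString()).body());

            // PUT /connectors/{name}/topics/reset: empty the connector's set of active topics.
            HttpRequest reset = HttpRequest.newBuilder(URI.create(base + "/topics/reset"))
                    .PUT(HttpRequest.BodyPublishers.noBody())
                    .build();
            System.out.println(client.send(reset, HttpResponse.BodyHandlers.ofString()).statusCode());
        }
    }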

Author: Konstantine Karantasis 

Reviewers: Toby Drake, Ewen Cheslack-Postava


Closes #8148 from kkonstantine/kip-558-docs

(cherry picked from commit bbfecaef725456f648f03530d26a5395042966fa)
Signed-off-by: Ewen Cheslack-Postava 
---
 docs/connect.html | 6 ++
 1 file changed, 6 insertions(+)

diff --git a/docs/connect.html b/docs/connect.html
index a92bb04..473569c 100644
--- a/docs/connect.html
+++ b/docs/connect.html
@@ -239,6 +239,8 @@
 POST /connectors/{name}/restart - restart a connector 
(typically because it has failed)
 POST /connectors/{name}/tasks/{taskId}/restart - 
restart an individual task (typically because it has failed)
 DELETE /connectors/{name} - delete a connector, 
halting all tasks and deleting its configuration
+GET /connectors/{name}/topics - get the set of topics 
that a specific connector is using since the connector was created or since a 
request to reset its set of active topics was issued
+PUT /connectors/{name}/topics/reset - send a request 
to empty the set of active topics of a connector
 
 
 Kafka Connect also provides a REST API for getting information about 
connector plugins:
@@ -577,6 +579,10 @@
 
 
 
+Starting with 2.5.0, Kafka Connect uses the 
status.storage.topic to also store information related to the 
topics that each connector is using. Connect Workers use these per-connector 
topic status updates to respond to requests to the REST endpoint GET 
/connectors/{name}/topics by returning the set of topic names that a 
connector is using. A request to the REST endpoint PUT 
/connectors/{name}/topics/reset resets the set of active topics for a 
con [...]
+
+
+
 It's sometimes useful to temporarily stop the message processing of a 
connector. For example, if the remote system is undergoing maintenance, it 
would be preferable for source connectors to stop polling it for new data 
instead of filling logs with exception spam. For this use case, Connect offers 
a pause/resume API. While a source connector is paused, Connect will stop 
polling it for additional records. While a sink connector is paused, Connect 
will stop pushing new messages to it. T [...]
 
 



[kafka] branch trunk updated (84c4025 -> 003dce5)

2020-02-21 Thread guozhang
This is an automated email from the ASF dual-hosted git repository.

guozhang pushed a change to branch trunk
in repository https://gitbox.apache.org/repos/asf/kafka.git.


from 84c4025  KAFKA-9206; Throw KafkaException on CORRUPT_MESSAGE error in 
Fetch response (#8111)
 add 003dce5  MINOR: Standby task commit needed when offsets updated (#8146)

No new revisions were added by this update.

Summary of changes:
 .../streams/processor/internals/StandbyTask.java   |  8 +-
 .../processor/internals/StandbyTaskTest.java   | 33 ++
 2 files changed, 40 insertions(+), 1 deletion(-)



[kafka] branch 2.5 updated: MINOR: Improve EOS example exception handling (#8052)

2020-02-21 Thread guozhang
This is an automated email from the ASF dual-hosted git repository.

guozhang pushed a commit to branch 2.5
in repository https://gitbox.apache.org/repos/asf/kafka.git


The following commit(s) were added to refs/heads/2.5 by this push:
 new e2f9f08  MINOR: Improve EOS example exception handling (#8052)
e2f9f08 is described below

commit e2f9f08d1c735e89e7de46e5af0ebfbe771de473
Author: Boyang Chen 
AuthorDate: Thu Feb 20 09:59:09 2020 -0800

MINOR: Improve EOS example exception handling (#8052)

The current EOS example mixes fatal and non-fatal error handling. This 
patch fixes this problem and simplifies the example.
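
For reference, the fatal versus non-fatal split comes from the producer's transactional
API: ProducerFencedException, OutOfOrderSequenceException and AuthorizationException mean
the producer must be closed, while other KafkaExceptions can be handled by aborting the
transaction and retrying. A minimal sketch of that pattern, assuming an already-configured
transactional producer and a ProducerRecord named record:

    producer.initTransactions();
    try {
        producer.beginTransaction();
        producer.send(record);
        producer.commitTransaction();
    } catch (ProducerFencedException | OutOfOrderSequenceException | AuthorizationException e) {
        // Fatal: this producer instance can no longer be used.
        producer.close();
    } catch (KafkaException e) {
        // Non-fatal: abort the in-flight transaction and retry from the last committed state.
        producer.abortTransaction();
    }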

Reviewers: Jason Gustafson 
---
 examples/README|  12 +-
 .../src/main/java/kafka/examples/Consumer.java |   3 +
 .../examples/ExactlyOnceMessageProcessor.java  | 121 -
 .../kafka/examples/KafkaConsumerProducerDemo.java  |   3 +-
 .../java/kafka/examples/KafkaExactlyOnceDemo.java  |  32 +++---
 .../main/java/kafka/examples/KafkaProperties.java  |   5 -
 6 files changed, 75 insertions(+), 101 deletions(-)

diff --git a/examples/README b/examples/README
index 2efe71a..bff6cd3 100644
--- a/examples/README
+++ b/examples/README
@@ -6,10 +6,8 @@ To run the demo:
2. For simple consumer demo, `run bin/java-simple-consumer-demo.sh`
3. For unlimited sync-producer-consumer run, `run 
bin/java-producer-consumer-demo.sh sync`
4. For unlimited async-producer-consumer run, `run 
bin/java-producer-consumer-demo.sh`
-   5. For standalone mode exactly once demo run, `run bin/exactly-once-demo.sh 
standaloneMode 6 3 5`,
-  this means we are starting 3 EOS instances with 6 topic partitions and 
5 pre-populated records
-   6. For group mode exactly once demo run, `run bin/exactly-once-demo.sh 
groupMode 6 3 5`,
-  this means the same as the standalone demo, except consumers are using 
subscription mode.
-   7. Some notes for exactly once demo:
-  7.1. The Kafka server has to be on broker version 2.5 or higher to be 
able to run group mode.
-  7.2. You could also use Intellij to run the example directly by 
configuring parameters as "Program arguments"
+   5. For exactly once demo run, `run bin/exactly-once-demo.sh 6 3 5`,
+  this means we are starting 3 EOS instances with 6 topic partitions and 
5 pre-populated records.
+   6. Some notes for exactly once demo:
+  6.1. The Kafka server has to be on broker version 2.5 or higher.
+  6.2. You could also use Intellij to run the example directly by 
configuring parameters as "Program arguments"
diff --git a/examples/src/main/java/kafka/examples/Consumer.java b/examples/src/main/java/kafka/examples/Consumer.java
index 19cb67c..d748832 100644
--- a/examples/src/main/java/kafka/examples/Consumer.java
+++ b/examples/src/main/java/kafka/examples/Consumer.java
@@ -24,6 +24,7 @@ import org.apache.kafka.clients.consumer.KafkaConsumer;
 
 import java.time.Duration;
 import java.util.Collections;
+import java.util.Optional;
 import java.util.Properties;
 import java.util.concurrent.CountDownLatch;
 
@@ -37,6 +38,7 @@ public class Consumer extends ShutdownableThread {
 
 public Consumer(final String topic,
 final String groupId,
+final Optional instanceId,
 final boolean readCommitted,
 final int numMessageToConsume,
 final CountDownLatch latch) {
@@ -45,6 +47,7 @@ public class Consumer extends ShutdownableThread {
 Properties props = new Properties();
 props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, 
KafkaProperties.KAFKA_SERVER_URL + ":" + KafkaProperties.KAFKA_SERVER_PORT);
 props.put(ConsumerConfig.GROUP_ID_CONFIG, groupId);
+instanceId.ifPresent(id -> 
props.put(ConsumerConfig.GROUP_INSTANCE_ID_CONFIG, id));
 props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "true");
 props.put(ConsumerConfig.AUTO_COMMIT_INTERVAL_MS_CONFIG, "1000");
 props.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, "3");
diff --git a/examples/src/main/java/kafka/examples/ExactlyOnceMessageProcessor.java b/examples/src/main/java/kafka/examples/ExactlyOnceMessageProcessor.java
index 482e442..8f31b19 100644
--- a/examples/src/main/java/kafka/examples/ExactlyOnceMessageProcessor.java
+++ b/examples/src/main/java/kafka/examples/ExactlyOnceMessageProcessor.java
@@ -16,7 +16,6 @@
  */
 package kafka.examples;
 
-import org.apache.kafka.clients.consumer.CommitFailedException;
 import org.apache.kafka.clients.consumer.ConsumerRebalanceListener;
 import org.apache.kafka.clients.consumer.ConsumerRecord;
 import org.apache.kafka.clients.consumer.ConsumerRecords;
@@ -34,8 +33,8 @@ import java.util.ArrayList;
 import java.util.Collection;
 import java.util.Collections;
 import java.util.HashMap;
-import java.util.List;
 import java.util.Map;
+import java.util.Optional;
 import 

[kafka] branch 2.5 updated: KAFKA-9582: Do not abort transaction in unclean close (#8143)

2020-02-21 Thread guozhang
This is an automated email from the ASF dual-hosted git repository.

guozhang pushed a commit to branch 2.5
in repository https://gitbox.apache.org/repos/asf/kafka.git


The following commit(s) were added to refs/heads/2.5 by this push:
 new 121df46  KAFKA-9582: Do not abort transaction in unclean close (#8143)
121df46 is described below

commit 121df465fad0bc062537c027bf9ed4755112d10c
Author: Boyang Chen 
AuthorDate: Fri Feb 21 10:27:57 2020 -0800

KAFKA-9582: Do not abort transaction in unclean close (#8143)

To avoid hitting a fatal exception during an unclean close, we should avoid
calling abortTransaction().
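
The reasoning: during an unclean close the task's producer has typically been fenced
(another instance took over), and calling abortTransaction() on a fenced producer
surfaces a fatal exception. A rough sketch of the unclean-close shape, using hypothetical
names rather than the actual StreamTask fields:

    // Sketch only: on an unclean (dirty) close, skip abortTransaction() entirely and
    // just close the producer, swallowing client-side exceptions, since a fenced
    // producer would turn the abort into a fatal error.
    void closeProducerDirty(final org.apache.kafka.clients.producer.Producer<byte[], byte[]> producer) {
        try {
            producer.close();   // any in-flight transaction is abandoned, not aborted
        } catch (final RuntimeException swallowed) {
            // ignore: we are already on the unclean shutdown path
        }
    }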

Reviewers: John Roesler, Guozhang Wang

---
 .../streams/processor/internals/StreamTask.java| 49 ++
 .../processor/internals/StreamTaskTest.java| 12 +++---
 2 files changed, 18 insertions(+), 43 deletions(-)

diff --git a/streams/src/main/java/org/apache/kafka/streams/processor/internals/StreamTask.java b/streams/src/main/java/org/apache/kafka/streams/processor/internals/StreamTask.java
index 54da00d..9aa8e79 100644
--- a/streams/src/main/java/org/apache/kafka/streams/processor/internals/StreamTask.java
+++ b/streams/src/main/java/org/apache/kafka/streams/processor/internals/StreamTask.java
@@ -94,7 +94,6 @@ public class StreamTask extends AbstractTask implements 
ProcessorNodePunctuator
 private long idleStartTime;
 private Producer producer;
 private boolean commitRequested = false;
-private boolean transactionInFlight = false;
 
 private final String threadId;
 
@@ -294,7 +293,6 @@ public class StreamTask extends AbstractTask implements 
ProcessorNodePunctuator
 } catch (final ProducerFencedException | 
UnknownProducerIdException e) {
 throw new TaskMigratedException(this, e);
 }
-transactionInFlight = true;
 }
 
 processorContext.initialize();
@@ -522,10 +520,8 @@ public class StreamTask extends AbstractTask implements 
ProcessorNodePunctuator
 if (eosEnabled) {
 producer.sendOffsetsToTransaction(consumedOffsetsAndMetadata, 
applicationId);
 producer.commitTransaction();
-transactionInFlight = false;
 if (startNewTransaction) {
 producer.beginTransaction();
-transactionInFlight = true;
 }
 } else {
 consumer.commitSync(consumedOffsetsAndMetadata);
@@ -602,7 +598,7 @@ public class StreamTask extends AbstractTask implements 
ProcessorNodePunctuator
  */
 public void suspend() {
 log.debug("Suspending");
-suspend(true, false);
+suspend(true);
 }
 
 /**
@@ -618,8 +614,7 @@ public class StreamTask extends AbstractTask implements 
ProcessorNodePunctuator
  *   or if the task producer got fenced (EOS)
  */
 // visible for testing
-void suspend(final boolean clean,
- final boolean isZombie) {
+void suspend(final boolean clean) {
 // this is necessary because all partition times are reset to -1 
during close
 // we need to preserve the original partitions times before calling 
commit
 final Map partitionTimes = 
extractPartitionTimes();
@@ -640,14 +635,7 @@ public class StreamTask extends AbstractTask implements 
ProcessorNodePunctuator
 
 if (eosEnabled) {
 stateMgr.checkpoint(activeTaskCheckpointableOffsets());
-
-try {
-recordCollector.close();
-} catch (final RecoverableClientException e) {
-taskMigratedException = new TaskMigratedException(this, e);
-} finally {
-producer = null;
-}
+taskMigratedException = closeRecordCollector();
 }
 }
 if (taskMigratedException != null) {
@@ -662,37 +650,26 @@ public class StreamTask extends AbstractTask implements 
ProcessorNodePunctuator
 }
 
 if (eosEnabled) {
-maybeAbortTransactionAndCloseRecordCollector(isZombie);
+// Ignore any exceptions while closing the record collector, i.e. the task producer.
+closeRecordCollector();
 }
 }
 }
 
-private void maybeAbortTransactionAndCloseRecordCollector(final boolean isZombie) {
-if (!isZombie) {
-try {
-if (transactionInFlight) {
-producer.abortTransaction();
-}
-transactionInFlight = false;
-} catch (final ProducerFencedException ignore) {
-/* TODO
- * this should actually never happen atm as we guard the call 
to #abortTransaction
- * -> the reason for the guard is a "bug" in the 

[kafka] branch trunk updated (747ef08 -> 84c4025)

2020-02-21 Thread jgus
This is an automated email from the ASF dual-hosted git repository.

jgus pushed a change to branch trunk
in repository https://gitbox.apache.org/repos/asf/kafka.git.


from 747ef08  MINOR: Remove unwanted regexReplace on 
tests/kafkatest/__init__.py
 add 84c4025  KAFKA-9206; Throw KafkaException on CORRUPT_MESSAGE error in 
Fetch response (#8111)

No new revisions were added by this update.

Summary of changes:
 .../kafka/clients/consumer/internals/Fetcher.java  | 14 +++--
 .../kafka/common/requests/FetchResponse.java   |  2 ++
 .../clients/consumer/internals/FetcherTest.java| 24 ++
 3 files changed, 38 insertions(+), 2 deletions(-)



[kafka] branch trunk updated (b28aa4e -> 747ef08)

2020-02-21 Thread manikumar
This is an automated email from the ASF dual-hosted git repository.

manikumar pushed a change to branch trunk
in repository https://gitbox.apache.org/repos/asf/kafka.git.


from b28aa4e  KAFKA-9586: Fix errored json filename in ops documentation
 add 747ef08  MINOR: Remove unwanted regexReplace on 
tests/kafkatest/__init__.py

No new revisions were added by this update.

Summary of changes:
 release.py | 2 --
 1 file changed, 2 deletions(-)



[kafka-site] branch asf-site updated: new key for bbejeck (#255)

2020-02-21 Thread bbejeck
This is an automated email from the ASF dual-hosted git repository.

bbejeck pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/kafka-site.git


The following commit(s) were added to refs/heads/asf-site by this push:
 new b303840  new key for bbejeck (#255)
b303840 is described below

commit b30384019baed6318faaa5d7ea658aee14d4fbf2
Author: Bill Bejeck 
AuthorDate: Fri Feb 21 09:41:20 2020 -0500

new key for bbejeck (#255)

Merged #255 into asf-site
---
 KEYS | 108 ++-
 1 file changed, 55 insertions(+), 53 deletions(-)

diff --git a/KEYS b/KEYS
index b0b9dd3..337417d 100644
--- a/KEYS
+++ b/KEYS
@@ -1070,62 +1070,64 @@ 
gm+FU3WdD9oqC3/dDKZ7pFizUhijUZo+nfedhj+7+ZGPFrPX11TS+N3p2E863DGG
 gGDok4uiZw4HdrV8d3/JvahBXfQWH12KeoFmZi/pY/XiPZszEyPrdRVrNbRag/0=
 =V7qY
 -END PGP PUBLIC KEY BLOCK-
-pub   rsa4096 2019-02-23 [SC]
-  1BE3 316E 762D 17FF 4D2D  55B6 7D19 0980 87E7 3B62
-uid   [ultimate] Bill Bejeck (CODE SIGNING KEY) 
-sub   rsa4096 2019-02-23 [E]
+pub   rsa4096 2020-02-20 [SC]
+  DFB5ABA9CD50A02B5C2A511662A9813636302260
+uid   [ultimate] Bill Bejeck (CODE SIGNING KEY) 
+sig 362A9813636302260 2020-02-20  Bill Bejeck (CODE SIGNING KEY) 

+sub   rsa4096 2020-02-20 [E]
+sig  62A9813636302260 2020-02-20  Bill Bejeck (CODE SIGNING KEY) 

 
 -BEGIN PGP PUBLIC KEY BLOCK-
 
-mQINBFxxo3wBEACe7CV/KvBUovUy+TvtmrWCL+9t3dAnteudZiFMOARXn7WyUmww
-Yc0gsl+hYRoiebBxHqgJ3ER233DdPrZjabx7Q4gyDciIaX/Xgp53S+7KZe+IT1rr
-I+fuYeh7Xlvdetna38jwseXvVKXa3WJYUvdu92iFj5Pcaw290mXfE3i+Xm0idlcm
-tw6YImJecS/7JqyF5DpFLSgHEtqhGTR9cdP6zcOoj80RzNza0e8BlN+eOBNpr4Vj
-9rIfcw15IdhdAokYEQ6ek8dLFWP+XHZst81JeaokTzB4TrtIhyYudLwTnW2x6JOY
-QTmni4g0ysex7BKnTKbzCJYYlocTGbtlzE7ZE02KQOUmWplOoYDLPFb80DtdQwSM
-c7PEg0z9+zWgXiCeMbfU5Ji7KE7o74k5CASnVHJ7PBvTYapIOsmYzmBkjqxeFyMu
-5PBud5SH7nVnIC1X3iWJhJKBt9l/D6AbKf09GxpY0MJg5xe0aZBninKaozJf56Fb
-UaHJouHi5tfg16cTC8EvAsKCAPlDiYvqLq4dan7XiIIpe5FwBL/4kJkyeq/zbgBP
-JYPsKB7K0Z8LInuT3YMVOpnPS2jQz6HEoxBjMD+COmeTqy/eBJDLSFLnnD0ZXGus
-gWgoSs5ujcZIrJnpQpDpxFmlpmXrFkRb8AWgoQXfgzQy4x9BsBY/afaTawARAQAB
-tDJCaWxsIEJlamVjayAoQ09ERSBTSUdOSU5HIEtFWSkgPGJiZWplY2tAZ21haWwu
-Y29tPokCTgQTAQoAOBYhBBvjMW52LRf/TS1Vtn0ZCYCH5ztiBQJccaN8AhsDBQsJ
-CAcDBRUKCQgLBRYCAwEAAh4BAheAAAoJEH0ZCYCH5zti8m4P/3xip32e2d1E3fPa
-NB8FWaJlRhSFJCXAE5kxWGHmb0IYEg8HnYzMkX2Cri8pb3rIfZzIoBai9B4u7ADv
-S91zaLFqurD5e6UrmuJOGvHyDKemMzR29qdARDSKsP2T/5ugyb1n/YmBdV8ZZcnw
-sJQUkyZSgTFPatVLqynzrF3H4iExCIJi+5tm9tiuwIyInGSi9qK6cBzmplOz3EtA
-4q0lbJTqk9LQyQMgk2f5XBoftL++d+LPcilJr8EsKd6FB0svNGovXJr80csVI6gs
-dKi0TSeI94dVSLhpSqbWVBqdgo/rPdXAzzURm05vLkvtQoDWcfyYayOC8q3DmajI
-WoXts2myPMtXl4SfsrtlYz9xG6cybW2sKtAQSpp0P7T2V1B/+ATz2hKRnAn5FIUg
-7ehfqIoXqQEzwBeX3yDcb99Bj9svxGYInsKwHyqUe9H4piuSq96ynGm57nmuYIxY
-9iXGhAdCu26MwIS5Hmk/t89IWv4PmDjmtqe2ySN57BcRd1I9BlSbJGGQQrRoBmTY
-PkgvfV58jrUJq+jM3fKWWxSvrtFJ6OMv60ZaPNOnTmYYmye0tEAEp2RXD1JKnj4F
-ndrR859PBRnejfWOwLe0++cfq1pCttsjS2Mrbrz7m7rKKJwa48rkXC+Z0qr2BIpu
-Ak3rZ/NZtKje1b4PK7KLtvJuH5XuuQINBFxxo3wBEADFdsFBeWFODMHTon5k++GK
-o3EjQdNONCt8zzW3udzVpg6sJgTbuBh2+QRpjs6d4yVnfDmwq72FVAoiqGr4BflT
-KBpFyE0G5JFzLPJ++uc2iWRK0Wc+MNhRKOttvuAqzZZTuehZ8K+P4j2H3tOR0Y2S
-gL1jCiFgOQRhoKlrtAFs42acgivQijDrtULKuUpeVp6AXUHkuCQjVbHAhbwm5phw
-LDcJXhzBh8gQ3mFk59u35GBLoseiSucgcsVKvj5i0Um59JoAi7jkJ6dR54o/uxgg
-rpuOWngxljPUAqPXo0u2+1WJBfkgnqksx2K60IIlUpBuyriyU7OxTiBcoQd6NX+D
-LlkByHsbdUi4GnQo8NkAmIbTw3Vei74QnMgsMy/iFmsYmFZyB1hKK5M4KlTvB11h
-Gq61vpNVLrvTH+83oAdfm6i88wJGsXBXgL/0oU7vxxnNIBlnggEYt/RY93HjokBf
-KZHEAviAwqHQ2PefLI2EMcYJu1j+dvgjtW9FJMyDHZPiHL4ZUU6uuDhkc5kXP/ki
-G08oaNXIrQqMUBEhGkA2JzjksjJ0yRHm0oVXwSNqecTbPo0XvoSA0lh8EPyOiPFH
-c5kFOrOwB2+ZQXb61LxAHA2xEeCud3qOQBa8zCDYfireaBSicTFRtnBENbNoVFO+
-WRDETUqaGpjHLaV+i7RqjwARAQABiQI2BBgBCgAgFiEEG+MxbnYtF/9NLVW2fRkJ
-gIfnO2IFAlxxo3wCGwwACgkQfRkJgIfnO2IcSA//Waj2L1gTbbWxK3iZdZj1PiUh
-1+2j/OeqH7xZ6W/CXsQVyu4hm8pyHPVuOfji4Es9RX23IJq5aycEMTePNoX4Z1IN
-UiFHdBj2kQkEkMkWx9f4rVGB0PahT2DjbVd7mc92m4U1IrBL/ii9Y/X7IhOHuMBq
-jXmN4aw/jpQPx/1A7wzwBUkl5zCjWJz0oCu2xypFGpYqdvc6s2lXBeEoC3VDIiNb
-z+p4rkh1ZRNN/t8tof3xjevTb+gShwC845d2LlyKz2TM2L3ps7BmvjXnMf3GxtvA
-8FQN218OZVy2K8JErK4nGOsTgwWucDL4EPaxVgBD2TqHn3bTohljNH/9ElR1KIEe
-k2tQIDmXZCMIcQGxjgVEv5nQ4v0JlF3L1kua56M4lP3Nw1DvEBK4n5LULBYSsnta
-+BiIUO31gajwnuqCAk2PnLPwonXrLknq5hpjqXNQCbAsCov7STLrN/ItlqJjjXW4
-Y3fZTNOhZePnHUuUHxB8xlx2iXERMtawSyKyDr9t3GU4rCqfMdF1824/mCyqH/ME
-mMUourTyq0HdNTn0IgSLLTG6+Z5gU/lzS82BCzFX1lQ0ubbRPOjVbaJEkZCjGjqY
-HA2n6MDyGfIZA1X3CCN+AoSINMz5a3gC2n14q44RZJUSVgEzQ72ZR+gCVlcFV+Zy
-Y+Zqob82+zQ+EY/PtiM=
-=v46J
+mQINBF5PHNsBEACubVJQ5wQ4O/iBR12VJtwqoWiF8ML4Nn73dLtX+B7DgpB/CUgS
+dbF3wbYPgvvo1r6+BJFKwQnJftUrtTao5n+7Fj//necCBvxMCW5SQg8F6jO4piBq
+a2iTe2ZK2WXxoKxv5KMKLx9SqUSiTeJFae8oJCcp9u+hJtFBCAnk11e7IFPORC3q
+CXgVAekHH9wFimJ9w1xA9Xd6PYxn4RawamOMbVc7csSLvmrKbKi7Nv8MWaR/WnhB
+DNgU1fhTy4D02UCjxRgsRq28jDhAH3wOt4TOBblEMiP1sZSZNDwKfhRoEZtQgQCP
+fCag9ZasPOcSV1sFZCJF6pR+8+YbMe3RMP0987f6oKYX2jy06pc+BYuoy38oYw4B

[kafka] branch 2.5 updated: KAFKA-9586: Fix errored json filename in ops documentation

2020-02-21 Thread manikumar
This is an automated email from the ASF dual-hosted git repository.

manikumar pushed a commit to branch 2.5
in repository https://gitbox.apache.org/repos/asf/kafka.git


The following commit(s) were added to refs/heads/2.5 by this push:
 new a721e71  KAFKA-9586: Fix errored json filename in ops documentation
a721e71 is described below

commit a721e71bc21e06fd127ee8a396bd449eeab9666b
Author: Lee Dongjin 
AuthorDate: Fri Feb 21 18:49:11 2020 +0530

KAFKA-9586: Fix errored json filename in ops documentation

This PR is the counterpart of apache/kafka-site#253.

cc/ omkreddy

Author: Lee Dongjin 

Reviewers: Manikumar Reddy 

Closes #8149 from dongjinleekr/feature/KAFKA-9586

(cherry picked from commit b28aa4ece6e65a30dfb2009afffd39ba07b31374)
Signed-off-by: Manikumar Reddy 
---
 docs/ops.html | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/ops.html b/docs/ops.html
index 0b0e7ae..0fd0ba1 100644
--- a/docs/ops.html
+++ b/docs/ops.html
@@ -398,7 +398,7 @@
   }
   
   
-  The --verify option can be used with the tool to check the status of the 
partition reassignment. Note that the same expand-cluster-reassignment.json 
(used with the --execute option) should be used with the --verify option:
+  The --verify option can be used with the tool to check the status of the 
partition reassignment. Note that the same custom-reassignment.json (used with 
the --execute option) should be used with the --verify option:
   
   > bin/kafka-reassign-partitions.sh --zookeeper localhost:2181 
--reassignment-json-file custom-reassignment.json --verify
   Status of partition reassignment:



[kafka] branch trunk updated (d9b8b86 -> b28aa4e)

2020-02-21 Thread manikumar
This is an automated email from the ASF dual-hosted git repository.

manikumar pushed a change to branch trunk
in repository https://gitbox.apache.org/repos/asf/kafka.git.


from d9b8b86  KAFKA-9575: Mention ZooKeeper 3.5.7 upgrade
 add b28aa4e  KAFKA-9586: Fix errored json filename in ops documentation

No new revisions were added by this update.

Summary of changes:
 docs/ops.html | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)



[kafka] branch trunk updated (3b6573c -> d9b8b86)

2020-02-21 Thread manikumar
This is an automated email from the ASF dual-hosted git repository.

manikumar pushed a change to branch trunk
in repository https://gitbox.apache.org/repos/asf/kafka.git.


from 3b6573c  KAFKA-9481: Graceful handling TaskMigrated and TaskCorrupted 
(#8058)
 add d9b8b86  KAFKA-9575: Mention ZooKeeper 3.5.7 upgrade

No new revisions were added by this update.

Summary of changes:
 docs/upgrade.html | 10 ++
 1 file changed, 10 insertions(+)



[kafka] branch 2.5 updated: KAFKA-9575: Mention ZooKeeper 3.5.7 upgrade

2020-02-21 Thread manikumar
This is an automated email from the ASF dual-hosted git repository.

manikumar pushed a commit to branch 2.5
in repository https://gitbox.apache.org/repos/asf/kafka.git


The following commit(s) were added to refs/heads/2.5 by this push:
 new f008dbf  KAFKA-9575: Mention ZooKeeper 3.5.7 upgrade
f008dbf is described below

commit f008dbf9b7fd408b8987b0dd80d44323cdb6b5dc
Author: Ron Dagostino 
AuthorDate: Fri Feb 21 18:45:14 2020 +0530

KAFKA-9575: Mention ZooKeeper 3.5.7 upgrade

Author: Ron Dagostino 

Reviewers: Manikumar Reddy

Closes #8139 from rondagostino/KAFKA-9575

(cherry picked from commit d9b8b86bdd0b4fe6965b1f597f9ba7ec33954a50)
Signed-off-by: Manikumar Reddy 
---
 docs/upgrade.html | 10 ++
 1 file changed, 10 insertions(+)

diff --git a/docs/upgrade.html b/docs/upgrade.html
index 55593d6..bb30735 100644
--- a/docs/upgrade.html
+++ b/docs/upgrade.html
@@ -42,6 +42,16 @@
 enabled by default. You can continue to use TLSv1 and TLSv1.1 by 
explicitly enabling these in the configuration options
 ssl.protocol and ssl.enabled.protocols.
 
+ZooKeeper has been upgraded to 3.5.7, and a ZooKeeper upgrade from 3.4.X to 3.5.7 can fail if there are no snapshot files in the 3.4 data directory.
+This usually happens in test upgrades where ZooKeeper 3.5.7 is trying to load an existing 3.4 data dir in which no snapshot file has been created.
+For more details about the issue please refer to <a href="https://issues.apache.org/jira/browse/ZOOKEEPER-3056">ZOOKEEPER-3056</a>.
+A fix is given in <a href="https://issues.apache.org/jira/browse/ZOOKEEPER-3056">ZOOKEEPER-3056</a>, which is to set snapshot.trust.empty=true
+config in zookeeper.properties before the upgrade.
+
+ZooKeeper version 3.5.7 supports TLS-encrypted connectivity to ZooKeeper both with or without client certificates,
+and additional Kafka configurations are available to take advantage of this.
+See <a href="https://cwiki.apache.org/confluence/display/KAFKA/KIP-515%3A+Enable+ZK+client+to+use+the+new+TLS+supported+authentication">KIP-515</a> for details.
+
 
 
 Upgrading from 0.8.x, 0.9.x, 
0.10.0.x, 0.10.1.x, 0.10.2.x, 0.11.0.x, 1.0.x, 1.1.x, 2.0.x or 2.1.x or 2.2.x 
or 2.3.x to 2.4.0