[kafka] branch trunk updated: MINOR: Fix table outer join test (#5099)

2018-06-01 Thread damianguy
This is an automated email from the ASF dual-hosted git repository.

damianguy pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/kafka.git


The following commit(s) were added to refs/heads/trunk by this push:
 new e24916a  MINOR: Fix table outer join test (#5099)
e24916a is described below

commit e24916a68f8259046be677e7f8c1f365960e0dc3
Author: Emmanuel Harel 
AuthorDate: Fri Jun 1 13:09:24 2018 +0200

MINOR: Fix table outer join test (#5099)
---
 .../java/org/apache/kafka/streams/kstream/internals/KTableImplTest.java | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/streams/src/test/java/org/apache/kafka/streams/kstream/internals/KTableImplTest.java b/streams/src/test/java/org/apache/kafka/streams/kstream/internals/KTableImplTest.java
index 399e519..0b9c1ab 100644
--- a/streams/src/test/java/org/apache/kafka/streams/kstream/internals/KTableImplTest.java
+++ b/streams/src/test/java/org/apache/kafka/streams/kstream/internals/KTableImplTest.java
@@ -472,7 +472,7 @@ public class KTableImplTest {
 
 @Test(expected = NullPointerException.class)
 public void shouldThrowNullPointerOnOuterJoinWhenMaterializedIsNull() {
-table.leftJoin(table, MockValueJoiner.TOSTRING_JOINER, (Materialized) null);
+table.outerJoin(table, MockValueJoiner.TOSTRING_JOINER, (Materialized) null);
 }
 
 @Test(expected = NullPointerException.class)

-- 
To stop receiving notification emails like this one, please contact
damian...@apache.org.


[kafka] annotated tag 1.1.0-rc3 updated (9368c84 -> ecb57c1)

2018-03-15 Thread damianguy
This is an automated email from the ASF dual-hosted git repository.

damianguy pushed a change to annotated tag 1.1.0-rc3
in repository https://gitbox.apache.org/repos/asf/kafka.git.


*** WARNING: tag 1.1.0-rc3 was modified! ***

from 9368c84  (commit)
  to ecb57c1  (tag)
 tagging 9368c84565224fff1c74199af995c86f806be37a (commit)
 replaces 0.8.0-beta1
  by Damian Guy
  on Thu Mar 15 13:27:58 2018 +

- Log -
1.1.0-rc3
---


No new revisions were added by this update.

Summary of changes:

-- 
To stop receiving notification emails like this one, please contact
damian...@apache.org.


[kafka] annotated tag 1.1.0-rc0 created (now 7d74914)

2018-02-24 Thread damianguy
This is an automated email from the ASF dual-hosted git repository.

damianguy pushed a change to annotated tag 1.1.0-rc0
in repository https://gitbox.apache.org/repos/asf/kafka.git.


  at 7d74914  (tag)
 tagging e99dd247490bab052023315aa6789ebe03dd0927 (commit)
 replaces 0.8.0-beta1
  by Damian Guy
  on Sat Feb 24 15:13:54 2018 +

- Log -
1.1.0-rc0
---

This annotated tag includes the following new commits:

 new e99dd24  Bump version to 1.1.0

The 1 revision listed above as "new" is entirely new to this
repository and will be described in a separate email.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


-- 
To stop receiving notification emails like this one, please contact
damian...@apache.org.


[kafka] 01/01: Bump version to 1.1.0

2018-02-24 Thread damianguy
This is an automated email from the ASF dual-hosted git repository.

damianguy pushed a commit to annotated tag 1.1.0-rc0
in repository https://gitbox.apache.org/repos/asf/kafka.git

commit e99dd247490bab052023315aa6789ebe03dd0927
Author: Damian Guy <damian@gmail.com>
AuthorDate: Sat Feb 24 15:13:54 2018 +

Bump version to 1.1.0
---
 gradle.properties | 2 +-
 streams/quickstart/java/pom.xml   | 4 ++--
 .../quickstart/java/src/main/resources/archetype-resources/pom.xml| 4 ++--
 streams/quickstart/pom.xml| 4 ++--
 tests/kafkatest/__init__.py   | 2 +-
 5 files changed, 8 insertions(+), 8 deletions(-)

diff --git a/gradle.properties b/gradle.properties
index 7062ba0..190f223 100644
--- a/gradle.properties
+++ b/gradle.properties
@@ -16,7 +16,7 @@
 group=org.apache.kafka
 # NOTE: When you change this version number, you should also make sure to update
 # the version numbers in tests/kafkatest/__init__.py and kafka-merge-pr.py.
-version=1.1.0-SNAPSHOT
+version=1.1.0
 scalaVersion=2.11.12
 task=build
 org.gradle.jvmargs=-XX:MaxPermSize=512m -Xmx1024m -Xss2m
diff --git a/streams/quickstart/java/pom.xml b/streams/quickstart/java/pom.xml
index 70c1416..c3f50d0 100644
--- a/streams/quickstart/java/pom.xml
+++ b/streams/quickstart/java/pom.xml
@@ -26,11 +26,11 @@
 
 org.apache.kafka
 streams-quickstart
-1.1.0-SNAPSHOT
+1.1.0
 ..
 
 
 streams-quickstart-java
 maven-archetype
 
-</project>
\ No newline at end of file
+</project>
diff --git a/streams/quickstart/java/src/main/resources/archetype-resources/pom.xml b/streams/quickstart/java/src/main/resources/archetype-resources/pom.xml
index f2b8a8f..07ca444 100644
--- a/streams/quickstart/java/src/main/resources/archetype-resources/pom.xml
+++ b/streams/quickstart/java/src/main/resources/archetype-resources/pom.xml
@@ -29,7 +29,7 @@
 
 
 UTF-8
-1.1.0-SNAPSHOT
+1.1.0
 1.7.7
 1.2.17
 
@@ -133,4 +133,4 @@
 ${kafka.version}
 
 
-</project>
\ No newline at end of file
+</project>
diff --git a/streams/quickstart/pom.xml b/streams/quickstart/pom.xml
index 010b3fb..0a165d9 100644
--- a/streams/quickstart/pom.xml
+++ b/streams/quickstart/pom.xml
@@ -22,7 +22,7 @@
 org.apache.kafka
 streams-quickstart
 pom
-1.1.0-SNAPSHOT
+1.1.0
 
 Kafka Streams :: Quickstart
 
@@ -118,4 +118,4 @@
 
 
 
-</project>
\ No newline at end of file
+</project>
diff --git a/tests/kafkatest/__init__.py b/tests/kafkatest/__init__.py
index 80824f9..e7778a4 100644
--- a/tests/kafkatest/__init__.py
+++ b/tests/kafkatest/__init__.py
@@ -22,4 +22,4 @@
 # Instead, in development branches, the version should have a suffix of the form ".devN"
 #
 # For example, when Kafka is at version 1.0.0-SNAPSHOT, this should be something like "1.0.0.dev0"
-__version__ = '1.1.0.dev0'
+__version__ = '1.1.0'

-- 
To stop receiving notification emails like this one, please contact
damian...@apache.org.


[kafka] branch 1.1 updated: KAFKA-6577: Fix Connect system tests and add debug messages

2018-02-22 Thread damianguy
This is an automated email from the ASF dual-hosted git repository.

damianguy pushed a commit to branch 1.1
in repository https://gitbox.apache.org/repos/asf/kafka.git


The following commit(s) were added to refs/heads/1.1 by this push:
 new ba0389f  KAFKA-6577: Fix Connect system tests and add debug messages
ba0389f is described below

commit ba0389f57cf9d1321e3762f21da04b62d04559d5
Author: Randall Hauch <rha...@gmail.com>
AuthorDate: Thu Feb 22 09:39:59 2018 +

KAFKA-6577: Fix Connect system tests and add debug messages

**NOTE: This should be backported to the `1.1` branch, and is currently a 
blocker for 1.1.**

The `connect_test.py::ConnectStandaloneFileTest.test_file_source_and_sink`
system test is failing with the SASL configuration without a sufficient
explanation. During the test, the Connect worker fails to start, but the
Connect log contains no useful information. There are actually several things
compounding to cause the failure and to make the problem difficult to
understand.

First, the
`tests/kafkatest/tests/connect/templates/connect_standalone.properties` is only
adding in the broker's security configuration with the `producer.` and
`consumer.` prefixes, but is not adding it without a prefix. The worker uses
the AdminClient to connect to the broker to get the Kafka cluster ID and to
manage the three internal topics, and the AdminClient is configured via
top-level properties. Because the SASL test requires that all clients connect
using SASL, the lack of b [...]

Second, the default `request.timeout.ms` for the AdminClient (and the other
clients) is 120 seconds, so the AdminClient was retrying for 120 seconds before
it would give up and throw an error. However, the test was only waiting for 60
seconds before determining that the service failed to start. This can be
corrected by setting `request.timeout.ms=1` in the Connect distributed and
standalone worker configurations.

Third, the Connect workers were recently changed to look up the Kafka
cluster ID before starting the herder. This is unlike the older uses of the
AdminClient to find and manage the internal topics, where a failure to connect
was not necessarily logged correctly but was nevertheless skipped over,
relying upon broker auto-topic creation to create the internal topics. (This
may be why the test did not fail prior to the recent change to always require a
successful AdminClient connection. [...]

The `ConnectStandaloneFileTest.test_file_source_and_sink` system tests were
run locally prior to this fix, and they failed as they did in the nightlies.
Once these fixes were made, the locally run system tests passed.
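
A minimal sketch of the first two fixes described above, using
java.util.Properties to stand in for the rendered
connect_standalone.properties; the SASL property names and values here are
illustrative assumptions, not copied from the template:

    import java.util.Properties;

    public class WorkerSecurityConfigSketch {
        public static void main(String[] args) {
            Properties workerProps = new Properties();
            workerProps.put("bootstrap.servers", "broker:9093"); // placeholder

            // What the template already rendered: prefixed copies of the broker's
            // security settings, read only by the worker's producer and consumer.
            workerProps.put("producer.security.protocol", "SASL_PLAINTEXT");
            workerProps.put("consumer.security.protocol", "SASL_PLAINTEXT");

            // The missing piece: the same settings with no prefix, which is what
            // the worker's AdminClient reads when it looks up the Kafka cluster ID
            // and manages the three internal topics.
            workerProps.put("security.protocol", "SASL_PLAINTEXT");

            // Fail fast instead of retrying past the test's 60-second wait.
            workerProps.put("request.timeout.ms", "1");
        }
    }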

Author: Randall Hauch <rha...@gmail.com>

Reviewers: Konstantine Karantasis <konstant...@confluent.io>, Ewen 
Cheslack-Postava <m...@ewencp.org>

Closes #4610 from rhauch/kafka-6577-trunk

(cherry picked from commit fc19c3e6f243a8d1b3e27cdc912dc092bbd342e0)
Signed-off-by: Damian Guy <damian@gmail.com>
---
 .../main/java/org/apache/kafka/connect/cli/ConnectDistributed.java   | 1 +
 .../main/java/org/apache/kafka/connect/cli/ConnectStandalone.java| 1 +
 .../org/apache/kafka/connect/storage/KafkaConfigBackingStore.java| 1 +
 .../org/apache/kafka/connect/storage/KafkaOffsetBackingStore.java| 1 +
 .../org/apache/kafka/connect/storage/KafkaStatusBackingStore.java| 1 +
 .../src/main/java/org/apache/kafka/connect/util/ConnectUtils.java| 5 -
 tests/kafkatest/tests/connect/connect_test.py| 2 +-
 .../kafkatest/tests/connect/templates/connect-distributed.properties | 3 +++
 .../kafkatest/tests/connect/templates/connect-standalone.properties  | 4 
 9 files changed, 17 insertions(+), 2 deletions(-)

diff --git a/connect/runtime/src/main/java/org/apache/kafka/connect/cli/ConnectDistributed.java b/connect/runtime/src/main/java/org/apache/kafka/connect/cli/ConnectDistributed.java
index 98a77ed..8930602 100644
--- a/connect/runtime/src/main/java/org/apache/kafka/connect/cli/ConnectDistributed.java
+++ b/connect/runtime/src/main/java/org/apache/kafka/connect/cli/ConnectDistributed.java
@@ -73,6 +73,7 @@ public class ConnectDistributed {
 DistributedConfig config = new DistributedConfig(workerProps);
 
 String kafkaClusterId = ConnectUtils.lookupKafkaClusterId(config);
+log.debug("Kafka cluster ID: {}", kafkaClusterId);
 
 RestServer rest = new RestServer(config);
 URI advertisedUrl = rest.advertisedUrl();
diff --git a/connect/runtime/src/main/java/org/apache/kafka/connect/cli/ConnectStandalone.java b/connect/runtime/src/main/java/org/apache/kafka/connect/cli/ConnectStandalone.java
index 1769905..413cb46 100644
--- a/connect/runtime/src/main/java/org/apache/kafka/connect/cli/ConnectStandalone.java
+++ b/connect/runtime/src/main/java/org/apache/kafka/connect/cli/ConnectStanda

[kafka] branch trunk updated: KAFKA-6238; Fix inter-broker protocol message format compatibility check

2018-02-21 Thread damianguy
This is an automated email from the ASF dual-hosted git repository.

damianguy pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/kafka.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 660c0c0  KAFKA-6238; Fix inter-broker protocol message format compatibility check
660c0c0 is described below

commit 660c0c0aa33ced5307ee70bfdb78ebde4b978d73
Author: Jason Gustafson <ja...@confluent.io>
AuthorDate: Wed Feb 21 09:38:39 2018 +

KAFKA-6238; Fix inter-broker protocol message format compatibility check

This patch fixes a bug in the validation of the inter-broker protocol and
the message format version. We should allow the configured message format API
version to be greater than the inter-broker protocol API version as long as the
actual message format versions are equal. For example, if the message format
version is set to 1.0, it is fine for the inter-broker protocol version to be
0.11.0 because they both use message format v2.
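
A condensed sketch of the rule above; the version-to-record-format mapping is
taken from minVersionForMessageFormat in the diff below, while the class and
method are illustrative, not the actual KafkaConfig validation:

    import java.util.Map;

    public class FormatCompatSketch {
        // Version-to-record-format mapping, per minVersionForMessageFormat below;
        // "1.0" -> 2 follows the v2 example given in the commit message.
        static final Map<String, Integer> RECORD_FORMAT = Map.of(
                "0.8.0", 0, "0.10.0", 1, "0.11.0", 2, "1.0", 2);

        // Valid as long as the configured message format does not require a newer
        // record format than the inter-broker protocol version supports; matching
        // record formats make a "newer" message format version acceptable.
        static boolean compatible(String messageFormatVersion, String interBrokerProtocolVersion) {
            return RECORD_FORMAT.get(messageFormatVersion)
                    <= RECORD_FORMAT.get(interBrokerProtocolVersion);
        }

        public static void main(String[] args) {
            // Message format 1.0 with inter-broker protocol 0.11.0: both map to
            // record format v2, so the combination is allowed.
            System.out.println(compatible("1.0", "0.11.0")); // true
        }
    }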

I have added a unit test which checks compatibility for all combinations of 
the message format version and the inter-broker protocol version.

Author: Jason Gustafson <ja...@confluent.io>

Reviewers: Ismael Juma <ism...@juma.me.uk>

Closes #4583 from hachikuji/KAFKA-6328-REOPENED
---
 .../apache/kafka/common/record/RecordFormat.java   | 41 +++
 core/src/main/scala/kafka/api/ApiVersion.scala | 46 ++
 core/src/main/scala/kafka/log/Log.scala|  4 +-
 core/src/main/scala/kafka/server/KafkaApis.scala   |  5 ++-
 core/src/main/scala/kafka/server/KafkaConfig.scala |  9 -
 .../main/scala/kafka/server/ReplicaManager.scala   |  2 +-
 .../test/scala/unit/kafka/api/ApiVersionTest.scala | 13 ++
 .../scala/unit/kafka/server/KafkaConfigTest.scala  | 24 +++
 docs/upgrade.html  | 19 +
 9 files changed, 131 insertions(+), 32 deletions(-)

diff --git a/clients/src/main/java/org/apache/kafka/common/record/RecordFormat.java b/clients/src/main/java/org/apache/kafka/common/record/RecordFormat.java
new file mode 100644
index 000..e71ec59
--- /dev/null
+++ b/clients/src/main/java/org/apache/kafka/common/record/RecordFormat.java
@@ -0,0 +1,41 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.kafka.common.record;
+
+public enum RecordFormat {
+V0(0), V1(1), V2(2);
+
+public final byte value;
+
+RecordFormat(int value) {
+this.value = (byte) value;
+}
+
+public static RecordFormat lookup(byte version) {
+switch (version) {
+case 0: return V0;
+case 1: return V1;
+case 2: return V2;
+default: throw new IllegalArgumentException("Unknown format version: " + version);
+}
+}
+
+public static RecordFormat current() {
+return V2;
+}
+
+}
diff --git a/core/src/main/scala/kafka/api/ApiVersion.scala b/core/src/main/scala/kafka/api/ApiVersion.scala
index b8329c1..9270a7a 100644
--- a/core/src/main/scala/kafka/api/ApiVersion.scala
+++ b/core/src/main/scala/kafka/api/ApiVersion.scala
@@ -17,7 +17,7 @@
 
 package kafka.api
 
-import org.apache.kafka.common.record.RecordBatch
+import org.apache.kafka.common.record.RecordFormat
 
 /**
  * This class contains the different Kafka versions.
@@ -90,11 +90,23 @@ object ApiVersion {
 
   def latestVersion = versionNameMap.values.max
 
+  def allVersions: Set[ApiVersion] = {
+versionNameMap.values.toSet
+  }
+
+  def minVersionForMessageFormat(messageFormatVersion: RecordFormat): String = {
+messageFormatVersion match {
+  case RecordFormat.V0 => "0.8.0"
+  case RecordFormat.V1 => "0.10.0"
+  case RecordFormat.V2 => "0.11.0"
+  case _ => throw new IllegalArgumentException(s"Invalid message format version $messageFormatVersion")
+}
+  }
 }
 
 sealed trait ApiVersion extends Ordered[ApiVersion] {
   val version: String
-  val messageFormatVersion: Byte
+  val messageFormatVersion: RecordFormat
   val id: Int
 
   override def compare(that: Api

[kafka] branch trunk updated: MINOR: Fix streams broker compatibility test.

2018-02-20 Thread damianguy
This is an automated email from the ASF dual-hosted git repository.

damianguy pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/kafka.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 57059d4  MINOR: Fix streams broker compatibility test.
57059d4 is described below

commit 57059d40223d6284d1b2c9f3034b30f6bd61c44f
Author: Damian Guy <damian@gmail.com>
AuthorDate: Tue Feb 20 17:46:05 2018 +

MINOR: Fix streams broker compatibility test.

Change the string in the test condition to the one that is logged

Author: Damian Guy <damian@gmail.com>

Reviewers: Bill Bejeck <b...@confluent.io>, Guozhang Wang 
<wangg...@gmail.com>

Closes #4599 from dguy/broker-compatibility
---
 tests/kafkatest/tests/streams/streams_broker_compatibility_test.py | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/tests/kafkatest/tests/streams/streams_broker_compatibility_test.py b/tests/kafkatest/tests/streams/streams_broker_compatibility_test.py
index 1eb46ef..b00b9bb 100644
--- a/tests/kafkatest/tests/streams/streams_broker_compatibility_test.py
+++ b/tests/kafkatest/tests/streams/streams_broker_compatibility_test.py
@@ -67,9 +67,9 @@ class StreamsBrokerCompatibility(Test):
 
 processor.node.account.ssh(processor.start_cmd(processor.node))
 with processor.node.account.monitor_log(processor.STDERR_FILE) as monitor:
-monitor.wait_until('FATAL: An unexpected exception org.apache.kafka.common.errors.UnsupportedVersionException: The broker does not support LIST_OFFSETS ',
+monitor.wait_until("Exception in thread \"main\" org.apache.kafka.common.errors.UnsupportedVersionException: The broker does not support LIST_OFFSETS ",
 timeout_sec=60,
-   err_msg="Never saw 'FATAL: An unexpected exception org.apache.kafka.common.errors.UnsupportedVersionException: The broker does not support LIST_OFFSETS ' error message " + str(processor.node.account))
+   err_msg="Exception in thread \"main\" org.apache.kafka.common.errors.UnsupportedVersionException: The broker does not support LIST_OFFSETS " + str(processor.node.account))
 
 self.kafka.stop()
 

-- 
To stop receiving notification emails like this one, please contact
damian...@apache.org.


[kafka] branch 1.1 updated: MINOR: ignore streams eos tests (#4597)

2018-02-20 Thread damianguy
This is an automated email from the ASF dual-hosted git repository.

damianguy pushed a commit to branch 1.1
in repository https://gitbox.apache.org/repos/asf/kafka.git


The following commit(s) were added to refs/heads/1.1 by this push:
 new 6ca5997  MINOR: ignore streams eos tests (#4597)
6ca5997 is described below

commit 6ca59977f378647254d50b2c62f192b93ba72551
Author: Damian Guy <damian@gmail.com>
AuthorDate: Tue Feb 20 17:26:31 2018 +

MINOR: ignore streams eos tests (#4597)
---
 tests/kafkatest/tests/streams/streams_eos_test.py | 1 +
 1 file changed, 1 insertion(+)

diff --git a/tests/kafkatest/tests/streams/streams_eos_test.py b/tests/kafkatest/tests/streams/streams_eos_test.py
index d6ac600..986702c 100644
--- a/tests/kafkatest/tests/streams/streams_eos_test.py
+++ b/tests/kafkatest/tests/streams/streams_eos_test.py
@@ -38,6 +38,7 @@ class StreamsEosTest(KafkaTest):
 self.driver = StreamsEosTestDriverService(test_context, self.kafka)
 self.test_context = test_context
 
+@ignored
 @cluster(num_nodes=9)
 def test_rebalance_simple(self):
 self.run_rebalance(StreamsEosTestJobRunnerService(self.test_context, self.kafka),

-- 
To stop receiving notification emails like this one, please contact
damian...@apache.org.


[kafka] branch 1.1 updated: MINOR: Redirect response code in Connect's RestClient to logs instead of stdout

2018-02-20 Thread damianguy
This is an automated email from the ASF dual-hosted git repository.

damianguy pushed a commit to branch 1.1
in repository https://gitbox.apache.org/repos/asf/kafka.git


The following commit(s) were added to refs/heads/1.1 by this push:
 new d373fe2  MINOR: Redirect response code in Connect's RestClient to logs instead of stdout
d373fe2 is described below

commit d373fe2e378e3797d1b64f255f281f0b1f41cede
Author: Konstantine Karantasis <konstant...@confluent.io>
AuthorDate: Tue Feb 20 17:15:31 2018 +

MINOR: Redirect response code in Connect's RestClient to logs instead of stdout

Sending the response code of an HTTP request issued via `RestClient` in
Connect to stdout seems like an unconventional choice.

This PR redirects the response code to a log message at DEBUG level (usually
the same level as the one that the caller of `RestClient.httpRequest` uses).

This will also fix the system tests that broke because this response code was
written to stdout.
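
For reference, a minimal sketch of the replacement pattern, using the SLF4J
API that the Connect runtime already depends on; the class here is
illustrative, not the actual RestClient:

    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;

    public class ResponseCodeLoggingSketch {
        private static final Logger log = LoggerFactory.getLogger(ResponseCodeLoggingSketch.class);

        void report(int responseCode) {
            // Instead of System.out.println(responseCode): a parameterized DEBUG
            // message, formatted only when DEBUG logging is enabled.
            log.debug("Request's response code: {}", responseCode);
        }
    }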

Author: Konstantine Karantasis <konstant...@confluent.io>

Reviewers: Randall Hauch <rha...@gmail.com>, Damian Guy 
<damian@gmail.com>

Closes #4591 from 
kkonstantine/MINOR-Redirect-response-code-in-Connect-RestClient-to-logs-instead-of-stdout

(cherry picked from commit b79e11bb511e259c8187d865761c3b448391603f)
Signed-off-by: Damian Guy <damian@gmail.com>
---
 .../main/java/org/apache/kafka/connect/runtime/rest/RestClient.java | 6 --
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/rest/RestClient.java b/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/rest/RestClient.java
index d500ad2..15e8418 100644
--- a/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/rest/RestClient.java
+++ b/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/rest/RestClient.java
@@ -82,12 +82,14 @@ public class RestClient {
 req.method(method);
 req.accept("application/json");
 req.agent("kafka-connect");
-req.content(new StringContentProvider(serializedBody, StandardCharsets.UTF_8), "application/json");
+if (serializedBody != null) {
+req.content(new StringContentProvider(serializedBody, StandardCharsets.UTF_8), "application/json");
+}
 
 ContentResponse res = req.send();
 
 int responseCode = res.getStatus();
-System.out.println(responseCode);
+log.debug("Request's response code: {}", responseCode);
 if (responseCode == HttpStatus.NO_CONTENT_204) {
 return new HttpResponse<>(responseCode, convertHttpFieldsToMap(res.getHeaders()), null);
 } else if (responseCode >= 400) {

-- 
To stop receiving notification emails like this one, please contact
damian...@apache.org.


[kafka] branch trunk updated: MINOR: Redirect response code in Connect's RestClient to logs instead of stdout

2018-02-20 Thread damianguy
This is an automated email from the ASF dual-hosted git repository.

damianguy pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/kafka.git


The following commit(s) were added to refs/heads/trunk by this push:
 new b79e11b  MINOR: Redirect response code in Connect's RestClient to logs instead of stdout
b79e11b is described below

commit b79e11bb511e259c8187d865761c3b448391603f
Author: Konstantine Karantasis <konstant...@confluent.io>
AuthorDate: Tue Feb 20 17:15:31 2018 +

MINOR: Redirect response code in Connect's RestClient to logs instead of stdout

Sending the response code of an HTTP request issued via `RestClient` in
Connect to stdout seems like an unconventional choice.

This PR redirects the response code to a log message at DEBUG level (usually
the same level as the one that the caller of `RestClient.httpRequest` uses).

This will also fix the system tests that broke because this response code was
written to stdout.

Author: Konstantine Karantasis <konstant...@confluent.io>

Reviewers: Randall Hauch <rha...@gmail.com>, Damian Guy 
<damian@gmail.com>

Closes #4591 from 
kkonstantine/MINOR-Redirect-response-code-in-Connect-RestClient-to-logs-instead-of-stdout
---
 .../main/java/org/apache/kafka/connect/runtime/rest/RestClient.java | 6 --
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/rest/RestClient.java b/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/rest/RestClient.java
index d500ad2..15e8418 100644
--- a/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/rest/RestClient.java
+++ b/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/rest/RestClient.java
@@ -82,12 +82,14 @@ public class RestClient {
 req.method(method);
 req.accept("application/json");
 req.agent("kafka-connect");
-req.content(new StringContentProvider(serializedBody, StandardCharsets.UTF_8), "application/json");
+if (serializedBody != null) {
+req.content(new StringContentProvider(serializedBody, StandardCharsets.UTF_8), "application/json");
+}
 
 ContentResponse res = req.send();
 
 int responseCode = res.getStatus();
-System.out.println(responseCode);
+log.debug("Request's response code: {}", responseCode);
 if (responseCode == HttpStatus.NO_CONTENT_204) {
 return new HttpResponse<>(responseCode, convertHttpFieldsToMap(res.getHeaders()), null);
 } else if (responseCode >= 400) {

-- 
To stop receiving notification emails like this one, please contact
damian...@apache.org.


[kafka] branch 1.1 updated: MINOR: Fix file source task configs in system tests.

2018-02-20 Thread damianguy
This is an automated email from the ASF dual-hosted git repository.

damianguy pushed a commit to branch 1.1
in repository https://gitbox.apache.org/repos/asf/kafka.git


The following commit(s) were added to refs/heads/1.1 by this push:
 new 323e555  MINOR: Fix file source task configs in system tests.
323e555 is described below

commit 323e555074344aeddd6e747067c403833582ab06
Author: Konstantine Karantasis <konstant...@confluent.io>
AuthorDate: Tue Feb 20 10:45:50 2018 +

MINOR: Fix file source task configs in system tests.

Another fall-through of the `header.converter` and `batch.size` properties,
here in the `FileStreamSourceConnector` tests.

Author: Konstantine Karantasis <konstant...@confluent.io>

Reviewers: Randall Hauch <rha...@gmail.com>, Damian Guy 
<damian@gmail.com>

Closes #4590 from 
kkonstantine/MINOR-Fix-file-source-task-config-in-system-tests

(cherry picked from commit f10c0d38634822ab4c9abc4744268c9fd5b50a2c)
Signed-off-by: Damian Guy <damian@gmail.com>
---
 tests/kafkatest/tests/connect/connect_rest_test.py | 8 +---
 1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/tests/kafkatest/tests/connect/connect_rest_test.py b/tests/kafkatest/tests/connect/connect_rest_test.py
index 8172df3..3c7cd89 100644
--- a/tests/kafkatest/tests/connect/connect_rest_test.py
+++ b/tests/kafkatest/tests/connect/connect_rest_test.py
@@ -31,14 +31,15 @@ class ConnectRestApiTest(KafkaTest):
 FILE_SOURCE_CONNECTOR = 'org.apache.kafka.connect.file.FileStreamSourceConnector'
 FILE_SINK_CONNECTOR = 'org.apache.kafka.connect.file.FileStreamSinkConnector'
 
-FILE_SOURCE_CONFIGS = {'name', 'connector.class', 'tasks.max', 'key.converter', 'value.converter', 'topic', 'file', 'transforms'}
-FILE_SINK_CONFIGS = {'name', 'connector.class', 'tasks.max', 'key.converter', 'value.converter', 'topics', 'file', 'transforms', 'topics.regex'}
+FILE_SOURCE_CONFIGS = {'name', 'connector.class', 'tasks.max', 'key.converter', 'value.converter', 'header.converter', 'batch.size', 'topic', 'file', 'transforms'}
+FILE_SINK_CONFIGS = {'name', 'connector.class', 'tasks.max', 'key.converter', 'value.converter', 'header.converter', 'topics', 'file', 'transforms', 'topics.regex'}
 
 INPUT_FILE = "/mnt/connect.input"
 INPUT_FILE2 = "/mnt/connect.input2"
 OUTPUT_FILE = "/mnt/connect.output"
 
 TOPIC = "test"
+DEFAULT_BATCH_SIZE = "2000"
 OFFSETS_TOPIC = "connect-offsets"
 OFFSETS_REPLICATION_FACTOR = "1"
 OFFSETS_PARTITIONS = "1"
@@ -141,7 +142,8 @@ class ConnectRestApiTest(KafkaTest):
 'config': {
 'task.class': 'org.apache.kafka.connect.file.FileStreamSourceTask',
 'file': self.INPUT_FILE,
-'topic': self.TOPIC
+'topic': self.TOPIC,
+'batch.size': self.DEFAULT_BATCH_SIZE
 }
 }]
 source_task_info = self.cc.get_connector_tasks("local-file-source")

-- 
To stop receiving notification emails like this one, please contact
damian...@apache.org.


[kafka] branch trunk updated: MINOR: Fix file source task configs in system tests.

2018-02-20 Thread damianguy
This is an automated email from the ASF dual-hosted git repository.

damianguy pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/kafka.git


The following commit(s) were added to refs/heads/trunk by this push:
 new f10c0d3  MINOR: Fix file source task configs in system tests.
f10c0d3 is described below

commit f10c0d38634822ab4c9abc4744268c9fd5b50a2c
Author: Konstantine Karantasis <konstant...@confluent.io>
AuthorDate: Tue Feb 20 10:45:50 2018 +

MINOR: Fix file source task configs in system tests.

Another fall-through of the `header.converter` and `batch.size` properties,
here in the `FileStreamSourceConnector` tests.

Author: Konstantine Karantasis <konstant...@confluent.io>

Reviewers: Randall Hauch <rha...@gmail.com>, Damian Guy 
<damian@gmail.com>

Closes #4590 from 
kkonstantine/MINOR-Fix-file-source-task-config-in-system-tests
---
 tests/kafkatest/tests/connect/connect_rest_test.py | 8 +---
 1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/tests/kafkatest/tests/connect/connect_rest_test.py b/tests/kafkatest/tests/connect/connect_rest_test.py
index 8172df3..3c7cd89 100644
--- a/tests/kafkatest/tests/connect/connect_rest_test.py
+++ b/tests/kafkatest/tests/connect/connect_rest_test.py
@@ -31,14 +31,15 @@ class ConnectRestApiTest(KafkaTest):
 FILE_SOURCE_CONNECTOR = 'org.apache.kafka.connect.file.FileStreamSourceConnector'
 FILE_SINK_CONNECTOR = 'org.apache.kafka.connect.file.FileStreamSinkConnector'
 
-FILE_SOURCE_CONFIGS = {'name', 'connector.class', 'tasks.max', 'key.converter', 'value.converter', 'topic', 'file', 'transforms'}
-FILE_SINK_CONFIGS = {'name', 'connector.class', 'tasks.max', 'key.converter', 'value.converter', 'topics', 'file', 'transforms', 'topics.regex'}
+FILE_SOURCE_CONFIGS = {'name', 'connector.class', 'tasks.max', 'key.converter', 'value.converter', 'header.converter', 'batch.size', 'topic', 'file', 'transforms'}
+FILE_SINK_CONFIGS = {'name', 'connector.class', 'tasks.max', 'key.converter', 'value.converter', 'header.converter', 'topics', 'file', 'transforms', 'topics.regex'}
 
 INPUT_FILE = "/mnt/connect.input"
 INPUT_FILE2 = "/mnt/connect.input2"
 OUTPUT_FILE = "/mnt/connect.output"
 
 TOPIC = "test"
+DEFAULT_BATCH_SIZE = "2000"
 OFFSETS_TOPIC = "connect-offsets"
 OFFSETS_REPLICATION_FACTOR = "1"
 OFFSETS_PARTITIONS = "1"
@@ -141,7 +142,8 @@ class ConnectRestApiTest(KafkaTest):
 'config': {
 'task.class': 'org.apache.kafka.connect.file.FileStreamSourceTask',
 'file': self.INPUT_FILE,
-'topic': self.TOPIC
+'topic': self.TOPIC,
+'batch.size': self.DEFAULT_BATCH_SIZE
 }
 }]
 source_task_info = self.cc.get_connector_tasks("local-file-source")

-- 
To stop receiving notification emails like this one, please contact
damian...@apache.org.


[kafka] branch trunk updated: Bump trunk versions to 1.2-SNAPSHOT (#4505)

2018-02-01 Thread damianguy
This is an automated email from the ASF dual-hosted git repository.

damianguy pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/kafka.git


The following commit(s) were added to refs/heads/trunk by this push:
 new ca01711  Bump trunk versions to 1.2-SNAPSHOT (#4505)
ca01711 is described below

commit ca01711c0ec0b616840cc696419e5bbf500f6651
Author: Damian Guy <damian@gmail.com>
AuthorDate: Thu Feb 1 11:35:43 2018 +

Bump trunk versions to 1.2-SNAPSHOT (#4505)
---
 gradle.properties  | 2 +-
 kafka-merge-pr.py  | 2 +-
 streams/quickstart/java/pom.xml| 2 +-
 streams/quickstart/java/src/main/resources/archetype-resources/pom.xml | 2 +-
 streams/quickstart/pom.xml | 2 +-
 tests/kafkatest/__init__.py| 2 +-
 6 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/gradle.properties b/gradle.properties
index 7062ba0..325a1d0 100644
--- a/gradle.properties
+++ b/gradle.properties
@@ -16,7 +16,7 @@
 group=org.apache.kafka
 # NOTE: When you change this version number, you should also make sure to update
 # the version numbers in tests/kafkatest/__init__.py and kafka-merge-pr.py.
-version=1.1.0-SNAPSHOT
+version=1.2.0-SNAPSHOT
 scalaVersion=2.11.12
 task=build
 org.gradle.jvmargs=-XX:MaxPermSize=512m -Xmx1024m -Xss2m
diff --git a/kafka-merge-pr.py b/kafka-merge-pr.py
index 90fcf22..02cf6e0 100755
--- a/kafka-merge-pr.py
+++ b/kafka-merge-pr.py
@@ -70,7 +70,7 @@ TEMP_BRANCH_PREFIX = "PR_TOOL"
 
 DEV_BRANCH_NAME = "trunk"
 
-DEFAULT_FIX_VERSION = os.environ.get("DEFAULT_FIX_VERSION", "1.1.0")
+DEFAULT_FIX_VERSION = os.environ.get("DEFAULT_FIX_VERSION", "1.2.0")
 
 def get_json(url):
 try:
diff --git a/streams/quickstart/java/pom.xml b/streams/quickstart/java/pom.xml
index 70c1416..fed2bbc 100644
--- a/streams/quickstart/java/pom.xml
+++ b/streams/quickstart/java/pom.xml
@@ -26,7 +26,7 @@
 
 org.apache.kafka
 streams-quickstart
-1.1.0-SNAPSHOT
+1.2.0-SNAPSHOT
 ..
 
 
diff --git a/streams/quickstart/java/src/main/resources/archetype-resources/pom.xml b/streams/quickstart/java/src/main/resources/archetype-resources/pom.xml
index f2b8a8f..6da81a7 100644
--- a/streams/quickstart/java/src/main/resources/archetype-resources/pom.xml
+++ b/streams/quickstart/java/src/main/resources/archetype-resources/pom.xml
@@ -29,7 +29,7 @@
 
 
 UTF-8
-1.1.0-SNAPSHOT
+1.2.0-SNAPSHOT
 1.7.7
 1.2.17
 
diff --git a/streams/quickstart/pom.xml b/streams/quickstart/pom.xml
index d348e64..b14a9ab 100644
--- a/streams/quickstart/pom.xml
+++ b/streams/quickstart/pom.xml
@@ -22,7 +22,7 @@
 org.apache.kafka
 streams-quickstart
 pom
-1.1.0-SNAPSHOT
+1.2.0-SNAPSHOT
 
 Kafka Streams :: Quickstart
 
diff --git a/tests/kafkatest/__init__.py b/tests/kafkatest/__init__.py
index 80824f9..935f20d 100644
--- a/tests/kafkatest/__init__.py
+++ b/tests/kafkatest/__init__.py
@@ -22,4 +22,4 @@
 # Instead, in development branches, the version should have a suffix of the form ".devN"
 #
 # For example, when Kafka is at version 1.0.0-SNAPSHOT, this should be something like "1.0.0.dev0"
-__version__ = '1.1.0.dev0'
+__version__ = '1.2.0.dev0'

-- 
To stop receiving notification emails like this one, please contact
damian...@apache.org.


[kafka] branch 1.1 created (now c38a345)

2018-02-01 Thread damianguy
This is an automated email from the ASF dual-hosted git repository.

damianguy pushed a change to branch 1.1
in repository https://gitbox.apache.org/repos/asf/kafka.git.


  at c38a345  MINOR: Fix brokerId passed to metrics reporters (#4497)

No new revisions were added by this update.

-- 
To stop receiving notification emails like this one, please contact
damian...@apache.org.


[kafka] branch 1.0 updated: KAFKA-6378 KStream-GlobalKTable null KeyValueMapper handling

2018-01-31 Thread damianguy
This is an automated email from the ASF dual-hosted git repository.

damianguy pushed a commit to branch 1.0
in repository https://gitbox.apache.org/repos/asf/kafka.git


The following commit(s) were added to refs/heads/1.0 by this push:
 new ab3e4a2  KAFKA-6378 KStream-GlobalKTable null KeyValueMapper handling
ab3e4a2 is described below

commit ab3e4a27671df2499f8dea34a84fc0740102269c
Author: Andy Bryant <andybry...@gmail.com>
AuthorDate: Wed Jan 31 10:20:12 2018 +

KAFKA-6378 KStream-GlobalKTable null KeyValueMapper handling

For KStream-GlobalKTable joins let `null` `KeyValueMapper` results indicate no match

For KStream-GlobalKTable joins, a `KeyValueMapper` is used to derive, from the
stream records, the key to look up in the `GlobalKTable`. For some stream
values there may be no valid reference into the table. This patch allows
developers to use a `null` return value to indicate that there is no possible
match. This is safe in this case since `null` is never a valid key for a
`GlobalKTable`. Without this patch, returning a `null` value caused the stream
to crash on Kafka 1.0.

I added unit tests for KStream-GlobalKTable left and inner joins, since they
were missing. I also covered the additional scenario where the
`KeyValueMapper` returns `null`, to ensure it is handled correctly.
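
A minimal sketch of the pattern this change enables; the topic names, the
colon-delimited key convention, and the default serdes are assumptions for
illustration:

    import org.apache.kafka.streams.StreamsBuilder;
    import org.apache.kafka.streams.kstream.GlobalKTable;
    import org.apache.kafka.streams.kstream.KStream;

    public class GlobalTableJoinSketch {
        public static void main(String[] args) {
            StreamsBuilder builder = new StreamsBuilder();
            KStream<String, String> orders = builder.stream("orders");                 // placeholder topic
            GlobalKTable<String, String> customers = builder.globalTable("customers"); // placeholder topic

            // The KeyValueMapper may now return null to signal "no possible match";
            // such records are dropped from this inner join instead of crashing the
            // stream, since null is never a valid GlobalKTable key.
            orders.join(customers,
                    (orderId, order) -> order.contains(":") ? order.split(":")[0] : null,
                    (order, customer) -> customer + " -> " + order)
                  .to("enriched-orders");
        }
    }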

Author: Andy Bryant <andybry...@gmail.com>

Reviewers: Matthias J. Sax <matth...@confluent.io>, Damian Guy 
<damian@gmail.com>

Closes #4424 from 
andybryant/KAFKA-6378-null-handling-stream-globaltable-join
---
 .../org/apache/kafka/streams/kstream/KStream.java  |  12 +-
 .../internals/KStreamKTableJoinProcessor.java  |   5 +-
 .../internals/KStreamGlobalKTableJoinTest.java | 211 +
 .../internals/KStreamGlobalKTableLeftJoinTest.java | 211 +
 .../kstream/internals/KStreamKTableJoinTest.java   | 132 +
 .../internals/KStreamKTableLeftJoinTest.java   | 139 +-
 6 files changed, 617 insertions(+), 93 deletions(-)

diff --git a/streams/src/main/java/org/apache/kafka/streams/kstream/KStream.java b/streams/src/main/java/org/apache/kafka/streams/kstream/KStream.java
index 0d1d201..6973719 100644
--- a/streams/src/main/java/org/apache/kafka/streams/kstream/KStream.java
+++ b/streams/src/main/java/org/apache/kafka/streams/kstream/KStream.java
@@ -2474,8 +2474,10 @@ public interface KStream<K, V> {
 * For each {@code KStream} record that finds a corresponding record in {@link GlobalKTable} the provided
 * {@link ValueJoiner} will be called to compute a value (with arbitrary type) for the result record.
 * The key of the result record is the same as the key of this {@code KStream}.
- * If an {@code KStream} input record key or value is {@code null} the record will not be included in the join
+ * If a {@code KStream} input record key or value is {@code null} the record will not be included in the join
 * operation and thus no output record will be added to the resulting {@code KStream}.
+ * If {@code keyValueMapper} returns {@code null} implying no match exists, no output record will be added to the
+ * resulting {@code KStream}.
 *
 * @param globalKTable   the {@link GlobalKTable} to be joined with this stream
 * @param keyValueMapper instance of {@link KeyValueMapper} used to map from the (key, value) of this stream
@@ -2506,11 +2508,13 @@ public interface KStream<K, V> {
 * 
 * For each {@code KStream} record whether or not it finds a corresponding record in {@link GlobalKTable} the
 * provided {@link ValueJoiner} will be called to compute a value (with arbitrary type) for the result record.
- * If no {@link GlobalKTable} record was found during lookup, a {@code null} value will be provided to
- * {@link ValueJoiner}.
 * The key of the result record is the same as this {@code KStream}.
- * If an {@code KStream} input record key or value is {@code null} the record will not be included in the join
+ * If a {@code KStream} input record key or value is {@code null} the record will not be included in the join
 * operation and thus no output record will be added to the resulting {@code KStream}.
+ * If {@code keyValueMapper} returns {@code null} implying no match exists, a {@code null} value will be
+ * provided to {@link ValueJoiner}.
+ * If no {@link GlobalKTable} record was found during lookup, a {@code null} value will be provided to
+ * {@link ValueJoiner}.
 *
 * @param globalKTable   the {@link GlobalKTable} to be joined with this stream
 * @param keyValueMapper instance of {@link KeyValueMapper} used to map from the (key, value) of this stream
diff --git a/streams/src/main/java/org/apache/kafka/streams/kstream/internals/KStreamKTableJoinProcessor.java b/streams/src/main/java/org/ap

[kafka] branch trunk updated: KAFKA-6412 Improve synchronization in CachingKeyValueStore methods

2018-01-10 Thread damianguy
This is an automated email from the ASF dual-hosted git repository.

damianguy pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/kafka.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 874eeb8  KAFKA-6412 Improve synchronization in CachingKeyValueStore methods
874eeb8 is described below

commit 874eeb88d9dc5e8fbcd681672fb90a4cd7597fec
Author: tedyu <yuzhih...@gmail.com>
AuthorDate: Wed Jan 10 10:21:51 2018 +

KAFKA-6412 Improve synchronization in CachingKeyValueStore methods

Currently CachingKeyValueStore methods are synchronized at method level.

It seems we can use a read lock for the getters and a write lock for the
put / delete methods.

For getInternal(), if the calling thread is the stream thread, getInternal()
may trigger cache eviction. This can be handled by obtaining the write lock at
the beginning of the method when called from the stream thread.

The jmh patch is attached to JIRA:
https://issues.apache.org/jira/secure/attachment/12905140/6412-jmh.v1.txt
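
The locking discipline described above, distilled into a standalone sketch
(the store details are elided; the Supplier stands in for the underlying
cache/store read):

    import java.util.concurrent.locks.Lock;
    import java.util.concurrent.locks.ReentrantReadWriteLock;
    import java.util.function.Supplier;

    public class StreamThreadAwareLockSketch {
        private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
        private final Thread streamThread;

        StreamThreadAwareLockSketch(final Thread streamThread) {
            this.streamThread = streamThread;
        }

        byte[] get(final Supplier<byte[]> reader) {
            // A read from the stream thread may trigger cache eviction (a write),
            // so it must take the write lock; other threads share the read lock.
            final Lock theLock = Thread.currentThread().equals(streamThread)
                    ? lock.writeLock()
                    : lock.readLock();
            theLock.lock();
            try {
                return reader.get();
            } finally {
                theLock.unlock();
            }
        }
    }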

Author: tedyu <yuzhih...@gmail.com>

Reviewers: Damian Guy <damian@gmail.com>, Bill Bejeck 
<b...@confluent.io>

Closes #4372 from tedyu/6412
---
 .../state/internals/CachingKeyValueStore.java  | 97 --
 1 file changed, 71 insertions(+), 26 deletions(-)

diff --git a/streams/src/main/java/org/apache/kafka/streams/state/internals/CachingKeyValueStore.java b/streams/src/main/java/org/apache/kafka/streams/state/internals/CachingKeyValueStore.java
index f0669a4..9fff8cc 100644
--- a/streams/src/main/java/org/apache/kafka/streams/state/internals/CachingKeyValueStore.java
+++ b/streams/src/main/java/org/apache/kafka/streams/state/internals/CachingKeyValueStore.java
@@ -31,6 +31,9 @@ import org.apache.kafka.streams.state.StateSerdes;
 
 import java.util.List;
 import java.util.Objects;
+import java.util.concurrent.locks.Lock;
+import java.util.concurrent.locks.ReadWriteLock;
+import java.util.concurrent.locks.ReentrantReadWriteLock;
 
class CachingKeyValueStore<K, V> extends WrappedStateStore.AbstractStateStore implements KeyValueStore<Bytes, byte[]>, CachedStateStore<K, V> {
 
@@ -44,6 +47,7 @@ class CachingKeyValueStore<K, V> extends WrappedStateStore.AbstractStateStore im
 private InternalProcessorContext context;
 private StateSerdes<K, V> serdes;
 private Thread streamThread;
+private ReadWriteLock lock = new ReentrantReadWriteLock();
 
 CachingKeyValueStore(final KeyValueStore<Bytes, byte[]> underlying,
  final Serde keySerde,
@@ -108,9 +112,14 @@ class CachingKeyValueStore<K, V> extends WrappedStateStore.AbstractStateStore im
 }
 
 @Override
-public synchronized void flush() {
-cache.flush(cacheName);
-underlying.flush();
+public void flush() {
+lock.writeLock().lock();
+try {
+cache.flush(cacheName);
+underlying.flush();
+} finally {
+lock.writeLock().unlock();
+}
 }
 
 @Override
@@ -131,10 +140,21 @@ class CachingKeyValueStore<K, V> extends WrappedStateStore.AbstractStateStore im
 }
 
 @Override
-public synchronized byte[] get(final Bytes key) {
+public byte[] get(final Bytes key) {
 validateStoreOpen();
-Objects.requireNonNull(key);
-return getInternal(key);
+Lock theLock;
+if (Thread.currentThread().equals(streamThread)) {
+theLock = lock.writeLock();
+} else {
+theLock = lock.readLock();
+}
+theLock.lock();
+try {
+Objects.requireNonNull(key);
+return getInternal(key);
+} finally {
+theLock.unlock();
+}
 }
 
 private byte[] getInternal(final Bytes key) {
@@ -176,50 +196,75 @@ class CachingKeyValueStore<K, V> extends WrappedStateStore.AbstractStateStore im
 }
 
 @Override
-public synchronized long approximateNumEntries() {
+public long approximateNumEntries() {
 validateStoreOpen();
-return underlying.approximateNumEntries();
+lock.readLock().lock();
+try {
+return underlying.approximateNumEntries();
+} finally {
+lock.readLock().unlock();
+}
 }
 
 @Override
-public synchronized void put(final Bytes key, final byte[] value) {
+public void put(final Bytes key, final byte[] value) {
 Objects.requireNonNull(key, "key cannot be null");
 validateStoreOpen();
-putInternal(key, value);
+lock.writeLock().lock();
+try {
+putInternal(key, value);
+} finally {
+lock.writeLock().unlock();
+}
 }
 
-private synchronized void putInternal(final Bytes rawKey, final byte[] value) {
+private void putInternal(final Bytes r

[kafka] branch trunk updated: MINOR: Add documentation for KAFKA-6086 (ProductionExceptionHandler) (#4395)

2018-01-08 Thread damianguy
This is an automated email from the ASF dual-hosted git repository.

damianguy pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/kafka.git


The following commit(s) were added to refs/heads/trunk by this push:
 new e1c5d0c  MINOR: Add documentation for KAFKA-6086 (ProductionExceptionHandler) (#4395)
e1c5d0c is described below

commit e1c5d0c119b38a9ddb2b09b6309a3817d86d8e14
Author: Matt Farmer <m...@frmr.me>
AuthorDate: Mon Jan 8 06:33:23 2018 -0500

MINOR: Add documentation for KAFKA-6086 (ProductionExceptionHandler) (#4395)

* Update streams documentation to describe production exception handler

* Add a mention of the ProductionExceptionHandler in the upgrade guide
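
A minimal sketch of enabling the newly documented parameter in an application
config; the property name and handler class are the ones named in this page,
while the application id and bootstrap servers are placeholders:

    import java.util.Properties;
    import org.apache.kafka.streams.StreamsConfig;

    public class ProductionHandlerConfigSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-app");         // placeholder
            props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "broker:9092"); // placeholder
            // The new optional configuration parameter and its default value:
            props.put("default.production.exception.handler",
                      "org.apache.kafka.streams.errors.DefaultProductionExceptionHandler");
            StreamsConfig config = new StreamsConfig(props);
        }
    }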
---
 docs/streams/developer-guide/config-streams.html | 76 +++-
 docs/streams/upgrade-guide.html  |  9 ++-
 2 files changed, 68 insertions(+), 17 deletions(-)

diff --git a/docs/streams/developer-guide/config-streams.html b/docs/streams/developer-guide/config-streams.html
index dbac7fb..256cc18 100644
--- a/docs/streams/developer-guide/config-streams.html
+++ b/docs/streams/developer-guide/config-streams.html
@@ -69,6 +69,7 @@
   
   Optional configuration 
parameters
 default.deserialization.exception.handler
+default.production.exception.handler
 default.key.serde
 default.value.serde
 num.standby.replicas
@@ -216,77 +217,82 @@
 Exception handling class that implements the DeserializationExceptionHandler interface.
 3 milliseconds
   
-  key.serde
+  default.production.exception.handler
+Medium
+Exception handling class that implements the ProductionExceptionHandler interface.
+DefaultProductionExceptionHandler
+  
+  key.serde
 Medium
 Default serializer/deserializer class for record 
keys, implements the Serde interface (see also value.serde).
 Serdes.ByteArray().getClass().getName()
   
-  metric.reporters
+  metric.reporters
 Low
 A list of classes to use as metrics reporters.
 the empty list
   
-  metrics.num.samples
+  metrics.num.samples
 Low
 The number of samples maintained to compute 
metrics.
 2
   
-  metrics.recording.level
+  metrics.recording.level
 Low
 The highest recording level for metrics.
 INFO
   
-  metrics.sample.window.ms
+  metrics.sample.window.ms
 Low
 The window of time a metrics sample is computed 
over.
 3 milliseconds
   
-  num.standby.replicas
+  num.standby.replicas
 Medium
 The number of standby replicas for each task.
 0
   
-  num.stream.threads
+  num.stream.threads
 Medium
 The number of threads to execute stream 
processing.
 1
   
-  partition.grouper
+  partition.grouper
 Low
 Partition grouper class that implements the PartitionGrouper 
interface.
 See Partition Grouper
   
-  poll.ms
+  poll.ms
 Low
 The amount of time in milliseconds to block 
waiting for input.
 100 milliseconds
   
-  replication.factor
+  replication.factor
 High
 The replication factor for changelog topics and 
repartition topics created by the application.
 1
   
-  state.cleanup.delay.ms
+  state.cleanup.delay.ms
 Low
 The amount of time in milliseconds to wait before 
deleting state when a partition has migrated.
 600 milliseconds
   
-  state.dir
+  state.dir
 High
 Directory location for state stores.
 /var/lib/kafka-streams
   
-  timestamp.extractor
+  timestamp.extractor
 Medium
 Timestamp extractor class that implements the 
TimestampExtractor interface.
 See Timestamp Extractor
   
-  value.serde
+  value.serde
 Medium
 Default serializer/deserializer class for record 
values, implements the Serde interface (see also key.serde).
 Serdes.ByteArray().getClass().getName()
   
-  windowstore.changelog.additional.retention.ms
+  windowstore.changelog.additional.retention.ms
 Low
 Added to a windows maintainMs to ensure data is 
not deleted from the log prematurely. Allows for clock drift.
 8640 milliseconds = 1 day
@@ -309,6 +

kafka git commit: KAFKA-6086: Provide for custom error handling when Kafka Streams fails to produce

2017-12-15 Thread damianguy
Repository: kafka
Updated Branches:
  refs/heads/trunk 68712dcde -> 69777260e


KAFKA-6086: Provide for custom error handling when Kafka Streams fails to produce

This PR creates and implements the `ProductionExceptionHandler` as described in 
[KIP-210](https://cwiki.apache.org/confluence/display/KAFKA/KIP-210+-+Provide+for+custom+error+handling++when+Kafka+Streams+fails+to+produce).

I've additionally provided a default implementation preserving the existing
behavior. I fixed various compile errors in the tests that resulted from my
changes to the method signatures, and added tests to cover the new behavior.
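
A minimal sketch of a custom handler under KIP-210, using the interface and
response enum introduced by this change; the skip-on-RecordTooLarge policy is
an illustrative choice, not part of this patch:

    import java.util.Map;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.errors.RecordTooLargeException;
    import org.apache.kafka.streams.errors.ProductionExceptionHandler;

    public class SkipOversizedRecordsHandler implements ProductionExceptionHandler {

        @Override
        public ProductionExceptionHandlerResponse handle(final ProducerRecord<byte[], byte[]> record,
                                                         final Exception exception) {
            // Keep processing when a single record is too large; fail otherwise,
            // matching the pre-KIP-210 behavior.
            return exception instanceof RecordTooLargeException
                    ? ProductionExceptionHandlerResponse.CONTINUE
                    : ProductionExceptionHandlerResponse.FAIL;
        }

        @Override
        public void configure(final Map<String, ?> configs) { }
    }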

Author: Matt Farmer 
Author: Matt Farmer 

Reviewers: Matthias J. Sax , Bill Bejeck 
, Damian Guy 

Closes #4165 from farmdawgnation/msf/kafka-6086


Project: http://git-wip-us.apache.org/repos/asf/kafka/repo
Commit: http://git-wip-us.apache.org/repos/asf/kafka/commit/69777260
Tree: http://git-wip-us.apache.org/repos/asf/kafka/tree/69777260
Diff: http://git-wip-us.apache.org/repos/asf/kafka/diff/69777260

Branch: refs/heads/trunk
Commit: 69777260e05ab12ee8480c23cd2e6acc6e218a12
Parents: 68712dc
Author: Matt Farmer 
Authored: Fri Dec 15 12:53:17 2017 +
Committer: Damian Guy 
Committed: Fri Dec 15 12:53:17 2017 +

--
 .../org/apache/kafka/streams/StreamsConfig.java |  28 +++--
 .../DefaultProductionExceptionHandler.java  |  37 +++
 .../errors/ProductionExceptionHandler.java  |  59 +++
 .../internals/RecordCollectorImpl.java  |  86 
 .../streams/processor/internals/StreamTask.java |  10 +-
 ...lwaysContinueProductionExceptionHandler.java |  37 +++
 .../processor/internals/ProcessorNodeTest.java  |   5 +-
 .../internals/RecordCollectorTest.java  | 103 +--
 .../processor/internals/RecordQueueTest.java|   3 +-
 .../processor/internals/SinkNodeTest.java   |   5 +-
 .../processor/internals/StreamTaskTest.java |   4 +-
 .../streams/state/KeyValueStoreTestDriver.java  |   3 +-
 .../state/internals/RocksDBWindowStoreTest.java |  23 +++--
 .../state/internals/StoreChangeLoggerTest.java  |   3 +-
 .../apache/kafka/test/KStreamTestDriver.java|   5 +-
 15 files changed, 350 insertions(+), 61 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/kafka/blob/69777260/streams/src/main/java/org/apache/kafka/streams/StreamsConfig.java
--
diff --git a/streams/src/main/java/org/apache/kafka/streams/StreamsConfig.java b/streams/src/main/java/org/apache/kafka/streams/StreamsConfig.java
index d78fc0d..ecc8409 100644
--- a/streams/src/main/java/org/apache/kafka/streams/StreamsConfig.java
+++ b/streams/src/main/java/org/apache/kafka/streams/StreamsConfig.java
@@ -31,6 +31,8 @@ import org.apache.kafka.common.serialization.Serde;
 import org.apache.kafka.common.serialization.Serdes;
 import org.apache.kafka.streams.errors.DeserializationExceptionHandler;
 import org.apache.kafka.streams.errors.LogAndFailExceptionHandler;
+import org.apache.kafka.streams.errors.ProductionExceptionHandler;
+import org.apache.kafka.streams.errors.DefaultProductionExceptionHandler;
 import org.apache.kafka.streams.errors.StreamsException;
 import org.apache.kafka.streams.processor.DefaultPartitionGrouper;
 import org.apache.kafka.streams.processor.FailOnInvalidTimestamp;
@@ -73,18 +75,18 @@ import static org.apache.kafka.common.requests.IsolationLevel.READ_COMMITTED;
  *
  * StreamsConfig streamsConfig = new StreamsConfig(streamsProperties);
  * }
- * 
+ *
  * Kafka Streams requires at least the following properties to be set:
  * 
  *  {@link #APPLICATION_ID_CONFIG "application.id"}
  *  {@link #BOOTSTRAP_SERVERS_CONFIG "bootstrap.servers"}
  * 
- * 
+ *
 * By default, Kafka Streams does not allow users to overwrite the following properties (Streams setting shown in parentheses):
  * 
 *   {@link ConsumerConfig#ENABLE_AUTO_COMMIT_CONFIG "enable.auto.commit"} (false) - Streams client will always disable/turn off auto committing
  * 
- * 
+ *
 * If {@link #PROCESSING_GUARANTEE_CONFIG "processing.guarantee"} is set to {@link #EXACTLY_ONCE "exactly_once"}, Kafka Streams does not allow users to overwrite the following properties (Streams setting shown in parentheses):
  * 
 *   {@link ConsumerConfig#ISOLATION_LEVEL_CONFIG "isolation.level"} (read_committed) - Consumers will always read committed data only
@@ -184,6 +186,11 @@ public class StreamsConfig extends AbstractConfig {
 public static final String DEFAULT_DESERIALIZATION_EXCEPTION_HANDLER_CLASS_CONFIG = "default.deserialization.exception.handler";
 private static final String DEFAULT_DESERIALIZATION_EXCEPTION_HANDLER_CLASS_DOC = "Exception 
[1/2] kafka git commit: KAFKA-6121: Restore and global consumer should not use auto.offset.reset

2017-12-11 Thread damianguy
Repository: kafka
Updated Branches:
  refs/heads/trunk 2bf2348b5 -> 043951753


http://git-wip-us.apache.org/repos/asf/kafka/blob/04395175/streams/src/test/java/org/apache/kafka/streams/processor/internals/GlobalStateManagerImplTest.java
--
diff --git a/streams/src/test/java/org/apache/kafka/streams/processor/internals/GlobalStateManagerImplTest.java b/streams/src/test/java/org/apache/kafka/streams/processor/internals/GlobalStateManagerImplTest.java
index 20cf125..df8d201 100644
--- a/streams/src/test/java/org/apache/kafka/streams/processor/internals/GlobalStateManagerImplTest.java
+++ b/streams/src/test/java/org/apache/kafka/streams/processor/internals/GlobalStateManagerImplTest.java
@@ -17,6 +17,7 @@
 package org.apache.kafka.streams.processor.internals;
 
 import org.apache.kafka.clients.consumer.ConsumerRecord;
+import org.apache.kafka.clients.consumer.InvalidOffsetException;
 import org.apache.kafka.clients.consumer.MockConsumer;
 import org.apache.kafka.clients.consumer.OffsetResetStrategy;
 import org.apache.kafka.common.PartitionInfo;
@@ -33,8 +34,8 @@ import org.apache.kafka.streams.errors.StreamsException;
 import org.apache.kafka.streams.processor.StateRestoreCallback;
 import org.apache.kafka.streams.processor.StateStore;
 import org.apache.kafka.streams.state.internals.OffsetCheckpoint;
+import org.apache.kafka.test.MockProcessorContext;
 import org.apache.kafka.test.MockStateRestoreListener;
-import org.apache.kafka.test.NoOpProcessorContext;
 import org.apache.kafka.test.NoOpReadOnlyStore;
 import org.apache.kafka.test.TestUtils;
 import org.junit.After;
@@ -70,45 +71,57 @@ public class GlobalStateManagerImplTest {
 private final MockTime time = new MockTime();
 private final TheStateRestoreCallback stateRestoreCallback = new TheStateRestoreCallback();
 private final MockStateRestoreListener stateRestoreListener = new MockStateRestoreListener();
+private final String storeName1 = "t1-store";
+private final String storeName2 = "t2-store";
+private final String storeName3 = "t3-store";
+private final String storeName4 = "t4-store";
 private final TopicPartition t1 = new TopicPartition("t1", 1);
 private final TopicPartition t2 = new TopicPartition("t2", 1);
+private final TopicPartition t3 = new TopicPartition("t3", 1);
+private final TopicPartition t4 = new TopicPartition("t4", 1);
 private GlobalStateManagerImpl stateManager;
-private NoOpProcessorContext context;
 private StateDirectory stateDirectory;
-private StreamsConfig config;
-private NoOpReadOnlyStore store1;
-private NoOpReadOnlyStore store2;
+private StreamsConfig streamsConfig;
+private NoOpReadOnlyStore store1, store2, store3, store4;
 private MockConsumer consumer;
 private File checkpointFile;
 private ProcessorTopology topology;
+private MockProcessorContext mockProcessorContext;
 
 @Before
 public void before() throws IOException {
 final Map storeToTopic = new HashMap<>();
-store1 = new NoOpReadOnlyStore<>("t1-store");
-store2 = new NoOpReadOnlyStore("t2-store");
-storeToTopic.put("t1-store", "t1");
-storeToTopic.put("t2-store", "t2");
 
-topology = ProcessorTopology.withGlobalStores(Utils.mkList(store1, store2), storeToTopic);
+storeToTopic.put(storeName1, t1.topic());
+storeToTopic.put(storeName2, t2.topic());
+storeToTopic.put(storeName3, t3.topic());
+storeToTopic.put(storeName4, t4.topic());
 
-context = new NoOpProcessorContext();
-config = new StreamsConfig(new Properties() {
+store1 = new NoOpReadOnlyStore<>(storeName1, true);
+store2 = new NoOpReadOnlyStore<>(storeName2, true);
+store3 = new NoOpReadOnlyStore<>(storeName3);
+store4 = new NoOpReadOnlyStore<>(storeName4);
+
+topology = ProcessorTopology.withGlobalStores(Utils.mkList(store1, store2, store3, store4), storeToTopic);
+
+streamsConfig = new StreamsConfig(new Properties() {
 {
 put(StreamsConfig.APPLICATION_ID_CONFIG, "appId");
 put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "dummy:1234");
 put(StreamsConfig.STATE_DIR_CONFIG, TestUtils.tempDirectory().getPath());
 }
 });
-stateDirectory = new StateDirectory(config, time);
-consumer = new MockConsumer<>(OffsetResetStrategy.EARLIEST);
+stateDirectory = new StateDirectory(streamsConfig, time);
+consumer = new MockConsumer<>(OffsetResetStrategy.NONE);
 stateManager = new GlobalStateManagerImpl(
-new LogContext("mock"),
+new LogContext("test"),
 topology,
 consumer,
 stateDirectory,
 stateRestoreListener,
-config);
+

[2/2] kafka git commit: KAFKA-6121: Restore and global consumer should not use auto.offset.reset

2017-12-11 Thread damianguy
KAFKA-6121: Restore and global consumer should not use auto.offset.reset

- set auto.offset.reset to "none" for the restore and global consumers
- handle InvalidOffsetException for restore and global consumer
- add corresponding tests
- some minor cleanup

Author: Matthias J. Sax 

Reviewers: Damian Guy , 
GuozhangWang 

Closes #4215 from mjsax/kafka-6121-restore-global-consumer-handle-reset
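
A minimal sketch (not the committed code) of the consumer pattern this change
adopts: with auto.offset.reset set to "none", an out-of-range or missing offset
surfaces as an InvalidOffsetException that the caller handles explicitly,
instead of the consumer silently resetting to earliest or latest. Topic,
partition, and offset below are hypothetical.

import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.InvalidOffsetException;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.serialization.ByteArrayDeserializer;

public class RestoreConsumerSketch {
    public static void main(final String[] args) {
        final Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, ByteArrayDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, ByteArrayDeserializer.class.getName());
        // "none": throw instead of silently jumping to earliest/latest
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "none");

        final TopicPartition changelog = new TopicPartition("app-store-changelog", 0);
        try (KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(props)) {
            consumer.assign(Collections.singletonList(changelog));
            consumer.seek(changelog, 42L); // checkpointed offset (hypothetical)
            consumer.poll(100);
        } catch (final InvalidOffsetException e) {
            // the checkpoint is no longer valid: wipe local state and restore
            // from scratch rather than trusting an automatic reset
            System.err.println("invalid checkpoint for " + e.partitions());
        }
    }
}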


Project: http://git-wip-us.apache.org/repos/asf/kafka/repo
Commit: http://git-wip-us.apache.org/repos/asf/kafka/commit/04395175
Tree: http://git-wip-us.apache.org/repos/asf/kafka/tree/04395175
Diff: http://git-wip-us.apache.org/repos/asf/kafka/diff/04395175

Branch: refs/heads/trunk
Commit: 043951753b6fb6c8bae6d25d7a6a97e74b614cac
Parents: 2bf2348
Author: Matthias J. Sax 
Authored: Mon Dec 11 14:20:10 2017 +
Committer: Damian Guy 
Committed: Mon Dec 11 14:20:10 2017 +

--
 .../kafka/clients/consumer/MockConsumer.java|   8 +
 .../org/apache/kafka/streams/StreamsConfig.java |   1 +
 .../internals/AbstractProcessorContext.java |   5 +
 .../internals/AbstractStateManager.java | 114 ++
 .../processor/internals/AbstractTask.java   |   6 +-
 .../processor/internals/GlobalStateManager.java |   5 +-
 .../internals/GlobalStateManagerImpl.java   |  90 +---
 .../internals/GlobalStateUpdateTask.java|   4 +-
 .../processor/internals/GlobalStreamThread.java |  38 ++--
 .../internals/InternalProcessorContext.java |   7 +-
 .../internals/ProcessorStateManager.java|  70 +++---
 .../processor/internals/StandbyTask.java|   6 +-
 .../processor/internals/StateManager.java   |   6 +-
 .../internals/StoreChangelogReader.java |  21 +-
 .../streams/processor/internals/StreamTask.java |   6 -
 .../processor/internals/StreamThread.java   |  33 ++-
 .../internals/InnerMeteredKeyValueStore.java|   1 -
 .../internals/MeteredKeyValueBytesStore.java|   2 -
 .../apache/kafka/streams/StreamsConfigTest.java |   6 -
 .../processor/internals/AbstractTaskTest.java   | 147 +++--
 .../internals/GlobalStateManagerImplTest.java   | 216 +--
 .../internals/GlobalStreamThreadTest.java   | 111 --
 .../processor/internals/StateConsumerTest.java  |   2 +-
 .../processor/internals/StateManagerStub.java   |   8 +-
 .../internals/StoreChangelogReaderTest.java |  27 +++
 .../processor/internals/StreamTaskTest.java |  43 ++--
 .../processor/internals/StreamThreadTest.java   | 101 -
 .../kafka/test/GlobalStateManagerStub.java  |  17 +-
 .../apache/kafka/test/NoOpReadOnlyStore.java|  17 +-
 .../kafka/test/ProcessorTopologyTestDriver.java |  11 +-
 30 files changed, 862 insertions(+), 267 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/kafka/blob/04395175/clients/src/main/java/org/apache/kafka/clients/consumer/MockConsumer.java
--
diff --git 
a/clients/src/main/java/org/apache/kafka/clients/consumer/MockConsumer.java 
b/clients/src/main/java/org/apache/kafka/clients/consumer/MockConsumer.java
index 9b0c058..10aedbb 100644
--- a/clients/src/main/java/org/apache/kafka/clients/consumer/MockConsumer.java
+++ b/clients/src/main/java/org/apache/kafka/clients/consumer/MockConsumer.java
@@ -108,6 +108,14 @@ public class MockConsumer implements Consumer {
 }
 ensureNotClosed();
 this.subscriptions.subscribeFromPattern(topicsToSubscribe);
+final Set assignedPartitions = new HashSet<>();
+for (final String topic : topicsToSubscribe) {
+for (final PartitionInfo info : this.partitions.get(topic)) {
+assignedPartitions.add(new TopicPartition(topic, 
info.partition()));
+}
+
+}
+subscriptions.assignFromSubscribed(assignedPartitions);
 }
 
 @Override

http://git-wip-us.apache.org/repos/asf/kafka/blob/04395175/streams/src/main/java/org/apache/kafka/streams/StreamsConfig.java
--
diff --git a/streams/src/main/java/org/apache/kafka/streams/StreamsConfig.java 
b/streams/src/main/java/org/apache/kafka/streams/StreamsConfig.java
index 49b8a3c..d78fc0d 100644
--- a/streams/src/main/java/org/apache/kafka/streams/StreamsConfig.java
+++ b/streams/src/main/java/org/apache/kafka/streams/StreamsConfig.java
@@ -763,6 +763,7 @@ public class StreamsConfig extends AbstractConfig {
 consumerProps.remove(ConsumerConfig.GROUP_ID_CONFIG);
 // add client id with stream client id prefix
 consumerProps.put(CommonClientConfigs.CLIENT_ID_CONFIG, clientId + 
"-restore-consumer");
+

kafka git commit: MINOR: improve flaky Streams tests

2017-11-22 Thread damianguy
Repository: kafka
Updated Branches:
  refs/heads/0.11.0 8ea4a2826 -> c89c6b873


MINOR: improve flaky Streams tests

Use the TestUtils temp directory for the state directory instead of the default
/tmp/kafka-streams
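
The change boils down to the following configuration pattern, shown here as a
sketch rather than the test code itself (application id and bootstrap servers
are placeholders):

import java.util.Properties;

import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.test.TestUtils;

final class TestStateDir {
    static Properties streamsTestConfig() {
        final Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "test-app");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        // a fresh throw-away directory per test instead of the shared default
        // /tmp/kafka-streams, so concurrent or repeated runs cannot collide
        props.put(StreamsConfig.STATE_DIR_CONFIG, TestUtils.tempDirectory().getPath());
        return props;
    }
}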

Author: Matthias J. Sax 

Reviewers: Damian Guy 

Closes #4246 from mjsax/improve-flaky-streams-tests


Project: http://git-wip-us.apache.org/repos/asf/kafka/repo
Commit: http://git-wip-us.apache.org/repos/asf/kafka/commit/c89c6b87
Tree: http://git-wip-us.apache.org/repos/asf/kafka/tree/c89c6b87
Diff: http://git-wip-us.apache.org/repos/asf/kafka/diff/c89c6b87

Branch: refs/heads/0.11.0
Commit: c89c6b87365c8a8482e1ddac23079af7f9faff0c
Parents: 8ea4a28
Author: Matthias J. Sax 
Authored: Wed Nov 22 10:55:42 2017 +
Committer: Damian Guy 
Committed: Wed Nov 22 10:55:42 2017 +

--
 .../apache/kafka/streams/KafkaStreamsTest.java  | 81 
 .../integration/FanoutIntegrationTest.java  |  2 +
 2 files changed, 16 insertions(+), 67 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/kafka/blob/c89c6b87/streams/src/test/java/org/apache/kafka/streams/KafkaStreamsTest.java
--
diff --git 
a/streams/src/test/java/org/apache/kafka/streams/KafkaStreamsTest.java 
b/streams/src/test/java/org/apache/kafka/streams/KafkaStreamsTest.java
index 8eea60c..064d7b8 100644
--- a/streams/src/test/java/org/apache/kafka/streams/KafkaStreamsTest.java
+++ b/streams/src/test/java/org/apache/kafka/streams/KafkaStreamsTest.java
@@ -16,7 +16,6 @@
  */
 package org.apache.kafka.streams;
 
-import org.apache.kafka.clients.consumer.ConsumerConfig;
 import org.apache.kafka.common.Metric;
 import org.apache.kafka.common.MetricName;
 import org.apache.kafka.common.config.ConfigException;
@@ -63,6 +62,7 @@ public class KafkaStreamsTest {
 // quick enough)
 @ClassRule
 public static final EmbeddedKafkaCluster CLUSTER = new 
EmbeddedKafkaCluster(NUM_BROKERS);
+
 private final KStreamBuilder builder = new KStreamBuilder();
 private KafkaStreams streams;
 private Properties props;
@@ -80,9 +80,6 @@ public class KafkaStreamsTest {
 
 @Test
 public void testStateChanges() throws Exception {
-final KStreamBuilder builder = new KStreamBuilder();
-final KafkaStreams streams = new KafkaStreams(builder, props);
-
 StateListenerStub stateListener = new StateListenerStub();
 streams.setStateListener(stateListener);
 Assert.assertEquals(streams.state(), KafkaStreams.State.CREATED);
@@ -101,9 +98,6 @@ public class KafkaStreamsTest {
 
 @Test
 public void testStateCloseAfterCreate() throws Exception {
-final KStreamBuilder builder = new KStreamBuilder();
-final KafkaStreams streams = new KafkaStreams(builder, props);
-
 StateListenerStub stateListener = new StateListenerStub();
 streams.setStateListener(stateListener);
 streams.close();
@@ -159,25 +153,20 @@ public class KafkaStreamsTest {
 
 @Test
 public void testStateThreadClose() throws Exception {
-final int numThreads = 2;
-final KStreamBuilder builder = new KStreamBuilder();
 // make sure we have the global state thread running too
 builder.globalTable("anyTopic");
-props.put(StreamsConfig.NUM_STREAM_THREADS_CONFIG, numThreads);
-final KafkaStreams streams = new KafkaStreams(builder, props);
 
-testStateThreadCloseHelper(numThreads);
+streams = new KafkaStreams(new KStreamBuilder(), props);
+
+testStateThreadCloseHelper(NUM_THREADS);
 }
 
 @Test
 public void testStateGlobalThreadClose() throws Exception {
-final int numThreads = 2;
 final KStreamBuilder builder = new KStreamBuilder();
 // make sure we have the global state thread running too
 builder.globalTable("anyTopic");
-props.put(StreamsConfig.NUM_STREAM_THREADS_CONFIG, numThreads);
-final KafkaStreams streams = new KafkaStreams(builder, props);
-
+streams = new KafkaStreams(builder, props);
 
 streams.start();
 TestUtils.waitForCondition(new TestCondition() {
@@ -260,7 +249,8 @@ public class KafkaStreamsTest {
 
 @Test
 public void testNumberDefaultMetrics() {
-final KafkaStreams streams = createKafkaStreams();
+props.put(StreamsConfig.NUM_STREAM_THREADS_CONFIG, 1);
+streams = new KafkaStreams(builder, props);
 final Map metrics = streams.metrics();
 // all 15 default StreamThread metrics + 1 metric that keeps track of 
number of metrics
 assertEquals(metrics.size(), 16);
@@ -268,11 +258,7 @@ public class KafkaStreamsTest {
 
 @Test
 public void 

kafka git commit: MINOR: improve flaky Streams system test

2017-11-22 Thread damianguy
Repository: kafka
Updated Branches:
  refs/heads/1.0 3139d2aba -> 8ff00faea


MINOR: improve flaky Streams system test

Handle TimeoutException in Producer callback and retry sending input data
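
A minimal sketch of the callback-retry pattern described above (an assumed
helper, not the SmokeTestDriver code): a send that failed with a
TimeoutException is simply submitted again, while any other failure is treated
as fatal.

import org.apache.kafka.clients.producer.Callback;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;
import org.apache.kafka.common.errors.TimeoutException;

final class RetryingSend {
    static <K, V> void sendWithRetry(final KafkaProducer<K, V> producer,
                                     final ProducerRecord<K, V> record) {
        producer.send(record, new Callback() {
            @Override
            public void onCompletion(final RecordMetadata metadata, final Exception exception) {
                if (exception instanceof TimeoutException) {
                    sendWithRetry(producer, record); // re-send the timed-out record
                } else if (exception != null) {
                    throw new RuntimeException("send failed", exception);
                }
            }
        });
    }
}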

Author: Matthias J. Sax 

Reviewers: Damian Guy 

Closes #4244 from mjsax/improve-flaky-system-test

(cherry picked from commit 80038e6d205a037ee969f1c5839ec03925cd8ba4)
Signed-off-by: Damian Guy 


Project: http://git-wip-us.apache.org/repos/asf/kafka/repo
Commit: http://git-wip-us.apache.org/repos/asf/kafka/commit/8ff00fae
Tree: http://git-wip-us.apache.org/repos/asf/kafka/tree/8ff00fae
Diff: http://git-wip-us.apache.org/repos/asf/kafka/diff/8ff00fae

Branch: refs/heads/1.0
Commit: 8ff00faea18a8bee54cdbb00b13ebd7baebca9cd
Parents: 3139d2a
Author: Matthias J. Sax 
Authored: Wed Nov 22 10:53:32 2017 +
Committer: Damian Guy 
Committed: Wed Nov 22 10:53:45 2017 +

--
 .../test/java/org/apache/kafka/streams/tests/SmokeTestDriver.java   | 1 +
 1 file changed, 1 insertion(+)
--


http://git-wip-us.apache.org/repos/asf/kafka/blob/8ff00fae/streams/src/test/java/org/apache/kafka/streams/tests/SmokeTestDriver.java
--
diff --git 
a/streams/src/test/java/org/apache/kafka/streams/tests/SmokeTestDriver.java 
b/streams/src/test/java/org/apache/kafka/streams/tests/SmokeTestDriver.java
index 9f8bcc3..a5aef2a 100644
--- a/streams/src/test/java/org/apache/kafka/streams/tests/SmokeTestDriver.java
+++ b/streams/src/test/java/org/apache/kafka/streams/tests/SmokeTestDriver.java
@@ -139,6 +139,7 @@ public class SmokeTestDriver extends SmokeTestUtil {
 // no duplicates
 producerProps.put(ProducerConfig.RETRIES_CONFIG, Integer.MAX_VALUE);
 producerProps.put(ProducerConfig.ACKS_CONFIG, "all");
+producerProps.put(ProducerConfig.REQUEST_TIMEOUT_MS_CONFIG, 45000);
 
 KafkaProducer producer = new 
KafkaProducer<>(producerProps);
 



kafka git commit: MINOR: improve flaky Streams system test

2017-11-22 Thread damianguy
Repository: kafka
Updated Branches:
  refs/heads/trunk 225b0b9c7 -> 80038e6d2


MINOR: improve flaky Streams system test

Handle TimeoutException in Producer callback and retry sending input data

Author: Matthias J. Sax 

Reviewers: Damian Guy 

Closes #4244 from mjsax/improve-flaky-system-test


Project: http://git-wip-us.apache.org/repos/asf/kafka/repo
Commit: http://git-wip-us.apache.org/repos/asf/kafka/commit/80038e6d
Tree: http://git-wip-us.apache.org/repos/asf/kafka/tree/80038e6d
Diff: http://git-wip-us.apache.org/repos/asf/kafka/diff/80038e6d

Branch: refs/heads/trunk
Commit: 80038e6d205a037ee969f1c5839ec03925cd8ba4
Parents: 225b0b9
Author: Matthias J. Sax 
Authored: Wed Nov 22 10:53:32 2017 +
Committer: Damian Guy 
Committed: Wed Nov 22 10:53:32 2017 +

--
 .../test/java/org/apache/kafka/streams/tests/SmokeTestDriver.java   | 1 +
 1 file changed, 1 insertion(+)
--


http://git-wip-us.apache.org/repos/asf/kafka/blob/80038e6d/streams/src/test/java/org/apache/kafka/streams/tests/SmokeTestDriver.java
--
diff --git 
a/streams/src/test/java/org/apache/kafka/streams/tests/SmokeTestDriver.java 
b/streams/src/test/java/org/apache/kafka/streams/tests/SmokeTestDriver.java
index 9f8bcc3..a5aef2a 100644
--- a/streams/src/test/java/org/apache/kafka/streams/tests/SmokeTestDriver.java
+++ b/streams/src/test/java/org/apache/kafka/streams/tests/SmokeTestDriver.java
@@ -139,6 +139,7 @@ public class SmokeTestDriver extends SmokeTestUtil {
 // no duplicates
 producerProps.put(ProducerConfig.RETRIES_CONFIG, Integer.MAX_VALUE);
 producerProps.put(ProducerConfig.ACKS_CONFIG, "all");
+producerProps.put(ProducerConfig.REQUEST_TIMEOUT_MS_CONFIG, 45000);
 
 KafkaProducer producer = new 
KafkaProducer<>(producerProps);
 



kafka git commit: MINOR: add hint for setting an uncaught exception handler to JavaDocs

2017-10-23 Thread damianguy
Repository: kafka
Updated Branches:
  refs/heads/trunk 86cd558b3 -> c216adb4b


MINOR: add hint for setting an uncaught exception handler to JavaDocs

Author: Matthias J. Sax 

Reviewers: Bill Bejeck , Damian Guy 

Closes #4104 from mjsax/minor-uncaught-exception-handler


Project: http://git-wip-us.apache.org/repos/asf/kafka/repo
Commit: http://git-wip-us.apache.org/repos/asf/kafka/commit/c216adb4
Tree: http://git-wip-us.apache.org/repos/asf/kafka/tree/c216adb4
Diff: http://git-wip-us.apache.org/repos/asf/kafka/diff/c216adb4

Branch: refs/heads/trunk
Commit: c216adb4bbf8306977380a1ec371380e30137765
Parents: 86cd558
Author: Matthias J. Sax 
Authored: Mon Oct 23 10:33:51 2017 +0100
Committer: Damian Guy 
Committed: Mon Oct 23 10:33:51 2017 +0100

--
 .../src/main/java/org/apache/kafka/streams/KafkaStreams.java   | 6 ++
 1 file changed, 6 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/kafka/blob/c216adb4/streams/src/main/java/org/apache/kafka/streams/KafkaStreams.java
--
diff --git a/streams/src/main/java/org/apache/kafka/streams/KafkaStreams.java 
b/streams/src/main/java/org/apache/kafka/streams/KafkaStreams.java
index ae4ef34..6e48f19 100644
--- a/streams/src/main/java/org/apache/kafka/streams/KafkaStreams.java
+++ b/streams/src/main/java/org/apache/kafka/streams/KafkaStreams.java
@@ -723,6 +723,12 @@ public class KafkaStreams {
  * Start the {@code KafkaStreams} instance by starting all its threads.
  * This function is expected to be called only once during the life cycle 
of the client.
  * 
+ * Because threads are started in the background, this method does not 
block.
+ * As a consequence, any fatal exception that happens during processing is 
by default only logged.
+ * If you want to be notified about dying threads, you can
+ * {@link #setUncaughtExceptionHandler(Thread.UncaughtExceptionHandler) 
register an uncaught exception handler}
+ * before starting the {@code KafkaStreams} instance.
+ * 
  * Note, for brokers with version {@code 0.9.x} or lower, the broker 
version cannot be checked.
  * There will be no error and the client will hang and retry to verify the 
broker version until it
  * {@link StreamsConfig#REQUEST_TIMEOUT_MS_CONFIG times out}.
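
A minimal usage sketch of the hint (topology and props are assumed to be built
elsewhere): the handler must be registered before start(), because the stream
threads run in the background and a fatal exception is otherwise only logged.

import java.util.Properties;

import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.Topology;

final class StartWithHandler {
    static KafkaStreams startWithHandler(final Topology topology, final Properties props) {
        final KafkaStreams streams = new KafkaStreams(topology, props);
        // register the handler before start(), not after
        streams.setUncaughtExceptionHandler((thread, throwable) ->
                System.err.println("stream thread " + thread.getName() + " died: " + throwable));
        streams.start();
        return streams;
    }
}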



kafka git commit: MINOR: add hint for setting an uncaught exception handler to JavaDocs

2017-10-23 Thread damianguy
Repository: kafka
Updated Branches:
  refs/heads/1.0 5ee157126 -> 2a3219413


MINOR: add hint for setting an uncaught exception handler to JavaDocs

Author: Matthias J. Sax 

Reviewers: Bill Bejeck , Damian Guy 

Closes #4104 from mjsax/minor-uncaught-exception-handler

(cherry picked from commit c216adb4bbf8306977380a1ec371380e30137765)
Signed-off-by: Damian Guy 


Project: http://git-wip-us.apache.org/repos/asf/kafka/repo
Commit: http://git-wip-us.apache.org/repos/asf/kafka/commit/2a321941
Tree: http://git-wip-us.apache.org/repos/asf/kafka/tree/2a321941
Diff: http://git-wip-us.apache.org/repos/asf/kafka/diff/2a321941

Branch: refs/heads/1.0
Commit: 2a321941387c7739f2fbbbe592d017b703223ada
Parents: 5ee1571
Author: Matthias J. Sax 
Authored: Mon Oct 23 10:33:51 2017 +0100
Committer: Damian Guy 
Committed: Mon Oct 23 10:34:04 2017 +0100

--
 .../src/main/java/org/apache/kafka/streams/KafkaStreams.java   | 6 ++
 1 file changed, 6 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/kafka/blob/2a321941/streams/src/main/java/org/apache/kafka/streams/KafkaStreams.java
--
diff --git a/streams/src/main/java/org/apache/kafka/streams/KafkaStreams.java 
b/streams/src/main/java/org/apache/kafka/streams/KafkaStreams.java
index ae4ef34..6e48f19 100644
--- a/streams/src/main/java/org/apache/kafka/streams/KafkaStreams.java
+++ b/streams/src/main/java/org/apache/kafka/streams/KafkaStreams.java
@@ -723,6 +723,12 @@ public class KafkaStreams {
  * Start the {@code KafkaStreams} instance by starting all its threads.
  * This function is expected to be called only once during the life cycle 
of the client.
  * 
+ * Because threads are started in the background, this method does not 
block.
+ * As a consequence, any fatal exception that happens during processing is 
by default only logged.
+ * If you want to be notified about dying threads, you can
+ * {@link #setUncaughtExceptionHandler(Thread.UncaughtExceptionHandler) 
register an uncaught exception handler}
+ * before starting the {@code KafkaStreams} instance.
+ * 
  * Note, for brokers with version {@code 0.9.x} or lower, the broker 
version cannot be checked.
  * There will be no error and the client will hang and retry to verify the 
broker version until it
  * {@link StreamsConfig#REQUEST_TIMEOUT_MS_CONFIG times out}.



kafka git commit: KAFKA-6069: Properly tag KafkaStreams metrics with the client id.

2017-10-19 Thread damianguy
Repository: kafka
Updated Branches:
  refs/heads/1.0 a1a32c1d3 -> fdc9b553a


KAFKA-6069: Properly tag KafkaStreams metrics with the client id.

Author: Tommy Becker 

Reviewers: Bill Bejeck , Damian Guy 

Closes #4081 from twbecker/KAFKA-6069
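
The bug is a key/value mix-up, visible in the one-line fix below:
StreamsConfig.CLIENT_ID_CONFIG is the configuration key, not the configured
id, so every metric was tagged with the literal string "client.id". A tiny
demonstration:

import org.apache.kafka.streams.StreamsConfig;

public class TagBugDemo {
    public static void main(final String[] args) {
        // the constant is the config *key*, not the user-supplied client id
        System.out.println(StreamsConfig.CLIENT_ID_CONFIG); // prints "client.id"
    }
}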

(cherry picked from commit 249e398bf84cdd475af6529e163e78486b43c570)
Signed-off-by: Damian Guy 


Project: http://git-wip-us.apache.org/repos/asf/kafka/repo
Commit: http://git-wip-us.apache.org/repos/asf/kafka/commit/fdc9b553
Tree: http://git-wip-us.apache.org/repos/asf/kafka/tree/fdc9b553
Diff: http://git-wip-us.apache.org/repos/asf/kafka/diff/fdc9b553

Branch: refs/heads/1.0
Commit: fdc9b553aab5186a8d5cd786b3c92893fc3e4f28
Parents: a1a32c1
Author: Tommy Becker 
Authored: Thu Oct 19 15:40:26 2017 +0100
Committer: Damian Guy 
Committed: Thu Oct 19 15:43:44 2017 +0100

--
 .../processor/internals/StreamsKafkaClient.java |  4 +-
 .../internals/StreamsKafkaClientTest.java   | 44 
 2 files changed, 46 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/kafka/blob/fdc9b553/streams/src/main/java/org/apache/kafka/streams/processor/internals/StreamsKafkaClient.java
--
diff --git 
a/streams/src/main/java/org/apache/kafka/streams/processor/internals/StreamsKafkaClient.java
 
b/streams/src/main/java/org/apache/kafka/streams/processor/internals/StreamsKafkaClient.java
index d725ed8..1e99ad2 100644
--- 
a/streams/src/main/java/org/apache/kafka/streams/processor/internals/StreamsKafkaClient.java
+++ 
b/streams/src/main/java/org/apache/kafka/streams/processor/internals/StreamsKafkaClient.java
@@ -111,7 +111,8 @@ public class StreamsKafkaClient {
 final Time time = new SystemTime();
 
 final Map metricTags = new LinkedHashMap<>();
-metricTags.put("client-id", StreamsConfig.CLIENT_ID_CONFIG);
+final String clientId = 
streamsConfig.getString(StreamsConfig.CLIENT_ID_CONFIG);
+metricTags.put("client-id", clientId);
 
 final Metadata metadata = new Metadata(streamsConfig.getLong(
 StreamsConfig.RETRY_BACKOFF_MS_CONFIG),
@@ -129,7 +130,6 @@ public class StreamsKafkaClient {
 final Metrics metrics = new Metrics(metricConfig, reporters, time);
 
 final ChannelBuilder channelBuilder = 
ClientUtils.createChannelBuilder(streamsConfig);
-final String clientId = 
streamsConfig.getString(StreamsConfig.CLIENT_ID_CONFIG);
 final LogContext logContext = createLogContext(clientId);
 
 final Selector selector = new Selector(

http://git-wip-us.apache.org/repos/asf/kafka/blob/fdc9b553/streams/src/test/java/org/apache/kafka/streams/processor/internals/StreamsKafkaClientTest.java
--
diff --git 
a/streams/src/test/java/org/apache/kafka/streams/processor/internals/StreamsKafkaClientTest.java
 
b/streams/src/test/java/org/apache/kafka/streams/processor/internals/StreamsKafkaClientTest.java
index 7a75b81..0bb7682 100644
--- 
a/streams/src/test/java/org/apache/kafka/streams/processor/internals/StreamsKafkaClientTest.java
+++ 
b/streams/src/test/java/org/apache/kafka/streams/processor/internals/StreamsKafkaClientTest.java
@@ -17,11 +17,13 @@
 package org.apache.kafka.streams.processor.internals;
 
 import org.apache.kafka.clients.MockClient;
+import org.apache.kafka.common.MetricName;
 import org.apache.kafka.common.Node;
 import org.apache.kafka.common.TopicPartition;
 import org.apache.kafka.common.config.AbstractConfig;
 import org.apache.kafka.common.config.SaslConfigs;
 import org.apache.kafka.common.config.TopicConfig;
+import org.apache.kafka.common.metrics.KafkaMetric;
 import org.apache.kafka.common.metrics.MetricsReporter;
 import org.apache.kafka.common.protocol.ApiKeys;
 import org.apache.kafka.common.protocol.Errors;
@@ -46,6 +48,7 @@ import java.util.Map;
 import static java.util.Arrays.asList;
 import static org.hamcrest.CoreMatchers.equalTo;
 import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertFalse;
 import static org.junit.Assert.assertThat;
 
 public class StreamsKafkaClientTest {
@@ -130,6 +133,17 @@ public class StreamsKafkaClientTest {
 verifyCorrectTopicConfigs(streamsKafkaClient, 
topicConfigWithNoOverrides, Collections.singletonMap("cleanup.policy", 
"delete"));
 }
 
+@Test
+public void metricsShouldBeTaggedWithClientId() {
+config.put(StreamsConfig.CLIENT_ID_CONFIG, "some_client_id");
+config.put(StreamsConfig.METRIC_REPORTER_CLASSES_CONFIG, 
TestMetricsReporter.class.getName());
+StreamsKafkaClient.create(new StreamsConfig(config));
+

kafka git commit: KAFKA-6069: Properly tag KafkaStreams metrics with the client id.

2017-10-19 Thread damianguy
Repository: kafka
Updated Branches:
  refs/heads/trunk 7fdafda97 -> 249e398bf


KAFKA-6069: Properly tag KafkaStreams metrics with the client id.

Author: Tommy Becker 

Reviewers: Bill Bejeck , Damian Guy 

Closes #4081 from twbecker/KAFKA-6069


Project: http://git-wip-us.apache.org/repos/asf/kafka/repo
Commit: http://git-wip-us.apache.org/repos/asf/kafka/commit/249e398b
Tree: http://git-wip-us.apache.org/repos/asf/kafka/tree/249e398b
Diff: http://git-wip-us.apache.org/repos/asf/kafka/diff/249e398b

Branch: refs/heads/trunk
Commit: 249e398bf84cdd475af6529e163e78486b43c570
Parents: 7fdafda
Author: Tommy Becker 
Authored: Thu Oct 19 15:40:26 2017 +0100
Committer: Damian Guy 
Committed: Thu Oct 19 15:40:26 2017 +0100

--
 .../processor/internals/StreamsKafkaClient.java |  4 +-
 .../internals/StreamsKafkaClientTest.java   | 44 
 2 files changed, 46 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/kafka/blob/249e398b/streams/src/main/java/org/apache/kafka/streams/processor/internals/StreamsKafkaClient.java
--
diff --git 
a/streams/src/main/java/org/apache/kafka/streams/processor/internals/StreamsKafkaClient.java
 
b/streams/src/main/java/org/apache/kafka/streams/processor/internals/StreamsKafkaClient.java
index d725ed8..1e99ad2 100644
--- 
a/streams/src/main/java/org/apache/kafka/streams/processor/internals/StreamsKafkaClient.java
+++ 
b/streams/src/main/java/org/apache/kafka/streams/processor/internals/StreamsKafkaClient.java
@@ -111,7 +111,8 @@ public class StreamsKafkaClient {
 final Time time = new SystemTime();
 
 final Map metricTags = new LinkedHashMap<>();
-metricTags.put("client-id", StreamsConfig.CLIENT_ID_CONFIG);
+final String clientId = 
streamsConfig.getString(StreamsConfig.CLIENT_ID_CONFIG);
+metricTags.put("client-id", clientId);
 
 final Metadata metadata = new Metadata(streamsConfig.getLong(
 StreamsConfig.RETRY_BACKOFF_MS_CONFIG),
@@ -129,7 +130,6 @@ public class StreamsKafkaClient {
 final Metrics metrics = new Metrics(metricConfig, reporters, time);
 
 final ChannelBuilder channelBuilder = 
ClientUtils.createChannelBuilder(streamsConfig);
-final String clientId = 
streamsConfig.getString(StreamsConfig.CLIENT_ID_CONFIG);
 final LogContext logContext = createLogContext(clientId);
 
 final Selector selector = new Selector(

http://git-wip-us.apache.org/repos/asf/kafka/blob/249e398b/streams/src/test/java/org/apache/kafka/streams/processor/internals/StreamsKafkaClientTest.java
--
diff --git 
a/streams/src/test/java/org/apache/kafka/streams/processor/internals/StreamsKafkaClientTest.java
 
b/streams/src/test/java/org/apache/kafka/streams/processor/internals/StreamsKafkaClientTest.java
index 7a75b81..0bb7682 100644
--- 
a/streams/src/test/java/org/apache/kafka/streams/processor/internals/StreamsKafkaClientTest.java
+++ 
b/streams/src/test/java/org/apache/kafka/streams/processor/internals/StreamsKafkaClientTest.java
@@ -17,11 +17,13 @@
 package org.apache.kafka.streams.processor.internals;
 
 import org.apache.kafka.clients.MockClient;
+import org.apache.kafka.common.MetricName;
 import org.apache.kafka.common.Node;
 import org.apache.kafka.common.TopicPartition;
 import org.apache.kafka.common.config.AbstractConfig;
 import org.apache.kafka.common.config.SaslConfigs;
 import org.apache.kafka.common.config.TopicConfig;
+import org.apache.kafka.common.metrics.KafkaMetric;
 import org.apache.kafka.common.metrics.MetricsReporter;
 import org.apache.kafka.common.protocol.ApiKeys;
 import org.apache.kafka.common.protocol.Errors;
@@ -46,6 +48,7 @@ import java.util.Map;
 import static java.util.Arrays.asList;
 import static org.hamcrest.CoreMatchers.equalTo;
 import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertFalse;
 import static org.junit.Assert.assertThat;
 
 public class StreamsKafkaClientTest {
@@ -130,6 +133,17 @@ public class StreamsKafkaClientTest {
 verifyCorrectTopicConfigs(streamsKafkaClient, 
topicConfigWithNoOverrides, Collections.singletonMap("cleanup.policy", 
"delete"));
 }
 
+@Test
+public void metricsShouldBeTaggedWithClientId() {
+config.put(StreamsConfig.CLIENT_ID_CONFIG, "some_client_id");
+config.put(StreamsConfig.METRIC_REPORTER_CLASSES_CONFIG, 
TestMetricsReporter.class.getName());
+StreamsKafkaClient.create(new StreamsConfig(config));
+assertFalse(TestMetricsReporter.METRICS.isEmpty());
+for (KafkaMetric kafkaMetric : TestMetricsReporter.METRICS.values()) 

kafka git commit: KAFKA-6023 ThreadCache#sizeBytes() should check overflow

2017-10-18 Thread damianguy
Repository: kafka
Updated Branches:
  refs/heads/trunk 5cc162f30 -> 68f324f4b


KAFKA-6023 ThreadCache#sizeBytes() should check overflow

long sizeBytes() {
    long sizeInBytes = 0;
    for (final NamedCache namedCache : caches.values()) {
        sizeInBytes += namedCache.sizeInBytes();
    }
    return sizeInBytes;
}

The summation into sizeInBytes may overflow. A check similar to the one in
size() should be performed.
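
A self-contained illustration of the fix (not the committed code): once the
running sum wraps past Long.MAX_VALUE it turns negative, and a later addition
can make it positive again, so only a per-iteration check reliably catches the
wrap.

public class OverflowCheckDemo {
    static long clampedSum(final long[] sizes) {
        long total = 0;
        for (final long size : sizes) {
            total += size;
            if (total < 0) {           // wrapped past Long.MAX_VALUE
                return Long.MAX_VALUE; // clamp instead of returning garbage
            }
        }
        return total;
    }

    public static void main(final String[] args) {
        // MAX + MAX wraps to -2; adding MAX again yields a positive value,
        // which a single check after the loop would not detect
        System.out.println(clampedSum(
                new long[]{Long.MAX_VALUE, Long.MAX_VALUE, Long.MAX_VALUE}));
    }
}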

Author: siva santhalingam 

Reviewers: Bill Bejeck , Matthias J. Sax 
, Damian Guy 

Closes #4041 from shivsantham/kafka-6023


Project: http://git-wip-us.apache.org/repos/asf/kafka/repo
Commit: http://git-wip-us.apache.org/repos/asf/kafka/commit/68f324f4
Tree: http://git-wip-us.apache.org/repos/asf/kafka/tree/68f324f4
Diff: http://git-wip-us.apache.org/repos/asf/kafka/diff/68f324f4

Branch: refs/heads/trunk
Commit: 68f324f4bf0003d5dcfd79c5ab7f9c53bd0c1522
Parents: 5cc162f
Author: siva santhalingam 
Authored: Wed Oct 18 09:44:39 2017 +0100
Committer: Damian Guy 
Committed: Wed Oct 18 09:44:39 2017 +0100

--
 .../apache/kafka/streams/state/internals/ThreadCache.java| 7 +++
 .../kafka/streams/state/internals/ThreadCacheTest.java   | 8 
 2 files changed, 11 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/kafka/blob/68f324f4/streams/src/main/java/org/apache/kafka/streams/state/internals/ThreadCache.java
--
diff --git 
a/streams/src/main/java/org/apache/kafka/streams/state/internals/ThreadCache.java
 
b/streams/src/main/java/org/apache/kafka/streams/state/internals/ThreadCache.java
index aab9671..01a4bef 100644
--- 
a/streams/src/main/java/org/apache/kafka/streams/state/internals/ThreadCache.java
+++ 
b/streams/src/main/java/org/apache/kafka/streams/state/internals/ThreadCache.java
@@ -205,10 +205,6 @@ public class ThreadCache {
 return Long.MAX_VALUE;
 }
 }
-
-if (isOverflowing(size)) {
-return Long.MAX_VALUE;
-}
 return size;
 }
 
@@ -220,6 +216,9 @@ public class ThreadCache {
 long sizeInBytes = 0;
 for (final NamedCache namedCache : caches.values()) {
 sizeInBytes += namedCache.sizeInBytes();
+if (isOverflowing(sizeInBytes)) {
+return Long.MAX_VALUE;
+}
 }
 return sizeInBytes;
 }

http://git-wip-us.apache.org/repos/asf/kafka/blob/68f324f4/streams/src/test/java/org/apache/kafka/streams/state/internals/ThreadCacheTest.java
--
diff --git 
a/streams/src/test/java/org/apache/kafka/streams/state/internals/ThreadCacheTest.java
 
b/streams/src/test/java/org/apache/kafka/streams/state/internals/ThreadCacheTest.java
index 16fd34b..164e71e 100644
--- 
a/streams/src/test/java/org/apache/kafka/streams/state/internals/ThreadCacheTest.java
+++ 
b/streams/src/test/java/org/apache/kafka/streams/state/internals/ThreadCacheTest.java
@@ -511,6 +511,14 @@ public class ThreadCacheTest {
 assertNull(threadCache.get(namespace, null));
 }
 
+@Test
+public void shouldCalculateSizeInBytes() {
+final ThreadCache cache = new ThreadCache(logContext, 10, new 
MockStreamsMetrics(new Metrics()));
+NamedCache.LRUNode node = new NamedCache.LRUNode(Bytes.wrap(new 
byte[]{1}), dirtyEntry(new byte[]{0}));
+cache.put(namespace1, Bytes.wrap(new byte[]{1}), cleanEntry(new 
byte[]{0}));
+assertEquals(cache.sizeBytes(), node.size());
+}
+
 private LRUCacheEntry dirtyEntry(final byte[] key) {
 return new LRUCacheEntry(key, true, -1, -1, -1, "");
 }



kafka git commit: MINOR: improve Store parameter checks

2017-10-12 Thread damianguy
Repository: kafka
Updated Branches:
  refs/heads/trunk 62682d078 -> 53c23bb5e


MINOR: improve Store parameter checks

Author: Matthias J. Sax 

Reviewers: Bill Bejeck , Damian Guy 

Closes #4063 from mjsax/minor-improve-store-parameter-checks
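
With the added checks, invalid arguments fail fast with descriptive messages;
a small usage sketch based on the checks visible in the diff below:

import org.apache.kafka.streams.state.Stores;

public class StoreChecksDemo {
    public static void main(final String[] args) {
        try {
            Stores.lruMap("cache", -1); // maxCacheSize must not be negative
        } catch (final IllegalArgumentException e) {
            System.out.println(e.getMessage()); // "maxCacheSize cannot be negative"
        }
        try {
            Stores.persistentKeyValueStore(null); // name must not be null
        } catch (final NullPointerException e) {
            System.out.println(e.getMessage()); // "name cannot be null"
        }
    }
}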


Project: http://git-wip-us.apache.org/repos/asf/kafka/repo
Commit: http://git-wip-us.apache.org/repos/asf/kafka/commit/53c23bb5
Tree: http://git-wip-us.apache.org/repos/asf/kafka/tree/53c23bb5
Diff: http://git-wip-us.apache.org/repos/asf/kafka/diff/53c23bb5

Branch: refs/heads/trunk
Commit: 53c23bb5e65c147d7b2cae0a7fd9b3ba46c8fce5
Parents: 62682d0
Author: Matthias J. Sax 
Authored: Thu Oct 12 15:55:43 2017 +0100
Committer: Damian Guy 
Committed: Thu Oct 12 15:55:43 2017 +0100

--
 .../org/apache/kafka/streams/state/Stores.java  | 50 +++
 .../apache/kafka/streams/state/StoresTest.java  | 65 
 2 files changed, 102 insertions(+), 13 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/kafka/blob/53c23bb5/streams/src/main/java/org/apache/kafka/streams/state/Stores.java
--
diff --git a/streams/src/main/java/org/apache/kafka/streams/state/Stores.java 
b/streams/src/main/java/org/apache/kafka/streams/state/Stores.java
index c9c44af..0ce6d9e 100644
--- a/streams/src/main/java/org/apache/kafka/streams/state/Stores.java
+++ b/streams/src/main/java/org/apache/kafka/streams/state/Stores.java
@@ -40,6 +40,7 @@ import org.slf4j.LoggerFactory;
 import java.nio.ByteBuffer;
 import java.util.HashMap;
 import java.util.Map;
+import java.util.Objects;
 
 /**
  * Factory for creating state stores in Kafka Streams.
@@ -85,21 +86,23 @@ public class Stores {
 
 /**
  * Create a persistent {@link KeyValueBytesStoreSupplier}.
- * @param name  name of the store
+ * @param name  name of the store (cannot be {@code null})
  * @return  an instance of a {@link KeyValueBytesStoreSupplier} that can 
be used
  * to build a persistent store
  */
 public static KeyValueBytesStoreSupplier persistentKeyValueStore(final 
String name) {
+Objects.requireNonNull(name, "name cannot be null");
 return new RocksDbKeyValueBytesStoreSupplier(name);
 }
 
 /**
  * Create an in-memory {@link KeyValueBytesStoreSupplier}.
- * @param name  name of the store
+ * @param name  name of the store (cannot be {@code null})
  * @return  an instance of a {@link KeyValueBytesStoreSupplier} than can 
be used to
  * build an in-memory store
  */
 public static KeyValueBytesStoreSupplier inMemoryKeyValueStore(final 
String name) {
+Objects.requireNonNull(name, "name cannot be null");
 return new KeyValueBytesStoreSupplier() {
 @Override
 public String name() {
@@ -120,12 +123,16 @@ public class Stores {
 
 /**
  * Create a LRU Map {@link KeyValueBytesStoreSupplier}.
- * @param name  name of the store
- * @param maxCacheSize  maximum number of items in the LRU
+ * @param name  name of the store (cannot be {@code null})
+ * @param maxCacheSize  maximum number of items in the LRU (cannot be 
negative)
  * @return an instance of a {@link KeyValueBytesStoreSupplier} that can be 
used to build
  * an LRU Map based store
  */
 public static KeyValueBytesStoreSupplier lruMap(final String name, final 
int maxCacheSize) {
+Objects.requireNonNull(name, "name cannot be null");
+if (maxCacheSize < 0) {
+throw new IllegalArgumentException("maxCacheSize cannot be 
negative");
+}
 return new KeyValueBytesStoreSupplier() {
 @Override
 public String name() {
@@ -146,10 +153,10 @@ public class Stores {
 
 /**
  * Create a persistent {@link WindowBytesStoreSupplier}.
- * @param name  name of the store
- * @param retentionPeriod   length of time to retain data in the store
- * @param numSegments   number of db segments
- * @param windowSizesize of the windows
+ * @param name  name of the store (cannot be {@code null})
+ * @param retentionPeriod   length of time to retain data in the store 
(cannot be negative)
+ * @param numSegments   number of db segments (cannot be zero or 
negative)
+ * @param windowSizesize of the windows (cannot be negative)
  * @param retainDuplicates  whether or not to retain duplicates.
  * @return an instance of {@link WindowBytesStoreSupplier}
  */
@@ -158,24 +165,38 @@ public class Stores {
  final int 
numSegments,

kafka-site git commit: Adding google tracking file for youtube metrics

2017-10-06 Thread damianguy
Repository: kafka-site
Updated Branches:
  refs/heads/asf-site 3dafcaad2 -> 450fdd86e


Adding google tracking file for youtube metrics

guozhangwang dguy Please review this. Thanks!!

Author: Manjula K 

Reviewers: Damian Guy 

Closes #91 from manjuapu/asf-site


Project: http://git-wip-us.apache.org/repos/asf/kafka-site/repo
Commit: http://git-wip-us.apache.org/repos/asf/kafka-site/commit/450fdd86
Tree: http://git-wip-us.apache.org/repos/asf/kafka-site/tree/450fdd86
Diff: http://git-wip-us.apache.org/repos/asf/kafka-site/diff/450fdd86

Branch: refs/heads/asf-site
Commit: 450fdd86e39baf585b4eeaeb5009cc85a3f39969
Parents: 3dafcaa
Author: Manjula K 
Authored: Fri Oct 6 09:00:54 2017 -0700
Committer: Damian Guy 
Committed: Fri Oct 6 09:00:54 2017 -0700

--
 google29eadbd0256e020c.html | 1 +
 1 file changed, 1 insertion(+)
--


http://git-wip-us.apache.org/repos/asf/kafka-site/blob/450fdd86/google29eadbd0256e020c.html
--
diff --git a/google29eadbd0256e020c.html b/google29eadbd0256e020c.html
new file mode 100644
index 000..f022843
--- /dev/null
+++ b/google29eadbd0256e020c.html
@@ -0,0 +1 @@
+google-site-verification: google29eadbd0256e020c.html
\ No newline at end of file



kafka git commit: KAFKA-5989; resume consumption of tasks that have state stores but no changelogging

2017-10-05 Thread damianguy
Repository: kafka
Updated Branches:
  refs/heads/trunk 6ea4fffdd -> e61002e2a


KAFKA-5989; resume consumption of tasks that have state stores but no 
changelogging

Stores where logging is disabled were never consumed: their partitions were
paused during initialization but never resumed.
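
A sketch of the fixed flow; the task interface here is hypothetical and
heavily simplified from AssignedTasks. Tasks whose stores need no changelog
restoration are ready immediately, and their input partitions must be resumed
right away instead of waiting for a restore-completed signal that never
arrives.

import java.util.Collection;
import java.util.HashSet;
import java.util.Set;

import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.common.TopicPartition;

final class ResumeReadyTasks {
    interface HypotheticalTask {
        Collection<TopicPartition> changelogPartitions();
        Collection<TopicPartition> partitions();
        void startRestoring();
    }

    static void initializeNewTasks(final Consumer<?, ?> consumer,
                                   final Collection<HypotheticalTask> created) {
        final Set<TopicPartition> readyPartitions = new HashSet<>();
        for (final HypotheticalTask task : created) {
            if (!task.changelogPartitions().isEmpty()) {
                task.startRestoring(); // resumed later, once restoration completes
            } else {
                // nothing to restore (e.g. logging disabled): ready right away
                readyPartitions.addAll(task.partitions());
            }
        }
        consumer.resume(readyPartitions); // this resume was previously missing
    }
}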

Author: Damian Guy 

Reviewers: tedyu , Matthias J. Sax 
, Guozhang Wang 

Closes #4002 from dguy/restore


Project: http://git-wip-us.apache.org/repos/asf/kafka/repo
Commit: http://git-wip-us.apache.org/repos/asf/kafka/commit/e61002e2
Tree: http://git-wip-us.apache.org/repos/asf/kafka/tree/e61002e2
Diff: http://git-wip-us.apache.org/repos/asf/kafka/diff/e61002e2

Branch: refs/heads/trunk
Commit: e61002e2ab70454304e12f2834cbbcb4bed002d7
Parents: 6ea4fff
Author: Damian Guy 
Authored: Thu Oct 5 08:23:15 2017 -0700
Committer: Damian Guy 
Committed: Thu Oct 5 08:23:15 2017 -0700

--
 .../processor/internals/AssignedTasks.java  |  19 ++--
 .../processor/internals/TaskManager.java|   4 +-
 .../integration/RestoreIntegrationTest.java | 109 ++-
 .../processor/internals/AssignedTasksTest.java  |  14 ++-
 .../processor/internals/TaskManagerTest.java|  25 -
 5 files changed, 154 insertions(+), 17 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/kafka/blob/e61002e2/streams/src/main/java/org/apache/kafka/streams/processor/internals/AssignedTasks.java
--
diff --git 
a/streams/src/main/java/org/apache/kafka/streams/processor/internals/AssignedTasks.java
 
b/streams/src/main/java/org/apache/kafka/streams/processor/internals/AssignedTasks.java
index 4448a78..12c3f79 100644
--- 
a/streams/src/main/java/org/apache/kafka/streams/processor/internals/AssignedTasks.java
+++ 
b/streams/src/main/java/org/apache/kafka/streams/processor/internals/AssignedTasks.java
@@ -109,10 +109,12 @@ class AssignedTasks implements RestoringTasks {
 }
 
 /**
+ * @return partitions that are ready to be resumed
  * @throws IllegalStateException If store gets registered after 
initialized is already finished
  * @throws StreamsException if the store's change log does not contain the 
partition
  */
-void initializeNewTasks() {
+Set initializeNewTasks() {
+final Set readyPartitions = new HashSet<>();
 if (!created.isEmpty()) {
 log.debug("Initializing {}s {}", taskTypeName, created.keySet());
 }
@@ -123,7 +125,7 @@ class AssignedTasks implements RestoringTasks {
 log.debug("transitioning {} {} to restoring", 
taskTypeName, entry.getKey());
 addToRestoring(entry.getValue());
 } else {
-transitionToRunning(entry.getValue());
+transitionToRunning(entry.getValue(), readyPartitions);
 }
 it.remove();
 } catch (final LockException e) {
@@ -131,6 +133,7 @@ class AssignedTasks implements RestoringTasks {
 log.trace("Could not create {} {} due to {}; will retry", 
taskTypeName, entry.getKey(), e.getMessage());
 }
 }
+return readyPartitions;
 }
 
 Set updateRestored(final Collection 
restored) {
@@ -144,8 +147,7 @@ class AssignedTasks implements RestoringTasks {
 final Map.Entry entry = it.next();
 final Task task = entry.getValue();
 if (restoredPartitions.containsAll(task.changelogPartitions())) {
-transitionToRunning(task);
-resume.addAll(task.partitions());
+transitionToRunning(task, resume);
 it.remove();
 } else {
 if (log.isTraceEnabled()) {
@@ -262,11 +264,11 @@ class AssignedTasks implements RestoringTasks {
 suspended.remove(taskId);
 throw e;
 }
-transitionToRunning(task);
+transitionToRunning(task, new HashSet());
 log.trace("resuming suspended {} {}", taskTypeName, task.id());
 return true;
 } else {
-log.trace("couldn't resume task {} assigned partitions {}, 
task partitions {}", taskId, partitions, task.partitions());
+log.warn("couldn't resume task {} assigned partitions {}, task 
partitions {}", taskId, partitions, task.partitions());
 }
 }
 return false;
@@ -282,11 +284,14 @@ class AssignedTasks implements RestoringTasks {
 }
 }
 
-private void transitionToRunning(final Task task) {
+private void transitionToRunning(final Task task, final 
Set readyPartitions) {
 log.debug("transitioning {} 

kafka git commit: KAFKA-5967; Ineffective check of negative value in CompositeReadOnlyKeyValueStore#approximateNumEntries()

2017-10-04 Thread damianguy
Repository: kafka
Updated Branches:
  refs/heads/0.11.0 fae2d2386 -> 51ea8e76b


KAFKA-5967; Ineffective check of negative value in 
CompositeReadOnlyKeyValueStore#approximateNumEntries()

package name: org.apache.kafka.streams.state.internals
Minor change to approximateNumEntries() method in 
CompositeReadOnlyKeyValueStore class.

long total = 0;
for (ReadOnlyKeyValueStore<K, V> store : stores) {
    total += store.approximateNumEntries();
}
return total < 0 ? Long.MAX_VALUE : total;

The check for a negative value accounts for wrapping, but wrapping can happen
within the for loop, so the check should be performed inside the loop. For
example, with three stores each reporting Long.MAX_VALUE, the running total
wraps negative after the second addition and positive again after the third,
so a single check after the loop reports a bogus total.

Author: siva santhalingam 

Reviewers: Matthias J. Sax , Damian Guy 


Closes #3988 from shivsantham/trunk

(cherry picked from commit 5afeddaa99c48ac827d1cade7812deb83b1f80bd)
Signed-off-by: Damian Guy 


Project: http://git-wip-us.apache.org/repos/asf/kafka/repo
Commit: http://git-wip-us.apache.org/repos/asf/kafka/commit/51ea8e76
Tree: http://git-wip-us.apache.org/repos/asf/kafka/tree/51ea8e76
Diff: http://git-wip-us.apache.org/repos/asf/kafka/diff/51ea8e76

Branch: refs/heads/0.11.0
Commit: 51ea8e76baa687411163ac3763877b0dce42a545
Parents: fae2d23
Author: siva santhalingam 
Authored: Wed Oct 4 10:11:11 2017 -0700
Committer: Damian Guy 
Committed: Wed Oct 4 10:15:02 2017 -0700

--
 .../CompositeReadOnlyKeyValueStore.java  |  5 -
 .../CompositeReadOnlyKeyValueStoreTest.java  | 19 +++
 2 files changed, 23 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/kafka/blob/51ea8e76/streams/src/main/java/org/apache/kafka/streams/state/internals/CompositeReadOnlyKeyValueStore.java
--
diff --git 
a/streams/src/main/java/org/apache/kafka/streams/state/internals/CompositeReadOnlyKeyValueStore.java
 
b/streams/src/main/java/org/apache/kafka/streams/state/internals/CompositeReadOnlyKeyValueStore.java
index 6366351..1ce5976 100644
--- 
a/streams/src/main/java/org/apache/kafka/streams/state/internals/CompositeReadOnlyKeyValueStore.java
+++ 
b/streams/src/main/java/org/apache/kafka/streams/state/internals/CompositeReadOnlyKeyValueStore.java
@@ -102,8 +102,11 @@ public class CompositeReadOnlyKeyValueStore 
implements ReadOnlyKeyValueSto
 long total = 0;
 for (ReadOnlyKeyValueStore store : stores) {
 total += store.approximateNumEntries();
+if (total < 0) {
+return Long.MAX_VALUE;
+}
 }
-return total < 0 ? Long.MAX_VALUE : total;
+return total;
 }
 
 interface NextIteratorFunction {

http://git-wip-us.apache.org/repos/asf/kafka/blob/51ea8e76/streams/src/test/java/org/apache/kafka/streams/state/internals/CompositeReadOnlyKeyValueStoreTest.java
--
diff --git 
a/streams/src/test/java/org/apache/kafka/streams/state/internals/CompositeReadOnlyKeyValueStoreTest.java
 
b/streams/src/test/java/org/apache/kafka/streams/state/internals/CompositeReadOnlyKeyValueStoreTest.java
index 2e5b872..3d5bb1b 100644
--- 
a/streams/src/test/java/org/apache/kafka/streams/state/internals/CompositeReadOnlyKeyValueStoreTest.java
+++ 
b/streams/src/test/java/org/apache/kafka/streams/state/internals/CompositeReadOnlyKeyValueStoreTest.java
@@ -37,6 +37,7 @@ import static org.junit.Assert.assertTrue;
 public class CompositeReadOnlyKeyValueStoreTest {
 
 private final String storeName = "my-store";
+private final String storeNameA = "my-storeA";
 private StateStoreProviderStub stubProviderTwo;
 private KeyValueStore stubOneUnderlying;
 private CompositeReadOnlyKeyValueStore theStore;
@@ -196,6 +197,24 @@ public class CompositeReadOnlyKeyValueStoreTest {
 assertEquals(Long.MAX_VALUE, theStore.approximateNumEntries());
 }
 
+@Test
+public void shouldReturnLongMaxValueOnUnderflow() {
+stubProviderTwo.addStore(storeName, new NoOpReadOnlyStore() {
+@Override
+public long approximateNumEntries() {
+return Long.MAX_VALUE;
+}
+});
+stubProviderTwo.addStore(storeNameA, new NoOpReadOnlyStore() {
+@Override
+public long approximateNumEntries() {
+return Long.MAX_VALUE;
+}
+});
+
+assertEquals(Long.MAX_VALUE, theStore.approximateNumEntries());
+}
+
 private CompositeReadOnlyKeyValueStore rebalancing() {
 return new CompositeReadOnlyKeyValueStore<>(new 

kafka git commit: KAFKA-5967; Ineffective check of negative value in CompositeReadOnlyKeyValueStore#approximateNumEntries()

2017-10-04 Thread damianguy
Repository: kafka
Updated Branches:
  refs/heads/trunk 5383f9bed -> 5afeddaa9


KAFKA-5967; Ineffective check of negative value in 
CompositeReadOnlyKeyValueStore#approximateNumEntries()

package name: org.apache.kafka.streams.state.internals
Minor change to approximateNumEntries() method in 
CompositeReadOnlyKeyValueStore class.

long total = 0;
for (ReadOnlyKeyValueStore<K, V> store : stores) {
    total += store.approximateNumEntries();
}
return total < 0 ? Long.MAX_VALUE : total;

The check for a negative value accounts for wrapping, but wrapping can happen
within the for loop, so the check should be performed inside the loop.

Author: siva santhalingam 

Reviewers: Matthias J. Sax , Damian Guy 


Closes #3988 from shivsantham/trunk


Project: http://git-wip-us.apache.org/repos/asf/kafka/repo
Commit: http://git-wip-us.apache.org/repos/asf/kafka/commit/5afeddaa
Tree: http://git-wip-us.apache.org/repos/asf/kafka/tree/5afeddaa
Diff: http://git-wip-us.apache.org/repos/asf/kafka/diff/5afeddaa

Branch: refs/heads/trunk
Commit: 5afeddaa99c48ac827d1cade7812deb83b1f80bd
Parents: 5383f9b
Author: siva santhalingam 
Authored: Wed Oct 4 10:11:11 2017 -0700
Committer: Damian Guy 
Committed: Wed Oct 4 10:11:11 2017 -0700

--
 .../CompositeReadOnlyKeyValueStore.java  |  5 -
 .../CompositeReadOnlyKeyValueStoreTest.java  | 19 +++
 2 files changed, 23 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/kafka/blob/5afeddaa/streams/src/main/java/org/apache/kafka/streams/state/internals/CompositeReadOnlyKeyValueStore.java
--
diff --git 
a/streams/src/main/java/org/apache/kafka/streams/state/internals/CompositeReadOnlyKeyValueStore.java
 
b/streams/src/main/java/org/apache/kafka/streams/state/internals/CompositeReadOnlyKeyValueStore.java
index e3354e4..2c895ef 100644
--- 
a/streams/src/main/java/org/apache/kafka/streams/state/internals/CompositeReadOnlyKeyValueStore.java
+++ 
b/streams/src/main/java/org/apache/kafka/streams/state/internals/CompositeReadOnlyKeyValueStore.java
@@ -104,8 +104,11 @@ public class CompositeReadOnlyKeyValueStore 
implements ReadOnlyKeyValueSto
 long total = 0;
 for (ReadOnlyKeyValueStore store : stores) {
 total += store.approximateNumEntries();
+if (total < 0) {
+return Long.MAX_VALUE;
+}
 }
-return total < 0 ? Long.MAX_VALUE : total;
+return total;
 }
 
 

http://git-wip-us.apache.org/repos/asf/kafka/blob/5afeddaa/streams/src/test/java/org/apache/kafka/streams/state/internals/CompositeReadOnlyKeyValueStoreTest.java
--
diff --git 
a/streams/src/test/java/org/apache/kafka/streams/state/internals/CompositeReadOnlyKeyValueStoreTest.java
 
b/streams/src/test/java/org/apache/kafka/streams/state/internals/CompositeReadOnlyKeyValueStoreTest.java
index ad3a1f2..4ff0b90 100644
--- 
a/streams/src/test/java/org/apache/kafka/streams/state/internals/CompositeReadOnlyKeyValueStoreTest.java
+++ 
b/streams/src/test/java/org/apache/kafka/streams/state/internals/CompositeReadOnlyKeyValueStoreTest.java
@@ -40,6 +40,7 @@ import static org.junit.Assert.fail;
 public class CompositeReadOnlyKeyValueStoreTest {
 
 private final String storeName = "my-store";
+private final String storeNameA = "my-storeA";
 private StateStoreProviderStub stubProviderTwo;
 private KeyValueStore stubOneUnderlying;
 private CompositeReadOnlyKeyValueStore theStore;
@@ -257,6 +258,24 @@ public class CompositeReadOnlyKeyValueStoreTest {
 assertEquals(Long.MAX_VALUE, theStore.approximateNumEntries());
 }
 
+@Test
+public void shouldReturnLongMaxValueOnUnderflow() {
+stubProviderTwo.addStore(storeName, new NoOpReadOnlyStore() {
+@Override
+public long approximateNumEntries() {
+return Long.MAX_VALUE;
+}
+});
+stubProviderTwo.addStore(storeNameA, new NoOpReadOnlyStore() {
+@Override
+public long approximateNumEntries() {
+return Long.MAX_VALUE;
+}
+});
+
+assertEquals(Long.MAX_VALUE, theStore.approximateNumEntries());
+}
+
 private CompositeReadOnlyKeyValueStore rebalancing() {
 return new CompositeReadOnlyKeyValueStore<>(new 
WrappingStoreProvider(Collections.singletonList(new 
StateStoreProviderStub(true))),
 QueryableStoreTypes.keyValueStore(), storeName);



[2/2] kafka git commit: MINOR: fix JavaDocs warnings

2017-10-03 Thread damianguy
MINOR: fix JavaDocs warnings

 - add some missing annotations for deprecated methods

Author: Matthias J. Sax 

Reviewers: Michael G. Noll , Damian Guy 


Closes #4005 from mjsax/minor-fix-javadoc-warnings
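
One of the updated deprecation hints, on KTable#filter(Predicate, String),
recommends the following migration; a minimal sketch (store name and predicate
are placeholders):

import org.apache.kafka.common.utils.Bytes;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.state.KeyValueStore;

final class FilterMigration {
    static KTable<String, Long> filtered(final KTable<String, Long> table) {
        // deprecated: table.filter(predicate, "queryable-store");
        return table.filter(
                (key, value) -> value != null && value > 0,
                Materialized.<String, Long, KeyValueStore<Bytes, byte[]>>as("queryable-store"));
    }
}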


Project: http://git-wip-us.apache.org/repos/asf/kafka/repo
Commit: http://git-wip-us.apache.org/repos/asf/kafka/commit/3dcbbf70
Tree: http://git-wip-us.apache.org/repos/asf/kafka/tree/3dcbbf70
Diff: http://git-wip-us.apache.org/repos/asf/kafka/diff/3dcbbf70

Branch: refs/heads/trunk
Commit: 3dcbbf703017985c9e212ad69cc7afdddbf358eb
Parents: 716330a
Author: Matthias J. Sax 
Authored: Tue Oct 3 07:35:42 2017 -0700
Committer: Damian Guy 
Committed: Tue Oct 3 07:35:42 2017 -0700

--
 .../org/apache/kafka/streams/KafkaStreams.java  |   9 +-
 .../kafka/streams/TopologyDescription.java  |   4 +-
 .../kafka/streams/kstream/Aggregator.java   |  12 +-
 .../kafka/streams/kstream/GlobalKTable.java |   2 +-
 .../kafka/streams/kstream/Initializer.java  |  12 +-
 .../kafka/streams/kstream/JoinWindows.java  |   2 +-
 .../kafka/streams/kstream/KGroupedStream.java   | 124 ++--
 .../kafka/streams/kstream/KGroupedTable.java|  27 +--
 .../apache/kafka/streams/kstream/KStream.java   |  88 -
 .../kafka/streams/kstream/KStreamBuilder.java   |  15 +-
 .../apache/kafka/streams/kstream/KTable.java| 189 +++
 .../apache/kafka/streams/kstream/Printed.java   |   1 -
 .../apache/kafka/streams/kstream/Reducer.java   |  12 +-
 .../streams/kstream/SessionWindowedKStream.java |  14 +-
 .../kafka/streams/kstream/SessionWindows.java   |   7 +-
 .../kafka/streams/kstream/TimeWindows.java  |   7 +-
 .../kafka/streams/kstream/UnlimitedWindows.java |   7 +-
 .../apache/kafka/streams/kstream/Windowed.java  |  14 +-
 .../streams/state/QueryableStoreTypes.java  |   6 +-
 .../org/apache/kafka/streams/state/Stores.java  |   2 +-
 20 files changed, 288 insertions(+), 266 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/kafka/blob/3dcbbf70/streams/src/main/java/org/apache/kafka/streams/KafkaStreams.java
--
diff --git a/streams/src/main/java/org/apache/kafka/streams/KafkaStreams.java 
b/streams/src/main/java/org/apache/kafka/streams/KafkaStreams.java
index 928d0e9..fd9c729 100644
--- a/streams/src/main/java/org/apache/kafka/streams/KafkaStreams.java
+++ b/streams/src/main/java/org/apache/kafka/streams/KafkaStreams.java
@@ -37,6 +37,7 @@ import 
org.apache.kafka.streams.errors.ProcessorStateException;
 import org.apache.kafka.streams.errors.StreamsException;
 import org.apache.kafka.streams.kstream.KStream;
 import org.apache.kafka.streams.kstream.KTable;
+import org.apache.kafka.streams.kstream.Produced;
 import org.apache.kafka.streams.processor.Processor;
 import org.apache.kafka.streams.processor.StateRestoreListener;
 import org.apache.kafka.streams.processor.StateStore;
@@ -948,10 +949,10 @@ public class KafkaStreams {
  * 
  * This will use the default Kafka Streams partitioner to locate the 
partition.
  * If a {@link StreamPartitioner custom partitioner} has been
- * {@link ProducerConfig#PARTITIONER_CLASS_CONFIG configured} via {@link 
StreamsConfig},
- * {@link KStream#through(StreamPartitioner, String)}, or {@link 
KTable#through(StreamPartitioner, String, String)},
- * or if the original {@link KTable}'s input {@link 
StreamsBuilder#table(String, String) topic} is partitioned
- * differently, please use {@link #metadataForKey(String, Object, 
StreamPartitioner)}.
+ * {@link ProducerConfig#PARTITIONER_CLASS_CONFIG configured} via {@link 
StreamsConfig} or
+ * {@link KStream#through(String, Produced)}, or if the original {@link 
KTable}'s input
+ * {@link StreamsBuilder#table(String) topic} is partitioned differently, 
please use
+ * {@link #metadataForKey(String, Object, StreamPartitioner)}.
  * 
  * Note:
  * 

http://git-wip-us.apache.org/repos/asf/kafka/blob/3dcbbf70/streams/src/main/java/org/apache/kafka/streams/TopologyDescription.java
--
diff --git 
a/streams/src/main/java/org/apache/kafka/streams/TopologyDescription.java 
b/streams/src/main/java/org/apache/kafka/streams/TopologyDescription.java
index 1b520c6..01af8bf 100644
--- a/streams/src/main/java/org/apache/kafka/streams/TopologyDescription.java
+++ b/streams/src/main/java/org/apache/kafka/streams/TopologyDescription.java
@@ -55,9 +55,9 @@ public interface TopologyDescription {
 }
 
 /**
- * Represents a {@link 
Topology#addGlobalStore(org.apache.kafka.streams.processor.StateStoreSupplier, 
String,
+ * Represents a {@link 

[1/2] kafka git commit: MINOR: fix JavaDocs warnings

2017-10-03 Thread damianguy
Repository: kafka
Updated Branches:
  refs/heads/trunk 716330a5b -> 3dcbbf703


http://git-wip-us.apache.org/repos/asf/kafka/blob/3dcbbf70/streams/src/main/java/org/apache/kafka/streams/kstream/KTable.java
--
diff --git a/streams/src/main/java/org/apache/kafka/streams/kstream/KTable.java 
b/streams/src/main/java/org/apache/kafka/streams/kstream/KTable.java
index 66ec0d7..1abc5e7 100644
--- a/streams/src/main/java/org/apache/kafka/streams/kstream/KTable.java
+++ b/streams/src/main/java/org/apache/kafka/streams/kstream/KTable.java
@@ -166,7 +166,7 @@ public interface KTable {
  *  (i.e., that would be equivalent to calling 
{@link KTable#filter(Predicate)}.
  * @return a {@code KTable} that contains only those records that satisfy 
the given predicate
  * @see #filterNot(Predicate, Materialized)
- * @deprecated use {@link #filter(Predicate, Materialized)}
+ * @deprecated use {@link #filter(Predicate, Materialized) 
filter(predicate, Materialized.as(queryableStoreName))}
  */
 @Deprecated
 KTable filter(final Predicate predicate, final 
String queryableStoreName);
@@ -203,7 +203,7 @@ public interface KTable {
  * @param storeSupplier user defined state store supplier. Cannot be 
{@code null}.
  * @return a {@code KTable} that contains only those records that satisfy 
the given predicate
  * @see #filterNot(Predicate, Materialized)
- * @deprecated use {@link #filter(Predicate, Materialized)}
+ * @deprecated use {@link #filter(Predicate, Materialized) 
filter(predicate, Materialized.as(KeyValueByteStoreSupplier))}
  */
 @Deprecated
 KTable filter(final Predicate predicate, final 
StateStoreSupplier storeSupplier);
@@ -297,7 +297,7 @@ public interface KTable {
  * @param storeSupplier user defined state store supplier. Cannot be 
{@code null}.
  * @return a {@code KTable} that contains only those records that do 
not satisfy the given predicate
  * @see #filter(Predicate, Materialized)
- * @deprecated use {@link #filterNot(Predicate, Materialized)}
+ * @deprecated use {@link #filterNot(Predicate, Materialized) 
filterNot(predicate, Materialized.as(KeyValueByteStoreSupplier))}
  */
 @Deprecated
 KTable filterNot(final Predicate predicate, 
final StateStoreSupplier storeSupplier);
@@ -336,7 +336,7 @@ public interface KTable {
  * (i.e., that would be equivalent to calling {@link 
KTable#filterNot(Predicate)}.
  * @return a {@code KTable} that contains only those records that do 
not satisfy the given predicate
  * @see #filter(Predicate, Materialized)
- * @deprecated use {@link #filter(Predicate, Materialized)}
+ * @deprecated use {@link #filter(Predicate, Materialized) 
filterNot(predicate, Materialized.as(queryableStoreName))}
  */
 @Deprecated
 KTable filterNot(final Predicate predicate, 
final String queryableStoreName);
@@ -463,7 +463,7 @@ public interface KTable {
  * @paramthe value type of the result {@code KTable}
  *
  * @return a {@code KTable} that contains records with unmodified keys and 
new values (possibly of different type)
- * @deprecated use {@link #mapValues(ValueMapper, Materialized)}
+ * @deprecated use {@link #mapValues(ValueMapper, Materialized) 
mapValues(mapper, 
Materialized.as(queryableStoreName).withValueSerde(valueSerde))}
  */
 @Deprecated
  KTable mapValues(final ValueMapper 
mapper, final Serde valueSerde, final String queryableStoreName);
@@ -507,7 +507,7 @@ public interface KTable {
  * @param storeSupplier user defined state store supplier. Cannot be 
{@code null}.
  * @paramthe value type of the result {@code KTable}
  * @return a {@code KTable} that contains records with unmodified keys and 
new values (possibly of different type)
- * @deprecated use {@link #mapValues(ValueMapper, Materialized)}
+ * @deprecated use {@link #mapValues(ValueMapper, Materialized) 
mapValues(mapper, 
Materialized.as(KeyValueByteStoreSupplier).withValueSerde(valueSerde))}
  */
 @Deprecated
  KTable mapValues(final ValueMapper 
mapper,
@@ -530,7 +530,8 @@ public interface KTable {
  * update record.
  * @deprecated Use the Interactive Queries APIs (e.g., {@link 
KafkaStreams#store(String, QueryableStoreType) }
  * followed by {@link ReadOnlyKeyValueStore#all()}) to iterate over the 
keys of a KTable. Alternatively
- * convert to a KStream using {@code toStream()} and then use {@link 
KStream#print()} on the result.
+ * convert to a {@link KStream} using {@link #toStream()} and then use
+ * {@link KStream#print(Printed) print(Printed.toSysOut())} on the result.
  */
 @Deprecated
 void print();
@@ -551,7 +552,8 @@ public interface KTable {
  * @param label the name used to label the 

kafka git commit: KAFKA-5225; StreamsResetter doesn't allow custom Consumer properties

2017-10-02 Thread damianguy
Repository: kafka
Updated Branches:
  refs/heads/trunk cc84686a4 -> f9865d52e


KAFKA-5225; StreamsResetter doesn't allow custom Consumer properties

Author: Matthias J. Sax 
Author: Bharat Viswanadham 

Reviewers: Ismael Juma , Damian Guy 

Closes #3970 from mjsax/kafka-5225-streams-resetter-properties


Project: http://git-wip-us.apache.org/repos/asf/kafka/repo
Commit: http://git-wip-us.apache.org/repos/asf/kafka/commit/f9865d52
Tree: http://git-wip-us.apache.org/repos/asf/kafka/tree/f9865d52
Diff: http://git-wip-us.apache.org/repos/asf/kafka/diff/f9865d52

Branch: refs/heads/trunk
Commit: f9865d52e81bbdddb7889d6c3cc7be537e610826
Parents: cc84686
Author: Matthias J. Sax 
Authored: Mon Oct 2 13:47:45 2017 -0700
Committer: Damian Guy 
Committed: Mon Oct 2 13:47:45 2017 -0700

--
 build.gradle|   1 +
 .../main/scala/kafka/tools/StreamsResetter.java |  60 ++-
 .../AbstractResetIntegrationTest.java   | 473 +++
 .../integration/ResetIntegrationTest.java   | 352 +-
 .../ResetIntegrationWithSslTest.java|  96 
 .../integration/utils/KafkaEmbedded.java|   5 +-
 6 files changed, 615 insertions(+), 372 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/kafka/blob/f9865d52/build.gradle
--
diff --git a/build.gradle b/build.gradle
index cbae7b0..d7b799b 100644
--- a/build.gradle
+++ b/build.gradle
@@ -893,6 +893,7 @@ project(':streams') {
 testCompile project(':core').sourceSets.test.output
 testCompile libs.junit
 testCompile libs.easymock
+testCompile libs.bcpkix
 
 testRuntime libs.slf4jlog4j
   }

http://git-wip-us.apache.org/repos/asf/kafka/blob/f9865d52/core/src/main/scala/kafka/tools/StreamsResetter.java
--
diff --git a/core/src/main/scala/kafka/tools/StreamsResetter.java 
b/core/src/main/scala/kafka/tools/StreamsResetter.java
index 09d0d75..5539258 100644
--- a/core/src/main/scala/kafka/tools/StreamsResetter.java
+++ b/core/src/main/scala/kafka/tools/StreamsResetter.java
@@ -16,7 +16,11 @@
  */
 package kafka.tools;
 
-
+import joptsimple.OptionException;
+import joptsimple.OptionParser;
+import joptsimple.OptionSet;
+import joptsimple.OptionSpec;
+import joptsimple.OptionSpecBuilder;
 import org.apache.kafka.clients.admin.AdminClient;
 import org.apache.kafka.clients.admin.DeleteTopicsResult;
 import org.apache.kafka.clients.admin.KafkaAdminClient;
@@ -27,9 +31,11 @@ import org.apache.kafka.common.TopicPartition;
 import org.apache.kafka.common.annotation.InterfaceStability;
 import org.apache.kafka.common.serialization.ByteArrayDeserializer;
 import org.apache.kafka.common.utils.Exit;
+import org.apache.kafka.common.utils.Utils;
 
 import java.io.IOException;
 import java.util.ArrayList;
+import java.util.HashMap;
 import java.util.HashSet;
 import java.util.LinkedList;
 import java.util.List;
@@ -38,12 +44,6 @@ import java.util.Properties;
 import java.util.Set;
 import java.util.concurrent.TimeUnit;
 
-import joptsimple.OptionException;
-import joptsimple.OptionParser;
-import joptsimple.OptionSet;
-import joptsimple.OptionSpec;
-import joptsimple.OptionSpecBuilder;
-
 /**
  * {@link StreamsResetter} resets the processing state of a Kafka Streams 
application so that, for example, you can reprocess its input from scratch.
  * 
@@ -71,14 +71,13 @@ public class StreamsResetter {
 private static final int EXIT_CODE_ERROR = 1;
 
 private static OptionSpec<String> bootstrapServerOption;
-private static OptionSpecBuilder zookeeperOption;
 private static OptionSpec<String> applicationIdOption;
 private static OptionSpec<String> inputTopicsOption;
 private static OptionSpec<String> intermediateTopicsOption;
 private static OptionSpecBuilder dryRunOption;
+private static OptionSpec<String> commandConfigOption;
 
 private OptionSet options = null;
-private final Properties consumerConfig = new Properties();
 private final List<String> allTopics = new LinkedList<>();
 private boolean dryRun = false;
 
@@ -86,10 +85,8 @@ public class StreamsResetter {
 return run(args, new Properties());
 }
 
-public int run(final String[] args, final Properties config) {
-consumerConfig.clear();
-consumerConfig.putAll(config);
-
+public int run(final String[] args,
+   final Properties config) {
 int exitCode = EXIT_CODE_SUCCESS;
 
 KafkaAdminClient kafkaAdminClient = null;
@@ -99,12 +96,14 @@ public class StreamsResetter {
 dryRun = options.has(dryRunOption);
 
 final String groupId = options.valueOf(applicationIdOption);
+   
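
For context, a hedged sketch of driving the resetter with custom consumer properties through the run(String[], Properties) overload shown above; the broker address, application id, topic, and security setting are placeholders:

import java.util.Properties;

import kafka.tools.StreamsResetter;

public class ResetWithCustomProperties {
    public static void main(final String[] args) {
        final Properties config = new Properties();
        // custom client settings the resetter's embedded consumer and admin client should use
        config.put("security.protocol", "SSL");

        final int exitCode = new StreamsResetter().run(
            new String[]{
                "--application-id", "my-streams-app",
                "--bootstrap-servers", "localhost:9092",
                "--input-topics", "my-input-topic"
            },
            config);
        System.exit(exitCode);
    }
}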

kafka git commit: MINOR: additional kip-182 doc updates

2017-10-02 Thread damianguy
Repository: kafka
Updated Branches:
  refs/heads/trunk cdbf806e2 -> cc84686a4


MINOR: additional kip-182 doc updates

Author: Damian Guy 

Reviewers: Michael G. Noll , Bill Bejeck 
, Matthias J. Sax , Ismael Juma 


Closes #3971 from dguy/kip-182-docs


Project: http://git-wip-us.apache.org/repos/asf/kafka/repo
Commit: http://git-wip-us.apache.org/repos/asf/kafka/commit/cc84686a
Tree: http://git-wip-us.apache.org/repos/asf/kafka/tree/cc84686a
Diff: http://git-wip-us.apache.org/repos/asf/kafka/diff/cc84686a

Branch: refs/heads/trunk
Commit: cc84686a4aa24e541f7ca5ee9dcb0dea0ddbd79a
Parents: cdbf806
Author: Damian Guy 
Authored: Mon Oct 2 13:20:49 2017 -0700
Committer: Damian Guy 
Committed: Mon Oct 2 13:20:49 2017 -0700

--
 docs/streams/developer-guide.html | 244 +
 1 file changed, 128 insertions(+), 116 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/kafka/blob/cc84686a/docs/streams/developer-guide.html
--
diff --git a/docs/streams/developer-guide.html 
b/docs/streams/developer-guide.html
index a064a5d..842325b 100644
--- a/docs/streams/developer-guide.html
+++ b/docs/streams/developer-guide.html
@@ -1383,65 +1383,72 @@ Note that in the WordCountProcessor 
implementation, users need to r
 // Java 8+ examples, using lambda expressions
 
 // Aggregating with time-based windowing (here: with 
5-minute tumbling windows)
-KTable<Windowed<String>, Long> timeWindowedAggregatedStream = groupedStream.aggregate(
-    () -> 0L, /* initializer */
-    (aggKey, newValue, aggValue) -> aggValue + newValue, /* adder */
-    TimeWindows.of(TimeUnit.MINUTES.toMillis(5)), /* time-based window */
-    Serdes.Long(), /* serde for aggregate value */
-    "time-windowed-aggregated-stream-store" /* state store name */);
+KTable<Windowed<String>, Long> timeWindowedAggregatedStream = groupedStream
+    .windowedBy(TimeWindows.of(TimeUnit.MINUTES.toMillis(5))) /* time-based window */
+    .aggregate(
+        () -> 0L, /* initializer */
+        (aggKey, newValue, aggValue) -> aggValue + newValue, /* adder */
+        Materialized.<String, Long, WindowStore<Bytes, byte[]>>as("time-windowed-aggregated-stream-store") /* state store name */
+            .withValueSerde(Serdes.Long())); /* serde for aggregate value */
+

 // Aggregating with session-based windowing (here: with an inactivity gap of 5 minutes)
-KTable<Windowed<String>, Long> sessionizedAggregatedStream = groupedStream.aggregate(
-    () -> 0L, /* initializer */
-    (aggKey, newValue, aggValue) -> aggValue + newValue, /* adder */
-    (aggKey, leftAggValue, rightAggValue) -> leftAggValue + rightAggValue, /* session merger */
-    SessionWindows.with(TimeUnit.MINUTES.toMillis(5)), /* session window */
-    Serdes.Long(), /* serde for aggregate value */
-    "sessionized-aggregated-stream-store" /* state store name */);
+KTable<Windowed<String>, Long> sessionizedAggregatedStream = groupedStream
+    .windowedBy(SessionWindows.with(TimeUnit.MINUTES.toMillis(5))) /* session window */
+    .aggregate(
+        () -> 0L, /* initializer */
+        (aggKey, newValue, aggValue) -> aggValue + newValue, /* adder */
+        (aggKey, leftAggValue, rightAggValue) -> leftAggValue + rightAggValue, /* session merger */
+        Materialized.<String, Long, SessionStore<Bytes, byte[]>>as("sessionized-aggregated-stream-store") /* state store name */
+            .withValueSerde(Serdes.Long())); /* serde for aggregate value */

 // Java 7 examples

 // Aggregating with time-based windowing (here: with 5-minute tumbling windows)
-KTable<Windowed<String>, Long> timeWindowedAggregatedStream = groupedStream.aggregate(
-    new Initializer<Long>() { /* initializer */
-        @Override
-        public Long apply() {
-            return 0L;
-        }
-    },
-    new Aggregator<String, Long, Long>() { /* adder */

kafka git commit: KAFKA-5985; update javadoc regarding closing iterators

2017-10-02 Thread damianguy
Repository: kafka
Updated Branches:
  refs/heads/trunk 4f4f99532 -> 39d5cdccc


KAFKA-5985; update javadoc regarding closing iterators

Author: Bill Bejeck 

Reviewers: Matthias J. Sax , Michael G. Noll 
, Damian Guy 

Closes #3994 from bbejeck/KAFKA-5985_document_need_to_close_iterators


Project: http://git-wip-us.apache.org/repos/asf/kafka/repo
Commit: http://git-wip-us.apache.org/repos/asf/kafka/commit/39d5cdcc
Tree: http://git-wip-us.apache.org/repos/asf/kafka/tree/39d5cdcc
Diff: http://git-wip-us.apache.org/repos/asf/kafka/diff/39d5cdcc

Branch: refs/heads/trunk
Commit: 39d5cdcccfc0f7d7893188bb22580da0c842a993
Parents: 4f4f995
Author: Bill Bejeck 
Authored: Mon Oct 2 11:49:22 2017 -0700
Committer: Damian Guy 
Committed: Mon Oct 2 11:49:22 2017 -0700

--
 docs/streams/developer-guide.html| 8 
 .../org/apache/kafka/streams/state/KeyValueIterator.java | 2 +-
 .../apache/kafka/streams/state/ReadOnlyKeyValueStore.java| 4 ++--
 .../org/apache/kafka/streams/state/ReadOnlySessionStore.java | 6 --
 .../org/apache/kafka/streams/state/ReadOnlyWindowStore.java  | 4 
 .../java/org/apache/kafka/streams/state/SessionStore.java| 4 
 .../org/apache/kafka/streams/state/WindowStoreIterator.java  | 2 +-
 7 files changed, 24 insertions(+), 6 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/kafka/blob/39d5cdcc/docs/streams/developer-guide.html
--
diff --git a/docs/streams/developer-guide.html 
b/docs/streams/developer-guide.html
index 3368757..a064a5d 100644
--- a/docs/streams/developer-guide.html
+++ b/docs/streams/developer-guide.html
@@ -185,6 +185,7 @@
 In the init method, schedule the punctuation every 1 
second and retrieve the local state store by its name "Counts".
 In the process method, upon each received record, 
split the value string into words, and update their counts into the state store 
(we will talk about this feature later in the section).
 In the scheduled punctuate method, iterate the local 
state store and send the aggregated counts to the downstream processor, and 
commit the current stream state.
+When done with the KeyValueIterator<String, Long> you must close the iterator, as shown above, or use the try-with-resources statement.
 
 
 
@@ -253,6 +254,13 @@ With deletion enabled, old windows that have expired will 
be cleaned up by Kafka
 The default retention setting is Windows#maintainMs() + 1 day. 
This setting can be overridden by specifying 
StreamsConfig.WINDOW_STORE_CHANGE_LOG_ADDITIONAL_RETENTION_MS_CONFIG
 in the StreamsConfig.
 
 
+
+One additional note regarding the use of state stores.  Any time you open an 
Iterator from a state store you must call 
close() on the iterator
+when you are done working with it to reclaim resources.  Or you can use the 
iterator from within a try-with-resources statement.
+By not closing an iterator, you may likely encounter an OOM error.
+
+
+
 Monitoring the 
Restoration Progress of Fault-tolerant State Stores
 
 

http://git-wip-us.apache.org/repos/asf/kafka/blob/39d5cdcc/streams/src/main/java/org/apache/kafka/streams/state/KeyValueIterator.java
--
diff --git 
a/streams/src/main/java/org/apache/kafka/streams/state/KeyValueIterator.java 
b/streams/src/main/java/org/apache/kafka/streams/state/KeyValueIterator.java
index 3f44635..70a142b 100644
--- a/streams/src/main/java/org/apache/kafka/streams/state/KeyValueIterator.java
+++ b/streams/src/main/java/org/apache/kafka/streams/state/KeyValueIterator.java
@@ -24,7 +24,7 @@ import java.util.Iterator;
 /**
  * Iterator interface of {@link KeyValue}.
  *
- * Users need to call its {@code close} method explicitly upon completeness to 
release resources,
+ * Users must call its {@code close} method explicitly upon completeness to 
release resources,
  * or use try-with-resources statement (available since JDK7) for this {@link 
Closeable} class.
  *
  * @param  Type of keys

http://git-wip-us.apache.org/repos/asf/kafka/blob/39d5cdcc/streams/src/main/java/org/apache/kafka/streams/state/ReadOnlyKeyValueStore.java
--
diff --git 
a/streams/src/main/java/org/apache/kafka/streams/state/ReadOnlyKeyValueStore.java
 
b/streams/src/main/java/org/apache/kafka/streams/state/ReadOnlyKeyValueStore.java
index 76bb47b..0632980 100644
--- 
a/streams/src/main/java/org/apache/kafka/streams/state/ReadOnlyKeyValueStore.java
+++ 
b/streams/src/main/java/org/apache/kafka/streams/state/ReadOnlyKeyValueStore.java
@@ -39,7 +39,7 @@ public interface ReadOnlyKeyValueStore<K, V> {
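
For context, a minimal sketch of the pattern this javadoc change mandates, assuming a queryable store named "Counts" with String keys and Long values:

import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.state.KeyValueIterator;
import org.apache.kafka.streams.state.QueryableStoreTypes;
import org.apache.kafka.streams.state.ReadOnlyKeyValueStore;

public class IteratorCloseExample {
    // iterate a queryable state store; try-with-resources guarantees close()
    // is called and the iterator's resources are released
    static void printAll(final KafkaStreams streams) {
        final ReadOnlyKeyValueStore<String, Long> store =
            streams.store("Counts", QueryableStoreTypes.<String, Long>keyValueStore());
        try (final KeyValueIterator<String, Long> all = store.all()) {
            while (all.hasNext()) {
                final KeyValue<String, Long> next = all.next();
                System.out.println(next.key + ": " + next.value);
            }
        }
    }
}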

kafka git commit: KAFKA-5986; Streams State Restoration never completes when logging is disabled

2017-09-29 Thread damianguy
Repository: kafka
Updated Branches:
  refs/heads/0.11.0 84bc74a4a -> fae2d2386


KAFKA-5986; Streams State Restoration never completes when logging is disabled

When logging is disabled and there are state stores the task never transitions 
from restoring to running. This is because we only ever check if the task has 
state stores and return false on initialization if it does. The check should be 
if we have changelog partitions, i.e., we need to restore.

Author: Damian Guy 

Reviewers: Matthias J. Sax , Bill Bejeck 
, tedyu , Ismael Juma 


Closes #3983 from dguy/restore-test

(cherry picked from commit 3107a6c5c8d1358b8e705c5d5a16b7441d2225a6)
Signed-off-by: Damian Guy 


Project: http://git-wip-us.apache.org/repos/asf/kafka/repo
Commit: http://git-wip-us.apache.org/repos/asf/kafka/commit/fae2d238
Tree: http://git-wip-us.apache.org/repos/asf/kafka/tree/fae2d238
Diff: http://git-wip-us.apache.org/repos/asf/kafka/diff/fae2d238

Branch: refs/heads/0.11.0
Commit: fae2d23868e22ee2e6cd59809db0a6defc3734bc
Parents: 84bc74a
Author: Damian Guy 
Authored: Fri Sep 29 15:07:41 2017 +0100
Committer: Damian Guy 
Committed: Fri Sep 29 15:24:32 2017 +0100

--
 .../streams/processor/internals/StreamTask.java | 18 ---
 .../processor/internals/StreamTaskTest.java | 57 
 2 files changed, 67 insertions(+), 8 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/kafka/blob/fae2d238/streams/src/main/java/org/apache/kafka/streams/processor/internals/StreamTask.java
--
diff --git 
a/streams/src/main/java/org/apache/kafka/streams/processor/internals/StreamTask.java
 
b/streams/src/main/java/org/apache/kafka/streams/processor/internals/StreamTask.java
index 149b938..de45800 100644
--- 
a/streams/src/main/java/org/apache/kafka/streams/processor/internals/StreamTask.java
+++ 
b/streams/src/main/java/org/apache/kafka/streams/processor/internals/StreamTask.java
@@ -146,6 +146,16 @@ public class StreamTask extends AbstractTask implements 
Punctuator {
 }
 }
 
+@Override
+public boolean initialize() {
+log.trace("Initializing");
+initializeStateStores();
+initTopology();
+processorContext.initialized();
+taskInitialized = true;
+return changelogPartitions().isEmpty();
+}
+
 /**
  * 
  * - re-initialize the task
@@ -561,12 +571,4 @@ public class StreamTask extends AbstractTask implements 
Punctuator {
 return new RecordCollectorImpl(producer, id.toString());
 }
 
-public boolean initialize() {
-log.debug("{} Initializing", logPrefix);
-initializeStateStores();
-initTopology();
-processorContext.initialized();
-return topology.stateStores().isEmpty();
-}
-
 }

http://git-wip-us.apache.org/repos/asf/kafka/blob/fae2d238/streams/src/test/java/org/apache/kafka/streams/processor/internals/StreamTaskTest.java
--
diff --git 
a/streams/src/test/java/org/apache/kafka/streams/processor/internals/StreamTaskTest.java
 
b/streams/src/test/java/org/apache/kafka/streams/processor/internals/StreamTaskTest.java
index cff145e..781979e 100644
--- 
a/streams/src/test/java/org/apache/kafka/streams/processor/internals/StreamTaskTest.java
+++ 
b/streams/src/test/java/org/apache/kafka/streams/processor/internals/StreamTaskTest.java
@@ -44,6 +44,7 @@ import 
org.apache.kafka.streams.state.internals.InMemoryKeyValueStore;
 import org.apache.kafka.streams.state.internals.OffsetCheckpoint;
 import org.apache.kafka.test.MockProcessorNode;
 import org.apache.kafka.test.MockSourceNode;
+import org.apache.kafka.test.MockStateStoreSupplier;
 import org.apache.kafka.test.MockTimestampExtractor;
 import org.apache.kafka.test.NoOpProcessorContext;
 import org.apache.kafka.test.NoOpRecordCollector;
@@ -800,6 +801,62 @@ public class StreamTaskTest {
 }
 }
 
+@Test
+public void shouldBeInitializedIfChangelogPartitionsIsEmpty() {
+final ProcessorTopology topology = new 
ProcessorTopology(Collections.singletonList(source1),
+ 
Collections.singletonMap(topic1[0], source1),
+ 
Collections.emptyMap(),
+ 
Collections.singletonList(
+ new 
MockStateStoreSupplier.MockStateStore("store",
+

kafka git commit: KAFKA-5986; Streams State Restoration never completes when logging is disabled

2017-09-29 Thread damianguy
Repository: kafka
Updated Branches:
  refs/heads/trunk 36556b804 -> 3107a6c5c


KAFKA-5986; Streams State Restoration never completes when logging is disabled

When logging is disabled and there are state stores the task never transitions 
from restoring to running. This is because we only ever check if the task has 
state stores and return false on initialization if it does. The check should be 
if we have changelog partitions, i.e., we need to restore.

Author: Damian Guy 

Reviewers: Matthias J. Sax , Bill Bejeck 
, tedyu , Ismael Juma 


Closes #3983 from dguy/restore-test


Project: http://git-wip-us.apache.org/repos/asf/kafka/repo
Commit: http://git-wip-us.apache.org/repos/asf/kafka/commit/3107a6c5
Tree: http://git-wip-us.apache.org/repos/asf/kafka/tree/3107a6c5
Diff: http://git-wip-us.apache.org/repos/asf/kafka/diff/3107a6c5

Branch: refs/heads/trunk
Commit: 3107a6c5c8d1358b8e705c5d5a16b7441d2225a6
Parents: 36556b8
Author: Damian Guy 
Authored: Fri Sep 29 15:07:41 2017 +0100
Committer: Damian Guy 
Committed: Fri Sep 29 15:07:41 2017 +0100

--
 .../streams/processor/internals/StreamTask.java |  2 +-
 .../integration/RestoreIntegrationTest.java | 56 ++-
 .../processor/internals/StreamTaskTest.java | 58 +++-
 3 files changed, 101 insertions(+), 15 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/kafka/blob/3107a6c5/streams/src/main/java/org/apache/kafka/streams/processor/internals/StreamTask.java
--
diff --git 
a/streams/src/main/java/org/apache/kafka/streams/processor/internals/StreamTask.java
 
b/streams/src/main/java/org/apache/kafka/streams/processor/internals/StreamTask.java
index 3d6c9b9..8180b2c 100644
--- 
a/streams/src/main/java/org/apache/kafka/streams/processor/internals/StreamTask.java
+++ 
b/streams/src/main/java/org/apache/kafka/streams/processor/internals/StreamTask.java
@@ -164,7 +164,7 @@ public class StreamTask extends AbstractTask implements 
ProcessorNodePunctuator
 initTopology();
 processorContext.initialized();
 taskInitialized = true;
-return topology.stateStores().isEmpty();
+return changelogPartitions().isEmpty();
 }
 
 

http://git-wip-us.apache.org/repos/asf/kafka/blob/3107a6c5/streams/src/test/java/org/apache/kafka/streams/integration/RestoreIntegrationTest.java
--
diff --git 
a/streams/src/test/java/org/apache/kafka/streams/integration/RestoreIntegrationTest.java
 
b/streams/src/test/java/org/apache/kafka/streams/integration/RestoreIntegrationTest.java
index 31b7222..ae36ad8 100644
--- 
a/streams/src/test/java/org/apache/kafka/streams/integration/RestoreIntegrationTest.java
+++ 
b/streams/src/test/java/org/apache/kafka/streams/integration/RestoreIntegrationTest.java
@@ -27,17 +27,22 @@ import org.apache.kafka.common.TopicPartition;
 import org.apache.kafka.common.serialization.IntegerDeserializer;
 import org.apache.kafka.common.serialization.IntegerSerializer;
 import org.apache.kafka.common.serialization.Serdes;
+import org.apache.kafka.common.utils.Bytes;
 import org.apache.kafka.streams.Consumed;
 import org.apache.kafka.streams.KafkaStreams;
 import org.apache.kafka.streams.StreamsBuilder;
 import org.apache.kafka.streams.StreamsConfig;
 import org.apache.kafka.streams.integration.utils.EmbeddedKafkaCluster;
 import org.apache.kafka.streams.kstream.ForeachAction;
+import org.apache.kafka.streams.kstream.KStream;
+import org.apache.kafka.streams.kstream.Materialized;
+import org.apache.kafka.streams.kstream.Reducer;
 import org.apache.kafka.streams.processor.StateRestoreListener;
+import org.apache.kafka.streams.state.KeyValueStore;
 import org.apache.kafka.test.IntegrationTest;
 import org.apache.kafka.test.TestUtils;
 import org.junit.After;
-import org.junit.Before;
+import org.junit.BeforeClass;
 import org.junit.ClassRule;
 import org.junit.Test;
 import org.junit.experimental.categories.Category;
@@ -65,19 +70,15 @@ public class RestoreIntegrationTest {
 @ClassRule
 public static final EmbeddedKafkaCluster CLUSTER =
 new EmbeddedKafkaCluster(NUM_BROKERS);
-private final String inputStream = "input-stream";
+private static final String INPUT_STREAM = "input-stream";
 private final int numberOfKeys = 1;
 private KafkaStreams kafkaStreams;
 private String applicationId = "restore-test";
 
 
-private void createTopics() throws InterruptedException {
-CLUSTER.createTopic(inputStream, 2, 1);
-}
-
-@Before
-public void before() throws IOException, InterruptedException {
-createTopics();
+
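
For context, a minimal sketch of a topology that exercises the fixed code path, with placeholder names: a store materialized with logging disabled has no changelog topic, so its task has no changelog partitions and, with this fix, reports itself initialized instead of staying in the restoring state forever:

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.common.utils.Bytes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.state.KeyValueStore;

public class LoggingDisabledExample {
    public static void main(final String[] args) {
        final StreamsBuilder builder = new StreamsBuilder();
        // logging disabled -> no changelog partitions -> nothing to restore,
        // so initialize() now returns true via changelogPartitions().isEmpty()
        builder.table(
            "input-topic",
            Materialized.<String, Long, KeyValueStore<Bytes, byte[]>>as("no-log-store")
                .withKeySerde(Serdes.String())
                .withValueSerde(Serdes.Long())
                .withLoggingDisabled());
    }
}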

kafka git commit: KAFKA-5932; Avoid call to fetchPrevious in FlushListeners

2017-09-29 Thread damianguy
Repository: kafka
Updated Branches:
  refs/heads/trunk 082def05c -> 36556b804


KAFKA-5932; Avoid call to fetchPrevious in FlushListeners

Author: Bill Bejeck 

Reviewers: Matthias J. Sax , Damian Guy 


Closes #3978 from 
bbejeck/KAFKA-5932_no_fetch_previous_when_no_old_values_returned


Project: http://git-wip-us.apache.org/repos/asf/kafka/repo
Commit: http://git-wip-us.apache.org/repos/asf/kafka/commit/36556b80
Tree: http://git-wip-us.apache.org/repos/asf/kafka/tree/36556b80
Diff: http://git-wip-us.apache.org/repos/asf/kafka/diff/36556b80

Branch: refs/heads/trunk
Commit: 36556b8041d3647375380e6fd70b8f37ba572ddc
Parents: 082def0
Author: Bill Bejeck 
Authored: Fri Sep 29 11:11:12 2017 +0100
Committer: Damian Guy 
Committed: Fri Sep 29 11:11:12 2017 +0100

--
 .../kstream/internals/TupleForwarder.java   |  2 +-
 .../state/internals/CachedStateStore.java   |  4 +-
 .../state/internals/CachingKeyValueStore.java   |  9 +++-
 .../state/internals/CachingSessionStore.java|  7 ++-
 .../state/internals/CachingWindowStore.java |  9 +++-
 .../internals/CachingKeyValueStoreTest.java | 17 +--
 .../internals/CachingSessionStoreTest.java  | 51 +++-
 .../state/internals/CachingWindowStoreTest.java | 14 +-
 8 files changed, 100 insertions(+), 13 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/kafka/blob/36556b80/streams/src/main/java/org/apache/kafka/streams/kstream/internals/TupleForwarder.java
--
diff --git 
a/streams/src/main/java/org/apache/kafka/streams/kstream/internals/TupleForwarder.java
 
b/streams/src/main/java/org/apache/kafka/streams/kstream/internals/TupleForwarder.java
index f07d7bb..4c02d1d 100644
--- 
a/streams/src/main/java/org/apache/kafka/streams/kstream/internals/TupleForwarder.java
+++ 
b/streams/src/main/java/org/apache/kafka/streams/kstream/internals/TupleForwarder.java
@@ -42,7 +42,7 @@ class TupleForwarder {
 this.context = context;
 this.sendOldValues = sendOldValues;
 if (this.cachedStateStore != null) {
-cachedStateStore.setFlushListener(flushListener);
+cachedStateStore.setFlushListener(flushListener, sendOldValues);
 }
 }
 

http://git-wip-us.apache.org/repos/asf/kafka/blob/36556b80/streams/src/main/java/org/apache/kafka/streams/state/internals/CachedStateStore.java
--
diff --git 
a/streams/src/main/java/org/apache/kafka/streams/state/internals/CachedStateStore.java
 
b/streams/src/main/java/org/apache/kafka/streams/state/internals/CachedStateStore.java
index 2f0fa1c..4bc813c 100644
--- 
a/streams/src/main/java/org/apache/kafka/streams/state/internals/CachedStateStore.java
+++ 
b/streams/src/main/java/org/apache/kafka/streams/state/internals/CachedStateStore.java
@@ -23,6 +23,8 @@ public interface CachedStateStore<K, V> {
  * Set the {@link CacheFlushListener} to be notified when entries are 
flushed from the
  * cache to the underlying {@link 
org.apache.kafka.streams.processor.StateStore}
  * @param listener
+ * @param sendOldValues
  */
-void setFlushListener(final CacheFlushListener<K, V> listener);
+void setFlushListener(final CacheFlushListener<K, V> listener,
+                      final boolean sendOldValues);
 }

http://git-wip-us.apache.org/repos/asf/kafka/blob/36556b80/streams/src/main/java/org/apache/kafka/streams/state/internals/CachingKeyValueStore.java
--
diff --git 
a/streams/src/main/java/org/apache/kafka/streams/state/internals/CachingKeyValueStore.java
 
b/streams/src/main/java/org/apache/kafka/streams/state/internals/CachingKeyValueStore.java
index a89c741..f0669a4 100644
--- 
a/streams/src/main/java/org/apache/kafka/streams/state/internals/CachingKeyValueStore.java
+++ 
b/streams/src/main/java/org/apache/kafka/streams/state/internals/CachingKeyValueStore.java
@@ -38,6 +38,7 @@ class CachingKeyValueStore extends 
WrappedStateStore.AbstractStateStore im
 private final Serde<K> keySerde;
 private final Serde<V> valueSerde;
 private CacheFlushListener<K, V> flushListener;
+private boolean sendOldValues;
 private String cacheName;
 private ThreadCache cache;
 private InternalProcessorContext context;
@@ -87,9 +88,10 @@ class CachingKeyValueStore extends 
WrappedStateStore.AbstractStateStore im
 context.setRecordContext(entry.recordContext());
 if (flushListener != null) {
 
+final V oldValue = sendOldValues ? 
serdes.valueFrom(underlying.get(entry.key())) : null;
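
For context, a hedged sketch of a downstream that actually consumes old values; only such topologies pay for the fetch guarded above. Topic names and the grouping key are placeholders:

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.Consumed;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.Serialized;

public class SendOldValuesExample {
    public static void main(final String[] args) {
        final StreamsBuilder builder = new StreamsBuilder();
        final KTable<String, Long> table =
            builder.table("input-topic", Consumed.with(Serdes.String(), Serdes.Long()));

        // re-grouping a KTable needs old values downstream: on an update the
        // subtractor removes the old value before the adder applies the new one
        final KTable<String, Long> sums = table
            .groupBy((key, value) -> KeyValue.pair(key.substring(0, 1), value),
                     Serialized.with(Serdes.String(), Serdes.Long()))
            .reduce(
                (aggValue, newValue) -> aggValue + newValue,  /* adder */
                (aggValue, oldValue) -> aggValue - oldValue); /* subtractor */
    }
}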
 

kafka git commit: MINOR: always set Serde.Long on count operations

2017-09-29 Thread damianguy
Repository: kafka
Updated Branches:
  refs/heads/trunk b79b17971 -> 082def05c


MINOR: always set Serde.Long on count operations

Author: Damian Guy 

Reviewers: Guozhang Wang , Ismael Juma , 
Bill Bejeck , Matthias J. Sax 

Closes #3943 from dguy/count-materialized


Project: http://git-wip-us.apache.org/repos/asf/kafka/repo
Commit: http://git-wip-us.apache.org/repos/asf/kafka/commit/082def05
Tree: http://git-wip-us.apache.org/repos/asf/kafka/tree/082def05
Diff: http://git-wip-us.apache.org/repos/asf/kafka/diff/082def05

Branch: refs/heads/trunk
Commit: 082def05ca5af4f30e05aa28ba83fa299f30337b
Parents: b79b179
Author: Damian Guy 
Authored: Fri Sep 29 11:06:34 2017 +0100
Committer: Damian Guy 
Committed: Fri Sep 29 11:06:34 2017 +0100

--
 .../java/org/apache/kafka/streams/kstream/KGroupedStream.java  | 2 ++
 .../apache/kafka/streams/kstream/SessionWindowedKStream.java   | 4 +++-
 .../org/apache/kafka/streams/kstream/TimeWindowedKStream.java  | 4 +++-
 .../kafka/streams/kstream/internals/KGroupedStreamImpl.java| 6 ++
 .../streams/kstream/internals/SessionWindowedKStreamImpl.java  | 5 +
 .../streams/kstream/internals/TimeWindowedKStreamImpl.java | 5 +
 .../streams/kstream/internals/KGroupedStreamImplTest.java  | 3 +--
 .../kstream/internals/SessionWindowedKStreamImplTest.java  | 3 +--
 8 files changed, 26 insertions(+), 6 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/kafka/blob/082def05/streams/src/main/java/org/apache/kafka/streams/kstream/KGroupedStream.java
--
diff --git 
a/streams/src/main/java/org/apache/kafka/streams/kstream/KGroupedStream.java 
b/streams/src/main/java/org/apache/kafka/streams/kstream/KGroupedStream.java
index 1ff1759..1c72ebf 100644
--- a/streams/src/main/java/org/apache/kafka/streams/kstream/KGroupedStream.java
+++ b/streams/src/main/java/org/apache/kafka/streams/kstream/KGroupedStream.java
@@ -18,6 +18,7 @@ package org.apache.kafka.streams.kstream;
 
 import org.apache.kafka.common.annotation.InterfaceStability;
 import org.apache.kafka.common.serialization.Serde;
+import org.apache.kafka.common.serialization.Serdes;
 import org.apache.kafka.common.utils.Bytes;
 import org.apache.kafka.streams.KafkaStreams;
 import org.apache.kafka.streams.KeyValue;
@@ -177,6 +178,7 @@ public interface KGroupedStream<K, V> {
  * query the value of the key on a parallel running instance of your Kafka 
Streams application.
  *
  * @param materialized  an instance of {@link Materialized} used to 
materialize a state store. Cannot be {@code null}.
+ *  Note: the valueSerde will be automatically set to 
{@link Serdes#Long()} if there is no valueSerde provided
  * @return a {@link KTable} that contains "update" records with unmodified 
keys and {@link Long} values that
  * represent the latest (rolling) count (i.e., number of records) for each 
key
  */

http://git-wip-us.apache.org/repos/asf/kafka/blob/082def05/streams/src/main/java/org/apache/kafka/streams/kstream/SessionWindowedKStream.java
--
diff --git 
a/streams/src/main/java/org/apache/kafka/streams/kstream/SessionWindowedKStream.java
 
b/streams/src/main/java/org/apache/kafka/streams/kstream/SessionWindowedKStream.java
index d8044ac..3c3ef7e 100644
--- 
a/streams/src/main/java/org/apache/kafka/streams/kstream/SessionWindowedKStream.java
+++ 
b/streams/src/main/java/org/apache/kafka/streams/kstream/SessionWindowedKStream.java
@@ -16,6 +16,7 @@
  */
 package org.apache.kafka.streams.kstream;
 
+import org.apache.kafka.common.serialization.Serdes;
 import org.apache.kafka.common.utils.Bytes;
 import org.apache.kafka.streams.KafkaStreams;
 import org.apache.kafka.streams.KeyValue;
@@ -90,7 +91,8 @@ public interface SessionWindowedKStream<K, V> {
  * For non-local keys, a custom RPC mechanism must be implemented using 
{@link KafkaStreams#allMetadata()} to
  * query the value of the key on a parallel running instance of your Kafka 
Streams application.
  *
- * @param materialized an instance of {@link Materialized} used to 
materialize a state store. Cannot be {@code null}
+ * @param materialized  an instance of {@link Materialized} used to 
materialize a state store. Cannot be {@code null}.
+ *  Note: the valueSerde will be automatically set to 
{@link Serdes#Long()} if there is no valueSerde provided
  * @return a windowed {@link KTable} that contains "update" records with 
unmodified keys and {@link Long} values
  * that represent the latest (rolling) count (i.e., number of records) for 
each key 
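
For context, a minimal sketch of the documented behavior, with placeholder topic and store names: when the Materialized instance carries no value serde, count() fills in Serdes.Long() automatically:

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.common.utils.Bytes;
import org.apache.kafka.streams.Consumed;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.kstream.Serialized;
import org.apache.kafka.streams.state.KeyValueStore;

public class CountSerdeExample {
    public static void main(final String[] args) {
        final StreamsBuilder builder = new StreamsBuilder();
        final KTable<String, Long> counts = builder
            .stream("words", Consumed.with(Serdes.String(), Serdes.String()))
            .groupByKey(Serialized.with(Serdes.String(), Serdes.String()))
            // no withValueSerde(...) needed: Serdes.Long() is set automatically
            .count(Materialized.<String, Long, KeyValueStore<Bytes, byte[]>>as("counts-store"));
    }
}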

kafka git commit: KAFKA-5949; Follow-up after latest KIP-161 changes

2017-09-29 Thread damianguy
Repository: kafka
Updated Branches:
  refs/heads/trunk eaabb6cd0 -> b79b17971


KAFKA-5949; Follow-up after latest KIP-161 changes

 - compare KAFKA-5958

Author: Matthias J. Sax 

Reviewers: Damian Guy 

Closes #3986 from mjsax/kafka-5949-exceptions-user-callbacks-KIP-161-follow-up


Project: http://git-wip-us.apache.org/repos/asf/kafka/repo
Commit: http://git-wip-us.apache.org/repos/asf/kafka/commit/b79b1797
Tree: http://git-wip-us.apache.org/repos/asf/kafka/tree/b79b1797
Diff: http://git-wip-us.apache.org/repos/asf/kafka/diff/b79b1797

Branch: refs/heads/trunk
Commit: b79b179716b5f8bacb870a53a5a9216a0687b3c9
Parents: eaabb6c
Author: Matthias J. Sax 
Authored: Fri Sep 29 10:21:57 2017 +0100
Committer: Damian Guy 
Committed: Fri Sep 29 10:21:57 2017 +0100

--
 .../org/apache/kafka/streams/KafkaStreams.java  | 30 ++--
 .../internals/CompositeRestoreListener.java | 30 ++--
 2 files changed, 30 insertions(+), 30 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/kafka/blob/b79b1797/streams/src/main/java/org/apache/kafka/streams/KafkaStreams.java
--
diff --git a/streams/src/main/java/org/apache/kafka/streams/KafkaStreams.java 
b/streams/src/main/java/org/apache/kafka/streams/KafkaStreams.java
index 2f5ce4b..928d0e9 100644
--- a/streams/src/main/java/org/apache/kafka/streams/KafkaStreams.java
+++ b/streams/src/main/java/org/apache/kafka/streams/KafkaStreams.java
@@ -562,21 +562,45 @@ public class KafkaStreams {
 @Override
 public void onRestoreStart(final TopicPartition topicPartition, 
final String storeName, final long startingOffset, final long endingOffset) {
 if (globalStateRestoreListener != null) {
-globalStateRestoreListener.onRestoreStart(topicPartition, 
storeName, startingOffset, endingOffset);
+try {
+
globalStateRestoreListener.onRestoreStart(topicPartition, storeName, 
startingOffset, endingOffset);
+} catch (final Exception fatalUserException) {
+throw new StreamsException(
+String.format("Fatal user code error in store 
restore listener for store %s, partition %s.",
+storeName,
+topicPartition),
+fatalUserException);
+}
 }
 }
 
 @Override
 public void onBatchRestored(final TopicPartition topicPartition, 
final String storeName, final long batchEndOffset, final long numRestored) {
 if (globalStateRestoreListener != null) {
-globalStateRestoreListener.onBatchRestored(topicPartition, 
storeName, batchEndOffset, numRestored);
+try {
+
globalStateRestoreListener.onBatchRestored(topicPartition, storeName, 
batchEndOffset, numRestored);
+} catch (final Exception fatalUserException) {
+throw new StreamsException(
+String.format("Fatal user code error in store 
restore listener for store %s, partition %s.",
+storeName,
+topicPartition),
+fatalUserException);
+}
 }
 }
 
 @Override
 public void onRestoreEnd(final TopicPartition topicPartition, 
final String storeName, final long totalRestored) {
 if (globalStateRestoreListener != null) {
-globalStateRestoreListener.onRestoreEnd(topicPartition, 
storeName, totalRestored);
+try {
+
globalStateRestoreListener.onRestoreEnd(topicPartition, storeName, 
totalRestored);
+} catch (final Exception fatalUserException) {
+throw new StreamsException(
+String.format("Fatal user code error in store 
restore listener for store %s, partition %s.",
+storeName,
+topicPartition),
+fatalUserException);
+}
 }
 }
 };

http://git-wip-us.apache.org/repos/asf/kafka/blob/b79b1797/streams/src/main/java/org/apache/kafka/streams/processor/internals/CompositeRestoreListener.java
--
diff --git 
a/streams/src/main/java/org/apache/kafka/streams/processor/internals/CompositeRestoreListener.java
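
For context, a hedged sketch of a user-supplied listener affected by this change; after it, any exception thrown from these callbacks is wrapped in a StreamsException naming the store and partition instead of escaping raw. The logging body is illustrative:

import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.streams.processor.StateRestoreListener;

public class LoggingRestoreListener implements StateRestoreListener {
    @Override
    public void onRestoreStart(final TopicPartition topicPartition, final String storeName,
                               final long startingOffset, final long endingOffset) {
        System.out.printf("restore start: %s %s [%d..%d]%n",
            storeName, topicPartition, startingOffset, endingOffset);
    }

    @Override
    public void onBatchRestored(final TopicPartition topicPartition, final String storeName,
                                final long batchEndOffset, final long numRestored) {
        System.out.printf("restored %d records into %s%n", numRestored, storeName);
    }

    @Override
    public void onRestoreEnd(final TopicPartition topicPartition, final String storeName,
                             final long totalRestored) {
        System.out.printf("restore end: %s, total %d%n", storeName, totalRestored);
    }
}

// registered on the client as: streams.setGlobalStateRestoreListener(new LoggingRestoreListener());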
 

kafka git commit: KAFKA-4593; Don't throw IllegalStateException and die on task migration

2017-09-29 Thread damianguy
Repository: kafka
Updated Branches:
  refs/heads/trunk 177dd7f21 -> eaabb6cd0


KAFKA-4593; Don't throw IllegalStateException and die on task migration

Author: Matthias J. Sax 

Reviewers: Damian Guy , Guozhang Wang 

Closes #3948 from mjsax/kafka-4593-illegal-state-exception-in-restore


Project: http://git-wip-us.apache.org/repos/asf/kafka/repo
Commit: http://git-wip-us.apache.org/repos/asf/kafka/commit/eaabb6cd
Tree: http://git-wip-us.apache.org/repos/asf/kafka/tree/eaabb6cd
Diff: http://git-wip-us.apache.org/repos/asf/kafka/diff/eaabb6cd

Branch: refs/heads/trunk
Commit: eaabb6cd0173c4f6854eb5da39194a7e3fc0162c
Parents: 177dd7f
Author: Matthias J. Sax 
Authored: Fri Sep 29 10:00:13 2017 +0100
Committer: Damian Guy 
Committed: Fri Sep 29 10:00:13 2017 +0100

--
 .../streams/errors/TaskMigratedException.java   |  52 +++
 .../processor/internals/AssignedTasks.java  |  89 +---
 .../processor/internals/ChangelogReader.java|   2 +-
 .../processor/internals/PunctuationQueue.java   |   6 +-
 .../processor/internals/RestoringTasks.java |  23 +++
 .../internals/StoreChangelogReader.java |  43 +++---
 .../streams/processor/internals/StreamTask.java | 111 ++-
 .../processor/internals/StreamThread.java   |  36 -
 .../processor/internals/TaskManager.java|  33 -
 .../processor/internals/AssignedTasksTest.java  | 140 ---
 .../internals/MockChangelogReader.java  |  53 +++
 .../internals/ProcessorStateManagerTest.java|   1 -
 .../internals/StoreChangelogReaderTest.java |  87 +---
 .../processor/internals/StreamThreadTest.java   |  16 ++-
 .../processor/internals/TaskManagerTest.java|   4 +-
 .../apache/kafka/test/MockChangelogReader.java  |  55 
 16 files changed, 530 insertions(+), 221 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/kafka/blob/eaabb6cd/streams/src/main/java/org/apache/kafka/streams/errors/TaskMigratedException.java
--
diff --git 
a/streams/src/main/java/org/apache/kafka/streams/errors/TaskMigratedException.java
 
b/streams/src/main/java/org/apache/kafka/streams/errors/TaskMigratedException.java
new file mode 100644
index 000..f2fa594
--- /dev/null
+++ 
b/streams/src/main/java/org/apache/kafka/streams/errors/TaskMigratedException.java
@@ -0,0 +1,52 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.kafka.streams.errors;
+
+
+import org.apache.kafka.common.TopicPartition;
+import org.apache.kafka.streams.processor.internals.Task;
+
+/**
+ * Indicates that a task got migrated to another thread.
+ * Thus, the task raising this exception can be cleaned up and closed as 
"zombie".
+ */
+public class TaskMigratedException extends StreamsException {
+
+private final static long serialVersionUID = 1L;
+
+public TaskMigratedException(final Task task) {
+this(task, null);
+}
+
+public TaskMigratedException(final Task task,
+ final TopicPartition topicPartition,
+ final long endOffset,
+ final long pos) {
+super(String.format("Log end offset of %s should not change while 
restoring: old end offset %d, current offset %d%n%s",
+topicPartition,
+endOffset,
+pos,
+task.toString("> ")),
+null);
+}
+
+public TaskMigratedException(final Task task,
+ final Throwable throwable) {
+super(task.toString(), throwable);
+}
+
+}

http://git-wip-us.apache.org/repos/asf/kafka/blob/eaabb6cd/streams/src/main/java/org/apache/kafka/streams/processor/internals/AssignedTasks.java
--
diff --git 

kafka git commit: HOTFIX: fix build compilation error

2017-09-28 Thread damianguy
Repository: kafka
Updated Branches:
  refs/heads/trunk 071f9fe1a -> 9d0a89aea


HOTFIX: fix build compilation error

Author: Damian Guy 

Reviewers: Ismael Juma 

Closes #3981 from dguy/fix-build


Project: http://git-wip-us.apache.org/repos/asf/kafka/repo
Commit: http://git-wip-us.apache.org/repos/asf/kafka/commit/9d0a89ae
Tree: http://git-wip-us.apache.org/repos/asf/kafka/tree/9d0a89ae
Diff: http://git-wip-us.apache.org/repos/asf/kafka/diff/9d0a89ae

Branch: refs/heads/trunk
Commit: 9d0a89aea5252e5b7ff8deff707547e6a6dbbc4f
Parents: 071f9fe
Author: Damian Guy 
Authored: Thu Sep 28 12:53:30 2017 +0100
Committer: Damian Guy 
Committed: Thu Sep 28 12:53:30 2017 +0100

--
 .../streams/processor/internals/GlobalStateManagerImplTest.java| 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/kafka/blob/9d0a89ae/streams/src/test/java/org/apache/kafka/streams/processor/internals/GlobalStateManagerImplTest.java
--
diff --git 
a/streams/src/test/java/org/apache/kafka/streams/processor/internals/GlobalStateManagerImplTest.java
 
b/streams/src/test/java/org/apache/kafka/streams/processor/internals/GlobalStateManagerImplTest.java
index 0519fb0..e9d61f5 100644
--- 
a/streams/src/test/java/org/apache/kafka/streams/processor/internals/GlobalStateManagerImplTest.java
+++ 
b/streams/src/test/java/org/apache/kafka/streams/processor/internals/GlobalStateManagerImplTest.java
@@ -214,7 +214,7 @@ public class GlobalStateManagerImplTest {
 stateManager.initialize(context);
 
 final TheStateRestoreCallback stateRestoreCallback = new 
TheStateRestoreCallback();
-stateManager.register(store1, false, stateRestoreCallback);
+stateManager.register(store1, stateRestoreCallback);
 
 assertThat(stateRestoreListener.restoreStartOffset, equalTo(1L));
 assertThat(stateRestoreListener.restoreEndOffset, equalTo(5L));



kafka git commit: KAFKA-5979; Use single AtomicCounter to generate internal names

2017-09-28 Thread damianguy
Repository: kafka
Updated Branches:
  refs/heads/trunk e5f2471c5 -> e846daa89


KAFKA-5979; Use single AtomicCounter to generate internal names

Author: Matthias J. Sax 

Reviewers: Bill Bejeck , Damian Guy 

Closes #3979 from mjsax/kafka-5979-kip-120-regression


Project: http://git-wip-us.apache.org/repos/asf/kafka/repo
Commit: http://git-wip-us.apache.org/repos/asf/kafka/commit/e846daa8
Tree: http://git-wip-us.apache.org/repos/asf/kafka/tree/e846daa8
Diff: http://git-wip-us.apache.org/repos/asf/kafka/diff/e846daa8

Branch: refs/heads/trunk
Commit: e846daa89b1cbdf7c08f1b719fcf1ffc3885614f
Parents: e5f2471
Author: Matthias J. Sax 
Authored: Thu Sep 28 11:07:54 2017 +0100
Committer: Damian Guy 
Committed: Thu Sep 28 11:07:54 2017 +0100

--
 .../apache/kafka/streams/kstream/KStreamBuilder.java|  7 ++-
 .../kstream/internals/InternalStreamsBuilder.java   |  2 +-
 .../kafka/streams/kstream/KStreamBuilderTest.java   | 12 ++--
 3 files changed, 9 insertions(+), 12 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/kafka/blob/e846daa8/streams/src/main/java/org/apache/kafka/streams/kstream/KStreamBuilder.java
--
diff --git 
a/streams/src/main/java/org/apache/kafka/streams/kstream/KStreamBuilder.java 
b/streams/src/main/java/org/apache/kafka/streams/kstream/KStreamBuilder.java
index e7bcc95..ab666ba 100644
--- a/streams/src/main/java/org/apache/kafka/streams/kstream/KStreamBuilder.java
+++ b/streams/src/main/java/org/apache/kafka/streams/kstream/KStreamBuilder.java
@@ -37,7 +37,6 @@ import 
org.apache.kafka.streams.state.internals.RocksDBKeyValueStoreSupplier;
 
 import java.util.Collections;
 import java.util.Objects;
-import java.util.concurrent.atomic.AtomicInteger;
 import java.util.regex.Pattern;
 
 /**
@@ -52,8 +51,6 @@ import java.util.regex.Pattern;
 @Deprecated
 public class KStreamBuilder extends 
org.apache.kafka.streams.processor.TopologyBuilder {
 
-private final AtomicInteger index = new AtomicInteger(0);
-
 private final InternalStreamsBuilder internalStreamsBuilder = new 
InternalStreamsBuilder(super.internalTopologyBuilder);
 
 private Topology.AutoOffsetReset translateAutoOffsetReset(final 
org.apache.kafka.streams.processor.TopologyBuilder.AutoOffsetReset resetPolicy) 
{
@@ -1249,7 +1246,7 @@ public class KStreamBuilder extends 
org.apache.kafka.streams.processor.TopologyB
  * @return a new unique name
  */
 public String newName(final String prefix) {
-return prefix + String.format("%010d", index.getAndIncrement());
+return internalStreamsBuilder.newName(prefix);
 }
 
 /**
@@ -1261,7 +1258,7 @@ public class KStreamBuilder extends 
org.apache.kafka.streams.processor.TopologyB
  * @return a new unique name
  */
 public String newStoreName(final String prefix) {
-return prefix + String.format(KTableImpl.STATE_STORE_NAME + "%010d", 
index.getAndIncrement());
+return internalStreamsBuilder.newStoreName(prefix);
 }
 
 }

http://git-wip-us.apache.org/repos/asf/kafka/blob/e846daa8/streams/src/main/java/org/apache/kafka/streams/kstream/internals/InternalStreamsBuilder.java
--
diff --git 
a/streams/src/main/java/org/apache/kafka/streams/kstream/internals/InternalStreamsBuilder.java
 
b/streams/src/main/java/org/apache/kafka/streams/kstream/internals/InternalStreamsBuilder.java
index fa696fe..357a70c 100644
--- 
a/streams/src/main/java/org/apache/kafka/streams/kstream/internals/InternalStreamsBuilder.java
+++ 
b/streams/src/main/java/org/apache/kafka/streams/kstream/internals/InternalStreamsBuilder.java
@@ -159,7 +159,7 @@ public class InternalStreamsBuilder {
 return new GlobalKTableImpl<>(new KTableSourceValueGetterSupplier<K, V>(storeBuilder.name()));
 }
 
-String newName(final String prefix) {
+public String newName(final String prefix) {
 return prefix + String.format("%010d", index.getAndIncrement());
 }
 

http://git-wip-us.apache.org/repos/asf/kafka/blob/e846daa8/streams/src/test/java/org/apache/kafka/streams/kstream/KStreamBuilderTest.java
--
diff --git 
a/streams/src/test/java/org/apache/kafka/streams/kstream/KStreamBuilderTest.java
 
b/streams/src/test/java/org/apache/kafka/streams/kstream/KStreamBuilderTest.java
index c0bfa99..5ffedb8 100644
--- 
a/streams/src/test/java/org/apache/kafka/streams/kstream/KStreamBuilderTest.java
+++ 
b/streams/src/test/java/org/apache/kafka/streams/kstream/KStreamBuilderTest.java
@@ -19,8 +19,8 @@ package org.apache.kafka.streams.kstream;
 import 

[2/2] kafka git commit: KAFKA-5949; User Callback Exceptions need to be handled properly

2017-09-28 Thread damianguy
KAFKA-5949; User Callback Exceptions need to be handled properly

 - catch user exception in user callback (TimestampExtractor, 
DeserializationHandler, StateRestoreListener) and wrap with StreamsException

Additional cleanup:
 - rename globalRestoreListener to userRestoreListener
 - remove unnecessary interface -> collapse SourceNodeRecordDeserializer and 
RecordDeserializer
 - removed unused parameter loggingEnabled from ProcessorContext#register

Author: Matthias J. Sax 

Reviewers: Bill Bejeck , Guozhang Wang , 
Damian Guy 

Closes #3939 from mjsax/kafka-5949-exceptions-user-callbacks


Project: http://git-wip-us.apache.org/repos/asf/kafka/repo
Commit: http://git-wip-us.apache.org/repos/asf/kafka/commit/e5f2471c
Tree: http://git-wip-us.apache.org/repos/asf/kafka/tree/e5f2471c
Diff: http://git-wip-us.apache.org/repos/asf/kafka/diff/e5f2471c

Branch: refs/heads/trunk
Commit: e5f2471c548fc490a42dd0321bcf7fcdd4ddc52d
Parents: 2703fda
Author: Matthias J. Sax 
Authored: Thu Sep 28 11:00:31 2017 +0100
Committer: Damian Guy 
Committed: Thu Sep 28 11:00:31 2017 +0100

--
 .../errors/LogAndContinueExceptionHandler.java  |   6 +-
 .../errors/LogAndFailExceptionHandler.java  |   8 +-
 .../kafka/streams/kstream/ValueTransformer.java |   2 +
 .../internals/KStreamTransformValues.java   |   6 +-
 .../streams/processor/ProcessorContext.java |  11 +-
 .../kafka/streams/processor/StateStore.java |   4 +
 .../internals/AbstractProcessorContext.java |   4 +-
 .../processor/internals/AbstractTask.java   |   6 +-
 .../processor/internals/AssignedTasks.java  |   5 +
 .../internals/CompositeRestoreListener.java |  52 +++--
 .../processor/internals/GlobalStateManager.java |   6 +
 .../internals/GlobalStateManagerImpl.java   |   1 -
 .../internals/GlobalStateUpdateTask.java|  21 ++--
 .../processor/internals/GlobalStreamThread.java |   7 +-
 .../internals/InternalTopologyBuilder.java  |   8 +-
 .../internals/ProcessorStateManager.java|  13 +--
 .../processor/internals/RecordDeserializer.java |  70 +++-
 .../processor/internals/RecordQueue.java|  28 +++--
 .../internals/SourceNodeRecordDeserializer.java |  90 ---
 .../processor/internals/StateManager.java   |   8 +-
 .../processor/internals/StateRestorer.java  |   4 +-
 .../internals/StoreChangelogReader.java |   8 +-
 .../streams/processor/internals/StreamTask.java |   3 +-
 .../processor/internals/StreamThread.java   |  12 +-
 .../kafka/streams/processor/internals/Task.java |   3 +
 .../processor/internals/TaskManager.java|   5 +-
 .../state/internals/InMemoryKeyValueStore.java  |   2 +-
 .../streams/state/internals/MemoryLRUCache.java |   2 +-
 .../streams/processor/TopologyBuilderTest.java  |   2 +-
 .../internals/CompositeRestoreListenerTest.java |   8 +-
 .../internals/GlobalStateManagerImplTest.java   |  42 +++
 .../internals/GlobalStateTaskTest.java  |  20 +++-
 .../internals/InternalTopologyBuilderTest.java  |   3 +
 .../processor/internals/PartitionGroupTest.java |  18 ++-
 .../internals/ProcessorStateManagerTest.java|  50 -
 .../internals/RecordDeserializerTest.java   |  98 
 .../processor/internals/RecordQueueTest.java|  32 --
 .../SourceNodeRecordDeserializerTest.java   | 111 ---
 .../processor/internals/StandbyTaskTest.java|   2 +-
 .../processor/internals/StateManagerStub.java   |   2 +-
 .../processor/internals/StateRestorerTest.java  |   2 +-
 .../internals/StoreChangelogReaderTest.java |   2 +-
 .../internals/StreamPartitionAssignorTest.java  |  15 ++-
 .../processor/internals/StreamTaskTest.java |   4 +-
 .../kafka/test/GlobalStateManagerStub.java  |   2 +-
 .../apache/kafka/test/MockProcessorContext.java |   4 +-
 .../kafka/test/MockStateStoreSupplier.java  |  39 +++
 .../apache/kafka/test/NoOpProcessorContext.java |   4 +-
 .../kafka/test/ProcessorTopologyTestDriver.java |   5 +-
 49 files changed, 478 insertions(+), 382 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/kafka/blob/e5f2471c/streams/src/main/java/org/apache/kafka/streams/errors/LogAndContinueExceptionHandler.java
--
diff --git 
a/streams/src/main/java/org/apache/kafka/streams/errors/LogAndContinueExceptionHandler.java
 
b/streams/src/main/java/org/apache/kafka/streams/errors/LogAndContinueExceptionHandler.java
index dde4b52..b2ef45b 100644
--- 
a/streams/src/main/java/org/apache/kafka/streams/errors/LogAndContinueExceptionHandler.java
+++ 
b/streams/src/main/java/org/apache/kafka/streams/errors/LogAndContinueExceptionHandler.java
@@ -38,9 +38,9 @@ 
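
For context, a hedged sketch of one of the user callbacks this change hardens; the payload format is illustrative. If extract() throws, Streams now wraps the failure in a StreamsException with context instead of dying with an opaque error:

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.streams.processor.TimestampExtractor;

public class PayloadTimestampExtractor implements TimestampExtractor {
    @Override
    public long extract(final ConsumerRecord<Object, Object> record, final long previousTimestamp) {
        // a malformed payload makes parseLong throw; per this change the exception
        // is caught by Streams and rethrown wrapped in a StreamsException
        return Long.parseLong(String.valueOf(record.value()));
    }
}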

[1/2] kafka git commit: KAFKA-5949; User Callback Exceptions need to be handled properly

2017-09-28 Thread damianguy
Repository: kafka
Updated Branches:
  refs/heads/trunk 2703fda52 -> e5f2471c5


http://git-wip-us.apache.org/repos/asf/kafka/blob/e5f2471c/streams/src/test/java/org/apache/kafka/streams/processor/internals/ProcessorStateManagerTest.java
--
diff --git 
a/streams/src/test/java/org/apache/kafka/streams/processor/internals/ProcessorStateManagerTest.java
 
b/streams/src/test/java/org/apache/kafka/streams/processor/internals/ProcessorStateManagerTest.java
index f3135d5..ede6dd4 100644
--- 
a/streams/src/test/java/org/apache/kafka/streams/processor/internals/ProcessorStateManagerTest.java
+++ 
b/streams/src/test/java/org/apache/kafka/streams/processor/internals/ProcessorStateManagerTest.java
@@ -105,7 +105,7 @@ public class ProcessorStateManagerTest {
 final ProcessorStateManager stateMgr = getStandByStateManager(taskId);
 
 try {
-stateMgr.register(persistentStore, true, batchingRestoreCallback);
+stateMgr.register(persistentStore, batchingRestoreCallback);
 stateMgr.updateStandbyStates(persistentStorePartition, 
Collections.singletonList(consumerRecord));
 assertThat(batchingRestoreCallback.getRestoredRecords().size(), 
is(1));
 
assertTrue(batchingRestoreCallback.getRestoredRecords().contains(expectedKeyValue));
@@ -123,7 +123,7 @@ public class ProcessorStateManagerTest {
 final ProcessorStateManager stateMgr = getStandByStateManager(taskId);
 
 try {
-stateMgr.register(persistentStore, true, 
persistentStore.stateRestoreCallback);
+stateMgr.register(persistentStore, 
persistentStore.stateRestoreCallback);
 stateMgr.updateStandbyStates(persistentStorePartition, 
Collections.singletonList(consumerRecord));
 assertThat(persistentStore.keys.size(), is(1));
 assertTrue(persistentStore.keys.contains(intKey));
@@ -153,7 +153,7 @@ public class ProcessorStateManagerTest {
 logContext);
 
 try {
-stateMgr.register(persistentStore, true, 
persistentStore.stateRestoreCallback);
+stateMgr.register(persistentStore, 
persistentStore.stateRestoreCallback);
 assertTrue(changelogReader.wasRegistered(new 
TopicPartition(persistentStoreTopicName, 2)));
 } finally {
 stateMgr.close(Collections.emptyMap());
@@ -180,7 +180,7 @@ public class ProcessorStateManagerTest {
 logContext);
 
 try {
-stateMgr.register(nonPersistentStore, true, 
nonPersistentStore.stateRestoreCallback);
+stateMgr.register(nonPersistentStore, 
nonPersistentStore.stateRestoreCallback);
 assertTrue(changelogReader.wasRegistered(new 
TopicPartition(nonPersistentStoreTopicName, 2)));
 } finally {
 stateMgr.close(Collections.emptyMap());
@@ -229,9 +229,9 @@ public class ProcessorStateManagerTest {
 logContext);
 
 try {
-stateMgr.register(store1, true, store1.stateRestoreCallback);
-stateMgr.register(store2, true, store2.stateRestoreCallback);
-stateMgr.register(store3, true, store3.stateRestoreCallback);
+stateMgr.register(store1, store1.stateRestoreCallback);
+stateMgr.register(store2, store2.stateRestoreCallback);
+stateMgr.register(store3, store3.stateRestoreCallback);
 
 final Map changeLogOffsets = 
stateMgr.checkpointed();
 
@@ -261,7 +261,7 @@ public class ProcessorStateManagerTest {
 false,
 logContext);
 try {
-stateMgr.register(mockStateStore, true, 
mockStateStore.stateRestoreCallback);
+stateMgr.register(mockStateStore, 
mockStateStore.stateRestoreCallback);
 
 assertNull(stateMgr.getStore("noSuchStore"));
 assertEquals(mockStateStore, 
stateMgr.getStore(nonPersistentStoreName));
@@ -299,8 +299,8 @@ public class ProcessorStateManagerTest {
 // make sure the checkpoint file isn't deleted
 assertTrue(checkpointFile.exists());
 
-stateMgr.register(persistentStore, true, 
persistentStore.stateRestoreCallback);
-stateMgr.register(nonPersistentStore, true, 
nonPersistentStore.stateRestoreCallback);
+stateMgr.register(persistentStore, 
persistentStore.stateRestoreCallback);
+stateMgr.register(nonPersistentStore, 
nonPersistentStore.stateRestoreCallback);
 } finally {
 // close the state manager with the ack'ed offsets
 stateMgr.flush();
@@ -330,7 +330,7 @@ public class ProcessorStateManagerTest {
 changelogReader,
 false,
 logContext);
-stateMgr.register(nonPersistentStore, false, 
nonPersistentStore.stateRestoreCallback);
+stateMgr.register(nonPersistentStore, 

kafka git commit: KAFKA-5958; Global stores access state restore listener

2017-09-28 Thread damianguy
Repository: kafka
Updated Branches:
  refs/heads/trunk 1444b7b59 -> e1543a5a8


KAFKA-5958; Global stores access state restore listener

Author: Bill Bejeck 

Reviewers: Damian Guy 

Closes #3973 from bbejeck/KAFKA-5958_global_stores_access_state_restore_listener


Project: http://git-wip-us.apache.org/repos/asf/kafka/repo
Commit: http://git-wip-us.apache.org/repos/asf/kafka/commit/e1543a5a
Tree: http://git-wip-us.apache.org/repos/asf/kafka/tree/e1543a5a
Diff: http://git-wip-us.apache.org/repos/asf/kafka/diff/e1543a5a

Branch: refs/heads/trunk
Commit: e1543a5a8ecbd9da6e39fb0952b1193450b3c931
Parents: 1444b7b
Author: Bill Bejeck 
Authored: Thu Sep 28 10:54:38 2017 +0100
Committer: Damian Guy 
Committed: Thu Sep 28 10:54:38 2017 +0100

--
 .../org/apache/kafka/streams/KafkaStreams.java  |  3 ++-
 .../internals/GlobalStateManagerImpl.java   | 17 +---
 .../processor/internals/GlobalStreamThread.java | 17 
 .../apache/kafka/streams/KafkaStreamsTest.java  |  4 +--
 .../internals/GlobalStateManagerImplTest.java   | 27 ++--
 .../internals/GlobalStreamThreadTest.java   |  8 --
 .../kafka/test/ProcessorTopologyTestDriver.java |  6 -
 7 files changed, 66 insertions(+), 16 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/kafka/blob/e1543a5a/streams/src/main/java/org/apache/kafka/streams/KafkaStreams.java
--
diff --git a/streams/src/main/java/org/apache/kafka/streams/KafkaStreams.java 
b/streams/src/main/java/org/apache/kafka/streams/KafkaStreams.java
index 5aec3c5..2f5ce4b 100644
--- a/streams/src/main/java/org/apache/kafka/streams/KafkaStreams.java
+++ b/streams/src/main/java/org/apache/kafka/streams/KafkaStreams.java
@@ -613,7 +613,8 @@ public class KafkaStreams {
 stateDirectory,
 metrics,
 Time.SYSTEM,
-globalThreadId);
+globalThreadId,
+
delegatingStateRestoreListener);
 globalThreadState = globalStreamThread.state();
 }
 

http://git-wip-us.apache.org/repos/asf/kafka/blob/e1543a5a/streams/src/main/java/org/apache/kafka/streams/processor/internals/GlobalStateManagerImpl.java
--
diff --git 
a/streams/src/main/java/org/apache/kafka/streams/processor/internals/GlobalStateManagerImpl.java
 
b/streams/src/main/java/org/apache/kafka/streams/processor/internals/GlobalStateManagerImpl.java
index d9205a0..d03425b 100644
--- 
a/streams/src/main/java/org/apache/kafka/streams/processor/internals/GlobalStateManagerImpl.java
+++ 
b/streams/src/main/java/org/apache/kafka/streams/processor/internals/GlobalStateManagerImpl.java
@@ -27,6 +27,7 @@ import 
org.apache.kafka.streams.errors.ProcessorStateException;
 import org.apache.kafka.streams.errors.StreamsException;
 import org.apache.kafka.streams.processor.BatchingStateRestoreCallback;
 import org.apache.kafka.streams.processor.StateRestoreCallback;
+import org.apache.kafka.streams.processor.StateRestoreListener;
 import org.apache.kafka.streams.processor.StateStore;
 import org.apache.kafka.streams.state.internals.OffsetCheckpoint;
 import org.slf4j.Logger;
@@ -61,15 +62,18 @@ public class GlobalStateManagerImpl implements 
GlobalStateManager {
 private final OffsetCheckpoint checkpoint;
 private final Set<String> globalStoreNames = new HashSet<>();
 private final Map<TopicPartition, Long> checkpointableOffsets = new HashMap<>();
+private final StateRestoreListener stateRestoreListener;
 
 public GlobalStateManagerImpl(final ProcessorTopology topology,
   final Consumer<byte[], byte[]> consumer,
-  final StateDirectory stateDirectory) {
+  final StateDirectory stateDirectory,
+  final StateRestoreListener 
stateRestoreListener) {
 this.topology = topology;
 this.consumer = consumer;
 this.stateDirectory = stateDirectory;
 this.baseDir = stateDirectory.globalStateDir();
 this.checkpoint = new OffsetCheckpoint(new File(this.baseDir, 
CHECKPOINT_FILE_NAME));
+this.stateRestoreListener = stateRestoreListener;
 }
 
 @Override
@@ -135,7 +139,7 @@ public class GlobalStateManagerImpl implements 
GlobalStateManager {
 final List topicPartitions = 
topicPartitionsForStore(store);
 final Map 
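
For context, a minimal sketch of how an application can observe restore progress now that global stores also report to the restore listener; `topology` and `config` are assumed to be built elsewhere, and the printed messages are purely illustrative:

    import org.apache.kafka.common.TopicPartition;
    import org.apache.kafka.streams.KafkaStreams;
    import org.apache.kafka.streams.processor.StateRestoreListener;

    // topology and config are assumed to exist; this only sketches the listener.
    final KafkaStreams streams = new KafkaStreams(topology, config);

    // With this change, restoration of global stores is reported through the
    // listener as well, not just restoration of regular state stores.
    streams.setGlobalStateRestoreListener(new StateRestoreListener() {
        @Override
        public void onRestoreStart(final TopicPartition partition, final String storeName,
                                   final long startingOffset, final long endingOffset) {
            System.out.println("Restore of " + storeName + " starting at " + startingOffset);
        }

        @Override
        public void onBatchRestored(final TopicPartition partition, final String storeName,
                                    final long batchEndOffset, final long numRestored) {
            System.out.println("Restored a batch of " + numRestored + " records for " + storeName);
        }

        @Override
        public void onRestoreEnd(final TopicPartition partition, final String storeName,
                                 final long totalRestored) {
            System.out.println("Restore of " + storeName + " completed, " + totalRestored + " records");
        }
    });
    streams.start();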

kafka git commit: MINOR:Updated Rabobank description

2017-09-27 Thread damianguy
Repository: kafka
Updated Branches:
  refs/heads/trunk 73cc41666 -> 5b943ca8a


MINOR:Updated Rabobank description

dguy Please review

Author: Manjula K 
Author: manjuapu 

Reviewers: Damian Guy 

Closes #3963 from manjuapu/customer-logo-stream


Project: http://git-wip-us.apache.org/repos/asf/kafka/repo
Commit: http://git-wip-us.apache.org/repos/asf/kafka/commit/5b943ca8
Tree: http://git-wip-us.apache.org/repos/asf/kafka/tree/5b943ca8
Diff: http://git-wip-us.apache.org/repos/asf/kafka/diff/5b943ca8

Branch: refs/heads/trunk
Commit: 5b943ca8a9bec9f2c990d9d03fc0f4b7c3e9cca5
Parents: 73cc416
Author: Manjula K 
Authored: Wed Sep 27 09:26:57 2017 +0100
Committer: Damian Guy 
Committed: Wed Sep 27 09:26:57 2017 +0100

--
 docs/streams/index.html | 7 ---
 1 file changed, 4 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/kafka/blob/5b943ca8/docs/streams/index.html
--
diff --git a/docs/streams/index.html b/docs/streams/index.html
index 3704a3d..112e304 100644
--- a/docs/streams/index.html
+++ b/docs/streams/index.html
@@ -213,9 +213,9 @@
  
  
 
- 
+ 

- Rabobank is one of the 3 
largest banks in the Netherlands. Its digital nervous system, the Business 
Event Bus, is powered by Apache Kafka and Kafka Streams.
+ Rabobank is one of the 3 
largest banks in the Netherlands. Its digital nervous system, the Business 
Event Bus, is powered by Apache Kafka. It is used by an increasing number of 
financial processes and services, one of which is Rabo Alerts. This service alerts 
customers in real-time upon financial events and is built using Kafka Streams.
  
  https://www.confluent.io/blog/real-time-financial-alerts-rabobank-apache-kafkas-streams-api/;>Learn
 More 
  
@@ -223,7 +223,7 @@
  
 

- 
+ 
 
   As the leading online 
fashion retailer in Europe, Zalando uses Apache Kafka as an ESB (Enterprise 
Service Bus), which helps us in transitioning from a monolithic to a micro 
services architecture. Using Kafka for processing event streams enables our 
technical team to do near-real time business intelligence.
   https://kafka-summit.org/sessions/using-kstreams-ktables-calculate-real-time-domain-rankings/;>Learn
 More
@@ -237,6 +237,7 @@
 
 
  
+
 
 Previous
 Next



kafka git commit: KAFKA-5765; Move merge() from StreamsBuilder to KStream

2017-09-26 Thread damianguy
Repository: kafka
Updated Branches:
  refs/heads/trunk 4e43a7231 -> b8be86b80


KAFKA-5765; Move merge() from StreamsBuilder to KStream

This is the polished version.
1. The old merge() method in StreamsBuilder has been removed.
2. The merge() method in KStreamBuilder was changed so that the KStreamImpl 
implementation takes a single variable-length argument rather than several.
3. The merge() method in KStream has been declared as final, and tests have 
been added to verify correctness.

Author: Richard Yu 

Reviewers: Matthias J. Sax , Bill Bejeck 
, Guozhang Wang , Damian Guy 


Closes #3916 from ConcurrencyPractitioner/trunk


Project: http://git-wip-us.apache.org/repos/asf/kafka/repo
Commit: http://git-wip-us.apache.org/repos/asf/kafka/commit/b8be86b8
Tree: http://git-wip-us.apache.org/repos/asf/kafka/tree/b8be86b8
Diff: http://git-wip-us.apache.org/repos/asf/kafka/diff/b8be86b8

Branch: refs/heads/trunk
Commit: b8be86b80543e41fd3181a8de8f1a3ac0a72e4c5
Parents: 4e43a72
Author: Richard Yu 
Authored: Tue Sep 26 09:42:53 2017 +0100
Committer: Damian Guy 
Committed: Tue Sep 26 09:42:53 2017 +0100

--
 docs/streams/upgrade-guide.html | 10 +++-
 .../apache/kafka/streams/StreamsBuilder.java| 14 +
 .../apache/kafka/streams/kstream/KStream.java   | 13 +
 .../kafka/streams/kstream/KStreamBuilder.java   | 10 +++-
 .../internals/InternalStreamsBuilder.java   |  5 --
 .../streams/kstream/internals/KStreamImpl.java  | 30 +--
 .../kafka/streams/StreamsBuilderTest.java   |  6 +--
 .../streams/kstream/KStreamBuilderTest.java |  2 +-
 .../internals/InternalStreamsBuilderTest.java   |  2 +-
 .../kstream/internals/KStreamImplTest.java  | 55 
 .../internals/StreamsMetadataStateTest.java |  2 +-
 11 files changed, 107 insertions(+), 42 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/kafka/blob/b8be86b8/docs/streams/upgrade-guide.html
--
diff --git a/docs/streams/upgrade-guide.html b/docs/streams/upgrade-guide.html
index c90024f..c2835a3 100644
--- a/docs/streams/upgrade-guide.html
+++ b/docs/streams/upgrade-guide.html
@@ -84,7 +84,13 @@
 and can be obtained by calling Topology#describe().
 An example using this new API is shown in the quickstart section.
 
-
+
+
+With the introduction of https://cwiki.apache.org/confluence/display/KAFKA/KIP-202+Move+merge%28%29+from+StreamsBuilder+to+KStream;>KIP-202
+a new method merge() has been created in 
KStream as the StreamsBuilder class's 
StreamsBuilder#merge() has been removed. 
+The method signature was also changed: instead of providing 
multiple KStreams into the method at once, only a single 
KStream is accepted.
+
+
 
 New methods in KafkaStreams:
 
@@ -214,7 +220,9 @@
 If exactly-once processing is enabled via the 
processing.guarantees parameter, internally Streams switches from 
a producer per thread to a producer per task runtime model.
 In order to distinguish the different producers, the producer's 
client.id additionally encodes the task-ID for this case.
 Because the producer's client.id is used to report JMX 
metrics, it might be required to update tools that receive those metrics.
+
 
+
  Producer's client.id naming schema: 
 
  at-least-once (default): 
[client.Id]-StreamThread-[sequence-number] 

http://git-wip-us.apache.org/repos/asf/kafka/blob/b8be86b8/streams/src/main/java/org/apache/kafka/streams/StreamsBuilder.java
--
diff --git a/streams/src/main/java/org/apache/kafka/streams/StreamsBuilder.java 
b/streams/src/main/java/org/apache/kafka/streams/StreamsBuilder.java
index 7e746e6..94d19ae 100644
--- a/streams/src/main/java/org/apache/kafka/streams/StreamsBuilder.java
+++ b/streams/src/main/java/org/apache/kafka/streams/StreamsBuilder.java
@@ -59,7 +59,7 @@ public class StreamsBuilder {
 final InternalTopologyBuilder internalTopologyBuilder = 
topology.internalTopologyBuilder;
 
 private final InternalStreamsBuilder internalStreamsBuilder = new 
InternalStreamsBuilder(internalTopologyBuilder);
-
+
 /**
  * Create a {@link KStream} from the specified topics.
  * The default {@code "auto.offset.reset"} strategy, default {@link 
TimestampExtractor}, and default key and value
@@ -493,18 +493,6 @@ public class StreamsBuilder {
 }
 
 /**
- * Create a new instance of {@link KStream} by merging the given {@link 
KStream}s.
- * 
- * There is no 
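
As a quick sketch of the relocated operator (topic names are illustrative and default serdes are assumed):

    import org.apache.kafka.streams.StreamsBuilder;
    import org.apache.kafka.streams.kstream.KStream;

    final StreamsBuilder builder = new StreamsBuilder();
    final KStream<String, String> left = builder.stream("left-topic");
    final KStream<String, String> right = builder.stream("right-topic");

    // Previously: builder.merge(left, right), which this change removes.
    // merge() now lives on KStream itself and accepts a single stream.
    final KStream<String, String> merged = left.merge(right);
    merged.to("merged-topic");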

kafka-site git commit: MINOR:Updating Rabobank description and Zalando image in powered-by & streams page

2017-09-26 Thread damianguy
Repository: kafka-site
Updated Branches:
  refs/heads/asf-site 4bb2fd8c6 -> 8c1a77237


MINOR:Updating Rabobank description and Zalando image in powered-by & streams 
page

guozhangwang dguy Updated text was provided by Rabobank, so in this PR I am 
updating it.
Please review. Thanks!!

Author: Manjula K 

Reviewers: Damian Guy 

Closes #86 from manjuapu/asf-site


Project: http://git-wip-us.apache.org/repos/asf/kafka-site/repo
Commit: http://git-wip-us.apache.org/repos/asf/kafka-site/commit/8c1a7723
Tree: http://git-wip-us.apache.org/repos/asf/kafka-site/tree/8c1a7723
Diff: http://git-wip-us.apache.org/repos/asf/kafka-site/diff/8c1a7723

Branch: refs/heads/asf-site
Commit: 8c1a77237c0afc54ae158d5670da2f5f887c77c6
Parents: 4bb2fd8
Author: Manjula K 
Authored: Tue Sep 26 09:22:25 2017 +0100
Committer: Damian Guy 
Committed: Tue Sep 26 09:22:25 2017 +0100

--
 0110/streams/index.html   |   6 +++---
 images/powered-by/zalando.jpg | Bin 0 -> 6356 bytes
 images/powered-by/zalando.png | Bin 2716 -> 0 bytes
 powered-by.html   |   4 ++--
 4 files changed, 5 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/kafka-site/blob/8c1a7723/0110/streams/index.html
--
diff --git a/0110/streams/index.html b/0110/streams/index.html
index f323456..919c60b 100644
--- a/0110/streams/index.html
+++ b/0110/streams/index.html
@@ -213,9 +213,9 @@
  
  
 
- 
+ 

- Rabobank is one of the 3 
largest banks in the Netherlands. Its digital nervous system, the Business 
Event Bus, is powered by Apache Kafka and Kafka Streams.
+ Rabobank is one of the 3 
largest banks in the Netherlands. Its digital nervous system, the Business 
Event Bus, is powered by Apache Kafka. It is used by an increasing number of 
financial processes and services, one of which is Rabo Alerts. This service alerts 
customers in real-time upon financial events and is built using Kafka Streams.
  
  https://www.confluent.io/blog/real-time-financial-alerts-rabobank-apache-kafkas-streams-api/;>Learn
 More 
  
@@ -223,7 +223,7 @@
  
 

- 
+ 
 
   As the leading online 
fashion retailer in Europe, Zalando uses Apache Kafka as an ESB (Enterprise 
Service Bus), which helps us in transitioning from a monolithic to a micro 
services architecture. Using Kafka for processing event streams enables our 
technical team to do near-real time business intelligence.
   https://kafka-summit.org/sessions/using-kstreams-ktables-calculate-real-time-domain-rankings/;>Learn
 More

http://git-wip-us.apache.org/repos/asf/kafka-site/blob/8c1a7723/images/powered-by/zalando.jpg
--
diff --git a/images/powered-by/zalando.jpg b/images/powered-by/zalando.jpg
new file mode 100644
index 000..0d8e9d7
Binary files /dev/null and b/images/powered-by/zalando.jpg differ

http://git-wip-us.apache.org/repos/asf/kafka-site/blob/8c1a7723/images/powered-by/zalando.png
--
diff --git a/images/powered-by/zalando.png b/images/powered-by/zalando.png
deleted file mode 100755
index 719a7dc..000
Binary files a/images/powered-by/zalando.png and /dev/null differ

http://git-wip-us.apache.org/repos/asf/kafka-site/blob/8c1a7723/powered-by.html
--
diff --git a/powered-by.html b/powered-by.html
index bb51217..fee1d5d 100644
--- a/powered-by.html
+++ b/powered-by.html
@@ -416,7 +416,7 @@
 "link":  "https://www.rabobank.com;,
 "logo": "rabobank.jpg",
 "logoBgColor": "#ff",
-"description": "Rabobank is one of the 3 largest banks in the 
Netherlands. Its digital nervous system, the Business Event Bus, is powered by 
Apache Kafka and Kafka Streams."
+"description": "Rabobank is one of the 3 largest banks in the 
Netherlands. Its digital nervous system, the Business Event Bus, is powered by 
Apache Kafka. It is used by an increasing number of financial processes and 
services, one of which is Rabo Alerts. This service alerts customers in real-time 
upon financial events and is built using Kafka Streams."
 },{
 "link":  "http://www.portoseguro.com.br/;,
 "logo": "porto-seguro.png",
@@ -434,7 +434,7 @@
 "description": "Apache Kafka is used at CJ Affiliate to process 
many of the key events driving our core product. Nearly every aspect of CJ's 
products and services 

kafka git commit: KAFKA-5956; use serdes from materialized in table and globalTable

2017-09-22 Thread damianguy
Repository: kafka
Updated Branches:
  refs/heads/trunk 7c988a3c8 -> 125d8d6f7


KAFKA-5956; use serdes from materialized in table and globalTable

The new overloads `StreamsBuilder.table(String, Materialized)` and 
`StreamsBuilder.globalTable(String, Materialized)` need to set the serdes from 
`Materialized` on the internal `Consumed` instance that is created, otherwise 
the defaults will be used and may result in serialization errors.

Author: Damian Guy 

Reviewers: Matthias J. Sax , Guozhang Wang 


Closes #3936 from dguy/table-materialized


Project: http://git-wip-us.apache.org/repos/asf/kafka/repo
Commit: http://git-wip-us.apache.org/repos/asf/kafka/commit/125d8d6f
Tree: http://git-wip-us.apache.org/repos/asf/kafka/tree/125d8d6f
Diff: http://git-wip-us.apache.org/repos/asf/kafka/diff/125d8d6f

Branch: refs/heads/trunk
Commit: 125d8d6f70829b9a0dbeabfef8f6b2df438dc12b
Parents: 7c988a3
Author: Damian Guy 
Authored: Fri Sep 22 13:45:19 2017 +0100
Committer: Damian Guy 
Committed: Fri Sep 22 13:45:19 2017 +0100

--
 .../apache/kafka/streams/StreamsBuilder.java| 10 ++--
 .../kafka/streams/StreamsBuilderTest.java   | 53 
 2 files changed, 60 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/kafka/blob/125d8d6f/streams/src/main/java/org/apache/kafka/streams/StreamsBuilder.java
--
diff --git a/streams/src/main/java/org/apache/kafka/streams/StreamsBuilder.java 
b/streams/src/main/java/org/apache/kafka/streams/StreamsBuilder.java
index a272ec4..7e746e6 100644
--- a/streams/src/main/java/org/apache/kafka/streams/StreamsBuilder.java
+++ b/streams/src/main/java/org/apache/kafka/streams/StreamsBuilder.java
@@ -301,8 +301,10 @@ public class StreamsBuilder {
   final Materialized> materialized) {
 Objects.requireNonNull(topic, "topic can't be null");
 Objects.requireNonNull(materialized, "materialized can't be null");
+final MaterializedInternal> 
materializedInternal = new MaterializedInternal<>(materialized);
 return internalStreamsBuilder.table(topic,
-new ConsumedInternal(),
+new 
ConsumedInternal<>(Consumed.with(materializedInternal.keySerde(),
+   
  materializedInternal.valueSerde())),
 new 
MaterializedInternal<>(materialized));
 }
 
@@ -429,9 +431,11 @@ public class StreamsBuilder {
   final 
Materialized> materialized) {
 Objects.requireNonNull(topic, "topic can't be null");
 Objects.requireNonNull(materialized, "materialized can't be null");
+final MaterializedInternal> 
materializedInternal = new MaterializedInternal<>(materialized);
 return internalStreamsBuilder.globalTable(topic,
-  new ConsumedInternal(),
-  new 
MaterializedInternal<>(materialized));
+  new 
ConsumedInternal<>(Consumed.with(materializedInternal.keySerde(),
+   
materializedInternal.valueSerde())),
+  materializedInternal);
 }
 
 

http://git-wip-us.apache.org/repos/asf/kafka/blob/125d8d6f/streams/src/test/java/org/apache/kafka/streams/StreamsBuilderTest.java
--
diff --git 
a/streams/src/test/java/org/apache/kafka/streams/StreamsBuilderTest.java 
b/streams/src/test/java/org/apache/kafka/streams/StreamsBuilderTest.java
index dedd157..4ce202b 100644
--- a/streams/src/test/java/org/apache/kafka/streams/StreamsBuilderTest.java
+++ b/streams/src/test/java/org/apache/kafka/streams/StreamsBuilderTest.java
@@ -16,21 +16,31 @@
  */
 package org.apache.kafka.streams;
 
+import org.apache.kafka.common.serialization.Serdes;
+import org.apache.kafka.common.utils.Bytes;
 import org.apache.kafka.common.utils.Utils;
 import org.apache.kafka.streams.errors.TopologyException;
+import org.apache.kafka.streams.kstream.ForeachAction;
 import org.apache.kafka.streams.kstream.KStream;
+import org.apache.kafka.streams.kstream.Materialized;
 import org.apache.kafka.streams.kstream.internals.KStreamImpl;
 import 
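
For illustration, a minimal sketch of the behaviour this fixes; with serdes supplied via `Materialized`, consuming the topic now uses them rather than the configured defaults (topic and store names are illustrative):

    import org.apache.kafka.common.serialization.Serdes;
    import org.apache.kafka.common.utils.Bytes;
    import org.apache.kafka.streams.StreamsBuilder;
    import org.apache.kafka.streams.kstream.KTable;
    import org.apache.kafka.streams.kstream.Materialized;
    import org.apache.kafka.streams.state.KeyValueStore;

    final StreamsBuilder builder = new StreamsBuilder();

    // Before this fix the serdes below were ignored when reading "input-topic",
    // so the configured default serdes were applied and could fail at runtime.
    final KTable<String, Long> table = builder.table(
        "input-topic",
        Materialized.<String, Long, KeyValueStore<Bytes, byte[]>>as("input-store")
            .withKeySerde(Serdes.String())
            .withValueSerde(Serdes.Long()));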

kafka git commit: MINOR: change task initialization logging levels

2017-09-20 Thread damianguy
Repository: kafka
Updated Branches:
  refs/heads/trunk 37ec15e96 -> bb9859720


MINOR: change task initialization logging levels

In `AssignedTasks`, log at debug level all task ids that are yet to be initialized.
In `StreamTask`, log at trace level when the task is initialized.

Author: Damian Guy 

Reviewers: Guozhang Wang 

Closes #3905 from dguy/minor-task-init-logging


Project: http://git-wip-us.apache.org/repos/asf/kafka/repo
Commit: http://git-wip-us.apache.org/repos/asf/kafka/commit/bb985972
Tree: http://git-wip-us.apache.org/repos/asf/kafka/tree/bb985972
Diff: http://git-wip-us.apache.org/repos/asf/kafka/diff/bb985972

Branch: refs/heads/trunk
Commit: bb9859720bf88732cb63ec27cfa10d510d767d2b
Parents: 37ec15e
Author: Damian Guy 
Authored: Wed Sep 20 12:07:04 2017 +0100
Committer: Damian Guy 
Committed: Wed Sep 20 12:07:04 2017 +0100

--
 .../apache/kafka/streams/processor/internals/AssignedTasks.java   | 2 +-
 .../org/apache/kafka/streams/processor/internals/StreamTask.java  | 3 +--
 2 files changed, 2 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/kafka/blob/bb985972/streams/src/main/java/org/apache/kafka/streams/processor/internals/AssignedTasks.java
--
diff --git 
a/streams/src/main/java/org/apache/kafka/streams/processor/internals/AssignedTasks.java
 
b/streams/src/main/java/org/apache/kafka/streams/processor/internals/AssignedTasks.java
index 3208f93..e51ebd7 100644
--- 
a/streams/src/main/java/org/apache/kafka/streams/processor/internals/AssignedTasks.java
+++ 
b/streams/src/main/java/org/apache/kafka/streams/processor/internals/AssignedTasks.java
@@ -109,7 +109,7 @@ class AssignedTasks {
 
 void initializeNewTasks() {
 if (!created.isEmpty()) {
-log.trace("Initializing {}s {}", taskTypeName, created.keySet());
+log.debug("Initializing {}s {}", taskTypeName, created.keySet());
 }
 for (final Iterator> it = 
created.entrySet().iterator(); it.hasNext(); ) {
 final Map.Entry entry = it.next();

http://git-wip-us.apache.org/repos/asf/kafka/blob/bb985972/streams/src/main/java/org/apache/kafka/streams/processor/internals/StreamTask.java
--
diff --git 
a/streams/src/main/java/org/apache/kafka/streams/processor/internals/StreamTask.java
 
b/streams/src/main/java/org/apache/kafka/streams/processor/internals/StreamTask.java
index 0830aa2..6775edb 100644
--- 
a/streams/src/main/java/org/apache/kafka/streams/processor/internals/StreamTask.java
+++ 
b/streams/src/main/java/org/apache/kafka/streams/processor/internals/StreamTask.java
@@ -152,7 +152,7 @@ public class StreamTask extends AbstractTask implements 
ProcessorNodePunctuator
 }
 
 public boolean initialize() {
-log.debug("Initializing");
+log.trace("Initializing");
 initializeStateStores();
 initTopology();
 processorContext.initialized();
@@ -606,5 +606,4 @@ public class StreamTask extends AbstractTask implements 
ProcessorNodePunctuator
 RecordCollector createRecordCollector(final LogContext logContext) {
 return new RecordCollectorImpl(producer, id.toString(), logContext);
 }
-
 }



kafka git commit: KAFKA-5931; deprecate KTable#through and KTable#to

2017-09-20 Thread damianguy
Repository: kafka
Updated Branches:
  refs/heads/trunk c58790595 -> 37ec15e96


KAFKA-5931; deprecate KTable#through and KTable#to

Author: Damian Guy 

Reviewers: Matthias J. Sax , Bill Bejeck 
, Guozhang Wang 

Closes #3903 from dguy/deprectate-to-through


Project: http://git-wip-us.apache.org/repos/asf/kafka/repo
Commit: http://git-wip-us.apache.org/repos/asf/kafka/commit/37ec15e9
Tree: http://git-wip-us.apache.org/repos/asf/kafka/tree/37ec15e9
Diff: http://git-wip-us.apache.org/repos/asf/kafka/diff/37ec15e9

Branch: refs/heads/trunk
Commit: 37ec15e9627e2fe68d78eb6d95e9a117e3bca320
Parents: c587905
Author: Damian Guy 
Authored: Wed Sep 20 12:04:13 2017 +0100
Committer: Damian Guy 
Committed: Wed Sep 20 12:04:13 2017 +0100

--
 .../apache/kafka/streams/kstream/KTable.java| 84 +++-
 1 file changed, 64 insertions(+), 20 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/kafka/blob/37ec15e9/streams/src/main/java/org/apache/kafka/streams/kstream/KTable.java
--
diff --git a/streams/src/main/java/org/apache/kafka/streams/kstream/KTable.java 
b/streams/src/main/java/org/apache/kafka/streams/kstream/KTable.java
index 6d1d85d..66ec0d7 100644
--- a/streams/src/main/java/org/apache/kafka/streams/kstream/KTable.java
+++ b/streams/src/main/java/org/apache/kafka/streams/kstream/KTable.java
@@ -38,7 +38,7 @@ import org.apache.kafka.streams.state.ReadOnlyKeyValueStore;
  * {@code KTable} is an abstraction of a changelog stream from a 
primary-keyed table.
  * Each record in this changelog stream is an update on the primary-keyed 
table with the record key as the primary key.
  * 
- * A {@code KTable} is either {@link StreamsBuilder#table(String, String) 
defined from a single Kafka topic} that is
+ * A {@code KTable} is either {@link StreamsBuilder#table(String) defined from 
a single Kafka topic} that is
  * consumed message by message or the result of a {@code KTable} 
transformation.
  * An aggregation of a {@link KStream} also yields a {@code KTable}.
  * 
@@ -66,7 +66,7 @@ import org.apache.kafka.streams.state.ReadOnlyKeyValueStore;
  * @see KStream
  * @see KGroupedTable
  * @see GlobalKTable
- * @see StreamsBuilder#table(String, String)
+ * @see StreamsBuilder#table(String)
  */
 @InterfaceStability.Evolving
 public interface KTable {
@@ -763,17 +763,20 @@ public interface KTable {
  * started).
  * 
  * This is equivalent to calling {@link #to(String) #to(someTopicName)} and
- * {@link StreamsBuilder#table(String, String) 
StreamsBuilder#table(someTopicName, queryableStoreName)}.
+ * {@link StreamsBuilder#table(String, Materialized) 
StreamsBuilder#table(someTopicName, queryableStoreName)}.
  * 
  * The resulting {@code KTable} will be materialized in a local state 
store with the given store name (cf.
- * {@link StreamsBuilder#table(String, String)})
+ * {@link StreamsBuilder#table(String, Materialized)})
  * The store name must be a valid Kafka topic name and cannot contain 
characters other than ASCII alphanumerics, '.', '_' and '-'.
  *
  * @param topic the topic name
  * @param queryableStoreName the state store name used for the result 
{@code KTable}; valid characters are ASCII
  *  alphanumerics, '.', '_' and '-'. If {@code null} this 
is the equivalent of {@link KTable#through(String)()}
  * @return a {@code KTable} that contains the exact same (and potentially 
repartitioned) records as this {@code KTable}
+ * @deprecated use {@link #toStream()} followed by {@link 
KStream#to(String)}
+ * and {@link StreamsBuilder#table(String)} to read back as a {@code 
KTable}
  */
+@Deprecated
 KTable through(final String topic,
  final String queryableStoreName);
 
@@ -784,16 +787,19 @@ public interface KTable {
  * started).
  * 
  * This is equivalent to calling {@link #to(String) #to(someTopicName)} and
- * {@link StreamsBuilder#table(String, String) 
StreamsBuilder#table(someTopicName, queryableStoreName)}.
+ * {@link StreamsBuilder#table(String, Materialized) 
StreamsBuilder#table(someTopicName, queryableStoreName)}.
  * 
  * The resulting {@code KTable} will be materialized in a local state 
store with the given store name (cf.
- * {@link StreamsBuilder#table(String, String)})
+ * {@link StreamsBuilder#table(String, Materialized)})
  * The store name must be a valid Kafka topic name and cannot contain 
characters other than ASCII alphanumerics, '.', '_' and '-'.
  *
  * @param topic the topic name
  * @param storeSupplier user defined state store supplier. 
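
As a sketch of the replacement the new deprecation notes point to; `builder` is an assumed StreamsBuilder, `table` an assumed KTable<String, Long>, and the topic name is illustrative:

    // Deprecated:
    // final KTable<String, Long> repartitioned = table.through("topic", "store-name");

    // Replacement per the new javadoc:
    table.toStream().to("topic");
    final KTable<String, Long> repartitioned = builder.table("topic");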

kafka git commit: MINOR: use StoreBuilder in KStreamImpl rather than StateStoreSupplier

2017-09-19 Thread damianguy
Repository: kafka
Updated Branches:
  refs/heads/trunk c8f147199 -> c96f89f84


MINOR: use StoreBuilder in KStreamImpl rather than StateStoreSupplier

Author: Damian Guy 

Reviewers: Bill Bejeck , Matthias J. Sax 
, Guozhang Wang 

Closes #3892 from dguy/cleanup-state-stores


Project: http://git-wip-us.apache.org/repos/asf/kafka/repo
Commit: http://git-wip-us.apache.org/repos/asf/kafka/commit/c96f89f8
Tree: http://git-wip-us.apache.org/repos/asf/kafka/tree/c96f89f8
Diff: http://git-wip-us.apache.org/repos/asf/kafka/diff/c96f89f8

Branch: refs/heads/trunk
Commit: c96f89f845f790d8e7bce45aae6c8c4d15a25660
Parents: c8f1471
Author: Damian Guy 
Authored: Tue Sep 19 12:05:52 2017 +0100
Committer: Damian Guy 
Committed: Tue Sep 19 12:05:52 2017 +0100

--
 .../streams/kstream/internals/KStreamImpl.java  | 29 ++--
 1 file changed, 15 insertions(+), 14 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/kafka/blob/c96f89f8/streams/src/main/java/org/apache/kafka/streams/kstream/internals/KStreamImpl.java
--
diff --git 
a/streams/src/main/java/org/apache/kafka/streams/kstream/internals/KStreamImpl.java
 
b/streams/src/main/java/org/apache/kafka/streams/kstream/internals/KStreamImpl.java
index 6ebbd14..cbaf95a 100644
--- 
a/streams/src/main/java/org/apache/kafka/streams/kstream/internals/KStreamImpl.java
+++ 
b/streams/src/main/java/org/apache/kafka/streams/kstream/internals/KStreamImpl.java
@@ -38,9 +38,10 @@ import org.apache.kafka.streams.kstream.ValueMapper;
 import org.apache.kafka.streams.kstream.ValueTransformerSupplier;
 import org.apache.kafka.streams.processor.FailOnInvalidTimestamp;
 import org.apache.kafka.streams.processor.ProcessorSupplier;
-import org.apache.kafka.streams.processor.StateStoreSupplier;
 import org.apache.kafka.streams.processor.StreamPartitioner;
+import org.apache.kafka.streams.state.StoreBuilder;
 import org.apache.kafka.streams.state.Stores;
+import org.apache.kafka.streams.state.WindowStore;
 
 import java.lang.reflect.Array;
 import java.util.Collections;
@@ -827,16 +828,16 @@ public class KStreamImpl extends AbstractStream 
implements KStream StateStoreSupplier createWindowedStateStore(final 
JoinWindows windows,
-  final 
Serde keySerde,
-  final 
Serde valueSerde,
-  final 
String storeName) {
-return Stores.create(storeName)
-.withKeys(keySerde)
-.withValues(valueSerde)
-.persistent()
-.windowed(windows.size(), windows.maintainMs(), windows.segments, 
true)
-.build();
+private static  StoreBuilder> 
createWindowedStateStore(final JoinWindows windows,
+   
final Serde keySerde,
+   
final Serde valueSerde,
+   
final String storeName) {
+return 
Stores.windowStoreBuilder(Stores.persistentWindowStore(storeName,
+  
windows.maintainMs(),
+  
windows.segments,
+  
windows.size(),
+  true), 
keySerde, valueSerde);
+
 }
 
 private class KStreamImplJoin {
@@ -854,17 +855,17 @@ public class KStreamImpl extends AbstractStream 
implements KStream other,
final ValueJoiner joiner,
final JoinWindows windows,
-   final Joined joined) {
+   final Joined 
joined) {
 String thisWindowStreamName = builder.newName(WINDOWED_NAME);
 String otherWindowStreamName = builder.newName(WINDOWED_NAME);
 String joinThisName = rightOuter ? builder.newName(OUTERTHIS_NAME) 
: builder.newName(JOINTHIS_NAME);
 String joinOtherName = leftOuter ? 
builder.newName(OUTEROTHER_NAME) : 
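
For illustration, the shape of the builder-based API used above, outside the join internals; the store name, sizes, and serdes are illustrative:

    import org.apache.kafka.common.serialization.Serdes;
    import org.apache.kafka.streams.state.StoreBuilder;
    import org.apache.kafka.streams.state.Stores;
    import org.apache.kafka.streams.state.WindowStore;

    final long windowSizeMs = 5 * 60 * 1000L;  // 5 minutes
    final long retentionMs = 60 * 60 * 1000L;  // 1 hour
    final int numSegments = 3;

    final StoreBuilder<WindowStore<String, String>> storeBuilder =
        Stores.windowStoreBuilder(
            Stores.persistentWindowStore("join-window-store",
                                         retentionMs,
                                         numSegments,
                                         windowSizeMs,
                                         true),  // retain duplicates, as joins require
            Serdes.String(),
            Serdes.String());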

kafka git commit: KAFKA-5921; add Materialized overloads to windowed kstream

2017-09-19 Thread damianguy
Repository: kafka
Updated Branches:
  refs/heads/trunk 83bdcdbae -> c8f147199


KAFKA-5921; add Materialized overloads to windowed kstream

Add `Materialized` overloads to `WindowedKStream`. Deprecate existing methods 
on `KGroupedStream`.

Author: Damian Guy 

Reviewers: Guozhang Wang 

Closes #3889 from dguy/kafka-5921


Project: http://git-wip-us.apache.org/repos/asf/kafka/repo
Commit: http://git-wip-us.apache.org/repos/asf/kafka/commit/c8f14719
Tree: http://git-wip-us.apache.org/repos/asf/kafka/tree/c8f14719
Diff: http://git-wip-us.apache.org/repos/asf/kafka/diff/c8f14719

Branch: refs/heads/trunk
Commit: c8f1471992c98e0104e3a7b2e093adc21b2d2a6f
Parents: 83bdcdb
Author: Damian Guy 
Authored: Tue Sep 19 10:56:42 2017 +0100
Committer: Damian Guy 
Committed: Tue Sep 19 10:56:42 2017 +0100

--
 .../kafka/streams/kstream/KGroupedStream.java   |   8 +
 .../kafka/streams/kstream/WindowedKStream.java  | 150 +--
 .../GroupedStreamAggregateBuilder.java  |  15 ++
 .../kstream/internals/KGroupedStreamImpl.java   |  32 ++--
 .../kstream/internals/WindowedKStreamImpl.java  |  95 ++--
 .../KStreamAggregationIntegrationTest.java  |   6 +-
 .../internals/WindowedKStreamImplTest.java  | 109 +-
 7 files changed, 361 insertions(+), 54 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/kafka/blob/c8f14719/streams/src/main/java/org/apache/kafka/streams/kstream/KGroupedStream.java
--
diff --git 
a/streams/src/main/java/org/apache/kafka/streams/kstream/KGroupedStream.java 
b/streams/src/main/java/org/apache/kafka/streams/kstream/KGroupedStream.java
index 08916ef..5621ab4 100644
--- a/streams/src/main/java/org/apache/kafka/streams/kstream/KGroupedStream.java
+++ b/streams/src/main/java/org/apache/kafka/streams/kstream/KGroupedStream.java
@@ -667,7 +667,9 @@ public interface KGroupedStream {
  * alphanumerics, '.', '_' and '-'. If {@code null} then this will be 
equivalent to {@link KGroupedStream#reduce(Reducer, Windows)} ()}.
  * @return a windowed {@link KTable} that contains "update" records with 
unmodified keys, and values that represent
  * the latest (rolling) aggregate for each key within a window
+ * @deprecated use {@link #windowedBy(Windows)}
  */
+@Deprecated
  KTable reduce(final Reducer reducer,
  final Windows windows,
  final String 
queryableStoreName);
@@ -772,7 +774,9 @@ public interface KGroupedStream {
  * @param storeSupplier user defined state store supplier. Cannot be 
{@code null}.
  * @return a windowed {@link KTable} that contains "update" records with 
unmodified keys, and values that represent
  * the latest (rolling) aggregate for each key within a window
+ * @deprecated use {@link #windowedBy(Windows)}
  */
+@Deprecated
  KTable reduce(final Reducer reducer,
  final Windows windows,
  final 
StateStoreSupplier storeSupplier);
@@ -1259,7 +1263,9 @@ public interface KGroupedStream {
  * alphanumerics, '.', '_' and '-'. If {@code null} then this will be 
equivalent to {@link KGroupedStream#aggregate(Initializer, Aggregator, Windows, 
Serde)} ()} ()}.
  * @return a windowed {@link KTable} that contains "update" records with 
unmodified keys, and values that represent
  * the latest (rolling) aggregate for each key within a window
+ * @deprecated use {@link #windowedBy(Windows)}
  */
+@Deprecated
  KTable aggregate(final 
Initializer initializer,
  final 
Aggregator aggregator,
  final Windows 
windows,
@@ -1369,7 +1375,9 @@ public interface KGroupedStream {
  * @param storeSupplier user defined state store supplier. Cannot be 
{@code null}.
  * @return a windowed {@link KTable} that contains "update" records with 
unmodified keys, and values that represent
  * the latest (rolling) aggregate for each key within a window
+ * @deprecated use {@link #windowedBy(Windows)}
  */
+@Deprecated
  KTable aggregate(final 
Initializer initializer,
  final 
Aggregator aggregator,
  final Windows 
windows,

http://git-wip-us.apache.org/repos/asf/kafka/blob/c8f14719/streams/src/main/java/org/apache/kafka/streams/kstream/WindowedKStream.java
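
As a sketch of the new windowed aggregation path; `stream` is an assumed KStream<String, String> and the window size is illustrative:

    import org.apache.kafka.common.serialization.Serdes;
    import org.apache.kafka.common.utils.Bytes;
    import org.apache.kafka.streams.kstream.KTable;
    import org.apache.kafka.streams.kstream.Materialized;
    import org.apache.kafka.streams.kstream.Serialized;
    import org.apache.kafka.streams.kstream.TimeWindows;
    import org.apache.kafka.streams.kstream.Windowed;
    import org.apache.kafka.streams.state.WindowStore;

    // Deprecated style: stream.groupByKey().count(TimeWindows.of(60_000L), "counts");
    // New style: window first via windowedBy(), then materialize the aggregate.
    final KTable<Windowed<String>, Long> counts = stream
        .groupByKey(Serialized.with(Serdes.String(), Serdes.String()))
        .windowedBy(TimeWindows.of(60_000L))
        .count(Materialized.<String, Long, WindowStore<Bytes, byte[]>>as("windowed-counts"));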

[2/2] kafka git commit: KAFKA-5873; add materialized overloads to StreamsBuilder

2017-09-18 Thread damianguy
KAFKA-5873; add materialized overloads to StreamsBuilder

Add overloads for `table` and `globalTable` that use `Materialized`

Author: Damian Guy 

Reviewers: Bill Bejeck , Matthias J. Sax 
, Guozhang Wang 

Closes #3837 from dguy/kafka-5873


Project: http://git-wip-us.apache.org/repos/asf/kafka/repo
Commit: http://git-wip-us.apache.org/repos/asf/kafka/commit/f2b74aa1
Tree: http://git-wip-us.apache.org/repos/asf/kafka/tree/f2b74aa1
Diff: http://git-wip-us.apache.org/repos/asf/kafka/diff/f2b74aa1

Branch: refs/heads/trunk
Commit: f2b74aa1c36bf2882006c14f7cbd56b493f39d26
Parents: 52d7b67
Author: Damian Guy 
Authored: Mon Sep 18 15:53:44 2017 +0100
Committer: Damian Guy 
Committed: Mon Sep 18 15:53:44 2017 +0100

--
 .../examples/pageview/PageViewTypedDemo.java|   7 +-
 .../examples/pageview/PageViewUntypedDemo.java  |   9 +-
 .../apache/kafka/streams/StreamsBuilder.java| 700 +++
 .../java/org/apache/kafka/streams/Topology.java |   2 +-
 .../internals/InternalStreamsBuilder.java   | 152 ++--
 .../streams/kstream/internals/KTableImpl.java   |   9 +-
 .../kstream/internals/MaterializedInternal.java |   7 +-
 .../internals/InternalTopologyBuilder.java  |   3 +-
 .../apache/kafka/streams/KafkaStreamsTest.java  |   3 +-
 .../streams/integration/EosIntegrationTest.java |  12 +-
 .../GlobalKTableIntegrationTest.java|  13 +-
 .../integration/JoinIntegrationTest.java|   4 +-
 .../KStreamKTableJoinIntegrationTest.java   |   3 +-
 .../KTableKTableJoinIntegrationTest.java|   6 +-
 .../integration/RestoreIntegrationTest.java |   3 +-
 .../internals/GlobalKTableJoinsTest.java|   5 +-
 .../internals/InternalStreamsBuilderTest.java   |  87 ++-
 .../internals/KGroupedTableImplTest.java|  25 +-
 .../kstream/internals/KStreamImplTest.java  |  11 +-
 .../internals/KStreamKStreamLeftJoinTest.java   |   2 +-
 .../internals/KStreamKTableJoinTest.java|   7 +-
 .../internals/KStreamKTableLeftJoinTest.java|   5 +-
 .../kstream/internals/KTableAggregateTest.java  |  22 +-
 .../kstream/internals/KTableFilterTest.java |  28 +-
 .../kstream/internals/KTableForeachTest.java|  12 +-
 .../kstream/internals/KTableImplTest.java   |  29 +-
 .../kstream/internals/KTableKTableJoinTest.java |  22 +-
 .../internals/KTableKTableLeftJoinTest.java |  29 +-
 .../internals/KTableKTableOuterJoinTest.java|  14 +-
 .../kstream/internals/KTableMapKeysTest.java|   3 +-
 .../kstream/internals/KTableMapValuesTest.java  |  16 +-
 .../kstream/internals/KTableSourceTest.java |  10 +-
 .../kafka/streams/perf/SimpleBenchmark.java |  13 +-
 .../kafka/streams/perf/YahooBenchmark.java  |   3 +-
 .../internals/StreamsMetadataStateTest.java |   8 +-
 .../kafka/streams/tests/SmokeTestClient.java|  13 +-
 36 files changed, 443 insertions(+), 854 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/kafka/blob/f2b74aa1/streams/examples/src/main/java/org/apache/kafka/streams/examples/pageview/PageViewTypedDemo.java
--
diff --git 
a/streams/examples/src/main/java/org/apache/kafka/streams/examples/pageview/PageViewTypedDemo.java
 
b/streams/examples/src/main/java/org/apache/kafka/streams/examples/pageview/PageViewTypedDemo.java
index 72f9be8..068eece 100644
--- 
a/streams/examples/src/main/java/org/apache/kafka/streams/examples/pageview/PageViewTypedDemo.java
+++ 
b/streams/examples/src/main/java/org/apache/kafka/streams/examples/pageview/PageViewTypedDemo.java
@@ -29,6 +29,7 @@ import org.apache.kafka.streams.StreamsConfig;
 import org.apache.kafka.streams.kstream.KStream;
 import org.apache.kafka.streams.kstream.KTable;
 import org.apache.kafka.streams.kstream.KeyValueMapper;
+import org.apache.kafka.streams.kstream.Produced;
 import org.apache.kafka.streams.kstream.Serialized;
 import org.apache.kafka.streams.kstream.TimeWindows;
 import org.apache.kafka.streams.kstream.ValueJoiner;
@@ -145,8 +146,8 @@ public class PageViewTypedDemo {
 
 KStream views = 
builder.stream("streams-pageview-input", Consumed.with(Serdes.String(), 
pageViewSerde));
 
-KTable users = builder.table(Serdes.String(), 
userProfileSerde,
-"streams-userprofile-input", "streams-userprofile-store-name");
+KTable users = 
builder.table("streams-userprofile-input",
+  
Consumed.with(Serdes.String(), userProfileSerde));
 
 KStream regionCount = views
 .leftJoin(users, new ValueJoiner

[1/2] kafka git commit: KAFKA-5873; add materialized overloads to StreamsBuilder

2017-09-18 Thread damianguy
Repository: kafka
Updated Branches:
  refs/heads/trunk 52d7b6763 -> f2b74aa1c


http://git-wip-us.apache.org/repos/asf/kafka/blob/f2b74aa1/streams/src/test/java/org/apache/kafka/streams/integration/GlobalKTableIntegrationTest.java
--
diff --git 
a/streams/src/test/java/org/apache/kafka/streams/integration/GlobalKTableIntegrationTest.java
 
b/streams/src/test/java/org/apache/kafka/streams/integration/GlobalKTableIntegrationTest.java
index cbf2b56..0bdd3a3 100644
--- 
a/streams/src/test/java/org/apache/kafka/streams/integration/GlobalKTableIntegrationTest.java
+++ 
b/streams/src/test/java/org/apache/kafka/streams/integration/GlobalKTableIntegrationTest.java
@@ -21,6 +21,7 @@ import org.apache.kafka.clients.consumer.ConsumerConfig;
 import org.apache.kafka.common.serialization.LongSerializer;
 import org.apache.kafka.common.serialization.Serdes;
 import org.apache.kafka.common.serialization.StringSerializer;
+import org.apache.kafka.common.utils.Bytes;
 import org.apache.kafka.streams.Consumed;
 import org.apache.kafka.streams.KafkaStreams;
 import org.apache.kafka.streams.KeyValue;
@@ -33,7 +34,9 @@ import org.apache.kafka.streams.kstream.GlobalKTable;
 import org.apache.kafka.streams.kstream.KStream;
 import org.apache.kafka.streams.kstream.KTable;
 import org.apache.kafka.streams.kstream.KeyValueMapper;
+import org.apache.kafka.streams.kstream.Materialized;
 import org.apache.kafka.streams.kstream.ValueJoiner;
+import org.apache.kafka.streams.state.KeyValueStore;
 import org.apache.kafka.streams.state.QueryableStoreTypes;
 import org.apache.kafka.streams.state.ReadOnlyKeyValueStore;
 import org.apache.kafka.test.IntegrationTest;
@@ -101,9 +104,13 @@ public class GlobalKTableIntegrationTest {
 
streamsConfiguration.put(StreamsConfig.CACHE_MAX_BYTES_BUFFERING_CONFIG, 0);
 
streamsConfiguration.put(IntegrationTestUtils.INTERNAL_LEAVE_GROUP_ON_CLOSE, 
true);
 streamsConfiguration.put(StreamsConfig.COMMIT_INTERVAL_MS_CONFIG, 100);
-globalTable = builder.globalTable(Serdes.Long(), Serdes.String(), 
null, globalOne, globalStore);
-stream = builder.stream(inputStream, Consumed.with(Serdes.String(), 
Serdes.Long()));
-table = builder.table(Serdes.String(), Serdes.Long(), inputTable, 
"table");
+globalTable = builder.globalTable(globalOne, 
Consumed.with(Serdes.Long(), Serdes.String()),
+  Materialized.>as(globalStore)
+  .withKeySerde(Serdes.Long())
+  
.withValueSerde(Serdes.String()));
+final Consumed stringLongConsumed = 
Consumed.with(Serdes.String(), Serdes.Long());
+stream = builder.stream(inputStream, stringLongConsumed);
+table = builder.table(inputTable, stringLongConsumed);
 foreachAction = new ForeachAction() {
 @Override
 public void apply(final String key, final String value) {

http://git-wip-us.apache.org/repos/asf/kafka/blob/f2b74aa1/streams/src/test/java/org/apache/kafka/streams/integration/JoinIntegrationTest.java
--
diff --git 
a/streams/src/test/java/org/apache/kafka/streams/integration/JoinIntegrationTest.java
 
b/streams/src/test/java/org/apache/kafka/streams/integration/JoinIntegrationTest.java
index 3a771c4..faa581b 100644
--- 
a/streams/src/test/java/org/apache/kafka/streams/integration/JoinIntegrationTest.java
+++ 
b/streams/src/test/java/org/apache/kafka/streams/integration/JoinIntegrationTest.java
@@ -128,8 +128,8 @@ public class JoinIntegrationTest {
 CLUSTER.createTopics(INPUT_TOPIC_1, INPUT_TOPIC_2, OUTPUT_TOPIC);
 
 builder = new StreamsBuilder();
-leftTable = builder.table(INPUT_TOPIC_1, "leftTable");
-rightTable = builder.table(INPUT_TOPIC_2, "rightTable");
+leftTable = builder.table(INPUT_TOPIC_1);
+rightTable = builder.table(INPUT_TOPIC_2);
 leftStream = leftTable.toStream();
 rightStream = rightTable.toStream();
 }

http://git-wip-us.apache.org/repos/asf/kafka/blob/f2b74aa1/streams/src/test/java/org/apache/kafka/streams/integration/KStreamKTableJoinIntegrationTest.java
--
diff --git 
a/streams/src/test/java/org/apache/kafka/streams/integration/KStreamKTableJoinIntegrationTest.java
 
b/streams/src/test/java/org/apache/kafka/streams/integration/KStreamKTableJoinIntegrationTest.java
index a433667..8d4299b 100644
--- 
a/streams/src/test/java/org/apache/kafka/streams/integration/KStreamKTableJoinIntegrationTest.java
+++ 
b/streams/src/test/java/org/apache/kafka/streams/integration/KStreamKTableJoinIntegrationTest.java
@@ -211,7 +211,8 @@ public class KStreamKTableJoinIntegrationTest {
 // 
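
For illustration, a sketch of the Materialized-based replacement for the removed serde-based overloads, mirroring the test change above; topic and store names are illustrative:

    import org.apache.kafka.common.serialization.Serdes;
    import org.apache.kafka.common.utils.Bytes;
    import org.apache.kafka.streams.Consumed;
    import org.apache.kafka.streams.StreamsBuilder;
    import org.apache.kafka.streams.kstream.GlobalKTable;
    import org.apache.kafka.streams.kstream.Materialized;
    import org.apache.kafka.streams.state.KeyValueStore;

    final StreamsBuilder builder = new StreamsBuilder();

    // Replaces the removed builder.globalTable(keySerde, valueSerde, ...) style.
    final GlobalKTable<Long, String> global = builder.globalTable(
        "global-topic",
        Consumed.with(Serdes.Long(), Serdes.String()),
        Materialized.<Long, String, KeyValueStore<Bytes, byte[]>>as("global-store"));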

kafka git commit: MINOR: Fix typo in mapper parameter of flatMapValues

2017-09-18 Thread damianguy
Repository: kafka
Updated Branches:
  refs/heads/trunk b363901cb -> bd83ae6ba


MINOR: Fix typo in mapper parameter of flatMapValues

The parameter is already called `mapper` in the KStreamImpl class. I think it 
was probably named `processor` here because it was copy/pasted from some other 
signature. This seems trivial enough to not require a jira as per the 
contribution guidelines.

Author: Andy Chambers 

Reviewers: Damian Guy 

Closes #3888 from cddr/fix-kstream-flatMapValues-signature


Project: http://git-wip-us.apache.org/repos/asf/kafka/repo
Commit: http://git-wip-us.apache.org/repos/asf/kafka/commit/bd83ae6b
Tree: http://git-wip-us.apache.org/repos/asf/kafka/tree/bd83ae6b
Diff: http://git-wip-us.apache.org/repos/asf/kafka/diff/bd83ae6b

Branch: refs/heads/trunk
Commit: bd83ae6ba1f887ab112c4ccb2002633dfd387d69
Parents: b363901
Author: Andy Chambers 
Authored: Mon Sep 18 15:30:25 2017 +0100
Committer: Damian Guy 
Committed: Mon Sep 18 15:30:25 2017 +0100

--
 .../src/main/java/org/apache/kafka/streams/kstream/KStream.java  | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/kafka/blob/bd83ae6b/streams/src/main/java/org/apache/kafka/streams/kstream/KStream.java
--
diff --git 
a/streams/src/main/java/org/apache/kafka/streams/kstream/KStream.java 
b/streams/src/main/java/org/apache/kafka/streams/kstream/KStream.java
index 3a51fad..f8f99f2 100644
--- a/streams/src/main/java/org/apache/kafka/streams/kstream/KStream.java
+++ b/streams/src/main/java/org/apache/kafka/streams/kstream/KStream.java
@@ -252,7 +252,7 @@ public interface KStream {
  * Thus, no internal data redistribution is required if a key 
based operator (like an aggregation or join)
  * is applied to the result {@code KStream}. (cf. {@link 
#flatMap(KeyValueMapper)})
  *
- * @param processor a {@link ValueMapper} the computes the new output 
values
+ * @param mapper a {@link ValueMapper} that computes the new output values
  * @param   the value type of the result stream
  * @return a {@code KStream} that contains more or less records with 
unmodified keys and new values of different type
  * @see #selectKey(KeyValueMapper)
@@ -262,7 +262,7 @@ public interface KStream {
  * @see #transform(TransformerSupplier, String...)
  * @see #transformValues(ValueTransformerSupplier, String...)
  */
- KStream flatMapValues(final ValueMapper> processor);
+ KStream flatMapValues(final ValueMapper> mapper);
 
 /**
  * Print the records of this stream to {@code System.out}.
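
For illustration, a typical use of the renamed parameter; `sentences` is an assumed KStream<String, String>:

    import java.util.Arrays;
    import org.apache.kafka.streams.kstream.KStream;

    // The mapper may return zero or more values per input record; keys are left
    // untouched, so no repartitioning is needed downstream.
    final KStream<String, String> words =
        sentences.flatMapValues(value -> Arrays.asList(value.split("\\W+")));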



kafka git commit: KAFKA-5515; Remove date formatting from Segments

2017-09-18 Thread damianguy
Repository: kafka
Updated Branches:
  refs/heads/trunk be6252d8e -> ed0e69214


KAFKA-5515; Remove date formatting from Segments

Remove date formatting from `Segments` and use the `segmentId` instead.
Add tests to make sure can load old segments.
Rename old segment dirs to new formatting at load time.

Author: Damian Guy 

Reviewers: tedyu , Matthias J. Sax 
, Guozhang Wang 

Closes #3783 from dguy/kafka-5515


Project: http://git-wip-us.apache.org/repos/asf/kafka/repo
Commit: http://git-wip-us.apache.org/repos/asf/kafka/commit/ed0e6921
Tree: http://git-wip-us.apache.org/repos/asf/kafka/tree/ed0e6921
Diff: http://git-wip-us.apache.org/repos/asf/kafka/diff/ed0e6921

Branch: refs/heads/trunk
Commit: ed0e692147d81e396bf10f4d9425516d51bd52cc
Parents: be6252d
Author: Damian Guy 
Authored: Mon Sep 18 12:11:56 2017 +0100
Committer: Damian Guy 
Committed: Mon Sep 18 12:11:56 2017 +0100

--
 .../internals/RocksDBSegmentedBytesStore.java   |  1 +
 .../kafka/streams/state/internals/Segments.java | 46 +++-
 .../RocksDBSegmentedBytesStoreTest.java | 39 -
 .../streams/state/internals/SegmentsTest.java   | 29 ++--
 4 files changed, 91 insertions(+), 24 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/kafka/blob/ed0e6921/streams/src/main/java/org/apache/kafka/streams/state/internals/RocksDBSegmentedBytesStore.java
--
diff --git 
a/streams/src/main/java/org/apache/kafka/streams/state/internals/RocksDBSegmentedBytesStore.java
 
b/streams/src/main/java/org/apache/kafka/streams/state/internals/RocksDBSegmentedBytesStore.java
index f3c4639..4d4ee41 100644
--- 
a/streams/src/main/java/org/apache/kafka/streams/state/internals/RocksDBSegmentedBytesStore.java
+++ 
b/streams/src/main/java/org/apache/kafka/streams/state/internals/RocksDBSegmentedBytesStore.java
@@ -138,4 +138,5 @@ class RocksDBSegmentedBytesStore implements 
SegmentedBytesStore {
 public boolean isOpen() {
 return open;
 }
+
 }

http://git-wip-us.apache.org/repos/asf/kafka/blob/ed0e6921/streams/src/main/java/org/apache/kafka/streams/state/internals/Segments.java
--
diff --git 
a/streams/src/main/java/org/apache/kafka/streams/state/internals/Segments.java 
b/streams/src/main/java/org/apache/kafka/streams/state/internals/Segments.java
index 9c8653a..7c6bb53 100644
--- 
a/streams/src/main/java/org/apache/kafka/streams/state/internals/Segments.java
+++ 
b/streams/src/main/java/org/apache/kafka/streams/state/internals/Segments.java
@@ -24,6 +24,7 @@ import org.slf4j.LoggerFactory;
 
 import java.io.File;
 import java.io.IOException;
+import java.text.ParseException;
 import java.text.SimpleDateFormat;
 import java.util.ArrayList;
 import java.util.Arrays;
@@ -61,12 +62,14 @@ class Segments {
 this.formatter.setTimeZone(new SimpleTimeZone(0, "UTC"));
 }
 
-long segmentId(long timestamp) {
+long segmentId(final long timestamp) {
 return timestamp / segmentInterval;
 }
 
-String segmentName(long segmentId) {
-return name + "-" + formatter.format(new Date(segmentId * 
segmentInterval));
+String segmentName(final long segmentId) {
+// previous format used - as a separator so if this changes in the 
future
+// then we should use something different.
+return name + ":" + segmentId * segmentInterval;
 }
 
 Segment getSegmentForTimestamp(final long timestamp) {
@@ -101,7 +104,7 @@ class Segments {
 if (list != null) {
 long[] segmentIds = new long[list.length];
 for (int i = 0; i < list.length; i++)
-segmentIds[i] = segmentIdFromSegmentName(list[i]);
+segmentIds[i] = segmentIdFromSegmentName(list[i], dir);
 
 // open segments in the id order
 Arrays.sort(segmentIds);
@@ -185,12 +188,35 @@ class Segments {
 }
 }
 
-private long segmentIdFromSegmentName(String segmentName) {
-try {
-Date date = formatter.parse(segmentName.substring(name.length() + 
1));
-return date.getTime() / segmentInterval;
-} catch (Exception ex) {
-return -1L;
+private long segmentIdFromSegmentName(final String segmentName,
+  final File parent) {
+// old style segment name with date
+if (segmentName.charAt(name.length()) == '-') {
+final String datePart = segmentName.substring(name.length() + 1);
+final Date date;
+try {
+date = 
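
A worked sketch of the new naming scheme, assuming a store named "my-store" and a segment interval of 60,000 ms (both values are illustrative):

    final long segmentInterval = 60_000L;

    final long segmentId = 150_000L / segmentInterval;                           // == 2
    final String segmentName = "my-store" + ":" + segmentId * segmentInterval;   // "my-store:120000"
    // Old-format directories used '-' as the separator before a formatted UTC
    // date; they are detected by that separator and renamed at load time.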

kafka git commit: MINOR: Code cleanup, subject: log statements.

2017-09-18 Thread damianguy
Repository: kafka
Updated Branches:
  refs/heads/trunk d83252eba -> be6252d8e


MINOR: Code cleanup, subject: log statements.

I'm doing this in my spare time, so don't let reviewing this PR take away 
actual work time. This is just me going over the code with the IntelliJ 
analyzer and applying the most straightforward fixes.

This PR is focused only on seemingly erroneous log statements:

1: A log statement that has 4 arguments supplied but only 3 `{}` placeholders.

2: A log statement that checks if debug is enabled, but then logs at `info` 
level.

Author: coscale_kdegroot 

Reviewers: Damian Guy 

Closes #3886 from KoenDG/loggingErrors


Project: http://git-wip-us.apache.org/repos/asf/kafka/repo
Commit: http://git-wip-us.apache.org/repos/asf/kafka/commit/be6252d8
Tree: http://git-wip-us.apache.org/repos/asf/kafka/tree/be6252d8
Diff: http://git-wip-us.apache.org/repos/asf/kafka/diff/be6252d8

Branch: refs/heads/trunk
Commit: be6252d8ebdf9cf2d151028a7ba20eb1591b5961
Parents: d83252e
Author: coscale_kdegroot 
Authored: Mon Sep 18 12:04:56 2017 +0100
Committer: Damian Guy 
Committed: Mon Sep 18 12:04:56 2017 +0100

--
 .../kafka/streams/processor/internals/StreamPartitionAssignor.java | 2 +-
 .../org/apache/kafka/streams/processor/internals/StreamThread.java | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/kafka/blob/be6252d8/streams/src/main/java/org/apache/kafka/streams/processor/internals/StreamPartitionAssignor.java
--
diff --git 
a/streams/src/main/java/org/apache/kafka/streams/processor/internals/StreamPartitionAssignor.java
 
b/streams/src/main/java/org/apache/kafka/streams/processor/internals/StreamPartitionAssignor.java
index 34e9e8a..621eb15 100644
--- 
a/streams/src/main/java/org/apache/kafka/streams/processor/internals/StreamPartitionAssignor.java
+++ 
b/streams/src/main/java/org/apache/kafka/streams/processor/internals/StreamPartitionAssignor.java
@@ -502,7 +502,7 @@ public class StreamPartitionAssignor implements 
PartitionAssignor, Configurable,
 states.put(entry.getKey(), entry.getValue().state);
 }
 
-log.debug("Assigning tasks {} to clients {} with number of replicas 
{}",
+log.debug("{} Assigning tasks {} to clients {} with number of replicas 
{}",
 logPrefix, partitionsForTask.keySet(), states, 
numStandbyReplicas);
 
 final StickyTaskAssignor taskAssignor = new 
StickyTaskAssignor<>(states, partitionsForTask.keySet());

http://git-wip-us.apache.org/repos/asf/kafka/blob/be6252d8/streams/src/main/java/org/apache/kafka/streams/processor/internals/StreamThread.java
--
diff --git 
a/streams/src/main/java/org/apache/kafka/streams/processor/internals/StreamThread.java
 
b/streams/src/main/java/org/apache/kafka/streams/processor/internals/StreamThread.java
index b753cf9..867359b 100644
--- 
a/streams/src/main/java/org/apache/kafka/streams/processor/internals/StreamThread.java
+++ 
b/streams/src/main/java/org/apache/kafka/streams/processor/internals/StreamThread.java
@@ -963,7 +963,7 @@ public class StreamThread extends Thread implements 
ThreadDataProvider {
 streamsMetrics.commitTimeSensor.record(computeLatency() / 
(double) committed, timerStartedMs);
 }
 if (log.isDebugEnabled()) {
-log.info("Committed all active tasks {} and standby tasks {} 
in {}ms",
+log.debug("Committed all active tasks {} and standby tasks {} 
in {}ms",
 taskManager.activeTaskIds(), 
taskManager.standbyTaskIds(), timerStartedMs - now);
 }
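
For illustration, a standalone sketch of both bug patterns and their fixes; the class and variable names are illustrative, not taken from the commit:

    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;

    public class LogPlaceholderExample {
        private static final Logger log = LoggerFactory.getLogger(LogPlaceholderExample.class);

        public static void main(final String[] args) {
            final String logPrefix = "stream-thread [t-1]";
            final String tasks = "[0_0, 0_1]";
            final String clients = "{c1, c2}";
            final int numStandbyReplicas = 1;

            // Bug 1: four arguments but three placeholders; SLF4J binds the first
            // three arguments and silently drops numStandbyReplicas.
            log.debug("Assigning tasks {} to clients {} with number of replicas {}",
                    logPrefix, tasks, clients, numStandbyReplicas);

            // Fixed: one placeholder per argument.
            log.debug("{} Assigning tasks {} to clients {} with number of replicas {}",
                    logPrefix, tasks, clients, numStandbyReplicas);

            // Bug 2: guarded by isDebugEnabled() but logged at info level.
            if (log.isDebugEnabled()) {
                log.info("Committed all active tasks in {}ms", 42);   // wrong level
                log.debug("Committed all active tasks in {}ms", 42);  // fixed
            }
        }
    }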
 



kafka git commit: KAFKA-5654; add materialized count, reduce, aggregate to KGroupedStream

2017-09-18 Thread damianguy
Repository: kafka
Updated Branches:
  refs/heads/trunk 346d0ca53 -> d83252eba


KAFKA-5654; add materialized count, reduce, aggregate to KGroupedStream

Add overloads of `count`, `reduce`, and `aggregate` that take a `Materialized` to 
`KGroupedStream`.
Refactor common parts between `KGroupedStream` and `WindowedKStream`

Author: Damian Guy 

Reviewers: Matthias J. Sax , Guozhang Wang 


Closes #3827 from dguy/kafka-5654


Project: http://git-wip-us.apache.org/repos/asf/kafka/repo
Commit: http://git-wip-us.apache.org/repos/asf/kafka/commit/d83252eb
Tree: http://git-wip-us.apache.org/repos/asf/kafka/tree/d83252eb
Diff: http://git-wip-us.apache.org/repos/asf/kafka/diff/d83252eb

Branch: refs/heads/trunk
Commit: d83252ebaeeca5bf19584908d95b424beb31b12e
Parents: 346d0ca
Author: Damian Guy 
Authored: Mon Sep 18 11:54:14 2017 +0100
Committer: Damian Guy 
Committed: Mon Sep 18 11:54:14 2017 +0100

--
 .../kafka/streams/kstream/KGroupedStream.java   | 210 ++-
 .../GroupedStreamAggregateBuilder.java  |  76 +++
 .../kstream/internals/KGroupedStreamImpl.java   | 127 +++
 .../streams/kstream/internals/KStreamImpl.java  |  25 +--
 .../kstream/internals/MaterializedInternal.java |  13 +-
 .../kstream/internals/WindowedKStreamImpl.java  |  57 ++---
 .../internals/KGroupedStreamImplTest.java   | 106 ++
 7 files changed, 515 insertions(+), 99 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/kafka/blob/d83252eb/streams/src/main/java/org/apache/kafka/streams/kstream/KGroupedStream.java
--
diff --git 
a/streams/src/main/java/org/apache/kafka/streams/kstream/KGroupedStream.java 
b/streams/src/main/java/org/apache/kafka/streams/kstream/KGroupedStream.java
index f12c2b2..08916ef 100644
--- a/streams/src/main/java/org/apache/kafka/streams/kstream/KGroupedStream.java
+++ b/streams/src/main/java/org/apache/kafka/streams/kstream/KGroupedStream.java
@@ -18,6 +18,7 @@ package org.apache.kafka.streams.kstream;
 
 import org.apache.kafka.common.annotation.InterfaceStability;
 import org.apache.kafka.common.serialization.Serde;
+import org.apache.kafka.common.utils.Bytes;
 import org.apache.kafka.streams.KafkaStreams;
 import org.apache.kafka.streams.KeyValue;
 import org.apache.kafka.streams.StreamsConfig;
@@ -146,6 +147,38 @@ public interface KGroupedStream {
 KTable count(final StateStoreSupplier 
storeSupplier);
 
 /**
+ * Count the number of records in this stream by the grouped key.
+ * Records with {@code null} key or value are ignored.
+ * The result is written into a local {@link KeyValueStore} (which is 
basically an ever-updating materialized view)
+ * provided by the given {@code storeSupplier}.
+ * Furthermore, updates to the store are sent downstream into a {@link 
KTable} changelog stream.
+ * 
+ * Not all updates might get sent downstream, as an internal cache is used 
to deduplicate consecutive updates to
+ * the same key.
+ * The rate of propagated updates depends on your input data rate, the 
number of distinct keys, the number of
+ * parallel running Kafka Streams instances, and the {@link StreamsConfig 
configuration} parameters for
+ * {@link StreamsConfig#CACHE_MAX_BYTES_BUFFERING_CONFIG cache size}, and
+ * {@link StreamsConfig#COMMIT_INTERVAL_MS_CONFIG commit interval}.
+ * 
+ * To query the local {@link KeyValueStore} it must be obtained via
+ * {@link KafkaStreams#store(String, QueryableStoreType) 
KafkaStreams#store(...)}.
+ * {@code
+ * KafkaStreams streams = ... // counting words
+ * String queryableStoreName = "count-store"; // the queryableStoreName 
should be the name of the store as defined by the Materialized instance
+ * ReadOnlyKeyValueStore localStore = 
streams.store(queryableStoreName, QueryableStoreTypes.keyValueStore());
+ * String key = "some-word";
+ * Long countForWord = localStore.get(key); // key must be local 
(application state is shared over all running Kafka Streams instances)
+ * }
+ * For non-local keys, a custom RPC mechanism must be implemented using 
{@link KafkaStreams#allMetadata()} to
+ * query the value of the key on a parallel running instance of your Kafka 
Streams application.
+ *
+ * @param materialized  an instance of {@link Materialized} used to 
materialize a state store. Cannot be {@code null}.
+ * @return a {@link KTable} that contains "update" records with unmodified 
keys and {@link Long} values that
+ * represent the latest (rolling) count (i.e., number of records) for each 
key
+ */
+KTable count(final Materialized
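
As a sketch matching the javadoc above; `words` is an assumed KStream<String, String> and the store name is illustrative:

    import org.apache.kafka.common.serialization.Serdes;
    import org.apache.kafka.common.utils.Bytes;
    import org.apache.kafka.streams.kstream.KTable;
    import org.apache.kafka.streams.kstream.Materialized;
    import org.apache.kafka.streams.kstream.Serialized;
    import org.apache.kafka.streams.state.KeyValueStore;

    // Count occurrences per word into a queryable store named "count-store".
    final KTable<String, Long> counts = words
        .groupBy((key, word) -> word, Serialized.with(Serdes.String(), Serdes.String()))
        .count(Materialized.<String, Long, KeyValueStore<Bytes, byte[]>>as("count-store")
               .withValueSerde(Serdes.Long()));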

[3/3] kafka git commit: KAFKA-5754; Refactor Streams to use LogContext

2017-09-18 Thread damianguy
KAFKA-5754; Refactor Streams to use LogContext

This PR utilizes `org.apache.kafka.common.utils.LogContext` for logging in 
`KafkaStreams`. hachikuji, ijuma please review this and let me know your 
thoughts.

Author: umesh chaudhary 

Reviewers: Guozhang Wang , Damian Guy 

Closes #3727 from umesh9794/KAFKA-5754


Project: http://git-wip-us.apache.org/repos/asf/kafka/repo
Commit: http://git-wip-us.apache.org/repos/asf/kafka/commit/f305dd68
Tree: http://git-wip-us.apache.org/repos/asf/kafka/tree/f305dd68
Diff: http://git-wip-us.apache.org/repos/asf/kafka/diff/f305dd68

Branch: refs/heads/trunk
Commit: f305dd68f6524abc25c4ed88983f0e78b4e6c243
Parents: 6055c74
Author: umesh chaudhary 
Authored: Mon Sep 18 09:53:27 2017 +0100
Committer: Damian Guy 
Committed: Mon Sep 18 09:53:27 2017 +0100

--
 .../org/apache/kafka/streams/KafkaStreams.java  |  43 +++---
 .../processor/internals/AbstractTask.java   |  31 ++--
 .../processor/internals/AssignedTasks.java  |  66 -
 .../processor/internals/GlobalStreamThread.java |  37 ++---
 .../internals/ProcessorStateManager.java|  46 +++---
 .../internals/RecordCollectorImpl.java  |  29 ++--
 .../processor/internals/StandbyContextImpl.java |   3 +-
 .../processor/internals/StandbyTask.java|  13 +-
 .../internals/StoreChangelogReader.java |  32 ++--
 .../internals/StreamPartitionAssignor.java  |  46 +++---
 .../streams/processor/internals/StreamTask.java |  51 +++
 .../processor/internals/StreamThread.java   | 145 +--
 .../processor/internals/TaskManager.java|  40 ++---
 .../streams/state/internals/ThreadCache.java|  15 +-
 .../apache/kafka/streams/KafkaStreamsTest.java  |   1 +
 ...reamSessionWindowAggregateProcessorTest.java |   3 +-
 .../internals/AbstractProcessorContextTest.java |   3 +-
 .../processor/internals/AbstractTaskTest.java   |   3 +-
 .../processor/internals/AssignedTasksTest.java  |   3 +-
 .../processor/internals/ProcessorNodeTest.java  |   3 +-
 .../internals/ProcessorStateManagerTest.java|  51 +--
 .../internals/RecordCollectorTest.java  |  25 +++-
 .../processor/internals/RecordQueueTest.java|   3 +-
 .../processor/internals/SinkNodeTest.java   |   3 +-
 .../processor/internals/StandbyTaskTest.java|   3 +-
 .../processor/internals/StateConsumerTest.java  |   6 +-
 .../internals/StoreChangelogReaderTest.java |   6 +-
 .../processor/internals/StreamTaskTest.java |  11 +-
 .../streams/state/KeyValueStoreTestDriver.java  |   5 +-
 .../internals/CachingKeyValueStoreTest.java |   3 +-
 .../internals/CachingSessionStoreTest.java  |   3 +-
 .../state/internals/CachingWindowStoreTest.java |   3 +-
 .../ChangeLoggingKeyValueBytesStoreTest.java|   3 +-
 .../ChangeLoggingKeyValueStoreTest.java |   3 +-
 ...rtedCacheKeyValueBytesStoreIteratorTest.java |   5 +-
 ...rtedCacheWrappedWindowStoreIteratorTest.java |   3 +-
 .../state/internals/MeteredWindowStoreTest.java |   3 +-
 .../RocksDBKeyValueStoreSupplierTest.java   |   3 +-
 .../RocksDBSegmentedBytesStoreTest.java |   3 +-
 .../RocksDBSessionStoreSupplierTest.java|   3 +-
 .../internals/RocksDBSessionStoreTest.java  |   3 +-
 .../state/internals/RocksDBStoreTest.java   |   5 +-
 .../RocksDBWindowStoreSupplierTest.java |   3 +-
 .../state/internals/RocksDBWindowStoreTest.java |   5 +-
 .../state/internals/SegmentIteratorTest.java|   3 +-
 .../streams/state/internals/SegmentsTest.java   |   3 +-
 .../state/internals/StoreChangeLoggerTest.java  |   3 +-
 .../StreamThreadStateStoreProviderTest.java |   3 +-
 .../state/internals/ThreadCacheTest.java|  54 +++
 .../apache/kafka/test/KStreamTestDriver.java|   8 +-
 .../kafka/test/ProcessorTopologyTestDriver.java |   6 +-
 51 files changed, 466 insertions(+), 391 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/kafka/blob/f305dd68/streams/src/main/java/org/apache/kafka/streams/KafkaStreams.java
--
diff --git a/streams/src/main/java/org/apache/kafka/streams/KafkaStreams.java 
b/streams/src/main/java/org/apache/kafka/streams/KafkaStreams.java
index 7698f39..b31a3e3 100644
--- a/streams/src/main/java/org/apache/kafka/streams/KafkaStreams.java
+++ b/streams/src/main/java/org/apache/kafka/streams/KafkaStreams.java
@@ -30,6 +30,7 @@ import org.apache.kafka.common.metrics.Metrics;
 import org.apache.kafka.common.metrics.MetricsReporter;
 import org.apache.kafka.common.metrics.Sensor;
 import org.apache.kafka.common.serialization.Serializer;
+import org.apache.kafka.common.utils.LogContext;
 import org.apache.kafka.common.utils.Time;
 import 

kafka-site git commit: MINOR: add note to streams quickstart about snapshot dependency removal being temporary

2017-09-14 Thread damianguy
Repository: kafka-site
Updated Branches:
  refs/heads/asf-site b415c59b0 -> e834dd428


MINOR: add note to streams quickstart about snapshot dependency removal being 
temporary


Project: http://git-wip-us.apache.org/repos/asf/kafka-site/repo
Commit: http://git-wip-us.apache.org/repos/asf/kafka-site/commit/e834dd42
Tree: http://git-wip-us.apache.org/repos/asf/kafka-site/tree/e834dd42
Diff: http://git-wip-us.apache.org/repos/asf/kafka-site/diff/e834dd42

Branch: refs/heads/asf-site
Commit: e834dd4286a16c3edca83dfa9ffcd3aac5d8df62
Parents: b415c59
Author: Damian Guy 
Authored: Thu Sep 14 14:11:24 2017 +0100
Committer: Damian Guy 
Committed: Thu Sep 14 14:11:24 2017 +0100

--
 0110/streams/tutorial.html | 1 +
 1 file changed, 1 insertion(+)
--


http://git-wip-us.apache.org/repos/asf/kafka-site/blob/e834dd42/0110/streams/tutorial.html
--
diff --git a/0110/streams/tutorial.html b/0110/streams/tutorial.html
index 7791880..f95eddc 100644
--- a/0110/streams/tutorial.html
+++ b/0110/streams/tutorial.html
@@ -63,6 +63,7 @@
 
 
 Important: You must manually update the setting of 
kafka.version in the generated pom.xml file 
from 0.11.0.1-SNAPSHOT to 0.11.0.1.
+Note: in the next release the above step will not be 
required.
 There are already several example programs written with Streams 
library under src/main/java.
 Since we are going to start writing such programs from scratch, we can 
now delete these examples:
 



kafka git commit: MINOR: update docs to add note about removing SNAPSHOT from streams dependency

2017-09-14 Thread damianguy
Repository: kafka
Updated Branches:
  refs/heads/0.11.0 b95a6bf61 -> 1c9581e2e


MINOR: update docs to add note about removing SNAPSHOT from streams dependency

Author: Damian Guy 

Reviewers: Michael G. Noll , Ismael Juma 


Closes #3858 from dguy/docs


Project: http://git-wip-us.apache.org/repos/asf/kafka/repo
Commit: http://git-wip-us.apache.org/repos/asf/kafka/commit/1c9581e2
Tree: http://git-wip-us.apache.org/repos/asf/kafka/tree/1c9581e2
Diff: http://git-wip-us.apache.org/repos/asf/kafka/diff/1c9581e2

Branch: refs/heads/0.11.0
Commit: 1c9581e2e9bb4a05dc2e25b4262272cfa1a4b470
Parents: b95a6bf
Author: Damian Guy 
Authored: Thu Sep 14 14:10:10 2017 +0100
Committer: Damian Guy 
Committed: Thu Sep 14 14:10:10 2017 +0100

--
 docs/streams/tutorial.html | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/kafka/blob/1c9581e2/docs/streams/tutorial.html
--
diff --git a/docs/streams/tutorial.html b/docs/streams/tutorial.html
index a1520de..f95eddc 100644
--- a/docs/streams/tutorial.html
+++ b/docs/streams/tutorial.html
@@ -62,8 +62,9 @@
 
 
 
-The pom.xml file included in the project already has the 
Streams dependency defined,
-and there are already several example programs written with Streams 
library under src/main/java.
+Important: You must manually update the setting of 
kafka.version in the generated pom.xml file 
from 0.11.0.1-SNAPSHOT to 0.11.0.1.
+Note: in the next release the above step will not be 
required.
+There are already several example programs written with Streams 
library under src/main/java.
 Since we are going to start writing such programs from scratch, we can 
now delete these examples:
 
 



kafka-site git commit: MINOR: update streams quickstart to have note about updating snapshot dependency

2017-09-14 Thread damianguy
Repository: kafka-site
Updated Branches:
  refs/heads/asf-site 7e98fdb95 -> b415c59b0


MINOR: update streams quickstart to have note about updating snapshot dependency


Project: http://git-wip-us.apache.org/repos/asf/kafka-site/repo
Commit: http://git-wip-us.apache.org/repos/asf/kafka-site/commit/b415c59b
Tree: http://git-wip-us.apache.org/repos/asf/kafka-site/tree/b415c59b
Diff: http://git-wip-us.apache.org/repos/asf/kafka-site/diff/b415c59b

Branch: refs/heads/asf-site
Commit: b415c59b05155244e27105773287919111735bc9
Parents: 7e98fdb
Author: Damian Guy 
Authored: Thu Sep 14 12:06:45 2017 +0100
Committer: Damian Guy 
Committed: Thu Sep 14 12:06:45 2017 +0100

--
 0110/streams/tutorial.html | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/kafka-site/blob/b415c59b/0110/streams/tutorial.html
--
diff --git a/0110/streams/tutorial.html b/0110/streams/tutorial.html
index a1520de..7791880 100644
--- a/0110/streams/tutorial.html
+++ b/0110/streams/tutorial.html
@@ -62,8 +62,8 @@
 
 
 
-The pom.xml file included in the project already has the 
Streams dependency defined,
-and there are already several example programs written with Streams 
library under src/main/java.
+Important: You must manually update the setting of 
kafka.version in the generated pom.xml file 
from 0.11.0.1-SNAPSHOT to 0.11.0.1.
+There are already several example programs written with Streams 
library under src/main/java.
 Since we are going to start writing such programs from scratch, we can 
now delete these examples:
 
 



kafka git commit: MINOR: Bump version in streams quickstart archetype pom.xml

2017-09-14 Thread damianguy
Repository: kafka
Updated Branches:
  refs/heads/0.11.0 b0708c4ac -> b95a6bf61


MINOR: Bump version in streams quickstart archetype pom.xml

Author: Damian Guy 

Reviewers: Ismael Juma 

Closes #3857 from dguy/fix-archetype-version


Project: http://git-wip-us.apache.org/repos/asf/kafka/repo
Commit: http://git-wip-us.apache.org/repos/asf/kafka/commit/b95a6bf6
Tree: http://git-wip-us.apache.org/repos/asf/kafka/tree/b95a6bf6
Diff: http://git-wip-us.apache.org/repos/asf/kafka/diff/b95a6bf6

Branch: refs/heads/0.11.0
Commit: b95a6bf61ae5c763ff054902a3c2d7cf205496e0
Parents: b0708c4
Author: Damian Guy 
Authored: Thu Sep 14 12:01:13 2017 +0100
Committer: Damian Guy 
Committed: Thu Sep 14 12:01:13 2017 +0100

--
 .../java/src/main/resources/archetype-resources/pom.xml  | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/kafka/blob/b95a6bf6/streams/quickstart/java/src/main/resources/archetype-resources/pom.xml
--
diff --git 
a/streams/quickstart/java/src/main/resources/archetype-resources/pom.xml 
b/streams/quickstart/java/src/main/resources/archetype-resources/pom.xml
index d7cf2d6..8ec4800 100644
--- a/streams/quickstart/java/src/main/resources/archetype-resources/pom.xml
+++ b/streams/quickstart/java/src/main/resources/archetype-resources/pom.xml
@@ -29,7 +29,7 @@
 
 
 UTF-8
-0.11.0.1-SNAPSHOT
+0.11.0.2-SNAPSHOT
 1.7.7
 1.2.17
 
@@ -133,4 +133,4 @@
 ${kafka.version}
 
 
-
\ No newline at end of file
+



kafka git commit: MINOR: Fix JavaDoc for StreamsConfig.PROCESSING_GUARANTEE_CONFIG

2017-09-13 Thread damianguy
Repository: kafka
Updated Branches:
  refs/heads/trunk 49b992dd8 -> c42bfc0d5


MINOR: Fix JavaDoc for StreamsConfig.PROCESSING_GUARANTEE_CONFIG

The contribution is my original work and I license the work to the project 
under the project's open source licence.

Author: lperry 

Reviewers: Matthias J. Sax , Damian Guy 


Closes #3843 from leigh-perry/trunk
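
For reference, a minimal sketch of setting the constant this commit re-documents; the values are the ones
named in its doc string below:

    import java.util.Properties;
    import org.apache.kafka.streams.StreamsConfig;

    final Properties props = new Properties();
    // "exactly_once" switches on exactly-once processing; "at_least_once" is the default.
    props.put(StreamsConfig.PROCESSING_GUARANTEE_CONFIG, StreamsConfig.EXACTLY_ONCE);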


Project: http://git-wip-us.apache.org/repos/asf/kafka/repo
Commit: http://git-wip-us.apache.org/repos/asf/kafka/commit/c42bfc0d
Tree: http://git-wip-us.apache.org/repos/asf/kafka/tree/c42bfc0d
Diff: http://git-wip-us.apache.org/repos/asf/kafka/diff/c42bfc0d

Branch: refs/heads/trunk
Commit: c42bfc0d51e6691b4b5672ef7c8a1bedcd452d7f
Parents: 49b992d
Author: lperry 
Authored: Wed Sep 13 17:52:45 2017 +0100
Committer: Damian Guy 
Committed: Wed Sep 13 17:52:45 2017 +0100

--
 streams/src/main/java/org/apache/kafka/streams/StreamsConfig.java | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/kafka/blob/c42bfc0d/streams/src/main/java/org/apache/kafka/streams/StreamsConfig.java
--
diff --git a/streams/src/main/java/org/apache/kafka/streams/StreamsConfig.java 
b/streams/src/main/java/org/apache/kafka/streams/StreamsConfig.java
index 6b0e245..446f941 100644
--- a/streams/src/main/java/org/apache/kafka/streams/StreamsConfig.java
+++ b/streams/src/main/java/org/apache/kafka/streams/StreamsConfig.java
@@ -230,7 +230,7 @@ public class StreamsConfig extends AbstractConfig {
 public static final String POLL_MS_CONFIG = "poll.ms";
 private static final String POLL_MS_DOC = "The amount of time in 
milliseconds to block waiting for input.";
 
-/** {@code cache.max.bytes.buffering} */
+/** {@code processing.guarantee} */
 public static final String PROCESSING_GUARANTEE_CONFIG = 
"processing.guarantee";
 private static final String PROCESSING_GUARANTEE_DOC = "The processing 
guarantee that should be used. Possible values are " + AT_LEAST_ONCE + 
" (default) and " + EXACTLY_ONCE + ".";
 



[04/10] kafka-site git commit: Update site for 0.11.0.1 release

2017-09-13 Thread damianguy
http://git-wip-us.apache.org/repos/asf/kafka-site/blob/a1278d06/0110/javadoc/org/apache/kafka/connect/connector/Task.html
--
diff --git a/0110/javadoc/org/apache/kafka/connect/connector/Task.html 
b/0110/javadoc/org/apache/kafka/connect/connector/Task.html
index 3aea4ad..c0954cc 100644
--- a/0110/javadoc/org/apache/kafka/connect/connector/Task.html
+++ b/0110/javadoc/org/apache/kafka/connect/connector/Task.html
@@ -2,15 +2,15 @@
 
 
 
-
-Task (kafka 0.11.0.0 API)
-
+
+Task (kafka 0.11.0.1 API)
+
 
 
 
 
 

http://git-wip-us.apache.org/repos/asf/kafka-site/blob/a1278d06/0110/javadoc/org/apache/kafka/connect/connector/package-frame.html
--
diff --git a/0110/javadoc/org/apache/kafka/connect/connector/package-frame.html 
b/0110/javadoc/org/apache/kafka/connect/connector/package-frame.html
index 21eed9f..d1050a1 100644
--- a/0110/javadoc/org/apache/kafka/connect/connector/package-frame.html
+++ b/0110/javadoc/org/apache/kafka/connect/connector/package-frame.html
@@ -2,9 +2,9 @@
 
 
 
-
-org.apache.kafka.connect.connector (kafka 0.11.0.0 API)
-
+
+org.apache.kafka.connect.connector (kafka 0.11.0.1 API)
+
 
 
 

http://git-wip-us.apache.org/repos/asf/kafka-site/blob/a1278d06/0110/javadoc/org/apache/kafka/connect/connector/package-summary.html
--
diff --git 
a/0110/javadoc/org/apache/kafka/connect/connector/package-summary.html 
b/0110/javadoc/org/apache/kafka/connect/connector/package-summary.html
index af2f3ad..4e4bcd6 100644
--- a/0110/javadoc/org/apache/kafka/connect/connector/package-summary.html
+++ b/0110/javadoc/org/apache/kafka/connect/connector/package-summary.html
@@ -2,15 +2,15 @@
 
 
 
-
-org.apache.kafka.connect.connector (kafka 0.11.0.0 API)
-
+
+org.apache.kafka.connect.connector (kafka 0.11.0.1 API)
+
 
 
 
 
 

http://git-wip-us.apache.org/repos/asf/kafka-site/blob/a1278d06/0110/javadoc/org/apache/kafka/connect/connector/package-tree.html
--
diff --git a/0110/javadoc/org/apache/kafka/connect/connector/package-tree.html 
b/0110/javadoc/org/apache/kafka/connect/connector/package-tree.html
index 85690dd..29c7daf 100644
--- a/0110/javadoc/org/apache/kafka/connect/connector/package-tree.html
+++ b/0110/javadoc/org/apache/kafka/connect/connector/package-tree.html
@@ -2,15 +2,15 @@
 
 
 
-
-org.apache.kafka.connect.connector Class Hierarchy (kafka 0.11.0.0 
API)
-
+
+org.apache.kafka.connect.connector Class Hierarchy (kafka 0.11.0.1 
API)
+
 
 
 
 
 

http://git-wip-us.apache.org/repos/asf/kafka-site/blob/a1278d06/0110/javadoc/org/apache/kafka/connect/data/ConnectSchema.html
--
diff --git a/0110/javadoc/org/apache/kafka/connect/data/ConnectSchema.html 
b/0110/javadoc/org/apache/kafka/connect/data/ConnectSchema.html
index d6a4e7d..97288de 100644
--- a/0110/javadoc/org/apache/kafka/connect/data/ConnectSchema.html
+++ b/0110/javadoc/org/apache/kafka/connect/data/ConnectSchema.html
@@ -2,15 +2,15 @@
 
 
 
-
-ConnectSchema (kafka 0.11.0.0 API)
-
+
+ConnectSchema (kafka 0.11.0.1 API)
+
 
 
 
 
 

http://git-wip-us.apache.org/repos/asf/kafka-site/blob/a1278d06/0110/javadoc/org/apache/kafka/connect/data/Date.html
--
diff --git a/0110/javadoc/org/apache/kafka/connect/data/Date.html 
b/0110/javadoc/org/apache/kafka/connect/data/Date.html
index 109a0c3..0bcdae5 100644
--- a/0110/javadoc/org/apache/kafka/connect/data/Date.html
+++ b/0110/javadoc/org/apache/kafka/connect/data/Date.html
@@ -2,15 +2,15 @@
 
 
 
-
-Date (kafka 0.11.0.0 API)
-
+
+Date (kafka 0.11.0.1 API)
+
 
 
 
 
 

http://git-wip-us.apache.org/repos/asf/kafka-site/blob/a1278d06/0110/javadoc/org/apache/kafka/streams/processor/StreamPartitioner.html
--
diff --git 
a/0110/javadoc/org/apache/kafka/streams/processor/StreamPartitioner.html 
b/0110/javadoc/org/apache/kafka/streams/processor/StreamPartitioner.html
index 36ebf1f..37d6ce3 100644
--- a/0110/javadoc/org/apache/kafka/streams/processor/StreamPartitioner.html
+++ b/0110/javadoc/org/apache/kafka/streams/processor/StreamPartitioner.html
@@ -2,15 +2,15 @@
 
 
 
-
-StreamPartitioner (kafka 0.11.0.0 API)
-
+
+StreamPartitioner (kafka 0.11.0.1 API)
+
 
 
 
 
 

http://git-wip-us.apache.org/repos/asf/kafka-site/blob/a1278d06/0110/javadoc/org/apache/kafka/streams/processor/TaskId.html
--
diff --git a/0110/javadoc/org/apache/kafka/streams/processor/TaskId.html 
b/0110/javadoc/org/apache/kafka/streams/processor/TaskId.html
index 39ae72d..a84b68c 100644
--- a/0110/javadoc/org/apache/kafka/streams/processor/TaskId.html
+++ b/0110/javadoc/org/apache/kafka/streams/processor/TaskId.html
@@ -2,15 +2,15 @@
 
 
 
-
-TaskId (kafka 0.11.0.0 API)
-
+
+TaskId (kafka 0.11.0.1 API)
+
 
 
 
 
 

http://git-wip-us.apache.org/repos/asf/kafka-site/blob/a1278d06/0110/javadoc/org/apache/kafka/streams/processor/TimestampExtractor.html
--
diff --git 
a/0110/javadoc/org/apache/kafka/streams/processor/TimestampExtractor.html 
b/0110/javadoc/org/apache/kafka/streams/processor/TimestampExtractor.html
index 85b8718..93b50c7 100644
--- a/0110/javadoc/org/apache/kafka/streams/processor/TimestampExtractor.html
+++ b/0110/javadoc/org/apache/kafka/streams/processor/TimestampExtractor.html
@@ -2,15 +2,15 @@
 
 
 
-
-TimestampExtractor (kafka 0.11.0.0 API)
-
+
+TimestampExtractor (kafka 0.11.0.1 API)
+
 
 
 
 
 

http://git-wip-us.apache.org/repos/asf/kafka-site/blob/a1278d06/0110/javadoc/org/apache/kafka/streams/processor/TopologyBuilder.AutoOffsetReset.html
--
diff --git 
a/0110/javadoc/org/apache/kafka/streams/processor/TopologyBuilder.AutoOffsetReset.html
 
b/0110/javadoc/org/apache/kafka/streams/processor/TopologyBuilder.AutoOffsetReset.html
index 1ea1dcb..9c8a434 100644
--- 
a/0110/javadoc/org/apache/kafka/streams/processor/TopologyBuilder.AutoOffsetReset.html
+++ 
b/0110/javadoc/org/apache/kafka/streams/processor/TopologyBuilder.AutoOffsetReset.html
@@ -2,15 +2,15 @@
 
 
 
-
-TopologyBuilder.AutoOffsetReset (kafka 0.11.0.0 API)
-
+
+TopologyBuilder.AutoOffsetReset (kafka 0.11.0.1 API)
+
 
 
 
 
 

http://git-wip-us.apache.org/repos/asf/kafka-site/blob/a1278d06/0110/javadoc/org/apache/kafka/streams/processor/TopologyBuilder.TopicsInfo.html
--
diff --git 
a/0110/javadoc/org/apache/kafka/streams/processor/TopologyBuilder.TopicsInfo.html
 
b/0110/javadoc/org/apache/kafka/streams/processor/TopologyBuilder.TopicsInfo.html
index 5fc6f8a..bb42605 100644
--- 

[05/10] kafka-site git commit: Update site for 0.11.0.1 release

2017-09-13 Thread damianguy
http://git-wip-us.apache.org/repos/asf/kafka-site/blob/a1278d06/0110/javadoc/org/apache/kafka/common/errors/UnknownTopicOrPartitionException.html
--
diff --git 
a/0110/javadoc/org/apache/kafka/common/errors/UnknownTopicOrPartitionException.html
 
b/0110/javadoc/org/apache/kafka/common/errors/UnknownTopicOrPartitionException.html
index 89cbd03..50e67ba 100644
--- 
a/0110/javadoc/org/apache/kafka/common/errors/UnknownTopicOrPartitionException.html
+++ 
b/0110/javadoc/org/apache/kafka/common/errors/UnknownTopicOrPartitionException.html
@@ -2,15 +2,15 @@
 
 
 
-
-UnknownTopicOrPartitionException (kafka 0.11.0.0 API)
-
+
+UnknownTopicOrPartitionException (kafka 0.11.0.1 API)
+
 
 
 
 
 

http://git-wip-us.apache.org/repos/asf/kafka-site/blob/a1278d06/0110/javadoc/org/apache/kafka/common/errors/UnsupportedForMessageFormatException.html
--
diff --git 
a/0110/javadoc/org/apache/kafka/common/errors/UnsupportedForMessageFormatException.html
 
b/0110/javadoc/org/apache/kafka/common/errors/UnsupportedForMessageFormatException.html
index 5b2d1e6..bf7cb27 100644
--- 
a/0110/javadoc/org/apache/kafka/common/errors/UnsupportedForMessageFormatException.html
+++ 
b/0110/javadoc/org/apache/kafka/common/errors/UnsupportedForMessageFormatException.html
@@ -2,15 +2,15 @@
 
 
 
-
-UnsupportedForMessageFormatException (kafka 0.11.0.0 API)
-
+
+UnsupportedForMessageFormatException (kafka 0.11.0.1 API)
+
 
 
 
 
 
@@ -126,7 +126,8 @@
 
 public class UnsupportedForMessageFormatException
 extends ApiException
-The message format version does not support the requested 
function.
+The message format version does not support the requested 
function. For example, if idempotence is
+ requested and the topic is using a message format older than 0.11.0.0, then 
this error will be returned.
See Also: Serialized Form
 
 

http://git-wip-us.apache.org/repos/asf/kafka-site/blob/a1278d06/0110/javadoc/org/apache/kafka/common/errors/UnsupportedSaslMechanismException.html
--
diff --git 
a/0110/javadoc/org/apache/kafka/common/errors/UnsupportedSaslMechanismException.html
 
b/0110/javadoc/org/apache/kafka/common/errors/UnsupportedSaslMechanismException.html
index aa6dab0..1446b1f 100644
--- 
a/0110/javadoc/org/apache/kafka/common/errors/UnsupportedSaslMechanismException.html
+++ 
b/0110/javadoc/org/apache/kafka/common/errors/UnsupportedSaslMechanismException.html
@@ -2,15 +2,15 @@
 
 
 
-
-UnsupportedSaslMechanismException (kafka 0.11.0.0 API)
-
+
+UnsupportedSaslMechanismException (kafka 0.11.0.1 API)
+
 
 
 
 
 

http://git-wip-us.apache.org/repos/asf/kafka-site/blob/a1278d06/0110/javadoc/org/apache/kafka/common/errors/UnsupportedVersionException.html
--
diff --git 
a/0110/javadoc/org/apache/kafka/common/errors/UnsupportedVersionException.html 
b/0110/javadoc/org/apache/kafka/common/errors/UnsupportedVersionException.html
index f75335f..9c4e50b 100644
--- 
a/0110/javadoc/org/apache/kafka/common/errors/UnsupportedVersionException.html
+++ 
b/0110/javadoc/org/apache/kafka/common/errors/UnsupportedVersionException.html
@@ -2,15 +2,15 @@
 
 
 
-
-UnsupportedVersionException (kafka 0.11.0.0 API)
-
+
+UnsupportedVersionException (kafka 0.11.0.1 API)
+
 
 
 
 
 
@@ -126,6 +126,14 @@
 
 public class UnsupportedVersionException
 extends ApiException
+Indicates that a request API or version needed by the 
client is not supported by the broker. This is
+ typically a fatal error as Kafka clients will downgrade request versions as 
needed except in cases where
+ a needed feature is not available in old versions. Fatal errors can generally 
only be handled by closing
+ the client instance, although in some cases it may be possible to continue 
without relying on the
+ underlying feature. 

[07/10] kafka-site git commit: Update site for 0.11.0.1 release

2017-09-13 Thread damianguy
http://git-wip-us.apache.org/repos/asf/kafka-site/blob/a1278d06/0110/javadoc/org/apache/kafka/clients/producer/ProducerRecord.html
--
diff --git a/0110/javadoc/org/apache/kafka/clients/producer/ProducerRecord.html 
b/0110/javadoc/org/apache/kafka/clients/producer/ProducerRecord.html
index 425d8cf..56e62ca 100644
--- a/0110/javadoc/org/apache/kafka/clients/producer/ProducerRecord.html
+++ b/0110/javadoc/org/apache/kafka/clients/producer/ProducerRecord.html
@@ -2,15 +2,15 @@
 
 
 
-
-ProducerRecord (kafka 0.11.0.0 API)
-
+
+ProducerRecord (kafka 0.11.0.1 API)
+
 
 
 
 
 

http://git-wip-us.apache.org/repos/asf/kafka-site/blob/a1278d06/0110/javadoc/org/apache/kafka/clients/producer/RecordMetadata.html
--
diff --git a/0110/javadoc/org/apache/kafka/clients/producer/RecordMetadata.html 
b/0110/javadoc/org/apache/kafka/clients/producer/RecordMetadata.html
index ccaaccd..c22ed2f 100644
--- a/0110/javadoc/org/apache/kafka/clients/producer/RecordMetadata.html
+++ b/0110/javadoc/org/apache/kafka/clients/producer/RecordMetadata.html
@@ -2,15 +2,15 @@
 
 
 
-
-RecordMetadata (kafka 0.11.0.0 API)
-
+
+RecordMetadata (kafka 0.11.0.1 API)
+
 
 
 
 
 

http://git-wip-us.apache.org/repos/asf/kafka-site/blob/a1278d06/0110/javadoc/org/apache/kafka/clients/producer/package-frame.html
--
diff --git a/0110/javadoc/org/apache/kafka/clients/producer/package-frame.html 
b/0110/javadoc/org/apache/kafka/clients/producer/package-frame.html
index 0fa431d..6381e71 100644
--- a/0110/javadoc/org/apache/kafka/clients/producer/package-frame.html
+++ b/0110/javadoc/org/apache/kafka/clients/producer/package-frame.html
@@ -2,9 +2,9 @@
 
 
 
-
-org.apache.kafka.clients.producer (kafka 0.11.0.0 API)
-
+
+org.apache.kafka.clients.producer (kafka 0.11.0.1 API)
+
 
 
 

http://git-wip-us.apache.org/repos/asf/kafka-site/blob/a1278d06/0110/javadoc/org/apache/kafka/clients/producer/package-summary.html
--
diff --git 
a/0110/javadoc/org/apache/kafka/clients/producer/package-summary.html 
b/0110/javadoc/org/apache/kafka/clients/producer/package-summary.html
index 52462f5..ea5afcb 100644
--- a/0110/javadoc/org/apache/kafka/clients/producer/package-summary.html
+++ b/0110/javadoc/org/apache/kafka/clients/producer/package-summary.html
@@ -2,15 +2,15 @@
 
 
 
-
-org.apache.kafka.clients.producer (kafka 0.11.0.0 API)
-
+
+org.apache.kafka.clients.producer (kafka 0.11.0.1 API)
+
 
 
 
 
 

http://git-wip-us.apache.org/repos/asf/kafka-site/blob/a1278d06/0110/javadoc/org/apache/kafka/clients/producer/package-tree.html
--
diff --git a/0110/javadoc/org/apache/kafka/clients/producer/package-tree.html 
b/0110/javadoc/org/apache/kafka/clients/producer/package-tree.html
index 792a773..ea7680e 100644
--- a/0110/javadoc/org/apache/kafka/clients/producer/package-tree.html
+++ b/0110/javadoc/org/apache/kafka/clients/producer/package-tree.html
@@ -2,15 +2,15 @@
 
 
 
-
-org.apache.kafka.clients.producer Class Hierarchy (kafka 0.11.0.0 
API)
-
+
+org.apache.kafka.clients.producer Class Hierarchy (kafka 0.11.0.1 
API)
+
 
 
 
 
 

http://git-wip-us.apache.org/repos/asf/kafka-site/blob/a1278d06/0110/javadoc/org/apache/kafka/common/Cluster.html
--
diff --git a/0110/javadoc/org/apache/kafka/common/Cluster.html 
b/0110/javadoc/org/apache/kafka/common/Cluster.html
index cdf464c..c2b95bd 100644
--- a/0110/javadoc/org/apache/kafka/common/Cluster.html
+++ b/0110/javadoc/org/apache/kafka/common/Cluster.html
@@ -2,15 +2,15 @@
 
 
 
-
-Cluster (kafka 0.11.0.0 API)
-
+
+Cluster (kafka 0.11.0.1 API)
+
 
 
 
 
 
 
-Developer Guide
+Developer Manual

 There is a quickstart example that provides how to run a stream processing program coded in the Kafka Streams library.

@@ -505,7 +505,7 @@
 A Kafka Streams application is typically running on many instances. The state that is locally available on any
 given instance is only a subset of the application's entire state. Querying the local stores on an instance will,
 by definition, only return data locally available on that particular instance.
-We explain how to access data in state stores that are not locally available in section Querying remote state stores (for the entire application).
+We explain how to access data in state stores that are not locally available in section Querying remote state stores (for the entire application).

@@ -536,7 +536,7 @@ This read-only constraint is important to guarantee that the underlying state stores


[08/10] kafka-site git commit: Update site for 0.11.0.1 release

2017-09-13 Thread damianguy
http://git-wip-us.apache.org/repos/asf/kafka-site/blob/a1278d06/0110/javadoc/org/apache/kafka/clients/admin/DeleteTopicsResult.html
--
diff --git 
a/0110/javadoc/org/apache/kafka/clients/admin/DeleteTopicsResult.html 
b/0110/javadoc/org/apache/kafka/clients/admin/DeleteTopicsResult.html
index f18f6aa..95ba1fc 100644
--- a/0110/javadoc/org/apache/kafka/clients/admin/DeleteTopicsResult.html
+++ b/0110/javadoc/org/apache/kafka/clients/admin/DeleteTopicsResult.html
@@ -2,15 +2,15 @@
 
 
 
-
-DeleteTopicsResult (kafka 0.11.0.0 API)
-
+
+DeleteTopicsResult (kafka 0.11.0.1 API)
+
 
 
 
 
 

http://git-wip-us.apache.org/repos/asf/kafka-site/blob/a1278d06/0110/javadoc/org/apache/kafka/clients/admin/DescribeAclsOptions.html
--
diff --git 
a/0110/javadoc/org/apache/kafka/clients/admin/DescribeAclsOptions.html 
b/0110/javadoc/org/apache/kafka/clients/admin/DescribeAclsOptions.html
index 494159c..9c9ef5a 100644
--- a/0110/javadoc/org/apache/kafka/clients/admin/DescribeAclsOptions.html
+++ b/0110/javadoc/org/apache/kafka/clients/admin/DescribeAclsOptions.html
@@ -2,15 +2,15 @@
 
 
 
-
-DescribeAclsOptions (kafka 0.11.0.0 API)
-
+
+DescribeAclsOptions (kafka 0.11.0.1 API)
+
 
 
 
 
 

http://git-wip-us.apache.org/repos/asf/kafka-site/blob/a1278d06/0110/javadoc/org/apache/kafka/clients/admin/DescribeAclsResult.html
--
diff --git 
a/0110/javadoc/org/apache/kafka/clients/admin/DescribeAclsResult.html 
b/0110/javadoc/org/apache/kafka/clients/admin/DescribeAclsResult.html
index 21afdd9..d291498 100644
--- a/0110/javadoc/org/apache/kafka/clients/admin/DescribeAclsResult.html
+++ b/0110/javadoc/org/apache/kafka/clients/admin/DescribeAclsResult.html
@@ -2,15 +2,15 @@
 
 
 
-
-DescribeAclsResult (kafka 0.11.0.0 API)
-
+
+DescribeAclsResult (kafka 0.11.0.1 API)
+
 
 
 
 
 

http://git-wip-us.apache.org/repos/asf/kafka-site/blob/a1278d06/0110/javadoc/org/apache/kafka/clients/admin/DescribeClusterOptions.html
--
diff --git 
a/0110/javadoc/org/apache/kafka/clients/admin/DescribeClusterOptions.html 
b/0110/javadoc/org/apache/kafka/clients/admin/DescribeClusterOptions.html
index b74122f..8b0ae02 100644
--- a/0110/javadoc/org/apache/kafka/clients/admin/DescribeClusterOptions.html
+++ b/0110/javadoc/org/apache/kafka/clients/admin/DescribeClusterOptions.html
@@ -2,15 +2,15 @@
 
 
 
-
-DescribeClusterOptions (kafka 0.11.0.0 API)
-
+
+DescribeClusterOptions (kafka 0.11.0.1 API)
+
 
 
 
 
 

http://git-wip-us.apache.org/repos/asf/kafka-site/blob/a1278d06/0110/javadoc/org/apache/kafka/clients/admin/DescribeClusterResult.html
--
diff --git 
a/0110/javadoc/org/apache/kafka/clients/admin/DescribeClusterResult.html 
b/0110/javadoc/org/apache/kafka/clients/admin/DescribeClusterResult.html
index abc4854..f16afeb 100644
--- a/0110/javadoc/org/apache/kafka/clients/admin/DescribeClusterResult.html
+++ b/0110/javadoc/org/apache/kafka/clients/admin/DescribeClusterResult.html
@@ -2,15 +2,15 @@
 
 
 
-
-DescribeClusterResult (kafka 0.11.0.0 API)
-
+
+DescribeClusterResult (kafka 0.11.0.1 API)
+
 
 
 
 
 

http://git-wip-us.apache.org/repos/asf/kafka-site/blob/a1278d06/0110/javadoc/org/apache/kafka/clients/admin/DescribeConfigsOptions.html
--
diff --git 
a/0110/javadoc/org/apache/kafka/clients/admin/DescribeConfigsOptions.html 
b/0110/javadoc/org/apache/kafka/clients/admin/DescribeConfigsOptions.html
index e066b3d..f0cac97 100644
--- a/0110/javadoc/org/apache/kafka/clients/admin/DescribeConfigsOptions.html
+++ 

[03/10] kafka-site git commit: Update site for 0.11.0.1 release

2017-09-13 Thread damianguy
http://git-wip-us.apache.org/repos/asf/kafka-site/blob/a1278d06/0110/javadoc/org/apache/kafka/streams/KafkaStreams.State.html
--
diff --git a/0110/javadoc/org/apache/kafka/streams/KafkaStreams.State.html 
b/0110/javadoc/org/apache/kafka/streams/KafkaStreams.State.html
index ea7daae..6e61ac0 100644
--- a/0110/javadoc/org/apache/kafka/streams/KafkaStreams.State.html
+++ b/0110/javadoc/org/apache/kafka/streams/KafkaStreams.State.html
@@ -2,15 +2,15 @@
 
 
 
-
-KafkaStreams.State (kafka 0.11.0.0 API)
-
+
+KafkaStreams.State (kafka 0.11.0.1 API)
+
 
 
 
 
 
@@ -121,9 +121,9 @@ extends Enum<KafkaStreams.State>
  |   +-++
  | |
  | v
- |   +-++
- +<- | Rebalancing  | <+
- |   +--+  |
+ |   +-++ <-+
+ +<- | Rebalancing  | --+
+ |   +--+ <+
  | |
  | |
  |   +--+  |
@@ -132,15 +132,28 @@ extends Enum<KafkaStreams.State>
  | |
  | v
  |   +-++
- +-> | Pending  |
- | Shutdown |
- +-++
-   |
-   v
- +-++
- | Not Running  |
+ +-> | Pending  |<+
+ |   | Shutdown | |
+ |   +-++ |
+ | |  |
+ | v  |
+ |   +-++ |
+ |   | Not Running  | |
+ |   +--+ |
+ ||
+ |   +--+ |
+ +-> | Error|-+
  +--+
- 
+
+
+ 
+ Note the following:
+ - Any state can go to PENDING_SHUTDOWN and subsequently NOT_RUNNING.
+ - It is theoretically possible for a thread to always be in the PARTITION_REVOKED state
+ (see the StreamThread state diagram), and hence it is possible for this instance to always
+ be in the REBALANCING state.
+ - Of special importance: if the global stream thread dies, or all stream threads die (or both),
+ then the instance will be in the ERROR state. The user will need to close it.
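
A hedged sketch of a state listener reacting to the ERROR state this diff documents; closing inline is a
simplification (production code would typically hand the close off to another thread):

    streams.setStateListener(new KafkaStreams.StateListener() {
        @Override
        public void onChange(final KafkaStreams.State newState, final KafkaStreams.State oldState) {
            if (newState == KafkaStreams.State.ERROR) {
                streams.close();  // per the note above, the user must close the instance
            }
        }
    });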
 
 
 
@@ -162,15 +175,18 @@ extends Enum<KafkaStreams.State>
 CREATED
 
 
-NOT_RUNNING
+ERROR
 
 
-PENDING_SHUTDOWN
+NOT_RUNNING
 
 
-REBALANCING
+PENDING_SHUTDOWN
 
 
+REBALANCING
+
+
 RUNNING
 
 
@@ -190,23 +206,19 @@ extends Enum<KafkaStreams.State>
 
 
 boolean
-isCreatedOrRunning()
-
-
-boolean
 isRunning()
 
-
+
 boolean
isValidTransition(KafkaStreams.State newState)
 
-
+
 static KafkaStreams.State
 valueOf(http://docs.oracle.com/javase/7/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">Stringname)
 Returns the enum constant of this type with the specified 
name.
 
 
-
+
 static KafkaStreams.State[]
 values()
 Returns an array containing the constants of this enum 
type, in
@@ -251,22 +263,22 @@ the order they are declared.
public static final KafkaStreams.State CREATED
 
 
-
+
 
 
 
 
-RUNNING
-public static finalKafkaStreams.State RUNNING
+REBALANCING
+public static finalKafkaStreams.State REBALANCING
 
 
-
+
 
 
 
 
-REBALANCING
-public static finalKafkaStreams.State REBALANCING
+RUNNING
+public static finalKafkaStreams.State RUNNING
 
 
 
@@ -281,12 +293,21 @@ the order they are declared.
 
 
 
-
+
 
 NOT_RUNNING
public static final KafkaStreams.State NOT_RUNNING
 
 
+
+
+
+
+
+ERROR
+public static final KafkaStreams.State ERROR
+
+
 
 
 
@@ -339,15 +360,6 @@ not permitted.)
public boolean isRunning()
 
 
-
-
-
-
-
-isCreatedOrRunning
-public boolean isCreatedOrRunning()
-
-
 
 
 

http://git-wip-us.apache.org/repos/asf/kafka-site/blob/a1278d06/0110/javadoc/org/apache/kafka/streams/KafkaStreams.StateListener.html
--
diff --git 
a/0110/javadoc/org/apache/kafka/streams/KafkaStreams.StateListener.html 
b/0110/javadoc/org/apache/kafka/streams/KafkaStreams.StateListener.html
index 69410be..3fdae67 100644
--- a/0110/javadoc/org/apache/kafka/streams/KafkaStreams.StateListener.html
+++ b/0110/javadoc/org/apache/kafka/streams/KafkaStreams.StateListener.html
@@ -2,15 +2,15 @@
 
 
 
-
-KafkaStreams.StateListener (kafka 0.11.0.0 API)
-
+
+KafkaStreams.StateListener (kafka 0.11.0.1 API)
+
 
 
 
 
 

http://git-wip-us.apache.org/repos/asf/kafka-site/blob/a1278d06/0110/javadoc/org/apache/kafka/common/errors/BrokerNotAvailableException.html
--
diff --git 
a/0110/javadoc/org/apache/kafka/common/errors/BrokerNotAvailableException.html 
b/0110/javadoc/org/apache/kafka/common/errors/BrokerNotAvailableException.html
index aa0af6f..94e6464 100644
--- 
a/0110/javadoc/org/apache/kafka/common/errors/BrokerNotAvailableException.html
+++ 
b/0110/javadoc/org/apache/kafka/common/errors/BrokerNotAvailableException.html
@@ -2,15 +2,15 @@
 
 
 
-
-BrokerNotAvailableException (kafka 0.11.0.0 API)
-
+
+BrokerNotAvailableException (kafka 0.11.0.1 API)
+
 
 
 
 
 

http://git-wip-us.apache.org/repos/asf/kafka-site/blob/a1278d06/0110/javadoc/org/apache/kafka/common/errors/ClusterAuthorizationException.html
--
diff --git 
a/0110/javadoc/org/apache/kafka/common/errors/ClusterAuthorizationException.html
 
b/0110/javadoc/org/apache/kafka/common/errors/ClusterAuthorizationException.html
index 7fc9d15..54d11bf 100644
--- 
a/0110/javadoc/org/apache/kafka/common/errors/ClusterAuthorizationException.html
+++ 
b/0110/javadoc/org/apache/kafka/common/errors/ClusterAuthorizationException.html
@@ -2,15 +2,15 @@
 
 
 
-
-ClusterAuthorizationException (kafka 0.11.0.0 API)
-
+
+ClusterAuthorizationException (kafka 0.11.0.1 API)
+
 
 
 
 
 

http://git-wip-us.apache.org/repos/asf/kafka-site/blob/a1278d06/0110/javadoc/org/apache/kafka/common/errors/ConcurrentTransactionsException.html
--
diff --git 
a/0110/javadoc/org/apache/kafka/common/errors/ConcurrentTransactionsException.html
 
b/0110/javadoc/org/apache/kafka/common/errors/ConcurrentTransactionsException.html
index 5a62be6..dc2efdd 100644
--- 
a/0110/javadoc/org/apache/kafka/common/errors/ConcurrentTransactionsException.html
+++ 
b/0110/javadoc/org/apache/kafka/common/errors/ConcurrentTransactionsException.html
@@ -2,15 +2,15 @@
 
 
 
-
-ConcurrentTransactionsException (kafka 0.11.0.0 API)
-
+
+ConcurrentTransactionsException (kafka 0.11.0.1 API)
+
 
 
 
 
 

http://git-wip-us.apache.org/repos/asf/kafka-site/blob/a1278d06/0110/javadoc/org/apache/kafka/common/errors/ControllerMovedException.html
--
diff --git 
a/0110/javadoc/org/apache/kafka/common/errors/ControllerMovedException.html 
b/0110/javadoc/org/apache/kafka/common/errors/ControllerMovedException.html
index 8331ea5..729559b 100644
--- a/0110/javadoc/org/apache/kafka/common/errors/ControllerMovedException.html
+++ b/0110/javadoc/org/apache/kafka/common/errors/ControllerMovedException.html
@@ -2,15 +2,15 @@
 
 
 
-
-ControllerMovedException (kafka 0.11.0.0 API)
-
+
+ControllerMovedException (kafka 0.11.0.1 API)
+
 
 
 
 
 

http://git-wip-us.apache.org/repos/asf/kafka-site/blob/a1278d06/0110/javadoc/org/apache/kafka/common/errors/CoordinatorLoadInProgressException.html

[10/10] kafka-site git commit: merge trunk

2017-09-13 Thread damianguy
merge trunk


Project: http://git-wip-us.apache.org/repos/asf/kafka-site/repo
Commit: http://git-wip-us.apache.org/repos/asf/kafka-site/commit/06755252
Tree: http://git-wip-us.apache.org/repos/asf/kafka-site/tree/06755252
Diff: http://git-wip-us.apache.org/repos/asf/kafka-site/diff/06755252

Branch: refs/heads/asf-site
Commit: 0675525230f68c8657322d450ac400d569b7e550
Parents: a1278d0 8c85a0e
Author: Damian Guy 
Authored: Wed Sep 13 13:23:42 2017 +0100
Committer: Damian Guy 
Committed: Wed Sep 13 13:23:42 2017 +0100

--
 0102/upgrade.html   |   1 -
 0110/toc.html   |   2 +-
 KEYS|  57 ++
 coding-guide.html   |  41 
 committers.html |   4 +-
 css/styles.css  | 291 +--
 documentation/streams/introduction.html |   2 -
 documentation/streams/quickstart.html   |   2 +
 downloads.html  |  60 ++
 images/icons/check.png  | Bin 0 -> 642 bytes
 images/icons/slash--white.png   | Bin 0 -> 469 bytes
 images/icons/slash.png  | Bin 0 -> 457 bytes
 images/powered-by/CJ_Affiliate.png  | Bin 0 -> 131412 bytes
 images/powered-by/porto-seguro.png  | Bin 0 -> 35125 bytes
 images/powered-by/robotCircle.png   | Bin 0 -> 67349 bytes
 includes/_nav.htm   |   4 +-
 index.html  |  15 +-
 powered-by.html |  15 ++
 18 files changed, 466 insertions(+), 28 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/kafka-site/blob/06755252/0110/toc.html
--
diff --cc 0110/toc.html
index e7d939e,0f6fad0..5704768
--- a/0110/toc.html
+++ b/0110/toc.html
@@@ -141,14 -140,13 +141,14 @@@
  8.3 Connector Development 
Guide
  
  
- 9. Kafka Streams
+ 9. Kafka Streams
  
 -9.1 Play with 
a Streams Application
 -9.2 
Developer Manual
 -9.3 Core 
Concepts
 -9.4 
Architecture
 -9.5 
Upgrade Guide
 +9.1 Play with a Streams 
Application
 +9.2 
Write your own Streams Applications
 +9.3 Developer 
Manual
 +9.4 Core 
Concepts
 +9.5 Architecture
 +9.6 Upgrade 
Guide
  
  
  

http://git-wip-us.apache.org/repos/asf/kafka-site/blob/06755252/downloads.html
--



kafka git commit: KAFKA-5655; materialized count, aggregate, reduce to KGroupedTable

2017-09-12 Thread damianguy
Repository: kafka
Updated Branches:
  refs/heads/trunk 08063f50a -> 8bd2a68b5


KAFKA-5655; materialized count, aggregate, reduce to KGroupedTable

Add overloads of `count`, `aggregate`, `reduce` using `Materialized` to 
`KGroupedTable`
deprecate other overloads

Author: Damian Guy 

Reviewers: Matthias J. Sax , Bill Bejeck 
, Guozhang Wang 

Closes #3829 from dguy/kafka-5655
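
A hedged sketch of the new count(Materialized) overload in use, assuming trunk's StreamsBuilder API and
hypothetical topic and store names:

    import org.apache.kafka.common.utils.Bytes;
    import org.apache.kafka.streams.KeyValue;
    import org.apache.kafka.streams.StreamsBuilder;
    import org.apache.kafka.streams.kstream.KTable;
    import org.apache.kafka.streams.kstream.Materialized;
    import org.apache.kafka.streams.state.KeyValueStore;

    final StreamsBuilder builder = new StreamsBuilder();
    final KTable<String, String> table = builder.table("input-topic");
    final KTable<String, Long> counts = table
        .groupBy(KeyValue::pair)  // re-group on the existing key and value
        .count(Materialized.<String, Long, KeyValueStore<Bytes, byte[]>>as("counts-store"));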


Project: http://git-wip-us.apache.org/repos/asf/kafka/repo
Commit: http://git-wip-us.apache.org/repos/asf/kafka/commit/8bd2a68b
Tree: http://git-wip-us.apache.org/repos/asf/kafka/tree/8bd2a68b
Diff: http://git-wip-us.apache.org/repos/asf/kafka/diff/8bd2a68b

Branch: refs/heads/trunk
Commit: 8bd2a68b5020f0bf8f79cbe59676d649eebf170f
Parents: 08063f5
Author: Damian Guy 
Authored: Tue Sep 12 17:20:43 2017 +0100
Committer: Damian Guy 
Committed: Tue Sep 12 17:20:43 2017 +0100

--
 .../kafka/streams/kstream/KGroupedTable.java| 204 +++
 .../kafka/streams/kstream/Materialized.java |  12 +-
 .../kstream/internals/KGroupedTableImpl.java| 134 +---
 .../kafka/streams/kstream/MaterializedTest.java |  54 +
 .../internals/KGroupedTableImplTest.java| 137 -
 .../kstream/internals/KTableAggregateTest.java  |   1 +
 6 files changed, 509 insertions(+), 33 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/kafka/blob/8bd2a68b/streams/src/main/java/org/apache/kafka/streams/kstream/KGroupedTable.java
--
diff --git 
a/streams/src/main/java/org/apache/kafka/streams/kstream/KGroupedTable.java 
b/streams/src/main/java/org/apache/kafka/streams/kstream/KGroupedTable.java
index bf0df55..f854320 100644
--- a/streams/src/main/java/org/apache/kafka/streams/kstream/KGroupedTable.java
+++ b/streams/src/main/java/org/apache/kafka/streams/kstream/KGroupedTable.java
@@ -18,6 +18,7 @@ package org.apache.kafka.streams.kstream;
 
 import org.apache.kafka.common.annotation.InterfaceStability;
 import org.apache.kafka.common.serialization.Serde;
+import org.apache.kafka.common.utils.Bytes;
 import org.apache.kafka.streams.KafkaStreams;
 import org.apache.kafka.streams.StreamsConfig;
 import org.apache.kafka.streams.processor.StateStoreSupplier;
@@ -80,7 +81,9 @@ public interface KGroupedTable<K, V> {
  * alphanumerics, '.', '_' and '-'. If {@code null} this is the equivalent 
of {@link KGroupedTable#count()}.
  * @return a {@link KTable} that contains "update" records with unmodified 
keys and {@link Long} values that
  * represent the latest (rolling) count (i.e., number of records) for each 
key
+ * @deprecated use {@link #count(Materialized)}
  */
+@Deprecated
KTable<K, Long> count(final String queryableStoreName);
 
 /**
@@ -98,6 +101,47 @@ public interface KGroupedTable {
  * {@link StreamsConfig#CACHE_MAX_BYTES_BUFFERING_CONFIG cache size}, and
 * {@link StreamsConfig#COMMIT_INTERVAL_MS_CONFIG commit interval}.
  * 
+ * To query the local {@link KeyValueStore} it must be obtained via
+ * {@link KafkaStreams#store(String, QueryableStoreType) 
KafkaStreams#store(...)}:
+ * {@code
+ * KafkaStreams streams = ... // counting words
+ * ReadOnlyKeyValueStore<String, Long> localStore = streams.store(queryableStoreName, QueryableStoreTypes.<String, Long>keyValueStore());
+ * String key = "some-word";
+ * Long countForWord = localStore.get(key); // key must be local 
(application state is shared over all running Kafka Streams instances)
+ * }
+ * For non-local keys, a custom RPC mechanism must be implemented using 
{@link KafkaStreams#allMetadata()} to
+ * query the value of the key on a parallel running instance of your Kafka 
Streams application.
+ * 
+ * For failure and recovery the store will be backed by an internal 
changelog topic that will be created in Kafka.
+ * The changelog topic will be named 
"${applicationId}-${queryableStoreName}-changelog", where "applicationId" is
+ * user-specified in {@link StreamsConfig} via parameter
+ * {@link StreamsConfig#APPLICATION_ID_CONFIG APPLICATION_ID_CONFIG}, 
"queryableStoreName" is the
+ * provided {@code queryableStoreName}, and "-changelog" is a fixed suffix.
+ * The store name must be a valid Kafka topic name and cannot contain 
characters other than ASCII alphanumerics,
+ * '.', '_' and '-'.
+ * You can retrieve all generated internal topic names via {@link 
KafkaStreams#toString()}.
+ *
+ * @param materialized the instance of {@link Materialized} used to 
materialize the state store. Cannot be {@code null}
+ * @return a {@link KTable} that contains "update" records with unmodified 
keys and 

kafka git commit: KAFKA-5653: add join overloads to KTable

2017-09-12 Thread damianguy
Repository: kafka
Updated Branches:
  refs/heads/trunk e1491d4a0 -> 08063f50a


KAFKA-5653: add join overloads to KTable

Add `join`, `leftJoin`, `outerJoin` overloads that use `Materialized` to 
`KTable`

Author: Damian Guy 

Reviewers: Bill Bejeck , Matthias J. Sax 
, Guozhang Wang 

Closes #3826 from dguy/kafka-5653
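
A hedged sketch of the new Materialized variant of join; the input tables and the store name are assumptions:

    import org.apache.kafka.common.utils.Bytes;
    import org.apache.kafka.streams.kstream.KTable;
    import org.apache.kafka.streams.kstream.Materialized;
    import org.apache.kafka.streams.state.KeyValueStore;

    // `left` and `right` are KTable<String, String> instances built elsewhere.
    final KTable<String, String> joined = left.join(
        right,
        (leftValue, rightValue) -> leftValue + "," + rightValue,
        Materialized.<String, String, KeyValueStore<Bytes, byte[]>>as("joined-store"));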


Project: http://git-wip-us.apache.org/repos/asf/kafka/repo
Commit: http://git-wip-us.apache.org/repos/asf/kafka/commit/08063f50
Tree: http://git-wip-us.apache.org/repos/asf/kafka/tree/08063f50
Diff: http://git-wip-us.apache.org/repos/asf/kafka/diff/08063f50

Branch: refs/heads/trunk
Commit: 08063f50a04fda3e40c6060a432a97f49bb68c8c
Parents: e1491d4
Author: Damian Guy 
Authored: Tue Sep 12 16:01:19 2017 +0100
Committer: Damian Guy 
Committed: Tue Sep 12 16:01:19 2017 +0100

--
 .../apache/kafka/streams/kstream/KTable.java| 290 +--
 .../streams/kstream/internals/KTableImpl.java   |  88 +-
 .../KTableKTableJoinIntegrationTest.java|  33 ++-
 .../kstream/internals/KTableImplTest.java   |  24 +-
 4 files changed, 400 insertions(+), 35 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/kafka/blob/08063f50/streams/src/main/java/org/apache/kafka/streams/kstream/KTable.java
--
diff --git a/streams/src/main/java/org/apache/kafka/streams/kstream/KTable.java 
b/streams/src/main/java/org/apache/kafka/streams/kstream/KTable.java
index 2571ac1..6d1d85d 100644
--- a/streams/src/main/java/org/apache/kafka/streams/kstream/KTable.java
+++ b/streams/src/main/java/org/apache/kafka/streams/kstream/KTable.java
@@ -84,7 +84,7 @@ public interface KTable<K, V> {
  * have delete semantics.
  * Thus, for tombstones the provided filter predicate is not evaluated but 
the tombstone record is forwarded
  * directly if required (i.e., if there is anything to be deleted).
- * Furthermore, for each record that gets dropped (i.e., dot not satisfied 
the given predicate) a tombstone record
+ * Furthermore, for each record that gets dropped (i.e., does not satisfy the given predicate) a tombstone record
  * is forwarded.
  *
  * @param predicate a filter {@link Predicate} that is applied to each 
record
@@ -106,7 +106,7 @@ public interface KTable<K, V> {
  * have delete semantics.
  * Thus, for tombstones the provided filter predicate is not evaluated but 
the tombstone record is forwarded
  * directly if required (i.e., if there is anything to be deleted).
- * Furthermore, for each record that gets dropped (i.e., dot not satisfied 
the given predicate) a tombstone record
+ * Furthermore, for each record that gets dropped (i.e., does not satisfy the given predicate) a tombstone record
  * is forwarded.
  * 
  * To query the local {@link KeyValueStore} it must be obtained via
@@ -124,7 +124,7 @@ public interface KTable<K, V> {
  *
  * @param predicate a filter {@link Predicate} that is applied to each 
record
  * @param materialized  a {@link Materialized} that describes how the 
{@link StateStore} for the resulting {@code KTable}
- *  should be materialized
+ *  should be materialized. Cannot be {@code null}
  * @return a {@code KTable} that contains only those records that satisfy 
the given predicate
  * @see #filterNot(Predicate, Materialized)
  */
@@ -144,7 +144,7 @@ public interface KTable<K, V> {
  * have delete semantics.
  * Thus, for tombstones the provided filter predicate is not evaluated but 
the tombstone record is forwarded
  * directly if required (i.e., if there is anything to be deleted).
- * Furthermore, for each record that gets dropped (i.e., dot not satisfied 
the given predicate) a tombstone record
+ * Furthermore, for each record that gets dropped (i.e., does not satisfy the given predicate) a tombstone record
  * is forwarded.
  * 
  * To query the local {@link KeyValueStore} it must be obtained via
@@ -184,7 +184,7 @@ public interface KTable<K, V> {
  * have delete semantics.
  * Thus, for tombstones the provided filter predicate is not evaluated but 
the tombstone record is forwarded
  * directly if required (i.e., if there is anything to be deleted).
- * Furthermore, for each record that gets dropped (i.e., dot not satisfied 
the given predicate) a tombstone record
+ * Furthermore, for each record that gets dropped (i.e., does not satisfy the given predicate) a tombstone record
  * is forwarded.
  * 
  * To query the local {@link KeyValueStore} it must be obtained via
@@ -260,7 +260,7 @@ public interface KTable<K, V> {
  * 
  

svn commit: r21571 - in /dev/kafka: ./ 0.11.0.1/

2017-09-12 Thread damianguy
Author: damianguy
Date: Tue Sep 12 13:38:14 2017
New Revision: 21571

Log:
Release 0.11.0.1

Added:
dev/kafka/0.11.0.1/
dev/kafka/0.11.0.1/RELEASE_NOTES.html
dev/kafka/0.11.0.1/RELEASE_NOTES.html.asc
dev/kafka/0.11.0.1/RELEASE_NOTES.html.md5
dev/kafka/0.11.0.1/RELEASE_NOTES.html.sha1
dev/kafka/0.11.0.1/RELEASE_NOTES.html.sha2
dev/kafka/0.11.0.1/kafka-0.11.0.1-src.tgz   (with props)
dev/kafka/0.11.0.1/kafka-0.11.0.1-src.tgz.asc
dev/kafka/0.11.0.1/kafka-0.11.0.1-src.tgz.md5
dev/kafka/0.11.0.1/kafka-0.11.0.1-src.tgz.sha1
dev/kafka/0.11.0.1/kafka-0.11.0.1-src.tgz.sha2
dev/kafka/0.11.0.1/kafka_2.11-0.11.0.1-site-docs.tgz   (with props)
dev/kafka/0.11.0.1/kafka_2.11-0.11.0.1-site-docs.tgz.asc
dev/kafka/0.11.0.1/kafka_2.11-0.11.0.1-site-docs.tgz.md5
dev/kafka/0.11.0.1/kafka_2.11-0.11.0.1-site-docs.tgz.sha1
dev/kafka/0.11.0.1/kafka_2.11-0.11.0.1-site-docs.tgz.sha2
dev/kafka/0.11.0.1/kafka_2.11-0.11.0.1.tgz   (with props)
dev/kafka/0.11.0.1/kafka_2.11-0.11.0.1.tgz.asc
dev/kafka/0.11.0.1/kafka_2.11-0.11.0.1.tgz.md5
dev/kafka/0.11.0.1/kafka_2.11-0.11.0.1.tgz.sha1
dev/kafka/0.11.0.1/kafka_2.11-0.11.0.1.tgz.sha2
dev/kafka/0.11.0.1/kafka_2.12-0.11.0.1-site-docs.tgz   (with props)
dev/kafka/0.11.0.1/kafka_2.12-0.11.0.1-site-docs.tgz.asc
dev/kafka/0.11.0.1/kafka_2.12-0.11.0.1-site-docs.tgz.md5
dev/kafka/0.11.0.1/kafka_2.12-0.11.0.1-site-docs.tgz.sha1
dev/kafka/0.11.0.1/kafka_2.12-0.11.0.1-site-docs.tgz.sha2
dev/kafka/0.11.0.1/kafka_2.12-0.11.0.1.tgz   (with props)
dev/kafka/0.11.0.1/kafka_2.12-0.11.0.1.tgz.asc
dev/kafka/0.11.0.1/kafka_2.12-0.11.0.1.tgz.md5
dev/kafka/0.11.0.1/kafka_2.12-0.11.0.1.tgz.sha1
dev/kafka/0.11.0.1/kafka_2.12-0.11.0.1.tgz.sha2
Modified:
dev/kafka/KEYS

Added: dev/kafka/0.11.0.1/RELEASE_NOTES.html
==
--- dev/kafka/0.11.0.1/RELEASE_NOTES.html (added)
+++ dev/kafka/0.11.0.1/RELEASE_NOTES.html Tue Sep 12 13:38:14 2017
@@ -0,0 +1,73 @@
+Release Notes - Kafka - Version 0.11.0.1
+Below is a summary of the JIRA issues addressed in the 0.11.0.1 release of Kafka. For full documentation of the
+release, a guide to get started, and information about the project, see the Kafka project site at
+http://kafka.apache.org/.
+
+Note about upgrades: Please carefully review the upgrade documentation
+(http://kafka.apache.org/0110/documentation.html#upgrade) for this release thoroughly before upgrading your
+cluster. The upgrade notes discuss any critical information about incompatibilities and breaking changes,
+performance changes, and any other changes that might impact your production deployment of Kafka.
+
+The documentation for the most recent release can be found at
+http://kafka.apache.org/documentation.html.
+
+Improvement
+
+KAFKA-5242 - add max_number_of_retries to exponential backoff strategy
+(https://issues.apache.org/jira/browse/KAFKA-5242)
+KAFKA-5410 - Fix taskClass() method name in Connector and flush() signature in SinkTask
+(https://issues.apache.org/jira/browse/KAFKA-5410)
+KAFKA-5485 - Streams should not suspend tasks twice
+(https://issues.apache.org/jira/browse/KAFKA-5485)
+
+Bug
+
+KAFKA-2105 - NullPointerException in client on MetadataRequest
+(https://issues.apache.org/jira/browse/KAFKA-2105)
+KAFKA-4669 - KafkaProducer.flush hangs when NetworkClient.handleCompletedReceives throws exception
+(https://issues.apache.org/jira/browse/KAFKA-4669)
+KAFKA-4856 - Calling KafkaProducer.close() from multiple threads may cause spurious error
+(https://issues.apache.org/jira/browse/KAFKA-4856)
+KAFKA-5152 - Kafka Streams keeps restoring state after shutdown is initiated during startup
+(https://issues.apache.org/jira/browse/KAFKA-5152)
+KAFKA-5167 - streams task gets stuck after re-balance due to LockException
+(https://issues.apache.org/jira/browse/KAFKA-5167)
+KAFKA-5417 - Clients get inconsistent connection states when SASL/SSL connection is marked CONNECTED and
+DISCONNECTED at the same time (https://issues.apache.org/jira/browse/KAFKA-5417)
+KAFKA-5431 - LogCleaner stopped due to org.apache.kafka.common.errors.CorruptRecordException
+(https://issues.apache.org/jira/browse/KAFKA-5431)
+KAFKA-5464 - StreamsKafkaClient should not use StreamsConfig.POLL_MS_CONFIG
+(https://issues.apache.org/jira/browse/KAFKA-5464)
+KAFKA-5484 - Refactor kafkatest docker support (https://issues.apache.org/jira/browse/KAFKA-5484)
+KAFKA-5506 - bin/kafka-consumer-groups.sh failing to query offsets (https://issues.apache.org/jira/browse/KAFKA-5506)
+KAFKA-5508 - Documentation for altering topics (https://issues.apache.org/jira/browse/KAFKA-5508)
+KAFKA-5512 - KafkaConsumer: High memory allocation rate when idle (https://issues.apache.org/jira/browse/KAFKA-5512)
+[https:

[2/2] kafka git commit: Bump version to 0.11.0.1

2017-09-12 Thread damianguy
Bump version to 0.11.0.1


Project: http://git-wip-us.apache.org/repos/asf/kafka/repo
Commit: http://git-wip-us.apache.org/repos/asf/kafka/commit/ba8483d2
Tree: http://git-wip-us.apache.org/repos/asf/kafka/tree/ba8483d2
Diff: http://git-wip-us.apache.org/repos/asf/kafka/diff/ba8483d2

Branch: refs/heads/0.11.0
Commit: ba8483d27798b784f5bf1936dfbd5f3363ef1619
Parents: b53b7fc
Author: Damian Guy 
Authored: Tue Sep 5 19:18:40 2017 +0100
Committer: Damian Guy 
Committed: Tue Sep 12 14:07:42 2017 +0100

--
 gradle.properties   | 2 +-
 tests/kafkatest/__init__.py | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/kafka/blob/ba8483d2/gradle.properties
--
diff --git a/gradle.properties b/gradle.properties
index bea7e2e..e78025c 100644
--- a/gradle.properties
+++ b/gradle.properties
@@ -16,7 +16,7 @@
 group=org.apache.kafka
 # NOTE: When you change this version number, you should also make sure to 
update
 # the version numbers in tests/kafkatest/__init__.py and kafka-merge-pr.py.
-version=0.11.0.1-SNAPSHOT
+version=0.11.0.1
 scalaVersion=2.11.11
 task=build
 org.gradle.jvmargs=-XX:MaxPermSize=512m -Xmx1024m -Xss2m

http://git-wip-us.apache.org/repos/asf/kafka/blob/ba8483d2/tests/kafkatest/__init__.py
--
diff --git a/tests/kafkatest/__init__.py b/tests/kafkatest/__init__.py
index 9bee572..d4c6c8c 100644
--- a/tests/kafkatest/__init__.py
+++ b/tests/kafkatest/__init__.py
@@ -22,4 +22,4 @@
 # Instead, in development branches, the version should have a suffix of the 
form ".devN"
 #
 # For example, when Kafka is at version 0.9.0.0-SNAPSHOT, this should be 
something like "0.9.0.0.dev0"
-__version__ = '0.11.0.1.dev0'
+__version__ = '0.11.0.1'



kafka git commit: MINOR: refactor build method to extract methods from if statements

2017-09-12 Thread damianguy
Repository: kafka
Updated Branches:
  refs/heads/trunk a67140317 -> e1491d4a0


MINOR: refactor build method to extract methods from if statements

Author: Bill Bejeck 

Reviewers: Damian Guy 

Closes #3833 from bbejeck/MINOR_extract_methods_from_build


Project: http://git-wip-us.apache.org/repos/asf/kafka/repo
Commit: http://git-wip-us.apache.org/repos/asf/kafka/commit/e1491d4a
Tree: http://git-wip-us.apache.org/repos/asf/kafka/tree/e1491d4a
Diff: http://git-wip-us.apache.org/repos/asf/kafka/diff/e1491d4a

Branch: refs/heads/trunk
Commit: e1491d4a0463deaaa8de7e100dddc2edbc030abf
Parents: a671403
Author: Bill Bejeck 
Authored: Tue Sep 12 09:26:09 2017 +0100
Committer: Damian Guy 
Committed: Tue Sep 12 09:26:09 2017 +0100

--
 .../internals/InternalTopologyBuilder.java  | 116 ---
 1 file changed, 72 insertions(+), 44 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/kafka/blob/e1491d4a/streams/src/main/java/org/apache/kafka/streams/processor/internals/InternalTopologyBuilder.java
--
diff --git a/streams/src/main/java/org/apache/kafka/streams/processor/internals/InternalTopologyBuilder.java b/streams/src/main/java/org/apache/kafka/streams/processor/internals/InternalTopologyBuilder.java
index 437e9e5..193d0e1 100644
--- a/streams/src/main/java/org/apache/kafka/streams/processor/internals/InternalTopologyBuilder.java
+++ b/streams/src/main/java/org/apache/kafka/streams/processor/internals/InternalTopologyBuilder.java
@@ -898,54 +898,21 @@ public class InternalTopologyBuilder {
                 processorMap.put(node.name(), node);
 
                 if (factory instanceof ProcessorNodeFactory) {
-                    for (final String predecessor : ((ProcessorNodeFactory) factory).predecessors) {
-                        final ProcessorNode predecessorNode = processorMap.get(predecessor);
-                        predecessorNode.addChild(node);
-                    }
-                    for (final String stateStoreName : ((ProcessorNodeFactory) factory).stateStoreNames) {
-                        if (!stateStoreMap.containsKey(stateStoreName)) {
-                            if (stateFactories.containsKey(stateStoreName)) {
-                                final StateStoreFactory stateStoreFactory = stateFactories.get(stateStoreName);
-
-                                // remember the changelog topic if this state store is change-logging enabled
-                                if (stateStoreFactory.loggingEnabled() && !storeToChangelogTopic.containsKey(stateStoreName)) {
-                                    final String changelogTopic = ProcessorStateManager.storeChangelogTopic(applicationId, stateStoreName);
-                                    storeToChangelogTopic.put(stateStoreName, changelogTopic);
-                                }
-                                stateStoreMap.put(stateStoreName, stateStoreFactory.build());
-                            } else {
-                                stateStoreMap.put(stateStoreName, globalStateStores.get(stateStoreName));
-                            }
-
+                    buildProcessorNode(processorMap,
+                                       stateStoreMap,
+                                       (ProcessorNodeFactory) factory,
+                                       node);
 
-                        }
-                    }
                 } else if (factory instanceof SourceNodeFactory) {
-                    final SourceNodeFactory sourceNodeFactory = (SourceNodeFactory) factory;
-                    final List<String> topics = (sourceNodeFactory.pattern != null) ?
-                            sourceNodeFactory.getTopics(subscriptionUpdates.getUpdates()) :
-                            sourceNodeFactory.topics;
+                    buildSourceNode(topicSourceMap,
+                                    (SourceNodeFactory) factory,
+                                    (SourceNode) node);
 
-                    for (final String topic : topics) {
-                        if (internalTopicNames.contains(topic)) {
-                            // prefix the internal topic name with the application id
-                            topicSourceMap.put(decorateTopic(topic), (SourceNode) node);
-                        } else {
-                            topicSourceMap.put(topic, (SourceNode) node);
-                        }
-                    }
                 } else if (factory instanceof SinkNodeFactory) {
-                    final SinkNodeFactory sinkNodeFactory = (SinkNodeFactory) factory;
-
-                    for (final String predecessor : sinkNodeFactory.predecessors) {
-

kafka git commit: MINOR: update processor topology test driver

2017-09-12 Thread damianguy
Repository: kafka
Updated Branches:
  refs/heads/trunk 439050816 -> a67140317


MINOR: update processor topology test driver

Author: Bill Bejeck 

Reviewers: Matthias J. Sax , Guozhang Wang 
, Damian Guy 

Closes #3828 from bbejeck/MINOR_update_processor_topology_test_driver
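
For context, a minimal usage sketch of what the new constructor enables (the driver lives in the streams test artifact; the topology, topic names and serializers below are illustrative, not taken from the patch):

    import java.util.Properties;
    import org.apache.kafka.common.serialization.StringSerializer;
    import org.apache.kafka.streams.StreamsConfig;
    import org.apache.kafka.streams.Topology;
    import org.apache.kafka.test.ProcessorTopologyTestDriver;

    public class DriverExample {
        public static void main(final String[] args) {
            // build a trivial pass-through topology
            final Topology topology = new Topology();
            topology.addSource("source", "input-topic");
            topology.addSink("sink", "output-topic", "source");

            final Properties props = new Properties();
            props.put(StreamsConfig.APPLICATION_ID_CONFIG, "driver-example");
            props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // never contacted by the driver

            // the new constructor accepts a Topology directly, so tests no
            // longer need to reach into the internal topology builder themselves
            final ProcessorTopologyTestDriver driver =
                new ProcessorTopologyTestDriver(new StreamsConfig(props), topology);
            driver.process("input-topic", "key", "value", new StringSerializer(), new StringSerializer());
        }
    }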


Project: http://git-wip-us.apache.org/repos/asf/kafka/repo
Commit: http://git-wip-us.apache.org/repos/asf/kafka/commit/a6714031
Tree: http://git-wip-us.apache.org/repos/asf/kafka/tree/a6714031
Diff: http://git-wip-us.apache.org/repos/asf/kafka/diff/a6714031

Branch: refs/heads/trunk
Commit: a67140317a644034e91ee596ab22bfb55adde1e0
Parents: 4390508
Author: Bill Bejeck 
Authored: Tue Sep 12 09:23:28 2017 +0100
Committer: Damian Guy 
Committed: Tue Sep 12 09:23:28 2017 +0100

--
 .../kafka/streams/InternalTopologyAccessor.java | 32 
 .../kafka/test/ProcessorTopologyTestDriver.java | 13 
 2 files changed, 45 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/kafka/blob/a6714031/streams/src/test/java/org/apache/kafka/streams/InternalTopologyAccessor.java
--
diff --git a/streams/src/test/java/org/apache/kafka/streams/InternalTopologyAccessor.java b/streams/src/test/java/org/apache/kafka/streams/InternalTopologyAccessor.java
new file mode 100644
index 0000000..a6144f2
--- /dev/null
+++ b/streams/src/test/java/org/apache/kafka/streams/InternalTopologyAccessor.java
@@ -0,0 +1,32 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.kafka.streams;
+
+import org.apache.kafka.streams.processor.internals.InternalTopologyBuilder;
+
+
+/**
+ * This class is meant for testing purposes only and allows the testing of
+ * topologies by using the {@link org.apache.kafka.test.ProcessorTopologyTestDriver}
+ */
+public class InternalTopologyAccessor {
+
+    public static InternalTopologyBuilder getInternalTopologyBuilder(Topology topology) {
+        return topology.internalTopologyBuilder;
+    }
+}

http://git-wip-us.apache.org/repos/asf/kafka/blob/a6714031/streams/src/test/java/org/apache/kafka/test/ProcessorTopologyTestDriver.java
--
diff --git a/streams/src/test/java/org/apache/kafka/test/ProcessorTopologyTestDriver.java b/streams/src/test/java/org/apache/kafka/test/ProcessorTopologyTestDriver.java
index b2dbeb5..148511a 100644
--- a/streams/src/test/java/org/apache/kafka/test/ProcessorTopologyTestDriver.java
+++ b/streams/src/test/java/org/apache/kafka/test/ProcessorTopologyTestDriver.java
@@ -31,6 +31,7 @@ import org.apache.kafka.common.serialization.Deserializer;
 import org.apache.kafka.common.serialization.Serializer;
 import org.apache.kafka.common.utils.MockTime;
 import org.apache.kafka.common.utils.Time;
+import org.apache.kafka.streams.InternalTopologyAccessor;
 import org.apache.kafka.streams.StreamsConfig;
 import org.apache.kafka.streams.StreamsMetrics;
 import org.apache.kafka.streams.Topology;
@@ -157,6 +158,18 @@ public class ProcessorTopologyTestDriver {
 private StreamTask task;
 private GlobalStateUpdateTask globalStateTask;
 
+
+    /**
+     * Create a new test driver instance
+     * @param config the stream configuration for the topology
+     * @param topology the {@link Topology} whose {@link InternalTopologyBuilder} will
+     *                 be used to create the topology instance.
+     */
+    public ProcessorTopologyTestDriver(final StreamsConfig config,
+                                       final Topology topology) {
+        this(config, InternalTopologyAccessor.getInternalTopologyBuilder(topology));
+    }
+
 /**
  * Create a new test driver instance.
  * @param config the stream configuration for the topology



kafka git commit: KAFKA-5816; [FOLLOW UP] create ProducedInternal class

2017-09-11 Thread damianguy
Repository: kafka
Updated Branches:
  refs/heads/trunk c5464edbb -> 779714c08


KAFKA-5816; [FOLLOW UP] create ProducedInternal class

Create `ProducedInternal` and remove getters from `Produced`

Author: Damian Guy 

Reviewers: Bill Bejeck , Matthias J. Sax 
, Guozhang Wang 

Closes #3810 from dguy/kafka-5816-follow-up
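
A rough sketch of the pattern this commit applies (simplified stand-ins, not the actual Kafka classes): the public config object keeps its fields protected and getter-free, while a DSL-internal subclass re-exposes them.

    import org.apache.kafka.common.serialization.Serde;

    // public-facing config object: no getters in the user API
    class Produced<K, V> {
        protected Serde<K> keySerde;
        protected Serde<V> valueSerde;

        protected Produced(final Produced<K, V> produced) {
            this.keySerde = produced.keySerde;
            this.valueSerde = produced.valueSerde;
        }
    }

    // internal wrapper used by the DSL implementation to read the config
    class ProducedInternal<K, V> extends Produced<K, V> {
        ProducedInternal(final Produced<K, V> produced) {
            super(produced);
        }

        Serde<K> keySerde() {
            return keySerde;
        }

        Serde<V> valueSerde() {
            return valueSerde;
        }
    }

This keeps the public surface of Produced minimal while internals such as KStreamImpl wrap the user-supplied instance once and read from the wrapper.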


Project: http://git-wip-us.apache.org/repos/asf/kafka/repo
Commit: http://git-wip-us.apache.org/repos/asf/kafka/commit/779714c0
Tree: http://git-wip-us.apache.org/repos/asf/kafka/tree/779714c0
Diff: http://git-wip-us.apache.org/repos/asf/kafka/diff/779714c0

Branch: refs/heads/trunk
Commit: 779714c08bc16fcdd6fe7c39e92a7f73ebebdb71
Parents: c5464ed
Author: Damian Guy 
Authored: Mon Sep 11 12:00:54 2017 +0100
Committer: Damian Guy 
Committed: Mon Sep 11 12:00:54 2017 +0100

--
 .../apache/kafka/streams/kstream/Produced.java  | 24 +---
 .../streams/kstream/internals/KStreamImpl.java  | 12 --
 .../kstream/internals/ProducedInternal.java | 39 
 3 files changed, 57 insertions(+), 18 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/kafka/blob/779714c0/streams/src/main/java/org/apache/kafka/streams/kstream/Produced.java
--
diff --git a/streams/src/main/java/org/apache/kafka/streams/kstream/Produced.java b/streams/src/main/java/org/apache/kafka/streams/kstream/Produced.java
index 488bd15..b2513ea 100644
--- a/streams/src/main/java/org/apache/kafka/streams/kstream/Produced.java
+++ b/streams/src/main/java/org/apache/kafka/streams/kstream/Produced.java
@@ -30,9 +30,9 @@ import org.apache.kafka.streams.processor.StreamPartitioner;
  */
 public class Produced<K, V> {
 
-    private Serde<K> keySerde;
-    private Serde<V> valueSerde;
-    private StreamPartitioner<? super K, ? super V> partitioner;
+    protected Serde<K> keySerde;
+    protected Serde<V> valueSerde;
+    protected StreamPartitioner<? super K, ? super V> partitioner;
 
     private Produced(final Serde<K> keySerde,
                      final Serde<V> valueSerde,
@@ -42,6 +42,12 @@ public class Produced<K, V> {
         this.partitioner = partitioner;
     }
 
+    protected Produced(final Produced<K, V> produced) {
+        this.keySerde = produced.keySerde;
+        this.valueSerde = produced.valueSerde;
+        this.partitioner = produced.partitioner;
+    }
+
 /**
  * Create a Produced instance with provided keySerde and valueSerde.
  * @param keySerde  Serde to use for serializing the key
@@ -148,16 +154,4 @@ public class Produced<K, V> {
         this.keySerde = keySerde;
         return this;
     }
-
-    public Serde<K> keySerde() {
-        return keySerde;
-    }
-
-    public Serde<V> valueSerde() {
-        return valueSerde;
-    }
-
-    public StreamPartitioner<? super K, ? super V> streamPartitioner() {
-        return partitioner;
-    }
 }

http://git-wip-us.apache.org/repos/asf/kafka/blob/779714c0/streams/src/main/java/org/apache/kafka/streams/kstream/internals/KStreamImpl.java
--
diff --git a/streams/src/main/java/org/apache/kafka/streams/kstream/internals/KStreamImpl.java b/streams/src/main/java/org/apache/kafka/streams/kstream/internals/KStreamImpl.java
index 7adc426..41da536 100644
--- a/streams/src/main/java/org/apache/kafka/streams/kstream/internals/KStreamImpl.java
+++ b/streams/src/main/java/org/apache/kafka/streams/kstream/internals/KStreamImpl.java
@@ -378,10 +378,11 @@ public class KStreamImpl<K, V> extends AbstractStream<K> implements KStream<K, V> {
     public KStream<K, V> through(final String topic, final Produced<K, V> produced) {
-        to(topic, produced);
+        final ProducedInternal<K, V> producedInternal = new ProducedInternal<>(produced);
+        to(topic, producedInternal);
         return builder.stream(Collections.singleton(topic),
-                              new ConsumedInternal<>(produced.keySerde(),
-                                                     produced.valueSerde(),
+                              new ConsumedInternal<>(producedInternal.keySerde(),
+                                                     producedInternal.valueSerde(),
                                                      new FailOnInvalidTimestamp(),
                                                      null));
     }
@@ -455,6 +456,11 @@ public class KStreamImpl<K, V> extends AbstractStream<K> implements KStream<K, V> {
     public void to(final String topic, final Produced<K, V> produced) {
         Objects.requireNonNull(topic, "topic can't be null");
         Objects.requireNonNull(produced, "Produced can't be null");
+        to(topic, new ProducedInternal<>(produced));
+
+    }
+
+    private void to(final String topic, final ProducedInternal

[3/5] kafka git commit: KAFKA-5531; throw concrete exceptions in streams tests

2017-09-11 Thread damianguy
http://git-wip-us.apache.org/repos/asf/kafka/blob/c5464edb/streams/src/test/java/org/apache/kafka/streams/processor/internals/ProcessorStateManagerTest.java
--
diff --git a/streams/src/test/java/org/apache/kafka/streams/processor/internals/ProcessorStateManagerTest.java b/streams/src/test/java/org/apache/kafka/streams/processor/internals/ProcessorStateManagerTest.java
index 8aedf36..fbf45b3 100644
--- a/streams/src/test/java/org/apache/kafka/streams/processor/internals/ProcessorStateManagerTest.java
+++ b/streams/src/test/java/org/apache/kafka/streams/processor/internals/ProcessorStateManagerTest.java
@@ -312,7 +312,7 @@ public class ProcessorStateManagerTest {
 }
 
 @Test
-public void shouldRegisterStoreWithoutLoggingEnabledAndNotBackedByATopic() 
throws Exception {
+public void shouldRegisterStoreWithoutLoggingEnabledAndNotBackedByATopic() 
throws IOException {
 final ProcessorStateManager stateMgr = new ProcessorStateManager(
 new TaskId(0, 1),
 noPartitions,
@@ -326,7 +326,7 @@ public class ProcessorStateManagerTest {
 }
 
 @Test
-public void shouldNotChangeOffsetsIfAckedOffsetsIsNull() throws Exception {
+public void shouldNotChangeOffsetsIfAckedOffsetsIsNull() throws 
IOException {
 final Map<TopicPartition, Long> offsets = Collections.singletonMap(persistentStorePartition, 99L);
 checkpoint.write(offsets);
 
@@ -346,7 +346,7 @@ public class ProcessorStateManagerTest {
 }
 
 @Test
-public void shouldWriteCheckpointForPersistentLogEnabledStore() throws 
Exception {
+public void shouldWriteCheckpointForPersistentLogEnabledStore() throws 
IOException {
 final ProcessorStateManager stateMgr = new ProcessorStateManager(
 taskId,
 noPartitions,
@@ -363,7 +363,7 @@ public class ProcessorStateManagerTest {
 }
 
 @Test
-public void shouldWriteCheckpointForStandbyReplica() throws Exception {
+public void shouldWriteCheckpointForStandbyReplica() throws IOException {
 final ProcessorStateManager stateMgr = new ProcessorStateManager(
 taskId,
 noPartitions,
@@ -391,7 +391,7 @@ public class ProcessorStateManagerTest {
 }
 
 @Test
-public void shouldNotWriteCheckpointForNonPersistent() throws Exception {
+public void shouldNotWriteCheckpointForNonPersistent() throws IOException {
 final TopicPartition topicPartition = new 
TopicPartition(nonPersistentStoreTopicName, 1);
 
 final ProcessorStateManager stateMgr = new ProcessorStateManager(
@@ -411,7 +411,7 @@ public class ProcessorStateManagerTest {
 }
 
 @Test
-public void shouldNotWriteCheckpointForStoresWithoutChangelogTopic() 
throws Exception {
+public void shouldNotWriteCheckpointForStoresWithoutChangelogTopic() 
throws IOException {
 final ProcessorStateManager stateMgr = new ProcessorStateManager(
 taskId,
 noPartitions,
@@ -431,7 +431,7 @@ public class ProcessorStateManagerTest {
 
 
 @Test
-public void 
shouldThrowIllegalArgumentExceptionIfStoreNameIsSameAsCheckpointFileName() 
throws Exception {
+public void 
shouldThrowIllegalArgumentExceptionIfStoreNameIsSameAsCheckpointFileName() 
throws IOException {
 final ProcessorStateManager stateManager = new ProcessorStateManager(
 taskId,
 noPartitions,
@@ -450,7 +450,7 @@ public class ProcessorStateManagerTest {
 }
 
 @Test
-public void 
shouldThrowIllegalArgumentExceptionOnRegisterWhenStoreHasAlreadyBeenRegistered()
 throws Exception {
+public void 
shouldThrowIllegalArgumentExceptionOnRegisterWhenStoreHasAlreadyBeenRegistered()
 throws IOException {
 final ProcessorStateManager stateManager = new ProcessorStateManager(
 taskId,
 noPartitions,
@@ -471,7 +471,7 @@ public class ProcessorStateManagerTest {
 }
 
 @Test
-public void 
shouldThrowProcessorStateExceptionOnCloseIfStoreThrowsAnException() throws 
Exception {
+public void 
shouldThrowProcessorStateExceptionOnCloseIfStoreThrowsAnException() throws 
IOException {
 
 final ProcessorStateManager stateManager = new ProcessorStateManager(
 taskId,
@@ -499,7 +499,7 @@ public class ProcessorStateManagerTest {
 }
 
 @Test
-public void shouldDeleteCheckpointFileOnCreationIfEosEnabled() throws 
Exception {
+public void shouldDeleteCheckpointFileOnCreationIfEosEnabled() throws 
IOException {
 checkpoint.write(Collections.emptyMap());
 assertTrue(checkpointFile.exists());
 

http://git-wip-us.apache.org/repos/asf/kafka/blob/c5464edb/streams/src/test/java/org/apache/kafka/streams/processor/internals/ProcessorTopologyTest.java
--
diff --git 

[5/5] kafka git commit: KAFKA-5531; throw concrete exceptions in streams tests

2017-09-11 Thread damianguy
KAFKA-5531; throw concrete exceptions in streams tests

1. Test methods now declare the concrete exceptions they can actually throw
instead of a generic `Exception`, or declare none at all where none is needed.
2. `SimpleBenchmark.run()` throws `RuntimeException`
3. `SimpleBenchmark.produce()` throws `IllegalArgumentException`
4. Expect `ProcessorStateException` in `StandbyTaskTest.testUpdateNonPersistentStore()`

/cc enothereska

Author: Evgeny Veretennikov 

Reviewers: Damian Guy 

Closes #3485 from evis/5531-throw-concrete-exceptions
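
The shape of the change in miniature (a hypothetical test, not one from the patch):

    import java.io.IOException;
    import org.junit.Test;

    public class CheckpointExampleTest {

        // before: blanket declaration
        // public void shouldWriteCheckpoint() throws Exception { ... }

        // after: declare only what the body can actually throw
        @Test
        public void shouldWriteCheckpoint() throws IOException {
            // test body elided
        }
    }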


Project: http://git-wip-us.apache.org/repos/asf/kafka/repo
Commit: http://git-wip-us.apache.org/repos/asf/kafka/commit/c5464edb
Tree: http://git-wip-us.apache.org/repos/asf/kafka/tree/c5464edb
Diff: http://git-wip-us.apache.org/repos/asf/kafka/diff/c5464edb

Branch: refs/heads/trunk
Commit: c5464edbb7a6821e0a91a3712b1fe2fd92a22d68
Parents: 3728f4c
Author: Evgeny Veretennikov 
Authored: Mon Sep 11 09:42:10 2017 +0100
Committer: Damian Guy 
Committed: Mon Sep 11 09:42:10 2017 +0100

--
 .../apache/kafka/streams/KafkaStreamsTest.java  | 24 ++---
 .../apache/kafka/streams/StreamsConfigTest.java | 36 
 .../streams/integration/EosIntegrationTest.java |  4 +-
 .../integration/FanoutIntegrationTest.java  |  2 +-
 .../GlobalKTableIntegrationTest.java|  6 +-
 .../InternalTopicIntegrationTest.java   |  4 +-
 .../integration/JoinIntegrationTest.java|  8 +-
 .../KStreamAggregationDedupIntegrationTest.java |  4 +-
 .../KStreamAggregationIntegrationTest.java  |  4 +-
 .../KStreamKTableJoinIntegrationTest.java   |  2 +-
 .../integration/KStreamRepartitionJoinTest.java | 24 ++---
 ...eamsFineGrainedAutoResetIntegrationTest.java |  7 +-
 .../KTableKTableJoinIntegrationTest.java| 43 -
 .../QueryableStateIntegrationTest.java  | 12 +--
 .../integration/RegexSourceIntegrationTest.java |  7 +-
 .../integration/ResetIntegrationTest.java   |  3 +-
 .../integration/utils/EmbeddedKafkaCluster.java | 16 ++--
 .../streams/kstream/KStreamBuilderTest.java | 32 +++
 .../internals/GlobalKTableJoinsTest.java|  2 +-
 .../internals/KGroupedStreamImplTest.java   | 96 ++--
 .../internals/KGroupedTableImplTest.java| 26 +++---
 .../kstream/internals/KStreamImplTest.java  | 58 ++--
 .../internals/KStreamKStreamJoinTest.java   | 10 +-
 .../internals/KStreamKStreamLeftJoinTest.java   |  4 +-
 .../internals/KStreamKTableJoinTest.java|  2 +-
 .../internals/KStreamKTableLeftJoinTest.java|  2 +-
 ...reamSessionWindowAggregateProcessorTest.java | 18 ++--
 .../internals/KStreamWindowAggregateTest.java   |  6 +-
 .../internals/KTableKTableLeftJoinTest.java |  8 +-
 .../internals/KTableKTableOuterJoinTest.java|  6 +-
 .../kstream/internals/SessionKeySerdeTest.java  | 22 ++---
 .../kafka/streams/perf/SimpleBenchmark.java | 36 
 .../kafka/streams/perf/YahooBenchmark.java  |  6 +-
 .../streams/processor/TopologyBuilderTest.java  | 43 -
 .../internals/AbstractProcessorContextTest.java | 22 ++---
 .../processor/internals/AbstractTaskTest.java   |  6 +-
 .../CopartitionedTopicsValidatorTest.java   | 10 +-
 .../internals/GlobalStateManagerImplTest.java   | 52 +--
 .../internals/GlobalStateTaskTest.java  | 15 +--
 .../internals/GlobalStreamThreadTest.java   | 10 +-
 .../internals/InternalTopicConfigTest.java  | 20 ++--
 .../internals/InternalTopicManagerTest.java | 10 +-
 .../internals/MinTimestampTrackerTest.java  | 14 +--
 .../processor/internals/ProcessorNodeTest.java  |  4 +-
 .../internals/ProcessorStateManagerTest.java| 20 ++--
 .../internals/ProcessorTopologyTest.java| 10 +-
 .../internals/RecordCollectorTest.java  | 10 +-
 .../processor/internals/RecordQueueTest.java|  4 +-
 .../SourceNodeRecordDeserializerTest.java   |  6 +-
 .../processor/internals/StandbyTaskTest.java| 16 ++--
 .../processor/internals/StateConsumerTest.java  | 21 +++--
 .../processor/internals/StateDirectoryTest.java | 30 +++---
 .../processor/internals/StateRestorerTest.java  | 16 ++--
 .../internals/StoreChangelogReaderTest.java | 24 ++---
 .../internals/StreamPartitionAssignorTest.java  |  8 +-
 .../processor/internals/StreamTaskTest.java | 68 +++---
 .../processor/internals/StreamThreadTest.java   | 12 +--
 .../internals/StreamsMetadataStateTest.java | 40 
 .../internals/StreamsMetricsImplTest.java   |  2 +-
 .../assignment/AssignmentInfoTest.java  |  2 +-
 .../internals/assignment/ClientStateTest.java   | 34 +++
 .../assignment/StickyTaskAssignorTest.java  | 56 ++--
 .../assignment/SubscriptionInfoTest.java|  4 +-
 .../apache/kafka/streams/state/StoresTest.java  | 10 +-
 

[2/5] kafka git commit: KAFKA-5531; throw concrete exceptions in streams tests

2017-09-11 Thread damianguy
http://git-wip-us.apache.org/repos/asf/kafka/blob/c5464edb/streams/src/test/java/org/apache/kafka/streams/processor/internals/assignment/SubscriptionInfoTest.java
--
diff --git a/streams/src/test/java/org/apache/kafka/streams/processor/internals/assignment/SubscriptionInfoTest.java b/streams/src/test/java/org/apache/kafka/streams/processor/internals/assignment/SubscriptionInfoTest.java
index b71319a..9c011bb 100644
--- a/streams/src/test/java/org/apache/kafka/streams/processor/internals/assignment/SubscriptionInfoTest.java
+++ b/streams/src/test/java/org/apache/kafka/streams/processor/internals/assignment/SubscriptionInfoTest.java
@@ -47,7 +47,7 @@ public class SubscriptionInfoTest {
 }
 
 @Test
-public void shouldEncodeDecodeWithUserEndPoint() throws Exception {
+public void shouldEncodeDecodeWithUserEndPoint() {
 SubscriptionInfo original = new SubscriptionInfo(UUID.randomUUID(),
 Collections.singleton(new TaskId(0, 0)), 
Collections.emptySet(), "localhost:80");
 SubscriptionInfo decoded = SubscriptionInfo.decode(original.encode());
@@ -55,7 +55,7 @@ public class SubscriptionInfoTest {
 }
 
 @Test
-public void shouldBeBackwardCompatible() throws Exception {
+public void shouldBeBackwardCompatible() {
 UUID processId = UUID.randomUUID();
 
 Set activeTasks =

http://git-wip-us.apache.org/repos/asf/kafka/blob/c5464edb/streams/src/test/java/org/apache/kafka/streams/state/StoresTest.java
--
diff --git a/streams/src/test/java/org/apache/kafka/streams/state/StoresTest.java b/streams/src/test/java/org/apache/kafka/streams/state/StoresTest.java
index 700b243..900c8da 100644
--- a/streams/src/test/java/org/apache/kafka/streams/state/StoresTest.java
+++ b/streams/src/test/java/org/apache/kafka/streams/state/StoresTest.java
@@ -41,7 +41,7 @@ public class StoresTest {
 
 @SuppressWarnings("deprecation")
 @Test
-public void shouldCreateInMemoryStoreSupplierWithLoggedConfig() throws 
Exception {
+public void shouldCreateInMemoryStoreSupplierWithLoggedConfig() {
 final StateStoreSupplier supplier = Stores.create("store")
 .withKeys(Serdes.String())
 .withValues(Serdes.String())
@@ -56,7 +56,7 @@ public class StoresTest {
 
 @SuppressWarnings("deprecation")
 @Test
-public void shouldCreateInMemoryStoreSupplierNotLogged() throws Exception {
+public void shouldCreateInMemoryStoreSupplierNotLogged() {
 final StateStoreSupplier supplier = Stores.create("store")
 .withKeys(Serdes.String())
 .withValues(Serdes.String())
@@ -69,7 +69,7 @@ public class StoresTest {
 
 @SuppressWarnings("deprecation")
 @Test
-public void shouldCreatePersistenStoreSupplierWithLoggedConfig() throws 
Exception {
+public void shouldCreatePersistenStoreSupplierWithLoggedConfig() {
 final StateStoreSupplier supplier = Stores.create("store")
 .withKeys(Serdes.String())
 .withValues(Serdes.String())
@@ -84,7 +84,7 @@ public class StoresTest {
 
 @SuppressWarnings("deprecation")
 @Test
-public void shouldCreatePersistenStoreSupplierNotLogged() throws Exception 
{
+public void shouldCreatePersistenStoreSupplierNotLogged() {
 final StateStoreSupplier supplier = Stores.create("store")
 .withKeys(Serdes.String())
 .withValues(Serdes.String())
@@ -96,7 +96,7 @@ public class StoresTest {
 }
 
 @Test
-public void 
shouldThrowIllegalArgumentExceptionWhenTryingToConstructWindowStoreWithLessThanTwoSegments()
 throws Exception {
+public void 
shouldThrowIllegalArgumentExceptionWhenTryingToConstructWindowStoreWithLessThanTwoSegments()
 {
 final Stores.PersistentKeyValueFactory storeFactory = 
Stores.create("store")
 .withKeys(Serdes.String())
 .withValues(Serdes.String())

http://git-wip-us.apache.org/repos/asf/kafka/blob/c5464edb/streams/src/test/java/org/apache/kafka/streams/state/internals/AbstractKeyValueStoreTest.java
--
diff --git a/streams/src/test/java/org/apache/kafka/streams/state/internals/AbstractKeyValueStoreTest.java b/streams/src/test/java/org/apache/kafka/streams/state/internals/AbstractKeyValueStoreTest.java
index 345639b..af917e6 100644
--- a/streams/src/test/java/org/apache/kafka/streams/state/internals/AbstractKeyValueStoreTest.java
+++ b/streams/src/test/java/org/apache/kafka/streams/state/internals/AbstractKeyValueStoreTest.java
@@ -219,52 +219,52 @@ public abstract class AbstractKeyValueStoreTest {
 }
 
 @Test(expected = NullPointerException.class)
-public void shouldThrowNullPointerExceptionOnPutNullKey() throws Exception 
{
+public void 

[1/5] kafka git commit: KAFKA-5531; throw concrete exceptions in streams tests

2017-09-11 Thread damianguy
Repository: kafka
Updated Branches:
  refs/heads/trunk 3728f4cd9 -> c5464edbb


http://git-wip-us.apache.org/repos/asf/kafka/blob/c5464edb/streams/src/test/java/org/apache/kafka/streams/state/internals/RocksDBKeyValueStoreTest.java
--
diff --git a/streams/src/test/java/org/apache/kafka/streams/state/internals/RocksDBKeyValueStoreTest.java b/streams/src/test/java/org/apache/kafka/streams/state/internals/RocksDBKeyValueStoreTest.java
index 51308ce..5aaf82f 100644
--- a/streams/src/test/java/org/apache/kafka/streams/state/internals/RocksDBKeyValueStoreTest.java
+++ b/streams/src/test/java/org/apache/kafka/streams/state/internals/RocksDBKeyValueStoreTest.java
@@ -72,12 +72,12 @@ public class RocksDBKeyValueStoreTest extends 
AbstractKeyValueStoreTest {
 }
 
 @Test
-public void shouldUseCustomRocksDbConfigSetter() throws Exception {
+public void shouldUseCustomRocksDbConfigSetter() {
 assertTrue(TheRocksDbConfigSetter.called);
 }
 
 @Test
-public void shouldPerformRangeQueriesWithCachingDisabled() throws 
Exception {
+public void shouldPerformRangeQueriesWithCachingDisabled() {
 context.setTime(1L);
 store.put(1, "hi");
 store.put(2, "goodbye");
@@ -88,7 +88,7 @@ public class RocksDBKeyValueStoreTest extends 
AbstractKeyValueStoreTest {
 }
 
 @Test
-public void shouldPerformAllQueriesWithCachingDisabled() throws Exception {
+public void shouldPerformAllQueriesWithCachingDisabled() {
 context.setTime(1L);
 store.put(1, "hi");
 store.put(2, "goodbye");
@@ -99,7 +99,7 @@ public class RocksDBKeyValueStoreTest extends 
AbstractKeyValueStoreTest {
 }
 
 @Test
-public void 
shouldCloseOpenIteratorsWhenStoreClosedAndThrowInvalidStateStoreOnHasNextAndNext()
 throws Exception {
+public void 
shouldCloseOpenIteratorsWhenStoreClosedAndThrowInvalidStateStoreOnHasNextAndNext()
 {
 context.setTime(1L);
 store.put(1, "hi");
 store.put(2, "goodbye");

http://git-wip-us.apache.org/repos/asf/kafka/blob/c5464edb/streams/src/test/java/org/apache/kafka/streams/state/internals/RocksDBSegmentedBytesStoreTest.java
--
diff --git a/streams/src/test/java/org/apache/kafka/streams/state/internals/RocksDBSegmentedBytesStoreTest.java b/streams/src/test/java/org/apache/kafka/streams/state/internals/RocksDBSegmentedBytesStoreTest.java
index df91cfb..36d4c1f 100644
--- a/streams/src/test/java/org/apache/kafka/streams/state/internals/RocksDBSegmentedBytesStoreTest.java
+++ b/streams/src/test/java/org/apache/kafka/streams/state/internals/RocksDBSegmentedBytesStoreTest.java
@@ -79,7 +79,7 @@ public class RocksDBSegmentedBytesStoreTest {
 }
 
 @Test
-public void shouldPutAndFetch() throws Exception {
+public void shouldPutAndFetch() {
 final String key = "a";
 bytesStore.put(serializeKey(new Windowed<>(key, new SessionWindow(10, 
10L))), serializeValue(10L));
 bytesStore.put(serializeKey(new Windowed<>(key, new 
SessionWindow(500L, 1000L))), serializeValue(50L));
@@ -94,7 +94,7 @@ public class RocksDBSegmentedBytesStoreTest {
 }
 
 @Test
-public void shouldFindValuesWithinRange() throws Exception {
+public void shouldFindValuesWithinRange() {
 final String key = "a";
 bytesStore.put(serializeKey(new Windowed<>(key, new SessionWindow(0L, 
0L))), serializeValue(50L));
 bytesStore.put(serializeKey(new Windowed<>(key, new 
SessionWindow(1000L, 1000L))), serializeValue(10L));
@@ -103,7 +103,7 @@ public class RocksDBSegmentedBytesStoreTest {
 }
 
 @Test
-public void shouldRemove() throws Exception {
+public void shouldRemove() {
 bytesStore.put(serializeKey(new Windowed<>("a", new SessionWindow(0, 
1000))), serializeValue(30L));
 bytesStore.put(serializeKey(new Windowed<>("a", new 
SessionWindow(1500, 2500))), serializeValue(50L));
 
@@ -113,7 +113,7 @@ public class RocksDBSegmentedBytesStoreTest {
 }
 
 @Test
-public void shouldRollSegments() throws Exception {
+public void shouldRollSegments() {
 // just to validate directories
 final Segments segments = new Segments(storeName, retention, 
numSegments);
 final String key = "a";

http://git-wip-us.apache.org/repos/asf/kafka/blob/c5464edb/streams/src/test/java/org/apache/kafka/streams/state/internals/RocksDBSessionStoreSupplierTest.java
--
diff --git 
a/streams/src/test/java/org/apache/kafka/streams/state/internals/RocksDBSessionStoreSupplierTest.java
 
b/streams/src/test/java/org/apache/kafka/streams/state/internals/RocksDBSessionStoreSupplierTest.java
index 9e41d95..f62edf8 100644
--- 
a/streams/src/test/java/org/apache/kafka/streams/state/internals/RocksDBSessionStoreSupplierTest.java
+++ 

[4/5] kafka git commit: KAFKA-5531; throw concrete exceptions in streams tests

2017-09-11 Thread damianguy
http://git-wip-us.apache.org/repos/asf/kafka/blob/c5464edb/streams/src/test/java/org/apache/kafka/streams/kstream/internals/KGroupedTableImplTest.java
--
diff --git a/streams/src/test/java/org/apache/kafka/streams/kstream/internals/KGroupedTableImplTest.java b/streams/src/test/java/org/apache/kafka/streams/kstream/internals/KGroupedTableImplTest.java
index 9204b88..105dd2e 100644
--- a/streams/src/test/java/org/apache/kafka/streams/kstream/internals/KGroupedTableImplTest.java
+++ b/streams/src/test/java/org/apache/kafka/streams/kstream/internals/KGroupedTableImplTest.java
@@ -63,56 +63,56 @@ public class KGroupedTableImplTest {
 }
 
 @Test
-public void shouldAllowNullStoreNameOnAggregate() throws Exception {
+public void shouldAllowNullStoreNameOnAggregate() {
 groupedTable.aggregate(MockInitializer.STRING_INIT, 
MockAggregator.TOSTRING_ADDER, MockAggregator.TOSTRING_REMOVER, (String) null);
 }
 
 @Test(expected = InvalidTopicException.class)
-public void shouldNotAllowInvalidStoreNameOnAggregate() throws Exception {
+public void shouldNotAllowInvalidStoreNameOnAggregate() {
 groupedTable.aggregate(MockInitializer.STRING_INIT, 
MockAggregator.TOSTRING_ADDER, MockAggregator.TOSTRING_REMOVER, 
INVALID_STORE_NAME);
 }
 
 @Test(expected = NullPointerException.class)
-public void shouldNotAllowNullInitializerOnAggregate() throws Exception {
+public void shouldNotAllowNullInitializerOnAggregate() {
 groupedTable.aggregate(null, MockAggregator.TOSTRING_ADDER, 
MockAggregator.TOSTRING_REMOVER, "store");
 }
 
 @Test(expected = NullPointerException.class)
-public void shouldNotAllowNullAdderOnAggregate() throws Exception {
+public void shouldNotAllowNullAdderOnAggregate() {
 groupedTable.aggregate(MockInitializer.STRING_INIT, null, 
MockAggregator.TOSTRING_REMOVER, "store");
 }
 
 @Test(expected = NullPointerException.class)
-public void shouldNotAllowNullSubtractorOnAggregate() throws Exception {
+public void shouldNotAllowNullSubtractorOnAggregate() {
 groupedTable.aggregate(MockInitializer.STRING_INIT, 
MockAggregator.TOSTRING_ADDER, null, "store");
 }
 
 @Test(expected = NullPointerException.class)
-public void shouldNotAllowNullAdderOnReduce() throws Exception {
+public void shouldNotAllowNullAdderOnReduce() {
 groupedTable.reduce(null, MockReducer.STRING_REMOVER, "store");
 }
 
 @Test(expected = NullPointerException.class)
-public void shouldNotAllowNullSubtractorOnReduce() throws Exception {
+public void shouldNotAllowNullSubtractorOnReduce() {
 groupedTable.reduce(MockReducer.STRING_ADDER, null, "store");
 }
 
 @Test
-public void shouldAllowNullStoreNameOnReduce() throws Exception {
+public void shouldAllowNullStoreNameOnReduce() {
 groupedTable.reduce(MockReducer.STRING_ADDER, 
MockReducer.STRING_REMOVER, (String) null);
 }
 
 @Test(expected = InvalidTopicException.class)
-public void shouldNotAllowInvalidStoreNameOnReduce() throws Exception {
+public void shouldNotAllowInvalidStoreNameOnReduce() {
 groupedTable.reduce(MockReducer.STRING_ADDER, 
MockReducer.STRING_REMOVER, INVALID_STORE_NAME);
 }
 
 @Test(expected = NullPointerException.class)
-public void shouldNotAllowNullStoreSupplierOnReduce() throws Exception {
+public void shouldNotAllowNullStoreSupplierOnReduce() {
 groupedTable.reduce(MockReducer.STRING_ADDER, 
MockReducer.STRING_REMOVER, (StateStoreSupplier) null);
 }
 
-    private void doShouldReduce(final KTable<String, Integer> reduced, final String topic) throws Exception {
+    private void doShouldReduce(final KTable<String, Integer> reduced, final String topic) {
         final Map<String, Integer> results = new HashMap<>();
         reduced.foreach(new ForeachAction<String, Integer>() {
 @Override
@@ -141,7 +141,7 @@ public class KGroupedTableImplTest {
 }
 
 @Test
-public void shouldReduce() throws Exception {
+public void shouldReduce() {
 final String topic = "input";
         final KeyValueMapper<String, Number, KeyValue<String, Integer>> intProjection =
             new KeyValueMapper<String, Number, KeyValue<String, Integer>>() {
@@ -160,7 +160,7 @@ public class KGroupedTableImplTest {
 }
 
 @Test
-public void shouldReduceWithInternalStoreName() throws Exception {
+public void shouldReduceWithInternalStoreName() {
 final String topic = "input";
         final KeyValueMapper<String, Number, KeyValue<String, Integer>> intProjection =
             new KeyValueMapper<String, Number, KeyValue<String, Integer>>() {

http://git-wip-us.apache.org/repos/asf/kafka/blob/c5464edb/streams/src/test/java/org/apache/kafka/streams/kstream/internals/KStreamImplTest.java

kafka git commit: KAFKA-5815; add Printed class and KStream#print(printed)

2017-09-08 Thread damianguy
Repository: kafka
Updated Branches:
  refs/heads/trunk e16b9143d -> 4769e3d92


KAFKA-5815; add Printed class and KStream#print(printed)

Part of KIP-182
- Add `Printed` class and `KStream#print(Printed)`
- deprecate all other `print` and `writeAsText` methods

Author: Damian Guy 

Reviewers: Bill Bejeck , Matthias J. Sax 
, Guozhang Wang 

Closes #3768 from dguy/kafka-5652-printed
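
A minimal usage sketch of the new API (topic name and types are illustrative, not taken from the patch):

    import org.apache.kafka.common.serialization.Serdes;
    import org.apache.kafka.streams.Consumed;
    import org.apache.kafka.streams.StreamsBuilder;
    import org.apache.kafka.streams.kstream.KStream;
    import org.apache.kafka.streams.kstream.Printed;

    public class PrintedExample {
        public static void main(final String[] args) {
            final StreamsBuilder builder = new StreamsBuilder();
            final KStream<String, Long> stream =
                builder.stream("words", Consumed.with(Serdes.String(), Serdes.Long()));

            // print to stdout with a label and a custom formatter
            stream.print(Printed.<String, Long>toSysOut()
                                .withLabel("words")
                                .withKeyValueMapper((key, value) -> key + " -> " + value));
        }
    }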


Project: http://git-wip-us.apache.org/repos/asf/kafka/repo
Commit: http://git-wip-us.apache.org/repos/asf/kafka/commit/4769e3d9
Tree: http://git-wip-us.apache.org/repos/asf/kafka/tree/4769e3d9
Diff: http://git-wip-us.apache.org/repos/asf/kafka/diff/4769e3d9

Branch: refs/heads/trunk
Commit: 4769e3d92acdc6036f1f834c70004f0c867ae582
Parents: e16b914
Author: Damian Guy 
Authored: Fri Sep 8 18:22:04 2017 +0100
Committer: Damian Guy 
Committed: Fri Sep 8 18:22:04 2017 +0100

--
 docs/streams/developer-guide.html   |  12 +-
 docs/streams/upgrade-guide.html |   6 +
 .../apache/kafka/streams/kstream/KStream.java   |  37 ++
 .../streams/kstream/PrintForeachAction.java |  61 -
 .../apache/kafka/streams/kstream/Printed.java   | 126 +++
 .../streams/kstream/internals/KStreamImpl.java  |  29 ++---
 .../streams/kstream/internals/KStreamPrint.java |  43 +--
 .../streams/kstream/internals/KTableImpl.java   |   5 +-
 .../kstream/internals/PrintForeachAction.java   |  64 ++
 .../kstream/internals/PrintedInternal.java  |  36 ++
 .../kafka/streams/kstream/PrintedTest.java  | 126 +++
 .../kstream/internals/KStreamImplTest.java  |   7 +-
 .../kstream/internals/KStreamPrintTest.java |  19 +--
 13 files changed, 426 insertions(+), 145 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/kafka/blob/4769e3d9/docs/streams/developer-guide.html
--
diff --git a/docs/streams/developer-guide.html b/docs/streams/developer-guide.html
index 05acb55..42a9b20 100644
--- a/docs/streams/developer-guide.html
+++ b/docs/streams/developer-guide.html
@@ -1016,10 +1016,14 @@ Note that in the WordCountProcessor implementation, users need to r
 
                KStream<byte[], String> stream = ...;
                stream.print();
-
-               // Several variants of `print` exist to e.g. override the default serdes for record keys
-               // and record values, set a prefix label for the output string, etc
-               stream.print(Serdes.ByteArray(), Serdes.String());
+
+               // You can also override how and where the data is printed, i.e., to file:
+               stream.print(Printed.toFile("stream.out"));
+
+               // with a custom KeyValueMapper and label
+               stream.print(Printed.toSysOut()
+                            .withLabel("my-stream")
+                            .withKeyValueMapper((key, value) -> key + " -> " + value));
 
 
 

http://git-wip-us.apache.org/repos/asf/kafka/blob/4769e3d9/docs/streams/upgrade-guide.html
--
diff --git a/docs/streams/upgrade-guide.html b/docs/streams/upgrade-guide.html
index ffb365e..96c5941 100644
--- a/docs/streams/upgrade-guide.html
+++ b/docs/streams/upgrade-guide.html
@@ -86,6 +86,12 @@
 
 
 
+    With the introduction of <a href="https://cwiki.apache.org/confluence/display/KAFKA/KIP-182%3A+Reduce+Streams+DSL+overloads+and+allow+easier+use+of+custom+storage+engines">KIP-182</a>
+    you should no longer pass in Serde to KStream#print operations.
+    If you can't rely on using toString to print your keys and values, you should instead provide a custom KeyValueMapper via the Printed#withKeyValueMapper call.
+
+
+
     Windowed aggregations have moved from KGroupedStream to WindowedKStream.
     You can now perform a windowed aggregation by, for example, using KGroupedStream#windowedBy(Windows)#reduce(Reducer).
     Note: the previous aggregate functions on KGroupedStream still work, but have been deprecated.

http://git-wip-us.apache.org/repos/asf/kafka/blob/4769e3d9/streams/src/main/java/org/apache/kafka/streams/kstream/KStream.java
--
diff --git a/streams/src/main/java/org/apache/kafka/streams/kstream/KStream.java b/streams/src/main/java/org/apache/kafka/streams/kstream/KStream.java
index c1e5b87..3a51fad 100644
--- a/streams/src/main/java/org/apache/kafka/streams/kstream/KStream.java
+++ 

kafka git commit: KAFKA-5853; implement WindowedKStream

2017-09-08 Thread damianguy
Repository: kafka
Updated Branches:
  refs/heads/trunk beeed8660 -> e16b9143d


KAFKA-5853; implement WindowedKStream

Add the `WindowedKStream` interface and implementation of methods that don't 
require `Materialized`

Author: Damian Guy 

Reviewers: Bill Bejeck , Matthias J. Sax 
, Guozhang Wang 

Closes #3809 from dguy/kgrouped-stream-windowed-by
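
A minimal usage sketch against the new interface (topic, key/value types and window size are illustrative; count() is taken here as one of the methods that needs no Materialized argument):

    import java.util.concurrent.TimeUnit;
    import org.apache.kafka.common.serialization.Serdes;
    import org.apache.kafka.streams.Consumed;
    import org.apache.kafka.streams.StreamsBuilder;
    import org.apache.kafka.streams.kstream.KTable;
    import org.apache.kafka.streams.kstream.TimeWindows;
    import org.apache.kafka.streams.kstream.Windowed;

    public class WindowedByExample {
        public static void main(final String[] args) {
            final StreamsBuilder builder = new StreamsBuilder();

            // group by key, window into 5-minute buckets, then count per window
            final KTable<Windowed<String>, Long> counts = builder
                .stream("clicks", Consumed.with(Serdes.String(), Serdes.String()))
                .groupByKey()
                .windowedBy(TimeWindows.of(TimeUnit.MINUTES.toMillis(5)))
                .count();
        }
    }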


Project: http://git-wip-us.apache.org/repos/asf/kafka/repo
Commit: http://git-wip-us.apache.org/repos/asf/kafka/commit/e16b9143
Tree: http://git-wip-us.apache.org/repos/asf/kafka/tree/e16b9143
Diff: http://git-wip-us.apache.org/repos/asf/kafka/diff/e16b9143

Branch: refs/heads/trunk
Commit: e16b9143dfcecbd58e3bebecbdb7d8e933b88cc4
Parents: beeed86
Author: Damian Guy 
Authored: Fri Sep 8 16:49:18 2017 +0100
Committer: Damian Guy 
Committed: Fri Sep 8 16:49:18 2017 +0100

--
 docs/streams/developer-guide.html   |  55 +--
 docs/streams/upgrade-guide.html |   6 +
 .../kafka/streams/kstream/KGroupedStream.java   |  14 ++
 .../kafka/streams/kstream/WindowedKStream.java  | 150 +++
 .../kstream/internals/KGroupedStreamImpl.java   |  22 ++-
 .../kstream/internals/WindowedKStreamImpl.java  | 143 ++
 .../KStreamAggregationIntegrationTest.java  |  55 +++
 .../internals/KGroupedStreamImplTest.java   |   1 +
 .../internals/WindowedKStreamImplTest.java  | 144 ++
 9 files changed, 544 insertions(+), 46 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/kafka/blob/e16b9143/docs/streams/developer-guide.html
--
diff --git a/docs/streams/developer-guide.html b/docs/streams/developer-guide.html
index b8d3ae4..05acb55 100644
--- a/docs/streams/developer-guide.html
+++ b/docs/streams/developer-guide.html
@@ -1175,9 +1175,10 @@ Note that in the WordCountProcessor implementation, users need to r
     Once records are grouped by key via groupByKey or groupBy -- and
     thus represented as either a KGroupedStream or a
     KGroupedTable -- they can be aggregated via an operation such as
-    reduce. Aggregations are key-based operations, i.e.
-    they always operate over records (notably record values) of the same key. You may
-    choose to perform aggregations on
+    reduce.
+    For windowed aggregations use windowedBy(Windows).reduce(Reducer).
+    Aggregations are key-based operations, i.e. they always operate over records (notably record values) of the same key.
+    You may choose to perform aggregations on
     windowed or non-windowed data.
 
 
@@ -1205,20 +1206,20 @@ Note that in the WordCountProcessor implementation, users need to r
     Several variants of aggregate exist, see Javadocs for details.
 
 
-    KGroupedStream<byte[], String> groupedStream = ...;
-    KGroupedTable<byte[], String> groupedTable = ...;
+    KGroupedStream<Bytes, String> groupedStream = ...;
+    KGroupedTable<Bytes, String> groupedTable = ...;
 
     // Java 8+ examples, using lambda expressions
 
     // Aggregating a KGroupedStream (note how the value type changes from String to Long)
-    KTable<byte[], Long> aggregatedStream = groupedStream.aggregate(
+    KTable<Bytes, Long> aggregatedStream = groupedStream.aggregate(
         () -> 0L, /* initializer */
         (aggKey, newValue, aggValue) -> aggValue + newValue.length(), /* adder */
         Serdes.Long(), /* serde for aggregate value */
         "aggregated-stream-store" /* state store name */);
 
     // Aggregating a KGroupedTable (note how the value type changes from String to Long)
-    KTable<byte[], Long> aggregatedTable = groupedTable.aggregate(
+    KTable<Bytes, Long> aggregatedTable = groupedTable.aggregate(
         () -> 0L, /* initializer */
         (aggKey, newValue, aggValue) -> aggValue + newValue.length(), /* adder */
         (aggKey, oldValue, aggValue) -> aggValue - oldValue.length(), /* subtractor */
@@ -1226,19 +1227,26 @@ Note that in the WordCountProcessor implementation, users need to r
         "aggregated-table-store" /* state store name */);
 
 
+    // windowed aggregation
+    KTable<Windowed<Bytes>, Long> windowedAggregate = groupedStream.windowedBy(TimeWindows.of(TimeUnit.MINUTES.toMillis(5)))
+        .aggregate(() -> 0L, /* initializer

[1/2] kafka git commit: KAFKA-5832; add Consumed and change StreamBuilder to use it

2017-09-08 Thread damianguy
Repository: kafka
Updated Branches:
  refs/heads/trunk 27336192f -> d0ee6ed36


http://git-wip-us.apache.org/repos/asf/kafka/blob/d0ee6ed3/streams/src/test/java/org/apache/kafka/streams/integration/KStreamRepartitionJoinTest.java
--
diff --git a/streams/src/test/java/org/apache/kafka/streams/integration/KStreamRepartitionJoinTest.java b/streams/src/test/java/org/apache/kafka/streams/integration/KStreamRepartitionJoinTest.java
index 4a356c7..9618033 100644
--- a/streams/src/test/java/org/apache/kafka/streams/integration/KStreamRepartitionJoinTest.java
+++ b/streams/src/test/java/org/apache/kafka/streams/integration/KStreamRepartitionJoinTest.java
@@ -25,6 +25,7 @@ import org.apache.kafka.common.serialization.LongSerializer;
 import org.apache.kafka.common.serialization.Serdes;
 import org.apache.kafka.common.serialization.StringDeserializer;
 import org.apache.kafka.common.serialization.StringSerializer;
+import org.apache.kafka.streams.Consumed;
 import org.apache.kafka.streams.KafkaStreams;
 import org.apache.kafka.streams.KeyValue;
 import org.apache.kafka.streams.StreamsBuilder;
@@ -100,9 +101,9 @@ public class KStreamRepartitionJoinTest {
 streamsConfiguration.put(StreamsConfig.NUM_STREAM_THREADS_CONFIG, 3);
 
streamsConfiguration.put(IntegrationTestUtils.INTERNAL_LEAVE_GROUP_ON_CLOSE, 
true);
 
-streamOne = builder.stream(Serdes.Long(), Serdes.Integer(), 
streamOneInput);
-streamTwo = builder.stream(Serdes.Integer(), Serdes.String(), 
streamTwoInput);
-streamFour = builder.stream(Serdes.Integer(), Serdes.String(), 
streamFourInput);
+streamOne = builder.stream(streamOneInput, 
Consumed.with(Serdes.Long(), Serdes.Integer()));
+streamTwo = builder.stream(streamTwoInput, 
Consumed.with(Serdes.Integer(), Serdes.String()));
+streamFour = builder.stream(streamFourInput, 
Consumed.with(Serdes.Integer(), Serdes.String()));
 
 keyMapper = MockKeyValueMapper.SelectValueKeyValueMapper();
 }

http://git-wip-us.apache.org/repos/asf/kafka/blob/d0ee6ed3/streams/src/test/java/org/apache/kafka/streams/integration/KStreamsFineGrainedAutoResetIntegrationTest.java
--
diff --git a/streams/src/test/java/org/apache/kafka/streams/integration/KStreamsFineGrainedAutoResetIntegrationTest.java b/streams/src/test/java/org/apache/kafka/streams/integration/KStreamsFineGrainedAutoResetIntegrationTest.java
index 2ae5cc2..92f351b 100644
--- a/streams/src/test/java/org/apache/kafka/streams/integration/KStreamsFineGrainedAutoResetIntegrationTest.java
+++ b/streams/src/test/java/org/apache/kafka/streams/integration/KStreamsFineGrainedAutoResetIntegrationTest.java
@@ -26,6 +26,7 @@ import org.apache.kafka.common.serialization.Serde;
 import org.apache.kafka.common.serialization.Serdes;
 import org.apache.kafka.common.serialization.StringDeserializer;
 import org.apache.kafka.common.serialization.StringSerializer;
+import org.apache.kafka.streams.Consumed;
 import org.apache.kafka.streams.KafkaStreams;
 import org.apache.kafka.streams.KeyValue;
 import org.apache.kafka.streams.StreamsBuilder;
@@ -186,9 +187,10 @@ public class KStreamsFineGrainedAutoResetIntegrationTest {
 
 final StreamsBuilder builder = new StreamsBuilder();
 
-        final KStream<String, String> pattern1Stream = builder.stream(Topology.AutoOffsetReset.EARLIEST, Pattern.compile("topic-\\d" + topicSuffix));
-        final KStream<String, String> pattern2Stream = builder.stream(Topology.AutoOffsetReset.LATEST, Pattern.compile("topic-[A-D]" + topicSuffix));
-        final KStream<String, String> namedTopicsStream = builder.stream(topicY, topicZ);
+
+        final KStream<String, String> pattern1Stream = builder.stream(Pattern.compile("topic-\\d" + topicSuffix), Consumed.with(Topology.AutoOffsetReset.EARLIEST));
+        final KStream<String, String> pattern2Stream = builder.stream(Pattern.compile("topic-[A-D]" + topicSuffix), Consumed.with(Topology.AutoOffsetReset.LATEST));
+        final KStream<String, String> namedTopicsStream = builder.stream(Arrays.asList(topicY, topicZ));
 
 pattern1Stream.to(stringSerde, stringSerde, outputTopic);
 pattern2Stream.to(stringSerde, stringSerde, outputTopic);
@@ -262,10 +264,9 @@ public class KStreamsFineGrainedAutoResetIntegrationTest {
 public void shouldThrowExceptionOverlappingTopic() throws  Exception {
 final StreamsBuilder builder = new StreamsBuilder();
 //NOTE this would realistically get caught when building topology, the 
test is for completeness
-builder.stream(Topology.AutoOffsetReset.EARLIEST, 
Pattern.compile("topic-[A-D]_1"));
-
+builder.stream(Pattern.compile("topic-[A-D]_1"), 
Consumed.with(Topology.AutoOffsetReset.EARLIEST));
 try {
-

[2/2] kafka git commit: KAFKA-5832; add Consumed and change StreamBuilder to use it

2017-09-08 Thread damianguy
KAFKA-5832; add Consumed and change StreamBuilder to use it

Added `Consumed` class.
Updated `StreamBuilder#stream`, `StreamBuilder#table`, 
`StreamBuilder#globalTable`

Author: Damian Guy 

Reviewers: Matthias J. Sax , Guozhang Wang 


Closes #3784 from dguy/kip-182-stream-builder
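
A minimal usage sketch of the new entry point (topic name, serdes and reset policy are illustrative):

    import org.apache.kafka.common.serialization.Serdes;
    import org.apache.kafka.streams.Consumed;
    import org.apache.kafka.streams.StreamsBuilder;
    import org.apache.kafka.streams.Topology;
    import org.apache.kafka.streams.kstream.KStream;

    public class ConsumedExample {
        public static void main(final String[] args) {
            final StreamsBuilder builder = new StreamsBuilder();

            // serdes and the offset-reset policy now travel together in one
            // Consumed argument instead of separate positional parameters
            final KStream<String, Long> stream = builder.stream(
                "word-counts-input-topic",
                Consumed.with(Serdes.String(), Serdes.Long())
                        .withOffsetResetPolicy(Topology.AutoOffsetReset.EARLIEST));
        }
    }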


Project: http://git-wip-us.apache.org/repos/asf/kafka/repo
Commit: http://git-wip-us.apache.org/repos/asf/kafka/commit/d0ee6ed3
Tree: http://git-wip-us.apache.org/repos/asf/kafka/tree/d0ee6ed3
Diff: http://git-wip-us.apache.org/repos/asf/kafka/diff/d0ee6ed3

Branch: refs/heads/trunk
Commit: d0ee6ed36baf702fa24dac8ae31f45fc27324d89
Parents: 2733619
Author: Damian Guy 
Authored: Fri Sep 8 08:21:48 2017 +0100
Committer: Damian Guy 
Committed: Fri Sep 8 08:21:48 2017 +0100

--
 docs/streams/developer-guide.html   |   7 +-
 .../examples/pageview/PageViewTypedDemo.java|   3 +-
 .../examples/pageview/PageViewUntypedDemo.java  |   3 +-
 .../java/org/apache/kafka/streams/Consumed.java | 158 +++
 .../apache/kafka/streams/StreamsBuilder.java| 431 +--
 .../apache/kafka/streams/kstream/KStream.java   |   8 +-
 .../kstream/internals/ConsumedInternal.java |  56 +++
 .../internals/InternalStreamsBuilder.java   |  91 ++--
 .../streams/kstream/internals/KStreamImpl.java  |   7 +-
 .../streams/kstream/internals/KTableImpl.java   |   4 +-
 .../apache/kafka/streams/KafkaStreamsTest.java  |   2 +-
 .../kafka/streams/StreamsBuilderTest.java   |   9 +-
 .../GlobalKTableIntegrationTest.java|   3 +-
 .../KStreamAggregationDedupIntegrationTest.java |   3 +-
 .../KStreamAggregationIntegrationTest.java  |  15 +-
 .../KStreamKTableJoinIntegrationTest.java   |   3 +-
 .../integration/KStreamRepartitionJoinTest.java |   7 +-
 ...eamsFineGrainedAutoResetIntegrationTest.java |  13 +-
 .../QueryableStateIntegrationTest.java  |   3 +-
 .../integration/RegexSourceIntegrationTest.java |   2 +-
 .../kstream/internals/AbstractStreamTest.java   |   3 +-
 .../internals/GlobalKTableJoinsTest.java|   3 +-
 .../internals/InternalStreamsBuilderTest.java   |  69 +--
 .../internals/KGroupedStreamImplTest.java   |  10 +-
 .../kstream/internals/KStreamBranchTest.java|   3 +-
 .../kstream/internals/KStreamFilterTest.java|   5 +-
 .../kstream/internals/KStreamFlatMapTest.java   |   3 +-
 .../internals/KStreamFlatMapValuesTest.java |   3 +-
 .../kstream/internals/KStreamForeachTest.java   |   3 +-
 .../kstream/internals/KStreamImplTest.java  |  25 +-
 .../internals/KStreamKStreamJoinTest.java   |  24 +-
 .../internals/KStreamKStreamLeftJoinTest.java   |  10 +-
 .../internals/KStreamKTableJoinTest.java|   3 +-
 .../internals/KStreamKTableLeftJoinTest.java|   3 +-
 .../kstream/internals/KStreamMapTest.java   |   3 +-
 .../kstream/internals/KStreamMapValuesTest.java |   3 +-
 .../kstream/internals/KStreamPeekTest.java  |   5 +-
 .../kstream/internals/KStreamSelectKeyTest.java |   3 +-
 .../kstream/internals/KStreamTransformTest.java |   3 +-
 .../internals/KStreamTransformValuesTest.java   |   3 +-
 .../internals/KStreamWindowAggregateTest.java   |   7 +-
 .../kafka/streams/perf/SimpleBenchmark.java |   7 +-
 .../kafka/streams/perf/YahooBenchmark.java  |   6 +-
 .../processor/internals/StandbyTaskTest.java|   3 +-
 .../processor/internals/StreamThreadTest.java   |  10 +-
 .../streams/tests/ShutdownDeadlockTest.java |   3 +-
 .../kafka/streams/tests/SmokeTestClient.java|   3 +-
 47 files changed, 553 insertions(+), 501 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/kafka/blob/d0ee6ed3/docs/streams/developer-guide.html
--
diff --git a/docs/streams/developer-guide.html b/docs/streams/developer-guide.html
index a140b46..b8d3ae4 100644
--- a/docs/streams/developer-guide.html
+++ b/docs/streams/developer-guide.html
@@ -547,9 +547,8 @@ Note that in the WordCountProcessor implementation, users need to r
     StreamsBuilder builder = new StreamsBuilder();
 
     KStream<String, Long> wordCounts = builder.stream(
-        Serdes.String(), /* key serde */
-        Serdes.Long(),   /* value serde */
-        "word-counts-input-topic" /* input topic */);
+        "word-counts-input-topic" /* input topic */,
+        Consumed.with(Serdes.String(), Serdes.Long())); // define key and value serdes
 
 When to provide serdes explicitly:
 
@@ -2427,7 +2426,7 @@ Note that in the WordCountProcessor 
implementation, users need to r
   StreamsConfig config = new 

kafka git commit: KAFKA-5844; add groupBy(selector, serialized) to Ktable

2017-09-07 Thread damianguy
Repository: kafka
Updated Branches:
  refs/heads/trunk 9cbb9f093 -> 329d5fa64


KAFKA-5844; add groupBy(selector, serialized) to Ktable

add `KTable#groupBy(KeyValueMapper, Serialized)` and deprecate the overload 
with `Serde` params

Author: Damian Guy 

Reviewers: Matthias J. Sax , Guozhang Wang 
, Bill Bejeck 

Closes #3802 from dguy/kip-182-ktable-groupby
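
A minimal usage sketch of the new overload, written against trunk after the related KIP-182 commits landed (topic and types are illustrative):

    import org.apache.kafka.common.serialization.Serdes;
    import org.apache.kafka.streams.Consumed;
    import org.apache.kafka.streams.KeyValue;
    import org.apache.kafka.streams.StreamsBuilder;
    import org.apache.kafka.streams.kstream.KGroupedTable;
    import org.apache.kafka.streams.kstream.KTable;
    import org.apache.kafka.streams.kstream.Serialized;

    public class GroupByExample {
        public static void main(final String[] args) {
            final StreamsBuilder builder = new StreamsBuilder();
            final KTable<String, String> table =
                builder.table("users", Consumed.with(Serdes.String(), Serdes.String()));

            // regroup by value; the serdes ride along in Serialized rather
            // than as two bare Serde parameters
            final KGroupedTable<String, String> grouped = table.groupBy(
                (key, value) -> KeyValue.pair(value, key),
                Serialized.with(Serdes.String(), Serdes.String()));
        }
    }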


Project: http://git-wip-us.apache.org/repos/asf/kafka/repo
Commit: http://git-wip-us.apache.org/repos/asf/kafka/commit/329d5fa6
Tree: http://git-wip-us.apache.org/repos/asf/kafka/tree/329d5fa6
Diff: http://git-wip-us.apache.org/repos/asf/kafka/diff/329d5fa6

Branch: refs/heads/trunk
Commit: 329d5fa64a2a3ac1d39ac37fdacbf6e43d500d11
Parents: 9cbb9f0
Author: Damian Guy 
Authored: Thu Sep 7 12:35:31 2017 +0100
Committer: Damian Guy 
Committed: Thu Sep 7 12:35:31 2017 +0100

--
 .../apache/kafka/streams/kstream/KTable.java| 33 +++-
 .../kafka/streams/kstream/KeyValueMapper.java   |  4 +--
 .../streams/kstream/internals/KTableImpl.java   | 22 -
 .../kstream/internals/KTableAggregateTest.java  | 21 ++---
 .../internals/KTableKTableLeftJoinTest.java |  3 +-
 .../kafka/streams/tests/SmokeTestClient.java|  3 +-
 6 files changed, 62 insertions(+), 24 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/kafka/blob/329d5fa6/streams/src/main/java/org/apache/kafka/streams/kstream/KTable.java
--
diff --git a/streams/src/main/java/org/apache/kafka/streams/kstream/KTable.java b/streams/src/main/java/org/apache/kafka/streams/kstream/KTable.java
index 06a0eee..4bc9572 100644
--- a/streams/src/main/java/org/apache/kafka/streams/kstream/KTable.java
+++ b/streams/src/main/java/org/apache/kafka/streams/kstream/KTable.java
@@ -1001,7 +1001,7 @@ public interface KTable<K, V> {
  * records to and rereading all update records from it, such that the 
resulting {@link KGroupedTable} is partitioned
  * on the new key.
  * 
- * If the key or value type is changed, it is recommended to use {@link 
#groupBy(KeyValueMapper, Serde, Serde)}
+ * If the key or value type is changed, it is recommended to use {@link 
#groupBy(KeyValueMapper, Serialized)}
  * instead.
  *
  * @param selector a {@link KeyValueMapper} that computes a new grouping 
key and value to be aggregated
@@ -1012,6 +1012,35 @@ public interface KTable {
 <KR, VR> KGroupedTable<KR, VR> groupBy(final KeyValueMapper<? super K, ? super V, KeyValue<KR, VR>> selector);
 
 /**
+ * Re-groups the records of this {@code KTable} using the provided {@link 
KeyValueMapper}
+ * and {@link Serde}s as specified by {@link Serialized}.
+ * Each {@link KeyValue} pair of this {@code KTable} is mapped to a new 
{@link KeyValue} pair by applying the
+ * provided {@link KeyValueMapper}.
+ * Re-grouping a {@code KTable} is required before an aggregation operator 
can be applied to the data
+ * (cf. {@link KGroupedTable}).
+ * The {@link KeyValueMapper} selects a new key and value (which should both have unmodified type).
+ * If the new record key is {@code null} the record will not be included 
in the resulting {@link KGroupedTable}
+ * 
+ * Because a new key is selected, an internal repartitioning topic will be 
created in Kafka.
+ * This topic will be named "${applicationId}-XXX-repartition", where 
"applicationId" is user-specified in
+ * {@link  StreamsConfig} via parameter {@link 
StreamsConfig#APPLICATION_ID_CONFIG APPLICATION_ID_CONFIG}, "XXX" is
+ * an internally generated name, and "-repartition" is a fixed suffix.
+ * You can retrieve all generated internal topic names via {@link 
KafkaStreams#toString()}.
+ * 
+ * All data of this {@code KTable} will be redistributed through the 
repartitioning topic by writing all update
+ * records to and rereading all update records from it, such that the 
resulting {@link KGroupedTable} is partitioned
+ * on the new key.
+ *
+ * @param selector   a {@link KeyValueMapper} that computes a new grouping key and value to be aggregated
+ * @param serialized the {@link Serialized} instance used to specify {@link org.apache.kafka.common.serialization.Serdes}
+ * @param <KR>       the key type of the result {@link KGroupedTable}
+ * @param <VR>       the value type of the result {@link KGroupedTable}
+ * @return a {@link KGroupedTable} that contains the re-grouped records of the original {@code KTable}
+ */
+<KR, VR> KGroupedTable<KR, VR> groupBy(final KeyValueMapper<? super K, ? super V, KeyValue<KR, VR>> selector,
+                                       final Serialized<KR, VR> serialized);
+
+/**
  * Re-groups the records of this 

[1/2] kafka git commit: KAFKA-5650; add StateStoreBuilder interface and implementations

2017-09-07 Thread damianguy
Repository: kafka
Updated Branches:
  refs/heads/trunk 667cd60dc -> 9cbb9f093


http://git-wip-us.apache.org/repos/asf/kafka/blob/9cbb9f09/streams/src/main/java/org/apache/kafka/streams/state/internals/RocksDbWindowBytesStoreSupplier.java
--
diff --git a/streams/src/main/java/org/apache/kafka/streams/state/internals/RocksDbWindowBytesStoreSupplier.java b/streams/src/main/java/org/apache/kafka/streams/state/internals/RocksDbWindowBytesStoreSupplier.java
new file mode 100644
index 0000000..a0500b6
--- /dev/null
+++ b/streams/src/main/java/org/apache/kafka/streams/state/internals/RocksDbWindowBytesStoreSupplier.java
@@ -0,0 +1,90 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.kafka.streams.state.internals;
+
+import org.apache.kafka.common.utils.Bytes;
+import org.apache.kafka.streams.state.WindowBytesStoreSupplier;
+import org.apache.kafka.streams.state.WindowStore;
+
+import static org.apache.kafka.streams.state.internals.RocksDBWindowStoreSupplier.MIN_SEGMENTS;
+
+public class RocksDbWindowBytesStoreSupplier implements WindowBytesStoreSupplier {
+private final String name;
+private final long retentionPeriod;
+private final int segments;
+private final long windowSize;
+private final boolean retainDuplicates;
+
+public RocksDbWindowBytesStoreSupplier(final String name,
+   final long retentionPeriod,
+   final int segments,
+   final long windowSize,
+   final boolean retainDuplicates) {
+if (segments < MIN_SEGMENTS) {
+throw new IllegalArgumentException("numSegments must be >= " + 
MIN_SEGMENTS);
+}
+this.name = name;
+this.retentionPeriod = retentionPeriod;
+this.segments = segments;
+this.windowSize = windowSize;
+this.retainDuplicates = retainDuplicates;
+}
+
+@Override
+public String name() {
+return name;
+}
+
+@Override
+public WindowStore<Bytes, byte[]> get() {
+final RocksDBSegmentedBytesStore segmentedBytesStore = new RocksDBSegmentedBytesStore(
+name,
+retentionPeriod,
+segments,
+new WindowKeySchema()
+);
+return RocksDBWindowStore.bytesStore(segmentedBytesStore,
+ retainDuplicates,
+ windowSize);
+
+}
+
+@Override
+public String metricsScope() {
+return "rocksdb-window";
+}
+
+@Override
+public int segments() {
+return segments;
+}
+
+@Override
+public long windowSize() {
+return windowSize;
+}
+
+@Override
+public boolean retainDuplicates() {
+return retainDuplicates;
+}
+
+@Override
+public long retentionPeriod() {
+return retentionPeriod;
+}
+}

http://git-wip-us.apache.org/repos/asf/kafka/blob/9cbb9f09/streams/src/main/java/org/apache/kafka/streams/state/internals/SessionStoreBuilder.java
--
diff --git a/streams/src/main/java/org/apache/kafka/streams/state/internals/SessionStoreBuilder.java b/streams/src/main/java/org/apache/kafka/streams/state/internals/SessionStoreBuilder.java
new file mode 100644
index 0000000..61919c3
--- /dev/null
+++ b/streams/src/main/java/org/apache/kafka/streams/state/internals/SessionStoreBuilder.java
@@ -0,0 +1,63 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the 

kafka git commit: KAFKA-5819; Add Joined class and relevant KStream join overloads

2017-09-06 Thread damianguy
Repository: kafka
Updated Branches:
  refs/heads/trunk b687c0680 -> 45394d52c


KAFKA-5819; Add Joined class and relevant KStream join overloads

Add the `Joined` class and the overloads to `KStream` that use it.
Deprecate the existing methods that take `Serde` params; a usage sketch of the new overload follows.
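
A minimal sketch of the new overload in use (topics, types, and serdes are illustrative, not taken from this commit; assumes a StreamsBuilder named builder and the usual Serdes/TimeUnit imports):

    KStream<String, Long> left = builder.stream("left-topic");
    KStream<String, Double> right = builder.stream("right-topic");

    // Join values within a 5-minute window, supplying serdes via Joined
    // instead of the deprecated three-Serde overload.
    KStream<String, String> joined = left.join(
        right,
        (lv, rv) -> lv + "/" + rv,                    // ValueJoiner
        JoinWindows.of(TimeUnit.MINUTES.toMillis(5)),
        Joined.with(Serdes.String(),                  // key serde
                    Serdes.Long(),                    // this stream's value serde
                    Serdes.Double())                  // other stream's value serde
    );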

Author: Damian Guy 

Reviewers: Matthias J. Sax , Guozhang Wang 


Closes #3776 from dguy/kip-182-stream-join


Project: http://git-wip-us.apache.org/repos/asf/kafka/repo
Commit: http://git-wip-us.apache.org/repos/asf/kafka/commit/45394d52
Tree: http://git-wip-us.apache.org/repos/asf/kafka/tree/45394d52
Diff: http://git-wip-us.apache.org/repos/asf/kafka/diff/45394d52

Branch: refs/heads/trunk
Commit: 45394d52c1ba566178c57897297a3ea31379f957
Parents: b687c06
Author: Damian Guy 
Authored: Wed Sep 6 10:55:43 2017 +0100
Committer: Damian Guy 
Committed: Wed Sep 6 10:55:43 2017 +0100

--
 .../kafka/streams/kstream/JoinWindows.java  |   4 +-
 .../apache/kafka/streams/kstream/Joined.java| 146 +++
 .../apache/kafka/streams/kstream/KStream.java   | 426 ++-
 .../kafka/streams/kstream/ValueJoiner.java  |  10 +-
 .../streams/kstream/internals/KStreamImpl.java  | 110 +++--
 .../integration/KStreamRepartitionJoinTest.java |  25 +-
 .../kstream/internals/KStreamImplTest.java  |  54 ++-
 .../internals/KStreamKStreamJoinTest.java   |  28 +-
 .../internals/KStreamKStreamLeftJoinTest.java   |  13 +-
 9 files changed, 732 insertions(+), 84 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/kafka/blob/45394d52/streams/src/main/java/org/apache/kafka/streams/kstream/JoinWindows.java
--
diff --git a/streams/src/main/java/org/apache/kafka/streams/kstream/JoinWindows.java b/streams/src/main/java/org/apache/kafka/streams/kstream/JoinWindows.java
index 9d69738..ef9ed01 100644
--- a/streams/src/main/java/org/apache/kafka/streams/kstream/JoinWindows.java
+++ b/streams/src/main/java/org/apache/kafka/streams/kstream/JoinWindows.java
@@ -55,9 +55,9 @@ import java.util.Map;
  * @see UnlimitedWindows
  * @see SessionWindows
  * @see KStream#join(KStream, ValueJoiner, JoinWindows)
- * @see KStream#join(KStream, ValueJoiner, JoinWindows, org.apache.kafka.common.serialization.Serde, org.apache.kafka.common.serialization.Serde, org.apache.kafka.common.serialization.Serde)
+ * @see KStream#join(KStream, ValueJoiner, JoinWindows, Joined)
  * @see KStream#leftJoin(KStream, ValueJoiner, JoinWindows)
- * @see KStream#leftJoin(KStream, ValueJoiner, JoinWindows, org.apache.kafka.common.serialization.Serde, org.apache.kafka.common.serialization.Serde, org.apache.kafka.common.serialization.Serde)
+ * @see KStream#leftJoin(KStream, ValueJoiner, JoinWindows, Joined)
  * @see KStream#outerJoin(KStream, ValueJoiner, JoinWindows)
  * @see KStream#outerJoin(KStream, ValueJoiner, JoinWindows)
  * @see TimestampExtractor
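
The @see updates above track this commit's API change: the three raw Serde parameters collapse into a single Joined parameter. A before/after sketch (variable and parameter names illustrative):

    // before (now deprecated):
    // left.join(right, joiner, windows, keySerde, thisValueSerde, otherValueSerde);
    // after:
    left.join(right, joiner, windows, Joined.with(keySerde, thisValueSerde, otherValueSerde));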

http://git-wip-us.apache.org/repos/asf/kafka/blob/45394d52/streams/src/main/java/org/apache/kafka/streams/kstream/Joined.java
--
diff --git a/streams/src/main/java/org/apache/kafka/streams/kstream/Joined.java b/streams/src/main/java/org/apache/kafka/streams/kstream/Joined.java
new file mode 100644
index 000..8601e1c
--- /dev/null
+++ b/streams/src/main/java/org/apache/kafka/streams/kstream/Joined.java
@@ -0,0 +1,146 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.kafka.streams.kstream;
+
+import org.apache.kafka.common.serialization.Serde;
+
+/**
+ * The {@code Joined} class represents optional params that can be passed to
+ * {@link KStream#join}, {@link KStream#leftJoin}, and {@link KStream#outerJoin} operations.
+ */
+public class Joined<K, V, VO> {
+
+    private Serde<K> keySerde;
+    private Serde<V> valueSerde;
+    private Serde<VO> otherValueSerde;
+
+    private Joined(final Serde<K> keySerde,
+                   final Serde<V> valueSerde,
+                   final Serde<VO> 

kafka git commit: KAFKA-5817; Add Serialized class and overloads to KStream#groupBy and KStream#groupByKey

2017-09-06 Thread damianguy
Repository: kafka
Updated Branches:
  refs/heads/trunk 2fb5664bf -> b687c0680


KAFKA-5817; Add Serialized class and overloads to KStream#groupBy and KStream#groupByKey

Part of KIP-182
- Add the `Serialized` class
- implement overloads of `KStream#groupByKey` and `KStream#groupBy` (see the sketch below)
- deprecate existing methods that take more than the default arguments
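
A minimal sketch of the new groupByKey overload (topic and types are illustrative, not taken from this commit; assumes a StreamsBuilder named builder):

    KStream<String, Long> clicks = builder.stream("clicks-topic");

    // Group by the existing key, overriding the default serdes explicitly.
    KGroupedStream<String, Long> grouped = clicks.groupByKey(
        Serialized.with(Serdes.String(), Serdes.Long())
    );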

Author: Damian Guy 

Reviewers: Bill Bejeck , Matthias J. Sax 
, Guozhang Wang 

Closes #3772 from dguy/kafka-5817


Project: http://git-wip-us.apache.org/repos/asf/kafka/repo
Commit: http://git-wip-us.apache.org/repos/asf/kafka/commit/b687c068
Tree: http://git-wip-us.apache.org/repos/asf/kafka/tree/b687c068
Diff: http://git-wip-us.apache.org/repos/asf/kafka/diff/b687c068

Branch: refs/heads/trunk
Commit: b687c068008a81fad390c80da289249cc04b3efb
Parents: 2fb5664
Author: Damian Guy 
Authored: Wed Sep 6 10:43:14 2017 +0100
Committer: Damian Guy 
Committed: Wed Sep 6 10:43:14 2017 +0100

--
 docs/streams/developer-guide.html   | 35 
 .../examples/pageview/PageViewTypedDemo.java|  3 +-
 .../examples/pageview/PageViewUntypedDemo.java  |  3 +-
 .../apache/kafka/streams/kstream/KStream.java   | 59 -
 .../kafka/streams/kstream/Serialized.java   | 88 
 .../streams/kstream/internals/KStreamImpl.java  | 38 ++---
 .../KStreamAggregationDedupIntegrationTest.java |  6 +-
 .../KStreamAggregationIntegrationTest.java  |  7 +-
 .../KStreamKTableJoinIntegrationTest.java   |  3 +-
 .../internals/KGroupedStreamImplTest.java   |  3 +-
 .../internals/KStreamWindowAggregateTest.java   |  8 +-
 .../kafka/streams/perf/YahooBenchmark.java  |  3 +-
 .../kafka/streams/tests/SmokeTestClient.java|  3 +-
 13 files changed, 215 insertions(+), 44 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/kafka/blob/b687c068/docs/streams/developer-guide.html
--
diff --git a/docs/streams/developer-guide.html b/docs/streams/developer-guide.html
index b530e5e..8433bf3 100644
--- a/docs/streams/developer-guide.html
+++ b/docs/streams/developer-guide.html
@@ -842,8 +842,9 @@ Note that in the WordCountProcessor implementation, users need to r
 // When the key and/or value types do not match the configured
 // default serdes, we must explicitly specify serdes.
 KGroupedStream<byte[], String> groupedStream = stream.groupByKey(
-    Serdes.ByteArray(), /* key */
-    Serdes.String()     /* value */
+    Serialized.with(
+        Serdes.ByteArray(), /* key */
+        Serdes.String())    /* value */
 );
 
 
@@ -883,15 +884,17 @@ Note that in the WordCountProcessor implementation, users need to r
 // Group the stream by a new key and key type
 KGroupedStream<String, String> groupedStream = stream.groupBy(
     (key, value) -> value,
-    Serdes.String(), /* key (note: type was modified) */
-    Serdes.String()  /* value */
+    Serialized.with(
+        Serdes.String(), /* key (note: type was modified) */
+        Serdes.String()) /* value */
 );

 // Group the table by a new key and key type, and also modify the value and value type.
 KGroupedTable<String, Integer> groupedTable = table.groupBy(
     (key, value) -> KeyValue.pair(value, value.length()),
-    Serdes.String(),  /* key (note: type was modified) */
-    Serdes.Integer()  /* value (note: type was modified) */
+    Serialized.with(
+        Serdes.String(),  /* key (note: type was modified) */
+        Serdes.Integer()) /* value (note: type was modified) */
 );
 
 
@@ -905,8 +908,9 @@ Note that in the WordCountProcessor implementation, users need to r
                return value;
            }
        },
-    Serdes.String(), /* key (note: type was modified) */
-    Serdes.String()  /* value */
+    Serialized.with(
+        Serdes.String(), /* key (note: type was modified) */
+        Serdes.String()) /* value */

[kafka] Git Push Summary

2017-09-05 Thread damianguy
Repository: kafka
Updated Tags:  refs/tags/0.11.0.1-rc0 [created] a8aa61266

