[kafka-site] branch asf-site updated: [MINOR] adding Itau Unibanco and OTICS to the powered-by page (#300)

2020-08-31 Thread ewencp
This is an automated email from the ASF dual-hosted git repository.

ewencp pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/kafka-site.git


The following commit(s) were added to refs/heads/asf-site by this push:
 new f85f951  [MINOR] adding Itau Unibanco and OTICS to the powered-by page (#300)
f85f951 is described below

commit f85f9511dd367ef3be5717e7c23868e2bf6cb0ad
Author: scott-confluent <66280178+scott-conflu...@users.noreply.github.com>
AuthorDate: Mon Aug 31 12:41:26 2020 -0700

[MINOR] adding Itau Unibanco and OTICS to the powered-by page (#300)
---
 images/powered-by/itau.png  | Bin 0 -> 8432 bytes
 images/powered-by/otics.png | Bin 0 -> 25835 bytes
 powered-by.html |  10 ++++++++++
 3 files changed, 10 insertions(+)

diff --git a/images/powered-by/itau.png b/images/powered-by/itau.png
new file mode 100644
index 0000000..0898605
Binary files /dev/null and b/images/powered-by/itau.png differ
diff --git a/images/powered-by/otics.png b/images/powered-by/otics.png
new file mode 100644
index 0000000..e6392c7
Binary files /dev/null and b/images/powered-by/otics.png differ
diff --git a/powered-by.html b/powered-by.html
index 73cee12..725d42f 100644
--- a/powered-by.html
+++ b/powered-by.html
@@ -238,6 +238,11 @@
 "logoBgColor": "#007bb6",
 "description": "Apache Kafka is used at LinkedIn for activity stream 
data and operational metrics. This powers various products like LinkedIn 
Newsfeed, LinkedIn Today in addition to our offline analytics systems like 
Hadoop."
 }, {
+"link": "https://www.itau.com.br;,
+"logo": "itau.png",
+"logoBgColor": "#ff",
+"description": "Itaú Unibanco uses Apache Kafka for integrations, 
decoupling and application modernization. This kind of technology help us on 
digital strategies and enable us to deliver new solutions applied to the 
business, through application streaming and data pipelines, accelerating our 
digital transformation and evolving our technology architecture."
+}, {
 "link": "http://www.liveperson.com/;,
 "logo": "liveperson.png",
 "logoBgColor": "#ff",
@@ -313,6 +318,11 @@
 "logoBgColor": "#ff",
 "description": "Kafka is used as the primary high speed message queue 
to power Storm and our real-time analytics/event ingestion pipelines."
 }, {
+"link": "http://www.otics.ca/;,
+"logo": "otics.png",
+"logoBgColor": "#ff",
+"description": "We use Apache Kafka with our MAADS-VIPER product to 
manage the distribution of insights from thousands of machine learning 
algorithms that allow users or machines to publish and consume these insights 
for decision-making.  We also  use Kafka for Real-Time Machine Learning to 
create micro machine learning models that provide clients with transactional 
learnings very fast."
+}, {
 "link": "http://www.ovh.com/us/index.xml;,
 "logo": "ovh.png",
 "logoBgColor": "#ff",



[kafka] branch trunk updated (003dce5 -> bbfecae)

2020-02-21 Thread ewencp
This is an automated email from the ASF dual-hosted git repository.

ewencp pushed a change to branch trunk
in repository https://gitbox.apache.org/repos/asf/kafka.git.


from 003dce5  MINOR: Standby task commit needed when offsets updated (#8146)
 add bbfecae  MINOR: Document endpoints for connector topic tracking (KIP-558)

No new revisions were added by this update.

Summary of changes:
 docs/connect.html | 6 ++++++
 1 file changed, 6 insertions(+)



[kafka] branch 2.5 updated: MINOR: Document endpoints for connector topic tracking (KIP-558)

2020-02-21 Thread ewencp
This is an automated email from the ASF dual-hosted git repository.

ewencp pushed a commit to branch 2.5
in repository https://gitbox.apache.org/repos/asf/kafka.git


The following commit(s) were added to refs/heads/2.5 by this push:
 new 972119e  MINOR: Document endpoints for connector topic tracking (KIP-558)
972119e is described below

commit 972119e34c76eb9c936b698b622692002e459402
Author: Konstantine Karantasis 
AuthorDate: Fri Feb 21 12:25:35 2020 -0800

MINOR: Document endpoints for connector topic tracking (KIP-558)

Update the site documentation to include the endpoints introduced with 
KIP-558 and a short paragraph on how this feature is used in Connect.

Author: Konstantine Karantasis 

Reviewers: Toby Drake , Ewen Cheslack-Postava 


Closes #8148 from kkonstantine/kip-558-docs

(cherry picked from commit bbfecaef725456f648f03530d26a5395042966fa)
Signed-off-by: Ewen Cheslack-Postava 
---
 docs/connect.html | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/docs/connect.html b/docs/connect.html
index a92bb04..473569c 100644
--- a/docs/connect.html
+++ b/docs/connect.html
@@ -239,6 +239,8 @@
 POST /connectors/{name}/restart - restart a connector (typically because it has failed)
 POST /connectors/{name}/tasks/{taskId}/restart - restart an individual task (typically because it has failed)
 DELETE /connectors/{name} - delete a connector, halting all tasks and deleting its configuration
+GET /connectors/{name}/topics - get the set of topics that a specific connector is using since the connector was created or since a request to reset its set of active topics was issued
+PUT /connectors/{name}/topics/reset - send a request to empty the set of active topics of a connector


 Kafka Connect also provides a REST API for getting information about connector plugins:
@@ -577,6 +579,10 @@
 
 
 
+Starting with 2.5.0, Kafka Connect uses the status.storage.topic to also store information related to the topics that each connector is using. Connect Workers use these per-connector topic status updates to respond to requests to the REST endpoint GET /connectors/{name}/topics by returning the set of topic names that a connector is using. A request to the REST endpoint PUT /connectors/{name}/topics/reset resets the set of active topics for a con [...]
+
+
+
 It's sometimes useful to temporarily stop the message processing of a connector. For example, if the remote system is undergoing maintenance, it would be preferable for source connectors to stop polling it for new data instead of filling logs with exception spam. For this use case, Connect offers a pause/resume API. While a source connector is paused, Connect will stop polling it for additional records. While a sink connector is paused, Connect will stop pushing new messages to it. T [...]
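
For reference, a minimal sketch of exercising the two new endpoints with curl against a Connect worker on the default REST port 8083 (the connector name "my-connector" is hypothetical):

    # List the topics the connector has used since creation or the last reset
    curl -s http://localhost:8083/connectors/my-connector/topics

    # Empty the connector's set of active topics
    curl -s -X PUT http://localhost:8083/connectors/my-connector/topics/reset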
 
 



[kafka] branch trunk updated (aa4ba8e -> 6c8f654)

2019-08-19 Thread ewencp
This is an automated email from the ASF dual-hosted git repository.

ewencp pushed a change to branch trunk
in repository https://gitbox.apache.org/repos/asf/kafka.git.


 from aa4ba8e  KAFKA-8041: Enable producer retries in log dir failure test to address flakiness (#7200)
 add 6c8f654  MINOR: Upgrade ducktape to 0.7.6

No new revisions were added by this update.

Summary of changes:
 tests/docker/Dockerfile | 2 +-
 tests/setup.py  | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)



[kafka] branch trunk updated: KAFKA-7813: JmxTool throws NPE when --object-name is omitted

2019-03-17 Thread ewencp
This is an automated email from the ASF dual-hosted git repository.

ewencp pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/kafka.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 938580f  KAFKA-7813: JmxTool throws NPE when --object-name is omitted
938580f is described below

commit 938580ff6c5c27b1d7a3baf9cc09029ef3c2eb68
Author: huxihx 
AuthorDate: Sun Mar 17 18:50:55 2019 -0700

KAFKA-7813: JmxTool throws NPE when --object-name is omitted

https://issues.apache.org/jira/browse/KAFKA-7813

Running the JMX tool without the --object-name parameter results in a NullPointerException.

Author: huxihx 

Reviewers: Ewen Cheslack-Postava 

Closes #6139 from huxihx/KAFKA-7813
---
 core/src/main/scala/kafka/tools/JmxTool.scala | 22 +++++++++++++++-------
 1 file changed, 15 insertions(+), 7 deletions(-)

diff --git a/core/src/main/scala/kafka/tools/JmxTool.scala b/core/src/main/scala/kafka/tools/JmxTool.scala
index c5303a9..9451cc2 100644
--- a/core/src/main/scala/kafka/tools/JmxTool.scala
+++ b/core/src/main/scala/kafka/tools/JmxTool.scala
@@ -18,7 +18,7 @@
  */
 package kafka.tools
 
-import java.util.Date
+import java.util.{Date, Objects}
 import java.text.SimpleDateFormat
 import javax.management._
 import javax.management.remote._
@@ -28,7 +28,7 @@ import joptsimple.OptionParser
 import scala.collection.JavaConverters._
 import scala.collection.mutable
 import scala.math._
-import kafka.utils.{CommandLineUtils , Exit, Logging}
+import kafka.utils.{CommandLineUtils, Exit, Logging}
 
 
 /**
@@ -140,7 +140,7 @@ object JmxTool extends Logging {
   else
 List(null)
 
-val hasPatternQueries = queries.exists((name: ObjectName) => name.isPattern)
+val hasPatternQueries = queries.filterNot(Objects.isNull).exists((name: ObjectName) => name.isPattern)
 
 var names: Iterable[ObjectName] = null
 def namesSet = Option(names).toSet.flatten
@@ -165,12 +165,20 @@ object JmxTool extends Logging {
 }
 
 val numExpectedAttributes: Map[ObjectName, Int] =
-  if (attributesWhitelistExists)
-queries.map((_, attributesWhitelist.get.length)).toMap
-  else {
-names.map{(name: ObjectName) =>
+  if (!attributesWhitelistExists)
+names.map{name: ObjectName =>
   val mbean = mbsc.getMBeanInfo(name)
   (name, mbsc.getAttributes(name, mbean.getAttributes.map(_.getName)).size)}.toMap
+  else {
+if (!hasPatternQueries)
+  names.map{name: ObjectName =>
+val mbean = mbsc.getMBeanInfo(name)
+val attributes = mbsc.getAttributes(name, mbean.getAttributes.map(_.getName))
+val expectedAttributes = attributes.asScala.asInstanceOf[mutable.Buffer[Attribute]]
+  .filter(attr => attributesWhitelist.get.contains(attr.getName))
+(name, expectedAttributes.size)}.toMap.filter(_._2 > 0)
+else
+  queries.map((_, attributesWhitelist.get.length)).toMap
   }
 
 if(numExpectedAttributes.isEmpty) {
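
With this fix in place, the tool can be run with no --object-name at all to query every MBean. A minimal sketch of such an invocation, assuming a broker started with JMX enabled (e.g. JMX_PORT=9999; the host and port here are assumptions):

    # Poll all MBeans once per second; --object-name may now be omitted safely
    bin/kafka-run-class.sh kafka.tools.JmxTool \
      --jmx-url service:jmx:rmi:///jndi/rmi://localhost:9999/jmxrmi \
      --reporting-interval 1000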



[kafka] branch 2.1 updated: KAFKA-7834: Extend collected logs in system test services to include heap dumps

2019-02-19 Thread ewencp
This is an automated email from the ASF dual-hosted git repository.

ewencp pushed a commit to branch 2.1
in repository https://gitbox.apache.org/repos/asf/kafka.git


The following commit(s) were added to refs/heads/2.1 by this push:
 new 90fb79b  KAFKA-7834: Extend collected logs in system test services to include heap dumps
90fb79b is described below

commit 90fb79b4c17b242b83288bcabe06466347f5141f
Author: Konstantine Karantasis 
AuthorDate: Mon Feb 4 16:46:03 2019 -0800

KAFKA-7834: Extend collected logs in system test services to include heap dumps

* Enable heap dumps on OOM with -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath= in the major services in system tests
* Collect the heap dump from the predefined location as part of the result logs for each service
* Change Connect service to delete the whole root directory instead of individual expected files
* Tested by running the full suite of system tests

Author: Konstantine Karantasis 

Reviewers: Ewen Cheslack-Postava 

Closes #6158 from kkonstantine/KAFKA-7834
---
 tests/kafkatest/services/connect.py | 26 ++
 tests/kafkatest/services/kafka/kafka.py | 11 +--
 tests/kafkatest/services/zookeeper.py   | 10 --
 3 files changed, 39 insertions(+), 8 deletions(-)

diff --git a/tests/kafkatest/services/connect.py b/tests/kafkatest/services/connect.py
index bf38e50..40c2cf3 100644
--- a/tests/kafkatest/services/connect.py
+++ b/tests/kafkatest/services/connect.py
@@ -42,6 +42,7 @@ class ConnectServiceBase(KafkaPathResolverMixin, Service):
 PID_FILE = os.path.join(PERSISTENT_ROOT, "connect.pid")
 EXTERNAL_CONFIGS_FILE = os.path.join(PERSISTENT_ROOT, "connect-external-configs.properties")
 CONNECT_REST_PORT = 8083
+HEAP_DUMP_FILE = os.path.join(PERSISTENT_ROOT, "connect_heap_dump.bin")
 
 # Currently the Connect worker supports waiting on three modes:
 STARTUP_MODE_INSTANT = 'INSTANT'
@@ -61,6 +62,9 @@ class ConnectServiceBase(KafkaPathResolverMixin, Service):
 "connect_stderr": {
 "path": STDERR_FILE,
 "collect_default": True},
+"connect_heap_dump_file": {
+"path": HEAP_DUMP_FILE,
+"collect_default": True}
 }
 
 def __init__(self, context, num_nodes, kafka, files, startup_timeout_sec = 60):
@@ -160,8 +164,8 @@ class ConnectServiceBase(KafkaPathResolverMixin, Service):
 def clean_node(self, node):
 node.account.kill_process("connect", clean_shutdown=False, allow_fail=True)
 self.security_config.clean_node(node)
-all_files = " ".join([self.CONFIG_FILE, self.LOG4J_CONFIG_FILE, self.PID_FILE, self.LOG_FILE, self.STDOUT_FILE, self.STDERR_FILE, self.EXTERNAL_CONFIGS_FILE] + self.config_filenames() + self.files)
-node.account.ssh("rm -rf " + all_files, allow_fail=False)
+other_files = " ".join(self.config_filenames() + self.files)
+node.account.ssh("rm -rf -- %s %s" % (ConnectServiceBase.PERSISTENT_ROOT, other_files), allow_fail=False)
 
 def config_filenames(self):
 return [os.path.join(self.PERSISTENT_ROOT, "connect-connector-" + str(idx) + ".properties") for idx, template in enumerate(self.connector_config_templates or [])]
@@ -252,6 +256,14 @@ class ConnectServiceBase(KafkaPathResolverMixin, Service):
 def _base_url(self, node):
 return 'http://' + node.account.externally_routable_ip + ':' + str(self.CONNECT_REST_PORT)
 
+def append_to_environment_variable(self, envvar, value):
+env_opts = self.environment[envvar]
+if env_opts is None:
+env_opts = "\"%s\"" % value
+else:
+env_opts = "\"%s %s\"" % (env_opts.strip('\"'), value)
+self.environment[envvar] = env_opts
+
 
 class ConnectStandaloneService(ConnectServiceBase):
 """Runs Kafka Connect in standalone mode."""
@@ -266,7 +278,10 @@ class ConnectStandaloneService(ConnectServiceBase):
 
 def start_cmd(self, node, connector_configs):
 cmd = "( export KAFKA_LOG4J_OPTS=\"-Dlog4j.configuration=file:%s\"; " 
% self.LOG4J_CONFIG_FILE
-cmd += "export KAFKA_OPTS=%s; " % self.security_config.kafka_opts
+heap_kafka_opts = "-XX:+HeapDumpOnOutOfMemoryError 
-XX:HeapDumpPath=%s" % \
+  self.logs["connect_heap_dump_file"]["path"]
+other_kafka_opts = self.security_config.kafka_opts.strip('\"')
+cmd += "export KAFKA_OPTS=\"%s %s\"; " % (heap_kafka_opts, 
other_kafka_opts)
 for envvar in self.environment:
 cmd += "export %s=%s; " % (envvar, str(self.environment[envvar]))

[kafka] branch trunk updated: MINOR: Save failed test output to build output directory

2019-02-15 Thread ewencp
This is an automated email from the ASF dual-hosted git repository.

ewencp pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/kafka.git


The following commit(s) were added to refs/heads/trunk by this push:
 new ed30712  MINOR: Save failed test output to build output directory
ed30712 is described below

commit ed3071231aee1ba8a5c2c496112dd6034f9bf942
Author: Ewen Cheslack-Postava 
AuthorDate: Fri Feb 15 10:50:08 2019 -0800

MINOR: Save failed test output to build output directory

Author: Ewen Cheslack-Postava 

Reviewers: Colin Patrick McCabe 

Closes #6234 from ewencp/test-logs
---
 build.gradle | 66 +++-
 1 file changed, 65 insertions(+), 1 deletion(-)

diff --git a/build.gradle b/build.gradle
index 420edf7..ff316c8 100644
--- a/build.gradle
+++ b/build.gradle
@@ -15,6 +15,8 @@
 
 import org.ajoberstar.grgit.Grgit
 
+import java.nio.charset.StandardCharsets
+
 buildscript {
   repositories {
 mavenCentral()
@@ -139,6 +141,7 @@ if (file('.git').exists()) {
   }
 }
 
+
 subprojects {
   apply plugin: 'java'
   // apply the eclipse plugin only to subprojects that hold code. 'connect' is 
just a folder.
@@ -204,6 +207,65 @@ subprojects {
   def testLoggingEvents = ["passed", "skipped", "failed"]
   def testShowStandardStreams = false
   def testExceptionFormat = 'full'
+  // Gradle built-in logging only supports sending test output to stdout, which generates a lot
+  // of noise, especially for passing tests. We really only want output for failed tests. This
+  // hooks into the output and logs it (so we don't have to buffer it all in memory) and only
+  // saves the output for failing tests. Directory and filenames are such that you can, e.g.,
+  // create a Jenkins rule to collect failed test output.
+  def logTestStdout = {
+def testId = { TestDescriptor descriptor ->
+  "${descriptor.className}.${descriptor.name}".toString()
+}
+
+def logFiles = new HashMap()
+def logStreams = new HashMap()
+beforeTest { TestDescriptor td ->
+  def tid = testId(td)
+  def logFile = new File("${projectDir}/build/reports/testOutput/${tid}.test.stdout")
+  logFile.parentFile.mkdirs()
+  logFiles.put(tid, logFile)
+  logStreams.put(tid, new FileOutputStream(logFile))
+}
+onOutput { TestDescriptor td, TestOutputEvent toe ->
+  def tid = testId(td)
+  // Some output can happen outside the context of a specific test (e.g. at the class level)
+  // and beforeTest/afterTest seems to not be invoked for these cases (and similarly, there's
+  // a TestDescriptor hierarchy that includes the thread executing the test, Gradle tasks,
+  // etc). We see some of these in practice and it seems like something buggy in the Gradle
+  // test runner since we see it *before* any tests and it is frequently not related to any
+  // code in the test (best guess is that it is tail output from last test). We won't have
+  // an output file for these, so simply ignore them. If they become critical for debugging,
+  // they can be seen with showStandardStreams.
+  if (td.name == td.className) {
+return
+  }
+  try {
+logStreams.get(tid).write(toe.message.getBytes(StandardCharsets.UTF_8))
+  } catch (Exception e) {
+println "ERROR: Failed to write output for test ${tid}"
+e.printStackTrace()
+  }
+}
+afterTest { TestDescriptor td, TestResult tr ->
+  def tid = testId(td)
+  try {
+logStreams.get(tid).close()
+if (tr.resultType != TestResult.ResultType.FAILURE) {
+  logFiles.get(tid).delete()
+} else {
+  def file = logFiles.get(tid)
+  println "${tid} failed, log available in ${file}"
+}
+  } catch (Exception e) {
+println "ERROR: Failed to close stdout file for ${tid}"
+e.printStackTrace()
+  } finally {
+logFiles.remove(tid)
+logStreams.remove(tid)
+  }
+}
+  }
 
   test {
 maxParallelForks = userMaxForks ?: Runtime.runtime.availableProcessors()
@@ -216,7 +278,7 @@ subprojects {
   showStandardStreams = userShowStandardStreams ?: testShowStandardStreams
   exceptionFormat = testExceptionFormat
 }
-
+logTestStdout.rehydrate(delegate, owner, this)()
   }
 
   task integrationTest(type: Test, dependsOn: compileJava) {
@@ -230,6 +292,7 @@ subprojects {
   showStandardStreams = userShowStandardStreams ?: testShowStandardStreams
   exceptionFormat = testExceptionFormat
 }
+logTestStdout.rehydrate(delegate, owner, this)()
 
 useJUnit {
   includeCategories 'org.apache.kafka.test.IntegrationTest'
@@ -248,6 +311,7 @@ subprojects {
   showStandardStreams = userShowStandardStreams ?: testShowStandardStreams

[kafka] branch 2.0 updated: KAFKA-7834: Extend collected logs in system test services to include heap dumps

2019-02-04 Thread ewencp
This is an automated email from the ASF dual-hosted git repository.

ewencp pushed a commit to branch 2.0
in repository https://gitbox.apache.org/repos/asf/kafka.git


The following commit(s) were added to refs/heads/2.0 by this push:
 new d96c7ea  KAFKA-7834: Extend collected logs in system test services to include heap dumps
d96c7ea is described below

commit d96c7eae0b48bb222f08771848a4e5f9df7a6f73
Author: Konstantine Karantasis 
AuthorDate: Mon Feb 4 16:46:03 2019 -0800

KAFKA-7834: Extend collected logs in system test services to include heap dumps

* Enable heap dumps on OOM with -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath= in the major services in system tests
* Collect the heap dump from the predefined location as part of the result logs for each service
* Change Connect service to delete the whole root directory instead of individual expected files
* Tested by running the full suite of system tests

Author: Konstantine Karantasis 

Reviewers: Ewen Cheslack-Postava 

Closes #6158 from kkonstantine/KAFKA-7834

(cherry picked from commit 83c435f3babec485cf2091532191fe5420c27820)
Signed-off-by: Ewen Cheslack-Postava 
---
 tests/kafkatest/services/connect.py | 26 ++
 tests/kafkatest/services/kafka/kafka.py | 11 +--
 tests/kafkatest/services/zookeeper.py   | 10 --
 3 files changed, 39 insertions(+), 8 deletions(-)

diff --git a/tests/kafkatest/services/connect.py b/tests/kafkatest/services/connect.py
index bf38e50..40c2cf3 100644
--- a/tests/kafkatest/services/connect.py
+++ b/tests/kafkatest/services/connect.py
@@ -42,6 +42,7 @@ class ConnectServiceBase(KafkaPathResolverMixin, Service):
 PID_FILE = os.path.join(PERSISTENT_ROOT, "connect.pid")
 EXTERNAL_CONFIGS_FILE = os.path.join(PERSISTENT_ROOT, "connect-external-configs.properties")
 CONNECT_REST_PORT = 8083
+HEAP_DUMP_FILE = os.path.join(PERSISTENT_ROOT, "connect_heap_dump.bin")
 
 # Currently the Connect worker supports waiting on three modes:
 STARTUP_MODE_INSTANT = 'INSTANT'
@@ -61,6 +62,9 @@ class ConnectServiceBase(KafkaPathResolverMixin, Service):
 "connect_stderr": {
 "path": STDERR_FILE,
 "collect_default": True},
+"connect_heap_dump_file": {
+"path": HEAP_DUMP_FILE,
+"collect_default": True}
 }
 
 def __init__(self, context, num_nodes, kafka, files, startup_timeout_sec = 60):
@@ -160,8 +164,8 @@ class ConnectServiceBase(KafkaPathResolverMixin, Service):
 def clean_node(self, node):
 node.account.kill_process("connect", clean_shutdown=False, allow_fail=True)
 self.security_config.clean_node(node)
-all_files = " ".join([self.CONFIG_FILE, self.LOG4J_CONFIG_FILE, self.PID_FILE, self.LOG_FILE, self.STDOUT_FILE, self.STDERR_FILE, self.EXTERNAL_CONFIGS_FILE] + self.config_filenames() + self.files)
-node.account.ssh("rm -rf " + all_files, allow_fail=False)
+other_files = " ".join(self.config_filenames() + self.files)
+node.account.ssh("rm -rf -- %s %s" % (ConnectServiceBase.PERSISTENT_ROOT, other_files), allow_fail=False)
 
 def config_filenames(self):
 return [os.path.join(self.PERSISTENT_ROOT, "connect-connector-" + str(idx) + ".properties") for idx, template in enumerate(self.connector_config_templates or [])]
@@ -252,6 +256,14 @@ class ConnectServiceBase(KafkaPathResolverMixin, Service):
 def _base_url(self, node):
 return 'http://' + node.account.externally_routable_ip + ':' + str(self.CONNECT_REST_PORT)
 
+def append_to_environment_variable(self, envvar, value):
+env_opts = self.environment[envvar]
+if env_opts is None:
+env_opts = "\"%s\"" % value
+else:
+env_opts = "\"%s %s\"" % (env_opts.strip('\"'), value)
+self.environment[envvar] = env_opts
+
 
 class ConnectStandaloneService(ConnectServiceBase):
 """Runs Kafka Connect in standalone mode."""
@@ -266,7 +278,10 @@ class ConnectStandaloneService(ConnectServiceBase):
 
 def start_cmd(self, node, connector_configs):
 cmd = "( export KAFKA_LOG4J_OPTS=\"-Dlog4j.configuration=file:%s\"; " 
% self.LOG4J_CONFIG_FILE
-cmd += "export KAFKA_OPTS=%s; " % self.security_config.kafka_opts
+heap_kafka_opts = "-XX:+HeapDumpOnOutOfMemoryError 
-XX:HeapDumpPath=%s" % \
+  self.logs["connect_heap_dump_file"]["path"]
+other_kafka_opts = self.security_config.kafka_opts.strip('\"')
+cmd += "export KAFKA_OPTS=\"%s %s\"; " % (heap_kafka_opts, 
other_kafka_opts)
   

[kafka] branch 2.2 updated: KAFKA-7834: Extend collected logs in system test services to include heap dumps

2019-02-04 Thread ewencp
This is an automated email from the ASF dual-hosted git repository.

ewencp pushed a commit to branch 2.2
in repository https://gitbox.apache.org/repos/asf/kafka.git


The following commit(s) were added to refs/heads/2.2 by this push:
 new 257fd87  KAFKA-7834: Extend collected logs in system test services to include heap dumps
257fd87 is described below

commit 257fd87fd7823dab8048cd4afe5aa46f95eae75b
Author: Konstantine Karantasis 
AuthorDate: Mon Feb 4 16:46:03 2019 -0800

KAFKA-7834: Extend collected logs in system test services to include heap dumps

* Enable heap dumps on OOM with -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath= in the major services in system tests
* Collect the heap dump from the predefined location as part of the result logs for each service
* Change Connect service to delete the whole root directory instead of individual expected files
* Tested by running the full suite of system tests

Author: Konstantine Karantasis 

Reviewers: Ewen Cheslack-Postava 

Closes #6158 from kkonstantine/KAFKA-7834

(cherry picked from commit 83c435f3babec485cf2091532191fe5420c27820)
Signed-off-by: Ewen Cheslack-Postava 
---
 tests/kafkatest/services/connect.py | 26 ++
 tests/kafkatest/services/kafka/kafka.py | 11 +--
 tests/kafkatest/services/zookeeper.py   | 10 --
 3 files changed, 39 insertions(+), 8 deletions(-)

diff --git a/tests/kafkatest/services/connect.py b/tests/kafkatest/services/connect.py
index bf38e50..40c2cf3 100644
--- a/tests/kafkatest/services/connect.py
+++ b/tests/kafkatest/services/connect.py
@@ -42,6 +42,7 @@ class ConnectServiceBase(KafkaPathResolverMixin, Service):
 PID_FILE = os.path.join(PERSISTENT_ROOT, "connect.pid")
 EXTERNAL_CONFIGS_FILE = os.path.join(PERSISTENT_ROOT, "connect-external-configs.properties")
 CONNECT_REST_PORT = 8083
+HEAP_DUMP_FILE = os.path.join(PERSISTENT_ROOT, "connect_heap_dump.bin")
 
 # Currently the Connect worker supports waiting on three modes:
 STARTUP_MODE_INSTANT = 'INSTANT'
@@ -61,6 +62,9 @@ class ConnectServiceBase(KafkaPathResolverMixin, Service):
 "connect_stderr": {
 "path": STDERR_FILE,
 "collect_default": True},
+"connect_heap_dump_file": {
+"path": HEAP_DUMP_FILE,
+"collect_default": True}
 }
 
 def __init__(self, context, num_nodes, kafka, files, startup_timeout_sec = 60):
@@ -160,8 +164,8 @@ class ConnectServiceBase(KafkaPathResolverMixin, Service):
 def clean_node(self, node):
 node.account.kill_process("connect", clean_shutdown=False, allow_fail=True)
 self.security_config.clean_node(node)
-all_files = " ".join([self.CONFIG_FILE, self.LOG4J_CONFIG_FILE, self.PID_FILE, self.LOG_FILE, self.STDOUT_FILE, self.STDERR_FILE, self.EXTERNAL_CONFIGS_FILE] + self.config_filenames() + self.files)
-node.account.ssh("rm -rf " + all_files, allow_fail=False)
+other_files = " ".join(self.config_filenames() + self.files)
+node.account.ssh("rm -rf -- %s %s" % (ConnectServiceBase.PERSISTENT_ROOT, other_files), allow_fail=False)
 
 def config_filenames(self):
 return [os.path.join(self.PERSISTENT_ROOT, "connect-connector-" + str(idx) + ".properties") for idx, template in enumerate(self.connector_config_templates or [])]
@@ -252,6 +256,14 @@ class ConnectServiceBase(KafkaPathResolverMixin, Service):
 def _base_url(self, node):
 return 'http://' + node.account.externally_routable_ip + ':' + str(self.CONNECT_REST_PORT)
 
+def append_to_environment_variable(self, envvar, value):
+env_opts = self.environment[envvar]
+if env_opts is None:
+env_opts = "\"%s\"" % value
+else:
+env_opts = "\"%s %s\"" % (env_opts.strip('\"'), value)
+self.environment[envvar] = env_opts
+
 
 class ConnectStandaloneService(ConnectServiceBase):
 """Runs Kafka Connect in standalone mode."""
@@ -266,7 +278,10 @@ class ConnectStandaloneService(ConnectServiceBase):
 
 def start_cmd(self, node, connector_configs):
 cmd = "( export KAFKA_LOG4J_OPTS=\"-Dlog4j.configuration=file:%s\"; " 
% self.LOG4J_CONFIG_FILE
-cmd += "export KAFKA_OPTS=%s; " % self.security_config.kafka_opts
+heap_kafka_opts = "-XX:+HeapDumpOnOutOfMemoryError 
-XX:HeapDumpPath=%s" % \
+  self.logs["connect_heap_dump_file"]["path"]
+other_kafka_opts = self.security_config.kafka_opts.strip('\"')
+cmd += "export KAFKA_OPTS=\"%s %s\"; " % (heap_kafka_opts, 
other_kafka_opts)
   

[kafka] branch trunk updated: KAFKA-7834: Extend collected logs in system test services to include heap dumps

2019-02-04 Thread ewencp
This is an automated email from the ASF dual-hosted git repository.

ewencp pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/kafka.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 83c435f  KAFKA-7834: Extend collected logs in system test services to include heap dumps
83c435f is described below

commit 83c435f3babec485cf2091532191fe5420c27820
Author: Konstantine Karantasis 
AuthorDate: Mon Feb 4 16:46:03 2019 -0800

KAFKA-7834: Extend collected logs in system test services to include heap dumps

* Enable heap dumps on OOM with -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath= in the major services in system tests
* Collect the heap dump from the predefined location as part of the result logs for each service
* Change Connect service to delete the whole root directory instead of individual expected files
* Tested by running the full suite of system tests

Author: Konstantine Karantasis 

Reviewers: Ewen Cheslack-Postava 

Closes #6158 from kkonstantine/KAFKA-7834
---
 tests/kafkatest/services/connect.py | 26 ++
 tests/kafkatest/services/kafka/kafka.py | 11 +--
 tests/kafkatest/services/zookeeper.py   | 10 --
 3 files changed, 39 insertions(+), 8 deletions(-)

diff --git a/tests/kafkatest/services/connect.py b/tests/kafkatest/services/connect.py
index bf38e50..40c2cf3 100644
--- a/tests/kafkatest/services/connect.py
+++ b/tests/kafkatest/services/connect.py
@@ -42,6 +42,7 @@ class ConnectServiceBase(KafkaPathResolverMixin, Service):
 PID_FILE = os.path.join(PERSISTENT_ROOT, "connect.pid")
 EXTERNAL_CONFIGS_FILE = os.path.join(PERSISTENT_ROOT, "connect-external-configs.properties")
 CONNECT_REST_PORT = 8083
+HEAP_DUMP_FILE = os.path.join(PERSISTENT_ROOT, "connect_heap_dump.bin")
 
 # Currently the Connect worker supports waiting on three modes:
 STARTUP_MODE_INSTANT = 'INSTANT'
@@ -61,6 +62,9 @@ class ConnectServiceBase(KafkaPathResolverMixin, Service):
 "connect_stderr": {
 "path": STDERR_FILE,
 "collect_default": True},
+"connect_heap_dump_file": {
+"path": HEAP_DUMP_FILE,
+"collect_default": True}
 }
 
 def __init__(self, context, num_nodes, kafka, files, startup_timeout_sec = 60):
@@ -160,8 +164,8 @@ class ConnectServiceBase(KafkaPathResolverMixin, Service):
 def clean_node(self, node):
 node.account.kill_process("connect", clean_shutdown=False, allow_fail=True)
 self.security_config.clean_node(node)
-all_files = " ".join([self.CONFIG_FILE, self.LOG4J_CONFIG_FILE, self.PID_FILE, self.LOG_FILE, self.STDOUT_FILE, self.STDERR_FILE, self.EXTERNAL_CONFIGS_FILE] + self.config_filenames() + self.files)
-node.account.ssh("rm -rf " + all_files, allow_fail=False)
+other_files = " ".join(self.config_filenames() + self.files)
+node.account.ssh("rm -rf -- %s %s" % (ConnectServiceBase.PERSISTENT_ROOT, other_files), allow_fail=False)
 
 def config_filenames(self):
 return [os.path.join(self.PERSISTENT_ROOT, "connect-connector-" + str(idx) + ".properties") for idx, template in enumerate(self.connector_config_templates or [])]
@@ -252,6 +256,14 @@ class ConnectServiceBase(KafkaPathResolverMixin, Service):
 def _base_url(self, node):
 return 'http://' + node.account.externally_routable_ip + ':' + str(self.CONNECT_REST_PORT)
 
+def append_to_environment_variable(self, envvar, value):
+env_opts = self.environment[envvar]
+if env_opts is None:
+env_opts = "\"%s\"" % value
+else:
+env_opts = "\"%s %s\"" % (env_opts.strip('\"'), value)
+self.environment[envvar] = env_opts
+
 
 class ConnectStandaloneService(ConnectServiceBase):
 """Runs Kafka Connect in standalone mode."""
@@ -266,7 +278,10 @@ class ConnectStandaloneService(ConnectServiceBase):
 
 def start_cmd(self, node, connector_configs):
 cmd = "( export KAFKA_LOG4J_OPTS=\"-Dlog4j.configuration=file:%s\"; " 
% self.LOG4J_CONFIG_FILE
-cmd += "export KAFKA_OPTS=%s; " % self.security_config.kafka_opts
+heap_kafka_opts = "-XX:+HeapDumpOnOutOfMemoryError 
-XX:HeapDumpPath=%s" % \
+  self.logs["connect_heap_dump_file"]["path"]
+other_kafka_opts = self.security_config.kafka_opts.strip('\"')
+cmd += "export KAFKA_OPTS=\"%s %s\"; " % (heap_kafka_opts, 
other_kafka_opts)
 for envvar in self.environment:
 cmd += "export %s=%s; " % (envvar, str(self.environment[envvar]))

[kafka] branch 1.1 updated: MINOR: Upgrade ducktape to 0.7.5 (#6197)

2019-02-04 Thread ewencp
This is an automated email from the ASF dual-hosted git repository.

ewencp pushed a commit to branch 1.1
in repository https://gitbox.apache.org/repos/asf/kafka.git


The following commit(s) were added to refs/heads/1.1 by this push:
 new 693993d  MINOR: Upgrade ducktape to 0.7.5 (#6197)
693993d is described below

commit 693993debb410c6eccc26e80f15a010834f9accf
Author: Konstantine Karantasis 
AuthorDate: Fri Jan 25 11:14:19 2019 -0800

MINOR: Upgrade ducktape to 0.7.5 (#6197)

Reviewed-by: Colin P. McCabe 
---
 tests/docker/Dockerfile | 2 +-
 tests/setup.py  | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/tests/docker/Dockerfile b/tests/docker/Dockerfile
index 81bd128..b923a42 100644
--- a/tests/docker/Dockerfile
+++ b/tests/docker/Dockerfile
@@ -32,7 +32,7 @@ LABEL ducker.creator=$ducker_creator
 
 # Update Linux and install necessary utilities.
 RUN apt update && apt install -y sudo netcat iptables rsync unzip wget curl jq coreutils openssh-server net-tools vim python-pip python-dev libffi-dev libssl-dev cmake pkg-config libfuse-dev && apt-get -y clean
-RUN pip install -U pip setuptools && pip install --upgrade cffi virtualenv pyasn1 boto3 pycrypto pywinrm ipaddress enum34 && pip install --upgrade ducktape==0.7.1
+RUN pip install -U pip==9.0.3 setuptools && pip install --upgrade cffi virtualenv pyasn1 boto3 pycrypto pywinrm ipaddress enum34 && pip install --upgrade ducktape==0.7.5
 
 # Set up ssh
 COPY ./ssh-config /root/.ssh/config
diff --git a/tests/setup.py b/tests/setup.py
index 24ee4eb..14a3695 100644
--- a/tests/setup.py
+++ b/tests/setup.py
@@ -51,7 +51,7 @@ setup(name="kafkatest",
   license="apache2.0",
   packages=find_packages(),
   include_package_data=True,
-  install_requires=["ducktape==0.7.1", "requests>=2.5.0"],
+  install_requires=["ducktape==0.7.5", "requests==2.20.0"],
   tests_require=["pytest", "mock"],
   cmdclass={'test': PyTest},
   )
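
A sketch of reproducing the pinned environment and running a single system test locally (the test path is illustrative):

    pip install -U pip==9.0.3 setuptools
    pip install --upgrade ducktape==0.7.5 requests==2.20.0
    ducktape tests/kafkatest/tests/core/replication_test.py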



[kafka] branch 1.0 updated: MINOR: Upgrade ducktape to 0.7.5 (#6197)

2019-02-04 Thread ewencp
This is an automated email from the ASF dual-hosted git repository.

ewencp pushed a commit to branch 1.0
in repository https://gitbox.apache.org/repos/asf/kafka.git


The following commit(s) were added to refs/heads/1.0 by this push:
 new 4ce05d9  MINOR: Upgrade ducktape to 0.7.5 (#6197)
4ce05d9 is described below

commit 4ce05d980228af622ab50b24e9c860aca9ab9478
Author: Konstantine Karantasis 
AuthorDate: Fri Jan 25 11:14:19 2019 -0800

MINOR: Upgrade ducktape to 0.7.5 (#6197)

Reviewed-by: Colin P. McCabe 
---
 tests/docker/Dockerfile | 2 +-
 tests/setup.py  | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/tests/docker/Dockerfile b/tests/docker/Dockerfile
index 921b524..033a96a 100644
--- a/tests/docker/Dockerfile
+++ b/tests/docker/Dockerfile
@@ -33,7 +33,7 @@ LABEL ducker.creator=$ducker_creator
 
 # Update Linux and install necessary utilities.
 RUN apt update && apt install -y sudo netcat iptables rsync unzip wget curl jq coreutils openssh-server net-tools vim python-pip python-dev libffi-dev libssl-dev cmake pkg-config libfuse-dev && apt-get -y clean
-RUN pip install -U pip setuptools && pip install --upgrade cffi virtualenv pyasn1 boto3 pycrypto pywinrm ipaddress enum34 && pip install --upgrade ducktape==0.7.1
+RUN pip install -U pip==9.0.3 setuptools && pip install --upgrade cffi virtualenv pyasn1 boto3 pycrypto pywinrm ipaddress enum34 && pip install --upgrade ducktape==0.7.5
 
 # Set up ssh
 COPY ./ssh-config /root/.ssh/config
diff --git a/tests/setup.py b/tests/setup.py
index 24ee4eb..14a3695 100644
--- a/tests/setup.py
+++ b/tests/setup.py
@@ -51,7 +51,7 @@ setup(name="kafkatest",
   license="apache2.0",
   packages=find_packages(),
   include_package_data=True,
-  install_requires=["ducktape==0.7.1", "requests>=2.5.0"],
+  install_requires=["ducktape==0.7.5", "requests==2.20.0"],
   tests_require=["pytest", "mock"],
   cmdclass={'test': PyTest},
   )



[kafka] branch 2.0 updated: MINOR: Upgrade ducktape to 0.7.5 (#6197)

2019-02-04 Thread ewencp
This is an automated email from the ASF dual-hosted git repository.

ewencp pushed a commit to branch 2.0
in repository https://gitbox.apache.org/repos/asf/kafka.git


The following commit(s) were added to refs/heads/2.0 by this push:
 new 47bb1a4  MINOR: Upgrade ducktape to 0.7.5 (#6197)
47bb1a4 is described below

commit 47bb1a46689b7b11895a221d40506cd5c5bf3b6e
Author: Konstantine Karantasis 
AuthorDate: Fri Jan 25 11:14:19 2019 -0800

MINOR: Upgrade ducktape to 0.7.5 (#6197)

Reviewed-by: Colin P. McCabe 
---
 tests/docker/Dockerfile | 2 +-
 tests/setup.py  | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/tests/docker/Dockerfile b/tests/docker/Dockerfile
index 11c6fb6..3d577b6 100644
--- a/tests/docker/Dockerfile
+++ b/tests/docker/Dockerfile
@@ -32,7 +32,7 @@ LABEL ducker.creator=$ducker_creator
 
 # Update Linux and install necessary utilities.
 RUN apt update && apt install -y sudo netcat iptables rsync unzip wget curl jq coreutils openssh-server net-tools vim python-pip python-dev libffi-dev libssl-dev cmake pkg-config libfuse-dev && apt-get -y clean
-RUN pip install -U pip==9.0.3 setuptools && pip install --upgrade cffi virtualenv pyasn1 boto3 pycrypto pywinrm ipaddress enum34 && pip install --upgrade ducktape==0.7.1
+RUN pip install -U pip==9.0.3 setuptools && pip install --upgrade cffi virtualenv pyasn1 boto3 pycrypto pywinrm ipaddress enum34 && pip install --upgrade ducktape==0.7.5
 
 # Set up ssh
 COPY ./ssh-config /root/.ssh/config
diff --git a/tests/setup.py b/tests/setup.py
index 7d7c4a4..a0de1d4 100644
--- a/tests/setup.py
+++ b/tests/setup.py
@@ -51,7 +51,7 @@ setup(name="kafkatest",
   license="apache2.0",
   packages=find_packages(),
   include_package_data=True,
-  install_requires=["ducktape==0.7.1", "requests>=2.5.0"],
+  install_requires=["ducktape==0.7.5", "requests==2.20.0"],
   tests_require=["pytest", "mock"],
   cmdclass={'test': PyTest},
   )



[kafka] branch 1.0 updated: MINOR: upgrade to jdk8 8u202

2019-01-25 Thread ewencp
This is an automated email from the ASF dual-hosted git repository.

ewencp pushed a commit to branch 1.0
in repository https://gitbox.apache.org/repos/asf/kafka.git


The following commit(s) were added to refs/heads/1.0 by this push:
 new bb47925  MINOR: upgrade to jdk8 8u202
bb47925 is described below

commit bb47925181f4a0a68216c6883cad96fd15530a1c
Author: Jarek Rudzinski 
AuthorDate: Thu Jan 24 22:19:19 2019 -0800

MINOR: upgrade to jdk8 8u202

Upgrade from 171 to 202. Unpack and install directly from a cached tgz rather than going via the installer deb from webupd8. The installer is still on 8u191 while we want 202.

Testing via kafka branch builder job

https://jenkins.confluent.io/job/system-test-kafka-branch-builder/2305/

Author: Jarek Rudzinski 
Author: Ewen Cheslack-Postava 

Reviewers: Alex Diachenko , Ewen Cheslack-Postava 


Closes #6165 from jarekr/trunk-jdk8-from-tgz

(cherry picked from commit ad3b6dd83571d06aa9b39c9c37e8663a017c6916)
Signed-off-by: Ewen Cheslack-Postava 
---
 Vagrantfile | 20 +++
 vagrant/base.sh | 59 -
 2 files changed, 40 insertions(+), 39 deletions(-)

diff --git a/Vagrantfile b/Vagrantfile
index 88f2028..ee08487 100644
--- a/Vagrantfile
+++ b/Vagrantfile
@@ -40,7 +40,7 @@ ec2_keypair_file = nil
 
 ec2_region = "us-east-1"
 ec2_az = nil # Uses set by AWS
-ec2_ami = "ami-905730e8"
+ec2_ami = "ami-29ebb519"
 ec2_instance_type = "m3.medium"
 ec2_spot_instance = ENV['SPOT_INSTANCE'] ? ENV['SPOT_INSTANCE'] == 'true' : true
 ec2_spot_max_price = "0.113"  # On-demand price for instance type
@@ -52,6 +52,9 @@ ec2_subnet_id = nil
 # are running Vagrant from within that VPC as well.
 ec2_associate_public_ip = nil
 
+jdk_major = '8'
+jdk_full = '8u202-linux-x64'
+
 local_config_file = File.join(File.dirname(__FILE__), "Vagrantfile.local")
 if File.exists?(local_config_file) then
   eval(File.read(local_config_file), binding, "Vagrantfile.local")
@@ -75,15 +78,6 @@ Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
 
 if Vagrant.has_plugin?("vagrant-cachier")
   override.cache.scope = :box
-  # Besides the defaults, we use a custom cache to handle the Oracle JDK
-  # download, which downloads via wget during an apt install. Because of the
-  # way the installer ends up using its cache directory, we need to jump
-  # through some hoops instead of just specifying a cache directly -- we
-  # share to a temporary location and the provisioning scripts symlink data
-  # to the right location.
-  override.cache.enable :generic, {
-"oracle-jdk8" => { cache_dir: "/tmp/oracle-jdk8-installer-cache" },
-  }
 end
   end
 
@@ -169,7 +163,7 @@ Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
   name_node(zookeeper, name, ec2_instance_name_prefix)
   ip_address = "192.168.50." + (10 + i).to_s
   assign_local_ip(zookeeper, ip_address)
-  zookeeper.vm.provision "shell", path: "vagrant/base.sh"
+  zookeeper.vm.provision "shell", path: "vagrant/base.sh", env: {"JDK_MAJOR" => jdk_major, "JDK_FULL" => jdk_full}
   zk_jmx_port = enable_jmx ? (8000 + i).to_s : ""
   zookeeper.vm.provision "shell", path: "vagrant/zk.sh", :args => [i.to_s, num_zookeepers, zk_jmx_port]
 end
@@ -186,7 +180,7 @@ Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
   # host DNS isn't setup, we shouldn't use hostnames -- IP addresses must be
   # used to support clients running on the host.
   zookeeper_connect = zookeepers.map{ |zk_addr| zk_addr + ":2181"}.join(",")
-  broker.vm.provision "shell", path: "vagrant/base.sh"
+  broker.vm.provision "shell", path: "vagrant/base.sh", env: {"JDK_MAJOR" => jdk_major, "JDK_FULL" => jdk_full}
   kafka_jmx_port = enable_jmx ? (9000 + i).to_s : ""
   broker.vm.provision "shell", path: "vagrant/broker.sh", :args => [i.to_s, enable_dns ? name : ip_address, zookeeper_connect, kafka_jmx_port]
 end
@@ -198,7 +192,7 @@ Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
   name_node(worker, name, ec2_instance_name_prefix)
   ip_address = "192.168.50." + (100 + i).to_s
   assign_local_ip(worker, ip_address)
-  worker.vm.provision "shell", path: "vagrant/base.sh"
+  worker.vm.provision "shell", path: "vagrant/base.sh", env: {"JDK_MAJOR" => jdk_major, "JDK_FULL" => jdk_full}
 end
   }
 
diff --git a/vagrant/base.sh b/vagrant/base.sh
index d77ed40..9072f3f 100755
--- a/vagrant/base.sh
+++ b/vagrant/base.sh

[kafka] branch 1.1 updated: MINOR: upgrade to jdk8 8u202

2019-01-24 Thread ewencp
This is an automated email from the ASF dual-hosted git repository.

ewencp pushed a commit to branch 1.1
in repository https://gitbox.apache.org/repos/asf/kafka.git


The following commit(s) were added to refs/heads/1.1 by this push:
 new 3a23f4b  MINOR: upgrade to jdk8 8u202
3a23f4b is described below

commit 3a23f4b9b0b6a99dd3c7d0fa3520d5636b7e27c5
Author: Jarek Rudzinski 
AuthorDate: Thu Jan 24 22:19:19 2019 -0800

MINOR: upgrade to jdk8 8u202

Upgrade from 171 to 202. Unpack and install directly from a cached tgz rather than going via the installer deb from webupd8. The installer is still on 8u191 while we want 202.

Testing via kafka branch builder job

https://jenkins.confluent.io/job/system-test-kafka-branch-builder/2305/

Author: Jarek Rudzinski 
Author: Ewen Cheslack-Postava 

Reviewers: Alex Diachenko , Ewen Cheslack-Postava 


Closes #6165 from jarekr/trunk-jdk8-from-tgz

(cherry picked from commit ad3b6dd83571d06aa9b39c9c37e8663a017c6916)
Signed-off-by: Ewen Cheslack-Postava 
---
 Vagrantfile | 20 +++
 vagrant/base.sh | 61 -
 2 files changed, 41 insertions(+), 40 deletions(-)

diff --git a/Vagrantfile b/Vagrantfile
index 88f2028..ee08487 100644
--- a/Vagrantfile
+++ b/Vagrantfile
@@ -40,7 +40,7 @@ ec2_keypair_file = nil
 
 ec2_region = "us-east-1"
 ec2_az = nil # Uses set by AWS
-ec2_ami = "ami-905730e8"
+ec2_ami = "ami-29ebb519"
 ec2_instance_type = "m3.medium"
 ec2_spot_instance = ENV['SPOT_INSTANCE'] ? ENV['SPOT_INSTANCE'] == 'true' : true
 ec2_spot_max_price = "0.113"  # On-demand price for instance type
@@ -52,6 +52,9 @@ ec2_subnet_id = nil
 # are running Vagrant from within that VPC as well.
 ec2_associate_public_ip = nil
 
+jdk_major = '8'
+jdk_full = '8u202-linux-x64'
+
 local_config_file = File.join(File.dirname(__FILE__), "Vagrantfile.local")
 if File.exists?(local_config_file) then
   eval(File.read(local_config_file), binding, "Vagrantfile.local")
@@ -75,15 +78,6 @@ Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
 
 if Vagrant.has_plugin?("vagrant-cachier")
   override.cache.scope = :box
-  # Besides the defaults, we use a custom cache to handle the Oracle JDK
-  # download, which downloads via wget during an apt install. Because of the
-  # way the installer ends up using its cache directory, we need to jump
-  # through some hoops instead of just specifying a cache directly -- we
-  # share to a temporary location and the provisioning scripts symlink data
-  # to the right location.
-  override.cache.enable :generic, {
-"oracle-jdk8" => { cache_dir: "/tmp/oracle-jdk8-installer-cache" },
-  }
 end
   end
 
@@ -169,7 +163,7 @@ Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
   name_node(zookeeper, name, ec2_instance_name_prefix)
   ip_address = "192.168.50." + (10 + i).to_s
   assign_local_ip(zookeeper, ip_address)
-  zookeeper.vm.provision "shell", path: "vagrant/base.sh"
+  zookeeper.vm.provision "shell", path: "vagrant/base.sh", env: {"JDK_MAJOR" => jdk_major, "JDK_FULL" => jdk_full}
   zk_jmx_port = enable_jmx ? (8000 + i).to_s : ""
   zookeeper.vm.provision "shell", path: "vagrant/zk.sh", :args => [i.to_s, num_zookeepers, zk_jmx_port]
 end
@@ -186,7 +180,7 @@ Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
   # host DNS isn't setup, we shouldn't use hostnames -- IP addresses must be
   # used to support clients running on the host.
   zookeeper_connect = zookeepers.map{ |zk_addr| zk_addr + ":2181"}.join(",")
-  broker.vm.provision "shell", path: "vagrant/base.sh"
+  broker.vm.provision "shell", path: "vagrant/base.sh", env: {"JDK_MAJOR" => jdk_major, "JDK_FULL" => jdk_full}
   kafka_jmx_port = enable_jmx ? (9000 + i).to_s : ""
   broker.vm.provision "shell", path: "vagrant/broker.sh", :args => [i.to_s, enable_dns ? name : ip_address, zookeeper_connect, kafka_jmx_port]
 end
@@ -198,7 +192,7 @@ Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
   name_node(worker, name, ec2_instance_name_prefix)
   ip_address = "192.168.50." + (100 + i).to_s
   assign_local_ip(worker, ip_address)
-  worker.vm.provision "shell", path: "vagrant/base.sh"
+  worker.vm.provision "shell", path: "vagrant/base.sh", env: {"JDK_MAJOR" => jdk_major, "JDK_FULL" => jdk_full}
 end
   }
 
diff --git a/vagrant/base.sh b/vagrant/base.sh
index 4243bb0..ad05c78 100755
--- a/vagrant/base.sh

[kafka] branch 2.1 updated: MINOR: upgrade to jdk8 8u202

2019-01-24 Thread ewencp
This is an automated email from the ASF dual-hosted git repository.

ewencp pushed a commit to branch 2.1
in repository https://gitbox.apache.org/repos/asf/kafka.git


The following commit(s) were added to refs/heads/2.1 by this push:
 new 5b2de1d  MINOR: upgrade to jdk8 8u202
5b2de1d is described below

commit 5b2de1da376f0eaf0ed8affa3f34a897f0859f05
Author: Jarek Rudzinski 
AuthorDate: Thu Jan 24 22:19:19 2019 -0800

MINOR: upgrade to jdk8 8u202

Upgrade from 171 to 202. Unpack and install directly from a cached tgz rather than going via the installer deb from webupd8. The installer is still on 8u191 while we want 202.

Testing via kafka branch builder job

https://jenkins.confluent.io/job/system-test-kafka-branch-builder/2305/

Author: Jarek Rudzinski 
Author: Ewen Cheslack-Postava 

Reviewers: Alex Diachenko , Ewen Cheslack-Postava 


Closes #6165 from jarekr/trunk-jdk8-from-tgz

(cherry picked from commit ad3b6dd83571d06aa9b39c9c37e8663a017c6916)
Signed-off-by: Ewen Cheslack-Postava 
---
 Vagrantfile | 20 +++
 vagrant/base.sh | 61 -
 2 files changed, 41 insertions(+), 40 deletions(-)

diff --git a/Vagrantfile b/Vagrantfile
index 88f2028..ee08487 100644
--- a/Vagrantfile
+++ b/Vagrantfile
@@ -40,7 +40,7 @@ ec2_keypair_file = nil
 
 ec2_region = "us-east-1"
 ec2_az = nil # Uses set by AWS
-ec2_ami = "ami-905730e8"
+ec2_ami = "ami-29ebb519"
 ec2_instance_type = "m3.medium"
 ec2_spot_instance = ENV['SPOT_INSTANCE'] ? ENV['SPOT_INSTANCE'] == 'true' : true
 ec2_spot_max_price = "0.113"  # On-demand price for instance type
@@ -52,6 +52,9 @@ ec2_subnet_id = nil
 # are running Vagrant from within that VPC as well.
 ec2_associate_public_ip = nil
 
+jdk_major = '8'
+jdk_full = '8u202-linux-x64'
+
 local_config_file = File.join(File.dirname(__FILE__), "Vagrantfile.local")
 if File.exists?(local_config_file) then
   eval(File.read(local_config_file), binding, "Vagrantfile.local")
@@ -75,15 +78,6 @@ Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
 
 if Vagrant.has_plugin?("vagrant-cachier")
   override.cache.scope = :box
-  # Besides the defaults, we use a custom cache to handle the Oracle JDK
-  # download, which downloads via wget during an apt install. Because of the
-  # way the installer ends up using its cache directory, we need to jump
-  # through some hoops instead of just specifying a cache directly -- we
-  # share to a temporary location and the provisioning scripts symlink data
-  # to the right location.
-  override.cache.enable :generic, {
-"oracle-jdk8" => { cache_dir: "/tmp/oracle-jdk8-installer-cache" },
-  }
 end
   end
 
@@ -169,7 +163,7 @@ Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
   name_node(zookeeper, name, ec2_instance_name_prefix)
   ip_address = "192.168.50." + (10 + i).to_s
   assign_local_ip(zookeeper, ip_address)
-  zookeeper.vm.provision "shell", path: "vagrant/base.sh"
+  zookeeper.vm.provision "shell", path: "vagrant/base.sh", env: {"JDK_MAJOR" => jdk_major, "JDK_FULL" => jdk_full}
   zk_jmx_port = enable_jmx ? (8000 + i).to_s : ""
   zookeeper.vm.provision "shell", path: "vagrant/zk.sh", :args => [i.to_s, num_zookeepers, zk_jmx_port]
 end
@@ -186,7 +180,7 @@ Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
   # host DNS isn't setup, we shouldn't use hostnames -- IP addresses must be
   # used to support clients running on the host.
   zookeeper_connect = zookeepers.map{ |zk_addr| zk_addr + ":2181"}.join(",")
-  broker.vm.provision "shell", path: "vagrant/base.sh"
+  broker.vm.provision "shell", path: "vagrant/base.sh", env: {"JDK_MAJOR" => jdk_major, "JDK_FULL" => jdk_full}
   kafka_jmx_port = enable_jmx ? (9000 + i).to_s : ""
   broker.vm.provision "shell", path: "vagrant/broker.sh", :args => [i.to_s, enable_dns ? name : ip_address, zookeeper_connect, kafka_jmx_port]
 end
@@ -198,7 +192,7 @@ Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
   name_node(worker, name, ec2_instance_name_prefix)
   ip_address = "192.168.50." + (100 + i).to_s
   assign_local_ip(worker, ip_address)
-  worker.vm.provision "shell", path: "vagrant/base.sh"
+  worker.vm.provision "shell", path: "vagrant/base.sh", env: {"JDK_MAJOR" => jdk_major, "JDK_FULL" => jdk_full}
 end
   }
 
diff --git a/vagrant/base.sh b/vagrant/base.sh
index 4429096..6b7e5bc 100755
--- a/vagrant/base.sh

[kafka] branch 2.0 updated: MINOR: upgrade to jdk8 8u202

2019-01-24 Thread ewencp
This is an automated email from the ASF dual-hosted git repository.

ewencp pushed a commit to branch 2.0
in repository https://gitbox.apache.org/repos/asf/kafka.git


The following commit(s) were added to refs/heads/2.0 by this push:
 new e31b973  MINOR: upgrade to jdk8 8u202
e31b973 is described below

commit e31b973122b0daa88c0c314f353173f39c23d2bd
Author: Jarek Rudzinski 
AuthorDate: Thu Jan 24 22:19:19 2019 -0800

MINOR: upgrade to jdk8 8u202

Upgrade from 171 to 202. Unpack and install directly from a cached tgz rather than going via the installer deb from webupd8. The installer is still on 8u191 while we want 202.

Testing via kafka branch builder job

https://jenkins.confluent.io/job/system-test-kafka-branch-builder/2305/

Author: Jarek Rudzinski 
Author: Ewen Cheslack-Postava 

Reviewers: Alex Diachenko , Ewen Cheslack-Postava 


Closes #6165 from jarekr/trunk-jdk8-from-tgz

(cherry picked from commit ad3b6dd83571d06aa9b39c9c37e8663a017c6916)
Signed-off-by: Ewen Cheslack-Postava 
---
 Vagrantfile | 20 +++
 vagrant/base.sh | 61 -
 2 files changed, 41 insertions(+), 40 deletions(-)

diff --git a/Vagrantfile b/Vagrantfile
index 88f2028..ee08487 100644
--- a/Vagrantfile
+++ b/Vagrantfile
@@ -40,7 +40,7 @@ ec2_keypair_file = nil
 
 ec2_region = "us-east-1"
 ec2_az = nil # Uses set by AWS
-ec2_ami = "ami-905730e8"
+ec2_ami = "ami-29ebb519"
 ec2_instance_type = "m3.medium"
 ec2_spot_instance = ENV['SPOT_INSTANCE'] ? ENV['SPOT_INSTANCE'] == 'true' : true
 ec2_spot_max_price = "0.113"  # On-demand price for instance type
@@ -52,6 +52,9 @@ ec2_subnet_id = nil
 # are running Vagrant from within that VPC as well.
 ec2_associate_public_ip = nil
 
+jdk_major = '8'
+jdk_full = '8u202-linux-x64'
+
 local_config_file = File.join(File.dirname(__FILE__), "Vagrantfile.local")
 if File.exists?(local_config_file) then
   eval(File.read(local_config_file), binding, "Vagrantfile.local")
@@ -75,15 +78,6 @@ Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
 
 if Vagrant.has_plugin?("vagrant-cachier")
   override.cache.scope = :box
-  # Besides the defaults, we use a custom cache to handle the Oracle JDK
-  # download, which downloads via wget during an apt install. Because of the
-  # way the installer ends up using its cache directory, we need to jump
-  # through some hoops instead of just specifying a cache directly -- we
-  # share to a temporary location and the provisioning scripts symlink data
-  # to the right location.
-  override.cache.enable :generic, {
-"oracle-jdk8" => { cache_dir: "/tmp/oracle-jdk8-installer-cache" },
-  }
 end
   end
 
@@ -169,7 +163,7 @@ Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
   name_node(zookeeper, name, ec2_instance_name_prefix)
   ip_address = "192.168.50." + (10 + i).to_s
   assign_local_ip(zookeeper, ip_address)
-  zookeeper.vm.provision "shell", path: "vagrant/base.sh"
+  zookeeper.vm.provision "shell", path: "vagrant/base.sh", env: {"JDK_MAJOR" => jdk_major, "JDK_FULL" => jdk_full}
   zk_jmx_port = enable_jmx ? (8000 + i).to_s : ""
   zookeeper.vm.provision "shell", path: "vagrant/zk.sh", :args => [i.to_s, num_zookeepers, zk_jmx_port]
 end
@@ -186,7 +180,7 @@ Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
   # host DNS isn't setup, we shouldn't use hostnames -- IP addresses must be
   # used to support clients running on the host.
   zookeeper_connect = zookeepers.map{ |zk_addr| zk_addr + ":2181"}.join(",")
-  broker.vm.provision "shell", path: "vagrant/base.sh"
+  broker.vm.provision "shell", path: "vagrant/base.sh", env: {"JDK_MAJOR" => jdk_major, "JDK_FULL" => jdk_full}
   kafka_jmx_port = enable_jmx ? (9000 + i).to_s : ""
   broker.vm.provision "shell", path: "vagrant/broker.sh", :args => [i.to_s, enable_dns ? name : ip_address, zookeeper_connect, kafka_jmx_port]
 end
@@ -198,7 +192,7 @@ Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
   name_node(worker, name, ec2_instance_name_prefix)
   ip_address = "192.168.50." + (100 + i).to_s
   assign_local_ip(worker, ip_address)
-  worker.vm.provision "shell", path: "vagrant/base.sh"
+  worker.vm.provision "shell", path: "vagrant/base.sh", env: {"JDK_MAJOR" => jdk_major, "JDK_FULL" => jdk_full}
 end
   }
 
diff --git a/vagrant/base.sh b/vagrant/base.sh
index dcba0a1..3068c22 100755
--- a/vagrant/base.sh

[kafka] branch trunk updated: MINOR: upgrade to jdk8 8u202

2019-01-24 Thread ewencp
This is an automated email from the ASF dual-hosted git repository.

ewencp pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/kafka.git


The following commit(s) were added to refs/heads/trunk by this push:
 new ad3b6dd  MINOR: upgrade to jdk8 8u202
ad3b6dd is described below

commit ad3b6dd83571d06aa9b39c9c37e8663a017c6916
Author: Jarek Rudzinski 
AuthorDate: Thu Jan 24 22:19:19 2019 -0800

MINOR: upgrade to jdk8 8u202

Upgrade from 171 to 202. Unpack and install directly from a cached tgz rather than going via the installer deb from webupd8. The installer is still on 8u191 while we want 202.

Testing via kafka branch builder job

https://jenkins.confluent.io/job/system-test-kafka-branch-builder/2305/

Author: Jarek Rudzinski 
Author: Ewen Cheslack-Postava 

Reviewers: Alex Diachenko , Ewen Cheslack-Postava 


Closes #6165 from jarekr/trunk-jdk8-from-tgz
---
 Vagrantfile | 20 +++
 vagrant/base.sh | 61 -
 2 files changed, 41 insertions(+), 40 deletions(-)

diff --git a/Vagrantfile b/Vagrantfile
index 88f2028..ee08487 100644
--- a/Vagrantfile
+++ b/Vagrantfile
@@ -40,7 +40,7 @@ ec2_keypair_file = nil
 
 ec2_region = "us-east-1"
 ec2_az = nil # Uses set by AWS
-ec2_ami = "ami-905730e8"
+ec2_ami = "ami-29ebb519"
 ec2_instance_type = "m3.medium"
 ec2_spot_instance = ENV['SPOT_INSTANCE'] ? ENV['SPOT_INSTANCE'] == 'true' : 
true
 ec2_spot_max_price = "0.113"  # On-demand price for instance type
@@ -52,6 +52,9 @@ ec2_subnet_id = nil
 # are running Vagrant from within that VPC as well.
 ec2_associate_public_ip = nil
 
+jdk_major = '8'
+jdk_full = '8u202-linux-x64'
+
 local_config_file = File.join(File.dirname(__FILE__), "Vagrantfile.local")
 if File.exists?(local_config_file) then
   eval(File.read(local_config_file), binding, "Vagrantfile.local")
@@ -75,15 +78,6 @@ Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
 
 if Vagrant.has_plugin?("vagrant-cachier")
   override.cache.scope = :box
-  # Besides the defaults, we use a custom cache to handle the Oracle JDK
-  # download, which downloads via wget during an apt install. Because of 
the
-  # way the installer ends up using its cache directory, we need to jump
-  # through some hoops instead of just specifying a cache directly -- we
-  # share to a temporary location and the provisioning scripts symlink data
-  # to the right location.
-  override.cache.enable :generic, {
-"oracle-jdk8" => { cache_dir: "/tmp/oracle-jdk8-installer-cache" },
-  }
 end
   end
 
@@ -169,7 +163,7 @@ Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
   name_node(zookeeper, name, ec2_instance_name_prefix)
   ip_address = "192.168.50." + (10 + i).to_s
   assign_local_ip(zookeeper, ip_address)
-  zookeeper.vm.provision "shell", path: "vagrant/base.sh"
+  zookeeper.vm.provision "shell", path: "vagrant/base.sh", env: 
{"JDK_MAJOR" => jdk_major, "JDK_FULL" => jdk_full}
   zk_jmx_port = enable_jmx ? (8000 + i).to_s : ""
   zookeeper.vm.provision "shell", path: "vagrant/zk.sh", :args => [i.to_s, 
num_zookeepers, zk_jmx_port]
 end
@@ -186,7 +180,7 @@ Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
   # host DNS isn't setup, we shouldn't use hostnames -- IP addresses must 
be
   # used to support clients running on the host.
   zookeeper_connect = zookeepers.map{ |zk_addr| zk_addr + 
":2181"}.join(",")
-  broker.vm.provision "shell", path: "vagrant/base.sh"
+  broker.vm.provision "shell", path: "vagrant/base.sh", env: {"JDK_MAJOR" 
=> jdk_major, "JDK_FULL" => jdk_full}
   kafka_jmx_port = enable_jmx ? (9000 + i).to_s : ""
   broker.vm.provision "shell", path: "vagrant/broker.sh", :args => 
[i.to_s, enable_dns ? name : ip_address, zookeeper_connect, kafka_jmx_port]
 end
@@ -198,7 +192,7 @@ Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
   name_node(worker, name, ec2_instance_name_prefix)
   ip_address = "192.168.50." + (100 + i).to_s
   assign_local_ip(worker, ip_address)
-  worker.vm.provision "shell", path: "vagrant/base.sh"
+  worker.vm.provision "shell", path: "vagrant/base.sh", env: {"JDK_MAJOR" 
=> jdk_major, "JDK_FULL" => jdk_full}
 end
   }
 
diff --git a/vagrant/base.sh b/vagrant/base.sh
index 59e890c..6ee9660 100755
--- a/vagrant/base.sh
+++ b/vagrant/base.sh
@@ -20,38 +20,45 @@ set -ex
 # If you update this, also update tests/docker/Dockerfile
 ex

[kafka] branch 2.1 updated: KAFKA-5117: Stop resolving externalized configs in Connect REST API

2019-01-23 Thread ewencp
This is an automated email from the ASF dual-hosted git repository.

ewencp pushed a commit to branch 2.1
in repository https://gitbox.apache.org/repos/asf/kafka.git


The following commit(s) were added to refs/heads/2.1 by this push:
 new 265b58b  KAFKA-5117: Stop resolving externalized configs in Connect 
REST API
265b58b is described below

commit 265b58bd11dfbd8014ccabe320589f7163a82925
Author: Chris Egerton 
AuthorDate: Wed Jan 23 11:00:23 2019 -0800

KAFKA-5117: Stop resolving externalized configs in Connect REST API


[KIP-297](https://cwiki.apache.org/confluence/display/KAFKA/KIP-297%3A+Externalizing+Secrets+for+Connect+Configurations#KIP-297:ExternalizingSecretsforConnectConfigurations-PublicInterfaces)
 introduced the `ConfigProvider` mechanism, which was primarily intended for 
externalizing secrets provided in connector configurations. However, when 
querying the Connect REST API for the configuration of a connector or its 
tasks, those secrets are still exposed. The changes here prevent the Conne [...]

Tested and verified manually. If these changes are approved unit tests can 
be added to prevent a regression.

Author: Chris Egerton 

Reviewers: Robert Yokota , Randall Hauch 


Closes #6129 from C0urante/hide-provided-connect-configs

(cherry picked from commit 743607af5aa625a19377688709870b021014dee2)
Signed-off-by: Ewen Cheslack-Postava 
---
 .../runtime/distributed/DistributedHerder.java |  4 ++--
 .../runtime/standalone/StandaloneHerder.java   |  4 ++--
 .../runtime/distributed/DistributedHerderTest.java | 11 +-
 .../runtime/standalone/StandaloneHerderTest.java   | 24 +++---
 tests/kafkatest/tests/connect/connect_rest_test.py |  7 +--
 tests/kafkatest/tests/connect/connect_test.py  |  5 ++---
 .../templates/connect-distributed.properties   |  6 ++
 7 files changed, 44 insertions(+), 17 deletions(-)

diff --git 
a/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/distributed/DistributedHerder.java
 
b/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/distributed/DistributedHerder.java
index 099f084..7edc3b2 100644
--- 
a/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/distributed/DistributedHerder.java
+++ 
b/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/distributed/DistributedHerder.java
@@ -451,7 +451,7 @@ public class DistributedHerder extends AbstractHerder 
implements Runnable {
 if (!configState.contains(connName)) {
 callback.onCompletion(new 
NotFoundException("Connector " + connName + " not found"), null);
 } else {
-Map<String, String> config = configState.connectorConfig(connName);
+Map<String, String> config = configState.rawConnectorConfig(connName);
 callback.onCompletion(null, new ConnectorInfo(connName, config,
     configState.tasks(connName),
     connectorTypeForClass(config.get(ConnectorConfig.CONNECTOR_CLASS_CONFIG))));
@@ -607,7 +607,7 @@ public class DistributedHerder extends AbstractHerder 
implements Runnable {
 List<TaskInfo> result = new ArrayList<>();
 for (int i = 0; i < configState.taskCount(connName); i++) {
     ConnectorTaskId id = new ConnectorTaskId(connName, i);
-    result.add(new TaskInfo(id, configState.taskConfig(id)));
+    result.add(new TaskInfo(id, configState.rawTaskConfig(id)));
 }
 callback.onCompletion(null, result);
 }
diff --git 
a/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/standalone/StandaloneHerder.java
 
b/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/standalone/StandaloneHerder.java
index fe31c28..95b53e5 100644
--- 
a/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/standalone/StandaloneHerder.java
+++ 
b/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/standalone/StandaloneHerder.java
@@ -134,7 +134,7 @@ public class StandaloneHerder extends AbstractHerder {
 private ConnectorInfo createConnectorInfo(String connector) {
 if (!configState.contains(connector))
 return null;
-Map<String, String> config = configState.connectorConfig(connector);
+Map<String, String> config = configState.rawConnectorConfig(connector);
 return new ConnectorInfo(connector, config, 
configState.tasks(connector),
 
connectorTypeForClass(config.get(ConnectorConfig.CONNECTOR_CLASS_CONFIG)));
 }
@@ -232,7 +232,7 @@ public class StandaloneHerder extends AbstractHerder {
 
List<TaskInfo> result = new ArrayList<>();
 for (ConnectorTaskId taskId : configState.tasks(con

[kafka] branch trunk updated: KAFKA-5117: Stop resolving externalized configs in Connect REST API

2019-01-23 Thread ewencp
This is an automated email from the ASF dual-hosted git repository.

ewencp pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/kafka.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 743607a  KAFKA-5117: Stop resolving externalized configs in Connect 
REST API
743607a is described below

commit 743607af5aa625a19377688709870b021014dee2
Author: Chris Egerton 
AuthorDate: Wed Jan 23 11:00:23 2019 -0800

KAFKA-5117: Stop resolving externalized configs in Connect REST API


[KIP-297](https://cwiki.apache.org/confluence/display/KAFKA/KIP-297%3A+Externalizing+Secrets+for+Connect+Configurations#KIP-297:ExternalizingSecretsforConnectConfigurations-PublicInterfaces)
 introduced the `ConfigProvider` mechanism, which was primarily intended for 
externalizing secrets provided in connector configurations. However, when 
querying the Connect REST API for the configuration of a connector or its 
tasks, those secrets are still exposed. The changes here prevent the Conne [...]

Tested and verified manually. If these changes are approved unit tests can 
be added to prevent a regression.

Author: Chris Egerton 

Reviewers: Robert Yokota , Randall Hauch 


Closes #6129 from C0urante/hide-provided-connect-configs
---
 .../runtime/distributed/DistributedHerder.java |  4 ++--
 .../runtime/standalone/StandaloneHerder.java   |  4 ++--
 .../runtime/distributed/DistributedHerderTest.java | 11 ++-
 .../runtime/standalone/StandaloneHerderTest.java   | 22 --
 tests/kafkatest/tests/connect/connect_rest_test.py |  7 +--
 tests/kafkatest/tests/connect/connect_test.py  |  5 ++---
 .../templates/connect-distributed.properties   |  6 ++
 7 files changed, 43 insertions(+), 16 deletions(-)

diff --git 
a/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/distributed/DistributedHerder.java
 
b/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/distributed/DistributedHerder.java
index d25bbbc..711b6c9 100644
--- 
a/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/distributed/DistributedHerder.java
+++ 
b/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/distributed/DistributedHerder.java
@@ -451,7 +451,7 @@ public class DistributedHerder extends AbstractHerder 
implements Runnable {
 if (!configState.contains(connName)) {
 callback.onCompletion(new 
NotFoundException("Connector " + connName + " not found"), null);
 } else {
-Map<String, String> config = configState.connectorConfig(connName);
+Map<String, String> config = configState.rawConnectorConfig(connName);
 callback.onCompletion(null, new ConnectorInfo(connName, config,
     configState.tasks(connName),
     connectorTypeForClass(config.get(ConnectorConfig.CONNECTOR_CLASS_CONFIG))));
@@ -607,7 +607,7 @@ public class DistributedHerder extends AbstractHerder 
implements Runnable {
 List<TaskInfo> result = new ArrayList<>();
 for (int i = 0; i < configState.taskCount(connName); i++) {
     ConnectorTaskId id = new ConnectorTaskId(connName, i);
-    result.add(new TaskInfo(id, configState.taskConfig(id)));
+    result.add(new TaskInfo(id, configState.rawTaskConfig(id)));
 }
 callback.onCompletion(null, result);
 }
diff --git 
a/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/standalone/StandaloneHerder.java
 
b/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/standalone/StandaloneHerder.java
index fe31c28..95b53e5 100644
--- 
a/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/standalone/StandaloneHerder.java
+++ 
b/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/standalone/StandaloneHerder.java
@@ -134,7 +134,7 @@ public class StandaloneHerder extends AbstractHerder {
 private ConnectorInfo createConnectorInfo(String connector) {
 if (!configState.contains(connector))
 return null;
-Map<String, String> config = configState.connectorConfig(connector);
+Map<String, String> config = configState.rawConnectorConfig(connector);
 return new ConnectorInfo(connector, config, 
configState.tasks(connector),
 
connectorTypeForClass(config.get(ConnectorConfig.CONNECTOR_CLASS_CONFIG)));
 }
@@ -232,7 +232,7 @@ public class StandaloneHerder extends AbstractHerder {
 
List<TaskInfo> result = new ArrayList<>();
 for (ConnectorTaskId taskId : configState.tasks(connName))
-result.add(new TaskInfo(taskId, configState.taskConfig(taskId)));
+   
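The effect of switching to the raw variants is easiest to see side by side. A
minimal, self-contained Java sketch (the connector class, secrets file, and
placeholder below are illustrative examples, not part of the Connect API):

    import java.util.HashMap;
    import java.util.Map;

    public class RawVsResolvedConfigSketch {
        public static void main(String[] args) {
            // Raw config exactly as the user submitted it: the secret stays
            // externalized behind a ConfigProvider-style placeholder.
            Map<String, String> raw = new HashMap<>();
            raw.put("connector.class", "com.example.MySinkConnector"); // illustrative name
            raw.put("db.password", "${file:/etc/secrets.properties:db.password}");

            // The worker resolves placeholders only when it instantiates the
            // connector; this resolved map stands in for the provider output.
            Map<String, String> resolved = new HashMap<>(raw);
            resolved.put("db.password", "s3cr3t");

            // After this fix the REST API answers from the raw map, so the
            // placeholder, not the secret, is what callers see.
            System.out.println("REST API returns: " + raw.get("db.password"));
            System.out.println("Worker sees:      " + resolved.get("db.password"));
        }
    }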

[kafka] branch 2.0 updated: MINOR: Start Connect REST server in standalone mode to match distributed mode (KAFKA-7503 follow-up)

2019-01-16 Thread ewencp
This is an automated email from the ASF dual-hosted git repository.

ewencp pushed a commit to branch 2.0
in repository https://gitbox.apache.org/repos/asf/kafka.git


The following commit(s) were added to refs/heads/2.0 by this push:
 new 05e70e6  MINOR: Start Connect REST server in standalone mode to match 
distributed mode (KAFKA-7503 follow-up)
05e70e6 is described below

commit 05e70e6b1c3b21bd9ac0e88ea396ae252c4d115b
Author: Magesh Nandakumar 
AuthorDate: Wed Jan 16 22:58:30 2019 -0800

MINOR: Start Connect REST server in standalone mode to match distributed 
mode (KAFKA-7503 follow-up)

Start the Rest server in the standalone mode similar to how it's done for 
distributed mode.

Author: Magesh Nandakumar 

Reviewers: Arjun Satish , Ewen Cheslack-Postava 


Closes #6148 from mageshn/KAFKA-7826

(cherry picked from commit dec68c9350dba6da9f38247db08f93dc0a798ebd)
Signed-off-by: Ewen Cheslack-Postava 
---
 .../main/java/org/apache/kafka/connect/cli/ConnectStandalone.java   | 6 ++
 1 file changed, 6 insertions(+)

diff --git 
a/connect/runtime/src/main/java/org/apache/kafka/connect/cli/ConnectStandalone.java
 
b/connect/runtime/src/main/java/org/apache/kafka/connect/cli/ConnectStandalone.java
index aba9d9c..a47fd96 100644
--- 
a/connect/runtime/src/main/java/org/apache/kafka/connect/cli/ConnectStandalone.java
+++ 
b/connect/runtime/src/main/java/org/apache/kafka/connect/cli/ConnectStandalone.java
@@ -22,6 +22,7 @@ import org.apache.kafka.common.utils.Utils;
 import org.apache.kafka.connect.runtime.Connect;
 import org.apache.kafka.connect.runtime.ConnectorConfig;
 import org.apache.kafka.connect.runtime.Herder;
+import org.apache.kafka.connect.runtime.HerderProvider;
 import org.apache.kafka.connect.runtime.Worker;
 import org.apache.kafka.connect.runtime.WorkerInfo;
 import org.apache.kafka.connect.runtime.isolation.Plugins;
@@ -82,6 +83,9 @@ public class ConnectStandalone {
 log.debug("Kafka cluster ID: {}", kafkaClusterId);
 
 RestServer rest = new RestServer(config);
+HerderProvider provider = new HerderProvider();
+rest.start(provider, plugins);
+
 URI advertisedUrl = rest.advertisedUrl();
 String workerId = advertisedUrl.getHost() + ":" + 
advertisedUrl.getPort();
 
@@ -93,6 +97,8 @@ public class ConnectStandalone {
 
 try {
 connect.start();
+// herder has initialized now, and ready to be used by the 
RestServer.
+provider.setHerder(herder);
 for (final String connectorPropsFile : 
Arrays.copyOfRange(args, 1, args.length)) {
Map<String, String> connectorProps = Utils.propsToStringMap(Utils.loadProps(connectorPropsFile));
FutureCallback<Herder.Created<ConnectorInfo>> cb = new FutureCallback<>(new Callback<Herder.Created<ConnectorInfo>>() {



[kafka] branch 2.1 updated: MINOR: Start Connect REST server in standalone mode to match distributed mode (KAFKA-7503 follow-up)

2019-01-16 Thread ewencp
This is an automated email from the ASF dual-hosted git repository.

ewencp pushed a commit to branch 2.1
in repository https://gitbox.apache.org/repos/asf/kafka.git


The following commit(s) were added to refs/heads/2.1 by this push:
 new 93d8f7a  MINOR: Start Connect REST server in standalone mode to match 
distributed mode (KAFKA-7503 follow-up)
93d8f7a is described below

commit 93d8f7a906203412d2e9a52649e87842df2dba17
Author: Magesh Nandakumar 
AuthorDate: Wed Jan 16 22:58:30 2019 -0800

MINOR: Start Connect REST server in standalone mode to match distributed 
mode (KAFKA-7503 follow-up)

Start the Rest server in the standalone mode similar to how it's done for 
distributed mode.

Author: Magesh Nandakumar 

Reviewers: Arjun Satish , Ewen Cheslack-Postava 


Closes #6148 from mageshn/KAFKA-7826

(cherry picked from commit dec68c9350dba6da9f38247db08f93dc0a798ebd)
Signed-off-by: Ewen Cheslack-Postava 
---
 .../main/java/org/apache/kafka/connect/cli/ConnectStandalone.java   | 6 ++
 1 file changed, 6 insertions(+)

diff --git 
a/connect/runtime/src/main/java/org/apache/kafka/connect/cli/ConnectStandalone.java
 
b/connect/runtime/src/main/java/org/apache/kafka/connect/cli/ConnectStandalone.java
index aba9d9c..a47fd96 100644
--- 
a/connect/runtime/src/main/java/org/apache/kafka/connect/cli/ConnectStandalone.java
+++ 
b/connect/runtime/src/main/java/org/apache/kafka/connect/cli/ConnectStandalone.java
@@ -22,6 +22,7 @@ import org.apache.kafka.common.utils.Utils;
 import org.apache.kafka.connect.runtime.Connect;
 import org.apache.kafka.connect.runtime.ConnectorConfig;
 import org.apache.kafka.connect.runtime.Herder;
+import org.apache.kafka.connect.runtime.HerderProvider;
 import org.apache.kafka.connect.runtime.Worker;
 import org.apache.kafka.connect.runtime.WorkerInfo;
 import org.apache.kafka.connect.runtime.isolation.Plugins;
@@ -82,6 +83,9 @@ public class ConnectStandalone {
 log.debug("Kafka cluster ID: {}", kafkaClusterId);
 
 RestServer rest = new RestServer(config);
+HerderProvider provider = new HerderProvider();
+rest.start(provider, plugins);
+
 URI advertisedUrl = rest.advertisedUrl();
 String workerId = advertisedUrl.getHost() + ":" + 
advertisedUrl.getPort();
 
@@ -93,6 +97,8 @@ public class ConnectStandalone {
 
 try {
 connect.start();
+// herder has initialized now, and ready to be used by the 
RestServer.
+provider.setHerder(herder);
 for (final String connectorPropsFile : 
Arrays.copyOfRange(args, 1, args.length)) {
Map<String, String> connectorProps = Utils.propsToStringMap(Utils.loadProps(connectorPropsFile));
FutureCallback<Herder.Created<ConnectorInfo>> cb = new FutureCallback<>(new Callback<Herder.Created<ConnectorInfo>>() {



[kafka] branch trunk updated: MINOR: Start Connect REST server in standalone mode to match distributed mode (KAFKA-7503 follow-up)

2019-01-16 Thread ewencp
This is an automated email from the ASF dual-hosted git repository.

ewencp pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/kafka.git


The following commit(s) were added to refs/heads/trunk by this push:
 new dec68c9  MINOR: Start Connect REST server in standalone mode to match 
distributed mode (KAFKA-7503 follow-up)
dec68c9 is described below

commit dec68c9350dba6da9f38247db08f93dc0a798ebd
Author: Magesh Nandakumar 
AuthorDate: Wed Jan 16 22:58:30 2019 -0800

MINOR: Start Connect REST server in standalone mode to match distributed 
mode (KAFKA-7503 follow-up)

Start the Rest server in the standalone mode similar to how it's done for 
distributed mode.

Author: Magesh Nandakumar 

Reviewers: Arjun Satish , Ewen Cheslack-Postava 


Closes #6148 from mageshn/KAFKA-7826
---
 .../main/java/org/apache/kafka/connect/cli/ConnectStandalone.java   | 6 ++
 1 file changed, 6 insertions(+)

diff --git 
a/connect/runtime/src/main/java/org/apache/kafka/connect/cli/ConnectStandalone.java
 
b/connect/runtime/src/main/java/org/apache/kafka/connect/cli/ConnectStandalone.java
index 3498ffe..dd1cf0f 100644
--- 
a/connect/runtime/src/main/java/org/apache/kafka/connect/cli/ConnectStandalone.java
+++ 
b/connect/runtime/src/main/java/org/apache/kafka/connect/cli/ConnectStandalone.java
@@ -22,6 +22,7 @@ import org.apache.kafka.common.utils.Utils;
 import org.apache.kafka.connect.runtime.Connect;
 import org.apache.kafka.connect.runtime.ConnectorConfig;
 import org.apache.kafka.connect.runtime.Herder;
+import org.apache.kafka.connect.runtime.HerderProvider;
 import org.apache.kafka.connect.runtime.Worker;
 import org.apache.kafka.connect.runtime.WorkerInfo;
 import org.apache.kafka.connect.runtime.isolation.Plugins;
@@ -82,6 +83,9 @@ public class ConnectStandalone {
 log.debug("Kafka cluster ID: {}", kafkaClusterId);
 
 RestServer rest = new RestServer(config);
+HerderProvider provider = new HerderProvider();
+rest.start(provider, plugins);
+
 URI advertisedUrl = rest.advertisedUrl();
 String workerId = advertisedUrl.getHost() + ":" + 
advertisedUrl.getPort();
 
@@ -93,6 +97,8 @@ public class ConnectStandalone {
 
 try {
 connect.start();
+// herder has initialized now, and ready to be used by the 
RestServer.
+provider.setHerder(herder);
 for (final String connectorPropsFile : 
Arrays.copyOfRange(args, 1, args.length)) {
Map<String, String> connectorProps = Utils.propsToStringMap(Utils.loadProps(connectorPropsFile));
FutureCallback<Herder.Created<ConnectorInfo>> cb = new FutureCallback<>(new Callback<Herder.Created<ConnectorInfo>>() {
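The HerderProvider indirection above is what lets the REST server come up and
accept requests before the herder has finished initializing. A generic sketch
of that deferred-initialization pattern, assuming a latch-based handoff (an
illustration, not the actual HerderProvider source):

    import java.util.concurrent.CountDownLatch;
    import java.util.concurrent.TimeUnit;

    public class DeferredProviderSketch<T> {
        private final CountDownLatch ready = new CountDownLatch(1);
        private volatile T instance;

        // Called once startup completes, e.g. provider.set(herder).
        public void set(T instance) {
            this.instance = instance;
            ready.countDown();
        }

        // REST handlers block here until the dependency is available.
        public T get(long timeout, TimeUnit unit) throws InterruptedException {
            if (!ready.await(timeout, unit))
                throw new IllegalStateException("dependency was not initialized in time");
            return instance;
        }

        public static void main(String[] args) throws InterruptedException {
            DeferredProviderSketch<String> provider = new DeferredProviderSketch<>();
            new Thread(() -> provider.set("herder")).start(); // simulated startup
            System.out.println(provider.get(5, TimeUnit.SECONDS));
        }
    }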



[kafka] branch trunk updated: KAFKA-7461: Add tests for logical types

2019-01-14 Thread ewencp
This is an automated email from the ASF dual-hosted git repository.

ewencp pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/kafka.git


The following commit(s) were added to refs/heads/trunk by this push:
 new aca52b6  KAFKA-7461: Add tests for logical types
aca52b6 is described below

commit aca52b6d2c1381646f211978735be59c0a7de1fd
Author: Andrew Schofield 
AuthorDate: Mon Jan 14 15:41:23 2019 -0800

KAFKA-7461: Add tests for logical types

Added testing of logical types for Kafka Connect in support of KIP-145 
features.
Added tests for Boolean, Time, Date and Timestamp, including the valid 
conversions.

The area of ISO8601 strings is a bit of a mess because the tokenizer is not 
compatible with
that format, and a subsequent JIRA will be needed to fix that.

A few small fixes as well as creating test cases, but they're clearly just 
corrections such as
using 0 to mean January (java.util.Calendar uses zero-based month numbers).

Author: Andrew Schofield 

Reviewers: Mickael Maison , Ewen 
Cheslack-Postava 

Closes #6077 from 
AndrewJSchofield/KAFKA-7461-ConverterValuesLogicalTypesTest
---
 .../java/org/apache/kafka/connect/data/Values.java |   6 +-
 .../org/apache/kafka/connect/data/ValuesTest.java  | 101 -
 2 files changed, 102 insertions(+), 5 deletions(-)

diff --git 
a/connect/api/src/main/java/org/apache/kafka/connect/data/Values.java 
b/connect/api/src/main/java/org/apache/kafka/connect/data/Values.java
index c2bd9f4..c1bebdf 100644
--- a/connect/api/src/main/java/org/apache/kafka/connect/data/Values.java
+++ b/connect/api/src/main/java/org/apache/kafka/connect/data/Values.java
@@ -67,7 +67,7 @@ public class Values {
 private static final Schema MAP_SELECTOR_SCHEMA = 
SchemaBuilder.map(Schema.STRING_SCHEMA, Schema.STRING_SCHEMA).build();
 private static final Schema STRUCT_SELECTOR_SCHEMA = 
SchemaBuilder.struct().build();
 private static final String TRUE_LITERAL = Boolean.TRUE.toString();
-private static final String FALSE_LITERAL = Boolean.TRUE.toString();
+private static final String FALSE_LITERAL = Boolean.FALSE.toString();
 private static final long MILLIS_PER_DAY = 24 * 60 * 60 * 1000;
 private static final String NULL_VALUE = "null";
 private static final String ISO_8601_DATE_FORMAT_PATTERN = "yyyy-MM-dd";
@@ -488,7 +488,7 @@ public class Values {
 Calendar calendar = Calendar.getInstance(UTC);
 calendar.setTime((java.util.Date) value);
 calendar.set(Calendar.YEAR, 1970);
-calendar.set(Calendar.MONTH, 1);
+calendar.set(Calendar.MONTH, 0); // Months are 
zero-based
 calendar.set(Calendar.DAY_OF_MONTH, 1);
 return Time.toLogical(toSchema, (int) 
calendar.getTimeInMillis());
 }
@@ -872,7 +872,7 @@ public class Values {
 }
 } else if (tokenLength == ISO_8601_TIMESTAMP_LENGTH) {
 try {
-return new SchemaAndValue(Time.SCHEMA, new 
SimpleDateFormat(ISO_8601_TIMESTAMP_FORMAT_PATTERN).parse(token));
+return new SchemaAndValue(Timestamp.SCHEMA, new 
SimpleDateFormat(ISO_8601_TIMESTAMP_FORMAT_PATTERN).parse(token));
 } catch (ParseException e) {
 // not a valid date
 }
diff --git 
a/connect/api/src/test/java/org/apache/kafka/connect/data/ValuesTest.java 
b/connect/api/src/test/java/org/apache/kafka/connect/data/ValuesTest.java
index dcfa3cf..16d 100644
--- a/connect/api/src/test/java/org/apache/kafka/connect/data/ValuesTest.java
+++ b/connect/api/src/test/java/org/apache/kafka/connect/data/ValuesTest.java
@@ -35,6 +35,8 @@ import static org.junit.Assert.fail;
 
 public class ValuesTest {
 
+private static final long MILLIS_PER_DAY = 24 * 60 * 60 * 1000;
+
 private static final Map<String, String> STRING_MAP = new LinkedHashMap<>();
 private static final Schema STRING_MAP_SCHEMA = 
SchemaBuilder.map(Schema.STRING_SCHEMA, Schema.STRING_SCHEMA).schema();
 
@@ -79,6 +81,24 @@ public class ValuesTest {
 }
 
 @Test
+public void shouldConvertBooleanValues() {
+assertRoundTrip(Schema.BOOLEAN_SCHEMA, Schema.BOOLEAN_SCHEMA, 
Boolean.FALSE);
+SchemaAndValue resultFalse = roundTrip(Schema.BOOLEAN_SCHEMA, "false");
+assertEquals(Schema.BOOLEAN_SCHEMA, resultFalse.schema());
+assertEquals(Boolean.FALSE, resultFalse.value());
+
+assertRoundTrip(Schema.BOOLEAN_SCHEMA, Schema.BOOLEAN_SCHEMA, 
Boolean.TRUE);
+SchemaAndValue resultTrue = roundTrip(Schema.BOOLEAN_SCHEMA, "true");
+assertEquals(Schema.BOOLEAN_SCHEMA, resultTrue.schema());
+assertEquals(Boolean.TRUE, res
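One of the corrections here is a classic pitfall: java.util.Calendar months are
zero-based, so January is month 0 and setting month 1 silently means February.
A standalone illustration of why the fixed test setup uses 0:

    import java.util.Calendar;
    import java.util.TimeZone;

    public class ZeroBasedMonthSketch {
        public static void main(String[] args) {
            Calendar calendar = Calendar.getInstance(TimeZone.getTimeZone("UTC"));
            calendar.clear();
            calendar.set(Calendar.YEAR, 1970);
            calendar.set(Calendar.MONTH, 0); // 0 == January; 1 would be February
            calendar.set(Calendar.DAY_OF_MONTH, 1);
            // Epoch day zero, so the remaining time-of-day component fits a
            // Connect Time value.
            System.out.println(calendar.getTimeInMillis()); // prints 0
        }
    }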

[kafka] branch 2.0 updated: KAFKA-7503: Connect integration test harness

2019-01-14 Thread ewencp
This is an automated email from the ASF dual-hosted git repository.

ewencp pushed a commit to branch 2.0
in repository https://gitbox.apache.org/repos/asf/kafka.git


The following commit(s) were added to refs/heads/2.0 by this push:
 new 6d7f6dd  KAFKA-7503: Connect integration test harness
6d7f6dd is described below

commit 6d7f6ddff1786fa076bd16eb7ac6d866d94f2e55
Author: Arjun Satish 
AuthorDate: Mon Jan 14 13:50:23 2019 -0800

KAFKA-7503: Connect integration test harness

Expose a programmatic way to bring up a Kafka and Zk cluster through Java 
API to facilitate integration tests for framework level changes in Kafka 
Connect. The Kafka classes would be similar to KafkaEmbedded in streams. The 
new classes would reuse the kafka.server.KafkaServer classes from :core, and 
provide a simple interface to bring up brokers in integration tests.

Signed-off-by: Arjun Satish 

Author: Arjun Satish 
Author: Arjun Satish 

Reviewers: Randall Hauch , Konstantine Karantasis 
, Ewen Cheslack-Postava 

Closes #5516 from wicknicks/connect-integration-test

(cherry picked from commit 69d8d2ea11c5e08884ab4c7b8079af5fd21247be)
Signed-off-by: Ewen Cheslack-Postava 
---
 build.gradle   |   2 +
 checkstyle/import-control.xml  |  14 +-
 .../kafka/connect/cli/ConnectDistributed.java  | 109 ---
 .../org/apache/kafka/connect/runtime/Connect.java  |   7 +-
 .../kafka/connect/runtime/HerderProvider.java  |  68 +
 .../runtime/health/ConnectClusterStateImpl.java|  12 +-
 .../kafka/connect/runtime/rest/RestServer.java |  31 +-
 .../runtime/rest/entities/ConnectorStateInfo.java  |  11 +-
 .../rest/resources/ConnectorPluginsResource.java   |  12 +-
 .../runtime/rest/resources/ConnectorsResource.java |  39 +--
 .../runtime/rest/resources/RootResource.java   |   8 +-
 .../kafka/connect/integration/ConnectorHandle.java | 116 +++
 .../integration/ErrorHandlingIntegrationTest.java  | 231 ++
 .../integration/ExampleConnectIntegrationTest.java | 137 +
 .../integration/MonitorableSinkConnector.java  | 115 +++
 .../kafka/connect/integration/RuntimeHandles.java  |  63 
 .../kafka/connect/integration/TaskHandle.java  | 111 +++
 .../kafka/connect/runtime/rest/RestServerTest.java |   5 +-
 .../resources/ConnectorPluginsResourceTest.java|   3 +-
 .../rest/resources/ConnectorsResourceTest.java |   3 +-
 .../runtime/rest/resources/RootResourceTest.java   |   3 +-
 .../util/clusters/EmbeddedConnectCluster.java  | 280 +
 .../util/clusters/EmbeddedKafkaCluster.java| 339 +
 .../runtime/src/test/resources/log4j.properties|   3 +-
 24 files changed, 1617 insertions(+), 105 deletions(-)

diff --git a/build.gradle b/build.gradle
index 1453c0e..95d2df1 100644
--- a/build.gradle
+++ b/build.gradle
@@ -1380,6 +1380,8 @@ project(':connect:runtime') {
 testCompile libs.powermockEasymock
 
 testCompile project(':clients').sourceSets.test.output
+testCompile project(':core')
+testCompile project(':core').sourceSets.test.output
 
 testRuntime libs.slf4jlog4j
   }
diff --git a/checkstyle/import-control.xml b/checkstyle/import-control.xml
index 106ad0a..78bb894 100644
--- a/checkstyle/import-control.xml
+++ b/checkstyle/import-control.xml
@@ -338,8 +338,6 @@
   
 
 
-
-
 
   
   
@@ -357,6 +355,18 @@
   
   
   
+  
+  
+
+
+
+
+  
+
+
+
+  
+  
 
 
 
diff --git 
a/connect/runtime/src/main/java/org/apache/kafka/connect/cli/ConnectDistributed.java
 
b/connect/runtime/src/main/java/org/apache/kafka/connect/cli/ConnectDistributed.java
index f8c15de..a6c6d98 100644
--- 
a/connect/runtime/src/main/java/org/apache/kafka/connect/cli/ConnectDistributed.java
+++ 
b/connect/runtime/src/main/java/org/apache/kafka/connect/cli/ConnectDistributed.java
@@ -20,6 +20,7 @@ import org.apache.kafka.common.utils.Exit;
 import org.apache.kafka.common.utils.Time;
 import org.apache.kafka.common.utils.Utils;
 import org.apache.kafka.connect.runtime.Connect;
+import org.apache.kafka.connect.runtime.HerderProvider;
 import org.apache.kafka.connect.runtime.Worker;
 import org.apache.kafka.connect.runtime.WorkerConfigTransformer;
 import org.apache.kafka.connect.runtime.WorkerInfo;
@@ -38,6 +39,7 @@ import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
 import java.net.URI;
+import java.util.Arrays;
 import java.util.Collections;
 import java.util.Map;
 
@@ -53,62 +55,26 @@ import java.util.Map;
 public class ConnectDistributed {
 private static final Logger log = 
LoggerFactory.getLogger(ConnectDistributed.class);
 
-public static void main(String[] args) throws Exception {
-if (args.length < 1) {
+private final Time time = Time.SYSTEM;
+private final long initStart = time.hiResCloc

[kafka] branch 2.1 updated: KAFKA-7503: Connect integration test harness

2019-01-14 Thread ewencp
This is an automated email from the ASF dual-hosted git repository.

ewencp pushed a commit to branch 2.1
in repository https://gitbox.apache.org/repos/asf/kafka.git


The following commit(s) were added to refs/heads/2.1 by this push:
 new 672cc57  KAFKA-7503: Connect integration test harness
672cc57 is described below

commit 672cc578ef0407ff0fe0ae3a8ed33a4d9683635e
Author: Arjun Satish 
AuthorDate: Mon Jan 14 13:50:23 2019 -0800

KAFKA-7503: Connect integration test harness

Expose a programmatic way to bring up a Kafka and Zk cluster through Java 
API to facilitate integration tests for framework level changes in Kafka 
Connect. The Kafka classes would be similar to KafkaEmbedded in streams. The 
new classes would reuse the kafka.server.KafkaServer classes from :core, and 
provide a simple interface to bring up brokers in integration tests.

Signed-off-by: Arjun Satish 

Author: Arjun Satish 
Author: Arjun Satish 

Reviewers: Randall Hauch , Konstantine Karantasis 
, Ewen Cheslack-Postava 

Closes #5516 from wicknicks/connect-integration-test

(cherry picked from commit 69d8d2ea11c5e08884ab4c7b8079af5fd21247be)
Signed-off-by: Ewen Cheslack-Postava 
---
 build.gradle   |   2 +
 checkstyle/import-control.xml  |  14 +-
 .../kafka/connect/cli/ConnectDistributed.java  | 109 ---
 .../org/apache/kafka/connect/runtime/Connect.java  |   7 +-
 .../kafka/connect/runtime/HerderProvider.java  |  68 +
 .../runtime/health/ConnectClusterStateImpl.java|  12 +-
 .../kafka/connect/runtime/rest/RestServer.java |  31 +-
 .../runtime/rest/entities/ConnectorStateInfo.java  |  11 +-
 .../rest/resources/ConnectorPluginsResource.java   |  12 +-
 .../runtime/rest/resources/ConnectorsResource.java |  39 +--
 .../runtime/rest/resources/RootResource.java   |   8 +-
 .../kafka/connect/integration/ConnectorHandle.java | 116 +++
 .../integration/ErrorHandlingIntegrationTest.java  | 231 ++
 .../integration/ExampleConnectIntegrationTest.java | 137 +
 .../integration/MonitorableSinkConnector.java  | 115 +++
 .../kafka/connect/integration/RuntimeHandles.java  |  63 
 .../kafka/connect/integration/TaskHandle.java  | 111 +++
 .../kafka/connect/runtime/rest/RestServerTest.java |   5 +-
 .../resources/ConnectorPluginsResourceTest.java|   3 +-
 .../rest/resources/ConnectorsResourceTest.java |   3 +-
 .../runtime/rest/resources/RootResourceTest.java   |   3 +-
 .../util/clusters/EmbeddedConnectCluster.java  | 280 +
 .../util/clusters/EmbeddedKafkaCluster.java| 339 +
 .../runtime/src/test/resources/log4j.properties|   3 +-
 24 files changed, 1617 insertions(+), 105 deletions(-)

diff --git a/build.gradle b/build.gradle
index 064bd2c..6ed1a87 100644
--- a/build.gradle
+++ b/build.gradle
@@ -1409,6 +1409,8 @@ project(':connect:runtime') {
 testCompile libs.powermockEasymock
 
 testCompile project(':clients').sourceSets.test.output
+testCompile project(':core')
+testCompile project(':core').sourceSets.test.output
 
 testRuntime libs.slf4jlog4j
   }
diff --git a/checkstyle/import-control.xml b/checkstyle/import-control.xml
index 91d23f6..3927a25 100644
--- a/checkstyle/import-control.xml
+++ b/checkstyle/import-control.xml
@@ -335,8 +335,6 @@
   
 
 
-
-
 
   
   
@@ -354,6 +352,18 @@
   
   
   
+  
+  
+
+
+
+
+  
+
+
+
+  
+  
 
 
 
diff --git 
a/connect/runtime/src/main/java/org/apache/kafka/connect/cli/ConnectDistributed.java
 
b/connect/runtime/src/main/java/org/apache/kafka/connect/cli/ConnectDistributed.java
index f8c15de..a6c6d98 100644
--- 
a/connect/runtime/src/main/java/org/apache/kafka/connect/cli/ConnectDistributed.java
+++ 
b/connect/runtime/src/main/java/org/apache/kafka/connect/cli/ConnectDistributed.java
@@ -20,6 +20,7 @@ import org.apache.kafka.common.utils.Exit;
 import org.apache.kafka.common.utils.Time;
 import org.apache.kafka.common.utils.Utils;
 import org.apache.kafka.connect.runtime.Connect;
+import org.apache.kafka.connect.runtime.HerderProvider;
 import org.apache.kafka.connect.runtime.Worker;
 import org.apache.kafka.connect.runtime.WorkerConfigTransformer;
 import org.apache.kafka.connect.runtime.WorkerInfo;
@@ -38,6 +39,7 @@ import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
 import java.net.URI;
+import java.util.Arrays;
 import java.util.Collections;
 import java.util.Map;
 
@@ -53,62 +55,26 @@ import java.util.Map;
 public class ConnectDistributed {
 private static final Logger log = 
LoggerFactory.getLogger(ConnectDistributed.class);
 
-public static void main(String[] args) throws Exception {
-if (args.length < 1) {
+private final Time time = Time.SYSTEM;
+private final long initStart = time.hiResCloc

[kafka] branch trunk updated: KAFKA-7503: Connect integration test harness

2019-01-14 Thread ewencp
This is an automated email from the ASF dual-hosted git repository.

ewencp pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/kafka.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 69d8d2e  KAFKA-7503: Connect integration test harness
69d8d2e is described below

commit 69d8d2ea11c5e08884ab4c7b8079af5fd21247be
Author: Arjun Satish 
AuthorDate: Mon Jan 14 13:50:23 2019 -0800

KAFKA-7503: Connect integration test harness

Expose a programmatic way to bring up a Kafka and Zk cluster through Java 
API to facilitate integration tests for framework level changes in Kafka 
Connect. The Kafka classes would be similar to KafkaEmbedded in streams. The 
new classes would reuse the kafka.server.KafkaServer classes from :core, and 
provide a simple interface to bring up brokers in integration tests.

Signed-off-by: Arjun Satish 

Author: Arjun Satish 
Author: Arjun Satish 

Reviewers: Randall Hauch , Konstantine Karantasis 
, Ewen Cheslack-Postava 

Closes #5516 from wicknicks/connect-integration-test
---
 build.gradle   |   2 +
 checkstyle/import-control.xml  |  14 +-
 .../kafka/connect/cli/ConnectDistributed.java  | 104 ---
 .../org/apache/kafka/connect/runtime/Connect.java  |   7 +-
 .../kafka/connect/runtime/HerderProvider.java  |  68 +
 .../runtime/health/ConnectClusterStateImpl.java|  12 +-
 .../kafka/connect/runtime/rest/RestServer.java |  31 +-
 .../runtime/rest/entities/ConnectorStateInfo.java  |  11 +-
 .../rest/resources/ConnectorPluginsResource.java   |  12 +-
 .../runtime/rest/resources/ConnectorsResource.java |  39 +--
 .../runtime/rest/resources/RootResource.java   |   8 +-
 .../kafka/connect/integration/ConnectorHandle.java | 116 +++
 .../integration/ErrorHandlingIntegrationTest.java  | 231 ++
 .../integration/ExampleConnectIntegrationTest.java | 137 +
 .../integration/MonitorableSinkConnector.java  | 115 +++
 .../kafka/connect/integration/RuntimeHandles.java  |  63 
 .../kafka/connect/integration/TaskHandle.java  | 111 +++
 .../kafka/connect/runtime/rest/RestServerTest.java |   5 +-
 .../resources/ConnectorPluginsResourceTest.java|   3 +-
 .../rest/resources/ConnectorsResourceTest.java |   3 +-
 .../runtime/rest/resources/RootResourceTest.java   |   3 +-
 .../util/clusters/EmbeddedConnectCluster.java  | 280 +
 .../util/clusters/EmbeddedKafkaCluster.java| 339 +
 .../runtime/src/test/resources/log4j.properties|   3 +-
 24 files changed, 1614 insertions(+), 103 deletions(-)

diff --git a/build.gradle b/build.gradle
index 75a4354..4dbe7d7 100644
--- a/build.gradle
+++ b/build.gradle
@@ -1453,6 +1453,8 @@ project(':connect:runtime') {
 testCompile libs.powermockEasymock
 
 testCompile project(':clients').sourceSets.test.output
+testCompile project(':core')
+testCompile project(':core').sourceSets.test.output
 
 testRuntime libs.slf4jlog4j
   }
diff --git a/checkstyle/import-control.xml b/checkstyle/import-control.xml
index c69f94d..8c98f8d 100644
--- a/checkstyle/import-control.xml
+++ b/checkstyle/import-control.xml
@@ -347,8 +347,6 @@
   
 
 
-
-
 
   
   
@@ -366,6 +364,18 @@
   
   
   
+  
+  
+
+
+
+
+  
+
+
+
+  
+  
 
 
 
diff --git 
a/connect/runtime/src/main/java/org/apache/kafka/connect/cli/ConnectDistributed.java
 
b/connect/runtime/src/main/java/org/apache/kafka/connect/cli/ConnectDistributed.java
index dd43c37..a6c6d98 100644
--- 
a/connect/runtime/src/main/java/org/apache/kafka/connect/cli/ConnectDistributed.java
+++ 
b/connect/runtime/src/main/java/org/apache/kafka/connect/cli/ConnectDistributed.java
@@ -20,6 +20,7 @@ import org.apache.kafka.common.utils.Exit;
 import org.apache.kafka.common.utils.Time;
 import org.apache.kafka.common.utils.Utils;
 import org.apache.kafka.connect.runtime.Connect;
+import org.apache.kafka.connect.runtime.HerderProvider;
 import org.apache.kafka.connect.runtime.Worker;
 import org.apache.kafka.connect.runtime.WorkerConfigTransformer;
 import org.apache.kafka.connect.runtime.WorkerInfo;
@@ -54,62 +55,26 @@ import java.util.Map;
 public class ConnectDistributed {
 private static final Logger log = 
LoggerFactory.getLogger(ConnectDistributed.class);
 
+private final Time time = Time.SYSTEM;
+private final long initStart = time.hiResClockMs();
+
 public static void main(String[] args) {
+
 if (args.length < 1 || Arrays.asList(args).contains("--help")) {
 log.info("Usage: ConnectDistributed worker.properties");
 Exit.exit(1);
 }
 
 try {
-Time time = Time.SYSTEM;
-log.info("Kafka Connect distributed worker initializing ...");
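For a feel of the lifecycle these classes manage, here is a generic in-process
harness sketch. It deliberately avoids reproducing the EmbeddedKafkaCluster and
EmbeddedConnectCluster method names (not quoted here) and only mirrors the
start/verify/stop pattern they give integration tests:

    import java.util.ArrayList;
    import java.util.List;

    public class EmbeddedHarnessSketch {
        interface Service { void start(); void stop(); }

        static class Harness implements AutoCloseable {
            private final List<Service> services = new ArrayList<>();
            Harness add(Service s) { services.add(s); return this; }
            Harness start() { services.forEach(Service::start); return this; }
            @Override public void close() {
                // Stop in reverse order, like tearing down workers before brokers.
                for (int i = services.size() - 1; i >= 0; i--) services.get(i).stop();
            }
        }

        static Service named(String name) {
            return new Service() { // stand-in for an embedded broker or worker
                public void start() { System.out.println(name + " up"); }
                public void stop()  { System.out.println(name + " down"); }
            };
        }

        public static void main(String[] args) {
            try (Harness h = new Harness().add(named("broker")).add(named("worker")).start()) {
                System.out.println("run assertions against the live services here");
            }
        }
    }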

[kafka] branch 2.1 updated: MINOR: Support choosing different JVMs when running integration tests

2019-01-11 Thread ewencp
This is an automated email from the ASF dual-hosted git repository.

ewencp pushed a commit to branch 2.1
in repository https://gitbox.apache.org/repos/asf/kafka.git


The following commit(s) were added to refs/heads/2.1 by this push:
 new f25a9c4  MINOR: Support choosing different JVMs when running 
integration tests
f25a9c4 is described below

commit f25a9c486d956392092f130ab05e3b9fd122b954
Author: Xi Yang 
AuthorDate: Fri Jan 11 15:11:55 2019 -0800

MINOR: Support choosing different JVMs when running integration tests

+ Add a parameter to ducker-ak to control the OpenJDK base image.
+ Fix a few issues with using openjdk:11 as the base image.


Author: Xi Yang 

Reviewers: Ewen Cheslack-Postava 

Closes #6071 from yangxi/ducktape-jdk

(cherry picked from commit cc33511e9a0a1493ef89afefb7df089ca546687e)
Signed-off-by: Ewen Cheslack-Postava 
---
 tests/README.md |  4 
 tests/docker/Dockerfile | 13 -
 tests/docker/ducker-ak  |  9 +++--
 3 files changed, 19 insertions(+), 7 deletions(-)

diff --git a/tests/README.md b/tests/README.md
index f42b28a..6c20553 100644
--- a/tests/README.md
+++ b/tests/README.md
@@ -36,6 +36,10 @@ 
TC_PATHS="tests/kafkatest/tests/client/pluggable_test.py::PluggableConsumerTest"
 ```
 
TC_PATHS="tests/kafkatest/tests/client/pluggable_test.py::PluggableConsumerTest.test_start_stop"
 bash tests/docker/run_tests.sh
 ```
+* Run tests with a different JVM
+```
+bash tests/docker/ducker-ak up -j 'openjdk:11'; tests/docker/run_tests.sh
+```
 
 * Notes
   - The scripts to run tests creates and destroys docker network named *knw*.
diff --git a/tests/docker/Dockerfile b/tests/docker/Dockerfile
index 7c1efd6..e7961e4 100644
--- a/tests/docker/Dockerfile
+++ b/tests/docker/Dockerfile
@@ -13,7 +13,8 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 
-FROM openjdk:8
+ARG jdk_version=openjdk:8
+FROM $jdk_version
 
 MAINTAINER Apache Kafka d...@kafka.apache.org
 VOLUME ["/opt/kafka-dev"]
@@ -31,12 +32,14 @@ ARG ducker_creator=default
 LABEL ducker.creator=$ducker_creator
 
 # Update Linux and install necessary utilities.
-RUN apt update && apt install -y sudo netcat iptables rsync unzip wget curl jq 
coreutils openssh-server net-tools vim python-pip python-dev libffi-dev 
libssl-dev cmake pkg-config libfuse-dev && apt-get -y clean
-RUN pip install -U pip==9.0.3 setuptools && pip install --upgrade cffi 
virtualenv pyasn1 boto3 pycrypto pywinrm ipaddress enum34 && pip install 
--upgrade ducktape==0.7.1
+RUN apt update && apt install -y sudo netcat iptables rsync unzip wget curl jq 
coreutils openssh-server net-tools vim python-pip python-dev libffi-dev 
libssl-dev cmake pkg-config libfuse-dev  && apt-get -y clean
+RUN python -m pip install -U pip==9.0.3;
+RUN pip install --upgrade cffi virtualenv pyasn1 boto3 pycrypto pywinrm 
ipaddress enum34 && pip install --upgrade ducktape==0.7.1
 
 # Set up ssh
 COPY ./ssh-config /root/.ssh/config
-RUN ssh-keygen -q -t rsa -N '' -f /root/.ssh/id_rsa && cp -f 
/root/.ssh/id_rsa.pub /root/.ssh/authorized_keys
+# NOTE: The paramiko library supports the PEM-format private key, but does not 
support the RFC4716 format.
+RUN ssh-keygen -m PEM -q -t rsa -N '' -f /root/.ssh/id_rsa && cp -f 
/root/.ssh/id_rsa.pub /root/.ssh/authorized_keys
 
 # Install binary test dependencies.
 # we use the same versions as in vagrant/base.sh
@@ -69,7 +72,7 @@ RUN apt-get install fuse
 RUN cd /opt && git clone -q  https://github.com/confluentinc/kibosh.git && cd 
"/opt/kibosh" && git reset --hard $KIBOSH_VERSION && mkdir "/opt/kibosh/build" 
&& cd "/opt/kibosh/build" && ../configure && make -j 2
 
 # Set up the ducker user.
-RUN useradd -ms /bin/bash ducker && mkdir -p /home/ducker/ && rsync -aiq 
/root/.ssh/ /home/ducker/.ssh && chown -R ducker /home/ducker/ /mnt/ && echo 
'ducker ALL=(ALL) NOPASSWD: ALL' >> /etc/sudoers
+RUN useradd -ms /bin/bash ducker && mkdir -p /home/ducker/ && rsync -aiq 
/root/.ssh/ /home/ducker/.ssh && chown -R ducker /home/ducker/ /mnt/ /var/log/ 
&& echo 'ducker ALL=(ALL) NOPASSWD: ALL' >> /etc/sudoers
 USER ducker
 
 CMD sudo service ssh start && tail -f /dev/null
diff --git a/tests/docker/ducker-ak b/tests/docker/ducker-ak
inde

[kafka] branch trunk updated: MINOR: Support choosing different JVMs when running integration tests

2019-01-11 Thread ewencp
This is an automated email from the ASF dual-hosted git repository.

ewencp pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/kafka.git


The following commit(s) were added to refs/heads/trunk by this push:
 new cc33511  MINOR: Support choosing different JVMs when running 
integration tests
cc33511 is described below

commit cc33511e9a0a1493ef89afefb7df089ca546687e
Author: Xi Yang 
AuthorDate: Fri Jan 11 15:11:55 2019 -0800

MINOR: Support choosing different JVMs when running integration tests

+ Add a parameter to ducker-ak to control the OpenJDK base image.
+ Fix a few issues with using openjdk:11 as the base image.


Author: Xi Yang 

Reviewers: Ewen Cheslack-Postava 

Closes #6071 from yangxi/ducktape-jdk
---
 tests/README.md |  4 
 tests/docker/Dockerfile | 13 -
 tests/docker/ducker-ak  |  9 +++--
 3 files changed, 19 insertions(+), 7 deletions(-)

diff --git a/tests/README.md b/tests/README.md
index f42b28a..6c20553 100644
--- a/tests/README.md
+++ b/tests/README.md
@@ -36,6 +36,10 @@ 
TC_PATHS="tests/kafkatest/tests/client/pluggable_test.py::PluggableConsumerTest"
 ```
 
TC_PATHS="tests/kafkatest/tests/client/pluggable_test.py::PluggableConsumerTest.test_start_stop"
 bash tests/docker/run_tests.sh
 ```
+* Run tests with a different JVM
+```
+bash tests/docker/ducker-ak up -j 'openjdk:11'; tests/docker/run_tests.sh
+```
 
 * Notes
   - The scripts to run tests creates and destroys docker network named *knw*.
diff --git a/tests/docker/Dockerfile b/tests/docker/Dockerfile
index e5cf439..68efaee 100644
--- a/tests/docker/Dockerfile
+++ b/tests/docker/Dockerfile
@@ -13,7 +13,8 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 
-FROM openjdk:8
+ARG jdk_version=openjdk:8
+FROM $jdk_version
 
 MAINTAINER Apache Kafka d...@kafka.apache.org
 VOLUME ["/opt/kafka-dev"]
@@ -31,12 +32,14 @@ ARG ducker_creator=default
 LABEL ducker.creator=$ducker_creator
 
 # Update Linux and install necessary utilities.
-RUN apt update && apt install -y sudo netcat iptables rsync unzip wget curl jq 
coreutils openssh-server net-tools vim python-pip python-dev libffi-dev 
libssl-dev cmake pkg-config libfuse-dev && apt-get -y clean
-RUN pip install -U pip==9.0.3 setuptools && pip install --upgrade cffi 
virtualenv pyasn1 boto3 pycrypto pywinrm ipaddress enum34 && pip install 
--upgrade ducktape==0.7.1
+RUN apt update && apt install -y sudo netcat iptables rsync unzip wget curl jq 
coreutils openssh-server net-tools vim python-pip python-dev libffi-dev 
libssl-dev cmake pkg-config libfuse-dev  && apt-get -y clean
+RUN python -m pip install -U pip==9.0.3;
+RUN pip install --upgrade cffi virtualenv pyasn1 boto3 pycrypto pywinrm 
ipaddress enum34 && pip install --upgrade ducktape==0.7.1
 
 # Set up ssh
 COPY ./ssh-config /root/.ssh/config
-RUN ssh-keygen -q -t rsa -N '' -f /root/.ssh/id_rsa && cp -f 
/root/.ssh/id_rsa.pub /root/.ssh/authorized_keys
+# NOTE: The paramiko library supports the PEM-format private key, but does not 
support the RFC4716 format.
+RUN ssh-keygen -m PEM -q -t rsa -N '' -f /root/.ssh/id_rsa && cp -f 
/root/.ssh/id_rsa.pub /root/.ssh/authorized_keys
 
 # Install binary test dependencies.
 # we use the same versions as in vagrant/base.sh
@@ -71,7 +74,7 @@ RUN apt-get install fuse
 RUN cd /opt && git clone -q  https://github.com/confluentinc/kibosh.git && cd 
"/opt/kibosh" && git reset --hard $KIBOSH_VERSION && mkdir "/opt/kibosh/build" 
&& cd "/opt/kibosh/build" && ../configure && make -j 2
 
 # Set up the ducker user.
-RUN useradd -ms /bin/bash ducker && mkdir -p /home/ducker/ && rsync -aiq 
/root/.ssh/ /home/ducker/.ssh && chown -R ducker /home/ducker/ /mnt/ && echo 
'ducker ALL=(ALL) NOPASSWD: ALL' >> /etc/sudoers
+RUN useradd -ms /bin/bash ducker && mkdir -p /home/ducker/ && rsync -aiq 
/root/.ssh/ /home/ducker/.ssh && chown -R ducker /home/ducker/ /mnt/ /var/log/ 
&& echo 'ducker ALL=(ALL) NOPASSWD: ALL' >> /etc/sudoers
 USER ducker
 
 CMD sudo service ssh start && tail -f /dev/null
diff --git a/tests/docker/ducker-ak b/tests/docker/ducker-ak
index ba8ccf4..a54bd30 100755
--- a/tests/docker/ducker-ak
+++ b/tests/docker/ducker-ak
@@ -41,6 +41,9 @@ docker_run_mem

[kafka] branch trunk updated: MINOR: Safe string conversion to avoid NPEs

2018-12-05 Thread ewencp
This is an automated email from the ASF dual-hosted git repository.

ewencp pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/kafka.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 9f954ac  MINOR: Safe string conversion to avoid NPEs
9f954ac is described below

commit 9f954ac614cd5dd7efbcabe34799207128f16e63
Author: Cyrus Vafadari 
AuthorDate: Wed Dec 5 13:23:52 2018 -0800

MINOR: Safe string conversion to avoid NPEs

Should be ported back to 2.0

Author: Cyrus Vafadari 

Reviewers: Ewen Cheslack-Postava 

Closes #6004 from cyrusv/cyrus-npe
---
 .../main/java/org/apache/kafka/connect/connector/ConnectRecord.java   | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git 
a/connect/api/src/main/java/org/apache/kafka/connect/connector/ConnectRecord.java
 
b/connect/api/src/main/java/org/apache/kafka/connect/connector/ConnectRecord.java
index 7eced85..b181209 100644
--- 
a/connect/api/src/main/java/org/apache/kafka/connect/connector/ConnectRecord.java
+++ 
b/connect/api/src/main/java/org/apache/kafka/connect/connector/ConnectRecord.java
@@ -140,9 +140,9 @@ public abstract class ConnectRecord<R extends ConnectRecord<R>> {
 "topic='" + topic + '\'' +
 ", kafkaPartition=" + kafkaPartition +
 ", key=" + key +
-", keySchema=" + keySchema.toString() +
+", keySchema=" + keySchema +
 ", value=" + value +
-", valueSchema=" + valueSchema.toString() +
+", valueSchema=" + valueSchema +
 ", timestamp=" + timestamp +
 ", headers=" + headers +
 '}';



[kafka] branch 2.1 updated: MINOR: Safe string conversion to avoid NPEs

2018-12-05 Thread ewencp
This is an automated email from the ASF dual-hosted git repository.

ewencp pushed a commit to branch 2.1
in repository https://gitbox.apache.org/repos/asf/kafka.git


The following commit(s) were added to refs/heads/2.1 by this push:
 new fe2bd8e  MINOR: Safe string conversion to avoid NPEs
fe2bd8e is described below

commit fe2bd8ede410373a6f32839efe2ab6e60c03a773
Author: Cyrus Vafadari 
AuthorDate: Wed Dec 5 13:23:52 2018 -0800

MINOR: Safe string conversion to avoid NPEs

Should be ported back to 2.0

Author: Cyrus Vafadari 

Reviewers: Ewen Cheslack-Postava 

Closes #6004 from cyrusv/cyrus-npe

(cherry picked from commit 9f954ac614cd5dd7efbcabe34799207128f16e63)
Signed-off-by: Ewen Cheslack-Postava 
---
 .../main/java/org/apache/kafka/connect/connector/ConnectRecord.java   | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git 
a/connect/api/src/main/java/org/apache/kafka/connect/connector/ConnectRecord.java
 
b/connect/api/src/main/java/org/apache/kafka/connect/connector/ConnectRecord.java
index 55272c2..03326cc 100644
--- 
a/connect/api/src/main/java/org/apache/kafka/connect/connector/ConnectRecord.java
+++ 
b/connect/api/src/main/java/org/apache/kafka/connect/connector/ConnectRecord.java
@@ -140,9 +140,9 @@ public abstract class ConnectRecord<R extends ConnectRecord<R>> {
 "topic='" + topic + '\'' +
 ", kafkaPartition=" + kafkaPartition +
 ", key=" + key +
-", keySchema=" + keySchema.toString() +
+", keySchema=" + keySchema +
 ", value=" + value +
-", valueSchema=" + valueSchema.toString() +
+", valueSchema=" + valueSchema +
 ", timestamp=" + timestamp +
 ", headers=" + headers +
 '}';



[kafka] branch 2.0 updated: MINOR: Safe string conversion to avoid NPEs

2018-12-05 Thread ewencp
This is an automated email from the ASF dual-hosted git repository.

ewencp pushed a commit to branch 2.0
in repository https://gitbox.apache.org/repos/asf/kafka.git


The following commit(s) were added to refs/heads/2.0 by this push:
 new 4baf0af  MINOR: Safe string conversion to avoid NPEs
4baf0af is described below

commit 4baf0afd0478392308d55052266762b3faafe516
Author: Cyrus Vafadari 
AuthorDate: Wed Dec 5 13:23:52 2018 -0800

MINOR: Safe string conversion to avoid NPEs

Should be ported back to 2.0

Author: Cyrus Vafadari 

Reviewers: Ewen Cheslack-Postava 

Closes #6004 from cyrusv/cyrus-npe

(cherry picked from commit 9f954ac614cd5dd7efbcabe34799207128f16e63)
Signed-off-by: Ewen Cheslack-Postava 
---
 .../main/java/org/apache/kafka/connect/connector/ConnectRecord.java   | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git 
a/connect/api/src/main/java/org/apache/kafka/connect/connector/ConnectRecord.java
 
b/connect/api/src/main/java/org/apache/kafka/connect/connector/ConnectRecord.java
index 2b5d75c..4aa7d5a 100644
--- 
a/connect/api/src/main/java/org/apache/kafka/connect/connector/ConnectRecord.java
+++ 
b/connect/api/src/main/java/org/apache/kafka/connect/connector/ConnectRecord.java
@@ -140,9 +140,9 @@ public abstract class ConnectRecord<R extends ConnectRecord<R>> {
 "topic='" + topic + '\'' +
 ", kafkaPartition=" + kafkaPartition +
 ", key=" + key +
-", keySchema=" + keySchema.toString() +
+", keySchema=" + keySchema +
 ", value=" + value +
-", valueSchema=" + valueSchema.toString() +
+", valueSchema=" + valueSchema +
 ", timestamp=" + timestamp +
 ", headers=" + headers +
 '}';



[kafka] branch trunk updated: KAFKA-7551: Refactor to create producer & consumer in the worker

2018-11-29 Thread ewencp
This is an automated email from the ASF dual-hosted git repository.

ewencp pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/kafka.git


The following commit(s) were added to refs/heads/trunk by this push:
 new ace4dd0  KAFKA-7551: Refactor to create producer & consumer in the 
worker
ace4dd0 is described below

commit ace4dd00566afb7d04235bbbcc50097191af0fec
Author: Magesh Nandakumar 
AuthorDate: Thu Nov 29 23:38:50 2018 -0800

KAFKA-7551: Refactor to create producer & consumer in the worker

This is minor refactoring that brings in the creation of producer and 
consumer to the Worker. Currently, the consumer is created in the 
WorkerSinkTask. This should not affect any functionality and it just makes the 
code structure easier to understand.

Author: Magesh Nandakumar 

Reviewers: Ryanne Dolan , Randall Hauch 
, Robert Yokota , Ewen Cheslack-Postava 


Closes #5842 from mageshn/KAFKA-7551
---
 .../org/apache/kafka/connect/runtime/Worker.java   | 62 ++--
 .../kafka/connect/runtime/WorkerSinkTask.java  | 31 +-
 .../connect/runtime/ErrorHandlingTaskTest.java | 10 ++--
 .../kafka/connect/runtime/WorkerSinkTaskTest.java  | 13 ++---
 .../runtime/WorkerSinkTaskThreadedTest.java|  6 +-
 .../apache/kafka/connect/runtime/WorkerTest.java   | 66 +-
 6 files changed, 123 insertions(+), 65 deletions(-)

diff --git 
a/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/Worker.java 
b/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/Worker.java
index 81a165c..673bd4e 100644
--- a/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/Worker.java
+++ b/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/Worker.java
@@ -16,6 +16,8 @@
  */
 package org.apache.kafka.connect.runtime;
 
+import org.apache.kafka.clients.consumer.ConsumerConfig;
+import org.apache.kafka.clients.consumer.KafkaConsumer;
 import org.apache.kafka.clients.producer.KafkaProducer;
 import org.apache.kafka.clients.producer.ProducerConfig;
 import org.apache.kafka.common.MetricName;
@@ -50,6 +52,7 @@ import org.apache.kafka.connect.storage.OffsetStorageReader;
 import org.apache.kafka.connect.storage.OffsetStorageReaderImpl;
 import org.apache.kafka.connect.storage.OffsetStorageWriter;
 import org.apache.kafka.connect.util.ConnectorTaskId;
+import org.apache.kafka.connect.util.SinkUtils;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
@@ -89,7 +92,6 @@ public class Worker {
 private final Converter internalKeyConverter;
 private final Converter internalValueConverter;
 private final OffsetBackingStore offsetBackingStore;
-private final Map<String, Object> producerProps;
 
 private final ConcurrentMap<String, WorkerConnector> connectors = new ConcurrentHashMap<>();
 private final ConcurrentMap<ConnectorTaskId, WorkerTask> tasks = new ConcurrentHashMap<>();
@@ -129,19 +131,6 @@ public class Worker {
 
 this.workerConfigTransformer = initConfigTransformer();
 
-producerProps = new HashMap<>();
-producerProps.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, 
Utils.join(config.getList(WorkerConfig.BOOTSTRAP_SERVERS_CONFIG), ","));
-producerProps.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, 
"org.apache.kafka.common.serialization.ByteArraySerializer");
-producerProps.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, 
"org.apache.kafka.common.serialization.ByteArraySerializer");
-// These settings are designed to ensure there is no data loss. They 
*may* be overridden via configs passed to the
-// worker, but this may compromise the delivery guarantees of Kafka 
Connect.
-producerProps.put(ProducerConfig.REQUEST_TIMEOUT_MS_CONFIG, 
Integer.toString(Integer.MAX_VALUE));
-producerProps.put(ProducerConfig.MAX_BLOCK_MS_CONFIG, 
Long.toString(Long.MAX_VALUE));
-producerProps.put(ProducerConfig.ACKS_CONFIG, "all");
-
producerProps.put(ProducerConfig.MAX_IN_FLIGHT_REQUESTS_PER_CONNECTION, "1");
-producerProps.put(ProducerConfig.DELIVERY_TIMEOUT_MS_CONFIG, 
Integer.toString(Integer.MAX_VALUE));
-// User-specified overrides
-producerProps.putAll(config.originalsWithPrefix("producer."));
 }
 
 private WorkerConfigTransformer initConfigTransformer() {
@@ -499,6 +488,7 @@ public class Worker {
 internalKeyConverter, internalValueConverter);
 OffsetStorageWriter offsetWriter = new 
OffsetStorageWriter(offsetBackingStore, id.connector(),
 internalKeyConverter, internalValueConverter);
+Map<String, Object> producerProps = producerConfigs(config);
+KafkaProducer<byte[], byte[]> producer = new 
KafkaProducer<>(producerProps);
 
 // Note we pass the configState as it performs dynamic 
transformations under the covers
@@ -510,15 +500,54 @@ public class Worker {

[kafka] branch 2.1 updated: MINOR: Add logging to Connect SMTs

2018-11-29 Thread ewencp
This is an automated email from the ASF dual-hosted git repository.

ewencp pushed a commit to branch 2.1
in repository https://gitbox.apache.org/repos/asf/kafka.git


The following commit(s) were added to refs/heads/2.1 by this push:
 new 45b3971  MINOR: Add logging to Connect SMTs
45b3971 is described below

commit 45b39710d8bfccf945f6aa1e392704ef6008339d
Author: Cyrus Vafadari 
AuthorDate: Thu Nov 29 22:29:50 2018 -0800

MINOR: Add logging to Connect SMTs

Includes an update to the ConnectRecord string representation to give
visibility into schemas, which is useful in SMT tracing

Author: Cyrus Vafadari 

Reviewers: Randall Hauch , Konstantine Karantasis 
, Ewen Cheslack-Postava 

Closes #5860 from cyrusv/cyrus-logging

(cherry picked from commit 4712a3641619e86b8e6d901355088f6ae06e9f37)
Signed-off-by: Ewen Cheslack-Postava 
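
To surface the new trace messages from a running worker, raise the log level
for the chain's logger. A minimal example, assuming the stock
log4j.properties-based logging setup that Connect ships with:

```
log4j.logger.org.apache.kafka.connect.runtime.TransformationChain=TRACE
```

The logger name matches the class passed to LoggerFactory.getLogger() in the
diff below.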
---
 .../org/apache/kafka/connect/connector/ConnectRecord.java   |  2 ++
 .../apache/kafka/connect/runtime/TransformationChain.java   | 13 +
 .../main/java/org/apache/kafka/connect/runtime/Worker.java  |  2 ++
 .../main/java/org/apache/kafka/connect/transforms/Cast.java |  4 
 .../apache/kafka/connect/transforms/SetSchemaMetadata.java  |  7 ++-
 5 files changed, 27 insertions(+), 1 deletion(-)

diff --git 
a/connect/api/src/main/java/org/apache/kafka/connect/connector/ConnectRecord.java
 
b/connect/api/src/main/java/org/apache/kafka/connect/connector/ConnectRecord.java
index 2c5f514..55272c2 100644
--- 
a/connect/api/src/main/java/org/apache/kafka/connect/connector/ConnectRecord.java
+++ 
b/connect/api/src/main/java/org/apache/kafka/connect/connector/ConnectRecord.java
@@ -140,7 +140,9 @@ public abstract class ConnectRecord<R extends ConnectRecord<R>> {
 "topic='" + topic + '\'' +
 ", kafkaPartition=" + kafkaPartition +
 ", key=" + key +
+", keySchema=" + keySchema.toString() +
 ", value=" + value +
+", valueSchema=" + valueSchema.toString() +
 ", timestamp=" + timestamp +
 ", headers=" + headers +
 '}';
diff --git 
a/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/TransformationChain.java
 
b/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/TransformationChain.java
index 3680905..a077a01 100644
--- 
a/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/TransformationChain.java
+++ 
b/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/TransformationChain.java
@@ -20,11 +20,15 @@ import org.apache.kafka.connect.connector.ConnectRecord;
 import org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator;
 import org.apache.kafka.connect.runtime.errors.Stage;
 import org.apache.kafka.connect.transforms.Transformation;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
 
 import java.util.List;
 import java.util.Objects;
+import java.util.StringJoiner;
 
 public class TransformationChain<R extends ConnectRecord<R>> {
+private static final Logger log = 
LoggerFactory.getLogger(TransformationChain.class);
 
 private final List<Transformation<R>> transformations;
 private final RetryWithToleranceOperator retryWithToleranceOperator;
@@ -40,6 +44,8 @@ public class TransformationChain<R extends ConnectRecord<R>> {
 for (final Transformation<R> transformation : transformations) {
 final R current = record;
 
+log.trace("Applying transformation {} to {}",
+transformation.getClass().getName(), record);
 // execute the operation
 record = retryWithToleranceOperator.execute(() -> 
transformation.apply(current), Stage.TRANSFORMATION, transformation.getClass());
 
@@ -68,4 +74,11 @@ public class TransformationChain<R extends ConnectRecord<R>> {
 return Objects.hash(transformations);
 }
 
+public String toString() {
+StringJoiner chain = new StringJoiner(", ", getClass().getName() + 
"{", "}");
+for (Transformation<R> transformation : transformations) {
+chain.add(transformation.getClass().getName());
+}
+return chain.toString();
+}
 }
diff --git 
a/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/Worker.java 
b/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/Worker.java
index df73a43..1fd91d3 100644
--- a/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/Worker.java
+++ b/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/Worker.java
@@ -493,6 +493,7 @@ public class Worker {
 if (task instanceof SourceTask) {
 retryWithToleranceOperator.reporters(sourceTaskReporters(id, 
connConfig, errorHandlingMetrics));
 TransformationChain transformationChain = new 
TransformationChain<>(connConfig.transformations(), 
retryWithToleranceOperator);
+log.info("I

[kafka] branch trunk updated: MINOR: Add logging to Connect SMTs

2018-11-29 Thread ewencp
This is an automated email from the ASF dual-hosted git repository.

ewencp pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/kafka.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 4712a36  MINOR: Add logging to Connect SMTs
4712a36 is described below

commit 4712a3641619e86b8e6d901355088f6ae06e9f37
Author: Cyrus Vafadari 
AuthorDate: Thu Nov 29 22:29:50 2018 -0800

MINOR: Add logging to Connect SMTs

Includes an update to the ConnectRecord string representation to give
visibility into schemas, which is useful in SMT tracing

Author: Cyrus Vafadari 

Reviewers: Randall Hauch , Konstantine Karantasis 
, Ewen Cheslack-Postava 

Closes #5860 from cyrusv/cyrus-logging
---
 .../org/apache/kafka/connect/connector/ConnectRecord.java   |  2 ++
 .../apache/kafka/connect/runtime/TransformationChain.java   | 13 +
 .../main/java/org/apache/kafka/connect/runtime/Worker.java  |  2 ++
 .../main/java/org/apache/kafka/connect/transforms/Cast.java |  4 
 .../apache/kafka/connect/transforms/SetSchemaMetadata.java  |  7 ++-
 5 files changed, 27 insertions(+), 1 deletion(-)

diff --git 
a/connect/api/src/main/java/org/apache/kafka/connect/connector/ConnectRecord.java
 
b/connect/api/src/main/java/org/apache/kafka/connect/connector/ConnectRecord.java
index aa58e63..7eced85 100644
--- 
a/connect/api/src/main/java/org/apache/kafka/connect/connector/ConnectRecord.java
+++ 
b/connect/api/src/main/java/org/apache/kafka/connect/connector/ConnectRecord.java
@@ -140,7 +140,9 @@ public abstract class ConnectRecord<R extends ConnectRecord<R>> {
 "topic='" + topic + '\'' +
 ", kafkaPartition=" + kafkaPartition +
 ", key=" + key +
+", keySchema=" + keySchema.toString() +
 ", value=" + value +
+", valueSchema=" + valueSchema.toString() +
 ", timestamp=" + timestamp +
 ", headers=" + headers +
 '}';
diff --git 
a/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/TransformationChain.java
 
b/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/TransformationChain.java
index 3680905..a077a01 100644
--- 
a/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/TransformationChain.java
+++ 
b/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/TransformationChain.java
@@ -20,11 +20,15 @@ import org.apache.kafka.connect.connector.ConnectRecord;
 import org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator;
 import org.apache.kafka.connect.runtime.errors.Stage;
 import org.apache.kafka.connect.transforms.Transformation;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
 
 import java.util.List;
 import java.util.Objects;
+import java.util.StringJoiner;
 
 public class TransformationChain<R extends ConnectRecord<R>> {
+private static final Logger log = 
LoggerFactory.getLogger(TransformationChain.class);
 
 private final List<Transformation<R>> transformations;
 private final RetryWithToleranceOperator retryWithToleranceOperator;
@@ -40,6 +44,8 @@ public class TransformationChain<R extends ConnectRecord<R>> {
 for (final Transformation<R> transformation : transformations) {
 final R current = record;
 
+log.trace("Applying transformation {} to {}",
+transformation.getClass().getName(), record);
 // execute the operation
 record = retryWithToleranceOperator.execute(() -> 
transformation.apply(current), Stage.TRANSFORMATION, transformation.getClass());
 
@@ -68,4 +74,11 @@ public class TransformationChain<R extends ConnectRecord<R>> {
 return Objects.hash(transformations);
 }
 
+public String toString() {
+StringJoiner chain = new StringJoiner(", ", getClass().getName() + 
"{", "}");
+for (Transformation<R> transformation : transformations) {
+chain.add(transformation.getClass().getName());
+}
+return chain.toString();
+}
 }
diff --git 
a/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/Worker.java 
b/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/Worker.java
index 6e021b9..81a165c 100644
--- a/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/Worker.java
+++ b/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/Worker.java
@@ -494,6 +494,7 @@ public class Worker {
 if (task instanceof SourceTask) {
 retryWithToleranceOperator.reporters(sourceTaskReporters(id, 
connConfig, errorHandlingMetrics));
 TransformationChain transformationChain = new 
TransformationChain<>(connConfig.transformations(), 
retryWithToleranceOperator);
+log.info("Initializing: {}", transformationChain);
 OffsetStorageReader offsetReader = new 
OffsetStorageReaderImpl(offsetBackingStore, id

[kafka] branch trunk updated: MINOR: Fix handling of dummy record in EndToEndLatency tool

2018-11-29 Thread ewencp
This is an automated email from the ASF dual-hosted git repository.

ewencp pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/kafka.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 3acebe6  MINOR: Fix handling of dummy record in EndToEndLatency tool
3acebe6 is described below

commit 3acebe63836b4a30d21f8c2ca2934e1a0fcad2f5
Author: Anna Povzner 
AuthorDate: Thu Nov 29 22:21:20 2018 -0800

MINOR: Fix handling of dummy record in EndToEndLatency tool

The EndToEndLatency tool produces a dummy record in case the topic does not 
exist. This behavior was introduced in PR 
https://github.com/apache/kafka/pull/5319 as part of updating the tool to use 
the latest consumer API. However, if we run the tool with producer acks == 1, 
the high watermark may not be updated before we reset consumer offsets to 
latest. In rare cases when this happens, the tool will throw an exception in 
the for loop where the consumer will unexpectedly consume the dumm [...]

This PR checks if the topic exists, and creates the topic using AdminClient 
if it does not.

Author: Anna Povzner 

Reviewers: Ismael Juma , Ewen Cheslack-Postava 


Closes #5950 from apovzner/fix-EndToEndLatency
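
The diff is truncated before the body of the new createTopic helper appears;
the real implementation lives in EndToEndLatency.scala. A hedged sketch of
what an AdminClient-based helper of this shape can look like (written in Java
for illustration, with hypothetical names):

```
import java.util.Collections;
import java.util.Properties;
import java.util.concurrent.ExecutionException;

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.NewTopic;

class CreateTopicSketch {
    // The partition count and replication factor mirror the
    // defaultNumPartitions / defaultReplicationFactor constants the commit adds.
    static void createTopic(String topic, Properties adminProps)
            throws ExecutionException, InterruptedException {
        try (AdminClient admin = AdminClient.create(adminProps)) {
            NewTopic newTopic = new NewTopic(topic, 1, (short) 1);
            // Block until the broker has actually created the topic.
            admin.createTopics(Collections.singleton(newTopic)).all().get();
        }
    }
}
```

Creating the topic explicitly avoids the race the message describes, where a
dummy record produced with acks=1 could be consumed before the high watermark
advances.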
---
 .../main/scala/kafka/tools/EndToEndLatency.scala   | 41 +-
 1 file changed, 32 insertions(+), 9 deletions(-)

diff --git a/core/src/main/scala/kafka/tools/EndToEndLatency.scala 
b/core/src/main/scala/kafka/tools/EndToEndLatency.scala
index 4849b1e..8107584 100755
--- a/core/src/main/scala/kafka/tools/EndToEndLatency.scala
+++ b/core/src/main/scala/kafka/tools/EndToEndLatency.scala
@@ -19,9 +19,11 @@ package kafka.tools
 
 import java.nio.charset.StandardCharsets
 import java.time.Duration
-import java.util.{Arrays, Properties}
+import java.util.{Collections, Arrays, Properties}
 
 import kafka.utils.Exit
+import org.apache.kafka.clients.admin.NewTopic
+import org.apache.kafka.clients.{admin, CommonClientConfigs}
 import org.apache.kafka.clients.consumer.{ConsumerConfig, KafkaConsumer}
 import org.apache.kafka.clients.producer._
 import org.apache.kafka.common.TopicPartition
@@ -44,6 +46,8 @@ import scala.util.Random
 
 object EndToEndLatency {
   private val timeout: Long = 60000
+  private val defaultReplicationFactor: Short = 1
+  private val defaultNumPartitions: Int = 1
 
   def main(args: Array[String]) {
 if (args.length != 5 && args.length != 6) {
@@ -61,10 +65,13 @@ object EndToEndLatency {
 if (!List("1", "all").contains(producerAcks))
   throw new IllegalArgumentException("Latency testing requires synchronous 
acknowledgement. Please use 1 or all")
 
-def loadProps: Properties = propsFile.map(Utils.loadProps).getOrElse(new 
Properties())
+def loadPropsWithBootstrapServers: Properties = {
+  val props = propsFile.map(Utils.loadProps).getOrElse(new Properties())
+  props.put(CommonClientConfigs.BOOTSTRAP_SERVERS_CONFIG, brokerList)
+  props
+}
 
-val consumerProps = loadProps
-consumerProps.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, brokerList)
+val consumerProps = loadPropsWithBootstrapServers
 consumerProps.put(ConsumerConfig.GROUP_ID_CONFIG, "test-group-" + 
System.currentTimeMillis())
 consumerProps.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false")
 consumerProps.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest")
@@ -73,8 +80,7 @@ object EndToEndLatency {
 consumerProps.put(ConsumerConfig.FETCH_MAX_WAIT_MS_CONFIG, "0") //ensure 
we have no temporal batching
 val consumer = new KafkaConsumer[Array[Byte], Array[Byte]](consumerProps)
 
-val producerProps = loadProps
-producerProps.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, brokerList)
+val producerProps = loadPropsWithBootstrapServers
 producerProps.put(ProducerConfig.LINGER_MS_CONFIG, "0") //ensure writes 
are synchronous
 producerProps.put(ProducerConfig.MAX_BLOCK_MS_CONFIG, 
Long.MaxValue.toString)
 producerProps.put(ProducerConfig.ACKS_CONFIG, producerAcks.toString)
@@ -82,15 +88,22 @@ object EndToEndLatency {
 producerProps.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, 
"org.apache.kafka.common.serialization.ByteArraySerializer")
 val producer = new KafkaProducer[Array[Byte], Array[Byte]](producerProps)
 
-// sends a dummy message to create the topic if it doesn't exist
-producer.send(new ProducerRecord[Array[Byte], Array[Byte]](topic, 
Array[Byte]())).get()
-
 def finalise() {
   consumer.commitSync()
   producer.close()
   consumer.close()
 }
 
+// create topic if it does not exist
+if (!consumer.listTopics().containsKey(topic)) {
+  try {
+createTopic(topic, loadPropsWithBootstrapServers)
+  } catch {
+case t: Throwable =>
+  finalise()
+  throw t

[kafka] branch 2.1 updated: MINOR: Fix handling of dummy record in EndToEndLatency tool

2018-11-29 Thread ewencp
This is an automated email from the ASF dual-hosted git repository.

ewencp pushed a commit to branch 2.1
in repository https://gitbox.apache.org/repos/asf/kafka.git


The following commit(s) were added to refs/heads/2.1 by this push:
 new b0860dc  MINOR: Fix handling of dummy record in EndToEndLatency tool
b0860dc is described below

commit b0860dcb3fd00b8099d43ee941ead1f6a82167c5
Author: Anna Povzner 
AuthorDate: Thu Nov 29 22:21:20 2018 -0800

MINOR: Fix handling of dummy record in EndToEndLatency tool

The EndToEndLatency tool produces a dummy record in case the topic does not 
exist. This behavior was introduced in PR 
https://github.com/apache/kafka/pull/5319 as part of updating the tool to use 
the latest consumer API. However, if we run the tool with producer acks == 1, 
the high watermark may not be updated before we reset consumer offsets to 
latest. In rare cases when this happens, the tool will throw an exception in 
the for loop where the consumer will unexpectedly consume the dumm [...]

This PR checks if the topic exists, and creates the topic using AdminClient 
if it does not.

Author: Anna Povzner 

Reviewers: Ismael Juma , Ewen Cheslack-Postava 


Closes #5950 from apovzner/fix-EndToEndLatency

(cherry picked from commit 3acebe63836b4a30d21f8c2ca2934e1a0fcad2f5)
Signed-off-by: Ewen Cheslack-Postava 
---
 .../main/scala/kafka/tools/EndToEndLatency.scala   | 41 +-
 1 file changed, 32 insertions(+), 9 deletions(-)

diff --git a/core/src/main/scala/kafka/tools/EndToEndLatency.scala 
b/core/src/main/scala/kafka/tools/EndToEndLatency.scala
index 4849b1e..8107584 100755
--- a/core/src/main/scala/kafka/tools/EndToEndLatency.scala
+++ b/core/src/main/scala/kafka/tools/EndToEndLatency.scala
@@ -19,9 +19,11 @@ package kafka.tools
 
 import java.nio.charset.StandardCharsets
 import java.time.Duration
-import java.util.{Arrays, Properties}
+import java.util.{Collections, Arrays, Properties}
 
 import kafka.utils.Exit
+import org.apache.kafka.clients.admin.NewTopic
+import org.apache.kafka.clients.{admin, CommonClientConfigs}
 import org.apache.kafka.clients.consumer.{ConsumerConfig, KafkaConsumer}
 import org.apache.kafka.clients.producer._
 import org.apache.kafka.common.TopicPartition
@@ -44,6 +46,8 @@ import scala.util.Random
 
 object EndToEndLatency {
   private val timeout: Long = 60000
+  private val defaultReplicationFactor: Short = 1
+  private val defaultNumPartitions: Int = 1
 
   def main(args: Array[String]) {
 if (args.length != 5 && args.length != 6) {
@@ -61,10 +65,13 @@ object EndToEndLatency {
 if (!List("1", "all").contains(producerAcks))
   throw new IllegalArgumentException("Latency testing requires synchronous 
acknowledgement. Please use 1 or all")
 
-def loadProps: Properties = propsFile.map(Utils.loadProps).getOrElse(new 
Properties())
+def loadPropsWithBootstrapServers: Properties = {
+  val props = propsFile.map(Utils.loadProps).getOrElse(new Properties())
+  props.put(CommonClientConfigs.BOOTSTRAP_SERVERS_CONFIG, brokerList)
+  props
+}
 
-val consumerProps = loadProps
-consumerProps.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, brokerList)
+val consumerProps = loadPropsWithBootstrapServers
 consumerProps.put(ConsumerConfig.GROUP_ID_CONFIG, "test-group-" + 
System.currentTimeMillis())
 consumerProps.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false")
 consumerProps.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest")
@@ -73,8 +80,7 @@ object EndToEndLatency {
 consumerProps.put(ConsumerConfig.FETCH_MAX_WAIT_MS_CONFIG, "0") //ensure 
we have no temporal batching
 val consumer = new KafkaConsumer[Array[Byte], Array[Byte]](consumerProps)
 
-val producerProps = loadProps
-producerProps.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, brokerList)
+val producerProps = loadPropsWithBootstrapServers
 producerProps.put(ProducerConfig.LINGER_MS_CONFIG, "0") //ensure writes 
are synchronous
 producerProps.put(ProducerConfig.MAX_BLOCK_MS_CONFIG, 
Long.MaxValue.toString)
 producerProps.put(ProducerConfig.ACKS_CONFIG, producerAcks.toString)
@@ -82,15 +88,22 @@ object EndToEndLatency {
 producerProps.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, 
"org.apache.kafka.common.serialization.ByteArraySerializer")
 val producer = new KafkaProducer[Array[Byte], Array[Byte]](producerProps)
 
-// sends a dummy message to create the topic if it doesn't exist
-producer.send(new ProducerRecord[Array[Byte], Array[Byte]](topic, 
Array[Byte]())).get()
-
 def finalise() {
   consumer.commitSync()
   producer.close()
   consumer.close()
 }
 
+// create topic if it does not exist
+if (!consumer.listTopics().containsKey(topic)) {
+  try {
+createTopic(topic, loadPropsWithBootstrapServers)

[kafka] branch 2.0 updated: KAFKA-7620: Fix restart logic for TTLs in WorkerConfigTransformer

2018-11-27 Thread ewencp
This is an automated email from the ASF dual-hosted git repository.

ewencp pushed a commit to branch 2.0
in repository https://gitbox.apache.org/repos/asf/kafka.git


The following commit(s) were added to refs/heads/2.0 by this push:
 new e7298f4  KAFKA-7620: Fix restart logic for TTLs in 
WorkerConfigTransformer
e7298f4 is described below

commit e7298f4fc53f27f91564f60c3818fa392287ff33
Author: Robert Yokota 
AuthorDate: Tue Nov 27 22:01:21 2018 -0800

KAFKA-7620: Fix restart logic for TTLs in WorkerConfigTransformer

The restart logic for TTLs in `WorkerConfigTransformer` was broken when 
trying to make it toggle-able. Accessing the toggle through the `Herder` 
causes the same code to be called recursively. This fix accesses the toggle 
by looking directly in the properties map that is passed to 
`WorkerConfigTransformer`.

Author: Robert Yokota 

Reviewers: Magesh Nandakumar , Ewen 
Cheslack-Postava 

Closes #5914 from rayokota/KAFKA-7620

(cherry picked from commit a2e87feb8b1db8200ca3a34aa72b0802e8f61096)
Signed-off-by: Ewen Cheslack-Postava 
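
A minimal sketch (not the actual patch) of the pattern the fix describes:
read the reload-action toggle directly from the connector properties handed
to WorkerConfigTransformer rather than calling back into the Herder. The
"config.action.reload" key and the restart default mirror ConnectorConfig;
the helper itself is illustrative:

```
import java.util.Locale;
import java.util.Map;

class ReloadActionSketch {
    enum ConfigReloadAction { NONE, RESTART }

    static ConfigReloadAction reloadAction(Map<String, String> connectorProps) {
        // Looking the value up in the plain properties map avoids the
        // recursive Herder call that broke the original toggle.
        String value = connectorProps.get("config.action.reload");
        return value == null
                ? ConfigReloadAction.RESTART
                : ConfigReloadAction.valueOf(value.toUpperCase(Locale.ROOT));
    }
}
```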
---
 .../kafka/connect/runtime/ConnectorConfig.java |  5 ++-
 .../org/apache/kafka/connect/runtime/Herder.java   |  6 ---
 .../connect/runtime/WorkerConfigTransformer.java   | 44 ++
 .../runtime/distributed/DistributedHerder.java |  8 
 .../runtime/standalone/StandaloneHerder.java   |  8 
 .../runtime/WorkerConfigTransformerTest.java   | 13 ---
 6 files changed, 39 insertions(+), 45 deletions(-)

diff --git 
a/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/ConnectorConfig.java
 
b/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/ConnectorConfig.java
index 9d1a50d..d030fed 100644
--- 
a/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/ConnectorConfig.java
+++ 
b/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/ConnectorConfig.java
@@ -35,6 +35,7 @@ import java.util.HashMap;
 import java.util.HashSet;
 import java.util.LinkedHashSet;
 import java.util.List;
+import java.util.Locale;
 import java.util.Map;
 
 import static 
org.apache.kafka.common.config.ConfigDef.NonEmptyStringWithoutControlChars.nonEmptyStringWithoutControlChars;
@@ -105,8 +106,8 @@ public class ConnectorConfig extends AbstractConfig {
 "indicates that a configuration value will expire in the future.";
 
 private static final String CONFIG_RELOAD_ACTION_DISPLAY = "Reload Action";
-public static final String CONFIG_RELOAD_ACTION_NONE = 
Herder.ConfigReloadAction.NONE.toString();
-public static final String CONFIG_RELOAD_ACTION_RESTART = 
Herder.ConfigReloadAction.RESTART.toString();
+public static final String CONFIG_RELOAD_ACTION_NONE = 
Herder.ConfigReloadAction.NONE.name().toLowerCase(Locale.ROOT);
+public static final String CONFIG_RELOAD_ACTION_RESTART = 
Herder.ConfigReloadAction.RESTART.name().toLowerCase(Locale.ROOT);
 
 public static final String ERRORS_RETRY_TIMEOUT_CONFIG = 
"errors.retry.timeout";
 public static final String ERRORS_RETRY_TIMEOUT_DISPLAY = "Retry Timeout 
for Errors";
diff --git 
a/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/Herder.java 
b/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/Herder.java
index 5c7cc14..c572e20 100644
--- a/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/Herder.java
+++ b/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/Herder.java
@@ -149,12 +149,6 @@ public interface Herder {
 void restartTask(ConnectorTaskId id, Callback cb);
 
 /**
- * Get the configuration reload action.
- * @param connName name of the connector
- */
-ConfigReloadAction connectorConfigReloadAction(final String connName);
-
-/**
  * Restart the connector.
  * @param connName name of the connector
  * @param cb callback to invoke upon completion
diff --git 
a/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/WorkerConfigTransformer.java
 
b/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/WorkerConfigTransformer.java
index 1b715c7..3373d5c 100644
--- 
a/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/WorkerConfigTransformer.java
+++ 
b/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/WorkerConfigTransformer.java
@@ -16,10 +16,15 @@
  */
 package org.apache.kafka.connect.runtime;
 
+import org.apache.kafka.common.config.ConfigDef;
 import org.apache.kafka.common.config.provider.ConfigProvider;
 import org.apache.kafka.common.config.ConfigTransformer;
 import org.apache.kafka.common.config.ConfigTransformerResult;
+import org.apache.kafka.connect.runtime.Herder.ConfigReloadAction;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
 
+import java.util.Locale;
 import java.util.Map;
 import java.util.concurrent.ConcurrentHashMap;
 import java.util.concurrent

[kafka] branch 2.1 updated: KAFKA-7620: Fix restart logic for TTLs in WorkerConfigTransformer

2018-11-27 Thread ewencp
This is an automated email from the ASF dual-hosted git repository.

ewencp pushed a commit to branch 2.1
in repository https://gitbox.apache.org/repos/asf/kafka.git


The following commit(s) were added to refs/heads/2.1 by this push:
 new 9951bf9  KAFKA-7620: Fix restart logic for TTLs in 
WorkerConfigTransformer
9951bf9 is described below

commit 9951bf911125c51a2574ac0dbb9913bc0500b594
Author: Robert Yokota 
AuthorDate: Tue Nov 27 22:01:21 2018 -0800

KAFKA-7620: Fix restart logic for TTLs in WorkerConfigTransformer

The restart logic for TTLs in `WorkerConfigTransformer` was broken when 
trying to make it toggle-able. Accessing the toggle through the `Herder` 
causes the same code to be called recursively. This fix accesses the toggle 
by looking directly in the properties map that is passed to 
`WorkerConfigTransformer`.

Author: Robert Yokota 

Reviewers: Magesh Nandakumar , Ewen 
Cheslack-Postava 

Closes #5914 from rayokota/KAFKA-7620

(cherry picked from commit a2e87feb8b1db8200ca3a34aa72b0802e8f61096)
Signed-off-by: Ewen Cheslack-Postava 
---
 .../kafka/connect/runtime/ConnectorConfig.java |  5 ++-
 .../org/apache/kafka/connect/runtime/Herder.java   |  6 ---
 .../connect/runtime/WorkerConfigTransformer.java   | 44 ++
 .../runtime/distributed/DistributedHerder.java |  8 
 .../runtime/standalone/StandaloneHerder.java   |  8 
 .../runtime/WorkerConfigTransformerTest.java   | 13 ---
 6 files changed, 39 insertions(+), 45 deletions(-)

diff --git 
a/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/ConnectorConfig.java
 
b/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/ConnectorConfig.java
index 10096a5..e915843 100644
--- 
a/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/ConnectorConfig.java
+++ 
b/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/ConnectorConfig.java
@@ -35,6 +35,7 @@ import java.util.HashMap;
 import java.util.HashSet;
 import java.util.LinkedHashSet;
 import java.util.List;
+import java.util.Locale;
 import java.util.Map;
 
 import static 
org.apache.kafka.common.config.ConfigDef.NonEmptyStringWithoutControlChars.nonEmptyStringWithoutControlChars;
@@ -105,8 +106,8 @@ public class ConnectorConfig extends AbstractConfig {
 "indicates that a configuration value will expire in the future.";
 
 private static final String CONFIG_RELOAD_ACTION_DISPLAY = "Reload Action";
-public static final String CONFIG_RELOAD_ACTION_NONE = 
Herder.ConfigReloadAction.NONE.toString();
-public static final String CONFIG_RELOAD_ACTION_RESTART = 
Herder.ConfigReloadAction.RESTART.toString();
+public static final String CONFIG_RELOAD_ACTION_NONE = 
Herder.ConfigReloadAction.NONE.name().toLowerCase(Locale.ROOT);
+public static final String CONFIG_RELOAD_ACTION_RESTART = 
Herder.ConfigReloadAction.RESTART.name().toLowerCase(Locale.ROOT);
 
 public static final String ERRORS_RETRY_TIMEOUT_CONFIG = 
"errors.retry.timeout";
 public static final String ERRORS_RETRY_TIMEOUT_DISPLAY = "Retry Timeout 
for Errors";
diff --git 
a/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/Herder.java 
b/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/Herder.java
index 5c7cc14..c572e20 100644
--- a/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/Herder.java
+++ b/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/Herder.java
@@ -149,12 +149,6 @@ public interface Herder {
 void restartTask(ConnectorTaskId id, Callback cb);
 
 /**
- * Get the configuration reload action.
- * @param connName name of the connector
- */
-ConfigReloadAction connectorConfigReloadAction(final String connName);
-
-/**
  * Restart the connector.
  * @param connName name of the connector
  * @param cb callback to invoke upon completion
diff --git 
a/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/WorkerConfigTransformer.java
 
b/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/WorkerConfigTransformer.java
index 1b715c7..3373d5c 100644
--- 
a/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/WorkerConfigTransformer.java
+++ 
b/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/WorkerConfigTransformer.java
@@ -16,10 +16,15 @@
  */
 package org.apache.kafka.connect.runtime;
 
+import org.apache.kafka.common.config.ConfigDef;
 import org.apache.kafka.common.config.provider.ConfigProvider;
 import org.apache.kafka.common.config.ConfigTransformer;
 import org.apache.kafka.common.config.ConfigTransformerResult;
+import org.apache.kafka.connect.runtime.Herder.ConfigReloadAction;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
 
+import java.util.Locale;
 import java.util.Map;
 import java.util.concurrent.ConcurrentHashMap;
 import java.util.concurrent

[kafka] branch trunk updated: KAFKA-7620: Fix restart logic for TTLs in WorkerConfigTransformer

2018-11-27 Thread ewencp
This is an automated email from the ASF dual-hosted git repository.

ewencp pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/kafka.git


The following commit(s) were added to refs/heads/trunk by this push:
 new a2e87fe  KAFKA-7620: Fix restart logic for TTLs in 
WorkerConfigTransformer
a2e87fe is described below

commit a2e87feb8b1db8200ca3a34aa72b0802e8f61096
Author: Robert Yokota 
AuthorDate: Tue Nov 27 22:01:21 2018 -0800

KAFKA-7620: Fix restart logic for TTLs in WorkerConfigTransformer

The restart logic for TTLs in `WorkerConfigTransformer` was broken when 
trying to make it toggle-able. Accessing the toggle through the `Herder` 
causes the same code to be called recursively. This fix accesses the toggle 
by looking directly in the properties map that is passed to 
`WorkerConfigTransformer`.

Author: Robert Yokota 

Reviewers: Magesh Nandakumar , Ewen 
Cheslack-Postava 

Closes #5914 from rayokota/KAFKA-7620
---
 .../kafka/connect/runtime/ConnectorConfig.java |  5 ++-
 .../org/apache/kafka/connect/runtime/Herder.java   |  6 ---
 .../connect/runtime/WorkerConfigTransformer.java   | 44 ++
 .../runtime/distributed/DistributedHerder.java |  8 
 .../runtime/standalone/StandaloneHerder.java   |  8 
 .../runtime/WorkerConfigTransformerTest.java   | 13 ---
 6 files changed, 39 insertions(+), 45 deletions(-)

diff --git 
a/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/ConnectorConfig.java
 
b/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/ConnectorConfig.java
index efcc01d..8889aad 100644
--- 
a/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/ConnectorConfig.java
+++ 
b/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/ConnectorConfig.java
@@ -35,6 +35,7 @@ import java.util.HashMap;
 import java.util.HashSet;
 import java.util.LinkedHashSet;
 import java.util.List;
+import java.util.Locale;
 import java.util.Map;
 
 import static 
org.apache.kafka.common.config.ConfigDef.NonEmptyStringWithoutControlChars.nonEmptyStringWithoutControlChars;
@@ -105,8 +106,8 @@ public class ConnectorConfig extends AbstractConfig {
 "indicates that a configuration value will expire in the future.";
 
 private static final String CONFIG_RELOAD_ACTION_DISPLAY = "Reload Action";
-public static final String CONFIG_RELOAD_ACTION_NONE = 
Herder.ConfigReloadAction.NONE.toString();
-public static final String CONFIG_RELOAD_ACTION_RESTART = 
Herder.ConfigReloadAction.RESTART.toString();
+public static final String CONFIG_RELOAD_ACTION_NONE = 
Herder.ConfigReloadAction.NONE.name().toLowerCase(Locale.ROOT);
+public static final String CONFIG_RELOAD_ACTION_RESTART = 
Herder.ConfigReloadAction.RESTART.name().toLowerCase(Locale.ROOT);
 
 public static final String ERRORS_RETRY_TIMEOUT_CONFIG = 
"errors.retry.timeout";
 public static final String ERRORS_RETRY_TIMEOUT_DISPLAY = "Retry Timeout 
for Errors";
diff --git 
a/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/Herder.java 
b/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/Herder.java
index 5c7cc14..c572e20 100644
--- a/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/Herder.java
+++ b/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/Herder.java
@@ -149,12 +149,6 @@ public interface Herder {
 void restartTask(ConnectorTaskId id, Callback cb);
 
 /**
- * Get the configuration reload action.
- * @param connName name of the connector
- */
-ConfigReloadAction connectorConfigReloadAction(final String connName);
-
-/**
  * Restart the connector.
  * @param connName name of the connector
  * @param cb callback to invoke upon completion
diff --git 
a/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/WorkerConfigTransformer.java
 
b/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/WorkerConfigTransformer.java
index 1b715c7..3373d5c 100644
--- 
a/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/WorkerConfigTransformer.java
+++ 
b/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/WorkerConfigTransformer.java
@@ -16,10 +16,15 @@
  */
 package org.apache.kafka.connect.runtime;
 
+import org.apache.kafka.common.config.ConfigDef;
 import org.apache.kafka.common.config.provider.ConfigProvider;
 import org.apache.kafka.common.config.ConfigTransformer;
 import org.apache.kafka.common.config.ConfigTransformerResult;
+import org.apache.kafka.connect.runtime.Herder.ConfigReloadAction;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
 
+import java.util.Locale;
 import java.util.Map;
 import java.util.concurrent.ConcurrentHashMap;
 import java.util.concurrent.ConcurrentMap;
@@ -29,6 +34,8 @@ import java.util.concurrent.ConcurrentMap;
  * retrieved TTL values.
  */
 public class WorkerConfigTransformer {

[kafka] branch 2.1 updated: KAFKA-7560; PushHttpMetricsReporter should not convert metric value to double

2018-11-07 Thread ewencp
This is an automated email from the ASF dual-hosted git repository.

ewencp pushed a commit to branch 2.1
in repository https://gitbox.apache.org/repos/asf/kafka.git


The following commit(s) were added to refs/heads/2.1 by this push:
 new c5acbef  KAFKA-7560; PushHttpMetricsReporter should not convert metric 
value to double
c5acbef is described below

commit c5acbef82b6050df603a073961c25688a0fdebb9
Author: Dong Lin 
AuthorDate: Wed Nov 7 08:04:29 2018 -0800

KAFKA-7560; PushHttpMetricsReporter should not convert metric value to 
double

Currently PushHttpMetricsReporter will convert the value from 
KafkaMetric.metricValue() to double. This will not work for non-numerical 
metrics such as the version in AppInfoParser, whose value can be a string. 
This has caused an issue for PushHttpMetricsReporter, which in turn caused 
the system test kafkatest.tests.client.quota_test.QuotaTest.test_quota to 
fail.

Since we allow the metric value to be an object, PushHttpMetricsReporter 
should also read the metric value as an object and pass it to the HTTP 
server.

Author: Dong Lin 

Reviewers: Manikumar Reddy O , Ewen 
Cheslack-Postava 

Closes #5886 from lindong28/KAFKA-7560

(cherry picked from commit df0faee09787ec4d14a1a5da98fe9bf4cd7f461c)
Signed-off-by: Ewen Cheslack-Postava 
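
A toy illustration of the bug being fixed: KafkaMetric.metricValue() is typed
Object, and gauge metrics such as AppInfoParser's version return a String, so
an unconditional cast to Double throws a ClassCastException for them:

```
import org.apache.kafka.common.metrics.KafkaMetric;

class MetricValueSketch {
    static Object sample(KafkaMetric metric) {
        // Old code: double value = (Double) metric.metricValue();
        // That cast fails for non-numeric gauges (e.g. a String version).
        return metric.metricValue(); // fixed: forward the value as-is
    }
}
```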
---
 .../kafka/tools/PushHttpMetricsReporter.java   |  9 ++---
 .../kafka/tools/PushHttpMetricsReporterTest.java   | 47 --
 2 files changed, 38 insertions(+), 18 deletions(-)

diff --git 
a/tools/src/main/java/org/apache/kafka/tools/PushHttpMetricsReporter.java 
b/tools/src/main/java/org/apache/kafka/tools/PushHttpMetricsReporter.java
index 6adebf5..b33b75c 100644
--- a/tools/src/main/java/org/apache/kafka/tools/PushHttpMetricsReporter.java
+++ b/tools/src/main/java/org/apache/kafka/tools/PushHttpMetricsReporter.java
@@ -174,8 +174,7 @@ public class PushHttpMetricsReporter implements 
MetricsReporter {
 samples = new ArrayList<>(metrics.size());
 for (KafkaMetric metric : metrics.values()) {
 MetricName name = metric.metricName();
-double value = (Double) metric.metricValue();
-samples.add(new MetricValue(name.name(), name.group(), 
name.tags(), value));
+samples.add(new MetricValue(name.name(), name.group(), 
name.tags(), metric.metricValue()));
 }
 }
 
@@ -212,9 +211,9 @@ public class PushHttpMetricsReporter implements 
MetricsReporter {
 } else {
 log.info("Finished reporting metrics with response code 
{}", responseCode);
 }
-} catch (Exception e) {
-log.error("Error reporting metrics", e);
-throw new KafkaException("Failed to report current metrics", 
e);
+} catch (Throwable t) {
+log.error("Error reporting metrics", t);
+throw new KafkaException("Failed to report current metrics", 
t);
 } finally {
 if (connection != null) {
 connection.disconnect();
diff --git 
a/tools/src/test/java/org/apache/kafka/tools/PushHttpMetricsReporterTest.java 
b/tools/src/test/java/org/apache/kafka/tools/PushHttpMetricsReporterTest.java
index 1cd3799..3a8458c 100644
--- 
a/tools/src/test/java/org/apache/kafka/tools/PushHttpMetricsReporterTest.java
+++ 
b/tools/src/test/java/org/apache/kafka/tools/PushHttpMetricsReporterTest.java
@@ -18,10 +18,11 @@ package org.apache.kafka.tools;
 
 import com.fasterxml.jackson.databind.JsonNode;
 import com.fasterxml.jackson.databind.ObjectMapper;
+import java.util.List;
 import org.apache.kafka.common.MetricName;
 import org.apache.kafka.common.config.ConfigException;
+import org.apache.kafka.common.metrics.Gauge;
 import org.apache.kafka.common.metrics.KafkaMetric;
-import org.apache.kafka.common.metrics.Measurable;
 import org.apache.kafka.common.metrics.MetricConfig;
 import org.apache.kafka.common.utils.MockTime;
 import org.apache.kafka.common.utils.Time;
@@ -184,32 +185,40 @@ public class PushHttpMetricsReporterTest {
 KafkaMetric metric1 = new KafkaMetric(
 new Object(),
 new MetricName("name1", "group1", "desc1", 
Collections.singletonMap("key1", "value1")),
-new ImmutableValue(1.0),
+new ImmutableValue<>(1.0),
 null,
 time
 );
 KafkaMetric newMetric1 = new KafkaMetric(
 new Object(),
 new MetricName("name1", "group1", "desc1", 
Collections.singletonMap("key1", "value1")),
-new ImmutableValue(-1.0),
+new ImmutableValue<>(-1.0),
 null,
 time
 );
  

[kafka] branch trunk updated: KAFKA-7560; PushHttpMetricsReporter should not convert metric value to double

2018-11-07 Thread ewencp
This is an automated email from the ASF dual-hosted git repository.

ewencp pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/kafka.git


The following commit(s) were added to refs/heads/trunk by this push:
 new df0faee  KAFKA-7560; PushHttpMetricsReporter should not convert metric 
value to double
df0faee is described below

commit df0faee09787ec4d14a1a5da98fe9bf4cd7f461c
Author: Dong Lin 
AuthorDate: Wed Nov 7 08:04:29 2018 -0800

KAFKA-7560; PushHttpMetricsReporter should not convert metric value to 
double

Currently PushHttpMetricsReporter will convert the value from 
KafkaMetric.metricValue() to double. This will not work for non-numerical 
metrics such as the version in AppInfoParser, whose value can be a string. 
This has caused an issue for PushHttpMetricsReporter, which in turn caused 
the system test kafkatest.tests.client.quota_test.QuotaTest.test_quota to 
fail.

Since we allow the metric value to be an object, PushHttpMetricsReporter 
should also read the metric value as an object and pass it to the HTTP 
server.

Author: Dong Lin 

Reviewers: Manikumar Reddy O , Ewen 
Cheslack-Postava 

Closes #5886 from lindong28/KAFKA-7560
---
 .../kafka/tools/PushHttpMetricsReporter.java   |  9 ++---
 .../kafka/tools/PushHttpMetricsReporterTest.java   | 47 --
 2 files changed, 38 insertions(+), 18 deletions(-)

diff --git 
a/tools/src/main/java/org/apache/kafka/tools/PushHttpMetricsReporter.java 
b/tools/src/main/java/org/apache/kafka/tools/PushHttpMetricsReporter.java
index 6adebf5..b33b75c 100644
--- a/tools/src/main/java/org/apache/kafka/tools/PushHttpMetricsReporter.java
+++ b/tools/src/main/java/org/apache/kafka/tools/PushHttpMetricsReporter.java
@@ -174,8 +174,7 @@ public class PushHttpMetricsReporter implements 
MetricsReporter {
 samples = new ArrayList<>(metrics.size());
 for (KafkaMetric metric : metrics.values()) {
 MetricName name = metric.metricName();
-double value = (Double) metric.metricValue();
-samples.add(new MetricValue(name.name(), name.group(), 
name.tags(), value));
+samples.add(new MetricValue(name.name(), name.group(), 
name.tags(), metric.metricValue()));
 }
 }
 
@@ -212,9 +211,9 @@ public class PushHttpMetricsReporter implements 
MetricsReporter {
 } else {
 log.info("Finished reporting metrics with response code 
{}", responseCode);
 }
-} catch (Exception e) {
-log.error("Error reporting metrics", e);
-throw new KafkaException("Failed to report current metrics", 
e);
+} catch (Throwable t) {
+log.error("Error reporting metrics", t);
+throw new KafkaException("Failed to report current metrics", 
t);
 } finally {
 if (connection != null) {
 connection.disconnect();
diff --git 
a/tools/src/test/java/org/apache/kafka/tools/PushHttpMetricsReporterTest.java 
b/tools/src/test/java/org/apache/kafka/tools/PushHttpMetricsReporterTest.java
index 1cd3799..3a8458c 100644
--- 
a/tools/src/test/java/org/apache/kafka/tools/PushHttpMetricsReporterTest.java
+++ 
b/tools/src/test/java/org/apache/kafka/tools/PushHttpMetricsReporterTest.java
@@ -18,10 +18,11 @@ package org.apache.kafka.tools;
 
 import com.fasterxml.jackson.databind.JsonNode;
 import com.fasterxml.jackson.databind.ObjectMapper;
+import java.util.List;
 import org.apache.kafka.common.MetricName;
 import org.apache.kafka.common.config.ConfigException;
+import org.apache.kafka.common.metrics.Gauge;
 import org.apache.kafka.common.metrics.KafkaMetric;
-import org.apache.kafka.common.metrics.Measurable;
 import org.apache.kafka.common.metrics.MetricConfig;
 import org.apache.kafka.common.utils.MockTime;
 import org.apache.kafka.common.utils.Time;
@@ -184,32 +185,40 @@ public class PushHttpMetricsReporterTest {
 KafkaMetric metric1 = new KafkaMetric(
 new Object(),
 new MetricName("name1", "group1", "desc1", 
Collections.singletonMap("key1", "value1")),
-new ImmutableValue(1.0),
+new ImmutableValue<>(1.0),
 null,
 time
 );
 KafkaMetric newMetric1 = new KafkaMetric(
 new Object(),
 new MetricName("name1", "group1", "desc1", 
Collections.singletonMap("key1", "value1")),
-new ImmutableValue(-1.0),
+new ImmutableValue<>(-1.0),
 null,
 time
 );
 KafkaMetric metric2 = new KafkaMetric(
 new Object(),
 new MetricName("name2&q

[kafka] branch 2.0 updated: MINOR: Fix undefined variable in Connect test

2018-10-24 Thread ewencp
This is an automated email from the ASF dual-hosted git repository.

ewencp pushed a commit to branch 2.0
in repository https://gitbox.apache.org/repos/asf/kafka.git


The following commit(s) were added to refs/heads/2.0 by this push:
 new dc89548  MINOR: Fix undefined variable in Connect test
dc89548 is described below

commit dc89548be61e2bcba90ac0ff6875813dec643e32
Author: Randall Hauch 
AuthorDate: Wed Oct 24 13:16:34 2018 -0700

MINOR: Fix undefined variable in Connect test

Corrects an error in the system tests:
```
07:55:45 [ERROR:2018-10-23 07:55:45,738]: Failed to import 
kafkatest.tests.connect.connect_test, which may indicate a broken test that 
cannot be loaded: NameError: name 'EXTERNAL_CONFIGS_FILE' is not defined
```

The constant is defined in the 
[services/connect.py](https://github.com/apache/kafka/blob/trunk/tests/kafkatest/services/connect.py#L43)
 file in the `ConnectServiceBase` class, but the problem is in the 
[tests/connect/connect_test.py](https://github.com/apache/kafka/blob/trunk/tests/kafkatest/tests/connect/connect_test.py#L50)
 `ConnectStandaloneFileTest`, which does *not* extend the `ConnectServiceBase 
class`. Suggestions welcome to be able to reuse that variable without 
duplicating t [...]

System test run with this PR: 
https://jenkins.confluent.io/job/system-test-kafka-branch-builder/2004/

If approved, this should be merged as far back as the `2.0` branch.

Author: Randall Hauch 

Reviewers: Ewen Cheslack-Postava 

Closes #5832 from rhauch/fix-connect-externals-tests

(cherry picked from commit 8b1d705404cf52b508874c7ae0ab1d86cab83bfc)
Signed-off-by: Ewen Cheslack-Postava 
---
 tests/kafkatest/tests/connect/connect_test.py | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/tests/kafkatest/tests/connect/connect_test.py 
b/tests/kafkatest/tests/connect/connect_test.py
index e2618e9..2d8ac2d 100644
--- a/tests/kafkatest/tests/connect/connect_test.py
+++ b/tests/kafkatest/tests/connect/connect_test.py
@@ -47,7 +47,7 @@ class ConnectStandaloneFileTest(Test):
 
 OFFSETS_FILE = "/mnt/connect.offsets"
 
-TOPIC = "${file:" + EXTERNAL_CONFIGS_FILE + ":topic.external}"
+TOPIC = 
"${file:/mnt/connect/connect-file-external.properties:topic.external}"
 TOPIC_TEST = "test"
 
 FIRST_INPUT_LIST = ["foo", "bar", "baz"]



[kafka] branch 2.1 updated: MINOR: Fix undefined variable in Connect test

2018-10-24 Thread ewencp
This is an automated email from the ASF dual-hosted git repository.

ewencp pushed a commit to branch 2.1
in repository https://gitbox.apache.org/repos/asf/kafka.git


The following commit(s) were added to refs/heads/2.1 by this push:
 new 378f043  MINOR: Fix undefined variable in Connect test
378f043 is described below

commit 378f0434bea1f1448075c48c8053107911e48efa
Author: Randall Hauch 
AuthorDate: Wed Oct 24 13:16:34 2018 -0700

MINOR: Fix undefined variable in Connect test

Corrects an error in the system tests:
```
07:55:45 [ERROR:2018-10-23 07:55:45,738]: Failed to import 
kafkatest.tests.connect.connect_test, which may indicate a broken test that 
cannot be loaded: NameError: name 'EXTERNAL_CONFIGS_FILE' is not defined
```

The constant is defined in the 
[services/connect.py](https://github.com/apache/kafka/blob/trunk/tests/kafkatest/services/connect.py#L43)
 file in the `ConnectServiceBase` class, but the problem is in the 
[tests/connect/connect_test.py](https://github.com/apache/kafka/blob/trunk/tests/kafkatest/tests/connect/connect_test.py#L50)
 `ConnectStandaloneFileTest`, which does *not* extend the `ConnectServiceBase 
class`. Suggestions welcome to be able to reuse that variable without 
duplicating t [...]

System test run with this PR: 
https://jenkins.confluent.io/job/system-test-kafka-branch-builder/2004/

If approved, this should be merged as far back as the `2.0` branch.

Author: Randall Hauch 

Reviewers: Ewen Cheslack-Postava 

Closes #5832 from rhauch/fix-connect-externals-tests

(cherry picked from commit 8b1d705404cf52b508874c7ae0ab1d86cab83bfc)
Signed-off-by: Ewen Cheslack-Postava 
---
 tests/kafkatest/tests/connect/connect_test.py | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/tests/kafkatest/tests/connect/connect_test.py 
b/tests/kafkatest/tests/connect/connect_test.py
index e2618e9..2d8ac2d 100644
--- a/tests/kafkatest/tests/connect/connect_test.py
+++ b/tests/kafkatest/tests/connect/connect_test.py
@@ -47,7 +47,7 @@ class ConnectStandaloneFileTest(Test):
 
 OFFSETS_FILE = "/mnt/connect.offsets"
 
-TOPIC = "${file:" + EXTERNAL_CONFIGS_FILE + ":topic.external}"
+TOPIC = 
"${file:/mnt/connect/connect-file-external.properties:topic.external}"
 TOPIC_TEST = "test"
 
 FIRST_INPUT_LIST = ["foo", "bar", "baz"]



[kafka] branch trunk updated: MINOR: Fix undefined variable in Connect test

2018-10-24 Thread ewencp
This is an automated email from the ASF dual-hosted git repository.

ewencp pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/kafka.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 8b1d705  MINOR: Fix undefined variable in Connect test
8b1d705 is described below

commit 8b1d705404cf52b508874c7ae0ab1d86cab83bfc
Author: Randall Hauch 
AuthorDate: Wed Oct 24 13:16:34 2018 -0700

MINOR: Fix undefined variable in Connect test

Corrects an error in the system tests:
```
07:55:45 [ERROR:2018-10-23 07:55:45,738]: Failed to import 
kafkatest.tests.connect.connect_test, which may indicate a broken test that 
cannot be loaded: NameError: name 'EXTERNAL_CONFIGS_FILE' is not defined
```

The constant is defined in the 
[services/connect.py](https://github.com/apache/kafka/blob/trunk/tests/kafkatest/services/connect.py#L43)
 file in the `ConnectServiceBase` class, but the problem is in the 
[tests/connect/connect_test.py](https://github.com/apache/kafka/blob/trunk/tests/kafkatest/tests/connect/connect_test.py#L50)
 `ConnectStandaloneFileTest`, which does *not* extend the `ConnectServiceBase 
class`. Suggestions welcome to be able to reuse that variable without 
duplicating t [...]

System test run with this PR: 
https://jenkins.confluent.io/job/system-test-kafka-branch-builder/2004/

If approved, this should be merged as far back as the `2.0` branch.

Author: Randall Hauch 

Reviewers: Ewen Cheslack-Postava 

Closes #5832 from rhauch/fix-connect-externals-tests
---
 tests/kafkatest/tests/connect/connect_test.py | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/tests/kafkatest/tests/connect/connect_test.py 
b/tests/kafkatest/tests/connect/connect_test.py
index e2618e9..2d8ac2d 100644
--- a/tests/kafkatest/tests/connect/connect_test.py
+++ b/tests/kafkatest/tests/connect/connect_test.py
@@ -47,7 +47,7 @@ class ConnectStandaloneFileTest(Test):
 
 OFFSETS_FILE = "/mnt/connect.offsets"
 
-TOPIC = "${file:" + EXTERNAL_CONFIGS_FILE + ":topic.external}"
+TOPIC = 
"${file:/mnt/connect/connect-file-external.properties:topic.external}"
 TOPIC_TEST = "test"
 
 FIRST_INPUT_LIST = ["foo", "bar", "baz"]



[kafka] branch 2.1 updated: KAFKA-7131: Update release script to generate announcement email text

2018-10-20 Thread ewencp
This is an automated email from the ASF dual-hosted git repository.

ewencp pushed a commit to branch 2.1
in repository https://gitbox.apache.org/repos/asf/kafka.git


The following commit(s) were added to refs/heads/2.1 by this push:
 new 4055a24  KAFKA-7131: Update release script to generate announcement 
email text
4055a24 is described below

commit 4055a248b4c4baf5b29f067b2f203e252e1ba563
Author: Bibin Sebastian 
AuthorDate: Sat Oct 20 20:43:22 2018 -0700

KAFKA-7131: Update release script to generate announcement email text

Author: Bibin Sebastian 
Author: Ewen Cheslack-Postava 

Reviewers: Matthias J. Sax , Ewen Cheslack-Postava 


Closes #5572 from bibinss/release_mail

(cherry picked from commit 83e98334a94ce2071a3294a8b310f1d646127f1c)
Signed-off-by: Ewen Cheslack-Postava 
---
 release.py | 141 +
 1 file changed, 133 insertions(+), 8 deletions(-)

diff --git a/release.py b/release.py
index 3573a7f..802c9de 100755
--- a/release.py
+++ b/release.py
@@ -45,6 +45,10 @@ release.py stage-docs [kafka-site-path]
   With no arguments this script assumes you have the Kafka repository and 
kafka-site repository checked out side-by-side, but
   you can specify a full path to the kafka-site repository if this is not the 
case.
 
+release.py release-email
+
+  Generates the email content/template for sending release announcement email.
+
 """
 
 from __future__ import print_function
@@ -56,6 +60,7 @@ import os
 import subprocess
 import sys
 import tempfile
+import re
 
 PROJECT_NAME = "kafka"
 CAPITALIZED_PROJECT_NAME = "kafka".upper()
@@ -256,11 +261,138 @@ def command_stage_docs():
 
 sys.exit(0)
 
+def validate_release_version_parts(version):
+try:
+version_parts = version.split('.')
+if len(version_parts) != 3:
+fail("Invalid release version, should have 3 version number 
components")
+# Validate each part is a number
+[int(x) for x in version_parts]
+except ValueError:
+fail("Invalid release version, should be a dotted version number")
+
+def get_release_version_parts(version):
+validate_release_version_parts(version)
+return version.split('.')
+
+def validate_release_num(version):
+tags = cmd_output('git tag').split()
+if version not in tags:
+fail("The specified version is not a valid release version number")
+validate_release_version_parts(version)
+
+def command_release_announcement_email():
+tags = cmd_output('git tag').split()
+release_tag_pattern = re.compile('^[0-9]+\.[0-9]+\.[0-9]+$')
+release_tags = sorted([t for t in tags if re.match(release_tag_pattern, 
t)])
+release_version_num = release_tags[-1]
+if not user_ok("""Is the current release %s ? (y/n): """ % 
release_version_num):
+release_version_num = raw_input('What is the current release version:')
+validate_release_num(release_version_num)
+previous_release_version_num = release_tags[-2]
+if not user_ok("""Is the previous release %s ? (y/n): """ % 
previous_release_version_num):
+previous_release_version_num = raw_input('What is the previous release 
version:')
+validate_release_num(previous_release_version_num)
+if release_version_num < previous_release_version_num :
+fail("Current release version number can't be less than previous 
release version number")
+number_of_contributors = int(subprocess.check_output('git shortlog -sn 
--no-merges %s..%s | wc -l' % (previous_release_version_num, 
release_version_num) , shell=True))
+contributors = subprocess.check_output("git shortlog -sn --no-merges 
%s..%s | cut -f2 | sort --ignore-case" % (previous_release_version_num, 
release_version_num), shell=True)
+release_announcement_data = {
+'number_of_contributors': number_of_contributors,
+'contributors': ', '.join(str(x) for x in filter(None, 
contributors.split('\n'))),
+'release_version': release_version_num
+}
+
+release_announcement_email = """
+To: annou...@apache.org, d...@kafka.apache.org, us...@kafka.apache.org, 
kafka-clie...@googlegroups.com
+Subject: [ANNOUNCE] Apache Kafka %(release_version)s
+
+The Apache Kafka community is pleased to announce the release for Apache Kafka 
%(release_version)s
+
+
+
+All of the changes in this release can be found in the release notes:
+https://www.apache.org/dist/kafka/%(release_version)s/RELEASE_NOTES.html
+
+
+You can download the source and binary release (Scala ) from:
+https://kafka.apache.org/downloads#%(release_version)s
+
+---
+
+
+Apache Kafka is a distributed streaming platform with four core APIs:
+
+
+** The Producer A

[kafka] branch trunk updated: KAFKA-7131: Update release script to generate announcement email text

2018-10-20 Thread ewencp
This is an automated email from the ASF dual-hosted git repository.

ewencp pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/kafka.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 83e9833  KAFKA-7131: Update release script to generate announcement 
email text
83e9833 is described below

commit 83e98334a94ce2071a3294a8b310f1d646127f1c
Author: Bibin Sebastian 
AuthorDate: Sat Oct 20 20:43:22 2018 -0700

KAFKA-7131: Update release script to generate announcement email text

Author: Bibin Sebastian 
Author: Ewen Cheslack-Postava 

Reviewers: Matthias J. Sax , Ewen Cheslack-Postava 


Closes #5572 from bibinss/release_mail
---
 release.py | 141 +
 1 file changed, 133 insertions(+), 8 deletions(-)

diff --git a/release.py b/release.py
index 1cf54c4..d91f535 100755
--- a/release.py
+++ b/release.py
@@ -45,6 +45,10 @@ release.py stage-docs [kafka-site-path]
   With no arguments this script assumes you have the Kafka repository and 
kafka-site repository checked out side-by-side, but
   you can specify a full path to the kafka-site repository if this is not the 
case.
 
+release.py release-email
+
+  Generates the email content/template for sending release announcement email.
+
 """
 
 from __future__ import print_function
@@ -56,6 +60,7 @@ import os
 import subprocess
 import sys
 import tempfile
+import re
 
 PROJECT_NAME = "kafka"
 CAPITALIZED_PROJECT_NAME = "kafka".upper()
@@ -256,11 +261,138 @@ def command_stage_docs():
 
 sys.exit(0)
 
+def validate_release_version_parts(version):
+try:
+version_parts = version.split('.')
+if len(version_parts) != 3:
+fail("Invalid release version, should have 3 version number 
components")
+# Validate each part is a number
+[int(x) for x in version_parts]
+except ValueError:
+fail("Invalid release version, should be a dotted version number")
+
+def get_release_version_parts(version):
+validate_release_version_parts(version)
+return version.split('.')
+
+def validate_release_num(version):
+tags = cmd_output('git tag').split()
+if version not in tags:
+fail("The specified version is not a valid release version number")
+validate_release_version_parts(version)
+
+def command_release_announcement_email():
+tags = cmd_output('git tag').split()
+release_tag_pattern = re.compile('^[0-9]+\.[0-9]+\.[0-9]+$')
+release_tags = sorted([t for t in tags if re.match(release_tag_pattern, 
t)])
+release_version_num = release_tags[-1]
+if not user_ok("""Is the current release %s ? (y/n): """ % 
release_version_num):
+release_version_num = raw_input('What is the current release version:')
+validate_release_num(release_version_num)
+previous_release_version_num = release_tags[-2]
+if not user_ok("""Is the previous release %s ? (y/n): """ % 
previous_release_version_num):
+previous_release_version_num = raw_input('What is the previous release 
version:')
+validate_release_num(previous_release_version_num)
+if release_version_num < previous_release_version_num :
+fail("Current release version number can't be less than previous 
release version number")
+number_of_contributors = int(subprocess.check_output('git shortlog -sn 
--no-merges %s..%s | wc -l' % (previous_release_version_num, 
release_version_num) , shell=True))
+contributors = subprocess.check_output("git shortlog -sn --no-merges 
%s..%s | cut -f2 | sort --ignore-case" % (previous_release_version_num, 
release_version_num), shell=True)
+release_announcement_data = {
+'number_of_contributors': number_of_contributors,
+'contributors': ', '.join(str(x) for x in filter(None, 
contributors.split('\n'))),
+'release_version': release_version_num
+}
+
+release_announcement_email = """
+To: annou...@apache.org, d...@kafka.apache.org, us...@kafka.apache.org, 
kafka-clie...@googlegroups.com
+Subject: [ANNOUNCE] Apache Kafka %(release_version)s
+
+The Apache Kafka community is pleased to announce the release for Apache Kafka 
%(release_version)s
+
+
+
+All of the changes in this release can be found in the release notes:
+https://www.apache.org/dist/kafka/%(release_version)s/RELEASE_NOTES.html
+
+
+You can download the source and binary release (Scala ) from:
+https://kafka.apache.org/downloads#%(release_version)s
+
+---
+
+
+Apache Kafka is a distributed streaming platform with four core APIs:
+
+
+** The Producer API allows an application to publish a stream of records to
+one or more Kafka topics.
+
+** The Consumer API allows an application to subscribe to one or more topics
+and process the stream of records produced to them.
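
A note on the version handling above: both sorted(release_tags) and the
guard "release_version_num < previous_release_version_num" compare dotted
versions as strings, and lexicographic order diverges from numeric order as
soon as a two-digit component appears (Kafka's own 0.9.x vs 0.10.x history
hits this). A minimal sketch of a numeric comparison follows; version_key()
is hypothetical and not part of the committed release.py:

    # Hypothetical sketch: compare dotted release versions numerically.
    # version_key() is illustrative only -- it does not exist in release.py.
    def version_key(version):
        # "0.10.0" -> (0, 10, 0), so comparison and sorting are numeric
        return tuple(int(x) for x in version.split('.'))

    assert "0.10.0" < "0.9.0"                            # string compare: misleading
    assert version_key("0.10.0") > version_key("0.9.0")  # numeric compare: correct

    tags = ["0.9.0", "0.10.0", "1.1.0", "0.11.0"]
    print(sorted(tags))                   # ['0.10.0', '0.11.0', '0.9.0', '1.1.0']
    print(sorted(tags, key=version_key))  # ['0.9.0', '0.10.0', '0.11.0', '1.1.0']

With plain string ordering, release_tags[-1] can point at an older release
once such tags exist, and the "current can't be less than previous" check can
misfire in the same way; the interactive confirmation prompts above are what
catches this in practice.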

[kafka] branch trunk updated: MINOR: Fix some typos

2018-10-20 Thread ewencp
This is an automated email from the ASF dual-hosted git repository.

ewencp pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/kafka.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 83c3996  MINOR: Fix some typos
83c3996 is described below

commit 83c39969745dc7076e3756439f6842e7431a8c55
Author: John Eismeier 
AuthorDate: Sat Oct 20 19:40:53 2018 -0700

MINOR: Fix some typos

Just a doc change

Author: John Eismeier 

Reviewers: Ewen Cheslack-Postava 

Closes #4573 from jeis2497052/trunk
---
 .../apache/kafka/common/record/FileRecordsTest.java  |  2 +-
 core/src/main/scala/kafka/utils/Mx4jLoader.scala |  4 ++--
 core/src/test/scala/unit/kafka/admin/AdminTest.scala |  2 +-
 .../test/scala/unit/kafka/zk/AdminZkClientTest.scala |  2 +-
 docs/design.html |  2 +-
 docs/security.html   | 12 ++--
 release.py   | 20 ++--
 .../java/org/apache/kafka/streams/TopologyTest.java  |  2 +-
 .../internals/InternalTopologyBuilderTest.java   |  4 ++--
 9 files changed, 25 insertions(+), 25 deletions(-)

diff --git 
a/clients/src/test/java/org/apache/kafka/common/record/FileRecordsTest.java 
b/clients/src/test/java/org/apache/kafka/common/record/FileRecordsTest.java
index 4b2b361..637da93 100644
--- a/clients/src/test/java/org/apache/kafka/common/record/FileRecordsTest.java
+++ b/clients/src/test/java/org/apache/kafka/common/record/FileRecordsTest.java
@@ -220,7 +220,7 @@ public class FileRecordsTest {
 position += message2Size + batches.get(2).sizeInBytes();
 
 int message4Size = batches.get(3).sizeInBytes();
-assertEquals("Should be able to find fourth message from a 
non-existant offset",
+assertEquals("Should be able to find fourth message from a 
non-existent offset",
 new FileRecords.LogOffsetPosition(50L, position, message4Size),
 fileRecords.searchForOffsetWithSize(3, position));
 assertEquals("Should be able to find fourth message by correct offset",
diff --git a/core/src/main/scala/kafka/utils/Mx4jLoader.scala 
b/core/src/main/scala/kafka/utils/Mx4jLoader.scala
index d9d1cb4..f2c8644 100644
--- a/core/src/main/scala/kafka/utils/Mx4jLoader.scala
+++ b/core/src/main/scala/kafka/utils/Mx4jLoader.scala
@@ -57,11 +57,11 @@ object Mx4jLoader extends Logging {
   httpAdaptorClass.getMethod("setProcessor", 
Class.forName("mx4j.tools.adaptor.http.ProcessorMBean")).invoke(httpAdaptor, 
xsltProcessor.asInstanceOf[AnyRef])
   mbs.registerMBean(xsltProcessor, processorName)
   httpAdaptorClass.getMethod("start").invoke(httpAdaptor)
-  info("mx4j successfuly loaded")
+  info("mx4j successfully loaded")
   return true
 }
 catch {
- case _: ClassNotFoundException =>
+  case _: ClassNotFoundException =>
 info("Will not load MX4J, mx4j-tools.jar is not in the classpath")
   case e: Throwable =>
 warn("Could not start register mbean in JMX", e)
diff --git a/core/src/test/scala/unit/kafka/admin/AdminTest.scala 
b/core/src/test/scala/unit/kafka/admin/AdminTest.scala
index a1c317e..88aff62 100755
--- a/core/src/test/scala/unit/kafka/admin/AdminTest.scala
+++ b/core/src/test/scala/unit/kafka/admin/AdminTest.scala
@@ -169,7 +169,7 @@ class AdminTest extends ZooKeeperTestHarness with Logging 
with RackAwareTest {
 zkUtils.updatePersistentPath(ConfigEntityZNode.path(ConfigType.Client, 
clientId), Json.encodeAsString(map.asJava))
 
 val configInZk: Map[String, Properties] = 
AdminUtils.fetchAllEntityConfigs(zkUtils, ConfigType.Client)
-assertEquals("Must have 1 overriden client config", 1, configInZk.size)
+assertEquals("Must have 1 overridden client config", 1, configInZk.size)
 assertEquals(props, configInZk(clientId))
 
 // Test that the existing clientId overrides are read
diff --git a/core/src/test/scala/unit/kafka/zk/AdminZkClientTest.scala 
b/core/src/test/scala/unit/kafka/zk/AdminZkClientTest.scala
index 81d938b..9f81c18 100644
--- a/core/src/test/scala/unit/kafka/zk/AdminZkClientTest.scala
+++ b/core/src/test/scala/unit/kafka/zk/AdminZkClientTest.scala
@@ -307,7 +307,7 @@ class AdminZkClientTest extends ZooKeeperTestHarness with 
Logging with RackAware
 zkClient.setOrCreateEntityConfigs(ConfigType.Client, clientId, props)
 
 val configInZk: Map[String, Properties] = 
adminZkClient.fetchAllEntityConfigs(ConfigType.Client)
-assertEquals("Must have 1 overriden client config", 1, configInZk.size)
+assertEquals("Must have 1 overridden client config", 1, configInZk.size)
 assertEquals(props, configInZk(clientId))
 
 // Test that the existing clientId overrides a

[kafka] branch 0.10.2 updated: MINOR: Switch to use AWS spot instances

2018-10-05 Thread ewencp
This is an automated email from the ASF dual-hosted git repository.

ewencp pushed a commit to branch 0.10.2
in repository https://gitbox.apache.org/repos/asf/kafka.git


The following commit(s) were added to refs/heads/0.10.2 by this push:
 new 11e4f13  MINOR: Switch to use AWS spot instances
11e4f13 is described below

commit 11e4f13a3a9e306efdef1b0f081dcfabd36c2498
Author: Max Zheng 
AuthorDate: Fri Oct 5 10:21:25 2018 -0700

MINOR: Switch to use AWS spot instances

Pricing for m3.xlarge: On-Demand is at $0.266. Reserved is at about $0.16 
(40% discount). And Spot is at $0.0627 (76% discount relative to On-Demand, or 
60% discount relative to Reserved). Insignificant fluctuation in the past 3 
months.

Ran on branch builder and works as expected -- each worker is created using 
spot instances 
(https://jenkins.confluent.io/job/system-test-kafka-branch-builder/1982/console)

This can be safely backported to 0.10.2 (tested using 
https://jenkins.confluent.io/job/system-test-kafka-branch-builder/1983/)

Author: Max Zheng 

Reviewers: Ewen Cheslack-Postava 

Closes #5707 from maxzheng/minor-switch@trunk

(cherry picked from commit 50ec82940d4af61f12300235b7553bd5cf231894)
Signed-off-by: Ewen Cheslack-Postava 
---
 Vagrantfile   |  6 ++
 tests/README.md   |  1 +
 vagrant/aws/aws-example-Vagrantfile.local |  1 +
 vagrant/aws/aws-init.sh   | 11 +++
 4 files changed, 15 insertions(+), 4 deletions(-)

diff --git a/Vagrantfile b/Vagrantfile
index 3636076..88f2028 100644
--- a/Vagrantfile
+++ b/Vagrantfile
@@ -42,6 +42,8 @@ ec2_region = "us-east-1"
 ec2_az = nil # Uses set by AWS
 ec2_ami = "ami-905730e8"
 ec2_instance_type = "m3.medium"
+ec2_spot_instance = ENV['SPOT_INSTANCE'] ? ENV['SPOT_INSTANCE'] == 'true' : 
true
+ec2_spot_max_price = "0.113"  # On-demand price for instance type
 ec2_user = "ubuntu"
 ec2_instance_name_prefix = "kafka-vagrant"
 ec2_security_groups = nil
@@ -133,6 +135,10 @@ Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
 else
   aws.associate_public_ip = ec2_associate_public_ip
 end
+aws.region_config ec2_region do |region|
+  region.spot_instance = ec2_spot_instance
+  region.spot_max_price = ec2_spot_max_price
+end
 
 # Exclude some directories that can grow very large from syncing
 override.vm.synced_folder ".", "/vagrant", type: "rsync", rsync__exclude: 
['.git', 'core/data/', 'logs/', 'tests/results/', 'results/']
diff --git a/tests/README.md b/tests/README.md
index 87a6fed..6bac439 100644
--- a/tests/README.md
+++ b/tests/README.md
@@ -139,6 +139,7 @@ the test driver machine.
 ec2_instance_type = "..." # Pick something appropriate for your
   # test. Note that the default m3.medium has
   # a small disk.
+ec2_spot_max_price = "0.123"  # On-demand price for instance type
 enable_hostmanager = false
 num_zookeepers = 0
 num_kafka = 0
diff --git a/vagrant/aws/aws-example-Vagrantfile.local 
b/vagrant/aws/aws-example-Vagrantfile.local
index ee9db9a..23187a0 100644
--- a/vagrant/aws/aws-example-Vagrantfile.local
+++ b/vagrant/aws/aws-example-Vagrantfile.local
@@ -17,6 +17,7 @@
 # To use it, move it to the base kafka directory and rename
 # it to Vagrantfile.local, and adjust variables as needed.
 ec2_instance_type = "m3.xlarge"
+ec2_spot_max_price = "0.266"  # On-demand price for instance type
 enable_hostmanager = false
 num_zookeepers = 0
 num_brokers = 0
diff --git a/vagrant/aws/aws-init.sh b/vagrant/aws/aws-init.sh
index c0a6f67..994295e 100755
--- a/vagrant/aws/aws-init.sh
+++ b/vagrant/aws/aws-init.sh
@@ -25,15 +25,18 @@ base_dir=`dirname $0`/../..
 
 if [ -z `which vagrant` ]; then
 echo "Installing vagrant..."
-wget https://releases.hashicorp.com/vagrant/1.7.2/vagrant_1.7.2_x86_64.deb
-sudo dpkg -i vagrant_1.7.2_x86_64.deb
-rm -f vagrant_1.7.2_x86_64.deb
+wget https://releases.hashicorp.com/vagrant/2.1.5/vagrant_2.1.5_x86_64.deb
+sudo dpkg -i vagrant_2.1.5_x86_64.deb
+rm -f vagrant_2.1.5_x86_64.deb
 fi
 
 # Install necessary vagrant plugins
 # Note: Do NOT install vagrant-cachier since it doesn't work on AWS and only
 # adds log noise
-vagrant_plugins="vagrant-aws vagrant-hostmanager"
+
+# Custom vagrant-aws with spot instance support. See 
https://github.com/mitchellh/vagrant-aws/issues/32
+wget -nv 
https://s3-us-west-2.amazonaws.com/confluent-packaging-tools/vagrant-aws-0.7.2.spot.gem
 -P /tmp
+vagrant_plugins="/tmp/vagrant-aws-0.7.2.spot.gem vagrant-hostmanager"
 existing=`vagrant plugin list`
 for plugin in $vagrant_plugins; do
 echo $existing | grep $plugin > /dev/null



[kafka] branch 0.11.0 updated: MINOR: Switch to use AWS spot instances

2018-10-05 Thread ewencp
This is an automated email from the ASF dual-hosted git repository.

ewencp pushed a commit to branch 0.11.0
in repository https://gitbox.apache.org/repos/asf/kafka.git


The following commit(s) were added to refs/heads/0.11.0 by this push:
 new 5d6c809  MINOR: Switch to use AWS spot instances
5d6c809 is described below

commit 5d6c809b5cc89fab43edfbd1ae703b35f9da8817
Author: Max Zheng 
AuthorDate: Fri Oct 5 10:21:25 2018 -0700

MINOR: Switch to use AWS spot instances

Pricing for m3.xlarge: On-Demand is at $0.266. Reserved is at about $0.16 
(40% discount). And Spot is at $0.0627 (76% discount relative to On-Demand, or 
60% discount relative to Reserved). Insignificant fluctuation in the past 3 
months.

Ran on branch builder and works as expected -- each worker is created using 
spot instances 
(https://jenkins.confluent.io/job/system-test-kafka-branch-builder/1982/console)

This can be safely backported to 0.10.2 (tested using 
https://jenkins.confluent.io/job/system-test-kafka-branch-builder/1983/)

Author: Max Zheng 

Reviewers: Ewen Cheslack-Postava 

Closes #5707 from maxzheng/minor-switch@trunk

(cherry picked from commit 50ec82940d4af61f12300235b7553bd5cf231894)
Signed-off-by: Ewen Cheslack-Postava 
---
 Vagrantfile   |  6 ++
 tests/README.md   |  1 +
 vagrant/aws/aws-example-Vagrantfile.local |  1 +
 vagrant/aws/aws-init.sh   | 11 +++
 4 files changed, 15 insertions(+), 4 deletions(-)

diff --git a/Vagrantfile b/Vagrantfile
index 3636076..88f2028 100644
--- a/Vagrantfile
+++ b/Vagrantfile
@@ -42,6 +42,8 @@ ec2_region = "us-east-1"
 ec2_az = nil # Uses set by AWS
 ec2_ami = "ami-905730e8"
 ec2_instance_type = "m3.medium"
+ec2_spot_instance = ENV['SPOT_INSTANCE'] ? ENV['SPOT_INSTANCE'] == 'true' : 
true
+ec2_spot_max_price = "0.113"  # On-demand price for instance type
 ec2_user = "ubuntu"
 ec2_instance_name_prefix = "kafka-vagrant"
 ec2_security_groups = nil
@@ -133,6 +135,10 @@ Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
 else
   aws.associate_public_ip = ec2_associate_public_ip
 end
+aws.region_config ec2_region do |region|
+  region.spot_instance = ec2_spot_instance
+  region.spot_max_price = ec2_spot_max_price
+end
 
 # Exclude some directories that can grow very large from syncing
 override.vm.synced_folder ".", "/vagrant", type: "rsync", rsync__exclude: 
['.git', 'core/data/', 'logs/', 'tests/results/', 'results/']
diff --git a/tests/README.md b/tests/README.md
index 469522f..6782e48 100644
--- a/tests/README.md
+++ b/tests/README.md
@@ -441,6 +441,7 @@ the test driver machine.
 ec2_instance_type = "..." # Pick something appropriate for your
   # test. Note that the default m3.medium has
   # a small disk.
+ec2_spot_max_price = "0.123"  # On-demand price for instance type
 enable_hostmanager = false
 num_zookeepers = 0
 num_kafka = 0
diff --git a/vagrant/aws/aws-example-Vagrantfile.local 
b/vagrant/aws/aws-example-Vagrantfile.local
index ee9db9a..23187a0 100644
--- a/vagrant/aws/aws-example-Vagrantfile.local
+++ b/vagrant/aws/aws-example-Vagrantfile.local
@@ -17,6 +17,7 @@
 # To use it, move it to the base kafka directory and rename
 # it to Vagrantfile.local, and adjust variables as needed.
 ec2_instance_type = "m3.xlarge"
+ec2_spot_max_price = "0.266"  # On-demand price for instance type
 enable_hostmanager = false
 num_zookeepers = 0
 num_brokers = 0
diff --git a/vagrant/aws/aws-init.sh b/vagrant/aws/aws-init.sh
index c0a6f67..994295e 100755
--- a/vagrant/aws/aws-init.sh
+++ b/vagrant/aws/aws-init.sh
@@ -25,15 +25,18 @@ base_dir=`dirname $0`/../..
 
 if [ -z `which vagrant` ]; then
 echo "Installing vagrant..."
-wget https://releases.hashicorp.com/vagrant/1.7.2/vagrant_1.7.2_x86_64.deb
-sudo dpkg -i vagrant_1.7.2_x86_64.deb
-rm -f vagrant_1.7.2_x86_64.deb
+wget https://releases.hashicorp.com/vagrant/2.1.5/vagrant_2.1.5_x86_64.deb
+sudo dpkg -i vagrant_2.1.5_x86_64.deb
+rm -f vagrant_2.1.5_x86_64.deb
 fi
 
 # Install necessary vagrant plugins
 # Note: Do NOT install vagrant-cachier since it doesn't work on AWS and only
 # adds log noise
-vagrant_plugins="vagrant-aws vagrant-hostmanager"
+
+# Custom vagrant-aws with spot instance support. See 
https://github.com/mitchellh/vagrant-aws/issues/32
+wget -nv 
https://s3-us-west-2.amazonaws.com/confluent-packaging-tools/vagrant-aws-0.7.2.spot.gem
 -P /tmp
+vagrant_plugins="/tmp/vagrant-aws-0.7.2.spot.gem vagrant-hostmanager"
 existing=`vagrant plugin list`
 for plugin in $vagrant_plugins; do
 echo $existing | grep $plugin > /dev/null



[kafka] branch 1.0 updated: MINOR: Switch to use AWS spot instances

2018-10-05 Thread ewencp
This is an automated email from the ASF dual-hosted git repository.

ewencp pushed a commit to branch 1.0
in repository https://gitbox.apache.org/repos/asf/kafka.git


The following commit(s) were added to refs/heads/1.0 by this push:
 new b88b582  MINOR: Switch to use AWS spot instances
b88b582 is described below

commit b88b5823d013f3c64e0a516fead3b2e8f0d22e80
Author: Max Zheng 
AuthorDate: Fri Oct 5 10:21:25 2018 -0700

MINOR: Switch to use AWS spot instances

Pricing for m3.xlarge: On-Demand is at $0.266. Reserved is at about $0.16 
(40% discount). And Spot is at $0.0627 (76% discount relative to On-Demand, or 
60% discount relative to Reserved). Insignificant fluctuation in the past 3 
months.

Ran on branch builder and works as expected -- each worker is created using 
spot instances 
(https://jenkins.confluent.io/job/system-test-kafka-branch-builder/1982/console)

This can be safely backported to 0.10.2 (tested using 
https://jenkins.confluent.io/job/system-test-kafka-branch-builder/1983/)

Author: Max Zheng 

Reviewers: Ewen Cheslack-Postava 

Closes #5707 from maxzheng/minor-switch@trunk

(cherry picked from commit 50ec82940d4af61f12300235b7553bd5cf231894)
Signed-off-by: Ewen Cheslack-Postava 
---
 Vagrantfile   |  6 ++
 tests/README.md   |  1 +
 vagrant/aws/aws-example-Vagrantfile.local |  1 +
 vagrant/aws/aws-init.sh   | 11 +++
 4 files changed, 15 insertions(+), 4 deletions(-)

diff --git a/Vagrantfile b/Vagrantfile
index 3636076..88f2028 100644
--- a/Vagrantfile
+++ b/Vagrantfile
@@ -42,6 +42,8 @@ ec2_region = "us-east-1"
 ec2_az = nil # Uses set by AWS
 ec2_ami = "ami-905730e8"
 ec2_instance_type = "m3.medium"
+ec2_spot_instance = ENV['SPOT_INSTANCE'] ? ENV['SPOT_INSTANCE'] == 'true' : 
true
+ec2_spot_max_price = "0.113"  # On-demand price for instance type
 ec2_user = "ubuntu"
 ec2_instance_name_prefix = "kafka-vagrant"
 ec2_security_groups = nil
@@ -133,6 +135,10 @@ Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
 else
   aws.associate_public_ip = ec2_associate_public_ip
 end
+aws.region_config ec2_region do |region|
+  region.spot_instance = ec2_spot_instance
+  region.spot_max_price = ec2_spot_max_price
+end
 
 # Exclude some directories that can grow very large from syncing
 override.vm.synced_folder ".", "/vagrant", type: "rsync", rsync__exclude: 
['.git', 'core/data/', 'logs/', 'tests/results/', 'results/']
diff --git a/tests/README.md b/tests/README.md
index f0ffdf5..f42b28a 100644
--- a/tests/README.md
+++ b/tests/README.md
@@ -461,6 +461,7 @@ the test driver machine.
 ec2_instance_type = "..." # Pick something appropriate for your
   # test. Note that the default m3.medium has
   # a small disk.
+ec2_spot_max_price = "0.123"  # On-demand price for instance type
 enable_hostmanager = false
 num_zookeepers = 0
 num_kafka = 0
diff --git a/vagrant/aws/aws-example-Vagrantfile.local 
b/vagrant/aws/aws-example-Vagrantfile.local
index ee9db9a..23187a0 100644
--- a/vagrant/aws/aws-example-Vagrantfile.local
+++ b/vagrant/aws/aws-example-Vagrantfile.local
@@ -17,6 +17,7 @@
 # To use it, move it to the base kafka directory and rename
 # it to Vagrantfile.local, and adjust variables as needed.
 ec2_instance_type = "m3.xlarge"
+ec2_spot_max_price = "0.266"  # On-demand price for instance type
 enable_hostmanager = false
 num_zookeepers = 0
 num_brokers = 0
diff --git a/vagrant/aws/aws-init.sh b/vagrant/aws/aws-init.sh
index 7517626..54092c8 100755
--- a/vagrant/aws/aws-init.sh
+++ b/vagrant/aws/aws-init.sh
@@ -31,15 +31,18 @@ base_dir=`dirname $0`/../..
 
 if [ -z `which vagrant` ]; then
 echo "Installing vagrant..."
-wget https://releases.hashicorp.com/vagrant/1.9.3/vagrant_1.9.3_x86_64.deb
-sudo dpkg -i vagrant_1.9.3_x86_64.deb
-rm -f vagrant_1.9.3_x86_64.deb
+wget https://releases.hashicorp.com/vagrant/2.1.5/vagrant_2.1.5_x86_64.deb
+sudo dpkg -i vagrant_2.1.5_x86_64.deb
+rm -f vagrant_2.1.5_x86_64.deb
 fi
 
 # Install necessary vagrant plugins
 # Note: Do NOT install vagrant-cachier since it doesn't work on AWS and only
 # adds log noise
-vagrant_plugins="vagrant-aws vagrant-hostmanager"
+
+# Custom vagrant-aws with spot instance support. See 
https://github.com/mitchellh/vagrant-aws/issues/32
+wget -nv 
https://s3-us-west-2.amazonaws.com/confluent-packaging-tools/vagrant-aws-0.7.2.spot.gem
 -P /tmp
+vagrant_plugins="/tmp/vagrant-aws-0.7.2.spot.gem vagrant-hostmanager"
 existing=`vagrant plugin list`
 for plugin in $vagrant_plugins; do
 echo $existing | grep $plugin > /dev/null



[kafka] branch 2.0 updated: MINOR: Switch to use AWS spot instances

2018-10-05 Thread ewencp
This is an automated email from the ASF dual-hosted git repository.

ewencp pushed a commit to branch 2.0
in repository https://gitbox.apache.org/repos/asf/kafka.git


The following commit(s) were added to refs/heads/2.0 by this push:
 new b49c721  MINOR: Switch to use AWS spot instances
b49c721 is described below

commit b49c721ba1b324492a53056c71980fae5d1c2470
Author: Max Zheng 
AuthorDate: Fri Oct 5 10:21:25 2018 -0700

MINOR: Switch to use AWS spot instances

Pricing for m3.xlarge: On-Demand is at $0.266. Reserved is at about $0.16 
(40% discount). And Spot is at $0.0627 (76% discount relative to On-Demand, or 
60% discount relative to Reserved). Insignificant fluctuation in the past 3 
months.

Ran on branch builder and works as expected -- each worker is created using 
spot instances 
(https://jenkins.confluent.io/job/system-test-kafka-branch-builder/1982/console)

This can be safely backported to 0.10.2 (tested using 
https://jenkins.confluent.io/job/system-test-kafka-branch-builder/1983/)

Author: Max Zheng 

Reviewers: Ewen Cheslack-Postava 

Closes #5707 from maxzheng/minor-switch@trunk

(cherry picked from commit 50ec82940d4af61f12300235b7553bd5cf231894)
Signed-off-by: Ewen Cheslack-Postava 
---
 Vagrantfile   |  6 ++
 tests/README.md   |  1 +
 vagrant/aws/aws-example-Vagrantfile.local |  1 +
 vagrant/aws/aws-init.sh   | 11 +++
 4 files changed, 15 insertions(+), 4 deletions(-)

diff --git a/Vagrantfile b/Vagrantfile
index 3636076..88f2028 100644
--- a/Vagrantfile
+++ b/Vagrantfile
@@ -42,6 +42,8 @@ ec2_region = "us-east-1"
 ec2_az = nil # Uses set by AWS
 ec2_ami = "ami-905730e8"
 ec2_instance_type = "m3.medium"
+ec2_spot_instance = ENV['SPOT_INSTANCE'] ? ENV['SPOT_INSTANCE'] == 'true' : 
true
+ec2_spot_max_price = "0.113"  # On-demand price for instance type
 ec2_user = "ubuntu"
 ec2_instance_name_prefix = "kafka-vagrant"
 ec2_security_groups = nil
@@ -133,6 +135,10 @@ Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
 else
   aws.associate_public_ip = ec2_associate_public_ip
 end
+aws.region_config ec2_region do |region|
+  region.spot_instance = ec2_spot_instance
+  region.spot_max_price = ec2_spot_max_price
+end
 
 # Exclude some directories that can grow very large from syncing
 override.vm.synced_folder ".", "/vagrant", type: "rsync", rsync__exclude: 
['.git', 'core/data/', 'logs/', 'tests/results/', 'results/']
diff --git a/tests/README.md b/tests/README.md
index f0ffdf5..f42b28a 100644
--- a/tests/README.md
+++ b/tests/README.md
@@ -461,6 +461,7 @@ the test driver machine.
 ec2_instance_type = "..." # Pick something appropriate for your
   # test. Note that the default m3.medium has
   # a small disk.
+ec2_spot_max_price = "0.123"  # On-demand price for instance type
 enable_hostmanager = false
 num_zookeepers = 0
 num_kafka = 0
diff --git a/vagrant/aws/aws-example-Vagrantfile.local 
b/vagrant/aws/aws-example-Vagrantfile.local
index ee9db9a..23187a0 100644
--- a/vagrant/aws/aws-example-Vagrantfile.local
+++ b/vagrant/aws/aws-example-Vagrantfile.local
@@ -17,6 +17,7 @@
 # To use it, move it to the base kafka directory and rename
 # it to Vagrantfile.local, and adjust variables as needed.
 ec2_instance_type = "m3.xlarge"
+ec2_spot_max_price = "0.266"  # On-demand price for instance type
 enable_hostmanager = false
 num_zookeepers = 0
 num_brokers = 0
diff --git a/vagrant/aws/aws-init.sh b/vagrant/aws/aws-init.sh
index 7517626..54092c8 100755
--- a/vagrant/aws/aws-init.sh
+++ b/vagrant/aws/aws-init.sh
@@ -31,15 +31,18 @@ base_dir=`dirname $0`/../..
 
 if [ -z `which vagrant` ]; then
 echo "Installing vagrant..."
-wget https://releases.hashicorp.com/vagrant/1.9.3/vagrant_1.9.3_x86_64.deb
-sudo dpkg -i vagrant_1.9.3_x86_64.deb
-rm -f vagrant_1.9.3_x86_64.deb
+wget https://releases.hashicorp.com/vagrant/2.1.5/vagrant_2.1.5_x86_64.deb
+sudo dpkg -i vagrant_2.1.5_x86_64.deb
+rm -f vagrant_2.1.5_x86_64.deb
 fi
 
 # Install necessary vagrant plugins
 # Note: Do NOT install vagrant-cachier since it doesn't work on AWS and only
 # adds log noise
-vagrant_plugins="vagrant-aws vagrant-hostmanager"
+
+# Custom vagrant-aws with spot instance support. See 
https://github.com/mitchellh/vagrant-aws/issues/32
+wget -nv 
https://s3-us-west-2.amazonaws.com/confluent-packaging-tools/vagrant-aws-0.7.2.spot.gem
 -P /tmp
+vagrant_plugins="/tmp/vagrant-aws-0.7.2.spot.gem vagrant-hostmanager"
 existing=`vagrant plugin list`
 for plugin in $vagrant_plugins; do
 echo $existing | grep $plugin > /dev/null



[kafka] branch 1.1 updated: MINOR: Switch to use AWS spot instances

2018-10-05 Thread ewencp
This is an automated email from the ASF dual-hosted git repository.

ewencp pushed a commit to branch 1.1
in repository https://gitbox.apache.org/repos/asf/kafka.git


The following commit(s) were added to refs/heads/1.1 by this push:
 new 1c4350e  MINOR: Switch to use AWS spot instances
1c4350e is described below

commit 1c4350ec6c3afbd84079fb97ab63c947b9e37023
Author: Max Zheng 
AuthorDate: Fri Oct 5 10:21:25 2018 -0700

MINOR: Switch to use AWS spot instances

Pricing for m3.xlarge: On-Demand is at $0.266. Reserved is at about $0.16 
(40% discount). And Spot is at $0.0627 (76% discount relative to On-Demand, or 
60% discount relative to Reserved). Insignificant fluctuation in the past 3 
months.

Ran on branch builder and works as expected -- each worker is created using 
spot instances 
(https://jenkins.confluent.io/job/system-test-kafka-branch-builder/1982/console)

This can be safely backported to 0.10.2 (tested using 
https://jenkins.confluent.io/job/system-test-kafka-branch-builder/1983/)

Author: Max Zheng 

Reviewers: Ewen Cheslack-Postava 

Closes #5707 from maxzheng/minor-switch@trunk

(cherry picked from commit 50ec82940d4af61f12300235b7553bd5cf231894)
Signed-off-by: Ewen Cheslack-Postava 
---
 Vagrantfile   |  6 ++
 tests/README.md   |  1 +
 vagrant/aws/aws-example-Vagrantfile.local |  1 +
 vagrant/aws/aws-init.sh   | 11 +++
 4 files changed, 15 insertions(+), 4 deletions(-)

diff --git a/Vagrantfile b/Vagrantfile
index 3636076..88f2028 100644
--- a/Vagrantfile
+++ b/Vagrantfile
@@ -42,6 +42,8 @@ ec2_region = "us-east-1"
 ec2_az = nil # Uses set by AWS
 ec2_ami = "ami-905730e8"
 ec2_instance_type = "m3.medium"
+ec2_spot_instance = ENV['SPOT_INSTANCE'] ? ENV['SPOT_INSTANCE'] == 'true' : 
true
+ec2_spot_max_price = "0.113"  # On-demand price for instance type
 ec2_user = "ubuntu"
 ec2_instance_name_prefix = "kafka-vagrant"
 ec2_security_groups = nil
@@ -133,6 +135,10 @@ Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
 else
   aws.associate_public_ip = ec2_associate_public_ip
 end
+aws.region_config ec2_region do |region|
+  region.spot_instance = ec2_spot_instance
+  region.spot_max_price = ec2_spot_max_price
+end
 
 # Exclude some directories that can grow very large from syncing
 override.vm.synced_folder ".", "/vagrant", type: "rsync", rsync__exclude: 
['.git', 'core/data/', 'logs/', 'tests/results/', 'results/']
diff --git a/tests/README.md b/tests/README.md
index f0ffdf5..f42b28a 100644
--- a/tests/README.md
+++ b/tests/README.md
@@ -461,6 +461,7 @@ the test driver machine.
 ec2_instance_type = "..." # Pick something appropriate for your
   # test. Note that the default m3.medium has
   # a small disk.
+ec2_spot_max_price = "0.123"  # On-demand price for instance type
 enable_hostmanager = false
 num_zookeepers = 0
 num_kafka = 0
diff --git a/vagrant/aws/aws-example-Vagrantfile.local 
b/vagrant/aws/aws-example-Vagrantfile.local
index ee9db9a..23187a0 100644
--- a/vagrant/aws/aws-example-Vagrantfile.local
+++ b/vagrant/aws/aws-example-Vagrantfile.local
@@ -17,6 +17,7 @@
 # To use it, move it to the base kafka directory and rename
 # it to Vagrantfile.local, and adjust variables as needed.
 ec2_instance_type = "m3.xlarge"
+ec2_spot_max_price = "0.266"  # On-demand price for instance type
 enable_hostmanager = false
 num_zookeepers = 0
 num_brokers = 0
diff --git a/vagrant/aws/aws-init.sh b/vagrant/aws/aws-init.sh
index 7517626..54092c8 100755
--- a/vagrant/aws/aws-init.sh
+++ b/vagrant/aws/aws-init.sh
@@ -31,15 +31,18 @@ base_dir=`dirname $0`/../..
 
 if [ -z `which vagrant` ]; then
 echo "Installing vagrant..."
-wget https://releases.hashicorp.com/vagrant/1.9.3/vagrant_1.9.3_x86_64.deb
-sudo dpkg -i vagrant_1.9.3_x86_64.deb
-rm -f vagrant_1.9.3_x86_64.deb
+wget https://releases.hashicorp.com/vagrant/2.1.5/vagrant_2.1.5_x86_64.deb
+sudo dpkg -i vagrant_2.1.5_x86_64.deb
+rm -f vagrant_2.1.5_x86_64.deb
 fi
 
 # Install necessary vagrant plugins
 # Note: Do NOT install vagrant-cachier since it doesn't work on AWS and only
 # adds log noise
-vagrant_plugins="vagrant-aws vagrant-hostmanager"
+
+# Custom vagrant-aws with spot instance support. See 
https://github.com/mitchellh/vagrant-aws/issues/32
+wget -nv 
https://s3-us-west-2.amazonaws.com/confluent-packaging-tools/vagrant-aws-0.7.2.spot.gem
 -P /tmp
+vagrant_plugins="/tmp/vagrant-aws-0.7.2.spot.gem vagrant-hostmanager"
 existing=`vagrant plugin list`
 for plugin in $vagrant_plugins; do
 echo $existing | grep $plugin > /dev/null



[kafka] branch trunk updated (1bc620d -> 50ec829)

2018-10-05 Thread ewencp
This is an automated email from the ASF dual-hosted git repository.

ewencp pushed a change to branch trunk
in repository https://gitbox.apache.org/repos/asf/kafka.git.


from 1bc620d  MINOR: Clarify usage of stateful processor node (#5740)
 add 50ec829  MINOR: Switch to use AWS spot instances

No new revisions were added by this update.

Summary of changes:
 Vagrantfile   |  6 ++
 tests/README.md   |  1 +
 vagrant/aws/aws-example-Vagrantfile.local |  1 +
 vagrant/aws/aws-init.sh   | 11 +++
 4 files changed, 15 insertions(+), 4 deletions(-)



[kafka] branch 2.1 updated: MINOR: Switch to use AWS spot instances

2018-10-05 Thread ewencp
This is an automated email from the ASF dual-hosted git repository.

ewencp pushed a commit to branch 2.1
in repository https://gitbox.apache.org/repos/asf/kafka.git


The following commit(s) were added to refs/heads/2.1 by this push:
 new c5285bd  MINOR: Switch to use AWS spot instances
c5285bd is described below

commit c5285bd46d30f7532557279bb569bd1949315e2e
Author: Max Zheng 
AuthorDate: Fri Oct 5 10:21:25 2018 -0700

MINOR: Switch to use AWS spot instances

Pricing for m3.xlarge: On-Demand is at $0.266. Reserved is at about $0.16 
(40% discount). And Spot is at $0.0627 (76% discount relative to On-Demand, or 
60% discount relative to Reserved). Insignificant fluctuation in the past 3 
months.

Ran on branch builder and works as expected -- each worker is created using 
spot instances 
(https://jenkins.confluent.io/job/system-test-kafka-branch-builder/1982/console)

This can be safely backported to 0.10.2 (tested using 
https://jenkins.confluent.io/job/system-test-kafka-branch-builder/1983/)

Author: Max Zheng 

Reviewers: Ewen Cheslack-Postava 

Closes #5707 from maxzheng/minor-switch@trunk

(cherry picked from commit 50ec82940d4af61f12300235b7553bd5cf231894)
Signed-off-by: Ewen Cheslack-Postava 
---
 Vagrantfile   |  6 ++
 tests/README.md   |  1 +
 vagrant/aws/aws-example-Vagrantfile.local |  1 +
 vagrant/aws/aws-init.sh   | 11 +++
 4 files changed, 15 insertions(+), 4 deletions(-)

diff --git a/Vagrantfile b/Vagrantfile
index 3636076..88f2028 100644
--- a/Vagrantfile
+++ b/Vagrantfile
@@ -42,6 +42,8 @@ ec2_region = "us-east-1"
 ec2_az = nil # Uses set by AWS
 ec2_ami = "ami-905730e8"
 ec2_instance_type = "m3.medium"
+ec2_spot_instance = ENV['SPOT_INSTANCE'] ? ENV['SPOT_INSTANCE'] == 'true' : 
true
+ec2_spot_max_price = "0.113"  # On-demand price for instance type
 ec2_user = "ubuntu"
 ec2_instance_name_prefix = "kafka-vagrant"
 ec2_security_groups = nil
@@ -133,6 +135,10 @@ Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
 else
   aws.associate_public_ip = ec2_associate_public_ip
 end
+aws.region_config ec2_region do |region|
+  region.spot_instance = ec2_spot_instance
+  region.spot_max_price = ec2_spot_max_price
+end
 
 # Exclude some directories that can grow very large from syncing
 override.vm.synced_folder ".", "/vagrant", type: "rsync", rsync__exclude: 
['.git', 'core/data/', 'logs/', 'tests/results/', 'results/']
diff --git a/tests/README.md b/tests/README.md
index f0ffdf5..f42b28a 100644
--- a/tests/README.md
+++ b/tests/README.md
@@ -461,6 +461,7 @@ the test driver machine.
 ec2_instance_type = "..." # Pick something appropriate for your
   # test. Note that the default m3.medium has
   # a small disk.
+ec2_spot_max_price = "0.123"  # On-demand price for instance type
 enable_hostmanager = false
 num_zookeepers = 0
 num_kafka = 0
diff --git a/vagrant/aws/aws-example-Vagrantfile.local 
b/vagrant/aws/aws-example-Vagrantfile.local
index ee9db9a..23187a0 100644
--- a/vagrant/aws/aws-example-Vagrantfile.local
+++ b/vagrant/aws/aws-example-Vagrantfile.local
@@ -17,6 +17,7 @@
 # To use it, move it to the base kafka directory and rename
 # it to Vagrantfile.local, and adjust variables as needed.
 ec2_instance_type = "m3.xlarge"
+ec2_spot_max_price = "0.266"  # On-demand price for instance type
 enable_hostmanager = false
 num_zookeepers = 0
 num_brokers = 0
diff --git a/vagrant/aws/aws-init.sh b/vagrant/aws/aws-init.sh
index 7517626..54092c8 100755
--- a/vagrant/aws/aws-init.sh
+++ b/vagrant/aws/aws-init.sh
@@ -31,15 +31,18 @@ base_dir=`dirname $0`/../..
 
 if [ -z `which vagrant` ]; then
 echo "Installing vagrant..."
-wget https://releases.hashicorp.com/vagrant/1.9.3/vagrant_1.9.3_x86_64.deb
-sudo dpkg -i vagrant_1.9.3_x86_64.deb
-rm -f vagrant_1.9.3_x86_64.deb
+wget https://releases.hashicorp.com/vagrant/2.1.5/vagrant_2.1.5_x86_64.deb
+sudo dpkg -i vagrant_2.1.5_x86_64.deb
+rm -f vagrant_2.1.5_x86_64.deb
 fi
 
 # Install necessary vagrant plugins
 # Note: Do NOT install vagrant-cachier since it doesn't work on AWS and only
 # adds log noise
-vagrant_plugins="vagrant-aws vagrant-hostmanager"
+
+# Custom vagrant-aws with spot instance support. See 
https://github.com/mitchellh/vagrant-aws/issues/32
+wget -nv 
https://s3-us-west-2.amazonaws.com/confluent-packaging-tools/vagrant-aws-0.7.2.spot.gem
 -P /tmp
+vagrant_plugins="/tmp/vagrant-aws-0.7.2.spot.gem vagrant-hostmanager"
 existing=`vagrant plugin list`
 for plugin in $vagrant_plugins; do
 echo $existing | grep $plugin > /dev/null



[kafka] branch 0.9.0 updated (34ae29a -> 342c817)

2018-10-04 Thread ewencp
This is an automated email from the ASF dual-hosted git repository.

ewencp pushed a change to branch 0.9.0
in repository https://gitbox.apache.org/repos/asf/kafka.git.


from 34ae29a  KAFKA-7058: Comparing schema default values using 
Objects#deepEquals()
 add 342c817  KAFKA-7476: Fix Date-based types in SchemaProjector

No new revisions were added by this update.

Summary of changes:
 .../java/org/apache/kafka/connect/data/SchemaProjector.java   |  2 +-
 .../org/apache/kafka/connect/data/SchemaProjectorTest.java| 11 +++
 2 files changed, 12 insertions(+), 1 deletion(-)



[kafka] branch 0.10.0 updated: KAFKA-7476: Fix Date-based types in SchemaProjector

2018-10-04 Thread ewencp
This is an automated email from the ASF dual-hosted git repository.

ewencp pushed a commit to branch 0.10.0
in repository https://gitbox.apache.org/repos/asf/kafka.git


The following commit(s) were added to refs/heads/0.10.0 by this push:
 new 0ab0369  KAFKA-7476: Fix Date-based types in SchemaProjector
0ab0369 is described below

commit 0ab0369c9275cb2555e53967ed74c0b35d7691bf
Author: Robert Yokota 
AuthorDate: Thu Oct 4 20:34:50 2018 -0700

KAFKA-7476: Fix Date-based types in SchemaProjector

Various converters (AvroConverter and JsonConverter) produce a
SchemaAndValue consisting of a logical schema type and a java.util.Date.
This is a fix for SchemaProjector to properly handle the Date.

Author: Robert Yokota 

Reviewers: Konstantine Karantasis , Ewen 
Cheslack-Postava 

Closes #5736 from rayokota/KAFKA-7476

(cherry picked from commit 3edd8e7333ec0bb32ab5ae4ec4814fe30bb8f91d)
Signed-off-by: Ewen Cheslack-Postava 
---
 .../java/org/apache/kafka/connect/data/SchemaProjector.java   |  2 +-
 .../org/apache/kafka/connect/data/SchemaProjectorTest.java| 11 +++
 2 files changed, 12 insertions(+), 1 deletion(-)

diff --git 
a/connect/api/src/main/java/org/apache/kafka/connect/data/SchemaProjector.java 
b/connect/api/src/main/java/org/apache/kafka/connect/data/SchemaProjector.java
index 6277e44..08ee37a 100644
--- 
a/connect/api/src/main/java/org/apache/kafka/connect/data/SchemaProjector.java
+++ 
b/connect/api/src/main/java/org/apache/kafka/connect/data/SchemaProjector.java
@@ -159,7 +159,7 @@ public class SchemaProjector {
 assert source.type().isPrimitive();
 assert target.type().isPrimitive();
 Object result;
-if (isPromotable(source.type(), target.type())) {
+if (isPromotable(source.type(), target.type()) && record instanceof 
Number) {
 Number numberRecord = (Number) record;
 switch (target.type()) {
 case INT8:
diff --git 
a/connect/api/src/test/java/org/apache/kafka/connect/data/SchemaProjectorTest.java
 
b/connect/api/src/test/java/org/apache/kafka/connect/data/SchemaProjectorTest.java
index 101be04..ef6d029 100644
--- 
a/connect/api/src/test/java/org/apache/kafka/connect/data/SchemaProjectorTest.java
+++ 
b/connect/api/src/test/java/org/apache/kafka/connect/data/SchemaProjectorTest.java
@@ -351,6 +351,17 @@ public class SchemaProjectorTest {
 projected = SchemaProjector.project(Timestamp.SCHEMA, 34567L, 
Timestamp.SCHEMA);
 assertEquals(34567L, projected);
 
+java.util.Date date = new java.util.Date();
+
+projected = SchemaProjector.project(Date.SCHEMA, date, Date.SCHEMA);
+assertEquals(date, projected);
+
+projected = SchemaProjector.project(Time.SCHEMA, date, Time.SCHEMA);
+assertEquals(date, projected);
+
+projected = SchemaProjector.project(Timestamp.SCHEMA, date, 
Timestamp.SCHEMA);
+assertEquals(date, projected);
+
 Schema namedSchema = 
SchemaBuilder.int32().name("invalidLogicalTypeName").build();
 for (Schema logicalTypeSchema: logicalTypeSchemas) {
 try {
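
The one-line guard in this diff is the whole fix: project() checked only that
the schema types were promotable and then cast the value to Number
unconditionally, but the logical Date, Time, and Timestamp types (schema
types INT32/INT64) carry a java.util.Date value, so the cast threw a
ClassCastException. A rough Python analogue of the before/after behavior,
with hypothetical names used for illustration only:

    # Hypothetical analogue of the SchemaProjector fix; these names do not
    # exist in Kafka Connect.
    import numbers
    import datetime

    NUMERIC = ["int8", "int16", "int32", "int64", "float32", "float64"]

    def is_promotable(source, target):
        # Numeric types promote upward; a type is also promotable to itself.
        return (source in NUMERIC and target in NUMERIC
                and NUMERIC.index(source) <= NUMERIC.index(target))

    def project_primitive(source_type, target_type, value):
        # The fix adds the isinstance check; without it, a java.util.Date
        # carried by a logical Date/Time/Timestamp schema was cast to
        # Number and failed.
        if is_promotable(source_type, target_type) and isinstance(value, numbers.Number):
            return float(value) if target_type.startswith("float") else int(value)
        return value  # non-numeric (logical-type) values pass through unchanged

    print(project_primitive("int32", "int64", 42))                          # promoted: 42
    print(project_primitive("int32", "int32", datetime.date(2018, 10, 4)))  # passes through

This matches the new tests, which project a java.util.Date through
Date.SCHEMA, Time.SCHEMA, and Timestamp.SCHEMA and expect the same Date back.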



[kafka] branch 0.10.1 updated: KAFKA-7476: Fix Date-based types in SchemaProjector

2018-10-04 Thread ewencp
This is an automated email from the ASF dual-hosted git repository.

ewencp pushed a commit to branch 0.10.1
in repository https://gitbox.apache.org/repos/asf/kafka.git


The following commit(s) were added to refs/heads/0.10.1 by this push:
 new daf38aa  KAFKA-7476: Fix Date-based types in SchemaProjector
daf38aa is described below

commit daf38aa825b37b2a86e3c0238ee8e493d54e158f
Author: Robert Yokota 
AuthorDate: Thu Oct 4 20:34:50 2018 -0700

KAFKA-7476: Fix Date-based types in SchemaProjector

Various converters (AvroConverter and JsonConverter) produce a
SchemaAndValue consisting of a logical schema type and a java.util.Date.
This is a fix for SchemaProjector to properly handle the Date.

Author: Robert Yokota 

Reviewers: Konstantine Karantasis , Ewen 
Cheslack-Postava 

Closes #5736 from rayokota/KAFKA-7476

(cherry picked from commit 3edd8e7333ec0bb32ab5ae4ec4814fe30bb8f91d)
Signed-off-by: Ewen Cheslack-Postava 
---
 .../java/org/apache/kafka/connect/data/SchemaProjector.java   |  2 +-
 .../org/apache/kafka/connect/data/SchemaProjectorTest.java| 11 +++
 2 files changed, 12 insertions(+), 1 deletion(-)

diff --git 
a/connect/api/src/main/java/org/apache/kafka/connect/data/SchemaProjector.java 
b/connect/api/src/main/java/org/apache/kafka/connect/data/SchemaProjector.java
index 6277e44..08ee37a 100644
--- 
a/connect/api/src/main/java/org/apache/kafka/connect/data/SchemaProjector.java
+++ 
b/connect/api/src/main/java/org/apache/kafka/connect/data/SchemaProjector.java
@@ -159,7 +159,7 @@ public class SchemaProjector {
 assert source.type().isPrimitive();
 assert target.type().isPrimitive();
 Object result;
-if (isPromotable(source.type(), target.type())) {
+if (isPromotable(source.type(), target.type()) && record instanceof 
Number) {
 Number numberRecord = (Number) record;
 switch (target.type()) {
 case INT8:
diff --git 
a/connect/api/src/test/java/org/apache/kafka/connect/data/SchemaProjectorTest.java
 
b/connect/api/src/test/java/org/apache/kafka/connect/data/SchemaProjectorTest.java
index 101be04..ef6d029 100644
--- 
a/connect/api/src/test/java/org/apache/kafka/connect/data/SchemaProjectorTest.java
+++ 
b/connect/api/src/test/java/org/apache/kafka/connect/data/SchemaProjectorTest.java
@@ -351,6 +351,17 @@ public class SchemaProjectorTest {
 projected = SchemaProjector.project(Timestamp.SCHEMA, 34567L, 
Timestamp.SCHEMA);
 assertEquals(34567L, projected);
 
+java.util.Date date = new java.util.Date();
+
+projected = SchemaProjector.project(Date.SCHEMA, date, Date.SCHEMA);
+assertEquals(date, projected);
+
+projected = SchemaProjector.project(Time.SCHEMA, date, Time.SCHEMA);
+assertEquals(date, projected);
+
+projected = SchemaProjector.project(Timestamp.SCHEMA, date, 
Timestamp.SCHEMA);
+assertEquals(date, projected);
+
 Schema namedSchema = 
SchemaBuilder.int32().name("invalidLogicalTypeName").build();
 for (Schema logicalTypeSchema: logicalTypeSchemas) {
 try {



[kafka] branch 0.11.0 updated: KAFKA-7476: Fix Date-based types in SchemaProjector

2018-10-04 Thread ewencp
This is an automated email from the ASF dual-hosted git repository.

ewencp pushed a commit to branch 0.11.0
in repository https://gitbox.apache.org/repos/asf/kafka.git


The following commit(s) were added to refs/heads/0.11.0 by this push:
 new a9ca107  KAFKA-7476: Fix Date-based types in SchemaProjector
a9ca107 is described below

commit a9ca1079bd28c716ba05f703b7cc814620cb1586
Author: Robert Yokota 
AuthorDate: Thu Oct 4 20:34:50 2018 -0700

KAFKA-7476: Fix Date-based types in SchemaProjector

Various converters (AvroConverter and JsonConverter) produce a
SchemaAndValue consisting of a logical schema type and a java.util.Date.
This is a fix for SchemaProjector to properly handle the Date.

Author: Robert Yokota 

Reviewers: Konstantine Karantasis , Ewen 
Cheslack-Postava 

Closes #5736 from rayokota/KAFKA-7476

(cherry picked from commit 3edd8e7333ec0bb32ab5ae4ec4814fe30bb8f91d)
Signed-off-by: Ewen Cheslack-Postava 
---
 .../java/org/apache/kafka/connect/data/SchemaProjector.java   |  2 +-
 .../org/apache/kafka/connect/data/SchemaProjectorTest.java| 11 +++
 2 files changed, 12 insertions(+), 1 deletion(-)

diff --git 
a/connect/api/src/main/java/org/apache/kafka/connect/data/SchemaProjector.java 
b/connect/api/src/main/java/org/apache/kafka/connect/data/SchemaProjector.java
index ea31752..5400705 100644
--- 
a/connect/api/src/main/java/org/apache/kafka/connect/data/SchemaProjector.java
+++ 
b/connect/api/src/main/java/org/apache/kafka/connect/data/SchemaProjector.java
@@ -160,7 +160,7 @@ public class SchemaProjector {
 assert source.type().isPrimitive();
 assert target.type().isPrimitive();
 Object result;
-if (isPromotable(source.type(), target.type())) {
+if (isPromotable(source.type(), target.type()) && record instanceof 
Number) {
 Number numberRecord = (Number) record;
 switch (target.type()) {
 case INT8:
diff --git 
a/connect/api/src/test/java/org/apache/kafka/connect/data/SchemaProjectorTest.java
 
b/connect/api/src/test/java/org/apache/kafka/connect/data/SchemaProjectorTest.java
index 151114e..0db4eec 100644
--- 
a/connect/api/src/test/java/org/apache/kafka/connect/data/SchemaProjectorTest.java
+++ 
b/connect/api/src/test/java/org/apache/kafka/connect/data/SchemaProjectorTest.java
@@ -352,6 +352,17 @@ public class SchemaProjectorTest {
 projected = SchemaProjector.project(Timestamp.SCHEMA, 34567L, 
Timestamp.SCHEMA);
 assertEquals(34567L, projected);
 
+java.util.Date date = new java.util.Date();
+
+projected = SchemaProjector.project(Date.SCHEMA, date, Date.SCHEMA);
+assertEquals(date, projected);
+
+projected = SchemaProjector.project(Time.SCHEMA, date, Time.SCHEMA);
+assertEquals(date, projected);
+
+projected = SchemaProjector.project(Timestamp.SCHEMA, date, 
Timestamp.SCHEMA);
+assertEquals(date, projected);
+
 Schema namedSchema = 
SchemaBuilder.int32().name("invalidLogicalTypeName").build();
 for (Schema logicalTypeSchema: logicalTypeSchemas) {
 try {



[kafka] branch 0.10.2 updated: KAFKA-7476: Fix Date-based types in SchemaProjector

2018-10-04 Thread ewencp
This is an automated email from the ASF dual-hosted git repository.

ewencp pushed a commit to branch 0.10.2
in repository https://gitbox.apache.org/repos/asf/kafka.git


The following commit(s) were added to refs/heads/0.10.2 by this push:
 new b90c609  KAFKA-7476: Fix Date-based types in SchemaProjector
b90c609 is described below

commit b90c609ee08bf239f3539d81dda5e8990cb34600
Author: Robert Yokota 
AuthorDate: Thu Oct 4 20:34:50 2018 -0700

KAFKA-7476: Fix Date-based types in SchemaProjector

Various converters (AvroConverter and JsonConverter) produce a
SchemaAndValue consisting of a logical schema type and a java.util.Date.
This is a fix for SchemaProjector to properly handle the Date.

Author: Robert Yokota 

Reviewers: Konstantine Karantasis , Ewen 
Cheslack-Postava 

Closes #5736 from rayokota/KAFKA-7476

(cherry picked from commit 3edd8e7333ec0bb32ab5ae4ec4814fe30bb8f91d)
Signed-off-by: Ewen Cheslack-Postava 
---
 .../java/org/apache/kafka/connect/data/SchemaProjector.java   |  2 +-
 .../org/apache/kafka/connect/data/SchemaProjectorTest.java| 11 +++
 2 files changed, 12 insertions(+), 1 deletion(-)

diff --git 
a/connect/api/src/main/java/org/apache/kafka/connect/data/SchemaProjector.java 
b/connect/api/src/main/java/org/apache/kafka/connect/data/SchemaProjector.java
index 6277e44..08ee37a 100644
--- 
a/connect/api/src/main/java/org/apache/kafka/connect/data/SchemaProjector.java
+++ 
b/connect/api/src/main/java/org/apache/kafka/connect/data/SchemaProjector.java
@@ -159,7 +159,7 @@ public class SchemaProjector {
 assert source.type().isPrimitive();
 assert target.type().isPrimitive();
 Object result;
-if (isPromotable(source.type(), target.type())) {
+if (isPromotable(source.type(), target.type()) && record instanceof 
Number) {
 Number numberRecord = (Number) record;
 switch (target.type()) {
 case INT8:
diff --git 
a/connect/api/src/test/java/org/apache/kafka/connect/data/SchemaProjectorTest.java
 
b/connect/api/src/test/java/org/apache/kafka/connect/data/SchemaProjectorTest.java
index 101be04..ef6d029 100644
--- 
a/connect/api/src/test/java/org/apache/kafka/connect/data/SchemaProjectorTest.java
+++ 
b/connect/api/src/test/java/org/apache/kafka/connect/data/SchemaProjectorTest.java
@@ -351,6 +351,17 @@ public class SchemaProjectorTest {
 projected = SchemaProjector.project(Timestamp.SCHEMA, 34567L, 
Timestamp.SCHEMA);
 assertEquals(34567L, projected);
 
+java.util.Date date = new java.util.Date();
+
+projected = SchemaProjector.project(Date.SCHEMA, date, Date.SCHEMA);
+assertEquals(date, projected);
+
+projected = SchemaProjector.project(Time.SCHEMA, date, Time.SCHEMA);
+assertEquals(date, projected);
+
+projected = SchemaProjector.project(Timestamp.SCHEMA, date, 
Timestamp.SCHEMA);
+assertEquals(date, projected);
+
 Schema namedSchema = 
SchemaBuilder.int32().name("invalidLogicalTypeName").build();
 for (Schema logicalTypeSchema: logicalTypeSchemas) {
 try {



[kafka] branch 1.0 updated: KAFKA-7476: Fix Date-based types in SchemaProjector

2018-10-04 Thread ewencp
This is an automated email from the ASF dual-hosted git repository.

ewencp pushed a commit to branch 1.0
in repository https://gitbox.apache.org/repos/asf/kafka.git


The following commit(s) were added to refs/heads/1.0 by this push:
 new 2b17877  KAFKA-7476: Fix Date-based types in SchemaProjector
2b17877 is described below

commit 2b1787742a2d535bf2b14620c50169cd7cea2328
Author: Robert Yokota 
AuthorDate: Thu Oct 4 20:34:50 2018 -0700

KAFKA-7476: Fix Date-based types in SchemaProjector

Various converters (AvroConverter and JsonConverter) produce a
SchemaAndValue consisting of a logical schema type and a java.util.Date.
This is a fix for SchemaProjector to properly handle the Date.

Author: Robert Yokota 

Reviewers: Konstantine Karantasis , Ewen 
Cheslack-Postava 

Closes #5736 from rayokota/KAFKA-7476

(cherry picked from commit 3edd8e7333ec0bb32ab5ae4ec4814fe30bb8f91d)
Signed-off-by: Ewen Cheslack-Postava 
---
 .../java/org/apache/kafka/connect/data/SchemaProjector.java   |  2 +-
 .../org/apache/kafka/connect/data/SchemaProjectorTest.java| 11 +++
 2 files changed, 12 insertions(+), 1 deletion(-)

diff --git 
a/connect/api/src/main/java/org/apache/kafka/connect/data/SchemaProjector.java 
b/connect/api/src/main/java/org/apache/kafka/connect/data/SchemaProjector.java
index ea31752..5400705 100644
--- 
a/connect/api/src/main/java/org/apache/kafka/connect/data/SchemaProjector.java
+++ 
b/connect/api/src/main/java/org/apache/kafka/connect/data/SchemaProjector.java
@@ -160,7 +160,7 @@ public class SchemaProjector {
 assert source.type().isPrimitive();
 assert target.type().isPrimitive();
 Object result;
-if (isPromotable(source.type(), target.type())) {
+if (isPromotable(source.type(), target.type()) && record instanceof 
Number) {
 Number numberRecord = (Number) record;
 switch (target.type()) {
 case INT8:
diff --git 
a/connect/api/src/test/java/org/apache/kafka/connect/data/SchemaProjectorTest.java
 
b/connect/api/src/test/java/org/apache/kafka/connect/data/SchemaProjectorTest.java
index 151114e..0db4eec 100644
--- 
a/connect/api/src/test/java/org/apache/kafka/connect/data/SchemaProjectorTest.java
+++ 
b/connect/api/src/test/java/org/apache/kafka/connect/data/SchemaProjectorTest.java
@@ -352,6 +352,17 @@ public class SchemaProjectorTest {
 projected = SchemaProjector.project(Timestamp.SCHEMA, 34567L, 
Timestamp.SCHEMA);
 assertEquals(34567L, projected);
 
+java.util.Date date = new java.util.Date();
+
+projected = SchemaProjector.project(Date.SCHEMA, date, Date.SCHEMA);
+assertEquals(date, projected);
+
+projected = SchemaProjector.project(Time.SCHEMA, date, Time.SCHEMA);
+assertEquals(date, projected);
+
+projected = SchemaProjector.project(Timestamp.SCHEMA, date, 
Timestamp.SCHEMA);
+assertEquals(date, projected);
+
 Schema namedSchema = 
SchemaBuilder.int32().name("invalidLogicalTypeName").build();
 for (Schema logicalTypeSchema: logicalTypeSchemas) {
 try {



[kafka] branch 2.0 updated: KAFKA-7476: Fix Date-based types in SchemaProjector

2018-10-04 Thread ewencp
This is an automated email from the ASF dual-hosted git repository.

ewencp pushed a commit to branch 2.0
in repository https://gitbox.apache.org/repos/asf/kafka.git


The following commit(s) were added to refs/heads/2.0 by this push:
 new 5cef640  KAFKA-7476: Fix Date-based types in SchemaProjector
5cef640 is described below

commit 5cef640876f731ee68b359b7ca3afe939e54cabc
Author: Robert Yokota 
AuthorDate: Thu Oct 4 20:34:50 2018 -0700

KAFKA-7476: Fix Date-based types in SchemaProjector

Various converters (AvroConverter and JsonConverter) produce a
SchemaAndValue consisting of a logical schema type and a java.util.Date.
This is a fix for SchemaProjector to properly handle the Date.

Author: Robert Yokota 

Reviewers: Konstantine Karantasis , Ewen 
Cheslack-Postava 

Closes #5736 from rayokota/KAFKA-7476

(cherry picked from commit 3edd8e7333ec0bb32ab5ae4ec4814fe30bb8f91d)
Signed-off-by: Ewen Cheslack-Postava 
---
 .../java/org/apache/kafka/connect/data/SchemaProjector.java   |  2 +-
 .../org/apache/kafka/connect/data/SchemaProjectorTest.java| 11 +++
 2 files changed, 12 insertions(+), 1 deletion(-)

diff --git 
a/connect/api/src/main/java/org/apache/kafka/connect/data/SchemaProjector.java 
b/connect/api/src/main/java/org/apache/kafka/connect/data/SchemaProjector.java
index ea31752..5400705 100644
--- 
a/connect/api/src/main/java/org/apache/kafka/connect/data/SchemaProjector.java
+++ 
b/connect/api/src/main/java/org/apache/kafka/connect/data/SchemaProjector.java
@@ -160,7 +160,7 @@ public class SchemaProjector {
 assert source.type().isPrimitive();
 assert target.type().isPrimitive();
 Object result;
-if (isPromotable(source.type(), target.type())) {
+if (isPromotable(source.type(), target.type()) && record instanceof 
Number) {
 Number numberRecord = (Number) record;
 switch (target.type()) {
 case INT8:
diff --git 
a/connect/api/src/test/java/org/apache/kafka/connect/data/SchemaProjectorTest.java
 
b/connect/api/src/test/java/org/apache/kafka/connect/data/SchemaProjectorTest.java
index 151114e..0db4eec 100644
--- 
a/connect/api/src/test/java/org/apache/kafka/connect/data/SchemaProjectorTest.java
+++ 
b/connect/api/src/test/java/org/apache/kafka/connect/data/SchemaProjectorTest.java
@@ -352,6 +352,17 @@ public class SchemaProjectorTest {
 projected = SchemaProjector.project(Timestamp.SCHEMA, 34567L, 
Timestamp.SCHEMA);
 assertEquals(34567L, projected);
 
+java.util.Date date = new java.util.Date();
+
+projected = SchemaProjector.project(Date.SCHEMA, date, Date.SCHEMA);
+assertEquals(date, projected);
+
+projected = SchemaProjector.project(Time.SCHEMA, date, Time.SCHEMA);
+assertEquals(date, projected);
+
+projected = SchemaProjector.project(Timestamp.SCHEMA, date, 
Timestamp.SCHEMA);
+assertEquals(date, projected);
+
 Schema namedSchema = 
SchemaBuilder.int32().name("invalidLogicalTypeName").build();
 for (Schema logicalTypeSchema: logicalTypeSchemas) {
 try {



[kafka] branch 1.1 updated: KAFKA-7476: Fix Date-based types in SchemaProjector

2018-10-04 Thread ewencp
This is an automated email from the ASF dual-hosted git repository.

ewencp pushed a commit to branch 1.1
in repository https://gitbox.apache.org/repos/asf/kafka.git


The following commit(s) were added to refs/heads/1.1 by this push:
 new ef6c69d  KAFKA-7476: Fix Date-based types in SchemaProjector
ef6c69d is described below

commit ef6c69d6285be12d5d4efe1dab9505d50636e00d
Author: Robert Yokota 
AuthorDate: Thu Oct 4 20:34:50 2018 -0700

KAFKA-7476: Fix Date-based types in SchemaProjector

Various converters (AvroConverter and JsonConverter) produce a
SchemaAndValue consisting of a logical schema type and a java.util.Date.
This is a fix for SchemaProjector to properly handle the Date.

Author: Robert Yokota 

Reviewers: Konstantine Karantasis , Ewen 
Cheslack-Postava 

Closes #5736 from rayokota/KAFKA-7476

(cherry picked from commit 3edd8e7333ec0bb32ab5ae4ec4814fe30bb8f91d)
Signed-off-by: Ewen Cheslack-Postava 
---
 .../java/org/apache/kafka/connect/data/SchemaProjector.java   |  2 +-
 .../org/apache/kafka/connect/data/SchemaProjectorTest.java| 11 +++
 2 files changed, 12 insertions(+), 1 deletion(-)

diff --git 
a/connect/api/src/main/java/org/apache/kafka/connect/data/SchemaProjector.java 
b/connect/api/src/main/java/org/apache/kafka/connect/data/SchemaProjector.java
index ea31752..5400705 100644
--- 
a/connect/api/src/main/java/org/apache/kafka/connect/data/SchemaProjector.java
+++ 
b/connect/api/src/main/java/org/apache/kafka/connect/data/SchemaProjector.java
@@ -160,7 +160,7 @@ public class SchemaProjector {
 assert source.type().isPrimitive();
 assert target.type().isPrimitive();
 Object result;
-if (isPromotable(source.type(), target.type())) {
+if (isPromotable(source.type(), target.type()) && record instanceof 
Number) {
 Number numberRecord = (Number) record;
 switch (target.type()) {
 case INT8:
diff --git 
a/connect/api/src/test/java/org/apache/kafka/connect/data/SchemaProjectorTest.java
 
b/connect/api/src/test/java/org/apache/kafka/connect/data/SchemaProjectorTest.java
index 151114e..0db4eec 100644
--- 
a/connect/api/src/test/java/org/apache/kafka/connect/data/SchemaProjectorTest.java
+++ 
b/connect/api/src/test/java/org/apache/kafka/connect/data/SchemaProjectorTest.java
@@ -352,6 +352,17 @@ public class SchemaProjectorTest {
 projected = SchemaProjector.project(Timestamp.SCHEMA, 34567L, 
Timestamp.SCHEMA);
 assertEquals(34567L, projected);
 
+java.util.Date date = new java.util.Date();
+
+projected = SchemaProjector.project(Date.SCHEMA, date, Date.SCHEMA);
+assertEquals(date, projected);
+
+projected = SchemaProjector.project(Time.SCHEMA, date, Time.SCHEMA);
+assertEquals(date, projected);
+
+projected = SchemaProjector.project(Timestamp.SCHEMA, date, 
Timestamp.SCHEMA);
+assertEquals(date, projected);
+
 Schema namedSchema = 
SchemaBuilder.int32().name("invalidLogicalTypeName").build();
 for (Schema logicalTypeSchema: logicalTypeSchemas) {
 try {



[kafka] branch trunk updated: KAFKA-7476: Fix Date-based types in SchemaProjector

2018-10-04 Thread ewencp
This is an automated email from the ASF dual-hosted git repository.

ewencp pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/kafka.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 3edd8e7  KAFKA-7476: Fix Date-based types in SchemaProjector
3edd8e7 is described below

commit 3edd8e7333ec0bb32ab5ae4ec4814fe30bb8f91d
Author: Robert Yokota 
AuthorDate: Thu Oct 4 20:34:50 2018 -0700

KAFKA-7476: Fix Date-based types in SchemaProjector

Various converters (AvroConverter and JsonConverter) produce a
SchemaAndValue consisting of a logical schema type and a java.util.Date.
This is a fix for SchemaProjector to properly handle the Date.

Author: Robert Yokota 

Reviewers: Konstantine Karantasis , Ewen 
Cheslack-Postava 

Closes #5736 from rayokota/KAFKA-7476
---
 .../java/org/apache/kafka/connect/data/SchemaProjector.java   |  2 +-
 .../org/apache/kafka/connect/data/SchemaProjectorTest.java| 11 +++
 2 files changed, 12 insertions(+), 1 deletion(-)

diff --git 
a/connect/api/src/main/java/org/apache/kafka/connect/data/SchemaProjector.java 
b/connect/api/src/main/java/org/apache/kafka/connect/data/SchemaProjector.java
index ea31752..5400705 100644
--- 
a/connect/api/src/main/java/org/apache/kafka/connect/data/SchemaProjector.java
+++ 
b/connect/api/src/main/java/org/apache/kafka/connect/data/SchemaProjector.java
@@ -160,7 +160,7 @@ public class SchemaProjector {
 assert source.type().isPrimitive();
 assert target.type().isPrimitive();
 Object result;
-if (isPromotable(source.type(), target.type())) {
+if (isPromotable(source.type(), target.type()) && record instanceof 
Number) {
 Number numberRecord = (Number) record;
 switch (target.type()) {
 case INT8:
diff --git 
a/connect/api/src/test/java/org/apache/kafka/connect/data/SchemaProjectorTest.java
 
b/connect/api/src/test/java/org/apache/kafka/connect/data/SchemaProjectorTest.java
index 151114e..0db4eec 100644
--- 
a/connect/api/src/test/java/org/apache/kafka/connect/data/SchemaProjectorTest.java
+++ 
b/connect/api/src/test/java/org/apache/kafka/connect/data/SchemaProjectorTest.java
@@ -352,6 +352,17 @@ public class SchemaProjectorTest {
 projected = SchemaProjector.project(Timestamp.SCHEMA, 34567L, 
Timestamp.SCHEMA);
 assertEquals(34567L, projected);
 
+java.util.Date date = new java.util.Date();
+
+projected = SchemaProjector.project(Date.SCHEMA, date, Date.SCHEMA);
+assertEquals(date, projected);
+
+projected = SchemaProjector.project(Time.SCHEMA, date, Time.SCHEMA);
+assertEquals(date, projected);
+
+projected = SchemaProjector.project(Timestamp.SCHEMA, date, 
Timestamp.SCHEMA);
+assertEquals(date, projected);
+
 Schema namedSchema = 
SchemaBuilder.int32().name("invalidLogicalTypeName").build();
 for (Schema logicalTypeSchema: logicalTypeSchemas) {
 try {


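The `record instanceof Number` guard above matters because SchemaProjector's primitive-promotion path casts the value to Number, which threw a ClassCastException whenever a converter handed it a java.util.Date for a logical Date/Time/Timestamp schema. A minimal sketch of the fixed behavior, mirroring the new test assertions (the class name is illustrative only):

import java.util.Date;
import org.apache.kafka.connect.data.SchemaProjector;
import org.apache.kafka.connect.data.Timestamp;

public class DateProjectionDemo {
    public static void main(String[] args) {
        Date now = new Date();
        // Source and target types match and the value is not a Number, so the
        // fixed projectPrimitive() passes the Date through unchanged instead
        // of attempting the (Number) cast.
        Object projected = SchemaProjector.project(Timestamp.SCHEMA, now, Timestamp.SCHEMA);
        System.out.println(now.equals(projected)); // prints: true
    }
}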

[kafka] branch 2.1 updated: KAFKA-7476: Fix Date-based types in SchemaProjector

2018-10-04 Thread ewencp
This is an automated email from the ASF dual-hosted git repository.

ewencp pushed a commit to branch 2.1
in repository https://gitbox.apache.org/repos/asf/kafka.git


The following commit(s) were added to refs/heads/2.1 by this push:
 new af930d3  KAFKA-7476: Fix Date-based types in SchemaProjector
af930d3 is described below

commit af930d3c76db24961cb6a3cafdeb414726ee3952
Author: Robert Yokota 
AuthorDate: Thu Oct 4 20:34:50 2018 -0700

KAFKA-7476: Fix Date-based types in SchemaProjector

Various converters (AvroConverter and JsonConverter) produce a
SchemaAndValue consisting of a logical schema type and a java.util.Date.
This is a fix for SchemaProjector to properly handle the Date.

Author: Robert Yokota 

Reviewers: Konstantine Karantasis , Ewen 
Cheslack-Postava 

Closes #5736 from rayokota/KAFKA-7476

(cherry picked from commit 3edd8e7333ec0bb32ab5ae4ec4814fe30bb8f91d)
Signed-off-by: Ewen Cheslack-Postava 
---
 .../java/org/apache/kafka/connect/data/SchemaProjector.java   |  2 +-
 .../org/apache/kafka/connect/data/SchemaProjectorTest.java| 11 +++
 2 files changed, 12 insertions(+), 1 deletion(-)

diff --git 
a/connect/api/src/main/java/org/apache/kafka/connect/data/SchemaProjector.java 
b/connect/api/src/main/java/org/apache/kafka/connect/data/SchemaProjector.java
index ea31752..5400705 100644
--- 
a/connect/api/src/main/java/org/apache/kafka/connect/data/SchemaProjector.java
+++ 
b/connect/api/src/main/java/org/apache/kafka/connect/data/SchemaProjector.java
@@ -160,7 +160,7 @@ public class SchemaProjector {
 assert source.type().isPrimitive();
 assert target.type().isPrimitive();
 Object result;
-if (isPromotable(source.type(), target.type())) {
+if (isPromotable(source.type(), target.type()) && record instanceof 
Number) {
 Number numberRecord = (Number) record;
 switch (target.type()) {
 case INT8:
diff --git 
a/connect/api/src/test/java/org/apache/kafka/connect/data/SchemaProjectorTest.java
 
b/connect/api/src/test/java/org/apache/kafka/connect/data/SchemaProjectorTest.java
index 151114e..0db4eec 100644
--- 
a/connect/api/src/test/java/org/apache/kafka/connect/data/SchemaProjectorTest.java
+++ 
b/connect/api/src/test/java/org/apache/kafka/connect/data/SchemaProjectorTest.java
@@ -352,6 +352,17 @@ public class SchemaProjectorTest {
 projected = SchemaProjector.project(Timestamp.SCHEMA, 34567L, 
Timestamp.SCHEMA);
 assertEquals(34567L, projected);
 
+java.util.Date date = new java.util.Date();
+
+projected = SchemaProjector.project(Date.SCHEMA, date, Date.SCHEMA);
+assertEquals(date, projected);
+
+projected = SchemaProjector.project(Time.SCHEMA, date, Time.SCHEMA);
+assertEquals(date, projected);
+
+projected = SchemaProjector.project(Timestamp.SCHEMA, date, 
Timestamp.SCHEMA);
+assertEquals(date, projected);
+
 Schema namedSchema = 
SchemaBuilder.int32().name("invalidLogicalTypeName").build();
 for (Schema logicalTypeSchema: logicalTypeSchemas) {
 try {



[kafka] branch 1.0 updated: MINOR: Increase timeout for starting JMX tool (#5735)

2018-10-04 Thread ewencp
This is an automated email from the ASF dual-hosted git repository.

ewencp pushed a commit to branch 1.0
in repository https://gitbox.apache.org/repos/asf/kafka.git


The following commit(s) were added to refs/heads/1.0 by this push:
 new 53d76c7  MINOR: Increase timeout for starting JMX tool (#5735)
53d76c7 is described below

commit 53d76c73587f751923dfca83178847c9d3deb338
Author: Randall Hauch 
AuthorDate: Wed Oct 3 10:56:44 2018 -0500

MINOR: Increase timeout for starting JMX tool (#5735)

In some tests, the check monitoring the JMX tool log output doesn’t quite 
wait long enough before failing. Increasing the timeout from 10 to 20 seconds.
---
 tests/kafkatest/services/monitor/jmx.py | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/tests/kafkatest/services/monitor/jmx.py 
b/tests/kafkatest/services/monitor/jmx.py
index 6f6e221..a64842c 100644
--- a/tests/kafkatest/services/monitor/jmx.py
+++ b/tests/kafkatest/services/monitor/jmx.py
@@ -83,7 +83,7 @@ class JmxMixin(object):
 
 self.logger.debug("%s: Start JmxTool %d command: %s" % (node.account, 
idx, cmd))
 node.account.ssh(cmd, allow_fail=False)
-wait_until(lambda: self._jmx_has_output(node), timeout_sec=10, 
backoff_sec=.5, err_msg="%s: Jmx tool took too long to start" % node.account)
+wait_until(lambda: self._jmx_has_output(node), timeout_sec=20, 
backoff_sec=.5, err_msg="%s: Jmx tool took too long to start" % node.account)
 self.started[idx-1] = True
 
 def _jmx_has_output(self, node):


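For readers who don't know ducktape (the system-test framework these kafkatest services are built on), `wait_until` is just a bounded poll: re-evaluate a condition every `backoff_sec` until it holds or `timeout_sec` elapses, then fail with `err_msg`. Roughly equivalent, as a Java sketch (names hypothetical; the real helper is Python):

import java.util.function.BooleanSupplier;

public final class WaitUntil {
    // Poll `condition` every backoffMs until it returns true; fail once timeoutMs has elapsed.
    public static void waitUntil(BooleanSupplier condition, long timeoutMs,
                                 long backoffMs, String errMsg) throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (!condition.getAsBoolean()) {
            if (System.currentTimeMillis() >= deadline) {
                throw new AssertionError(errMsg);
            }
            Thread.sleep(backoffMs);
        }
    }
}

The commit simply doubles the deadline (10s to 20s) for the JMX tool's first log output; the poll interval is unchanged.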

[kafka] branch 0.10.2 updated: MINOR: Increase timeout for starting JMX tool (#5735)

2018-10-04 Thread ewencp
This is an automated email from the ASF dual-hosted git repository.

ewencp pushed a commit to branch 0.10.2
in repository https://gitbox.apache.org/repos/asf/kafka.git


The following commit(s) were added to refs/heads/0.10.2 by this push:
 new 6396b77  MINOR: Increase timeout for starting JMX tool (#5735)
6396b77 is described below

commit 6396b776a47a2f58d3a5a3d76bc8de495d8bc43c
Author: Randall Hauch 
AuthorDate: Wed Oct 3 10:56:44 2018 -0500

MINOR: Increase timeout for starting JMX tool (#5735)

In some tests, the check monitoring the JMX tool log output doesn’t quite 
wait long enough before failing. Increasing the timeout from 10 to 20 seconds.
---
 tests/kafkatest/services/monitor/jmx.py | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/tests/kafkatest/services/monitor/jmx.py 
b/tests/kafkatest/services/monitor/jmx.py
index 0859bb4..f2ac33c 100644
--- a/tests/kafkatest/services/monitor/jmx.py
+++ b/tests/kafkatest/services/monitor/jmx.py
@@ -70,7 +70,7 @@ class JmxMixin(object):
 
 self.logger.debug("%s: Start JmxTool %d command: %s" % (node.account, 
idx, cmd))
 node.account.ssh(cmd, allow_fail=False)
-wait_until(lambda: self._jmx_has_output(node), timeout_sec=10, 
backoff_sec=.5, err_msg="%s: Jmx tool took too long to start" % node.account)
+wait_until(lambda: self._jmx_has_output(node), timeout_sec=20, 
backoff_sec=.5, err_msg="%s: Jmx tool took too long to start" % node.account)
 self.started[idx-1] = True
 
 def _jmx_has_output(self, node):



[kafka] branch 0.11.0 updated: MINOR: Increase timeout for starting JMX tool (#5735)

2018-10-04 Thread ewencp
This is an automated email from the ASF dual-hosted git repository.

ewencp pushed a commit to branch 0.11.0
in repository https://gitbox.apache.org/repos/asf/kafka.git


The following commit(s) were added to refs/heads/0.11.0 by this push:
 new 59991f6  MINOR: Increase timeout for starting JMX tool (#5735)
59991f6 is described below

commit 59991f69cde98f691c2382754ed22bcd0fd884cf
Author: Randall Hauch 
AuthorDate: Wed Oct 3 10:56:44 2018 -0500

MINOR: Increase timeout for starting JMX tool (#5735)

In some tests, the check monitoring the JMX tool log output doesn’t quite 
wait long enough before failing. Increasing the timeout from 10 to 20 seconds.
---
 tests/kafkatest/services/monitor/jmx.py | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/tests/kafkatest/services/monitor/jmx.py 
b/tests/kafkatest/services/monitor/jmx.py
index 7331cb9f..5af2b18 100644
--- a/tests/kafkatest/services/monitor/jmx.py
+++ b/tests/kafkatest/services/monitor/jmx.py
@@ -77,7 +77,7 @@ class JmxMixin(object):
 
 self.logger.debug("%s: Start JmxTool %d command: %s" % (node.account, 
idx, cmd))
 node.account.ssh(cmd, allow_fail=False)
-wait_until(lambda: self._jmx_has_output(node), timeout_sec=10, 
backoff_sec=.5, err_msg="%s: Jmx tool took too long to start" % node.account)
+wait_until(lambda: self._jmx_has_output(node), timeout_sec=20, 
backoff_sec=.5, err_msg="%s: Jmx tool took too long to start" % node.account)
 self.started[idx-1] = True
 
 def _jmx_has_output(self, node):



[kafka] branch 1.1 updated: MINOR: Increase timeout for starting JMX tool (#5735)

2018-10-04 Thread ewencp
This is an automated email from the ASF dual-hosted git repository.

ewencp pushed a commit to branch 1.1
in repository https://gitbox.apache.org/repos/asf/kafka.git


The following commit(s) were added to refs/heads/1.1 by this push:
 new 9b61a0e  MINOR: Increase timeout for starting JMX tool (#5735)
9b61a0e is described below

commit 9b61a0ee0901867ed62a47edee21baa0e4ce79fa
Author: Randall Hauch 
AuthorDate: Wed Oct 3 10:56:44 2018 -0500

MINOR: Increase timeout for starting JMX tool (#5735)

In some tests, the check monitoring the JMX tool log output doesn’t quite 
wait long enough before failing. Increasing the timeout from 10 to 20 seconds.
---
 tests/kafkatest/services/monitor/jmx.py | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/tests/kafkatest/services/monitor/jmx.py 
b/tests/kafkatest/services/monitor/jmx.py
index 6f6e221..a64842c 100644
--- a/tests/kafkatest/services/monitor/jmx.py
+++ b/tests/kafkatest/services/monitor/jmx.py
@@ -83,7 +83,7 @@ class JmxMixin(object):
 
 self.logger.debug("%s: Start JmxTool %d command: %s" % (node.account, 
idx, cmd))
 node.account.ssh(cmd, allow_fail=False)
-wait_until(lambda: self._jmx_has_output(node), timeout_sec=10, 
backoff_sec=.5, err_msg="%s: Jmx tool took too long to start" % node.account)
+wait_until(lambda: self._jmx_has_output(node), timeout_sec=20, 
backoff_sec=.5, err_msg="%s: Jmx tool took too long to start" % node.account)
 self.started[idx-1] = True
 
 def _jmx_has_output(self, node):



[kafka] branch 2.0 updated: MINOR: Increase timeout for starting JMX tool (#5735)

2018-10-04 Thread ewencp
This is an automated email from the ASF dual-hosted git repository.

ewencp pushed a commit to branch 2.0
in repository https://gitbox.apache.org/repos/asf/kafka.git


The following commit(s) were added to refs/heads/2.0 by this push:
 new e7093d6  MINOR: Increase timeout for starting JMX tool (#5735)
e7093d6 is described below

commit e7093d6c1a946abd23103ba5b3802c4d3dd38892
Author: Randall Hauch 
AuthorDate: Wed Oct 3 10:56:44 2018 -0500

MINOR: Increase timeout for starting JMX tool (#5735)

In some tests, the check monitoring the JMX tool log output doesn’t quite 
wait long enough before failing. Increasing the timeout from 10 to 20 seconds.
---
 tests/kafkatest/services/monitor/jmx.py | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/tests/kafkatest/services/monitor/jmx.py 
b/tests/kafkatest/services/monitor/jmx.py
index 542d3a5..cf8cbc3 100644
--- a/tests/kafkatest/services/monitor/jmx.py
+++ b/tests/kafkatest/services/monitor/jmx.py
@@ -83,7 +83,7 @@ class JmxMixin(object):
 
 self.logger.debug("%s: Start JmxTool %d command: %s" % (node.account, 
idx, cmd))
 node.account.ssh(cmd, allow_fail=False)
-wait_until(lambda: self._jmx_has_output(node), timeout_sec=10, 
backoff_sec=.5, err_msg="%s: Jmx tool took too long to start" % node.account)
+wait_until(lambda: self._jmx_has_output(node), timeout_sec=20, 
backoff_sec=.5, err_msg="%s: Jmx tool took too long to start" % node.account)
 self.started[idx-1] = True
 
 def _jmx_has_output(self, node):



[kafka] branch trunk updated: KAFKA-6684: Support casting Connect values with bytes schema to string

2018-09-30 Thread ewencp
This is an automated email from the ASF dual-hosted git repository.

ewencp pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/kafka.git


The following commit(s) were added to refs/heads/trunk by this push:
 new fd44dc7  KAFKA-6684: Support casting Connect values with bytes schema 
to string
fd44dc7 is described below

commit fd44dc7fb210614349a873cdd82087ef5677f583
Author: Amit Sela 
AuthorDate: Sun Sep 30 22:24:09 2018 -0700

KAFKA-6684: Support casting Connect values with bytes schema to string

Allow casting LogicalType to string by calling the serialized (Java) 
object's toString().

Added tests for `BigDecimal` and `Date` as whole record and as fields.

Author: Amit Sela 

Reviewers: Randall Hauch , Robert Yokota 
, Ewen Cheslack-Postava 

Closes #4820 from amitsela/cast-transform-bytes
---
 .../java/org/apache/kafka/connect/data/Values.java |  2 +-
 .../org/apache/kafka/connect/transforms/Cast.java  | 62 ++
 .../apache/kafka/connect/transforms/CastTest.java  | 42 ++-
 3 files changed, 83 insertions(+), 23 deletions(-)

diff --git 
a/connect/api/src/main/java/org/apache/kafka/connect/data/Values.java 
b/connect/api/src/main/java/org/apache/kafka/connect/data/Values.java
index c944745..c2bd9f4 100644
--- a/connect/api/src/main/java/org/apache/kafka/connect/data/Values.java
+++ b/connect/api/src/main/java/org/apache/kafka/connect/data/Values.java
@@ -713,7 +713,7 @@ public class Values {
 return DOUBLEQOUTE.matcher(replace1).replaceAll("\"");
 }
 
-protected static DateFormat dateFormatFor(java.util.Date value) {
+public static DateFormat dateFormatFor(java.util.Date value) {
 if (value.getTime() < MILLIS_PER_DAY) {
 return new SimpleDateFormat(ISO_8601_TIME_FORMAT_PATTERN);
 }
diff --git 
a/connect/transforms/src/main/java/org/apache/kafka/connect/transforms/Cast.java
 
b/connect/transforms/src/main/java/org/apache/kafka/connect/transforms/Cast.java
index a593c7b..07ccd37 100644
--- 
a/connect/transforms/src/main/java/org/apache/kafka/connect/transforms/Cast.java
+++ 
b/connect/transforms/src/main/java/org/apache/kafka/connect/transforms/Cast.java
@@ -28,6 +28,7 @@ import org.apache.kafka.connect.data.Field;
 import org.apache.kafka.connect.data.Schema;
 import org.apache.kafka.connect.data.SchemaBuilder;
 import org.apache.kafka.connect.data.Struct;
+import org.apache.kafka.connect.data.Values;
 import org.apache.kafka.connect.errors.DataException;
 import org.apache.kafka.connect.transforms.util.SchemaUtil;
 import org.apache.kafka.connect.transforms.util.SimpleConfig;
@@ -78,9 +79,16 @@ public abstract class Cast<R extends ConnectRecord<R>> implements Transformation<R>
 
 private static final String PURPOSE = "cast types";
 
-private static final Set<Schema.Type> SUPPORTED_CAST_TYPES = EnumSet.of(
+private static final Set<Schema.Type> SUPPORTED_CAST_INPUT_TYPES = 
EnumSet.of(
 Schema.Type.INT8, Schema.Type.INT16, Schema.Type.INT32, 
Schema.Type.INT64,
-Schema.Type.FLOAT32, Schema.Type.FLOAT64, 
Schema.Type.BOOLEAN, Schema.Type.STRING
+Schema.Type.FLOAT32, Schema.Type.FLOAT64, 
Schema.Type.BOOLEAN,
+Schema.Type.STRING, Schema.Type.BYTES
+);
+
+private static final Set<Schema.Type> SUPPORTED_CAST_OUTPUT_TYPES = 
EnumSet.of(
+Schema.Type.INT8, Schema.Type.INT16, Schema.Type.INT32, 
Schema.Type.INT64,
+Schema.Type.FLOAT32, Schema.Type.FLOAT64, 
Schema.Type.BOOLEAN,
+Schema.Type.STRING
 );
 
 // As a special case for casting the entire value (e.g. the incoming key 
is a int64 but you know it could be an
@@ -120,14 +128,14 @@ public abstract class Cast<R extends ConnectRecord<R>> implements Transformation<R>
 
 private R applySchemaless(R record) {
 if (wholeValueCastType != null) {
-return newRecord(record, null, 
castValueToType(operatingValue(record), wholeValueCastType));
+return newRecord(record, null, castValueToType(null, 
operatingValue(record), wholeValueCastType));
 }
 
 final Map<String, Object> value = requireMap(operatingValue(record), 
PURPOSE);
 final HashMap<String, Object> updatedValue = new HashMap<>(value);
 for (Map.Entry<String, Schema.Type> fieldSpec : casts.entrySet()) {
 String field = fieldSpec.getKey();
-updatedValue.put(field, castValueToType(value.get(field), 
fieldSpec.getValue()));
+updatedValue.put(field, castValueToType(null, value.get(field), 
fieldSpec.getValue()));
 }
 return newRecord(record, null, updatedValue);
 }
@@ -138,7 +146,7 @@ public abstract class Cast<R extends ConnectRecord<R>> implements Transformation<R>
 
 // Whole-record casting
 if (wholeValueCastType != null)
-return newRecord(record, updatedSchema, 
castValueToType(operatingValue(record), wholeValueCastType));
+return newRecord(record, 

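With BYTES accepted as a cast input, bytes-backed logical types such as Decimal can now be cast to their string form (the value's toString(), or an ISO-8601 rendering for Date-based types). A hypothetical whole-value usage sketch, loosely following the new CastTest cases (topic name and value are made up):

import java.math.BigDecimal;
import java.math.BigInteger;
import java.util.Collections;
import org.apache.kafka.connect.data.Decimal;
import org.apache.kafka.connect.source.SourceRecord;
import org.apache.kafka.connect.transforms.Cast;

public class CastDecimalDemo {
    public static void main(String[] args) {
        Cast.Value<SourceRecord> xform = new Cast.Value<>();
        xform.configure(Collections.singletonMap("spec", "string")); // cast the whole value
        SourceRecord record = new SourceRecord(null, null, "prices", 0,
                Decimal.schema(2), new BigDecimal(new BigInteger("15"), 2)); // 0.15
        SourceRecord result = xform.apply(record);
        System.out.println(result.value()); // prints: 0.15 (as a String)
        xform.close();
    }
}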
[kafka] branch trunk updated: KAFKA-7460: Fix Connect Values converter date format pattern

2018-09-30 Thread ewencp
This is an automated email from the ASF dual-hosted git repository.

ewencp pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/kafka.git


The following commit(s) were added to refs/heads/trunk by this push:
 new c1457be  KAFKA-7460: Fix Connect Values converter date format pattern
c1457be is described below

commit c1457be99555063b774db61f526fd0c059721c69
Author: Amit Sela 
AuthorDate: Sun Sep 30 19:51:59 2018 -0700

KAFKA-7460: Fix Connect Values converter date format pattern

Switches to normal year format instead of week date years and day of month 
instead of day of year.

This is directly from #4820, but separated into a different JIRA/PR to keep 
the fixes independent. Original authorship should be maintained in the commit.

Author: Amit Sela 

Reviewers: Ewen Cheslack-Postava 

Closes #5718 from ewencp/fix-header-converter-date-format
---
 connect/api/src/main/java/org/apache/kafka/connect/data/Values.java | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git 
a/connect/api/src/main/java/org/apache/kafka/connect/data/Values.java 
b/connect/api/src/main/java/org/apache/kafka/connect/data/Values.java
index ceb1768..c944745 100644
--- a/connect/api/src/main/java/org/apache/kafka/connect/data/Values.java
+++ b/connect/api/src/main/java/org/apache/kafka/connect/data/Values.java
@@ -70,7 +70,7 @@ public class Values {
 private static final String FALSE_LITERAL = Boolean.TRUE.toString();
 private static final long MILLIS_PER_DAY = 24 * 60 * 60 * 1000;
 private static final String NULL_VALUE = "null";
-private static final String ISO_8601_DATE_FORMAT_PATTERN = "YYYY-MM-DD";
+private static final String ISO_8601_DATE_FORMAT_PATTERN = "yyyy-MM-dd";
 private static final String ISO_8601_TIME_FORMAT_PATTERN = 
"HH:mm:ss.SSS'Z'";
 private static final String ISO_8601_TIMESTAMP_FORMAT_PATTERN = 
ISO_8601_DATE_FORMAT_PATTERN + "'T'" + ISO_8601_TIME_FORMAT_PATTERN;
 


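The distinction is easy to miss: in SimpleDateFormat, uppercase 'Y' is the week-based year and uppercase 'D' is the day of the year, so the old pattern misrenders dates near year boundaries. A self-contained demonstration (output of the "YYYY" line can vary with the locale's week rules):

import java.text.SimpleDateFormat;
import java.util.Calendar;
import java.util.GregorianCalendar;

public class WeekYearDemo {
    public static void main(String[] args) {
        // Dec 31, 2018 is day 365 of 2018, but it falls in week 1 of 2019.
        Calendar cal = new GregorianCalendar(2018, Calendar.DECEMBER, 31);
        System.out.println(new SimpleDateFormat("YYYY-MM-DD").format(cal.getTime())); // 2019-12-365
        System.out.println(new SimpleDateFormat("yyyy-MM-dd").format(cal.getTime())); // 2018-12-31
    }
}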

[kafka] branch 1.1 updated: KAFKA-7460: Fix Connect Values converter date format pattern

2018-09-30 Thread ewencp
This is an automated email from the ASF dual-hosted git repository.

ewencp pushed a commit to branch 1.1
in repository https://gitbox.apache.org/repos/asf/kafka.git


The following commit(s) were added to refs/heads/1.1 by this push:
 new 6b0f28b  KAFKA-7460: Fix Connect Values converter date format pattern
6b0f28b is described below

commit 6b0f28bf4eac0aa31c0f6383de0da91ba70bbd44
Author: Amit Sela 
AuthorDate: Sun Sep 30 19:51:59 2018 -0700

KAFKA-7460: Fix Connect Values converter date format pattern

Switches to normal year format instead of week date years and day of month 
instead of day of year.

This is directly from #4820, but separated into a different JIRA/PR to keep 
the fixes independent. Original authorship should be maintained in the commit.

Author: Amit Sela 

Reviewers: Ewen Cheslack-Postava 

Closes #5718 from ewencp/fix-header-converter-date-format

(cherry picked from commit c1457be99555063b774db61f526fd0c059721c69)
Signed-off-by: Ewen Cheslack-Postava 
---
 connect/api/src/main/java/org/apache/kafka/connect/data/Values.java | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git 
a/connect/api/src/main/java/org/apache/kafka/connect/data/Values.java 
b/connect/api/src/main/java/org/apache/kafka/connect/data/Values.java
index 05248ef..042079b 100644
--- a/connect/api/src/main/java/org/apache/kafka/connect/data/Values.java
+++ b/connect/api/src/main/java/org/apache/kafka/connect/data/Values.java
@@ -69,7 +69,7 @@ public class Values {
 private static final String FALSE_LITERAL = Boolean.TRUE.toString();
 private static final long MILLIS_PER_DAY = 24 * 60 * 60 * 1000;
 private static final String NULL_VALUE = "null";
-private static final String ISO_8601_DATE_FORMAT_PATTERN = "YYYY-MM-DD";
+private static final String ISO_8601_DATE_FORMAT_PATTERN = "yyyy-MM-dd";
 private static final String ISO_8601_TIME_FORMAT_PATTERN = 
"HH:mm:ss.SSS'Z'";
 private static final String ISO_8601_TIMESTAMP_FORMAT_PATTERN = 
ISO_8601_DATE_FORMAT_PATTERN + "'T'" + ISO_8601_TIME_FORMAT_PATTERN;
 



[kafka] branch 2.0 updated: KAFKA-7460: Fix Connect Values converter date format pattern

2018-09-30 Thread ewencp
This is an automated email from the ASF dual-hosted git repository.

ewencp pushed a commit to branch 2.0
in repository https://gitbox.apache.org/repos/asf/kafka.git


The following commit(s) were added to refs/heads/2.0 by this push:
 new be74944  KAFKA-7460: Fix Connect Values converter date format pattern
be74944 is described below

commit be7494452cccd417f1007fc14381ef7dc74a7207
Author: Amit Sela 
AuthorDate: Sun Sep 30 19:51:59 2018 -0700

KAFKA-7460: Fix Connect Values converter date format pattern

Switches to normal year format instead of week date years and day of month 
instead of day of year.

This is directly from #4820, but separated into a different JIRA/PR to keep 
the fixes independent. Original authorship should be maintained in the commit.

Author: Amit Sela 

Reviewers: Ewen Cheslack-Postava 

Closes #5718 from ewencp/fix-header-converter-date-format

(cherry picked from commit c1457be99555063b774db61f526fd0c059721c69)
Signed-off-by: Ewen Cheslack-Postava 
---
 connect/api/src/main/java/org/apache/kafka/connect/data/Values.java | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git 
a/connect/api/src/main/java/org/apache/kafka/connect/data/Values.java 
b/connect/api/src/main/java/org/apache/kafka/connect/data/Values.java
index d643aa2..f705dcc 100644
--- a/connect/api/src/main/java/org/apache/kafka/connect/data/Values.java
+++ b/connect/api/src/main/java/org/apache/kafka/connect/data/Values.java
@@ -69,7 +69,7 @@ public class Values {
 private static final String FALSE_LITERAL = Boolean.TRUE.toString();
 private static final long MILLIS_PER_DAY = 24 * 60 * 60 * 1000;
 private static final String NULL_VALUE = "null";
-private static final String ISO_8601_DATE_FORMAT_PATTERN = "YYYY-MM-DD";
+private static final String ISO_8601_DATE_FORMAT_PATTERN = "yyyy-MM-dd";
 private static final String ISO_8601_TIME_FORMAT_PATTERN = 
"HH:mm:ss.SSS'Z'";
 private static final String ISO_8601_TIMESTAMP_FORMAT_PATTERN = 
ISO_8601_DATE_FORMAT_PATTERN + "'T'" + ISO_8601_TIME_FORMAT_PATTERN;
 



[kafka] branch trunk updated: KAFKA-7434: Fix NPE in DeadLetterQueueReporter

2018-09-29 Thread ewencp
This is an automated email from the ASF dual-hosted git repository.

ewencp pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/kafka.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 22f1724  KAFKA-7434: Fix NPE in DeadLetterQueueReporter
22f1724 is described below

commit 22f1724123c267352116c18db1abdee25c31b382
Author: Michał Borowiecki 
AuthorDate: Sat Sep 29 10:19:10 2018 -0700

KAFKA-7434: Fix NPE in DeadLetterQueueReporter

*More detailed description of your change,
if necessary. The PR title and PR message become
the squashed commit message, so use a separate
comment to ping reviewers.*

*Summary of testing strategy (including rationale)
for the feature or bug fix. Unit and/or integration
tests are expected for any behaviour change and
system tests should be considered for larger changes.*

Author: Michał Borowiecki 

Reviewers: Arjun Satish , Ewen Cheslack-Postava 


Closes #5700 from mihbor/KAFKA-7434
---
 .../runtime/errors/DeadLetterQueueReporter.java|  6 -
 .../connect/runtime/errors/ErrorReporterTest.java  | 30 ++
 2 files changed, 35 insertions(+), 1 deletion(-)

diff --git 
a/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/errors/DeadLetterQueueReporter.java
 
b/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/errors/DeadLetterQueueReporter.java
index c059dcf..2312269 100644
--- 
a/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/errors/DeadLetterQueueReporter.java
+++ 
b/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/errors/DeadLetterQueueReporter.java
@@ -199,6 +199,10 @@ public class DeadLetterQueueReporter implements 
ErrorReporter {
 }
 
 private byte[] toBytes(String value) {
-return value.getBytes(StandardCharsets.UTF_8);
+if (value != null) {
+return value.getBytes(StandardCharsets.UTF_8);
+} else {
+return null;
+}
 }
 }
diff --git 
a/connect/runtime/src/test/java/org/apache/kafka/connect/runtime/errors/ErrorReporterTest.java
 
b/connect/runtime/src/test/java/org/apache/kafka/connect/runtime/errors/ErrorReporterTest.java
index fa628b0..00a922f 100644
--- 
a/connect/runtime/src/test/java/org/apache/kafka/connect/runtime/errors/ErrorReporterTest.java
+++ 
b/connect/runtime/src/test/java/org/apache/kafka/connect/runtime/errors/ErrorReporterTest.java
@@ -59,6 +59,7 @@ import static 
org.apache.kafka.connect.runtime.errors.DeadLetterQueueReporter.ER
 import static 
org.apache.kafka.connect.runtime.errors.DeadLetterQueueReporter.ERROR_HEADER_TASK_ID;
 import static org.easymock.EasyMock.replay;
 import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertNull;
 import static org.junit.Assert.assertTrue;
 
 @RunWith(PowerMockRunner.class)
@@ -205,6 +206,7 @@ public class ErrorReporterTest {
 assertEquals(configuration.dlqTopicReplicationFactor(), 7);
 }
 
+@Test
 public void testDlqHeaderConsumerRecord() {
 Map<String, String> props = new HashMap<>();
 props.put(SinkConnectorConfig.DLQ_TOPIC_NAME_CONFIG, DLQ_TOPIC);
@@ -233,6 +235,34 @@ public class ErrorReporterTest {
 }
 
 @Test
+public void testDlqHeaderOnNullExceptionMessage() {
+Map<String, String> props = new HashMap<>();
+props.put(SinkConnectorConfig.DLQ_TOPIC_NAME_CONFIG, DLQ_TOPIC);
+props.put(SinkConnectorConfig.DLQ_CONTEXT_HEADERS_ENABLE_CONFIG, 
"true");
+DeadLetterQueueReporter deadLetterQueueReporter = new 
DeadLetterQueueReporter(producer, config(props), TASK_ID, errorHandlingMetrics);
+
+ProcessingContext context = new ProcessingContext();
+context.consumerRecord(new ConsumerRecord<>("source-topic", 7, 10, 
"source-key".getBytes(), "source-value".getBytes()));
+context.currentContext(Stage.TRANSFORMATION, Transformation.class);
+context.error(new NullPointerException());
+
+ProducerRecord<byte[], byte[]> producerRecord = new 
ProducerRecord<>(DLQ_TOPIC, "source-key".getBytes(), "source-value".getBytes());
+
+deadLetterQueueReporter.populateContextHeaders(producerRecord, 
context);
+assertEquals("source-topic", headerValue(producerRecord, 
ERROR_HEADER_ORIG_TOPIC));
+assertEquals("7", headerValue(producerRecord, 
ERROR_HEADER_ORIG_PARTITION));
+assertEquals("10", headerValue(producerRecord, 
ERROR_HEADER_ORIG_OFFSET));
+assertEquals(TASK_ID.connector(), headerValue(producerRecord, 
ERROR_HEADER_CONNECTOR_NAME));
+assertEquals(String.valueOf(TASK_ID.task()), 
headerValue(producerRecord, ERROR_HEADER_TASK_ID));
+assertEquals(Stage.TRANSFORMATION.name(), headerValue(producerRecord, 
ERROR_HEADER_STAGE));
+assertEquals(Transformation.class.getName(), 
headerValue(produc

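The root cause is a plain Java fact: Throwable.getMessage() may return null (a bare `new NullPointerException()` carries no message), so calling getBytes() on it unguarded throws a second NullPointerException while reporting the first error. A minimal sketch of the guarded conversion the fix introduces:

import java.nio.charset.StandardCharsets;

public class NullMessageDemo {
    public static void main(String[] args) {
        Throwable error = new NullPointerException(); // constructed without a message
        String message = error.getMessage();          // null
        // Unguarded, message.getBytes(StandardCharsets.UTF_8) would throw right here.
        byte[] bytes = message != null ? message.getBytes(StandardCharsets.UTF_8) : null;
        System.out.println(bytes == null); // prints: true -- the header value is simply left null
    }
}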
[kafka] branch 2.0 updated: KAFKA-7434: Fix NPE in DeadLetterQueueReporter

2018-09-29 Thread ewencp
This is an automated email from the ASF dual-hosted git repository.

ewencp pushed a commit to branch 2.0
in repository https://gitbox.apache.org/repos/asf/kafka.git


The following commit(s) were added to refs/heads/2.0 by this push:
 new 74c8b83  KAFKA-7434: Fix NPE in DeadLetterQueueReporter
74c8b83 is described below

commit 74c8b831472ed07e10ceda660e0e504a6a6821c4
Author: Michał Borowiecki 
AuthorDate: Sat Sep 29 10:19:10 2018 -0700

KAFKA-7434: Fix NPE in DeadLetterQueueReporter

*More detailed description of your change,
if necessary. The PR title and PR message become
the squashed commit message, so use a separate
comment to ping reviewers.*

*Summary of testing strategy (including rationale)
for the feature or bug fix. Unit and/or integration
tests are expected for any behaviour change and
system tests should be considered for larger changes.*

Author: Michał Borowiecki 

Reviewers: Arjun Satish , Ewen Cheslack-Postava 


Closes #5700 from mihbor/KAFKA-7434

(cherry picked from commit 22f1724123c267352116c18db1abdee25c31b382)
Signed-off-by: Ewen Cheslack-Postava 
---
 .../runtime/errors/DeadLetterQueueReporter.java|  6 -
 .../connect/runtime/errors/ErrorReporterTest.java  | 30 ++
 2 files changed, 35 insertions(+), 1 deletion(-)

diff --git 
a/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/errors/DeadLetterQueueReporter.java
 
b/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/errors/DeadLetterQueueReporter.java
index c059dcf..2312269 100644
--- 
a/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/errors/DeadLetterQueueReporter.java
+++ 
b/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/errors/DeadLetterQueueReporter.java
@@ -199,6 +199,10 @@ public class DeadLetterQueueReporter implements 
ErrorReporter {
 }
 
 private byte[] toBytes(String value) {
-return value.getBytes(StandardCharsets.UTF_8);
+if (value != null) {
+return value.getBytes(StandardCharsets.UTF_8);
+} else {
+return null;
+}
 }
 }
diff --git 
a/connect/runtime/src/test/java/org/apache/kafka/connect/runtime/errors/ErrorReporterTest.java
 
b/connect/runtime/src/test/java/org/apache/kafka/connect/runtime/errors/ErrorReporterTest.java
index fa628b0..00a922f 100644
--- 
a/connect/runtime/src/test/java/org/apache/kafka/connect/runtime/errors/ErrorReporterTest.java
+++ 
b/connect/runtime/src/test/java/org/apache/kafka/connect/runtime/errors/ErrorReporterTest.java
@@ -59,6 +59,7 @@ import static 
org.apache.kafka.connect.runtime.errors.DeadLetterQueueReporter.ER
 import static 
org.apache.kafka.connect.runtime.errors.DeadLetterQueueReporter.ERROR_HEADER_TASK_ID;
 import static org.easymock.EasyMock.replay;
 import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertNull;
 import static org.junit.Assert.assertTrue;
 
 @RunWith(PowerMockRunner.class)
@@ -205,6 +206,7 @@ public class ErrorReporterTest {
 assertEquals(configuration.dlqTopicReplicationFactor(), 7);
 }
 
+@Test
 public void testDlqHeaderConsumerRecord() {
 Map<String, String> props = new HashMap<>();
 props.put(SinkConnectorConfig.DLQ_TOPIC_NAME_CONFIG, DLQ_TOPIC);
@@ -233,6 +235,34 @@ public class ErrorReporterTest {
 }
 
 @Test
+public void testDlqHeaderOnNullExceptionMessage() {
+Map<String, String> props = new HashMap<>();
+props.put(SinkConnectorConfig.DLQ_TOPIC_NAME_CONFIG, DLQ_TOPIC);
+props.put(SinkConnectorConfig.DLQ_CONTEXT_HEADERS_ENABLE_CONFIG, 
"true");
+DeadLetterQueueReporter deadLetterQueueReporter = new 
DeadLetterQueueReporter(producer, config(props), TASK_ID, errorHandlingMetrics);
+
+ProcessingContext context = new ProcessingContext();
+context.consumerRecord(new ConsumerRecord<>("source-topic", 7, 10, 
"source-key".getBytes(), "source-value".getBytes()));
+context.currentContext(Stage.TRANSFORMATION, Transformation.class);
+context.error(new NullPointerException());
+
+ProducerRecord<byte[], byte[]> producerRecord = new 
ProducerRecord<>(DLQ_TOPIC, "source-key".getBytes(), "source-value".getBytes());
+
+deadLetterQueueReporter.populateContextHeaders(producerRecord, 
context);
+assertEquals("source-topic", headerValue(producerRecord, 
ERROR_HEADER_ORIG_TOPIC));
+assertEquals("7", headerValue(producerRecord, 
ERROR_HEADER_ORIG_PARTITION));
+assertEquals("10", headerValue(producerRecord, 
ERROR_HEADER_ORIG_OFFSET));
+assertEquals(TASK_ID.connector(), headerValue(producerRecord, 
ERROR_HEADER_CONNECTOR_NAME));
+assertEquals(String.valueOf(TASK_ID.task()), 
headerValue(producerRecord, ERROR_HEADER_TASK_ID));
+assertEquals(Stage.TRANSFORMATION.name(), hea

[kafka] branch trunk updated: KAFKA-4932: Add support for UUID serialization and deserialization (KIP-206)

2018-09-09 Thread ewencp
This is an automated email from the ASF dual-hosted git repository.

ewencp pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/kafka.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 164ef94  KAFKA-4932: Add support for UUID serialization and 
deserialization (KIP-206)
164ef94 is described below

commit 164ef9462e9d18a36f7be856243a1cacf9a300bf
Author: Brandon Kirchner 
AuthorDate: Sun Sep 9 17:22:18 2018 -0700

KAFKA-4932: Add support for UUID serialization and deserialization (KIP-206)

[KAFKA-4932](https://issues.apache.org/jira/browse/KAFKA-4932)

Added a UUID Serializer / Deserializer.

Added the UUID type to the SerializationTest

Author: Brandon Kirchner 

Reviewers: Jeff Klukas , Ewen Cheslack-Postava 


Closes #4438 from brandonkirchner/KAFKA-4932.uuid-serde
---
 .../apache/kafka/common/serialization/Serdes.java  | 20 +++-
 .../common/serialization/UUIDDeserializer.java | 60 ++
 .../kafka/common/serialization/UUIDSerializer.java | 58 +
 .../common/serialization/SerializationTest.java|  2 +
 4 files changed, 139 insertions(+), 1 deletion(-)

diff --git 
a/clients/src/main/java/org/apache/kafka/common/serialization/Serdes.java 
b/clients/src/main/java/org/apache/kafka/common/serialization/Serdes.java
index 7825ad4..9f1c7ce 100644
--- a/clients/src/main/java/org/apache/kafka/common/serialization/Serdes.java
+++ b/clients/src/main/java/org/apache/kafka/common/serialization/Serdes.java
@@ -20,6 +20,7 @@ import org.apache.kafka.common.utils.Bytes;
 
 import java.nio.ByteBuffer;
 import java.util.Map;
+import java.util.UUID;
 
 /**
  * Factory for creating serializers / deserializers.
@@ -112,6 +113,12 @@ public class Serdes {
 }
 }
 
+static public final class UUIDSerde extends WrapperSerde<UUID> {
+public UUIDSerde() {
+super(new UUIDSerializer(), new UUIDDeserializer());
+}
+}
+
 @SuppressWarnings("unchecked")
 static public <T> Serde<T> serdeFrom(Class<T> type) {
 if (String.class.isAssignableFrom(type)) {
@@ -150,9 +157,13 @@ public class Serdes {
 return (Serde<T>) Bytes();
 }
 
+if (UUID.class.isAssignableFrom(type)) {
+return (Serde<T>) UUID();
+}
+
 // TODO: we can also serializes objects of type T using generic Java 
serialization by default
 throw new IllegalArgumentException("Unknown class for built-in 
serializer. Supported types are: " +
-"String, Short, Integer, Long, Float, Double, ByteArray, 
ByteBuffer, Bytes");
+"String, Short, Integer, Long, Float, Double, ByteArray, 
ByteBuffer, Bytes, UUID");
 }
 
 /**
@@ -229,6 +240,13 @@ public class Serdes {
 }
 
 /*
+ * A serde for nullable {@code UUID} type
+ */
+static public Serde<UUID> UUID() {
+return new UUIDSerde();
+}
+
+/*
  * A serde for nullable {@code byte[]} type.
  */
 static public Serde<byte[]> ByteArray() {
diff --git 
a/clients/src/main/java/org/apache/kafka/common/serialization/UUIDDeserializer.java
 
b/clients/src/main/java/org/apache/kafka/common/serialization/UUIDDeserializer.java
new file mode 100644
index 000..a6eb2ea
--- /dev/null
+++ 
b/clients/src/main/java/org/apache/kafka/common/serialization/UUIDDeserializer.java
@@ -0,0 +1,60 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.kafka.common.serialization;
+
+import org.apache.kafka.common.errors.SerializationException;
+
+import java.io.UnsupportedEncodingException;
+import java.util.Map;
+import java.util.UUID;
+
+/**
+ *  We are converting the byte array to String before deserializing to UUID. 
String encoding defaults to UTF8 and can be customized by setting
+ *  the property key.deserializer.encoding, value.deserializer.encoding or 
deserializer.encoding. The first two take precedence over the last.
+ */
+public class UUIDDeserializer implements Deserializer<UUID> {
+private String encoding = "UTF8";
+
+@Override
+public void configure(Map configs, boolean isKe

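Round-tripping with the new serde is straightforward; per the javadoc above, the serializer writes the UUID's string form (UTF-8 unless reconfigured) and the deserializer parses it back. A small usage sketch (the topic name is arbitrary):

import java.util.UUID;
import org.apache.kafka.common.serialization.Serde;
import org.apache.kafka.common.serialization.Serdes;

public class UuidSerdeDemo {
    public static void main(String[] args) {
        Serde<UUID> serde = Serdes.UUID();
        UUID id = UUID.randomUUID();
        byte[] bytes = serde.serializer().serialize("my-topic", id);      // UUID -> UTF-8 string bytes
        UUID roundTripped = serde.deserializer().deserialize("my-topic", bytes);
        System.out.println(id.equals(roundTripped)); // prints: true
    }
}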
[kafka] branch trunk updated: KAFKA-7353: Connect logs 'this' for anonymous inner classes

2018-09-05 Thread ewencp
This is an automated email from the ASF dual-hosted git repository.

ewencp pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/kafka.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 847780e  KAFKA-7353: Connect logs 'this' for anonymous inner classes
847780e is described below

commit 847780e5a5f376fa2ce8705f483bfd33b319b83d
Author: Kevin Lafferty 
AuthorDate: Wed Sep 5 20:15:25 2018 -0700

KAFKA-7353: Connect logs 'this' for anonymous inner classes

Replace 'this' references in anonymous inner class logs with the outer class's 'this'

Author: Kevin Lafferty 

Reviewers: Randall Hauch , Arjun Satish 
, Ewen Cheslack-Postava 

Closes #5583 from kevin-laff/connect_logging
---
 .../java/org/apache/kafka/connect/runtime/WorkerConnector.java |  2 +-
 .../java/org/apache/kafka/connect/runtime/WorkerSinkTask.java  |  2 +-
 .../org/apache/kafka/connect/runtime/WorkerSourceTask.java | 10 +-
 3 files changed, 7 insertions(+), 7 deletions(-)

diff --git 
a/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/WorkerConnector.java
 
b/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/WorkerConnector.java
index 611e196..55d4860 100644
--- 
a/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/WorkerConnector.java
+++ 
b/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/WorkerConnector.java
@@ -89,7 +89,7 @@ public class WorkerConnector {
 
 @Override
 public void raiseError(Exception e) {
-log.error("{} Connector raised an error", this, e);
+log.error("{} Connector raised an error", 
WorkerConnector.this, e);
 onFailure(e);
 ctx.raiseError(e);
 }
diff --git 
a/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/WorkerSinkTask.java
 
b/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/WorkerSinkTask.java
index 692331e..39e0c6d 100644
--- 
a/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/WorkerSinkTask.java
+++ 
b/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/WorkerSinkTask.java
@@ -649,7 +649,7 @@ class WorkerSinkTask extends WorkerTask {
 long pos = consumer.position(tp);
 lastCommittedOffsets.put(tp, new OffsetAndMetadata(pos));
 currentOffsets.put(tp, new OffsetAndMetadata(pos));
-log.debug("{} Assigned topic partition {} with offset {}", 
this, tp, pos);
+log.debug("{} Assigned topic partition {} with offset {}", 
WorkerSinkTask.this, tp, pos);
 }
 sinkTaskMetricsGroup.assignedOffsets(currentOffsets);
 
diff --git 
a/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/WorkerSourceTask.java
 
b/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/WorkerSourceTask.java
index 70d0cf9..623a210 100644
--- 
a/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/WorkerSourceTask.java
+++ 
b/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/WorkerSourceTask.java
@@ -326,11 +326,11 @@ class WorkerSourceTask extends WorkerTask {
 // timeouts, callbacks with exceptions 
should never be invoked in practice. If the
 // user overrode these settings, the best 
we can do is notify them of the failure via
 // logging.
-log.error("{} failed to send record to {}: 
{}", this, topic, e);
-log.debug("{} Failed record: {}", this, 
preTransformRecord);
+log.error("{} failed to send record to {}: 
{}", WorkerSourceTask.this, topic, e);
+log.debug("{} Failed record: {}", 
WorkerSourceTask.this, preTransformRecord);
 } else {
 log.trace("{} Wrote record successfully: 
topic {} partition {} offset {}",
-this,
+WorkerSourceTask.this,
 recordMetadata.topic(), 
recordMetadata.partition(),
 recordMetadata.offset());
 commitTaskRecord(preTransformRecord);
@@ -454,9 +454,9 @@ class WorkerSourceTask extends WorkerTask {
 @Override
 public void onCompletion(Throwable error, Void result) {
 if (error != null) {
-log.error("{} Failed to flush offsets to storage: ", this, 
error);
+log.error("{} Failed to flush offsets to storage: ", 
WorkerSourceTask.t

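The underlying language rule: inside an anonymous inner class, an unqualified `this` is the anonymous instance (whose default toString() is an unhelpful `Outer$1@1b6d3586`-style identity), not the enclosing worker whose toString() carries the connector/task id. Qualifying with `EnclosingClass.this` selects the outer instance. A stripped-down illustration (class name and label are hypothetical):

public class Outer {
    @Override
    public String toString() {
        return "Outer[worker-1]"; // the identity we actually want in logs
    }

    Runnable callback() {
        return new Runnable() {
            @Override
            public void run() {
                System.out.println("bad:  " + this);       // e.g. Outer$1@1b6d3586
                System.out.println("good: " + Outer.this); // Outer[worker-1]
            }
        };
    }

    public static void main(String[] args) {
        new Outer().callback().run();
    }
}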
[kafka] branch 1.0 updated: KAFKA-7353: Connect logs 'this' for anonymous inner classes

2018-09-05 Thread ewencp
This is an automated email from the ASF dual-hosted git repository.

ewencp pushed a commit to branch 1.0
in repository https://gitbox.apache.org/repos/asf/kafka.git


The following commit(s) were added to refs/heads/1.0 by this push:
 new 4866c33  KAFKA-7353: Connect logs 'this' for anonymous inner classes
4866c33 is described below

commit 4866c33ac309ba5cc098a02948253f55a83666a3
Author: Kevin Lafferty 
AuthorDate: Wed Sep 5 20:15:25 2018 -0700

KAFKA-7353: Connect logs 'this' for anonymous inner classes

Replace 'this' references in anonymous inner class logs with the outer class's 'this'

Author: Kevin Lafferty 

Reviewers: Randall Hauch , Arjun Satish 
, Ewen Cheslack-Postava 

Closes #5583 from kevin-laff/connect_logging

(cherry picked from commit 847780e5a5f376fa2ce8705f483bfd33b319b83d)
Signed-off-by: Ewen Cheslack-Postava 
---
 .../java/org/apache/kafka/connect/runtime/WorkerConnector.java |  2 +-
 .../java/org/apache/kafka/connect/runtime/WorkerSinkTask.java  |  2 +-
 .../org/apache/kafka/connect/runtime/WorkerSourceTask.java | 10 +-
 3 files changed, 7 insertions(+), 7 deletions(-)

diff --git 
a/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/WorkerConnector.java
 
b/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/WorkerConnector.java
index 9b934f3..86d313e 100644
--- 
a/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/WorkerConnector.java
+++ 
b/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/WorkerConnector.java
@@ -86,7 +86,7 @@ public class WorkerConnector {
 
 @Override
 public void raiseError(Exception e) {
-log.error("{} Connector raised an error", this, e);
+log.error("{} Connector raised an error", 
WorkerConnector.this, e);
 onFailure(e);
 ctx.raiseError(e);
 }
diff --git 
a/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/WorkerSinkTask.java
 
b/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/WorkerSinkTask.java
index de75cba..a06dca6 100644
--- 
a/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/WorkerSinkTask.java
+++ 
b/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/WorkerSinkTask.java
@@ -592,7 +592,7 @@ class WorkerSinkTask extends WorkerTask {
 long pos = consumer.position(tp);
 lastCommittedOffsets.put(tp, new OffsetAndMetadata(pos));
 currentOffsets.put(tp, new OffsetAndMetadata(pos));
-log.debug("{} Assigned topic partition {} with offset {}", 
this, tp, pos);
+log.debug("{} Assigned topic partition {} with offset {}", 
WorkerSinkTask.this, tp, pos);
 }
 sinkTaskMetricsGroup.assignedOffsets(currentOffsets);
 
diff --git 
a/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/WorkerSourceTask.java
 
b/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/WorkerSourceTask.java
index 59071b7..27dd388 100644
--- 
a/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/WorkerSourceTask.java
+++ 
b/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/WorkerSourceTask.java
@@ -274,11 +274,11 @@ class WorkerSourceTask extends WorkerTask {
 // timeouts, callbacks with exceptions 
should never be invoked in practice. If the
 // user overrode these settings, the best 
we can do is notify them of the failure via
 // logging.
-log.error("{} failed to send record to {}: 
{}", this, topic, e);
-log.debug("{} Failed record: {}", this, 
preTransformRecord);
+log.error("{} failed to send record to {}: 
{}", WorkerSourceTask.this, topic, e);
+log.debug("{} Failed record: {}", 
WorkerSourceTask.this, preTransformRecord);
 } else {
 log.trace("{} Wrote record successfully: 
topic {} partition {} offset {}",
-this,
+WorkerSourceTask.this,
 recordMetadata.topic(), 
recordMetadata.partition(),
 recordMetadata.offset());
 commitTaskRecord(preTransformRecord);
@@ -388,9 +388,9 @@ class WorkerSourceTask extends WorkerTask {
 @Override
 public void onCompletion(Throwable error, Void result) {
 if (error != null) {
-log.error("{} Failed to flush offsets to storage: ", this, 
error);
+  

[kafka] branch 1.1 updated: KAFKA-7353: Connect logs 'this' for anonymous inner classes

2018-09-05 Thread ewencp
This is an automated email from the ASF dual-hosted git repository.

ewencp pushed a commit to branch 1.1
in repository https://gitbox.apache.org/repos/asf/kafka.git


The following commit(s) were added to refs/heads/1.1 by this push:
 new 57d7f11  KAFKA-7353: Connect logs 'this' for anonymous inner classes
57d7f11 is described below

commit 57d7f11e38e41892191f6fe87faae8f23aa0362e
Author: Kevin Lafferty 
AuthorDate: Wed Sep 5 20:15:25 2018 -0700

KAFKA-7353: Connect logs 'this' for anonymous inner classes

Replace 'this' references in anonymous inner class logs with the outer class's 'this'

Author: Kevin Lafferty 

Reviewers: Randall Hauch , Arjun Satish 
, Ewen Cheslack-Postava 

Closes #5583 from kevin-laff/connect_logging

(cherry picked from commit 847780e5a5f376fa2ce8705f483bfd33b319b83d)
Signed-off-by: Ewen Cheslack-Postava 
---
 .../java/org/apache/kafka/connect/runtime/WorkerConnector.java |  2 +-
 .../java/org/apache/kafka/connect/runtime/WorkerSinkTask.java  |  2 +-
 .../org/apache/kafka/connect/runtime/WorkerSourceTask.java | 10 +-
 3 files changed, 7 insertions(+), 7 deletions(-)

diff --git 
a/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/WorkerConnector.java
 
b/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/WorkerConnector.java
index 611e196..55d4860 100644
--- 
a/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/WorkerConnector.java
+++ 
b/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/WorkerConnector.java
@@ -89,7 +89,7 @@ public class WorkerConnector {
 
 @Override
 public void raiseError(Exception e) {
-log.error("{} Connector raised an error", this, e);
+log.error("{} Connector raised an error", 
WorkerConnector.this, e);
 onFailure(e);
 ctx.raiseError(e);
 }
diff --git 
a/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/WorkerSinkTask.java
 
b/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/WorkerSinkTask.java
index 6edcfd4..478e952 100644
--- 
a/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/WorkerSinkTask.java
+++ 
b/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/WorkerSinkTask.java
@@ -621,7 +621,7 @@ class WorkerSinkTask extends WorkerTask {
 long pos = consumer.position(tp);
 lastCommittedOffsets.put(tp, new OffsetAndMetadata(pos));
 currentOffsets.put(tp, new OffsetAndMetadata(pos));
-log.debug("{} Assigned topic partition {} with offset {}", 
this, tp, pos);
+log.debug("{} Assigned topic partition {} with offset {}", 
WorkerSinkTask.this, tp, pos);
 }
 sinkTaskMetricsGroup.assignedOffsets(currentOffsets);
 
diff --git 
a/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/WorkerSourceTask.java
 
b/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/WorkerSourceTask.java
index ab92054..589e6b7 100644
--- 
a/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/WorkerSourceTask.java
+++ 
b/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/WorkerSourceTask.java
@@ -282,11 +282,11 @@ class WorkerSourceTask extends WorkerTask {
 // timeouts, callbacks with exceptions 
should never be invoked in practice. If the
 // user overrode these settings, the best 
we can do is notify them of the failure via
 // logging.
-log.error("{} failed to send record to {}: 
{}", this, topic, e);
-log.debug("{} Failed record: {}", this, 
preTransformRecord);
+log.error("{} failed to send record to {}: 
{}", WorkerSourceTask.this, topic, e);
+log.debug("{} Failed record: {}", 
WorkerSourceTask.this, preTransformRecord);
 } else {
 log.trace("{} Wrote record successfully: 
topic {} partition {} offset {}",
-this,
+WorkerSourceTask.this,
 recordMetadata.topic(), 
recordMetadata.partition(),
 recordMetadata.offset());
 commitTaskRecord(preTransformRecord);
@@ -410,9 +410,9 @@ class WorkerSourceTask extends WorkerTask {
 @Override
 public void onCompletion(Throwable error, Void result) {
 if (error != null) {
-log.error("{} Failed to flush offsets to storage: ", this, 
error);
+  

[kafka] branch 2.0 updated: KAFKA-7353: Connect logs 'this' for anonymous inner classes

2018-09-05 Thread ewencp
This is an automated email from the ASF dual-hosted git repository.

ewencp pushed a commit to branch 2.0
in repository https://gitbox.apache.org/repos/asf/kafka.git


The following commit(s) were added to refs/heads/2.0 by this push:
 new 7616ff4  KAFKA-7353: Connect logs 'this' for anonymous inner classes
7616ff4 is described below

commit 7616ff449a790811fc809b4c21f17f20561936e3
Author: Kevin Lafferty 
AuthorDate: Wed Sep 5 20:15:25 2018 -0700

KAFKA-7353: Connect logs 'this' for anonymous inner classes

Replace 'this' references in anonymous inner class logs with the outer class's 'this'

Author: Kevin Lafferty 

Reviewers: Randall Hauch , Arjun Satish 
, Ewen Cheslack-Postava 

Closes #5583 from kevin-laff/connect_logging

(cherry picked from commit 847780e5a5f376fa2ce8705f483bfd33b319b83d)
Signed-off-by: Ewen Cheslack-Postava 
---
 .../java/org/apache/kafka/connect/runtime/WorkerConnector.java |  2 +-
 .../java/org/apache/kafka/connect/runtime/WorkerSinkTask.java  |  2 +-
 .../org/apache/kafka/connect/runtime/WorkerSourceTask.java | 10 +-
 3 files changed, 7 insertions(+), 7 deletions(-)

diff --git 
a/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/WorkerConnector.java
 
b/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/WorkerConnector.java
index 611e196..55d4860 100644
--- 
a/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/WorkerConnector.java
+++ 
b/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/WorkerConnector.java
@@ -89,7 +89,7 @@ public class WorkerConnector {
 
 @Override
 public void raiseError(Exception e) {
-log.error("{} Connector raised an error", this, e);
+log.error("{} Connector raised an error", 
WorkerConnector.this, e);
 onFailure(e);
 ctx.raiseError(e);
 }
diff --git 
a/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/WorkerSinkTask.java
 
b/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/WorkerSinkTask.java
index 47f8529..828f4a3 100644
--- 
a/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/WorkerSinkTask.java
+++ 
b/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/WorkerSinkTask.java
@@ -648,7 +648,7 @@ class WorkerSinkTask extends WorkerTask {
 long pos = consumer.position(tp);
 lastCommittedOffsets.put(tp, new OffsetAndMetadata(pos));
 currentOffsets.put(tp, new OffsetAndMetadata(pos));
-log.debug("{} Assigned topic partition {} with offset {}", 
this, tp, pos);
+log.debug("{} Assigned topic partition {} with offset {}", 
WorkerSinkTask.this, tp, pos);
 }
 sinkTaskMetricsGroup.assignedOffsets(currentOffsets);
 
diff --git 
a/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/WorkerSourceTask.java
 
b/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/WorkerSourceTask.java
index 70d0cf9..623a210 100644
--- 
a/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/WorkerSourceTask.java
+++ 
b/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/WorkerSourceTask.java
@@ -326,11 +326,11 @@ class WorkerSourceTask extends WorkerTask {
 // timeouts, callbacks with exceptions 
should never be invoked in practice. If the
 // user overrode these settings, the best 
we can do is notify them of the failure via
 // logging.
-log.error("{} failed to send record to {}: 
{}", this, topic, e);
-log.debug("{} Failed record: {}", this, 
preTransformRecord);
+log.error("{} failed to send record to {}: 
{}", WorkerSourceTask.this, topic, e);
+log.debug("{} Failed record: {}", 
WorkerSourceTask.this, preTransformRecord);
 } else {
 log.trace("{} Wrote record successfully: 
topic {} partition {} offset {}",
-this,
+WorkerSourceTask.this,
 recordMetadata.topic(), 
recordMetadata.partition(),
 recordMetadata.offset());
 commitTaskRecord(preTransformRecord);
@@ -454,9 +454,9 @@ class WorkerSourceTask extends WorkerTask {
 @Override
 public void onCompletion(Throwable error, Void result) {
 if (error != null) {
-log.error("{} Failed to flush offsets to storage: ", this, 
error);
+  

[kafka] branch 2.0 updated: KAFKA-7242: Reverse xform configs before saving (KIP-297)

2018-08-28 Thread ewencp
This is an automated email from the ASF dual-hosted git repository.

ewencp pushed a commit to branch 2.0
in repository https://gitbox.apache.org/repos/asf/kafka.git


The following commit(s) were added to refs/heads/2.0 by this push:
 new 4f789be  KAFKA-7242: Reverse xform configs before saving (KIP-297)
4f789be is described below

commit 4f789bebf4e8a403b3871f04bc3ba3053440b0be
Author: Robert Yokota 
AuthorDate: Tue Aug 28 12:59:08 2018 -0700

KAFKA-7242: Reverse xform configs before saving (KIP-297)

During actions such as a reconfiguration, the task configs are obtained
via `Worker.connectorTaskConfigs` and then subsequently saved into an
instance of `ClusterConfigState`.  The values of the properties that are 
saved
are post-transformation (of variable references) when they should be
pre-transformation.  This is to avoid secrets appearing in plaintext in
the `connect-configs` topic, for example.

The fix is to change the 2 clients of `Worker.connectorTaskConfigs` to
perform a reverse transformation (values converted back into variable
references) before saving them into an instance of `ClusterConfigState`.
The 2 places where the save is performed are
`DistributedHerder.reconfigureConnector` and
`StandaloneHerder.updateConnectorTasks`.

The way that the reverse transformation works is by using the
"raw" connector config (with variable references still intact) from
`ClusterConfigState` to convert config values back into variable
references for those keys that are common between the task config
and the connector config.

There are 2 additional small changes that only affect `StandaloneHerder`:

1) `ClusterConfigState.allTasksConfigs` has been changed to perform a
transformation (resolution) on all variable references.  This is
necessary because the result of this method is compared directly to
`Worker.connectorTaskConfigs`, which also has variable references
resolved.

2) `StandaloneHerder.startConnector` has been changed to match
`DistributedHerder.startConnector`.  This is to fix an issue where
during `StandaloneHerder.restartConnector`, the post-transformed
connector config would be saved back into `ClusterConfigState`.

I also performed an analysis of all other code paths where configs are
saved back into `ClusterConfigState` and did not find any other
issues.

Author: Robert Yokota 

Reviewers: Ewen Cheslack-Postava 

Closes #5475 from rayokota/KAFKA-7242-reverse-xform-props

(cherry picked from commit fd5acd73e648a2aab4b970ddf04ad4cace6bad9a)
Signed-off-by: Ewen Cheslack-Postava 
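
A minimal sketch of the reverse-transformation idea described above, with hypothetical helper names (the real logic lives in the herders and differs in detail): for each key the task config shares with the raw connector config, put the variable reference back in place of the resolved value before saving.

    import java.util.HashMap;
    import java.util.Map;
    import java.util.regex.Pattern;

    public class ReverseTransformSketch {
        // Same variable-reference syntax as ConfigTransformer.DEFAULT_PATTERN.
        private static final Pattern VAR_REF = Pattern.compile("\\$\\{(.*?):((.*?):)?(.*?)\\}");

        // Hypothetical helper: restore variable references in a resolved task config,
        // using the raw (unresolved) connector config as the source of truth.
        static Map<String, String> reverseTransform(Map<String, String> rawConnectorConfig,
                                                    Map<String, String> resolvedTaskConfig) {
            Map<String, String> result = new HashMap<>(resolvedTaskConfig);
            for (Map.Entry<String, String> raw : rawConnectorConfig.entrySet()) {
                String rawValue = raw.getValue();
                // Only keys common to both configs, and only values that were references.
                if (result.containsKey(raw.getKey()) && rawValue != null
                        && VAR_REF.matcher(rawValue).matches()) {
                    result.put(raw.getKey(), rawValue); // the plaintext secret never hits storage
                }
            }
            return result;
        }
    }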
---
 .../kafka/common/config/ConfigTransformer.java |  2 +-
 .../kafka/connect/runtime/AbstractHerder.java  | 43 
 .../runtime/distributed/ClusterConfigState.java| 22 +-
 .../runtime/distributed/DistributedHerder.java |  5 +-
 .../runtime/standalone/StandaloneHerder.java   | 14 ++--
 .../kafka/connect/runtime/AbstractHerderTest.java  | 81 ++
 6 files changed, 154 insertions(+), 13 deletions(-)

diff --git 
a/clients/src/main/java/org/apache/kafka/common/config/ConfigTransformer.java 
b/clients/src/main/java/org/apache/kafka/common/config/ConfigTransformer.java
index f5a3737..6430ffd 100644
--- 
a/clients/src/main/java/org/apache/kafka/common/config/ConfigTransformer.java
+++ 
b/clients/src/main/java/org/apache/kafka/common/config/ConfigTransformer.java
@@ -53,7 +53,7 @@ import java.util.regex.Pattern;
  * {@link ConfigProvider#unsubscribe(String, Set, ConfigChangeCallback)} methods.
  */
 public class ConfigTransformer {
-private static final Pattern DEFAULT_PATTERN = Pattern.compile("\\$\\{(.*?):((.*?):)?(.*?)\\}");
+public static final Pattern DEFAULT_PATTERN = Pattern.compile("\\$\\{(.*?):((.*?):)?(.*?)\\}");
 private static final String EMPTY_PATH = "";
 
 private final Map<String, ConfigProvider> configProviders;
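
The DEFAULT_PATTERN made public above matches variable references of the form ${provider:path:key}, with the path group optional. A quick sketch of how the capture groups fall out, against a made-up reference string:

    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    public class PatternDemo {
        // Same pattern as ConfigTransformer.DEFAULT_PATTERN.
        private static final Pattern P = Pattern.compile("\\$\\{(.*?):((.*?):)?(.*?)\\}");

        public static void main(String[] args) {
            // Hypothetical provider name, path, and key.
            Matcher m = P.matcher("${vault:prod/secrets:dbPassword}");
            if (m.matches()) {
                System.out.println("provider = " + m.group(1)); // vault
                System.out.println("path     = " + m.group(3)); // prod/secrets (null if omitted)
                System.out.println("key      = " + m.group(4)); // dbPassword
            }
        }
    }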
diff --git 
a/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/AbstractHerder.java
 
b/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/AbstractHerder.java
index cadb4e0..82fdecc 100644
--- 
a/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/AbstractHerder.java
+++ 
b/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/AbstractHerder.java
@@ -20,9 +20,11 @@ import org.apache.kafka.common.config.Config;
 import org.apache.kafka.common.config.ConfigDef;
 import org.apache.kafka.common.config.ConfigDef.ConfigKey;
 import org.apache.kafka.common.config.ConfigDef.Type;
+import org.apache.kafka.common.config.ConfigTransformer;
 import org.apache.kafka.common.config.ConfigValue;
 import org.apache.kafka.connect.connector.Connector;
 import org.apache.kafka.connect.errors.NotFoundException;
+import org.apache.kafka.connect.runtime.distributed.ClusterConfigState;
 import org.apache

[kafka] branch trunk updated: KAFKA-7242: Reverse xform configs before saving (KIP-297)

2018-08-28 Thread ewencp
This is an automated email from the ASF dual-hosted git repository.

ewencp pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/kafka.git


The following commit(s) were added to refs/heads/trunk by this push:
 new fd5acd7  KAFKA-7242: Reverse xform configs before saving (KIP-297)
fd5acd7 is described below

commit fd5acd73e648a2aab4b970ddf04ad4cace6bad9a
Author: Robert Yokota 
AuthorDate: Tue Aug 28 12:59:08 2018 -0700

KAFKA-7242: Reverse xform configs before saving (KIP-297)

During actions such as a reconfiguration, the task configs are obtained
via `Worker.connectorTaskConfigs` and then subsequently saved into an
instance of `ClusterConfigState`.  The values of the properties that are 
saved
are post-transformation (of variable references) when they should be
pre-transformation.  This is to avoid secrets appearing in plaintext in
the `connect-configs` topic, for example.

The fix is to change the 2 clients of `Worker.connectorTaskConfigs` to
perform a reverse transformation (values converted back into variable
references) before saving them into an instance of `ClusterConfigState`.
The 2 places where the save is performed are
`DistributedHerder.reconfigureConnector` and
`StandaloneHerder.updateConnectorTasks`.

The way that the reverse transformation works is by using the
"raw" connector config (with variable references still intact) from
`ClusterConfigState` to convert config values back into variable
references for those keys that are common between the task config
and the connector config.

There are 2 additional small changes that only affect `StandaloneHerder`:

1) `ClusterConfigState.allTasksConfigs` has been changed to perform a
transformation (resolution) on all variable references.  This is
necessary because the result of this method is compared directly to
`Worker.connectorTaskConfigs`, which also has variable references
resolved.

2) `StandaloneHerder.startConnector` has been changed to match
`DistributedHerder.startConnector`.  This is to fix an issue where
during `StandaloneHerder.restartConnector`, the post-transformed
connector config would be saved back into `ClusterConfigState`.

I also performed an analysis of all other code paths where configs are
saved back into `ClusterConfigState` and did not find any other
issues.

Author: Robert Yokota 

Reviewers: Ewen Cheslack-Postava 

Closes #5475 from rayokota/KAFKA-7242-reverse-xform-props
---
 .../kafka/common/config/ConfigTransformer.java |  2 +-
 .../kafka/connect/runtime/AbstractHerder.java  | 43 
 .../runtime/distributed/ClusterConfigState.java| 22 +-
 .../runtime/distributed/DistributedHerder.java |  5 +-
 .../runtime/standalone/StandaloneHerder.java   | 14 ++--
 .../kafka/connect/runtime/AbstractHerderTest.java  | 81 ++
 6 files changed, 154 insertions(+), 13 deletions(-)

diff --git 
a/clients/src/main/java/org/apache/kafka/common/config/ConfigTransformer.java 
b/clients/src/main/java/org/apache/kafka/common/config/ConfigTransformer.java
index f5a3737..6430ffd 100644
--- 
a/clients/src/main/java/org/apache/kafka/common/config/ConfigTransformer.java
+++ 
b/clients/src/main/java/org/apache/kafka/common/config/ConfigTransformer.java
@@ -53,7 +53,7 @@ import java.util.regex.Pattern;
  * {@link ConfigProvider#unsubscribe(String, Set, ConfigChangeCallback)} methods.
  */
 public class ConfigTransformer {
-private static final Pattern DEFAULT_PATTERN = Pattern.compile("\\$\\{(.*?):((.*?):)?(.*?)\\}");
+public static final Pattern DEFAULT_PATTERN = Pattern.compile("\\$\\{(.*?):((.*?):)?(.*?)\\}");
 private static final String EMPTY_PATH = "";
 
 private final Map<String, ConfigProvider> configProviders;
diff --git 
a/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/AbstractHerder.java
 
b/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/AbstractHerder.java
index cadb4e0..82fdecc 100644
--- 
a/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/AbstractHerder.java
+++ 
b/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/AbstractHerder.java
@@ -20,9 +20,11 @@ import org.apache.kafka.common.config.Config;
 import org.apache.kafka.common.config.ConfigDef;
 import org.apache.kafka.common.config.ConfigDef.ConfigKey;
 import org.apache.kafka.common.config.ConfigDef.Type;
+import org.apache.kafka.common.config.ConfigTransformer;
 import org.apache.kafka.common.config.ConfigValue;
 import org.apache.kafka.connect.connector.Connector;
 import org.apache.kafka.connect.errors.NotFoundException;
+import org.apache.kafka.connect.runtime.distributed.ClusterConfigState;
 import org.apache.kafka.connect.runtime.isolation.Plugins;
 import org.apache.kafka.connect.r

[kafka] branch 2.0 updated: MINOR: System test for error handling and writes to DeadLetterQueue

2018-08-07 Thread ewencp
This is an automated email from the ASF dual-hosted git repository.

ewencp pushed a commit to branch 2.0
in repository https://gitbox.apache.org/repos/asf/kafka.git


The following commit(s) were added to refs/heads/2.0 by this push:
 new ca2589c  MINOR: System test for error handling and writes to 
DeadLetterQueue
ca2589c is described below

commit ca2589cc7ff60c48ab7492e4e8cd22e78bda9acb
Author: Arjun Satish 
AuthorDate: Tue Aug 7 14:44:01 2018 -0700

MINOR: System test for error handling and writes to DeadLetterQueue

Added a system test which creates a file sink with json converter and 
attempts to feed it bad records. The bad records should land in the DLQ if it 
is enabled, and the task should be killed or bad records skipped based on test 
parameters.

Signed-off-by: Arjun Satish 

Author: Arjun Satish 

Reviewers: Konstantine Karantasis , Ewen 
Cheslack-Postava 

Closes #5456 from wicknicks/error-handling-sys-test

(cherry picked from commit 28a1ae4183c707af363b69e2ec2b743bdf4f236c)
Signed-off-by: Ewen Cheslack-Postava 
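
For reference, the behavior this test toggles is driven by the KIP-298 error-handling connector properties. A sketch of a sink connector config with the dead letter queue enabled; connector and topic names are illustrative:

    import java.util.Map;

    public class DlqConfigSketch {
        public static void main(String[] args) {
            // Illustrative sink connector properties; names are made up.
            Map<String, String> props = Map.of(
                    "name", "file-sink-with-dlq",
                    "connector.class", "org.apache.kafka.connect.file.FileStreamSinkConnector",
                    "errors.tolerance", "all",                     // skip bad records instead of failing the task
                    "errors.deadletterqueue.topic.name", "my-dlq", // route bad records here
                    "errors.deadletterqueue.context.headers.enable", "true",
                    "errors.log.enable", "true");
            props.forEach((k, v) -> System.out.println(k + "=" + v));
        }
    }

With "errors.tolerance" set to "none" (the default), the first bad record kills the task, which is exactly the split the test parametrizes.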
---
 tests/kafkatest/services/connect.py|  5 ++
 tests/kafkatest/tests/connect/connect_test.py  | 72 ++
 .../connect/templates/connect-file-sink.properties | 18 +-
 .../templates/connect-file-source.properties   |  7 +++
 4 files changed, 101 insertions(+), 1 deletion(-)

diff --git a/tests/kafkatest/services/connect.py 
b/tests/kafkatest/services/connect.py
index d7ef204..19beddd 100644
--- a/tests/kafkatest/services/connect.py
+++ b/tests/kafkatest/services/connect.py
@@ -326,6 +326,11 @@ class ConnectDistributedService(ConnectServiceBase):
 raise RuntimeError("No process ids recorded")
 
 
+class ErrorTolerance(object):
+ALL = "all"
+NONE = "none"
+
+
 class ConnectRestError(RuntimeError):
 def __init__(self, status, msg, url):
 self.status = status
diff --git a/tests/kafkatest/tests/connect/connect_test.py 
b/tests/kafkatest/tests/connect/connect_test.py
index 9d34c48..c961681 100644
--- a/tests/kafkatest/tests/connect/connect_test.py
+++ b/tests/kafkatest/tests/connect/connect_test.py
@@ -18,10 +18,12 @@ from ducktape.mark.resource import cluster
 from ducktape.utils.util import wait_until
 from ducktape.mark import parametrize, matrix
 from ducktape.cluster.remoteaccount import RemoteCommandError
+from ducktape.errors import TimeoutError
 
 from kafkatest.services.zookeeper import ZookeeperService
 from kafkatest.services.kafka import KafkaService
 from kafkatest.services.connect import ConnectStandaloneService
+from kafkatest.services.connect import ErrorTolerance
 from kafkatest.services.console_consumer import ConsoleConsumer
 from kafkatest.services.security.security_config import SecurityConfig
 
@@ -134,3 +136,73 @@ class ConnectStandaloneFileTest(Test):
 return output_hash == hashlib.md5(value).hexdigest()
 except RemoteCommandError:
 return False
+
+@cluster(num_nodes=5)
+@parametrize(error_tolerance=ErrorTolerance.ALL)
+@parametrize(error_tolerance=ErrorTolerance.NONE)
+def test_skip_and_log_to_dlq(self, error_tolerance):
+self.kafka = KafkaService(self.test_context, self.num_brokers, self.zk, topics=self.topics)
+
+# set config props
+self.override_error_tolerance_props = error_tolerance
+self.enable_deadletterqueue = True
+
+successful_records = []
+faulty_records = []
+records = []
+for i in range(0, 1000):
+if i % 2 == 0:
+records.append('{"some_key":' + str(i) + '}')
+successful_records.append('{some_key=' + str(i) + '}')
+else:
+# badly formatted json records (missing a quote after the key)
+records.append('{"some_key:' + str(i) + '}')
+faulty_records.append('{"some_key:' + str(i) + '}')
+
+records = "\n".join(records) + "\n"
+successful_records = "\n".join(successful_records) + "\n"
+if error_tolerance == ErrorTolerance.ALL:
+faulty_records = ",".join(faulty_records)
+else:
+faulty_records = faulty_records[0]
+
+self.source = ConnectStandaloneService(self.test_context, self.kafka, [self.INPUT_FILE, self.OFFSETS_FILE])
+self.sink = ConnectStandaloneService(self.test_context, self.kafka, [self.OUTPUT_FILE, self.OFFSETS_FILE])

[kafka] branch trunk updated: MINOR: System test for error handling and writes to DeadLetterQueue

2018-08-07 Thread ewencp
This is an automated email from the ASF dual-hosted git repository.

ewencp pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/kafka.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 28a1ae4  MINOR: System test for error handling and writes to 
DeadLetterQueue
28a1ae4 is described below

commit 28a1ae4183c707af363b69e2ec2b743bdf4f236c
Author: Arjun Satish 
AuthorDate: Tue Aug 7 14:44:01 2018 -0700

MINOR: System test for error handling and writes to DeadLetterQueue

Added a system test which creates a file sink with json converter and 
attempts to feed it bad records. The bad records should land in the DLQ if it 
is enabled, and the task should be killed or bad records skipped based on test 
parameters.

Signed-off-by: Arjun Satish 

Author: Arjun Satish 

Reviewers: Konstantine Karantasis , Ewen 
Cheslack-Postava 

Closes #5456 from wicknicks/error-handling-sys-test
---
 tests/kafkatest/services/connect.py|  5 ++
 tests/kafkatest/tests/connect/connect_test.py  | 72 ++
 .../connect/templates/connect-file-sink.properties | 18 +-
 .../templates/connect-file-source.properties   |  7 +++
 4 files changed, 101 insertions(+), 1 deletion(-)

diff --git a/tests/kafkatest/services/connect.py 
b/tests/kafkatest/services/connect.py
index d7ef204..19beddd 100644
--- a/tests/kafkatest/services/connect.py
+++ b/tests/kafkatest/services/connect.py
@@ -326,6 +326,11 @@ class ConnectDistributedService(ConnectServiceBase):
 raise RuntimeError("No process ids recorded")
 
 
+class ErrorTolerance(object):
+ALL = "all"
+NONE = "none"
+
+
 class ConnectRestError(RuntimeError):
 def __init__(self, status, msg, url):
 self.status = status
diff --git a/tests/kafkatest/tests/connect/connect_test.py 
b/tests/kafkatest/tests/connect/connect_test.py
index 9d34c48..c961681 100644
--- a/tests/kafkatest/tests/connect/connect_test.py
+++ b/tests/kafkatest/tests/connect/connect_test.py
@@ -18,10 +18,12 @@ from ducktape.mark.resource import cluster
 from ducktape.utils.util import wait_until
 from ducktape.mark import parametrize, matrix
 from ducktape.cluster.remoteaccount import RemoteCommandError
+from ducktape.errors import TimeoutError
 
 from kafkatest.services.zookeeper import ZookeeperService
 from kafkatest.services.kafka import KafkaService
 from kafkatest.services.connect import ConnectStandaloneService
+from kafkatest.services.connect import ErrorTolerance
 from kafkatest.services.console_consumer import ConsoleConsumer
 from kafkatest.services.security.security_config import SecurityConfig
 
@@ -134,3 +136,73 @@ class ConnectStandaloneFileTest(Test):
 return output_hash == hashlib.md5(value).hexdigest()
 except RemoteCommandError:
 return False
+
+@cluster(num_nodes=5)
+@parametrize(error_tolerance=ErrorTolerance.ALL)
+@parametrize(error_tolerance=ErrorTolerance.NONE)
+def test_skip_and_log_to_dlq(self, error_tolerance):
+self.kafka = KafkaService(self.test_context, self.num_brokers, self.zk, topics=self.topics)
+
+# set config props
+self.override_error_tolerance_props = error_tolerance
+self.enable_deadletterqueue = True
+
+successful_records = []
+faulty_records = []
+records = []
+for i in range(0, 1000):
+if i % 2 == 0:
+records.append('{"some_key":' + str(i) + '}')
+successful_records.append('{some_key=' + str(i) + '}')
+else:
+# badly formatted json records (missing a quote after the key)
+records.append('{"some_key:' + str(i) + '}')
+faulty_records.append('{"some_key:' + str(i) + '}')
+
+records = "\n".join(records) + "\n"
+successful_records = "\n".join(successful_records) + "\n"
+if error_tolerance == ErrorTolerance.ALL:
+faulty_records = ",".join(faulty_records)
+else:
+faulty_records = faulty_records[0]
+
+self.source = ConnectStandaloneService(self.test_context, self.kafka, [self.INPUT_FILE, self.OFFSETS_FILE])
+self.sink = ConnectStandaloneService(self.test_context, self.kafka, [self.OUTPUT_FILE, self.OFFSETS_FILE])
+
+self.zk.start()
+self.kafka.start()
+
+self.override_key_co

[kafka] branch trunk updated: MINOR: Add connector configs to site-docs

2018-08-07 Thread ewencp
This is an automated email from the ASF dual-hosted git repository.

ewencp pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/kafka.git


The following commit(s) were added to refs/heads/trunk by this push:
 new e876c92  MINOR: Add connector configs to site-docs
e876c92 is described below

commit e876c921b0cbf01e4d2aef26db56834da1d70c80
Author: Arjun Satish 
AuthorDate: Tue Aug 7 14:34:27 2018 -0700

MINOR: Add connector configs to site-docs

In AK's documentation, the config props for connectors are not listed 
(https://kafka.apache.org/documentation/#connectconfigs). This PR adds these 
sink and source connector configs to the html site-docs.

Signed-off-by: Arjun Satish 

Author: Arjun Satish 

Reviewers: Ewen Cheslack-Postava 

Closes #5469 from wicknicks/add-connector-configs-to-docs
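
Each JavaExec task added below simply runs a config class's main method and captures stdout into an HTML file. A plausible shape for such a main, assuming the class exposes its ConfigDef (the real classes differ in detail):

    // Hypothetical sketch of the pattern the JavaExec tasks rely on:
    // a config class prints its ConfigDef as an HTML table on stdout.
    import org.apache.kafka.common.config.ConfigDef;

    public class ExampleConnectorConfig {
        static final ConfigDef CONFIG = new ConfigDef()
                .define("topics", ConfigDef.Type.LIST, ConfigDef.Importance.HIGH,
                        "List of topics to consume, separated by commas");

        public static void main(String[] args) {
            System.out.println(CONFIG.toHtmlTable()); // Gradle redirects this to the .html file
        }
    }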
---
 build.gradle  | 15 +++
 .../java/org/apache/kafka/common/config/ConfigDef.java| 13 +
 .../org/apache/kafka/connect/runtime/ConnectorConfig.java |  5 +
 .../apache/kafka/connect/runtime/SinkConnectorConfig.java |  4 
 .../kafka/connect/runtime/SourceConnectorConfig.java  |  4 
 docs/configuration.html   |  8 
 6 files changed, 49 insertions(+)

diff --git a/build.gradle b/build.gradle
index 3e8558d..83b169b 100644
--- a/build.gradle
+++ b/build.gradle
@@ -709,6 +709,7 @@ project(':core') {
'genAdminClientConfigDocs', 
'genProducerConfigDocs', 'genConsumerConfigDocs',
'genKafkaConfigDocs', 'genTopicConfigDocs',
':connect:runtime:genConnectConfigDocs', 
':connect:runtime:genConnectTransformationDocs',
+   ':connect:runtime:genSinkConnectorConfigDocs', ':connect:runtime:genSourceConnectorConfigDocs',
':streams:genStreamsConfigDocs', 
'genConsumerMetricsDocs', 'genProducerMetricsDocs',
':connect:runtime:genConnectMetricsDocs'], 
type: Tar) {
 classifier = 'site-docs'
@@ -1407,6 +1408,20 @@ project(':connect:runtime') {
 standardOutput = new File(generatedDocsDir, "connect_config.html").newOutputStream()
   }
 
+  task genSinkConnectorConfigDocs(type: JavaExec) {
+classpath = sourceSets.main.runtimeClasspath
+main = 'org.apache.kafka.connect.runtime.SinkConnectorConfig'
+if( !generatedDocsDir.exists() ) { generatedDocsDir.mkdirs() }
+standardOutput = new File(generatedDocsDir, "sink_connector_config.html").newOutputStream()
+  }
+
+  task genSourceConnectorConfigDocs(type: JavaExec) {
+classpath = sourceSets.main.runtimeClasspath
+main = 'org.apache.kafka.connect.runtime.SourceConnectorConfig'
+if( !generatedDocsDir.exists() ) { generatedDocsDir.mkdirs() }
+standardOutput = new File(generatedDocsDir, "source_connector_config.html").newOutputStream()
+  }
+
   task genConnectTransformationDocs(type: JavaExec) {
 classpath = sourceSets.main.runtimeClasspath
 main = 'org.apache.kafka.connect.tools.TransformationDoc'
diff --git 
a/clients/src/main/java/org/apache/kafka/common/config/ConfigDef.java 
b/clients/src/main/java/org/apache/kafka/common/config/ConfigDef.java
index 08ac125..af2f6c4 100644
--- a/clients/src/main/java/org/apache/kafka/common/config/ConfigDef.java
+++ b/clients/src/main/java/org/apache/kafka/common/config/ConfigDef.java
@@ -973,6 +973,19 @@ public class ConfigDef {
 validator.ensureValid(name, value);
 }
 }
+
+@Override
+public String toString() {
+if (validators == null) return "";
+StringBuilder desc = new StringBuilder();
+for (Validator v: validators) {
+if (desc.length() > 0) {
+desc.append(',').append(' ');
+}
+desc.append(String.valueOf(v));
+}
+return desc.toString();
+}
 }
 
 public static class NonEmptyString implements Validator {
diff --git 
a/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/ConnectorConfig.java
 
b/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/ConnectorConfig.java
index 9d1a50d..10096a5 100644
--- 
a/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/ConnectorConfig.java
+++ 
b/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/ConnectorConfig.java
@@ -169,6 +169,11 @@ public class ConnectorConfig extends AbstractConfig {
 throw new ConfigException(name, value, "Duplicate alias provided.");
 }
 }
+
+@Override
+public String toString() {
+return "unique transformation aliases";
+  

[kafka] branch 2.0 updated: KAFKA-7225: Pretransform validated props

2018-08-07 Thread ewencp
This is an automated email from the ASF dual-hosted git repository.

ewencp pushed a commit to branch 2.0
in repository https://gitbox.apache.org/repos/asf/kafka.git


The following commit(s) were added to refs/heads/2.0 by this push:
 new beaac98  KAFKA-7225: Pretransform validated props
beaac98 is described below

commit beaac98b292c1943b16f12f2005fbe09dc6d376e
Author: Robert Yokota 
AuthorDate: Tue Aug 7 13:18:16 2018 -0700

KAFKA-7225: Pretransform validated props

If a property requires validation, it should be pretransformed if it is a 
variable reference, in order to have a value that will properly pass the 
validation.

Author: Robert Yokota 

Reviewers: Randall Hauch , Ewen Cheslack-Postava 


Closes #5445 from rayokota/KAFKA-7225-pretransform-validated-props

(cherry picked from commit 36a8fec0ab2d05a8386ecd386bbbd294c3dc9126)
Signed-off-by: Ewen Cheslack-Postava 
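
To make the fix concrete: a config value stored as a variable reference would otherwise reach its Validator unresolved and fail validation. A minimal sketch, with a made-up property and file path:

    import org.apache.kafka.common.config.ConfigDef;
    import org.apache.kafka.common.config.ConfigException;

    public class PretransformSketch {
        public static void main(String[] args) {
            // A validator that only accepts specific values.
            ConfigDef.Validator validator = ConfigDef.ValidString.in("PLAINTEXT", "SSL");

            // As stored, the property is a variable reference (KIP-297 syntax); path is hypothetical.
            String raw = "${file:/run/secrets/worker.properties:security.protocol}";
            // After pretransformation it carries the real value, e.g.:
            String resolved = "SSL";

            try {
                validator.ensureValid("security.protocol", raw); // would throw before the fix
            } catch (ConfigException e) {
                System.out.println("raw reference rejected: " + e.getMessage());
            }
            validator.ensureValid("security.protocol", resolved); // passes after pretransformation
        }
    }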
---
 .../apache/kafka/connect/runtime/AbstractHerder.java |  3 +++
 .../connect/runtime/WorkerConfigTransformer.java |  8 +++-
 .../kafka/connect/runtime/AbstractHerderTest.java|  4 
 .../runtime/distributed/DistributedHerderTest.java   | 20 
 .../runtime/standalone/StandaloneHerderTest.java | 17 +
 tests/kafkatest/tests/connect/connect_test.py| 11 +--
 .../templates/connect-file-external.properties   | 16 
 .../connect/templates/connect-standalone.properties  |  3 +++
 8 files changed, 79 insertions(+), 3 deletions(-)

diff --git 
a/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/AbstractHerder.java
 
b/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/AbstractHerder.java
index b5e0ec2..cadb4e0 100644
--- 
a/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/AbstractHerder.java
+++ 
b/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/AbstractHerder.java
@@ -246,6 +246,9 @@ public abstract class AbstractHerder implements Herder, TaskStatus.Listener, Con
 
 @Override
 public ConfigInfos validateConnectorConfig(Map<String, String> connectorProps) {
+if (worker.configTransformer() != null) {
+connectorProps = worker.configTransformer().transform(connectorProps);
+}
 String connType = connectorProps.get(ConnectorConfig.CONNECTOR_CLASS_CONFIG);
 if (connType == null)
 throw new BadRequestException("Connector config " + connectorProps + " contains no connector type");
diff --git 
a/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/WorkerConfigTransformer.java
 
b/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/WorkerConfigTransformer.java
index 7efb481..1b715c7 100644
--- 
a/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/WorkerConfigTransformer.java
+++ 
b/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/WorkerConfigTransformer.java
@@ -38,10 +38,16 @@ public class WorkerConfigTransformer {
 this.configTransformer = new ConfigTransformer(configProviders);
 }
 
+public Map<String, String> transform(Map<String, String> configs) {
+return transform(null, configs);
+}
+
 public Map<String, String> transform(String connectorName, Map<String, String> configs) {
 if (configs == null) return null;
 ConfigTransformerResult result = configTransformer.transform(configs);
-scheduleReload(connectorName, result.ttls());
+if (connectorName != null) {
+scheduleReload(connectorName, result.ttls());
+}
 return result.data();
 }
 
diff --git 
a/connect/runtime/src/test/java/org/apache/kafka/connect/runtime/AbstractHerderTest.java
 
b/connect/runtime/src/test/java/org/apache/kafka/connect/runtime/AbstractHerderTest.java
index 5728465..db3cf27 100644
--- 
a/connect/runtime/src/test/java/org/apache/kafka/connect/runtime/AbstractHerderTest.java
+++ 
b/connect/runtime/src/test/java/org/apache/kafka/connect/runtime/AbstractHerderTest.java
@@ -67,6 +67,7 @@ public class AbstractHerderTest {
 private final String connector = "connector";
 
 @MockStrict private Worker worker;
+@MockStrict private WorkerConfigTransformer transformer;
 @MockStrict private Plugins plugins;
 @MockStrict private ClassLoader classLoader;
 @MockStrict private ConfigBackingStore configStore;
@@ -261,6 +262,9 @@ public class AbstractHerderTest {
 EasyMock.expect(herder.generation()).andStubReturn(generation);
 
 // Call to validateConnectorConfig
+EasyMock.expect(worker.configTransformer()).andReturn(transformer).times(2);
+final Capture<Map<String, String>> configCapture = EasyMock.newCapture();
+EasyMock.expect(transformer.transform(EasyMock.capture(configCapture))).andAnswer(configCapture::getValue);
 EasyMock.expect(worker.getPlugins()).andStubReturn(plugins);
 final Connector connector;
 try {
diff --git 
a/connect/runtime/src/test/java/org/apache/

[kafka] branch trunk updated: KAFKA-7225: Pretransform validated props

2018-08-07 Thread ewencp
This is an automated email from the ASF dual-hosted git repository.

ewencp pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/kafka.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 36a8fec  KAFKA-7225: Pretransform validated props
36a8fec is described below

commit 36a8fec0ab2d05a8386ecd386bbbd294c3dc9126
Author: Robert Yokota 
AuthorDate: Tue Aug 7 13:18:16 2018 -0700

KAFKA-7225: Pretransform validated props

If a property requires validation, it should be pretransformed if it is a 
variable reference, in order to have a value that will properly pass the 
validation.

Author: Robert Yokota 

Reviewers: Randall Hauch , Ewen Cheslack-Postava 


Closes #5445 from rayokota/KAFKA-7225-pretransform-validated-props
---
 .../apache/kafka/connect/runtime/AbstractHerder.java |  3 +++
 .../connect/runtime/WorkerConfigTransformer.java |  8 +++-
 .../kafka/connect/runtime/AbstractHerderTest.java|  4 
 .../runtime/distributed/DistributedHerderTest.java   | 20 
 .../runtime/standalone/StandaloneHerderTest.java | 17 +
 tests/kafkatest/tests/connect/connect_test.py| 11 +--
 .../templates/connect-file-external.properties   | 16 
 .../connect/templates/connect-standalone.properties  |  3 +++
 8 files changed, 79 insertions(+), 3 deletions(-)

diff --git 
a/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/AbstractHerder.java
 
b/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/AbstractHerder.java
index b5e0ec2..cadb4e0 100644
--- 
a/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/AbstractHerder.java
+++ 
b/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/AbstractHerder.java
@@ -246,6 +246,9 @@ public abstract class AbstractHerder implements Herder, TaskStatus.Listener, Con
 
 @Override
 public ConfigInfos validateConnectorConfig(Map<String, String> connectorProps) {
+if (worker.configTransformer() != null) {
+connectorProps = worker.configTransformer().transform(connectorProps);
+}
 String connType = connectorProps.get(ConnectorConfig.CONNECTOR_CLASS_CONFIG);
 if (connType == null)
 throw new BadRequestException("Connector config " + connectorProps + " contains no connector type");
diff --git 
a/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/WorkerConfigTransformer.java
 
b/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/WorkerConfigTransformer.java
index 7efb481..1b715c7 100644
--- 
a/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/WorkerConfigTransformer.java
+++ 
b/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/WorkerConfigTransformer.java
@@ -38,10 +38,16 @@ public class WorkerConfigTransformer {
 this.configTransformer = new ConfigTransformer(configProviders);
 }
 
+public Map<String, String> transform(Map<String, String> configs) {
+return transform(null, configs);
+}
+
 public Map<String, String> transform(String connectorName, Map<String, String> configs) {
 if (configs == null) return null;
 ConfigTransformerResult result = configTransformer.transform(configs);
-scheduleReload(connectorName, result.ttls());
+if (connectorName != null) {
+scheduleReload(connectorName, result.ttls());
+}
 return result.data();
 }
 
diff --git 
a/connect/runtime/src/test/java/org/apache/kafka/connect/runtime/AbstractHerderTest.java
 
b/connect/runtime/src/test/java/org/apache/kafka/connect/runtime/AbstractHerderTest.java
index 5728465..db3cf27 100644
--- 
a/connect/runtime/src/test/java/org/apache/kafka/connect/runtime/AbstractHerderTest.java
+++ 
b/connect/runtime/src/test/java/org/apache/kafka/connect/runtime/AbstractHerderTest.java
@@ -67,6 +67,7 @@ public class AbstractHerderTest {
 private final String connector = "connector";
 
 @MockStrict private Worker worker;
+@MockStrict private WorkerConfigTransformer transformer;
 @MockStrict private Plugins plugins;
 @MockStrict private ClassLoader classLoader;
 @MockStrict private ConfigBackingStore configStore;
@@ -261,6 +262,9 @@ public class AbstractHerderTest {
 EasyMock.expect(herder.generation()).andStubReturn(generation);
 
 // Call to validateConnectorConfig
+EasyMock.expect(worker.configTransformer()).andReturn(transformer).times(2);
+final Capture<Map<String, String>> configCapture = EasyMock.newCapture();
+EasyMock.expect(transformer.transform(EasyMock.capture(configCapture))).andAnswer(configCapture::getValue);
 EasyMock.expect(worker.getPlugins()).andStubReturn(plugins);
 final Connector connector;
 try {
diff --git 
a/connect/runtime/src/test/java/org/apache/kafka/connect/runtime/distributed/DistributedHerderTest.java
 
b/connect/runtime/src/test/java/org/apache/

[kafka] branch 2.0 updated: KAFKA-7228: Set errorHandlingMetrics for dead letter queue

2018-08-02 Thread ewencp
This is an automated email from the ASF dual-hosted git repository.

ewencp pushed a commit to branch 2.0
in repository https://gitbox.apache.org/repos/asf/kafka.git


The following commit(s) were added to refs/heads/2.0 by this push:
 new 2af214a  KAFKA-7228: Set errorHandlingMetrics for dead letter queue
2af214a is described below

commit 2af214a51cef984056f9cb403b8541945082238d
Author: Arjun Satish 
AuthorDate: Thu Aug 2 14:36:02 2018 -0700

KAFKA-7228: Set errorHandlingMetrics for dead letter queue

DLQ reporter does not get an `errorHandlingMetrics` object when created by the worker. This results in an NPE.

Signed-off-by: Arjun Satish 

Author: Arjun Satish 

Reviewers: Konstantine Karantasis , Ewen 
Cheslack-Postava 

Closes #5440 from wicknicks/KAFKA-7228

(cherry picked from commit 70d882861e1bf3eb503c84a31834e8b628de2df9)
Signed-off-by: Ewen Cheslack-Postava 
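
The shape of the fix, sketched with simplified names (the real classes take more arguments): the metrics object moves from an optional setter to a required constructor argument, so a reporter can never exist without it.

    import java.util.Objects;

    // Simplified sketch of the constructor-injection fix.
    class ReporterSketch {
        private final ErrorMetrics metrics; // final: must be set at construction

        ReporterSketch(ErrorMetrics metrics) {
            // Fails fast at creation time instead of with an NPE on first use.
            this.metrics = Objects.requireNonNull(metrics, "errorHandlingMetrics must not be null");
        }

        void report() {
            metrics.recordError(); // safe: metrics can no longer be null here
        }

        interface ErrorMetrics {
            void recordError();
        }

        public static void main(String[] args) {
            new ReporterSketch(() -> System.out.println("error recorded")).report();
        }
    }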
---
 .../org/apache/kafka/connect/runtime/Worker.java   |  8 ++
 .../runtime/errors/DeadLetterQueueReporter.java| 20 +++--
 .../connect/runtime/errors/ErrorReporter.java  |  8 --
 .../kafka/connect/runtime/errors/LogReporter.java  | 15 +-
 .../connect/runtime/ErrorHandlingTaskTest.java |  9 ++
 .../connect/runtime/errors/ErrorReporterTest.java  | 33 +++---
 6 files changed, 43 insertions(+), 50 deletions(-)

diff --git 
a/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/Worker.java 
b/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/Worker.java
index 7291d4f..1096584 100644
--- a/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/Worker.java
+++ b/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/Worker.java
@@ -523,14 +523,13 @@ public class Worker {
 private List<ErrorReporter> sinkTaskReporters(ConnectorTaskId id, SinkConnectorConfig connConfig,
   ErrorHandlingMetrics errorHandlingMetrics) {
 ArrayList<ErrorReporter> reporters = new ArrayList<>();
-LogReporter logReporter = new LogReporter(id, connConfig);
-logReporter.metrics(errorHandlingMetrics);
+LogReporter logReporter = new LogReporter(id, connConfig, errorHandlingMetrics);
 reporters.add(logReporter);
 
 // check if topic for dead letter queue exists
 String topic = connConfig.dlqTopicName();
 if (topic != null && !topic.isEmpty()) {
-DeadLetterQueueReporter reporter = DeadLetterQueueReporter.createAndSetup(config, id, connConfig, producerProps);
+DeadLetterQueueReporter reporter = DeadLetterQueueReporter.createAndSetup(config, id, connConfig, producerProps, errorHandlingMetrics);
 reporters.add(reporter);
 }
 
@@ -540,8 +539,7 @@ public class Worker {
 private List<ErrorReporter> sourceTaskReporters(ConnectorTaskId id, ConnectorConfig connConfig,
   ErrorHandlingMetrics errorHandlingMetrics) {
 List<ErrorReporter> reporters = new ArrayList<>();
-LogReporter logReporter = new LogReporter(id, connConfig);
-logReporter.metrics(errorHandlingMetrics);
+LogReporter logReporter = new LogReporter(id, connConfig, errorHandlingMetrics);
 reporters.add(logReporter);
 
 return reporters;
diff --git 
a/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/errors/DeadLetterQueueReporter.java
 
b/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/errors/DeadLetterQueueReporter.java
index d36ec22..c059dcf 100644
--- 
a/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/errors/DeadLetterQueueReporter.java
+++ 
b/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/errors/DeadLetterQueueReporter.java
@@ -36,6 +36,7 @@ import java.io.IOException;
 import java.io.PrintStream;
 import java.nio.charset.StandardCharsets;
 import java.util.Map;
+import java.util.Objects;
 import java.util.concurrent.ExecutionException;
 
 import static java.util.Collections.singleton;
@@ -66,13 +67,14 @@ public class DeadLetterQueueReporter implements ErrorReporter {
 
 private final SinkConnectorConfig connConfig;
 private final ConnectorTaskId connectorTaskId;
+private final ErrorHandlingMetrics errorHandlingMetrics;
 
 private KafkaProducer<byte[], byte[]> kafkaProducer;
-private ErrorHandlingMetrics errorHandlingMetrics;
 
 public static DeadLetterQueueReporter createAndSetup(WorkerConfig workerConfig,

[kafka] branch trunk updated: KAFKA-7228: Set errorHandlingMetrics for dead letter queue

2018-08-02 Thread ewencp
This is an automated email from the ASF dual-hosted git repository.

ewencp pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/kafka.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 70d8828  KAFKA-7228: Set errorHandlingMetrics for dead letter queue
70d8828 is described below

commit 70d882861e1bf3eb503c84a31834e8b628de2df9
Author: Arjun Satish 
AuthorDate: Thu Aug 2 14:36:02 2018 -0700

KAFKA-7228: Set errorHandlingMetrics for dead letter queue

DLQ reporter does not get an `errorHandlingMetrics` object when created by the worker. This results in an NPE.

Signed-off-by: Arjun Satish 

Author: Arjun Satish 

Reviewers: Konstantine Karantasis , Ewen 
Cheslack-Postava 

Closes #5440 from wicknicks/KAFKA-7228
---
 .../org/apache/kafka/connect/runtime/Worker.java   |  8 ++
 .../runtime/errors/DeadLetterQueueReporter.java| 20 +++--
 .../connect/runtime/errors/ErrorReporter.java  |  8 --
 .../kafka/connect/runtime/errors/LogReporter.java  | 15 +-
 .../connect/runtime/ErrorHandlingTaskTest.java |  9 ++
 .../connect/runtime/errors/ErrorReporterTest.java  | 33 +++---
 6 files changed, 43 insertions(+), 50 deletions(-)

diff --git 
a/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/Worker.java 
b/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/Worker.java
index e2fe6b6..df73a43 100644
--- a/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/Worker.java
+++ b/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/Worker.java
@@ -523,14 +523,13 @@ public class Worker {
 private List<ErrorReporter> sinkTaskReporters(ConnectorTaskId id, SinkConnectorConfig connConfig,
   ErrorHandlingMetrics errorHandlingMetrics) {
 ArrayList<ErrorReporter> reporters = new ArrayList<>();
-LogReporter logReporter = new LogReporter(id, connConfig);
-logReporter.metrics(errorHandlingMetrics);
+LogReporter logReporter = new LogReporter(id, connConfig, errorHandlingMetrics);
 reporters.add(logReporter);
 
 // check if topic for dead letter queue exists
 String topic = connConfig.dlqTopicName();
 if (topic != null && !topic.isEmpty()) {
-DeadLetterQueueReporter reporter = DeadLetterQueueReporter.createAndSetup(config, id, connConfig, producerProps);
+DeadLetterQueueReporter reporter = DeadLetterQueueReporter.createAndSetup(config, id, connConfig, producerProps, errorHandlingMetrics);
 reporters.add(reporter);
 }
 
@@ -540,8 +539,7 @@ public class Worker {
 private List<ErrorReporter> sourceTaskReporters(ConnectorTaskId id, ConnectorConfig connConfig,
   ErrorHandlingMetrics errorHandlingMetrics) {
 List<ErrorReporter> reporters = new ArrayList<>();
-LogReporter logReporter = new LogReporter(id, connConfig);
-logReporter.metrics(errorHandlingMetrics);
+LogReporter logReporter = new LogReporter(id, connConfig, errorHandlingMetrics);
 reporters.add(logReporter);
 
 return reporters;
diff --git 
a/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/errors/DeadLetterQueueReporter.java
 
b/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/errors/DeadLetterQueueReporter.java
index d36ec22..c059dcf 100644
--- 
a/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/errors/DeadLetterQueueReporter.java
+++ 
b/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/errors/DeadLetterQueueReporter.java
@@ -36,6 +36,7 @@ import java.io.IOException;
 import java.io.PrintStream;
 import java.nio.charset.StandardCharsets;
 import java.util.Map;
+import java.util.Objects;
 import java.util.concurrent.ExecutionException;
 
 import static java.util.Collections.singleton;
@@ -66,13 +67,14 @@ public class DeadLetterQueueReporter implements ErrorReporter {
 
 private final SinkConnectorConfig connConfig;
 private final ConnectorTaskId connectorTaskId;
+private final ErrorHandlingMetrics errorHandlingMetrics;
 
 private KafkaProducer<byte[], byte[]> kafkaProducer;
-private ErrorHandlingMetrics errorHandlingMetrics;
 
 public static DeadLetterQueueReporter createAndSetup(WorkerConfig workerConfig,
  ConnectorTaskId id,
- SinkConnectorConfig sinkConfi

[kafka] branch 2.0 updated: KAFKA-7068: Handle null config values during transform (KIP-297)

2018-06-17 Thread ewencp
This is an automated email from the ASF dual-hosted git repository.

ewencp pushed a commit to branch 2.0
in repository https://gitbox.apache.org/repos/asf/kafka.git


The following commit(s) were added to refs/heads/2.0 by this push:
 new 6041220  KAFKA-7068: Handle null config values during transform 
(KIP-297)
6041220 is described below

commit 6041220e900f5a9e427a749ceb204c3717a97668
Author: Robert Yokota 
AuthorDate: Sun Jun 17 12:12:11 2018 -0700

KAFKA-7068: Handle null config values during transform (KIP-297)

Fix NPE when processing null config values during transform.

Author: Robert Yokota 

Reviewers: Magesh Nandakumar , Ewen 
Cheslack-Postava 

Closes #5241 from rayokota/KIP-297-null-config-values

(cherry picked from commit d06da1b7f424ebad16ea5eca11b58b7c2ca3fa34)
Signed-off-by: Ewen Cheslack-Postava 
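
The behavior after the fix, sketched against the public ConfigTransformer API: a null value passes through untouched instead of throwing.

    import java.util.Collections;
    import org.apache.kafka.common.config.ConfigTransformer;
    import org.apache.kafka.common.config.ConfigTransformerResult;

    public class NullValueSketch {
        public static void main(String[] args) {
            // No providers registered; none are needed to exercise the null path.
            ConfigTransformer transformer = new ConfigTransformer(Collections.emptyMap());
            ConfigTransformerResult result =
                    transformer.transform(Collections.singletonMap("my.key", null));
            System.out.println(result.data().get("my.key")); // prints "null" instead of throwing an NPE
        }
    }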
---
 .../kafka/common/config/ConfigTransformer.java | 15 -
 .../kafka/common/config/ConfigTransformerTest.java | 26 +-
 2 files changed, 35 insertions(+), 6 deletions(-)

diff --git 
a/clients/src/main/java/org/apache/kafka/common/config/ConfigTransformer.java 
b/clients/src/main/java/org/apache/kafka/common/config/ConfigTransformer.java
index 7e21a32..f5a3737 100644
--- 
a/clients/src/main/java/org/apache/kafka/common/config/ConfigTransformer.java
+++ 
b/clients/src/main/java/org/apache/kafka/common/config/ConfigTransformer.java
@@ -80,11 +80,13 @@ public class ConfigTransformer {
 
 // Collect the variables from the given configs that need transformation
 for (Map.Entry<String, String> config : configs.entrySet()) {
-List<ConfigVariable> vars = getVars(config.getKey(), config.getValue(), DEFAULT_PATTERN);
-for (ConfigVariable var : vars) {
-Map<String, Set<String>> keysByPath = keysByProvider.computeIfAbsent(var.providerName, k -> new HashMap<>());
-Set<String> keys = keysByPath.computeIfAbsent(var.path, k -> new HashSet<>());
-keys.add(var.variable);
+if (config.getValue() != null) {
+List<ConfigVariable> vars = getVars(config.getKey(), config.getValue(), DEFAULT_PATTERN);
+for (ConfigVariable var : vars) {
+Map<String, Set<String>> keysByPath = keysByProvider.computeIfAbsent(var.providerName, k -> new HashMap<>());
+Set<String> keys = keysByPath.computeIfAbsent(var.path, k -> new HashSet<>());
+keys.add(var.variable);
+}
 }
 }
 
@@ -131,6 +133,9 @@ public class ConfigTransformer {
 private static String replace(Map<String, Map<String, Map<String, String>>> lookupsByProvider,
   String value,
   Pattern pattern) {
+if (value == null) {
+return null;
+}
 Matcher matcher = pattern.matcher(value);
 StringBuilder builder = new StringBuilder();
 int i = 0;
diff --git 
a/clients/src/test/java/org/apache/kafka/common/config/ConfigTransformerTest.java
 
b/clients/src/test/java/org/apache/kafka/common/config/ConfigTransformerTest.java
index d6bd3dc..e2b9f6b 100644
--- 
a/clients/src/test/java/org/apache/kafka/common/config/ConfigTransformerTest.java
+++ 
b/clients/src/test/java/org/apache/kafka/common/config/ConfigTransformerTest.java
@@ -26,6 +26,7 @@ import java.util.Map;
 import java.util.Set;
 
 import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertNull;
 import static org.junit.Assert.assertTrue;
 
 public class ConfigTransformerTest {
@@ -37,6 +38,7 @@ public class ConfigTransformerTest {
 public static final String TEST_PATH = "testPath";
 public static final String TEST_RESULT = "testResult";
 public static final String TEST_RESULT_WITH_TTL = "testResultWithTTL";
+public static final String TEST_RESULT_NO_PATH = "testResultNoPath";
 
 private ConfigTransformer configTransformer;
 
@@ -84,6 +86,24 @@ public class ConfigTransformerTest {
 assertEquals("${test:testPath:testResult}", data.get(MY_KEY));
 }
 
+@Test
+public void testReplaceVariableNoPath() throws Exception {
+ConfigTransformerResult result = configTransformer.transform(Collections.singletonMap(MY_KEY, "${test:testKey}"));
+Map<String, String> data = result.data();
+Map<String, Long> ttls = result.ttls();
+assertEquals(TEST_RESULT_NO_PATH, data.get(MY_KEY));
+assertTrue(ttls.isEmpty());
+}
+
+@Test
+public void testNullConfigValue() throws Exception {
+ConfigTransformerResult result = configTransformer.transform(Collections.singletonMap(MY_KEY, null));
+Map<String, String> data = result.data();
+Map<String, Long> ttls = result.ttls();
+assertNull(data.get(MY_KEY));
+assertTrue(ttls.isEmpty());
+}
+
 public static class TestConfigProvider implements ConfigProvider {
 
 public void configure(Map<String, ?> configs) {

[kafka] branch trunk updated: KAFKA-7068: Handle null config values during transform (KIP-297)

2018-06-17 Thread ewencp
This is an automated email from the ASF dual-hosted git repository.

ewencp pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/kafka.git


The following commit(s) were added to refs/heads/trunk by this push:
 new d06da1b  KAFKA-7068: Handle null config values during transform 
(KIP-297)
d06da1b is described below

commit d06da1b7f424ebad16ea5eca11b58b7c2ca3fa34
Author: Robert Yokota 
AuthorDate: Sun Jun 17 12:12:11 2018 -0700

KAFKA-7068: Handle null config values during transform (KIP-297)

Fix NPE when processing null config values during transform.

Author: Robert Yokota 

Reviewers: Magesh Nandakumar , Ewen 
Cheslack-Postava 

Closes #5241 from rayokota/KIP-297-null-config-values
---
 .../kafka/common/config/ConfigTransformer.java | 15 -
 .../kafka/common/config/ConfigTransformerTest.java | 26 +-
 2 files changed, 35 insertions(+), 6 deletions(-)

diff --git 
a/clients/src/main/java/org/apache/kafka/common/config/ConfigTransformer.java 
b/clients/src/main/java/org/apache/kafka/common/config/ConfigTransformer.java
index 7e21a32..f5a3737 100644
--- 
a/clients/src/main/java/org/apache/kafka/common/config/ConfigTransformer.java
+++ 
b/clients/src/main/java/org/apache/kafka/common/config/ConfigTransformer.java
@@ -80,11 +80,13 @@ public class ConfigTransformer {
 
 // Collect the variables from the given configs that need transformation
 for (Map.Entry<String, String> config : configs.entrySet()) {
-List<ConfigVariable> vars = getVars(config.getKey(), config.getValue(), DEFAULT_PATTERN);
-for (ConfigVariable var : vars) {
-Map<String, Set<String>> keysByPath = keysByProvider.computeIfAbsent(var.providerName, k -> new HashMap<>());
-Set<String> keys = keysByPath.computeIfAbsent(var.path, k -> new HashSet<>());
-keys.add(var.variable);
+if (config.getValue() != null) {
+List<ConfigVariable> vars = getVars(config.getKey(), config.getValue(), DEFAULT_PATTERN);
+for (ConfigVariable var : vars) {
+Map<String, Set<String>> keysByPath = keysByProvider.computeIfAbsent(var.providerName, k -> new HashMap<>());
+Set<String> keys = keysByPath.computeIfAbsent(var.path, k -> new HashSet<>());
+keys.add(var.variable);
+}
 }
 }
 
@@ -131,6 +133,9 @@ public class ConfigTransformer {
 private static String replace(Map<String, Map<String, Map<String, String>>> lookupsByProvider,
   String value,
   Pattern pattern) {
+if (value == null) {
+return null;
+}
 Matcher matcher = pattern.matcher(value);
 StringBuilder builder = new StringBuilder();
 int i = 0;
diff --git 
a/clients/src/test/java/org/apache/kafka/common/config/ConfigTransformerTest.java
 
b/clients/src/test/java/org/apache/kafka/common/config/ConfigTransformerTest.java
index d6bd3dc..e2b9f6b 100644
--- 
a/clients/src/test/java/org/apache/kafka/common/config/ConfigTransformerTest.java
+++ 
b/clients/src/test/java/org/apache/kafka/common/config/ConfigTransformerTest.java
@@ -26,6 +26,7 @@ import java.util.Map;
 import java.util.Set;
 
 import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertNull;
 import static org.junit.Assert.assertTrue;
 
 public class ConfigTransformerTest {
@@ -37,6 +38,7 @@ public class ConfigTransformerTest {
 public static final String TEST_PATH = "testPath";
 public static final String TEST_RESULT = "testResult";
 public static final String TEST_RESULT_WITH_TTL = "testResultWithTTL";
+public static final String TEST_RESULT_NO_PATH = "testResultNoPath";
 
 private ConfigTransformer configTransformer;
 
@@ -84,6 +86,24 @@ public class ConfigTransformerTest {
 assertEquals("${test:testPath:testResult}", data.get(MY_KEY));
 }
 
+@Test
+public void testReplaceVariableNoPath() throws Exception {
+ConfigTransformerResult result = configTransformer.transform(Collections.singletonMap(MY_KEY, "${test:testKey}"));
+Map<String, String> data = result.data();
+Map<String, Long> ttls = result.ttls();
+assertEquals(TEST_RESULT_NO_PATH, data.get(MY_KEY));
+assertTrue(ttls.isEmpty());
+}
+
+@Test
+public void testNullConfigValue() throws Exception {
+ConfigTransformerResult result = configTransformer.transform(Collections.singletonMap(MY_KEY, null));
+Map<String, String> data = result.data();
+Map<String, Long> ttls = result.ttls();
+assertNull(data.get(MY_KEY));
+assertTrue(ttls.isEmpty());
+}
+
 public static class TestConfigProvider implements ConfigProvider {
 
 public void configure(Map<String, ?> configs) {
@@ -96,7 +116,7 @@ public class ConfigTransformerTest {
 public ConfigData get(String path, Set<String> keys) {
 

[kafka] branch 0.9.0 updated (1e98b2a -> 34ae29a)

2018-06-17 Thread ewencp
This is an automated email from the ASF dual-hosted git repository.

ewencp pushed a change to branch 0.9.0
in repository https://gitbox.apache.org/repos/asf/kafka.git.


from 1e98b2a  MINOR: Added safe deserialization implementation
 add 34ae29a  KAFKA-7058: Comparing schema default values using 
Objects#deepEquals()

No new revisions were added by this update.

Summary of changes:
 .../main/java/org/apache/kafka/connect/data/ConnectSchema.java |  2 +-
 .../java/org/apache/kafka/connect/data/ConnectSchemaTest.java  | 10 ++
 2 files changed, 11 insertions(+), 1 deletion(-)



[kafka] branch 0.10.0 updated: KAFKA-7058: Comparing schema default values using Objects#deepEquals()

2018-06-17 Thread ewencp
This is an automated email from the ASF dual-hosted git repository.

ewencp pushed a commit to branch 0.10.0
in repository https://gitbox.apache.org/repos/asf/kafka.git


The following commit(s) were added to refs/heads/0.10.0 by this push:
 new f385e11  KAFKA-7058: Comparing schema default values using 
Objects#deepEquals()
f385e11 is described below

commit f385e113a2455cc9baa11783dcf045fce5cec567
Author: Gunnar Morling 
AuthorDate: Sat Jun 16 23:04:31 2018 -0700

KAFKA-7058: Comparing schema default values using Objects#deepEquals()

https://issues.apache.org/jira/browse/KAFKA-7058
* Summary of testing strategy: Added new unit test

Author: Gunnar Morling 

Reviewers: Randall Hauch , Ewen Cheslack-Postava 


Closes #5225 from gunnarmorling/KAFKA-7058

(cherry picked from commit be846d833caade74f1d0536ecf9d540855cde758)
Signed-off-by: Ewen Cheslack-Postava 
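
The one-word change relies on the difference between Objects.equals and Objects.deepEquals for array-typed defaults; a two-line demonstration:

    import java.util.Objects;

    public class DeepEqualsDemo {
        public static void main(String[] args) {
            String[] a = {"a", "b"};
            String[] b = {"a", "b"};
            // Arrays don't override equals(), so two equal-looking defaults differ:
            System.out.println(Objects.equals(a, b));     // false (reference comparison)
            System.out.println(Objects.deepEquals(a, b)); // true  (element-wise comparison)
        }
    }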
---
 .../main/java/org/apache/kafka/connect/data/ConnectSchema.java |  2 +-
 .../java/org/apache/kafka/connect/data/ConnectSchemaTest.java  | 10 ++
 2 files changed, 11 insertions(+), 1 deletion(-)

diff --git 
a/connect/api/src/main/java/org/apache/kafka/connect/data/ConnectSchema.java 
b/connect/api/src/main/java/org/apache/kafka/connect/data/ConnectSchema.java
index d1fd9cd..08a0ea3 100644
--- a/connect/api/src/main/java/org/apache/kafka/connect/data/ConnectSchema.java
+++ b/connect/api/src/main/java/org/apache/kafka/connect/data/ConnectSchema.java
@@ -276,7 +276,7 @@ public class ConnectSchema implements Schema {
 ConnectSchema schema = (ConnectSchema) o;
 return Objects.equals(optional, schema.optional) &&
 Objects.equals(type, schema.type) &&
-Objects.equals(defaultValue, schema.defaultValue) &&
+Objects.deepEquals(defaultValue, schema.defaultValue) &&
 Objects.equals(fields, schema.fields) &&
 Objects.equals(keySchema, schema.keySchema) &&
 Objects.equals(valueSchema, schema.valueSchema) &&
diff --git 
a/connect/api/src/test/java/org/apache/kafka/connect/data/ConnectSchemaTest.java
 
b/connect/api/src/test/java/org/apache/kafka/connect/data/ConnectSchemaTest.java
index f5c6e2f..e7dfa4c 100644
--- 
a/connect/api/src/test/java/org/apache/kafka/connect/data/ConnectSchemaTest.java
+++ 
b/connect/api/src/test/java/org/apache/kafka/connect/data/ConnectSchemaTest.java
@@ -270,6 +270,16 @@ public class ConnectSchemaTest {
 }
 
 @Test
+public void testArrayDefaultValueEquality() {
+ConnectSchema s1 = new ConnectSchema(Schema.Type.ARRAY, false, new String[] {"a", "b"}, null, null, null, null, null, null, SchemaBuilder.int8().build());
+ConnectSchema s2 = new ConnectSchema(Schema.Type.ARRAY, false, new String[] {"a", "b"}, null, null, null, null, null, null, SchemaBuilder.int8().build());
+ConnectSchema differentValueSchema = new ConnectSchema(Schema.Type.ARRAY, false, new String[] {"b", "c"}, null, null, null, null, null, null, SchemaBuilder.int8().build());
+
+assertEquals(s1, s2);
+assertNotEquals(s1, differentValueSchema);
+}
+
+@Test
 public void testMapEquality() {
 // Same as testArrayEquality, but for both key and value schemas
 ConnectSchema s1 = new ConnectSchema(Schema.Type.MAP, false, null, null, null, null, null, null, SchemaBuilder.int8().build(), SchemaBuilder.int16().build());



[kafka] branch 0.10.1 updated: KAFKA-7058: Comparing schema default values using Objects#deepEquals()

2018-06-17 Thread ewencp
This is an automated email from the ASF dual-hosted git repository.

ewencp pushed a commit to branch 0.10.1
in repository https://gitbox.apache.org/repos/asf/kafka.git


The following commit(s) were added to refs/heads/0.10.1 by this push:
 new 0f3affc  KAFKA-7058: Comparing schema default values using 
Objects#deepEquals()
0f3affc is described below

commit 0f3affc0f40751dc8fd064b36b6e859728f63e37
Author: Gunnar Morling 
AuthorDate: Sat Jun 16 23:04:31 2018 -0700

KAFKA-7058: Comparing schema default values using Objects#deepEquals()

https://issues.apache.org/jira/browse/KAFKA-7058
* Summary of testing strategy: Added new unit test

Author: Gunnar Morling 

Reviewers: Randall Hauch , Ewen Cheslack-Postava 


Closes #5225 from gunnarmorling/KAFKA-7058

(cherry picked from commit be846d833caade74f1d0536ecf9d540855cde758)
Signed-off-by: Ewen Cheslack-Postava 
---
 .../main/java/org/apache/kafka/connect/data/ConnectSchema.java |  2 +-
 .../java/org/apache/kafka/connect/data/ConnectSchemaTest.java  | 10 ++
 2 files changed, 11 insertions(+), 1 deletion(-)

diff --git 
a/connect/api/src/main/java/org/apache/kafka/connect/data/ConnectSchema.java 
b/connect/api/src/main/java/org/apache/kafka/connect/data/ConnectSchema.java
index d1fd9cd..08a0ea3 100644
--- a/connect/api/src/main/java/org/apache/kafka/connect/data/ConnectSchema.java
+++ b/connect/api/src/main/java/org/apache/kafka/connect/data/ConnectSchema.java
@@ -276,7 +276,7 @@ public class ConnectSchema implements Schema {
 ConnectSchema schema = (ConnectSchema) o;
 return Objects.equals(optional, schema.optional) &&
 Objects.equals(type, schema.type) &&
-Objects.equals(defaultValue, schema.defaultValue) &&
+Objects.deepEquals(defaultValue, schema.defaultValue) &&
 Objects.equals(fields, schema.fields) &&
 Objects.equals(keySchema, schema.keySchema) &&
 Objects.equals(valueSchema, schema.valueSchema) &&
diff --git 
a/connect/api/src/test/java/org/apache/kafka/connect/data/ConnectSchemaTest.java
 
b/connect/api/src/test/java/org/apache/kafka/connect/data/ConnectSchemaTest.java
index f5c6e2f..e7dfa4c 100644
--- 
a/connect/api/src/test/java/org/apache/kafka/connect/data/ConnectSchemaTest.java
+++ 
b/connect/api/src/test/java/org/apache/kafka/connect/data/ConnectSchemaTest.java
@@ -270,6 +270,16 @@ public class ConnectSchemaTest {
 }
 
 @Test
+public void testArrayDefaultValueEquality() {
+ConnectSchema s1 = new ConnectSchema(Schema.Type.ARRAY, false, new String[] {"a", "b"}, null, null, null, null, null, null, SchemaBuilder.int8().build());
+ConnectSchema s2 = new ConnectSchema(Schema.Type.ARRAY, false, new String[] {"a", "b"}, null, null, null, null, null, null, SchemaBuilder.int8().build());
+ConnectSchema differentValueSchema = new ConnectSchema(Schema.Type.ARRAY, false, new String[] {"b", "c"}, null, null, null, null, null, null, SchemaBuilder.int8().build());
+
+assertEquals(s1, s2);
+assertNotEquals(s1, differentValueSchema);
+}
+
+@Test
 public void testMapEquality() {
 // Same as testArrayEquality, but for both key and value schemas
 ConnectSchema s1 = new ConnectSchema(Schema.Type.MAP, false, null, null, null, null, null, null, SchemaBuilder.int8().build(), SchemaBuilder.int16().build());



[kafka] branch 0.10.2 updated: KAFKA-7058: Comparing schema default values using Objects#deepEquals()

2018-06-17 Thread ewencp
This is an automated email from the ASF dual-hosted git repository.

ewencp pushed a commit to branch 0.10.2
in repository https://gitbox.apache.org/repos/asf/kafka.git


The following commit(s) were added to refs/heads/0.10.2 by this push:
 new 08c4650  KAFKA-7058: Comparing schema default values using 
Objects#deepEquals()
08c4650 is described below

commit 08c465028d057ac23cdfe6d57641fe40240359dd
Author: Gunnar Morling 
AuthorDate: Sat Jun 16 23:04:31 2018 -0700

KAFKA-7058: Comparing schema default values using Objects#deepEquals()

https://issues.apache.org/jira/browse/KAFKA-7058
* Summary of testing strategy: Added new unit test

Author: Gunnar Morling 

Reviewers: Randall Hauch, Ewen Cheslack-Postava


Closes #5225 from gunnarmorling/KAFKA-7058

(cherry picked from commit be846d833caade74f1d0536ecf9d540855cde758)
Signed-off-by: Ewen Cheslack-Postava 
---
 .../main/java/org/apache/kafka/connect/data/ConnectSchema.java |  2 +-
 .../java/org/apache/kafka/connect/data/ConnectSchemaTest.java  | 10 ++++++++++
 2 files changed, 11 insertions(+), 1 deletion(-)

diff --git a/connect/api/src/main/java/org/apache/kafka/connect/data/ConnectSchema.java b/connect/api/src/main/java/org/apache/kafka/connect/data/ConnectSchema.java
index d1fd9cd..08a0ea3 100644
--- a/connect/api/src/main/java/org/apache/kafka/connect/data/ConnectSchema.java
+++ b/connect/api/src/main/java/org/apache/kafka/connect/data/ConnectSchema.java
@@ -276,7 +276,7 @@ public class ConnectSchema implements Schema {
         ConnectSchema schema = (ConnectSchema) o;
         return Objects.equals(optional, schema.optional) &&
                 Objects.equals(type, schema.type) &&
-                Objects.equals(defaultValue, schema.defaultValue) &&
+                Objects.deepEquals(defaultValue, schema.defaultValue) &&
                 Objects.equals(fields, schema.fields) &&
                 Objects.equals(keySchema, schema.keySchema) &&
                 Objects.equals(valueSchema, schema.valueSchema) &&
diff --git a/connect/api/src/test/java/org/apache/kafka/connect/data/ConnectSchemaTest.java b/connect/api/src/test/java/org/apache/kafka/connect/data/ConnectSchemaTest.java
index f5c6e2f..e7dfa4c 100644
--- a/connect/api/src/test/java/org/apache/kafka/connect/data/ConnectSchemaTest.java
+++ b/connect/api/src/test/java/org/apache/kafka/connect/data/ConnectSchemaTest.java
@@ -270,6 +270,16 @@ public class ConnectSchemaTest {
     }
 
     @Test
+    public void testArrayDefaultValueEquality() {
+        ConnectSchema s1 = new ConnectSchema(Schema.Type.ARRAY, false, new String[] {"a", "b"}, null, null, null, null, null, null, SchemaBuilder.int8().build());
+        ConnectSchema s2 = new ConnectSchema(Schema.Type.ARRAY, false, new String[] {"a", "b"}, null, null, null, null, null, null, SchemaBuilder.int8().build());
+        ConnectSchema differentValueSchema = new ConnectSchema(Schema.Type.ARRAY, false, new String[] {"b", "c"}, null, null, null, null, null, null, SchemaBuilder.int8().build());
+
+        assertEquals(s1, s2);
+        assertNotEquals(s1, differentValueSchema);
+    }
+
+    @Test
     public void testMapEquality() {
         // Same as testArrayEquality, but for both key and value schemas
         ConnectSchema s1 = new ConnectSchema(Schema.Type.MAP, false, null, null, null, null, null, null, SchemaBuilder.int8().build(), SchemaBuilder.int16().build());



[kafka] branch 0.11.0 updated: KAFKA-7058: Comparing schema default values using Objects#deepEquals()

2018-06-17 Thread ewencp
This is an automated email from the ASF dual-hosted git repository.

ewencp pushed a commit to branch 0.11.0
in repository https://gitbox.apache.org/repos/asf/kafka.git


The following commit(s) were added to refs/heads/0.11.0 by this push:
 new 95fbb2e  KAFKA-7058: Comparing schema default values using 
Objects#deepEquals()
95fbb2e is described below

commit 95fbb2e03f4fe79737c71632e0ef2dfdcfb85a69
Author: Gunnar Morling 
AuthorDate: Sat Jun 16 23:04:31 2018 -0700

KAFKA-7058: Comparing schema default values using Objects#deepEquals()

https://issues.apache.org/jira/browse/KAFKA-7058
* Summary of testing strategy: Added new unit test

Author: Gunnar Morling 

Reviewers: Randall Hauch, Ewen Cheslack-Postava


Closes #5225 from gunnarmorling/KAFKA-7058

(cherry picked from commit be846d833caade74f1d0536ecf9d540855cde758)
Signed-off-by: Ewen Cheslack-Postava 
---
 .../main/java/org/apache/kafka/connect/data/ConnectSchema.java |  2 +-
 .../java/org/apache/kafka/connect/data/ConnectSchemaTest.java  | 10 ++++++++++
 2 files changed, 11 insertions(+), 1 deletion(-)

diff --git a/connect/api/src/main/java/org/apache/kafka/connect/data/ConnectSchema.java b/connect/api/src/main/java/org/apache/kafka/connect/data/ConnectSchema.java
index 651b2ee..30917fc 100644
--- a/connect/api/src/main/java/org/apache/kafka/connect/data/ConnectSchema.java
+++ b/connect/api/src/main/java/org/apache/kafka/connect/data/ConnectSchema.java
@@ -284,7 +284,7 @@ public class ConnectSchema implements Schema {
         ConnectSchema schema = (ConnectSchema) o;
         return Objects.equals(optional, schema.optional) &&
                 Objects.equals(type, schema.type) &&
-                Objects.equals(defaultValue, schema.defaultValue) &&
+                Objects.deepEquals(defaultValue, schema.defaultValue) &&
                 Objects.equals(fields, schema.fields) &&
                 Objects.equals(keySchema, schema.keySchema) &&
                 Objects.equals(valueSchema, schema.valueSchema) &&
diff --git a/connect/api/src/test/java/org/apache/kafka/connect/data/ConnectSchemaTest.java b/connect/api/src/test/java/org/apache/kafka/connect/data/ConnectSchemaTest.java
index 339ef23..048784e 100644
--- a/connect/api/src/test/java/org/apache/kafka/connect/data/ConnectSchemaTest.java
+++ b/connect/api/src/test/java/org/apache/kafka/connect/data/ConnectSchemaTest.java
@@ -269,6 +269,16 @@ public class ConnectSchemaTest {
     }
 
     @Test
+    public void testArrayDefaultValueEquality() {
+        ConnectSchema s1 = new ConnectSchema(Schema.Type.ARRAY, false, new String[] {"a", "b"}, null, null, null, null, null, null, SchemaBuilder.int8().build());
+        ConnectSchema s2 = new ConnectSchema(Schema.Type.ARRAY, false, new String[] {"a", "b"}, null, null, null, null, null, null, SchemaBuilder.int8().build());
+        ConnectSchema differentValueSchema = new ConnectSchema(Schema.Type.ARRAY, false, new String[] {"b", "c"}, null, null, null, null, null, null, SchemaBuilder.int8().build());
+
+        assertEquals(s1, s2);
+        assertNotEquals(s1, differentValueSchema);
+    }
+
+    @Test
     public void testMapEquality() {
         // Same as testArrayEquality, but for both key and value schemas
         ConnectSchema s1 = new ConnectSchema(Schema.Type.MAP, false, null, null, null, null, null, null, SchemaBuilder.int8().build(), SchemaBuilder.int16().build());



[kafka] branch 1.0 updated: KAFKA-7058: Comparing schema default values using Objects#deepEquals()

2018-06-17 Thread ewencp
This is an automated email from the ASF dual-hosted git repository.

ewencp pushed a commit to branch 1.0
in repository https://gitbox.apache.org/repos/asf/kafka.git


The following commit(s) were added to refs/heads/1.0 by this push:
 new 737bf43  KAFKA-7058: Comparing schema default values using 
Objects#deepEquals()
737bf43 is described below

commit 737bf43bb4e78d2d7a0ee53c27527b479972ebf8
Author: Gunnar Morling 
AuthorDate: Sat Jun 16 23:04:31 2018 -0700

KAFKA-7058: Comparing schema default values using Objects#deepEquals()

https://issues.apache.org/jira/browse/KAFKA-7058
* Summary of testing strategy: Added new unit test

Author: Gunnar Morling 

Reviewers: Randall Hauch, Ewen Cheslack-Postava


Closes #5225 from gunnarmorling/KAFKA-7058

(cherry picked from commit be846d833caade74f1d0536ecf9d540855cde758)
Signed-off-by: Ewen Cheslack-Postava 
---
 .../main/java/org/apache/kafka/connect/data/ConnectSchema.java |  2 +-
 .../java/org/apache/kafka/connect/data/ConnectSchemaTest.java  | 10 ++++++++++
 2 files changed, 11 insertions(+), 1 deletion(-)

diff --git a/connect/api/src/main/java/org/apache/kafka/connect/data/ConnectSchema.java b/connect/api/src/main/java/org/apache/kafka/connect/data/ConnectSchema.java
index 651b2ee..30917fc 100644
--- a/connect/api/src/main/java/org/apache/kafka/connect/data/ConnectSchema.java
+++ b/connect/api/src/main/java/org/apache/kafka/connect/data/ConnectSchema.java
@@ -284,7 +284,7 @@ public class ConnectSchema implements Schema {
         ConnectSchema schema = (ConnectSchema) o;
         return Objects.equals(optional, schema.optional) &&
                 Objects.equals(type, schema.type) &&
-                Objects.equals(defaultValue, schema.defaultValue) &&
+                Objects.deepEquals(defaultValue, schema.defaultValue) &&
                 Objects.equals(fields, schema.fields) &&
                 Objects.equals(keySchema, schema.keySchema) &&
                 Objects.equals(valueSchema, schema.valueSchema) &&
diff --git a/connect/api/src/test/java/org/apache/kafka/connect/data/ConnectSchemaTest.java b/connect/api/src/test/java/org/apache/kafka/connect/data/ConnectSchemaTest.java
index 339ef23..048784e 100644
--- a/connect/api/src/test/java/org/apache/kafka/connect/data/ConnectSchemaTest.java
+++ b/connect/api/src/test/java/org/apache/kafka/connect/data/ConnectSchemaTest.java
@@ -269,6 +269,16 @@ public class ConnectSchemaTest {
     }
 
     @Test
+    public void testArrayDefaultValueEquality() {
+        ConnectSchema s1 = new ConnectSchema(Schema.Type.ARRAY, false, new String[] {"a", "b"}, null, null, null, null, null, null, SchemaBuilder.int8().build());
+        ConnectSchema s2 = new ConnectSchema(Schema.Type.ARRAY, false, new String[] {"a", "b"}, null, null, null, null, null, null, SchemaBuilder.int8().build());
+        ConnectSchema differentValueSchema = new ConnectSchema(Schema.Type.ARRAY, false, new String[] {"b", "c"}, null, null, null, null, null, null, SchemaBuilder.int8().build());
+
+        assertEquals(s1, s2);
+        assertNotEquals(s1, differentValueSchema);
+    }
+
+    @Test
     public void testMapEquality() {
         // Same as testArrayEquality, but for both key and value schemas
         ConnectSchema s1 = new ConnectSchema(Schema.Type.MAP, false, null, null, null, null, null, null, SchemaBuilder.int8().build(), SchemaBuilder.int16().build());



[kafka] branch 1.1 updated: KAFKA-7058: Comparing schema default values using Objects#deepEquals()

2018-06-17 Thread ewencp
This is an automated email from the ASF dual-hosted git repository.

ewencp pushed a commit to branch 1.1
in repository https://gitbox.apache.org/repos/asf/kafka.git


The following commit(s) were added to refs/heads/1.1 by this push:
 new 1ed1dae  KAFKA-7058: Comparing schema default values using 
Objects#deepEquals()
1ed1dae is described below

commit 1ed1daefbc2d72e9b501b94d8c99e874b89f1137
Author: Gunnar Morling 
AuthorDate: Sat Jun 16 23:04:31 2018 -0700

KAFKA-7058: Comparing schema default values using Objects#deepEquals()

https://issues.apache.org/jira/browse/KAFKA-7058
* Summary of testing strategy: Added new unit test

Author: Gunnar Morling 

Reviewers: Randall Hauch, Ewen Cheslack-Postava


Closes #5225 from gunnarmorling/KAFKA-7058

(cherry picked from commit be846d833caade74f1d0536ecf9d540855cde758)
Signed-off-by: Ewen Cheslack-Postava 
---
 .../main/java/org/apache/kafka/connect/data/ConnectSchema.java |  2 +-
 .../java/org/apache/kafka/connect/data/ConnectSchemaTest.java  | 10 ++++++++++
 2 files changed, 11 insertions(+), 1 deletion(-)

diff --git a/connect/api/src/main/java/org/apache/kafka/connect/data/ConnectSchema.java b/connect/api/src/main/java/org/apache/kafka/connect/data/ConnectSchema.java
index 85357fe..a59b468 100644
--- a/connect/api/src/main/java/org/apache/kafka/connect/data/ConnectSchema.java
+++ b/connect/api/src/main/java/org/apache/kafka/connect/data/ConnectSchema.java
@@ -290,7 +290,7 @@ public class ConnectSchema implements Schema {
                 Objects.equals(name, schema.name) &&
                 Objects.equals(doc, schema.doc) &&
                 Objects.equals(type, schema.type) &&
-                Objects.equals(defaultValue, schema.defaultValue) &&
+                Objects.deepEquals(defaultValue, schema.defaultValue) &&
                 Objects.equals(fields, schema.fields) &&
                 Objects.equals(keySchema, schema.keySchema) &&
                 Objects.equals(valueSchema, schema.valueSchema) &&
diff --git a/connect/api/src/test/java/org/apache/kafka/connect/data/ConnectSchemaTest.java b/connect/api/src/test/java/org/apache/kafka/connect/data/ConnectSchemaTest.java
index 339ef23..048784e 100644
--- a/connect/api/src/test/java/org/apache/kafka/connect/data/ConnectSchemaTest.java
+++ b/connect/api/src/test/java/org/apache/kafka/connect/data/ConnectSchemaTest.java
@@ -269,6 +269,16 @@ public class ConnectSchemaTest {
     }
 
     @Test
+    public void testArrayDefaultValueEquality() {
+        ConnectSchema s1 = new ConnectSchema(Schema.Type.ARRAY, false, new String[] {"a", "b"}, null, null, null, null, null, null, SchemaBuilder.int8().build());
+        ConnectSchema s2 = new ConnectSchema(Schema.Type.ARRAY, false, new String[] {"a", "b"}, null, null, null, null, null, null, SchemaBuilder.int8().build());
+        ConnectSchema differentValueSchema = new ConnectSchema(Schema.Type.ARRAY, false, new String[] {"b", "c"}, null, null, null, null, null, null, SchemaBuilder.int8().build());
+
+        assertEquals(s1, s2);
+        assertNotEquals(s1, differentValueSchema);
+    }
+
+    @Test
     public void testMapEquality() {
         // Same as testArrayEquality, but for both key and value schemas
         ConnectSchema s1 = new ConnectSchema(Schema.Type.MAP, false, null, null, null, null, null, null, SchemaBuilder.int8().build(), SchemaBuilder.int16().build());



[kafka] branch 1.1 updated: KAFKA-7047: Added SimpleHeaderConverter to plugin isolation whitelist

2018-06-16 Thread ewencp
This is an automated email from the ASF dual-hosted git repository.

ewencp pushed a commit to branch 1.1
in repository https://gitbox.apache.org/repos/asf/kafka.git


The following commit(s) were added to refs/heads/1.1 by this push:
 new bca1eaf  KAFKA-7047: Added SimpleHeaderConverter to plugin isolation 
whitelist
bca1eaf is described below

commit bca1eaff8c8c2fb55952c66be70dce34f88461c0
Author: Randall Hauch 
AuthorDate: Sat Jun 16 22:23:20 2018 -0700

KAFKA-7047: Added SimpleHeaderConverter to plugin isolation whitelist

This was originally missed when headers were added as part of KIP-145 in AK 
1.1. An additional unit test was added in line with the StringConverter.

This should be backported to the AK `1.1` branch so that it is included in 
the next bugfix release. The `SimpleHeaderConverter` class that we're 
referencing was first added in the 1.1.0 release, so there's no reason to 
backport earlier.

Author: Randall Hauch 

Reviewers: Ewen Cheslack-Postava 

Closes #5204 from rhauch/kafka-7047
---
 .../java/org/apache/kafka/connect/runtime/isolation/PluginUtils.java   | 1 +
 .../org/apache/kafka/connect/runtime/isolation/PluginUtilsTest.java| 3 +++
 2 files changed, 4 insertions(+)

diff --git a/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/isolation/PluginUtils.java b/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/isolation/PluginUtils.java
index d490bde..182cdfc 100644
--- a/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/isolation/PluginUtils.java
+++ b/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/isolation/PluginUtils.java
@@ -128,6 +128,7 @@ public class PluginUtils {
             + "|file\\..*"
             + "|converters\\..*"
             + "|storage\\.StringConverter"
+            + "|storage\\.SimpleHeaderConverter"
             + ")$";
 
     private static final DirectoryStream.Filter<Path> PLUGIN_PATH_FILTER = new DirectoryStream.Filter<Path>() {
diff --git a/connect/runtime/src/test/java/org/apache/kafka/connect/runtime/isolation/PluginUtilsTest.java b/connect/runtime/src/test/java/org/apache/kafka/connect/runtime/isolation/PluginUtilsTest.java
index 4bc6e15..7233c6c 100644
--- a/connect/runtime/src/test/java/org/apache/kafka/connect/runtime/isolation/PluginUtilsTest.java
+++ b/connect/runtime/src/test/java/org/apache/kafka/connect/runtime/isolation/PluginUtilsTest.java
@@ -146,6 +146,9 @@ public class PluginUtilsTest {
         assertTrue(PluginUtils.shouldLoadInIsolation(
                 "org.apache.kafka.connect.storage.StringConverter")
         );
+        assertTrue(PluginUtils.shouldLoadInIsolation(
+                "org.apache.kafka.connect.storage.SimpleHeaderConverter")
+        );
     }
 
     @Test

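For context on the one-line regex change: shouldLoadInIsolation matches a class name against PluginUtils' whitelist of framework-namespace classes that must still be loaded by a plugin's own classloader, and this commit adds SimpleHeaderConverter as one more alternative. A rough standalone sketch of that check, trimmed to the two storage alternatives visible in the diff (class and constant names here are illustrative, and the real PluginUtils combines this whitelist with a broader blacklist of framework packages):

    import java.util.regex.Pattern;

    public class WhitelistSketch {
        // Abbreviated stand-in for the PluginUtils whitelist regex; only the
        // alternatives touched by this commit are reproduced.
        private static final Pattern WHITELIST = Pattern.compile(
                "^org\\.apache\\.kafka\\.connect\\.(?:"
                + "storage\\.StringConverter"
                + "|storage\\.SimpleHeaderConverter"
                + ")$");

        static boolean shouldLoadInIsolation(String name) {
            return WHITELIST.matcher(name).matches();
        }

        public static void main(String[] args) {
            // After this commit, SimpleHeaderConverter is isolated as well.
            System.out.println(shouldLoadInIsolation(
                    "org.apache.kafka.connect.storage.SimpleHeaderConverter")); // true
        }
    }

The new PluginUtilsTest assertion is exactly this call returning true for the SimpleHeaderConverter class name.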

