Build failed in Jenkins: kafka-trunk-jdk8 #3562

2019-04-16 Thread Apache Jenkins Server
See 


Changes:

[colin] KAFKA-7466: Add IncrementalAlterConfigs API (KIP-339) (#6247)

[mjsax] MINOR: fixed missing close of Iterator, used try-with-resource where

--
[...truncated 2.38 MB...]

org.apache.kafka.connect.json.JsonConverterTest > intToJson PASSED

org.apache.kafka.connect.json.JsonConverterTest > nullSchemaAndNullValueToJson 
STARTED

org.apache.kafka.connect.json.JsonConverterTest > nullSchemaAndNullValueToJson 
PASSED

org.apache.kafka.connect.json.JsonConverterTest > timestampToConnectOptional 
STARTED

org.apache.kafka.connect.json.JsonConverterTest > timestampToConnectOptional 
PASSED

org.apache.kafka.connect.json.JsonConverterTest > structToConnect STARTED

org.apache.kafka.connect.json.JsonConverterTest > structToConnect PASSED

org.apache.kafka.connect.json.JsonConverterTest > stringToJson STARTED

org.apache.kafka.connect.json.JsonConverterTest > stringToJson PASSED

org.apache.kafka.connect.json.JsonConverterTest > nullSchemaAndArrayToJson 
STARTED

org.apache.kafka.connect.json.JsonConverterTest > nullSchemaAndArrayToJson 
PASSED

org.apache.kafka.connect.json.JsonConverterTest > byteToConnect STARTED

org.apache.kafka.connect.json.JsonConverterTest > byteToConnect PASSED

org.apache.kafka.connect.json.JsonConverterTest > nullSchemaPrimitiveToConnect 
STARTED

org.apache.kafka.connect.json.JsonConverterTest > nullSchemaPrimitiveToConnect 
PASSED

org.apache.kafka.connect.json.JsonConverterTest > byteToJson STARTED

org.apache.kafka.connect.json.JsonConverterTest > byteToJson PASSED

org.apache.kafka.connect.json.JsonConverterTest > intToConnect STARTED

org.apache.kafka.connect.json.JsonConverterTest > intToConnect PASSED

org.apache.kafka.connect.json.JsonConverterTest > dateToJson STARTED

org.apache.kafka.connect.json.JsonConverterTest > dateToJson PASSED

org.apache.kafka.connect.json.JsonConverterTest > noSchemaToConnect STARTED

org.apache.kafka.connect.json.JsonConverterTest > noSchemaToConnect PASSED

org.apache.kafka.connect.json.JsonConverterTest > noSchemaToJson STARTED

org.apache.kafka.connect.json.JsonConverterTest > noSchemaToJson PASSED

org.apache.kafka.connect.json.JsonConverterTest > nullSchemaAndPrimitiveToJson 
STARTED

org.apache.kafka.connect.json.JsonConverterTest > nullSchemaAndPrimitiveToJson 
PASSED

org.apache.kafka.connect.json.JsonConverterTest > 
decimalToConnectOptionalWithDefaultValue STARTED

org.apache.kafka.connect.json.JsonConverterTest > 
decimalToConnectOptionalWithDefaultValue PASSED

org.apache.kafka.connect.json.JsonConverterTest > mapToJsonStringKeys STARTED

org.apache.kafka.connect.json.JsonConverterTest > mapToJsonStringKeys PASSED

org.apache.kafka.connect.json.JsonConverterTest > arrayToJson STARTED

org.apache.kafka.connect.json.JsonConverterTest > arrayToJson PASSED

org.apache.kafka.connect.json.JsonConverterTest > nullToConnect STARTED

org.apache.kafka.connect.json.JsonConverterTest > nullToConnect PASSED

org.apache.kafka.connect.json.JsonConverterTest > timeToJson STARTED

org.apache.kafka.connect.json.JsonConverterTest > timeToJson PASSED

org.apache.kafka.connect.json.JsonConverterTest > structToJson STARTED

org.apache.kafka.connect.json.JsonConverterTest > structToJson PASSED

org.apache.kafka.connect.json.JsonConverterTest > 
testConnectSchemaMetadataTranslation STARTED

org.apache.kafka.connect.json.JsonConverterTest > 
testConnectSchemaMetadataTranslation PASSED

org.apache.kafka.connect.json.JsonConverterTest > shortToJson STARTED

org.apache.kafka.connect.json.JsonConverterTest > shortToJson PASSED

org.apache.kafka.connect.json.JsonConverterTest > dateToConnect STARTED

org.apache.kafka.connect.json.JsonConverterTest > dateToConnect PASSED

org.apache.kafka.connect.json.JsonConverterTest > doubleToJson STARTED

org.apache.kafka.connect.json.JsonConverterTest > doubleToJson PASSED

org.apache.kafka.connect.json.JsonConverterTest > testStringHeaderToJson STARTED

org.apache.kafka.connect.json.JsonConverterTest > testStringHeaderToJson PASSED

org.apache.kafka.connect.json.JsonConverterTest > decimalToConnectOptional 
STARTED

org.apache.kafka.connect.json.JsonConverterTest > decimalToConnectOptional 
PASSED

org.apache.kafka.connect.json.JsonConverterTest > structSchemaIdentical STARTED

org.apache.kafka.connect.json.JsonConverterTest > structSchemaIdentical PASSED

org.apache.kafka.connect.json.JsonConverterTest > timeToConnectWithDefaultValue 
STARTED

org.apache.kafka.connect.json.JsonConverterTest > timeToConnectWithDefaultValue 
PASSED

org.apache.kafka.connect.json.JsonConverterTest > timeToConnect STARTED

org.apache.kafka.connect.json.JsonConverterTest > timeToConnect PASSED

org.apache.kafka.connect.json.JsonConverterTest > 
dateToConnectOptionalWithDefaultValue STARTED

org.apache.kafka.connect.json.JsonConverterTest > 
dateToConnectOptionalWithDefaultValue PASSED


[jira] [Created] (KAFKA-8246) refactor topic/group instance id validation condition

2019-04-16 Thread Boyang Chen (JIRA)
Boyang Chen created KAFKA-8246:
--

 Summary: refactor topic/group instance id validation condition
 Key: KAFKA-8246
 URL: https://issues.apache.org/jira/browse/KAFKA-8246
 Project: Kafka
  Issue Type: Improvement
Reporter: Boyang Chen
Assignee: Boyang Chen






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-7466) Implement KIP-339: Create a new IncrementalAlterConfigs API

2019-04-16 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7466?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-7466.
--
   Resolution: Fixed
 Reviewer: Colin P. McCabe
Fix Version/s: 2.3.0

> Implement KIP-339: Create a new IncrementalAlterConfigs API
> ---
>
> Key: KAFKA-7466
> URL: https://issues.apache.org/jira/browse/KAFKA-7466
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Colin P. McCabe
>Assignee: Manikumar
>Priority: Major
> Fix For: 2.3.0
>
>
> Implement KIP-339: Create a new IncrementalAlterConfigs API



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
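For reference, a minimal sketch of the API this ticket adds, per KIP-339 -- `admin` is assumed to be an existing Admin client, and the topic name and config value are hypothetical:

import java.util.Collections;
import org.apache.kafka.clients.admin.AlterConfigOp;
import org.apache.kafka.clients.admin.ConfigEntry;
import org.apache.kafka.common.config.ConfigResource;

// Address the configs of one (hypothetical) topic.
ConfigResource topic = new ConfigResource(ConfigResource.Type.TOPIC, "my-topic");

// SET updates just this entry; unlike the legacy alterConfigs call, all other
// configs on the resource are left untouched -- the "incremental" semantics.
AlterConfigOp setRetention = new AlterConfigOp(
    new ConfigEntry("retention.ms", "604800000"), AlterConfigOp.OpType.SET);

admin.incrementalAlterConfigs(
        Collections.singletonMap(topic, Collections.singletonList(setRetention)))
    .all().get();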


Build failed in Jenkins: kafka-2.0-jdk8 #249

2019-04-16 Thread Apache Jenkins Server
See 


Changes:

[matthias] MINOR: fixed missing close of Iterator, used try-with-resource where

[matthias] MINOR: code cleanup TopologyTestDriverTest (#6504)

--
[...truncated 898.23 KB...]
kafka.zk.ReassignPartitionsZNodeTest > testDecodeValidJson PASSED

kafka.zk.KafkaZkClientTest > testZNodeChangeHandlerForDataChange STARTED

kafka.zk.KafkaZkClientTest > testZNodeChangeHandlerForDataChange PASSED

kafka.zk.KafkaZkClientTest > testCreateAndGetTopicPartitionStatesRaw STARTED

kafka.zk.KafkaZkClientTest > testCreateAndGetTopicPartitionStatesRaw PASSED

kafka.zk.KafkaZkClientTest > testLogDirGetters STARTED

kafka.zk.KafkaZkClientTest > testLogDirGetters PASSED

kafka.zk.KafkaZkClientTest > testSetGetAndDeletePartitionReassignment STARTED

kafka.zk.KafkaZkClientTest > testSetGetAndDeletePartitionReassignment PASSED

kafka.zk.KafkaZkClientTest > testIsrChangeNotificationsDeletion STARTED

kafka.zk.KafkaZkClientTest > testIsrChangeNotificationsDeletion PASSED

kafka.zk.KafkaZkClientTest > testGetDataAndVersion STARTED

kafka.zk.KafkaZkClientTest > testGetDataAndVersion PASSED

kafka.zk.KafkaZkClientTest > testGetChildren STARTED

kafka.zk.KafkaZkClientTest > testGetChildren PASSED

kafka.zk.KafkaZkClientTest > testSetAndGetConsumerOffset STARTED

kafka.zk.KafkaZkClientTest > testSetAndGetConsumerOffset PASSED

kafka.zk.KafkaZkClientTest > testClusterIdMethods STARTED

kafka.zk.KafkaZkClientTest > testClusterIdMethods PASSED

kafka.zk.KafkaZkClientTest > testEntityConfigManagementMethods STARTED

kafka.zk.KafkaZkClientTest > testEntityConfigManagementMethods PASSED

kafka.zk.KafkaZkClientTest > testUpdateLeaderAndIsr STARTED

kafka.zk.KafkaZkClientTest > testUpdateLeaderAndIsr PASSED

kafka.zk.KafkaZkClientTest > testUpdateBrokerInfo STARTED

kafka.zk.KafkaZkClientTest > testUpdateBrokerInfo PASSED

kafka.zk.KafkaZkClientTest > testCreateRecursive STARTED

kafka.zk.KafkaZkClientTest > testCreateRecursive PASSED

kafka.zk.KafkaZkClientTest > testGetConsumerOffsetNoData STARTED

kafka.zk.KafkaZkClientTest > testGetConsumerOffsetNoData PASSED

kafka.zk.KafkaZkClientTest > testDeleteTopicPathMethods STARTED

kafka.zk.KafkaZkClientTest > testDeleteTopicPathMethods PASSED

kafka.zk.KafkaZkClientTest > testSetTopicPartitionStatesRaw STARTED

kafka.zk.KafkaZkClientTest > testSetTopicPartitionStatesRaw PASSED

kafka.zk.KafkaZkClientTest > testAclManagementMethods STARTED

kafka.zk.KafkaZkClientTest > testAclManagementMethods PASSED

kafka.zk.KafkaZkClientTest > testPreferredReplicaElectionMethods STARTED

kafka.zk.KafkaZkClientTest > testPreferredReplicaElectionMethods PASSED

kafka.zk.KafkaZkClientTest > testPropagateLogDir STARTED

kafka.zk.KafkaZkClientTest > testPropagateLogDir PASSED

kafka.zk.KafkaZkClientTest > testGetDataAndStat STARTED

kafka.zk.KafkaZkClientTest > testGetDataAndStat PASSED

kafka.zk.KafkaZkClientTest > testReassignPartitionsInProgress STARTED

kafka.zk.KafkaZkClientTest > testReassignPartitionsInProgress PASSED

kafka.zk.KafkaZkClientTest > testCreateTopLevelPaths STARTED

kafka.zk.KafkaZkClientTest > testCreateTopLevelPaths PASSED

kafka.zk.KafkaZkClientTest > testIsrChangeNotificationGetters STARTED

kafka.zk.KafkaZkClientTest > testIsrChangeNotificationGetters PASSED

kafka.zk.KafkaZkClientTest > testLogDirEventNotificationsDeletion STARTED

kafka.zk.KafkaZkClientTest > testLogDirEventNotificationsDeletion PASSED

kafka.zk.KafkaZkClientTest > testGetLogConfigs STARTED

kafka.zk.KafkaZkClientTest > testGetLogConfigs PASSED

kafka.zk.KafkaZkClientTest > testBrokerSequenceIdMethods STARTED

kafka.zk.KafkaZkClientTest > testBrokerSequenceIdMethods PASSED

kafka.zk.KafkaZkClientTest > testCreateSequentialPersistentPath STARTED

kafka.zk.KafkaZkClientTest > testCreateSequentialPersistentPath PASSED

kafka.zk.KafkaZkClientTest > testConditionalUpdatePath STARTED

kafka.zk.KafkaZkClientTest > testConditionalUpdatePath PASSED

kafka.zk.KafkaZkClientTest > testDeleteTopicZNode STARTED

kafka.zk.KafkaZkClientTest > testDeleteTopicZNode PASSED

kafka.zk.KafkaZkClientTest > testDeletePath STARTED

kafka.zk.KafkaZkClientTest > testDeletePath PASSED

kafka.zk.KafkaZkClientTest > testGetBrokerMethods STARTED

kafka.zk.KafkaZkClientTest > testGetBrokerMethods PASSED

kafka.zk.KafkaZkClientTest > testCreateTokenChangeNotification STARTED

kafka.zk.KafkaZkClientTest > testCreateTokenChangeNotification PASSED

kafka.zk.KafkaZkClientTest > testGetTopicsAndPartitions STARTED

kafka.zk.KafkaZkClientTest > testGetTopicsAndPartitions PASSED

kafka.zk.KafkaZkClientTest > testRegisterBrokerInfo STARTED

kafka.zk.KafkaZkClientTest > testRegisterBrokerInfo PASSED

kafka.zk.KafkaZkClientTest > testConsumerOffsetPath STARTED

kafka.zk.KafkaZkClientTest > testConsumerOffsetPath PASSED

kafka.zk.KafkaZkClientTest > testControllerManagementMethods STARTED

kafka.zk.KafkaZkClientTest > 

Re: [DISCUSS] KIP-446: Add changelog topic configuration to KTable suppress

2019-04-16 Thread John Roesler
Thanks all, for your comments!

I'm on board with the consensus, so it sounds like your KIP is fine as-is,
Maarten. Would you like to start the vote thread?

(just look at the mailing list for other examples)

Thanks,
-John

On Sat, Apr 13, 2019 at 5:26 PM Matthias J. Sax wrote:

> > Are you sure the users are aware that with `withLoggingDisabled()`, they
> > might lose data during failover?
>
> I hope so :D
>
> Of course, we can always improve the JavaDocs.
>
>
> -Matthias
>
>
> On 4/12/19 2:48 PM, Bill Bejeck wrote:
> > Thanks for the KIP Maarten.
> >
> > I also agree that keeping the `withLoggingDisabled()` and
> > `withLoggingEnabled(Map)` methods is the better option.
> >
> > When it comes to educating the users on the downside of disabling
> logging,
> > IMHO I think a comment in the JavaDoc should be sufficient.
> >
> > -Bill
> >
> > On Fri, Apr 12, 2019 at 3:59 PM Bruno Cadonna wrote:
> >
> >> Matthias,
> >>
> >> Are you sure the users are aware that with `withLoggingDisabled()`, they
> >> might lose data during failover?
> >>
> >> OK, we maybe do not necessarily need a WARN log. However, I would at
> least
> >> add a comment like in `StoreBuilder`, i.e.,
> >>
> >> /**
> >> * Disable the changelog for store built by this {@link StoreBuilder}.
> >> * This will turn off fault-tolerance for your store.
> >> * By default the changelog is enabled.
> >> * @return this
> >> */
> >> StoreBuilder<T> withLoggingDisabled();
> >>
> >> What do you think?
> >>
> >> Best,
> >> Bruno
> >>
> >> On Thu, Apr 11, 2019 at 12:04 AM Matthias J. Sax wrote:
> >>
> >>> I think that the current proposal to add `withLoggingDisabled()` and
> >>> `withLoggingEnabled(Map)` should be the best option.
> >>>
> >>> IMHO there is no reason to add a WARN log. We also don't have a WARN
> log
> >>> when people disable logging on regular stores. As Bruno mentioned, this
> >>> might also lead to data loss, so I don't see why we should treat
> >>> suppress() differently from other stores.
> >>>
> >>>
> >>> -Matthias
> >>>
> >>> On 4/10/19 3:36 PM, Bruno Cadonna wrote:
>  Hi Maarten and John,
> 
>  I would opt for option 1 with an additional log message on INFO or
> WARN
>  level, since the log file is the place where you would look first to
>  understand what went wrong. I would also not adjust it when persistent
>  stores are available for suppress.
> 
>  I would not go for option 2 or 3, because IIUC, with
>  `withLoggingDisabled()` also persistent state stores do not guarantee not
>  to lose records. Persisting state stores is basically a way to
> >> optimize
>  recovery in certain cases. The changelog topic is the component that
>  guarantees no data loss. So regarding data loss, in my opinion,
> >> disabling
>  logging on the suppression buffer is not different from disabling
> >> logging
>  on other state stores. Please correct me if I am wrong.
> 
>  Best,
>  Bruno
> 
>  On Wed, Apr 10, 2019 at 12:12 PM John Roesler wrote:
> 
> > Thanks for the update and comments, Maarten. It would be interesting
> >> to
> > hear what others think as well.
> > -John
> >
> > On Thu, Apr 4, 2019 at 2:43 PM Maarten Duijn wrote:
> >
> >> Thank you for the explanation regarding the internals, I have edited
> >>> the
> >> KIP accordingly and updated the Javadoc. About the possible data
> loss
> > when
> >> altering changelog config, I think we can improve by doing (one of)
> >> the
> >> following.
> >>
> >> 1) Add a warning in the comments that clearly states what might
> >> happen
> >> when change logging is disabled and adjust it when persistent stores
> >>> are
> >> added.
> >>
> >> 2) Change `withLoggingDisabled` to `minimizeLogging`. Instead of
> > disabling
> >> logging, a call to this method minimizes the topic size by
> >> aggressively
> >> removing the records emitted downstream by the suppress operator. I
> > believe
> >> this can be achieved by setting `delete.retention.ms=0` in the
> topic
> >> config.
> >>
> >> 3) Remove `withLoggingDisabled` from the proposal.
> >>
> >> 4) Leave both methods as proposed; as you indicated, this is in line
> >>> with
> >> the other parts of the Streams API.
> >>
> >> A user might want to disable logging when downstream is not a Kafka
> >>> topic
> >> but some other service that does not benefit from
> >> at-least-once delivery
> > of
> >> the suppressed records in case of failover or rebalance.
> >> Seeing as it might cause data loss, the methods should not be used
> > lightly
> >> and I think some comments are warranted. Personally, I rely purely
> on
> > Kafka
> >> to prevent data loss even when a store is persisted locally, so when
> >>> support
> >> is added for persistent suppression, I feel the comments may stay.
> >>
> >> Maarten
> >>
> 
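For readers skimming the thread, a sketch of how the two methods proposed in KIP-446 would look at the call site, assuming they land on Suppressed.BufferConfig as the KIP describes (names may still change before the vote; the config override is hypothetical):

import java.util.Collections;
import java.util.Map;
import org.apache.kafka.streams.kstream.Suppressed;
import org.apache.kafka.streams.kstream.Suppressed.BufferConfig;
import org.apache.kafka.streams.kstream.Windowed;

// Keep the changelog but override its topic-level configs:
Map<String, String> changelogConfig = Collections.singletonMap("min.insync.replicas", "2");
Suppressed<Windowed> logged = Suppressed.untilWindowCloses(
    BufferConfig.unbounded().withLoggingEnabled(changelogConfig));

// Or trade fault-tolerance for a smaller footprint; as discussed above,
// a failover can then lose the buffered records:
Suppressed<Windowed> unlogged = Suppressed.untilWindowCloses(
    BufferConfig.unbounded().withLoggingDisabled());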

Build failed in Jenkins: kafka-1.1-jdk7 #259

2019-04-16 Thread Apache Jenkins Server
See 


Changes:

[matthias] MINOR: code cleanup TopologyTestDriverTest (#6504)

--
[...truncated 1.51 MB...]
:405: warning: [deprecation] count(SessionWindows,StateStoreSupplier) in KGroupedStream has been deprecated
public KTable<Windowed<K>, Long> count(final SessionWindows sessionWindows,
 ^
  where K is a type-variable:
    K extends Object declared in interface KGroupedStream
:399: warning: [deprecation] count(SessionWindows) in KGroupedStream has been deprecated
public KTable<Windowed<K>, Long> count(final SessionWindows sessionWindows) {
 ^
  where K is a type-variable:
    K extends Object declared in interface KGroupedStream
:391: warning: [deprecation] count(SessionWindows,String) in KGroupedStream has been deprecated
public KTable<Windowed<K>, Long> count(final SessionWindows sessionWindows, final String queryableStoreName) {
 ^
  where K is a type-variable:
    K extends Object declared in interface KGroupedStream
:303: warning: [deprecation] count(Windows,StateStoreSupplier) in KGroupedStream has been deprecated
public <W extends Window> KTable<Windowed<K>, Long> count(final Windows<W> windows,
^
  where W,K are type-variables:
    W extends Window declared in method count(Windows,StateStoreSupplier)
    K extends Object declared in interface KGroupedStream
:297: warning: [deprecation] count(Windows) in KGroupedStream has been deprecated
public <W extends Window> KTable<Windowed<K>, Long> count(final Windows<W> windows) {
^
  where W,K are type-variables:
    W extends Window declared in method count(Windows)
    K extends Object declared in interface KGroupedStream
:289: warning: [deprecation] count(Windows,String) in KGroupedStream has been deprecated
public <W extends Window> KTable<Windowed<K>, Long> count(final Windows<W> windows,
^
  where W,K are type-variables:
    W extends Window declared in method count(Windows,String)
    K extends Object declared in interface KGroupedStream
:272: warning: [deprecation] count(StateStoreSupplier) in KGroupedStream has been deprecated
public KTable<K, Long> count(final org.apache.kafka.streams.processor.StateStoreSupplier storeSupplier) {
   ^
  where K is a type-variable:
    K extends Object declared in interface KGroupedStream
:260: warning: [deprecation] count(String) in KGroupedStream has been deprecated
public KTable<K, Long> count(final String queryableStoreName) {
   ^
  where K is a type-variable:
    K extends Object declared in interface KGroupedStream
:174: warning: [deprecation] punctuate(long) in Processor has been deprecated
public void punctuate(long timestamp) {
^
:110: warning: [deprecation] schedule(long) in ProcessorContext has been deprecated
public void schedule(final long interval) {
^
:826: warning: [deprecation] leftJoin(KTable,ValueJoiner,Serde,Serde) in KStream has been deprecated
public <VT, VR> KStream<K, VR> leftJoin(final KTable<K, VT> other,
 ^
  where VT,VR,K,V are type-variables:
    VT extends Object declared in method leftJoin(KTable,ValueJoiner,Serde,Serde)
    VR extends Object declared 
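The warnings above flag the count overloads that pre-date KIP-182; a sketch of the replacement call pattern in the newer API, where `grouped` is assumed to be a KGroupedStream<String, String> and the store name and window size are illustrative:

import java.util.concurrent.TimeUnit;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.kstream.SessionWindows;
import org.apache.kafka.streams.kstream.Windowed;

// Session-windowed count without the deprecated String/StateStoreSupplier overloads:
KTable<Windowed<String>, Long> counts = grouped
    .windowedBy(SessionWindows.with(TimeUnit.MINUTES.toMillis(5)))
    .count(Materialized.as("session-counts"));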

Jenkins build is back to normal : kafka-trunk-jdk11 #441

2019-04-16 Thread Apache Jenkins Server
See 




Build failed in Jenkins: kafka-2.1-jdk8 #163

2019-04-16 Thread Apache Jenkins Server
See 


Changes:

[matthias] MINOR: fixed missing close of Iterator, used try-with-resource where

--
[...truncated 762.76 KB...]

> Task :connect:file:spotbugsMain

> Task :connect:runtime:compileJava
:154: warning: [unchecked] unchecked method invocation: method doPrivileged in class AccessController is applied to given types
return (PluginClassLoader) AccessController.doPrivileged(
^
  required: PrivilegedAction<T>
  found: <anonymous PrivilegedAction>
  where T is a type-variable:
    T extends Object declared in method doPrivileged(PrivilegedAction<T>)
:155: warning: [unchecked] unchecked conversion
new PrivilegedAction() {
^
  required: PrivilegedAction<T>
  found:    <anonymous PrivilegedAction>
  where T is a type-variable:
    T extends Object declared in method doPrivileged(PrivilegedAction<T>)
:338: warning: [unchecked] unchecked cast
result.add(new PluginDesc<>((Class<T>) pluginImpl.getClass(), versionFor(pluginImpl), loader));
^
  required: Class<T>
  found:    Class<CAP#1>
  where T is a type-variable:
    T extends Object declared in method getServiceLoaderPluginDesc(Class<T>,ClassLoader)
  where CAP#1 is a fresh type-variable:
    CAP#1 extends Object from capture of ? extends Object
:65: warning: [unchecked] unchecked method invocation: method doPrivileged in class AccessController is applied to given types
return (DelegatingClassLoader) AccessController.doPrivileged(
^
  required: PrivilegedAction<T>
  found: <anonymous PrivilegedAction>
  where T is a type-variable:
    T extends Object declared in method doPrivileged(PrivilegedAction<T>)
:66: warning: [unchecked] unchecked conversion
new PrivilegedAction() {
^
  required: PrivilegedAction<T>
  found:    <anonymous PrivilegedAction>
  where T is a type-variable:
    T extends Object declared in method doPrivileged(PrivilegedAction<T>)
:106: warning: [deprecation] INTERNAL_KEY_CONVERTER_CLASS_CONFIG in WorkerConfig has been deprecated
return classPropertyName.equals(WorkerConfig.INTERNAL_KEY_CONVERTER_CLASS_CONFIG)
^
:107: warning: [deprecation] INTERNAL_VALUE_CONVERTER_CLASS_CONFIG in WorkerConfig has been deprecated
|| classPropertyName.equals(WorkerConfig.INTERNAL_VALUE_CONVERTER_CLASS_CONFIG);
^
:247: warning: [deprecation] INTERNAL_KEY_CONVERTER_CLASS_CONFIG in WorkerConfig has been deprecated
 || WorkerConfig.INTERNAL_KEY_CONVERTER_CLASS_CONFIG.equals(classPropertyName);
^
:168: warning: [unchecked] unchecked cast
final List<String> transformAliases = (List<String>) value;
 ^
  required: List<String>
  found:    Object
:253: warning: [unchecked] unchecked conversion
transformation = getClass(prefix + "type").asSubclass(Transformation.class).newInstance();
   ^
  required: Transformation<R>
  found:    CAP#1
  where R is a type-variable:
    R extends ConnectRecord declared in method transformations()
  where CAP#1 is a fresh type-variable:
    CAP#1 extends Transformation from capture of 

Build failed in Jenkins: kafka-trunk-jdk8 #3561

2019-04-16 Thread Apache Jenkins Server
See 

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on ubuntu-eu2 (ubuntu trusty) in workspace 

No credentials specified
 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
ERROR: Error fetching remote repo 'origin'
hudson.plugins.git.GitException: Failed to fetch from 
https://github.com/apache/kafka.git
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:894)
at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1161)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1192)
at hudson.scm.SCM.checkout(SCM.java:504)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1208)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:574)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:499)
at hudson.model.Run.execute(Run.java:1810)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:429)
Caused by: hudson.plugins.git.GitException: Command "git fetch --tags 
--progress https://github.com/apache/kafka.git 
+refs/heads/*:refs/remotes/origin/*" returned status code 128:
stdout: 
stderr: error: Could not read 79db30a8d7e3e85641c3c838f48b286cc7500814
error: Could not read 81a23220014b719ff39bcb3d422ed489e461fe78
error: missing object referenced by 'refs/tags/2.2.0'
error: Could not read 79db30a8d7e3e85641c3c838f48b286cc7500814
error: Could not read bc745af6a7a640360a092d8bbde5f68b6ad637be
remote: Enumerating objects: 4145, done.
remote: Counting objects:   0% (1/4145) ... 51% [incremental progress output truncated]

Build failed in Jenkins: kafka-trunk-jdk8 #3560

2019-04-16 Thread Apache Jenkins Server
See 

--
Started by an SCM change
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on ubuntu-eu2 (ubuntu trusty) in workspace 

No credentials specified
 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
ERROR: Error fetching remote repo 'origin'
hudson.plugins.git.GitException: Failed to fetch from 
https://github.com/apache/kafka.git
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:894)
at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1161)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1192)
at hudson.scm.SCM.checkout(SCM.java:504)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1208)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:574)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:499)
at hudson.model.Run.execute(Run.java:1810)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:429)
Caused by: hudson.plugins.git.GitException: Command "git fetch --tags 
--progress https://github.com/apache/kafka.git 
+refs/heads/*:refs/remotes/origin/*" returned status code 128:
stdout: 
stderr: error: Could not read 79db30a8d7e3e85641c3c838f48b286cc7500814
error: Could not read 81a23220014b719ff39bcb3d422ed489e461fe78
error: missing object referenced by 'refs/tags/2.2.0'
error: Could not read 79db30a8d7e3e85641c3c838f48b286cc7500814
error: Could not read bc745af6a7a640360a092d8bbde5f68b6ad637be
remote: Enumerating objects: 4145, done.
remote: Counting objects:   0% (1/4145) ... 50% (2073/4145) [incremental progress output truncated]

Build failed in Jenkins: kafka-1.1-jdk7 #258

2019-04-16 Thread Apache Jenkins Server
See 


Changes:

[matthias] MINOR: fixed missing close of Iterator, used try-with-resource where

--
[...truncated 1.51 MB...]
public void print(final Serde<K> keySerde,
^
  where K,V are type-variables:
    K extends Object declared in interface KTable
    V extends Object declared in interface KTable
:331: warning: [deprecation] print(String) in KTable has been deprecated
public void print(final String label) {
^
:325: warning: [deprecation] print() in KTable has been deprecated
public void print() {
^
:316: warning: [deprecation] mapValues(ValueMapper,Serde,StateStoreSupplier) in KTable has been deprecated
public <VR> KTable<K, VR> mapValues(final ValueMapper<? super V, ? extends VR> mapper,
   ^
  where VR,V,K are type-variables:
    VR extends Object declared in method mapValues(ValueMapper,Serde,StateStoreSupplier)
    V extends Object declared in interface KTable
    K extends Object declared in interface KTable
:307: warning: [deprecation] mapValues(ValueMapper,Serde,String) in KTable has been deprecated
public <VR> KTable<K, VR> mapValues(final ValueMapper<? super V, ? extends VR> mapper,
  ^
  where VR,V,K are type-variables:
    VR extends Object declared in method mapValues(ValueMapper,Serde,String)
    V extends Object declared in interface KTable
    K extends Object declared in interface KTable
:232: warning: [deprecation] filterNot(Predicate,String) in KTable has been deprecated
public KTable<K, V> filterNot(final Predicate<? super K, ? super V> predicate,
^
  where K,V are type-variables:
    K extends Object declared in interface KTable
    V extends Object declared in interface KTable
:243: warning: [deprecation] filterNot(Predicate,StateStoreSupplier) in KTable has been deprecated
public KTable<K, V> filterNot(final Predicate<? super K, ? super V> predicate,
^
  where K,V are type-variables:
    K extends Object declared in interface KTable
    V extends Object declared in interface KTable
:211: warning: [deprecation] filter(Predicate,StateStoreSupplier) in KTable has been deprecated
public KTable<K, V> filter(final Predicate<? super K, ? super V> predicate,
^
  where K,V are type-variables:
    K extends Object declared in interface KTable
    V extends Object declared in interface KTable
:200: warning: [deprecation] filter(Predicate,String) in KTable has been deprecated
public KTable<K, V> filter(final Predicate<? super K, ? super V> predicate,
^
  where K,V are type-variables:
    K extends Object declared in interface KTable
    V extends Object declared in interface KTable
:826: warning: [deprecation] leftJoin(KTable,ValueJoiner,Serde,Serde) in KStream has been deprecated
public <VT, VR> KStream<K, VR> leftJoin(final KTable<K, VT> other,
 ^
  where VT,VR,K,V are type-variables:
    VT extends Object declared in method leftJoin(KTable,ValueJoiner,Serde,Serde)
    VR extends Object declared in method leftJoin(KTable,ValueJoiner,Serde,Serde)
    K extends Object declared in interface KStream
    V extends Object declared in interface KStream
:751: warning: [deprecation] join(KTable,ValueJoiner,Serde,Serde) in KStream has been deprecated
public <VT, VR> KStream<K, VR> join(final KTable<K, VT> other,
 ^
  where VT,VR,K,V are type-variables:
    VT extends Object declared in method join(KTable,ValueJoiner,Serde,Serde)
    VR extends Object declared in method join(KTable,ValueJoiner,Serde,Serde)
    K extends Object declared in interface KStream
    V extends Object declared in interface KStream

Re: KIP-457: Add DISCONNECTED state to Kafka Streams

2019-04-16 Thread Richard Yu
Hi all,

Considering that this is a simple KIP, I would probably start the voting
tomorrow.
I think it would be good if we could get this in fast.

On Tue, Apr 16, 2019 at 3:31 PM Richard Yu wrote:

> Oh, I probably misunderstood the difference between DISCONNECTED and DEAD.
> I will update the KIP accordingly.
> Thanks for pointing that out!
>
>
> On Tue, Apr 16, 2019 at 3:13 PM Matthias J. Sax wrote:
>
>> Thanks for the initiative.
>>
>> In the motivation you mention that you want to use DISCONNECT to
>> indicate that the application was killed.
>>
>> What is the difference from the existing DEAD state?
>>
>> Also, the backing JIRA seems to have a different motivation to add a
>> DISCONNECT state. There, the Kafka Streams application itself is
>> healthy, but it cannot connect to the brokers. It seems reasonable to
>> add a DISCONNECT for this case though.
>>
>>
>>
>> -Matthias
>>
>>
>>
>> On 4/16/19 9:30 AM, Richard Yu wrote:
>> > Hi all,
>> >
>> > I'd like to propose a small KIP on adding a new state to
>> KafkaStreams#state().
>> > It is very simple, so this should pass relatively quickly!
>> > Here is the discussion link:
>> >
>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-457%3A+Add+DISCONNECTED+status+to+Kafka+Streams
>> >
>> > Cheers,
>> > Richard
>> >
>>
>>
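For context, applications can already observe these transitions via a state listener; a sketch is below, where DISCONNECTED is only the value this KIP proposes and does not exist in any released version (topology and props assumed defined elsewhere):

import org.apache.kafka.streams.KafkaStreams;

KafkaStreams streams = new KafkaStreams(topology, props);

// Log every transition; with KIP-457 one could additionally react when the
// (proposed) DISCONNECTED state is entered, i.e. the application is healthy
// but cannot reach the brokers.
streams.setStateListener((newState, oldState) ->
    System.out.printf("Streams state changed: %s -> %s%n", oldState, newState));

streams.start();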


Re: [DISCUSSION] KIP-421: Automatically resolve external configurations.

2019-04-16 Thread Colin McCabe
Thanks, Tejal.  Looks good overall.

> This KIP will enable all the components (broker, connect, producer, consumer,
> admin client, and so forth) to automatically resolve the external configurations.

It would be nice to spell out that these components all automatically resolve 
external configurations after this KIP is implemented, not just that they have 
that ability.

Does the broker reload external configurations even if they are not configured 
via KIP-226 dynamic configs?

best,
Colin


On Mon, Apr 15, 2019, at 22:24, Tejal Adsul wrote:
> Hi All,
> 
> I have updated the KIP to address the comments in the discussion. I 
> have added the flow for how dynamic config values will be resolved. 
> Could you all please review the updated changes and let me know your 
> feedback.
> 
> Thanks,
> Tejal
> 
> On 2019/03/21 20:38:54, Tejal Adsul wrote:
> > I have addressed the comments 1 and 2 in the KIP.
> > 3. The example is a bit misleading with the password in it. I have modified
> > it. We basically wanted to show that you can pass any additional parameters
> > required by the config provider.
> > 4. Yes, all the public config classes (ProducerConfig, ConsumerConfig,
> > ConnectorConfig etc.) will be extended to optionally use the new
> > AbstractConfig constructors.
> >
> > On 2019/03/14 11:49:46, Rajini Sivaram wrote:
> > > Hi Tejal,
> > >
> > > Thanks for the updates. A few comments:
> > >
> > > 1. In the standard KIP template, we have two sections `Public
> > > Interfaces` and `Proposed Changes`. Can you split the section `Proposal`
> > > into two so that public interface changes are more obvious?
> > > 2. Under `Public Interfaces`, can you separate out interface changes and
> > > new configurations, since the config changes are sort of lost in the text?
> > > In particular, I think this KIP is proposing to reserve the config name
> > > `config.providers`, as well as all config names starting with
> > > `config.providers.`, to resolve configs.
> > > 3. The example looks a bit odd to me. It looks like we are removing
> > > local passwords like the truststore password from a client config and
> > > instead adding a master password like a vault password in cleartext into
> > > the file. Perhaps the intention is that the vault password won't be in
> > > the file for a vault provider?
> > > 4. The example instantiates AbstractConfig. I am not familiar with the
> > > usage of this class in Connect, but is the intention that all the public
> > > config classes (ProducerConfig, ConsumerConfig, ConnectorConfig etc.)
> > > will be extended to optionally use the new AbstractConfig constructors?
> > >
> > > Regards,
> > >
> > > Rajini
> > >
> > > On Mon, Mar 11, 2019 at 5:49 PM Tejal Adsul wrote:
> > > >
> > > > Hi Folks,
> > > >
> > > > I have accommodated most of the review comments for
> > > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-421%3A+Support+resolving+externalized+secrets+in+AbstractConfig
> > > > . Reopening the thread for further discussion. Please let me know your
> > > > thoughts on it.
> > > >
> > > > Thanks,
> > > > Tejal
> > > >
> > > > On 2019/01/25 19:11:07, "Colin McCabe" wrote:
> > > > > On Fri, Jan 25, 2019, at 09:12, Andy Coates wrote:
> > > > > > > Further, if we're worried about confusion about how to
> > > > > > > load the two files, we could have a constructor that does that
> > > > > > > default pattern for you.
> > > > > >
> > > > > > Yeah, I don't really see the need for this two step / two file
> > > > > > approach. I think the config providers should be listed in the main
> > > > > > property file, not some secondary file, and we should avoid backwards
> > > > > > compatibility issues by, as Ewan says, having a new constructor
> > > > > > (deprecating the old) that allows the functionality to be turned
> > > > > > on/off.
> > > > >
> > > > > +1.  In the case of the Kafka broker, it really seems like we should
> > > > > put the config providers in the main config file.  It's more complex
> > > > > to have multiple configuration files, and it doesn't seem to add any
> > > > > value.
> > > > >
> > > > > In the case of other components like Connect, I don't have a strong
> > > > > opinion.  We can discuss this on a component-by-component basis.
> > > > > Clearly not all components manage configuration exactly the same way,
> > > > > and that difference might motivate different strategies here.
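For readers new to the thread, a sketch of the externalization mechanism under discussion, using the FileConfigProvider that ships with Kafka; the file path and key name are hypothetical, and resolving such placeholders in plain client configs is exactly what KIP-421 proposes:

import java.util.Properties;

Properties props = new Properties();
// Register a provider under the alias "file"; this is the reserved
// config.providers.* namespace mentioned above.
props.put("config.providers", "file");
props.put("config.providers.file.class",
    "org.apache.kafka.common.config.provider.FileConfigProvider");
// The placeholder resolves to the value of key "truststore.password" in the
// referenced file, so the secret never appears inline in this config.
props.put("ssl.truststore.password",
    "${file:/etc/kafka/secrets.properties:truststore.password}");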

[jira] [Created] (KAFKA-8245) Flaky Test DeleteConsumerGroupsTest#testDeleteCmdAllGroups

2019-04-16 Thread Matthias J. Sax (JIRA)
Matthias J. Sax created KAFKA-8245:
--

 Summary: Flaky Test DeleteConsumerGroupsTest#testDeleteCmdAllGroups
 Key: KAFKA-8245
 URL: https://issues.apache.org/jira/browse/KAFKA-8245
 Project: Kafka
  Issue Type: Bug
  Components: admin
Affects Versions: 2.3.0
Reporter: Matthias J. Sax
 Fix For: 2.3.0


[https://builds.apache.org/job/kafka-pr-jdk11-scala2.12/3781/testReport/junit/kafka.admin/DeleteConsumerGroupsTest/testDeleteCmdAllGroups/]
{quote}java.lang.AssertionError: The group did become empty as expected. at 
kafka.utils.TestUtils$.fail(TestUtils.scala:381) at 
kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:791) at 
kafka.admin.DeleteConsumerGroupsTest.testDeleteCmdAllGroups(DeleteConsumerGroupsTest.scala:148){quote}
STDOUT
{quote}Error: Deletion of some consumer groups failed: * Group 'test.group' 
could not be deleted due to: java.util.concurrent.ExecutionException: 
org.apache.kafka.common.errors.GroupNotEmptyException: The group is not empty. 
Error: Deletion of some consumer groups failed: * Group 'missing.group' could 
not be deleted due to: java.util.concurrent.ExecutionException: 
org.apache.kafka.common.errors.GroupIdNotFoundException: The group id does not 
exist. [2019-04-16 09:42:02,316] WARN Unable to read additional data from 
client sessionid 0x104f958dba3, likely client has closed socket 
(org.apache.zookeeper.server.NIOServerCnxn:376) Deletion of requested consumer 
groups ('test.group') was successful. Error: Deletion of some consumer groups 
failed: * Group 'missing.group' could not be deleted due to: 
java.util.concurrent.ExecutionException: 
org.apache.kafka.common.errors.GroupIdNotFoundException: The group id does not 
exist. These consumer groups were deleted successfully: 'test.group'{quote}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-8244) Flaky Test GroupAuthorizerIntegrationTest#shouldThrowTransactionalIdAuthorizationExceptionWhenNoTransactionAccessDuringSend

2019-04-16 Thread Matthias J. Sax (JIRA)
Matthias J. Sax created KAFKA-8244:
--

 Summary: Flaky Test 
GroupAuthorizerIntegrationTest#shouldThrowTransactionalIdAuthorizationExceptionWhenNoTransactionAccessDuringSend
 Key: KAFKA-8244
 URL: https://issues.apache.org/jira/browse/KAFKA-8244
 Project: Kafka
  Issue Type: Bug
  Components: core
Affects Versions: 2.1.1
Reporter: Matthias J. Sax
 Fix For: 2.1.2


[https://builds.apache.org/blue/organizations/jenkins/kafka-2.1-jdk8/detail/kafka-2.1-jdk8/162/tests]
{quote}org.apache.kafka.common.errors.TimeoutException: Timeout expired while 
initializing transactional state in 3000ms.{quote}
STDOUT:
{quote}[2019-04-16 23:05:48,957] ERROR [Consumer clientId=consumer-1106, 
groupId=my-group] Offset commit failed on partition topic-0 at offset 5: Not 
authorized to access topics: [Topic authorization failed.] 
(org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:812)
[2019-04-16 23:05:48,957] ERROR [Consumer clientId=consumer-1106, 
groupId=my-group] Not authorized to commit to topics [topic] 
(org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:850)
[2019-04-16 23:06:24,650] ERROR [KafkaApi-0] Error when handling request: 
clientId=broker-0-txn-marker-sender, correlationId=0, api=WRITE_TXN_MARKERS, 
body=\{transaction_markers=[{producer_id=0,producer_epoch=0,transaction_result=false,topics=[{topic=topic,partitions=[0]}],coordinator_epoch=0}]}
 (kafka.server.KafkaApis:76)
org.apache.kafka.common.errors.ClusterAuthorizationException: Request 
Request(processor=0, connectionId=127.0.0.1:43243-127.0.0.1:55848-1, 
session=Session(Group:testGroup,/127.0.0.1), 
listenerName=ListenerName(PLAINTEXT), securityProtocol=PLAINTEXT, buffer=null) 
is not authorized.
[2019-04-16 23:06:24,651] ERROR [TransactionCoordinator id=0] Uncaught error in 
request completion: (org.apache.kafka.clients.NetworkClient:559)
java.lang.IllegalStateException: Unexpected error 
org.apache.kafka.common.errors.ClusterAuthorizationException while sending txn 
marker for transactional.id
at 
kafka.coordinator.transaction.TransactionMarkerRequestCompletionHandler.$anonfun$onComplete$14(TransactionMarkerRequestCompletionHandler.scala:175)
at 
scala.collection.TraversableLike$WithFilter.$anonfun$foreach$1(TraversableLike.scala:788)
at scala.collection.Iterator.foreach(Iterator.scala:937)
at scala.collection.Iterator.foreach$(Iterator.scala:937)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1425)
at scala.collection.IterableLike.foreach(IterableLike.scala:70)
at scala.collection.IterableLike.foreach$(IterableLike.scala:69)
at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
at 
scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:787)
at 
kafka.coordinator.transaction.TransactionMarkerRequestCompletionHandler.$anonfun$onComplete$12(TransactionMarkerRequestCompletionHandler.scala:133)
at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:12)
at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:251)
at 
kafka.coordinator.transaction.TransactionMetadata.inLock(TransactionMetadata.scala:172)
at 
kafka.coordinator.transaction.TransactionMarkerRequestCompletionHandler.$anonfun$onComplete$8(TransactionMarkerRequestCompletionHandler.scala:133)
at 
kafka.coordinator.transaction.TransactionMarkerRequestCompletionHandler.$anonfun$onComplete$8$adapted(TransactionMarkerRequestCompletionHandler.scala:92)
at scala.collection.Iterator.foreach(Iterator.scala:937)
at scala.collection.Iterator.foreach$(Iterator.scala:937)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1425)
at scala.collection.IterableLike.foreach(IterableLike.scala:70)
at scala.collection.IterableLike.foreach$(IterableLike.scala:69)
at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
at 
kafka.coordinator.transaction.TransactionMarkerRequestCompletionHandler.onComplete(TransactionMarkerRequestCompletionHandler.scala:92)
at org.apache.kafka.clients.ClientResponse.onComplete(ClientResponse.java:109)
at 
org.apache.kafka.clients.NetworkClient.completeResponses(NetworkClient.java:557)
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:549)
at kafka.common.InterBrokerSendThread.doWork(InterBrokerSendThread.scala:66)
at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:82)
[2019-04-16 23:06:33,460] ERROR [Consumer clientId=consumer-, 
groupId=my-group] Offset commit failed on partition topic-0 at offset 0: Not 
authorized to access group: Group authorization failed. 
(org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:812)
Error: Consumer group 'my-group' does not exist.
[2019-04-16 23:07:43,178] WARN fsync-ing the write ahead log in SyncThread:0 
took 1603ms which will adversely effect operation latency. See the ZooKeeper 
troubleshooting guide (org.apache.zookeeper.server.persistence.FileTxnLog:338)
[2019-04-16 

Build failed in Jenkins: kafka-2.1-jdk8 #162

2019-04-16 Thread Apache Jenkins Server
See 


Changes:

[bill] KAFKA-8209: Wrong link for KStreams DSL in core concepts doc (#6564)

[bill] KAFKA-8210: Fix link for streams table duality (#6573)

--
[...truncated 923.99 KB...]
kafka.security.auth.ZkAuthorizationTest > testZkMigration STARTED

kafka.security.auth.ZkAuthorizationTest > testZkMigration PASSED

kafka.security.auth.ZkAuthorizationTest > testChroot STARTED

kafka.security.auth.ZkAuthorizationTest > testChroot PASSED

kafka.security.auth.ZkAuthorizationTest > testDelete STARTED

kafka.security.auth.ZkAuthorizationTest > testDelete PASSED

kafka.security.auth.ZkAuthorizationTest > testDeleteRecursive STARTED

kafka.security.auth.ZkAuthorizationTest > testDeleteRecursive PASSED

kafka.security.auth.AclTest > testAclJsonConversion STARTED

kafka.security.auth.AclTest > testAclJsonConversion PASSED

kafka.security.auth.OperationTest > testJavaConversions STARTED

kafka.security.auth.OperationTest > testJavaConversions PASSED

kafka.security.auth.ResourceTest > shouldParseOldTwoPartString STARTED

kafka.security.auth.ResourceTest > shouldParseOldTwoPartString PASSED

kafka.security.auth.ResourceTest > shouldParseOldTwoPartWithEmbeddedSeparators 
STARTED

kafka.security.auth.ResourceTest > shouldParseOldTwoPartWithEmbeddedSeparators 
PASSED

kafka.security.auth.ResourceTest > 
shouldThrowOnTwoPartStringWithUnknownResourceType STARTED

kafka.security.auth.ResourceTest > 
shouldThrowOnTwoPartStringWithUnknownResourceType PASSED

kafka.security.auth.ResourceTest > shouldThrowOnBadResourceTypeSeparator STARTED

kafka.security.auth.ResourceTest > shouldThrowOnBadResourceTypeSeparator PASSED

kafka.security.auth.ResourceTest > shouldParseThreePartString STARTED

kafka.security.auth.ResourceTest > shouldParseThreePartString PASSED

kafka.security.auth.ResourceTest > shouldRoundTripViaString STARTED

kafka.security.auth.ResourceTest > shouldRoundTripViaString PASSED

kafka.security.auth.ResourceTest > shouldParseThreePartWithEmbeddedSeparators 
STARTED

kafka.security.auth.ResourceTest > shouldParseThreePartWithEmbeddedSeparators 
PASSED

kafka.security.auth.ResourceTypeTest > testJavaConversions STARTED

kafka.security.auth.ResourceTypeTest > testJavaConversions PASSED

kafka.security.auth.ResourceTypeTest > testFromString STARTED

kafka.security.auth.ResourceTypeTest > testFromString PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testAuthorizeWithPrefixedResource 
STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testAuthorizeWithPrefixedResource 
PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testAllowAllAccess STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testAllowAllAccess PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testLocalConcurrentModificationOfResourceAcls STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testLocalConcurrentModificationOfResourceAcls PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testDeleteAllAclOnWildcardResource STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testDeleteAllAclOnWildcardResource PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testHighConcurrencyDeletionOfResourceAcls STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testHighConcurrencyDeletionOfResourceAcls PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testNoAclFound STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testNoAclFound PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testAclInheritance STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testAclInheritance PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testDistributedConcurrentModificationOfResourceAcls STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testDistributedConcurrentModificationOfResourceAcls PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testAddAclsOnWildcardResource 
STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testAddAclsOnWildcardResource 
PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testWritesExtendedAclChangeEventWhenInterBrokerProtocolAtLeastKafkaV2 STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testWritesExtendedAclChangeEventWhenInterBrokerProtocolAtLeastKafkaV2 PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testAclManagementAPIs STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testAclManagementAPIs PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testWildCardAcls STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testWildCardAcls PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testWritesLiteralAclChangeEventWhenInterBrokerProtocolIsKafkaV2 STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testWritesLiteralAclChangeEventWhenInterBrokerProtocolIsKafkaV2 PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testTopicAcl STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testTopicAcl PASSED


Build failed in Jenkins: kafka-trunk-jdk11 #440

2019-04-16 Thread Apache Jenkins Server
See 


Changes:

[wangguoz] KAFKA-7778: document scala suppress API (#6586)

--
[...truncated 2.38 MB...]

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueWithSchemaInt8 PASSED

org.apache.kafka.connect.transforms.CastTest > castWholeRecordKeyWithSchema 
STARTED

org.apache.kafka.connect.transforms.CastTest > castWholeRecordKeyWithSchema 
PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessInt8 STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessInt8 PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessBooleanFalse STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessBooleanFalse PASSED

org.apache.kafka.connect.transforms.CastTest > testConfigInvalidSchemaType 
STARTED

org.apache.kafka.connect.transforms.CastTest > testConfigInvalidSchemaType 
PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessString STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessString PASSED

org.apache.kafka.connect.transforms.CastTest > castFieldsWithSchema STARTED

org.apache.kafka.connect.transforms.CastTest > castFieldsWithSchema PASSED

org.apache.kafka.connect.transforms.MaskFieldTest > withSchema STARTED

org.apache.kafka.connect.transforms.MaskFieldTest > withSchema PASSED

org.apache.kafka.connect.transforms.MaskFieldTest > schemaless STARTED

org.apache.kafka.connect.transforms.MaskFieldTest > schemaless PASSED

org.apache.kafka.connect.transforms.util.NonEmptyListValidatorTest > 
testNullList STARTED

org.apache.kafka.connect.transforms.util.NonEmptyListValidatorTest > 
testNullList PASSED

org.apache.kafka.connect.transforms.util.NonEmptyListValidatorTest > 
testEmptyList STARTED

org.apache.kafka.connect.transforms.util.NonEmptyListValidatorTest > 
testEmptyList PASSED

org.apache.kafka.connect.transforms.util.NonEmptyListValidatorTest > 
testValidList STARTED

org.apache.kafka.connect.transforms.util.NonEmptyListValidatorTest > 
testValidList PASSED

org.apache.kafka.connect.transforms.TimestampRouterTest > defaultConfiguration 
STARTED

org.apache.kafka.connect.transforms.TimestampRouterTest > defaultConfiguration 
PASSED

org.apache.kafka.connect.transforms.RegexRouterTest > doesntMatch STARTED

org.apache.kafka.connect.transforms.RegexRouterTest > doesntMatch PASSED

org.apache.kafka.connect.transforms.RegexRouterTest > identity STARTED

org.apache.kafka.connect.transforms.RegexRouterTest > identity PASSED

org.apache.kafka.connect.transforms.RegexRouterTest > addPrefix STARTED

org.apache.kafka.connect.transforms.RegexRouterTest > addPrefix PASSED

org.apache.kafka.connect.transforms.RegexRouterTest > addSuffix STARTED

org.apache.kafka.connect.transforms.RegexRouterTest > addSuffix PASSED

org.apache.kafka.connect.transforms.RegexRouterTest > slice STARTED

org.apache.kafka.connect.transforms.RegexRouterTest > slice PASSED

org.apache.kafka.connect.transforms.RegexRouterTest > staticReplacement STARTED

org.apache.kafka.connect.transforms.RegexRouterTest > staticReplacement PASSED

org.apache.kafka.connect.transforms.ReplaceFieldTest > withSchema STARTED

org.apache.kafka.connect.transforms.ReplaceFieldTest > withSchema PASSED

org.apache.kafka.connect.transforms.ReplaceFieldTest > schemaless STARTED

org.apache.kafka.connect.transforms.ReplaceFieldTest > schemaless PASSED

org.apache.kafka.connect.transforms.SetSchemaMetadataTest > schemaNameUpdate 
STARTED

org.apache.kafka.connect.transforms.SetSchemaMetadataTest > schemaNameUpdate 
PASSED

org.apache.kafka.connect.transforms.SetSchemaMetadataTest > 
schemaNameAndVersionUpdateWithStruct STARTED

org.apache.kafka.connect.transforms.SetSchemaMetadataTest > 
schemaNameAndVersionUpdateWithStruct PASSED

org.apache.kafka.connect.transforms.SetSchemaMetadataTest > 
updateSchemaOfStruct STARTED

org.apache.kafka.connect.transforms.SetSchemaMetadataTest > 
updateSchemaOfStruct PASSED

org.apache.kafka.connect.transforms.SetSchemaMetadataTest > 
schemaNameAndVersionUpdate STARTED

org.apache.kafka.connect.transforms.SetSchemaMetadataTest > 
schemaNameAndVersionUpdate PASSED

org.apache.kafka.connect.transforms.SetSchemaMetadataTest > updateSchemaOfNull 
STARTED

org.apache.kafka.connect.transforms.SetSchemaMetadataTest > updateSchemaOfNull 
PASSED

org.apache.kafka.connect.transforms.SetSchemaMetadataTest > schemaVersionUpdate 
STARTED

org.apache.kafka.connect.transforms.SetSchemaMetadataTest > schemaVersionUpdate 
PASSED

org.apache.kafka.connect.transforms.SetSchemaMetadataTest > 
updateSchemaOfNonStruct STARTED

org.apache.kafka.connect.transforms.SetSchemaMetadataTest > 
updateSchemaOfNonStruct PASSED

org.apache.kafka.connect.transforms.HoistFieldTest > withSchema STARTED


Re: KIP-457: Add DISCONNECTED state to Kafka Streams

2019-04-16 Thread Richard Yu
Oh, I probably misunderstood the difference between DISCONNECTED and DEAD.
I will update the KIP accordingly.
Thanks for pointing that out!


On Tue, Apr 16, 2019 at 3:13 PM Matthias J. Sax 
wrote:

> Thanks for the initiative.
>
> In the motivation you mention that you want to use DISCONNECT to
> indicate that the application was killed.
>
> What is the difference to the existing DEAD state?
>
> Also, the backing JIRA seems to have a different motivation for adding a
> DISCONNECT state. There, the Kafka Streams application itself is
> healthy, but it cannot connect to the brokers. It seems reasonable to
> add a DISCONNECT for this case though.
>
>
>
> -Matthias
>
>
>
> On 4/16/19 9:30 AM, Richard Yu wrote:
> > Hi all,
> >
> > I would like to propose a small KIP on adding a new state to
> KafkaStreams#state().
> > It is very simple, so this should pass relatively quickly!
> > Here is the discussion link:
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-457%3A+Add+DISCONNECTED+status+to+Kafka+Streams
> >
> > Cheers,
> > Richard
> >
>
>
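
For context, the states in question are those reported by KafkaStreams#state(). A minimal Java sketch of observing transitions with the existing listener API (the DISCONNECTED state is hypothetical and exists only in the KIP proposal; topology and props are assumed to be defined elsewhere):

{code:java}
import java.util.Properties;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.Topology;

public class StateWatcher {
    public static KafkaStreams start(final Topology topology, final Properties props) {
        final KafkaStreams streams = new KafkaStreams(topology, props);
        // Called on every state transition, e.g. REBALANCING -> RUNNING.
        streams.setStateListener((newState, oldState) -> {
            // With KIP-457, a transition into the proposed DISCONNECTED state
            // could alert that brokers are unreachable while the application
            // itself is still healthy, unlike a terminal state such as DEAD.
            System.out.println("Streams state: " + oldState + " -> " + newState);
        });
        streams.start();
        return streams;
    }
}
{code}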


Re: [DISCUSS] KIP-450: Sliding Window Aggregations in the DSL

2019-04-16 Thread Matthias J. Sax
Thanks Sophie!


Regarding (4), I am in favor of supporting both. I am not sure, though, whether
we can reuse the existing window store (by enabling it to store duplicates) for
this case, or whether we need to design a new store that keeps all raw records.

Btw: for holistic aggregations, like median, we would need to support a
different store layout for existing aggregations (time-window,
session-window), too. Thus, if we add support for this, we might be able
to kill two birds with one stone. Of course, we would still need new
APIs for existing aggregations to allow users to pick between both cases.

I only bring this up, because it might make sense to design the store in
a way such that we can use it for all cases.


About (3): atm we support wall-clock time via the corresponding
`WallclockTimestampExtractor`. Maybe Bill can elaborate a little bit
more on what he has in mind exactly, and why using this extractor would
not meet the requirements for processing-time sliding windows?
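
For reference, switching an application to wall-clock time today is a one-line config change; a minimal sketch assuming the standard 2.x config names:

{code:java}
import java.util.Properties;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.processor.WallclockTimestampExtractor;

public class WallclockConfig {
    public static Properties props() {
        final Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "sliding-window-demo"); // hypothetical id
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        // Stamps every record with System.currentTimeMillis() at processing time.
        props.put(StreamsConfig.DEFAULT_TIMESTAMP_EXTRACTOR_CLASS_CONFIG,
                WallclockTimestampExtractor.class);
        return props;
    }
}
{code}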


-Matthias


On 4/16/19 10:16 AM, Guozhang Wang wrote:
> Regarding 4): yes I agree with you that invertibility is not a common
> property for agg-functions. Just to be clear about our current APIs: for
> stream.aggregate we only require a single Adder function, whereas for
> table.aggregate we require both an Adder and a Subtractor. The Subtractor is
> not used to leverage any algebraic property; it is needed because the incoming
> table changelog stream may contain "tombstones", and we must negate the effect
> of the previous record that a tombstone deletes.
> 
> What I'm proposing is exactly having two APIs, one for Adder only (like
> other Streams aggregations) and one for Subtractor + Adder (for agg
> functions users think are invertible) for efficiency. Some other frameworks
> (e.g., Spark) offer similar options to users and recommend using the
> latter so that implementation-level optimizations can be applied.
> 
> 
> Guozhang
> 
> On Mon, Apr 15, 2019 at 12:29 PM Sophie Blee-Goldman 
> wrote:
> 
>> Thanks for the feedback Matthias and Bill. After discussing offline we
>> realized the type of windows I originally had in mind were quite different,
>> and I agree now that the semantics outlined by Matthias are the direction
>> to go in here. I will update the KIP accordingly with the new semantics
>> (and corresponding design) and restart the discussion from there.
>>
>> In the meantime, to respond to some other points:
>>
>> 1) API:
>>
>> I propose adding only the one class -- public class SlidingWindows extends
>> Windows<TimeWindow> {} -- so I do not believe we need any new Serdes? It
>> will still be a fixed-size TimeWindow, but handled a bit differently. I've
>> updated the KIP to state explicitly all of the classes/methods being added
>>
>> 2) Zero grace period
>>
>> The "zero grace period" was essentially just consequence of my original
>> definition for sliding windows; with the new semantics we can (and should)
>> allow for a nonzero grace period
>>
>> 3) Wall-clock time
>>
>> Hm, I had not considered this yet but it may be a good idea to keep in mind
>> while rethinking the design. To clarify, we don't support wall-clock based
>> aggregations with hopping or tumbling windows though (yet?)
>>
>> 4) Commutative vs associative vs invertible aggregations
>>
>> I agree that it's reasonable to assume commutativity and associativity, but
>> that's not the same as being subtractable -- that requires invertibility,
>> which is broken by a lot of very simple functions and is not, I think, ok
>> to assume. However we could consider adding a separate API which also takes
>> a subtractor and corresponds to a completely different implementation. We
>> could also consider an API that takes a function that aggregates two
>> aggregates together in addition to the existing aggregator (which
>> aggregates a single value with an existing aggregate) WDYT?
>>
>>
>>
>>
>> On Thu, Apr 11, 2019 at 1:13 AM Matthias J. Sax 
>> wrote:
>>
>>> Thanks for the KIP Sophie. Couple of comments:
>>>
>>> It's a little unclear to me what public API you propose. It seems you
>>> want to add
>>>
>>>> public class SlidingWindow extends TimeWindow {}
>>>
>>> and
>>>
>>>> public class SlidingWindows extends TimeWindows {} // or maybe `extends
>>> Windows<SlidingWindow>`
>>>
>>> If yes, should we add corresponding public Serdes classes?
>>>
>>> Also, can you list all newly added classes/methods explicitly in the
>> wiki?
>>>
>>>
>>> About the semantics of the operator.
>>>
 "Only one single window is defined at a time,"
>>>
>>> Should this be "one window per key" instead?
>>>
>>> I agree that both window boundaries should be inclusive. However, I am
>>> not sure about:
>>>
>>>> At most one record is forwarded when new data arrives
>>>
>>> (1) For what case, no output would be produced?
>>>
>>> (2) I think, if we advance in time, it can also happen that we emit
>>> multiple records. If a window "slides" (not "hops"), we cannot just
>>> advance it to the current record stream time but would need to emit 

Re: KIP-457: Add DISCONNECTED state to Kafka Streams

2019-04-16 Thread Matthias J. Sax
Thanks for the initiative.

In the motivation you mention that you want to use DISCONNECT to
indicate that the application was killed.

What is the difference to the existing DEAD state?

Also, the backing JIRA seems to have a different motivation for adding a
DISCONNECT state. There, the Kafka Streams application itself is
healthy, but it cannot connect to the brokers. It seems reasonable to
add a DISCONNECT for this case though.



-Matthias



On 4/16/19 9:30 AM, Richard Yu wrote:
> Hi all,
> 
> I would like to propose a small KIP on adding a new state to KafkaStreams#state().
> It is very simple, so this should pass relatively quickly!
> Here is the discussion link:
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-457%3A+Add+DISCONNECTED+status+to+Kafka+Streams
> 
> Cheers,
> Richard
> 





[jira] [Created] (KAFKA-8243) Flaky Test RequestQuotaTest#testResponseThrottleTimeWhenBothProduceAndRequestQuotasViolated

2019-04-16 Thread Matthias J. Sax (JIRA)
Matthias J. Sax created KAFKA-8243:
--

 Summary: Flaky Test 
RequestQuotaTest#testResponseThrottleTimeWhenBothProduceAndRequestQuotasViolated
 Key: KAFKA-8243
 URL: https://issues.apache.org/jira/browse/KAFKA-8243
 Project: Kafka
  Issue Type: Bug
  Components: core
Affects Versions: 2.0.1
Reporter: Matthias J. Sax
 Fix For: 2.0.2


[https://builds.apache.org/blue/organizations/jenkins/kafka-2.0-jdk8/detail/kafka-2.0-jdk8/248/tests]
{quote}java.util.concurrent.ExecutionException: java.lang.AssertionError: 
Throttle time metrics for produce quota not updated: Client 
small-quota-producer-client apiKey PRODUCE requests 1 requestTime 
0.018339216078392163 throttleTime 1000.0
at java.util.concurrent.FutureTask.report(FutureTask.java:122)
at java.util.concurrent.FutureTask.get(FutureTask.java:206)
at 
kafka.server.RequestQuotaTest$$anonfun$waitAndCheckResults$1.apply(RequestQuotaTest.scala:403)
at 
kafka.server.RequestQuotaTest$$anonfun$waitAndCheckResults$1.apply(RequestQuotaTest.scala:401)
at scala.collection.immutable.List.foreach(List.scala:392)
at 
scala.collection.generic.TraversableForwarder$class.foreach(TraversableForwarder.scala:35)
at scala.collection.mutable.ListBuffer.foreach(ListBuffer.scala:45)
at kafka.server.RequestQuotaTest.waitAndCheckResults(RequestQuotaTest.scala:401)
at 
kafka.server.RequestQuotaTest.testResponseThrottleTimeWhenBothProduceAndRequestQuotasViolated(RequestQuotaTest.scala:127){quote}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Jenkins build is back to normal : kafka-trunk-jdk8 #3558

2019-04-16 Thread Apache Jenkins Server
See 




Build failed in Jenkins: kafka-2.1-jdk8 #161

2019-04-16 Thread Apache Jenkins Server
See 


Changes:

[bill] KAFKA-8208: Change paper link directly to ASM (#6572)

--
[...truncated 255.85 KB...]
kafka.server.KafkaMetricReporterClusterIdTest > testClusterIdPresent PASSED

kafka.server.LogDirFailureTest > testIOExceptionDuringLogRoll STARTED

kafka.server.LogDirFailureTest > testIOExceptionDuringLogRoll PASSED

kafka.server.LogDirFailureTest > testIOExceptionDuringCheckpoint STARTED

kafka.server.LogDirFailureTest > testIOExceptionDuringCheckpoint PASSED

kafka.server.LogDirFailureTest > 
brokerWithOldInterBrokerProtocolShouldHaltOnLogDirFailure STARTED

kafka.server.LogDirFailureTest > 
brokerWithOldInterBrokerProtocolShouldHaltOnLogDirFailure PASSED

kafka.server.LogDirFailureTest > 
testReplicaFetcherThreadAfterLogDirFailureOnFollower STARTED

kafka.server.LogDirFailureTest > 
testReplicaFetcherThreadAfterLogDirFailureOnFollower PASSED

kafka.server.DelegationTokenRequestsWithDisableTokenFeatureTest > 
testDelegationTokenRequests STARTED

kafka.server.DelegationTokenRequestsWithDisableTokenFeatureTest > 
testDelegationTokenRequests PASSED

kafka.server.ReplicaFetcherThreadTest > 
shouldFetchLeaderEpochSecondTimeIfLeaderRepliesWithEpochNotKnownToFollower 
STARTED

kafka.server.ReplicaFetcherThreadTest > 
shouldFetchLeaderEpochSecondTimeIfLeaderRepliesWithEpochNotKnownToFollower 
PASSED

kafka.server.ReplicaFetcherThreadTest > 
shouldFetchLeaderEpochOnFirstFetchOnlyIfLeaderEpochKnownToBoth STARTED

kafka.server.ReplicaFetcherThreadTest > 
shouldFetchLeaderEpochOnFirstFetchOnlyIfLeaderEpochKnownToBoth PASSED

kafka.server.ReplicaFetcherThreadTest > 
shouldTruncateToInitialFetchOffsetIfLeaderReturnsUndefinedOffset STARTED

kafka.server.ReplicaFetcherThreadTest > 
shouldTruncateToInitialFetchOffsetIfLeaderReturnsUndefinedOffset PASSED

kafka.server.ReplicaFetcherThreadTest > 
shouldPollIndefinitelyIfLeaderReturnsAnyException STARTED

kafka.server.ReplicaFetcherThreadTest > 
shouldPollIndefinitelyIfLeaderReturnsAnyException PASSED

kafka.server.ReplicaFetcherThreadTest > 
shouldTruncateToOffsetSpecifiedInEpochOffsetResponse STARTED

kafka.server.ReplicaFetcherThreadTest > 
shouldTruncateToOffsetSpecifiedInEpochOffsetResponse PASSED

kafka.server.ReplicaFetcherThreadTest > shouldHandleExceptionFromBlockingSend 
STARTED

kafka.server.ReplicaFetcherThreadTest > shouldHandleExceptionFromBlockingSend 
PASSED

kafka.server.ReplicaFetcherThreadTest > 
shouldSendLatestRequestVersionsByDefault STARTED

kafka.server.ReplicaFetcherThreadTest > 
shouldSendLatestRequestVersionsByDefault PASSED

kafka.server.ReplicaFetcherThreadTest > 
shouldTruncateToOffsetSpecifiedInEpochOffsetResponseIfFollowerHasNoMoreEpochs 
STARTED

kafka.server.ReplicaFetcherThreadTest > 
shouldTruncateToOffsetSpecifiedInEpochOffsetResponseIfFollowerHasNoMoreEpochs 
PASSED

kafka.server.ReplicaFetcherThreadTest > 
shouldFetchLeaderEpochRequestIfLastEpochDefinedForSomePartitions STARTED

kafka.server.ReplicaFetcherThreadTest > 
shouldFetchLeaderEpochRequestIfLastEpochDefinedForSomePartitions PASSED

kafka.server.ReplicaFetcherThreadTest > 
shouldUseLeaderEndOffsetIfInterBrokerVersionBelow20 STARTED

kafka.server.ReplicaFetcherThreadTest > 
shouldUseLeaderEndOffsetIfInterBrokerVersionBelow20 PASSED

kafka.server.ReplicaFetcherThreadTest > 
shouldMovePartitionsOutOfTruncatingLogState STARTED

kafka.server.ReplicaFetcherThreadTest > 
shouldMovePartitionsOutOfTruncatingLogState PASSED

kafka.server.ReplicaFetcherThreadTest > 
shouldCatchExceptionFromBlockingSendWhenShuttingDownReplicaFetcherThread STARTED

kafka.server.ReplicaFetcherThreadTest > 
shouldCatchExceptionFromBlockingSendWhenShuttingDownReplicaFetcherThread PASSED

kafka.server.ReplicaFetcherThreadTest > 
shouldFilterPartitionsMadeLeaderDuringLeaderEpochRequest STARTED

kafka.server.ReplicaFetcherThreadTest > 
shouldFilterPartitionsMadeLeaderDuringLeaderEpochRequest PASSED

kafka.server.KafkaMetricReporterExceptionHandlingTest > 
testBothReportersAreInvoked STARTED

kafka.server.KafkaMetricReporterExceptionHandlingTest > 
testBothReportersAreInvoked PASSED

kafka.server.DelayedFetchTest > testFetchWithFencedEpoch STARTED

kafka.server.DelayedFetchTest > testFetchWithFencedEpoch PASSED

kafka.server.ServerGenerateBrokerIdTest > testBrokerMetadataOnIdCollision 
STARTED

kafka.server.ServerGenerateBrokerIdTest > testBrokerMetadataOnIdCollision PASSED

kafka.server.ServerGenerateBrokerIdTest > testAutoGenerateBrokerId STARTED

kafka.server.ServerGenerateBrokerIdTest > testAutoGenerateBrokerId PASSED

kafka.server.ServerGenerateBrokerIdTest > testMultipleLogDirsMetaProps STARTED

kafka.server.ServerGenerateBrokerIdTest > testMultipleLogDirsMetaProps PASSED

kafka.server.ServerGenerateBrokerIdTest > testDisableGeneratedBrokerId STARTED

kafka.server.ServerGenerateBrokerIdTest > testDisableGeneratedBrokerId PASSED

kafka.server.ServerGenerateBrokerIdTest > 

Build failed in Jenkins: kafka-2.0-jdk8 #248

2019-04-16 Thread Apache Jenkins Server
See 


Changes:

[bill] KAFKA-8210: Fix link for streams table duality (#6573)

--
[...truncated 437.53 KB...]

kafka.controller.PartitionStateMachineTest > 
testInvalidNonexistentPartitionToOfflinePartitionTransition PASSED

kafka.controller.PartitionStateMachineTest > 
testOnlinePartitionToOnlineTransition STARTED

kafka.controller.PartitionStateMachineTest > 
testOnlinePartitionToOnlineTransition PASSED

kafka.controller.PartitionStateMachineTest > 
testNewPartitionToOnlinePartitionTransitionZkUtilsExceptionFromCreateStates 
STARTED

kafka.controller.PartitionStateMachineTest > 
testNewPartitionToOnlinePartitionTransitionZkUtilsExceptionFromCreateStates 
PASSED

kafka.controller.PartitionStateMachineTest > 
testInvalidNewPartitionToNonexistentPartitionTransition STARTED

kafka.controller.PartitionStateMachineTest > 
testInvalidNewPartitionToNonexistentPartitionTransition PASSED

kafka.controller.PartitionStateMachineTest > 
testNewPartitionToOnlinePartitionTransition STARTED

kafka.controller.PartitionStateMachineTest > 
testNewPartitionToOnlinePartitionTransition PASSED

kafka.controller.PartitionStateMachineTest > 
testInvalidOnlinePartitionToNewPartitionTransition STARTED

kafka.controller.PartitionStateMachineTest > 
testInvalidOnlinePartitionToNewPartitionTransition PASSED

kafka.controller.PartitionStateMachineTest > 
testOfflinePartitionToOnlinePartitionTransitionErrorCodeFromStateLookup STARTED

kafka.controller.PartitionStateMachineTest > 
testOfflinePartitionToOnlinePartitionTransitionErrorCodeFromStateLookup PASSED

kafka.controller.PartitionStateMachineTest > 
testOnlinePartitionToOnlineTransitionForControlledShutdown STARTED

kafka.controller.PartitionStateMachineTest > 
testOnlinePartitionToOnlineTransitionForControlledShutdown PASSED

kafka.controller.PartitionStateMachineTest > 
testOfflinePartitionToOnlinePartitionTransitionZkUtilsExceptionFromStateLookup 
STARTED

kafka.controller.PartitionStateMachineTest > 
testOfflinePartitionToOnlinePartitionTransitionZkUtilsExceptionFromStateLookup 
PASSED

kafka.controller.PartitionStateMachineTest > 
testInvalidOnlinePartitionToNonexistentPartitionTransition STARTED

kafka.controller.PartitionStateMachineTest > 
testInvalidOnlinePartitionToNonexistentPartitionTransition PASSED

kafka.controller.PartitionStateMachineTest > 
testInvalidOfflinePartitionToNewPartitionTransition STARTED

kafka.controller.PartitionStateMachineTest > 
testInvalidOfflinePartitionToNewPartitionTransition PASSED

kafka.controller.PartitionStateMachineTest > 
testOfflinePartitionToOnlinePartitionTransition STARTED

kafka.controller.PartitionStateMachineTest > 
testOfflinePartitionToOnlinePartitionTransition PASSED

kafka.controller.ControllerEventManagerTest > testEventThatThrowsException 
STARTED

kafka.controller.ControllerEventManagerTest > testEventThatThrowsException 
PASSED

kafka.controller.ControllerEventManagerTest > testSuccessfulEvent STARTED

kafka.controller.ControllerEventManagerTest > testSuccessfulEvent PASSED

kafka.controller.ControllerFailoverTest > testHandleIllegalStateException 
STARTED

kafka.controller.ControllerFailoverTest > testHandleIllegalStateException PASSED

kafka.network.SocketServerTest > testGracefulClose STARTED

kafka.network.SocketServerTest > testGracefulClose PASSED

kafka.network.SocketServerTest > 
testSendActionResponseWithThrottledChannelWhereThrottlingAlreadyDone STARTED

kafka.network.SocketServerTest > 
testSendActionResponseWithThrottledChannelWhereThrottlingAlreadyDone PASSED

kafka.network.SocketServerTest > controlThrowable STARTED

kafka.network.SocketServerTest > controlThrowable PASSED

kafka.network.SocketServerTest > testRequestMetricsAfterStop STARTED

kafka.network.SocketServerTest > testRequestMetricsAfterStop PASSED

kafka.network.SocketServerTest > testConnectionIdReuse STARTED

kafka.network.SocketServerTest > testConnectionIdReuse PASSED

kafka.network.SocketServerTest > testClientDisconnectionUpdatesRequestMetrics 
STARTED

kafka.network.SocketServerTest > testClientDisconnectionUpdatesRequestMetrics 
PASSED

kafka.network.SocketServerTest > testProcessorMetricsTags STARTED

kafka.network.SocketServerTest > testProcessorMetricsTags PASSED

kafka.network.SocketServerTest > testMaxConnectionsPerIp STARTED

kafka.network.SocketServerTest > testMaxConnectionsPerIp PASSED

kafka.network.SocketServerTest > testConnectionId STARTED

kafka.network.SocketServerTest > testConnectionId PASSED

kafka.network.SocketServerTest > 
testBrokerSendAfterChannelClosedUpdatesRequestMetrics STARTED

kafka.network.SocketServerTest > 
testBrokerSendAfterChannelClosedUpdatesRequestMetrics PASSED

kafka.network.SocketServerTest > testNoOpAction STARTED

kafka.network.SocketServerTest > testNoOpAction PASSED

kafka.network.SocketServerTest > simpleRequest STARTED

kafka.network.SocketServerTest > simpleRequest PASSED


Jenkins build is back to normal : kafka-trunk-jdk11 #439

2019-04-16 Thread Apache Jenkins Server
See 




[jira] [Created] (KAFKA-8242) Exception in ReplicaFetcher blocks replication of all other partitions

2019-04-16 Thread Nevins Bartolomeo (JIRA)
Nevins Bartolomeo created KAFKA-8242:


 Summary: Exception in ReplicaFetcher blocks replication of all 
other partitions
 Key: KAFKA-8242
 URL: https://issues.apache.org/jira/browse/KAFKA-8242
 Project: Kafka
  Issue Type: Bug
Affects Versions: 1.1.1
Reporter: Nevins Bartolomeo


We're seeing the following exception in our replication threads. 
{code:java}
[2019-04-16 14:14:39,724] ERROR [ReplicaFetcher replicaId=15, leaderId=8, 
fetcherId=0] Error due to (kafka.server.ReplicaFetcherThread)
kafka.common.KafkaException: Error processing data for partition testtopic-123 
offset 9880379
at 
kafka.server.AbstractFetcherThread$$anonfun$processFetchRequest$2$$anonfun$apply$mcV$sp$1$$anonfun$apply$2.apply(AbstractFetcherThread.scala:204)
at 
kafka.server.AbstractFetcherThread$$anonfun$processFetchRequest$2$$anonfun$apply$mcV$sp$1$$anonfun$apply$2.apply(AbstractFetcherThread.scala:169)
at scala.Option.foreach(Option.scala:257)
at 
kafka.server.AbstractFetcherThread$$anonfun$processFetchRequest$2$$anonfun$apply$mcV$sp$1.apply(AbstractFetcherThread.scala:169)
at 
kafka.server.AbstractFetcherThread$$anonfun$processFetchRequest$2$$anonfun$apply$mcV$sp$1.apply(AbstractFetcherThread.scala:166)
at 
scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
at 
kafka.server.AbstractFetcherThread$$anonfun$processFetchRequest$2.apply$mcV$sp(AbstractFetcherThread.scala:166)
at 
kafka.server.AbstractFetcherThread$$anonfun$processFetchRequest$2.apply(AbstractFetcherThread.scala:166)
at 
kafka.server.AbstractFetcherThread$$anonfun$processFetchRequest$2.apply(AbstractFetcherThread.scala:166)
at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:250)
at 
kafka.server.AbstractFetcherThread.processFetchRequest(AbstractFetcherThread.scala:164)
at kafka.server.AbstractFetcherThread.doWork(AbstractFetcherThread.scala:111)
at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:82)
Caused by: 
org.apache.kafka.common.errors.TransactionCoordinatorFencedException: Invalid 
coordinator epoch: 27 (zombie), 31 (current)
{code}
While this is an issue in itself, the larger issue is that this exception kills
the replication threads, so no other partitions get replicated to this broker.
That a single corrupt partition can affect the availability of multiple topics
is a great concern to us.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
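
A rough sketch of the per-partition isolation the reporter is asking for (hypothetical helper names; this is not Kafka's actual fetcher code): catch per-partition failures and quarantine only the offending partition instead of letting the exception escape and kill the whole thread.

{code:java}
import java.util.Set;
import org.apache.kafka.common.TopicPartition;

public class FetchLoopSketch {
    // processPartitionData and markPartitionFailed are hypothetical stand-ins.
    void processFetchResponse(final Set<TopicPartition> partitions) {
        for (final TopicPartition tp : partitions) {
            try {
                processPartitionData(tp);
            } catch (final Exception e) {
                // Quarantine only this partition; the fetcher thread keeps
                // replicating all other partitions it is responsible for.
                markPartitionFailed(tp, e);
            }
        }
    }

    void processPartitionData(final TopicPartition tp) { /* ... */ }
    void markPartitionFailed(final TopicPartition tp, final Exception e) { /* ... */ }
}
{code}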


Re: [DISCUSS] KIP-450: Sliding Window Aggregations in the DSL

2019-04-16 Thread Guozhang Wang
Regarding 4): yes I agree with you that invertibility is not a common
property for agg-functions. Just to be clear about our current APIs: for
stream.aggregate we only require a single Adder function, whereas for
table.aggregate we require both an Adder and a Subtractor. The Subtractor is
not used to leverage any algebraic property; it is needed because the incoming
table changelog stream may contain "tombstones", and we must negate the effect
of the previous record that a tombstone deletes.

What I'm proposing is exactly having two APIs, one for Adder only (like
other Streams aggregations) and one for Subtractor + Adder (for agg
functions users think are invertible) for efficiency. Some other frameworks
(e.g., Spark) offer similar options to users and recommend using the
latter so that implementation-level optimizations can be applied.


Guozhang

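A minimal sketch of the distinction Guozhang describes, outside the Streams API (both aggregators are hypothetical): with an invertible function such as sum, an expiring record can be subtracted out of the running aggregate in O(1), while a non-invertible function such as max must re-scan the raw values still retained in the window.

{code:java}
import java.util.Collection;
import java.util.Collections;

public class SlidingAggregates {
    // Invertible: Adder + Subtractor keep the aggregate in O(1) per update.
    static long sumAdd(final long agg, final long value)      { return agg + value; }
    static long sumSubtract(final long agg, final long value) { return agg - value; }

    // Not invertible: when the current max expires, there is no way to
    // "un-apply" it; we must recompute from all raw values in the window.
    static long maxRecompute(final Collection<Long> windowValues) {
        return Collections.max(windowValues);
    }
}
{code}
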
On Mon, Apr 15, 2019 at 12:29 PM Sophie Blee-Goldman 
wrote:

> Thanks for the feedback Matthias and Bill. After discussing offline we
> realized the type of windows I originally had in mind were quite different,
> and I agree now that the semantics outlined by Matthias are the direction
> to go in here. I will update the KIP accordingly with the new semantics
> (and corresponding design) and restart the discussion from there.
>
> In the meantime, to respond to some other points:
>
> 1) API:
>
> I propose adding only the one class -- public class SlidingWindows extends
> Windows<TimeWindow> {} -- so I do not believe we need any new Serdes? It
> will still be a fixed-size TimeWindow, but handled a bit differently. I've
> updated the KIP to state explicitly all of the classes/methods being added
>
> 2) Zero grace period
>
> The "zero grace period" was essentially just consequence of my original
> definition for sliding windows; with the new semantics we can (and should)
> allow for a nonzero grace period
>
> 3) Wall-clock time
>
> Hm, I had not considered this yet but it may be a good idea to keep in mind
> while rethinking the design. To clarify, we don't support wall-clock based
> aggregations with hopping or tumbling windows though (yet?)
>
> 4) Commutative vs associative vs invertible aggregations
>
> I agree that it's reasonable to assume commutativity and associativity, but
> that's not the same as being subtractable -- that requires invertibility,
> which is broken by a lot of very simple functions and is not, I think, ok
> to assume. However we could consider adding a separate API which also takes
> a subtractor and corresponds to a completely different implementation. We
> could also consider an API that takes a function that aggregates two
> aggregates together in addition to the existing aggregator (which
> aggregates a single value with an existing aggregate) WDYT?
>
>
>
>
> On Thu, Apr 11, 2019 at 1:13 AM Matthias J. Sax 
> wrote:
>
> > Thanks for the KIP Sophie. Couple of comments:
> >
> > It's a little unclear to me what public API you propose. It seems you
> > want to add
> >
> > > public class SlidingWindow extends TimeWindow {}
> >
> > and
> >
> > > public class SlidingWindows extends TimeWindows {} // or maybe `extends
> > Windows<SlidingWindow>`
> >
> > If yes, should we add corresponding public Serdes classes?
> >
> > Also, can you list all newly added classes/methods explicitly in the
> wiki?
> >
> >
> > About the semantics of the operator.
> >
> > > "Only one single window is defined at a time,"
> >
> > Should this be "one window per key" instead?
> >
> > I agree that both window boundaries should be inclusive. However, I am
> > not sure about:
> >
> > > At most one record is forwarded when new data arrives
> >
> > (1) For what case, no output would be produced?
> >
> > (2) I think, if we advance in time, it can also happen that we emit
> > multiple records. If a window "slides" (not "hops"), we cannot just
> > advance it to the current record stream time but would need to emit more
> > results if records expire before the current input record is added. For
> > example, consider a window with size 5ms, and the following ts (all
> > records have the same key):
> >
> > 1 2 3 10 11
> >
> > This should result in windows:
> >
> > [1]
> > [1,2]
> > [1,2,3]
> > [2,3]
> > [3]
> > [10]
> > [10,11]
> >
> > Ie, when the record with ts=10 is processed, it will trigger the
> > computation of [2,3], [3] and [10].
> >
> >
> > About out-of-order handling: I am wondering whether the current design,
> > which does not allow any grace period, is too restrictive. Can you
> > elaborate more on the motivation for this suggestion?
> >
> >
> > Can you give more details about the "simple design"? Atm, it's not clear
> > to me how it works. I thought we always need to store all raw values. If
> > we only store the current aggregate, would we end up with the same
> > inefficient solution as using a hopping window with advance 1ms?
> >
> >
> > For the O(sqrt(N)) proposal: can you maybe add an example with concrete
> > bucket sizes, window size, etc.? The current proposal is a little unclear
> > to me atm.
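
Matthias's worked example above (window size 5 ms, timestamps 1 2 3 10 11) can be reproduced with a small sketch that is independent of the Streams API; the deque holds the raw records currently inside the window, and a new window content is emitted whenever a record enters or expires:

{code:java}
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;

public class SlidingWindowDemo {
    public static void main(final String[] args) {
        final long size = 5L;                     // window size in ms
        final long[] timestamps = {1, 2, 3, 10, 11};
        final Deque<Long> window = new ArrayDeque<>();
        for (final long ts : timestamps) {
            // Expire records that fall out of [ts - size + 1, ts]; each
            // expiration changes the window content and triggers an emit.
            while (!window.isEmpty() && window.peekFirst() <= ts - size) {
                window.pollFirst();
                if (!window.isEmpty()) {
                    System.out.println(new ArrayList<>(window));
                }
            }
            window.addLast(ts);
            System.out.println(new ArrayList<>(window));
        }
        // Prints: [1] [1, 2] [1, 2, 3] [2, 3] [3] [10] [10, 11]
    }
}
{code}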

Build failed in Jenkins: kafka-1.1-jdk7 #257

2019-04-16 Thread Apache Jenkins Server
See 


Changes:

[bill] KAFKA-8209: Wrong link for KStreams DSL in core concepts doc (#6564)

--
[...truncated 1.51 MB...]
W extends Window declared in method count(Windows<W>,StateStoreSupplier<WindowStore>)
K extends Object declared in interface KGroupedStream
:297:
 warning: [deprecation] count(Windows<W>) in KGroupedStream has been deprecated
public <W extends Window> KTable<Windowed<K>, Long> count(final Windows<W> windows) {
^
  where W,K are type-variables:
W extends Window declared in method count(Windows<W>)
K extends Object declared in interface KGroupedStream
:289:
 warning: [deprecation] count(Windows<W>,String) in KGroupedStream has been deprecated
public <W extends Window> KTable<Windowed<K>, Long> count(final Windows<W> windows,
^
  where W,K are type-variables:
W extends Window declared in method count(Windows<W>,String)
K extends Object declared in interface KGroupedStream
:272:
 warning: [deprecation] count(StateStoreSupplier<KeyValueStore>) in KGroupedStream has been deprecated
public KTable<K, Long> count(final org.apache.kafka.streams.processor.StateStoreSupplier<KeyValueStore> storeSupplier) {
   ^
  where K is a type-variable:
K extends Object declared in interface KGroupedStream
:260:
 warning: [deprecation] count(String) in KGroupedStream has been deprecated
public KTable<K, Long> count(final String queryableStoreName) {
   ^
  where K is a type-variable:
K extends Object declared in interface KGroupedStream
:174:
 warning: [deprecation] punctuate(long) in Processor has been deprecated
public void punctuate(long timestamp) {
^
:110:
 warning: [deprecation] schedule(long) in ProcessorContext has been deprecated
public void schedule(final long interval) {
^
:826:
 warning: [deprecation] leftJoin(KTable<K,VT>,ValueJoiner<? super V,? super VT,? extends VR>,Serde<K>,Serde<V>) in KStream has been deprecated
public <VT, VR> KStream<K, VR> leftJoin(final KTable<K, VT> other,
 ^
  where VT,VR,K,V are type-variables:
VT extends Object declared in method leftJoin(KTable<K,VT>,ValueJoiner<? super V,? super VT,? extends VR>,Serde<K>,Serde<V>)
VR extends Object declared in method leftJoin(KTable<K,VT>,ValueJoiner<? super V,? super VT,? extends VR>,Serde<K>,Serde<V>)
K extends Object declared in interface KStream
V extends Object declared in interface KStream
:751:
 warning: [deprecation] join(KTable<K,VT>,ValueJoiner<? super V,? super VT,? extends VR>,Serde<K>,Serde<V>) in KStream has been deprecated
public <VT, VR> KStream<K, VR> join(final KTable<K, VT> other,
 ^
  where VT,VR,K,V are type-variables:
VT extends Object declared in method join(KTable<K,VT>,ValueJoiner<? super V,? super VT,? extends VR>,Serde<K>,Serde<V>)
VR extends Object declared in method join(KTable<K,VT>,ValueJoiner<? super V,? super VT,? extends VR>,Serde<K>,Serde<V>)
K extends Object declared in interface KStream
V extends Object declared in interface KStream
:593:
 warning: [deprecation] outerJoin(KStream<K,VO>,ValueJoiner<? super V,? super VO,? extends VR>,JoinWindows,Serde<K>,Serde<V>,Serde<VO>) in KStream has been deprecated
public <VO, VR> KStream<K, VR> outerJoin(final KStream<K, VO> other,
 ^
  where VO,VR,K,V are type-variables:
VO extends Object declared in method outerJoin(KStream<K,VO>,ValueJoiner<? super V,? super VO,? extends VR>,JoinWindows,Serde<K>,Serde<V>,Serde<VO>)
VR extends Object declared in method outerJoin(KStream<K,VO>,ValueJoiner<? super V,? super VO,? extends VR>,JoinWindows,Serde<K>,Serde<V>,Serde<VO>)
K extends Object declared in interface KStream
V extends Object declared in interface KStream
:695:
 warning: [deprecation] leftJoin(KStream<K,VO>,ValueJoiner<? super V,? super VO,? extends VR>,JoinWindows,Serde<K>,Serde<V>,Serde<VO>) in KStream has been deprecated
public  

KIP-457: Add DISCONNECTED state to Kafka Streams

2019-04-16 Thread Richard Yu
Hi all,

I would like to propose a small KIP on adding a new state to KafkaStreams#state().
It is very simple, so this should pass relatively quickly!
Here is the discussion link:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-457%3A+Add+DISCONNECTED+status+to+Kafka+Streams

Cheers,
Richard


[jira] [Resolved] (KAFKA-7875) Add KStream#flatTransformValues

2019-04-16 Thread Bill Bejeck (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7875?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck resolved KAFKA-7875.

Resolution: Fixed

> Add KStream#flatTransformValues
> ---
>
> Key: KAFKA-7875
> URL: https://issues.apache.org/jira/browse/KAFKA-7875
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Matthias J. Sax
>Assignee: Bruno Cadonna
>Priority: Major
>  Labels: kip
>
> Part of KIP-313: 
> [https://cwiki.apache.org/confluence/display/KAFKA/KIP-313%3A+Add+KStream.flatTransform+and+KStream.flatTransformValues]
>  
> Compare https://issues.apache.org/jira/browse/KAFKA-4217



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
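
For reference, a sketch of the new operator's shape per KIP-313 (signatures assumed from the KIP, not verified against the merged code); it splits each line into words while keeping access to the ProcessorContext, which plain flatMapValues does not provide:

{code:java}
import java.util.Arrays;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.ValueTransformer;
import org.apache.kafka.streams.processor.ProcessorContext;

public class FlatTransformValuesSketch {
    static KStream<String, String> words(final StreamsBuilder builder) {
        final KStream<String, String> lines = builder.stream("lines"); // hypothetical topic
        return lines.flatTransformValues(() -> new ValueTransformer<String, Iterable<String>>() {
            private ProcessorContext context;

            @Override
            public void init(final ProcessorContext context) {
                this.context = context; // kept for access to headers, timestamps, state stores
            }

            @Override
            public Iterable<String> transform(final String value) {
                // Zero or more output values per input value.
                return Arrays.asList(value.split("\\s+"));
            }

            @Override
            public void close() { }
        });
    }
}
{code}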


Re: [VOTE] KIP-328: Ability to suppress updates for KTables

2019-04-16 Thread John Roesler
Hi again, all,

Casey Green pointed out that we overlooked the Scala API when we added
Suppress. He was kind enough to send
https://github.com/apache/kafka/pull/6314 to correct this, and we also
updated the KIP. Since it's essentially just copying the existing Java API
over to Scala, we didn't create a new KIP.

Note that we don't plan to treat this as a bug, and therefore don't
currently plan to backport the Scala Suppress API to 2.1 or 2.2.

Thanks,
-John

On Fri, Nov 16, 2018 at 3:54 PM Bill Bejeck  wrote:

> Hi John,
>
> Thanks for the update, I'm +1 on changes and my +1 vote stands.
>
> -Bill
>
> On Fri, Nov 16, 2018 at 4:19 PM John Roesler  wrote:
>
> > Hi all, sorry to do this again, but during review of the code to add the
> > metrics proposed in this KIP, the reviewers and I noticed some
> > inconsistencies and drawbacks of the metrics I proposed in the KIP.
> >
> > Here's the diff:
> >
> >
> https://cwiki.apache.org/confluence/pages/diffpagesbyversion.action?pageId=87295409=24=23
> >
> > * The proposed metrics were all INFO level, but they would be updated on
> > every record, creating a performance concern, so they have been demoted to
> > DEBUG level. If we can refactor the metrics framework in the future to
> > resolve this concern, we may move the metrics back to INFO level.
> > * having separate metrics for memory and disk buffers is unnecessarily
> > complex. The main utility is determining how close the buffer is to the
> > configured limit, which makes a single metric more useful. I've combined
> > them into one "suppression-buffer-size-*" metric.
> > * The "intermediate-result-suppression-*" metric would be equivalent to
> the
> > "process-*" metric which is already available on the ProcessorNode. I've
> > removed it from the KIP.
> > * The "suppression-mem-buffer-evict-*" metric had been proposed as a
> buffer
> > metric, but it makes more sense as a processor node metric, since its
> > counterpart is the "process-*" metric. I've replaced it with a processor
> > node metric, "suppression-emit-*"
> >
> > Let me know if you want to recast votes in response to this change.
> >
> > -John
> >
> > On Thu, Oct 4, 2018 at 11:26 AM John Roesler  wrote:
> >
> > > Update: Here's a link to the documented eviction behavior:
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-328%3A+Ability+to+suppress+updates+for+KTables#KIP-328:AbilitytosuppressupdatesforKTables-BufferEvictionBehavior(akaSuppressEmitBehavior)
> > >
> > > On Thu, Oct 4, 2018 at 11:12 AM John Roesler 
> wrote:
> > >
> > >> Hello again, all,
> > >>
> > >> During review, we realized that there is a relationship between this
> > >> (KIP-328) and KIP-372.
> > >>
> > >> KIP-372 proposed to allow naming *all* internal topics, and KIP-328
> adds
> > >> a new internal topic (the changelog for the suppression buffer).
> > >>
> > >> However, we didn't consider this relationship in either KIP
> discussion,
> > >> possibly since they were discussed and accepted concurrently.
> > >>
> > >> I have updated KIP-328 to effectively "merge" the two KIPs by adding a
> > >> `withName` builder to Suppressed in the style of the other builders
> > added
> > >> in KIP-372:
> > >>
> >
> https://cwiki.apache.org/confluence/pages/diffpagesbyversion.action?pageId=87295409=20=19
> > >> .
> > >>
> > >> I think this should be uncontroversial, but as always, let me know of
> > any
> > >> objections you may have.
> > >>
> > >>
> > >> Also, note that I'll be updating the KIP to document the exact buffer
> > >> eviction behavior. I previously treated this as an internal
> > implementation
> > >> detail, but after consideration, I think users would want to know the
> > >> eviction semantics, especially if they are debugging their
> applications
> > and
> > >> scrutinizing the sequence of emitted records.
> > >>
> > >> Thanks,
> > >> -John
> > >>
> > >> On Thu, Sep 20, 2018 at 5:34 PM John Roesler 
> wrote:
> > >>
> > >>> Hello all,
> > >>>
> > >>> During review of https://github.com/apache/kafka/pull/5567 for
> > KIP-328,
> > >>> the reviewers raised many good suggestions for the API.
> > >>>
> > >>> The basic design of the suppress operation remains the same, but the
> > >>> config object is (in my opinion) far more ergonomic with their
> > >>> suggestions.
> > >>>
> > >>> I have updated the KIP to reflect the new config (
> > >>>
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-328%3A+Ability+to+suppress+updates+for+KTables#KIP-328:AbilitytosuppressupdatesforKTables-NewSuppressOperator
> > >>> )
> > >>>
> > >>> Please let me know if anyone wishes to change their vote, and we call
> > >>> for a recast.
> > >>>
> > >>> Thanks,
> > >>> -John
> > >>>
> > >>> On Thu, Aug 23, 2018 at 12:54 PM Matthias J. Sax <
> > matth...@confluent.io>
> > >>> wrote:
> > >>>
> >  It seems nobody has any objections against the change.
> > 
> >  Thanks for the KIP improvement. I'll go ahead and merge the PR.
> > 
> > 
> >  -Matthias
> > 
> >  On 8/21/18 2:44 PM, John Roesler 
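
To make the shape of the final API concrete, a hedged sketch of windowed suppression with the named buffer (Java DSL; 2.1+ method names assumed, topic names hypothetical):

{code:java}
import java.time.Duration;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.Suppressed;
import org.apache.kafka.streams.kstream.TimeWindows;
import static org.apache.kafka.streams.kstream.Suppressed.BufferConfig.unbounded;

public class SuppressSketch {
    static void build(final StreamsBuilder builder) {
        final KStream<String, String> input = builder.stream("events"); // hypothetical topic
        input.groupByKey()
             .windowedBy(TimeWindows.of(Duration.ofMinutes(5)).grace(Duration.ofMinutes(1)))
             .count()
             // Emit each window's final count only once its grace period has
             // passed; withName also names the suppression buffer (KIP-372 style).
             .suppress(Suppressed.untilWindowCloses(unbounded()).withName("final-counts"))
             .toStream()
             .to("counts"); // hypothetical topic
    }
}
{code}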

Re: [DISCUSS] KIP-449: Add connector contexts to Connect worker logs

2019-04-16 Thread Jeremy Custenborder
Chris is correct: the examples are mixed, but it's pretty easy to
follow. From what I have gathered, it looks like manipulation of the
context would be handled by the framework and not necessarily by the
connector developer. I'm not sure what benefit a connector developer
would get from manipulating the connector context further. If we head
down the path of allowing developers to extend the context, I would
prefer the output format of %X{connector.context} to be key/value,
something like "connector=myconnector task=0".

The state of the corresponding pull request looks really good as is. I
would be fine with merging it as is or expanding it to write the
context as key value.

On Mon, Apr 15, 2019 at 12:55 PM Chris Egerton  wrote:
>
> Hi Randall,
>
> Thanks for the KIP. Debugging Connect workers definitely becomes harder as
> the number of connectors and tasks increases, and your proposal would
> simplify the process of sifting through logs and finding relevant
> information faster and more accurately.
>
> I have a couple small comments:
>
> First--I believe the example snippet in your KIP under the "Public
> Interfaces" header is inaccurate:
> `[my-connector|worker]` - used on log messages where the Connect worker is
> validating the configuration for or starting/stopping the
> "local-file-source" connector via the SourceConnector / SinkConnector
> implementation methods.
> `[my-connector|task-0]` - used on log messages where the Connect worker is
> executing task 0 of the "local-file-source" connector, including calling
> any of the SourceTask / SinkTask implementation methods, processing the
> messages for/from the task, and calling the task's producer/consumer.
> `[my-connector|task-0|offsets]` - used on log messages where the Connect
> worker is committing source offsets for task 0 of the "local-file-source"
> connector.
> The sample contexts mention "my-connector" but their explanations say that
> they correspond to "local-file-source"; shouldn't the two align?
>
> Second--I'm unclear on whether we expect (or want to encourage) developers
> to manipulate the "connector.context" MDC key themselves, from within
> connectors, transforms, etc. If we want to encourage this (in order to make
> debugging even easier, which would align with the intent behind this KIP),
> we may want to expose the LoggingContext class in the Connect API package
> and expand on it so that users can set the context themselves. This would
> be especially helpful in connectors with multithreaded logic. However, if
> that would expand the scope of this KIP too much I think we could afford to
> address that later.
>
> Cheers,
>
> Chris
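
If the context were exposed as Chris suggests, connector code could scope it with plain SLF4J MDC calls; a hedged sketch (the key name follows the KIP, everything else is hypothetical) that pairs with a log4j pattern containing %X{connector.context}:

{code:java}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.slf4j.MDC;

public class ConnectorContextSketch {
    private static final Logger log = LoggerFactory.getLogger(ConnectorContextSketch.class);

    void runTask(final String connectorName, final int taskId) {
        // Matches the proposed "[connector|task-N] " shape; set per worker thread.
        MDC.put("connector.context", "[" + connectorName + "|task-" + taskId + "] ");
        try {
            log.info("polling records"); // rendered with the context prefix
        } finally {
            MDC.remove("connector.context"); // do not leak context across pooled threads
        }
    }
}
{code}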


Re: Request KIP Permissions

2019-04-16 Thread Bill Bejeck
You're all set now!  I look forward to your KIP.

Thanks,
Bill

On Tue, Apr 16, 2019 at 1:15 AM Jukka Karvanen 
wrote:

> Hi,
>
> Could you please grant me write access to KIP proposals?
> I am planning to make KIP for KAFKA-8233: Helper class to make it simpler
> to write test logic with TopologyTestDriver
> Wiki ID: jkarvanen
>
> Best regards,
> Jukka Karvanen
>


Build failed in Jenkins: kafka-trunk-jdk8 #3557

2019-04-16 Thread Apache Jenkins Server
See 


Changes:

[wangguoz] KAFKA-7778: Add KTable.suppress to Scala API (#6314)

[vahid.hashemian] KAFKA-7471: Multiple Consumer Group Management Feature (#5726)

--
[...truncated 2.03 MB...]

org.apache.kafka.streams.state.internals.TimetampedSegmentsTest > 
shouldRollSegments PASSED

org.apache.kafka.streams.state.internals.TimetampedSegmentsTest > 
shouldUpdateSegmentFileNameFromOldColonFormatToNewFormat STARTED

org.apache.kafka.streams.state.internals.TimetampedSegmentsTest > 
shouldUpdateSegmentFileNameFromOldColonFormatToNewFormat PASSED

org.apache.kafka.streams.state.internals.TimetampedSegmentsTest > 
shouldGetSegmentNameFromId STARTED

org.apache.kafka.streams.state.internals.TimetampedSegmentsTest > 
shouldGetSegmentNameFromId PASSED

org.apache.kafka.streams.state.internals.TimetampedSegmentsTest > 
shouldClearSegmentsOnClose STARTED

org.apache.kafka.streams.state.internals.TimetampedSegmentsTest > 
shouldClearSegmentsOnClose PASSED

org.apache.kafka.streams.state.internals.TimetampedSegmentsTest > 
shouldGetSegmentForTimestamp STARTED

org.apache.kafka.streams.state.internals.TimetampedSegmentsTest > 
shouldGetSegmentForTimestamp PASSED

org.apache.kafka.streams.state.internals.CompositeReadOnlySessionStoreTest > 
shouldThrowNPEIfFromKeyIsNull STARTED

org.apache.kafka.streams.state.internals.CompositeReadOnlySessionStoreTest > 
shouldThrowNPEIfFromKeyIsNull PASSED

org.apache.kafka.streams.state.internals.CompositeReadOnlySessionStoreTest > 
shouldNotGetValueFromOtherStores STARTED

org.apache.kafka.streams.state.internals.CompositeReadOnlySessionStoreTest > 
shouldNotGetValueFromOtherStores PASSED

org.apache.kafka.streams.state.internals.CompositeReadOnlySessionStoreTest > 
shouldReturnEmptyIteratorIfNoData STARTED

org.apache.kafka.streams.state.internals.CompositeReadOnlySessionStoreTest > 
shouldReturnEmptyIteratorIfNoData PASSED

org.apache.kafka.streams.state.internals.CompositeReadOnlySessionStoreTest > 
shouldThrowNPEIfToKeyIsNull STARTED

org.apache.kafka.streams.state.internals.CompositeReadOnlySessionStoreTest > 
shouldThrowNPEIfToKeyIsNull PASSED

org.apache.kafka.streams.state.internals.CompositeReadOnlySessionStoreTest > 
shouldThrowInvalidStateStoreExceptionIfSessionFetchThrows STARTED

org.apache.kafka.streams.state.internals.CompositeReadOnlySessionStoreTest > 
shouldThrowInvalidStateStoreExceptionIfSessionFetchThrows PASSED

org.apache.kafka.streams.state.internals.CompositeReadOnlySessionStoreTest > 
shouldFindValueForKeyWhenMultiStores STARTED

org.apache.kafka.streams.state.internals.CompositeReadOnlySessionStoreTest > 
shouldFindValueForKeyWhenMultiStores PASSED

org.apache.kafka.streams.state.internals.CompositeReadOnlySessionStoreTest > 
shouldThrowNPEIfKeyIsNull STARTED

org.apache.kafka.streams.state.internals.CompositeReadOnlySessionStoreTest > 
shouldThrowNPEIfKeyIsNull PASSED

org.apache.kafka.streams.state.internals.CompositeReadOnlySessionStoreTest > 
shouldThrowNullPointerExceptionIfFetchingNullKey STARTED

org.apache.kafka.streams.state.internals.CompositeReadOnlySessionStoreTest > 
shouldThrowNullPointerExceptionIfFetchingNullKey PASSED

org.apache.kafka.streams.state.internals.CompositeReadOnlySessionStoreTest > 
shouldFetchResulstFromUnderlyingSessionStore STARTED

org.apache.kafka.streams.state.internals.CompositeReadOnlySessionStoreTest > 
shouldFetchResulstFromUnderlyingSessionStore PASSED

org.apache.kafka.streams.state.internals.CompositeReadOnlySessionStoreTest > 
shouldThrowInvalidStateStoreExceptionOnRebalance STARTED

org.apache.kafka.streams.state.internals.CompositeReadOnlySessionStoreTest > 
shouldThrowInvalidStateStoreExceptionOnRebalance PASSED

org.apache.kafka.streams.state.internals.CompositeReadOnlySessionStoreTest > 
shouldFetchKeyRangeAcrossStores STARTED

org.apache.kafka.streams.state.internals.CompositeReadOnlySessionStoreTest > 
shouldFetchKeyRangeAcrossStores PASSED

org.apache.kafka.streams.state.internals.MergedSortedCacheKeyValueBytesStoreIteratorTest
 > shouldNotHaveNextIfOnlyCacheItemsAndAllDeleted STARTED

org.apache.kafka.streams.state.internals.MergedSortedCacheKeyValueBytesStoreIteratorTest
 > shouldNotHaveNextIfOnlyCacheItemsAndAllDeleted PASSED

org.apache.kafka.streams.state.internals.MergedSortedCacheKeyValueBytesStoreIteratorTest
 > shouldNotHaveNextIfAllCachedItemsDeleted STARTED

org.apache.kafka.streams.state.internals.MergedSortedCacheKeyValueBytesStoreIteratorTest
 > shouldNotHaveNextIfAllCachedItemsDeleted PASSED

org.apache.kafka.streams.state.internals.MergedSortedCacheKeyValueBytesStoreIteratorTest
 > shouldIterateOverRange STARTED

org.apache.kafka.streams.state.internals.MergedSortedCacheKeyValueBytesStoreIteratorTest
 > shouldIterateOverRange PASSED

org.apache.kafka.streams.state.internals.MergedSortedCacheKeyValueBytesStoreIteratorTest
 > 

[jira] [Resolved] (KAFKA-8235) NoSuchElementException when restoring state after a clean shutdown of a Kafka Streams application

2019-04-16 Thread Matthias J. Sax (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8235?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax resolved KAFKA-8235.

Resolution: Invalid

Closing this as "invalid" because the reported issue is part of a PR that is
not merged yet.

I linked this ticket on the PR so it gets fixed before the PR is merged. Thanks 
for pointing it out. Next time, just comment on the PR. Thanks!

> NoSuchElementException when restoring state after a clean shutdown of a Kafka 
> Streams application
> -
>
> Key: KAFKA-8235
> URL: https://issues.apache.org/jira/browse/KAFKA-8235
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 2.3.0
> Environment: Linux, CentOS 7, Java 1.8.0_191, 50 partitions per 
> topic, replication factor 3
>Reporter: Andrew Klopper
>Priority: Major
>
> While performing a larger scale test of a new Kafka Streams application that 
> performs aggregation and suppression, we have discovered that we are unable 
> to restart the application after a clean shutdown. The error that is logged 
> is:
> {code:java}
> [rollupd-5ee1997b-8302-4e88-b3db-a74c8b805a6c-StreamThread-1] ERROR 
> org.apache.kafka.streams.processor.internals.StreamThread - stream-thread 
> [rollupd-5ee1997b-8302-4e88-b3db-a74c8b805a6c-StreamThread-1] Encountered the 
> following error during processing:
> java.util.NoSuchElementException
> at java.util.TreeMap.key(TreeMap.java:1327)
> at java.util.TreeMap.firstKey(TreeMap.java:290)
> at 
> org.apache.kafka.streams.state.internals.InMemoryTimeOrderedKeyValueBuffer.restoreBatch(InMemoryTimeOrderedKeyValueBuffer.java:288)
> at 
> org.apache.kafka.streams.processor.internals.CompositeRestoreListener.restoreBatch(CompositeRestoreListener.java:89)
> at 
> org.apache.kafka.streams.processor.internals.StateRestorer.restore(StateRestorer.java:92)
> at 
> org.apache.kafka.streams.processor.internals.StoreChangelogReader.processNext(StoreChangelogReader.java:317)
> at 
> org.apache.kafka.streams.processor.internals.StoreChangelogReader.restore(StoreChangelogReader.java:92)
> at 
> org.apache.kafka.streams.processor.internals.TaskManager.updateNewAndRestoringTasks(TaskManager.java:328)
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:866)
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:804)
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:773)
> {code}
> The issue doesn't seem to occur for small amounts of data, but it doesn't 
> take a particularly large amount of data to trigger the problem either.
> Any assistance would be greatly appreciated.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
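
The stack trace points at TreeMap#firstKey(), which throws NoSuchElementException on an empty map; a minimal sketch of the guard shape the fix implies (names hypothetical, not the actual buffer code):

{code:java}
import java.util.TreeMap;

public class RestoreGuardSketch {
    static Long minBufferedTimestamp(final TreeMap<Long, byte[]> sortedMap) {
        // firstKey() on an empty TreeMap throws NoSuchElementException,
        // so check emptiness before peeking at the eviction order.
        return sortedMap.isEmpty() ? null : sortedMap.firstKey();
    }
}
{code}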


[jira] [Created] (KAFKA-8241) Dynamic update of keystore fails on listener without truststore

2019-04-16 Thread Rajini Sivaram (JIRA)
Rajini Sivaram created KAFKA-8241:
-

 Summary: Dynamic update of keystore fails on listener without 
truststore
 Key: KAFKA-8241
 URL: https://issues.apache.org/jira/browse/KAFKA-8241
 Project: Kafka
  Issue Type: Bug
  Components: core
Affects Versions: 2.1.1, 2.2.0, 2.0.1, 1.1.1
Reporter: Rajini Sivaram
Assignee: Rajini Sivaram
 Fix For: 2.3.0, 2.2.1


Validation of dynamically updated keystores and truststores assumes that both
are present. On brokers with only a keystore and no truststore configured,
dynamic update fails with an NPE.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
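
A minimal sketch of the guard the description implies (hypothetical names, not the broker's actual validation code): only run truststore checks when a truststore is actually configured on the listener.

{code:java}
import java.security.KeyStore;

public class DynamicConfigValidationSketch {
    // oldTruststore/newTruststore may legitimately be null on listeners
    // that configure only a keystore.
    static void validate(final KeyStore newKeystore, final KeyStore newTruststore,
                         final KeyStore oldTruststore) {
        if (newKeystore == null) {
            throw new IllegalArgumentException("keystore must be configured");
        }
        // Guard against the NPE: skip truststore checks when none is configured.
        if (oldTruststore != null && newTruststore != null) {
            // ... compare old and new truststore entries here ...
        }
    }
}
{code}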


[jira] [Created] (KAFKA-8240) Source.equals() can fail with NPE

2019-04-16 Thread Matthias J. Sax (JIRA)
Matthias J. Sax created KAFKA-8240:
--

 Summary: Source.equals() can fail with NPE
 Key: KAFKA-8240
 URL: https://issues.apache.org/jira/browse/KAFKA-8240
 Project: Kafka
  Issue Type: Bug
  Components: streams
Affects Versions: 2.1.1, 2.2.0
Reporter: Matthias J. Sax


Reported on an PR: 
[https://github.com/apache/kafka/pull/5284/files/1df6208f48b6b72091fea71323d94a16102ffd13#r270607795]

InternalTopologyBuilder#Source.equals() might fail with NPE if 
`topicPattern==null`.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
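
The fix shape is the usual null-safe comparison; a hedged sketch with hypothetical field names (java.util.regex.Pattern has no value-based equals, so comparing its pattern() string is one common workaround):

{code:java}
import java.util.Objects;
import java.util.regex.Pattern;

public class SourceEqualsSketch {
    private final String name;
    private final Pattern topicPattern; // may be null when concrete topics are used

    SourceEqualsSketch(final String name, final Pattern topicPattern) {
        this.name = name;
        this.topicPattern = topicPattern;
    }

    @Override
    public boolean equals(final Object o) {
        if (this == o) return true;
        if (o == null || getClass() != o.getClass()) return false;
        final SourceEqualsSketch other = (SourceEqualsSketch) o;
        // Objects.equals is null-safe; Pattern uses identity equality,
        // so compare the compiled expression strings instead.
        return Objects.equals(name, other.name)
            && Objects.equals(topicPattern == null ? null : topicPattern.pattern(),
                              other.topicPattern == null ? null : other.topicPattern.pattern());
    }

    @Override
    public int hashCode() {
        return Objects.hash(name, topicPattern == null ? null : topicPattern.pattern());
    }
}
{code}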


[jira] [Created] (KAFKA-8239) Flaky Test PlaintextConsumerTest#testAutoCommitIntercept

2019-04-16 Thread Matthias J. Sax (JIRA)
Matthias J. Sax created KAFKA-8239:
--

 Summary: Flaky Test PlaintextConsumerTest#testAutoCommitIntercept
 Key: KAFKA-8239
 URL: https://issues.apache.org/jira/browse/KAFKA-8239
 Project: Kafka
  Issue Type: Bug
  Components: consumer
Affects Versions: 2.3.0
Reporter: Matthias J. Sax
 Fix For: 2.3.0


[https://builds.apache.org/job/kafka-pr-jdk11-scala2.12/3755/testReport/junit/kafka.api/PlaintextConsumerTest/testAutoCommitIntercept/]
{quote}java.lang.AssertionError at org.junit.Assert.fail(Assert.java:87) at 
org.junit.Assert.assertTrue(Assert.java:42) at 
org.junit.Assert.assertTrue(Assert.java:53) at 
kafka.api.PlaintextConsumerTest.testAutoCommitIntercept(PlaintextConsumerTest.scala:1084)
{quote}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)