[jira] [Resolved] (KAFKA-8161) Comma conflict when run script bin/kafka-configs.sh with config 'follower.replication.throttled.replicas'

2019-04-10 Thread huxihx (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8161?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

huxihx resolved KAFKA-8161.
---
Resolution: Not A Problem

> Comma conflict when run script  bin/kafka-configs.sh with config 
> 'follower.replication.throttled.replicas'
> --
>
> Key: KAFKA-8161
> URL: https://issues.apache.org/jira/browse/KAFKA-8161
> Project: Kafka
>  Issue Type: Bug
>  Components: admin
>Affects Versions: 0.10.2.1
>Reporter: Haiping
>Priority: Minor
>
> When executing the config command, it suggests that 
> follower.replication.throttled.replicas must match the format 
> [partitionId],[brokerId]:[partitionId],[brokerId]:[partitionId],[brokerId] 
> etc., but when configured like that, it fails with the following error:
> bin/kafka-configs.sh --entity-type topics --entity-name topic-test1  
> --zookeeper  127.0.0.1:2181/kafka --add-config 
> 'follower.replication.throttled.replicas=0,1:1,2' --alter
> Error while executing config command requirement failed: Invalid entity 
> config: all configs to be added must be in the format "key=val".
>  java.lang.IllegalArgumentException: requirement failed: Invalid entity 
> config: all configs to be added must be in the format "key=val".
>      at scala.Predef$.require(Predef.scala:224)
>      at 
> kafka.admin.ConfigCommand$.parseConfigsToBeAdded(ConfigCommand.scala:162)
>      at kafka.admin.ConfigCommand$.alterConfig(ConfigCommand.scala:81)
>      at kafka.admin.ConfigCommand$.main(ConfigCommand.scala:68)
>      at kafka.admin.ConfigCommand.main(ConfigCommand.scala)
> It seems that the comma serves as the separator for both replicas, such as 
> ([partitionId],[brokerId]), and key-value pairs, such as 
> (key=val,key=val).
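
The "Not A Problem" resolution fits ConfigCommand's escape for values that
themselves contain commas: square brackets group such a value, and within the
list each entry is a colon-separated partition:broker pair. A sketch of the
invocation under that reading (not verified against 0.10.2.1):

  bin/kafka-configs.sh --entity-type topics --entity-name topic-test1 \
    --zookeeper 127.0.0.1:2181/kafka --alter \
    --add-config 'follower.replication.throttled.replicas=[0:1,1:2]'

Here 0:1 throttles partition 0 on broker 1; the brackets stop the embedded
commas from being read as key=val separators.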



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-8215) Limit memory usage of RocksDB

2019-04-10 Thread Sophie Blee-Goldman (JIRA)
Sophie Blee-Goldman created KAFKA-8215:
--

 Summary: Limit memory usage of RocksDB
 Key: KAFKA-8215
 URL: https://issues.apache.org/jira/browse/KAFKA-8215
 Project: Kafka
  Issue Type: Improvement
  Components: streams
Reporter: Sophie Blee-Goldman


The memory usage of Streams is currently unbounded in part because of RocksDB, 
which consumes memory on a per-instance basis. Each instance (i.e., each 
persistent state store) will have its own write buffer, index blocks, and block 
cache. The size of these can be configured individually, but there is currently 
no way for a Streams app to limit the total memory available across instances. 
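
One commonly suggested direction (a sketch, not this ticket's proposal): share
a single bounded RocksDB block cache and write-buffer manager across all
stores via a RocksDBConfigSetter. Class and constant names below are
illustrative, and it assumes a RocksJava version that exposes LRUCache,
WriteBufferManager, and BlockBasedTableConfig.setBlockCache:

{code:java}
import java.util.Map;
import org.apache.kafka.streams.state.RocksDBConfigSetter;
import org.rocksdb.BlockBasedTableConfig;
import org.rocksdb.Cache;
import org.rocksdb.LRUCache;
import org.rocksdb.Options;
import org.rocksdb.WriteBufferManager;

public class BoundedMemoryRocksDBConfig implements RocksDBConfigSetter {

    // Static, so every store instance in the JVM shares the same bounded pool.
    private static final long TOTAL_OFF_HEAP_BYTES = 512 * 1024 * 1024L;
    private static final Cache CACHE = new LRUCache(TOTAL_OFF_HEAP_BYTES);
    private static final WriteBufferManager WRITE_BUFFER_MANAGER =
        new WriteBufferManager(TOTAL_OFF_HEAP_BYTES / 2, CACHE);

    @Override
    public void setConfig(final String storeName, final Options options,
                          final Map<String, Object> configs) {
        final BlockBasedTableConfig tableConfig = new BlockBasedTableConfig();
        tableConfig.setBlockCache(CACHE);                     // data/index blocks drawn from one shared cache
        options.setTableFormatConfig(tableConfig);
        options.setWriteBufferManager(WRITE_BUFFER_MANAGER);  // memtables charged against the same pool
    }
}
{code}

It would be registered via the rocksdb.config.setter Streams config
(StreamsConfig.ROCKSDB_CONFIG_SETTER_CLASS_CONFIG).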

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [DISCUSS] KIP-429 : Smooth Auto-Scaling for Kafka Streams

2019-04-10 Thread Jason Gustafson
Hi Guozhang and Boyang,

Thanks for the KIP. A few comments/questions below:

1. More of a nitpick, but `onPartitionsEmigrated` is not a very clear name.
How about `onPartitionsEvicted`? Or even perhaps `onMembershipLost`?
2. For `onPartitionsEmigrated`, how will we maintain compatibility with the
old behavior? Seems like we might need an extension of
`ConsumerRebalanceListener` with the new method. Otherwise we won't know if
the application is expecting the old behavior.
3. Just making sure I understand this, but the reason we need the error
code in the assignment is that the revoked partitions might be empty for
some members and non-empty for others. We want all members to rejoin
quickly even if they have no revoked partitions. Is that right?
4. I wanted to suggest an alternative approach for dealing with
compatibility and the upgrade problem. In fact, the consumer already has a
mechanism to change the assignment logic. Users can provide multiple
PartitionAssignor implementations in the `partition.assignment.strategy`
configuration. The coordinator will only select one which is supported by
all members of the group. Rather than adding the new `rebalance.protocol`
config, could we not reuse this mechanism? To support this, we would
basically create new assignor implementations, for example
CooperativeRoundRobin instead of the usual RoundRobin (see the sketch
after this list). I think the benefit
is that it is quite a bit easier to reason about the upgrade state when not
all consumers have been updated. We are guaranteed that all members are
following the same logic. My feeling is that this will be a less error
prone solution since it depends less on state outside the system (i.e. the
respective `rebalance.protocol` configurations for all members in the group
and binary compatibility). The downside is that it will take more effort
for PartitionAssignor implementations to get a benefit from this improved
logic. But it's really hard to say that the new assignment logic would be
compatible with a custom assignor in any case.
5. Where does the new ProtocolVersion come from in the new JoinGroup
schema? I guess we need a new API on the PartitionAssignor interface?
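
To make point 4 concrete, here is the kind of rolling-upgrade configuration it
implies; `CooperativeRoundRobinAssignor` is a hypothetical class name used only
for illustration:

    import java.util.Arrays;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.RoundRobinAssignor;

    Properties props = new Properties();
    // List both assignors: the coordinator only selects a strategy supported
    // by ALL members, so the group stays on the eager RoundRobin until every
    // instance has been upgraded to know the cooperative variant.
    props.put(ConsumerConfig.PARTITION_ASSIGNMENT_STRATEGY_CONFIG,
        Arrays.asList(
            CooperativeRoundRobinAssignor.class.getName(),  // hypothetical
            RoundRobinAssignor.class.getName()));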

Thanks,
Jason


On Mon, Apr 8, 2019 at 9:39 PM Boyang Chen  wrote:

>
> Thanks for the review Matthias! My 2-cent on the rebalance delay is that
> it is a rather fixed trade-off between task availability and resource
> shuffling. If we eventually trigger rebalance after a rolling bounce,
> certain consumer setups still face global shuffles, for example a
> member.id-ranking-based round robin strategy, as rejoining dynamic
> members will be assigned new member.ids, which reorders the assignment.
> So I think the primary goal of incremental rebalancing is still improving
> cluster availability during rebalance, because it doesn't revoke any
> partitions during this process. Also, the perk is the minimal
> configuration requirement :)
>
> Best,
>
> Boyang
>
> 
> From: Matthias J. Sax 
> Sent: Tuesday, April 9, 2019 7:47 AM
> To: dev
> Subject: Re: [DISCUSS] KIP-429 : Smooth Auto-Scaling for Kafka Streams
>
> Thank for the KIP, Boyang and Guozhang!
>
>
> I made an initial pass and have some questions/comments. One high level
> comment: it seems that the KIP "mixes" plain consumer and Kafka Streams
> use cases a little bit (at least in the presentation). It might be
> helpful to separate both cases clearly, or maybe limit the scope to
> plain consumer only.
>
>
>
> 10) For `PartitionAssignor.Assignment`: It seems we need a new method
> `List<TopicPartition> revokedPartitions()`?
>
>
>
> 20) In Section "Consumer Coordinator Algorithm"
>
> Bullet point "1a)": If the subscription changes and a topic is
> removed from the subscription, why do we not revoke the partitions?
>
> Bullet point "1a)": What happens is a topic is deleted (or a
> partition is removed/deleted from a topic)? Should we call the new
> `onPartitionsEmigrated()` callback for this case?
>
> Bullet point "2b)" Should we update the `PartitionAssignor`
> interface to pass in the "old assignment" as third parameter into
> `assign()`?
>
>
>
> 30) Rebalance delay (as used in KIP-415): Could a rebalance delay
> subsume KIP-345? Configuring static members is rather complicated, and I
> am wondering if a rebalance delay would be sufficient?
>
>
>
> 40) Quote: "otherwise the we would fall into the case 3.b) forever."
>
> What is "case 3.b" ?
>
>
>
> 50) Section "Looking into the Future"
>
> Nit: the new "ProtocolVersion" field is missing in the first line
> describing "JoinGroupRequest"
>
> > This can also help saving "version probing" cost on Streams as well.
>
> How does this relate to Kafka Streams "version probing" implementation?
> How can we exploit the new `ProtocolVersion` in Streams to improve
> "version probing" ? I have a rough idea, but would like to hear more
> details.
>
>
>
> 60) Section "Recommended Upgrade Procedure"
>
> > Set the `stream.rebalancing.mode` to 

Re: [DISCUSS] KIP-446: Add changelog topic configuration to KTable suppress

2019-04-10 Thread Bruno Cadonna
Hi Maarten and John,

I would opt for option 1 with an additional log message on INFO or WARN
level, since the log file is the place where you would look first to
understand what went wrong. I would also not adjust it when persistent
stores are available for suppress.

I would not go for option 2 or 3, because IIUC, with
`withLoggingDisabled()` even persistent state stores do not guarantee not
to lose records. Persisting state stores is basically a way to optimize
recovery in certain cases. The changelog topic is the component that
guarantees no data loss. So regarding data loss, in my opinion, disabling
logging on the suppression buffer is not different from disabling logging
on other state stores. Please correct me if I am wrong.
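
For reference, here is roughly the shape the KIP proposes (method names taken
from the proposal; a sketch, not a final API):

    import java.util.HashMap;
    import java.util.Map;
    import org.apache.kafka.streams.kstream.Suppressed;
    import org.apache.kafka.streams.kstream.Suppressed.BufferConfig;
    import org.apache.kafka.streams.kstream.Windowed;

    // Changelog topic overrides for the suppression buffer's store,
    // mirroring what Materialized#withLoggingEnabled offers elsewhere.
    final Map<String, String> changelogConfig = new HashMap<>();
    changelogConfig.put("min.insync.replicas", "2");  // illustrative override

    final Suppressed<Windowed> logged = Suppressed.untilWindowCloses(
        BufferConfig.unbounded().withLoggingEnabled(changelogConfig));

    // The variant whose data-loss semantics are debated above:
    final Suppressed<Windowed> unlogged = Suppressed.untilWindowCloses(
        BufferConfig.unbounded().withLoggingDisabled());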

Best,
Bruno

On Wed, Apr 10, 2019 at 12:12 PM John Roesler  wrote:

> Thanks for the update and comments, Maarten. It would be interesting to
> hear what others think as well.
> -John
>
> On Thu, Apr 4, 2019 at 2:43 PM Maarten Duijn  wrote:
>
> > Thank you for the explanation regarding the internals, I have edited the
> > KIP accordingly and updated the Javadoc. About the possible data loss
> when
> > altering changelog config, I think we can improve by doing (one of) the
> > following.
> >
> > 1) Add a warning in the comments that clearly states what might happen
> > when change logging is disabled and adjust it when persistent stores are
> > added.
> >
> > 2) Change `withLoggingDisabled` to `minimizeLogging`. Instead of
> disabling
> > logging, a call to this method minimizes the topic size by aggressively
> > removing the records emitted downstream by the suppress operator. I
> believe
> > this can be achieved by setting `delete.retention.ms=0` in the topic
> > config.
> >
> > 3) Remove `withLoggingDisabled` from the proposal.
> >
> > 4) Leave both methods as-proposed, as you indicated, this is in line with
> > the other parts of the Streams API
> >
> > A user might want to disable logging when downstream is not a Kafka topic
> > but some other service that does not benefit from at-least-once delivery
> > of the suppressed records in case of failover or rebalance.
> > Seeing as it might cause data loss, the methods should not be used
> lightly
> > and I think some comments are warranted. Personally, I rely purely on
> Kafka
> > to prevent data loss even when a store persisted locally, so when support
> > is added for persistent suppression, I feel the comments may stay.
> >
> > Maarten
> >
>


[jira] [Created] (KAFKA-8214) Handling RecordTooLargeException in the main thread

2019-04-10 Thread Mohan Parthasarathy (JIRA)
Mohan Parthasarathy created KAFKA-8214:
--

 Summary: Handling RecordTooLargeException in the main thread
 Key: KAFKA-8214
 URL: https://issues.apache.org/jira/browse/KAFKA-8214
 Project: Kafka
  Issue Type: Bug
 Environment: Kafka version 1.0.2
Reporter: Mohan Parthasarathy


How can we handle this exception in the main application? If a task incurs 
this exception, it does not commit the offset and hence goes into a loop 
after that. This happens during the aggregation process. We already have a 
limit on the message size of the topic, which is 15 MB.


org.apache.kafka.streams.errors.StreamsException: Exception caught in process. 
taskId=2_6, processor=KSTREAM-SOURCE-16, 
topic=r-detection-KSTREAM-AGGREGATE-STATE-STORE-12-repartition, 
partition=6, offset=2049
    at org.apache.kafka.streams.processor.internals.StreamTask.process(StreamTask.java:367)
    at org.apache.kafka.streams.processor.internals.AssignedStreamsTasks.process(AssignedStreamsTasks.java:104)
    at org.apache.kafka.streams.processor.internals.TaskManager.process(TaskManager.java:413)
    at org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:862)
    at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:777)
    at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:747)

Caused by: org.apache.kafka.streams.errors.StreamsException: task [2_6] Abort 
sending since an error caught with a previous record (key 
fe80::a112:a206:bc15:8e86::743c:160:c0be:9e66&0 value [B@20dced9e 
timestamp 1554238297629) to topic 
-detection-KSTREAM-AGGREGATE-STATE-STORE-12-changelog due to 
org.apache.kafka.common.errors.RecordTooLargeException: The message is 15728866 
bytes when serialized which is larger than the maximum request size you have 
configured with the max.request.size configuration.
    at org.apache.kafka.streams.processor.internals.RecordCollectorImpl.recordSendError(RecordCollectorImpl.java:133)
    at org.apache.kafka.streams.processor.internals.RecordCollectorImpl.access$500(RecordCollectorImpl.java:50)
    at org.apache.kafka.streams.processor.internals.RecordCollectorImpl$1.onCompletion(RecordCollectorImpl.java:192)
    at org.apache.kafka.clients.producer.KafkaProducer.doSend(KafkaProducer.java:915)
    at org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:841)
    at org.apache.kafka.streams.processor.internals.RecordCollectorImpl.send(RecordCollectorImpl.java:162)
    at org.apache.kafka.streams.state.internals.StoreChangeLogger.logChange(StoreChangeLogger.java:59)
    at org.apache.kafka.streams.state.internals.ChangeLoggingKeyValueBytesStore.put(ChangeLoggingKeyValueBytesStore.java:66)
    at org.apache.kafka.streams.state.internals.ChangeLoggingKeyValueBytesStore.put(ChangeLoggingKeyValueBytesStore.java:31)
    at org.apache.kafka.streams.state.internals.CachingKeyValueStore.putAndMaybeForward(CachingKeyValueStore.java:100)
    at org.apache.kafka.streams.state.internals.CachingKeyValueStore.access$000(CachingKeyValueStore.java:38)
    at org.apache.kafka.streams.state.internals.CachingKeyValueStore$1.apply(CachingKeyValueStore.java:83)
    at
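
Two knobs commonly pointed at for this situation -- a sketch under the
assumption of Kafka 1.1+ (the production exception handler from KIP-210 does
not exist in the 1.0.2 noted above), not a confirmed fix for this ticket:

{code:java}
import java.util.Map;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.errors.RecordTooLargeException;
import org.apache.kafka.streams.errors.ProductionExceptionHandler;

public class SkipRecordTooLargeHandler implements ProductionExceptionHandler {
    @Override
    public ProductionExceptionHandlerResponse handle(final ProducerRecord<byte[], byte[]> record,
                                                     final Exception exception) {
        // Skip only oversized records so the thread stops re-looping; fail on anything else.
        return exception instanceof RecordTooLargeException
            ? ProductionExceptionHandlerResponse.CONTINUE
            : ProductionExceptionHandlerResponse.FAIL;
    }

    @Override
    public void configure(final Map<String, ?> configs) { }
}
{code}

Wiring, also raising the internal producer's request limit toward the topic's
15 MB cap:

{code:java}
props.put(StreamsConfig.producerPrefix(ProducerConfig.MAX_REQUEST_SIZE_CONFIG), 16_000_000);
props.put(StreamsConfig.DEFAULT_PRODUCTION_EXCEPTION_HANDLER_CLASS_CONFIG,
          SkipRecordTooLargeHandler.class);
{code}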

[jira] [Created] (KAFKA-8213) KStreams interactive query documentation typo

2019-04-10 Thread Michael Drogalis (JIRA)
Michael Drogalis created KAFKA-8213:
---

 Summary: KStreams interactive query documentation typo
 Key: KAFKA-8213
 URL: https://issues.apache.org/jira/browse/KAFKA-8213
 Project: Kafka
  Issue Type: Bug
  Components: documentation
Reporter: Michael Drogalis


In [the Interactive Queries 
docs|https://kafka.apache.org/10/documentation/streams/developer-guide/interactive-queries.html#querying-remote-state-stores-for-the-entire-app],
 we have a minor typo:

Actual: You can use the corresponding local data in other parts of your 
application code, as long as it doesn’t required calling the Kafka Streams API.
Expected: You can use the corresponding local data in other parts of your 
application code, as long as it doesn’t require calling the Kafka Streams API.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-8212) KStreams documentation Maven artifact table is cut off

2019-04-10 Thread Michael Drogalis (JIRA)
Michael Drogalis created KAFKA-8212:
---

 Summary: KStreams documentation Maven artifact table is cut off
 Key: KAFKA-8212
 URL: https://issues.apache.org/jira/browse/KAFKA-8212
 Project: Kafka
  Issue Type: Bug
  Components: documentation
Reporter: Michael Drogalis
 Attachments: Screen Shot 2019-04-10 at 2.04.09 PM.png

In the [Writing a Streams Application 
doc|https://kafka.apache.org/21/documentation/streams/developer-guide/write-streams.html],
 the section "LIBRARIES AND MAVEN ARTIFACTS" has a table that lists out the 
Maven artifacts. The items in the group ID overflow and are cut off by the 
table column, even on a very large monitor.

Note that "artifact ID" seems to have its word break property set correctly. 
See the attached image.





--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Build failed in Jenkins: kafka-trunk-jdk8 #3531

2019-04-10 Thread Apache Jenkins Server
See 


Changes:

[bbejeck] [MINOR] Guard against crashing on invalid key range queries (#6521)

--
[...truncated 1.59 MB...]
org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testOuterLeft[caching enabled = true] STARTED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testOuterLeft[caching enabled = true] PASSED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testInner[caching enabled = true] STARTED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testInner[caching enabled = true] PASSED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testOuter[caching enabled = true] STARTED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testOuter[caching enabled = true] PASSED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testLeft[caching enabled = true] STARTED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testLeft[caching enabled = true] PASSED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testInnerInner[caching enabled = true] STARTED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testInnerInner[caching enabled = true] PASSED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testInnerOuter[caching enabled = true] STARTED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testInnerOuter[caching enabled = true] PASSED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testInnerLeft[caching enabled = true] STARTED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testInnerLeft[caching enabled = true] PASSED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testOuterInner[caching enabled = true] STARTED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testOuterInner[caching enabled = true] PASSED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testOuterOuter[caching enabled = true] STARTED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testOuterOuter[caching enabled = true] PASSED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testLeftInner[caching enabled = false] STARTED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testLeftInner[caching enabled = false] PASSED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testLeftOuter[caching enabled = false] STARTED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testLeftOuter[caching enabled = false] PASSED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testLeftLeft[caching enabled = false] STARTED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testLeftLeft[caching enabled = false] PASSED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testOuterLeft[caching enabled = false] STARTED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testOuterLeft[caching enabled = false] PASSED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testInner[caching enabled = false] STARTED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testInner[caching enabled = false] PASSED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testOuter[caching enabled = false] STARTED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testOuter[caching enabled = false] PASSED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testLeft[caching enabled = false] STARTED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testLeft[caching enabled = false] PASSED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testInnerInner[caching enabled = false] STARTED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testInnerInner[caching enabled = false] PASSED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testInnerOuter[caching enabled = false] STARTED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testInnerOuter[caching enabled = false] PASSED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testInnerLeft[caching enabled = false] STARTED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testInnerLeft[caching enabled = false] PASSED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testOuterInner[caching enabled = false] STARTED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testOuterInner[caching enabled = false] PASSED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testOuterOuter[caching enabled = 

[jira] [Created] (KAFKA-8211) Flaky Test: ResetConsumerGroupOffsetTest.testResetOffsetsExportImportPlan

2019-04-10 Thread Bill Bejeck (JIRA)
Bill Bejeck created KAFKA-8211:
--

 Summary: Flaky Test: 
ResetConsumerGroupOffsetTest.testResetOffsetsExportImportPlan
 Key: KAFKA-8211
 URL: https://issues.apache.org/jira/browse/KAFKA-8211
 Project: Kafka
  Issue Type: Bug
  Components: admin, clients, unit tests
Affects Versions: 2.3.0
Reporter: Bill Bejeck
 Fix For: 2.3.0


Failed in build [https://builds.apache.org/job/kafka-pr-jdk8-scala2.11/20778/]

 
{noformat}
Error Message
java.lang.AssertionError: Expected that consumer group has consumed all 
messages from topic/partition.
Stacktrace
java.lang.AssertionError: Expected that consumer group has consumed all 
messages from topic/partition.
at kafka.utils.TestUtils$.fail(TestUtils.scala:381)
at kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:791)
at 
kafka.admin.ResetConsumerGroupOffsetTest.awaitConsumerProgress(ResetConsumerGroupOffsetTest.scala:364)
at 
kafka.admin.ResetConsumerGroupOffsetTest.produceConsumeAndShutdown(ResetConsumerGroupOffsetTest.scala:359)
at 
kafka.admin.ResetConsumerGroupOffsetTest.testResetOffsetsExportImportPlan(ResetConsumerGroupOffsetTest.scala:323)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:305)
at 
org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:365)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
at org.junit.runners.ParentRunner$4.run(ParentRunner.java:330)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:78)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:328)
at org.junit.runners.ParentRunner.access$100(ParentRunner.java:65)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:292)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:305)
at org.junit.runners.ParentRunner.run(ParentRunner.java:412)
at 
org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.runTestClass(JUnitTestClassExecutor.java:110)
at 
org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:58)
at 
org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:38)
at 
org.gradle.api.internal.tasks.testing.junit.AbstractJUnitTestClassProcessor.processTestClass(AbstractJUnitTestClassProcessor.java:62)
at 
org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
at sun.reflect.GeneratedMethodAccessor11.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
at 
org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
at 
org.gradle.internal.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:32)
at 
org.gradle.internal.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:93)
at com.sun.proxy.$Proxy2.processTestClass(Unknown Source)
at 
org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:118)
at sun.reflect.GeneratedMethodAccessor10.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)

Re: [VOTE] KIP-445: In-memory session store

2019-04-10 Thread Guozhang Wang
+1 (binding).

On Tue, Apr 9, 2019 at 5:46 PM Bill Bejeck  wrote:

> Thanks for the KIP Sophie.
>
> +1(binding)
>
> -Bill
>
> On Tue, Apr 9, 2019 at 12:14 AM Matthias J. Sax 
> wrote:
>
> > Thanks for the KIP Sophie!
> >
> > +1 (binding)
> >
> >
> > -Matthias
> >
> > On 4/8/19 5:26 PM, Sophie Blee-Goldman wrote:
> > > Hello all,
> > >
> > > There has been a positive reception so I'd like to call for a vote on
> > > KIP-445, augmenting our session store options with an in-memory
> version.
> > > This would round out our store API to offer in-memory and persistent
> > > versions of all three types of stores.
> > >
> > > KIP:
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-445%3A+In-memory+Session+Store
> > > PR: https://github.com/apache/kafka/pull/6525
> > > JIRA: https://issues.apache.org/jira/browse/KAFKA-8029
> > >
> > > This would also open up the possibility of migrating some of the
> > > unit/integration tests to in-memory stores to speed things up a bit ;)
> > >
> > > Cheers,
> > > Sophie
> > >
> >
> >
>


-- 
-- Guozhang


[jira] [Created] (KAFKA-8210) Missing link for KStreams in Streams DSL docs

2019-04-10 Thread Michael Drogalis (JIRA)
Michael Drogalis created KAFKA-8210:
---

 Summary: Missing link for KStreams in Streams DSL docs
 Key: KAFKA-8210
 URL: https://issues.apache.org/jira/browse/KAFKA-8210
 Project: Kafka
  Issue Type: Bug
  Components: documentation
Reporter: Michael Drogalis


In [the Streams DSL 
docs|https://kafka.apache.org/22/documentation/streams/developer-guide/dsl-api.html],
 there is some text under the KTable section that reads: "We have already seen 
an example of a changelog stream in the section streams_concepts_duality."

"streams_concepts_duality" seems to indicate that it should be a link, but it 
is not.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-8209) Wrong link for KStreams DSL in Core Concepts doc

2019-04-10 Thread Michael Drogalis (JIRA)
Michael Drogalis created KAFKA-8209:
---

 Summary: Wrong link for KStreams DSL in Core Concepts doc
 Key: KAFKA-8209
 URL: https://issues.apache.org/jira/browse/KAFKA-8209
 Project: Kafka
  Issue Type: Bug
  Components: documentation
Reporter: Michael Drogalis


In the [core concepts 
doc|https://kafka.apache.org/21/documentation/streams/core-concepts], there is 
a link in the "States" section for "Kafka Streams DSL". It points to the wrong 
link.

Actual: 
https://kafka.apache.org/21/documentation/streams/developer-guide/#streams_dsl
Expected: 
https://kafka.apache.org/21/documentation/streams/developer-guide/dsl-api.html



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-8208) Broken link for out-of-order data in KStreams Core Concepts doc

2019-04-10 Thread Michael Drogalis (JIRA)
Michael Drogalis created KAFKA-8208:
---

 Summary: Broken link for out-of-order data in KStreams Core 
Concepts doc
 Key: KAFKA-8208
 URL: https://issues.apache.org/jira/browse/KAFKA-8208
 Project: Kafka
  Issue Type: Improvement
  Components: documentation
Reporter: Michael Drogalis


In the [core concepts 
doc|https://kafka.apache.org/21/documentation/streams/core-concepts], there is 
a link in the "Out-of-Order Handling" section for "out-of-order data". It 404's 
to https://kafka.apache.org/21/documentation/streams/tbd.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [VOTE] KIP-339: Create a new IncrementalAlterConfigs API

2019-04-10 Thread Colin McCabe
Hi Rajini,

That is a good point. We want to keep the "transactionality" of updating 
several configs for the same ConfigResource at once. SSL configs are a good 
example of this -- it often wouldn't make sense to change just one of them in 
isolation. How about an input like Map<ConfigResource, Collection<AlterConfigOp>> 
and a result like Map<ConfigResource, KafkaFuture<Void>>?

best,
Colin
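
Spelled out (given an AdminClient `admin`), the shape being converged on --
class names here follow what the API eventually looked like, so treat this as
an illustration of the idea rather than the KIP's final text:

    import java.util.Arrays;
    import java.util.Collection;
    import java.util.Collections;
    import java.util.Map;
    import org.apache.kafka.clients.admin.AlterConfigOp;
    import org.apache.kafka.clients.admin.ConfigEntry;
    import org.apache.kafka.common.config.ConfigResource;

    ConfigResource broker = new ConfigResource(ConfigResource.Type.BROKER, "0");

    // All SSL ops for one ConfigResource travel in a single entry, keeping
    // per-resource atomicity obvious: either every op applies or none does.
    Collection<AlterConfigOp> ops = Arrays.asList(
        new AlterConfigOp(new ConfigEntry("ssl.keystore.location", "/etc/kafka/new.jks"),
                          AlterConfigOp.OpType.SET),
        new AlterConfigOp(new ConfigEntry("ssl.keystore.password", "changed"),
                          AlterConfigOp.OpType.SET));

    Map<ConfigResource, Collection<AlterConfigOp>> updates =
        Collections.singletonMap(broker, ops);
    admin.incrementalAlterConfigs(updates);  // result: one KafkaFuture<Void> per resource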

On Mon, Apr 8, 2019, at 04:48, Rajini Sivaram wrote:
> Hi Colin,
> 
> I am not sure the API proposed in the KIP fits with the type of updates we
> support. The old API with Map fits better and we
> need to find a way to do something similar while still retaining the old
> one.
> 
> Each request should specify a collection of updates for each
> ConfigResource with
> results returned per-ConfigResource since that is our unit of atomicity. We
> guarantee that we never do a partial update of a collection of configs for
> a ConfigResource from a single request and hence we should only have one
> input with a collection of updates and one result for each ConfigResource,
> making the model obvious in the API and docs. We need something similar to
> the existing method with Map<ConfigResource, Config>, but need to change the
> method signature so that it can co-exist with the old method.
> 
> On Mon, Oct 1, 2018 at 8:35 PM Colin McCabe  wrote:
> 
> > Hi all,
> >
> > With 3 binding +1s from myself, Ismael, and Gwen, the vote passes.
> >
> > Thanks, all.
> > Colin
> >
> >
> > On Fri, Sep 28, 2018, at 09:43, Colin McCabe wrote:
> > > Hi all,
> > >
> > > Thanks for the discussion.  I'm going to close the vote later today if
> > > there are no more comments.
> > >
> > > cheers,
> > > Colin
> > >
> > >
> > > On Mon, Sep 24, 2018, at 22:33, Colin McCabe wrote:
> > > > +1 (binding)
> > > >
> > > > Colin
> > > >
> > > >
> > > > On Mon, Sep 24, 2018, at 17:49, Ismael Juma wrote:
> > > > > Thanks Colin. I think this is much needed and I'm +1 (binding) on
> > > > > fixing this issue. However, I have a few minor suggestions:
> > > > >
> > > > > 1. Overload alterConfigs instead of creating a new method name. This
> > > > > gives us both the short name and a path for removal of the
> > > > > deprecated overload.
> > > > > 2. Did we consider Add/Remove instead of Append/Subtract?
> > > > >
> > > > > Ismael
> > > > >
> > > > > On Mon, Sep 24, 2018 at 10:29 AM Colin McCabe  wrote:
> > > > > > Hi all,
> > > > > >
> > > > > > I would like to start voting on KIP-339, which creates a new
> > > > > > IncrementalAlterConfigs API.
> > > > > >
> > > > > > The KIP is described here:
> > > > > >
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-339%3A+Create+a+new+ModifyConfigs+API
> > >
> > > > > > Previous discussion:
> > > > > > https://sematext.com/opensee/m/Kafka/uyzND1OYRKh2RrGba1
> > > > > >
> > > > > > best,
> > > > > > Colin
> > > > > >
> > > >
> >
>


Build failed in Jenkins: kafka-trunk-jdk8 #3530

2019-04-10 Thread Apache Jenkins Server
See 

--
[...truncated 545 B...]
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
ERROR: Error fetching remote repo 'origin'
hudson.plugins.git.GitException: Failed to fetch from 
https://github.com/apache/kafka.git
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:894)
at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1161)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1192)
at hudson.scm.SCM.checkout(SCM.java:504)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1208)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:574)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:499)
at hudson.model.Run.execute(Run.java:1810)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:429)
Caused by: hudson.plugins.git.GitException: Command "git fetch --tags 
--progress https://github.com/apache/kafka.git 
+refs/heads/*:refs/remotes/origin/*" returned status code 128:
stdout: 
stderr: error: Could not read 79db30a8d7e3e85641c3c838f48b286cc7500814
error: Could not read 81a23220014b719ff39bcb3d422ed489e461fe78
error: Could not read bc745af6a7a640360a092d8bbde5f68b6ad637be
error: missing object referenced by 'refs/tags/2.2.0'
error: Could not read 79db30a8d7e3e85641c3c838f48b286cc7500814
remote: Enumerating objects: 3889, done.
remote: Counting objects: [...progress output truncated...]

Re: [DISCUSS] KIP-446: Add changelog topic configuration to KTable suppress

2019-04-10 Thread John Roesler
Thanks for the update and comments, Maarten. It would be interesting to
hear what others think as well.
-John

On Thu, Apr 4, 2019 at 2:43 PM Maarten Duijn  wrote:

> Thank you for the explanation regarding the internals, I have edited the
> KIP accordingly and updated the Javadoc. About the possible data loss when
> altering changelog config, I think we can improve by doing (one of) the
> following.
>
> 1) Add a warning in the comments that clearly states what might happen
> when change logging is disabled and adjust it when persistent stores are
> added.
>
> 2) Change `withLoggingDisabled` to `minimizeLogging`. Instead of disabling
> logging, a call to this method minimizes the topic size by aggressively
> removing the records emitted downstream by the suppress operator. I believe
> this can be achieved by setting `delete.retention.ms=0` in the topic
> config.
>
> 3) Remove `withLoggingDisabled` from the proposal.
>
> 4) Leave both methods as-proposed, as you indicated, this is in line with
> the other parts of the Streams API
>
> A user might want to disable logging when downstream is not a Kafka topic
> but some other service that does not benefit from at-least-once delivery of
> the suppressed records in case of failover or rebalance.
> Seeing as it might cause data loss, the methods should not be used lightly
> and I think some comments are warranted. Personally, I rely purely on Kafka
> to prevent data loss even when a store persisted locally, so when support
> is added for persistent suppression, I feel the comments may stay.
>
> Maarten
>


Re: Request for contribution

2019-04-10 Thread Bruno Cadonna
Hi Manish,

Good to hear that you want to learn and contribute to Kafka.

The documentation and the project info site are great starting points

https://kafka.apache.org/project
https://kafka.apache.org/documentation/

To start contributing take a look at

https://kafka.apache.org/contributing

and search for a starter ticket that suits you in Jira under

https://issues.apache.org/jira/browse/KAFKA-8198?jql=project%20%3D%20KAFKA%20AND%20labels%20%3D%20newbie%20AND%20status%20%3D%20Open

To assign yourself a ticket you need to sign-up for a Jira account and send
your username to this list, so that you can be added as a contributor to
the project.

Hope that helped.

Best,
Bruno

On Wed, Apr 10, 2019 at 9:17 AM Manish G 
wrote:

> Hi,
>
> I am Manish, a Java programmer by profession.
>
> I want to learn and contribute to Kafka.
>
> Can you please guide me to appropriate links for it?
>
> With regards
> Manish
>


[jira] [Reopened] (KAFKA-7895) Ktable supress operator emitting more than one record for the same key per window

2019-04-10 Thread John Roesler (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7895?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Roesler reopened KAFKA-7895:
-

> Ktable supress operator emitting more than one record for the same key per 
> window
> -
>
> Key: KAFKA-7895
> URL: https://issues.apache.org/jira/browse/KAFKA-7895
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.1.0, 2.1.1
>Reporter: prasanthi
>Assignee: John Roesler
>Priority: Major
> Fix For: 2.2.0, 2.1.2
>
>
> Hi, We are using KStreams to get the aggregated counts per vendor (key) within 
> a specified window.
> Here's how we configured the suppress operator to emit one final record per 
> key/window.
> {code:java}
> KTable<Windowed<Integer>, Long> windowedCount = groupedStream
>  .windowedBy(TimeWindows.of(Duration.ofMinutes(1)).grace(ofMillis(5L)))
>  .count(Materialized.with(Serdes.Integer(),Serdes.Long()))
>  .suppress(Suppressed.untilWindowCloses(unbounded()));
> {code}
> But we are getting more than one record for the same key/window as shown 
> below.
> {code:java}
> [KTABLE-TOSTREAM-10]: [131@154906704/154906710], 1039
> [KTABLE-TOSTREAM-10]: [131@154906704/154906710], 1162
> [KTABLE-TOSTREAM-10]: [9@154906704/154906710], 6584
> [KTABLE-TOSTREAM-10]: [88@154906704/154906710], 107
> [KTABLE-TOSTREAM-10]: [108@154906704/154906710], 315
> [KTABLE-TOSTREAM-10]: [119@154906704/154906710], 119
> [KTABLE-TOSTREAM-10]: [154@154906704/154906710], 746
> [KTABLE-TOSTREAM-10]: [154@154906704/154906710], 809{code}
> Could you please take a look?
> Thanks
>  
>  
> Added by John:
> Acceptance Criteria:
>  * add suppress to system tests, such that it's exercised with crash/shutdown 
> recovery, rebalance, etc.
>  ** [https://github.com/apache/kafka/pull/6278]
>  * make sure that there's some system test coverage with caching disabled.
>  ** Follow-on ticket: https://issues.apache.org/jira/browse/KAFKA-7943
>  * test with tighter time bounds with windows of say 30 seconds and use 
> system time without adding any extra time for verification
>  ** Follow-on ticket: https://issues.apache.org/jira/browse/KAFKA-7944



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Request for contribution

2019-04-10 Thread Manish G
Hi,

I am Manish, a Java programmer by profession.

I want to learn and contribute to Kafka.

Can you please guide me to appropriate links for it?

With regards
Manish


[jira] [Created] (KAFKA-8207) StickyPartitionAssignor for KStream

2019-04-10 Thread neeraj (JIRA)
neeraj created KAFKA-8207:
-

 Summary: StickyPartitionAssignor for KStream
 Key: KAFKA-8207
 URL: https://issues.apache.org/jira/browse/KAFKA-8207
 Project: Kafka
  Issue Type: Improvement
  Components: streams
Affects Versions: 2.0.0
Reporter: neeraj


In KStreams I am not able to configure a sticky partition assignor or my 
custom partition assignor.

Overriding the property while building the stream does not work:

props.put(ConsumerConfig.PARTITION_ASSIGNMENT_STRATEGY_CONFIG, 
CustomAssignor.class.getName());

 

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Jenkins build is back to normal : kafka-trunk-jdk8 #3529

2019-04-10 Thread Apache Jenkins Server
See 




Partition Strategy in Kafka Stream

2019-04-10 Thread Neeraj Bhatt
Hi

Which partition strategy does Kafka Streams use? Can we change the partition
strategy in Kafka Streams as we can in a normal Kafka consumer? Setting

streamsConfiguration.put(ConsumerConfig.PARTITION_ASSIGNMENT_STRATEGY_CONFIG,
    Collections.singletonList(ColombiaStrictStickyAssignor.class));

does not change the partition assignor.

We want to write our own custom partition assignor.

Thanks
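
For context on why the config appears to have no effect: Kafka Streams pins
its own assignor when it builds the internal consumer's configuration, roughly
like the following (a paraphrase of StreamsConfig's behavior, not the exact
source):

    // The user-supplied value is overwritten before the consumer is created:
    consumerProps.put(ConsumerConfig.PARTITION_ASSIGNMENT_STRATEGY_CONFIG,
        StreamsPartitionAssignor.class.getName());

so a custom assignor cannot be plugged in through this setting; partition-to-task
placement is governed by the Streams assignor itself.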


Jenkins build is back to normal : kafka-trunk-jdk11 #430

2019-04-10 Thread Apache Jenkins Server
See 




Re: [VOTE] KIP-419: Safely notify Kafka Connect SourceTask is stopped

2019-04-10 Thread Mickael Maison
+1 (non-binding)
Thanks for the KIP!

On Mon, Apr 8, 2019 at 8:07 PM Andrew Schofield
 wrote:
>
> Hi,
> I’d like to begin the voting thread for KIP-419. This is a minor KIP to add a 
> new stopped() method to the SourceTask interface in Kafka Connect. Its 
> purpose is to give the task a safe opportunity to clean up its resources, in 
> the knowledge that this is the final call to the task.
>
> KIP: 
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-419%3A+Safely+notify+Kafka+Connect+SourceTask+is+stopped
> PR: https://github.com/apache/kafka/pull/6551
> JIRA: https://issues.apache.org/jira/browse/KAFKA-7841
>
> Thanks,
> Andrew Schofield
> IBM
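
For readers skimming the thread, the addition is small -- a sketch of the
shape described in the KIP (Javadoc wording mine):

    public abstract class SourceTask implements Task {
        // ... existing lifecycle: initialize(), start(), poll(), stop() ...

        /**
         * Called after stop(), once the framework guarantees it will make no
         * further calls to this task, so remaining resources can be safely
         * released. A no-op by default for compatibility with existing tasks.
         */
        public void stopped() {
        }
    }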


Re: [DISCUSS] KIP-201: Rationalising Policy interfaces

2019-04-10 Thread Rajini Sivaram
Thanks Tom.

Once you have updated the KIP to support broker config updates, it may be
good to start a new vote thread since the other one is quite old and
perhaps the KIP has changed since then.


On Wed, Apr 10, 2019 at 3:58 AM Tom Bentley  wrote:

> Hi Rajini,
>
> I'd be happy to do that. I'll try to get it done in the next few days.
>
> Although there's been quite a lot of interest this, the vote thread never
> got any binding +1, so it's been stuck in limbo for a long time. It would
> be great to get this moving again.
>
> Kind regards,
>
> Tom
>
> On Tue, Apr 9, 2019 at 3:04 PM Rajini Sivaram 
> wrote:
>
> > Hi Tom,
> >
> > Are you planning to extend this KIP to also include dynamic broker config
> > update (currently covered under AlterConfigPolicy)?
> >
> > May be worth sending another note to make progress on this KIP since it
> has
> > been around a while and reading through the threads, it looks like there
> > has been a lot of interest in it.
> >
> > Thank you,
> >
> > Rajini
> >
> >
> > On Wed, Jan 9, 2019 at 11:25 AM Tom Bentley 
> wrote:
> >
> > > Hi Anna and Mickael,
> > >
> > > Anna, did you have any comments about the points I made?
> > >
> > > Mickael, we really need the vote to be passed before there's even any
> > work
> > > to do. With the exception of Ismael, the KIP didn't seem to get the
> > > attention of any of the other committers.
> > >
> > > Kind regards,
> > >
> > > Tom
> > >
> > > On Thu, 13 Dec 2018 at 18:11, Tom Bentley 
> wrote:
> > >
> > > > Hi Anna,
> > > >
> > > > Firstly, let me apologise again about having missed your previous
> > emails
> > > > about this.
> > > >
> > > > Thank you for the feedback. You raise some valid points about
> > ambiguity.
> > > > The problem with pulling the metadata into CreateTopicRequest and
> > > > AlterTopicRequest is that you lose the benefit of being able to easily
> > > > write a common policy across creation and alter cases. For example,
> > > > with the proposed design the policy maker could write code like this
> > > > (forgive my pseudo-Java):
> > > >
> > > >   public void validateCreateTopic(requestMetadata, ...) {
> > > >     commonPolicy(requestMetadata.requestedState());
> > > >   }
> > > >
> > > >   public void validateAlterTopic(requestMetadata, ...) {
> > > >     commonPolicy(requestMetadata.requestedState());
> > > >   }
> > > >
> > > >   private void commonPolicy(RequestedTopicState requestedState) {
> > > >     // ...
> > > >   }
> > > >
> > > > I think that's an important feature of the API because (I think) very
> > > > often the policy maker is interested in defining the universe of
> > > prohibited
> > > > configurations without really caring about whether the request is a
> > > create
> > > > or an alter. Having a single RequestedTopicState for both create and
> > > > alter means they can do that trivially in one place. Having different
> > > > methods in the two Request classes prevents this and forces the
> policy
> > > > maker to pick apart the different requestState objects before calling
> > any
> > > > common method(s).
> > > >
> > > > I think my intention at the time (and it's many months ago now, so I
> > > might
> > > > not have remembered fully) was that RequestedTopicState would
> basically
> > > > represent what the topic would look like after the requested changes
> > were
> > > > applied (I accept this isn't how it's Javadoc'd in the KIP), rather
> > than
> > > > representing the request itself. Thus if the request changed the
> > > assignment
> > > > of some of the partitions and the policy maker was interested in
> > > precisely
> > > > which partitions would be changed, and how, they would indeed have to
> > > > compute that for themselves by looking up the current topic state
> from
> > > the
> > > > cluster state and seeing how they differed. Indeed they'd have to do
> > this
> > > > diff even to figure out that the user was requesting a change to the
> > > topic
> > > > assigned (or similarly for topic config, etc). To me this is
> acceptable
> > > > because I think most people writing such policies are just interested
> > in
> > > > defining what is not allowed, so giving them a representation of the
> > > > proposed topic state which they can readily check against is the most
> > > > direct API. In this interpretation generatedReplicaAssignment() would
> > > > just be some extra metadata annotating whether any difference between
> > the
> > > > current and proposed states was directly from the user, or generated
> on
> > > the
> > > > broker. You're right that it's ambiguous when the request didn't
> > actually
> > > > change the assignment but I didn't envisage policy makers using it
> > except
> > > > when the assignments differed anyway. To me it would be acceptable to
> > > > Javadoc this.
> > > >
> > > > Given this interpretation of RequestedTopicState as "what the topic
> > would
> > > > look like after the requested changes were applied" can you see any
> > other
> > > > 

Re: [VOTE] KIP-360: Improve handling of unknown producer when using EOS

2019-04-10 Thread Magnus Edenhill
+1 (non-binding)


Den ons 10 apr. 2019 kl 02:38 skrev Guozhang Wang :

> +1 (binding). Thanks for the written KIP! The approach lgtm.
>
> One minor thing: the name of "last epoch" may be a bit misleading (although
> it is for internal usage only and will not be exposed to users) for future
> developers; how about renaming it to "required_epoch", where setting it to
> "-1" means "not required, hence no checks"?
>
> Guozhang
>
> On Tue, Apr 9, 2019 at 5:02 PM Jason Gustafson  wrote:
>
> > Bump (for Guozhang)
> >
> > On Mon, Apr 8, 2019 at 8:55 AM Jason Gustafson 
> wrote:
> >
> >> Hi All,
> >>
> >> I'd like to start a vote on KIP-360:
> >>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-360%3A+Improve+handling+of+unknown+producer
> >> .
> >>
> >> +1 from me (duh)
> >>
> >> Thanks,
> >> Jason
> >>
> >
>
> --
> -- Guozhang
>


Build failed in Jenkins: kafka-trunk-jdk8 #3528

2019-04-10 Thread Apache Jenkins Server
See 

--
[...truncated 545 B...]
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
ERROR: Error fetching remote repo 'origin'
hudson.plugins.git.GitException: Failed to fetch from 
https://github.com/apache/kafka.git
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:894)
at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1161)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1192)
at hudson.scm.SCM.checkout(SCM.java:504)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1208)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:574)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:499)
at hudson.model.Run.execute(Run.java:1810)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:429)
Caused by: hudson.plugins.git.GitException: Command "git fetch --tags 
--progress https://github.com/apache/kafka.git 
+refs/heads/*:refs/remotes/origin/*" returned status code 128:
stdout: 
stderr: error: Could not read 79db30a8d7e3e85641c3c838f48b286cc7500814
error: Could not read 81a23220014b719ff39bcb3d422ed489e461fe78
error: Could not read bc745af6a7a640360a092d8bbde5f68b6ad637be
error: missing object referenced by 'refs/tags/2.2.0'
error: Could not read 79db30a8d7e3e85641c3c838f48b286cc7500814
remote: Enumerating objects: 3863, done.
remote: Counting objects: [...progress output truncated...]