Jenkins build is back to normal : kafka-trunk-jdk8 #3507

2019-04-02 Thread Apache Jenkins Server
See 




Build failed in Jenkins: kafka-trunk-jdk11 #416

2019-04-02 Thread Apache Jenkins Server
See 


Changes:

[colin] KAFKA-8183: Add retries to WorkerUtils#verifyTopics (#6532)

[github] KAFKA-7190: KIP-443; Remove streams overrides on repartition topics

--
[...truncated 2.37 MB...]
org.apache.kafka.trogdor.common.JsonUtilTest > testOpenBraceComesFirst PASSED

org.apache.kafka.trogdor.common.TopologyTest > testAgentNodeNames STARTED

org.apache.kafka.trogdor.common.TopologyTest > testAgentNodeNames PASSED

org.apache.kafka.trogdor.agent.AgentTest > testAgentCreateWorkers STARTED

org.apache.kafka.trogdor.agent.AgentTest > testAgentCreateWorkers PASSED

org.apache.kafka.trogdor.agent.AgentTest > testAgentGetStatus STARTED

org.apache.kafka.trogdor.agent.AgentTest > testAgentGetStatus PASSED

org.apache.kafka.trogdor.agent.AgentTest > testAgentGetUptime STARTED

org.apache.kafka.trogdor.agent.AgentTest > testAgentGetUptime PASSED

org.apache.kafka.trogdor.agent.AgentTest > testAgentStartShutdown STARTED

org.apache.kafka.trogdor.agent.AgentTest > testAgentStartShutdown PASSED

org.apache.kafka.trogdor.agent.AgentTest > testCreateExpiredWorkerIsNotScheduled STARTED

org.apache.kafka.trogdor.agent.AgentTest > testCreateExpiredWorkerIsNotScheduled PASSED

org.apache.kafka.trogdor.agent.AgentTest > testAgentProgrammaticShutdown STARTED

org.apache.kafka.trogdor.agent.AgentTest > testAgentProgrammaticShutdown PASSED

org.apache.kafka.trogdor.agent.AgentTest > testDestroyWorkers STARTED

org.apache.kafka.trogdor.agent.AgentTest > testDestroyWorkers PASSED

org.apache.kafka.trogdor.agent.AgentTest > testKiboshFaults STARTED

org.apache.kafka.trogdor.agent.AgentTest > testKiboshFaults PASSED

org.apache.kafka.trogdor.agent.AgentTest > testAgentExecWithTimeout STARTED

org.apache.kafka.trogdor.agent.AgentTest > testAgentExecWithTimeout PASSED

org.apache.kafka.trogdor.agent.AgentTest > testAgentExecWithNormalExit STARTED

org.apache.kafka.trogdor.agent.AgentTest > testAgentExecWithNormalExit PASSED

org.apache.kafka.trogdor.agent.AgentTest > testWorkerCompletions STARTED

org.apache.kafka.trogdor.agent.AgentTest > testWorkerCompletions PASSED

org.apache.kafka.trogdor.agent.AgentTest > testAgentFinishesTasks STARTED

org.apache.kafka.trogdor.agent.AgentTest > testAgentFinishesTasks PASSED

org.apache.kafka.trogdor.task.TaskSpecTest > testTaskSpecSerialization STARTED

org.apache.kafka.trogdor.task.TaskSpecTest > testTaskSpecSerialization PASSED

org.apache.kafka.trogdor.workload.PayloadGeneratorTest > testConstantPayloadGenerator STARTED

org.apache.kafka.trogdor.workload.PayloadGeneratorTest > testConstantPayloadGenerator PASSED

org.apache.kafka.trogdor.workload.PayloadGeneratorTest > testSequentialPayloadGenerator STARTED

org.apache.kafka.trogdor.workload.PayloadGeneratorTest > testSequentialPayloadGenerator PASSED

org.apache.kafka.trogdor.workload.PayloadGeneratorTest > testNullPayloadGenerator STARTED

org.apache.kafka.trogdor.workload.PayloadGeneratorTest > testNullPayloadGenerator PASSED

org.apache.kafka.trogdor.workload.PayloadGeneratorTest > testUniformRandomPayloadGenerator STARTED

org.apache.kafka.trogdor.workload.PayloadGeneratorTest > testUniformRandomPayloadGenerator PASSED

org.apache.kafka.trogdor.workload.PayloadGeneratorTest > testPayloadIterator STARTED

org.apache.kafka.trogdor.workload.PayloadGeneratorTest > testPayloadIterator PASSED

org.apache.kafka.trogdor.workload.PayloadGeneratorTest > testUniformRandomPayloadGeneratorPaddingBytes STARTED

org.apache.kafka.trogdor.workload.PayloadGeneratorTest > testUniformRandomPayloadGeneratorPaddingBytes PASSED

org.apache.kafka.trogdor.workload.ThrottleTest > testThrottle STARTED

org.apache.kafka.trogdor.workload.ThrottleTest > testThrottle PASSED

org.apache.kafka.trogdor.workload.ExternalCommandWorkerTest > testProcessWithFailedExit STARTED

org.apache.kafka.trogdor.workload.ExternalCommandWorkerTest > testProcessWithFailedExit PASSED

org.apache.kafka.trogdor.workload.ExternalCommandWorkerTest > testProcessNotFound STARTED

org.apache.kafka.trogdor.workload.ExternalCommandWorkerTest > testProcessNotFound PASSED

org.apache.kafka.trogdor.workload.ExternalCommandWorkerTest > testProcessForceKillTimeout STARTED

org.apache.kafka.trogdor.workload.ExternalCommandWorkerTest > testProcessForceKillTimeout PASSED

org.apache.kafka.trogdor.workload.ExternalCommandWorkerTest > testProcessStop STARTED

org.apache.kafka.trogdor.workload.ExternalCommandWorkerTest > testProcessStop PASSED

org.apache.kafka.trogdor.workload.ExternalCommandWorkerTest > testProcessWithNormalExit STARTED

org.apache.kafka.trogdor.workload.ExternalCommandWorkerTest > testProcessWithNormalExit PASSED

org.apache.kafka.trogdor.workload.ConsumeBenchSpecTest > testMaterializeTopicsWithSomePartitions STARTED

org.apache.kafka.trogdor.workload.ConsumeBenchSpecTest > testMaterializeTopicsWithSomePartitions PASSED


Re: [VOTE] KIP-443: Return to default segment.ms and segment.index.bytes in Streams repartition topics

2019-04-02 Thread Guozhang Wang
Hello folks,

I'm closing this voting thread now, thanks to all who have provided your
feedback!

Here's a quick tally:

Binding +1: 4 (Damian, Bill, Manikumar, Guozhang)
Non-binding +1: 2 (John, Mickael).


Guozhang

On Fri, Mar 29, 2019 at 11:32 AM Guozhang Wang  wrote:

> Ah I see, my bad :) Yes that was the documented value in `TopicConfig`,
> and I agree we should just change that as well.
>
> Will update the KIP.
>
>
>
> On Fri, Mar 29, 2019 at 11:27 AM Mickael Maison 
> wrote:
>
>> Hi Guozhang,
>>
>> I know the KIP is about segments configuration but I'm talking about
>> retention.ms which is also explicitly set on repartition topics
>>
>> https://github.com/apache/kafka/blob/trunk/streams/src/main/java/org/apache/kafka/streams/processor/internals/RepartitionTopicConfig.java#L39
>> Streams is setting it to Long.MAX_VALUE, but -1 is the "documented"
>> way to disable the time limit. That's why I said "for consistency" as
>> in practice it's not going to change anything.
>>
>> On Fri, Mar 29, 2019 at 5:09 PM Guozhang Wang  wrote:
>> >
>> > Hello Mickael,
>> >
>> > segment.ms default value in TopicConfig is 7 days, I think this is a
>> > sufficient default value. Do you have any motivations to set it to -1?
>> >
>> >
>> > Guozhang
>> >
>> > On Fri, Mar 29, 2019 at 9:42 AM Mickael Maison <
>> mickael.mai...@gmail.com>
>> > wrote:
>> >
>> > > +1 (non binding)
>> > > For consistency, should we also set retention.ms to -1 instead of
>> > > Long.MAX_VALUE?
>> > >
>> > > On Fri, Mar 29, 2019 at 3:59 PM Manikumar 
>> > > wrote:
>> > > >
>> > > > +1 (binding)
>> > > >
>> > > > Thanks for the KIP.
>> > > >
>> > > > On Fri, Mar 29, 2019 at 9:04 PM Damian Guy 
>> wrote:
>> > > >
>> > > > > +1
>> > > > >
>> > > > > On Fri, 29 Mar 2019 at 01:59, John Roesler 
>> wrote:
>> > > > >
>> > > > > > +1 (nonbinding) from me.
>> > > > > >
>> > > > > > On Thu, Mar 28, 2019 at 7:08 PM Guozhang Wang <
>> wangg...@gmail.com>
>> > > > > wrote:
>> > > > > >
>> > > > > > > Hello folks,
>> > > > > > >
>> > > > > > > I'd like to directly start a voting thread on this simple KIP
>> to
>> > > change
>> > > > > > the
>> > > > > > > default override values for repartition topics:
>> > > > > > >
>> > > > > > >
>> > > > > > >
>> > > > > >
>> > > > >
>> > >
>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-443%3A+Return+to+default+segment.ms+and+segment.index.bytes+in+Streams+repartition+topics
>> > > > > > >
>> > > > > > > The related PR can be found here as well:
>> > > > > > > https://github.com/apache/kafka/pull/6511
>> > > > > > >
>> > > > > > > If you have any thoughts or feedbacks, they are more than
>> welcomed
>> > > as
>> > > > > > well.
>> > > > > > >
>> > > > > > >
>> > > > > > > -- Guozhang
>> > > > > > >
>> > > > > >
>> > > > >
>> > >
>> >
>> >
>> > --
>> > -- Guozhang
>>
>
>
> --
> -- Guozhang
>


-- 
-- Guozhang
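
For readers following the KIP, here is a minimal sketch of how to check the configs in question on a repartition topic with the public AdminClient. The broker address and the topic name are assumptions for illustration; real repartition topics follow the "<application.id>-<operator-name>-repartition" naming pattern.

{code:java}
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.Config;
import org.apache.kafka.common.config.ConfigResource;
import org.apache.kafka.common.config.TopicConfig;

public class RepartitionTopicConfigCheck {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed broker address

        // Hypothetical repartition topic name, used only for illustration.
        String topic = "my-app-KSTREAM-AGGREGATE-STATE-STORE-0000000003-repartition";

        try (AdminClient admin = AdminClient.create(props)) {
            ConfigResource resource = new ConfigResource(ConfigResource.Type.TOPIC, topic);
            Config config = admin.describeConfigs(Collections.singleton(resource))
                                 .all().get().get(resource);
            // Before KIP-443 Streams overrode segment.ms / segment.index.bytes on these
            // topics; after it they fall back to the broker/topic defaults.
            System.out.println(TopicConfig.SEGMENT_MS_CONFIG + " = "
                    + config.get(TopicConfig.SEGMENT_MS_CONFIG).value());
            System.out.println(TopicConfig.RETENTION_MS_CONFIG + " = "
                    + config.get(TopicConfig.RETENTION_MS_CONFIG).value());
        }
    }
}
{code}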


[jira] [Created] (KAFKA-8185) Controller becomes stale and not able to failover the leadership for the partitions

2019-04-02 Thread Kang H Lee (JIRA)
Kang H Lee created KAFKA-8185:
-

 Summary: Controller becomes stale and not able to failover the 
leadership for the partitions
 Key: KAFKA-8185
 URL: https://issues.apache.org/jira/browse/KAFKA-8185
 Project: Kafka
  Issue Type: Bug
  Components: controller
Affects Versions: 1.1.1
Reporter: Kang H Lee
 Attachments: broker12.zip, broker9.zip, zookeeper.zip

Description:

After broker 9 went offline, all partitions led by it went offline. The 
controller attempted to move leadership but ran into an exception while doing 
so:
{code:java}
// [2019-03-26 01:23:34,114] ERROR [PartitionStateMachine controllerId=12] 
Error while moving some partitions to OnlinePartition state 
(kafka.controller.PartitionStateMachine)
java.util.NoSuchElementException: key not found: me-test-1
at scala.collection.MapLike$class.default(MapLike.scala:228)
at scala.collection.AbstractMap.default(Map.scala:59)
at scala.collection.mutable.HashMap.apply(HashMap.scala:65)
at 
kafka.controller.PartitionStateMachine$$anonfun$14.apply(PartitionStateMachine.scala:202)
at 
kafka.controller.PartitionStateMachine$$anonfun$14.apply(PartitionStateMachine.scala:202)
at 
scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
at 
scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
at 
scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
at scala.collection.AbstractTraversable.map(Traversable.scala:104)
at 
kafka.controller.PartitionStateMachine.initializeLeaderAndIsrForPartitions(PartitionStateMachine.scala:202)
at 
kafka.controller.PartitionStateMachine.doHandleStateChanges(PartitionStateMachine.scala:167)
at 
kafka.controller.PartitionStateMachine.handleStateChanges(PartitionStateMachine.scala:116)
at 
kafka.controller.PartitionStateMachine.triggerOnlinePartitionStateChange(PartitionStateMachine.scala:106)
at 
kafka.controller.KafkaController.kafka$controller$KafkaController$$onReplicasBecomeOffline(KafkaController.scala:437)
at 
kafka.controller.KafkaController.kafka$controller$KafkaController$$onBrokerFailure(KafkaController.scala:405)
at 
kafka.controller.KafkaController$BrokerChange$.process(KafkaController.scala:1246)
at 
kafka.controller.ControllerEventManager$ControllerEventThread$$anonfun$doWork$1.apply$mcV$sp(ControllerEventManager.scala:69)
at 
kafka.controller.ControllerEventManager$ControllerEventThread$$anonfun$doWork$1.apply(ControllerEventManager.scala:69)
at 
kafka.controller.ControllerEventManager$ControllerEventThread$$anonfun$doWork$1.apply(ControllerEventManager.scala:69)
at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:31)
at 
kafka.controller.ControllerEventManager$ControllerEventThread.doWork(ControllerEventManager.scala:68)
at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:82)
{code}
The controller was unable to move leadership of partitions led by broker 9 as a 
result. It's worth noting that the controller ran into the same exception when 
the broker came back up online. The controller thinks `me-test-1` is a new 
partition and when attempting to transition it to an online partition, it is 
unable to retrieve its replica assignment from 
ControllerContext#partitionReplicaAssignment. I need to look through the code 
to figure out if there's a race condition or situations where we remove the 
partition from ControllerContext#partitionReplicaAssignment but might still 
leave it in PartitionStateMachine#partitionState.
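
For illustration only (the real code is the Scala PartitionStateMachine), a self-contained Java sketch of the defensive-lookup direction the analysis above points at: skip a partition whose replica assignment is missing from the controller context instead of assuming the map contains it. The partition names mirror the report; everything else is hypothetical.

{code:java}
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy model of the suspected race: a partition is still tracked by the state
// machine while its replica assignment is already gone from the controller
// context. A plain map lookup (like Scala's Map#apply) then fails with
// "key not found"; a guarded lookup lets the controller skip it instead.
public class PartitionLookupSketch {
    public static void main(String[] args) {
        Map<String, List<Integer>> replicaAssignment = new HashMap<>();
        replicaAssignment.put("me-test-0", Arrays.asList(9, 10, 11));
        // "me-test-1" is intentionally missing, mirroring the reported state.

        for (String partition : Arrays.asList("me-test-0", "me-test-1")) {
            List<Integer> replicas = replicaAssignment.get(partition);
            if (replicas == null) {
                System.out.println("Skipping " + partition + ": no replica assignment in context");
                continue; // instead of throwing NoSuchElementException
            }
            System.out.println("Electing leader for " + partition + " from " + replicas);
        }
    }
}
{code}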

They had to force a controller change to recover from the offline status.

Sequence of events:

* Broker 9 was restarted between 2019-03-26 01:22:54,236 and 2019-03-26 01:27:30,967; this was an unclean shutdown.
* From 2019-03-26 01:27:30,967, broker 9 was rebuilding indexes. Broker 9 wasn't able to process data during this time.
* At 2019-03-26 01:29:36,741, broker 9 started loading replicas.
* [2019-03-26 01:29:36,202] ERROR [KafkaApi-9] Number of alive brokers '0' does not meet the required replication factor '3' for the offsets topic (configured via 'offsets.topic.replication.factor'). This error can be ignored if the cluster is starting up and not all brokers are up yet. (kafka.server.KafkaApis)
* At 2019-03-26 01:29:37,270, broker 9 started reporting offline partitions.

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-8184) Update IQ docs to include session stores

2019-04-02 Thread Sophie Blee-Goldman (JIRA)
Sophie Blee-Goldman created KAFKA-8184:
--

 Summary: Update IQ docs to include session stores
 Key: KAFKA-8184
 URL: https://issues.apache.org/jira/browse/KAFKA-8184
 Project: Kafka
  Issue Type: Bug
  Components: streams
Reporter: Sophie Blee-Goldman


The Interactive Queries docs are out of date, and currently only cover 
queryable key-value and window stores. Session stores can also be queried and 
should be included on this page.
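
For reference, a minimal sketch of the kind of session-store IQ example the docs could add, assuming a started KafkaStreams instance and a hypothetical store named "session-store":

{code:java}
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.kstream.Windowed;
import org.apache.kafka.streams.state.KeyValueIterator;
import org.apache.kafka.streams.state.QueryableStoreTypes;
import org.apache.kafka.streams.state.ReadOnlySessionStore;

public class SessionStoreQueryExample {
    // "session-store" is a hypothetical store name; the KafkaStreams instance is
    // assumed to be started elsewhere with a topology that materializes it.
    static void printSessions(KafkaStreams streams, String userId) {
        ReadOnlySessionStore<String, Long> store =
                streams.store("session-store", QueryableStoreTypes.sessionStore());
        try (KeyValueIterator<Windowed<String>, Long> sessions = store.fetch(userId)) {
            while (sessions.hasNext()) {
                KeyValue<Windowed<String>, Long> session = sessions.next();
                System.out.println("Session " + session.key.window().start() + "-"
                        + session.key.window().end() + " count=" + session.value);
            }
        }
    }
}
{code}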



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[DISCUSS] KIP-449: Add connector contexts to Connect worker logs

2019-04-02 Thread Randall Hauch
I've been working on https://github.com/apache/kafka/pull/5743 for a while,
but there were a number of comments, suggestions, and mild concerns on the
PR. One of those comments was that changing the Connect log content in this
way probably warrants a KIP. So here it is:

https://cwiki.apache.org/confluence/display/KAFKA/KIP-449%3A+Add+connector+contexts+to+Connect+worker+logs

I've also updated my PR to reflect the KIP. Please reply with comments and/or
feedback.

Best regards,

Randall
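
For background, one common way to carry such per-connector context into log lines is an SLF4J/Log4j MDC entry referenced from the log4j pattern. The sketch below only illustrates that general mechanism; the key name and helper are assumptions, not necessarily what the KIP specifies.

{code:java}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.slf4j.MDC;

public class ConnectorContextLoggingSketch {
    private static final Logger log = LoggerFactory.getLogger(ConnectorContextLoggingSketch.class);

    // Hypothetical helper: wrap work done on behalf of a connector task so every
    // log line in between carries the connector context. A log4j pattern such as
    // "%X{connector.context}%m%n" would then render that context.
    static void runWithConnectorContext(String connectorName, int taskId, Runnable work) {
        MDC.put("connector.context", "[" + connectorName + "|task-" + taskId + "] ");
        try {
            work.run();
        } finally {
            MDC.remove("connector.context");
        }
    }

    public static void main(String[] args) {
        runWithConnectorContext("my-jdbc-source", 0,
                () -> log.info("polling source for new records"));
    }
}
{code}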


[DISCUSS] KIP-446: Add changelog topic configuration to KTable suppress

2019-04-02 Thread Maarten Duijn
Kafka Streams currently does not allow configuring the internal changelog
topic created by KTable.suppress. This KIP introduces a design for adding
topic configurations to the suppress API.

https://cwiki.apache.org/confluence/display/KAFKA/KIP-446%3A+Add+changelog+topic+configuration+to+KTable+suppress
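
For context, a minimal sketch of today's suppress API (Kafka 2.1+), which creates the internal changelog topic for its buffer without any way to configure it; the commented-out call shows roughly where a KIP-446-style hook could plug in, with a purely hypothetical method name.

{code:java}
import java.time.Duration;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.Suppressed;
import org.apache.kafka.streams.kstream.Suppressed.BufferConfig;

public class SuppressChangelogConfigSketch {
    // Current API: suppress() buffers records and backs the buffer with an internal
    // changelog topic, but offers no way to set that topic's configs.
    static KTable<String, Long> suppressed(KTable<String, Long> counts) {
        return counts.suppress(
                Suppressed.untilTimeLimit(Duration.ofMinutes(5), BufferConfig.unbounded()));
        // A KIP-446-style addition might look roughly like the (hypothetical) call below,
        // mirroring Materialized#withLoggingEnabled; the exact name and shape are whatever
        // the KIP discussion settles on:
        //   BufferConfig.unbounded().withLoggingEnabled(Map.of("retention.ms", "3600000"))
    }
}
{code}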



[DISCUSS] KIP-448: Add State Stores Unit Test Support to Kafka Streams Test Utils

2019-04-02 Thread Yishun Guan
Hi All,

I'd like to start a discussion on KIP-448
(https://cwiki.apache.org/confluence/x/SAeZBg). It is about adding
mock state stores and relevant components for testing purposes.

Here is the JIRA: https://issues.apache.org/jira/browse/KAFKA-6460

This is a rough KIP draft; reviews and comments are appreciated. It
seems to be tricky, and some requirements and details still need to be
discussed.

Thanks,
Yishun
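
For context, a minimal sketch of the boilerplate the KIP aims to remove: wiring a real in-memory store into MockProcessorContext (from kafka-streams-test-utils) by hand so a processor's state access can be unit-tested. Config values below are dummies.

{code:java}
import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.processor.MockProcessorContext;
import org.apache.kafka.streams.state.KeyValueStore;
import org.apache.kafka.streams.state.Stores;

public class ProcessorStateStoreTestSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "test-app");       // dummy values for the mock
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "dummy:1234");
        MockProcessorContext context = new MockProcessorContext(props);

        // Today a test wires a real in-memory store into the mock context by hand;
        // ready-made mock stores would make this setup unnecessary.
        KeyValueStore<String, Long> store = Stores.keyValueStoreBuilder(
                        Stores.inMemoryKeyValueStore("counts"),
                        Serdes.String(), Serdes.Long())
                .withLoggingDisabled() // changelog is not supported by MockProcessorContext
                .build();
        store.init(context, store);

        store.put("key", 42L);
        System.out.println("count = " + store.get("key"));
    }
}
{code}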


Re: [DISCUSS] KIP-440: Extend Connect Converter to support headers

2019-04-02 Thread Randall Hauch
Thanks for the submission, Yaroslav -- and for building on the suggestion
of Jeremy C in https://issues.apache.org/jira/browse/KAFKA-7273. This is a
nice and simple approach that is backward compatible.

The KIP looks good so far, but I do have two specific suggestions to make
things just a bit more explicit. First, both the "Public API" and "Proposed
Changes" sections could be more explicit that the methods in the proposal
are being added; as it's currently written a reader must infer that.
Second, the "Proposed Changes" section needs to more clearly specify that
the WorkerSourceTask will now use the new fromConnectData method with the
headers instead of the existing method, and that the WorkerSinkTask will
now use the toConnectData method with the headers instead of the existing
method.

Best regards,

Randall


On Mon, Mar 11, 2019 at 11:01 PM Yaroslav Tkachenko 
wrote:

> Hello,
>
> I'd like to propose a KIP that extends Kafka Connect Converter interface:
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-440%3A+Extend+Connect+Converter+to+support+headers
>
> Thanks for considering!
>
> --
> Yaroslav Tkachenko
> sap1ens.com
>
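
For readers skimming the thread, a sketch of the backward-compatible shape being discussed, as I read the proposal (not the final committed interface): header-aware default methods that delegate to the existing Converter methods so current implementations keep working.

{code:java}
import java.util.Map;
import org.apache.kafka.common.header.Headers;
import org.apache.kafka.connect.data.Schema;
import org.apache.kafka.connect.data.SchemaAndValue;

// Illustrative stand-in for the Converter interface with the proposed additions.
public interface HeaderAwareConverterSketch {

    void configure(Map<String, ?> configs, boolean isKey);

    byte[] fromConnectData(String topic, Schema schema, Object value);

    SchemaAndValue toConnectData(String topic, byte[] value);

    default byte[] fromConnectData(String topic, Headers headers, Schema schema, Object value) {
        return fromConnectData(topic, schema, value); // ignore headers by default
    }

    default SchemaAndValue toConnectData(String topic, Headers headers, byte[] value) {
        return toConnectData(topic, value); // ignore headers by default
    }
}
{code}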


[jira] [Resolved] (KAFKA-6758) Default "" consumer group tracks committed offsets, but is otherwise not a real group

2019-04-02 Thread David van Geest (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-6758?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David van Geest resolved KAFKA-6758.

Resolution: Duplicate

Marking this as a duplicate of KAFKA-6774, which has been fixed.

> Default "" consumer group tracks committed offsets, but is otherwise not a 
> real group
> -
>
> Key: KAFKA-6758
> URL: https://issues.apache.org/jira/browse/KAFKA-6758
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.11.0.2
>Reporter: David van Geest
>Assignee: Stanislav Kozlovski
>Priority: Major
>
> *To reproduce:*
>  * Use the default config for `group.id` of "" (the empty string)
>  * Use the default config for `enable.auto.commit` of `true`
>  * Use manually assigned partitions with `assign`
> *Actual (unexpected) behaviour:*
> Consumer offsets are stored for the "" group. Example:
> {{~ $ /opt/kafka/kafka_2.11-0.11.0.2/bin/kafka-consumer-groups.sh 
> --bootstrap-server localhost:9092 --describe --group ""}}
>  {{Note: This will only show information about consumers that use the Java 
> consumer API (non-ZooKeeper-based consumers).}}
> {{Consumer group '' has no active members.}}
> {{TOPIC PARTITION CURRENT-OFFSET LOG-END-OFFSET LAG CONSUMER-ID HOST 
> CLIENT-ID}}
>  {{my_topic 54 7859593 7865082 5489 - - -}}
>  {{my_topic 5 14252813 14266419 13606 - - -}}
>  {{my_topic 39 19099099 19122441 23342 - - -}}
>  {{my_topic 43 16434573 16449180 14607 - - -.}}
> 
>  
> However, the "" is not a real group. It doesn't show up with:
> {{~ $ /opt/kafka/kafka_2.11-0.11.0.2/bin/kafka-consumer-groups.sh 
> --bootstrap-server localhost:9092 --list}}
> You also can't do dynamic partition assignment with it - if you try to 
> `subscribe` when using the default "" group ID, you get:
> {{AbstractCoordinator: Attempt to join group  failed due to fatal error: The 
> configured groupId is invalid}}
> *Better behaviours:*
> (any of these would be preferable, in my opinion)
>  * Don't commit offsets with the "" group, and log a warning telling the user 
> that `enable.auto.commit = true` is meaningless in this situation. This is 
> what I would have expected.
>  * Don't have a default `group.id`. Some of my reading indicates that the new 
> consumer basically needs a `group.id` to function. If so, force users to 
> choose a group ID so that they're more aware of what will happen.
>  * Have a default `group.id` of `default`, and make it a real consumer group. 
> That is, it shows up in lists of groups, it has dynamic partitioning, etc.
> As a user, when I don't set `group.id` I expect that I'm not using consumer 
> groups. This is confirmed to me by listing the consumer groups on the broker 
> and not seeing anything. Therefore, I expect that there will be no offset 
> tracking in Kafka.
> In my specific application, I was wanting `auto.offset.reset` to kick in so 
> that a failed consumer would start at the `latest` offset. However, it 
> started at this unexpectedly stored offset instead.
>  
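
For anyone hitting the same surprise, a minimal consumer sketch of the configuration the report describes and the two obvious ways around it: disable auto-commit when using assign(), or set an explicit group.id. Broker address and topic/partition are illustrative.

{code:java}
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.serialization.StringDeserializer;

public class AssignWithoutGroupExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed broker
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        // The surprising combination from the report: group.id left at its default ""
        // while enable.auto.commit defaults to true, so offsets get committed under "".
        // Either disable auto-commit when using assign() ...
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false");
        // ... or give the consumer an explicit, real group:
        // props.put(ConsumerConfig.GROUP_ID_CONFIG, "my-service");
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.assign(Collections.singletonList(new TopicPartition("my_topic", 54)));
            consumer.poll(java.time.Duration.ofSeconds(1));
        }
    }
}
{code}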



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)



[jira] [Created] (KAFKA-8183) Trogdor - ProduceBench should retry on UnknownTopicOrPartitionException during topic creation

2019-04-02 Thread Stanislav Kozlovski (JIRA)
Stanislav Kozlovski created KAFKA-8183:
--

 Summary: Trogdor - ProduceBench should retry on 
UnknownTopicOrPartitionException during topic creation
 Key: KAFKA-8183
 URL: https://issues.apache.org/jira/browse/KAFKA-8183
 Project: Kafka
  Issue Type: Improvement
Reporter: Stanislav Kozlovski
Assignee: Stanislav Kozlovski


There exists a race condition in the Trogdor produce bench worker code where 
`WorkerUtils#createTopics()` [notices the topic 
exists|https://github.com/apache/kafka/blob/4824dc994d7fc56b7540b643a78aadb4bdd0f14d/tools/src/main/java/org/apache/kafka/trogdor/common/WorkerUtils.java#L159]
 yet when it goes on to verify the topics, the DescribeTopics call throws an 
`UnknownTopicOrPartitionException`.

We should add sufficient retries such that this does not fail the task.
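
A rough sketch of the kind of bounded retry the ticket asks for, written against the public AdminClient; WorkerUtils has its own helpers, so the names and backoff policy here are illustrative only.

{code:java}
import java.util.Collection;
import java.util.Map;
import java.util.concurrent.ExecutionException;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.TopicDescription;
import org.apache.kafka.common.errors.UnknownTopicOrPartitionException;

public class VerifyTopicsWithRetry {
    // Retry DescribeTopics a bounded number of times to ride out the window between
    // "topic already exists" on create and the topic metadata actually being visible.
    static Map<String, TopicDescription> describeWithRetries(
            AdminClient admin, Collection<String> topics, int maxTries, long backoffMs)
            throws Exception {
        for (int attempt = 1; ; attempt++) {
            try {
                return admin.describeTopics(topics).all().get();
            } catch (ExecutionException e) {
                if (e.getCause() instanceof UnknownTopicOrPartitionException && attempt < maxTries) {
                    Thread.sleep(backoffMs); // metadata not propagated yet; try again
                } else {
                    throw e;
                }
            }
        }
    }
}
{code}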



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-8182) IllegalStateException in NetworkClient.initiateConnect when handling UnknownHostException thrown from ClusterConnectionStates.connecting

2019-04-02 Thread Mark Anderson (JIRA)
Mark Anderson created KAFKA-8182:


 Summary: IllegalStateException in NetworkClient.initiateConnect 
when handling UnknownHostException thrown from 
ClusterConnectionStates.connecting 
 Key: KAFKA-8182
 URL: https://issues.apache.org/jira/browse/KAFKA-8182
 Project: Kafka
  Issue Type: Bug
  Components: network
Affects Versions: 2.2.0
Reporter: Mark Anderson


When NetworkClient.initiateConnect calls connectionStates.connecting, an 
UnknownHostException can be thrown by ClientUtils.resolve when creating a new 
NodeConnectionState.

In the above case the nodeState map within ClusterConnectionStates will not 
contain an entry for the node ID.

The catch clause within NetworkClient.initiateConnect immediately calls 
connectionStates.disconnected, but this makes the assumption that a 
NodeConnectionState entry exists for the node ID. This assumption is incorrect 
when an UnknownHostException is thrown as described above and leads to an 
IllegalStateException like the following:
{noformat}
java.lang.IllegalStateException: No entry found for connection 2147483645|
  at 
org.apache.kafka.clients.ClusterConnectionStates.nodeState(ClusterConnectionStates.java:339)|
  at 
org.apache.kafka.clients.ClusterConnectionStates.disconnected(ClusterConnectionStates.java:143)|
  at 
org.apache.kafka.clients.NetworkClient.initiateConnect(NetworkClient.java:926)|
   at 
org.apache.kafka.clients.NetworkClient.ready(NetworkClient.java:287)|{noformat}
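
For illustration only, a toy model of the failure mode (not the actual NetworkClient code): the connecting step can throw before the node entry is registered, so the catch block must not unconditionally mark the node disconnected. One defensive option is sketched below.

{code:java}
import java.net.UnknownHostException;
import java.util.HashMap;
import java.util.Map;

public class InitiateConnectSketch {
    private final Map<String, String> nodeState = new HashMap<>();

    void connecting(String nodeId, String host) throws UnknownHostException {
        if (host.startsWith("bad.")) {
            throw new UnknownHostException(host); // thrown before the entry is added
        }
        nodeState.put(nodeId, "CONNECTING");
    }

    void disconnected(String nodeId) {
        if (!nodeState.containsKey(nodeId)) {
            throw new IllegalStateException("No entry found for connection " + nodeId);
        }
        nodeState.put(nodeId, "DISCONNECTED");
    }

    void initiateConnect(String nodeId, String host) {
        try {
            connecting(nodeId, host);
            // ... open the socket ...
        } catch (Exception e) {
            // Defensive option: only flip the state if the entry was actually created,
            // instead of unconditionally calling disconnected() and hitting the
            // IllegalStateException shown in the description.
            if (nodeState.containsKey(nodeId)) {
                disconnected(nodeId);
            }
        }
    }

    public static void main(String[] args) {
        InitiateConnectSketch sketch = new InitiateConnectSketch();
        sketch.initiateConnect("2147483645", "bad.example.invalid");
        System.out.println("survived unresolvable host without IllegalStateException");
    }
}
{code}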



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: Add Jira permission and wiki permission

2019-04-02 Thread slim ouertani
Hi,

The account details are as follows:
Full Name Slim Ouertani
Email ouert...@gmail.com


Thanks,
Slim


On Mon, Apr 1, 2019 at 4:49 PM Bill Bejeck  wrote:

> Hi,
>
> You're already in Jira as a contributor, but I can't seem to find you in
> the Apache Confluence (https://cwiki.apache.org/confluence). Can you confirm
> your account there?
>
> Thanks,
> Bill
>
> On Mon, Apr 1, 2019 at 1:42 AM slim ouertani  wrote:
>
> > Hello,
> >
> > user id: ouertani
> >
> > Thanks in advance.
> >
>


[jira] [Reopened] (KAFKA-4600) Consumer proceeds on when ConsumerRebalanceListener fails

2019-04-02 Thread Braedon Vickers (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-4600?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Braedon Vickers reopened KAFKA-4600:


Hi [~guozhang],

I'm going to reopen this, as the issue I raised hasn't been addressed.

[~dana.powers] is correct - this issue is around exceptions thrown by the 
`ConsumerRebalanceListener` implementation itself, _not_ about failures in 
`SyncGroup`.

https://issues.apache.org/jira/browse/KAFKA-5154 is unrelated, and the [PR you 
referenced|https://github.com/apache/kafka/pull/3181] does not fix this issue.

As you can see from the code snippet posted above, the client still catches and 
squashes any exception (other than `WakeupException` and `InterruptException`) 
raised by `ConsumerRebalanceListener.onPartitionsAssigned()`, causing the issue 
described in this ticket.

Regards,
Braedon

> Consumer proceeds on when ConsumerRebalanceListener fails
> -
>
> Key: KAFKA-4600
> URL: https://issues.apache.org/jira/browse/KAFKA-4600
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 0.10.1.1
>Reporter: Braedon Vickers
>Priority: Major
> Fix For: 0.11.0.0
>
>
> One of the use cases for a ConsumerRebalanceListener is to load state 
> necessary for processing a partition when it is assigned. However, when 
> ConsumerRebalanceListener.onPartitionsAssigned() fails for some reason (i.e. 
> the state isn't loaded), the error is logged and the consumer proceeds on as 
> if nothing happened, happily consuming messages from the new partition. When 
> the state is relied upon for correct processing, this can be very bad, e.g. 
> data loss can occur.
> It would be better if the error was propagated up so it could be dealt with 
> normally. At the very least the assignment should fail so the consumer 
> doesn't see any messages from the new partitions, and the rebalance can be 
> reattempted.
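
Until the exception is propagated, one workaround sketch (assuming an application-specific loadStateFor step) is a listener that records its own failure and pauses the partitions so the poll loop can react:

{code:java}
import java.util.Collection;
import java.util.concurrent.atomic.AtomicReference;
import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.ConsumerRebalanceListener;
import org.apache.kafka.common.TopicPartition;

// Because the consumer logs and swallows exceptions from onPartitionsAssigned(), a
// listener that must load state can record the failure itself so the application's
// poll loop can stop (or keep the partitions paused) instead of consuming with
// missing state.
public class StateLoadingRebalanceListener implements ConsumerRebalanceListener {
    private final Consumer<?, ?> consumer;
    private final AtomicReference<Exception> assignmentFailure = new AtomicReference<>();

    public StateLoadingRebalanceListener(Consumer<?, ?> consumer) {
        this.consumer = consumer;
    }

    @Override
    public void onPartitionsAssigned(Collection<TopicPartition> partitions) {
        try {
            loadStateFor(partitions); // hypothetical application-specific state loading
        } catch (Exception e) {
            assignmentFailure.set(e);
            consumer.pause(partitions); // don't consume records we can't process correctly
        }
    }

    @Override
    public void onPartitionsRevoked(Collection<TopicPartition> partitions) { }

    /** Call from the poll loop after each poll() to surface swallowed failures. */
    public void rethrowIfFailed() throws Exception {
        Exception e = assignmentFailure.getAndSet(null);
        if (e != null) {
            throw e;
        }
    }

    private void loadStateFor(Collection<TopicPartition> partitions) {
        // placeholder for application state loading
    }
}
{code}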



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)