[jira] [Commented] (KAFKA-7970) Missing topic causes service shutdown without exception

2019-06-25 Thread Ashish Vyas (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7970?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16872926#comment-16872926
 ] 

Ashish Vyas commented on KAFKA-7970:


[~guozhang] any ETA for this one?

> Missing topic causes service shutdown without exception
> ---
>
> Key: KAFKA-7970
> URL: https://issues.apache.org/jira/browse/KAFKA-7970
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.1.0
>Reporter: Jonny Heavey
>Priority: Minor
>
> When launching a KafkaStreams application that depends on a topic that 
> doesn't exist, the streams application correctly logs an error such as:
> " is unknown yet during rebalance, please make sure they have 
> been pre-created before starting the Streams application."
> The stream is then shut down; however, no exception is thrown to indicate that
> an error has occurred.
> In our circumstances, we run our streams app inside a container. The streams
> service is shut down, but the process does not exit, meaning that the
> container does not crash (reducing visibility of the issue).
> As no exception is thrown in the missing topic scenario described above, our 
> application code has no way to determine that something is wrong that would 
> then allow it to terminate the process.
>  
> Could the onPartitionsAssigned method in StreamThread.java throw an exception
> when it decides to shut down the stream (somewhere around line 264)?
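
A hedged sketch of a possible stopgap for the container scenario described above; the listener body and exit code are illustrative assumptions, not something proposed in the ticket:

{code:java}
import org.apache.kafka.streams.KafkaStreams;

// Assumed workaround sketch: exit the process when Streams shuts itself down,
// so the container crashes visibly and can be restarted by the orchestrator.
final KafkaStreams streams = new KafkaStreams(topology, props); // topology/props assumed
streams.setStateListener((newState, oldState) -> {
    if (newState == KafkaStreams.State.ERROR || newState == KafkaStreams.State.NOT_RUNNING) {
        System.exit(1); // surface the silent shutdown to the container runtime
    }
});
streams.start();
{code}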



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-8603) Document upgrade path

2019-06-25 Thread Sophie Blee-Goldman (JIRA)
Sophie Blee-Goldman created KAFKA-8603:
--

 Summary: Document upgrade path
 Key: KAFKA-8603
 URL: https://issues.apache.org/jira/browse/KAFKA-8603
 Project: Kafka
  Issue Type: Sub-task
  Components: consumer, streams
Reporter: Sophie Blee-Goldman


Users need to follow a specific upgrade path in order to perform a live upgrade
smoothly and safely. We should very clearly document the steps needed to
upgrade a Consumer and a Streams app (note that they will be different).
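
Assuming this sub-task concerns the cooperative-rebalancing upgrade (an inference from the consumer and streams components, not stated in the ticket), the consumer side of such a path is typically a double rolling bounce over the assignor config, roughly:

{code:java}
import java.util.Arrays;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.CooperativeStickyAssignor;
import org.apache.kafka.clients.consumer.RangeAssignor;

// Hedged sketch: the first rolling bounce lists both assignors so upgraded and
// non-upgraded members can still agree on a common protocol.
final Properties props = new Properties();
props.put(ConsumerConfig.PARTITION_ASSIGNMENT_STRATEGY_CONFIG,
        Arrays.asList(CooperativeStickyAssignor.class.getName(), RangeAssignor.class.getName()));
// Second rolling bounce: drop RangeAssignor, leaving only the cooperative assignor.
{code}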



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-8596) Kafka topic pre-creation error message needs to be passed to application as an exception

2019-06-25 Thread Ashish Vyas (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8596?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16872806#comment-16872806
 ] 

Ashish Vyas commented on KAFKA-8596:


Thanks for looking into this. I do have an uncaughtExceptionHandler registered (and
have seen some exceptions once in a while after the stream has successfully
started), but it doesn't get triggered in this case, where the stream cannot
start because of a non-existent topic. I don't have a setStateListener registered.

 

I think you can close this, but not as "not a problem"; maybe as a duplicate of
this open issue: https://issues.apache.org/jira/browse/KAFKA-7970 ?

> Kafka topic pre-creation error message needs to be passed to application as 
> an exception
> 
>
> Key: KAFKA-8596
> URL: https://issues.apache.org/jira/browse/KAFKA-8596
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Affects Versions: 2.1.1
>Reporter: Ashish Vyas
>Priority: Minor
>
> If I don't have a topic pre-created, I get an error log that reads "is
> unknown yet during rebalance, please make sure they have been pre-created
> before starting the Streams application." Ideally, I expect an exception to
> be thrown here that I can catch in my application and decide what I want to do.
>  
> Without this, my app keeps running while the actual functionality doesn't
> work, making it time-consuming to debug. I want to stop the application right
> at this point.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-8602) StreamThread Dies Because Restore Consumer is not Subscribed to Any Topic

2019-06-25 Thread Bruno Cadonna (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8602?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bruno Cadonna updated KAFKA-8602:
-
Description: 
StreamThread dies with the following exception:
{code:java}
java.lang.IllegalStateException: Consumer is not subscribed to any topics or 
assigned any partitions
at 
org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1206)
at 
org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1199)
at 
org.apache.kafka.streams.processor.internals.StreamThread.maybeUpdateStandbyTasks(StreamThread.java:1126)
at 
org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:923)
at 
org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:805)
at 
org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:774)
{code}
The reason is that the restore consumer is not subscribed to any topic. This
happens when a StreamThread gets assigned standby tasks for sub-topologies that
contain only state stores with logging disabled.

To reproduce the bug, start two applications, each with one StreamThread and one
standby replica, and the following topology. The input topic should have
two partitions:
{code:java}
final StreamsBuilder builder = new StreamsBuilder();
final String stateStoreName = "myTransformState";
final StoreBuilder<KeyValueStore<Integer, Integer>> keyValueStoreBuilder =
    Stores.keyValueStoreBuilder(Stores.persistentKeyValueStore(stateStoreName),
        Serdes.Integer(),
        Serdes.Integer())
    .withLoggingDisabled();
builder.addStateStore(keyValueStoreBuilder);
builder.stream(INPUT_TOPIC, Consumed.with(Serdes.Integer(), Serdes.Integer()))
    .transform(() -> new Transformer<Integer, Integer, KeyValue<Integer, Integer>>() {
        private KeyValueStore<Integer, Integer> state;

        @SuppressWarnings("unchecked")
        @Override
        public void init(final ProcessorContext context) {
            state = (KeyValueStore<Integer, Integer>) context.getStateStore(stateStoreName);
        }

        @Override
        public KeyValue<Integer, Integer> transform(final Integer key, final Integer value) {
            final KeyValue<Integer, Integer> result = new KeyValue<>(key, value);
            return result;
        }

        @Override
        public void close() {}
    }, stateStoreName)
    .to(OUTPUT_TOPIC);
{code}
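
For completeness, a sketch of the per-instance configuration the reproduction steps imply; the application id and bootstrap address are placeholders:

{code:java}
import java.util.Properties;
import org.apache.kafka.streams.StreamsConfig;

// One StreamThread and one standby replica per instance, as described above.
final Properties props = new Properties();
props.put(StreamsConfig.APPLICATION_ID_CONFIG, "standby-repro");      // placeholder
props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");  // placeholder
props.put(StreamsConfig.NUM_STREAM_THREADS_CONFIG, 1);
props.put(StreamsConfig.NUM_STANDBY_REPLICAS_CONFIG, 1);
{code}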
Both StreamThreads should die with the above exception.

The root cause is that standby tasks are created although all state stores of 
the sub-topology have logging disabled.  

  was:
StreamThread dies with the following exception:
{code:java}
java.lang.IllegalStateException: Consumer is not subscribed to any topics or 
assigned any partitions
at 
org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1206)
at 
org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1199)
at 
org.apache.kafka.streams.processor.internals.StreamThread.maybeUpdateStandbyTasks(StreamThread.java:1126)
at 
org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:923)
at 
org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:805)
at 
org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:774)
{code}
The reason is that the restore consumer is not subscribed to any topic. This
happens when a StreamThread gets assigned standby tasks for sub-topologies that
contain only state stores with logging disabled.

To reproduce the bug, start two applications, each with one StreamThread and one
standby replica, and the following topology. The input topic should have
two partitions:
{code:java}
final StreamsBuilder builder = new StreamsBuilder();
final String stateStoreName = "myTransformState";
final StoreBuilder<KeyValueStore<Integer, Integer>> keyValueStoreBuilder =
    Stores.keyValueStoreBuilder(Stores.persistentKeyValueStore(stateStoreName),
        Serdes.Integer(),
        Serdes.Integer())
    .withLoggingDisabled();
builder.addStateStore(keyValueStoreBuilder);
builder.stream(INPUT_TOPIC, Consumed.with(Serdes.Integer(), Serdes.Integer()))
    .transform(() -> new Transformer<Integer, Integer, KeyValue<Integer, Integer>>() {
        private KeyValueStore<Integer, Integer> state;

        @SuppressWarnings("unchecked")
        @Override
        public void init(final ProcessorContext context) {
            state = (KeyValueStore<Integer, Integer>) context.getStateStore(stateStoreName);
        }

        @Override
        public KeyValue<Integer, Integer> transform(final Integer key, final Integer value) {
            final KeyValue<Integer, Integer> result = new KeyValue<>(key, value);
            return result;
        }

        @Override
        public void close() {}
    }, stateStoreName)
    .to(OUTPUT_TOPIC);
{code}
Both StreamThreads should die with the above exception.

The root cause is that standby tasks are created although all state stores of
the sub-topology have a logging disabled.

[jira] [Created] (KAFKA-8602) StreamThread Dies Because Restore Consumer is not Subscribed to Any Topic

2019-06-25 Thread Bruno Cadonna (JIRA)
Bruno Cadonna created KAFKA-8602:


 Summary: StreamThread Dies Because Restore Consumer is not 
Subscribed to Any Topic
 Key: KAFKA-8602
 URL: https://issues.apache.org/jira/browse/KAFKA-8602
 Project: Kafka
  Issue Type: Bug
  Components: streams
Affects Versions: 2.1.1
Reporter: Bruno Cadonna


StreamThread dies with the following exception:
{code:java}
java.lang.IllegalStateException: Consumer is not subscribed to any topics or 
assigned any partitions
at 
org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1206)
at 
org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1199)
at 
org.apache.kafka.streams.processor.internals.StreamThread.maybeUpdateStandbyTasks(StreamThread.java:1126)
at 
org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:923)
at 
org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:805)
at 
org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:774)
{code}
The reason is that the restore consumer is not subscribed to any topic. This
happens when a StreamThread gets assigned standby tasks for sub-topologies that
contain only state stores with logging disabled.

To reproduce the bug, start two applications, each with one StreamThread and one
standby replica, and the following topology. The input topic should have
two partitions:
{code:java}
final StreamsBuilder builder = new StreamsBuilder();
final String stateStoreName = "myTransformState";
final StoreBuilder<KeyValueStore<Integer, Integer>> keyValueStoreBuilder =
    Stores.keyValueStoreBuilder(Stores.persistentKeyValueStore(stateStoreName),
        Serdes.Integer(),
        Serdes.Integer())
    .withLoggingDisabled();
builder.addStateStore(keyValueStoreBuilder);
builder.stream(INPUT_TOPIC, Consumed.with(Serdes.Integer(), Serdes.Integer()))
    .transform(() -> new Transformer<Integer, Integer, KeyValue<Integer, Integer>>() {
        private KeyValueStore<Integer, Integer> state;

        @SuppressWarnings("unchecked")
        @Override
        public void init(final ProcessorContext context) {
            state = (KeyValueStore<Integer, Integer>) context.getStateStore(stateStoreName);
        }

        @Override
        public KeyValue<Integer, Integer> transform(final Integer key, final Integer value) {
            final KeyValue<Integer, Integer> result = new KeyValue<>(key, value);
            return result;
        }

        @Override
        public void close() {}
    }, stateStoreName)
    .to(OUTPUT_TOPIC);
{code}
Both StreamThreads should die with the above exception.

The root cause is that standby tasks are created although all state stores of
the sub-topology have logging disabled.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (KAFKA-8602) StreamThread Dies Because Restore Consumer is not Subscribed to Any Topic

2019-06-25 Thread Bruno Cadonna (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8602?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bruno Cadonna reassigned KAFKA-8602:


Assignee: Bruno Cadonna

> StreamThread Dies Because Restore Consumer is not Subscribed to Any Topic
> -
>
> Key: KAFKA-8602
> URL: https://issues.apache.org/jira/browse/KAFKA-8602
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.1.1
>Reporter: Bruno Cadonna
>Assignee: Bruno Cadonna
>Priority: Critical
>
> StreamThread dies with the following exception:
> {code:java}
> java.lang.IllegalStateException: Consumer is not subscribed to any topics or 
> assigned any partitions
>   at 
> org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1206)
>   at 
> org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1199)
>   at 
> org.apache.kafka.streams.processor.internals.StreamThread.maybeUpdateStandbyTasks(StreamThread.java:1126)
>   at 
> org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:923)
>   at 
> org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:805)
>   at 
> org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:774)
> {code}
> The reason is that the restore consumer is not subscribed to any topic. This
> happens when a StreamThread gets assigned standby tasks for sub-topologies
> that contain only state stores with logging disabled.
> To reproduce the bug, start two applications, each with one StreamThread and
> one standby replica, and the following topology. The input topic should have
> two partitions:
> {code:java}
> final StreamsBuilder builder = new StreamsBuilder();
> final String stateStoreName = "myTransformState";
> final StoreBuilder<KeyValueStore<Integer, Integer>> keyValueStoreBuilder =
>     Stores.keyValueStoreBuilder(Stores.persistentKeyValueStore(stateStoreName),
>         Serdes.Integer(),
>         Serdes.Integer())
>     .withLoggingDisabled();
> builder.addStateStore(keyValueStoreBuilder);
> builder.stream(INPUT_TOPIC, Consumed.with(Serdes.Integer(), Serdes.Integer()))
>     .transform(() -> new Transformer<Integer, Integer, KeyValue<Integer, Integer>>() {
>         private KeyValueStore<Integer, Integer> state;
> 
>         @SuppressWarnings("unchecked")
>         @Override
>         public void init(final ProcessorContext context) {
>             state = (KeyValueStore<Integer, Integer>) context.getStateStore(stateStoreName);
>         }
> 
>         @Override
>         public KeyValue<Integer, Integer> transform(final Integer key, final Integer value) {
>             final KeyValue<Integer, Integer> result = new KeyValue<>(key, value);
>             return result;
>         }
> 
>         @Override
>         public void close() {}
>     }, stateStoreName)
>     .to(OUTPUT_TOPIC);
> {code}
> Both StreamThreads should die with the above exception.
> The root cause is that standby tasks are created although all state stores of
> the sub-topology have logging disabled.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-8601) Producer Improvement: Sticky Partitioner

2019-06-25 Thread Justine Olshan (JIRA)
Justine Olshan created KAFKA-8601:
-

 Summary: Producer Improvement: Sticky Partitioner
 Key: KAFKA-8601
 URL: https://issues.apache.org/jira/browse/KAFKA-8601
 Project: Kafka
  Issue Type: Improvement
Reporter: Justine Olshan
Assignee: Justine Olshan


Currently the default partitioner uses a round-robin strategy to partition 
non-keyed values. The idea is to implement a "sticky partitioner" that chooses 
a partition for a topic and sends all records to that partition until the batch 
is sent. Then a new partition is chosen. This new partitioner will increase 
batching and decrease latency. 
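
To make the idea concrete, an illustrative sketch of sticky assignment for non-keyed records follows; this is not the eventual implementation, and a real version would pick a new partition whenever a batch completes instead of keeping one forever:

{code:java}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.ThreadLocalRandom;
import org.apache.kafka.clients.producer.Partitioner;
import org.apache.kafka.common.Cluster;
import org.apache.kafka.common.utils.Utils;

// Illustrative only: stick to one randomly chosen partition per topic for
// non-keyed records; keyed records keep the usual hash-based placement.
public class StickyDraftPartitioner implements Partitioner {
    private final ConcurrentMap<String, Integer> sticky = new ConcurrentHashMap<>();

    @Override
    public int partition(final String topic, final Object key, final byte[] keyBytes,
                         final Object value, final byte[] valueBytes, final Cluster cluster) {
        final int numPartitions = cluster.partitionsForTopic(topic).size();
        if (keyBytes != null) {
            return Utils.toPositive(Utils.murmur2(keyBytes)) % numPartitions;
        }
        return sticky.computeIfAbsent(topic,
                t -> ThreadLocalRandom.current().nextInt(numPartitions));
    }

    @Override
    public void configure(final Map<String, ?> configs) {}

    @Override
    public void close() {}
}
{code}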



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-8596) Kafka topic pre-creation error message needs to be passed to application as an exception

2019-06-25 Thread Matthias J. Sax (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8596?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16872528#comment-16872528
 ] 

Matthias J. Sax commented on KAFKA-8596:


Throwing an exception is not possible, because the error occurs on a different
thread, i.e., not your "main" thread that calls `KafkaStreams#start()`.

However, you can register an uncaught exception handler callback as described in
the docs: 
[https://kafka.apache.org/23/documentation/streams/developer-guide/write-streams.html]
{code:java}
KafkaStreams#setUncaughtExceptionHandler(...)
{code}
This allows you to get notified about the error in the main thread and you can
react to it accordingly.
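
For illustration, a minimal sketch of wiring this up; the topology, props, and the shutdown reaction are assumptions, not part of this ticket:

{code:java}
import java.time.Duration;
import org.apache.kafka.streams.KafkaStreams;

// React in the application to a fatal error on a stream thread.
final KafkaStreams streams = new KafkaStreams(topology, props); // topology/props assumed
streams.setUncaughtExceptionHandler((thread, throwable) -> {
    System.err.println("StreamThread " + thread.getName() + " died: " + throwable);
    streams.close(Duration.ofSeconds(10)); // e.g. shut down so the app can exit
});
streams.start();
{code}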

Please let us know if this works for you. I think this ticket should be closed 
as "not a problem" as the handler should provide the functionality you request.

> Kafka topic pre-creation error message needs to be passed to application as 
> an exception
> 
>
> Key: KAFKA-8596
> URL: https://issues.apache.org/jira/browse/KAFKA-8596
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Affects Versions: 2.1.1
>Reporter: Ashish Vyas
>Priority: Minor
>
> If I don't have a topic pre-created, I get an error log that reads "is
> unknown yet during rebalance, please make sure they have been pre-created
> before starting the Streams application." Ideally, I expect an exception to
> be thrown here that I can catch in my application and decide what I want to do.
>  
> Without this, my app keeps running while the actual functionality doesn't
> work, making it time-consuming to debug. I want to stop the application right
> at this point.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-5115) Use bootstrap.servers to refresh metadata

2019-06-25 Thread JIRA


[ 
https://issues.apache.org/jira/browse/KAFKA-5115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16872476#comment-16872476
 ] 

Sönke Liebau commented on KAFKA-5115:
-

Hi [~MiniMizer],

we've just discussed this today, and while the change itself would be fairly
simple, I believe there are a lot of areas that would need investigating and
testing before this could be recommended for a production deployment.

Specifically, everything around transactions and idempotent producers seems to
me to be worth a dedicated look.

On the consumer side, the immediate concern I think is offsets: stored offsets
might not create issues (but may also not work), but anything cached inside
the Fetcher could cause havoc.

Bottom line: it is a good idea that I'd fully support, but it probably needs
more work than is immediately apparent.

> Use bootstrap.servers to refresh metadata
> -
>
> Key: KAFKA-5115
> URL: https://issues.apache.org/jira/browse/KAFKA-5115
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.10.2.0
>Reporter: Dan
>Priority: Major
>
> Currently, it seems that the bootstrap.servers list is used only when the
> producer starts, to discover the cluster; subsequent metadata refreshes
> go to the discovered brokers directly.
> We would like to use the bootstrap.servers list for metadata refresh to 
> support a failover mechanism by providing a VIP which can dynamically 
> redirect requests to a secondary Kafka cluster if the primary is down.
> Consider the following use case, where "kafka-cluster.local" is a VIP on a 
> load balancer with priority server pools that point to two different Kafka 
> clusters (so when all servers of cluster #1 are down, it automatically 
> redirects to servers from cluster #2).
> bootstrap.servers: kafka-cluster.local:9092
> 1) Producer starts, connects to kafka-cluster.local and discovers all servers 
> from cluster #1
> 2) Producer starts producing to cluster #1
> 3) cluster #1 goes down
> 4) Producer detects the failure, refreshes metadata from kafka-cluster.local 
> (which now returns nodes from cluster #2)
> 5) Producer starts producing to cluster #2
> 6) cluster #1 is brought back online, and kafka-cluster.local now points to 
> it again
> In the current state, it seems that the producer will never revert to cluster 
> #1 because it continues to refresh its metadata from the brokers of cluster 
> #2, even though kafka-cluster.local no longer points to that cluster.
> If we could force the metadata refresh to happen against 
> "kafka-cluster.local", it would enable automatic failover and failback 
> between the clusters.
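
For reference, the client setup this use case boils down to is just the VIP as the bootstrap address (hostname taken from the example above); today that list is reportedly only consulted at startup, which is what this ticket proposes to change:

{code:java}
import java.util.Properties;
import org.apache.kafka.clients.producer.ProducerConfig;

// Single VIP as bootstrap; the load balancer decides which cluster it resolves to.
final Properties props = new Properties();
props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka-cluster.local:9092");
{code}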



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-8600) Replace DescribeDelegationToken request/response with automated protocol

2019-06-25 Thread Mickael Maison (JIRA)
Mickael Maison created KAFKA-8600:
-

 Summary: Replace DescribeDelegationToken request/response with 
automated protocol
 Key: KAFKA-8600
 URL: https://issues.apache.org/jira/browse/KAFKA-8600
 Project: Kafka
  Issue Type: Sub-task
Reporter: Mickael Maison
Assignee: Mickael Maison






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-8599) Replace ExpireDelegationToken request/response with automated protocol

2019-06-25 Thread Mickael Maison (JIRA)
Mickael Maison created KAFKA-8599:
-

 Summary: Replace ExpireDelegationToken request/response with 
automated protocol
 Key: KAFKA-8599
 URL: https://issues.apache.org/jira/browse/KAFKA-8599
 Project: Kafka
  Issue Type: Sub-task
Reporter: Mickael Maison
Assignee: Mickael Maison






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-8598) Replace RenewDelegationToken request/response with automated protocol

2019-06-25 Thread Mickael Maison (JIRA)
Mickael Maison created KAFKA-8598:
-

 Summary: Replace RenewDelegationToken request/response with 
automated protocol
 Key: KAFKA-8598
 URL: https://issues.apache.org/jira/browse/KAFKA-8598
 Project: Kafka
  Issue Type: Sub-task
Reporter: Mickael Maison
Assignee: Mickael Maison






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-6945) Add support to allow users to acquire delegation tokens for other users

2019-06-25 Thread Manikumar (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-6945?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16872474#comment-16872474
 ] 

Manikumar commented on KAFKA-6945:
--

[~viktorsomogyi] I am not currently working on this JIRA. Please feel free to
reassign it. Thanks!
KIP link:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-373%3A+Allow+users+to+create+delegation+tokens+for+other+users

> Add support to allow users to acquire delegation tokens for other users
> ---
>
> Key: KAFKA-6945
> URL: https://issues.apache.org/jira/browse/KAFKA-6945
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Manikumar
>Assignee: Manikumar
>Priority: Major
>  Labels: needs-kip
>
> Currently, we only allow a user to create a delegation token for themselves.
> We should allow users to acquire delegation tokens for other users.
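
For context, a sketch of today's AdminClient call (KIP-249, Kafka 2.0+), where the token owner is always the authenticated principal; the bootstrap address and renewer are placeholders, and this ticket/KIP-373 would add a way to name a different owner:

{code:java}
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.CreateDelegationTokenOptions;
import org.apache.kafka.common.security.auth.KafkaPrincipal;
import org.apache.kafka.common.security.token.delegation.DelegationToken;

public class CreateTokenExample {
    public static void main(final String[] args) throws Exception {
        final Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
        try (AdminClient admin = AdminClient.create(props)) {
            // The created token is owned by the principal this client authenticates as.
            final DelegationToken token = admin.createDelegationToken(
                    new CreateDelegationTokenOptions()
                            .renewers(Collections.singletonList(new KafkaPrincipal("User", "bob"))))
                    .delegationToken()
                    .get();
            System.out.println("Created token " + token.tokenInfo().tokenId());
        }
    }
}
{code}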



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-6945) Add support to allow users to acquire delegation tokens for other users

2019-06-25 Thread Viktor Somogyi-Vass (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-6945?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16872411#comment-16872411
 ] 

Viktor Somogyi-Vass commented on KAFKA-6945:


[~omkreddy] do you mind if I reassign this to myself?

> Add support to allow users to acquire delegation tokens for other users
> ---
>
> Key: KAFKA-6945
> URL: https://issues.apache.org/jira/browse/KAFKA-6945
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Manikumar
>Assignee: Manikumar
>Priority: Major
>  Labels: needs-kip
>
> Currently, we only allow a user to create a delegation token for themselves.
> We should allow users to acquire delegation tokens for other users.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (KAFKA-8078) Flaky Test TableTableJoinIntegrationTest#testInnerInner

2019-06-25 Thread Khaireddine Rezgui (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8078?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Khaireddine Rezgui reassigned KAFKA-8078:
-

Assignee: (was: Khaireddine Rezgui)

> Flaky Test TableTableJoinIntegrationTest#testInnerInner
> ---
>
> Key: KAFKA-8078
> URL: https://issues.apache.org/jira/browse/KAFKA-8078
> Project: Kafka
>  Issue Type: Bug
>  Components: streams, unit tests
>Affects Versions: 2.3.0
>Reporter: Matthias J. Sax
>Priority: Major
>  Labels: flaky-test
> Fix For: 2.4.0
>
>
> [https://builds.apache.org/blue/organizations/jenkins/kafka-trunk-jdk8/detail/kafka-trunk-jdk8/3445/tests]
> {quote}java.lang.AssertionError: Condition not met within timeout 15000. 
> Never received expected final result.
> at org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:365)
> at org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:325)
> at 
> org.apache.kafka.streams.integration.AbstractJoinIntegrationTest.runTest(AbstractJoinIntegrationTest.java:246)
> at 
> org.apache.kafka.streams.integration.TableTableJoinIntegrationTest.testInnerInner(TableTableJoinIntegrationTest.java:196){quote}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-8597) Give access to the Dead Letter Queue APIs to Kafka Connect Developers

2019-06-25 Thread Andrea Santurbano (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8597?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrea Santurbano updated KAFKA-8597:
-
Description: 
It would be cool to have access to the DLQ APIs in order to enable us
(developers) to use them.

For instance, suppose someone uses JSON as the message format with no schema and
is trying to import some data into a table, and the JSON contains a null value
for a NON-NULL table field; we want to move that event to the DLQ.

Thanks a lot!

  was:
Would be cool to have the chance to have access to the DLQ APIs give to enable 
us (developers) to use that.

For instance, if someone uses JSON as message format with no schema and it's 
trying to import some data into a table, and the JSON contains a null value for 
a NON-NULL table field, so we want to move that event to the DLQ.

Thanks a lot!
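
For context, sink connectors can already route failed records to a dead letter queue via the KIP-298 error-handling configs; a hedged sketch of those options as a connector config map follows (the topic name is a placeholder). The request here is for programmatic access on top of that:

{code:java}
import java.util.HashMap;
import java.util.Map;

// Existing KIP-298 error-handling options for a sink connector.
final Map<String, String> connectorConfig = new HashMap<>();
connectorConfig.put("errors.tolerance", "all");
connectorConfig.put("errors.deadletterqueue.topic.name", "my-connector-dlq"); // placeholder
connectorConfig.put("errors.deadletterqueue.context.headers.enable", "true");
{code}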


> Give access to the Dead Letter Queue APIs to Kafka Connect Developers
> -
>
> Key: KAFKA-8597
> URL: https://issues.apache.org/jira/browse/KAFKA-8597
> Project: Kafka
>  Issue Type: Improvement
>  Components: KafkaConnect
>Reporter: Andrea Santurbano
>Priority: Major
>
> It would be cool to have access to the DLQ APIs in order to enable us
> (developers) to use them.
> For instance, suppose someone uses JSON as the message format with no schema
> and is trying to import some data into a table, and the JSON contains a null
> value for a NON-NULL table field; we want to move that event to the DLQ.
> Thanks a lot!



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-8390) Replace CreateDelegationToken request/response with automated protocol

2019-06-25 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8390?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-8390.
--
   Resolution: Fixed
Fix Version/s: 2.4.0

> Replace CreateDelegationToken request/response with automated protocol
> --
>
> Key: KAFKA-8390
> URL: https://issues.apache.org/jira/browse/KAFKA-8390
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Mickael Maison
>Assignee: Mickael Maison
>Priority: Major
> Fix For: 2.4.0
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-8390) Replace CreateDelegationToken request/response with automated protocol

2019-06-25 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8390?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16872335#comment-16872335
 ] 

ASF GitHub Bot commented on KAFKA-8390:
---

omkreddy commented on pull request #6828: KAFKA-8390: Use automatic RPC 
generation in CreateDelegationToken
URL: https://github.com/apache/kafka/pull/6828

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Replace CreateDelegationToken request/response with automated protocol
> --
>
> Key: KAFKA-8390
> URL: https://issues.apache.org/jira/browse/KAFKA-8390
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Mickael Maison
>Assignee: Mickael Maison
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-7697) Possible deadlock in kafka.cluster.Partition

2019-06-25 Thread muchl (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7697?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16872190#comment-16872190
 ] 

muchl commented on KAFKA-7697:
--

[~rsivaram] The problem was fixed after the upgrade to 2.1.1, but there is a new
problem. I'm not sure if the two issues are related, but the logs they print
when the problem occurs are similar.
A similar broker hang was encountered in 2.1.1. The problem caused a broker
crash in 2.1.0; in 2.1.1 the broker automatically recovered after a few minutes,
but the cluster was unavailable during this time.
I uploaded a log whose file name is 2.1.1-hangs.log [^2.1.1-hangs.log]. By the
time we found the problem and logged in to the server, the cluster had been
restored, so the full stack information has not yet been obtained, but we can
see from the logs of the broker and consumer that there is a problem. Could you
give me some help? Thank you!

> Possible deadlock in kafka.cluster.Partition
> 
>
> Key: KAFKA-7697
> URL: https://issues.apache.org/jira/browse/KAFKA-7697
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 2.1.0
>Reporter: Gian Merlino
>Assignee: Rajini Sivaram
>Priority: Blocker
> Fix For: 2.2.0, 2.1.1
>
> Attachments: 2.1.1-hangs.log, 322.tdump, kafka.log, kafka_jstack.txt, 
> threaddump.txt
>
>
> After upgrading a fairly busy broker from 0.10.2.0 to 2.1.0, it locked up 
> within a few minutes (by "locked up" I mean that all request handler threads 
> were busy, and other brokers reported that they couldn't communicate with 
> it). I restarted it a few times and it did the same thing each time. After 
> downgrading to 0.10.2.0, the broker was stable. I attached a threaddump.txt 
> from the last attempt on 2.1.0 that shows lots of kafka-request-handler- 
> threads trying to acquire the leaderIsrUpdateLock lock in 
> kafka.cluster.Partition.
> It jumps out that there are two threads that already have some read lock 
> (can't tell which one) and are trying to acquire a second one (on two 
> different read locks: 0x000708184b88 and 0x00070821f188): 
> kafka-request-handler-1 and kafka-request-handler-4. Both are handling a 
> produce request, and in the process of doing so, are calling 
> Partition.fetchOffsetSnapshot while trying to complete a DelayedFetch. At the 
> same time, both of those locks have writers from other threads waiting on 
> them (kafka-request-handler-2 and kafka-scheduler-6). Neither of those locks 
> appear to have writers that hold them (if only because no threads in the dump 
> are deep enough in inWriteLock to indicate that).
> ReentrantReadWriteLock in nonfair mode prioritizes waiting writers over 
> readers. Is it possible that kafka-request-handler-1 and 
> kafka-request-handler-4 are each trying to read-lock the partition that is 
> currently locked by the other one, and they're both parked waiting for 
> kafka-request-handler-2 and kafka-scheduler-6 to get write locks, which they 
> never will, because the former two threads own read locks and aren't giving 
> them up?
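
To illustrate the hypothesized cycle, a self-contained toy version of the pattern (an assumption, not a confirmed diagnosis of the broker: two nonfair locks stand in for the two partitions, and sleeps contrive the interleaving):

{code:java}
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Two threads each hold one read lock and then request the other's; two writers
// queue up in between. In nonfair mode, new readers park behind queued writers,
// so all four threads wait forever.
public class RrwlDeadlockDemo {
    static final ReentrantReadWriteLock lockA = new ReentrantReadWriteLock(); // nonfair by default
    static final ReentrantReadWriteLock lockB = new ReentrantReadWriteLock();

    public static void main(final String[] args) {
        new Thread(() -> { lockA.readLock().lock(); sleep(100); lockB.readLock().lock(); }).start();
        new Thread(() -> { lockB.readLock().lock(); sleep(100); lockA.readLock().lock(); }).start();
        new Thread(() -> { sleep(50); lockA.writeLock().lock(); }).start(); // queued writer on A
        new Thread(() -> { sleep(50); lockB.writeLock().lock(); }).start(); // queued writer on B
    }

    static void sleep(final long ms) {
        try { Thread.sleep(ms); } catch (final InterruptedException e) { Thread.currentThread().interrupt(); }
    }
}
{code}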



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-7697) Possible deadlock in kafka.cluster.Partition

2019-06-25 Thread muchl (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7697?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

muchl updated KAFKA-7697:
-
Attachment: 2.1.1-hangs.log

> Possible deadlock in kafka.cluster.Partition
> 
>
> Key: KAFKA-7697
> URL: https://issues.apache.org/jira/browse/KAFKA-7697
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 2.1.0
>Reporter: Gian Merlino
>Assignee: Rajini Sivaram
>Priority: Blocker
> Fix For: 2.2.0, 2.1.1
>
> Attachments: 2.1.1-hangs.log, 322.tdump, kafka.log, kafka_jstack.txt, 
> threaddump.txt
>
>
> After upgrading a fairly busy broker from 0.10.2.0 to 2.1.0, it locked up 
> within a few minutes (by "locked up" I mean that all request handler threads 
> were busy, and other brokers reported that they couldn't communicate with 
> it). I restarted it a few times and it did the same thing each time. After 
> downgrading to 0.10.2.0, the broker was stable. I attached a threaddump.txt 
> from the last attempt on 2.1.0 that shows lots of kafka-request-handler- 
> threads trying to acquire the leaderIsrUpdateLock lock in 
> kafka.cluster.Partition.
> It jumps out that there are two threads that already have some read lock 
> (can't tell which one) and are trying to acquire a second one (on two 
> different read locks: 0x000708184b88 and 0x00070821f188): 
> kafka-request-handler-1 and kafka-request-handler-4. Both are handling a 
> produce request, and in the process of doing so, are calling 
> Partition.fetchOffsetSnapshot while trying to complete a DelayedFetch. At the 
> same time, both of those locks have writers from other threads waiting on 
> them (kafka-request-handler-2 and kafka-scheduler-6). Neither of those locks 
> appear to have writers that hold them (if only because no threads in the dump 
> are deep enough in inWriteLock to indicate that).
> ReentrantReadWriteLock in nonfair mode prioritizes waiting writers over 
> readers. Is it possible that kafka-request-handler-1 and 
> kafka-request-handler-4 are each trying to read-lock the partition that is 
> currently locked by the other one, and they're both parked waiting for 
> kafka-request-handler-2 and kafka-scheduler-6 to get write locks, which they 
> never will, because the former two threads own read locks and aren't giving 
> them up?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-7697) Possible deadlock in kafka.cluster.Partition

2019-06-25 Thread muchl (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7697?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

muchl updated KAFKA-7697:
-
Attachment: (was: 70.107)

> Possible deadlock in kafka.cluster.Partition
> 
>
> Key: KAFKA-7697
> URL: https://issues.apache.org/jira/browse/KAFKA-7697
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 2.1.0
>Reporter: Gian Merlino
>Assignee: Rajini Sivaram
>Priority: Blocker
> Fix For: 2.2.0, 2.1.1
>
> Attachments: 322.tdump, kafka.log, kafka_jstack.txt, threaddump.txt
>
>
> After upgrading a fairly busy broker from 0.10.2.0 to 2.1.0, it locked up 
> within a few minutes (by "locked up" I mean that all request handler threads 
> were busy, and other brokers reported that they couldn't communicate with 
> it). I restarted it a few times and it did the same thing each time. After 
> downgrading to 0.10.2.0, the broker was stable. I attached a threaddump.txt 
> from the last attempt on 2.1.0 that shows lots of kafka-request-handler- 
> threads trying to acquire the leaderIsrUpdateLock lock in 
> kafka.cluster.Partition.
> It jumps out that there are two threads that already have some read lock 
> (can't tell which one) and are trying to acquire a second one (on two 
> different read locks: 0x000708184b88 and 0x00070821f188): 
> kafka-request-handler-1 and kafka-request-handler-4. Both are handling a 
> produce request, and in the process of doing so, are calling 
> Partition.fetchOffsetSnapshot while trying to complete a DelayedFetch. At the 
> same time, both of those locks have writers from other threads waiting on 
> them (kafka-request-handler-2 and kafka-scheduler-6). Neither of those locks 
> appear to have writers that hold them (if only because no threads in the dump 
> are deep enough in inWriteLock to indicate that).
> ReentrantReadWriteLock in nonfair mode prioritizes waiting writers over 
> readers. Is it possible that kafka-request-handler-1 and 
> kafka-request-handler-4 are each trying to read-lock the partition that is 
> currently locked by the other one, and they're both parked waiting for 
> kafka-request-handler-2 and kafka-scheduler-6 to get write locks, which they 
> never will, because the former two threads own read locks and aren't giving 
> them up?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-7697) Possible deadlock in kafka.cluster.Partition

2019-06-25 Thread muchl (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7697?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

muchl updated KAFKA-7697:
-
Attachment: 70.107

> Possible deadlock in kafka.cluster.Partition
> 
>
> Key: KAFKA-7697
> URL: https://issues.apache.org/jira/browse/KAFKA-7697
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 2.1.0
>Reporter: Gian Merlino
>Assignee: Rajini Sivaram
>Priority: Blocker
> Fix For: 2.2.0, 2.1.1
>
> Attachments: 322.tdump, 70.107, kafka.log, kafka_jstack.txt, 
> threaddump.txt
>
>
> After upgrading a fairly busy broker from 0.10.2.0 to 2.1.0, it locked up 
> within a few minutes (by "locked up" I mean that all request handler threads 
> were busy, and other brokers reported that they couldn't communicate with 
> it). I restarted it a few times and it did the same thing each time. After 
> downgrading to 0.10.2.0, the broker was stable. I attached a threaddump.txt 
> from the last attempt on 2.1.0 that shows lots of kafka-request-handler- 
> threads trying to acquire the leaderIsrUpdateLock lock in 
> kafka.cluster.Partition.
> It jumps out that there are two threads that already have some read lock 
> (can't tell which one) and are trying to acquire a second one (on two 
> different read locks: 0x000708184b88 and 0x00070821f188): 
> kafka-request-handler-1 and kafka-request-handler-4. Both are handling a 
> produce request, and in the process of doing so, are calling 
> Partition.fetchOffsetSnapshot while trying to complete a DelayedFetch. At the 
> same time, both of those locks have writers from other threads waiting on 
> them (kafka-request-handler-2 and kafka-scheduler-6). Neither of those locks 
> appear to have writers that hold them (if only because no threads in the dump 
> are deep enough in inWriteLock to indicate that).
> ReentrantReadWriteLock in nonfair mode prioritizes waiting writers over 
> readers. Is it possible that kafka-request-handler-1 and 
> kafka-request-handler-4 are each trying to read-lock the partition that is 
> currently locked by the other one, and they're both parked waiting for 
> kafka-request-handler-2 and kafka-scheduler-6 to get write locks, which they 
> never will, because the former two threads own read locks and aren't giving 
> them up?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-8597) Give access to the Dead Letter Queue APIs to Kafka Connect Developers

2019-06-25 Thread Andrea Santurbano (JIRA)
Andrea Santurbano created KAFKA-8597:


 Summary: Give access to the Dead Letter Queue APIs to Kafka 
Connect Developers
 Key: KAFKA-8597
 URL: https://issues.apache.org/jira/browse/KAFKA-8597
 Project: Kafka
  Issue Type: Improvement
  Components: KafkaConnect
Reporter: Andrea Santurbano


It would be cool to have access to the DLQ APIs to enable
us (developers) to use them.

For instance, suppose someone uses JSON as the message format with no schema and
is trying to import some data into a table, and the JSON contains a null value
for a NON-NULL table field; we want to move that event to the DLQ.

Thanks a lot!



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-8596) Kafka topic pre-creation error message needs to be passed to application as an exception

2019-06-25 Thread ASHISH M VYAS (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8596?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASHISH M VYAS updated KAFKA-8596:
-
Description: 
If I don't have a topic pre-created, I get an error log that reads "is unknown
yet during rebalance, please make sure they have been pre-created before
starting the Streams application." Ideally, I expect an exception to be thrown
here that I can catch in my application and decide what I want to do.

 

Without this, my app keeps running while the actual functionality doesn't work,
making it time-consuming to debug. I want to stop the application right at this
point.

 

  was:
If I don't have a topic pre-created, I get an error log that reads "is unknown
yet during rebalance, please make sure they have been pre-created before
starting the Streams application." Ideally, I expect an exception to be thrown
here that I can catch in my application and decide what I want to do.

 

Without this, my app keeps running while the actual functionality doesn't work,
making it time-consuming to debug.

 


> Kafka topic pre-creation error message needs to be passed to application as 
> an exception
> 
>
> Key: KAFKA-8596
> URL: https://issues.apache.org/jira/browse/KAFKA-8596
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Affects Versions: 2.1.1
>Reporter: ASHISH M VYAS
>Priority: Minor
>
> If I don't have a topic pre-created, I get an error log that reads "is
> unknown yet during rebalance, please make sure they have been pre-created
> before starting the Streams application." Ideally, I expect an exception to
> be thrown here that I can catch in my application and decide what I want to do.
>  
> Without this, my app keeps running while the actual functionality doesn't
> work, making it time-consuming to debug. I want to stop the application right
> at this point.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-8596) Kafka topic pre-creation error message needs to be passed to application as an exception

2019-06-25 Thread ASHISH M VYAS (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8596?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASHISH M VYAS updated KAFKA-8596:
-
Description: 
If i don't have a topic pre-created, I get an error log that reads "is unknown 
yet during rebalance, please make sure they have been pre-created before 
starting the Streams application." Ideally I expect an exception here being 
thrown that I can catch in my application and decide what I want to do. 

 

Without this, my app keeps running and actual functionality doesn't work making 
it time consuming to debug.

 

  was:
If I don't have a topic pre-created, I get an error log that reads "is unknown
yet during rebalance," + " please make sure they have been pre-created before
starting the Streams application." Ideally, I expect an exception to be thrown
here that I can catch in my application and decide what I want to do.

 

Without this, my app keeps running while the actual functionality doesn't work,
making it time-consuming to debug.

 


> Kafka topic pre-creation error message needs to be passed to application as 
> an exception
> 
>
> Key: KAFKA-8596
> URL: https://issues.apache.org/jira/browse/KAFKA-8596
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Affects Versions: 2.1.1
>Reporter: ASHISH M VYAS
>Priority: Minor
>
> If I don't have a topic pre-created, I get an error log that reads "is
> unknown yet during rebalance, please make sure they have been pre-created
> before starting the Streams application." Ideally, I expect an exception to
> be thrown here that I can catch in my application and decide what I want to do.
>  
> Without this, my app keeps running while the actual functionality doesn't
> work, making it time-consuming to debug.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-8596) Kafka topic pre-creation error message needs to be passed to application as an exception

2019-06-25 Thread ASHISH M VYAS (JIRA)
ASHISH M VYAS created KAFKA-8596:


 Summary: Kafka topic pre-creation error message needs to be passed 
to application as an exception
 Key: KAFKA-8596
 URL: https://issues.apache.org/jira/browse/KAFKA-8596
 Project: Kafka
  Issue Type: Improvement
  Components: streams
Affects Versions: 2.1.1
Reporter: ASHISH M VYAS


If I don't have a topic pre-created, I get an error log that reads "is unknown
yet during rebalance," + " please make sure they have been pre-created before
starting the Streams application." Ideally, I expect an exception to be thrown
here that I can catch in my application and decide what I want to do.

 

Without this, my app keeps running while the actual functionality doesn't work,
making it time-consuming to debug.

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)