[jira] [Commented] (KAFKA-9221) Kafka REST Proxy wrongly converts quotes in message when sending json

2019-11-21 Thread Matthias J. Sax (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-9221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16979854#comment-16979854
 ] 

Matthias J. Sax commented on KAFKA-9221:


[~habdank] Not sure which REST proxy you are using. There are multiple 
implementations from different vendors. Please file this ticket against the 
corresponding vendor's project.

> Kafka REST Proxy wrongly converts quotes in message when sending json
> -
>
> Key: KAFKA-9221
> URL: https://issues.apache.org/jira/browse/KAFKA-9221
> Project: Kafka
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 2.3.0
> Environment: Linux redhat
>Reporter: Seweryn Habdank-Wojewodzki
>Priority: Major
>
> Kafka REST Proxy has a problem when sending/converting json files (e.g. 
> json.new) into Kafka protocol. For example JSON file:
> {code:java}
> {"records":[{"value":"rest.kafka.testmetric,host=server.com,partition=8,topic=my_topic,url=http:--localhost:7071-metrics
>  1337 1572276922"}]}
> {code}
> is sent using a call to the Kafka REST Proxy on localhost:8073:
> {code:java}
> curl -X POST -H "Content-Type: application/vnd.kafka.json.v2+json" -H 
> "Accept: application/vnd.kafka.v2+json" --data @json.new  
> http://localhost:8073/topics/somple_topic -i 
> {code}
> in Kafka in some_topic we got:
> {code:java}
> "rest.kafka.testmetric,host=server.com,partition=8,topic=my_topic,url=http:--localhost:7071-metrics
>  1337 1572276922"
> {code}
> but the expected message has no quotes:
> {code:java}
> rest.kafka.testmetric,host=server.com,partition=8,topic=my_topic,url=http:--localhost:7071-metrics
>  1337 1572276922
> {code}
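For context on why the quotes appear (an added illustration, not part of the original report): when the record value is treated as an embedded JSON string, writing it back out as JSON keeps the surrounding quotes. A minimal sketch using Jackson, assuming jackson-databind is on the classpath:

{code:java}
import com.fasterxml.jackson.databind.ObjectMapper;

public class JsonStringQuoting {
    public static void main(String[] args) throws Exception {
        ObjectMapper mapper = new ObjectMapper();
        String value = "rest.kafka.testmetric,host=server.com 1337 1572276922";

        // Serializing a Java String as JSON re-adds the surrounding quotes,
        // which is what shows up in the topic when the value is kept as JSON.
        String asJson = mapper.writeValueAsString(value);
        System.out.println(asJson); // "rest.kafka.testmetric,host=server.com 1337 1572276922"

        // Reading it back as a JSON string strips the quotes again.
        System.out.println(mapper.readValue(asJson, String.class));
    }
}
{code}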



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-9224) State store should not see uncommitted transaction result

2019-11-21 Thread Matthias J. Sax (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-9224?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16979847#comment-16979847
 ] 

Matthias J. Sax commented on KAFKA-9224:


Seems to be a duplicate of https://issues.apache.org/jira/browse/KAFKA-8870 ?

> State store should not see uncommitted transaction result
> -
>
> Key: KAFKA-9224
> URL: https://issues.apache.org/jira/browse/KAFKA-9224
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Boyang Chen
>Assignee: Boyang Chen
>Priority: Major
>
> Currently under EOS, the write to the state store is not guaranteed to happen 
> after the ongoing transaction is finished. This means an interactive query could 
> see uncommitted data within the state store, which is not ideal for users relying 
> on state stores for strong consistency. Ideally, we should have an option to 
> include the state store commit as part of the ongoing transaction; however, an 
> immediate step towards a better-reasoned system is to `write after 
> transaction commit`, which means we always buffer data within the stream cache 
> for EOS until the ongoing transaction is committed.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-9224) State store should not see uncommitted transaction result

2019-11-21 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-9224?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16979819#comment-16979819
 ] 

ASF GitHub Bot commented on KAFKA-9224:
---

abbccdda commented on pull request #7737: KAFKA-9224 (EOS improvement): Flush 
state store after transaction commit
URL: https://github.com/apache/kafka/pull/7737
 
 
   This patch attempts to enforce a strict order so that the state store is flushed 
only after the ongoing transaction is committed under EOS. Major changes include:
   
   - Rewrite `StreamTask.commit` to separate the EOS and non-EOS commit scenarios
   - Under EOS, we perform the atomic operation in the following order: commit the 
ongoing transaction, flush the store, begin another transaction (see the sketch 
after this list). This means no intermediate data is left within the cache, and 
modifications to the state store only become visible after the transaction is 
committed.
   - Add a `bounded` config on the stream thread cache to reject further writes. 
This matters because under EOS the thread cache is the only buffering container 
we rely on. When the cache is almost full, it begins throwing a 
`CacheFullException` to the caller.
   - In ProcessorContext, handle `CacheFullException` by requesting a task commit 
in the next round. This should only be needed for read-write access to state 
stores on general put operations, and avoids further interruption to the 
application.
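   As a rough illustration of that commit ordering, here is a minimal sketch; the Producer transaction calls are real API, while `StateStoreFlusher` is a stand-in for the task's store-flushing logic, not an actual Kafka Streams class:

{code:java}
import org.apache.kafka.clients.producer.Producer;

public class EosCommitOrdering {

    interface StateStoreFlusher {
        void flush();
    }

    static void commitUnderEos(Producer<?, ?> producer, StateStoreFlusher stores) {
        producer.commitTransaction(); // 1. finish the ongoing transaction first
        stores.flush();               // 2. only now push buffered updates into the state store
        producer.beginTransaction();  // 3. open the next transaction for subsequent writes
    }
}
{code}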
   
   ### Committer Checklist (excluded from commit message)
   - [ ] Verify design and implementation 
   - [ ] Verify test coverage and CI build status
   - [ ] Verify documentation (including upgrade notes)
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> State store should not see uncommitted transaction result
> -
>
> Key: KAFKA-9224
> URL: https://issues.apache.org/jira/browse/KAFKA-9224
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Boyang Chen
>Assignee: Boyang Chen
>Priority: Major
>
> Currently under EOS, the write to the state store is not guaranteed to happen 
> after the ongoing transaction is finished. This means an interactive query could 
> see uncommitted data within the state store, which is not ideal for users relying 
> on state stores for strong consistency. Ideally, we should have an option to 
> include the state store commit as part of the ongoing transaction; however, an 
> immediate step towards a better-reasoned system is to `write after 
> transaction commit`, which means we always buffer data within the stream cache 
> for EOS until the ongoing transaction is committed.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-9224) State store should not see uncommitted transaction result

2019-11-21 Thread Boyang Chen (Jira)
Boyang Chen created KAFKA-9224:
--

 Summary: State store should not see uncommitted transaction result
 Key: KAFKA-9224
 URL: https://issues.apache.org/jira/browse/KAFKA-9224
 Project: Kafka
  Issue Type: Improvement
Reporter: Boyang Chen
Assignee: Boyang Chen


Currently under EOS, the write to the state store is not guaranteed to happen after 
the ongoing transaction is finished. This means an interactive query could see 
uncommitted data within the state store, which is not ideal for users relying on 
state stores for strong consistency. Ideally, we should have an option to 
include the state store commit as part of the ongoing transaction; however, an 
immediate step towards a better-reasoned system is to `write after transaction 
commit`, which means we always buffer data within the stream cache for EOS until 
the ongoing transaction is committed.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-9202) serde in ConsoleConsumer with access to headers

2019-11-21 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-9202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16979800#comment-16979800
 ] 

ASF GitHub Bot commented on KAFKA-9202:
---

huxihx commented on pull request #7736: KAFKA-9202: serde in ConsoleConsumer 
with access to headers
URL: https://github.com/apache/kafka/pull/7736
 
 
   https://issues.apache.org/jira/browse/KAFKA-9202
   
   The Deserializer interface has two methods, one that gives access to the 
headers and one that does not. ConsoleConsumer.scala only calls the latter 
method. It would be nice if it were to call the default method that provides 
header access, so that custom serde that depends on headers becomes possible.
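   A sketch of the header-aware call being proposed (illustrative helper, not ConsoleConsumer's actual code):

{code:java}
import java.nio.charset.StandardCharsets;
import org.apache.kafka.common.header.Headers;
import org.apache.kafka.common.serialization.Deserializer;

public class HeaderAwareFormatting {

    // Call the header-aware default method of Deserializer instead of the
    // two-argument overload, so custom deserializers can inspect record headers.
    static byte[] format(Deserializer<?> deserializer, String topic, Headers headers,
                         byte[] nonNullBytes) {
        Object value = deserializer.deserialize(topic, headers, nonNullBytes);
        return String.valueOf(value).getBytes(StandardCharsets.UTF_8);
    }
}
{code}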
   
   
   ### Committer Checklist (excluded from commit message)
   - [ ] Verify design and implementation 
   - [ ] Verify test coverage and CI build status
   - [ ] Verify documentation (including upgrade notes)
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> serde in ConsoleConsumer with access to headers
> ---
>
> Key: KAFKA-9202
> URL: https://issues.apache.org/jira/browse/KAFKA-9202
> Project: Kafka
>  Issue Type: Improvement
>  Components: consumer
>Affects Versions: 2.3.0
>Reporter: Jorg Heymans
>Assignee: huxihx
>Priority: Major
>
> ML thread here : 
> [https://lists.apache.org/thread.html/ab8c3094945cb9f9312fd3614a5b4454f24756cfa1a702ef5c739c8f@%3Cusers.kafka.apache.org%3E]
>  
> The Deserializer interface has two methods, one that gives access to the 
> headers and one that does not. ConsoleConsumer.scala only calls the latter 
> method. It would be nice if it were to call the default method that provides 
> header access, so that custom serde that depends on headers becomes possible. 
> Currently it does this:
>  
> {code:java}
> deserializer.map(_.deserialize(topic, nonNullBytes).toString.
> getBytes(StandardCharsets.UTF_8)).getOrElse(nonNullBytes)
> {code}
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (KAFKA-9202) serde in ConsoleConsumer with access to headers

2019-11-21 Thread huxihx (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9202?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

huxihx reassigned KAFKA-9202:
-

Assignee: huxihx

> serde in ConsoleConsumer with access to headers
> ---
>
> Key: KAFKA-9202
> URL: https://issues.apache.org/jira/browse/KAFKA-9202
> Project: Kafka
>  Issue Type: Improvement
>  Components: consumer
>Affects Versions: 2.3.0
>Reporter: Jorg Heymans
>Assignee: huxihx
>Priority: Major
>
> ML thread here : 
> [https://lists.apache.org/thread.html/ab8c3094945cb9f9312fd3614a5b4454f24756cfa1a702ef5c739c8f@%3Cusers.kafka.apache.org%3E]
>  
> The Deserializer interface has two methods, one that gives access to the 
> headers and one that does not. ConsoleConsumer.scala only calls the latter 
> method. It would be nice if it were to call the default method that provides 
> header access, so that custom serde that depends on headers becomes possible. 
> Currently it does this:
>  
> {code:java}
> deserializer.map(_.deserialize(topic, nonNullBytes).toString.
> getBytes(StandardCharsets.UTF_8)).getOrElse(nonNullBytes)
> {code}
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-9223) RebalanceSourceConnectorsIntegrationTest disrupting builds with System::exit

2019-11-21 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-9223?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16979779#comment-16979779
 ] 

ASF GitHub Bot commented on KAFKA-9223:
---

rhauch commented on pull request #7734: KAFKA-9223: Mask exit procedure in 
rebalance integration test to prevent call to System::exit
URL: https://github.com/apache/kafka/pull/7734
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> RebalanceSourceConnectorsIntegrationTest disrupting builds with System::exit
> 
>
> Key: KAFKA-9223
> URL: https://issues.apache.org/jira/browse/KAFKA-9223
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 2.4.0, 2.3.2
>Reporter: Chris Egerton
>Assignee: Chris Egerton
>Priority: Major
>
> The RebalanceSourceConnectorsIntegrationTest causes builds to fail sometimes 
> by ungracefully shutting down its embedded Connect workers, which in turn 
> call System::exit.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-9217) Partial partition's log-end-offset is zero

2019-11-21 Thread lisen (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-9217?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16979770#comment-16979770
 ] 

lisen commented on KAFKA-9217:
--

Hi [~huxi_2b], from the consumption data, this partition has data at the 
beginning, and the average amount of data per partition is 400222/10 ≈ 40022.

> Partial partition's log-end-offset is zero
> --
>
> Key: KAFKA-9217
> URL: https://issues.apache.org/jira/browse/KAFKA-9217
> Project: Kafka
>  Issue Type: Bug
>  Components: log
>Affects Versions: 0.10.0.1
> Environment: kafka0.10.0.1
>Reporter: lisen
>Priority: Major
> Fix For: 0.10.0.1
>
> Attachments: Snipaste_2019-11-21_14-53-06.png, 
> Snipaste_2019-11-21_15-00-09.png
>
>
> The amount of data my consumers consumed is 400222, but the consumption result 
> shown by the command is only 279789. The command output is as 
> follows:
> !Snipaste_2019-11-21_15-00-09.png!
> The data result of partition 5 is
> !Snipaste_2019-11-21_14-53-06.png!
> I want to know if this is a Kafka bug. Thanks.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-9217) Partial partition's log-end-offset is zero

2019-11-21 Thread huxihx (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-9217?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16979764#comment-16979764
 ] 

huxihx commented on KAFKA-9217:
---

Is it possible that partition 5 has no records at all?

> Partial partition's log-end-offset is zero
> --
>
> Key: KAFKA-9217
> URL: https://issues.apache.org/jira/browse/KAFKA-9217
> Project: Kafka
>  Issue Type: Bug
>  Components: log
>Affects Versions: 0.10.0.1
> Environment: kafka0.10.0.1
>Reporter: lisen
>Priority: Major
> Fix For: 0.10.0.1
>
> Attachments: Snipaste_2019-11-21_14-53-06.png, 
> Snipaste_2019-11-21_15-00-09.png
>
>
> The amount of data my consumers consumed is 400222, but the consumption result 
> shown by the command is only 279789. The command output is as 
> follows:
> !Snipaste_2019-11-21_15-00-09.png!
> The data result of partition 5 is
> !Snipaste_2019-11-21_14-53-06.png!
> I want to know if this is a Kafka bug. Thanks.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-9220) TimeoutException when using kafka-preferred-replica-election

2019-11-21 Thread huxihx (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-9220?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16979761#comment-16979761
 ] 

huxihx commented on KAFKA-9220:
---

That might need a small KIP:)

> TimeoutException when using kafka-preferred-replica-election
> 
>
> Key: KAFKA-9220
> URL: https://issues.apache.org/jira/browse/KAFKA-9220
> Project: Kafka
>  Issue Type: Bug
>  Components: admin
>Affects Versions: 2.3.0
>Reporter: Or Shemesh
>Priority: Major
>
> When running kafka-preferred-replica-election --bootstrap-server xxx:9092
> I'm getting this error:
> Timeout waiting for election results
> Exception in thread "main" kafka.common.AdminCommandFailedException
>  at kafka.admin.PreferredReplicaLeaderElectionCommand$AdminClientCommand.electPreferredLeaders(PreferredReplicaLeaderElectionCommand.scala:246)
>  at kafka.admin.PreferredReplicaLeaderElectionCommand$.run(PreferredReplicaLeaderElectionCommand.scala:78)
>  at kafka.admin.PreferredReplicaLeaderElectionCommand$.main(PreferredReplicaLeaderElectionCommand.scala:42)
>  at kafka.admin.PreferredReplicaLeaderElectionCommand.main(PreferredReplicaLeaderElectionCommand.scala)
> Caused by: org.apache.kafka.common.errors.TimeoutException: Aborted due to timeout.
>  
> Because we have a big cluster, getting all the data from ZooKeeper is 
> taking more than 30 seconds.
>  
> After searching the code I saw that the 30 seconds are hard-coded; can you 
> enable us to set the timeout as a parameter?
> [https://github.com/confluentinc/kafka/blob/master/core/src/main/scala/kafka/admin/PreferredReplicaLeaderElectionCommand.scala]
>  
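Until the command exposes a timeout flag, one possible workaround is to trigger the election directly with the Java AdminClient and a longer timeout; a sketch assuming the 2.3 `AdminClient#electPreferredLeaders` API from KIP-183 (broker address and topic-partition are placeholders):

{code:java}
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.ElectPreferredLeadersOptions;
import org.apache.kafka.common.TopicPartition;

public class ElectWithLongerTimeout {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "xxx:9092");
        // Give the admin client more headroom than the command's 30s default.
        props.put(AdminClientConfig.REQUEST_TIMEOUT_MS_CONFIG, "120000");

        try (AdminClient admin = AdminClient.create(props)) {
            ElectPreferredLeadersOptions options =
                    new ElectPreferredLeadersOptions().timeoutMs(120_000);
            admin.electPreferredLeaders(
                    Collections.singletonList(new TopicPartition("my-topic", 0)), options)
                 .all()
                 .get();
        }
    }
}
{code}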



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-8815) Kafka broker blocked on I/O primitive

2019-11-21 Thread William Reynolds (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-8815?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16979742#comment-16979742
 ] 

William Reynolds commented on KAFKA-8815:
-

Hi Alexandre, I suspect we may have run into this, but we didn't manage to get 
dumps like you did. Do you by any chance have the network in/out pattern 
after the blocking starts? Also, what does the network processor metric 
(type=SocketServer, name=NetworkProcessorAvgIdlePercent) do after blocking?
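For reference, one way to sample that metric over JMX (a sketch; the host, port and attribute name are assumptions based on Kafka's usual JMX naming):

{code:java}
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class NetworkIdleProbe {
    public static void main(String[] args) throws Exception {
        // Placeholder host/port; point this at the broker's JMX endpoint.
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://broker-host:9999/jmxrmi");
        JMXConnector connector = JMXConnectorFactory.connect(url);
        try {
            MBeanServerConnection conn = connector.getMBeanServerConnection();
            ObjectName name = new ObjectName(
                    "kafka.network:type=SocketServer,name=NetworkProcessorAvgIdlePercent");
            // Kafka gauges are typically exposed under the "Value" attribute.
            Object value = conn.getAttribute(name, "Value");
            System.out.println("NetworkProcessorAvgIdlePercent = " + value);
        } finally {
            connector.close();
        }
    }
}
{code}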

> Kafka broker blocked on I/O primitive
> -
>
> Key: KAFKA-8815
> URL: https://issues.apache.org/jira/browse/KAFKA-8815
> Project: Kafka
>  Issue Type: Bug
>  Components: core, log
>Affects Versions: 1.1.1
>Reporter: Alexandre Dupriez
>Priority: Major
>
> This JIRA is for tracking a problem we run into on a production cluster.
> *Scenario*
> Cluster of 15 brokers and an average ingress throughput of ~4 MB/s and egress 
> of ~4 MB/s per broker.
> Brokers are running on OpenJDK 8. They are configured with a heap size of 1 
> GB.
> There are around 1,000 partition replicas per broker. Load is evenly 
> balanced. Each broker instance is under fair CPU load, but not overloaded 
> (50-60%). G1 is used for garbage collection and doesn't exhibit any pressure, 
> with mostly short young GCs observed and a heap-after-GC usage of 70%.
> Replication factor is 3.
> *Symptom*
> One broker on the cluster suddenly became "unresponsive". Other brokers, 
> Zookeeper and producer/consumer requests were failing with timeouts. The 
> Kafka process, however, was still alive and doing some background work 
> (truncating logs and rolling segments). This lasted for hours. At some point, 
> several thread dumps were taken at a few minutes' interval. Most of the threads 
> were "blocked". Deadlock was ruled out. The most suspicious stack is the 
> following:
> {code:java}
> Thread 7801: (state = BLOCKED)
>  - sun.nio.ch.FileChannelImpl.write(java.nio.ByteBuffer) @bci=25, line=202 
> (Compiled frame)
>  - 
> org.apache.kafka.common.record.MemoryRecords.writeFullyTo(java.nio.channels.GatheringByteChannel)
>  @bci=24, line=93 (Compiled frame)
>  - 
> org.apache.kafka.common.record.FileRecords.append(org.apache.kafka.common.record.MemoryRecords)
>  @bci=5, line=152 (Compiled frame)
>  - kafka.log.LogSegment.append(long, long, long, long, 
> org.apache.kafka.common.record.MemoryRecords) @bci=82, line=136 (Compiled 
> frame)
>  - kafka.log.Log.$anonfun$append$2(kafka.log.Log, 
> org.apache.kafka.common.record.MemoryRecords, boolean, boolean, int, 
> java.lang.Object) @bci=1080, line=757 (Compiled frame)
>  - kafka.log.Log$$Lambda$614.apply() @bci=24 (Compiled frame)
>  - kafka.log.Log.maybeHandleIOException(scala.Function0, scala.Function0) 
> @bci=1, line=1696 (Compiled frame)
>  - kafka.log.Log.append(org.apache.kafka.common.record.MemoryRecords, 
> boolean, boolean, int) @bci=29, line=642 (Compiled frame)
>  - kafka.log.Log.appendAsLeader(org.apache.kafka.common.record.MemoryRecords, 
> int, boolean) @bci=5, line=612 (Compiled frame)
>  - 
> kafka.cluster.Partition.$anonfun$appendRecordsToLeader$1(kafka.cluster.Partition,
>  org.apache.kafka.common.record.MemoryRecords, boolean, int) @bci=148, 
> line=609 (Compiled frame)
>  - kafka.cluster.Partition$$Lambda$837.apply() @bci=16 (Compiled frame)
>  - kafka.utils.CoreUtils$.inLock(java.util.concurrent.locks.Lock, 
> scala.Function0) @bci=7, line=250 (Compiled frame)
>  - 
> kafka.utils.CoreUtils$.inReadLock(java.util.concurrent.locks.ReadWriteLock, 
> scala.Function0) @bci=8, line=256 (Compiled frame)
>  - 
> kafka.cluster.Partition.appendRecordsToLeader(org.apache.kafka.common.record.MemoryRecords,
>  boolean, int) @bci=16, line=597 (Compiled frame)
>  - 
> kafka.server.ReplicaManager.$anonfun$appendToLocalLog$2(kafka.server.ReplicaManager,
>  boolean, boolean, short, scala.Tuple2) @bci=295, line=739 (Compiled frame)
>  - kafka.server.ReplicaManager$$Lambda$836.apply(java.lang.Object) @bci=20 
> (Compiled frame)
>  - scala.collection.TraversableLike.$anonfun$map$1(scala.Function1, 
> scala.collection.mutable.Builder, java.lang.Object) @bci=3, line=234 
> (Compiled frame)
>  - scala.collection.TraversableLike$$Lambda$14.apply(java.lang.Object) @bci=9 
> (Compiled frame)
>  - scala.collection.mutable.HashMap.$anonfun$foreach$1(scala.Function1, 
> scala.collection.mutable.DefaultEntry) @bci=16, line=138 (Compiled frame)
>  - scala.collection.mutable.HashMap$$Lambda$31.apply(java.lang.Object) @bci=8 
> (Compiled frame)
>  - scala.collection.mutable.HashTable.foreachEntry(scala.Function1) @bci=39, 
> line=236 (Compiled frame)
>  - 
> scala.collection.mutable.HashTable.foreachEntry$(scala.collection.mutable.HashTable,
>  scala.Function1) @bci=2, line=229 (Compiled frame)
>  - scala.collection.mutable.HashMap.foreachEntry(scala.Function1) 

[jira] [Commented] (KAFKA-9223) RebalanceSourceConnectorsIntegrationTest disrupting builds with System::exit

2019-11-21 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-9223?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16979738#comment-16979738
 ] 

ASF GitHub Bot commented on KAFKA-9223:
---

C0urante commented on pull request #7734: KAFKA-9223: Mask exit procedure in 
rebalance integration test to prevent call to System::exit
URL: https://github.com/apache/kafka/pull/7734
 
 
   We've been encountering some build instability that appears to be due to the 
`RebalanceSourceConnectorsIntegrationTest` class. Somehow, that test is causing 
an ungraceful shutdown of one or more of its embedded Connect workers, which 
in turn invoke `Exit::exit`. The Connect integration test framework has 
support for overriding the behavior of `Exit::exit` to prevent it from calling 
`System::exit`; the changes here use that feature to help bring back build 
stability.
   Now, if the embedded Connect worker fails to shut down gracefully, a warning 
is logged but the test still passes and (more importantly), `System::exit` is 
never invoked.
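   A minimal sketch of that masking idea, assuming the `org.apache.kafka.common.utils.Exit` hooks (the test framework's actual wiring may differ):

{code:java}
import org.apache.kafka.common.utils.Exit;

public class MaskExitExample {

    // Replace Exit's procedure so an ungraceful worker shutdown is logged instead
    // of terminating the JVM that runs the build.
    static void maskSystemExit() {
        Exit.setExitProcedure((statusCode, message) ->
                System.err.println("Suppressed exit(" + statusCode + "): " + message));
    }

    // Restore the default behaviour once the test is done.
    static void restoreSystemExit() {
        Exit.resetExitProcedure();
    }
}
{code}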
   
   If approved, this fix should be backported through to 2.3, when the test was 
introduced.
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> RebalanceSourceConnectorsIntegrationTest disrupting builds with System::exit
> 
>
> Key: KAFKA-9223
> URL: https://issues.apache.org/jira/browse/KAFKA-9223
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 2.4.0, 2.3.2
>Reporter: Chris Egerton
>Assignee: Chris Egerton
>Priority: Major
>
> The RebalanceSourceConnectorsIntegrationTest causes builds to fail sometimes 
> by ungracefully shutting down its embedded Connect workers, which in turn 
> call System::exit.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-9223) RebalanceSourceConnectorsIntegrationTest disrupting builds with System::exit

2019-11-21 Thread Chris Egerton (Jira)
Chris Egerton created KAFKA-9223:


 Summary: RebalanceSourceConnectorsIntegrationTest disrupting 
builds with System::exit
 Key: KAFKA-9223
 URL: https://issues.apache.org/jira/browse/KAFKA-9223
 Project: Kafka
  Issue Type: Bug
  Components: KafkaConnect
Affects Versions: 2.4.0, 2.3.2
Reporter: Chris Egerton
Assignee: Chris Egerton


The RebalanceSourceConnectorsIntegrationTest causes builds to fail sometimes by 
ungracefully shutting down its embedded Connect workers, which in turn call 
System::exit.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-9192) NullPointerException if field in schema not present in value

2019-11-21 Thread Lev Zemlyanov (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-9192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16979718#comment-16979718
 ] 

Lev Zemlyanov commented on KAFKA-9192:
--

I have reproduced this issue: if the value does not exist, `value.get()` will 
return `null` at line #701, which results in `JsonNode jsonValue` being 
`null` and the `NPE` being thrown.

 

I have issued a PR to fix this issue.
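For illustration, the kind of guard described above looks roughly like this (a sketch using plain Jackson types, not the actual JsonConverter code):

{code:java}
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.node.JsonNodeFactory;
import com.fasterxml.jackson.databind.node.ObjectNode;

public class MissingOptionalFieldGuard {

    // ObjectNode.get(...) returns Java null when the field is absent, so that case
    // must be checked before asking the node whether it holds a JSON null.
    static boolean isAbsentOrNull(ObjectNode payload, String fieldName) {
        JsonNode jsonValue = payload.get(fieldName); // null if the field never existed
        return jsonValue == null || jsonValue.isNull();
    }

    public static void main(String[] args) {
        ObjectNode payload = JsonNodeFactory.instance.objectNode(); // {} - "abc" is missing
        System.out.println(isAbsentOrNull(payload, "abc"));         // true, and no NPE
    }
}
{code}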

> NullPointerException if field in schema not present in value
> 
>
> Key: KAFKA-9192
> URL: https://issues.apache.org/jira/browse/KAFKA-9192
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 2.2.1
>Reporter: Mark Tinsley
>Priority: Major
>
> Given a message:
> {code:java}
> {
>"schema":{
>   "type":"struct",
>   "fields":[
>  {
> "type":"string",
> "optional":true,
> "field":"abc"
>  }
>   ],
>   "optional":false,
>   "name":"foobar"
>},
>"payload":{
>}
> }
> {code}
> I would expect, given the field is optional, for the JsonConverter to still 
> process this value. 
> What happens is I get a null pointer exception, the stacktrace points to this 
> line: 
> https://github.com/apache/kafka/blob/2.1/connect/json/src/main/java/org/apache/kafka/connect/json/JsonConverter.java#L701
>  called by 
> https://github.com/apache/kafka/blob/2.1/connect/json/src/main/java/org/apache/kafka/connect/json/JsonConverter.java#L181
> Issue seems to be that we need to check and see if the jsonValue is null 
> before checking if the jsonValue has a null value.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-9192) NullPointerException if field in schema not present in value

2019-11-21 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-9192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16979717#comment-16979717
 ] 

ASF GitHub Bot commented on KAFKA-9192:
---

levzem commented on pull request #7733: KAFKA-9192: fix NPE when for converting 
optional json schema in structs
URL: https://github.com/apache/kafka/pull/7733
 
 
   resolves the bug https://issues.apache.org/jira/browse/KAFKA-9192
   
   line #701 will throw an `NPE` if `jsonValue` is `null`, which happens when the 
schema is optional and the field never existed
   
   Signed-off-by: Lev Zemlyanov 
   
   ### Committer Checklist (excluded from commit message)
   - [ ] Verify design and implementation 
   - [ ] Verify test coverage and CI build status
   - [ ] Verify documentation (including upgrade notes)
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> NullPointerException if field in schema not present in value
> 
>
> Key: KAFKA-9192
> URL: https://issues.apache.org/jira/browse/KAFKA-9192
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 2.2.1
>Reporter: Mark Tinsley
>Priority: Major
>
> Given a message:
> {code:java}
> {
>"schema":{
>   "type":"struct",
>   "fields":[
>  {
> "type":"string",
> "optional":true,
> "field":"abc"
>  }
>   ],
>   "optional":false,
>   "name":"foobar"
>},
>"payload":{
>}
> }
> {code}
> I would expect, given the field is optional, for the JsonConverter to still 
> process this value. 
> What happens is I get a null pointer exception, the stacktrace points to this 
> line: 
> https://github.com/apache/kafka/blob/2.1/connect/json/src/main/java/org/apache/kafka/connect/json/JsonConverter.java#L701
>  called by 
> https://github.com/apache/kafka/blob/2.1/connect/json/src/main/java/org/apache/kafka/connect/json/JsonConverter.java#L181
> Issue seems to be that we need to check and see if the jsonValue is null 
> before checking if the jsonValue has a null value.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-9222) StreamPartitioner for internal repartition topics does not match defaults for to() operation

2019-11-21 Thread John Roesler (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-9222?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16979700#comment-16979700
 ] 

John Roesler commented on KAFKA-9222:
-

Thanks for the report!

It does sound like a bug in the internal processor graph construction logic. We 
should be able to forward the knowledge that the stream is windowed through the 
topology and use the right partitioner for the repartition topic. 

Besides the case you point out, it’s not clear to me that the default 
partitioner would even partition windowed data correctly, so there might be 
other implications on correctness as well. 
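As a side note on the workaround mentioned in the report, an explicit partitioner can also be passed to `to()` so that both sinks partition on the underlying key; a sketch with assumed serdes and hashing, not necessarily what `WindowedStreamPartitioner` does internally:

{code:java}
import java.nio.charset.StandardCharsets;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.common.utils.Utils;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.Produced;
import org.apache.kafka.streams.kstream.Windowed;
import org.apache.kafka.streams.kstream.WindowedSerdes;
import org.apache.kafka.streams.processor.StreamPartitioner;

public class ExplicitWindowedPartitioning {

    // Hash only the underlying record key, so records written by to() land on the
    // same partitions as the windowed aggregate they will later be joined with.
    static void writeWindowedStream(KStream<Windowed<String>, Long> stream, String topic) {
        StreamPartitioner<Windowed<String>, Long> byUnderlyingKey =
                (t, windowedKey, value, numPartitions) ->
                        Utils.toPositive(Utils.murmur2(
                                windowedKey.key().getBytes(StandardCharsets.UTF_8))) % numPartitions;

        stream.to(topic, Produced.with(
                WindowedSerdes.timeWindowedSerdeFrom(String.class),
                Serdes.Long(),
                byUnderlyingKey));
    }
}
{code}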

> StreamPartitioner for internal repartition topics does not match defaults for 
> to() operation
> 
>
> Key: KAFKA-9222
> URL: https://issues.apache.org/jira/browse/KAFKA-9222
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.3.1
>Reporter: Michael Viamari
>Priority: Minor
>
> When a KStream has a Windowed key, different StreamPartitioners are selected 
> depending on how the stream sink is generated.
> When using `KStream#to()`, the topology uses a `StreamSinkNode`, which 
> chooses a `WindowedStreamPartitioner` when no partitioner is provided when 
> creating a `SinkNode` for the topology.
> {code:java}
> KTable<> aggResult = inputStream.groupByKey().windowed(...).aggregate(...); 
> aggResult.toStream().to(aggStreamTopic)
> {code}
> When an internal repartition is created before a stateful operation, an 
> `OptimizableRepartitionNode` is used, which results in a `SinkNode` being 
> added to the topology. This node is created with a null partitioner, which 
> then would always use the Producer default partitioner. This becomes an issue 
> when attempting to join a windowed stream/ktable with a stream that was 
> mapped into a windowed key.
> {code:java}
> KTable<> windowedAgg = inputStream.groupByKey().windowed(...).aggregate(...); 
> windowedAgg.toStream().to(aggStreamTopic);
> KStream<> windowedStream = inputStream.map((k, v) -> {
> Map w = windows.windowsFor(v.getTimestamp());
> Window minW = getMinWindow(w.values());
> return KeyValue.pair(new Windowed<>(k, minW), v);
> });
> windowedStream.leftJoin(windowedAgg, );
> {code}
> The only work around I've found is to either use the default partitioner for 
> the `KStream#to()` operation, or to use `KStream.through()` for the 
> repartition operation.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-9204) ReplaceField transformation fails when encountering tombstone event

2019-11-21 Thread Lev Zemlyanov (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-9204?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16979662#comment-16979662
 ] 

Lev Zemlyanov commented on KAFKA-9204:
--

Issued [https://github.com/apache/kafka/pull/7731] as a fix, similar to the fix 
for InsertField found in [https://github.com/apache/kafka/pull/6914].

This fix, however, passes the record through for both null keys and null values.
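Roughly, the pass-through behaviour looks like the following sketch (illustrative names and structure, not the actual ReplaceField code):

{code:java}
import java.util.function.UnaryOperator;
import org.apache.kafka.connect.connector.ConnectRecord;

public class TombstoneGuard {

    // Return tombstones untouched instead of trying to treat a null value as a
    // Map or Struct; otherwise apply the actual transformation.
    static <R extends ConnectRecord<R>> R applyOrPassThrough(R record, UnaryOperator<R> transform) {
        if (record.value() == null) {
            return record; // tombstone: nothing to replace, forward as-is
        }
        return transform.apply(record);
    }
}
{code}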

> ReplaceField transformation fails when encountering tombstone event
> ---
>
> Key: KAFKA-9204
> URL: https://issues.apache.org/jira/browse/KAFKA-9204
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 2.3.0
>Reporter: Georgios Kalogiros
>Priority: Major
>
> When applying the {{ReplaceField}} transformation to a tombstone event, an 
> exception is raised:
>  
> {code:java}
> org.apache.kafka.connect.errors.ConnectException: Tolerance exceeded in error 
> handler
>   at 
> org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:178)
>   at 
> org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execute(RetryWithToleranceOperator.java:104)
>   at 
> org.apache.kafka.connect.runtime.TransformationChain.apply(TransformationChain.java:50)
>   at 
> org.apache.kafka.connect.runtime.WorkerSinkTask.convertAndTransformRecord(WorkerSinkTask.java:506)
>   at 
> org.apache.kafka.connect.runtime.WorkerSinkTask.convertMessages(WorkerSinkTask.java:464)
>   at 
> org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:320)
>   at 
> org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:224)
>   at 
> org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:192)
>   at 
> org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:177)
>   at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:227)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> Caused by: org.apache.kafka.connect.errors.DataException: Only Map objects 
> supported in absence of schema for [field replacement], found: null
>   at 
> org.apache.kafka.connect.transforms.util.Requirements.requireMap(Requirements.java:38)
>   at 
> org.apache.kafka.connect.transforms.ReplaceField.applySchemaless(ReplaceField.java:134)
>   at 
> org.apache.kafka.connect.transforms.ReplaceField.apply(ReplaceField.java:127)
>   at 
> org.apache.kafka.connect.runtime.TransformationChain.lambda$apply$0(TransformationChain.java:50)
>   at 
> org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndRetry(RetryWithToleranceOperator.java:128)
>   at 
> org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:162)
>   ... 14 more
> {code}
> There was a similar bug for the InsertField transformation that got merged in 
> recently:
> https://issues.apache.org/jira/browse/KAFKA-8523
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-9204) ReplaceField transformation fails when encountering tombstone event

2019-11-21 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-9204?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16979658#comment-16979658
 ] 

ASF GitHub Bot commented on KAFKA-9204:
---

levzem commented on pull request #7731: KAFKA-9204: allow ReplaceField SMT to 
handle tombstone records
URL: https://github.com/apache/kafka/pull/7731
 
 
   fixes https://issues.apache.org/jira/browse/KAFKA-9204
   
   this PR allows the ReplaceField SMT to handle null values and null keys by 
just passing them through
   
   Signed-off-by: Lev Zemlyanov 
   
   ### Committer Checklist (excluded from commit message)
   - [ ] Verify design and implementation 
   - [ ] Verify test coverage and CI build status
   - [ ] Verify documentation (including upgrade notes)
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> ReplaceField transformation fails when encountering tombstone event
> ---
>
> Key: KAFKA-9204
> URL: https://issues.apache.org/jira/browse/KAFKA-9204
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 2.3.0
>Reporter: Georgios Kalogiros
>Priority: Major
>
> When applying the {{ReplaceField}} transformation to a tombstone event, an 
> exception is raised:
>  
> {code:java}
> org.apache.kafka.connect.errors.ConnectException: Tolerance exceeded in error 
> handler
>   at 
> org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:178)
>   at 
> org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execute(RetryWithToleranceOperator.java:104)
>   at 
> org.apache.kafka.connect.runtime.TransformationChain.apply(TransformationChain.java:50)
>   at 
> org.apache.kafka.connect.runtime.WorkerSinkTask.convertAndTransformRecord(WorkerSinkTask.java:506)
>   at 
> org.apache.kafka.connect.runtime.WorkerSinkTask.convertMessages(WorkerSinkTask.java:464)
>   at 
> org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:320)
>   at 
> org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:224)
>   at 
> org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:192)
>   at 
> org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:177)
>   at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:227)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> Caused by: org.apache.kafka.connect.errors.DataException: Only Map objects 
> supported in absence of schema for [field replacement], found: null
>   at 
> org.apache.kafka.connect.transforms.util.Requirements.requireMap(Requirements.java:38)
>   at 
> org.apache.kafka.connect.transforms.ReplaceField.applySchemaless(ReplaceField.java:134)
>   at 
> org.apache.kafka.connect.transforms.ReplaceField.apply(ReplaceField.java:127)
>   at 
> org.apache.kafka.connect.runtime.TransformationChain.lambda$apply$0(TransformationChain.java:50)
>   at 
> org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndRetry(RetryWithToleranceOperator.java:128)
>   at 
> org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:162)
>   ... 14 more
> {code}
> There was a similar bug for the InsertField transformation that got merged in 
> recently:
> https://issues.apache.org/jira/browse/KAFKA-8523
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (KAFKA-9218) MirrorMaker 2 can fail to create topics

2019-11-21 Thread Mickael Maison (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-9218?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16979188#comment-16979188
 ] 

Mickael Maison edited comment on KAFKA-9218 at 11/21/19 5:35 PM:
-

A similar issue would occur if a target topic was deleted: it would not be 
recreated by MM2 unless a new change in the source topics occurred


was (Author: ecomar):
A similar issue would occur if a target was deleted, it would not be recreated 
by MM2 unless a new change in the source topics occurred

> MirrorMaker 2 can fail to create topics
> ---
>
> Key: KAFKA-9218
> URL: https://issues.apache.org/jira/browse/KAFKA-9218
> Project: Kafka
>  Issue Type: Bug
>Reporter: Mickael Maison
>Assignee: Mickael Maison
>Priority: Major
>
> MirrorSourceConnector.refreshTopicPartitions() does not handle topic creation 
> failure correctly.
> If createTopicPartitions() fails to create a topic, the next time 
> refreshTopicPartitions() runs it will not retry the creation. The creation will 
> only be retried if another topic has been created in the source cluster. This 
> is because knownTopicPartitions is updated before the topic creation is 
> attempted and it's only refreshed if a new topic appears.
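A rough sketch of the ordering change implied here (assumed types and helper names, not the actual MM2 code):

{code:java}
import java.util.HashSet;
import java.util.Set;

public class RefreshTopicPartitionsSketch {

    private final Set<String> knownTopicPartitions = new HashSet<>();

    // Only remember partitions as "known" once their topics were actually created
    // on the target cluster, so a failed creation is retried on the next refresh.
    void refreshTopicPartitions(Set<String> sourceTopicPartitions) {
        Set<String> newPartitions = new HashSet<>(sourceTopicPartitions);
        newPartitions.removeAll(knownTopicPartitions);
        if (newPartitions.isEmpty()) {
            return;
        }
        if (createTopicPartitions(newPartitions)) {
            knownTopicPartitions.addAll(newPartitions); // update only after success
        }
    }

    boolean createTopicPartitions(Set<String> partitions) {
        // stand-in for the real admin-client topic creation; returns success/failure
        return true;
    }
}
{code}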



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-9222) StreamPartitioner for internal repartition topics does not match defaults for to() operation

2019-11-21 Thread Michael Viamari (Jira)
Michael Viamari created KAFKA-9222:
--

 Summary: StreamPartitioner for internal repartition topics does 
not match defaults for to() operation
 Key: KAFKA-9222
 URL: https://issues.apache.org/jira/browse/KAFKA-9222
 Project: Kafka
  Issue Type: Bug
  Components: streams
Affects Versions: 2.3.1
Reporter: Michael Viamari


When a KStream has a Windowed key, different StreamPartitioners are selected 
depending on how the stream sink is generated.

When using `KStream#to()`, the topology uses a `StreamSinkNode`, which chooses 
a `WindowedStreamPartitioner` when no partitioner is provided when creating a 
`SinkNode` for the topology.
{code:java}
KTable<> aggResult = inputStream.groupByKey().windowed(...).aggregate(...); 
aggResult.toStream().to(aggStreamTopic)
{code}
When an internal repartition is created before a stateful operation, an 
`OptimizableRepartitionNode` is used, which results in a `SinkNode` being added 
to the topology. This node is created with a null partitioner, which then would 
always use the Producer default partitioner. This becomes an issue when 
attempting to join a windowed stream/ktable with a stream that was mapped into 
a windowed key.
{code:java}
KTable<> windowedAgg = inputStream.groupByKey().windowed(...).aggregate(...); 
windowedAgg.toStream().to(aggStreamTopic);

KStream<> windowedStream = inputStream.map((k, v) -> {
Map w = windows.windowsFor(v.getTimestamp());
Window minW = getMinWindow(w.values());
return KeyValue.pair(new Windowed<>(k, minW), v);
});
windowedStream.leftJoin(windowedAgg, );
{code}
The only work around I've found is to either use the default partitioner for 
the `KStream#to()` operation, or to use `KStream.through()` for the repartition 
operation.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (KAFKA-9222) StreamPartitioner for internal repartition topics does not match defaults for to() operation

2019-11-21 Thread Michael Viamari (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9222?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Viamari updated KAFKA-9222:
---
Priority: Minor  (was: Major)

> StreamPartitioner for internal repartition topics does not match defaults for 
> to() operation
> 
>
> Key: KAFKA-9222
> URL: https://issues.apache.org/jira/browse/KAFKA-9222
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.3.1
>Reporter: Michael Viamari
>Priority: Minor
>
> When a KStream has a Windowed key, different StreamPartitioners are selected 
> depending on how the stream sink is generated.
> When using `KStream#to()`, the topology uses a `StreamSinkNode`, which 
> chooses a `WindowedStreamPartitioner` when no partitioner is provided when 
> creating a `SinkNode` for the topology.
> {code:java}
> KTable<> aggResult = inputStream.groupByKey().windowed(...).aggregate(...); 
> aggResult.toStream().to(aggStreamTopic)
> {code}
> When an internal repartition is created before a stateful operation, an 
> `OptimizableRepartitionNode` is used, which results in a `SinkNode` being 
> added to the topology. This node is created with a null partitioner, which 
> then would always use the Producer default partitioner. This becomes an issue 
> when attempting to join a windowed stream/ktable with a stream that was 
> mapped into a windowed key.
> {code:java}
> KTable<> windowedAgg = inputStream.groupByKey().windowed(...).aggregate(...); 
> windowedAgg.toStream().to(aggStreamTopic);
> KStream<> windowedStream = inputStream.map((k, v) -> {
> Map w = windows.windowsFor(v.getTimestamp());
> Window minW = getMinWindow(w.values());
> return KeyValue.pair(new Windowed<>(k, minW), v);
> });
> windowedStream.leftJoin(windowedAgg, );
> {code}
> The only work around I've found is to either use the default partitioner for 
> the `KStream#to()` operation, or to use `KStream.through()` for the 
> repartition operation.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-1714) more better bootstrapping of the gradle-wrapper.jar

2019-11-21 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-1714?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16979391#comment-16979391
 ] 

ASF GitHub Bot commented on KAFKA-1714:
---

ijuma commented on pull request #6031: KAFKA-1714: Fix gradle wrapper 
bootstrapping
URL: https://github.com/apache/kafka/pull/6031
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> more better bootstrapping of the gradle-wrapper.jar 
> 
>
> Key: KAFKA-1714
> URL: https://issues.apache.org/jira/browse/KAFKA-1714
> Project: Kafka
>  Issue Type: Bug
>  Components: build
>Affects Versions: 0.8.2.0
>Reporter: Joe Stein
>Assignee: Grant Henke
>Priority: Major
>
> From https://issues.apache.org/jira/browse/KAFKA-1490 we moved out the 
> gradle-wrapper.jar for our source maintenance. This makes the first build step 
> somewhat problematic for folks coming in. A bootstrap step is required; 
> if this could somehow be incorporated, that would be great.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-9221) Kafka REST Proxy wrongly converts quotes in message when sending json

2019-11-21 Thread Ryanne Dolan (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9221?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ryanne Dolan resolved KAFKA-9221.
-
Resolution: Invalid

not relevant to this project

> Kafka REST Proxy wrongly converts quotes in message when sending json
> -
>
> Key: KAFKA-9221
> URL: https://issues.apache.org/jira/browse/KAFKA-9221
> Project: Kafka
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 2.3.0
> Environment: Linux redhat
>Reporter: Seweryn Habdank-Wojewodzki
>Priority: Major
>
> Kafka REST Proxy has a problem when sending/converting json files (e.g. 
> json.new) into Kafka protocol. For example JSON file:
> {code:java}
> {"records":[{"value":"rest.kafka.testmetric,host=server.com,partition=8,topic=my_topic,url=http:--localhost:7071-metrics
>  1337 1572276922"}]}
> {code}
> is sent using a call to the Kafka REST Proxy on localhost:8073:
> {code:java}
> curl -X POST -H "Content-Type: application/vnd.kafka.json.v2+json" -H 
> "Accept: application/vnd.kafka.v2+json" --data @json.new  
> http://localhost:8073/topics/somple_topic -i 
> {code}
> in Kafka in some_topic we got:
> {code:java}
> "rest.kafka.testmetric,host=server.com,partition=8,topic=my_topic,url=http:--localhost:7071-metrics
>  1337 1572276922"
> {code}
> but the expected message has no quotes:
> {code:java}
> rest.kafka.testmetric,host=server.com,partition=8,topic=my_topic,url=http:--localhost:7071-metrics
>  1337 1572276922
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-9221) Kafka REST Proxy wrongly converts quotes in message when sending json

2019-11-21 Thread Seweryn Habdank-Wojewodzki (Jira)
Seweryn Habdank-Wojewodzki created KAFKA-9221:
-

 Summary: Kafka REST Proxy wrongly converts quotes in message when 
sending json
 Key: KAFKA-9221
 URL: https://issues.apache.org/jira/browse/KAFKA-9221
 Project: Kafka
  Issue Type: Bug
  Components: tools
Affects Versions: 2.3.0
 Environment: Linux redhat
Reporter: Seweryn Habdank-Wojewodzki


Kafka REST Proxy has a problem when sending/converting json files (e.g. 
json.new) into Kafka protocol. For example JSON file:
{code:java}
{"records":[{"value":"rest.kafka.testmetric,host=server.com,partition=8,topic=my_topic,url=http:--localhost:7071-metrics
 1337 1572276922"}]}
{code}
is sent using a call to the Kafka REST Proxy on localhost:8073:
{code:java}
curl -X POST -H "Content-Type: application/vnd.kafka.json.v2+json" -H "Accept: 
application/vnd.kafka.v2+json" --data @json.new  
http://localhost:8073/topics/somple_topic -i 
{code}
in Kafka in some_topic we got:
{code:java}
"rest.kafka.testmetric,host=server.com,partition=8,topic=my_topic,url=http:--localhost:7071-metrics
 1337 1572276922"
{code}
but the expected message has no quotes:
{code:java}
rest.kafka.testmetric,host=server.com,partition=8,topic=my_topic,url=http:--localhost:7071-metrics
 1337 1572276922
{code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-9220) TimeoutException when using kafka-preferred-replica-election

2019-11-21 Thread Or Shemesh (Jira)
Or Shemesh created KAFKA-9220:
-

 Summary: TimeoutException when using 
kafka-preferred-replica-election
 Key: KAFKA-9220
 URL: https://issues.apache.org/jira/browse/KAFKA-9220
 Project: Kafka
  Issue Type: Bug
  Components: admin
Affects Versions: 2.3.0
Reporter: Or Shemesh


When running kafka-preferred-replica-election --bootstrap-server xxx:9092

I'm getting this error:

Timeout waiting for election results
Exception in thread "main" kafka.common.AdminCommandFailedException
 at kafka.admin.PreferredReplicaLeaderElectionCommand$AdminClientCommand.electPreferredLeaders(PreferredReplicaLeaderElectionCommand.scala:246)
 at kafka.admin.PreferredReplicaLeaderElectionCommand$.run(PreferredReplicaLeaderElectionCommand.scala:78)
 at kafka.admin.PreferredReplicaLeaderElectionCommand$.main(PreferredReplicaLeaderElectionCommand.scala:42)
 at kafka.admin.PreferredReplicaLeaderElectionCommand.main(PreferredReplicaLeaderElectionCommand.scala)
Caused by: org.apache.kafka.common.errors.TimeoutException: Aborted due to timeout.

 

Because we have a big cluster, getting all the data from ZooKeeper is 
taking more than 30 seconds.

 

After searching the code I saw that the 30 seconds are hard-coded; can you enable 
us to set the timeout as a parameter?

[https://github.com/confluentinc/kafka/blob/master/core/src/main/scala/kafka/admin/PreferredReplicaLeaderElectionCommand.scala]

 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-9218) MirrorMaker 2 can fail to create topics

2019-11-21 Thread Edoardo Comar (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-9218?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16979188#comment-16979188
 ] 

Edoardo Comar commented on KAFKA-9218:
--

A similar issue would occur if a target was deleted, it would not be recreated 
by MM2 unless a new change in the source topics occurred

> MirrorMaker 2 can fail to create topics
> ---
>
> Key: KAFKA-9218
> URL: https://issues.apache.org/jira/browse/KAFKA-9218
> Project: Kafka
>  Issue Type: Bug
>Reporter: Mickael Maison
>Assignee: Mickael Maison
>Priority: Major
>
> MirrorSourceConnector.refreshTopicPartitions() does not handle topic creation 
> failure correctly.
> If createTopicPartitions() fails to create a topic, the next time 
> refreshTopicPartitions() runs it will not retry the creation. The creation will 
> only be retried if another topic has been created in the source cluster. This 
> is because knownTopicPartitions is updated before the topic creation is 
> attempted and it's only refreshed if a new topic appears.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (KAFKA-9219) NullPointerException when polling metrics from Kafka Connect

2019-11-21 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9219?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison updated KAFKA-9219:
--
Affects Version/s: 2.4.0

> NullPointerException when polling metrics from Kafka Connect
> 
>
> Key: KAFKA-9219
> URL: https://issues.apache.org/jira/browse/KAFKA-9219
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 2.4.0
>Reporter: Mickael Maison
>Priority: Major
>
> The following stack trace appears:
>  
> {code:java}
> [2019-11-05 23:56:57,909] WARN Error getting JMX attribute 'assigned-tasks' 
> (org.apache.kafka.common.metrics.JmxReporter:202)
> java.lang.NullPointerException
>   at 
> org.apache.kafka.connect.runtime.distributed.WorkerCoordinator$WorkerCoordinatorMetrics$2.measure(WorkerCoordinator.java:316)
>   at 
> org.apache.kafka.common.metrics.KafkaMetric.metricValue(KafkaMetric.java:66)
>   at 
> org.apache.kafka.common.metrics.JmxReporter$KafkaMbean.getAttribute(JmxReporter.java:190)
>   at 
> org.apache.kafka.common.metrics.JmxReporter$KafkaMbean.getAttributes(JmxReporter.java:200)
>   at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getAttributes(DefaultMBeanServerInterceptor.java:709)
>   at 
> com.sun.jmx.mbeanserver.JmxMBeanServer.getAttributes(JmxMBeanServer.java:705)
>   at 
> javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1449)
>   at 
> javax.management.remote.rmi.RMIConnectionImpl.access$300(RMIConnectionImpl.java:76)
>   at 
> javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1309)
>   at 
> javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(RMIConnectionImpl.java:1401)
>   at 
> javax.management.remote.rmi.RMIConnectionImpl.getAttributes(RMIConnectionImpl.java:675)
>   at sun.reflect.GeneratedMethodAccessor11.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:357)
>   at sun.rmi.transport.Transport$1.run(Transport.java:200)
>   at sun.rmi.transport.Transport$1.run(Transport.java:197)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at sun.rmi.transport.Transport.serviceCall(Transport.java:196)
>   at 
> sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:573)
>   at 
> sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:835)
>   at 
> sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.lambda$run$0(TCPTransport.java:688)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at 
> sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:687)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> [2019-11-05 23:57:02,821] INFO [Worker clientId=connect-1, 
> groupId=backup-mm2] Herder stopped 
> (org.apache.kafka.connect.runtime.distributed.DistributedHerder:629)
> [2019-11-05 23:57:02,821] INFO [Worker clientId=connect-2, groupId=cv-mm2] 
> Herder stopping 
> (org.apache.kafka.connect.runtime.distributed.DistributedHerder:609)
> [2019-11-05 23:57:07,822] INFO [Worker clientId=connect-2, groupId=cv-mm2] 
> Herder stopped 
> (org.apache.kafka.connect.runtime.distributed.DistributedHerder:629)
> [2019-11-05 23:57:07,822] INFO Kafka MirrorMaker stopped. 
> (org.apache.kafka.connect.mirror.MirrorMaker:191)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (KAFKA-9219) NullPointerException when polling metrics from Kafka Connect

2019-11-21 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9219?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison updated KAFKA-9219:
--
Component/s: KafkaConnect

> NullPointerException when polling metrics from Kafka Connect
> 
>
> Key: KAFKA-9219
> URL: https://issues.apache.org/jira/browse/KAFKA-9219
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Reporter: Mickael Maison
>Priority: Major
>
> The following stack trace appears:
>  
> {code:java}
> [2019-11-05 23:56:57,909] WARN Error getting JMX attribute 'assigned-tasks' 
> (org.apache.kafka.common.metrics.JmxReporter:202)
> java.lang.NullPointerException
>   at 
> org.apache.kafka.connect.runtime.distributed.WorkerCoordinator$WorkerCoordinatorMetrics$2.measure(WorkerCoordinator.java:316)
>   at 
> org.apache.kafka.common.metrics.KafkaMetric.metricValue(KafkaMetric.java:66)
>   at 
> org.apache.kafka.common.metrics.JmxReporter$KafkaMbean.getAttribute(JmxReporter.java:190)
>   at 
> org.apache.kafka.common.metrics.JmxReporter$KafkaMbean.getAttributes(JmxReporter.java:200)
>   at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getAttributes(DefaultMBeanServerInterceptor.java:709)
>   at 
> com.sun.jmx.mbeanserver.JmxMBeanServer.getAttributes(JmxMBeanServer.java:705)
>   at 
> javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1449)
>   at 
> javax.management.remote.rmi.RMIConnectionImpl.access$300(RMIConnectionImpl.java:76)
>   at 
> javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1309)
>   at 
> javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(RMIConnectionImpl.java:1401)
>   at 
> javax.management.remote.rmi.RMIConnectionImpl.getAttributes(RMIConnectionImpl.java:675)
>   at sun.reflect.GeneratedMethodAccessor11.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:357)
>   at sun.rmi.transport.Transport$1.run(Transport.java:200)
>   at sun.rmi.transport.Transport$1.run(Transport.java:197)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at sun.rmi.transport.Transport.serviceCall(Transport.java:196)
>   at 
> sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:573)
>   at 
> sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:835)
>   at 
> sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.lambda$run$0(TCPTransport.java:688)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at 
> sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:687)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> [2019-11-05 23:57:02,821] INFO [Worker clientId=connect-1, 
> groupId=backup-mm2] Herder stopped 
> (org.apache.kafka.connect.runtime.distributed.DistributedHerder:629)
> [2019-11-05 23:57:02,821] INFO [Worker clientId=connect-2, groupId=cv-mm2] 
> Herder stopping 
> (org.apache.kafka.connect.runtime.distributed.DistributedHerder:609)
> [2019-11-05 23:57:07,822] INFO [Worker clientId=connect-2, groupId=cv-mm2] 
> Herder stopped 
> (org.apache.kafka.connect.runtime.distributed.DistributedHerder:629)
> [2019-11-05 23:57:07,822] INFO Kafka MirrorMaker stopped. 
> (org.apache.kafka.connect.mirror.MirrorMaker:191)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (KAFKA-9219) NullPointerException when polling metrics from Kafka Connect

2019-11-21 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9219?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison updated KAFKA-9219:
--
Description: 
The following stack trace appears:

 


{code:java}
[2019-11-05 23:56:57,909] WARN Error getting JMX attribute 'assigned-tasks' 
(org.apache.kafka.common.metrics.JmxReporter:202)
java.lang.NullPointerException
at 
org.apache.kafka.connect.runtime.distributed.WorkerCoordinator$WorkerCoordinatorMetrics$2.measure(WorkerCoordinator.java:316)
at 
org.apache.kafka.common.metrics.KafkaMetric.metricValue(KafkaMetric.java:66)
at 
org.apache.kafka.common.metrics.JmxReporter$KafkaMbean.getAttribute(JmxReporter.java:190)
at 
org.apache.kafka.common.metrics.JmxReporter$KafkaMbean.getAttributes(JmxReporter.java:200)
at 
com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getAttributes(DefaultMBeanServerInterceptor.java:709)
at 
com.sun.jmx.mbeanserver.JmxMBeanServer.getAttributes(JmxMBeanServer.java:705)
at 
javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1449)
at 
javax.management.remote.rmi.RMIConnectionImpl.access$300(RMIConnectionImpl.java:76)
at 
javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1309)
at 
javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(RMIConnectionImpl.java:1401)
at 
javax.management.remote.rmi.RMIConnectionImpl.getAttributes(RMIConnectionImpl.java:675)
at sun.reflect.GeneratedMethodAccessor11.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:357)
at sun.rmi.transport.Transport$1.run(Transport.java:200)
at sun.rmi.transport.Transport$1.run(Transport.java:197)
at java.security.AccessController.doPrivileged(Native Method)
at sun.rmi.transport.Transport.serviceCall(Transport.java:196)
at 
sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:573)
at 
sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:835)
at 
sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.lambda$run$0(TCPTransport.java:688)
at java.security.AccessController.doPrivileged(Native Method)
at 
sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:687)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
[2019-11-05 23:57:02,821] INFO [Worker clientId=connect-1, groupId=backup-mm2] 
Herder stopped 
(org.apache.kafka.connect.runtime.distributed.DistributedHerder:629)
[2019-11-05 23:57:02,821] INFO [Worker clientId=connect-2, groupId=cv-mm2] 
Herder stopping 
(org.apache.kafka.connect.runtime.distributed.DistributedHerder:609)
[2019-11-05 23:57:07,822] INFO [Worker clientId=connect-2, groupId=cv-mm2] 
Herder stopped 
(org.apache.kafka.connect.runtime.distributed.DistributedHerder:629)
[2019-11-05 23:57:07,822] INFO Kafka MirrorMaker stopped. 
(org.apache.kafka.connect.mirror.MirrorMaker:191)
{code}


  was:
The following stack trace appears:

 

{{[2019-11-05 23:56:57,909] WARN Error getting JMX attribute 'assigned-tasks' 
(org.apache.kafka.common.metrics.JmxReporter:202)
java.lang.NullPointerException
at 
org.apache.kafka.connect.runtime.distributed.WorkerCoordinator$WorkerCoordinatorMetrics$2.measure(WorkerCoordinator.java:316)
at 
org.apache.kafka.common.metrics.KafkaMetric.metricValue(KafkaMetric.java:66)
at 
org.apache.kafka.common.metrics.JmxReporter$KafkaMbean.getAttribute(JmxReporter.java:190)
at 
org.apache.kafka.common.metrics.JmxReporter$KafkaMbean.getAttributes(JmxReporter.java:200)
at 
com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getAttributes(DefaultMBeanServerInterceptor.java:709)
at 
com.sun.jmx.mbeanserver.JmxMBeanServer.getAttributes(JmxMBeanServer.java:705)
at 
javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1449)
at 
javax.management.remote.rmi.RMIConnectionImpl.access$300(RMIConnectionImpl.java:76)
at 
javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1309)
at 
javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(RMIConnectionImpl.java:1401)
at 
javax.management.remote.rmi.RMIConnectionImpl.getAttributes(RMIConnectionImpl.java:675)
at sun.reflect.GeneratedMethodAccessor11.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at 

[jira] [Created] (KAFKA-9219) NullPointerException when polling metrics from Kafka Connect

2019-11-21 Thread Mickael Maison (Jira)
Mickael Maison created KAFKA-9219:
-

 Summary: NullPointerException when polling metrics from Kafka 
Connect
 Key: KAFKA-9219
 URL: https://issues.apache.org/jira/browse/KAFKA-9219
 Project: Kafka
  Issue Type: Bug
Reporter: Mickael Maison


The following stack trace appears:

 

{{[2019-11-05 23:56:57,909] WARN Error getting JMX attribute 'assigned-tasks' 
(org.apache.kafka.common.metrics.JmxReporter:202)
java.lang.NullPointerException
at 
org.apache.kafka.connect.runtime.distributed.WorkerCoordinator$WorkerCoordinatorMetrics$2.measure(WorkerCoordinator.java:316)
at 
org.apache.kafka.common.metrics.KafkaMetric.metricValue(KafkaMetric.java:66)
at 
org.apache.kafka.common.metrics.JmxReporter$KafkaMbean.getAttribute(JmxReporter.java:190)
at 
org.apache.kafka.common.metrics.JmxReporter$KafkaMbean.getAttributes(JmxReporter.java:200)
at 
com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getAttributes(DefaultMBeanServerInterceptor.java:709)
at 
com.sun.jmx.mbeanserver.JmxMBeanServer.getAttributes(JmxMBeanServer.java:705)
at 
javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1449)
at 
javax.management.remote.rmi.RMIConnectionImpl.access$300(RMIConnectionImpl.java:76)
at 
javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1309)
at 
javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(RMIConnectionImpl.java:1401)
at 
javax.management.remote.rmi.RMIConnectionImpl.getAttributes(RMIConnectionImpl.java:675)
at sun.reflect.GeneratedMethodAccessor11.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:357)
at sun.rmi.transport.Transport$1.run(Transport.java:200)
at sun.rmi.transport.Transport$1.run(Transport.java:197)
at java.security.AccessController.doPrivileged(Native Method)
at sun.rmi.transport.Transport.serviceCall(Transport.java:196)
at 
sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:573)
at 
sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:835)
at 
sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.lambda$run$0(TCPTransport.java:688)
at java.security.AccessController.doPrivileged(Native Method)
at 
sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:687)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
[2019-11-05 23:57:02,821] INFO [Worker clientId=connect-1, groupId=backup-mm2] 
Herder stopped 
(org.apache.kafka.connect.runtime.distributed.DistributedHerder:629)
[2019-11-05 23:57:02,821] INFO [Worker clientId=connect-2, groupId=cv-mm2] 
Herder stopping 
(org.apache.kafka.connect.runtime.distributed.DistributedHerder:609)
[2019-11-05 23:57:07,822] INFO [Worker clientId=connect-2, groupId=cv-mm2] 
Herder stopped 
(org.apache.kafka.connect.runtime.distributed.DistributedHerder:629)
[2019-11-05 23:57:07,822] INFO Kafka MirrorMaker stopped. 
(org.apache.kafka.connect.mirror.MirrorMaker:191)}}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-9218) MirrorMaker 2 can fail to create topics

2019-11-21 Thread Mickael Maison (Jira)
Mickael Maison created KAFKA-9218:
-

 Summary: MirrorMaker 2 can fail to create topics
 Key: KAFKA-9218
 URL: https://issues.apache.org/jira/browse/KAFKA-9218
 Project: Kafka
  Issue Type: Bug
Reporter: Mickael Maison
Assignee: Mickael Maison


MirrorSourceConnector.refreshTopicPartitions() does not handle topic creation 
failure correctly.
If createTopicPartitions() fails to create a topic, the next time 
refreshTopicPartitions() runs, it will not retry the creation. The creation will only 
be retried if another topic has been created in the source cluster. This is 
because knownTopicPartitions is updated before the topic creation is attempted 
and it's only refreshed if a new topic appears.
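
A minimal sketch of the flawed ordering described above (illustrative only: the class, field and helper names are simplified stand-ins, not the actual MirrorSourceConnector code):

{code:java}
import java.util.HashSet;
import java.util.Set;

import org.apache.kafka.common.TopicPartition;

// Hypothetical simplification of the pattern described in this issue.
abstract class TopicRefresher {
    private Set<TopicPartition> knownTopicPartitions = new HashSet<>();

    void refreshTopicPartitions() {
        Set<TopicPartition> current = findSourceTopicPartitions();
        Set<TopicPartition> added = new HashSet<>(current);
        added.removeAll(knownTopicPartitions);
        if (!added.isEmpty()) {
            // Flaw described above: the known set is updated *before* creation is
            // attempted, so if createTopicPartitions() fails, the missing topics
            // are not retried until yet another new topic appears in the source
            // cluster.
            knownTopicPartitions = current;
            createTopicPartitions(added);
        }
    }

    abstract Set<TopicPartition> findSourceTopicPartitions();

    abstract void createTopicPartitions(Set<TopicPartition> toCreate);
}
{code}

Deferring the update of knownTopicPartitions until createTopicPartitions() has succeeded (or tracking failed creations separately) would let the next refresh retry the missing topics.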



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-9212) Keep receiving FENCED_LEADER_EPOCH while sending ListOffsetRequest

2019-11-21 Thread Yannick (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-9212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16979153#comment-16979153
 ] 

Yannick commented on KAFKA-9212:


Here are the leader-epoch-checkpoint files on each broker (3 in total: brokers 1, 3 and 4):

 

Broker ID 4 ( the current partition leader during issue):

cat /var/lib/kafka/logs/connect_ls_config-0/leader-epoch-checkpoint
0
2
0 0
2 22

 

Broker ID 1 :

cat /var/lib/kafka/logs/connect_ls_config-0/leader-epoch-checkpoint
0
1
0 0

 

Broker ID 3:

cat /var/lib/kafka/logs/connect_ls_config-0/leader-epoch-checkpoint
0
1
0 0
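
For reference, the leader-epoch-checkpoint layout is: a format version, the number of entries, then one "leaderEpoch startOffset" pair per line. An annotated reading of broker 4's dump above (the annotations are added here for clarity and are not part of the file):

{code}
0       <- checkpoint format version
2       <- number of epoch entries
0 0     <- leader epoch 0 starts at offset 0
2 22    <- leader epoch 2 starts at offset 22
{code}

So broker 4, the current leader, has recorded leader epoch 2, while brokers 1 and 3 only record epoch 0; a consumer holding a cached epoch of 1 is therefore behind the leader, which matches the FENCED_LEADER_EPOCH responses described below.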

 

 

The config topic was created by the Kafka Connect worker with its default settings (a compacted topic):

Topic:connect_ls_config PartitionCount:1 ReplicationFactor:3 
Configs:min.insync.replicas=2,cleanup.policy=compact,segment.bytes=1073741824,max.message.bytes=3000

 

> Keep receiving FENCED_LEADER_EPOCH while sending ListOffsetRequest
> --
>
> Key: KAFKA-9212
> URL: https://issues.apache.org/jira/browse/KAFKA-9212
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer, offset manager
>Affects Versions: 2.3.0
> Environment: Linux
>Reporter: Yannick
>Priority: Critical
>
> When running the Kafka Connect S3 sink connector (Confluent 5.3.0), after one 
> broker was restarted (the leaderEpoch was updated at this point), the Connect 
> worker crashed with the following error:
> [2019-11-19 16:20:30,097] ERROR [Worker clientId=connect-1, 
> groupId=connect-ls] Uncaught exception in herder work thread, exiting: 
> (org.apache.kafka.connect.runtime.distributed.DistributedHerder:253)
>  org.apache.kafka.common.errors.TimeoutException: Failed to get offsets by 
> times in 30003ms
>  
> After investigation, it seems this is because the worker kept getting fenced when 
> sending ListOffsetRequest in a loop and eventually timed out, as follows:
> [2019-11-19 16:20:30,020] DEBUG [Consumer clientId=consumer-3, 
> groupId=connect-ls] Sending ListOffsetRequest (type=ListOffsetRequest, 
> replicaId=-1, partitionTimestamps={connect_ls_config-0={timestamp: -1, 
> maxNumOffsets: 1, currentLeaderEpoch: Optional[1]}}, 
> isolationLevel=READ_UNCOMMITTED) to broker kafka6.fra2.internal:9092 (id: 4 
> rack: null) (org.apache.kafka.clients.consumer.internals.Fetcher:905)
> [2019-11-19 16:20:30,044] DEBUG [Consumer clientId=consumer-3, 
> groupId=connect-ls] Attempt to fetch offsets for partition 
> connect_ls_config-0 failed due to FENCED_LEADER_EPOCH, retrying. 
> (org.apache.kafka.clients.consumer.internals.Fetcher:985)
>  
> The above happens multiple times until the timeout is reached.
>  
> According to the debug logs, the consumer always gets a leaderEpoch of 1 for this 
> topic when starting up:
>  
>  [2019-11-19 13:27:30,802] DEBUG [Consumer clientId=consumer-3, 
> groupId=connect-ls] Updating last seen epoch from null to 1 for partition 
> connect_ls_config-0 (org.apache.kafka.clients.Metadata:178)
>   
>   
>  But according to our brokers' logs, the leaderEpoch should be 2, as follows:
>   
>  [2019-11-18 14:19:28,988] INFO [Partition connect_ls_config-0 broker=4] 
> connect_ls_config-0 starts at Leader Epoch 2 from offset 22. Previous Leader 
> Epoch was: 1 (kafka.cluster.Partition)
>   
>   
>  This makes it impossible to restart the worker, as it will always get fenced and 
> then finally time out.
>   
>  It is also impossible to consume with a 2.3 kafka-console-consumer, as 
> follows:
>   
>  kafka-console-consumer --bootstrap-server BOOTSTRAPSERVER:9092 --topic 
> connect_ls_config --from-beginning 
>   
>  the above will just hang forever (which is not expected, because there is 
> data), and we can see these debug messages:
> [2019-11-19 22:17:59,124] DEBUG [Consumer clientId=consumer-1, 
> groupId=console-consumer-3844] Attempt to fetch offsets for partition 
> connect_ls_config-0 failed due to FENCED_LEADER_EPOCH, retrying. 
> (org.apache.kafka.clients.consumer.internals.Fetcher)
>   
>   
>  Interestingly, if we subscribe the same way with kafkacat (1.5.0) we can 
> consume without problems (presumably because of how kafkacat consumes, ignoring 
> FENCED_LEADER_EPOCH):
>   
>  kafkacat -b BOOTSTRAPSERVER:9092 -t connect_ls_config -o beginning
>   
>   
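
For anyone trying to reproduce this outside Connect, a minimal probe is sketched below. It assumes the topic and bootstrap address from the report and uses KafkaConsumer#endOffsets as a stand-in for the offset lookup the worker performs (this is not the worker's actual code); internally this issues the same kind of ListOffsetRequest (timestamp -1) carrying the cached currentLeaderEpoch.

{code:java}
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.serialization.ByteArrayDeserializer;

// Illustrative probe, not the Connect worker's code: a stale cached leader epoch
// should surface here as FENCED_LEADER_EPOCH retries at DEBUG level, followed by
// a TimeoutException from the offset lookup.
public class ListOffsetsProbe {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "BOOTSTRAPSERVER:9092");
        props.put("key.deserializer", ByteArrayDeserializer.class.getName());
        props.put("value.deserializer", ByteArrayDeserializer.class.getName());

        try (KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(props)) {
            TopicPartition tp = new TopicPartition("connect_ls_config", 0);
            consumer.assign(Collections.singletonList(tp));
            // endOffsets() sends a ListOffsetRequest with the consumer's cached
            // currentLeaderEpoch for the partition.
            System.out.println(consumer.endOffsets(Collections.singletonList(tp),
                    Duration.ofSeconds(30)));
        }
    }
}
{code}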



--
This message was sent by Atlassian Jira
(v8.3.4#803005)