[jira] [Commented] (KAFKA-7500) MirrorMaker 2.0 (KIP-382)

2019-07-03 Thread Ryanne Dolan (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7500?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16878308#comment-16878308
 ] 

Ryanne Dolan commented on KAFKA-7500:
-

[~Munamala] Yes, translateOffsets() works both ways. When K1 comes back 
online, you can "failback" from K2 to K1 once a checkpoint for TOPIC_GROUP is 
emitted upstream to K1. The checkpoint will have offsets for TOPIC1 on K1 
(translated from K1.TOPIC1 on K2). You can then seek() to skip over the records 
TOPIC_GROUP already consumed in K2.

The tricky part here is that you need to make sure MM2 is configured to emit 
checkpoints both from K1->K2 and K2->K1. Configure the whitelists like:

K1->K2.topics = TOPIC1, K2.TOPIC1
K2->K1.topics = TOPIC1, K1.TOPIC1
K1->K2.groups = TOPIC_GROUP
K2->K1.groups = TOPIC_GROUP

Otherwise, you won't see checkpoints for TOPIC1 going from K2 to K1.
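
For reference, here is a minimal failback sketch in Java. It assumes the 
RemoteClusterUtils API from the KIP-382 PR; the bootstrap address, timeout, 
and consumer settings below are placeholders, not a definitive implementation:

import java.time.Duration;
import java.util.HashMap;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.serialization.ByteArrayDeserializer;
import org.apache.kafka.connect.mirror.RemoteClusterUtils;

public class Failback {
    public static void main(String[] args) throws Exception {
        // Hypothetical K1 address; adjust for your clusters.
        Map<String, Object> k1Props = new HashMap<>();
        k1Props.put("bootstrap.servers", "k1:9092");

        // Offsets for TOPIC1 on K1, translated from the K2->K1 checkpoints
        // that MM2 emits for TOPIC_GROUP.
        Map<TopicPartition, OffsetAndMetadata> offsets =
            RemoteClusterUtils.translateOffsets(
                k1Props, "K2", "TOPIC_GROUP", Duration.ofSeconds(30));

        Properties props = new Properties();
        props.put("bootstrap.servers", "k1:9092");
        props.put("group.id", "TOPIC_GROUP");
        props.put("key.deserializer", ByteArrayDeserializer.class.getName());
        props.put("value.deserializer", ByteArrayDeserializer.class.getName());

        try (KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(props)) {
            consumer.assign(offsets.keySet());
            // Skip the records TOPIC_GROUP already consumed via K1.TOPIC1 on K2.
            offsets.forEach((tp, om) -> consumer.seek(tp, om.offset()));
            // ... resume polling TOPIC1 on K1 from here
        }
    }
}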

Ryanne

> MirrorMaker 2.0 (KIP-382)
> -
>
> Key: KAFKA-7500
> URL: https://issues.apache.org/jira/browse/KAFKA-7500
> Project: Kafka
>  Issue Type: New Feature
>  Components: KafkaConnect, mirrormaker
>Reporter: Ryanne Dolan
>Priority: Minor
> Fix For: 2.4.0
>
>
> Implement a drop-in replacement for MirrorMaker leveraging the Connect 
> framework.
> [https://cwiki.apache.org/confluence/display/KAFKA/KIP-382%3A+MirrorMaker+2.0]
> [https://github.com/apache/kafka/pull/6295]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-8624) This is our server-side Kafka, version kafka_2.11-1.0.0.2. When a client sends messages to a topic, the following exception is thrown. Is the client version inconsistent with the server version?

2019-07-03 Thread CHARELS (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16878277#comment-16878277
 ] 

CHARELS commented on KAFKA-8624:


We are sure that our producers' code and consumers' code are OK, but our 
versions are different: most producers' client versions range from 0.9.0 to 
0.11.1.0, while our Kafka version is 1.0.0. Maybe the version difference causes 
the error "java.lang.IllegalArgumentException: Magic v0 does not support record 
headers". So, how should we solve it if the producers do not want to update 
their versions? Should the producers change the declaration of the header 
information?
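
For illustration: the error appears when records written with headers (message 
format v2, supported since 0.11) must be down-converted for a consumer fetching 
with an old API version, and magic v0 cannot carry headers. A hypothetical 
producer sketch of the kind of record involved (broker address, topic, and 
header name are placeholders); not attaching headers, or upgrading the old 
clients, sidesteps the down-conversion failure:

import java.nio.charset.StandardCharsets;
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class HeaderProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker:9092"); // hypothetical address
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            ProducerRecord<String, String> record =
                new ProducerRecord<>("ETE", "key", "value");
            // The header is the part that magic v0 cannot represent when the
            // broker down-converts this batch for an old consumer.
            record.headers().add("trace-id", "abc".getBytes(StandardCharsets.UTF_8));
            producer.send(record);
        }
    }
}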

> This is our server-side Kafka, version kafka_2.11-1.0.0.2. When a client sends messages to a topic, the following exception is thrown. Is the client version inconsistent with the server version?
> 
>
> Key: KAFKA-8624
> URL: https://issues.apache.org/jira/browse/KAFKA-8624
> Project: Kafka
>  Issue Type: Bug
>  Components: log
>Affects Versions: 1.0.0
>Reporter: CHARELS
>Priority: Major
>  Labels: pull-request-available
> Attachments: image-2019-07-04-00-18-15-781.png
>
>   Original Estimate: 12h
>  Remaining Estimate: 12h
>
> ERROR [KafkaApi-1004] Error when handling request 
> \{replica_id=-1,max_wait_time=1,min_bytes=0,topics=[{topic=ETE,partitions=[{partition=3,fetch_offset=119224671,max_bytes=1048576}]}]}
>  (kafka.server.KafkaApis)
> java.lang.IllegalArgumentException: Magic v0 does not support record headers
>  at 
> org.apache.kafka.common.record.MemoryRecordsBuilder.appendWithOffset(MemoryRecordsBuilder.java:403)
>  at 
> org.apache.kafka.common.record.MemoryRecordsBuilder.append(MemoryRecordsBuilder.java:586)
>  at 
> org.apache.kafka.common.record.AbstractRecords.convertRecordBatch(AbstractRecords.java:134)
>  at 
> org.apache.kafka.common.record.AbstractRecords.downConvert(AbstractRecords.java:109)
>  at 
> org.apache.kafka.common.record.FileRecords.downConvert(FileRecords.java:253)
>  at 
> kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$convertedPartitionData$1$1$$anonfun$apply$4.apply(KafkaApis.scala:519)
>  at 
> kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$convertedPartitionData$1$1$$anonfun$apply$4.apply(KafkaApis.scala:517)
>  at scala.Option.map(Option.scala:146)
>  at 
> kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$convertedPartitionData$1$1.apply(KafkaApis.scala:517)
>  at 
> kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$convertedPartitionData$1$1.apply(KafkaApis.scala:507)
>  at scala.Option.flatMap(Option.scala:171)
>  at 
> kafka.server.KafkaApis.kafka$server$KafkaApis$$convertedPartitionData$1(KafkaApis.scala:507)
>  at 
> kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$createResponse$2$1.apply(KafkaApis.scala:555)
>  at 
> kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$createResponse$2$1.apply(KafkaApis.scala:554)
>  at scala.collection.Iterator$class.foreach(Iterator.scala:891)
>  at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
>  at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
>  at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
>  at 
> kafka.server.KafkaApis.kafka$server$KafkaApis$$createResponse$2(KafkaApis.scala:554)
>  at 
> kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$fetchResponseCallback$1$1.apply(KafkaApis.scala:568)
>  at 
> kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$fetchResponseCallback$1$1.apply(KafkaApis.scala:568)
>  at 
> kafka.server.KafkaApis$$anonfun$sendResponseMaybeThrottle$1.apply$mcVI$sp(KafkaApis.scala:2033)
>  at 
> kafka.server.ClientRequestQuotaManager.maybeRecordAndThrottle(ClientRequestQuotaManager.scala:54)
>  at kafka.server.KafkaApis.sendResponseMaybeThrottle(KafkaApis.scala:2032)
>  at 
> kafka.server.KafkaApis.kafka$server$KafkaApis$$fetchResponseCallback$1(KafkaApis.scala:568)
>  at 
> kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$processResponseCallback$1$1.apply$mcVI$sp(KafkaApis.scala:587)
>  at 
> kafka.server.ClientQuotaManager.maybeRecordAndThrottle(ClientQuotaManager.scala:176)
>  at 
> kafka.server.KafkaApis.kafka$server$KafkaApis$$processResponseCallback$1(KafkaApis.scala:586)
>  at 
> kafka.server.KafkaApis$$anonfun$handleFetchRequest$3.apply(KafkaApis.scala:603)
>  at 
> kafka.server.KafkaApis$$anonfun$handleFetchRequest$3.apply(KafkaApis.scala:603)
>  at kafka.server.ReplicaManager.fetchMessages(ReplicaManager.scala:820)
>  at kafka.server.KafkaApis.handleFetchRequest(KafkaApis.scala:595)
>  at kafka.server.KafkaApis.handle(KafkaApis.scala:99)
>  at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:65)
>  at java.lang.Thread.run(Thread.java:748)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-8522) Tombstones can survive forever

2019-07-03 Thread Jun Rao (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8522?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16878247#comment-16878247
 ] 

Jun Rao commented on KAFKA-8522:


Since we are returning the tombstone to the consumer, it may not be ideal to 
change it. Another option is to write out the cleaning offset and the 
corresponding cleaning time to a checkpoint file in the partition dir. The 
cleaner can use that checkpoint to determine the cleaning time for each 
tombstone. Once all tombstones before an offset are removed, the corresponding 
entry in the checkpoint file can be removed.
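
To sketch the idea concretely (illustrative only, not Kafka code; the class and 
method names here are made up):

import java.util.Map;
import java.util.TreeMap;

// Per-partition checkpoint of (cleaning offset -> cleaning time), kept in
// the partition dir as proposed above.
public class TombstoneCheckpoint {
    // cleaning offset -> wall-clock time of that cleaning pass
    private final TreeMap<Long, Long> cleanings = new TreeMap<>();

    // Called after each cleaning pass over the head of the log.
    public void recordCleaning(long cleanedToOffset, long cleaningTimeMs) {
        cleanings.put(cleanedToOffset, cleaningTimeMs);
    }

    // The time that starts a tombstone's delete.retention.ms clock: the
    // first recorded cleaning whose offset is at or past the tombstone.
    public Long cleaningTimeFor(long tombstoneOffset) {
        Map.Entry<Long, Long> e = cleanings.ceilingEntry(tombstoneOffset);
        return e == null ? null : e.getValue();
    }

    // Once all tombstones before an offset are removed, the corresponding
    // entries can be dropped, as described above.
    public void pruneBelow(long offset) {
        cleanings.headMap(offset, false).clear();
    }
}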

> Tombstones can survive forever
> --
>
> Key: KAFKA-8522
> URL: https://issues.apache.org/jira/browse/KAFKA-8522
> Project: Kafka
>  Issue Type: Bug
>  Components: log cleaner
>Reporter: Evelyn Bayes
>Priority: Minor
>
> This is a bit of a grey area as to whether it's a "bug", but it is certainly 
> unintended behaviour.
>  
> Under specific conditions tombstones effectively survive forever:
>  * Small amount of throughput;
>  * min.cleanable.dirty.ratio near or at 0; and
>  * Other parameters at default.
> What happens is that all the data continuously gets cycled into the oldest 
> segment. Old records get compacted away, but the new records continuously 
> update the timestamp of the oldest segment, resetting the countdown for 
> deleting tombstones.
> So tombstones build up in the oldest segment forever.
>  
> While you could "fix" this by reducing the segment size, this can be 
> undesirable as a sudden change in throughput could cause a dangerous number 
> of segments to be created.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-8628) Auto-generated Kafka RPC code should be able to use zero-copy ByteBuffers

2019-07-03 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8628?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16878238#comment-16878238
 ] 

ASF GitHub Bot commented on KAFKA-8628:
---

cmccabe commented on pull request #7032: KAFKA-8628: Auto-generated Kafka RPC 
code should be able to use zero-copy ByteBuffers
URL: https://github.com/apache/kafka/pull/7032
 
 
   It is often more efficient to deal with large fields of type "bytes" as 
ByteBuffer objects, rather than Java arrays.  ByteBuffers can be used in a 
"zero-copy" fashion, where they simply serve as pointers into already allocated 
memory.  This avoids the need to copy the data, as would be necessary when 
using a regular Java byte array.
   
   There is a drawback to using zero-copy ByteBuffers, though: when they are 
used, the ByteBuffer from which we deserialized the request must remain valid 
until we're done with the request.  So the underlying buffer cannot be reused 
during this period.  Therefore, we don't want all bytes fields to show up as 
ByteBuffers.
   
   This change adds a "style" field to the RPC specifications which allows 
specifying that a "bytes" field should be deserialized as a zero-copy 
ByteBuffer rather than as a byte array.
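
   A short plain-NIO illustration of that tradeoff (not the generated code
   itself):

import java.nio.ByteBuffer;

public class ZeroCopyDemo {
    public static void main(String[] args) {
        // A toy "request": a 4-byte length prefix followed by 3 payload bytes.
        ByteBuffer request = ByteBuffer.wrap(new byte[] {0, 0, 0, 3, 'a', 'b', 'c'});
        int len = request.getInt();

        // Copying: allocates and copies, but stays valid if `request` is reused.
        byte[] copy = new byte[len];
        request.mark();
        request.get(copy);
        request.reset();

        // Zero-copy: a view into `request`; valid only while the underlying
        // buffer is not reused, which is exactly the drawback described above.
        ByteBuffer view = request.slice();
        view.limit(len);
    }
}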
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Auto-generated Kafka RPC code should be able to use zero-copy ByteBuffers
> -
>
> Key: KAFKA-8628
> URL: https://issues.apache.org/jira/browse/KAFKA-8628
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Colin P. McCabe
>Assignee: Colin P. McCabe
>Priority: Major
>
> Auto-generated Kafka RPC code should be able to use zero-copy ByteBuffers



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-8628) Auto-generated Kafka RPC code should be able to use zero-copy ByteBuffers

2019-07-03 Thread Colin P. McCabe (JIRA)
Colin P. McCabe created KAFKA-8628:
--

 Summary: Auto-generated Kafka RPC code should be able to use 
zero-copy ByteBuffers
 Key: KAFKA-8628
 URL: https://issues.apache.org/jira/browse/KAFKA-8628
 Project: Kafka
  Issue Type: Improvement
Reporter: Colin P. McCabe
Assignee: Colin P. McCabe


Auto-generated Kafka RPC code should be able to use zero-copy ByteBuffers



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-7500) MirrorMaker 2.0 (KIP-382)

2019-07-03 Thread Srikala (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7500?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16878214#comment-16878214
 ] 

Srikala commented on KAFKA-7500:


[~ryannedolan], Thanks for all your efforts on MM2. I am also eagerly 
looking forward to MM2 landing in 2.4.0.

Appreciate your input on the following scenario:

I have set up two clusters K1 and K2 and configured MM2 replication for TOPIC1. 
My consumer group is "TOPIC_GROUP". 
 
 1. To start with, the K1 cluster is active and "TOPIC_GROUP" is actively 
consuming TOPIC1 from K1. 
 2. Now, the failover to K2 is triggered and "TOPIC_GROUP" starts consuming 
from K2. I have used RemoteClusterUtils.translateOffsets to seek to the correct 
offset for K1.TOPIC1.
 3. During failover, there were #N messages in TOPIC1 in K1 that were not 
consumed by "TOPIC_GROUP" but were replicated to K2 and consumed by 
"TOPIC_GROUP" in K2 as K1.TOPIC1.

In this scenario, per my testing, when K1 comes back online and TOPIC_GROUP 
restarts consuming from K1, the #N messages will be consumed again from TOPIC1. 
Is this expected? Is there a way to do consumer.seek on the TOPIC1 partitions 
in K1, based on the K1.TOPIC1 consumer offsets in K2?

Thanks!

> MirrorMaker 2.0 (KIP-382)
> -
>
> Key: KAFKA-7500
> URL: https://issues.apache.org/jira/browse/KAFKA-7500
> Project: Kafka
>  Issue Type: New Feature
>  Components: KafkaConnect, mirrormaker
>Reporter: Ryanne Dolan
>Priority: Minor
> Fix For: 2.4.0
>
>
> Implement a drop-in replacement for MirrorMaker leveraging the Connect 
> framework.
> [https://cwiki.apache.org/confluence/display/KAFKA/KIP-382%3A+MirrorMaker+2.0]
> [https://github.com/apache/kafka/pull/6295]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-5998) /.checkpoint.tmp Not found exception

2019-07-03 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-5998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16878159#comment-16878159
 ] 

ASF GitHub Bot commented on KAFKA-5998:
---

vvcephei commented on pull request #7027: KAFKA-5998: fix checkpointableOffsets 
handling
URL: https://github.com/apache/kafka/pull/7027
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> /.checkpoint.tmp Not found exception
> 
>
> Key: KAFKA-5998
> URL: https://issues.apache.org/jira/browse/KAFKA-5998
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.11.0.0, 0.11.0.1, 2.1.1
>Reporter: Yogesh BG
>Assignee: Bill Bejeck
>Priority: Critical
> Attachments: 5998.v1.txt, 5998.v2.txt, Topology.txt, exc.txt, 
> props.txt, streams.txt
>
>
> I have one Kafka broker and one Kafka Streams instance running... I have been 
> running it for two days under a load of around 2500 msgs per second. On the 
> third day I am getting the below exception for some of the partitions. I have 
> 16 partitions; only 0_0 and 0_1 give this error
> {{09:43:25.955 [ks_0_inst-StreamThread-6] WARN  
> o.a.k.s.p.i.ProcessorStateManager - Failed to write checkpoint file to 
> /data/kstreams/rtp-kafkastreams/0_1/.checkpoint:
> java.io.FileNotFoundException: 
> /data/kstreams/rtp-kafkastreams/0_1/.checkpoint.tmp (No such file or 
> directory)
> at java.io.FileOutputStream.open(Native Method) ~[na:1.7.0_111]
> at java.io.FileOutputStream.<init>(FileOutputStream.java:221) 
> ~[na:1.7.0_111]
> at java.io.FileOutputStream.<init>(FileOutputStream.java:171) 
> ~[na:1.7.0_111]
> at 
> org.apache.kafka.streams.state.internals.OffsetCheckpoint.write(OffsetCheckpoint.java:73)
>  ~[rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.ProcessorStateManager.checkpoint(ProcessorStateManager.java:324)
>  ~[rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamTask$1.run(StreamTask.java:267)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamsMetricsImpl.measureLatencyNs(StreamsMetricsImpl.java:201)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamTask.commit(StreamTask.java:260)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamTask.commit(StreamTask.java:254)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.AssignedTasks$1.apply(AssignedTasks.java:322)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.AssignedTasks.applyToRunningTasks(AssignedTasks.java:415)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.AssignedTasks.commit(AssignedTasks.java:314)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.commitAll(StreamThread.java:700)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.maybeCommit(StreamThread.java:683)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:523)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:480)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:457)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> 09:43:25.974 [ks_0_inst-StreamThread-15] WARN  
> o.a.k.s.p.i.ProcessorStateManager - Failed to write checkpoint file to 
> /data/kstreams/rtp-kafkastreams/0_0/.checkpoint:
> java.io.FileNotFoundException: 
> /data/kstreams/rtp-kafkastreams/0_0/.checkpoint.tmp (No such file or 
> directory)
> at java.io.FileOutputStream.open(Native Method) ~[na:1.7.0_111]
> at java.io.FileOutputStream.<init>(FileOutputStream.java:221) 
> ~[na:1.7.0_111]
> at java.io.FileOutputStream.<init>(FileOutputStream.java:171) 
> ~[na:1.7.0_111]
> 

[jira] [Created] (KAFKA-8627) Investigate batching on state restore

2019-07-03 Thread Sophie Blee-Goldman (JIRA)
Sophie Blee-Goldman created KAFKA-8627:
--

 Summary: Investigate batching on state restore
 Key: KAFKA-8627
 URL: https://issues.apache.org/jira/browse/KAFKA-8627
 Project: Kafka
  Issue Type: Improvement
  Components: streams
Reporter: Sophie Blee-Goldman


Currently when rebuilding state from scratch, we form batches based on whatever 
is returned by poll() and write them to RocksDB. Given the structure of 
RocksDB, inserting large sorted batches gives the best performance when writing 
large amounts of data at once, so we should investigate the potential 
restore-time improvement of:

1) Larger batches – either by tuning the restore consumer to return larger 
amounts of data, buffering records into larger batches, or both

2) Sorting batches 

 

These two factors are likely to be coupled, so we should explore the 
performance gains/hits by varying both if possible (i.e. turn sorting on/off 
with a variety of batch sizes).
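
As a rough illustration of the sorted-batch write path (assumes the RocksDB 
Java API; the path and sample records are placeholders):

import java.util.Arrays;
import java.util.Map;
import java.util.TreeMap;
import org.rocksdb.Options;
import org.rocksdb.RocksDB;
import org.rocksdb.RocksDBException;
import org.rocksdb.WriteBatch;
import org.rocksdb.WriteOptions;

public class SortedBatchRestore {
    public static void main(String[] args) throws RocksDBException {
        RocksDB.loadLibrary();
        try (Options opts = new Options().setCreateIfMissing(true);
             RocksDB db = RocksDB.open(opts, "/tmp/restore-demo");
             WriteBatch batch = new WriteBatch();
             WriteOptions writeOpts = new WriteOptions()) {

            // Buffer records returned by poll() and sort them by key before
            // writing, since RocksDB ingests large sorted batches fastest.
            TreeMap<byte[], byte[]> buffered = new TreeMap<>(Arrays::compare);
            buffered.put("k2".getBytes(), "v2".getBytes());
            buffered.put("k1".getBytes(), "v1".getBytes());

            for (Map.Entry<byte[], byte[]> e : buffered.entrySet()) {
                batch.put(e.getKey(), e.getValue());
            }
            db.write(writeOpts, batch); // one large, sorted write
        }
    }
}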



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-4212) Add a key-value store that is a TTL persistent cache

2019-07-03 Thread Del Bao (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-4212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16878060#comment-16878060
 ] 

Del Bao commented on KAFKA-4212:


Our use case is even simpler. We just want to prevent the local state store 
from growing without bound. True, there was a Stack Overflow post answered by 
[~mjsax] that suggested using a windowed store to achieve this, but that makes 
our code very confusing because of all the retention and window-size settings, 
etc.
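
For context, the windowed-store workaround being referenced looks roughly like 
this lookup (a sketch only; the method and variable names are illustrative):

import java.time.Duration;
import java.time.Instant;
import org.apache.kafka.streams.state.WindowStore;
import org.apache.kafka.streams.state.WindowStoreIterator;

public class TtlCacheLookup {
    // Return the newest value for `key` within `ttl`, or null if expired.
    static <V> V latest(WindowStore<String, V> store, String key, Duration ttl) {
        Instant now = Instant.now();
        V newest = null;
        try (WindowStoreIterator<V> iter =
                 store.fetch(key, now.minus(ttl), now)) {
            while (iter.hasNext()) {
                newest = iter.next().value; // entries arrive oldest-first
            }
        }
        return newest;
    }
}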

> Add a key-value store that is a TTL persistent cache
> 
>
> Key: KAFKA-4212
> URL: https://issues.apache.org/jira/browse/KAFKA-4212
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Affects Versions: 0.10.0.1
>Reporter: Elias Levy
>Priority: Major
>  Labels: api
>
> Some jobs need to maintain as state a large set of key-values for some 
> period of time.  I.e. they need to maintain a TTL cache of values potentially 
> larger than memory. 
> Currently Kafka Streams provides non-windowed and windowed key-value stores.  
> Neither is an exact fit to this use case.  
> The {{RocksDBStore}}, a {{KeyValueStore}}, stores one value per key as 
> required, but does not support expiration.  The TTL option of RocksDB is 
> explicitly not used.
> The {{RocksDBWindowsStore}}, a {{WindowsStore}}, can expire items via segment 
> dropping, but it stores multiple items per key, based on their timestamp.  
> But this store can be repurposed as a cache by fetching the items in reverse 
> chronological order and returning the first item found.
> KAFKA-2594 introduced a fixed-capacity in-memory LRU caching store, but here 
> we desire a variable-capacity memory-overflowing TTL caching store.
> Although {{RocksDBWindowsStore}} can be repurposed as a cache, it would be 
> useful to have an official and proper TTL cache API and implementation.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-4212) Add a key-value store that is a TTL persistent cache

2019-07-03 Thread Elias Levy (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-4212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16878027#comment-16878027
 ] 

Elias Levy commented on KAFKA-4212:
---

I am the originator of the issue.  And, yes, for our particular use case, there 
wasn't a need for strict coherency.  A best effort TTL service would have been 
sufficient.

> Add a key-value store that is a TTL persistent cache
> 
>
> Key: KAFKA-4212
> URL: https://issues.apache.org/jira/browse/KAFKA-4212
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Affects Versions: 0.10.0.1
>Reporter: Elias Levy
>Priority: Major
>  Labels: api
>
> Some jobs need to maintain as state a large set of key-values for some 
> period of time.  I.e. they need to maintain a TTL cache of values potentially 
> larger than memory. 
> Currently Kafka Streams provides non-windowed and windowed key-value stores.  
> Neither is an exact fit to this use case.  
> The {{RocksDBStore}}, a {{KeyValueStore}}, stores one value per key as 
> required, but does not support expiration.  The TTL option of RocksDB is 
> explicitly not used.
> The {{RocksDBWindowsStore}}, a {{WindowsStore}}, can expire items via segment 
> dropping, but it stores multiple items per key, based on their timestamp.  
> But this store can be repurposed as a cache by fetching the items in reverse 
> chronological order and returning the first item found.
> KAFKA-2594 introduced a fixed-capacity in-memory LRU caching store, but here 
> we desire a variable-capacity memory-overflowing TTL caching store.
> Although {{RocksDBWindowsStore}} can be repurposed as a cache, it would be 
> useful to have an official and proper TTL cache API and implementation.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-4212) Add a key-value store that is a TTL persistent cache

2019-07-03 Thread James Ritt (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-4212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16878024#comment-16878024
 ] 

James Ritt commented on KAFKA-4212:
---

Hi [~mjsax], thanks for your input! I'm not the original requestor, but my 
understanding of this JIRA is that it would be useful to have a TTL KV cache 
within streams: something I'd agree with, and it appears a commenter or two 
above also agree. The main draw for me is that the TTL cache, instead of 
growing unbounded, would then essentially mimic the underlying topic which we 
created with `cleanup.policy=delete` & `delete.retention.ms` set.

I very much could be wrong, but I think RocksDB and a topic set up with 
`cleanup.policy=delete` & `delete.retention.ms` both work off wall-clock time, 
so they would be congruent in that respect? And it's true that not being 
event-based could introduce discrepancies between the two (in particular, 
imagine the cache is configured with the same TTL as the topic but the cache is 
offline for a couple of hours; when the cache comes back online it will hold 
onto the values for an extra couple of hours), but that could be fine as long 
as application semantics don't depend upon cache eviction.

An example might help: our current use case is to store a cache of revoked auth 
tokens. These tokens contain an expiration and are relatively short-lived, so 
we set up the containing topic with `delete.retention.ms` equal to their 
lifetime. We were then hoping to use Streams' `GlobalKTable` cache on this 
topic. With my PR, we could use the newly added TTL KV cache with the same TTL 
as the underlying topic. And in this situation, wall-clock skew is fine, as 
there is no harm in tokens persisting in the cache for extra time. Without this 
change, I believe our underlying RocksDB cache would grow unbounded.

> Add a key-value store that is a TTL persistent cache
> 
>
> Key: KAFKA-4212
> URL: https://issues.apache.org/jira/browse/KAFKA-4212
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Affects Versions: 0.10.0.1
>Reporter: Elias Levy
>Priority: Major
>  Labels: api
>
> Some jobs need to maintain as state a large set of key-values for some 
> period of time.  I.e. they need to maintain a TTL cache of values potentially 
> larger than memory. 
> Currently Kafka Streams provides non-windowed and windowed key-value stores.  
> Neither is an exact fit to this use case.  
> The {{RocksDBStore}}, a {{KeyValueStore}}, stores one value per key as 
> required, but does not support expiration.  The TTL option of RocksDB is 
> explicitly not used.
> The {{RocksDBWindowsStore}}, a {{WindowsStore}}, can expire items via segment 
> dropping, but it stores multiple items per key, based on their timestamp.  
> But this store can be repurposed as a cache by fetching the items in reverse 
> chronological order and returning the first item found.
> KAFKA-2594 introduced a fixed-capacity in-memory LRU caching store, but here 
> we desire a variable-capacity memory-overflowing TTL caching store.
> Although {{RocksDBWindowsStore}} can be repurposed as a cache, it would be 
> useful to have an official and proper TTL cache API and implementation.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-8436) Replace AddOffsetsToTxn request/response with automated protocol

2019-07-03 Thread Boyang Chen (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8436?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16877989#comment-16877989
 ] 

Boyang Chen commented on KAFKA-8436:


[https://github.com/apache/kafka/pull/7015]

> Replace AddOffsetsToTxn request/response with automated protocol
> 
>
> Key: KAFKA-8436
> URL: https://issues.apache.org/jira/browse/KAFKA-8436
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Boyang Chen
>Assignee: Boyang Chen
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-5998) /.checkpoint.tmp Not found exception

2019-07-03 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-5998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16877988#comment-16877988
 ] 

ASF GitHub Bot commented on KAFKA-5998:
---

vvcephei commented on pull request #7030: KAFKA-5998: fix checkpointableOffsets 
handling
URL: https://github.com/apache/kafka/pull/7030
 
 
   * fix checkpoint file warning by filtering checkpointable offsets per task
   * clean up state manager hierarchy to prevent similar bugs
   
   ### Committer Checklist (excluded from commit message)
   - [ ] Verify design and implementation 
   - [ ] Verify test coverage and CI build status
   - [ ] Verify documentation (including upgrade notes)
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> /.checkpoint.tmp Not found exception
> 
>
> Key: KAFKA-5998
> URL: https://issues.apache.org/jira/browse/KAFKA-5998
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.11.0.0, 0.11.0.1, 2.1.1
>Reporter: Yogesh BG
>Assignee: Bill Bejeck
>Priority: Critical
> Attachments: 5998.v1.txt, 5998.v2.txt, Topology.txt, exc.txt, 
> props.txt, streams.txt
>
>
> I have one Kafka broker and one Kafka Streams instance running... I have been 
> running it for two days under a load of around 2500 msgs per second. On the 
> third day I am getting the below exception for some of the partitions. I have 
> 16 partitions; only 0_0 and 0_1 give this error
> {{09:43:25.955 [ks_0_inst-StreamThread-6] WARN  
> o.a.k.s.p.i.ProcessorStateManager - Failed to write checkpoint file to 
> /data/kstreams/rtp-kafkastreams/0_1/.checkpoint:
> java.io.FileNotFoundException: 
> /data/kstreams/rtp-kafkastreams/0_1/.checkpoint.tmp (No such file or 
> directory)
> at java.io.FileOutputStream.open(Native Method) ~[na:1.7.0_111]
> at java.io.FileOutputStream.<init>(FileOutputStream.java:221) 
> ~[na:1.7.0_111]
> at java.io.FileOutputStream.<init>(FileOutputStream.java:171) 
> ~[na:1.7.0_111]
> at 
> org.apache.kafka.streams.state.internals.OffsetCheckpoint.write(OffsetCheckpoint.java:73)
>  ~[rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.ProcessorStateManager.checkpoint(ProcessorStateManager.java:324)
>  ~[rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamTask$1.run(StreamTask.java:267)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamsMetricsImpl.measureLatencyNs(StreamsMetricsImpl.java:201)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamTask.commit(StreamTask.java:260)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamTask.commit(StreamTask.java:254)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.AssignedTasks$1.apply(AssignedTasks.java:322)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.AssignedTasks.applyToRunningTasks(AssignedTasks.java:415)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.AssignedTasks.commit(AssignedTasks.java:314)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.commitAll(StreamThread.java:700)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.maybeCommit(StreamThread.java:683)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:523)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:480)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:457)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> 09:43:25.974 [ks_0_inst-StreamThread-15] WARN  
> o.a.k.s.p.i.ProcessorStateManager - Failed to write checkpoint file to 
> /data/kstreams/rtp-kafkastreams/0_0/.checkpoint:
> 

[jira] [Commented] (KAFKA-8617) Replace EndTxn request/response with automated protocol

2019-07-03 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8617?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16877985#comment-16877985
 ] 

ASF GitHub Bot commented on KAFKA-8617:
---

abbccdda commented on pull request #7029: KAFKA-8617: Use automated protocol 
for End Txn
URL: https://github.com/apache/kafka/pull/7029
 
 
   As title
   
   ### Committer Checklist (excluded from commit message)
   - [ ] Verify design and implementation 
   - [ ] Verify test coverage and CI build status
   - [ ] Verify documentation (including upgrade notes)
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Replace EndTxn request/response with automated protocol
> ---
>
> Key: KAFKA-8617
> URL: https://issues.apache.org/jira/browse/KAFKA-8617
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Boyang Chen
>Assignee: Boyang Chen
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (KAFKA-8624) This is our server-side Kafka, version kafka_2.11-1.0.0.2. When a client sends messages to a topic, the following exception is thrown. Is the client version inconsistent with the server version?

2019-07-03 Thread 李志涛


[ 
https://issues.apache.org/jira/browse/KAFKA-8624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16877957#comment-16877957
 ] 

Zhitao Li(李志涛) edited comment on KAFKA-8624 at 7/3/19 4:32 PM:
---

I assume that you wrote your own producer-side code, and others wrote the 
consumer-side code.

That's why they were inconsistent.

Please make sure that there is no problem with the producer's version of the 
code you wrote, and that the problem is with the others' consumer version of 
the code.


was (Author: lizhitao):
I assume that you write your own production-side code, and others write 
consumer-side code.

That's why they were inconsistent.

Please make sure that there is no problem with the production version of code 
you wrote, and that there is a problem with the others consumer version of code

> This is our server-side Kafka, version kafka_2.11-1.0.0.2. When a client sends messages to a topic, the following exception is thrown. Is the client version inconsistent with the server version?
> 
>
> Key: KAFKA-8624
> URL: https://issues.apache.org/jira/browse/KAFKA-8624
> Project: Kafka
>  Issue Type: Bug
>  Components: log
>Affects Versions: 1.0.0
>Reporter: CHARELS
>Priority: Major
>  Labels: pull-request-available
> Attachments: image-2019-07-04-00-18-15-781.png
>
>   Original Estimate: 12h
>  Remaining Estimate: 12h
>
> ERROR [KafkaApi-1004] Error when handling request 
> \{replica_id=-1,max_wait_time=1,min_bytes=0,topics=[{topic=ETE,partitions=[{partition=3,fetch_offset=119224671,max_bytes=1048576}]}]}
>  (kafka.server.KafkaApis)
> java.lang.IllegalArgumentException: Magic v0 does not support record headers
>  at 
> org.apache.kafka.common.record.MemoryRecordsBuilder.appendWithOffset(MemoryRecordsBuilder.java:403)
>  at 
> org.apache.kafka.common.record.MemoryRecordsBuilder.append(MemoryRecordsBuilder.java:586)
>  at 
> org.apache.kafka.common.record.AbstractRecords.convertRecordBatch(AbstractRecords.java:134)
>  at 
> org.apache.kafka.common.record.AbstractRecords.downConvert(AbstractRecords.java:109)
>  at 
> org.apache.kafka.common.record.FileRecords.downConvert(FileRecords.java:253)
>  at 
> kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$convertedPartitionData$1$1$$anonfun$apply$4.apply(KafkaApis.scala:519)
>  at 
> kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$convertedPartitionData$1$1$$anonfun$apply$4.apply(KafkaApis.scala:517)
>  at scala.Option.map(Option.scala:146)
>  at 
> kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$convertedPartitionData$1$1.apply(KafkaApis.scala:517)
>  at 
> kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$convertedPartitionData$1$1.apply(KafkaApis.scala:507)
>  at scala.Option.flatMap(Option.scala:171)
>  at 
> kafka.server.KafkaApis.kafka$server$KafkaApis$$convertedPartitionData$1(KafkaApis.scala:507)
>  at 
> kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$createResponse$2$1.apply(KafkaApis.scala:555)
>  at 
> kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$createResponse$2$1.apply(KafkaApis.scala:554)
>  at scala.collection.Iterator$class.foreach(Iterator.scala:891)
>  at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
>  at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
>  at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
>  at 
> kafka.server.KafkaApis.kafka$server$KafkaApis$$createResponse$2(KafkaApis.scala:554)
>  at 
> kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$fetchResponseCallback$1$1.apply(KafkaApis.scala:568)
>  at 
> kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$fetchResponseCallback$1$1.apply(KafkaApis.scala:568)
>  at 
> kafka.server.KafkaApis$$anonfun$sendResponseMaybeThrottle$1.apply$mcVI$sp(KafkaApis.scala:2033)
>  at 
> kafka.server.ClientRequestQuotaManager.maybeRecordAndThrottle(ClientRequestQuotaManager.scala:54)
>  at kafka.server.KafkaApis.sendResponseMaybeThrottle(KafkaApis.scala:2032)
>  at 
> kafka.server.KafkaApis.kafka$server$KafkaApis$$fetchResponseCallback$1(KafkaApis.scala:568)
>  at 
> kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$processResponseCallback$1$1.apply$mcVI$sp(KafkaApis.scala:587)
>  at 
> kafka.server.ClientQuotaManager.maybeRecordAndThrottle(ClientQuotaManager.scala:176)
>  at 
> kafka.server.KafkaApis.kafka$server$KafkaApis$$processResponseCallback$1(KafkaApis.scala:586)
>  at 
> kafka.server.KafkaApis$$anonfun$handleFetchRequest$3.apply(KafkaApis.scala:603)
>  at 
> kafka.server.KafkaApis$$anonfun$handleFetchRequest$3.apply(KafkaApis.scala:603)
>  at kafka.server.ReplicaManager.fetchMessages(ReplicaManager.scala:820)
>  at kafka.server.KafkaApis.handleFetchRequest(KafkaApis.scala:595)
>  at kafka.server.KafkaApis.handle(KafkaApis.scala:99)
>  at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:65)
>  at 

[jira] [Comment Edited] (KAFKA-8624) This is our server-side Kafka, version kafka_2.11-1.0.0.2. When a client sends messages to a topic, the following exception is thrown. Is the client version inconsistent with the server version?

2019-07-03 Thread 李志涛


[ 
https://issues.apache.org/jira/browse/KAFKA-8624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16877957#comment-16877957
 ] 

Zhitao Li(李志涛) edited comment on KAFKA-8624 at 7/3/19 4:27 PM:
---

I assume that you wrote your own producer-side code, and others wrote the 
consumer-side code.

That's why they were inconsistent.

Please make sure that there is no problem with the production version of the 
code you wrote, and that the problem is with the others' consumer version of 
the code.


was (Author: lizhitao):
I assume that you write your own production-side code, and others write 
consumer-side code.

That's why they were inconsistent.

> This is our server-side Kafka, version kafka_2.11-1.0.0.2. When a client sends messages to a topic, the following exception is thrown. Is the client version inconsistent with the server version?
> 
>
> Key: KAFKA-8624
> URL: https://issues.apache.org/jira/browse/KAFKA-8624
> Project: Kafka
>  Issue Type: Bug
>  Components: log
>Affects Versions: 1.0.0
>Reporter: CHARELS
>Priority: Major
>  Labels: pull-request-available
> Attachments: image-2019-07-04-00-18-15-781.png
>
>   Original Estimate: 12h
>  Remaining Estimate: 12h
>
> ERROR [KafkaApi-1004] Error when handling request 
> \{replica_id=-1,max_wait_time=1,min_bytes=0,topics=[{topic=ETE,partitions=[{partition=3,fetch_offset=119224671,max_bytes=1048576}]}]}
>  (kafka.server.KafkaApis)
> java.lang.IllegalArgumentException: Magic v0 does not support record headers
>  at 
> org.apache.kafka.common.record.MemoryRecordsBuilder.appendWithOffset(MemoryRecordsBuilder.java:403)
>  at 
> org.apache.kafka.common.record.MemoryRecordsBuilder.append(MemoryRecordsBuilder.java:586)
>  at 
> org.apache.kafka.common.record.AbstractRecords.convertRecordBatch(AbstractRecords.java:134)
>  at 
> org.apache.kafka.common.record.AbstractRecords.downConvert(AbstractRecords.java:109)
>  at 
> org.apache.kafka.common.record.FileRecords.downConvert(FileRecords.java:253)
>  at 
> kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$convertedPartitionData$1$1$$anonfun$apply$4.apply(KafkaApis.scala:519)
>  at 
> kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$convertedPartitionData$1$1$$anonfun$apply$4.apply(KafkaApis.scala:517)
>  at scala.Option.map(Option.scala:146)
>  at 
> kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$convertedPartitionData$1$1.apply(KafkaApis.scala:517)
>  at 
> kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$convertedPartitionData$1$1.apply(KafkaApis.scala:507)
>  at scala.Option.flatMap(Option.scala:171)
>  at 
> kafka.server.KafkaApis.kafka$server$KafkaApis$$convertedPartitionData$1(KafkaApis.scala:507)
>  at 
> kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$createResponse$2$1.apply(KafkaApis.scala:555)
>  at 
> kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$createResponse$2$1.apply(KafkaApis.scala:554)
>  at scala.collection.Iterator$class.foreach(Iterator.scala:891)
>  at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
>  at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
>  at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
>  at 
> kafka.server.KafkaApis.kafka$server$KafkaApis$$createResponse$2(KafkaApis.scala:554)
>  at 
> kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$fetchResponseCallback$1$1.apply(KafkaApis.scala:568)
>  at 
> kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$fetchResponseCallback$1$1.apply(KafkaApis.scala:568)
>  at 
> kafka.server.KafkaApis$$anonfun$sendResponseMaybeThrottle$1.apply$mcVI$sp(KafkaApis.scala:2033)
>  at 
> kafka.server.ClientRequestQuotaManager.maybeRecordAndThrottle(ClientRequestQuotaManager.scala:54)
>  at kafka.server.KafkaApis.sendResponseMaybeThrottle(KafkaApis.scala:2032)
>  at 
> kafka.server.KafkaApis.kafka$server$KafkaApis$$fetchResponseCallback$1(KafkaApis.scala:568)
>  at 
> kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$processResponseCallback$1$1.apply$mcVI$sp(KafkaApis.scala:587)
>  at 
> kafka.server.ClientQuotaManager.maybeRecordAndThrottle(ClientQuotaManager.scala:176)
>  at 
> kafka.server.KafkaApis.kafka$server$KafkaApis$$processResponseCallback$1(KafkaApis.scala:586)
>  at 
> kafka.server.KafkaApis$$anonfun$handleFetchRequest$3.apply(KafkaApis.scala:603)
>  at 
> kafka.server.KafkaApis$$anonfun$handleFetchRequest$3.apply(KafkaApis.scala:603)
>  at kafka.server.ReplicaManager.fetchMessages(ReplicaManager.scala:820)
>  at kafka.server.KafkaApis.handleFetchRequest(KafkaApis.scala:595)
>  at kafka.server.KafkaApis.handle(KafkaApis.scala:99)
>  at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:65)
>  at java.lang.Thread.run(Thread.java:748)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (KAFKA-8624) This is our server-side Kafka, version kafka_2.11-1.0.0.2. When a client sends messages to a topic, the following exception is thrown. Is the client version inconsistent with the server version?

2019-07-03 Thread 李志涛


[ 
https://issues.apache.org/jira/browse/KAFKA-8624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16877957#comment-16877957
 ] 

Zhitao Li(李志涛) edited comment on KAFKA-8624 at 7/3/19 4:24 PM:
---

I assume that you wrote your own producer-side code, and others wrote the 
consumer-side code.

That's why they were inconsistent.


was (Author: lizhitao):
I assume that you write your own production-side code, and others write 
consumer-side code.

> This is our server-side Kafka, version kafka_2.11-1.0.0.2. When a client sends messages to a topic, the following exception is thrown. Is the client version inconsistent with the server version?
> 
>
> Key: KAFKA-8624
> URL: https://issues.apache.org/jira/browse/KAFKA-8624
> Project: Kafka
>  Issue Type: Bug
>  Components: log
>Affects Versions: 1.0.0
>Reporter: CHARELS
>Priority: Major
>  Labels: pull-request-available
> Attachments: image-2019-07-04-00-18-15-781.png
>
>   Original Estimate: 12h
>  Remaining Estimate: 12h
>
> ERROR [KafkaApi-1004] Error when handling request 
> \{replica_id=-1,max_wait_time=1,min_bytes=0,topics=[{topic=ETE,partitions=[{partition=3,fetch_offset=119224671,max_bytes=1048576}]}]}
>  (kafka.server.KafkaApis)
> java.lang.IllegalArgumentException: Magic v0 does not support record headers
>  at 
> org.apache.kafka.common.record.MemoryRecordsBuilder.appendWithOffset(MemoryRecordsBuilder.java:403)
>  at 
> org.apache.kafka.common.record.MemoryRecordsBuilder.append(MemoryRecordsBuilder.java:586)
>  at 
> org.apache.kafka.common.record.AbstractRecords.convertRecordBatch(AbstractRecords.java:134)
>  at 
> org.apache.kafka.common.record.AbstractRecords.downConvert(AbstractRecords.java:109)
>  at 
> org.apache.kafka.common.record.FileRecords.downConvert(FileRecords.java:253)
>  at 
> kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$convertedPartitionData$1$1$$anonfun$apply$4.apply(KafkaApis.scala:519)
>  at 
> kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$convertedPartitionData$1$1$$anonfun$apply$4.apply(KafkaApis.scala:517)
>  at scala.Option.map(Option.scala:146)
>  at 
> kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$convertedPartitionData$1$1.apply(KafkaApis.scala:517)
>  at 
> kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$convertedPartitionData$1$1.apply(KafkaApis.scala:507)
>  at scala.Option.flatMap(Option.scala:171)
>  at 
> kafka.server.KafkaApis.kafka$server$KafkaApis$$convertedPartitionData$1(KafkaApis.scala:507)
>  at 
> kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$createResponse$2$1.apply(KafkaApis.scala:555)
>  at 
> kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$createResponse$2$1.apply(KafkaApis.scala:554)
>  at scala.collection.Iterator$class.foreach(Iterator.scala:891)
>  at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
>  at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
>  at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
>  at 
> kafka.server.KafkaApis.kafka$server$KafkaApis$$createResponse$2(KafkaApis.scala:554)
>  at 
> kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$fetchResponseCallback$1$1.apply(KafkaApis.scala:568)
>  at 
> kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$fetchResponseCallback$1$1.apply(KafkaApis.scala:568)
>  at 
> kafka.server.KafkaApis$$anonfun$sendResponseMaybeThrottle$1.apply$mcVI$sp(KafkaApis.scala:2033)
>  at 
> kafka.server.ClientRequestQuotaManager.maybeRecordAndThrottle(ClientRequestQuotaManager.scala:54)
>  at kafka.server.KafkaApis.sendResponseMaybeThrottle(KafkaApis.scala:2032)
>  at 
> kafka.server.KafkaApis.kafka$server$KafkaApis$$fetchResponseCallback$1(KafkaApis.scala:568)
>  at 
> kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$processResponseCallback$1$1.apply$mcVI$sp(KafkaApis.scala:587)
>  at 
> kafka.server.ClientQuotaManager.maybeRecordAndThrottle(ClientQuotaManager.scala:176)
>  at 
> kafka.server.KafkaApis.kafka$server$KafkaApis$$processResponseCallback$1(KafkaApis.scala:586)
>  at 
> kafka.server.KafkaApis$$anonfun$handleFetchRequest$3.apply(KafkaApis.scala:603)
>  at 
> kafka.server.KafkaApis$$anonfun$handleFetchRequest$3.apply(KafkaApis.scala:603)
>  at kafka.server.ReplicaManager.fetchMessages(ReplicaManager.scala:820)
>  at kafka.server.KafkaApis.handleFetchRequest(KafkaApis.scala:595)
>  at kafka.server.KafkaApis.handle(KafkaApis.scala:99)
>  at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:65)
>  at java.lang.Thread.run(Thread.java:748)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Issue Comment Deleted] (KAFKA-8624) This is our server-side Kafka, version kafka_2.11-1.0.0.2. When a client sends messages to a topic, the following exception is thrown. Is the client version inconsistent with the server version?

2019-07-03 Thread 李志涛


 [ 
https://issues.apache.org/jira/browse/KAFKA-8624?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhitao Li(李志涛) updated KAFKA-8624:
--
Comment: was deleted

(was: That's why they were inconsistent.)

> This is our server-side Kafka, version kafka_2.11-1.0.0.2. When a client sends messages to a topic, the following exception is thrown. Is the client version inconsistent with the server version?
> 
>
> Key: KAFKA-8624
> URL: https://issues.apache.org/jira/browse/KAFKA-8624
> Project: Kafka
>  Issue Type: Bug
>  Components: log
>Affects Versions: 1.0.0
>Reporter: CHARELS
>Priority: Major
>  Labels: pull-request-available
> Attachments: image-2019-07-04-00-18-15-781.png
>
>   Original Estimate: 12h
>  Remaining Estimate: 12h
>
> ERROR [KafkaApi-1004] Error when handling request 
> \{replica_id=-1,max_wait_time=1,min_bytes=0,topics=[{topic=ETE,partitions=[{partition=3,fetch_offset=119224671,max_bytes=1048576}]}]}
>  (kafka.server.KafkaApis)
> java.lang.IllegalArgumentException: Magic v0 does not support record headers
>  at 
> org.apache.kafka.common.record.MemoryRecordsBuilder.appendWithOffset(MemoryRecordsBuilder.java:403)
>  at 
> org.apache.kafka.common.record.MemoryRecordsBuilder.append(MemoryRecordsBuilder.java:586)
>  at 
> org.apache.kafka.common.record.AbstractRecords.convertRecordBatch(AbstractRecords.java:134)
>  at 
> org.apache.kafka.common.record.AbstractRecords.downConvert(AbstractRecords.java:109)
>  at 
> org.apache.kafka.common.record.FileRecords.downConvert(FileRecords.java:253)
>  at 
> kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$convertedPartitionData$1$1$$anonfun$apply$4.apply(KafkaApis.scala:519)
>  at 
> kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$convertedPartitionData$1$1$$anonfun$apply$4.apply(KafkaApis.scala:517)
>  at scala.Option.map(Option.scala:146)
>  at 
> kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$convertedPartitionData$1$1.apply(KafkaApis.scala:517)
>  at 
> kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$convertedPartitionData$1$1.apply(KafkaApis.scala:507)
>  at scala.Option.flatMap(Option.scala:171)
>  at 
> kafka.server.KafkaApis.kafka$server$KafkaApis$$convertedPartitionData$1(KafkaApis.scala:507)
>  at 
> kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$createResponse$2$1.apply(KafkaApis.scala:555)
>  at 
> kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$createResponse$2$1.apply(KafkaApis.scala:554)
>  at scala.collection.Iterator$class.foreach(Iterator.scala:891)
>  at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
>  at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
>  at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
>  at 
> kafka.server.KafkaApis.kafka$server$KafkaApis$$createResponse$2(KafkaApis.scala:554)
>  at 
> kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$fetchResponseCallback$1$1.apply(KafkaApis.scala:568)
>  at 
> kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$fetchResponseCallback$1$1.apply(KafkaApis.scala:568)
>  at 
> kafka.server.KafkaApis$$anonfun$sendResponseMaybeThrottle$1.apply$mcVI$sp(KafkaApis.scala:2033)
>  at 
> kafka.server.ClientRequestQuotaManager.maybeRecordAndThrottle(ClientRequestQuotaManager.scala:54)
>  at kafka.server.KafkaApis.sendResponseMaybeThrottle(KafkaApis.scala:2032)
>  at 
> kafka.server.KafkaApis.kafka$server$KafkaApis$$fetchResponseCallback$1(KafkaApis.scala:568)
>  at 
> kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$processResponseCallback$1$1.apply$mcVI$sp(KafkaApis.scala:587)
>  at 
> kafka.server.ClientQuotaManager.maybeRecordAndThrottle(ClientQuotaManager.scala:176)
>  at 
> kafka.server.KafkaApis.kafka$server$KafkaApis$$processResponseCallback$1(KafkaApis.scala:586)
>  at 
> kafka.server.KafkaApis$$anonfun$handleFetchRequest$3.apply(KafkaApis.scala:603)
>  at 
> kafka.server.KafkaApis$$anonfun$handleFetchRequest$3.apply(KafkaApis.scala:603)
>  at kafka.server.ReplicaManager.fetchMessages(ReplicaManager.scala:820)
>  at kafka.server.KafkaApis.handleFetchRequest(KafkaApis.scala:595)
>  at kafka.server.KafkaApis.handle(KafkaApis.scala:99)
>  at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:65)
>  at java.lang.Thread.run(Thread.java:748)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-8626) Group will fall into constant incremental rebalancing with a long non-responsive static member

2019-07-03 Thread Boyang Chen (JIRA)
Boyang Chen created KAFKA-8626:
--

 Summary: Group will fall into constant incremental rebalancing 
with a long non-responsive static member
 Key: KAFKA-8626
 URL: https://issues.apache.org/jira/browse/KAFKA-8626
 Project: Kafka
  Issue Type: Bug
Reporter: Boyang Chen
Assignee: Boyang Chen


Currently when a group rebalances, static members have up until the expiration 
of the rebalance timeout to rejoin. If they do not rejoin in time, then they 
are rejoined virtually by the coordinator; basically, the coordinator just uses 
the old subscription. This behavior may be a problem for cooperative 
reassignment. The issue is that the old subscription may contain a set of owned 
partitions. The assignor will respect the owned set of partitions, but that 
won't stop it from trying to move them to another consumer. In this case, we 
will set the NEED_REJOIN error code. The idea is that consumers observe this 
error, revoke any needed partitions, and immediately rejoin. But if the static 
member just continues using its old subscription, then we'll be stuck in the 
rebalance state until the static member comes back online, because the 
non-responsive static member won't give up its subscription.

Some ideas proposed by Jason:

1. Make revocation optional: basically, get rid of the internal REJOIN_NEEDED 
error code. Consumers only rebalance if they revoke partitions themselves or 
detect the group rebalancing. In this case, the static member would just 
decline to give up its partitions until it is back online.
2. Make the assignor aware of which members are active in the current 
rebalance. If a static member is not active, then the assignor can just not 
reassign any of its owned partitions. It might be a good idea to have this 
anyway, because rebalances are often used as a (clumsy) way to collect 
information from the group members. For example, when Connect rebalances a 
group, it is looking for consistency among the members on the config offset 
that they have read. If one member is just reporting old state, then this 
protocol won't work.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-8624) This is our server-side Kafka, version kafka_2.11-1.0.0.2. When a client sends messages to a topic, the following exception is thrown. Is the client version inconsistent with the server version?

2019-07-03 Thread 李志涛


[ 
https://issues.apache.org/jira/browse/KAFKA-8624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16877958#comment-16877958
 ] 

Zhitao Li(李志涛) commented on KAFKA-8624:
---

That's why they were inconsistent.

> This is our server-side Kafka, version kafka_2.11-1.0.0.2. When a client sends messages to a topic, the following exception is thrown. Is the client version inconsistent with the server version?
> 
>
> Key: KAFKA-8624
> URL: https://issues.apache.org/jira/browse/KAFKA-8624
> Project: Kafka
>  Issue Type: Bug
>  Components: log
>Affects Versions: 1.0.0
>Reporter: CHARELS
>Priority: Major
>  Labels: pull-request-available
> Attachments: image-2019-07-04-00-18-15-781.png
>
>   Original Estimate: 12h
>  Remaining Estimate: 12h
>
> ERROR [KafkaApi-1004] Error when handling request 
> \{replica_id=-1,max_wait_time=1,min_bytes=0,topics=[{topic=ETE,partitions=[{partition=3,fetch_offset=119224671,max_bytes=1048576}]}]}
>  (kafka.server.KafkaApis)
> java.lang.IllegalArgumentException: Magic v0 does not support record headers
>  at 
> org.apache.kafka.common.record.MemoryRecordsBuilder.appendWithOffset(MemoryRecordsBuilder.java:403)
>  at 
> org.apache.kafka.common.record.MemoryRecordsBuilder.append(MemoryRecordsBuilder.java:586)
>  at 
> org.apache.kafka.common.record.AbstractRecords.convertRecordBatch(AbstractRecords.java:134)
>  at 
> org.apache.kafka.common.record.AbstractRecords.downConvert(AbstractRecords.java:109)
>  at 
> org.apache.kafka.common.record.FileRecords.downConvert(FileRecords.java:253)
>  at 
> kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$convertedPartitionData$1$1$$anonfun$apply$4.apply(KafkaApis.scala:519)
>  at 
> kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$convertedPartitionData$1$1$$anonfun$apply$4.apply(KafkaApis.scala:517)
>  at scala.Option.map(Option.scala:146)
>  at 
> kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$convertedPartitionData$1$1.apply(KafkaApis.scala:517)
>  at 
> kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$convertedPartitionData$1$1.apply(KafkaApis.scala:507)
>  at scala.Option.flatMap(Option.scala:171)
>  at 
> kafka.server.KafkaApis.kafka$server$KafkaApis$$convertedPartitionData$1(KafkaApis.scala:507)
>  at 
> kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$createResponse$2$1.apply(KafkaApis.scala:555)
>  at 
> kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$createResponse$2$1.apply(KafkaApis.scala:554)
>  at scala.collection.Iterator$class.foreach(Iterator.scala:891)
>  at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
>  at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
>  at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
>  at 
> kafka.server.KafkaApis.kafka$server$KafkaApis$$createResponse$2(KafkaApis.scala:554)
>  at 
> kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$fetchResponseCallback$1$1.apply(KafkaApis.scala:568)
>  at 
> kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$fetchResponseCallback$1$1.apply(KafkaApis.scala:568)
>  at 
> kafka.server.KafkaApis$$anonfun$sendResponseMaybeThrottle$1.apply$mcVI$sp(KafkaApis.scala:2033)
>  at 
> kafka.server.ClientRequestQuotaManager.maybeRecordAndThrottle(ClientRequestQuotaManager.scala:54)
>  at kafka.server.KafkaApis.sendResponseMaybeThrottle(KafkaApis.scala:2032)
>  at 
> kafka.server.KafkaApis.kafka$server$KafkaApis$$fetchResponseCallback$1(KafkaApis.scala:568)
>  at 
> kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$processResponseCallback$1$1.apply$mcVI$sp(KafkaApis.scala:587)
>  at 
> kafka.server.ClientQuotaManager.maybeRecordAndThrottle(ClientQuotaManager.scala:176)
>  at 
> kafka.server.KafkaApis.kafka$server$KafkaApis$$processResponseCallback$1(KafkaApis.scala:586)
>  at 
> kafka.server.KafkaApis$$anonfun$handleFetchRequest$3.apply(KafkaApis.scala:603)
>  at 
> kafka.server.KafkaApis$$anonfun$handleFetchRequest$3.apply(KafkaApis.scala:603)
>  at kafka.server.ReplicaManager.fetchMessages(ReplicaManager.scala:820)
>  at kafka.server.KafkaApis.handleFetchRequest(KafkaApis.scala:595)
>  at kafka.server.KafkaApis.handle(KafkaApis.scala:99)
>  at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:65)
>  at java.lang.Thread.run(Thread.java:748)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-8624) The server-side Kafka version is kafka_2.11-1.0.0.2; when a client sends messages to a topic, the following exception is thrown. Is the client version inconsistent with the server version?

2019-07-03 Thread 李志涛


[ 
https://issues.apache.org/jira/browse/KAFKA-8624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16877957#comment-16877957
 ] 

Zhitao Li(李志涛) commented on KAFKA-8624:
---

I assume that you wrote the producer-side code yourselves, and that others wrote 
the consumer-side code.

> The server-side Kafka version is kafka_2.11-1.0.0.2; when a client sends messages to a topic, the following exception is thrown. Is the client version inconsistent with the server version?
> 
>
> Key: KAFKA-8624
> URL: https://issues.apache.org/jira/browse/KAFKA-8624
> Project: Kafka
>  Issue Type: Bug
>  Components: log
>Affects Versions: 1.0.0
>Reporter: CHARELS
>Priority: Major
>  Labels: pull-request-available
> Attachments: image-2019-07-04-00-18-15-781.png
>
>   Original Estimate: 12h
>  Remaining Estimate: 12h
>
> ERROR [KafkaApi-1004] Error when handling request 
> \{replica_id=-1,max_wait_time=1,min_bytes=0,topics=[{topic=ETE,partitions=[{partition=3,fetch_offset=119224671,max_bytes=1048576}]}]}
>  (kafka.server.KafkaApis)
> java.lang.IllegalArgumentException: Magic v0 does not support record headers
>  at 
> org.apache.kafka.common.record.MemoryRecordsBuilder.appendWithOffset(MemoryRecordsBuilder.java:403)
>  at 
> org.apache.kafka.common.record.MemoryRecordsBuilder.append(MemoryRecordsBuilder.java:586)
>  at 
> org.apache.kafka.common.record.AbstractRecords.convertRecordBatch(AbstractRecords.java:134)
>  at 
> org.apache.kafka.common.record.AbstractRecords.downConvert(AbstractRecords.java:109)
>  at 
> org.apache.kafka.common.record.FileRecords.downConvert(FileRecords.java:253)
>  at 
> kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$convertedPartitionData$1$1$$anonfun$apply$4.apply(KafkaApis.scala:519)
>  at 
> kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$convertedPartitionData$1$1$$anonfun$apply$4.apply(KafkaApis.scala:517)
>  at scala.Option.map(Option.scala:146)
>  at 
> kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$convertedPartitionData$1$1.apply(KafkaApis.scala:517)
>  at 
> kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$convertedPartitionData$1$1.apply(KafkaApis.scala:507)
>  at scala.Option.flatMap(Option.scala:171)
>  at 
> kafka.server.KafkaApis.kafka$server$KafkaApis$$convertedPartitionData$1(KafkaApis.scala:507)
>  at 
> kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$createResponse$2$1.apply(KafkaApis.scala:555)
>  at 
> kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$createResponse$2$1.apply(KafkaApis.scala:554)
>  at scala.collection.Iterator$class.foreach(Iterator.scala:891)
>  at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
>  at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
>  at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
>  at 
> kafka.server.KafkaApis.kafka$server$KafkaApis$$createResponse$2(KafkaApis.scala:554)
>  at 
> kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$fetchResponseCallback$1$1.apply(KafkaApis.scala:568)
>  at 
> kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$fetchResponseCallback$1$1.apply(KafkaApis.scala:568)
>  at 
> kafka.server.KafkaApis$$anonfun$sendResponseMaybeThrottle$1.apply$mcVI$sp(KafkaApis.scala:2033)
>  at 
> kafka.server.ClientRequestQuotaManager.maybeRecordAndThrottle(ClientRequestQuotaManager.scala:54)
>  at kafka.server.KafkaApis.sendResponseMaybeThrottle(KafkaApis.scala:2032)
>  at 
> kafka.server.KafkaApis.kafka$server$KafkaApis$$fetchResponseCallback$1(KafkaApis.scala:568)
>  at 
> kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$processResponseCallback$1$1.apply$mcVI$sp(KafkaApis.scala:587)
>  at 
> kafka.server.ClientQuotaManager.maybeRecordAndThrottle(ClientQuotaManager.scala:176)
>  at 
> kafka.server.KafkaApis.kafka$server$KafkaApis$$processResponseCallback$1(KafkaApis.scala:586)
>  at 
> kafka.server.KafkaApis$$anonfun$handleFetchRequest$3.apply(KafkaApis.scala:603)
>  at 
> kafka.server.KafkaApis$$anonfun$handleFetchRequest$3.apply(KafkaApis.scala:603)
>  at kafka.server.ReplicaManager.fetchMessages(ReplicaManager.scala:820)
>  at kafka.server.KafkaApis.handleFetchRequest(KafkaApis.scala:595)
>  at kafka.server.KafkaApis.handle(KafkaApis.scala:99)
>  at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:65)
>  at java.lang.Thread.run(Thread.java:748)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-8624) The server-side Kafka version is kafka_2.11-1.0.0.2; when a client sends messages to a topic, the following exception is thrown. Is the client version inconsistent with the server version?

2019-07-03 Thread 李志涛


[ 
https://issues.apache.org/jira/browse/KAFKA-8624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16877955#comment-16877955
 ] 

Zhitao Li(李志涛) commented on KAFKA-8624:
---

My analysis is that this should be caused by an old client version on the 
consumer side. Please see the image below:

!image-2019-07-04-00-18-15-781.png!

> The server-side Kafka version is kafka_2.11-1.0.0.2; when a client sends messages to a topic, the following exception is thrown. Is the client version inconsistent with the server version?
> 
>
> Key: KAFKA-8624
> URL: https://issues.apache.org/jira/browse/KAFKA-8624
> Project: Kafka
>  Issue Type: Bug
>  Components: log
>Affects Versions: 1.0.0
>Reporter: CHARELS
>Priority: Major
>  Labels: pull-request-available
> Attachments: image-2019-07-04-00-18-15-781.png
>
>   Original Estimate: 12h
>  Remaining Estimate: 12h
>
> ERROR [KafkaApi-1004] Error when handling request 
> \{replica_id=-1,max_wait_time=1,min_bytes=0,topics=[{topic=ETE,partitions=[{partition=3,fetch_offset=119224671,max_bytes=1048576}]}]}
>  (kafka.server.KafkaApis)
> java.lang.IllegalArgumentException: Magic v0 does not support record headers
>  at 
> org.apache.kafka.common.record.MemoryRecordsBuilder.appendWithOffset(MemoryRecordsBuilder.java:403)
>  at 
> org.apache.kafka.common.record.MemoryRecordsBuilder.append(MemoryRecordsBuilder.java:586)
>  at 
> org.apache.kafka.common.record.AbstractRecords.convertRecordBatch(AbstractRecords.java:134)
>  at 
> org.apache.kafka.common.record.AbstractRecords.downConvert(AbstractRecords.java:109)
>  at 
> org.apache.kafka.common.record.FileRecords.downConvert(FileRecords.java:253)
>  at 
> kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$convertedPartitionData$1$1$$anonfun$apply$4.apply(KafkaApis.scala:519)
>  at 
> kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$convertedPartitionData$1$1$$anonfun$apply$4.apply(KafkaApis.scala:517)
>  at scala.Option.map(Option.scala:146)
>  at 
> kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$convertedPartitionData$1$1.apply(KafkaApis.scala:517)
>  at 
> kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$convertedPartitionData$1$1.apply(KafkaApis.scala:507)
>  at scala.Option.flatMap(Option.scala:171)
>  at 
> kafka.server.KafkaApis.kafka$server$KafkaApis$$convertedPartitionData$1(KafkaApis.scala:507)
>  at 
> kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$createResponse$2$1.apply(KafkaApis.scala:555)
>  at 
> kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$createResponse$2$1.apply(KafkaApis.scala:554)
>  at scala.collection.Iterator$class.foreach(Iterator.scala:891)
>  at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
>  at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
>  at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
>  at 
> kafka.server.KafkaApis.kafka$server$KafkaApis$$createResponse$2(KafkaApis.scala:554)
>  at 
> kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$fetchResponseCallback$1$1.apply(KafkaApis.scala:568)
>  at 
> kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$fetchResponseCallback$1$1.apply(KafkaApis.scala:568)
>  at 
> kafka.server.KafkaApis$$anonfun$sendResponseMaybeThrottle$1.apply$mcVI$sp(KafkaApis.scala:2033)
>  at 
> kafka.server.ClientRequestQuotaManager.maybeRecordAndThrottle(ClientRequestQuotaManager.scala:54)
>  at kafka.server.KafkaApis.sendResponseMaybeThrottle(KafkaApis.scala:2032)
>  at 
> kafka.server.KafkaApis.kafka$server$KafkaApis$$fetchResponseCallback$1(KafkaApis.scala:568)
>  at 
> kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$processResponseCallback$1$1.apply$mcVI$sp(KafkaApis.scala:587)
>  at 
> kafka.server.ClientQuotaManager.maybeRecordAndThrottle(ClientQuotaManager.scala:176)
>  at 
> kafka.server.KafkaApis.kafka$server$KafkaApis$$processResponseCallback$1(KafkaApis.scala:586)
>  at 
> kafka.server.KafkaApis$$anonfun$handleFetchRequest$3.apply(KafkaApis.scala:603)
>  at 
> kafka.server.KafkaApis$$anonfun$handleFetchRequest$3.apply(KafkaApis.scala:603)
>  at kafka.server.ReplicaManager.fetchMessages(ReplicaManager.scala:820)
>  at kafka.server.KafkaApis.handleFetchRequest(KafkaApis.scala:595)
>  at kafka.server.KafkaApis.handle(KafkaApis.scala:99)
>  at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:65)
>  at java.lang.Thread.run(Thread.java:748)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-8624) The server-side Kafka version is kafka_2.11-1.0.0.2; when a client sends messages to a topic, the following exception is thrown. Is the client version inconsistent with the server version?

2019-07-03 Thread 李志涛


 [ 
https://issues.apache.org/jira/browse/KAFKA-8624?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhitao Li(李志涛) updated KAFKA-8624:
--
Attachment: image-2019-07-04-00-18-15-781.png

> The server-side Kafka version is kafka_2.11-1.0.0.2; when a client sends messages to a topic, the following exception is thrown. Is the client version inconsistent with the server version?
> 
>
> Key: KAFKA-8624
> URL: https://issues.apache.org/jira/browse/KAFKA-8624
> Project: Kafka
>  Issue Type: Bug
>  Components: log
>Affects Versions: 1.0.0
>Reporter: CHARELS
>Priority: Major
>  Labels: pull-request-available
> Attachments: image-2019-07-04-00-18-15-781.png
>
>   Original Estimate: 12h
>  Remaining Estimate: 12h
>
> ERROR [KafkaApi-1004] Error when handling request 
> \{replica_id=-1,max_wait_time=1,min_bytes=0,topics=[{topic=ETE,partitions=[{partition=3,fetch_offset=119224671,max_bytes=1048576}]}]}
>  (kafka.server.KafkaApis)
> java.lang.IllegalArgumentException: Magic v0 does not support record headers
>  at 
> org.apache.kafka.common.record.MemoryRecordsBuilder.appendWithOffset(MemoryRecordsBuilder.java:403)
>  at 
> org.apache.kafka.common.record.MemoryRecordsBuilder.append(MemoryRecordsBuilder.java:586)
>  at 
> org.apache.kafka.common.record.AbstractRecords.convertRecordBatch(AbstractRecords.java:134)
>  at 
> org.apache.kafka.common.record.AbstractRecords.downConvert(AbstractRecords.java:109)
>  at 
> org.apache.kafka.common.record.FileRecords.downConvert(FileRecords.java:253)
>  at 
> kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$convertedPartitionData$1$1$$anonfun$apply$4.apply(KafkaApis.scala:519)
>  at 
> kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$convertedPartitionData$1$1$$anonfun$apply$4.apply(KafkaApis.scala:517)
>  at scala.Option.map(Option.scala:146)
>  at 
> kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$convertedPartitionData$1$1.apply(KafkaApis.scala:517)
>  at 
> kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$convertedPartitionData$1$1.apply(KafkaApis.scala:507)
>  at scala.Option.flatMap(Option.scala:171)
>  at 
> kafka.server.KafkaApis.kafka$server$KafkaApis$$convertedPartitionData$1(KafkaApis.scala:507)
>  at 
> kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$createResponse$2$1.apply(KafkaApis.scala:555)
>  at 
> kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$createResponse$2$1.apply(KafkaApis.scala:554)
>  at scala.collection.Iterator$class.foreach(Iterator.scala:891)
>  at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
>  at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
>  at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
>  at 
> kafka.server.KafkaApis.kafka$server$KafkaApis$$createResponse$2(KafkaApis.scala:554)
>  at 
> kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$fetchResponseCallback$1$1.apply(KafkaApis.scala:568)
>  at 
> kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$fetchResponseCallback$1$1.apply(KafkaApis.scala:568)
>  at 
> kafka.server.KafkaApis$$anonfun$sendResponseMaybeThrottle$1.apply$mcVI$sp(KafkaApis.scala:2033)
>  at 
> kafka.server.ClientRequestQuotaManager.maybeRecordAndThrottle(ClientRequestQuotaManager.scala:54)
>  at kafka.server.KafkaApis.sendResponseMaybeThrottle(KafkaApis.scala:2032)
>  at 
> kafka.server.KafkaApis.kafka$server$KafkaApis$$fetchResponseCallback$1(KafkaApis.scala:568)
>  at 
> kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$processResponseCallback$1$1.apply$mcVI$sp(KafkaApis.scala:587)
>  at 
> kafka.server.ClientQuotaManager.maybeRecordAndThrottle(ClientQuotaManager.scala:176)
>  at 
> kafka.server.KafkaApis.kafka$server$KafkaApis$$processResponseCallback$1(KafkaApis.scala:586)
>  at 
> kafka.server.KafkaApis$$anonfun$handleFetchRequest$3.apply(KafkaApis.scala:603)
>  at 
> kafka.server.KafkaApis$$anonfun$handleFetchRequest$3.apply(KafkaApis.scala:603)
>  at kafka.server.ReplicaManager.fetchMessages(ReplicaManager.scala:820)
>  at kafka.server.KafkaApis.handleFetchRequest(KafkaApis.scala:595)
>  at kafka.server.KafkaApis.handle(KafkaApis.scala:99)
>  at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:65)
>  at java.lang.Thread.run(Thread.java:748)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (KAFKA-8624) The server-side Kafka version is kafka_2.11-1.0.0.2; when a client sends messages to a topic, the following exception is thrown. Is the client version inconsistent with the server version?

2019-07-03 Thread 李志涛


[ 
https://issues.apache.org/jira/browse/KAFKA-8624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16877928#comment-16877928
 ] 

Zhitao Li(李志涛) edited comment on KAFKA-8624 at 7/3/19 3:51 PM:
---

Although the client version is consistent with the server version, the server 
log shows that the client record format is clearly too old. Please look at the 
code below:

java class -> 'MemoryRecordsBuilder', method -> 'appendWithOffset'

```java
if (magic < RecordBatch.MAGIC_VALUE_V2 && headers != null && headers.length > 0)
    throw new IllegalArgumentException("Magic v" + magic + " does not support record headers");
```

The above code indicates the problem.
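For context, here is a minimal sketch of how such records come to exist, assuming a 0.11+ Java producer; the topic name, header key, and bootstrap address below are illustrative only, not taken from this issue. Record headers exist only in message format v2 (magic >= 2), so when the broker has to down-convert a batch containing headers to magic v0/v1 for an old fetch request, the guard above throws.

```java
import java.nio.charset.StandardCharsets;
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class HeaderProducerDemo {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            ProducerRecord<String, String> record = new ProducerRecord<>("ETE", "some payload");
            // Headers are only representable in message format v2; a broker that
            // must down-convert this batch to magic v0/v1 for an older consumer
            // fails with "Magic v0 does not support record headers".
            record.headers().add("trace-id", "abc123".getBytes(StandardCharsets.UTF_8));
            producer.send(record);
        }
    }
}
```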


was (Author: lizhitao):
Although the client version is consistent with the server version, the server 
log shows that the client record format is clearly too old. Please look at the 
code below:

java class -> 'MemoryRecordsBuilder', method -> ''

```java
if (magic < RecordBatch.MAGIC_VALUE_V2 && headers != null && headers.length > 0)
    throw new IllegalArgumentException("Magic v" + magic + " does not support record headers");
```

The above code indicates the problem.

> The server-side Kafka version is kafka_2.11-1.0.0.2; when a client sends messages to a topic, the following exception is thrown. Is the client version inconsistent with the server version?
> 
>
> Key: KAFKA-8624
> URL: https://issues.apache.org/jira/browse/KAFKA-8624
> Project: Kafka
>  Issue Type: Bug
>  Components: log
>Affects Versions: 1.0.0
>Reporter: CHARELS
>Priority: Major
>  Labels: pull-request-available
>   Original Estimate: 12h
>  Remaining Estimate: 12h
>
> ERROR [KafkaApi-1004] Error when handling request 
> \{replica_id=-1,max_wait_time=1,min_bytes=0,topics=[{topic=ETE,partitions=[{partition=3,fetch_offset=119224671,max_bytes=1048576}]}]}
>  (kafka.server.KafkaApis)
> java.lang.IllegalArgumentException: Magic v0 does not support record headers
>  at 
> org.apache.kafka.common.record.MemoryRecordsBuilder.appendWithOffset(MemoryRecordsBuilder.java:403)
>  at 
> org.apache.kafka.common.record.MemoryRecordsBuilder.append(MemoryRecordsBuilder.java:586)
>  at 
> org.apache.kafka.common.record.AbstractRecords.convertRecordBatch(AbstractRecords.java:134)
>  at 
> org.apache.kafka.common.record.AbstractRecords.downConvert(AbstractRecords.java:109)
>  at 
> org.apache.kafka.common.record.FileRecords.downConvert(FileRecords.java:253)
>  at 
> kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$convertedPartitionData$1$1$$anonfun$apply$4.apply(KafkaApis.scala:519)
>  at 
> kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$convertedPartitionData$1$1$$anonfun$apply$4.apply(KafkaApis.scala:517)
>  at scala.Option.map(Option.scala:146)
>  at 
> kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$convertedPartitionData$1$1.apply(KafkaApis.scala:517)
>  at 
> kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$convertedPartitionData$1$1.apply(KafkaApis.scala:507)
>  at scala.Option.flatMap(Option.scala:171)
>  at 
> kafka.server.KafkaApis.kafka$server$KafkaApis$$convertedPartitionData$1(KafkaApis.scala:507)
>  at 
> kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$createResponse$2$1.apply(KafkaApis.scala:555)
>  at 
> kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$createResponse$2$1.apply(KafkaApis.scala:554)
>  at scala.collection.Iterator$class.foreach(Iterator.scala:891)
>  at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
>  at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
>  at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
>  at 
> kafka.server.KafkaApis.kafka$server$KafkaApis$$createResponse$2(KafkaApis.scala:554)
>  at 
> kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$fetchResponseCallback$1$1.apply(KafkaApis.scala:568)
>  at 
> kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$fetchResponseCallback$1$1.apply(KafkaApis.scala:568)
>  at 
> kafka.server.KafkaApis$$anonfun$sendResponseMaybeThrottle$1.apply$mcVI$sp(KafkaApis.scala:2033)
>  at 
> kafka.server.ClientRequestQuotaManager.maybeRecordAndThrottle(ClientRequestQuotaManager.scala:54)
>  at kafka.server.KafkaApis.sendResponseMaybeThrottle(KafkaApis.scala:2032)
>  at 
> kafka.server.KafkaApis.kafka$server$KafkaApis$$fetchResponseCallback$1(KafkaApis.scala:568)
>  at 
> 

[jira] [Comment Edited] (KAFKA-8624) The server-side Kafka version is kafka_2.11-1.0.0.2; when a client sends messages to a topic, the following exception is thrown. Is the client version inconsistent with the server version?

2019-07-03 Thread 李志涛


[ 
https://issues.apache.org/jira/browse/KAFKA-8624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16877928#comment-16877928
 ] 

Zhitao Li(李志涛) edited comment on KAFKA-8624 at 7/3/19 3:50 PM:
---

Although the client version is consistent with the server version, the server 
log shows that the client record format is clearly too old. Please look at the 
code below:

java class -> 'MemoryRecordsBuilder', method -> ''

```java
if (magic < RecordBatch.MAGIC_VALUE_V2 && headers != null && headers.length > 0)
    throw new IllegalArgumentException("Magic v" + magic + " does not support record headers");
```

The above code indicates the problem.


was (Author: lizhitao):
Even if the client version is consistent with the server version, the server 
log shows that the client record format is clearly too old. Please look at the 
code below:

java class -> 'MemoryRecordsBuilder', method -> ''

```java
if (magic < RecordBatch.MAGIC_VALUE_V2 && headers != null && headers.length > 0)
    throw new IllegalArgumentException("Magic v" + magic + " does not support record headers");
```

The above code indicates the problem.

> The server-side Kafka version is kafka_2.11-1.0.0.2; when a client sends messages to a topic, the following exception is thrown. Is the client version inconsistent with the server version?
> 
>
> Key: KAFKA-8624
> URL: https://issues.apache.org/jira/browse/KAFKA-8624
> Project: Kafka
>  Issue Type: Bug
>  Components: log
>Affects Versions: 1.0.0
>Reporter: CHARELS
>Priority: Major
>  Labels: pull-request-available
>   Original Estimate: 12h
>  Remaining Estimate: 12h
>
> ERROR [KafkaApi-1004] Error when handling request 
> \{replica_id=-1,max_wait_time=1,min_bytes=0,topics=[{topic=ETE,partitions=[{partition=3,fetch_offset=119224671,max_bytes=1048576}]}]}
>  (kafka.server.KafkaApis)
> java.lang.IllegalArgumentException: Magic v0 does not support record headers
>  at 
> org.apache.kafka.common.record.MemoryRecordsBuilder.appendWithOffset(MemoryRecordsBuilder.java:403)
>  at 
> org.apache.kafka.common.record.MemoryRecordsBuilder.append(MemoryRecordsBuilder.java:586)
>  at 
> org.apache.kafka.common.record.AbstractRecords.convertRecordBatch(AbstractRecords.java:134)
>  at 
> org.apache.kafka.common.record.AbstractRecords.downConvert(AbstractRecords.java:109)
>  at 
> org.apache.kafka.common.record.FileRecords.downConvert(FileRecords.java:253)
>  at 
> kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$convertedPartitionData$1$1$$anonfun$apply$4.apply(KafkaApis.scala:519)
>  at 
> kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$convertedPartitionData$1$1$$anonfun$apply$4.apply(KafkaApis.scala:517)
>  at scala.Option.map(Option.scala:146)
>  at 
> kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$convertedPartitionData$1$1.apply(KafkaApis.scala:517)
>  at 
> kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$convertedPartitionData$1$1.apply(KafkaApis.scala:507)
>  at scala.Option.flatMap(Option.scala:171)
>  at 
> kafka.server.KafkaApis.kafka$server$KafkaApis$$convertedPartitionData$1(KafkaApis.scala:507)
>  at 
> kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$createResponse$2$1.apply(KafkaApis.scala:555)
>  at 
> kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$createResponse$2$1.apply(KafkaApis.scala:554)
>  at scala.collection.Iterator$class.foreach(Iterator.scala:891)
>  at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
>  at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
>  at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
>  at 
> kafka.server.KafkaApis.kafka$server$KafkaApis$$createResponse$2(KafkaApis.scala:554)
>  at 
> kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$fetchResponseCallback$1$1.apply(KafkaApis.scala:568)
>  at 
> kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$fetchResponseCallback$1$1.apply(KafkaApis.scala:568)
>  at 
> kafka.server.KafkaApis$$anonfun$sendResponseMaybeThrottle$1.apply$mcVI$sp(KafkaApis.scala:2033)
>  at 
> kafka.server.ClientRequestQuotaManager.maybeRecordAndThrottle(ClientRequestQuotaManager.scala:54)
>  at kafka.server.KafkaApis.sendResponseMaybeThrottle(KafkaApis.scala:2032)
>  at 
> kafka.server.KafkaApis.kafka$server$KafkaApis$$fetchResponseCallback$1(KafkaApis.scala:568)
>  at 
> 

[jira] [Comment Edited] (KAFKA-8624) The server-side Kafka version is kafka_2.11-1.0.0.2; when a client sends messages to a topic, the following exception is thrown. Is the client version inconsistent with the server version?

2019-07-03 Thread 李志涛


[ 
https://issues.apache.org/jira/browse/KAFKA-8624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16877928#comment-16877928
 ] 

Zhitao Li(李志涛) edited comment on KAFKA-8624 at 7/3/19 3:47 PM:
---

Even if the client version is consistent with the server version, the server 
log shows that the client record format is clearly too old. Please look at the 
code below:

java class -> 'MemoryRecordsBuilder', method -> ''

```java
if (magic < RecordBatch.MAGIC_VALUE_V2 && headers != null && headers.length > 0)
    throw new IllegalArgumentException("Magic v" + magic + " does not support record headers");
```

The above code indicates the problem.


was (Author: lizhitao):
Even if the client version is consistent with the server version, the server 
log shows that the client message format is clearly too old. Please look at the 
code below:

java class -> 'MemoryRecordsBuilder', method -> ''

```java
if (magic < RecordBatch.MAGIC_VALUE_V2 && headers != null && headers.length > 0)
    throw new IllegalArgumentException("Magic v" + magic + " does not support record headers");
```

The above code indicates the problem.

> The server-side Kafka version is kafka_2.11-1.0.0.2; when a client sends messages to a topic, the following exception is thrown. Is the client version inconsistent with the server version?
> 
>
> Key: KAFKA-8624
> URL: https://issues.apache.org/jira/browse/KAFKA-8624
> Project: Kafka
>  Issue Type: Bug
>  Components: log
>Affects Versions: 1.0.0
>Reporter: CHARELS
>Priority: Major
>  Labels: pull-request-available
>   Original Estimate: 12h
>  Remaining Estimate: 12h
>
> ERROR [KafkaApi-1004] Error when handling request 
> \{replica_id=-1,max_wait_time=1,min_bytes=0,topics=[{topic=ETE,partitions=[{partition=3,fetch_offset=119224671,max_bytes=1048576}]}]}
>  (kafka.server.KafkaApis)
> java.lang.IllegalArgumentException: Magic v0 does not support record headers
>  at 
> org.apache.kafka.common.record.MemoryRecordsBuilder.appendWithOffset(MemoryRecordsBuilder.java:403)
>  at 
> org.apache.kafka.common.record.MemoryRecordsBuilder.append(MemoryRecordsBuilder.java:586)
>  at 
> org.apache.kafka.common.record.AbstractRecords.convertRecordBatch(AbstractRecords.java:134)
>  at 
> org.apache.kafka.common.record.AbstractRecords.downConvert(AbstractRecords.java:109)
>  at 
> org.apache.kafka.common.record.FileRecords.downConvert(FileRecords.java:253)
>  at 
> kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$convertedPartitionData$1$1$$anonfun$apply$4.apply(KafkaApis.scala:519)
>  at 
> kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$convertedPartitionData$1$1$$anonfun$apply$4.apply(KafkaApis.scala:517)
>  at scala.Option.map(Option.scala:146)
>  at 
> kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$convertedPartitionData$1$1.apply(KafkaApis.scala:517)
>  at 
> kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$convertedPartitionData$1$1.apply(KafkaApis.scala:507)
>  at scala.Option.flatMap(Option.scala:171)
>  at 
> kafka.server.KafkaApis.kafka$server$KafkaApis$$convertedPartitionData$1(KafkaApis.scala:507)
>  at 
> kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$createResponse$2$1.apply(KafkaApis.scala:555)
>  at 
> kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$createResponse$2$1.apply(KafkaApis.scala:554)
>  at scala.collection.Iterator$class.foreach(Iterator.scala:891)
>  at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
>  at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
>  at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
>  at 
> kafka.server.KafkaApis.kafka$server$KafkaApis$$createResponse$2(KafkaApis.scala:554)
>  at 
> kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$fetchResponseCallback$1$1.apply(KafkaApis.scala:568)
>  at 
> kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$fetchResponseCallback$1$1.apply(KafkaApis.scala:568)
>  at 
> kafka.server.KafkaApis$$anonfun$sendResponseMaybeThrottle$1.apply$mcVI$sp(KafkaApis.scala:2033)
>  at 
> kafka.server.ClientRequestQuotaManager.maybeRecordAndThrottle(ClientRequestQuotaManager.scala:54)
>  at kafka.server.KafkaApis.sendResponseMaybeThrottle(KafkaApis.scala:2032)
>  at 
> kafka.server.KafkaApis.kafka$server$KafkaApis$$fetchResponseCallback$1(KafkaApis.scala:568)
>  at 
> 

[jira] [Commented] (KAFKA-8624) The server-side Kafka version is kafka_2.11-1.0.0.2; when a client sends messages to a topic, the following exception is thrown. Is the client version inconsistent with the server version?

2019-07-03 Thread 李志涛


[ 
https://issues.apache.org/jira/browse/KAFKA-8624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16877928#comment-16877928
 ] 

Zhitao Li(李志涛) commented on KAFKA-8624:
---

Even if the client version is consistent with the server version, the server 
log shows that the client message format is clearly too old. Please look at the 
code below:

java class -> 'MemoryRecordsBuilder', method -> ''

```java
if (magic < RecordBatch.MAGIC_VALUE_V2 && headers != null && headers.length > 0)
    throw new IllegalArgumentException("Magic v" + magic + " does not support record headers");
```

The above code indicates the problem.

> The server-side Kafka version is kafka_2.11-1.0.0.2; when a client sends messages to a topic, the following exception is thrown. Is the client version inconsistent with the server version?
> 
>
> Key: KAFKA-8624
> URL: https://issues.apache.org/jira/browse/KAFKA-8624
> Project: Kafka
>  Issue Type: Bug
>  Components: log
>Affects Versions: 1.0.0
>Reporter: CHARELS
>Priority: Major
>  Labels: pull-request-available
>   Original Estimate: 12h
>  Remaining Estimate: 12h
>
> ERROR [KafkaApi-1004] Error when handling request 
> \{replica_id=-1,max_wait_time=1,min_bytes=0,topics=[{topic=ETE,partitions=[{partition=3,fetch_offset=119224671,max_bytes=1048576}]}]}
>  (kafka.server.KafkaApis)
> java.lang.IllegalArgumentException: Magic v0 does not support record headers
>  at 
> org.apache.kafka.common.record.MemoryRecordsBuilder.appendWithOffset(MemoryRecordsBuilder.java:403)
>  at 
> org.apache.kafka.common.record.MemoryRecordsBuilder.append(MemoryRecordsBuilder.java:586)
>  at 
> org.apache.kafka.common.record.AbstractRecords.convertRecordBatch(AbstractRecords.java:134)
>  at 
> org.apache.kafka.common.record.AbstractRecords.downConvert(AbstractRecords.java:109)
>  at 
> org.apache.kafka.common.record.FileRecords.downConvert(FileRecords.java:253)
>  at 
> kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$convertedPartitionData$1$1$$anonfun$apply$4.apply(KafkaApis.scala:519)
>  at 
> kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$convertedPartitionData$1$1$$anonfun$apply$4.apply(KafkaApis.scala:517)
>  at scala.Option.map(Option.scala:146)
>  at 
> kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$convertedPartitionData$1$1.apply(KafkaApis.scala:517)
>  at 
> kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$convertedPartitionData$1$1.apply(KafkaApis.scala:507)
>  at scala.Option.flatMap(Option.scala:171)
>  at 
> kafka.server.KafkaApis.kafka$server$KafkaApis$$convertedPartitionData$1(KafkaApis.scala:507)
>  at 
> kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$createResponse$2$1.apply(KafkaApis.scala:555)
>  at 
> kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$createResponse$2$1.apply(KafkaApis.scala:554)
>  at scala.collection.Iterator$class.foreach(Iterator.scala:891)
>  at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
>  at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
>  at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
>  at 
> kafka.server.KafkaApis.kafka$server$KafkaApis$$createResponse$2(KafkaApis.scala:554)
>  at 
> kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$fetchResponseCallback$1$1.apply(KafkaApis.scala:568)
>  at 
> kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$fetchResponseCallback$1$1.apply(KafkaApis.scala:568)
>  at 
> kafka.server.KafkaApis$$anonfun$sendResponseMaybeThrottle$1.apply$mcVI$sp(KafkaApis.scala:2033)
>  at 
> kafka.server.ClientRequestQuotaManager.maybeRecordAndThrottle(ClientRequestQuotaManager.scala:54)
>  at kafka.server.KafkaApis.sendResponseMaybeThrottle(KafkaApis.scala:2032)
>  at 
> kafka.server.KafkaApis.kafka$server$KafkaApis$$fetchResponseCallback$1(KafkaApis.scala:568)
>  at 
> kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$processResponseCallback$1$1.apply$mcVI$sp(KafkaApis.scala:587)
>  at 
> kafka.server.ClientQuotaManager.maybeRecordAndThrottle(ClientQuotaManager.scala:176)
>  at 
> kafka.server.KafkaApis.kafka$server$KafkaApis$$processResponseCallback$1(KafkaApis.scala:586)
>  at 
> kafka.server.KafkaApis$$anonfun$handleFetchRequest$3.apply(KafkaApis.scala:603)
>  at 
> kafka.server.KafkaApis$$anonfun$handleFetchRequest$3.apply(KafkaApis.scala:603)
>  at kafka.server.ReplicaManager.fetchMessages(ReplicaManager.scala:820)
>  at kafka.server.KafkaApis.handleFetchRequest(KafkaApis.scala:595)
>  at kafka.server.KafkaApis.handle(KafkaApis.scala:99)
>  at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:65)
>  at java.lang.Thread.run(Thread.java:748)



--
This message was 

[jira] [Updated] (KAFKA-8468) AdminClient.deleteTopics doesn't wait until topic is deleted

2019-07-03 Thread Gabor Somogyi (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8468?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Somogyi updated KAFKA-8468:
-
Affects Version/s: 2.3.0

> AdminClient.deleteTopics doesn't wait until topic is deleted
> 
>
> Key: KAFKA-8468
> URL: https://issues.apache.org/jira/browse/KAFKA-8468
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 2.2.0, 2.3.0, 2.2.1
>Reporter: Gabor Somogyi
>Priority: Major
>
> Please see the example app to reproduce the issue: 
> https://github.com/gaborgsomogyi/kafka-topic-stress
> ZKUtils has been deprecated since Kafka 2.0.0, but there is no real 
> alternative:
> * deleteTopics doesn't wait until the topic is actually deleted
> * ZookeeperClient has private[kafka] visibility
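A minimal sketch of a workaround for the first point, assuming a 2.x AdminClient; the topic name and bootstrap address are placeholders. deleteTopics only initiates deletion, so the caller can poll listTopics until the topic actually disappears.

{code:java}
import java.util.Collections;
import java.util.Properties;
import java.util.Set;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;

public class DeleteTopicAndWait {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            String topic = "my-test-topic"; // placeholder topic name
            // deleteTopics returns once the controller has accepted the
            // deletion; the topic may still be visible for a while.
            admin.deleteTopics(Collections.singleton(topic)).all().get();
            // Poll until the topic no longer shows up in listTopics.
            Set<String> names = admin.listTopics().names().get();
            while (names.contains(topic)) {
                Thread.sleep(500);
                names = admin.listTopics().names().get();
            }
        }
    }
}
{code}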



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-8625) intra broker data balance stuck

2019-07-03 Thread Ryne Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8625?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ryne Yang updated KAFKA-8625:
-
Description: 
When we used Kafka Cruise Control to invoke Kafka's new intra-broker disk 
balance feature ([feature 
proposal|https://cwiki.apache.org/confluence/display/KAFKA/KIP-113%3A+Support+replicas+movement+between+log+directories#KIP-113:Supportreplicasmovementbetweenlogdirectories-1)Howtomovereplicabetweenlogdirectoriesonthesamebroker]), 
it did a great job, but the process seems to get stuck at the last mile.

We stopped seeing further movements, which should mean the move is done, and our 
monitoring shows well-balanced results, but some log dirs are stuck in a moving 
state, as in the example below:
{code:java}
{"partition":"LOGSTASH5-4","size":0,"offsetLag":123189442,"isFuture":true}
{code}
There are a handful of such partitions on each broker and they seem to be 
random.

We have waited for days and they don't go away; however, we haven't tried 
restarting the controller broker yet.

Does anyone know how to solve this and, more importantly, why it happened?

So far we have only tried version 1.1.1; we don't know whether this is fixed in 
a later version.

  was:
When we used Kafka Cruise Control to invoke Kafka's new intra-broker disk 
balance feature ([feature 
proposal|https://cwiki.apache.org/confluence/display/KAFKA/KIP-113%3A+Support+replicas+movement+between+log+directories#KIP-113:Supportreplicasmovementbetweenlogdirectories-1)Howtomovereplicabetweenlogdirectoriesonthesamebroker]), 
it did a great job, but the process seems to get stuck at the last mile.

We stopped seeing further movements, which should mean the move is done, and our 
monitoring shows well-balanced results, but some log dirs are stuck in a moving 
state, as in the example below:
{code:java}
{"partition":"LOGSTASH5-4","size":0,"offsetLag":123189442,"isFuture":true}
{code}
There are a handful of such partitions on each broker and they seem to be 
random.

We have waited for days and they don't go away; however, we haven't tried 
restarting the controller broker yet.

Does anyone know how to solve this and, more importantly, why it happened?

So far we have only tried version 1.1.1; we don't know whether this is fixed in 
a later version.


> intra broker data balance stuck
> ---
>
> Key: KAFKA-8625
> URL: https://issues.apache.org/jira/browse/KAFKA-8625
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 1.1.1
> Environment: Linux 2.6.32-431.5.1.el6.x86_64 #1 SMP x86_64 x86_64 
> x86_64 GNU/Linux
>Reporter: Ryne Yang
>Priority: Major
>
> When we used Kafka Cruise Control to invoke Kafka's new intra-broker disk 
> balance feature ([feature 
> proposal|https://cwiki.apache.org/confluence/display/KAFKA/KIP-113%3A+Support+replicas+movement+between+log+directories#KIP-113:Supportreplicasmovementbetweenlogdirectories-1)Howtomovereplicabetweenlogdirectoriesonthesamebroker]), 
> it did a great job, but the process seems to get stuck at the last mile.
> We stopped seeing further movements, which should mean the move is done, and 
> our monitoring shows well-balanced results, but some log dirs are stuck in a 
> moving state, as in the example below:
> {code:java}
> {"partition":"LOGSTASH5-4","size":0,"offsetLag":123189442,"isFuture":true}
> {code}
> There are a handful of such partitions on each broker and they seem to be 
> random.
> We have waited for days and they don't go away; however, we haven't tried 
> restarting the controller broker yet.
> Does anyone know how to solve this and, more importantly, why it happened?
> So far we have only tried version 1.1.1; we don't know whether this is fixed 
> in a later version.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-8625) intra broker data balance stuck

2019-07-03 Thread Ryne Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8625?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ryne Yang updated KAFKA-8625:
-
Description: 
When we used Kafka Cruise Control to invoke Kafka's new intra-broker disk 
balance feature ([feature 
proposal|https://cwiki.apache.org/confluence/display/KAFKA/KIP-113%3A+Support+replicas+movement+between+log+directories#KIP-113:Supportreplicasmovementbetweenlogdirectories-1)Howtomovereplicabetweenlogdirectoriesonthesamebroker]), 
it did a great job, but the process seems to get stuck at the last mile.

We stopped seeing further movements, which should mean the move is done, and our 
monitoring shows well-balanced results, but some log dirs are stuck in a moving 
state, as in the example below:
{code:java}
{"partition":"LOGSTASH5-4","size":0,"offsetLag":123189442,"isFuture":true}
{code}
There are a handful of such partitions on each broker and they seem to be 
random.

We have waited for days and they don't go away; however, we haven't tried 
restarting the controller broker yet.

Does anyone know how to solve this and, more importantly, why it happened?

So far we have only tried version 1.1.1; we don't know whether this is fixed in 
a later version.

  was:
When we used Kafka Cruise Control to invoke Kafka's new intra-broker disk 
balance feature, it did a great job, but the process seems to get stuck at the 
last mile.

We stopped seeing further movements, which should mean the move is done, and our 
monitoring shows well-balanced results, but some log dirs are stuck in a moving 
state, as in the example below:
{code:java}
{"partition":"LOGSTASH5-4","size":0,"offsetLag":123189442,"isFuture":true}
{code}
There are a handful of such partitions on each broker and they seem to be 
random.

We have waited for days and they don't go away; however, we haven't tried 
restarting the controller broker yet.

Does anyone know how to solve this and, more importantly, why it happened?

So far we have only tried version 1.1.1; we don't know whether this is fixed in 
a later version.


> intra broker data balance stuck
> ---
>
> Key: KAFKA-8625
> URL: https://issues.apache.org/jira/browse/KAFKA-8625
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 1.1.1
> Environment: Linux 2.6.32-431.5.1.el6.x86_64 #1 SMP x86_64 x86_64 
> x86_64 GNU/Linux
>Reporter: Ryne Yang
>Priority: Major
>
> When we used Kafka Cruise Control to invoke Kafka's new intra-broker disk 
> balance feature ([feature 
> proposal|https://cwiki.apache.org/confluence/display/KAFKA/KIP-113%3A+Support+replicas+movement+between+log+directories#KIP-113:Supportreplicasmovementbetweenlogdirectories-1)Howtomovereplicabetweenlogdirectoriesonthesamebroker]), 
> it did a great job, but the process seems to get stuck at the last mile.
> We stopped seeing further movements, which should mean the move is done, and 
> our monitoring shows well-balanced results, but some log dirs are stuck in a 
> moving state, as in the example below:
> {code:java}
> {"partition":"LOGSTASH5-4","size":0,"offsetLag":123189442,"isFuture":true}
> {code}
> There are a handful of such partitions on each broker and they seem to be 
> random.
> We have waited for days and they don't go away; however, we haven't tried 
> restarting the controller broker yet.
> Does anyone know how to solve this and, more importantly, why it happened?
> So far we have only tried version 1.1.1; we don't know whether this is fixed 
> in a later version.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-8625) intra broker data balance stuck

2019-07-03 Thread Ryne Yang (JIRA)
Ryne Yang created KAFKA-8625:


 Summary: intra broker data balance stuck
 Key: KAFKA-8625
 URL: https://issues.apache.org/jira/browse/KAFKA-8625
 Project: Kafka
  Issue Type: Bug
Affects Versions: 1.1.1
 Environment: Linux 2.6.32-431.5.1.el6.x86_64 #1 SMP x86_64 x86_64 
x86_64 GNU/Linux
Reporter: Ryne Yang


When we used Kafka Cruise Control to invoke Kafka's new intra-broker disk 
balance feature, it did a great job, but the process seems to get stuck at the 
last mile.

We stopped seeing further movements, which should mean the move is done, and our 
monitoring shows well-balanced results, but some log dirs are stuck in a moving 
state, as in the example below:
{code:java}
{"partition":"LOGSTASH5-4","size":0,"offsetLag":123189442,"isFuture":true}
{code}
There are a handful of such partitions on each broker and they seem to be 
random.

We have waited for days and they don't go away; however, we haven't tried 
restarting the controller broker yet.

Does anyone know how to solve this and, more importantly, why it happened?

So far we have only tried version 1.1.1; we don't know whether this is fixed in 
a later version.
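A replica reported with "isFuture":true is the future copy created for the intra-broker move; the move completes only once that future log dir is promoted and disappears. One way to inspect a specific stuck replica is the KIP-113 AdminClient API, available since 1.1. A minimal sketch; the broker id, topic, partition, and bootstrap address are placeholders:

{code:java}
import java.util.Collections;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.DescribeReplicaLogDirsResult.ReplicaLogDirInfo;
import org.apache.kafka.common.TopicPartitionReplica;

public class CheckFutureReplica {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            // Placeholder replica: topic LOGSTASH5, partition 4, broker 1001.
            TopicPartitionReplica replica = new TopicPartitionReplica("LOGSTASH5", 4, 1001);
            Map<TopicPartitionReplica, ReplicaLogDirInfo> infos =
                    admin.describeReplicaLogDirs(Collections.singleton(replica)).all().get();
            ReplicaLogDirInfo info = infos.get(replica);
            // A non-null future log dir means the intra-broker move is still pending.
            System.out.println("current dir: " + info.getCurrentReplicaLogDir()
                    + ", future dir: " + info.getFutureReplicaLogDir()
                    + ", future offset lag: " + info.getFutureReplicaOffsetLag());
        }
    }
}
{code}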



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-1194) The kafka broker cannot delete the old log files after the configured time

2019-07-03 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-1194?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16877731#comment-16877731
 ] 

ASF GitHub Bot commented on KAFKA-1194:
---

manmedia commented on pull request #3838: [KAFKA-1194 Invokes unmap if on 
Windows OS]
URL: https://github.com/apache/kafka/pull/3838
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> The kafka broker cannot delete the old log files after the configured time
> --
>
> Key: KAFKA-1194
> URL: https://issues.apache.org/jira/browse/KAFKA-1194
> Project: Kafka
>  Issue Type: Bug
>  Components: log
>Affects Versions: 0.10.0.0, 0.11.0.0, 1.0.0
> Environment: window
>Reporter: Tao Qin
>Priority: Critical
>  Labels: features, patch, windows
> Attachments: KAFKA-1194.patch, RetentionExpiredWindows.txt, 
> Untitled.jpg, image-2018-09-12-14-25-52-632.png, 
> image-2018-11-26-10-18-59-381.png, kafka-1194-v1.patch, kafka-1194-v2.patch, 
> kafka-bombarder.7z, screenshot-1.png
>
>   Original Estimate: 72h
>  Remaining Estimate: 72h
>
> We tested this in a Windows environment and set log.retention.hours to 24 
> hours:
> # The minimum age of a log file to be eligible for deletion
> log.retention.hours=24
> After several days, the Kafka broker still could not delete the old log files, 
> and we got the following exceptions:
> [2013-12-19 01:57:38,528] ERROR Uncaught exception in scheduled task 
> 'kafka-log-retention' (kafka.utils.KafkaScheduler)
> kafka.common.KafkaStorageException: Failed to change the log file suffix from 
>  to .deleted for log segment 1516723
>  at kafka.log.LogSegment.changeFileSuffixes(LogSegment.scala:249)
>  at kafka.log.Log.kafka$log$Log$$asyncDeleteSegment(Log.scala:638)
>  at kafka.log.Log.kafka$log$Log$$deleteSegment(Log.scala:629)
>  at kafka.log.Log$$anonfun$deleteOldSegments$1.apply(Log.scala:418)
>  at kafka.log.Log$$anonfun$deleteOldSegments$1.apply(Log.scala:418)
>  at 
> scala.collection.LinearSeqOptimized$class.foreach(LinearSeqOptimized.scala:59)
>  at scala.collection.immutable.List.foreach(List.scala:76)
>  at kafka.log.Log.deleteOldSegments(Log.scala:418)
>  at 
> kafka.log.LogManager.kafka$log$LogManager$$cleanupExpiredSegments(LogManager.scala:284)
>  at 
> kafka.log.LogManager$$anonfun$cleanupLogs$3.apply(LogManager.scala:316)
>  at 
> kafka.log.LogManager$$anonfun$cleanupLogs$3.apply(LogManager.scala:314)
>  at 
> scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:743)
>  at scala.collection.Iterator$class.foreach(Iterator.scala:772)
>  at 
> scala.collection.JavaConversions$JIteratorWrapper.foreach(JavaConversions.scala:573)
>  at scala.collection.IterableLike$class.foreach(IterableLike.scala:73)
>  at 
> scala.collection.JavaConversions$JListWrapper.foreach(JavaConversions.scala:615)
>  at 
> scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:742)
>  at kafka.log.LogManager.cleanupLogs(LogManager.scala:314)
>  at 
> kafka.log.LogManager$$anonfun$startup$1.apply$mcV$sp(LogManager.scala:143)
>  at kafka.utils.KafkaScheduler$$anon$1.run(KafkaScheduler.scala:100)
>  at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>  at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:304)
>  at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178)
>  at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>  at java.lang.Thread.run(Thread.java:724)
> I think this error happens because Kafka tries to rename the log file while it 
> is still open, so we should close the file before renaming it.
> The index file uses a special data structure, the MappedByteBuffer. The Javadoc 
> describes it as:
> A mapped byte buffer and the file mapping that it represents remain valid 
> until the buffer itself is garbage-collected.
> Fortunately, I found a forceUnmap function in the Kafka code; perhaps it can 
> be used to free the MappedByteBuffer.
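To illustrate the failure mode described above, here is a minimal standalone sketch (not Kafka code) of why the rename fails on Windows while a MappedByteBuffer is still reachable; the file names are arbitrary:

{code:java}
import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class MmapRenameDemo {
    public static void main(String[] args) throws Exception {
        Path index = Paths.get("segment.index"); // arbitrary file name
        RandomAccessFile raf = new RandomAccessFile(index.toFile(), "rw");
        raf.setLength(1024);
        MappedByteBuffer buffer =
                raf.getChannel().map(FileChannel.MapMode.READ_WRITE, 0, 1024);
        raf.close(); // the channel is closed, but the mapping remains valid

        // On Windows this move typically throws AccessDeniedException: the OS
        // keeps the file locked until the mapping is released, which happens
        // only when `buffer` is garbage-collected (or explicitly unmapped).
        Files.move(index, Paths.get("segment.index.deleted"));

        System.out.println(buffer.capacity()); // keep the mapping reachable
    }
}
{code}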



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

[jira] [Commented] (KAFKA-7025) Android client support

2019-07-03 Thread Leif Yu (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16877667#comment-16877667
 ] 

Leif Yu commented on KAFKA-7025:


Is there any plan to fix this issue?

> Android client support
> --
>
> Key: KAFKA-7025
> URL: https://issues.apache.org/jira/browse/KAFKA-7025
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 1.0.1
>Reporter: Martin Vysny
>Priority: Major
>
> Adding kafka-clients:1.0.0 (also 1.0.1 and 1.1.0) to an Android project makes 
> the compilation fail: com.android.tools.r8.ApiLevelException: 
> MethodHandle.invoke and MethodHandle.invokeExact are only supported starting 
> with Android O (--min-api 26)
>  
> Would it be possible to make kafka-clients backward compatible with a 
> reasonable Android API level (say, 4.4.x), please?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-7025) Android client support

2019-07-03 Thread gxshao (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16877661#comment-16877661
 ] 

gxshao commented on KAFKA-7025:
---

Vote !!!

> Android client support
> --
>
> Key: KAFKA-7025
> URL: https://issues.apache.org/jira/browse/KAFKA-7025
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 1.0.1
>Reporter: Martin Vysny
>Priority: Major
>
> Adding kafka-clients:1.0.0 (also 1.0.1 and 1.1.0) to an Android project makes 
> the compilation fail: com.android.tools.r8.ApiLevelException: 
> MethodHandle.invoke and MethodHandle.invokeExact are only supported starting 
> with Android O (--min-api 26)
>  
> Would it be possible to make kafka-clients backward compatible with a 
> reasonable Android API level (say, 4.4.x), please?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (KAFKA-8624) The server-side Kafka version is kafka_2.11-1.0.0.2; when a client sends messages to a topic, the following exception is thrown. Is the client version inconsistent with the server version?

2019-07-03 Thread 李志涛


[ 
https://issues.apache.org/jira/browse/KAFKA-8624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16877607#comment-16877607
 ] 

Zhitao Li(李志涛) edited comment on KAFKA-8624 at 7/3/19 8:02 AM:
---

I would like to answer your questions. I probably know what the question is. I 
need to take the time to look at the code and verify it. I'm a little busy at 
the moment. How about replying to you later?


was (Author: lizhitao):
I would like to answer your question. I probably know what the question is. I 
need to take the time to look at the code and verify it. I'm a little busy at 
the moment. How about replying to you later?

> The server-side Kafka version is kafka_2.11-1.0.0.2; when a client sends messages to a topic, the following exception is thrown. Is the client version inconsistent with the server version?
> 
>
> Key: KAFKA-8624
> URL: https://issues.apache.org/jira/browse/KAFKA-8624
> Project: Kafka
>  Issue Type: Bug
>  Components: log
>Affects Versions: 1.0.0
>Reporter: CHARELS
>Priority: Major
>  Labels: pull-request-available
>   Original Estimate: 12h
>  Remaining Estimate: 12h
>
> ERROR [KafkaApi-1004] Error when handling request 
> \{replica_id=-1,max_wait_time=1,min_bytes=0,topics=[{topic=ETE,partitions=[{partition=3,fetch_offset=119224671,max_bytes=1048576}]}]}
>  (kafka.server.KafkaApis)
> java.lang.IllegalArgumentException: Magic v0 does not support record headers
>  at 
> org.apache.kafka.common.record.MemoryRecordsBuilder.appendWithOffset(MemoryRecordsBuilder.java:403)
>  at 
> org.apache.kafka.common.record.MemoryRecordsBuilder.append(MemoryRecordsBuilder.java:586)
>  at 
> org.apache.kafka.common.record.AbstractRecords.convertRecordBatch(AbstractRecords.java:134)
>  at 
> org.apache.kafka.common.record.AbstractRecords.downConvert(AbstractRecords.java:109)
>  at 
> org.apache.kafka.common.record.FileRecords.downConvert(FileRecords.java:253)
>  at 
> kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$convertedPartitionData$1$1$$anonfun$apply$4.apply(KafkaApis.scala:519)
>  at 
> kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$convertedPartitionData$1$1$$anonfun$apply$4.apply(KafkaApis.scala:517)
>  at scala.Option.map(Option.scala:146)
>  at 
> kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$convertedPartitionData$1$1.apply(KafkaApis.scala:517)
>  at 
> kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$convertedPartitionData$1$1.apply(KafkaApis.scala:507)
>  at scala.Option.flatMap(Option.scala:171)
>  at 
> kafka.server.KafkaApis.kafka$server$KafkaApis$$convertedPartitionData$1(KafkaApis.scala:507)
>  at 
> kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$createResponse$2$1.apply(KafkaApis.scala:555)
>  at 
> kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$createResponse$2$1.apply(KafkaApis.scala:554)
>  at scala.collection.Iterator$class.foreach(Iterator.scala:891)
>  at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
>  at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
>  at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
>  at 
> kafka.server.KafkaApis.kafka$server$KafkaApis$$createResponse$2(KafkaApis.scala:554)
>  at 
> kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$fetchResponseCallback$1$1.apply(KafkaApis.scala:568)
>  at 
> kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$fetchResponseCallback$1$1.apply(KafkaApis.scala:568)
>  at 
> kafka.server.KafkaApis$$anonfun$sendResponseMaybeThrottle$1.apply$mcVI$sp(KafkaApis.scala:2033)
>  at 
> kafka.server.ClientRequestQuotaManager.maybeRecordAndThrottle(ClientRequestQuotaManager.scala:54)
>  at kafka.server.KafkaApis.sendResponseMaybeThrottle(KafkaApis.scala:2032)
>  at 
> kafka.server.KafkaApis.kafka$server$KafkaApis$$fetchResponseCallback$1(KafkaApis.scala:568)
>  at 
> kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$processResponseCallback$1$1.apply$mcVI$sp(KafkaApis.scala:587)
>  at 
> kafka.server.ClientQuotaManager.maybeRecordAndThrottle(ClientQuotaManager.scala:176)
>  at 
> kafka.server.KafkaApis.kafka$server$KafkaApis$$processResponseCallback$1(KafkaApis.scala:586)
>  at 
> kafka.server.KafkaApis$$anonfun$handleFetchRequest$3.apply(KafkaApis.scala:603)
>  at 
> kafka.server.KafkaApis$$anonfun$handleFetchRequest$3.apply(KafkaApis.scala:603)
>  at kafka.server.ReplicaManager.fetchMessages(ReplicaManager.scala:820)
>  at kafka.server.KafkaApis.handleFetchRequest(KafkaApis.scala:595)
>  at kafka.server.KafkaApis.handle(KafkaApis.scala:99)
>  at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:65)
>  at java.lang.Thread.run(Thread.java:748)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-8624) The server-side Kafka version is kafka_2.11-1.0.0.2; when a client sends messages to a topic, the following exception is thrown. Is the client version inconsistent with the server version?

2019-07-03 Thread CHARELS (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16877610#comment-16877610
 ] 

CHARELS commented on KAFKA-8624:


It's okay, I appreciate it.

> The server-side Kafka version is kafka_2.11-1.0.0.2; when a client sends messages to a topic, the following exception is thrown. Is the client version inconsistent with the server version?
> 
>
> Key: KAFKA-8624
> URL: https://issues.apache.org/jira/browse/KAFKA-8624
> Project: Kafka
>  Issue Type: Bug
>  Components: log
>Affects Versions: 1.0.0
>Reporter: CHARELS
>Priority: Major
>  Labels: pull-request-available
>   Original Estimate: 12h
>  Remaining Estimate: 12h
>
> ERROR [KafkaApi-1004] Error when handling request 
> \{replica_id=-1,max_wait_time=1,min_bytes=0,topics=[{topic=ETE,partitions=[{partition=3,fetch_offset=119224671,max_bytes=1048576}]}]}
>  (kafka.server.KafkaApis)
> java.lang.IllegalArgumentException: Magic v0 does not support record headers
>  at 
> org.apache.kafka.common.record.MemoryRecordsBuilder.appendWithOffset(MemoryRecordsBuilder.java:403)
>  at 
> org.apache.kafka.common.record.MemoryRecordsBuilder.append(MemoryRecordsBuilder.java:586)
>  at 
> org.apache.kafka.common.record.AbstractRecords.convertRecordBatch(AbstractRecords.java:134)
>  at 
> org.apache.kafka.common.record.AbstractRecords.downConvert(AbstractRecords.java:109)
>  at 
> org.apache.kafka.common.record.FileRecords.downConvert(FileRecords.java:253)
>  at 
> kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$convertedPartitionData$1$1$$anonfun$apply$4.apply(KafkaApis.scala:519)
>  at 
> kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$convertedPartitionData$1$1$$anonfun$apply$4.apply(KafkaApis.scala:517)
>  at scala.Option.map(Option.scala:146)
>  at 
> kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$convertedPartitionData$1$1.apply(KafkaApis.scala:517)
>  at 
> kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$convertedPartitionData$1$1.apply(KafkaApis.scala:507)
>  at scala.Option.flatMap(Option.scala:171)
>  at 
> kafka.server.KafkaApis.kafka$server$KafkaApis$$convertedPartitionData$1(KafkaApis.scala:507)
>  at 
> kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$createResponse$2$1.apply(KafkaApis.scala:555)
>  at 
> kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$createResponse$2$1.apply(KafkaApis.scala:554)
>  at scala.collection.Iterator$class.foreach(Iterator.scala:891)
>  at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
>  at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
>  at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
>  at 
> kafka.server.KafkaApis.kafka$server$KafkaApis$$createResponse$2(KafkaApis.scala:554)
>  at 
> kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$fetchResponseCallback$1$1.apply(KafkaApis.scala:568)
>  at 
> kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$fetchResponseCallback$1$1.apply(KafkaApis.scala:568)
>  at 
> kafka.server.KafkaApis$$anonfun$sendResponseMaybeThrottle$1.apply$mcVI$sp(KafkaApis.scala:2033)
>  at 
> kafka.server.ClientRequestQuotaManager.maybeRecordAndThrottle(ClientRequestQuotaManager.scala:54)
>  at kafka.server.KafkaApis.sendResponseMaybeThrottle(KafkaApis.scala:2032)
>  at 
> kafka.server.KafkaApis.kafka$server$KafkaApis$$fetchResponseCallback$1(KafkaApis.scala:568)
>  at 
> kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$processResponseCallback$1$1.apply$mcVI$sp(KafkaApis.scala:587)
>  at 
> kafka.server.ClientQuotaManager.maybeRecordAndThrottle(ClientQuotaManager.scala:176)
>  at 
> kafka.server.KafkaApis.kafka$server$KafkaApis$$processResponseCallback$1(KafkaApis.scala:586)
>  at 
> kafka.server.KafkaApis$$anonfun$handleFetchRequest$3.apply(KafkaApis.scala:603)
>  at 
> kafka.server.KafkaApis$$anonfun$handleFetchRequest$3.apply(KafkaApis.scala:603)
>  at kafka.server.ReplicaManager.fetchMessages(ReplicaManager.scala:820)
>  at kafka.server.KafkaApis.handleFetchRequest(KafkaApis.scala:595)
>  at kafka.server.KafkaApis.handle(KafkaApis.scala:99)
>  at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:65)
>  at java.lang.Thread.run(Thread.java:748)
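
The frames above show the failure happening on the broker while it down-converts a fetched batch for an old fetch request (FileRecords.downConvert calling into MemoryRecordsBuilder.appendWithOffset). The guard that throws is roughly the following paraphrase of the Kafka source (not a verbatim copy):

{code:java}
// Paraphrased from MemoryRecordsBuilder.appendWithOffset: message format
// v0/v1 batches have no field for headers, so a record carrying non-empty
// headers cannot be down-converted below magic v2 (the 0.11+ format).
if (magic < RecordBatch.MAGIC_VALUE_V2 && headers != null && headers.length > 0)
    throw new IllegalArgumentException("Magic v" + magic + " does not support record headers");
{code}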



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (KAFKA-8624) This is the server-side Kafka, version kafka_2.11-1.0.0.2. When the client sends messages to a topic, the following exception is thrown. Are the client and server versions inconsistent?

2019-07-03 Thread 李志涛


[ 
https://issues.apache.org/jira/browse/KAFKA-8624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16877607#comment-16877607
 ] 

Zhitao Li(李志涛) edited comment on KAFKA-8624 at 7/3/19 8:01 AM:
---

I would like to answer your question. I probably know what the question is. I need to take the time to look at the code and verify it. I'm a little busy at the moment. How about I reply to you later?


was (Author: lizhitao):
I'd love to answer your question. I probably know what the question is. I need to take the time to look at the code and verify it. I'm a little busy at the moment. How about I reply to you later?




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (KAFKA-8624) This is the server-side Kafka, version kafka_2.11-1.0.0.2. When the client sends messages to a topic, the following exception is thrown. Are the client and server versions inconsistent?

2019-07-03 Thread 李志涛


[ 
https://issues.apache.org/jira/browse/KAFKA-8624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16877607#comment-16877607
 ] 

Zhitao Li(李志涛) edited comment on KAFKA-8624 at 7/3/19 7:58 AM:
---

I'd love to answer your question. I probably know what the question is. I need to take the time to look at the code and verify it. I'm a little busy at the moment. How about I reply to you later?


was (Author: lizhitao):
I would like to answer your question. I probably know what the question is. I need to see the code and verify it. I'm a bit busy at the moment. How about I reply to you later?




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-8624) This is the server-side Kafka, version kafka_2.11-1.0.0.2. When the client sends messages to a topic, the following exception is thrown. Are the client and server versions inconsistent?

2019-07-03 Thread 李志涛


[ 
https://issues.apache.org/jira/browse/KAFKA-8624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16877607#comment-16877607
 ] 

Zhitao Li(李志涛) commented on KAFKA-8624:
---

I would like to answer your question. I probably know what the question is. I need to see the code and verify it. I'm a bit busy at the moment. How about I reply to you later?




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-8624) This is the server-side Kafka, version kafka_2.11-1.0.0.2. When the client sends messages to a topic, the following exception is thrown. Are the client and server versions inconsistent?

2019-07-03 Thread CHARELS (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16877602#comment-16877602
 ] 

CHARELS commented on KAFKA-8624:


Our version of Kafka is 1.0.0, but many different clients produce messages to our Kafka cluster. How can we confirm which client version conflicts with our Kafka? I also want to know: if the client version is lower than the server's, would this error happen? Thanks
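
To make the failure mode concrete: the exception is raised while serving a fetch, so as far as I can tell it takes two ingredients, records with headers already in the log (written by a 0.11+ producer) and an older consumer whose fetch forces the broker to down-convert those records below message format v2. Below is a minimal, hypothetical producer sketch; the bootstrap address is a placeholder and the topic name is only borrowed from the report above.

{code:java}
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.header.internals.RecordHeader;

import java.nio.charset.StandardCharsets;
import java.util.Properties;

public class HeaderProducerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringSerializer");
        try (Producer<String, String> producer = new KafkaProducer<>(props)) {
            ProducerRecord<String, String> record =
                    new ProducerRecord<>("ETE", "key", "value");
            // This header is what magic v0/v1 cannot carry. Dropping it, or
            // upgrading the pre-0.11 consumers, avoids the down-conversion error.
            record.headers().add(new RecordHeader(
                    "trace-id", "abc".getBytes(StandardCharsets.UTF_8)));
            producer.send(record);
        }
    }
}
{code}

Under that reading, a lower producer version alone would not cause it; the trigger is header-bearing records being read by a consumer that only speaks the old fetch format.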




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (KAFKA-8624) This is the server-side Kafka, version kafka_2.11-1.0.0.2. When the client sends messages to a topic, the following exception is thrown. Are the client and server versions inconsistent?

2019-07-03 Thread 许晋瑞


[ 
https://issues.apache.org/jira/browse/KAFKA-8624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16877560#comment-16877560
 ] 

Jinrui Xu (许晋瑞) edited comment on KAFKA-8624 at 7/3/19 7:29 AM:


Could you provide the versions of client and server that you use? Maybe I can 
offer some help.


was (Author: jerry xu):
Could you provide the versions of client and server that you use? Maybe I can 
offer some help.




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-8624) This is the server-side Kafka, version kafka_2.11-1.0.0.2. When the client sends messages to a topic, the following exception is thrown. Are the client and server versions inconsistent?

2019-07-03 Thread JIRA


[ 
https://issues.apache.org/jira/browse/KAFKA-8624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16877560#comment-16877560
 ] 

许晋瑞 commented on KAFKA-8624:


Could you provide the versions of client and server that you use? Maybe I can 
offer some help.




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (KAFKA-8624) This is the server-side Kafka, version kafka_2.11-1.0.0.2. When the client sends messages to a topic, the following exception is thrown. Are the client and server versions inconsistent?

2019-07-03 Thread lizhitao (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16877530#comment-16877530
 ] 

lizhitao edited comment on KAFKA-8624 at 7/3/19 7:10 AM:
-

Please give the client-side and server-side versions respectively; then we can help you analyze the mistakes in context.


was (Author: lizhitao):
Please give the client-side and server-side versions respectively; then we can help you analyze the mistake in context.




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-8624) This is the server-side Kafka, version kafka_2.11-1.0.0.2. When the client sends messages to a topic, the following exception is thrown. Are the client and server versions inconsistent?

2019-07-03 Thread lizhitao (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16877530#comment-16877530
 ] 

lizhitao commented on KAFKA-8624:
-

Please give the client-side and server-side versions respectively; then we can help you analyze the mistake in context.




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-7245) Deprecate WindowStore#put(key, value)

2019-07-03 Thread Omkar Mestry (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7245?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16877520#comment-16877520
 ] 

Omkar Mestry commented on KAFKA-7245:
-

[~mjsax] As the KIP is accepted, shall I close this Jira, or is there any 
further task to be performed?

> Deprecate WindowStore#put(key, value)
> -
>
> Key: KAFKA-7245
> URL: https://issues.apache.org/jira/browse/KAFKA-7245
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Matthias J. Sax
>Assignee: Omkar Mestry
>Priority: Minor
>  Labels: needs-kip, newbie
>
> We want to remove `WindowStore#put(key, value)` – for this, we first need to 
> deprecate it via a KIP and remove it later.
> Instead of using `WindowStore#put(key, value)` we need to migrate code to 
> specify the timestamp explicitly using `WindowStore#put(key, value, 
> timestamp)`. The current code base uses the explicit call to set the timestamp 
> in production code already. The simplified `put(key, value)` is only used in 
> tests, and thus we would need to update those tests.
> KIP-474 :- 
> [https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=115526545]
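
As a reference for the migration described above, here is a minimal hypothetical sketch (the processor and the store name "counts-store" are invented; it assumes the Processor API as of Kafka 2.x): the deprecated two-argument put is replaced by the overload that takes the timestamp explicitly.

{code:java}
import org.apache.kafka.streams.processor.Processor;
import org.apache.kafka.streams.processor.ProcessorContext;
import org.apache.kafka.streams.state.WindowStore;

// Hypothetical processor; "counts-store" would be registered with the
// topology elsewhere.
public class CountProcessor implements Processor<String, Long> {
    private ProcessorContext context;
    private WindowStore<String, Long> store;

    @Override
    @SuppressWarnings("unchecked")
    public void init(final ProcessorContext context) {
        this.context = context;
        this.store = (WindowStore<String, Long>) context.getStateStore("counts-store");
    }

    @Override
    public void process(final String key, final Long value) {
        // Before (to be deprecated): store.put(key, value);
        // After: the window start timestamp is passed explicitly.
        store.put(key, value, context.timestamp());
    }

    @Override
    public void close() { }
}
{code}

Making the timestamp explicit at the call site is the point of the deprecation: the two-argument form silently used the current record's timestamp.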



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)