[ https://issues.apache.org/jira/browse/KAFKA-5071?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15969163#comment-15969163 ]

Matthias J. Sax commented on KAFKA-5071:
----------------------------------------

[~dnagarajan] Similar to KAFKA-5070, this might already be fixed in 
{{0.10.2.1}}. Can you verify whether it is fixed there? Thanks a lot!

> 2017-04-11 18:18:45.574 ERROR StreamThread:783 StreamThread-128 - 
> stream-thread [StreamThread-128] Failed to commit StreamTask 0_304 state:  
> org.apache.kafka.streams.errors.ProcessorStateException: task [0_304] Failed 
> to flush state store fmdbt 
> -----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
>
>                 Key: KAFKA-5071
>                 URL: https://issues.apache.org/jira/browse/KAFKA-5071
>             Project: Kafka
>          Issue Type: Bug
>    Affects Versions: 0.10.2.0
>         Environment: Linux
>            Reporter: Dhana
>         Attachments: RocksDB_Issue_commitFailedonFlush.7z
>
>
> Scenario: we run two consumers (application puse10) on different machines, 
> with 400 partitions and 200 streams per consumer.
> config:
> bootstrap.servers=10.16.34.29:9092,10.16.35.134:9092,10.16.38.27:9092
> zookeeper.connect=10.16.34.29:2181,10.16.35.134:2181,10.16.38.27:2181
> num.stream.threads=200
> pulse.per.pdid.count.enable=false
> replication.factor=2
> state.dir=/opt/rocksdb
> max.poll.records=50
> session.timeout.ms=180000
> request.timeout.ms=5020000
> max.poll.interval.ms=5000000
> fetch.max.bytes=102400
> max.partition.fetch.bytes=102400
> heartbeat.interval.ms = 60000
> Logs - attached.
> Error:
> 2017-04-11 18:18:45.170 INFO  VehicleEventsStreamProcessor:219 
> StreamThread-32 - Current size of Treemap is 4 for pdid 
> skga11041730gedvcl2pdid2236
> 2017-04-11 18:18:45.170 INFO  VehicleEventsStreamProcessor:245 
> StreamThread-32 - GE to be processed pdid skga11041730gedvcl2pdid2236 and 
> uploadTimeStamp 2017-04-11 17:46:06.883
> 2017-04-11 18:18:45.175 INFO  VehicleEventsStreamProcessor:179 
> StreamThread-47 - Arrived GE uploadTimestamp 2017-04-11 17:46:10.911 pdid 
> skga11041730gedvcl2pdid2290
> 2017-04-11 18:18:45.176 INFO  VehicleEventsStreamProcessor:219 
> StreamThread-47 - Current size of Treemap is 4 for pdid 
> skga11041730gedvcl2pdid2290
> 2017-04-11 18:18:45.176 INFO  VehicleEventsStreamProcessor:245 
> StreamThread-47 - GE to be processed pdid skga11041730gedvcl2pdid2290 and 
> uploadTimeStamp 2017-04-11 17:46:06.911
> 2017-04-11 18:18:45.571 INFO  StreamThread:737 StreamThread-128 - 
> stream-thread [StreamThread-128] Committing all tasks because the commit 
> interval 30000ms has elapsed
> 2017-04-11 18:18:45.571 INFO  StreamThread:775 StreamThread-128 - 
> stream-thread [StreamThread-128] Committing task StreamTask 0_304
> 2017-04-11 18:18:45.574 ERROR StreamThread:783 StreamThread-128 - 
> stream-thread [StreamThread-128] Failed to commit StreamTask 0_304 state: 
> org.apache.kafka.streams.errors.ProcessorStateException: task [0_304] Failed 
> to flush state store fmdbt
>       at 
> org.apache.kafka.streams.processor.internals.ProcessorStateManager.flush(ProcessorStateManager.java:325)
>       at 
> org.apache.kafka.streams.processor.internals.StreamTask$1.run(StreamTask.java:72)
>       at 
> org.apache.kafka.streams.processor.internals.StreamsMetricsImpl.measureLatencyNs(StreamsMetricsImpl.java:188)
>       at 
> org.apache.kafka.streams.processor.internals.StreamTask.commit(StreamTask.java:280)
>       at 
> org.apache.kafka.streams.processor.internals.StreamThread.commitOne(StreamThread.java:777)
>       at 
> org.apache.kafka.streams.processor.internals.StreamThread.commitAll(StreamThread.java:764)
>       at 
> org.apache.kafka.streams.processor.internals.StreamThread.maybeCommit(StreamThread.java:739)
>       at 
> org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:661)
>       at 
> org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:368)
> Caused by: org.apache.kafka.streams.errors.ProcessorStateException: Error 
> while executing flush from store fmdbt
>       at 
> com.harman.analytics.stream.base.stores.HarmanRocksDBStore.flushInternal(HarmanRocksDBStore.java:353)
>       at 
> com.harman.analytics.stream.base.stores.HarmanRocksDBStore.flush(HarmanRocksDBStore.java:342)
>       at 
> com.harman.analytics.stream.base.stores.HarmanPersistentKVStore.flush(HarmanPersistentKVStore.java:72)
>       at 
> org.apache.kafka.streams.processor.internals.ProcessorStateManager.flush(ProcessorStateManager.java:323)
>       ... 8 more
> Caused by: org.rocksdb.RocksDBException: N
>       at org.rocksdb.RocksDB.flush(Native Method)
>       at org.rocksdb.RocksDB.flush(RocksDB.java:1642)
>       at 
> com.harman.analytics.stream.base.stores.HarmanRocksDBStore.flushInternal(HarmanRocksDBStore.java:351)
>       ... 11 more
> 2017-04-11 18:18:45.583 INFO  StreamThread:397 StreamThread-128 - 
> stream-thread [StreamThread-128] Shutting down
> 2017-04-11 18:18:45.584 INFO  StreamThread:1045 StreamThread-128 - 
> stream-thread [StreamThread-128] Closing a task 0_304
> 2017-04-11 18:18:45.790 INFO  StreamThread:1045 StreamThread-128 - 
> stream-thread [StreamThread-128] Closing a task 0_104
> 2017-04-11 18:18:45.790 INFO  StreamThread:544 StreamThread-128 - 
> stream-thread [StreamThread-128] Flushing state stores of task 0_304
> 2017-04-11 18:18:45.790 ERROR StreamThread:505 StreamThread-128 - 
> stream-thread [StreamThread-128] Failed while executing StreamTask 0_304 due 
> to flush state: 
> org.apache.kafka.streams.errors.ProcessorStateException: task [0_304] Failed 
> to flush state store fmdbt
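
For reference, the configuration listed in the report could be assembled in
Java roughly as follows. This is a minimal sketch using plain
`java.util.Properties` so it stays dependency-free; the property names and
values are taken verbatim from the report, and the class name
`ReportedStreamsConfig` is hypothetical. In a real application these
properties would be passed to `new KafkaStreams(topology, props)`.

```java
import java.util.Properties;

// Hypothetical reconstruction of the reporter's Kafka Streams configuration.
// Property keys are copied verbatim from the bug report; in production code
// the StreamsConfig/ConsumerConfig constants would normally be used instead
// of raw strings.
public class ReportedStreamsConfig {
    public static Properties build() {
        Properties props = new Properties();
        props.put("bootstrap.servers",
                "10.16.34.29:9092,10.16.35.134:9092,10.16.38.27:9092");
        props.put("num.stream.threads", "200");        // 200 threads per instance
        props.put("replication.factor", "2");          // changelog replication
        props.put("state.dir", "/opt/rocksdb");        // RocksDB state location
        props.put("max.poll.records", "50");
        props.put("session.timeout.ms", "180000");
        props.put("request.timeout.ms", "5020000");
        props.put("max.poll.interval.ms", "5000000");
        props.put("fetch.max.bytes", "102400");
        props.put("max.partition.fetch.bytes", "102400");
        props.put("heartbeat.interval.ms", "60000");
        return props;
    }
}
```

Note that 200 stream threads per instance across two instances against 400
partitions means roughly one partition per thread, which is an unusually
thread-heavy layout and makes commit/flush pressure on the RocksDB state
stores correspondingly high.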



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
