qianmoQ edited a comment on issue #8605: Failed to publish segments because of [java.lang.RuntimeException: Aborting transaction!].
URL: https://github.com/apache/incubator-druid/issues/8605#issuecomment-553303081
 
 
   > Any way to reproduce this? I somehow encountered this issue and was unable to reproduce it. What I found out later was that my exact datasource was unused (unable to persist into deep storage).
   > Using the Kafka indexing service, Druid version 0.15.1.
   > 
   > ```
   > org.apache.druid.java.util.common.ISE: Failed to publish segments because of [java.lang.RuntimeException: Aborting transaction!].
   >    at org.apache.druid.segment.realtime.appenderator.BaseAppenderatorDriver.lambda$publishInBackground$8(BaseAppenderatorDriver.java:602) ~[druid-server-0.15.1-incubating.jar:0.15.1-incubating]
   >    at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_151]
   >    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_151]
   >    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_151]
   >    at java.lang.Thread.run(Thread.java:748) [?:1.8.0_151]
   > 2019-11-08T06:56:35,879 ERROR [publish-0] org.apache.druid.indexing.seekablestream.SeekableStreamIndexTaskRunner - Error while publishing segments for sequenceNumber[SequenceMetadata{sequenceId=0, sequenceName='index_kafka_shop_stats_product_view_bef122043a00004_0', assignments=[], startOffsets={0=1015942}, exclusiveStartPartitions=[], endOffsets={0=1025942}, sentinel=false, checkpointed=true}]
   > org.apache.druid.java.util.common.ISE: Failed to publish segments because of [java.lang.RuntimeException: Aborting transaction!].
   >    at org.apache.druid.segment.realtime.appenderator.BaseAppenderatorDriver.lambda$publishInBackground$8(BaseAppenderatorDriver.java:602) ~[druid-server-0.15.1-incubating.jar:0.15.1-incubating]
   >    at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_151]
   >    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_151]
   >    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_151]
   >    at java.lang.Thread.run(Thread.java:748) [?:1.8.0_151]
   > ```
   
   This is a bug: the Kafka indexing service generates multiple tasks, but when the tasks' output is combined, multiple segments in the same group fail to merge, and all of the data for the current segment is lost.
   
   I would recommend using the pre-Apache (non-incubating) version; the current Apache line is moving too fast, and its functionality is not yet mature.
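
The "Aborting transaction!" failure comes from Druid's transactional publish: a Kafka indexing task commits its end offsets only if the metadata store still holds the start offsets the task began from, so two tasks publishing overlapping work cannot both succeed. The sketch below is only an illustration of that compare-and-swap idea with hypothetical names, not Druid's actual classes or APIs:

```python
# Illustrative sketch (NOT Druid code) of the compare-and-swap check behind
# "Aborting transaction!": a publish carries the start offsets the task
# believes are committed; if the metadata store disagrees, the transaction
# is rolled back instead of committing a second, conflicting set of segments.

class MetadataStore:
    def __init__(self):
        # partition -> last committed offset for a single datasource
        self.committed_offsets = {}

    def publish(self, start_offsets, end_offsets):
        # Commit only if the stored offsets match what the task saw at start;
        # any mismatch aborts the whole publish transaction.
        if self.committed_offsets and self.committed_offsets != start_offsets:
            raise RuntimeError("Aborting transaction!")
        self.committed_offsets = dict(end_offsets)
        return True

store = MetadataStore()

# Task A publishes offsets 1015942 -> 1025942 for partition 0 (the range
# seen in the log above); the store was empty, so this commits.
store.publish({0: 1015942}, {0: 1025942})

# A second task that also started from offset 1015942 (e.g. after a restart
# or a duplicated sequence) now conflicts with the stored metadata, so its
# publish aborts rather than overwriting the committed range.
try:
    store.publish({0: 1015942}, {0: 1035942})
except RuntimeError as e:
    print(e)  # Aborting transaction!
```

Under this model the error is the safety mechanism working as designed; the question in this issue is why the task's start offsets diverged from the stored metadata in the first place.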

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 