terry19850829 commented on issue #8605: Failed to publish segments because of [java.lang.RuntimeException: Aborting transaction!].
URL: https://github.com/apache/druid/issues/8605#issuecomment-600400215
 
 
   > Have the same issue with version 0.16.0-incubating
   > 
   > overlord logs:
   > 
   > ```
   > ....
   > 2020-03-18 08:22:46,796 INFO [KafkaSupervisor-crm_bd_capacity_ds] 
org.apache.druid.indexing.seekablestream.supervisor.SeekableStreamSupervisor - 
{id='crm_bd_capacity_ds', generationTime=2020-03-18T00:22:46.796Z, 
payload=KafkaSupervisorReportPayload{dataSource='crm_bd_capacity_ds', 
topic='crm.capacity.event', partitions=3, replicas=1, durationSeconds=3600, 
active=[{id='index_kafka_crm_bd_capacity_ds_374d70c186b714c_ocaijcmm', 
startTime=2020-03-17T23:23:33.411Z, remainingSeconds=46}], publishing=[], 
suspended=false, healthy=true, state=RUNNING, detailedState=RUNNING, 
recentErrors=[]}}
   > 2020-03-18 08:23:16,796 INFO [KafkaSupervisor-crm_bd_capacity_ds] 
org.apache.druid.indexing.seekablestream.supervisor.SeekableStreamSupervisor - 
{id='crm_bd_capacity_ds', generationTime=2020-03-18T00:23:16.796Z, 
payload=KafkaSupervisorReportPayload{dataSource='crm_bd_capacity_ds', 
topic='crm.capacity.event', partitions=3, replicas=1, durationSeconds=3600, 
active=[{id='index_kafka_crm_bd_capacity_ds_374d70c186b714c_ocaijcmm', 
startTime=2020-03-17T23:23:33.411Z, remainingSeconds=16}], publishing=[], 
suspended=false, healthy=true, state=RUNNING, detailedState=RUNNING, 
recentErrors=[]}}
   > 2020-03-18 08:23:34,477 INFO [IndexTaskClient-crm_bd_capacity_ds-0] 
org.apache.druid.indexing.seekablestream.SeekableStreamIndexTaskClient - Task 
[index_kafka_crm_bd_capacity_ds_374d70c186b714c_ocaijcmm] paused successfully
   > 2020-03-18 08:23:34,515 INFO [KafkaSupervisor-crm_bd_capacity_ds] 
org.apache.druid.indexing.seekablestream.supervisor.SeekableStreamSupervisor - 
{id='crm_bd_capacity_ds', generationTime=2020-03-18T00:23:34.515Z, 
payload=KafkaSupervisorReportPayload{dataSource='crm_bd_capacity_ds', 
topic='crm.capacity.event', partitions=3, replicas=1, durationSeconds=3600, 
active=[], 
publishing=[{id='index_kafka_crm_bd_capacity_ds_374d70c186b714c_ocaijcmm', 
startTime=2020-03-17T23:23:33.411Z, remainingSeconds=1799}], suspended=false, 
healthy=true, state=RUNNING, detailedState=RUNNING, recentErrors=[]}}
   > 2020-03-18 08:23:34,872 INFO [qtp343722304-89] 
org.apache.druid.indexing.common.actions.LocalTaskActionClient - Performing 
action for task[index_kafka_crm_bd_capacity_ds_374d70c186b714c_ocaijcmm]: 
SegmentTransactionalInsertAction{segmentsToBeOverwritten=null, 
segments=[DataSegment{binaryVersion=9, 
id=crm_bd_capacity_ds_2020-03-17T23:00:00.000Z_2020-03-18T00:00:00.000Z_2020-03-17T23:26:39.118Z,
 loadSpec={type=>hdfs, 
path=>hdfs://mycluster/druid_one_for_all/segments/crm_bd_capacity_ds/20200317T230000.000Z_20200318T000000.000Z/2020-03-17T23_26_39.118Z/0_41746007-f3d8-4bbb-ab51-c899fbced760_index.zip},
 dimensions=[level1_code, level2_code, level3_code, level4_code, level5_code, 
user_id], metrics=[cnt, merchant_create_cnt, store_create_cnt, audit_pass_cnt, 
realname_pass_cnt, terminal_bind_cnt, level5_average], 
shardSpec=NumberedShardSpec{partitionNum=0, partitions=0}, size=4309}, 
DataSegment{binaryVersion=9, 
id=crm_bd_capacity_ds_2020-03-18T00:00:00.000Z_2020-03-18T01:00:00.000Z_2020-03-18T00:08:10.800Z,
 loadSpec={type=>hdfs, 
path=>hdfs://mycluster/druid_one_for_all/segments/crm_bd_capacity_ds/20200318T000000.000Z_20200318T010000.000Z/2020-03-18T00_08_10.800Z/0_6cbb5e49-dac9-4c15-ae7f-63ca8deb7d49_index.zip},
 dimensions=[level1_code, level2_code, level3_code, level4_code, level5_code, 
user_id], metrics=[cnt, merchant_create_cnt, store_create_cnt, audit_pass_cnt, 
realname_pass_cnt, terminal_bind_cnt, level5_average], 
shardSpec=NumberedShardSpec{partitionNum=0, partitions=0}, size=5019}], 
startMetadata=KafkaDataSourceMetadata{SeekableStreamStartSequenceNumbers=SeekableStreamStartSequenceNumbers{stream='crm.capacity.event',
 partitionSequenceNumberMap={0=2487405, 1=2487408, 2=2487404}, 
exclusivePartitions=[]}}, 
endMetadata=KafkaDataSourceMetadata{SeekableStreamStartSequenceNumbers=SeekableStreamEndSequenceNumbers{stream='crm.capacity.event',
 partitionSequenceNumberMap={0=2487410, 1=2487413, 2=2487408}}}}
   > 2020-03-18 08:23:34,900 INFO [qtp343722304-89] 
org.apache.druid.java.util.emitter.core.LoggingEmitter - 
{"feed":"metrics","timestamp":"2020-03-18T00:23:34.900Z","service":"druid/overlord","host":"druid-master-001:8090","version":"0.16.0-incubating","metric":"segment/txn/failure","value":1,"dataSource":"crm_bd_capacity_ds","taskId":"index_kafka_crm_bd_capacity_ds_374d70c186b714c_ocaijcmm","taskType":"index_kafka"}
   > 2020-03-18 08:23:34,900 INFO [qtp343722304-89] 
org.apache.druid.java.util.emitter.core.LoggingEmitter - 
{"feed":"metrics","timestamp":"2020-03-18T00:23:34.900Z","service":"druid/overlord","host":"druid-master-001:8090","version":"0.16.0-incubating","metric":"task/action/run/time","value":28,"dataSource":"crm_bd_capacity_ds","taskId":"index_kafka_crm_bd_capacity_ds_374d70c186b714c_ocaijcmm","taskType":"index_kafka"}
   > 2020-03-18 08:23:34,910 INFO [qtp343722304-93] 
org.apache.druid.indexing.common.actions.LocalTaskActionClient - Performing 
action for task[index_kafka_crm_bd_capacity_ds_374d70c186b714c_ocaijcmm]: 
SegmentListUsedAction{dataSource='crm_bd_capacity_ds', 
intervals=[2020-03-17T23:00:00.000Z/2020-03-18T01:00:00.000Z]}
   > 2020-03-18 08:23:34,915 INFO [qtp343722304-93] 
org.apache.druid.java.util.emitter.core.LoggingEmitter - 
{"feed":"metrics","timestamp":"2020-03-18T00:23:34.914Z","service":"druid/overlord","host":"druid-master-001:8090","version":"0.16.0-incubating","metric":"task/action/run/time","value":4,"dataSource":"crm_bd_capacity_ds","taskId":"index_kafka_crm_bd_capacity_ds_374d70c186b714c_ocaijcmm","taskType":"index_kafka"}
   > 2020-03-18 08:23:35,718 INFO [Curator-PathChildrenCache-1] 
org.apache.druid.indexing.overlord.RemoteTaskRunner - 
Worker[druid-data-004:8091] wrote FAILED status for task 
[index_kafka_crm_bd_capacity_ds_374d70c186b714c_ocaijcmm] on 
[TaskLocation{host='druid-data-004', port=8100, tlsPort=-1}]
   > 2020-03-18 08:23:35,718 INFO [Curator-PathChildrenCache-1] 
org.apache.druid.indexing.overlord.RemoteTaskRunner - 
Worker[druid-data-004:8091] completed 
task[index_kafka_crm_bd_capacity_ds_374d70c186b714c_ocaijcmm] with 
status[FAILED]
   > 2020-03-18 08:23:35,718 INFO [Curator-PathChildrenCache-1] 
org.apache.druid.indexing.overlord.TaskQueue - Received FAILED status for task: 
index_kafka_crm_bd_capacity_ds_374d70c186b714c_ocaijcmm
   > 2020-03-18 08:23:35,718 INFO [Curator-PathChildrenCache-1] 
org.apache.druid.indexing.overlord.RemoteTaskRunner - Shutdown 
[index_kafka_crm_bd_capacity_ds_374d70c186b714c_ocaijcmm] because: [notified 
status change from task]
   > 2020-03-18 08:23:35,718 INFO [Curator-PathChildrenCache-1] 
org.apache.druid.indexing.overlord.RemoteTaskRunner - Cleaning up 
task[index_kafka_crm_bd_capacity_ds_374d70c186b714c_ocaijcmm] on 
worker[druid-data-004:8091]
   > 2020-03-18 08:23:35,724 INFO [Curator-PathChildrenCache-1] 
org.apache.druid.indexing.overlord.TaskLockbox - Removing 
task[index_kafka_crm_bd_capacity_ds_374d70c186b714c_ocaijcmm] from activeTasks
   > 2020-03-18 08:23:35,724 INFO [Curator-PathChildrenCache-1] 
org.apache.druid.indexing.overlord.TaskLockbox - Removing 
task[index_kafka_crm_bd_capacity_ds_374d70c186b714c_ocaijcmm] from 
TaskLock[TimeChunkLock{type=EXCLUSIVE, 
groupId='index_kafka_crm_bd_capacity_ds', dataSource='crm_bd_capacity_ds', 
interval=2020-03-17T23:00:00.000Z/2020-03-18T00:00:00.000Z, 
version='2020-03-17T23:26:39.118Z', priority=75, revoked=false}]
   > 2020-03-18 08:23:35,737 INFO [Curator-PathChildrenCache-1] 
org.apache.druid.indexing.overlord.TaskLockbox - Removing 
task[index_kafka_crm_bd_capacity_ds_374d70c186b714c_ocaijcmm] from 
TaskLock[TimeChunkLock{type=EXCLUSIVE, 
groupId='index_kafka_crm_bd_capacity_ds', dataSource='crm_bd_capacity_ds', 
interval=2020-03-18T00:00:00.000Z/2020-03-18T01:00:00.000Z, 
version='2020-03-18T00:08:10.800Z', priority=75, revoked=false}]
   > 2020-03-18 08:23:35,756 INFO [Curator-PathChildrenCache-1] 
org.apache.druid.indexing.overlord.MetadataTaskStorage - Updating task 
index_kafka_crm_bd_capacity_ds_374d70c186b714c_ocaijcmm to status: 
TaskStatus{id=index_kafka_crm_bd_capacity_ds_374d70c186b714c_ocaijcmm, 
status=FAILED, duration=3606169, 
errorMsg=java.util.concurrent.ExecutionException: 
org.apache.druid.java.util.common.ISE: Failed to publish se...}
   > 2020-03-18 08:23:35,766 INFO [Curator-PathChildrenCache-1] 
org.apache.druid.indexing.overlord.TaskQueue - Task done: 
AbstractTask{id='index_kafka_crm_bd_capacity_ds_374d70c186b714c_ocaijcmm', 
groupId='index_kafka_crm_bd_capacity_ds', 
taskResource=TaskResource{availabilityGroup='index_kafka_crm_bd_capacity_ds_374d70c186b714c',
 requiredCapacity=1}, dataSource='crm_bd_capacity_ds', 
context={forceTimeChunkLock=true, 
checkpoints={"0":{"0":2487405,"1":2487408,"2":2487404}}, 
IS_INCREMENTAL_HANDOFF_SUPPORTED=true}}
   > 2020-03-18 08:23:35,766 INFO [Curator-PathChildrenCache-1] 
org.apache.druid.java.util.emitter.core.LoggingEmitter - 
{"feed":"metrics","timestamp":"2020-03-18T00:23:35.766Z","service":"druid/overlord","host":"druid-master-001:8090","version":"0.16.0-incubating","metric":"task/run/time","value":3606169,"dataSource":"crm_bd_capacity_ds","taskId":"index_kafka_crm_bd_capacity_ds_374d70c186b714c_ocaijcmm","taskStatus":"FAILED","taskType":"index_kafka"}
   > 2020-03-18 08:23:35,766 INFO [Curator-PathChildrenCache-1] 
org.apache.druid.indexing.overlord.TaskQueue - Task FAILED: 
AbstractTask{id='index_kafka_crm_bd_capacity_ds_374d70c186b714c_ocaijcmm', 
groupId='index_kafka_crm_bd_capacity_ds', 
taskResource=TaskResource{availabilityGroup='index_kafka_crm_bd_capacity_ds_374d70c186b714c',
 requiredCapacity=1}, dataSource='crm_bd_capacity_ds', 
context={forceTimeChunkLock=true, 
checkpoints={"0":{"0":2487405,"1":2487408,"2":2487404}}, 
IS_INCREMENTAL_HANDOFF_SUPPORTED=true}} (3606169 run duration)
   > 2020-03-18 08:23:35,767 INFO [Curator-PathChildrenCache-1] 
org.apache.druid.indexing.overlord.TaskRunnerUtils - Task 
[index_kafka_crm_bd_capacity_ds_374d70c186b714c_ocaijcmm] status changed to 
[FAILED].
   > 2020-03-18 08:23:35,767 INFO [Curator-PathChildrenCache-1] 
org.apache.druid.indexing.overlord.RemoteTaskRunner - 
Task[index_kafka_crm_bd_capacity_ds_374d70c186b714c_ocaijcmm] went bye bye.
   > ```
   
   Task error log:
   
   ```
   2020-03-18T00:23:34,863 INFO [publish-0] 
org.apache.druid.indexing.common.actions.RemoteTaskActionClient - Performing 
action for task[index_kafka_crm_bd_capacity_ds_374d70c186b714c_ocaijcmm]: 
SegmentTransactionalInsertAction{segmentsToBeOverwritten=null, 
segments=[DataSegment{binaryVersion=9, 
id=crm_bd_capacity_ds_2020-03-17T23:00:00.000Z_2020-03-18T00:00:00.000Z_2020-03-17T23:26:39.118Z,
 loadSpec={type=>hdfs, 
path=>hdfs://mycluster/druid_one_for_all/segments/crm_bd_capacity_ds/20200317T230000.000Z_20200318T000000.000Z/2020-03-17T23_26_39.118Z/0_41746007-f3d8-4bbb-ab51-c899fbced760_index.zip},
 dimensions=[level1_code, level2_code, level3_code, level4_code, level5_code, 
user_id], metrics=[cnt, merchant_create_cnt, store_create_cnt, audit_pass_cnt, 
realname_pass_cnt, terminal_bind_cnt, level5_average], 
shardSpec=NumberedShardSpec{partitionNum=0, partitions=0}, size=4309}, 
DataSegment{binaryVersion=9, 
id=crm_bd_capacity_ds_2020-03-18T00:00:00.000Z_2020-03-18T01:00:00.000Z_2020-03-18T00:08:10.800Z,
 loadSpec={type=>hdfs, 
path=>hdfs://mycluster/druid_one_for_all/segments/crm_bd_capacity_ds/20200318T000000.000Z_20200318T010000.000Z/2020-03-18T00_08_10.800Z/0_6cbb5e49-dac9-4c15-ae7f-63ca8deb7d49_index.zip},
 dimensions=[level1_code, level2_code, level3_code, level4_code, level5_code, 
user_id], metrics=[cnt, merchant_create_cnt, store_create_cnt, audit_pass_cnt, 
realname_pass_cnt, terminal_bind_cnt, level5_average], 
shardSpec=NumberedShardSpec{partitionNum=0, partitions=0}, size=5019}], 
startMetadata=KafkaDataSourceMetadata{SeekableStreamStartSequenceNumbers=SeekableStreamStartSequenceNumbers{stream='crm.capacity.event',
 partitionSequenceNumberMap={0=2487405, 1=2487408, 2=2487404}, 
exclusivePartitions=[]}}, 
endMetadata=KafkaDataSourceMetadata{SeekableStreamStartSequenceNumbers=SeekableStreamEndSequenceNumbers{stream='crm.capacity.event',
 partitionSequenceNumberMap={0=2487410, 1=2487413, 2=2487408}}}}
   2020-03-18T00:23:34,868 INFO [publish-0] 
org.apache.druid.indexing.common.actions.RemoteTaskActionClient - Submitting 
action for task[index_kafka_crm_bd_capacity_ds_374d70c186b714c_ocaijcmm] to 
overlord: [SegmentTransactionalInsertAction{segmentsToBeOverwritten=null, 
segments=[DataSegment{binaryVersion=9, 
id=crm_bd_capacity_ds_2020-03-17T23:00:00.000Z_2020-03-18T00:00:00.000Z_2020-03-17T23:26:39.118Z,
 loadSpec={type=>hdfs, 
path=>hdfs://mycluster/druid_one_for_all/segments/crm_bd_capacity_ds/20200317T230000.000Z_20200318T000000.000Z/2020-03-17T23_26_39.118Z/0_41746007-f3d8-4bbb-ab51-c899fbced760_index.zip},
 dimensions=[level1_code, level2_code, level3_code, level4_code, level5_code, 
user_id], metrics=[cnt, merchant_create_cnt, store_create_cnt, audit_pass_cnt, 
realname_pass_cnt, terminal_bind_cnt, level5_average], 
shardSpec=NumberedShardSpec{partitionNum=0, partitions=0}, size=4309}, 
DataSegment{binaryVersion=9, 
id=crm_bd_capacity_ds_2020-03-18T00:00:00.000Z_2020-03-18T01:00:00.000Z_2020-03-18T00:08:10.800Z,
 loadSpec={type=>hdfs, 
path=>hdfs://mycluster/druid_one_for_all/segments/crm_bd_capacity_ds/20200318T000000.000Z_20200318T010000.000Z/2020-03-18T00_08_10.800Z/0_6cbb5e49-dac9-4c15-ae7f-63ca8deb7d49_index.zip},
 dimensions=[level1_code, level2_code, level3_code, level4_code, level5_code, 
user_id], metrics=[cnt, merchant_create_cnt, store_create_cnt, audit_pass_cnt, 
realname_pass_cnt, terminal_bind_cnt, level5_average], 
shardSpec=NumberedShardSpec{partitionNum=0, partitions=0}, size=5019}], 
startMetadata=KafkaDataSourceMetadata{SeekableStreamStartSequenceNumbers=SeekableStreamStartSequenceNumbers{stream='crm.capacity.event',
 partitionSequenceNumberMap={0=2487405, 1=2487408, 2=2487404}, 
exclusivePartitions=[]}}, 
endMetadata=KafkaDataSourceMetadata{SeekableStreamStartSequenceNumbers=SeekableStreamEndSequenceNumbers{stream='crm.capacity.event',
 partitionSequenceNumberMap={0=2487410, 1=2487413, 2=2487408}}}}].
   2020-03-18T00:23:34,905 INFO [publish-0] 
org.apache.druid.indexing.common.actions.RemoteTaskActionClient - Performing 
action for task[index_kafka_crm_bd_capacity_ds_374d70c186b714c_ocaijcmm]: 
SegmentListUsedAction{dataSource='crm_bd_capacity_ds', 
intervals=[2020-03-17T23:00:00.000Z/2020-03-18T01:00:00.000Z]}
   2020-03-18T00:23:34,906 INFO [publish-0] 
org.apache.druid.indexing.common.actions.RemoteTaskActionClient - Submitting 
action for task[index_kafka_crm_bd_capacity_ds_374d70c186b714c_ocaijcmm] to 
overlord: [SegmentListUsedAction{dataSource='crm_bd_capacity_ds', 
intervals=[2020-03-17T23:00:00.000Z/2020-03-18T01:00:00.000Z]}].
   2020-03-18T00:23:34,918 INFO [publish-0] 
org.apache.druid.storage.hdfs.HdfsDataSegmentKiller - Killing 
segment[crm_bd_capacity_ds_2020-03-17T23:00:00.000Z_2020-03-18T00:00:00.000Z_2020-03-17T23:26:39.118Z]
 mapped to 
path[hdfs://mycluster/druid_one_for_all/segments/crm_bd_capacity_ds/20200317T230000.000Z_20200318T000000.000Z/2020-03-17T23_26_39.118Z/0_41746007-f3d8-4bbb-ab51-c899fbced760_index.zip]
   2020-03-18T00:23:34,948 INFO [publish-0] 
org.apache.druid.storage.hdfs.HdfsDataSegmentKiller - Killing 
segment[crm_bd_capacity_ds_2020-03-18T00:00:00.000Z_2020-03-18T01:00:00.000Z_2020-03-18T00:08:10.800Z]
 mapped to 
path[hdfs://mycluster/druid_one_for_all/segments/crm_bd_capacity_ds/20200318T000000.000Z_20200318T010000.000Z/2020-03-18T00_08_10.800Z/0_6cbb5e49-dac9-4c15-ae7f-63ca8deb7d49_index.zip]
   2020-03-18T00:23:34,972 WARN [publish-0] 
org.apache.druid.segment.realtime.appenderator.BaseAppenderatorDriver - Failed 
publish, not removing segments: [DataSegment{binaryVersion=9, 
id=crm_bd_capacity_ds_2020-03-17T23:00:00.000Z_2020-03-18T00:00:00.000Z_2020-03-17T23:26:39.118Z,
 loadSpec={type=>hdfs, 
path=>hdfs://mycluster/druid_one_for_all/segments/crm_bd_capacity_ds/20200317T230000.000Z_20200318T000000.000Z/2020-03-17T23_26_39.118Z/0_41746007-f3d8-4bbb-ab51-c899fbced760_index.zip},
 dimensions=[level1_code, level2_code, level3_code, level4_code, level5_code, 
user_id], metrics=[cnt, merchant_create_cnt, store_create_cnt, audit_pass_cnt, 
realname_pass_cnt, terminal_bind_cnt, level5_average], 
shardSpec=NumberedShardSpec{partitionNum=0, partitions=0}, size=4309}, 
DataSegment{binaryVersion=9, 
id=crm_bd_capacity_ds_2020-03-18T00:00:00.000Z_2020-03-18T01:00:00.000Z_2020-03-18T00:08:10.800Z,
 loadSpec={type=>hdfs, 
path=>hdfs://mycluster/druid_one_for_all/segments/crm_bd_capacity_ds/20200318T000000.000Z_20200318T010000.000Z/2020-03-18T00_08_10.800Z/0_6cbb5e49-dac9-4c15-ae7f-63ca8deb7d49_index.zip},
 dimensions=[level1_code, level2_code, level3_code, level4_code, level5_code, 
user_id], metrics=[cnt, merchant_create_cnt, store_create_cnt, audit_pass_cnt, 
realname_pass_cnt, terminal_bind_cnt, level5_average], 
shardSpec=NumberedShardSpec{partitionNum=0, partitions=0}, size=5019}]
   org.apache.druid.java.util.common.ISE: Failed to publish segments because of 
[java.lang.RuntimeException: Aborting transaction!].
        at 
org.apache.druid.segment.realtime.appenderator.BaseAppenderatorDriver.lambda$publishInBackground$8(BaseAppenderatorDriver.java:605)
 ~[druid-server-0.16.0-incubating.jar:0.16.0-incubating]
        at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
[?:1.8.0_242]
        at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) 
[?:1.8.0_242]
        at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
[?:1.8.0_242]
        at java.lang.Thread.run(Thread.java:748) [?:1.8.0_242]
   2020-03-18T00:23:34,974 ERROR [publish-0] 
org.apache.druid.indexing.seekablestream.SeekableStreamIndexTaskRunner - Error 
while publishing segments for sequenceNumber[SequenceMetadata{sequenceId=0, 
sequenceName='index_kafka_crm_bd_capacity_ds_374d70c186b714c_0', 
assignments=[], startOffsets={0=2487405, 1=2487408, 2=2487404}, 
exclusiveStartPartitions=[], endOffsets={0=2487410, 1=2487413, 2=2487408}, 
sentinel=false, checkpointed=true}]
   org.apache.druid.java.util.common.ISE: Failed to publish segments because of 
[java.lang.RuntimeException: Aborting transaction!].
        at 
org.apache.druid.segment.realtime.appenderator.BaseAppenderatorDriver.lambda$publishInBackground$8(BaseAppenderatorDriver.java:605)
 ~[druid-server-0.16.0-incubating.jar:0.16.0-incubating]
        at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
[?:1.8.0_242]
        at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) 
[?:1.8.0_242]
        at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
[?:1.8.0_242]
        at java.lang.Thread.run(Thread.java:748) [?:1.8.0_242]
   2020-03-18T00:23:34,978 INFO [task-runner-0-priority-0] 
org.apache.druid.segment.realtime.appenderator.AppenderatorImpl - Shutting down 
immediately...
   2020-03-18T00:23:34,980 INFO [task-runner-0-priority-0] 
org.apache.druid.server.coordination.BatchDataSegmentAnnouncer - Unannouncing 
segment[crm_bd_capacity_ds_2020-03-17T23:00:00.000Z_2020-03-18T00:00:00.000Z_2020-03-17T23:26:39.118Z]
 at 
path[/druid_olap/segments/druid-data-004:8100/druid-data-004:8100_indexer-executor__default_tier_2020-03-17T23:26:39.205Z_8163eaba53dd40318594fafa58ba16d90]
   2020-03-18T00:23:34,980 INFO [task-runner-0-priority-0] 
org.apache.druid.server.coordination.BatchDataSegmentAnnouncer - Unannouncing 
segment[crm_bd_capacity_ds_2020-03-18T00:00:00.000Z_2020-03-18T01:00:00.000Z_2020-03-18T00:08:10.800Z]
 at 
path[/druid_olap/segments/druid-data-004:8100/druid-data-004:8100_indexer-executor__default_tier_2020-03-17T23:26:39.205Z_8163eaba53dd40318594fafa58ba16d90]
   2020-03-18T00:23:34,980 INFO [task-runner-0-priority-0] 
org.apache.druid.curator.announcement.Announcer - unannouncing 
[/druid_olap/segments/druid-data-004:8100/druid-data-004:8100_indexer-executor__default_tier_2020-03-17T23:26:39.205Z_8163eaba53dd40318594fafa58ba16d90]
   2020-03-18T00:23:34,992 INFO [task-runner-0-priority-0] 
org.apache.druid.segment.realtime.firehose.ServiceAnnouncingChatHandlerProvider 
- Unregistering chat 
handler[index_kafka_crm_bd_capacity_ds_374d70c186b714c_ocaijcmm]
   2020-03-18T00:23:34,992 INFO [task-runner-0-priority-0] 
org.apache.druid.curator.discovery.CuratorDruidNodeAnnouncer - Unannouncing 
[DiscoveryDruidNode{druidNode=DruidNode{serviceName='druid/middleManager', 
host='druid-data-004', bindOnHost=false, port=-1, plaintextPort=8100, 
enablePlaintextPort=true, tlsPort=-1, enableTlsPort=false}, nodeType='PEON', 
services={dataNodeService=DataNodeService{tier='_default_tier', maxSize=0, 
type=indexer-executor, priority=0}, 
lookupNodeService=LookupNodeService{lookupTier='__default'}}}].
   2020-03-18T00:23:34,992 INFO [task-runner-0-priority-0] 
org.apache.druid.curator.announcement.Announcer - unannouncing 
[/druid_olap/internal-discovery/PEON/druid-data-004:8100]
   2020-03-18T00:23:34,997 INFO [task-runner-0-priority-0] 
org.apache.druid.curator.discovery.CuratorDruidNodeAnnouncer - Unannounced 
[DiscoveryDruidNode{druidNode=DruidNode{serviceName='druid/middleManager', 
host='druid-data-004', bindOnHost=false, port=-1, plaintextPort=8100, 
enablePlaintextPort=true, tlsPort=-1, enableTlsPort=false}, nodeType='PEON', 
services={dataNodeService=DataNodeService{tier='_default_tier', maxSize=0, 
type=indexer-executor, priority=0}, 
lookupNodeService=LookupNodeService{lookupTier='__default'}}}].
   2020-03-18T00:23:34,997 INFO [task-runner-0-priority-0] 
org.apache.druid.curator.announcement.Announcer - unannouncing 
[/druid_olap/announcements/druid-data-004:8100]
   2020-03-18T00:23:35,002 ERROR [task-runner-0-priority-0] 
org.apache.druid.indexing.seekablestream.SeekableStreamIndexTaskRunner - 
Encountered exception while running task.
   ```
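
For anyone hitting the same "Aborting transaction!" failure: the overlord rejects a `SegmentTransactionalInsertAction` when the Kafka offsets already recorded in the metadata store do not match the `startMetadata` the task submits, which typically means the stored `commit_metadata_payload` has drifted from what the task expects (e.g. after a partial reset or a stale checkpoint). A minimal sketch of that comparison, using the start offsets from the log above and a *hypothetical* stored payload (the `druid_dataSource` table name assumes the default metadata-table prefix, and the payload shape may differ between Druid versions):

```python
import json

# Start offsets the failing task submitted in its
# SegmentTransactionalInsertAction (copied from the log above).
task_start_offsets = {"0": 2487405, "1": 2487408, "2": 2487404}

# Hypothetical contents of commit_metadata_payload, as fetched with
# something like:
#   SELECT commit_metadata_payload FROM druid_dataSource
#   WHERE dataSource = 'crm_bd_capacity_ds';
# The "0" offset here is deliberately stale to illustrate a mismatch.
stored_payload = json.dumps({
    "type": "kafka",
    "partitions": {
        "type": "start",
        "stream": "crm.capacity.event",
        "partitionSequenceNumberMap": {"0": 2487403, "1": 2487408, "2": 2487404},
    },
})

stored_offsets = json.loads(stored_payload)["partitions"]["partitionSequenceNumberMap"]

# The overlord aborts the transaction when any partition's stored offset
# differs from the startMetadata the task submitted.
mismatches = {
    partition: {"stored": stored_offsets.get(partition), "task": offset}
    for partition, offset in task_start_offsets.items()
    if stored_offsets.get(partition) != offset
}
print(mismatches if mismatches else "offsets match; publish should not abort")
```

If the stored offsets really are out of sync with the stream, resetting the supervisor (`POST /druid/indexer/v1/supervisor/<id>/reset`) clears the stored metadata so the next task can publish, at the cost of reprocessing or skipping data.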

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
[email protected]


With regards,
Apache Git Services
