wengwh commented on pull request #7547:
URL: https://github.com/apache/druid/pull/7547#issuecomment-639233352


   Before upgrading, we used a Hadoop task and a Kafka indexing task to ingest data 
into the same datasource, and it worked.
   After upgrading to the new version, if a segment is created by the Hadoop task, 
its shardSpec is hashed,
   and when the Kafka task then tries to ingest into the same segment it fails with:
   org.apache.druid.java.util.common.ISE: Could not allocate segment for row 
with timestamp
   
   For now we have to run a compaction task to work around the exception.
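   As a sketch of that workaround (the datasource name and interval below are 
hypothetical, and the exact spec fields may vary by Druid version), the compaction 
task we submit to the overlord looks roughly like:

   ```json
   {
     "type": "compact",
     "dataSource": "my_datasource",
     "interval": "2020-06-01/2020-06-02"
   }
   ```

   Compaction rewrites the segments for the interval, after which the Kafka task 
can append again, but running it for every affected interval is tedious.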
   
   Is there any newer solution to this problem?
   


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]



---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
