a2l007 commented on issue #11179:
URL: https://github.com/apache/druid/issues/11179#issuecomment-843279504


   For append tasks such as Kafka indexing, segment identifiers are allocated via the overlord's allocate API. This differs from the overwrite task behavior because append tasks cannot change the segment granularity of existing segments.
   Therefore, during the allocate action, the overlord first tries to find an existing segment to which the current row can be added. If that is not possible, it tries to fit the row into one of the [predefined granularities](https://github.com/apache/druid/blob/master/core/src/main/java/org/apache/druid/java/util/common/granularity/Granularities.java) that is closest to the segment granularity defined in the ingestion spec without being coarser than it. In your case, the closest predefined granularity finer than 3H is [HOUR](https://github.com/apache/druid/blob/master/core/src/main/java/org/apache/druid/java/util/common/granularity/Granularities.java#L34), which is why an hourly segment got created. It looks like the relevant documentation needs to be clarified on this point.
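   The fallback described above can be sketched roughly as follows. This is not Druid's actual implementation (which lives in the Java code linked above); the granularity names mirror `Granularities.java`, the durations are nominal, and `closest_finer_granularity` is a hypothetical helper name:

```python
from datetime import timedelta

# A subset of Druid's predefined granularities with nominal durations
# (assumption: names taken from Granularities.java for illustration).
PREDEFINED = {
    "MINUTE": timedelta(minutes=1),
    "FIFTEEN_MINUTE": timedelta(minutes=15),
    "THIRTY_MINUTE": timedelta(minutes=30),
    "HOUR": timedelta(hours=1),
    "SIX_HOUR": timedelta(hours=6),
    "DAY": timedelta(days=1),
}

def closest_finer_granularity(requested: timedelta) -> str:
    """Pick the coarsest predefined granularity that is still finer than
    or equal to the requested one -- a sketch of the fallback the
    overlord's allocate action applies for append tasks."""
    candidates = [(name, d) for name, d in PREDEFINED.items() if d <= requested]
    # The coarsest candidate is the one "closest" to the requested granularity.
    return max(candidates, key=lambda nd: nd[1])[0]

print(closest_finer_granularity(timedelta(hours=3)))  # -> HOUR
```

   With a requested segment granularity of 3H, SIX_HOUR and DAY are too coarse, so HOUR is selected, matching the behavior observed in the issue.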


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]
