If you set *message.timestamp.type* (or the broker-level default, *log.message.timestamp.type*) to LogAppendTime, this would make sense.
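For reference, a sketch of the settings in question (*message.timestamp.type* is the per-topic setting, *log.message.timestamp.type* the broker-wide default; the value for the max-difference setting below is just an illustrative choice, not the default):

```properties
# Broker-wide default: stamp each message with the broker's clock on append
log.message.timestamp.type=LogAppendTime

# Per-topic override (CreateTime keeps the producer-supplied timestamp instead)
message.timestamp.type=LogAppendTime

# With CreateTime, reject messages whose timestamp differs from the broker's
# clock by more than this many milliseconds (here: 10 minutes, as an example)
message.timestamp.difference.max.ms=600000
```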
I am new to Kafka, too, and I don't know what the behavior would be if this were set to CreateTime. There is also a *message.timestamp.difference.max.ms* setting, so there seems to be a certain "boundedness" to how much clock skew is allowed between the producer and the broker; you could implement various policies (min, max, etc.) on top of this API.

On Mon, Feb 19, 2018 at 7:36 AM, Xavier Noria <f...@hashref.com> wrote:

> In the mental model I am building of how Kafka works (new to this), the
> broker keeps offsets by consumer group, and individual consumers basically
> depend on the offset of the consumer group they join. Also, consumer groups
> may opt to start from the beginning.
>
> OK, in that mental model there is a linearization of messages per
> partition. As the documentation says, there is a total order per partition,
> and the order is based on the offset, unrelated to the timestamp.
>
> But I see the Java library has timestamp-oriented methods like:
>
> https://kafka.apache.org/0102/javadoc/org/apache/kafka/clients/consumer/Consumer.html#offsetsForTimes(java.util.Map)
>
> How does that make sense given the model described above? How is it
> implemented? Does the broker have built-in support for this? What happens
> if, due to race conditions or machines with clocks out of sync, you have
> messages with timestamps interleaved?
>
> Could anyone reconcile that API with the intrinsic offset-based contract?

--
Steve Jang
Principal Engineer
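To reconcile the two views: the broker does have built-in support for this. It keeps a time index per log segment (timestamp → offset entries), and offsetsForTimes returns, per partition, the earliest offset whose timestamp is greater than or equal to the target. A minimal sketch of that lookup in Python (simplified: the real broker searches per-segment .timeindex files, and the names here are illustrative, not Kafka's):

```python
import bisect

def offset_for_time(index, target_ts):
    """Return the earliest offset whose timestamp is >= target_ts.

    `index` is a list of (timestamp, offset) pairs sorted by offset.
    With LogAppendTime, timestamps are non-decreasing in offset order,
    so a binary search over them is valid; with CreateTime, producer
    timestamps may interleave, and the answer is defined by the index
    entries the broker kept, not by a global time ordering.
    """
    timestamps = [ts for ts, _ in index]
    pos = bisect.bisect_left(timestamps, target_ts)
    if pos == len(index):
        return None  # no message at or after target_ts
    return index[pos][1]

# Example: one partition's (timestamp, offset) entries
idx = [(1000, 0), (1005, 1), (1005, 2), (1010, 3)]
print(offset_for_time(idx, 1005))  # → 1 (earliest offset with ts >= 1005)
```

This is also why the total order per partition is untouched: the timestamp lookup only *translates* a time into an offset; consumption still proceeds in offset order from whatever offset the lookup returns.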