Yes, that is what I was saying for PublishKafka, but it seems like it
only makes sense if you have one message per flow file, which is not
always the case.

If you have a flow file with 10k messages in it, and you are streaming
it to Kafka based on some delimiter, would you want to use the same
timestamp for all 10k messages?
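To make the per-record idea concrete, here is a rough sketch (plain Python, field and record names are purely illustrative) of the kind of coercion PublishKafkaRecord would need to do: take a designated field from each record and turn it into the epoch-millis long that Kafka's ProducerRecord timestamp expects:

```python
from datetime import datetime, timezone

def record_timestamp_millis(record: dict, field: str) -> int:
    """Extract a per-record timestamp field and coerce it to epoch millis,
    the long value Kafka expects for a producer record timestamp."""
    value = record[field]
    if isinstance(value, (int, float)):
        # already a numeric epoch value; assume millis by convention
        return int(value)
    # otherwise try to parse an ISO-8601 string
    dt = datetime.fromisoformat(value)
    if dt.tzinfo is None:
        dt = dt.replace(tzinfo=timezone.utc)
    return int(dt.timestamp() * 1000)

# each record carries its own event time, so each Kafka message
# would get its own timestamp rather than one per flow file
records = [
    {"event_time": "2018-01-29T10:00:00+00:00", "msg": "a"},
    {"event_time": 1517220000000, "msg": "b"},
]
for r in records:
    print(record_timestamp_millis(r, "event_time"))
```

This way every record in a 10k-message flow file keeps its own timestamp, which sidesteps the single-timestamp-per-flow-file problem above.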

On Thu, Feb 1, 2018 at 1:30 AM, Mika Borner <n...@my2ndhead.com> wrote:
> Thanks Bryan
>
> For PublishKafka, couldn't this be done by using an attribute that contains
> the timestamp?
>
> For PublishKafkaRecord this sounds reasonable.
>
> I guess, I will open an enhancement request then.
>
> Mika>
>
>
>
> On 01/30/2018 02:58 PM, Bryan Bende wrote:
>>
>> Hello,
>>
>> The timestamp in Kafka is separate from the headers, currently there
>> isn't a way to specify the timestamp from NiFi.
>>
>> For PublishKafkaRecord, I could see having an option to take the value
>> of a specified field from each record and make that the timestamp,
>> assuming it can be converted to a long.
>>
>> For PublishKafka, the processor doesn't have any knowledge about the
>> actual data, so the only thing that could be done here is to set the
>> same timestamp for every piece of data from the given flow file, which
>> doesn't seem as helpful.
>>
>> -Bryan
>>
>>
>> On Mon, Jan 29, 2018 at 5:37 AM, Mika Borner <n...@my2ndhead.com> wrote:
>>>
>>> Hi,
>>>
>>> Is it possible to set the producer record timestamp within the
>>> PublishKafka_1_0 / PublishKafkaRecord_1_0 processor?
>>>
>>> I tried to use the "Attributes to Send as Headers" option with a
>>> timestamp attribute, but this did not work. I'm not sure whether the
>>> producer record's timestamp is part of the headers.
>>>
>>> Appreciate any help.
>>>
>>> Thanks,
>>>
>>> Mika>
>
>
