I would also add that the pattern of splitting to one record per
flowfile was common before the record-oriented processors existed. It
can, and generally should, be avoided now in favor of
processing/manipulating records in place and keeping them together in
large batches.
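As a rough sketch, the record-oriented replacement for the usual
convert-and-split chain looks something like this (ConvertRecord,
AvroReader, and JsonRecordSetWriter are real NiFi components; the
wiring shown is illustrative, not a complete flow):

```
Old pattern (avoid):
  ExecuteSQL -> ConvertAvroToJSON -> SplitJson -> per-record processing

Record-oriented pattern (prefer):
  ExecuteSQL (Avro out)
    -> ConvertRecord
         Record Reader: AvroReader
         Record Writer: JsonRecordSetWriter
    -> downstream record processors (QueryRecord, UpdateRecord, etc.)
       operating on the whole batch in one flowfile
```

The batch stays intact end to end, which avoids the per-flowfile
overhead that the split pattern incurs at scale.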



On Tue, Aug 7, 2018 at 9:10 AM, Andrew Grande <apere...@gmail.com> wrote:
> Careful, that makes too much sense, Joe ;)
>
>
> On Tue, Aug 7, 2018, 8:45 AM Joe Witt <joe.w...@gmail.com> wrote:
>>
>> I think we just need to make an ExecuteSqlRecord processor.
>>
>> thanks
>>
>> On Tue, Aug 7, 2018, 8:41 AM Mike Thomsen <mikerthom...@gmail.com> wrote:
>>>
>>> My guess is that it is due to the fact that Avro is the only record type
>>> that can match SQL pretty closely, feature for feature, on data types.
>>> On Tue, Aug 7, 2018 at 8:33 AM Boris Tyukin <bo...@boristyukin.com>
>>> wrote:
>>>>
>>>> I've been wondering since I started learning NiFi why the ExecuteSQL
>>>> processor only returns Avro-formatted data. All community examples I've
>>>> seen then convert Avro to JSON, and pretty much all of them then split
>>>> the JSON into multiple flowfiles.
>>>>
>>>> I found myself doing the same thing over and over and over again.
>>>>
>>>> Since everyone is doing it, is there a strong reason why Avro is liked
>>>> so much? And why does everyone continue with this three-step pattern,
>>>> rather than giving users an option to output JSON directly, and another
>>>> option to output one flowfile or multiple (one per record)?
>>>>
>>>> thanks
>>>> Boris
