Adam, I'm not very familiar with that specific processor, but I think your case is probably far better handled with the record reader/writer processors anyway. There is a GrokReader, which reads each line of a given input, applying grok expressions to parse out key fields against your desired schema. Then there are writers for CSV, JSON, Avro, etc. There are also processors to partition records by like field values, validate that records match an expected structure, merge records, convert between formats, transfer records to/from Kafka, split records, and so on.
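For intuition (this is just a rough sketch, not NiFi's actual implementation): a grok expression like `%{IP:client} %{WORD:method} %{URIPATH:path}` effectively compiles down to a named-group regex applied once per line, which is why a record-oriented reader naturally handles every line rather than only the first. The field names and the sample pattern below are made up for illustration:

```python
import re

# Hypothetical stand-in for a grok pattern such as
# "%{IP:client} %{WORD:method} %{URIPATH:path}" —
# grok patterns expand to named-group regexes roughly like this.
LINE_PATTERN = re.compile(
    r"(?P<client>\d{1,3}(?:\.\d{1,3}){3}) "
    r"(?P<method>[A-Z]+) "
    r"(?P<path>\S+)"
)

def parse_records(text):
    """Apply the pattern to every line of the input
    (record-oriented), skipping lines that don't match."""
    records = []
    for line in text.splitlines():
        m = LINE_PATTERN.match(line)
        if m:
            records.append(m.groupdict())
    return records

log = "10.0.0.1 GET /index.html\n10.0.0.2 POST /api/v1"
print(parse_records(log))
```

The record approach means the splitting step happens inside the reader, so you avoid an explicit SplitText before parsing.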
Thanks
Joe

On Mon, Sep 25, 2017 at 6:46 PM, Adam Lamar <[email protected]> wrote:
> Hi there,
>
> I've been playing with the ExtractGrok processor and noticed I was missing
> some data that I expected to be extracted. After some investigation, it
> seems that ExtractGrok extracts only the first line of the flowfile content,
> and ignores the rest.
>
> Is this expected behavior? I should be able to use SplitText to break up the
> records, but it surprised me because other grok tools I've used have been
> line-oriented by default (at least from the perspective of the user).
>
> Cheers,
> Adam
