Hi,

For SplitJson, the JsonPath expression should be "$" instead of "$.".
But as suggested, the best approach is to use the record processors.

Pierre
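
For what it's worth, a minimal plain-Python sketch (hypothetical sample data, not a NiFi API) of what SplitJson does when the JsonPath Expression is "$" and the incoming flow file is a top-level JSON array — each array element becomes its own split:

```python
import json

# Hypothetical sample: a top-level JSON array, the shape SplitJson
# expects when the JsonPath Expression is "$" (the whole document
# is the array to split).
doc = '[{"id": 1, "name": "a"}, {"id": 2, "name": "b"}]'

records = json.loads(doc)

# SplitJson with "$" emits one flow file per array element; this
# loop mimics that by serializing each element on its own.
splits = [json.dumps(rec) for rec in records]

for s in splits:
    print(s)
```

A trailing dot as in "$." leaves the path incomplete, which is consistent with the validation error reported further down the thread.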

Le 25 août 2017 19:06, "Austin Duncan" <adun...@pyaanalytics.com> a écrit :

> How would you use ConvertRecord?
>
> On Fri, Aug 25, 2017 at 2:02 PM, Austin Duncan <adun...@pyaanalytics.com>
> wrote:
>
>> Here's a sample flow. Sorry about it being vague; I just don't want to
>> risk it.
>>
>> On Fri, Aug 25, 2017 at 2:00 PM, Bryan Bende <bbe...@gmail.com> wrote:
>>
>>> Would it help to go directly from Avro to CSV using ConvertRecord?
>>>
>>> Rather than making two conversions and splitting.
>>>
>>> On Fri, Aug 25, 2017 at 1:53 PM, Austin Duncan <adun...@pyaanalytics.com>
>>> wrote:
>>> > It's HIPAA data, but I can probably do a workaround. Give me a minute.
>>> >
>>> > On Fri, Aug 25, 2017 at 1:51 PM, Joe Witt <joe.w...@gmail.com> wrote:
>>> >>
>>> >> Austin,
>>> >>
>>> >> Can you share the sample data and flow template by chance?
>>> >>
>>> >> Thanks
>>> >> Joe
>>> >>
>>> >> On Fri, Aug 25, 2017 at 1:50 PM, Austin Duncan <
>>> adun...@pyaanalytics.com>
>>> >> wrote:
>>> >> > Hey all,
>>> >> >
>>> >> > So I am converting an Avro schema to JSON, then trying to split the
>>> >> > JSON so that each item can be its own row in a CSV. I am unable to
>>> >> > split the JSON. I used a JSONPath expression tester and it seems
>>> >> > like the expression '$.' should work, but when I put it into the
>>> >> > SplitJson processor I get the error: "'' is invalid because Failed
>>> >> > to run validation due to
>>> >> > java.lang.StringIndexOutOfBoundsException: String index out of
>>> >> > range: 2." It seems like there should be a record "results" in the
>>> >> > schema, but it isn't present in the JSON and it won't let me split
>>> >> > on it. I have tried using the UpdateRecord processor but couldn't
>>> >> > figure out how to nest the schema objects within a new object
>>> >> > called "results". Any ideas?
>>> >> >
>>> >> > Thanks,
>>> >> > Austin
>>> >
>>> >
>>>
>>
>>
>