I think you will need to use an ExecuteScript processor to format
your data into attributes following the foo1=bar1 pattern. I'm assuming here
that you cannot predict what the key names 'foo1', 'foo2', etc. will
actually be. If you could predict those field names, an UpdateAttribute
processor would do the job.
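A minimal, NiFi-independent sketch of the formatting step such a script would perform (the attribute names and the joined output are assumptions based on the foo1=bar1 pattern above; inside ExecuteScript the pairs would come from the flow file's attribute map rather than a plain dict):

```python
# Sketch: turn a dict of extracted attributes into the "foo1=bar1" pattern.
# In a real ExecuteScript body these pairs would be read from the flow
# file's attributes; the dict here is a stand-in for illustration.
def format_attributes(attrs):
    # Sort for a deterministic order, then join each pair as key=value.
    return ",".join(f"{k}={v}" for k, v in sorted(attrs.items()))

pairs = {"foo1": "bar1", "foo2": "bar2"}
print(format_attributes(pairs))  # foo1=bar1,foo2=bar2
```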
Jeff,
Sounds like fair game and a welcome addition. Maybe just fire up a JIRA to
track the work so we can avoid duplicating efforts.
Thanks!
> On Jul 31, 2017, at 17:17, Jeff Zemerick wrote:
>
> I have a processor that ingests triples to Apache Rya. Would it be ok to
>
I have a processor that ingests triples to Apache Rya. Would it be ok to
work on a pull request to include this processor in NiFi? (I guess what I
am asking more broadly is whether there are any selection criteria that
determine what processors are included in NiFi?)
Thanks,
Jeff
Hi,
As Bryan said, you only need to run the command once. However, if it is run
from the same directory multiple times, and the nifi-key.key and nifi-cert.pem
files generated on the first run are not removed between runs, it will
use the same CA key to sign all the generated certificates.
I am parsing a text file with semi-formatted data. I use the SplitContent
processor to get each "block" of data (basically each object), and then
the ExtractText processor to get all the individual properties that will be
associated with that object.
So I might have a flowfile with data:
foo1:
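A self-contained sketch of the extraction that the ExtractText step would perform on one such block (the one-property-per-line "key: value" layout and the sample text are assumptions based on the truncated example above):

```python
import re

# Sketch: pull key/value properties out of one "block" of semi-formatted
# text, the way a per-property ExtractText regex would. The block layout
# (one "key: value" per line) is an assumption based on the example above.
def extract_properties(block):
    props = {}
    for line in block.splitlines():
        m = re.match(r"\s*(\w+)\s*:\s*(.+)", line)
        if m:
            props[m.group(1)] = m.group(2).strip()
    return props

block = "foo1: bar1\nfoo2: bar2"
print(extract_properties(block))  # {'foo1': 'bar1', 'foo2': 'bar2'}
```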
Hello,
I think you should only need to make one call to the toolkit, which should
generate the CA, the server certs, and the client cert all at the same
time. The -C flag is for the client cert, which you already specified on the
first call, so I think it was generated then.
By running it twice like above, the
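A sketch of the single standalone invocation described above (the hostname and client DN are placeholders; -n takes the server hostnames and -C the client certificate DN in the toolkit's standalone mode):

```shell
# Hedged sketch: one run that generates the CA, the server cert for the
# given hostname, and the client cert together. Values are placeholders.
./bin/tls-toolkit.sh standalone -n 'nifi1.example.com' -C 'CN=admin,OU=NiFi'
```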
Hello,
The PutParquet processor uses the Hadoop client to write to a filesystem.
For example, to write to HDFS you would have a core-site.xml with a
filesystem like:
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://yourhost</value>
</property>
And to write to a local filesystem you could have a core-site.xml with:
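The message is truncated at this point; presumably the local-filesystem variant used Hadoop's standard file:/// scheme. A sketch of what that property would look like (an assumption, not the author's exact snippet):

```xml
<!-- Sketch (assumption): point fs.defaultFS at the local filesystem
     using Hadoop's standard file:/// scheme. -->
<property>
  <name>fs.defaultFS</name>
  <value>file:///</value>
</property>
```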