Looking for feedback on whether two new sinks would be accepted by the
community.
1. InvokeHttpSink: would make an HTTP request per record, to insert or
update a remote endpoint
2. DistributedMapCacheSink: would insert records into the map cache, keyed
by a value given by a record path
I wouldn't mind https://github.com/apache/nifi/pull/4554 making it in. I have
seen a couple more instances where people have created custom processors to
work around the gap this PR addresses.
On Thu, Nov 26, 2020 at 11:10 AM Mike Thomsen
wrote:
> Also, for most of us (committers and community members), it's a
>
I also see this as a good idea, if only for portability. A dev environment
running in a Docker container can use an environment variable to set the
context, while in production, where the sensitive values need more
protection, they can be set manually or in another way.
Chris Sampson
Pierre,
I think this discussion brings up a valid conversation point. At some point
a PMC member needs to approve the merge request, so from a contributor's
level, what can we do to make that merge easier and/or more likely to
happen? That, and how the community can help filter down the ever
I am running a 1.11.4 instance with AdoptOpenJDK 11. The timestamps on
the data provenance events are off by roughly two months, while the system
date is correct.
Thanks for any help,
Phillip
Would it be reasonable to add the details of the failures to the flow file
attributes? I know they exist on the provenance event, but that cannot easily
be persisted to a file for later analysis and correction. It also seems
harder for non-developers and flow designers to locate.
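One way the failure path could look is sketched below. This is a minimal,
self-contained illustration: the FlowFile type and putAttribute helper are
stand-ins for the real NiFi API (ProcessSession.putAttribute returns an
updated FlowFile), and the attribute name "record.error.message" is a
hypothetical choice, not an existing NiFi key.

```java
import java.util.HashMap;
import java.util.Map;

// Stand-in for org.apache.nifi.flowfile.FlowFile, modeled as just an
// attribute map for this sketch.
class FlowFile {
    final Map<String, String> attributes = new HashMap<>();
}

public class FailureAttributes {
    // Stand-in for ProcessSession.putAttribute(flowFile, key, value),
    // which in real NiFi returns the updated FlowFile reference.
    static FlowFile putAttribute(FlowFile ff, String key, String value) {
        ff.attributes.put(key, value);
        return ff;
    }

    // On failure, record the exception message as an attribute so it
    // travels with the file and can be persisted for later analysis.
    static FlowFile handleFailure(FlowFile ff, Exception ex) {
        return putAttribute(ff, "record.error.message", ex.getMessage());
    }

    public static void main(String[] args) {
        FlowFile ff = handleFailure(new FlowFile(),
                new RuntimeException("parse error at row 7"));
        // prints "parse error at row 7"
        System.out.println(ff.attributes.get("record.error.message"));
    }
}
```

With the message on the flow file itself, a failure relationship routed to
PutFile (or similar) keeps the diagnostic alongside the data, instead of
only in the provenance repository.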
Thanks,
So, a couple of notes based on your debug statements.
"2.1 -> 2.2": If you do not plan to provide an input to this processor and
will run it on a cron or timer schedule, you can just call session.create()
at the top. This will create a new flow file. If you do expect an incoming
flow file, just return in the null check.
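The two onTrigger patterns above can be sketched as follows. The
ProcessSession and FlowFile types here are minimal stand-ins, modeling only
get() (which returns null when no flow file is queued) and create(); the
real NiFi ProcessSession has a much larger surface.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Stand-in for org.apache.nifi.flowfile.FlowFile.
class FlowFile {}

// Stand-in for org.apache.nifi.processor.ProcessSession, modeling only
// the two calls discussed: get() and create().
class ProcessSession {
    private final Deque<FlowFile> queue = new ArrayDeque<>();
    void enqueue(FlowFile ff) { queue.push(ff); }
    FlowFile get() { return queue.poll(); }      // null when nothing is queued
    FlowFile create() { return new FlowFile(); } // brand-new flow file
}

public class TriggerPatterns {
    // Source-style processor (cron/timer driven, no incoming connection):
    // create the flow file at the top of onTrigger.
    static FlowFile onTriggerAsSource(ProcessSession session) {
        return session.create();
    }

    // Processor that expects an incoming flow file: do the null check and
    // return early when nothing is queued.
    static FlowFile onTriggerWithInput(ProcessSession session) {
        FlowFile flowFile = session.get();
        if (flowFile == null) {
            return null; // nothing to do on this trigger
        }
        return flowFile;
    }

    public static void main(String[] args) {
        ProcessSession session = new ProcessSession();
        System.out.println(onTriggerAsSource(session) != null);  // true
        System.out.println(onTriggerWithInput(session) == null); // true: empty queue
        session.enqueue(new FlowFile());
        System.out.println(onTriggerWithInput(session) != null); // true
    }
}
```

The early return matters because a timer-driven processor with an incoming
connection can be triggered while its queue is empty; skipping the null
check leads to NullPointerExceptions on those idle triggers.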