I'm not sure that would solve the problem because you'd still be
limited to one directory. What most people are asking for is the
ability to use a dynamic directory from an incoming flow file.
I think we might be trying to fit two different use-cases into one
processor which might not make sense.
tation are
packaged separately, which is slightly different from what I was suggesting for
the Mongo case.
> On Mar 26, 2018, at 10:06 PM, Bryan Bende <bbe...@gmail.com> wrote:
>
> I’m a +1 for moving the Mongo stuff out of standard services.
>
> Controller service APIs
> NAR dependency.
>>>>>
>>>>> We might want to visit this as a much broader case, since I believe we
>>>>> could run into this with other services APIs (Elasticsearch, HBase,
>>>>> etc.)? Certainly when the Extension Registry becomes a t
Brian,
Is your custom processor using the MongoDBClientService provided by
NiFi's standard services API? Or does your NAR have a parent of
nifi-standard-services-api-nar in order to use other services?
Looking at where the Mongo JARs are from a build of master...
find work/nar/ -name "*mongo-java*.jar"
= redisConnectionPool.getConnection();
On Mon, Mar 26, 2018 at 11:58 AM, Mike Thomsen <mikerthom...@gmail.com> wrote:
> Yeah, it does. Copied withConnection from the state provider. Looks like
> copy-pasta may have struck again...
>
> On Mon, Mar 26, 2018 at 11:44 AM, Bryan Bende <bb
You might be able to get the nifi-kafka-0-10-nar from 1.5.0 and run it in 1.4.0.
On Mon, Mar 26, 2018 at 11:28 AM, Milan Das <m...@interset.com> wrote:
> Hi Bryan,
> We are using NIFI 1.4.0. Can we backport this fix to NIFI 1.4?
>
> Thanks,
> Milan Das
>
> On 3/26/18, 1
I can't tell for sure, but the stacktrace looks like your
AbstractRedisProcessor is making a direct call to RedisUtils to create
a connection, rather than using the RedisConnectionPool to obtain a
connection.
On Mon, Mar 26, 2018 at 11:38 AM, Bryan Bende <bbe...@gmail.com> wrote:
> Can
Can you share the code for your AbstractRedisProcessor?
On Mon, Mar 26, 2018 at 9:52 AM, Mike Thomsen wrote:
> Over the weekend I started playing around with a new processor called
> PutRedisHash based on a request from the user list. I set up a really
> simple IT and
Hello,
Passing LDAP credentials in plain-text over http would not be secure.
You'll want to have the SSL connection pass through the load balancer
all the way to the NiFi nodes.
There are several articles on setting up a secure NiFi cluster:
Hello,
What version of NiFi are you using?
This should be fixed in 1.5.0:
https://issues.apache.org/jira/browse/NIFI-4639
Thanks,
Bryan
On Sun, Mar 25, 2018 at 6:45 PM, Milan Das wrote:
> Hello Nifi Users,
>
> Apparently, it seems like PublishKafkaRecord_0_10 doesn't
:38 AM -0500, Pierre Villard
>>> <pierre.villard...@gmail.com>, wrote:
>>>> -1 (binding)
>>>>
>>>> I confirm the issue mentioned by Bryan. That's actually what Matt and I
>>>> experienced when trying the PR about the S2S Metrics Reporting task [
pected with no new comments to add.
>>
>> -- Mike
>>
>>
>> On Fri, Mar 23, 2018 at 4:02 PM, Scott Aslan <scottyas...@gmail.com>
>> wrote:
>>
>> > +1 (binding)
>> >
>> > - Ran through release helper
>> > - Setup
+1 (binding)
- Ran through release helper and everything checked out
- Verified some test flows with the restricted components + keytab CS
On Fri, Mar 23, 2018 at 2:42 PM, Mark Payne wrote:
> +1 (binding)
>
> Was able to verify hashes, build with contrib-check, and start
the
>>>> flow file?
>>>>
>>>>
>>>> On March 20, 2018 at 10:39:37, Jorge Machado (jom...@me.com) wrote:
>>>>
>>>> So that is what we are actually doing with EvaluateJsonPath; the problem with
>>>> that is that it is hard
...@me.com) wrote:
>>>
>>> So that is what we are actually doing with EvaluateJsonPath; the problem with
>>> that is that it is hard to build something generic if we need to specify each
>>> property by name; that's why this idea.
>>>
>>> Should
>>> def obj = slurper.parseText(text)
>>> obj.each { k, v ->
>>>     if (v != null && v.toString() != "") {
>>>         attrs[k] = v.toString()
>>>     }
>>> }
>>> } as InputStreamCallback)
>>>
is done by groovy script) but then
> would be nice to use this standard processor and, instead of writing this to the
> flow content, write it to attributes.
>
>
> Jorge Machado
>
>
>
>
>
>> On 20 Mar 2018, at 14:47, Bryan Bende <bbe...@gmail.com>
What would be the main use case for wanting all the flattened values
in attributes?
If the reason was to keep the original content, we could probably just
add an "original" relationship.
Also, I think FlattenJson supports flattening a flow file where the
root is an array of JSON documents.
ava sdk core in a dependency but also have to depend on the nifi-aws-nar.
>
>
> On March 2, 2018 at 13:40:21, Bryan Bende (bbe...@gmail.com) wrote:
>
> Doug,
>
> I think the only solution is what you proposed about fixing the
> nifi-gcp-bundle...
>
> Basically, if a NAR
Toivo,
I think there needs to be some improvements around variables &
sensitive property handling, but it is a challenging situation.
Some things you could investigate with the current capabilities..
- With the registry scenario, you could define a DBCPConnectionPool at
the root process group
You may want to consider moving from templates to NiFi Registry for
your deployment approach. The idea of this approach is that your flow
will get saved to the registry with no sensitive values; when you import
the flow into the next environment you enter the sensitive values there
the first time and
Toivo,
The password property on DBCPConnectionPool is a "sensitive" property
which means it is already encrypted in the flow.xml.gz using
nifi.sensitive.props.key.
Are you saying you are trying to externalize the value outside the
flow and keep it encrypted somewhere else?
-Bryan
On Mon, Mar
+1
On Fri, Mar 9, 2018 at 3:11 PM, Joe Witt wrote:
> +1
>
> On Mar 9, 2018 3:10 PM, "Scott Aslan" wrote:
>
> All,
>
> Following a solid discussion for the past couple of weeks [1] regarding the
> establishment of Fluid Design System as a sub-project of
NiFi is not a single WAR that can be deployed somewhere. You should
think of it like other software that you install on your system, for
example a relational database. You wouldn't expect to deploy your
Postgres DB to your WildFly server.
On Wed, Mar 7, 2018 at 9:00 AM, Mike Thomsen
Making a call to "/process-groups/root" should retrieve the root
process group which should then have an id element.
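That lookup is a plain HTTP call. A sketch (host, port, and the simulated JSON response are illustrative and assume an unsecured NiFi; they are not from this thread):

```shell
# Real call (needs a running NiFi):
#   curl -s http://localhost:8080/nifi-api/process-groups/root
# The JSON response carries the root group's id; extracting it looks like this
# (a canned response is echoed here so the pipeline is visible end to end):
echo '{"id":"root-group-id","component":{"name":"NiFi Flow"}}' \
  | python3 -c 'import json,sys; print(json.load(sys.stdin)["id"])'
```

The extracted id is what you then use in subsequent `/process-groups/{id}` calls.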
On Mon, Feb 26, 2018 at 5:20 PM, Daniel Hernandez
wrote:
> Thanks Matt,
>
> I get now what is the problem, in order to exhaust all my
Hello,
Your custom processor would be the same as if you were writing an
external client program.
You would need to provide the processor with a username and password
in the processor properties, and then it would need to make a call to
the token REST end-point.
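A hedged sketch of that token exchange, as any external client would do it (the host, credentials, and placeholder token are made up for illustration):

```shell
# Step 1 (commented out; needs a running, secured NiFi at the address shown):
#   TOKEN=$(curl -sk https://nifi.example.com:9443/nifi-api/access/token \
#     --data-urlencode 'username=myuser' --data-urlencode 'password=mypass')
# Step 2: send the returned JWT as a bearer token on later REST calls.
TOKEN="example.jwt.token"   # placeholder standing in for step 1's response body
echo "Authorization: Bearer $TOKEN"
```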
Processors don't run as the user
You should be able to include a canned flow.xml.gz in your
container, just have nothing under the root group.
On Mon, Feb 26, 2018 at 3:50 PM, Matt Gilman wrote:
> Daniel,
>
> Unfortunately, there is no way to set this currently. This is ultimately a
> lifecycle
As a possible work around, there were date functions added to record path
in 1.5.0, so if you had a schema that treated the field as a string, you
could reformat the column in place using UpdateRecord to get it into
whatever format it needs to be in.
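As a sketch of that workaround in UpdateRecord (the field name and date formats are made up; toDate and format are the record path functions added in 1.5.0):

```
Replacement Value Strategy: Record Path Value
/eventDate  =>  format( toDate( /eventDate, "yyyy-MM-dd HH:mm:ss" ), "yyyy/MM/dd" )
```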
On Tue, Feb 13, 2018 at 9:17 PM Koji Kawamura
Currently it means that the dataflow manager/developer is expected to
set the 'Execution Nodes' strategy to "Primary Node" at the time of
flow design.
We don't have anything that restricts the scheduling strategy of a
processor, but we probably should consider having an annotation like
I agree more with Andy about sticking with Java. The more varying languages
used, the more challenging it is to maintain. Once the code is part of the
Apache NiFi git repo, it is now the responsibility of the committers and
PMC members to maintain it.
I’d even say I am somewhat against the
Hello,
Is there a specific issue/problem you are trying to figure out?
If you are just interested in how it works, the main code to look at
would be in FlowController in the "reload" methods, here is the one
for a processor node:
using
> toolkit?. I did not find any good post that talks end to end from
> installing to making it secure using tls toolkit.
>
> Any help is appreciated.
>
> Thanks
> Anil
>
>
>
> On Wed, Jan 31, 2018 at 6:42 PM, Bryan Bende <bbe...@gmail.com> wrote:
>
Hello,
The identity in authorizers.xml for your initial admin does not match the
identity of your client cert.
You should be putting “CN=TC, OU=NIFI” as the initial admin because that is
the DN of your client cert.
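For reference, the relevant authorizers.xml entry would look roughly like this (a sketch; which provider element it lives on depends on the NiFi version):

```xml
<property name="Initial Admin Identity">CN=TC, OU=NIFI</property>
```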
You’ll need to stop NiFi, edit authorizers.xml, delete users.xml and
I definitely agree with all of these points.
With our current setup, the only way a committer can close a PR is by
issuing a commit with the magic "This closes ..." clause. The
submitter of the PR is the only one who can actually close it in
GitHub.
I don't want to hijack the discussion with a
In the default case, "Connection Per Flow File" is false, which means
one connection is created and used across many flow files, which
performs best.
Setting "Connection Per Flow File" to true means it will close the
connection at the end of every onTrigger call.
We could potentially
Hello,
Can you take a couple of thread dumps while this is happening and provide
them so we can take a look?
You can put a file name as the argument to nifi.sh dump to have it written
to a file.
Thanks,
Bryan
On Wed, Jan 24, 2018 at 6:48 AM we are wrote:
> Hi,
>
>
+1 binding
Ran through everything in the release helper and looked good, thanks!
On Fri, Jan 19, 2018 at 3:03 PM, Matt Gilman wrote:
> +1 Release this package as minifi-0.4.0
>
> Verified hashes, signature, build, etc. Ran sample flows and everything
> looks good.
>
>
>> The first is a histogram of NAR file size in buckets of 10MB.
>>> The
>>> > > >>>>> second
>>> > > >>>>>>> basically is similar to a cumulative distribution, the x axis
>>> is
>>> > > >> the
>> Instead, when we declared a dependency on nifi-standard-services-api-nar,
>> provided scope worked OK, and everything else as well.
>>
>> Thanks for your help!
>> Martin.
>>
>> 2018-01-12 16:29 GMT+01:00 Bryan Bende <bbe...@gmail.com>:
>>
Long term I'd like to see the extension registry take form and have
that be the solution (#3).
In the more near term, we could separate all of the NARs, except for
framework and maybe standard processors & services, into a separate
git repo.
In that new git repo we could organize things like Joe
In addition to what Matt said, the reason nifi-record is marked as
provided is because it is part of nifi-standard-services-api-nar, and
if your NAR was going to do anything with a record reader/writer you
would have a NAR dependency on nifi-standard-services-api-nar so at
runtime that is where
+1 (binding)
- Ran through release helper with no issues
- Ran into a minor issue related to component versioning when using
the registry and created this JIRA [1], would be more of an issue for
next release
[1] https://issues.apache.org/jira/browse/NIFI-4763
On Wed, Jan 10, 2018 at 10:05 AM,
On behalf of the Apache NiFi PMC, I am very pleased to announce that
Kevin Doran has accepted the PMC's invitation to become a committer on
the Apache NiFi project. We greatly appreciate all of Kevin's hard
work and generous contributions to the project. We look forward to his
continued
Apache NiFi Community,
I am pleased to announce that the 0.1.0 release of Apache NiFi
Registry passes with:
12 +1 (binding) votes
4 +1 (non-binding) votes
0 0 votes
0 -1 votes
Thanks to all who helped make this release possible.
Here is the PMC vote thread:
>> process group under version control, saving different versions, changing
>> versions, importing a version, stopping version control
>>
>> Awesome initial release for this project!
>>
>> Drew
>>
>>
>> > On Dec 28, 2017, at 1:09 PM, Bryan Ben
Hello Apache NiFi community,
Please find the associated guidance to help those interested in
validating/verifying the Apache NiFi Registry release so they can
vote.
# Download latest KEYS file:
https://dist.apache.org/repos/dist/dev/nifi/KEYS
# Import keys file:
gpg --import KEYS
# [optional]
Hello,
I am pleased to be calling this vote for the source release of Apache
NiFi Registry 0.1.0.
The source zip, including signatures, digests, etc. can be found at:
https://repository.apache.org/content/repositories/orgapachenifi-1115/
The Git tag is nifi-registry-0.1.0-RC1
The Git commit ID
ack
> to normal. I’m wondering if disconnecting PG B shouldn’t be considered as a
> local change to be committed? Because, I could be in a situation where I
> don’t want to delete PG B, I just want to stop version control on it, no?
>
> I'll run some more tests in secured environments
steven.k.byers@mail.mil
>
>
>
> -Original Message-
> From: Bryan Bende [mailto:bbe...@gmail.com]
> Sent: Tuesday, December 26, 2017 11:25 AM
> To: dev@nifi.apache.org
> Subject: [Non-DoD Source] Re: Moving from version 1.1.2 to 1.4.0
>
> Hello,
>
> This mea
Hello,
This means your custom NAR is bundling the standard processors jar and as a
result they are getting discovered twice, once from your NAR and once from
the standard NAR.
You’ll have to look at your maven dependencies for your custom NARs and
figure out why the dependency on standard
Hello,
Does your processor happen to have a @TriggerWhenEmpty annotation on it?
That would cause it to always execute regardless of what is in the queue,
so just wanted to rule that out.
Thanks,
Bryan
On Fri, Dec 22, 2017 at 12:45 PM, Oleksi Derkatch wrote:
>
on 2, upgrade the other one to
version 2, etc.
Hope that helps.
-Bryan
On Fri, Dec 8, 2017 at 9:19 AM, Bryan Bende <bbe...@gmail.com> wrote:
> Mike,
>
> You brought up a good point... documentation is one of the things that
> still needs to be done.
>
> There is some inf
>
>> > > On 12/7/17, 10:45, "Joe Witt" <joe.w...@gmail.com> wrote:
>> > >
>> > > Bryan - very exciting and awesome. Having experimented with the
>> > > registry on the JIRAs/PRs you mention I must say this is going to
>>
Hey folks,
There has been a lot of great work done on the NiFi Registry [1] and I
think we are probably very close to an initial release focused on
storing "versioned flows".
Since NiFi will have a dependency on client code provided by the
registry, the first release of the registry would need
I think there is an open PR for a "MoveHDFS" processor that might do
what you are describing, but currently I think you'd have to use
ExecuteScript to issue an hdfs mv command.
If you are interested in trying to fix the code for PutParquet, then I
would suggest trying to add an overwrite
Hello,
As far as I know there is not an option in Parquet to append due to
the way it's internal format works.
The ParquetFileWriter has a mode which only has CREATE and OVERWRITE:
Hello,
I haven't verified this against HDFS yet, but this may be a bug in the
processor...
The value of "Overwrite Files" is being passed to the Parquet Writer
to put it in "overwrite" mode, but since we first write a temp file,
but this would only help to overwrite the temp file if it was
Jamie,
You can definitely implement your own LoginIdentityProvider...
It should work just like any other extension point, meaning you build
a NAR with your extension in it and drop it in the lib directory.
We don't have an archetype for this, but you could probably start with
the processor
In general that approach should work, there were a few community
efforts that did something like this in the past [1][2].
For the RPG, you may need to substitute another value as well, because
I believe the template also contains the UUID of the ports it is
connected to, which will be different
Currently, there is the variable properties file which would require a
service restart and also would need to be on all nodes in a cluster.
The last release (1.4.0) added a more user-friendly variable registry
in the UI which you can access from the context palette for a given
process group, near
Hello,
Regarding Remote Process Groups, this is definitely something that
needs to be improved. There is a JIRA to make the URL editable [1].
A significant amount of work has been done on the flow registry [2],
and this will become the primary way to deploy flows across
environments.
The
Mark,
I believe that property is no longer used...
Grep'ing the source tree for it shows a few lingering references in
the admin guide and in src/test/resources, but nothing in regular
code.
It may be residual from the 0.x clustering model that was removed
during the 1.0.0 release.
-Bryan
On
>> nifi-framework-api-1.3.0.jar
>> nifi-JSONCondenser-processors-0.1.jar
>> nifi-nar-utils-1.3.0.jar
>> nifi-properties-1.3.0.jar
>> nifi-runtime-1.3.0.jar
>> slf4j-api-1.7.25.jar
>>
>>
>>> On 7 Nov 2017, at 5:13 am, Bryan Bende <bbe...@gmail.com>
aven.apache.org/xsd/maven-4.0.0.xsd;>
> 4.0.0
>
>
> com.jidmu
> JSONCondenser
> 0.1
>
>
> nifi-JSONCondenser-nar
> 0.1
> nar
>
> true
> true
>
>
>
>
> com.jid
It is most likely an issue with the Maven configuration in one of your modules.
Can you share your project, or the pom files for the processors, NAR,
and bundle?
Thanks,
Bryan
On Mon, Nov 6, 2017 at 12:20 PM, Phil H wrote:
> Nifi version is 1.3.0, running on Java
each flow file
> could potentially have a different sftp host address in the queues.
>
> All together we have to pull from about 60 servers. If this doesn't work
> out with the list/fetch I plan to have a micro acquisition cluster just
> for gets.
>
> Ryan
>
> O
Ryan,
Personally I don't have experience running these processors at scale,
but from a code perspective they are fundamentally different...
GetSFTP is a source processor, meaning it is not being fed by an upstream
connection, so when it executes it can create a connection and
retrieve up to
Mike,
Regarding the licensing, I believe LGPL is a no-go for Apache projects.
Take a look here:
https://www.apache.org/legal/resolved.html#category-x
-Bryan
On Sat, Oct 28, 2017 at 4:47 PM, Mike Thomsen wrote:
> The processor breaks down a much larger file into a huge
Hi Fredrik,
These are some good ideas.
If we did support multiple initial admins, I would suggest it be done
through multiple elements rather than a comma-separated list, since
commas are part of a DN, so a single list entry could itself be one user's DN.
We already support this pattern in the new user group provider:
If you can provide an example message we can try to see why
ListenSyslog says it is invalid.
I'm not sure that will solve the issue, but would give you something
else to try.
On Thu, Oct 19, 2017 at 8:38 AM, Andrew Psaltis
wrote:
> Dave,
> To clarify you are using the
> reporting-task/service/processor?
>
>
> On Wed, Oct 11, 2017 at 8:48 PM, Bryan Bende <bbe...@gmail.com> wrote:
>
>> I just added you to the contributors list in JIRA so you should be
>> able to assign things to yourself.
>>
>> I think initially putting all the
module so
> that anyone who wants to implement their own service can do so without
> including modules they don't need.
>
> By the way, if it's OK of course, could you please add me to the jira so
> that the issue can be assigned to me once opened? Thank you!
>
> On Wed, 11 Oct 2017 at
Omer,
I think adding the new versions that implement the new
MetricReporterService, and marking the old ones as deprecated makes
sense. They could potentially be removed on a major future release
like 2.0.0.
Were you envisioning the DataDogMetricReportService and
AmbariMetricReportingService
Peter,
The images didn’t come across for me, but since you mentioned that a failure
queue is involved, is it possible all the flow files going to failure are being
penalized which would cause them to not be processed immediately?
-Bryan
> On Oct 8, 2017, at 10:49 PM, Peter Wicks (pwicks)
is prefixed with
> the length of the message.
>
> -Clay
>
> On Thu, Oct 5, 2017 at 8:22 AM, Bryan Bende <bbe...@gmail.com> wrote:
>
>> Have you tried using the template?
>>
>> https://gist.githubusercontent.com/bbende/fa2bff34e721fef2145398633
Ben,
1) Yes, the variables are hierarchical, so a variable at the root group would
be visible to all components, unless there is a variable with the same name at
a lower level which would override it.
2) I haven’t tried this, but I would expect that you should still be able to
use
Uwe,
I don't think there is specific documentation on how to write code
using the record readers and writers, but the best example to look at
would be ConvertRecord
ConvertRecord actually extends from AbstractRecordProcessor:
y on
> the processor). I'd appreciate if you let me what I am doing wrong.
>
> thanks
> Clay
>
> On Wed, Oct 4, 2017 at 9:14 AM, Bryan Bende <bbe...@gmail.com> wrote:
>
>> Hello,
>>
>> I wrote a post that shows an example of using ListenTCPRecord with a
Hello,
I wrote a post that shows an example of using ListenTCPRecord with a
GrokReader to receive multi-line log messages. There is a link to a
template of the flow at the very end.
You could easily change the example so that PutTCP is sending a single
JSON document, or an array of JSON
+1 (binding)
- Ran through the release helper and everything checked out.
- Ran a couple of sample flows with no issues
On Fri, Sep 29, 2017 at 9:46 AM, James Wing wrote:
> Jeff, I agree the updated KEYS file has been published. Thanks.
>
> On Fri, Sep 29, 2017 at 6:00 AM,
I think the reason for the upgrade issue was the following...
Normally there is an automatic upgrade of component versions, with the
following logic:
- If the flow says you are using version X of a component, and during
startup version X is not found, but version Y is found, and version Y
is the
Hello,
You can run a standard HTTP load-balancer in front of ListenHTTP and have
your producers use the URL of the load-balancer.
Nginx or apache httpd can be used.
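A minimal nginx sketch of that setup (all hostnames and ports are illustrative, assuming two NiFi nodes each running ListenHTTP on 8081):

```nginx
upstream nifi_listen_http {
    server nifi-node1.example.com:8081;
    server nifi-node2.example.com:8081;
}
server {
    listen 80;
    location / {
        # Producers POST to the load balancer; nginx round-robins to the nodes.
        proxy_pass http://nifi_listen_http;
    }
}
```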
Thanks,
Bryan
On Tue, Aug 29, 2017 at 11:40 AM, mayank rathi
wrote:
> Does this help?
>
> [image:
Mark,
I don't believe there is currently anything like this in Authorizer API.
You would likely have to build something similar to what processors have...
In ProcessorInitializationContext they get access to a NodeType which
tells them if they are currently primary or not.
Then they can
Ben,
I apologize if I am not understanding the situation, but...
In the case where your OnScheduled code is in a retry loop, if someone
stops the processor it will call your OnUnscheduled code which will
set the flag to bounce out of the loop. This sounds like what you
want, right?
In the case
The way controller services are setup you have the following...
- DBCPService interface (provides getConnection()) extends
ControllerService interface (empty interface to indicate it is a CS)
- DBCPConnectionPool extends AbstractControllerService implements DBCPService
- Processor XYZ depends on
Toivo,
Besides the Jetty NAR and the framework NAR, you should be able to
remove most of the other NARs without any negative impact.
The hope is that eventually most of these NARs can live in an
extension repository and then people can pick and choose which
processors to add to their
Hello,
I think I see what the problem is now...
The exception in your second email is coming from the CSV writer which
is set to get the schema from the "Schema Text" property, which is in
turn set to ${avro.schema}.
I believe what you showed in the section ### FLOWFILE ### is the
content of
Hello,
I'm assuming this error came from #2 when you tried to use Schema Text
set to ${avro.schema} ?
The error means your flow file doesn't have an attribute called
avro.schema, which it would need to have if you reference
${avro.schema}.
What were the results using Embedded Avro Schema? That
Hello,
I think you should only make one call to the toolkit which should
generate a CA, the server certs, and the client cert all at the same
time. The -C flag is for the client cert which you already had on the
first call so I think it generated it already.
By running it twice like above, the
Hello,
The PutParquet processor uses the Hadoop client to write to a filesystem.
For example, to write to HDFS you would have a core-site.xml with a
filesystem like:
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://yourhost</value>
  </property>
And to write to a local filesystem you could have a core-site.xml with:
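The message is truncated here, but the standard Hadoop value for the local filesystem is the file:// scheme; a sketch:

```xml
<property>
  <name>fs.defaultFS</name>
  <value>file:///</value>
</property>
```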
I agree with encouraging reviews from everyone, but I lean towards
"binding" reviews coming from committers.
If we allow any review to be binding, there could be completely
different levels of review that occur...
There could be someone who isn't a committer yet, but has been
contributing
Scott,
Thanks for providing the stacktrace... do any of your custom
processors use the @DefaultSchedule annotation? And if so, do any of
them set the tasks to a number less than 1?
The exception you are getting is from some code that is preventing
using 0 or negative number of tasks for a
Steve,
In 1.2.0 there were some new processors added called Wait/Notify...
With those you could send your original JSON (before splitting) to a
Wait processor and tell it to wait until the signal count is equal to
the number of splits, then you could put a Notify processor right
after PutMongo
Mike,
I don't know of any work being done or any JIRAs that exist for this,
but seems like it would be good to support them. Most likely its just
that no one has asked for it yet.
I'd go ahead and create a JIRA, or if you were planning to incorporate
it into the HBase record processors then that
Rohith,
Can you share more details about how you have configured PutParquet?
What Record Reader are you using and what Schema Access Strategy?
If your data is already in Avro then you would need to set the Record
Reader to an AvroRecordReader. The AvroRecordReader can be configured
to use the
Yes, I think running NiFi on edge nodes would make sense, this way
they can access the public network to receive data, but also access
HDFS on the private network.
On Fri, Jun 23, 2017 at 4:24 PM, Mothi86 wrote:
> Hi Bryan,
>
> Greetings and appreciate your instant reply.
Hello,
Every node where NiFi is running must be able to connect to the data
node process on every node where HDFS is running. I believe the
default port for the HDFS data node process is usually 50010.
I'm assuming your 4 worker nodes are running HDFS, so NiFi would have
to access those.
-Bryan
Hi Michael,
Those two points are definitely good ones to help make the decision.
For the second point, it doesn't necessarily have to be an enormous
set of dependencies, it could be just one dependency that the new
processors need that is in conflict with what the standard processors
are already
Hi Chris,
I think a good place for an abstract record processor would be in the
standard record utils under nifi-nar-bundles/nifi-extension-utils:
https://github.com/apache/nifi/tree/master/nifi-nar-bundles/nifi-extension-utils/nifi-record-utils/nifi-standard-record-utils
We've taken a similar
Joe,
Thanks for the info.
I think this might be similar to
https://issues.apache.org/jira/browse/NIFI-3900, but the fix didn't
account for RPGs.
I created this JIRA - https://issues.apache.org/jira/browse/NIFI-4075
-Bryan
On Thu, Jun 15, 2017 at 10:14 AM, Joe Gresock