Thank you Adam and James. This has been very helpful, and gives me a number
of options to explore. I am all set, thanks again for your help! -Jim
On Fri, Mar 17, 2017 at 5:33 PM, Adam Lamar wrote:
> Jim,
>
> Absolutely that's one way. Depending on how many directories you have, you
> can also do
Jim,
Absolutely that's one way. Depending on how many directories you have, you
can also do it directly with RouteOnAttribute and the expression language:
Property name: s3exists
Property value: ${outputTarget:equals('foo'):or(${outputTarget:equals('bar')})}
Then route the s3exists relationship to
So keep my list in a python script dictionary called by an ExecuteScript
processor, and toss my outputTarget value against that. Set a new attribute
s3exists to true or false in my script based on that result, and then use
RouteOnAttribute to direct the output. Is that what you have in mind? -Jim
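The lookup Jim describes could be sketched roughly as below. Names like KNOWN_TARGETS are made up for illustration; inside NiFi's ExecuteScript (Jython) the value would come from flowFile.getAttribute('outputTarget') and the result would be written back with session.putAttribute, but the core check is plain Python:

```python
# Hypothetical dictionary of S3 "directories" already known to exist.
KNOWN_TARGETS = {"foo": True, "bar": True}

def s3exists(output_target):
    # Return the string 'true' or 'false', since NiFi attribute values
    # are strings and RouteOnAttribute would match on them.
    return "true" if output_target in KNOWN_TARGETS else "false"
```

RouteOnAttribute could then route on ${s3exists:equals('true')}.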
On
Jim,
Also keep in mind that as an object store, S3 uses "directories" only as a
grouping concept, and not as a hierarchical storage mechanism. That's why the
initial PutS3Object doesn't fail with a new "directory". See
http://docs.aws.amazon.com/AmazonS3/latest/UG/FolderOperations.html
I think Jame
Hmmm. Thank you James - I'm certainly willing to give this a try. Do you
have a link to an example that does a lookup against a DistributedMapCache
and takes two different workflow paths depending on the outcome? -Jim
On Fri, Mar 17, 2017 at 2:42 PM, James Wing wrote:
> Jim,
>
> You could use Li
Frank,
The short answer is that in the current setup it is not possible to
get access to the http request from inside the LoginIdentityProvider.
The longer complicated answer...
The LoginIdentityProvider is part of the nifi-api module which is in a
JAR in the lib directory. The http related clas
Jim,
You could use ListS3 to get existing S3 keys, then parse out the
'directories', and put the directories in a key/value store for a lookup
(like DistributedMapCache). But you might also be able to maintain the
lookup just with your metadata attributes in NiFi alone.
Thanks,
James
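James's suggestion might look something like the following sketch (the function name is hypothetical): parse the parent "directory" prefixes out of the keys ListS3 returns, ready to load into a lookup store such as DistributedMapCache.

```python
def directory_prefixes(keys):
    # Collect every parent "directory" path that appears in the S3 keys.
    # e.g. "a/b/file.txt" contributes the prefixes "a" and "a/b".
    prefixes = set()
    for key in keys:
        parts = key.split("/")[:-1]  # drop the object name itself
        for i in range(1, len(parts) + 1):
            prefixes.add("/".join(parts[:i]))
    return prefixes
```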
On Fri,
Hello,
I responded on stackoverflow:
https://stackoverflow.com/questions/42853055/how-to-access-controller-service-created-in-ui-into-root-processors-in-nifi-1-1
Thanks,
Bryan
On Fri, Mar 17, 2017 at 5:09 AM, prabhu Mahendran
wrote:
>
> Nifi-1.1.0:
>
> In nifi-1.1.0 i have attached created c
Good afternoon. In my workflow I build an S3 output target from metadata
attributes. The vast majority of the time, the output target exists, and so
in my PutS3Object processor I set Object Key to ${outputTarget}/$(unknown),
the target output folder exists, and my file is written to the right place
Hey Pere,
Here's a list of the processors we use the most often, not in any order...
UpdateAttribute
InvokeHTTP
ExecuteStreamCommand
ExecuteScript
PutS3Object
PutSNS
GetFile/PutFile
RouteOnAttribute
HandleHttpRequest/Response
GenerateFlowFile
EvaluateJsonPath
GenerateTableFetch
ExecuteSQL
Convert
Hi,
my name is Pere Urbon and I am working on a small book / crash course on
building data processing systems, ETLs, etc. with Apache NiFi.
I was wondering if there is some sense of the most used processors for each
category? I know the question is really hard to answer exactly, but
probably just
Could you post your nifi.properties for each node? (replacing anything
sensitive with placeholders)
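For reference, the cluster-related keys that usually matter in such a comparison look something like this (the values below are placeholders, a hedged sketch rather than anyone's actual configuration):

```properties
# Web host/port the node binds to (blank host = bind to all interfaces)
nifi.web.http.host=
nifi.web.http.port=8080

# Cluster membership and node addressing
nifi.cluster.is.node=true
nifi.cluster.node.address=node1.example.com
nifi.cluster.node.protocol.port=11443

# ZooKeeper connection used for cluster coordination
nifi.zookeeper.connect.string=zk.example.com:2181
```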
On Fri, Mar 17, 2017 at 8:46 AM, Ryan H
wrote:
> Hi Andy,
>
> Here is what I have (still stuck):
>
> * *Using keytool, ensure there are no Extended Key Usage restrictions on
> your certificates wh
Hi Andy,
Here is what I have (still stuck):
* *Using keytool, ensure there are no Extended Key Usage restrictions on
your certificates which prevent them from being used for client
authentication*
-not really sure how to check this. Everything was generated using the nifi
toolkit. This is what I
Hi Jeremy,
The issue we are facing is that we need to keep the nifi.web.http.host blank
in order to have a working swarm setup, but this conflicts with the way nifi
does cluster communication. Let me try to explain:
I have 2 nifi instances (cluster nodes) in a docker swarm connected to
zookeeper