Jim,

You could use ListS3 to get the existing S3 keys, then parse out the "directories" and put the directories in a key/value store for a lookup (like DistributedMapCache). But you might also be able to maintain the lookup just with your metadata attributes in NiFi alone.
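For the "parse out the directories" step, here is a rough sketch of what I mean -- plain boto3 against a made-up bucket name, but the same splitting applies to the keys ListS3 emits (I believe each listed flowfile carries the key in its filename attribute, so you would split on "/" the same way in the flow):

    import boto3

    s3 = boto3.client("s3")
    prefixes = set()  # the lookup you would load into DistributedMapCache

    # List every key and keep only the "directory" part of each one.
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket="my-bucket"):       # bucket name is a placeholder
        for obj in page.get("Contents", []):
            key = obj["Key"]
            if "/" in key:
                prefixes.add(key.rsplit("/", 1)[0] + "/")     # "a/b/c/file" -> "a/b/c/"

Once that set (or cache) is populated, you can check ${outputTarget} against it before the first PutS3Object and route to for_review when it is missing.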
Thanks,
James

On Fri, Mar 17, 2017 at 10:31 AM, James McMahon <[email protected]> wrote:

> Good afternoon. In my workflow I build an S3 output target from metadata
> attributes. The vast majority of the time the output target exists, so in
> my PutS3Object processor I set Object Key to ${outputTarget}/${filename}
> and my file is written to the right place in my S3 bucket.
>
> On rare occasions the output target may not exist. Perhaps someone did an
> HTTP request with malformed incoming attributes. In that case I want to
> feed the failure from my first PutS3Object to a second PutS3Object that
> hard-wires the Object Key to an existing for_review folder in my bucket.
>
> My problem is that the first PutS3Object appears to force the creation of
> the malformed outputTarget-named folder, and I can't get the error case to
> cascade to the second S3 output processor. Is there a means to do this? Is
> there a processor I can use prior to the S3 output processor to check for
> the *existence* of the S3 folder, and output to either outputTarget (if it
> exists) or to for_review (if it does not)?
>
> Thanks in advance for your help. -Jim
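P.S. If you would rather check S3 directly at routing time instead of maintaining a cache, asking for at most one key under the prefix is cheap. A rough sketch -- boto3 in a script you might call via something like ExecuteStreamCommand, with placeholder values standing in for your ${outputTarget} and ${filename} attributes:

    import boto3

    s3 = boto3.client("s3")

    def prefix_exists(bucket, prefix):
        # S3 has no real folders; a "folder" exists only if some key starts with the prefix.
        resp = s3.list_objects_v2(Bucket=bucket, Prefix=prefix, MaxKeys=1)
        return resp.get("KeyCount", 0) > 0

    bucket = "my-bucket"                  # placeholder
    output_target = "year/2017/march/"    # stands in for ${outputTarget}
    filename = "example.dat"              # stands in for ${filename}

    if prefix_exists(bucket, output_target):
        object_key = output_target + filename
    else:
        object_key = "for_review/" + filename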
