Re: Patch Submitted - Re: Nifi-1325 - Enhancing Nifi AWS S3 for cross account access - Refactoring Nifi-AWS Processor credentials to use credentials provider

2016-01-11 Thread Aldrin Piri
Mans,

Thanks for your continued efforts on this.  I'll get this applied and start
scoping it out.

On Mon, Jan 11, 2016 at 4:54 PM, M Singh 
wrote:

> Hey Folks:
> I've uploaded a patch for NIFI-1325 (
> https://issues.apache.org/jira/browse/NIFI-1325).  I've added unit test
> cases and integration tests (ignored in the check-in since they require
> AWS resource ARNs) that use the credentials provider controller service,
> and they passed.
> Please let me know if you have any thoughts/comments for me.
> Thanks again, and looking forward to your feedback.
> Mans
>
> On Saturday, January 9, 2016 1:43 PM, Aldrin Piri <
> aldrinp...@gmail.com> wrote:
>
>
>  Mans,
>
> Sounds great.  Feel free to let us know if you have any issues and we are
> happy to work through it with you.  Thanks again for taking this work on!
>
> On Sat, Jan 9, 2016 at 4:21 PM, M Singh 
> wrote:
>
> > Sounds like a plan, Aldrin.  Let me explore this path.
> > Mans
> >
> >On Saturday, January 9, 2016 1:16 PM, Aldrin Piri <
> > aldrinp...@gmail.com> wrote:
> >
> >
> >  Mans,
> >
> > In the way I specified via the linked snippet, we could potentially just
> > have it implement the AWSCredentialsProvider signature, and in the case
> > that the prior properties are used instead of the controller service,
> > create a CredentialsProvider (something along the lines of a
> > BasicAWSCredentials Provider) that just returns a credentials object and
> a
> > no-op refresh.
> >
> > Unfortunately due to some ambiguity about the extension points for the
> > codebase, we are being very sensitive to those items and are avoiding
> such
> > breaking changes.  I agree there could be some confusion, but changing
> the
> > particular structure in terms of operation and configuration is one we
> > certainly cannot do as it would break flows on upgrade.  In the interim,
> > the controller service allows us to provide implementations for various
> > types of credentials.  I do agree, that when we are afforded the luxury
> of
> > breaking type changes, the currently established set of properties would
> > also best be served in that controller service type of role.
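
[Editor's note: the wrapper Aldrin describes above can be sketched as follows. This is a hypothetical, self-contained sketch: the two interfaces are simplified stand-ins for the AWS SDK's `com.amazonaws.auth.AWSCredentials` and `com.amazonaws.auth.AWSCredentialsProvider` types, and the class name `StaticCredentialsProvider` is made up for illustration.]

```java
// Simplified stand-ins for the AWS SDK credential types (the real ones
// live in com.amazonaws.auth).
interface AWSCredentials {
    String getAWSAccessKeyId();
    String getAWSSecretKey();
}

interface AWSCredentialsProvider {
    AWSCredentials getCredentials();
    void refresh();
}

class BasicAWSCredentials implements AWSCredentials {
    private final String accessKey;
    private final String secretKey;

    BasicAWSCredentials(String accessKey, String secretKey) {
        this.accessKey = accessKey;
        this.secretKey = secretKey;
    }

    public String getAWSAccessKeyId() { return accessKey; }
    public String getAWSSecretKey() { return secretKey; }
}

// The wrapper described above: hands back a fixed credentials object and
// treats refresh() as a no-op, so processors configured with the legacy
// key/secret properties can still satisfy the provider-based
// createClient signature.
class StaticCredentialsProvider implements AWSCredentialsProvider {
    private final AWSCredentials credentials;

    StaticCredentialsProvider(AWSCredentials credentials) {
        this.credentials = credentials;
    }

    public AWSCredentials getCredentials() { return credentials; }
    public void refresh() { /* static credentials: nothing to refresh */ }
}
```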
> >
> > On Sat, Jan 9, 2016 at 3:51 PM, M Singh 
> > wrote:
> >
> > > Hi Aldrin:
> > > Just to clarify that the current abstract aws processor (s3, sns, and
> > sqs)
> > > would implement both createClient methods as mentioned below:
> > > @Deprecated
> > > protected ClientType createClient(final ProcessContext context,
> > > final AWSCredentials credentials, final ClientConfiguration config);
> > >
> > > protected abstract ClientType createClient(final ProcessContext context,
> > > final AWSCredentialsProvider credentials, final ClientConfiguration
> > > config);
> > >
> > > I had already started working on the AWS creds provider controller
> > > service.
> > > In my implementation for the NiFi AWS processors I had removed the
> > > createClient with AWS creds, replacing it with the creds provider
> > > argument, but will put it back as you've recommended.
> > > If we follow this path, the configuration for the AWS processors will
> > > still have the original properties (AWS secret/access key, credentials
> > > file, etc.) for backward compatibility, plus an AWS credentials
> > > controller service which has the same properties (AWS secret/access
> > > key/creds files/anonymous option) along with the cross-account
> > > attributes.  IMHO this will be confusing, and my suggestion was to
> > > make the breaking change.  But I will work through your recommendation.
> > > If there is any other advice/recommendation, please let me know.
> > > Thanks again
> > >
> > >
> > >On Saturday, January 9, 2016 11:30 AM, Aldrin Piri <
> > > aldrinp...@gmail.com> wrote:
> > >
> > >
> > >  Mans,
> > >
> > > Fair points concerning the duplication.  I was thinking that in
> > conjunction
> > > with marking that method deprecated we would also drop the abstract
> > > classifier and require implementers subclassing the original class to
> > > provide the override explicitly.  It's not ideal, but does alleviate
> the
> > > issues concerning excess methods in the interface.  Sorry for omission
> of
> > > what is certainly a very valid issue.
> > >
> > > Outside of that, the items you are establishing sound like the right
> > > path.  I hashed this out a little more fully to better express my ideas
> > > [1].
> > >
> > > [1] https://gist.github.com/apiri/6a17b71e261f457daecc
> > >
> > > On Sat, Jan 9, 2016 at 1:17 PM, M Singh 
> > > wrote:
> > >
> > > > Hi Aldrin:
> > > > Even if we subclass AbstractAWSProcessor and override the onScheduled
> > > > method, we still have to add (rather than replace createClient with
> > > > the AWS creds argument) a createClient method that would take the
> > > > credentials provider argument rather than the AWS credentials
> > > > argument (the current implementation).
> > > > 

Patch Submitted - Re: Nifi-1325 - Enhancing Nifi AWS S3 for cross account access - Refactoring Nifi-AWS Processor credentials to use credentials provider

2016-01-11 Thread M Singh
Hey Folks:
I've uploaded a patch for NIFI-1325 
(https://issues.apache.org/jira/browse/NIFI-1325).  I've added unit test cases 
and integration tests (ignored in the check-in since they require AWS 
resource ARNs) that use the credentials provider controller service, and they 
passed.
Please let me know if you have any thoughts/comments for me.
Thanks again, and looking forward to your feedback. 
Mans 

On Saturday, January 9, 2016 1:43 PM, Aldrin Piri  
wrote:
 

 Mans,

Sounds great.  Feel free to let us know if you have any issues and we are
happy to work through it with you.  Thanks again for taking this work on!

On Sat, Jan 9, 2016 at 4:21 PM, M Singh 
wrote:

> Sounds like a plan, Aldrin.  Let me explore this path.
> Mans
>
>    On Saturday, January 9, 2016 1:16 PM, Aldrin Piri <
> aldrinp...@gmail.com> wrote:
>
>
>  Mans,
>
> In the way I specified via the linked snippet, we could potentially just
> have it implement the AWSCredentialsProvider signature, and in the case
> that the prior properties are used instead of the controller service,
> create a CredentialsProvider (something along the lines of a
> BasicAWSCredentials Provider) that just returns a credentials object and a
> no-op refresh.
>
> Unfortunately due to some ambiguity about the extension points for the
> codebase, we are being very sensitive to those items and are avoiding such
> breaking changes.  I agree there could be some confusion, but changing the
> particular structure in terms of operation and configuration is one we
> certainly cannot do as it would break flows on upgrade.  In the interim,
> the controller service allows us to provide implementations for various
> types of credentials.  I do agree, that when we are afforded the luxury of
> breaking type changes, the currently established set of properties would
> also best be served in that controller service type of role.
>
> On Sat, Jan 9, 2016 at 3:51 PM, M Singh 
> wrote:
>
> > Hi Aldrin:
> > Just to clarify that the current abstract aws processor (s3, sns, and
> sqs)
> > would implement both createClient methods as mentioned below:
> > @Deprecated
> > protected ClientType createClient(final ProcessContext context,
> > final AWSCredentials credentials, final ClientConfiguration config);
> >
> > protected abstract ClientType createClient(final ProcessContext context,
> > final AWSCredentialsProvider credentials, final ClientConfiguration
> > config);
> >
> > I had already started working on the AWS creds provider controller
> > service.
> > In my implementation for the NiFi AWS processors I had removed the
> > createClient with AWS creds, replacing it with the creds provider
> > argument, but will put it back as you've recommended.
> > If we follow this path, the configuration for the AWS processors will
> > still have the original properties (AWS secret/access key, credentials
> > file, etc.) for backward compatibility, plus an AWS credentials
> > controller service which has the same properties (AWS secret/access
> > key/creds files/anonymous option) along with the cross-account
> > attributes.  IMHO this will be confusing, and my suggestion was to make
> > the breaking change.  But I will work through your recommendation.
> > If there is any other advice/recommendation, please let me know.
> > Thanks again
> >
> >
> >    On Saturday, January 9, 2016 11:30 AM, Aldrin Piri <
> > aldrinp...@gmail.com> wrote:
> >
> >
> >  Mans,
> >
> > Fair points concerning the duplication.  I was thinking that in
> conjunction
> > with marking that method deprecated we would also drop the abstract
> > classifier and require implementers subclassing the original class to
> > provide the override explicitly.  It's not ideal, but does alleviate the
> > issues concerning excess methods in the interface.  Sorry for omission of
> > what is certainly a very valid issue.
> >
> > Outside of that, the items you are establishing sound like the right
> > path.  I hashed this out a little more fully to better express my ideas
> > [1].
> >
> > [1] https://gist.github.com/apiri/6a17b71e261f457daecc
> >
> > On Sat, Jan 9, 2016 at 1:17 PM, M Singh 
> > wrote:
> >
> > > Hi Aldrin:
> > > Even if we subclass AbstractAWSProcessor and override the onScheduled
> > > method, we still have to add (rather than replace createClient with the
> > > AWS creds argument) a createClient method that would take the
> > > credentials provider argument rather than the AWS credentials argument
> > > (the current implementation).
> > > current nifi aws createClient (with aws credentials)
> > >    protected abstract ClientType createClient(final ProcessContext
> > > context, final AWSCredentials credentials, final ClientConfiguration
> > > config);
> > > new nifi aws createClient (with aws credentials provider)
> > >    protected abstract ClientType createClient(final ProcessContext
> > > context, final AWSCredentialsProvider 

Re: Groovy unit tests

2016-01-11 Thread Andy LoPresto
Thanks to everyone who weighed in. This feature is documented in NIFI-1365 [1] 
and there is a patch available [2].

The tests do not run by default; they are triggered by a Java system property 
named `groovy` being set to `test`. It can be invoked as follows:

`mvn clean test -Dgroovy=test`
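
[Editor's note: one common way such a trigger is wired up is a Maven profile activated by the property. A sketch under that assumption, not necessarily the exact profile in the patch; the `groovy-all` version shown is illustrative:]

```xml
<!-- Hypothetical sketch: profile activated only when -Dgroovy=test is
     supplied, keeping Groovy test dependencies out of default builds. -->
<profile>
    <id>groovy-tests</id>
    <activation>
        <property>
            <name>groovy</name>
            <value>test</value>
        </property>
    </activation>
    <dependencies>
        <dependency>
            <groupId>org.codehaus.groovy</groupId>
            <artifactId>groovy-all</artifactId>
            <version>2.4.5</version>
            <scope>test</scope>
        </dependency>
    </dependencies>
</profile>
```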

[1] https://issues.apache.org/jira/browse/NIFI-1365 

[2] https://github.com/apache/nifi/pull/163 


Andy LoPresto
alopresto.apa...@gmail.com
PGP Fingerprint: 70EC B3E5 98A6 5A3F D3C4  BACE 3C6E F65B 2F7D EF69

> On Jan 10, 2016, at 6:06 AM, Joshua Davis  wrote:
> 
> +1 Excellent idea
> 
> Joshua Davis
> Senior Consultant
> Hortonworks Professional Services
> (407)476-6752
> 
> On 1/9/16, 11:52 AM, "Oleg Zhurakousky" 
> wrote:
> 
>> Big +1
>> 
>> Sent from my iPhone
>> 
>> On Jan 4, 2016, at 18:30, Andy LoPresto
>> > wrote:
>> 
>> I am considering writing unit tests for new development/regression
>> testing in Groovy. There are numerous advantages to this [1][2] (such as
>> map coercion, relaxed permissions on dependency injection, etc.). Mocking
>> large and complex objects, such as NiFiProperties, when only one feature
>> is under test is especially easy. I plan to write "Java-style" unit
>> tests, but this would also make TDD/BDD frameworks like Spock or Cucumber
>> much easier to use.
>> 
>> I figured before doing this I would poll the community and see if anyone
>> strongly objects? In previous situations, I have created a custom Maven
>> profile which only runs when triggered (by an environment variable,
>> current username, etc.) to avoid polluting the environment of anyone who
>> doesn't want the Groovy test dependencies installed.
>> 
>> Does anyone have thoughts on this?
>> 
>> 
>> [1] http://www.ibm.com/developerworks/java/library/j-pg11094/index.html
>> [2]
>> https://keyholesoftware.com/2015/04/13/short-on-time-switch-to-groovy-for-
>> unit-testing/
>> 
>> 
>> Andy LoPresto
>> alopresto.apa...@gmail.com
>> PGP Fingerprint: 70EC B3E5 98A6 5A3F D3C4  BACE 3C6E F65B 2F7D EF69
>> 
> 





[GitHub] nifi pull request: NIFI-1283 Fixing ControllerStatusReportingTask ...

2016-01-11 Thread jvwing
GitHub user jvwing opened a pull request:

https://github.com/apache/nifi/pull/166

NIFI-1283 Fixing ControllerStatusReportingTask logger name

ControllerStatusReportingTask was using an abbreviated class name to prefix 
its loggers, "ControllerStatusReportingTask", instead of the fully-qualified 
name "org.apache.nifi.controller.ControllerStatusReportingTask", as specified 
in the documentation and standard across reporting tasks in the same package.
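
[Editor's note: the distinction the fix restores can be illustrated with plain `Class` methods. A hypothetical, stdlib-only sketch; the actual change passes the fully-qualified name when obtaining the slf4j logger:]

```java
// Illustrates the two naming choices. Creating a logger from
// getSimpleName() yields the abbreviated "ControllerStatusReportingTask",
// which logging rules written against the fully-qualified class name
// (from getName()) will never match.
class LoggerNameDemo {
    // Stand-in for the real org.apache.nifi.controller class.
    static class ControllerStatusReportingTask { }

    static String abbreviated() {
        return ControllerStatusReportingTask.class.getSimpleName();
    }

    static String fullyQualified() {
        return ControllerStatusReportingTask.class.getName();
    }
}
```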

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/jvwing/nifi nifi-1283

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/166.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #166


commit c526656a228389aa972ce7ebc1df037333000516
Author: James Wing 
Date:   2016-01-11T21:39:15Z

NIFI-1283 Fixing ControllerStatusReportingTask loggers to use 
fully-qualified class name




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Re: [DISCUSS] roadmap for the next 6-12 months

2016-01-11 Thread Sean Busbey
On Fri, Jan 8, 2016 at 8:29 AM, Joe Witt  wrote:
> We should also have a discussion on how long we should be committed to
> supporting the 0.x line and what that means.  We need to document a
> commitment for the community.


Worth a dedicated thread?

Presuming that we're going to use "1.0.0" as the start of a stricter
versioning policy, I think the first issue is deciding how long we'll
support major versions once that happens. If the expectation is that
"pre-1.0" versions involve more latitude as we work out what NiFi
needs to better find a fit for the public good, then I think it's
reasonable for our continuing support period to be less than a
'normal' major version. But we have to know what that looks like
first.

-- 
Sean


Re: State Management

2016-01-11 Thread Mark Payne
In case any of you are following along here...

In updating Processors to use the new state management API, I found that there 
were a few use cases that
were a bit hard to accommodate with the proposed API so I have updated the API 
a bit, making it simpler, so that
state is just retrieved/set by using a Map. This lends itself well to modifying 
multiple key/value pairs in the state
atomically. I have updated the Feature Proposal to reflect this.
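
[Editor's note: a simplified, self-contained model of the map-based atomic update described above. This is hypothetical and is not the actual NiFi API (which lives in org.apache.nifi.components.state); it only shows why replacing the whole map in one step makes multi-key updates atomic.]

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.atomic.AtomicReference;

// State is read and replaced as a whole Map, so several key/value pairs
// change together or not at all.
class SimpleStateManager {
    private final AtomicReference<Map<String, String>> state =
            new AtomicReference<>(Collections.emptyMap());

    // Snapshot of the current state.
    Map<String, String> getState() {
        return state.get();
    }

    // Unconditionally set the entire state map in one step.
    void setState(Map<String, String> newState) {
        state.set(Collections.unmodifiableMap(new HashMap<>(newState)));
    }

    // Compare-and-set: succeeds only if the caller's snapshot is still
    // current, which is what makes concurrent multi-key updates safe.
    boolean replace(Map<String, String> oldState, Map<String, String> newState) {
        return state.compareAndSet(oldState,
                Collections.unmodifiableMap(new HashMap<>(newState)));
    }
}
```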

Thanks
-Mark

> On Dec 31, 2015, at 4:08 PM, Mark Payne  wrote:
> 
> Yeah, absolutely agree! I intend for that to work but don't have a unit test 
> developed for that yet. I will be sure that I do get a unit test in to verify 
> that we can nest it.
> 
>> On Dec 31, 2015, at 4:02 PM, Ricky Saltzer  wrote:
>> 
>> +1 for making the znode a configurable option. Especially if we can nest
>> it, such as "/nifi//production" and "/nifi//development".
>> On Dec 31, 2015 3:49 PM, "Mark Payne"  wrote:
>> 
>>> At this point I'm thinking that it would just be a configurable value,
>>> defaulting to /nifi
>>> This way, admins can easily configure it and that way they could view
>>> whats in there, etc.
>>> out-of-band of NiFi. Though I'm all ears if there's a better way of doing
>>> this.
>>> 
>>> 
 On Dec 31, 2015, at 3:44 PM, Ricky Saltzer  wrote:
 
 Overall this looks great, and will prove very useful as we try to scale
 nifi out.
 
 When in clustered mode, will we have control over which znode the nifi
 cluster persists to? Or - do we want this unique to the cluster (e.g uuid
 of the flow)?
 On Dec 31, 2015 12:35 PM, "Mark Payne"  wrote:
 
> All,
> 
> I have spent a good amount of time in the past few weeks working on the
> State Management Feature Proposal, described at [1].
> The main JIRA for this is found at [2].
> 
> I have updated the Feature Proposal with more implementation details
> describing the path that I have taken.
> If anyone has any interest in reviewing the ideas and providing
>>> feedback,
> please do so, as this is something that
> we want to ensure that we get right!
> 
> Also of note, for those interested and located in the MD/DC/VA area, I
> will be talking a bit about this feature at the next
> Maryland Apache NiFi meetup on Jan. 7.
> 
> Thanks
> -Mark
> 
> 
> [1] https://cwiki.apache.org/confluence/display/NIFI/State+Management <
> https://cwiki.apache.org/confluence/display/NIFI/State+Management>
> [2] https://issues.apache.org/jira/browse/NIFI-259 <
> https://issues.apache.org/jira/browse/NIFI-259>
> 
> 
>>> 
>>> 
> 



Re: question regarding input stream from reddit.com

2016-01-11 Thread Aldrin Piri
Shahzad,

Joe is correct in that we do not have anything that maps directly to this
data stream source.

As a means of getting the data into a NiFi flow, you could also consider
https://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi.processors.standard.ExecuteProcess/index.html
processor.  I have attached a template that uses an ExecuteProcess instance
to perform the functionality you are interested in.

From the template description:

This template makes use of ExecuteProcess to invoke curl and perform
batching on the streaming response provided by pushshift.io.  These are
batched into 1s intervals. This is inexact, and as a result, some results
may get truncated depending on time boundaries.  We perform a very naive
RouteOnContent to filter out those events without the data payload.


This gives a nice proof of concept of how you could interact with the data
source before diving into a custom processor.  As Joe mentioned, a custom
processor might be nice to handle the data format to be cognizant of event
boundaries and would potentially obviate the need for the included
SplitContent processor.  The attached template will discard some events
that fall on those time batch boundaries.
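
[Editor's note: the chunking step such a custom processor would need can be sketched roughly as below. This is a hypothetical, stdlib-only sketch; in a real processor the reader would wrap the HTTP response body and each batch would be written to a FlowFile and routed with session.transfer(). The `StreamBatcher` name and the "data:" prefix filter are illustrative assumptions.]

```java
import java.io.BufferedReader;
import java.util.ArrayList;
import java.util.List;

// Reads a line-oriented event stream and cuts it into event-aligned
// batches, each of which could become one FlowFile.
class StreamBatcher {
    static List<String> batch(BufferedReader reader, int eventsPerBatch) throws Exception {
        List<String> batches = new ArrayList<>();
        StringBuilder current = new StringBuilder();
        int count = 0;
        String line;
        while ((line = reader.readLine()) != null) {
            // Keep only payload lines; skip keepalives/comments
            // (assumed "data:"-prefixed, server-sent-event style).
            if (!line.startsWith("data:")) {
                continue;
            }
            current.append(line).append('\n');
            if (++count == eventsPerBatch) {
                batches.add(current.toString());
                current.setLength(0);
                count = 0;
            }
        }
        if (current.length() > 0) {
            batches.add(current.toString()); // trailing partial batch
        }
        return batches;
    }
}
```

Cutting on event boundaries like this is exactly what avoids the truncation the time-based curl batching can produce.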

Let us know if you have any more questions or if you are seeing a number of
"similar" data APIs that could potentially be generically supported from a
project standpoint.

--aldrin

On Mon, Jan 11, 2016 at 11:14 AM, Joe Percivall <
joeperciv...@yahoo.com.invalid> wrote:

> Hello Shahzad,
>
> Unfortunately the "stream" functionality of pushshift.io doesn't fit into
> any current NiFi processor. Processors work by having an "OnTrigger" method
> that is used to create FlowFiles with each call. This works nicely for
> aspects of the pushshift.io api like
> "https://api.pushshift.io/reddit/search?q=Einstein=100", where it
> returns a single "unit" of information with each HTTP request. If you are
> able to get the same information you need using the base "api" call for
> pushshift instead of "stream", that would work best.
>
> Else you may be able to create a custom processor around your Java code,
> although it may be pretty difficult. You would need to translate the stream
> into chunks of information that would be put into the contents of FlowFiles
> and routed to a relationship using session.transfer(). For more information
> on creating a custom processor, check out the developer guide:
> https://nifi.apache.org/developer-guide.html.
>
> Do either of those help or is a general processor that streams over HTTP
> necessary?
>
> Joe
> - - - - - -
> Joseph Percivall
> linkedin.com/in/Percivall
> e: joeperciv...@yahoo.com
>
>
>
> On Friday, January 8, 2016 11:16 AM, Shahzad K  wrote:
>
>
>
> Hi
>
> My name is Shahzad Karamat. I am trying to read some tweets from
> http://stream.pushshift.io/ into NiFi.
> I am using a Mac and can read the stream using curl -i '
> http://stream.pushshift.io/?subreddit=askreddit'
> I can get the stream into my terminal, and I have also developed a system
> to read it using Java code.
> The question is:
> For the strings I read from http://stream.pushshift.io/ using Java, how
> can I make a FlowFile of this stream to transfer it to a certain
> relationship?
>
> Regards
>
> Shahzad K
>
[Attached: "Retrieve Data from pushshift.io Streaming" NiFi template (XML). Its description reads: "This template makes use of ExecuteProcess to invoke curl and perform batching on the streaming response provided by pushshift.io.  These are batched into 1s intervals. This is inexact, and as a result, some results may get truncated depending on time boundaries.  We perform a very naive RouteOnContent to filter out those events without the data payload." The remainder of the template XML is not reproduced here.]

Re: [DISCUSS] Proposal for an Apache NiFi sub project - MiNiFi

2016-01-11 Thread Sumanth Chinthagunta
Good idea. There will be many possibilities if we can make MiNiFi run on 
Android / iOS or other embedded devices.

I wonder how back-pressure works in this kind of distributed setup. 
I was reading about the ReactiveSocket project, which is trying to solve the 
reactive / back-pressure problem across network boundaries:

 http://reactivesocket.io

Sumo
Sent from my iPhone

> On Jan 9, 2016, at 4:29 PM, Joe Witt  wrote:
> 
> NiFi Community,
> 
> I'd like to initiate discussion around a proposal to create our first
> sub-project of NiFi.  A possible name for it is "MiNiFi" a sort of
> play on Mini-NiFi.
> 
> The idea is to provide a complementary data collection agent to NiFi's
> current approach of dataflow management.  As noted in our ASF TLP
> resolution NiFi is to provide "an automated and durable data broker
> between systems providing interactive command and control and detailed
> chain of custody for data."  MiNiFi would be consistent with that
> scope with a  specific focus on the first-mile challenge so common in
> dataflow.
> 
> Specific goals of MiNiFi would be to provide a small, lightweight,
> centrally managed  agent that natively generates data provenance and
> seamlessly integrates with NiFi for follow-on dataflow management and
> maintenance of the chain of custody provided by the powerful data
> provenance features of NiFi.
> 
> MiNiFi should be designed to operate directly on or adjacent to the
> source sensor, system, server generating the events as a resource
> sensitive tenant.  There are numerous agent models in existence today
> but they do not offer the command and control or provenance that is so
> important to the philosophy and scope of NiFi.
> 
> These agents would necessarily have a different interactive command
> and control model than NiFi as you'd not expect consistent behavior,
> capability, or accessibility of all instances of the agents at any
> given time.
> 
> Multiple implementations of MiNiFi are envisioned including those that
> operate on the JVM and those that do not.
> 
> As the discussion advances we can put together wiki pages, concept
> diagrams, and requirements to help better articulate how this might
> evolve.  We should also discuss the mechanics of how this might work
> in terms of infrastructure, code repository, and more.
> 
> Thanks
> Joe


Re: State Management

2016-01-11 Thread xmlking
Some open source projects are using Hazelcast's distributed map for state 
management. 
It provides standard Java map API.
http://docs.hazelcast.org/docs/3.5/manual/html/map.html

Sumo
Sent from my iPad

> On Jan 11, 2016, at 5:37 AM, Mark Payne  wrote:
> 
> In case any of you are following along here...
> 
> In updating Processors to use the new state management API, I found that 
> there were a few use cases that
> were a bit hard to accommodate with the proposed API so I have updated the 
> API a bit, making it simpler, so that
> state is just retrieved/set by using a Map. This lends itself well to 
> modifying multiple key/value pairs in the state
> atomically. I have updated the Feature Proposal to reflect this.
> 
> Thanks
> -Mark
> 
>> On Dec 31, 2015, at 4:08 PM, Mark Payne  wrote:
>> 
>> Yeah, absolutely agree! I intend for that to work but don't have a unit test 
>> developed for that yet. I will be sure that I do get a unit test in to 
>> verify that we can nest it.
>> 
>>> On Dec 31, 2015, at 4:02 PM, Ricky Saltzer  wrote:
>>> 
>>> +1 for making the znode a configurable option. Especially if we can nest
>>> it, such as "/nifi//production" and "/nifi//development".
 On Dec 31, 2015 3:49 PM, "Mark Payne"  wrote:
 
 At this point I'm thinking that it would just be a configurable value,
 defaulting to /nifi
 This way, admins can easily configure it and that way they could view
 whats in there, etc.
 out-of-band of NiFi. Though I'm all ears if there's a better way of doing
 this.
 
 
> On Dec 31, 2015, at 3:44 PM, Ricky Saltzer  wrote:
> 
> Overall this looks great, and will prove very useful as we try to scale
> nifi out.
> 
> When in clustered mode, will we have control over which znode the nifi
> cluster persists to? Or - do we want this unique to the cluster (e.g uuid
> of the flow)?
>> On Dec 31, 2015 12:35 PM, "Mark Payne"  wrote:
>> 
>> All,
>> 
>> I have spent a good amount of time in the past few weeks working on the
>> State Management Feature Proposal, described at [1].
>> The main JIRA for this is found at [2].
>> 
>> I have updated the Feature Proposal with more implementation details
>> describing the path that I have taken.
>> If anyone has any interest in reviewing the ideas and providing
 feedback,
>> please do so, as this is something that
>> we want to ensure that we get right!
>> 
>> Also of note, for those interested and located in the MD/DC/VA area, I
>> will be talking a bit about this feature at the next
>> Maryland Apache NiFi meetup on Jan. 7.
>> 
>> Thanks
>> -Mark
>> 
>> 
>> [1] https://cwiki.apache.org/confluence/display/NIFI/State+Management <
>> https://cwiki.apache.org/confluence/display/NIFI/State+Management>
>> [2] https://issues.apache.org/jira/browse/NIFI-259 <
>> https://issues.apache.org/jira/browse/NIFI-259>
> 


Re: Nifi + Oracle redo files?

2016-01-11 Thread jasondene
regarding: "Are you aware of any libraries that are open source friendly and 
support Oracle's redo logs?"

No, I'm not aware of any such libraries that would meet our need.

Will continue to search...  thank you!



--
View this message in context: 
http://apache-nifi-developer-list.39713.n7.nabble.com/Nifi-Oracle-redo-files-tp6123p6158.html
Sent from the Apache NiFi Developer List mailing list archive at Nabble.com.