Re: get controller service's configuration

2017-08-09 Thread Andy LoPresto
You can get the current property values of a controller service from the 
processor by using the ProcessContext object. For example, in GetHTTP [1], in 
the @OnScheduled method, you could do:

context.getControllerServiceLookup().getControllerService("my-controller-service-id");

context.getProperty("controller-service-property-name");
context.getProperty(SomeControllerService.CONSTANT_PROPERTY_DESCRIPTOR);

I forget if context.getProperty() will give the controller service properties 
as well as the processor properties. If it doesn’t, you can cast the retrieved 
ControllerService into AbstractControllerService or the concrete class and 
access available properties directly from the encapsulated ConfigurationContext.
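
For illustration, a rough sketch of the lookup-and-cast approach (untested; 
MY_SERVICE, MyControllerService, and getDriverClassName are placeholder names 
I made up, not actual GetHTTP or DBCP code):

    import org.apache.nifi.annotation.lifecycle.OnScheduled;
    import org.apache.nifi.controller.ControllerService;
    import org.apache.nifi.processor.ProcessContext;

    // Inside the processor class; MY_SERVICE is a hypothetical
    // PropertyDescriptor whose value is the controller service id.
    @OnScheduled
    public void onScheduled(final ProcessContext context) {
        final String serviceId = context.getProperty(MY_SERVICE).getValue();
        final ControllerService service =
                context.getControllerServiceLookup().getControllerService(serviceId);

        // If the concrete service class exposes its configuration (e.g. via a
        // getter added for this purpose), cast and read it directly.
        if (service instanceof MyControllerService) {
            final String driver = ((MyControllerService) service).getDriverClassName();
            getLogger().info("Configured driver class: {}", new Object[]{driver});
        }
    }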

[1] 
https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/GetHTTP.java#L295

Andy LoPresto
alopre...@apache.org
alopresto.apa...@gmail.com
PGP Fingerprint: 70EC B3E5 98A6 5A3F D3C4  BACE 3C6E F65B 2F7D EF69

> On Aug 9, 2017, at 6:57 PM, 尹文才  wrote:
> 
> Thanks Koji, I checked the link you provided and I think getting a
> DataSource is no different from getting the DBCP service (they could just
> get the connection). Actually I was trying to get the configured driver
> class to check the database type.
> 
> Regards,
> Ben
> 
> 2017-08-10 9:29 GMT+08:00 Koji Kawamura :
> 
>> Hi Ben,
>> 
>> I'm not aware of ways to obtain configurations of a controller service
>> from a processor. Those should be encapsulated inside the controller
>> service. If you'd like to create a DataSource instance instead of just
>> obtaining a connection, this discussion might be helpful:
>> https://github.com/apache/nifi/pull/1417
>> 
>> Although I would not recommend it, if you really need to obtain all
>> configurations, you can do so by calling the NiFi REST API from your
>> processor.
>> 
>> Thanks,
>> Koji
>> 
>> On Thu, Aug 10, 2017 at 10:09 AM, 尹文才  wrote:
>>> Hi guys, I have a customized processor with a DBCP controller service as
>>> a property. I could get the DBCP controller service in my code, but does
>>> anyone know how to obtain all the configurations of the DBCP controller
>>> service in Java code (e.g. Database Connection URL, Database Driver
>>> Location, etc.)? Thanks.
>>> 
>>> Regards,
>>> Ben
>> 





Re: get controller service's configuration

2017-08-09 Thread 尹文才
Thanks Koji, I checked the link you provided and I think getting a
DataSource is no different from getting the DBCP service (they could just
get the connection). Actually I was trying to get the configured driver
class to check the database type.

Regards,
Ben

2017-08-10 9:29 GMT+08:00 Koji Kawamura :

> Hi Ben,
>
> I'm not aware of ways to obtain configurations of a controller service
> from a processor. Those should be encapsulated inside the controller
> service. If you'd like to create a DataSource instance instead of just
> obtaining a connection, this discussion might be helpful:
> https://github.com/apache/nifi/pull/1417
>
> Although I would not recommend it, if you really need to obtain all
> configurations, you can do so by calling the NiFi REST API from your
> processor.
>
> Thanks,
> Koji
>
> On Thu, Aug 10, 2017 at 10:09 AM, 尹文才  wrote:
> > Hi guys, I have a customized processor with a DBCP controller service as
> > a property. I could get the DBCP controller service in my code, but does
> > anyone know how to obtain all the configurations of the DBCP controller
> > service in Java code (e.g. Database Connection URL, Database Driver
> > Location, etc.)? Thanks.
> >
> > Regards,
> > Ben
>


Re: NiFi RecordPath Question

2017-08-09 Thread Andy LoPresto
Hi David,

This is absolutely the correct place to ask these questions. There is also a 
mailing list at us...@nifi.apache.org that is more focused on the user 
experience and less on development discussion, which may help as well.

Mark Payne contributed most of the record processing code, and has written 
about it here [1] and here [2], and Bryan Bende has also written a tutorial [3] 
on using it. I am far from a record path expert, but have you tried simply 
combining the predicates like so? (I don’t have a 1.2 instance available right 
now to test with).

 /loc2[*][isEmpty(./rlp)][./src = 'network']/acc
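
As a quick way to verify the behavior, you could also evaluate the path 
against a Record directly with the RecordPath API in a unit test (a rough, 
untested sketch; building the Record from the JSON sample is left out):

    import org.apache.nifi.record.path.RecordPath;
    import org.apache.nifi.record.path.RecordPathResult;
    import org.apache.nifi.serialization.record.Record;

    // 'record' is a Record parsed from the JSON sample in your message.
    final RecordPath path =
            RecordPath.compile("/loc2[*][isEmpty(./rlp)][./src = 'network']/acc");
    final RecordPathResult result = path.evaluate(record);
    // With AND semantics, only elements where src = 'network' and rlp is
    // absent should be selected.
    result.getSelectedFields().forEach(fv -> System.out.println(fv.getValue()));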

If this doesn’t work, hopefully Mark or another user can chime in with a 
suggestion, or if it is not currently possible, you can open a Jira ticket 
requesting this feature.

[1] https://blogs.apache.org/nifi/entry/record-oriented-data-with-nifi
[2] https://blogs.apache.org/nifi/entry/real-time-sql-on-event
[3] 
http://bryanbende.com/development/2017/06/20/apache-nifi-records-and-schema-registries
 


Andy LoPresto
alopre...@apache.org
alopresto.apa...@gmail.com
PGP Fingerprint: 70EC B3E5 98A6 5A3F D3C4  BACE 3C6E F65B 2F7D EF69

> On Aug 9, 2017, at 3:51 PM, David Nahoopii  wrote:
> 
> https://nifi.apache.org/docs/nifi-docs/html/record-path-guide.html#filters
> 
> 
> Apologies in advance - I'm sure this email alias isn't used for questions, 
> but I did see it at the top of the page in the link above.
> Is it possible to do an AND condition using RecordPath?
> 
> 
> 
> Given the below JSON object, I would like to use RecordPath in an 
> UpdateRecord processor to get the value of /loc2[*][isEmpty(./rlp) AND ./src 
> = 'network']/acc  (meaning, src = 'network' and rlp is empty/does not exist). 
>  Using NiFi 1.2, is there no AND condition to use?  Has one been added in 1.3 
> or later?
> 
> 
> Thanks and apologies in advance. - Dave
> 
> 
> {
> 
>"loc2": [
>{
>"acc": 92,
>"src": "gps",
>"ll": {
>"lat": 41.83173031,
>"lon": -88.09725264
>},
>"sol": 1498861615000,
>"brn": 307.7,
>"alt": 191,
>    "spd": 2.48
>},
>{
>"acc": 18.088,
>"src": "network",
>"ll": {
>"lat": 41.8317428,
>"lon": -88.096802
>},
>"sol": 1498865950344,
>"alt": 193.9
>},
>{
>"acc": 18.088,
>"src": "network",
>"ll": {
>"lat": 41.8317428,
>"lon": -88.096802
>},
>"sol": 1498865950344,
>"alt": 193.9,
>"rlp": "passive"
>},
> 
>{
>"acc": 20,
>"src": "fused",
>"ll": {
>"lat": 41.8317428,
>"lon": -88.096802
>},
>"sol": 1498867975640,
>"alt": 0,
>"spd": 0
>}
>    ]
> 
> 
> }





Re: get controller service's configuration

2017-08-09 Thread Koji Kawamura
Hi Ben,

I'm not aware of ways to obtain configurations of a controller service
from a processor. Those should be encapsulated inside the controller
service. If you'd like to create a DataSource instance instead of just
obtaining a connection, this discussion might be helpful:
https://github.com/apache/nifi/pull/1417

Although I would not recommend it, if you really need to obtain all
configurations, you can do so by calling the NiFi REST API from your
processor.

Thanks,
Koji

On Thu, Aug 10, 2017 at 10:09 AM, 尹文才  wrote:
> Hi guys, I have a customized processor with a DBCP controller service as a
> property. I could get the DBCP controller service in my code, but does
> anyone know how to obtain all the configurations of the DBCP controller
> service in Java code (e.g. Database Connection URL, Database Driver
> Location, etc.)? Thanks.
>
> Regards,
> Ben


get controller service's configuration

2017-08-09 Thread 尹文才
Hi guys, I have a customized processor with a DBCP controller service as a
property. I could get the DBCP controller service in my code, but does
anyone know how to obtain all the configurations of the DBCP controller
service in Java code (e.g. Database Connection URL, Database Driver
Location, etc.)? Thanks.

Regards,
Ben


NiFi RecordPath Question

2017-08-09 Thread David Nahoopii
https://nifi.apache.org/docs/nifi-docs/html/record-path-guide.html#filters


Apologies in advance - I'm sure this email alias isn't used for questions, but 
I did see it at the top of the page in the link above.
Is it possible to do an AND condition using RecordPath?



Given the below JSON object, I would like to use RecordPath in an UpdateRecord 
processor to get the value of /loc2[*][isEmpty(./rlp) AND ./src = 
'network']/acc  (meaning, src = 'network' and rlp is empty/does not exist).  
Using NiFi 1.2, is there no AND condition to use?  Has one been added in 1.3 or 
later?


Thanks and apologies in advance. - Dave


{
    "loc2": [
        {
            "acc": 92,
            "src": "gps",
            "ll": {
                "lat": 41.83173031,
                "lon": -88.09725264
            },
            "sol": 1498861615000,
            "brn": 307.7,
            "alt": 191,
            "spd": 2.48
        },
        {
            "acc": 18.088,
            "src": "network",
            "ll": {
                "lat": 41.8317428,
                "lon": -88.096802
            },
            "sol": 1498865950344,
            "alt": 193.9
        },
        {
            "acc": 18.088,
            "src": "network",
            "ll": {
                "lat": 41.8317428,
                "lon": -88.096802
            },
            "sol": 1498865950344,
            "alt": 193.9,
            "rlp": "passive"
        },
        {
            "acc": 20,
            "src": "fused",
            "ll": {
                "lat": 41.8317428,
                "lon": -88.096802
            },
            "sol": 1498867975640,
            "alt": 0,
            "spd": 0
        }
    ]
}


Re: MINIFI-350 minifi-cpp end-to-end integration testing framework

2017-08-09 Thread Andy Christianson
MiNiFi cpp team,

I have created the initial pytest/docker based test framework as well as a few 
initial test cases. Please review & merge the PR 
(https://github.com/apache/nifi-minifi-cpp/pull/126) at your convenience.

Regards,

Andy I.C.

From: kangax...@gmail.com  on behalf of Haimo Liu 

Sent: Thursday, July 13, 2017 2:07 PM
To: dev@nifi.apache.org
Subject: Re: MINIFI-350 minifi-cpp end-to-end integration testing framework

Great idea, Andy! I can see this being extremely valuable even outside of
the MiNiFi cpp context. Specifically, to migrate my dataflow from one
environment to another (DEV to QA to PROD), an integration testing
framework could be very helpful for flow validation purposes.

In addition to testing your MiNiFi agents and network connectivity, have
you taken into consideration the integration testing of a potentially very
complex dataflow itself? Say I am collecting data from 50 data sources and
ingesting into 20 different targets; may I leverage your testing framework
to spin up the necessary containers (HDFS, HBase, Oracle, etc., just
different endpoints) and run a docker-compose script to validate my flow
during migration? It would be very nice to see your framework designed to
be extensible in a way that covers flow-specific testing as well. Maybe you
already have it all sorted out :)

Thanks,
Haimo

On Thu, Jul 13, 2017 at 1:50 PM, Andy Christianson <
achristian...@hortonworks.com> wrote:

> Thanks for the feedback. I will put together a proof of concept which we
> can further evaluate/refine/merge upstream.
>
> -Andy
>
> On 7/13/17, 11:30 AM, "Kevin Doran"  wrote:
>
> Great idea, Andy! Additional types of automated tests would help the
> minifi-cpp project significantly, and I think your proposal is an
> appropriate way to add integration tests for the minifi agent. This sounds
> like a great way to verify expected behavior of processors and the system
> of components in flow combinations.
>
> I like the idea of declarative tests that are interpreted / run by a
> harness or framework as a way to allow others to contribute test cases.
>
> I've never used the Bats framework before, but it seems like a
> reasonable option for what you describe. It might require writing a fair
> amount of bash code under-the-hood to get the functionality you want
> (helper functions and such), but it looks like it would keep the test cases
> themselves and the output clean and light. Perhaps others can offer
> suggestions here.
>
> One comment, which you've probably already considered, is that we
> should keep the dependencies (if any) that get added for integration tests
> that leverage the docker target optional so they are not required for folks
> that just want to build libminifi or the agent. It would be more of a
> developer/contributor option but users could skip these tests.
>
> /docker/test/integration seems like a reasonable place to add test
> cases. Others would probably know better. I think the README.md would be a
> reasonable place to document how to run the tests with a reference to
> another document that describes how to add / contribute new test cases. I'm
> not sure where the best location for the documentation should live.
>
> Thanks,
> Kevin
>
> On 7/13/17, 10:34, "Andy Christianson" 
> wrote:
>
> Yes, I envision having a directory of declarative test cases. Each
> would include a flow yaml, one or more input files, and expected outputs.
>
> I’d like to document the convention before writing the
> implementation because if the conventions are solid, we can change out the
> actual test driver implementation later on if needed.
>
> Would it be best to document this in a section within /README.md,
> or should I add a new file such as /docs/Testing.md, or /TESTING.md?
>
> As for where the test cases would be added, I was thinking maybe
> /docker/test/integration, keeping consistent with the existing convention
> (i.e. /libminifi/test/integration).
>
> -Andy
>
> On 7/13/17, 10:14 AM, "Marc"  wrote:
>
> Hi Andy,
>I think this is a great idea to test integrating MiNiFi
> among multiple
> system components. Do you have a feel for how you will allow
> others to
> create test cases? Will you attempt to minimize the footprint
> of
> contributed tests by creating a bats based framework? I ask
> because it
> would be cool if contributors could supply a flow ( input )
> and expected
> output and we automatically run the necessary
> containers/components. Is
> this along the lines of your vision?
>
>   Thanks,
>Marc
>
> On Wed, Jul 12, 2017 at 12:26 PM, Andy Christianson <
> achristian...@hortonworks.com> wrote:
>

Re: MINIFI-350 CMake target for docker integration tests

2017-08-09 Thread Aldrin Piri
Yeah, my vote would definitely be to have the separate targets.  I like
having a lot of options for testing but definitely like being able to
minimize needed dependencies appropriate to the desired level of evaluation.

In terms of naming, I would likely avoid calling these 'integration' given
their special nature, but otherwise I have no strong preferences.

On Wed, Aug 9, 2017 at 1:42 PM, Andy Christianson <
achristian...@hortonworks.com> wrote:

> These are good points. Having a separate target makes sense, plus it'll
> reduce risk of interfering with the existing native development workflow.
>
> What shall we call the new target? Some possibilities:
>
> - make sip (system integration tests)
> - make docker-verify
> - make verify-docker
> - make integration
>
> I'm open to ideas.
>
> -Andy I.C.
> 
> From: Marc 
> Sent: Wednesday, August 09, 2017 1:16 PM
> To: dev@nifi.apache.org
> Subject: Re: MINIFI-350 CMake target for docker integration tests
>
> Andy,
>   This is great stuff. To facilitate use by developers we should probably
> come to agreement on terminology. In my opinion, it seems that you are
> facilitating more system integration testing (SIT) based on our
> discussions. While what you are doing doesn't preclude you from integrating
> one or two components it does cause confusion on what to use when I want to
> test something. I think the same exists for our unit and integration tests
> that we have now, but those are a function of design decisions made vice a
> focus on testability. I think most of the tests that use the docker
> framework will be system tests, and the importance of this is that we
> should probably isolate these to a separate target. I think better
> isolation of what these test facilities do will help end users, and limiting
> what ctest executes to pure unit and modular integration tests will be very
> useful.
>I think that also helps sell the story that one must install docker if
> they want to run system integration tests, but you can still integrate
> several components with ctest if you want to limit what you are testing. I
> personally look forward to these changes as I think having multi host tests
> will help find more bugs, but I wouldn't want this to run when I type make
> test.
>
> On Wed, Aug 9, 2017 at 12:39 PM, Andy Christianson <
> achristian...@hortonworks.com> wrote:
>
> > MiNiFi cpp team,
> >
> >
> > I am currently working on MINIFI-350. I have an integration test
> framework
> > plus a few integration tests which are invoked via a new script called
> > docker/DockerVerify.sh (https://github.com/achristianson/nifi-minifi-cpp/
> > blob/MINIFI-350/docker/DockerVerify.sh). This is intended to be
> > consistent in naming and structure to the extant DockerBuild.sh.
> >
> >
> > My question resides in a final bit of integration. I see that there is a
> > custom CMake target in cmake/DockerConfig.cmake that calls
> DockerBuild.sh.
> > It seems like the best thing to integrate DockerVerify.sh is to add a
> > sibling CMake custom target, perhaps called something like
> 'docker-verify.'
> >
> >
> > A few stylistic/structural questions come to mind. Do we want these
> > integration tests to be invoked directly via a custom target, or do we
> want
> > that custom target to be an intermediate target which is called by some
> > existing 'make test' or 'make verify' target? The biggest risk, from
> what I
> > can see, is that if we hooked into 'make test,' then it would fail if the
> > user hadn't already run 'make docker.'
> >
> >
> > Please advise on preferred style and structure with regard to integration
> > of the new docker integration tests with CMake.
> >
> >
> > -Andy I.C.
> >
>
>
>


Re: FlowFile Logging

2017-08-09 Thread James Farrington
Ah, that sounds perfect for what I need to do. Thank you!

On Tue, Aug 8, 2017 at 7:20 PM, Andy LoPresto  wrote:

> You can add an appender in the conf/logback.xml file which handles
> “org.apache.nifi” and anything of ERROR level and writes to a separate log
> file, which you can then monitor/parse/send wherever you like. This will
> filter out the normal operations logging from the failures you are
> interested in. Unfortunately, by default the full stacktraces and some
> other error information will be included here as well.
>
>
> Andy LoPresto
> alopre...@apache.org
> alopresto.apa...@gmail.com
> PGP Fingerprint: 70EC B3E5 98A6 5A3F D3C4  BACE 3C6E F65B 2F7D EF69
>
> On Aug 8, 2017, at 1:09 PM, James Farrington  wrote:
>
> Hi Andy,
>
> I am trying to catch any error that happens from any processor in my flow.
> So adding a PutFile for each and every processor would not be ideal. And I
> don't need the data to be transformed into a usable type. I am passing this
> data to Logstash as a CSV file (so raw data is just fine). Any thoughts?
>
> Thank you,
> James
>
> On Mon, Aug 7, 2017 at 11:35 PM, Andy LoPresto 
> wrote:
>
> Hi James,
>
> The app log will definitely contain a lot of relevant information about
> flowfile failure, but you can also make this easier for yourself by routing
> the failure connection of the relevant processor to a PutFile/PutEmail
> processor which outputs the flowfile UUID and content claim size to a
> special error destination. This means you won’t have to parse the app log
> and filter the non-error information. You may also want to look at the
> SiteToSiteBulletinReportingTask and SiteToSiteProvenanceReportingTask to
> transform the metadata (errors = bulletins, provenance for failure events)
> into raw data that you can then operate on directly.
>
> Obviously, you can also have that failure connection go to other
> processors, back to the original to retry, etc.
>
> Andy LoPresto
> alopre...@apache.org
> alopresto.apa...@gmail.com
> PGP Fingerprint: 70EC B3E5 98A6 5A3F D3C4  BACE 3C6E F65B 2F7D EF69
>
> On Aug 7, 2017, at 12:09 PM, James Farrington  wrote:
>
> Hi everyone,
>
> Whenever there is an error in a flow and some flow file is not processed
> properly, I need to retrieve the Flow File UUID and the size of that file.
> I checked the nifi-app.log file and it seemed that most (if not all) of the
> time this information was being sent there on a processor error. Does
> anyone know if this will always be the case? Or if there is a better way to
> capture all flow files that failed to process?
>
> Any help would be great!
>
> Thank you.
>
>
>


Re: MINIFI-350 CMake target for docker integration tests

2017-08-09 Thread Andy Christianson
These are good points. Having a separate target makes sense, plus it'll reduce 
risk of interfering with the existing native development workflow.

What shall we call the new target? Some possibilities:

- make sip (system integration tests)
- make docker-verify
- make verify-docker
- make integration

I'm open to ideas.

-Andy I.C.

From: Marc 
Sent: Wednesday, August 09, 2017 1:16 PM
To: dev@nifi.apache.org
Subject: Re: MINIFI-350 CMake target for docker integration tests

Andy,
  This is great stuff. To facilitate use by developers we should probably
come to agreement on terminology. In my opinion, it seems that you are
facilitating more system integration testing (SIT) based on our
discussions. While what you are doing doesn't preclude you from integrating
one or two components it does cause confusion on what to use when I want to
test something. I think the same exists for our unit and integration tests
that we have now, but those are a function of design decisions made vice a
focus on testability. I think most of the tests that use the docker
framework will be system tests, and the importance of this is that we
should probably isolate these to a separate target. I think better
isolation of what these test facilities do will help end users, and limiting
what ctest executes to pure unit and modular integration tests will be very
useful.
   I think that also helps sell the story that one must install docker if
they want to run system integration tests, but you can still integrate
several components with ctest if you want to limit what you are testing. I
personally look forward to these changes as I think having multi host tests
will help find more bugs, but I wouldn't want this to run when I type make
test.

On Wed, Aug 9, 2017 at 12:39 PM, Andy Christianson <
achristian...@hortonworks.com> wrote:

> MiNiFi cpp team,
>
>
> I am currently working on MINIFI-350. I have an integration test framework
> plus a few integration tests which are invoked via a new script called
> docker/DockerVerify.sh (https://github.com/achristianson/nifi-minifi-cpp/
> blob/MINIFI-350/docker/DockerVerify.sh). This is intended to be
> consistent in naming and structure to the extant DockerBuild.sh.
>
>
> My question resides in a final bit of integration. I see that there is a
> custom CMake target in cmake/DockerConfig.cmake that calls DockerBuild.sh.
> It seems like the best thing to integrate DockerVerify.sh is to add a
> sibling CMake custom target, perhaps called something like 'docker-verify.'
>
>
> A few stylistic/structural questions come to mind. Do we want these
> integration tests to be invoked directly via a custom target, or do we want
> that custom target to be an intermediate target which is called by some
> existing 'make test' or 'make verify' target? The biggest risk, from what I
> can see, is that if we hooked into 'make test,' then it would fail if the
> user hadn't already run 'make docker.'
>
>
> Please advise on preferred style and structure with regard to integration
> of the new docker integration tests with CMake.
>
>
> -Andy I.C.
>




Re: MINIFI-350 CMake target for docker integration tests

2017-08-09 Thread Marc
Andy,
  This is great stuff. To facilitate use by developers we should probably
come to agreement on terminology. In my opinion, it seems that you are
facilitating more system integration testing (SIT) based on our
discussions. While what you are doing doesn't preclude you from integrating
one or two components it does cause confusion on what to use when I want to
test something. I think the same exists for our unit and integration tests
that we have now, but those are a function of design decisions made vice a
focus on testability. I think most of the tests that use the docker
framework will be system tests, and the importance of this is that we
should probably isolate these to a separate target. I think better
isolation of what these test facilities do will help end users, and limiting
what ctest executes to pure unit and modular integration tests will be very
useful.
   I think that also helps sell the story that one must install docker if
they want to run system integration tests, but you can still integrate
several components with ctest if you want to limit what you are testing. I
personally look forward to these changes as I think having multi host tests
will help find more bugs, but I wouldn't want this to run when I type make
test.

On Wed, Aug 9, 2017 at 12:39 PM, Andy Christianson <
achristian...@hortonworks.com> wrote:

> MiNiFi cpp team,
>
>
> I am currently working on MINIFI-350. I have an integration test framework
> plus a few integration tests which are invoked via a new script called
> docker/DockerVerify.sh (https://github.com/achristianson/nifi-minifi-cpp/
> blob/MINIFI-350/docker/DockerVerify.sh). This is intended to be
> consistent in naming and structure to the extant DockerBuild.sh.
>
>
> My question resides in a final bit of integration. I see that there is a
> custom CMake target in cmake/DockerConfig.cmake that calls DockerBuild.sh.
> It seems like the best thing to integrate DockerVerify.sh is to add a
> sibling CMake custom target, perhaps called something like 'docker-verify.'
>
>
> A few stylistic/structural questions come to mind. Do we want these
> integration tests to be invoked directly via a custom target, or do we want
> that custom target to be an intermediate target which is called by some
> existing 'make test' or 'make verify' target? The biggest risk, from what I
> can see, is that if we hooked into 'make test,' then it would fail if the
> user hadn't already run 'make docker.'
>
>
> Please advise on preferred style and structure with regard to integration
> of the new docker integration tests with CMake.
>
>
> -Andy I.C.
>


RE: [EXT] Re: Updating users through Rest API

2017-08-09 Thread Karthik Kothareddy (karthikk) [CONT - Type 2]
Matt,

Sorry, I forgot to update the community on this. I tried what you suggested and 
it worked like magic. So the right way to do it is:

1. Create the user first and get his UID; do not give him a UID, and give 
Version: 0 (POST).
2. Get the UserGroup (JSON format) (GET).
3. Add the new user using the UserGroup JSON returned from step 2, adding the 
UID and the respective version and permissions to the JSON, then submit it 
(PUT); a rough sketch of this step follows below.

I hope the procedure is similar for the access policies.
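
For completeness, a rough Java sketch of the step-3 PUT (I actually did this 
from InvokeHTTP; the host, group id, and JSON body here are placeholders):

    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.nio.charset.StandardCharsets;

    public class UpdateGroup {
        public static void main(String[] args) throws Exception {
            // The group JSON from step 2, with the new user's UID added to the
            // users list and the group's current revision version included.
            String groupJson = "{ ... }"; // placeholder

            URL url = new URL("http://localhost:8080/nifi-api/tenants/user-groups/some-group-id");
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setRequestMethod("PUT");
            conn.setRequestProperty("Content-Type", "application/json");
            conn.setDoOutput(true);
            try (OutputStream os = conn.getOutputStream()) {
                os.write(groupJson.getBytes(StandardCharsets.UTF_8));
            }
            System.out.println("HTTP " + conn.getResponseCode());
        }
    }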

Thanks
Karthik

-Original Message-
From: Matt Gilman [mailto:matt.c.gil...@gmail.com] 
Sent: Friday, August 04, 2017 10:31 AM
To: dev@nifi.apache.org
Subject: [EXT] Re: Updating users through Rest API

Karthik,

Group membership is managed through the group. So you would need to update the 
group by adding the user identifier to the users list and 'PUT' that.
To see these requests in action, I would suggest opening the Developer Tools of 
your browser as the UI uses the REST API exclusively.

Please let me know if you have any follow-up questions.

Thanks

Matt

On Fri, Aug 4, 2017 at 11:50 AM, Karthik Kothareddy (karthikk) [CONT - Type 2] 
 wrote:

> Hello All,
>
> I am trying to add/update users through the REST API, using InvokeHTTP to 
> do that. I tried a simple addition of a user with the below JSON and it 
> worked perfectly.
>
> {
>   "revision" : {
> "version" : 0
>   },
>   "permissions" : {
> "canRead" : true,
> "canWrite" : false
>   },
>   "component" : {
> "identity" : "testuser"
>   }
> }
>
> However, once the user is added, I am trying to add him to the user 
> groups that I have for my instance. I'm using the JSON that I got by 
> querying a different user (GET /tenants/users/{id}). I have updated all 
> the UIDs in the returned JSON to match the new user and used PUT 
> /tenants/users/{id} for the update. It doesn't seem to have any effect 
> on the "testuser"; it still says he does not belong to any group. Can 
> anyone help me with some examples on how to effectively add/update users?
>
> Thanks for your time,
>
> -Karthik
>


MINIFI-350 CMake target for docker integration tests

2017-08-09 Thread Andy Christianson
MiNiFi cpp team,


I am currently working on MINIFI-350. I have an integration test framework plus 
a few integration tests which are invoked via a new script called 
docker/DockerVerify.sh 
(https://github.com/achristianson/nifi-minifi-cpp/blob/MINIFI-350/docker/DockerVerify.sh).
 This is intended to be consistent in naming and structure with the extant 
DockerBuild.sh.


My question resides in a final bit of integration. I see that there is a custom 
CMake target in cmake/DockerConfig.cmake that calls DockerBuild.sh. It seems 
like the best thing to integrate DockerVerify.sh is to add a sibling CMake 
custom target, perhaps called something like 'docker-verify.'


A few stylistic/structural questions come to mind. Do we want these integration 
tests to be invoked directly via a custom target, or do we want that custom 
target to be an intermediate target which is called by some existing 'make 
test' or 'make verify' target? The biggest risk, from what I can see, is that 
if we hooked into 'make test,' then it would fail if the user hadn't already 
run 'make docker.'


Please advise on preferred style and structure with regard to integration of 
the new docker integration tests with CMake.


-Andy I.C.


MINIFI-368 exclude hidden files when scanning for src files

2017-08-09 Thread Andy Christianson
MiNiFi cpp team,


I have submitted a PR which fixes an issue where having a file open in vim 
causes the cmake build to fail. It fails because the BuildTests.cmake file 
includes hidden files in its scan for source files.


Please have a look at https://github.com/apache/nifi-minifi-cpp/pull/125 and 
merge in when possible.


-Andy C.


Date format method in NiFi Expression Language doesn't support negative timestamp

2017-08-09 Thread YuNing
Hello, everyone,
   I found that the date format method in NiFi Expression Language doesn't 
support negative timestamps, which means times before 1970-01-01. Is there 
any idea on how to achieve this, and can I define my own NiFi Expression 
Language function?
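
(For reference, plain Java date formatting does handle negative epoch millis, 
so the limitation seems to be in the Expression Language layer rather than in 
java.util.Date itself; a quick standalone check, which prints a date in 1969:)

    import java.text.SimpleDateFormat;
    import java.util.Date;
    import java.util.TimeZone;

    public class NegativeEpochCheck {
        public static void main(String[] args) {
            // -86400000 ms is exactly one day before the epoch (1969-12-31 UTC).
            SimpleDateFormat sdf = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");
            sdf.setTimeZone(TimeZone.getTimeZone("UTC"));
            System.out.println(sdf.format(new Date(-86400000L)));
        }
    }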

Thanks in advance for your reply.



Best Regards
YuNing