Connection between RPG and input port is failing via API

2018-03-13 Thread Ravi Papisetti (rpapiset)
Hi,

NiFi version 1.5

I am trying to create a connection from an output port to an input port at the root RPG.
Below is my JSON request:


{"revision": {"clientId": "anonymous","version": 0},"component": {"name": 
"","source": {  "id": "22df69f1-0162-1000--6c01773e",  "groupId": 
"22df42c1-0162-1000--cd3f7d2b",  "type": "OUTPUT_PORT"},"destination": 
{  "id": "16628c20-0ed4-3372-a9cc-9b38c49aaf29",  "groupId": 
"ad4a6524-0161-1000--f8293860",  "type": "REMOTE_INPUT_PORT"}}}



This works fine if I add a delay of about 45 seconds after creating the destination port 
(remote input port). Otherwise it fails with the error below:

Response to create connection from out to RPG: Node xxx is unable to fulfill 
this request due to: Unable to find the specified destination.



This port does show up in the response from 
/nifi-api/remote-process-groups/{rootRPGId}, yet the request still 
complains that it is unable to find the specified destination.
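
For illustration, one alternative to a fixed 45-second sleep is to poll the RPG until 
the remote input port is listed and then retry the POST a few times. A minimal sketch 
with the Python requests library, assuming an unsecured node on localhost:8080, that the 
connection is created in the source port's parent group, and reusing the (truncated) ids 
from the request above:

import time
import requests

BASE = "http://localhost:8080/nifi-api"            # assumed unsecured node
GROUP_ID = "22df42c1-0162-1000--cd3f7d2b"          # group that will own the connection
RPG_ID = "ad4a6524-0161-1000--f8293860"            # remote process group id (destination groupId)
PORT_ID = "16628c20-0ed4-3372-a9cc-9b38c49aaf29"   # remote input port id

connection = {
    "revision": {"clientId": "anonymous", "version": 0},
    "component": {
        "name": "",
        "source": {"id": "22df69f1-0162-1000--6c01773e",
                   "groupId": GROUP_ID, "type": "OUTPUT_PORT"},
        "destination": {"id": PORT_ID, "groupId": RPG_ID,
                        "type": "REMOTE_INPUT_PORT"}
    }
}

def remote_port_listed():
    # Remote input ports are reported under component.contents.inputPorts
    rpg = requests.get(f"{BASE}/remote-process-groups/{RPG_ID}").json()
    return any(p["id"] == PORT_ID
               for p in rpg["component"]["contents"]["inputPorts"])

# Poll until the RPG reports the port, then retry the POST with a short backoff
# instead of sleeping a fixed 45 seconds.
for attempt in range(10):
    if remote_port_listed():
        resp = requests.post(f"{BASE}/process-groups/{GROUP_ID}/connections",
                             json=connection)
        if resp.ok:
            print("Connection created:", resp.json()["id"])
            break
        print("Still failing:", resp.text)
    time.sleep(5)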



Are there any other parameters to be checked before issuing the connection 
request?



Appreciate any help.



Thanks,

Ravi Papisetti







Re: Issues with QueryDatabaseTable processor's State

2018-03-13 Thread Matt Burgess
Marcio,

These were choices based on user experience (UX); there's always a tradeoff
between ease-of-use and flexibility. We could offer a property for a state
prefix (that uses Expression Language); please feel free to write a Jira
for that improvement. The original idea was just to specify a column that
would always increase, so future executions of the SQL statement would only
grab new rows.
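
As a rough illustration of that idea (a conceptual sketch only, not NiFi's actual code),
the maximum value persisted in state simply becomes a predicate on the next execution:

# Conceptual sketch -- the highest value seen for the max-value column is kept
# in state and filters the next run so only new rows are fetched.
def incremental_query(table, max_value_column, last_max=None):
    query = f"SELECT * FROM {table}"
    if last_max is not None:
        query += f" WHERE {max_value_column} > {last_max}"
    return query

print(incremental_query("orders", "id"))         # first run: whole table
print(incremental_query("orders", "id", 12345))  # later runs: only new rows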

Re: arbitrary queries, that has been proposed and a solution submitted [1].
In general, it seems a popular concept is that you can provide whatever
query you want, along with whatever columns you want to keep track of, and
somehow we'd always be able to generate the appropriate SQL, even with all
the discrepancies between drivers, dialects, etc. It seems easy to
implement (from a user perspective), but once you have to touch the SQL you
need to parse it, then you need to consider the dialects, etc. which is a
common problem for many software projects. Or else the user has to generate
the appropriate SQL but somehow specify which state is persisted for the
next run. Not saying it can't be done, but as you say it is meant for more
advanced users.

Basically, I think you're describing the convergence of ExecuteSQL with
QueryDatabaseTable, and IMO I think it might be better to have a third
processor that can do both, for the advanced users that don't mind fiddling
with configuration when things go wrong. The existing processors are for
the general user, to offer some incremental-fetch / layman-CDC capabilities
to cover a large number of basic-to-intermediate use cases.

Regards,
Matt

[1] https://github.com/apache/nifi/pull/2162

On Tue, Mar 13, 2018 at 10:33 PM, Márcio Faria 
wrote:

> Matt,
>
> Just curious: Why is it necessary to identify that global state using
> catalog, schema, table or column names? Those are database concepts. As a
> user, I'd prefer to have some flexibility here. Surely the component needs
> to know how to identify the database object, but beyond that, we should be
> free to choose how to identify that "global" state ourselves. Would that be
> feasible, maybe as an optional attribute that by default would be set with
> the database.table.column string?
>
> I have another example, this time regarding the "while" attribute of
> QueryDatabaseTable. Simply put, it's not enough. I have data that for some
> reason was originally inserted out of order into a source table (or the
> database just decided to show it like that), and without an "order by"
> clause I can't avoid having gaps in the data I'm trying to extract from it
> now. Not a total blocker since I could use a little SQL injection to work
> around the limitation, but it should not be like that. What if one needs a
> more elaborate "select", for example? I would be far preferable to trust
> the user and let them inform the whole SQL command, of course with
> placeholders so NiFi could properly set the values under its control. An
> optional attribute that would take precedence over the "while" when
> provided would be enough. I'd expect that kind of thing to be not so hard
> to implement, and it would make the processor much more powerful for more
> advanced users without complicating the lives of those who are happy with a
> simpler configuration.
>
> In general, when dealing with SQL and databases, the fewer assumptions we
> make about how the component is going to be used, the better. SQL is very
> powerful, and its power should be fully available to this kind of
> processor, IMHO.
>
> What do you think?
>
> Thank you,
>
> Marcio
>
>
> On Tuesday, March 13, 2018 8:56 PM, Matt Burgess 
> wrote:
>
>
> Raman,
>
> Originally, we stored state only based on column names, which could
> obviously cause problems when you have two different tables with the same
> column name. However this was because the original DB Fetch processors
> (e.g., QueryDatabaseTable, GenerateTableFetch) did not accept incoming flow
> files and thus acted on a single table.  Then we added the ability for
> GenerateTableFetch to accept incoming flow files, and realized we should be
> storing state at least based on table name + column name, since GTF might
> get multiple tables in.  I believe you are running into the issue where
> we'd need to qualify the state based on database name + table name + column
> name, and please feel free to write up a Jira for that improvement.  The
> cutoff at table name was a tradeoff against complexity, as a database might
> not be fully-qualified by its name either (imagine multiple hosts with
> replicated DBs inside, then how do we know that two hosts don't point at
> the same place?).
>
> For your use case, I think we'd need to store state by the aforementioned
> "better-qualified" key, but that might be the limit to our name
> qualification. We will have to deal with backwards-compatibility as we did
> before we added table names, but since we have precedent I wouldn't think
> it would be too difficult to implement.

Re: Issues with QueryDatabaseTable processor's State

2018-03-13 Thread Márcio Faria
Matt, 
Just curious: Why is it necessary to identify that global state using catalog, 
schema, table or column names? Those are database concepts. As a user, I'd 
prefer to have some flexibility here. Surely the component needs to know how to 
identify the database object, but beyond that, we should be free to choose how 
to identify that "global"state ourselves. Would that be feasible, maybe as an 
optional attribute that by default would be set with the database.table.column 
string?
I have another example, this time regarding the "while" attribute of 
QueryDatabaseTable. Simply put, it's not enough. I have data that for some 
reason was originally inserted out of order into a source table (or the 
database just decided to show it like that), and without an "order by" clause I 
can't avoid having gaps in the data I'm trying to extract from it now. Not a 
total blocker since I could use a little SQL injection to work around the 
limitation, but it should not be like that. What if one needs a more elaborate 
"select", for example? I would be far preferable to trust the user and let them 
inform the whole SQL command, of course with placeholders so NiFi could 
properly set the values under its control. An optional attribute that would 
take precedence over the "while" when informed would be enough. I'd expect that 
kind of thing to be not so hard to implement, and it wold make the processor 
much more powerful for more advanced users without complicating the lives of 
those who are happy with a simpler configuration.
In general, when dealing with SQL and databases, the fewer assumptions we make 
about how the component is going to be used, the better. SQL is very powerful, 
and its power should be fully available to this kind of processor, IMHO.
What do you think?
Thank you,

Marcio 

On Tuesday, March 13, 2018 8:56 PM, Matt Burgess  
wrote:
 

 Raman,
Originally, we stored state only based on column names, which could obviously 
cause problems when you have two different tables with the same column name. 
However this was because the original DB Fetch processors (e.g., QueryDatabaseTable, 
GenerateTableFetch) did not accept incoming flow files and thus acted on 
a single table.  Then we added the ability for GenerateTableFetch to accept 
incoming flow files, and realized we should be storing state at least based on 
table name + column name, since GTF might get multiple tables in.  I believe 
you are running into the issue where we'd need to qualify the state based on 
database name + table name + column name, and please feel free to write up a 
Jira for that improvement.  The cutoff at table name was a tradeoff against 
complexity, as a database might not be fully-qualified by its name either 
(imagine multiple hosts with replicated DBs inside, then how do we know that 
two hosts don't point at the same place?).
For your use case, I think we'd need to store state by the aforementioned 
"better-qualified" key, but that might be the limit to our name qualification. 
We will have to deal with backwards-compatibility as we did before we added 
table names, but since we have precedent I wouldn't think it would be too 
difficult to implement.
As a workaround, you might try swapping QueryDatabaseTable with 
GenerateTableFetch, and trying to distribute the flow files for a particular DB 
to a particular instance of ExecuteSQL to actually fetch the rows. You should 
be able to use RouteOnAttribute for this, assuming your table name is an 
attribute on the flow file.
Regards,
Matt

On Tue, Mar 13, 2018 at 8:30 PM, Ramaninder Singh Jhajj 
 wrote:

Hello Everyone,
I am facing an issue with QueryDatabaseTable processor.
I have 4 identical MySQL database tables on 4 different AWS instances. The 
structure is the same but the data is different. What I am trying to do is have 4 
QueryDatabaseTable processors and fetch the data from all 4 instances to 
process it further and store it in Elasticsearch. 

This is the structure of the flow; now my issue is: when any one of the 
processors runs, it stores the "Maximum-value Columns" value in the state as 
shown below, and this state is global in the cluster. 

[screenshot of the processor state omitted]

Now when the second QueryDatabaseTable processor runs, it overwrites the state 
value written by the first. I am facing an issue with maintaining state for all 4 
processors. The processors run fine without any issue but, obviously, the data 
being fetched is not consistent as the "id" column values get overwritten in the 
state.
Is there any solution to this problem? I need to do an incremental fetch on all 4 
identical tables on 4 instances in a single flow and single cluster.
Please let me know if anyone has faced a similar problem and if there is any solution 
for this.
Kind Regards,
Raman




Re: Issues with QueryDatabaseTable processor's State

2018-03-13 Thread Matt Burgess
Raman,

Originally, we stored state only based on column names, which could
obviously cause problems when you have two different tables with the same
column name. However this was because the original DB Fetch processors
(e.g., QueryDatabaseTable, GenerateTableFetch) did not accept incoming flow
files and thus acted on a single table.  Then we added the ability for
GenerateTableFetch to accept incoming flow files, and realized we should be
storing state at least based on table name + column name, since GTF might
get multiple tables in.  I believe you are running into the issue where
we'd need to qualify the state based on database name + table name + column
name, and please feel free to write up a Jira for that improvement.  The
cutoff at table name was a tradeoff against complexity, as a database might
not be fully-qualified by its name either (imagine multiple hosts with
replicated DBs inside, then how do we know that two hosts don't point at
the same place?).

For your use case, I think we'd need to store state by the aforementioned
"better-qualified" key, but that might be the limit to our name
qualification. We will have to deal with backwards-compatibility as we did
before we added table names, but since we have precedent I wouldn't think
it would be too difficult to implement.

As a workaround, you might try swapping QueryDatabaseTable with
GenerateTableFetch, and trying to distribute the flow files for a
particular DB to a particular instance of ExecuteSQL to actually fetch the
rows. You should be able to use RouteOnAttribute for this, assuming your
table name is an attribute on the flow file.

Regards,
Matt


On Tue, Mar 13, 2018 at 8:30 PM, Ramaninder Singh Jhajj <
jhajj.raman...@gmail.com> wrote:

> Hello Everyone,
>
> I am facing an issue with QueryDatabaseTable processor.
>
> I have 4 identical MySQL database tables on 4 different AWS instances. The
> structure is the same but the data is different. What I am trying to do is have 4
> QueryDatabaseTable processors and fetch the data from all 4 instances
> to process it further and store it in Elasticsearch.
>
>
> This is the structure of the flow; now my issue is:
> *When any one of the processors runs, it stores the "Maximum-value Columns"
> value in the state as shown below, and this state is global in the cluster.*
>
> [screenshot of the processor state omitted]
>
> *Now when the second QueryDatabaseTable processor runs, it overwrites the
> state value written by the first. I am facing an issue with maintaining state
> for all 4 processors. The processors run fine without any issue but, obviously,
> the data being fetched is not consistent as the "id" column values get
> overwritten in the state.*
>
> Is there any solution to this problem? I need to do an incremental fetch on
> all 4 identical tables on 4 instances in a single flow and single cluster.
>
> Please let me know if anyone has faced a similar problem and if there is any
> solution for this.
>
> Kind Regards,
> Raman
>


Issues with QueryDatabaseTable processor's State

2018-03-13 Thread Ramaninder Singh Jhajj
Hello Everyone,

I am facing an issue with QueryDatabaseTable processor.

I have 4 identical MySQL database tables on 4 different AWS instances. The
structure is the same but the data is different. What I am trying to do is have 4
QueryDatabaseTable processors and fetch the data from all 4 instances
to process it further and store it in Elasticsearch.


This is the structure of the flow; now my issue is:
*When any one of the processors runs, it stores the "Maximum-value Columns"
value in the state as shown below, and this state is global in the cluster.*

[screenshot of the processor state omitted]

*Now when the second QueryDatabaseTable processor runs, it overwrites the state
value written by the first. I am facing an issue with maintaining state for
all 4 processors. The processors run fine without any issue but, obviously, the
data being fetched is not consistent as the "id" column values get overwritten
in the state.*
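
For reference, the persisted maximum-value entries described above can be read back
through the REST API to compare what each processor has stored. A minimal sketch with
the Python requests library, assuming an unsecured node and hypothetical processor ids:

import requests

BASE = "http://localhost:8080/nifi-api"  # assumed unsecured node

# Hypothetical ids of the four QueryDatabaseTable processors.
qdt_ids = [
    "11111111-0162-1000-0000-000000000001",
    "22222222-0162-1000-0000-000000000002",
]

for pid in qdt_ids:
    state = requests.get(f"{BASE}/processors/{pid}/state").json()
    cluster_state = state["componentState"].get("clusterState") or {}
    entries = cluster_state.get("state", [])
    # Print each processor's persisted keys/values (e.g. the "id" maximum).
    print(pid, {e["key"]: e["value"] for e in entries})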

Is there any solution to this problem? I need to do an incremental fetch on
all 4 identical tables on 4 instances in a single flow and single cluster.

Please let me know if anyone has faced a similar problem and if there is any
solution for this.

Kind Regards,
Raman


Re: Error when executing NiFi REST API PUT /processors/{id} call

2018-03-13 Thread Daniel Chaffelson
Hi Vitaly,
To give a more detailed example of what Matt is saying, here's the Python
version of what you are doing from the NiPyApi test suite.
In this test we change the scheduling period of a processor to 3s:

processor = nipyapi.canvas.get_processor('someprocessor')
update = nifi.ProcessorConfigDTO(
    scheduling_period='3s'
)
return nipyapi.nifi.ProcessorsApi().update_processor(
    id=processor.id,
    body=nipyapi.nifi.ProcessorEntity(
        component=nipyapi.nifi.ProcessorDTO(
            config=update,
            id=processor.id
        ),
        revision=processor.revision,
    )
)

I found it very helpful to limit the information passed to the API to
exactly what it needs, as Matt mentioned, to avoid unexpected behaviour.
https://github.com/Chaffelson/nipyapi

Hope this helps,
Dan

On Tue, Mar 13, 2018 at 7:09 PM Matt Gilman  wrote:

> Vitaly,
>
> I believe that error is being generated by Jackson which handles the JSON
> (de)serialization. Often times there is additional information that gets
> logged to your logs/nifi-user.log. Check there to see if there is anything
> helpful.
>
> By any chance, are you attempting to pass the status back into the update
> call? The entity object holds information about the component (the
> configuration, the revision, status, permissions, etc). When performing an
> update request you only need to pass in the revision (RevisionDTO) and the
> configuration (ProcessorDTO). Check out the Developer Tools in your browser
> to see various requests in action.
>
> Hope this helps.
>
> Matt
>
> On Tue, Mar 13, 2018 at 2:45 PM, Vitaly Krivoy wrote:
>
>>
>>
>>
>>
>>
>>
>> I am trying to update processors for a workflow, which was previously
>> generated from a template by instantiating it. All work is done from a Java
>> program through NiFi REST API. Template holds processors in a group. Since
>> I know the template group name and the strategy by which NiFi assigns a new
>> group name when it instantiates a template (“Copy of ” <group name in a template>), I find the instantiated group by name, get its group id and
>> get the list of processors in the group as a ProcessorsEntity object. I then
>> step through the list of processors contained in ProcessorsEntity, and
>> for each processor set desired properties contained in  ProcessorEntity and
>> its linked ProcessorDTO and ProcessorConfigDTO classes. I then set ClientId
>> in the Revision object like this:
>> proc.getRevision().setClientId(restClient.getClientId());
>>
>> Here *proc* is ProcessorEntity and restClient is a custom class which
>> contains all code necessary to communicate with NiFi REST API.
>>
>> At this point I am trying to update the processor through
>> PUT/processors/{id} call, passing it modified data in ProcessorEntity, the
>> same one that I got from ProcessorsEntity and update in place, rather
>> than updating a copy. I figured that there is no need to do it for a DTO
>> object.
>> When I execute PUT/processors/{id} call, I get an exception, which I need
>> the help with. Here is what I see in Eclipse.
>>
>> Exception when calling: ProcessorsApi#updateProcessor
>>
>> Response body: Text '01/01/1970 13:04:02 EST' could not be parsed at
>> index 2 (through reference chain:
>> org.apache.nifi.web.api.entity.ProcessorEntity["status"]->org.apache.nifi.web.api.dto.status.ProcessorStatusDTO["statsLastRefreshed"])
>>
>> io.swagger.client.ApiException: Bad Request
>>
>>at
>> io.swagger.client.ApiClient.handleResponse(ApiClient.java:1058)
>>
>>at io.swagger.client.ApiClient.execute(ApiClient.java:981)
>>
>>at
>> io.swagger.client.api.ProcessorsApi.updateProcessorWithHttpInfo(ProcessorsApi.java:707)
>>
>>at
>> io.swagger.client.api.ProcessorsApi.updateProcessor(ProcessorsApi.java:692)
>>
>>at
>> com.bdss.nifi.trickle.NiFiRestClient.updateProcessor(NiFiRestClient.java:149)
>>
>>at
>> com.bdss.nifi.trickle.TemplateConfigurator.configureGroupProcessors(TemplateConfigurator.java:84)
>>
>>at
>> com.bdss.nifi.trickle.MdmAzureConfigurator.configureGroup(MdmAzureConfigurator.java:89)
>>
>>at
>> com.bdss.nifi.trickle.Deployer.deployTemplate(Deployer.java:52)
>>
>>at com.bdss.nifi.trickle.Trickle.main(Trickle.java:132)
>>
>>
>>
>> Notes:
>>
>> Everything that begins with com.bdss.nifi.trickle are classes in the Java
>> application which I am implementing.
>> To access NiFi REST API I am using this REST client:
>> https://github.com/simplesteph/nifi-api-client-java, which itself relies
>> on Swagger.
>>
>>
>>
>> Many thanks for any tips.
>>
>>
>>
>> STATEMENT OF CONFIDENTIALITY The information contained in this email
>> message and any attachments may be confidential and legally privileged and
>> is intended for the use of the addressee(s) only. If you are not an

Re: NiFi https/ssl configuration

2018-03-13 Thread Andy LoPresto
Prashanth,

The command you ran to generate client certificates did not have a space 
between “CN=admin,” and “OU=NIFI” in the certificate DN. This DN must match 
exactly the Initial Admin Identity you configure in authorizers.xml, which it 
does not. Either change the IAI to match the certificate DN and remove 
users.xml and authorizations.xml and restart NiFi, or use the TLS Toolkit to 
regenerate a client certificate with the DN that you put in authorizers.xml.
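
For illustration only, the matching entries live in conf/authorizers.xml; a fragment
using the property and class names from the default NiFi 1.x file-based providers, with
the identity written exactly as the certificate DN (no space, matching the toolkit
command quoted below):

<userGroupProvider>
    <identifier>file-user-group-provider</identifier>
    <class>org.apache.nifi.authorization.FileUserGroupProvider</class>
    <property name="Users File">./conf/users.xml</property>
    <property name="Initial User Identity 1">CN=admin,OU=NIFI</property>
</userGroupProvider>
<accessPolicyProvider>
    <identifier>file-access-policy-provider</identifier>
    <class>org.apache.nifi.authorization.FileAccessPolicyProvider</class>
    <property name="User Group Provider">file-user-group-provider</property>
    <property name="Authorizations File">./conf/authorizations.xml</property>
    <property name="Initial Admin Identity">CN=admin,OU=NIFI</property>
</accessPolicyProvider>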


Andy LoPresto
alopre...@apache.org
alopresto.apa...@gmail.com
PGP Fingerprint: 70EC B3E5 98A6 5A3F D3C4  BACE 3C6E F65B 2F7D EF69

> On Mar 13, 2018, at 5:24 AM, V, Prashanth (Nokia - IN/Bangalore) 
>  wrote:
> 
> Hi Team,
> 
> I did the following steps to configure SSL for NiFi:
> Ran `bin/tls-toolkit.sh standalone -n 'hostname' -C 'CN=admin,OU=NIFI' -o 
> ./target`
> Copied nifi.properties and the keystore & truststore JKS files under the nifi/conf folder
> Updated authorizers.xml with
> [screenshot of the authorizers.xml entry omitted]
> Then restarted NiFi
> 
> I was getting an error like ‘No applicable policies could be found. Contact the 
> system administrator.’
> Then I just restarted NiFi again, and the error went away. I am seeing this 
> behaviour every time I delete the existing users.xml & authorizers.xml and 
> restart NiFi ☹.
> 
> Is it NiFi default behaviour? Please help me in resolving this issue.
> 
> Thanks & Regards,
> Prashanth V





Re: InvokeHttp -- StandardSSLContextService Validator Exception

2018-03-13 Thread Andy LoPresto
Pat,

That error means that NiFi could not find a valid trusted certificate for the 
hostname in question within the provided truststore. Understanding that the 
system in question may be on a limited network, can you please document what 
“the certs work when I use curl” means? Sometimes people include flags in curl 
that sidestep certain verification steps. You can also use the s_client tool 
provided within OpenSSL to verify the hostname and certificate exchange.

In general, you should be able to use a browser tool or s_client to show the 
certificate(s) being presented by the endpoint, and verify that the Subject 
Public Key Identifier of one or more of those certificates matches one listed 
in your truststore ($ keytool -list -v -keystore my_truststore.jks). Some other 
good things to verify:

* the certificate has validity dates that are currently active
* the certificate presents the proper hostname/IP address that the remote 
service is running on. Ensure any alternates you want to resolve are in the 
Subject Alternative Names entry
* you only need the private key in a keystore (and the keystore at all) if you 
are using TLS mutual authentication (i.e. NiFi presents a client certificate 
for authentication to be verified by the remote service)

Let us know if these steps help and you have further information.

$ openssl s_client -connect <host:port> -debug -state -cert <client_cert.pem> \
  -key <client_key.pem> -CAfile <ca_certs.pem>

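The same trust check can also be scripted. A minimal sketch in Python, assuming the
truststore's CA certificates have been exported to a PEM file (paths and endpoint here
are hypothetical):

import socket
import ssl

host, port = "service.example.com", 8443          # hypothetical remote endpoint
ctx = ssl.create_default_context(cafile="truststore_cas.pem")  # exported CA certs
# For TLS mutual authentication, the client certificate/key could also be loaded:
# ctx.load_cert_chain(certfile="client_cert.pem", keyfile="client_key.pem")

try:
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            subject = dict(item[0] for item in tls.getpeercert()["subject"])
            print("Chain trusted; server subject:", subject)
except ssl.SSLError as exc:
    # The same class of failure NiFi surfaces as "unable to find valid certification path"
    print("Verification failed:", exc)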


Andy LoPresto
alopre...@apache.org
alopresto.apa...@gmail.com
PGP Fingerprint: 70EC B3E5 98A6 5A3F D3C4  BACE 3C6E F65B 2F7D EF69

> On Mar 13, 2018, at 12:16 PM, Jones, Patrick L.  wrote:
> 
> The best I could do right now is:
> 
> invokehttp.java.exception.class
> javax.net.ssl.SSLHandshakeException
> 
> invokehttp.java.exception.message
> sun.security.validator.ValidatorException: PKIX path building failed:
> sun.security.provider.certpath.SunCertPathBuilderException: unable to find 
> valid certification path to requested target
> 
> Any thoughts?
> 
> Pat
> 
> From: Jorge Machado [mailto:jom...@me.com ]
> Sent: Tuesday, March 13, 2018 9:48 AM
> To: users@nifi.apache.org 
> Subject: Re: InvokeHttp -- StandardSSLContextService Validator Exception
> 
> Any trace for us ?
> Working Example:
> 
> 
> Jorge Machado
> 
> 
> 
> 
> 
> 
> On 13 Mar 2018, at 13:11, Jones, Patrick L.  > wrote:
> 
> Howdy,
> 
> I’m using a StandardSSLContextService with InvokeHttp and I get a 
> ValidatorException ‘unable to find valid certification path to requested 
> target.’  The certs work when I use curl.  I put the CA cert and the public 
> key cert in the StandardSSLContextService truststore and the private key in 
> the keystore.  Any thoughts on how to make this work?
> 
> Thank you,
> 
> Pat





RE: InvokeHttp -- StandardSSLContextService Validator Exception

2018-03-13 Thread Jones, Patrick L.
The best I could do right now is:

invokehttp.java.exception.class
javax.net.ssl.SSLHandshakeException

invokehttp.java.exception.message
sun.security.validator.ValidatorException: PKIX path building failed:
sun.security.provider.certpath.SunCertPathBuilderException: unable to find 
valid certification path to requested target

Any thoughts?

Pat

From: Jorge Machado [mailto:jom...@me.com]
Sent: Tuesday, March 13, 2018 9:48 AM
To: users@nifi.apache.org
Subject: Re: InvokeHttp -- StandardSSLContextService Validator Exception

Any trace for us ?
Working Example:
[screenshot of a working StandardSSLContextService configuration omitted]

Jorge Machado






On 13 Mar 2018, at 13:11, Jones, Patrick L. 
> wrote:

Howdy,

I’m using a StandardSSLContextService with InvokeHttp and I get a 
ValidatorException ‘unable to find valid certification path to requested 
target.’  The certs work when I use curl.  I put the CA cert and the public key 
cert in the StandardSSLContextService truststore and the private key in the 
keystore.  Any thoughts on how to make this work?

Thank you,

Pat




Re: Error when executing NiFi REST API PUT /processors/{id} call

2018-03-13 Thread Matt Gilman
Vitaly,

I believe that error is being generated by Jackson which handles the JSON
(de)serialization. Often times there is additional information that gets
logged to your logs/nifi-user.log. Check there to see if there is anything
helpful.

By any chance, are you attempting to pass the status back into the update
call? The entity object holds information about the component (the
configuration, the revision, status, permissions, etc). When performing an
update request you only need to pass in the revision (RevisionDTO) and the
configuration (ProcessorDTO). Check out the Developer Tools in your browser
to see various requests in action.
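
To make that concrete, here is a minimal sketch with the Python requests library that
sends back only the revision and the changed configuration; the unsecured node and
processor id are hypothetical:

import requests

BASE = "http://localhost:8080/nifi-api"           # assumed unsecured node
proc_id = "01621000-aaaa-bbbb-cccc-dddddddddddd"  # hypothetical processor id

current = requests.get(f"{BASE}/processors/{proc_id}").json()

# Send back only the revision plus the fields being changed -- no status,
# permissions, or other read-only parts of the entity.
update = {
    "revision": current["revision"],
    "component": {
        "id": proc_id,
        "config": {"schedulingPeriod": "3 sec"}
    }
}
resp = requests.put(f"{BASE}/processors/{proc_id}", json=update)
resp.raise_for_status()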

Hope this helps.

Matt

On Tue, Mar 13, 2018 at 2:45 PM, Vitaly Krivoy 
wrote:

>
>
>
>
>
>
> I am trying to update processors for a workflow, which was previously
> generated from a template by instantiating it. All work is done from a Java
> program through NiFi REST API. Template holds processors in a group. Since
> I know the template group name and the strategy by which NiFi assigns a new
> group name when it instantiates a template (“Copy of ” <group name in a template>), I find the instantiated group by name, get its group id and
> get the list of processors in the group as a ProcessorsEntity object. I then
> step through the list of processors contained in ProcessorsEntity, and
> for each processor set desired properties contained in  ProcessorEntity and
> its linked ProcessorDTO and ProcessorConfigDTO classes. I then set ClientId
> in the Revision object like this:
> proc.getRevision().setClientId(restClient.getClientId());
>
> Here *proc* is ProcessorEntity and restClient is a custom class which
> contains all code necessary to communicate with NiFi REST API.
>
> At this point I am trying to update the processor through
> PUT/processors/{id} call, passing it modified data in ProcessorEntity, the
> same one that I got from ProcessorsEntity and update in place, rather
> than updating a copy. I figured that there is no need to do it for a DTO
> object.
> When I execute PUT/processors/{id} call, I get an exception, which I need
> the help with. Here is what I see in Eclipse.
>
> Exception when calling: ProcessorsApi#updateProcessor
>
> Response body: Text '01/01/1970 13:04:02 EST' could not be parsed at index
> 2 (through reference chain: org.apache.nifi.web.api.
> entity.ProcessorEntity["status"]->org.apache.nifi.web.api.dto.status.
> ProcessorStatusDTO["statsLastRefreshed"])
>
> io.swagger.client.ApiException: Bad Request
>
>at io.swagger.client.ApiClient.
> handleResponse(ApiClient.java:1058)
>
>at io.swagger.client.ApiClient.execute(ApiClient.java:981)
>
>at io.swagger.client.api.ProcessorsApi.
> updateProcessorWithHttpInfo(ProcessorsApi.java:707)
>
>at io.swagger.client.api.ProcessorsApi.updateProcessor(
> ProcessorsApi.java:692)
>
>at com.bdss.nifi.trickle.NiFiRestClient.updateProcessor(
> NiFiRestClient.java:149)
>
>at com.bdss.nifi.trickle.TemplateConfigurator.
> configureGroupProcessors(TemplateConfigurator.java:84)
>
>at com.bdss.nifi.trickle.MdmAzureConfigurator.
> configureGroup(MdmAzureConfigurator.java:89)
>
>at com.bdss.nifi.trickle.Deployer.deployTemplate(
> Deployer.java:52)
>
>at com.bdss.nifi.trickle.Trickle.main(Trickle.java:132)
>
>
>
> Notes:
>
> Everything that begins with com.bdss.nifi.trickle are classes in the Java
> application which I am implementing.
> To access NiFi REST API I am using this REST client: https://github.com/
> simplesteph/nifi-api-client-java, which itself relies on Swagger.
>
>
>
> Many thanks for any tips.
>
>
>
> STATEMENT OF CONFIDENTIALITY The information contained in this email
> message and any attachments may be confidential and legally privileged and
> is intended for the use of the addressee(s) only. If you are not an
> intended recipient, please: (1) notify me immediately by replying to this
> message; (2) do not use, disseminate, distribute or reproduce any part of
> the message or any attachment; and (3) destroy all copies of this message
> and any attachments.
>


Error when executing NiFi REST API PUT /processors/{id} call

2018-03-13 Thread Vitaly Krivoy



I am trying to update processors for a workflow, which was previously generated 
from a template by instantiating it. All work is done from a Java program 
through the NiFi REST API. The template holds processors in a group. Since I know the 
template group name and the strategy by which NiFi assigns a new group name when it 
instantiates a template ("Copy of " <group name in the template>), I 
find the instantiated group by name, get its group id and get the list of processors in 
the group as a ProcessorsEntity object. I then step through the list of processors 
contained in ProcessorsEntity, and for each processor set the desired properties 
contained in ProcessorEntity and its linked ProcessorDTO and 
ProcessorConfigDTO classes. I then set ClientId in the Revision object like this:
proc.getRevision().setClientId(restClient.getClientId());
Here proc is ProcessorEntity and restClient is a custom class which contains 
all code necessary to communicate with NiFi REST API.
At this point I am trying to update the processor through the PUT /processors/{id} 
call, passing it the modified data in ProcessorEntity, the same one that I got from 
ProcessorsEntity and updated in place, rather than updating a copy. I figured 
that there is no need to do that for a DTO object.
When I execute PUT/processors/{id} call, I get an exception, which I need the 
help with. Here is what I see in Eclipse.
Exception when calling: ProcessorsApi#updateProcessor
Response body: Text '01/01/1970 13:04:02 EST' could not be parsed at index 2 
(through reference chain: 
org.apache.nifi.web.api.entity.ProcessorEntity["status"]->org.apache.nifi.web.api.dto.status.ProcessorStatusDTO["statsLastRefreshed"])
io.swagger.client.ApiException: Bad Request
   at 
io.swagger.client.ApiClient.handleResponse(ApiClient.java:1058)
   at io.swagger.client.ApiClient.execute(ApiClient.java:981)
   at 
io.swagger.client.api.ProcessorsApi.updateProcessorWithHttpInfo(ProcessorsApi.java:707)
   at 
io.swagger.client.api.ProcessorsApi.updateProcessor(ProcessorsApi.java:692)
   at 
com.bdss.nifi.trickle.NiFiRestClient.updateProcessor(NiFiRestClient.java:149)
   at 
com.bdss.nifi.trickle.TemplateConfigurator.configureGroupProcessors(TemplateConfigurator.java:84)
   at 
com.bdss.nifi.trickle.MdmAzureConfigurator.configureGroup(MdmAzureConfigurator.java:89)
   at 
com.bdss.nifi.trickle.Deployer.deployTemplate(Deployer.java:52)
   at com.bdss.nifi.trickle.Trickle.main(Trickle.java:132)

Notes:
Everything that begins with com.bdss.nifi.trickle are classes in the Java 
application which I am implementing.
To access NiFi REST API I am using this REST client: 
https://github.com/simplesteph/nifi-api-client-java, which itself relies on 
Swagger.

Many thanks for any tips.


STATEMENT OF CONFIDENTIALITY The information contained in this email message 
and any attachments may be confidential and legally privileged and is intended 
for the use of the addressee(s) only. If you are not an intended recipient, 
please: (1) notify me immediately by replying to this message; (2) do not use, 
disseminate, distribute or reproduce any part of the message or any attachment; 
and (3) destroy all copies of this message and any attachments.


Re: Reuse of templates and formatting functions

2018-03-13 Thread Anil Rai
Thanks Bryan. That helps.
We were also thinking of exposing standard formatting functions as an API
that other data flows can call and use.

Regards
Anil


On Tue, Mar 13, 2018 at 1:13 PM, Bryan Bende  wrote:

> Anil,
>
> It sounds like what you are interested in would be some combination of
> the referenceable process groups [1] or worm-hole connections [2],
> which were ideas that were discussed, but haven't been implemented.
>
> You might be able to make your situation slightly better by using the
> NiFi Registry instead of templates...
>
> As an example, you could design a stand-alone process group for your
> retry logic and place it under version control in the registry. Then
> you import this flow from the registry into various points of your
> overall flow.
>
> You could then go back to the original standalone process group and
> make a change to the logic and save version 2 to the registry. At this
> point all the other places you imported it to will show an upgrade and
> you can change their version to v2 to bring in the change.
>
> It still requires manual upgrading of all the versioned process
> groups, but at least they are all tied back to the same version, whereas
> templates are not linked to anything once they are instantiated.
>
> -Bryan
>
> [1] https://cwiki.apache.org/confluence/display/NIFI/
> Reference-able+Process+Groups
> [2] https://cwiki.apache.org/confluence/display/NIFI/Wormhole+Connections
>
> On Tue, Mar 13, 2018 at 10:48 AM, Anil Rai  wrote:
> > Team,
> >
> > My question is around re-use.
> > Templates : We are creating flows for different use cases. Most of them
> have
> > a requirement for retry. Say the API call fails, we would like to retry 3
> > time after certain interval before giving up. We have created a retry
> > template that works perfect.
> > Q: If I have to use this template, then i have to import this template in
> > all the use case flows. If we have a change to this template, then i
> would
> > be forced to go to every flow that uses this template and make that
> change.
> > Date, String, Number formatting : I have seen couple of ways of doing
> this
> > formatting. Like in UpdateAttribute, Jolt transformation, ExecuteScript.
> > Q: Like in Java, can we have this utility in one place that can be used
> by
> > all my flows at run time by passing the required Parameters?
> >
> > Thanks
> > Anil
> >
> >
>


Re: Reuse of templates and formatting functions

2018-03-13 Thread Bryan Bende
Anil,

It sounds like what you are interested in would be some combination of
the referenceable process groups [1] or worm-hole connections [2],
which were ideas that were discussed, but haven't been implemented.

You might be able to make your situation slightly better by using the
NiFi Registry instead of templates...

As an example, you could design a stand-alone process group for your
retry logic and place it under version control in the registry. Then
you import this flow from the registry into various points of your
overall flow.

You could then go back to the original standalone process group and
make a change to the logic and save version 2 to the registry. At this
point all the other places you imported it to will show an upgrade and
you can change their version to v2 to bring in the change.

It still requires manual upgrading of all the versioned process
groups, but at least they are all tied back to the same version, whereas
templates are not linked to anything once they are instantiated.

-Bryan

[1] 
https://cwiki.apache.org/confluence/display/NIFI/Reference-able+Process+Groups
[2] https://cwiki.apache.org/confluence/display/NIFI/Wormhole+Connections

On Tue, Mar 13, 2018 at 10:48 AM, Anil Rai  wrote:
> Team,
>
> My question is around re-use.
> Templates : We are creating flows for different use cases. Most of them have
> a requirement for retry. Say the API call fails, we would like to retry 3
> times after a certain interval before giving up. We have created a retry
> template that works perfectly.
> Q: If I have to use this template, then I have to import this template in
> all the use case flows. If we have a change to this template, then I would
> be forced to go to every flow that uses this template and make that change.
> Date, String, Number formatting : I have seen a couple of ways of doing this
> formatting. Like in UpdateAttribute, Jolt transformation, ExecuteScript.
> Q: Like in Java, can we have this utility in one place that can be used by
> all my flows at run time by passing the required parameters?
>
> Thanks
> Anil
>
>


Reuse of templates and formatting functions

2018-03-13 Thread Anil Rai
Team,

My question is around re-use.
Templates : We are creating flows for different use cases. Most of them
have a requirement for retry. Say the API call fails, we would like to
retry 3 times after a certain interval before giving up. We have created a
retry template that works perfectly.
Q: If I have to use this template, then I have to import this template in
all the use case flows. If we have a change to this template, then I would
be forced to go to every flow that uses this template and make that change.
Date, String, Number formatting : I have seen a couple of ways of doing this
formatting. Like in UpdateAttribute, Jolt transformation, ExecuteScript.
Q: Like in Java, can we have this utility in one place that can be used by
all my flows at run time by passing the required parameters?

Thanks
Anil


RE: InvokeHttp -- StandardSSLContextService Validator Exception

2018-03-13 Thread Jones, Patrick L.
The working example looks the same as mine.  I’ll post a trace when I get to 
that computer.


Pat

From: Jorge Machado [mailto:jom...@me.com]
Sent: Tuesday, March 13, 2018 9:48 AM
To: users@nifi.apache.org
Subject: Re: InvokeHttp -- StandardSSLContextService Validator Exception

Any trace for us ?
Working Example:
[screenshot of a working StandardSSLContextService configuration omitted]

Jorge Machado






On 13 Mar 2018, at 13:11, Jones, Patrick L. 
> wrote:

Howdy,

I’m using a StandardSSLContextService with InvokeHttp and I get a 
ValidatorException ‘unable to find valid certification path to requested 
target.’  The certs work when I use curl.  I put the CA cert and the public key 
cert in the StandardSSLContextService truststore and the private key in the 
keystore.  Any thoughts on how to make this work?

Thank you,

Pat




Re: InvokeHttp -- StandardSSLContextService Validator Exception

2018-03-13 Thread Jorge Machado
Any trace for us ? 
Working Example: 


Jorge Machado





> On 13 Mar 2018, at 13:11, Jones, Patrick L.  wrote:
> 
> Howdy,
>  
> I’m using a StandardSSLContextService with InvokeHttp and I get a 
> ValidatorException ‘unable to find valid certification path to requested 
> target.’  The certs work when I use curl.  I put the CA cert and the public 
> key cert in the StandardSSLContextService truststore and the private key in 
> the keystore.  Any thoughts on how to make this work?
>  
> Thank you,
>  
> Pat
>  



RE: NiFi https/ssl configuration

2018-03-13 Thread V, Prashanth (Nokia - IN/Bangalore)
Hi Team,

I did the following steps to configure SSL for NiFi:

  *   Ran `bin/tls-toolkit.sh standalone -n 'hostname' -C 'CN=admin,OU=NIFI' -o 
./target`
  *   Copied nifi.properties and the keystore & truststore JKS files under the nifi/conf folder
  *   Updated authorizers.xml with

[screenshot of the authorizers.xml entry omitted]

  *   Then restarted NiFi


I was getting an error like ‘No applicable policies could be found. Contact the 
system administrator.’

Then I just restarted NiFi again, and the error went away. I am seeing this 
behaviour every time I delete the existing users.xml & authorizers.xml and 
restart NiFi ☹.



Is it NiFi default behaviour? Please help me in resolving this issue.

Thanks & Regards,
Prashanth V



InvokeHttp -- StandardSSLContextService Validator Exception

2018-03-13 Thread Jones, Patrick L.

Howdy,

I'm using a StandardSSLContextService with InvokeHttp and I get a 
ValidatorException 'unable to find valid certification path to requested 
target.'  The certs work when I use curl.  I put the CA cert and the public key 
cert in the StandardSSLContextService truststore and the private key in the 
keystore.  Any thoughts on how to make this work?

Thank you,

Pat