Re: Disconnect nifi node cluster via API

2018-02-22 Thread Charlie Meyer
On https://nifi.apache.org/docs/nifi-docs/rest-api/index.html, take a look
at DELETE /controller/cluster/nodes/{id}.

That might have what you need.
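
For example, something like this does it with plain Java 11+ (a rough sketch,
assuming an unsecured node at http://localhost:8080 and a node id taken from
GET /controller/cluster; the same docs also list PUT /controller/cluster/nodes/{id}
with a DISCONNECTING status if you want to disconnect a node without removing it):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class RemoveClusterNode {
    public static void main(String[] args) throws Exception {
        // node id (UUID) as reported by GET /nifi-api/controller/cluster
        String nodeId = args[0];
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8080/nifi-api/controller/cluster/nodes/" + nodeId))
                .DELETE()
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + ": " + response.body());
    }
}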

-Charlie

On Feb 22, 2018 02:19, "Jorge Machado"  wrote:

Hi guys,

Is there an API to disconnect a node from the cluster? This would be great
for securely automating a rolling restart.


Jorge


Re: configuration variables?

2018-08-06 Thread Charlie Meyer
You might want to take a look at utilizing the variable registry for this:
https://community.hortonworks.com/articles/155823/introduction-into-process-group-variables.html
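
For example (just an illustration, with a made-up value), you define the URL
once as a variable on the parent process group and reference it with
expression language wherever it is needed:

    Process group variable:       db.url = jdbc:postgresql://db-host:5432/mydb
    Processor/service property:   Database Connection URL = ${db.url}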

On Mon, Aug 6, 2018 at 4:07 PM lvic4...@gmail.com 
wrote:

> I have several processors in my flow using the same database URL
> parameter. Is there any method in NiFi to read configuration parameter(s)
> from ZooKeeper, or something similar?
> Thank you,
>


Re: Nifi driven by external configuration

2018-04-18 Thread Charlie Meyer
To solve this problem, I have written code that sets configuration
properties in the variable registry on each NiFi instance; each flow that
is provisioned then has its configuration reference those variables. This
way we are able to deploy NiFi instances for different environments while
maintaining common logic.

On Wed, Apr 18, 2018 at 6:39 AM, Carlos Manuel Fernandes (DSI) <
carlos.antonio.fernan...@cgd.pt> wrote:

> NiFi's simple principles of flows, flow attributes, processors and
> processor properties are very powerful and permit a lot of great
> combinations.
>
>
>
> The problem I see is the tendency to repeat flows that do the same thing,
> because some properties are hard-wired (properties without expression
> language support) and there is no processor to read external configuration
> into attributes, which could then drive processor properties.
>
>
>
> In my use cases I try to address this problem by building services with
> this skeleton:
>
> HandleHttpRequest (receives some key) -> GetExternalConfig (using the key) ->
> other NiFi processors with properties based on attributes read from the
> external config -> HandleHttpResponse
>
> GetExternalConfig is a custom script processor which turns a SQL query
> result (all the columns of the first row) into attributes on the current
> flow file.
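>
> Roughly, its onTrigger looks like the sketch below (simplified, not the exact
> script; CONFIG_DB is a DBCPService property of the processor, REL_SUCCESS and
> REL_FAILURE are the usual relationships, and the table and column names here
> are made up):
>
> public void onTrigger(ProcessContext context, ProcessSession session) throws ProcessException {
>     FlowFile flowFile = session.get();
>     if (flowFile == null) {
>         return;
>     }
>     final String key = flowFile.getAttribute("config.key");
>     final DBCPService dbcp = context.getProperty(CONFIG_DB).asControllerService(DBCPService.class);
>     final Map<String, String> attributes = new HashMap<>();
>     try (Connection conn = dbcp.getConnection();
>          PreparedStatement stmt = conn.prepareStatement("SELECT * FROM flow_config WHERE config_key = ?")) {
>         stmt.setString(1, key);
>         try (ResultSet rs = stmt.executeQuery()) {
>             if (rs.next()) {
>                 // every column of the first row becomes a flow file attribute
>                 ResultSetMetaData meta = rs.getMetaData();
>                 for (int i = 1; i <= meta.getColumnCount(); i++) {
>                     attributes.put(meta.getColumnName(i), rs.getString(i));
>                 }
>             }
>         }
>         flowFile = session.putAllAttributes(flowFile, attributes);
>         session.transfer(flowFile, REL_SUCCESS);
>     } catch (SQLException e) {
>         getLogger().error("Failed to read external configuration", e);
>         session.transfer(flowFile, REL_FAILURE);
>     }
> }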
>
>
>
> In this arrangement, in the zone of “other NiFi processors”, if a
> processor's properties do not support expression language it is not
> possible to use the configuration already read, which is annoying.
>
> So far I have had problems with SplitText (Line Split Count) and
> CSVRecordSetWriter (value separator, time format, etc.), but the problem
> is general.
>
>
>
> If this makes sense to more people in the community, we could think about:
>
> 1-  Creating a NiFi processor to read external configuration from
> relational databases into attributes (it probably also makes sense to read
> from other, non-relational sources).
>
> 2-  Having all processor properties accept expression language by default
> (probably some technical ones don't make sense).
>
>
>
> If not, I will continue with my custom GetExternalConfig and keep
> complaining when I find a property without expression language :)
>
>
>
> Thanks
>
>
>
> Carlos
>


Re: Security issue when programmatically trying to create DBCP Controller Service

2018-04-17 Thread Charlie Meyer
We use a swagger-generated client as well and the main difference that I
see is that I do not set

.state(ControllerServiceDTO.StateEnum.ENABLED)

but rather create it, then enable it as a next step.
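
With the swagger-generated client, the two steps look roughly like this (a
sketch; controllerServicesApi is an instance of the generated
ControllerServicesApi, and method names may differ slightly depending on how
your client was generated):

// create the service without setting a state...
ControllerServiceEntity created =
        processgroupsApi.createControllerService(groupId, controllerServiceEntity);

// ...then enable it with a second request against /controller-services/{id}
ControllerServiceEntity enableRequest = new ControllerServiceEntity();
enableRequest.setRevision(created.getRevision());

ControllerServiceDTO component = new ControllerServiceDTO();
component.setId(created.getId());
component.setState(ControllerServiceDTO.StateEnum.ENABLED);
enableRequest.setComponent(component);

controllerServicesApi.updateControllerService(created.getId(), enableRequest);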

In debugging these things, I have extensively used the Chrome developer
tools as I perform actions in the UI, then get the payloads from my Java
client to match those exactly. If a field isn't set by the UI, I don't set
it in my payloads either (and the inverse).

Hope that helps


On Tue, Apr 17, 2018 at 2:42 PM, Vitaly Krivoy 
wrote:

> Hi,
>
> I am trying to programmatically create a DBCP Controller Service from a
> Java program and I am getting an error returned from
> POST /process-groups/{id}/controller-services.
>
> Exception when calling: ControllerServicesApi#createControllerService
>
> Response body: Unable to create component to verify if it references any
> Controller Services. Contact the system administrator.
>
> io.swagger.client.ApiException: Unauthorized
>     at io.swagger.client.ApiClient.handleResponse(ApiClient.java:1058)
>     at io.swagger.client.ApiClient.execute(ApiClient.java:981)
>     at io.swagger.client.api.ProcessgroupsApi.createControllerServiceWithHttpInfo(ProcessgroupsApi.java:396)
>     at io.swagger.client.api.ProcessgroupsApi.createControllerService(ProcessgroupsApi.java:381)
>     at com.bdss.nifi.trickle.NiFiRestClient.createControllerService(NiFiRestClient.java:304)
>     at com.bdss.nifi.trickle.NiFiRestClient.createDbcp(NiFiRestClient.java:347)
>     at com.bdss.nifi.trickle.resources.NiFiResourceManager.getMdmDbcp(NiFiResourceManager.java:45)
>     at com.bdss.nifi.trickle.Trickle.createTemplateConfig(Trickle.java:57)
>     at com.bdss.nifi.trickle.Trickle.main(Trickle.java:212)
>
>
> I don’t have any security mechanisms enabled yet. I have no problems
> creating a Database Connection Pooling Controller Service from the NiFi
> GUI and using it with the ExecuteSQL processor. Below is my code which
> tries to create the DBCP Controller Service.
>
> The processgroupsApi.createControllerService(groupId,
> controllerServiceEntity) call in the createControllerService method maps
> directly to POST /process-groups/{id}/controller-services, and the rest of
> the logic should be straightforward. What am I missing in my code? Thanks
> very much!
>
>
>
> public ControllerServiceEntity createControllerService(String groupId,
>         ControllerServiceEntity controllerServiceEntity) {
>
>     try {
>         // processgroupsApi.createControllerService wraps
>         // POST /process-groups/{id}/controller-services
>         controllerServiceEntity =
>                 processgroupsApi.createControllerService(groupId, controllerServiceEntity);
>     } catch (ApiException e) {
>         printException("ControllerServicesApi#createControllerService", e);
>     }
>     return controllerServiceEntity;
> }
>
> public ControllerServiceEntity createDbcp(String groupId,
>         String dbUrl,
>         String driverClass,
>         String driverDirectory,
>         String dbUser,
>         String pwd,
>         int maxWaitTime,
>         int maxConnectionTotal,
>         String validationQuery) {
>
>     Map<String, String> props = new HashMap<>();
>
>     // property values come from the method parameters
>     props.put("Database Connection URL", dbUrl);
>     props.put("Database Driver Class Name", driverClass);
>     props.put("Database Driver Location(s)", driverDirectory);
>     props.put("Database User", dbUser);
>     props.put("Password", pwd);
>     props.put("Max Wait Time", String.valueOf(maxWaitTime));
>     props.put("Max Total Connections", String.valueOf(maxConnectionTotal));
>     props.put("Validation query", validationQuery);
>
>     RevisionDTO revision = new RevisionDTO()
>

Re: NiFi Variables

2018-03-24 Thread Charlie Meyer
Take a look at your browser's developer tools when you set variables and
mimic those calls in code. We do this using a swagger-generated client and
it works well.
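
For reference, when you change a variable in the UI it POSTs a
variable-registry update request for the process group, polls it, and then
deletes it. A rough sketch of just the initial POST in plain Java (the payload
shape here is recalled from watching the developer tools on an unsecured
instance - double-check it against what your UI actually sends):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class SetNiFiVariable {
    public static void main(String[] args) throws Exception {
        String groupId = args[0];                      // process group id
        long groupVersion = Long.parseLong(args[1]);   // current revision version of the group

        // "sftp.host" / "new-host.example.com" are just example values
        String body = "{"
                + "\"processGroupRevision\": {\"version\": " + groupVersion + "},"
                + "\"variableRegistry\": {"
                + "\"processGroupId\": \"" + groupId + "\","
                + "\"variables\": [{\"variable\": {\"name\": \"sftp.host\", \"value\": \"new-host.example.com\"}}]"
                + "}}";

        HttpRequest post = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8080/nifi-api/process-groups/" + groupId
                        + "/variable-registry/update-requests"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(post, HttpResponse.BodyHandlers.ofString());
        // the response contains an update request id/uri to poll (GET) and then clean up (DELETE)
        System.out.println(response.statusCode() + ": " + response.body());
    }
}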

On Sat, Mar 24, 2018, 20:26 scott  wrote:

> Hello community,
>
> I'm looking for a way to edit or add to the new "variables" feature
> programmatically, such as through the API or some other means. For
> instance, I'd like to use a variable to configure the remote host for my
> SFTP collection, and then be able to change the value through an automated
> job when the remote host changes. This would be especially useful for
> processors that don't allow an input relationship.
>
> Any suggestions or comments would be welcome.
>
> Thanks,
>
> Scott
>


nifi-toolkit docker images

2018-10-30 Thread Charlie Meyer
Hi,

Are Docker images for nifi-toolkit still being published regularly? The latest
tag I see at https://hub.docker.com/r/apache/nifi-toolkit/tags/ references
1.7.0. Will a 1.8.0 image be pushed, or is it recommended that users roll
their own?

Thanks!


NiFi registry: 0.2.0 or 0.3.0

2018-11-06 Thread Charlie Meyer
Hi,

On https://nifi.apache.org/registry.html I see 0.2.0 as the most recent
release, but https://cwiki.apache.org/confluence/display/NIFI/Release+Notes
shows 0.3.0 as released in September.

Is 0.2.0 or 0.3.0 the latest stable release?

Thanks!


Re: wrapping json around while keeping a single flowFile for Kafka

2018-09-21 Thread Charlie Meyer
Hi Boris,

I had to solve a very similar problem and ended up writing a custom
processor. I just quickly wrote an onTrigger method which read in the
contents of the flowfile, parsed the JSON, did the modifications, then
transferred the flowfile out with the updated body. I was originally
concerned about the performance, but that turned out to be an unfounded
worry for our use case.
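
The gist of it looked roughly like the following (a trimmed-down sketch with
Jackson rather than our exact code; REL_SUCCESS and the meta values are
stand-ins):

@Override
public void onTrigger(ProcessContext context, ProcessSession session) throws ProcessException {
    FlowFile flowFile = session.get();
    if (flowFile == null) {
        return;
    }
    final ObjectMapper mapper = new ObjectMapper();
    flowFile = session.write(flowFile, (in, out) -> {
        // the incoming content is the JSON array produced by the Avro-to-JSON conversion
        JsonNode records = mapper.readTree(in);
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < records.size(); i++) {
            ObjectNode wrapped = mapper.createObjectNode();
            wrapped.set("payload", records.get(i));
            ObjectNode meta = wrapped.putObject("meta");
            meta.put("info", "system1");
            meta.put("timestamp", Instant.now().toString());
            if (i > 0) {
                sb.append('|'); // pipe demarcator between records for PublishKafka
            }
            sb.append(mapper.writeValueAsString(wrapped));
        }
        out.write(sb.toString().getBytes(StandardCharsets.UTF_8));
    });
    session.transfer(flowFile, REL_SUCCESS);
}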

Cheers!

-Charlie

On Fri, Sep 21, 2018 at 2:25 PM Boris Tyukin  wrote:

> Hey guys,
>
> I have a flow returning thousands of records from an RDBMS; I convert the
> returned Avro to JSON and get something like below:
>
> [
>   {"col1":"value11", "col2":"value21", "col3":"value31"},
>   {"col1":"value12", "col2":"value22", "col3":"value32"},
> ...
> ]
>
> So it is still a single flowFile. Now I need to wrap every record like
> this (an oversimplified example):
>
> [
> {"payload": {"col1":"value11", "col2":"value21", "col3":"value31"},
>   "meta": {"info": "system1", "timestamp":"2010-10-01 12:23:33"}
> }|
> {"payload": {"col1":"value12", "col2":"value22", "col3":"value32"},
>   "meta": {"info": "system1", "timestamp":"2010-10-01 12:23:33"}
> }
> |
> ]
>
> Basically, I want to:
> 1) remove the root-level [] and replace the commas with a pipe (see below why)
> 2) keep a single flowFile without splitting, but wrap each source record
> under a payload dictionary and add another dictionary, meta, with some
> attributes
> 3) avoid defining the schema upfront because it might change in the future
>
> I use a pipe because I then want to publish these records to Kafka using
> the demarcator option - it works much faster for me than splitting the
> avro/json into individual flowfiles.
>
> Thanks for any ideas,
> Boris
>


Re: ExecuteSqlRecord, using SQL Server rowversion as max value column

2018-11-20 Thread Charlie Meyer
Sure thing and thanks for the quick response. I opened up an improvement
request [1]. For this table, I do not control the schema and unfortunately
this is the only column that meets the requirements for incremental
fetching. I'll keep looking into other possible workarounds pending Matt's
input.

thanks again!

[1] https://issues.apache.org/jira/browse/NIFI-5835

On Tue, Nov 20, 2018 at 11:13 PM Andy LoPresto  wrote:

> Hi Charlie,
>
> Looking at this issue briefly, it seems that the NiFi code explicitly
> enumerates the accepted datatypes, and rowversion is not among them.
> Therefore it throws an exception. I suggest you open a feature request on
> our Jira page to support this. While it seems proprietary to Microsoft SQL
> Server, the documentation page says:
>
> Is a data type that exposes automatically generated, unique binary numbers
> within a database. rowversion is generally used as a mechanism for
> version-stamping table rows. The storage size is 8 bytes. The rowversion data
> type is just an incrementing number and does not preserve a date or a time.
>
> I think we could handle this datatype the same way we handle INTEGER,
> SMALLINT, TINYINT (or TIMESTAMP, as that is the functional equivalent from
> MS SQL which is now deprecated) in that switch statement, as it is simply
> an incrementing 8 byte natural number. However, I would welcome input from
> someone like Matt Burgess to see if maybe there is a translation that can
> be done in the Microsoft-specific driver to a generic integer datatype
> before it reaches this logic. I would expect
> SQLServerResultSetMetaData#getColumnType(int column) to perform this
> translation; perhaps the version of the driver needs to be updated?
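>
> (As a plain illustration, outside any NiFi code: once you have the column's
> 8 raw bytes, something like java.nio.ByteBuffer.wrap(rowVersionBytes).getLong()
> yields a long that increments and compares exactly the way a max-value column
> needs to.)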
>
> For now, can you use a timestamp or other supported datatype to perform
> your incremental fetch?
>
>
> Andy LoPresto
> alopre...@apache.org
> alopresto.apa...@gmail.com
> PGP Fingerprint: 70EC B3E5 98A6 5A3F D3C4  BACE 3C6E F65B 2F7D EF69
>
> On Nov 20, 2018, at 9:00 PM, Charlie Meyer <
> charlie.me...@civitaslearning.com> wrote:
>
> Hi,
>
> I'm attempting to do incremental fetch from a Microsoft SQL Server
> database and would like to use rowversion [1] as my maximum value column.
> When I configured the processor to use that column, it threw an exception
> [2]. Are there known issues with using the rowversion for this purpose or
> workarounds?
>
> Thanks!
>
> Charlie
>
> [1]
> https://docs.microsoft.com/en-us/sql/t-sql/data-types/rowversion-transact-sql?view=sql-server-2017
>
> [2]
> https://github.com/apache/nifi/blob/d8d220ccb86d1797f56f34649d70a1acff278eb5/nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/AbstractDatabaseFetchProcessor.java#L456
>
>
>


ExecuteSqlRecord, using SQL Server rowversion as max value column

2018-11-20 Thread Charlie Meyer
Hi,

I'm attempting to do incremental fetch from a Microsoft SQL Server database
and would like to use rowversion [1] as my maximum value column. When I
configured the processor to use that column, it threw an exception [2]. Are
there known issues with using the rowversion for this purpose or
workarounds?

Thanks!

Charlie

[1]
https://docs.microsoft.com/en-us/sql/t-sql/data-types/rowversion-transact-sql?view=sql-server-2017

[2]
https://github.com/apache/nifi/blob/d8d220ccb86d1797f56f34649d70a1acff278eb5/nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/AbstractDatabaseFetchProcessor.java#L456