Re: EncryptContent issues after NIFI-1257 and NIFI-1259

2017-05-03 Thread Athar
Hi Mike,

Thank you for the quick response. But I have a requirement where different users
provide keys in ASCII-armored format (pubring.asc) and I have to encrypt the
data with the PGP algorithm using those keys. I can convert the
ASCII-armored keys into binary with GPG commands. But the next challenge
is that the "Public Keyring File" property doesn't support expression language.


Thanks
Athar Iqbal



--
View this message in context: 
http://apache-nifi-developer-list.39713.n7.nabble.com/EncryptContent-issues-after-NIFI-1257-and-NIFI-1259-tp8581p15657.html
Sent from the Apache NiFi Developer List mailing list archive at Nabble.com.


Re: NiFi-Neo4j Issues

2017-05-03 Thread Matt Burgess
Can you ping the Neo4j node from the NiFi node? Can you telnet from the NiFi 
node to the Neo4j port? I suspect it's a firewall issue too, just want to take 
NiFi out of the equation for now.

Regards,
Matt
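
Matt's ping/telnet check can also be scripted. Below is a minimal Python sketch of the same idea; the Bolt port 7687 comes from the thread, and the self-demo uses a throwaway local listener rather than a real Neo4j host:

```python
import socket

def can_connect(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds
    (the scripted equivalent of `telnet host port`)."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Self-demo against a local listener; replace host/port with your Neo4j
# node and 7687 to reproduce the check from the thread.
server = socket.socket()
server.bind(("127.0.0.1", 0))       # OS assigns a free port
server.listen(1)
host, port = server.getsockname()
print(can_connect(host, port))      # True while the listener is up
server.close()
```

If this returns False for the Neo4j host and port, the problem is network/firewall rather than NiFi, which is exactly the distinction Matt is drawing.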


> On May 3, 2017, at 6:22 PM, dale.chang13  wrote:
> 
> Hi All,
> 
> At the bottom you can find my question. Note, I am positive this is more of a
> network issue, but I cannot seem to figure out the solution. I tried posting
> this to the Neo4j Google+ board, and filed a relevant NiFi-Neo4j GitHub issue (I
> know this is not an official NiFi-supported processor, so you can handle
> this thread how you wish); however, no luck. First, here is the structure of
> my architecture:
> 
> Windows 8 host machine with Windows Hyper-V Manager. Hyper-V Manager has 6
> RHEL nodes running a Hadoop cluster--Hortonworks. I have NiFi
> 1.0.0.2.0.1.0-12 (no SSL enabled) on HDF (Hortonworks Data Flow). On the
> Windows host machine, I have Neo4j Enterprise (trial) edition 3.1.3.
> 
> Windows/Neo4j configuration:
> - I found that my Windows IP address is (windows.ip.address)
> - I started Neo4j EE and confirmed that the browser client works.
> - I went into the neo4j.conf file and enabled the property
> "dbms.bolt.connector.listen_address=0.0.0.0:7687", which allows Neo4j to
> listen on all networks
> - I also went into the Neo4j browser, went down to the settings tab, and
> configured the database URI to be bolt://(windows.ip.address):7687 (default
> is bolt://localhost:7687)
> - Additionally I did a netstat -a to see that the TCP connection is active
> and I have a service listening to 0.0.0.0:7687 and [::]:7687
> 
> Linux/NiFi configuration:
> - According to the NiFi-Neo4j Github I linked above, I created a
> Neo4jBoltSessionPool Controller Service and specified the Bolt DB Connection
> URL to be a variety of things (0.0.0.0:7687, localhost:7687,
> (windows.ip.address):7687, (foreign.ip.address):7687, etc)
> - Created a PutCypher processor and put in a simple LOAD CSV WITH HEADERS
> FROM ${file_path} cypher query
> 
> When I pass in a FlowFile (I have confirmed that the file path is saved as
> the file_path FlowFile attribute) and run the PutCypher processor, I get an
> error saying that the neo4j driver was "unable to connect to
> (ip.address):7687, ensure the database is running and that there is a
> working network connection to it"
> 
> Is my configuration wrong or could it be that the Windows firewall is
> preventing communication to the bolt address:port?
> 
> 
> 
> --
> View this message in context: 
> http://apache-nifi-developer-list.39713.n7.nabble.com/NiFi-Neo4j-Issues-tp15654.html
> Sent from the Apache NiFi Developer List mailing list archive at Nabble.com.


NiFi-Neo4j Issues

2017-05-03 Thread dale.chang13
Hi All,

At the bottom you can find my question. Note, I am positive this is more of a
network issue, but I cannot seem to figure out the solution. I tried posting
this to the Neo4j Google+ board, and filed a relevant NiFi-Neo4j GitHub issue (I
know this is not an official NiFi-supported processor, so you can handle
this thread how you wish); however, no luck. First, here is the structure of
my architecture:

Windows 8 host machine with Windows Hyper-V Manager. Hyper-V Manager has 6
RHEL nodes running a Hadoop cluster--Hortonworks. I have NiFi
1.0.0.2.0.1.0-12 (no SSL enabled) on HDF (Hortonworks Data Flow). On the
Windows host machine, I have Neo4j Enterprise (trial) edition 3.1.3.

Windows/Neo4j configuration:
- I found that my Windows IP address is (windows.ip.address)
- I started Neo4j EE and confirmed that the browser client works.
- I went into the neo4j.conf file and enabled the property
"dbms.bolt.connector.listen_address=0.0.0.0:7687", which allows Neo4j to
listen on all networks
- I also went into the Neo4j browser, went down to the settings tab, and
configured the database URI to be bolt://(windows.ip.address):7687 (default
is bolt://localhost:7687)
- Additionally I did a netstat -a to see that the TCP connection is active
and I have a service listening to 0.0.0.0:7687 and [::]:7687

Linux/NiFi configuration:
- According to the NiFi-Neo4j Github I linked above, I created a
Neo4jBoltSessionPool Controller Service and specified the Bolt DB Connection
URL to be a variety of things (0.0.0.0:7687, localhost:7687,
(windows.ip.address):7687, (foreign.ip.address):7687, etc)
- Created a PutCypher processor and put in a simple LOAD CSV WITH HEADERS
FROM ${file_path} cypher query

When I pass in a FlowFile (I have confirmed that the file path is saved as
the file_path FlowFile attribute) and run the PutCypher processor, I get an
error saying that the neo4j driver was "unable to connect to
(ip.address):7687, ensure the database is running and that there is a
working network connection to it"

Is my configuration wrong or could it be that the Windows firewall is
preventing communication to the bolt address:port?



--
View this message in context: 
http://apache-nifi-developer-list.39713.n7.nabble.com/NiFi-Neo4j-Issues-tp15654.html
Sent from the Apache NiFi Developer List mailing list archive at Nabble.com.


Re: Closing in on a NiFi 1.2.0 release?

2017-05-03 Thread Bryan Bende
Quick update... I ran into two issues that will need to be addressed to
create the RC.

I've created JIRAs for them and tagged them as 1.2:

https://issues.apache.org/jira/browse/NIFI-3795
https://issues.apache.org/jira/browse/NIFI-3793


On Wed, May 3, 2017 at 2:41 PM, Bryan Bende  wrote:

> Looks like all of the JIRAs have been resolved and we are in a good place.
>
> I'll begin kicking off the RC process.
>
> On Tue, May 2, 2017 at 5:48 PM, Andre  wrote:
>
>> All,
>>
>> For some reason my canvas did not refresh after a process bounce (which
>> generally occurs) but reloading the page allows for modifications.
>>
>> Cheers
>>
>> On Wed, May 3, 2017 at 7:43 AM, Andre  wrote:
>>
>>> folks,
>>>
>>> I was just working to debug the final thorns found reviewing NIFI-3726
>>> and noticed an odd behavior and wanted to confirm.
>>>
>>> If I recall correctly, in the past users could simply replace a processor's
>>> NAR file and, even if that NAR was in use, the flow would continue to work.
>>>
>>> I just replaced
>>>
>>> cp ~/nifi/nifi-nar-bundles/nifi-cybersecurity-bundle/nifi-cyber
>>> security-nar/target/nifi-cybersecurity-nar-1.2.0-SNAPSHOT.nar
>>> ~/devel/nifi-1.2.0-SNAPSHOT/lib/nifi-cybersecurity-nar-1.2.0
>>> -SNAPSHOT.nar
>>>
>>> (note the different ~/nifi and ~/devel paths, used to ensure I don't explode the
>>> rest of the already compiled components).
>>>
>>> When I try to make changes to the flow, I am presented with the following
>>> error:
>>>
>>> [image: Inline image 1]
>>>
>>> This happens even when I try to drag and drop connected processors
>>> around the canvas.
>>>
>>>
>>> Oddly enough I can still add and delete components to the canvas but
>>> whatever touches the tainted processor cannot be modified at all.
>>>
>>> Examples of messages:
>>>
>>> *Attempt to move*
>>>
>>> Component Position
>>> [5, cb0a31ac-015b-1000-7473-873a47eb702e, 
>>> cb0a52ab-015b-1000-e43a-f6293a9ae99d]
>>> is not the most up-to-date revision. This component appears to have been
>>> modified
>>>
>>>
>>> *Attempt to delete a downstream processor*
>>> Error
>>> [1, cb0a31ac-015b-1000-7473-873a47eb702e, 
>>> cb0b2ae4-015b-1000-35a8-9eaf6a45fc6a]
>>> is not the most up-to-date revision. This component appears to have been
>>> modified
>>>
>>>
>>> I don't have a 1.1.0 instance around me at the moment but I vaguely
>>> remember being able to do that in the past.
>>>
>>> Can someone confirm this is new and expected behavior?
>>>
>>> Cheers
>>>
>>>
>>> On Wed, May 3, 2017 at 5:54 AM, Andy LoPresto 
>>> wrote:
>>>
 I’ll review & merge as soon as they are available.

 Andy LoPresto
 alopre...@apache.org
 *alopresto.apa...@gmail.com *
 PGP Fingerprint: 70EC B3E5 98A6 5A3F D3C4  BACE 3C6E F65B 2F7D EF69

 On May 2, 2017, at 3:51 PM, Bryan Bende  wrote:

 Thanks Drew. These seem like good candidates for the release.

 On Tue, May 2, 2017 at 3:42 PM, Andrew Lim 
 wrote:

 There are three doc updates/additions that would be great to include in
 the RC:

 https://issues.apache.org/jira/browse/NIFI-3701
 https://issues.apache.org/jira/browse/NIFI-3773
 https://issues.apache.org/jira/browse/NIFI-3774

 Sarah Olson and I have been working on these.  We should have PRs
 submitted for them very soon.

 -Drew


 On May 2, 2017, at 2:11 PM, Aldrin Piri  wrote:

 Haven't had much luck in getting our Docker efforts incorporated into
 Docker Hub.  As a result I have created an issue to track that
 integration
 [1] and resolved the original issue.

 We can evaluate our options and figure out the best path forward.  At
 this
 time procedures are not yet well established within ASF to support
 configuring these builds.

 [1] https://issues.apache.org/jira/browse/NIFI-3772

 On Tue, May 2, 2017 at 11:13 AM, Andrew Lim 
 wrote:

 I will be making updates to the Release Notes and Migration Guidance doc
 regarding the TLS 1.2 version support.  Tracked by:

 https://issues.apache.org/jira/browse/NIFI-3720


 -Drew


 On May 2, 2017, at 11:08 AM, Joe Witt  wrote:

 Those are great updates.  I'd recommend we avoid highlighting the
 versions of UI components though.

 Thanks


 On Tue, May 2, 2017 at 11:03 AM, Scott Aslan 

 wrote:

 Hey Bryan,

 Please include the following in the release notes:


 - Core UI
- Circular references have been removed and the code modularized.
- Upgraded Node version to 6.9.3.
- Upgraded npm version to 3.10.10.
- Upgraded jQuery version to 3.1.1.
- Upgraded D3 version to 3.5.17.

Re: [DISCUSS] NiFi MiNiFi C++ 0.2.0 Release

2017-05-03 Thread Aldrin Piri
Kevin,

Thanks for the heads up.

While I started the RM process, I also noticed some license issues with
unused third-party modules inside one of our third-party dependencies.  I have a PR
[1] up for review to fix this as well as the associated JIRA [2].

[1] https://github.com/apache/nifi-minifi-cpp/pull/89
[2] http://issues.apache.org/jira/browse/MINIFI-293

On Wed, May 3, 2017 at 5:11 PM, Kevin Doran  wrote:

> Hi Aldrin,
>
> One other issue came up in testing, which is that using the config.yml in
> the README file throws the following error:
>
> HW13384:nifi-minifi-cpp-0.2.0 brosander$ bin/minifi.sh run
> libc++abi.dylib: terminating with uncaught exception of type
> YAML::TypedBadConversion<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >: bad conversion
> bin/minifi.sh: line 257: 25630 Abort trap: 6   ${minifi_executable}
>
> This is due to missing “source name” and “destination name” fields in
> connections, which changes for MINIFI-275 make required fields.
>
> I’ve opened a JIRA, MINIFI-294 [1], to capture the work needed to resolve
> this and am working on it now to include it in the cpp-0.2.0 release.
>
> [1] https://issues.apache.org/jira/browse/MINIFI-294
>
> Thanks,
> Kevin
>
> On 5/3/17, 11:10, "Jeremy Dyer"  wrote:
>
> Thanks Aldrin. I'm working on wrapping up that final issue now
>
> On Wed, May 3, 2017 at 10:55 AM, Aldrin Piri 
> wrote:
>
> > Looks like we have one last item scheduled [1] for this release
> version
> > with review under way since the last message.  We've also uncovered
> and
> > remedied a few build and test issues during that same time period
> which
> > will make for nice additions.  Upon conclusion of the review process
> for
> > the remaining item, I will move forward conducting the release.
> >
> > Thanks!
> >
> > --aldrin
> >
> > [1]
> > https://issues.apache.org/jira/browse/MINIFI-286?jql=
> > fixVersion%20%3D%20cpp-0.2.0%20AND%20project%20%3D%
> > 20MINIFI%20AND%20resolution%20%3D%20Unresolved%20ORDER%
> > 20BY%20priority%20DESC
> >
> > On Thu, Apr 13, 2017 at 10:19 AM, Aldrin Piri 
> > wrote:
> >
> > > Hey folks,
> > >
> > > We've had a good bit of progress on MiNiFi C++ and think we have
> reached
> > a
> > > point where it makes sense to capture some of the good strides
> that have
> > > been made so far and start another release.
> > >
> > > There are currently three issues open [1]. Two of which have
> patches,
> > near
> > > completion, and the third which may be a candidate for an 0.3.0
> target.
> > >
> > > I would be happy to carry out release duties unless there are
> other folks
> > > that feel so inclined.  I have also created a JIRA [2] to aid in
> tracking
> > > any additional concerns or dependencies for the process.
> > >
> > > Thanks for your consideration!
> > >
> > > --aldrin
> > >
> > > [1] https://issues.apache.org/jira/browse/MINIFI-227?jql=
> > > fixVersion%20%3D%20cpp-0.2.0%20AND%20project%20%3D%
> > > 20MINIFI%20AND%20resolution%20%3D%20Unresolved%20ORDER%
> > > 20BY%20priority%20DESC
> > > [2] https://issues.apache.org/jira/browse/MINIFI-267
> > >
> >
>
>
>
>


Re: [DISCUSS] NiFi MiNiFi C++ 0.2.0 Release

2017-05-03 Thread Kevin Doran
Hi Aldrin,

One other issue came up in testing, which is that using the config.yml in the 
README file throws the following error:

HW13384:nifi-minifi-cpp-0.2.0 brosander$ bin/minifi.sh run
libc++abi.dylib: terminating with uncaught exception of type 
YAML::TypedBadConversion<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >: bad conversion
bin/minifi.sh: line 257: 25630 Abort trap: 6   ${minifi_executable}

This is due to missing “source name” and “destination name” fields in 
connections, which changes for MINIFI-275 make required fields.

I’ve opened a JIRA, MINIFI-294 [1], to capture the work needed to resolve this
and am working on it now to include it in the cpp-0.2.0 release.

[1] https://issues.apache.org/jira/browse/MINIFI-294 
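
For reference, a minimal config.yml connection entry including both fields; the processor and port names below are illustrative, not from the thread:

```yaml
# Sketch of a connection entry; after MINIFI-275 both "source name"
# and "destination name" are required fields.
Connections:
    - name: TailToRemote
      source name: TailFileProcessor
      source relationship name: success
      destination name: RemoteInputPort
```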

Thanks,
Kevin

On 5/3/17, 11:10, "Jeremy Dyer"  wrote:

Thanks Aldrin. I'm working on wrapping up that final issue now

On Wed, May 3, 2017 at 10:55 AM, Aldrin Piri  wrote:

> Looks like we have one last item scheduled [1] for this release version
> with review under way since the last message.  We've also uncovered and
> remedied a few build and test issues during that same time period which
> will make for nice additions.  Upon conclusion of the review process for
> the remaining item, I will move forward conducting the release.
>
> Thanks!
>
> --aldrin
>
> [1]
> https://issues.apache.org/jira/browse/MINIFI-286?jql=
> fixVersion%20%3D%20cpp-0.2.0%20AND%20project%20%3D%
> 20MINIFI%20AND%20resolution%20%3D%20Unresolved%20ORDER%
> 20BY%20priority%20DESC
>
> On Thu, Apr 13, 2017 at 10:19 AM, Aldrin Piri 
> wrote:
>
> > Hey folks,
> >
> > We've had a good bit of progress on MiNiFi C++ and think we have reached
> a
> > point where it makes sense to capture some of the good strides that have
> > been made so far and start another release.
> >
> > There are currently three issues open [1]. Two of which have patches,
> near
> > completion, and the third which may be a candidate for an 0.3.0 target.
> >
> > I would be happy to carry out release duties unless there are other 
folks
> > that feel so inclined.  I have also created a JIRA [2] to aid in 
tracking
> > any additional concerns or dependencies for the process.
> >
> > Thanks for your consideration!
> >
> > --aldrin
> >
> > [1] https://issues.apache.org/jira/browse/MINIFI-227?jql=
> > fixVersion%20%3D%20cpp-0.2.0%20AND%20project%20%3D%
> > 20MINIFI%20AND%20resolution%20%3D%20Unresolved%20ORDER%
> > 20BY%20priority%20DESC
> > [2] https://issues.apache.org/jira/browse/MINIFI-267
> >
>





Re: Convert CSV File to JSON

2017-05-03 Thread Matt Burgess
If all the CSV files had the same number of columns, you might be able to use 
ExtractText -> SplitText -> ReplaceText to achieve this. However since your CSV 
files will have different column names, I will assume that there can be a 
different number of columns per file.

In the upcoming NiFi 1.2.0 release I believe you'll be able to use 
ConvertRecord for this, with a CSV reader and a JSON writer. In the meantime, 
if you are familiar with a scripting language such as Groovy, JavaScript, 
Python/Jython, Lua, JRuby, or Clojure you can use ExecuteScript for this. I 
have an example [1] of using Groovy to split a pipe-delimited file, it could be 
altered to use commas and to write out JSON instead.

Regards,
Matt

[1] 
http://funnifi.blogspot.com/2016/02/executescript-explained-split-fields.html
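
For a stdlib-only illustration of the CSV-to-JSON step described above (headers taken from the first row, as a CSV reader would infer them), here is a sketch; it is not the linked Groovy script, just the same idea in Python:

```python
import csv, io, json

def csv_to_json(csv_text: str) -> str:
    """Convert CSV text (first row = headers) to a JSON array of objects.
    Handles a varying set of columns per file, since the header drives it."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    return json.dumps(rows)

sample = "id,name\n1,alpha\n2,beta\n"
print(csv_to_json(sample))
# [{"id": "1", "name": "alpha"}, {"id": "2", "name": "beta"}]
```

Because the column names come from the file itself, the same snippet works for each client file regardless of its header.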


> On May 3, 2017, at 1:07 PM, "suman@cuddle.ai"  wrote:
> 
> Hi ,
> I have a CSV file, provided by a client, having different column names.
> I want to convert the CSV contents to JSON.
> 
> Please advise
> 
> 
> 
> --
> View this message in context: 
> http://apache-nifi-developer-list.39713.n7.nabble.com/Convert-CSV-File-to-JSON-tp15643.html
> Sent from the Apache NiFi Developer List mailing list archive at Nabble.com.


Re: Data Load

2017-05-03 Thread Matt Burgess
Anil,

When you say huge volumes, do you mean a large number of tables, or large 
tables, or both?

For a large number of tables, you will likely want to upgrade to the upcoming 
NiFi release so you can use ListDatabaseTables -> GenerateTableFetch -> 
ExecuteSQL for the source part, although in the meantime you could use 
ListDatabaseTables -> ReplaceText (to set the SELECT query) -> ExecuteSQL.

For large tables on a single NiFi instance, I recommend using 
QueryDatabaseTable. Whether you have a max-value column or not, QDT lets you 
fetch smaller batches of rows per flow file, versus ExecuteSQL which puts the 
whole result set in one flow file, which can lead to memory issues.

For scalability with large tables, I recommend a NiFi cluster of 3-10 nodes, 
using a flow of GenerateTableFetch -> RPG -> Input Port -> ExecuteSQL for the 
source part. In 1.2.0 you'll be able to have ListDatabaseTables at the front, 
to support a large number of tables. The RPG -> Input Port part is to 
distribute the flow files across the cluster, the downstream flow is executed 
in parallel with a subset of the incoming flow files (rather than each copy of 
the flow getting every flow file).

Regards,
Matt
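
The "smaller batches of rows per flow file" behavior described above can be sketched with plain SQL paging; sqlite3 stands in for the real source database, and the batch size plays the role of QueryDatabaseTable's per-flow-file row limit:

```python
import sqlite3

def fetch_in_batches(conn, table, batch_size):
    """Yield rows in fixed-size batches instead of materializing the
    whole result set at once (the memory issue ExecuteSQL can hit)."""
    offset = 0
    while True:
        cur = conn.execute(
            f"SELECT * FROM {table} LIMIT ? OFFSET ?", (batch_size, offset))
        rows = cur.fetchall()
        if not rows:
            break
        yield rows
        offset += batch_size

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER)")
conn.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(10)])
batches = list(fetch_in_batches(conn, "t", 4))
print([len(b) for b in batches])   # [4, 4, 2]
```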


> On May 3, 2017, at 1:58 PM, Anil Rai  wrote:
> 
> Hi Matt,
> 
> I quickly developed this, and this is how I could do it:
> 
> DataLake<-ExecuteSQL->ConvertAvroToJSON->SplitJson->EvaluateJsonPath->ReplaceText->PutSQL->Postgres(onCloud)
> 
> The problem is, this will not scale for huge volumes. Any thoughts?
> 
> Regards
> Anil
> 
> 
>> On Tue, May 2, 2017 at 12:07 PM, Matt Burgess  wrote:
>> 
>> Yes that sounds like your best bet, assuming you have the "Maximum
>> Value Column" present in the table you want to migrate.  Then a flow
>> might look like:
>> 
>> QueryDatabaseTable -> ConvertAvroToJSON -> ConvertJSONToSQL -> PutSQL
>> 
>> In this flow the target tables would need to be created beforehand.
>> You might be able to do that with pg_dump or with some fancy SQL that
>> you could send to PutSQL in a separate (do-ahead) flow [1].  For
>> multiple tables, you will need one QueryDatabaseTable for each table;
>> depending on the number of tables and the latency for getting/putting
>> rows, you may be able to share the downstream processing. If that
>> creates a bottleneck, you may want a copy of the above flow for each
>> table.  This is drastically improved in NiFi 1.2.0, as you can use
>> ListDatabaseTables -> GenerateTableFetch -> RPG -> Input Port ->
>> ExecuteSQL to perform the migration in parallel across a NiFi cluster.
>> 
>> Regards,
>> Matt
>> 
>> [1] https://serverfault.com/questions/231952/is-there-a-
>> mysql-equivalent-of-show-create-table-in-postgres
>> 
>> 
>>> On Tue, May 2, 2017 at 11:18 AM, Anil Rai  wrote:
>>> Thanks Matt for the quick reply. We are using nifi 1.0 release as of now.
>>> It's a postgres DB on both sides (on prem and on cloud)
>>> and yes incremental load is what i am looking for.
>>> so with that, you recommend # 2 option?
>>> 
>>> On Tue, May 2, 2017 at 11:00 AM, Matt Burgess 
>> wrote:
>>> 
 Anil,
 
 Is this a "one-time" migration, meaning you would take the on-prem
 tables and put them on the cloud DB just once? Or would this be an
 incremental operation, where you do the initial move and then take any
 "new" rows from the source and apply them to the target?  For the
 latter, there are a couple of options:
 
 1) Rebuild the cloud DB periodically. You can use ExecuteSQL ->
 [processors] -> PutSQL after perhaps deleting your target
 DB/tables/etc.  This could be time-consuming and expensive. The
 processors in-between probably include ConvertAvroToJSON and
 ConvertJSONToSQL.
 2) Use QueryDatabaseTable or (GenerateTableFetch -> ExecuteSQL) to get
 the source data. For this your table would need a column whose values
 always increase, that column would comprise the value of the "Maximum
 Value Column" property in the aforementioned processors' configuration
 dialogs. You would need one QueryDatabaseTable or GenerateTableFetch
 for each table in your DB.
 
 In addition to these current solutions, as of the upcoming NiFi 1.2.0
 release, you have the following options:
 3) If the source database is MySQL, you can use the CaptureChangeMySQL
 processor to get binary log events flowing through various processors
 into PutDatabaseRecord to place them at the source. This pattern is
 true Change Data Capture (CDC) versus the other two options above.
 4) Option #2 will be improved such that GenerateTableFetch will accept
 incoming flow files, so you can use ListDatabaseTables ->
 GenerateTableFetch -> ExecuteSQL to handle multiple tables with one
 flow.
 
 If this is a one-time migration, a data flow tool might not be the
 best choice, you could consider something like Flyway instead.

Re: Closing in on a NiFi 1.2.0 release?

2017-05-03 Thread Bryan Bende
Looks like all of the JIRAs have been resolved and we are in a good place.

I'll begin kicking off the RC process.

On Tue, May 2, 2017 at 5:48 PM, Andre  wrote:

> All,
>
> For some reason my canvas did not refresh after a process bounce (which
> generally occurs) but reloading the page allows for modifications.
>
> Cheers
>
> On Wed, May 3, 2017 at 7:43 AM, Andre  wrote:
>
>> folks,
>>
>> I was just working to debug the final thorns found reviewing NIFI-3726
>> and noticed an odd behavior and wanted to confirm.
>>
>> If I recall correctly, in the past users could simply replace a processor's
>> NAR file and, even if that NAR was in use, the flow would continue to work.
>>
>> I just replaced
>>
>> cp ~/nifi/nifi-nar-bundles/nifi-cybersecurity-bundle/nifi-cyber
>> security-nar/target/nifi-cybersecurity-nar-1.2.0-SNAPSHOT.nar
>> ~/devel/nifi-1.2.0-SNAPSHOT/lib/nifi-cybersecurity-nar-1.2.0-SNAPSHOT.nar
>>
>> (note the different ~/nifi and ~/devel paths, used to ensure I don't explode the
>> rest of the already compiled components).
>>
>> When I try to make changes to the flow, I am presented with the following
>> error:
>>
>> [image: Inline image 1]
>>
>> This happens even when I try to drag and drop connected processors around
>> the canvas.
>>
>>
>> Oddly enough I can still add and delete components to the canvas but
>> whatever touches the tainted processor cannot be modified at all.
>>
>> Examples of messages:
>>
>> *Attempt to move*
>>
>> Component Position
>> [5, cb0a31ac-015b-1000-7473-873a47eb702e, 
>> cb0a52ab-015b-1000-e43a-f6293a9ae99d]
>> is not the most up-to-date revision. This component appears to have been
>> modified
>>
>>
>> *Attempt to delete a downstream processor*
>> Error
>> [1, cb0a31ac-015b-1000-7473-873a47eb702e, 
>> cb0b2ae4-015b-1000-35a8-9eaf6a45fc6a]
>> is not the most up-to-date revision. This component appears to have been
>> modified
>>
>>
>> I don't have a 1.1.0 instance around me at the moment but I vaguely
>> remember being able to do that in the past.
>>
>> Can someone confirm this is new and expected behavior?
>>
>> Cheers
>>
>>
>> On Wed, May 3, 2017 at 5:54 AM, Andy LoPresto 
>> wrote:
>>
>>> I’ll review & merge as soon as they are available.
>>>
>>> Andy LoPresto
>>> alopre...@apache.org
>>> *alopresto.apa...@gmail.com *
>>> PGP Fingerprint: 70EC B3E5 98A6 5A3F D3C4  BACE 3C6E F65B 2F7D EF69
>>>
>>> On May 2, 2017, at 3:51 PM, Bryan Bende  wrote:
>>>
>>> Thanks Drew. These seem like good candidates for the release.
>>>
>>> On Tue, May 2, 2017 at 3:42 PM, Andrew Lim 
>>> wrote:
>>>
>>> There are three doc updates/additions that would be great to include in
>>> the RC:
>>>
>>> https://issues.apache.org/jira/browse/NIFI-3701
>>> https://issues.apache.org/jira/browse/NIFI-3773
>>> https://issues.apache.org/jira/browse/NIFI-3774
>>>
>>> Sarah Olson and I have been working on these.  We should have PRs
>>> submitted for them very soon.
>>>
>>> -Drew
>>>
>>>
>>> On May 2, 2017, at 2:11 PM, Aldrin Piri  wrote:
>>>
>>> Haven't had much luck in getting our Docker efforts incorporated into
>>> Docker Hub.  As a result I have created an issue to track that
>>> integration
>>> [1] and resolved the original issue.
>>>
>>> We can evaluate our options and figure out the best path forward.  At
>>> this
>>> time procedures are not yet well established within ASF to support
>>> configuring these builds.
>>>
>>> [1] https://issues.apache.org/jira/browse/NIFI-3772
>>>
>>> On Tue, May 2, 2017 at 11:13 AM, Andrew Lim 
>>> wrote:
>>>
>>> I will be making updates to the Release Notes and Migration Guidance doc
>>> regarding the TLS 1.2 version support.  Tracked by:
>>>
>>> https://issues.apache.org/jira/browse/NIFI-3720
>>>
>>>
>>> -Drew
>>>
>>>
>>> On May 2, 2017, at 11:08 AM, Joe Witt  wrote:
>>>
>>> Those are great updates.  I'd recommend we avoid highlighting the
>>> versions of UI components though.
>>>
>>> Thanks
>>>
>>>
>>> On Tue, May 2, 2017 at 11:03 AM, Scott Aslan 
>>>
>>> wrote:
>>>
>>> Hey Bryan,
>>>
>>> Please include the following in the release notes:
>>>
>>>
>>> - Core UI
>>>- Circular references have been removed and the code modularized.
>>>- Upgraded Node version to 6.9.3.
>>>- Upgraded npm version to 3.10.10.
>>>- Upgraded jQuery version to 3.1.1.
>>>- Upgraded D3 version to 3.5.17.
>>>- Reduced download size by removing bundled dependencies.
>>> - User Experience Improvements
>>> - Ever wish that it was easier to align components on the canvas? Me
>>>too...and now you can!
>>>- We now provide deep links to any component(s) on the canvas. This
>>>will help make collaborating and sharing more natural.
>>>- Users will enjoy a better understanding of the scope of
>>>
>>> Controller
>>>
>>>Services through an improved 

Re: Data Load

2017-05-03 Thread Anil Rai
Hi Matt,

I quickly developed this, and this is how I could do it:

DataLake<-ExecuteSQL->ConvertAvroToJSON->SplitJson->EvaluateJsonPath->ReplaceText->PutSQL->Postgres(onCloud)

The problem is, this will not scale for huge volumes. Any thoughts?

Regards
Anil


On Tue, May 2, 2017 at 12:07 PM, Matt Burgess  wrote:

> Yes that sounds like your best bet, assuming you have the "Maximum
> Value Column" present in the table you want to migrate.  Then a flow
> might look like:
>
> QueryDatabaseTable -> ConvertAvroToJSON -> ConvertJSONToSQL -> PutSQL
>
> In this flow the target tables would need to be created beforehand.
> You might be able to do that with pg_dump or with some fancy SQL that
> you could send to PutSQL in a separate (do-ahead) flow [1].  For
> multiple tables, you will need one QueryDatabaseTable for each table;
> depending on the number of tables and the latency for getting/putting
> rows, you may be able to share the downstream processing. If that
> creates a bottleneck, you may want a copy of the above flow for each
> table.  This is drastically improved in NiFi 1.2.0, as you can use
> ListDatabaseTables -> GenerateTableFetch -> RPG -> Input Port ->
> ExecuteSQL to perform the migration in parallel across a NiFi cluster.
>
> Regards,
> Matt
>
> [1] https://serverfault.com/questions/231952/is-there-a-
> mysql-equivalent-of-show-create-table-in-postgres
>
>
> On Tue, May 2, 2017 at 11:18 AM, Anil Rai  wrote:
> > Thanks Matt for the quick reply. We are using nifi 1.0 release as of now.
> > It's a postgres DB on both sides (on prem and on cloud)
> > and yes incremental load is what i am looking for.
> > so with that, you recommend # 2 option?
> >
> > On Tue, May 2, 2017 at 11:00 AM, Matt Burgess 
> wrote:
> >
> >> Anil,
> >>
> >> Is this a "one-time" migration, meaning you would take the on-prem
> >> tables and put them on the cloud DB just once? Or would this be an
> >> incremental operation, where you do the initial move and then take any
> >> "new" rows from the source and apply them to the target?  For the
> >> latter, there are a couple of options:
> >>
> >> 1) Rebuild the cloud DB periodically. You can use ExecuteSQL ->
> >> [processors] -> PutSQL after perhaps deleting your target
> >> DB/tables/etc.  This could be time-consuming and expensive. The
> >> processors in-between probably include ConvertAvroToJSON and
> >> ConvertJSONToSQL.
> >> 2) Use QueryDatabaseTable or (GenerateTableFetch -> ExecuteSQL) to get
> >> the source data. For this your table would need a column whose values
> >> always increase, that column would comprise the value of the "Maximum
> >> Value Column" property in the aforementioned processors' configuration
> >> dialogs. You would need one QueryDatabaseTable or GenerateTableFetch
> >> for each table in your DB.
> >>
> >> In addition to these current solutions, as of the upcoming NiFi 1.2.0
> >> release, you have the following options:
> >> 3) If the source database is MySQL, you can use the CaptureChangeMySQL
> >> processor to get binary log events flowing through various processors
> >> into PutDatabaseRecord to place them at the source. This pattern is
> >> true Change Data Capture (CDC) versus the other two options above.
> >> 4) Option #2 will be improved such that GenerateTableFetch will accept
> >> incoming flow files, so you can use ListDatabaseTables ->
> >> GenerateTableFetch -> ExecuteSQL to handle multiple tables with one
> >> flow.
> >>
> >> If this is a one-time migration, a data flow tool might not be the
> >> best choice, you could consider something like Flyway [1] instead.
> >>
> >> Regards,
> >> Matt
> >>
> >> [1] https://flywaydb.org/documentation/command/migrate
> >>
> >> On Tue, May 2, 2017 at 10:41 AM, Anil Rai 
> wrote:
> >> > I have a simple use case.
> >> >
> >> > DB (On Premise) and DB (On Cloud).
> >> >
> >> > I want to use nifi to extract data from on prem DB (huge volumes) and
> >> > insert into the same table structure that is hosted on cloud.
> >> >
> >> > I could use ExecuteSQL on both sides of the fence (to extract from on
> >> prem
> >> > and insert onto cloud). What processors are needed in between (if at
> >> all)?
> >> > As I am not doing any transformations at all... it is just an extract and
> >> > load use case
> >>
>
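
Option 2's "Maximum Value Column" pattern above reduces to remembering the largest value seen and querying past it on each run. A minimal sqlite3 sketch (table and column names are illustrative):

```python
import sqlite3

class IncrementalFetcher:
    """Mimics the Maximum Value Column idea: track the max value of a
    monotonically increasing column and fetch only newer rows each run."""
    def __init__(self, conn, table, max_value_column):
        self.conn = conn
        self.table = table
        self.col = max_value_column
        self.last_seen = None

    def fetch_new(self):
        if self.last_seen is None:
            cur = self.conn.execute(
                f"SELECT * FROM {self.table} ORDER BY {self.col}")
        else:
            cur = self.conn.execute(
                f"SELECT * FROM {self.table} WHERE {self.col} > ? "
                f"ORDER BY {self.col}", (self.last_seen,))
        rows = cur.fetchall()
        if rows:
            self.last_seen = rows[-1][0]   # assumes the column is first
        return rows

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE src (id INTEGER, payload TEXT)")
conn.executemany("INSERT INTO src VALUES (?, ?)", [(1, "a"), (2, "b")])
f = IncrementalFetcher(conn, "src", "id")
print(len(f.fetch_new()))   # 2 -- initial load
conn.execute("INSERT INTO src VALUES (3, 'c')")
print(len(f.fetch_new()))   # 1 -- only the new row
```

This is why the source table needs a column whose values always increase: without one there is no watermark to resume from.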


Save CSV file contents to database tables

2017-05-03 Thread suman....@cuddle.ai
Hi ,
We receive CSV files from a client. Each CSV file contains different headers.
We want to save the CSV content in different tables, one table per CSV,
with column names taken from the CSV header.

Please advise.
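
One way to sketch this task: derive the CREATE TABLE from the CSV header, then insert the rows. This is a hedged illustration, not a NiFi flow; all columns are created as TEXT, identifiers are assumed already safe, and sqlite3 stands in for the target RDBMS:

```python
import csv, io, sqlite3

def load_csv_to_table(conn, table_name, csv_text):
    """Create a table whose columns come from the CSV header, then
    insert the remaining rows. Naive on purpose: every column is TEXT."""
    reader = csv.reader(io.StringIO(csv_text))
    header = next(reader)
    cols = ", ".join(f'"{c}" TEXT' for c in header)
    conn.execute(f'CREATE TABLE "{table_name}" ({cols})')
    placeholders = ", ".join("?" for _ in header)
    conn.executemany(
        f'INSERT INTO "{table_name}" VALUES ({placeholders})', list(reader))

conn = sqlite3.connect(":memory:")
load_csv_to_table(conn, "clients", "name,city\nacme,berlin\nzen,oslo\n")
print(conn.execute("SELECT * FROM clients").fetchall())
# [('acme', 'berlin'), ('zen', 'oslo')]
```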



--
View this message in context: 
http://apache-nifi-developer-list.39713.n7.nabble.com/Save-CSV-file-contents-to-database-tables-tp15644.html
Sent from the Apache NiFi Developer List mailing list archive at Nabble.com.


Convert CSV File to JSON

2017-05-03 Thread suman....@cuddle.ai
Hi ,
I have a CSV file, provided by a client, having different column names.
I want to convert the CSV contents to JSON.

Please advise



--
View this message in context: 
http://apache-nifi-developer-list.39713.n7.nabble.com/Convert-CSV-File-to-JSON-tp15643.html
Sent from the Apache NiFi Developer List mailing list archive at Nabble.com.


Re: EncryptContent issues after NIFI-1257 and NIFI-1259

2017-05-03 Thread Michael Moser
Hello,

I believe the EncryptContent "Public Keyring File" property is expecting
the binary key that you generated in step 1.  You do not need to export the
public key into ASCII format.

Kind Regards,
-- Mike


On Wed, May 3, 2017 at 6:40 AM, Athar  wrote:

> I am getting this issue in even nifi 1.0.0 .  I am using "PGP_ASCII_ARMOR"
> encryption algorithm.
>
> I performed the following steps.
> 1 )  I  created the binary key using "GnuPG v2.0.14"  and executed the
> "PGP"
> encryption algorithm. Its
> executing properly.
> 2) I exported the public key in ASCII format  and configure
> "PGP_ASCII_ARMOR".  Its displaying  "Invalid header encountered"
>
>  n15629/nifi_Error.png>
>
>
>
>
> --
> View this message in context: http://apache-nifi-developer-
> list.39713.n7.nabble.com/EncryptContent-issues-after-
> NIFI-1257-and-NIFI-1259-tp8581p15629.html
> Sent from the Apache NiFi Developer List mailing list archive at
> Nabble.com.
>


Re: failed to replicate request GET /NIFI-API/flow/process-groups/root to saxx:9090

2017-05-03 Thread pradeepbill
OK, I increased the timeout to 20 seconds, and now I can see the console for
some time (less than a minute), but then the NiFi UI errors out with the same
exception. Please advise.



--
View this message in context: 
http://apache-nifi-developer-list.39713.n7.nabble.com/failed-to-replicate-request-GET-NIFI-API-flow-process-groups-root-to-saxx-9090-tp15612p15642.html
Sent from the Apache NiFi Developer List mailing list archive at Nabble.com.


Re: Not able to ingest the csv data to RDBS Database

2017-05-03 Thread suman....@cuddle.ai
Hi,
I tried with Header Line Count 0 in the SplitText processor but am still
getting the same result. I am testing with the header row included.

City,Count
Mumbai,10
Mumbai,10
Pune,10
Pune,10

Do I need to add any other processor?





--
View this message in context: 
http://apache-nifi-developer-list.39713.n7.nabble.com/Not-able-to-ingest-the-csv-data-to-RDBS-Database-tp15610p15641.html
Sent from the Apache NiFi Developer List mailing list archive at Nabble.com.


Re: [DISCUSS] NiFi MiNiFi C++ 0.2.0 Release

2017-05-03 Thread Jeremy Dyer
Thanks Aldrin. I'm working on wrapping up that final issue now

On Wed, May 3, 2017 at 10:55 AM, Aldrin Piri  wrote:

> Looks like we have one last item scheduled [1] for this release version
> with review under way since the last message.  We've also uncovered and
> remedied a few build and test issues during that same time period which
> will make for nice additions.  Upon conclusion of the review process for
> the remaining item, I will move forward conducting the release.
>
> Thanks!
>
> --aldrin
>
> [1]
> https://issues.apache.org/jira/browse/MINIFI-286?jql=
> fixVersion%20%3D%20cpp-0.2.0%20AND%20project%20%3D%
> 20MINIFI%20AND%20resolution%20%3D%20Unresolved%20ORDER%
> 20BY%20priority%20DESC
>
> On Thu, Apr 13, 2017 at 10:19 AM, Aldrin Piri 
> wrote:
>
> > Hey folks,
> >
> > We've had a good bit of progress on MiNiFi C++ and think we have reached
> a
> > point where it makes sense to capture some of the good strides that have
> > been made so far and start another release.
> >
> > There are currently three issues open [1]. Two of which have patches,
> near
> > completion, and the third which may be a candidate for an 0.3.0 target.
> >
> > I would be happy to carry out release duties unless there are other folks
> > that feel so inclined.  I have also created a JIRA [2] to aid in tracking
> > any additional concerns or dependencies for the process.
> >
> > Thanks for your consideration!
> >
> > --aldrin
> >
> > [1] https://issues.apache.org/jira/browse/MINIFI-227?jql=
> > fixVersion%20%3D%20cpp-0.2.0%20AND%20project%20%3D%
> > 20MINIFI%20AND%20resolution%20%3D%20Unresolved%20ORDER%
> > 20BY%20priority%20DESC
> > [2] https://issues.apache.org/jira/browse/MINIFI-267
> >
>


Re: Not able to ingest the csv data to RDBS Database

2017-05-03 Thread Matt Burgess
Your original example has 5 rows, a header and 4 value rows. Did you
remove the header row for your current testing?  If not, and you use a
Header Line Count of 1, then each flow file probably has two lines in
it, the header and the value row, and ReplaceText is probably matching
the header lines and thus using the hardcoded "values" of City and
Count.  Instead you may want a Header Line Count of zero and
route/remove the header row from the rest of the value row flow files.
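A rough simulation of that header behavior (an illustrative sketch consistent with SplitText's documented Header Line Count property, not NiFi's actual implementation):

```python
def split_text(lines, header_line_count=0, line_split_count=1):
    """Simulate SplitText: the first header_line_count lines are
    copied into every split flow file."""
    header = lines[:header_line_count]
    body = lines[header_line_count:]
    return [header + body[i:i + line_split_count]
            for i in range(0, len(body), line_split_count)]

# With Header Line Count 1, every split carries the header line too,
# so an ExtractText pattern like (.*),.* can capture the literal
# word "City" instead of a data value.
splits = split_text(["City,Count", "Mumbai,10", "Pune,10"],
                    header_line_count=1)
# splits == [["City,Count", "Mumbai,10"], ["City,Count", "Pune,10"]]
```

With Header Line Count 0 each split holds just one value row, which is why routing the header row away afterwards fixes the hardcoded-values symptom.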

Regards,
Matt

On Wed, May 3, 2017 at 10:42 AM, suman@cuddle.ai
 wrote:
> I am just testing currently with a sample file containing only 4 rows.
>
>
>
> --
> View this message in context: 
> http://apache-nifi-developer-list.39713.n7.nabble.com/Not-able-to-ingest-the-csv-data-to-RDBS-Database-tp15610p15637.html
> Sent from the Apache NiFi Developer List mailing list archive at Nabble.com.


Re: [DISCUSS] NiFi MiNiFi C++ 0.2.0 Release

2017-05-03 Thread Aldrin Piri
Looks like we have one last item scheduled [1] for this release version
with review under way since the last message.  We've also uncovered and
remedied a few build and test issues during that same time period which
will make for nice additions.  Upon conclusion of the review process for
the remaining item, I will move forward conducting the release.

Thanks!

--aldrin

[1]
https://issues.apache.org/jira/browse/MINIFI-286?jql=fixVersion%20%3D%20cpp-0.2.0%20AND%20project%20%3D%20MINIFI%20AND%20resolution%20%3D%20Unresolved%20ORDER%20BY%20priority%20DESC

On Thu, Apr 13, 2017 at 10:19 AM, Aldrin Piri  wrote:

> Hey folks,
>
> We've had a good bit of progress on MiNiFi C++ and think we have reached a
> point where it makes sense to capture some of the good strides that have
> been made so far and start another release.
>
> There are currently three issues open [1]. Two of which have patches, near
> completion, and the third which may be a candidate for an 0.3.0 target.
>
> I would be happy to carry out release duties unless there are other folks
> that feel so inclined.  I have also created a JIRA [2] to aid in tracking
> any additional concerns or dependencies for the process.
>
> Thanks for your consideration!
>
> --aldrin
>
> [1] https://issues.apache.org/jira/browse/MINIFI-227?jql=
> fixVersion%20%3D%20cpp-0.2.0%20AND%20project%20%3D%
> 20MINIFI%20AND%20resolution%20%3D%20Unresolved%20ORDER%
> 20BY%20priority%20DESC
> [2] https://issues.apache.org/jira/browse/MINIFI-267
>


Re: Not able to ingest the csv data to RDBS Database

2017-05-03 Thread suman....@cuddle.ai
I am just testing currently with a sample file containing only 4 rows.



--
View this message in context: 
http://apache-nifi-developer-list.39713.n7.nabble.com/Not-able-to-ingest-the-csv-data-to-RDBS-Database-tp15610p15637.html
Sent from the Apache NiFi Developer List mailing list archive at Nabble.com.


Re: Not able to ingest the csv data to RDBS Database

2017-05-03 Thread Matt Burgess
Do all the files look like that, or just the first one? If it is just
the first one, then it is likely due to the header row in your CSV
file. You may want to use RouteOnAttribute after SplitText to get rid
of the header row (using fragment.index = 0), or I think you can use
ReplaceText before SplitText to get rid of the header (but it might
take longer).

If all the files look like that, then the ExtractText is not putting
the right values into the corresponding attributes.

Regards,
Matt

On Wed, May 3, 2017 at 10:20 AM, suman@cuddle.ai
 wrote:
> Still not able to insert the data in table.
>
> I tested with PutFile instead of PutSql and file contains
> INSERT INTO CITY(city,count) VALUES('city',count)
>
>
>
> --
> View this message in context: 
> http://apache-nifi-developer-list.39713.n7.nabble.com/Not-able-to-ingest-the-csv-data-to-RDBS-Database-tp15610p15634.html
> Sent from the Apache NiFi Developer List mailing list archive at Nabble.com.


Re: Not able to ingest the csv data to RDBS Database

2017-05-03 Thread suman....@cuddle.ai
SplitText: Line Split Count 1, Header Line Count 1
ExtractText: City  : (.*),.*
             Count : .*,(.*)
ReplaceText: INSERT INTO CITY(City,Count) values ('${City}',${Count})
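Applied outside NiFi, those ExtractText patterns capture the fields like this (a standalone Python sketch of the same regexes):

```python
import re

line = "Mumbai,10"
# Same patterns as the ExtractText properties above
city = re.search(r"(.*),.*", line).group(1)
count = re.search(r".*,(.*)", line).group(1)
sql = f"INSERT INTO CITY(City,Count) values ('{city}',{count})"
# sql == "INSERT INTO CITY(City,Count) values ('Mumbai',10)"
```

Note that if a flow file's first line is the header, these patterns match that line instead and capture the literal strings "City" and "Count" — which matches the bad INSERT reported earlier in this thread.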




--
View this message in context: 
http://apache-nifi-developer-list.39713.n7.nabble.com/Not-able-to-ingest-the-csv-data-to-RDBS-Database-tp15610p15635.html
Sent from the Apache NiFi Developer List mailing list archive at Nabble.com.


Re: Not able to ingest the csv data to RDBS Database

2017-05-03 Thread suman....@cuddle.ai
Still not able to insert the data into the table.

I tested with PutFile instead of PutSQL, and the file contains
INSERT INTO CITY(city,count) VALUES('city',count)



--
View this message in context: 
http://apache-nifi-developer-list.39713.n7.nabble.com/Not-able-to-ingest-the-csv-data-to-RDBS-Database-tp15610p15634.html
Sent from the Apache NiFi Developer List mailing list archive at Nabble.com.


Re: NiFi unsecure cluster setup issue on Windows.

2017-05-03 Thread Andre
Hi,

Could it be that the Windows firewall is enabled and blocking the ZooKeeper
communication between the nodes?

Cheers

On Wed, May 3, 2017 at 10:53 PM, shahbazatta  wrote:

> Hi,
> I am trying to setup the 3 node NiFi but i am unable to set it up. I
> already
> setup it on ubuntu and it is working fine but the same configurations not
> working on windows machines.
>
> I tried restarting all nodes, nifi service, etc but nothing works:
>
> Failed to determine which node is elected active Cluster Coordinator:
> ZooKeeper reports the address as winifi01:
>
> whereas, winifi01 node showing cluster of 1/1. i also verify the hosts file
> and zookeeper properties
>
> zookeer.properties
> server.1=winifi01:2888:3888
> server.2=winifi02:2888:3888
> server.3=winifi03:2888:3888
>
>
>
>
>
> --
> View this message in context: http://apache-nifi-developer-
> list.39713.n7.nabble.com/NiFi-unsecure-cluster-setup-issue-
> on-Windows-tp15631.html
> Sent from the Apache NiFi Developer List mailing list archive at
> Nabble.com.
>


Re: [GitHub] nifi issue #1736: Nifi 3774

2017-05-03 Thread Sarah Olson
Thanks @YolandaMDavis. 
Yes. It's fine to apply. 

Sarah Olson
m: 415-298-5573

Sent from my iPhone

> On May 3, 2017, at 6:07 AM, YolandaMDavis  wrote:
> 
> Github user YolandaMDavis commented on the issue:
> 
>https://github.com/apache/nifi/pull/1736
> 
>@thesolson did the merge but corrected one small thing, tick marks on 
> controller were still missing. hope is was ok to apply.
> 
> 
> ---
> If your project is set up for it, you can reply to this email and have your
> reply appear on GitHub as well. If your project does not have this feature
> enabled and wishes so, or if the feature is enabled but not working, please
> contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
> with INFRA.
> ---



NiFi unsecure cluster setup issue on Windows.

2017-05-03 Thread shahbazatta
Hi,
I am trying to set up a 3-node NiFi cluster but am unable to. I already set
it up on Ubuntu and it is working fine, but the same configuration is not
working on Windows machines.

I tried restarting all nodes, the NiFi service, etc., but nothing works:

Failed to determine which node is elected active Cluster Coordinator:
ZooKeeper reports the address as winifi01:

Meanwhile, the winifi01 node shows a cluster of 1/1. I also verified the
hosts file and the ZooKeeper properties.

zookeeper.properties
server.1=winifi01:2888:3888
server.2=winifi02:2888:3888
server.3=winifi03:2888:3888
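One quick way to check whether the ZooKeeper ports are reachable from another node (a generic TCP connectivity sketch, not NiFi-specific; the hostnames come from the config above, and the 2181 client port is an assumption based on ZooKeeper defaults):

```python
import socket

def port_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Covers connection refused, timeouts, and DNS failures
        return False

# Run from each node in turn; all results should be True.
for host in ("winifi01", "winifi02", "winifi03"):
    for port in (2181, 2888, 3888):
        print(host, port, port_open(host, port))
```

If any combination reports False from a peer node but True locally, a host firewall (such as Windows Firewall) blocking those ports is a likely cause.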





--
View this message in context: 
http://apache-nifi-developer-list.39713.n7.nabble.com/NiFi-unsecure-cluster-setup-issue-on-Windows-tp15631.html
Sent from the Apache NiFi Developer List mailing list archive at Nabble.com.


Re: Not able to ingest the csv data to RDBS Database

2017-05-03 Thread suman....@cuddle.ai
Hi,
Thanks for helping. I have modified the flow according to your suggestion.
My flow consists of the processors below:

GetFile --> SplitText --> ExtractText --> ReplaceText --> PutSQL

I have only 4 rows in my CSV, so why does the SplitText queue contain many
MB of data?

Also, in PutSQL, how do I specify the table name where the data needs to be
inserted?

Does the table need to exist beforehand?

My file

City,Count
Mumbai,10
Mumbai,10
Pune,10
Pune,10





--
View this message in context: 
http://apache-nifi-developer-list.39713.n7.nabble.com/Not-able-to-ingest-the-csv-data-to-RDBS-Database-tp15610p15630.html
Sent from the Apache NiFi Developer List mailing list archive at Nabble.com.


Re: EncryptContent issues after NIFI-1257 and NIFI-1259

2017-05-03 Thread Athar
I am getting this issue even in NiFi 1.0.0. I am using the "PGP_ASCII_ARMOR"
encryption algorithm.

I performed the following steps.
1) I created the binary key using GnuPG v2.0.14 and executed the "PGP"
encryption algorithm. It executes properly.
2) I exported the public key in ASCII format and configured
"PGP_ASCII_ARMOR". It displays "Invalid header encountered".


  
  



--
View this message in context: 
http://apache-nifi-developer-list.39713.n7.nabble.com/EncryptContent-issues-after-NIFI-1257-and-NIFI-1259-tp8581p15629.html
Sent from the Apache NiFi Developer List mailing list archive at Nabble.com.