[jira] [Commented] (NIFI-2656) Allow bootstrap process to prompt for password/key

2017-01-31 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-2656?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15848073#comment-15848073
 ] 

ASF GitHub Bot commented on NIFI-2656:
--

Github user alopresto commented on the issue:

https://github.com/apache/nifi/pull/1302
  
Hi @skrewz just checking in to see if you need any assistance or if it's 
just been a matter of bandwidth on this. Thanks. 


> Allow bootstrap process to prompt for password/key
> --
>
> Key: NIFI-2656
> URL: https://issues.apache.org/jira/browse/NIFI-2656
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Configuration, Core Framework
>Affects Versions: 1.0.0
>Reporter: Andy LoPresto
>Assignee: Andy LoPresto
>Priority: Minor
>  Labels: bootstrap, config, encryption, security
> Fix For: 1.2.0
>
> Attachments: NIFI-2656.-K_support.1.patch
>
>
> The bootstrap process {{RunNiFi.java}} is currently responsible for reading 
> the key from {{bootstrap.conf}} and sending it to the running NiFi process 
> {{NiFi.java}} to be used for sensitive property decryption. This exposes the 
> key in two places:
> * Plaintext in {{bootstrap.conf}}
> * In the process invocation
> Running the command {{ps -aef | grep -i nifi}} produces output like the 
> following:
> {code}
> ...
>   501 11597 11596   0  6:51PM ttys001    0:08.55 
> /Users/alopresto/.jenv/versions/1.8/bin/java -classpath 
> /Users/alopresto/Workspace/nifi/nifi-assembly/target/nifi-1.0.0-SNAPSHOT-bin/nifi-1.0.0-SNAPSHOT/./conf:/Users/alopresto/Workspace/nifi/nifi-assembly/target/nifi-1.0.0-SNAPSHOT-bin/nifi-1.0.0-SNAPSHOT/./lib/bcprov-jdk15on-1.54.jar:/Users/alopresto/Workspace/nifi/nifi-assembly/target/nifi-1.0.0-SNAPSHOT-bin/nifi-1.0.0-SNAPSHOT/./lib/commons-lang3-3.4.jar:/Users/alopresto/Workspace/nifi/nifi-assembly/target/nifi-1.0.0-SNAPSHOT-bin/nifi-1.0.0-SNAPSHOT/./lib/jcl-over-slf4j-1.7.12.jar:/Users/alopresto/Workspace/nifi/nifi-assembly/target/nifi-1.0.0-SNAPSHOT-bin/nifi-1.0.0-SNAPSHOT/./lib/jul-to-slf4j-1.7.12.jar:/Users/alopresto/Workspace/nifi/nifi-assembly/target/nifi-1.0.0-SNAPSHOT-bin/nifi-1.0.0-SNAPSHOT/./lib/log4j-over-slf4j-1.7.12.jar:/Users/alopresto/Workspace/nifi/nifi-assembly/target/nifi-1.0.0-SNAPSHOT-bin/nifi-1.0.0-SNAPSHOT/./lib/logback-classic-1.1.3.jar:/Users/alopresto/Workspace/nifi/nifi-assembly/target/nifi-1.0.0-SNAPSHOT-bin/nifi-1.0.0-SNAPSHOT/./lib/logback-core-1.1.3.jar:/Users/alopresto/Workspace/nifi/nifi-assembly/target/nifi-1.0.0-SNAPSHOT-bin/nifi-1.0.0-SNAPSHOT/./lib/nifi-api-1.0.0-SNAPSHOT.jar:/Users/alopresto/Workspace/nifi/nifi-assembly/target/nifi-1.0.0-SNAPSHOT-bin/nifi-1.0.0-SNAPSHOT/./lib/nifi-documentation-1.0.0-SNAPSHOT.jar:/Users/alopresto/Workspace/nifi/nifi-assembly/target/nifi-1.0.0-SNAPSHOT-bin/nifi-1.0.0-SNAPSHOT/./lib/nifi-framework-api-1.0.0-SNAPSHOT.jar:/Users/alopresto/Workspace/nifi/nifi-assembly/target/nifi-1.0.0-SNAPSHOT-bin/nifi-1.0.0-SNAPSHOT/./lib/nifi-nar-utils-1.0.0-SNAPSHOT.jar:/Users/alopresto/Workspace/nifi/nifi-assembly/target/nifi-1.0.0-SNAPSHOT-bin/nifi-1.0.0-SNAPSHOT/./lib/nifi-properties-1.0.0-SNAPSHOT.jar:/Users/alopresto/Workspace/nifi/nifi-assembly/target/nifi-1.0.0-SNAPSHOT-bin/nifi-1.0.0-SNAPSHOT/./lib/nifi-properties-loader-1.0.0-SNAPSHOT.jar:/Users/alopresto/Workspace/nifi/nifi-assembly/target/nifi-1.0.0-SNAPSHOT-bin/nifi-1.0.0-SNAPSHOT/./lib/nifi-runtime-1.0.0-SNAPSHOT.jar:/Users/alopresto/Workspace/nifi/
nifi-assembly/target/nifi-1.0.0-SNAPSHOT-bin/nifi-1.0.0-SNAPSHOT/./lib/slf4j-api-1.7.12.jar
>  -Dorg.apache.jasper.compiler.disablejsr199=true -Xmx512m -Xms512m 
> -Dsun.net.http.allowRestrictedHeaders=true -Djava.net.preferIPv4Stack=true 
> -Djava.awt.headless=true -XX:+UseG1GC 
> -Djava.protocol.handler.pkgs=sun.net.www.protocol 
> -Dnifi.properties.file.path=/Users/alopresto/Workspace/nifi/nifi-assembly/target/nifi-1.0.0-SNAPSHOT-bin/nifi-1.0.0-SNAPSHOT/./conf/nifi.properties
>  -Dnifi.bootstrap.listen.port=58213 -Dapp=NiFi 
> -Dorg.apache.nifi.bootstrap.config.log.dir=/Users/alopresto/Workspace/nifi/nifi-assembly/target/nifi-1.0.0-SNAPSHOT-bin/nifi-1.0.0-SNAPSHOT/logs
>  org.apache.nifi.NiFi -k 
> 0123456789ABCDEFFEDCBA98765432100123456789ABCDEFFEDCBA9876543210
> ...
> {code}
> To allow for a more secure invocation, the NiFi process could pause and 
> prompt for the password/key entry in a secure console if it is not provided 
> in the invocation arguments from bootstrap (or if a special flag is 
> provided). While this would require manual intervention to start the process, 
> it would not be the default behavior. 
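Both the secure console prompt proposed here and the -K password-file alternative from the pull request can be sketched with standard Java APIs. This is a hedged illustration only; the class and method names are hypothetical and this is not the actual patch:

```java
import java.io.Console;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Arrays;

public class KeyInput {
    // Read the key from a password file (the -K [passwordfile] approach),
    // trimming whitespace so a trailing newline does not corrupt the key.
    static String readKeyFromFile(Path passwordFile) throws IOException {
        return new String(Files.readAllBytes(passwordFile), StandardCharsets.UTF_8).trim();
    }

    // Fall back to prompting on a secure console; readPassword() suppresses echo,
    // so the key never appears in the terminal or the process invocation.
    static String readKeyFromConsole() {
        Console console = System.console();
        if (console == null) {
            throw new IllegalStateException("no interactive console available");
        }
        char[] chars = console.readPassword("Enter sensitive property key: ");
        String key = new String(chars);
        Arrays.fill(chars, '\0'); // scrub the buffer once copied
        return key;
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("nifi-key", ".txt");
        Files.write(tmp, "0123456789ABCDEF\n".getBytes(StandardCharsets.UTF_8));
        System.out.println(readKeyFromFile(tmp)); // prints 0123456789ABCDEF
        Files.delete(tmp);
    }
}
```

Unlike the -k argument, neither path exposes the key to {{ps}}, since the file path (not the key) appears in the invocation.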



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] nifi issue #1302: NIFI-2656: replace -k [password] with -K [passwordfile].

2017-01-31 Thread alopresto
Github user alopresto commented on the issue:

https://github.com/apache/nifi/pull/1302
  
Hi @skrewz just checking in to see if you need any assistance or if it's 
just been a matter of bandwidth on this. Thanks. 


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Updated] (NIFI-3162) RPG proxy and Remote Group Port configuration changes should be audited

2017-01-31 Thread Koji Kawamura (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-3162?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Koji Kawamura updated NIFI-3162:

Description: 
Since NiFi 1.0.0, several configurations have been added to RemoteProcessGroup 
such as Transport Protocol and Proxy settings.

Currently, configuration updates to these new settings are not audited.

In addition to these RemoteProcessGroup settings, RemoteProcessGroupPort will 
gain new configurations via NIFI-1202.

This JIRA ticket tracks the work of adding audit events for these new settings.

h3. Investigate current behavior

In order to fix this properly, I tested what currently works and what doesn't. 
Here is the current behavior (measured with 1.2.0-SNAPSHOT, but it should be 
the same for all versions after 1.0).

|| Operation performed || Created Audit Type || Audited Operation || Need fix? 
||
| Create RPG | RPG | Add | No |
| Enable transmission | RPG | Start | No |
| Disable transmission | RPG | Stop | No |
| Delete RPG | RPG? | Remove? | Different issue |
| Edit RPG config | RPG | Configure | Yes fix-1 |
| Enable/Disable individual remote port | (none) | (none) | Yes fix-2 |
| Edit individual remote port config | (none) | (none) | Yes fix-3 |

h3. Fix-1: Edit RPG config

Currently, this is only partially audited, covering 'Communications Timeout' 
and 'Yield Duration'.
We also need to audit edits to: Transport Protocol, HTTP Proxy Server 
Hostname, Port, User, and Password.

h3. Fix-2: Enable/Disable individual remote port

From the 'Remote ports' context menu of an RPG, each remote port can be 
enabled/disabled individually. Currently this operation is not audited.

There are two ways to fix this. One is to use the existing REMOTE_PROCESS_GROUP 
table in the nifi-flow-audit H2 database, with the 'Configure' Operation type, 
'Name=.transmission', and 'Value=enabled/disabled'.
The other is to create a REMOTE_PROCESS_GROUP_PORT table, but that requires 
additional migration code to create the table.
I think the former approach is reasonable.
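The former approach, reusing the existing table with a 'Configure' operation whose detail name is the port name plus a '.transmission' suffix, could be modeled like this. A sketch only: the record and method names are hypothetical, and NiFi's actual audit schema differs.

```java
public class RemotePortAudit {
    // Hypothetical shape of one audit detail row; NiFi's real schema differs.
    record ConfigureAction(String sourceId, String operation, String name, String value) {}

    // Build the proposed 'Configure' audit entry for toggling a remote port,
    // reusing the RPG's id and a "<portName>.transmission" detail name.
    static ConfigureAction portTransmissionToggled(String rpgId, String portName, boolean enabled) {
        return new ConfigureAction(rpgId, "Configure",
                portName + ".transmission", enabled ? "enabled" : "disabled");
    }

    public static void main(String[] args) {
        // e.g. enabling remote port "input-a" of RPG "rpg-1"
        System.out.println(portTransmissionToggled("rpg-1", "input-a", true));
    }
}
```

This keeps the audit query path unchanged: a history lookup on the RPG id returns both group-level and per-port configuration events.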

h3. Fix-3: Edit individual remote port config

Currently, users can configure 'Concurrent Tasks' and 'Compressed', but no audit 
event is created for these operations. As with Fix-2, we might be able to use 
the existing REMOTE_PROCESS_GROUP table, using the 'Configure' Operation type 
with the 'Name=.' dot notation and 
'Value='.

h3. NiFi history shows audit records as 'Not authorized' if the component has 
been removed

Once the target component (e.g. a RemoteProcessGroup) is removed from a flow, 
its audit records are shown as 'Not authorized'. This is problematic because 
users cannot tell who deleted the component.

  was:
Since NiFi 1.0.0, several configurations have been added to RemoteProcessGroup 
such as Transport Protocol and Proxy settings.

Currently, configuration updates against these new settings are not audited.

In addition to these RemoteProcessGroup settings, RemoteProcessGroupPort  will 
have new configurations by NIFI-1202.

This JIRA ticket tracks the work of adding audit events for these new settings.

h3. Investigate current behavior

In order to fix properly, I've tested to know what works and what doesn't. Here 
is the result of current behavior (measured with 1.2.0-SNAPSHOT, but it should 
be the same for versions after 1.0).

|| Operation performed || Created Audit Type || Audited Operation || Need fix? 
||
| Create RPG | RPG | Add | No |
| Enable transmission | RPG | Start | No |
| Disable transmission | RPG | Stop | No |
| Delete RPG | RPG? | Remove? | Different issue |
| Edit RPG config | RPG | Configure | Yes fix-1 |
| Enable/Disable individual remote port | (none) | (none) | Yes fix-2 |
| Edit individual remote port config | (none) | (none) | Yes fix-3 |

h3. Fix-1: Edit RPG config

Currently, this is partially audited for 'Communications Timeout' and 'Yield 
Duration'.
We need to track edit for these as well: Transport Protocol, HTTP Proxy Server 
Hostname, Port, User, Password

h3. Fix-2: Enable/Disable individual remote port

From 'Remote ports' context menu of a RPG, each remote port can be 
enabled/disabled individually. Currently this operation is not audited.




> RPG proxy and Remote Group Port configuration changes should be audited
> ---
>
> Key: NIFI-3162
> URL: https://issues.apache.org/jira/browse/NIFI-3162
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.0.0
>Reporter: Koji Kawamura
>Assignee: Koji Kawamura
>
> Since NiFi 1.0.0, several configurations have been added to 
> RemoteProcessGroup such as Transport Protocol and Proxy settings.
> Currently, configuration updates against these new settings are not audited.
> In addition to these RemoteProcessGroup settings, RemoteProcessGroupPort  
> will have new 

[jira] [Updated] (NIFI-3162) RPG proxy and Remote Group Port configuration changes should be audited

2017-01-31 Thread Koji Kawamura (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-3162?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Koji Kawamura updated NIFI-3162:

Description: 
Since NiFi 1.0.0, several configurations have been added to RemoteProcessGroup 
such as Transport Protocol and Proxy settings.

Currently, configuration updates to these new settings are not audited.

In addition to these RemoteProcessGroup settings, RemoteProcessGroupPort will 
gain new configurations via NIFI-1202.

This JIRA ticket tracks the work of adding audit events for these new settings.

h3. Investigate current behavior

In order to fix this properly, I tested what currently works and what doesn't. 
Here is the current behavior (measured with 1.2.0-SNAPSHOT, but it should be 
the same for all versions after 1.0).

|| Operation performed || Created Audit Type || Audited Operation || Need fix? 
||
| Create RPG | RPG | Add | No |
| Enable transmission | RPG | Start | No |
| Disable transmission | RPG | Stop | No |
| Delete RPG | RPG? | Remove? | Different issue |
| Edit RPG config | RPG | Configure | Yes fix-1 |
| Enable/Disable individual remote port | (none) | (none) | Yes fix-2 |
| Edit individual remote port config | (none) | (none) | Yes fix-3 |

h3. Fix-1: Edit RPG config

Currently, this is only partially audited, covering 'Communications Timeout' 
and 'Yield Duration'.
We also need to audit edits to: Transport Protocol, HTTP Proxy Server 
Hostname, Port, User, and Password.

h3. Fix-2: Enable/Disable individual remote port

From the 'Remote ports' context menu of an RPG, each remote port can be 
enabled/disabled individually. Currently this operation is not audited.



  was:
Since NiFi 1.0.0, several configurations have been added to RemoteProcessGroup 
such as Transport Protocol and Proxy settings.

Currently, configuration updates against these new settings are not audited.

In addition to these RemoteProcessGroup settings, RemoteProcessGroupPort  will 
have new configurations by NIFI-1202.

This JIRA ticket tracks the work of adding audit events for these new settings.

h3. Investigate current behavior

In order to fix properly, I've tested to know what works and what doesn't. Here 
is the result of current behavior (measured with 1.2.0-SNAPSHOT, but it should 
be the same for versions after 1.0).

|| Operation performed || Created Audit Type || Audited Operation || Need fix? 
||
| Create RPG | RemoteProcessGroup | Add | No |
| Enable transmission | RemoteProcessGroup | Start | No |
| Disable transmission | RemoteProcessGroup | Stop | No |
| Delete RPG | RemoteProcessGroup? | Remove? | Different issue |
| Edit RPG config | RemoteProcessGroup | Configure | Yes |
| Enable/Disable individual remote port | (none) | (none) | Yes |
| Edit individual remote port config | (none) | (none) | Yes |


> RPG proxy and Remote Group Port configuration changes should be audited
> ---
>
> Key: NIFI-3162
> URL: https://issues.apache.org/jira/browse/NIFI-3162
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.0.0
>Reporter: Koji Kawamura
>Assignee: Koji Kawamura
>
> Since NiFi 1.0.0, several configurations have been added to 
> RemoteProcessGroup such as Transport Protocol and Proxy settings.
> Currently, configuration updates against these new settings are not audited.
> In addition to these RemoteProcessGroup settings, RemoteProcessGroupPort  
> will have new configurations by NIFI-1202.
> This JIRA ticket tracks the work of adding audit events for these new 
> settings.
> h3. Investigate current behavior
> In order to fix properly, I've tested to know what works and what doesn't. 
> Here is the result of current behavior (measured with 1.2.0-SNAPSHOT, but it 
> should be the same for versions after 1.0).
> || Operation performed || Created Audit Type || Audited Operation || Need 
> fix? ||
> | Create RPG | RPG | Add | No |
> | Enable transmission | RPG | Start | No |
> | Disable transmission | RPG | Stop | No |
> | Delete RPG | RPG? | Remove? | Different issue |
> | Edit RPG config | RPG | Configure | Yes fix-1 |
> | Enable/Disable individual remote port | (none) | (none) | Yes fix-2 |
> | Edit individual remote port config | (none) | (none) | Yes fix-3 |
> h3. Fix-1: Edit RPG config
> Currently, this is partially audited for 'Communications Timeout' and 'Yield 
> Duration'.
> We need to track edit for these as well: Transport Protocol, HTTP Proxy 
> Server Hostname, Port, User, Password
> h3. Fix-2: Enable/Disable individual remote port
> From 'Remote ports' context menu of a RPG, each remote port can be 
> enabled/disabled individually. Currently this operation is not audited.





[jira] [Updated] (NIFI-3162) RPG proxy and Remote Group Port configuration changes should be audited

2017-01-31 Thread Koji Kawamura (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-3162?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Koji Kawamura updated NIFI-3162:

Description: 
Since NiFi 1.0.0, several configurations have been added to RemoteProcessGroup 
such as Transport Protocol and Proxy settings.

Currently, configuration updates to these new settings are not audited.

In addition to these RemoteProcessGroup settings, RemoteProcessGroupPort will 
gain new configurations via NIFI-1202.

This JIRA ticket tracks the work of adding audit events for these new settings.

h3. Investigate current behavior

In order to fix this properly, I tested what currently works and what doesn't. 
Here is the current behavior (measured with 1.2.0-SNAPSHOT, but it should be 
the same for all versions after 1.0).

|| Operation performed || Created Audit Type || Audited Operation || Need fix? 
||
| Create RPG | RemoteProcessGroup | Add | No |
| Enable transmission | RemoteProcessGroup | Start | No |
| Disable transmission | RemoteProcessGroup | Stop | No |
| Delete RPG | RemoteProcessGroup? | Remove? | Different issue |
| Edit RPG config | RemoteProcessGroup | Configure | Yes |
| Enable/Disable individual remote port | (none) | (none) | Yes |
| Edit individual remote port config | (none) | (none) | Yes |

  was:
Since NiFi 1.0.0, several configurations have been added to RemoteProcessGroup 
such as Transport Protocol and Proxy settings.

Currently, configuration updates against these new settings are not audited.

In addition to these RemoteProcessGroup settings, RemoteProcessGroupPort  will 
have new configurations by NIFI-1202.

This JIRA ticket tracks the work of adding audit events for these new settings.


> RPG proxy and Remote Group Port configuration changes should be audited
> ---
>
> Key: NIFI-3162
> URL: https://issues.apache.org/jira/browse/NIFI-3162
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.0.0
>Reporter: Koji Kawamura
>Assignee: Koji Kawamura
>
> Since NiFi 1.0.0, several configurations have been added to 
> RemoteProcessGroup such as Transport Protocol and Proxy settings.
> Currently, configuration updates against these new settings are not audited.
> In addition to these RemoteProcessGroup settings, RemoteProcessGroupPort  
> will have new configurations by NIFI-1202.
> This JIRA ticket tracks the work of adding audit events for these new 
> settings.
> h3. Investigate current behavior
> In order to fix properly, I've tested to know what works and what doesn't. 
> Here is the result of current behavior (measured with 1.2.0-SNAPSHOT, but it 
> should be the same for versions after 1.0).
> || Operation performed || Created Audit Type || Audited Operation || Need 
> fix? ||
> | Create RPG | RemoteProcessGroup | Add | No |
> | Enable transmission | RemoteProcessGroup | Start | No |
> | Disable transmission | RemoteProcessGroup | Stop | No |
> | Delete RPG | RemoteProcessGroup? | Remove? | Different issue |
> | Edit RPG config | RemoteProcessGroup | Configure | Yes |
> | Enable/Disable individual remote port | (none) | (none) | Yes |
> | Edit individual remote port config | (none) | (none) | Yes |





[jira] [Updated] (NIFI-3373) Add nifi.flow.configuration.archive.max.count property

2017-01-31 Thread Koji Kawamura (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-3373?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Koji Kawamura updated NIFI-3373:

Assignee: Koji Kawamura
  Status: Patch Available  (was: Open)

> Add nifi.flow.configuration.archive.max.count property
> --
>
> Key: NIFI-3373
> URL: https://issues.apache.org/jira/browse/NIFI-3373
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Reporter: Koji Kawamura
>Assignee: Koji Kawamura
>
> Currently we can limit the number of flow.xml.gz archive files by:
> * total archive size (nifi.flow.configuration.archive.max.storage)
> * archive file age (nifi.flow.configuration.archive.max.time)
> In addition to these conditions for managing old archives, there is demand for 
> simply limiting the number of archive files regardless of time or size 
> constraints:
> https://lists.apache.org/thread.html/4d2d9cec46ee896318a5492bf020f60c28396e2850c077dad40d45d2@%3Cusers.nifi.apache.org%3E
> We can provide that by adding a new property, 
> 'nifi.flow.configuration.archive.max.count', so that if specified, only the N 
> latest config files are kept.
> Make these properties optional, and process them in the following order:
> - If max.count is specified, any archive other than the latest (N-1) is 
> removed
> - If max.time is specified, any archive older than max.time is removed
> - If max.storage is specified, old archives are deleted while the total size 
> is greater than the configured limit
> - Create the new archive; keep the latest archive regardless of the above 
> limitations
> To illustrate how flow.xml archiving works, here are simulations with the 
> updated logic, where the size of flow.xml keeps increasing:
> h3. CASE-1
> archive.max.storage=10MB
> archive.max.count = 5
> ||Time || flow.xml || archives || archive total ||
> |t1 | f1 5MB  | f1 | 5MB|
> |t2 | f2 5MB  | f1, f2 | 10MB|
> |t3 | f3 5MB  | f2, f3 | 10MB|
> |t4 | f4 10MB | f4 | 10MB|
> |t5 | f5 15MB | f5 | 15MB|
> |t6 | f6 20MB | f6 | 20MB|
> |t7 | f7 25MB | f7 | 25MB|
> * t3: The oldest archive f1 is removed, because f1 + f2 + f3 > 10MB.
> * t5: Even though the flow.xml size exceeds max.storage, the latest archive 
> is still created. f4 is removed because f4 + f5 > 10MB. A WARN message is 
> logged because f5 alone is greater than 10MB.
> From t5 onward, NiFi will keep logging a WARN message indicating that the 
> archive storage size exceeds the limit, and will only keep the latest 
> flow.xml archive.
> h3. CASE-2
> If at least 5 archives need to be kept no matter what, then leave 
> max.storage and max.time blank:
> archive.max.storage=
> archive.max.time=
> archive.max.count=5 // Only limit archives by count
> ||Time || flow.xml || archives || archive total ||
> |t1 | f1 5MB  | f1 | 5MB|
> |t2 | f2 5MB  | f1, f2 | 10MB|
> |t3 | f3 5MB  | f1, f2, f3 | 15MB|
> |t4 | f4 10MB | f1, f2, f3, f4 | 25MB|
> |t5 | f5 15MB | f1, f2, f3, f4, f5 | 40MB|
> |t6 | f6 20MB | f2, f3, f4, f5, f6 | 55MB|
> |t7 | f7 25MB | f3, f4, f5, f6, (f7) | 50MB, (75MB)|
> |t8 | f8 30MB | f3, f4, f5, f6 | 50MB|
> * From t6, the oldest archive is removed to keep the number of archives <= 5.
> * At t7, if the disk has only 60MB of space, f7 won't be archived, and after 
> this point the archive mechanism stops working (it keeps trying to create a 
> new archive but gets a 'no space left on device' exception).
> In either case above, once flow.xml has grown to that size, some human 
> intervention would be needed.
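The pruning order described above can be sketched as a pure function over archive metadata. This is a hedged illustration of the proposed semantics, not the actual NiFi implementation; the class, record, and parameter names are hypothetical (here maxCount caps the archives retained; per the proposal, callers would pass N-1 before creating the new archive):

```java
import java.util.ArrayList;
import java.util.List;

public class ArchivePruner {
    // Minimal model of one archive file: name, creation time, size in bytes.
    record Archive(String name, long createdMillis, long sizeBytes) {}

    // Return the archives to delete, given oldest-first input.
    // A null limit means that limit is unset; the newest archive always survives.
    static List<Archive> selectRemovals(List<Archive> oldestFirst, Integer maxCount,
                                        Long maxAgeMillis, Long maxStorageBytes, long nowMillis) {
        List<Archive> keep = new ArrayList<>(oldestFirst);
        List<Archive> remove = new ArrayList<>();
        // 1. max.count: keep only the latest maxCount archives
        if (maxCount != null) {
            while (keep.size() > maxCount) {
                remove.add(keep.remove(0));
            }
        }
        // 2. max.time: drop archives older than the cutoff
        if (maxAgeMillis != null) {
            while (keep.size() > 1 && nowMillis - keep.get(0).createdMillis() > maxAgeMillis) {
                remove.add(keep.remove(0));
            }
        }
        // 3. max.storage: drop oldest while the total exceeds the limit
        if (maxStorageBytes != null) {
            long total = keep.stream().mapToLong(Archive::sizeBytes).sum();
            while (keep.size() > 1 && total > maxStorageBytes) {
                Archive victim = keep.remove(0);
                total -= victim.sizeBytes();
                remove.add(victim);
            }
        }
        return remove;
    }

    public static void main(String[] args) {
        // CASE-1 at t3: three 5MB archives against a 10MB storage limit
        List<Archive> archives = List.of(
                new Archive("f1", 1L, 5), new Archive("f2", 2L, 5), new Archive("f3", 3L, 5));
        System.out.println(
                selectRemovals(archives, null, null, 10L, 3L).get(0).name()); // prints f1
    }
}
```

Running the CASE-1 t3 scenario through this sketch removes f1, matching the simulation table above.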





[jira] [Commented] (NIFI-3373) Add nifi.flow.configuration.archive.max.count property

2017-01-31 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3373?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15847910#comment-15847910
 ] 

ASF GitHub Bot commented on NIFI-3373:
--

GitHub user ijokarumawak opened a pull request:

https://github.com/apache/nifi/pull/1460

NIFI-3373: Add nifi.flow.configuration.archive.max.count

- Add 'nifi.flow.configuration.archive.max.count' in nifi.properties
- Change default archive limit so that it uses archive max time(30 days)
  and storage (500MB) if no limitation is specified
- Simplified logic to delete old archives

Thank you for submitting a contribution to Apache NiFi.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [x] Is there a JIRA ticket associated with this PR? Is it referenced 
 in the commit message?

- [x] Does your PR title start with NIFI- where  is the JIRA number 
you are trying to resolve? Pay particular attention to the hyphen "-" character.

- [x] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [x] Is your initial contribution a single, squashed commit?

### For code changes:
- [x] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi folder?
- [x] Have you written or updated unit tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
- [ ] If applicable, have you updated the LICENSE file, including the main 
LICENSE file under nifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly?
- [ ] If adding new Properties, have you added .displayName in addition to 
.name (programmatic access) for each of the new properties?

### For documentation related changes:
- [x] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ijokarumawak/nifi nifi-3373

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/1460.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1460


commit 05dce3f1d3081b737a2a3112795156d5cbd640a2
Author: Koji Kawamura 
Date:   2017-01-24T00:23:01Z

NIFI-3373: Add nifi.flow.configuration.archive.max.count

- Add 'nifi.flow.configuration.archive.max.count' in nifi.properties
- Change default archive limit so that it uses archive max time(30 days)
  and storage (500MB) if no limitation is specified
- Simplified logic to delete old archives




> Add nifi.flow.configuration.archive.max.count property
> --
>
> Key: NIFI-3373
> URL: https://issues.apache.org/jira/browse/NIFI-3373
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Reporter: Koji Kawamura
>
> Currently we can limit the number of flow.xml.gz archive files by:
> * total archive size (nifi.flow.configuration.archive.max.storage)
> * archive file age (nifi.flow.configuration.archive.max.time)
> In addition to these conditions to manage old archives, there's a demand that 
> simply limiting number of archive files regardless time or size constraint.
> https://lists.apache.org/thread.html/4d2d9cec46ee896318a5492bf020f60c28396e2850c077dad40d45d2@%3Cusers.nifi.apache.org%3E
> We can provide that by adding new property 
> 'nifi.flow.configuration.archive.max.count', so that If specified, only N 
> latest config files can be archived.
> Make those properties optional, and process in following order:
> - If max.count is specified, any archive other than the latest (N-1) is 
> removed
> - If max.time is specified, any archive that is older than max.time is removed
> - If max.storage is specified, old archives are deleted while total size is 
> greater than the configuration
> - Create new archive, keep the latest archive regardless of above limitations
> To illustrate how flow.xml archiving works, here are simulations with the 
> updated logic, where the size of flow.xml keeps increasing:
> h3. CASE-1
> archive.max.storage=10MB
> archive.max.count = 5
> ||Time || flow.xml || archives || archive total ||
> |t1 | 

[GitHub] nifi pull request #1460: NIFI-3373: Add nifi.flow.configuration.archive.max....

2017-01-31 Thread ijokarumawak
GitHub user ijokarumawak opened a pull request:

https://github.com/apache/nifi/pull/1460

NIFI-3373: Add nifi.flow.configuration.archive.max.count

- Add 'nifi.flow.configuration.archive.max.count' in nifi.properties
- Change default archive limit so that it uses archive max time(30 days)
  and storage (500MB) if no limitation is specified
- Simplified logic to delete old archives

Thank you for submitting a contribution to Apache NiFi.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [x] Is there a JIRA ticket associated with this PR? Is it referenced 
 in the commit message?

- [x] Does your PR title start with NIFI- where  is the JIRA number 
you are trying to resolve? Pay particular attention to the hyphen "-" character.

- [x] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [x] Is your initial contribution a single, squashed commit?

### For code changes:
- [x] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi folder?
- [x] Have you written or updated unit tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
- [ ] If applicable, have you updated the LICENSE file, including the main 
LICENSE file under nifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly?
- [ ] If adding new Properties, have you added .displayName in addition to 
.name (programmatic access) for each of the new properties?

### For documentation related changes:
- [x] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ijokarumawak/nifi nifi-3373

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/1460.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1460


commit 05dce3f1d3081b737a2a3112795156d5cbd640a2
Author: Koji Kawamura 
Date:   2017-01-24T00:23:01Z

NIFI-3373: Add nifi.flow.configuration.archive.max.count

- Add 'nifi.flow.configuration.archive.max.count' in nifi.properties
- Change default archive limit so that it uses archive max time(30 days)
  and storage (500MB) if no limitation is specified
- Simplified logic to delete old archives






[GitHub] nifi issue #1450: NIFI-3339b Add getDataSource() to DBCPService, second vers...

2017-01-31 Thread ijokarumawak
Github user ijokarumawak commented on the issue:

https://github.com/apache/nifi/pull/1450
  
Hi @ToivoAdams, thank you for your contribution. I reviewed it and saw the 
sample code using the Spring JDBC template. I feel this can be done outside of 
DBCPService; adding getDataSource to DBCPService would be overkill.

Instead of adding getDataSource method, how about adding a test method in 
DBCPServiceTest like below:

```java
/**
 * Test database queries using Derby through the Spring JDBC template.
 * Connect, create table, insert, select, drop table.
 * This is more of an example of using DBCPService with Spring JDBC.
 */
@Test
public void testSpringJDBCTemplate() throws InitializationException, SQLException {
    final TestRunner runner = TestRunners.newTestRunner(TestProcessor.class);
    final DBCPConnectionPool service = new DBCPConnectionPool();
    runner.addControllerService("test-good1", service);

    // remove previous test database, if any
    final File dbLocation = new File(DB_LOCATION);
    dbLocation.delete();

    // set embedded Derby database connection url
    runner.setProperty(service, DBCPConnectionPool.DATABASE_URL,
            "jdbc:derby:" + DB_LOCATION + ";create=true");
    runner.setProperty(service, DBCPConnectionPool.DB_USER, "tester");
    runner.setProperty(service, DBCPConnectionPool.DB_PASSWORD, "testerp");
    runner.setProperty(service, DBCPConnectionPool.DB_DRIVERNAME,
            "org.apache.derby.jdbc.EmbeddedDriver");

    runner.enableControllerService(service);
    runner.assertValid(service);

    final DBCPService dbcpService = (DBCPService) runner.getProcessContext()
            .getControllerServiceLookup().getControllerService("test-good1");
    Assert.assertNotNull(dbcpService);

    // Create a jdbcTemplate. Wrap dbcpService so that it can act as a DataSource.
    JdbcTemplate jdbcTemplate = new JdbcTemplate(new BasicDataSource() {
        @Override
        public Connection getConnection() throws SQLException {
            return dbcpService.getConnection();
        }

        @Override
        public Connection getConnection(String user, String pass) throws SQLException {
            throw new UnsupportedOperationException("User and password cannot be overwritten.");
        }
    });

    try {
        jdbcTemplate.execute(dropTable);
    } catch (final Exception e) {
        // table may not exist; this is not a serious problem
    }

    jdbcTemplate.execute(createTable);

    jdbcTemplate.update("insert into restaurants values (1, 'Irifunes', 'San Mateo')");
    jdbcTemplate.update("insert into restaurants values (2, 'Estradas', 'Daly City')");
    jdbcTemplate.update("insert into restaurants values (3, 'Prime Rib House', 'San Francisco')");

    int nrOfRows = jdbcTemplate.queryForObject("select count(*) from restaurants",
            Integer.class);
    assertEquals(3, nrOfRows);
}
```

This way, we can let other developers know that there are users integrating 
DBCPService with the Spring JDBC framework in their custom processors. 
Also, if DBCPService changes its signature or behavior in the future, this test 
lets us detect that breaking change.

What do you think?




[jira] [Commented] (NIFI-3339) Add getDataSource() to DBCPService

2017-01-31 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15847869#comment-15847869
 ] 

ASF GitHub Bot commented on NIFI-3339:
--

Github user ijokarumawak commented on the issue:

https://github.com/apache/nifi/pull/1417
  
@ToivoAdams Would you close this PR, since you opened PR #1450 for the same 
JIRA? Usually you just need to add a commit to your branch and push it in order 
to update a PR that has already been submitted. 

https://cwiki.apache.org/confluence/display/NIFI/Contributor+Guide#ContributorGuide-Pushchangestoyourpersonal,GitHubrepositoryremote

If you'd like to rewrite a PR completely, you can squash the commits and then 
push them to your remote branch with the '-f' option.
https://ariejan.net/2011/07/05/git-squash-your-latests-commits-into-one/
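The squash-and-force-push workflow can be sketched in a throwaway repository; 
the commit messages and the branch name in the final comment are hypothetical:

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q repo && cd repo
git config user.email "you@example.com"
git config user.name "you"
echo a > f.txt && git add f.txt && git commit -qm "NIFI-3339 first draft"
echo b >> f.txt && git add f.txt && git commit -qm "wip"
echo c >> f.txt && git add f.txt && git commit -qm "fixup"
# Squash the last two commits into one: move HEAD back two commits,
# keeping their changes staged, then commit once.
git reset --soft HEAD~2
git commit -qm "NIFI-3339 address review comments"
git rev-list --count HEAD   # prints 2
# On a real PR branch you would then force-push the rewritten history:
# git push -f origin <your-branch>
```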

Thanks!


> Add getDataSource() to DBCPService
> --
>
> Key: NIFI-3339
> URL: https://issues.apache.org/jira/browse/NIFI-3339
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Toivo Adams
>Assignee: Toivo Adams
>Priority: Minor
>
> Currently DBCPService returns only Connection. 
> Sometimes a DataSource is needed; for example, Spring JdbcTemplate and 
> SimpleJdbcCall need a DataSource.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (NIFI-2881) Allow Database Fetch processor(s) to accept incoming flow files and use Expression Language

2017-01-31 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-2881?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15847844#comment-15847844
 ] 

ASF GitHub Bot commented on NIFI-2881:
--

Github user ijokarumawak commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1407#discussion_r98810570
  
--- Diff: nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/QueryDatabaseTable.java ---
@@ -212,9 +215,21 @@ public void onTrigger(final ProcessContext context, final ProcessSessionFactory
         final Map<String, String> statePropertyMap = new HashMap<>(stateMap.toMap());
 
         //If an initial max value for column(s) has been specified using properties, and this column is not in the state manager, sync them to the state property map
-        for(final Map.Entry<String, String> maxProp : maxValueProperties.entrySet()){
-            if (!statePropertyMap.containsKey(maxProp.getKey().toLowerCase())) {
-                statePropertyMap.put(maxProp.getKey().toLowerCase(), maxProp.getValue());
+        for (final Map.Entry<String, String> maxProp : maxValueProperties.entrySet()) {
+            String maxPropKey = maxProp.getKey().toLowerCase();
+            String fullyQualifiedMaxPropKey = getStateKey(tableName, maxPropKey);
+            if (!statePropertyMap.containsKey(fullyQualifiedMaxPropKey)) {
+                String newMaxPropValue;
+                // If we can't find the value at the fully-qualified key name, it is possible (under a previous scheme)
+                // the value has been stored under a key that is only the column name. Fall back to check the column name,
+                // but store the new initial max value under the fully-qualified key.
+                if (statePropertyMap.containsKey(maxPropKey)) {
+                    newMaxPropValue = statePropertyMap.get(maxPropKey);
+                } else {
+                    newMaxPropValue = maxProp.getValue();
--- End diff --

I was suggesting that for QueryDatabaseTable and the initial max value, instead 
of GenerateTableFetch, because not being able to know the column type seemed an 
obstacle to supporting incoming Flow Files in QueryDatabaseTable ([PR 
comment](https://github.com/apache/nifi/pull/1407#issuecomment-274592165) and 
[JIRA 
comment](https://issues.apache.org/jira/browse/NIFI-2881?focusedCommentId=15811850=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15811850)).

However, I agree with you that, as you've written, since GenerateTableFetch 
and ExecuteSQL can be used if users need to pass arguments from incoming flow 
files, it's not necessary to support incoming flow files in QueryDatabaseTable.

If we are not going to add incoming flow file support for 
QueryDatabaseTable in this PR, maybe we should add some documentation to the 
QueryDatabaseTable capability description to point users to 
GenerateTableFetch and ExecuteSQL, too.
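The fully-qualified-key fallback in the diff above can be sketched with a plain 
map; the `@!@` separator and the helper names below are illustrative assumptions, 
not necessarily NiFi's exact implementation:

```java
import java.util.HashMap;
import java.util.Map;

public class StateKeyFallbackSketch {

    // Assumed convention: state keys are "<table>@!@<column>", lower-cased.
    public static String getStateKey(String table, String column) {
        return table.toLowerCase() + "@!@" + column.toLowerCase();
    }

    // Prefer the fully-qualified key; fall back to the legacy column-only key
    // (a previous scheme stored values under the bare column name); otherwise
    // use the initial max value configured via a dynamic property.
    public static String resolveInitialMax(Map<String, String> state, String table,
                                           String column, String configuredInitial) {
        String qualified = getStateKey(table, column);
        if (state.containsKey(qualified)) {
            return state.get(qualified);
        }
        return state.getOrDefault(column.toLowerCase(), configuredInitial);
    }

    public static void main(String[] args) {
        Map<String, String> state = new HashMap<>();
        state.put("id", "42"); // legacy column-only entry from an old state map
        System.out.println(resolveInitialMax(state, "users", "ID", "0")); // prints 42
        System.out.println(resolveInitialMax(state, "users", "ts", "0")); // prints 0
    }
}
```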


> Allow Database Fetch processor(s) to accept incoming flow files and use 
> Expression Language
> ---
>
> Key: NIFI-2881
> URL: https://issues.apache.org/jira/browse/NIFI-2881
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Matt Burgess
>Assignee: Matt Burgess
>
> The QueryDatabaseTable and GenerateTableFetch processors do not allow 
> Expression Language to be used in the properties, mainly because they also do 
> not allow incoming connections. This means if the user desires to fetch from 
> multiple tables, they currently need one instance of the processor for each 
> table, and those table names must be hard-coded.
> To support the same capabilities for multiple tables and more flexible 
> configuration via Expression Language, these processors should have 
> properties that accept Expression Language, and GenerateTableFetch should 
> accept (optional) incoming connections.
> Conversation about the behavior of the processors is welcomed and encouraged. 
> For example, if an incoming flow file is available, do we also still run the 
> incremental fetch logic for tables that aren't specified by this flow file, 
> or do we just do incremental fetching when the processor is scheduled but 
> there is no incoming flow file. The latter implies a denial-of-service could 
> take place, by flooding the processor with flow files and not letting it do 
> its original job of querying the table, keeping track of maximum values, etc.
> This is likely a breaking change to the processors because of how state 
> management is implemented. Currently since the table name is hard coded, only 
> the column name comprises the key in the state. This would have to be 
> extended to have a compound key that represents table name, max-value column 
> name, etc.



[jira] [Updated] (NIFI-3335) GenerateTableFetch should allow you to specify an initial Max Value

2017-01-31 Thread Koji Kawamura (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-3335?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Koji Kawamura updated NIFI-3335:

Description: 
NIFI-2583 added the ability (via dynamic properties) to specify initial Max 
Values for columns, to enable the user to "pick up where they left off" if 
something happened with a flow, a NiFi instance, etc. where the state was 
stored but the processing did not complete successfully.

This feature would also be helpful in GenerateTableFetch, which also supports 
max-value columns.

Since NIFI-2881 adds incoming flow file support, it's more useful if Initial 
max values can be specified via flow file attributes. Because if a table name 
is dynamically passed via flow file attribute and Expression Language, user 
won't be able to configure dynamic processor attribute in advance for each 
possible table.

Add dynamic properties ('initial.maxvalue.<max_value_column>', same as 
QueryDatabaseTable) to specify initial max values statically, and also use 
incoming flow file attributes named 'initial.maxvalue.<max_value_column>' if 
any. 

  was:
NIFI-2583 added the ability (via dynamic properties) to specify initial Max 
Values for columns, to enable the user to "pick up where they left off" if 
something happened with a flow, a NiFi instance, etc. where the state was 
stored but the processing did not complete successfully.

This feature would also be helpful in GenerateTableFetch, which also supports 
max-value columns.

Since NIFI-2881 adds incoming flow file support, it's more useful if Initial 
max values can be specified via flow file attributes. Because if a table name 
is dynamically passed via flow file attribute and Expression Language, user 
won't be able to configure dynamic processor attribute in advance for each 
possible table.

Add dynamic properties ('initial.maxvalue.{max_value_column}' same as 
QueryDatabaseTable) to specify initial max values statically, and also use 
incoming flow file attributes named 'initial.maxvalue.{max_value_column}' if 
any. 


> GenerateTableFetch should allow you to specify an initial Max Value
> ---
>
> Key: NIFI-3335
> URL: https://issues.apache.org/jira/browse/NIFI-3335
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Matt Burgess
>
> NIFI-2583 added the ability (via dynamic properties) to specify initial Max 
> Values for columns, to enable the user to "pick up where they left off" if 
> something happened with a flow, a NiFi instance, etc. where the state was 
> stored but the processing did not complete successfully.
> This feature would also be helpful in GenerateTableFetch, which also supports 
> max-value columns.
> Since NIFI-2881 adds incoming flow file support, it's more useful if Initial 
> max values can be specified via flow file attributes. Because if a table name 
> is dynamically passed via flow file attribute and Expression Language, user 
> won't be able to configure dynamic processor attribute in advance for each 
> possible table.
> Add dynamic properties ('initial.maxvalue.<max_value_column>', same as 
> QueryDatabaseTable) to specify initial max values statically, and also use 
> incoming flow file attributes named 'initial.maxvalue.<max_value_column>' if 
> any. 
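A minimal sketch of how statically-configured dynamic properties and 
per-flow-file attributes could be merged under the 'initial.maxvalue.' prefix 
described above; the class and method names are hypothetical:

```java
import java.util.HashMap;
import java.util.Map;

public class InitialMaxValueSketch {

    public static final String PREFIX = "initial.maxvalue.";

    // Collect initial max values keyed by column name. Flow file attributes
    // take precedence over statically-configured dynamic properties, since the
    // table name itself may arrive dynamically via Expression Language and the
    // user cannot configure a processor property per table in advance.
    public static Map<String, String> resolve(Map<String, String> dynamicProps,
                                              Map<String, String> flowFileAttrs) {
        Map<String, String> maxValues = new HashMap<>();
        dynamicProps.forEach((k, v) -> {
            if (k.startsWith(PREFIX)) maxValues.put(k.substring(PREFIX.length()), v);
        });
        flowFileAttrs.forEach((k, v) -> {
            if (k.startsWith(PREFIX)) maxValues.put(k.substring(PREFIX.length()), v);
        });
        return maxValues;
    }

    public static void main(String[] args) {
        Map<String, String> props = Map.of("initial.maxvalue.id", "100");
        Map<String, String> attrs = Map.of(
                "initial.maxvalue.id", "500",
                "initial.maxvalue.updated_at", "2017-01-01");
        System.out.println(resolve(props, attrs));
    }
}
```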





[jira] [Commented] (NIFI-2881) Allow Database Fetch processor(s) to accept incoming flow files and use Expression Language

2017-01-31 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-2881?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15847792#comment-15847792
 ] 

ASF GitHub Bot commented on NIFI-2881:
--

Github user ijokarumawak commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1407#discussion_r98807002
  
--- Diff: nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/QueryDatabaseTable.java ---
@@ -212,9 +215,21 @@ public void onTrigger(final ProcessContext context, final ProcessSessionFactory
         final Map<String, String> statePropertyMap = new HashMap<>(stateMap.toMap());
 
         //If an initial max value for column(s) has been specified using properties, and this column is not in the state manager, sync them to the state property map
-        for(final Map.Entry<String, String> maxProp : maxValueProperties.entrySet()){
-            if (!statePropertyMap.containsKey(maxProp.getKey().toLowerCase())) {
-                statePropertyMap.put(maxProp.getKey().toLowerCase(), maxProp.getValue());
+        for (final Map.Entry<String, String> maxProp : maxValueProperties.entrySet()) {
--- End diff --

Sure, I've updated the NIFI-3335 description.


> Allow Database Fetch processor(s) to accept incoming flow files and use 
> Expression Language
> ---
>
> Key: NIFI-2881
> URL: https://issues.apache.org/jira/browse/NIFI-2881
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Matt Burgess
>Assignee: Matt Burgess
>
> The QueryDatabaseTable and GenerateTableFetch processors do not allow 
> Expression Language to be used in the properties, mainly because they also do 
> not allow incoming connections. This means if the user desires to fetch from 
> multiple tables, they currently need one instance of the processor for each 
> table, and those table names must be hard-coded.
> To support the same capabilities for multiple tables and more flexible 
> configuration via Expression Language, these processors should have 
> properties that accept Expression Language, and GenerateTableFetch should 
> accept (optional) incoming connections.
> Conversation about the behavior of the processors is welcomed and encouraged. 
> For example, if an incoming flow file is available, do we also still run the 
> incremental fetch logic for tables that aren't specified by this flow file, 
> or do we just do incremental fetching when the processor is scheduled but 
> there is no incoming flow file. The latter implies a denial-of-service could 
> take place, by flooding the processor with flow files and not letting it do 
> its original job of querying the table, keeping track of maximum values, etc.
> This is likely a breaking change to the processors because of how state 
> management is implemented. Currently since the table name is hard coded, only 
> the column name comprises the key in the state. This would have to be 
> extended to have a compound key that represents table name, max-value column 
> name, etc.







[jira] [Updated] (NIFI-3335) GenerateTableFetch should allow you to specify an initial Max Value

2017-01-31 Thread Koji Kawamura (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-3335?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Koji Kawamura updated NIFI-3335:

Description: 
NIFI-2583 added the ability (via dynamic properties) to specify initial Max 
Values for columns, to enable the user to "pick up where they left off" if 
something happened with a flow, a NiFi instance, etc. where the state was 
stored but the processing did not complete successfully.

This feature would also be helpful in GenerateTableFetch, which also supports 
max-value columns.

Since NIFI-2881 adds incoming flow file support, it's more useful if Initial 
max values can be specified via flow file attributes. Because if a table name 
is dynamically passed via flow file attribute and Expression Language, user 
won't be able to configure dynamic processor attribute in advance for each 
possible table.

Add dynamic properties ('initial.maxvalue.{max_value_column}' same as 
QueryDatabaseTable) to specify initial max values statically, and also use 
incoming flow file attributes named 'initial.maxvalue.{max_value_column}' if 
any. 

  was:
NIFI-2583 added the ability (via dynamic properties) to specify initial Max 
Values for columns, to enable the user to "pick up where they left off" if 
something happened with a flow, a NiFi instance, etc. where the state was 
stored but the processing did not complete successfully.

This feature would also be helpful in GenerateTableFetch, which also supports 
max-value columns.


> GenerateTableFetch should allow you to specify an initial Max Value
> ---
>
> Key: NIFI-3335
> URL: https://issues.apache.org/jira/browse/NIFI-3335
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Matt Burgess
>
> NIFI-2583 added the ability (via dynamic properties) to specify initial Max 
> Values for columns, to enable the user to "pick up where they left off" if 
> something happened with a flow, a NiFi instance, etc. where the state was 
> stored but the processing did not complete successfully.
> This feature would also be helpful in GenerateTableFetch, which also supports 
> max-value columns.
> Since NIFI-2881 adds incoming flow file support, it's more useful if Initial 
> max values can be specified via flow file attributes. Because if a table name 
> is dynamically passed via flow file attribute and Expression Language, user 
> won't be able to configure dynamic processor attribute in advance for each 
> possible table.
> Add dynamic properties ('initial.maxvalue.{max_value_column}' same as 
> QueryDatabaseTable) to specify initial max values statically, and also use 
> incoming flow file attributes named 'initial.maxvalue.{max_value_column}' if 
> any. 





[jira] [Updated] (NIFI-3424) CLONE for 0.x - Unable to generate Provenance Event because FlowFile UUID is not set

2017-01-31 Thread Michael Moser (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-3424?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Moser updated NIFI-3424:

Fix Version/s: 0.8.0
   Status: Patch Available  (was: Open)

I tested the 0.x branch and confirmed the bug exists. After applying this patch 
to 0.x, I verified that the bug is fixed.

> CLONE for 0.x - Unable to generate Provenance Event because FlowFile UUID is 
> not set
> 
>
> Key: NIFI-3424
> URL: https://issues.apache.org/jira/browse/NIFI-3424
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 0.7.1
>Reporter: Michael Moser
>Assignee: Michael Moser
> Fix For: 0.8.0
>
>
> If I view a Provenance Event and click the Replay button, all works as 
> expected, if NiFi continues to run. However, if I replay a FlowFile and leave 
> the FlowFile in the queue while NiFi is restarted, upon restart I see the 
> following error in the log when trying to process the FlowFile:
> 2016-11-15 12:40:21,658 ERROR [Timer-Driven Process Thread-4] 
> o.a.n.c.r.StandardProvenanceReporter Failed to generate Provenance Event due 
> to java.lang.IllegalStateException: Cannot create Provenance Event Record 
> because FlowFile UUID is not set
> 2016-11-15 12:40:21,664 WARN [Timer-Driven Process Thread-4] 
> o.a.n.c.r.StandardProcessSession Unable to generate Provenance Event for 
> StandardFlowFileRecord[uuid=,claim=StandardContentClaim 
> [resourceClaim=StandardResourceClaim[id=1479231329248-1, container=default, 
> section=1], offset=0, length=10240],offset=0,name=,size=10240] on 
> behalf of UpdateAttribute[id=69080060-0158-1000-4b41-9b0e329b0c59] due to {}
> java.lang.IllegalStateException: Cannot create Provenance Event Record 
> because FlowFile UUID is not set
> at 
> org.apache.nifi.provenance.StandardProvenanceEventRecord$Builder.assertSet(StandardProvenanceEventRecord.java:723)
>  ~[nifi-data-provenance-utils-1.1.0-SNAPSHOT.jar:1.1.0-SNAPSHOT]
> at 
> org.apache.nifi.provenance.StandardProvenanceEventRecord$Builder.build(StandardProvenanceEventRecord.java:744)
>  ~[nifi-data-provenance-utils-1.1.0-SNAPSHOT.jar:1.1.0-SNAPSHOT]
> at 
> org.apache.nifi.provenance.StandardProvenanceEventRecord$Builder.build(StandardProvenanceEventRecord.java:401)
>  ~[nifi-data-provenance-utils-1.1.0-SNAPSHOT.jar:1.1.0-SNAPSHOT]
> at 
> org.apache.nifi.controller.repository.StandardProvenanceReporter.generateDropEvent(StandardProvenanceReporter.java:104)
>  ~[nifi-framework-core-1.1.0-SNAPSHOT.jar:1.1.0-SNAPSHOT]
> at 
> org.apache.nifi.controller.repository.StandardProcessSession.checkpoint(StandardProcessSession.java:255)
>  [nifi-framework-core-1.1.0-SNAPSHOT.jar:1.1.0-SNAPSHOT]
> at 
> org.apache.nifi.controller.repository.StandardProcessSession.commit(StandardProcessSession.java:304)
>  [nifi-framework-core-1.1.0-SNAPSHOT.jar:1.1.0-SNAPSHOT]
> at 
> org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:28)
>  [nifi-api-1.1.0-SNAPSHOT.jar:1.1.0-SNAPSHOT]
> at 
> org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1089)
>  [nifi-framework-core-1.1.0-SNAPSHOT.jar:1.1.0-SNAPSHOT]
> at 
> org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:136)
>  [nifi-framework-core-1.1.0-SNAPSHOT.jar:1.1.0-SNAPSHOT]
> at 
> org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:47)
>  [nifi-framework-core-1.1.0-SNAPSHOT.jar:1.1.0-SNAPSHOT]
> at 
> org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:132)
>  [nifi-framework-core-1.1.0-SNAPSHOT.jar:1.1.0-SNAPSHOT]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> [na:1.8.0_60]
> at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) 
> [na:1.8.0_60]
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
>  [na:1.8.0_60]
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
>  [na:1.8.0_60]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  [na:1.8.0_60]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  [na:1.8.0_60]
> at java.lang.Thread.run(Thread.java:745) [na:1.8.0_60]





[jira] [Commented] (NIFI-3424) CLONE for 0.x - Unable to generate Provenance Event because FlowFile UUID is not set

2017-01-31 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3424?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15847674#comment-15847674
 ] 

ASF GitHub Bot commented on NIFI-3424:
--

GitHub user mosermw opened a pull request:

https://github.com/apache/nifi/pull/1459

NIFI-3424: NIFI-3040: Fixed bug where we were generating a …

…RepositoryRecord with an 'UPDATE' type instead of a 'CREATE' type for 
Replay of FlowFiles. This caused the FlowFile to have no attributes when restored 
from the FlowFile Repository.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/mosermw/nifi NIFI-3424

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/1459.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1459


commit a73ee10df5021f83a35b53eec71468da40f8cc5d
Author: Mark Payne 
Date:   2016-11-15T17:59:02Z

NIFI-3424: NIFI-3040: Fixed bug where we were generating a RepositoryRecord 
with an 'UPDATE' type instead of a 'CREATE' type for Replay of FlowFiles. This 
caused the FlowFile to have no attributes when restored from the FlowFile 
Repository.




> CLONE for 0.x - Unable to generate Provenance Event because FlowFile UUID is 
> not set
> 
>
> Key: NIFI-3424
> URL: https://issues.apache.org/jira/browse/NIFI-3424
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 0.7.1
>Reporter: Michael Moser
>Assignee: Michael Moser
>
> If I view a Provenance Event and click the Replay button, all works as 
> expected, if NiFi continues to run. However, if I replay a FlowFile and leave 
> the FlowFile in the queue while NiFi is restarted, upon restart I see the 
> following error in the log when trying to process the FlowFile:
> 2016-11-15 12:40:21,658 ERROR [Timer-Driven Process Thread-4] 
> o.a.n.c.r.StandardProvenanceReporter Failed to generate Provenance Event due 
> to java.lang.IllegalStateException: Cannot create Provenance Event Record 
> because FlowFile UUID is not set
> 2016-11-15 12:40:21,664 WARN [Timer-Driven Process Thread-4] 
> o.a.n.c.r.StandardProcessSession Unable to generate Provenance Event for 
> StandardFlowFileRecord[uuid=,claim=StandardContentClaim 
> [resourceClaim=StandardResourceClaim[id=1479231329248-1, container=default, 
> section=1], offset=0, length=10240],offset=0,name=,size=10240] on 
> behalf of UpdateAttribute[id=69080060-0158-1000-4b41-9b0e329b0c59] due to {}
> java.lang.IllegalStateException: Cannot create Provenance Event Record 
> because FlowFile UUID is not set
> at 
> org.apache.nifi.provenance.StandardProvenanceEventRecord$Builder.assertSet(StandardProvenanceEventRecord.java:723)
>  ~[nifi-data-provenance-utils-1.1.0-SNAPSHOT.jar:1.1.0-SNAPSHOT]
> at 
> org.apache.nifi.provenance.StandardProvenanceEventRecord$Builder.build(StandardProvenanceEventRecord.java:744)
>  ~[nifi-data-provenance-utils-1.1.0-SNAPSHOT.jar:1.1.0-SNAPSHOT]
> at 
> org.apache.nifi.provenance.StandardProvenanceEventRecord$Builder.build(StandardProvenanceEventRecord.java:401)
>  ~[nifi-data-provenance-utils-1.1.0-SNAPSHOT.jar:1.1.0-SNAPSHOT]
> at 
> org.apache.nifi.controller.repository.StandardProvenanceReporter.generateDropEvent(StandardProvenanceReporter.java:104)
>  ~[nifi-framework-core-1.1.0-SNAPSHOT.jar:1.1.0-SNAPSHOT]
> at 
> org.apache.nifi.controller.repository.StandardProcessSession.checkpoint(StandardProcessSession.java:255)
>  [nifi-framework-core-1.1.0-SNAPSHOT.jar:1.1.0-SNAPSHOT]
> at 
> org.apache.nifi.controller.repository.StandardProcessSession.commit(StandardProcessSession.java:304)
>  [nifi-framework-core-1.1.0-SNAPSHOT.jar:1.1.0-SNAPSHOT]
> at 
> org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:28)
>  [nifi-api-1.1.0-SNAPSHOT.jar:1.1.0-SNAPSHOT]
> at 
> org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1089)
>  [nifi-framework-core-1.1.0-SNAPSHOT.jar:1.1.0-SNAPSHOT]
> at 
> org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:136)
>  [nifi-framework-core-1.1.0-SNAPSHOT.jar:1.1.0-SNAPSHOT]
> at 
> org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:47)
>  [nifi-framework-core-1.1.0-SNAPSHOT.jar:1.1.0-SNAPSHOT]
> at 
> org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:132)
>  [nifi-framework-core-1.1.0-SNAPSHOT.jar:1.1.0-SNAPSHOT]
> at 
> 



[jira] [Reopened] (NIFI-3015) NiFi service starts from root user after installation

2017-01-31 Thread Andre (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-3015?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andre reopened NIFI-3015:
-

> NiFi service starts from root user after installation
> -
>
> Key: NIFI-3015
> URL: https://issues.apache.org/jira/browse/NIFI-3015
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Configuration
>Affects Versions: 1.1.0
> Environment: Centos 7.2
>Reporter: Artem Yermakov
>Assignee: Andre
>Priority: Critical
>
> When NiFi is installed using the command nifi.sh install and then started 
> with the command service nifi start, NiFi runs as the root user.
> I suggest running it as the nifi user, which is created during the RPM 
> installation.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Assigned] (NIFI-3015) NiFi service starts from root user after installation

2017-01-31 Thread Andre (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-3015?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andre reassigned NIFI-3015:
---

Assignee: Joseph Witt  (was: Andre)

> NiFi service starts from root user after installation
> -
>
> Key: NIFI-3015
> URL: https://issues.apache.org/jira/browse/NIFI-3015
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Configuration
>Affects Versions: 1.1.0
> Environment: Centos 7.2
>Reporter: Artem Yermakov
>Assignee: Joseph Witt
>Priority: Critical
>
> When NiFi is installed using the command nifi.sh install and then started 
> with the command service nifi start, NiFi runs as the root user.
> I suggest running it as the nifi user, which is created during the RPM 
> installation.





[jira] [Updated] (NIFI-3015) NiFi service starts from root user after installation

2017-01-31 Thread Andre (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-3015?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andre updated NIFI-3015:

Status: Patch Available  (was: Reopened)

> NiFi service starts from root user after installation
> -
>
> Key: NIFI-3015
> URL: https://issues.apache.org/jira/browse/NIFI-3015
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Configuration
>Affects Versions: 1.1.0
> Environment: Centos 7.2
>Reporter: Artem Yermakov
>Assignee: Andre
>Priority: Critical
>
> When NiFi is installed using the command nifi.sh install and then started 
> with the command service nifi start, NiFi runs as the root user.
> I suggest running it as the nifi user, which is created during the RPM 
> installation.





[jira] (NIFI-3424) CLONE for 0.x - Unable to generate Provenance Event because FlowFile UUID is not set

2017-01-31 Thread Michael Moser (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3424?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15847538#comment-15847538
 ] 

Michael Moser commented on NIFI-3424:
-

I would like to port the bug fix from [~markap14] in master over to the 0.x 
branch.

> CLONE for 0.x - Unable to generate Provenance Event because FlowFile UUID is 
> not set
> 
>
> Key: NIFI-3424
> URL: https://issues.apache.org/jira/browse/NIFI-3424
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 0.7.1
>Reporter: Michael Moser
>Assignee: Michael Moser
>
> If I view a Provenance Event and click the Replay button, all works as 
> expected as long as NiFi continues to run. However, if I replay a FlowFile 
> and leave the FlowFile in the queue while NiFi is restarted, upon restart I 
> see the following error in the log when trying to process the FlowFile:
> 2016-11-15 12:40:21,658 ERROR [Timer-Driven Process Thread-4] 
> o.a.n.c.r.StandardProvenanceReporter Failed to generate Provenance Event due 
> to java.lang.IllegalStateException: Cannot create Provenance Event Record 
> because FlowFile UUID is not set
> 2016-11-15 12:40:21,664 WARN [Timer-Driven Process Thread-4] 
> o.a.n.c.r.StandardProcessSession Unable to generate Provenance Event for 
> StandardFlowFileRecord[uuid=,claim=StandardContentClaim 
> [resourceClaim=StandardResourceClaim[id=1479231329248-1, container=default, 
> section=1], offset=0, length=10240],offset=0,name=,size=10240] on 
> behalf of UpdateAttribute[id=69080060-0158-1000-4b41-9b0e329b0c59] due to {}
> java.lang.IllegalStateException: Cannot create Provenance Event Record 
> because FlowFile UUID is not set
> at 
> org.apache.nifi.provenance.StandardProvenanceEventRecord$Builder.assertSet(StandardProvenanceEventRecord.java:723)
>  ~[nifi-data-provenance-utils-1.1.0-SNAPSHOT.jar:1.1.0-SNAPSHOT]
> at 
> org.apache.nifi.provenance.StandardProvenanceEventRecord$Builder.build(StandardProvenanceEventRecord.java:744)
>  ~[nifi-data-provenance-utils-1.1.0-SNAPSHOT.jar:1.1.0-SNAPSHOT]
> at 
> org.apache.nifi.provenance.StandardProvenanceEventRecord$Builder.build(StandardProvenanceEventRecord.java:401)
>  ~[nifi-data-provenance-utils-1.1.0-SNAPSHOT.jar:1.1.0-SNAPSHOT]
> at 
> org.apache.nifi.controller.repository.StandardProvenanceReporter.generateDropEvent(StandardProvenanceReporter.java:104)
>  ~[nifi-framework-core-1.1.0-SNAPSHOT.jar:1.1.0-SNAPSHOT]
> at 
> org.apache.nifi.controller.repository.StandardProcessSession.checkpoint(StandardProcessSession.java:255)
>  [nifi-framework-core-1.1.0-SNAPSHOT.jar:1.1.0-SNAPSHOT]
> at 
> org.apache.nifi.controller.repository.StandardProcessSession.commit(StandardProcessSession.java:304)
>  [nifi-framework-core-1.1.0-SNAPSHOT.jar:1.1.0-SNAPSHOT]
> at 
> org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:28)
>  [nifi-api-1.1.0-SNAPSHOT.jar:1.1.0-SNAPSHOT]
> at 
> org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1089)
>  [nifi-framework-core-1.1.0-SNAPSHOT.jar:1.1.0-SNAPSHOT]
> at 
> org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:136)
>  [nifi-framework-core-1.1.0-SNAPSHOT.jar:1.1.0-SNAPSHOT]
> at 
> org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:47)
>  [nifi-framework-core-1.1.0-SNAPSHOT.jar:1.1.0-SNAPSHOT]
> at 
> org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:132)
>  [nifi-framework-core-1.1.0-SNAPSHOT.jar:1.1.0-SNAPSHOT]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> [na:1.8.0_60]
> at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) 
> [na:1.8.0_60]
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
>  [na:1.8.0_60]
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
>  [na:1.8.0_60]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  [na:1.8.0_60]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  [na:1.8.0_60]
> at java.lang.Thread.run(Thread.java:745) [na:1.8.0_60]





[jira] (NIFI-3424) CLONE for 0.x - Unable to generate Provenance Event because FlowFile UUID is not set

2017-01-31 Thread Michael Moser (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-3424?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Moser reassigned NIFI-3424:
---

Assignee: Michael Moser  (was: Mark Payne)

> CLONE for 0.x - Unable to generate Provenance Event because FlowFile UUID is 
> not set
> 
>
> Key: NIFI-3424
> URL: https://issues.apache.org/jira/browse/NIFI-3424
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 0.7.1
>Reporter: Michael Moser
>Assignee: Michael Moser
>





[jira] (NIFI-3424) CLONE for 0.x - Unable to generate Provenance Event because FlowFile UUID is not set

2017-01-31 Thread Michael Moser (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-3424?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Moser updated NIFI-3424:

Fix Version/s: (was: 1.1.0)

> CLONE for 0.x - Unable to generate Provenance Event because FlowFile UUID is 
> not set
> 
>
> Key: NIFI-3424
> URL: https://issues.apache.org/jira/browse/NIFI-3424
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 0.7.1
>Reporter: Michael Moser
>Assignee: Mark Payne
>





[jira] (NIFI-3424) CLONE for 0.x - Unable to generate Provenance Event because FlowFile UUID is not set

2017-01-31 Thread Michael Moser (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-3424?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Moser updated NIFI-3424:

Affects Version/s: 0.7.1

> CLONE for 0.x - Unable to generate Provenance Event because FlowFile UUID is 
> not set
> 
>
> Key: NIFI-3424
> URL: https://issues.apache.org/jira/browse/NIFI-3424
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 0.7.1
>Reporter: Michael Moser
>Assignee: Mark Payne
>





[jira] (NIFI-3424) CLONE for 0.x - Unable to generate Provenance Event because FlowFile UUID is not set

2017-01-31 Thread Michael Moser (JIRA)
Michael Moser created NIFI-3424:
---

 Summary: CLONE for 0.x - Unable to generate Provenance Event 
because FlowFile UUID is not set
 Key: NIFI-3424
 URL: https://issues.apache.org/jira/browse/NIFI-3424
 Project: Apache NiFi
  Issue Type: Bug
  Components: Core Framework
Reporter: Michael Moser
Assignee: Mark Payne
 Fix For: 1.1.0


If I view a Provenance Event and click the Replay button, all works as 
expected as long as NiFi continues to run. However, if I replay a FlowFile and 
leave the FlowFile in the queue while NiFi is restarted, upon restart I see the 
following error in the log when trying to process the FlowFile:

2016-11-15 12:40:21,658 ERROR [Timer-Driven Process Thread-4] 
o.a.n.c.r.StandardProvenanceReporter Failed to generate Provenance Event due to 
java.lang.IllegalStateException: Cannot create Provenance Event Record because 
FlowFile UUID is not set
2016-11-15 12:40:21,664 WARN [Timer-Driven Process Thread-4] 
o.a.n.c.r.StandardProcessSession Unable to generate Provenance Event for 
StandardFlowFileRecord[uuid=,claim=StandardContentClaim 
[resourceClaim=StandardResourceClaim[id=1479231329248-1, container=default, 
section=1], offset=0, length=10240],offset=0,name=,size=10240] on behalf 
of UpdateAttribute[id=69080060-0158-1000-4b41-9b0e329b0c59] due to {}
java.lang.IllegalStateException: Cannot create Provenance Event Record because 
FlowFile UUID is not set
at 
org.apache.nifi.provenance.StandardProvenanceEventRecord$Builder.assertSet(StandardProvenanceEventRecord.java:723)
 ~[nifi-data-provenance-utils-1.1.0-SNAPSHOT.jar:1.1.0-SNAPSHOT]
at 
org.apache.nifi.provenance.StandardProvenanceEventRecord$Builder.build(StandardProvenanceEventRecord.java:744)
 ~[nifi-data-provenance-utils-1.1.0-SNAPSHOT.jar:1.1.0-SNAPSHOT]
at 
org.apache.nifi.provenance.StandardProvenanceEventRecord$Builder.build(StandardProvenanceEventRecord.java:401)
 ~[nifi-data-provenance-utils-1.1.0-SNAPSHOT.jar:1.1.0-SNAPSHOT]
at 
org.apache.nifi.controller.repository.StandardProvenanceReporter.generateDropEvent(StandardProvenanceReporter.java:104)
 ~[nifi-framework-core-1.1.0-SNAPSHOT.jar:1.1.0-SNAPSHOT]
at 
org.apache.nifi.controller.repository.StandardProcessSession.checkpoint(StandardProcessSession.java:255)
 [nifi-framework-core-1.1.0-SNAPSHOT.jar:1.1.0-SNAPSHOT]
at 
org.apache.nifi.controller.repository.StandardProcessSession.commit(StandardProcessSession.java:304)
 [nifi-framework-core-1.1.0-SNAPSHOT.jar:1.1.0-SNAPSHOT]
at 
org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:28)
 [nifi-api-1.1.0-SNAPSHOT.jar:1.1.0-SNAPSHOT]
at 
org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1089)
 [nifi-framework-core-1.1.0-SNAPSHOT.jar:1.1.0-SNAPSHOT]
at 
org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:136)
 [nifi-framework-core-1.1.0-SNAPSHOT.jar:1.1.0-SNAPSHOT]
at 
org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:47)
 [nifi-framework-core-1.1.0-SNAPSHOT.jar:1.1.0-SNAPSHOT]
at 
org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:132)
 [nifi-framework-core-1.1.0-SNAPSHOT.jar:1.1.0-SNAPSHOT]
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
[na:1.8.0_60]
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) 
[na:1.8.0_60]
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
 [na:1.8.0_60]
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
 [na:1.8.0_60]
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
[na:1.8.0_60]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
[na:1.8.0_60]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_60]





[jira] (NIFI-3423) List based processors don't support source directories with high file count.

2017-01-31 Thread Matthew Clarke (JIRA)
Matthew Clarke created NIFI-3423:


 Summary: List based processors don't support source directories 
with high file count.
 Key: NIFI-3423
 URL: https://issues.apache.org/jira/browse/NIFI-3423
 Project: Apache NiFi
  Issue Type: Improvement
  Components: Core Framework
Affects Versions: 1.1.1
Reporter: Matthew Clarke


NiFi FlowFile attributes/metadata live in heap.  The list-based processors 
return a complete listing from the target and then create a FlowFile for each 
file in that returned listing. The FlowFiles being created are not committed to 
the list processor's success relationship until all of them have been created, 
so when the returned listing is very large, NiFi runs out of JVM heap memory 
before that can happen.

It would be nice if the list-based processors could commit batches (e.g. 
10,000) of FlowFiles at a time from the returned listing, instead of trying to 
commit them all at once, to help avoid heap exhaustion.





[jira] (NIFI-3422) Run Once Scheduling

2017-01-31 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3422?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15847267#comment-15847267
 ] 

ASF GitHub Bot commented on NIFI-3422:
--

GitHub user NazIrizarry opened a pull request:

https://github.com/apache/nifi/pull/1458

NIFI-3422 Run Once Scheduling

Thank you for submitting a contribution to Apache NiFi.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [x] Is there a JIRA ticket associated with this PR? Is it referenced 
 in the commit message?

- [x] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number 
you are trying to resolve? Pay particular attention to the hyphen "-" character.

- [x] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [x] Is your initial contribution a single, squashed commit?

### For code changes:
- [x] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi folder?
- [x] Have you written or updated unit tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
- [ ] If applicable, have you updated the LICENSE file, including the main 
LICENSE file under nifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly?
- [ ] If adding new Properties, have you added .displayName in addition to 
.name (programmatic access) for each of the new properties?

### For documentation related changes:
- [x] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/NazIrizarry/nifi NIFI-3422

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/1458.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1458


commit 9c298395900e1840752fed7e7af191fbab497a45
Author: Naz Irizarry 
Date:   2017-01-31T18:33:32Z

- NIFI-3422 - Initial run once scheduling feature




> Run Once Scheduling
> ---
>
> Key: NIFI-3422
> URL: https://issues.apache.org/jira/browse/NIFI-3422
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Core Framework, Core UI
>Affects Versions: 1.2.0
>Reporter: Nazario
>Priority: Minor
>  Labels: features
> Fix For: 1.2.0
>
>
> A run once scheduling option allows a Processor to run once and then 
> automatically stop.  This is convenient when developing and debugging flows, 
> or when building "visual scripts" for ad-hoc process integration or iterative 
> analytics. Individual processors set to 'Run Once' can be selected on the 
> canvas with a shift-click.  Clicking 'Start' on the Operate Palette will then 
> start those processors, which will run once and stop.  One can then modify 
> processing parameters and repeat.  Interactive analytics in particular 
> benefit from this scheduling mode, as they often require human review of 
> results at the end of a flow, followed by adjustment of flow parameters, 
> before running the next analytic flow.





[GitHub] nifi pull request #1458: NIFI-3422 Run Once Scheduling

2017-01-31 Thread NazIrizarry
GitHub user NazIrizarry opened a pull request:

https://github.com/apache/nifi/pull/1458

NIFI-3422 Run Once Scheduling

Thank you for submitting a contribution to Apache NiFi.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [x] Is there a JIRA ticket associated with this PR? Is it referenced 
 in the commit message?

- [x] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number 
you are trying to resolve? Pay particular attention to the hyphen "-" character.

- [x] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [x] Is your initial contribution a single, squashed commit?

### For code changes:
- [x] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi folder?
- [x] Have you written or updated unit tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
- [ ] If applicable, have you updated the LICENSE file, including the main 
LICENSE file under nifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly?
- [ ] If adding new Properties, have you added .displayName in addition to 
.name (programmatic access) for each of the new properties?

### For documentation related changes:
- [x] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/NazIrizarry/nifi NIFI-3422

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/1458.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1458


commit 9c298395900e1840752fed7e7af191fbab497a45
Author: Naz Irizarry 
Date:   2017-01-31T18:33:32Z

- NIFI-3422 - Initial run once scheduling feature




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] (NIFI-3300) Zookeeper Migrator should allow importing of data to a new root node

2017-01-31 Thread Jeff Storck (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-3300?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Storck updated NIFI-3300:
--
Status: Patch Available  (was: Open)

> Zookeeper Migrator should allow importing of data to a new root node
> 
>
> Key: NIFI-3300
> URL: https://issues.apache.org/jira/browse/NIFI-3300
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Tools and Build
>Affects Versions: 1.1.1
>Reporter: Jeff Storck
>Assignee: Jeff Storck
>Priority: Minor
>
> ZooKeeper Migrator exports data from ZooKeeper using the absolute path from 
> the root of ZooKeeper.  This prevents the importing of data to a new root 
> node for NiFi, since the path given during the import will have the entire 
> path of the exported data appended to the new root.
> For example, if "/nifi/components" is exported from a ZooKeeper server, the 
> exported data will include the "/nifi/components" path.  When that data is 
> imported to a different ZooKeeper server where the root NiFi node is 
> "/nifi2", and the user imports that data to "/nifi2", nodes will be created 
> under "/nifi2/nifi/components".
> The ZooKeeper Migrator should export data in such a way that, with the given 
> example, the source nodes under "/nifi/components" should be exported without 
> the "/nifi/components" portion of the path, so that those nodes could be 
> imported to the destination root path, such as "/nifi2/components".
> The ZooKeeper client's "chroot" capability should be used in favor of the 
> custom pathing code in the ZooKeeper Migrator.
> This will require documentation updates in the ZooKeeper Migrator section of 
> the System Administration Guide.
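
The ZooKeeper Java client supports chroot directly via a suffix on the connect
string (e.g. `host:2181/nifi2`), after which all paths are resolved relative to
that root. The path re-rooting described above amounts to plain string
manipulation; the sketch below uses illustrative names only, not the Migrator's
actual code:

```java
public class PathReroot {

    // Strip the exported base path and graft the remainder onto the new root,
    // e.g. ("/nifi/components/abc", "/nifi/components", "/nifi2/components")
    //      -> "/nifi2/components/abc"
    static String reroot(String exportedPath, String exportBase, String newRoot) {
        if (!exportedPath.startsWith(exportBase)) {
            throw new IllegalArgumentException("Path not under export base: " + exportedPath);
        }
        String relative = exportedPath.substring(exportBase.length());
        return newRoot + relative;
    }

    public static void main(String[] args) {
        System.out.println(reroot("/nifi/components/abc", "/nifi/components", "/nifi2/components"));
    }
}
```

With a chrooted connection the `newRoot` prefix would be unnecessary, since the
client prepends it implicitly.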



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] (NIFI-3317) Under load, ThreadPoolRequestReplicator is experiencing SocketTimeoutExceptions

2017-01-31 Thread Jeff Storck (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-3317?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Storck resolved NIFI-3317.
---
Resolution: Not A Problem

Increasing socket timeouts resolves this; I was not able to determine a deeper 
issue.
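
For reference, the cluster request-replication timeouts live in
nifi.properties; raising them looks like the following (the values shown are
illustrative only, and the 1.x defaults are 5 sec):

```
# nifi.properties -- illustrative values, not a recommendation
nifi.cluster.node.connection.timeout=30 sec
nifi.cluster.node.read.timeout=30 sec
```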

> Under load, ThreadPoolRequestReplicator is experiencing 
> SocketTimeoutExceptions
> ---
>
> Key: NIFI-3317
> URL: https://issues.apache.org/jira/browse/NIFI-3317
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.1.0
>Reporter: Jeff Storck
>Assignee: Jeff Storck
>
> In a cluster under load (4 nodes, ~80 active threads per node) requests for 
> starting and stopping processors (and, for example, the timed UI thread 
> requesting status) can fail due to socket timeouts during request 
> replication.  Sometimes, this results in a dialog shown on the UI that has a 
> title of "Update Resource", but the dialog itself is empty.
> {code}2017-01-10 17:46:21,878 INFO [StandardProcessScheduler Thread-8] 
> o.a.n.c.s.TimerDrivenSchedulingAgent Stopped scheduling 
> GenerateFlowFile[id=87de49bc-0158-1000-6329-33d0dd20bd3a] to run
> 2017-01-10 17:46:21,921 WARN [Replicate Request Thread-9] 
> o.a.n.c.c.h.r.ThreadPoolRequestReplicator Failed to replicate request PUT 
> /nifi-api/processors/87dd1f69-0158-1000-0e85-aaf63af20c89 to m10wn03.local
> domain:9091 due to {}
> com.sun.jersey.api.client.ClientHandlerException: 
> java.net.SocketTimeoutException: Read timed out
> at 
> com.sun.jersey.client.urlconnection.URLConnectionClientHandler.handle(URLConnectionClientHandler.java:155)
>  ~[jersey-client-1.19.jar:1.19]
> at com.sun.jersey.api.client.Client.handle(Client.java:652) 
> ~[jersey-client-1.19.jar:1.19]
> at 
> com.sun.jersey.api.client.filter.GZIPContentEncodingFilter.handle(GZIPContentEncodingFilter.java:123)
>  ~[jersey-client-1.19.jar:1.19]
> at com.sun.jersey.api.client.WebResource.handle(WebResource.java:682) 
> ~[jersey-client-1.19.jar:1.19]
> at 
> com.sun.jersey.api.client.WebResource.access$200(WebResource.java:74) 
> ~[jersey-client-1.19.jar:1.19]
> at 
> com.sun.jersey.api.client.WebResource$Builder.put(WebResource.java:529) 
> ~[jersey-client-1.19.jar:1.19]
> at 
> org.apache.nifi.cluster.coordination.http.replication.ThreadPoolRequestReplicator.replicateRequest(ThreadPoolRequestReplicator.java:590)
>  ~[nifi-framework-cluster-1.1.0.jar:1.1.0]
> at 
> org.apache.nifi.cluster.coordination.http.replication.ThreadPoolRequestReplicator$NodeHttpRequest.run(ThreadPoolRequestReplicator.java:770)
>  ~[nifi-framework-cluster-1.1.0.jar:1.1.0]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> [na:1.8.0_111]
> at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
> [na:1.8.0_111]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  [na:1.8.0_111]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  [na:1.8.0_111]
> at java.lang.Thread.run(Thread.java:745) [na:1.8.0_111]
> Caused by: java.net.SocketTimeoutException: Read timed out
> at java.net.SocketInputStream.socketRead0(Native Method) 
> ~[na:1.8.0_111]
> at java.net.SocketInputStream.socketRead(SocketInputStream.java:116) 
> ~[na:1.8.0_111]
> at java.net.SocketInputStream.read(SocketInputStream.java:170) 
> ~[na:1.8.0_111]
> at java.net.SocketInputStream.read(SocketInputStream.java:141) 
> ~[na:1.8.0_111]
> at sun.security.ssl.InputRecord.readFully(InputRecord.java:465) 
> ~[na:1.8.0_111]
> at sun.security.ssl.InputRecord.read(InputRecord.java:503) 
> ~[na:1.8.0_111]
> at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:973) 
> ~[na:1.8.0_111]
> at 
> sun.security.ssl.SSLSocketImpl.readDataRecord(SSLSocketImpl.java:930) 
> ~[na:1.8.0_111]
> at sun.security.ssl.AppInputStream.read(AppInputStream.java:105) 
> ~[na:1.8.0_111]
> at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) 
> ~[na:1.8.0_111]
> at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) 
> ~[na:1.8.0_111]
> at java.io.BufferedInputStream.read(BufferedInputStream.java:345) 
> ~[na:1.8.0_111]
> at sun.net.www.http.HttpClient.parseHTTPHeader(HttpClient.java:704) 
> ~[na:1.8.0_111]
> at sun.net.www.http.HttpClient.parseHTTP(HttpClient.java:647) 
> ~[na:1.8.0_111]
> at 
> sun.net.www.protocol.http.HttpURLConnection.getInputStream0(HttpURLConnection.java:1569)
>  ~[na:1.8.0_111]
> at 
> sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1474)
>  ~[na:1.8.0_111]
> at 
> 

[jira] (NIFI-3420) NIFI Should support generating Hadoop-readable Lz4 outside of HDFS Write

2017-01-31 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3420?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15847220#comment-15847220
 ] 

ASF GitHub Bot commented on NIFI-3420:
--

Github user ilganeli closed the pull request at:

https://github.com/apache/nifi/pull/1457


> NIFI Should support generating Hadoop-readable Lz4 outside of HDFS Write
> 
>
> Key: NIFI-3420
> URL: https://issues.apache.org/jira/browse/NIFI-3420
> Project: Apache NiFi
>  Issue Type: New Feature
>Reporter: Ilya Ganelin
>
> Per https://issues.apache.org/jira/browse/HADOOP-12990 data stored in Lz4 
> format on Hadoop is in a different format from the data generated by the Lz4 
> CLI. The Lz4 CLI also cannot be used to generate the Hadoop-compatible 
> format. 
> At the moment, NiFi does not support compression to Lz4 for streaming data.
> Although PutHdfs in the Hadoop processors supports writing out Lz4 to HDFS 
> (assuming the appropriate codec exists), if data is instead being saved to 
> something like S3 or simply streamed, there's no way to generate Lz4 
> compressed data.
> If the Lz4 command line tool is used within a custom processor to perform Lz4 
> conversion, this data will then not be readable on Hadoop if it's 
> subsequently loaded to HDFS.
> A processor can be added that converts streaming data into the Lz4 format 
> that IS readable on Hadoop by using the Hadoop Lz4 Codec to perform the 
> compression. 
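
As a rough illustration of the Hadoop-codec approach described above (a sketch
only, not NiFi code: it assumes hadoop-common on the classpath and, as later
comments in this thread point out, the native libhadoop library with LZ4
support loaded -- which is exactly the blocker that led to closing the PR):

```java
import java.io.FileOutputStream;
import java.io.OutputStream;
import java.nio.charset.StandardCharsets;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.compress.Lz4Codec;

public class Lz4HadoopSketch {
    public static void main(String[] args) throws Exception {
        Lz4Codec codec = new Lz4Codec();
        codec.setConf(new Configuration());
        // Wraps the raw stream in Hadoop's block-LZ4 framing, which is what
        // makes the output readable back on HDFS (and unreadable by the lz4 CLI).
        try (OutputStream out = codec.createOutputStream(new FileOutputStream("data.lz4"))) {
            out.write("Hello NiFi".getBytes(StandardCharsets.UTF_8));
        }
    }
}
```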





[jira] (NIFI-3420) NIFI Should support generating Hadoop-readable Lz4 outside of HDFS Write

2017-01-31 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3420?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15847219#comment-15847219
 ] 

ASF GitHub Bot commented on NIFI-3420:
--

Github user ilganeli commented on the issue:

https://github.com/apache/nifi/pull/1457
  
I've added unit tests but am closing this issue for now. There is a 
substantial blocker to this approach since it leverages the classes from Hadoop 
which themselves depend on natively compiled and loaded C code. Unless NiFi 
explicitly adds the C-code for the Lz4 codec and manually builds and loads that 
library, we won't be able to use the Codec in Hadoop.

I've also evaluated using the lz4-java library instead but this does not 
generate data in a Hadoop readable format. 


> NIFI Should support generating Hadoop-readable Lz4 outside of HDFS Write
> 
>
> Key: NIFI-3420
> URL: https://issues.apache.org/jira/browse/NIFI-3420
> Project: Apache NiFi
>  Issue Type: New Feature
>Reporter: Ilya Ganelin
>
> Per https://issues.apache.org/jira/browse/HADOOP-12990 data stored in Lz4 
> format on Hadoop is in a different format from the data generated by the Lz4 
> CLI. The Lz4 CLI also cannot be used to generate the Hadoop-compatible 
> format. 
> At the moment, NiFi does not support compression to Lz4 for streaming data.
> Although PutHdfs in the Hadoop processors supports writing out Lz4 to HDFS 
> (assuming the appropriate codec exists), if data is instead being saved to 
> something like S3 or simply streamed, there's no way to generate Lz4 
> compressed data.
> If the Lz4 command line tool is used within a custom processor to perform Lz4 
> conversion, this data will then not be readable on Hadoop if it's 
> subsequently loaded to HDFS.
> A processor can be added that converts streaming data into the Lz4 format 
> that IS readable on Hadoop by using the Hadoop Lz4 Codec to perform the 
> compression. 





[GitHub] nifi issue #1457: NIFI-3420 Add Hadoop-readable Lz4 compression

2017-01-31 Thread ilganeli
Github user ilganeli commented on the issue:

https://github.com/apache/nifi/pull/1457
  
I've added unit tests but am closing this issue for now. There is a 
substantial blocker to this approach since it leverages the classes from Hadoop 
which themselves depend on natively compiled and loaded C code. Unless NiFi 
explicitly adds the C-code for the Lz4 codec and manually builds and loads that 
library, we won't be able to use the Codec in Hadoop.

I've also evaluated using the lz4-java library instead but this does not 
generate data in a Hadoop readable format. 


---


[jira] (NIFI-352) Consider Sticky Bulletins

2017-01-31 Thread Andrew Lim (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-352?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15847174#comment-15847174
 ] 

Andrew Lim commented on NIFI-352:
-

A notification window/system in NiFi is something that I think is sorely 
needed, especially as more and more features are added.  There should be a 
central place where the user can view and interact with issues that are 
affecting the flow or the user's ability to modify the flow.  I agree with 
[~rmoran] that it would make sense conceptually to the user to expand the 
Bulletin implementation for this.  Perhaps a new Jira is needed, since this is 
much larger/different than the concept of "sticky" bulletins.

> Consider Sticky Bulletins
> -
>
> Key: NIFI-352
> URL: https://issues.apache.org/jira/browse/NIFI-352
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core UI
>Reporter: Daniel Ueberfluss
>Priority: Minor
>  Labels: dashboard
>
> Consider the implementation of Sticky Bulletins for things like "Controller 
> Failed to Start Processor XYZ"
> A user should have the ability to Confirm the bulletin; until that time it 
> should stay present. 





[GitHub] nifi-minifi-cpp pull request #43: MINIFI-183 Implemented ListenHTTP

2017-01-31 Thread achristianson
GitHub user achristianson opened a pull request:

https://github.com/apache/nifi-minifi-cpp/pull/43

MINIFI-183 Implemented ListenHTTP

Implemented ListenHTTP. Supports the following features:

- Connection keepalive (to be friendly with clients using connection 
pooling)
- Two-way SSL
- Configurable minimum TLS version
- Pattern-based DN authorization
- Pattern-based extraction of HTTP headers to FlowFile attrs

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/achristianson/nifi-minifi-cpp MINIFI-183

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi-minifi-cpp/pull/43.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #43


commit 3690e473bf71f91f12fe10affcfcc02b29beb6f0
Author: Andrew Christianson 
Date:   2017-01-23T12:53:27Z

MINIFI-183 Added civet 1.9.1 sources

commit 14b7cc0a77e10818dafca359668d713f6e2b86ae
Author: Andrew Christianson 
Date:   2017-01-23T18:16:53Z

MINIFI-183 Added onScheduled hook

commit 5c342fe4bc89a6fb863b353602bf61d162e7189e
Author: Andrew Christianson 
Date:   2017-01-30T18:31:48Z

MINIFI-183 Added initial two-way TLS-enabled ListenHTTP implementation

commit 572895ef5bd8be09b3f92594a13ac1094871776c
Author: Andrew Christianson 
Date:   2017-01-31T16:40:32Z

MINIFI-183 Implemented connection keepalive and headers as attributes 
pattern




---


[jira] (NIFI-352) Consider Sticky Bulletins

2017-01-31 Thread Rob Moran (JIRA)

Rob Moran commented on NIFI-352:
--------------------------------

Re: Consider Sticky Bulletins

Perhaps it's technically true that the current bulletin system is related but 
different; however, I think the similarities are far greater, and coming up 
with a separate way to communicate other issues to users is probably not 
necessary.

I think we should look at enhancing the current bulletin implementation to 
expand its coverage into a universal system that alerts users to issues and 
provides general feedback on the actions they take.


[jira] (NIFI-3422) Run Once Scheduling

2017-01-31 Thread Nazario (JIRA)

Nazario created an issue

Apache NiFi / NIFI-3422
Run Once Scheduling

Issue Type: New Feature
Affects Versions: 1.2.0
Assignee: Unassigned
Components: Core Framework, Core UI
Created: 31/Jan/17 15:55
Fix Versions: 1.2.0
Labels: features
Priority: Minor
Reporter: Nazario

A run once scheduling option allows a Processor to run once and then 
automatically stop. This is convenient when developing and debugging flows, or 
when building "visual scripts" for ad-hoc process integration or iterative 
analytics. Individual processors set to 'Run once' can be selected on the 
canvas with a shift-click. Then clicking 'Start' on the Operate Palette will 
start those processors which

[jira] (NIFI-3223) Allow PublishAMQP to use NiFi expression language

2017-01-31 Thread ASF GitHub Bot (JIRA)

ASF GitHub Bot commented on NIFI-3223:
--------------------------------------

Re: Allow PublishAMQP to use NiFi expression language

Github user olegz commented on the issue:

https://github.com/apache/nifi/pull/1449

@pvillard31 yes you can
```
runner.setProperty(MyProcessor.SOME_PROPERTY, "${some.attribute.key}");
Map<String, String> attributes = new HashMap<>();
attributes.put("some.attribute.key", "Bonjour Pierre");
runner.enqueue("Hello World\nGoodbye".getBytes(StandardCharsets.UTF_8), attributes);
```



[GitHub] nifi issue #1449: NIFI-3223 added support for expression language

2017-01-31 Thread olegz
Github user olegz commented on the issue:

https://github.com/apache/nifi/pull/1449
  
@pvillard31 yes you can
```
runner.setProperty(MyProcessor.SOME_PROPERTY, "${some.attribute.key}");
Map<String, String> attributes = new HashMap<>();
attributes.put("some.attribute.key", "Bonjour Pierre");
runner.enqueue("Hello World\nGoodbye".getBytes(StandardCharsets.UTF_8), 
attributes);
```
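
One way to extend the snippet above into a full assertion is sketched below.
This is illustrative only: `MyProcessor` and `REL_SUCCESS` are placeholder
names, and for the substitution to happen the property descriptor must be
built with `expressionLanguageSupported(true)` and the processor must call
`evaluateAttributeExpressions(flowFile)` when reading the property.

```java
// Sketch only -- MyProcessor is a placeholder, not a real NiFi processor.
TestRunner runner = TestRunners.newTestRunner(MyProcessor.class);
runner.setProperty(MyProcessor.SOME_PROPERTY, "${some.attribute.key}");

Map<String, String> attributes = new HashMap<>();
attributes.put("some.attribute.key", "Bonjour Pierre");
runner.enqueue("Hello World\nGoodbye".getBytes(StandardCharsets.UTF_8), attributes);

runner.run();
// If the processor evaluated SOME_PROPERTY against the flow file, the
// resulting value was "Bonjour Pierre"; assert on whatever the processor
// emits (here, simply that the flow file was routed to success).
runner.assertAllFlowFilesTransferred(MyProcessor.REL_SUCCESS, 1);
```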


---


[jira] (NIFI-3223) Allow PublishAMQP to use NiFi expression language

2017-01-31 Thread ASF GitHub Bot (JIRA)

ASF GitHub Bot commented on NIFI-3223:
--------------------------------------

Re: Allow PublishAMQP to use NiFi expression language

Github user pvillard31 commented on the issue:

https://github.com/apache/nifi/pull/1449

Hey @olegz, it looks OK, I'll give it a try. Just wondering if there is an 
easy way to update unit tests to actually test the use of expression language.



[GitHub] nifi issue #1449: NIFI-3223 added support for expression language

2017-01-31 Thread pvillard31
Github user pvillard31 commented on the issue:

https://github.com/apache/nifi/pull/1449
  
Hey @olegz, it looks OK, I'll give it a try. Just wondering if there is an 
easy way to update unit tests to actually test the use of expression language.


---


[jira] (NIFI-2881) Allow Database Fetch processor(s) to accept incoming flow files and use Expression Language

2017-01-31 Thread ASF GitHub Bot (JIRA)

ASF GitHub Bot commented on NIFI-2881:
--------------------------------------

Re: Allow Database Fetch processor(s) to accept incoming flow files and use 
Expression Language

Github user mattyb149 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1407#discussion_r98684537

(Diff context: QueryDatabaseTable.java, syncing initial max values into the 
state property map under fully-qualified keys -- the full diff appears in the 
GitHub message below.)

I was trying to limit the number of queries GenerateTableFetch has to make, 
since the real work will be done downstream. In this case, an invariant is 
that a columnTypeMap entry exists iff a max-value entry exists. Otherwise the 
first time I'll be doing a SELECT MAX(max-value-column) without a WHERE clause 
(since there is no max-value entry), so I don't have to worry about type 
conversions for literals. The type of the value(s) is the same as the type of 
the MAX(values), so the first time I can fill both the columnType map and the 
max-value map.

Is there a path where the current approach does not suffice? If so then I can 
certainly add an extra query.

[GitHub] nifi pull request #1407: NIFI-2881: Added EL support to DB Fetch processors,...

2017-01-31 Thread mattyb149
Github user mattyb149 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1407#discussion_r98684537
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/QueryDatabaseTable.java
 ---
@@ -212,9 +215,21 @@ public void onTrigger(final ProcessContext context, 
final ProcessSessionFactory
 final Map statePropertyMap = new 
HashMap<>(stateMap.toMap());
 
 //If an initial max value for column(s) has been specified using 
properties, and this column is not in the state manager, sync them to the state 
property map
-for(final Map.Entry maxProp : 
maxValueProperties.entrySet()){
-if 
(!statePropertyMap.containsKey(maxProp.getKey().toLowerCase())) {
-statePropertyMap.put(maxProp.getKey().toLowerCase(), 
maxProp.getValue());
+for (final Map.Entry maxProp : 
maxValueProperties.entrySet()) {
+String maxPropKey = maxProp.getKey().toLowerCase();
+String fullyQualifiedMaxPropKey = getStateKey(tableName, 
maxPropKey);
+if (!statePropertyMap.containsKey(fullyQualifiedMaxPropKey)) {
+String newMaxPropValue;
+// If we can't find the value at the fully-qualified key 
name, it is possible (under a previous scheme)
+// the value has been stored under a key that is only the 
column name. Fall back to check the column name,
+// but store the new initial max value under the 
fully-qualified key.
+if (statePropertyMap.containsKey(maxPropKey)) {
+newMaxPropValue = statePropertyMap.get(maxPropKey);
+} else {
+newMaxPropValue = maxProp.getValue();
--- End diff --

I was trying to limit the number of queries GenerateTableFetch has to make, 
since the real work will be done downstream.  In this case, an invariant is 
that a columnTypeMap entry exists iff a max-value entry exists. Otherwise the 
first time I'll be doing a SELECT MAX(max-value-column) without a WHERE clause 
(since there is no max-value entry) so I don't have to worry about type 
conversions for literals.  The type of the value(s) is the same as the type of 
the MAX(values), so the first time I can fill both the columnType map and the 
max-value map.

Is there a path where the current approach does not suffice? If so then I 
can certainly add an extra query.
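
The fully-qualified-key fallback being discussed in this diff can be reduced
to a small lookup sketch. The delimiter and method names below are
illustrative, not the processor's actual implementation:

```java
import java.util.HashMap;
import java.util.Map;

public class StateKeyFallback {

    // Illustrative namespace delimiter; the real processor defines its own.
    static String getStateKey(String table, String column) {
        return table + "@!@" + column;
    }

    // Prefer the fully-qualified key; fall back to the legacy column-only key
    // that older versions may have written into the state map.
    static String currentMax(Map<String, String> state, String table, String column) {
        String value = state.get(getStateKey(table, column));
        return (value != null) ? value : state.get(column);
    }

    public static void main(String[] args) {
        Map<String, String> state = new HashMap<>();
        state.put("id", "42");                                 // legacy key only
        System.out.println(currentMax(state, "users", "id"));  // falls back to legacy

        state.put(getStateKey("users", "id"), "99");           // new-style key
        System.out.println(currentMax(state, "users", "id"));  // prefers qualified key
    }
}
```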


---


[jira] (NIFI-2881) Allow Database Fetch processor(s) to accept incoming flow files and use Expression Language

2017-01-31 Thread ASF GitHub Bot (JIRA)

ASF GitHub Bot commented on NIFI-2881:
--------------------------------------

Re: Allow Database Fetch processor(s) to accept incoming flow files and use 
Expression Language

Github user mattyb149 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1407#discussion_r98681382

(Diff context: QueryDatabaseTable.java, the loop syncing initial max values 
from properties into the state property map.)

Agreed. For GenerateTableFetch, initial max values are more useful as flow 
file attributes, although for static tables / max-value columns (which don't 
require an incoming connection), it would still be nice to be able to add 
them as dynamic properties. Do you mind adding your observation(s) to 
NIFI-3335 (https://issues.apache.org/jira/browse/NIFI-3335)?

[jira] (NIFI-2881) Allow Database Fetch processor(s) to accept incoming flow files and use Expression Language

2017-01-31 Thread ASF GitHub Bot (JIRA)

ASF GitHub Bot commented on NIFI-2881:
--------------------------------------

Re: Allow Database Fetch processor(s) to accept incoming flow files and use 
Expression Language

Github user mattyb149 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1407#discussion_r98680344

(Diff context: GenerateTableFetch.java, resolving the current max value under 
the fully-qualified state key with a fallback to the legacy column-name key -- 
the full diff appears in the GitHub message below.)

[GitHub] nifi pull request #1407: NIFI-2881: Added EL support to DB Fetch processors,...

2017-01-31 Thread mattyb149
Github user mattyb149 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1407#discussion_r98680344
  
--- Diff: nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/GenerateTableFetch.java ---
@@ -202,40 +246,60 @@ public void onTrigger(final ProcessContext context, final ProcessSessionFactory
                    ResultSetMetaData rsmd = resultSet.getMetaData();
                    for (int i = 2; i <= rsmd.getColumnCount(); i++) {
                        String resultColumnName = rsmd.getColumnName(i).toLowerCase();
+                       String fullyQualifiedStateKey = getStateKey(tableName, resultColumnName);
+                       String resultColumnCurrentMax = statePropertyMap.get(fullyQualifiedStateKey);
+                       if (StringUtils.isEmpty(resultColumnCurrentMax) && !isDynamicTableName) {
+                           // If we can't find the value at the fully-qualified key name and the table name is static, it is possible (under a previous scheme)
+                           // the value has been stored under a key that is only the column name. Fall back to check the column name; either way, when a new
+                           // maximum value is observed, it will be stored under the fully-qualified key from then on.
+                           resultColumnCurrentMax = statePropertyMap.get(resultColumnName);
+                       }
+
                        int type = rsmd.getColumnType(i);
+                       if (isDynamicTableName) {
+                           // We haven't pre-populated the column type map if the table name is dynamic, so do it here
+                           columnTypeMap.put(fullyQualifiedStateKey, type);
+                       }
                        try {
-                           String newMaxValue = getMaxValueFromRow(resultSet, i, type, statePropertyMap.get(resultColumnName.toLowerCase()), dbAdapter.getName());
+                           String newMaxValue = getMaxValueFromRow(resultSet, i, type, resultColumnCurrentMax, dbAdapter.getName());
                            if (newMaxValue != null) {
-                               statePropertyMap.put(resultColumnName, newMaxValue);
+                               statePropertyMap.put(fullyQualifiedStateKey, newMaxValue);
                            }
                        } catch (ParseException | IOException pie) {
                            // Fail the whole thing here before we start creating flow files and such
                            throw new ProcessException(pie);
                        }
+
                    }
                } else {
                    // Something is very wrong here, one row (even if count is zero) should be returned
                    throw new SQLException("No rows returned from metadata query: " + selectQuery);
                }
-           } catch (SQLException e) {
-               logger.error("Unable to execute SQL select query {} due to {}", new Object[]{selectQuery, e});
-               throw new ProcessException(e);
-           }
-           final int numberOfFetches = (partitionSize == 0) ? rowCount : (rowCount / partitionSize) + (rowCount % partitionSize == 0 ? 0 : 1);

+           final int numberOfFetches = (partitionSize == 0) ? rowCount : (rowCount / partitionSize) + (rowCount % partitionSize == 0 ? 0 : 1);

-           // Generate SQL statements to read "pages" of data
-           for (int i = 0; i < numberOfFetches; i++) {
-               FlowFile sqlFlowFile;
+           // Generate SQL statements to read "pages" of data
+           for (int i = 0; i < numberOfFetches; i++) {
+               Integer limit = partitionSize == 0 ? null : partitionSize;
+               Integer offset = partitionSize == 0 ? null : i * partitionSize;
+               final String query = dbAdapter.getSelectStatement(tableName, columnNames, whereClause, StringUtils.join(maxValueColumnNameList, ", "), limit, offset);
+               FlowFile sqlFlowFile = (fileToProcess == null) ? session.create() : session.create(fileToProcess);
+               sqlFlowFile = session.write(sqlFlowFile, out -> out.write(query.getBytes()));
+               session.transfer(sqlFlowFile, REL_SUCCESS);
+           }
+
+           if (fileToProcess != null) {
+               session.remove(fileToProcess);
+           }
+       } catch (SQLException e) {
+           if (fileToProcess != null) {
+               logger.error("Unable to execute SQL select query {} due to {}, routing {} to failure", new
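The key-migration pattern in the hunk above (prefer the fully-qualified key, fall back to the bare column name left behind by the previous scheme) can be isolated into a small standalone sketch. The `@!@` delimiter and the shape of `getStateKey` here are assumptions about AbstractDatabaseFetchProcessor's internals, not code taken from this PR:

```java
import java.util.HashMap;
import java.util.Map;

public class StateKeyFallback {
    // Assumed shape of getStateKey: namespace the column under its table.
    static String getStateKey(String table, String column) {
        return table.toLowerCase() + "@!@" + column.toLowerCase();
    }

    // Mirrors the hunk: read under the fully-qualified key first; if absent
    // (state written by an older version that keyed on column name only),
    // fall back to the bare column name. New maxima are always written back
    // under the fully-qualified key, migrating the state forward.
    static String currentMax(Map<String, String> state, String table, String column) {
        String value = state.get(getStateKey(table, column));
        return (value == null || value.isEmpty()) ? state.get(column) : value;
    }

    public static void main(String[] args) {
        Map<String, String> state = new HashMap<>();
        state.put("id", "42");                              // old-scheme key
        System.out.println(currentMax(state, "T1", "id"));  // falls back, prints 42

        state.put(getStateKey("T1", "id"), "100");          // migrated key wins
        System.out.println(currentMax(state, "T1", "id"));  // prints 100
    }
}
```

The fallback is only taken when the table name is static, since dynamic table names never existed under the old scheme.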

[jira] (NIFI-2881) Allow Database Fetch processor(s) to accept incoming flow files and use Expression Language

2017-01-31 Thread ASF GitHub Bot (JIRA)

ASF GitHub Bot commented on NIFI-2881

Re: Allow Database Fetch processor(s) to accept incoming flow files and use Expression Language

Github user mattyb149 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1407#discussion_r98680239

--- Diff: nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/AbstractDatabaseFetchProcessor.java ---

Good point, I will update the note.

This message was sent by Atlassian JIRA (v6.3.15#6346-sha1:dbc023d)

[GitHub] nifi pull request #1407: NIFI-2881: Added EL support to DB Fetch processors,...

2017-01-31 Thread mattyb149
Github user mattyb149 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1407#discussion_r98680239
  
--- Diff: nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/AbstractDatabaseFetchProcessor.java ---
@@ -117,9 +124,11 @@
                + "can be used to retrieve only those rows that have been added/updated since the last retrieval. Note that some "
                + "JDBC types such as bit/boolean are not conducive to maintaining maximum value, so columns of these "
                + "types should not be listed in this property, and will result in error(s) during processing. If no columns "
-               + "are provided, all rows from the table will be considered, which could have a performance impact.")
+               + "are provided, all rows from the table will be considered, which could have a performance impact.\nNOTE: If Expression Language is "
+               + "present for this property and it refers to flow file attribute(s), then the Table Name property must also contain Expression Language.")
--- End diff --

Good point, I will update the note.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] (NIFI-1962) NPE in Expression Language toDate()

2017-01-31 Thread Pierre Villard (JIRA)

Pierre Villard updated NIFI-1962 (Apache NiFi)

Change By: Pierre Villard
Resolution: Fixed
Fix Version/s: 1.2.0
Status: Patch Available → Resolved

[jira] (NIFI-3179) MergeContent extracts demarcator property value bytes without specifying charset encoding

2017-01-31 Thread Pierre Villard (JIRA)

Pierre Villard updated NIFI-3179 (Apache NiFi)

Change By: Pierre Villard
Resolution: Fixed
Status: Patch Available → Resolved

[GitHub] nifi pull request #1452: NIFI-3179 Added support for default UTF-8 char enco...

2017-01-31 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/1452




[jira] (NIFI-3179) MergeContent extracts demarcator property value bytes without specifying charset encoding

2017-01-31 Thread ASF GitHub Bot (JIRA)

ASF GitHub Bot commented on NIFI-3179

Re: MergeContent extracts demarcator property value bytes without specifying charset encoding

Github user pvillard31 commented on the issue:

https://github.com/apache/nifi/pull/1452

Thanks @olegz, merged to master.

[jira] (NIFI-3179) MergeContent extracts demarcator property value bytes without specifying charset encoding

2017-01-31 Thread ASF GitHub Bot (JIRA)

ASF GitHub Bot commented on NIFI-3179

Re: MergeContent extracts demarcator property value bytes without specifying charset encoding

Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/1452

[jira] (NIFI-3179) MergeContent extracts demarcator property value bytes without specifying charset encoding

2017-01-31 Thread ASF subversion and git services (JIRA)

ASF subversion and git services commented on NIFI-3179

Re: MergeContent extracts demarcator property value bytes without specifying charset encoding

Commit 390754c5754f821e66d519a269cd0ee56f5e3622 in nifi's branch refs/heads/master from Oleg Zhurakousky [ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=390754c ]

NIFI-3179 Added support for default UTF-8 char encoding; removed deprecated usage of BAOS and BAIS

This closes #1452.

[GitHub] nifi issue #1452: NIFI-3179 Added support for default UTF-8 char encoding

2017-01-31 Thread pvillard31
Github user pvillard31 commented on the issue:

https://github.com/apache/nifi/pull/1452
  
Thanks @olegz, merged to master.


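The fix merged in this thread comes down to passing an explicit charset to String.getBytes instead of relying on the platform default. A minimal illustration of the difference (a sketch, not the NiFi code itself):

```java
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

public class DemarcatorBytes {
    public static void main(String[] args) {
        // The section sign; any non-ASCII demarcator shows the hazard.
        String demarcator = "\u00A7";

        // Platform-dependent: the result varies with the JVM's file.encoding
        // setting -- the bug class NIFI-3179 targets.
        byte[] platformBytes = demarcator.getBytes();

        // Deterministic: always the UTF-8 encoding, regardless of JVM locale.
        byte[] utf8Bytes = demarcator.getBytes(StandardCharsets.UTF_8);

        System.out.println(Arrays.toString(utf8Bytes)); // [-62, -89] i.e. 0xC2 0xA7
    }
}
```

With the platform-default overload, the same flow definition can produce different merged content on differently configured JVMs; the explicit-charset overload makes the output reproducible.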


[GitHub] nifi pull request #1407: NIFI-2881: Added EL support to DB Fetch processors,...

2017-01-31 Thread ijokarumawak
Github user ijokarumawak commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1407#discussion_r98604895
  
--- Diff: nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/GenerateTableFetch.java ---
@@ -202,40 +246,60 @@ public void onTrigger(final ProcessContext context, final ProcessSessionFactory
                    ResultSetMetaData rsmd = resultSet.getMetaData();
                    for (int i = 2; i <= rsmd.getColumnCount(); i++) {
                        String resultColumnName = rsmd.getColumnName(i).toLowerCase();
+                       String fullyQualifiedStateKey = getStateKey(tableName, resultColumnName);
+                       String resultColumnCurrentMax = statePropertyMap.get(fullyQualifiedStateKey);
+                       if (StringUtils.isEmpty(resultColumnCurrentMax) && !isDynamicTableName) {
+                           // If we can't find the value at the fully-qualified key name and the table name is static, it is possible (under a previous scheme)
+                           // the value has been stored under a key that is only the column name. Fall back to check the column name; either way, when a new
+                           // maximum value is observed, it will be stored under the fully-qualified key from then on.
+                           resultColumnCurrentMax = statePropertyMap.get(resultColumnName);
+                       }
+
                        int type = rsmd.getColumnType(i);
+                       if (isDynamicTableName) {
+                           // We haven't pre-populated the column type map if the table name is dynamic, so do it here
+                           columnTypeMap.put(fullyQualifiedStateKey, type);
+                       }
                        try {
-                           String newMaxValue = getMaxValueFromRow(resultSet, i, type, statePropertyMap.get(resultColumnName.toLowerCase()), dbAdapter.getName());
+                           String newMaxValue = getMaxValueFromRow(resultSet, i, type, resultColumnCurrentMax, dbAdapter.getName());
                            if (newMaxValue != null) {
-                               statePropertyMap.put(resultColumnName, newMaxValue);
+                               statePropertyMap.put(fullyQualifiedStateKey, newMaxValue);
                            }
                        } catch (ParseException | IOException pie) {
                            // Fail the whole thing here before we start creating flow files and such
                            throw new ProcessException(pie);
                        }
+
                    }
                } else {
                    // Something is very wrong here, one row (even if count is zero) should be returned
                    throw new SQLException("No rows returned from metadata query: " + selectQuery);
                }
-           } catch (SQLException e) {
-               logger.error("Unable to execute SQL select query {} due to {}", new Object[]{selectQuery, e});
-               throw new ProcessException(e);
-           }
-           final int numberOfFetches = (partitionSize == 0) ? rowCount : (rowCount / partitionSize) + (rowCount % partitionSize == 0 ? 0 : 1);

+           final int numberOfFetches = (partitionSize == 0) ? rowCount : (rowCount / partitionSize) + (rowCount % partitionSize == 0 ? 0 : 1);

-           // Generate SQL statements to read "pages" of data
-           for (int i = 0; i < numberOfFetches; i++) {
-               FlowFile sqlFlowFile;
+           // Generate SQL statements to read "pages" of data
+           for (int i = 0; i < numberOfFetches; i++) {
+               Integer limit = partitionSize == 0 ? null : partitionSize;
+               Integer offset = partitionSize == 0 ? null : i * partitionSize;
+               final String query = dbAdapter.getSelectStatement(tableName, columnNames, whereClause, StringUtils.join(maxValueColumnNameList, ", "), limit, offset);
+               FlowFile sqlFlowFile = (fileToProcess == null) ? session.create() : session.create(fileToProcess);
+               sqlFlowFile = session.write(sqlFlowFile, out -> out.write(query.getBytes()));
+               session.transfer(sqlFlowFile, REL_SUCCESS);
+           }
+
+           if (fileToProcess != null) {
+               session.remove(fileToProcess);
+           }
+       } catch (SQLException e) {
+           if (fileToProcess != null) {
+               logger.error("Unable to execute SQL select query {} due to {}, routing {} to failure",
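The page-count expression in the hunk above is a hand-rolled ceiling division. Pulled out on its own (class and method names here are hypothetical, for illustration only):

```java
public class FetchPages {
    // Mirrors the diff: with a positive partition size, the number of
    // generated "page" queries is ceil(rowCount / partitionSize); the
    // partitionSize == 0 branch reproduces the hunk verbatim.
    static int numberOfFetches(int rowCount, int partitionSize) {
        return (partitionSize == 0)
                ? rowCount
                : (rowCount / partitionSize) + (rowCount % partitionSize == 0 ? 0 : 1);
    }

    public static void main(String[] args) {
        System.out.println(numberOfFetches(10, 3)); // 4 pages: 3 + 3 + 3 + 1
        System.out.println(numberOfFetches(9, 3));  // 3 pages, no remainder
    }
}
```

Each loop iteration then derives limit = partitionSize and offset = i * partitionSize for its page of the generated SELECT.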

[GitHub] nifi pull request #1407: NIFI-2881: Added EL support to DB Fetch processors,...

2017-01-31 Thread ijokarumawak
Github user ijokarumawak commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1407#discussion_r98614712
  
--- Diff: nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/QueryDatabaseTable.java ---
@@ -212,9 +215,21 @@ public void onTrigger(final ProcessContext context, final ProcessSessionFactory
        final Map statePropertyMap = new HashMap<>(stateMap.toMap());

        //If an initial max value for column(s) has been specified using properties, and this column is not in the state manager, sync them to the state property map
-       for(final Map.Entry maxProp : maxValueProperties.entrySet()){
-           if (!statePropertyMap.containsKey(maxProp.getKey().toLowerCase())) {
-               statePropertyMap.put(maxProp.getKey().toLowerCase(), maxProp.getValue());
+       for (final Map.Entry maxProp : maxValueProperties.entrySet()) {
+           String maxPropKey = maxProp.getKey().toLowerCase();
+           String fullyQualifiedMaxPropKey = getStateKey(tableName, maxPropKey);
+           if (!statePropertyMap.containsKey(fullyQualifiedMaxPropKey)) {
+               String newMaxPropValue;
+               // If we can't find the value at the fully-qualified key name, it is possible (under a previous scheme)
+               // the value has been stored under a key that is only the column name. Fall back to check the column name,
+               // but store the new initial max value under the fully-qualified key.
+               if (statePropertyMap.containsKey(maxPropKey)) {
+                   newMaxPropValue = statePropertyMap.get(maxPropKey);
+               } else {
+                   newMaxPropValue = maxProp.getValue();
--- End diff --

For the unknown column type issue when initial max value and dynamic max
value columns are used together, as described in [this comment](#issuecomment-274592165), how
about adding a query using a "1 = 0" WHERE clause here to populate columnTypeMap
when a type for the column doesn't exist in the map?
Since this line can only be reached on the initial execution for a given
table, it should be safe to do so, like AbstractDatabaseFetchProcessor.setup()
does.


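The "1 = 0" trick suggested above works because such a query returns zero rows but still carries ResultSetMetaData for every selected column. A minimal sketch (the helper names are hypothetical, and the Connection-based method is shown but not exercised here, since it needs a live database):

```java
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class ColumnTypeProbe {
    // The WHERE clause can never match, so the database returns no rows --
    // only the column metadata we want.
    static String probeQuery(String tableName, String columnName) {
        return "SELECT " + columnName + " FROM " + tableName + " WHERE 1 = 0";
    }

    // Hypothetical use against a live Connection: harvest the JDBC type of
    // the column without fetching any data, suitable for populating a
    // columnTypeMap lazily on a table's first execution.
    static int columnType(Connection conn, String tableName, String columnName) throws SQLException {
        try (Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery(probeQuery(tableName, columnName))) {
            return rs.getMetaData().getColumnType(1); // a java.sql.Types constant
        }
    }

    public static void main(String[] args) {
        System.out.println(probeQuery("T1", "updated_timestamp"));
    }
}
```

This mirrors what AbstractDatabaseFetchProcessor.setup() does eagerly, deferred to the first time a dynamic table name is seen.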


[jira] (NIFI-2881) Allow Database Fetch processor(s) to accept incoming flow files and use Expression Language

2017-01-31 Thread ASF GitHub Bot (JIRA)

ASF GitHub Bot commented on NIFI-2881

Re: Allow Database Fetch processor(s) to accept incoming flow files and use Expression Language

Github user ijokarumawak commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1407#discussion_r98614712

--- Diff: nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/QueryDatabaseTable.java ---

For the unknown column type issue when initial max value and dynamic max value columns are used together, as described in [this comment](#issuecomment-274592165), how about adding a query using a "1 = 0" WHERE clause here to populate columnTypeMap when a type for the column doesn't exist in the map? Since this line can only be reached on the initial execution for a given table, it should be safe to do so, like AbstractDatabaseFetchProcessor.setup() does.

[jira] (NIFI-2881) Allow Database Fetch processor(s) to accept incoming flow files and use Expression Language

2017-01-31 Thread ASF GitHub Bot (JIRA)

ASF GitHub Bot commented on NIFI-2881

Re: Allow Database Fetch processor(s) to accept incoming flow files and use Expression Language

Github user ijokarumawak commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1407#discussion_r98615920

--- Diff: nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/QueryDatabaseTable.java ---

When we add incoming flow file support, it may be useful if initial max values can be specified via flow file attributes, because if the table name is dynamic, the user won't be able to configure a dynamic processor property in advance for each possible table.

Example flow file attributes: table.name='table-1', maxvalue.columns='update_date', initial.maxvalue.update_date='20170101'

[GitHub] nifi pull request #1407: NIFI-2881: Added EL support to DB Fetch processors,...

2017-01-31 Thread ijokarumawak
Github user ijokarumawak commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1407#discussion_r98604070
  
--- Diff: nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/AbstractDatabaseFetchProcessor.java ---
@@ -117,9 +124,11 @@
                + "can be used to retrieve only those rows that have been added/updated since the last retrieval. Note that some "
                + "JDBC types such as bit/boolean are not conducive to maintaining maximum value, so columns of these "
                + "types should not be listed in this property, and will result in error(s) during processing. If no columns "
-               + "are provided, all rows from the table will be considered, which could have a performance impact.")
+               + "are provided, all rows from the table will be considered, which could have a performance impact.\nNOTE: If Expression Language is "
+               + "present for this property and it refers to flow file attribute(s), then the Table Name property must also contain Expression Language.")
--- End diff --

What does the 'NOTE' imply? There's no validation code for this condition,
so I understand it as a recommendation. Is that correct?

I think it's possible that a user intentionally uses a static Table Name and
dynamic Max Value Columns together, e.g. if they would like to fetch `brand new`
items by `created_timestamp` and also `updated` items by `updated_timestamp`
from the same table. This works fine.

What doesn't work (or is dangerous) is probably a situation where different Max
Value Columns are passed with the same table name and those columns overlap
each other, as such usage will corrupt state management.

- E.g.1 : FF1(table=T1, maxColumn=b, a) but FF2(table=T1, maxColumn=a
only). This can be problematic.
- E.g.2 : FF1(table=T1, maxColumn=b, a), FF2(table=T1, maxColumn=c). This
actually works fine.

If the note was added to explain this risk, then the 'NOTE' could be written
like: "NOTE: It's important to use consistent Max Value Columns for a given
table for incremental fetch to work properly."


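E.g.1 above can be replayed concretely against the fully-qualified state keys this PR introduces (the `table@!@column` key format used here is an assumption for illustration):

```java
import java.util.HashMap;
import java.util.Map;

public class OverlappingMaxColumns {
    // Replays E.g.1: two flows query the same table with overlapping but
    // unequal max-value column sets, sharing per-table/per-column state keys.
    static Map<String, String> simulateEg1() {
        Map<String, String> state = new HashMap<>();

        // FF1(table=T1, maxColumn=b, a) records maxima for both columns...
        state.put("t1@!@a", "100");
        state.put("t1@!@b", "2017-01-31");

        // ...then FF2(table=T1, maxColumn=a only) advances only column a.
        state.put("t1@!@a", "250");

        // The next FF1-style fetch now filters on a fresh max for 'a' but a
        // stale max for 'b', so rows can be skipped or re-emitted. In E.g.2,
        // FF2 would write the disjoint key "t1@!@c" -- no interference.
        return state;
    }

    public static void main(String[] args) {
        System.out.println(simulateEg1());
    }
}
```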


[GitHub] nifi pull request #1407: NIFI-2881: Added EL support to DB Fetch processors,...

2017-01-31 Thread ijokarumawak
Github user ijokarumawak commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1407#discussion_r98615920
  
--- Diff: nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/QueryDatabaseTable.java ---
@@ -212,9 +215,21 @@ public void onTrigger(final ProcessContext context, final ProcessSessionFactory
        final Map statePropertyMap = new HashMap<>(stateMap.toMap());

        //If an initial max value for column(s) has been specified using properties, and this column is not in the state manager, sync them to the state property map
-       for(final Map.Entry maxProp : maxValueProperties.entrySet()){
-           if (!statePropertyMap.containsKey(maxProp.getKey().toLowerCase())) {
-               statePropertyMap.put(maxProp.getKey().toLowerCase(), maxProp.getValue());
+       for (final Map.Entry maxProp : maxValueProperties.entrySet()) {
--- End diff --

When we add incoming flow file support, it may be useful if initial max
values can be specified via flow file attributes, because if the table name is
dynamic, the user won't be able to configure a dynamic processor property in
advance for each possible table.

Example flow file attributes: table.name='table-1', maxvalue.columns='update_date',
initial.maxvalue.update_date='20170101'



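The attribute convention proposed above (initial.maxvalue.&lt;column&gt;) could be harvested from a flow file's attribute map roughly like this. This is a sketch of the idea only; the prefix and class are taken from the example in the comment, not from an implemented API:

```java
import java.util.HashMap;
import java.util.Map;

public class InitialMaxValuesFromAttributes {
    static final String PREFIX = "initial.maxvalue.";

    // Collect column -> initial max value pairs from flow file attributes
    // that follow the proposed naming convention; other attributes
    // (table.name, maxvalue.columns, ...) are left untouched.
    static Map<String, String> initialMaxValues(Map<String, String> attributes) {
        Map<String, String> result = new HashMap<>();
        for (Map.Entry<String, String> e : attributes.entrySet()) {
            if (e.getKey().startsWith(PREFIX)) {
                result.put(e.getKey().substring(PREFIX.length()).toLowerCase(), e.getValue());
            }
        }
        return result;
    }

    public static void main(String[] args) {
        Map<String, String> attrs = new HashMap<>();
        attrs.put("table.name", "table-1");
        attrs.put("maxvalue.columns", "update_date");
        attrs.put("initial.maxvalue.update_date", "20170101");
        System.out.println(initialMaxValues(attrs)); // {update_date=20170101}
    }
}
```

Per-flow-file values like these would only seed state for a table's first execution; after that, the observed maxima in the state map take over.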


[jira] (NIFI-2881) Allow Database Fetch processor(s) to accept incoming flow files and use Expression Language

2017-01-31 Thread ASF GitHub Bot (JIRA)

ASF GitHub Bot commented on NIFI-2881

Re: Allow Database Fetch processor(s) to accept incoming flow files and use Expression Language

Github user ijokarumawak commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1407#discussion_r98604895

--- Diff: nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/GenerateTableFetch.java ---