[jira] [Commented] (NIFI-1769) Add support for SSE-KMS and S3 Signature Version 4 Authentication AWS

2017-07-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-1769?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16102709#comment-16102709
 ] 

ASF GitHub Bot commented on NIFI-1769:
--

Github user jvwing commented on the issue:

https://github.com/apache/nifi/pull/1126
  
@baank - Thanks for being willing to work on this.  A new pull request 
might be easier.


> Add support for SSE-KMS and S3 Signature Version 4 Authentication AWS
> -
>
> Key: NIFI-1769
> URL: https://issues.apache.org/jira/browse/NIFI-1769
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Affects Versions: 0.5.1
>Reporter: Michiel Moonen
>Priority: Minor
>  Labels: newbie, patch, security
>
> Currently there is no support for SSE-KMS and S3 Signature Version 4 
> authentication. These are necessary for enhanced security features.
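For orientation only (not part of the ticket or any attached patch): a minimal sketch of how SSE-KMS and Signature Version 4 are typically enabled with the AWS SDK for Java v1, which the NiFi S3 processors wrap. The bucket, key, credentials, and KMS key id are placeholders.

```java
import com.amazonaws.ClientConfiguration;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.model.PutObjectRequest;
import com.amazonaws.services.s3.model.SSEAwsKeyManagementParams;

import java.io.File;

public class S3SseKmsSketch {
    public static void main(String[] args) {
        // Force Signature Version 4; SSE-KMS requires it.
        ClientConfiguration config = new ClientConfiguration();
        config.setSignerOverride("AWSS3V4SignerType");

        AmazonS3Client s3 = new AmazonS3Client(
                new BasicAWSCredentials("ACCESS_KEY", "SECRET_KEY"), config);

        // Ask S3 to encrypt the object server-side with the given KMS key (placeholder id).
        PutObjectRequest request = new PutObjectRequest("my-bucket", "my-key", new File("data.bin"))
                .withSSEAwsKeyManagementParams(
                        new SSEAwsKeyManagementParams("arn:aws:kms:region:account:key/placeholder"));
        s3.putObject(request);
    }
}
```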





[GitHub] nifi issue #1126: NIFI-1769: Implemented SSE with KMS.

2017-07-26 Thread jvwing
Github user jvwing commented on the issue:

https://github.com/apache/nifi/pull/1126
  
@baank - Thanks for being willing to work on this.  A new pull request 
might be easier.




[jira] [Commented] (NIFI-4169) PutWebSocket processor with blank WebSocket session id attribute cannot transfer to failure queue

2017-07-26 Thread Y Wikander (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16102702#comment-16102702
 ] 

Y Wikander commented on NIFI-4169:
--

[~ijokarumawak], I found adding getSessionIds() to class WebSocketMessageRouter 
to be a simpler approach than creating a sendBroadcastMessage().
I moved broadcasting support into PutWebSocket (and pulled it out of 
WebSocketMessageRouter.sendMessage).
Documentation changes and Unit Tests remain.

The PutWebSocket documentation is lacking on how to use broadcasting, let alone 
on the fact that there are two different types. I'll try to rectify that.
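Illustrative sketch only (not the change described above): what a getSessionIds() accessor and an explicit broadcast loop in the processor could look like. Class and method names other than getSessionIds() are made up for the example.

```java
import java.io.IOException;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Stand-in for a WebSocket session; the real NiFi/Jetty types differ.
interface Session {
    void sendBinary(byte[] payload) throws IOException;
}

// Stand-in for WebSocketMessageRouter: expose the session ids so the caller
// can iterate over them instead of the router doing a hidden broadcast.
class Router {
    private final Map<String, Session> sessions = new ConcurrentHashMap<>();

    Set<String> getSessionIds() {
        return sessions.keySet();
    }

    void sendMessage(String sessionId, byte[] payload) throws IOException {
        Session session = sessions.get(sessionId);
        if (session == null) {
            throw new IOException("No session with id " + sessionId);
        }
        session.sendBinary(payload);
    }
}

// In the processor, broadcasting becomes an explicit loop, so per-session
// failures can be counted and the FlowFile routed accordingly.
class Broadcast {
    static int send(Router router, byte[] payload) {
        int failures = 0;
        for (String id : router.getSessionIds()) {
            try {
                router.sendMessage(id, payload);
            } catch (IOException e) {
                failures++;
            }
        }
        return failures;
    }
}
```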

> PutWebSocket processor with blank WebSocket session id attribute cannot 
> transfer to failure queue
> -
>
> Key: NIFI-4169
> URL: https://issues.apache.org/jira/browse/NIFI-4169
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 1.3.0
>Reporter: Y Wikander
>Priority: Critical
>  Labels: patch
> Attachments: 
> 0001-websocket-when-sendMessage-fails-under-blank-session.patch
>
>
> If a PutWebSocket processor is set up with a blank WebSocket session id 
> attribute (see NIFI-3318; Send message from PutWebSocket to all connected 
> clients) and it is not connected to a websocket server it will log the 
> failure and mark the flowfile with Success (rather than Failure) -- and the 
> data is effectively lost.
> If there are multiple connected clients, and some succeed and others fail, 
> routing Failure back into the PutWebSocket could result in duplicate data to 
> some clients.
> Other NiFi processors seem to err on the side of "at least once".





[jira] [Updated] (NIFI-4170) PutWebSocket processor does not support 'Penalty duration' setting

2017-07-26 Thread Y Wikander (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4170?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Y Wikander updated NIFI-4170:
-
Fix Version/s: 1.3.0
   Status: Patch Available  (was: Open)

Patch submitted for this issue. See 
[^0001-websocket-PutWebSocket-processor-support-Penalty-dur.patch] 
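For context (this is not the attached patch): NiFi processors normally honor the configured 'Penalty duration' by penalizing the FlowFile before routing it to failure. A minimal sketch against the public nifi-api, with the relationship passed in as a placeholder:

```java
import org.apache.nifi.flowfile.FlowFile;
import org.apache.nifi.processor.ProcessSession;
import org.apache.nifi.processor.Relationship;

// Illustrative helper: penalizing the FlowFile is what makes the processor's
// configured 'Penalty duration' take effect before the file is routed to failure.
final class PenaltyDurationSketch {

    static void routeToFailure(final ProcessSession session, FlowFile flowFile,
                               final Relationship failureRelationship) {
        flowFile = session.penalize(flowFile);   // applies the configured Penalty Duration
        session.transfer(flowFile, failureRelationship);
    }

    private PenaltyDurationSketch() {
    }
}
```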

> PutWebSocket processor does not support 'Penalty duration' setting
> ---
>
> Key: NIFI-4170
> URL: https://issues.apache.org/jira/browse/NIFI-4170
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 1.3.0
>Reporter: Y Wikander
>Priority: Minor
>  Labels: patch
> Fix For: 1.3.0
>
> Attachments: 
> 0001-websocket-PutWebSocket-processor-support-Penalty-dur.patch, 
> 0002-websocket-PutWebSocket-processor-support-Penalty-dur.patch
>
>
> PutWebSocket processor does not support 'Penalty duration' setting.





[GitHub] nifi issue #1126: NIFI-1769: Implemented SSE with KMS.

2017-07-26 Thread baank
Github user baank commented on the issue:

https://github.com/apache/nifi/pull/1126
  
This is currently a blocker for us and I have the resources to get this 
over the line.

Please re-open it and I will work to ensure the pull request is of 
sufficient quality to get merged.




[jira] [Commented] (NIFI-1580) Allow double-click to display config of processor

2017-07-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-1580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16102327#comment-16102327
 ] 

ASF GitHub Bot commented on NIFI-1580:
--

Github user scottyaslan commented on the issue:

https://github.com/apache/nifi/pull/2009
  
I can confirm that currently you can dblclick on a connection's stats box 
to open the config/view config. You can also (currently) double click any bends 
in a connection to open the config/view config for that connection.


> Allow double-click to display config of processor
> -
>
> Key: NIFI-1580
> URL: https://issues.apache.org/jira/browse/NIFI-1580
> Project: Apache NiFi
>  Issue Type: Wish
>  Components: Core UI
>Affects Versions: 0.4.1
> Environment: all
>Reporter: Uwe Geercken
>Priority: Minor
>  Labels: features, processor, ui
>
> A user frequently has to open the "config" dialog when designing NiFi flows. 
> Each time, the user has to right-click the processor and select "config" from 
> the menu.
> It would be quicker if it were possible to double-click a processor - 
> or maybe its title area - to display the config dialog.
> This could also be designed as a configuration option of the UI that the user 
> can define (whether double-clicking opens the config dialog, does something 
> else, or simply nothing).





[GitHub] nifi issue #2009: NIFI-1580 - Allow double-click to display config

2017-07-26 Thread scottyaslan
Github user scottyaslan commented on the issue:

https://github.com/apache/nifi/pull/2009
  
I can confirm that currently you can dblclick on a connection's stats box 
to open the config/view config. You can also (currently) double click any bends 
in a connection to open the config/view config for that connection.




[jira] [Commented] (NIFI-3376) Implement content repository ResourceClaim compaction

2017-07-26 Thread Joseph Witt (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3376?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1610#comment-1610
 ] 

Joseph Witt commented on NIFI-3376:
---

Ok.  We should spawn off a JIRA to focus on providing better monitoring/insight 
into the behavior/logic of the content repository as it relates to reachable 
items, unreachable but archived items, and unreachable items eligible to be deleted.

> Implement content repository ResourceClaim compaction
> -
>
> Key: NIFI-3376
> URL: https://issues.apache.org/jira/browse/NIFI-3376
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Affects Versions: 0.7.1, 1.1.1
>Reporter: Michael Moser
>Assignee: Michael Hogue
>
> On NiFi systems that deal with many files whose size is less than 1 MB, we 
> often see that the actual disk usage of the content_repository is much 
> greater than the size of flowfiles that NiFi reports are in its queues.  As 
> an example, NiFi may report "50,000 / 12.5 GB" but the content_repository 
> takes up 240 GB of its file system.  This leads to scenarios where a 500 GB 
> content_repository file system gets 100% full, but "I only had 40 GB of data 
> in my NiFi!"
> When several content claims exist in a single resource claim, and most but 
> not all content claims are terminated, the entire resource claim is still not 
> eligible for deletion or archive.  This could mean that only one 10 KB 
> content claim out of a 1 MB resource claim is counted by NiFi as existing in 
> its queues.
> If a particular flow has a slow egress point where flowfiles could back up 
> and remain on the system longer than expected, this problem is exacerbated.
> A potential solution is to compact resource claim files on disk. A background 
> thread could examine all resource claims, and for those that get "old" and 
> whose active content claim usage drops below a threshold, then rewrite the 
> resource claim file.
> A potential work-around is to allow modification of the FileSystemRepository 
> MAX_APPENDABLE_CLAIM_LENGTH to make it a smaller number.  This would increase 
> the probability that the content claims reference count in a resource claim 
> would reach 0 and the resource claim becomes eligible for deletion/archive.  
> Let users trade-off performance for more accurate accounting of NiFi queue 
> size to content repository size.
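For reference, the work-around in the last paragraph corresponds to lowering this nifi.properties entry (the value below is only an illustration; the default quoted later in this thread is 10 MB):

nifi.content.claim.max.appendable.size=1 MB

Smaller appendable claims make it more likely that every content claim inside a resource claim gets terminated, so the resource claim becomes eligible for deletion/archive, at the cost of more small files and write overhead.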





[GitHub] nifi issue #2009: NIFI-1580 - Allow double-click to display config

2017-07-26 Thread yuri1969
Github user yuri1969 commented on the issue:

https://github.com/apache/nifi/pull/2009
  
@scottyaslan It seems I omitted `quickSelect` activation for both start and 
end "nodes". Please, can you confirm that a double-click displays the 
connection configuration dialog when performed on a connection's mid "node"?




[jira] [Commented] (NIFI-1580) Allow double-click to display config of processor

2017-07-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-1580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16102174#comment-16102174
 ] 

ASF GitHub Bot commented on NIFI-1580:
--

Github user yuri1969 commented on the issue:

https://github.com/apache/nifi/pull/2009
  
@scottyaslan It seems I omitted `quickSelect` activation for both start and 
end "nodes". Please, can you confirm that a double-click displays the 
connection configuration dialog when performed on a connection's mid "node"?


> Allow double-click to display config of processor
> -
>
> Key: NIFI-1580
> URL: https://issues.apache.org/jira/browse/NIFI-1580
> Project: Apache NiFi
>  Issue Type: Wish
>  Components: Core UI
>Affects Versions: 0.4.1
> Environment: all
>Reporter: Uwe Geercken
>Priority: Minor
>  Labels: features, processor, ui
>
> A user frequently has to open the "config" dialog when designing NiFi flows. 
> Each time, the user has to right-click the processor and select "config" from 
> the menu.
> It would be quicker if it were possible to double-click a processor - 
> or maybe its title area - to display the config dialog.
> This could also be designed as a configuration option of the UI that the user 
> can define (whether double-clicking opens the config dialog, does something 
> else, or simply nothing).





[jira] [Commented] (NIFI-3376) Implement content repository ResourceClaim compaction

2017-07-26 Thread Brandon Zachary (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3376?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16102152#comment-16102152
 ] 

Brandon Zachary commented on NIFI-3376:
---

The content repo has its own dedicated storage of 14 GB.

nifi.content.repository.implementation=org.apache.nifi.controller.repository.FileSystemRepository
nifi.content.claim.max.appendable.size=10 MB
nifi.content.claim.max.flow.files=100
nifi.content.repository.directory.default=./content_repository
nifi.content.repository.archive.max.retention.period=12 hours
nifi.content.repository.archive.max.usage.percentage=50%
nifi.content.repository.archive.enabled=false
nifi.content.repository.always.sync=false
nifi.content.viewer.url=/nifi-content-viewer/

Everything in that section is at its default value, with the exception of 
archiving, which we turned off. 

> Implement content repository ResourceClaim compaction
> -
>
> Key: NIFI-3376
> URL: https://issues.apache.org/jira/browse/NIFI-3376
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Affects Versions: 0.7.1, 1.1.1
>Reporter: Michael Moser
>Assignee: Michael Hogue
>
> On NiFi systems that deal with many files whose size is less than 1 MB, we 
> often see that the actual disk usage of the content_repository is much 
> greater than the size of flowfiles that NiFi reports are in its queues.  As 
> an example, NiFi may report "50,000 / 12.5 GB" but the content_repository 
> takes up 240 GB of its file system.  This leads to scenarios where a 500 GB 
> content_repository file system gets 100% full, but "I only had 40 GB of data 
> in my NiFi!"
> When several content claims exist in a single resource claim, and most but 
> not all content claims are terminated, the entire resource claim is still not 
> eligible for deletion or archive.  This could mean that only one 10 KB 
> content claim out of a 1 MB resource claim is counted by NiFi as existing in 
> its queues.
> If a particular flow has a slow egress point where flowfiles could back up 
> and remain on the system longer than expected, this problem is exacerbated.
> A potential solution is to compact resource claim files on disk. A background 
> thread could examine all resource claims, and for those that get "old" and 
> whose active content claim usage drops below a threshold, then rewrite the 
> resource claim file.
> A potential work-around is to allow modification of the FileSystemRepository 
> MAX_APPENDABLE_CLAIM_LENGTH to make it a smaller number.  This would increase 
> the probability that the content claims reference count in a resource claim 
> would reach 0 and the resource claim becomes eligible for deletion/archive.  
> Let users trade-off performance for more accurate accounting of NiFi queue 
> size to content repository size.





[jira] [Updated] (NIFI-4170) PutWebSocket processor does not support 'Penalty duration' and 'Yield duration' settings

2017-07-26 Thread Y Wikander (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4170?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Y Wikander updated NIFI-4170:
-
Description: PutWebSocket processor does not support 'Penalty duration' 
setting.  (was: PutWebSocket processor does not support 'Penalty duration' and 
'Yield duration' settings.

I'm assuming that calling context.yield() will also cover 'Penalty duration'.)

> PutWebSocket processor does not support 'Penalty duration' and 'Yield 
> duration' settings
> 
>
> Key: NIFI-4170
> URL: https://issues.apache.org/jira/browse/NIFI-4170
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 1.3.0
>Reporter: Y Wikander
>Priority: Minor
>  Labels: patch
> Attachments: 
> 0001-websocket-PutWebSocket-processor-support-Penalty-dur.patch, 
> 0002-websocket-PutWebSocket-processor-support-Penalty-dur.patch
>
>
> PutWebSocket processor does not support 'Penalty duration' setting.





[jira] [Updated] (NIFI-4170) PutWebSocket processor does not support 'Penalty duration' setting

2017-07-26 Thread Y Wikander (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4170?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Y Wikander updated NIFI-4170:
-
Summary: PutWebSocket processor does not support 'Penalty duration' 
setting  (was: PutWebSocket processor does not support 'Penalty duration' and 
'Yield duration' settings)

> PutWebSocket processor does not support 'Penalty duration' setting
> ---
>
> Key: NIFI-4170
> URL: https://issues.apache.org/jira/browse/NIFI-4170
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 1.3.0
>Reporter: Y Wikander
>Priority: Minor
>  Labels: patch
> Attachments: 
> 0001-websocket-PutWebSocket-processor-support-Penalty-dur.patch, 
> 0002-websocket-PutWebSocket-processor-support-Penalty-dur.patch
>
>
> PutWebSocket processor does not support 'Penalty duration' setting.





[jira] [Commented] (NIFI-1580) Allow double-click to display config of processor

2017-07-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-1580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16102121#comment-16102121
 ] 

ASF GitHub Bot commented on NIFI-1580:
--

Github user scottyaslan commented on the issue:

https://github.com/apache/nifi/pull/2009
  
@yuri1969 everything looks good but I noticed that when double clicking on 
a connection nothing happens. What I would expect to happen is that the 
'Connection Configuration Dialog' should open. I think a quickSelect needs to 
be activated somewhere in the nf-connection.js when a connection is double 
clicked... 


> Allow double-click to display config of processor
> -
>
> Key: NIFI-1580
> URL: https://issues.apache.org/jira/browse/NIFI-1580
> Project: Apache NiFi
>  Issue Type: Wish
>  Components: Core UI
>Affects Versions: 0.4.1
> Environment: all
>Reporter: Uwe Geercken
>Priority: Minor
>  Labels: features, processor, ui
>
> A user frequently has to open the "config" dialog when designing NiFi flows. 
> Each time, the user has to right-click the processor and select "config" from 
> the menu.
> It would be quicker if it were possible to double-click a processor - 
> or maybe its title area - to display the config dialog.
> This could also be designed as a configuration option of the UI that the user 
> can define (whether double-clicking opens the config dialog, does something 
> else, or simply nothing).





[GitHub] nifi issue #2009: NIFI-1580 - Allow double-click to display config

2017-07-26 Thread scottyaslan
Github user scottyaslan commented on the issue:

https://github.com/apache/nifi/pull/2009
  
@yuri1969 everything looks good but I noticed that when double clicking on 
a connection nothing happens. What I would expect to happen is that the 
'Connection Configuration Dialog' should open. I think a quickSelect needs to 
be activated somewhere in the nf-connection.js when a connection is double 
clicked... 




[jira] [Commented] (NIFI-4215) Avro schemas with records that have a field of themselves fail to parse, causing stackoverflow exception

2017-07-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16102091#comment-16102091
 ] 

ASF GitHub Bot commented on NIFI-4215:
--

Github user jvwing commented on the issue:

https://github.com/apache/nifi/pull/2034
  
Thanks @pvillard31!

@Wesley-Lawrence Don't worry about squashing yet, we can do that as a final 
step.


> Avro schemas with records that have a field of themselves fail to parse, 
> causing stackoverflow exception
> 
>
> Key: NIFI-4215
> URL: https://issues.apache.org/jira/browse/NIFI-4215
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.4.0
>Reporter: Wesley L Lawrence
>Priority: Minor
> Fix For: 1.4.0
>
> Attachments: nifi-4215.patch
>
>
> Noticed this while attempting to use the AvroSchemaRegistry with some complex 
> schema. Boiled down, Avro lets you define a schema such as;
> {code}
> { 
>   "namespace": "org.apache.nifi.testing", 
>   "name": "CompositRecord", 
>   "type": "record", 
>   "fields": [ 
> { 
>   "name": "id", 
>   "type": "int" 
> }, 
> { 
>   "name": "value", 
>   "type": "string" 
> }, 
> { 
>   "name": "parent", 
>   "type": [
> "null",
> "CompositRecord"
>   ]
> } 
>   ] 
> }
> {code}
> The AvroSchemaRegistry (AvroTypeUtil specifically) will fail to parse, and 
> generate a stackoverflow exception.
> I've whipped up a fix, tested it out in 1.4.0, and am just running through 
> the contrib build before I submit a patch.
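Not the attached patch, just a self-contained illustration of the memoization idea that comes up later in this thread (track record types by full name so a self-referencing record stops the recursion instead of overflowing the stack). The describe() helper is invented for the example; only the Avro Schema API calls are real.

```java
import org.apache.avro.Schema;

import java.util.HashMap;
import java.util.Map;

public class RecursiveSchemaWalk {

    public static void main(String[] args) {
        // The self-referencing schema from the issue description, inlined as JSON.
        String json = "{\"namespace\":\"org.apache.nifi.testing\",\"name\":\"CompositRecord\","
                + "\"type\":\"record\",\"fields\":["
                + "{\"name\":\"id\",\"type\":\"int\"},"
                + "{\"name\":\"value\",\"type\":\"string\"},"
                + "{\"name\":\"parent\",\"type\":[\"null\",\"CompositRecord\"]}]}";
        Schema schema = new Schema.Parser().parse(json);
        System.out.println(describe(schema, new HashMap<>()));
    }

    // Naive recursion on 'parent' would never terminate; remembering the record's
    // full name before descending breaks the cycle.
    static String describe(Schema schema, Map<String, String> known) {
        switch (schema.getType()) {
            case RECORD:
                String fullName = schema.getFullName();
                if (known.containsKey(fullName)) {
                    return known.get(fullName);   // already being described: stop here
                }
                known.put(fullName, fullName);    // register before recursing into fields
                StringBuilder record = new StringBuilder(fullName).append("{");
                for (Schema.Field field : schema.getFields()) {
                    record.append(field.name()).append(": ")
                          .append(describe(field.schema(), known)).append(" ");
                }
                return record.append("}").toString();
            case UNION:
                StringBuilder union = new StringBuilder("union[");
                for (Schema option : schema.getTypes()) {
                    union.append(describe(option, known)).append(" ");
                }
                return union.append("]").toString();
            default:
                return schema.getType().getName();
        }
    }
}
```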





[jira] [Commented] (NIFI-3376) Implement content repository ResourceClaim compaction

2017-07-26 Thread Joseph Witt (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3376?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16102090#comment-16102090
 ] 

Joseph Witt commented on NIFI-3376:
---

Brandon,

How large is the disk/partition you have NiFi using for content storage?  Is 
that location/partition dedicated to NiFi content?

Can you share your settings for the following properties from nifi.properties:


nifi.content.repository.implementation=org.apache.nifi.controller.repository.FileSystemRepository
nifi.content.claim.max.appendable.size=10 MB
nifi.content.claim.max.flow.files=100
nifi.content.repository.directory.default=./content_repository
nifi.content.repository.archive.max.retention.period=12 hours
nifi.content.repository.archive.max.usage.percentage=50%
nifi.content.repository.archive.enabled=true
nifi.content.repository.always.sync=false
nifi.content.viewer.url=/nifi-content-viewer/

> Implement content repository ResourceClaim compaction
> -
>
> Key: NIFI-3376
> URL: https://issues.apache.org/jira/browse/NIFI-3376
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Affects Versions: 0.7.1, 1.1.1
>Reporter: Michael Moser
>Assignee: Michael Hogue
>
> On NiFi systems that deal with many files whose size is less than 1 MB, we 
> often see that the actual disk usage of the content_repository is much 
> greater than the size of flowfiles that NiFi reports are in its queues.  As 
> an example, NiFi may report "50,000 / 12.5 GB" but the content_repository 
> takes up 240 GB of its file system.  This leads to scenarios where a 500 GB 
> content_repository file system gets 100% full, but "I only had 40 GB of data 
> in my NiFi!"
> When several content claims exist in a single resource claim, and most but 
> not all content claims are terminated, the entire resource claim is still not 
> eligible for deletion or archive.  This could mean that only one 10 KB 
> content claim out of a 1 MB resource claim is counted by NiFi as existing in 
> its queues.
> If a particular flow has a slow egress point where flowfiles could back up 
> and remain on the system longer than expected, this problem is exacerbated.
> A potential solution is to compact resource claim files on disk. A background 
> thread could examine all resource claims, and for those that get "old" and 
> whose active content claim usage drops below a threshold, then rewrite the 
> resource claim file.
> A potential work-around is to allow modification of the FileSystemRepository 
> MAX_APPENDABLE_CLAIM_LENGTH to make it a smaller number.  This would increase 
> the probability that the content claims reference count in a resource claim 
> would reach 0 and the resource claim becomes eligible for deletion/archive.  
> Let users trade-off performance for more accurate accounting of NiFi queue 
> size to content repository size.





[GitHub] nifi issue #2034: NIFI-4215 Fixed stackoverflow error when NiFi tries to par...

2017-07-26 Thread jvwing
Github user jvwing commented on the issue:

https://github.com/apache/nifi/pull/2034
  
Thanks @pvillard31!

@Wesley-Lawrence Don't worry about squashing yet, we can do that as a final 
step.




[jira] [Commented] (NIFI-3376) Implement content repository ResourceClaim compaction

2017-07-26 Thread Brandon Zachary (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3376?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16102070#comment-16102070
 ] 

Brandon Zachary commented on NIFI-3376:
---

Hello, I found this ticket and I think it matches an issue that I'm 
currently running into. We use the MergeContent processor to merge 
files based on a specific flowfile attribute. The files being merged 
are relatively small, around 10 KB on average per file. We also get incoming 
files that can be upwards of 1 to 2 GB each, which normally get processed and 
are removed from the graph. However, we are running into the issue described 
above, where the content_repo does not (approximately) match what's on the graph. I 
reproduced the issue on another instance running 1.2.0 where I had about 
344 MB / 5,470 flowfiles queued, but 7.4 GB existed in the content_repo and it was 
never purged. I waited approximately 30-45 minutes (still never purged). If 
the solution described above is deemed too expensive, I'm wondering whether 
other avenues are being discussed to fix this problem, because we typically 
don't catch this until our content_repo is 100% full and it eventually 
backs up input from upstream.

Thanks!

> Implement content repository ResourceClaim compaction
> -
>
> Key: NIFI-3376
> URL: https://issues.apache.org/jira/browse/NIFI-3376
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Affects Versions: 0.7.1, 1.1.1
>Reporter: Michael Moser
>Assignee: Michael Hogue
>
> On NiFi systems that deal with many files whose size is less than 1 MB, we 
> often see that the actual disk usage of the content_repository is much 
> greater than the size of flowfiles that NiFi reports are in its queues.  As 
> an example, NiFi may report "50,000 / 12.5 GB" but the content_repository 
> takes up 240 GB of its file system.  This leads to scenarios where a 500 GB 
> content_repository file system gets 100% full, but "I only had 40 GB of data 
> in my NiFi!"
> When several content claims exist in a single resource claim, and most but 
> not all content claims are terminated, the entire resource claim is still not 
> eligible for deletion or archive.  This could mean that only one 10 KB 
> content claim out of a 1 MB resource claim is counted by NiFi as existing in 
> its queues.
> If a particular flow has a slow egress point where flowfiles could back up 
> and remain on the system longer than expected, this problem is exacerbated.
> A potential solution is to compact resource claim files on disk. A background 
> thread could examine all resource claims, and for those that get "old" and 
> whose active content claim usage drops below a threshold, then rewrite the 
> resource claim file.
> A potential work-around is to allow modification of the FileSystemRepository 
> MAX_APPENDABLE_CLAIM_LENGTH to make it a smaller number.  This would increase 
> the probability that the content claims reference count in a resource claim 
> would reach 0 and the resource claim becomes eligible for deletion/archive.  
> Let users trade-off performance for more accurate accounting of NiFi queue 
> size to content repository size.





[jira] [Commented] (NIFI-3335) GenerateTableFetch should allow you to specify an initial Max Value

2017-07-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3335?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16102025#comment-16102025
 ] 

ASF GitHub Bot commented on NIFI-3335:
--

Github user mattyb149 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2039#discussion_r129648705
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/GenerateTableFetch.java
 ---
@@ -198,6 +201,25 @@ public void onTrigger(final ProcessContext context, 
final ProcessSessionFactory
 // set as the current state map (after the session has been 
committed)
 final Map<String, String> statePropertyMap = new 
HashMap<>(stateMap.toMap());
 
+// If an initial max value for column(s) has been specified 
using properties, and this column is not in the state manager, sync them to the 
state property map
+final Map<String, String> maxValueProperties = 
getDefaultMaxValueProperties(context.getProperties());
--- End diff --

Yeah that's a better idea, should've been done in the original PR but I 
missed it during my review and just cut-pasted it back to the abstract class to 
share.
I won't be able to move all that common code block as it needs the table 
name which can come from a flow file. But I can call 
getDefaultMaxValueProperties() at schedule-time vs trigger time.
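Roughly what moving that call to schedule time looks like (illustrative fragment; the class is a stand-in and the getDefaultMaxValueProperties() signature is assumed, not copied from the real abstract class):

```java
import org.apache.nifi.annotation.lifecycle.OnScheduled;
import org.apache.nifi.processor.ProcessContext;

import java.util.Map;

// Cache the configured initial max values once when the processor is scheduled,
// instead of re-reading the processor properties on every onTrigger call.
public abstract class ScheduleTimeMaxValuesSketch {

    private volatile Map<String, String> maxValueProperties;

    @OnScheduled
    public void cacheMaxValues(final ProcessContext context) {
        maxValueProperties = getDefaultMaxValueProperties(context.getProperties());
    }

    protected Map<String, String> getMaxValueProperties() {
        return maxValueProperties;
    }

    // Assumed to be provided by the shared abstract class mentioned above.
    protected abstract Map<String, String> getDefaultMaxValueProperties(Map<?, ?> properties);
}
```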


> GenerateTableFetch should allow you to specify an initial Max Value
> ---
>
> Key: NIFI-3335
> URL: https://issues.apache.org/jira/browse/NIFI-3335
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Matt Burgess
>Assignee: Matt Burgess
>
> NIFI-2583 added the ability (via dynamic properties) to specify initial Max 
> Values for columns, to enable the user to "pick up where they left off" if 
> something happened with a flow, a NiFi instance, etc. where the state was 
> stored but the processing did not complete successfully.
> This feature would also be helpful in GenerateTableFetch, which also supports 
> max-value columns.
> Since NIFI-2881 adds incoming flow file support, it's more useful if Initial 
> max values can be specified via flow file attributes. Because if a table name 
> is dynamically passed via flow file attribute and Expression Language, user 
> won't be able to configure dynamic processor attribute in advance for each 
> possible table.
> Add dynamic properties ('initial.maxvalue.<column name>', same as 
> QueryDatabaseTable) to specify initial max values statically, and also use 
> incoming flow file attributes named 'initial.maxvalue.<column name>' if 
> any. 





[jira] [Commented] (NIFI-4215) Avro schemas with records that have a field of themselves fail to parse, causing stackoverflow exception

2017-07-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16102027#comment-16102027
 ] 

ASF GitHub Bot commented on NIFI-4215:
--

Github user Wesley-Lawrence commented on the issue:

https://github.com/apache/nifi/pull/2034
  
@pvillard31 Yup, you're right, that solves the issue for me 😃.

I got so caught up in the second `RightCurly` definition saying `}` should 
be alone, and other `if`s being that way, that I didn't see the correct style for 
`if ... else`s.

I'll fix that line, re-add the original `RightCurly` definition back in, 
and push a new squashed commit.

Thanks again, both of you.


> Avro schemas with records that have a field of themselves fail to parse, 
> causing stackoverflow exception
> 
>
> Key: NIFI-4215
> URL: https://issues.apache.org/jira/browse/NIFI-4215
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.4.0
>Reporter: Wesley L Lawrence
>Priority: Minor
> Fix For: 1.4.0
>
> Attachments: nifi-4215.patch
>
>
> Noticed this while attempting to use the AvroSchemaRegistry with some complex 
> schema. Boiled down, Avro lets you define a schema such as;
> {code}
> { 
>   "namespace": "org.apache.nifi.testing", 
>   "name": "CompositRecord", 
>   "type": "record", 
>   "fields": [ 
> { 
>   "name": "id", 
>   "type": "int" 
> }, 
> { 
>   "name": "value", 
>   "type": "string" 
> }, 
> { 
>   "name": "parent", 
>   "type": [
> "null",
> "CompositRecord"
>   ]
> } 
>   ] 
> }
> {code}
> The AvroSchemaRegistry (AvroTypeUtil specifically) will fail to parse, and 
> generate a stackoverflow exception.
> I've whipped up a fix, tested it out in 1.4.0, and am just running through 
> the contrib build before I submit a patch.





[GitHub] nifi issue #2034: NIFI-4215 Fixed stackoverflow error when NiFi tries to par...

2017-07-26 Thread Wesley-Lawrence
Github user Wesley-Lawrence commented on the issue:

https://github.com/apache/nifi/pull/2034
  
@pvillard31 Yup, you're right, that solves the issue for me 😃.

I got so caught up in the second `RightCurly` definition saying `}` should 
be alone, and other `if`s being that way, that I didn't see the correct style for 
`if ... else`s.

I'll fix that line, re-add the original `RightCurly` definition back in, 
and push a new squashed commit.

Thanks again, both of you.




[GitHub] nifi pull request #2039: NIFI-3335: Add initial.maxvalue support to Generate...

2017-07-26 Thread mattyb149
Github user mattyb149 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2039#discussion_r129648705
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/GenerateTableFetch.java
 ---
@@ -198,6 +201,25 @@ public void onTrigger(final ProcessContext context, 
final ProcessSessionFactory
 // set as the current state map (after the session has been 
committed)
 final Map<String, String> statePropertyMap = new 
HashMap<>(stateMap.toMap());
 
+// If an initial max value for column(s) has been specified 
using properties, and this column is not in the state manager, sync them to the 
state property map
+final Map<String, String> maxValueProperties = 
getDefaultMaxValueProperties(context.getProperties());
--- End diff --

Yeah, that's a better idea. It should have been done in the original PR, but I 
missed it during my review and just cut-and-pasted it back to the abstract class to 
share.
I won't be able to move all of that common code, as it needs the table 
name, which can come from a flow file. But I can call 
getDefaultMaxValueProperties() at schedule time instead of trigger time.




[jira] [Commented] (NIFI-4232) AvroRecordSetWriter not properly handling Arrays of Records

2017-07-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4232?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16101965#comment-16101965
 ] 

ASF GitHub Bot commented on NIFI-4232:
--

Github user pvillard31 commented on the issue:

https://github.com/apache/nifi/pull/2040
  
Reviewing...


> AvroRecordSetWriter not properly handling Arrays of Records
> ---
>
> Key: NIFI-4232
> URL: https://issues.apache.org/jira/browse/NIFI-4232
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Reporter: Mark Payne
>Assignee: Mark Payne
> Fix For: 1.4.0
>
>
> I have JSON coming in that has an Array of complex JSON objects. When I try 
> to convert it to Avro via ConvertRecord, it fails, with the following error:
> {code}
> ConvertRecord[id=4c8b14f0-1027-115d-2dd0-33fb39b2fc23] Failed to process 
> StandardFlowFileRecord[uuid=8506b97c-31fe-4598-b645-a526134b0f9f,claim=StandardContentClaim
>  [resourceClaim=StandardResourceClaim[id=1501080084533-2, container=default, 
> section=2], offset=2192294, 
> length=5646],offset=0,name=286941224561187,size=5646]; will route to failure: 
> org.apache.nifi.serialization.record.util.IllegalTypeConversionException: 
> Cannot convert value [Ljava.lang.Object;@746d38cd of type class 
> [Ljava.lang.Object; because no compatible types exist in the UNION
> {code}





[GitHub] nifi issue #2040: NIFI-4232: Ensure that we handle conversions to Avro Array...

2017-07-26 Thread pvillard31
Github user pvillard31 commented on the issue:

https://github.com/apache/nifi/pull/2040
  
Reviewing...




[jira] [Commented] (NIFI-4215) Avro schemas with records that have a field of themselves fail to parse, causing stackoverflow exception

2017-07-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16101962#comment-16101962
 ] 

ASF GitHub Bot commented on NIFI-4215:
--

Github user pvillard31 commented on the issue:

https://github.com/apache/nifi/pull/2034
  
@Wesley-Lawrence I believe this should be:

```java
if () {

} else {

}
```



> Avro schemas with records that have a field of themselves fail to parse, 
> causing stackoverflow exception
> 
>
> Key: NIFI-4215
> URL: https://issues.apache.org/jira/browse/NIFI-4215
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.4.0
>Reporter: Wesley L Lawrence
>Priority: Minor
> Fix For: 1.4.0
>
> Attachments: nifi-4215.patch
>
>
> Noticed this while attempting to use the AvroSchemaRegistry with some complex 
> schema. Boiled down, Avro lets you define a schema such as;
> {code}
> { 
>   "namespace": "org.apache.nifi.testing", 
>   "name": "CompositRecord", 
>   "type": "record", 
>   "fields": [ 
> { 
>   "name": "id", 
>   "type": "int" 
> }, 
> { 
>   "name": "value", 
>   "type": "string" 
> }, 
> { 
>   "name": "parent", 
>   "type": [
> "null",
> "CompositRecord"
>   ]
> } 
>   ] 
> }
> {code}
> The AvroSchemaRegistry (AvroTypeUtil specifically) will fail to parse, and 
> generate a stackoverflow exception.
> I've whipped up a fix, tested it out in 1.4.0, and am just running through 
> the contrib build before I submit a patch.





[GitHub] nifi issue #2034: NIFI-4215 Fixed stackoverflow error when NiFi tries to par...

2017-07-26 Thread pvillard31
Github user pvillard31 commented on the issue:

https://github.com/apache/nifi/pull/2034
  
@Wesley-Lawrence I believe this should be:

```java
if () {

} else {

}
```





[jira] [Commented] (NIFI-4215) Avro schemas with records that have a field of themselves fail to parse, causing stackoverflow exception

2017-07-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16101959#comment-16101959
 ] 

ASF GitHub Bot commented on NIFI-4215:
--

Github user Wesley-Lawrence commented on the issue:

https://github.com/apache/nifi/pull/2034
  
Thanks for taking a look @jvwing!

I didn't want to change it, but I keep getting the following error with it;
```
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-checkstyle-plugin:2.15:check (check-style) on 
project nifi-avro-record-utils: You have 1 Checkstyle violation. -> [Help 1]
```

Earlier in the maven log;
```
[INFO] --- maven-checkstyle-plugin:2.15:check (check-style) @ 
nifi-avro-record-utils ---
[WARNING] src/main/java/org/apache/nifi/avro/AvroTypeUtil.java[275:17] 
(blocks) RightCurly: '}' should be on the same line.
```

Which references a `}` I added here;
```
273     if (knownRecordTypes.containsKey(schemaFullName)) {
274         return knownRecordTypes.get(schemaFullName);
275 --> }
276     else {
```

However, this is the style used everywhere in NiFi, and is the one defined 
by the `RightCurly` section below the one I removed.

I suspect it's something weird in my environment, but removing the default 
`RightCurly` definition fixed my issue, and it looks like it was just left over 
from some old migration, so I figured it could be safely removed.

Out of curiosity, if you run a contrib check, do you get the same error I 
do?


> Avro schemas with records that have a field of themselves fail to parse, 
> causing stackoverflow exception
> 
>
> Key: NIFI-4215
> URL: https://issues.apache.org/jira/browse/NIFI-4215
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.4.0
>Reporter: Wesley L Lawrence
>Priority: Minor
> Fix For: 1.4.0
>
> Attachments: nifi-4215.patch
>
>
> Noticed this while attempting to use the AvroSchemaRegistry with some complex 
> schema. Boiled down, Avro lets you define a schema such as;
> {code}
> { 
>   "namespace": "org.apache.nifi.testing", 
>   "name": "CompositRecord", 
>   "type": "record", 
>   "fields": [ 
> { 
>   "name": "id", 
>   "type": "int" 
> }, 
> { 
>   "name": "value", 
>   "type": "string" 
> }, 
> { 
>   "name": "parent", 
>   "type": [
> "null",
> "CompositRecord"
>   ]
> } 
>   ] 
> }
> {code}
> The AvroSchemaRegistry (AvroTypeUtil specifically) will fail to parse, and 
> generate a stackoverflow exception.
> I've whipped up a fix, tested it out in 1.4.0, and am just running through 
> the contrib build before I submit a patch.





[GitHub] nifi issue #2034: NIFI-4215 Fixed stackoverflow error when NiFi tries to par...

2017-07-26 Thread Wesley-Lawrence
Github user Wesley-Lawrence commented on the issue:

https://github.com/apache/nifi/pull/2034
  
Thanks for taking a look @jvwing!

I didn't want to change it, but I keep getting the following error with it;
```
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-checkstyle-plugin:2.15:check (check-style) on 
project nifi-avro-record-utils: You have 1 Checkstyle violation. -> [Help 1]
```

Earlier in the maven log;
```
[INFO] --- maven-checkstyle-plugin:2.15:check (check-style) @ 
nifi-avro-record-utils ---
[WARNING] src/main/java/org/apache/nifi/avro/AvroTypeUtil.java[275:17] 
(blocks) RightCurly: '}' should be on the same line.
```

Which references a `}` I added here;
```
273     if (knownRecordTypes.containsKey(schemaFullName)) {
274         return knownRecordTypes.get(schemaFullName);
275 --> }
276     else {
```

However, this is the style used everywhere in NiFi, and is the one defined 
by the `RightCurly` section below the one I removed.

I suspect it's something weird in my environment, but removing the default 
`RightCurly` definition fixed my issue, and it looks like it was just left over 
from some old migration, so I figured it could be safely removed.

Out of curiosity, if you run a contrib check, do you get the same error I 
do?




[jira] [Updated] (NIFI-4232) AvroRecordSetWriter not properly handling Arrays of Records

2017-07-26 Thread Mark Payne (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4232?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Payne updated NIFI-4232:
-
Fix Version/s: 1.4.0
   Status: Patch Available  (was: Open)

> AvroRecordSetWriter not properly handling Arrays of Records
> ---
>
> Key: NIFI-4232
> URL: https://issues.apache.org/jira/browse/NIFI-4232
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Reporter: Mark Payne
>Assignee: Mark Payne
> Fix For: 1.4.0
>
>
> I have JSON coming in that has an Array of complex JSON objects. When I try 
> to convert it to Avro via ConvertRecord, it fails, with the following error:
> {code}
> ConvertRecord[id=4c8b14f0-1027-115d-2dd0-33fb39b2fc23] Failed to process 
> StandardFlowFileRecord[uuid=8506b97c-31fe-4598-b645-a526134b0f9f,claim=StandardContentClaim
>  [resourceClaim=StandardResourceClaim[id=1501080084533-2, container=default, 
> section=2], offset=2192294, 
> length=5646],offset=0,name=286941224561187,size=5646]; will route to failure: 
> org.apache.nifi.serialization.record.util.IllegalTypeConversionException: 
> Cannot convert value [Ljava.lang.Object;@746d38cd of type class 
> [Ljava.lang.Object; because no compatible types exist in the UNION
> {code}





[jira] [Commented] (NIFI-4232) AvroRecordSetWriter not properly handling Arrays of Records

2017-07-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4232?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16101933#comment-16101933
 ] 

ASF GitHub Bot commented on NIFI-4232:
--

GitHub user markap14 opened a pull request:

https://github.com/apache/nifi/pull/2040

NIFI-4232: Ensure that we handle conversions to Avro Arrays properly.…

… Also, if unable to convert a value to the expected object, include in the 
log message the (fully qualified) name of the field that is problematic

Thank you for submitting a contribution to Apache NiFi.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [ ] Is there a JIRA ticket associated with this PR? Is it referenced 
 in the commit message?

- [ ] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number 
you are trying to resolve? Pay particular attention to the hyphen "-" character.

- [ ] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [ ] Is your initial contribution a single, squashed commit?

### For code changes:
- [ ] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi folder?
- [ ] Have you written or updated unit tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
- [ ] If applicable, have you updated the LICENSE file, including the main 
LICENSE file under nifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly?
- [ ] If adding new Properties, have you added .displayName in addition to 
.name (programmatic access) for each of the new properties?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/markap14/nifi NIFI-4232

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/2040.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2040


commit 6602374cb12567f411915e369ad9b8026c36e42d
Author: Mark Payne 
Date:   2017-07-26T17:04:10Z

NIFI-4232: Ensure that we handle conversions to Avro Arrays properly. Also, 
if unable to convert a value to the expected object, include in the log message 
the (fully qualified) name of the field that is problematic




> AvroRecordSetWriter not properly handling Arrays of Records
> ---
>
> Key: NIFI-4232
> URL: https://issues.apache.org/jira/browse/NIFI-4232
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Reporter: Mark Payne
>Assignee: Mark Payne
> Fix For: 1.4.0
>
>
> I have JSON coming in that has an Array of complex JSON objects. When I try 
> to convert it to Avro via ConvertRecord, it fails, with the following error:
> {code}
> ConvertRecord[id=4c8b14f0-1027-115d-2dd0-33fb39b2fc23] Failed to process 
> StandardFlowFileRecord[uuid=8506b97c-31fe-4598-b645-a526134b0f9f,claim=StandardContentClaim
>  [resourceClaim=StandardResourceClaim[id=1501080084533-2, container=default, 
> section=2], offset=2192294, 
> length=5646],offset=0,name=286941224561187,size=5646]; will route to failure: 
> org.apache.nifi.serialization.record.util.IllegalTypeConversionException: 
> Cannot convert value [Ljava.lang.Object;@746d38cd of type class 
> [Ljava.lang.Object; because no compatible types exist in the UNION
> {code}





[GitHub] nifi pull request #2040: NIFI-4232: Ensure that we handle conversions to Avr...

2017-07-26 Thread markap14
GitHub user markap14 opened a pull request:

https://github.com/apache/nifi/pull/2040

NIFI-4232: Ensure that we handle conversions to Avro Arrays properly.…

… Also, if unable to convert a value to the expected object, include in 
the log message the (fully qualified) name of the field that is problematic

Thank you for submitting a contribution to Apache NiFi.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [ ] Is there a JIRA ticket associated with this PR? Is it referenced 
 in the commit message?

- [ ] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number 
you are trying to resolve? Pay particular attention to the hyphen "-" character.

- [ ] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [ ] Is your initial contribution a single, squashed commit?

### For code changes:
- [ ] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi folder?
- [ ] Have you written or updated unit tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
- [ ] If applicable, have you updated the LICENSE file, including the main 
LICENSE file under nifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly?
- [ ] If adding new Properties, have you added .displayName in addition to 
.name (programmatic access) for each of the new properties?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/markap14/nifi NIFI-4232

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/2040.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2040


commit 6602374cb12567f411915e369ad9b8026c36e42d
Author: Mark Payne 
Date:   2017-07-26T17:04:10Z

NIFI-4232: Ensure that we handle conversions to Avro Arrays properly. Also, 
if unable to convert a value to the expected object, include in the log message 
the (fully qualified) name of the field that is problematic






[jira] [Created] (NIFI-4232) AvroRecordSetWriter not properly handling Arrays of Records

2017-07-26 Thread Mark Payne (JIRA)
Mark Payne created NIFI-4232:


 Summary: AvroRecordSetWriter not properly handling Arrays of 
Records
 Key: NIFI-4232
 URL: https://issues.apache.org/jira/browse/NIFI-4232
 Project: Apache NiFi
  Issue Type: Bug
  Components: Extensions
Reporter: Mark Payne
Assignee: Mark Payne


I have JSON coming in that has an Array of complex JSON objects. When I try to 
convert it to Avro via ConvertRecord, it fails, with the following error:

{code}
ConvertRecord[id=4c8b14f0-1027-115d-2dd0-33fb39b2fc23] Failed to process 
StandardFlowFileRecord[uuid=8506b97c-31fe-4598-b645-a526134b0f9f,claim=StandardContentClaim
 [resourceClaim=StandardResourceClaim[id=1501080084533-2, container=default, 
section=2], offset=2192294, 
length=5646],offset=0,name=286941224561187,size=5646]; will route to failure: 
org.apache.nifi.serialization.record.util.IllegalTypeConversionException: 
Cannot convert value [Ljava.lang.Object;@746d38cd of type class 
[Ljava.lang.Object; because no compatible types exist in the UNION
{code}
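Not the fix in this PR, just an illustration of the conversion that the quoted error is about: a plain Object[] has to be wrapped in a GenericData.Array built from the ARRAY branch of the UNION before the Avro writer will accept it. The helper and schema below are made up for the example.

```java
import org.apache.avro.Schema;
import org.apache.avro.generic.GenericData;

import java.util.Arrays;

public class UnionArraySketch {

    // Find the ARRAY branch of a union like ["null", {"type":"array", ...}] and
    // wrap the raw values in a GenericData.Array of that schema.
    static Object toAvroArray(final Object[] values, final Schema unionSchema) {
        final Schema arraySchema = unionSchema.getTypes().stream()
                .filter(s -> s.getType() == Schema.Type.ARRAY)
                .findFirst()
                .orElseThrow(() -> new IllegalArgumentException("UNION has no ARRAY branch"));
        return new GenericData.Array<>(arraySchema, Arrays.asList(values));
    }

    public static void main(String[] args) {
        final Schema union = new Schema.Parser().parse(
                "[\"null\", {\"type\":\"array\", \"items\":\"string\"}]");
        System.out.println(toAvroArray(new Object[]{"a", "b"}, union));
    }
}
```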





[jira] [Commented] (NIFI-3554) Build does not work if pulling dependencies from a clean environment

2017-07-26 Thread Mark Allerton (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3554?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16101823#comment-16101823
 ] 

Mark Allerton commented on NIFI-3554:
-

For those having to target NiFi 1.1.1 with their custom processors, place the 
following in the project POM file and this issue goes away:


<dependency>
    <groupId>de.svenkubiak</groupId>
    <artifactId>jBCrypt</artifactId>
    <version>0.4.1</version>
</dependency>


> Build does not work if pulling dependencies from a clean environment
> 
>
> Key: NIFI-3554
> URL: https://issues.apache.org/jira/browse/NIFI-3554
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Tools and Build
>Affects Versions: 0.7.1, 1.2.0
> Environment: tested with MacOS X 10.11.6 (El Capitan), but this 
> problem should also happen on Linux-based machines.
>Reporter: Pere Urbon-Bayes
>Assignee: Andre F de Miranda
>Priority: Minor
>  Labels: beginner, maven
> Fix For: 0.8.0, 1.2.0, 0.7.3
>
> Attachments: 
> 0001-NIFI-3554-Fixing-the-jBCrypt-dependency-name-from.patch
>
>
> When building NIFI from scratch the build process raised this issue to me:
> {noformat}
> [INFO] --- maven-surefire-plugin:2.18:test (default-test) @ 
> nifi-site-to-site-reporting-task ---
> [INFO] Changes detected - recompiling the module!
> [INFO] Nothing to compile - all classes are up to date
> [INFO] Surefire report directory: 
> /Users/purbon/work/nifi/nifi/nifi-nar-bundles/nifi-site-to-site-reporting-bundle/nifi-site-to-site-reporting-task/target/surefire-reports
> [INFO] Using configured provider 
> org.apache.maven.surefire.junit4.JUnit4Provider
> ---
> [INFO] 
> [INFO] --- maven-surefire-plugin:2.19.1:test (default-test) @ 
> nifi-gcp-processors ---
>  T E S T S
> ---
> [ERROR] Failed to execute goal on project nifi-standard-processors: Could not 
> resolve dependencies for project 
> org.apache.nifi:nifi-standard-processors:jar:1.2.0-SNAPSHOT: Failure to find 
> de.svenkubiak:jBcrypt:jar:0.4.1 in https://repo1.maven.org/maven2 was cached 
> in the local repository, resolution will not be reattempted until the update 
> interval of central has elapsed or updates are forced -> [Help 1]
> [ERROR] 
> [ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
> switch.
> [ERROR] Re-run Maven using the -X switch to enable full debug logging.
> [ERROR] 
> [ERROR] For more information about the errors and possible solutions, please 
> read the following articles:
> [ERROR] [Help 1] 
> http://cwiki.apache.org/confluence/display/MAVEN/DependencyResolutionException
> [ERROR] 
> [ERROR] After correcting the problems, you can resume the build with the 
> command
> [ERROR]   mvn  -rf :nifi-standard-processors
> {noformat}
> showing a problem with a dependency that could not be retrieved. 
> After a bit of investigation I noticed that the dependency is using a 
> capitalised name, in contrast to the one currently in the repository.
> {code:xml}
> <dependency>
>     <groupId>de.svenkubiak</groupId>
>     <artifactId>jBCrypt</artifactId>
>     <version>0.4.1</version>
> </dependency>
> {code}
> vs the definition now in nifi-standard-processors/pom.xml:
> {code:xml}
> <dependency>
>     <groupId>de.svenkubiak</groupId>
>     <artifactId>jBcrypt</artifactId>
>     <version>0.4.1</version>
> </dependency>
> {code}
> You can reproduce this issue by forcing Maven to retrieve this dependency 
> from the repository, for example by removing it from your .m2/repository 
> directory. 
> This is my first issue reported to the NiFi community; please accept my apologies 
> beforehand if I did something not as expected.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (NIFI-4024) Create EvaluateRecordPath processor

2017-07-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4024?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16101742#comment-16101742
 ] 

ASF GitHub Bot commented on NIFI-4024:
--

Github user MikeThomsen commented on the issue:

https://github.com/apache/nifi/pull/1961
  
@bbende Saw on the mailing list that you've been away for a little while. 
Any chance you're back and ready to take a look?


> Create EvaluateRecordPath processor
> ---
>
> Key: NIFI-4024
> URL: https://issues.apache.org/jira/browse/NIFI-4024
> Project: Apache NiFi
>  Issue Type: New Feature
>Reporter: Steve Champagne
>Priority: Minor
>
> With the new RecordPath DSL, it would be nice if there was a processor that 
> could pull fields into attributes of the flowfile based on a RecordPath. This 
> would be similar to the EvaluateJsonPath processor that currently exists, 
> except it could be used to pull fields from arbitrary record formats. My 
> current use case for it would be pulling fields out of Avro records while 
> skipping the steps of having to convert Avro to JSON, evaluate JsonPath, and 
> then converting back to Avro. 
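
As a sketch of how such a processor might be configured (the processor does not exist yet, so the dynamic property names and RecordPath expressions below are purely hypothetical), the EvaluateJsonPath-style pattern would map one dynamic property per attribute to a RecordPath:

{noformat}
user.id    =>  /user/id
user.name  =>  /user/name
{noformat}

Each matching record field would then be copied into the corresponding FlowFile attribute, regardless of whether the underlying records are Avro, JSON, or CSV.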



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] nifi issue #1961: NIFI-4024 Added org.apache.nifi.hbase.PutHBaseRecord

2017-07-26 Thread MikeThomsen
Github user MikeThomsen commented on the issue:

https://github.com/apache/nifi/pull/1961
  
@bbende Saw on the mailing list that you've been away for a little while. 
Any chance you're back and ready to take a look?


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (NIFI-4124) Add a Record API-based PutMongo clone

2017-07-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16101739#comment-16101739
 ] 

ASF GitHub Bot commented on NIFI-4124:
--

Github user MikeThomsen commented on the issue:

https://github.com/apache/nifi/pull/1945
  
Yay? Nay?


> Add a Record API-based PutMongo clone
> -
>
> Key: NIFI-4124
> URL: https://issues.apache.org/jira/browse/NIFI-4124
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Mike Thomsen
>Priority: Minor
>  Labels: mongodb, putmongo, records
>
> A new processor that can use the Record API to put data into Mongo is needed.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] nifi issue #1945: NIFI-4124 Added org.apache.nifi.mongo.PutMongoRecord.

2017-07-26 Thread MikeThomsen
Github user MikeThomsen commented on the issue:

https://github.com/apache/nifi/pull/1945
  
Yay? Nay?


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (NIFI-4231) UI - Rename dynamic properties

2017-07-26 Thread Joseph Witt (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4231?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16101672#comment-16101672
 ] 

Joseph Witt commented on NIFI-4231:
---

Good point. It is just one of those 'ease of use' things that smooths the ride.

> UI - Rename dynamic properties
> --
>
> Key: NIFI-4231
> URL: https://issues.apache.org/jira/browse/NIFI-4231
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core UI
>Reporter: Pierre Villard
>Priority: Minor
>  Labels: ui, ux
>
> It would be useful if a user could rename a dynamic property on a component. 
> At the moment only the property's value can be edited, so the only option is 
> to delete the property and recreate it with the desired name and the same 
> value as before.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (NIFI-4231) UI - Rename dynamic properties

2017-07-26 Thread Pierre Villard (JIRA)
Pierre Villard created NIFI-4231:


 Summary: UI - Rename dynamic properties
 Key: NIFI-4231
 URL: https://issues.apache.org/jira/browse/NIFI-4231
 Project: Apache NiFi
  Issue Type: Improvement
  Components: Core UI
Reporter: Pierre Villard
Priority: Minor


It would be useful if a user could rename a dynamic property on a component. At 
the moment only the property's value can be edited, so the only option is to 
delete the property and recreate it with the desired name and the same value as 
before.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (NIFI-4230) Use a better default location when pasting via keystrokes outside the context of the originally selected components

2017-07-26 Thread Daniel Chaffelson (JIRA)
Daniel Chaffelson created NIFI-4230:
---

 Summary: Use a better default location when pasting via keystrokes 
outside the context of the originally selected components
 Key: NIFI-4230
 URL: https://issues.apache.org/jira/browse/NIFI-4230
 Project: Apache NiFi
  Issue Type: Improvement
  Components: Core UI
Affects Versions: 1.3.0
Reporter: Daniel Chaffelson
Priority: Minor


When copy/pasting components between different Process Groups, the keystroke 
behavior differs from the right-click behavior. Specifically, the keystroke paste 
does not place the components in the currently focused area of the canvas, but 
appears to position them relative to their original location in the source 
Process Group, which effectively drops them somewhere random on the focused 
canvas and forces the user to hunt them down.

It would be better if all paste behaviors consistently chose a sensible default 
location within the focused area of the canvas.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (NIFI-4229) HandleHTTPRequest: store capture groups in regex as properties

2017-07-26 Thread William H. (JIRA)
William H. created NIFI-4229:


 Summary: HandleHTTPRequest: store capture groups in regex as 
properties
 Key: NIFI-4229
 URL: https://issues.apache.org/jira/browse/NIFI-4229
 Project: Apache NiFi
  Issue Type: Improvement
Affects Versions: 1.3.0
Reporter: William H.
Priority: Minor


It would be great to store the capture groups of the regex specified in the 
"Allowed paths" property of the HandleHTTPRequest processor as attributes.
This would make it easy to build a REST API, since parameters embedded in the 
URL would be parsed automatically.

Ex : 
{noformat}
^\/device\/([-\w\d]+)\/data$
{noformat}

It would accept the URL {noformat}http://localhost/device/01/data{noformat} and 
automatically create the attribute {noformat}http.request.params.1 = 
"01"{noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (NIFI-3335) GenerateTableFetch should allow you to specify an initial Max Value

2017-07-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3335?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16101248#comment-16101248
 ] 

ASF GitHub Bot commented on NIFI-3335:
--

Github user pvillard31 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2039#discussion_r129494131
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/GenerateTableFetch.java
 ---
@@ -198,6 +201,25 @@ public void onTrigger(final ProcessContext context, final ProcessSessionFactory
             // set as the current state map (after the session has been committed)
             final Map<String, String> statePropertyMap = new HashMap<>(stateMap.toMap());
 
+            // If an initial max value for column(s) has been specified using properties, and this column is not in the state manager, sync them to the state property map
+            final Map<String, String> maxValueProperties = getDefaultMaxValueProperties(context.getProperties());
--- End diff --

Would it make sense to do it in the ``@OnScheduled`` method of the abstract 
class? You could share the same code between QueryDatabaseTable and 
GenerateTableFetch. Besides, I'm not sure it needs to be done each time the 
processor is triggered. Thoughts?
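
A minimal sketch of that suggestion, assuming the shared abstract class already exposes getDefaultMaxValueProperties as in the diff above (the class, field, and method names below are otherwise assumptions, not the code under review):

{code}
import java.util.Map;
import org.apache.nifi.annotation.lifecycle.OnScheduled;
import org.apache.nifi.components.PropertyDescriptor;
import org.apache.nifi.processor.ProcessContext;

public abstract class AbstractDatabaseFetchSketch {

    // Cached once per schedule instead of being rebuilt on every onTrigger call
    protected volatile Map<String, String> maxValueProperties;

    // Stands in for the existing helper referenced in the diff above
    protected abstract Map<String, String> getDefaultMaxValueProperties(Map<PropertyDescriptor, String> properties);

    @OnScheduled
    public void captureInitialMaxValues(final ProcessContext context) {
        maxValueProperties = getDefaultMaxValueProperties(context.getProperties());
    }
}
{code}

onTrigger would then only need to merge maxValueProperties into the state map for columns that are not already being tracked.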


> GenerateTableFetch should allow you to specify an initial Max Value
> ---
>
> Key: NIFI-3335
> URL: https://issues.apache.org/jira/browse/NIFI-3335
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Matt Burgess
>Assignee: Matt Burgess
>
> NIFI-2583 added the ability (via dynamic properties) to specify initial Max 
> Values for columns, to enable the user to "pick up where they left off" if 
> something happened with a flow, a NiFi instance, etc. where the state was 
> stored but the processing did not complete successfully.
> This feature would also be helpful in GenerateTableFetch, which also supports 
> max-value columns.
> Since NIFI-2881 adds incoming flow file support, it is more useful if initial 
> max values can also be specified via flow file attributes: if a table name is 
> passed dynamically via a flow file attribute and Expression Language, the user 
> cannot configure a dynamic processor property in advance for each possible 
> table.
> Add dynamic properties ('initial.maxvalue.', the same as 
> QueryDatabaseTable) to specify initial max values statically, and also use 
> incoming flow file attributes named 'initial.maxvalue.', if 
> any. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] nifi pull request #2039: NIFI-3335: Add initial.maxvalue support to Generate...

2017-07-26 Thread pvillard31
Github user pvillard31 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2039#discussion_r129494131
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/GenerateTableFetch.java
 ---
@@ -198,6 +201,25 @@ public void onTrigger(final ProcessContext context, final ProcessSessionFactory
             // set as the current state map (after the session has been committed)
             final Map<String, String> statePropertyMap = new HashMap<>(stateMap.toMap());
 
+            // If an initial max value for column(s) has been specified using properties, and this column is not in the state manager, sync them to the state property map
+            final Map<String, String> maxValueProperties = getDefaultMaxValueProperties(context.getProperties());
--- End diff --

Would it make sense to do it in the ``@OnScheduled`` method of the abstract 
class? You could share the same code between QueryDatabaseTable and 
GenerateTableFetch. Besides, I'm not sure it needs to be done each time the 
processor is triggered. Thoughts?


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---