[jira] [Created] (NIFI-12624) ReportingTask: AWSCloudWatchReporterTask

2024-01-16 Thread Jorge Machado (Jira)
Jorge Machado created NIFI-12624:


 Summary: ReportingTask: AWSCloudWatchReporterTask
 Key: NIFI-12624
 URL: https://issues.apache.org/jira/browse/NIFI-12624
 Project: Apache NiFi
  Issue Type: Wish
  Components: Core Framework
Affects Versions: 1.24.0
Reporter: Jorge Machado


Hey everyone, we already have a PrometheusReportingTask and an 
AzureLogAnalyticsReportingTask; it would be great if we could also push the 
metrics from the PrometheusReportingTask into AWS CloudWatch. What do you think? 
Currently there is no straightforward way of doing this. If you agree, where 
should we put it?
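
As a rough sketch of what such a task could look like (assuming the AWS SDK v2 
CloudWatch client; the class name, metric names and namespace below are 
illustrative only, not an agreed design):
{code:java}
import org.apache.nifi.controller.status.ProcessGroupStatus;
import org.apache.nifi.reporting.AbstractReportingTask;
import org.apache.nifi.reporting.ReportingContext;
import software.amazon.awssdk.services.cloudwatch.CloudWatchClient;
import software.amazon.awssdk.services.cloudwatch.model.MetricDatum;
import software.amazon.awssdk.services.cloudwatch.model.PutMetricDataRequest;
import software.amazon.awssdk.services.cloudwatch.model.StandardUnit;

// Illustrative sketch only: publishes a couple of framework metrics to CloudWatch.
// Credentials, property descriptors and error handling are omitted.
public class AWSCloudWatchReportingTask extends AbstractReportingTask {

    private final CloudWatchClient cloudWatch = CloudWatchClient.create();

    @Override
    public void onTrigger(final ReportingContext context) {
        // Overall controller status, same source the Prometheus task reports from.
        final ProcessGroupStatus status = context.getEventAccess().getControllerStatus();

        final MetricDatum queued = MetricDatum.builder()
                .metricName("QueuedFlowFiles")
                .value((double) status.getQueuedCount())
                .unit(StandardUnit.COUNT)
                .build();

        final MetricDatum activeThreads = MetricDatum.builder()
                .metricName("ActiveThreads")
                .value((double) status.getActiveThreadCount())
                .unit(StandardUnit.COUNT)
                .build();

        cloudWatch.putMetricData(PutMetricDataRequest.builder()
                .namespace("NiFi")
                .metricData(queued, activeThreads)
                .build());
    }
}
{code}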



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (NIFI-8227) Enable extend of AbstractDatabaseFetchProcessor without adding the whole nifi-standard processors

2021-02-15 Thread Jorge Machado (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-8227?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jorge Machado updated NIFI-8227:

Affects Version/s: 1.12.1

> Enable extend of AbstractDatabaseFetchProcessor without adding the whole 
> nifi-standard processors
> -
>
> Key: NIFI-8227
> URL: https://issues.apache.org/jira/browse/NIFI-8227
> Project: Apache NiFi
>  Issue Type: Wish
>Affects Versions: 1.12.1
>Reporter: Jorge Machado
>Priority: Minor
>
> Hi, 
>  
> I'm trying to extend AbstractDatabaseFetchProcessor, which ends up being 
> difficult. The only way is to pull in the standard processors as a dependency, 
> which duplicates all of their processors on the canvas. We should refactor so 
> that AbstractDatabaseFetchProcessor is moved to a place where it can be 
> extended properly.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (NIFI-8227) Enable extend of AbstractDatabaseFetchProcessor without adding the whole nifi-standard processors

2021-02-15 Thread Jorge Machado (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-8227?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jorge Machado updated NIFI-8227:

Priority: Minor  (was: Major)

> Enable extend of AbstractDatabaseFetchProcessor without adding the whole 
> nifi-standard processors
> -
>
> Key: NIFI-8227
> URL: https://issues.apache.org/jira/browse/NIFI-8227
> Project: Apache NiFi
>  Issue Type: Wish
>Reporter: Jorge Machado
>Priority: Minor
>
> Hi, 
>  
> I'm trying to extend AbstractDatabaseFetchProcessor, which ends up being 
> difficult. The only way is to pull in the standard processors as a dependency, 
> which duplicates all of their processors on the canvas. We should refactor so 
> that AbstractDatabaseFetchProcessor is moved to a place where it can be 
> extended properly.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (NIFI-8227) Enable extend of AbstractDatabaseFetchProcessor without adding the whole nifi-standard processors

2021-02-15 Thread Jorge Machado (Jira)
Jorge Machado created NIFI-8227:
---

 Summary: Enable extend of AbstractDatabaseFetchProcessor without 
adding the whole nifi-standard processors
 Key: NIFI-8227
 URL: https://issues.apache.org/jira/browse/NIFI-8227
 Project: Apache NiFi
  Issue Type: Wish
Reporter: Jorge Machado


Hi, 

 

I'm trying to extend AbstractDatabaseFetchProcessor, which ends up being 
difficult. The only way is to pull in the standard processors as a dependency, 
which duplicates all of their processors on the canvas. We should refactor so 
that AbstractDatabaseFetchProcessor is moved to a place where it can be 
extended properly.
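
To illustrate the kind of extension being asked for, a minimal sketch of a 
custom processor, assuming AbstractDatabaseFetchProcessor keeps its 
AbstractSessionFactoryProcessor contract; the class name MyTableFetch is 
hypothetical:
{code:java}
import org.apache.nifi.processor.ProcessContext;
import org.apache.nifi.processor.ProcessSessionFactory;
import org.apache.nifi.processor.exception.ProcessException;
// Today this requires a compile dependency on nifi-standard-processors, which is
// the problem described above: the whole module and its processors come along.
import org.apache.nifi.processors.standard.AbstractDatabaseFetchProcessor;

public class MyTableFetch extends AbstractDatabaseFetchProcessor {

    @Override
    public void onTrigger(final ProcessContext context, final ProcessSessionFactory sessionFactory)
            throws ProcessException {
        // Custom incremental-fetch logic would go here, reusing the column-tracking
        // helpers inherited from AbstractDatabaseFetchProcessor.
    }
}
{code}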



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (NIFI-5119) Pulling changes from Registry does not respect sensitive Informations on Destination

2018-04-25 Thread Jorge Machado (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-5119?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jorge Machado resolved NIFI-5119.
-
Resolution: Won't Fix

not a bug

> Pulling changes from Registry does not respect sensitive Informations on 
> Destination
> 
>
> Key: NIFI-5119
> URL: https://issues.apache.org/jira/browse/NIFI-5119
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Variable Registry
>Affects Versions: 1.5.0, 1.6.0
> Environment: all
>Reporter: Jorge Machado
>Priority: Major
> Fix For: 1.7.0
>
>
> When pulling changes from the Registry, if a sensitive variable is set it 
> gets reset to its default. 
> I have found a use case that undermines the whole concept of the Registry: 
>  # Set up a flow with a sensitive field on NiFi Server A.
>  # Push that to the Registry.
>  # Pull the flow on NiFi Server B (the value is expected to be reset because 
> this is the first time). 
>  # Make changes on NiFi Server B and push.
>  # Pull the changes on NiFi Server A. 
>  
> At step 5 the sensitive information from NiFi Server A gets deleted.
> This breaks the whole concept IMHO.
> Relates to: NIFI-5028



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-5119) Pulling changes from Registry does not respect sensitive Informations on Destination

2018-04-25 Thread Jorge Machado (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-5119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16452886#comment-16452886
 ] 

Jorge Machado commented on NIFI-5119:
-

[~markap14] So that was really the issue. I was testing with NiFi 1.5.0 and 
NiFi 1.6.0. I tested it locally on my laptop with NiFi 1.6.0 and it works 
fine, so this is a problem on NiFi 1.5.0 only.

I will close this PR; this is *not* a bug. Thanks all for the support. 

> Pulling changes from Registry does not respect sensitive Informations on 
> Destination
> 
>
> Key: NIFI-5119
> URL: https://issues.apache.org/jira/browse/NIFI-5119
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Variable Registry
>Affects Versions: 1.5.0, 1.6.0
> Environment: all
>Reporter: Jorge Machado
>Priority: Major
> Fix For: 1.7.0
>
>
> When pulling changes from the Registry, if a sensitive variable is set it 
> gets reset to its default. 
> I have found a use case that undermines the whole concept of the Registry: 
>  # Set up a flow with a sensitive field on NiFi Server A.
>  # Push that to the Registry.
>  # Pull the flow on NiFi Server B (the value is expected to be reset because 
> this is the first time). 
>  # Make changes on NiFi Server B and push.
>  # Pull the changes on NiFi Server A. 
>  
> At step 5 the sensitive information from NiFi Server A gets deleted.
> This breaks the whole concept IMHO.
> Relates to: NIFI-5028



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (NIFI-5119) Pulling changes from Registry does not respect sensitive Informations on Destination

2018-04-25 Thread Jorge Machado (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-5119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16452347#comment-16452347
 ] 

Jorge Machado edited comment on NIFI-5119 at 4/25/18 2:36 PM:
--

We are using Kerberos and we have 3 nodes on each of the NiFi instances. I will 
try that on the master node.

But from the code I think there is a bug: check 
StandardProcessorGroup.java#populatePropertiesMap; at the end we are returning 
fullPropertyMap, which does not contain the sensitive information. That's what 
my PR fixes.


was (Author: jomach):
We are using Kerberos and we have 3 nodes on each of the NiFi instances. I will 
try that.

But from the code I think there is a bug: check 
StandardProcessorGroup.java#populatePropertiesMap; at the end we are returning 
fullPropertyMap, which does not contain the sensitive information. That's what 
my PR fixes.

> Pulling changes from Registry does not respect sensitive Informations on 
> Destination
> 
>
> Key: NIFI-5119
> URL: https://issues.apache.org/jira/browse/NIFI-5119
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Variable Registry
>Affects Versions: 1.5.0, 1.6.0
> Environment: all
>Reporter: Jorge Machado
>Priority: Major
> Fix For: 1.7.0
>
>
> When pulling changes from the Registry, if a sensitive variable is set it 
> gets reset to its default. 
> I have found a use case that undermines the whole concept of the Registry: 
>  # Set up a flow with a sensitive field on NiFi Server A.
>  # Push that to the Registry.
>  # Pull the flow on NiFi Server B (the value is expected to be reset because 
> this is the first time). 
>  # Make changes on NiFi Server B and push.
>  # Pull the changes on NiFi Server A. 
>  
> At step 5 the sensitive information from NiFi Server A gets deleted.
> This breaks the whole concept IMHO.
> Relates to: NIFI-5028



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-5119) Pulling changes from Registry does not respect sensitive Informations on Destination

2018-04-25 Thread Jorge Machado (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-5119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16452347#comment-16452347
 ] 

Jorge Machado commented on NIFI-5119:
-

We are using Kerberos and we have 3 nodes on each of the NiFi instances. I will 
try that.

But from the code I think there is a bug: check 
StandardProcessorGroup.java#populatePropertiesMap; at the end we are returning 
fullPropertyMap, which does not contain the sensitive information. That's what 
my PR fixes.
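
The idea behind the fix can be sketched like this (a purely hypothetical helper; 
the real method lives in the class mentioned above and its signature differs):
{code:java}
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

// Hypothetical helper, NOT the real NiFi code: illustrates keeping sensitive
// values that are already configured locally when the property map is rebuilt
// during a registry update.
public final class SensitivePropertyMerge {

    public static Map<String, String> merge(final Map<String, String> incoming,
                                             final Map<String, String> current,
                                             final Set<String> sensitiveNames) {
        final Map<String, String> result = new HashMap<>(incoming);
        for (final String name : sensitiveNames) {
            final String localValue = current.get(name);
            if (localValue != null && result.get(name) == null) {
                result.put(name, localValue); // honour the locally configured sensitive value
            }
        }
        return result;
    }
}
{code}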

> Pulling changes from Registry does not respect sensitive Informations on 
> Destination
> 
>
> Key: NIFI-5119
> URL: https://issues.apache.org/jira/browse/NIFI-5119
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Variable Registry
>Affects Versions: 1.5.0, 1.6.0
> Environment: all
>Reporter: Jorge Machado
>Priority: Major
> Fix For: 1.7.0
>
>
> When pulling changes from the Registry, if a sensitive variable is set it 
> gets reset to its default. 
> I have found a use case that undermines the whole concept of the Registry: 
>  # Set up a flow with a sensitive field on NiFi Server A.
>  # Push that to the Registry.
>  # Pull the flow on NiFi Server B (the value is expected to be reset because 
> this is the first time). 
>  # Make changes on NiFi Server B and push.
>  # Pull the changes on NiFi Server A. 
>  
> At step 5 the sensitive information from NiFi Server A gets deleted.
> This breaks the whole concept IMHO.
> Relates to: NIFI-5028



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (NIFI-5119) Pulling changes from Registry does not respect sensitive Informations on Destination

2018-04-25 Thread Jorge Machado (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-5119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16452248#comment-16452248
 ] 

Jorge Machado edited comment on NIFI-5119 at 4/25/18 1:29 PM:
--

Yes, and when you pull from B, if there are sensitive values set they get lost; 
that's the bug.


was (Author: jomach):
Yes. and when you pull from B, if there are sensitive values set they get lost; 
that's the bug.

> Pulling changes from Registry does not respect sensitive Informations on 
> Destination
> 
>
> Key: NIFI-5119
> URL: https://issues.apache.org/jira/browse/NIFI-5119
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Variable Registry
>Affects Versions: 1.5.0, 1.6.0
> Environment: all
>Reporter: Jorge Machado
>Priority: Major
> Fix For: 1.7.0
>
>
> When pulling changes from the Registry, if a sensitive variable is set it 
> gets reset to its default. 
> I have found a use case that undermines the whole concept of the Registry: 
>  # Set up a flow with a sensitive field on NiFi Server A.
>  # Push that to the Registry.
>  # Pull the flow on NiFi Server B (the value is expected to be reset because 
> this is the first time). 
>  # Make changes on NiFi Server B and push.
>  # Pull the changes on NiFi Server A. 
>  
> At step 5 the sensitive information from NiFi Server A gets deleted.
> This breaks the whole concept IMHO.
> Relates to: NIFI-5028



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-5119) Pulling changes from Registry does not respect sensitive Informations on Destination

2018-04-25 Thread Jorge Machado (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-5119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16452248#comment-16452248
 ] 

Jorge Machado commented on NIFI-5119:
-

Yes. And when you pull from B, if there are sensitive values set they get lost; 
that's the bug.

> Pulling changes from Registry does not respect sensitive Informations on 
> Destination
> 
>
> Key: NIFI-5119
> URL: https://issues.apache.org/jira/browse/NIFI-5119
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Variable Registry
>Affects Versions: 1.5.0, 1.6.0
> Environment: all
>Reporter: Jorge Machado
>Priority: Major
> Fix For: 1.7.0
>
>
> When pulling changes from the Registry, if a sensitive variable is set it 
> gets reset to its default. 
> I have found a use case that undermines the whole concept of the Registry: 
>  # Set up a flow with a sensitive field on NiFi Server A.
>  # Push that to the Registry.
>  # Pull the flow on NiFi Server B (the value is expected to be reset because 
> this is the first time). 
>  # Make changes on NiFi Server B and push.
>  # Pull the changes on NiFi Server A. 
>  
> At step 5 the sensitive information from NiFi Server A gets deleted.
> This breaks the whole concept IMHO.
> Relates to: NIFI-5028



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-5119) Pulling changes from Registry does not respect sensitive Informations on Destination

2018-04-25 Thread Jorge Machado (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-5119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16452227#comment-16452227
 ] 

Jorge Machado commented on NIFI-5119:
-

If you follow the 5 steps that I described in the ticket you should be able to 
see it. If you can't, let me know and I will take some time to make a video 
showing it.

> Pulling changes from Registry does not respect sensitive Informations on 
> Destination
> 
>
> Key: NIFI-5119
> URL: https://issues.apache.org/jira/browse/NIFI-5119
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Variable Registry
>Affects Versions: 1.5.0, 1.6.0
> Environment: all
>Reporter: Jorge Machado
>Priority: Major
> Fix For: 1.7.0
>
>
> When pulling changes from the Registry, if a sensitive variable is set it 
> gets reset to its default. 
> I have found a use case that undermines the whole concept of the Registry: 
>  # Set up a flow with a sensitive field on NiFi Server A.
>  # Push that to the Registry.
>  # Pull the flow on NiFi Server B (the value is expected to be reset because 
> this is the first time). 
>  # Make changes on NiFi Server B and push.
>  # Pull the changes on NiFi Server A. 
>  
> At step 5 the sensitive information from NiFi Server A gets deleted.
> This breaks the whole concept IMHO.
> Relates to: NIFI-5028



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-5119) Pulling changes from Registry does not respect sensitive Informations on Destination

2018-04-25 Thread Jorge Machado (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-5119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16452190#comment-16452190
 ] 

Jorge Machado commented on NIFI-5119:
-

Hi [~joewitt], yes, I know we are not there yet. I think we should never save 
sensitive information into the registry. But the point that I'm trying to make 
is that NiFi does not honor the variables that are already set after an update. 
I created a PR for it but I'm not able to create a unit test for it. If you 
agree with the PR, it would be great if someone helped me out with the unit 
test. For now I marked it as an expected exception. 

> Pulling changes from Registry does not respect sensitive Informations on 
> Destination
> 
>
> Key: NIFI-5119
> URL: https://issues.apache.org/jira/browse/NIFI-5119
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Variable Registry
>Affects Versions: 1.5.0, 1.6.0
> Environment: all
>Reporter: Jorge Machado
>Priority: Major
> Fix For: 1.7.0
>
>
> When pulling changes from the Registry, if a sensitive variable is set it 
> gets reset to its default. 
> I have found a use case that undermines the whole concept of the Registry: 
>  # Set up a flow with a sensitive field on NiFi Server A.
>  # Push that to the Registry.
>  # Pull the flow on NiFi Server B (the value is expected to be reset because 
> this is the first time). 
>  # Make changes on NiFi Server B and push.
>  # Pull the changes on NiFi Server A. 
>  
> At step 5 the sensitive information from NiFi Server A gets deleted.
> This breaks the whole concept IMHO.
> Relates to: NIFI-5028



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-5028) CLI - Add a command to list sensitive properties to set

2018-04-25 Thread Jorge Machado (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-5028?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16452163#comment-16452163
 ] 

Jorge Machado commented on NIFI-5028:
-

[~joewitt] I have a fix for it, but no unit test yet. Yes, let's switch to the 
other ticket.

> CLI - Add a command to list sensitive properties to set
> ---
>
> Key: NIFI-5028
> URL: https://issues.apache.org/jira/browse/NIFI-5028
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Tools and Build
>Reporter: Pierre Villard
>Priority: Major
>
> When versioning a process group, all the sensitive values will be removed 
> for security reasons. When importing the versioned process group for the 
> first time in the final environment, it'll be necessary to manually set the 
> value of the sensitive properties (or the newly added sensitive properties in 
> case of a version update).
> It'd be helpful to have a command listing all the sensitive properties 
> contained in the process group that need to be set. Additionally, a command 
> allowing the user to set the property would be useful to automate workflow 
> deployments.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (NIFI-5119) Pulling changes from Registry does not respect sensitive Informations on Destination

2018-04-25 Thread Jorge Machado (JIRA)
Jorge Machado created NIFI-5119:
---

 Summary: Pulling changes from Registry does not respect sensitive 
Informations on Destination
 Key: NIFI-5119
 URL: https://issues.apache.org/jira/browse/NIFI-5119
 Project: Apache NiFi
  Issue Type: Improvement
  Components: Variable Registry
Affects Versions: 1.6.0, 1.5.0
 Environment: all
Reporter: Jorge Machado
 Fix For: 1.7.0


When pulling changes from the Registry, if a sensitive variable is set it gets 
reset to its default. 

I have found a use case that undermines the whole concept of the Registry: 
 # Set up a flow with a sensitive field on NiFi Server A.
 # Push that to the Registry.
 # Pull the flow on NiFi Server B (the value is expected to be reset because 
this is the first time). 
 # Make changes on NiFi Server B and push.
 # Pull the changes on NiFi Server A. 

At step 5 the sensitive information from NiFi Server A gets deleted.

This breaks the whole concept IMHO.

Relates to: NIFI-5028



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (NIFI-5119) Pulling changes from Registry does not respect sensitive Informations on Destination

2018-04-25 Thread Jorge Machado (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-5119?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jorge Machado updated NIFI-5119:

Issue Type: Bug  (was: Improvement)

> Pulling changes from Registry does not respect sensitive Informations on 
> Destination
> 
>
> Key: NIFI-5119
> URL: https://issues.apache.org/jira/browse/NIFI-5119
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Variable Registry
>Affects Versions: 1.5.0, 1.6.0
> Environment: all
>Reporter: Jorge Machado
>Priority: Major
> Fix For: 1.7.0
>
>
> When pulling changes from the Registry, if a sensitive variable is set it 
> gets reset to its default. 
> I have found a use case that undermines the whole concept of the Registry: 
>  # Set up a flow with a sensitive field on NiFi Server A.
>  # Push that to the Registry.
>  # Pull the flow on NiFi Server B (the value is expected to be reset because 
> this is the first time). 
>  # Make changes on NiFi Server B and push.
>  # Pull the changes on NiFi Server A. 
>  
> At step 5 the sensitive information from NiFi Server A gets deleted.
> This breaks the whole concept IMHO.
> Relates to: NIFI-5028



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-5028) CLI - Add a command to list sensitive properties to set

2018-04-25 Thread Jorge Machado (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-5028?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16451978#comment-16451978
 ] 

Jorge Machado commented on NIFI-5028:
-

Actually it would be nice if we could detect whether a sensitive field has a 
variable set (see the sketch below); if it does, we push that. I have found a 
use case that undermines the whole concept of the Registry:

 1. Set up a flow with EL on the sensitive field.
 2. Push that to the Registry.
 3. Pull the flow into another NiFi system.

It will automatically mark the sensitive information as not set, regardless of 
whether it had been set before.

This breaks the whole concept IMHO.
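
The detection step suggested above could, as a rough sketch, look for an 
Expression Language / variable reference in the property value (the class name 
and regex are illustrative simplifications):
{code:java}
import java.util.regex.Pattern;

// Rough illustration only: treat a sensitive property as "variable-backed" when
// its value is a variable reference such as ${db.password}.
public final class SensitiveVariableCheck {

    private static final Pattern VARIABLE_REFERENCE = Pattern.compile("\\$\\{[^}]+}");

    public static boolean referencesVariable(final String propertyValue) {
        return propertyValue != null && VARIABLE_REFERENCE.matcher(propertyValue).find();
    }

    public static void main(final String[] args) {
        System.out.println(referencesVariable("${db.password}")); // true
        System.out.println(referencesVariable("secret"));         // false
    }
}
{code}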

> CLI - Add a command to list sensitive properties to set
> ---
>
> Key: NIFI-5028
> URL: https://issues.apache.org/jira/browse/NIFI-5028
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Tools and Build
>Reporter: Pierre Villard
>Priority: Major
>
> When versioning a process group, all the sensitive values will be removed 
> for security reasons. When importing the versioned process group for the 
> first time in the final environment, it'll be necessary to manually set the 
> value of the sensitive properties (or the newly added sensitive properties in 
> case of a version update).
> It'd be helpful to have a command listing all the sensitive properties 
> contained in the process group that need to be set. Additionally, a command 
> allowing the user to set the property would be useful to automate workflow 
> deployments.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (NIFI-4960) Multiple Custom Processors don't work as expected.

2018-03-12 Thread Jorge Machado (JIRA)
Jorge Machado created NIFI-4960:
---

 Summary: Multiple Custom Processors don't work as expected.
 Key: NIFI-4960
 URL: https://issues.apache.org/jira/browse/NIFI-4960
 Project: Apache NiFi
  Issue Type: Bug
  Components: Configuration
Affects Versions: 1.5.0
Reporter: Jorge Machado


We are setting up the NiFi Registry and we are getting this error when trying to 
import a flow from the Registry into NiFi: 

Multiple versions of …. exist. No exact match for default:...:unversioned.

Note that we have multiple versions of the same processor in the lib folder 
(which works for templates).
Our manifest inside the NAR file looks like this: 
Manifest-Version: 1.0
Implementation-Title: componentName
Implementation-Version: 1.5.9.rc.1
Nar-Dependency-Group: org.apache.nifi
Nar-Version: 1.5.9.rc.1
Nar-Dependency-Version: 1.5.0
Nar-Id: componentName
Nar-Group: com.componentName
Nar-Dependency-Id: nifi-standard-services-api-nar

I see this in the NiFi logs: 
2018-03-12 08:33:51,304 INFO [NiFi Web Server-94] 
o.a.n.w.a.c.IllegalStateExceptionMapper java.lang.IllegalStateException: 
Multiple versions of com.gfk.nifi.processor.customlogging.CustomLogger exist. 
No exact match for default:ipmvp-nifi-customlogattribute:unversioned.. 
Returning Conflict response.

After some debugging I found the bug in BundleUtils and I will patch it. 
{code:java}
public static BundleDTO createBundleDto(final 
org.apache.nifi.registry.flow.Bundle bundle) {
final BundleDTO dto = new BundleDTO();
dto.setArtifact(bundle.getArtifact());
dto.setGroup(dto.getGroup()); <-- Should be bundle right ? 
dto.setVersion(dto.getVersion()); <-- should be bundle right ?
return dto;
}
{code}
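
For reference, the corrected mapping being suggested above would read both 
fields from the incoming bundle rather than from the freshly created DTO:
{code:java}
public static BundleDTO createBundleDto(final org.apache.nifi.registry.flow.Bundle bundle) {
    final BundleDTO dto = new BundleDTO();
    dto.setArtifact(bundle.getArtifact());
    dto.setGroup(bundle.getGroup());     // take the group from the source bundle
    dto.setVersion(bundle.getVersion()); // take the version from the source bundle
    return dto;
}
{code}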



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-3380) Multiple Versions of the Same Component

2018-03-02 Thread Jorge Machado (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3380?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16383776#comment-16383776
 ] 

Jorge Machado commented on NIFI-3380:
-

Hi guys, I saw this and I have multiple versions of the same custom processor. 
When I try to check this out from the registry it tells me: 

Multiple versions of processorName exist. No exact match for 
default:processorName:unversioned.

How do I set a default version? 

> Multiple Versions of the Same Component
> ---
>
> Key: NIFI-3380
> URL: https://issues.apache.org/jira/browse/NIFI-3380
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Reporter: Bryan Bende
>Assignee: Matt Gilman
>Priority: Major
> Fix For: 1.2.0
>
> Attachments: nifi-example-processors-nar-1.0.nar, 
> nifi-example-processors-nar-2.0.nar, nifi-example-service-api-nar-1.0.nar, 
> nifi-example-service-api-nar-2.0.nar, nifi-example-service-nar-1.0.nar, 
> nifi-example-service-nar-1.1.nar, nifi-example-service-nar-2.0.nar
>
>
> This ticket is to track the work for supporting multiple versions of the same 
> component within NiFi. The overall design for this feature is described in 
> detail at the following wiki page:
> https://cwiki.apache.org/confluence/display/NIFI/Multiple+Versions+of+the+Same+Extension
> This ticket will track only the core NiFi work, and a separate ticket will be 
> created to track enhancements for the NAR Maven Plugin.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-3472) Kerberos relogin not working (tgt) after ticket expires for HDFS/Hive/HBase processors

2017-11-30 Thread Jorge Machado (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3472?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16272934#comment-16272934
 ] 

Jorge Machado commented on NIFI-3472:
-

[~jtstorck] Are you sure that only happens with the ticket cache? I think we 
could call UserGroupInformation.getLoginUser(); this will internally call 
spawnAutoRenewalThreadForUserCreds, if I'm not mistaken or misread the code. 
What would be really great is if the Hadoop team had a guide on how to properly 
use UserGroupInformation, because it seems that everyone tries to do their own 
thing... But yes, the abstract class in NiFi should have a thread that takes 
care of renewing the ticket. 
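
A minimal sketch of the kind of renewal thread being suggested, assuming a 
keytab-based login (the principal, keytab path and schedule are placeholders):
{code:java}
import java.io.IOException;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.UserGroupInformation;

// Illustrative only: NiFi's own handling may differ.
public class KerberosReloginExample {

    public static void main(final String[] args) throws IOException {
        final Configuration conf = new Configuration();
        conf.set("hadoop.security.authentication", "kerberos");
        UserGroupInformation.setConfiguration(conf);
        // Placeholder principal and keytab path.
        UserGroupInformation.loginUserFromKeytab("nifi@EXAMPLE.COM", "/etc/security/keytabs/nifi.keytab");

        final ScheduledExecutorService renewer = Executors.newSingleThreadScheduledExecutor();
        renewer.scheduleAtFixedRate(() -> {
            try {
                // Re-login from the keytab if the TGT is close to expiring.
                UserGroupInformation.getLoginUser().checkTGTAndReloginFromKeytab();
            } catch (final IOException e) {
                e.printStackTrace();
            }
        }, 1, 1, TimeUnit.MINUTES);
    }
}
{code}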

> Kerberos relogin not working (tgt) after ticket expires for HDFS/Hive/HBase 
> processors
> --
>
> Key: NIFI-3472
> URL: https://issues.apache.org/jira/browse/NIFI-3472
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.0.0, 1.1.0, 1.1.1, 1.0.1
>Reporter: Jeff Storck
>Assignee: Jeff Storck
>
> PutHDFS is not able to relogin if the ticket expires.
> NiFi, running locally as standalone, was sending files to HDFS.  After 
> suspending the system for the weekend, when the flow attempted to continue to 
> process flowfiles, the following exception occurred:
> {code}2017-02-13 11:59:53,460 WARN [Timer-Driven Process Thread-10] 
> org.apache.hadoop.ipc.Client Exception encountered while connecting to the 
> server : javax.security.sasl.SaslException: GSS initiate failed [Caused by 
> GSSException: No valid credentials provided (Mechanism level: Failed to find 
> any Kerberos tgt)]
> 2017-02-13 11:59:53,463 INFO [Timer-Driven Process Thread-10] 
> o.a.h.io.retry.RetryInvocationHandler Exception while invoking getFileInfo of 
> class ClientNamenodeProtocolTranslatorPB over [host:port] after 3 fail over 
> attempts. Trying to fail over immediately.
> java.io.IOException: Failed on local exception: java.io.IOException: 
> javax.security.sasl.SaslException: GSS initiate failed [Caused by 
> GSSException: No valid credentials provided (Mechanism level: Failed to find 
> any Kerberos tgt)]; Host Details : local host is: "[host:port]"; destination 
> host is: [host:port];
>   at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:776) 
> ~[hadoop-common-2.7.3.jar:na]
>   at org.apache.hadoop.ipc.Client.call(Client.java:1479) 
> ~[hadoop-common-2.7.3.jar:na]
>   at org.apache.hadoop.ipc.Client.call(Client.java:1412) 
> ~[hadoop-common-2.7.3.jar:na]
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
>  ~[hadoop-common-2.7.3.jar:na]
>   at com.sun.proxy.$Proxy136.getFileInfo(Unknown Source) ~[na:na]
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:771)
>  ~[hadoop-hdfs-2.7.3.jar:na]
>   at sun.reflect.GeneratedMethodAccessor386.invoke(Unknown Source) 
> ~[na:na]
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  ~[na:1.8.0_102]
>   at java.lang.reflect.Method.invoke(Method.java:498) ~[na:1.8.0_102]
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191)
>  ~[hadoop-common-2.7.3.jar:na]
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
>  ~[hadoop-common-2.7.3.jar:na]
>   at com.sun.proxy.$Proxy137.getFileInfo(Unknown Source) [na:na]
>   at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:2108) 
> [hadoop-hdfs-2.7.3.jar:na]
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1305)
>  [hadoop-hdfs-2.7.3.jar:na]
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1301)
>  [hadoop-hdfs-2.7.3.jar:na]
>   at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>  [hadoop-common-2.7.3.jar:na]
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1317)
>  [hadoop-hdfs-2.7.3.jar:na]
>   at org.apache.nifi.processors.hadoop.PutHDFS$1.run(PutHDFS.java:262) 
> [nifi-hdfs-processors-1.1.1.jar:1.1.1]
>   at java.security.AccessController.doPrivileged(Native Method) 
> [na:1.8.0_102]
>   at javax.security.auth.Subject.doAs(Subject.java:360) [na:1.8.0_102]
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1678)
>  [hadoop-common-2.7.3.jar:na]
>   at 
> org.apache.nifi.processors.hadoop.PutHDFS.onTrigger(PutHDFS.java:230) 
> [nifi-hdfs-processors-1.1.1.jar:1.1.1]
>   at 
> org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27)
>  

[jira] [Commented] (NIFI-3472) Kerberos relogin not working (tgt) after ticket expires for HDFS/Hive/HBase processors

2017-11-30 Thread Jorge Machado (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3472?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16272642#comment-16272642
 ] 

Jorge Machado commented on NIFI-3472:
-

[~jtstorck] How is this going? UserGroupInformation spawns the thread that 
renews the ticket for us. Have you used that? Check: 
https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java#L1037

> Kerberos relogin not working (tgt) after ticket expires for HDFS/Hive/HBase 
> processors
> --
>
> Key: NIFI-3472
> URL: https://issues.apache.org/jira/browse/NIFI-3472
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.0.0, 1.1.0, 1.1.1, 1.0.1
>Reporter: Jeff Storck
>Assignee: Jeff Storck
>
> PutHDFS is not able to relogin if the ticket expires.
> NiFi, running locally as standalone, was sending files to HDFS.  After 
> suspending the system for the weekend, when the flow attempted to continue to 
> process flowfiles, the following exception occurred:
> {code}2017-02-13 11:59:53,460 WARN [Timer-Driven Process Thread-10] 
> org.apache.hadoop.ipc.Client Exception encountered while connecting to the 
> server : javax.security.sasl.SaslException: GSS initiate failed [Caused by 
> GSSException: No valid credentials provided (Mechanism level: Failed to find 
> any Kerberos tgt)]
> 2017-02-13 11:59:53,463 INFO [Timer-Driven Process Thread-10] 
> o.a.h.io.retry.RetryInvocationHandler Exception while invoking getFileInfo of 
> class ClientNamenodeProtocolTranslatorPB over [host:port] after 3 fail over 
> attempts. Trying to fail over immediately.
> java.io.IOException: Failed on local exception: java.io.IOException: 
> javax.security.sasl.SaslException: GSS initiate failed [Caused by 
> GSSException: No valid credentials provided (Mechanism level: Failed to find 
> any Kerberos tgt)]; Host Details : local host is: "[host:port]"; destination 
> host is: [host:port];
>   at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:776) 
> ~[hadoop-common-2.7.3.jar:na]
>   at org.apache.hadoop.ipc.Client.call(Client.java:1479) 
> ~[hadoop-common-2.7.3.jar:na]
>   at org.apache.hadoop.ipc.Client.call(Client.java:1412) 
> ~[hadoop-common-2.7.3.jar:na]
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
>  ~[hadoop-common-2.7.3.jar:na]
>   at com.sun.proxy.$Proxy136.getFileInfo(Unknown Source) ~[na:na]
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:771)
>  ~[hadoop-hdfs-2.7.3.jar:na]
>   at sun.reflect.GeneratedMethodAccessor386.invoke(Unknown Source) 
> ~[na:na]
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  ~[na:1.8.0_102]
>   at java.lang.reflect.Method.invoke(Method.java:498) ~[na:1.8.0_102]
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191)
>  ~[hadoop-common-2.7.3.jar:na]
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
>  ~[hadoop-common-2.7.3.jar:na]
>   at com.sun.proxy.$Proxy137.getFileInfo(Unknown Source) [na:na]
>   at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:2108) 
> [hadoop-hdfs-2.7.3.jar:na]
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1305)
>  [hadoop-hdfs-2.7.3.jar:na]
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1301)
>  [hadoop-hdfs-2.7.3.jar:na]
>   at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>  [hadoop-common-2.7.3.jar:na]
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1317)
>  [hadoop-hdfs-2.7.3.jar:na]
>   at org.apache.nifi.processors.hadoop.PutHDFS$1.run(PutHDFS.java:262) 
> [nifi-hdfs-processors-1.1.1.jar:1.1.1]
>   at java.security.AccessController.doPrivileged(Native Method) 
> [na:1.8.0_102]
>   at javax.security.auth.Subject.doAs(Subject.java:360) [na:1.8.0_102]
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1678)
>  [hadoop-common-2.7.3.jar:na]
>   at 
> org.apache.nifi.processors.hadoop.PutHDFS.onTrigger(PutHDFS.java:230) 
> [nifi-hdfs-processors-1.1.1.jar:1.1.1]
>   at 
> org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27)
>  [nifi-api-1.1.1.jar:1.1.1]
>   at 
> org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1099)
>  [nifi-framework-core-1.1.1.jar:1.1.1]
>   at 
> 

[jira] [Commented] (NIFI-3472) Kerberos relogin not working (tgt) after ticket expires for HDFS/Hive/HBase processors

2017-11-07 Thread Jorge Machado (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3472?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16242669#comment-16242669
 ] 

Jorge Machado commented on NIFI-3472:
-

Hey, so after a lot of debugging I found out that this could be a keytab cache 
problem; I just rebooted NiFi and then it works. 

> Kerberos relogin not working (tgt) after ticket expires for HDFS/Hive/HBase 
> processors
> --
>
> Key: NIFI-3472
> URL: https://issues.apache.org/jira/browse/NIFI-3472
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.0.0, 1.1.0, 1.1.1, 1.0.1
>Reporter: Jeff Storck
>Assignee: Jeff Storck
>
> PutHDFS is not able to relogin if the ticket expires.
> NiFi, running locally as standalone, was sending files to HDFS.  After 
> suspending the system for the weekend, when the flow attempted to continue to 
> process flowfiles, the following exception occurred:
> {code}2017-02-13 11:59:53,460 WARN [Timer-Driven Process Thread-10] 
> org.apache.hadoop.ipc.Client Exception encountered while connecting to the 
> server : javax.security.sasl.SaslException: GSS initiate failed [Caused by 
> GSSException: No valid credentials provided (Mechanism level: Failed to find 
> any Kerberos tgt)]
> 2017-02-13 11:59:53,463 INFO [Timer-Driven Process Thread-10] 
> o.a.h.io.retry.RetryInvocationHandler Exception while invoking getFileInfo of 
> class ClientNamenodeProtocolTranslatorPB over [host:port] after 3 fail over 
> attempts. Trying to fail over immediately.
> java.io.IOException: Failed on local exception: java.io.IOException: 
> javax.security.sasl.SaslException: GSS initiate failed [Caused by 
> GSSException: No valid credentials provided (Mechanism level: Failed to find 
> any Kerberos tgt)]; Host Details : local host is: "[host:port]"; destination 
> host is: [host:port];
>   at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:776) 
> ~[hadoop-common-2.7.3.jar:na]
>   at org.apache.hadoop.ipc.Client.call(Client.java:1479) 
> ~[hadoop-common-2.7.3.jar:na]
>   at org.apache.hadoop.ipc.Client.call(Client.java:1412) 
> ~[hadoop-common-2.7.3.jar:na]
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
>  ~[hadoop-common-2.7.3.jar:na]
>   at com.sun.proxy.$Proxy136.getFileInfo(Unknown Source) ~[na:na]
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:771)
>  ~[hadoop-hdfs-2.7.3.jar:na]
>   at sun.reflect.GeneratedMethodAccessor386.invoke(Unknown Source) 
> ~[na:na]
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  ~[na:1.8.0_102]
>   at java.lang.reflect.Method.invoke(Method.java:498) ~[na:1.8.0_102]
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191)
>  ~[hadoop-common-2.7.3.jar:na]
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
>  ~[hadoop-common-2.7.3.jar:na]
>   at com.sun.proxy.$Proxy137.getFileInfo(Unknown Source) [na:na]
>   at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:2108) 
> [hadoop-hdfs-2.7.3.jar:na]
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1305)
>  [hadoop-hdfs-2.7.3.jar:na]
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1301)
>  [hadoop-hdfs-2.7.3.jar:na]
>   at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>  [hadoop-common-2.7.3.jar:na]
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1317)
>  [hadoop-hdfs-2.7.3.jar:na]
>   at org.apache.nifi.processors.hadoop.PutHDFS$1.run(PutHDFS.java:262) 
> [nifi-hdfs-processors-1.1.1.jar:1.1.1]
>   at java.security.AccessController.doPrivileged(Native Method) 
> [na:1.8.0_102]
>   at javax.security.auth.Subject.doAs(Subject.java:360) [na:1.8.0_102]
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1678)
>  [hadoop-common-2.7.3.jar:na]
>   at 
> org.apache.nifi.processors.hadoop.PutHDFS.onTrigger(PutHDFS.java:230) 
> [nifi-hdfs-processors-1.1.1.jar:1.1.1]
>   at 
> org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27)
>  [nifi-api-1.1.1.jar:1.1.1]
>   at 
> org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1099)
>  [nifi-framework-core-1.1.1.jar:1.1.1]
>   at 
> org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:136)
>  [nifi-framework-core-1.1.1.jar:1.1.1]
>   at 
> 

[jira] [Commented] (NIFI-3472) PutHDFS Kerberos relogin not working (tgt) after ticket expires

2017-07-31 Thread Jorge Machado (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3472?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16106963#comment-16106963
 ] 

Jorge Machado commented on NIFI-3472:
-

I'm hitting the same problem but with the GetHDFS processor: 


{code:java}
2017-07-31 10:20:51,657 WARN [Timer-Driven Process Thread-1] 
org.apache.hadoop.ipc.Client Exception encountered while connecting to the 
server : javax.security.sasl.SaslException: GSS initiate failed [Caused by 
GSSException: No valid credentials provided (Mechanism level: Failed to find 
any Kerberos tgt)]
2017-07-31 10:20:51,658 WARN [Timer-Driven Process Thread-1] 
o.a.h.io.retry.RetryInvocationHandler Exception while invoking class 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo
 over server/10.174.22.40:8020. Not retrying because failovers (15) exceeded 
maximum allowed (15)
java.io.IOException: Failed on local exception: java.io.IOException: 
javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: 
No valid credentials provided (Mechanism level: Failed to find any Kerberos 
tgt)]; Host Details : local host is: "nifi/10.174.22.49"; destination host is: 
"server":8020;
at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:776)
at org.apache.hadoop.ipc.Client.call(Client.java:1479)
at org.apache.hadoop.ipc.Client.call(Client.java:1412)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
at com.sun.proxy.$Proxy126.getFileInfo(Unknown Source)
at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:771)
at sun.reflect.GeneratedMethodAccessor421.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
at com.sun.proxy.$Proxy136.getFileInfo(Unknown Source)
at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:2108)
at 
org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1305)
at 
org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1301)
at 
org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at 
org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1317)
at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1426)
at 
org.apache.nifi.processors.hadoop.GetHDFS.selectFiles(GetHDFS.java:444)
at 
org.apache.nifi.processors.hadoop.GetHDFS.performListing(GetHDFS.java:420)
at org.apache.nifi.processors.hadoop.GetHDFS.onTrigger(GetHDFS.java:264)
at 
org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27)
at 
org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1120)
at 
org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:147)
at 
org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:47)
at 
org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:132)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.io.IOException: javax.security.sasl.SaslException: GSS initiate 
failed [Caused by GSSException: No valid credentials provided (Mechanism level: 
Failed to find any Kerberos tgt)]
at org.apache.hadoop.ipc.Client$Connection$1.run(Client.java:687)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
at 
org.apache.hadoop.ipc.Client$Connection.handleSaslConnectionFailure(Client.java:650)
at 
org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:737)
at 

[jira] [Commented] (NIFI-4174) GenerateTableFetch does not work with oracle on Nifi 1.2

2017-07-12 Thread Jorge Machado (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4174?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16085196#comment-16085196
 ] 

Jorge Machado commented on NIFI-4174:
-

Yeah, you are right. 

One question: is there a processor that can just execute a list of SQL 
statements? That would be nice.

> GenerateTableFetch does not work with oracle on Nifi 1.2
> 
>
> Key: NIFI-4174
> URL: https://issues.apache.org/jira/browse/NIFI-4174
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.2.0
>Reporter: Jorge Machado
>Priority: Minor
>
> I'm trying to extract some data from an Oracle DB.  
> I'm getting: 
> {code:java}
> 2017-07-11 16:19:29,612 WARN [StandardProcessScheduler Thread-7] 
> o.a.n.controller.StandardProcessorNode Timed out while waiting for 
> OnScheduled of 'GenerateTableFetch' processor to finish. An attempt is made 
> to cancel the task via Thread.interrupt(). However it does not guarantee that 
> the task will be canceled since the code inside current OnScheduled operation 
> may have been written to ignore interrupts which may result in a runaway 
> thread. This could lead to more issues, eventually requiring NiFi to be 
> restarted. This is usually a bug in the target Processor 
> 'GenerateTableFetch[id=f08a3acd-ac7e-17d7-598b-8f9720fd92d4]' that needs to 
> be documented, reported and eventually fixed.
> 2017-07-11 16:19:29,612 ERROR [StandardProcessScheduler Thread-7] 
> o.a.n.p.standard.GenerateTableFetch 
> GenerateTableFetch[id=f08a3acd-ac7e-17d7-598b-8f9720fd92d4] 
> GenerateTableFetch[id=f08a3acd-ac7e-17d7-598b-8f9720fd92d4] failed to invoke 
> @OnScheduled method due to java.lang.RuntimeException: Timed out while 
> executing one of processor's OnScheduled task.; processor will not be 
> scheduled to run for 30 seconds: java.lang.RuntimeException: Timed out while 
> executing one of processor's OnScheduled task.
> java.lang.RuntimeException: Timed out while executing one of processor's 
> OnScheduled task.
>   at 
> org.apache.nifi.controller.StandardProcessorNode.invokeTaskAsCancelableFuture(StandardProcessorNode.java:1480)
>   at 
> org.apache.nifi.controller.StandardProcessorNode.access$000(StandardProcessorNode.java:102)
>   at 
> org.apache.nifi.controller.StandardProcessorNode$1.run(StandardProcessorNode.java:1303)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:748)
> Caused by: java.util.concurrent.TimeoutException: null
>   at java.util.concurrent.FutureTask.get(FutureTask.java:205)
>   at 
> org.apache.nifi.controller.StandardProcessorNode.invokeTaskAsCancelableFuture(StandardProcessorNode.java:1465)
>   ... 9 common frames omitted
> 2017-07-11 16:19:29,613 ERROR [StandardProcessScheduler Thread-7] 
> o.a.n.controller.StandardProcessorNode Failed to invoke @OnScheduled method 
> due to java.lang.RuntimeException: Timed out while executing one of 
> processor's OnScheduled task.
> java.lang.RuntimeException: Timed out while executing one of processor's 
> OnScheduled task.
>   at 
> org.apache.nifi.controller.StandardProcessorNode.invokeTaskAsCancelableFuture(StandardProcessorNode.java:1480)
>   at 
> org.apache.nifi.controller.StandardProcessorNode.access$000(StandardProcessorNode.java:102)
>   at 
> org.apache.nifi.controller.StandardProcessorNode$1.run(StandardProcessorNode.java:1303)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:748)
> Caused by: java.util.concurrent.TimeoutException: null
>   at java.util.concurrent.FutureTask.get(FutureTask.java:205)
>   at 
> 

[jira] [Resolved] (NIFI-4174) GenerateTableFetch does not work with oracle on Nifi 1.2

2017-07-12 Thread Jorge Machado (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4174?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jorge Machado resolved NIFI-4174.
-
Resolution: Won't Fix

> GenerateTableFetch does not work with oracle on Nifi 1.2
> 
>
> Key: NIFI-4174
> URL: https://issues.apache.org/jira/browse/NIFI-4174
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.2.0
>Reporter: Jorge Machado
>Priority: Minor
>
> I'm trying to extract some data from an Oracle DB.  
> I'm getting: 
> {code:java}
> 2017-07-11 16:19:29,612 WARN [StandardProcessScheduler Thread-7] 
> o.a.n.controller.StandardProcessorNode Timed out while waiting for 
> OnScheduled of 'GenerateTableFetch' processor to finish. An attempt is made 
> to cancel the task via Thread.interrupt(). However it does not guarantee that 
> the task will be canceled since the code inside current OnScheduled operation 
> may have been written to ignore interrupts which may result in a runaway 
> thread. This could lead to more issues, eventually requiring NiFi to be 
> restarted. This is usually a bug in the target Processor 
> 'GenerateTableFetch[id=f08a3acd-ac7e-17d7-598b-8f9720fd92d4]' that needs to 
> be documented, reported and eventually fixed.
> 2017-07-11 16:19:29,612 ERROR [StandardProcessScheduler Thread-7] 
> o.a.n.p.standard.GenerateTableFetch 
> GenerateTableFetch[id=f08a3acd-ac7e-17d7-598b-8f9720fd92d4] 
> GenerateTableFetch[id=f08a3acd-ac7e-17d7-598b-8f9720fd92d4] failed to invoke 
> @OnScheduled method due to java.lang.RuntimeException: Timed out while 
> executing one of processor's OnScheduled task.; processor will not be 
> scheduled to run for 30 seconds: java.lang.RuntimeException: Timed out while 
> executing one of processor's OnScheduled task.
> java.lang.RuntimeException: Timed out while executing one of processor's 
> OnScheduled task.
>   at 
> org.apache.nifi.controller.StandardProcessorNode.invokeTaskAsCancelableFuture(StandardProcessorNode.java:1480)
>   at 
> org.apache.nifi.controller.StandardProcessorNode.access$000(StandardProcessorNode.java:102)
>   at 
> org.apache.nifi.controller.StandardProcessorNode$1.run(StandardProcessorNode.java:1303)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:748)
> Caused by: java.util.concurrent.TimeoutException: null
>   at java.util.concurrent.FutureTask.get(FutureTask.java:205)
>   at 
> org.apache.nifi.controller.StandardProcessorNode.invokeTaskAsCancelableFuture(StandardProcessorNode.java:1465)
>   ... 9 common frames omitted
> 2017-07-11 16:19:29,613 ERROR [StandardProcessScheduler Thread-7] 
> o.a.n.controller.StandardProcessorNode Failed to invoke @OnScheduled method 
> due to java.lang.RuntimeException: Timed out while executing one of 
> processor's OnScheduled task.
> java.lang.RuntimeException: Timed out while executing one of processor's 
> OnScheduled task.
>   at 
> org.apache.nifi.controller.StandardProcessorNode.invokeTaskAsCancelableFuture(StandardProcessorNode.java:1480)
>   at 
> org.apache.nifi.controller.StandardProcessorNode.access$000(StandardProcessorNode.java:102)
>   at 
> org.apache.nifi.controller.StandardProcessorNode$1.run(StandardProcessorNode.java:1303)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:748)
> Caused by: java.util.concurrent.TimeoutException: null
>   at java.util.concurrent.FutureTask.get(FutureTask.java:205)
>   at 
> org.apache.nifi.controller.StandardProcessorNode.invokeTaskAsCancelableFuture(StandardProcessorNode.java:1465)
>   ... 9 common frames omitted
> {code}
> Database Connection Pooling Service:
> 

[jira] [Commented] (NIFI-4174) GenerateTableFetch does not work with oracle on Nifi 1.2

2017-07-12 Thread Jorge Machado (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4174?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16083577#comment-16083577
 ] 

Jorge Machado commented on NIFI-4174:
-

After some tests I found out that I don't have a connection to the DB. 
This should throw an error like "Cannot connect to database" instead of a 
scheduler error. 
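
As a sketch of the kind of fail-fast check implied here (the DataSource and 
exception type are illustrative; the processor would use its DBCP controller 
service and a ProcessException):
{code:java}
import java.sql.Connection;
import java.sql.SQLException;
import javax.sql.DataSource;

// Illustrative only: verify connectivity up front and fail with a clear message
// instead of letting the scheduled task time out.
public final class ConnectionCheck {

    public static void verifyConnectivity(final DataSource dataSource) {
        try (Connection ignored = dataSource.getConnection()) {
            // Connection obtained and closed immediately; we only care that it works.
        } catch (final SQLException e) {
            throw new IllegalStateException("Cannot connect to database: " + e.getMessage(), e);
        }
    }
}
{code}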

> GenerateTableFetch does not work with oracle on Nifi 1.2
> 
>
> Key: NIFI-4174
> URL: https://issues.apache.org/jira/browse/NIFI-4174
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.2.0
>Reporter: Jorge Machado
>Priority: Minor
>
> I'm trying to extract some data from an Oracle DB.  
> I'm getting: 
> {code:java}
> 2017-07-11 16:19:29,612 WARN [StandardProcessScheduler Thread-7] 
> o.a.n.controller.StandardProcessorNode Timed out while waiting for 
> OnScheduled of 'GenerateTableFetch' processor to finish. An attempt is made 
> to cancel the task via Thread.interrupt(). However it does not guarantee that 
> the task will be canceled since the code inside current OnScheduled operation 
> may have been written to ignore interrupts which may result in a runaway 
> thread. This could lead to more issues, eventually requiring NiFi to be 
> restarted. This is usually a bug in the target Processor 
> 'GenerateTableFetch[id=f08a3acd-ac7e-17d7-598b-8f9720fd92d4]' that needs to 
> be documented, reported and eventually fixed.
> 2017-07-11 16:19:29,612 ERROR [StandardProcessScheduler Thread-7] 
> o.a.n.p.standard.GenerateTableFetch 
> GenerateTableFetch[id=f08a3acd-ac7e-17d7-598b-8f9720fd92d4] 
> GenerateTableFetch[id=f08a3acd-ac7e-17d7-598b-8f9720fd92d4] failed to invoke 
> @OnScheduled method due to java.lang.RuntimeException: Timed out while 
> executing one of processor's OnScheduled task.; processor will not be 
> scheduled to run for 30 seconds: java.lang.RuntimeException: Timed out while 
> executing one of processor's OnScheduled task.
> java.lang.RuntimeException: Timed out while executing one of processor's 
> OnScheduled task.
>   at 
> org.apache.nifi.controller.StandardProcessorNode.invokeTaskAsCancelableFuture(StandardProcessorNode.java:1480)
>   at 
> org.apache.nifi.controller.StandardProcessorNode.access$000(StandardProcessorNode.java:102)
>   at 
> org.apache.nifi.controller.StandardProcessorNode$1.run(StandardProcessorNode.java:1303)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:748)
> Caused by: java.util.concurrent.TimeoutException: null
>   at java.util.concurrent.FutureTask.get(FutureTask.java:205)
>   at 
> org.apache.nifi.controller.StandardProcessorNode.invokeTaskAsCancelableFuture(StandardProcessorNode.java:1465)
>   ... 9 common frames omitted
> 2017-07-11 16:19:29,613 ERROR [StandardProcessScheduler Thread-7] 
> o.a.n.controller.StandardProcessorNode Failed to invoke @OnScheduled method 
> due to java.lang.RuntimeException: Timed out while executing one of 
> processor's OnScheduled task.
> java.lang.RuntimeException: Timed out while executing one of processor's 
> OnScheduled task.
>   at 
> org.apache.nifi.controller.StandardProcessorNode.invokeTaskAsCancelableFuture(StandardProcessorNode.java:1480)
>   at 
> org.apache.nifi.controller.StandardProcessorNode.access$000(StandardProcessorNode.java:102)
>   at 
> org.apache.nifi.controller.StandardProcessorNode$1.run(StandardProcessorNode.java:1303)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:748)
> Caused by: java.util.concurrent.TimeoutException: null
>   at java.util.concurrent.FutureTask.get(FutureTask.java:205)
>   at 
> 

[jira] [Created] (NIFI-4174) GenerateTableFetch does not work with oracle on Nifi 1.2

2017-07-11 Thread Jorge Machado (JIRA)
Jorge Machado created NIFI-4174:
---

 Summary: GenerateTableFetch does not work with oracle on Nifi 1.2
 Key: NIFI-4174
 URL: https://issues.apache.org/jira/browse/NIFI-4174
 Project: Apache NiFi
  Issue Type: Bug
Affects Versions: 1.2.0
Reporter: Jorge Machado
Priority: Minor


I'm trying to extract some data from an Oracle DB.
I'm getting:


{code:java}
2017-07-11 16:19:29,612 WARN [StandardProcessScheduler Thread-7] 
o.a.n.controller.StandardProcessorNode Timed out while waiting for OnScheduled 
of 'GenerateTableFetch' processor to finish. An attempt is made to cancel the 
task via Thread.interrupt(). However it does not guarantee that the task will 
be canceled since the code inside current OnScheduled operation may have been 
written to ignore interrupts which may result in a runaway thread. This could 
lead to more issues, eventually requiring NiFi to be restarted. This is usually 
a bug in the target Processor 
'GenerateTableFetch[id=f08a3acd-ac7e-17d7-598b-8f9720fd92d4]' that needs to be 
documented, reported and eventually fixed.
2017-07-11 16:19:29,612 ERROR [StandardProcessScheduler Thread-7] 
o.a.n.p.standard.GenerateTableFetch 
GenerateTableFetch[id=f08a3acd-ac7e-17d7-598b-8f9720fd92d4] 
GenerateTableFetch[id=f08a3acd-ac7e-17d7-598b-8f9720fd92d4] failed to invoke 
@OnScheduled method due to java.lang.RuntimeException: Timed out while 
executing one of processor's OnScheduled task.; processor will not be scheduled 
to run for 30 seconds: java.lang.RuntimeException: Timed out while executing 
one of processor's OnScheduled task.
java.lang.RuntimeException: Timed out while executing one of processor's 
OnScheduled task.
at 
org.apache.nifi.controller.StandardProcessorNode.invokeTaskAsCancelableFuture(StandardProcessorNode.java:1480)
at 
org.apache.nifi.controller.StandardProcessorNode.access$000(StandardProcessorNode.java:102)
at 
org.apache.nifi.controller.StandardProcessorNode$1.run(StandardProcessorNode.java:1303)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.util.concurrent.TimeoutException: null
at java.util.concurrent.FutureTask.get(FutureTask.java:205)
at 
org.apache.nifi.controller.StandardProcessorNode.invokeTaskAsCancelableFuture(StandardProcessorNode.java:1465)
... 9 common frames omitted
2017-07-11 16:19:29,613 ERROR [StandardProcessScheduler Thread-7] 
o.a.n.controller.StandardProcessorNode Failed to invoke @OnScheduled method due 
to java.lang.RuntimeException: Timed out while executing one of processor's 
OnScheduled task.
java.lang.RuntimeException: Timed out while executing one of processor's 
OnScheduled task.
at 
org.apache.nifi.controller.StandardProcessorNode.invokeTaskAsCancelableFuture(StandardProcessorNode.java:1480)
at 
org.apache.nifi.controller.StandardProcessorNode.access$000(StandardProcessorNode.java:102)
at 
org.apache.nifi.controller.StandardProcessorNode$1.run(StandardProcessorNode.java:1303)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.util.concurrent.TimeoutException: null
at java.util.concurrent.FutureTask.get(FutureTask.java:205)
at 
org.apache.nifi.controller.StandardProcessorNode.invokeTaskAsCancelableFuture(StandardProcessorNode.java:1465)
... 9 common frames omitted
{code}

Database Connection Pooling Service:
Database Connection URL: jdbc:oracle:thin:@somehost:6779:someSID
Database Driver Class Name: oracle.jdbc.OracleDriver
Database Driver Location: /pathTo/ojdbc7.jar
Max Wait Time: 500 millis


On the processor I have:

Max Wait Time: 10 seconds
Partition Size: 100

I tried run schedules of 0 s and 10 000 s; the result is the same.
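
Since the comment above traced the timeout back to the database not being 
reachable, a quick way to confirm that outside NiFi is a plain JDBC connectivity 
check against the same thin URL. A minimal sketch, assuming ojdbc7.jar is on the 
classpath; the host/port/SID are the placeholders from this report and the 
credentials are hypothetical:

{code:java}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public class OracleConnectivityCheck {

    public static void main(String[] args) {
        // Same thin URL format as configured in the Database Connection Pooling Service;
        // host, port and SID are placeholders taken from the report.
        final String url = "jdbc:oracle:thin:@somehost:6779:someSID";
        final String user = "scott";      // hypothetical credentials
        final String password = "tiger";

        // Fail fast instead of waiting on the pool's Max Wait Time.
        DriverManager.setLoginTimeout(5);

        try (Connection conn = DriverManager.getConnection(url, user, password)) {
            System.out.println("Connected: " + conn.getMetaData().getDatabaseProductVersion());
        } catch (SQLException e) {
            // A failure here (listener down, wrong port/SID, firewall) explains
            // the OnScheduled timeout seen in GenerateTableFetch.
            System.err.println("Cannot connect to database: " + e.getMessage());
        }
    }
}
{code}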




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)