Re: PublishJMS - Failed to determine destination type from destination name (v1.9.2)

2019-11-18 Thread Joe Ferner
A couple things I noticed...

There is a small bug in the call to logUnbuildableDestination: it should be
passed entry.getValue(), not entry.getKey(). With that fix, the warning would
show the actual destination name being evaluated rather than the attribute
key, which would let you see whether the variable in the flow file attribute
was being evaluated properly.
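
To make that concrete, here is a rough sketch of the fix. This is not the
actual JMSPublisher source; the surrounding structure and helper signatures
are illustrative, and only the getValue()/getKey() swap is the point:

    import javax.jms.Destination;
    import java.util.Map;
    import java.util.logging.Logger;

    final class WarningFixSketch {
        private static final Logger LOG = Logger.getLogger(WarningFixSketch.class.getName());

        // Handles the flow file attribute that carries the destination name.
        void setDestinationHeader(Map.Entry<String, String> entry) {
            Destination destination = buildDestination(entry.getValue());
            if (destination == null) {
                // Before the fix, entry.getKey() ("jms_destination") was passed
                // as the first argument, so the warning printed the header name
                // instead of the destination name that failed to resolve.
                logUnbuildableDestination(entry.getValue(), entry.getKey());
            }
        }

        private Destination buildDestination(String destinationName) {
            return null; // stand-in; the real method inspects the name
        }

        private void logUnbuildableDestination(String destinationName, String headerName) {
            LOG.warning("Failed to determine destination type from destination name '"
                    + destinationName + "'. The '" + headerName + "' header will not be set");
        }
    }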

If that fix were in place, I think you'd see that your first attempt with
"activemq:${myid}" does not have its variable evaluated. The processor does
attempt to support Expression Language in the destination name attribute, but
that support doesn't appear to extend to where buildDestination is called to
set the JMSDestination header: there, the original flow file attributes are
iterated over generically and the evaluated destination name is not used.

That said, the reason your publisher still works correctly is that setting
the JMSDestination header is not necessary for the message to be properly
published; it is more of an informational property. The publisher explicitly
publishes to the destination name as specified, including handling variables
as intended.
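
As a rough sketch of why the header doesn't matter for delivery (illustrative
only, assuming a Spring JmsTemplate like the one the processor builds on; the
names here are not the actual JMSPublisher code): the send call itself takes
the evaluated destination name, so the message reaches the right queue whether
or not the JMSDestination header was ever set.

    import javax.jms.BytesMessage;
    import org.springframework.jms.core.JmsTemplate;

    final class PublishSketch {
        private final JmsTemplate jmsTemplate;

        PublishSketch(JmsTemplate jmsTemplate) {
            this.jmsTemplate = jmsTemplate;
        }

        void publish(String evaluatedDestinationName, byte[] body) {
            // The destination is resolved from the name passed here; delivery
            // does not depend on the JMSDestination header being set.
            jmsTemplate.send(evaluatedDestinationName, session -> {
                BytesMessage message = session.createBytesMessage();
                message.writeBytes(body);
                return message;
            });
        }
    }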

As for why you still get the warning when you attempt "activemq:queue:${myid}",
I'm not sure. Even if the variable is not evaluated as expected, the presence
of "queue" in the value should be enough to eliminate the warning.
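
For reference, the name-based type check being discussed boils down to
something like the sketch below (again illustrative, not the exact source;
'session' is a hypothetical javax.jms.Session):

    import javax.jms.Destination;
    import javax.jms.JMSException;
    import javax.jms.Session;

    final class DestinationTypeSketch {
        private final Session session;

        DestinationTypeSketch(Session session) {
            this.session = session;
        }

        Destination buildDestination(String destinationName) throws JMSException {
            if (destinationName.toLowerCase().contains("queue")) {
                return session.createQueue(destinationName);
            } else if (destinationName.toLowerCase().contains("topic")) {
                return session.createTopic(destinationName);
            }
            // Neither keyword found: the caller logs the warning and leaves
            // the JMSDestination header unset.
            return null;
        }
    }

A value of "activemq:queue:${myid}" should hit the "queue" branch even if
${myid} is left unevaluated, which is why the continued warning is surprising.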

On Mon, Nov 18, 2019 at 3:53 PM Santiago Acosta <
santiago.aco...@intermodaltelematics.com> wrote:

> Hi,
>
> I am trying to configure a PublishJMS processor to publish to a QUEUE with
> a variable name using Expression Language. I set the Destination Name to
> "activemq:${myid}" which uses an attribute I added to the flowfile in a
> previous step
>
> My ConsumeJMS processor works great, which means that my configuration is
> OK. (More on this later)
>
> When the flowfile arrives at the PublishJMS processor, the following
> bulletin warning shows up:
>
> WARNING
> PublishJMS[id=...] Failed to determine destination type from destination
> name 'jms_destination'. The 'jms_destination' header will not be set
>
> The flowfile passes through as a success, the queue is being created on my
> ActiveMQ service and I can browse the message's contents in the queue.
> Everything seems to be working regardless of the warning.
>
> I took a look inside
>
> https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-jms-bundle/nifi-jms-processors/src/main/java/org/apache/nifi/jms/processors/JMSPublisher.java
> where the error message is being generated. The "private Destination
> buildDestination(final String destinationName)" method (line 136) seems to
> be returning null.
>
> I changed the Destination Name to "activemq:queue:${myid}" which does
> contain the word "queue" and should pass the condition
> "if (destinationName.toLowerCase().contains("queue"))" described on line 145.
> I thought the type
> was derived from Destination Type but there is an actual check on the name.
>
> What boggles my mind is that the warning keeps showing up, and I do not
> know how to diagnose it from where I am.
>
> Do you know if this is just a simple bug/oversight? Should I be worried
> about this warning?
>
> Thank you for your time.
>
> --
> Best regards,
> *Santiago Acosta Arreaza*
>
> Prisma building, 1st floor, Office 1.5
> Fotógrafo José Norberto Rguez. Díaz st., 2
> San Cristobal de La Laguna, SC de Tenerife
> 38204, Spain
>
>
> +34 922 31 56 05
> www.intermodaltelematics.com
>


Re: New Distributed Map Cache Implementations

2019-11-18 Thread Shawn Weeks
Created NIFI-6881 and I've got an almost working implementation. Something I'm
still trying to work out is how to store the keys: most databases don't support
byte arrays for primary keys. I know that in virtually every case in NiFi the
keys will be Strings, but I'm wondering what to do if they weren't. I could
Base64-encode everything, but that would make it hard to figure out the column
sizing for the keys. I'm trying to keep the SQL generic enough that it works on
Oracle, MySQL, Postgres, DB2, Derby, H2, and maybe SQL Server if it doesn't do
something odd.
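
For what it's worth, a minimal sketch of the Base64 approach being weighed
(the table and column names are made up, and the sizing math is the part that
gets awkward):

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;
    import java.util.Base64;

    final class JdbcCacheKeySketch {

        // A Base64 string is roughly 4/3 the size of the raw key, so a
        // 256-byte key needs a 344-character VARCHAR column.
        static String encodeKey(byte[] rawKey) {
            return Base64.getEncoder().encodeToString(rawKey);
        }

        static void put(Connection conn, byte[] key, byte[] value) throws SQLException {
            try (PreparedStatement ps = conn.prepareStatement(
                    "INSERT INTO nifi_map_cache (cache_key, cache_value) VALUES (?, ?)")) {
                ps.setString(1, encodeKey(key));
                ps.setBytes(2, value);
                ps.executeUpdate();
            }
        }
    }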

Thanks

On 11/14/19, 8:17 AM, "Matt Burgess"  wrote:

Shawn,

There are also Redis and Couchbase distributed map cache clients
already in NiFi. I don't see any Jiras or PRs related to DynamoDB or
JDBC ones. I thought about making ones for JDBC, Hazelcast, and/or
Nitrite [1] (with or without a DataGate server), but never got around
to it. I think DynamoDB and JDBC implementations would both be helpful;
the latter could even cover DynamoDB in the meantime via the Simba JDBC
driver [2].

While thinking about the JDBC one, I figured it might be nice to be
able to cache the table locally for X amount of time or N number of
entries, in case you pre-populate the cache and are just reading it
with the client. Any write (from the NiFi client) would invalidate the
local cache, and the table would be re-fetched on the next read
operation. I did something similar for the DatabaseRecordLookupService,
but that's a read-only service so I didn't have to worry about writes;
I was just trying to improve performance where possible.
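
Roughly what that could look like, as a sketch using plain java.util.concurrent
(all names here are hypothetical, not an existing NiFi API):

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.function.Function;

    final class InvalidatingReadCache {
        private final Map<String, String> local = new ConcurrentHashMap<>();
        private final Function<String, String> fetchFromTable; // stand-in for the JDBC read
        private final long ttlMillis;
        private final int maxEntries;
        private volatile long loadedAt;

        InvalidatingReadCache(Function<String, String> fetchFromTable, long ttlMillis, int maxEntries) {
            this.fetchFromTable = fetchFromTable;
            this.ttlMillis = ttlMillis;
            this.maxEntries = maxEntries;
        }

        String get(String key) {
            // Drop the local copy if it is too old or has grown too large.
            if (System.currentTimeMillis() - loadedAt > ttlMillis || local.size() > maxEntries) {
                local.clear();
            }
            return local.computeIfAbsent(key, k -> {
                loadedAt = System.currentTimeMillis();
                return fetchFromTable.apply(k); // re-fetch from the table on a miss
            });
        }

        void put(String key, String value) {
            // A write from the NiFi client invalidates the local copy; the
            // underlying table would also be updated here in a real client.
            local.clear();
        }
    }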

Regards,
Matt

[1] https://www.dizitart.org/nitrite-database.html
[2] https://www.simba.com/drivers/dynamodb-odbc-jdbc/

On Thu, Nov 14, 2019 at 8:31 AM Shawn Weeks  
wrote:
>
> Has anyone already done some work on adding new services for the distributed
> map cache? I'm looking at moving to AWS and I really don't want to have to run
> EMR just for HBase. I've been thinking about starting on either a DynamoDB or
> a simple JDBC implementation.
>
> Thanks
> Shawn
>
> Sent from my iPhone




PublishJMS - Failed to determine destination type from destination name (v1.9.2)

2019-11-18 Thread Santiago Acosta
Hi,

I am trying to configure a PublishJMS processor to publish to a QUEUE with
a variable name using Expression Language. I set the Destination Name to
"activemq:${myid}" which uses an attribute I added to the flowfile in a
previous step

My ConsumeJMS processor works great, which means that my configuration is
OK. (More on this later)

When the flowfile arrives at the PublishJMS processor, the following
bulletin warning shows up:

WARNING
PublishJMS[id=...] Failed to determine destination type from destination
name 'jms_destination'. The 'jms_destination' header will not be set

The flowfile passes through as a success, the queue is being created on my
ActiveMQ service and I can browse the message's contents in the queue.
Everything seems to be working regardless of the warning.

I took a look inside
https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-jms-bundle/nifi-jms-processors/src/main/java/org/apache/nifi/jms/processors/JMSPublisher.java
where the error message is being generated. The "private Destination
buildDestination(final String destinationName)" method (line 136) seems to
be returning null.

I changed the Destination Name to "activemq:queue:${myid}" which does
contain the word "queue" and should pass the condition
"if (destinationName.toLowerCase().contains("queue"))" described on line 145.
I thought the type
was derived from Destination Type but there is an actual check on the name.

What boggles my mind is that the warning keeps showing up, and I do not
know how to diagnose it from where I am.

Do you know if this is just a simple bug/oversight? Should I be worried
about this warning?

Thank you for your time.

-- 
Best regards,
*Santiago Acosta Arreaza*

Prisma building, 1st floor, Office 1.5
Fotógrafo José Norberto Rguez. Díaz st., 2
San Cristobal de La Laguna, SC de Tenerife
38204, Spain


+34 922 31 56 05
www.intermodaltelematics.com


Re: Nifi 1.10.0 fail to connect cluster with external zk 3.4.10 and 3.5.6

2019-11-18 Thread Joe Witt
Hello

I believe you need to upgrade the external ZooKeeper as well. 3.5.5 or
newer is ideal.

thanks

On Mon, Nov 18, 2019 at 1:24 PM kfir sahartov 
wrote:

> Hi Nifi team!
>
> I have an unsolved issue. While trying to upgrade NiFi 1.9.2 to NiFi 1.10.0
> with an external Apache ZooKeeper 3.4.10 and a 2-node NiFi configuration, I
> got a lot of ConnectionLoss exceptions from org.apache.curator, and the
> cluster fails to connect 2/2 nodes. The best I got is 1/2, or 2/2 that fails
> after some time. The Java version I use is 1.8.
>
> Can anybody help?
>
> Kfir.
>


Re: Nifi 1.10.0 fail to connect cluster with external zk 3.4.10 and 3.5.6

2019-11-18 Thread Pierre Villard
Hi Kfir,

I didn't try with ZooKeeper 3.5.6, but it worked fine with 3.5.5 (the version
used as a dependency in the NiFi code).
I did experience the same issue with ZooKeeper 3.4.x.

Could you try with ZK 3.5.5?

Hope this helps,
Pierre


Le lun. 18 nov. 2019 à 19:24, kfir sahartov  a
écrit :

> Hi Nifi team!
>
> I have an unsolved issue. While trying to upgrade NiFi 1.9.2 to NiFi 1.10.0
> with an external Apache ZooKeeper 3.4.10 and a 2-node NiFi configuration, I
> got a lot of ConnectionLoss exceptions from org.apache.curator, and the
> cluster fails to connect 2/2 nodes. The best I got is 1/2, or 2/2 that fails
> after some time. The Java version I use is 1.8.
>
> Can anybody help?
>
> Kfir.
>


Re: Nifi 1.10.0 fail to connect cluster with external zk 3.4.10 and 3.5.6

2019-11-18 Thread Bryan Bende
Hello,

NiFi 1.10.0 requires ZooKeeper 3.5.x.

https://cwiki.apache.org/confluence/display/NIFI/Migration+Guidance

Thanks,

Bryan

On Mon, Nov 18, 2019 at 1:24 PM kfir sahartov  wrote:
>
> Hi Nifi team!
>
> I have an unsolved issue. While trying to upgrade NiFi 1.9.2 to NiFi 1.10.0
> with an external Apache ZooKeeper 3.4.10 and a 2-node NiFi configuration, I
> got a lot of ConnectionLoss exceptions from org.apache.curator, and the
> cluster fails to connect 2/2 nodes. The best I got is 1/2, or 2/2 that fails
> after some time. The Java version I use is 1.8.
>
> Can anybody help?
>
> Kfir.


Nifi 1.10.0 fail to connect cluster with external zk 3.4.10 and 3.5.6

2019-11-18 Thread kfir sahartov
Hi Nifi team!

I have an unsolved issue. While trying to upgrade NiFi 1.9.2 to NiFi 1.10.0
with an external Apache ZooKeeper 3.4.10 and a 2-node NiFi configuration, I
got a lot of ConnectionLoss exceptions from org.apache.curator, and the
cluster fails to connect 2/2 nodes. The best I got is 1/2, or 2/2 that fails
after some time. The Java version I use is 1.8.

Can anybody help?

Kfir.


NiFi Cluster Joining Issue

2019-11-18 Thread Velumani, Manoj
Hello team,

Hope you are doing great.

I am Manoj from the Data Architect team at S Global.

We have been working with open-source Apache NiFi for the past year and have
built a SQL P2P replication solution with it, and it is working pretty well.
But when we started scaling to hundreds of tables in the NiFi cluster, we
started getting the error below: the cluster disconnects and no nodes join
the cluster. Any help on the reason for this error would be much appreciated.

2019-11-18 17:20:07,260 INFO [main] o.apache.nifi.controller.FlowController 
Successfully synchronized controller with proposed flow
2019-11-18 17:20:17,280 INFO [main] o.a.nifi.controller.StandardFlowService 
Connecting Node: avgdpsnifigs-3.stage.mktint.global:80
2019-11-18 17:20:17,285 INFO [main] 
o.a.n.c.c.n.LeaderElectionNodeProtocolSender Determined that Cluster 
Coordinator is located at avgdpsnifigs-2.stage.mktint.global:; will use 
this address for sending heartbeat messages
2019-11-18 17:21:35,405 INFO [Write-Ahead Local State Provider Maintenance] 
org.wali.MinimalLockingWriteAheadLog 
org.wali.MinimalLockingWriteAheadLog@3a1540be checkpointed with 1 Records and 0 
Swap Files in 23 milliseconds (Stop-the-world time = 1 milliseconds, Clear Edit 
Logs time = 1 millis), max Transaction ID 0
2019-11-18 17:23:18,199 WARN [main] o.a.nifi.controller.StandardFlowService 
Failed to connect to cluster due to: 
org.apache.nifi.cluster.protocol.ProtocolException: Failed unmarshalling 
'CONNECTION_RESPONSE' protocol message from 
avgdpsnifigs-2.stage.mktint.global/10.21.38.97: due to: 
java.net.SocketTimeoutException: Read timed out

Thank you,
Manoj.


Re: White label Apache NIFI

2019-11-18 Thread Joe Witt
Alex

This is a perfectly fine distro for it. Yep, you definitely can do this.
NiFi is made available under a permissive license:
https://www.apache.org/licenses/LICENSE-2.0

You'll need to modify the UI code to do what you want in your own fork, do
a build, and have fun.

Thanks

On Mon, Nov 18, 2019 at 11:11 AM Alex Do  wrote:

> Hello,
>
> I do not know if this is the right contact to ask this, but I was
> wondering if it is possible to white label (remove branding) Apache NiFi?
> We would like to put our company logo where the NiFi logo and drop normally
> are, and place "Powered by Apache NiFi" at the bottom. Let me know, thank you.
>
> Regards,
> Alex Do
> In2itive
>


White label Apache NIFI

2019-11-18 Thread Alex Do
Hello,

I do not know if this is the right contact to ask this, but I was wondering if
it is possible to white label (remove branding) Apache NiFi? We would like to
put our company logo where the NiFi logo and drop normally are, and place
"Powered by Apache NiFi" at the bottom. Let me know, thank you.

Regards,
Alex Do
In2itive