Re: PutTCP connector not cleaning up dangling connections

2017-09-20 Thread ddewaele
Small update: no garbage / noise on 1.3.0 either, so the regression must have
been introduced in 1.4.0-SNAPSHOT.

I noticed that the PutTCP processor has changed the way it processes incoming
flowfiles. That might be related.

I looked into the data provenance and noticed 3 bytes, EF BF BD (in hex), when
things go bad. (This is the UTF-8 encoding of the Unicode character
U+FFFD, REPLACEMENT CHARACTER.)

So I'm guessing some kind of encoding / charset issue.
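
For reference, those three bytes are indeed the UTF-8 encoding of U+FFFD; a
quick check from a shell (sketch only, assumes bash printf and xxd are
available):

printf '\xef\xbf\xbd' | xxd
# 00000000: efbf bd                                  ...
# EF BF BD is exactly U+FFFD (REPLACEMENT CHARACTER) encoded as UTF-8, which is
# what a decoder emits when it hits bytes that are not valid in the configured
# character set.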








--
Sent from: http://apache-nifi-users-list.2361937.n4.nabble.com/


Re: PutTCP connector not cleaning up dangling connections

2017-09-19 Thread ddewaele
I've let it run overnight on 1.4.0-SNAPSHOT. I didn't see any hanging
connections, and after timeouts they were cleaned up.

However, I noticed something else (perhaps unrelated). About 40% of the
messages that we "get" from the TCP connection contained "noise / garbage"
and didn't pass their checksum. On 1.1.0 we never had that.

If I manually "put" data on the TCP connection (via a telnet session) to
trigger a response, I don't see this "noise / garbage". So it seems to
originate from PutTCP.

Any pointers? I'm going to investigate further today and check the detailed
release notes (as I am coming from 1.1.0).





--
Sent from: http://apache-nifi-users-list.2361937.n4.nabble.com/


Re: PutTCP connector not cleaning up dangling connections

2017-09-19 Thread ddewaele
Hi,

Trying it out now. I forgot how long it takes to build :)

Will give feedback here.

Thx for the client port logging also ... that is always useful for
debugging. Perhaps we can check later how we can retrieve it in the
timeout scenarios / the standard close scenario.

Really hope this makes it into the 1.4.0 release.



--
Sent from: http://apache-nifi-users-list.2361937.n4.nabble.com/


Re: PutTCP connector not cleaning up dangling connections

2017-09-18 Thread ddewaele
Thx a lot for the quick response. Looking forward to the PR and the
release :)

Would this, for example, still make the 1.4.0 release?

It would also be very interesting to log client ports in debug mode ... I
don't know how easy that is with NIO.

There is a Keep Alive Timeout = 2 min specified on the Moxa, so it means that
the socket on the client (NiFi) side is still responding to "keep alive" packets.
(Makes sense I guess, as we would need to configure some kind of read
timeout on the Moxa to kill off the client.)

I guess the fact that we don't see anything in the stack trace is because the
socket got established in non-blocking mode, so it is in the ESTABLISHED state
but nobody is around to do any processing on it.








--
Sent from: http://apache-nifi-users-list.2361937.n4.nabble.com/


Re: PutTCP connector not cleaning up dangling connections

2017-09-17 Thread ddewaele
Stopping the processor doesn't clean up the TCP connection. It remains
ESTABLISHED.

There are 2 ways of getting out of it (neither of them is ideal).

- Restarting NiFi
- Restarting the Moxa serial ports

I've dumped the output in the following gist :
https://gist.github.com/ddewaele/83705003740674962c1e133fb617f68c

The GetTCP processor you'll see in the thread dump also interacts with the
Moxa. It is a Netty-based custom processor we created (because there was no
GetTCP at the time). However, we log all interactions (including client
ports) with this processor, and all of them end up getting closed correctly.

So the "hanging" connection originated from the built-in PutTCP processor.
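
In case it helps anyone reproducing this, a crude watch loop along these lines
(sketch only; the Moxa IP/port are the ones from our setup, adjust as needed)
makes the lingering sockets easy to spot:

# log ESTABLISHED sockets towards the Moxa once a minute
while true; do
  echo "=== $(date) ==="
  netstat -tn | grep '10.32.133.40:4001' | grep ESTABLISHED
  sleep 60
done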


Joe Witt wrote
> If you stop the processor manually does it clean them up?
> 
> When the connections appear stuck can you please get a thread dump?
> 
> bin/nifi.sh dump
> 
> The results end up in bootstrap.log.
> 
> Thanks
> Joe
> 
> On Sep 17, 2017 2:22 PM, "ddewaele" <ddewaele@> wrote:
> 
>> We are using NiFi PutTCP processors to send messages to a number of Moxa
>> onCell ip gateway devices.
>>
>> These Moxa devices are running on a cellular network with not always the
>> most ideal connection. The Moxa only allows for a maximum of 2
>> simultaneous
>> client connections.
>>
>> What we notice is that although we specify connection / read timeouts on
>> both PutTCP and the Moxa, that sometimes a connection get "stuck". (In
>> the
>> moxa network monitoring we see 2 client sockets coming from PutTCP in the
>> ESTABLISHED state that never go away).
>>
>> This doesn't always happen, but often enough for it to be considered a
>> problem, as it requires a restart of the moxa ports to clear the
>> connections
>> (manual step). It typically happens when PutTCP experiences a Timeout.
>>
>> On the PutTCP processors we have the following settings :
>>
>> - Idle Connection Expiration : 30 seconds  (we've set this higher due to
>> bad
>> gprs connection)
>> - Timeout : 10 seconds (this is only used as a timeout for establishing
>> the
>> connection)
>>
>> On the Moxas we have
>>
>> - TCP alive check time : 2min (this should force the Moxa to close the
>> socket)
>>
>> Yet for some reason the connection remains established.
>>
>> Here's what I found out :
>>
>> On the moxa I noticed a connection (with client port 48440) that is in
>> ESTABLISHED mode for 4+ hours. (blocking other connections). On the Moxa
>> I
>> can see when the connection was established :
>>
>> 2017/09/17 14:20:29 [OpMode] Port01 Connect 10.192.2.90:48440
>>
>> I can track that down in Nifi via the logs (unfortunately PutTCP doesn't
>> log
>> client ports, but from the timestamp  I'm sure it's this connection :
>>
>> 2017-09-17 14:20:10,837 DEBUG [Timer-Driven Process Thread-10]
>> o.apache.nifi.processors.standard.PutTCP
>> PutTCP[id=80231a39-1008-1159-a6fa-1f9e3751d608] No available connections,
>> creating a new one...
>> 2017-09-17 14:20:20,860 ERROR [Timer-Driven Process Thread-10]
>> o.apache.nifi.processors.standard.PutTCP
>> PutTCP[id=80231a39-1008-1159-a6fa-1f9e3751d608] No available connections,
>> and unable to create a new one, transferring
>> StandardFlowFileRecord[uuid=79f2a166-5211-4d2d-9275-03f0ce4d5b29,claim=
>> StandardContentClaim
>> [resourceClaim=StandardResourceClaim[id=1505641210025-1,
>> container=default,
>> section=1], offset=84519, length=9],offset=0,name=
>> 23934743676390659,size=9]
>> to failure: java.net.SocketTimeoutException: Timed out connecting to
>> 10.32.133.40:4001
>> 2017-09-17 14:20:20,860 ERROR [Timer-Driven Process Thread-10]
>> o.apache.nifi.processors.standard.PutTCP
>> java.net.SocketTimeoutException: Timed out connecting to
>> 10.32.133.40:4001
>> at
>> org.apache.nifi.processor.util.put.sender.SocketChannelSender.open(
>> SocketChannelSender.java:66)
>> ~[nifi-processor-utils-1.1.0.jar:1.1.0]
>> at
>> org.apache.nifi.processor.util.put.AbstractPutEventProcessor.createSender(
>> AbstractPutEventProcessor.java:312)
>> ~[nifi-processor-utils-1.1.0.jar:1.1.0]
>> at
>> org.apache.nifi.processors.standard.PutTCP.createSender(PutTCP.java:121)
>> [nifi-standard-processors-1.1.0.jar:1.1.0]
>> at
>> org.apache.nifi.processor.util.put.AbstractPutEventProcessor.
>> acquireSender(AbstractPutEventProcessor.java:334)
>> ~[nifi-processor-utils-1.1.0.jar:1.1.0]
>> at
>> org.apache.nif

PutTCP connector not cleaning up dangling connections

2017-09-17 Thread ddewaele
We are using NiFi PutTCP processors to send messages to a number of Moxa
OnCell IP gateway devices.

These Moxa devices are running on a cellular network that doesn't always have
the most ideal connection. The Moxa only allows a maximum of 2 simultaneous
client connections.

What we notice is that although we specify connection / read timeouts on
both PutTCP and the Moxa, sometimes a connection gets "stuck". (In the
Moxa network monitoring we see 2 client sockets coming from PutTCP in the
ESTABLISHED state that never go away.)

This doesn't always happen, but often enough for it to be considered a
problem, as it requires a restart of the Moxa ports to clear the connections
(a manual step). It typically happens when PutTCP experiences a timeout.

On the PutTCP processors we have the following settings :

- Idle Connection Expiration : 30 seconds (we've set this higher due to the bad
GPRS connection)
- Timeout : 10 seconds (this is only used as a timeout for establishing the
connection)

On the Moxas we have

- TCP alive check time : 2 min (this should force the Moxa to close the
socket)

Yet for some reason the connection remains established.

Here's what I found out :

On the Moxa I noticed a connection (with client port 48440) that has been in
the ESTABLISHED state for 4+ hours, blocking other connections. On the Moxa I
can see when the connection was established :

2017/09/17 14:20:29 [OpMode] Port01 Connect 10.192.2.90:48440

I can track that down in NiFi via the logs (unfortunately PutTCP doesn't log
client ports, but from the timestamp I'm sure it's this connection):

2017-09-17 14:20:10,837 DEBUG [Timer-Driven Process Thread-10]
o.apache.nifi.processors.standard.PutTCP
PutTCP[id=80231a39-1008-1159-a6fa-1f9e3751d608] No available connections,
creating a new one...
2017-09-17 14:20:20,860 ERROR [Timer-Driven Process Thread-10]
o.apache.nifi.processors.standard.PutTCP
PutTCP[id=80231a39-1008-1159-a6fa-1f9e3751d608] No available connections,
and unable to create a new one, transferring
StandardFlowFileRecord[uuid=79f2a166-5211-4d2d-9275-03f0ce4d5b29,claim=StandardContentClaim
[resourceClaim=StandardResourceClaim[id=1505641210025-1, container=default,
section=1], offset=84519, length=9],offset=0,name=23934743676390659,size=9]
to failure: java.net.SocketTimeoutException: Timed out connecting to
10.32.133.40:4001
2017-09-17 14:20:20,860 ERROR [Timer-Driven Process Thread-10]
o.apache.nifi.processors.standard.PutTCP 
java.net.SocketTimeoutException: Timed out connecting to 10.32.133.40:4001
at
org.apache.nifi.processor.util.put.sender.SocketChannelSender.open(SocketChannelSender.java:66)
~[nifi-processor-utils-1.1.0.jar:1.1.0]
at
org.apache.nifi.processor.util.put.AbstractPutEventProcessor.createSender(AbstractPutEventProcessor.java:312)
~[nifi-processor-utils-1.1.0.jar:1.1.0]
at
org.apache.nifi.processors.standard.PutTCP.createSender(PutTCP.java:121)
[nifi-standard-processors-1.1.0.jar:1.1.0]
at
org.apache.nifi.processor.util.put.AbstractPutEventProcessor.acquireSender(AbstractPutEventProcessor.java:334)
~[nifi-processor-utils-1.1.0.jar:1.1.0]
at
org.apache.nifi.processors.standard.PutTCP.onTrigger(PutTCP.java:176)
[nifi-standard-processors-1.1.0.jar:1.1.0]
at
org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1099)
[nifi-framework-core-1.1.0.jar:1.1.0]
at
org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:136)
[nifi-framework-core-1.1.0.jar:1.1.0]
at
org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:47)
[nifi-framework-core-1.1.0.jar:1.1.0]
at
org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:132)
[nifi-framework-core-1.1.0.jar:1.1.0]
at
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
[na:1.8.0_111]
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
[na:1.8.0_111]
at
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
[na:1.8.0_111]
at
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
[na:1.8.0_111]
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
[na:1.8.0_111]
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
[na:1.8.0_111]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_111]


At the OS level I indeed see the TCP connection originating from NiFi :

netstat -tn | grep 48440

tcp  711  0 10.192.2.90:48440   10.32.133.40:4001  
ESTABLISHED


lsof -i TCP:48440

COMMAND  PID USER   FD   TYPEDEVICE SIZE/OFF NODE NAME
java3424 root 1864u  IPv4 404675057  0t0  TCP
NifiServer:48440->10.32.133.40:newoak (ESTABLISHED)

ps -ef | grep 3424

root  3424  3390  8 

Re: NiFi Docker

2017-08-01 Thread ddewaele
Great ... I also have some ideas about this. I'll log a JIRA and elaborate on
those.

We can then see how to move this forward. (I'm willing to do a pull request
for this.)



--
View this message in context: 
http://apache-nifi-users-list.2361937.n4.nabble.com/NiFi-Docker-tp2562p2576.html
Sent from the Apache NiFi Users List mailing list archive at Nabble.com.


NiFi Docker

2017-07-30 Thread ddewaele
Hi,

We're using NiFi in a containerized environment (using docker-compose for
local development and test, and Docker Swarm in production).

The current Docker image doesn't have a lot of environment options, meaning
that if you want to run NiFi with some custom config like the one below :

environment:
  - NIFI_WEB_HTTP_HOST=xxx
  - NIFI_CLUSTER_HOST_NAME=xxx
  - NIFI_CLUSTER_HOST_PORT=xxx
  - NIFI_CLUSTER_PORT=xxx
  - NIFI_ZOOKEEPER_CONNECT_STRING=xxx
  - NIFI_CONTENT_REPOSITORY_ARCHIVE_MAX_USAGE_PERCENTAGE=xxx
  - CONFIG_URI_LABEL=xxx

you need to create your own "wrapper" Docker image (extending the base
apache/nifi one).

Stuff that we typically do in a NiFi installation is :

- change values in nifi.properties (a generic solution could be created for
that where NiFi property keys are provided as environment variables to
the Docker container; see the sketch after this list)
- install custom processors (nar files)
- custom log configuration (logback.xml)
- custom bootstrap values
- copy flow templates into NiFi so they can be used immediately after startup.
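
As an illustration of the first point, a minimal entrypoint sketch (the paths,
property keys and variable names are assumptions for the example, not something
the official image provides today):

#!/bin/bash
# Map selected NIFI_* environment variables onto nifi.properties keys
# before starting NiFi.
NIFI_PROPS=/opt/nifi/conf/nifi.properties

set_prop() {
  local key="$1" value="$2"
  if grep -q "^${key}=" "${NIFI_PROPS}"; then
    # replace the existing value for this key
    sed -i "s|^${key}=.*|${key}=${value}|" "${NIFI_PROPS}"
  else
    # or append the property if it isn't there yet
    echo "${key}=${value}" >> "${NIFI_PROPS}"
  fi
}

[ -n "${NIFI_WEB_HTTP_HOST}" ]     && set_prop nifi.web.http.host "${NIFI_WEB_HTTP_HOST}"
[ -n "${NIFI_CLUSTER_HOST_NAME}" ] && set_prop nifi.cluster.node.address "${NIFI_CLUSTER_HOST_NAME}"
[ -n "${NIFI_CLUSTER_PORT}" ]      && set_prop nifi.cluster.node.protocol.port "${NIFI_CLUSTER_PORT}"

exec "${NIFI_HOME:-/opt/nifi}/bin/nifi.sh" run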

Have you ever thought of extending the current Docker image to allow a
little bit more customization like the items above?

Some other things I noticed :

- The readme on https://hub.docker.com/r/apache/nifi/ doesn't mention Docker
usage at all. It seems to be the default NiFi readme.

- The current Docker image contains 2 layers of 1 GB each. I'm guessing this
is the result of a) downloading and untarring the NiFi distribution and b)
executing the chown command. Is there a reason these are spread out over 2
Docker RUN commands?

f5f88f68e088: Downloading [=>
]   25.9MB/978MB
5ed4b763cde4: Downloading [=>
]  34.02MB/978MB



--
View this message in context: 
http://apache-nifi-users-list.2361937.n4.nabble.com/NiFi-Docker-tp2562.html
Sent from the Apache NiFi Users List mailing list archive at Nabble.com.


Re: NiFi Cluster with lots of SUSPENDED, RECONNECTED, LOST events

2017-06-13 Thread ddewaele
It seems Nabble doesn't send the raw-text-formatted log snippets.

Added them in this gist :
https://gist.github.com/ddewaele/67ca6cb95b9c894a9eb8d782b2ad99a2



--
View this message in context: 
http://apache-nifi-users-list.2361937.n4.nabble.com/NiFi-Cluster-with-lots-of-SUSPENDED-RECONNECTED-LOST-events-tp2194p2195.html
Sent from the Apache NiFi Users List mailing list archive at Nabble.com.


NiFi Cluster with lots of SUSPENDED, RECONNECTED, LOST events

2017-06-13 Thread ddewaele
We have a 2 node NiFi cluster running with 3 ZooKeeper instances (replicated)
in a Docker Swarm cluster.

Most of the time the cluster is operating fine, but from time to time we notice
that NiFi stops processing messages completely. It eventually resumes after
a while (sometimes after a couple of seconds, sometimes after a couple of
minutes).

When I do a grep for o.a.n.c.l.e.CuratorLeaderElectionManager in
/srv/nifi/logs/nifi-app.log on the primary node, I see a lot of suspended /
reconnected messages.




Likewise on the other node, I see similar messages



The only real exceptions I'm seeing in the logs are these



I also see this on the UI from time to time :

com.sun.jersey.api.client.ClientHandlerException:
java.net.SocketTimeoutException: Read timed out

Is there anything I can do to further debug this ?
Is it normal to see that many connection state changes ? (The logs are full
of them.)
The solution is running on 3 VMs, using Docker Swarm. NiFi is running on 2
of those 3 VMs. We have a ZooKeeper setup running on all 3 VMs.

I don't see any errors in the ZooKeeper logs.






--
View this message in context: 
http://apache-nifi-users-list.2361937.n4.nabble.com/NiFi-Cluster-with-lots-of-SUSPENDED-RECONNECTED-LOST-events-tp2194.html
Sent from the Apache NiFi Users List mailing list archive at Nabble.com.


Re: Nifi Cluster fails to disconnect node when node was killed

2017-05-19 Thread ddewaele
Sorry, the payload should also include the nodeId:

curl -v -X PUT -d
"{\"node\":{\"nodeId\":\"b89e8418-4b7f-4743-bdf4-4a08a92f3892\",\"status\":\"DISCONNECTING\"}}"
-H "Content-Type: application/json"
http://192.168.122.141:8080/nifi-api/controller/cluster/nodes/b89e8418-4b7f-4743-bdf4-4a08a92f3892
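
The same call, parameterized (the host and nodeId are of course placeholders;
the nodeId can be looked up via the cluster endpoint first):

# look up the nodeId of the node you want to disconnect
curl -s http://192.168.122.141:8080/nifi-api/controller/cluster

# then request the state change for that node
HOST=192.168.122.141:8080
NODE_ID=b89e8418-4b7f-4743-bdf4-4a08a92f3892
curl -v -X PUT \
  -H "Content-Type: application/json" \
  -d "{\"node\":{\"nodeId\":\"${NODE_ID}\",\"status\":\"DISCONNECTING\"}}" \
  "http://${HOST}/nifi-api/controller/cluster/nodes/${NODE_ID}"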
 



--
View this message in context: 
http://apache-nifi-users-list.2361937.n4.nabble.com/Nifi-Cluster-fails-to-disconnect-node-when-node-was-killed-tp1942p1982.html
Sent from the Apache NiFi Users List mailing list archive at Nabble.com.


Re: Nifi Cluster fails to disconnect node when node was killed

2017-05-19 Thread ddewaele
You should be able to put it into the DISCONNECTED state by doing the following
call :

curl -v -X PUT -d "{\"node\":{\"status\":\"DISCONNECTING\"}}" -H
"Content-Type: application/json"
http://192.168.122.141:8080/nifi-api/controller/cluster/nodes/b89e8418-4b7f-4743-bdf4-4a08a92f3892
 

It should respond with an HTTP 200 and a message saying it went to state
DISCONNECTED.

That way you can access the GUI again and delete the node from the cluster
if you want to.

Tested this workaround with 1.2.0 and it works.



--
View this message in context: 
http://apache-nifi-users-list.2361937.n4.nabble.com/Nifi-Cluster-fails-to-disconnect-node-when-node-was-killed-tp1942p1981.html
Sent from the Apache NiFi Users List mailing list archive at Nabble.com.


Re: Nifi clusters : duplicate nodes shown in cluster overview

2017-05-19 Thread ddewaele
We're using Docker, and in our failover scenario the machine is rebooted
and/or the Docker system is restarted.

We're currently volume mapping the following :

  - /srv/nifi/flow/archive:/opt/nifi/nifi-1.2.0/conf/archive:Z
  - /srv/nifi/flows:/opt/nifi/nifi-1.2.0/conf/flows:Z
  -
/srv/nifi/content_repository:/opt/nifi/nifi-1.2.0/content_repository:Z
  -
/srv/nifi/database_repository:/opt/nifi/nifi-1.2.0/database_repository:Z
  -
/srv/nifi/flowfile_repository:/opt/nifi/nifi-1.2.0/flowfile_repository:Z
  -
/srv/nifi/provenance_repository:/opt/nifi/nifi-1.2.0/provenance_repository:Z
  - /srv/nifi/work:/opt/nifi/nifi-1.2.0/work:Z
  - /srv/nifi/logs:/opt/nifi/nifi-1.2.0/logs:Z

Are you referring to the local state management provider value (default
/opt/nifi/nifi-1.2.0/state/local) ?

If so, I guess volume mapping that folder should fix it? Would that be the
right thing to do?




--
View this message in context: 
http://apache-nifi-users-list.2361937.n4.nabble.com/Nifi-clusters-duplicate-nodes-shown-in-cluster-overview-tp1966p1972.html
Sent from the Apache NiFi Users List mailing list archive at Nabble.com.


Nifi clusters : duplicate nodes shown in cluster overview

2017-05-19 Thread ddewaele
We have a 2 node cluster (centos-a / centos-b). During one of our failover
tests, we noticed that when we rebooted centos-b, "duplicate" node
entries could sometimes be seen in the cluster.

We rebooted centos-b, and when it came back online NiFi saw 2 out
of 3 nodes connected.

centos-b was added twice (using different nodeIds).

1. centos-b : 05/19/2017 06:48:51 UTC : Node disconnected from cluster due
to Have not received a heartbeat from node in 44 seconds
2. centos-b : 05/19/2017 07:42:54 UTC : Received first heartbeat from
connecting node. Node connected.

Is this by design ? In this case (and I assume in most cases), an address /
apiPort combo should uniquely identify a particular node. Why does it get
assigned a new nodeId ?

As a result, we need to manually clean up the duplicate, disconnected
centos-b entry.


Output of the cluster rest endpoint :

 
{
  "cluster": {
"nodes": [
  {
"nodeId": "62be0e80-306a-4037-80e5-b4def5fbc78e",
"address": "centos-b",
"apiPort": 8080,
"status": "DISCONNECTED",
"roles": [],
"events": [
  {
"timestamp": "05/19/2017 06:48:51 UTC",
"category": "WARNING",
"message": "Node disconnected from cluster due to Have not
received a heartbeat from node in 44 seconds"
  },
  {
"timestamp": "05/18/2017 13:33:56 UTC",
"category": "INFO",
"message": "Node Status changed from CONNECTING to CONNECTED"
  }
]
  },
  {
"nodeId": "d41d71f2-0ab4-4d6e-bbf2-793bd4faad06",
"address": "centos-a",
"apiPort": 8080,
"status": "CONNECTED",
"heartbeat": "05/19/2017 07:44:39 UTC",
"roles": [
  "Primary Node",
  "Cluster Coordinator"
],
"activeThreadCount": 0,
"queued": "0 / 0 bytes",
"events": [
  {
"timestamp": "05/18/2017 13:33:56 UTC",
"category": "INFO",
"message": "Node Status changed from CONNECTING to CONNECTED"
  }
],
"nodeStartTime": "05/18/2017 13:33:51 UTC"
  },
  {
"nodeId": "ddd371c7-2618-4079-8c61-ee30245d15cc",
"address": "centos-b",
"apiPort": 8080,
"status": "CONNECTED",
"heartbeat": "05/19/2017 07:44:36 UTC",
"roles": [],
"activeThreadCount": 0,
"queued": "0 / 0 bytes",
"events": [
  {
"timestamp": "05/19/2017 07:42:54 UTC",
"category": "INFO",
"message": "Received first heartbeat from connecting node. Node
connected."
  },
  {
"timestamp": "05/19/2017 07:42:47 UTC",
"category": "INFO",
"message": "Connection requested from existing node. Setting
status to connecting."
  }
],
"nodeStartTime": "05/19/2017 07:42:40 UTC"
  }
],
"generated": "07:44:39 UTC"
  }
}



--
View this message in context: 
http://apache-nifi-users-list.2361937.n4.nabble.com/Nifi-clusters-duplicate-nodes-shown-in-cluster-overview-tp1966.html
Sent from the Apache NiFi Users List mailing list archive at Nabble.com.


Re: Nifi Cluster fails to disconnect node when node was killed

2017-05-18 Thread ddewaele
Hi,
 
Just wanted to point out that the newly appointed coordinator (centos-b)
does end up sending heartbeats to itself as you described. 

2017-05-18 12:41:41,336 DEBUG [Process Cluster Protocol Request-3]
o.a.n.c.c.h.ClusterProtocolHeartbeatMonitor Received new heartbeat from
centos-b:8080

It seems heartbeats are purged when a new coordinator is selected.

https://github.com/apache/nifi/blob/b73ba7f8d4f6319881c26b8faad121ceb12041ab/nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-cluster/src/main/java/org/apache/nifi/cluster/coordination/heartbeat/ClusterProtocolHeartbeatMonitor.java#L136

And disconnecting nodes can only be done based on existing heartbeats.

https://github.com/apache/nifi/blob/d838f61291d2582592754a37314911b701c6891b/nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-cluster/src/main/java/org/apache/nifi/cluster/coordination/heartbeat/AbstractHeartbeatMonitor.java#L162

As the centos-a heartbeats were purged, centos-a never gets disconnected.
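
For reference, this is easy to see by grepping the coordinator's log for the
two classes involved (same style of grep as in the other thread):

grep -E 'ClusterProtocolHeartbeatMonitor|AbstractHeartbeatMonitor' \
  /srv/nifi/logs/nifi-app.log | tail -n 50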




--
View this message in context: 
http://apache-nifi-users-list.2361937.n4.nabble.com/Nifi-Cluster-fails-to-disconnect-node-when-node-was-killed-tp1942p1954.html
Sent from the Apache NiFi Users List mailing list archive at Nabble.com.


Re: Nifi Cluster fails to disconnect node when node was killed

2017-05-18 Thread ddewaele
Found something interesting in the centos-b debug logging.

After centos-a (the coordinator) is killed, centos-b takes over. Notice how
it logs "Will not disconnect any nodes due to lack of heartbeat" and how it
still sees centos-a as connected despite the fact that there are no heartbeats
anymore.

2017-05-18 12:41:38,010 INFO [Leader Election Notification Thread-2]
o.apache.nifi.controller.FlowController This node elected Active Cluster
Coordinator
2017-05-18 12:41:38,010 DEBUG [Leader Election Notification Thread-2]
o.a.n.c.c.h.ClusterProtocolHeartbeatMonitor Purging old heartbeats
2017-05-18 12:41:38,014 INFO [Leader Election Notification Thread-1]
o.apache.nifi.controller.FlowController This node has been elected Primary
Node
2017-05-18 12:41:38,353 DEBUG [Heartbeat Monitor Thread-1]
o.a.n.c.c.h.AbstractHeartbeatMonitor Received no new heartbeats. Will not
disconnect any nodes due to lack of heartbeat
2017-05-18 12:41:41,336 DEBUG [Process Cluster Protocol Request-3]
o.a.n.c.c.h.ClusterProtocolHeartbeatMonitor Received new heartbeat from
centos-b:8080
2017-05-18 12:41:41,337 DEBUG [Process Cluster Protocol Request-3]
o.a.n.c.c.h.ClusterProtocolHeartbeatMonitor 

Calculated diff between current cluster status and node cluster status as
follows:
Node: [NodeConnectionStatus[nodeId=centos-b:8080, state=CONNECTED,
updateId=45], NodeConnectionStatus[nodeId=centos-a:8080, state=CONNECTED,
updateId=42]]
Self: [NodeConnectionStatus[nodeId=centos-b:8080, state=CONNECTED,
updateId=45], NodeConnectionStatus[nodeId=centos-a:8080, state=CONNECTED,
updateId=42]]
Difference: []


2017-05-18 12:41:41,337 INFO [Process Cluster Protocol Request-3]
o.a.n.c.p.impl.SocketProtocolListener Finished processing request
410e7db5-8bb0-4f97-8ee8-fc8647c54959 (type=HEARTBEAT, length=2341 bytes)
from centos-b:8080 in 3 millis
2017-05-18 12:41:41,339 INFO [Clustering Tasks Thread-2]
o.a.n.c.c.ClusterProtocolHeartbeater Heartbeat created at 2017-05-18
12:41:41,330 and sent to centos-b:10001 at 2017-05-18 12:41:41,339; send
took 8 millis
2017-05-18 12:41:43,354 INFO [Heartbeat Monitor Thread-1]
o.a.n.c.c.h.AbstractHeartbeatMonitor Finished processing 1 heartbeats in
93276 nanos
2017-05-18 12:41:46,346 DEBUG [Process Cluster Protocol Request-4]
o.a.n.c.c.h.ClusterProtocolHeartbeatMonitor Received new heartbeat from
centos-b:8080
2017-05-18 12:41:46,346 DEBUG [Process Cluster Protocol Request-4]
o.a.n.c.c.h.ClusterProtocolHeartbeatMonitor 

Calculated diff between current cluster status and node cluster status as
follows:
Node: [NodeConnectionStatus[nodeId=centos-b:8080, state=CONNECTED,
updateId=45], NodeConnectionStatus[nodeId=centos-a:8080, state=CONNECTED,
updateId=42]]
Self: [NodeConnectionStatus[nodeId=centos-b:8080, state=CONNECTED,
updateId=45], NodeConnectionStatus[nodeId=centos-a:8080, state=CONNECTED,
updateId=42]]
Difference: []




--
View this message in context: 
http://apache-nifi-users-list.2361937.n4.nabble.com/Nifi-Cluster-fails-to-disconnect-node-when-node-was-killed-tp1942p1950.html
Sent from the Apache NiFi Users List mailing list archive at Nabble.com.


Re: Nifi Cluster fails to disconnect node when node was killed

2017-05-18 Thread ddewaele
Thanks for the response. 

When killing a non-coordinator node, it does take 8 * 5 seconds before I see
this :

nifi-app.log:2017-05-18 12:04:29,644 INFO [Heartbeat Monitor Thread-1]
o.a.n.c.c.node.NodeClusterCoordinator Status of centos-b:8080 changed from
NodeConnectionStatus[nodeId=centos-b:8080, state=CONNECTED, updateId=26] to
NodeConnectionStatus[nodeId=centos-b:8080, state=DISCONNECTED, Disconnect
Code=Lack of Heartbeat, Disconnect Reason=Have not received a heartbeat from
node in 43 seconds, updateId=27]

When killing the coordinator node, the newly appointed coordinator doesn't
seem to detect the heartbeat timeout.

I'll see if I can enable the debug logging.
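
A sketch of what I have in mind for that (logger name based on the package of
the o.a.n.c.c classes in the log output; the exact level is an assumption):

# add a DEBUG logger for the cluster coordination classes to conf/logback.xml:
#   <logger name="org.apache.nifi.cluster" level="DEBUG"/>
# then restart NiFi (or wait for logback's auto-scan to pick it up, if enabled)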

My NiFi runs inside KVM, with 3 separate VMs: an external ZooKeeper
(replicated mode) running on all 3 VMs, and 2 VMs used for NiFi nodes.

I have the same issue in a dockerized environment.






--
View this message in context: 
http://apache-nifi-users-list.2361937.n4.nabble.com/Nifi-Cluster-fails-to-disconnect-node-when-node-was-killed-tp1942p1948.html
Sent from the Apache NiFi Users List mailing list archive at Nabble.com.


Re: Nifi Cluster fails to disconnect node when node was killed

2017-05-18 Thread ddewaele
I can reproduce the issue by killing the java processes associated with the
cluster coordinator node.

The NiFi UI will not be accessible anymore until that particular node is
brought up again, or until the node entry is removed from the cluster (via
the REST API).

Killing non-coordinator nodes does result in NiFi detecting the heartbeat loss
and flagging the node as DISCONNECTED.




--
View this message in context: 
http://apache-nifi-users-list.2361937.n4.nabble.com/Nifi-Cluster-fails-to-disconnect-node-when-node-was-killed-tp1942p1947.html
Sent from the Apache NiFi Users List mailing list archive at Nabble.com.


Nifi Cluster fails to disconnect node when node was killed

2017-05-18 Thread ddewaele
Hi,

I have a NiFi cluster up and running and I'm testing various failover
scenarios.

I have 2 nodes in the cluster :

- centos-a : Coordinator node / primary
- centos-b : Cluster node

I noticed in one of the scenarios, when I killed the Cluster Coordinator node,
that the following happened :

centos-b couldn't contact the coordinator anymore and became the new
coordinator / primary node. (as expected) :

Failed to send heartbeat due to:
org.apache.nifi.cluster.protocol.ProtocolException: Failed to send message
to Cluster Coordinator due to: java.net.ConnectException: Connection refused
(Connection refused)
This node has been elected Leader for Role 'Primary Node'
This node has been elected Leader for Role 'Cluster Coordinator'

When attempting to access the UI on centos-b, I got the following error :

2017-05-18 11:18:49,368 WARN [Replicate Request Thread-2]
o.a.n.c.c.h.r.ThreadPoolRequestReplicator Failed to replicate request GET
/nifi-api/flow/current-user to centos-a:8080 due to {}

If my understanding is correct, NiFi will try to replicate to connected
nodes in the cluster. Here, centos-a was killed a while back and should have
been disconnected, but as far as NiFi was concerned it was still connected.

As a result I cannot access the UI anymore (due to the replication error),
but I can lookup the cluster info via the REST API. And sure enough, it
still sees centos-a as being CONNECTED.

{
"cluster": {
"generated": "11:20:13 UTC",
"nodes": [
{
"activeThreadCount": 0,
"address": "centos-b",
"apiPort": 8080,
"events": [
{
"category": "INFO",
"message": "Node Status changed from CONNECTING to
CONNECTED",
"timestamp": "05/18/2017 11:17:31 UTC"
},
{
"category": "INFO",
"message": "Node Status changed from [Unknown Node]
to CONNECTING",
"timestamp": "05/18/2017 11:17:27 UTC"
}
],
"heartbeat": "05/18/2017 11:20:09 UTC",
"nodeId": "a5bce78d-23ea-4435-a0dd-4b731459f1b9",
"nodeStartTime": "05/18/2017 11:17:25 UTC",
"queued": "8,492 / 13.22 MB",
"roles": [
"Primary Node",
"Cluster Coordinator"
],
"status": "CONNECTED"
},
{
"address": "centos-a",
"apiPort": 8080,
"events": [],
"nodeId": "b89e8418-4b7f-4743-bdf4-4a08a92f3892",
"roles": [],
"status": "CONNECTED"
}
]
}
}

When centos-a was brought back online, I noticed the following status
change:

Status of centos-a:8080 changed from
NodeConnectionStatus[nodeId=centos-a:8080, state=CONNECTED, updateId=15] to
NodeConnectionStatus[nodeId=centos-a:8080, state=CONNECTING, updateId=19]

So it went from CONNECTED -> CONNECTING.

It clearly missed the DISCONNECTED step here.

When shutting down the centos-a node using nifi.sh stop, it goes into the
DISCONNECTED state :

Status of centos-a:8080 changed from
NodeConnectionStatus[nodeId=centos-a:8080, state=CONNECTED, updateId=12] to
NodeConnectionStatus[nodeId=centos-a:8
080, state=DISCONNECTED, Disconnect Code=Node was Shutdown, Disconnect
Reason=Node was Shutdown, updateId=13]

How can I debug this further, and can somebody provide some additional
insights ? I have seen nodes getting disconnected due to missing heartbeats:

Status of centos-a:8080 changed from
NodeConnectionStatus[nodeId=centos-a:8080, state=CONNECTED, updateId=10] to
NodeConnectionStatus[nodeId=centos-a:8080, state=DISCONNECTED, Disconnect
Code=Lack of Heartbeat, Disconnect Reason=Have not received a heartbeat from
node in 41 seconds, updateId=11]

But sometimes it doesn't seem to detect this, and NiFi keeps on thinking it
is CONNECTED, despite not having received heartbeats in ages.

Any ideas ?



--
View this message in context: 
http://apache-nifi-users-list.2361937.n4.nabble.com/Nifi-Cluster-fails-to-disconnect-node-when-node-was-killed-tp1942.html
Sent from the Apache NiFi Users List mailing list archive at Nabble.com.


Re: Nifi 1.1.0 cluster on Docker Swarm

2017-03-17 Thread ddewaele
Hi Jeremy,

The issue we are facing is that we need to keep nifi.web.http.host blank
in order to have a working swarm setup, but this conflicts with the way NiFi
does cluster communication. Let me try to explain:

I have 2 nifi instances (cluster nodes) in a docker swarm connected to
zookeeper (also running in the docker swarm).

- stack1_nifi1 running on port 8080 on centos-a
- stack1_nifi2 running on port 8085 on centos-b

(stack1_nifi1 and stack1_nifi2 are swarm service names and are made
available in the docker network via DNS).

My Nifi config :

# Leave blank so that it binds to all possible interfaces
nifi.web.http.host=
nifi.web.http.port=8080  #(8085 on the other node)

nifi.cluster.is.node=true
# Define the cluster node (hostname) address to uniquely identify this node.
nifi.cluster.node.address=stack1_nifi1 #(stack1_nifi2 on the other node)
nifi.cluster.node.protocol.port=10001


In the NiFi logs I notice this :

2017-03-17 11:44:45,298 INFO [main]
o.a.n.c.c.n.LeaderElectionNodeProtocolSender Determined that Cluster
Coordinator is located at stack1_nifi2:10001; will use this address for
sending heartbeat messages
2017-03-17 11:44:45,433 INFO [Process Cluster Protocol Request-1]
o.a.n.c.c.flow.PopularVoteFlowElection Vote cast by localhost:8085; this
flow now has 1 votes

In the first line the cluster node address is used, but in the second one it
seems the nifi.web.http.host is used. So the nodeIds are not using the
nifi.cluster.node.address, but seem to default to the empty
nifi.web.http.host entry (defaults to localhost).


The same thing can be seen here:

2017-03-17 11:44:50,517 INFO [main] o.a.n.c.c.node.NodeClusterCoordinator
Resetting cluster node statuses from
{localhost:8080=NodeConnectionStatus[nodeId=localhost:8080,
state=CONNECTING, updateId=3],
localhost:8085=NodeConnectionStatus[nodeId=localhost:8085, state=CONNECTING,
updateId=5]} to {localhost:8080=NodeConnectionStatus[nodeId=localhost:8080,
state=CONNECTING, updateId=3],
localhost:8085=NodeConnectionStatus[nodeId=localhost:8085, state=CONNECTING,
updateId=5]}

Shouldn't NiFi always use the nifi.cluster.node.address to generate the
nodeIds ?

It should also use that setting to send replication requests I guess :

2017-03-10 06:03:59,014 WARN [Replicate Request Thread-7]
o.a.n.c.c.h.r.ThreadPoolRequestReplicator Failed to replicate request GET
/nifi-api/flow/current-user to localhost:8085 due to {}

Because my NiFi cluster seems to be up and running (I see heartbeats going
back and forth), but I cannot access the UI due to the replication error above.

The NiFi running on centos-a:8080 is trying to send a request to
localhost:8085 where it should go to centos-b:8085 (in order to do that it
should use the nifi.cluster.node.address).
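
A quick way to verify, from inside one of the containers, that the swarm
service name itself resolves and that the peer's API is reachable on the
overlay network (sketch; assumes getent and curl are present in the image):

# run inside the stack1_nifi1 container
getent hosts stack1_nifi2
curl -s -o /dev/null -w '%{http_code}\n' http://stack1_nifi2:8085/nifi-api/flow/current-user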





Jeremy Dyer wrote
> Raf - Ok so good news and bad news. Good news its working for me. Bad news
> its working for me =) Here is the complete list of things that I changed.
> Hopefully this can at least really help narrow down what is causing the
> issue.
> 
> - I ran on a single machine. All that was available to me while at the
> airport.
> - I added a "network" section to the end of the docker-compose.yml file. I
> think you might already have that and this was just a snippet in your
> gist?
> - I removed the COPY from the Dockerfile around the custom processors
> since
> I don't have those.
> 
> In my mind the most likely issue is something around Docker swarm
> networking.





--
View this message in context: 
http://apache-nifi-users-list.2361937.n4.nabble.com/Nifi-1-1-0-cluster-on-Docker-Swarm-tp1229p1266.html
Sent from the Apache NiFi Users List mailing list archive at Nabble.com.


Re: Building nifi locally on my mac

2016-12-30 Thread ddewaele
It was indeed the JDK version. Thx.

It seems my stack trace got lost in the emails (using Nabble as a frontend).


--
View this message in context: 
http://apache-nifi-users-list.2361937.n4.nabble.com/Building-nifi-locally-on-my-mac-tp542p549.html
Sent from the Apache NiFi Users List mailing list archive at Nabble.com.


Building nifi locally on my mac

2016-12-29 Thread ddewaele
When I try to create a local build of NiFi on my Mac I always get the
following test error (on CentOS it works fine):

Any idea what is causing this and how it can be fixed ?





--
View this message in context: 
http://apache-nifi-users-list.2361937.n4.nabble.com/Building-nifi-locally-on-my-mac-tp542.html
Sent from the Apache NiFi Users List mailing list archive at Nabble.com.


Re: Processors on the fly for many sensor devices

2016-12-27 Thread ddewaele
Hi,

We've been using our Netty4-based GetTCP processor in production for a while
now with success. We currently have the code on a branch in our fork of your
repo:
https://github.com/IxorTalk/nifi-gettcp-bundle/tree/raf/netty-tcp-client

It detects read timeouts and channel inactivity, and can properly recover
from these scenarios.

It would be great if you could take a look and provide some feedback. By using
Netty I feel we don't need to implement a lot of potentially
error-prone low-level NIO plumbing.

We would be more than happy to contribute this back to you so that the
processor might end up in a future NiFi release. We feel that, especially for
IoT use cases where a lot of TCP-based communication exists (reading
sensor data), this type of processor, able to stream TCP byte streams and
output flowfiles, would be of great value.

We're also thinking about creating an InvokeTCP processor: a processor that
would be more short-lived, accepting flowfiles from other processors
(acting as TCP requests) and outputting the TCP response as a flowfile.






--
View this message in context: 
http://apache-nifi-users-list.2361937.n4.nabble.com/Re-Processors-on-the-fly-for-many-sensor-devices-tp47p540.html
Sent from the Apache NiFi Users List mailing list archive at Nabble.com.


Re: NiFi Cron scheduling

2016-12-22 Thread ddewaele
I think the problem was worse on my Mac due to date/time settings and automatic
NTP updates.
When I turned off the automatic NTP sync I didn't see the problem occurring
anymore.

Anyway, our target server is CentOS and we haven't seen the issue there.
We're also running NiFi in a Docker container now (even for local dev).

So for the moment we're covered. But good to know you were able to find the
potential culprit and were able to log an issue for it.

Thx !



--
View this message in context: 
http://apache-nifi-users-list.2361937.n4.nabble.com/NiFi-Cron-scheduling-tp481p522.html
Sent from the Apache NiFi Users List mailing list archive at Nabble.com.


Re: Making FlowFiles environment independent

2016-12-20 Thread ddewaele
Coming back to PutTCP, is there a reason why the hostname property doesn't
support EL ?

In both cases you mentioned you would like the option to externalise it or
make it dynamic.

Are there other ways of injecting a hostname in the PutTCP processor ?



--
View this message in context: 
http://apache-nifi-users-list.2361937.n4.nabble.com/Making-FlowFiles-environment-independent-tp409p498.html
Sent from the Apache NiFi Users List mailing list archive at Nabble.com.


Re: Making FlowFiles environment independent

2016-12-18 Thread ddewaele
Works like a charm ! Thanks.
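
For anyone else reading along, a rough sketch of the custom-properties approach
from the quoted reply below (the file locations, property names and values here
are just examples):

# 1. create a custom properties file with the environment-specific values
cat > /opt/nifi/conf/custom.properties <<'EOF'
moxa.hostname=10.32.133.40
moxa.port=4001
EOF

# 2. point the variable registry at it in nifi.properties:
#      nifi.variable.registry.properties=./conf/custom.properties
# 3. reference ${moxa.hostname} / ${moxa.port} in processor properties that
#    support expression language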


Bryan Bende wrote
> There is a concept of a variable registry which is described in the Admin
> Guide here [1], but it is still based on expression language.
> 
> For your use case with PutTCP, are you looking to have the hostname
> paramterized across environments (dev vs prod) or are you looking to
> connect to a different host per flow file?
> 
> [1]
> https://nifi.apache.org/docs/nifi-docs/html/administration-guide.html#custom_properties





--
View this message in context: 
http://apache-nifi-users-list.2361937.n4.nabble.com/Making-FlowFiles-environment-independent-tp409p487.html
Sent from the Apache NiFi Users List mailing list archive at Nabble.com.


Re: NiFi Cron scheduling

2016-12-18 Thread ddewaele
Hi,

I think you need to have a minimum of 6 fields (and optionally 7). I also
think the cron definition below would be executed 60 times (every second, due
to the "*" in the seconds field) every 10 minutes (due to the "*/10"):

60 times from xx:10:00 to xx:10:59
60 times from xx:20:00 to xx:20:59
...

But even with this cron definition I'm seeing a duplicate trigger when the
schedule kicks in:

12/19/2016 07:40:00.000 UTC
12/19/2016 07:39:59.972 UTC
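
To spell out the field layout (a comment sketch, based on my reading of the
Quartz docs):

# Quartz cron takes 6 fields, with an optional 7th for the year:
#   seconds  minutes  hours  day-of-month  month  day-of-week  [year]
#
# "0 0/10 * * * ?"  -> fire at second 0 of every 10th minute starting at :00,
#                      i.e. xx:00:00, xx:10:00, xx:20:00, ...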


Joe Percivall wrote
> Hello, 
> 
> Based on this[1] SO question, I believe you need to change your CRON
> schedule to be "* */10 * * *". 
> 
> [1]
> http://stackoverflow.com/questions/10401344/quartz-cron-trigger-runs-twice





--
View this message in context: 
http://apache-nifi-users-list.2361937.n4.nabble.com/NiFi-Cron-scheduling-tp481p486.html
Sent from the Apache NiFi Users List mailing list archive at Nabble.com.


Re: ReplaceText and special characters

2016-12-18 Thread ddewaele
Thx to you both for the tips.

Using ${literal('\r')} works great. Clean and simple.



Joe Percivall wrote
> This question actually gets back to a discussion on the user entering
> literal vs. escaped text. In the NiFi UI the user inputs the text into the
> box and then it is converted into a Java String which gets automatically
> escaped in order to pass along the string as the user wrote it (so a
> processor would see the literal characters "\" and "n" when the user wrote
> "\n"). Though sometimes (as evidenced by this case) the user wants the
> control character instead of the literal values entered and Koji's
> suggestion of using EL as a work-around is great. That said, I do believe
> that "${literal('\r')}" can be used instead so that a replace isn't
> needed.
>  





--
View this message in context: 
http://apache-nifi-users-list.2361937.n4.nabble.com/ReplaceText-and-special-characters-tp480p485.html
Sent from the Apache NiFi Users List mailing list archive at Nabble.com.


NiFi Cron scheduling

2016-12-18 Thread ddewaele
I noticed that when configuring a processor with the CRON expression "0 0/10
* * * ?", instead of the processor being scheduled to run every 10
minutes on the hour like this :

xx:00
xx:10
xx:20
xx:30
xx:40
xx:50

it is in fact scheduled twice every 10 minutes, at :

12/19/2016 00:09:59.971 UTC
12/19/2016 00:10:00.000 UTC

and at

12/19/2016 00:19:59.988 UTC
12/19/2016 00:20:00.000 UTC

Is there something wrong with my cron definition ?



--
View this message in context: 
http://apache-nifi-users-list.2361937.n4.nabble.com/NiFi-Cron-scheduling-tp481.html
Sent from the Apache NiFi Users List mailing list archive at Nabble.com.


ReplaceText and special characters

2016-12-18 Thread ddewaele
Hi,

I need to send a byte sequence to a TCP socket every 10 minutes. 

I've set up a GenerateFlowFile processor to generate 1 random byte every 10
minutes, followed by a ReplaceText processor that will replace that 1 byte
with my byte sequence (a string literal).

I can use SHIFT-ENTER in the ReplaceText processor to generate newlines, but
I would like to generate a carriage return instead of a newline.

Is this possible with the ReplaceText processor ? I've tried using "\r" and
"\\r" in both regex and literal mode, but I cannot get the carriage return in
the outgoing flowfile.

Any ideas on how to do this with a standard processor ? 

Also, is there another way to generate a flowfile in a CRON-like fashion ? I
read that GenerateFlowFile is typically used for load testing, whereas here it
is used to trigger a CRON-based flow. I feel like I'm abusing the
GenerateFlowFile processor for this.

Thanks.



--
View this message in context: 
http://apache-nifi-users-list.2361937.n4.nabble.com/ReplaceText-and-special-characters-tp480.html
Sent from the Apache NiFi Users List mailing list archive at Nabble.com.


Re: Processors on the fly for many sensor devices

2016-12-16 Thread ddewaele
When working with NIO in blocking mode (as we're currently doing) we will
need to ensure that the processor is capable of:

- reconnecting properly and handling read timeouts / server disconnects
properly
- cleaning up all TCP connections that it manages

This will require additional plumbing IMHO (I don't know to what extent NiFi
supports base classes for this), but I found during testing that:

- Sometimes TCP socket communication hangs (on read operations) when the
connection is flaky
- TCP sockets are not always properly closed, even after processor shutdown
(it requires a NiFi restart to close them).

I've logged most of the issues I encountered on Andrew's GitHub repo.





--
View this message in context: 
http://apache-nifi-users-list.2361937.n4.nabble.com/Re-Processors-on-the-fly-for-many-sensor-devices-tp47p466.html
Sent from the Apache NiFi Users List mailing list archive at Nabble.com.


Re: Processors on the fly for many sensor devices

2016-12-14 Thread ddewaele
Hi Andrew,

I noticed you've created a pull request to get this into the NiFi codebase and
that there were some review discussions going on.

I was wondering what the status is of the GetTCP processor.

I've also logged some issues in your GitHub repo and can create some PRs if
you like.




--
View this message in context: 
http://apache-nifi-users-list.2361937.n4.nabble.com/Re-Processors-on-the-fly-for-many-sensor-devices-tp47p427.html
Sent from the Apache NiFi Users List mailing list archive at Nabble.com.


Re: Making FlowFiles environment independent

2016-12-12 Thread ddewaele
Thanks a lot ... I'll take a look at the variable registry.

In my case, I want to have a single flow definition (flow template) that we
can deploy in dev / prod, and an easy way to externalise environment-specific
stuff in the flow (like a TCP hostname / port).

I tried passing a system property to the PutTCP hostname property but that
didn't work (when I used ${mySystemPropertyName} the processor didn't
resolve mySystemPropertyName but simply used "${mySystemPropertyName}"
as a string). I assume this is because the PutTCP hostname property doesn't
support expression language?




--
View this message in context: 
http://apache-nifi-users-list.2361937.n4.nabble.com/Making-FlowFiles-environment-independent-tp409p415.html
Sent from the Apache NiFi Users List mailing list archive at Nabble.com.


Making FlowFiles environment independent

2016-12-12 Thread ddewaele
We have a flowfile that contains a number of environment-specific values
(ports / hostnames / ...).

Am I correct in saying that there is no immediate variable registry
somewhere in NiFi, and that all of these environment-specific items need to
be passed as environment variables or Java system properties ?

I understand that the NiFi expression language allows us to retrieve
environment variables / system properties, but a number of processors don't
support the expression language for fields that do contain environment-specific
values (like the PutTCP processor hostname property).

How should we go about updating those ?



--
View this message in context: 
http://apache-nifi-users-list.2361937.n4.nabble.com/Making-FlowFiles-environment-independent-tp409.html
Sent from the Apache NiFi Users List mailing list archive at Nabble.com.


Re: Frontend for Nifi support / mailing list

2016-10-30 Thread ddewaele
In order to log in to Pony Mail you need to be an ASF committer. Using my
email client to answer threads and interact with people is just too
cumbersome.

I can understand to some degree that you don't want to use Gitter / Slack,
but I would look into promoting something more user-friendly like Nabble so
that people can use their browser to interact with people / threads.

However, it seems that there are 2 Nabble archives available:

http://apache-nifi-users-list.2361937.n4.nabble.com/  (the one I'm using
now)
http://apache-nifi.1125220.n5.nabble.com/Users-f2.html (the complete one
including developers / notifications, which I could only use to browse but not
reply)

I don't know who is maintaining these Nabble archives.


Andy LoPresto wrote
> Davy,
> 
> You may also be interested in the Pony Mail [1] interface for dev@ [2] and
> users@ [3]. Pony Mail is an Apache incubating project and it has been
> pretty useful for me (not for responding to threads but for searching
> archives and providing permalinks for reference). These pages are hosted
> by Apache and should provide permanent access to all the archives.





--
View this message in context: 
http://apache-nifi-users-list.2361937.n4.nabble.com/Re-Frontend-for-Nifi-support-mailing-list-tp2p46.html
Sent from the Apache NiFi Users List mailing list archive at Nabble.com.