To log in to Pony Mail you need to be an ASF committer. Using my
email client to reply to threads and interact with people is just too
cumbersome.
I can understand to some degree that you don't want to use Gitter / Slack,
but I would look into promoting something more
Thanks a lot ... I'll take a look at the variable registry.
In my case, I want to have a single flow definition (flow template) that we
can deploy in dev / prod, and have an easy way to externalise
environment-specific stuff in the flow (like a TCP hostname / port).
I tried passing a system
We have a flowfile that contains a number of environment-specific values
(ports / hostnames / ...).
Am I correct in saying that there is no immediate variable registry
somewhere in NiFi, and that all of these environment-specific items need to
be passed as environment variables or Java system
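Pending a first-class registry, one approach that works with a stock NiFi install is to pass the value in as a JVM system property via conf/bootstrap.conf and reference it through expression language in any processor property that supports EL (the property name `tcp.hostname` below is just an example, not a standard NiFi property):

```properties
# conf/bootstrap.conf -- expose an environment-specific value as a system property
# (the argument index just needs to be unused in your bootstrap.conf)
java.arg.15=-Dtcp.hostname=my-dev-host.example.com
```

In the flow, an EL-enabled property can then reference it as `${tcp.hostname}`, so the flow template stays identical across environments and only bootstrap.conf differs per deployment.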
Hi Andrew,
I noticed you've created a pull request to get this in the NiFi codebase and
that there were some review discussions going on.
I was wondering what the status is on the GetTCP processor.
I've also logged some issues in your GitHub repo and can create some PRs if
you like.
--
When I try to create a local build of NiFi on my Mac I always get the
following test error (on CentOS it works fine).
Any idea what is causing this and how it can be fixed?
It was indeed the JDK version. Thx.
It seems my stacktrace got lost in the emails (using Nabble as a frontend).
--
View this message in context:
http://apache-nifi-users-list.2361937.n4.nabble.com/Building-nifi-locally-on-my-mac-tp542p549.html
Sent from the Apache NiFi Users List mailing list
Hi,
I need to send a byte sequence to a TCP socket every 10 minutes.
I've set up a GenerateFlowFile processor to generate 1 random byte every 10
minutes, followed by a ReplaceText processor that will replace that 1 byte
with my byte sequence (a string literal).
I can use SHIFT-ENTER in the
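Outside NiFi, the equivalent of this GenerateFlowFile → ReplaceText → PutTCP chain is just a periodic TCP write; a minimal sketch (the payload, host, and port below are made-up examples):

```python
import socket
import time

# Hypothetical byte sequence; the real one is site-specific.
PAYLOAD = b"\x02HELLO\r\n\x03"

def send_payload(sock: socket.socket, payload: bytes = PAYLOAD) -> int:
    """Write the full byte sequence to an already-connected TCP socket."""
    sock.sendall(payload)
    return len(payload)

def run_every(interval_s: float = 600.0) -> None:
    """Connect and send the payload every `interval_s` seconds (10 minutes)."""
    while True:
        # hypothetical endpoint
        with socket.create_connection(("example-host", 4000), timeout=10) as sock:
            send_payload(sock)
        time.sleep(interval_s)
```

In NiFi itself the scheduling part is handled by the processor's Run Schedule / CRON setting, so only the payload write is the flow's real job.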
I noticed that when configuring a processor using the CRON expression "0 0/10
* * * ?", instead of the processor being scheduled to run every 10
minutes on the hour like this:
xx:00
xx:10
xx:20
xx:30
xx:40
xx:50
it is in fact scheduled twice every 10 minutes at:
12/19/2016
Thx to you both for the tips.
Using ${literal('\r')} works great. Clean and simple.
Joe Percivall wrote
> This question actually gets back to a discussion on the user entering
> literal vs. escaped text. In the NiFi UI the user inputs the text into the
> box and then it is converted into a
Hi,
I think you need to have a minimum of 6 fields (and optionally 7). I also
think the cron definition below would be executed 60 times (every second,
due to the "*") every 10 minutes (due to the "0/10"):
60 times from xx:10:00 to xx:10:59
60 times from xx:20:00 to xx:20:59
...
But even with
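The multiplication can be illustrated by counting how many of the 60 seconds in a minute a Quartz-style seconds field matches (a simplified sketch, not a full Quartz parser):

```python
def seconds_field_matches(field: str) -> int:
    """Count the seconds within one minute that match a Quartz seconds field.
    Handles only '*', a plain number, and the 'start/step' form."""
    if field == "*":
        return 60                       # fires every second of the minute
    if "/" in field:
        start, step = (int(x) for x in field.split("/"))
        return len(range(start, 60, step))
    return 1                            # a single literal second
```

So "0 0/10 * * * ?" fires once in each matching minute, while "* 0/10 * * * ?" would fire 60 times in each of those minutes.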
Works like a charm! Thanks.
Bryan Bende wrote
> There is a concept of a variable registry which is described in the Admin
> Guide here [1], but it is still based on expression language.
>
> For your use case with PutTCP, are you looking to have the hostname
> parameterized across environments
Coming back to PutTCP, is there a reason why the hostname property doesn't
support EL?
In both cases you mentioned you would like the option to externalise it or
make it dynamic.
Are there other ways of injecting a hostname in the PutTCP processor?
Hi,
We've been using our Netty4-based GetTCP processor in production for a while
now with success. We currently have the code on a branch in our forked
repo:
https://github.com/IxorTalk/nifi-gettcp-bundle/tree/raf/netty-tcp-client
It detects read timeouts and channel inactivity, and can
I think the problem was worse on my mac due to Date/Time settings and auto
NTP updates.
When turning off the automatic NTP sync I didn't see the problem occurring
anymore.
Anyway, our target server is CentOS and we haven't seen the issue there.
We're also running NiFi in a Docker container now
When working with NIO in blocking mode (as we're currently doing) we will
need to ensure that the processor is capable of:
- reconnecting properly and handling read timeouts / server disconnects
- cleaning up all TCP connections that it manages
This will require additional plumbing
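A minimal sketch of that plumbing with generic Python sockets (not the actual processor code): on a read timeout or an orderly server disconnect, the connection is closed and re-established instead of being left open.

```python
import socket
import time

def read_with_reconnect(host: str, port: int, timeout: float = 5.0,
                        retries: int = 3, delay: float = 0.1) -> bytes:
    """Connect, apply a read timeout, and on timeout/disconnect close the
    socket cleanly and reconnect instead of leaking the connection."""
    for attempt in range(retries):
        sock = socket.create_connection((host, port), timeout=timeout)
        try:
            sock.settimeout(timeout)
            data = sock.recv(4096)
            if data:                  # server sent something: done
                return data
            # empty read == orderly server disconnect: fall through and retry
        except socket.timeout:
            pass                      # read timeout: reconnect
        finally:
            sock.close()              # never leave the connection ESTABLISHED
        time.sleep(delay)
    raise ConnectionError("no data after %d attempts" % retries)
```

The key point is the `finally: sock.close()`: every exit path releases the TCP connection, which is exactly the cleanup a long-running processor must guarantee.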
Hi Jeremy,
The issue we are facing is that we need to keep nifi.web.http.host blank
in order to have a working swarm setup, but this conflicts with the way NiFi
does cluster communication. Let me try to explain:
I have 2 NiFi instances (cluster nodes) in a Docker swarm connected to
ZooKeeper.
Hi,
We're using NiFi in a containerized environment (using docker-compose for
local development and test, and Docker swarm in production).
The current Docker image doesn't have a lot of environment options, meaning
if you want to run NiFi with some custom config like the one below:
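As an illustration of the kind of substitution a custom entrypoint could perform (written in Python for brevity; the property keys are standard nifi.properties keys, while the environment-variable names are assumptions):

```python
import re

# Environment variables mapped to nifi.properties keys. The variable names
# on the left are assumptions; the keys on the right are real NiFi properties.
ENV_TO_PROP = {
    "NIFI_WEB_HTTP_HOST": "nifi.web.http.host",
    "NIFI_WEB_HTTP_PORT": "nifi.web.http.port",
    "NIFI_CLUSTER_IS_NODE": "nifi.cluster.is.node",
}

def render_properties(text: str, env: dict) -> str:
    """Rewrite 'key=value' lines in nifi.properties for every mapped
    environment variable that is set."""
    for var, prop in ENV_TO_PROP.items():
        if var in env:
            text = re.sub(r"^%s=.*$" % re.escape(prop),
                          "%s=%s" % (prop, env[var]),
                          text, flags=re.MULTILINE)
    return text
```

An entrypoint would read conf/nifi.properties, pass `os.environ` to this function, write the result back, and then exec nifi.sh, so each container picks up its config from the environment.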
Great, I also have some ideas about this. I'll log a JIRA and elaborate on
those.
We can then see how to move this forward. (Willing to do a pull request
for this.)
Hi,
I have a NiFi cluster up and running and I'm testing various failover
scenarios.
I have 2 nodes in the cluster :
- centos-a : Coordinator node / primary
- centos-b : Cluster node
I noticed in one of the scenarios, when I killed the Cluster Coordinator node,
that the following happened:
We have a 2 node cluster (centos-a / centos-b). During one of our failover
tests, we noticed that when we rebooted centos-b, sometimes "duplicate" node
entries could be seen in the cluster.
We rebooted centos-b, and when it came back online NiFi saw 2 out
of 3 nodes connected in the cluster.
Found something interesting in the centos-b debug logging: after centos-a
(the coordinator) is killed, centos-b takes over. Notice how
it "Will not disconnect any nodes due to lack of heartbeat" and how it still
sees centos-a as connected, despite the fact that there are no heartbeats
anymore.
Hi,
Just wanted to point out that the newly appointed coordinator (centos-b)
does end up sending heartbeats to itself as you described.
2017-05-18 12:41:41,336 DEBUG [Process Cluster Protocol Request-3]
o.a.n.c.c.h.ClusterProtocolHeartbeatMonitor Received new heartbeat from
centos-b:8080
It
We're using docker, and in our failover scenario the machine is rebooted
and/or the docker system is restarted.
We're currently volume mapping the following :
- /srv/nifi/flow/archive:/opt/nifi/nifi-1.2.0/conf/archive:Z
- /srv/nifi/flows:/opt/nifi/nifi-1.2.0/conf/flows:Z
-
Sorry, the payload should also include the nodeId:
curl -v -X PUT -d
"{\"node\":{\"nodeId\":\"b89e8418-4b7f-4743-bdf4-4a08a92f3892\",\"status\":\"DISCONNECTING\"}}"
-H "Content-Type: application/json"
http://192.168.122.141:8080/nifi-api/controller/cluster/nodes/b89e8418-4b7f-4743-bdf4-4a08a92f3892
You should be able to put it into DISCONNECTED mode by doing the following
call :
curl -v -X PUT -d "{\"node\":{\"status\":\"DISCONNECTING\"}}" -H
"Content-Type: application/json"
http://192.168.122.141:8080/nifi-api/controller/cluster/nodes/b89e8418-4b7f-4743-bdf4-4a08a92f3892
It should
We have a NiFi cluster running with 3 ZooKeeper instances (replicated)
in a Docker Swarm cluster.
Most of the time the cluster is operating fine, but from time to time we notice
that NiFi stops processing messages completely. It eventually resumes after
a while (sometimes after a couple of
It seems Nabble doesn't send along the raw-text-formatted log snippets.
Added them in this gist :
https://gist.github.com/ddewaele/67ca6cb95b9c894a9eb8d782b2ad99a2
I can reproduce the issue by killing the java processes associated with the
cluster coordinator node.
The NiFi UI will not be accessible anymore until that particular node is
brought up again, or until the node entry is removed from the cluster (via
the REST API).
Killing non-coordinator nodes
Thanks for the response.
When killing a non-coordinator node, it does take 8 * 5 seconds before I see
this:
nifi-app.log:2017-05-18 12:04:29,644 INFO [Heartbeat Monitor Thread-1]
o.a.n.c.c.node.NodeClusterCoordinator Status of centos-b:8080 changed from
Thx a lot for the quick response. Looking forward to the PR and the
release :)
Would this for example still make the 1.4.0 release?
It would also be very interesting to log client ports in debug mode, though I
don't know how easy that is with NIO.
There is Keep Alive Timeout = 2min specified on
I've let it run overnight on 1.4.0-SNAPSHOT. I didn't see any hanging
connections, and after timeouts they were cleaned up.
However, I noticed something else (perhaps unrelated). About 40% of the
messages that we "get" from the TCP connection contained "noise / garbage"
and didn't pass their
Small update: No garbage / noise on 1.3.0 either, so it must be in
1.4.0-SNAPSHOT.
Noticed that the PutTCP processor has changed the way it processes incoming
flowfiles. Might be related to that.
Looked into the data provenance and noticed 3 bytes EF BF BD (in hex) when
things go bad. (This
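For what it's worth, EF BF BD is exactly the UTF-8 encoding of U+FFFD, the Unicode replacement character, which usually points at a lossy charset decode/encode somewhere in the path rather than random line noise. A quick check:

```python
# EF BF BD is the UTF-8 encoding of U+FFFD (REPLACEMENT CHARACTER).
noise = b"\xef\xbf\xbd"
assert noise.decode("utf-8") == "\ufffd"

# It is what a lenient decode/encode round trip produces from invalid input:
raw = b"\x80\x81"                      # bytes that are not valid UTF-8
mangled = raw.decode("utf-8", errors="replace").encode("utf-8")
assert mangled == b"\xef\xbf\xbd\xef\xbf\xbd"
```

If the processor now decodes incoming bytes to a string and re-encodes them, any byte sequence that isn't valid in the assumed charset would come out as EF BF BD.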
Stopping the processor doesn't clean up the TCP connection. It remains
ESTABLISHED.
There are 2 ways of getting out of it (neither of them ideal):
- Restarting NiFi
- Restarting the Moxa serial ports
I've dumped the output in the following gist:
https://gist.github.com/ddewaele
We are using NiFi PutTCP processors to send messages to a number of Moxa
OnCell IP gateway devices.
These Moxa devices run on a cellular network where the connection is not
always ideal. The Moxa only allows a maximum of 2 simultaneous
client connections.
What we notice is that
Hi,
Trying it out now. I forgot how long it takes to build :)
Will give feedback here.
Thx for the client port logging also ... that is always useful for
debugging. Perhaps we can check later in what way we can retrieve it in
the timeout / standard close scenarios.
Really hope this