Re: all of our scheduled tasks not running/being scheduled....
Hi Dan,

If all available Timer Driven Threads are being used (or hang unexpectedly for some reason), then no processor can be scheduled. The number at the top left of the NiFi UI, under the NiFi logo, shows the number of threads currently working. If you see something more than 0, I'd recommend taking some thread dumps to figure out what the running threads are doing.

Other than that, I've encountered unexpected behavior with a NiFi cluster when a node hit an OutOfMemoryError: the cluster started to behave incorrectly because it could not replicate REST requests among the nodes. I'd search for any ERROR entries in nifi-app.log.

Thanks,
Koji

On Tue, Jan 30, 2018 at 1:10 PM, dan young wrote:
> [...]
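The two suggestions above (thread dumps and log scanning) can be run from the NiFi install directory. A minimal sketch, assuming a standard tarball layout (`bin/nifi.sh`, `logs/nifi-app.log`); the helper function name is mine, not part of NiFi:

```shell
# Take a thread dump of the running NiFi instance; nifi.sh's "dump" command
# writes the dump to the given file:
#   ./bin/nifi.sh dump thread-dump.txt
#
# Then scan nifi-app.log for ERROR entries, including OutOfMemoryError:
scan_nifi_log() {
    local log="${1:-logs/nifi-app.log}"
    grep -E 'ERROR|OutOfMemoryError' "$log"
}
```

If the active-thread count in the UI stays above 0 while nothing progresses, comparing two dumps taken a minute or so apart usually shows which threads are stuck.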
all of our scheduled tasks not running/being scheduled....
Hello,

We're running a secure 3-node 1.4 cluster. Has anyone seen any behaviour where the cluster just stops scheduling the running of flowfiles/tasks? I.e., cron/timer-driven processors just don't run when they're supposed to. I've tried to stop and restart a processor that is, say, set to run every 900 sec, but nothing happens. The only thing I can do is cycle through restarting each node in the cluster, and then we're good for a few days. This is something that just started happening and has occurred twice in the last week or so. Anything I should keep an eye out for or look for in the logs?

Regards,

Dan
Re: Large number of NiFi processors
Hello Jon,

The number of processors is virtually unlimited, provided that you have enough CPU to sustain the number of concurrent tasks allocated to NiFi at acceptable performance, enough disk space for the NiFi repositories, and enough RAM to cover the overhead of the processor instances. With 1,000 processors, you'll need enough concurrent tasks to keep the latency of the data flow acceptably low. Depending on the processors you're using and the design of your flow, you may need to configure NiFi with additional heap space to cover the requirements of that many processors.

NiFi 1.x has added quite a few features that are not available in the 0.x line, like multi-tenancy support, additional user authentication options (OpenID/Knox/improved LDAP configuration), improved proxy support, NiFi Registry support, new processors, increased security, and bug fixes, among other things. Take a look at some of the release notes from the 1.x line [1] for more details. There are also improved repository implementations, along with Record-based processors that improve data throughput, in the 1.x line that I don't think were back-ported to 0.x.

Are you running a single (non-clustered) instance of NiFi? You may want to create a NiFi cluster to distribute the processing of data if the resources of the host on which you're running NiFi are not able to keep up. How has your current configuration been performing?

I know this answer is very high-level, and I would be happy to dive into some details if you'd like.

[1] https://cwiki.apache.org/confluence/display/NIFI/Release+Notes

On Mon, Jan 29, 2018 at 5:24 PM Jon Allen wrote:
> [...]
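The heap-space suggestion above maps to bootstrap.conf. A sketch, assuming a 1.x tarball install; the 4g figure is illustrative sizing, not a recommendation from this thread:

```properties
# conf/bootstrap.conf -- JVM memory settings for the NiFi process.
# The defaults ship as 512m; a flow approaching 1,000 processors
# typically needs considerably more headroom.
java.arg.2=-Xms4g
java.arg.3=-Xmx4g
```

The companion knob for concurrent tasks (Maximum Timer Driven Thread Count) is set in the UI under Controller Settings rather than in a configuration file.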
Large number of NiFi processors
Is there a limit to the number of processors that can be put into a NiFi flow? I'm assuming there isn't an actual hard limit, but what's likely to break as the number of processors increases, and what's considered a large number?

We currently have a few hundred processors in our graph, but it's looking like this will head towards 1,000 in the near future. Does anyone have any suggestions for tuning the system to handle this? Are there any papers available describing what I should be looking at?

We're currently running with NiFi 0.7.4. Are there any changes in later releases that improve things in this area?

Thanks,
Jon
Re: NiFi 1.4 Clustering Error: Cannot Replicate Request GET /nifi-api/flow/current-user
Good deal, thanks for getting back to me about it. I think that I may make some changes and potentially open a PR depending on what comes of it (I'll have to take a look at the workflow and all of that good stuff first).

Cheers,
Ryan H

On Mon, Jan 29, 2018 at 12:55 PM, Bryan Bende wrote:
> [...]
Re: NiFi 1.4 Clustering Error: Cannot Replicate Request GET /nifi-api/flow/current-user
Ryan,

I'm not that familiar with Docker and DC/OS, but I think what you said is correct. The issue is that you currently can't leave nifi.web.http.host blank, because that will cause the "node API address" of each node to be calculated as 'localhost', which then means replication of requests fails. So you have to set nifi.web.http.host to something that each node can reach. I'm not familiar enough with your setup to know if there is a way to do that.

-Bryan

On Mon, Jan 29, 2018 at 12:39 PM, Ryan H wrote:
> [...]
Re: NiFi 1.4 Clustering Error: Cannot Replicate Request GET /nifi-api/flow/current-user
Hi Bryan,

Yes, that makes total sense, and it is what I figured was happening. So whatever is configured for nifi.web.http.host is where API calls will go, but this is also what Jetty will bind to, correct? So in my case, I would have to have the additional property mentioned in https://issues.apache.org/jira/browse/NIFI-3642, since Jetty can't bind to a VIP.

-Ryan H

On Mon, Jan 29, 2018 at 12:28 PM, Bryan Bende wrote:
> [...]
Re: NiFi 1.4 Clustering Error: Cannot Replicate Request GET /nifi-api/flow/current-user
Ryan,

I remember creating an issue for something that seems similar to what you are running into:

https://issues.apache.org/jira/browse/NIFI-3642

Long story short, I believe you do need to specify a value for nifi.web.http.host, because that value will be used to replicate requests that come in to the REST API, so each node needs it to be something that is reachable by the other nodes.

-Bryan

On Mon, Jan 29, 2018 at 12:03 PM, Ryan H wrote:
> [...]
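Bryan's advice translates into nifi.properties on each node. A sketch with illustrative values (the host names and ports are assumptions, not from this thread); the key point is that each node's address must be resolvable and reachable from the other nodes, not 'localhost':

```properties
# conf/nifi.properties (per node) -- addresses other nodes use
# when replicating REST API requests across the cluster.
nifi.web.http.host=nifi-node1.example.com
nifi.web.http.port=8080
nifi.cluster.node.address=nifi-node1.example.com
nifi.cluster.node.protocol.port=11443
```

Each node gets its own reachable host name here; leaving nifi.web.http.host blank is what produces the "localhost:80" node API address seen in the errors below.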
NiFi 1.4 Clustering Error: Cannot Replicate Request GET /nifi-api/flow/current-user
Dev Team,

I am running into an interesting issue while trying to cluster NiFi in a containerized environment (Docker containers running on a DC/OS cluster), and I am somewhat stuck with what to do. I am starting with just 2 NiFi nodes and a single external ZooKeeper instance (just to get it working; this will not be used for production). Currently our DC/OS cluster does not support container-to-container communication (no overlay network support at the moment), so we are using VIPs to expose the required ports on the container(s), so traffic can be mapped to a well-known address and the correct container port even though the host/host port may change.

Currently everything spins up, and the UI can be accessed on whichever node is elected the Cluster Coordinator (in this case it is the Primary Node as well); it does show that there are 2 nodes in the cluster. However, any action taken on the canvas results in the following error shown in the UI:

Node localhost:80 is unable to fulfill this request due to: Transaction c91764e4-2fc8-492b-8887-babb59981ff3 is already in progress.

When trying to access the UI of the other node, the canvas cannot be reached and the following error is shown on the error splash screen (I increased the read timeout to 30 secs; still the same):

An unexpected error has occurred
com.sun.jersey.api.client.ClientHandlerException: java.net.SocketTimeoutException: Read timed out

If configured to use the hostname of the container, then the error is:

unknown host exception

In the NiFi logs, the following errors are present (as well as some other warnings; the stack trace is echoed line-by-line through the NiFi StdOut logging handler):

2018-01-29 14:46:52,393 WARN [Replicate Request Thread-3] o.a.n.c.c.h.r.ThreadPoolRequestReplicator Failed to replicate request GET /nifi-api/flow/current-user to localhost:80 due to com.sun.jersey.api.client.ClientHandlerException: java.net.SocketTimeoutException: Read timed out
com.sun.jersey.api.client.ClientHandlerException: java.net.SocketTimeoutException: Read timed out
    at com.sun.jersey.client.urlconnection.URLConnectionClientHandler.handle(URLConnectionClientHandler.java:155)
    at com.sun.jersey.api.client.Client.handle(Client.java:652)
    at com.sun.jersey.api.client.filter.GZIPContentEncodingFilter.handle(GZIPContentEncodingFilter.java:123)
    at com.sun.jersey.api.client.WebResource.handle(WebResource.java:682)
    at com.sun.jersey.api.client.WebResource.access$200(WebResource.java:74)
    at com.sun.jersey.api.client.WebResource$Builder.get(WebResource.java:509)
    at org.apache.nifi.cluster.coordination.http.replication.ThreadPoolRequestReplicator.replicateRequest(ThreadPoolRequestReplicator.java:641)
    at org.apache.nifi.cluster.coordination.http.replication.ThreadPoolRequestReplicator$NodeHttpRequest.run(ThreadPoolRequestReplicator.java:852)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
Caused by: java.net.SocketTimeoutException: Read timed out
    at java.net.SocketInputStream.socketRead0(Native Method)
    at java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
PublishKafka setting record timestamp
Hi,

Is it possible to set the producer record timestamp within the PublishKafka_1_0 / PublishKafkaRecord_1_0 processors? I tried to use the "Attributes to Send as Headers" option with a timestamp attribute, but this did not work. I'm not sure whether the producer record's timestamp can actually be set through the headers. I'd appreciate any help.

Thanks,
Mika
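For context on why the header approach may not work: in the Kafka client API the record timestamp is a dedicated field on ProducerRecord, separate from the headers, so a publisher has to pass it explicitly at send time. A sketch using the plain kafka-clients API (the topic name, key, value, and epoch-millis timestamp are all illustrative):

```java
import org.apache.kafka.clients.producer.ProducerRecord;

public class RecordTimestampExample {
    public static void main(String[] args) {
        // ProducerRecord(topic, partition, timestamp, key, value):
        // the timestamp is a constructor argument; when it is null,
        // the producer fills in the current wall-clock time at send.
        long eventTime = 1517270400000L; // illustrative epoch millis
        ProducerRecord<String, String> record =
                new ProducerRecord<>("my-topic", null, eventTime, "key", "value");
        System.out.println(record.timestamp());
    }
}
```

Note that a broker or topic configured with message.timestamp.type=LogAppendTime will overwrite the producer-supplied timestamp regardless of what the client sends.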