Failed to drop FlowFiles due to java.io.IOException: Cannot update journal file
Hi Team,

I am facing an issue when I try to empty a queue:

Failed to drop FlowFiles due to java.io.IOException: Cannot update journal file /data/nifi/flowfile_repository/dev/journals/1485.journal because this journal has already encountered a failure when attempting to write to the file. If the repository is able to checkpoint, then this problem will resolve itself. However, if the repository is unable to be checkpointed (for example, due to being out of storage space or having too many open files), then this issue may require manual intervention.

There is no issue with open files, disk space, or inodes. Please help.

--
ViVek Raghuwanshi
Mobile +1-847-848-7388
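[Editor's note: for readers hitting the same exception, the error text itself names the usual culprits: the repository cannot checkpoint because of storage space or open-file limits. A minimal Python sketch of those checks, assuming a repository path like the one in the error message (`REPO` is taken from this thread; adjust it to your install):]

```python
import os
import resource

# Path taken from the error message in this thread; adjust to your install.
REPO = "/data/nifi/flowfile_repository"

def repo_health(path):
    """Return free bytes, free inodes, and the process soft open-file limit
    for the filesystem holding the flowfile repository."""
    st = os.statvfs(path)
    free_bytes = st.f_bavail * st.f_frsize   # space available to non-root
    free_inodes = st.f_favail                # inodes available to non-root
    soft_nofile, _hard = resource.getrlimit(resource.RLIMIT_NOFILE)
    return free_bytes, free_inodes, soft_nofile

if __name__ == "__main__":
    path = REPO if os.path.exists(REPO) else "/"
    fb, fi, nofile = repo_health(path)
    print(f"{path}: free={fb / 1e9:.1f} GB, inodes={fi}, nofile soft limit={nofile}")
```

[If all three look healthy, as the poster reports, the next step is usually the nifi-app.log around the original journal-write failure, since the quoted message only reports that a *previous* write failed.]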
Re: NIFI - Load Balanced APIs
John,

If you're running a proxy in front of NiFi, you will definitely need to configure some sort of session stickiness; otherwise you can run into other issues as well. For instance, some "asynchronous requests" like updating variables may intermittently fail, because the request creates a temporary "request object" in the background that is then polled by the UI, and that resource exists only on the node the request was made to.

Thanks
-Mark

> On Aug 26, 2020, at 1:27 PM, jgunvaldson wrote:
>
> Hi All,
>
> We use an API Manager from WSO2 to proxy our NiFi APIs. This provides
> subscription, OAuth, and other features that are very useful.
>
> The problem is that when a node is disconnected in NiFi (don't ask, it
> happens more than we would like), our API Manager, having been configured
> to use that node, will send a 500 back to the user. It is not advanced
> enough to try other (possibly up) NiFi nodes; it has a round-robin list,
> so the next request usually goes to the next configured node.
>
> Thus, with round-robin across N configured nodes, roughly one in N
> requests fails whenever a node is disconnected.
>
> Question: what are some of the options for hosting a "reliable" NiFi API
> with respect to loss or disconnection of a node?
>
> Good question, right?
>
> Best Regards,
> John
NIFI - Load Balanced APIs
Hi All,

We use an API Manager from WSO2 to proxy our NiFi APIs. This provides subscription, OAuth, and other features that are very useful.

The problem is that when a node is disconnected in NiFi (don't ask, it happens more than we would like), our API Manager, having been configured to use that node, will send a 500 back to the user. It is not advanced enough to try other (possibly up) NiFi nodes; it has a round-robin list, so the next request usually goes to the next configured node.

Thus, with round-robin across N configured nodes, roughly one in N requests fails whenever a node is disconnected.

Question: what are some of the options for hosting a "reliable" NiFi API with respect to loss or disconnection of a node?

Good question, right?

Best Regards,
John
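[Editor's note: one option when the gateway itself cannot retry is client-side failover: try each node in turn and only surface an error if every node fails. A minimal sketch, assuming hypothetical node URLs; authentication, TLS trust, and the session-stickiness caveat for asynchronous requests are deliberately omitted here:]

```python
import urllib.error
import urllib.request

# Hypothetical node list; replace with your cluster's API endpoints.
NODES = ["https://nifi-1:8443", "https://nifi-2:8443", "https://nifi-3:8443"]

def get_with_failover(path, nodes, timeout=5):
    """Try each node in turn and return the first successful response body.

    A disconnected node raises URLError; HTTPError (e.g. a 500 from a
    half-up node) subclasses URLError, so both "node down" and "node
    erroring" fall through to the next candidate instead of surfacing
    a 500 to the caller.
    """
    last_err = None
    for base in nodes:
        try:
            with urllib.request.urlopen(base + path, timeout=timeout) as resp:
                return resp.read()
        except (urllib.error.URLError, OSError) as err:
            last_err = err  # this node is unusable; try the next one
    raise RuntimeError(f"all nodes failed: {last_err}")
```

[Note that this only helps for self-contained GETs; multi-step requests that create a temporary request object on one node still need stickiness, so a smarter health-checking proxy in front of the cluster is usually the more complete answer.]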
Re: [E] Re: Site to Site with multi-entry keystore?
Thanks for looking at this Andy; I guess I was mistaken about needing a single-entry keystore. Also thank you for the info regarding WWW-Authenticate behavior with mTLS. Now that you point it out, it seems rather logical: if both sides enforce authentication, there is no need to explicitly ask.

Right, I believe the credentials used and the policies on both sides are correct. The truststores and keystores on both sides are valid, not expired, with CN signatures matching what is expected and configured in Users and the RPGs. Both also set the clientAuth and serverAuth ExtendedKeyUsages; I thought that was the issue, but it is not, so the credentials should be fine for mTLS use. Both sides use TLS 1.2 with matching ciphers. Policies also look correct on both sides; in addition to the Admin Guide, I have several working S2S installations I'm cross-checking against.

I'm increasing debug in JVM SSL handshake and bootstrap logging to see if I can get more details. I can see the connection response is from the correct host:port, with 401 Unauthorized, but not the specific reason for the authentication error.

Thanks again.
patw

On Tue, Aug 25, 2020 at 3:10 PM Andy LoPresto wrote:
> All S2S authentication is performed using mutual-authentication TLS, so
> there would not be a WWW-Authenticate request. You're saying each endpoint
> has the appropriate keystore and truststore in place, and each trusts the
> other? You've also set the appropriate user policies (different from
> certificate trust; the user identity is proxied in the request itself and
> used for authorization)?
>
> Have you checked the logs/nifi-app.log and logs/nifi-user.log files to see
> what identity the incoming authentication request is presenting?
>
> Andy LoPresto
> alopre...@apache.org
> *alopresto.apa...@gmail.com *
> He/Him
> PGP Fingerprint: 70EC B3E5 98A6 5A3F D3C4 BACE 3C6E F65B 2F7D EF69
>
> On Aug 25, 2020, at 8:09 AM, Pat White wrote:
>
> Hi Folks,
>
> Does S2S require use of a single-entry keystore, or will multiple entries
> work OK?
>
> I thought I saw documentation which stated S2S will only work with
> single-entry keystores, but I'm not able to find the reference. I'm trying
> to track down a 401 Unauthorized error when doing S2S with a peer cluster,
> without receiving a follow-up credential request.
>
> Everything seems OK: policies allow both sides access, and credentials are
> valid and set both clientAuth and serverAuth. It just appears as if the
> response to the nifi-api/site-to-site GET doesn't trust the peer node and
> drops the connection, without a follow-up WWW-Authenticate request.
> However, I can't find a reason for the reject.
>
> patw
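[Editor's note: a quick way to separate TLS-level rejection from the application-level 401 discussed in this thread is to complete the mutual-TLS handshake yourself and see what gets negotiated. A minimal Python sketch, assuming PEM exports of the JKS key/trust stores (e.g. via `keytool -importkeystore` to PKCS12 plus `openssl pkcs12`); host, port, and file names are hypothetical:]

```python
import socket
import ssl

def build_mtls_context(ca_bundle=None, client_cert=None, client_key=None):
    """Build an SSLContext that verifies the server and, when a client
    cert/key pair is given, presents it for mutual TLS."""
    ctx = ssl.create_default_context(cafile=ca_bundle)
    if client_cert:
        ctx.load_cert_chain(certfile=client_cert, keyfile=client_key)
    return ctx

def probe_mtls(host, port, ctx, timeout=5):
    """Complete the handshake and report the negotiated TLS version and
    the peer certificate subject."""
    with socket.create_connection((host, port), timeout=timeout) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            subject = dict(pair[0] for pair in tls.getpeercert()["subject"])
            return tls.version(), subject

# Hypothetical usage against a peer's S2S port:
#   ctx = build_mtls_context("truststore.pem", "client.pem", "client.key")
#   print(probe_mtls("nifi-peer.example.com", 8443, ctx))
```

[If the handshake succeeds but the REST call still returns 401, the problem is authorization (the proxied user identity and its policies), not certificate trust, which matches where Andy's questions point.]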
Re: How to read an element inside another element in json with UpdateRecord
That's JsonPath, not a RecordPath, but it would be almost the same: /data[1][0] to get the date. Adjust the array indexes accordingly to get other values.

On Tue, Aug 25, 2020 at 5:52 PM Eric Ladner wrote:
> Try $.data[6][1] to get the "15.m..." entry.
>
> On Tue, Aug 25, 2020 at 3:17 PM Wesley C. Dias de Oliveira <
> wcdolive...@gmail.com> wrote:
>
>> Hi, NiFi Community.
>>
>> I'm trying to read an element inside another element with UpdateRecord
>> in the following JSON:
>>
>> "data": [
>>   ["Date", "Campaign name", "Cost"],
>>   ["2020-08-25", "01.M.VL.0.GSP", 75.14576],
>>   ["2020-08-25", "11.b.da.0.search", 344.47],
>>   ["2020-08-25", "12.m.dl.0.search", 98.04],
>>   ["2020-08-25", "13.m.dl.0.search", 276.98],
>>   ["2020-08-25", "14.m.dl.0.search", 23.7],
>>   ["2020-08-25", "15.m.dl.0.search", 3.87],
>>   ["2020-08-25", "16.b.da.0.search", 4.2],
>>   ["2020-08-25", "19.m.dl.0.display", 71.452542],
>>   ["2020-08-25", "55.m.vl.1.youtube", 322.875653],
>>   ["2020-08-25", "57.m.dl.0.youtube", 124.061768],
>>   ["2020-08-25", "58.m.vl.1.youtube", 0.387847],
>>   ["2020-08-25", "59.m.vl.1.youtube", 72.637692],
>>   ["2020-08-25", "62.b.vl.1.youtube", 1.397887]
>> ]
>>
>> For example, I need to get the value '59.m.vl.1.youtube' or the date
>> value '2020-08-25'.
>>
>> Here's my processor settings:
>> [image: image.png]
>>
>> Can someone suggest something?
>>
>> Thank you.
>> --
>> Grato,
>> Wesley C. Dias de Oliveira.
>>
>> Linux User nº 576838.
>
> --
> Eric Ladner
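[Editor's note: both suggestions in this thread express the same nested array indexing, which plain Python makes easy to verify. A sketch using an abbreviated copy of the thread's document (only three of the rows are kept, so the indexes differ from Eric's `$.data[6][1]` against the full array; row 0 is the header row):]

```python
import json

# Abbreviated version of the "data" structure from the thread.
doc = json.loads("""
{"data": [
  ["Date", "Campaign name", "Cost"],
  ["2020-08-25", "01.M.VL.0.GSP", 75.14576],
  ["2020-08-25", "59.m.vl.1.youtube", 72.637692]
]}
""")

# Row 1, column 0: the date of the first data row (what /data[1][0] selects).
date = doc["data"][1][0]
# Row 2, column 1: a campaign name.
campaign = doc["data"][2][1]

print(date)      # the first data row's date
print(campaign)  # the campaign name from the second data row
```

[The RecordPath and JsonPath forms walk the same structure: first index picks the row, second index picks the column within it.]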