Hi Noe,
No, this isn’t supported. Can you maybe explain your intended use case a bit?
In general, the flowfiles are “unaware” of the processors operating on them.
Some processors write attributes to the flowfile, but these are metadata about
the flowfile and its content, not usually about the processors that handled it.
I don’t know of a way to do this in Jolt. You could use an EvaluateJsonPath
processor to extract the MSG element to content or an attribute, and use the
Expression Language unescapeJson function to remove the escape characters.
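Outside NiFi, the effect of that two-step flow can be sketched in plain Python. This is only an illustration of what EvaluateJsonPath plus unescapeJson accomplish, not NiFi code; the field names are taken from the question in this thread:

```python
import json

# Incoming flowfile content: outer JSON with the inner message
# escaped inside the "MSG" field.
flowfile = '{"one": "one", "two": 2, "MSG": "{\\"src\\":\\"my source\\",\\"priority\\":\\"0\\"}"}'

# Step 1 (roughly EvaluateJsonPath): pull out the MSG element as a string.
msg_text = json.loads(flowfile)["MSG"]

# Step 2 (roughly unescapeJson): the extracted string is itself valid JSON,
# so parsing it removes the escaping and yields the inner object.
inner = json.loads(msg_text)
print(inner["src"])       # my source
print(inner["priority"])  # 0
```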
Andy LoPresto
alopre...@apache.org
alopresto.apa...@gmail.com
PGP
Hi Koji,
Turns out even simpler than that: EvaluateJsonPath to extract the MSG part
into an attribute, and AttributesToJSON produces normal JSON.
Thank you,
Victor
On Wed, Mar 6, 2019 at 8:55 PM Koji Kawamura wrote:
> Hello,
>
> I haven't tested myself, but using EvaluateJsonPath and ReplaceText
> with
Hello,
I haven't tested myself, but using EvaluateJsonPath and ReplaceText
with 'unescapeJson' EL function may be an alternative approach instead
of Jolt.
https://nifi.apache.org/docs/nifi-docs/html/expression-language-guide.html#unescapejson
Idea is, use EvaluateJsonPath to extract the MSG part, then use ReplaceText
with unescapeJson to write it back as the flowfile content.
Yeah, ListenTCP also doesn't handle back-pressure with the client the way it
really should.
Regarding the load balancing, I believe traditional s2s does factor in
the load of each node when deciding how to load balance, but I don't
know if this is part of load balanced connections or not.
Mark
Yup, but because of the unfortunate way the source (outside NiFi)
works, it doesn't buffer for long when the connection doesn't pull or
drops. It behaves far more like a 5 Mbps UDP stream really :-(
On Wed, 6 Mar 2019 at 21:44, Bryan Bende wrote:
>
> James, just curious, what was your source
In our case, backpressure applied all the way up to the TCP network
source, which meant we lost data. AIUI, current load balancing is round
robin (and two other options probably not relevant here). Would actual load
balancing (e.g. send to the node with the lowest OS load, or fewest active
threads) be a useful feature?
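For illustration, the difference between round-robin distribution and the suggested load-aware routing can be sketched in plain Python. The node names and thread counts here are made up, and this is not NiFi's implementation, just the idea:

```python
from itertools import cycle

nodes = ["node1", "node2", "node3"]

# Round robin: every node gets the same share regardless of how busy it is.
rr = cycle(nodes)
round_robin_pick = [next(rr) for _ in range(6)]

# Load-aware (the hypothetical alternative): route each flowfile to the
# node currently reporting the fewest active threads.
active_threads = {"node1": 12, "node2": 3, "node3": 7}

def least_loaded(loads):
    return min(loads, key=loads.get)

print(round_robin_pick)              # ['node1', 'node2', 'node3', 'node1', 'node2', 'node3']
print(least_loaded(active_threads))  # node2
```

The load-aware picker would of course need fresh load metrics from each node, which is the hard part in a real cluster.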
Is there a way to specify the number of processing threads other than in a
global setting? (i.e. if a machine has 24 cores, give it a different number of
threads than the machine with 4 cores)
On Wed, Mar 6, 2019 at 3:51 PM Joe Witt wrote:
> This is generally workable (heterogeneous node capabilities)
This is generally workable (heterogeneous node capabilities) in NiFi
clustering. But you do want to leverage back-pressure and load-balanced
connections so that faster nodes will have an opportunity to take on the
workload for slower nodes.
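That effect can be seen with a toy pull model in plain Python (nothing NiFi-specific; the node names and per-round capacities are made up): when each node takes on work only as fast as it can process it, the faster node naturally absorbs more of the backlog.

```python
from collections import Counter

# Hypothetical per-round capacities: the "fast" node can take 3
# flowfiles per round, the "slow" node only 1.
capacity = {"fast-node": 3, "slow-node": 1}

backlog = 100  # flowfiles queued behind the load-balanced connection
processed = Counter()

while backlog > 0:
    for node, cap in capacity.items():
        take = min(cap, backlog)
        processed[node] += take
        backlog -= take

print(dict(processed))  # {'fast-node': 75, 'slow-node': 25}
```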
Thanks
On Wed, Mar 6, 2019 at 3:48 PM James Srinivasan
Yes, we hit this with the new load balanced queues (which, to be fair, we
also had with remote process groups previously). Two "old" nodes got
saturated and their queues filled while three "new" nodes were fine.
My "solution" was to move everything to new hardware which we had inbound
anyway.
You may run into issues with different processing power, as some machines
may become overwhelmed before the others are saturated.
On Wed, Mar 6, 2019 at 3:34 PM Mark Payne wrote:
> Chad,
>
> This should not be a problem, given that all nodes have enough storage
> available to handle the influx
Chad,
This should not be a problem, given that all nodes have enough storage
available to handle the influx of data.
Thanks
-Mark
> On Mar 6, 2019, at 1:44 PM, Chad Woodhead wrote:
>
> Are there any negative effects of having filesystem mounts (dedicated mounts
> for each repo) used by the
I have json message that contains another json message in textual form:
{
"one": "one".
"two":2,
"MSG": "{\"src\":\"my source\",\"priority\":\"0\"}"
}
What would be transform spec to get "contained" message in json ?
{
"src": "my source",
"priority": "0"
}
Thank you
I've tried the following spec:
[
{
Hello,
Is there a way using expression language to get the current processor's id or
something similar?
I found a related jira ticket:
https://issues.apache.org/jira/browse/NIFI-4284.
Thank you
Noe
Are there any negative effects of having filesystem mounts (dedicated
mounts for each repo) used by the different NiFi repositories differ in
size on NiFi nodes within the same cluster? For instance, if some nodes
have a content_repo mount of 130 GB and other nodes have a content_repo
mount of 125 GB?
John,
It is too hard to guess from just the shared images. A full stack trace
and set of logs may be more enlightening. If unzipping/opening such a
thing is so incredibly slow, there are two culprits I've heard of/seen in
the past.
1) Anti-virus/security related software that is overly
Hi all,
I've been running a 3 node NiFi 1.8.0 cluster for a number of months without
issue. Recently after a power outage one of the nodes restarted and seems to
have got stuck on startup on the NAR unpacking stage. Each NAR is taking
over 1 minute (in some cases longer) to unpack meaning that
Dear Apache Enthusiast,
(You’re receiving this because you are subscribed to one or more user
mailing lists for an Apache Software Foundation project.)
TL;DR:
* Apache Roadshow DC is in 3 weeks. Register now at
https://apachecon.com/usroadshowdc19/
* Registration for Apache Roadshow Chicago is