From looking at the processor code, it appears it adds both the terms and
locations to the filter endpoint and should be able to filter on both. The
processor leverages the Hosebird Client [1], so it could be that the
library is not working as expected.
Is there a specific example of terms
Hello,
This might be related to:
https://issues.apache.org/jira/browse/NIFI-1020
Does anyone know if the upgrade of the Kafka client for the next NiFi
release addresses this issue?
-Bryan
On Mon, Oct 19, 2015 at 4:01 AM, 彭光裕 wrote:
> hi,
>
>
>
> I have a test topic
Hello,
I'm not that familiar with Kite, but is it possible that you need to create
the Kite dataset using the Kite CLI before StoreInKiteDataset tries to
write data to it?
It looks like that is how the test cases for this processor work:
In addition to what Mark said, there is also the option of templates [1].
Templates let you export a portion, or all, of your flow
and then import it again later. When you export a template, it will not
export any properties that are marked as sensitive properties,
so it is safe to share with
Hello,
You should be able to use expression language in the URL value; you can
reference any attribute by doing the following: ${attributeName}. So your
URL could be http://myhost/${id}
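For example (attribute name made up): if an upstream processor set an
attribute id=12345, then http://myhost/${id} would evaluate to
http://myhost/12345 at runtime.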
-Bryan
On Wed, Nov 11, 2015 at 8:03 AM, Darren Govoni wrote:
> Hi,
> I am trying
I think PutHDFS always creates them, so it isn't an option through the
properties.
-Bryan
On Thu, Nov 12, 2015 at 8:19 AM, Mark Payne wrote:
> Mark,
>
> My guess is that it was an oversight. I don't believe it was intentional
> to leave it out of PutHDFS.
>
> Thanks
>
query.
> On Nov 3, 2015 9:41 AM, "Bryan Bende" <bbe...@gmail.com> wrote:
>
>> Christopher,
>>
>> In terms of templates, the best resources we have right now are:
>>
>> https://cwiki.apache.org/confluence/display/NIFI/Example+Dataflow+Templates
>
Hi David,
After re-reading the Twitter API documentation [1], it says:
"The track, follow, and locations fields should be considered to be
combined with an OR operator. track=foo&follow=1234 returns Tweets matching
“foo” OR created by user 1234."
-Bryan
[1]
Jeff,
Are you using the 0.3.0 release?
I think this is the issue you ran into which is resolved for the next
release:
https://issues.apache.org/jira/browse/NIFI-944
With regards to ConvertJSONtoAvro, I believe it expects one JSON document
per line with a new line at the end of each line (your second
I'm not sure if this makes sense without seeing the full flow, but can you
construct the JSON for each image before MergeContent by using ReplaceText,
so that the id will already be taken care of before merging?
On Wednesday, October 14, 2015, Chris Mangold wrote:
> I have
flume/PollableSource."
>
> Can you please help me in this regard. Any step/configuration I missed.
>
> Thanks and Regards,
> Parul
>
>
> On Wed, Oct 7, 2015 at 6:57 PM, Bryan Bende <bbe...@gmail.com> wrote:
>
>> Hello,
>>
>> The NiFi Flume processors are
Hello,
When you say you were unhappy with the performance, can you give some more
information about what was not performing well?
Was the NiFi Spark Receiver not pulling messages in fast enough and they
were queuing up in NiFi?
Was NiFi not producing messages as fast as you expected?
What kind
Hello,
Just to clarify: you are seeing the messages reach the output port and
then get removed from the queue? And on the Spark side, the NiFi Spark
receiver never receives anything? Or it receives messages, but they have no
content?
-Bryan
On Monday, October 19, 2015, Rama Krishna Manne
ed
> from the queue
>
> yes
>
> And on the Spark side the NiFi Spark receiver never receives anything? Or
> it receives messages, but they have no content?
>
> It receives the data but no content to do computation
>
> On Mon, Oct 19, 2015 at 2:14 PM, Bryan Bende <bbe.
t;>> 2015-10-10 02:30:46,029 ERROR [Timer-Driven Process Thread-9]
>>> o.a.n.processors.flume.ExecuteFlumeSink
>>> ExecuteFlumeSink[id=2d08dfe7-4fd1-4a10-9d25-0b007a2c41bf]
>>> ExecuteFlumeSink[id=2d08dfe7-4fd1-4a10-9d25-0b007a2c41bf] failed to process
>>> d
Hello,
The NiFi Flume processors are for running Flume sources and sinks within
NiFi. They don't communicate with an external Flume process.
In your example you would need an ExecuteFlumeSource configured to run the
netcat source, connected to an ExecuteFlumeSink configured with the logger.
processor
> so that this new.json can be converted in to a flat document of key/value
> pairs, or an array of flat documents.
>
> Any help/guidance would be really useful.
>
> Thanks and Regards,
> Parul
>
> On Mon, Oct 12, 2015 at 10:36 PM, Bryan Bende <bbe...
vention.
> >
> > Another approach is to have cluster master in NiFi talk to ZK and decide
> > which shards to query. Divide these shards among slave nodes.
> > My understanding is the NiFi cluster master is not intended for such purpose.
> > I'm not sure if this is even possible.
Hi Chris,
In the MergeContent case, instead of putting \n in the file, try putting a
new line by hitting return.
I remember doing this once: I had created the empty file with vi and
added a new line, and then I actually got 2 new lines in my output because
I guess vi has a new line
? If so it would be interesting to see what happens if
you bump that up.
On Thu, Sep 24, 2015 at 5:06 PM, Bryan Bende <bbe...@gmail.com> wrote:
> Adam,
>
> Based on that message I suspect that MongoDB does not support sending in
> an array of documents since it looks like i
Sorry I missed Joe's email while sending mine... I can put together a
template showing this.
On Wednesday, September 23, 2015, Bryan Bende <bbe...@gmail.com> wrote:
> David,
>
> Take a look at ExtractText, it is for pulling FlowFile content into
> attributes. I think tha
to try this setup to see the results, appreciate your help.
>
> Thanks,
> -Chakri
>
> From: Bryan Bende <bbe...@gmail.com>
> Reply-To: "users@nifi.apache.org
Hi Chakri,
I believe the DistributeLoad processor is more for load balancing when
sending to downstream systems. For example, if you had two HTTP endpoints,
you could have the first relationship from DistributeLoad going to a
PostHTTP that posts to endpoint #1, and the second relationship going
Hi Dan,
This is definitely a use case that NiFi can handle.
A possible architecture for your scenario would be something like the
following...
- Run NiFi instances on the machines where you need to collect logs; these
would not be clustered, just stand-alone instances.
- These would pick up your
Dan,
A stand-alone instance is the default behavior. If you extract a NiFi
distribution and run "bin/nifi.sh start", without changing any of the
clustering related properties, then you get a single instance running on
port 8080 by default.
My thought behind sending them via site-to-site is to
Bob,
The field mappings with the JSON paths are actually something provided by
Solr. The PutSolr processor is just passing all those parameters over, and
Solr is doing the extraction, so we are limited here by what Solr provides.
I'm not aware of a way to tell Solr to select the whole document
Charlie,
The behavior you described usually means that the processor encountered an
unexpected error which was thrown back to the framework, which rolls back
the processing of that flow file and leaves it in the queue, as opposed to
an expected error, where it would usually route to a failure
Obaid,
I can't say for sure how much this would improve performance, but you might
want to wrap the OutputStream with BufferedOutputStream or BufferedWriter.
Would be curious to hear if that helps.
A similar scenario from the standard processors is ReplaceText, here is one
example where it uses
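For example, a rough sketch of the buffering idea inside onTrigger (not
tested; 'records' is just a made-up list of Strings):
import java.io.BufferedOutputStream;
import java.io.OutputStream;
import java.nio.charset.StandardCharsets;

// Wrap the FlowFile's raw OutputStream so many small writes are
// coalesced into fewer, larger writes to the content repository.
flowFile = session.write(flowFile, (OutputStream out) -> {
    try (BufferedOutputStream buffered = new BufferedOutputStream(out)) {
        for (String record : records) {
            buffered.write(record.getBytes(StandardCharsets.UTF_8));
            buffered.write('\n');
        }
    }
});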
Russell/Corey,
In 0.4.0 there is a new way for processors to indicate what they expect as
far as input; it can be required, allowed, or forbidden. This prevents
scenarios like ExecuteSQL, which at one point required an input FlowFile,
but the processor could be running and started without an
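If I recall correctly, this shows up in code as a class-level annotation;
a sketch (class name and body are made up):
import org.apache.nifi.annotation.behavior.InputRequirement;
import org.apache.nifi.annotation.behavior.InputRequirement.Requirement;
import org.apache.nifi.processor.AbstractProcessor;
import org.apache.nifi.processor.ProcessContext;
import org.apache.nifi.processor.ProcessSession;
import org.apache.nifi.processor.exception.ProcessException;

// Declares up front that this processor is a source and must not have
// incoming connections; the framework enforces it instead of failing later.
@InputRequirement(Requirement.INPUT_FORBIDDEN)
public class MySourceProcessor extends AbstractProcessor {
    @Override
    public void onTrigger(ProcessContext context, ProcessSession session) throws ProcessException {
        // generate or fetch data and emit FlowFiles here
    }
}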
Adding to what Oleg said...
GetSFTP has a property called Ignore Dotted Files, which defaults to true
and tells GetSFTP to ignore filenames that begin with dots. This can help
in this scenario if the uploader can upload to a temporary file starting
with a dot and rename it when done.
-Bryan
On Fri,
t
> documentation says? Have you looked at the source to see what’s going on
> there? If it's open please link and we can take a look.
>
> Cheers
> Oleg
>
> On Jun 3, 2016, at 4:58 PM, Kumiko Yada <kumiko.y...@ds-iq.com> wrote:
>
> Here is the code, https://g
Hi Michael,
Can you confirm that the ListHDFS that is showing the problem is pointing
at a core-site.xml that has the value of "hadoop.security.authentication"
set to "kerberos" ?
-Bryan
On Wed, Jun 8, 2016 at 12:22 PM, Michael Dyer
wrote:
> I'm looking for any
eems
> to be doing some iteration where I presume the remove is called. Perhaps it
> is not a thread safe class after all. What does Microsoft documentation
> say? Have you looked at the source to see what’s going on there? If it's
> open please link and we can take a look.
>
>
>
> C
e threads
> via UI might be the same thing.
>
> Thanks
> Kumiko
> ------
> *From:* Bryan Bende <bbe...@gmail.com>
> *Sent:* Thursday, June 9, 2016 11:26:10 AM
> *To:* users@nifi.apache.org
> *Cc:* Kevin Verhoeven; Ki Kang
>
> *Subje
Hello,
This Wiki page shows how to setup the dependencies to use the
SSLContextService from a custom processor bundle:
https://cwiki.apache.org/confluence/display/NIFI/Maven+Projects+for+Extensions#MavenProjectsforExtensions-LinkingProcessorsandControllerServices
Thanks,
Bryan
On Wed, May
What is the error you are getting from TransformXML?
On Tue, Jun 14, 2016 at 9:38 AM, Anuj Handa wrote:
> anybody has any thoughts on UTF-8 flow files with XML transformation and
> other processors ?
>
> Anuj
>
> On Mon, Jun 13, 2016 at 4:45 PM, Anuj Handa
Hi Donald,
I know this does not directly address the conflict in dependencies, but I
wanted to mention that it is not required to inherit from nifi-nar-bundles.
The Maven archetype does that by default, but you can certainly remove it;
there are some instructions on how to do so here:
I'm not sure if this would make a difference, but typically the
configuration resources would be the full paths to core-site.xml and
hdfs-site.xml. Wondering if using those instead of yarn-site.xml changes
anything.
On Monday, June 13, 2016, Ravi Papisetti (rpapiset)
wrote:
-> Send it to validation/transformation Hadoop Job -> get results back to
> Nifi -> do other things. What do you think of this approach?
>
> 2016-06-01 21:24 GMT+03:00 Bryan Bende <bbe...@gmail.com>:
>
>> NiFi is definitely suitable for processing large
Conrad,
I would think that you could do this all in NiFi.
How do the log files come into NiFi? TailFile, ListenUDP/ListenTCP,
List+FetchFile?
-Bryan
On Thu, Jun 2, 2016 at 6:41 AM, Conrad Crampton wrote:
> Hi,
>
> Any advice on ‘best’ architectural approach
Hi Kumiko,
I think you can only increment counters and report provenance events from
processors, but not query counters or query provenance.
If you are in a custom processor, you could have an AtomicInteger counter
that increments on every onTrigger and resets when 24 hours have passed?
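Roughly (a fragment of a custom processor; names are illustrative):
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

// fields in the custom processor
private final AtomicLong counter = new AtomicLong();
private volatile long windowStart = System.currentTimeMillis();

@Override
public void onTrigger(ProcessContext context, ProcessSession session) {
    final long now = System.currentTimeMillis();
    if (now - windowStart >= TimeUnit.HOURS.toMillis(24)) {
        counter.set(0);        // roll the 24-hour window
        windowStart = now;
    }
    counter.incrementAndGet();
    // ... normal processing ...
}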
Thanks,
url splitting/
> enriching etc. – probably could have done with ExecuteScript processor in
> hindsight) so am comfortable with going this route if that is deemed more
> appropriate.
>
> Thanks
> Conrad
>
> *From: *Bryan Bende <bbe...@gmail.com>
> *Reply-To: *"user
more
>>
>> questions, as always, don't hesitate to shoot another email!
>>
>>
>>
>> Thanks
>>
>> -Mark
>>
>>
>>
>>
>>
>> On Jun 2, 2016, at 9:28 AM, Conrad Crampton <conrad.cramp...@secdata.com>
>> wrote:
not suitable for Nifi.
>
> Is Nifi a suitable tool for processing large files, or should I not do
> actual processing work outside the Nifi flow?
>
> 2016-06-01 17:28 GMT+03:00 Bryan Bende <bbe...@gmail.com>:
>
>> Hello,
>>
>> This post [1] has a description of how to r
Hi Keith,
There is currently no built-in processor that directly transforms XML to
JSON.
TransformXML leverages XSLT to transform an XML document into some other
format.
In that post, the XSLT happens to transform into JSON, but it looks like
maybe it only handles top-level elements and not
Hello,
This post [1] has a description of how to redistribute data within the
same cluster. You are correct that it involves a RPG pointing back to the
same cluster.
One thing to keep in mind is that typically we do this with a List + Fetch
pattern, where the List operation produces lightweight
I personally use IntelliJ and it generally does pretty well at
automatically importing everything.
The process to import an existing Maven project is described here:
https://www.jetbrains.com/help/idea/2016.1/importing-project-from-maven-model.html
In Step #2 you would select the root directory
Also, the user guide has a description of the scheduling strategies, which
describes the cron format:
https://nifi.apache.org/docs/nifi-docs/html/user-guide.html#scheduling-tab
On Thu, Jun 16, 2016 at 1:17 PM, Pierre Villard wrote:
> Hi Keith,
>
> This is the
Hello,
Glad to hear you are getting started using ListenSyslog!
You are definitely running into something that we should consider
supporting. The current implementation treats each new-line as the message
delimiter and places each message onto a queue.
When the processor is triggered, it grabs
0
> > nifi.remote.input.secure=false
> >
> > When I try to drop the remote process group (with http://:8080/nifi),
> I see
> > error as follows for two nodes.
> >
> > [:8080] - Remote instance is not allowed for Site to Site
> > communication
> > [:8080] - Remote ins
I believe what Joe was referring to with RouteText was that it can take a
regular expression with a capture group, and output a FlowFile per unique
value of the capturing group. So if the incoming data is a FlowFile with a
bunch of syslog messages and you provide a regex that captures hostname, it
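For example, assuming RFC3164-style lines like
<13>Oct 19 04:01:02 host1 app: some message
a grouping expression along the lines of
^<\d+>\S+\s+\d+\s+\S+\s+(\S+)\s.*
would capture "host1", and RouteText would emit one FlowFile per distinct
hostname it sees in the incoming content.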
ery parameter?
>
> Regards,
>
> Sudeep
>
> On Thu, Feb 4, 2016 at 9:11 PM, sudeep mishra <sudeepshekh...@gmail.com>
> wrote:
>
>> Thanks for the feedback Bryan.
>>
>> Yes I need a processor similar to what you described.
>>
>> On Thu, Feb 4,
Hi Sudeep,
From looking at the GetMongo processor, I don't think this can be done
today. That processor is meant to be a source processor that extracts data
from Mongo using a fixed query.
It seems to me like we would need a FetchMongo processor with a Query field
that supported expression
at 10:48 AM, Madhukar Thota <madhukar.th...@gmail.com>
wrote:
> Thanks Bryan. I will look into ExtractText processor.
>
> Do you know what scripting languages are supported with new processors?
>
> -Madhu
>
> On Fri, Feb 12, 2016 at 9:27 AM, Bryan Bende <bbe..
Just adding to what Juan said...
The PutMongo processor sends the content of a FlowFile to Mongo, so if you
use AttributesToJson -> PutMongo, with AttributesToJson Destination set to
flowfile-content, then you would be sending the attributes to Mongo.
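For example (attribute names made up), a FlowFile with attributes
filename=image1.png and id=42 would arrive in Mongo as:
{"filename":"image1.png","id":"42"}
(note the values are strings, since attributes are strings internally).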
-Bryan
On Wed, Feb 3, 2016 at 9:22 AM, Juan
or is available somewhere or
> planned J . I have seen PutEmail but not the dual processor in the help
>
>
>
> The goal is to automatically and regularly processes incoming mail ,
> transforms the content and index the transformed content with solr
>
> phil
>
> Best r
Hi Kyle,
It seems like the stack trace is suggesting that Spark is trying to
download dependencies; here is the line that references
Executor.updateDependencies:
https://github.com/apache/spark/blob/master/core/src/main/scala/org/apache/spark/executor/Executor.scala#L391
Any chance you are behind
Hi Russ,
If you have the traditional bundle with a jar project for your processors
and a nar project that packages everything, then the additionalDetails.html
goes in the jar project under src/main/resources/docs followed by a
directory with the fully qualified class name of your processor.
As
Hello,
MergeContent has properties for header, demarcator, and footer, and also
has a strategy property which specifies whether these values come from a
file or inline text.
If you do inline text and specify a demarcator of a new line (shift + enter
in the demarcator value) then binary
Just wanted to point out that the stack trace doesn't actually show the
error coming from code in the NiFi Site-To-Site client, so I wonder if it
is something else related to Spark.
Seems similar to this error, but not sure:
I know this does not address the larger problem, but in this specific case,
would the 0.4.1 Kafka NAR still work in 0.5.x?
If the NAR doesn't depend on any other NARs I would think it would still
work, and could be a workaround for those that need to stay on Kafka 0.8.2.
On Sunday, February 21,
Keaton,
You can definitely build a REST service in NiFi! I would take a look at
HandleHttpRequest and HandleHttpResponse.
HandleHttpRequest would be the entry point of your service, the FlowFiles
coming out of this processor would represent the request being made, you
can then perform whatever
Hi Prabhu,
How did you end up converting your CSV into JSON?
PutHBaseJSON creates a single row from a JSON document. In your example
above, using n1 as the rowId, it would create a row with columns n2 - n22.
Are you seeing columns missing, or are you missing whole rows from your
original CSV?
2 store as columns in hbase.
>
> some of the rows (n1's) are stored inside the table but the remaining are
> read but not stored.
>
> Thanks,
> Prabhu Mahendran
>
> On Tue, Apr 12, 2016 at 1:58 PM, Bryan Bende <bbe...@gmail.com> wrote:
>
>> Hi Prabhu,
>>
Hi Lee,
The List+Fetch model in a cluster is one of the trickier configurations to
set up.
This article has a good description with a diagram under the "pulling
section" that shows ListHDFS+FetchHDFS, but should be the same for
ListFile+FetchFile:
I think the problem is that all attributes are actually Strings internally;
even after calling toNumber(), that is only temporary while the expression
language is executing.
So by the time it gets to AttributesToJson, it doesn't have any information
about the type of each attribute and they all end
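For example (attribute name illustrative): if count is 42, updating it with
${count:toNumber():plus(1)} does the arithmetic while the expression is
evaluated, but the value stored back on the FlowFile is the string "43",
and AttributesToJson will treat it like any other string.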
I think this is a legitimate bug that was introduced in 0.5.0.
I created this ticket: https://issues.apache.org/jira/browse/NIFI-1596
For those interested, I think the line of code causing the problem is this:
Uwe,
Personally I don't have that much experience with MongoDB, but the
additional functionality you described sounds like something we would want
to support. Looking through JIRA I only see one ticket related to MongoDB
to add SSL support [1] so I think it would be great to create a new JIRA to
Russell,
Just want to confirm what you are seeing... so when you bring up the usage
for your processor, you see the normal documentation, but you don't see an
"Additional Details..." link at the top of the page?
One example I know of is the PutSolrContentStream processor:
I suspect this is a dependency problem with the way the NAR was built.
How did you create the new project structure for your copied PutHBaseJSON?
You would need the same dependencies that nifi-hbase-bundle has...
A provided dependency on the hbase client service API in your processor's
pom:
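Something like the following (version typically inherited; the artifact
name here assumes the standard NiFi HBase bundle layout):
<dependency>
    <groupId>org.apache.nifi</groupId>
    <artifactId>nifi-hbase-client-service-api</artifactId>
    <scope>provided</scope>
</dependency>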
Hong,
Glad to hear you are getting started with NiFi! What do your property names
look like on EvaluateJsonPath?
Typically if you wanted to extract the effective timestamp, event id, and
applicant id from your example json, then you would add properties to
EvaluateJsonPath like the following:
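For example, with Destination set to flowfile-attribute (property names
and JSON paths are illustrative; adjust to your actual field names):
timestamp = $.effectiveTimestamp
event.id = $.eventId
applicant.id = $.applicantId
Each property name becomes the attribute that receives the extracted value.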
Hello,
I believe this example shows an approach to do it (it includes Hive even
though the title is Solr/banana):
https://community.hortonworks.com/articles/1282/sample-hdfnifi-flow-to-push-tweets-into-solrbanana.html
The short version is that it extracts several attributes from each tweet
using
Also, this blog has a picture of what I described with MergeContent:
https://blogs.apache.org/nifi/entry/indexing_tweets_with_nifi_and
-Bryan
On Thu, Apr 21, 2016 at 4:37 PM, Bryan Bende <bbe...@gmail.com> wrote:
> Hi Igor,
>
> I don't know that much about Hive so I can't
Hi Susheel,
In addition to what Pierre mentioned, if you are interested in an example
of using HandleHttpRequest/Response, there is a template in this repository:
https://github.com/hortonworks-gallery/nifi-templates
The template is HttpExecuteLsCommand.xml and shows how to build a web
service
Hi Conrad,
Sorry this has been so challenging to set up. After trying it out myself, I
believe the problem you ran into when you didn't set the System properties
is actually a legit bug in the SiteToSiteClient...
I wrote it up in this JIRA [1], but the short answer is that it never uses
those
Conrad,
I think the error message is a little misleading; it says...
"Unable to communicate with yarn-cm1.mis-cds.local:9870 because it requires
Secure Site-to-Site communications, but this instance is not configured for
secure communications"
That statement is saying that your NiFi
Hi Conrad,
I think there are a couple of things at play here...
One is that the SSL properties need to be set on the
SiteToSiteClientBuilder, rather than through system properties. There
should be methods to set the keystore and other values.
In a secured NiFi instance, the certificate you are
Conrad,
Unfortunately I think this is a result of the issue you discovered with the
SSLContext not getting created from the properties on the
SiteToSiteClientBuilder...
What's happening is the Spark side is hitting this:
if (siteToSiteSecure) {
    if (sslContext == null) {
        throw new
When using the "text" strategy you may have to do shift+enter in the
demarcator field to create a new line.
On Thu, May 12, 2016 at 3:52 PM, Joe Witt wrote:
> Igor,
>
> I believe it will encode whatever you give it in UTF-8 and place those
> bytes in. For absolute control
Hello,
I think this would probably be better handled by SplitText with a line
count of 1.
SplitJson would be more for splitting an array of JSON documents, or a
field that is an array.
-Bryan
On Tue, May 17, 2016 at 12:15 PM, Madhukar Thota
wrote:
> I have a
17, 2016 at 1:29 PM, Andrew Grande <agra...@hortonworks.com>
>> wrote:
>>
>>> Try SplitText with a header line count of 1. It should skip it and give
>>> the 2nd line as a result.
>>>
>>> Andrew
>>>
>>> From: Madhukar Thota <
wrote:
>>>
>>>> Try SplitText with a header line count of 1. It should skip it and give
>>>> the 2nd line as a result.
>>>>
>>>> Andrew
>>>>
>>>> From: Madhukar Thota <madhukar.th...@gmail.com>
>>&
They are treated with the same priority, but as Oleg mentioned, the PRs do
make it easier for collaborative review and have the built-in integration
with Travis, although there are currently some issues getting it to work
consistently.
On Tue, May 3, 2016 at 11:26 AM, Suneel Marthi wrote:
>
wProcess
> page to remove the ambiguity.
>
>
> On Tue, May 3, 2016 at 11:27 AM, Bryan Bende <bbe...@gmail.com> wrote:
>
>> They are treated with same priority, but as Oleg mentioned, the PRs do
>> make it easier for collaborative review and has the built in integra
provenance_repository/123674.prov in 698 milliseconds
> 2016-05-06 20:03:16,584 INFO [Provenance Repository Rollover Thread-1]
> o.a.n.p.PersistentProvenanceRepository Successfully Rolled over Provenance
> Event file containing 2770 records
> 2016-05-06 20:03:33,473 INFO [pool-18-thread-1]
>
Hello,
It seems like maybe a wrong version of a library is being used, or maybe a
JAR is missing that needs to be included in your NAR.
Can you share what dependencies you have in the pom.xml of your processors
project?
You can check under
fo :
> /usr/local/hadoop/hadoop-2.6.0/etc/hadoop/core-site.xml,/usr/local/hadoop/hbase-1.1.2/conf/hbase-site.xml
> ZooKeeper QuorumInfo: No value set
> ZooKeeper Client PortInfo : No value set
> ZooKeeper ZNode ParentInfo : /hbase
> HBase Client RetriesInfo : 1
>
or in GetHbase Processor. I update
> Hbase-Site.xml and remove remaining properties like " Zookeeper Quorum
> Info,Zookeeper Client Port Info . still i got the same error in GetHbase
> processor.
>
> On Thu, May 5, 2016 at 9:07 PM, Bryan Bende <bbe...@gmail.com> wrote:
&g
Hello,
I'm not sure if this will work depending how large the tables are, but
since you were already able to move to the data into three separate
tables...
Could you then do an ExecuteSQL processor that used a select query that
joined the three tables together, so the results coming out of the
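For example (table and column names made up):
SELECT a.id, a.name, b.total, c.status
FROM table_a a
JOIN table_b b ON a.id = b.a_id
JOIN table_c c ON a.id = c.a_id
ExecuteSQL would then emit the joined results as Avro, which you could
convert downstream.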
Hi Alex,
PutSolrContentStream would likely need the following configuration based on
what you described:
Solr Type = Cloud
Solr Location = Your ZK
Collection = feed
Content Stream Path = /update
Content Type = application/json
Add two user defined properties:
commit = true
update.chain =
Hello,
For HBaseClient, are you sure the ZooKeeper being used by HBase is running
on localhost:2181?
Typically you don't really need to set the three ZooKeeper properties, and
you can instead just set the hbase-site.xml in the config resources.
For example, my Hadoop Configuration Resources is
at 11:30 AM, Bryan Bende <bbe...@gmail.com> wrote:
> Hello,
>
> For HBaseClient, are you sure your ZooKeeper being used by HBase is
> running on localhost:2181?
>
> Typically you don't really need to set the three ZooKeeper properties, and
> you can instead just set the h
Mike,
If I am understanding correctly I think this can be done today... The
Directory property on PutHDFS supports expression language, so you could
set it to a value like:
/data/${now():format('dd-MM-yy')}/
This could be set directly in PutHDFS, although it is also a common pattern
to stick an
Hello,
Adding the dev alias as well to see if anyone else knows the answer.
-Bryan
On Fri, Jul 29, 2016 at 10:37 AM, Mariama Barr-Dallas wrote:
> Hello,
> I am attempting to add a Controller Service to a processor property via
> the rest API by changing the descriptors
Milind,
I'm not sure if I understand the question correctly, but are you asking how
to find a specific provenance event beyond the 1,000 most recent that are
displayed when loading the provenance view?
If so, there is a Search button in the top right of the Provenance window
that brings up a
Anuj,
Just to clarify, you want to route on the name of the element under
POSTransaction? Meaning, route "Order" to one place and "Refund" to another?
I'm not a JSON Path expert, but I can't come up with a way to get just an
element name from JSON Path; it is usually used to get the value of a
Hi Manish,
This post [1] has an overview of how to distribute data across your NiFi
cluster.
In general though, NiFi runs the same flow on each node and the data needs
to be divided across the nodes appropriately depending on the situation.
The only exception to running the same flow on every
ngal...@gmail.com
> > wrote:
>
>> Hi Bryan,
>> Thank you for the input. That really helps. I'll try that.
>>
>> On Mon, Jun 27, 2016 at 6:31 PM, Bryan Bende <bbe...@gmail.com> wrote:
>>
>>> Hello,
>>>
>>> Right now Attrib
Carlos/Peter,
Thanks for reporting this issue. It seems IS_AUTOINCREMENT is causing
problems in a couple of situations; I know there was another issue with
Hive where they return IS_AUTO_INCREMENT rather than IS_AUTOINCREMENT.
We should definitely address this issue... would either of you be
Hello,
I'm assuming you are using site-to-site since you mentioned failing to
create a transaction.
In nifi.properties on the AWS instance, there is probably a value for
nifi.remote.input.socket.port which would also need to be opened.
-Bryan
On Sat, Jan 21, 2017 at 7:00 PM, mohammed shambakey