I have one flow that will have to handle files that are anywhere from 500 MB
to several GB in size. The current plan is to store them in HDFS or S3 and
then bring them down for processing in NiFi. Are there any suggestions on
how to handle such large single files?
Thanks,
Mike
We have a very large body of CSV files (well over 1TB) that need to be
imported into HBase. For a single 20GB segment, we are looking at having to
push easily 100M flowfiles into HBase and most of the JSON files generated
are rather small (like 20-250 bytes).
It's going very slowly, and I assume t
@gmail.com *
> PGP Fingerprint: 70EC B3E5 98A6 5A3F D3C4 BACE 3C6E F65B 2F7D EF69
>
> On Apr 7, 2017, at 5:26 AM, Mike Thomsen wrote:
>
> I have one flow that will have to handle files that are anywhere from
> 500 MB to several GB in size. The current plan is to store them in HDFS
happens when things
> fail
> > half way through. If the puts are idempotent then it may be fine to route
> > the whole to failure and try again even if some data was already
> inserted.
> >
> > Feel free to create a JIRA for hbase record processors, or I can do it
> >
Is it possible to save the controller services w/ a template?
Thanks,
Mike
in
> your data flow. There is an existing JIRA [1] to always include them.
>
> Thanks
>
> Matt
>
> [1] https://issues.apache.org/jira/browse/NIFI-2895
>
> On Thu, Jun 8, 2017 at 12:59 PM, Mike Thomsen
> wrote:
>
>> Is it possible to save the controller services w/ a template?
>>
>> Thanks,
>>
>> Mike
>>
>
>
Yeah, I just screwed up and didn't reference one.
On Thu, Jun 8, 2017 at 1:26 PM, Mike Thomsen wrote:
> I'll have to look again, but I scanned through the XML and didn't see
> either my avro schema registry or the jsonpath reader.
>
> Thanks,
>
> Mike
>
ng to be taking the value of a field and turning it
> into an appropriate byte[], so you'll likely want to use the type of
> the field to cast into an appropriate Java type and then figure out
> how to represent that as bytes.
>
> I know this was a lot of information, but I hope this
I am trying to write a query for GetMongo that gives me documents added in
the last five minutes. It looks like this:
{
  "ts": {
    "$gte": new Date(ISODate().getTime() - (1000 * 60 * 5))
  }
}
The processor goes to an invalid state because it says "query validated
against [that query]
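For comparison, the same five-minute window can be generated as plain extended JSON ahead of time (a sketch, on the assumption that GetMongo accepts strict extended-JSON `$date` literals but not mongo-shell JavaScript like `new Date(...)`):

```python
import json
from datetime import datetime, timedelta, timezone

def last_five_minutes_query(now=None):
    """Build the query as a strict-JSON string with a $date literal
    instead of shell-side JavaScript date arithmetic."""
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(minutes=5)
    return json.dumps(
        {"ts": {"$gte": {"$date": cutoff.strftime("%Y-%m-%dT%H:%M:%SZ")}}})
```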
> On Jun 23, 2017, at 9:06 AM, Mike Thomsen wrote:
>
> I am trying to write a query for GetMongo that gives me documents added in
> the last five minutes. It looks like this:
>
> {
>"ts&
alue" : "XYZ"
> }, {
> "value" : "LMN"
> } ]
>
> Which seems correct given that it's reading in the JSON with a schema
> that only has the field "value" in it.
>
> Let me know if that is not what you are looking for.
>
>
>
The checkstyle configuration seems to be missing from the repo I forked
from github.com/apache/nifi. I ran checkstyle:checkstyle locally and a
processor passed, but it failed when I did a pull request. Might want to
look at that.
Thanks,
Mike
The Travis CI tests passed for mine, but the AppVeyor build failed because
it didn't like some other NiFi module. PRs:
https://github.com/apache/nifi/pull/1948
https://github.com/apache/nifi/pull/1945
One is a PR to make GetMongo better able to handle very large queries (ex.
for bulk ingestion l
ttent
> failures. Personally I look at the travis details and check that at least
> one of language-dependent build is OK (and that the others are not failing
> because of language issues).
>
> Thanks
>
> 2017-06-27 12:43 GMT+02:00 Mike Thomsen :
>
>> The Travis CI tests
up to someone volunteering for the review process. That's also
> why we invite people to review/test PRs from other contributors: it shows
> the interest of the community and speeds up the review process.
>
> Thanks!
>
> 2017-06-27 19:49 GMT+02:00 Mike Thomsen :
>
One of my customers has a lot of data in Mongo in quite a few collections.
There's no shared schema and some of the data has a lot of nesting. Aside
from a few conversions like flattening the data, most of the data goes more
or less as-is into ElasticSearch.
So my question is, is it possible to do an
I have a client that needs to join a data stream from Mongo with the
contents of a CSV file based on a common attribute. Would MergeContent
suffice or is there a better route?
Thanks,
Mike
Is it safe to choose "string" as a default type with Avro? I'm trudging
through some really dirty data right now and that seems to behave fine when
I do something like this:
Flowfile content:
{
  "x": 1
}
Avro field definition:
{ "name": "x", "type": ["null", "string"] }
Where sometimes X
ExecuteSQL default to
> String if they can't figure out what type to use.
>
> Regards,
> Matt
>
>
> > On Aug 17, 2017, at 6:00 PM, Mike Thomsen
> wrote:
> >
> > Is it safe to choose "string" as a default type with Avro? I'm trudging
>
It's very hard to give you any advice without any knowledge of what your
data looks like and what processors you're using.
On Thu, Aug 17, 2017 at 7:05 PM, Noel Alex Makumuli
wrote:
> Hello guys,
>
> Ever since I tried Apache Nifi, I was amazed with its powerful features
> and
> performance.
> Ev
Does anyone have any experience persisting provenance beyond the lifecycle
of a flowfile? The high level use case I have in mind is some sort of
traceability database or index where the provenance events of every datum
that comes in gets sent.
Thanks,
Mike
ommon
> places I've seen people send this data are to HDFS, HBase, Accumulo,
> etc..
>
> Hopefully that gives some ideas/direction to head in. Definitely want
> to hear more about what you're thinking and where you're headed. This
> data is very useful for sure.
>
Is it possible to make multiple updates to a record from a single call to a
lookup service? We have to add about 5-6 new fields to a record based on
the contents of a single CSV file, but it doesn't seem like
SimpleCsvFileLookupService or ScriptedLookupService would let us do
something like return
gt; -Mark
>
>
> > On Aug 29, 2017, at 1:36 PM, Matt Burgess wrote:
> >
> > Right now it's a single update per processor, you can provide multiple
> keys to do a compound lookup but it returns a single value. ExecuteScript
> is technically record-aware so you could sc
Adam,
I cannot say exactly why the default settings won't work for you on a clean
installation, but it likely has to do with how small the VM is. The OS
overhead alone is probably a few hundred MB of RAM. If you have anything
else running, even just MySQL or MongoDB it's entirely possible that you
I tried something similar, and found that Mongo executed the query
progressively slower and slower because it apparently has to read each
document and convert it into a JavaScript object to execute that query. In
1.4, expression language support is supposed to be added to the query field
so there
I can vouch for this method. I have two flows for a client that use
GenerateFlowFile to build a JSON DSL query for ElasticSearch and are
executed on a timer. Works quite well with InvokeHttp.
On Thu, Oct 19, 2017 at 11:41 PM, Mark Rachelski
wrote:
> Thank you Bryan,
>
> That should fit my purpos
According to the Avro documentation, this should define a date field:
{
  "name": "DateTest",
  "type": "record",
  "fields": [
    {"name": "date", "type": "long", "logicalType": "date"}
  ]
}
NiFi 1.5.0-SNAPSHOT treats that as a Long and writes it to Mongo that way.
What am I doing
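One thing worth checking (an assumption based on the Avro spec, which says a logical type must annotate the type it wraps): the logicalType attribute has to live inside the type object rather than beside it on the field:

```json
{"name": "date", "type": {"type": "long", "logicalType": "date"}}
```

Placed as a sibling of "type": "long", the annotation is ignored and the field is read as a plain long.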
r you specify in PutMongoRecord, you'll
> need to configure it with a Date Format string.
>
> Regards,
> Matt
>
>
> On Wed, Nov 1, 2017 at 3:27 PM, Mike Thomsen
> wrote:
> > According to the Avro documentation, this should define a date field:
> >
> > {
Are you doing the appropriate import static statement to import that static
method?
On Fri, Nov 3, 2017 at 2:40 AM, sally wrote:
> I want to use getFileSytem() for pulling in any file that is newer than the
> timestamp that we have but I can't import it (I mean getFileSytem()) I have
> this code
You may need to update the logback xml file in the conf folder. There is a
line in there for the processor package. Might be too high for info.
On Sat, Nov 4, 2017 at 10:50 AM Eric Chaves wrote:
> Hi folks,
>
> I'm trying to adapt the flow described at
> https://community.hortonworks.com/articles
age (version 1.4.0)
> and my logs folder is empty. I was looking at bootstrap.conf, logback.xml
> and nifi.properties but couldn't find any config value that may
> disable/enable log. Where should those logs be going?
>
> 2017-11-04 12:55 GMT-02:00 Mike Thomsen :
>
>> You
Rahul,
I have a client that uses Mongo for most of their data storage. What I am
doing for them for enrichment like this: I took all of their enrichment
data and stored it in a Mongo collection. Then as data comes in, I use a
LookupService to merge all of the fields from that collection into
Don't know, but you might want to try out InvokeHttp. I know it lets you
tap into the output if you tell it to always output the HTTP response.
On Wed, Nov 8, 2017 at 8:28 AM, James McMahon wrote:
> How can we tap into the workflow to see the output of the PostHTTP
> processor? What are options
Sally,
On top of using XmlSlurper, you're going to want to use MarkupBuilder or
StreamingMarkupBuilder depending on the size of the XML document.
Alternatively, you can just do some copy pasta from Java examples of DOM
parsing to build the DOM, insert a node and then use the Java DOM API to
serial
Take a look at the MongoDBLookupService. It should give you a template to
work from:
https://github.com/apache/nifi/blob/9a8e6b2eb150865361dda241d71405c5a969f5e8/nifi-nar-bundles/nifi-standard-services/nifi-mongodb-services-bundle/nifi-mongodb-services/src/main/java/org/apache/nifi/mongodb/MongoDB
Eric,
It says your CPU is x86, not x64. Do you have 8GB of RAM to allocate to the
JVM? (That's what Xmx8g means) 32bit VMs are typically only provisioned
with less than 4GB of RAM.
On Tue, Nov 14, 2017 at 8:37 AM, Eric Thompson
wrote:
> I am getting an error of 'Administratively Yielded for 1 s
The encrypt-config tool can encrypt sensitive properties in
nifi.properties, but can it be set up to also go into the files specified
with the registry property as additional sources of properties?
Thanks,
Mike
Based on this, https://issues.apache.org/jira/browse/NIFI-2653, it looks
like I cannot encrypt the property that holds the MongoDB password if it is
in the variable registry file. Is that correct?
Thanks,
Mike
On Wed, Nov 15, 2017 at 9:25 AM, Mike Thomsen
wrote:
> The encrypt-config tool
d
> flows even with things like passwords. That might help you.
>
> Thanks
>
>
>
> On Wed, Nov 15, 2017 at 11:04 AM, Mike Thomsen
> wrote:
> > Based on this, https://issues.apache.org/jira/browse/NIFI-2653, it looks
> > like I cannot encrypt the property tha
Not right now. It wouldn't be too hard to add for 1.5. Would supplying a
comma-separated list of top-level keys be enough? Something like:
siren,nic
I am not sure how you'd handle something like "something.siren,another.nic"
if that's possible (I know you can supply update operators like that, so
Also, I'll use this thread to bring something about PutMongoRecord into the
mailing list for other Mongo users to see.
PutMongoRecord cannot support more than document replacement updates
because the Mongo update operators use "$" which is an illegal starting
character in Avro. So you cannot defin
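For reference, a typical Mongo update document (standard Mongo update syntax, with illustrative field names) shows the clash; every operator key starts with "$", which Avro field names cannot:

```json
{
  "$set": { "status": "processed" },
  "$inc": { "retries": 1 }
}
```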
A-Data-Schema-
> and-Templates
>
> On Wed, Nov 15, 2017 at 3:32 PM, DENIMAL Thomas
> wrote:
> > Hello Mike,
> >
> >
> >
> > Thanks for your answer.
> >
> > Do you know how can i ask for an enhancement request for this feature?
> >
> >
Just use ExecuteScript and the Groovy MarkupBuilder/StreamingMarkupBuilder
APIs. There is plenty of documentation on the latter out there with good
samples. Depending on how much XML data there is, you'll probably need to
use the StreamingMarkupBuilder and have it write to a temp file. One
caveat:
Maven doesn't handle static imports; that's either a Java or Groovy
compiler issue. So all Maven is telling you is that there is something
wrong between your import statement syntax and the declared dependencies.
Without more than that, I can't really give you an idea of what's wrong.
On Tue, Nov
I think it's telling you that outputStream.write(text) needs to be
outputStream.write(text.encode('utf-8'))
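A minimal illustration of the difference (plain Python here; in the actual ExecuteScript/Jython callback the bytes would go to the Java OutputStream):

```python
# A unicode string can't be handed to a Java OutputStream directly;
# encode it first to get raw UTF-8 bytes.
text = u"some flowfile content \u00e9"
payload = text.encode('utf-8')  # bytes, suitable for outputStream.write(payload)
```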
On Sat, Nov 4, 2017 at 2:37 PM, N, Vyshali wrote:
> Hi,
>
>
>
> I’m trying to do data anonymization using faker package in Nifi for which
> I’m using executescript processor.
>
> The code references are fro
Try this:
def slurper = new XmlSlurper(false, false) // validating=false, namespaceAware=false
def parsed = slurper.parseText(inputString)
// Serialize to XML w/ preferred API, e.g. groovy.xml.XmlUtil.serialize(parsed)
On Wed, Nov 15, 2017 at 5:33 AM, sally
wrote:
> I want to remove namespaces from my XML response (now I am working inside
> nifi
> envi
Does anyone have any experience using AD as the backend for NiFi's
authentication and authorization? I've never had to work with it before,
but it seems like we can use it as either an LDAP provider or a Kerberos
implementation. Does anyone have any recommendations on how to do the
integration so th
ad if you have any other questions configuring AD integration through
> LDAP.
>
>
>
> Kevin
>
>
>
> *From: *Mike Thomsen
> *Reply-To: *
> *Date: *Tuesday, November 21, 2017 at 11:54
> *To: *
> *Subject: *NiFi and Active Directory
>
>
>
> Does anyone h
Try this out as an alternative to using a web service. It's in
ExecuteScript (Groovy)
import static groovy.json.JsonOutput.*
def flowFiles = session.get(500)
def xmlSlurper = new XmlSlurper()
flowFiles?.each { flowFile ->
def text
session.read(flowFile, { inputStream ->
text = in
I'm following Pierre's blog post that shows how to set up LDAP w/ ApacheDS:
https://pierrevillard.com/2017/01/24/integration-of-nifi-with-ldap
I've tried this with 1.4.0 and 1.5.0-SNAPSHOT (toolkits built for each too)
for what it's worth.
Built the certs with this command:
bin/tls-toolkit.sh s
ns.xml and restarting NiFi.
>
>
>
> Hope this helps! If you have any other questions about configuring LDAP or
> authorizers, let me know.
>
>
>
> Kevin
>
>
>
>
>
>
>
> *From: *Mike Thomsen
> *Reply-To: *
> *Date: *Friday, December 1, 2017 at
s are empty on startup. Try
> deleting conf/users.xml and conf/authorizations.xml and restarting NiFi.
>
>
>
> Hope this helps! If you have any other questions about configuring LDAP or
> authorizers, let me know.
>
>
>
> Kevin
>
>
>
>
>
>
>
> *Fro
e-based identities you have configured,
>> so you will need to choose an ldap-based user to be your initial admin. Or
>> configure a CompositeUserGroupProvider so that you can use certificates and
>> only require ldap login in absence of a client certificate.
>>
>> -
Some of our users are under OU=Temp, OU=IT, O=Client. The rest are under
OU=Staff, OU=IT, O=Client. What is the best route for configuring NiFi to be
able to find users in both LDAP branches?
I should also mention that the NiFi groups are on the same branch in case
that matters.
Thanks,
Mike
I get this error after I installed a new build:
The request contained an invalid host header [SERVER_IP:8080] in the
request [/]. Check for request manipulation or third-party intercept.
In the logs it says:
2017-12-15 18:34:59,937 WARN [NiFi Web Server-66]
o.a.n.w.s.HostHeaderSanitizationCustom
; Andy LoPresto
> alopre...@apache.org
> *alopresto.apa...@gmail.com *
>
> On Dec 15, 2017, at 3:32 PM, Mike Thomsen wrote:
>
> I get this error after I installed a new build:
>
> The request conta
I have a variable registry set up with a few environment-specific
variables. I use a symlink to switch between properties files when I need
NiFi to target one environment vs the other. After restarting NiFi, I
noticed that it hasn't actually changed the variables. The ones for the
test environment
ld test this with a custom proc for example
>
> On Dec 17, 2017 5:47 PM, "Mike Thomsen" wrote:
>
>> I have a variable registry set up with a few environment-specific
>> variables. I use a symlink to switch between properties files when I need
>> NiFi to target
Disregard... Problem entirely on my end. I forgot to update the path when I
upgraded. I installed a new copy of NiFi, did a cp on all of the relevant
conf files and forgot to update the path in the variable registry line.
On Sun, Dec 17, 2017 at 6:20 PM, Mike Thomsen
wrote:
> I tested w
> outputStream.write(bytearray(trailer.encode('utf-8')))
encode should return the list.
On Tue, Jan 23, 2018 at 1:36 PM, James McMahon wrote:
> Thank you very much Matt. I'll make that refinement right now and
> reprocess my data collection. -Jim.
>
> On Tue, Jan 23, 2018 at 1:08 PM, Matt Burg
Its input requirement is set to INPUT_FORBIDDEN. It shouldn't be too hard
to set that to INPUT_ALLOWED and make it able to handle a flowfile or a
fixed folder path (or even better, fixed folder path w/ EL support). If you
do a patch, I'll try to find time to do a review.
On Fri, Jan 26, 2018 at 4
As a workaround, you could turn your shell script into a template and use
PutFile to put a copy of it in there with the attributes from NiFi injected
into the body and run it with ExecuteProcess since that one seems to work.
On Tue, Feb 6, 2018 at 11:28 AM, Karthik Kothareddy (karthikk) [CONT - T
We're using AD, and I have verified that we can actually pull the users and
groups by logging in as the initial admin and checking the users. It shows
the users and the LDAP groups we assigned. Looks fine there.
When a user goes to login with their domain account, it says invalid
username and pass
Jim,
You need to call session.commit() periodically if you want to progressively
push flowfiles out. Though you need to think about the failure scenarios
your script has and whether you really want to sacrifice the ability to do
a rollback in the event things go wrong.
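The call pattern, sketched with plain-Python stand-ins (DemoSession is a fake just for illustration; the real ProcessSession exposes the same transfer/commit calls):

```python
class DemoSession:
    """Stand-in for NiFi's ProcessSession, only to show the call pattern."""
    def __init__(self):
        self.commits = 0
        self.transferred = []

    def transfer(self, flowfile, relationship):
        self.transferred.append((flowfile, relationship))

    def commit(self):
        self.commits += 1

def transfer_in_batches(session, flowfiles, rel_success, batch_size=100):
    # Commit after each full batch so finished flowfiles move downstream
    # early, at the cost of whole-run rollback.
    for i, ff in enumerate(flowfiles, 1):
        session.transfer(ff, rel_success)
        if i % batch_size == 0:
            session.commit()
    session.commit()  # flush the tail of the last batch

session = DemoSession()
transfer_in_batches(session, range(250), "success")
```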
On Thu, Feb 15, 2018 at 6:
1.5 introduced a new property: nifi.web.http.host
Set that to the hostname you want to use for accessing it.
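For example, in nifi.properties (the hostname here is a placeholder):

```properties
nifi.web.http.host=nifi.example.com
nifi.web.http.port=8080
```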
On Fri, Feb 16, 2018 at 6:19 AM, Sean Marciniak wrote:
> Hey team,
>
> I have NiFi running on a standalone VM and I try to directly connect to it
> over http and I get this message:
>
>
> ``
IP address should work.
On Fri, Feb 16, 2018 at 7:14 AM, Sean Marciniak wrote:
> Does URL need to be a FQDN? Can it not just accept the host IP address?
>
>
> On 16 February 2018 at 12:13:55 pm, Mike Thomsen (mikerthom...@gmail.com)
> wrote:
>
> 1.5 intr
t those details as soon as I get a chance to dig into the
> specifics of AD a bit more.
>
>
>
> Thanks,
>
> Kevin
>
>
>
> [1] https://support.microsoft.com/en-us/help/555636
>
> [2] https://docs.oracle.com/cd/A97630_01/network.920/a96579/
> comtools.htm#6
That doesn't look like the right way to specify an empty array. This SO
example fits about what I'd expect:
https://stackoverflow.com/a/42140165/284538
So it should be default:[0]
On Tue, Feb 27, 2018 at 8:56 AM, Mark Payne wrote:
> Juan,
>
> So the scenario that you laid out in the NIFI-4893
For what it's worth, this scenario sounds very similar to why I wrote the
MongoDBLookupService. I had a client that was using a CSV file w/ data
dictionary CSV file.
On Mon, Feb 26, 2018 at 8:30 AM, Matt Burgess wrote:
> Mausam,
>
> You could use PutFile to store off the Category CSV, then you c
I have Atlas 0.8.2 (BerkeleyDB and Embedded ES) and NiFi 1.6.0 nightly both
up and claiming that they can talk to one another.
What should I be seeing if they are? My test configuration consists of a
simple process group that has GetMongo, UpdateAttributes and
PutElasticSearchHttpRecord. I'm not s
t included in the default assembly. Instead there is a
> "include-atlas" profile that can be activated when building the
> assembly, and that should include the Atlas NAR and associated
> reporting task.
>
> Regards,
> Matt
>
>
> On Wed, Feb 28, 2018 at 1:42 PM, Mike Tho
ts in Hive (something like
> that).
>
> I don't think its a massive long-term store for event-level provenance
> data like NiFi has, but others can chime in here if I am wrong.
>
> -Bryan
>
>
> On Thu, Mar 1, 2018 at 10:11 AM, Mike Thomsen
> wrote:
> > So I tr
e ElasticSearch
> processors.
>
> Otherwise we'd get into building 100 reporting tasks for all the
> various destinations, just like all the processors.
>
> -Bryan
>
> On Thu, Mar 1, 2018 at 11:04 AM, Mike Thomsen
> wrote:
> > Bryan,
> >
> > I have
Are you by any chance running a custom build of NiFi?
On Fri, Mar 2, 2018 at 9:44 AM, Arne Degenring
wrote:
> Hi,
>
>
>
> We have seen a strange problem on NiFi 1.4.0 where custom processors could
> suddenly not be started, because of incompatibility with custom services:
>
>
>
> 2018-03-02 13:4
Yves/Pierre,
I agree with Pierre. I think starting in 1.7 we're going to need to rethink
this and possibly do some breaking changes to really make this work right.
My $0.02 is that PutMongo should allow updates to come in the following use
cases:
1. Read update keys using attributes prefixed with
This article talks about a JMS client for the SIB, so it might work:
https://www-01.ibm.com/support/docview.wss?uid=swg21995757
On Tue, Mar 6, 2018 at 4:17 AM, Tian TD Deng wrote:
> Dear All,
>
> I was wondering if it's possible to consume messages from IBM WESB SIB
> topic using Apache Nifi Co
Pierre,
I should have a PR tonight or tomorrow that should provide a reasonable
workaround for Yves.
Thanks,
Mike
On Fri, Mar 16, 2018 at 10:08 AM, Mike Thomsen
wrote:
> Yves/Pierre,
>
> I agree with Pierre. I think starting in 1.7 we're going to need to
> rethink this and
Scott,
The "input requirement" is hard-coded into the processor. Not knowing much
about your use case, I'd say you're either going to have to get really
creative or ask for some changes.
Thanks,
Mike
On Sun, Mar 18, 2018 at 9:37 PM, scott wrote:
> Hello community,
>
> I'm looking at using an
I'm trying to use the Docker image to set up a secure NiFi demo, and am
running into this error:
Unknown user with identity 'CN=initialAdmin, OU=NIFI'. Contact the system
administrator.
SSL works, I verified that the owner in the cert is "CN=initialAdmin,
OU=NIFI"
I've attached the Docker Compos
Yeah, that's the weird part. It looks valid to me:
On Thu, Mar 22, 2018 at 8:07 AM, Pierre Villard wrote:
> Hey Mike,
>
> Can you check the users.xml file created by NiFi when it started for the
> first time?
>
> 2018-03-22 12:41
"CN=initialAdmin, OU=NIFI"
> In your yaml file, I'd try to use double quotes around your property
> values.
>
> 2018-03-22 13:16 GMT+01:00 Mike Thomsen :
>
>> Yeah, that's the weird part. It looks valid to me:
>>
>>
>>
TH=tls’ in the documentation for LDAP setup; that is
> an error. I’ll open a PR to correct the documentation. To confirm how it
> works, look at the start.sh file)
>
>
>
> Cheers,
> Kevin
>
>
>
> *From: *Mike Thomsen
> *Reply-To: *
> *Date: *Thursday,
e [1].
>
>
>
> Also, it looks like there is already a JIRA for the AUTH=ldap
> documentation issue [2].
>
>
>
> Kevin
>
>
>
> [1] https://issues.apache.org/jira/browse/NIFI-5002
>
> [2] https://issues.apache.org/jira/browse/NIFI-4934
>
>
>
> *
using the initial admin. Grant them access to the right
> resources (e.g., the UI), and then you should be able to login with
> test/password.
>
>
>
> *From: *Mike Thomsen
> *Reply-To: *
> *Date: *Thursday, March 22, 2018 at 10:03
>
> *To: *
> *Subject: *Re:
I don't think there are any processors yet for this sort of thing. I've
been thinking about working on some for a while now. How would you expect
that hypothetical PutRedisHash processor to work? Here are some example use
cases that I've been mulling for building one:
1. Read from attributes with
Off the top of my head, try PutHBaseCell for that. If you run into
problems, let us know.
As a side note, you should be careful about storing large binary blobs in
HBase. I don't know to what extent our processors support HBase MOBs
either. In general, you'll probably be alright if the pictures ar
I think you would hit two big barriers in design:
1. NiFi just isn't designed to be an app server for additional service
layer components a la Tomcat.
2. Synchronizing between the REST services and NiFi's highly asynchronous
processing would be a logistical nightmare if your goal is to confine NiF
If you know one of the supported scripting languages, you can probably do
some of that with ExecuteScript. For example, if you wanted to drop every
other flowfile in a block of 100, it'd be like this:
def flowfiles = session.get(100) // Get up to 100
int index = 1
flowfiles?.each { flowFile ->
    // Drop every other flowfile, pass the rest through
    if (index++ % 2 == 0) session.remove(flowFile)
    else session.transfer(flowFile, REL_SUCCESS)
}
Colleague of mine is trying to poll this API:
/nifi-api/flow/process-groups/root/status?recursive=true
The instance is protected with SSL and LDAP auth. The user account I'm
trying with has "view system diagnostics," but it still gets a 403. Any ideas?
Thanks,
Mike
Mohit,
Looking at your schema:
{
  "type": "record",
  "name": "test",
  "namespace": "test",
  "fields": [{
    "name": "name",
    "type": ["null", "int"],
    "default": null
  }, {
    "name": "age",
    "type": ["null", "string"],
    "default": null
  }]
}
It looks like you have your fields' types backwards. (Name should be
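Presumably the intended schema (an assumption, since the message is cut off: the same record with the two union types swapped) would be:

```json
{
  "type": "record",
  "name": "test",
  "namespace": "test",
  "fields": [
    {"name": "name", "type": ["null", "string"], "default": null},
    {"name": "age", "type": ["null", "int"], "default": null}
  ]
}
```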
Am I missing something or is there no way to update the settings in
bootstrap.conf for things like Xmx and Xms?
Thanks,
Mike
> java.arg variables in bootstraf.conf.
>
>
>
> *From:* Mike Thomsen [mailto:mikerthom...@gmail.com]
> *Sent:* Wednesday, April 4, 2018 3:34 PM
> *To:* users@nifi.apache.org
> *Subject:* EXT: Change memory limits w/ Docker image
>
>
>
> Am I missing something
Found these errors in the Docker logs:
postgres_1 | 2018-04-05 18:33:22.183 UTC [51] ERROR: column
"timestamp_field" is of type timestamp without time zone but expression is
of type bigint at character 282
postgres_1 | 2018-04-05 18:33:22.183 UTC [51] HINT: You will need to
rewrite o
f NiFi are you using?
>
> Regards,
> Matt
>
>
> On Thu, Apr 5, 2018 at 3:05 PM, Mike Thomsen
> wrote:
> > Found these errors in the Docker logs:
> >
> > postgres_1 | 2018-04-05 18:33:22.183 UTC [51] ERROR: column
> > "timestamp_field" is of
Matt, were you using 1.6?
On Thu, Apr 5, 2018 at 3:56 PM Mike Thomsen wrote:
> 1.5
>
> Thanks,
>
> Mike
>
> On Thu, Apr 5, 2018 at 3:40 PM, Matt Burgess wrote:
>
>> Mike,
>>
>> I can't reproduce this, I use the same DDL a
Take a look at TestInvokeJavascript.java. There are some samples there of
manually calling customValidate.
On Tue, Apr 10, 2018 at 10:28 AM, Mohit
wrote:
> Hi,
>
>
>
> I’m testing a custom processor. I’ve used a customValidate(ValidationContext
> context) method to validate the properties. When
Scott,
In your last email, the way I read it, you found part of the problem was
using USE_USERNAME and not USE_DN. Have you done a full comparison of the
other config with this one?
On Tue, Apr 10, 2018 at 2:58 PM, Scott Howell
wrote:
> Yes I did, I had Nifi-registry working with a local instanc
If you know a scripting language that's supported, you can use the
ScriptedLookupService to tailor the behavior to your exact specification.
The dynamic properties also support EL, so depending on your use case you
might be able to leverage that.
Ex of a Groovy script built for ScriptedLookupService:
Sorry, misread which processor you're using. You'd want to use LookupRecord
with my suggestion.
On Wed, Apr 11, 2018 at 9:40 AM, Mike Thomsen
wrote:
> If you know a scripting language that's supported, you can use the
> ScriptedLookupService to tailor the behavior to you