Were you disabling the processor or just stopping it? I've found that the
scripted processors can get into an odd state, and that's the only way to
reset them.
Shawn
From: Boris Tyukin
Sent: Tuesday, May 22, 2018 3:07:36 PM
To:
Logged in as the user NiFi is running as on the same host, are you able to
create files with that group? We use PutFile, and none of our groups are local
to the host.
Thanks
Shawn
From: James McMahon
Sent: Wednesday, May 30, 2018 8:21 AM
To: users@nifi.apache.org
Subject: Re: User, Group in
something like this?
passwd: files ldap
shadow: files ldap
group: files ldap
(We did try to reverse that order to "ldap files". The same warnings get
thrown).
On Wed, May 30, 2018 at 11:11 AM, Shawn Weeks
<swe...@weeksconsulting.us> wrote:
Logged in
containing 12 Billion
rows
Hi,
Yes, I tried to fetch around 40 million rows, which took time but did execute.
I’ll try the Avro approach.
How do I break the select into multiple parts? Can you briefly explain the
partition flow to start with?
Thanks,
Mohit
From: Shawn Weeks
Sent
I'm already doing something like that with a single HandleHTTPRequest and a
RouteOnAttribute just after to send it to the appropriate location. You can
have multiple HandleHTTPResponses so it's really not all that complicated.
Thanks
Shawn
From: Kelsey RIDER
It's probably not stuck doing nothing; using a JDBC connection to fetch 12
billion rows is going to be painful no matter what you do. At those kinds of
sizes you're probably better off having Hive create a temporary table in Avro
format and then consuming the Avro files from HDFS into NiFi. The
I feel this is probably really simple, but I'm trying to replace the delimiter
in a text file with '\001' to match Hive's default delimiter. Here is my config,
which is based on a Hortonworks post from 2016, but it's not inserting Ctrl+A;
it's inserting the literal string '\001'.
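For what it's worth, the difference between the literal four-character string and the single control character can be checked in plain Java (a standalone sketch, not the ReplaceText configuration itself):

```java
public class DelimiterCheck {
    public static void main(String[] args) {
        // Four characters: backslash, '0', '0', '1' -- what was being inserted
        String literal = "\\001";
        // One character: the SOH / Ctrl+A control code Hive uses by default
        String ctrlA = "\u0001";

        System.out.println(literal.length());      // 4
        System.out.println(ctrlA.length());        // 1
        System.out.println((int) ctrlA.charAt(0)); // 1

        // Replacing a comma delimiter with the real control character
        String replaced = "a,b,c".replace(",", ctrlA);
        System.out.println(replaced.split("\u0001").length); // 3
    }
}
```

In the flow itself the goal is to get the one-character form into the property value; the exact ReplaceText setting that achieves that depends on the NiFi version.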
Thanks
Shawn
A delimiter which you can verify in vi.
Thanks.
--
Jagrut
On Mon, Jun 25, 2018 at 1:09 PM, Shawn Weeks
<swe...@weeksconsulting.us> wrote:
I feel this is probably really simple but I'm trying to replace the delimiter
in a text file with '\001' to match Hive's default delimiter. He
I'm building a rest service with the HTTP Request and Response Processors to
support data extracts from Hive. Since some of the extracts can be quite large,
using the SelectHiveQL Processor isn't a performant option, and instead I'm
trying to use on demand Hive Temporary Tables to do the heavy
adable external file.
Or, better yet, one of the LookupService variants, which is more generic.
HTH,
Andrew
On Mon, Feb 19, 2018, 3:10 PM Shawn Weeks
<swe...@weeksconsulting.us<mailto:swe...@weeksconsulting.us>> wrote:
Hi, I’m looking for some ideas on how to handle a workflow I’m develo
te:
I think this is exactly what a Lookup Service was designed to do. You are free
to implement any logic of yours behind the scenes.
Andrew
On Mon, Feb 19, 2018, 5:12 PM Shawn Weeks
<swe...@weeksconsulting.us<mailto:swe...@weeksconsulting.us>> wrote:
The problem I see with the Scan
.
HTH,
Andrew
On Mon, Feb 19, 2018, 3:10 PM Shawn Weeks
<swe...@weeksconsulting.us<mailto:swe...@weeksconsulting.us>> wrote:
Hi, I’m looking for some ideas on how to handle a workflow I’m developing. I
have NiFi monitoring a drop off location where files are delivered. Files f
processor I developed
that queries a database table comparing the incoming file name against known
file name patterns stored as regular expressions and attaches those attributes
to the flow file. I feel like there is a better way to do this but I'm still
fairly new to NiFi.
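The lookup that processor performs amounts to testing the incoming filename against each stored regex until one matches; a minimal sketch with hypothetical pattern names (the real ones live in the database table):

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.regex.Pattern;

public class FilePatternLookup {
    // Pattern name -> regex; in the real flow these rows come from the database
    static final Map<String, Pattern> PATTERNS = new LinkedHashMap<>();
    static {
        PATTERNS.put("daily_extract", Pattern.compile("extract_\\d{8}\\.csv"));
        PATTERNS.put("error_log", Pattern.compile("errors_.*\\.log"));
    }

    // Returns the pattern name to attach as a flow-file attribute, or null
    static String classify(String filename) {
        for (Map.Entry<String, Pattern> e : PATTERNS.entrySet()) {
            if (e.getValue().matcher(filename).matches()) {
                return e.getKey();
            }
        }
        return null;
    }
}
```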
Thanks
Shawn Weeks
See NIFI-5134, as there was a known bug with the Hive Connection Pool that made
it fail once the Kerberos tickets expired and you lost your connection to
Hive. If you don't have this patch in your version, once the Kerberos ticket
reaches the end of its lifetime the connection pool won't work
starts failing after a
week
Thanks Shawn. Looks like this was fixed in 1.7.0. Will have to upgrade.
From: Shawn Weeks [mailto:swe...@weeksconsulting.us]
Sent: Friday, July 27, 2018 8:07 AM
To: users@nifi.apache.org
Subject: Re: [EXT] Re: Hive w/ Kerberos Authentication starts failing after a
week
The project I'm on is running into this issue as well, and it gets particularly
painful when all of your servers are signed by the same root CA that signs
your smart card logins and you're using something like KnoxSSO. Explaining to
your end users that they should skip the first Certificate Prompt
be improved and I have some ideas for how to do it. I've cloned the
issue to NiFi to make sure we are tracking it for both projects [1][2]
[1] https://issues.apache.org/jira/browse/NIFIREG-189
[2] https://issues.apache.org/jira/browse/NIFI-5504
On Thu, Aug 9, 2018 at 11:54 AM, Shawn Weeks
mailto:swe
iPhone
On Aug 18, 2018, at 1:30 PM, Shawn Weeks
<swe...@weeksconsulting.us> wrote:
I was building some example NiFi workflows from the CSV files at
https://people.sc.fsu.edu/~jburkardt/data/csv/csv.html specifically nile.csv
and it appears that NiFi is trying to include the quoted
I was building some example NiFi workflows from the CSV files at
https://people.sc.fsu.edu/~jburkardt/data/csv/csv.html specifically nile.csv
and it appears that NiFi is trying to include the quoted header with quotes in
the Avro schema it generates. This is an all defaults CSVReader used with
You're probably hitting NIFI-5109. If NiFi goes through an election it loses
its state for the List Processors.
Thanks
Shawn
From: Travis Vaske
Sent: Wednesday, July 18, 2018 2:13 PM
To: users@nifi.apache.org
Subject: List Processor Issues
I have 20
I'm working on a project in NiFi that needs to handle a variety of CSV Formats
with varying delimiters, headers, escape characters etc. These files will be
converted to AVRO and streamed into their respective Hive tables. I'd rather
not have hundreds of CSVReader services and I'm wondering if
Realize this is a bit late, but I ran into your emails when I encountered the
same issue on my installation of NiFi. I don't know what the too many users
message is about, but I do know why NiFi gets a null pointer exception. The
PutHiveStreaming Processor uses the HiveEndpoint Class to
://raw.githubusercontent.com/weeksconsulting/nifi-templates/master/Test_Native.xml
Thanks
Shawn Weeks
Looking at PutSQL, it looks the same. I'm assuming I'm missing something
somewhere, because I don't see where the connection acquired from the pool is
ever closed.
Thanks
Shawn
From: Shawn Weeks <swe...@weeksconsulting.us>
Sent: Sunday, March 18, 2018 12:08 PM
To: users@nifi.apache.org
S
Realize this might be more of a dev question but in the PutHiveQL Processor it
appears that it acquires the connection object from the connection pool when
the processor is first started and then never releases or acquires the
connection object again. This is based on rel/nifi-1.5.0 tag on
e. I'd
try setting the back pressure limits to 0 / 0 B on one or more of the
relationships in the loop so the involved processors don't get "stuck".
Brandon
On Wed, Feb 28, 2018 at 12:26 PM Shawn Weeks
<swe...@weeksconsulting.us<mailto:swe...@weeksconsulting.us>> wrote:
Hi, I’ve got a workflow where I’m trying to extract nested compressed files. I
used an example I found on here where you setup a flow that passes the file
through the Identify Mime Type Processor and then a Route on Attribute to send
the file to either the Compress Content or Unpackage Content
to route to failure
not success.
Thanks
Shawn Weeks
Took a look at the pull request and that should handle the issue I was seeing.
Is there going to be an issue with the directory being left there if something
else fails? For my case the empty directory is fine.
Thanks
Shawn Weeks
From: Bryan Bende
Sent
somewhere
else. Can you ensure it is a valid date in that format before hand?
Thanks
Shawn Weeks
From: Juan Pablo Gardella
Sent: Thursday, October 18, 2018 10:13 AM
To: users@nifi.apache.org
Subject: Re: [EXT] ReplaceText cannot consume messages if Regex does not match
Hi, the error
What processor are you defining your expression in? I also may be
misunderstanding the problem, because I don’t see any regular expressions
anywhere. Can you create a sample workflow showing your issue so I can take a
look at it?
Thanks
Shawn Weeks
From: Juan Pablo Gardella
Sent: Thursday
}/[0-9]{4}')}” however that
would not catch things that look like dates but aren’t valid.
Thanks
Shawn Weeks
From: Juan Pablo Gardella
Sent: Thursday, October 18, 2018 11:03 AM
To: users@nifi.apache.org
Subject: Re: [EXT] ReplaceText cannot consume messages if Regex does not match
At search value
Shawn Weeks
for a specific call
to GenerateTableFetch have completed.
Thanks
Shawn Weeks
of some other error messages.
Thanks
Shawn Weeks
From: Noe Detore
Sent: Wednesday, October 31, 2018 7:16:15 AM
To: users@nifi.apache.org
Subject: PutHiveStreaming TimelineClientImpl Exception
Hello,
Using NIFI 1.5 PutHiveStreaming processor I am seeing a lot
nd you can then apply deeper processing on them.
Thanks
On Fri, Oct 26, 2018 at 11:36 AM Shawn Weeks
<swe...@weeksconsulting.us> wrote:
>
> Is there any way for a ScriptedRecordReader to set an attribute on a FlowFile
> when there is an error? Have a situation where I've wr
Shawn Weeks wrote:
>
> Is there any way for a ScriptedRecordReader to set an attribute on a FlowFile
> when there is an error? Have a situation where I've written a groovy script
> to parse xml into a specific record structure and occasionally the incoming
> data has characters not
Yeah there are about 4 million files in the directory and NiFi wasn't too happy
about listing all of them. This is just for a test anyway so I might be able to
use GetHDFS.
Thanks
Shawn Weeks
From: Bryan Bende
Sent: Friday, September 21, 2018 8:54:03 AM
;
however that doesn't appear to work, and the ListHDFS Processor returns nothing.
This is in the Hortonworks HDF 3.1.2 release of NiFi 1.5. The tooltip seems to
indicate the regex is only applied to the file name, so what am I missing?
Thanks
Shawn Weeks
with the error had the Kafka Service Name listed as
kafka/_HOST@MY_REALM.COM. If I change it to kafka/_HOST it works fine, but that
might be another bug.
Thanks
Shawn Weeks
From: Pierre Villard
Sent: Tuesday, September 25, 2018 8:06:07 AM
To: users@nifi.apache.org
Subject
Well, changing to kafka/_HOST doesn't work, but that's for other reasons. The
bug is that the processor can't be stopped.
Thanks
Shawn Weeks
From: Shawn Weeks
Sent: Tuesday, September 25, 2018 8:15:58 AM
To: users@nifi.apache.org
Subject: Re: Possible Bug with PublishKafka_1_0
)
at sun.security.jgss.GSSManagerImpl.createName(GSSManagerImpl.java:138)
at
com.sun.security.sasl.gsskerb.GssKrb5Client.<init>(GssKrb5Client.java:107)
... 24 common frames omitted
Thanks
Shawn Weeks
but it's all
going to be custom record readers, as even the new xml record reader isn't going
to handle this convoluted stuff I'm getting.
My next attempt may be to stream it into HBase and then copy it from there to
Hive.
Thanks
Shawn Weeks
From: Matt Burgess
(RecordFieldType.BYTE.dataType),(boolean)false)
]
)
Thanks
Shawn Weeks
From: Shawn Weeks
Sent: Monday, September 17, 2018 2:36:40 PM
To: users@nifi.apache.org
Subject: Re: Scripted Record Reader - Missing Something Obvious
It's not happy about the default either. I'm
pe Byte.
Which means that it is expecting as its value an object of type Byte[] but you
are passing it an object of type byte[].
You'd have to create a Byte[] instead, using the object wrapper instead of the
primitive byte array.
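The boxing described above can be done by hand if you'd rather not pull in a library; this is effectively what Apache Commons ArrayUtils.toObject does:

```java
public class ByteBoxing {
    // Wrap each primitive byte in its object counterpart so the
    // result is a Byte[] rather than a byte[]
    static Byte[] toObject(byte[] primitives) {
        Byte[] boxed = new Byte[primitives.length];
        for (int i = 0; i < primitives.length; i++) {
            boxed[i] = primitives[i]; // autoboxing: byte -> Byte
        }
        return boxed;
    }
}
```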
Thanks
-Mark
On Sep 17, 2018, at 2:12 PM, Shawn Weeks
mailto:swe..
/NIFI-4857
On Mon, Sep 17, 2018 at 3:06 PM Shawn Weeks wrote:
>
> So I've tried the Apache Commons ArrayUtils.toObject and now I get this which
> isn't much different.
>
>
> new MapRecord(recordSchema,[
> 'id':variables.uuid,
> 'file_name':variables."original_filename"
don't need a nullable
RecordField, so you can add a boolean 'false' param to your
RecordField constructor for that field, and that may get you around
the union issue.
Regards,
Matt
On Mon, Sep 17, 2018 at 3:19 PM Shawn Weeks wrote:
>
> It's the Hortonworks variation on 1.5. HDF 3.1.2. I'
t.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Thanks
Shawn Weeks
cherry picks
from 1.8.0 jiras.
Will check the archives as mentioned, thanks again.
patw
On Wed, Dec 19, 2018 at 4:45 PM Shawn Weeks
<swe...@weeksconsulting.us> wrote:
There is a bug for this but I’m not sure which release fixed it. Something
after 1.5 I think. The patch is in the hortonwor
Unable to view Bucket with ID b5c0b8d3-44df-4afd-9e4b-114c0e299268.
> Returning Forbidden response.
>
> Enabling Jetty debug log may be helpful to get more information, but
> lots of noisy logs should be expected.
> E.g. add this entry to conf/logback.xml
>
>
> Thanks,
&g
],
groups[] does not have permission to access the requested resource. No
applicable policies could be found. Returning Forbidden response.
I could just give blanket access to everything but I prefer to be more precise.
Thanks
Shawn Weeks
Since fileSize is a standard property for a flow file shouldn’t the TestRunner
set it when you queue a new file? The properties it appears to set are filename
and uuid.
Thanks
Shawn Weeks
Sent from my iPhone
icitly by a processor). You can get the size of a FlowFile by
calling FlowFile.getSize()
Does that help?
Thanks
-Mark
On Feb 26, 2019, at 11:20 AM, Shawn Weeks
<swe...@weeksconsulting.us> wrote:
Since fileSize is a standard property for a flow file shouldn’t the TestRunner
set it wh
I found the code that inserts those as if they were attributes in
ValueLookup. To further confuse things, some processors appear to set an
attribute of that name.
Thanks for explaining that; I was a bit confused.
Shawn Weeks
Sent from my iPhone
On Feb 26, 2019, at 10:54 AM, Mark Payne
I’m pretty sure AVRO only supports a single schema per file. You can create
columns of record type and put each type of record in the correct column but at
that point I might just look at using a MAP data type and write a custom record
reader. Normally you’d split the data into a separate file
Are you not able to update the properties for the controller service? It looks
like you use something like PUT /controller-services/{id} with some JSON kinda
like this:
{ "revision": {…}, "id": "value", "uri": "value", "position": {…},
"permissions": {…}, "bulletins": [{…}],
out how to test it. I'm
working off of the NiFi 1.5 Processor ArchType.
Thanks
Shawn Weeks
How are you setting the value to “1” in the Notify Processor? Do you have two
Notify processors back to back, with one setting it to 0 and then the next
incrementing by one? I’ve always had trouble if several Notify processors are
trying to change the same key, because the Wait sees the value as it flip-flops.
s the Signal Counter Delta to 0 to prevent the next
flowfile from proceeding before the gate is cleared. If there is work waiting
in the Wait queue, that could be tripping the back-to-back issue…
Thanks,
Dave
From: Shawn Weeks
Sent: Wednesday, June 12, 2019 11:50 AM
To: users@nifi.apache.o
See this example as I had a lot of questions about wait and notify earlier and
this helped a lot.
https://gist.github.com/ijokarumawak/9e1a4855934f2bb9661f88ca625bd244
Thanks
Shawn
Sent from my iPhone
On May 17, 2019, at 1:59 PM, David Gallagher
mailto:dgallag...@cleverdevices.com>> wrote:
How do I fetch all the bulletins for a Process Group, including all of its
children? When I provide a filter groupId= to the REST API /flow/bulletin-board
it doesn’t include children.
Thanks
Shawn Weeks
ibe it, where the running_count property
> has to be set to something on the flowfile in order for the flowfile to go
> through.
>
> On Tue, Apr 23, 2019 at 10:39 PM Shawn Weeks
> wrote:
>>
>> Running into some additional inconsistencies. I’m under the impression that
you find the example useful.
Thanks,
Koji
On Thu, Apr 25, 2019 at 1:23 AM Shawn Weeks wrote:
>
> Maybe it's just the documentation that's way off. I would expect that if I
> said wait for the counter to be 1 that it releases things when the counter is
> 1 not when the counter is
Subject: Re: Regarding Hanging Tez queries in Nifi
The query is doing a validation (performing a count), so only a single row is
being returned.
On Thu, Apr 25, 2019 at 10:05 AM Shawn Weeks
<swe...@weeksconsulting.us> wrote:
How many rows does the query return? The actual fetch o
Trying to figure out the correct way to stream timestamps, as it appears that
using the Hive String Timestamp format no longer works.
Error [java.lang.NumberFormatException: For input string: "2019-04-26
00:05:00.363"]
Also, what about dates?
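As a sanity check, the value in that error message is a valid JDBC temporal string, which the plain java.sql classes parse without complaint:

```java
import java.sql.Date;
import java.sql.Timestamp;

public class HiveTemporalCheck {
    public static void main(String[] args) {
        // Timestamp.valueOf accepts yyyy-[m]m-[d]d hh:mm:ss[.f...]
        Timestamp ts = Timestamp.valueOf("2019-04-26 00:05:00.363");
        System.out.println(ts.getNanos()); // 363000000

        // Date.valueOf accepts yyyy-[m]m-[d]d
        Date d = Date.valueOf("2019-04-26");
        System.out.println(d); // 2019-04-26
    }
}
```

So a NumberFormatException suggests the streaming path is trying to parse the whole string as a number (e.g. an epoch value) rather than as a formatted timestamp. That's an assumption on my part, not something confirmed here.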
Thanks
Shawn Weeks
apRecord$$Lambda$1216/421560596@5794948d
for field c_date] parsing Record [OW[class=class
org.apache.nifi.serialization.record.MapRecord,value=MapRecord[{c_string=Hello
World, c_timestamp=2019-01-01 09:23:15, c_date=2019-01-01}]]].:
See attached example...
Thanks
Shawn Weeks
From: Shawn Weeks
supposed to have two Notify Processors
back to back where one resets the counter to zero and the next increments by
one? That seems a bit clunky.
Thoughts?
Thanks
Shawn Weeks
I’m assuming you’re talking about the snappy problem. If you use CompressContent
prior to PutHDFS you can compress with Snappy, as it uses the Java native Snappy
lib. The HDFS processors are limited to the actual Hadoop libraries, so they’d
have to change from native to get around this. I’m pretty
: Shawn Weeks
Reply-To: "users@nifi.apache.org"
Date: Thursday, November 7, 2019 at 7:41 AM
To: "users@nifi.apache.org"
Subject: Re: How to replace multi character delimiter with ASCII 001
So that worked but now I can’t figure out how to set the Value Separator in th
but NiFi isn't
interpreting \u0001. I've also tried \001 and ${literal('\u0001')}. None of
which worked. What is the correct way to do this?
Thanks
Shawn Weeks
alopre...@apache.org
alopresto.apa...@gmail.com
PGP Fingerprint: 70EC B3E5 98A6 5A3F D3C4 BACE 3C6E F65B 2F7D EF69
On Nov 6, 2019, at 12:25 PM, Shawn Weeks
<swe...@weeksconsulting.us> wrote:
I'm trying to process a delimited file with a multi character delimiter which
is n
There appears to be something in the NiFi Provenance that Site to Site
Reporting doesn't like. Does anyone know a workaround for this, as it quits
sending updates when this happens? I'm on NiFi 1.9.2.
Thanks
Shawn
2019-11-14 11:42:50,627 ERROR [Timer-Driven Process Thread-11]
ock in order to work. IIRC
Hadoop-Snappy is different from regular Snappy in the sense that it puts the
compression header in each block so the file can be reassembled and
decompressed correctly.
On Nov 11, 2019, at 10:30 AM, Shawn Weeks wrote:
I’m assuming you’re talking about the snappy problem.
in DNS, so
I’m not sure where it’s getting the name from.
Thanks
Shawn Weeks
alopre...@apache.org
alopresto.apa...@gmail.com
PGP Fingerprint: 70EC B3E5 98A6 5A3F D3C4 BACE 3C6E F65B 2F7D EF69
On Dec 10, 2019, at 6:59 PM, Mike Thomsen
<mikerthom...@gmail.com> wrote:
Are you using the tarball or a Docker image?
On Mon, Dec 9, 2019 at
Is there a work around for https://issues.apache.org/jira/browse/NIFI-6886?
Site-to-site seems to work; it just floods the logs with errors. I’m using
site-to-site to log data provenance back to a database for future analysis.
2019-12-11 22:58:17,216 WARN [NiFi Site-to-Site Connection Pool
It uses snappy-java to get around the native class path issues that would exist
otherwise. What’s wrong with snappy-java?
Thanks
Shawn
From: Noe Detore
Reply-To: "users@nifi.apache.org"
Date: Monday, November 25, 2019 at 2:16 PM
To: "users@nifi.apache.org"
Subject: CompressContent
Trying to figure out what's causing this issue. In a simple test of
UpdateRecord I'm getting an exception. This is on NiFi 1.9.2
2019-09-25 11:59:55,462 ERROR [Timer-Driven Process Thread-10]
o.a.n.processors.standard.UpdateRecord
UpdateRecord[id=692c3fdc-016d-1000-1d1a-3f2c2a1a99bb] Failed to
Hopefully I'm missing something really trivial. I've got a new 1.9.2 install
with HTTPS enabled. I've added my initial user to the data provenance policy,
and I'm able to click on the data provenance menu, but it's always empty despite
seeing files getting modified in the ./provenance_repository
Well nevermind on this. I had to both add a policy for View Provenance and add
a policy on the default flow. Now I see events.
Thanks
Shawn
From: Shawn Weeks
Sent: Wednesday, September 25, 2019 5:29 PM
To: users@nifi.apache.org
Subject: NiFi 1.9.2 Data
I copied the //log_date from the NiFi documentation, so if you're only supposed
to use single slashes we might have a bug there. Turns out the EOF error is
because I was in Record Path mode instead of Literal Value, something I missed
in the documentation.
Thanks
Shawn Weeks
I’m trying to process some tab delimited text that may contain embedded quotes
and slashes. How do I disable quote and escape in the CSVReader so that they
aren’t used? Setting them to empty text doesn’t work, and I’m getting the
following error because there are quotes and slashes in the data
On Tue at 9:59 PM, Shawn Weeks
<swe...@weeksconsulting.us> wrote:
What's the exception you're seeing?
Thanks
Shawn
From: DC Gong <ggong0...@gmail.com>
Reply-To: "users@nifi.apache.org<mailto:users@nifi.apache.org>"
mailto:users@nifi.apache.org>>
Date: Tue
Did you create the actual map cache server in controller services? I couldn’t
tell. All I saw was the client service.
Thanks
Shawn
Sent from my iPhone
On Dec 24, 2019, at 12:16 PM, William Gosse wrote:
I’m trying to use the DetectDuplicate processor but not having much luck. Here’s
the config:
I’m pretty sure that exception is coming from Hive and not NiFi. I’m really
struggling to see why the Hive JDBC driver needs understanding of storage when
it’s just Thrift messages to the HiveServer2. Are you able to run these queries
through beeline?
Thanks
From: Matt Burgess
Reply-To:
s,
Matt
On Thu, Jan 23, 2020 at 5:39 PM Shawn Weeks
<swe...@weeksconsulting.us> wrote:
I’m pretty sure that exception is coming from Hive and not NiFi. I’m really
struggling to see why the Hive JDBC driver needs understanding of storage when
it’s just Thrift messages to the HiveSe
How are you defining the schema, and what data type are you setting for that column?
Thanks
Shawn
From: KhajaAsmath Mohammed
Reply-To: "users@nifi.apache.org"
Date: Wednesday, February 19, 2020 at 3:32 PM
To: "users@nifi.apache.org"
Subject: NIFI Bug with Convert Record - 99.99 changed to
If you’re using an external one like HBase I wouldn’t expect there to be any
issue, assuming it had enough space. However, if you are using the built in one,
aka DistributedMapCacheServer, then all the values need to fit in memory. One
thing I see an issue with is there isn’t a bulk way to get data
What happens when you run PutSQL with something like this “call
my_stored_procedure(?,?)” as the command? I think in some cases you can,
depending on the database. What database are you using? Worst case scenario, you
could do it with a couple of lines of groovy in ExecuteGroovyScript, which should
Athena really isn't designed for single record inserts, as each insert will
create another file in S3, and the driver behaves a lot more like Hive than a
regular JDBC connection, so that processor probably won't ever work. To load
data into Athena from NiFi you can either use ConvertRecord to
but I don’t know enough about the api to know if there is a context maintained
throughout a given flow file to stash the variable in.
Thanks
Shawn Weeks
ocessor if that satisfies your use case.
Regards,
Matt
On Thu, Jan 2, 2020 at 11:44 AM Shawn Weeks
wrote:
>
> I have a use case where I need to append a row number to every record in
a flow file. Not everything I receive is text so the only guarantee I have is
On 1/3/20, 9:24 AM, "Shawn Weeks" wrote:
Adding an additional attribute to UpdateRecord sounds pretty straightforward;
the only thing I'm not sure about is where to store the state between each
call to UpdateRecord.process. It would also be nicer if UpdateRecord
could upd
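Stripped of the NiFi record API, the state question above is just a counter that has to outlive each call; a plain-Java sketch (class and field names are made up for illustration):

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class RowNumberer {
    private long counter = 0; // state carried across successive calls

    // Adds a row_number field to every record in one batch; the counter
    // survives between batches, mimicking per-flow-file processing state
    public List<Map<String, Object>> number(List<Map<String, Object>> records) {
        List<Map<String, Object>> out = new ArrayList<>();
        for (Map<String, Object> record : records) {
            Map<String, Object> copy = new LinkedHashMap<>(record);
            copy.put("row_number", ++counter);
            out.add(copy);
        }
        return out;
    }
}
```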
much nonexistent so I’m not
sure where to even begin debugging.
Thanks
Shawn Weeks
What's the exception you're seeing?
Thanks
Shawn
From: DC Gong
Reply-To: "users@nifi.apache.org"
Date: Tuesday, December 24, 2019 at 1:39 AM
To: "users@nifi.apache.org"
Subject: CaptureChangeMySQL Error.
Hello,
I want to use a CaptureChangeMySQL processor.
The properties settings are set, but
I’ve created NIFI-6966 and NIFI-6967 with some examples.
Thanks
Shawn Weeks
From: Pierre Villard
Reply-To: "users@nifi.apache.org"
Date: Thursday, December 26, 2019 at 9:33 AM
To: "users@nifi.apache.org"
Subject: Re: CSV Record Reader - No Quote and No Escape
Hey Shawn
Attached is an example. Do not try this on a NiFi cluster you care about as you
might have to delete your flow.xml.gz file to kill it. I increased my xmx and
xms to 4gb for this test.
Thanks
Shawn Weeks
From: Pierre Villard
Reply-To: "users@nifi.apache.org"
Date: Thursday, Decembe
This example was for a different issue. The examples for CSV Reader are on the
JIRA. Sorry.
Thanks
Shawn
From: Shawn Weeks
Reply-To: "users@nifi.apache.org"
Date: Thursday, December 26, 2019 at 12:28 PM
To: "users@nifi.apache.org"
Subject: Re: CSV Record Reader - No
To: "users@nifi.apache.org"
Subject: Re: Advanced QueryRecord Brings NiFi Down
If you could share more details like query, schema, etc. that would be a big
help toward setting up for a Jira ticket to investigate.
On Mon, Dec 23, 2019 at 4:11 PM Shawn Weeks
<swe...@weeksconsulting.us>
ure there's a difference in
the code between missing and null fields).
You can try "type": "string" in ValidateRecord to see if that fixes
it, or there's a "StrNotNullOrEmpty" operator in ValidateCSV.
Regards,
Matt
On Mon, Jan 6, 2