+1 (binding)
Ran through the release helper guide. Built successfully, and the
example flows worked as expected.
Note for others using an old macOS:
I encountered a test failure due to an incompatibility between macOS
Sierra (10.12.6) and RocksDB 6.2.2, as follows. Recent macOS versions
should be fine.
Hi Lei,
How about setting the FIFO prioritizer on all the connections
preceding the MergeRecord?
Without setting any prioritizer, FlowFile ordering is nondeterministic.
Thanks,
Koji
On Tue, Oct 15, 2019 at 8:56 PM wangl...@geekplus.com.cn
wrote:
>
>
> If FlowFile A, B, C enter the
Hi Lei,
To address the FlowFile ordering issue with CaptureChangeMySQL, I'd
recommend using the EnforceOrder processor and the FIFO prioritizer
before any processor that requires precise ordering. EnforceOrder can
use the "cdc.sequence.id" attribute.
Thanks,
Koji
On Tue, Oct 15, 2019 at 1:14 PM
ers .
> 3. restore the same
>
> But Nifi flow is failed to process data .
>
> Have you ever tried this scenario ? If you tried please let me know .
>
> Thanks & Regards,
> Ganesh.B
>
>
> -Original Message-
> From: Koji Kawamura
> Sent: Friday,
Hi Ganesh,
What did you mean by the following statement? Could you elaborate on
what was expected and how it actually behaved?
> Nifi is not processing flow from the point where it got stopped or crashed .
Some processors need their "state" to be restored in addition to
FlowFiles. State is stored in ZooKeeper
Hi Seokwon,
I've added you to the contributor role. Looking forward to seeing your contribution!
Thanks,
Koji
On Thu, Oct 10, 2019 at 7:51 AM Seokwon Yang wrote:
>
> Hello,
>
> I would like to contribute to the nifi codebase. Please add me (Jira username
> : sjyang18) as a contributor.
>
> Thanks
>
>
Hi Lei,
I don't know of any built-in NiFi feature to achieve that.
To distribute the CaptureChangeMySQL load among nodes, I'd deploy a
separate standalone NiFi (or even MiNiFi Java) in addition to the main
NiFi cluster that runs the main data flow.
For example, if there are 5 databases and 3 NiFi nodes, deploy
Hi Pierre,
PR 3394 looks good, but it is hard to merge to master without manually
resolving conflicts, at least with the commands I know. Please see my
comment on the PR.
Thanks,
Koji
On Thu, Oct 3, 2019 at 1:59 AM Pierre Villard
wrote:
>
> Someone willing to merge
+1 Create NiFi Standard Libraries (binding)
On Wed, Sep 4, 2019 at 7:25 AM Mike Thomsen wrote:
>
> +1 binding
>
> On Tue, Sep 3, 2019 at 5:33 PM Andy LoPresto wrote:
>
> > +1, create NiFi Standard Libraries (binding)
> >
> > Andy LoPresto
> > alopre...@apache.org
> > alopresto.apa...@gmail.com
There is a critical issue with the RAW Site-to-Site server-side code on Java 11.
RAW Site-to-Site currently cannot be used due to an illegal blocking
mode error.
https://issues.apache.org/jira/browse/NIFI-5952
There is a PR to make RAW S2S stop using blocking mode. This addresses
the issue and RAW
Hi Chris,
You are correct, the Wait processor has to rely on an attribute within a
FlowFile to determine the target signal count.
I think the idea of making Wait able to fetch the target signal count
from DistributedMapCache is a nice improvement.
Please create a JIRA for further discussion. I guess we
Hello,
Sorry to hear that you are having trouble with upgrading NiFi.
What complaints or error messages do you get?
Thanks,
Koji
On Fri, Jul 5, 2019 at 8:18 PM Chaganti Suresh Naidu (NCS)
wrote:
>
> Dear Sir/Madam,
> Greetings,
>
> I was using nifi quite some time, with the version 1.4.0,
>
: org.jnp.interfaces.NamingContextFactory
- Naming Provider URL: jnp://localhost:1099
- Connection Factory Name: /ConnectionFactory
- Naming Factory Libraries: /Users/koji/Downloads/hornetq-2.4.0.Final/lib
Hope this helps.
Koji
On Fri, Jun 14, 2019 at 10:23 AM Koji Kawamura wrote:
>
> Hello,
>
Hello,
PutJMS is deprecated, PublishJMS is recommended instead.
PublishJMS uses JMSConnectionFactoryProvider Controller Service, in
which you can specify "MQ ConnectionFactory Implementation" and "MQ
Client Libraries path (i.e., /usr/jms/lib)".
You will need to download HornetQ from here, extract
Thanks Bryan for the heads up.
My GPG key had expired. I've renewed my key by extending its expiration.
I've now confirmed that my commits are marked as 'verified' on GitHub.
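For anyone else hitting this, the renewal can be done non-interactively with modern GnuPG (2.1.17+); the key fingerprint and keyserver below are placeholders:

```shell
# Extend the primary key's expiration by one year, then re-publish it
# so services like GitHub pick up the new expiration date.
gpg --quick-set-expire <KEY_FINGERPRINT> 1y
gpg --keyserver hkps://keyserver.ubuntu.com --send-keys <KEY_FINGERPRINT>
```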
Koji
On Wed, Jun 12, 2019 at 5:43 AM Andy LoPresto wrote:
>
> Peter,
>
> If you have specific issues setting it up, I’m
+1 (binding)
Went through the Release Helper Guide.
- On OS X
- Building nifi-nar-maven-plugin with contrib-check was successful
- Removed the .m2 dir before building NiFi
- Full NiFi build was successful
- Tested standalone and secure clustered NiFi; worked as expected
- Confirmed
Hi Matt,
I posted my answer to your Stackoverflow question.
https://stackoverflow.com/questions/55483317/nifi-create-indexes-after-inserting-records-into-table/55486259#55486259
Thanks,
Koji
On Wed, Apr 3, 2019 at 8:04 AM matthewmich...@northwesternmutual.com
wrote:
>
> NiFi Developers,
>
> I
Hi Rajesh,
To process FlowFiles in the order of their arrival, you need to use
the FirstInFirstOutPrioritizer on the outgoing connection from the
ConsumeMQTT processor, and on all connections after that where
first-in-first-out ordering is required.
Please refer to these docs for details.
+1 binding
- Went through the release helper guide
- Ran a simple flow sending data from MiNiFi to NiFi using S2S; it worked fine
JNI processors are exciting! Thanks for managing the release, Marc!
Koji
On Fri, Mar 22, 2019 at 1:12 PM Kevin Doran wrote:
>
> +1, binding
>
> - verified build on Mac OS X
+1 binding
Went through the release helper guide.
Thanks Joe for managing this release!
On Fri, Mar 15, 2019 at 9:34 AM Aldrin Piri wrote:
>
> +1, binding
>
> comments:
> hashes and signature looked good
> build, tests, and contrib check good on Ubuntu and MacOS
>
>
> On Thu, Mar 14, 2019 at 6:58
Hi Nadeem,
> nifi.remote.input.host=
This property controls how the S2S server introduces itself to S2S
clients for further network communication.
For example, let's say the server has 2 IP addresses, private and
public, and the public IP is bound to an FQDN. The hostnames for the
server would be:
Hi Nadeem,
How many S2S clients are connecting to your NiFi? And how many NiFi
nodes does your remote NiFi have?
I've encountered the same error message when I conducted a test using
hundreds of S2S clients connecting to a single NiFi node.
It happened in a situation like the following:
1. A S2S
Hello,
The error message indicates that the URL is not in a valid format.
Is there trailing whitespace in this configuration?
nifi.remote.input.host=FQDN of Nifi3
Thanks,
Koji
On Sat, Mar 2, 2019 at 12:00 PM Puspak wrote:
>
> # Site to Site properties-for Nifi1
> # nifi.remote.input.host=
+1 Release this package as nifi-1.9.0 (binding)
- Verified signature and hashes
- Clean build & test passed
- Tested flows using standalone and secure cluster environments
Thanks,
Koji
On Mon, Feb 18, 2019 at 10:30 PM Denes Arvay wrote:
>
> +1 Release this package as nifi-1.9.0 (non-binding)
>
Probably you've already found a solution, but just in case, did you
update the nifi-processor-configuration file, too?
nifi-update-attribute-ui/src/main/webapp/META-INF/nifi-processor-configuration
Thanks,
Koji
On Thu, Jan 3, 2019 at 7:29 PM DAVID SMITH
wrote:
>
> Hi
> I am looking to create a
Hi team,
I'm trying to fix a part of the ANTLR Lexer that hasn't been updated
since NiFi was first released. I would like to have more reviewers and
comments if possible.
Ed, Otto and I have been working on
NIFI-5826 UpdateRecord processor throwing PatternSyntaxException
Hello,
Instead of implementing another lock within NiFi, I suggest
investigating the reason why the primary node changed.
NiFi uses Zookeeper election to designate a node to be the primary node.
If a primary node's Zookeeper session timed out, Zookeeper elects
another node to take over the
Hi Bhasker,
MergeRecord processor can do the job.
If your XML files are compressed, you can use CompressContent in
'decompress' mode in front of MergeRecord.
Please refer to this NiFi flow template as an example of MergeRecord
merging multiple XML files.
Hello,
You can substitute the back-pressure configurations saved in your
flow.xml.gz file before restarting NiFi.
For example, using the sed command below, which creates updated-flow.xml.gz:
gunzip -c conf/flow.xml.gz | sed 's/<backPressureObjectThreshold>1<\/backPressureObjectThreshold>/<backPressureObjectThreshold>300<\/backPressureObjectThreshold>/g' | gzip > conf/updated-flow.xml.gz
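A self-contained sketch of this approach, using a toy flow.xml.gz in a temp directory (the XML content and threshold values are placeholders):

```shell
# Build a toy flow.xml.gz, rewrite its back-pressure threshold with sed,
# and write the result to updated-flow.xml.gz.
cd "$(mktemp -d)"
mkdir conf
printf '<connection><backPressureObjectThreshold>10000</backPressureObjectThreshold></connection>\n' \
  | gzip > conf/flow.xml.gz
gunzip -c conf/flow.xml.gz \
  | sed 's/<backPressureObjectThreshold>10000<\/backPressureObjectThreshold>/<backPressureObjectThreshold>300<\/backPressureObjectThreshold>/g' \
  | gzip > conf/updated-flow.xml.gz
# Show the rewritten XML with the new threshold
gunzip -c conf/updated-flow.xml.gz
```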
Hi Manee,
It depends on the client API how to tell it what the next response
data set should be.
That may be an additional query parameter such as a last-fetch
timestamp, or something like the HTTP ETag header used in many APIs.
You can pass FlowFiles to InvokeHTTP to supply such parameters.
Also, I
Hello Dave,
Although you already mentioned that you haven't migrated to 1.8 yet, I
recommend doing so because:
- 1.8 adds a new 'Listing Strategy' property to the ListSFTP processor,
which may help your use-case not miss any files to list
https://issues.apache.org/jira/browse/NIFI-5406
- 1.8 also
Hi Mark,
> In this scenario, should the nifi.cluster.load.balance.comms.timeout have
> caused the balancing operation to terminate (unsuccessful)?
I agree with that. Weren't there any WARN log messages written?
Currently the NiFi UI doesn't have the capability to show
load-balancing related errors on the
Hi Jon,
About reporting counter values, there is an existing JIRA with an
improvement idea to expose counters to the reporting task context.
That requires NiFi framework-level improvements. I'd suggest taking a
look at it, and resuming the discussion there if needed.
Hi Milan,
I assume you put both the ControllerService interface and
implementation classes into the same NAR file.
You need to separate those into different NARs.
Please refer to nifi-standard-services-api-nar (interfaces) and
nifi-distributed-cache-client-sevices-nar (implementations).
Thanks,
Hi all,
I'd like to add another option to Matt's list of solutions:
4) Add a processor property, 'Enable detailed error handling'
(defaults to false), then toggle available list of relationships. This
way, existing flows such as Peter's don't have to change, while he can
opt-in new
+1 (binding)
Ran through the release helper.
No issue was found.
Thanks for RM duties, Jeff!
On Wed, Oct 24, 2018 at 1:42 PM James Wing wrote:
>
> +1 (binding). Ran though the release helper, tested the resulting binary.
> Thank you for your persistence, Jeff.
>
>
> On Mon, Oct 22, 2018 at
+1 (binding).
Build passed; confirmed a few existing flows with a secure cluster.
On Mon, Oct 22, 2018 at 12:01 PM James Wing wrote:
>
> +1 (binding). Thanks again, Jeff.
>
> On Sat, Oct 20, 2018 at 8:11 PM Jeff wrote:
>
> > Hello,
> >
> > I am pleased to be calling this vote for the source
+1 binding
Validated signatures and hashes.
Confirmed existing flows work, and exercised load-balance and
node-offload with a secured cluster from both the UI and CLI.
Thank you for the RM duties, Jeff!
Koji
On Fri, Oct 19, 2018 at 6:32 AM Jeremy Dyer wrote:
>
> +1, binding
>
> Validated signatures, hashes, and
Jeff, Sivasprasanna,
NIFI-5698 (PR 3073), fixing the DeleteAzureBlob bug, is merged.
Thanks,
Koji
On Mon, Oct 15, 2018 at 10:18 AM Koji Kawamura wrote:
>
> Thank you for the fix Sivaprasanna,
> I have Azure account. Reviewing it now.
>
> Koji
> On Sun, Oct 14, 2018 at 11
Thank you for the fix, Sivaprasanna.
I have an Azure account. Reviewing it now.
Koji
On Sun, Oct 14, 2018 at 11:21 PM Jeff wrote:
>
> Sivaprasanna,
>
> Thanks for submitting a pull request for that issue! Later today or
> tomorrow I'll have to check to see if I've already used up my free-tier
>
at 1:16 PM Clay Teahouse wrote:
>
> Thanks for the reply, Koji.
>
> In case of RPG, are there circumstances where the connections are not
> persistent?
>
>
> On Tue, Sep 25, 2018 at 12:14 AM Koji Kawamura
> wrote:
>
> > Hi Clay,
> >
> > RPG (Site-t
Hi Clay,
RPG (Site-to-Site) is a peer-to-peer communication protocol. There's
no distinction between how the primary node communicates with the
remote cluster and how the other nodes do.
E.g. with Cluster A (nodes a1, a2 and a3) and Cluster B (nodes b1, b2 and b3),
Each node must be
+1 (binding)
Verified building and testing with Ranger auth.
$ mvn clean install -Pcontrib-check -Pinclude-ranger
The Apache release distribution guideline has been updated and now
discourages providing SHA-1. We should update the release process template.
"SHOULD NOT supply a MD5 or SHA-1 checksum file
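For example, the template could generate and verify SHA-256/512 files instead (the artifact name below is a placeholder; uses GNU coreutils):

```shell
# Create a placeholder artifact, write SHA-256/512 checksum files,
# then verify the SHA-256 file the way a release verifier would.
cd "$(mktemp -d)"
printf 'release bits' > nifi-x.y.z-source-release.zip
sha256sum nifi-x.y.z-source-release.zip > nifi-x.y.z-source-release.zip.sha256
sha512sum nifi-x.y.z-source-release.zip > nifi-x.y.z-source-release.zip.sha512
sha256sum -c nifi-x.y.z-source-release.zip.sha256
```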
+1 (binding)
Verified things in release helper guide.
Tested that a NiFi flow created in an old version can be used with 1.7.0.
Some component properties have been renamed.
I listed the renamed properties that I'm aware of on the Migration Guidance page.
Hi all,
A PR has been submitted to do classification or prediction using a
pre-built model with the DeepLearning4J library (thanks @mans2singh!).
https://github.com/apache/nifi/pull/2686
I found the following things can/should be improved so that people can
use it more easily from a NiFi flow:
- To utilize
+1 (binding)
- Ran through the Release Helper Guide
- Tested a database other than H2
- Tested the Git persistence provider
A few minor pieces of feedback:
1. A database user whose password is blank cannot be used.
When I used HSQLDB, the default 'sa' user does not have a password. If I
configure a blank password
ibutes as criteria by default.
> I'll update the PR accordingly and make the new method default to the
> existing one in all of the lookup services that are already there.
>
> On Sat, Jun 9, 2018 at 8:44 AM Mike Thomsen wrote:
>
>> https://issues.apache.org/jira/browse/NIFI-5287
&g
Thanks Mike for starting the discussion.
Yes, I believe that will make LookupService and Schema access strategy
much easier, reusable, and useful.
What I imagined is not adding a new method signature, but simply
copying certain FlowFile attributes into the coordinates map.
We can add that at
There is an existing JIRA submitted by Pierre.
I think its goal is the same as what Joe mentioned above.
https://issues.apache.org/jira/browse/NIFI-4026
As for hashing and routing data with affinity/correlation, I think
'Consistent Hashing' is the most popular approach to minimize the
impact of
Hi Ruben,
I am not aware of any configuration to do that on the NiFi side; I
believe NiFi doesn't have that.
I usually do access control based on client IP addresses with a
firewall. 'iptables' is the standard one for Linux. You can find many
examples on the internet for configuring iptables.
If you are
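As a sketch of the iptables approach (the port and subnet are assumptions; substitute your NiFi web port and trusted range):

```shell
# Allow a trusted subnet to reach the NiFi web UI port, drop everyone else.
# Run as root; 8443 and 10.0.0.0/24 are placeholders for your environment.
iptables -A INPUT -p tcp --dport 8443 -s 10.0.0.0/24 -j ACCEPT
iptables -A INPUT -p tcp --dport 8443 -j DROP
```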
+1 (binding)
- Verified hashes
- Build and unit tests succeeded
- Ran simple flows to send data from MiNiFi CPP to NiFi
- on Mac OS 10.13.4
I have feedback on the release procedure.
The Apache release distribution policy has the following:
"SHOULD NOT supply a MD5 checksum file (because MD5 is too
Hi Mike,
In order to evaluate an Expression Language query with a Map
containing variables, I used Query.prepare to parse a query String
into a PreparedQuery.
The following code snippet works without issue. Is that something you want to do?
final Map map = Collections.singletonMap("name", "John Smith");
final
Hello,
> A black command window screen pops up for a brief second and then closes.
Instead of double-clicking run-nifi.bat, you can run the bat file from
a command prompt. That way, the output of run-nifi.bat will stay in
the command prompt and can help with debugging what went wrong.
1. Open
Hi Anil,
1. I'd use MonitorActivity, too.
Assuming you want to do something when there are no new files listed
by ListSFTP at a scheduled time.
Then you can add MonitorActivity in between ListSFTP and FetchSFTP.
ListSFTP -> MonitorActivity --success--> FetchSFTP
Hi Bobby,
Elasticsearch creates an index if it doesn't exist.
I haven't tried it myself yet, but Elasticsearch's index template
feature might be useful to tweak default settings for indices that are
created automatically.
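A hypothetical sketch of such a template (the template name, index pattern, and settings are assumptions; on Elasticsearch 6+ the 'template' field is named 'index_patterns'):

```shell
# Register an index template so auto-created nifi-* indices get these settings.
# localhost:9200 is a placeholder for your Elasticsearch endpoint.
curl -X PUT 'http://localhost:9200/_template/nifi_default' \
  -H 'Content-Type: application/json' \
  -d '{"template": "nifi-*", "settings": {"number_of_replicas": 1}}'
```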
Hi Mike,
I agree with the approach of enriching provenance events. In order to
do so, we can use several places to embed metadata:
- FlowFile attributes: automatically mapped to a provenance event, but
as Andy mentioned, we need to be careful not to put sensitive data.
- Transit URI: when I
+1 (binding)
Ran through the release helper steps.
Confirmed example flows including Atlas integration work with a secure
NiFi cluster.
Thanks for the release efforts!
On Fri, Apr 6, 2018 at 1:01 PM, James Wing wrote:
> +1 (binding) - Ran through the release helper checksums
+1 (binding)
- Confirmed hashes
- Built with include-atlas profile
- Confirmed various flows with 3 node secured cluster on Ubuntu
- Tested integration with Hadoop environment and NiFi Registry
Koji
On Wed, Mar 28, 2018 at 12:27 PM, Andrew Lim wrote:
> +1
+1
On Mon, Mar 12, 2018 at 3:10 AM, Matt Burgess wrote:
> +1
>
> On Sun, Mar 11, 2018 at 1:00 PM, Jeff wrote:
>> +1
>>
>> On Sat, Mar 10, 2018 at 8:42 PM Joe Skora wrote:
>>
>>> +1
>>>
>>>
>>> On Fri, Mar 9, 2018, 3:10 PM Scott Aslan
Hi,
A common mistake with tls-toolkit is generating the keystore and
truststore for each node using a DIFFERENT NiFi CA cert.
If tls-toolkit standalone is executed against different output
directories, it may produce a different NiFi CA in each directory.
Please check both the S2S client and server
Hi,
If tls-toolkit was used to generate certificates, then there should be
server-1 and server-2 directories created and each contains
keystore.jks and truststore.jks.
```
sudo bash ./tls-toolkit.sh standalone -n 'server-1,server-2' -C 'CN=demo,
OU=nifi' -O -o ../security_output
```
Please
Hi Derek,
Thanks for sharing the files and detailed README. I was able to
reproduce the issue.
It seems there are two different points that can be improved in this scenario.
I've created two JIRAs:
CSVRecordReader should utilize specified date/time/timestamp format at
its
Hi Derek,
By looking at the code briefly, I guess you are using the ValidateRecord
processor with CSVReader and AvroWriter.
As you pointed out, it seems DataTypeUtils.isCompatibleDataType does
not use the date format the user defined at CSVReader.
Is it possible for you to share the following for us to
/85db60ca71f1825f543c18c62bf7c3fd
Thanks,
Koji
On Sat, Feb 10, 2018 at 10:37 AM, Koji Kawamura <ijokaruma...@gmail.com> wrote:
> Hi Adam,
>
> Thank you very much for reporting the performance issue.
> I created NIFI-4866 and started fixing the issue by moving the
> problematic code block to crea
Hi Adam,
Thank you very much for reporting the performance issue.
I created NIFI-4866 and started fixing the issue by moving the
problematic code block to createConnection.
After confirming that it addresses the performance issue, I will send
a PR to get it merged.
Koji
On Sat, Feb 10, 2018 at 9:25
Hi Sivaprasanna,
That's a good point.
I am not aware of any background reason for ACCOUNT_NAME to be a
sensitive property.
It seems that it has been a sensitive property since the beginning
when Azure blob processors were contributed.
Hi Dave,
If you can confirm the updated application.js is included in the war
file, then it sounds like a matter of Web browser caching. The old
application.js cached at the client Web browser may be used. A hard
reload (Ctrl + Shift + R for Chrome) may help if that's the case.
Thanks,
Koji
On
Hello,
ComplexRecordField is used to represent a child record which can have
multiple fields in it, i.e. embedded objects.
ComplexRecordField corresponds to the Record type in Avro terminology [1].
UnionRecordField represents a field whose data type can be one of the
defined types. It is often used to
+1 (binding) Release this package as nifi-1.5.0
Verified signature and hashes. Built with include-atlas profile.
mvn clean install -Pcontrib-check,include-grpc,include-atlas
Confirmed flows using NiFi Registry and ReportLineageToAtlas reporting
task, worked as expected.
On Thu, Jan 11, 2018 at
+1 binding
This is really awesome!
I confirmed the hashes and basic usage. Looks great for a 1st release.
Found a couple of minor possible improvements on the NiFi side, and
posted comments to NiFi PR 2219.
https://github.com/apache/nifi/pull/2219
Thanks for your work and effort, looking forward to
Hi Nadeem,
Did you try specifying an external directory instead of an exact jar
location, then putting multiple versions of the jar there? This way
DBCPConnectionPool can utilize multiple jars. This would provide a
similar effect to putting one in the NiFi lib dir.
If that doesn't work, an alternative
Hi Mike,
You might already have found it, but AvroTypeUtil.createSchema is
probably what you are looking for.
https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-extension-utils/nifi-record-utils/nifi-avro-record-utils/src/main/java/org/apache/nifi/avro/AvroTypeUtil.java#L341
; event when the FlowFile was created.
>
> Regards,
> Ben
>
> 2017-12-27 17:21 GMT+08:00 Koji Kawamura <ijokaruma...@gmail.com>:
>
>> I see, thanks. The easiest way to look at provenance events would be
>> by right clicking a processor instance you are interested in, th
Consume.java:201)
>> at
>> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:133)
>> at
>> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:672)
>> at
>> org.eclipse.jetty.
retry to execute the sql.
> Will this logic cause my sql to be executed twice?
>
> For the WaitBatch processor, I will take your approach to test individually
> to see if the WaitBatch processor could cause the FlowFile repository
> checkpointing failure.
>
> Regards,
> Ben
>
&g
:49 PM, Koji Kawamura <ijokaruma...@gmail.com> wrote:
> Hi Ben,
>
> The one thing that looks strange in the screenshot is the
> ExecuteSqlCommand having FlowFiles queued in its incoming connection.
> Those should be transferred to 'failure' relationship.
>
> Following exe
xecuteSqlCommand is the second processor
> and before the WaitBatch processor, even if the FlowFile repository
> checkpointing failure is caused by WaitBatch, could it lead to the
> processors before it to process a FlowFile multiple times? Thanks.
>
> Regards,
> Ben
>
> 2017-12-2
e temp table only
> if doesn't exist, I didn't fix this bug in this way
> right away is because I was afraid this fix could cover some other problems.
>
> Thanks.
>
> Regards,
> Ben
>
> 2017-12-27 11:38 GMT+08:00 Koji Kawamura <ijokaruma...@gmail.com>:
>
>>
ker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
>
> I also saw a lot of NiFi's exception like "ProcessException: FlowFile
> Repository failed to update", not sure if this is the reason the FlowFile
> got processed twice. Could you help to tak
t could support HA? Thanks.
>
> Regards,
> Ben
>
> 2017-12-26 18:34 GMT+08:00 Koji Kawamura <ijokaruma...@gmail.com>:
>
>> Hi Ben,
>>
>> As you found from existing code, DistributedMapCache is used to share
>> state among different processors, a
Hi Ben,
As you found from existing code, DistributedMapCache is used to share
state among different processors, and it can be used by your custom
processors, too.
However, I'd recommend avoiding such tight dependencies between
FlowFiles if possible, or minimizing the part of the flow that requires that
Hi Milan,
Thanks for your contribution! I reviewed the PR and posted a comment there.
Would you check that?
Koji
On Sat, Dec 23, 2017 at 7:15 AM, Milan Das wrote:
> I have logged a defect in NIFI. ListS3 is generation duplicate flows when
> S3 throughput is high.
>
>
>
>
ce I
> switched to use the WriteAheadProvenanceRepository implementation, up till
> now I haven't seen the error again.
> I will continue to check when the error might occur and post the logs here
> if needed. Once again thanks very much for your help.
>
> Regards,
> Ben
>
>
dn't find anything related to the unexpected shutdown in
> my logs, is there anything I could do to make NIFI log more verbose
> information to the logs?
>
> Regards,
> Ben
>
> 2017-12-25 14:56 GMT+08:00 Koji Kawamura <ijokaruma...@gmail.com>:
>
>> Hi Ben,
>>
commit the session but it's interrupted so the flowfile
>> still remains inside the original queue(like NIFI went down)?
>>
>> If you need to see the full log file, please let me know, thanks.
>>
>> Regards,
>> Ben
>>
>> 2017-12-25 13:51 GMT+08:00 Koji
indicating the table
already exists.
I tried to look at the logs you attached, but attachments do not seem
to be delivered to this ML. I don't see anything attached.
Thanks,
Koji
On Mon, Dec 25, 2017 at 1:43 PM, Koji Kawamura <ijokaruma...@gmail.com> wrote:
> Hi Ben,
>
> Just a quick
Hi Ben,
Just a quick recommendation for your first issue, 'The rate of the
dataflow is exceeding the provenance recording rate' warning message.
I'd recommend using WriteAheadProvenanceRepository instead of
PersistentProvenanceRepository. WriteAheadProvenanceRepository
provides better
Hi Sreejith,
Do you still have the issue? Unfortunately the attached screenshot was
dropped, so I couldn't see what error you got.
I tried to reproduce the issue, but EvaluateXPath runs fine with your
example data regardless having whitespace or not.
Here is a flow template that I used to confirm:
Hi V,
Would you elaborate on what you mean by a duplicate response?
Does it mean when a failed FlowFile at the 1st request is routed back
to the same InvokeHTTP, sent as the 2nd request, and if the 2nd
request succeeds, you get TWO duplicated output FlowFiles for the
Response relationship?
If your
Thanks Aldrin for updating the RC again, and to all devs who
contributed to the MiNiFi 0.3.0 release!
I confirmed:
- Hashes are correct
- MiNiFi Windows Service works
- MiNiFi Toolset works, NiFi template -> MiNiFi config yml
- NiFi template -> C2 server -> MiNiFi PullHttpChangeIngestor works nicely
+1
Hi Aldrin,
I'm verifying the updated RC now. It's working nicely.
Just a question before casting my vote:
how was the source zip file created? I am seeing that minifi.exe and
minifiw.exe exist in
Hi,
If the script encounters a while(1) loop when it is called from NiFi,
then NiFi cannot do anything until the loop ends.
To achieve what you described (keep using the same instance of a
script), I'd recommend implementing an API endpoint in that script,
e.g. a simple REST endpoint to receive
Hi Mayank,
I've tried to reproduce the issue, but to no avail so far.
PublishKafka_0_10 uses the specified Max Request Size as expected and
I got the exception if incoming message size exceeds the configured
size.
And I was able to publish messages whose size is 2.08MB with 10MB Max
Request Size.
Peter, Matt,
If the goal is sharing org.apache.nifi.csv.CSVUtils among modules, an
alternative approach is moving CSVUtils to nifi-standard-record-util
and adding an ordinary JAR dependency from nifi-poi-processors. What
do you think?
Thanks,
Koji
On Mon, Oct 16, 2017 at 12:17 PM, Peter Wicks (pwicks)
Hi Yuri,
I've added you to the JIRA contributor list; you should be able to
assign issues to yourself now.
Thanks for your contributions to enhance NiFi UX!
Koji
On Wed, Oct 11, 2017 at 3:07 AM, Yuri <1969yuri1...@gmail.com> wrote:
> Hello,
> I'd like to be able to assign JIRA issues to myself.
>
> My JIRA
+1 (binding) Release this package as nifi-1.4.0
Verified hashes, local build was successful on OS X, confirmed S2S
communication with older versions.
On Sat, Sep 30, 2017 at 9:27 AM, Andy LoPresto wrote:
> +1 (binding)
>
> Build environment: Mac OS X 10.11.6, Java
Hi Tina,
Glad to hear you were able to get the schema.
The read size in ExecuteSQL is smaller because the data is serialized
with Avro, in which data can be written efficiently, and it gets
bigger after ConvertJSONToSQL because each FlowFile has a SQL
statement in it.
Which version of Apache NiFi are you using? If
Hi Tina,
I tested ExecuteSQL -> ConvertAvroToJSON -> ConvertJSONToSQL -> PutSQL
flow with my SQL Server.
It worked fine, I was able to copy rows from a table to another.
One thing to note is that since you're using two different databases,
you need to specify 'Catalog Name' and 'Schema Name' at
The list of reserved keywords:
https://docs.microsoft.com/en-us/sql/t-sql/language-elements/reserved-keywords-transact-sql
On Tue, Sep 26, 2017 at 9:31 AM, Koji Kawamura <ijokaruma...@gmail.com> wrote:
> Hi Tina,
>
> I wonder if the column name is the cause of that issue
Hi Tina,
I wonder if the column name is the cause of that issue, because 'date'
is a reserved keyword.
I wonder whether ConvertJSONToSQL can wrap those columns with square
brackets as shown in your example query.
If possible, can you try changing the column name to a different one
such as