Keith,
Would you be able to share your sample template with us as an attachment, a
GitHub Gist, or something similar?
Thanks,
James
On Wed, Jun 8, 2016 at 3:36 PM, Keith Lim wrote:
> Hi Joe,
>
> I created a simple workflow with EvaluateXPath and managed to repro the
> issue. Here is the template. You need to remove the .txt extension.
>
> Note: I didn't bother to put in the correct flowfile xpath content for
> the processor to evaluate to success result.
>
>
> Thanks,
> Keith
>
>
>
>
>
>
> --
> Thanks,
> Keith
> ------
> *From:* James Wing <jvw...@gmail.com>
> *Sent:* Thursday, June 09, 2016 10:47:23 PM
> *To:* users@nifi.apache.org
> *Subject:* Re: Failure when running a workflow created from a template
> from another NiFi version.
>
> I agree with you Mat
> show up.
>
>
> Thanks,
> Keith
>
>
> --
> *From:* James Wing <jvw...@gmail.com>
> *Sent:* Wednesday, June 08, 2016 9:23:22 PM
> *To:* users@nifi.apache.org
> *Subject:* Re: Failure when running a workflow created from a templa
Igor,
One way would be to format both dates as strings (like "20160609" in your
HDFS paths) first, then compare the two strings for equality. In a
RouteOnAttribute expression:
${now():format("yyyyMMdd"):equals(${entryDate:format("yyyyMMdd")})}
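For reference, the same day-equality check can be sketched outside NiFi in plain JavaScript (function names are illustrative, not part of NiFi):

```javascript
// Format a Date as "yyyyMMdd", mirroring format("yyyyMMdd") in the
// NiFi expression language.
function toYyyyMmDd(date) {
  var y = date.getFullYear();
  var m = String(date.getMonth() + 1).padStart(2, "0");
  var d = String(date.getDate()).padStart(2, "0");
  return "" + y + m + d;
}

// True when entryDate falls on today's calendar date, which is the
// routing condition the expression above evaluates.
function isToday(entryDate) {
  return toYyyyMmDd(new Date()) === toYyyyMmDd(entryDate);
}
```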
If your goal is to merge the records into larger
> on nifi-0.7.0-SNAPSHOT with java 1.8.0_91.
>
>
> Thanks,
> Keith
> ------
> *From:* James Wing <jvw...@gmail.com>
> *Sent:* Wednesday, June 08, 2016 4:59 PM
> *To:* users@nifi.apache.org
> *Subject:* Re: Failure when running a workflow create
Kavinaesh,
I believe your "parallel" flow actually generates three separate flow files
from the three match outputs of EvaluateJSONPath "Get File Name alone". If
you stop the AttributesToJSON processor "Create Json" and examine the
contents of the three input queues, I believe you will find they
Michael,
You mentioned that GetSFTP did not work, are you aware of FetchSFTP?
FetchSFTP will accept an incoming flowfile. The typical NiFi pattern is
for a List* processor to feed into a Fetch* processor that accepts incoming
flowfiles, as opposed to Get* processors that originate flowfiles
Kevin,
I was able to reproduce this from your description, and what I found was
that the path "\\servername\sharename" is interpreted as a local relative
path and my NiFi wrote the file to local disk under
${NIFI_HOME}/\\servername\sharename/samplefile. That is why PutFile
succeeds.
Can you
Uwe,
I do not believe NiFi supports simple username/password authentication
today. But I have been working on a similar problem, and created a ticket
for this (https://issues.apache.org/jira/browse/NIFI-1614). Any input you
might be able to offer on the use case would be helpful.
Thanks,
You may also need to pass the Server header, I believe some server-side UI
code uses this to format client-side resource locations.
proxy_pass_header Server;
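For context, that directive typically sits with the other proxy settings in a location block along these lines (a sketch only; the host name and port are placeholders, not from this thread):

```nginx
location /nifi {
    # Forward requests to the NiFi instance (placeholder host/port)
    proxy_pass https://nifi-host:8443/nifi;
    proxy_set_header Host $host;
    # Pass the upstream Server header through to the client
    proxy_pass_header Server;
}
```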
On Mon, Mar 14, 2016 at 8:54 AM, michail salichos <
michail.salic...@gmail.com> wrote:
> Hello,
>
> I tried adding ssl param
I am able to see the same EvaluateXPath issue Guillaume Pool reported on
NiFi 0.6.0. I created a template gist at
https://gist.github.com/jvwing/7e7948d8eb5ad3643f38966b4ba3ce2e that
illustrates this for me. You don't even need to run the whole thing, just
try to start the EvaluateXPath
I believe you have a couple of problems:
1.) The XML needs to be well-formed. In your first example, the
CashReportData element is not properly closed, same with DrawerCount in the
second example.
2.) The XML has a namespace, which you would need to reference in your
XPath. I'm not sure how to
Expression language support must be configured in the PropertyDescriptor
for all properties, including dynamic properties. Your processor should
override getSupportedDynamicPropertyDescriptor() to provide dynamic
properties. Take a look at the UpdateAttribute processor for an example:
@Override
Yes, you can absolutely install NiFi as a service. The commands vary a bit
depending on the flavor of Linux and the init subsystem, but I believe the
following will work for CentOS:
# Install NiFi as a service
nifi.sh install
# Set NiFi to automatically start at boot
chkconfig nifi on
# Start NiFi now
service nifi start
Igor,
The options that control log file rollover are in your NiFi's
conf/logback.xml file. Around line 24, you will find a section like the
following for the nifi-app.log:
<fileNamePattern>./logs/nifi-app_%d{yyyy-MM-dd_HH}.%i.log</fileNamePattern>
<maxFileSize>100MB</maxFileSize>
Have you looked at a combination of ListSFTP -> FetchSFTP? I believe
ListSFTP has built-in support for tracking recent files, and it might
satisfy your use case. If not, you can certainly filter the listed files
by the "filename" attribute before calling FetchSFTP.
On Fri, Apr 15, 2016 at 7:40
Sai,
Welcome to NiFi! This sounds like an "interesting" error. Would you
please share the version of NiFi you are using? Also, can you get the
complete stack trace of the error from NIFI_HOME/logs/nifi-app.log? The
error suggests that NiFi cannot connect to SQS at all, and has not
proceeded
Michael,
I believe the general issue you are experiencing in NiFi is the difference
between authentication and authorization. The UI is allowing you to
request and grant authorization (roles), but it is not providing
authentication (a password). In contrast, the login form is used for
Cheryl,
I'm not aware of any good explanation for that, or open issues on this
topic. Can you share more detail on the OS, JVM, NiFi versions?
Thanks,
James
On Thu, Feb 2, 2017 at 12:14 PM, Cheryl Jennings wrote:
> Hi Everyone,
>
> I had nifi running overnight,
Mike,
I am not familiar with a specialized connector for MS Dynamics, but NiFi
has good general-purpose support for databases via JDBC and HTTP(S) web
services (see InvokeHTTP). It is likely that NiFi can do at least some
interaction with Dynamics using only the built-in processors.
But NiFi is
MergeContent's "Defragment" merge strategy might be enough. It is not as
full-featured as the Wait/Notify design Andy suggested, but it might be
something you can use now.
Defragment reads attributes from the flowfiles to group your content
together more precisely than a correlation attribute
Carlos,
Welcome to NiFi! I believe the Kite dataset is currently the most direct,
built-in solution for writing Parquet files from NiFi.
I'm not an expert on Parquet, but I understand columnar formats like
Parquet and ORC are not easily written to in the incremental, streaming
fashion that NiFi
Carl,
I think logging still works in ExecuteScript. But starting in NiFi 1.0,
the default log level for processors was changed from 'INFO' to 'WARN' to
reduce log volumes. To receive INFO messages from ExecuteScript, you will
want to do one or both of two things:
1.) Set the bulletin level of
Ashmeet,
Are you still having trouble with HandleHttpRequest? The file data should
be included in the flowfile content for flowfiles generated by
HandleHttpRequest. Are you seeing content, and the number of bytes you
expect?
However, the content will probably be packaged as multipart/form-data
>>
>>
>> Following is need:
>>
>>
>>
>> 1. READ CSV file from HDFS
>>
>> 2. Execute python script – reads CSV file and produces XML output
>> file – example /tmp/test.xml .
>>
>> 3. I need to process the
file – example /tmp/test.xml .
>
> 3. I need to process the /tmp/test.xml file using SplitXML
> processor
>
> 4. Put these into HDFS
>
>
>
>
>
> Thanks,
>
> Ram
>
> *From:* James Wing [mailto:jvw...@gmail.com]
> *Sent:* Monday, August 29, 2016 1
Manish, you are absolutely right to back up your flow.xml.gz and conf
files. But I would carefully distinguish between using these backups to
recreate an equivalent new NiFi, versus attempting to reset the state of
your existing NiFi. The difference is the live data in your flow, in the
did this with nifi 0.7.0.
>
> I'd like to be able to use tools like salt or puppet to pre-position
> templates.
>
> Thanks,
>
> Dan
> M: 443-992-2848
> GV: 410-861-0206
>
> On Sep 14, 2016, at 3:02 PM, James Wing <jvw...@gmail.com> wrote:
>
> Manish, you are absolut
Niraj,
Is this message displayed as a validation warning on the GetDynamoDB
processor? GetDynamoDB will retrieve content from DynamoDB, but it
requires an input flowfile to define which item keys to look up.
GetDynamoDB will not enumerate the items in a table like a List* processor.
Thanks,
Selvam,
Can you describe what is happening when you run FetchS3Object? Does the
processor start? Are flowfiles routed to the failure relationship? Are
there error messages?
To test your configuration, I recommend using ListS3 to get flowfiles with
attributes referencing the S3 objects, then
Dan,
Would you please share the version of NiFi you are using? Also, would you
please look in logs/nifi-app.log for the stack trace of the exception and
any nested exceptions?
Thanks,
James
On Wed, Aug 17, 2016 at 5:05 PM, dgm wrote:
> I’m just staring to use nifi and
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [na:1.7.0_101]
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [na:1.7.0_101]
> at java.lang.Thread.run(Thread.java:745)
>> *From: *<dgmorri...@gmail.com>
>> *Date: *Thursday, August 18, 2016 at 7:23 AM
>> *To: *<users@nifi.apache.org>
>> *Subject: *Re: PutS3Object Error
>>
>>
>>
>> Thanks James. So when you say the JRE security provider, are you
>> referring to bouncycastle
Koustav,
How are you running the Sqoop job? Can you share some code? Python is
sequential by default, but your Sqoop job might run asynchronously. I
believe the answer depends on your code (or library) not only starting the
Sqoop job, but polling for its status until it is complete.
Thanks,
Brett,
The default provenance store, PersistentProvenanceRepository, does require
I/O in proportion to flowfile events. Flowfiles with many attributes,
especially large attributes, are a frequent contributor to provenance
overload because attribute state is tracked in provenance events. But
e Flow didn't actually work.
> >
> > Thanks,
> > Pablo.
> >
> > On Wed, Oct 26, 2016 at 3:59 PM, James Wing <jvw...@gmail.com> wrote:
> >>
> >> Pablo,
> >>
> >> Did you by any chance start the GetDynamoDB processor first without the
Pablo,
Did you by any chance start the GetDynamoDB processor first without the
credentials, then stop, configure credentials, and restart? I suspect
there might be a bug where GetDynamoDB caches the client even through
stopping and starting the processor.
To test this theory, you might try
Andreas,
I'm not familiar with PutElasticSearch, but the code you point to does
appear strange. It doesn't look like there are any unit tests where
multiple results are returned from ElasticSearch with one or more failures
that would exercise this case.
At line 329:
// All items are returned
John,
When you configure the PutS3Object processor, you should configure the
Bucket Name as just 'nifibucket' (without quotes), and set the Region to
'eu-west-1'. You do not need the long-form '
nifibucket.s3-eu-west-1.amazonaws.com', I think that might be what is
confusing it.
Thanks,
James
The short answer is no, PutS3Object does not currently support a direct
equivalent of the AWS CLI's --no-verify-ssl option. There is an option to
provide your own SSLContextService, if you need to establish trust with
your proxy server (maybe, I'm not sure).
>From the screenshot and the error message, I interpret the sequence of
events to be something like this:
1.) ListS3 succeeds and generates flowfiles with attributes referencing S3
objects, but no content (0 bytes)
2.) FetchS3Object fails to pull the S3 object content with an Access Denied
error,
n and the error
> persists.
>
> The key and secret key are set correctly too. Do I need to modify my
> /etc/hosts perhaps?
>
> Thanks
>
> John
>
> On Nov 1, 2016 20:53, "James Wing" <jvw...@gmail.com> wrote:
>
>> John,
>>
>> When you co
On Wed, Nov 2, 2016 at 5:31 AM, John Burns <jzbu...@gmail.com> wrote:
> Hi James
>
> I seemed to have solved this by installing the AWS command line tools, I
> wasn't aware I needed to do that.
>
> Many thanks for your help.
>
> John
>
> On Tue, Nov 1, 2016 at 10
Rai,
The GetDynamoDB processor requires a hash key value to look up an item in
the table. The default setting is an Expression Language statement that
reads the hash key value from a flowfile attribute,
dynamodb.item.hash.key.value. But this is not required. You can change it
to any attribute
by one. Do I achieve it through the expression
> language? if I write an script to do that, how do I pass it to my processor?
> Thanks
> Niraj
>
> On Thu, Oct 13, 2016 at 1:42 PM, James Wing <jvw...@gmail.com> wrote:
>
>> Rai,
>>
>> The GetDynamoDB
Brian,
Did you add entries for the node DNs in the conf/authorizers.xml file?
Something like:
<property name="Node Identity 1">CN=node1.nifi, ...</property>
<property name="Node Identity 2">CN=node2.nifi, ...</property>
...
Thanks,
James
On Wed, Dec 7, 2016 at 8:28 AM, Brian Jeltema wrote:
> I’m trying to create my first cluster using NiFi 1.1.0. It’s a
Nick,
Are there any errors shown in the Javascript console of your browser
developer tools?
Thanks,
James
On Sun, Dec 11, 2016 at 8:49 AM, Nicholas Hughes <
nicholasmhughes.n...@gmail.com> wrote:
> I'm seeing some interesting behavior in a secured NiFi cluster, and I
> can't seem to track
s Hughes <
> nicholasmhughes.n...@gmail.com> wrote:
>
>> James,
>>
>> I just have some 304 "not modified" codes. No errors.
>>
>>
>>
>> -Nick
>>
>>
>> On Sun, Dec 11, 2016 at 1:35 PM, James Wing <jvw...@gmail.com>
Which browser and version are you using?
On Sun, Dec 11, 2016 at 12:13 PM, Nicholas Hughes <
nicholasmhughes.n...@gmail.com> wrote:
> Each of the three objects in the nodes array has a roles attribute as an
> array.
>
> -Nick
>
>
> On Sun, Dec 11, 2016 at 2:51 PM, J
Nick,
You could use ExecuteScript to manipulate the JSON. The sample ECMAScript
below assumes that you already have the transformed timestamp as an
attribute "timestamp":
var flowFile = session.get();
if (flowFile !== null) {
var StreamCallback =
Uwe,
Can you share a screenshot of the provenance search criteria? If you try
the search the other way around, starting with all provenance events and
then filtering down to the time and processor, does that change anything?
Thanks,
James
On Fri, Mar 24, 2017 at 8:39 AM, Uwe Geercken
Steve,
The inferred schemas can be helpful to get you started, but I recommend
providing your own Avro schema based on your knowledge of what should be
guaranteed to downstream systems. If you want to pass untyped data, you
can't really beat JSON. Avro schema isn't so bad, honest.
As part of
le processor although it is running.
>
> Rgds,
>
> Uwe
>
> *Gesendet:* Freitag, 24. März 2017 um 19:27 Uhr
> *Von:* "James Wing" <jvw...@gmail.com>
> *An:* users@nifi.apache.org
> *Betreff:* Re: Nifi - Data provenance not reporting anymore
> Uwe,
Geercken <uwe.geerc...@web.de> wrote:
> James,
>
> where should I put the screenshot? This mail group does not allow to send
> graphics.
>
> Rgds,
>
> Uwe
>
> *Gesendet:* Freitag, 24. März 2017 um 18:27 Uhr
> *Von:* "James Wing" <jvw...@gmail.com&
Austin,
I think you are on the right track with RouteOnContent. Any chance you can
share a sample CSV header, the settings of your RouteOnContent processor,
including the regex?
Thanks,
James
On Thu, Mar 16, 2017 at 11:14 AM, Austin Heyne wrote:
> Hi,
>
> I have a set of
Jim,
You could use ListS3 to get existing S3 keys, then parse out the
'directories', and put the directories in a key/value store for a lookup
(like DistributedMapCache). But you might also be able to maintain the
lookup just with your metadata attributes in NiFi alone.
Thanks,
James
On Fri,
handles
> are not eliminated? The initialize() runs at Start only, but if it has been
> stopped and started one or more times prior it inherits all that previous
> baggage. Is that right?
>
> Thanks very much.
>
> Jim
>
> On Tue, Apr 4, 2017 at 2:18 PM, James Wing <jvw...@gmail
had thought about your first option there --
>> expanding the property list to include the Assume Role properties. I would
>> agree that #2 is a bit more robust and will do some more digging there. As
>> usual, it's a feature that I need yesterday and will likely take the path
>
James,
I suspect your call to self.logger.addHandler(fh) is cumulatively adding to
your log results as initialize() is called again. Can you define the log
file and formatting in your NiFi's conf/logback.xml (no restart required)?
Then you can safely call getLogger() and access the shared
EvaluateJSONPath does not support expression language in properties. But
you could use an ExecuteScript processor to read the "97" from an attribute
and use it to index the JSON content.
Thanks,
James
On Wed, Apr 12, 2017 at 6:14 AM, Guillaume Pool wrote:
> Hi,
>
>
>
> Has
Ravi,
Can you share a sample of the data you are splitting and the settings of
the SplitText processor? Is there more error stack trace information?
Thanks,
James
On Thu, Apr 13, 2017 at 3:42 PM, Ravi Papisetti (rpapiset) <
rpapi...@cisco.com> wrote:
> Hi,
>
>
>
> Using Apache Nifi 1.1.2
Bill,
I think the JSON path expression you are looking for is just
$
Which is a reference to the root object array. This should result in three
splits like {"provider_study_id":1001}
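In plain JavaScript terms, what splitting on the root path amounts to is (a sketch; the provider_study_id field is taken from this thread's example):

```javascript
// The flowfile content: a root-level JSON array of three objects.
var content =
  '[{"provider_study_id":1001},{"provider_study_id":1002},{"provider_study_id":1003}]';

// Splitting on the root path "$" yields one output per array element.
var splits = JSON.parse(content).map(function (obj) {
  return JSON.stringify(obj);
});
// splits[0] is '{"provider_study_id":1001}'
```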
Thanks,
James
On Fri, Apr 14, 2017 at 7:43 AM, Bill Duncan wrote:
> Hi All,
>
> I am
Mikhail,
This sounds like a known issue with the flow.xml schema validation, where
new elements have been added to NiFi that have not been updated in the XML
schema. I believe they should be interpreted as warnings, rather than
errors.
Schema for flow.xml outdated
Mikhail,
Which version of NiFi are you running?
Thanks,
James
On Fri, Mar 10, 2017 at 9:29 AM, James Wing <jvw...@gmail.com> wrote:
> Mikhail,
>
> This sounds like a known issue with the flow.xml schema validation, where
> new elements have been added to NiFi that hav
Jim,
The good news is that you can delete the entire ./provenance_repository and
NiFi will start up with a fresh clean one. Of course, that won't solve
your analysis challenge.
You might want to play around with some of the provenance config entries to
limit the repository, like
The NiFi distribution does not ship with support for username/password
authentication outside LDAP and Kerberos. But NiFi supports a pluggable
model for login identity providers, so you can provide your own
implementation.
Lauren,
It sounds like you are doing the right troubleshooting steps. A few more
ideas off the top of my head:
- When you tested with the s3 cli, did you use the same credentials,
from the same machine NiFi is running on? The CloudTrail events are
written by AWS, so the ownership and
Tim,
I think your PropertyDescriptor should reference the controller service
interface type (AWSCredentialsProviderService.class), rather than a
concrete implementing class type. For example, in
AbstractAWSCredentialsProviderProcessor:
public static final PropertyDescriptor
Laurens,
ListS3 tracks S3 object keys within your bucket+prefix. ListS3 primarily
works on a last read timestamp, but tracks multiple keys when the
timestamps are equal. Directories are something of a hazy concept in S3.
Thanks,
James
On Tue, Aug 8, 2017 at 8:22 AM, Laurens Vets
Ben,
Processors typically get the DBCPConnectionPool controller service only
through the DBCPService interface. I believe it is possible to cast the
interface to the DBCPConnectionPool class. However, DBCPConnectionPool
does not expose the property values you are looking for through public
Ben,
In standalone mode, the state data is written to the NIFI_HOME/state
directory. It is OK, and very normal, for a standalone NiFi to use
processors that specify the CLUSTER scope. Without a cluster state store
like Zookeeper, NiFi gracefully falls back to using the local file system.
A
Uwe,
Please do create a JIRA. I agree with you that SplitRecord should provide
compatible fragment.* attributes like the older split processors.
Partly for a consistent user experience, and partly for compatibility
with MergeContent and other processors that read the fragment.* attributes.
Jeremy,
Have you tried "nifi.sh run" instead of start/stop? I'm not an expert on
supervisord, but it seems like it might be a better fit. Also, was
anything logged in the bootstrap log?
Thanks,
James
On Tue, May 30, 2017 at 8:28 AM, Jeremy Taylor
wrote:
>
>
>
>
>
Are you seeing errors, or just unexpected results? ListS3 only returns
references to objects on S3, but FetchS3Object should return the object
content. I recommend looking at the output of FetchS3Object to make sure
it is right (in size and content type) before trying to unzip it.
Thanks,
Mike,
I believe templates include controller services by default, as long as one
or more of the processors in the template references the controller
service. Did that not happen for you?
Thanks,
James
On Thu, Jun 8, 2017 at 9:59 AM, Mike Thomsen wrote:
> Is it
I do not believe NiFi has any specific features for Zeppelin yet, but it is
possible to write custom Zeppelin code paragraphs that communicate with the
NiFi API to pull data or inspect flow status. For an example, I recommend
Pierre Villard's US Presidential Election: tweet analysis using
Jim,
Your description of different success by user makes me think of file
permissions. Does NiFi have equal access to those files? As far as
general troubleshooting, I recommend the following:
1.) Copy/pasting your ListFile and FetchFile separately so you can wrestle
with their properties,
Manish,
I have never tried to connect NiFi with MSMQ, but NiFi supports enough
connection interfaces to make something possible. An ideal solution would
be a JMS wrapper for MSMQ so you could use NiFi's ConsumeJMS and PublishJMS
processors. JMS is a typical queue interface for Java apps like
There is a 64 KB limit on attributes. If you are processing a lot of
flowfiles with large attributes, you may not get optimal NiFi performance.
It is usually better to store data of that size in the flowfile content,
and only extract smaller, select data points into attributes.
Thanks,
James
Eric,
You can use the EvaluateJsonPath processor to extract the SNS message body
using a JSON path of "$.Message". As part of evaluating the path, it will
un-escape the stringified content of Message and return the enclosed JSON
content.
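The un-escaping step can be illustrated in plain JavaScript (a sketch; the inner fields are invented for illustration, and only the envelope's Message field comes from the standard SNS notification format):

```javascript
// An SNS notification carries its payload as a JSON string inside the
// "Message" field of the envelope.
var raw =
  '{"Type":"Notification","Message":"{\\"orderId\\":42,\\"status\\":\\"shipped\\"}"}';

var envelope = JSON.parse(raw);
var message = envelope.Message; // still a stringified JSON document

// Parsing the string recovers the enclosed JSON content, which is what
// evaluating the path $.Message returns.
var inner = JSON.parse(message);
```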
If you want this to be in a new, separate flowfile from
James
On Wed, Dec 13, 2017 at 11:32 AM, Aruna Sankaralingam <
aruna.sankaralin...@cormac-corp.com> wrote:
> James, I am sorry I am not sure if I follow that. Could you please give an
> example?
>
>
>
> *From:* James Wing [mailto:jvw...@gmail.com]
> *Sent:* Wednesday, December
the
> sub folder inside the main folder.
>
> From: James Wing [mailto:jvw...@gmail.com]
> Sent: Wednesday, December 13, 2017 1:34 AM
> To: users@nifi.apache.org
> Subject: Re: ListS3 Processor Error
>
> Are you able to list the bucket with the AWS CLI (aws s3 ls)? It
Are you able to list the bucket with the AWS CLI (aws s3 ls)? It can be
helpful to compare performance between NiFi and the AWS CLI, especially if
you are able to do so from the same machine, with the same permissions, and
as similar bucket and prefix settings as you can manage.
In the
> Senior Data Engineer
> SemanticBits, LLC
> jennifer.kissin...@semanticbits.com
> 603-290-1711
>
> On Sun, Nov 19, 2017 at 1:18 PM, James Wing <jvw...@gmail.com> wrote:
>
>> Jenni,
>>
>> Thanks for reporting this. I believe you are correct that expression
>> lang
Neil,
I'm not aware of this problem for ListS3. I do not suggest there are no
issues, rather that many users might not notice or have come to accept some
variance in the accuracy of ListS3. If you can persuade ListS3 to do it
again, that would be great :).
We did recently hear a report of
Tim,
It is most likely that provenance data is being collected, and the search
UI has a few quirks that might explain your experience:
* Provenance dates are in UTC. If you are not careful with your filter,
you can end up fighting over the time zone.
* Processors emit provenance events as
Austin,
Would you please check in your logs/nifi-app.log file to see if there is a
more complete stack trace for the error(s)? I'm guessing/hoping there may
be another, underlying cause.
Does it do this for all flowfiles, or only some?
Thanks,
James
On Thu, Jan 18, 2018 at 11:25 AM, Austin
ogs. any specific place
>
> On Thu, Jan 18, 2018 at 3:33 PM, James Wing <jvw...@gmail.com> wrote:
>
>> Austin,
>>
>> Would you please check in your logs/nifi-app.log file to see if there is
>> a more complete stack trace for the error(s)? I'm guessing/hoping t
ts what i need. My query is pretty robust and i am not
> sure if it will be able to be implemented in that
>
> On Tue, Feb 6, 2018 at 9:30 AM, James Wing <jvw...@gmail.com> wrote:
>
>> Austin,
>>
>> Have you tried QueryDatabaseTable? For some databases and table
Austin,
Have you tried QueryDatabaseTable? For some databases and table schema, it
provides a shortcut to querying for the recently changed records, as long
as you have a "maximum value column" to use.
Georg,
NiFi is quite frequently used for ingesting data into Hadoop/HDFS, and NiFi
can absolutely retry on errors. However, I think of this more of a flow
design and process modeling question than a retry or error handling
capability question.
Orchestration tools start with the concept of a
Paula,
PutS3Object is capable of using S3's multi-part upload feature to upload
very large files, and it checks for the status of multi-part uploads
periodically in the course of normal operation. Ideally, your AWS
credentials would include the s3:ListBucketMultipartUploads permission to
allow
You can use MergeContent to merge more than 10,000 flowfiles, but you may
experience slower performance with very large numbers of files in a single
merge. The recommended configuration is to use two MergeContent processors
in sequence. The first MergeContent would merge groups of individual
Gautier,
I'm not certain exactly what is wrong. But as an experiment, please try
setting the "Max number of Bins" to be greater than 1 (2 might be enough).
My suspicion is that when you are using 100% of the allowed bins, the
processor attempts to process the oldest bin every time. Because you
I recommend checking that there is no whitespace around the keys. This can
sometimes be introduced by copy/paste.
On Wed, Feb 28, 2018 at 7:27 AM, Paula wrote:
> Thanks again! I selected a new PutS3Object and added only Access Key and
> Secret Key (+ Object Key,
I believe you need something like
${my_forcast:replace("'","\\'")}
Using two backslashes \\. The backslash is also used as an escape
character in the expression language string, so you need two consecutive
backslashes to make one literal backslash in the output.
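The effect of the doubled backslash can be sketched in plain JavaScript (the attribute name my_forcast is from this thread; the sample value is invented):

```javascript
// The attribute value, containing a single quote.
var my_forcast = "it's going to rain";

// In the source, "\\'" is two characters: one escape level is consumed
// by the string literal itself, leaving a single literal backslash
// before each quote in the output.
var escaped = my_forcast.replace(/'/g, "\\'");
// escaped now contains: it\'s going to rain
```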
On Thu, Apr 5, 2018 at 7:31 AM,
Simon,
I'm afraid NiFi does not yet support server-side encryption with a specific
KMS key ID, only the more basic SSE. This has certainly come up before,
you are definitely not alone in wishing for this feature. There is a JIRA:
NIFI-4256 Add support for all AWS S3 Encryption Options
Your solution sounds very normal and appropriate to me. Is it performing
slowly or causing you problems?
Thanks,
James
On Mon, Apr 23, 2018 at 2:37 PM, Laurens Vets wrote:
> Hello list,
>
> I'm using NiFi to ship JSON formatted data around. However, I want NiFi to
> drop
Can you share the full stack trace of the error(s) that are logged in the
nifi-app.log file?
On Mon, Feb 26, 2018 at 4:53 AM, Paula wrote:
> Hi James,
>
> thanks for your message!
>
> Unfortunately I can't load the file to S3 using NiFi's PutS3Object. It just
> sends