I believe you would have to use multiple processors to update multiple
attributes.
> On Oct 2, 2018, at 8:13 AM, Krish Kumar wrote:
>
> Hello,
>
> I need some advice about the nifi PutDynamodb processor.
>
> I have the following table:
> ID | forename | surname
>
> The Id is the unique key f
For now it’s absolutely basic:
> One table with 2 elements:
> - Artist
> - SongTitle
>
> I want to get all song titles from one artist
>
> I will look on this Java code
>
> Sent from my iPhone
>
> On Sept 27, 2018, at 18:26, James Wing wrote:
>
> If you do not
If you do not mind looking at Java code, there are some sample property
settings in the integration tests that may help you. For example,
testStringHashStringRangePutGetDeleteGetSuccess
https://github.com/apache/nifi/blob/e3155a8a4ba49bb6c4f324ebf090a92d6dd97389/nifi-nar-bundles/nifi-aws-bundle/n
If you wish to consume the latest CloudTrail events as a stream, have you
considered subscribing to CloudTrail notifications via SNS and SQS? NiFi
can read the SQS messages with the GetSQS processor, and extract the S3 key
with EvaluateJsonPath. In contrast, I expect ListS3 would be more useful
f
Gautier,
I'm not certain exactly what is wrong. But as an experiment, please try
setting the "Max number of Bins" to be greater than 1 (2 might be enough).
My suspicion is that when you are using 100% of the allowed bins, the
processor attempts to process the oldest bin every time. Because you a
Tim,
It is most likely that provenance data is being collected, and the search
UI has a few quirks that might explain your experience:
* Provenance dates are in UTC. If you are not careful with your filter,
you can end up fighting with the time zone.
* Processors emit provenance events as flowfi
Your solution sounds very normal and appropriate to me. Is it performing
slowly or causing you problems?
Thanks,
James
On Mon, Apr 23, 2018 at 2:37 PM, Laurens Vets wrote:
> Hello list,
>
> I'm using NiFi to ship JSON formatted data around. However, I want NiFi to
> drop certain data when som
Simon,
I'm afraid NiFi does not yet support server-side encryption with a specific
KMS key ID, only the more basic SSE. This has certainly come up before;
you are definitely not alone in wishing for this feature. There is a JIRA:
NIFI-4256 Add support for all AWS S3 Encryption Options
https://i
I believe you need something like
${my_forcast:replace("'","\\'")}
Using two backslashes \\. The backslash is also used as an escape
character in the expression language string, so you need two consecutive
backslashes to make one literal backslash in the output.
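For example (the sample value is invented for illustration): if my_forcast
contained it's raining, the expression above would output it\'s raining; the
two backslashes in the expression collapse to the single literal backslash
you see in the output.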
On Thu, Apr 5, 2018 at 7:31 AM,
I recommend checking that there is no whitespace around the keys. This can
sometimes be introduced by copy/paste.
On Wed, Feb 28, 2018 at 7:27 AM, Paula wrote:
> Thanks again! I selected a new PutS3Object and added only Access Key and
> Secret Key (+ Object Key, Bucket and Region). Unfortunate
From the screenshot of PutS3Object properties, it appears that you have
configured the Access Key, Secret Key and AWS Credentials Provider
Service. You do not need all three, and it is likely that they are not
working together.
You should use only the Access Key and Secret Key together, or only
How are you configuring S3 permissions on the processor? Is that the same
method as you used in your testing with Cyberduck?
Will your permissions allow you to test the same credential with ListS3 or
FetchS3Object for comparison?
On Tue, Feb 27, 2018 at 12:14 AM, Paula wrote:
> Hi, here is the
Can you share the full stack trace of the error(s) that are logged in the
nifi-app.log file?
On Mon, Feb 26, 2018 at 4:53 AM, Paula wrote:
> Hi James,
>
> thanks for your message!
>
> Unfortunately I can't load the file to S3 using NiFi's PutS3Object. It just
> sends the file to failure path and
Georg,
NiFi is quite frequently used for ingesting data into Hadoop/HDFS, and NiFi
can absolutely retry on errors. However, I think of this as more of a flow
design and process modeling question than a retry or error handling
capability question.
Orchestration tools start with the concept of a job,
Paula,
PutS3Object is capable of using S3's multi-part upload feature to upload
very large files, and it checks for the status of multi-part uploads
periodically in the course of normal operation. Ideally, your AWS
credentials would include the s3:ListBucketMultipartUploads permission to
allow th
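As a hedged sketch (the bucket name and ARNs below are placeholders, not
taken from this thread), an IAM statement covering multi-part uploads might
look like:
{
  "Effect": "Allow",
  "Action": [
    "s3:PutObject",
    "s3:AbortMultipartUpload",
    "s3:ListBucketMultipartUploads"
  ],
  "Resource": ["arn:aws:s3:::your-bucket", "arn:aws:s3:::your-bucket/*"]
}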
tty robust and I am not
> sure if it will be able to be implemented in that
>
> On Tue, Feb 6, 2018 at 9:30 AM, James Wing wrote:
>
>> Austin,
>>
>> Have you tried QueryDatabaseTable? For some databases and table schema,
>> it provides a shortcut to querying for t
Austin,
Have you tried QueryDatabaseTable? For some databases and table schema, it
provides a shortcut to querying for the recently changed records, as long
as you have a "maximum value column" to use.
https://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi/nifi-standard-nar/1.5.0/org.
lingAgent.java:128)
at
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
On Thu, Jan 18, 2018 at 12:50 PM, Austin Duncan
wrote:
> anything that tries to go through that specific processor doesn't get
> processed. I am not sure how to check the logs. Any specific place
>
&
Austin,
Would you please check in your logs/nifi-app.log file to see if there is a
more complete stack trace for the error(s)? I'm guessing/hoping there may
be another, underlying cause.
Does it do this for all flowfiles, or only some?
Thanks,
James
On Thu, Jan 18, 2018 at 11:25 AM, Austin Du
Robert,
I had the same problem. One workaround I have used is to add the DNS name
to the /etc/hosts file with a local IP address, so that I could configure
that name in nifi.web.http.host and NiFi would still bind to the right IP.
It sounds like a nasty hack now that I describe it, but it worked
You can use MergeContent to merge more than 10,000 flowfiles, but you may
experience slower performance with very large numbers of files in a single
merge. The recommended configuration is to use two MergeContent processors
in sequence. The first MergeContent would merge groups of individual
fl
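A hedged sketch of the two-stage settings (the numbers are illustrative
placeholders, not from this thread): the first MergeContent might bundle on
the order of 1,000 flowfiles per merge, and the second might combine 10-100
of those bundles, so that no single merge has to track an enormous number of
inputs at once.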
James
On Wed, Dec 13, 2017 at 11:32 AM, Aruna Sankaralingam <
aruna.sankaralin...@cormac-corp.com> wrote:
> James, I am sorry I am not sure if I follow that. Could you please give an
> example?
>
>
>
> *From:* James Wing [mailto:jvw...@gmail.com]
> *Sent:* Wednesday, December
For ListS3, you will want to separate those in the Bucket and Prefix properties.
> On Dec 13, 2017, at 9:34 AM, Aruna Sankaralingam
> wrote:
>
> James,
>
> “part-d-prescription-drug” is the main folder in S3 and “unstructured” is the
> sub folder inside the main fold
Are you able to list the bucket with the AWS CLI (aws s3 ls)? It can be
helpful to compare performance between NiFi and the AWS CLI, especially if
you are able to do so from the same machine, with the same permissions, and
as similar bucket and prefix settings as you can manage.
In the screenshot
Neil,
I'm not aware of this problem for ListS3. I do not suggest there are no
issues; rather, many users might not notice or have come to accept some
variance in the accuracy of ListS3. If you can persuade ListS3 to do it
again, that would be great :).
We did recently hear a report of simil
Jennifer Kissinger
> Senior Data Engineer
> SemanticBits, LLC
> jennifer.kissin...@semanticbits.com
> 603-290-1711
>
> On Sun, Nov 19, 2017 at 1:18 PM, James Wing wrote:
>
>> Jenni,
>>
>> Thanks for reporting this. I believe you are correct that expression
>> language is
Jenni,
Thanks for reporting this. I believe you are correct that expression
language is not being applied as expected. There is now a JIRA for this
issue, https://issues.apache.org/jira/browse/NIFI-4619.
Have you been able to work around the issue? Hopefully, file credentials
or named profiles
Eric,
You can use the EvaluateJsonPath processor to extract the SNS message body
using a JSON path of "$.Message". As part of evaluating the path, it will
un-escape the stringified content of Message and return the enclosed JSON
content.
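As an assumed illustration (the payload below is invented), an SNS
notification delivered through SQS looks roughly like:
{ "Type" : "Notification", "MessageId" : "...", "Message" : "{\"orderId\": 42}" }
and EvaluateJsonPath with a property of $.Message would produce the
un-escaped inner document, {"orderId": 42}.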
If you want this to be in a new, separate flowfile from th
Manish,
I have never tried to connect NiFi with MSMQ, but NiFi supports enough
connection interfaces to make something possible. An ideal solution would
be a JMS wrapper for MSMQ so you could use NiFi's ConsumeJMS and PublishJMS
processors. JMS is a typical queue interface for Java apps like NiF
There is a 64 KB limit on attributes. If you are processing a lot of
flowfiles with large attributes, you may not get optimal NiFi performance.
It is usually better to store data of that size in the flowfile content,
and only extract smaller, select data points into attributes.
Thanks,
James
On
Laurens,
ListS3 tracks S3 object keys within your bucket+prefix. ListS3 primarily
works on a last read timestamp, but tracks multiple keys when the
timestamps are equal. Directories are something of a hazy concept in S3.
Thanks,
James
On Tue, Aug 8, 2017 at 8:22 AM, Laurens Vets wrote:
> Hi
The NiFi distribution does not ship with support for username/password
authentication outside LDAP and Kerberos. But NiFi supports a pluggable
model for login identity providers, so you can provide your own
implementation.
https://nifi.apache.org/docs/nifi-docs/html/administration-guide.html#user
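As a rough sketch (the class name, expiration, and issuer values below are
invented for illustration, and the configuration/lifecycle methods are
omitted), a custom provider implements the LoginIdentityProvider interface:
public class MyLoginIdentityProvider implements LoginIdentityProvider {
    @Override
    public AuthenticationResponse authenticate(final LoginCredentials credentials) {
        // Validate credentials.getUsername()/credentials.getPassword()
        // against your own user store, then return the resolved identity.
        return new AuthenticationResponse(credentials.getUsername(),
                credentials.getUsername(), 12 * 60 * 60 * 1000L, "MyProvider");
    }
}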
Lauren,
It sounds like you are doing the right troubleshooting steps. A few more
ideas off the top of my head:
- When you tested with the S3 CLI, did you use the same credentials,
from the same machine NiFi is running on? The CloudTrail events are
written by AWS, so the ownership and p
Ben,
In standalone mode, the state data is written to the NIFI_HOME/state
directory. It is OK, and very normal, for a standalone NiFi to use
processors that specify the CLUSTER scope. Without a cluster state store
like Zookeeper, NiFi gracefully falls back to using the local file system.
A clust
Ben,
Processors typically get the DBCPConnectionPool controller service only
through the DBCPService interface. I believe it is possible to cast the
interface to the DBCPConnectionPool class. However, DBCPConnectionPool
does not expose the property values you are looking for through public
metho
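For reference, a minimal sketch of the interface-only pattern (the property
name DBCP_SERVICE is an assumption for illustration):
final DBCPService dbcp = context.getProperty(DBCP_SERVICE)
        .asControllerService(DBCPService.class);
try (final Connection conn = dbcp.getConnection()) {
    // Work with the plain JDBC connection; the pool's internal
    // configuration stays hidden behind the interface.
}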
Mike,
I believe templates include controller services by default, as long as one
or more of the processors in the template references the controller
service. Did that not happen for you?
Thanks,
James
On Thu, Jun 8, 2017 at 9:59 AM, Mike Thomsen wrote:
> Is it possible to save the controlle
I do not believe NiFi has any specific features for Zeppelin yet, but it is
possible to write custom Zeppelin code paragraphs that communicate with the
NiFi API to pull data or inspect flow status. For an example, I recommend
Pierre Villard's US Presidential Election: tweet analysis using HDF/NiFi
Are you seeing errors, or just unexpected results? ListS3 only returns
references to objects on S3, but FetchS3Object should return the object
content. I recommend looking at the output of FetchS3Object to make sure
it is right (in size and content type) before trying to unzip it.
Thanks,
James
Jeremy,
Have you tried "nifi.sh run" instead of start/stop? I'm not an expert on
supervisord, but it seems like it might be a better fit. Also, was
anything logged in the bootstrap log?
Thanks,
James
On Tue, May 30, 2017 at 8:28 AM, Jeremy Taylor
wrote:
>
>
>
>
> Andre and all,
>
> So you’v
Uwe,
Please do create a JIRA. I agree with you that SplitRecord should provide
compatible fragment.* attributes like the older split processors.
Partly for a consistent user experience, and partly for compatibility
with MergeContent and other processors that read the fragment.* attributes.
Th
Mike,
I believe you are correct that NiFi does not have built-in support for
several of those tasks:
* Identifying a schema evolution (separate from a non-conforming flowfile)
* Identifying safe schema changes
* Automatically applying schema changes to a SQL database
However, the new schema regi
Jim,
Your description of different success by user makes me think of file
permissions. Does NiFi have equal access to those files? As far as
general troubleshooting, I recommend the following:
1.) Copy/pasting your ListFile and FetchFile separately so you can wrestle
with their properties, cle
Tim,
I think your PropertyDescriptor should reference the controller service
interface type (AWSCredentialsProviderService.class), rather than a
concrete implementing class type. For example, in
AbstractAWSCredentialsProviderProcessor:
public static final PropertyDescriptor AWS_CREDENTIALS_P
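A hedged reconstruction of that pattern (the exact builder arguments are
approximate, not copied from the NiFi source):
public static final PropertyDescriptor AWS_CREDENTIALS_PROVIDER_SERVICE =
        new PropertyDescriptor.Builder()
                .name("AWS Credentials Provider service")
                .required(false)
                .identifiesControllerService(AWSCredentialsProviderService.class)
                .build();
The identifiesControllerService() call is what binds the property to the
interface type, so any implementation of the service can be selected.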
Bill,
I think the JSON path expression you are looking for is just
$
Which is a reference to the root object array. This should result in three
splits like {"provider_study_id":1001}
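For instance (input invented for illustration), an input of
[{"provider_study_id":1001},{"provider_study_id":1002},{"provider_study_id":1003}]
would split into one flowfile per array element.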
Thanks,
James
On Fri, Apr 14, 2017 at 7:43 AM, Bill Duncan wrote:
> Hi All,
>
> I am working on extracting
Ravi,
Can you share a sample of the data you are splitting and the settings of
the SplitText processor? Is there more error stack trace information?
Thanks,
James
On Thu, Apr 13, 2017 at 3:42 PM, Ravi Papisetti (rpapiset) <
rpapi...@cisco.com> wrote:
> Hi,
>
>
>
> Using Apache Nifi 1.1.2 sing
EvaluateJsonPath does not support expression language in properties. But
you could use an ExecuteScript processor to read the "97" from an attribute
and use it to index the JSON content.
Thanks,
James
On Wed, Apr 12, 2017 at 6:14 AM, Guillaume Pool wrote:
> Hi,
>
>
>
> Has anyone found a way
file handles
> are not eliminated? The initialize() runs at Start only, but if it has been
> stopped and started one or more times prior it inherits all that previous
> baggage. Is that right?
>
> Thanks very much.
>
> Jim
>
> On Tue, Apr 4, 2017 at 2:18 PM, James Wing w
James,
I suspect your call to self.logger.addHandler(fh) is cumulatively adding to
your log results as initialize() is called again. Can you define the log
file and formatting in your NiFi's conf/logback.xml (no restart required)?
Then you can safely call getLogger() and access the shared configu
first option there --
>> expanding the property list to include the Assume Role properties. I would
>> agree that #2 is a bit more robust and will do some more digging there. As
>> usual, it's a feature that I need yesterday and will likely take the path
>> of least resistanc
though it is running.
>
> Rgds,
>
> Uwe
>
> *Sent:* Friday, March 24, 2017 at 19:27
> *From:* "James Wing"
> *To:* users@nifi.apache.org
> *Subject:* Re: Nifi - Data provenance not reporting anymore
> Uwe,
>
> Can you share a screenshot of the provena
Adam,
Would you please share a bit more about why you need the various roles and
how many you would have? I'm curious how it's working in practice; we don't
always get feedback when stuff isn't broken :).
You are correct that the current AWSCredentialsProviderControllerService
assumes a single, unvaried
Steve,
The inferred schemas can be helpful to get you started, but I recommend
providing your own Avro schema based on your knowledge of what should be
guaranteed to downstream systems. If you want to pass untyped data, you
can't really beat JSON. Avro schema isn't so bad, honest.
As part of th
, Uwe Geercken wrote:
> James,
>
> where should I put the screenshot? This mail group does not allow to send
> graphics.
>
> Rgds,
>
> Uwe
>
> *Sent:* Friday, March 24, 2017 at 18:27
> *From:* "James Wing"
> *To:* users@nifi.apache.org
> *Subject:*
Uwe,
Can you share a screenshot of the provenance search criteria? If you try
the search the other way around, starting with all provenance events and
then filtering down to the time and processor, does that change anything?
Thanks,
James
On Fri, Mar 24, 2017 at 8:39 AM, Uwe Geercken wrote:
Jim,
You could use ListS3 to get existing S3 keys, then parse out the
'directories', and put the directories in a key/value store for a lookup
(like DistributedMapCache). But you might also be able to maintain the
lookup just with your metadata attributes in NiFi alone.
Thanks,
James
On Fri,
Austin,
I think you are on the right track with RouteOnContent. Any chance you can
share a sample CSV header and the settings of your RouteOnContent processor,
including the regex?
Thanks,
James
On Thu, Mar 16, 2017 at 11:14 AM, Austin Heyne wrote:
> Hi,
>
> I have a set of CSV files with heade
Jim,
The good news is that you can delete the entire ./provenance_repository and
NiFi will start up with a fresh clean one. Of course, that won't solve
your analysis challenge.
You might want to play around with some of the provenance config entries to
limit the repository, like nifi.provenance.
Mikhail,
Which version of NiFi are you running?
Thanks,
James
On Fri, Mar 10, 2017 at 9:29 AM, James Wing wrote:
> Mikhail,
>
> This sounds like a known issue with the flow.xml schema validation, where
> new elements have been added to NiFi that have not been updated in the XML
Mikhail,
This sounds like a known issue with the flow.xml schema validation, where
new elements have been added to NiFi that have not been updated in the XML
schema. I believe they should be interpreted as warnings, rather than
errors.
Schema for flow.xml outdated
https://issues.apache.org/jira/
MergeContent's "Defragment" merge strategy might be enough. It is not as
full-featured as the Wait/Notify design Andy suggested, but it might be
something you can use now.
Defragment reads attributes from the flowfiles to more precisely group your
content together than correlation attribute alone
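Specifically, Defragment groups flowfiles by the fragment.identifier
attribute, and uses fragment.index and fragment.count to order the pieces
and to recognize when a group is complete.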
Carlos,
Welcome to NiFi! I believe the Kite dataset is currently the most direct,
built-in solution for writing Parquet files from NiFi.
I'm not an expert on Parquet, but I understand columnar formats like
Parquet and ORC are not easily written to in the incremental, streaming
fashion that NiFi
Carl,
I think logging still works in ExecuteScript. But starting in NiFi 1.0,
the default log level for processors was set from 'INFO' back to 'WARN' to
reduce log volumes. To receive INFO messages from ExecuteScript, you will
want to do one or both of two things:
1.) Set the bulletin level of
Mike,
I am not familiar with a specialized connector for MS Dynamics, but NiFi
has good general-purpose support for databases via JDBC and HTTP(S) web
services (see InvokeHTTP). It is likely that NiFi can do at least some
interaction with Dynamics using only the built-in processors.
But NiFi is
Cheryl,
I'm not aware of any good explanation for that, or open issues on this
topic. Can you share more detail on the OS, JVM, NiFi versions?
Thanks,
James
On Thu, Feb 2, 2017 at 12:14 PM, Cheryl Jennings wrote:
> Hi Everyone,
>
> I had nifi running overnight, communicating between two node
Ashmeet,
Are you still having trouble with HandleHttpRequest? The file data should
be included in the flowfile content for flowfiles generated by
HandleHttpRequest. Are you seeing content, and the number of bytes you
expect?
However, the content will probably be packaged as multipart/form-data
Nick,
You could use ExecuteScript to manipulate the JSON. The sample ECMAScript
below assumes that you already have the transformed timestamp as an
attribute "timestamp":
var flowFile = session.get();
if (flowFile !== null) {
var StreamCallback =
Java.type("org.apache.nifi.processor.io.Stre
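// A hedged continuation of the sample (the archive snippet is truncated
// above; the attribute and field names here are assumptions for
// illustration, not from the original email):
var ts = flowFile.getAttribute("timestamp");
flowFile = session.write(flowFile, new StreamCallback(function (inputStream, outputStream) {
    var IOUtils = Java.type("org.apache.commons.io.IOUtils");
    var StandardCharsets = Java.type("java.nio.charset.StandardCharsets");
    // Parse the JSON content, set the pre-computed timestamp attribute as a
    // field, and write the updated JSON back as the new flowfile content.
    var json = JSON.parse(IOUtils.toString(inputStream, StandardCharsets.UTF_8));
    json.timestamp = ts;
    IOUtils.write(JSON.stringify(json), outputStream, StandardCharsets.UTF_8);
}));
session.transfer(flowFile, REL_SUCCESS);
}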
Pablo,
An extreme fix is to stop NiFi and completely delete the provenance
repository directory. You will get a new, empty provenance repository when
you restart NiFi. All data will be lost, but NiFi will work again.
Obviously, Lee's advice is much better, but there are options if you need
versions:
>
> Firefox 38.0.1
> Internet Explorer 11.0.9600.18499
>
> Same result in both.
>
> -Nick
>
>
> On Sun, Dec 11, 2016 at 3:58 PM, James Wing wrote:
>
>> Which browser and version are you using?
>>
>> On Sun, Dec 11, 2016 at 12:13 PM, Nichola
Which browser and version are you using?
On Sun, Dec 11, 2016 at 12:13 PM, Nicholas Hughes <
nicholasmhughes.n...@gmail.com> wrote:
> Each of the three objects in the nodes array has a roles attribute as an
> array.
>
> -Nick
>
>
> On Sun, Dec 11, 2016 at 2:51 PM, J
M, Nicholas Hughes <
> nicholasmhughes.n...@gmail.com> wrote:
>
>> James,
>>
>> I just have some 304 "not modified" codes. No errors.
>>
>>
>>
>> -Nick
>>
>>
>> On Sun, Dec 11, 2016 at 1:35 PM, James Wing wrote:
>>
>
Nick,
Are there any errors shown in the Javascript console of your browser
developer tools?
Thanks,
James
On Sun, Dec 11, 2016 at 8:49 AM, Nicholas Hughes <
nicholasmhughes.n...@gmail.com> wrote:
> I'm seeing some interesting behavior in a secured NiFi cluster, and I
> can't seem to track dow
Brian,
Did you add entries for the node DNs in the conf/authorizers.xml file?
Something like:
CN=node1.nifi, ...
CN=node2.nifi, ...
...
Thanks,
James
On Wed, Dec 7, 2016 at 8:28 AM, Brian Jeltema wrote:
> I’m trying to create my first cluster using NiFi 1.1.0. It’s a simple
> 3-node unsecur
Andreas,
I'm not familiar with PutElasticSearch, but the code you point to does
appear strange. It doesn't look like there are any unit tests where
multiple results are returned from ElasticSearch with one or more failures
that would exercise this case.
L329...
// All items are returned whet
Pablo,
The GetDynamoDB processor maps the JSON document from a single DynamoDB
table attribute to the NiFi flowfile content. There is currently no
built-in way to read all of the attributes into the flowfile content as
JSON. I recommend filing a JIRA ticket to enable this use case (see
https://i
what further configuration steps I need to take to get
> S3/SQS working behind our proxy.
>
> Many thanks
>
> John
>
>
> On Thu, Nov 3, 2016 at 6:42 PM, James Wing wrote:
>
>> The short answer is no, PutS3Object does not currently support a direct
>> equiv
The short answer is no, PutS3Object does not currently support a direct
equivalent of the AWS CLI's --no-verify-ssl option. There is an option to
provide your own SSLContextService, if you need to establish trust with
your proxy server (maybe, I'm not sure).
https://nifi.apache.org/docs/nifi-docs
This is absolutely possible. A sample sequence of processors might include:
1. UpdateAttribute - to extract a record date from the flowfile content
into an attribute, 'recordgroup' for example
2. MergeContent - to group related records together, setting the
Correlation Attribute Name property to
Thanks,
James
On Wed, Nov 2, 2016 at 5:31 AM, John Burns wrote:
> Hi James
>
> I seemed to have solved this by installing the AWS command line tools; I
> wasn't aware I needed to do that.
>
> Many thanks for your help.
>
> John
>
> On Tue, Nov 1, 2016 at 1
> The key and secret key are set correctly too. Do I need to modify my
> /etc/hosts perhaps?
>
> Thanks
>
> John
>
> On Nov 1, 2016 20:53, "James Wing" wrote:
>
>> John,
>>
>> When you configure the PutS3Object processor, you should configure
John,
When you configure the PutS3Object processor, you should configure the
Bucket Name as just 'nifibucket' (without quotes), and set the Region to
'eu-west-1'. You do not need the long-form '
nifibucket.s3-eu-west-1.amazonaws.com'; I think that might be what is
confusing it.
Thanks,
James
O
From the screenshot and the error message, I interpret the sequence of
events to be something like this:
1.) ListS3 succeeds and generates flowfiles with attributes referencing S3
objects, but no content (0 bytes)
2.) FetchS3Object fails to pull the S3 object content with an Access Denied
error,
> Thanks
> Joe
>
> On Wed, Oct 26, 2016 at 1:50 AM, Pablo Lopez
> wrote:
> > It worked!
> > I had to actually delete the Processor and add a new one. Stopping and
> > starting the Processor or the Flow didn't actually work.
> >
> > Thanks,
> >
Pablo,
Did you by any chance start the GetDynamoDB processor first without the
credentials, then stop, configure credentials, and restart? I suspect
there might be a bug where GetDynamoDB caches the client even through
stopping and starting the processor.
To test this theory, you might try copy/
of the DynamoDB table and put it into another table. We have to do it
> for one record at a time?
>
> On Fri, Oct 14, 2016 at 10:50 AM, James Wing wrote:
>
>> NiFi's GetDynamoDB processor uses the underlying BatchGetItem API, which
>> requires item keys as inputs. It
it through the expression
> language? If I write a script to do that, how do I pass it to my processor?
> Thanks
> Niraj
>
> On Thu, Oct 13, 2016 at 1:42 PM, James Wing wrote:
>
>> Rai,
>>
>> The GetDynamoDB processor requires a hash key value to look up an i
Rai,
The GetDynamoDB processor requires a hash key value to look up an item in
the table. The default setting is an Expression Language statement that
reads the hash key value from a flowfile attribute,
dynamodb.item.hash.key.value. But this is not required. You can change it
to any attribute e
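For example (the attribute name is an assumption for illustration), setting
the Hash Key Value property to ${user.id} would look up the item whose hash
key matches each incoming flowfile's user.id attribute.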
Niraj,
Is this message displayed as a validation warning on the GetDynamoDB
processor? GetDynamoDB will retrieve content from DynamoDB, but it
requires an input flowfile to define which item keys to look up.
GetDynamoDB will not enumerate the items in a table like a List* processor.
Thanks,
Jam
Selvam,
Can you describe what is happening when you run FetchS3Object? Does the
processor start? Are flowfiles routed to the failure relationship? Are
there error messages?
To test your configuration, I recommend using ListS3 to get flowfiles with
attributes referencing the S3 objects, then us
In your ReplaceText processor, is the Replacement Strategy property set to
the default "Regex Replace"? You might try "Always Replace" or changing
the Search Value to ".*" (without quotes). I suspect it is escaping your $
as part of processing the regex.
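For background: in a regex replacement, a dollar sign in the replacement
text is normally treated as a group reference ($1 refers to capture group
one), so a literal dollar sign generally has to be escaped as \$.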
Thanks,
James
On Fri, Sep 30, 2016 at
Brett,
The default provenance store, PersistentProvenanceRepository, does require
I/O in proportion to flowfile events. Flowfiles with many attributes,
especially large attributes, are a frequent contributor to provenance
overload because attribute state is tracked in provenance events. But this
to use tools like salt or puppet to pre-position
> templates.
>
> Thanks,
>
> Dan
> M: 443-992-2848
> GV: 410-861-0206
>
> On Sep 14, 2016, at 3:02 PM, James Wing wrote:
>
> Manish, you are absolutely right to back up your flow.xml.gz and conf
> files. But I woul
Manish, you are absolutely right to back up your flow.xml.gz and conf
files. But I would carefully distinguish between using these backups to
recreate an equivalent new NiFi and attempting to reset the state of
your existing NiFi. The difference is the live data in your flow, in the
provenanc
test.xml .
>
> 3. I need to process the /tmp/test.xml file using SplitXML
> processor
>
> 4. Put these into HDFS
>
>
>
>
>
> Thanks,
>
> Ram
>
> *From:* James Wing [mailto:jvw...@gmail.com]
> *Sent:* Monday, August 29, 2016 12:47 AM
> *To:* us
>>
>>
>> 1. READ CSV file from HDFS
>>
>> 2. Execute python script – reads CSV file and produces XML output
>> file – example /tmp/test.xml .
>>
>> 3. I need to process the /tmp/test.xml file using SplitXML
>> processor
>>
Koustav,
How are you running the Sqoop job? Can you share some code? Python is
sequential by default, but your Sqoop job might run asynchronously. I
believe the answer depends on your code (or library) not only starting the
Sqoop job, but also polling for its status until it is complete.
Thanks,
Thursday, August 18, 2016 at 7:23 AM
>> *To: *
>> *Subject: *Re: PutS3Object Error
>>
>>
>>
>> Thanks James. So when you say the JRE security provider, are you
>> referring to bouncycastle? If so, I am currently using
>> bcprov-jdk16-1.46.jar.
>
>84 at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> [na:1.7.0_101]
>
>85 at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> [na:1.7.0_101]
>
>86 at java.lang.Thread.run(Thread.java:745) [na:1.7.0
Dan,
Would you please share the version of NiFi you are using? Also, would you
please look in logs/nifi-app.log for the stack trace of the exception and
any nested exceptions?
Thanks,
James
On Wed, Aug 17, 2016 at 5:05 PM, dgm wrote:
> I’m just starting to use NiFi and having an issue with th
Sai,
Welcome to NiFi! This sounds like an "interesting" error. Would you
please share the version of NiFi you are using? Also, can you get the
complete stack trace of the error from NIFI_HOME/logs/nifi-app.log? The
error suggests that NiFi cannot connect to SQS at all, and has not
proceeded as
Michael,
I believe the general issue you are experiencing in NiFi is the difference
between authentication and authorization. The UI is allowing you to
request and grant authorization (roles), but it is not providing
authentication (a password). In contrast, the login form is used for
authentica