Lorenzo was not clear on how we tested. I wrote a sample Java program using the
Azure SDK, then uploaded the 50 MB file, and it worked without error. The NiFi
custom processor uses the same SDK code; however, it fails when the custom
processor tries to upload the 50 MB file.
Thanks
Kumiko
Here is the code: https://github.com/kyada1/ADL_UploadFile.
I removed the following values for security reasons.
final static String ADLS_ACCOUNT_NAME = "";
final static String RESOURCE_GROUP_NAME = "";
final static String LOCATION = "";
final static String TENANT_ID = "";
final static
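For context, a minimal upload along these lines with the standalone azure-data-lake-store-sdk might look like the sketch below. This is an assumption-laden illustration, not the code from the linked repo (which isn't reproduced here); the token endpoint, client id/key, account name, and file paths are all placeholders.

```java
import com.microsoft.azure.datalake.store.ADLStoreClient;
import com.microsoft.azure.datalake.store.IfExists;
import com.microsoft.azure.datalake.store.oauth2.AccessTokenProvider;
import com.microsoft.azure.datalake.store.oauth2.ClientCredsTokenProvider;

import java.io.FileInputStream;
import java.io.InputStream;
import java.io.OutputStream;

public class AdlUploadSketch {
    public static void main(String[] args) throws Exception {
        // Placeholder credentials -- these correspond to the redacted constants above.
        String authEndpoint = "https://login.microsoftonline.com/<TENANT_ID>/oauth2/token";
        AccessTokenProvider tokenProvider =
                new ClientCredsTokenProvider(authEndpoint, "<CLIENT_ID>", "<CLIENT_KEY>");
        ADLStoreClient client =
                ADLStoreClient.createClient("<ADLS_ACCOUNT_NAME>.azuredatalakestore.net", tokenProvider);

        // Stream the local 50 MB file into the store in 4 MB chunks.
        try (InputStream in = new FileInputStream("/tmp/50mb.bin");
             OutputStream out = client.createFile("/uploads/50mb.bin", IfExists.OVERWRITE)) {
            byte[] buffer = new byte[4 * 1024 * 1024];
            int read;
            while ((read = in.read(buffer)) != -1) {
                out.write(buffer, 0, read);
            }
        }
    }
}
```

If the same streaming logic works standalone but fails inside a NiFi custom processor, the difference is usually environmental (classpath/dependency conflicts in the NAR, or how the flowfile content stream is read) rather than the SDK call itself.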
I was referring to this: "Then we wrote a java sdk script to upload this
same file without Nifi into the data lake and it worked successfully."
Is that code somewhere?
On Sep 6, 2016 5:38 PM, "Kumiko Yada" wrote:
> I didn’t add any test code. This custom controller and
I didn't see the test script that worked in the source code - did I miss
it, or is it not in the tree?
On Sep 6, 2016 3:17 PM, "Kumiko Yada" wrote:
> Joe,
>
>
>
> Here is the log (there was no callstack related to this error) and code,
>
Hi Joe,
I am not sure whether I sound stupid here. I was also researching
option 2.
I felt NiFi lacks the capability to process big data jobs where the data
needs to be processed with locality.
Initially I was thinking about modifying NiFi to run on
YARN. But this approach
Yes the security mapping is 'off' by default.
For the LDAP integration, we simply grab the DN that is being returned [1].
In your login-identity-providers.xml are your 'User search' properties
configured with upper or lower case?
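For reference, the user-search properties live in the ldap-provider entry of login-identity-providers.xml; a sketch is below. The values are placeholders (not taken from this thread's setup), and the DN that ends up being returned comes from the directory entry found via this search, so its case must match the identity configured on the authorization side.

```xml
<provider>
    <identifier>ldap-provider</identifier>
    <class>org.apache.nifi.ldap.LdapProvider</class>
    ...
    <property name="User Search Base">ou=users,dc=evil,dc=com</property>
    <property name="User Search Filter">cn={0}</property>
</provider>
```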
Matt
[1]
Matt,
This was on 1.0
No big deviations from the standard setup described in the Admin Guide:
file-provider
...
CN=BOFH, DC=evil, DC=com
...
Changing the configuration to:
file-provider
...
cn=BOFH, dc=evil, dc=com
...
solved it.
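In 1.x the identity comparison is an exact, case-sensitive string match, so the Initial Admin Identity in authorizers.xml has to match the DN exactly as the directory returns it. A minimal sketch of the file-provider entry, using the example DN from this thread (file paths are the stock defaults):

```xml
<authorizer>
    <identifier>file-provider</identifier>
    <class>org.apache.nifi.authorization.FileAuthorizer</class>
    <property name="Authorizations File">./conf/authorizations.xml</property>
    <property name="Users File">./conf/users.xml</property>
    <property name="Initial Admin Identity">cn=BOFH, dc=evil, dc=com</property>
    <property name="Legacy Authorized Users File"></property>
</authorizer>
```

Note that if users.xml was already generated with the wrong-case identity, it needs to be removed (or edited) before a restart picks up the corrected value.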
The environment I was testing on seemed
Andre,
Which version of Apache NiFi are you using?
In 0.x authorities are defined using the string that is returned from the
authenticator so there should be no issue here.
In 1.x users are entered by hand either through the UI or in the
authorizers.xml file (for the initial admin). This value
All,
just to clarify, as my previous message was missing this "small" detail:
I defined the initial admin as:
CN=BOFH, DC=evil, DC=com
Logged in as
BOFH / password
Password gets recognised but NiFi tells me I am not authorised to access
the Flow.
Rewrite the config so that initial admin
Ram,
There was some design discussion about this here [1], see the part about
"getAuthorizationHistory", but I don't think we currently have any JIRAs
defined.
The file-based authorizer provided in the 1.0.0 release does not provide
this capability yet, so you would need to implement a custom
All,
I accidentally bumped into this situation and was wondering if anyone
else has seen it as well.
When provisioning a LDAP authenticated NiFi instance I defined the initial
admin as:
CN=BOFH, DC=evil, DC=com
To my surprise this did not work. Upon inspection of the authentication and
Ram,
Are you talking about isolation from a security perspective? If so, that is
now available in the 1.0.0 release.
You can restrict access to any portion of the flow through a policy model.
-Bryan
On Tue, Sep 6, 2016 at 10:16 AM, Nathamuni, Ramanujam
wrote:
> Hi Joe
Hi Joe and Team:
Enterprises need very high data security and an audit trail. What is the vision for
having the canvas/processors/process groups etc. isolated per project/LOB?
Thanks,
Ram
-----Original Message-----
From: Joe Witt [mailto:joe.w...@gmail.com]
Sent: Tuesday, September 06, 2016 9:12
Gunjan/Joe,
I was looking into this at one point... I think the idea would be to
implement a Beam source/sink (whatever their terminology is) for NiFi, and
this would allow someone to write a single Beam job pulling/pushing data
to/from NiFi, and then that single job could be executed in Flink,
Gunjan
No plans at this point. What sort of beam related integration are you
envisioning? There are a couple scenarios that come to mind:
1) NiFi as a Beam runner. Probably not a great fit, as by design we'd
not be attempting to address an important cross section of the types
of processing you
Thanks Joe,
w.r.t. NiFi externals: like we currently have the NiFi Spark and Storm
externals, are there any plans for a NiFi Beam external? The reason I ask is
that I saw your name as one of the contributors in the incubation proposal.
I was studying beam in more depth hence the question.
On Tue, Sep 6, 2016, 6:42 PM Joe
Gunjan
The best indicator of areas of focus I think are listed here
https://cwiki.apache.org/confluence/display/NIFI/NiFi+Feature+Proposals
In a roadmap discussion thread back in January of this year the items
mentioned specifically as trailing the 1.0 release were extension and
variable