The order of arrival does not matter.
The Wait processor has an 'Expiration Duration' configuration, which
defaults to 10 min. Please adjust it to your needs: it is the longest
period to wait for a delayed file.
If a FlowFile exceeds the duration, it will be sent to the 'expired'
relationship, and can be
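As a rough sketch, the Wait side of such a flow might be configured like this ('Release Signal Identifier' and 'Expiration Duration' are the processor's property names; the signal-identifier expression and the 30 min value are illustrative assumptions, not from the thread):

```
Wait
  Release Signal Identifier : ${filename:substringBeforeLast('.')}
  Expiration Duration       : 30 min
```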
Cool, that will make things a lot simpler. Does it matter that the ext2
files arrive in random order? Sometimes there can be a very long delay in
some of them showing up, and we have some concerns about the overall flow
blocking. If we have a longer wait for a file, we'd like processing for the
Glad to hear that was helpful.
"4 same types for each extension" can be treated as "8 distinct types"
if the extension is included in the type.
ab.ex1, cd.ex1, ef.ex1, gh.ex1, ab.ex2, cd.ex2, ef.ex2, gh.ex2
Then route only 'ab.ex1' (or whichever, but just one of them) to the Wait
branch, and the rest
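Koji's routing suggestion can be sketched with a RouteOnAttribute dynamic property (the property name 'to-wait' and the filename 'ab.ex1' are illustrative):

```
RouteOnAttribute
  to-wait : ${filename:equals('ab.ex1')}
```

FlowFiles matching the expression go to the 'to-wait' relationship (the Wait branch); everything else follows the 'unmatched' relationship.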
Hi Koji, Many thanks for your continued assistance!
> - 1 file per second is relatively low in terms of traffic, it should
> be processed fine with 1 thread
> - A flow like this, which is stateful across different parts of the
> flow works at best with single thread, because using multiple
Hi Martijn,
Thanks for elaborating your requirement. Here are a few comments:
- 1 file per second is relatively low in terms of traffic; it should
be processed fine with 1 thread
- A flow like this, which is stateful across different parts of the
flow, works best with a single thread, because
We’re using SSSD not LDAP directly so ours look like this. You can use LDAP as
the source for SSSD but I’m not exactly sure why the behavior would be
different.
passwd: files sss
shadow: files sss
group: files sss
I’m looking back through the thread and I haven’t seen it
Yes sir - we are indeed able to create files with that group. By chance,
are you using /etc/nsswitch.conf? Do your entries for passwd, shadow, and
group look something like this?
passwd: files ldap
shadow: files ldap
group: files ldap
(We did try to reverse that order to "ldap
Vishal Dutt,
Your issue relates to an existing JIRA [1] and, as luck would have it, it's
already resolved! :) The fix [2] for that JIRA has been merged to master
and will be in the next NiFi release.
[1] https://issues.apache.org/jira/browse/NIFI-5134
[2] https://github.com/apache/nifi/pull/2667
Logged in as the user NiFi is running as, on the same host, are you able to
create files with that group? We use PutFile and none of our groups are local
to the host.
Thanks
Shawn
From: James McMahon
Sent: Wednesday, May 30, 2018 8:21 AM
To: users@nifi.apache.org
Subject: Re: User, Group in
I did indeed configure PutFile as follows:
Permissions: 775
Owner: nifi
Group: ext_dev
When nifi is in local /etc/passwd and ext_dev is in local /etc/group, the
PutFile succeeds.
When neither exists in the local files, I get the Warning in both cases and
the file is output with
Sounds like a pretty safe plan. You said your development instance is at
1.1 and your prod is on 1.3. Is that right? If so, you might want to
shut down, back up NiFi, and just do a clean installation without bringing
over your security settings until you have NiFi 1.6 up and running, because
a lot has
Hi Mike,
Thanks for the warning. In prod, we are still running NiFi 1.3 with Java 8. I
guess for now, I will just install the JRE 8 onto the dev server, and
experiment with NiFi 1.7 at a later stage.
From: Mike Thomsen
Sent: Wednesday, May 30, 2018 8:49 AM
To: users@nifi.apache.org
Subject:
Shot in the dark: if you have a user named nifi in the LDAP and one in the
OS, it might not actually be treated as the same user unless the OS is using
LDAP to provide the user listing. Something as simple as /etc/users having a
password for "nifi" and the LDAP not having it, or it being a different hash
By default, PutFile will set the ownership of the file to the user running
the NiFi instance (nifi if NiFi is running as nifi user). Then, if you
configured a different ownership in the processor configuration it'll try
to set the ownership using the username you configured in the processor.
What
Alexander,
You're welcome to try the patch in a dev environment and send us feedback
on what you find. The more the merrier in that respect, however, just to be
clear... this is absolutely not recommended as an immediate production fix
for your issue (assuming this environment you're emailing
Yes sir, sure does. In this instance my user nifi does indeed resolve at
the OS level - I think that gives us some confidence it does resolve. The
lookupPrincipalByName(owner) within the PutFile is where I believe the
failure is rooted, but I do not understand how that function executes its
I think we're saying the same thing :) Let me rephrase it differently: to set the
owner of a file, the user needs to be resolved at OS level. If the user
does not exist (from the OS point of view), NiFi won't be able to set the
owner (even though the username is in the LDAP configured for NiFi
Excellent, thanks everyone! Let me try the patch first. Plan B would be to
install Java 8 and bootstrap NiFi to it.
From: Mike Thomsen
Sent: Wednesday, May 30, 2018 7:59 AM
To: users@nifi.apache.org
Subject: Re: NiFi 1.6.0 fails to start
We added a patch for 1.7.0-SNAPSHOT that should handle
I don't understand this:
"Until you *can't* resolve the user with OS commands, I don't think NiFi
will be able to set the expected owner on the file"
Did you intend to say 'can' there? Don't we want to be able to resolve the
user at the OS as an initial validation that we can get to the LDAP and as
We added a patch for 1.7.0-SNAPSHOT that should handle that issue going
forward. You're welcome to download the code and try a build (in your dev
environment obviously) to see if that solves your problem.
On Wed, May 30, 2018 at 7:43 AM Bryan Bende wrote:
> Hello,
>
> There is some work being
What's the error from Spark? Logical data types are just a variant on
existing data types in Avro 1.8.
On Wed, May 30, 2018 at 7:54 AM Mohit wrote:
> Hi all,
>
>
>
> I’m fetching the data from RDBMS and writing it to parquet using
> PutParquet processor. I’m not able to read the data from Spark
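For context on Mike's point: a logical type in Avro 1.8 is just an annotation on an underlying primitive type, so a decimal field in the record schema looks roughly like this (the field name and the precision/scale values are illustrative, not from the thread):

```json
{"name": "price",
 "type": {"type": "bytes", "logicalType": "decimal", "precision": 10, "scale": 2}}
```

A reader that does not understand the annotation is supposed to fall back to the underlying type (here, bytes), which is why differences between Spark and Hive readers matter.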
Hi all,
I'm fetching the data from RDBMS and writing it to parquet using PutParquet
processor. I'm not able to read the data from Spark when Logical Data Type
is true. I'm able to read it from Hive.
Do I have to set some specific properties in the PutParquet processor to
make it readable from
It depends on how your OS is configured; you could leverage tools like SSSD to
resolve users against your LDAP but that's something to be configured at OS
level.
Until you can't resolve the user with OS commands, I don't think NiFi will
be able to set the expected owner on the file.
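Pierre's point can be checked directly: PutFile's ownership change goes through java.nio's lookupPrincipalByName, which succeeds only for names the OS itself can resolve (via /etc/passwd, SSSD, etc.). A minimal sketch, assuming 'root' exists locally and 'no_such_user_xyz' does not:

```java
import java.io.IOException;
import java.nio.file.FileSystems;
import java.nio.file.attribute.UserPrincipalLookupService;
import java.nio.file.attribute.UserPrincipalNotFoundException;

public class OwnerLookupCheck {
    // Returns true if the OS can resolve the username -- the same
    // condition PutFile's ownership change depends on.
    static boolean resolvable(String username) throws IOException {
        UserPrincipalLookupService lookup =
                FileSystems.getDefault().getUserPrincipalLookupService();
        try {
            lookup.lookupPrincipalByName(username);
            return true;
        } catch (UserPrincipalNotFoundException e) {
            // Thrown when the name is unknown to the OS, even if it
            // exists in an LDAP the OS is not consulting.
            return false;
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println("root resolvable: " + resolvable("root"));
        System.out.println("no_such_user_xyz resolvable: " + resolvable("no_such_user_xyz"));
    }
}
```

If the second lookup fails on your host while the user exists in LDAP, the OS is not consulting LDAP for user resolution, and PutFile will hit the same wall.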
2018-05-30 11:54
Hello,
There is some work being done to support Java 9/10, but at this time NiFi
currently requires Java 8.
There is an issue when using Java 9/10 where some javax.xml classes are now
part of the JDK; NiFi previously had to bundle them, so this creates conflicting
classes.
Thanks,
Bryan
> On
Try java 8, and set JAVA_HOME etc.
On 30 May 2018 at 13:20, Saip, Alexander (NIH/CC/BTRIS) [C] <
alexander.s...@nih.gov> wrote:
> Hello,
>
>
>
> We have recently tried to upgrade the development instance of NiFi running
> on Windows 2012 from 1.1.1 to 1.6.0, following the process described here
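The "set JAVA_HOME" advice above can also be pinned down in NiFi's conf/bootstrap.conf, which lets you point NiFi at a specific JVM regardless of the environment (the path to the Java 8 binary below is an assumption; adjust it to your install):

```
# conf/bootstrap.conf
java=/usr/lib/jvm/java-8-openjdk/bin/java
```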
My apologies. It is sometimes difficult to decide if the root cause of a
challenge is related to code or related to a user level configuration
issue. In this case I had included the developer group because it seemed it
might relate to the PutFile after I had eliminated the possible
configuration
I'll let Koji give more information about the Wait/Notify, he is clearly
the expert here.
I'm just jumping in regarding your "and when viewing the queue, the dialog
states that the queue is empty.". You're seeing this behavior because, even
though the UI shows some flow files in the queue, the
Hi,
Could you share additional details about the processor/CS configuration as
well?
Thanks
2018-05-30 7:03 GMT+02:00 Koji Kawamura :
> Hello,
>
> Although I encountered various Kerberos related error, I haven't
> encountered that one.
> I tried to reproduce the same error by changing Kerberos
Hi Koji,
Thank you for responding. I had adjusted the run schedule to closely mimic
our environment. We are expecting about 1 file per second or so.
We are also seeing some random "orphans" sitting in a wait queue every now
and again that don't trigger a debug message, and when viewing the queue,