Hello guys,
Ever since I tried Apache NiFi, I have been amazed by its powerful features and
performance.
Every day I want to understand and learn more of NiFi, and I try to improve my
knowledge by reading articles and following the forums.
The applications of NiFi have simplified a lot of
Thanks, that's what I figured. I probably can't use InferAvroSchema because
this data is pretty tricky, in that the types can suddenly change out from
under us.
On Thu, Aug 17, 2017 at 6:18 PM, Matt Burgess wrote:
Mike,
String is usually the safe bet, InferAvroSchema and ExecuteSQL default to
String if they can't figure out what type to use.
Regards,
Matt
> On Aug 17, 2017, at 6:00 PM, Mike Thomsen wrote:
I had the same observation. I was trying to collect concise logs to open a JIRA
and deleted all the existing logs, but NiFi did not generate new logs after the
cleanup. I had to restart NiFi to get logging going again.
On Thu, Aug 17, 2017 at 4:57 PM, Russell Bateman wrote:
Margarita,
Sorry to hear you're having trouble connecting. In this case, I believe the
Database Driver Class Name is the issue. According to [1], you'll want to use
the following driver class name (rather than the DataSource one you are using
now):
Is it safe to choose "string" as a default type with Avro? I'm trudging
through some really dirty data right now, and that seems to behave fine when
I do something like this:
Flowfile content:
{
"x": 1
}
Avro field definition:
{ "name": "x", "type": ["null", "string"] }
Where sometimes X
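For reference, here is a minimal Python sketch (the helper name and approach are my own, not from the thread) of the kind of coercion that makes mixed-type values fit a ["null", "string"] union before serialization:

```python
import json

def coerce_to_string(record, fields):
    # Coerce dirty, mixed-type values to strings so they fit a
    # ["null", "string"] Avro union; None stays None for the null branch.
    out = {}
    for f in fields:
        v = record.get(f)
        out[f] = None if v is None else str(v)
    return out

rec = json.loads('{"x": 1}')
print(coerce_to_string(rec, ["x"]))  # -> {'x': '1'}
```

This way an integer 1 today and a string "1" tomorrow both land in the string branch of the union.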
I didn't realize that. I'll see if -F is available in my case. Thank you
very much, Andrew. -Jim
On Thu, Aug 17, 2017 at 5:36 PM, Andrew Grande wrote:
Typically this is better handled by using the -F switch instead of -f; it
has more robust file handling and copes with files disappearing correctly.
Unfortunately, some OSes don't have that switch in their toolchain.
Andrew
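To make the difference concrete, here is a small sketch (file names made up) that simulates a log rotation; a plain -f reader would keep holding the old file descriptor and go silent, while -F follows by name and reattaches:

```shell
# Simulate what logrotate does and show that -F survives it.
log=$(mktemp) && out=$(mktemp)
echo "line1" > "$log"
# -F follows the *name* and retries when the file is moved or deleted.
tail -F "$log" > "$out" 2>/dev/null &
tailpid=$!
sleep 1
mv "$log" "$log.old"        # rotation: old file moved aside
echo "line2" > "$log"       # new file appears under the original name
sleep 2
kill "$tailpid"
grep line2 "$out"           # present only because -F reattached
```

With -f in place of -F, line2 would never show up in the output.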
On Thu, Aug 17, 2017, 5:07 PM James McMahon wrote:
Interesting. Thank you, Russ. I wonder whether I somehow interrupted or
deleted the log file when I Ctrl-C'ed out of the tail -f? I'll have to
test that and see. -Jim
On Thu, Aug 17, 2017 at 4:57 PM, Russell Bateman wrote:
> James,
>
> It's the case that, NiFi running,
I tried copying nifi-kafka-0-10-nar-1.3.0.nar into the lib directory of my
MiNiFi installation but MiNiFi was still unable to instantiate the
PublishKafka_0_10 processor. Am I including the correct one? Do I need to
include any other files or make any other changes?
In our application it is
Hi Pierre. Very interesting that you ask. While we do not, when I last
performed the service nifi stop/start I was indeed running 'tail -f
./nifi-app.log' so that I could see when the Jetty server was ready to
support access through the UI. Why do you ask about this?
Jim
On Thu, Aug 17,
Hi James,
Out of curiosity do you have a TailFile processor configured to tail the
NiFi log file?
Thanks
On 17 Aug 2017 at 22:11, "James McMahon" wrote:
Thank you Joe. I agree and will monitor it closely going forward. I suspect
there were some external factors at play here.
On Thu, Aug 17, 2017 at 4:05 PM, Joe Witt wrote:
Ok, if 50,000 is the max then I'm doubtful that it ran out.
In the event of exhaustion of the allowed open file handle count, NiFi will
run but its behavior will be hard to reason about. It cannot create any new
files or open existing files, but can merely operate using the handles it
already has.
Hi Jeremy,
Thanks for bringing the lack of a listing to our attention. We actually
have documentation created to support this [1] but failed to publish and/or
reference it anywhere. We will address that with MINIFI-379 [2].
To the issue at hand, it is possible to make use of NARs from NiFi
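For what it's worth, a hedged sketch of what dropping a NiFi NAR into MiNiFi can look like (paths and versions are assumptions; the Kafka processor may also need the matching standard-services-api NAR present, and MiNiFi must be restarted to pick up new NARs):

```
# Illustrative only -- install paths and NAR versions assumed
cp nifi-kafka-0-10-nar-1.3.0.nar /opt/minifi/lib/
cp nifi-standard-services-api-nar-1.3.0.nar /opt/minifi/lib/
/opt/minifi/bin/minifi.sh restart
```

If the processor still fails to instantiate, the minifi-app.log startup output usually names the missing dependency.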
50,000.
Is NiFi robust enough to continue running when write attempts to the log
file fail?
It is back up and running like a champ now, so I will keep an eye on it.
On Thu, Aug 17, 2017 at 3:40 PM, Joe Witt wrote:
It sounds like a case of exhausted file handles.
Try 'ulimit -a'.
How many open files are allowed for the user NiFi runs as?
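Concretely, the checks might look like this (the /proc lookups assume a Linux host and are illustrative; substitute the real NiFi PID):

```shell
# Soft limit on open file descriptors for the current shell/user:
ulimit -n
# For a running NiFi JVM on Linux, inspect its actual limit and usage:
#   grep 'open files' /proc/<nifi-pid>/limits
#   ls /proc/<nifi-pid>/fd | wc -l
```

If the fd count is close to the limit, raising it in /etc/security/limits.conf (or the service unit) for the nifi user is the usual fix.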
On Aug 17, 2017 12:26 PM, "James McMahon" wrote:
Our NiFi instance appeared to be running fine, but we noticed that there
were no log files for today in the logs subdirectory. We could not find
any NiFi logs for today anywhere on our system.
I was surprised that NiFi continued to run. Has anyone experienced such
behavior?
How is NiFi able to
Steve,
We have dev and prod hdfs/NiFi/kafka/etc.
For directory layout in HDFS from NiFi, I feed data in raw based on the
Kafka topic and data type. I keep my directories in HDFS bucketed on
static/feed, then topic/feature, then timeframe/granularity, then (if
necessary) type.
e.g.
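To illustrate the bucketing described above, a hypothetical path (every segment name here is assumed, not from the thread) might look like:

```
/data/raw/clickstream/2017-08-17/json/
```

i.e. static/feed, then topic, then timeframe granularity, then type, so that downstream List/Fetch processors can scope to a single bucket.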
Hello,
How would I handle environment separation in HDFS? My initial thought was
to use a directory structure like /data/<env>/<...>/,
but I'm running into problems with reading the files back out of HDFS (for
example merging small files into larger files). For the ListHDFS processor,
it doesn't allow
Hey Andy,
Thanks for getting back to me. I’ve linked to the log files below. I do see
in nifi-bootstrap.log that the cert is trusted but like you said it doesn’t
look to be an SSL-specific issue. I will work on a remote debug session and
see if that gives me any additional clues.
I used the MergeContent processor to work out whether I had the five unique
files. Thanks for your suggestion - I like it.
On Thu, 17 Aug 2017 at 15:35 Carl Berndt wrote:
Hi,
Just thinking out loud, I wonder if you could take advantage of the
MergeContent processor.
Using the MergeContent processor, you would need to set a "Correlation
Attribute Name" - aka the "bin key". The bin key would be common to all 5 of
your files. Your bin key would group all 5 of your
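A hedged sketch of the relevant MergeContent properties (the attribute name "batch.id" is hypothetical; you would set it upstream, e.g. with UpdateAttribute, so that all five related files carry the same value):

```
Merge Strategy:             Bin-Packing Algorithm
Correlation Attribute Name: batch.id
Minimum Number of Entries:  5
Maximum Number of Entries:  5
```

With min and max both at 5, a bin is released exactly when all five correlated files have arrived.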