Thanks Joe. I am out of the hung state now. I actually deleted those folders
before, but some data probably did not get deleted on the first attempt.
I am using NiFi 1.1. My flow is:
ListS3 -> FetchS3Object -> RouteOnAttribute -> SplitText -> JoltTransform ->
CustomProcessor -> MergeContent (Bins=1, Entries=100) -> PostHttp
The max data is not more than 200 MB, but it is uploaded every 15 minutes.
I imagine what happened is that my server ran out of memory, and the following
code inside onTrigger() of my CustomProcessor started throwing
NullPointerExceptions at value.get():
final AtomicReference<String> value = new AtomicReference<>();

// Read the FlowFile content and transform it.
session.read(flowFile, new InputStreamCallback() {
    @Override
    public void process(final InputStream in) throws IOException {
        String results = null;
        try {
            String json = IOUtils.toString(in, "UTF-8");
            results = tranform(json);
        } catch (Exception e) {
            // Exception is swallowed, so a failed read or transform
            // (e.g. under memory pressure) silently leaves results null.
        }
        value.set(results);
    }
});

String results = value.get();
if (results != null && !results.isEmpty()) {
    flowFile = session.putAttribute(flowFile, "json", results);
}

// This write runs even when value holds null, so value.get().getBytes()
// is where the NullPointerException surfaces.
flowFile = session.write(flowFile, new OutputStreamCallback() {
    @Override
    public void process(OutputStream out) throws IOException {
        out.write(value.get().getBytes());
    }
});
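
For reference, here is how I am thinking of restructuring it; a minimal
sketch, assuming my processor defines REL_SUCCESS/REL_FAILURE relationships
and keeping my existing tranform() method. A failed transform then gets
logged and routed to failure instead of throwing an NPE:

import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.nio.charset.StandardCharsets;

import org.apache.commons.io.IOUtils;
import org.apache.nifi.processor.exception.ProcessException;
import org.apache.nifi.processor.io.StreamCallback;

// Inside onTrigger(): one StreamCallback reads and rewrites the content
// in a single pass, so no AtomicReference is needed.
try {
    flowFile = session.write(flowFile, new StreamCallback() {
        @Override
        public void process(final InputStream in, final OutputStream out)
                throws IOException {
            final String json = IOUtils.toString(in, StandardCharsets.UTF_8);
            final String results = tranform(json); // my existing method
            if (results == null || results.isEmpty()) {
                // Fail loudly instead of writing null content.
                throw new IOException("transform produced no output");
            }
            out.write(results.getBytes(StandardCharsets.UTF_8));
        }
    });
    session.transfer(flowFile, REL_SUCCESS);
} catch (final ProcessException pe) {
    // NiFi wraps exceptions thrown inside the callback in a ProcessException.
    getLogger().error("Transform failed for {}", new Object[]{flowFile}, pe);
    session.transfer(flowFile, REL_FAILURE);
}

I also dropped the putAttribute() of the whole JSON, since attribute values
are kept on the heap and large attributes on every FlowFile may well have
contributed to the memory problem.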
1. The code already looks less than ideal, but I am not sure what else could
cause the FlowFile's content to be null?
2. What should be done in case of these space issues to recover immediately?
You do not know what was processed and what did not go through. How can I
make it reprocess those files?
I am new to NiFi; maybe it is something I can configure my dataflow to
handle, but it would be good to know what people normally do. Thanks