[
https://issues.apache.org/jira/browse/MINIFI-356?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16106459#comment-16106459
]
Joseph Niemiec commented on MINIFI-356:
---------------------------------------
This is a novel idea. My findings from a few smaller device tests were that
the aggregate backpressure of all connections should be less than total
system memory; otherwise it's possible the kernel will OOM-kill the MiNiFi
C++ process. I think this first raises the question of whether we plan to
keep this behavior or come up with some type of intelligent cache loading
that lets us load content without memory issues. Based on the result of that
we could come up with some behavior here.
Perhaps some questions for everyone (this may be a better conversation for
the mailing lists):
# Do we plan on keeping the MiNiFi C++ connection storage limitations I
uncovered in my tests?
# How do we define a failure to write to disk? Is it a device failure?
Excessive IO wait? Checking some hash of what was written?
# If we moved to volatile storage, would it be a percentage of each
connection's limit? I.e., a 100 MB connection in volatile mode at 10% would
become 10 MB?
Overall I like the idea, but I wonder what kind of failure rates really
exist for storage media at this level... I'll try to look more into some
failure rates of differing media.
> Create repository failure policy
> --------------------------------
>
> Key: MINIFI-356
> URL: https://issues.apache.org/jira/browse/MINIFI-356
> Project: Apache NiFi MiNiFi
> Issue Type: Improvement
> Components: C++
> Reporter: marco polo
> Assignee: marco polo
>
> Create a failure policy for continuing operations if a repo failure occurs.
> I.e., if writing to disk fails above a threshold (100% for example), we can
> move to a volatile repo where we can continue operations and report that we
> have a failure.
--
This message was sent by Atlassian JIRA
(v6.4.14#64029)