I concur with Thomas: as with other chunks of Brooklyn state, in-memory
task history leads to unpleasant surprises following a server restart or
on promotion of a node in HA situations. It's that category of data that
is 99% discardable, but in a few cases can make the difference between
successfully debugging a problem and tearing one's hair out ;o)
A secondary benefit of using a persistent store would be to dramatically
lower the memory usage of long-running deployments with complex activity
patterns.
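To make that concrete, something as small as the sketch below would cover the
debugging case; the names and the record shape are purely illustrative, not a
proposal for an actual API.

    /**
     * Illustrative sketch only: a durable store for completed-task records, so the
     * in-memory ExecutionManager can drop the heavyweight parts (stdout/stderr,
     * stack traces) as soon as a task finishes and fetch them back on demand.
     */
    public interface TaskHistoryStore {

        /** Record of a finished task; the field set here is just an example. */
        class TaskRecord {
            public String taskId;
            public String entityId;
            public String displayName;
            public long endTimeUtc;
            public String stdout;
            public String stderr;
            public String errorMessage;
        }

        /** Called once when a task completes; implementations persist the record durably. */
        void append(TaskRecord record);

        /** Called lazily, e.g. when the UI asks for the streams of an old task. */
        TaskRecord get(String taskId);
    }

Completed tasks would then cost only a small in-memory stub each, with the
bulky stream contents living on disk (or wherever the store keeps them) until
someone actually asks for them.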
A.
--
Alasdair Hodge
Principal Engineer,
Cloudsoft Corporation
On 17/05/2017 12:01, Thomas Bouron wrote:
Hi Richard.
I'm really happy that you brought this topic up, because it has happened to me
in the past and is becoming more and more frequent now that we are running
Brooklyn in Karaf.
AFAIK, the only way to get this information back is to scan the Brooklyn
debug log, but that is neither easy nor ideal. You could set up an ELK stack to
process the logs, but again, that means setting up something external, which we
don't advocate.
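In practice, scanning the log means grepping for the task id, something like
this (where <task-id> stands for the id shown in the activity view,
brooklyn.debug.log is the default debug log file, and the exact layout of the
matching lines depends on your logging config and version):

    # keep a little context around each line mentioning the failed task
    grep -B 2 -A 20 '<task-id>' brooklyn.debug.log

That works, but it is exactly the kind of manual digging I would rather avoid.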
From a personal point of view, I really think it's a big flaw in Brooklyn
because:
1. you lose very important data over time
2. even worse, this data is not persisted, so if you restart Brooklyn it's
just gone
Typically, I would be wary of using it in production if I cannot quickly
debug what is going on from the UI/CLI.
I think that shipping Brooklyn with an embedded datastore would be a good way
to solve this. I would lean toward Elasticsearch for this, but we could choose
something else (such as Cassandra) if we want to be 100% Apache.
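To give an idea of what I mean: once a task finishes, indexing it could be as
little as one HTTP call. The sketch below is purely illustrative (the index
name "brooklyn-activity", the document shape and the use of a bare
HttpURLConnection are all made up for the example); a real integration would
hook into the ExecutionManager and use a proper client.

    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.nio.charset.StandardCharsets;

    /**
     * Illustrative only: pushes a finished task's metadata and stream contents
     * into an Elasticsearch index over its REST API.
     */
    public class TaskHistoryIndexer {

        private final String elasticsearchBase; // e.g. "http://localhost:9200"

        public TaskHistoryIndexer(String elasticsearchBase) {
            this.elasticsearchBase = elasticsearchBase;
        }

        /** Index one completed task; returns the HTTP status from Elasticsearch. */
        public int index(String taskId, String entityId, String displayName,
                         long endTimeUtc, String stdout, String stderr) throws Exception {
            String json = "{"
                    + "\"taskId\":\"" + escape(taskId) + "\","
                    + "\"entityId\":\"" + escape(entityId) + "\","
                    + "\"displayName\":\"" + escape(displayName) + "\","
                    + "\"endTimeUtc\":" + endTimeUtc + ","
                    + "\"stdout\":\"" + escape(stdout) + "\","
                    + "\"stderr\":\"" + escape(stderr) + "\""
                    + "}";

            // PUT /<index>/<type>/<id> creates or replaces the document
            URL url = new URL(elasticsearchBase + "/brooklyn-activity/task/" + taskId);
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setRequestMethod("PUT");
            conn.setDoOutput(true);
            conn.setRequestProperty("Content-Type", "application/json");
            try (OutputStream out = conn.getOutputStream()) {
                out.write(json.getBytes(StandardCharsets.UTF_8));
            }
            return conn.getResponseCode();
        }

        /** Very small JSON string escaper, enough for the example. */
        private static String escape(String s) {
            return s == null ? "" : s.replace("\\", "\\\\").replace("\"", "\\\"")
                    .replace("\n", "\\n").replace("\r", "\\r");
        }
    }

The nice side effect is that the activity history then becomes searchable (by
entity, by error message, and so on) instead of only being browsable while it
happens to still be in memory.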
Best.
On Wed, 17 May 2017 at 11:41 Richard Downer <[email protected]> wrote:
I seem to be seeing this:
<contents-garbage-collected>
a lot recently when I inspect the stdin/stdout/stderr of a failed task. Often
I never get a chance to see the contents, even if I go to look at the task as
soon as it fails.
Is there any way I can extend the availability of this data, or prevent it
from being garbage collected?
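The closest thing I have found so far is the brooklyn.gc.* settings in
brooklyn.properties. If I am reading BrooklynGarbageCollector correctly,
something like the following should keep more (and older) task history around,
but I have not verified the key names, the value formats, or whether it covers
the stream contents specifically:

    # All of these are guesses from a quick read of BrooklynGarbageCollector;
    # please correct me if the keys or the duration format are wrong.
    brooklyn.gc.maxTaskAge=7d
    brooklyn.gc.maxTasksPerEntity=1000
    brooklyn.gc.maxTasksPerTag=500
    brooklyn.gc.maxTasksGlobal=50000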
Thanks
Richard