Hi,

I was trying to run a feature model based application with slightly
less memory than usual and noticed that it failed outright when I
tried to launch it with 256MiB of RAM. The heap dump was very small,
so while tracing the problem back I noticed that the feature launcher
allocates a 256MiB byte array at [1] for reading feature archive
files. It's a local variable so it does not leak, but it does impose
a hard floor on the heap size.
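
For illustration, the allocation is roughly of this shape
(paraphrased, not the exact code; see [1] for the real thing):

    // illustrative only -- a 256MiB buffer allocated up front as a
    // local variable, regardless of how big the archive actually is
    byte[] buffer = new byte[256 * 1024 * 1024];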

Is this something that is done intentionally? I think that for more
lightweight applications that don't consume a lot of heap at runtime
this allocation is too aggressive. If we want to keep this size
(although maybe 256 KiB was intended?), we could at least allocate
the buffer on demand, the first time a feature archive is read.
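
Something along these lines would avoid the up-front cost (a rough
sketch; the field and method names are made up for illustration):

    // sketch of on-demand allocation; names are hypothetical
    private byte[] buffer;

    private byte[] getBuffer() {
        if (buffer == null) {
            // pay the 256MiB cost only when an archive is read
            buffer = new byte[256 * 1024 * 1024];
        }
        return buffer;
    }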

As a data point, I can run the Starter just fine on my laptop with a
280MiB heap.

Thoughts?

Thanks,
Robert

[1]:
https://github.com/apache/sling-org-apache-sling-feature-launcher/blob/36b7fe229780b06f81db0a97f2e8e86726a3158c/src/main/java/org/apache/sling/feature/launcher/impl/FeatureProcessor.java#L111
