stefan-egli commented on PR #1619: URL: https://github.com/apache/jackrabbit-oak/pull/1619#issuecomment-2269266471
IIUC this PR has now addressed the [suggestion](https://issues.apache.org/jira/browse/OAK-10803?focusedCommentId=17869342&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-17869342) on OAK-10803, which I do think we should follow up on. What about doing it in two steps, though:

* first address those suggestions (bring memory consumption back to pre-compression levels)
* then look into performance improvements

Currently this PR seems to mix both concerns, which makes the review discussion a bit more complex.

That said, regarding the performance improvements: I still think we need to address the performance aspect differently. As it stands now, the first call to `decompress()` will expand the value back to its original state (I would then perhaps have set `value` instead of introducing/duplicating `decompressedValue`), which means it will use up the original amount of memory again, at which point the gain from compression is lost. The issue is that `decompress()` will be called almost immediately after a property is created, namely when it needs to be put into the cache and when its memory consumption is estimated. Hence there would be zero gain from compression if decompression were done as it stands now. (We could, for example, look into how the memory consumption calculation could be fixed; after all, it is probably broken with compression anyway, and perhaps that could make the compressed state live longer.)
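To illustrate the concern, here is a minimal, hypothetical sketch, not Oak's actual property code: the class and method names (`CompressedValue`, `decompress()`, `estimatedMemory()`) are invented for this example. It shows how caching the expanded bytes on the first `decompress()` brings the memory estimate right back to the pre-compression footprint:

```java
import java.util.Arrays;
import java.util.zip.DataFormatException;
import java.util.zip.Deflater;
import java.util.zip.Inflater;

// Hypothetical sketch of the pattern under discussion: the value is stored
// compressed, but the first decompress() caches the expanded bytes, so the
// memory footprint climbs back to (at least) the original size.
class CompressedValue {
    private final byte[] compressed;
    private byte[] decompressedValue; // cached after the first decompress()

    CompressedValue(byte[] raw) {
        Deflater deflater = new Deflater();
        deflater.setInput(raw);
        deflater.finish();
        // +64 slack is enough for this sketch's compressible inputs
        byte[] buf = new byte[raw.length + 64];
        int n = deflater.deflate(buf);
        deflater.end();
        this.compressed = Arrays.copyOf(buf, n);
    }

    byte[] decompress() {
        if (decompressedValue == null) {
            try {
                Inflater inflater = new Inflater();
                inflater.setInput(compressed);
                byte[] buf = new byte[1 << 20]; // big enough for the sketch
                int n = inflater.inflate(buf);
                inflater.end();
                decompressedValue = Arrays.copyOf(buf, n);
            } catch (DataFormatException e) {
                throw new IllegalStateException(e);
            }
        }
        return decompressedValue;
    }

    // Once the cache/memory-estimation path has triggered decompress(),
    // the estimate includes the full original size again.
    long estimatedMemory() {
        long size = compressed.length;
        if (decompressedValue != null) {
            size += decompressedValue.length;
        }
        return size;
    }
}
```

Before `decompress()` is called, `estimatedMemory()` reflects only the compressed size; after the first call (which, per the comment above, happens almost immediately when the property is cached and sized), it is back above the original size, so the compression gain is gone.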
