It is possible that during regular filesystem operation the on-flash footprint 
becomes shaped such that a subsequent NFFS restore transiently exceeds the 
configured maximum number of blocks at its peak. This is caused by a difference 
between the restore algorithm and the runtime algorithm, even though both 
should lead to the same expected NFFS runtime variable state.

A proposed resolution is to rework the NFFS restore into a more sequential 
process than it is now:
* 1st, restore inodes to RAM.
* 2nd, restore valid blocks to RAM; because the inode state is already known 
at that point, the maximum block number should not be exceeded.
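The two-pass idea can be illustrated with a minimal sketch. This is not the NFFS API: the record layout, type names, and `MAX_BLOCKS` constant below are all hypothetical stand-ins for the real on-flash objects and `NFFS_CONFIG_MAX_BLOCKS`. The point is only that when inode liveness is established in pass 1, pass 2 never brings blocks of deleted files into RAM, so the in-RAM block count cannot peak above the number of genuinely valid blocks.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical on-flash record types; real NFFS objects differ. */
enum rec_type { REC_INODE, REC_BLOCK };

struct rec {
    enum rec_type type;
    int id;        /* inode id, or owning inode id for a block */
    bool deleted;  /* record marks this inode as deleted */
};

#define MAX_INODES 8
#define MAX_BLOCKS 4   /* stand-in for NFFS_CONFIG_MAX_BLOCKS */

/* Pass 1: restore inodes only, so liveness is known up front. */
static void
restore_inodes(const struct rec *disk, size_t n, bool *live)
{
    for (size_t i = 0; i < n; i++) {
        if (disk[i].type == REC_INODE) {
            live[disk[i].id] = !disk[i].deleted;
        }
    }
}

/* Pass 2: restore only blocks owned by live inodes.  Blocks of
 * deleted files never enter RAM, so the block count cannot peak
 * above the number of valid blocks. */
static int
restore_blocks(const struct rec *disk, size_t n,
               const bool *live, int *nblocks)
{
    for (size_t i = 0; i < n; i++) {
        if (disk[i].type == REC_BLOCK && live[disk[i].id]) {
            if (++(*nblocks) > MAX_BLOCKS) {
                return -1; /* would exceed the configured maximum */
            }
        }
    }
    return 0;
}
```

With a footprint containing one live file with one block and one deleted file with four stale blocks, a single-pass restore would transiently hold five blocks (exceeding the cap of four), while the two-pass version restores only the one valid block.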


[ Full content available at: https://github.com/apache/mynewt-nffs/issues/10 ]