On Wed, Aug 28, 2019 at 07:22:36PM +0000, Vikas MANOCHA wrote:
> Hi,
>
> > -----Original Message-----
> > From: Stephen Warren <[email protected]>
> > Sent: Tuesday, August 27, 2019 7:50 PM
> > To: Vikas MANOCHA <[email protected]>; Tom Rini <[email protected]>
> > Cc: [email protected]; [email protected]; Stephen Warren <[email protected]>
> > Subject: Re: [PATCH] board_f: fix noncached reservation calculation
> >
> > On 8/27/19 6:01 PM, Vikas MANOCHA wrote:
> > > Stephen Warren wrote at Tuesday, August 27, 2019 3:50 PM
> > >> On 8/27/19 4:10 PM, Vikas MANOCHA wrote:
> > >>> Stephen Warren wrote at Tuesday, August 27, 2019 10:55 AM
> > >>>> The current code in reserve_noncached() has two issues:
> > >>>>
> > >>>> 1) The first update of gd->start_addr_sp always rounds down to a
> > >>>> section start. However, the equivalent calculation in
> > >>>> cache.c:noncached_init() always first rounds up to a section start,
> > >>>> then subtracts a section size. These two calculations differ if the
> > >>>> initial value is already rounded to section alignment.
> > >>>
> > >>> It shouldn't cause any issue; the first one rounds down to the
> > >>> section size. The second one (cache.c:noncached_init()) rounds up,
> > >>> so it needs the section size subtraction.
> > >>
> > >> Here's an example where it fails, based on the code before my patch:
> > >>
> > >> Assume that the MMU section size is 2, and that mem_malloc_start and
> > >> gd->start_addr_sp are both 1000M on entry to the functions, and the
> > >> noncached region is 1 (what Jetson TX1 uses). The example uses values
> > >> assumed to be multiples of 1M to make the numbers easier to read.
> > >>
> > >> noncached_init:
> > >>
> > >> // mem_malloc_start = 1000
> > >> end = ALIGN(mem_malloc_start, MMU_SECTION_SIZE) - MMU_SECTION_SIZE;
> > >> // end = 1000 - 2 = 998 // was already aligned, so 1000 not 1002
> > >> size = ALIGN(CONFIG_SYS_NONCACHED_MEMORY, MMU_SECTION_SIZE);
> > >> // size = 2
> > >> start = end - size; // start = 998 - 2 = 996
> > >> // region is 996...998
> > >
> > > Thanks for this example; it definitely seems to be a bug. It's just
> > > that we are fixing it by adding this gap in reserve_noncached() as
> > > well. Better would be to fix the subtraction of MMU_SECTION_SIZE by
> > > aligning the "end" location down, like:
> > >
> > > end = ALIGN_DOWN(mem_malloc_start, MMU_SECTION_SIZE); // end = 1000
> > > size = ALIGN(CONFIG_SYS_NONCACHED_MEMORY, MMU_SECTION_SIZE); // size = 2
> > > start = end - size; // start = 998
> >
> > That would change yet another piece of code that's been stable for a
> > while. It's late in the U-Boot release cycle, so I think we should be
> > conservative and not change any more code than necessary. Changing lots
> > of extra code would run the risk of introducing more regressions. I'd
> > rather (a) apply the original change I posted, which adjusts only the
> > code that caused the regression, or (b) revert the patch that caused
> > the regression.
>
> Ok, either way is fine.
>
> > If you want to adjust the code in noncached_init, we can do that
> > immediately after the release, to give maximum time for any regressions
> > to be debugged and fixed before the next release.
>
> Ok.
So this patch keeps your use case working and fixes Stephen's problem, to
be clear?  Thanks guys!

-- 
Tom
_______________________________________________
U-Boot mailing list
[email protected]
https://lists.denx.de/listinfo/u-boot

