Re: [PATCH] arm64: kaslr: Reserve size of ARM64_MEMSTART_ALIGN in linear region

2019-01-16 Thread Yueyi Li


On 2019/1/16 15:51, Ard Biesheuvel wrote:
> On Wed, 16 Jan 2019 at 04:37, Yueyi Li  wrote:
>> OK, thanks. But it seems this mail was ignored; do I need to re-send the patch?
>>
>> On 2018/12/26 21:49, Ard Biesheuvel wrote:
>>> On Tue, 25 Dec 2018 at 03:30, Yueyi Li  wrote:
>>>> Hi Ard,
>>>>
>>>>
>>>> On 2018/12/24 17:45, Ard Biesheuvel wrote:
>>>>> Does the following change fix your issue as well?
>>>>>
>>>>> index 9b432d9fcada..9dcf0ff75a11 100644
>>>>> --- a/arch/arm64/mm/init.c
>>>>> +++ b/arch/arm64/mm/init.c
>>>>> @@ -447,7 +447,7 @@ void __init arm64_memblock_init(void)
>>>>> * memory spans, randomize the linear region as well.
>>>>> */
>>>>>if (memstart_offset_seed > 0 && range >= 
>>>>> ARM64_MEMSTART_ALIGN) {
>>>>> -   range = range / ARM64_MEMSTART_ALIGN + 1;
>>>>> +   range /= ARM64_MEMSTART_ALIGN;
>>>>>memstart_addr -= ARM64_MEMSTART_ALIGN *
>>>>> ((range * 
>>>>> memstart_offset_seed) >> 16);
>>>>>}
>>>> Yes, it can fix this also. I just think modifying the first *range*
>>>> calculation would be easier to grasp; what do you think?
>>>>
>>> I don't think there is a difference, to be honest, but I will leave it
>>> up to the maintainers to decide which approach they prefer.
> No, it has been merged already. It is in v5.0-rc2, I think.

OK, thanks. :-)


Re: [PATCH] arm64: kaslr: Reserve size of ARM64_MEMSTART_ALIGN in linear region

2019-01-15 Thread Yueyi Li
OK, thanks. But it seems this mail was ignored; do I need to re-send the patch?

On 2018/12/26 21:49, Ard Biesheuvel wrote:
> On Tue, 25 Dec 2018 at 03:30, Yueyi Li  wrote:
>> Hi Ard,
>>
>>
>> On 2018/12/24 17:45, Ard Biesheuvel wrote:
>>> Does the following change fix your issue as well?
>>>
>>> index 9b432d9fcada..9dcf0ff75a11 100644
>>> --- a/arch/arm64/mm/init.c
>>> +++ b/arch/arm64/mm/init.c
>>> @@ -447,7 +447,7 @@ void __init arm64_memblock_init(void)
>>>* memory spans, randomize the linear region as well.
>>>*/
>>>   if (memstart_offset_seed > 0 && range >= 
>>> ARM64_MEMSTART_ALIGN) {
>>> -   range = range / ARM64_MEMSTART_ALIGN + 1;
>>> +   range /= ARM64_MEMSTART_ALIGN;
>>>   memstart_addr -= ARM64_MEMSTART_ALIGN *
>>>((range * memstart_offset_seed) 
>>> >> 16);
>>>   }
>> Yes, it can fix this also. I just think modifying the first *range*
>> calculation would be easier to grasp; what do you think?
>>
> I don't think there is a difference, to be honest, but I will leave it
> up to the maintainers to decide which approach they prefer.



Re: [PATCH] arm64: kaslr: Reserve size of ARM64_MEMSTART_ALIGN in linear region

2018-12-24 Thread Yueyi Li
Hi Ard,


On 2018/12/24 17:45, Ard Biesheuvel wrote:
> Does the following change fix your issue as well?
>
> index 9b432d9fcada..9dcf0ff75a11 100644
> --- a/arch/arm64/mm/init.c
> +++ b/arch/arm64/mm/init.c
> @@ -447,7 +447,7 @@ void __init arm64_memblock_init(void)
>   * memory spans, randomize the linear region as well.
>   */
>  if (memstart_offset_seed > 0 && range >= 
> ARM64_MEMSTART_ALIGN) {
> -   range = range / ARM64_MEMSTART_ALIGN + 1;
> +   range /= ARM64_MEMSTART_ALIGN;
>  memstart_addr -= ARM64_MEMSTART_ALIGN *
>   ((range * memstart_offset_seed) >> 
> 16);
>  }

Yes, it can fix this also. I just think modifying the first *range*
calculation would be easier to grasp; what do you think?



Thanks,
Yueyi


[PATCH] arm64: kaslr: Reserve size of ARM64_MEMSTART_ALIGN in linear region

2018-12-23 Thread Yueyi Li
When KASLR is enabled (CONFIG_RANDOMIZE_BASE=y), the top 4K of virtual
address space can end up mapped to physical memory, even though that
range is expected to be left free as room for ERR_PTR values.

It can also cause wraparound issues wherever code uses the last memory
page without an overflow check, for example when the last page is
compressed by LZO:

[ 2738.034508] Unable to handle kernel NULL pointer dereference at virtual 
address 0009
[ 2738.034515] Mem abort info:
[ 2738.034518]   Exception class = DABT (current EL), IL = 32 bits
[ 2738.034520]   SET = 0, FnV = 0
[ 2738.034523]   EA = 0, S1PTW = 0
[ 2738.034524]   FSC = 5
[ 2738.034526] Data abort info:
[ 2738.034528]   ISV = 0, ISS = 0x0005
[ 2738.034530]   CM = 0, WnR = 0
[ 2738.034533] user pgtable: 4k pages, 39-bit VAs, pgd = 94cee000
[ 2738.034535] [0009] *pgd=, *pud=
...
[ 2738.034592] pc : lzo1x_1_do_compress+0x198/0x610
[ 2738.034595] lr : lzo1x_1_compress+0x98/0x3d8
[ 2738.034597] sp : ff801caa3470 pstate : 00c00145
[ 2738.034598] x29: ff801caa3500 x28: 1000
[ 2738.034601] x27: 1000 x26: f000
[ 2738.034604] x25: 4ebc x24: 
[ 2738.034607] x23: 004c x22: f7b8
[ 2738.034610] x21: 2e2ee0b3 x20: 2e2ee0bb
[ 2738.034612] x19: 0fcc x18: f84a
[ 2738.034615] x17: 801b03d6 x16: 0782
[ 2738.034618] x15: 2e2ee0bf x14: fff0
[ 2738.034620] x13: 000f x12: 0020
[ 2738.034623] x11: 1824429d x10: ffec
[ 2738.034626] x9 : 0009 x8 : 
[ 2738.034628] x7 : 0868 x6 : 0434
[ 2738.034631] x5 : 4ebc x4 : 
[ 2738.034633] x3 : ff801caa3510 x2 : 2e2ee000
[ 2738.034636] x1 :  x0 : f000
...
[ 2738.034717] Process kworker/u16:1 (pid: 8705, stack limit = 
0xff801caa)
[ 2738.034720] Call trace:
[ 2738.034722]  lzo1x_1_do_compress+0x198/0x610
[ 2738.034725]  lzo_compress+0x48/0x88
[ 2738.034729]  crypto_compress+0x14/0x20
[ 2738.034733]  zcomp_compress+0x2c/0x38
[ 2738.034736]  zram_bvec_rw+0x3d0/0x860
[ 2738.034738]  zram_rw_page+0x88/0xe0
[ 2738.034742]  bdev_write_page+0x70/0xc0
[ 2738.034745]  __swap_writepage+0x58/0x3f8
[ 2738.034747]  swap_writepage+0x40/0x50
[ 2738.034750]  shrink_page_list+0x4fc/0xe58
[ 2738.034753]  reclaim_pages_from_list+0xa0/0x150
[ 2738.034756]  reclaim_pte_range+0x18c/0x1f8
[ 2738.034759]  __walk_page_range+0xf8/0x1e0
[ 2738.034762]  walk_page_range+0xf8/0x130
[ 2738.034765]  reclaim_task_anon+0xcc/0x168
[ 2738.034767]  swap_fn+0x438/0x668
[ 2738.034771]  process_one_work+0x1fc/0x460
[ 2738.034773]  worker_thread+0x2d0/0x478
[ 2738.034775]  kthread+0x110/0x120
[ 2738.034778]  ret_from_fork+0x10/0x18
[ 2738.034781] Code: 3800167f 54a8 d100066f 1431 (b9400131)
[ 2738.034784] ---[ end trace 9b5cca106f0e54d1 ]---
[ 2738.035473] Kernel panic - not syncing: Fatal exception

in = 0xf000
in_len = 4096
ip = x9 = 0x0009 has overflowed.

Always reserve the size of one ARM64_MEMSTART_ALIGN region at the end
of the linear region.

Signed-off-by: liyueyi 
---
 arch/arm64/mm/init.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index 0340e45..20fe11e 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -439,7 +439,8 @@ void __init arm64_memblock_init(void)
if (IS_ENABLED(CONFIG_RANDOMIZE_BASE)) {
extern u16 memstart_offset_seed;
u64 range = linear_region_size -
-   (memblock_end_of_DRAM() - memblock_start_of_DRAM());
+  (memblock_end_of_DRAM() - memblock_start_of_DRAM()) -
+  ARM64_MEMSTART_ALIGN;
 
/*
 * If the size of the linear region exceeds, by a sufficient
-- 
2.7.4



Re: [PATCH v2] lzo: fix ip overrun during compress.

2018-12-18 Thread Yueyi Li
Hi Markus & Kees,

On 2018/12/17 0:56, Markus F.X.J. Oberhumer wrote:
> Yueyi,
>
> if ASLR does indeed exclude the last page (like it should), how do
> you get the invalid (0xf000, 4096) mapping then?
Regarding the following code: KASLR is aligned to ARM64_MEMSTART_ALIGN,
and I don't think it excludes the top 4K of the address space.

```
if (IS_ENABLED(CONFIG_RANDOMIZE_BASE)) {
 extern u16 memstart_offset_seed;
 u64 range = linear_region_size -
(bootloader_memory_limit - memblock_start_of_DRAM());

 /*
  * If the size of the linear region exceeds, by a sufficient
  * margin, the size of the region that the available physical
  * memory spans, randomize the linear region as well.
  */
 if (memstart_offset_seed > 0 && range >= ARM64_MEMSTART_ALIGN) {
 range = range / ARM64_MEMSTART_ALIGN + 1;
 memstart_addr -= ARM64_MEMSTART_ALIGN *
  ((range * memstart_offset_seed) >> 16);
 }
}
```

Thanks,
Yueyi



[PATCH v3] lzo: fix ip overrun during compress.

2018-12-11 Thread Yueyi Li
An ip overrun is possible in lzo1x_1_do_compress() when the page being
compressed sits at the very end of memory, at virtual address
0xf000, leading to a NULL pointer access in get_unaligned_le32(ip).

Fix this panic:
[ 2738.034508] Unable to handle kernel NULL pointer dereference at virtual 
address 0009
[ 2738.034515] Mem abort info:
[ 2738.034518]   Exception class = DABT (current EL), IL = 32 bits
[ 2738.034520]   SET = 0, FnV = 0
[ 2738.034523]   EA = 0, S1PTW = 0
[ 2738.034524]   FSC = 5
[ 2738.034526] Data abort info:
[ 2738.034528]   ISV = 0, ISS = 0x0005
[ 2738.034530]   CM = 0, WnR = 0
[ 2738.034533] user pgtable: 4k pages, 39-bit VAs, pgd = 94cee000
[ 2738.034535] [0009] *pgd=, *pud=
...
[ 2738.034592] pc : lzo1x_1_do_compress+0x198/0x610
[ 2738.034595] lr : lzo1x_1_compress+0x98/0x3d8
[ 2738.034597] sp : ff801caa3470 pstate : 00c00145
[ 2738.034598] x29: ff801caa3500 x28: 1000
[ 2738.034601] x27: 1000 x26: f000
[ 2738.034604] x25: 4ebc x24: 
[ 2738.034607] x23: 004c x22: f7b8
[ 2738.034610] x21: 2e2ee0b3 x20: 2e2ee0bb
[ 2738.034612] x19: 0fcc x18: f84a
[ 2738.034615] x17: 801b03d6 x16: 0782
[ 2738.034618] x15: 2e2ee0bf x14: fff0
[ 2738.034620] x13: 000f x12: 0020
[ 2738.034623] x11: 1824429d x10: ffec
[ 2738.034626] x9 : 0009 x8 : 
[ 2738.034628] x7 : 0868 x6 : 0434
[ 2738.034631] x5 : 4ebc x4 : 
[ 2738.034633] x3 : ff801caa3510 x2 : 2e2ee000
[ 2738.034636] x1 :  x0 : f000
...
[ 2738.034717] Process kworker/u16:1 (pid: 8705, stack limit = 
0xff801caa)
[ 2738.034720] Call trace:
[ 2738.034722]  lzo1x_1_do_compress+0x198/0x610
[ 2738.034725]  lzo_compress+0x48/0x88
[ 2738.034729]  crypto_compress+0x14/0x20
[ 2738.034733]  zcomp_compress+0x2c/0x38
[ 2738.034736]  zram_bvec_rw+0x3d0/0x860
[ 2738.034738]  zram_rw_page+0x88/0xe0
[ 2738.034742]  bdev_write_page+0x70/0xc0
[ 2738.034745]  __swap_writepage+0x58/0x3f8
[ 2738.034747]  swap_writepage+0x40/0x50
[ 2738.034750]  shrink_page_list+0x4fc/0xe58
[ 2738.034753]  reclaim_pages_from_list+0xa0/0x150
[ 2738.034756]  reclaim_pte_range+0x18c/0x1f8
[ 2738.034759]  __walk_page_range+0xf8/0x1e0
[ 2738.034762]  walk_page_range+0xf8/0x130
[ 2738.034765]  reclaim_task_anon+0xcc/0x168
[ 2738.034767]  swap_fn+0x438/0x668
[ 2738.034771]  process_one_work+0x1fc/0x460
[ 2738.034773]  worker_thread+0x2d0/0x478
[ 2738.034775]  kthread+0x110/0x120
[ 2738.034778]  ret_from_fork+0x10/0x18
[ 2738.034781] Code: 3800167f 54a8 d100066f 1431 (b9400131)
[ 2738.034784] ---[ end trace 9b5cca106f0e54d1 ]---
[ 2738.035473] Kernel panic - not syncing: Fatal exception

crash> dis lzo1x_1_do_compress+100 3 -l
../kernel/msm-4.14/lib/lzo/lzo1x_compress.c: 44
0xff8dec8c6af4 :   cmp x9, x10
0xff8dec8c6af8 :   b.cc0xff8dec8c6c28
0xff8dec8c6afc :   b   0xff8dec8c7094

crash> dis lzo1x_1_do_compress+0x198
0xff8dec8c6c28 :   ldr w17, [x9]

ip = x9 = 0x0009 has overflowed.

Signed-off-by: liyueyi 
---
 Changes in v3: Check overflow in lzo1x_1_compress.
 Changes in v2: Re-define OVERFLOW_ADD_CHECK.

 lib/lzo/lzo1x_compress.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/lib/lzo/lzo1x_compress.c b/lib/lzo/lzo1x_compress.c
index 236eb21..959dec4 100644
--- a/lib/lzo/lzo1x_compress.c
+++ b/lib/lzo/lzo1x_compress.c
@@ -224,8 +224,8 @@ int lzo1x_1_compress(const unsigned char *in, size_t in_len,
 
while (l > 20) {
size_t ll = l <= (M4_MAX_OFFSET + 1) ? l : (M4_MAX_OFFSET + 1);
-   uintptr_t ll_end = (uintptr_t) ip + ll;
-   if ((ll_end + ((t + ll) >> 5)) <= ll_end)
+   // check for address space wraparound
+   if (((uintptr_t) ip + ll + ((t + ll) >> 5)) <= (uintptr_t) ip)
break;
BUILD_BUG_ON(D_SIZE * sizeof(lzo_dict_t) > 
LZO1X_1_MEM_COMPRESS);
memset(wrkmem, 0, D_SIZE * sizeof(lzo_dict_t));
-- 
2.7.4



Re: [PATCH v2] lzo: fix ip overrun during compress.

2018-12-11 Thread Yueyi Li
Hi Markus,

OK, thanks. I'll change it in v3.

Thanks,
Yueyi

On 2018/12/6 23:03, Markus F.X.J. Oberhumer wrote:
> Hi Yueyi,
>
> yes, my LZO patch works for all cases.
>
> The reason behind the issue in the first place is that if KASLR
> includes the very last page 0xf000 then we do not have a
> valid C "pointer to an object" anymore because of address wraparound.
>
> Unrelated to my patch I'd argue that KASLR should *NOT* include the
> very last page - indeed that might cause similar wraparound problems
> in lots of code.
>
> Eg, look at this simple clear_memory() implementation:
>
> void clear_memory(char *p, size_t len) {
>  char *end = p + len;
>  while (p < end)
>  *p++= 0;
> }
>
> Valid code like this will fail horribly when (p, len) is the very
> last virtual page (because end will be the NULL pointer in this case).
>
> Cheers,
> Markus
>
>
>
> On 2018-12-05 04:07, Yueyi Li wrote:
>> Hi Markus,
>>
>> Thanks for your review.
>>
>> On 2018/12/4 18:20, Markus F.X.J. Oberhumer wrote:
>>> Hi,
>>>
>>> I don't think that address space wraparound is legal in C, but I
>>> understand that we are in kernel land and if you really want to
>>> compress the last virtual page 0xf000 the following
>>> small patch should fix that dubious case.
>> I guess the VA 0xf000 is available because KASLR be
>> enabled. For this case we can see:
>>
>> crash> kmem 0xf000
>> PAGE   PHYSICAL  MAPPING   INDEX CNT FLAGS
>> ffbfffc01f000 1655ecb9  7181fd5  2
>> 80064209 locked,uptodate,owner_priv_1,writeback,reclaim,swapbacked
>>
>>> This also avoids slowing down the the hot path of the compression
>>> core function.
>>>
>>> Cheers,
>>> Markus
>>>
>>>
>>>
>>> diff --git a/lib/lzo/lzo1x_compress.c b/lib/lzo/lzo1x_compress.c
>>> index 236eb21167b5..959dec45f6fe 100644
>>> --- a/lib/lzo/lzo1x_compress.c
>>> +++ b/lib/lzo/lzo1x_compress.c
>>> @@ -224,8 +224,8 @@ int lzo1x_1_compress(const unsigned char *in, size_t 
>>> in_len,
>>>
>>>   while (l > 20) {
>>>   size_t ll = l <= (M4_MAX_OFFSET + 1) ? l : (M4_MAX_OFFSET 
>>> + 1);
>>> -   uintptr_t ll_end = (uintptr_t) ip + ll;
>>> -   if ((ll_end + ((t + ll) >> 5)) <= ll_end)
>>> +   // check for address space wraparound
>>> +   if (((uintptr_t) ip + ll + ((t + ll) >> 5)) <= (uintptr_t) 
>>> ip)
>>>   break;
>>>   BUILD_BUG_ON(D_SIZE * sizeof(lzo_dict_t) > 
>>> LZO1X_1_MEM_COMPRESS);
>>>   memset(wrkmem, 0, D_SIZE * sizeof(lzo_dict_t));
>> I parsed panic ramdump and loaded CPU register values,  can see:
>>
>> -000|lzo1x_1_do_compress(
>>   |in = 0xF000,
>>   |  ?,
>>   |out = 0x2E2EE000,
>>   |out_len = 0xFF801CAA3510,
>>   |  ?,
>>   |wrkmem = 0x4EBC)
>>   |  dict = 0x4EBC
>>   |  op = 0x1
>>   |  ip = 0x9
>>   |  ii = 0x9
>>   |  in_end = 0x0
>>   |  ip_end = 0xFFEC
>>   |  m_len = 0
>>   |  m_off = 1922
>> -001|lzo1x_1_compress(
>>   |in = 0xF000,
>>   |in_len = 0,
>>   |out = 0x2E2EE000,
>>   |out_len = 0x0001616FB7D0,
>>   |wrkmem = 0x4EBC)
>>   |  ip = 0xF000
>>   |  op = 0x2E2EE000
>>   |  l = 4096
>>   |  t = 0
>>   |  ll = 4096
>>
>> ll = l = in_len = 4096 in lzo1x_1_compress,  so your patch is working
>> for this panic case, but, I`m
>> not sure, is it possible that in = 0xF000 and  in_len < 4096?
>>
>>
>> Thanks,
>> Yueyi
>>



Re: [PATCH v2] memblock: Annotate memblock_is_reserved() with __init_memblock.

2018-12-04 Thread Yueyi Li


On 2018/12/4 11:04, Wei Yang wrote:
> On Mon, Dec 03, 2018 at 04:00:08AM +0000, Yueyi Li wrote:
>> Found warning:
>>
>> WARNING: EXPORT symbol "gsi_write_channel_scratch" [vmlinux] version 
>> generation failed, symbol will not be versioned.
>> WARNING: vmlinux.o(.text+0x1e0a0): Section mismatch in reference from the 
>> function valid_phys_addr_range() to the function 
>> .init.text:memblock_is_reserved()
>> The function valid_phys_addr_range() references
>> the function __init memblock_is_reserved().
>> This is often because valid_phys_addr_range lacks a __init
>> annotation or the annotation of memblock_is_reserved is wrong.
>>
>> Use __init_memblock instead of __init.
> Not familiar with this error, the change looks good to me while have
> some questions.
>
> 1. I don't see valid_phys_addr_range() reference memblock_is_reserved().
> This is in which file or arch?

Yes, I modified valid_phys_addr_range() for some other debugging.

> 2. In case a function reference memblock_is_reserved(), should it has
> the annotation of __init_memblock too? Or just __init is ok? If my
> understanding is correct, annotation __init is ok. Well, I don't see
> valid_phys_addr_range() has an annotation.
> 3. The only valid_phys_addr_range() reference some memblock function is
> the one in arch/arm64/mm/mmap.c. Do we suppose to add an annotation to
> this?

Actually, __init_memblock is empty on arm64; this warning appears
because CONFIG_DEBUG_SECTION_MISMATCH is enabled (see its help text in
lib/Kconfig.debug).



Thanks,
Yueyi


Re: [PATCH v2] lzo: fix ip overrun during compress.

2018-12-04 Thread Yueyi Li
Hi Markus,

Thanks for your review.

On 2018/12/4 18:20, Markus F.X.J. Oberhumer wrote:
> Hi,
>
> I don't think that address space wraparound is legal in C, but I
> understand that we are in kernel land and if you really want to
> compress the last virtual page 0xf000 the following
> small patch should fix that dubious case.

I guess the VA 0xf000 is mapped because KASLR is
enabled. For this case we can see:

crash> kmem 0xf000
   PAGE   PHYSICAL  MAPPING   INDEX CNT FLAGS
ffbfffc01f000 1655ecb9  7181fd5  2 
80064209 locked,uptodate,owner_priv_1,writeback,reclaim,swapbacked

> This also avoids slowing down the the hot path of the compression
> core function.
>
> Cheers,
> Markus
>
>
>
> diff --git a/lib/lzo/lzo1x_compress.c b/lib/lzo/lzo1x_compress.c
> index 236eb21167b5..959dec45f6fe 100644
> --- a/lib/lzo/lzo1x_compress.c
> +++ b/lib/lzo/lzo1x_compress.c
> @@ -224,8 +224,8 @@ int lzo1x_1_compress(const unsigned char *in, size_t 
> in_len,
>   
>  while (l > 20) {
>  size_t ll = l <= (M4_MAX_OFFSET + 1) ? l : (M4_MAX_OFFSET + 
> 1);
> -   uintptr_t ll_end = (uintptr_t) ip + ll;
> -   if ((ll_end + ((t + ll) >> 5)) <= ll_end)
> +   // check for address space wraparound
> +   if (((uintptr_t) ip + ll + ((t + ll) >> 5)) <= (uintptr_t) ip)
>  break;
>  BUILD_BUG_ON(D_SIZE * sizeof(lzo_dict_t) > 
> LZO1X_1_MEM_COMPRESS);
>  memset(wrkmem, 0, D_SIZE * sizeof(lzo_dict_t));
I parsed the panic ramdump and loaded the CPU register values; we can see:

-000|lzo1x_1_do_compress(
 |in = 0xF000,
 |  ?,
 |out = 0x2E2EE000,
 |out_len = 0xFF801CAA3510,
 |  ?,
 |wrkmem = 0x4EBC)
 |  dict = 0x4EBC
 |  op = 0x1
 |  ip = 0x9
 |  ii = 0x9
 |  in_end = 0x0
 |  ip_end = 0xFFEC
 |  m_len = 0
 |  m_off = 1922
-001|lzo1x_1_compress(
 |in = 0xF000,
 |in_len = 0,
 |out = 0x2E2EE000,
 |out_len = 0x0001616FB7D0,
 |wrkmem = 0x4EBC)
 |  ip = 0xF000
 |  op = 0x2E2EE000
 |  l = 4096
 |  t = 0
 |  ll = 4096

ll = l = in_len = 4096 in lzo1x_1_compress, so your patch works for
this panic case. But I'm not sure: is it possible that
in = 0xF000 and in_len < 4096?


Thanks,
Yueyi


[PATCH v2] memblock: Annotate memblock_is_reserved() with __init_memblock.

2018-12-02 Thread Yueyi Li
Found warning:

WARNING: EXPORT symbol "gsi_write_channel_scratch" [vmlinux] version generation 
failed, symbol will not be versioned.
WARNING: vmlinux.o(.text+0x1e0a0): Section mismatch in reference from the 
function valid_phys_addr_range() to the function 
.init.text:memblock_is_reserved()
The function valid_phys_addr_range() references
the function __init memblock_is_reserved().
This is often because valid_phys_addr_range lacks a __init
annotation or the annotation of memblock_is_reserved is wrong.

Use __init_memblock instead of __init.

Signed-off-by: liyueyi 
---

 Changes v2: correct typo in 'warning'.

 mm/memblock.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/memblock.c b/mm/memblock.c
index 9a2d5ae..81ae63c 100644
--- a/mm/memblock.c
+++ b/mm/memblock.c
@@ -1727,7 +1727,7 @@ static int __init_memblock memblock_search(struct 
memblock_type *type, phys_addr
return -1;
 }
 
-bool __init memblock_is_reserved(phys_addr_t addr)
+bool __init_memblock memblock_is_reserved(phys_addr_t addr)
 {
return memblock_search(, addr) != -1;
 }
-- 
2.7.4




[PATCH] memblock: Annotate memblock_is_reserved() with __init_memblock.

2018-12-02 Thread Yueyi Li
Found warring:

WARNING: EXPORT symbol "gsi_write_channel_scratch" [vmlinux] version generation 
failed, symbol will not be versioned.
WARNING: vmlinux.o(.text+0x1e0a0): Section mismatch in reference from the 
function valid_phys_addr_range() to the function 
.init.text:memblock_is_reserved()
The function valid_phys_addr_range() references
the function __init memblock_is_reserved().
This is often because valid_phys_addr_range lacks a __init
annotation or the annotation of memblock_is_reserved is wrong.

Use __init_memblock instead of __init.

Signed-off-by: liyueyi 
---
 mm/memblock.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/memblock.c b/mm/memblock.c
index 9a2d5ae..81ae63c 100644
--- a/mm/memblock.c
+++ b/mm/memblock.c
@@ -1727,7 +1727,7 @@ static int __init_memblock memblock_search(struct 
memblock_type *type, phys_addr
return -1;
 }
 
-bool __init memblock_is_reserved(phys_addr_t addr)
+bool __init_memblock memblock_is_reserved(phys_addr_t addr)
 {
return memblock_search(, addr) != -1;
 }
-- 
2.7.4





Re: [PATCH v2] lzo: fix ip overrun during compress.

2018-12-02 Thread Yueyi Li


On 2018/12/3 10:46, Yueyi Li wrote:
>
>
> On 2018/11/30 20:20, Dave Rodgman wrote:
>>> On 2018/11/30 0:49, Dave Rodgman wrote:
>>>> On 28/11/2018 1:52 pm, David Sterba wrote:
>>>>
>>>>> The fix is adding a few branches to code that's supposed to be as 
>>>>> fast
>>>>> as possible. The branches would be evaluated all the time while
>>>>> protecting against one signle bad page address. This does not look 
>>>>> like
>>>>> a good performance tradeoff.
>>>> As an alternative, for all but the first case, instead of:
>>>>
>>>> if (unlikely(OVERFLOW_ADD_CHECK(ip, m_len) || (ip + m_len >= ip_end)))
>>>>
>>>> I'd suggest we do:
>>>>
>>>> if (unlikely((ip_end - ip) <= m_len))
>>>>
>>>> which will be about as efficient as what's currently there, but 
>>>> doesn't
>>>> have issues with overflow.
>>> Ooh, yes, pretty good solution to this, thanks.
>> Np :-)
>>
>> Actually, looking more closely at the first case, something like this
>> works quite well:
>>
>> size_t inc = 1 + ((ip - ii) >> 5);
>> if (unlikely((ip_end - ip) <= inc))
>> break;
>> ip += inc;
>>
>> On arm64, this generates only a single branch instruction, so it's only
>> two extra arithmetic operations more than the original code (using the
>> macro results in an additional compare & branch).
>>
> How about just instead of:
>
>   if (unlikely(ip >= ip_end))
> break;
>
> to:
>
>   if(unlikely((ip - ip_end) < ~in_len))
> break;
>
> This just generates only one more arithmetic operation than original 
> code, not easy to grasp but more efficient.
>
>

Sorry, it should be:

	if (unlikely((ip - ip_end) < ~(in_len - 20)))
		break;

> Thanks,
> Yueyi
>
>




Re: [PATCH v2] lzo: fix ip overrun during compress.

2018-12-02 Thread Yueyi Li


On 2018/11/30 20:20, Dave Rodgman wrote:
>> On 2018/11/30 0:49, Dave Rodgman wrote:
>>> On 28/11/2018 1:52 pm, David Sterba wrote:
>>>
 The fix is adding a few branches to code that's supposed to be as fast
 as possible. The branches would be evaluated all the time while
 protecting against one signle bad page address. This does not look like
 a good performance tradeoff.
>>> As an alternative, for all but the first case, instead of:
>>>
>>> if (unlikely(OVERFLOW_ADD_CHECK(ip, m_len) || (ip + m_len >= ip_end)))
>>>
>>> I'd suggest we do:
>>>
>>> if (unlikely((ip_end - ip) <= m_len))
>>>
>>> which will be about as efficient as what's currently there, but doesn't
>>> have issues with overflow.
>> Ooh, yes, pretty good solution to this, thanks.
> Np :-)
>
> Actually, looking more closely at the first case, something like this
> works quite well:
>
> size_t inc = 1 + ((ip - ii) >> 5);
> if (unlikely((ip_end - ip) <= inc))
>   break;
> ip += inc;
>
> On arm64, this generates only a single branch instruction, so it's only
> two extra arithmetic operations more than the original code (using the
> macro results in an additional compare & branch).
>
How about just instead of:

   if (unlikely(ip >= ip_end))
 break;

to:

   if(unlikely((ip - ip_end) < ~in_len))
 break;

This generates only one more arithmetic operation than the original
code; it is harder to grasp, but more efficient.


Thanks,
Yueyi





Re: [PATCH v2] lzo: fix ip overrun during compress.

2018-11-29 Thread Yueyi Li
Hi Dave,


On 2018/11/30 0:49, Dave Rodgman wrote:
> On 28/11/2018 1:52 pm, David Sterba wrote:
>
>> The fix is adding a few branches to code that's supposed to be as fast
>> as possible. The branches would be evaluated all the time while
>> protecting against one single bad page address. This does not look like
>> a good performance tradeoff.
> As an alternative, for all but the first case, instead of:
>
> if (unlikely(OVERFLOW_ADD_CHECK(ip, m_len) || (ip + m_len >= ip_end)))
>
> I'd suggest we do:
>
> if (unlikely((ip_end - ip) <= m_len))
>
> which will be about as efficient as what's currently there, but doesn't
> have issues with overflow.

Ooh, yes, pretty good solution to this, thanks.

> Dave

Thanks,
Yueyi



[PATCH v2] lzo: fix ip overrun during compress.

2018-11-27 Thread Yueyi Li
It is possible for ip to overrun in lzo1x_1_do_compress() when the page being
compressed sits at the end of memory, i.e. its virtual address is 0xf000.
This leads to a NULL pointer access during get_unaligned_le32(ip).

Fix this panic:
[ 2738.034508] Unable to handle kernel NULL pointer dereference at virtual 
address 0009
[ 2738.034515] Mem abort info:
[ 2738.034518]   Exception class = DABT (current EL), IL = 32 bits
[ 2738.034520]   SET = 0, FnV = 0
[ 2738.034523]   EA = 0, S1PTW = 0
[ 2738.034524]   FSC = 5
[ 2738.034526] Data abort info:
[ 2738.034528]   ISV = 0, ISS = 0x0005
[ 2738.034530]   CM = 0, WnR = 0
[ 2738.034533] user pgtable: 4k pages, 39-bit VAs, pgd = 94cee000
[ 2738.034535] [0009] *pgd=, *pud=
...
[ 2738.034592] pc : lzo1x_1_do_compress+0x198/0x610
[ 2738.034595] lr : lzo1x_1_compress+0x98/0x3d8
[ 2738.034597] sp : ff801caa3470 pstate : 00c00145
[ 2738.034598] x29: ff801caa3500 x28: 1000
[ 2738.034601] x27: 1000 x26: f000
[ 2738.034604] x25: 4ebc x24: 
[ 2738.034607] x23: 004c x22: f7b8
[ 2738.034610] x21: 2e2ee0b3 x20: 2e2ee0bb
[ 2738.034612] x19: 0fcc x18: f84a
[ 2738.034615] x17: 801b03d6 x16: 0782
[ 2738.034618] x15: 2e2ee0bf x14: fff0
[ 2738.034620] x13: 000f x12: 0020
[ 2738.034623] x11: 1824429d x10: ffec
[ 2738.034626] x9 : 0009 x8 : 
[ 2738.034628] x7 : 0868 x6 : 0434
[ 2738.034631] x5 : 4ebc x4 : 
[ 2738.034633] x3 : ff801caa3510 x2 : 2e2ee000
[ 2738.034636] x1 :  x0 : f000
...
[ 2738.034717] Process kworker/u16:1 (pid: 8705, stack limit = 
0xff801caa)
[ 2738.034720] Call trace:
[ 2738.034722]  lzo1x_1_do_compress+0x198/0x610
[ 2738.034725]  lzo_compress+0x48/0x88
[ 2738.034729]  crypto_compress+0x14/0x20
[ 2738.034733]  zcomp_compress+0x2c/0x38
[ 2738.034736]  zram_bvec_rw+0x3d0/0x860
[ 2738.034738]  zram_rw_page+0x88/0xe0
[ 2738.034742]  bdev_write_page+0x70/0xc0
[ 2738.034745]  __swap_writepage+0x58/0x3f8
[ 2738.034747]  swap_writepage+0x40/0x50
[ 2738.034750]  shrink_page_list+0x4fc/0xe58
[ 2738.034753]  reclaim_pages_from_list+0xa0/0x150
[ 2738.034756]  reclaim_pte_range+0x18c/0x1f8
[ 2738.034759]  __walk_page_range+0xf8/0x1e0
[ 2738.034762]  walk_page_range+0xf8/0x130
[ 2738.034765]  reclaim_task_anon+0xcc/0x168
[ 2738.034767]  swap_fn+0x438/0x668
[ 2738.034771]  process_one_work+0x1fc/0x460
[ 2738.034773]  worker_thread+0x2d0/0x478
[ 2738.034775]  kthread+0x110/0x120
[ 2738.034778]  ret_from_fork+0x10/0x18
[ 2738.034781] Code: 3800167f 54a8 d100066f 1431 (b9400131)
[ 2738.034784] ---[ end trace 9b5cca106f0e54d1 ]---
[ 2738.035473] Kernel panic - not syncing: Fatal exception

crash> dis lzo1x_1_do_compress+100 3 -l
../kernel/msm-4.14/lib/lzo/lzo1x_compress.c: 44
0xff8dec8c6af4 :   cmp x9, x10
0xff8dec8c6af8 :   b.cc0xff8dec8c6c28
0xff8dec8c6afc :   b   0xff8dec8c7094

crash> dis lzo1x_1_do_compress+0x198
0xff8dec8c6c28 :   ldr w17, [x9]

ip = x9 = 0x0009 is overflow.

Signed-off-by: liyueyi 
---
 lib/lzo/lzo1x_compress.c | 14 +++---
 1 file changed, 11 insertions(+), 3 deletions(-)

diff --git a/lib/lzo/lzo1x_compress.c b/lib/lzo/lzo1x_compress.c
index 236eb21..b15082b 100644
--- a/lib/lzo/lzo1x_compress.c
+++ b/lib/lzo/lzo1x_compress.c
@@ -17,6 +17,9 @@
 #include 
 #include "lzodefs.h"
 
+#define OVERFLOW_ADD_CHECK(a, b)  \
+   (((a) + (b)) < (a))
+
 static noinline size_t
 lzo1x_1_do_compress(const unsigned char *in, size_t in_len,
unsigned char *out, size_t *out_len,
@@ -39,6 +42,8 @@ lzo1x_1_do_compress(const unsigned char *in, size_t in_len,
size_t t, m_len, m_off;
u32 dv;
 literal:
+   if (unlikely(OVERFLOW_ADD_CHECK(ip, 1 + ((ip - ii) >> 5))))
+   break;
ip += 1 + ((ip - ii) >> 5);
 next:
if (unlikely(ip >= ip_end))
@@ -99,7 +104,8 @@ lzo1x_1_do_compress(const unsigned char *in, size_t in_len,
m_len += 8;
v = get_unaligned((const u64 *) (ip + m_len)) ^
get_unaligned((const u64 *) (m_pos + 
m_len));
-   if (unlikely(ip + m_len >= ip_end))
+   if (unlikely(OVERFLOW_ADD_CHECK(ip, m_len)
+   || (ip + m_len >= ip_end)))
goto m_len_done;
} while (v == 0);
}
@@ -124,7 +130,8 @@ lzo1x_1_do_compress(const unsigned char *in, size_t in_len,
m_len += 4;
  


Re: [PATCH] lzo: fix ip overrun during compress.

2018-11-27 Thread Yueyi Li
Hi Willy,


On 2018/11/28 12:08, Willy Tarreau wrote:
> Hi Yueyi,
>
> On Tue, Nov 27, 2018 at 10:34:26AM +0000, Yueyi Li wrote:
>> It is possible for ip to overrun in lzo1x_1_do_compress() when the page being
>> compressed sits at the end of memory, i.e. its virtual address is 0xf000.
>> This leads to a NULL pointer access during get_unaligned_le32(ip).
> Thanks for this report.
>
> (...)
>> diff --git a/lib/lzo/lzo1x_compress.c b/lib/lzo/lzo1x_compress.c
>> index 236eb21..a0265a6 100644
>> --- a/lib/lzo/lzo1x_compress.c
>> +++ b/lib/lzo/lzo1x_compress.c
>> @@ -17,6 +17,9 @@
>>   #include 
>>   #include "lzodefs.h"
>>   
>> +#define OVERFLOW_ADD_CHECK(a, b)  \
>> +((size_t)~0 - (size_t)(a) < (size_t)(b) ? true : false)
>> +
> I think the test would be easier to grasp this way :
>
> #define OVERFLOW_ADD_CHECK(a, b)  \
>   ((size_t)(b) >= (size_t)((void*)0 - (void *)(a)))
>
> But the following should be more efficient since we build with
> -fno-strict-overflow :
>
> #define OVERFLOW_ADD_CHECK(a, b)  \
>   (((a) + (b)) < (a))
Thanks for the suggestion,  I will change it in next version.
> Could you please recheck ?
>
> Thanks,
> Willy

Thanks,
Yueyi



[PATCH] lzo: fix ip overrun during compress.

2018-11-27 Thread Yueyi Li
It is possible for ip to overrun in lzo1x_1_do_compress() when the page being
compressed sits at the end of memory, i.e. its virtual address is 0xf000.
This leads to a NULL pointer access during get_unaligned_le32(ip).

Fix this panic:
[ 2738.034508] Unable to handle kernel NULL pointer dereference at virtual 
address 0009
[ 2738.034515] Mem abort info:
[ 2738.034518]   Exception class = DABT (current EL), IL = 32 bits
[ 2738.034520]   SET = 0, FnV = 0
[ 2738.034523]   EA = 0, S1PTW = 0
[ 2738.034524]   FSC = 5
[ 2738.034526] Data abort info:
[ 2738.034528]   ISV = 0, ISS = 0x0005
[ 2738.034530]   CM = 0, WnR = 0
[ 2738.034533] user pgtable: 4k pages, 39-bit VAs, pgd = 94cee000
[ 2738.034535] [0009] *pgd=, *pud=
...
[ 2738.034592] pc : lzo1x_1_do_compress+0x198/0x610
[ 2738.034595] lr : lzo1x_1_compress+0x98/0x3d8
[ 2738.034597] sp : ff801caa3470 pstate : 00c00145
[ 2738.034598] x29: ff801caa3500 x28: 1000
[ 2738.034601] x27: 1000 x26: f000
[ 2738.034604] x25: 4ebc x24: 
[ 2738.034607] x23: 004c x22: f7b8
[ 2738.034610] x21: 2e2ee0b3 x20: 2e2ee0bb
[ 2738.034612] x19: 0fcc x18: f84a
[ 2738.034615] x17: 801b03d6 x16: 0782
[ 2738.034618] x15: 2e2ee0bf x14: fff0
[ 2738.034620] x13: 000f x12: 0020
[ 2738.034623] x11: 1824429d x10: ffec
[ 2738.034626] x9 : 0009 x8 : 
[ 2738.034628] x7 : 0868 x6 : 0434
[ 2738.034631] x5 : 4ebc x4 : 
[ 2738.034633] x3 : ff801caa3510 x2 : 2e2ee000
[ 2738.034636] x1 :  x0 : f000
...
[ 2738.034717] Process kworker/u16:1 (pid: 8705, stack limit = 
0xff801caa)
[ 2738.034720] Call trace:
[ 2738.034722]  lzo1x_1_do_compress+0x198/0x610
[ 2738.034725]  lzo_compress+0x48/0x88
[ 2738.034729]  crypto_compress+0x14/0x20
[ 2738.034733]  zcomp_compress+0x2c/0x38
[ 2738.034736]  zram_bvec_rw+0x3d0/0x860
[ 2738.034738]  zram_rw_page+0x88/0xe0
[ 2738.034742]  bdev_write_page+0x70/0xc0
[ 2738.034745]  __swap_writepage+0x58/0x3f8
[ 2738.034747]  swap_writepage+0x40/0x50
[ 2738.034750]  shrink_page_list+0x4fc/0xe58
[ 2738.034753]  reclaim_pages_from_list+0xa0/0x150
[ 2738.034756]  reclaim_pte_range+0x18c/0x1f8
[ 2738.034759]  __walk_page_range+0xf8/0x1e0
[ 2738.034762]  walk_page_range+0xf8/0x130
[ 2738.034765]  reclaim_task_anon+0xcc/0x168
[ 2738.034767]  swap_fn+0x438/0x668
[ 2738.034771]  process_one_work+0x1fc/0x460
[ 2738.034773]  worker_thread+0x2d0/0x478
[ 2738.034775]  kthread+0x110/0x120
[ 2738.034778]  ret_from_fork+0x10/0x18
[ 2738.034781] Code: 3800167f 54a8 d100066f 1431 (b9400131)
[ 2738.034784] ---[ end trace 9b5cca106f0e54d1 ]---
[ 2738.035473] Kernel panic - not syncing: Fatal exception

crash> dis lzo1x_1_do_compress+100 3 -l
../kernel/msm-4.14/lib/lzo/lzo1x_compress.c: 44
0xff8dec8c6af4 :   cmp x9, x10
0xff8dec8c6af8 :   b.cc0xff8dec8c6c28
0xff8dec8c6afc :   b   0xff8dec8c7094

crash> dis lzo1x_1_do_compress+0x198
0xff8dec8c6c28 :   ldr w17, [x9]

ip = x9 = 0x0009 is overflow.

Signed-off-by: liyueyi 
---
 lib/lzo/lzo1x_compress.c | 14 +++---
 1 file changed, 11 insertions(+), 3 deletions(-)

diff --git a/lib/lzo/lzo1x_compress.c b/lib/lzo/lzo1x_compress.c
index 236eb21..a0265a6 100644
--- a/lib/lzo/lzo1x_compress.c
+++ b/lib/lzo/lzo1x_compress.c
@@ -17,6 +17,9 @@
 #include 
 #include "lzodefs.h"
 
+#define OVERFLOW_ADD_CHECK(a, b)  \
+   ((size_t)~0 - (size_t)(a) < (size_t)(b) ? true : false)
+
 static noinline size_t
 lzo1x_1_do_compress(const unsigned char *in, size_t in_len,
unsigned char *out, size_t *out_len,
@@ -39,6 +42,8 @@ lzo1x_1_do_compress(const unsigned char *in, size_t in_len,
size_t t, m_len, m_off;
u32 dv;
 literal:
+   if (unlikely(OVERFLOW_ADD_CHECK(ip, 1 + ((ip - ii) >> 5))))
+   break;
ip += 1 + ((ip - ii) >> 5);
 next:
if (unlikely(ip >= ip_end))
@@ -99,7 +104,8 @@ lzo1x_1_do_compress(const unsigned char *in, size_t in_len,
m_len += 8;
v = get_unaligned((const u64 *) (ip + m_len)) ^
get_unaligned((const u64 *) (m_pos + 
m_len));
-   if (unlikely(ip + m_len >= ip_end))
+   if (unlikely(OVERFLOW_ADD_CHECK(ip, m_len)
+   || (ip + m_len >= ip_end)))
goto m_len_done;
} while (v == 0);
}
@@ -124,7 +130,8 @@ lzo1x_1_do_compress(const unsigned char *in, size_t in_len,
m_len += 
