> Every ten minutes I was greeted with the following splat in the kernel log:
> >
> > [2122311.383389] warn_alloc: 3 callbacks suppressed
> > [2122311.383403] cat: page allocation failure: order:5,
> > mode:0x40dc0(GFP_KERNEL|__GFP_COMP|__GFP_ZERO),
> > nodemask=(null),cpuset=lxc
You want this patch:
https://lore.kernel.org/linux-fsdevel/6345270a2c1160b89dd5e
Good Afternoon,
I have been tracking down a recurring bug that triggers when running OpenWRT in
an LXD container.
Every ten minutes I was greeted with the following splat in the kernel log:
[2122311.383389] warn_alloc: 3 callbacks suppressed
[2122311.383403] cat: page allocation failure: order:5
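For readers decoding these splats: the order:N field is the base-2 logarithm of the number of physically contiguous pages being requested. A quick helper to put sizes on the orders seen in this thread (a sketch assuming the common 4 KiB page size; PAGE_SIZE differs on some architectures):

```python
PAGE_SIZE = 4096  # assumed 4 KiB pages; not universal across architectures

def order_bytes(order: int) -> int:
    """Bytes in an order-N buddy allocation: 2**N contiguous pages."""
    return (1 << order) * PAGE_SIZE

# Each step up doubles the contiguous run the allocator must find,
# which is why higher orders fail first on a fragmented system.
for order in (0, 3, 5, 8, 10):
    print(f"order:{order} -> {order_bytes(order) // 1024} KiB contiguous")
```

The order-5 failure above is thus a request for 128 KiB of physically contiguous memory, which a long-running box can easily be too fragmented to satisfy.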
On Mon, 8 Feb 2021 at 14:18, Christian König
wrote:
>
> Are the other problems gone as well?
>
Yes and no.
The issue with the monitor turning off was gone after rc6 (git 3aaf0a27ffc2).
But both traces
1) BUG: sleeping function called from invalid context at
include/linux/sched/mm.h:196 (kernel 5.11
On 06.02.21 at 19:17, Mikhail Gavrilov wrote:
On Sun, 31 Jan 2021 at 22:22, Christian König
wrote:
>
> Yeah, known issue. I already pushed Michel's fix to drm-misc-fixes.
> Should land in the next -rc by the weekend.
>
> Regards,
> Christian.
I checked this patch [1] for several days.
And I can confirm that the reported issue was gone.
On 31.01.21 at 02:03, David Rientjes wrote:
On Sat, 30 Jan 2021, David Rientjes wrote:
On Sun, 31 Jan 2021, Mikhail Gavrilov wrote:
> The 5.11-rc5 (git 76c057c84d28) brought a new issue.
> Now the kernel log is flooded with the message "page allocation failure".
>
> Trace:
> msedge:cs0: page allocation failure: order:10,
Order-10, wow!
ttm_pool_alloc
The 5.11-rc5 (git 76c057c84d28) brought a new issue.
Now the kernel log is flooded with the message "page allocation failure".
Trace:
msedge:cs0: page allocation failure: order:10,
mode:0x190cc2(GFP_HIGHUSER|__GFP_NORETRY|__GFP_NOMEMALLOC),
nodemask=(null),cpuset=/,mems_allowed=0
Hi folks.
I observed a reliably reproducible set of bugs.
It always started as
1) kworker/u64:2: page allocation failure: order:5,
mode:0x40dc0(GFP_KERNEL|__GFP_COMP|__GFP_ZERO),
nodemask=(null),cpuset=/,mems_allowed=0
Continuing as:
2) WARNING: CPU: 21 PID: 806649 at
drivers/gpu/drm/amd/amdgpu
it was first chrt -r 20 and then changed later to renice -n
-20). I observed that the lru-add-drain and writeback threads were executing at
normal priority.
What I mean above is two separate iterations of process priority settings (1st
iteration: chrt -r 20; 2nd iteration: renice -n -20; there was no
iteration in which both chrt and renice were used together).
Although with both priority settings, we got the page allocation failure
problem.
From: Matthew Wilcox [mailto:wi...@infradead.org]
Sent: Monday, June 3, 2019 5:42 PM
To: Nagal, Amit UTC CCS
On Mon, Jun 03, 2019 at 05:30:57AM +, Nagal, Amit UTC CCS
wrote:
> > [ 776.174308] Mem-Info:
> > [ 776.176650] active_anon:2037 inactive_anon:23 isolated_anon:0 [
> > 776.176650] active_file:2636 inactive_file:7391 isolated_file:32 [
> > 776.176650] unevictable:0
Subject: [External] Re: linux kernel page allocation failure and tuning of page
cache
> 1) the platform is low memory platform having memory 64MB.
>
> 2) we are doing around 45MB of TCP data transfer from PC to target using the
> netcat utility. On the target, a process receives data over a socket and wri
-----Original Message-----
From: Alexander Duyck [mailto:alexander.du...@gmail.com]
Sent: Saturday, June 1, 2019 2:57 AM
To: Nagal, Amit UTC CCS
Cc: linux-kernel@vger.kernel.org; linux...@kvack.org; CHAWLA, RITU UTC CCS
Subject: [External] Re: linux kernel page allocation failure and tuning
think your network is faster than your disk ...
Ok. I need to check it. But how does this affect the page reclaim procedure?
> 5) sometimes , we observed kernel memory getting exhausted as page allocation
> failure happens in kernel with the backtrace is printed below :
> # [ 775.9
1245
Swap: 0 0 0
5) sometimes, we observed kernel memory getting exhausted, as a page
allocation failure happens in the kernel with the backtrace printed
below in point 7):
6) we have certain questions as below:
a) how the kern
buffers cached
> Mem: 5756 1
>02 42
> -/+ buffers/cache: 1245
> Swap: 0 0 0
>
> 5) sometimes , we obse
r disk ...
> 5) sometimes , we observed kernel memory getting exhausted as page allocation
> failure happens in kernel with the backtrace is printed below :
> # [ 775.947949] nc.traditional: page allocation failure: order:0,
> mode:0x2080020(GFP_ATOMIC)
We're in the soft interrup
On Mon, May 6, 2019 at 2:35 PM Vlastimil Babka wrote:
>
> On 5/3/19 7:44 PM, Pankaj Suryawanshi wrote:
> >> First possibility that comes to mind is that a usermodehelper got
> >> launched, and
> >> it then tried to fork with a very large active process image. Do we have
> >> any
> >> clues
On Thu, May 2, 2019 at 10:51 AM Valdis Klētnieks
wrote:
>
> On Thu, 02 May 2019 04:56:05 +0530, Pankaj Suryawanshi said:
>
> > Please help me to decode the error messages and the reason for these errors.
>
> > [ 3205.818891] HwBinder:1894_6: page allocation failure: orde
From: Pankaj Suryawanshi
Sent: 28 March 2019 13:17
To: linux-kernel@vger.kernel.org; linux...@kvack.org
Subject: Re: Page-allocation-failure
From: Pankaj Suryawanshi
Sent: 28 March 2019 13:12
To: linux-kernel@vger.kernel.org; linux...@kvack.org
Subject: Page-allocation-failure
Hello,
I am facing an issue related to page allocation failure.
If anyone is familiar with this issue, let me know what the issue is
and how to solve/debug it.
Failure logs
, the local filesystem is just as inaccessible as
trying to use it remotely - I couldn't find any change that looked
related to it in v4.18-rc3 so here goes:
(the same report is attached as well)
[1609167.131537] nfsd: page allocation failure: order:8,
mode:0x14000c0(GFP_KERNEL), nodemask=(null
ial: probe of fake-design-for-testing-f001
> > > > > failed with error -95
> > > > > [ 75.044323] fmc fake-design-for-testing-f001: Driver has no ID:
> > > > > matches all
> > > > > [ 75.045644] fmc_chardev fake-desig
75.040995] fmc fake-design-for-testing-f001: Driver has no ID:
> > > > matches all
> > > > [ 75.042509] fmc_trivial: probe of fake-design-for-testing-f001
> > > > failed with error -95
> > > > [ 75.044323] fmc fake-design-for-testing-f001: Driver h
ake-design-for-testing-f001 failed
> > > with error -95
> > > [ 75.044323] fmc fake-design-for-testing-f001: Driver has no ID:
> > > matches all
> > > [ 75.045644] fmc_chardev fake-design-for-testing-f001: Created misc
> > > device &q
D: matches
> > all
> > [ 75.045644] fmc_chardev fake-design-for-testing-f001: Created misc
> > device "fake-design-for-testing-f001"
> > [ 75.061570] swapper/0: page allocation failure: order:9,
> > mode:0x14040c0(GFP_KERNEL|__GFP_COMP), nodemask=(null)
>
ial: probe of fake-design-for-testing-f001 failed with
> error -95
> [ 75.044323] fmc fake-design-for-testing-f001: Driver has no ID: matches all
> [ 75.045644] fmc_chardev fake-design-for-testing-f001: Created misc device
> "fake-design-for-testing-f001"
> [ 75.061
Hello,
I observed this page allocation failure in fbcon, while copying files
from one XFS filesystem to another. As far as I know, there wasn't
anything else unusual going on at the time. The system uptime was about
a day. After the allocation failure, I could not allocate any more
ttys
On Thu 30-11-17 22:01:03, Wu Fengguang wrote:
> On Thu, Nov 30, 2017 at 02:50:16PM +0100, Michal Hocko wrote:
> > On Thu 30-11-17 21:38:40, Wu Fengguang wrote:
> > > Hello,
> > >
> > > It looks like a regression in 4.15.0-rc1 -- the test case simply run a
> > > set of parallel dd's and there
On Thu, Nov 30, 2017 at 10:08:04PM +0800, Fengguang Wu wrote:
[ 78.848629] dd: page allocation failure: order:0,
mode:0x1080020(GFP_ATOMIC), nodemask=(null)
[ 78.857841] dd cpuset=/ mems_allowed=0-1
[ 78.862502] CPU: 0 PID: 6131 Comm: dd Tainted: G O 4.15.0-rc1 #1
[ 78.870437] Call Trace:
[ 78.873610]
[ 78.876342] dump_stack
rely on atomic allocations.
I just wonder if any changes make the pressure more tight than before.
It may not even be a MM change -- in theory drivers might also use atomic
allocations more aggressively than before.
[...]
[ 71.088242] dd: page allocation failure: order:0,
mode:0x1080020
the failure really depends on the
state of the free memory and that can vary between runs depending on
timing I guess. So I am not really sure this is a regression. But maybe
there is something reclaim related going on here.
[...]
> [ 71.088242] dd: page allocation failure: order:0,
> m
Hello,
On 15.08.2016 at 15:26, Philipp Hahn wrote:
> this Sunday one of our virtual servers running linux-4.1.16 inside
> OpenStack using qemu "crashed" while doing a backup using rsync to a
> slow NFS server.
This happened again last weekend, with the same stack trace:
>> Call Trace:
>>[]
ng
order=0 allocation failed:
> swapper/0: page allocation failure: order:0, mode:0x120
4KiB
src/extern/linux/include/linux/gfp.h:
18 #define ___GFP_HIGH 0x20u
21 #define ___GFP_COLD 0x100u
72 #define __GFP_HIGH ((__force gfp_t)___GFP_HIGH) /* Should
access emergency pools? */
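Putting the two quoted bit values together confirms the decode: 0x20 | 0x100 == 0x120, i.e. the failing swapper/0 allocation was ___GFP_HIGH | ___GFP_COLD. A tiny checker sketch, using only the constants quoted above (these bit values are specific to that kernel tree and move between versions):

```python
# Bit values quoted from that tree's include/linux/gfp.h; they are
# version-specific, so do not reuse them against a different kernel.
GFP_BITS = {
    "___GFP_HIGH": 0x20,   # may access emergency pools
    "___GFP_COLD": 0x100,  # prefer a cache-cold page
}

def decode_gfp(mode: int) -> list[str]:
    """Names of the known bits set in a gfp 'mode' value from a splat."""
    return [name for name, bit in GFP_BITS.items() if mode & bit]

print(decode_gfp(0x120))  # the mode from the swapper/0 splat above
```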
On Thu 28-07-16 15:50:32, Xishi Qiu wrote:
> On 2016/7/20 15:47, Michal Hocko wrote:
>
> > On Wed 20-07-16 09:33:30, Yisheng Xie wrote:
> >>
> >>
> >> On 2016/7/19 22:14, Vlastimil Babka wrote:
> >>> On 07/19/2016 03:48 PM, Xishi Qiu wrote:
> > [...]
> mode:0x2000d1 means it expects to alloc
On 2016/7/20 15:47, Michal Hocko wrote:
> On Wed 20-07-16 09:33:30, Yisheng Xie wrote:
>>
>>
>> On 2016/7/19 22:14, Vlastimil Babka wrote:
>>> On 07/19/2016 03:48 PM, Xishi Qiu wrote:
> [...]
mode:0x2000d1 means it expects to alloc from zone_dma, (on arm64 zone_dma
is 0-4G)
>>>
>>>
On Wed 20-07-16 09:33:30, Yisheng Xie wrote:
>
>
> On 2016/7/19 22:14, Vlastimil Babka wrote:
> > On 07/19/2016 03:48 PM, Xishi Qiu wrote:
[...]
> >> mode:0x2000d1 means it expects to alloc from zone_dma, (on arm64 zone_dma
> >> is 0-4G)
> >
> > Yes, but I don't see where the __GFP_DMA comes
On 2016/7/19 22:14, Vlastimil Babka wrote:
> On 07/19/2016 03:48 PM, Xishi Qiu wrote:
>> On 2016/7/19 21:17, Vlastimil Babka wrote:
>>
>>> On 07/19/2016 02:43 PM, Yisheng Xie wrote:
>>>> hi all,
>>>> I'm getting a 2-order page allocation f
On 07/19/2016 03:48 PM, Xishi Qiu wrote:
On 2016/7/19 21:17, Vlastimil Babka wrote:
On 07/19/2016 02:43 PM, Yisheng Xie wrote:
hi all,
I'm getting a 2-order page allocation failure problem on 4.1.18.
From the Mem-info, it seems the system have much zero order free pages which
can be used
On 2016/7/19 21:17, Vlastimil Babka wrote:
> On 07/19/2016 02:43 PM, Yisheng Xie wrote:
>> hi all,
>> I'm getting a 2-order page allocation failure problem on 4.1.18.
>> From the Mem-info, it seems the system have much zero order free pages which
>> can be
On 07/19/2016 02:43 PM, Yisheng Xie wrote:
hi all,
I'm getting a 2-order page allocation failure problem on 4.1.18.
From the Mem-info, it seems the system have much zero order free pages which
can be used for memory compaction.
Is it possible that the memory compacted by current process used
hi all,
I'm getting a 2-order page allocation failure problem on 4.1.18.
From the Mem-info, it seems the system has many zero-order free pages which
can be used for memory compaction.
Is it possible that the memory compacted by the current process is used by
another process soon, which caus
Hi,
On 23-04-16 23:10, Johannes Stezenbach wrote:
Hi,
I bought a new backup disk which turned out to be UAS capable,
but when I plugged it in I got an order 7 page allocation failure.
My hunch is that the .can_queue = 65536 in drivers/usb/storage/uas.c
is much too large. Maybe 256 would
Hi,
I bought a new backup disk which turned out to be UAS capable,
but when I plugged it in I got an order 7 page allocation failure.
My hunch is that the .can_queue = 65536 in drivers/usb/storage/uas.c
is much too large. Maybe 256 would be a practical value that matches
the capabilities
On (11/23/15 16:43), Sergey Senozhatsky wrote:
[..]
> agree. we also would want to switch from vzalloc() to
> __vmalloc_node_flags(size, NUMA_NO_NODE,
> GFP_NOIO | __GFP_HIGHMEM | __GFP_ZERO)
[..]
> > So, Kyeongdon's patch will remove warning overhead and likely to
>
On (11/23/15 13:18), Minchan Kim wrote:
[..]
> > https://lkml.org/lkml/2015/6/16/465
>
> Sorry, I have missed that.
> It's worth to fix that you proved it that could happen.
> But when I read your patch, GFP_NOIO instead GFP_NOFS would
> better way. Could you resend it?
no problem.
agree. we
On Mon, Nov 23, 2015 at 12:14:00PM +0900, Sergey Senozhatsky wrote:
> Hello,
>
> On (11/23/15 11:15), Minchan Kim wrote:
> [..]
> > > static void *zcomp_lz4_create(void)
> > > {
> > > - return kzalloc(LZ4_MEM_COMPRESS, GFP_KERNEL);
> > > + void *ret;
> > > +
> > > + ret =
On (11/23/15 12:14), Sergey Senozhatsky wrote:
>
> yes, GFP_KERNEL looks a bit fragile to me too. And may be zcomp_strm_alloc()
> and comp->backend->create() deserve GFP_NOFS. I believe I sent a patch doing
> this a while ago: https://lkml.org/lkml/2015/6/16/465
>
perhaps just __GFP_RECLAIM
Hello,
On (11/23/15 11:15), Minchan Kim wrote:
[..]
> > static void *zcomp_lz4_create(void)
> > {
> > - return kzalloc(LZ4_MEM_COMPRESS, GFP_KERNEL);
> > + void *ret;
> > +
> > + ret = kzalloc(LZ4_MEM_COMPRESS,
> > + __GFP_NORETRY|__GFP_NOWARN|__GFP_NOMEMALLOC);
> > +
Hello,
On Fri, Nov 20, 2015 at 07:02:44PM +0900, Kyeongdon Kim wrote:
> When we're using LZ4 multi compression streams for zram swap,
> we found out page allocation failure message in system running test.
> That was not only once, but a few(2 - 5 times per test).
> Also, some failur
On (11/21/15 11:15), Sergey Senozhatsky wrote:
[..]
>
> with the only nit that the subject should be "try kmalloc() before vmalloc()"
> or similar, not "prevent page allocation failure", I think.
>
Oh, and one more thing
> static void zcomp_lz4_destro
On (11/21/15 11:10), Sergey Senozhatsky wrote:
> Cc Andrew
>
> On (11/20/15 19:02), Kyeongdon Kim wrote:
> > When we're using LZ4 multi compression streams for zram swap,
> > we found out page allocation failure message in system running test.
> > That was not only once
Cc Andrew
On (11/20/15 19:02), Kyeongdon Kim wrote:
> When we're using LZ4 multi compression streams for zram swap,
> we found out page allocation failure message in system running test.
> That was not only once, but a few(2 - 5 times per test).
> Also, some failure cases were
When we're using LZ4 multi compression streams for zram swap,
we found page allocation failure messages during system running tests.
That happened not just once, but a few times (2 - 5 times per test).
Also, some failure cases were continually occurring when trying
order-3 allocations.
In order to make parallel
Hello,
On (11/20/15 01:00), Minchan Kim wrote:
[..]
> [1] 42614b05825, crypto: lzo - try kmalloc() before vmalloc()
> So, could you make vmalloc as fallback of kmalloc?
Looks good to me.
-ss
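The approach converged on in this thread (as in the referenced commit [1]) is the classic two-step: attempt the physically contiguous allocator with "fail fast, fail quietly" flags, then fall back to vmalloc. A userspace sketch of that control flow (illustration only; the helper names here are made up, and in the kernel the real calls are kzalloc(size, __GFP_NORETRY | __GFP_NOWARN | __GFP_NOMEMALLOC) with vzalloc(size) as the fallback):

```python
# Userspace illustration only: stand-ins for kzalloc() and vzalloc().
PAGE = 4096

def fragile_alloc(size: int):
    """Stand-in for the kzalloc() fast path: fails under fragmentation."""
    if size > 2 * PAGE:  # simulate fragmentation: nothing above order-1
        return None
    return bytearray(size)

def robust_alloc(size: int) -> bytearray:
    """Stand-in for vzalloc(): slower, but needs no physical contiguity."""
    return bytearray(size)

def alloc_stream_workmem(size: int) -> bytearray:
    buf = fragile_alloc(size)      # optimistic fast path, allowed to fail
    if buf is None:
        buf = robust_alloc(size)   # fallback avoids the allocation splat
    return buf

# An order-3-sized request (32 KiB) takes the fallback path here.
print(len(alloc_stream_workmem(8 * PAGE)))
```

In the kernel version of this pattern it is the __GFP_NOWARN on the fast path that silences the splats discussed in this thread even when that path still fails.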
Hello,
On (11/19/15 22:49), kyeongdon.kim wrote:
[..]
> I know what you mean (streams are not free).
> First of all, I'm sorry I would have to tell you why I try this patch.
nothing to be sorry about.
> When we're using LZ4 multi stream for zram swap, I found out this
> allocation failure
Hello,
On 2015-11-19 6:45 PM, Sergey Senozhatsky wrote:
> Hello,
>
> On (11/19/15 15:54), kyeongdon.kim wrote:
>> When we use lzo/lz4 multi compression streams for zram,
>> we might face page allocation failure. In order to make parallel
>> compression private d