On Wed, Apr 03, 2019 at 10:37:57AM +0200, Michal Hocko wrote:
> That being said, it should be the caller of the hotplug code that tells
> the vmemmap allocation strategy. For starters, I would only pack vmemmaps
> for "regular" kernel zone memory. Movable zones should be more careful.
> We can always [...]
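
A minimal sketch of what such a caller-chosen strategy can look like. The
mhp_t flags argument and the MHP_MEMMAP_ON_MEMORY name are what mainline
add_memory() later grew, not something defined by this thread's patches,
and will_be_movable() is a purely hypothetical helper:

    mhp_t mhp_flags = 0;

    /* Pack the memmap onto the hot-added range only for "regular"
     * kernel zone memory; ranges headed for ZONE_MOVABLE opt out. */
    if (!will_be_movable(start, size))      /* hypothetical predicate */
            mhp_flags |= MHP_MEMMAP_ON_MEMORY;

    rc = add_memory(nid, start, size, mhp_flags);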
On Wed, Apr 03, 2019 at 10:12:32AM +0200, Michal Hocko wrote:
> What prevents somebody from calling arch_add_memory directly from a
> driver for a range spanning multiple memblocks? In other words, aren't
> you making assumptions about future usage based on the qemu usecase?

Well, right now they [...]
On Tue, Apr 02, 2019 at 02:48:45PM +0200, Michal Hocko wrote:
> So what is going to happen when you hot-add two memblocks? The first one
> holds the memmaps, and then you want to hot-remove (not just offline) it?

If you hot-add two memblocks, this means that either:

a) you hot-add a 256MB memory device [...]
On Fri, Mar 29, 2019 at 02:42:43PM +0100, Michal Hocko wrote:
> Having a larger contiguous area is definitely nice to have, but you also
> have to consider the other side of the thing. If we have a movable
> memblock with unmovable memory, then we are breaking the movable
> property. So there should [...]
On Fri 29-03-19 09:45:47, Oscar Salvador wrote:
[...]
> * memblock granularity 128M
>
> (qemu) object_add memory-backend-ram,id=ram0,size=256M
> (qemu) device_add pc-dimm,id=dimm0,memdev=ram0,node=1
>
> This will create two memblocks (2 sections), and if we allocate the
> vmemmap data for each [...]
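
For scale, a back-of-the-envelope sketch of the vmemmap cost in that
example, assuming 4 KiB base pages and a 64-byte struct page (typical
x86_64 values, assumed here rather than stated in the thread):

    /* vmemmap cost per 128 MiB memblock, x86_64-style assumptions */
    #include <stdio.h>

    int main(void)
    {
            unsigned long memblock = 128UL << 20;   /* 128 MiB section     */
            unsigned long page_sz  = 4096;          /* 4 KiB base page     */
            unsigned long sp_size  = 64;            /* sizeof(struct page) */

            unsigned long pages   = memblock / page_sz;     /* 32768 */
            unsigned long vmemmap = pages * sp_size;        /* 2 MiB */

            printf("vmemmap per 128 MiB memblock: %lu MiB\n", vmemmap >> 20);
            printf("for the 256M DIMM above:      %lu MiB\n",
                   (2 * vmemmap) >> 20);
            return 0;
    }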
On Fri, Mar 29, 2019 at 09:56:37AM +0100, David Hildenbrand wrote:
> Oh okay, so it actually works the way I guessed it would.
>
> While this makes total sense, I'll have to look at how it is currently
> handled, meaning whether there is a change. I vaguely remember that
> delayed struct pages [...]
> Great, I would like to see how this works there :-).
>
>> I guess one important thing to mention is that it is no longer possible
>> to remove memory at a different granularity than it was added. I vaguely
>> remember that ACPI code sometimes "reuses" parts of already added
>> memory. We would [...]
On Thu, Mar 28, 2019 at 04:31:44PM +0100, David Hildenbrand wrote:
> Correct me if I am wrong. I think I was confused - vmemmap data is still
> allocated *per memory block*, not for the whole added memory, correct?

No, vmemmap data is allocated per memory resource added.
In the case of a DIMM, that would be a [...]
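
To picture the per-resource layout, a sketch under the same page-size
assumptions as above; the 2 GiB DIMM is an invented example, not a figure
from the thread:

    /*
     * Per-resource packing for a hot-added 2 GiB DIMM (sixteen 128 MiB
     * memblocks): one 32 MiB vmemmap chunk at the start of the range.
     *
     *   [ vmemmap: 32 MiB ][ usable memory: 2 GiB - 32 MiB .......... ]
     *
     * Per-memblock allocation would instead scatter sixteen separate
     * 2 MiB vmemmap chunks, one at the head of each 128 MiB block.
     */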
Hi,

since the last two RFCs went almost unnoticed (thanks David for the
feedback), I decided to re-work some parts to make them simpler, give the
series more testing, and drop the RFC tag to see if it gets more attention.
I also added David's feedback, so now all users of add_memory/__add_memory/ [...]
On Fri 23-11-18 13:51:57, David Hildenbrand wrote:
> On 23.11.18 13:42, Michal Hocko wrote:
> > On Fri 23-11-18 12:55:41, Oscar Salvador wrote:
[...]
> >> It is not memory that the system can use.
> >
> > same as bootmem ;)
>
> Fair enough, just saying that it represents a change :)
>
> (but [...]
On Thu, Nov 22, 2018 at 10:21:24AM +0100, David Hildenbrand wrote:
> 1. How are we going to present such memory to the system statistics?
>
> In my opinion, this vmemmap memory should
> a) still account to total memory
> b) show up as allocated
>
> So just like before.

No, it does not show up [...]
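
A hypothetical sketch of the accounting David is asking for, i.e. points
(a) and (b): count the vmemmap pages in the total but keep them permanently
allocated so they never reach the page allocator. The pfn range variables
are invented, and, as Oscar's answer above says, this is not what the
posted series does:

    unsigned long pfn;

    /* (b): mark each vmemmap page reserved so it reads as allocated */
    for (pfn = vmemmap_pfn; pfn < vmemmap_pfn + nr_vmemmap_pages; pfn++)
            SetPageReserved(pfn_to_page(pfn));

    /* (a): still contribute the pages to MemTotal */
    totalram_pages_add(nr_vmemmap_pages);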
Hi,

this patchset is based on Michal's patchset [1].
Patch#1, patch#2 and patch#4 are pretty much the same.
They only needed small changes to adapt to the current code, so it seemed
fair to keep them.

-
Original cover:

This is another step to make memory hotplug more usable. The [...]