On 6/26/2013 11:37 PM, Yinghai Lu wrote:
> On Tue, Jun 25, 2013 at 11:58 AM, Mike Travis wrote:
>> experimenting as soon as I can. Our 32TB system is being
>> brought back to 16TB (we found a number of problems as we
>> get closer and closer to the 64TB limit), but that's still
>> a significant size.
On Tue, Jun 25, 2013 at 11:58 AM, Mike Travis wrote:
> experimenting as soon as I can. Our 32TB system is being
> brought back to 16TB (we found a number of problems as we
> get closer and closer to the 64TB limit), but that's still
> a significant size.
Hi, Mike,
Can you post the e820 memory map?
* Yinghai Lu wrote:
> On Tue, Jun 25, 2013 at 11:17 AM, H. Peter Anvin wrote:
> > On 06/25/2013 10:35 AM, Mike Travis wrote:
>
> > However, please consider Ingo's counterproposal of doing this via the
> > buddy allocator, i.e. hugepages being broken on demand. That is a
> > *very* powerful model, although would require more
On Tue, Jun 25, 2013 at 12:09 PM, H. Peter Anvin wrote:
>> According to the Intel SDM, the CPU can support 52-bit physical addressing.
>>
>> So how will the Linux kernel handle that, as we only have 48-bit virtual
>> addressing?
>>
>
> The Linux kernel will not support more than V-2 bits of physical address
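The V-2 rule is easier to see with the numbers filled in; the gloss below is an annotation, not text from the thread. The kernel direct-maps all of physical memory into its half of the virtual address space: with V = 48 virtual address bits, the kernel half is 2^47 bytes and the direct map gets half of that again, so physical memory is capped at 2^(V-2) = 2^46 bytes = 64 TiB. That is the same 64TB limit Mike's system is pressing against above.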
On 06/25/2013 12:03 PM, Yinghai Lu wrote:
> It is worth experimenting with but the big question would be,
> if it still avoids the very expensive "memmap_init_zone" and
> its sub-functions using huge expanses of memory. I'll do some
> experimenting as soon as I can. Our 32TB system is being
> brought back to 16TB (we found a number of problems as we
> get closer and closer to the 64TB limit), but that's still
> a significant size.
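The cost being worried about here is easy to see in miniature. A minimal sketch of the memmap_init_zone loop follows; it is simplified from the real mm/page_alloc.c code, and init_single_page is a stand-in name rather than the kernel's own helper:

    /*
     * One struct page is initialized per 4KB page frame, serially, on
     * the boot cpu.  At 32TB that is ~8.6 billion iterations, which is
     * why this loop dominates boot time on huge machines.
     */
    void sketch_memmap_init_zone(unsigned long start_pfn,
                                 unsigned long nr_pages)
    {
            unsigned long pfn;

            for (pfn = start_pfn; pfn < start_pfn + nr_pages; pfn++) {
                    struct page *page = pfn_to_page(pfn);

                    init_single_page(page); /* flags, refcount, node/zone links */
            }
    }

Deferring most of that pfn range until the other cpus are running is what the rest of the thread is negotiating.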
On 6/25/2013 11:44 AM, H. Peter Anvin wrote:
> On 06/25/2013 11:40 AM, Yinghai Lu wrote:
>> On Tue, Jun 25, 2013 at 11:17 AM, H. Peter Anvin wrote:
>>> On 06/25/2013 10:35 AM, Mike Travis wrote:
>>
>>> However, please consider Ingo's counterproposal of doing this via the
>>> buddy allocator, i.e. hugepages being broken on demand.
On 06/25/2013 11:40 AM, Yinghai Lu wrote:
> On Tue, Jun 25, 2013 at 11:17 AM, H. Peter Anvin wrote:
>> On 06/25/2013 10:35 AM, Mike Travis wrote:
>
>> However, please consider Ingo's counterproposal of doing this via the
>> buddy allocator, i.e. hugepages being broken on demand. That is a
>> *very* powerful model, although would require more
On 6/25/2013 11:38 AM, Yinghai Lu wrote:
> On Tue, Jun 25, 2013 at 10:35 AM, Mike Travis wrote:
>>
>> On 6/21/2013 5:23 PM, Yinghai Lu wrote:
>>> On Fri, Jun 21, 2013 at 2:30 PM, Mike Travis wrote:
>>>> Exactly. That's why I left both low and high memory on each node.
>>>
>>> looks like you assume every node has the same ram, and before
On Tue, Jun 25, 2013 at 11:17 AM, H. Peter Anvin wrote:
> On 06/25/2013 10:35 AM, Mike Travis wrote:
> However, please consider Ingo's counterproposal of doing this via the
> buddy allocator, i.e. hugepages being broken on demand. That is a
> *very* powerful model, although would require more
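The counterproposal reads most easily next to the buddy allocator's own splitting step. A minimal sketch, assuming the allocator's struct page and per-order free lists (this mirrors the expand() step in mm/page_alloc.c; it is not code from this thread): boot seeds the free lists with huge high-order blocks, and a block is halved on demand until the requested order falls out.

    static void sketch_split_block(struct page *page, unsigned int high,
                                   unsigned int low,
                                   struct list_head *free_lists)
    {
            unsigned long size = 1UL << high;

            while (high > low) {
                    high--;
                    size >>= 1;
                    /* park the unused upper buddy on the free list of
                     * the intermediate order */
                    list_add(&page[size].lru, &free_lists[high]);
            }
            /* pages [0, 1 << low) are now the allocation */
    }

Under that model the expensive per-page initialization happens only when a huge block is first broken, instead of all at once during early boot.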
On 6/25/2013 11:17 AM, H. Peter Anvin wrote:
> On 06/25/2013 10:35 AM, Mike Travis wrote:
>>
>> The two params that I couldn't figure out how to provide except via kernel
>> param option were the memory block size (128M or 2G) and the physical
>> address space per node. The other 3 params can be automatically
>> set up by a script when the total system
On Tue, Jun 25, 2013 at 10:35 AM, Mike Travis wrote:
>
>
> On 6/21/2013 5:23 PM, Yinghai Lu wrote:
>> On Fri, Jun 21, 2013 at 2:30 PM, Mike Travis wrote:
>>> Exactly. That's why I left both low and high memory on each node.
>>
>> looks like you assume every node has the same ram, and before
On 06/25/2013 10:35 AM, Mike Travis wrote:
>
> The two params that I couldn't figure out how to provide except via kernel
> param option were the memory block size (128M or 2G) and the physical
> address space per node. The other 3 params can be automatically
> set up by a script when the total system
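The option names themselves are never visible in these truncated snippets, so the sketch below is hypothetical: memblk_size and node_aspace are invented stand-ins, wired up through the kernel's real early_param() and memparse() helpers the way hand-set boot options of this kind usually are.

    #include <linux/init.h>
    #include <linux/kernel.h>
    #include <linux/errno.h>

    static u64 memblk_size;   /* e.g. memblk_size=2G (the 128M-or-2G choice above) */
    static u64 node_aspace;   /* e.g. node_aspace=512G per-node address span */

    static int __init parse_memblk_size(char *arg)
    {
            if (!arg)
                    return -EINVAL;
            memblk_size = memparse(arg, &arg);  /* accepts K/M/G suffixes */
            return 0;
    }
    early_param("memblk_size", parse_memblk_size);

    static int __init parse_node_aspace(char *arg)
    {
            if (!arg)
                    return -EINVAL;
            node_aspace = memparse(arg, &arg);
            return 0;
    }
    early_param("node_aspace", parse_node_aspace);

Mike's point stands either way: values like these are exactly the kind a generation script, rather than a human, should compute.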
On 6/21/2013 5:23 PM, Yinghai Lu wrote:
> On Fri, Jun 21, 2013 at 2:30 PM, Mike Travis wrote:
>> On 6/21/2013 11:50 AM, Greg KH wrote:
>>> On Fri, Jun 21, 2013 at 11:44:22AM -0700, Yinghai Lu wrote:
>>>> On Fri, Jun 21, 2013 at 10:03 AM, H. Peter Anvin wrote:
>>>>> On 06/21/2013 09:51 AM, Greg KH wrote:
On Fri, Jun 21, 2013 at 2:30 PM, Mike Travis wrote:
> On 6/21/2013 11:50 AM, Greg KH wrote:
>> On Fri, Jun 21, 2013 at 11:44:22AM -0700, Yinghai Lu wrote:
>>> On Fri, Jun 21, 2013 at 10:03 AM, H. Peter Anvin wrote:
>>>> On 06/21/2013 09:51 AM, Greg KH wrote:
>>>>
>>>> I suspect the cutoff for this should be a lot lower than 8 TB even, more
>>>> like 128 GB or so.
On 6/21/2013 1:08 PM, H. Peter Anvin wrote:
> Is this init code? 32K of unconditional runtime addition isn't completely
> trivial.
The delay functions that move memory to the absent list are __init
but the read back of the list and memory insertion are not. BTW, this
option is only available
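A sketch of the split Mike describes, with invented names since the patch body is not in this archive: the recording side can be __init (its text is discarded after boot), while the table and the insertion path must stay resident because they run later, shown here against the memory-hotplug entry point add_memory().

    #include <linux/init.h>
    #include <linux/memory_hotplug.h>

    #define MAX_ABSENT 64

    /* Read back after boot, so the table cannot be __initdata. */
    static struct { u64 start, size; int nid; } absent[MAX_ABSENT];
    static int nr_absent;

    /* Boot-time side: __init, freed once boot finishes. */
    static void __init absent_record(int nid, u64 start, u64 size)
    {
            absent[nr_absent].nid   = nid;
            absent[nr_absent].start = start;
            absent[nr_absent].size  = size;
            nr_absent++;
    }

    /* Runtime side: stays resident; onlines the delayed ranges. */
    static int absent_insert_all(void)
    {
            int i, ret;

            for (i = 0; i < nr_absent; i++) {
                    ret = add_memory(absent[i].nid, absent[i].start,
                                     absent[i].size);
                    if (ret)
                            return ret;
            }
            return 0;
    }

Only absent_record() lands in the freed init sections; the table and absent_insert_all() make up the resident footprint being debated below.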
On 6/21/2013 11:50 AM, Greg KH wrote:
> On Fri, Jun 21, 2013 at 11:44:22AM -0700, Yinghai Lu wrote:
>> On Fri, Jun 21, 2013 at 10:03 AM, H. Peter Anvin wrote:
>>> On 06/21/2013 09:51 AM, Greg KH wrote:
>>>
>>> I suspect the cutoff for this should be a lot lower than 8 TB even, more
>>> like 128 GB or so.
On 6/21/2013 12:00 PM, Yinghai Lu wrote:
> On Fri, Jun 21, 2013 at 11:44 AM, Greg Kroah-Hartman wrote:
>> On Fri, Jun 21, 2013 at 11:36:21AM -0700, Yinghai Lu wrote:
>>> On Fri, Jun 21, 2013 at 9:25 AM, Nathan Zimmer wrote:
>>>> This rfc patch set delays initializing large sections of memory
>>>> until we have started cpus.
On 6/21/2013 11:36 AM, Yinghai Lu wrote:
> On Fri, Jun 21, 2013 at 9:25 AM, Nathan Zimmer wrote:
>> This rfc patch set delays initializing large sections of memory until we have
>> started cpus. This has the effect of reducing startup times on large memory
>> systems. On 16TB it can take over an hour to boot and most of that time
>> is spent initializing memory.
On 6/21/2013 10:18 AM, Nathan Zimmer wrote:
> On 06/21/2013 12:03 PM, H. Peter Anvin wrote:
>> On 06/21/2013 09:51 AM, Greg KH wrote:
>>> On Fri, Jun 21, 2013 at 11:25:32AM -0500, Nathan Zimmer wrote:
>>>> This rfc patch set delays initializing large sections of memory
>>>> until we have started cpus.
On Fri, Jun 21, 2013 at 01:28:11PM -0700, Yinghai Lu wrote:
> On Fri, Jun 21, 2013 at 12:19 PM, Nathan Zimmer wrote:
> > On 06/21/2013 02:10 PM, Yinghai Lu wrote:
> >> in this way we can keep all numa etc. in place when onlining ram, cpu,
> >> pci...
> >>
> >> For example if we have 32 sockets system, most time for boot is with
> >> *BIOS* instead of OS.
On Fri, Jun 21, 2013 at 01:08:06PM -0700, H. Peter Anvin wrote:
> Is this init code? 32K of unconditional runtime addition isn't completely
> trivial.
Some of it is init code but not all.
I am guessing 24k of that is actually runtime.
>
> Nathan Zimmer wrote:
>
> > On 06/21/2013 12:28 PM, H. Peter Anvin wrote:
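Splits like the 32K/24K figure are typically read off by comparing the two builds, e.g. running the kernel's scripts/bloat-o-meter against the old and new vmlinux, or checking section sizes with readelf -S: whatever lands in .init.text is freed after boot, and the remainder is the resident cost HPA is weighing.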
On Fri, Jun 21, 2013 at 12:19 PM, Nathan Zimmer wrote:
> On 06/21/2013 02:10 PM, Yinghai Lu wrote:
>> in this way we can keep all numa etc. in place when onlining ram, cpu,
>> pci...
>>
>> For example if we have 32 sockets system, most time for boot is with
>> *BIOS* instead of OS. In those
Is this init code? 32K of unconditional runtime addition isn't completely
trivial.
Nathan Zimmer wrote:
> On 06/21/2013 12:28 PM, H. Peter Anvin wrote:
>> On 06/21/2013 10:18 AM, Nathan Zimmer wrote:
>>
>> Since you made it a compile time option, it would be good to know how
>> much code it adds,
On 06/21/2013 12:28 PM, H. Peter Anvin wrote:
> On 06/21/2013 10:18 AM, Nathan Zimmer wrote:
>
> Since you made it a compile time option, it would be good to know how
> much code it adds, but otherwise I agree with Greg here... this really
> shouldn't need to be an option. It *especially* shouldn't need to be a
> hand-set runtime option
On 06/21/2013 02:10 PM, Yinghai Lu wrote:
> On Fri, Jun 21, 2013 at 11:50 AM, Greg KH wrote:
>> On Fri, Jun 21, 2013 at 11:44:22AM -0700, Yinghai Lu wrote:
>>> On Fri, Jun 21, 2013 at 10:03 AM, H. Peter Anvin wrote:
>>>> On 06/21/2013 09:51 AM, Greg KH wrote:
>>>>
>>>> I suspect the cutoff for this should be a lot lower than 8 TB even, more
>>>> like 128 GB or so.
On Fri, Jun 21, 2013 at 11:50 AM, Greg KH wrote:
> On Fri, Jun 21, 2013 at 11:44:22AM -0700, Yinghai Lu wrote:
>> On Fri, Jun 21, 2013 at 10:03 AM, H. Peter Anvin wrote:
>> > On 06/21/2013 09:51 AM, Greg KH wrote:
>> >
>> > I suspect the cutoff for this should be a lot lower than 8 TB even, more
>> > like 128 GB or so.
On Fri, Jun 21, 2013 at 11:44 AM, Greg Kroah-Hartman wrote:
> On Fri, Jun 21, 2013 at 11:36:21AM -0700, Yinghai Lu wrote:
>> On Fri, Jun 21, 2013 at 9:25 AM, Nathan Zimmer wrote:
>> > This rfc patch set delays initializing large sections of memory until we
>> > have started cpus. This has the effect of reducing startup times on large
>> > memory systems.
On Fri, Jun 21, 2013 at 11:44:22AM -0700, Yinghai Lu wrote:
> On Fri, Jun 21, 2013 at 10:03 AM, H. Peter Anvin wrote:
> > On 06/21/2013 09:51 AM, Greg KH wrote:
> >
> > I suspect the cutoff for this should be a lot lower than 8 TB even, more
> > like 128 GB or so. The only concern is to not set the cutoff so low
> > that we can end up running out of memory or with
On Fri, Jun 21, 2013 at 11:36:21AM -0700, Yinghai Lu wrote:
> On Fri, Jun 21, 2013 at 9:25 AM, Nathan Zimmer wrote:
> > This rfc patch set delays initializing large sections of memory until we
> > have started cpus. This has the effect of reducing startup times on large
> > memory systems.
On Fri, Jun 21, 2013 at 10:03 AM, H. Peter Anvin wrote:
> On 06/21/2013 09:51 AM, Greg KH wrote:
>
> I suspect the cutoff for this should be a lot lower than 8 TB even, more
> like 128 GB or so. The only concern is to not set the cutoff so low
> that we can end up running out of memory or with
On Fri, Jun 21, 2013 at 9:25 AM, Nathan Zimmer wrote:
> This rfc patch set delays initializing large sections of memory until we have
> started cpus. This has the effect of reducing startup times on large memory
> systems. On 16TB it can take over an hour to boot and most of that time
> is spent initializing memory.
On 06/21/2013 10:18 AM, Nathan Zimmer wrote:
>>>
>> Since you made it a compile time option, it would be good to know how
>> much code it adds, but otherwise I agree with Greg here... this really
>> shouldn't need to be an option. It *especially* shouldn't need to be a
>> hand-set runtime option
On 06/21/2013 12:03 PM, H. Peter Anvin wrote:
> On 06/21/2013 09:51 AM, Greg KH wrote:
>> On Fri, Jun 21, 2013 at 11:25:32AM -0500, Nathan Zimmer wrote:
>>> This rfc patch set delays initializing large sections of memory until we have
>>> started cpus. This has the effect of reducing startup times on
>>> large memory systems.
On 06/21/2013 09:51 AM, Greg KH wrote:
> On Fri, Jun 21, 2013 at 11:25:32AM -0500, Nathan Zimmer wrote:
>> This rfc patch set delays initializing large sections of memory until we have
>> started cpus. This has the effect of reducing startup times on large memory
>> systems. On 16TB it can take over an hour to boot and most of that time
>> is spent initializing memory.
On Fri, Jun 21, 2013 at 11:25:32AM -0500, Nathan Zimmer wrote:
> This rfc patch set delays initializing large sections of memory until we have
> started cpus. This has the effect of reducing startup times on large memory
> systems. On 16TB it can take over an hour to boot and most of that time
> is spent initializing memory.
This rfc patch set delays initializing large sections of memory until we have
started cpus. This has the effect of reducing startup times on large memory
systems. On 16TB it can take over an hour to boot and most of that time
is spent initializing memory.
We avoid that bottleneck by delaying
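The over-an-hour figure is plausible from first principles (the arithmetic here is a gloss, not from the posting): 16 TiB of RAM at 4 KiB per page is 2^32, about 4.3 billion, struct pages to initialize, and at an optimistic one million pages per second on the single boot cpu that alone is roughly 4,300 seconds, i.e. well over an hour.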