--- Christoph Lameter <[EMAIL PROTECTED]> wrote:
> On Wed, 13 Feb 2008, Christian Bell wrote:
>
> > not always be in the thousands but you're still claiming scalability
> > for a mechanism that essentially logs who accesses the regions. Then
> > there's the fact that reclaim becomes a collective
--- Christoph Lameter <[EMAIL PROTECTED]> wrote:
> On Wed, 13 Feb 2008, Kanoj Sarcar wrote:
>
> > It seems that the need is to solve potential memory
> > shortage and overcommit issues by being able to
> > reclaim pages pinned by rdma driver/hardware. Is my
> > understanding correct?
>
> Correct. If I
>
>
> George, while this is needed as pointed out in a previous message,
> due to non-contiguous physical IDs, I think the current usage is
> pretty bad (at least looking from a x86 perspective). Maybe somebody
> can chime in from a different architecture.
>
> I think that all data accesses
On Wed, 4 Apr 2001, Hubertus Franke wrote:
> Another point to raise is that the current scheduler does an exhaustive
> search for the "best" task to run. It touches every process in the
> runqueue. This is ok if the runqueue length is limited to a very small
> multiple of the #cpus. [...]
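The O(n) scan being criticized can be sketched roughly as follows. This is a toy illustration, not the kernel's code: the `goodness()` weighting here is made up for the example, though it mimics the 2.4 idea of "remaining timeslice plus an affinity bonus". The point is that every scheduling decision touches every runnable task.

```c
/* Toy O(n) "pick best task" scan in the spirit of the 2.4 scheduler.
 * The goodness() weights are illustrative, not the kernel's. */
#include <assert.h>
#include <stddef.h>

struct task {
    int counter;   /* remaining timeslice */
    int last_cpu;  /* cpu this task last ran on */
};

/* Hypothetical weight: timeslice left, plus a small bonus when the
 * task last ran on the cpu doing the scheduling (warm caches). */
static int goodness(const struct task *t, int this_cpu)
{
    int g = t->counter;
    if (t->counter && t->last_cpu == this_cpu)
        g += 1;
    return g;
}

/* Every reschedule walks the whole runqueue: O(n) in runnable tasks. */
static const struct task *pick_best(const struct task *rq, size_t n,
                                    int this_cpu)
{
    const struct task *best = NULL;
    int best_g = -1;

    for (size_t i = 0; i < n; i++) {
        int g = goodness(&rq[i], this_cpu);
        if (g > best_g) {
            best_g = g;
            best = &rq[i];
        }
    }
    return best;
}
```

With a runqueue length bounded by a small multiple of the cpu count this scan is cheap; the objection in the thread is what happens when it is not.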
I didn't see anything from Kanoj, but I did something myself for the wildfire:

ftp://ftp.us.kernel.org/pub/linux/kernel/people/andrea/kernels/v2.4/2.4.3aa1/10_numa-sched-1

This is mostly a userspace issue, not really intended as a kernel optimization
(however it's also partly a
Kanoj, our cpu-pooling + load balancing allows you to do that.
The system administrator can specify at runtime, through a
/proc filesystem interface, the cpu-pool size and whether load balancing
should take place.
Yes, I think this approach can support the various requirements
put on the
It helps by keeping the task in the same node if it cannot keep it on
the same cpu anymore.

Assume task A is sleeping and it last ran on cpu 8, node 2. It gets a wakeup
and starts running, and for some reason cpu 8 is busy while there are other
cpus idle in the system. Now with the current
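The wakeup policy being described can be sketched as a simple preference order. This is an assumption-laden toy, not the actual patch: cpu numbering, node size, and the idle-cpu bookkeeping are all invented for the example. The idea is only the fallback chain: last cpu, then an idle cpu on the same node, then any idle cpu.

```c
/* Toy node-affine cpu selection on wakeup (illustrative, not the patch). */
#include <assert.h>

#define NCPUS 16
#define CPUS_PER_NODE 4   /* made-up topology: cpu 8 lives on node 2 */

static int node_of(int cpu) { return cpu / CPUS_PER_NODE; }

/* idle[cpu] is 1 when that cpu is idle. Returns the chosen cpu, or -1
 * when nothing is idle (the caller would queue the task instead). */
static int pick_cpu(const int idle[NCPUS], int last_cpu)
{
    if (idle[last_cpu])
        return last_cpu;                 /* best: same cpu, warm caches */

    for (int cpu = 0; cpu < NCPUS; cpu++)
        if (idle[cpu] && node_of(cpu) == node_of(last_cpu))
            return cpu;                  /* next best: same node */

    for (int cpu = 0; cpu < NCPUS; cpu++)
        if (idle[cpu])
            return cpu;                  /* last resort: remote node */

    return -1;
}
```

So when cpu 8 is busy, the task is steered to another cpu on node 2 before any remote-node cpu is considered, keeping its memory accesses node-local.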
Hi,

Just a quick note to mention that I was successful in booting up a
64-node, 128p, 64G mips64 machine on a 2.4.1-based kernel. To be able
to handle the number of I/O devices connected, I had to make some
fixes in the arch/mips64 code, and a few to handle 128 cpus.
A couple of generic patches
[Added Linus and linux-kernel as I think it's of general interest]

Kanoj Sarcar wrote:
> Whether Jamie was trying to illustrate a different problem, I am not
> sure.

Yes, I was talking about pte_test_and_clear_dirty in the earlier post.
Look in mm/mprotect.c. Look at the call
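What a pte_test_and_clear_dirty-style primitive has to guarantee can be shown with a toy model. This is not the kernel's implementation; the bit value and the `pte_t` type here are invented for the example, and C11 atomics stand in for the architecture-specific instruction. The essential property is that reading and clearing the dirty bit is one atomic read-modify-write, so a concurrent setter on another cpu cannot be lost in between.

```c
/* Toy model of an atomic test-and-clear on a pte dirty bit
 * (illustrative; real kernels use per-arch primitives). */
#include <assert.h>
#include <stdatomic.h>

#define PTE_DIRTY 0x40u  /* bit position is illustrative only */

typedef _Atomic unsigned long pte_t;

/* Returns nonzero iff the dirty bit was set; clears it atomically,
 * leaving every other pte bit untouched. */
static int pte_test_and_clear_dirty(pte_t *pte)
{
    unsigned long old = atomic_fetch_and(pte, ~(unsigned long)PTE_DIRTY);
    return (old & PTE_DIRTY) != 0;
}
```

If the read and the clear were two separate operations, a dirty-bit update landing between them would vanish, which is exactly the class of race the thread is arguing about.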
Kanoj Sarcar wrote:
> Here's the important part: when processor 2 wants to set the pte's dirty
> bit, it *rereads* the pte and *rechecks* the permission bits again.
> Even though it has a non-dirty TLB entry for that pte.

That is how I read Ben LaHaise's description, and his test
Kanoj Sarcar wrote:
> Okay, I will quote from Intel Architecture Software Developer's Manual
> Volume 3: System Programming Guide (1997 print), section 3.7, page 3-27:
>
> "Bus cycles to the page directory and page tables in memory are performed
> only when the TLBs do not co
On Thu, 15 Feb 2001, Kanoj Sarcar wrote:
> No. Not all architectures have this problem. For example, if the
> Linux "dirty" (not the pte dirty) bit is managed by software, a fault
> will actually be taken when processor 2 tries to do the write. The fault
> is solely to
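The software-managed dirty scheme described above can be simulated in a few lines. This is a hedged sketch, not any architecture's real fault path: the bit layout and the `cpu_store`/`write_fault` helpers are invented for the example. The idea is that the pte stays hardware-write-protected until the first write, so that write traps and the kernel records dirtiness itself, rather than the hardware setting a dirty bit behind the kernel's back.

```c
/* Toy simulation of a software-managed dirty bit (illustrative only). */
#include <assert.h>

#define HW_WRITE 0x1u  /* hardware write permission (made-up bit) */
#define SW_DIRTY 0x2u  /* software-managed dirty flag (made-up bit) */

/* Fault handler path: record dirtiness, then grant hardware write
 * permission so later stores proceed without trapping. */
static void write_fault(unsigned *pte)
{
    *pte |= SW_DIRTY | HW_WRITE;
}

/* A store either proceeds (pte is hw-writable) or faults first.
 * Returns 1 if a fault was taken. */
static int cpu_store(unsigned *pte)
{
    if (*pte & HW_WRITE)
        return 0;        /* write allowed directly */
    write_fault(pte);    /* trap: the kernel sees the first write */
    return 1;
}
```

Because the first store always faults, the kernel never has to worry about a hardware dirty-bit update racing with its own pte manipulation on such architectures.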
> So while there may be a more elegant solution down the road, I would like
> to see the simple fix put back into 2.4. Here is the patch to essentially
> put the code back to the way it was before the S/390 merge. Patch is
> against 2.4.0-test10pre6.
>
> --- linux/mm/memory.c	Fri Oct 27
> On Thu, 12 Oct 2000, David S. Miller wrote:
>
> > page_table_lock is supposed to protect normal page table activity (like
> > what's done in page fault handler) from swapping out.
> > However, grabbing this lock in swap-out code is completely missing!
>
> Audrey,
> Some time ago, the list was very helpful in solving my programs
> failing at the limit of real memory rather than expanding into
> swap under linux 2.2.

I can't say what your actual problem is, but in previous experiments
I have seen these as the main causes:
1. shortage of real memory (ram +