On Thursday 03 January 2008 19:55, Ingo Molnar wrote:
> * Nick Piggin <[EMAIL PROTECTED]> wrote:
> > > Have you done anything more with allowing > 256 CPUS in this
> > > spinlock patch? We've been testing with 1k cpus and to verify with
> > > -mm kernel, we need to "unpatch" these spinlock changes.
> >
> > Hi Mike,
> >
> > Actually I had it in my mind that 64 [...]
On Thursday 03 January 2008 10:35, Mike Travis wrote:
> Hi Nick,
>
> Have you done anything more with allowing > 256 CPUS in this spinlock
> patch? We've been testing with 1k cpus and to verify with -mm kernel,
> we need to "unpatch" these spinlock changes.
>
> Thanks,
> Mike

Hi Mike,

Actually I [...]
Nick Piggin wrote:
> On Thursday 20 December 2007 18:04, Christoph Lameter wrote:
> > > The only reason the x86 ticket locks have the 256 CPU limit is that
> > > if they go any bigger, we can't use the partial registers so would
> > > have to have a few more instructions.
> >
> > x86_64 is going up to 4k or 16k cpus soon for our new hardware.
> >
> > > A 32 bit spinlock would allow 64K cpus (ticket loc [...]
On Thursday 20 December 2007 06:28, Peter Zijlstra wrote:
> On Wed, 2007-12-19 at 11:53 -0500, Lee Schermerhorn wrote:
> > On Wed, 2007-12-19 at 11:31 -0500, Rik van Riel wrote:
> > > On Wed, 19 Dec 2007 10:52:09 -0500
> > > Lee Schermerhorn <[EMAIL PROTECTED]> wrote:
> > > > I keep these patches up to date for testing. I don't have conclusive
> > > > evidence whether they alleviate or exacerbate the problem nor by how
> > > > much.
> > >
> > > When the queued locking from Ingo's x86 tree hits mainline,
> > > I sus [...]
On Wed, 2007-12-19 at 11:48 +1100, Nick Piggin wrote:
> On Wednesday 19 December 2007 08:15, Rik van Riel wrote:
> > I have seen soft cpu lockups in page_referenced_file() due to
> > contention on i_mmap_lock() for different pages. Making the
> > i_mmap_lock a reader/writer lock should increase parallelism
> > in vmscan for file backed pages mapped into many address spaces.
> >
> > rmap: try_to_unmap_file() required new cond_resched_rwlock().
> > To reduce code duplication, I recast cond_resched_lock() as a
> > [static inline] wrapper around reworked cond_resched_lock() =>
> > __cond_resched_lock(void *lock, int type).
> > New cond_resched_rwlock() implemented as anoth [...]

Hi [...]
On Wednesday 19 December 2007 08:15, Rik van Riel wrote:
> I have seen soft cpu lockups in page_referenced_file() due to
> contention on i_mmap_lock() for different pages. Making the
> i_mmap_lock a reader/writer lock should increase parallelism
> in vmscan for file backed pages mapped into many address spaces.
>
> Read lock the i_mmap_lock for all usage except:
> 1 [...]