diff --git a/Makefile b/Makefile
index 947ff3c..b3806cb 100644
--- a/Makefile
+++ b/Makefile
@@ -1,7 +1,7 @@
VERSION = 2
PATCHLEVEL = 6
SUBLEVEL = 20
-EXTRAVERSION = .15
+EXTRAVERSION = .16
NAME = Homicidal Dwarf Hamster
# *DOCUMENTATION*
diff --git a/arch/i386/kernel/entry.S
On Thu, 16 Aug 2007, Satyam Sharma wrote:
> Hi Bill,
>
>
> On Wed, 15 Aug 2007, Bill Fink wrote:
>
> > On Wed, 15 Aug 2007, Satyam Sharma wrote:
> >
> > > (C)
> > > $ cat tp3.c
> > > int a;
> > >
> > > void func(void)
> > > {
> > > *(volatile int *)&a = 10;
> > > *(volatile int *)&a = 20;
I've just released Linux 2.6.20.16. This version catches up with 2.6.21.7.
I hope to issue newer releases soon with next batches of pending patches.
I'll also be replying to this message with a copy of the patch between
2.6.20.15 and 2.6.20.16.
The patch and changelog will appear soon at the
On Thu, Aug 16, 2007 at 02:11:43PM +1000, Paul Mackerras wrote:
>
> The uses of atomic_read where one might want it to allow caching of
> the result seem to me to fall into 3 categories:
>
> 1. Places that are buggy because of a race arising from the way it's
> used.
>
> 2. Places where there
On Thu, Aug 16, 2007 at 02:34:25PM +1000, Paul Mackerras wrote:
>
> I'm talking about this situation:
>
> CPU 0 comes into __sk_stream_mem_reclaim, reads memory_allocated, but
> then before it can do the store to *memory_pressure, CPUs 1-1023 all
> go through sk_stream_mem_schedule, collectively
On Wed, Aug 08, 2007 at 01:47:45PM -0500, Adam Litke wrote:
> Hello all. In an effort to understand how the page tables are laid out
> across various architectures I put together some diagrams. I have
> posted them on the linux-mm wiki: http://linux-mm.org/PageTableStructure
> and I hope they
blk_recalc_rq_segments calls blk_recount_segments on each bio,
then does some extra calculations to handle segments that overlap
two bios.
If we merge the code from blk_recount_segments into
blk_recalc_rq_segments, we can process the whole request one bio_vec
at a time, and not need the messy
(almost) every usage of rq_for_each_bio wraps a usage of
bio_for_each_segment, so these can be combined into
rq_for_each_segment.
We get it to fill in a bio_vec structure rather than provide a
pointer, as future changes to make bi_io_vec immutable will require
that.
The one place where
Almost every call to bio_data is for the first bio
in a request. A future patch will add some accounting
information to 'struct request' which will need to be
used to find the start of the request in the bio.
So replace bio_data with blk_rq_data which takes a 'struct request *'
The one
All calls to bio_cur_sectors are for the first bio in a 'struct request'.
A future patch will make the discovery of this number dependent on
information in the request. So change the function to take a
'struct request *' instead of a 'struct bio *', and make it a real
function as more code
ll_merge_requests_fn can update bi_hw_*_size in one case where we end
up not merging. This is wrong.
Signed-off-by: Neil Brown <[EMAIL PROTECTED]>
### Diffstat output
./block/ll_rw_blk.c |4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff .prev/block/ll_rw_blk.c
Hi Jens,
I wonder if you would accept these patches to the block layer.
They are, as far as I can tell, quite uncontroversial and provide
good cleanups.
The first is a minor bug-fix.
The next two replace helper functions that take a bio (always the first
bio of a request), to instead take a
Hi Bill,
On Wed, 15 Aug 2007, Bill Fink wrote:
> On Wed, 15 Aug 2007, Satyam Sharma wrote:
>
> > (C)
> > $ cat tp3.c
> > int a;
> >
> > void func(void)
> > {
> > *(volatile int *)&a = 10;
> > *(volatile int *)&a = 20;
> > }
> > $ gcc -Os -S tp3.c
> > $ cat tp3.s
> > ...
> > movl    $10, a
On 8/15/07, Paul Jackson <[EMAIL PROTECTED]> wrote:
> Lee wrote:
> > [altho' methinks CPUSET should select CONTAINERS rather than
> > depend on it...]
>
> Good point -- what do you think, Paul Menage?
That's how I made the configs originally; akpm asked me to invert the
dependencies to use
Mmm, slow-as-dirt hotel wireless. What fun...
On Aug 15, 2007, at 18:14:44, Phillip Susi wrote:
Kyle Moffett wrote:
I am well aware of that, I'm simply saying that sucks. Doing a
recursive chmod or setfacl on a large directory tree is slow as
all hell.
Doing it in the kernel won't make
Herbert Xu writes:
> > You mean it's intended that *sk->sk_prot->memory_pressure can end up
> > as 1 when sk->sk_prot->memory_allocated is small (less than
> > ->sysctl_mem[0]), or as 0 when ->memory_allocated is large (greater
> > than ->sysctl_mem[2])? Because that's the effect of the current
Joe Perches wrote:
On Wed, 2007-08-15 at 19:19 -0700, Kok, Auke wrote:
Joe Perches wrote:
On Wed, 2007-08-15 at 19:58 -0400, Dave Jones wrote:
There's more than a few of these (not inspected).
$ egrep -r --include=*.c "\bif[[:space:]]*\([^\)]*\)[[:space:]]*\;" *
arch/sh/boards/se/7343/io.c:
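That pattern flags an `if` whose entire body is a stray semicolon. A quick sanity check of what it does and does not match (same ERE, fed known-bad and known-good lines):

```shell
# The regex from the mail above: if (...) followed directly by ';'.
pattern='\bif[[:space:]]*\([^\)]*\)[[:space:]]*\;'

# A genuine empty-body if -- should match (count 1):
printf 'if (err != 0) ;\n' | grep -cE "$pattern"

# A normal statement body -- should not match (count 0):
printf 'if (err != 0) ret = 1;\n' | grep -cE "$pattern" || true
```

Note that `\b` is a GNU grep extension, so the pattern assumes GNU egrep as used in the original command.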
Satyam Sharma writes:
> Anyway, the problem, of course, is that this conversion to a stronger /
> safer-by-default behaviour doesn't happen with zero cost to performance.
> Converting atomic ops to "volatile" behaviour did add ~2K to kernel text
> for archs such as i386 (possibly to important
On Thu, Aug 16, 2007 at 01:48:32PM +1000, Paul Mackerras wrote:
> Herbert Xu writes:
>
> > If you're referring to the code in sk_stream_mem_schedule
> > then it's working as intended. The atomicity guarantees
>
> You mean it's intended that *sk->sk_prot->memory_pressure can end up
> as 1 when
Herbert Xu writes:
> If you're referring to the code in sk_stream_mem_schedule
> then it's working as intended. The atomicity guarantees
You mean it's intended that *sk->sk_prot->memory_pressure can end up
as 1 when sk->sk_prot->memory_allocated is small (less than
->sysctl_mem[0]), or as 0
On Wed, 15 Aug 2007, Satyam Sharma wrote:
> (C)
> $ cat tp3.c
> int a;
>
> void func(void)
> {
> *(volatile int *)&a = 10;
> *(volatile int *)&a = 20;
> }
> $ gcc -Os -S tp3.c
> $ cat tp3.s
> ...
> movl    $10, a
> movl    $20, a
> ...
I'm curious about one minor tangential point. Why,
On Thu, Aug 16, 2007 at 01:15:05PM +1000, Paul Mackerras wrote:
>
> But others can also reduce the reservation. Also, the code sets and
> clears *sk->sk_prot->memory_pressure nonatomically with respect to the
> reads of sk->sk_prot->memory_allocated, so in fact the code doesn't
> guarantee any
On Wed, 15 Aug 2007 15:48:15 PDT, Marc Perkel said:
> > Consider the rules:
> >
> > peter '*a*' can create
> > peter '*b*' cannot create
> >
> > Peter tries to create 'foo-ab-bar' - is he allowed
> > to or not?
> >
>
> First - I'm proposing a concept, not writing the
> implementation of the
Thanks! Seems like checking for this is in the air, I just applied an
identical patch from Ilpo Järvinen.
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at http://vger.kernel.org/majordomo-info.html
Linus, please pull from
master.kernel.org:/pub/scm/linux/kernel/git/roland/infiniband.git for-linus
This tree is also available from kernel.org mirrors at:
git://git.kernel.org/pub/scm/linux/kernel/git/roland/infiniband.git
for-linus
This is basically a resend of the small fixes for
On Thu, Aug 16, 2007 at 01:23:06PM +1000, Paul Mackerras wrote:
>
> In particular, atomic_read seems to lend itself to buggy uses. People
> seem to do things like:
>
> atomic_add(, something);
> if (atomic_read() > something_else) ...
If you're referring to the code in
On Wed, Aug 15, 2007 at 03:12:06PM +0200, Peter Zijlstra wrote:
> On Wed, 2007-08-15 at 14:22 +0200, Nick Piggin wrote:
> > On Tue, Aug 14, 2007 at 07:21:03AM -0700, Christoph Lameter wrote:
> > > The following patchset implements recursive reclaim. Recursive reclaim
> > > is necessary if we run
>It's not about being a niche. It's about creating a maintainable
>software net stack that has predictable behavior.
>
>Needing to reach out of the RDMA sandbox and reserve net stack resources
>away from itself travels a path we've consistently avoided.
We need to ensure that we're also creating
Signed-off-by: Denis Cheng <[EMAIL PROTECTED]>
---
fs/super.c |7 ++-
1 files changed, 2 insertions(+), 5 deletions(-)
diff --git a/fs/super.c b/fs/super.c
index fc8ebed..f287c15 100644
--- a/fs/super.c
+++ b/fs/super.c
@@ -334,15 +334,12 @@ struct super_block *sget(struct
Herbert Xu writes:
> > Are you sure? How do you know some other CPU hasn't changed the value
> > in between?
>
> Yes I'm sure, because we don't care if others have increased
> the reservation.
But others can also reduce the reservation. Also, the code sets and
clears
Christoph Lameter writes:
> > But I have to say that I still don't know of a single place
> > where one would actually use the volatile variant.
>
> I suspect that what you say is true after we have looked at all callers.
It seems that there could be a lot of places where atomic_t is used in
a
Satyam Sharma writes:
> I can't speak for this particular case, but there could be similar code
> examples elsewhere, where we do the atomic ops on an atomic_t object
> inside a higher-level locking scheme that would take care of the kind of
> problem you're referring to here. It would be useful
On 8/14/07, Adrian Bunk <[EMAIL PROTECTED]> wrote:
> According to git, the only one who touched this file during the last
> 5 years was me when removing drivers...
>
> modinfo offers less ancient information.
>
> Signed-off-by: Adrian Bunk <[EMAIL PROTECTED]>
>
Fine by me for any of the old stuff
On Wed, 2007-08-15 at 19:19 -0700, Kok, Auke wrote:
> Joe Perches wrote:
> > On Wed, 2007-08-15 at 19:58 -0400, Dave Jones wrote:
> > There's more than a few of these (not inspected).
> > $ egrep -r --include=*.c "\bif[[:space:]]*\([^\)]*\)[[:space:]]*\;" *
> > arch/sh/boards/se/7343/io.c:if
On Thu, Aug 16, 2007 at 09:37:01AM +0800, Fengguang Wu wrote:
> 3) A 64bit number can be encoded in hex in 8 bytes, no more than its
~~ sorry, I'm a fool!
> binary presentation. And often the text string will be much shorter
> because of the common
> Needing to reach out of the RDMA sandbox and reserve net stack
> resources away from itself travels a path we've consistently avoided.
Where did the idea of an "RDMA sandbox" come from? Obviously no one
disagrees with keeping things clean and maintainable, but the idea
that RDMA is a
On Thu, 16 Aug 2007 10:58:41 +0800 ye janboe wrote:
> I want to down linux-arm git repos, but I do not have an account on
> master machine.
see http://www.kernel.org/faq/#account
---
~Randy
*** Remember to use Documentation/SubmitChecklist when testing your code ***
I want to down linux-arm git repos, but I do not have an account on
master machine.
thanks
Yuanbo
Please read
On Wed, Aug 15, 2007 at 12:43:26PM -0500, Dean Nelson wrote:
> The calculation of pgoff in do_linear_fault() should use PAGE_SHIFT and not
> PAGE_CACHE_SHIFT since vma->vm_pgoff is in units of PAGE_SIZE and not
> PAGE_CACHE_SIZE. At the moment linux/pagemap.h has PAGE_CACHE_SHIFT defined
> as
On Tue, Aug 14, 2007 at 08:30:21AM -0700, Christoph Lameter wrote:
> This is the extended version of the reclaim patchset. It enables reclaim from
> clean file backed pages during GFP_ATOMIC allocs. A bit invasive since
> many locks must now be taken with saving flags. But it works.
>
> Tested by
On Tue, Aug 14, 2007 at 02:41:21PM -0500, Adam Litke wrote:
> It seems a simple mistake was made when converting follow_hugetlb_page()
> over to the VM_FAULT flags bitmask stuff:
> (commit 83c54070ee1a2d05c89793884bea1a03f2851ed4).
>
> By using the wrong bitmask, hugetlb_fault() failures
Repost of http://lkml.org/lkml/2007/8/10/472 made available by request.
The locking used by get_random_bytes() can conflict with the
preempt_disable() and synchronize_sched() form of RCU. This patch changes
rcutorture's RNG to gather entropy from the new cpu_clock() interface
(relying on
On Thu, 16 Aug 2007, Satyam Sharma wrote:
> On Thu, 16 Aug 2007, Paul Mackerras wrote:
> > Herbert Xu writes:
> >
> > > See sk_stream_mem_schedule in net/core/stream.c:
> > >
> > > /* Under limit. */
> > > if (atomic_read(sk->sk_prot->memory_allocated) <
> > >
Christoph wrote (and Lee quoted):
> Looks like we need to fix cpuset nodemasks for the !NUMA case then?
> It cannot expect to find valid nodemasks if !NUMA.
The code in kernel/cpuset.c should be written as if there are always
valid nodemasks. In the case of !NUMA, it will happen that there is
On Thu, Aug 16, 2007 at 10:11:05AM +0800, Herbert Xu wrote:
> On Thu, Aug 16, 2007 at 12:05:56PM +1000, Paul Mackerras wrote:
> > Herbert Xu writes:
> >
> > > See sk_stream_mem_schedule in net/core/stream.c:
> > >
> > > /* Under limit. */
> > > if
On Wed, Aug 15, 2007 at 06:41:40PM -0700, Christoph Lameter wrote:
> On Wed, 15 Aug 2007, Paul E. McKenney wrote:
>
> > Understood. My point is not that the impact is precisely zero, but
> > rather that the impact on optimization is much less hurtful than the
> > problems that could arise
On Thu, Aug 16, 2007 at 03:30:44AM +0200, Segher Boessenkool wrote:
> >>>Part of the motivation here is to fix heisenbugs. If I knew where
> >>>they
> >>
> >>By the same token we should probably disable optimisations
> >>altogether since that too can create heisenbugs.
> >
> >Precisely the point
I was putting together an early printk implementation for the Blackfin, and
was wondering what the expected behaviour was in this situation.
When I set up my bootargs earlyprintk=serial,ttyBF0,57600 and have no console
defined (no graphical console, no serial console).
based on the patch:
On Thu, Aug 16, 2007 at 11:09:25AM +1000, Nick Piggin wrote:
> Paul E. McKenney wrote:
> >On Wed, Aug 15, 2007 at 11:30:05PM +1000, Nick Piggin wrote:
>
> >>Especially since several big architectures don't have volatile in their
> >>atomic_get and _set, I think it would be a step backwards to add
Steve Wise wrote:
David Miller wrote:
From: Sean Hefty <[EMAIL PROTECTED]>
Date: Thu, 09 Aug 2007 14:40:16 -0700
Steve Wise wrote:
Any more comments?
Does anyone have ideas on how to reserve the port space without using
a struct socket?
How about we just remove the RDMA stack
Segher Boessenkool wrote:
Part of the motivation here is to fix heisenbugs. If I knew where they
By the same token we should probably disable optimisations
altogether since that too can create heisenbugs.
Almost everything is a tradeoff; and so is this. I don't
believe most people would
On Thu, Aug 16, 2007 at 03:23:28AM +0200, Segher Boessenkool wrote:
> No; compilation units have nothing to do with it, GCC can optimise
> across compilation unit boundaries just fine, if you tell it to
> compile more than one compilation unit at once.
> >>>
> >>>Last I checked, the
On Thu, 16 Aug 2007, Paul Mackerras wrote:
> Herbert Xu writes:
>
> > See sk_stream_mem_schedule in net/core/stream.c:
> >
> > /* Under limit. */
> > if (atomic_read(sk->sk_prot->memory_allocated) <
> > sk->sk_prot->sysctl_mem[0]) {
> > if
Joe Perches wrote:
On Wed, 2007-08-15 at 19:58 -0400, Dave Jones wrote:
Signed-off-by: Dave Jones <[EMAIL PROTECTED]>
diff --git a/drivers/infiniband/hw/mlx4/mad.c b/drivers/infiniband/hw/mlx4/mad.c
index 3330917..0ed02b7 100644
--- a/drivers/infiniband/hw/mlx4/mad.c
+++
Herbert Xu wrote:
But I have to say that I still don't know of a single place
where one would actually use the volatile variant.
Given that many of the existing users do currently have "volatile", are
you comfortable simply removing that behaviour from them? Are you sure
that you will not
On Thu, 16 Aug 2007, Herbert Xu wrote:
> > Do we have a consensus here? (hoping against hope, probably :-)
>
> I can certainly agree with this.
I agree too.
> But I have to say that I still don't know of a single place
> where one would actually use the volatile variant.
I suspect that what
On Wed, 15 Aug 2007, Christoph Lameter wrote:
> On Thu, 16 Aug 2007, Paul Mackerras wrote:
>
> > > We don't need to reload sk->sk_prot->memory_allocated here.
> >
> > Are you sure? How do you know some other CPU hasn't changed the value
> > in between?
>
> The cpu knows because the cacheline
On Thu, 16 Aug 2007, Paul Mackerras wrote:
> > We don't need to reload sk->sk_prot->memory_allocated here.
>
> Are you sure? How do you know some other CPU hasn't changed the value
> in between?
The cpu knows because the cacheline was not invalidated.
On Thu, Aug 16, 2007 at 12:05:56PM +1000, Paul Mackerras wrote:
> Herbert Xu writes:
>
> > See sk_stream_mem_schedule in net/core/stream.c:
> >
> > /* Under limit. */
> > if (atomic_read(sk->sk_prot->memory_allocated) <
> > sk->sk_prot->sysctl_mem[0]) {
> > if
On Thu, Aug 16, 2007 at 07:45:44AM +0530, Satyam Sharma wrote:
>
> Completely agreed, again. To summarize again (had done so about ~100 mails
> earlier in this thread too :-) ...
>
> atomic_{read,set}_volatile() -- guarantees volatility also along with
> atomicity (the two _are_ different
A volatile default would disable optimizations for atomic_read.
atomic_read without volatile would allow for full optimization by the
compiler. Seems that this is what one wants in many cases.
Name one such case.
An atomic_read should do a load from memory. If the programmer puts
an
Herbert Xu writes:
> See sk_stream_mem_schedule in net/core/stream.c:
>
> /* Under limit. */
> if (atomic_read(sk->sk_prot->memory_allocated) <
> sk->sk_prot->sysctl_mem[0]) {
> if (*sk->sk_prot->memory_pressure)
>
On Wed, 15 Aug 2007, Christoph Lameter wrote:
> On Wed, 15 Aug 2007, Paul E. McKenney wrote:
>
> > Understood. My point is not that the impact is precisely zero, but
> > rather that the impact on optimization is much less hurtful than the
> > problems that could arise otherwise, particularly
On Thu, Aug 16, 2007 at 11:51:42AM +1000, Paul Mackerras wrote:
>
> Name one such case.
See sk_stream_mem_schedule in net/core/stream.c:
/* Under limit. */
if (atomic_read(sk->sk_prot->memory_allocated) <
sk->sk_prot->sysctl_mem[0]) {
if
Christoph Lameter writes:
> A volatile default would disable optimizations for atomic_read.
> atomic_read without volatile would allow for full optimization by the
> compiler. Seems that this is what one wants in many cases.
Name one such case.
An atomic_read should do a load from memory. If
On Wed, 15 Aug 2007, Paul E. McKenney wrote:
> Understood. My point is not that the impact is precisely zero, but
> rather that the impact on optimization is much less hurtful than the
> problems that could arise otherwise, particularly as compilers become
> more aggressive in their
"compilation unit" is a C standard term. It typically boils down
to "single .c file".
As you mentioned later, "single .c file with all the other files
(headers
or other .c files) that it pulls in via #include" is actually
"translation
unit", both in the C standard as well as gcc docs.
Hey Ingo, Thomas,
I was playing with the latency tracer on 2.6.23-rc2-rt2 while a "make
-j8" was going on in the background and the box hung with this on the
console:
[ BUG: circular locking deadlock detected! ]
Herbert Xu wrote:
On Wed, Aug 15, 2007 at 01:02:23PM -0400, Chris Snook wrote:
Herbert Xu wrote:
I'm still unconvinced why we need this because nobody has
brought up any examples of kernel code that legitimately
need this.
There's plenty of kernel code that *wants* this though. If we can
On Tue, Aug 14, 2007 at 11:26:18AM -0500, Matt Mackall wrote:
> On Tue, Aug 14, 2007 at 04:52:04PM +0800, Fengguang Wu wrote:
> > Hello,
> >
> > Matt Mackall brings us many memory-footprint-optimization
> > opportunities with his pagemap/kpagemap patches. However I wonder if
> > the binary
Part of the motivation here is to fix heisenbugs. If I knew where
they
By the same token we should probably disable optimisations
altogether since that too can create heisenbugs.
Precisely the point -- use of volatile (whether in casts or on asms)
in these cases are intended to disable those
Part of the motivation here is to fix heisenbugs. If I knew where
they
By the same token we should probably disable optimisations
altogether since that too can create heisenbugs.
Almost everything is a tradeoff; and so is this. I don't
believe most people would find disabling all compiler
No; compilation units have nothing to do with it, GCC can optimise
across compilation unit boundaries just fine, if you tell it to
compile more than one compilation unit at once.
Last I checked, the Linux kernel build system did compile each .c
file
as a separate compilation unit.
I have
Segher Boessenkool wrote:
Please check the definition of "cache coherence".
Which of the twelve thousand such definitions? :-)
Every definition I have seen says that writes to a single memory
location have a serial order as seen by all CPUs, and that a read
will return the most recent
On Thu, Aug 16, 2007 at 08:53:16AM +0800, Herbert Xu wrote:
> On Wed, Aug 15, 2007 at 05:49:50PM -0700, Paul E. McKenney wrote:
> > On Thu, Aug 16, 2007 at 08:30:23AM +0800, Herbert Xu wrote:
> >
> > > Thanks. But I don't need a summary of the thread, I'm asking
> > > for an extant code snippet
On Wed, Aug 15, 2007 at 05:59:41PM -0700, Christoph Lameter wrote:
> On Wed, 15 Aug 2007, Paul E. McKenney wrote:
>
> > The volatile cast should not disable all that many optimizations,
> > for example, it is much less hurtful than barrier(). Furthermore,
> > the main optimizations disabled
Was playing with sched_smt_power_savings/sched_mc_power_savings and found
out that while the scheduler domains are reconstructed when sysfs settings
change, rebalance_domains() can get triggered with null domain on other cpus,
which is setting next_balance to jiffies + 60*HZ. Resulting in no
On Wed, 15 Aug 2007, Segher Boessenkool wrote:
> [...]
> > BTW:
> >
> > #define atomic_read(a) (*(volatile int *)&(a))
> > #define atomic_set(a,i) (*(volatile int *)&(a) = (i))
> >
> > int a;
> >
> > void func(void)
> > {
> > int b;
> >
> > b = atomic_read(a);
> >
Paul E. McKenney wrote:
On Wed, Aug 15, 2007 at 11:30:05PM +1000, Nick Piggin wrote:
Especially since several big architectures don't have volatile in their
atomic_get and _set, I think it would be a step backwards to add them in
as a "just in case" thing now (unless there is a better reason).
Ingo, let me know if there any side effects of this change. Thanks.
---
On a four package system with HT - HT load balancing optimizations
were broken. For example, if two tasks end up running on two logical
threads of one of the packages, scheduler is not able to pull one of
the tasks to a
Hi Herbert,
On Thu, 16 Aug 2007, Herbert Xu wrote:
> On Thu, Aug 16, 2007 at 06:28:42AM +0530, Satyam Sharma wrote:
> >
> > > The udelay itself certainly should have some form of cpu_relax in it.
> >
> > Yes, a form of barrier() must be present in mdelay() or udelay() itself
> > as you say,
On Wed, 15 Aug 2007, Paul E. McKenney wrote:
> The volatile cast should not disable all that many optimizations,
> for example, it is much less hurtful than barrier(). Furthermore,
> the main optimizations disabled (pulling atomic_read() and atomic_set()
> out of loops) really do need to be
On 08/16/2007 02:26 AM, David P. Reed wrote:
My mention of NMI (which by definition can't be masked) is because NMI
Well, not by the CPU it can't, but on a PC, masking NMIs is a simple matter
of setting bit 7 of I/O port 0x70 to 1 (it seems the kernel does not provide
an interface for it
On Wed, Aug 15, 2007 at 05:49:50PM -0700, Paul E. McKenney wrote:
> On Thu, Aug 16, 2007 at 08:30:23AM +0800, Herbert Xu wrote:
>
> > Thanks. But I don't need a summary of the thread, I'm asking
> > for an extant code snippet in our kernel that benefits from
> > the volatile change and is not
On Wed, Aug 15, 2007 at 05:42:07PM -0700, Christoph Lameter wrote:
> On Wed, 15 Aug 2007, Paul E. McKenney wrote:
>
> > Seems to me that we face greater chance of confusion without the
> > volatile than with, particularly as compiler optimizations become
> > more aggressive. Yes, we could simply
On Thu, Aug 16, 2007 at 06:28:42AM +0530, Satyam Sharma wrote:
>
> > The udelay itself certainly should have some form of cpu_relax in it.
>
> Yes, a form of barrier() must be present in mdelay() or udelay() itself
> as you say, having it in __const_udelay() is *not* enough (superflous
>
On Thu, Aug 16, 2007 at 08:30:23AM +0800, Herbert Xu wrote:
> On Wed, Aug 15, 2007 at 05:23:10PM -0700, Paul E. McKenney wrote:
> > On Thu, Aug 16, 2007 at 08:12:48AM +0800, Herbert Xu wrote:
> > > On Wed, Aug 15, 2007 at 04:53:35PM -0700, Paul E. McKenney wrote:
> > > >
> > > > > > Communicating
[ Sorry for empty subject line in previous mail. I intended to make
a patch so cleared it to change it, but ultimately neither made
a patch nor restored subject line. Done that now. ]
On Thu, 16 Aug 2007, Herbert Xu wrote:
> On Thu, Aug 16, 2007 at 06:06:00AM +0530, Satyam Sharma wrote:
> >
On Wed, 15 Aug 2007, Paul E. McKenney wrote:
> Seems to me that we face greater chance of confusion without the
> volatile than with, particularly as compiler optimizations become
> more aggressive. Yes, we could simply disable optimization, but
> optimization can be quite helpful.
A volatile
On Wed, 2007-08-15 at 19:58 -0400, Dave Jones wrote:
> Signed-off-by: Dave Jones <[EMAIL PROTECTED]>
>
> diff --git a/drivers/infiniband/hw/mlx4/mad.c
> b/drivers/infiniband/hw/mlx4/mad.c
> index 3330917..0ed02b7 100644
> --- a/drivers/infiniband/hw/mlx4/mad.c
> +++
On Thu, 16 Aug 2007, Paul Mackerras wrote:
> Those barriers are for when we need ordering between atomic variables
> and other memory locations. An atomic variable by itself doesn't and
> shouldn't need any barriers for other CPUs to be able to see what's
> happening to it.
It does not need any
On Wed, Aug 15, 2007 at 05:26:34PM -0700, Christoph Lameter wrote:
> On Thu, 16 Aug 2007, Paul Mackerras wrote:
>
> > In the kernel we use atomic variables in precisely those situations
> > where a variable is potentially accessed concurrently by multiple
> > CPUs, and where each CPU needs to see
When a raid1 array is reshaped (number of drives changed),
the list of devices is compacted, so that slots for missing
devices are filled with working devices from later slots.
This requires the "rd%d" symlinks in sysfs to be updated.
Signed-off-by: Neil Brown <[EMAIL PROTECTED]>
### Diffstat
Commit 1757128438d41670ded8bc3bc735325cc07dc8f9 was slightly bad. If
an array has a write-intent bitmap, and you remove a drive, then readd
it, only the changed parts should be resynced. However after the
above commit, this only works if the array has not been shut down and
restarted.
This is
Following 2 patches contain bugfixes for md. Both apply to earlier
kernels, but probably aren't significant enough for -stable (no oops,
no data corruption, no security hole).
They should go in 2.6.23 though.
Thanks,
NeilBrown
[PATCH 001 of 2] md: Make sure a re-add after a restart honours
Christoph Lameter writes:
> On Thu, 16 Aug 2007, Paul Mackerras wrote:
>
> > In the kernel we use atomic variables in precisely those situations
> > where a variable is potentially accessed concurrently by multiple
> > CPUs, and where each CPU needs to see updates done by other CPUs in a
> >
On Thu, Aug 16, 2007 at 06:06:00AM +0530, Satyam Sharma wrote:
>
> that are:
>
> while ((atomic_read(&waiting_for_crash_ipi) > 0) && msecs) {
> mdelay(1);
> msecs--;
> }
>
> where mdelay() becomes __const_udelay() which happens to be in another
> translation unit
On Wed, Aug 15, 2007 at 05:23:10PM -0700, Paul E. McKenney wrote:
> On Thu, Aug 16, 2007 at 08:12:48AM +0800, Herbert Xu wrote:
> > On Wed, Aug 15, 2007 at 04:53:35PM -0700, Paul E. McKenney wrote:
> > >
> > > > > Communicating between process context and interrupt/NMI handlers using
> > > > >
On Wed, 15 Aug 2007, Heiko Carstens wrote:
> [...]
> Btw.: we still have
>
> include/asm-i386/mach-es7000/mach_wakecpu.h: while (!atomic_read(deassert));
> include/asm-i386/mach-default/mach_wakecpu.h: while (!atomic_read(deassert));
>
> Looks like they need to be fixed as well.
[PATCH]
Alan:
Thanks for the comment. I will code a patch, and include a sanity check
as you suggested, and send it for review. Just to clarify one concern
your note raised:
I understand that SMM/SMI servicing can take a long time, but SMM/SMI
shouldn't happen while interrupts are masked using