2007/8/26, Robert Hancock <[EMAIL PROTECTED]>:
>
> It looks like you have some CONFIG_IDE options enabled in your kernel
> configuration that result in drivers/ide trying to drive part or all of
> that controller, preventing libata from doing so. Likely the easiest
> thing to do is just set CONFIG_IDE=n.
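Concretely, the relevant part of the kernel configuration might look like this (a sketch only; the libata driver to enable depends on the controller, and CONFIG_SATA_NV below is just an illustration):

```
# Disable the legacy drivers/ide layer so libata can claim the controller
# (in a real .config a disabled option appears exactly like this line)
# CONFIG_IDE is not set

# Keep libata and the SCSI disk layer enabled so disks appear as /dev/sda*
CONFIG_ATA=y
CONFIG_SATA_NV=y        # example controller driver; pick the one for your chipset
CONFIG_BLK_DEV_SD=y
```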
I have been using 2.6.21.1. It seems to work well, that is, all my
disk partitions are mapped as "/dev/sda*" and the performance looks
good. After I upgraded to 2.6.22.5 with the exact same configuration,
all the disk devices turned into "/dev/hda*" and the performance degraded
noticeably.
While I boot with
and substitute the term
*context* in my previous mail with what you name it. But I believe my
other explanation still holds, right?
And again, if I am forced to use your terminology system anyway, I
would also agree with your other point regarding hardware.
2007/5/18, Phillip Susi <[EMAIL PROTECTED]>:
Dong Feng
2007/5/16, Phillip Susi <[EMAIL PROTECTED]>:
Dong Feng wrote:
> If what you say were true, then an ISR would be running in the same
> context as the interrupted process.
Yes, and it is, as others have said in this thread, which is a good
reason why ISRs can't sleep.
> But please check any article
2007/5/16, Phillip Susi <[EMAIL PROTECTED]>:
Dong Feng wrote:
>> Doesn't it run in the current process's context?
>>
>
> No. I think the concept of process context is a higher-level logical
> concept. Though the interrupt shares the stack with the interrupted
> process, in my opinion it logically does not share
Yes, you are right in this regard. An interrupt handler does steal the
time slice from the interrupted process.
So now I think it is considered an acceptable deviation in calculating
the process run time, as well as in determining process scheduling, because
an ISR should take a very short time to
>
> I don't think so but I am not sure.
Otherwise, I think so. How can an interrupt's execution time go
unaccounted then?
I guess it does not; only the current process's running
time is accounted for.
Thoughts?
The interrupt handler's execution time will definitely defer the
execution of the
Good enough, but I have a query regarding this then.
On an 8K kernel stack system, don't interrupts share the stack associated
with the current process which was interrupted?
Yes, I think so.
Doesn't an interrupt steal the CPU time slice allocated to the running
process in order to run?
I don't
I agree that the reason an interrupt cannot sleep is because an
interrupt is not associated with any context. But I do not agree that
it is specifically because the scheduler cannot *resume* the context.
In early versions, the ISR always borrowed the stack of the currently
running process, so if
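The no-sleep constraint shows up directly in driver code. A kernel-style pseudocode sketch of what an ISR may and may not do (struct my_dev, its fields, and handle_data() are hypothetical, not from any real driver):

```c
/* Kernel-style pseudocode sketch; my_dev and handle_data() are hypothetical. */
static irqreturn_t my_isr(int irq, void *dev_id)
{
        struct my_dev *dev = dev_id;
        void *buf;

        spin_lock(&dev->lock);            /* fine: spinlocks never sleep */

        /* mutex_lock(&dev->mtx);            BUG: may sleep, and there is
         *                                   no process context here that
         *                                   the scheduler could suspend
         *                                   and later resume             */

        buf = kmalloc(64, GFP_ATOMIC);    /* fine: fails rather than
                                             sleeping; GFP_KERNEL here
                                             could block on reclaim      */
        if (buf)
                handle_data(dev, buf);

        spin_unlock(&dev->lock);
        return IRQ_HANDLED;
}
```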
My understanding is as follows.
Whenever kernel code sleeps, it means the process that most recently ran
in user space will have to wait for the event on which the kernel code
sleeps.
It makes sense for an exception handler to sleep because an exception
handler always serves the latest process
Thank you very much.
2007/2/23, Davide Libenzi davidel@xmailserver.org:
On Fri, 23 Feb 2007, Dong Feng wrote:
> The __syscallN series macros have disappeared in
> include/asm-i386/unistd.h. Why? I occasionally want to add and use
> some new system calls, mainly for debug use. Now I can not access the
> system call I added from user space.
cond_resched() checks and conditionally sets the PREEMPT_ACTIVE flag for
the current task. The comment says,
/*
* The BKS might be reacquired before we have dropped
* PREEMPT_ACTIVE, which could trigger a second
* cond_resched() call.
*/
My understanding is that cond_resched() would be indirectly
at explicit voluntary preemption points only, and those
points are determined by invoking cond_resched().
But I still have questions: why is cond_resched() not a no-op
when CONFIG_PREEMPT is set? And why does it deal with the
PREEMPT_ACTIVE flag anyway?
2007/2/22, Dong Feng <[EMAIL PROTECTED]>:
I have a question about cond_resched().
Under what condition must cond_resched() be invoked, with nothing else
able to replace it?
For example, I see the following code in ksoftirqd():
preempt_enable_no_resched();
cond_resched();
preempt_disable();
But I do not understand why I should not write the
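For context, cond_resched() is the usual way to add a voluntary preemption point inside a long-running kernel loop. A kernel-style pseudocode sketch of the common pattern (my_long_walk(), struct item, and process_one() are hypothetical):

```c
/* Kernel-style pseudocode sketch; the names here are hypothetical. */
static void my_long_walk(struct list_head *items)
{
        struct item *it;

        list_for_each_entry(it, items, node) {
                process_one(it);
                /* Voluntary preemption point: on a non-preemptible
                 * kernel this is the only chance for a higher-priority
                 * task to run before the walk finishes; with
                 * CONFIG_PREEMPT it still reschedules at a known-safe
                 * point. */
                cond_resched();
        }
}
```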
The function permanent_kmaps_init() takes a struct pgd_t as a parameter.
I presume passing the struct pgd_t as a parameter is meant to make the
function flexible, in order to reuse it in different cases. However,
I have discovered the following things impairing the rationale for this
parameter.
1. This function