On 07/14/2012 01:45 PM, Adam Hraska wrote:
> On Fri, Jul 13, 2012 at 12:23 AM, Jakub Jermar <[email protected]> wrote:
>> I noticed we now have conflicting implementations of
>> ipi_wait_for_idle(). Moreover, you call yours before sending the IPI,
>> while the mainline implementation calls it afterwards:
>>
>> +int l_apic_send_custom_ipi(uint8_t apicid, uint8_t vector)
>> +{
>> +       icr_t icr;
>> +
>> +       /* Wait for a destination cpu to accept our previous ipi. */
>> +       ipi_wait_for_idle();
>>
>> I guess doing it before can eliminate some waiting time by simply doing
>> some useful work before sending the next IPI, but on the other hand, you
>> proceed without knowing whether the IPI had indeed been delivered by the
>> time you needed it.
> 
> If I understood the documentation [1] correctly, the
> delivery status is "pending" until the destination cpu
> accepts the interrupt. That just means the destination
> cpu's APIC noted the IPI. Therefore, it does not indicate
> that the IPI was dispatched to software, and it definitely
> does not mean that software handled the IPI.
> 
> As far as I can see, it is only used to avoid sending
> another IPI while the previous one is still in the process
> of being sent. The documentation does not specify what
> happens if we try to send another IPI while the previous
> one has not yet been accepted, but I am guessing bad
> things might happen, e.g. the previous IPI may be lost or
> the current one discarded.

This makes sense.
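
For the archives, the guard then amounts to spinning on the ICR
delivery status until it reads idle before writing a new command.
A minimal sketch of what I have in mind (icr_t, l_apic[ICRlo], the
delivs field and DELIVS_IDLE are my guesses at the names used by the
l_apic driver, not necessarily the exact ones):

static void ipi_wait_for_idle(void)
{
        icr_t icr;

        do {
                /* Re-read the low dword of the Interrupt Command
                 * Register. */
                icr.lo = l_apic[ICRlo];
                /* Anything other than idle means the previous IPI has
                 * not yet been accepted by the destination cpu. */
        } while (icr.delivs != DELIVS_IDLE);
}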

>> I noticed two (IMHO) antagonistic changes when it comes to dealing with
>> latency. First, you started to prefer the CPU on which a thread last ran
>> when readying the thread again after it was sleeping. This may improve
>> cache utilization, though questionably so (the thread may have been
>> sleeping for quite some time),
> 
> It also helps maintain a steady load. If a thread wakes up
> many threads at once (e.g. a condvar broadcast), they would
> otherwise all be migrated by the wakeup to the local cpu,
> and the distribution of threads among the cpus would then
> be governed by thread_ready and not by the load balancing
> threads.

Ok, makes sense.
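
So on wakeup the readying path effectively keeps the thread where it
last ran, something like the following (an illustrative sketch only;
thread->cpu and the CPU macro are how I would expect it to read, not
necessarily the exact code):

void thread_ready(thread_t *thread)
{
        /* Prefer the cpu the thread last ran on: its caches may still
         * hold the thread's working set, and a mass wakeup does not
         * pile all the woken threads onto the waker's cpu. */
        cpu_t *cpu = CPU;
        if (thread->cpu != NULL)
                cpu = thread->cpu;

        /* ... enqueue the thread on cpu's run queues as before ... */
}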

>> but worsens the wakeup latency for most
>> threads.
> 
> Hmm, I am not sure I see why. Would not the thread just
> woken up still have to wait for the current thread's time
> slice to elapse -- be it on the local cpu or on another
> cpu?

I guess I had tunnel vision; please disregard.

>> Shouldn't waitq_complete_wakeup() be an integral part of waitq_sleep()?
>> Using it separately means that the caller needs to know how the waitq
>> was allocated. I think we've had cases where the waitq was part of a
>> structure which itself was sometimes allocated on the stack and
>> sometimes dynamically. Good catch, btw.
> 
> That is a good idea.

Cool.
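
Folding it in would then look roughly like this (a sketch from memory;
the real signature of waitq_sleep_timeout() and the flag names may
well differ):

int waitq_sleep(waitq_t *wq)
{
        int rc = waitq_sleep_timeout(wq, SYNCH_NO_TIMEOUT,
            SYNCH_FLAGS_NONE);

        /* Wait until a concurrent waker has stopped touching *wq, so
         * that the caller may deallocate the structure embedding the
         * waitq right after we return, no matter how it was
         * allocated. */
        waitq_complete_wakeup(wq);

        return rc;
}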

Jakub

_______________________________________________
HelenOS-devel mailing list
[email protected]
http://lists.modry.cz/cgi-bin/listinfo/helenos-devel
