On 07/10/2012 08:47 PM, Adam Hraska wrote:
> I've pushed some of my recent changes to launchpad. It
> currently includes particularly:

Ok, I went through the changes, including the most recent ones. However,
I went through the big stuff, namely workq and rcu, only superficially,
just enough to get the big picture in this rather short time. I will
need to come back to it in greater detail soon.

The amount of code you had to type in is kind of impressive and I felt
a certain relief for you when glancing through it. The style is ok; the
only nit would be to avoid // comments and use the good old /* */
instead.

I noticed we now have conflicting implementations of
ipi_wait_for_idle(). Moreover, you call yours before sending the IPI,
while the mainline implementation calls it afterwards:

+int l_apic_send_custom_ipi(uint8_t apicid, uint8_t vector)
+{
+       icr_t icr;
+
+       /* Wait for a destination cpu to accept our previous ipi. */
+       ipi_wait_for_idle();

I guess calling it before can hide some of the waiting time, since
useful work gets done between sending one IPI and the next, but on the
other hand, you proceed without any certainty that the IPI has actually
been delivered at the point you needed it.
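
Just to make sure we are reading it the same way, here is the
difference as I understand it; icr_write() is merely a stand-in for the
actual register write, not the real code:

/* Mainline ordering: send, then wait. */
static void send_then_wait(icr_t icr)
{
	icr_write(icr);       /* fire the IPI */
	ipi_wait_for_idle();  /* return only once the IPI has been accepted */
}

/* Your ordering: wait for the previous IPI, then send. */
static void wait_then_send(icr_t icr)
{
	ipi_wait_for_idle();  /* drain the previous IPI first */
	icr_write(icr);       /* the new IPI may still be pending on return */
}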

In l_apic_broadcast_custom_ipi(), you have:

+       if (CPU->arch.id != l_apic_id()) {
+#ifdef CONFIG_DEBUG
+               printf("lapic error: LAPIC ID (%" PRIu8 ") and hw ID assigned by BSP"
+                       " (%u) differ. Correcting to LAPIC ID.\n", l_apic_id(),
+                       CPU->arch.id);
+#endif
+               CPU->arch.id = l_apic_id();
+       }

Can this really happen? If not, why bother with cpu_arch_id_init()?

I noticed two (IMHO) antagonistic changes when it comes to dealing with
latency. First, when readying a thread again after it was sleeping, you
now prefer the CPU on which the thread last ran. This may improve cache
utilization (questionably so, since the thread may have been sleeping
for quite some time), but it worsens the wakeup latency for most
threads. Second, you started to compensate for preemptions that could
not take place while a thread had temporarily disabled preemption. Can
you elaborate on your motivation for these changes?
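
To illustrate the trade-off I mean (a sketch only; find_idlest_cpu() is
made up, and thread->cpu stands for the CPU the thread last ran on):

static cpu_t *select_cpu(thread_t *thread)
{
	/*
	 * Preferring the last CPU may hit a still-warm cache, but if
	 * that CPU is busy, the wakeup waits for its next reschedule.
	 */
	if (thread->cpu != NULL)
		return thread->cpu;

	/* An idle CPU starts the thread sooner, albeit on a cold cache. */
	return find_idlest_cpu();
}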

Shouldn't waitq_complete_wakeup() be an integral part of waitq_sleep()?
Using it separately means that the caller needs to know how was the
waitq allocated. I think that we've had cases when the waitq was part of
a structure which itself was sometimes allocated on the stack and
sometimes dynamically. Good catch, btw.
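
For the record, the lifetime problem as I understand it (a sketch only;
reply_t and post_request() are made up, the waitq calls are the usual
ones plus your new waitq_complete_wakeup()):

typedef struct {
	waitq_t wq;   /* lives on the requester's stack */
	int answer;
} reply_t;

static int ask(void)
{
	reply_t reply;

	waitq_initialize(&reply.wq);
	post_request(&reply);    /* the waker will call waitq_wakeup() on &reply.wq */
	waitq_sleep(&reply.wq);

	/*
	 * Without this, returning would pop the stack frame while the
	 * waker may still be touching reply.wq.
	 */
	waitq_complete_wakeup(&reply.wq);

	return reply.answer;
}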

Jakub
