Hi, Anthony 

 I agree with these two points:
1) Timer migration only takes effect before set_timer (in pal_halt_light),
   so with the current configuration there is no need to migrate the timer in schedule_tail.
   (The maintainer's version seems likely fine.)
 
2) For performance tuning, it should be changed,
   but I have not yet tested your proposed configuration.
   I will test it in tonight's run.

Thanks
Atsushi SAKAI

>That's Ok for me.
>
>>@@ -122,6 +122,7 @@ void schedule_tail(struct vcpu *prev)
>>                shared_info->vcpu_info[current->vcpu_id].evtchn_upcall_mask;
>>              __ia64_per_cpu_var(current_psr_ic_addr) = (int *)
>>                (current->domain->arch.shared_info_va + XSI_PSR_IC_OFS);
>>+             migrate_timer(&current->arch.hlt_timer, current->processor);
>>      }
>>      flush_vtlb_for_context_switch(current);
>> }
>
>I think we don't need to call migrate_timer in schedule_tail,
>since the timer is definitely stopped at that point.
>
>+++ b/xen/arch/ia64/xen/hypercall.c    Thu Aug 24 11:48:35 2006 -0600
>@@ -235,7 +235,12 @@ fw_hypercall (struct pt_regs *regs)
>                       }
>                       else {
>                               perfc_incrc(pal_halt_light);
>-                              do_sched_op_compat(SCHEDOP_yield, 0);  
>+                              migrate_timer(&v->arch.hlt_timer,
>+                                            v->processor);
>  <<< propose instead: v->arch.hlt_timer.cpu = v->processor;
>+                              set_timer(&v->arch.hlt_timer,
>+                                        vcpu_get_next_timer_ns(v));
>+                              do_sched_op_compat(SCHEDOP_block, 0);
>+                              stop_timer(&v->arch.hlt_timer);
>                       }
>                       regs->r8 = 0;
>                       regs->r9 = 0;
>
>I also propose using the assignment marked "<<<" above to substitute for
>migrate_timer: because hlt_timer is definitely stopped at this time, we
>can change hlt_timer.cpu directly. As we know, migrate_timer may need to
>take two big spin_locks, which on a huge box may cause performance
>degradation.
>
>Thanks,
>Anthony
>
>
>
>>-----Original Message-----
>>From: Alex Williamson [mailto:[EMAIL PROTECTED]
>>Sent: 2006-08-29 21:55
>>To: Xu, Anthony
>>Cc: Atsushi SAKAI; xen-ia64-devel@lists.xensource.com
>>Subject: RE: [Xen-ia64-devel][PATCH] found a small
>>bug RE: [Xen-ia64-devel][PATCH] pal_halt_light emulate for domU TAKE3
>>
>>On Tue, 2006-08-29 at 17:04 +0800, Xu, Anthony wrote:
>>>
>>> I agree with you,
>>> But I didn't find a good place to call init_timer.
>>>
>>> Comment?
>>
>>   How about the patch below?  It calls init_timer() with a valid CPU,
>>then migrates the timer in schedule_tail(), much like the vmx timer.
>>Probably safer from a timer standpoint.  Thanks,
>>
>>      Alex
>>
>>Signed-off-by: Alex Williamson <[EMAIL PROTECTED]>
>>---
>>
>>diff -r 684fdcfb251a xen/arch/ia64/xen/domain.c
>>--- a/xen/arch/ia64/xen/domain.c      Mon Aug 28 16:26:37 2006 -0600
>>+++ b/xen/arch/ia64/xen/domain.c      Tue Aug 29 07:52:49 2006 -0600
>>@@ -122,6 +122,7 @@ void schedule_tail(struct vcpu *prev)
>>                shared_info->vcpu_info[current->vcpu_id].evtchn_upcall_mask;
>>              __ia64_per_cpu_var(current_psr_ic_addr) = (int *)
>>                (current->domain->arch.shared_info_va + XSI_PSR_IC_OFS);
>>+             migrate_timer(&current->arch.hlt_timer, current->processor);
>>      }
>>      flush_vtlb_for_context_switch(current);
>> }
>>@@ -305,7 +306,8 @@ struct vcpu *alloc_vcpu_struct(struct do
>>          v->arch.last_processor = INVALID_PROCESSOR;
>>      }
>>      if (!VMX_DOMAIN(v)){
>>-             init_timer(&v->arch.hlt_timer, hlt_timer_fn, v, v->processor);
>>+             init_timer(&v->arch.hlt_timer, hlt_timer_fn, v,
>>+                        first_cpu(cpu_online_map));
>>      }
>>
>>      return v;
>

_______________________________________________
Xen-ia64-devel mailing list
Xen-ia64-devel@lists.xensource.com
http://lists.xensource.com/xen-ia64-devel
