On 11/17/2010 02:34 AM, Jan Beulich wrote:
>> Actually, on second thoughts, maybe it doesn't matter so much. The main
>> issue is making sure that the interrupt will make the VCPU drop out of
>> xen_poll_irq() - if it happens before xen_poll_irq(), it should leave
>> the event pending, which will

On 11/17/2010 04:21 AM, Peter Zijlstra wrote:
> On Tue, 2010-11-16 at 13:08 -0800, Jeremy Fitzhardinge wrote:
>> Maintain a flag in both LSBs of the ticket lock which indicates whether
>> anyone is in the lock slowpath and may need kicking when the current
>> holder unlocks. The flags are set when

On Tue, 2010-11-16 at 13:08 -0800, Jeremy Fitzhardinge wrote:
> Maintain a flag in both LSBs of the ticket lock which indicates whether
> anyone is in the lock slowpath and may need kicking when the current
> holder unlocks. The flags are set when the first locker enters
> the slowpath, and cleared
>>> On 17.11.10 at 10:57, Jeremy Fitzhardinge wrote:
> On 11/17/2010 12:52 AM, Jeremy Fitzhardinge wrote:
>> On 11/17/2010 12:11 AM, Jan Beulich wrote:
>> On 16.11.10 at 22:08, Jeremy Fitzhardinge wrote:
+static void xen_lock_spinning(struct arch_spinlock *lock, unsigned want)
{
>>

On 11/17/2010 12:52 AM, Jeremy Fitzhardinge wrote:
> On 11/17/2010 12:11 AM, Jan Beulich wrote:
> On 16.11.10 at 22:08, Jeremy Fitzhardinge wrote:
>>> +static void xen_lock_spinning(struct arch_spinlock *lock, unsigned want)
>>> {
>>> - struct xen_spinlock *xl = (struct xen_spinlock *)lock;

>>> On 17.11.10 at 10:08, Jeremy Fitzhardinge wrote:
> On 11/17/2010 12:56 AM, Jeremy Fitzhardinge wrote:
>> On 11/17/2010 12:52 AM, Jeremy Fitzhardinge wrote:
>>> But, yes, %z0 sounds interesting. Is it documented anywhere? I think
>>> I've tried to use it in the past and run into gcc bugs.
>>

On 11/17/2010 11:05 AM, Jeremy Fitzhardinge wrote:
> On 11/17/2010 12:58 AM, Avi Kivity wrote:
> >> Actually in this case I'm pretty sure there's already a "set bit"
> >> function which will do the job. set_bit(), I guess, though it takes a
> >> bit number rather than a mask...
> >>
> >

On 11/17/2010 12:56 AM, Jeremy Fitzhardinge wrote:
> On 11/17/2010 12:52 AM, Jeremy Fitzhardinge wrote:
>> But, yes, %z0 sounds interesting. Is it documented anywhere? I think
>> I've tried to use it in the past and run into gcc bugs.
> This one: http://gcc.gnu.org/bugzilla/show_bug.cgi?id=39590

On 11/17/2010 12:58 AM, Avi Kivity wrote:
>> Actually in this case I'm pretty sure there's already a "set bit"
>> function which will do the job. set_bit(), I guess, though it takes a
>> bit number rather than a mask...
>>
>
> set_bit() operates on a long, while the intel manuals recommend
> against
On 11/17/2010 10:52 AM, Jeremy Fitzhardinge wrote:
> On 11/17/2010 12:31 AM, Jan Beulich wrote:
> On 16.11.10 at 22:08, Jeremy Fitzhardinge wrote:
> >> +static inline void __ticket_enter_slowpath(struct arch_spinlock *lock)
> >> +{
> >> + if (sizeof(lock->tickets.tail) == sizeof(u8))
> >>

On 11/16/2010 11:08 PM, Jeremy Fitzhardinge wrote:
> From: Jeremy Fitzhardinge
>
> Hi all,
>
> This is a revised version of the pvticket lock series.
>
> The early part of the series is mostly unchanged: it converts the bulk
> of the ticket lock code into C and makes the "small" and "large"
> ticket

On 11/17/2010 12:52 AM, Jeremy Fitzhardinge wrote:
> But, yes, %z0 sounds interesting. Is it documented anywhere? I think
> I've tried to use it in the past and run into gcc bugs.
This one: http://gcc.gnu.org/bugzilla/show_bug.cgi?id=39590
Should be OK in this case because there's no 64-bit value
On 11/17/2010 12:31 AM, Jan Beulich wrote:
On 16.11.10 at 22:08, Jeremy Fitzhardinge wrote:
>> +static inline void __ticket_enter_slowpath(struct arch_spinlock *lock)
>> +{
>> +	if (sizeof(lock->tickets.tail) == sizeof(u8))
>> +		asm (LOCK_PREFIX "orb %1, %0"
>> +

On 11/17/2010 12:11 AM, Jan Beulich wrote:
On 16.11.10 at 22:08, Jeremy Fitzhardinge wrote:
>> +static void xen_lock_spinning(struct arch_spinlock *lock, unsigned want)
>> {
>> -	struct xen_spinlock *xl = (struct xen_spinlock *)lock;
>> -	struct xen_spinlock *prev;
>> 	int irq = __get_cpu_var(lock_kicker_irq);

>>> On 16.11.10 at 22:08, Jeremy Fitzhardinge wrote:
> +static inline void __ticket_enter_slowpath(struct arch_spinlock *lock)
> +{
> + if (sizeof(lock->tickets.tail) == sizeof(u8))
> + asm (LOCK_PREFIX "orb %1, %0"
> + : "+m" (lock->tickets.tail)
> +

>>> On 16.11.10 at 22:08, Jeremy Fitzhardinge wrote:
> +static void xen_lock_spinning(struct arch_spinlock *lock, unsigned want)
> {
> - struct xen_spinlock *xl = (struct xen_spinlock *)lock;
> - struct xen_spinlock *prev;
> int irq = __get_cpu_var(lock_kicker_irq);
> - int ret;