On Fri, 10 Oct 2008 14:57:47 -0200
Werner Almesberger <[EMAIL PROTECTED]> wrote:

> How bad was the workqueue really ? It's certainly wasteful,
> but if we consider that all the data goes up to userspace
> eagerly consuming every event anyway, it may not matter all
> that much in the big picture.

I did some measurements after I posted my patch to do this (not
realising that this is how it was done earlier), and there was a
significant difference just from reading the accelerometers. Workqueue
numbers first:

>   Cpu(s): 29.1%us, 24.5%sy,  0.0%ni, 46.2%id,  0.0%wa,  0.0%hi,  0.3%si,  0.0%st
>    2239 root      20   0  5168 2952 1748 S 18.1  2.4   0:04.74 python
>     232 root      15  -5     0    0    0 S  7.8  0.0   0:09.71 spi_s3c24xx_gpi
>    1546 root      15  -5     0    0    0 S  6.6  0.0   0:10.76 LIS302dl interr
> 
> compared to the current driver
> 
>   Cpu(s): 18.0%us,  7.3%sy,  0.0%ni, 74.4%id,  0.0%wa,  0.0%hi,  0.3%si,  0.0%st
>    1647 root      20   0  5168 2952 1748 S 17.4  2.4   0:01.10 python

However, I think the spi_async() method should do quite a bit better
than this: instead of essentially five queue_work() calls per interrupt
(one for the interrupt itself, plus one each for reading the status, X,
Y and Z registers), it would do just one.
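
Roughly what I have in mind (an untested sketch; the register addresses
and the read/auto-increment bits are from the LIS302DL datasheet, but
the struct and function names here are made up for illustration):

#include <linux/spi/spi.h>
#include <linux/interrupt.h>
#include <linux/string.h>

#define LIS302DL_REG_STATUS	0x27	/* OUT_X/Y/Z follow at 0x29/0x2b/0x2d */
#define LIS302DL_READ		0x80	/* bit 7: read access */
#define LIS302DL_MS		0x40	/* bit 6: auto-increment the address */

struct lis302dl_sample {
	struct spi_device	*spi;
	struct spi_message	msg;
	struct spi_transfer	xfer;
	/* DMA-safe because the containing struct is kmalloc()ed at probe */
	u8			tx[8];
	u8			rx[8];
};

static void lis302dl_complete(void *context)
{
	struct lis302dl_sample *s = context;

	/* rx[1] is STATUS, rx[3]/rx[5]/rx[7] are OUT_X/OUT_Y/OUT_Z (the
	 * reserved registers in between read back as dummies).  Report
	 * them to the input layer here; this can run in interrupt
	 * context, so nothing that sleeps. */
}

static irqreturn_t lis302dl_irq(int irq, void *dev_id)
{
	struct lis302dl_sample *s = dev_id;

	/* Burst-read STATUS_REG..OUT_Z in a single transfer: with MS
	 * set, the device auto-increments the register address after
	 * every byte. */
	s->tx[0] = LIS302DL_REG_STATUS | LIS302DL_READ | LIS302DL_MS;

	spi_message_init(&s->msg);
	memset(&s->xfer, 0, sizeof(s->xfer));
	s->xfer.tx_buf = s->tx;
	s->xfer.rx_buf = s->rx;
	s->xfer.len = sizeof(s->tx);
	spi_message_add_tail(&s->xfer, &s->msg);
	s->msg.complete = lis302dl_complete;
	s->msg.context = s;

	spi_async(s->spi, &s->msg);	/* non-blocking, OK from hard IRQ */
	return IRQ_HANDLED;
}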

I made a mistake somewhere in the implementation, so it doesn't work
yet and I don't have numbers for the improvement. I also still think
there is a race condition if we get the interrupt again before the
spi_async() completion handler has finished executing (i.e. while the
workqueue handler is still running).
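
One way to close that race, again just a sketch: refuse to resubmit the
spi_message while it is still in flight. test_and_set_bit()/clear_bit()
are atomic, so the hard-IRQ path and the completion path need no
further locking (in a real driver the flag would live in the device
state, not a file-scope variable):

#include <linux/bitops.h>

static unsigned long lis302dl_flags;
#define LIS302DL_XFER_BUSY	0

static irqreturn_t lis302dl_irq_guarded(int irq, void *dev_id)
{
	struct lis302dl_sample *s = dev_id;

	/* If the previous spi_message is still in flight, or its
	 * completion handler is still running, drop this event rather
	 * than resubmit a message the SPI core still owns. */
	if (test_and_set_bit(LIS302DL_XFER_BUSY, &lis302dl_flags))
		return IRQ_HANDLED;

	/* ... set up and spi_async() the message as above ... */
	return IRQ_HANDLED;
}

static void lis302dl_complete_guarded(void *context)
{
	struct lis302dl_sample *s = context;

	/* ... consume the sample from s->rx ... */

	/* Only now may the IRQ handler reuse the message */
	clear_bit(LIS302DL_XFER_BUSY, &lis302dl_flags);
}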

// Simon
