At 05 Oct 2001 05:20:40 -0700,
Josh Green wrote:
> 
> > > Are there any programs that report what piece of
> > > code is causing big spikes? I haven't tested the 2.4.10 kernel yet, and
> > > if I did it would be bare without any patches (as I don't know of any up
> > > to date with 2.4.10). I did realize that the 2.4.9 kernel with LL patch
> > > is worse than the 2.4.5 one I was using. This might be because I
> > > compiled it with Mandrake's gcc a la 2.96.
> > 
> > In my experience, the VM in 2.4.9 tends to show higher latency
> > under heavy disk load.  In 2.4.10 the VM was changed substantially
> > and behaves better.
> > 
> > I discussed this problem with Andrea, and he will check the
> > relevant code.  Hopefully a fix will make it into 2.4.11.
> > 
> 
> Sounds good. I noticed the memory management was reworked heavily
> between 2.4.9 and 2.4.10. I couldn't really do a sensible manual
> apply of the rejects from the 2.4.9 LL patch. It's nice to hear that
> the changes are for the better :)

Some good news: after several tries and hacks, I got 1 msec latency.
You need to add rescheduling checks in the sync_buffers() loop (in
fs/buffer.c).  In my case I added the checks in write_some_buffers()
and wait_for_buffers(), which are called from sync_buffers().
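
For reference, such a check on a 2.4 kernel is only a couple of
lines.  Here is a minimal sketch (the helper name is my own, and the
exact placement inside the two loops is up to you; this is not the
verbatim diff I applied):

    #include <linux/sched.h>

    /*
     * Sketch: a conditional reschedule point for 2.4 kernels.
     * Called once per iteration of a long-running loop, it yields
     * the CPU whenever another task (e.g. the audio thread) has
     * been marked runnable.
     */
    static inline void ll_resched_point(void)
    {
            if (current->need_resched)
                    schedule();
    }

The 2.4 kernel is non-preemptive in kernel mode, so without an
explicit scheduling point like this a long buffer sync runs to
completion before anything else gets the CPU.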

My results are shown in
        http://www.alsa-project.org/~iwai/latency-results/
The subdirectory rf-ll2-alsa contains the latest (almost perfect)
result with 2.4.10 + LL + my patch + ALSA 0.9.0 + reiserfs.
(No explanatory html, sorry.)


> > > By the way, I'm posting my results to http://www.c0nfusion.org/~josh/ if
> > > you want to have a look. The only hardware that has changed in my
> > > machine since my previous tests (http://www.resonance.org/~josh/) is
> > > that I added an SB Live!. I think I upgraded to Mandrake 8 since then,
> > > so most of the software is probably different.
> > 
> > Ah, the last test looks really fantastic..
> > I hope we can get back to that level in the near future!
> > 
> 
> Alas, the last test is with OSS (my test #4 was with ALSA though,
> which has similar numbers as far as latency). I wonder about the OSS
> latencytest though. The latency itself seems fairly equivalent; the
> only difference is the huge number of overruns with ALSA compared to
> OSS. I wonder if the OSS overrun detection is correct. I haven't
> looked at the latencytest code recently, but I seem to remember it
> manually determining whether an overrun occurred (by comparing the
> latency with the fragment period). I also noticed that the buffer
> size with OSS was 1024 bytes rather than 768. I think that's just one
> extra fragment, though, so it shouldn't affect the results, right?

I guess so.  The ALSA driver can detect an overrun strictly by
itself, while most OSS drivers can't.  So for OSS, overruns are
counted only from the measured time difference.  Anyway, any high
peak must show up in the plot, so we can see it either way :)
That's the advantage of visualization :)
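
In other words, with OSS the test program has to infer overruns from
timing alone.  A minimal sketch of that comparison (my own
illustration with made-up names, not the actual latencytest code):

    #include <sys/time.h>

    /* Microseconds elapsed between two gettimeofday() samples. */
    static long elapsed_usec(const struct timeval *t0,
                             const struct timeval *t1)
    {
            return (t1->tv_sec - t0->tv_sec) * 1000000L
                 + (t1->tv_usec - t0->tv_usec);
    }

    /* If one loop iteration took longer than the playback time of
     * a single fragment, at least one fragment must have been
     * missed. */
    int overrun_happened(const struct timeval *t0,
                         const struct timeval *t1,
                         long fragment_usec)
    {
            return elapsed_usec(t0, t1) > fragment_usec;
    }

A driver-side count (as ALSA gives) is stricter, since it reflects
the actual ring-buffer state rather than a timing heuristic.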


> Not sure if you saw my question about whether there is some way to
> determine where in the kernel a holdup occurs. How does Andrew Morton
> check those things? A tool to determine which driver, program or
> kernel routine is causing latency spikes would be useful.

AFAIK there are some tools.  I saw some stuff on Andrew's page.
For the PE kernel, there is an additional patch to check lock hold
times.  But in both cases the results are not always trustworthy (at
least in my experience).  For example, the pe-stats patch often shows
a sched.c spinlock held for the longest time, which can't be correct.

Sorry there isn't more useful info here.  I'd like to know if there is
a really good tool for this purpose.


Takashi

