On Tue, 11 Aug 2020, Brad Boyer wrote:

> Do we have enough CPU time to implement LocalTalk fully in software
> without causing other issues?
It would cause issues if drivers (especially the VIA clocksource) were
badly starved of CPU, such that interrupts were handled too late. We
already have a system clock drift problem on many models (though not
all) because of that.

> I believe it's a synchronous protocol, and unless you're talking about
> one of the models with IOPs (IIfx, Q900, Q950) or real DMA (Q660, Q840)
> the CPU will need to be involved on a very fixed schedule.

I agree, the IOP machines would be good candidates. The PSC machines
too, now that the NetBSD developers have figured out a lot more about
the DMA controller.

> Is it possible to do that level of real-time scheduling with a standard
> Linux kernel?

Well, high-efficiency I/O and real-time processing are important to the
upstream kernel, because some big customers want those things.

> If I'm interpreting the docs correctly, we will have to talk to the
> serial controller around 8000 times per second (230kbit/s and 4 bytes at
> a time from the ESCC)

I think 8000 is an over-estimate. 230 kbit/s is a baud rate, not a bit
rate, right?

> with very tight tolerances.

Slack tolerances probably lead to high packet loss. But there may be a
happy medium.

> Will it cause problems to have the serial controller sending interrupts
> that often?

I would like to know how many interrupts per second can be serviced on a
PowerBook 190 (assuming that the ISR does nothing except acknowledge the
interrupt) without consuming more than, say, 50% CPU. It might be
possible to run a test like this using the timer interrupt, just by
reducing VIA_TIMER_CYCLES.

> I suspect Apple just hogged the CPU during data transfer.

Me too. IIRC the mouse pointer moved jerkily during LocalTalk I/O (and
also during floppy disk initialization).
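For what it's worth, here is the back-of-envelope arithmetic behind the
8000/s figure and the 50% CPU question. This is only a sketch under
assumptions of mine, not the thread's: it takes the 230 kbit/s figure
at face value as an on-wire data rate (which is exactly the point in
dispute above), assumes 4 bytes drained from the ESCC per interrupt,
and uses a rough 33 MHz clock for the PowerBook 190's 68LC040.

```python
# Back-of-envelope estimate of ESCC interrupt rate and per-interrupt
# CPU budget for software LocalTalk. All figures are assumptions for
# illustration, not measurements.

LINE_RATE_BPS = 230_400   # assumed on-wire data rate, bits per second
BYTES_PER_IRQ = 4         # assumed bytes drained from the ESCC per interrupt
CPU_HZ = 33_000_000       # rough clock for a PowerBook 190 (68LC040)

bytes_per_second = LINE_RATE_BPS // 8            # 28800 bytes/s
irqs_per_second = bytes_per_second // BYTES_PER_IRQ  # 7200 interrupts/s

# If we allow the ISR at most 50% of the CPU, how many cycles may each
# interrupt cost (entry, acknowledge, FIFO drain, exit) before we blow
# the budget?
budget_cycles_per_irq = (CPU_HZ // 2) // irqs_per_second  # ~2291 cycles

print(irqs_per_second, budget_cycles_per_irq)  # 7200 2291
```

So even taking 230 kbit/s as a true bit rate, the interrupt rate comes
out nearer 7200/s than 8000/s; if it is really a baud rate with encoding
overhead, the rate drops further still.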

