I also got hit with this bug. It's quite annoying, but I found the reason. According to the Javadoc of ScheduledThreadPoolExecutor:
"...because it acts as a fixed-sized pool using corePoolSize threads and an unbounded queue, adjustments to maximumPoolSize have no useful effect."
It turns out this is exactly the situation here. Since each SocketReceiver submits one never-ending task, all the threads of the ScheduledThreadPoolExecutor become occupied, and any task submitted afterwards just sits in the unbounded queue and never runs. Presumably, a proper fix in Logback would be to increase the corePoolSize of the executor before adding a new Receiver to it (e.g. in ReceiverBase.start()), so that at least one thread always remains free. For now, I have injected the following bit of code in my own application, outside of the Logback configuration:
ScheduledThreadPoolExecutor exec = (ScheduledThreadPoolExecutor) context.getScheduledExecutorService();
// one extra thread per queued task, plus one spare, so a free worker is always available
exec.setCorePoolSize(exec.getCorePoolSize() + exec.getQueue().size() + 1);
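
For anyone wanting to apply the same workaround, a minimal standalone sketch might look like the following. It assumes SLF4J is bound to Logback (so the ILoggerFactory can be cast to LoggerContext) and that Logback backs its scheduled executor with a ScheduledThreadPoolExecutor, which is the case with the stock implementation; the class and method names are made up for illustration:

import java.util.concurrent.ScheduledThreadPoolExecutor;

import org.slf4j.LoggerFactory;

import ch.qos.logback.classic.LoggerContext;

public final class LogbackReceiverWorkaround {

    /**
     * Grows the core pool of Logback's scheduled executor so that the
     * never-ending SocketReceiver tasks cannot starve it of threads.
     * Call once during application startup, after Logback is configured.
     */
    public static void reserveExecutorThread() {
        // Assumption: the SLF4J binding is Logback, so the ILoggerFactory
        // is a ch.qos.logback.classic.LoggerContext.
        LoggerContext context = (LoggerContext) LoggerFactory.getILoggerFactory();

        // Assumption: the stock Logback implementation backs this service
        // with a ScheduledThreadPoolExecutor, so the cast succeeds.
        ScheduledThreadPoolExecutor exec =
                (ScheduledThreadPoolExecutor) context.getScheduledExecutorService();

        // One extra thread per queued task, plus one spare, so at least
        // one worker stays free to run newly submitted tasks.
        exec.setCorePoolSize(exec.getCorePoolSize() + exec.getQueue().size() + 1);
    }
}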