Hi,

The ExternalSemaphoreTable is an important resource: it holds 3 Semaphores per Socket (network connection), and its size determines how many such resources can be used concurrently.
VM parameter 49 holds this limit; it is accessed via VirtualMachine>>#maxExternalSemaphores. The default for Pharo 9 currently seems to be 256. There is a method VirtualMachine>>#maxExternalSemaphoresSilently: that can be used to set this limit higher, and this seems to work. There are some constraints, though: the new value must be a power of 2, higher than the one set before, and lower than 64K. BTW, the number of file descriptors for files and sockets is also limited per process by the OS.

Also, while the table is being grown, the Semaphores become unavailable to the VM for a very short period of time, so they could miss signals, leading to IO issues. For this reason, the design goal seems to have been to not allow the table to grow during normal use, only at startup or as upfront configuration.

Now comes the weird thing: the implementation of VirtualMachine>>#maxExternalSemaphores:. This is the method called when the current table is too small and needs to grow beyond the current VirtualMachine>>#maxExternalSemaphores. The implementation calls #maxExternalSemaphoresSilently: in a forked thread, after signalling a SystemNotification with a full explanation. But why should this be forked in the first place? Then, in the main thread, it signals an error, presumably because of the design goals mentioned before. These two actions are in conflict.

All this is probably the result of well-meant evolution and refactoring, but it is quite confusing. My first idea would be: let the caller raise the error and don't try raising the limit, i.e. remove the current implementation of #maxExternalSemaphores: and replace it by what is now #maxExternalSemaphoresSilently:.

What do you think?

Sven
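P.S. For reference, a minimal sketch of how to inspect and raise the limit from a Playground (the value 2048 is just an example; as described above, it must be a power of 2, larger than the current value, and below 64K):

```smalltalk
"Read the current limit (VM parameter 49)."
Smalltalk vm maxExternalSemaphores.
"On a stock Pharo 9 image this apparently answers 256."

"Raise the limit at startup / configuration time, before opening
many concurrent Sockets, so the table never has to grow while
Semaphores are in active use."
Smalltalk vm maxExternalSemaphoresSilently: 2048.
```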
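P.P.S. To make the proposal concrete, here is a rough sketch of what I mean, assuming #maxExternalSemaphoresSilently: essentially boils down to setting VM parameter 49 (I have not verified every detail of the current implementation, so take the method body as illustrative only):

```smalltalk
VirtualMachine >> maxExternalSemaphores: aSize
	"Set the size of the ExternalSemaphoreTable (VM parameter 49).
	No forking, no SystemNotification, no error signalled here:
	if the table is too small during normal use, it is up to the
	caller to signal an error instead of trying to grow the table,
	since growing it can make Semaphores miss signals."
	^ self parameterAt: 49 put: aSize
```

The point being that growing the table then only ever happens explicitly, at startup or as upfront configuration, and never implicitly in the middle of socket IO.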
