On Sunday, 8 May 2022 at 21:32:42 UTC, Andrea Fontana wrote:
Every request is processed by a worker running in an isolated
process, no fibers/threads, sorry (or thanks?)
I did some tests and the performance looks good: on a local
machine it can handle more than 100_000 reqs/sec for a simple
page containing just "hello world!". Of course that's not a good
benchmark; if you can help me with other benchmarks it would be
much appreciated (a big thanks to Tomáš Chaloupka, who did some
tests!)
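(For context, the hello-world program behind that number is roughly the README's minimal example; I'm reproducing it from memory, so the exact signature may differ:)

```d
import serverino;
mixin ServerinoMain;

// A free function with this signature is picked up as the request handler;
// each call runs inside a separate worker process.
void hello(Request request, Output output)
{
    output ~= "Hello world!";
}
```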
Server applications are typically IO-heavy, and I expect your
isolated-process approach to break down under that kind of load.
As an example, how many requests per second can you manage if
every request has to wait 100 ms?
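Back of the envelope: if every request blocks its worker for 100 ms, each worker serves at most ~10 requests/second, so with W worker processes you are capped at roughly W * 10 reqs/sec; with, say, 20 workers that is ~200 reqs/sec, nowhere near the hello-world figure. It's easy to simulate by swapping the handler body above for a sleep (again just a sketch, reusing the same skeleton):

```d
import core.thread : Thread;
import core.time : msecs;

void slow(Request request, Output output)
{
    // Stand-in for a slow database query or upstream call:
    // the whole worker process sits blocked for the full 100 ms.
    Thread.sleep(100.msecs);
    output ~= "done";
}
```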
For non-critical workloads you will probably still get good
enough performance, though.
Instead of using a lot of different UDAs to set routing rules,
you can simply write the checks in your endpoint's body and
return early to pass control to the next endpoint.
My experience is that exhaustive matching is easier to reason
about at larger scale.
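If I understand the in-body routing correctly, endpoints would look roughly like the sketch below. I'm guessing at the exact UDA names (@endpoint, @priority) and the Request/Output members, so treat it as pseudocode rather than a confirmed API:

```d
// Same `import serverino; mixin ServerinoMain;` skeleton as above.

@endpoint @priority(10)
void users(Request request, Output output)
{
    // The routing rule lives in the body: if the path doesn't match,
    // return without writing anything and the next endpoint is tried.
    if (request.uri != "/users")
        return;

    output ~= "user list";
}

@endpoint @priority(0)
void notFound(Request request, Output output)
{
    // Catch-all fallback when nothing above produced a response.
    output.status = 404;
    output ~= "not found";
}
```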
Please help me test it; I'm looking forward to receiving
your shiny new issues on GitHub.
I noticed it has zero unittests; that is probably a good place to
start.
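For what it's worth, D makes this cheap: unittest blocks live right next to the code and run with dub test. Even something trivial against a pure helper would be a start (the helper below is hypothetical, just to show the shape):

```d
// Hypothetical pure helper, just to show the shape of a D unittest.
string headerName(string line)
{
    import std.string : indexOf, strip;
    auto i = line.indexOf(':');
    return i < 0 ? line.strip : line[0 .. i].strip;
}

unittest
{
    assert(headerName("Content-Type: text/html") == "Content-Type");
    assert(headerName("Host") == "Host");
}
```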