Re: Release: serverino - please destroy it.

2022-05-09 Thread Andrea Fontana via Digitalmars-d-announce

On Monday, 9 May 2022 at 20:08:38 UTC, Sebastiaan Koppe wrote:

On Sunday, 8 May 2022 at 21:32:42 UTC, Andrea Fontana wrote:
Every request is processed by a worker running in an isolated 
process, no fibers/threads, sorry (or thanks?)


I did some tests and the performance looks good: on a local 
machine it can handle more than 100_000 reqs/sec for a simple 
page containing just "hello world!". Of course that's not a 
good benchmark; if you can help me with other benchmarks it 
would be much appreciated (a big thanks to Tomáš Chaloupka who 
did some tests!)


Typically server applications are IO heavy. I expect your 
isolated-process approach to break down with that kind of work.


I know. We all know :) Benchmarks are just benchmarks. They are 
useful to understand how much overhead your server adds to the 
whole project. These benchmarks were run on the local machine, 
with almost no connection overhead.


Not every application is IO heavy, anyway.

As an example, how many requests per second can you manage if 
all requests have to wait 100 msecs?


For non-critical workloads you will probably still get good 
enough performance, though.


Firstly, it depends on how many workers you have.
Then you should consider that a lot of (most?) websites use 
php-fpm, which works using the same approach (but PHP is much 
slower than D). The same goes for CGI/FastCGI/SCGI and so on.


Let's say you have just 20 workers and 100 msecs for each 
request (a lot of time by my standards, I would say). Each 
worker then handles 10 requests/s, so that's 20*10 = 200 
webpages/s = 720k pages/h. I don't think your website has that 
much traffic...


And I hope not every request will take 100msecs!
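
Just to spell out that back-of-the-envelope math, here is a tiny 
standalone D snippet (the worker count and latency are simply the 
figures from the example above, not serverino defaults):

```
void main()
{
    import std.stdio : writefln;

    enum workers      = 20;   // figures from the example above
    enum latencyMsecs = 100;

    // Each worker completes 1000 / latency requests per second.
    enum perWorker = 1000 / latencyMsecs;   // 10 req/s
    enum perSecond = workers * perWorker;   // 20 * 10 = 200 req/s
    enum perHour   = perSecond * 3600;      // 720_000 req/h

    writefln("%s req/s, %s req/h", perSecond, perHour);
}
```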



Instead of using a lot of different UDAs to set routing rules, 
you can simply write them in your endpoint's body and exit 
from it to pass the request on to the next endpoint.


My experience is that exhaustive matching is easier to reason 
about at larger scale.


Yes, but exactly the same thing can be done without UDAs:

```
@endpoint void my_end(Request r, Output o)
{
    // Route inside the body: a regex, another field, whatever you want.
    if (r.uri != "/asd")
        return; // exit: the request is passed on to the next endpoint

    // ... handle the request here ...
}
```

This is just like:

```
@matchuda(uri, "/asd") void my_end() { ... }
```

What's the difference? The first one is much more flexible, IMHO.
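
For completeness, here is a hypothetical sketch of two such 
body-matched endpoints chained together (the names are made up, and 
it assumes serverino keeps trying registered @endpoint functions 
until one of them writes a response):

```
@endpoint void api(Request r, Output o)
{
    if (r.uri != "/api")
        return;          // no match: exit so another endpoint can handle it

    o ~= "api reply";
}

@endpoint void fallback(Request r, Output o)
{
    o.status = 404;      // reached only if nothing matched before
    o ~= "not found";
}
```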


Please help me test it; I'm looking forward to receiving 
your shiny new issues on GitHub.


I noticed it has zero unittests; that is probably a good place 
to start.


Of course! They will come for sure. :)

Andrea



Re: Release: serverino - please destroy it.

2022-05-09 Thread Sebastiaan Koppe via Digitalmars-d-announce

On Sunday, 8 May 2022 at 21:32:42 UTC, Andrea Fontana wrote:
Every request is processed by a worker running in an isolated 
process, no fibers/threads, sorry (or thanks?)


I did some tests and the performance looks good: on a local 
machine it can handle more than 100_000 reqs/sec for a simple 
page containing just "hello world!". Of course that's not a good 
benchmark; if you can help me with other benchmarks it would be 
much appreciated (a big thanks to Tomáš Chaloupka who did some 
tests!)


Typically server applications are IO heavy. I expect your 
isolated-process approach to break down with that kind of work.


As an example, how many requests per second can you manage if all 
requests have to wait 100 msecs?


For non-critical workloads you will probably still get good enough 
performance, though.


Instead of using a lot of different UDAs to set routing rules, 
you can simply write them in your endpoint's body and exit from 
it to pass the request on to the next endpoint.


My experience is that exhaustive matching is easier to reason 
about at larger scale.


Please help me test it; I'm looking forward to receiving 
your shiny new issues on GitHub.


I noticed it has zero unittests; that is probably a good place to 
start.


Re: Release: serverino - please destroy it.

2022-05-09 Thread Andrea Fontana via Digitalmars-d-announce

On Monday, 9 May 2022 at 19:09:40 UTC, Guillaume Piolat wrote:

On Sunday, 8 May 2022 at 21:32:42 UTC, Andrea Fontana wrote:

Hello!

I've just released serverino. It's a small & ready-to-go 
http/https server.


Dub package: https://code.dlang.org/packages/serverino

Andrea


Looks very useful, congratulations!


Thank you. Looking forward to getting feedback, bug reports and 
help :)


Andrea


Re: Release: serverino - please destroy it.

2022-05-09 Thread Guillaume Piolat via Digitalmars-d-announce

On Sunday, 8 May 2022 at 21:32:42 UTC, Andrea Fontana wrote:

Hello!

I've just released serverino. It's a small & ready-to-go 
http/https server.


Dub package: https://code.dlang.org/packages/serverino

Andrea


Looks very useful, congratulations!


Re: Release: serverino - please destroy it.

2022-05-09 Thread H. S. Teoh via Digitalmars-d-announce
On Mon, May 09, 2022 at 04:48:11PM +, Vladimir Panteleev via 
Digitalmars-d-announce wrote:
> On Monday, 9 May 2022 at 16:37:15 UTC, H. S. Teoh wrote:
> > Why is memory protection the only way to implement write barriers in
> > D?
> 
> Well, it's the only way I know of without making it a major
> backwards-incompatible change. The main restriction in this area is
> that it must continue working with code written in other languages,
> and generally not affect the ABI drastically.

Ah, gotcha.  Yeah, I don't think such an approach would be fruitful (it
was worth a shot, though!).  If D were ever to get write barriers,
they'd have to be in some other form, probably more intrusive in terms
of backwards-compatibility and ABI.


T

-- 
Curiosity kills the cat. Moral: don't be the cat.


Re: Release: serverino - please destroy it.

2022-05-09 Thread Vladimir Panteleev via Digitalmars-d-announce

On Monday, 9 May 2022 at 16:37:15 UTC, H. S. Teoh wrote:
Why is memory protection the only way to implement write 
barriers in D?


Well, it's the only way I know of without making it a major 
backwards-incompatible change. The main restriction in this area 
is that it must continue working with code written in other 
languages, and generally not affect the ABI drastically.
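
For anyone curious what such a memory-protection write barrier looks 
like in practice, here is a minimal, Linux-only illustration (not 
druntime code, and error handling is omitted): a page is kept 
read-only, the first write to it faults, and the SIGSEGV handler 
records the page as dirty before re-enabling writes so the faulting 
store can be retried.

```
import core.stdc.stdio : printf;
import core.sys.posix.signal;
import core.sys.posix.sys.mman;

enum PAGE = 4096;

__gshared void* tracked;   // the page we are watching
__gshared int dirty;       // set by the barrier on the first write

extern (C) void onFault(int sig, siginfo_t* info, void* ctx)
{
    dirty = 1;                                        // remember the write
    mprotect(tracked, PAGE, PROT_READ | PROT_WRITE);  // let it proceed
}

void main()
{
    tracked = mmap(null, PAGE, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANON, -1, 0);

    sigaction_t sa;
    sa.sa_sigaction = &onFault;
    sa.sa_flags = SA_SIGINFO;
    sigemptyset(&sa.sa_mask);
    sigaction(SIGSEGV, &sa, null);

    mprotect(tracked, PAGE, PROT_READ);   // the page starts out "clean"
    *cast(int*) tracked = 42;             // first store triggers the barrier
    printf("dirty = %d\n", dirty);        // prints: dirty = 1
}
```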




Re: Release: serverino - please destroy it.

2022-05-09 Thread H. S. Teoh via Digitalmars-d-announce
On Mon, May 09, 2022 at 05:55:39AM +, Vladimir Panteleev via 
Digitalmars-d-announce wrote:
> On Monday, 9 May 2022 at 00:25:43 UTC, H. S. Teoh wrote:
> > In the past, the argument was that write barriers represented an
> > unacceptable performance hit to D code.  But I don't think this has
> > ever actually been measured. (Or has it?)  Maybe somebody should
> > make a dmd fork that introduces write barriers, plus a generational
> > GC (even if it's a toy, proof-of-concept-only implementation) to see
> if the performance hit is really as bad as it's believed to be.
> 
> Implementing write barriers in the compiler (by instrumenting code)
> means that you're no longer allowed to copy pointers to managed memory
> in non-D code. This is a stricter assumption than the current ones we
> have; for instance, copying a struct (which has indirections) with
> memcpy would be forbidden.

Hmm, true.  That puts a big damper on the possibilities... OTOH, if this
could be made an optional feature, then code that we know doesn't need it
(e.g., code that never passes pointers to C) could take advantage of
possibly better GC strategies.


T

-- 
English has the lovely word "defenestrate", meaning "to execute by throwing 
someone out a window", or more recently "to remove Windows from a computer and 
replace it with something useful". :-) -- John Cowan


Re: Release: serverino - please destroy it.

2022-05-09 Thread H. S. Teoh via Digitalmars-d-announce
On Mon, May 09, 2022 at 05:52:30AM +, Vladimir Panteleev via 
Digitalmars-d-announce wrote:
> On Sunday, 8 May 2022 at 23:44:42 UTC, Ali Çehreli wrote:
> > While we are on topic :) and as I finally understood what
> > generational GC is[1], are there any fundamental issues that keep D
> > from using one?
> 
> I implemented one a long time ago. The only way to get write barriers
> with D is memory protection. It worked, but unfortunately the write
> barriers caused a severe performance penalty.

Why is memory protection the only way to implement write barriers in D?


> It's possible that it might be viable with more tweaking, or in
> certain applications where most of the heap is not written to; I did
> not experiment a lot with it.

Interesting data point, in any case.


T

-- 
The early bird gets the worm. Moral: ewww...


Re: Release: serverino - please destroy it.

2022-05-09 Thread Vladimir Panteleev via Digitalmars-d-announce

On Monday, 9 May 2022 at 00:25:43 UTC, H. S. Teoh wrote:
In the past, the argument was that write barriers represented 
an unacceptable performance hit to D code.  But I don't think 
this has ever actually been measured. (Or has it?)  Maybe 
somebody should make a dmd fork that introduces write barriers, 
plus a generational GC (even if it's a toy, 
proof-of-concept-only implementation) to see if the performance 
hit is really as bad as it's believed to be.


Implementing write barriers in the compiler (by instrumenting 
code) means that you're no longer allowed to copy pointers to 
managed memory in non-D code. This is a stricter assumption than 
the current ones we have; for instance, copying a struct (which 
has indirections) with memcpy would be forbidden.
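
To make that restriction concrete, here is a small illustrative D 
sketch (hypothetical types, not code from any real GC): the raw byte 
copy stores a pointer into the destination without any store the 
compiler could instrument, which is exactly what compiler-inserted 
write barriers would miss.

```
import core.stdc.string : memcpy;

struct Node
{
    int value;
    Node* next;   // an indirection into GC-managed memory
}

void copyRaw(Node* dst, const(Node)* src)
{
    // A plain `*dst = *src;` is a store the D compiler could instrument
    // with a write barrier. This byte copy goes through the C runtime
    // instead, so the GC never learns about the pointer written to `next`.
    memcpy(dst, src, Node.sizeof);
}
```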