Re: Release: serverino - please destroy it.

2022-05-15 Thread Andrea Fontana via Digitalmars-d-announce

On Sunday, 15 May 2022 at 06:37:08 UTC, frame wrote:

On Saturday, 14 May 2022 at 23:23:47 UTC, Andrea Fontana wrote:


Which kind of socket exception could be triggered by a client?

Andrea


It doesn't matter if triggered by a client or not, you need to 
deal with the possibility. A closed/destroyed socket is an 
invalid resource.


I recently had the scenario on Windows where a client crashed 
and the socket wasn't closed properly somehow. Now the server 
adds the socket to the set to see an update - boom! "Socket 
operation on non-socket" error.


Accepting sockets can also throw, e.g. on a stupid network 
timeout error - not only on Windows. Other socket operations 
are no exception either.


`isAlive` is fine for sockets properly shut down/closed by you 
or the peer. But it doesn't protect you from faulting ones.


Ok, added some checks on .select, .accept, .bind, .listen.
Thank you.

Andrea
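
The kind of defensive wrapping discussed here might look like this 
(a hypothetical sketch in D using std.socket, not serverino's 
actual code):

```
// Hypothetical sketch, not serverino's actual code: wrap the socket
// calls that can throw, so one faulty client can't kill the loop.
import std.socket;
import std.stdio;

void acceptLoop(Socket listener)
{
    while (true)
    {
        try
        {
            auto client = listener.accept(); // may throw SocketAcceptException
            // ... hand the client off to a worker here ...
            client.close();
        }
        catch (SocketAcceptException e)
        {
            // e.g. a timeout, or a reset between select() and accept()
            stderr.writeln("accept failed: ", e.msg);
        }
        catch (SocketException e)
        {
            // "Socket operation on non-socket" and friends: the listener
            // itself is broken, so bail out and let the caller rebuild it
            stderr.writeln("listener is dead: ", e.msg);
            break;
        }
    }
}
```

Note that SocketAcceptException derives from SocketException, so the 
more specific catch has to come first.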



Re: Release: serverino - please destroy it.

2022-05-15 Thread frame via Digitalmars-d-announce

On Saturday, 14 May 2022 at 23:23:47 UTC, Andrea Fontana wrote:


Which kind of socket exception could be triggered by a client?

Andrea


It doesn't matter whether it is triggered by a client or not; you 
need to deal with the possibility. A closed/destroyed socket is 
an invalid resource.


I recently had the scenario on Windows where a client crashed and 
the socket wasn't closed properly somehow. Now the server adds 
the socket to the set to see an update - boom! "Socket operation 
on non-socket" error.


Accepting sockets can also throw, e.g. on a stupid network 
timeout error - not only on Windows. Other socket operations are 
no exception either.


`isAlive` is fine for sockets properly shut down/closed by you 
or the peer. But it doesn't protect you from faulting ones.


Re: Release: serverino - please destroy it.

2022-05-14 Thread Andrea Fontana via Digitalmars-d-announce

On Saturday, 14 May 2022 at 20:44:54 UTC, frame wrote:
Take care of socket exceptions - especially if you want to make 
a port to Windows.


You should always expect one. It's not enough to test 
`Socket.isAlive` - a client socket may be faulty and any 
illegal socket operation throws and kills your loop. Even if 
`isAlive` works as expected, it may change status before you 
have added the socket to the set. You don't want your server 
to crash if a client misbehaves.


Which kind of socket exception could be triggered by a client?

Andrea


Re: Release: serverino - please destroy it.

2022-05-14 Thread frame via Digitalmars-d-announce

On Sunday, 8 May 2022 at 21:32:42 UTC, Andrea Fontana wrote:

Please help me test it; I'm looking forward to receiving 
your shiny new issues on GitHub.


Dub package: https://code.dlang.org/packages/serverino

Andrea


Take care of socket exceptions - especially if you want to make a 
port to Windows.


You should always expect one. It's not enough to test 
`Socket.isAlive` - a client socket may be faulty and any illegal 
socket operation throws and kills your loop. Even if `isAlive` 
works as expected, it may change status before you have added 
the socket to the set. You don't want your server to crash if a 
client misbehaves.


Re: Release: serverino - please destroy it.

2022-05-12 Thread Andrea Fontana via Digitalmars-d-announce

On Thursday, 12 May 2022 at 11:46:05 UTC, Guillaume Piolat wrote:

On Thursday, 12 May 2022 at 11:33:07 UTC, Andrea Fontana wrote:


Does dmd/rdmd work? Serverino uses std.net.curl just for 
running its unittests, so maybe that bug is not blocking.


Well tbh, the simple fact that I would have to use WSL is a 
blocker for me.

AFAIK vibe or cgi.d do not require that.


Yeah, I need a Windows machine (or someone who has one!) to 
rewrite some POSIX parts.


For example, the part that sends/receives the file descriptor (of 
a socket) from the master process to the worker (Windows has its 
own API for this).


Re: Release: serverino - please destroy it.

2022-05-12 Thread Guillaume Piolat via Digitalmars-d-announce

On Thursday, 12 May 2022 at 11:33:07 UTC, Andrea Fontana wrote:


Does dmd/rdmd work? Serverino uses std.net.curl just for 
running its unittests, so maybe that bug is not blocking.


Well tbh, the simple fact that I would have to use WSL is a 
blocker for me.

AFAIK vibe or cgi.d do not require that.


Re: Release: serverino - please destroy it.

2022-05-12 Thread rikki cattermole via Digitalmars-d-announce



On 12/05/2022 11:33 PM, Andrea Fontana wrote:

Too bad dub doesn't work with WSL, it sounds like a lost opportunity.

Does dmd/rdmd work? Serverino uses std.net.curl just for running its 
unittests, so maybe that bug is not blocking.


It doesn't look like it is dub that is failing.

This is a problem in Phobos/compiler.


Re: Release: serverino - please destroy it.

2022-05-12 Thread Andrea Fontana via Digitalmars-d-announce

On Thursday, 12 May 2022 at 10:26:28 UTC, Guillaume Piolat wrote:

On Sunday, 8 May 2022 at 21:45:28 UTC, Andrea Fontana wrote:


If you can test it on windows with WSL, that would be 
appreciated a lot!




I tried to test serverino on WSL, but dub doesn't run on WSL.
=> https://github.com/dlang/dub/issues/2249


Hey thanks for your support!

Too bad dub doesn't work with WSL, it sounds like a lost 
opportunity.


Does dmd/rdmd work? Serverino uses std.net.curl just for running 
its unittests, so maybe that bug is not blocking.


Andrea


Re: Release: serverino - please destroy it.

2022-05-12 Thread Guillaume Piolat via Digitalmars-d-announce

On Sunday, 8 May 2022 at 21:45:28 UTC, Andrea Fontana wrote:


If you can test it on windows with WSL, that would be 
appreciated a lot!




I tried to test serverino on WSL, but dub doesn't run on WSL.
=> https://github.com/dlang/dub/issues/2249




Re: Release: serverino - please destroy it.

2022-05-11 Thread Ola Fosheim Grøstad via Digitalmars-d-announce

On Tuesday, 10 May 2022 at 19:24:25 UTC, Andrea Fontana wrote:
Maybe bambinetto is more about immaturity. Bambinuccio is cute. 
Bambinaccio is bad. Bambinone is big (an adult who behaves like 
a child). -ello doesn't sound good with bambino, but it's very 
similar to -etto.


Good luck :)


Thanks for the explanation! <3
If only programming languages were this expressive!

«Servinuccio»… ;P




Re: Release: serverino - please destroy it.

2022-05-11 Thread Andrea Fontana via Digitalmars-d-announce

On Wednesday, 11 May 2022 at 06:50:37 UTC, Orfeo wrote:

well done Andrea!

(the forum is getting too crowded with Italians :) )


---
Orfeo


We all miss good old bearophile!
He was, I think, the most active Italian on this forum.

Andrea


Re: Release: serverino - please destroy it.

2022-05-11 Thread Orfeo via Digitalmars-d-announce

well done Andrea!

(the forum is getting too crowded with Italians :) )


---
Orfeo




Re: Release: serverino - please destroy it.

2022-05-10 Thread Andrea Fontana via Digitalmars-d-announce

On Tuesday, 10 May 2022 at 21:24:46 UTC, Paolo Invernizzi wrote:


Here I am ... Milanese: https://www.deepglance.com/about

/Paolo


Ok it's me getting old!

Andrea




Re: Release: serverino - please destroy it.

2022-05-10 Thread Paolo Invernizzi via Digitalmars-d-announce

On Tuesday, 10 May 2022 at 20:41:17 UTC, Andrea Fontana wrote:

On Tuesday, 10 May 2022 at 20:13:45 UTC, Paolo Invernizzi wrote:
Sinceramente non ricordo di averlo scritto, ma alla mia eta 
... probabilmente dimentico qualcosa ... comunque piacere! E' 
bello vedere altri italiani apprezzare questo magnifico 
linguaggio!


(Frankly speaking, I don't remember to have written that, but 
hey, I'm getting old ... probably  I'm forgetting something 
... anyway nice to meet you! It's great to see Italians here 
enjoying this great programming language!)


I wonder if you're pulling my leg. Or maybe it's me who's 
getting old.
I'm pretty sure that there's a user here with a really Italian 
name who was born somewhere in South America.


Andrea


Here I am ... Milanese: https://www.deepglance.com/about

/Paolo




Re: Release: serverino - please destroy it.

2022-05-10 Thread Andrea Fontana via Digitalmars-d-announce

On Tuesday, 10 May 2022 at 20:13:45 UTC, Paolo Invernizzi wrote:
Sinceramente non ricordo di averlo scritto, ma alla mia eta ... 
probabilmente dimentico qualcosa ... comunque piacere! E' bello 
vedere altri italiani apprezzare questo magnifico linguaggio!


(Frankly speaking, I don't remember to have written that, but 
hey, I'm getting old ... probably  I'm forgetting something ... 
anyway nice to meet you! It's great to see Italians here 
enjoying this great programming language!)


I wonder if you're pulling my leg. Or maybe it's me who's 
getting old.
I'm pretty sure that there's a user here with a really Italian 
name who was born somewhere in South America.


Andrea


Re: Release: serverino - please destroy it.

2022-05-10 Thread Paolo Invernizzi via Digitalmars-d-announce

On Tuesday, 10 May 2022 at 19:55:32 UTC, Andrea Fontana wrote:

On Tuesday, 10 May 2022 at 19:50:08 UTC, Paolo Invernizzi wrote:


Concordo ... (I agree!)

:-P


Wait, you have always said you're not Italian. Have you changed 
your mind?


Andrea


Sinceramente non ricordo di averlo scritto, ma alla mia eta ... 
probabilmente dimentico qualcosa ... comunque piacere! E' bello 
vedere altri italiani apprezzare questo magnifico linguaggio!


(Frankly speaking, I don't remember to have written that, but 
hey, I'm getting old ... probably  I'm forgetting something ... 
anyway nice to meet you! It's great to see Italians here enjoying 
this great programming language!)




Re: Release: serverino - please destroy it.

2022-05-10 Thread Andrea Fontana via Digitalmars-d-announce

On Tuesday, 10 May 2022 at 19:50:08 UTC, Paolo Invernizzi wrote:


Concordo ... (I agree!)

:-P


Wait, you have always said you're not Italian. Have you changed 
your mind?


Andrea




Re: Release: serverino - please destroy it.

2022-05-10 Thread Paolo Invernizzi via Digitalmars-d-announce

On Tuesday, 10 May 2022 at 16:05:11 UTC, Andrea Fontana wrote:
On Tuesday, 10 May 2022 at 15:35:35 UTC, Ola Fosheim Grøstad 
wrote:

On Tuesday, 10 May 2022 at 15:27:48 UTC, Andrea Fontana wrote:
Indeed the "-ino" suffix in "serverino" stands for "small" in 
italian. :)


Bambino > bambinello? So, the embedded-version could be 
«serverinello»? :O)


Oh, Italian is full of suffixes. -ello means a slightly 
different thing: small, but it sounds a bit pejorative.


-ino in bambino is not (anymore) a suffix, anyway.


Andrea


Concordo ... (I agree!)

:-P


Re: Release: serverino - please destroy it.

2022-05-10 Thread Andrea Fontana via Digitalmars-d-announce

On Tuesday, 10 May 2022 at 18:33:18 UTC, Sebastiaan Koppe wrote:

On Tuesday, 10 May 2022 at 10:49:06 UTC, Andrea Fontana wrote:
On Tuesday, 10 May 2022 at 08:32:15 UTC, Sebastiaan Koppe 
wrote:
The difference is that with the route uda you can *only* map 
routes 1:1 exhaustively. With your approach it is up to the 
programmer to avoid errors. It is also hard to reason about 
the flow of requests through all those functions, and you 
have to look at the body of them to determine what will 
happen.


Sorry I don't follow you


It is simple: since all your handlers are effectively chained, 
any error in any one of them can cause later ones to fail or 
misbehave. This decreases locality and increases the things you 
have to reason about.


Not sure. What if your UDA (regex) match is too permissive? Is 
that any different?


My code evaluates workers in order, just like yours, no?

Maybe I can enable some logging via config to track what's 
happening. That could help you debug if something goes wrong.




There are other benefits to UDA-tagged endpoints too; for 
example, they are easier to nest or to generate 
programmatically. In vibe-d I added the default option of 
generating OPTIONS handlers for every regular endpoint. This is 
required for CORS.



```
@endpoint void func(...)
{
    if (req.method == Method.OPTION)
    {
        // THIS RUNS FOR EVERY ENDPOINT
    }
}
```




In any case if you want to use a different routing strategy 
it's quite easy. I really don't like libraries that force you 
to use their own style/way.


That is good.


Andrea




Re: Release: serverino - please destroy it.

2022-05-10 Thread Andrea Fontana via Digitalmars-d-announce
On Tuesday, 10 May 2022 at 16:47:13 UTC, Ola Fosheim Grøstad 
wrote:

On Tuesday, 10 May 2022 at 16:05:11 UTC, Andrea Fontana wrote:
Oh, Italian is full of suffixes. -ello means a slightly 
different thing: small, but it sounds a bit pejorative.


Oh, and I loved the sound of it… suggests immaturity, perhaps?

(I love the -ello and -ella endings. «Bambinella» is one of my 
favourite words, turns out it is a fruit too!)


Maybe bambinetto is more about immaturity. Bambinuccio is cute. 
Bambinaccio is bad. Bambinone is big (an adult who behaves like a 
child). -ello doesn't sound good with bambino, but it's very 
similar to -etto.


Good luck :)



Re: Release: serverino - please destroy it.

2022-05-10 Thread Sebastiaan Koppe via Digitalmars-d-announce

On Tuesday, 10 May 2022 at 10:49:06 UTC, Andrea Fontana wrote:

On Tuesday, 10 May 2022 at 08:32:15 UTC, Sebastiaan Koppe wrote:
The difference is that with the route uda you can *only* map 
routes 1:1 exhaustively. With your approach it is up to the 
programmer to avoid errors. It is also hard to reason about 
the flow of requests through all those functions, and you have 
to look at the body of them to determine what will happen.


Sorry I don't follow you


It is simple: since all your handlers are effectively chained, any 
error in any one of them can cause later ones to fail or 
misbehave. This decreases locality and increases the things you 
have to reason about.


There are other benefits to UDA-tagged endpoints too; for example, 
they are easier to nest or to generate programmatically. In 
vibe-d I added the default option of generating OPTIONS handlers 
for every regular endpoint. This is required for CORS.


In any case if you want to use a different routing strategy 
it's quite easy. I really don't like libraries that force you 
to use their own style/way.


That is good.


Re: Release: serverino - please destroy it.

2022-05-10 Thread Ola Fosheim Grøstad via Digitalmars-d-announce

On Tuesday, 10 May 2022 at 16:05:11 UTC, Andrea Fontana wrote:
Oh, Italian is full of suffixes. -ello means a slightly 
different thing: small, but it sounds a bit pejorative.


Oh, and I loved the sound of it… suggests immaturity, perhaps?

(I love the -ello and -ella endings. «Bambinella» is one of my 
favourite words, turns out it is a fruit too!)




Re: Release: serverino - please destroy it.

2022-05-10 Thread Andrea Fontana via Digitalmars-d-announce
On Tuesday, 10 May 2022 at 15:35:35 UTC, Ola Fosheim Grøstad 
wrote:

On Tuesday, 10 May 2022 at 15:27:48 UTC, Andrea Fontana wrote:
Indeed the "-ino" suffix in "serverino" stands for "small" in 
italian. :)


Bambino > bambinello? So, the embedded-version could be 
«serverinello»? :O)


Oh, Italian is full of suffixes. -ello means a slightly different 
thing: small, but it sounds a bit pejorative.


-ino in bambino is not (anymore) a suffix, anyway.


Andrea


Re: Release: serverino - please destroy it.

2022-05-10 Thread Ola Fosheim Grøstad via Digitalmars-d-announce

On Tuesday, 10 May 2022 at 15:27:48 UTC, Andrea Fontana wrote:
Indeed the "-ino" suffix in "serverino" stands for "small" in 
italian. :)


Bambino > bambinello? So, the embedded-version could be 
«serverinello»? :O)





Re: Release: serverino - please destroy it.

2022-05-10 Thread Andrea Fontana via Digitalmars-d-announce
On Tuesday, 10 May 2022 at 15:16:22 UTC, Ola Fosheim Grøstad 
wrote:

On Tuesday, 10 May 2022 at 15:00:06 UTC, Andrea Fontana wrote:
I work in the R and every single time I have to write even a 
small API or a simple HTML interface to control some strange 
machine I think "omg, I have to set up nginx again".


Good point, there are more application areas than regular 
websites. Embedded remote applications could be another 
application area where you want something simple with HTTPS 
(monitoring webcams, sensors, solar panels, supervising farming 
houses or whatever).


Indeed the "-ino" suffix in "serverino" stands for "small" in 
italian. :)


Andrea


Re: Release: serverino - please destroy it.

2022-05-10 Thread Ola Fosheim Grøstad via Digitalmars-d-announce

On Tuesday, 10 May 2022 at 15:00:06 UTC, Andrea Fontana wrote:
I work in the R and every single time I have to write even a 
small API or a simple HTML interface to control some strange 
machine I think "omg, I have to set up nginx again".


Good point, there are more application areas than regular 
websites. Embedded remote applications could be another 
application area where you want something simple with HTTPS 
(monitoring webcams, sensors, solar panels, supervising farming 
houses or whatever).






Re: Release: serverino - please destroy it.

2022-05-10 Thread Andrea Fontana via Digitalmars-d-announce

On Tuesday, 10 May 2022 at 15:01:43 UTC, Adam Ruppe wrote:

On Monday, 9 May 2022 at 19:20:27 UTC, Andrea Fontana wrote:
Thank you. Looking forward to getting feedback, bug reports 
and help :)


BTW I'm curious, what made you not want to use my cgi.d which 
has similar capabilities?


I was really tempted to start from that!
But it's difficult to fork and edit an 11-kloc project like that :)

I had already developed fastcgi and scgi code in the past, so I 
reused some of it and it didn't take that long to get to 
serverino.


Andrea




Re: Release: serverino - please destroy it.

2022-05-10 Thread Adam Ruppe via Digitalmars-d-announce

On Monday, 9 May 2022 at 19:20:27 UTC, Andrea Fontana wrote:
Thank you. Looking forward to getting feedback, bug reports and 
help :)


BTW I'm curious, what made you not want to use my cgi.d which has 
similar capabilities?


Re: Release: serverino - please destroy it.

2022-05-10 Thread Andrea Fontana via Digitalmars-d-announce
On Tuesday, 10 May 2022 at 13:15:38 UTC, Ola Fosheim Grøstad 
wrote:

On Tuesday, 10 May 2022 at 12:52:01 UTC, Andrea Fontana wrote:
I'm running a whole website in D using fastcgi and we have no 
problem at all; it's blazing fast. But it's not as easy to 
set up as serverino :)


Easy setup is probably the number one reason people land on a 
specific web-tech, so it is the best initial angle, I agree.


(By version 3.x you know what the practical weak spots are and 
can rethink the bottom layer.)


Right. But it's not just marketing.

I work in the R and every single time I have to write even a 
small API or a simple HTML interface to control some strange 
machine I think "omg, I have to set up nginx again". It's 
pretty annoying, especially if you're working on a shared AWS 
machine. (I know, Docker & co. exist, but they take a lot to set 
up and they are heavy for a simple API.) I'm going to love 
serverino in the next months :)


Re: Release: serverino - please destroy it.

2022-05-10 Thread Andrea Fontana via Digitalmars-d-announce

On Tuesday, 10 May 2022 at 13:34:27 UTC, Adam D Ruppe wrote:

On Monday, 9 May 2022 at 20:37:50 UTC, Andrea Fontana wrote:

The same goes for cgi/fastcgi/scgi and so on.


Well, cgi does one process per request, so there is no worker 
pool (it is the original "serverless" lol).


fastcgi is interesting because the Apache module for it will 
actually start and stop worker processes as-needed. I don't 
think the nginx impl does that, though.


Some daemons can manage this by themselves (once again: check 
php-fpm's "dynamic" setting).


Serverino can do it as well. You can set the max and min number 
of workers in the configuration. It's easy:


```
@onServerInit auto setup()
{
    ServerinoConfig sc = ServerinoConfig.create();
    sc.setMinWorkers(5);
    sc.setMaxWorkers(100);
    return sc;
}
```

If all workers are busy, the daemon will launch a new one. You 
might be interested in setMaxWorkerLifetime() and 
setMaxWorkerIdling() too!

But the nicest thing about all these application models is if 
you write it in the right way, you can swap out the approach, 
either transparently adding the i/o event waits or just adding 
additional servers without touching the application code. 
That's a lot harder to do when you expect shared state etc. 
like other things encourage.


I would mention that if something goes wrong and a process 
crashes or gets caught in an infinite loop, it's not a problem: 
the process is killed and respawned without pulling the whole 
server down.


Andrea




Re: Release: serverino - please destroy it.

2022-05-10 Thread Adam D Ruppe via Digitalmars-d-announce

On Monday, 9 May 2022 at 20:37:50 UTC, Andrea Fontana wrote:

The same goes for cgi/fastcgi/scgi and so on.


Well, cgi does one process per request, so there is no worker 
pool (it is the original "serverless" lol).


fastcgi is interesting because the Apache module for it will 
actually start and stop worker processes as-needed. I don't think 
the nginx impl does that, though.


But the nicest thing about all these application models is if you 
write it in the right way, you can swap out the approach, either 
transparently adding the i/o event waits or just adding 
additional servers without touching the application code. That's 
a lot harder to do when you expect shared state etc. like other 
things encourage.


Re: Release: serverino - please destroy it.

2022-05-10 Thread Ola Fosheim Grøstad via Digitalmars-d-announce

On Tuesday, 10 May 2022 at 12:52:01 UTC, Andrea Fontana wrote:
I'm running a whole website in D using fastcgi and we have no 
problem at all; it's blazing fast. But it's not as easy to 
set up as serverino :)


Easy setup is probably the number one reason people land on a 
specific web-tech, so it is the best initial angle, I agree.


(By version 3.x you know what the practical weak spots are and 
can rethink the bottom layer.)





Re: Release: serverino - please destroy it.

2022-05-10 Thread Andrea Fontana via Digitalmars-d-announce
On Tuesday, 10 May 2022 at 12:31:23 UTC, Ola Fosheim Grøstad 
wrote:

On Tuesday, 10 May 2022 at 10:49:06 UTC, Andrea Fontana wrote:

And you can still handle 700k views per hour with 20 workers!


Requests tend to come in bursts from the same client, thanks to 
clunky javascript APIs and clutters of resources (and careless 
web developers). For a typical D user ease-of-use is probably 
more important at this point, though, so good luck with your 
project!


In my opinion, in real life that's not as big a problem as it may seem.

Again: that's just how nginx and Apache handle php/cgi/fcgi/scgi 
requests.
Wikipedia runs the MediaWiki software, written in PHP, on Apache 
with php-fpm (and caching!).


And I'm not suggesting to run wikipedia on serverino, *for now*.

If you try to open a lot of Wikipedia pages at the same time in a 
burst, they will be served (probably over a keep-alive 
connection) not in parallel: you're queued. And 99.9% of users 
will never notice. Is that a problem?


If you need more control, you can use an HTTP accelerator and/or 
a reverse proxy (like nginx) to handle bursts and the like.


I'm running a whole website in D using fastcgi and we have no 
problem at all; it's blazing fast. But it's not as easy to set up 
as serverino :)


Andrea




Re: Release: serverino - please destroy it.

2022-05-10 Thread Ola Fosheim Grøstad via Digitalmars-d-announce

On Tuesday, 10 May 2022 at 10:49:06 UTC, Andrea Fontana wrote:

And you can still handle 700k views per hour with 20 workers!


Requests tend to come in bursts from the same client, thanks to 
clunky javascript APIs and clutters of resources (and careless 
web developers). For a typical D user ease-of-use is probably 
more important at this point, though, so good luck with your 
project!







Re: Release: serverino - please destroy it.

2022-05-10 Thread Andrea Fontana via Digitalmars-d-announce

On Tuesday, 10 May 2022 at 08:32:15 UTC, Sebastiaan Koppe wrote:

On Monday, 9 May 2022 at 20:37:50 UTC, Andrea Fontana wrote:

On Monday, 9 May 2022 at 20:08:38 UTC, Sebastiaan Koppe wrote:
As an example, how many requests per second can you manage if 
all requests have to wait 100 msecs?


For non-critical workloads you will probably still get good 
enough performance, though.


Firstly, it depends on how many workers you have.

Then you should consider that a lot of (most?) websites use 
php-fpm, which works using the same approach (but PHP is much 
slower than D). The same goes for cgi/fastcgi/scgi and so on.


Let's say you have just 20 workers, and 100 msecs for each 
request (a lot of time by my standards, I would say). That means 
20*10 = 200 webpages/s = 720k pages/h. I don't think your 
website has that much traffic...


And I hope not every request will take 100msecs!


100msecs is on the upper end for sure, but if you add a 
database, external service call, etc. it is not uncommon to 
reach that.


And you can still handle 700k views per hour with 20 workers!

The point, however, is that the architecture breaks down because 
it is unable to do work concurrently. Every request blocks a 
worker from start to finish.


Unless the work is CPU heavy, the system will be underutilized. 
That is not necessarily bad, though. The simplicity has something 
going for it, but it is definitely a tradeoff that you should 
consider highlighting.


Every server has its own target. BTW, I'm not developing 
serverino to use it as a building block of a CDN.


In real life, I think you can use it for not-huge projects 
without any problem.


You can also put it behind a reverse proxy (e.g. nginx), to 
handle just the requests you need to write in D.


The difference is that with the route uda you can *only* map 
routes 1:1 exhaustively. With your approach it is up to the 
programmer to avoid errors. It is also hard to reason about the 
flow of requests through all those functions, and you have to 
look at the body of them to determine what will happen.


Sorry, I don't follow you: I don't know which framework you're 
using, but if you're using UDAs with matchers (something like 
@matchUri("/main") void renderMain(...) { ... }), you still have 
to check all the functions when a request is not handled 
correctly. Or am I missing something?


With my approach, if you want to catch requests that escape 
routing, you can just add a catch-all endpoint with low 
priority.


```
@priority(-1000) @endpoint
void wtf(Request r, Output o)
{
    fatal("Request NOT HANDLED: ", r.dump());
}
```

And if a request doesn't match your UDA constraints, how do you 
debug what's wrong with it? I think it's easier to add a 
checkpoint/log in the first lines of your function's body to see 
why the function is skipped.


In any case if you want to use a different routing strategy it's 
quite easy. I really don't like libraries that force you to use 
their own style/way.


So you can even drop my UDAs and write the app like this. It 
still works:


```
mixin ServerinoMain;

void entry(Request r, Output o)
{
    // Use your routing strategy here
    // ...
    // YourRouter router;
    // router.do(r, "/hello/world", ...);
    // router.do(r, "/bla", ...);
}
```

Andrea


Re: Release: serverino - please destroy it.

2022-05-10 Thread Sebastiaan Koppe via Digitalmars-d-announce

On Monday, 9 May 2022 at 20:37:50 UTC, Andrea Fontana wrote:

On Monday, 9 May 2022 at 20:08:38 UTC, Sebastiaan Koppe wrote:
As an example, how many requests per second can you manage if 
all requests have to wait 100 msecs?


For non-critical workloads you will probably still get good 
enough performance, though.


Firstly, it depends on how many workers you have.

Then you should consider that a lot of (most?) websites use 
php-fpm, which works using the same approach (but PHP is much 
slower than D). The same goes for cgi/fastcgi/scgi and so on.


Let's say you have just 20 workers, and 100 msecs for each 
request (a lot of time by my standards, I would say). That means 
20*10 = 200 webpages/s = 720k pages/h. I don't think your website 
has that much traffic...


And I hope not every request will take 100msecs!


100msecs is on the upper end for sure, but if you add a database, 
external service call, etc. it is not uncommon to reach that.


The point, however, is that the architecture breaks down because 
it is unable to do work concurrently. Every request blocks a 
worker from start to finish.


Unless the work is CPU heavy, the system will be underutilized. 
That is not necessarily bad, though. The simplicity has something 
going for it, but it is definitely a tradeoff that you should 
consider highlighting.



```
@endpoint void my_end(Request r, Output o)
{
    if (r.uri != "/asd") // or whatever you want: regex, or checking another field
        return; // exit: the request falls through to the next endpoint
}
```

This is just like:

```
@matchuda(uri, "/asd") void my_end() { ... }
```

What's the difference? The first one is much more flexible, 
IMHO.


The difference is that with the route uda you can *only* map 
routes 1:1 exhaustively. With your approach it is up to the 
programmer to avoid errors. It is also hard to reason about the 
flow of requests through all those functions, and you have to 
look at the body of them to determine what will happen.


Re: Release: serverino - please destroy it.

2022-05-09 Thread Andrea Fontana via Digitalmars-d-announce

On Monday, 9 May 2022 at 20:08:38 UTC, Sebastiaan Koppe wrote:

On Sunday, 8 May 2022 at 21:32:42 UTC, Andrea Fontana wrote:
Every request is processed by a worker running in an isolated 
process, no fibers/threads, sorry (or thanks?)


I did some tests and the performance sounds good: on a local 
machine it can handle more than 100_000 reqs/sec for a simple 
page containing just "hello world!". Of course that's not a 
good benchmark, if you can help me with other benchmarks it 
would be much appreciated (a big thanks to Tomáš Chaloupka who 
did some tests!)


Typically server applications are IO heavy. I expect your 
isolated-process approach to break down with that kind of work.


I know. We all know :) Benchmarks are just benchmarks. They are 
useful to understand how much overhead your server adds to the 
whole project. These benchmarks were made on the local machine, 
with almost no connection overhead.


Not every application is IO heavy, anyway.

As an example, how many requests per second can you manage if 
all requests have to wait 100 msecs?


For non-critical workloads you will probably still get good 
enough performance, though.


Firstly, it depends on how many workers you have.
Then you should consider that a lot of (most?) websites use 
php-fpm, which works using the same approach (but PHP is much 
slower than D). The same goes for cgi/fastcgi/scgi and so on.


Let's say you have just 20 workers, and 100 msecs for each 
request (a lot of time by my standards, I would say). That means 
20*10 = 200 webpages/s = 720k pages/h. I don't think your website 
has that much traffic...


And I hope not every request will take 100msecs!
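
That back-of-the-envelope math can be written down as a tiny 
helper (a hypothetical sketch, not part of serverino):

```
// Capacity of a blocking worker pool, assuming every request
// occupies one worker for its whole duration.
ulong pagesPerHour(ulong workers, ulong msecsPerRequest)
{
    return workers * (1000 / msecsPerRequest) * 3600;
}

unittest
{
    // 20 workers * 10 requests/s each = 200 pages/s = 720k pages/h
    assert(pagesPerHour(20, 100) == 720_000);
}
```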



Instead of using a lot of different UDAs to set routing rules, 
you can simply write them in your endpoint's body and exit 
from it to pass to the next endpoint.


My experience is that exhaustive matching is easier to reason 
about at larger scale.


Yes, but exactly the same thing can be done without UDAs.

```
@endpoint void my_end(Request r, Output o)
{
    if (r.uri != "/asd") // or whatever you want: regex, or checking another field
        return; // exit: the request falls through to the next endpoint
}
```

This is just like:

```
@matchuda(uri, "/asd") void my_end() { ... }
```

What's the difference? The first one is much more flexible, IMHO.


Please help me test it; I'm looking forward to receiving 
your shiny new issues on GitHub.


I noticed it has zero unittests; that is probably a good place 
to start.


Of course! They will come for sure. :)

Andrea



Re: Release: serverino - please destroy it.

2022-05-09 Thread Sebastiaan Koppe via Digitalmars-d-announce

On Sunday, 8 May 2022 at 21:32:42 UTC, Andrea Fontana wrote:
Every request is processed by a worker running in an isolated 
process, no fibers/threads, sorry (or thanks?)


I did some tests and the performance sounds good: on a local 
machine it can handle more than 100_000 reqs/sec for a simple 
page containing just "hello world!". Of course that's not a good 
benchmark, if you can help me with other benchmarks it would be 
much appreciated (a big thanks to Tomáš Chaloupka who did some 
tests!)


Typically server applications are IO heavy. I expect your 
isolated-process approach to break down with that kind of work.


As an example, how many requests per second can you manage if all 
requests have to wait 100 msecs?


For non critical workload you will probably still hit good enough 
performance though.


Instead of using a lot of different UDAs to set routing rules, 
you can simply write them in your endpoint's body and exit from 
it to pass to the next endpoint.


My experience is that exhaustive matching is easier to reason 
about at larger scale.


Please help me testing it, I'm looking forward to receiving 
your shiny new issues on github.


I noticed it has zero unittests, that is probably a good place to 
start.


Re: Release: serverino - please destroy it.

2022-05-09 Thread Andrea Fontana via Digitalmars-d-announce

On Monday, 9 May 2022 at 19:09:40 UTC, Guillaume Piolat wrote:

On Sunday, 8 May 2022 at 21:32:42 UTC, Andrea Fontana wrote:

Hello!

I've just released serverino. It's a small & ready-to-go 
http/https server.


Dub package: https://code.dlang.org/packages/serverino

Andrea


Looks very useful, congratulations!


Thank you. Looking forward to getting feedback, bug reports and 
help :)


Andrea


Re: Release: serverino - please destroy it.

2022-05-09 Thread Guillaume Piolat via Digitalmars-d-announce

On Sunday, 8 May 2022 at 21:32:42 UTC, Andrea Fontana wrote:

Hello!

I've just released serverino. It's a small & ready-to-go 
http/https server.


Dub package: https://code.dlang.org/packages/serverino

Andrea


Looks very useful, congratulations!


Re: Release: serverino - please destroy it.

2022-05-09 Thread H. S. Teoh via Digitalmars-d-announce
On Mon, May 09, 2022 at 04:48:11PM +, Vladimir Panteleev via 
Digitalmars-d-announce wrote:
> On Monday, 9 May 2022 at 16:37:15 UTC, H. S. Teoh wrote:
> > Why is memory protection the only way to implement write barriers in
> > D?
> 
> Well, it's the only way I know of without making it a major
> backwards-incompatible change. The main restriction in this area is
> that it must continue working with code written in other languages,
> and generally not affect the ABI drastically.

Ah, gotcha.  Yeah, I don't think such an approach would be fruitful (it
was worth a shot, though!).  If D were ever to get write barriers,
they'd have to be in some other form, probably more intrusive in terms
of backwards-compatibility and ABI.


T

-- 
Curiosity kills the cat. Moral: don't be the cat.


Re: Release: serverino - please destroy it.

2022-05-09 Thread Vladimir Panteleev via Digitalmars-d-announce

On Monday, 9 May 2022 at 16:37:15 UTC, H. S. Teoh wrote:
Why is memory protection the only way to implement write 
barriers in D?


Well, it's the only way I know of without making it a major 
backwards-incompatible change. The main restriction in this area 
is that it must continue working with code written in other 
languages, and generally not affect the ABI drastically.




Re: Release: serverino - please destroy it.

2022-05-09 Thread H. S. Teoh via Digitalmars-d-announce
On Mon, May 09, 2022 at 05:55:39AM +, Vladimir Panteleev via 
Digitalmars-d-announce wrote:
> On Monday, 9 May 2022 at 00:25:43 UTC, H. S. Teoh wrote:
> > In the past, the argument was that write barriers represented an
> > unacceptable performance hit to D code.  But I don't think this has
> > ever actually been measured. (Or has it?)  Maybe somebody should
> > make a dmd fork that introduces write barriers, plus a generational
> > GC (even if it's a toy, proof-of-concept-only implementation) to see
> > if the performance hit is really as bad as believed to be.
> 
> Implementing write barriers in the compiler (by instrumenting code)
> means that you're no longer allowed to copy pointers to managed memory
> in non-D code. This is a stricter assumption that the current ones we
> have; for instance, copying a struct (which has indirections) with
> memcpy would be forbidden.

Hmm, true.  That puts a big damper on the possibilities... OTOH, if this
could be made an optional feature, then code that we know doesn't need,
e.g., passing pointers to C code, can take advantage of possibly better
GC strategies.


T

-- 
English has the lovely word "defenestrate", meaning "to execute by throwing 
someone out a window", or more recently "to remove Windows from a computer and 
replace it with something useful". :-) -- John Cowan


Re: Release: serverino - please destroy it.

2022-05-09 Thread H. S. Teoh via Digitalmars-d-announce
On Mon, May 09, 2022 at 05:52:30AM +, Vladimir Panteleev via 
Digitalmars-d-announce wrote:
> On Sunday, 8 May 2022 at 23:44:42 UTC, Ali Çehreli wrote:
> > While we are on topic :) and as I finally understood what
> > generational GC is[1], are there any fundamental issues with D to
> > not use one?
> 
> I implemented one a long time ago. The only way to get write barriers
> with D is memory protection. It worked, but unfortunately the write
> barriers caused a severe performance penalty.

Why is memory protection the only way to implement write barriers in D?


> It's possible that it might be viable with more tweaking, or in
> certain applications where most of the heap is not written to; I did
> not experiment a lot with it.

Interesting data point, in any case.


T

-- 
The early bird gets the worm. Moral: ewww...


Re: Release: serverino - please destroy it.

2022-05-09 Thread Vladimir Panteleev via Digitalmars-d-announce

On Monday, 9 May 2022 at 00:25:43 UTC, H. S. Teoh wrote:
In the past, the argument was that write barriers represented 
an unacceptable performance hit to D code.  But I don't think 
this has ever actually been measured. (Or has it?)  Maybe 
somebody should make a dmd fork that introduces write barriers, 
plus a generational GC (even if it's a toy, 
proof-of-concept-only implementation) to see if the performance 
hit is really as bad as believed to be.


Implementing write barriers in the compiler (by instrumenting 
code) means that you're no longer allowed to copy pointers to 
managed memory in non-D code. This is a stricter assumption than 
the current ones we have; for instance, copying a struct (which 
has indirections) with memcpy would be forbidden.
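A hedged sketch of the problem (the barrier function and names below are hypothetical, just to illustrate why compiler-inserted barriers and raw memory copies don't mix):

```d
// Hypothetical sketch: if the compiler rewrote every pointer store into
// "barrier + store", a raw memcpy (from C, or from low-level D code)
// would still copy pointer bytes with no barrier call, so a generational
// GC could miss the newly created reference.
import core.stdc.string : memcpy;

struct Node { Node* next; int value; }

__gshared int barrierCalls; // stands in for the GC's remembered-set hook

void storeWithBarrier(ref Node* slot, Node* p)
{
    ++barrierCalls; // instrumented store: the GC gets notified
    slot = p;
}

void main()
{
    Node child;
    Node parent;

    storeWithBarrier(parent.next, &child); // barrier fires
    assert(barrierCalls == 1);

    Node copy;
    memcpy(&copy, &parent, Node.sizeof);   // pointer copied, no barrier
    assert(copy.next is &child);
    assert(barrierCalls == 1);             // the GC never heard about it
}
```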




Re: Release: serverino - please destroy it.

2022-05-08 Thread Vladimir Panteleev via Digitalmars-d-announce

On Sunday, 8 May 2022 at 23:44:42 UTC, Ali Çehreli wrote:
While we are on topic :) and as I finally understood what 
generational GC is[1], are there any fundamental issues with D 
to not use one?


I implemented one a long time ago. The only way to get write 
barriers with D is memory protection. It worked, but 
unfortunately the write barriers caused a severe performance 
penalty. It's possible that it might be viable with more 
tweaking, or in certain applications where most of the heap is 
not written to; I did not experiment a lot with it.




Re: Release: serverino - please destroy it.

2022-05-08 Thread Bruce Carneal via Digitalmars-d-announce

On Monday, 9 May 2022 at 00:32:33 UTC, Ali Çehreli wrote:

On 5/8/22 17:25, H. S. Teoh wrote:

> somebody should make a dmd fork that introduces write barriers, plus a
> generational GC (even if it's a toy, proof-of-concept-only
> implementation) to see if the performance hit is really as bad as
> believed to be.

Ooh! DConf is getting even more interesting. :o)

Ali


A helpful paper: "Getting to Go: The Journey of Go's garbage 
collector".


Positive highlights: 1) non-copying 2) no read barriers

Less friendly: 1) write barriers 2) GC aware fiber scheduler 3) 
other???


It would be some (huge?) amount of work, but porting/enabling an 
opt-in Go-style low-latency GC could be a big enabler for the 
casual/soft "real time" crowd.


Here's a link to the paper:
https://go.dev/blog/ismmkeynote




Re: Release: serverino - please destroy it.

2022-05-08 Thread Ali Çehreli via Digitalmars-d-announce

On 5/8/22 17:25, H. S. Teoh wrote:

> somebody should make a dmd
> fork that introduces write barriers, plus a generational GC (even if
> it's a toy, proof-of-concept-only implementation) to see if the
> performance hit is really as bad as believed to be.

Ooh! DConf is getting even more interesting. :o)

Ali



Re: Release: serverino - please destroy it.

2022-05-08 Thread H. S. Teoh via Digitalmars-d-announce
On Mon, May 09, 2022 at 12:10:53PM +1200, rikki cattermole via 
Digitalmars-d-announce wrote:
> On 09/05/2022 11:44 AM, Ali Çehreli wrote:
> > While we are on topic :) and as I finally understood what
> > generational GC is[1], are there any fundamental issues with D to
> > not use one?
> 
> This is not a D issue, it's an implementation one.
> 
> We don't have write barriers, that's it.
> 
> Make them opt-in and we can have more advanced GCs.
[...]

In the past, the argument was that write barriers represented an
unacceptable performance hit to D code.  But I don't think this has ever
actually been measured. (Or has it?)  Maybe somebody should make a dmd
fork that introduces write barriers, plus a generational GC (even if
it's a toy, proof-of-concept-only implementation) to see if the
performance hit is really as bad as believed to be.


T

-- 
The best way to destroy a cause is to defend it poorly.


Re: Release: serverino - please destroy it.

2022-05-08 Thread rikki cattermole via Digitalmars-d-announce



On 09/05/2022 11:44 AM, Ali Çehreli wrote:
While we are on topic :) and as I finally understood what generational 
GC is[1], are there any fundamental issues with D to not use one?


This is not a D issue, it's an implementation one.

We don't have write barriers, that's it.

Make them opt-in and we can have more advanced GCs.

Oh and book recommendation for the subject: 
https://www.amazon.com/Garbage-Collection-Handbook-Management-Algorithms/dp/1420082795


Re: Release: serverino - please destroy it.

2022-05-08 Thread Ali Çehreli via Digitalmars-d-announce

On 5/8/22 16:10, Adam Ruppe wrote:
> On Sunday, 8 May 2022 at 22:09:37 UTC, Ali Çehreli wrote:
>> That effectively uses multiple GCs. I always suspected that approach
>> would provide better latency.
>
> My cgi.d has used some fork approaches for a very long time since it is
> a very simple way to spread this out, it works quite well.

While we are on topic :) and as I finally understood what generational 
GC is[1], are there any fundamental issues with D to not use one?


Ali

[1] Translating from what I wrote in the Turkish forum, here is my 
current understanding: Let's not waste time checking all allocated 
memory at every GC cycle. Instead, let's be smarter and assume that 
memory that survived through this GC cycle will survive the next cycle 
as well.


Let's put those memory blocks aside to be reconsidered only when we 
really have to. This effectively makes the GC only play with short-lived 
objects, reducing the amount of memory touched. This would make some 
objects live forever, but GC never promises that all finalizers will be 
executed.
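That mental model can be sketched as a toy (this is only an illustration of the promotion idea, not how any real collector, let alone D's, is implemented):

```d
// Toy generational model: a minor collection frees unreachable *young*
// objects, promotes young survivors to the old generation, and never
// scans old objects at all -- which is why some objects "live forever".
struct Obj { bool reachable; bool old; }

Obj[] minorCollect(Obj[] heap)
{
    Obj[] survivors;
    foreach (o; heap)
    {
        if (o.old)       { survivors ~= o; continue; } // never scanned
        if (!o.reachable)  continue;                   // freed
        o.old = true;                                  // promoted
        survivors ~= o;
    }
    return survivors;
}

void main()
{
    // One reachable young, one garbage young, one garbage *old* object.
    auto heap = [Obj(true, false), Obj(false, false), Obj(false, true)];
    auto after = minorCollect(heap);

    // The young garbage is gone; the old garbage survives untouched.
    assert(after.length == 2);
    assert(after[0].old && after[1].old);
}
```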




Re: Release: serverino - please destroy it.

2022-05-08 Thread Adam Ruppe via Digitalmars-d-announce

On Sunday, 8 May 2022 at 22:09:37 UTC, Ali Çehreli wrote:
That effectively uses multiple GCs. I always suspected that 
approach would provide better latency.


My cgi.d has used some fork approaches for a very long time since 
it is a very simple way to spread this out, it works quite well.


Re: Release: serverino - please destroy it.

2022-05-08 Thread Andrea Fontana via Digitalmars-d-announce

On Sunday, 8 May 2022 at 22:09:37 UTC, Ali Çehreli wrote:
Congratulations! :) Looking forward to watching your 
presentation at DConf... ;)


I wish I was able to speak publicly in English in front of an 
audience :)




On 5/8/22 14:32, Andrea Fontana wrote:

> Every request is processed by a worker running in an isolated process,
> no fibers/threads, sorry (or thanks?)

That effectively uses multiple GCs. I always suspected that 
approach would provide better latency.


I think it depends on what your server is doing, anyway.



> sending opened file descriptors between processes thru sockets


I sent a pull request (merged!) for druntime to make this work on 
macOS too!




Sweet!

Ali





Re: Release: serverino - please destroy it.

2022-05-08 Thread Ali Çehreli via Digitalmars-d-announce
Congratulations! :) Looking forward to watching your presentation at 
DConf... ;)


On 5/8/22 14:32, Andrea Fontana wrote:

> Every request is processed by a worker running in an isolated process,
> no fibers/threads, sorry (or thanks?)

That effectively uses multiple GCs. I always suspected that approach 
would provide better latency.


> sending opened file descriptors between processes thru sockets

Sweet!

Ali



Re: Release: serverino - please destroy it.

2022-05-08 Thread Andrea Fontana via Digitalmars-d-announce

On Sunday, 8 May 2022 at 21:32:42 UTC, Andrea Fontana wrote:

[...]
Andrea


Whoops, I forgot a couple of things. This was tested on Linux 
only, but it should work fine on other POSIX systems (macOS 
included!).


I don't have Windows, but I think you need WSL to run it, since 
I'm using a lot of strange POSIX tricks to keep performance at a 
good level (like sending opened file descriptors between 
processes through sockets).


If you can test it on windows with WSL, that would be appreciated 
a lot!


Andrea