Dear systemd folks,

The setup is as follows.

Nginx is used as the web server; it communicates with a Ruby on
Rails application over a socket. Puma [1] is used as the application
server.

Nginx and Puma are managed by systemd service files.
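On the nginx side this is just a plain upstream pointing at that
socket, roughly like this (the socket path is only an example and has
to match whatever Puma binds to):

    upstream puma {
        # example path; must match the socket Puma listens on
        server unix:/run/puma.sock fail_timeout=0;
    }

    server {
        listen 80;

        location / {
            proxy_set_header Host $http_host;
            proxy_pass http://puma;
        }
    }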

When a new version of the application is installed, the goal is zero
downtime. That means: as long as the application with the new code is
not yet ready to serve requests, the old application keeps answering
them. So not only should no requests be lost, but during a restart
visitors should not have to wait any longer than usual.

In this case, JRuby is used, which means that starting the application
with `sudo systemctl start web-application.service` takes more than 30
seconds.

So, `sudo systemctl restart web-application.service` is not enough, as
Puma takes some time to start. (The socket activation described in the
Puma systemd documentation [2] only ensures that no requests are lost;
the visitor still has to wait.)

Is that possible with systemd alone, or does it require a load
balancer like HAProxy, a special NGINX configuration, or service file
templates?

My current thinking is to use a service file template for the Puma
application server. The new code is then started in parallel and, once
it is ready, “takes over the socket”. (I have no idea whether NGINX
would need to be restarted.) Then the old version of the code is
stopped. (I don’t know how to model that last step/dependency.)
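To make that more concrete, this is the rough shape I have in mind;
the unit names, paths and the %i usage are pure assumptions on my
part, not something I have working:

    # web-application@.service (template, excerpt)
    [Unit]
    Description=Puma for release %i
    # this is the part I am unsure about: can both the old and the
    # new instance be tied to the same socket unit at the same time?
    Requires=web-application.socket

    [Service]
    # each instance runs the code of one deployed release
    ExecStart=/srv/app/releases/%i/bin/bundle exec puma -C config/puma.rb
    Restart=always

The deployment would then be something like:

    # start the new release next to the old one
    sudo systemctl start web-application@new-release.service

    # wait until the new instance actually answers requests
    # (health check, sd_notify, ... -- unclear to me how to model this)

    # only then stop the old release
    sudo systemctl stop web-application@old-release.service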

What drawbacks does the above method have? Is it implementable?

How do you manage such a setup?


Thanks,

Paul


[1] http://puma.io/
[2] https://github.com/puma/puma/blob/master/docs/systemd.md
