Re: [systemd-devel] Zero downtime restart of Web application (Nginx, Puma, JRuby)

2016-07-15 Thread David Timothy Strauss
On Thu, Jul 14, 2016, 19:07 Kai Hendry wrote:

> I would love to see those 10 lines of shell you claimed, but I think you
> might be underestimating the fine work that went into Dokku!
>

It's not so much underestimating the work that went into Dokku as it is
leveraging what systemd and a tool like haproxy already provide for services.

Here's what a script would do with no socket activation, assuming you're
sending traffic to the services with a tool like haproxy and have an
interface like a control socket [1]:

   1. Tell haproxy to stop sending traffic to service instance A.
   2. systemctl restart instance-a.service
   3. Tell haproxy to start sending traffic to service instance A and stop
   sending it to B.
   4. systemctl restart instance-b.service
   5. Tell haproxy to start sending traffic to service instance B.

Alternatively, you could track the state of a flip-flop and stabilize on only
one service instance at a time after the restart.
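
For illustration, here is a minimal sketch of such a script, assuming a
haproxy backend named "app" with servers "instance-a" and "instance-b",
the admin socket at /var/run/haproxy.sock, and per-instance ports for a
readiness check (all of these names are assumptions, not anything
haproxy or systemd mandates):

    #!/bin/sh
    # Sketch only: backend/server names, socket path and ports are assumptions.
    set -e

    hactl() {
        # Send a command to haproxy's admin socket (see [1]).
        echo "$1" | socat stdio UNIX-CONNECT:/var/run/haproxy.sock
    }

    wait_ready() {
        # Crude readiness check against the instance's local port.
        until curl -sf "http://127.0.0.1:$1/" >/dev/null; do sleep 1; done
    }

    hactl "disable server app/instance-a"     # 1. drain A
    systemctl restart instance-a.service      # 2. restart A
    wait_ready 8081
    hactl "enable server app/instance-a"      # 3. traffic back to A,
    hactl "disable server app/instance-b"     #    drain B
    systemctl restart instance-b.service      # 4. restart B
    wait_ready 8082
    hactl "enable server app/instance-b"      # 5. traffic back to B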

[1] https://cbonte.github.io/haproxy-dconv/configuration-1.5.html#9.2


Re: [systemd-devel] Zero downtime restart of Web application (Nginx, Puma, JRuby)

2016-07-15 Thread Kai Krakow
On Sat, 18 Jun 2016 13:56:03 +0200, Paul Menzel wrote:

> Dear systemd folks,
> 
> 
> the setup is as follows.
> 
> Nginx is used as the Web server, which communicates with a Ruby on
> Rails application over a socket. Puma [1] is used as the application
> server.
> 
> Nginx and Puma are managed by systemd service files.
> 
> If a new version of the application is installed, the goal is that
> there is no downtime. That is, until the application with the new code
> is ready to serve requests, the old application keeps answering them;
> not only are no requests lost, but during the restart visitors do not
> have to wait any longer than usual.
> 
> In this case, JRuby is used, which means that starting the application
> with `sudo systemctl start web-application.service` takes more than 30
> seconds.
> 
> So, `sudo systemctl restart web-application.service` is not enough, as
> Puma takes some time to start. (The socket activation described in the
> Puma systemd documentation [2] only ensures that no requests are lost.
> The visitor still has to wait.)
> 
> Is that possible by just using systemd, or is a load balancer like
> HAProxy or a special NGINX configuration and service file templates
> needed?
> 
> My current thinking is that a service file template for the Puma
> application server is used. The new code is then started in parallel,
> and once it is ready, it “takes over the socket”. (No idea whether
> NGINX would need to be restarted.) Then the old version of the code is
> stopped. (I don’t know how to model that last step/dependency.)
> 
> What drawbacks does the above method have? Is it implementable?
> 
> How do you manage such a setup?

This is not really systemd-specific. Systemd solves zero downtime only
through socket activation, which is not exactly what you expect.

We are using mod_passenger with nginx to provide zero downtime during
application deployment; background workers (job servers etc.) are
managed by systemd and socket activation.
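
For reference, the socket-activation pattern is roughly this (unit
names and paths are placeholders): systemd keeps the listening socket
open, so connections queue during a restart and are not lost, but
clients still wait for the new process to come up:

    # my-worker.socket (sketch only)
    [Socket]
    ListenStream=/run/my-worker.sock

    [Install]
    WantedBy=sockets.target

    # my-worker.service (sketch only)
    [Service]
    # The process receives the already-bound socket from systemd
    # (LISTEN_FDS); it must support that kind of socket passing.
    ExecStart=/usr/local/bin/my-worker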

Passenger, however, does not support JRuby AFAIK. But you may want to
look at how they implement zero downtime: it is essentially a proxy
that is switched over to the new application instance as soon as it is
up and running. You could do something similar: deploy to a new
installation; when it is ready, rewrite your nginx config to point to
the new instance, reload nginx, then gracefully stop the old instance.
It is not as well integrated as what Passenger does, but it can be
refined. However, nothing of this is systemd-specific, except that you
may still want to use socket activation in some places. Stopping and
starting instances and reloading nginx should be part of your
deployment process. If controlled with systemd, you can use service
templates like my-application@.service and then start
my-application@instance-201607150001.service and stop
my-application@instance-201607140003.service. You can use the instance
name to set up sockets, proxies, etc.
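
A rough sketch of what that deployment step could look like with such
a template (unit names, socket paths and the nginx upstream snippet are
only placeholders, nothing nginx or systemd prescribes):

    #!/bin/sh
    # Sketch only: unit names, paths and the upstream file are placeholders.
    set -e

    NEW="instance-$(date +%Y%m%d%H%M)"
    # The currently active instance name, recorded by the previous deploy.
    OLD="$(cat /run/my-application/current)"

    # Start the new instance alongside the old one.
    systemctl start "my-application@${NEW}.service"

    # Wait until the new instance answers on its per-instance socket
    # (requires a curl new enough to support --unix-socket).
    until curl -sf --unix-socket "/run/my-application/${NEW}.sock" \
            http://localhost/ >/dev/null; do
        sleep 1
    done

    # Point nginx at the new instance and reload it gracefully.
    printf 'upstream my_application { server unix:/run/my-application/%s.sock; }\n' \
        "$NEW" > /etc/nginx/conf.d/my-application-upstream.conf
    systemctl reload nginx.service

    # Gracefully stop the old instance and remember the new one.
    systemctl stop "my-application@${OLD}.service"
    echo "$NEW" > /run/my-application/current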

-- 
Regards,
Kai

Replies to list-only preferred.




Re: [systemd-devel] Zero downtime restart of Web application (Nginx, Puma, JRuby)

2016-07-14 Thread Kai Hendry
On Thu, 14 Jul 2016, at 05:07 PM, David Timothy Strauss wrote:
> Dokku would be about 5-10 lines of shell script with services running in
> systemd.

I would love to see those 10 lines of shell you claimed, but I think you
might be underestimating the fine work that went into Dokku!

Cheers,


Re: [systemd-devel] Zero downtime restart of Web application (Nginx, Puma, JRuby)

2016-07-14 Thread David Timothy Strauss
Dokku would be about 5-10 lines of shell script with services running in
systemd.

On Wed, Jul 13, 2016, 20:41 Kai Hendry wrote:

> On Sat, 18 Jun 2016, at 07:56 PM, Paul Menzel wrote:
> > Is that possible by just using systemd, or is a load balancer like
> > HAProxy or a special NGINX configuration and service file templates
> > needed?
>
> I'm looking for answers too and the best switcheroo I've found so far is
> http://dokku.viewdocs.io/dokku/deployment/zero-downtime-deploys/ which
> leverages Docker and it's not at all integrated with systemd.
>
> I guess a systemd-integrated solution would leverage machinectl,
> journalctl, and more powerful service control, but I haven't seen an
> orchestrated deployment like that yet. Maybe CoreOS has something; I am
> not sure.


Re: [systemd-devel] Zero downtime restart of Web application (Nginx, Puma, JRuby)

2016-07-13 Thread Kai Hendry
On Sat, 18 Jun 2016, at 07:56 PM, Paul Menzel wrote:
> Is that possible by just using systemd, or is a load balancer like
> HAProxy or a special NGINX configuration and service file templates
> needed?

I'm looking for answers too and the best switcheroo I've found so far is
http://dokku.viewdocs.io/dokku/deployment/zero-downtime-deploys/ which
leverages Docker and it's not at all integrated with systemd.

I guess a systemd-integrated solution would leverage machinectl,
journalctl, and more powerful service control, but I haven't seen an
orchestrated deployment like that yet. Maybe CoreOS has something; I am
not sure.


Re: [systemd-devel] Zero downtime restart of Web application (Nginx, Puma, JRuby)

2016-07-06 Thread David Timothy Strauss
You either need a load balancer (less elegant) or need to make use of the
Linux kernel's SO_REUSEPORT option so the new application can bind to the
same port as the old one (at which point the old application should unbind
the port and shut itself down).


[systemd-devel] Zero downtime restart of Web application (Nginx, Puma, JRuby)

2016-06-18 Thread Paul Menzel
Dear systemd folks,


the setup is as follows.

Nginx is used as the Web server, which communicates with a Ruby on
Rails application over a socket. Puma [1] is used as the application
server.

Nginx and Puma are managed by systemd service files.

If a new version of the application is installed, the goal is that
there is no downtime. That is, until the application with the new code
is ready to serve requests, the old application keeps answering them;
not only are no requests lost, but during the restart visitors do not
have to wait any longer than usual.

In this case, JRuby is used, which means that starting the application
with `sudo systemctl start web-application.service` takes more than 30
seconds.

So, `sudo systemctl restart web-application.service` is not enough, as
Puma takes some time to start. (The socket activation described in the
Puma systemd documentation [2] only ensures that no requests are lost.
The visitor still has to wait.)

Is that possible by just using systemd, or is a load balancer like
HAProxy or a special NGINX configuration and service file templates
needed?

My current thinking is that a service file template for the Puma
application server is used. The new code is then started in parallel,
and once it is ready, it “takes over the socket”. (No idea whether
NGINX would need to be restarted.) Then the old version of the code is
stopped. (I don’t know how to model that last step/dependency.)
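
For concreteness, I imagine the template could look roughly like this
(unit name, paths, and the Puma invocation are only guesses on my
part):

    # /etc/systemd/system/web-application@.service  (sketch only)
    [Unit]
    Description=Puma application server (instance %i)
    After=network.target

    [Service]
    Type=simple
    User=deploy
    WorkingDirectory=/srv/web-application/releases/%i
    # Each instance binds its own UNIX socket, which NGINX is pointed at
    # once the instance is ready. /run/web-application/ is assumed to
    # exist (for example created via tmpfiles.d).
    ExecStart=/usr/bin/bundle exec puma -e production -b unix:///run/web-application/%i.sock
    Restart=on-failure

    [Install]
    WantedBy=multi-user.target

The new release would then be started as, say,
web-application@20160618-1.service while the old instance keeps
serving; NGINX would be pointed at the new socket and reloaded, and the
old instance stopped afterwards.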

What drawbacks does the above method have? Is it implementable?

How do you manage such a setup?


Thanks,

Paul


[1] http://puma.io/
[2] https://github.com/puma/puma/blob/master/docs/systemd.md
