Hi,

On Tue, Nov 21, 2017 at 02:13:24AM +0100, PiBa-NL wrote:
> Hi List,
> 
> I've got a startup script that essentially looks like the one below #1# 
> (simplified..)
> When configured with master-worker, the first parent process 2926 as 
> seen in #2# keeps running.

Yes, that's the expected behavior: master-worker was designed to replace the
systemd-wrapper, and the systemd way to run a daemon is to keep it in the
foreground and let systemd capture its standard output so it can catch the
errors there.

However, it was also designed for normal people who want to daemonize,
so you can combine -W with -D, which will daemonize the master.
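
For instance, a startup line along these lines detaches the master (the
config and pidfile paths are just placeholders for illustration):

```shell
# -W: master-worker mode, -D: daemonize the master
# (paths below are illustrative, adjust for your system)
haproxy -W -D -f /etc/haproxy/haproxy.cfg -p /var/run/haproxy.pid
```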

> Doing the same without master-worker, the daemon properly detaches and 
> the parent exits returning possible warnings/errors..
> 
> When the second php exec line in #1# with "> /dev/null" is used instead 
> it does succeed.
> 
> While it's running, the stats page does get served by the workers..
> 
> To avoid a possible issue with pollers (see my previous mail thread) I've 
> tried to add the -dk but still the first started parent process stays 
> alive..
> And if terminated with a ctrl+c it stops the other master-worker 
> processes with it.. as can be seen in #3# (was from a different attempt 
> so different processid's.).

Well, that's expected behavior too: the master will forward the ctrl-c
signal to the workers and exit once all the workers are dead.

> 
> 'truss' output (again with different pids..): 
> https://0bin.net/paste/f2p8uRU1t2ebZjkL#iJOBdPnR8mCmRrtGGkEaqsmQXfbHmQ56vQHdseh1x8U
> 
> If desired I can gather the htop/truss/console output information from a 
> single run..
> 
> Any other info i can provide? Or should i change my script to not expect 
> any console output from haproxy? In my original script the 'exec' is 
> called with 2 extra parameters that return the console output and exit 
> status..
> p.s.
> how should configuration/startup errors be 'handled' when using 
> master-worker?

I'm not sure I understand the issue here: the errors are still displayed upon
startup like in any other haproxy mode, there is really no change there.
I assume the only problem with your script is the daemonization, which you can
achieve by combining -W and -D.

> A kill -1 itself won't tell if a newly configured bind cannot find the 
> interface address to bind to? and a -c beforehand won't find such a problem.

Upon a reload (SIGUSR2 on the master), the master will try to parse the
configuration again and start the listeners. If that fails, the master will
re-exec itself in a wait() mode and won't kill the previous workers; the
parsing/bind error should be displayed on the master's standard output.
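
The reload step above can be sketched as a small helper; the pidfile
convention (written with -p at startup) is an assumption for illustration,
not something from the original mail:

```shell
#!/bin/sh
# Sketch of a reload helper for a daemonized master-worker setup.
# Assumes the master pid was written to a pidfile via "haproxy -W -D -p ...".
reload_haproxy() {
    pidfile="$1"
    if [ ! -r "$pidfile" ]; then
        echo "pidfile not readable: $pidfile" >&2
        return 1
    fi
    # SIGUSR2 tells the master to re-parse the config and respawn workers.
    kill -USR2 "$(head -n 1 "$pidfile")"
}
```

Note that because the signal is asynchronous, a zero exit status here only
means the signal was delivered, not that the reload succeeded.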

> The end result that nothing is running, and the error causing that, 
> however, should be 'caught' somehow for logging? Should haproxy itself 
> log it to syslog? But how will the startup script know to notify the 
> user of a failure?

Well, the master doesn't do syslog, because there might be no syslog in your
configuration. I think you should try the systemd way and log the standard
output.
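
For reference, the systemd way usually means a unit file roughly like the
sketch below; the unit name and paths are placeholders, and -Ws
(master-worker with systemd sd_notify support, introduced in 1.8) should be
checked against your version:

```
[Unit]
Description=HAProxy Load Balancer
After=network.target

[Service]
Type=notify
# -Ws keeps the master in the foreground and notifies systemd;
# startup and reload errors end up in the journal.
ExecStart=/usr/sbin/haproxy -Ws -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid
ExecReload=/bin/kill -USR2 $MAINPID
KillMode=mixed

[Install]
WantedBy=multi-user.target
```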

> Would it be possible when starting haproxy with -sf <PID> it would tell 
> if the (original?) master was successful in reloading the config / 
> starting new workers or how should this be done?

That may be badly documented, but you are not supposed to use -sf with
master-worker: you just have to send SIGUSR2 to the master and it will parse
the configuration again, launch new workers, and smoothly kill the previous
ones.

Unfortunately signals are asynchronous, and we don't yet have a way to return
a bad exit code upon reload. But we might implement a synchronous
configuration notification in the future, using the admin socket for example.

> Currently a whole new set of master-worker processes seems to take over..

Well, I suppose that's because you launched a new master-worker with -sf; it's
not supposed to be used that way, but it should work too if you don't mind
having a new PID.


> Regards,
> PiBa-NL / Pieter
> 
 
Best Regards,

-- 
William Lallemand
