Re: [libmicrohttpd] MHD threading models: what model is similar to Least Connected?

2016-12-05 Thread silvioprog
Master,

I'm going to test it in C, simulating my scenario (which is in Pascal); the
idea of using a list of daemons is really awesome.

Thanks a lot for the full example; I'll probably be back with some news!
:-)

On Fri, Dec 2, 2016 at 4:40 PM, Evgeny Grin  wrote:

> You will need something like:
>
> -
>   size_t N = 0; /* number of inited daemons */
>   size_t n = 0; /* current worker */
>   MHD_Daemon* daemons[MAX_DAEMONS];
>   daemons[N++] = MHD_start_daemon (MHD_USE_NO_LISTEN_SOCKET |
> MHD_USE_SELECT_INTERNALLY, );
>   while (processingAllowed())
>   {
> int fd = accept (listen_fd, &addr, &addrlen);
> if (-1 == fd)
>   continue;
>
> if (!isSomeFunctionOfMyAppResponding())
> {
>   if (N < MAX_DAEMONS)
>   { /* Add new daemon if space is available */
> daemons[N++] = MHD_start_daemon (MHD_USE_NO_LISTEN_SOCKET |
> MHD_USE_SELECT_INTERNALLY, );
>   }
>   n++; /* Switch to next worker */
>   if (MAX_DAEMONS == n)
>   {
> n = 0; /* Return processing to first daemon */
>   }
> }
> MHD_add_connection (daemons[n], fd, &addr, addrlen);
>   }
> -
>
> "Slow" daemons will continue processing their connections; when a slowdown
> is detected, you switch new connections to the next daemon.
>
> --
> Best Wishes,
> Evgeny Grin


-- 
Silvio Clécio


Re: [libmicrohttpd] MHD threading models: what model is similar to Least Connected?

2016-12-02 Thread Evgeny Grin
You will need something like:

-
  size_t N = 0; /* number of inited daemons */
  size_t n = 0; /* current worker */
  MHD_Daemon* daemons[MAX_DAEMONS];
  daemons[N++] = MHD_start_daemon (MHD_USE_NO_LISTEN_SOCKET |
MHD_USE_SELECT_INTERNALLY, );
  while (processingAllowed())
  {
int fd = accept (listen_fd, &addr, &addrlen);
if (-1 == fd)
  continue;

if (!isSomeFunctionOfMyAppResponding())
{
  if (N < MAX_DAEMONS)
  { /* Add new daemon if space is available */
daemons[N++] = MHD_start_daemon (MHD_USE_NO_LISTEN_SOCKET |
MHD_USE_SELECT_INTERNALLY, );
  }
  n++; /* Switch to next worker */
  if (MAX_DAEMONS == n)
  {
n = 0; /* Return processing to first daemon */
  }
}
MHD_add_connection (daemons[n], fd, &addr, addrlen);
  }
-

"Slow" daemons will continue processing their connections; when a slowdown
is detected, you switch new connections to the next daemon.
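
For reference, here is a more complete, compilable elaboration of the sketch
above. It is not part of the original mail: processingAllowed(),
isSomeFunctionOfMyAppResponding() and the trivial handle_request handler are
placeholders to be replaced by application code, and error handling is kept
to a minimum.

-
#include <string.h>
#include <sys/socket.h>
#include <microhttpd.h>

#define MAX_DAEMONS 8

/* Placeholders from the sketch above; supplied by the application. */
extern int processingAllowed (void);
extern int isSomeFunctionOfMyAppResponding (void);

/* Trivial access handler; a real application would dispatch on url/method. */
static int
handle_request (void *cls, struct MHD_Connection *connection,
                const char *url, const char *method, const char *version,
                const char *upload_data, size_t *upload_data_size,
                void **con_cls)
{
  static const char page[] = "OK";
  struct MHD_Response *response;
  int ret;

  response = MHD_create_response_from_buffer (strlen (page), (void *) page,
                                              MHD_RESPMEM_PERSISTENT);
  ret = MHD_queue_response (connection, MHD_HTTP_OK, response);
  MHD_destroy_response (response);
  return ret;
}

/* Start one worker: internal select thread, no listen socket of its own. */
static struct MHD_Daemon *
start_worker (void)
{
  return MHD_start_daemon (MHD_USE_NO_LISTEN_SOCKET |
                           MHD_USE_SELECT_INTERNALLY,
                           0 /* port is unused without a listen socket */,
                           NULL, NULL, &handle_request, NULL,
                           MHD_OPTION_END);
}

/* Accept connections on listen_fd and hand them to the current worker,
   adding/switching workers whenever the application looks overloaded. */
void
run_dispatch_loop (int listen_fd)
{
  struct MHD_Daemon *daemons[MAX_DAEMONS];
  size_t N = 0; /* number of started daemons */
  size_t n = 0; /* daemon currently receiving new connections */

  daemons[N++] = start_worker ();
  if (NULL == daemons[0])
    return; /* could not start the first worker */

  while (processingAllowed ())
  {
    struct sockaddr_storage addr;
    socklen_t addrlen = sizeof (addr);
    int fd = accept (listen_fd, (struct sockaddr *) &addr, &addrlen);
    if (-1 == fd)
      continue;

    if (! isSomeFunctionOfMyAppResponding ())
    {
      if (N < MAX_DAEMONS)
      { /* add a new daemon while space is available */
        struct MHD_Daemon *d = start_worker ();
        if (NULL != d)
          daemons[N++] = d;
      }
      n++; /* switch to the next worker */
      if (n >= N)
        n = 0; /* wrap around to the first daemon */
    }
    /* Hand the accepted socket over to the selected daemon. */
    MHD_add_connection (daemons[n], fd, (struct sockaddr *) &addr, addrlen);
  }
}
-

Wrapping n at N (the number of daemons actually started) rather than at
MAX_DAEMONS keeps n pointing at a started daemon even if MHD_start_daemon()
ever fails.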

-- 
Best Wishes,
Evgeny Grin

On 02.12.2016 0:39, silvioprog wrote:
> On Thu, Dec 1, 2016 at 5:49 PM, Evgeny Grin wrote:
> 
> It's a basic result from the theory of mass telecommunication (teletraffic
> theory), developed with the first automatic telephone exchanges: in an
> overload situation, just reject some part of the incoming traffic, since
> retries only delay the end of the overload.
> 
> 
> Exactly. :-)
> 
> But instead of rejecting requests, it redirects them to a new server.
> 
> You can do it manually.
> Start MHD with MHD_USE_NO_LISTEN_SOCKET and poll the listen socket in your
> own thread. Call MHD_add_connection() when a new connection arrives.
> 
> 
> Hm I didn't know about MHD_add_connection()... it seems awesome. I'm
> going to check how to use it.
>  
> 
> As soon as you detect an "overload" of MHD, start a new MHD instance without
> a listen socket and use MHD_add_connection() with the new instance.
> 
> 
> This is the problem: how do I detect that a function of my application is
> not responding (overloaded), so that new requests are not redirected to it? :-/
>
> (this is a little bit funny: my app can't know that it is not responding,
> because it is not responding... but MHD/nginx can! :-D)
> 
> I think nginx uses some system call to check if the TCP destination
> (proxy) is responding, but I don't know how it does that. See my
> environment:
> 
> . nginx running on port 443; // in Least Connected mode
> . fastcgiapp1 rutting on port 9000; // primary
> . fastcgiapp2 rutting on port 9001. // backup
> 
> (both fastcgiapp instances are blocking and don't have any threading
> support, so I think nginx creates the required threads)
> 
> When fastcgiapp1 (primary) is not responding, fastcgiapp2 (backup) is used
> (nginx creates a new thread, so now I have two threads). Some time later
> (after a timeout) nginx tries to use fastcgiapp1 again (back to one thread).
> Nginx never redirects new requests to fastcgiapp1 while it is not
> responding... it uses fastcgiapp2; however, fastcgiapp1 is the primary app,
> so nginx retries it after some time.
> 
> Supposing this pseudo code:
> 
> static int ahc_echo(void * cls, struct MHD_Connection * connection ...
> other params) {
>   if (isSomeFunctionOfMyAppResponding()) {
> do something ...
>   } else {
> create a new thread with blocking server and finally do something ...
>   }
> }
> 
> MHD_start_daemon(*MHD_USE_SELECT_INTERNALLY* ...
> 
> The pseudo model above looks like MHD_USE_THREAD_PER_CONNECTION, but creates
> new threads only when really required.
> 
> --
> Best Wishes,
> Evgeny Grin
> 
> 
> -- 
> Silvio Clécio



Re: [libmicrohttpd] MHD threading models: what model is similar to Least Connected?

2016-12-02 Thread silvioprog
Oops,

In my pseudo code I meant:

static int ahc_echo(void * cls, struct MHD_Connection * connection ...
other params) {
  if (isSomeFunctionOfMyAppResponding()) {
do something ...
  } else {
create a new thread *without* blocking the server and finally do something
...
  }
}

MHD_start_daemon(*MHD_USE_SELECT_INTERNALLY* ...

^^'

On Thu, Dec 1, 2016 at 6:42 PM, silvioprog  wrote:

> Oops,
>
> On Thu, Dec 1, 2016 at 6:39 PM, silvioprog  wrote:
> [...]
>
>> . fastcgiapp1 rutting on port 9000; // primary
>> . fastcgiapp2 rutting on port 9001. // backup
>>
>
> "... running on port ..."
>
> Supposing this pseudo code:
>
>
> I meant "A pseudo model:"
>

-- 
Silvio Clécio


Re: [libmicrohttpd] MHD threading models: what model is similar to Least Connected?

2016-12-01 Thread silvioprog
Oops,

On Thu, Dec 1, 2016 at 6:39 PM, silvioprog  wrote:
[...]

> . fastcgiapp1 rutting on port 9000; // primary
> . fastcgiapp2 rutting on port 9001. // backup
>

"... running on port ..."

> Supposing this pseudo code:


I meant "A pseudo model:"

-- 
Silvio Clécio


Re: [libmicrohttpd] MHD threading models: what model is similar to Least Connected?

2016-12-01 Thread silvioprog
On Thu, Dec 1, 2016 at 5:49 PM, Evgeny Grin  wrote:

> It's a basic result from the theory of mass telecommunication (teletraffic
> theory), developed with the first automatic telephone exchanges: in an
> overload situation, just reject some part of the incoming traffic, since
> retries only delay the end of the overload.
>

Exactly. :-)

But instead of rejecting requests, it redirects them to a new server.

> You can do it manually.
> Start MHD with MHD_USE_NO_LISTEN_SOCKET and poll the listen socket in your
> own thread. Call MHD_add_connection() when a new connection arrives.
>

Hm I didn't know about MHD_add_connection()... it seems awesome. I'm going
to check how to use it.


> As soon as you detect an "overload" of MHD, start a new MHD instance without
> a listen socket and use MHD_add_connection() with the new instance.
>

This is the problem: how do I detect that a function of my application is not
responding (overloaded), so that new requests are not redirected to it? :-/

(this is a little bit funny: my app can't know that it is not responding,
because it is not responding... but MHD/nginx can! :-D)

I think nginx uses some system call to check if the TCP destination (proxy)
is responding, but I don't know how it does that. See my environment:

. nginx running on port 443; // in Least Connected mode
. fastcgiapp1 rutting on port 9000; // primary
. fastcgiapp2 rutting on port 9001. // backup

(both fastcgiapp instances are blocking and don't have any threading support,
so I think nginx creates the required threads)

When fastcgiapp1 (primary) is not responding, fastcgiapp2 (backup) is used
(nginx creates a new thread, so now I have two threads). Some time later
(after a timeout) nginx tries to use fastcgiapp1 again (back to one thread).
Nginx never redirects new requests to fastcgiapp1 while it is not
responding... it uses fastcgiapp2; however, fastcgiapp1 is the primary app,
so nginx retries it after some time.

Supposing this pseudo code:

static int ahc_echo(void * cls, struct MHD_Connection * connection ...
other params) {
  if (isSomeFunctionOfMyAppResponding()) {
do something ...
  } else {
create a new thread with blocking server and finally do something ...
  }
}

MHD_start_daemon(*MHD_USE_SELECT_INTERNALLY* ...

The pseudo model above looks like MHD_USE_THREAD_PER_CONNECTION, but creates
new threads only when really required.
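
For what it's worth, the pseudo code and the truncated MHD_start_daemon() call
correspond roughly to the following C sketch. This is only an illustration,
not the thread's final solution: isSomeFunctionOfMyAppResponding() is an
assumed application helper, port 8080 and the response texts are made up, and
the "busy" branch simply rejects with 503 instead of creating a thread.

#include <stdio.h>
#include <string.h>
#include <microhttpd.h>

/* Hypothetical application-level check; not part of the MHD API. */
extern int isSomeFunctionOfMyAppResponding (void);

static int
ahc_echo (void *cls, struct MHD_Connection *connection,
          const char *url, const char *method, const char *version,
          const char *upload_data, size_t *upload_data_size, void **con_cls)
{
  static const char ok_page[] = "done";
  static const char busy_page[] = "busy";
  const char *page;
  unsigned int status;
  struct MHD_Response *response;
  int ret;

  if (isSomeFunctionOfMyAppResponding ())
  { /* fast path: answer directly from the event-driven thread */
    page = ok_page;
    status = MHD_HTTP_OK;
  }
  else
  { /* slow path: here simply rejected with 503; see below */
    page = busy_page;
    status = MHD_HTTP_SERVICE_UNAVAILABLE;
  }
  response = MHD_create_response_from_buffer (strlen (page), (void *) page,
                                              MHD_RESPMEM_PERSISTENT);
  ret = MHD_queue_response (connection, status, response);
  MHD_destroy_response (response);
  return ret;
}

int
main (void)
{
  /* The truncated startup call, spelled out; 8080 is an illustrative port. */
  struct MHD_Daemon *d =
    MHD_start_daemon (MHD_USE_SELECT_INTERNALLY, 8080,
                      NULL, NULL,       /* no accept-policy callback */
                      &ahc_echo, NULL,  /* access handler and its closure */
                      MHD_OPTION_END);
  if (NULL == d)
    return 1;
  (void) getchar ();   /* run until a key is pressed */
  MHD_stop_daemon (d);
  return 0;
}

Actually offloading the slow branch instead of rejecting it is what Evgeny's
MHD_USE_NO_LISTEN_SOCKET / MHD_add_connection() approach elsewhere in this
thread addresses.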

--
> Best Wishes,
> Evgeny Grin


-- 
Silvio Clécio


Re: [libmicrohttpd] MHD threading models: what model is similar to Least Connected?

2016-12-01 Thread silvioprog
Complementing my question: this way a new thread is created only when it is
really required, unlike the threaded model, which unconditionally creates a
new thread for each request.

On Thu, Dec 1, 2016 at 5:13 PM, silvioprog  wrote:

> Hello,
>
> Please take a look at this quote:
>
> "*Least Connected*
>
> *With least connected load balancing, nginx won’t forward any traffic to a
> busy server. This method is useful when operations on the application
> servers take longer to complete. Using this method helps to avoid overload
> situations, because nginx doesn't pass any requests to servers which are
> already under load.*"
>
> See the article:
> https://futurestud.io/tutorials/nginx-load-balancing-advanced-configuration
>
> I'm using this nginx feature and it has helped me a lot: I have an
> application with two routes. The first one is fast because it just processes
> common atomic CRUD operations; the second is a little bit slow, because it
> spends a long time on heavy work like report generation. So I start two
> instances of my application and configure two proxies in my upstream.
> Normally all requests are redirected to the first proxy, but when some
> request runs a report generation -- blocking that server --, the following
> requests are redirected to the second proxy; after a timeout (about 10
> seconds), nginx checks whether the first proxy is responsive again and sends
> requests back to the first one.
>
> Does MHD offer something like this in its threading models? If so, what
> flags do I need to pass to get the following behavior: normally all users
> are routed to the first thread using MHD's event-driven mode; however, when
> someone runs a report generation -- blocking the server --, on each new
> request MHD checks whether the route is still blocked and, if so, creates a
> new thread (also event-driven) and redirects the next requests to it.
>
> I don't know if I was clear in my explanation, but in short I'm trying to
> achieve something like nginx's "Least Connected" purely with MHD.
>
> --
> Silvio Clécio
>



-- 
Silvio Clécio


Re: [libmicrohttpd] MHD threading models: what model is similar to Least Connected?

2016-12-01 Thread Evgeny Grin
It's a basic result from the theory of mass telecommunication (teletraffic
theory), developed with the first automatic telephone exchanges: in an
overload situation, just reject some part of the incoming traffic, since
retries only delay the end of the overload.

You can do it manually.
Start MHD with MHD_USE_NO_LISTEN_SOCKET and poll the listen socket in your
own thread. Call MHD_add_connection() when a new connection arrives.
As soon as you detect an "overload" of MHD, start a new MHD instance without
a listen socket and use MHD_add_connection() with the new instance.
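
One possible way to implement that overload detection (an addition here, not
from the original mail, and it assumes an MHD version that supports
MHD_DAEMON_INFO_CURRENT_CONNECTIONS) is to poll how many connections the
current daemon is handling and treat crossing a threshold as overload:

#include <microhttpd.h>

#define BUSY_THRESHOLD 100  /* illustrative tuning value */

/* Returns non-zero if the daemon is handling "too many" connections.
   Requires an MHD version providing MHD_DAEMON_INFO_CURRENT_CONNECTIONS. */
static int
daemon_is_overloaded (struct MHD_Daemon *d)
{
  const union MHD_DaemonInfo *info;

  info = MHD_get_daemon_info (d, MHD_DAEMON_INFO_CURRENT_CONNECTIONS);
  if (NULL == info)
    return 0; /* information unavailable: assume not overloaded */
  return info->num_connections > BUSY_THRESHOLD;
}

Note that the connection count is only a rough proxy for "the application is
busy": a single long-running report still counts as one connection, so an
application-level flag set around the slow code path may match the intent of
this thread better.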

-- 
Best Wishes,
Evgeny Grin

On 01.12.2016 23:13, silvioprog wrote:
> Hello,
> 
> Please take a look at this quote:
> 
> "*Least Connected*
>
> *With least connected load balancing, nginx won't forward any traffic to
> a busy server. This method is useful when operations on the application
> servers take longer to complete. Using this method helps to avoid
> overload situations, because nginx doesn't pass any requests to servers
> which are already under load.*"
> 
> See the article:
> https://futurestud.io/tutorials/nginx-load-balancing-advanced-configuration
> 
> I'm using this nginx feature and it has helped me a lot: I have an
> application with two routes. The first one is fast because it just
> processes common atomic CRUD operations; the second is a little bit slow,
> because it spends a long time on heavy work like report generation. So I
> start two instances of my application and configure two proxies in my
> upstream. Normally all requests are redirected to the first proxy, but when
> some request runs a report generation -- blocking that server --, the
> following requests are redirected to the second proxy; after a timeout
> (about 10 seconds), nginx checks whether the first proxy is responsive
> again and sends requests back to the first one.
> 
> Does MHD offer something like this in its threading models? If so, what
> flags do I need to pass to get the following behavior: normally all users
> are routed to the first thread using MHD's event-driven mode; however, when
> someone runs a report generation -- blocking the server --, on each new
> request MHD checks whether the route is still blocked and, if so, creates a
> new thread (also event-driven) and redirects the next requests to it.
> 
> I don't know if I was clear in my explanation, but in short I'm trying to
> achieve something like nginx's "Least Connected" purely with MHD.
> 
> --
> Silvio Clécio