Re: Per EndPoint Threads???

2017-08-15 Thread Owen Rubel
Owen Rubel
oru...@gmail.com

On Tue, Aug 15, 2017 at 8:23 AM, Christopher Schultz <
ch...@christopherschultz.net> wrote:

> [...]

Re: Per EndPoint Threads???

2017-08-15 Thread Christopher Schultz
[...]

Re: Per EndPoint Threads???

2017-08-13 Thread Owen Rubel
Owen Rubel
oru...@gmail.com

On Sun, Aug 13, 2017 at 5:57 AM, Christopher Schultz <
ch...@christopherschultz.net> wrote:

> [...]

Re: Per EndPoint Threads???

2017-08-13 Thread Christopher Schultz
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256

Owen,

On 8/12/17 12:47 PM, Owen Rubel wrote:
> What I am talking about is something that improves communication as
> we notice a communication channel needing more resources. Not
> caching what is communicated... improving the CHANNEL for
> communicating the resource (whatever it may be).

If the channel is an HTTP connection (or TCP; the application protocol
isn't terribly relevant), then you are limited by the following:

1. Network bandwidth
2. Available threads (to service a particular request)
3. Hardware resources on the server (CPU/memory/disk/etc.)

Let's ignore 1 and 3 for now, since you are primarily concerned with
concurrency, and concurrency is useless if the other resources are
constrained or otherwise limiting the equation.

Let's say we had "per endpoint" thread pools, so that e.g. /create had
its own thread pool, and /show had another one, etc. What would that
buy us?

(Let's ignore for now the fact that one set of threads must always be
used to decode the request to decide where it's going, like /create or
/show.)

If we have a limited total number of threads (e.g. 10), then we could
"reserve" some of them so that we could always have 2 threads for
/create even if all the other threads in the system (the other 8) were
being used for something else. If we had 2 threads for /create and 2
threads for /show, then only 6 would remain for e.g. /edit or /delete.
So if 6 threads were already being used for /edit or /delete, the 7th
incoming request would be queued, but anyone making a request for
/show or /create would (if a thread in those pools is available) be
serviced immediately.
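
A minimal sketch of that reservation scheme (the class, pool names, and
sizes are all hypothetical, not a Tomcat API):

```java
import java.util.Map;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Hypothetical per-endpoint reservation: 2 threads for /create,
// 2 for /show, and 6 shared by every other endpoint (10 total).
class EndpointPools {
    private final Map<String, ExecutorService> reserved = Map.of(
            "/create", Executors.newFixedThreadPool(2),
            "/show",   Executors.newFixedThreadPool(2));
    private final ExecutorService shared = Executors.newFixedThreadPool(6);

    Future<?> submit(String endpoint, Runnable request) {
        // Reserved endpoints can never be starved by the others, but
        // their threads sit idle when those endpoints see no traffic.
        return reserved.getOrDefault(endpoint, shared).submit(request);
    }
}
```

With this layout, a 7th concurrent /edit or /delete request queues in
the shared pool even while the four reserved threads sit idle.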

I can see some utility in this ability, because it would allow the
container to ensure that some resources were never starved... or,
rather, that they have some priority over certain other services. In
other words, the service could enjoy guaranteed provisioning for
certain endpoints.

As it stands, Tomcat (and, I would venture a guess, most if not all
other containers) implements a fair request pipeline where requests
are (at least roughly) serviced in the order in which they are
received. Rather than guaranteeing provisioning for a particular
endpoint, the closest thing that could be implemented (at the
application level) would be a resource-availability-limiting
mechanism, such as counting the number of in-flight requests and
rejecting those which exceed some threshold with e.g. a 503 response.
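
That counting mechanism could be sketched with a plain atomic counter
(the class and its limit are illustrative, not anything Tomcat provides):

```java
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative in-flight request limiter: admits up to 'limit'
// concurrent requests; a caller rejected here would be sent a 503.
class InFlightLimiter {
    private final AtomicInteger inFlight = new AtomicInteger();
    private final int limit;

    InFlightLimiter(int limit) { this.limit = limit; }

    boolean tryEnter() {
        // Optimistically increment; back out if we overshot the limit.
        if (inFlight.incrementAndGet() > limit) {
            inFlight.decrementAndGet();
            return false;          // caller should respond 503
        }
        return true;
    }

    void exit() { inFlight.decrementAndGet(); }
}
```

A servlet Filter could call tryEnter() on the way in, send a 503 when
it returns false, and call exit() in a finally block.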

Unfortunately, that doesn't actually prioritize some requests, it
merely rejects others in order to attempt to prioritize those others.
It also starves endpoints even when there is no reason to do so (e.g.
in the 10-thread scenario, if all 4 /show and /create threads are
idle, but 6 requests are already in process for the other endpoints, a
7th request for those other endpoints will be rejected).

I believe that per-endpoint provisioning is a possibility, but I don't
think that the potential gains are worth the certain complexity of the
system required to implement it.

There are other ways to handle heterogeneous service requests in a way
that doesn't starve one type of request in favor of another. One
obvious solution is horizontal scaling with a load-balancer. An LB can
be used to implement a sort of guaranteed-provisioning for certain
endpoints by providing more back-end servers for certain endpoints. If
you want to make sure that /show can be called by any client at any
time, then make sure you spin-up 1000 /show servers and register them
with the load-balancer. You can survive with only maybe 10 nodes
servicing /delete requests; others will either wait in a queue or
receive a 503 from the LB.
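
As a sketch, that kind of per-endpoint provisioning at the
load-balancer might look like the following nginx-style config (all
upstream names and server hosts are illustrative):

```nginx
# Hypothetical sketch: per-endpoint provisioning done at the LB.
upstream show_backends    { server show1:8080;   server show2:8080; }
upstream delete_backends  { server delete1:8080; }
upstream general_backends { server app1:8080;    server app2:8080; }

server {
    listen 80;
    # Plenty of capacity for /show, much less for /delete.
    location /show   { proxy_pass http://show_backends; }
    location /delete { proxy_pass http://delete_backends; }
    # Everything else shares the general pool.
    location /       { proxy_pass http://general_backends; }
}
```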

For my money, I'd maximize the number of threads available for all
requests (whether within a single server, or across a large cluster)
and not require that they be available for any particular endpoint.
Once you have to depart from a single server, you MUST have something
like a load-balancer involved, and therefore the above solution
becomes not only more practical but also more powerful.
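
In Tomcat terms, maximizing the shared pool is just the Connector's
maxThreads attribute in server.xml (the value below is illustrative;
the default is 200):

```xml
<!-- server.xml: one large shared pool serving every endpoint;
     the maxThreads value here is illustrative. -->
<Connector port="8080" protocol="HTTP/1.1"
           maxThreads="400" connectionTimeout="20000" />
```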

Since relying on a one-box-wonder to run a high-availability web
service isn't practical, provisioning is necessarily above the
cluster-node level, and so the problem has effectively moved from the
app server to the load-balancer (or reverse proxy). I believe the
application server is an inappropriate place to implement this type of
provisioning because it's too small-scale. The app server should serve
requests as quickly as possible, and arranging for this kind of
provisioning would add a level of complexity that would jeopardize
performance of all requests within the application server.

> But like you said, this is not something that is doable, so I'll
> look elsewhere.

I think it's doable, just not worth it given the orthogonal solutions
available. Some things are better implemented at other layers of the
application (as a whole system) and perhaps not in the application
server itself.

Re: Per EndPoint Threads???

2017-08-12 Thread Owen Rubel
On Sat, Aug 12, 2017 at 3:13 PM, Christopher Schultz <
ch...@christopherschultz.net> wrote:

> [...]

Re: Per EndPoint Threads???

2017-08-12 Thread Christopher Schultz
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256

Owen,

On 8/12/17 12:47 PM, Owen Rubel wrote:
> [...]
> Well, caching is:
> - related to the resource, not the communication
> - a one-time thing, and has to have a version check every time.
> 
> What I am talking about is something that improves communication as
> we notice a communication channel needing more resources. Not
> caching what is communicated... improving the CHANNEL for
> communicating the resource (whatever it may be).
> 
> But like you said, this is not something that is doable so I'll
> look elsewhere. Thanks again. :)

If you want to improve communication efficiency, I think that HTTP
isn't the protocol for you. Perhaps WebSocket?

-chris


Re: Per EndPoint Threads???

2017-08-12 Thread Owen Rubel
On Sat, Aug 12, 2017 at 9:36 AM, Christopher Schultz <
ch...@christopherschultz.net> wrote:

> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA256
>
> Owen,
>
> On 8/12/17 11:21 AM, Owen Rubel wrote:
> > On Sat, Aug 12, 2017 at 1:19 AM, Mark Thomas 
> > wrote:
> >
> >> On 12/08/17 06:00, Christopher Schultz wrote:
> >>> Owen,
> >>>
> >>> Please do not top-post. I have re-ordered your post to be
> >>> bottom-post.
> >>>
> >>> On 8/11/17 10:12 PM, Owen Rubel wrote:
>  On Fri, Aug 11, 2017 at 5:58 PM, 
>  wrote:
> >>>
> >> Hi All,
> >>
> >> I'm looking for a way (or a tool) in Tomcat to associate
> >> threads with endpoints.
> >
> > It isn't clear to me why this would be necessary. Threads
> > should be allocated on demand to individual requests. If
> > one route sees more traffic, then it should automatically
> > be allocated more threads. This could starve some requests
> > if the maximum number of threads had been allocated to a
> > lesser-used route, while available threads went unused for
> > a more commonly used route.
> >>>
>  Absolutely but it could ramp up more threads as needed.
> >>>
>  I base the logic on neurons and neurotransmitters. When
>  neurons talk to each other, they send back neurotransmitters
>  to reinforce that pathway.
> >>>
>  If we could do the same through threads by adding additional
>  threads for endpoints that receive more traffic vs. those
>  which do not, it would enforce better and faster
>  communication on those paths. The current way Tomcat does it
>  is not dynamic; it just applies to ALL pathways equally,
>  which is not efficient.
> >>> How would this improve efficiency at all?
> >>>
> >>> There is nothing inherently "showy" or "edity" about a
> >>> particular thread; each request-processing thread is
> >>> indistinguishable from any other. I don't believe there is a
> >>> way to improve the situation even if "per-endpoint" (whatever
> >>> that would mean) threads were a possibility.
> >>>
> >>> What would you attach to a thread that would make it any better
> >>> at editing records? Or deleting them?
> >>
> >> And I'll add that the whole original proposal ignores a number of
> >> rather fundamental points about how Servlet containers (and web
> >> servers in general) work. To name a few:
> >>
> >> - Until the request has been parsed (which requires a thread)
> >> Tomcat doesn't know which Servlet (endpoint) the request is
> >> destined for. Switching processing to a different thread at that
> >> point would add significant overhead for no benefit.
> >>
> >> - Even after parsing, the actual Servlet that processes the
> >> request (if any) can change during processing (e.g. a Filter that
> >> conditionally forwards to a different Servlet, authentication,
> >> etc.)
> >>
> >> There is nothing about an endpoint-specific thread that would
> >> allow it to process a request more efficiently than a general
> >> thread.
> >>
> >> Any per-endpoint thread-pool solution will require the
> >> additional overhead to switch processing from the general parsing
> >> thread to the endpoint-specific thread. This additional cost
> >> comes with zero benefits, hence it will always be less efficient.
> >>
> >> In short, there is no way pre-allocating threads to particular
> >> endpoints can improve performance compared to just adding the
> >> same number of additional threads to the general thread pool.
>
> > Ah, OK, thank you for the very concise answer. I am chasing a pipe
> > dream, I guess. Maybe there is another way to get this kind of benefit.
> The answer is caching, and that can be done at many levels, but the
> thread level makes the least sense due to the reasons Mark outlined
> above.
>
> - -chris
> -BEGIN PGP SIGNATURE-
> Comment: GPGTools - http://gpgtools.org
> Comment: Using GnuPG with Thunderbird - http://www.enigmail.net/
>
> iQIzBAEBCAAdFiEEMmKgYcQvxMe7tcJcHPApP6U8pFgFAlmPLngACgkQHPApP6U8
> pFisbw//aiIg0vGmmlm4T/xoEbAKblKf6Qn9zmDzbLY9IbIG7MdsMcuV9hnsasEp
> iaZs3ROTy3BvWKGoyIGThtRsBPSFmb1H/XuKs4bqxgdRNgcbxEbjkH+1wZCx76Aq
> aqdIiCFdWvkOll4EqC4UYjNXCMkMBoTGN4GTxGmB8arujOyiC1KVPLY+wiRtXusF
> BrV3n9G+wN7Qq+rHIvgct1J29xTnPwQWhcdTrR5+IXn7vuNhEe9yxlKyJh4N6Pkt
> TW8ZlZfUgPnAXYZFvb0UfRK43cOCP4HsgncvIDjnnRJVTnaqRKBuRE4ZVYJG91SN
> CHUCYAmCR/rUZcOO3VJZ0dE7OEkrtcs6tmRT7j0qfS2qxbAb6YuW5xNYrCTgWKyD
> 6bUCQsKzcChV4mQPVDjXO/yv1t3dpXeMB+44KwCVB3bFPTediwISzTxInCSd/Kdu
> I+57Rcrclto8S3+GRsUPRG3dwsNMYMIxHpuzj/LYzLNdoANI8vM5NntYdQ4cwEFM
> H23i54m00WQ5RLuRJGzker+T5H0NvGlVwFQnqO9kCkA57o1Gi+vk34UuNPVLsqHx
> sKq6Eb4s3MeslZBPHhJWYXGPx226+T6sEXO1y2UZ9GuWYzfI3MF6/xcFOI2/W3id
> kYZEnR3R1Xes7GzsSLuCXVRDQco3GhXvSiyLvYC9xwgIsjnM61Q=
> =Q/vf
> -END PGP SIGNATURE-
>
> -
> To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
> For additional commands, e-mail: users-h...@tomcat.apache.org
>
>
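Chris's closing suggestion, caching at a higher level rather than at the thread level, can be sketched in a few lines. This is an illustrative stand-alone example, not a Tomcat API; the class and method names are invented for the sketch. A concurrent map memoizes the rendered body per path, so repeat hits skip the expensive render:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

public class ResponseCache {
    private final Map<String, String> cache = new ConcurrentHashMap<>();

    // Return the cached body for a path, computing (and storing) it on a miss.
    public String get(String path, Function<String, String> render) {
        return cache.computeIfAbsent(path, render);
    }

    public static void main(String[] args) {
        ResponseCache c = new ResponseCache();
        // First call misses and renders; second call hits and never invokes its lambda.
        String first = c.get("/show/42", p -> "rendered:" + p);
        String second = c.get("/show/42", p -> "SHOULD-NOT-RUN");
        System.out.println(first.equals(second)); // true: the second hit skipped rendering
    }
}
```

In a real webapp this would typically live in a servlet Filter (or in front of Tomcat entirely) with an eviction policy, but the principle is the same: the win comes from skipping work, not from which thread runs it.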

Re: Per EndPoint Threads???

2017-08-12 Thread Christopher Schultz
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256

Owen,

> Ah, OK, thank you for the very concise answer. I am chasing a pipe
> dream, I guess. Maybe there is another way to get this kind of benefit.
The answer is caching, and that can be done at many levels, but the
thread level makes the least sense due to the reasons Mark outlined
above.

- -chris




Re: Per EndPoint Threads???

2017-08-12 Thread Owen Rubel
Ah, OK, thank you for the very concise answer. I am chasing a pipe dream, I
guess. Maybe there is another way to get this kind of benefit.

Thanks again for your answer.

Owen Rubel
oru...@gmail.com



Re: Per EndPoint Threads???

2017-08-12 Thread Mark Thomas

And I'll add that the whole original proposal ignores a number of rather
fundamental points about how Servlet containers (and web servers in
general) work. To name a few:

- Until the request has been parsed (which requires a thread) Tomcat
doesn't know which Servlet (endpoint) the request is destined for.
Switching processing to a different thread at that point would add
significant overhead for no benefit.

- Even after parsing, the actual Servlet that processes the request (if
any) can change during processing (e.g. a Filter that conditionally
forwards to a different Servlet, authentication, etc.)

There is nothing about an endpoint-specific thread that would allow it to
process a request more efficiently than a general thread.
Any per-endpoint thread-pool solution will require the additional
overhead to switch processing from the general parsing thread to the
endpoint-specific thread. This additional cost comes with zero benefits,
hence it will always be less efficient.
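The hand-off cost Mark describes is easy to see in miniature. In this hypothetical sketch (plain `java.util.concurrent`, not Tomcat internals; the pool names and the "/edit" label are invented for illustration), the per-endpoint design forces the parsing thread to re-submit the work to a second pool, paying an extra queue/wake-up step that the general-pool design simply does not have:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class HandoffSketch {
    public static void main(String[] args) throws Exception {
        ExecutorService parser   = Executors.newFixedThreadPool(2); // parses requests
        ExecutorService editPool = Executors.newFixedThreadPool(2); // hypothetical /edit pool

        // Per-endpoint design: the parsing thread must re-queue the request
        // onto the endpoint pool and block for the answer -- an extra
        // submit/context-switch on every single call.
        Future<String> handedOff = parser.submit(
            () -> editPool.submit(() -> "handled /edit").get());
        System.out.println(handedOff.get());

        // General-pool design: the same thread parses and handles the request.
        Future<String> direct = parser.submit(() -> "handled /edit");
        System.out.println(direct.get());

        parser.shutdown();
        editPool.shutdown();
    }
}
```

Both paths produce the identical result; the second pool only ever adds latency.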

In short, there is no way pre-allocating threads to particular endpoints
can improve performance compared to just adding the same number of
additional threads to the general thread pool.

Mark
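Mark's final point, that a single shared pool never strands capacity the way a partitioned one can, is easy to demonstrate. A minimal sketch (sizes and names are illustrative): with one pool of 10 threads, a burst on any single endpoint may use all 10.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class SharedPoolDemo {
    public static void main(String[] args) throws Exception {
        // One shared pool: any endpoint's work may use any of the 10 threads.
        ExecutorService shared = Executors.newFixedThreadPool(10);

        // Submit 10 concurrent "/edit" requests; all 10 threads can serve
        // them, because none are reserved for idle endpoints like "/create".
        CountDownLatch done = new CountDownLatch(10);
        for (int i = 0; i < 10; i++) {
            shared.submit(done::countDown);
        }
        boolean finished = done.await(5, TimeUnit.SECONDS);
        System.out.println(finished); // true: no capacity was stranded
        shared.shutdown();
    }
}
```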




Re: Per EndPoint Threads???

2017-08-11 Thread Christopher Schultz
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256

Owen,

Please do not top-post. I have re-ordered your post to be bottom-post.

On 8/11/17 10:12 PM, Owen Rubel wrote:
> On Fri, Aug 11, 2017 at 5:58 PM,  wrote:
> 
>>> Hi All,
>>> 
>>> I'm looking for a way (or a tool) in Tomcat to associate
>>> threads with endpoints.
>> 
>> It isn't clear to me why this would be necessary. Threads should
>> be allocated on demand to individual requests. If one route sees
>> more traffic, then it should automatically be allocated more
>> threads. This could starve some requests if the maximum number of
>> threads had been allocated to a lesser-used route, while
>> available threads went unused for a more commonly used route.
> 
> Absolutely, but it could ramp up more threads as needed.
> 
> I base the logic on neurons and neurotransmitters. When neurons
> talk to each other, they send back neurotransmitters to reinforce
> that pathway.
> 
> If we could do the same through threads, by adding additional
> threads for endpoints that receive more traffic vs. those which do
> not, it would reinforce better and faster communication on those
> paths.
> The current way Tomcat does it is not dynamic; it just applies to
> ALL pathways equally, which is not efficient.
How would this improve efficiency at all?

There is nothing inherently "showy" or "edity" about a particular
thread; each request-processing thread is indistinguishable from any
other. I don't believe there is a way to improve the situation even if
"per-endpoint" (whatever that would mean) threads were a possibility.

What would you attach to a thread that would make it any better at
editing records? Or deleting them?

- -chris
-BEGIN PGP SIGNATURE-
Comment: GPGTools - http://gpgtools.org
Comment: Using GnuPG with Thunderbird - http://www.enigmail.net/

iQIzBAEBCAAdFiEEMmKgYcQvxMe7tcJcHPApP6U8pFgFAlmOi2YACgkQHPApP6U8
pFh+ohAAkIDqAaZK3mmQsSAE100a4RMwCyAjT076eiEkqj3MVJHUBuYf2adNlRYe
jvcKJCmvu061mW+/kos0+YIrt6ao2j60+fryX1goMOXhBxxrSlioccOwLkBu4HIG
SB/AuFIYqIG6S1ICqVunCFJsrYnMuJEX6WfA8O7G+sQWFH54w9XadewabEduu3uO
PwoP14a7XFOC8RPp9HM9Rdx8EfADRXrFugN0E5YSjXN5cdMs8bxJcabo8vjVnfNH
JDCkvF0tDd+FWj4t/AqXugM6fc6EYb8sSxEifxkdbu701A4doe8n1d1zawd3+qd4
IBVR6jFDHGqRm6cHvmhI8G4Tlx6c5EX29ZGTTdKnPvNloyob0a3/LauPJMr/97Xv
eIsj0shEfbUOWgcBWHRMbXbmZRjOAU7wxXtm2KsLZpJ6ZVZe9c7wSRLThYjp0Yyx
jgpwHN4sVPGG821trGht29E3v1e2GN1A7nuYbM7A7BK1PHP3MmLozVxAMxAip1T4
hVaVDHc1hd/G79Jvugq/T7atKQfOetLD4vg9ZFGIukaPZwA+3BtMYTNWn/bX2u9d
hBsWCw5Abn1SABlQ4cl87OJF9jya4p/P3Kqejyg9jbDbUy9J21QFEP6n5qHy9/vy
Jg6cjWpho6s9Ajx690ZNsdudDPoRuBe2TRLkFTOnUXsgwHTmToY=
=tiO+
-END PGP SIGNATURE-




Re: Per EndPoint Threads???

2017-08-11 Thread Owen Rubel
Absolutely, but it could ramp up more threads as needed.

I base the logic on neurons and neurotransmitters. When neurons talk to
each other, they send back neurotransmitters to reinforce that pathway.

If we could do the same through threads, by adding additional threads for
endpoints that receive more traffic vs. those which do not, it would
reinforce better and faster communication on those paths.

The current way Tomcat does it is not dynamic; it just applies to ALL
pathways equally, which is not efficient.


Owen Rubel
oru...@gmail.com
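For what it's worth, the "ramp up more threads as needed" behavior Owen wants already exists for the general pool: a `java.util.concurrent.ThreadPoolExecutor` grows from its core size toward its maximum as load arrives (Tomcat's connector pool behaves similarly via `minSpareThreads`/`maxThreads`). It just grows for all endpoints at once rather than per pathway. A minimal sketch (sizes are illustrative):

```java
import java.util.concurrent.SynchronousQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class GrowingPoolDemo {
    public static void main(String[] args) throws Exception {
        // Grows from 1 core thread up to 4 as work arrives; idle extras
        // are retired after 60 seconds.
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
            1, 4, 60, TimeUnit.SECONDS, new SynchronousQueue<>());

        // Four concurrent tasks force the pool to spin up four threads,
        // because each worker is busy (sleeping) when the next task lands.
        for (int i = 0; i < 4; i++) {
            pool.submit(() -> {
                try { Thread.sleep(200); } catch (InterruptedException e) { }
            });
        }
        System.out.println(pool.getPoolSize()); // 4: the pool ramped up on demand
        pool.shutdownNow();
    }
}
```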



Re: Per EndPoint Threads???

2017-08-11 Thread christopher
> Hi All,
> 
> I'm looking for a way (or a tool) in Tomcat to associate threads with
> endpoints.

It isn't clear to me why this would be necessary. Threads should be
allocated on demand to individual requests. If one route sees more
traffic, then it should automatically be allocated more threads. This
could starve some requests if the maximum number of threads had been
allocated to a lesser-used route, while available threads went unused
for a more commonly used route.
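The starvation scenario described above can be made concrete with a hypothetical split pool. Everything here (the 8/2 split, the endpoint names, the timings) is invented for illustration:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ReservedPoolDemo {
    public static void main(String[] args) throws Exception {
        // Hypothetical split: 8 threads reserved for /edit, 2 for /create.
        ExecutorService editPool   = Executors.newFixedThreadPool(8);
        ExecutorService createPool = Executors.newFixedThreadPool(2);

        // Traffic is all /create: only 2 of the 10 threads can serve it,
        // while the 8 /edit threads sit idle.
        CountDownLatch started = new CountDownLatch(3);
        for (int i = 0; i < 3; i++) {
            createPool.submit(() -> {
                started.countDown(); // mark this request as running
                try { Thread.sleep(500); } catch (InterruptedException e) { }
            });
        }
        // Only 2 requests can run at once; the 3rd waits in the queue
        // even though 8 reserved threads are free.
        boolean allStarted = started.await(100, TimeUnit.MILLISECONDS);
        System.out.println(allStarted); // false: the third /create request queued
        editPool.shutdown();
        createPool.shutdownNow();
    }
}
```

With a single shared 10-thread pool, all three requests would start immediately.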

