Re: ClientResource leaves inactive thread

2011-07-28 Thread Matt Kennedy
I'm not clear from the question whether you're asking about the number of task
threads, as Tim has explained, or the number of HTTP listener threads. For the
latter, use:

Server httpServer = new Server(Protocol.HTTP, port);
serviceComponent.getServers().add(httpServer);
// Parameters are string name/value pairs, so maxThreads here is a String, e.g. "50".
httpServer.getContext().getParameters().add("maxThreads", maxThreads);



On Jul 27, 2011, at 2:02 PM, Tim Peierls wrote:

 You can set the pool size of the executor used by the TaskService with 
 org.restlet.service.TaskService.setPoolSize.
 
 Or you can provide your own TaskService and override createExecutorService
 to return an ExecutorService tuned exactly the way you want.
 
 --tim
 
 On Wed, Jul 27, 2011 at 8:14 AM, Klemens Muthmann al...@gmx.de wrote:
 Hi,
 
 I have read several threads about this problem now (including this one) and
 still can't figure out how to solve the issue (my Restlet version is 2.0.8).
 Could someone point me to the relevant tutorial or show some code on how to
 increase the thread pool size on the Restlet server?
 
 Thanks and regards
 
 --
 http://restlet.tigris.org/ds/viewMessage.do?dsForumId=4447&dsMessageId=2804569


--
http://restlet.tigris.org/ds/viewMessage.do?dsForumId=4447&dsMessageId=2805287

Re: ClientResource leaves inactive thread

2011-07-28 Thread Tim Peierls
Oh ... that's probably what the original question was asking about. I just
jumped reflexively on the phrase "thread pool". Sorry...

--tim

On Thu, Jul 28, 2011 at 8:40 AM, Matt Kennedy stinkym...@gmail.com wrote:

 I'm not clear from the question whether you're asking about the number of task
 threads, as Tim has explained, or the number of HTTP listener threads. For
 the latter, use:

   Server httpServer = new Server(Protocol.HTTP, port);
   serviceComponent.getServers().add(httpServer);
   // Parameters are string name/value pairs, so maxThreads here is a String, e.g. "50".
   httpServer.getContext().getParameters().add("maxThreads", maxThreads);




 On Jul 27, 2011, at 2:02 PM, Tim Peierls wrote:

 You can set the pool size of the executor used by the TaskService with
 org.restlet.service.TaskService.setPoolSize.

 Or you can provide your own TaskService and override
 createExecutorService to return an ExecutorService tuned exactly the way you
 want.

 --tim

 On Wed, Jul 27, 2011 at 8:14 AM, Klemens Muthmann al...@gmx.de wrote:

 Hi,

 I have read several threads about this problem now (including this one) and
 still can't figure out how to solve the issue (my Restlet version is 2.0.8).
 Could someone point me to the relevant tutorial or show some code on how to
 increase the thread pool size on the Restlet server?

 Thanks and regards

 --

 http://restlet.tigris.org/ds/viewMessage.do?dsForumId=4447&dsMessageId=2804569





--
http://restlet.tigris.org/ds/viewMessage.do?dsForumId=4447&dsMessageId=2805348

RE: ClientResource leaves inactive thread

2011-07-27 Thread Klemens Muthmann
Hi,

I have read several threads about this problem now (including this one) and
still can't figure out how to solve the issue (my Restlet version is 2.0.8).
Could someone point me to the relevant tutorial or show some code on how to
increase the thread pool size on the Restlet server?

Thanks and regards

--
http://restlet.tigris.org/ds/viewMessage.do?dsForumId=4447&dsMessageId=2804569


Re: ClientResource leaves inactive thread

2011-07-27 Thread Tim Peierls
You can set the pool size of the executor used by the TaskService with
org.restlet.service.TaskService.setPoolSize.

Or you can provide your own TaskService and override
createExecutorService to return an ExecutorService tuned exactly the way you
want.
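
For illustration, option 1 might look like this minimal sketch (the pool size
of 20 is invented for the example; for option 2, check your Restlet version's
javadoc for the exact createExecutorService signature before overriding it):

    import org.restlet.Application;

    public class MyApplication extends Application {
        public MyApplication() {
            // Enlarge the pool used by the TaskService (illustrative value).
            getTaskService().setPoolSize(20);
        }
    }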

--tim

On Wed, Jul 27, 2011 at 8:14 AM, Klemens Muthmann al...@gmx.de wrote:

 Hi,

 I have read several threads about this problem now (including this one) and
 still can't figure out how to solve the issue (my Restlet version is 2.0.8).
 Could someone point me to the relevant tutorial or show some code on how to
 increase the thread pool size on the Restlet server?

 Thanks and regards

 --

 http://restlet.tigris.org/ds/viewMessage.do?dsForumId=4447&dsMessageId=2804569


--
http://restlet.tigris.org/ds/viewMessage.do?dsForumId=4447&dsMessageId=2804693

Re: ClientResource leaves inactive thread

2010-09-15 Thread Tim Peierls
This is all good, but is there any reason you're not letting the user
provide a configured ExecutorService? It would simplify your API
considerably, I think.

--tim

On Tue, Sep 14, 2010 at 5:18 PM, Jerome Louvel jerome.lou...@noelios.com wrote:

 Hi Tim,



 In the upcoming HTTP/NIO internal connectors for version 2.1, I’ve made the
 thread pool fully customizable. See the org.restlet.engine.nio.BaseHelper
 class for more details. Currently in the Restlet incubator, but soon to be
 moved to SVN trunk.



 * controllerDaemon (boolean, default: true (client), false (server)):
   indicates if the controller thread should be a daemon (not blocking JVM
   exit).

 * controllerSleepTimeMs (int, default: 50): time for the controller thread
   to sleep between each control.

 * minThreads (int, default: 5): minimum number of worker threads waiting to
   service calls, even if they are idle.

 * lowThreads (int, default: 8): number of worker threads determining when
   the connector is considered overloaded. This triggers some protection
   actions such as not accepting new connections.

 * maxThreads (int, default: 10): maximum number of worker threads that can
   service calls. If this number is reached, then additional calls are queued
   if the maxQueued value hasn't been reached.

 * maxQueued (int, default: 10): maximum number of calls that can be queued
   if there aren't any worker threads available to service them. If the value
   is '0', then no queue is used and calls are rejected. If the value is
   '-1', then an unbounded queue is used and calls are never rejected.

 * maxIoIdleTimeMs (int, default: 3): maximum time to wait on an idle IO
   operation.

 * maxThreadIdleTimeMs (int, default: 6): time for an idle thread to wait for
   an operation before being collected.

 * tracing (boolean, default: false): indicates if all messages should be
   printed on the standard console.

 * workerThreads (boolean, default: true): indicates if the processing of
   calls should be done via threads provided by a worker service (i.e. a pool
   of worker threads). Note that if set to false, calls will be processed by
   a single IO selector thread, which should never block, otherwise the other
   connections would hang.

 * inboundBufferSize (int, default: 8*1024): size of the content buffer for
   receiving messages.

 * outboundBufferSize (int, default: 32*1024): size of the content buffer for
   sending messages.

 * directBuffers (boolean, default: true): indicates if direct NIO buffers
   should be allocated instead of regular buffers. See NIO's ByteBuffer
   Javadocs.

 * transport (String, default: TCP): indicates the transport protocol such as
   TCP or UDP.





 Best regards,
 Jerome

 --
 Restlet ~ Founder and Technical Lead ~ http://www.restlet.org

 Noelios Technologies ~ http://www.noelios.com









 From: tpeie...@gmail.com [mailto:tpeie...@gmail.com] On behalf of Tim
 Peierls
 Sent: Saturday, July 3, 2010 7:15 PM
 To: discuss@restlet.tigris.org
 Subject: Re: ClientResource leaves inactive thread



 My earlier mail said something wrong, or at least misleading:

 ...defaulting coreThreads=1 and maxThreads=255 with a SynchronousQueue
 seems like it's asking for trouble *with CPU count > 255*.

 I shouldn't have included that last italicized phrase, "with CPU count >
 255". The point was that SynchronousQueues should have unbounded pool size.



 Jerome's response of setting maxPoolSize to 10 by default (and still using
 SynchronousQueue) means that tasks will be rejected that much sooner, which
 will probably cause more problems for people than a value of 255.



 The thing about a SynchronousQueue is that it isn't really a queue -- it
 has zero capacity. Putting something on a synchronous queue blocks until
 there's something (i.e., a thread) at the other end to hand it off to
 directly. In development or for small applications where you aren't too
 worried about exhausting thread resources, this is fine. In production
 systems, though, you want to be able to configure something other than
 direct handoff.



 Here is the relevant section from the TPE javadoc
 (http://java.sun.com/javase/6/docs/api/java/util/concurrent/ThreadPoolExecutor.html):

 ---

 Any BlockingQueue
 (http://java.sun.com/javase/6/docs/api/java/util/concurrent/BlockingQueue.html)
 may be used to transfer and hold submitted tasks. The use of this queue
 interacts with pool sizing:

- If fewer than corePoolSize threads are running, the Executor always
prefers adding a new thread rather than queuing.
- If corePoolSize or more threads are running, the Executor always
prefers queuing a request rather than adding a new thread.

RE: ClientResource leaves inactive thread

2010-09-14 Thread Jerome Louvel
Hi Tim,

 

In the upcoming HTTP/NIO internal connectors for version 2.1, I’ve made the
thread pool fully customizable. See the org.restlet.engine.nio.BaseHelper
class for more details. Currently in the Restlet incubator, but soon to be
moved to SVN trunk.

 

* controllerDaemon (boolean, default: true (client), false (server)):
  indicates if the controller thread should be a daemon (not blocking JVM
  exit).

* controllerSleepTimeMs (int, default: 50): time for the controller thread
  to sleep between each control.

* minThreads (int, default: 5): minimum number of worker threads waiting to
  service calls, even if they are idle.

* lowThreads (int, default: 8): number of worker threads determining when
  the connector is considered overloaded. This triggers some protection
  actions such as not accepting new connections.

* maxThreads (int, default: 10): maximum number of worker threads that can
  service calls. If this number is reached, then additional calls are queued
  if the maxQueued value hasn't been reached.

* maxQueued (int, default: 10): maximum number of calls that can be queued
  if there aren't any worker threads available to service them. If the value
  is '0', then no queue is used and calls are rejected. If the value is
  '-1', then an unbounded queue is used and calls are never rejected.

* maxIoIdleTimeMs (int, default: 3): maximum time to wait on an idle IO
  operation.

* maxThreadIdleTimeMs (int, default: 6): time for an idle thread to wait for
  an operation before being collected.

* tracing (boolean, default: false): indicates if all messages should be
  printed on the standard console.

* workerThreads (boolean, default: true): indicates if the processing of
  calls should be done via threads provided by a worker service (i.e. a pool
  of worker threads). Note that if set to false, calls will be processed by
  a single IO selector thread, which should never block, otherwise the other
  connections would hang.

* inboundBufferSize (int, default: 8*1024): size of the content buffer for
  receiving messages.

* outboundBufferSize (int, default: 32*1024): size of the content buffer for
  sending messages.

* directBuffers (boolean, default: true): indicates if direct NIO buffers
  should be allocated instead of regular buffers. See NIO's ByteBuffer
  Javadocs.

* transport (String, default: TCP): indicates the transport protocol such as
  TCP or UDP.
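
For a concrete picture of how such parameters are typically applied, here is
a minimal, hypothetical sketch (parameter names come from the table above;
the port and values are illustrative only, not recommendations):

    import org.restlet.Component;
    import org.restlet.Server;
    import org.restlet.data.Protocol;

    public class TunedServer {
        public static void main(String[] args) throws Exception {
            Component component = new Component();
            Server server = component.getServers().add(Protocol.HTTP, 8182);
            // Connector parameters are set as string name/value pairs.
            server.getContext().getParameters().add("minThreads", "5");
            server.getContext().getParameters().add("maxThreads", "50");
            server.getContext().getParameters().add("maxQueued", "-1");
            component.start();
        }
    }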

 

 

Best regards,
Jerome
--
Restlet ~ Founder and Technical Lead ~ http://www.restlet.org
Noelios Technologies ~ http://www.noelios.com

 

 

 

 

From: tpeie...@gmail.com [mailto:tpeie...@gmail.com] On behalf of Tim Peierls
Sent: Saturday, July 3, 2010 7:15 PM
To: discuss@restlet.tigris.org
Subject: Re: ClientResource leaves inactive thread

 

My earlier mail said something wrong, or at least misleading:

...defaulting coreThreads=1 and maxThreads=255 with a SynchronousQueue seems
like it's asking for trouble *with CPU count > 255*.

I shouldn't have included that last italicized phrase, "with CPU count > 255".
The point was that SynchronousQueues should have unbounded pool size.

 

Jerome's response of setting maxPoolSize to 10 by default (and still using 
SynchronousQueue) means that tasks will be rejected that much sooner, which 
will probably cause more problems for people than a value of 255.

 

The thing about a SynchronousQueue is that it isn't really a queue -- it has 
zero capacity. Putting something on a synchronous queue blocks until there's 
something (i.e., a thread) at the other end to hand it off to directly. In 
development or for small applications where you aren't too worried about 
exhausting thread resources, this is fine. In production systems, though, you 
want to be able to configure something other than direct handoff.

 

Here is the relevant section from the TPE javadoc
(http://java.sun.com/javase/6/docs/api/java/util/concurrent/ThreadPoolExecutor.html):

---

Any BlockingQueue
(http://java.sun.com/javase/6/docs/api/java/util/concurrent/BlockingQueue.html)
may be used to transfer and hold submitted tasks. The use of this
queue interacts with pool sizing:

*   If fewer than corePoolSize threads are running, the Executor always 
prefers adding a new thread rather than queuing.
*   If corePoolSize or more threads are running, the Executor always 
prefers queuing a request rather than adding a new thread.
*   If a request cannot be queued, a new thread is created unless this 
would exceed maximumPoolSize, in which case, the task will be rejected.

There are three general strategies for queuing:

1. Direct handoffs. A good default choice for a work queue is a
SynchronousQueue
(http://java.sun.com/javase/6/docs/api/java/util/concurrent/SynchronousQueue.html)

Re: ClientResource leaves inactive thread

2010-07-03 Thread Tim Peierls
My earlier mail said something wrong, or at least misleading:

 ...defaulting coreThreads=1 and maxThreads=255 with a SynchronousQueue
 seems like it's asking for trouble *with CPU count > 255*.


I shouldn't have included that last italicized phrase, "with CPU count >
255". The point was that SynchronousQueues should have unbounded pool size.

Jerome's response of setting maxPoolSize to 10 by default (and still using
SynchronousQueue) means that tasks will be rejected that much sooner, which
will probably cause more problems for people than a value of 255.

The thing about a SynchronousQueue is that it isn't really a queue -- it has
zero capacity. Putting something on a synchronous queue blocks until there's
something (i.e., a thread) at the other end to hand it off to directly. In
development or for small applications where you aren't too worried about
exhausting thread resources, this is fine. In production systems, though,
you want to be able to configure something other than direct handoff.
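
(A quick illustration of the zero-capacity behaviour, not from the javadoc
below: offer() fails immediately unless a consumer is already waiting.)

    import java.util.concurrent.SynchronousQueue;

    public class SyncQueueDemo {
        public static void main(String[] args) {
            SynchronousQueue<String> q = new SynchronousQueue<String>();
            // No capacity and no thread waiting to take: offer() returns
            // false right away instead of enqueuing the element.
            System.out.println(q.offer("task"));  // prints false
        }
    }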

Here is the relevant section from the TPE javadoc
(http://java.sun.com/javase/6/docs/api/java/util/concurrent/ThreadPoolExecutor.html):
---
Any BlockingQueue
(http://java.sun.com/javase/6/docs/api/java/util/concurrent/BlockingQueue.html)
may be used to transfer and hold submitted tasks. The use of this queue
interacts with pool sizing:

   - If fewer than corePoolSize threads are running, the Executor always
   prefers adding a new thread rather than queuing.
   - If corePoolSize or more threads are running, the Executor always
   prefers queuing a request rather than adding a new thread.
   - If a request cannot be queued, a new thread is created unless this
   would exceed maximumPoolSize, in which case, the task will be rejected.

There are three general strategies for queuing:

   1. *Direct handoffs.* A good default choice for a work queue is a
   SynchronousQueue
   (http://java.sun.com/javase/6/docs/api/java/util/concurrent/SynchronousQueue.html)
   that
   hands off tasks to threads without otherwise holding them. Here, an attempt
   to queue a task will fail if no threads are immediately available to run it,
   so a new thread will be constructed. This policy avoids lockups when
   handling sets of requests that might have internal dependencies. Direct
   handoffs generally require unbounded maximumPoolSizes to avoid rejection of
   new submitted tasks. This in turn admits the possibility of unbounded thread
   growth when commands continue to arrive on average faster than they can be
   processed.
   2. *Unbounded queues.* Using an unbounded queue (for example a
   LinkedBlockingQueue
   (http://java.sun.com/javase/6/docs/api/java/util/concurrent/LinkedBlockingQueue.html)
   without
   a predefined capacity) will cause new tasks to wait in the queue when all
   corePoolSize threads are busy. Thus, no more than corePoolSize threads will
   ever be created. (And the value of the maximumPoolSize therefore doesn't
   have any effect.) This may be appropriate when each task is completely
   independent of others, so tasks cannot affect each other's execution; for
   example, in a web page server. While this style of queuing can be useful in
   smoothing out transient bursts of requests, it admits the possibility of
   unbounded work queue growth when commands continue to arrive on average
   faster than they can be processed.
   3. *Bounded queues.* A bounded queue (for example, an ArrayBlockingQueue:
   http://java.sun.com/javase/6/docs/api/java/util/concurrent/ArrayBlockingQueue.html)
   helps prevent resource exhaustion when used with finite maximumPoolSizes,
   but can be more difficult to tune and control. Queue sizes and maximum pool
   sizes may be traded off for each other: Using large queues and small pools
   minimizes CPU usage, OS resources, and context-switching overhead, but can
   lead to artificially low throughput. If tasks frequently block (for example
   if they are I/O bound), a system may be able to schedule time for more
   threads than you otherwise allow. Use of small queues generally requires
   larger pool sizes, which keeps CPUs busier but may encounter unacceptable
   scheduling overhead, which also decreases throughput.

---

(Tim writing again:)

In summary:

   - SynchronousQueues should use unbounded max pool size, risk unbounded
   thread pool growth.
   - Unbounded work queues *ignore* max pool size, risk unbounded work queue
   growth.
   - With bounded work queues there are two ways to go:
  1. Large queues/small pools, risk artificially low throughput when
  many tasks are I/O bound.
  2. Small queues/large pools, risk high scheduling overhead, decreased
  throughput.

If tasks are interdependent, you want to avoid long queues and small pools,
because of the risk that a task will get stuck behind a task that depends on
it.

So I think the safest default is Executors.newCachedThreadPool, as long as
there's a way to provide a different ExecutorService instance for BaseHelper
to use.
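
To make the trade-off concrete, here is a small, self-contained sketch (pool
and queue sizes are invented for illustration, not recommendations):

    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.ThreadPoolExecutor;
    import java.util.concurrent.TimeUnit;

    public class PoolChoices {
        public static void main(String[] args) {
            // Direct handoff: SynchronousQueue with an effectively unbounded
            // maximum pool size (risk: unbounded thread growth under load).
            ExecutorService cached = Executors.newCachedThreadPool();

            // Bounded work queue with a finite pool: tasks queue up to 100,
            // then extra threads are created up to the max of 16, and beyond
            // that new tasks are rejected.
            ExecutorService bounded = new ThreadPoolExecutor(
                    4, 16, 60L, TimeUnit.SECONDS,
                    new ArrayBlockingQueue<Runnable>(100));

            cached.shutdown();
            bounded.shutdown();
        }
    }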

How 

RE: ClientResource leaves inactive thread

2010-07-02 Thread Jerome Louvel
Hi Nina,

 

We had some issues with our automated build (fixed now), so I would
recommend trying the latest snapshot again if you still have the issue.

 

If you want to prevent automatic thread creation, you can either:

1. Create a Client(Protocol.HTTP) instance and attach it to each
ClientResource via setNext()

2. Use another HTTP connector such as the Apache HTTP Client by adding
org.restlet.ext.httpclient.jar to your classpath (+ dependencies)
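
For example, option 1 might look like the following minimal sketch
(hypothetical, not from this thread; the shared connector is stopped once all
resources are done with it):

    import org.restlet.Client;
    import org.restlet.data.Protocol;
    import org.restlet.representation.Representation;
    import org.restlet.resource.ClientResource;

    public class SharedClientExample {
        public static void main(String[] args) throws Exception {
            // One shared connector instead of one per ClientResource.
            Client client = new Client(Protocol.HTTP);

            ClientResource resource = new ClientResource("http://restlet.org");
            resource.setNext(client); // reuse the shared connector
            Representation rep = resource.get();
            rep.release();

            client.stop(); // release the connector's threads when fully done
        }
    }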

 

The fix in the latest snapshot should however take care of collecting the
automatically created client connectors/threads.

 

Best regards,
Jerome Louvel
--
Restlet ~ Founder and Technical Lead ~ http://www.restlet.org
Noelios Technologies ~ http://www.noelios.com

 

 

 

 

From: Nina Jeliazkova [mailto:n...@acad.bg]
Sent: Tuesday, June 29, 2010 1:50 PM
To: discuss@restlet.tigris.org
Subject: Re: ClientResource leaves inactive thread

 

Tim Peierls wrote: 

On Thu, Jun 24, 2010 at 11:52 AM, Nina Jeliazkova n...@acad.bg wrote:

Tim Peierls wrote: 

What was the date of that snapshot? It looks like there's a fix as of June
11, revision 6696 in svn.

Not sure about the date, it's the snapshot, available in the maven
repository,
http://maven.restlet.org/org/restlet/jee/org.restlet/2.0-SNAPSHOT/org.restlet-2.0-SNAPSHOT.pom

 

So it depends on when you downloaded it, since the snapshot changes.

 

The snapshot was downloaded (via maven) and tested June 22 (first post in
this thread).

Today's snapshot seems to have the thread leak fixed.



 

I would actually prefer a configurable ClientResource, to be able to switch
on/off launching separate threads - does this already exist? 

 

Have you tried using something other than the internal connector?

The leak was found when running some service under Tomcat, and then
reported to me, so it's not specific to the internal connector.

Regards,
Nina



 

--tim

--
http://restlet.tigris.org/ds/viewMessage.do?dsForumId=4447&dsMessageId=2628674

Re: ClientResource leaves inactive thread

2010-07-02 Thread Tal Liron
As long as you're part of the decision-making process for Restlet,
I'm OK with it.

 The caveat is that people don't always understand how to use the 
 configuration parameters of ThreadPoolExecutor. There was an exchange 
 on the concurrency-interest mailing list recently that brought this 
 home to me. For example, it seems that a lot of people think of 
 corePoolSize as minPoolSize, the opposite of maxPoolSize, which is the 
 wrong way to think about it. A conservative default in Restlet is 
 probably better than a user configuration based on a misunderstanding.
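
(A tiny sketch, not from that exchange, makes the point: corePoolSize is a
lazy target, not a pre-started minimum.)

    import java.util.concurrent.LinkedBlockingQueue;
    import java.util.concurrent.ThreadPoolExecutor;
    import java.util.concurrent.TimeUnit;

    public class CoreSizeDemo {
        public static void main(String[] args) {
            ThreadPoolExecutor tpe = new ThreadPoolExecutor(
                    4, 8, 60L, TimeUnit.SECONDS,
                    new LinkedBlockingQueue<Runnable>());
            // Threads are created lazily, one per submitted task,
            // so the pool starts empty despite corePoolSize=4.
            System.out.println(tpe.getPoolSize()); // 0
            tpe.prestartAllCoreThreads();          // explicitly start them
            System.out.println(tpe.getPoolSize()); // 4
            tpe.shutdown();
        }
    }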

 --tim

--
http://restlet.tigris.org/ds/viewMessage.do?dsForumId=4447&dsMessageId=2628698


Re: ClientResource leaves inactive thread

2010-06-25 Thread Nina Jeliazkova
Tim Peierls wrote:
 What was the date of that snapshot? It looks like there's a fix as of
 June 11, revision 6696 in svn.
Not sure about the date, it's the snapshot, available in the maven
repository, 
http://maven.restlet.org/org/restlet/jee/org.restlet/2.0-SNAPSHOT/org.restlet-2.0-SNAPSHOT.pom


I would actually prefer a configurable ClientResource, to be able to
switch on/off launching separate threads - does this already exist?

Best regards,
Nina

 But (talking to Jerome and Thierry now) I'm a little worried that this
 fix isn't really addressing the heart of the problem. In particular,
 the use of a thread pool per BaseHelper instance prevents efficient
 re-use of threads in the JVM. 

 Also, defaulting coreThreads=1 and maxThreads=255 with a
 SynchronousQueue seems like it's asking for trouble with CPU count >
 255. What about a bounded queue with high capacity to get through the
 bursts, but keep the pool size to some small multiple of the CPU
 count? Remember that core size is _not_ really min size.

 (And a minor nit: BaseHelper.workerService is a volatile instance
 field, so visibility isn't a problem, but there are some atomicity
 issues -- at init time and shutdown time. Fixing the latter just means
 copying the volatile value to a local variable before testing and using
 it. Fixing the former ... needs some thought. Maybe it's OK as is.)
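
(A sketch of that copy-to-local idiom; the field and method here are invented
for illustration, not Restlet's actual code:)

    import java.util.concurrent.ExecutorService;

    public class HelperSketch {
        private volatile ExecutorService workerService;

        public void stop() {
            ExecutorService ws = workerService; // read the volatile once
            if (ws != null) {                   // test and use the same reference
                ws.shutdown();
            }
        }
    }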

 --tim

 On Tue, Jun 22, 2010 at 6:02 AM, Nina Jeliazkova n...@acad.bg wrote:

 Hello All,

 I am experiencing a memory/thread leak with Restlet-2.0-RC4 and
 Restlet-2.0-SNAPSHOT when using ClientResource. Basically,
 ClientResource doesn't close the thread it spawns, and this results
 in a number of inactive threads and a severe memory leak.

 Here is some very simple code to illustrate this behaviour. The
 same code runs fine in Restlet-2.0-M6 (which doesn't spawn a new
 thread in ClientResource).

 public void run(int instances) throws Exception {
     for (int i = 0; i < instances; i++) {
         ClientResource clientResource = null;
         Representation r = null;
         try {
             clientResource = new ClientResource("http://restlet.org");
             r = clientResource.get();
         } finally {
             try { r.release(); } catch (Exception x) {}
             try { clientResource.release(); } catch (Exception x) {}
         }
     }
 }

 public static void main(String[] args) throws Exception {
     ThreadTest test = new ThreadTest();
     test.run(1000);
 }


 I guess there might be something missing in the code to explicitly
 close threads, but since the same code runs fine in M6, it is
 quite confusing to experience leaks after the upgrade.

 Best regards,
 Nina Jeliazkova

 P.S. Inactive threads while executing the example above




--
http://restlet.tigris.org/ds/viewMessage.do?dsForumId=4447&dsMessageId=2625730