You can set the queue type on the client to POOLED and
then configure the pool with a boundary and a
when-blocked policy, e.g. block on put or run in the
calling thread, for when the boundary is reached and the
maximum number of threads is busy.  There is some
information on configuring the pools in the disk cache
documentation.  Basically, JCS wraps the util.concurrent
PooledExecutor.  There is a config example for the JDBC
disk cache here:

http://jakarta.apache.org/jcs/JDBCDiskCache.html

There is more at the bottom of this page:

http://jakarta.apache.org/jcs/IndexedDiskAuxCache.html
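
As a rough sketch, a pool definition in cache.ccf looks
something like the following (the pool name
"remote_queue" and all of the sizes are placeholders I
made up; check the property names against the pages
above):

  # hypothetical pool for the remote cache client event queue
  thread_pool.remote_queue.useBoundary=true
  thread_pool.remote_queue.boundarySize=500
  thread_pool.remote_queue.maximumPoolSize=10
  thread_pool.remote_queue.minimumPoolSize=2
  thread_pool.remote_queue.keepAliveTime=3500
  thread_pool.remote_queue.startUpSize=2
  # BLOCK makes puts wait when the boundary is hit;
  # RUN makes the calling thread do the work itself
  thread_pool.remote_queue.whenBlockedPolicy=BLOCK

The whenBlockedPolicy names map to the util.concurrent
PooledExecutor's blocked-execution handlers, so BLOCK
and RUN are the two you'd care about here.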


You'd set the EventQueueType attribute on the remote
cache client to POOLED and define a pool for it to use.
To verify that the config worked, call getStats and
check that different queue info is included.  I think
you'll also get an info log on startup saying what kind
of queue is used.
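
For the remote cache client itself, something along
these lines should hook it up to that pool (the
auxiliary name "RC", host, and port are placeholders,
and the class names are from memory, so verify them
against your working config):

  # hypothetical remote cache client auxiliary using the pooled queue
  jcs.auxiliary.RC=org.apache.jcs.auxiliary.remote.RemoteCacheFactory
  jcs.auxiliary.RC.attributes=org.apache.jcs.auxiliary.remote.RemoteCacheAttributes
  jcs.auxiliary.RC.attributes.RemoteHost=localhost
  jcs.auxiliary.RC.attributes.RemotePort=1102
  jcs.auxiliary.RC.attributes.EventQueueType=POOLED
  # must match the thread_pool name defined earlier
  jcs.auxiliary.RC.attributes.EventQueuePoolName=remote_queue

The EventQueuePoolName has to match the thread_pool
name; I believe it falls back to the default pool if the
name doesn't match.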

The idea is that every auxiliary queue in the system can
be configured either as a single (self-destroying)
thread or as pooled.
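
For example, a disk cache auxiliary can use a pooled
queue the same way (sketch values again):

  # hypothetical indexed disk cache auxiliary with a pooled event queue
  jcs.auxiliary.DC.attributes.EventQueueType=POOLED
  jcs.auxiliary.DC.attributes.EventQueuePoolName=disk_cache_event_queue

If you don't set EventQueueType at all, you get the
default single worker thread per queue, which kills
itself when it goes idle.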

Cheers,

Aaron

--- Dennis Jacobs <[EMAIL PROTECTED]> wrote:

> Greetings fellow JCS users,
> 
> I've got an issue with the queue for remote cache
> puts growing very large.
> Currently my system uses multiple separate apps
> using a shared filesystem
> for serialized object storage.  My plan is to
> replace the filesystem with a
> number of clustered standalone remote cache servers,
> and add thin JCS
> clients (no memory / disk) to my apps to access the
> remote cache stores.
> 
> One of the applications is responsible for building
> the objects and sending
> them to the remote servers.  The problem is that it
> is queuing puts locally
> much faster than the queue is flushed to the remote
> store, and the memory
> usage is growing too rapidly.  I have considered
> adding the remote server to
> this application instead of having it standalone,
> but I suspect the same
> problem will occur with the clustered server
> updates.
> 
> So - 2 questions:
> Is there a way to configure my client for this
> 'builder' application, either
> through cache attributes or thread pool:
> 
> 1)  to increase my remote cache put throughput - to
> limit the rate at which
> the queue grows?
> 
> 2)  to limit the size of the remote cache queue so
> that cache puts will
> block until the queue is flushed, such as a min/max
> remote queue size?
> 
> This particular client doesn't need to get objects
> from the cache, so get
> performance is not a factor.
> 
> Thanks in advance!
> 
> Sincerely,
> 
> Dennis Jacobs
> Esurg Corporation
> 
> 

