Gerhard Froehlich wrote:

>>From: Sylvain Wallez [mailto:[EMAIL PROTECTED]]
>>
>>
>>Berin Loritsch wrote:
>>
>>>Sylvain Wallez wrote:
>>>
>>>
> 
> <skip/>
> 
>>>Keep in mind that I am also working on an asynchronous command structure for
>>>Avalon, something that ActiveMonitor, Pools, etc. would all be able to take
>>>advantage of.  The basic gist is this:
>>>
>>>There are a number of maintenance/management tasks that components have to
>>>perform at periodic intervals, like ActiveMonitor, DataSourcePools, Cache
>>>implementations, etc.  The problem is that having a whole thread for control
>>>management of each of these items is too heavy, as is performing the
>>>maintenance synchronously.
>>>
>>Have you looked at java.util.Timer in JDK 1.3? AFAIK, it is meant for
>>this kind of thing.
>>
> 
> AIUI it forces a separate (non-daemon) thread for every TimerTask. When you
> have many maintenance tasks you will have a large thread pool, and that would
> be a performance issue again.


Actually, java.util.Timer uses a priority queue of TimerTasks, ordered 
by the next execution time, and the timer thread can be run as a daemon. 
A quote from JDK 1.3.1 javadocs:

"Implementation note: This class scales to large numbers of concurrently 
scheduled tasks (thousands should present no problem). Internally, it 
uses a binary heap to represent its task queue, so the cost to schedule 
a task is O(log n), where n is the number of concurrently scheduled tasks."
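
For example (just a sketch, with a made-up placeholder task), a single daemon
Timer can carry any number of periodic maintenance tasks:

    import java.util.Timer;
    import java.util.TimerTask;

    public class MaintenanceTimerDemo {
        public static void main(String[] args) throws InterruptedException {
            // One daemon timer thread serves all scheduled tasks;
            // it will not keep the JVM alive on shutdown.
            Timer timer = new Timer(true);

            // Hypothetical maintenance task; an ActiveMonitor or a pool
            // could schedule something similar instead of owning a thread.
            TimerTask checkSources = new TimerTask() {
                public void run() {
                    System.out.println("checking monitored resources...");
                }
            };

            // Run every 10 seconds, starting after 10 seconds.
            timer.schedule(checkSources, 10000, 10000);

            Thread.sleep(60000); // keep the demo JVM around for a minute
        }
    }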


> The approach here is that you have a queue into which you put your events, and
> at the end of this queue there are maybe 1-3 threads which pop the
> events from the queue and hand them over to the corresponding EventHandler.
> 
> Therefore you can have as many tasks as you want, but the number of threads
> is small...


This is exactly the idea and one of the key concepts of SEDA.

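Just to illustrate the shape of it (the names below are made up, not the
actual Avalon/Excalibur API), a command/event queue drained by a small fixed
set of worker threads could look roughly like this. For brevity each worker
dispatches to a single handler; in practice each event would be routed to its
own handler:

    import java.util.LinkedList;

    // Hypothetical interface, just to sketch the queue idea.
    interface EventHandler {
        void handleEvent(Object event);
    }

    // Unbounded FIFO queue with blocking dequeue (wait/notify, JDK 1.3 style).
    class EventQueue {
        private final LinkedList queue = new LinkedList();

        public synchronized void enqueue(Object event) {
            queue.addLast(event);
            notify(); // wake one waiting worker
        }

        public synchronized Object dequeue() throws InterruptedException {
            while (queue.isEmpty()) {
                wait();
            }
            return queue.removeFirst();
        }
    }

    // A handful of these threads can serve any number of producers.
    class Worker extends Thread {
        private final EventQueue queue;
        private final EventHandler handler;

        Worker(EventQueue queue, EventHandler handler) {
            this.queue = queue;
            this.handler = handler;
            setDaemon(true);
        }

        public void run() {
            try {
                while (true) {
                    handler.handleEvent(queue.dequeue());
                }
            } catch (InterruptedException e) {
                // shutting down
            }
        }
    }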

> 
> 
>>>Part of the lessons learned from the Staged Event Driven Architecture is that
>>>many times, you only need a small number of threads (in this case, one) to
>>>handle _all_ of your management tasks.  The SEDA architecture goes so far as
>>>to apply this to all I/O as it is asynchronous in that architecture.  The
>>>scalability of the solution is insane.  For instance, a plain HTTP server
>>>built on top of the SEDA architecture was able to handle 10,000 simultaneous
>>>connections and outperform Apache httpd, using IBM JDK 1.1.8 on a Linux box
>>>with four 500 MHz processors and a gig of RAM.  Apache only made it to about
>>>512 simultaneous connections due to process-per-user limitations--not to
>>>mention that several requests took much longer....
>>>
>>I definitely have to read this stuff...
>>
> 
> It's quite interesting...
> 
> 
>>>I am not proposing the SEDA architecture for Cocoon (although I might examine
>>>a simple HTTP implementation that supports Servlets later).....
>>>
>>>I am proposing the migration towards an infrastructure where maintenance
>>>commands can be performed asynchronously--and checking for source file
>>>changes is only one of those commands.
>>>
>>We have to identify which tasks need to be done periodically and which
>>don't.
>>
>>I would say that checking configuration file changes doesn't need to be
>>done asynchronously, for several reasons:
>>- in a production environment, these files aren't likely to change
>>often, so a periodic check is most often a waste of time,
>>- in a development environment, people who change these files
>>immediately issue a request to check the effect of the change. A
>>synchronous (possibly deferred) check is then IMO more adequate.
>>
>>Cache management, on the contrary, can be considered the other way:
>>even in a production environment, a periodic asynchronous check
>>effectively has something to do, namely purging cache entries associated
>>with changed resources. This isn't absolutely needed from a functional
>>point of view, but it frees some memory/disk resources earlier than the
>>MRU policy otherwise would.
>>
> 
> Be careful, or in the end you have a special solution for every special case.
> Personally I want something more generic, which fits most cases.


Without getting into Cocoon internals (not an expert, _yet_ ;), in the 
long run it would be nice to have a generic "scheduled task management" 
architecture, probably based on the event handling model. In addition to 
providing a consistent API, it should really help improve scalability 
and make things easier to manage (self-adjusting Pool and Cache 
implementations [with fewer threads], less external I/O, etc.).

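Purely hypothetical names (not existing Avalon/Excalibur interfaces), but the
kind of API I have in mind would let a Pool or Cache register a periodic
command with one shared scheduler instead of spawning its own thread:

    // Hypothetical API sketch -- just to show the registration model.
    interface Command {
        void execute() throws Exception;
    }

    interface CommandScheduler {
        // Run the command every 'period' ms on the shared worker threads.
        void schedulePeriodic(Command command, long period);
    }

    // Example client: a pool that trims idle entries without owning a thread.
    class SelfAdjustingPool {
        void register(CommandScheduler scheduler) {
            scheduler.schedulePeriodic(new Command() {
                public void execute() {
                    trimIdleEntries();
                }
            }, 30000); // e.g. every 30 seconds
        }

        void trimIdleEntries() {
            // release pooled instances that have been idle too long
        }
    }
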
(: Anrie ;)


> 
> 
>>>I am also going to supply a PoolResource implementation in the next version
>>>of Excalibur so that we can now *gasp* get the internal information of the
>>>Pool implementations of Cocoon's resources!  How about them apples?
>>>
>>Aahh, this would be really great :)
>>
>>
>>>But you have to start somewhere....
>>>
>>>
>>>>>If you want, I can remove this stuff for now, but only for a clean reboot of
>>>>>this issue ;-).
>>>>>
>>>>>
>>>>Let's keep it for now, until we decide if this is the right way to go or
>>>>not. I will try to spend some time showcasing the strategy I proposed, so
>>>>people can look at both.
>>>>
>>>:)
>>>
>>>Your solution works for the file issue, but the asynchronous maintenance
>>>issues are something completely different.
>>>
>>Agree. That's why we have to carefully identify those things that need
>>asynchronous management.
>>
>>BTW, I added a new components.source.DelayedLastModified and updated
>>sitemap.Handler to showcase deferred calls to Source.getLastModified().
>>Please have a look at it.
>>
>>
> 
>   Gerhard
> 
> "Three o'clock is always too late or too early for
> anything you want to do.
> (Jean-Paul Sartre)"