RE: [vote] David Legg as new Cocoon committer

2008-08-04 Thread Ard Schrijvers

 
 Please cast your votes:
 here's mine +1

+1 

-Ard

 
 --
 Best regards,
 Grzegorz Kossakowski
 


RE: Webdav and link-rewrite

2008-07-30 Thread Ard Schrijvers

 
  Apart from the link-rewrite block he will also migrate the 
  webdav block. 
  Any thoughts or recommendations on this? The plan is that all 
  Avalon components are migrated to Spring and that Jackrabbit 
  is used as webdav server.
  
  I remember Jasha and Jereon have started to work on this in 
  Rome last year. Any comments from you?
 
 Jeroen has looked further at the WebDAV block after the Rome 
 GT. The problem was the incompatibility between HttpClient 2 
 (used in Slide) and HttpClient 3 (used in the WebDAV block). 
 Jeroen still has an open Jira issue for this [1], but he's 
 enjoying a well deserved holiday now. 
 
 Ard, do you know how far the WebDAV implementation in Jackrabbit is?

Jackrabbit does not supply a WebDAV client, only a server, and that one
without DASL support AFAIK

-Ard

 
 [1] https://issues.apache.org/jira/browse/COCOON-2153
 
 Jasha Joachimsthal 
 
 www.onehippo.com
 Amsterdam - Hippo B.V. Oosteinde 11 1017 WT Amsterdam 
 +31(0)20-5224466 
 San Francisco - Hippo USA Inc. 101 H Street, suite Q Petaluma 
 CA 94952-3329 +1 (707) 773-4646
 
 


RE: [vote] Steven Dolg as committer

2008-07-29 Thread Ard Schrijvers

 
 Dear community,
 
 it's a great honor for me to propose Steven Dolg as a committer.
 

+1

-Ard

 


RE: [vote] Luca Morandini as new Cocoon committer

2008-07-29 Thread Ard Schrijvers

 David Crossley wrote:
  I would like to propose Luca Morandini as a new Cocoon 
 committer and 
  PMC member.
 
 +1 from me.

+1

-Ard

 
 -David
 


RE: [vote] Andreas Hartmann as new Cocoon committer

2008-07-29 Thread Ard Schrijvers


 
 I propose Andreas Hartmann as a new Cocoon committer and PMC member.
 

+1

-Ard


RE: [vote] Thorsten Scherler as new Cocoon committer

2008-07-29 Thread Ard Schrijvers

 
 I propose Thorsten Scherler as a new Cocoon committer and PMC member.


+1 

-Ard
 



RE: Eventcache dependency to JMS

2008-07-29 Thread Ard Schrijvers
Hello,

 
 Hey,
 
 I'm sorry for my delayed answer too ;-)
 
 Well, what I'm trying to gain is implementation independent blocks.
 I cleaned up the JMS block using Spring provided mechanisms 
 (templates) for message delivery.
 
 The sample block is now based on the ActiveMQ implementation 
 running in embedded mode (initialized via Spring namespace). 
 For demonstration purpose, it uses the 
 JMSEventMessageListener component for invalidating cached responses.
 
 As far as I get this, there are no dependencies between 
 eventcache and jms-impl at all?
 If we consider (re)placing the JMSEventMessageListener into 
 the jms-sample block, because it is a concrete subclass of 
 the AbstractMessageListener, we would get a more satisfying situation.

Doesn't the JMSEventMessageListener have a dependency on eventcache, or
no dependency at all? (It's been a while, so I might be off here...)

 
 The way you might go, consists of writing a concrete listener 
 in a separate block, using whatever other block (e.g. 
 eventcache) you might need.

I must also admit I haven't worked with 2.2 yet, only 2.1. Do I
understand correctly that we would end up with a jms block and a separate
event block, and if you want the event block to use jms, you need to
add a separate block containing the concrete listener impl? Or is it not
necessary to add a block...? 

 
 Can you live with that?

Honestly, you can judge it better than I can, because I am still too much
thinking about it and looking at it from the 2.1 days... :-). So if it makes
the architecture of the blocks better, go ahead

-Ard

 
 Regards,
 Lukas
 
 Ard Schrijvers schrieb:
  Hello Lukas,
  
   Sorry for my late response
  
  Hello,
 
  I'm wondering, why the eventcache block has dependencies 
 on the JMS 
  block and not the other way round?
  
   I do not know what we would gain by switching the 
  dependency around. 
   JMS seems to me more uncoupled from eventcache than 
  eventcache from jms.
  Perhaps I would like to use JMS listeners, while at the 
 same time I do 
  not have any eventcache at all. I would for example just use JMS 
  to...I don't know, trigger an email to send...
 
 
 
  
  For those who are familiar with these blocks, in my opinion the 
  JMSEventListener makes use of eventcache capabilities.
 
  So I would say, JMS provides callback support via eventcache.
 
  What do you think?
  
   It's been a while since I last worked with it, and I 
 suppose it is 
  targeted for Cocoon 2.2 where my knowledge is mainly 2.1.x, 
 so I might 
  be missing something. Anyway, it is not directly clear for 
 me what to 
  gain with this dependency switch
  
  -Ard
  
  Regards,
  Lukas
 
 
  
  
 
 
 


RE: [Vote] Jasha Joachimsthal as new Cocoon committer

2008-07-29 Thread Ard Schrijvers

 Hi,
 
 It's my pleasure to propose Jasha Joachimsthal as a new 
 committer on the Apache Cocoon project.
 

My unbiased +1

-Ard





RE: Eventcache dependency to JMS

2008-07-23 Thread Ard Schrijvers
Hello Lukas,

Sorry for my late response

 
 Hello,
 
 I'm wondering, why the eventcache block has dependencies on 
 the JMS block and not the other way round?

I do not know what we would gain by switching the dependency around. JMS
seems to me more uncoupled from eventcache than eventcache from jms.
Perhaps I would like to use JMS listeners, while at the same time I do
not have any eventcache at all. I would for example just use JMS to... I
don't know, trigger an email to send...

 
 For those who are familiar with these blocks, in my opinion 
 the JMSEventListener makes use of eventcache capabilities.
 
 So I would say, JMS provides callback support via eventcache.
 
 What do you think?

It's been a while since I last worked with it, and I suppose it is
targeted for Cocoon 2.2, where my knowledge is mainly 2.1.x, so I might
be missing something. Anyway, it is not directly clear to me what we would
gain with this dependency switch

-Ard

 
 Regards,
 Lukas
 
 


RE: Upgrade Cocoon 2.1 to ehcache 1.3

2008-06-04 Thread Ard Schrijvers

 
  Now that the vote to switch to a more recent baseline JDK 
 has passed, 
  I'd like to upgrade to ehcache 1.3.0, which gives us nicer shutdown 
  and jmx instrumentation.
 
 +1 (Please, please)

+0, we are using JCSCache anyway :-). Hopefully you can configure ehcache
1.3.0's maximum number of disk cachekeys (which used to be impossible, but
I vaguely recall it has since been implemented; IMO it should then be an
optional configuration to set the limit of max disk cachekeys). That was
one of the reasons I chose to switch to JCSCache

-Ard

 
 
 
 --
 Peter Hunsberger
 


RE: test

2008-05-22 Thread Ard Schrijvers

 Usually, if unsubscribe to a list doesn't work it's because 
 one's trying to unsubscribe from a different address than the 
 one used to subscribe.
 
 For more info, send a message to [EMAIL PROTECTED] - 
 that includes a command to unsubscribe a different address 
 than the sender's.

Indeed: try 

'[EMAIL PROTECTED]'

Where XX = the email address you want to unsubscribe, with the '@'
replaced by '='

You still need to reply to the confirmation

ard

 
 -Bertrand
 


Event validities in (event)caching pipelines

2008-05-14 Thread Ard Schrijvers
Hello,

Unfortunately, I am not aware of Cocoon 2.2.x specific (event)caching:
will it use a similar caching strategy as Cocoon 2.1.x, or is this still
under discussion? Also, I am not sure whether the webdav block, or event
validities, will be used/supported in Cocoon 2.2.x. If not, this mail is
pretty much redundant for those using Cocoon 2.2

Though, for those using the webdav block, or any other transformer making
use of event validities in event caching pipelines in combination with
2.1.x: did anybody ever experience concurrency problems with the webdav
transformer? We had some issues regarding the WebDAVTransformer which
started me thinking about how event validities are used within a
transformer in event caching pipelines. Basically, IMO, the use of event
validities in a transformer, like it is done for example in the
WebDAVTransformer, is flawed. There is no unique place for validation of
cached responses anymore. This can lead to cached pipelines that keep
using old cached responses which should have been evicted by an event
(for example via jms): the event does evict the entry, but the calling
pipeline has already cached the old response it fetched before the
eviction, while it was still in cache, embedding an EventValidity. In
other words, you end up with a valid old cached response that is never
validated again.

Anyway, before diving in too deep: perhaps there are no living souls
using the WebDAVTransformer and EventValidities at all, or nobody is
interested anyway. But, OTOH, if somebody is having these headaches about
some unpredictable WebDAVTransformer behavior, I might be able to help
out. Just let me know,

-Ard



RE: JNet integration

2008-03-27 Thread Ard Schrijvers
  To be honest, I don't care about caching or how complex it 
 is. It has 
  to work and it does it nicely in Cocoon. If your name isn't 
 Ard ( ;-) 

:-)

  ) you usually don't need to know more details. And that's 
 what it is 
  as Vadim pointed out: implementation details.
  
 My name is not Ard but I care; knowing how the caching works 
 and fine tuning it (by changing parameters or choosing the 
 right strategy) makes a big difference.

I agree that you should care, Carsten, but you have been a Cocoon
committer for the last zillion years: should a fresh new cocoon user know
from the start how to do everything correctly to get the best out of
Cocoon's caching? 

I do not agree that the former Cocoon caching was ultra complex (to use!!): I
started working with Cocoon, and only after some months I noticed:
heeey, what is that caching vs noncaching doing at the top of my
pipelines :-)  So... how hard and complex is it? I used it without ever
noticing it

And in the end, when you want to push Cocoon to its limits and you hit
some performance issues, then you can take a look at how the caching
actually works: that is IMHO the way it should be implemented, and
how it used to be implemented

-Ard

 
 Carsten
 


[jira] Commented: (COCOON-2153) DaslTransformer fails because of conflicting http client dependencies

2007-12-07 Thread Ard Schrijvers (JIRA)

[ 
https://issues.apache.org/jira/browse/COCOON-2153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12549394
 ] 

Ard Schrijvers commented on COCOON-2153:


slide is closed, so updating the slide webdav client is not an option 
AFAICS... and there are no other complete (apache) webdav clients that I am 
aware of. Jackrabbit has an incomplete one. Interesting reads about this:

http://www.mail-archive.com/[EMAIL PROTECTED]/msg13631.html
http://wiki.apache.org/jakarta/TLPHttpComponents
http://www.mail-archive.com/[EMAIL PROTECTED]/msg02691.html
http://www.mail-archive.com/[EMAIL PROTECTED]/msg05509.html


 DaslTransformer fails because of conflicting http client dependencies
 -

 Key: COCOON-2153
 URL: https://issues.apache.org/jira/browse/COCOON-2153
 Project: Cocoon
  Issue Type: Bug
  Components: Blocks: WebDAV
Affects Versions: 2.2-dev (Current SVN)
Reporter: Jeroen Reijn
Assignee: Jeroen Reijn
 Fix For: 2.2-dev (Current SVN)


 While using the webdav block in your cocoon 2.2 application, the 
 DaslTransformer will fail with a NoSuchMethod exception on 
 setCredentials in the DASL transformer. After further investigation it 
 appeared that the 'slide' dependency has a dependency on commons-httpclient 
 version 2.0.2 and the webdav block has a dependency on 3.0.1. These two 
 dependencies conflict with each other.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (COCOON-2152) EventAware cache does not persist correctly when using the StoreEventRegistryImpl

2007-11-28 Thread Ard Schrijvers (JIRA)

[ 
https://issues.apache.org/jira/browse/COCOON-2152?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12546223
 ] 

Ard Schrijvers commented on COCOON-2152:


Since you found the problem, my comment from COCOON-2146 can be discarded. I 
merely stated there that for 2.1.8 cocoon versions it used to work. Since 
we have our own implementation for it, I wasn't aware that it actually changed. 
Anyway, good to know you found the problem!

 EventAware cache does not persist correctly when using the 
 StoreEventRegistryImpl
 -

 Key: COCOON-2152
 URL: https://issues.apache.org/jira/browse/COCOON-2152
 Project: Cocoon
  Issue Type: Bug
  Components: Blocks: Event Cache
Affects Versions: 2.1.10, 2.1.11-dev (Current SVN), 2.2-dev (Current SVN)
Reporter: Ellis Pritchard

 When using the DefaultEventRegistryImpl the functionality now works as 
 expected (events are persisted and restored) after the patch applied in 
 COCOON-2146.
 However, there's still a problem with StoreEventRegistryImpl.
 The behaviour is that it doesn't seem to actually write/restore any event 
 entries: the maps in the EventRegistryDataWrapper are empty (but not null) 
 after restart, even though the actual cache entry (key EVENTREGWRAPPER) was 
 found in the Store, and the entries were present when persist() was called.
 The effect of this is to correctly restore the cached entries, but discard 
 all the events, which means that event-flushes don't work any more, which is 
 not a good thing.
 I've tracked this down to the fact that 
 AbstractDoubleMapEventRepository#dispose() which performs the persist(), then 
 immediately clear()s the maps, WHICH HAVEN'T YET BEEN WRITTEN TO DISK BY 
 EHCACHE SHUTDOWN!
 This code has probably never worked :)
 Patches to follow; I propose modifying dispose() to null the map fields, but 
 not perform clear() on them.




[jira] Commented: (COCOON-2146) Using EventAware cache implementation breaks persistent cache restore on restart

2007-11-27 Thread Ard Schrijvers (JIRA)

[ 
https://issues.apache.org/jira/browse/COCOON-2146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12545761
 ] 

Ard Schrijvers commented on COCOON-2146:


Presumably though, if site-usage is fairly uniform, stuff that expires and is 
still referenced from AbstractDoubleMapEventRegistry will get regenerated again 
at some fairly near point, and replace the old entry with the new one, allowing 
it to be freed by the JVM? Only if you're generating lots of unique pages or 
short-lived pages and caching them for a long time may it become a 'real' 
problem? 

Yes, you would think you absolutely had a point here... but, unfortunately, I 
found out that the MultiHashMap does *not* act that way!! (Note, it has been 
a year, and I am not in a position now to dive into the code again, so this 
is from the top of my head): 

If you have a MultiHashMap and you put something like  

m_keyMMap.put(key1, value1); multiple times, your map grows! Put it 5 times 
and you'll have value1 5 times in it (obviously, from a performance POV this 
is easy to understand)

So, exactly what you describe above will result in OOM. Even if you have only 
a few possible links, resulting in limited cachekeys, you'll run into OOM in 
the end... I am sorry :-) 
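To illustrate the duplicate-accumulation behaviour described above, here is a minimal sketch using plain JDK collections (this is not the actual commons-collections MultiHashMap nor any Cocoon code; all names are made up):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch of a multimap that, like the old commons-collections MultiHashMap,
// does NOT deduplicate values per key: every put() simply appends.
public class MultiMapGrowth {

    private final Map<String, List<String>> map = new HashMap<>();

    public void put(String key, String value) {
        // blindly append, so repeated (key, value) pairs accumulate
        map.computeIfAbsent(key, k -> new ArrayList<>()).add(value);
    }

    public int size(String key) {
        List<String> values = map.get(key);
        return values == null ? 0 : values.size();
    }

    public static void main(String[] args) {
        MultiMapGrowth m = new MultiMapGrowth();
        // re-registering the same event for the same cachekey five times...
        for (int i = 0; i < 5; i++) {
            m.put("key1", "value1");
        }
        // ...leaves value1 in the map five times: the registry only ever grows
        System.out.println(m.size("key1"));
    }
}
```

Since nothing removes the stale duplicates, every re-cache of a page adds another entry, which is exactly the unbounded growth that ends in OOM.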

Anyway, the WeakReference solution is capable of being persisted, but needs 
some extra work at restart. OTOH, notification from ehcache via some 
listeners might be a solution as well, though 

1) it has to be independent of ehcache
2) I still think a DoubleMap is really not the way this should be solved. I 
would always use WeakReferences for this kind of problem, because the 
overhead of keeping the maps in sync is not needed. 



 Using EventAware cache implementation breaks persistent cache restore on 
 restart
 

 Key: COCOON-2146
 URL: https://issues.apache.org/jira/browse/COCOON-2146
 Project: Cocoon
  Issue Type: Bug
  Components: Blocks: Event Cache
Affects Versions: 2.1.10
Reporter: Ellis Pritchard
Assignee: Jörg Heinicke
Priority: Minor
 Fix For: 2.1.11-dev (Current SVN)

 Attachments: patch.txt


 In revision 412307 (Cocoon 2.1.10), AbstractDoubleMapEventRegistry and 
 EventRegistryDataWrapper were changed (without an informative SVN comment!) 
 to use the commons MultiValueMap instead of the MultiHashMap; I presume this 
 was done in good faith because the latter map is deprecated and will be 
 removed from Apache commons-collections 4.0
 However, as a result, the persistent cache cannot be restored if the 
 EventAware cache implementation is used, since MultiValueMap is not 
 Serializable! The old MultiHashMap was...
 Depending on whether StoreEventRegistryImpl or DefaultEventRegistryImpl is 
 used, either the event cache index is never written (ehcache doesn't store 
 non-serializable objects on disk), or a java.io.NotSerializableException is 
 thrown (and caught, causing a full cache-clear) when attempting to restore 
 the event cache index.
 This is Major for us, since we use Event-based caching a lot, and this is 
 causing the *entire* cache to no longer persist across restarts (it's been 
 like that for 8 months, since I upgraded Cocoon to 2.1.10 in the last week I 
 was working here, and now I'm back, they've actually noticed!!)
 Work-around at the moment is to down-grade AbstractDoubleMapEventRegistry and 
 EventRegistryDataWrapper to the 2.1.9 versions (pre-412307), which works so 
 long as Apache-commons 3.x is still in use.




[jira] Commented: (COCOON-2151) Sub-optimal implementation of AbstractDoubleMapEventRegistry

2007-11-27 Thread Ard Schrijvers (JIRA)

[ 
https://issues.apache.org/jira/browse/COCOON-2151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12545763
 ] 

Ard Schrijvers commented on COCOON-2151:


I commented on COCOON-2146, I'll repeat my text over here:


Presumably though, if site-usage is fairly uniform, stuff that expires and is 
still referenced from AbstractDoubleMapEventRegistry will get regenerated again 
at some fairly near point, and replace the old entry with the new one, allowing 
it to be freed by the JVM? Only if you're generating lots of unique pages or 
short-lived pages and caching them for a long time may it become a 'real' 
problem? 

Yes, you would think you absolutely had a point here... but, unfortunately, I 
found out that the MultiHashMap does *not* act that way!! (Note, it has been 
a year, and I am not in a position now to dive into the code again, so this 
is from the top of my head):

If you have a MultiHashMap and you put something like

m_keyMMap.put(key1, value1); multiple times, your map grows! Put it 5 times 
and you'll have value1 5 times in it (obviously, from a performance POV this 
is easy to understand)

So, exactly what you describe above will result in OOM. Even if you have only 
a few possible links, resulting in limited cachekeys, you'll run into OOM in 
the end... I am sorry :-)

Anyway, the WeakReference solution is capable of being persisted, but needs 
some extra work at restart. OTOH, notification from ehcache via some 
listeners might be a solution as well, though

1) it has to be independent of ehcache
2) I still think a DoubleMap is really not the way this should be solved. I 
would always use WeakReferences for this kind of problem, because the 
overhead of keeping the maps in sync is not needed.

 Sub-optimal implementation of AbstractDoubleMapEventRegistry
 

 Key: COCOON-2151
 URL: https://issues.apache.org/jira/browse/COCOON-2151
 Project: Cocoon
  Issue Type: Improvement
  Components: Blocks: Event Cache
Affects Versions: 2.1.10, 2.2-dev (Current SVN)
Reporter: Jörg Heinicke

 This is just a follow-up from COCOON-2146 where Ard pointed out some issues 
 with AbstractDoubleMapEventRegistry. I just didn't want to lose the 
 information when I actually fixed the issue. So I will add it here.




[jira] Commented: (COCOON-2151) Sub-optimal implementation of AbstractDoubleMapEventRegistry

2007-11-27 Thread Ard Schrijvers (JIRA)

[ 
https://issues.apache.org/jira/browse/COCOON-2151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12545767
 ] 

Ard Schrijvers commented on COCOON-2151:


Only if you're generating lots of unique pages or short-lived pages and 
caching them for a long time may it become a 'real' problem

Sorry, one more addition on this line: it is actually vice versa: when you 
cache for a short period of time, you'll run into a 'real' problem much 
faster. Every time a page is evicted from the cache by its TTL (time to 
live) or LRU/MRU (you'll hit OOM faster when you have a small possible 
number of cache entries) and is reloaded, an extra cachekey and event is 
added to the double maps! 

So, the smaller your TTL, the faster you'll run into OOM... sounds like a 
contradiction, I admit :-)

ps If you are suffering from OOM because of a growing registry (set TTL low, 
crawl your site several times and use some profiler to watch memory) and you 
don't want the hassle of WeakReferences with JCSCache (it will take you a few 
weeks), you can have a lightweight improvement which delays the OOM pretty 
much:

Instead of storing the events as strings in the double maps, you can easily 
use the hashcode of the events. This already reduces memory consumption 
pretty much. In the absolutely rare case that two hashcodes coincide, all 
that happens is a 'not necessary' removal of a cachekey. 
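A rough sketch of that hashcode idea (illustrative only, not Cocoon code; the class and method names are made up):

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Sketch: register event hashcodes instead of event strings. An int key is
// much smaller than a stored String event. A hashcode collision merely
// causes an unnecessary cachekey removal, never a stale cache entry.
public class HashcodeEventRegistry {

    private final Map<Integer, Set<String>> eventToKeys = new HashMap<>();

    public void register(String event, String cacheKey) {
        // store the (small) int hashcode, not the event string itself
        eventToKeys.computeIfAbsent(event.hashCode(), h -> new HashSet<>())
                   .add(cacheKey);
    }

    // cachekeys to evict for this event (possibly a few extra on collision)
    public Set<String> keysFor(String event) {
        return eventToKeys.getOrDefault(event.hashCode(), Set.of());
    }
}
```

The worst case of a collision is evicting a cachekey that did not need to be evicted, which only costs a regeneration, never serves stale content.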

Furthermore, you might hook in something similar to the StoreJanitor for the 
event registry: i.e., every 10 or 60 seconds, check the maps, and if 
something is found like 

[key1, {value1, value1, value2, value1, value2}] replace it by 

[key1, {value1, value2}]

I did something similar on putting a key in, because I had a growing multimap 
with WeakReference(null) entries after the cachekeys were removed.
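Such a janitor pass could look roughly like this (a sketch with made-up names, assuming the registry is a plain key-to-list multimap):

```java
import java.util.ArrayList;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Map;

// Hypothetical janitor pass: collapse duplicate values per key in a
// multimap, e.g. {value1, value1, value2, value1, value2} -> {value1, value2}.
public class RegistryJanitor {

    public static void dedupe(Map<String, List<String>> multimap) {
        for (Map.Entry<String, List<String>> e : multimap.entrySet()) {
            // LinkedHashSet drops duplicates while keeping insertion order
            e.setValue(new ArrayList<>(new LinkedHashSet<>(e.getValue())));
        }
    }
}
```

Run periodically (like the StoreJanitor does for stores), this bounds the per-key growth without touching the cache itself.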

Anyway, I think there is a lightweight solution possible which works pretty 
well... but I still don't like it :-)

 Sub-optimal implementation of AbstractDoubleMapEventRegistry
 

 Key: COCOON-2151
 URL: https://issues.apache.org/jira/browse/COCOON-2151
 Project: Cocoon
  Issue Type: Improvement
  Components: Blocks: Event Cache
Affects Versions: 2.1.10, 2.2-dev (Current SVN)
Reporter: Jörg Heinicke

 This is just a follow-up from COCOON-2146 where Ard pointed out some issues 
 with AbstractDoubleMapEventRegistry. I just didn't want to lose the 
 information when I actually fixed the issue. So I will add it here.




[jira] Commented: (COCOON-2151) Sub-optimal implementation of AbstractDoubleMapEventRegistry

2007-11-27 Thread Ard Schrijvers (JIRA)

[ 
https://issues.apache.org/jira/browse/COCOON-2151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12545809
 ] 

Ard Schrijvers commented on COCOON-2151:


The problem with (1) is that it did not work for me for ehcache when also 
using the filesystem cache (which I always wanted to have). You can configure 
caches like ehcache or jcscache to use a memory-only cache, or to overflow to 
disk. When overflowing to disk was enabled, ehcache in combination with the 
WeakReference did not work. I investigated ehcache, but could not find the 
reason. Since ehcache at that moment did not offer a way to limit the number 
of filesystem entries (I think they have enabled it since), and jcscache did, 
and jcscache worked with the WeakReferences, I chose my solution with the 
WeakReferences. 

Do realize, however, that I built my solution on the fact that jcscache 
honoured references (overflowing cache entries to disk kept the same 
references for a cachekey), but this is *not* a general specification of how 
a cache works. Therefore, although my solution works very well, and sites 
never went down anymore because of OOM, I do not think the solution is strong 
enough as standard cocoon code

The problem with (2) is that if you serialize the WeakReferences registry, 
you only serialize the string cachekeys and string events. At startup, you 
have to traverse the entire cache to recreate the correct WeakReferences, 
which might imply slow restarts for persistent caches.

So, the problem is really quite hard to solve properly in cocoon core, if you 
ask me. I can help out with pointers, but am really occupied ATM with other 
things, so am not in a position to contribute code (except for the code I 
wrote). 

Perhaps the easiest way to solve the current problem is a background thread 
which from time to time checks the multihashmaps for duplicate values like 
[{key1}, {value1, value1, value1}] and removes these. Use hashcodes for the 
events to reduce memory a lot (remember, the cachekey strings are already 
present in the jvm because of the cache, so they shouldn't take a lot of 
extra memory...). Then, the only other thing the background thread (similar 
to the StoreJanitor) should do is check for each key in the multivaluemap 
whether it is still somewhere in the cache, and if not, clean up its values. 
There is one pretty obvious con: testing for the presence of a key in the 
cache influences 

1) the TTL (the entry is touched again)
2) the logical LRU/MRU eviction policy (it messes it up)
Sorry for only giving pointers and not having the time to help with the 
implementation

ps Also, whirlycache might be worth looking at by the way (jcscache is very 
hard to configure to start with) 

 Sub-optimal implementation of AbstractDoubleMapEventRegistry
 

 Key: COCOON-2151
 URL: https://issues.apache.org/jira/browse/COCOON-2151
 Project: Cocoon
  Issue Type: Improvement
  Components: Blocks: Event Cache
Affects Versions: 2.1.10, 2.2-dev (Current SVN)
Reporter: Jörg Heinicke

 This is just a follow-up from COCOON-2146 where Ard pointed out some issues 
 with AbstractDoubleMapEventRegistry. I just didn't want to lose the 
 information when I actually fixed the issue. So I will add it here.




[jira] Commented: (COCOON-2146) Using EventAware cache implementation breaks persistent cache restore on restart

2007-11-27 Thread Ard Schrijvers (JIRA)

[ 
https://issues.apache.org/jira/browse/COCOON-2146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12545962
 ] 

Ard Schrijvers commented on COCOON-2146:


Hmm, I am really confident that persistent caches with the event registry 
used to work (we have used it for years until I disabled it... but the last 
time I used it was with 2.1.8, so not sure if something changed :-) ). A 
restart did use to work. Are you really sure that you have a restart-persistent 
cache configuration? 

I've tracked this down to the fact that 
AbstractDoubleMapEventRepository#dispose() which performs the persist(), then 
immediately clear()s the maps, WHICH HAVEN'T YET BEEN WRITTEN TO DISK BY 
EHCACHE SHUTDOWN!  

This shouldn't matter. When the registry is persisted, you can clear the 
double maps. The cache takes care of persisting itself. Can you post the 
store configuration of your ehcache?

Can you confirm that, on disk, in your work dir after shutdown, you have 
serialized ehcache files? Can you also confirm you have (I think in the 
WEB-INF dir somewhere) serialized .ser registry files? Open them with notepad 
to see whether they contain some data if you want.

At startup, I know that for every entry in the registry, the cachekey is 
checked whether it is still present in the cache. When it is not, the 
registry entry for this cachekey is discarded. I think you have a problem 
somewhere in this (though it depends on whether your .ser files are empty or 
not... if empty, the problem is of course in the shutdown)



 Using EventAware cache implementation breaks persistent cache restore on 
 restart
 

 Key: COCOON-2146
 URL: https://issues.apache.org/jira/browse/COCOON-2146
 Project: Cocoon
  Issue Type: Bug
  Components: Blocks: Event Cache
Affects Versions: 2.1.10
Reporter: Ellis Pritchard
Assignee: Jörg Heinicke
Priority: Minor
 Fix For: 2.1.11-dev (Current SVN)

 Attachments: patch.txt


 In revision 412307 (Cocoon 2.1.10), AbstractDoubleMapEventRegistry and 
 EventRegistryDataWrapper were changed (without an informative SVN comment!) 
 to use the commons MultiValueMap instead of the MultiHashMap; I presume this 
 was done in good faith because the latter map is deprecated and will be 
 removed from Apache commons-collections 4.0
 However, as a result, the persistent cache cannot be restored if the 
 EventAware cache implementation is used, since MultiValueMap is not 
 Serializable! The old MultiHashMap was...
 Depending on whether StoreEventRegistryImpl or DefaultEventRegistryImpl is 
 used, either the event cache index is never written (ehcache doesn't store 
 non-serializable objects on disk), or a java.io.NotSerializableException is 
 thrown (and caught, causing a full cache-clear) when attempting to restore 
 the event cache index.
 This is Major for us, since we use Event-based caching a lot, and this is 
 causing the *entire* cache to no longer persist across restarts (it's been 
 like that for 8 months, since I upgraded Cocoon to 2.1.10 in the last week I 
 was working here, and now I'm back, they've actually noticed!!)
 Work-around at the moment is to down-grade AbstractDoubleMapEventRegistry and 
 EventRegistryDataWrapper to the 2.1.9 versions (pre-412307), which works so 
 long as Apache-commons 3.x is still in use.




RE: [jira] Closed: (COCOON-2146) Using EventAware cache implementation breaks persistent cache restore on restart

2007-11-27 Thread Ard Schrijvers

  
  Sorry Grek, that I supplanted you. I only saw on closing the issue 
  that you actually assigned it to you. I hope you don't mind :-)

 Grzegorz Kossakowski wrote: 
 No problem, Jörg! I assigned it to myself because I thought 
 that the issue was trivial but recent comments confused me 
 greatly. 

Sorry about that :-) 

-Ard

Thanks for taking care of applying a patch.
 
 --
 Grzegorz Kossakowski
 Committer and PMC Member of Apache Cocoon 
 http://reflectingonthevicissitudes.wordpress.com/
 


[jira] Commented: (COCOON-2146) Using EventAware cache implementation breaks persistent cache restore on restart

2007-11-26 Thread Ard Schrijvers (JIRA)

[ 
https://issues.apache.org/jira/browse/COCOON-2146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12545506
 ] 

Ard Schrijvers commented on COCOON-2146:


FYI: IMO, the AbstractDoubleMapEventRegistry is a very bad implementation, 
and OOM sensitive. I was facing sites needing a restart every few days (very 
high traffic sites, 100.000 pages) because of the double map event registry 
in combination with ehcache. It has been more than a year ago, so I might be 
off in some places, but:

The AbstractDoubleMapEventRegistry used for event caching was built a long 
time ago (I wasn't around) and was based on an internal cocoon cache, which 
in turn was managed by the StoreJanitor. This internal cache has been 
replaced by ehcache, or jcscache. These caches handle their own eviction 
(TTL, LRU, ETERNAL, etc). This means that when such a cache decides to remove 
a cached entry, the removal was not initiated by the StoreJanitor, hence not 
propagated to the event registry. This ends up in an ever growing event 
registry. 

Also, I totally did not like the AbstractDoubleMapEventRegistry. Keeping 
double mapped maps in sync... it kind of is stupid. And this is exactly the 
thing you use WeakReferences for. So I rebuilt the registry for our projects 
to use WeakReferences. The only problem I faced was that, for a reason I have 
never been able to find or reproduce outside cocoon, it didn't play well with 
ehcache because references seemed to change. Therefore I implemented it in 
combination with JCSCache (which, by the way, performed better). While I was 
busy, I changed the registry to enable multiple caches, because I wanted 
filesystem caches for binary repository data, a separate cache for repository 
xml files, and a separate one for pipelines. 

I am not sure if anyone is interested... :-)  Because of the ehcache reference 
problems and the fact that I had no easy way (without reading the entire cache 
at startup) to have a persistent cache with a registry (WeakReferences cannot 
be persisted of course), I never considered the code suitable for Cocoon. OTOH, 
I left EHCache without problems, and IMO, needing a persistent cache does seem 
to me that you should have implemented your application differently. I have 
always been concerned with performance, and I just know that an uncached 
pipeline with external repository sources, with webdav calls and searches, can 
easily finish within 50-100 ms. If you must rely on your cache so heavily that 
it should survive a restart, I think you are misusing cocoon's cache anyway. 
Just my 2 cents.

Anyway, if anybody is interested, the code is over here:

https://svn.hippocms.org/repos/hippo/hippo-cocoon-extensions/trunk/eventcache/src/java/org/apache/cocoon/caching/impl/

and then mainly AbstractWeakRefMapEventRegistry.java.
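The idea behind that WeakReference-based registry can be sketched roughly as follows. This is a minimal illustration only, not the actual Hippo code linked above; the class and method names are made up:

```java
import java.util.*;

// Sketch: an event registry that maps cache keys to events via a
// WeakHashMap, so keys dropped by the cache vanish from the registry
// too. Names are illustrative, not Cocoon's actual API.
public class WeakRefEventRegistry {
    // WeakHashMap holds its *keys* weakly: once the cache no longer
    // strongly references a cache key, the GC may clear the entry here
    // as well -- no explicit double-map bookkeeping needed.
    private final Map<Object, Set<String>> keyToEvents = new WeakHashMap<>();

    public void register(String event, Object cacheKey) {
        keyToEvents.computeIfAbsent(cacheKey, k -> new HashSet<>()).add(event);
    }

    // On an event, collect the cache keys that are still alive.
    public List<Object> keysFor(String event) {
        List<Object> hits = new ArrayList<>();
        for (Map.Entry<Object, Set<String>> e : keyToEvents.entrySet()) {
            if (e.getValue().contains(event)) hits.add(e.getKey());
        }
        return hits;
    }

    public static void main(String[] args) {
        WeakRefEventRegistry reg = new WeakRefEventRegistry();
        Object key = new Object();        // stands in for a pipeline cache key
        reg.register("document-changed", key);
        // While the cache still holds the key strongly, the registry sees it:
        System.out.println(reg.keysFor("document-changed").size()); // prints 1
        // Once 'key' becomes unreachable, the GC may silently remove the
        // mapping; the exact timing is up to the JVM.
    }
}
```

The trade-off mentioned in the mail follows directly from this: since the mappings live only as long as the JVM holds them, such a registry cannot be persisted across restarts.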



 Using EventAware cache implementation breaks persistent cache restore on 
 restart
 

 Key: COCOON-2146
 URL: https://issues.apache.org/jira/browse/COCOON-2146
 Project: Cocoon
  Issue Type: Bug
  Components: Blocks: Event Cache
Affects Versions: 2.1.10, 2.1.11-dev (Current SVN)
Reporter: Ellis Pritchard
Assignee: Grzegorz Kossakowski
Priority: Minor
 Fix For: 2.1.11-dev (Current SVN)

 Attachments: patch.txt


 In revision 412307 (Cocoon 2.1.10), AbstractDoubleMapEventRegistry and 
 EventRegistryDataWrapper were changed (without an informative SVN comment!) 
 to use the commons MultiValueMap instead of the MultiHashMap; I presume this 
 was done in good faith because the latter map is deprecated and will be 
 removed from Apache commons-collections 4.0
 However, as a result, the persistent cache cannot be restored if the 
 EventAware cache implementation is used, since MultiValueMap is not 
 Serializable! The old MultiHashMap was...
 Depending on whether StoreEventRegistryImpl or DefaultEventRegistryImpl is 
 used, either the event cache index is never written (ehcache doesn't store 
 non-serializable objects on disk), or a java.io.NotSerializableException is 
 thrown (and caught, causing a full cache-clear) when attempting to restore 
 the event cache index.
 This is Major for us, since we use Event-based caching a lot, and this is 
 causing the *entire* cache to no-longer persist across restarts (it's been 
 like that for 8 months, since I upgraded Cocoon to 2.1.10 in the last week I 
 was working here, and now I'm back, they've actually noticed!!)
 Work-around at the moment is to down-grade AbstractDoubleMapEventRegistry and 
 EventRegistryDataWrapper to the 2.1.9 versions (pre-412307), which works so 
 long as Apache-commons 3.x is still in use.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



RE: Deprecation of CInclude transformer

2007-11-23 Thread Ard Schrijvers


 Vadim Gritsenko wrote: 
 
 Have you seen this part?
 
   return key == null? I: I + key;
  }
 

I never realized this specifically, but used to solve it similarly (see
below) 

 
 All you need to do is to pass in extra key:
 
   <map:match pattern="main">
 <map:generate src="someXML.xml"/>
 <map:transform type="include">
   <map:parameter name="key" value="{request-param:strange}"/>
 </map:transform>
 <map:serialize/>
   </map:match>

True, and as a matter of fact, you can put the key in *any* of the
generators (if they put the parameters in the cache key) *or* any of the
transformers (which I used to do, but in the include transformer it makes
more sense) (again, if they honour the parameters in the cache key
of course). All that matters is that the parameter gets accounted for.
But still, there are other cases where you might not get away with this
'adding a key parameter'. Think about the use case where, instead of some
selector with a query param, you use a resource-exists selector. You
cannot account for this in the include transformer with a parameter (or
I think you would have to add an input module for it... though one may be
around already, I can't remember off the top of my head). 
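The reason it does not matter *which* cacheable component carries the extra key is that the pipeline cache key is (roughly) the composition of all component keys. A toy sketch of that idea (simplified pseudo-implementation, not Cocoon's actual classes):

```java
import java.util.*;

// Simplified sketch of how a pipeline cache key is composed: each
// sitemap component contributes its own key, and anything not present
// in any component key (e.g. an unused request parameter) cannot
// distinguish two cached responses. Not Cocoon's real API.
public class PipelineKey {
    public static String compose(List<String> componentKeys) {
        return String.join(":", componentKeys);
    }

    public static void main(String[] args) {
        // Generator key depends on the source; the include transformer's
        // key here includes the explicitly declared "key" parameter.
        String withParam = compose(Arrays.asList(
                "generate(someXML.xml)", "include(key=true)"));
        String withOtherParam = compose(Arrays.asList(
                "generate(someXML.xml)", "include(key=false)"));
        // Different parameter value -> different pipeline key -> no false hit.
        System.out.println(withParam.equals(withOtherParam)); // prints false
    }
}
```

Once the parameter appears in any component's key, it appears in the compound key, which is why the generator, a transformer, or the include transformer are all equally valid places to account for it.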

 
 Hopefully information above helps you to keep them caching :)
 

I would certainly recommend so :-) The thing I wanted to point out is
that when using the IncludeTransformer, you might need to really
understand how cocoon's caching works, and how cache keys are being
created. It is in fact really simple, but I have just seen too many times
pipelines like these:

 <map:pipeline type="caching">
   <map:match pattern="common-used-part">
     <map:generate src="foo.xml"/>
     <map:transform src="bar.xsl">
       <map:parameter name="{date:MMddHHddmmss}"/>
     </map:transform>
   </map:match>
 </map:pipeline>

a totally useless cache... or my favorite:

 <map:pipeline type="caching">
   <map:match pattern="common-used-part">
     <map:generate src="foo.xml"/>
     <map:transform src="bar.xsl">
       <map:parameter name="{header:Referer}"/>
     </map:transform>
   </map:match>
 </map:pipeline>

Anyway, bottom line is that the IncludeTransformer is IMO definitely the
one to use. 

Regards Ard


RE: Deprecation of CInclude transformer

2007-11-22 Thread Ard Schrijvers

 Ralph Goers wrote:   
 Actually, this may be exactly the reason my developer 
 couldn't get the IncludeTransformer to work.  I'll have to 
 have hime take a look at it again.

If he still has problems let me know, I might be able to help you out.

Regards Ard

 
 Ralph
 


RE: Deprecation of CInclude transformer

2007-11-21 Thread Ard Schrijvers
Hello,

 Grzegorz Kossakowski wrote:
 Could you elaborate on the known differences between CInclude 
 and Include transformers? Was it discussed somewhere?

I am not sure if it was ever discussed somewhere, but I can elaborate a
little on it (though it has been a while, so I might be a little off at
some places... though understanding this IncludeTransformer gives you
a very good insight into Cocoon's caching mechanism). 

First of all, I think there are different usages of both transformers,
where I must admit I never used the CInclude (I know you can also POST
with it, append parameters, etc etc). With the IncludeTransformer, all
you do is include a src which can use any of the defined
source factories in your cocoon.xconf, for example

<include:include src="cocoon://some/pipeline"/> or
<include:include src="file://some/fs/file"/>

We at Hippo *always* used the IncludeTransformer, because it is way
better cacheable (and blistering fast once included... but of course,
there is a very subtle catch which the author didn't realize I
think... or just did not tell :-) ), as opposed to the CInclude
transformer. AFAIU, the IncludeTransformer can only be cached by
defining some expires. Clearly, you cannot really know how to set this,
or if you fetch an external http source, just give it some heuristic
value. 

Now, the IncludeTransformer adds the validity object of the included
sources to the validity object of the calling pipeline (pfff).
Let's have an example:

someXML.xml : 

<doc>
  <include:include src="cocoon://fetchSnippet"/>
</doc>

<map:match pattern="main">
  <map:generate src="someXML.xml"/>
  <map:transform type="include"/>
  <map:serialize/>
</map:match>

<map:match pattern="fetchSnippet">
  <map:generate src="included.xml"/>
  <map:serialize/>
</map:match>

included.xml : <includethis/>

So, when calling /main, I will get 

<doc>
  <includethis/>
</doc>

and the validity object of the fetchSnippet pipeline is added to the
validity of the main pipeline (pfff). So, the second call for /main does
the setup for the main pipeline, and computes a cache key... and finds a
hit in the cache with a valid cached object, thus returns the cached
result. NOTE: the second pipeline is never used in the second call. Not
even the setup!

Suppose I now change the src of the second matcher from included.xml to
included2.xml, and call /main again... Well, my cache key of the first
pipeline is unchanged, as is the validity object (the validity
object is with respect to included.xml, which did *not* change). So, I
still get the same cached result. Now, when I open included.xml,
change something and save, then the validity object of the first pipe is
invalid, so the result is re-computed, fetching included2.xml!! 

Also, if the second pipeline /fetchSnippet starts with a map:select
first, things get even more complex (the little catch)... if
somebody wants to know about it I can elaborate.

Anyway, why use the IncludeTransformer if you have this 'strange'
behavior? Well, it is just way much faster than the
CIncludeTransformer, so if you have pipelines with 10-20 includes (which
we have in some projects) I want a cached result within a couple of ms.
Do realize that setting up a pipeline, computing the compound cache key of
the generators and transformers, checking the cache and checking the
validity object all take time, which you do not have with the
IncludeTransformer (it only returns the validity object). OTOH, it really
only makes sense to use it when you have a lot of traffic, or when pages
have to load fast. 

Hope things are a little clearer. Sorry I cannot say too much about the
CIncludeTransformer... I just do not know enough about it. I just know it
is much slower (I just looked at the code and see it uses a
caching session with an expires... I do not like that much I think,
though it can be used for much more than the IncludeTransformer)
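The validity aggregation described above can be illustrated with a toy model. This is illustrative only; Cocoon's real SourceValidity/AggregatedValidity interfaces differ in their details:

```java
import java.util.*;

// Toy model of aggregated validity: the calling pipeline's validity
// wraps the validities of every included source, so the cached result
// stays fresh only while *all* members are fresh. Names are
// illustrative, not the actual Excalibur/Cocoon SourceValidity API.
interface Validity {
    boolean isStillValid();
}

class TimestampValidity implements Validity {
    private final long cachedAt;
    private final java.util.function.LongSupplier lastModified;
    TimestampValidity(long cachedAt, java.util.function.LongSupplier lastModified) {
        this.cachedAt = cachedAt;
        this.lastModified = lastModified;
    }
    public boolean isStillValid() {
        // Still valid while the source has not changed since caching.
        return lastModified.getAsLong() <= cachedAt;
    }
}

class AggregatedValidity implements Validity {
    private final List<Validity> members = new ArrayList<>();
    void add(Validity v) { members.add(v); }
    public boolean isStillValid() {
        // Invalid as soon as any included source changed.
        return members.stream().allMatch(Validity::isStillValid);
    }
}

public class ValidityDemo {
    public static void main(String[] args) {
        long[] mtime = { 100L };                       // fake mtime of included.xml
        AggregatedValidity pipeline = new AggregatedValidity();
        pipeline.add(new TimestampValidity(200L, () -> 50L));       // someXML.xml
        pipeline.add(new TimestampValidity(200L, () -> mtime[0]));  // included.xml
        System.out.println(pipeline.isStillValid()); // prints true
        mtime[0] = 300L;                             // "touch" included.xml
        System.out.println(pipeline.isStillValid()); // prints false
    }
}
```

This also explains the behavior described above: as long as every member validity checks out, the cached result is served and the included pipeline is never set up again.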

Regards Ard

 
 BTW. I forgot to say that I would like to deprecate CInclude 
 in 2.2 only.
 
 --
 Grzegorz Kossakowski
 


RE: Deprecation of CInclude transformer

2007-11-21 Thread Ard Schrijvers

 AFAIU, the 
 IncludeTransformer can only be cached by defining some

The above is of course a typo... I was talking about the CIncludeTransformer,
which can only be cached by some expires...
 
 expires. Clearly, you cannot really know how to set this, or 
 if you fetch an external http source, just give it some 
 heuristic value. 
 


RE: Deprecation of CInclude transformer

2007-11-21 Thread Ard Schrijvers

 Vadim Gritsenko wrote:
 Indeed. I think this is a failure on CocoonSource part. 
 CocoonSource content depends on the sitemap.xmap, but 
 CocoonSource validity does not include validity of the 
 sitemap. As a result, even if sitemap is changed, 
 CocoonSource validity stays valid.
 
 I'm not sure though if I want to have this bug fixed - it 
 would add more overhead...

I don't think it is worth the trouble either. You can work for years with it
without ever noticing; it only happens during development, when you
happen to change something in a way described above (which will hardly
ever happen)

 
 
  Also, if the second pipeline /fetchSnippet start with a map:select 
  first, the things are getting even complexer (the little 
 catch)if 
  somebody wants to know about it I can elaborate.
 
 Fire away! ;-)

Okay, you asked for it :-) 

Suppose I have an include : 

<include:include src="cocoon://fetchSnippet"/> 

and I have a pipeline :

<map:selectors default="parameter">
  <map:selector name="simple"
      src="org.apache.cocoon.selection.SimpleSelector"/>
</map:selectors>

<map:match pattern="fetchSnippet">
  <map:select type="simple">
    <map:parameter name="value" value="{request-param:strange}"/>
    <map:when test="true">
      <map:generate src="test1.xml"/>
    </map:when>
    <map:otherwise>
      <map:generate src="test2.xml"/>
    </map:otherwise>
  </map:select>
  <map:serialize type="xml"/>
</map:match>

Now, when your first call is for example : /main?strange=true

and the main pipeline is again 

<map:match pattern="main">
  <map:generate src="someXML.xml"/>
  <map:transform type="include"/>
  <map:serialize/>
</map:match>

You will have included test1.xml, since {request-param:strange}
evaluates to 'true'. Now, calling
/main?strange=false will *not* give you test2.xml as the include,
because the main pipeline was perfectly cacheable: the 'strange'
parameter was not included in its cache key, and the validity object is
valid since no src whatsoever changed. Touching test1.xml and
calling again returns you test2.xml. Calling /main?strange=true
afterwards gives you again test2.xml.

You might consider it undesirable/incorrect behavior, but I just learned
to see the include transformer as if you add some pipeline parts
from another pipe into your pipeline. So, if you have an include involving
a map:select, you might see it as if you were adding the pipe parts which
are resolved by the map:select. 

Also, the times I had to deal with this are few, but OTOH, when I got
the pleasure of debugging externally developed websites with a main
sitemap of > 4.000 lines, and this was actually one of the problems, you
can imagine how hard it is to find! :-) 

 
 BTW IncludeTransformer operation can not be changed easily. 
 If you try to resolve each included URI, to check its new key 
 and validity, you'd also have to parse (or store somewhere!) 
 included parts in order to resolve nested includes - if 
 recursive inclusion is switched on.

Yes indeed. We certainly do not want that! It is working really fine
ATM, but if you encounter one of the exceptional examples I
described... well, then you either have to learn about cocoon caching,
or set your pipelines to noncaching :-) 

 
 
 IMHO POSTing stuff to a source does not really belong to a 
 include transformer, but rather to some other type of transformer...

A PostTransformer :-) 

Ard

 
 Vadim
 
 


RE: [GT2007] [VOTE] Conference location + time

2007-04-11 Thread Ard Schrijvers


 
 Hi all,
 
 Please cast your votes on both the location and the time for 
 this year's Cocoon GetTogether conference:
 
 A) The Netherlands, Amsterdam

-1

 B) Italy, Rome / Milano

+10

 C) England, London / Norwich

-100 :-) 

 
 When:
 
 A) Late september

+1

 B) Beginning of October (between the holidays season and ApacheCon)

-1 

 C) End of October

+1

 
 Thanks!
 
 Apart from all that, you should come over and visit Amsterdam 
 anyway from May 1-4 at the ApacheCon Europe! www.apachecon.com
 
 
 
 Kind regards,
 
 Arjé Cahn
 
 Hippo  
 
 Oosteinde 11
 1017WT Amsterdam
 The Netherlands
 Tel  +31 (0)20 5224466
 
 [EMAIL PROTECTED] / [EMAIL PROTECTED]
 
 --
  Hippo http://www.hippo.nl
  Hippo CMS community   http://www.hippocms.org
  My weblog http://blogs.hippo.nl/arje
 --
  ApacheCon Europe 2007 Gold Sponsor
  Join us from May 2-4 in Amsterdam!
 --
 
 


RE: StoreJanitor

2007-04-05 Thread Ard Schrijvers

 Ard Schrijvers wrote:
  Yes, this is exactly my point. The extra problem is that 
 the StoreJanitor
  never has access to the eviction policy of the cache, and 
 just starts
  throwing out entries at random.
 
 That's an incorrect statement. I'd say that StoreJanitor always 
 has access to the 
 eviction policy, with the exception of incorrect cache 
 implementation, such as 
 EHCache:
 
 public class EHDefaultStore implements Store {
 
  public void free() {
  try {
  final List keys = this.cache.getKeysNoDuplicateCheck();
  if (!keys.isEmpty()) {
  // TODO find a way to get to the LRU one.
  final Serializable key = (Serializable) keys.get(0);
 
 
 If you were to fix the root of the problem first, many of 
 your other troubles 
 would simply evaporate.

I know, and have been looking before already, and have been asking around on 
the ehcache list, but there is no way to get access to the eviction policy. The 
same holds for JCSCache (I would not know for Whirlycache or others)

Ard

 
 Vadim
 


RE: StoreJanitor

2007-04-05 Thread Ard Schrijvers

 IIUC, EHCache allows you to set only the number of items in 
 cache, and not the 
 maximum amount of memory to use, or minimum amount of free 
 memory to leave.

True (but the cache can't know the size of the objects it gets stuffed with 
(you say it is possible with Java 1.5?))

 
 In other words, EHCache leaves you up for guessing what the 
 number should be. It 
 should not do this. Maximum number of items in memory should 
 be whatever memory 
 can bear, and cache should dump unused memory once free 
 memory goes low.

I do not agree. This would imply a JVM always reaching max memory, while this 
is not necessary. 
I think you are talking about a SoftReference-based cache, but apart from some 
drawbacks,
the eviction policy is very hard to implement because you do not know which 
reference the JVM throws away.

IMO, the cache configuration, like maxelements (memory) and maxdiskelements, 
should be modifiable at
run time, but this is not possible with ehcache or jcs.

 
 Granted it can't be implemented cleanly in Java 1.4 (hence 
 thread + interval 
 hack) but on Java 1.5 it would work beautifully. All you need 
 is a stinking API 
 to clean out entries using LRU algorithm (or whatever is 
 chosen). Or just switch 
 to Cocoon's cache in the meantime.

I haven't got problems with jcs if you know how large your stores can be 
(crawler test and default sizes I have 
configured for different JVM mem values). Throwing away all the cache 
experience of others
and switching to the cocoon cache doesn't seem to make sense to me

 
 
  JCS will probably do the same. I guess that original purpose of 
  StoreJanitor was when Cocoon had its own store implementations 
  (transient, persistent) and we had to take care of cleaning 
 them up in 
  our code.
 
 It still does, and at the moment is easier to live with than 
 EHCache -- as shown 
 by Ard in this email.

Jcs does not give me any problems (except headache in configuration :-) )

 
 Vadim
 
 


RE: StoreJanitor (was: Re: Moving reduced version of CachingSource to core | Configuration issues)

2007-04-05 Thread Ard Schrijvers
Vadim,

I think you are reasoning from the POV of the cocoon cache, but I think you are 
in the minority compared to the number of users who are using EHCache. 
I tried to explain the inevitable OOM of the StoreJanitor in combination with 
EHCache and
the event registry on a high volume site. 

 
 
  o0o
  
  The rules I try to follow to avoid the Store Janitor to run
  
  1) use readers in noncaching pipelines and use expires on 
 them to avoid
  cache/memory polution
 
 Better - there is Apache HTTPD for it.

I know, we use both

 
 
  2) use a different store for repository binary sources
  which has only a disk store part and no memory part 
 (cached-binary: protocol
  added)
 
 Doesn't it result in some frequently used binary resource 
 always read from the disk?

No, because, as you said, you also use Apache HTTPD for binary files *and* a 
reader with expires. 
I would be surprised to have 1.000 concurrent users hitting Ctrl-F5 on a 
large binary at the same time :-) 

 
 
  3) use a different store for repository sources then for pipeline
  cache
 
 Hm, what are the benefits?

Because my repository sources are expensive compared to pipeline cache entries. 
If 
people do not construct proper pipelines, the pipeline cache entries can flood 
the cache. 
By using a different store for my repository sources, I make sure wrongly 
constructed pipelines
do not harm me by evicting expensive repository sources. 

Do realize that I am talking about high volume sites with many visitors and 
editors running live.

Ard

 
 
 Vadim
 
  4) replaced the abstract double mapping event registry to use
  weakreferences and let the JVM clean up my event registry
  5)  (4) gave me
  undesired behavior by removing weakrefs in combination with 
 ehcache when
  overflowing items to disk (i could not reproduce this, but 
 seems that my
  references to cachekeys got lost). Testing with JCSCache 
 solved this problem,
  gave me faster response times and gave me for free to limit 
 the number of
  disk cache entries. Disadvantage of the weakreferences, is 
 that I disabled
  persitstent caches for jvm restarts, but, I never wanted 
 this anyway (but
  this might be implemented quite easily, but might take long 
 start up times) 
  6) JCSCache has a complex configuration IMO. Therefor, I 
 added default
  configurations to choose from, for example:
  
  
  
  
  [1] http://www.minfin.nl [2] http://www.minbuza.nl
 
 


RE: StoreJanitor

2007-04-05 Thread Ard Schrijvers

 
 Strictly speaking, you don't need access to the eviction 
 policy itself - but 
 only some exposed method on Store, something like 
 purgeLastElementAccordingToEvictionPolicy -- can't they add 
 something like that? 

I of course did ask this, because this is obviously the way to go: 

http://sourceforge.net/mailarchive/forum.php?thread_name=A955EA1F8FE31749AEC8C998082F6C7CD6E71E%40hai01.hippo.local&forum_name=ehcache-list

summarizing Greg: there is no real way to plug into the eviction mechanism from 
outside.


 To ehcache or jcscache, does not matter :)

I did not ask for jcscache, but since ehcache is originally a fork of jcscache, 
I doubt whether 

1) it is any different
2) I will get an answer at all :-)

Ard

 
 Vadim
 


RE: StoreJanitor (was: Re: Moving reduced version of CachingSource to core | Configuration issues)

2007-04-05 Thread Ard Schrijvers

 
 Configurable Store registration with StoreJanitor alleviates 
 somewhat that 
 problem, but not solves completely as you still have to 
 properly configure all 
 your cache sizes correctly to avoid OOM.
 
 I think you can try combining Cocoon's MRU cache and EHCache 
 to get best of both 
 worlds.

I now have 3 eventaware jcs caches, 1 eventaware MRU cache and 1 MRU cache 
by default... it got 
out of hand :-)

 
 
  I tried to explain the inevitable OOM of the StoreJanitor 
 in combination of EHCache and
  the event registry in a high volume site. 
 
 That's only highlights that current cocoon default config is 
 not a viable option.

true


 I doubt that it is possible at all to make any programming 
 system robust enough 
 to withstand bad code. You can try though.

It indeed almost seems that the more robust I make it (even with a skeleton 
generator creating an entire structure
based on best practice), the more they try to break it :-)... no, not entirely 
true.
The only problem is that it abstracts the basics even further, and since we are 
outsourcing, they sometimes tend to
forget what they are really doing and to think about it. OTOH, large sites are 
running fine for months, and this wasn't the case before I tried making it 
robust. But I get your point :-)  

Ard



RE: StoreJanitor

2007-04-04 Thread Ard Schrijvers

 
 Reinhard Poetz wrote:
 
  P.S. Ard, answering to your mails is very difficult because 
 there are no 

I am very sorry... I hardly dare to say I am using Outlook :-) 
I'll try to find a way in the stupid program for line breaks, or make the 
switch to Thunderbird.

Ard

  line breaks. Is anybody else experiencing the same problem 
 or is it only 
  me?
 
 Jörg pointed me to the rewrap function of Thunderbird. 
 Using it fixes all my 
 problems with never ending lines. Thanks Jörg!
 
 -- 
 Reinhard Pötz   Independent Consultant, Trainer  (IT)-Coach 
 
 {Software Engineering, Open Source, Web Applications, Apache Cocoon}
 
 web(log): http://www.poetz.cc
 
 


email formatting

2007-04-04 Thread Ard Schrijvers
 
 Reinhard Poetz wrote:
 
  P.S. Ard, answering to your mails is very difficult because 
 there are no 
  line breaks. Is anybody else experiencing the same problem 
 or is it only 
  me?
 
 Jörg pointed me to the rewrap function of Thunderbird. 
 Using it fixes all my 
 problems with never ending lines. Thanks Jörg!

Reading this again, is it an error in my mail settings, or was it something in 
Thunderbird? I am always complaining when people send html, so... if my mails 
are wrong in format as well... I should definitely do something about it

Ard


RE: email formatting

2007-04-04 Thread Ard Schrijvers

 Ard Schrijvers wrote:
  Reinhard Poetz wrote:
 
  P.S. Ard, answering to your mails is very difficult because 
  there are no 
  line breaks. Is anybody else experiencing the same problem 
  or is it only 
  me?
  Jörg pointed me to the rewrap function of Thunderbird. 
  Using it fixes all my 
  problems with never ending lines. Thanks Jörg!
  
  Reading this again, is it an error from my mail settings 
 
 Your lines are nearly endless, without any line breaks
 
  or was it something in Thunderbird? 
 
 Thunderbird offers a solution for this kind of problem. The 
 function is called 
 rewrap, which adds line breaks at the right places.
 
  I am always complaining when people send html, so...if my 
 mails are wrong as well in format...I should definitely do 
 something about it
 
 Thanks!

A mail will probably be coming in just a moment which won't make you 
happy regarding formatting,
but I'll ask around here if somebody knows a solution for me :-)



 
 


RE: StoreJanitor

2007-04-04 Thread Ard Schrijvers

 
 AFAICS there are two freeing algorithms in trunk: round-robin 
 and all-stores.

I already thought it would be something like this

/snip

 and this is IMO one of the major weaknesses of ehcache (or I 
 missed it 
 completely): I did not find any way to limit the number of 
 disk store entries.
 
 Actually we don't configure this value. According to 
 http://ehcache.sourceforge.net/documentation/configuration.htm
 l the default 
 value is 0 meaning unlimited. We should use the 1.2.4 
 constructor that allows to 
 set a maxElementsOnDisk parameter.

That was added to ehcache lately, right? I never saw this one, but it is 
extremely important to set it to a sensible value in my opinion. Cocoon
uses some quite ingenious caching tricks, but the average user won't be 
aware of the millions of cache entries you can leave behind (like when putting
a timestamp in a cache key). 

 
 I wonder what StoreJanitor is good for at all. EHCache takes 
 care that the 
 number of items in the memory cache doesn't grow indefinitly 
 and starts its own 
 cleanup threads for the disc store 
 (http://ehcache.sourceforge.net/documentation/storage_options.
 html#DiskStore). 
 JCS will probably do the same. 

Yes, this is exactly my point. The extra problem is that the StoreJanitor 
never has
access to the eviction policy of the cache, and just starts throwing out 
entries at random.
From my experience, my app will only run solidly when the StoreJanitor 
never runs :-) 
Therefore, I have created a few store size options to choose from, matching 
different
JVM memory sizes. Then, when the app is finished, I start crawling the site 
(xenu [1]) for an hour 
and look at the status generator mem usage, or the yourkit profiler or 
something. If I see the 
nice shaped sawtooth (is this only Dutch? :-) ) of memory usage, the stores 
are configured correctly 


 I guess that original purpose 
 of StoreJanitor was 
 when Cocoon had its own store implementations (transient, 
 persistent) and we had 
 to take care of cleaning them up in our code.

That must indeed have been the reason (I did not know this one; it was before 
my time, so I have 
never understood how the StoreJanitor would ever help me out)

 Only the persistent store can grow unlimited but since it 
 should only be used 
 for special usecases, it shouldn't be a real problem.
 

/snip

 
 
 What do we want to do in order to improve the situation? 
 After reading your mail 
 and from my own experience I'd say
 
   - introduce a maxPersistentObjects parameter and use it in 
 EHDefaultCache to set maxElementsOnDisk

+1 

   - make the registration of stores at StoreJanitor configureable
 (Though I wonder what the default value should be, true or false?)

0 : I would avoid running the StoreJanitor anyway

   - fix EventRegistry

+1: I have fixed this locally to let it work also when cache entries are 
removed by the internals of the cache.
I did this by, instead of using the AbstractDoubleMapEventRegistry, using 
WeakReferences, so that when the cache keys
aren't present anymore, the JVM itself cleans the registry. Two problems:

1: I removed the persistent cache between JVM restarts, but could rebuild this 
(at the cost of long start up times though)
2: With former versions of EHCache, my weak references were not honoured when 
cache entries were overflowed to disk.
Therefore, I thought EHCache might be doing something with the cache key when 
moving it to the disk cache key map. I could only see this behavior in 
combination with Cocoon, and not when I tested EHCache separately. 
On the EHCache user list, Greg told me that it was not possible, and also 
showed it. 
I am using JCSCache now, which I am pretty OK with (only hard configuration)

If, by the way, we start fixing the others, like setting a maxdiskobjects, the 
OOM due to the event registry will increase. 
This is a problem of MultiHashMap (and also its non-deprecated replacement): 
when you do
map.put(1,test);
map.put(1,test);

you have two values for key 1. 
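That multi-value behaviour is easy to reproduce with a plain list-valued map, so commons-collections is not even needed for the illustration:

```java
import java.util.*;

// Demonstrates multi-value map semantics: unlike a plain Map, putting
// the same (key, value) pair twice stores the value twice, which is
// how an event registry built on such a map can keep growing.
public class MultiValuePutDemo {
    public static void main(String[] args) {
        Map<Integer, List<String>> multi = new HashMap<>();
        multi.computeIfAbsent(1, k -> new ArrayList<>()).add("test");
        multi.computeIfAbsent(1, k -> new ArrayList<>()).add("test");
        System.out.println(multi.get(1).size()); // prints 2

        // Using a Set as the value collection deduplicates instead:
        Map<Integer, Set<String>> dedup = new HashMap<>();
        dedup.computeIfAbsent(1, k -> new HashSet<>()).add("test");
        dedup.computeIfAbsent(1, k -> new HashSet<>()).add("test");
        System.out.println(dedup.get(1).size()); // prints 1
    }
}
```

With set-valued (or deduplicating) storage the repeated registration would be harmless; with list-valued storage every repeated put grows the registry.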


 
 Any further ideas?

Hmmm, yes, but I am not sure whether others would like it: I think it might be 
good that
when the StoreJanitor runs, there should be at least an info message (error 
level...? I frequently want to 
give info in messages which is so important that it must be at error level to 
not be missed, but this
is stupid, right?) about possible problems:

either:
1) your JVM memory settings are too low
2) your stores are configured to have too many memory items
3) your cached objects are very large
4) you have a memory leak in some custom component (a little vague yes :-) )
etc
Try running a crawler (xenu) and watch your status page memory usage.

Another improvement might be trying to avoid binary readers putting entries in 
the memory cache. But this might 
be too complex for the average user. In principle, I have been bugging 
everybody here to:

1) use readers in *noncaching* pipelines, and use appropriate expires times in 
the readers; very important
for fast pages 
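Point 1 could look roughly like this in a sitemap (a sketch only; the "expires" parameter name and its millisecond unit are from memory of the 2.1 ResourceReader, so verify against your Cocoon version):

```xml
<!-- Sketch: serve binaries from a noncaching pipeline with a
     client-side expires, so readers never pollute the memory cache. -->
<map:pipeline type="noncaching">
  <map:match pattern="images/**.png">
    <map:read src="resources/images/{1}.png" mime-type="image/png">
      <!-- assumed: "expires" is in milliseconds for ResourceReader -->
      <map:parameter name="expires" value="3600000"/>
    </map:read>
  </map:match>
</map:pipeline>
```

The expires header then lets browsers (and a fronting Apache HTTPD) serve repeats without hitting Cocoon at all.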

RE: StoreJanitor

2007-04-04 Thread Ard Schrijvers

 
 I suggest that we don't register them at StoreJanitor by 
 default anymore but 
 make it configureable for users who rely on it in their custom Store 
 implementations/configurations.

+1

 
 AFAIU, StoreJanitor only runs if at least one store is 
 registered so we don't 
 have to remove it.
 
- fix EventRegistry
  

/snip

 
 I have to learn more about the EventRegistry in order to 
 comment on your 
 suggestions.

I will mail tonight or tomorrow morning all the ins and outs, pros and cons 
(that I know of at least) of the current
event registry and my suggested (implemented) fix (though we have to discuss 
whether I can get it to work for EHCache and whether it needs to be 
disk-persistent between JVM restarts)

 
  4) you have a memory leak in some custom component (a 
 little vague yes :-) )
  etc
 
 hehe, if we can implement an algorithm that can provide such 
 analysis reliably, 
 why not ;-)

I think this is extremely hard. Not for the pipeline caches, because they store 
the response in byte[], but for continuations, internal component maps (for 
example i18n resource bundles I think, compiled jx), memory stores which 
contain any complex non-serializable objects: I think it is impossible to know 
the amount of memory. I test these things in the dumbest way possible you can 
imagine: I crawl my site, and look in the status page at what happens. I have
many stores in my status page. I have a clear link for each separate store, and 
I look at the memory which gets
freed when clearing one store. This gives me a heuristic measurement of how 
large my stores are and should be configured

 
 Are you suggesting some kind of online monitor? I think 
 having a seperate 
 component would be better than merging it into StoreJanitor. 
 This component 
 could also be made available as MBean.

See above, very complex I think... and if we fix the standard things, it is 
harder for users to 
get bugged by the StoreJanitor. If they want to take it to the next level, 
there are always things
like the YourKit profiler. But perhaps I am not ambitious enough now :-) 
 

 
 yes please, I would be interested in more comments too! Are 

Do you mean more comments as in the wiki, or more comments in cocoon.xconf for different 
configurations? 
I can try to write extended documentation on what IMO is best for 
configuration, and tricks to 
avoid the StoreJanitor mechanism

 Ard and I right that 
 we shouldn't register EHDefaultStore and MRUMemoryStore at 
 StoreJanitor anymore 
 by default and make it configureable instead?

In principle, you could see the StoreJanitor as a real last resort (but IMO, it 
will never actually help). 
The StoreJanitor might still run, and give proper warnings when low on memory. 
Configuring your stores correctly
(and making sure no binary files of many Mb's end up in them), and certainly 
having the maxdiskelements 
configured, should do the trick! Not running the StoreJanitor when the JVM is low 
will result in a slightly faster OOM,
but in my opinion it differs not much. I also think the maxdiskelements 
should have a sensible default, which
should be less than indefinite (something like 30.000-50.000 should cover 
almost everybody's apps I think)

 
  Ard
  

 
 the formatting is okay now, but it seems that your mails 
 still don't set the 
 in-reply-to header correctly.

Hmmm, I will start using Thunderbird on short notice (not yet today :-) )

 
 -- 
 Reinhard Pötz   Independent Consultant, Trainer  (IT)-Coach 
 
 {Software Engineering, Open Source, Web Applications, Apache Cocoon}
 
 web(log): http://www.poetz.cc
 
 


RE: StoreJanitor

2007-04-04 Thread Ard Schrijvers

 On 4/4/07, Reinhard Poetz [EMAIL PROTECTED] wrote:
  Ard Schrijvers wrote:
 
   yes please, I would be interested in more comments too! Are
  
   more comments like in wiki or in the cocoon.xconf more 
 comment for different configurations?
   I can try to write extended documentation on what IMO is 
 best for configuration, and tricks to
   avoid the StoreJanitor mechanism
 
  I'm interested in further comments by you too, but actually 
 I meant that other
  people should comment on our plan to change the behaviour 
 of StoreJanitor.
 
 Personally, I wonder if you can't just get rid of it altogether? 

Yes we can, but we have to make sure the default store configurations are such 
that 
maxdiskelements is not set to indefinite in combination with a time2liveseconds and
time2idleseconds of 0, or eternal = true. That would mean the cache would always 
fill all 
available memory in the end. Properly configured defaults should make the 
StoreJanitor redundant.
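A sketch of what such bounded defaults could look like in cocoon.xconf (the class and parameter names are assumptions based on the values discussed in this thread; verify them against your store implementation and version):

```xml
<!-- Hypothetical store configuration that can never grow unbounded:
     bounded maxdiskelements, non-eternal entries, non-zero TTL/idle times. -->
<store class="org.apache.cocoon.components.store.impl.EHDefaultStore" logger="core.store">
  <parameter name="maxobjects" value="1000"/>         <!-- in-memory entries -->
  <parameter name="maxdiskelements" value="30000"/>   <!-- bounded, not indefinite -->
  <parameter name="eternal" value="false"/>
  <parameter name="timetoliveseconds" value="86400"/> <!-- not 0 -->
  <parameter name="timetoidleseconds" value="3600"/>  <!-- not 0 -->
</store>
```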

Ard

 Do
 you think anyone really has dependencies on it and is going to want to
 migrate those dependencies to 2.2?
 
 -- 
 Peter Hunsberger
 


RE: StoreJanitor (was: Re: Moving reduced version of CachingSource to core | Configuration issues)

2007-04-03 Thread Ard Schrijvers
Hello,
 
 Ard Schrijvers wrote:
  i would be glad to share the code and my ideas, for example 
 about this whole 
 StoreJanitor idea :-)  )
 
 Just curious, what did you mean by this whole StoreJanitor idea?

Before I say things that are wrong, please consider that the StoreJanitor was 
invented long before I looked into the Cocoon code, so probably a lot of 
discussion and good ideas have been around which I am not aware of. But still, 
my ideas about the StoreJanitor (and sorry for the long mail, but perhaps it 
might contain something useful):

1) How it works and its intention (I think :-) ): The StoreJanitor was 
originally invented to monitor Cocoon's memory usage, and does this by checking 
some memory values every X (default 10) seconds. Besides the fact that I doubt 
users know that it is quite important to configure the StoreJanitor correctly, 
I stick to the defaults and use a heapsize just a little lower than the JVM 
maxmemory. 

Now, every 10 seconds, the StoreJanitor checks whether 
(getJVM().totalMemory() >= getMaxHeapSize()) && (getJVM().freeMemory() < 
getMinFreeMemory()) is true, and if so, the next store is chosen (compared to 
the previous one) and entries are removed from this store (I saw a post that in 
trunk not one single store is chosen anymore, but an equal part of all of them 
is being removed, right? ...probably you can configure which stores to use, I 
don't know)
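The polling check described above can be sketched as follows. The class and method names are mine, not the actual Cocoon StoreJanitor API; the point is only that the condition samples a single instant, so a momentary allocation spike with a not-yet-grown heap does not trigger eviction, while a fully grown heap with little free memory does:

```java
// Illustrative model of a StoreJanitor-style low-memory check
// (names are hypothetical, not the real Cocoon API).
public class MemoryCheckSketch {

    private final long maxHeapSize;
    private final long minFreeMemory;

    public MemoryCheckSketch(long maxHeapSize, long minFreeMemory) {
        this.maxHeapSize = maxHeapSize;
        this.minFreeMemory = minFreeMemory;
    }

    /** True only when the heap is fully grown AND little free memory remains. */
    public boolean memoryLow(long totalMemory, long freeMemory) {
        return totalMemory >= maxHeapSize && freeMemory < minFreeMemory;
    }
}
```

Whatever the result at the moment of sampling, it says nothing about the 9.9 seconds in between, which is exactly the objection raised in the observations below.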

2) My observations: When running high-traffic sites and rendering them live (only 
mod_cache in between, which holds pages for 5 to 10 min) like [1] or [2], 
checking every X sec for a JVM low on memory doesn't make sense to me. At 
the moment of checking, the JVM might be perfectly sound but just needed some 
extra memory for a moment; in that case, the StoreJanitor is removing items 
from cache while not needed. Also, when the JVM is really in trouble, but the 
StoreJanitor is not checking for 5 more sec... this might be too long for a 
JVM in a high-traffic site when it is low on memory. Problems that result from 
it are:

- Since there is no way to remove cache entries from the used cache impl by the 
cache's eviction policy, the cache entries in memory are removed starting 
from entry 0, whatever this might be in the cache. It is very likely 
that at the very next request, the same cache entries are added 
again.

- Once the JVM gets low on memory and the StoreJanitor is needed, it is quite 
likely that from that moment on the StoreJanitor runs *every* 10 seconds, and 
keeps removing cache entries which you perhaps don't want to be removed, like 
compiled stylesheets. 
1) Suppose, from one store (or since trunk, from multiple stores), 10% 
(default) is removed. This 10% is of the number of memory cache entries. I 
quite frequently happen to have only 200 entries in memory for each store (I 
have added *many* different stores to enable all we wanted in a high-traffic 
environment) and the rest is disk store. Now suppose the JVM, which has 512 Mb 
of memory, is low on memory, and removes 10% of 200 entries = 20 entries, 
helping me zero! These memory entries are my most important ones, so on the 
next request they are either added again, or I have a hit from diskcache, 
implying that the cache will put this cache entry in memory again. If I would 
use 2000 memory items, I am very sure the 200 items which are cleaned are put 
back in memory before the next StoreJanitor run.
2) I am not sure if in trunk you can configure whether the StoreJanitor 
should leave one store alone, like the DefaultTransientStore. In this store, 
typically, compiled stylesheets end up, and i18n resource bundles. Since these 
files are needed on virtually every request, I had rather not that the 
StoreJanitor removes from this store. I think the StoreJanitor does so, 
leaving my critical app in an even worse state, and on the next request the 
hardly improved JVM needs to recompile stylesheets and i18n resource bundles.
3) What if the JVM being low is not because of the stores... For 
example, you have added some component which has some problems you did not 
know about, and that component is the real reason for your OOM. The StoreJanitor 
sees your low memory, and starts removing entries from your perfectly sound 
cache, leaving your app in a much worse situation than it already was. Your 
component with the memory leak has some more memory it now can fill, and happily 
does this, making the StoreJanitor remove more and more entries from cache, 
until it ends up with an empty cache. You could blame the wrong component for 
this behavior. One of these wrong components in use is the event registry for 
event caching, which made our high-traffic sites with 512 Mb crash every two 
days. Better that I write in another mail what I did to the event cache 
registry, why I did not yet post about it, whether others are interested and how 
to include it in the trunk. Bottom line

RE: StoreJanitor (was: Re: Moving reduced version of CachingSource to core | Configuration issues)

2007-04-03 Thread Ard Schrijvers
/snip

?? my mail got sent by accident :S ... finishing it now

 be implemented quite easily, but might take long start up times)
 6) JCSCache has a complex configuration IMO. Therefor, I 
 added default configurations to choose from, for example:

<store logger="core.store">
  <parameter name="region-name" value="store"/>
  <parameter name="size" value="small"/>
</store>

where size might be small, medium, large or huge. 

I think we have in this way created a setup for Cocoon where it is harder 
for inexperienced users to run into memory problems when trying to implement larger 
sites. 

Hopefully somebody read my mail until here :-) I am curious about what others 
think,

Ard


RE: E-mail threading (was: Re: Make status code attribute of seriailzers expandable)

2007-03-30 Thread Ard Schrijvers

  Ard
 
  [1] https://issues.apache.org/jira/browse/COCOON-1619

 
 Could you Ard use e-mail client that is standard compliant 
 and produces
 valid In-Reply-To headers? It's second time this week I have to raise
 this issue (and I really don't like doing it) but it's really crucial
 for me to not get lost in all threads...

Accepted, was indeed my mistake. 

Ard

 
 Thanks.
 
 -- 
 Grzegorz Kossakowski
 http://reflectingonthevicissitudes.wordpress.com/
 
 


RE: [vote] Move CachingSource to cocoon-core

2007-03-29 Thread Ard Schrijvers

  several requests on the users list) I want to propose to move it to 
  cocoon-core.


+1

Ard

 -- 
 Reinhard Pötz   Independent Consultant, Trainer  (IT)-Coach 
 
 {Software Engineering, Open Source, Web Applications, Apache Cocoon}
 
 web(log): http://www.poetz.cc
 
 


RE: [vote] Make status code attribute of seriailzers expandable

2007-03-29 Thread Ard Schrijvers

 Reinhard Poetz wrote:
 
  I propose making the status code attribute of serializers 
 expandable.
  This makes it easier to provide REST style services with Cocoon that
  make use of the different meanings of status codes.
 
 +1. Should have actually been there right from the start!

big +1 because I have needed it quite some times before!

 
 Sylvain
 
 -- 
 Sylvain Wallez - http://bluxte.net
 
 


Make status code attribute of seriailzers expandable

2007-03-29 Thread Ard Schrijvers
I missed the discussion before the vote, but have one more question:

Is it also possible to add extra optional http headers in the serializer, like:

<map:serialize type="xhtml" pragma="{cache}" cache-control="{cache}"/>

I would like to store in a variable whether I am dealing with something that is 
not allowed to be cached by mod_cache, like a cforms with continuation. We had 
a discussion about this before on this list, but cannot find the thread ATM

I do not know about the serializers, but is this possible and/or desirable? 

Ard


 
 Reinhard Poetz wrote:
   Vadim Gritsenko wrote:
   Reinhard Poetz wrote:
  
   Is there a specific reason why this patch 
 (http://issues.apache.org/jira/browse/COCOON-1354) has never 
 been applied?
  
   Technically, other than rewriting the last chunk of the 
 patch, it looks Ok. 
  From procedure POV, I don't recall a VOTE on that. Previous 
 decision on what 
 can be expandable in the sitemap was VOTEd upon.
  
   I will start a vote on this then. Any opinions in the meantime?
 
 I don't see reason why not, even if not needed often.
 
 Vadim
 
   My usecase is that I want to provide REST-style 
 webservices with Cocoon and 
 knowing which status code to set is part of the business logic.
 
 
- o -
 
 
 I propose making the status code attribute of serializers 
 expandable. This makes 
 it easier to provide REST style services with Cocoon that 
 make use of the 
 different meanings of status codes.
 
 -- 
 Reinhard Pötz   Independent Consultant, Trainer  (IT)-Coach 
 
 {Software Engineering, Open Source, Web Applications, Apache Cocoon}
 
 web(log): http://www.poetz.cc
 
 


RE: Make status code attribute of seriailzers expandable

2007-03-29 Thread Ard Schrijvers

 Ard Schrijvers wrote:
  I missed the discussion before the vote, but have one more question:
  
  Is it also possible to add extra optional http headers in 
 the serializer, like:
  
  <map:serialize type="xhtml" pragma="{cache}" cache-control="{cache}"/>
  
  I would like to store in a variable wether I am dealing 
 with something that is not allowed to be cached by mod_cache, 
 like a cforms with continuation. We had a discussion before 
 on this list, but cannot find the thread ATM
  
  I do not know about the serializers, but is this possible 
 and/or desirable? 
 
 Please see HttpHeaderAction, can set any response header you want

I know. But it does not work when you set the response header in a 
subpipeline, so you *have* to do this in the pipeline which contains the 
serializer. But quite normally, I have one main catch-all matcher with the 
xhtml serializer. In this catch-all matcher, I do not know whether or not I will 
have a form with continuation, and I only want to set the no-cache header when 
there actually is a continuation, for example. The problem is way more subtle 
than just an HttpHeaderAction, and currently there is not a really easy solution 
for having continuations in forms behind mod_cache (let alone a balanced 
environment, where the sticky sessions come in)

Ard

 
 Vadim
 


RE: Make status code attribute of seriailzers expandable

2007-03-29 Thread Ard Schrijvers


 Ard Schrijvers napisał(a):
  I missed the discussion before the vote, but have one more question:
 
  Is it also possible to add extra optional http headers in 
 the serializer, like:
 
  <map:serialize type="xhtml" pragma="{cache}" cache-control="{cache}"/>
 
  I would like to store in a variable wether I am dealing 
 with something that is not allowed to be cached by mod_cache, 
 like a cforms with continuation. We had a discussion before 
 on this list, but cannot find the thread ATM

 I'm -1 for setting this in a serializer.
  I do not know about the serializers, but is this possible 
 and/or desirable? 

 
 I was going to propose the same but set on pipeline level. Then one
 would group pipelines sharing the same caching 
 characteristics. We have
 already expires parameter for pipelines so I think it fits into
 current design.

I am not in favor of this. When you only create a form with a continuation 
based on the contents of some xml file, you do not know which pipelines do, and 
which do not, contain a form. You do know in flow when you are creating a 
form/continuation. Setting a variable at that point, and being able to use it in 
the serializer as an extra http header, is IMO the only way

Ard

  

 As servlet sevice stuff demands high conformity with HTTP 
 specification
 I'm all +1 on such addition.
 
 -- 
 Grzegorz Kossakowski
 http://reflectingonthevicissitudes.wordpress.com/
 
 


RE: Moving reduced version of CachingSource to core | Configuration issues

2007-03-27 Thread Ard Schrijvers
Hello,

regarding the cache-expires and async thing in the cachingsource block, there 
are some things that are strange and seem bugs to me:

1) The expires value is always -1 (eternal), no matter what you define in the 
queryString. You can see this happen in getSource of the CachingSourceFactory. 
I think the 

if (name.startsWith("cocoon:cache")) {
    params.setParameter(name.substring("cocoon:".length()),
                        sp.getParameter(name));
    remainingParameters.removeParameter(name);
}

should also get an else:

else {
    params.setParameter(name, sp.getParameter(name));
}

because all other parameters are neglected in the current way. 
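To illustrate the intended behavior once the else branch is added, here is a small self-contained model of that loop (the class and method names are mine, not Cocoon's; only the prefix handling mirrors CachingSourceFactory):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical stand-alone model of the fixed parameter loop in
// CachingSourceFactory.getSource(): with the proposed else branch, *every*
// query parameter ends up in params; "cocoon:cache*" ones get the
// "cocoon:" prefix stripped first.
public class ParamSplitSketch {

    public static Map<String, String> toParams(Map<String, String> query) {
        Map<String, String> params = new LinkedHashMap<>();
        for (Map.Entry<String, String> e : query.entrySet()) {
            if (e.getKey().startsWith("cocoon:cache")) {
                // strip the "cocoon:" prefix, as the original code does
                params.put(e.getKey().substring("cocoon:".length()), e.getValue());
            } else {
                // the proposed else branch: keep all other parameters too
                params.put(e.getKey(), e.getValue());
            }
        }
        return params;
    }
}
```

With the else branch in place, a query like cocoon:cache-expires=600&foo=bar yields both cache-expires and foo in params, instead of silently dropping foo.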

Then, when I do have my expires accounted for correctly, I do not understand 
why, while the cached object is not expired, there is still a call to the 
remote source. This doesn't make sense to me. Also, when the expires is set 
correctly and the object is expired, I am getting a NullPointerException, but 
it might be because we use an old version...?

Anyway, the thing to start with is to correct the getSource() above, or do I miss 
something?

Ard


 
 
  Vadim Gritsenko wrote:
   Oops, should have read it in full...
   
   Reinhard Poetz wrote:
   
   I can think of setting the expires parameter to -1 and using a
   background-refresher but this seems to be overly complex 
 for this 
   simple task.
   
   Yes async will do the trick. And IMHO it should be Ok to 
 alter sync 
   implementation to keep previous response if new one can't 
  be obtained.
  
  sounds easier than Ard's proposal (no offense ;-) ), or do I 
  overlook something?
 
 That certainly is a *lot* easier and I was not aware of this 
 part in the cachingsource! Might be useful for me as well :-) 
 
 Ard
 
  
   I would also like to move the basic functionality of the 
  CachingSource 
   into some core module and only have an extended versions 
  (event-cache 
   support, async updating) of it in the reposistory block. I 
  seems odd 
   to me that I have to add a dependency to the repository 
 block, the 
   event-cache block, the jms block and the cron block
   
   I do not think it has any dependencies on cron, where do 
 you see it?
  
  either it comes through a transitive dependency or I did 
  something wrong with my 
  setup. I will check where it comes from.
  
   just for this. Any comments before I start a vote on this?
   
   Async is a basic functionality which must be in core, IMHO. But I 
   completely agree that event-cache and jms should be 
 optional. I was 
   planning on doing this refactoring but did not manage to do 
  it so far.
  
  It would be great if you could help me with the design of the 
  refactoring: If 
  you did it, into which parts would you split it up?
  
  -- 
  Reinhard Pötz   Independent Consultant, Trainer  
 (IT)-Coach 
  
  {Software Engineering, Open Source, Web Applications, Apache Cocoon}
  
  web(log): 
http://www.poetz.cc
 
 


RE: Moving reduced version of CachingSource to core | Configuration issues

2007-03-27 Thread Ard Schrijvers

 
 This if statement checks if a parameter starts with 
 cocoon:cache and if yes, 
 it add it to the params object and removes it from the 
 normal request 
 parameters. It looks fine for me and the expires value is set 
 correctly at the 
 source AFAICS. BTW, I'm working on trunk.

Yes, I saw this about the cocoon:cache, but it seems to me that the other 
parameters are forgotten to be added to params, and therefore int expires = 
params.getParameterAsInteger(CachingSource.CACHE_EXPIRES_PARAM, 
defaultExpires); always returns -1... Look at the code snippet below from the 
getSource in the trunk:

index = uri.indexOf('?');
if (index != -1) {
    sp = new SourceParameters(uri.substring(index + 1));
    uri = uri.substring(0, index);
}

// put caching source specific query string parameters
// into a Parameters object
final Parameters params = new Parameters();
if (sp != null) {
    SourceParameters remainingParameters = (SourceParameters) sp.clone();
    final Iterator names = sp.getParameterNames();
    while (names.hasNext()) {
        String name = (String) names.next();
        if (name.startsWith("cocoon:cache")) {
            params.setParameter(name.substring("cocoon:".length()),
                                sp.getParameter(name));
            remainingParameters.removeParameter(name);
        }
    }
    String queryString = remainingParameters.getEncodedQueryString();
    if (queryString != null) {
        uri += "?" + queryString;
    }
}

int expires = params.getParameterAsInteger(CachingSource.CACHE_EXPIRES_PARAM, defaultExpires);
 
The only parameters that are added to params are the cocoon:cache ones, and therefore 
expires always has the defaultExpires.

if (name.startsWith("cocoon:cache")) {
    params.setParameter(name.substring("cocoon:".length()),
                        sp.getParameter(name));
    remainingParameters.removeParameter(name);
} else {
    params.setParameter(name, sp.getParameter(name));
}

solves this IMO

Ard
  

 
 
  because all parameters are neglected in the current way. 
  
  Then, when I do have my expires accounted for correctly, I 
 do not understand why while the cached object is not expired, 
 there is still a call for the remote source. This doesn't 
 make sense to me. Also, when the expires is set correctly, 
 and the object is expired, I am getting a 
 NullPointerException, but it might be because we use an old 
 version...?
 
 I can confirm this but don't know where those requests come 
 from. I could be 
 caused by a validity check but that's only a wild guess.
 
 -- 
 Reinhard Pötz   Independent Consultant, Trainer  (IT)-Coach 
 
 {Software Engineering, Open Source, Web Applications, Apache Cocoon}
 
 web(log): http://www.poetz.cc
 
 


RE: Moving reduced version of CachingSource to core | Configuration issues

2007-03-26 Thread Ard Schrijvers
Hello Reinhard,

 
 The repository block contains a the CachingSource. Does 
 anybody have experiences 
 with it?

Yes, we have a lot of experience with it (though we have a slightly 
different version; Max Pfingsthorn changed it: the public SourceValidity 
getValidity() returns an eventValidity in our version instead of a 
timeStampValidity. But this is not according to the CacheableProcessingComponent 
contracts [1], and it gave me *many* headaches to fix it all, because many 
components use implicit caching which depends on source validity, and as we all 
know, an eventValidity returns valid by definition, so these caches never get 
flushed. If I had known all this before, I would have reverted the change long ago 
:-(  )  

 
 I wonder how I can configure it so that the cached source 
 expires e.g. after 5 
 minutes but if it can't be updated (e.g. the wrapped source 
 isn't available), 
 the expired version should be used.

You probably know how to configure the default-expires and the expires 
per generator, right? But this is probably not what you mean:

in configure:

this.defaultExpires = parameters.getParameterAsInteger(DEFAULT_EXPIRES_PARAM, -1);

and 

int expires = params.getParameterAsInteger(CachingSource.CACHE_EXPIRES_PARAM, this.defaultExpires);

in getSource(...), but you probably already knew this

I do not really understand your feature. First, you have generated a repository 
source. Then, if it is expired, you want to refetch the source, but if it 
isn't there, use the expired cached object? IMO, it is a strange thing to do 
(apart from how to integrate it in the current cache impl, which checks for 
validities of the cached response... but I must be missing your point, I 
think?). I have been thinking though of something like this, to enable fetching 
remote rss feeds for example, so that when the connection to this rss feed is lost 
(the remote site is down), your application still runs (or is this actually 
your use case?). In that case, I would suggest a different mechanism, because 
the one you describe does not really fit in the cachingsource IMO. 


 I would like to add this 
 feature in a 
 transparent way and provide a configuration parameter to 
 switch it off if you 
 _don't_ want this behaviour but I'm not sure if I duplicate 
 something that 
 already exists.
 
 I can think of setting the expires parameter to -1 and using a 
 background-refresher but this seems to be overly complex for 
 this simple task.

The documentation says:

 * <p>The value of the expires parameter holds some additional semantics.
 * Specifying <code>-1</code> will yield the cached response to be considered valid
 * always. Value <code>0</code> can be used to achieve the exact opposite. That is to say,
 * the cached contents will be thrown out and updated immediately and unconditionally.</p>

You want to change the -1 expires behavior into something where, after the 
expires of e.g. 5 minutes, you have a background fetch (and have this in a cron 
job or something?) 

 
 I would also like to move the basic functionality of the 
 CachingSource into some 
 core module and only have an extended versions (event-cache 
 support, async 
 updating) of it in the reposistory block. I seems odd to me 
 that I have to add a 
 dependency to the repository block, the event-cache block, 
 the jms block and the 
 cron block just for this. Any comments before I start a vote on this?

You indeed have to add quite some jars to enable the cachingsource. The cron 
job is needed for jms reconnection only; the eventcache and jms are 
inseparable. But IMO, there shouldn't have to be a direct relation between 
the cachingsource, which is just some sort of proxy, and the eventcache, jms 
and cron blocks. Wanting to use the cachingsource without the event block seems a 
legitimate goal IMO. 

Ard

[1] http://cocoon.zones.apache.org/daisy/cdocs/g1/g1/g7/g1/675.html

 
 -- 
 Reinhard Pötz   Independent Consultant, Trainer  (IT)-Coach 
 
 {Software Engineering, Open Source, Web Applications, Apache Cocoon}
 
 web(log): http://www.poetz.cc
 
 


RE: Moving reduced version of CachingSource to core | Configuration issues

2007-03-26 Thread Ard Schrijvers

 
 yes, the broken connection scenario is that what I want. If 
 you say that it 
 doesn't really fit into the caching source, what do you 
 propose instead? 

It does fit in the current caching source without touching the caching source I 
think :-) 

 Writing 
 another source wrapping source for this purpose only? (Could 
 be easier than to 
 refactor the existing one ...)

IMO, it is quite simple to implement this in the current code, without touching 
anything. I think this needs to be done step by step, and I can help if you want

1) you define a new source-factory in cocoon.xconf:

<component-instance
    class="org.apache.cocoon.components.source.impl.CachingSourceFactory"
    logger="core.source.caching" name="cached">
  <parameter name="cache-role"
             value="org.apache.cocoon.caching.Cache/ExpiresAware"/>
</component-instance>

if you want the prefix of your source-factory for the expires variant to be different, 
change "cached" into "expires-aware" or something

2) A new class, ExpiresAwareCacheImpl with role

<role default-class="org.apache.cocoon.caching.impl.ExpiresAwareCacheImpl"
      name="org.apache.cocoon.caching.Cache/ExpiresAware" shorthand="expires-aware"/>

3)
<expires-aware logger="core.cache.expires">
  <parameter name="store" value="org.apache.excalibur.store.Store"/>
  <parameter name="expiresregistry" value="org.apache.cocoon.caching.ExpiresRegistry"/>
</expires-aware>

Now, like the EventAwareCacheImpl, your ExpiresAwareCacheImpl examines a 
cached response before actually storing it in the cache, and checks for expires 
validities. If found, store this, like the eventRegistry does, in a registry based on 
some multi-value maps: [expiresvalue, {key1, key2, key3}] (by the way, you have 
to store the uri of the source as well, of course)

Now, a cron job runs, checking this ExpiresRegistry, and if expired 
values are found, does a refetch: if the refetch is successful, clear the cached entry and 
store the new cached response and register the new expires time. 
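A minimal sketch of the registry side of that proposal, under the multi-value-map idea above (all class and method names here are hypothetical; the real EventRegistry API is different, and a production version would need the weak-reference handling discussed below):

```java
import java.util.HashSet;
import java.util.Iterator;
import java.util.Map;
import java.util.Set;
import java.util.TreeMap;

// Hypothetical ExpiresRegistry: expiry timestamps mapped to the set of
// cache keys that expire at that moment ([expiresvalue, {key1, key2, ...}]).
public class ExpiresRegistrySketch {

    // expiry time (millis) -> cache keys registered for that moment
    private final TreeMap<Long, Set<String>> byExpiry = new TreeMap<>();

    public synchronized void register(String key, long expiresAtMillis) {
        byExpiry.computeIfAbsent(expiresAtMillis, t -> new HashSet<>()).add(key);
    }

    /** Called by the cron job: returns and removes all keys expired at 'now'. */
    public synchronized Set<String> drainExpired(long nowMillis) {
        Set<String> expired = new HashSet<>();
        // headMap(now, true) is the view of all entries with expiry <= now
        Iterator<Map.Entry<Long, Set<String>>> it =
            byExpiry.headMap(nowMillis, true).entrySet().iterator();
        while (it.hasNext()) {
            expired.addAll(it.next().getValue());
            it.remove();
        }
        return expired;
    }
}
```

The cron job would call drainExpired(System.currentTimeMillis()), refetch each returned key's source, and re-register the new expiry on success.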

OTOH, relying heavily on this mechanism might result in nasty registry behavior 
resulting in OOM. Therefore, I changed the event registry we use, and use a 
WeakReferencesRegistry instead of the AbstractDoubleMapEventRegistry. But this, 
for obscure reasons, did not work with EHCache, which did not seem to honor my 
WeakReferences of cache keys when overflowing cached objects to disk (I have 
tried to reproduce this without Cocoon, but did not succeed). Therefore, I moved 
to JCSCache, which does work quite a bit better IMO, though it has an insanely 
complex configuration (at least, I found it complex :-) )... but all this is 
only when you have high-traffic sites in combination with *many* event 
validities like we have, because of the changed cachingsource impl returning 
an eventValidity instead of a timestampValidity (though, if you have OOM 
bothering you with eventcache, let me know... or if somebody is interested in 
the whole story, let me know. I haven't discussed it yet on this list, because 
I do not have the idea others have this problem as seriously as I have had (sites 
having OOM every 2 days because of the eventRegistry), and because it would 
imply not using the EHCache, and some more minor changes... but again, if 
somebody is interested, I would be glad to share the code and my ideas, for 
example about this whole StoreJanitor idea :-)  )

Anyway, Reinhard, I hope you were able to follow how I would attack the 
problem, and if I can help, let me know. We might want to discuss the 
registry OOM problems in this context as well, because with the 
ExpiresRegistry, analogous to the EventRegistry, in combination with a cache which 
is configured to have a non-eternal time2live and time2idle, you might run 
into similar problems

Regards Ard

 
  I would like to add this 
  feature in a 
  transparent way and provide a configuration parameter to 
  switch it off if you 
  _don't_ want this behaviour but I'm not sure if I duplicate 
  something that 
  already exists.
 
  I can think of setting the expires parameter to -1 and using a 
  background-refresher but this seems to be overly complex for 
  this simple task.
  
  The documentation says:
  
   * pThe value of the expires parameter holds some 
 additional semantics.
   * Specifying code-1/code will yield the cached 
 response to be considered valid
   * always. Value code0/code can be used to achieve the 
 exact opposite. That is to say,
   * the cached contents will be thrown out and updated 
 immediately and unconditionally.p
  
  You want to change the -1 expires behavior, into something 
 that after the expires of eg 5 minutes you have a background 
 fetch (and have this in a cron job or something?) 
 
 no, no, I don't want to change the behaviour of -1. I tried 
 to explain that I 
 *could* use it together with a background fetcher to reach my 
 goal but this is 
 overly complex for such a simple thing.
 
 -- 
 Reinhard Pötz   Independent Consultant, Trainer  (IT)-Coach 
 
 {Software Engineering, Open Source, Web Applications, 

RunningModeDependentPipeline (was:RE: New stuff for Cocoon)

2007-03-05 Thread Ard Schrijvers
Hello,

  
 http://www.mindquarry.org/repos/mindquarry-jcr/trunk/mindquarr
 y-jcr-source/
 
 I'd say add it to the cocoon-jcr block but maybe one of the 
 original authors of 
 the jcr block can give some comments.
 
  And various components:
  
  - RunningModeDependentPipeline: (Pipeline)
  
  to automatically use different pipelines depending on the 
 running mode, 
  eg. no caching in dev, full caching in prod, and optionally 
 enabling 
  profiling with a single system property (I know this is possible by 
  putting two different xconf/spring bean files under dev/ or 
 prod/, but 
  this code is quite young in cocoon, so we don't have it 
 available in our 
  cocoon version)
  
  
  http://www.mindquarry.org/repos/mindquarry-webapp/trunk/mindqu
arry-webapp-resources/src/main/java/com/mindquarry/webapp/pipelines/RunningModeDependentPipeline.java
 

 hey, I was thinking about something similar just some time ago. I guess this 
 needs further discussions on this list.

IMHO, I really do not like people developing in one mode regarding caching 
and switching to another caching strategy in production (regarding profiling, OK). 
I have been trying to persuade everybody *not* to develop against non-caching 
pipelines and switch to caching when the project needs to go live. 
Inefficiencies in implementations won't be noticed until the project 
needs to go live, and you end up hacking around to get some performance. I have 
seen too many projects end up as big slow hacky projects (hacky to get them 
performing in the end), which never perform the way they should. By far the best 
projects around are the ones that kept performance and caching in mind from 
the very start, and monitored response times during development against a 
production-mode implementation. 

OTOH, why would you ever want a pipeline to be non-caching in development mode 
and caching in production? If you do everything right, changes in code would 
also directly work in caching pipelines. Certainly when you use eventcache, it 
is important to develop against event-caching pipelines; otherwise, you might end up 
with a production version in which some parts do not invalidate. Finding out 
the problems later on is much harder. 

So a -1 from me, because it would make it even harder to persuade everybody 
around me not to develop against a caching strategy that differs from 
production.

Ard


RE: [vote] Jeroen Reijn as a new Cocoon committer

2007-03-05 Thread Ard Schrijvers

 
 Please cast your votes.

my unbiased +1!!

Ard

 
 
 Thanks,
 
 Andrew.
 --
 Andrew Savory, Managing Director, Luminas Limited
 Tel: +44 (0)870 741 6658  Fax: +44 (0)700 598 1135
 Web: http://www.luminas.co.uk/
 Sourcesense: http://www.sourcesense.com/
 
 
 


RE: [vote] Felix Knecht as a new Cocoon committer

2007-03-05 Thread Ard Schrijvers
 
 Please cast your votes!

+1!

Ard

 
 --
 Reinhard
 
   
 ___ 
 Telefonate ohne weitere Kosten vom PC zum PC: 
 http://messenger.yahoo.de
 


RE: [vote] Grzegorz Kossakowski as a new Cocoon committer

2007-02-27 Thread Ard Schrijvers

 Please cast your votes.

+1

Ard

 
 /Daniel
 
 


RE: Not caching pages with continuations (was:...where is 304?)

2007-01-30 Thread Ard Schrijvers
Hello,

 
 Ard Schrijvers wrote:
 
 snip/
  This is actually almost the same hack we used, but 
 instead of a transformer a selector, and if some value set in 
 flowscript, an action to set headers. Because we are 
 outsourcing/other parties using our best practices, and I 
 did not want them to have to think about setting things in 
 flowscript like sessions and values to indicate caching 
 headers, I chose to put it in the black box transformer, 
 which handles it. Of course, also kind of a hack, because  
 users aren't really aware of it (certainly because i did not 
 want another sax transformer, so I did add it to the 
 StripNameSpaceTransformer which is by default used by us in 
 front of the serializer. But it does more then its name 
 suggests, and therefor, it is hacky ofcourse. But...at least 
 nobody has to think about it :-) ). I wondered if there was a 
 solid nonhacky solution to the issue

 
 If it's for CForms, we can add the setting of no-cache headers in
 Form.js since it's very unlikely that a form pipeline will be 
 cacheable.

Think we tried similar things (in flowscript), but we ran into the problem 
Bertrand also faced: you have to set the headers on the pipeline from which the 
serializer is used (but all requests, for example, arrive at a catch-all 
matcher, which does not know whether it includes a form or not).

Do you think you can set it globally for the request in Form.js? I tried 
directly manipulating the HttpServletResponse; this resulted in correct 
behavior on the first request, but consecutive cached responses did not get 
this direct HttpServletResponse manipulation, so it implied uncacheable 
pipelines in Cocoon, which of course I did not want either. 

But I am curious about a possible solution.

Ard  

 
 Also, for other flowscripts there could be a parameter on the 
 map:flow
 instruction stating the flowscript engine has to always set 
 the no-cache
 headers.
 
 Sylvain
 
 -- 
 Sylvain Wallez - http://bluxte.net
 
 


RE: FW: HTTPD mod_cache HttpCacheAction: where is 304?

2007-01-30 Thread Ard Schrijvers
Hello,
 
 
 
 Ard Schrijvers wrote:
  
  is there any best practice to have cforms in urls you do not know on
  beforehand, with continuations?
  
 
 Continuation is just randomly generated token, unique to each user
 interaction (like as session Id). Even if it is cached, even 
 if GET is used
 as a form method, next form submit will request URL with 
 different (not
 cached yet) continuation. (I believe continuation ID is 
 always unique).

Think you are slightly missing the point: it is not about the submitting of the 
form, it is about the form itself. Suppose we have a link like 
/form/query.html.

Now this URL serves a CForms form with a continuation id, either in an input 
element or in the action. The submit is not the problem; the problem is that 
/form/query.html is cached by mod_cache (note: we do not know beforehand 
which URLs are forms, so we cannot tell mod_cache to skip them). Then mod_cache 
delivers forms with the same continuation id for, say, 10 minutes, but 
only the first one to submit will submit a valid continuation. Therefore, when 
CForms generates the form, you want to set the Pragma and Cache-Control headers 
to no-cache globally on the response (and in a load-balanced environment, 
enforce a sticky session).
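
In code, the headers in question boil down to the following (a minimal, hypothetical sketch; the class and method names are made up for illustration and this is not Cocoon or CForms API):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical helper, not Cocoon API: the response headers a page
// containing a continuation would need, so that mod_cache (and other
// shared caches) never serve a stale continuation id.
public class ContinuationHeaders {

    public static Map<String, String> forContinuationPage() {
        Map<String, String> headers = new LinkedHashMap<>();
        headers.put("Pragma", "no-cache");        // honoured by HTTP/1.0 caches
        headers.put("Cache-Control", "no-cache"); // honoured by HTTP/1.1 caches such as mod_cache
        return headers;
    }

    public static void main(String[] args) {
        // In a real setup these would be copied onto the HttpServletResponse
        // of the page that contains the form.
        forContinuationPage().forEach((name, value) ->
                System.out.println(name + ": " + value));
    }
}
```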

Ard

 
 We are talking here about GET methods in general, not about HTTPD...
 Continuation ID may be send via POST also. It could be hidden 
 field, and it
 could be cookie. Usually all web-developers prefer POST with 
 forms just
 because replies to POST method should not be cached until 
 server explicitly
 provides expiration header (see HTTP 1.1 specs). 
 
 All cached pages have a key which is simply URL, for better 
 caching use GET,
 for non-caching - POST.
 -- 
 View this message in context: 
http://www.nabble.com/FW%3A-HTTPD-mod_cache--HttpCacheAction%3A-where-is-304--tf3132401.html#a8695784
Sent from the Cocoon - Dev mailing list archive at Nabble.com.



RE: Not caching pages with continuations (was:...where is 304?)

2007-01-30 Thread Ard Schrijvers

 
  
  If it's for CForms, we can add the setting of no-cache headers in
  Form.js since it's very unlikely that a form pipeline will be 
  cacheable.
 
 Think we tried similar things (in flowscript), but we found 
 the problem Bertrand also faced about the fact, that you have 
 to set the headers on the pipeline the serializer is used 
 from (but all requests for example arrive at a catch all 
 matcher, which does not know wether it includes a form or not).
 
 Do you think you can set it globally for the request in 
 Form.js? I tried directly manipulating the 
 HttpServletResponse, but obviously, this resulted in correct 
 behavior on the first request, but consecutive cached 
 responses did not have this direct HttpServletResponse 
 manipulation, so this implied uncacheable pipelines in 
 cocoon, which of course, I did not want either.

Hmmm, reading this again: of course, CForms aren't cached anyway.
 
 
 But, I am curious about a possible solution
 
 Ard  
 
  
  Also, for other flowscripts there could be a parameter on the 
  map:flow
  instruction stating the flowscript engine has to always set 
  the no-cache
  headers.
  
  Sylvain
  
  -- 
  Sylvain Wallez - http://bluxte.net
  
  
 


RE: Not caching pages with continuations (was:...where is 304?)

2007-01-30 Thread Ard Schrijvers


-- 

Hippo
Oosteinde 11
1017WT Amsterdam
The Netherlands
Tel  +31 (0)20 5224466
-
[EMAIL PROTECTED] / http://www.hippo.nl
-- 

 -Original Message-
 From: Sylvain Wallez [mailto:[EMAIL PROTECTED]
 Posted At: Tuesday, 30 January 2007 11:42
 Posted To: Cocoon Dev List
 Conversation: Not caching pages with continuations (was:...where is
 304?)
 Subject: Re: Not caching pages with continuations 
 (was:...where is 304?)
 
 
Hello,


 
 Cached responses don't involve the serializer, and this is why the
 headers aren't set. 
 On the contrary, the flowscript is always 
 executed,
 meaning headers will always correctly been set even if the pipeline it
 calls is cacheable (BTW why is this pipeline cacheable at all???)

I indeed already replied to my own remark about caching pipelines involving 
CForms. Obviously, when using CForms, these are uncacheable. 

 
 Also, response.setHeader() is ignored for internal requests. Si if the
 flowscript is called through a cocoon:, headers really have 
 to be set
 on the Http request, accessible in the object model with the
 HttpEnvironment.HTTP_RESPONSE_OBJECT key.

Yes indeed, since the pipeline involving CForms won't be cached anyway, this 
will work. (Might setting a session cookie to enable sticky sessions whenever 
CForms are used also be an option, to be able to use them in load-balanced 
environments? Perhaps configurable, because in non-balanced environments it is 
redundant.)

I did have problems in a slightly different setting with 
HttpEnvironment.HTTP_RESPONSE_OBJECT: when you want to set, for example, a 404 
status code in an internal pipeline while the entire pipeline is still 
cacheable. Then the response header is only set correctly the first time, not 
when the response is later fetched from the cache. 

Anyway, for CForms, directly manipulating the 
HttpEnvironment.HTTP_RESPONSE_OBJECT should work fine, because they are 
uncacheable anyway.

Ard

 
 Sylvain
 
 -- 
 Sylvain Wallez - http://bluxte.net
 
 


RE: FW: HTTPD mod_cache HttpCacheAction: where is 304?

2007-01-29 Thread Ard Schrijvers
Hello,

now that we are talking about httpd mod_cache anyway: is there any best 
practice for having CForms with continuations at URLs you do not know 
beforehand? For high-traffic sites we obviously want to use mod_cache, but at 
the same time mod_cache shouldn't cache pages with a continuation in them. 
Since we don't know which URLs these are, we cannot configure mod_cache to 
skip certain URL patterns. To make it even more complex, a continuation should 
always return to the same Cocoon instance in a load-balanced environment. 

Is there a common best practice for this? You cannot just set headers in the 
matcher for the CForms part, because you must set the headers in the main 
matcher, the one from which the serializer is used. The only (poor) solution I 
could come up with is a SAX transformer that looks for an action that ends 
with .continue, or an input with name="continuation-id". If it finds one, 

response.setHeader("Pragma", "no-cache");
response.setHeader("Cache-Control", "no-cache");

are set, and if the request's cookie map has no entry for sessionhost, I add a 
cookie to the response to enable a sticky session for a load-balanced 
environment. 

Well, clearly, since I assume things like an action ending with .continue or 
an input element with a particular name, I made some assumptions (which could 
of course be made configurable). I chose this solution because third parties 
are implementing projects as well, and I do not want them to have to think 
about headers and sticky sessions. 

In short, have other people had these same requirements, and is there some 
best practice known?
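
The detection part of the transformer hack described above could look roughly like this (a hedged sketch using plain SAX, not the actual Cocoon transformer; the element and attribute names follow the assumptions stated above):

```java
import java.io.StringReader;

import javax.xml.parsers.SAXParserFactory;

import org.xml.sax.Attributes;
import org.xml.sax.InputSource;
import org.xml.sax.helpers.DefaultHandler;

// Hedged sketch, not the actual Cocoon transformer: a SAX pass that flags a
// document as "contains a continuation" when it sees a form action ending in
// ".continue" or an input named "continuation-id". The caller would then set
// the no-cache headers (and a sticky-session cookie) on the response.
public class ContinuationDetector extends DefaultHandler {

    private boolean found;

    @Override
    public void startElement(String uri, String local, String qName, Attributes atts) {
        if ("form".equalsIgnoreCase(local) || "form".equalsIgnoreCase(qName)) {
            String action = atts.getValue("action");
            if (action != null && action.endsWith(".continue")) {
                found = true; // continuation submit URL
            }
        }
        if ("input".equalsIgnoreCase(local) || "input".equalsIgnoreCase(qName)) {
            if ("continuation-id".equals(atts.getValue("name"))) {
                found = true; // continuation carried in a hidden field
            }
        }
    }

    public static boolean containsContinuation(String xml) throws Exception {
        ContinuationDetector detector = new ContinuationDetector();
        SAXParserFactory.newInstance().newSAXParser()
                .parse(new InputSource(new StringReader(xml)), detector);
        return detector.found;
    }

    public static void main(String[] args) throws Exception {
        String page = "<html><body><form action='/q/abc123.continue'/></body></html>";
        System.out.println(containsContinuation(page)); // true
    }
}
```

In a real transformer this check would run while the events stream through to the serializer, so it adds no extra parse of the document.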

Regards Ard

 
 On 1/28/07, Fuad Efendi [EMAIL PROTECTED] wrote:
  Following to
  http://wiki.apache.org/cocoon/ControllingModCache
 
  I found this class:  org.apache.cocoon.acting.HttpCacheAction (dated
  2004-07-29)
 
  Unfortunately, this action can't reply with 304 on request 
 with HTTP Header
  [If-Modified-Since: ..]
 
 The idea is that the httpd front-end would handle that case: if a page
 has been cached by the httpd front-end, conditional GETs will not hit
 Cocoon until the httpd cache expires.
 
 So yes, you could say that there's no caching between httpd and Cocoon
 in this case. But this doesn't prevent you from using Cocoon's
 internal caches in pipelines, if you need to.
 
 -Bertrand
 


RE: Not caching pages with continuations (was:...where is 304?)

2007-01-29 Thread Ard Schrijvers
Hello Bertrand,

 
  For high-traffic sites, we obviously want to use mod_cache, 
 but, at the same time,
  mod_cache shouldn't cache pages with a continuation in it
 
 The problem is making the HTTP cache headers variable according to
 which pipeline is executed.
 
 But, in principle, you have to set the headers before any content is
 written to the output, and at this point you might not know what kind
 of caching you need.
 
 The way I've been solving this is a follows:
 
 a) In pipelines, actions or flowscript set request attributes to
 indicate what type of caching is needed
 
 b) A custom transformer at the end of the pipeline sets the HTTP cache
 headers according to these request attributes
 
 Now, you're not supposed to set headers at the end of the pipeline,
 but anyway the serializer has to buffer the content to be able to set
 the Content-Length header. So nothing is actually written to the
 output before b) *if* the serializer returns true for
 shouldSetContentType.
 
 https://issues.apache.org/jira/browse/COCOON-1619 also plays a role in
 this, as headers set by internal pipelines are ignored.

Yes, this is kind of a bummer indeed, which I already knew. But do you regard 
it as a bug? (Ah, reading the issue, you do not seem to really regard it as a 
bug; me neither.)

 
 Not sure if this solution can be defined as a best practice, as it's a
 bit of a hack...but if works for me ;-)

This is actually almost the same hack we used, but instead of a transformer a 
selector and, if some value is set in flowscript, an action to set the 
headers. Because other/outsourcing parties use our best practices, and I did 
not want them to have to think about setting things in flowscript like 
sessions and values to indicate caching headers, I chose to put it in the 
black-box transformer, which handles it. Of course, this is also kind of a 
hack, because users aren't really aware of it (certainly since I did not want 
another SAX transformer, I added it to the StripNameSpaceTransformer, which is 
by default used by us in front of the serializer; it does more than its name 
suggests, and is therefore of course hacky). But at least nobody has to think 
about it :-). I wondered if there was a solid, non-hacky solution to the 
issue.

Ard

 
 -Bertrand
 


RE: [jira] Created: (COCOON-1985) AbstractCachingProcessingPipeline locking with IncludeTransformer may hang pipeline

2007-01-18 Thread Ard Schrijvers
Hello,

 
 Cocoon 2.1.9 introduced the concept of a lock in 
 AbstractCachingProcessingPipeline, an optimization to prevent 
 two concurrent requests from generating the same cached 
 content. The first request adds the pipeline key to the 
 transient cache to 'lock' the cache entry for that pipeline, 
 subsequent concurrent requests wait for the first request to 
 cache the content (by Object.lock()ing the pipeline key 
 entry) before proceeding, and can then use the newly cached content.
 
 However, this has introduced an incompatibility with the 
 IncludeTransformer: if the inclusions access the same 
 yet-to-be-cached content as the root pipeline, the whole 
 assembly hangs, since a lock will be made on a lock already 
 held by the same thread, and which cannot be satisfied.
 
 e.g.
 i) Root pipeline generates using sub-pipeline cocoon:/foo.xml
 ii) the cocoon:/foo.xml sub-pipeline adds its pipeline key 
 to the transient store as a lock.
 iii) subsequently in the root pipeline, the IncludeTransformer is run.
 iv) one of the inclusions also generates with 
 cocoon:/foo.xml, this sub-pipeline locks in 
 AbstractProcessingPipeline.waitForLock() because the 
 sub-pipeline key is already present.
 v) deadlock.

I do not understand one part of it. If a sub-pipeline is called, 
cocoon:/foo.xml, a lock is generated for this sub-pipeline separately, right? 
(If not, I do not understand why not. I suppose a lock is generated for the 
root pipeline, but also for every sub-pipeline individually; I am guessing, 
though, because I did not actually look at the code.) 

Now, if the include transformer calls this same sub-pipeline, which has its 
own lock, I do not see why a deadlock can occur. The root pipeline is locked, 
and the sub-pipeline is locked as well. The include transformer wants to 
include the same sub-pipeline, waits until that one is finished, and then 
includes it, right? 

I must be missing something. 

Regards Ard
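
The fix described in the quoted report below, storing Thread.currentThread() as the lock object so the same thread may re-enter, might be sketched like this (a hypothetical simplification, not the real AbstractCachingProcessingPipeline; the class and method names are made up):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hedged sketch of the proposed same-thread check: the lock value is the
// owning Thread, so a re-entrant call (e.g. via the IncludeTransformer)
// from the same thread proceeds instead of deadlocking on its own lock.
public class PipelineLocks {

    private final Map<String, Thread> locks = new ConcurrentHashMap<>();

    /**
     * Returns true when the caller may proceed to generate: either it just
     * acquired the lock, or it already holds it (the re-entrancy case).
     * A false return means another thread holds the lock, and the caller
     * should wait for that thread's cached result instead.
     */
    public boolean tryLockOrReenter(String pipelineKey) {
        Thread owner = locks.putIfAbsent(pipelineKey, Thread.currentThread());
        return owner == null || owner == Thread.currentThread();
    }

    public void release(String pipelineKey) {
        // Only the owning thread may release its lock.
        locks.remove(pipelineKey, Thread.currentThread());
    }

    public static void main(String[] args) {
        PipelineLocks locks = new PipelineLocks();
        System.out.println(locks.tryLockOrReenter("cocoon:/foo.xml")); // true: acquired
        System.out.println(locks.tryLockOrReenter("cocoon:/foo.xml")); // true: same thread re-enters
    }
}
```

As the report notes, this only solves the sequential case; parallel includes running in other threads would still wait on the root pipeline's lock.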

 
 I've found a (partial, see below) solution for this: instead 
 of a plain Object being added to the transient store as the 
 lock object, the Thread.currentThread() is added; when 
 waitForLock() is called, if the lock object exists, it checks 
 that it is not the same thread before attempting to lock it; 
 if it is the same thread, then waitForLock() returns success, 
 which allows generation to proceed. You lose the efficiency 
 of generating the cache only once in this case, but at least 
 it doesn't hang! With JDK1.5 this can be made neater by using 
 Thread#holdsLock() instead of adding the thread object itself 
 to the transient store.
 
 See patch file.
 
 However, even with this fix, parallel includes (when enabled) 
 may still hang, because they pass the not-the-same-thread 
 test, but fail because the root pipeline, which holds the 
 initial lock, cannot complete (and therefore satisfy the 
 lock condition for the parallel threads), before the threads 
 themselves have completed, which then results in a deadlock again.
 
 The complete solution is probably to avoid locking if the 
 lock is held by the same top-level Request, but that requires 
 more knowledge of Cocoon's processing than I (currently) have!
 
 IMHO unless a complete solution is found to this, then this 
 optimization should be removed completely, or else made 
 optional by configuration, since it renders the 
 IncludeTransformer dangerous.
 
 
 -- 
 This message is automatically generated by JIRA.
 -
 If you think it was sent incorrectly contact one of the 
 administrators: 
https://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see: http://www.atlassian.com/software/jira




RE: [jira] Created: (COCOON-1985) AbstractCachingProcessingPipeline locking with IncludeTransformer may hang pipeline

2007-01-18 Thread Ard Schrijvers

 
 Hi,
 
 The crux is that the sub-pipeline is called twice within the 
 context of 
 the master pipeline (once by the root pipeline, once by an include); 
 thus the pipeline keys which are the same are those for the 
 sub-pipeline, not the master pipeline.
 
 My 'broken' pipeline is too complex to explain, but it's basically 
 something like:

If this results in a deadlock, then there is something fundamentally wrong 
with this locking. Does the master pipeline lock its sub-pipeline until it is 
finished itself? That wouldn't make sense. 

I mean, for the setup below, the master would have a lock related to its cache 
key, where the cache key is something like: 

PK_G-file-cocoon:/foo?pipelinehash=-3411761154931530775_T-xslt-/page.xsl_S-xml-1

Then, as I understand it, this key is locked. Now the pipeline with 
pattern="foo" gets its own pipeline cache key, which is also locked. But after 
this one is finished, your problem indicates that the lock of this 
sub-pipeline is not cleared until the master pipeline is finished? That 
doesn't make sense to me.

Furthermore, if this setup gives problems, then wouldn't

<map:aggregate>
  <map:part src="cocoon:/foo"/>
  <map:part src="cocoon:/foo"/>
</map:aggregate>

result in the same deadlock? I must be missing something trivial

Ard

 
 <map:match pattern="master">
   <map:generate src="cocoon:/foo"/>
   <map:transform src="page.xsl"/> <!-- generates include element for cocoon:/included -->
   <map:transform type="include"/> <!-- includes "included" sub-pipeline -->
   <map:serialize/>
 </map:match>
 
 <map:match pattern="included">
   <map:generate src="cocoon:/foo"/>
   <map:transform src="included-page.xsl"/>
   <map:serialize/>
 </map:match>
 
 <map:match pattern="foo"> <!-- this gets called twice -->
   <map:generate ... />
   <map:serialize/>
 </map:match>
 
 Ellis.
 
 
 Ard Schrijvers wrote:
 
 Hello,
 
   
 
 Cocoon 2.1.9 introduced the concept of a lock in 
 AbstractCachingProcessingPipeline, an optimization to prevent 
 two concurrent requests from generating the same cached 
 content. The first request adds the pipeline key to the 
 transient cache to 'lock' the cache entry for that pipeline, 
 subsequent concurrent requests wait for the first request to 
 cache the content (by Object.lock()ing the pipeline key 
 entry) before proceeding, and can then use the newly cached content.
 
 However, this has introduced an incompatibility with the 
 IncludeTransformer: if the inclusions access the same 
 yet-to-be-cached content as the root pipeline, the whole 
 assembly hangs, since a lock will be made on a lock already 
 held by the same thread, and which cannot be satisfied.
 
 e.g.
 i) Root pipeline generates using sub-pipeline cocoon:/foo.xml
 ii) the cocoon:/foo.xml sub-pipeline adds it's pipeline key 
 to the transient store as a lock.
 iii) subsequently in the root pipeline, the 
 IncludeTransformer is run.
 iv) one of the inclusions also generates with 
 cocoon:/foo.xml, this sub-pipeline locks in 
 AbstractProcessingPipeline.waitForLock() because the 
 sub-pipeline key is already present.
 v) deadlock.
 
 
 
 I do not understand one part of it. If a sub-pipeline is 
 called, cocoon:/foo.xml, there is lock generated for this 
 sub-pipeline seperately, right? (if not, I do not understand 
 why it is not like this. I suppose a lock is generated for 
 the root pipeline, but as well for every sub-pipeline 
 individually. I suppose though, because i did not actually 
 look at the code). 
 
 Now, if the include transformer calls this same 
 sub-pipeline, which is having its own lock, I do not see why 
 a deadlock can occur? The root-pipeline is locked, the 
 sub-pipeline is locked as well. The include transformer wants 
 to include the same sub-pipeline, waits untill this one is 
 finished, then can includes it, right? 
 
 I most be missing something, 
 
 Regards Ard
 
   
 
 I've found a (partial, see below) solution for this: instead 
 of a plain Object being added to the transient store as the 
 lock object, the Thread.currentThread() is added; when 
 waitForLock() is called, if the lock object exists, it checks 
 that it is not the same thread before attempting to lock it; 
 if it is the same thread, then waitForLock() returns success, 
 which allows generation to proceed. You loose the efficiency 
 of generating the cache only once in this case, but at least 
 it doesn't hang! With JDK1.5 this can be made neater by using 
 Thread#holdsLock() instead of adding the thread object itself 
 to the transient store.
 
 See patch file.
 
 However, even with this fix, parallel includes (when enabled) 
 may still hang, because they pass the not-the-same-thread 
 test, but fail because the root pipeline, which holds the 
 initial lock, cannot complete (and therefore statisfy the 
 lock condition for the parallel threads), before the threads 
 themselves have completed, which then results in a deadlock again.
 
 The complete solution is probably to avoid locking if the 
 lock is held by the same top-level Request

RE: [jira] Created: (COCOON-1985) AbstractCachingProcessingPipeline locking with IncludeTransformer may hang pipeline

2007-01-18 Thread Ard Schrijvers
Hello Ellis,

 
 Hi Ard,
 
 I've not tried the double-aggregate thing (yet), but I've now 
 attached 
 to the bug a very simple repeatable test that demonstrates 
 the lock up 
 as I've experienced it.

I do not doubt your tests. I will try to find some time to verify your 
findings (and the aggregate thing), and your solution.


 
 Have fun!

I doubt it :-) 

 
 Ellis.

Regards Ard

 
 
 Ard Schrijvers wrote:
 
 Hi,
 
 The crux is that the sub-pipeline is called twice within the 
 context of 
 the master pipeline (once by the root pipeline, once by an 
 include); 
 thus the pipeline keys which are the same are those for the 
 sub-pipeline, not the master pipeline.
 
 My 'broken' pipeline is too complex to explain, but it's basically 
 something like:
 
 
 
 If this results in a deadlock, then there is something 
 basically wrong with this locking. Does the master pipeline 
 lock its subpipeline untill it is finished itself? That 
 wouldn't make sense. 
 
 I mean, for the thing below, the master would have a lock 
 related to the cachekey where the cachekey is something like: 
 
 PK_G-file-cocoon:/foo?pipelinehash=-3411761154931530775_T-xslt-/page.xsl_S-xml-1
 
 Then, as I would understand this key is locked. Now, the 
 pipeline with pattern=foo gets its own pipeline cachkey, 
 which is also locked. But after this one is finished, your 
 problem indicates that the lock of this sub pipeline is not 
 cleared untill the master pipeline is finished? This doesn't 
 make sense to me.
 
 Furthermore, if this setup gives problems, then wouldn't
 
 <map:aggregate>
   <map:part src="cocoon:/foo"/>
   <map:part src="cocoon:/foo"/>
 </map:aggregate>
 
 result in the same deadlock? I must be missing something trivial
 
 Ard
 
   
 
 <map:match pattern="master">
   <map:generate src="cocoon:/foo"/>
   <map:transform src="page.xsl"/> <!-- generates include element for cocoon:/included -->
   <map:transform type="include"/> <!-- includes "included" sub-pipeline -->
   <map:serialize/>
 </map:match>
 
 <map:match pattern="included">
   <map:generate src="cocoon:/foo"/>
   <map:transform src="included-page.xsl"/>
   <map:serialize/>
 </map:match>
 
 <map:match pattern="foo"> <!-- this gets called twice -->
   <map:generate ... />
   <map:serialize/>
 </map:match>
 
 Ellis.
 
 
 Ard Schrijvers wrote:
 
 
 
 Hello,
 
  
 
   
 
 Cocoon 2.1.9 introduced the concept of a lock in 
 AbstractCachingProcessingPipeline, an optimization to prevent 
 two concurrent requests from generating the same cached 
 content. The first request adds the pipeline key to the 
 transient cache to 'lock' the cache entry for that pipeline, 
 subsequent concurrent requests wait for the first request to 
 cache the content (by Object.lock()ing the pipeline key 
 entry) before proceeding, and can then use the newly 
 cached content.
 
 However, this has introduced an incompatibility with the 
 IncludeTransformer: if the inclusions access the same 
 yet-to-be-cached content as the root pipeline, the whole 
 assembly hangs, since a lock will be made on a lock already 
 held by the same thread, and which cannot be satisfied.
 
 e.g.
 i) Root pipeline generates using sub-pipeline cocoon:/foo.xml
 ii) the cocoon:/foo.xml sub-pipeline adds it's pipeline key 
 to the transient store as a lock.
 iii) subsequently in the root pipeline, the 
 
 
 IncludeTransformer is run.
 
 
 iv) one of the inclusions also generates with 
 cocoon:/foo.xml, this sub-pipeline locks in 
 AbstractProcessingPipeline.waitForLock() because the 
 sub-pipeline key is already present.
 v) deadlock.

 
 
 
 I do not understand one part of it. If a sub-pipeline is 
   
 
 called, cocoon:/foo.xml, there is lock generated for this 
 sub-pipeline seperately, right? (if not, I do not understand 
 why it is not like this. I suppose a lock is generated for 
 the root pipeline, but as well for every sub-pipeline 
 individually. I suppose though, because i did not actually 
 look at the code). 
 
 
 Now, if the include transformer calls this same 
   
 
 sub-pipeline, which is having its own lock, I do not see why 
 a deadlock can occur? The root-pipeline is locked, the 
 sub-pipeline is locked as well. The include transformer wants 
 to include the same sub-pipeline, waits untill this one is 
 finished, then can includes it, right? 
 
 
 I most be missing something, 
 
 Regards Ard
 
  
 
   
 
 I've found a (partial, see below) solution for this: instead 
 of a plain Object being added to the transient store as the 
 lock object, the Thread.currentThread() is added; when 
 waitForLock() is called, if the lock object exists, it checks 
 that it is not the same thread before attempting to lock it; 
 if it is the same thread, then waitForLock() returns success, 
 which allows generation to proceed. You loose the efficiency 
 of generating the cache only once in this case, but at least 
 it doesn't hang! With JDK1.5 this can be made neater by using 
 Thread#holdsLock() instead of adding

RE: WildcardMatcherHelper caching issue

2007-01-08 Thread Ard Schrijvers

 Ard,
 
 What is cached is the pattern, not the string to be matched 
 against it,
 so what you describe isn't a problem IIUC.

I already was a little amazed. But then, who uses always-changing patterns? 
In dynamic sitemaps or something? I do not really see a possible memory leak 
here.

Ard

 
 On Mon, 2007-01-08 at 10:30 +0100, Ard Schrijvers wrote:
  Hello,
  
  think I kind of missed this WildcardMatcherHelper until 
 now. From which Cocoon version on is this available? Can you 
 define in your matcher whether it should use this 
 WildcardMatcherHelper, or is this the default?
  
  Regarding the caching, currently it would seem to me like a 
 very possible memory leak. What if I have something like
  
  <map:part element="othermatcher" value="cocoon://foo/{date:MMddHHmmssSS}"/>
  
  or if you have an active forum build with cforms, and 
 2ervw3verv452345435wdfwfw.continue patterns are cached (or is 
 it only for caching pipelines?)
  
  This would imply a new cached pattern for every request. Of 
 course, the thing above with the date is stupid, but it is 
 too easy for a user to create memory leaks. The suggestion 
 that a user should choose between a caching or non-caching 
 WildcardMatcherHelper seems to me too difficult a judgement 
 for an average user to make. The option of a WeakHashMap 
 should be some sort of SoftHashMap (SoftReference) instead: 
 WeakReferences are deleted when no strong ref is available 
 any longer, so either there would be a strong ref (implying 
 the same memory leak) or there would be no strong ref, so all 
 cached patterns are removed on every GC. With SoftReferences 
 they are only removed when the JVM decides to do so (when low 
 on memory). But, IMO, it is not OK to have the JVM possibly 
 go low on memory, and to have the JVM remove cached patterns 
 at random (it makes more sense to keep the most used patterns 
 in memory). 
  
  I really think the best way is some simple LRUMemoryStore 
 with a maxitems configured by default to 1000 or something, 
 and possibly overridden by users who know more about it. By 
 default, every user can then easily work with it without 
 having to think about it. 
  
  Regards Ard
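
The bounded cache suggested in the quoted message could be sketched with a plain access-ordered LinkedHashMap (a hedged sketch, not Cocoon's actual LRUMemoryStore; the class name is made up):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hedged sketch of an LRU pattern cache with a configurable maximum size,
// so ever-changing patterns cannot grow the cache without bound.
public class LruPatternCache<K, V> extends LinkedHashMap<K, V> {

    private final int maxItems;

    public LruPatternCache(int maxItems) {
        super(16, 0.75f, true); // accessOrder=true gives LRU iteration order
        this.maxItems = maxItems;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        // Evict the least recently used entry once the bound is exceeded.
        return size() > maxItems;
    }

    public static void main(String[] args) {
        LruPatternCache<String, String> cache = new LruPatternCache<>(2);
        cache.put("a/**", "compiled-a");
        cache.put("b/**", "compiled-b");
        cache.get("a/**");               // touch a: b/** is now the eldest
        cache.put("c/**", "compiled-c"); // evicts b/**
        System.out.println(cache.keySet()); // [a/**, c/**]
    }
}
```

Access to the map would still need external synchronization in a multi-threaded matcher, which the sketch leaves out.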
 
 -- 
 Bruno Dumon http://outerthought.org/
 Outerthought - Open Source, Java  XML Competence Support Center
 [EMAIL PROTECTED]  [EMAIL PROTECTED]
 
 


RE: i18nTransformer problems with static pages

2007-01-03 Thread Ard Schrijvers
Sorry for the delay.

Shall I commit the StripNameSpaceTransformer to 
trunk/core/cocoon-pipeline/cocoon-pipeline-components/./transformation, or 
is there a more preferable location? Should I also add it to the branch?

Ard


  
   1) The lightweight StripNameSpaceTransformer ... Add
   this to trunk/branch or not?
 
 If no objections, I'll add my version of the 
 StripNameSpaceTransformer to the trunk/branch on short notice
 
  
  +1
  
   2) The XHTML serializer: Make it by default strip the list of
   namespaces we know people don't want to sent to the browser.
  
  -1 No, please don't. If you have a browser that does not understand 
  XHTML (like IE), don't feed it with XHTML! It's as simple as 
  that. If 
  it understands XHTML it also must be able to handle namespace 
  declarations and additional XML-specific attributes like 
 xml:space or 
  xml:lang. Do you want to suppress them as well?
  
  Such a behaviour might be valid for a HTMLSerializer though. But 
  actually I don't care for that one when we have 
  StripNameSpaceTransformer.
  
   About serializers: Does anybody know why we have a 
  serialization part
   in cocoon core and one in a serializers block? Is it 
  preferred to use
   serializers from the serializers block? Normally, I am using
   org.apache.cocoon.serialization.HTMLSerializer and configure
   doctype-public.
  
  Those from core are Xalan's serializers at the end. Those from 
  serializers block are own implementations from Cocoon once 
  made by Pier. 
  As Cocoon should not write serializer IMO I prefer the core ones.
 
 I agree on this one.
 
  A 
  better integration with Xalan community to get our wishes 
 applied to 
  those serializers might be desirable (one issue was the 
  closing of tags, 
  which must not be closed or consist only of a start tag in 
  HTML IIRC). 
  This would make our own implementations superfluous.
  
  Jörg
  
 


RE: i18nTransformer problems with static pages

2007-01-03 Thread Ard Schrijvers

 
 
 Ard Schrijvers skrev:
  Sry for the delay.
 
  Shall I commit the StripNameSpaceTransformer to
  trunk/core/cocoon-pipeline/cocoon-pipeline-components/./transformation,
  or is there a more suitable location? Should I also add it to the branch?

 Seems like a reasonable place as long as the cocoon-pipeline-components 
 have all the dependencies you need, 

Yes, it has. I just committed it to the trunk and the branch. I'm not sure 
whether it is common practice to create a Jira issue when committing something 
new and then resolve it. Should I normally do this? Or still do it now?

Ard

 if not, put it in the 
 cocoon-sitemap-components or cocoon-core.
 
 /Daniel
 
 


RE: i18nTransformer problems with static pages

2007-01-03 Thread Ard Schrijvers

 
 On 1/3/07, Ard Schrijvers [EMAIL PROTECTED] wrote:
 
  ...Not sure wether it is common practice to create a jira 
 issue for it when committing
  something new and resolve it?...
 
 I don't think it's common practice here but IMHO it's a Good Thing.

Locally we added an svn 'pre-commit' hook that rejects any commit whose log 
message does not contain a reference to a Jira issue. In Jira you can then find, 
via an issue, the commits that belong to it. Although it is sometimes a little 
annoying, it forces everybody here to always work according to Jira issues, and 
commits can always be easily traced from a given Jira issue. Not sure if it 
would make people happy in an open source community :-) 
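As an illustration of the check such a hook performs (the class and regex below are hypothetical, not our actual hook; a real hook would run something like this over the output of `svnlook log`):

```java
import java.util.regex.Pattern;

/** Hypothetical sketch of a pre-commit log-message check:
 *  accept only messages that reference a Jira key like COCOON-1909. */
public class JiraKeyCheck {
    // Matches keys such as COCOON-1909 or EXLBR-31
    private static final Pattern KEY = Pattern.compile("\\b[A-Z][A-Z0-9]+-\\d+\\b");

    public static boolean referencesIssue(String logMessage) {
        return logMessage != null && KEY.matcher(logMessage).find();
    }
}
```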

Ard

 
 -Bertrand
 


RE: i18nTransformer problems with static pages

2006-12-08 Thread Ard Schrijvers

 
  1) The lightweight StripNameSpaceTransformer ... Add
  this to trunk/branch or not?

If there are no objections, I'll add my version of the StripNameSpaceTransformer 
to the trunk/branch on short notice.

 
 +1
 
  2) The XHTML serializer: Make it by default strip the list of
  namespaces we know people don't want to sent to the browser.
 
 -1 No, please don't. If you have a browser that does not understand 
 XHTML (like IE), don't feed it XHTML! It's as simple as 
 that. If 
 it understands XHTML, it must also be able to handle namespace 
 declarations and additional XML-specific attributes like xml:space or 
 xml:lang. Do you want to suppress them as well?
 
 Such a behaviour might be valid for an HTMLSerializer though. But 
 actually I don't care about that one when we have the 
 StripNameSpaceTransformer.
 
  About serializers: Does anybody know why we have a 
 serialization part
  in cocoon core and one in a serializers block? Is it 
 preferred to use
  serializers from the serializers block? Normally, I am using
  org.apache.cocoon.serialization.HTMLSerializer and configure
  doctype-public.
 
 Those from core are Xalan's serializers in the end. Those from the 
 serializers block are Cocoon's own implementations, once 
 made by Pier. 
 As Cocoon should not write serializers, IMO, I prefer the core ones.

I agree on this one.

 Better 
 integration with the Xalan community, to get our wishes applied to 
 those serializers, might be desirable (one issue was the 
 closing of tags 
 that must not be closed or that consist only of a start tag in 
 HTML, IIRC). 
 This would make our own implementations superfluous.
 
 Jörg
 


RE: loading document in xsl dynamicly generated by cocoon

2006-12-08 Thread Ard Schrijvers
Hello Piotr,

I am cross-posting your message because it is clearly meant for the [EMAIL 
PROTECTED] list.

There are already some threads on this at 
http://marc.theaimsgroup.com/?l=xml-cocoon-users&r=1&w=2
(they also explain why we discourage the use of doc() in XSL and point to the 
include or cinclude transformer instead).

If you still have questions, please respond to the [EMAIL PROTECTED],

Regards Ard

 
 
 Hi.
 I got this problem: I need to generate XML and load it into XSL as an
 xsl:variable.
 Now I'm doing something like this:
 <xsl:variable name="filesList"
 select="doc('http://localhost:/Invoices/getDir/', $someValue))"/>
 And I have a corresponding match in sitemap.xmap:
 <map:match pattern="getDir/**">
   <map:generate type="directory" src="invoices/{1}/"/>
   <map:serialize type="xml"/>
 </map:match>
 
 Is there any possibility to do something like this:
 <xsl:variable name="filesList" select="doc('cocoon:/getDir/',
 $someValue))"/>
 
 I'm using Saxon by default.
 
 Greetings 
 PIotr
 
 


RE: i18nTransformer problems with static pages

2006-12-07 Thread Ard Schrijvers
Recapitulating this thread:

1) The lightweight StripNameSpaceTransformer is an option to strip intermediate 
namespaces you want to get rid of (like after the SQL transformer; I would like 
to get rid of them as fast as possible). Add this to trunk/branch or not?

2) The XHTML serializer: Make it by default strip the list of namespaces we 
know people don't want to send to the browser. Configurable: additional 
namespaces to be stripped.

About serializers: Does anybody know why we have a serialization part in cocoon 
core and one in a serializers block? Is it preferred to use serializers from 
the serializers block? Normally, I am using 
org.apache.cocoon.serialization.HTMLSerializer and configure doctype-public. 

Ard



 Mark Lundquist wrote:
 
  Is there ever a need to retain namespace declarations for namespaces
  that are not actually used in the result document, i.e. for which
  there is no element with that namespace?  I think the idea 
 is to just
  delete extraneous namespace declarations, not to delete them all...
 Yep, the problem with this approach is that you only know whether a
 namespace declaration has been used once you reach the end of the
 document (after checking that no element used it), while the 
 declaration
 is quite commonly on the root element. Buffering all the SAX events for
 each html page served by cocoon would be a problem :)
 
 What I was proposing would be simply to enable it by default (there are
 already too many options in cocoon, and if a page containing an i18n
 namespace declaration is not rendered by IE, by default cocoon should
 not send it), but limit its influence to a set of namespaces (all
 namespaces under http://cocoon.apache.org for example) and eventually
 have this set configurable by the user, so that in the 
 future there will be no need for
 remove-that-certain-unwanted-ns.xsl files :D
 
 Simone
 


RE: i18nTransformer problems with static pages

2006-12-05 Thread Ard Schrijvers


 You should add an extra stylesheet that removes superfluous namespace 
 attributes. This is what I've done:

I used to use this strategy as well, though recently I replaced this XSL 
transformer with a custom StripNameSpacesTransformer (just about 10 lines), 
which outperforms the slow XSL transformation by a factor of 30 for small XML 
docs, and by hundreds of times for bigger XML docs. I am not sure whether 
something like it is already available in cocoon in some transformer. 

If somebody is interested in the code, I can attach it,

Regards Ard
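The transformer itself is not attached to this mail, so as an illustration only, here is a minimal sketch of the same idea as a plain JAXP/SAX filter (the class name and details are hypothetical, and it uses no Cocoon APIs): element and attribute prefixes are dropped, and prefix mappings are simply not forwarded, so no namespace declarations get serialized.

```java
import java.io.StringReader;
import java.io.StringWriter;
import javax.xml.parsers.SAXParserFactory;
import javax.xml.transform.OutputKeys;
import javax.xml.transform.sax.SAXTransformerFactory;
import javax.xml.transform.sax.TransformerHandler;
import javax.xml.transform.stream.StreamResult;
import org.xml.sax.Attributes;
import org.xml.sax.InputSource;
import org.xml.sax.SAXException;
import org.xml.sax.helpers.AttributesImpl;
import org.xml.sax.helpers.XMLFilterImpl;

/** Hypothetical sketch: a single-pass SAX filter that strips all namespaces. */
public class StripNamespacesFilter extends XMLFilterImpl {

    // Swallow prefix mappings so no xmlns declarations are emitted.
    @Override public void startPrefixMapping(String prefix, String uri) { }
    @Override public void endPrefixMapping(String prefix) { }

    @Override public void startElement(String uri, String localName, String qName,
                                       Attributes atts) throws SAXException {
        AttributesImpl cleaned = new AttributesImpl();
        for (int i = 0; i < atts.getLength(); i++) {
            String q = atts.getQName(i);
            if (q.equals("xmlns") || q.startsWith("xmlns:")) continue; // drop declarations
            cleaned.addAttribute("", atts.getLocalName(i), atts.getLocalName(i),
                                 atts.getType(i), atts.getValue(i));
        }
        super.startElement("", localName, localName, cleaned); // prefix dropped
    }

    @Override public void endElement(String uri, String localName, String qName)
            throws SAXException {
        super.endElement("", localName, localName);
    }

    /** Helper: run the filter over an XML string and serialize the result. */
    public static String strip(String xml) throws Exception {
        SAXParserFactory spf = SAXParserFactory.newInstance();
        spf.setNamespaceAware(true);
        StripNamespacesFilter filter = new StripNamespacesFilter();
        filter.setParent(spf.newSAXParser().getXMLReader());
        SAXTransformerFactory tf = (SAXTransformerFactory) SAXTransformerFactory.newInstance();
        TransformerHandler serializer = tf.newTransformerHandler(); // identity transform
        serializer.getTransformer().setOutputProperty(OutputKeys.OMIT_XML_DECLARATION, "yes");
        StringWriter out = new StringWriter();
        serializer.setResult(new StreamResult(out));
        filter.setContentHandler(serializer);
        filter.parse(new InputSource(new StringReader(xml)));
        return out.toString();
    }
}
```

In Cocoon the same event handling would live in a Transformer implementation; the single-pass, buffering-free nature is what makes it so much faster than the XSL round trip.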

 
 <xsl:template match="*">
   <!-- remove element prefix (if any) -->
   <xsl:element name="{local-name()}">
     <!-- process attributes -->
     <xsl:for-each select="@*">
       <!-- remove attribute prefix (if any) -->
       <xsl:attribute name="{local-name()}">
         <xsl:value-of select="."/>
       </xsl:attribute>
     </xsl:for-each>
     <xsl:apply-templates/>
   </xsl:element>
 </xsl:template>
 
 Add a generic catchall template or you end up with nothing:
 
 <!-- = -->
 <!-- generic catchall template -->
 <!-- = -->
 <xsl:template match="text()">
   <xsl:copy>
     <xsl:apply-templates select="text()"/>
   </xsl:copy>
 </xsl:template>
 
 
 HTH
 
 Bye, Helma
 
 


RE: i18nTransformer problems with static pages

2006-12-05 Thread Ard Schrijvers

 
  ...I replaced this xsl transformer with a custom 
 StripNameSpacesTransformer
  (about just 10 lines), which outperforms the slow xsl 
 transformation a factor 30...
 
 This would be useful to have in Cocoon, go for it!

I thought it was too trivial :-) 

Will add it to trunk/branch

Ard

 -Bertrand
 


[SURVEY] cocoon eventcache block

2006-12-05 Thread Ard Schrijvers
Hello,

I am wondering if there is anybody using the cocoon eventcache block (without 
using hippo jars), or anybody planning to do so in the future? 

Regards Ard


-- 

Hippo
Oosteinde 11
1017WT Amsterdam
The Netherlands
Tel  +31 (0)20 5224466
-
[EMAIL PROTECTED] / http://www.hippo.nl
-- 


RE: Re: new avalon/excalibur test release

2006-10-12 Thread Ard Schrijvers
 Joerg Heinicke wrote:
 
  
  Yes, that should be the way to go. Furthermore (if it is 
 possible) you 
  should add a dependency on the xmlutil issue to the Cocoon 
 issue, so 
  that we get informed when xmlutil is fixed.
 
 Yes that sounds about right to me. Create the ticket and I'll 
 see to it 
 that it gets applied.
 
 OTOH, if you feel confident enough about the patch you can commit it 
 directly yourself, all Cocoon committers have write access to 
 excalibur/avalon svn.

I am pretty confident about the patch, but I am having some problems getting my 
password reset (long story...), so I cannot write yet. Anyway, I will create 
the issue in excalibur, try to set a dependency on the Cocoon issue, add a 
patch, and hopefully see it end up in cocoon. 

Regards Ard

 
 Cheers
 Jorg
 
 


[jira] Commented: (COCOON-1909) Cache validity of XSLT stylesheets does not reflect included or imported stylesheets.

2006-10-12 Thread Ard Schrijvers (JIRA)
[ 
http://issues.apache.org/jira/browse/COCOON-1909?page=comments#action_12441677 
] 

Ard Schrijvers commented on COCOON-1909:


The bug is in XSLTProcessorImpl, in public javax.xml.transform.Source resolve( 
String href, String base ), at List includes = (List)m_includesMap.get( base );. 
The problem lies in the base: the base is relative to the stylesheet calling 
the import, so if it is an import calling an import, the base is different from 
the main stylesheet, and therefore its validity is not added to the main 
stylesheet's aggregated validity.

I added a global m_id for the main stylesheet that is now used in 
XSLTProcessorImpl, and replaced base in List includes = 
(List)m_includesMap.get( base ); with List includes = (List)m_includesMap.get( 
m_id );. This fixes the invalidation of main stylesheets.

The bug will be solved once a new xmlutil jar is included in cocoon.

 Cache validity of XSLT stylesheets does not reflect included or imported 
 stylesheets.
 -

 Key: COCOON-1909
 URL: http://issues.apache.org/jira/browse/COCOON-1909
 Project: Cocoon
  Issue Type: Bug
  Components: - Components: Sitemap
Affects Versions: 2.1.9
Reporter: Conal Tuohy

 XSLT stylesheets which either import or include other stylesheets are cached 
 too aggressively: if you change an imported or included stylesheet the change 
 does not take effect until you update the main stylesheet.
 This bug is supposed to have been fixed years ago, but it still doesn't work 
 for us.

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators: 
http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see: http://www.atlassian.com/software/jira




RE: Closing JIRA issues

2006-10-12 Thread Ard Schrijvers

 
 On 10/11/06, Antonio Gallardo [EMAIL PROTECTED] wrote:
 
  ...IMHO, we should leave open this bug, because the fixi is 
 not finished
  yet until we update excalibur...
 
 Same here. If update excalibur is a Jira or bugzilla issue
 somewhere, I'd make a link or a dependency to it to indicate what's
 happening.

I am still a bit confused: I now have 
http://issues.apache.org/jira/browse/COCOON-1909, added a comment that it has 
been fixed, and noted that it needs an excalibur-xmlutil update. In 
excalibur-components, I created http://issues.apache.org/jira/browse/EXLBR-31 
and added the patch file.

Can I create some sort of dependency between the two, or do I need to make a 
cocoon JIRA issue that says "update excalibur jars" and somehow relate 
http://issues.apache.org/jira/browse/COCOON-1909 to it (in principle, when the 
update Jira issue is closed, 1909 should then also be closed)?

Sorry for the perhaps trivial questions, but I am just not (yet) familiar 
enough with the methodologies.

Ard

 
 -Bertrand
 


RE: [VOTE] Lars Trieloff as a new Cocoon committer

2006-10-03 Thread Ard Schrijvers

 
 So please cast your votes!
 -- 

+1, welcome!



RE: Caching jx *without* flow

2006-10-02 Thread Ard Schrijvers

 
 Leszek Gawron escribió:
  If user wants to make JXTG automatically cacheable he/she must 
  explicitly state it in configuration:
 
  <map:generator src="org.apache.cocoon.template.JXTemplateGenerator">
    <automatic-cache-key>
      <use-sitemap-parameters>true</use-sitemap-parameters>
      <use-request-parameters>true</use-request-parameters>
      <use-request-attributes>true</use-request-attributes>
      <use-session-parameters>false</use-session-parameters>
      <use-cookie-parameters>false</use-cookie-parameters>
    </automatic-cache-key>
  </map:generator>
 Hi Leszek,
 
 Sorry to join the party too late. I have been busy the last 
 months, but 
 I am still alive. :-)

I am sorry as well, but as was mentioned, I was on holiday until this morning. 
So now, after being awake for 34 hours, having had a terrible flight, and while 
wading through 2782 unread mails, I will try to give my two cents about it:

First of all, there are two kinds of caching issues which tend to get mixed up 
throughout the thread:

1) The jx:import, where a change does not get picked up without re-saving the 
parent (main) template. As in the TraxTransformer, we could, depending on a 
check-includes parameter, include the validity objects of the imported 
templates (if it should also work for imports that import a template, it would 
have to be recursive, implying perhaps a small performance decrease for initial 
runs). In XSL it works by setting check-includes to true; then changing an 
imported XSL works without saving the main one. The problem remains for an XSL 
imported by an imported XSL, because the validities are not added recursively, 
only one level deep (not too hard to fix either, I think; see my comment in 
http://issues.apache.org/jira/browse/COCOON-1909, but that code is not part of 
cocoon). Or we leave it the way it is now, and let people save the main 
template after changing an imported one. 

2) The cache key that is created. Actually, this is the only one affected by 
the suggested patch for the JXTG, and it is independent of the former issue. 
Leszek seems to be wondering why it is needed. After all, he showed (in a quite 
cool way, by the way) how to just put it in the jx template, like 

<page
  jx:cache-key="${Packages.org.apache.cocoon.template.CocoonParametersCacheKeyBuilder.buildKey(
    cocoon.parameters )}"
  jx:cache-validity="${some validity}">
  <.../>
</page>

and add a CocoonParametersCacheKeyBuilder class. This works; only, it is hard 
to specify which parameters you do and do not want. Also, how do you include, 
for example, one specific session value you want the cache to depend on, or the 
current date? This means adding multiple static buildKey's, or overloading it, 
or whatever. But then, why does it not work like this for the TraxTransformer 
by default, i.e. that you add your own CocoonParametersCacheKeyBuilder and 
build the keys? There is a very simple reason: it is just way easier to include 
the sitemap parameters (assuming all the parameters in the configuration are 
set to false, which is almost always the case; I have already mailed many times 
to the user list: *always* set 
<use-request-parameters>false</use-request-parameters> for your 
TraxTransformer. Crawlers tend to come along with arbitrary parameters, blowing 
up your cache and leaving it with unreachable cache keys). The reason the 
TraxTransformer by default puts all sitemap parameters in its cache key is that 
it is *very* easy: you can hardly make an error! 
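To make concrete what such a key builder could do (the class below is a hypothetical sketch, not the CocoonParametersCacheKeyBuilder mentioned above), a stable key can be built by sorting the parameter names and concatenating name=value pairs:

```java
import java.util.Map;
import java.util.TreeMap;

/** Hypothetical sketch of a parameters-based cache key builder. */
public class ParametersCacheKeyBuilder {
    public static String buildKey(Map<String, String> parameters) {
        StringBuilder key = new StringBuilder();
        // TreeMap iterates in sorted key order, so the key does not
        // depend on the order in which parameters were declared.
        for (Map.Entry<String, String> e : new TreeMap<>(parameters).entrySet()) {
            key.append(e.getKey()).append('=').append(e.getValue()).append(';');
        }
        return key.toString();
    }
}
```

The point of sorting is exactly the "you can hardly make an error" property: two requests with the same parameter values always hash to the same key.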

And this is exactly the reason why it is nice to have a way to add, by default, 
the sitemap parameters defined in the JXTG to the cache key. It is exactly the 
same as for the TraxTransformer. The only difference is that for the 
TraxTransformer you *need* to have them passed in as sitemap parameters to have 
them available in the XSL, so you cannot forget one; for the suggested JXTG, 
you have to think about them and not forget. I do agree that jx:includeInKey 
and jx:excludeFromKey are completely redundant, and therefore probably also 
never used. 

Then there might be an argument that the jx template is too dynamic to have it 
cached on simple sitemap parameters. Leszek states "The jx object model does 
not get narrowed only to cocoon parameters", so you are still able to use 
cocoon.session and cocoon.request, although these two examples are easily added 
to sitemap parameters (you know which ones the cache depends on). Caching 
becomes impossible when you use operations in your jx template with outcomes 
that cannot be known at sitemap level. Well, in that case, do you really want 
caching at all? Use a parameter like Antonio suggests. Don't forget that, 
implicitly, the very same assumption is made for the TraxTransformer: you can 
easily use XSLT extensions, with date functions, which do not get included in 
the cache key/validity, implying faulty behavior. This also implies that using 
the very neat <i18n:date pattern="MMdd"/> in an XSL, which gets translated by 
the 

[jira] Commented: (COCOON-1909) Cache validity of XSLT stylesheets does not reflect included or imported stylesheets.

2006-09-07 Thread Ard Schrijvers (JIRA)
[ 
http://issues.apache.org/jira/browse/COCOON-1909?page=comments#action_12433063 
] 

Ard Schrijvers commented on COCOON-1909:


The problem is a little more sophisticated than depicted above:

First of all, the TraxTransformer allows you to set a parameter, 
check-includes, to true, to add the validity of imported/included stylesheets. 

From the TraxTransformer:

// Get a Transformer Handler if we check for includes
// If we don't check, the handler is obtained during setConsumer()
try {
    if ( _checkIncludes ) {
        XSLTProcessor.TransformerHandlerAndValidity handlerAndValidity =
            this.xsltProcessor.getTransformerHandlerAndValidity(this.inputSource, null);
        this.transformerHandler = handlerAndValidity.getTransfomerHandler();
        this.transformerValidity = handlerAndValidity.getTransfomerValidity();
    } else {
        this.transformerValidity = this.inputSource.getValidity();
    }
} catch (XSLTProcessorException se) {
    throw new ProcessingException("Unable to get transformer handler for "
                                  + this.inputSource.getURI(), se);
}

So the XSLTProcessor (org.apache.excalibur.xml.xslt.XSLTProcessor) returns the 
validity of the imported/included stylesheets. Changing one of the imported 
stylesheets will now affect the (cached) pipeline, since one of the validities 
in the AggregatedValidity is no longer valid.

The only problem is that this check for includes is not done recursively, but 
only one level deep. Then again, the flaw does not seem to be in cocoon, but 
rather in org.apache.excalibur.xml.xslt.XSLTProcessorImpl. 

public TransformerHandlerAndValidity getTransformerHandlerAndValidity( Source 
stylesheet, XMLFilter filter )
throws XSLTProcessorException

must be made recursive for each included stylesheet to get this working. Not 
sure if that is a little overkill. Since we are used to working with many 
included XSLs, which include other ones, etc., to keep XSLs generic, I 
implemented a monitor for development mode that simply clears my 
DefaultTransientStore cache, so that I do not have to save the parent XSLs.  
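The recursion being suggested can be sketched abstractly (this is not the Excalibur API; URIs stand in for the validity objects, and the include map is assumed to already be available from parsing):

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

/** Hypothetical sketch: collect a stylesheet and everything it includes,
 *  to any depth, guarding against include cycles. */
public class IncludeCollector {
    public static List<String> collectRecursively(String root,
                                                  Map<String, List<String>> includes) {
        List<String> result = new ArrayList<>();
        Deque<String> todo = new ArrayDeque<>();
        Set<String> seen = new HashSet<>();
        todo.push(root);
        while (!todo.isEmpty()) {
            String uri = todo.pop();
            if (!seen.add(uri)) continue; // already visited: include cycle
            result.add(uri);
            for (String child : includes.getOrDefault(uri, List.of())) {
                todo.push(child);
            }
        }
        return result;
    }
}
```

An aggregated validity built from all URIs returned here would invalidate the pipeline when any transitively included stylesheet changes, not just the first level.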

Ard



 Cache validity of XSLT stylesheets does not reflect included or imported 
 stylesheets.
 -

 Key: COCOON-1909
 URL: http://issues.apache.org/jira/browse/COCOON-1909
 Project: Cocoon
  Issue Type: Bug
  Components: - Components: Sitemap
Affects Versions: 2.1.9
Reporter: Conal Tuohy

 XSLT stylesheets which either import or include other stylesheets are cached 
 too aggressively: if you change an imported or included stylesheet the change 
 does not take effect until you update the main stylesheet.
 This bug is supposed to have been fixed years ago, but it still doesn't work 
 for us.





RE: cocoon ehcache usage vs. hibernate + spring

2006-08-30 Thread Ard Schrijvers

 This has come up a few times before, independent of it being used in  
 hibernate's session factory.
 
 http://marc.theaimsgroup.com/?l=xml-cocoon-dev&m=110846153823635&w=2


This problem now seems to be on the ehcache list as well:
http://sourceforge.net/mailarchive/forum.php?thread_id=30378955&forum_id=48004

I am also seeing the "The cocoon-ehcache-X Cache is not alive" error from time 
to time, but have not had time to check when exactly it happens. 

Another problem we have with ehcache 1.2.2 (though it might always have been 
there, because our diskPersistent store did not survive JVM restarts before) is 
that I don't know what it does on a JVM restart. I suppose that it stores its 
memory store keys and disk store keys to cocoon-ehcache-X.index on shutdown, 
and on starting up re-populates its cache according to that index (with the LRU 
and so on correctly restored). We have large sites running, with disk stores 
easily growing to 1 Gb in a couple of days (keeping only 1000 items in memory). 
Start-up now takes insanely long when the disk store is large (up to 10 minutes 
for a 1 Gb disk store). Are other people having this problem?

Regards Ard  

 
 Jorg
 
 On 22 Aug 2006, at 11:08, Leszek Gawron wrote:
 
  It looks like cocoon ehcache based store does not live happily with
  hibernate's session factory (spring managed):
 
  304859 [Shutdown] INFO / - Closing Spring root WebApplicationContext
  304859 [Shutdown] INFO org.springframework.web.context.support.XmlWebApplicationContext - Closing application context [Root WebApplicationContext]
  304859 [Shutdown] INFO org.springframework.beans.factory.support.DefaultListableBeanFactory - Destroying singletons in {org.springframework.beans.factory.support.DefaultListableBeanFactory defining beans [filterChainProxy,httpSessionContextIntegrationFilter,basicProcessingFilter,basicProcessingFilterEntryPoint,exceptionTranslationFilter,filterSecurityInterceptor,roleVoter,accessDecisionManager,userDetailsService,userCache,daoAuthenticationProvider,anonymousAuthenticationProvider,testingAuthenticationProvider,rememberMeAuthenticationProvider,authenticationManager,beanSecurityInterceptor,beanSecurityAdvisor,placeholderConfig,org.springframework.beans.factory.annotation.RequiredAnnotationBeanPostProcessor,dataSource,sessionFactory,baseHibernateDao,transactionManager,annotationTransactionAttributeSource,transactionInterceptor,transactionAdvisor,org.springframework.aop.framework.autoproxy.DefaultAdvisorAutoProxyCreator,clientDao,transportOrderDao,orderLogEntryDao,clientService,transportOrderService,org.springframework.scheduling.quartz.SchedulerFactoryBean]; root of BeanFactory hierarchy}
  304859 [Shutdown] INFO org.springframework.scheduling.quartz.SchedulerFactoryBean - Shutting down Quartz Scheduler
  304859 [Shutdown] INFO org.quartz.core.QuartzScheduler - Scheduler DefaultQuartzScheduler_$_NON_CLUSTERED shutting down.
  304859 [Shutdown] INFO org.quartz.core.QuartzScheduler - Scheduler DefaultQuartzScheduler_$_NON_CLUSTERED paused.
  304859 [Shutdown] INFO org.quartz.core.QuartzScheduler - Scheduler DefaultQuartzScheduler_$_NON_CLUSTERED shutdown complete.
  304859 [Shutdown] INFO org.springframework.orm.hibernate3.LocalSessionFactoryBean - Closing Hibernate SessionFactory
  304859 [Shutdown] INFO org.hibernate.impl.SessionFactoryImpl - closing
  304859 [Shutdown] ERROR org.springframework.beans.factory.support.DefaultListableBeanFactory - Destroy method on bean with name 'sessionFactory' threw an exception
  java.lang.IllegalStateException: The cocoon-ehcache-1 Cache is not alive.
  at net.sf.ehcache.Cache.checkStatus(Cache.java:1062)
  at net.sf.ehcache.Cache.dispose(Cache.java:939)
  at net.sf.ehcache.CacheManager.shutdown(CacheManager.java:538)
  at org.hibernate.cache.EhCacheProvider.stop(EhCacheProvider.java:145)
  at org.hibernate.impl.SessionFactoryImpl.close(SessionFactoryImpl.java:756)
  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
  at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
  at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
  at java.lang.reflect.Method.invoke(Method.java:585)
  at org.springframework.orm.hibernate3.LocalSessionFactoryBean$TransactionAwareInvocationHandler.invoke(LocalSessionFactoryBean.java:1124)
  at $Proxy10.close(Unknown Source)
  at org.springframework.orm.hibernate3.LocalSessionFactoryBean.destroy(LocalSessionFactoryBean.java:1078)
  at org.springframework.beans.factory.support.DisposableBeanAdapter.destroy(DisposableBeanAdapter.java:97)
  at  
  
 

RE: [Summary] [Vote] Ard Schrijvers as a new Cocoon committer

2006-08-22 Thread Ard Schrijvers

 Ard Schrijvers wrote:
  
   [ snip ]
   
 Ard, if you accept your nomination, please get familiar with 
 http://www.apache.org/dev/new-committers-guide.html and send 
 your signed CLA to the ASF Secretary.
   
   The secretary recorded your CLA late last week, so now
   ready to go to the next phase. 
  
  Great!
  
   Your name is recorded
   as Ard Schryuers ... is that correct?
  
  Is it hard to change is into Ard Schrijvers :-) 
 
 A bit, but can be done. Does that match what was sent
 on the CLA and it was just that the secretary couldn't
 read the fax?

"Ard Schrijvers" was what was sent in the fax, and I really tried to write it 
as clearly as possible :-). A "v" quickly looks like a "u" of course, and an 
"ij" like a "y", certainly when it is in a different language and in my 
handwriting. Anyway, great that you can fix it!

 
 Please also let us know which user id you prefer and what 
 your forwarding email  address is.
   
   Please provide that info to private AT cocoon.apache.org
   Alternative committer IDs too please.
  
  Should I do this after my name is corrected, or does this not matter?
 
 Do it now.

Great! 

Regards Ard

 
 -David
 


RE: FW: [cs.uu.nl #4642] [Fwd: watch out with mirror http://apache.cs.uu.nl/]

2006-08-22 Thread Ard Schrijvers

 
 Ard Schrijvers wrote:
  
  I did not know all this, and addressing it in this way to 
 the dev list might not be the most appropriate way :S. All I 
 need to know now, is how I can reach the correct person, from 
 infrastructure I suppose, that the issue can be resolved.
 
 Try the infrastructure at a.o mailing list.
 Various people there would help.
 
 http://www.apache.org/dev/infra-mail.html

Thx, 

Ard

 
 -David
 


RE: watch out with mirror http://apache.cs.uu.nl/

2006-08-21 Thread Ard Schrijvers

 Ard Schrijvers wrote:
  Hello,
 
  everybody using mirror http://apache.cs.uu.nl/ in his maven 
 build.properties should be aware of downloading wrong jars. 
 There seem to be strange rewrite rules configured, which 
 rewrites you a couple of times when you want to download 
 certain jakarta jars, and then just downloads 
 http://jakarta.apache.org/site/downloads/index.html. You end 
 up with hard to solve problems because of nonsense jars just 
 containing an html page.

 [snip]
  Ps: How can we inform the right people that this flaw gets 
 fixed? If you can't rely on mirrors to return the right jars, 
 we really end up in unmanageable situations.

 I'll tell them first thing tomorrow morning, over coffee :)

Great and thx!

 
 Cheers,
 Sandor
 
 


RE: watch out with mirror http://apache.cs.uu.nl/

2006-08-21 Thread Ard Schrijvers

 
  everybody using mirror http://apache.cs.uu.nl/ in his maven  
  build.properties should be aware of downloading wrong jars. There  
  seem to be strange rewrite rules configured, which rewrites you a  
  couple of times when you want to download certain jakarta 
 jars, and  
  then just downloads http://jakarta.apache.org/site/downloads/ 
  index.html. You end up with hard to solve problems because of  
  nonsense jars just containing an html page.
 
  For example:
  http://apache.cs.uu.nl/dist/jakarta/jakarta-slide
  http://apache.cs.uu.nl/dist/jakarta/jakarta-bcel
  http://apache.cs.uu.nl/dist/jakarta/jakarta-bsf
 
 
 That's odd.
 
 You can try contacting [EMAIL PROTECTED], but my experience from 
 the past  
 shows that they don't seem to care much about what mirrors do. Best  
 bet probably is to contact the mirror admins.
 
  I did not test further, but it seems that jakarta projects, from  
  which the jars are not present in apache.cs.uu.nl, are redirected,  
  instead of returning a 404. If you for example have the dependency
 
  <dependency>
    <id>jakarta-slide:slide-stores</id>
    <version>a-random-version-here</version>
  </dependency>
 
 Just to set things clear, this is a maven1-only repo right ?

I think so... I was using maven1 when I ran into this problem. But it seems 
that Sandor Spruit can do something about getting it fixed,

Ard

 
 
 Jorg
 
 
 


RE: [Summary] [Vote] Ard Schrijvers as a new Cocoon committer

2006-08-21 Thread Ard Schrijvers

 [ snip ]
 
   Ard, if you accept your nomination, please get familiar with 
   http://www.apache.org/dev/new-committers-guide.html and send 
   your signed CLA to the ASF Secretary.
 
 The secretary recorded your CLA late last week, so now
 ready to go to the next phase. 

Great!

 Your name is recorded
 as Ard Schryuers ... is that correct?

Is it hard to change it into Ard Schrijvers? :-) 

 
   Please also let us know which user id you prefer and what 
   your forwarding email  address is.
 
 Please provide that info to private AT cocoon.apache.org
 Alternative committer IDs too please.

Should I do this after my name is corrected, or does this not matter?

Thx for the info,

Regards Ard

 
 -David
 


FW: [cs.uu.nl #4642] [Fwd: watch out with mirror http://apache.cs.uu.nl/]

2006-08-21 Thread Ard Schrijvers
About the broken mirror http://apache.cs.uu.nl/: I was premature. The flaw 
seems to be way more serious and appears to be in apache-dist/jakarta/.htaccess 
in general! 

 Ard,
 
 you are spreading disinformation.
 
 In dist/jakarta there is a .htaccess file that does the redirects.
 You can check this by retrievig the file from the source site.
 
   rsync rsync.apache.org::apache-dist/jakarta/.htaccess .
 
 -- the mirror on cs.uu.nl is up-to-date, and
 -- it is doing what it is supposed to do

It is not. In my former mail I did not state that the problem is at 
http://apache.cs.uu.nl/. The mirror is not doing what it is supposed to do, but 
the problem originates somewhere else, so the fault is *not* at 
http://apache.cs.uu.nl/. 

 -- you could and should have checked this before you posted
 
 -- stop spreading disinformation about apache.cs.uu.nl
 -- retract your false information on the 'dev@cocoon.apache.org' list
 -- file a bug report with jakarta people, if you think something is
wrong.

Yes, I do think there is something terribly wrong! Maven should guarantee 
correct jars. If, due to some configuration error, wrong jars end up in my 
project, then something is wrong. So I do not understand why apache member Henk 
Penning shoots the messenger instead of seeing the severity of the issue and 
helping to get it resolved. 

So, anyhow, does anyone know how I can reach the correct person to file this 
problem so that it gets resolved?

Regards Ard

 
 regards,
 
 Henk Penning
 


RE: FW: [cs.uu.nl #4642] [Fwd: watch out with mirror http://apache.cs.uu.nl/]

2006-08-21 Thread Ard Schrijvers
/snip
 And, in defense, our building needs to be empty by tomorrow 
 afternoon. 
 Renovation, reconstruction, network upgrade etc. It's a mess 
 over here 
 right now! I was surprised he found the time to respond, and so soon!!

I did not know all this, and addressing it this way on the dev list might 
not have been the most appropriate approach :S. All I need to know now is how I can 
reach the correct person, from infrastructure I suppose, so that the issue can be 
resolved.

 
 I'm sure you'd understand he's not in the mood to address 
 Apache issues 
 on remote sites, if you could look only into our offices right now :)

Accepted!! :-) 

 
 cheers,
 Sandor


watch out with mirror http://apache.cs.uu.nl/

2006-08-20 Thread Ard Schrijvers
Hello,

everybody using the mirror http://apache.cs.uu.nl/ in their Maven build.properties 
should be aware that they may be downloading wrong jars. There seem to be strange rewrite 
rules configured, which redirect you a couple of times when you want to 
download certain Jakarta jars, and then simply serve 
http://jakarta.apache.org/site/downloads/index.html. You end up with hard-to-solve 
problems because of nonsense jars that just contain an HTML page.

For example:
http://apache.cs.uu.nl/dist/jakarta/jakarta-slide
http://apache.cs.uu.nl/dist/jakarta/jakarta-bcel
http://apache.cs.uu.nl/dist/jakarta/jakarta-bsf

I did not test further, but it seems that Jakarta projects whose jars are not 
present on apache.cs.uu.nl are redirected instead of returning a 404. 
If you for example have the dependency 

<dependency>
  <id>jakarta-slide:slide-stores</id>
  <version>a-random-version-here</version>
</dependency>

and you have the mirror http://apache.cs.uu.nl/ configured, it will download a 
bogus jar because you are redirected. I told everybody I am developing with to 
remove this mirror immediately.

Regards Ard

PS: How can we inform the right people so that this flaw gets fixed? If you can't 
rely on mirrors to return the right jars, we really end up in unmanageable 
situations.


-- 

Hippo
Oosteinde 11
1017WT Amsterdam
The Netherlands
Tel  +31 (0)20 5224466
-
[EMAIL PROTECTED] / http://www.hippo.nl
-- 


ehcache in branch_2_1_X

2006-08-04 Thread Ard Schrijvers
Hello,

the ehcache-1.2 version currently in branch_2_1_X needs to be updated to 
ehcache-1.2.2. The 1.2 version has a problem where the spool thread dies 
(this happened for me every few days, leaving your ehcache effectively a 
memory store only: it is as if overflowToDisk=false).

According to the ehcache mailing list, updating to 1.2.2 solves this annoying 
bug, which seemed to be present in ehcache-1.1 as well. 
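For reference, this is roughly what an ehcache 1.x cache configuration with disk 
overflow enabled looks like (a sketch only; the name and limits here are 
illustrative, not Cocoon's actual settings):

```xml
<!-- illustrative ehcache.xml fragment; name and limits are made up -->
<cache name="cocoon-sample-cache"
       maxElementsInMemory="10000"
       eternal="false"
       timeToIdleSeconds="3600"
       timeToLiveSeconds="3600"
       overflowToDisk="true"/>
```

With the spool-thread bug, entries evicted from memory silently stop being 
written to disk, which is why the cache behaves as if overflowToDisk were false.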

http://sourceforge.net/project/showfiles.php?group_id=93232

Hopefully I am allowed to do the update myself on short notice :-) 

Which version does the Cocoon trunk use? I cannot find it.

Regards Ard


-- 

Hippo
Oosteinde 11
1017WT Amsterdam
The Netherlands
Tel  +31 (0)20 5224466
-
[EMAIL PROTECTED] / http://www.hippo.nl
-- 


RE: [Summary] [Vote] Ard Schrijvers as a new Cocoon committer

2006-08-01 Thread Ard Schrijvers
Hello All,

Thanks a lot everybody for the unanimous +1 votes :-) And 
Reinhard, great that you proposed me! I have sent in my CLA, so hopefully everything 
is arranged on short notice and I actually acquire those committer-only 
privileges :-)

So, I probably now have to say a little about myself for those who haven't met 
me before. I studied theoretical physics in Amsterdam (UvA) and graduated 
after 10 years (sadly enough it could never really grasp my attention, so I had 
some delays here and there..) in the direction of computational physics. 
Computational physics was newly added as a graduation direction, and mainly 
focused on computer algorithms to simulate physical phenomena (Quantification of 
spatiotemporal phenomena resulting from reaction-diffusion processes was my 
project). Basically, I only programmed a little in C and Mathematica during my 
studies (the latter of which has always astonished me completely: think of 
whatever complex integral and Mathematica will answer it for you in a 
nanosecond, brilliant!), very basic stuff, and that was about it. 

For a few years now I have been working for Hippo, and have been involved in 
building many, many sites with Cocoon. Since we have moved to sites that 
keep getting larger and larger, I have been really focusing on getting the 
best performance out of Cocoon. It is so easy to make a mistake: one typo and 
your performance is gone. But when you keep track of your performance, 
Cocoon can be really, really fast.

But you have to know quite a lot to make Cocoon your number 1 
framework. I have been helping out on the user list whenever I thought I could 
add some knowledge, and apparently this added up to 240 emails this year 
alone... since I always think a lot about each answer, this adds up to about 240 
times 4 hours ~ 7 months of work :-). Clearly, I have been experiencing Cocoon 
mainly as a user, but for about a year now I have been diving into the code 
whenever I had time for it. I do realize there is a ton of code in Cocoon, and I 
still have to learn very much about it, but when there is a will there is a 
road (I always like to translate Dutch expressions directly into English :-) ), 
so I think I will manage!

Anyway, I am looking forward to working together with all of you! 

Regards Ard, and hopefully I'll meet you ALL at the Cocoon GetTogether in Amsterdam!


 Reinhard Poetz wrote:
  
  I want to propose Ard Schrijvers as a new Cocoon committer. 
 People who 
  followed our mailing lists will find _many_ quality answers 
 to users 
  questions. But that's not all: Ard is an expert for 
 well-cached Cocoon 
  applications. His latest work on caching and the way how he is 
  approaching the problem (discuss issues with people on the 
 mailing list 
  and then provide patches) will certainly help us to finally 
 solve our 
  problems in this field.
  
  But that's not all. According to many, many mails (~ 240 in 
 the last 12 
  months) he knows a lot about Cocoon Forms and Cocoon in general too.
  
  I met Ard in Dublin and I remember that I had some 
 interesting talks 
  with him. I got the sense that he is able to deal with a 
 problem without 
  forgetting the big picture around.
  
  Having said enough about his functional skills I also want 
 to mention 
  that he is a great guy with a good sense of humor. I'm sure 
 that we as 
  community will be stronger in every respect with Ard being 
 a committer 
  than without him.
 
 We got 23 positive votes and no negative one. Therefore, 
 congratulations Ard and 
 welcome aboard!
 
 Ard, if you accept your nomination, please get familiar with 
 http://www.apache.org/dev/new-committers-guide.html and send 
 your signed CLA to 
 the ASF Secretary.
 
 Please also let us know which user id you prefer and what 
 your forwarding email 
 address is.
 
 -- 
 Reinhard Pötz   Independent Consultant, Trainer  (IT)-Coach 
 
 {Software Engineering, Open Source, Web Applications, Apache Cocoon}
 
 web(log): http://www.poetz.cc
 
 
   


RE: RE: [Summary] [Vote] Ard Schrijvers as a new Cocoon committer

2006-08-01 Thread Ard Schrijvers

 
 
 On 8/1/06, Ard Schrijvers [EMAIL PROTECTED] wrote:
 
  (Quantification of spatiotemporal phenomena resulting from
  reaction-diffusion processes was my project...
 
 Wow! Looks like we'll have to keep you busy to prevent you from
 writing a QuantificationOfSpatiotemporalPhenomenaTransformer ;-)

Now you mention it... :-) 

 
 Welcome Ard, it's really good to have you onboard!
 
 -Bertrand
 


RE: Continuations consume ram, possible solutions?

2006-07-31 Thread Ard Schrijvers
Hello,

 
 +1, much simpler to implement, much simpler to use, no hidden 
 side effects.
 
 Ard, you carrying this out? :) :)
 

Yes, I will try, but I have some doubts/questions/ideas that I would like to 
share in order to find the best way to implement this new janitor:

1) I will try to implement it initially for the cocoon-2.1.X version.
2) In my opinion, it doesn't make sense to see the Excalibur StoreJanitor and 
the ContinuationJanitor as two separate janitors trying to free memory when 
the JVM is low on memory. The StoreJanitor and the ContinuationJanitor are in 
my opinion one and the same, because, depending on some choices, the 
janitor should try to free memory either from the cache or from the continuations 
(or perhaps, in a special case, from both).

I have been thinking about changing the StoreJanitor anyway, to be able to 
define stores that the janitor never tries to clear (like the 
defaultTransientStore containing for example 30 compiled XSLs; I never want 
the StoreJanitor removing these).

As the continuations will also have to be managed, the name StoreJanitor 
seems inappropriate to me. Perhaps just CocoonJanitor? 

Anyway, stores are already registered with the CocoonJanitor (let's call it 
CocoonJanitor from now on). I am not sure whether this is possible for 
continuations; otherwise I would have to do a 
manager.hasService(ContinuationsManager.ROLE) in the CocoonJanitorImpl, which 
is a bad solution, right? Does anybody have an idea how to register the 
ContinuationsManager with the CocoonJanitor?

3) When maxcontinuations is reached, we have two options: 
* hard limit: adding one continuation to the ContinuationsManagerImpl means at the 
same moment actively removing one from it according to LRU. 
* soft limit: between consecutive CocoonJanitor runs, the continuations 
are allowed to exceed the maximum. When this maximum is exceeded, the 
CocoonJanitor will prune the list back to maxcontinuations according to LRU.

I am in favor of the soft limit, to keep insertion of new continuations as lean 
and fast as possible. WDYT? 

4) When the CocoonJanitor runs and the JVM turns out to be low on memory, we have 
multiple choices:
* Try to free from caches which are configured to be freeable (a 
nicer word for which I hope to find). I also want the janitor to free memory from 
all freeable caches at once, not like the StoreJanitor does now: free from one 
cache and jump to the next one in the next round (for example 10 
seconds later).
* Try to free from continuations.
* Try to free from both.

We have to find some strategy for this one, but I think it might result in a 
quite heuristic solution. For example:

1) If maxcontinuations is exceeded, try freeing memory from continuations.
2) If neither maxobjects (for all freeable caches) nor maxcontinuations is 
reached, where should we free from? From both? This does not always make 
sense. Free according to the largest occupied percentage? I am not sure about 
this, anybody? 
3) In case of (2), should we try freeing from continuations or cache according to 
absolute numbers (whichever has the most items in use) or relative ones? Should we 
make it configurable? This will be very hard for common users to grasp or tune. 
Of course, well-documented configuration might largely help. But it remains 
quite heuristic. 

I think an eviction policy according to LRU is very easy, because the 
continuations are already kept sorted by expiry time (which gets increased when a 
continuation is used again (at least I think it works like this)).
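
To make the idea concrete, here is a rough sketch of the freeing strategy. None 
of these names (CocoonJanitorSketch, Reclaimable) exist in Cocoon; they only 
illustrate the "free from the fullest target" heuristic from point (2) above:

```java
import java.util.ArrayList;
import java.util.List;

public class CocoonJanitorSketch {

    /** Anything the janitor can reclaim memory from: a store or the continuations. */
    interface Reclaimable {
        int used();        // items currently held
        int max();         // configured (soft) limit
        void free(int n);  // evict n items according to LRU
    }

    private final List<Reclaimable> targets = new ArrayList<Reclaimable>();

    public void register(Reclaimable target) {
        targets.add(target);
    }

    /**
     * Called when the JVM is low on memory: free from the target with the
     * largest relative fill first (one possible heuristic, not the only one).
     */
    public void freeMemory(int itemsToFree) {
        Reclaimable fullest = null;
        double fullestRatio = -1.0;
        for (Reclaimable target : targets) {
            double ratio = target.max() == 0 ? 0.0
                    : (double) target.used() / target.max();
            if (ratio > fullestRatio) {
                fullestRatio = ratio;
                fullest = target;
            }
        }
        if (fullest != null) {
            fullest.free(itemsToFree);
        }
    }
}
```

Both the caches and the ContinuationsManager would register themselves as 
Reclaimable, which also answers the registration question above without a 
hasService() lookup.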

WDYT?

Regards Ard

 Simone
 
 Torsten Curdt wrote:
 
  snip/
 
   IMO the only way to solve this transparently is to more aggressively
  expire and limit the number of continuations. It would make sense to
  come up with a LRU list of continuations per session. This 
 list has a
   maximum size that can be defined. So the required maximum is
  predictable. Generating more continuations means using free slots or
  throwing away the oldest ones in that LRU list. The janitor would
  basically only go through the list and expire to free the slots in
  that list.
 
  cheers
  -- 
  Torsten
 
 


RE: Continuations consume ram, possible solutions?

2006-07-27 Thread Ard Schrijvers
 with
 boxes or something similar.
 
 The idea here is NOT saving anything in the continuation, instead when
 the continuation is recalled, the function is executed again from the
 beginning, but skipping the first sendPage (since that page 
 has already
 been sent). This way we don't have any serialization issue, 
 the form is
 created, displayed and then garbage collected, and recreated when the
 user clicks the button, not displayed again since the first 
 sendPage is
 skipped, but only populated from the request.
 
 This is maybe the simplest one to implement, but is quite 
 untidy because
 if the code in the first lines execute something heavy or 
 some business
 logic, it could not be clear why it's getting executed twice. 
 But since
 it's aimed to solve the problem of simple forms generated many times
 (like login boxes, polls, subscribe here boxes and so on), it's highly
 possible that the first lines of the flow are simply instantiating a
 bean or a document, creating a form, doing some binding and displaying
  it, and if it's really a box on the side of the site's pages, that code is 
  already executed at every GET.

This might be a lightweight solution indeed. I suppose you could then tell the 
continuation that it is lightweight or something? For example, a 
lightSendPageAndWait. I think implementing it in the original 
sendPageAndWait/continuation might not be backwards compatible: the 
implementation could have business logic before the continuation that actually 
stores something, and that logic would be executed again in the lightweight continuation. 
 
 Do you (plural, rest of cocoon community) have any other idea about
 this? Continuation pollution is actually a problem in flow
 implementations.

I do not have any ideas beyond those you have outlined above :-)

Regards Ard

 
 Simone
 
 
 Ard Schrijvers wrote:
 
 Hello Simone,
 
 talking about continuations, did you already find a solid 
 way to handle high traffic sites with many concurrent 
  continuations and memory usage? It bothers me a little that 
 when building sites, we have to keep track of the number of 
 continuations (we had a large site with many visitors and a 
 poll on the homepage having a continuation. This brought the 
 site down a few times).
 
 You mentioned serializing continuations to diskStore 
  (of course, the flowscript/javaflow writer should make sure all 
 things in the cont are serializable then). Is this feasible?
 
 The other day I also thought about the cocoon caches having 
 this StoreJanitor trying to free memory from it when JVM is 
 low on memory. It just does not make sense to me, that this 
 is only tried regarding the cache, while currently, also 
 continuations might be the reason for a low on memory JVM. 
 
  Suppose I have a healthy cache, nicely populated, and a 
  high cache-hit rate, but it happens that many, many 
  continuations have been created, all long lived (5 hours), 
  and all quite large (one continuation can be very large in 
  memory). Now, due to these continuations, the JVM is low on 
  memory, causing the StoreJanitor to run, removing my cache, 
  and certainly not solving any problem.
 
 So I was wondering if you had some new ideas on this 
 subject...though, perhaps the dev-list is more appropriate for it.
 
 WDYT?
 
 Regards Ard  
 
   
 
 Hi Toby,
 I think you are right. What a continuation does (should do) 
 is dump the
 local variables and restore them before restarting the flow. 
 This means
 that if you write var a = 1; then create a continuation, when 
 you return
 to that continuation a should be 1 again, even if in other 
 continuations
 it has been changed to 2,3 or 4.
 
 There is surely one known limitation to this : if you say var 
 bean = new
 MyBean(); bean.setX(1); then produce a continuation, then after the
 continuation you call bean.setX(2), even if you go back to 
 the previous
 continuation you will find that bean.getX() == 2, because the LOCAL
 VARIABLE is still your bean, but it's internal state is not 
 manageable
 by the continuation (more formally, your local variable is a 
 pointer to
 a bean, which is correctly restored when you go back to the
 continuation, but the data it points to is not 
 serialized/deserialized
 by the continuation).
 
 But this is not your case, in this case you are setting a simple
 javascript variable, so it should work as you say, at least AFAIK :)
 
 Please, file a bug about it.
 
 Simone
 
 Toby wrote:
 
 
 
 Jason Johnston wrote:
  
 
   
 
 First you assign the 'useless' variable a String value, then you
 create the continuation.  When you resume the 
 continuation the first
 time, you re-assign the 'useless' variable so that it now holds an
 Array value (String.split() returns an Array).  When you 
 resume the
 continuation again, you try to call .split() on the 'useless' var,
 which is now an Array, and the error is appropriately 
 
 
 thrown since an
 
 
 Array has no such method.

 
 
 
 When I resume the continuation again

RE: Continuations consume ram, possible solutions?

2006-07-27 Thread Ard Schrijvers

  
  The third option aims to solve the common situation of having a flow
  that initializes some variables, sends a form or a page (creating a
  continuation) and then waits for the user to click on a 
 button, that 90%
  of time never gets pushed. This is quite common in 
 aggregated pages with
  boxes or something similar.
 
 Why don't you use stateless forms? If you don't expect button 
 to be clicked 
 often, don't use stateful forms.

Indeed, for a poll on a homepage this is very easy. But there are cases where 
this is not so easy, there is tons of code having this problem, and 
rewriting it is not an option. Indeed, I am trying to avoid continuations on 
high-traffic pages. But IMHO, I shouldn't have to think about it :-) 

Whether we use a stateless form does not solve the actual problem. In my opinion 
it should be possible to build, for example, a high-traffic forum with Cocoon + 
continuations + CForms. At the moment you are likely to run out of memory 
because of continuations (and on top of that the StoreJanitor tries to free 
your cache while the continuations are the actual cause of the low-memory JVM).

Regards Ard 

 
 Vadim
 


RE: my doubts about the current CocoonStoreJanitor/StoreJanitorImpl

2006-07-27 Thread Ard Schrijvers
 snip/
 
  So, in the StoreJanitor, I check now wether the store is 
  instanceof EHDefaultStore,
 
  That's a really bad idea, in and by itself. Never rely on concrete 
  implementations, always work against contracts (interfaces).
  
  Ok. I try and see if there is another way to get this in. I 
 still have to do the following changes then
  
  1) StoreJanitor to only free cache/memory from stores configured to
  2) Change the EHDefaultStore the public int size()  to 
 return the memoryStoreSize()
  3) Change the free() in EHDefaultStore according to the 
 correct eviction policy. This might need changes in the ehcache. 
  
  I think this meets your requirements as well, right?

Hello, I have been thinking about a solution for point (1): to have the 
StoreJanitor only free memory from caches that are not configured as exempt 
from freeing. It is not hard at all to implement, but I am not sure about the 
politics, so here are my concerns:

Ideally I would like:

1) Change the StoreJanitor to free a percentage of all caches at once, and 
not one after another. Also, only from caches that have a parameter saying 
they are freeable or something (and for backwards compatibility, of course, 
also when the param is missing).

2) Change the Store interface so that store implementations should have a 
method isFreeable().
3) Then all store implementations should implement isFreeable().

Now, I suppose this is of course NOT possible... at least I think so. A 
contract is a contract, right? And if we would like to change it, it is an 
Excalibur project.

How can I achieve this?
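
One backwards-compatible way around changing the contract might be a sub-interface 
that the janitor checks for. This is only a sketch: Store below is a minimal 
stand-in for the Excalibur Store contract, and FreeableStore is hypothetical, it 
does not exist in Excalibur:

```java
public class FreeableStoreSketch {

    /** Stand-in for the relevant part of org.apache.excalibur.store.Store. */
    interface Store {
        int size();
        void free();
    }

    /** Hypothetical extension: stores that can opt out of janitor sweeps. */
    interface FreeableStore extends Store {
        boolean isFreeable();
    }

    /**
     * Janitor-side check. Plain stores keep the old behaviour (freeable),
     * so existing implementations stay backwards compatible.
     */
    static boolean mayFree(Store store) {
        if (store instanceof FreeableStore) {
            return ((FreeableStore) store).isFreeable();
        }
        return true;
    }
}
```

That way the Excalibur interface stays untouched, and a store like the 
defaultTransientStore could return false from isFreeable() to be left alone.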

Regards Ard

 
 Looks good.
 
 Vadim
 


RE: RE: Continuations consume ram, possible solutions?

2006-07-27 Thread Ard Schrijvers

 snip/
 
  IMO the only way to solve this transparently is to more aggressively
 expire and limit the number of continuations. It would make sense to
 come up with a LRU list of continuations per session. This list has a
  maximum size that can be defined. So the required maximum is
 predictable. Generating more continuations means using free slots or
 throwing away the oldest ones in that LRU list. The janitor would
 basically only go through the list and expire to free the slots in
 that list.

With the janitor, do you mean the StoreJanitor? In that case, the 
ContinuationsManagerImpl should somehow be registered with the StoreJanitor, or do 
I miss your point? Then, if low on memory, the StoreJanitor (perhaps its name 
is a little awkward when it also manages the continuations) should somehow 
figure out whether to free memory from the cache or from the continuations (or both), 
right? 

Expiring them aggressively is not always possible (people might need an hour to 
fill in a form, which means by default all continuations live for an hour; I think 
you can specify a TTL for each continuation, though, can't you?). Also, setting a 
limit can be quite tricky, because it is sometimes difficult to know in advance how 
large your continuations will be in memory (we had a closure in flow that did a 
cocoon.processPipelineTo before a handleForm() that took about 100Mb for 200 
continuations!). You can't expect users to really know all this and to know in 
advance that with X memory they can have Y continuations. 

Anyway, a janitor freeing continuations according to LRU to prevent OOM is good to 
have. 
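
The per-session LRU list with a hard maximum could be sketched with a plain 
LinkedHashMap in access order. This is not the actual ContinuationsManager, just 
an illustration of the eviction behaviour:

```java
import java.util.LinkedHashMap;
import java.util.Map;

/**
 * Sketch of a bounded, LRU-ordered map of continuations per session
 * (hypothetical; only illustrates hard-limit eviction).
 */
public class ContinuationLru extends LinkedHashMap<String, Object> {

    private final int maxContinuations;

    public ContinuationLru(int maxContinuations) {
        // accessOrder=true: iteration order becomes least-recently-used first,
        // and get() counts as a "use" of a continuation.
        super(16, 0.75f, true);
        this.maxContinuations = maxContinuations;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<String, Object> eldest) {
        // Hard-limit variant: adding a continuation when full evicts the LRU one.
        // A soft-limit variant would instead let the janitor prune periodically.
        return size() > maxContinuations;
    }
}
```

With the soft limit I prefer, the janitor would leave insertion untouched and 
prune the map back to maxContinuations on its own schedule instead.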

Regards Ard 

 
 cheers
 --
 Torsten
 


RE: [jira] Created: (COCOON-1885) The EHDefaultStore returns in the size() method the wrong number of keys

2006-07-26 Thread Ard Schrijvers

 
 Ard Schrijvers escribió:
  /snip

  If there are better alternatives to ehcache we should 
 consider them of
  course. Personally I would like that this work will be 
 done in trunk
  only. We could build an own maven project for each cache 
  implementation
  which will reduce the dependencies for a user and makes
  switching/choosing fairly easy.
 
  But if we provide alternatives we should have at least 
 some guidelines
  explaining when to choose which implementation.
  
 
  I will try to see if whirlycache meets our goals better 
 (especially I want to look at the way the diskStore behaves 
 (I want to be able to limit the diskStore keys), and wether 
 we can access the eviction policy from within our classes, to 
 be able to free some memory from cache in a sensible way). I 
 think those guidelines should be in the document I am 
 planning (sry...still in planning phase) to write on caching 
 and best practices. Also the many store configurations, in 
 which errors are easily made should be (will be I hope) 
 documented transparently. 

  From your provided link [1], the last section said:
 
 JCS limits the number of keys that can be kept for the disk 
 cache. EH 
 cannot do this.
 
 
  I am not sure if supporting many cache implementation is 
 good practice. If there is a large difference between the 
 caches, where cache1 performs much better in memoryStore, and 
 cache2 much better in diskStore, and cache3 is avergae in 
 both, then I suppose supporting different caches might be a 
 good option (though, an easy lookup of which cache impl suits 
 your app best should be available. Then again, this 
 documentation ofcourse is outdated after every cache impl release)

 Dunno either, but from a practical POV, supporting different caches would 
 make easier to use the same cache in combination with other 
 libraries. 
 ie: Apache ojb or hibernate. For this reason, I am +1 to support 
 different caches. ;-)

Ok, fine by me, as long as people know, or can easily find out, which one to use. 
Furthermore, I saw there is a Cocoon howto for whirlycache [1] (Eric Meyer 
implemented this one). However, the things I was looking for in whirlycache are 
not implemented in the WhirlycacheStore, which implements the Excalibur Store 
[2].

Current shortcomings are:

1) boolean containsKey(java.lang.Object key) is not implemented: it says I 
can't imagine why anybody would rely on this method. Probably they do not 
think anybody ever wants to use this one, though I use it quite frequently: 
when I am not interested in the cached response, but only in whether the 
cache key is present. Why would you ever need that? When you use event caching 
it is often enough to know whether the cache key exists.
2) java.util.Enumeration keys() says: we don't support keys. Our 
StatusGenerator relies on it, and looking at a site's cache keys in a status 
overview is the very first thing I do when sites are having problems. A lot can 
be seen in cache keys, including implementation errors and flaws (like repository 
URIs that cannot be invalidated, search query results that are cached, and 
time/date values in cache keys).
3) void free() says: This method is not supported by WhirlyCache. This one 
makes our StoreJanitor totally useless for trying to free some cache when the 
JVM is low on memory.

I haven't checked out the code yet, so I am not sure whether it is easy to 
implement numbers 1, 2 and 3. If it is not possible, I think I will stick to 
ehcache or JCS. For whirlycache I still have to check whether removing keys 
according to the eviction policy from the outside is possible.

I have one last remark/question that I am really curious about: in all cache 
implementations (ehcache, JCS, whirlycache), I haven't found a single cache 
with public methods to easily move cache keys from memoryStore to 
diskStore manually according to the eviction policy, or to delete X memoryStore 
cache keys plus responses (and the same for the diskStore). Without these methods, 
there is no way we can have a really properly working StoreJanitor. 

Do you think there is a specific reason why the implementations don't 
facilitate these quite simple (and in my opinion important) methods? Is it very 
strange to have this requirement? The JCS developers said it shouldn't be that 
hard to implement in JCS. I will check whether whirlycache makes it easy to 
implement these requirements.

Regards Ard

[1] https://whirlycache.dev.java.net/cocoon-howto.html
[2] 
https://whirlycache.dev.java.net/nonav/api/com/whirlycott/cache/component/store/WhirlycacheStore.html

 
 Best Regards,
 
 Antonio Gallardo.
 
 [1] http://jakarta.apache.org/jcs/JCSvsEHCache.html

 
 


[jira] Created: (COCOON-1885) The EHDefaultStore returns in the size() method the wrong number of keys

2006-07-25 Thread Ard Schrijvers (JIRA)
The EHDefaultStore returns in the size() method the wrong number of keys


 Key: COCOON-1885
 URL: http://issues.apache.org/jira/browse/COCOON-1885
 Project: Cocoon
  Issue Type: Bug
  Components: * Cocoon Core
Affects Versions: 2.1.9
Reporter: Ard Schrijvers
Priority: Critical


The Excalibur Store interface defines a size() method for a store: 

/**
 * Returns count of the objects in the store, or -1 if could not be
 * obtained.
 */
int size();

What it does not explicitly say is that the number of keys in the memoryStore 
(so not the diskStore) is what is needed. The StoreJanitor uses this size() to free 
some memory from the cache when the JVM is low on memory. Since the current 
EHDefaultStore returns ALL cache keys with size() (memoryStoreSize + 
diskStoreSize), it is quite likely, when you have a large cache, that the 
StoreJanitor removes all cache keys in the memoryStore. Simply changing the size() 
method of EHDefaultStore to return the number of keys in the memoryStore is 
sufficient. The JCSDefaultStore did implement it correctly already (though I 
do not see it in the Cocoon trunk anymore..?)





-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators: 
http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] Updated: (COCOON-1885) The EHDefaultStore returns in the size() method the wrong number of keys

2006-07-25 Thread Ard Schrijvers (JIRA)
 [ http://issues.apache.org/jira/browse/COCOON-1885?page=all ]

Ard Schrijvers updated COCOON-1885:
---

Attachment: EHDefaultStore.patch

Fix for EHDefaultStore returning wrong number in the size() method

 The EHDefaultStore returns in the size() method the wrong number of keys
 

 Key: COCOON-1885
 URL: http://issues.apache.org/jira/browse/COCOON-1885
 Project: Cocoon
  Issue Type: Bug
  Components: * Cocoon Core
Affects Versions: 2.1.9
Reporter: Ard Schrijvers
Priority: Critical
 Attachments: EHDefaultStore.patch


 The Excalibur Store interface defines a size() method for a store: 
 /**
  * Returns count of the objects in the store, or -1 if could not be
  * obtained.
  */
 int size();
 What it does not explicitly say is that the number of keys in the memoryStore 
 (so not the diskStore) is what is needed. The StoreJanitor uses this size() to free 
 some memory from the cache when the JVM is low on memory. Since the current 
 EHDefaultStore returns ALL cache keys with size() (memoryStoreSize + 
 diskStoreSize), it is quite likely, when you have a large cache, that the 
 StoreJanitor removes all cache keys in the memoryStore. Simply changing the size() 
 method of EHDefaultStore to return the number of keys in the memoryStore is 
 sufficient. The JCSDefaultStore did implement it correctly already (though I 
 do not see it in the Cocoon trunk anymore..?)

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators: 
http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see: http://www.atlassian.com/software/jira



