[
https://issues.apache.org/jira/browse/JSPWIKI-809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13858469#comment-13858469
]
Juan Pablo Santos Rodríguez commented on JSPWIKI-809:
-----------------------------------------------------
Hi,
I also had a look and noticed several things regarding this issue:
- the ehcache.xml file is missing an opening comment symbol, so the XML is not
well-formed (it is also missing the AL header); see the header sketch after
this list
- MassiveRepositoryTest, on setUp(), removes all caches from the CacheManager
(CacheManager.getInstance().removalAll()).
-- This is done so that one test is not polluted with the results of another
(there is only a testMassiveRepository1() right now, but surely we've had more
tests in that class)
-- Since we've called CacheManager.getInstance(), the CacheManager singleton is
created, meaning further calls to CacheManager.getInstance() (i.e. when
creating the WikiEngine) will return the same instance, without the caches
(they've been removed)
-- Fixable by changing CacheManager.getInstance().removalAll() to
CacheManager.getInstance().clearAll(), as shown in the sketch after this list
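For reference, a well-formed opening for ehcache.xml would simply be the usual
ASF license boilerplate wrapped in an XML comment:

  <?xml version="1.0" encoding="UTF-8"?>
  <!--
      Licensed to the Apache Software Foundation (ASF) under one
      or more contributor license agreements.  See the NOTICE file
      distributed with this work for additional information
      regarding copyright ownership.  The ASF licenses this file
      to you under the Apache License, Version 2.0 (the
      "License"); you may not use this file except in compliance
      with the License.  You may obtain a copy of the License at

         http://www.apache.org/licenses/LICENSE-2.0

      Unless required by applicable law or agreed to in writing,
      software distributed under the License is distributed on an
      "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
      KIND, either express or implied.  See the License for the
      specific language governing permissions and limitations
      under the License.
  -->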
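The setUp() change is a one-liner; a minimal sketch (the rest of the test
class is assumed):

  import net.sf.ehcache.CacheManager;

  public void setUp() throws Exception
  {
      // removalAll() unregisters the Cache objects from the singleton
      // CacheManager, so the WikiEngine created later in the test sees a
      // manager without any caches; clearAll() only evicts the cached
      // elements, leaving the caches themselves registered.
      CacheManager mgr = CacheManager.getInstance();
      // mgr.removalAll();   // old: removes the caches themselves
      mgr.clearAll();        // new: only clears their contents
  }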
With that fixed, the same issue keeps happening, because the caches are set to
1000 elements (except rss, which is set to 250) in ehcache.xml. The problem
seems to lie in CachingProvider.getAllPages(), which returns elements only from
the cache, so if the repository holds more pages than the cache can keep, you
get at most the number of elements present in the cache, hence the error.
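To make the failure mode concrete, an illustrative sketch (not the actual
JSPWiki source, names simplified): a getAllPages() that answers purely from
the cache can never return more pages than the cache holds:

  import java.util.ArrayList;
  import java.util.List;

  import net.sf.ehcache.Cache;
  import net.sf.ehcache.Element;

  class CacheOnlyLookup
  {
      private final Cache m_cache;

      CacheOnlyLookup( Cache cache )
      {
          m_cache = cache;
      }

      // The cache holds at most maxElementsInMemory entries, so the result
      // is silently capped (e.g. at 1000), no matter how many pages the
      // underlying repository actually contains.
      List< Object > getAllPages()
      {
          List< Object > result = new ArrayList< Object >();
          for( Object name : m_cache.getKeys() )
          {
              Element e = m_cache.get( name );
              if( e != null )
              {
                  result.add( e.getObjectValue() );
              }
          }
          return result;
      }
  }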
Possible fixes/workarounds:
The academic fix would be to return CachingProvider.getAllPages() plus the
missing subList of m_provider.getAllPages(), but this is easier said than
done, as it implies modifications to all the other providers. Also, something
along the lines of getAllPagesExcept(WikiPages...) seems weird..
For 2.10.0, I'd check CachingProvider.getAllPages() size() against
m_cache.getMemoryStoreSize(); if they are equal, it probably means the cache
is full, so we return m_provider.getAllPages(), logging a warning to revisit
and increase the values in ehcache.xml.
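Roughly, inside CachingProvider (m_cache, m_provider and log being the
existing fields; getAllPagesFromCache() is a hypothetical stand-in for the
current cache-only lookup):

  public Collection< WikiPage > getAllPages() throws ProviderException
  {
      Collection< WikiPage > cached = getAllPagesFromCache();
      if( cached.size() == m_cache.getMemoryStoreSize() )
      {
          // Equal sizes most likely mean the cache is full and may be
          // hiding pages, so fall back to the real provider and leave a
          // hint to raise the sizes in ehcache.xml.
          log.warn( "Cache seems full; returning all pages straight from " +
                    "the underlying provider. Consider increasing the " +
                    "cache sizes in ehcache.xml." );
          return m_provider.getAllPages();
      }
      return cached;
  }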
I've tested this locally and it seems fine, but I'd rather ask first about the
proposed solution. WDYT?
br,
juan pablo
> PageCache has hardcoded limit of 1000 and doesn't fail gracefully
> -----------------------------------------------------------------
>
> Key: JSPWIKI-809
> URL: https://issues.apache.org/jira/browse/JSPWIKI-809
> Project: JSPWiki
> Issue Type: Bug
> Components: Core & storage, Unit Testing
> Affects Versions: 2.10
> Environment: Linux, java 1.7.0_45
> Reporter: Marco Roeland
> Assignee: Harry Metske
> Priority: Minor
>
> Summary: if you have "too many" (more than 1000) pages in your Wiki, then at
> least in Tomcat 7.0.47 the JSPWiki application fails and stops. This is
> also reproducible in the unit tests, although it is not normally noticed
> there at the moment because exactly the (working) boundary value of
> 1000 is used.
> The default for caching is "jspwiki.usePageCache = true". Since
> 2.10.0-svn-45 a new or at least changed CachingProvider is used.
> It seems that in
> jspwiki-war/src/main/java/org/apache/wiki/providers/CachingProvider.java
> the default "public static final int DEFAULT_CACHECAPACITY = 1000; //
> Good most wikis"
> is always used, even if lower or higher values are used in ehcache.xml. Or
> perhaps
> the default ehcache.xml is in the wrong location to be used.
> Steps to reproduce (version 2.10.0-svn-60 is used):
> mvn -e test -Dtest=MassiveRepositoryTest -DfailIfNoTests=false
> The "-DfailIfNoTests=false" is necessary in this case because otherwise we
> get a test failure for jspwiki-pages-de. Possibly a bug in the test setup but
> otherwise uninteresting.
> The test succeeds.
> Now edit
> jspwiki-war/src/test/java/org/apache/wiki/stress/MassiveRepositoryTest.java
> and on line 78 change the line "int numPages = 1000;" so that numPages is
> set to 1001.
> Rerun the test, which now fails.
> -------------------------------------------------------
> T E S T S
> -------------------------------------------------------
> Running org.apache.wiki.stress.MassiveRepositoryTest
> Creating 1001 pages
> .....................................................................................................
> Took 0:00:08.858, which is 113.00519304583428 adds/second
> Checking in 1000 revisions
> ....................................................................................................
> Took 0:00:06.360, which is 157.2327044025157 adds/second
> Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 16.262 sec
> <<< FAILURE! - in org.apache.wiki.stress.MassiveRepositoryTest
> testMassiveRepository1(org.apache.wiki.stress.MassiveRepositoryTest) Time
> elapsed: 16.198 sec <<< FAILURE!
> junit.framework.AssertionFailedError: Right number of pages expected:<1001>
> but was:<1000>
> at junit.framework.Assert.fail(Assert.java:57)
> at junit.framework.Assert.failNotEquals(Assert.java:329)
> at junit.framework.Assert.assertEquals(Assert.java:78)
> at junit.framework.Assert.assertEquals(Assert.java:234)
> at junit.framework.TestCase.assertEquals(TestCase.java:401)
> at
> org.apache.wiki.stress.MassiveRepositoryTest.testMassiveRepository1(MassiveRepositoryTest.java:134)
> Results :
> Failed tests:
> MassiveRepositoryTest.testMassiveRepository1:134 Right number of pages
> expected:<1001> but was:<1000>
> Tests run: 1, Failures: 1, Errors: 0, Skipped: 0
> Perhaps the number 1000 was hardcoded in the test itself? Nope, if we change
> numPages to 999 and rerun the test it succeeds.
> Editing jspwiki-war/src/main/resources/ehcache.xml and changing all values of
> 1000 there to 100 still makes the test succeed with numPages set to 1000 and
> fail with 1001.
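> For reference, a cache entry in ehcache.xml looks roughly like this (cache
> name illustrative; maxElementsInMemory is the value that was changed):
>
>   <cache name="jspwiki.cache"
>          maxElementsInMemory="1000"
>          eternal="false"
>          overflowToDisk="false"/>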
> If, however, we edit line 92 of
> jspwiki-war/src/main/java/org/apache/wiki/providers/CachingProvider.java,
> shown below,
> public static final int DEFAULT_CACHECAPACITY = 1000; // Good most
> wikis
> and change the 1000 to 999, then the test succeeds with numPages at 999 (or
> lower) and fails with numPages at 1000 (or higher).
> Disabling the caching in
> jspwiki-war/src/test/resources/jspwiki-vers-custom.properties by setting
> jspwiki.usePageCache = false
> makes the tests succeed, even beyond the limit of 1000. This is also the
> workaround if your Wiki has a large number of pages. In reality it is less
> clear what exactly the limit is. I have a single Tomcat instance with eight
> JSPWiki applications and eight separate pageDir repositories.
> The larger ones "crash" (get stopped by Tomcat), but some crash even with
> "only" 345 pages. The following is the only thing that gets logged in
> catalina.out.
> dec 29, 2013 10:00:53 AM org.apache.catalina.loader.WebappClassLoader
> loadClass
> INFO: Illegal access: this web application instance has been stopped already.
> Could not load net.sf.ehcache.util.ProductInfo. The eventual following
> stack trace is caused by an error thrown for debugging purposes as well as to
> attempt to terminate the thread which caused the illegal access, and has no
> functional impact.
> java.lang.IllegalStateException
> at
> org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1588)
> at
> org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1547)
> at
> net.sf.ehcache.util.UpdateChecker.buildParamsString(UpdateChecker.java:133)
> at
> net.sf.ehcache.util.UpdateChecker.buildUpdateCheckUrl(UpdateChecker.java:123)
> at net.sf.ehcache.util.UpdateChecker.doCheck(UpdateChecker.java:68)
> at
> net.sf.ehcache.util.UpdateChecker.checkForUpdate(UpdateChecker.java:60)
> at net.sf.ehcache.util.UpdateChecker.run(UpdateChecker.java:51)
> at java.util.TimerThread.mainLoop(Timer.java:555)
> at java.util.TimerThread.run(Timer.java:505)
> These same repositories had no caching problems with version 2.10.0-svn-15.
> As an easy workaround exists (disabling the cache), this is in my very
> humble opinion not a showstopper. Perhaps the fact that a limit exists, and
> especially that the failure isn't reported very clearly as an error message
> if you happen to stumble beyond it, could be documented somewhere.
> Also it might be nice to be able to configure the limit as a property. This
> doesn't seem to be possible at the moment?
--
This message was sent by Atlassian JIRA
(v6.1.5#6160)