Taking a second look at this, it definitely has something to do with Jackrabbit shutting down, although I'm not sure what. If I start with a fresh repository, I can repeatedly run my test from my IDE without any problems. However, when I run the test via Maven, I get the error below. I have verified that I am calling shutdown on the repository correctly. Any ideas about places to look for something that might be corrupting the index?

Dan
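P.S. For reference, the shutdown in the test boils down to the pattern below. This is only a sketch: the class name, "repository.xml", and "target/repo" are placeholders, not the actual test code.

    import org.apache.jackrabbit.core.RepositoryImpl;
    import org.apache.jackrabbit.core.config.RepositoryConfig;

    public class RepositoryShutdownSketch {
        public static void main(String[] args) throws Exception {
            // Start the repository from a config file and a home directory.
            RepositoryConfig config = RepositoryConfig.create("repository.xml", "target/repo");
            RepositoryImpl repository = RepositoryImpl.create(config);
            try {
                // ... log in, exercise the repository, log out ...
            } finally {
                // Shut down even when the test fails, so the Lucene index
                // writers are closed cleanly; an index that is still being
                // written when the JVM exits is one way to corrupt it.
                repository.shutdown();
            }
        }
    }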
On Mon, Mar 23, 2009 at 11:48 AM, Dan Diephouse <[email protected]> wrote:
> I'm getting the following exception with 1.4.x:
>
> java.lang.NullPointerException
>     at org.apache.jackrabbit.core.query.lucene.CachingIndexReader$CacheInitializer$2.collect(CachingIndexReader.java:362)
>     at org.apache.jackrabbit.core.query.lucene.CachingIndexReader$CacheInitializer.collectTermDocs(CachingIndexReader.java:426)
>     at org.apache.jackrabbit.core.query.lucene.CachingIndexReader$CacheInitializer.initializeParents(CachingIndexReader.java:356)
>     at org.apache.jackrabbit.core.query.lucene.CachingIndexReader$CacheInitializer.run(CachingIndexReader.java:306)
>     at org.apache.jackrabbit.core.query.lucene.CachingIndexReader.<init>(CachingIndexReader.java:109)
>     at org.apache.jackrabbit.core.query.lucene.AbstractIndex.getReadOnlyIndexReader(AbstractIndex.java:276)
>     at org.apache.jackrabbit.core.query.lucene.MultiIndex.getIndexReader(MultiIndex.java:731)
>     at org.apache.jackrabbit.core.query.lucene.MultiIndex.<init>(MultiIndex.java:303)
>     at org.apache.jackrabbit.core.query.lucene.SearchIndex.doInit(SearchIndex.java:454)
>     at org.apache.jackrabbit.core.query.AbstractQueryHandler.init(AbstractQueryHandler.java:53)
>     at org.apache.jackrabbit.core.SearchManager.initializeQueryHandler(SearchManager.java:583)
>     at org.apache.jackrabbit.core.SearchManager.<init>(SearchManager.java:265)
>     at org.apache.jackrabbit.core.RepositoryImpl.getSystemSearchManager(RepositoryImpl.java:625)
>     at org.apache.jackrabbit.core.RepositoryImpl.access$300(RepositoryImpl.java:104)
>     at org.apache.jackrabbit.core.RepositoryImpl$WorkspaceInfo.getSearchManager(RepositoryImpl.java:1600)
>     at org.apache.jackrabbit.core.RepositoryImpl.initWorkspace(RepositoryImpl.java:606)
>     at org.apache.jackrabbit.core.RepositoryImpl.initStartupWorkspaces(RepositoryImpl.java:415)
>     at org.apache.jackrabbit.core.RepositoryImpl.<init>(RepositoryImpl.java:305)
>     at org.apache.jackrabbit.core.RepositoryImpl.create(RepositoryImpl.java:557)
>
> The code is:
>
>     collectTermDocs(reader, new Term(FieldNames.PARENT, "0"), new TermDocsCollector() {
>         public void collect(Term term, TermDocs tDocs) throws IOException {
>             while (tDocs.next()) {
>                 UUID uuid = UUID.fromString(term.text());
>                 Integer docId = new Integer(tDocs.doc());
>                 NodeInfo info = (NodeInfo) docs.get(docId);
>                 info.parent = uuid;
>                 docs.remove(docId);
>                 docs.put(info.uuid, info);
>             }
>         }
>     });
>
> Correct me if I'm wrong, but it seems to me that the code should not assume that there is already a NodeInfo in the docs cache. For instance, if a node was added between when we gathered all the UUIDs and when we searched the parent IDs, then docs.get(docId) could return null. The hole in that explanation, though, is that this is happening during initialization, when there shouldn't be anything/anyone adding or removing nodes.
>
> Thoughts?
>
> Dan
> --
> Dan Diephouse
> http://mulesource.com | http://netzooid.com/blog

--
Dan Diephouse
http://mulesource.com | http://netzooid.com/blog
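(Following up on my own suggestion above: the guard I have in mind for CacheInitializer.collect() looks roughly like this. It's only a sketch against the 1.4.x source, not a tested patch.)

    collectTermDocs(reader, new Term(FieldNames.PARENT, "0"), new TermDocsCollector() {
        public void collect(Term term, TermDocs tDocs) throws IOException {
            while (tDocs.next()) {
                UUID uuid = UUID.fromString(term.text());
                Integer docId = new Integer(tDocs.doc());
                NodeInfo info = (NodeInfo) docs.get(docId);
                if (info == null) {
                    // The doc id was not in the cache when the UUIDs were
                    // gathered; skip it rather than dereference null.
                    continue;
                }
                info.parent = uuid;
                docs.remove(docId);
                docs.put(info.uuid, info);
            }
        }
    });

Whether silently skipping the document is acceptable, or whether the cache should be re-initialized instead, is the part I'm less sure about.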
