'get_next_page' actually returns a ListRepresentation of ListRepresentations. 
The number of items in the outer list is what I call the page size. I have 
varied this parameter, and page size by itself does not really seem to 
matter. I have queries that are consistently returned correctly where each 
JSON string (i.e., page) is about 200k characters. A smaller page size (e.g., 
about 20k characters) also works for these queries. If I query for slightly 
larger (~2x) result sets and try various page sizes, page size again does not 
seem to matter, but the requests consistently fail after several pages have 
been returned (the initial pages do get returned for both 20k and 200k 
character strings). It also does not seem to matter if the consumer 
application slows down and requests each page at a slower pace.
In 'get_next_page' I return 'new ListRepresentation("x", rows)', where 'rows' 
is a List<ListRepresentation>. Each row is built via 
'ListRepresentation.strings(values)', where 'values' is a String[].
If I instead return a constant ListRepresentation, as you suggested, it seems 
to work fine for any result set. So perhaps the REST string building is the 
problem? Maybe the JVM has trouble GC'ing all the ListRepresentations between 
the calls?

> From: peter.neuba...@neotechnology.com
> Date: Tue, 15 Nov 2011 14:51:30 +0100
> To: user@lists.neo4j.org
> Subject: Re: [Neo4j] Server plugin running into memory limits
> 
> Anders,
> how many items are you holding in that representation list? Also, it could
> be that the deserialization is taking up memory. Can you test just
> returning a constant string (not building up memory) and check if that
> changes anything? In that case we could track it down to the REST String
> building ...
> 
> Cheers,
> 
> /peter neubauer
> 
> GTalk:      neubauer.peter
> Skype       peter.neubauer
> Phone       +46 704 106975
> LinkedIn   http://www.linkedin.com/in/neubauer
> Twitter      http://twitter.com/peterneubauer
> 
> http://www.neo4j.org              - NOSQL for the Enterprise.
> http://startupbootcamp.org/    - Öresund - Innovation happens HERE.
> 
> 
> 2011/11/15 Anders Lindström <andli...@hotmail.com>
> 
> >
> > Hi all,
> > I'm currently writing a server plugin. I need it to make some specialized
> > queries that are not supported by the standard REST API. The important
> > methods I expose are 'query' and 'get_next_page', the latter to support
> > results pagination (i.e. the plugin is stateful).
> > In 'query', I run my query against the Neo4j backend, and store a Node
> > iterator to the query results (this is either an iterator originating from
> > 'getAllNodes', or a Lucene IndexHits<Node> instance). In 'get_next_page', I
> > run through the next N items of the iterator and return these as a
> > ListRepresentation. The same iterator object is kept across all page
> > retrievals, but of course stepped forward N steps for every invocation.
> > After having gone through all pages, the reference to the Node iterator is
> > removed.
> > Now, as I understand it, the only heap space I should be concerned about
> > is what I allocate locally in my methods, since the reference stored to
> > the iterator object is tiny, and iterator results are fetched lazily
> > (i.e., even though the iterator covers a result set greater than the
> > allotted heap size, I should be able to page through it within the given
> > heap space if the page size is small enough). But when I run
> > my plugin, this does not seem to be the case. I can make several successful
> > calls in a row to 'get_next_page', but then after a while bump into "GC
> > overhead limit exceeded" which I cannot quite understand. I am rather
> > certain the size of each page returned is within the allotted heap size.
> > For some reason the heap usage seems to grow with the calls to
> > 'get_next_page' which I cannot understand, given my understanding of the
> > Node iterators from Neo4j.
> > How do I avoid hitting this GC overhead limit? Am I missing something?
> > (And yes, I've tried using different values of the allowed heap space by
> > fiddling in the conf-files, and sure I can give tons of memory to the
> > instance, and then it works, but I shouldn't have to give more heap space
> > than what Neo4j "needs", plus my page size).
> > Thanks!
> > Regards,
> > Anders
> >
> > _______________________________________________
> > Neo4j mailing list
> > User@lists.neo4j.org
> > https://lists.neo4j.org/mailman/listinfo/user
> >