Subject: Re: [Neo4j] neo4j REST server configuration
Is this resolved? Take a look at
http://wiki.neo4j.org/content/Getting_Started_REST#Configure_amount_of_memory otherwise
2010/8/7 Mohit Vazirani mohi...@yahoo.com
Hi,
I'm running the standalone neo4j REST server on a 64-bit Linux
for adding it to the wiki. Any ideas as to
why the JMX doesn't show the info when attached?
~Mohit
- Original Message
From: Mattias Persson matt...@neotechnology.com
To: Neo4j user discussions user@lists.neo4j.org
Sent: Tue, September 14, 2010 2:12:13 AM
Subject: Re: [Neo4j] neo4j REST
Is this resolved? Take a look at
http://wiki.neo4j.org/content/Getting_Started_REST#Configure_amount_of_memory otherwise
2010/8/7 Mohit Vazirani mohi...@yahoo.com
Hi,
I'm running the standalone neo4j REST server on a 64-bit Linux machine with
64GB RAM and am trying to configure the following
Hi Brock,
My guess is that the high load makes requests pile up, thus also keeping
transactions open. Are there many threads running too? Because if you have
249 open transactions, and only, say, 20 threads processing requests, then
something sounds fishy. Could be something to look into.
I told
Hi David, Brock,
I wonder if there's any scope for caching in Brock's domain? It'd be pretty
cool if we could semi-automatically put caching headers on any retrieved
representations.
Jim
On 13 Aug 2010, at 21:21, David Montag wrote:
Hi Brock,
My guess is that the high load makes
Hey David,
I was mistaken, jetty-6.1.25 was in the lib, but here's the error:
INFO | jvm 1| 2010/08/12 05:19:38 | WrapperSimpleApp: Encountered an
error running main: java.lang.NoClassDefFoundError:
org/mortbay/jetty/HandlerContainer
INFO | jvm 1| 2010/08/12 05:19:38 |
David hopped on gchat and we discovered that error was due to my failure to
deploy the updated wrapper.conf. Overall the Jetty version seems much more
stable. It never crashed but there was one issue that eventually caused it
to start rejecting most requests.
As it runs, it appears to be
Mmh,
seems we should stress test the server and Grizzly with e.g.
http://www.soapui.org and see if we can reproduce the scenario, if
there is no obvious hint to this. Will try to set it up ...
Cheers,
/peter neubauer
COO and Sales, Neo Technology
GTalk: neubauer.peter
Skype
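A minimal version of the stress test Peter proposes, assuming nothing beyond the JDK, could look like the sketch below. It spins up a stub HTTP server as a stand-in for the REST server (the /db/data path and response body are illustrative) and fires concurrent GETs at it:

```java
import com.sun.net.httpserver.HttpServer;

import java.io.IOException;
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.InetSocketAddress;
import java.net.URL;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicInteger;

public class MiniLoadTest {

    // Trivial stand-in for the REST server: answers every request with 200 and "{}".
    static HttpServer startStubServer() throws IOException {
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/db/data", exchange -> {
            byte[] body = "{}".getBytes();
            exchange.sendResponseHeaders(200, body.length);
            exchange.getResponseBody().write(body);
            exchange.close();
        });
        server.setExecutor(Executors.newFixedThreadPool(8));
        server.start();
        return server;
    }

    // Fire `requests` GETs at `url` from `threads` concurrent workers;
    // returns how many came back with HTTP 200.
    static int run(String url, int threads, int requests) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        AtomicInteger ok = new AtomicInteger();
        CountDownLatch done = new CountDownLatch(requests);
        for (int i = 0; i < requests; i++) {
            pool.submit(() -> {
                try {
                    HttpURLConnection conn =
                            (HttpURLConnection) new URL(url).openConnection();
                    if (conn.getResponseCode() == 200) {
                        try (InputStream in = conn.getInputStream()) {
                            in.readAllBytes(); // drain so the connection can be reused
                        }
                        ok.incrementAndGet();
                    }
                } catch (Exception ignored) {
                } finally {
                    done.countDown();
                }
            });
        }
        done.await();
        pool.shutdown();
        return ok.get();
    }

    public static void main(String[] args) throws Exception {
        HttpServer server = startStubServer();
        String url = "http://localhost:" + server.getAddress().getPort() + "/db/data";
        System.out.println(run(url, 20, 100) + "/100 requests returned 200");
        server.stop(0);
    }
}
```

Pointing run() at a real server instead of the stub (and raising the thread and request counts) would approximate the sustained load described later in the thread.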
Thanks Peter. Let us know if there is anything else we can provide in the
way of logs or diagnosis from our server.
-Brock
On Tue, Aug 10, 2010 at 11:51 PM, Peter Neubauer
peter.neuba...@neotechnology.com wrote:
Mmh,
seems we should stress test the server and Grizzly with e.g.
Perhaps something as simple as a Grinder script might help?
Jim
On 11 Aug 2010, at 17:57, Brock Rousseau wrote:
Thanks Peter. Let us know if there is anything else we can provide in the
way of logs or diagnosis from our server.
-Brock
On Tue, Aug 10, 2010 at 11:51 PM, Peter Neubauer
Nice,
will try that out Jim! Grinder seems cool.
Cheers,
/peter neubauer
COO and Sales, Neo Technology
GTalk: neubauer.peter
Skype peter.neubauer
Phone +46 704 106975
LinkedIn http://www.linkedin.com/in/neubauer
Twitter http://twitter.com/peterneubauer
So the current status is that David has got neo4j REST running on Jetty with
all tests passing. We've also searched through the code, and found that
there are no interrupt() calls in the Jersey source, while there are a few
on the grizzly side. There is one in particular that we have been looking
Hi Jacob,
Would you be able to email me that patch? It's probably easier for me to
throw it on our server and let you know how it goes rather than you guys
having to try and reproduce it.
Rough data for our server:
~1.5 billion relationships
~400 million nodes
~1,200 transactions per minute
Hi Brock,
If you svn update to the latest version of the REST component, apply the
patch I'll send to you, and rebuild it as per Jacob's previous instructions,
then it should use Jetty instead.
Keep in mind that this was a quick fix done today, so it might break down
for the same or other
Hey David,
No worries about the disclaimer. I am getting a runtime error on startup
though due to the lack of the Jetty libraries. Any special instructions
there or should I just grab them from Jetty's website?
Also, would any of you be available via gchat some time in the next 24 hours
so I can
Hi Brock,
Ok, that should have been taken care of by Maven, let me have a look at
that. It should of course work to just mvn install:install-file them
yourself into your repository. But I'll have a look at that.
I'm free for gchat any time today if you want.
David
On Thu, Aug 12, 2010 at 12:29
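For the record, installing the Jetty jar into a local Maven repository by hand, as David suggests, would look roughly like this. The org.mortbay.jetty coordinates are the usual ones for Jetty 6, but check them against the POM of the REST component:

```
mvn install:install-file \
  -Dfile=jetty-6.1.25.jar \
  -DgroupId=org.mortbay.jetty \
  -DartifactId=jetty \
  -Dversion=6.1.25 \
  -Dpackaging=jar
```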
Hi Brock,
Sorry, I misread your e-mail, I thought you said compile time. I should at
least have breakfast before answering any e-mails :)
So, a runtime error. What library/class is missing? Could you provide us
with the error, it would help.
You can grab Jetty 6.1.25 and put it in lib, if
Brock,
I've written a patch that allows setting the max threads via the neo4j
configuration file. Applying it fixes the problem you mention, but leads to
some kind of resource starvation higher up. This is not as critical, at 400
concurrent requests it will make the server alternate between 12ms
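Jacob's patch targets Grizzly's internals, but the underlying idea, a worker pool whose maximum size comes from configuration instead of a hard-coded cap, can be sketched with java.util.concurrent; the rest.max.threads property name is hypothetical, not from the patch:

```java
import java.util.concurrent.SynchronousQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class ConfigurablePool {

    // Build a bounded worker pool whose max size is read from configuration
    // rather than hard-coded, mirroring the intent of the patch.
    static ThreadPoolExecutor fromConfig() {
        int maxThreads = Integer.getInteger("rest.max.threads", 5); // hypothetical key
        return new ThreadPoolExecutor(
                Math.min(5, maxThreads), maxThreads,
                60, TimeUnit.SECONDS,
                new SynchronousQueue<>(),                   // grow the pool before queueing
                new ThreadPoolExecutor.CallerRunsPolicy()); // degrade instead of reject
    }

    public static void main(String[] args) throws Exception {
        System.setProperty("rest.max.threads", "50");
        ThreadPoolExecutor pool = fromConfig();
        AtomicInteger done = new AtomicInteger();
        for (int i = 0; i < 200; i++) {
            pool.submit(done::incrementAndGet);
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
        System.out.println("completed: " + done.get());
    }
}
```

With a hard default of 5 and no way to override it, a burst of slow requests saturates the pool immediately, which matches the 5-transaction cap reported above.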
Hey Jacob,
The patch sounds perfect, but I didn't see an attachment. Did I miss it
somehow?
The standalone project built fine for me, but we're getting these test
failures for mvn install on the rest component:
Failed tests:
Apparently the email list deletes attachments it does not recognize, I'll
send it directly to your email.
As far as the test failures go, I don't know what causes that. I ran and
passed all tests with the patch, but the trunk version should do that too.
It might be something platform specific.
I got the patch Jacob, thanks!
We're running Centos 5.5, Java 1.6.0, and Maven 2.2.1
If it's fine for you guys then, as you said, it's likely some configuration
difference with our system, so I'll just run it with -Dmaven.test.skip=true
Applying the patch now, I'll let you know how it goes. Thanks
The patched trunk code is working fine after dropping it over our existing
deployment. I'm going to wait until we have the support of our site-speed
engineers in the morning before testing the transaction limit under
full production load, but I'll post the results as soon as we have them.
Thanks,
The problem with the resource starvation turned out to be fairly simple, and
was due to a missing Connection: close header in the neo4j REST client I
was using.
It would be interesting to look into the Connection: keep-alive behaviour
of Grizzly; using HTTP 1.1 pipelining would be an excellent way
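The client-side fix described here can be sketched with plain HttpURLConnection; the endpoint URL below is illustrative. Note that Connection is normally a restricted header in HttpURLConnection, but the close value specifically is permitted:

```java
import java.net.HttpURLConnection;
import java.net.URL;

public class CloseConnectionClient {

    // Prepare a GET request that tells the server not to keep the socket alive.
    static HttpURLConnection prepare(String endpoint) throws Exception {
        HttpURLConnection conn = (HttpURLConnection) new URL(endpoint).openConnection();
        conn.setRequestMethod("GET");
        // Without this header the client defaults to keep-alive, which can
        // pin server-side worker threads under sustained load.
        conn.setRequestProperty("Connection", "close");
        return conn;
    }

    public static void main(String[] args) throws Exception {
        // openConnection() does not touch the network, so this is safe to run
        // without a server listening on the (hypothetical) endpoint.
        HttpURLConnection conn = prepare("http://localhost:7474/db/data/node/0");
        System.out.println("Connection header: " + conn.getRequestProperty("Connection"));
    }
}
```

Alternatively, setting the system property http.keepAlive to false disables connection reuse for all HttpURLConnection traffic in the JVM.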
The patch worked perfectly for increasing the concurrent transaction cap,
but unfortunately exposed another issue.
After increasing the load hitting our rest server, it performs smoothly for
10-15 minutes then begins issuing 500 responses on all transactions. When it
happens, the number of open
Hi!
We were able to increase the heap size by removing the
wrapper.java.initmemory and wrapper.java.maxmemory settings from
wrapper.conf and instead used this:
The Java Service Wrapper version used has a known bug not allowing more
than four digits in the maxmemory setting. Thanks for
Brock,
I've been digging into this, and was able to replicate your problem with a
concurrency cap at 5 transactions. It appears it is the Grizzly server
running REST that imposes this limit by not expanding its thread pool the
way it's supposed to. Increasing the initial number of threads
Hey Jacob,
Thanks for the quick response!
We saw your post on the grizzly mailing list about the transaction limit
fix. Is that something we'd be able to implement on our end today? We've had
to throttle back the traffic significantly and are eager to see it in action
at full volume.
-Brock
This is an update to the issue previously reported by Mohit.
We were able to increase the heap size by removing the
wrapper.java.initmemory and wrapper.java.maxmemory settings from
wrapper.conf and instead used this:
wrapper.java.additional.1=-d64
wrapper.java.additional.2=-server
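The quoted wrapper.conf excerpt stops after -server; presumably the heap was passed straight to the JVM through further wrapper.java.additional entries, which sidesteps the four-digit limit in wrapper.java.maxmemory mentioned above. A hypothetical completion (the 16g values are illustrative, not from the original message):

```
wrapper.java.additional.1=-d64
wrapper.java.additional.2=-server
# hypothetical continuation: heap flags passed directly to the JVM
wrapper.java.additional.3=-Xms16g
wrapper.java.additional.4=-Xmx16g
```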
Hi,
I'm running the standalone neo4j REST server on a 64-bit Linux machine with
64GB RAM and am trying to configure the following memory settings through the
wrapper.conf file:
wrapper.java.initmemory=16144
wrapper.java.maxmemory=16144
However when I restart the server, JMX shows me the