Re: [Neo4j] neo4j REST server configuration

2010-09-15 Thread Mohit Vazirani
Hi,


The heap part was resolved. Thanks for adding it to the wiki. Any ideas as to
why JMX doesn't show the info when I attach?

~Mohit



Re: [Neo4j] neo4j REST server configuration

2010-09-15 Thread Peter Neubauer
Mohit,
are you connecting via JConsole to the running process to see the JMX data?
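
For reference, one common way to get at that data - a sketch, not specific to the
Neo4j wrapper setup: JConsole can attach locally by process id, or remotely if a
JMX port is exposed through wrapper.conf. The port number and the ".N" indexes
below are placeholders.

jconsole <pid-of-the-wrapped-JVM>

# or expose remote JMX and connect with: jconsole <host>:3637
wrapper.java.additional.N=-Dcom.sun.management.jmxremote.port=3637
wrapper.java.additional.N=-Dcom.sun.management.jmxremote.authenticate=false
wrapper.java.additional.N=-Dcom.sun.management.jmxremote.ssl=false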

Cheers,

/peter neubauer

VP Product Development, Neo Technology

GTalk:      neubauer.peter
Skype       peter.neubauer
Phone       +46 704 106975
LinkedIn   http://www.linkedin.com/in/neubauer
Twitter      http://twitter.com/peterneubauer

http://www.neo4j.org               - Your high performance graph database.
http://www.thoughtmade.com - Scandinavia's coolest Bring-a-Thing party.





Re: [Neo4j] neo4j REST server configuration

2010-09-14 Thread Mattias Persson
Is this resolved? Otherwise, take a look at
http://wiki.neo4j.org/content/Getting_Started_REST#Configure_amount_of_memory
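
For reference, a sketch of the kind of heap settings that page covers, assuming
standard Tanuki Service Wrapper semantics (initmemory/maxmemory are given in MB
and become -Xms/-Xmx; the flags can also be passed verbatim, with the ".N"
indexes below as placeholders):

wrapper.java.initmemory=16384
wrapper.java.maxmemory=16384
# or, equivalently, as explicit JVM arguments:
wrapper.java.additional.N=-Xms16g
wrapper.java.additional.N=-Xmx16g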

2010/8/7 Mohit Vazirani mohi...@yahoo.com

 Hi,

 I'm running the standalone neo4j REST server on a 64-bit Linux machine with
 64GB
 RAM and am trying to configure the following memory settings through the
 wrapper.conf file:

 wrapper.java.initmemory=16144
 wrapper.java.maxmemory=16144

 However when I restart the server, JMX shows me the following VM arguments:

 -Dcom.sun.management.jmxremote -Xms4096m -Xmx4096m -Djava.library.path=lib
 -Dwrapper.key=q8W6vP8LS9mj0ekz -Dwrapper.port=32000
 -Dwrapper.jvm.port.min=31000
 -Dwrapper.jvm.port.max=31999 -Dwrapper.pid=27943 -Dwrapper.version=3.2.3
 -Dwrapper.native_library=wrapper -Dwrapper.service=TRUE
 -Dwrapper.cpu.timeout=10
 -Dwrapper.jvmid=1

 Another unrelated issue is that JMX MBeans show configuration attributes
 as
 unavailable when I attach to the REST wrapper.

 The reason I am looking into modifying the configuration is that my client
 servers seem to be timing out. The server cannot handle more than 5
 concurrent
 transactions, so I want to tweak the heap size and see if that helps.

 Thanks,
 ~Mohit








-- 
Mattias Persson, [matt...@neotechnology.com]
Hacker, Neo Technology
www.neotechnology.com


Re: [Neo4j] neo4j REST server configuration

2010-08-13 Thread David Montag
Hi Brock,

My guess is that the high load makes requests pile up, thus also keeping
transactions open. Are there many threads running too? Because if you have
249 open transactions, and only, say, 20 threads processing requests, then
something sounds fishy. Could be something to look into.
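
(One quick way to check the thread side, assuming you can run JDK tools on the
box: take a thread dump of the wrapped JVM with "jstack <pid>" and count the
request-handling threads by their name prefix - the prefix depends on the servlet
container in use. Comparing that count against the open-transaction figure in JMX
should show whether the two track each other.)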

I told you off-list yesterday that we support reads without transactions.
That is true; however, having investigated the matter a little, you don't
actually get any performance benefit from skipping the transaction for read
operations. From a developer's perspective it's very nice not to have to open
transactions all the time when doing reads, but it won't boost your
performance.

What *would* boost your performance would be to implement a domain-specific
REST API. I'm curious as to what kind of requests you are serving with the
REST component. You could potentially make the REST communication a lot less
chatty, depending on how you use the API. And that would naturally boost
your performance, as you can do smarter operations on the machine, instead
of round-tripping information over HTTP.
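
As a rough illustration only - this is not code from the REST component, and the
path, relationship type and property name are invented for the example - a
domain-specific resource embedding the kernel could answer a whole two-hop
question in a single round trip:

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import org.neo4j.graphdb.Direction;
import org.neo4j.graphdb.DynamicRelationshipType;
import org.neo4j.graphdb.GraphDatabaseService;
import org.neo4j.graphdb.Node;
import org.neo4j.graphdb.Relationship;
import org.neo4j.graphdb.RelationshipType;

@Path("/users/{id}/friends-of-friends")
public class FriendsOfFriendsResource {
    private static final RelationshipType KNOWS = DynamicRelationshipType.withName("KNOWS");
    private final GraphDatabaseService db;

    public FriendsOfFriendsResource(GraphDatabaseService db) {
        this.db = db;
    }

    @GET
    @Produces("application/json")
    public String friendsOfFriends(@PathParam("id") long id) {
        // The whole two-hop traversal happens server-side in one HTTP request,
        // instead of the client walking node and relationship URIs one by one.
        StringBuilder json = new StringBuilder("[");
        Node user = db.getNodeById(id);
        for (Relationship first : user.getRelationships(KNOWS, Direction.OUTGOING)) {
            for (Relationship second : first.getEndNode().getRelationships(KNOWS, Direction.OUTGOING)) {
                if (json.length() > 1) json.append(',');
                json.append('"').append(second.getEndNode().getProperty("name", "")).append('"');
            }
        }
        return json.append(']').toString();
    }
}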

Also, if all requests are reads, you could scale them out horizontally by
using the new high availability (HA) feature, once it is completed.

David

On Thu, Aug 12, 2010 at 7:51 PM, Brock Rousseau bro...@gmail.com wrote:

 Updates:
 The issue of transactions remaining open until the cap is hit does not
 happen under 50% load. I also couldn't get any single type of request to
 hang a connection open.

 The non-clean shutdown and recovery isn't much of an issue - it only took a
 few minutes; there just isn't a log entry for when it completes.

 Thanks,
 Brock


Re: [Neo4j] neo4j REST server configuration

2010-08-13 Thread Jim Webber
Hi David, Brock,

I wonder if there's any scope for caching in Brock's domain? It'd be pretty 
cool if we could semi-automatically put caching headers on any retrieved 
representations.

Jim
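
A rough sketch of one way to do that by hand today - a hypothetical servlet
filter, not something that exists in the REST component - stamping a short-lived
Cache-Control header onto GET responses:

import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class CacheHeaderFilter implements Filter {
    public void init(FilterConfig config) { }
    public void destroy() { }

    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest request = (HttpServletRequest) req;
        HttpServletResponse response = (HttpServletResponse) res;
        if ("GET".equals(request.getMethod())) {
            // Let clients and intermediaries reuse read-mostly representations briefly.
            response.setHeader("Cache-Control", "public, max-age=30");
        }
        chain.doFilter(req, res);
    }
}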



Re: [Neo4j] neo4j REST server configuration

2010-08-12 Thread Brock Rousseau
Hey David,

I was mistaken, jetty-6.1.25 was in the lib, but here's the error:

INFO   | jvm 1| 2010/08/12 05:19:38 | WrapperSimpleApp: Encountered an
error running main: java.lang.NoClassDefFoundError:
org/mortbay/jetty/HandlerContainer
INFO   | jvm 1| 2010/08/12 05:19:38 | java.lang.NoClassDefFoundError:
org/mortbay/jetty/HandlerContainer
INFO   | jvm 1| 2010/08/12 05:19:38 |   at
org.neo4j.rest.WebServerFactory.getDefaultWebServer(WebServerFactory.java:9)
...

-Brock



On Wed, Aug 11, 2010 at 10:55 PM, David Montag 
david.mon...@neotechnology.com wrote:

 Hi Brock,

 Sorry, I misread your e-mail, I thought you said compile time. I should at
 least have breakfast before answering any e-mails :)

 So, a runtime error. What library/class is missing? Could you provide us
 with the error? It would help.

 You can grab the Jetty 6.1.25 jars and put them in lib if they're not there.
 But they should be, if everything was installed correctly. "mvn clean install"
 in the REST component, and "mvn clean package" in the standalone component,
 should do it.

 Please keep us updated on your progress.

 David

 On Thu, Aug 12, 2010 at 7:40 AM, David Montag 
 david.mon...@neotechnology.com wrote:

  Hi Brock,
 
  Ok, that should have been taken care of by Maven; let me have a look at
  that. It should of course work to just "mvn install:install-file" them
  yourself into your repository, but I'll have a look at that.
 
  I'm free for gchat any time today if you want.
 
  David
 
 
  On Thu, Aug 12, 2010 at 12:29 AM, Brock Rousseau bro...@gmail.com
 wrote:
 
  Hey David,
 
  No worries about the disclaimer. I am getting a runtime error on startup
  though due to the lack of the Jetty libraries. Any special instructions
  there or should I just grab them from Jetty's website?
 
  Also, would any of you be available via gchat some time in the next 24
  hours
  so I can relay the results of load testing? I can adjust my schedule
 since
  you guys are CEST if I'm not mistaken, just let me know.
 
  Thanks,
  Brock
 

Re: [Neo4j] neo4j REST server configuration

2010-08-12 Thread Brock Rousseau
David hopped on gchat and we discovered that the error was due to my failure to
deploy the updated wrapper.conf. Overall the Jetty version seems much more
stable. It never crashed but there was one issue that eventually caused it
to start rejecting most requests.

As it runs, it appears to be collecting open transactions. Under full load
the number of open transactions steadily rose until capping at 249. I'll
have to run another test to see if it does the same thing more slowly under
lesser load. There didn't seem to be any impact on system performance or
response time at all until the cap was hit and it had to start rejecting
requests.

This error in wrapper.log started appearing almost exactly when the 249 cap was hit:

INFO   | jvm 1| 2010/08/12 06:48:52 | Aug 12, 2010 6:48:52 AM
sun.rmi.transport.tcp.TCPTransport$AcceptLoop executeAcceptLoop
INFO   | jvm 1| 2010/08/12 06:48:52 | WARNING: RMI TCP Accept-0: accept
loop for ServerSocket[addr=0.0.0.0/0.0.0.0,port=0,localport=44656] throws
INFO   | jvm 1| 2010/08/12 06:48:52 | java.net.SocketException: Too many
open files
INFO   | jvm 1| 2010/08/12 06:48:52 |   at
java.net.PlainSocketImpl.socketAccept(Native Method)
INFO   | jvm 1| 2010/08/12 06:48:52 |   at
java.net.AbstractPlainSocketImpl.accept(AbstractPlainSocketImpl.java:358)
INFO   | jvm 1| 2010/08/12 06:48:52 |   at
java.net.ServerSocket.implAccept(ServerSocket.java:470)
INFO   | jvm 1| 2010/08/12 06:48:52 |   at
java.net.ServerSocket.accept(ServerSocket.java:438)
INFO   | jvm 1| 2010/08/12 06:48:52 |   at
sun.management.jmxremote.LocalRMIServerSocketFactory$1.accept(LocalRMIServerSocketFactory.java:52)
INFO   | jvm 1| 2010/08/12 06:48:52 |   at
sun.rmi.transport.tcp.TCPTransport$AcceptLoop.executeAcceptLoop(TCPTransport.java:387)
INFO   | jvm 1| 2010/08/12 06:48:52 |   at
sun.rmi.transport.tcp.TCPTransport$AcceptLoop.run(TCPTransport.java:359)
INFO   | jvm 1| 2010/08/12 06:48:52 |   at
java.lang.Thread.run(Thread.java:636)
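
(The "Too many open files" above normally means the process has hit its
file-descriptor limit. Assuming a Linux box where you control how the wrapper is
launched, the usual mitigation is to raise the limit in the shell or init script
that starts it, e.g. "ulimit -n 65536", verify what the running JVM actually got
with "cat /proc/<pid>/limits", and watch current usage with
"lsof -p <pid> | wc -l".)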

I thought about setting up scheduled restarts for the REST server to avoid
that cap for now, but this shows up on startup:

INFO   | jvm 1| 2010/08/12 07:01:14 | Aug 12, 2010 7:01:14 AM
org.neo4j.kernel.impl.transaction.xaframework.XaLogicalLog
doInternalRecovery
INFO   | jvm 1| 2010/08/12 07:01:14 | INFO: Non clean shutdown detected
on log [neo4j-rest-db/nioneo_logical.log.1]. Recovery started ...

It's still running so I'm not sure how long it will take, but I'll post a
reply when it completes.

Thanks,
Brock




Re: [Neo4j] neo4j REST server configuration

2010-08-11 Thread Peter Neubauer
Mmh,
seems we should stress-test the server and Grizzly with e.g.
http://www.soapui.org and see if we can reproduce the scenario, if
there is no obvious hint to this. Will try to set it up ...

Cheers,

/peter neubauer




On Wed, Aug 11, 2010 at 4:14 AM, Brock Rousseau bro...@gmail.com wrote:
 The patch worked perfectly for increasing the concurrent transaction cap,
 but unfortunately exposed another issue.

 After increasing the load hitting our REST server, it performs smoothly for
 10-15 minutes, then begins issuing 500 responses on all transactions. When it
 happens, the number of open transactions freezes in JMX and the heap size
 essentially remains static. Below are the two stack traces we see in the
 wrapper.log. Here are what I think are the relevant configuration lines:

 wrapper.conf:
 wrapper.java.additional.1=-d64
 wrapper.java.additional.2=-server
 wrapper.java.additional.4=-Xmx8192m
 wrapper.java.additional.3=-XX:+UseConcMarkSweepGC
 wrapper.java.additional.4=-Dcom.sun.management.jmxremote

 neo4j.properties:

 rest_min_grizzly_threads=4
 rest_max_grizzly_threads=128

 neostore.nodestore.db.mapped_memory=4000M
 neostore.relationshipstore.db.mapped_memory=4M
 neostore.propertystore.db.mapped_memory=1800M
 neostore.propertystore.db.index.mapped_memory=100M
 neostore.propertystore.db.index.keys.mapped_memory=100M
 neostore.propertystore.db.strings.mapped_memory=3G
 neostore.propertystore.db.arrays.mapped_memory=0M

 The server has 64 GB of total RAM so there should be a little over 6 left for
 the system.


 At the initial time of failure there are several instances of this error:

 INFO   | jvm 1    | 2010/08/10 13:00:33 | Aug 10, 2010 1:00:33 PM
 com.sun.grizzly.http.KeepAliveThreadAttachment timedOut
 INFO   | jvm 1    | 2010/08/10 13:00:33 | WARNING: Interrupting idle Thread:
 Grizzly-9555-WorkerThread(1)
 INFO   | jvm 1    | 2010/08/10 13:00:33 | Aug 10, 2010 1:00:33 PM
 com.sun.jersey.spi.container.ContainerResponse mapMappableContainerException
 INFO   | jvm 1    | 2010/08/10 13:00:33 | SEVERE: The RuntimeException could
 not be mapped to a response, re-throwing to the HTTP container
 INFO   | jvm 1    | 2010/08/10 13:00:33 |
 org.neo4j.kernel.impl.nioneo.store.UnderlyingStorageException: Unable to
 load position[7280476] @[968303308]
 INFO   | jvm 1    | 2010/08/10 13:00:33 |   at
 org.neo4j.kernel.impl.nioneo.store.PersistenceRow.readPosition(PersistenceRow.java:101)
 INFO   | jvm 1    | 2010/08/10 13:00:33 |   at
 org.neo4j.kernel.impl.nioneo.store.PersistenceWindowPool.acquire(PersistenceWindowPool.java:152)
 INFO   | jvm 1    | 2010/08/10 13:00:33 |   at
 org.neo4j.kernel.impl.nioneo.store.CommonAbstractStore.acquireWindow(CommonAbstractStore.java:474)
 INFO   | jvm 1    | 2010/08/10 13:00:33 |   at
 org.neo4j.kernel.impl.nioneo.store.AbstractDynamicStore.getLightRecords(AbstractDynamicStore.java:375)
 INFO   | jvm 1    | 2010/08/10 13:00:33 |   at
 org.neo4j.kernel.impl.nioneo.store.PropertyStore.getRecord(PropertyStore.java:324)
 INFO   | jvm 1    | 2010/08/10 13:00:33 |   at
 org.neo4j.kernel.impl.nioneo.xa.ReadTransaction.propertyGetValue(ReadTransaction.java:237)
 INFO   | jvm 1    | 2010/08/10 13:00:33 |   at
 org.neo4j.kernel.impl.nioneo.xa.NioNeoDbPersistenceSource$ReadOnlyResourceConnection.loadPropertyValue(NioNeoDbPersistenceSource.java:216)
 INFO   | jvm 1    | 2010/08/10 13:00:33 |   at
 org.neo4j.kernel.impl.persistence.PersistenceManager.loadPropertyValue(PersistenceManager.java:79)
 INFO   | jvm 1    | 2010/08/10 13:00:33 |   at
 org.neo4j.kernel.impl.core.NodeManager.loadPropertyValue(NodeManager.java:579)
 INFO   | jvm 1    | 2010/08/10 13:00:33 |   at
 org.neo4j.kernel.impl.core.Primitive.getPropertyValue(Primitive.java:546)
 INFO   | jvm 1    | 2010/08/10 13:00:33 |   at
 org.neo4j.kernel.impl.core.Primitive.getProperty(Primitive.java:167)
 INFO   | jvm 1    | 2010/08/10 13:00:33 |   at
 org.neo4j.kernel.impl.core.NodeProxy.getProperty(NodeProxy.java:134)
 INFO   | jvm 1    | 2010/08/10 13:00:33 |   at
 org.neo4j.rest.domain.PropertiesMap.init(PropertiesMap.java:20)
 INFO   | jvm 1    | 2010/08/10 13:00:33 |   at
 org.neo4j.rest.domain.NodeRepresentation.init(NodeRepresentation.java:20)
 INFO   | jvm 1    | 2010/08/10 13:00:33 |   at
 org.neo4j.rest.domain.StorageActions$TraverserReturnType$1.toRepresentation(StorageActions.java:421)
 INFO   | jvm 1    | 2010/08/10 13:00:33 |   at
 org.neo4j.rest.domain.StorageActions.traverseAndCollect(StorageActions.java:403)
 INFO   | jvm 1    | 2010/08/10 13:00:33 |   at
 org.neo4j.rest.web.GenericWebService.traverse(GenericWebService.java:725)
 INFO   | jvm 1    | 2010/08/10 13:00:33 

Re: [Neo4j] neo4j REST server configuration

2010-08-11 Thread Brock Rousseau
Thanks Peter. Let us know if there is anything else we can provide in the
way of logs or diagnosis from our server.

-Brock


Re: [Neo4j] neo4j REST server configuration

2010-08-11 Thread Jim Webber
Perhaps something as simple as a Grinder script might help?

Jim
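
In the same spirit, even a throwaway Java loop is enough to keep a fixed number
of concurrent GETs in flight - a sketch only; the URL, thread count and duration
are made-up values, not anything from Brock's setup:

import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicLong;

public class RestLoadSketch {
    public static void main(String[] args) throws Exception {
        final String target = "http://localhost:9999/node/1";   // made-up endpoint
        final AtomicLong ok = new AtomicLong();
        final AtomicLong failed = new AtomicLong();
        ExecutorService pool = Executors.newFixedThreadPool(50); // 50 concurrent clients
        for (int i = 0; i < 50; i++) {
            pool.submit(new Runnable() {
                public void run() {
                    while (!Thread.currentThread().isInterrupted()) {
                        try {
                            HttpURLConnection c = (HttpURLConnection) new URL(target).openConnection();
                            InputStream in = c.getInputStream();
                            while (in.read() != -1) { /* drain the response */ }
                            in.close();
                            ok.incrementAndGet();
                        } catch (Exception e) {
                            failed.incrementAndGet();
                        }
                    }
                }
            });
        }
        Thread.sleep(60 * 1000);   // run for a minute, then report
        pool.shutdownNow();        // workers stop at their next loop check
        System.out.println("ok=" + ok + " failed=" + failed);
    }
}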



Re: [Neo4j] neo4j REST server configuration

2010-08-11 Thread Peter Neubauer
Nice,
will try that out Jim! Grinder seems cool.

Cheers,

/peter neubauer





Re: [Neo4j] neo4j REST server configuration

2010-08-11 Thread Jacob Hansson
So the current status is that David has got neo4j REST running on Jetty with
all tests passing. We've also searched through the code and found that
there are no interrupt() calls in the Jersey source, while there are a few
on the Grizzly side. There is one in particular that we have been looking
at, related to keep-alive timeouts, that may be the culprit. If that was the
problem, we've got a fix for it.

We have, however, been unable to recreate the problem so far, so we can't
tell if we've solved it or not :) Brock: could you give us an idea of what
types of requests you were throwing at the server, and a rough estimate of
how many?

/Jacob

On Wed, Aug 11, 2010 at 2:35 PM, Jacob Hansson ja...@voltvoodoo.com wrote:

 Hi all!

  Johan took a look at the stack trace, and explained the problem. What
  happens is that something - either the Grizzly server or the Jersey wrapper -
  calls Thread.interrupt() on one of the neo4j threads (which should be
  considered a bug in whichever one of them does that). This triggers an
  IOError deep down in neo4j, which in turn causes the rest of the problems.

 I'm working on recreating the situation, and David is working on switching
 the REST system over to run on Jetty instead of Grizzly. We'll keep you
 posted on the progress.

 /Jacob
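
For background on why an interrupt is so destructive here - a standalone
illustration, not Neo4j code, and the real stack ends in an IOError rather than
this exact exception: java.nio FileChannels are interruptible, so interrupting a
thread that is blocked in I/O on one doesn't just abort that read, it closes the
channel for every other thread sharing it:

import java.io.File;
import java.io.RandomAccessFile;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;

public class InterruptClosesChannel {
    public static void main(String[] args) throws Exception {
        File file = File.createTempFile("store", ".db");
        file.deleteOnExit();
        final FileChannel channel = new RandomAccessFile(file, "rw").getChannel();
        channel.write(ByteBuffer.wrap(new byte[1024]));

        Thread worker = new Thread(new Runnable() {
            public void run() {
                try {
                    // Stand-in for the container interrupting a worker thread:
                    Thread.currentThread().interrupt();
                    channel.read(ByteBuffer.allocate(16), 0); // throws ClosedByInterruptException
                } catch (Exception e) {
                    System.out.println("worker got: " + e);
                }
            }
        });
        worker.start();
        worker.join();

        // Side effect: the shared channel is now closed for all threads,
        // so every later read against it fails - hence unrelated requests breaking.
        System.out.println("channel still open? " + channel.isOpen()); // prints false
    }
}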



Re: [Neo4j] neo4j REST server configuration

2010-08-11 Thread Brock Rousseau
Hi Jacob,

Would you be able to email me that patch? It's probably easier for me to
throw it on our server and let you know how it goes rather than you guys
having to try and reproduce it.

Rough data for our server:
 ~1.5 billion relationships
 ~400 million nodes
 ~1,200 transactions per minute
 ~90% are lookups, 10% inserts

Not sure if you're still around due to the time difference, but if you could
provide that patch today I can test it right away.

Thanks,
Brock


Re: [Neo4j] neo4j REST server configuration

2010-08-11 Thread David Montag
Hi Brock,

If you svn update to the latest version of the REST component, apply the
patch I'll send to you, and rebuild it as per Jacob's previous instructions,
then it should use Jetty instead.

Keep in mind that this was a quick fix done today, so it might break down
for the same or other reasons, especially as we haven't been able to
reproduce the error you're seeing, and hence test that it actually fixes
anything. Just a disclaimer.

David

On Wed, Aug 11, 2010 at 7:30 PM, Brock Rousseau bro...@gmail.com wrote:

 Hi Jacob,

 Would you be able to email me that patch? It's probably easier for me to
 throw it on our server and let you know how it goes rather than you guys
 having to try and reproduce it.

 Rough data for our server:
  ~1.5 billion relationships
  ~400 million nodes
  ~1,200 transactions per minute
  ~90% are lookups, 10% inserts

 Not sure if you're still around due to the time difference, but if you
 could
 provide that patch today I can test it right away.

 Thanks,
 Brock

 On Wed, Aug 11, 2010 at 9:22 AM, Jacob Hansson ja...@voltvoodoo.com
 wrote:

  So the current status is that David has got neo4j REST running on Jetty
  with
  all tests passing. We've also searched through the code, and found that
  there are no interrupt() calls in the jersey source, while there are a
 few
  on the grizzly side. There is one in particular that we have been looking
  at, related to keep-alive timeouts, that may be the culprit. If that was
  the
  problem, we've got a fix for it.
 
  We have, however, been unable to recreate the problem so far, so we can't
  tell if we've solved it or not :) Brock: could you give us an idea of
 what
  types of requests you were throwing at the server, and a rough estimate
 of
  how many?
 
  /Jacob
 
  On Wed, Aug 11, 2010 at 2:35 PM, Jacob Hansson ja...@voltvoodoo.com
  wrote:
 
   Hi all!
  
   Johan took a look at the stack trace, and explained the problem. What
   happens is that something, either the Grizzly server or the jersey
  wrapper
   calls Thread.interrupt() on one of the neo4j threads (which should be
   considered a bug in whichever one of them does that). This triggers an
   IOError deep down in neo4j, which in turn causes the rest of the
  problems.
  
   I'm working on recreating the situation, and David is working on
  switching
   the REST system over to run on Jetty instead of Grizzly. We'll keep you
   posted on the progress.
  
   /Jacob
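The effect Jacob describes is easy to reproduce outside of neo4j or Grizzly with
plain java.nio: interrupting a thread that is using a FileChannel closes the
channel underneath it. A minimal standalone sketch (class and file names are
made up for the demo; this is not neo4j code):

import java.io.File;
import java.io.RandomAccessFile;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;

public class InterruptedChannelDemo {
    public static void main(String[] args) throws Exception {
        final File file = File.createTempFile("interrupt-demo", ".bin");
        Thread worker = new Thread(new Runnable() {
            public void run() {
                try {
                    FileChannel channel = new RandomAccessFile(file, "rw").getChannel();
                    // With the interrupt flag already set, the next operation on an
                    // interruptible channel closes the channel and throws
                    // ClosedByInterruptException.
                    Thread.currentThread().interrupt();
                    channel.read(ByteBuffer.allocate(16), 0);
                } catch (Exception e) {
                    System.out.println("worker failed: " + e);
                }
            }
        });
        worker.start();
        worker.join();
        file.delete();
    }
}

Once the channel behind a store file has been closed this way, later reads
through it fail as well, which is consistent with Jacob's description of a
single interrupt causing the rest of the problems.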
  
  
   On Wed, Aug 11, 2010 at 1:51 PM, Peter Neubauer 
   peter.neuba...@neotechnology.com wrote:
  
   Nice,
   will try that out Jim! Grinder seems cool.
  
   Cheers,
  
   /peter neubauer
  
   COO and Sales, Neo Technology
  
   GTalk:  neubauer.peter
   Skype   peter.neubauer
   Phone   +46 704 106975
   LinkedIn   http://www.linkedin.com/in/neubauer
   Twitter  http://twitter.com/peterneubauer
  
   http://www.neo4j.org   - Your high performance graph
   database.
   http://www.thoughtmade.com - Scandinavia's coolest Bring-a-Thing
 party.
  
  
  
   On Wed, Aug 11, 2010 at 12:52 PM, Jim Webber j...@webber.name wrote:
Perhaps something as simple as a Grinder script might help?
   
Jim
   
   
On 11 Aug 2010, at 17:57, Brock Rousseau wrote:
   
Thanks Peter. Let us know if there is anything else we can provide
 in
   the
way of logs or diagnosis from our server.
   
-Brock
   
On Tue, Aug 10, 2010 at 11:51 PM, Peter Neubauer 
peter.neuba...@neotechnology.com wrote:
   
Mmh,
seems we should stresstest the server and Grizzly with e.g.
http://www.soapui.org and see if we can reproduce the scenario,
 if
there is no obvious hint to this. Will try to set it up ...
   
Cheers,
   
/peter neubauer
   
COO and Sales, Neo Technology
   
GTalk:  neubauer.peter
Skype   peter.neubauer
Phone   +46 704 106975
LinkedIn   http://www.linkedin.com/in/neubauer
Twitter  http://twitter.com/peterneubauer
   
http://www.neo4j.org   - Your high performance graph
   database.
http://www.thoughtmade.com - Scandinavia's coolest Bring-a-Thing
   party.
   
   
   
On Wed, Aug 11, 2010 at 4:14 AM, Brock Rousseau bro...@gmail.com
 
   wrote:
The patch worked perfectly for increasing the concurrent
  transaction
   cap,
but unfortunately exposed another issue.
   
After increasing the load hitting our rest server, it performs
   smoothly
for
10-15 minutes then begins issuing 500 responses on all
  transactions.
   When
it
happens, the number of open transactions freezes in JMX and the
  heap
   size
essentially remains static. Below are the two stack traces we see
  in
   the
    wrapper.log. Here is what I think are the relevant
 configuration
lines:
   
wrapper.conf:
wrapper.java.additional.1=-d64
wrapper.java.additional.2=-server
wrapper.java.additional.4=-Xmx8192m
wrapper.java.additional.3=-XX:+UseConcMarkSweepGC

Re: [Neo4j] neo4j REST server configuration

2010-08-11 Thread Brock Rousseau
Hey David,

No worries about the disclaimer. I am getting a runtime error on startup
though due to the lack of the Jetty libraries. Any special instructions
there or should I just grab them from Jetty's website?

Also, would any of you be available via gchat some time in the next 24 hours
so I can relay the results of load testing? I can adjust my schedule since
you guys are CEST if I'm not mistaken, just let me know.

Thanks,
Brock

On Wed, Aug 11, 2010 at 2:46 PM, David Montag 
david.mon...@neotechnology.com wrote:

 Hi Brock,

 If you svn update to the latest version of the REST component, apply the
 patch I'll send to you, and rebuild it as per Jacob's previous
 instructions,
 then it should use Jetty instead.

 Keep in mind that this was a quick fix done today, so it might break down
 for the same or other reasons, especially as we haven't been able to
 reproduce the error you're seeing, and hence test that it actually fixes
 anything. Just a disclaimer.

 David


Re: [Neo4j] neo4j REST server configuration

2010-08-11 Thread David Montag
Hi Brock,

Ok, that should have been taken care of by Maven, let me have a look at
that. It should of course work to just mvn install:install-file them
yourself into your repository. But I'll have a look at that.

I'm free for gchat any time today if you want.

David

On Thu, Aug 12, 2010 at 12:29 AM, Brock Rousseau bro...@gmail.com wrote:

 Hey David,

 No worries about the disclaimer. I am getting a runtime error on startup
 though due to the lack of the Jetty libraries. Any special instructions
 there or should I just grab them from Jetty's website?

 Also, would any of you be available via gchat some time in the next 24
 hours
 so I can relay the results of load testing? I can adjust my schedule since
 you guys are CEST if I'm not mistaken, just let me know.

 Thanks,
 Brock


Re: [Neo4j] neo4j REST server configuration

2010-08-11 Thread David Montag
Hi Brock,

Sorry, I misread your e-mail, I thought you said compile time. I should at
least have breakfast before answering any e-mails :)

So, a runtime error. What library/class is missing? Could you provide us
with the error, it would help.

You can grab the Jetty 6.1.25 jars and put them in lib, if they're not there.
But they should be, if everything was installed correctly. Running mvn clean
install in the REST component, and then mvn clean package in the standalone
component, should do it.

Please keep us updated on your progress.

David

On Thu, Aug 12, 2010 at 7:40 AM, David Montag 
david.mon...@neotechnology.com wrote:

 Hi Brock,

 Ok, that should have been taken care of by Maven, let me have a look at
 that. It should of course work to just mvn install:install-file them
 yourself into your repository. But I'll have a look at that.

 I'm free for gchat any time today if you want.

 David



Re: [Neo4j] neo4j REST server configuration

2010-08-10 Thread Jacob Hansson
Brock,

I've written a patch that allows setting the max threads via the neo4j
configuration file. Applying it fixes the problem you mention, but leads to
some kind of resource starvation higher up. This is not as critical, at 400
concurrent requests it will make the server alternate between 12ms read
response times (which should be the norm) and 300ms. It's still bad, so
before this patch is applied I'd like to dig into where that comes from.

If you want, though, you can apply the patch yourself. To do that, you would
download the REST component and the REST standalone project, available here:

svn co https://svn.neo4j.org/laboratory/components/rest/
svn co https://svn.neo4j.org/assemblies/rest-standalone/trunk/

Apply the patch (attached to this email) to the REST component.

Run mvn install on the REST component, and then mvn package on the
standalone project. This will give you a patched version with a default max
thread count of 128 (compared to 5 before).


To increase it more, go to the neo4j-rest-db-folder in the deployed
system, open (or create) the file neo4j.properties, and use the following
two settings to modify the thread count:

rest_min_grizzly_threads=16
rest_max_grizzly_threads=48

I can't give you a perfect number, but it makes a big difference in
response times, in both directions, to change these settings.
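As a rough sketch of how a server could pick these settings up at startup (the
property names and defaults are the ones mentioned in this thread; the parsing
code below is only an illustration, not the actual patch):

import java.io.FileInputStream;
import java.util.Properties;

public class GrizzlyThreadSettings {
    public static void main(String[] args) throws Exception {
        Properties config = new Properties();
        FileInputStream in = new FileInputStream("neo4j.properties");
        config.load(in);
        in.close();
        // Defaults mirror the numbers mentioned in this thread: 5 threads
        // before the patch, 128 as the patched default maximum.
        int min = Integer.parseInt(config.getProperty("rest_min_grizzly_threads", "5"));
        int max = Integer.parseInt(config.getProperty("rest_max_grizzly_threads", "128"));
        System.out.println("REST worker threads: min=" + min + ", max=" + max);
    }
}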

/Jacob

On Mon, Aug 9, 2010 at 8:51 PM, Brock Rousseau bro...@gmail.com wrote:

 Hey Jacob,

 Thanks for the quick response!

 We saw your post on the grizzly mailing list about the transaction limit
 fix. Is that something we'd be able to implement on our end today? We've
 had
 to throttle back the traffic significantly and are eager to see it in
 action
 at full volume.

 -Brock
 ___
 Neo4j mailing list
 User@lists.neo4j.org
 https://lists.neo4j.org/mailman/listinfo/user




-- 
Jacob Hansson
Phone: +46 (0) 763503395
Twitter: @jakewins
___
Neo4j mailing list
User@lists.neo4j.org
https://lists.neo4j.org/mailman/listinfo/user


Re: [Neo4j] neo4j REST server configuration

2010-08-10 Thread Brock Rousseau
Hey Jacob,

The patch sounds perfect, but I didn't see an attachment. Did I miss it
somehow?

The standalone project built fine for me, but we're getting these test
failures for mvn install on the rest component:
Failed tests:
  shouldBeAbleToDescribeTraverser(org.neo4j.rest.web.JsonWebServiceTest)
Tests in error:
  shouldBeAbleToTraverseEverything(org.neo4j.rest.domain.StorageActionsTest)

shouldBeAbleToUseCustomReturnFilter(org.neo4j.rest.domain.StorageActionsTest)

shouldBeAbleToTraverseWithMaxDepthAndPruneEvaluatorCombined(org.neo4j.rest.domain.StorageActionsTest)

shouldGetExpectedHitsWhenTraversingWithDescription(org.neo4j.rest.functional.TraverserFunctionalTest)
Should we be running the install by ignoring tests or should those be
passing?

Thanks,
Brock
On Tue, Aug 10, 2010 at 12:33 AM, Jacob Hansson ja...@voltvoodoo.com wrote:

 Brock,

 I've written a patch that allows setting the max threads via the neo4j
 configuration file. Applying it fixes the problem you mention, but leads to
 some kind of resource starvation higher up. This is not as critical, at 400
 concurrent requests it will make the server alternate between 12ms read
 response times (which should be the norm) and 300ms. It's still bad, so
 before this patch is applied I'd like to dig into where that comes from.

 If you want, though, you can apply the patch yourself. To do that, you
 would
 download the REST component and the REST standalone project, available
 here:

 svn co https://svn.neo4j.org/laboratory/components/rest/
 svn co https://svn.neo4j.org/assemblies/rest-standalone/trunk/

 Apply the patch (attached to this email) to the REST component.

 Run mvn install on the REST component, and then mvn package on the
 standalone project. This will give you a patched version with a default max
 thread count of 128 (compared to 5 before).


 To increase it more, go to the neo4j-rest-db-folder in the deployed
 system, open (or create) the file neo4j.properties, and use the following
 two settings to modify the thread count:

 rest_min_grizzly_threads=16
 rest_max_grizzly_threads=48

 I can't give you a perfect number, but it makes a big difference in
 response times, in both directions, to change these settings.

 /Jacob

 On Mon, Aug 9, 2010 at 8:51 PM, Brock Rousseau bro...@gmail.com wrote:

   Hey Jacob,
 
  Thanks for the quick response!
 
  We saw your post on the grizzly mailing list about the transaction limit
  fix. Is that something we'd be able to implement on our end today? We've
  had
  to throttle back the traffic significantly and are eager to see it in
  action
  at full volume.
 
  -Brock
   ___
  Neo4j mailing list
  User@lists.neo4j.org
  https://lists.neo4j.org/mailman/listinfo/user
 



 --
 Jacob Hansson
 Phone: +46 (0) 763503395
 Twitter: @jakewins

 ___
 Neo4j mailing list
 User@lists.neo4j.org
 https://lists.neo4j.org/mailman/listinfo/user


___
Neo4j mailing list
User@lists.neo4j.org
https://lists.neo4j.org/mailman/listinfo/user


Re: [Neo4j] neo4j REST server configuration

2010-08-10 Thread Jacob Hansson
Apparently the email list deletes attachments it does not recognize; I'll
send it directly to your email.

As far as the test failures go, I don't know what causes that. I ran and
passed all tests with the patch, but the trunk version should do that too.
It might be something platform-specific. What platform are you running on?

/Jacob

On Tue, Aug 10, 2010 at 10:06 AM, Brock Rousseau bro...@gmail.com wrote:

 Hey Jacob,

  The patch sounds perfect, but I didn't see an attachment. Did I miss it
 somehow?

 The standalone project built fine for me, but we're getting these test
 failures for mvn install on the rest component:
 Failed tests:
  shouldBeAbleToDescribeTraverser(org.neo4j.rest.web.JsonWebServiceTest)
 Tests in error:
  shouldBeAbleToTraverseEverything(org.neo4j.rest.domain.StorageActionsTest)


 shouldBeAbleToUseCustomReturnFilter(org.neo4j.rest.domain.StorageActionsTest)


 shouldBeAbleToTraverseWithMaxDepthAndPruneEvaluatorCombined(org.neo4j.rest.domain.StorageActionsTest)


 shouldGetExpectedHitsWhenTraversingWithDescription(org.neo4j.rest.functional.TraverserFunctionalTest)
 Should we be running the install by ignoring tests or should those be
 passing?

 Thanks,
 Brock
 
 
 ___
 Neo4j mailing list
 User@lists.neo4j.org
 https://lists.neo4j.org/mailman/listinfo/user




-- 
Jacob Hansson
Phone: +46 (0) 763503395
Twitter: @jakewins
___
Neo4j mailing list
User@lists.neo4j.org
https://lists.neo4j.org/mailman/listinfo/user


Re: [Neo4j] neo4j REST server configuration

2010-08-10 Thread Brock Rousseau
I got the patch Jacob, thanks!

We're running CentOS 5.5, Java 1.6.0, and Maven 2.2.1.

If it's fine for you guys then, as you said, it's likely some configuration
difference with our system, so I'll just run it with -Dmaven.test.skip=true.

Applying the patch now; I'll let you know how it goes. Thanks again for all
the support.

-Brock

On Tue, Aug 10, 2010 at 1:37 AM, Anders Nawroth and...@neotechnology.com wrote:

 hi!

 On 08/10/2010 10:06 AM, Brock Rousseau wrote:
  The standalone project built fine for me, but we're getting these test
  failures for mvn install on the rest component:

 Try using Java 6 or skip the tests.

  From the pom.xml:
 <!-- NOTICE: Tests will not run on standard Java 5!
  This is due to the fact that there is no easily available
  implementation of the javax.script API for Java 5 in any
  maven repositories. For building on Java 5, either skip
  running the tests, or include the javax.script js
 reference
  implementation from http://jcp.org/en/jsr/detail?id=223
  in the classpath of the JVM.
   -->

 /anders
  ___
 Neo4j mailing list
 User@lists.neo4j.org
 https://lists.neo4j.org/mailman/listinfo/user

___
Neo4j mailing list
User@lists.neo4j.org
https://lists.neo4j.org/mailman/listinfo/user


Re: [Neo4j] neo4j REST server configuration

2010-08-10 Thread Brock Rousseau
The patched trunk code is working fine after dropping it over our existing
deployment. I'm going to wait until we have the support of our site-speed
engineers in the morning before testing the transaction limit under
full production load, but I'll post the results as soon as we have them.

Thanks,
Brock


On Tue, Aug 10, 2010 at 1:47 AM, Brock Rousseau bro...@gmail.com wrote:

 I got the patch Jacob, thanks!

  We're running CentOS 5.5, Java 1.6.0, and Maven 2.2.1.

  If it's fine for you guys then, as you said, it's likely some configuration
  difference with our system, so I'll just run it with -Dmaven.test.skip=true.

  Applying the patch now; I'll let you know how it goes. Thanks again for all
  the support.

 -Brock




___
Neo4j mailing list
User@lists.neo4j.org
https://lists.neo4j.org/mailman/listinfo/user


Re: [Neo4j] neo4j REST server configuration

2010-08-10 Thread Jacob Hansson
The problem with the resource starvation turned out to be fairly simple, and
was due to a missing Connection: close header in the neo4j REST client I
was using.

It would be interesting to look into the Connection: keep-alive behaviour
of Grizzly; using HTTP 1.1 pipelining would be an excellent way to increase
performance in the clients, but the few things I've read so far suggest
Grizzly does not yet support that.
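For anyone seeing the same starvation from their own client, a minimal
java.net sketch of the workaround (the URL and port below are placeholders):

import java.net.HttpURLConnection;
import java.net.URL;

public class ConnectionCloseClient {
    public static void main(String[] args) throws Exception {
        // Placeholder URL; substitute the real REST endpoint.
        URL url = new URL("http://localhost:9999/node/1");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        // Ask the server to close the connection after this response rather
        // than keeping it alive. Setting -Dhttp.keepAlive=false on the client
        // JVM has a similar effect for all requests.
        conn.setRequestProperty("Connection", "close");
        System.out.println("HTTP " + conn.getResponseCode());
        conn.disconnect();
    }
}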

/Jacob

On Tue, Aug 10, 2010 at 11:19 AM, Brock Rousseau bro...@gmail.com wrote:

 The patched trunk code is working fine after dropping it over our existing
 deployment. I'm going to wait until we have the support of our site-speed
 engineers in the morning before testing the transaction limit under
 full production load, but I'll post the results as soon as we have them.

 Thanks,
 Brock


 
 
 
 ___
 Neo4j mailing list
 User@lists.neo4j.org
 https://lists.neo4j.org/mailman/listinfo/user




-- 
Jacob Hansson
Phone: +46 (0) 763503395
Twitter: @jakewins
___
Neo4j mailing list
User@lists.neo4j.org
https://lists.neo4j.org/mailman/listinfo/user


Re: [Neo4j] neo4j REST server configuration

2010-08-10 Thread Brock Rousseau
The patch worked perfectly for increasing the concurrent transaction cap,
but unfortunately exposed another issue.

After increasing the load hitting our REST server, it performs smoothly for
10-15 minutes, then begins issuing 500 responses on all transactions. When it
happens, the number of open transactions freezes in JMX and the heap size
essentially remains static. Below are the two stack traces we see in the
wrapper.log. Here is what I think are the relevant configuration lines:

wrapper.conf:
wrapper.java.additional.1=-d64
wrapper.java.additional.2=-server
wrapper.java.additional.4=-Xmx8192m
wrapper.java.additional.3=-XX:+UseConcMarkSweepGC
wrapper.java.additional.4=-Dcom.sun.management.jmxremote

neo4j.properties:

rest_min_grizzly_threads=4
rest_max_grizzly_threads=128

neostore.nodestore.db.mapped_memory=4000M
neostore.relationshipstore.db.mapped_memory=4M
neostore.propertystore.db.mapped_memory=1800M
neostore.propertystore.db.index.mapped_memory=100M
neostore.propertystore.db.index.keys.mapped_memory=100M
neostore.propertystore.db.strings.mapped_memory=3G
neostore.propertystore.db.arrays.mapped_memory=0M

The server has 64 GB of total RAM, so there should be a little over 6 GB left
for the system.


At the initial time of failure there are several instances of this error:

INFO   | jvm 1| 2010/08/10 13:00:33 | Aug 10, 2010 1:00:33 PM
com.sun.grizzly.http.KeepAliveThreadAttachment timedOut
INFO   | jvm 1| 2010/08/10 13:00:33 | WARNING: Interrupting idle Thread:
Grizzly-9555-WorkerThread(1)
INFO   | jvm 1| 2010/08/10 13:00:33 | Aug 10, 2010 1:00:33 PM
com.sun.jersey.spi.container.ContainerResponse mapMappableContainerException
INFO   | jvm 1| 2010/08/10 13:00:33 | SEVERE: The RuntimeException could
not be mapped to a response, re-throwing to the HTTP container
INFO   | jvm 1| 2010/08/10 13:00:33 |
org.neo4j.kernel.impl.nioneo.store.UnderlyingStorageException: Unable to
load position[7280476] @[968303308]
INFO   | jvm 1| 2010/08/10 13:00:33 |   at
org.neo4j.kernel.impl.nioneo.store.PersistenceRow.readPosition(PersistenceRow.java:101)
INFO   | jvm 1| 2010/08/10 13:00:33 |   at
org.neo4j.kernel.impl.nioneo.store.PersistenceWindowPool.acquire(PersistenceWindowPool.java:152)
INFO   | jvm 1| 2010/08/10 13:00:33 |   at
org.neo4j.kernel.impl.nioneo.store.CommonAbstractStore.acquireWindow(CommonAbstractStore.java:474)
INFO   | jvm 1| 2010/08/10 13:00:33 |   at
org.neo4j.kernel.impl.nioneo.store.AbstractDynamicStore.getLightRecords(AbstractDynamicStore.java:375)
INFO   | jvm 1| 2010/08/10 13:00:33 |   at
org.neo4j.kernel.impl.nioneo.store.PropertyStore.getRecord(PropertyStore.java:324)
INFO   | jvm 1| 2010/08/10 13:00:33 |   at
org.neo4j.kernel.impl.nioneo.xa.ReadTransaction.propertyGetValue(ReadTransaction.java:237)
INFO   | jvm 1| 2010/08/10 13:00:33 |   at
org.neo4j.kernel.impl.nioneo.xa.NioNeoDbPersistenceSource$ReadOnlyResourceConnection.loadPropertyValue(NioNeoDbPersistenceSource.java:216)
INFO   | jvm 1| 2010/08/10 13:00:33 |   at
org.neo4j.kernel.impl.persistence.PersistenceManager.loadPropertyValue(PersistenceManager.java:79)
INFO   | jvm 1| 2010/08/10 13:00:33 |   at
org.neo4j.kernel.impl.core.NodeManager.loadPropertyValue(NodeManager.java:579)
INFO   | jvm 1| 2010/08/10 13:00:33 |   at
org.neo4j.kernel.impl.core.Primitive.getPropertyValue(Primitive.java:546)
INFO   | jvm 1| 2010/08/10 13:00:33 |   at
org.neo4j.kernel.impl.core.Primitive.getProperty(Primitive.java:167)
INFO   | jvm 1| 2010/08/10 13:00:33 |   at
org.neo4j.kernel.impl.core.NodeProxy.getProperty(NodeProxy.java:134)
INFO   | jvm 1| 2010/08/10 13:00:33 |   at
org.neo4j.rest.domain.PropertiesMap.init(PropertiesMap.java:20)
INFO   | jvm 1| 2010/08/10 13:00:33 |   at
org.neo4j.rest.domain.NodeRepresentation.init(NodeRepresentation.java:20)
INFO   | jvm 1| 2010/08/10 13:00:33 |   at
org.neo4j.rest.domain.StorageActions$TraverserReturnType$1.toRepresentation(StorageActions.java:421)
INFO   | jvm 1| 2010/08/10 13:00:33 |   at
org.neo4j.rest.domain.StorageActions.traverseAndCollect(StorageActions.java:403)
INFO   | jvm 1| 2010/08/10 13:00:33 |   at
org.neo4j.rest.web.GenericWebService.traverse(GenericWebService.java:725)
INFO   | jvm 1| 2010/08/10 13:00:33 |   at
org.neo4j.rest.web.JsonAndHtmlWebService.jsonTraverse(JsonAndHtmlWebService.java:324)
INFO   | jvm 1| 2010/08/10 13:00:33 |   at
sun.reflect.GeneratedMethodAccessor10.invoke(Unknown Source)
INFO   | jvm 1| 2010/08/10 13:00:33 |   at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
INFO   | jvm 1| 2010/08/10 13:00:33 |   at
java.lang.reflect.Method.invoke(Method.java:616)
INFO   | jvm 1| 2010/08/10 13:00:33 |   at
com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$ResponseOutInvoker._dispatch(AbstractResourceMethodDispatchProvider.java:184)
INFO   | jvm 1| 2010/08/10 13:00:33 |   at

Re: [Neo4j] neo4j REST server configuration

2010-08-09 Thread Anders Nawroth
Hi!

 We were able to increase the heap size by removing the
 wrapper.java.initmemory and wrapper.java.maxmemory settings from
 wrapper.conf and instead used this:

The Java Service Wrapper version used has a known bug not allowing more 
than four digits in the maxmemory setting. Thanks for reporting the issue!

/anders


 wrapper.java.additional.1=-d64

 wrapper.java.additional.2=-server

 wrapper.java.additional.3=-Dcom.sun.management.jmxremote

 wrapper.java.additional.4=-Xmx16144m

 wrapper.java.additional.5=-Xms16144m



 The rest server still seems to be capping itself at 5 concurrent
 transactions though despite the larger heap. When hit with the full load of
 the client servers, we observed that both the
 PeakNumberOfConcurrentTransactions and NumberOfOpenTransactions attributes
 in the Transactions section of JMX remain exactly at 5. Under lesser load,
 the client servers rarely time out and the NumberOfOpenTransactions hovers
 between 0 and 2.



 Is there a way to explicitly increase the maximum number of concurrent
 transactions?



 As a side note, we initially explored the heap size after reading this:
 “Having a larger heap space will mean that Neo4j can handle larger
 transactions and more concurrent transactions” on the Neo4j wiki (
 http://wiki.neo4j.org/content/Configuration_Settings)



 Thanks,

 Brock
 ___
 Neo4j mailing list
 User@lists.neo4j.org
 https://lists.neo4j.org/mailman/listinfo/user
___
Neo4j mailing list
User@lists.neo4j.org
https://lists.neo4j.org/mailman/listinfo/user


Re: [Neo4j] neo4j REST server configuration

2010-08-09 Thread Jacob Hansson
Brock,

I've been digging into this, and was able to replicate your problem with a
concurrency cap at 5 transactions. It appears it is the Grizzly server
running REST that imposes this limit by not expanding its thread pool the
way it's supposed to. Increasing the initial number of threads (default is
5) leads to a proportional increase in the peak number of concurrent
transactions reported. If I bring the minimum number of threads up to 128, I
get 128 concurrent transactions too, so the culprit definitely is the web
tier.

By hammering the server really hard I can get Grizzly to bring 13 threads
online, still far from the several hundred that is the default maximum.

I don't know yet why the thread pool does not grow as expected; I'm not very
familiar with Grizzly, but I've posted a message to the Grizzly mailing list
to see if anyone there has any input.
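One way to watch the pool grow (or not) is to count the live worker threads by
name; a small sketch using the standard ThreadMXBean, where only the "Grizzly"
thread-name prefix is taken from the wrapper.log entries elsewhere in this
thread:

import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

public class WorkerThreadCount {
    public static void main(String[] args) {
        // Counts live threads whose names start with "Grizzly", matching the
        // "Grizzly-9555-WorkerThread(...)" naming seen in the wrapper.log.
        // Shown against the local JVM; use a remote ThreadMXBean proxy to
        // inspect the wrapper-started server instead.
        ThreadMXBean threads = ManagementFactory.getThreadMXBean();
        int count = 0;
        for (ThreadInfo info : threads.getThreadInfo(threads.getAllThreadIds())) {
            if (info != null && info.getThreadName().startsWith("Grizzly")) {
                count++;
            }
        }
        System.out.println("Grizzly worker threads: " + count);
    }
}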

/Jacob

On Mon, Aug 9, 2010 at 11:03 AM, Anders Nawroth and...@neotechnology.com wrote:

 Hi!

  We were able to increase the heap size by removing the
  wrapper.java.initmemory and wrapper.java.maxmemory settings from
  wrapper.conf and instead used this:

 The Java Service Wrapper version used has a known bug not allowing more
 than four digits in the maxmemory setting. Thanks for reporting the issue!

 /anders

 
 ___
 Neo4j mailing list
 User@lists.neo4j.org
 https://lists.neo4j.org/mailman/listinfo/user




-- 
Jacob Hansson
Phone: +46 (0) 763503395
Twitter: @jakewins
___
Neo4j mailing list
User@lists.neo4j.org
https://lists.neo4j.org/mailman/listinfo/user


Re: [Neo4j] neo4j REST server configuration

2010-08-09 Thread Brock Rousseau
Hey Jacob,

Thanks for the quick response!

We saw your post on the grizzly mailing list about the transaction limit
fix. Is that something we'd be able to implement on our end today? We've had
to throttle back the traffic significantly and are eager to see it in action
at full volume.

-Brock
___
Neo4j mailing list
User@lists.neo4j.org
https://lists.neo4j.org/mailman/listinfo/user


[Neo4j] neo4j REST server configuration

2010-08-08 Thread Brock Rousseau
This is an update to the issue previously reported by Mohit.



We were able to increase the heap size by removing the
wrapper.java.initmemory and wrapper.java.maxmemory settings from
wrapper.conf and instead used this:

wrapper.java.additional.1=-d64

wrapper.java.additional.2=-server

wrapper.java.additional.3=-Dcom.sun.management.jmxremote

wrapper.java.additional.4=-Xmx16144m

wrapper.java.additional.5=-Xms16144m



The REST server still seems to be capping itself at 5 concurrent
transactions though, despite the larger heap. When hit with the full load of
the client servers, we observed that both the
PeakNumberOfConcurrentTransactions and NumberOfOpenTransactions attributes
in the Transactions section of JMX remain exactly at 5. Under lesser load,
the client servers rarely time out and the NumberOfOpenTransactions hovers
between 0 and 2.
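Those two attributes can also be read programmatically rather than through the
JConsole UI. A small sketch using the standard JMX API; the exact object name
of the neo4j Transactions bean is not assumed here, the query just matches on
name=Transactions:

import java.lang.management.ManagementFactory;
import java.util.Set;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class TransactionStats {
    public static void main(String[] args) throws Exception {
        // Matches any MBean whose "name" key is Transactions; the exact domain
        // of the neo4j bean is not assumed here. Swap the platform server for
        // an MBeanServerConnection (jmxremote) to query the wrapper-started
        // process from outside.
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        Set<ObjectName> names = server.queryNames(new ObjectName("*:name=Transactions,*"), null);
        for (ObjectName name : names) {
            System.out.println(name
                    + " open=" + server.getAttribute(name, "NumberOfOpenTransactions")
                    + " peak=" + server.getAttribute(name, "PeakNumberOfConcurrentTransactions"));
        }
    }
}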



Is there a way to explicitly increase the maximum number of concurrent
transactions?



As a side note, we initially explored the heap size after reading this:
“Having a larger heap space will mean that Neo4j can handle larger
transactions and more concurrent transactions” on the Neo4j wiki (
http://wiki.neo4j.org/content/Configuration_Settings)



Thanks,

Brock
___
Neo4j mailing list
User@lists.neo4j.org
https://lists.neo4j.org/mailman/listinfo/user


[Neo4j] neo4j REST server configuration

2010-08-06 Thread Mohit Vazirani
Hi,

I'm running the standalone neo4j REST server on a 64 bit linux machine with 
64GB 
RAM and am trying to configure the following memory settings through the 
wrapper.conf file:

wrapper.java.initmemory=16144
wrapper.java.maxmemory=16144

However when I restart the server, JMX shows me the following VM arguments:

-Dcom.sun.management.jmxremote -Xms4096m -Xmx4096m -Djava.library.path=lib 
-Dwrapper.key=q8W6vP8LS9mj0ekz -Dwrapper.port=32000 
-Dwrapper.jvm.port.min=31000 
-Dwrapper.jvm.port.max=31999 -Dwrapper.pid=27943 -Dwrapper.version=3.2.3 
-Dwrapper.native_library=wrapper -Dwrapper.service=TRUE 
-Dwrapper.cpu.timeout=10 
-Dwrapper.jvmid=1

Another unrelated issue is that JMX Mbeans shows configuration attributes as 
unavailable when I attach to the REST wrapper.

The reason I am looking into modifying the configuration is that my client 
servers seem to be timing out. The server cannot handle more than 5 concurrent 
transactions, so I want to tweak the heap size and see if that helps.

Thanks,
~Mohit



  
___
Neo4j mailing list
User@lists.neo4j.org
https://lists.neo4j.org/mailman/listinfo/user