Re: [Neo4j] neo4j REST server configuration
Mmh, seems we should stress-test the server and Grizzly with e.g. http://www.soapui.org and see if we can reproduce the scenario, if there is no obvious hint to this. Will try to set it up ...

Cheers,

/peter neubauer
COO and Sales, Neo Technology

GTalk: neubauer.peter
Skype: peter.neubauer
Phone: +46 704 106975
LinkedIn: http://www.linkedin.com/in/neubauer
Twitter: http://twitter.com/peterneubauer

http://www.neo4j.org - Your high performance graph database.
http://www.thoughtmade.com - Scandinavia's coolest Bring-a-Thing party.

On Wed, Aug 11, 2010 at 4:14 AM, Brock Rousseau bro...@gmail.com wrote:

The patch worked perfectly for increasing the concurrent transaction cap, but unfortunately exposed another issue. After increasing the load hitting our REST server, it performs smoothly for 10-15 minutes, then begins issuing 500 responses on all transactions. When it happens, the number of open transactions freezes in JMX and the heap size remains essentially static. Below are the two stack traces we see in the wrapper.log.

Here is what I think to be the relevant configuration:

wrapper.conf:
wrapper.java.additional.1=-d64
wrapper.java.additional.2=-server
wrapper.java.additional.4=-Xmx8192m
wrapper.java.additional.3=-XX:+UseConcMarkSweepGC
wrapper.java.additional.4=-Dcom.sun.management.jmxremote

neo4j.properties:
rest_min_grizzly_threads=4
rest_max_grizzly_threads=128
neostore.nodestore.db.mapped_memory=4000M
neostore.relationshipstore.db.mapped_memory=4M
neostore.propertystore.db.mapped_memory=1800M
neostore.propertystore.db.index.mapped_memory=100M
neostore.propertystore.db.index.keys.mapped_memory=100M
neostore.propertystore.db.strings.mapped_memory=3G
neostore.propertystore.db.arrays.mapped_memory=0M

The server has 64 GB of total RAM, so there should be a little over 6 left for the system.
At the initial time of failure there are several of this error:

INFO | jvm 1 | 2010/08/10 13:00:33 | Aug 10, 2010 1:00:33 PM com.sun.grizzly.http.KeepAliveThreadAttachment timedOut
INFO | jvm 1 | 2010/08/10 13:00:33 | WARNING: Interrupting idle Thread: Grizzly-9555-WorkerThread(1)
INFO | jvm 1 | 2010/08/10 13:00:33 | Aug 10, 2010 1:00:33 PM com.sun.jersey.spi.container.ContainerResponse mapMappableContainerException
INFO | jvm 1 | 2010/08/10 13:00:33 | SEVERE: The RuntimeException could not be mapped to a response, re-throwing to the HTTP container
INFO | jvm 1 | 2010/08/10 13:00:33 | org.neo4j.kernel.impl.nioneo.store.UnderlyingStorageException: Unable to load position[7280476] @[968303308]
INFO | jvm 1 | 2010/08/10 13:00:33 | at org.neo4j.kernel.impl.nioneo.store.PersistenceRow.readPosition(PersistenceRow.java:101)
INFO | jvm 1 | 2010/08/10 13:00:33 | at org.neo4j.kernel.impl.nioneo.store.PersistenceWindowPool.acquire(PersistenceWindowPool.java:152)
INFO | jvm 1 | 2010/08/10 13:00:33 | at org.neo4j.kernel.impl.nioneo.store.CommonAbstractStore.acquireWindow(CommonAbstractStore.java:474)
INFO | jvm 1 | 2010/08/10 13:00:33 | at org.neo4j.kernel.impl.nioneo.store.AbstractDynamicStore.getLightRecords(AbstractDynamicStore.java:375)
INFO | jvm 1 | 2010/08/10 13:00:33 | at org.neo4j.kernel.impl.nioneo.store.PropertyStore.getRecord(PropertyStore.java:324)
INFO | jvm 1 | 2010/08/10 13:00:33 | at org.neo4j.kernel.impl.nioneo.xa.ReadTransaction.propertyGetValue(ReadTransaction.java:237)
INFO | jvm 1 | 2010/08/10 13:00:33 | at org.neo4j.kernel.impl.nioneo.xa.NioNeoDbPersistenceSource$ReadOnlyResourceConnection.loadPropertyValue(NioNeoDbPersistenceSource.java:216)
INFO | jvm 1 | 2010/08/10 13:00:33 | at org.neo4j.kernel.impl.persistence.PersistenceManager.loadPropertyValue(PersistenceManager.java:79)
INFO | jvm 1 | 2010/08/10 13:00:33 | at org.neo4j.kernel.impl.core.NodeManager.loadPropertyValue(NodeManager.java:579)
INFO | jvm 1 | 2010/08/10 13:00:33 | at org.neo4j.kernel.impl.core.Primitive.getPropertyValue(Primitive.java:546)
INFO | jvm 1 | 2010/08/10 13:00:33 | at org.neo4j.kernel.impl.core.Primitive.getProperty(Primitive.java:167)
INFO | jvm 1 | 2010/08/10 13:00:33 | at org.neo4j.kernel.impl.core.NodeProxy.getProperty(NodeProxy.java:134)
INFO | jvm 1 | 2010/08/10 13:00:33 | at org.neo4j.rest.domain.PropertiesMap.init(PropertiesMap.java:20)
INFO | jvm 1 | 2010/08/10 13:00:33 | at org.neo4j.rest.domain.NodeRepresentation.init(NodeRepresentation.java:20)
INFO | jvm 1 | 2010/08/10 13:00:33 | at org.neo4j.rest.domain.StorageActions$TraverserReturnType$1.toRepresentation(StorageActions.java:421)
INFO | jvm 1 | 2010/08/10 13:00:33 | at org.neo4j.rest.domain.StorageActions.traverseAndCollect(StorageActions.java:403)
INFO | jvm 1 | 2010/08/10 13:00:33 | at org.neo4j.rest.web.GenericWebService.traverse(GenericWebService.java:725)
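As a quick cross-check of the memory budget in the configuration above, the mapped_memory values can be summed. This is a back-of-the-envelope sketch using the figures exactly as posted; JVM overhead such as permgen, thread stacks, and GC headroom is deliberately ignored, and the class name is illustrative:

```java
// Back-of-the-envelope sum of the mapped_memory settings posted above,
// plus the -Xmx8192m heap from wrapper.conf.
public class MemoryBudget {
    static long mappedTotalMb() {
        long[] mappedMb = {
            4000,     // neostore.nodestore.db.mapped_memory
            4,        // neostore.relationshipstore.db.mapped_memory
            1800,     // neostore.propertystore.db.mapped_memory
            100,      // neostore.propertystore.db.index.mapped_memory
            100,      // neostore.propertystore.db.index.keys.mapped_memory
            3 * 1024, // neostore.propertystore.db.strings.mapped_memory (3G)
            0         // neostore.propertystore.db.arrays.mapped_memory
        };
        long total = 0;
        for (long mb : mappedMb) total += mb;
        return total;
    }

    public static void main(String[] args) {
        long heapMb = 8192; // -Xmx8192m from wrapper.conf
        System.out.println("mapped_memory total: " + mappedTotalMb() + " MB");
        System.out.println("heap + mapped_memory: " + (heapMb + mappedTotalMb()) + " MB");
    }
}
```

By this count the mapped stores come to 9076 MB, so heap plus store mapping is roughly 17 GB of the machine's 64 GB committed to the JVM and file mappings.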
Re: [Neo4j] neo4j REST server configuration
Thanks Peter. Let us know if there is anything else we can provide in the way of logs or diagnosis from our server.

-Brock

On Tue, Aug 10, 2010 at 11:51 PM, Peter Neubauer peter.neuba...@neotechnology.com wrote:
[...]
Re: [Neo4j] neo4j REST server configuration
Perhaps something as simple as a Grinder script might help?

Jim

On 11 Aug 2010, at 17:57, Brock Rousseau wrote:
[...]
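Along the lines of the soapUI/Grinder suggestions in this thread, a minimal load driver can be sketched in plain Java: a pool of worker threads hammering one endpoint and tallying status codes, which is enough to watch for the reported flip from 200s to 500s. This is a self-contained illustration, not Grinder itself; the `/db` path and the throwaway local stub server are stand-ins for a real Neo4j REST instance:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.InetSocketAddress;
import java.net.URL;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class MiniLoadTest {
    // Fire `requests` GETs at `url` from `threads` workers; tally status codes.
    // Connection-level failures are tallied under the key -1.
    static Map<Integer, Integer> hammer(String url, int threads, int requests) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        ConcurrentHashMap<Integer, Integer> counts = new ConcurrentHashMap<>();
        List<Future<?>> futures = new ArrayList<>();
        for (int i = 0; i < requests; i++) {
            futures.add(pool.submit(() -> {
                try {
                    HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
                    int code = conn.getResponseCode();
                    try (InputStream in = conn.getInputStream()) {
                        while (in.read() != -1) { /* drain so the connection can be reused */ }
                    }
                    counts.merge(code, 1, Integer::sum);
                } catch (IOException e) {
                    counts.merge(-1, 1, Integer::sum);
                }
            }));
        }
        for (Future<?> f : futures) f.get();
        pool.shutdown();
        return counts;
    }

    // A throwaway local HTTP server stands in for the Neo4j REST server so the
    // sketch runs as-is; point hammer() at a real server's URL to use it for real.
    static Map<Integer, Integer> runAgainstLocalStub(int threads, int requests) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress("127.0.0.1", 0), 0);
        server.createContext("/db", exchange -> {
            byte[] body = "{}".getBytes("UTF-8");
            exchange.sendResponseHeaders(200, body.length);
            exchange.getResponseBody().write(body);
            exchange.close();
        });
        server.setExecutor(Executors.newFixedThreadPool(4)); // serve requests concurrently
        server.start();
        try {
            String url = "http://127.0.0.1:" + server.getAddress().getPort() + "/db";
            return hammer(url, threads, requests);
        } finally {
            server.stop(0);
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println("status counts: " + runAgainstLocalStub(16, 200));
    }
}
```

A real run against the failing server would be left going for the 10-15 minutes Brock describes, printing the tally periodically, to catch the moment 500s start appearing.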
Re: [Neo4j] neo4j REST server configuration
Nice, will try that out Jim! Grinder seems cool.

Cheers,

/peter neubauer

On Wed, Aug 11, 2010 at 12:52 PM, Jim Webber j...@webber.name wrote:
[...]
Re: [Neo4j] neo4j REST server configuration
So the current status is that David has got neo4j REST running on Jetty with all tests passing. We've also searched through the code, and found that there are no interrupt() calls in the Jersey source, while there are a few on the Grizzly side. There is one in particular that we have been looking at, related to keep-alive timeouts, that may be the culprit. If that was the problem, we've got a fix for it. We have, however, been unable to recreate the problem so far, so we can't tell if we've solved it or not :)

Brock: could you give us an idea of what types of requests you were throwing at the server, and a rough estimate of how many?

/Jacob

On Wed, Aug 11, 2010 at 2:35 PM, Jacob Hansson ja...@voltvoodoo.com wrote:

Hi all!

Johan took a look at the stack trace, and explained the problem. What happens is that something, either the Grizzly server or the Jersey wrapper, calls Thread.interrupt() on one of the neo4j threads (which should be considered a bug in whichever one of them does that). This triggers an IOError deep down in neo4j, which in turn causes the rest of the problems. I'm working on recreating the situation, and David is working on switching the REST system over to run on Jetty instead of Grizzly. We'll keep you posted on the progress.

/Jacob

On Wed, Aug 11, 2010 at 1:51 PM, Peter Neubauer peter.neuba...@neotechnology.com wrote:
[...]
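Jacob's diagnosis, that an external Thread.interrupt() breaks Neo4j's store I/O, is easy to demonstrate in isolation: java.nio's FileChannel is an InterruptibleChannel, so when a thread with a pending interrupt touches one, the channel is closed out from under it and a ClosedByInterruptException surfaces. The following is a minimal sketch of that mechanism with illustrative names only (it is not Neo4j code, and Neo4j wraps the resulting failure differently):

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.ClosedByInterruptException;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class InterruptDemo {
    // Shows what happens when a thread with a pending interrupt performs I/O
    // on an interruptible NIO channel, as Neo4j's store files do: the channel
    // is closed and ClosedByInterruptException is thrown, per the
    // InterruptibleChannel contract.
    static String readWithPendingInterrupt() throws IOException {
        Path tmp = Files.createTempFile("interrupt-demo", ".bin");
        try (FileChannel ch = FileChannel.open(tmp, StandardOpenOption.READ)) {
            Thread.currentThread().interrupt(); // simulate Grizzly interrupting the worker
            ch.read(ByteBuffer.allocate(4));
            return "read succeeded";
        } catch (ClosedByInterruptException e) {
            return "ClosedByInterruptException";
        } finally {
            Thread.interrupted(); // clear the flag so cleanup isn't affected
            Files.deleteIfExists(tmp);
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println(readWithPendingInterrupt());
    }
}
```

Once the channel has been closed this way, every later read through it also fails, which matches the thread's symptom of all subsequent requests returning 500 after the first keep-alive interrupt.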
Re: [Neo4j] neo4j REST server configuration
Hi Jacob,

Would you be able to email me that patch? It's probably easier for me to throw it on our server and let you know how it goes rather than you guys having to try and reproduce it.

Rough data for our server:
~1.5 billion relationships
~400 million nodes
~1,200 transactions per minute
~90% are lookups, 10% inserts

Not sure if you're still around due to the time difference, but if you could provide that patch today I can test it right away.

Thanks,
Brock

On Wed, Aug 11, 2010 at 9:22 AM, Jacob Hansson ja...@voltvoodoo.com wrote:
[...]
[Neo4j] GSoC work result, Featuring Gephi Neo4j
Hi all,

My name is Martin Škurla, and this summer I was working on a GSoC project called "Adding support for Neo4j in Gephi". I would like to introduce to all of you an article summarizing all implemented features, including those under the hood, pictures of dialogs, common use cases, and future plans. A very important part of the article is a questionnaire, which is a valuable source of information for further Gephi/Neo4j cooperation. Thanks for any response.

The article can be found here:
http://gephi.org/2010/gsoc-2010-mid-term-adding-support-for-neo4j-in-gephi/

Thanks,
Martin Škurla

___
Neo4j mailing list
User@lists.neo4j.org
https://lists.neo4j.org/mailman/listinfo/user
Re: [Neo4j] neo4j REST server configuration
Hi Brock, If you svn update to the latest version of the REST component, apply the patch I'll send to you, and rebuild it as per Jacob's previous instructions, then it should use Jetty instead. Keep in mind that this was a quick fix done today, so it might break down for the same or other reasons, especially as we haven't been able to reproduce the error you're seeing, and hence test that it actually fixes anything. Just a disclaimer. David On Wed, Aug 11, 2010 at 7:30 PM, Brock Rousseau bro...@gmail.com wrote: Hi Jacob, Would you be able to email me that patch? It's probably easier for me to throw it on our server and let you know how it goes rather than you guys having to try and reproduce it. Rough data for our server: ~1.5 billion relationships ~400 million nodes ~1,200 transactions per minute ~90% are lookups, 10% inserts Not sure if you're still around due to the time difference, but if you could provide that patch today I can test it right away. Thanks, Brock On Wed, Aug 11, 2010 at 9:22 AM, Jacob Hansson ja...@voltvoodoo.com wrote: So the current status is that David has got neo4j REST running on Jetty with all tests passing. We've also searched through the code, and found that there are no interrupt() calls in the jersey source, while there are a few on the grizzly side. There is one in particular that we have been looking at, related to keep-alive timeouts, that may be the culprit. If that was the problem, we've got a fix for it. We have, however, been unable to recreate the problem so far, so we can't tell if we've solved it or not :) Brock: could you give us an idea of what types of requests you were throwing at the server, and a rough estimate of how many? /Jacob On Wed, Aug 11, 2010 at 2:35 PM, Jacob Hansson ja...@voltvoodoo.com wrote: Hi all! Johan took a look at the stack trace, and explained the problem. 
What happens is that something, either the Grizzly server or the jersey wrapper calls Thread.interrupt() on one of the neo4j threads (which should be considered a bug in whichever one of them does that). This triggers an IOError deep down in neo4j, which in turn causes the rest of the problems. I'm working on recreating the situation, and David is working on switching the REST system over to run on Jetty instead of Grizzly. We'll keep you posted on the progress. /Jacob On Wed, Aug 11, 2010 at 1:51 PM, Peter Neubauer peter.neuba...@neotechnology.com wrote: Nice, will try that out Jim! Grinder seems cool. Cheers, /peter neubauer COO and Sales, Neo Technology GTalk: neubauer.peter Skype peter.neubauer Phone +46 704 106975 LinkedIn http://www.linkedin.com/in/neubauer Twitter http://twitter.com/peterneubauer http://www.neo4j.org - Your high performance graph database. http://www.thoughtmade.com - Scandinavia's coolest Bring-a-Thing party. On Wed, Aug 11, 2010 at 12:52 PM, Jim Webber j...@webber.name wrote: Perhaps something as simple as a Grinder script might help? Jim On 11 Aug 2010, at 17:57, Brock Rousseau wrote: Thanks Peter. Let us know if there is anything else we can provide in the way of logs or diagnosis from our server. -Brock On Tue, Aug 10, 2010 at 11:51 PM, Peter Neubauer peter.neuba...@neotechnology.com wrote: Mmh, seems we should stresstest the server and Grizzly with e.g. http://www.soapui.org and see if we can reproduce the scenario, if there is no obvious hint to this. Will try to set it up ... Cheers, /peter neubauer COO and Sales, Neo Technology GTalk: neubauer.peter Skype peter.neubauer Phone +46 704 106975 LinkedIn http://www.linkedin.com/in/neubauer Twitter http://twitter.com/peterneubauer http://www.neo4j.org - Your high performance graph database. http://www.thoughtmade.com - Scandinavia's coolest Bring-a-Thing party. 
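Jacob's diagnosis above — a stray Thread.interrupt() triggering an IOError in neo4j's NIO store layer — can be reproduced in isolation, without neo4j: if a thread's interrupt flag is set when it performs a blocking read on an interruptible NIO channel, the JDK closes the channel and throws ClosedByInterruptException. A minimal self-contained sketch (plain JDK, no neo4j classes involved):

```java
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.nio.ByteBuffer;
import java.nio.channels.ClosedByInterruptException;
import java.nio.channels.FileChannel;

public class InterruptDemo {

    // Returns the simple name of the exception raised by an NIO read on an
    // interrupted thread ("ClosedByInterruptException" on a standard JDK).
    static String simulate() throws Exception {
        File f = File.createTempFile("store", ".db");
        try (FileOutputStream out = new FileOutputStream(f)) {
            out.write(new byte[] {1, 2, 3, 4});
        }
        String result = "no exception";
        try (FileChannel ch = new FileInputStream(f).getChannel()) {
            // Simulate the stray interrupt() the container issues on timeout.
            Thread.currentThread().interrupt();
            try {
                ch.read(ByteBuffer.allocate(4)); // interruptible channel I/O
            } catch (ClosedByInterruptException e) {
                // The channel is now closed; every later read on it fails too,
                // which is how a single interrupt can poison the store files.
                result = e.getClass().getSimpleName();
            }
        }
        Thread.interrupted(); // clear the interrupt flag before returning
        f.delete();
        return result;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(simulate());
    }
}
```

Note that the exception fires even though the read never actually blocks: AbstractInterruptibleChannel checks the interrupt status on entry, which matches the pattern of failures appearing suddenly under load once keep-alive timeouts start interrupting worker threads.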
Re: [Neo4j] neo4j REST server configuration
Hey David,

No worries about the disclaimer. I am getting a runtime error on startup, though, due to the lack of the Jetty libraries. Any special instructions there, or should I just grab them from Jetty's website?

Also, would any of you be available via gchat some time in the next 24 hours, so I can relay the results of the load testing? I can adjust my schedule since you guys are on CEST, if I'm not mistaken; just let me know.

Thanks,
Brock
[Neo4j] Use of LuceneFulltextIndexBatchInserter
Hi,

I would like to use the LuceneFulltextIndexBatchInserter instead of the LuceneIndexBatchInserter, to get fulltext Lucene indexing. How can I initialize the LuceneFulltextIndexBatchInserter? The example on http://wiki.neo4j.org/content/Batch_Insert covers the LuceneIndexBatchInserter, but not the LuceneFulltextIndexBatchInserter:

LuceneIndexBatchInserter indexService = new LuceneIndexBatchInserterImpl( inserter );

Thanks,
Paddy

___
Neo4j mailing list
User@lists.neo4j.org
https://lists.neo4j.org/mailman/listinfo/user
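If the fulltext variant follows the same pattern as the snippet in the question, initialization would presumably look like the sketch below. The constructor shape, the package names, and the index() call are assumptions extrapolated from the non-fulltext classes and the Batch_Insert wiki page, not verified against the 1.x API:

```java
// Assumed packages from the neo4j-index / neo4j-kernel 1.x era.
import org.neo4j.index.lucene.LuceneFulltextIndexBatchInserter;
import org.neo4j.index.lucene.LuceneFulltextIndexBatchInserterImpl;
import org.neo4j.kernel.impl.batchinsert.BatchInserter;
import org.neo4j.kernel.impl.batchinsert.BatchInserterImpl;

public class FulltextBatchInsertExample {
    public static void main(String[] args) {
        // Hypothetical store path; adjust to your database directory.
        BatchInserter inserter = new BatchInserterImpl("target/fulltext-db");

        // Assumption: the fulltext impl mirrors LuceneIndexBatchInserterImpl
        // and likewise wraps the BatchInserter.
        LuceneFulltextIndexBatchInserter fulltextIndex =
                new LuceneFulltextIndexBatchInserterImpl(inserter);

        long node = inserter.createNode(null);
        fulltextIndex.index(node, "description", "some searchable text");

        // Shut down the index before the inserter, as in the wiki example.
        fulltextIndex.shutdown();
        inserter.shutdown();
    }
}
```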
Re: [Neo4j] neo4j REST server configuration
Hi Brock,

Ok, that should have been taken care of by Maven; let me have a look at it. It should of course also work to just mvn install:install-file the jars into your repository yourself, but I'll look into why it didn't happen automatically.

I'm free for gchat any time today if you want.

David
Re: [Neo4j] neo4j REST server configuration
Hi Brock,

Sorry, I misread your e-mail; I thought you said compile time. I should at least have breakfast before answering any e-mails :)

So, a runtime error. Which library/class is missing? If you could provide us with the error message, that would help. You can grab Jetty 6.1.25 and put the jars in lib if they're not there, but they should be if everything was installed correctly: mvn clean install in the REST component, and mvn clean package in the standalone component, should do it.

Please keep us updated on your progress.

David
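The rebuild steps described in this thread can be sketched as shell commands. The directory names and the Jetty jar path are assumptions, not the actual layout of the Subversion checkout:

```shell
# Pull the latest sources (run from the checkout root).
svn update

# Rebuild the REST component, then package the standalone server.
# "components/rest" and "packaging/standalone" are hypothetical paths.
(cd components/rest && mvn clean install)
(cd packaging/standalone && mvn clean package)

# If the Jetty 6.1.25 jars did not end up in the local Maven repository,
# they can be installed manually (jar location is an assumption):
mvn install:install-file -DgroupId=org.mortbay.jetty -DartifactId=jetty \
  -Dversion=6.1.25 -Dpackaging=jar -Dfile=/path/to/jetty-6.1.25.jar
```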