You could configure the Lucene data source to auto-rotate logs more
frequently. If the logs are large, recovery takes longer.
-Johan
On Mon, Oct 17, 2011 at 11:37 PM, Nuo Yan wrote:
> What if in production due to whatever reason the neo4j server died and in
> the case people have to start up a n
On Thu, Sep 22, 2011 at 2:15 PM, st3ven wrote:
>
> Hi Johan,
>
> I changed the settings as you described, but that changed the speed not
> really significantly.
The previous configuration would make the machine use swap, which
will kill performance.
>
> To store the degree as a property on eac
Hi Stephan,
You could try lowering the heap size to -Xmx2G and cache_type=weak with
10G memory mapped for relationships. The machine only has 16G RAM and
will not be able to process such a large dataset at in-memory speeds.
Another option is to calculate the degree at insertion time and store it
as a pr
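The suggestion above can be sketched as configuration. The -Xmx2G, cache_type and 10G values are from the mail itself, but the file layout is an assumption about a typical Neo4j 1.x setup:

```properties
# JVM heap, set in the startup script or wrapper config:
#   -Xmx2G

# neo4j.properties (1.x-era keys)
cache_type=weak
neostore.relationshipstore.db.mapped_memory=10G
```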
Hi,
Did you get any error that caused the shutdown hook to run?
Is there a tm_tx_log.2 that contains data?
Regards,
Johan
On Fri, Sep 9, 2011 at 2:42 PM, skarab77 wrote:
> Hi,
>
> I have the following problem: when my program crash in the middle of the
> transaction, I am not able to start my
uld the db be off if I do that? Etc.
>
> Thanks much!
>
> Aseem
>
> On Tue, Aug 30, 2011 at 2:47 AM, Johan Svensson
> wrote:
>
>> Hi Aseem,
>>
>> This is actually expected behavior when performing file copy of
>> running db and starting up with default c
Hi Aseem,
This is actually expected behavior when performing a file copy of a
running db and starting up with the default configuration. If you remove
the files ending with .id in the db directory on the local snapshot
and start up with "rebuild_idgenerators_fast=false" set, you should see
the accurate amou
Hi Dario,
Could you post the error message and stacktrace?
Did the error happen after the initial import, while still running in
batch inserter mode, or in normal server/embedded transactional mode?
Regards,
Johan
On Wed, Jun 29, 2011 at 4:30 PM, Dario Rexin wrote:
> Hi all,
>
> Recently i tried import a
Hi,
This may be of interest http://arxiv.org/abs/1004.1001 (The Graph
Traversal Pattern) and
http://markorodriguez.com/2011/02/18/mysql-vs-neo4j-on-a-large-scale-graph-traversal/
Regards,
Johan
2011/6/27 Ian Bussières :
> Hello,
>
> I am using neo4j in a school project. I was wondering if anyone
Paul,
This could be related to the wrapper bug we found, if you are running the
server. If the server was under heavy load and entered GC thrashing
(the JVM stopping all threads to just run GC), the wrapper thought the
server was unresponsive and restarted it. This problem will be fixed
in the 1.4.M05 rele
Hi,
That is possible (and even recommended). The Java API is thread safe
(with the exception of the batch inserter) for both reads and writes.
Each thread may use its own transaction, but a transaction is only
required for write operations, not for reads.
Reading is lock free a
er-boun...@lists.neo4j.org] On Behalf
> Of Johan Svensson [jo...@neotechnology.com]
> Sent: Thursday, May 26, 2011 3:09 AM
> To: Neo4j user discussions
> Subject: Re: [Neo4j] ClosedChannelExceptions in highly concurrent environment
>
> Hi Jennifier,
>
> Could you apply this p
Hi,
Looks like there was an OOME during commit, but the commit partially
succeeded (removing the xid branch id association for the xa resource),
causing the subsequent rollback call to fail. To guarantee consistency
the kernel will block all mutating operations after this, and a restart
+ recovery has to
You could modify the structure of how the collection is stored so
there are several chains that can be updated in parallel for each
collection, similar to how ConcurrentHashMap works with several locks.
-Johan
On Fri, Jun 10, 2011 at 12:16 AM, Rick Bullotta
wrote:
> We seem to be encountering a lot
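The striping idea mentioned above (several independently locked chains per collection, like ConcurrentHashMap's lock stripes) can be sketched in plain Java. The class and method names here are hypothetical illustrations, not anything from Neo4j:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of a striped collection: elements are spread over
// several independent chains, each guarded by its own lock, so writers
// touching different chains do not block each other.
class StripedCollection<T> {
    private final List<List<T>> chains;
    private final Object[] locks;

    StripedCollection(int stripes) {
        chains = new ArrayList<>(stripes);
        locks = new Object[stripes];
        for (int i = 0; i < stripes; i++) {
            chains.add(new ArrayList<T>());
            locks[i] = new Object();
        }
    }

    void add(T element) {
        // Pick a chain by hash; only that chain's lock is taken,
        // so concurrent adds to other chains proceed in parallel.
        int stripe = Math.floorMod(element.hashCode(), locks.length);
        synchronized (locks[stripe]) {
            chains.get(stripe).add(element);
        }
    }

    int size() {
        int total = 0;
        for (int i = 0; i < locks.length; i++) {
            synchronized (locks[i]) {
                total += chains.get(i).size();
            }
        }
        return total;
    }
}
```

In graph terms, each chain could be its own linked list of collection entries, so concurrent transactions updating different chains never contend for the same write lock.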
, McKinley wrote:
> Johan,
>
> In that Servlet example is the synchronized get on the graphDb reference
> still necessary on the ServletContextListener?
>
> Thanks,
>
> McKinley
>
> On Tue, Jun 7, 2011 at 1:03 AM, Johan Svensson wrote:
>
>> Hi,
>>
>>
Hi,
Neo4j requires a filesystem, and that filesystem may or may not be
mounted in RAM. You can only control the location of the graph store
through the API, but it has to be on a supported filesystem.
On Linux an in-memory graph db can easily be created using /dev/shm:
GraphDatabaseService inMemoryG
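The snippet above is cut off; a minimal sketch with the 1.x embedded API would look like this (the path under /dev/shm is an example, not from the original mail):

```java
import org.neo4j.graphdb.GraphDatabaseService;
import org.neo4j.kernel.EmbeddedGraphDatabase;

class InMemoryGraphExample {
    public static void main(String[] args) {
        // /dev/shm is a tmpfs mount on most Linux systems, so the store
        // files live in RAM; contents are lost on reboot.
        GraphDatabaseService inMemoryGraphDb =
                new EmbeddedGraphDatabase("/dev/shm/neo4j-db");
        // ... use the db as usual ...
        inMemoryGraphDb.shutdown();
    }
}
```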
Hi,
You can assume all Neo4j APIs are thread safe. If something is not
thread safe it will be explicitly stated in the javadocs.
If you keep all state that has to be shared between threads in the
graph, and all other state thread-local, you don't have to perform any
external (or extra) synchronizat
Hi Jennifer,
Could you apply this patch to the kernel and then see if the problem
still exists? If you want I can send you a jar but then I need to know
what version of Neo4j you are using.
Regards,
Johan
On Mon, May 23, 2011 at 6:50 PM, Jennifer Hickey wrote:
> Hi Tobias,
>
> Looks like the
Hi Jose,
Does http://docs.neo4j.org/chunked/1.3/transactions-delete.html answer
your question?
Regards,
Johan
On Tue, May 24, 2011 at 4:34 AM, Jose Angel Inda Herrera
wrote:
> hello list,
> I wonder when a node will be removed in a transaction, since I have a
> transaction in which I delete a n
Hi,
What version of Neo4j are you running, and are there any other error
messages written to the console or to messages.log when you start up?
Do you have the neo4j-lucene-index component on the classpath? The global
transaction log contains a transaction that included the Lucene data
source (branch id 0x
Hi,
This will depend on the types of queries, the access patterns and what
the data looks like. Could you provide some more information on what the
data looks like, specifically the relationships traversed and properties
loaded for a query?
Regarding adding another machine to an already active cluster it is
ea
.I need it very urgently.
> Please let me know the possible solutions for this question.
> Thank you.
>
--
Johan Svensson [jo...@neotechnology.com]
Chief Technology Officer, Neo Technology
www.neotechnology.com
Bob,
How much RAM does the machine have?
On Thu, Apr 21, 2011 at 9:53 PM, Bob Hutchison wrote:
>
>>> ./run ../store logfile 33 1000 5000 100
> tx_count[100] records[298245] fdatasyncs[100] read[9.386144 MB]
> wrote[18.772287 MB]
> Time was: 199.116
> 0.5022198 tx/s, 1497.8455 records/s, 0.50221
inning and the end of the "index".
>
> Best,
>
> Rick
>
>
> -Original Message-
> From: user-boun...@lists.neo4j.org [mailto:user-boun...@lists.neo4j.org] On
> Behalf Of Johan Svensson
> Sent: Tuesday, March 22, 2011 5:56 AM
> To: Neo4j user discussio
Could you start by verifying it is not GC related: turn on verbose GC
and see if larger transactions trigger GC pause times.
Another possible cause could be that the relationship store file has
grown, so the configuration needs to be tweaked. The OS may be flushing
pages to disk when it should not. The
Hi,
I am assuming no manual modification of log files or store files at
runtime or between shutdowns/crashes and startups has been performed.
What filesystem are you running this on (and with what configuration)?
Massimo, since you say it happens more and more as the db grows, can you
write a test cas
Mark,
I had a look at this and you are trying to inject 130M relationships with
a relationship store configured to 700M. That will not be an efficient
insert. If your relationships and data are not sorted, the batch
inserter will have to unload and load blocks of data as soon as you
get over around 22M r
Hello,
I am having a hard time following what the problems really are since
the conversation is split across several threads.
Pablo, you had a problem with the batch inserter throwing an exception
upon shutdown that I suspected was due to not enough available disk
space. Then there was the "to many op
And if your domain is shardable you can still shard the same way you
would do using a relational database when using a graph database.
-Johan
On Wed, Jan 19, 2011 at 10:17 AM, Jim Webber wrote:
> Hello Luanne,
>
> Right now the only viable approach would be "cache sharding" (i.e. not really
> s
Hi,
Could you provide a list of the jar files on the classpath.
The information in /messages.log printed during startup would
also be interesting to see.
-Johan
On Thu, Jan 13, 2011 at 9:23 PM, Andreas Bauer wrote:
> Hi,
>
> now I got neoclipse running from eclipse, finally :) (BTW: adding
;
>
>
> TransactionManager tm = (( EmbeddedGraphDatabase )
> neo).getConfig().getTxModule().getTxManager();
> Transaction currentTx = (Transaction)tm.getTransaction();
> currentTx.success();
> currentTx.finish();
> tm.begin();
>
>
> ===
>
&
You can use the TransactionManager suspend/resume for this. Suspend
the current transaction and start a new one using the underlying TM.
Have a look at
https://svn.neo4j.org/components/rdf-sail/trunk/src/main/java/org/neo4j/rdf/sail/GraphDatabaseSailConnectionImpl.java
to see how this can be done.
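A minimal sketch of the suspend/resume pattern described above, assuming you already have a javax.transaction.TransactionManager from the kernel (how you obtain it is version-specific, and the helper name is made up):

```java
import javax.transaction.Transaction;
import javax.transaction.TransactionManager;

class TxHelper {
    // Runs "work" in its own transaction while the caller's transaction
    // is suspended, then reattaches the caller's transaction.
    static void runInSeparateTx(TransactionManager tm, Runnable work)
            throws Exception {
        Transaction outer = tm.suspend();   // park the current tx
        try {
            tm.begin();                     // fresh tx for the inner work
            work.run();
            tm.commit();
        } finally {
            if (outer != null) {
                tm.resume(outer);           // back to the original tx
            }
        }
    }
}
```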
On Thu, Dec 23, 2010 at 12:34 PM, George Ciubotaru
wrote:
>
> Taking a second look over that locking mechanism, I've noticed that it uses
> read locks for a delete operation. Should there be write locks instead?
>
Yes, sorry about that it should be write locks. The read locks will
still allow fo
ne release(s).
>
> Thanks,
>
> Rick
>
> Original Message
> Subject: Re: [Neo4j] InvalidRecordException exception
> From: Johan Svensson <[1]jo...@neotechnology.com>
> Date: Wed, December 15, 2010 8:32 am
> To: Neo4j user discussions <
at this is the reason and then I'll just
> accept the exception.
>
> Thank you for your quick and detailed response.
>
> Best regards,
> George
>
>
> -Original Message-
> From: user-boun...@lists.neo4j.org [mailto:user-boun...@lists.neo4j.org] On
>
at Graphing.Graph.deleteRelationships(Graph.java:1234)
>
> Thanks,
> George
>
> -Original Message-
> From: user-boun...@lists.neo4j.org [mailto:user-boun...@lists.neo4j.org] On
> Behalf Of Johan Svensson
> Sent: 15 December 2010 10:44
> To: Neo4j user discussi
Hi George,
Could you provide the full stacktrace for the exception.
Regards,
Johan
On Wed, Dec 15, 2010 at 11:33 AM, George Ciubotaru
wrote:
> Hi David,
>
> We've build our own REST service in front of Neo4j graph to interact with it
> from a different environment. The operations are simple:
>
t;
> -Original Message-
> From: user-boun...@lists.neo4j.org [mailto:user-boun...@lists.neo4j.org] On
> Behalf Of Johan Svensson
> Sent: Tuesday, December 14, 2010 6:59 PM
> To: Neo4j user discussions
> Subject: Re: [Neo4j] Neo4J logs and the Emb
Hi Marko,
On Fri, Dec 10, 2010 at 7:35 PM, Marko Rodriguez wrote:
> Hello.
>
> I have one question and a comment:
>
> QUESTION: Is the reference node always id 0 on a newly created graph?
Yes.
>
> COMMENT: By chance, will you guys remove the concept of a reference node into
> the future. I've
Hi,
There are some logging performed through the java.util.logging.Logger
and the org.neo4j.kernel.impl.util.StringLogger (check
/messages.log). What kind of logging are you interested in?
For normal monitoring and health check you can use JMX
(http://wiki.neo4j.org/content/Monitoring_and_Deployme
Since the small graph works well, and it looks like you are performing
writes together with reads, a possible cause could be the OS writing out
dirty pages to disk when it should not. Have a look at
http://wiki.neo4j.org/content/Linux_Performance_Guide
While running the test execute:
#watch grep -A 1 d
eostore.relationshipstore.db.mapped_memory=1372M
> neostore.propertystore.db.index.mapped_memory=1M
> create=true
> neostore.propertystore.db.mapped_memory=275M
> dump_configuration=true
> neostore.nodestore.db.mapped_memory=91M
> dir=/neo4j_database//lucene-fulltext
>
>
>
t; at org.eclipse.core.internal.jobs.Worker.run(Worker.java:55)
>> Caused by: java.io.IOException: Operation not permitted
>> at sun.nio.ch.FileChannelImpl.map0(Native Method)
>> at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:747)
>> at
>>
>> org.neo4j.kernel.impl.nioneo.store.MappedPersistenceWindow.(Mappe
Hi,
Could you add the following configuration parameter:
dump_configuration=true
and send the output printed to standard out when starting up. Other
useful information would be some thread dumps while executing a query
that takes a long time (send a kill -3 signal to the process).
Regards,
Johan
This looks great, very good work!
I would like to get this merged into trunk after we release 1.2 (not
after this iteration but after the next one).
The changes look minimal to me and hopefully there are no problems
going forward with the current design. Looking forward to the guide so
me and o
Hi Chris,
On Tue, Nov 9, 2010 at 7:34 PM, Chris Gioran wrote:
>> Chris,
>> Awesome! I think the next step would be to start testing things when
>> neo4j needs to recover, rollback etc, I think this is where the
>> problems arise :)
>>
>> Also, any chance of making a maven project out of it and ha
Hi,
If it is a very old store that has been running on pre-1.0 beta
releases, download Neo4j 1.0 and perform a startup+shutdown. After that
you will be able to run 1.1 or the current milestone/snapshot
releases.
If you have been running in HA mode and switched back to, for example,
the 1.1 release y
or Lockable relationship #1620
> Waiting list:
> Locking transactions:
> Transaction(36)[STATUS_MARKED_ROLLBACK,Resources=1](0r,1w)
> Total lock count: readCount=0 writeCount=1 for Relationship[1795]
> Waiting list:
> Locking transactions:
> Transaction(37)[STATUS_ACTIVE,Resourc
ated by servlet requests and are wrapped in Neo transactions.
>
>
>
> Thanks in advance for any suggestions to diagnose.
>
>
>
> Rick
>
> Original Message
> Subject: Re: [Neo4j] Concurrency issue/exception with Neo 1.1
> From: Johan Svensson <[1]j
Hi Rick,
Are you grabbing read locks manually on the relationship? If the
transaction has a read lock on relationship 1795 and wants to upgrade
it to a write lock, it has to wait for the other transactions
holding read locks on that relationship. This could lead to deadlock if
one of those ot
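The upgrade problem described here is generic to read/write locks. A plain-JDK illustration (using ReentrantReadWriteLock, not Neo4j's lock manager) shows that a write lock cannot be acquired while any read lock is held, which is why two transactions both holding read locks and both trying to upgrade would wait on each other forever:

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

// JDK illustration of the read-to-write upgrade problem: while a read
// lock is held, tryLock() on the write lock fails rather than upgrading.
class ReadWriteUpgradeDemo {
    static boolean canUpgrade() {
        ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
        lock.readLock().lock();             // a reader is active
        // Attempted upgrade: returns false instead of blocking forever.
        return lock.writeLock().tryLock();
    }
}
```

With a blocking writeLock().lock() call instead of tryLock(), the thread would simply hang, which is the single-transaction analogue of the deadlock described above.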
Hi,
Upgrading to a newer version will work unless specified otherwise in the
release notes. Downgrading is not supported out of the box.
So in this case upgrading from 1.0 to 1.1 or 1.2.M02 will work, while
downgrading from 1.2.M02 to 1.1 or 1.0 will not. Make sure you
have a clean shutdown before
Hi,
The documentation is mostly in the source code, so I would suggest you
have a look at the org.neo4j.kernel.impl.nioneo.store package
first.
There is some information about this in the user archives (for
example http://www.mail-archive.com/user@lists.neo4j.org/msg01042.html).
I would also r
The pattern matcher requires a starting node to start the search from.
If the pattern you are trying to match is "find all persons who are
married and live together" you could do something like this:
PatternNode person1 = new PatternNode();
PatternNode person2 = new PatternNode();
PatternNode addr
Hi,
On Thu, Aug 26, 2010 at 11:26 AM, Pierre Fouche wrote:
> Hi,
>
> I have a few questions about transactions and locking in Neo4j.
>
> When I read the 'Isolation' section of the transaction wiki page
> (http://wiki.neo4j.org/content/Transactions), I understand that Neo4j
> provides a unique iso
Todd,
The size of the memory-mapped log buffer is final and is set to 1024
* 1024 * 2 bytes (see MemoryMappedLogBuffer).
What JVM version are you running?
-Johan
On Tue, Aug 24, 2010 at 11:23 AM, David Montag
wrote:
> Hi Todd,
>
> We would really appreciate it if you could file a ticket on
>
For the online-backup-based HA, the master pushes data to the slaves
(and slaves cannot accept writes). In the new version of HA (not
generally available yet) the slaves poll the master for updates (and
can accept writes). Having the master push updates to slaves may be
implemented in a later relea
terImpl.createRelationship(BatchInserterImpl.java:172)
>
>
> at RelationshipInserter.main(RelationshipInserter.java:76)
>
>
> I checked my code and it did a clean shutdown as far as I can tell. I also
> called index.optimize() immediately after instantiating it to make sure.
>
>
Hi,
Thanks for reporting this. The elementCleaned() must be called for
each element in the cache on shutdown, so this is a bug.
Regarding performance, the batch inserter implementation is there for
convenience and will not perform as well as the normal batch inserter
API. There may be a full implement
Hi,
One can use the built in locking in the kernel to synchronize and make
code thread safe. Here is an example of this:
https://svn.neo4j.org/examples/apoc-examples/trunk/src/main/java/org/neo4j/examples/socnet/PersonFactory.java
The "createPerson" method guards against creation of multiple per
1 = index.getNodes("Property1", value1).iterator().next();
> long node2 = index.getNodes("Property1", value2).iterator().next();
>
> inserter.createRelationship(node1, node2, DynamicRelationshipType.withName(
> "REL_TYPE" ), null );
>
> index.shutdown();
&
Hi,
You started up the batch inserter on a store that had not been
shut down properly. You could try starting up in normal, non batch
inserter mode and then just shutting down:
new EmbeddedGraphDatabase( storeDir ).shutdown();
That will do a "fast" rebuild of the id generators and after that the
batch inser
Hi,
I would not recommend using large amounts of different (dynamically
created) relationship types. It is better to use well-defined
relationship types with an additional property on the relationship
whenever needed. The limit is actually not 64k but 2^31, but having
large amounts of relationshi
What was the output of
"System.out.println(System.getProperty("java.class.path"));" as Tobias
asked you to do?
On Wed, Jun 9, 2010 at 1:56 PM, Batistuta Gabriel
wrote:
> However, I obtain this error :
> java.lang.NoSuchMethodError:
> org.neo4j.onlinebackup.AbstractResource.(Lorg/neo4j/kernel/impl
On Wed, Jun 9, 2010 at 1:37 PM, Batistuta Gabriel
wrote:
> Thanks.
>
> If I understand the tutorial of neo4j and your expaination, this part
> of code is correct :
>
> //create the original graph
> neo = new EmbeddedGraphDatabase(CONSTANTS.GRAPH_PATH);
> graph = ObjectGraphFactory.instance().get(n
The 1.2 release is scheduled to be released in Q4 (most likely in
November). Regarding implementations running on large graphs using
Neo4j there have been several mentions of that on the list so you
could try search the user archives
(http://www.mail-archive.com/user@lists.neo4j.org/). For example:
I just added code in trunk so block size for string and array store
can be configured when the store is created. This will be available in
the 1.1 release but if you want to try it out now use 1.1-SNAPSHOT and
create a new store like this:
Map config = new HashMap();
config.put( "string_block_
Hi,
Maybe we should add a configuration option so that ids are not reused.
Martin, you could try patch the code in
org.neo4j.kernel.impl.nioneo.store.IdGeneratorImpl:
===
--- IdGeneratorImpl.java(revision 4480)
+++ IdGenerat
Hi,
These are the current record sizes in bytes that can be used to
calculate the actual store size:
nodestore: 9
relationshipstore: 33
propertystore: 25
stringstore: 133
arraystore: 133
All properties except strings and arrays will take a single
propertystore record (25 bytes). A string or arra
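As a quick sanity check, the record sizes above can be turned into a rough store-size estimate. The counts below are made-up example inputs, and string/array data is counted per 133-byte block:

```java
// Rough store-size estimate using the record sizes quoted above
// (bytes per record): node 9, relationship 33, property 25,
// string/array block 133.
class StoreSizeEstimate {
    static final int NODE = 9, REL = 33, PROP = 25, BLOCK = 133;

    static long estimateBytes(long nodes, long rels, long props,
                              long stringOrArrayBlocks) {
        return nodes * NODE + rels * REL + props * PROP
                + stringOrArrayBlocks * BLOCK;
    }
}
```

For example, 1M nodes, 5M relationships and 10M single-record properties come out to 9 MB + 165 MB + 250 MB of store records, before any string/array blocks.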
ums/fi-FI/w7itprogeneral/thread/df935
> a52-a0a9-4f67-ac82-bc39e0585148
>
>
>
> -Original Message-
> From: user-boun...@lists.neo4j.org [mailto:user-boun...@lists.neo4j.org] On
> Behalf Of Johan Svensson
> Sent: Thursday, June 03, 2010 1:11 PM
> To: Neo4j user discu
That setup should not be a problem. Anything else you can think of
that was out of the ordinary before the task got terminated or after
(stacktraces, disk full, concurrent process trying to access the same
store files etc)?
You can contact me off-list if it would be possible for me to have a
look
Rick,
There is no ordinary way to NOT run recovery on startup if the system
crashes. The only way for that to happen is if something extraneous to
Neo4j has modified the filesystem in between runs. For example if the
logical files are removed after a crash, then starting up could lead
to "no recov
Alex,
You are correct about the "holes" in the store file and I would
suggest you export the data and then re-import it. Neo4j is not
optimized for the use case where more data is removed than added over
time.
It would be possible to write a compacting utility but since this is
not a very co
de properties
> in the TransactionData object? Since the transaction is commited (I
> guess finished), shouldn't I get an "NotInTransaction"
> exception?
>
> On 5/20/10 3:38 PM, Johan Svensson wrote:
>> Hi,
>>
>> I have not tried to reproduce t
Hi Tobias,
The problem here is that the machine has too little RAM to handle 244M
relationships without reading from disk.
What type of hard disk are you using? The low CPU usage and continuous
reads from disk indicate that cache misses are too high, resulting in
many random reads from disk. I would
act number of bytes of each store file for memory
mapped configuration.
Regards,
Johan
>
> Best regards,
> Lorenzo
>
>
> On Mon, May 24, 2010 at 11:41 AM, Johan Svensson
> wrote:
>> If a run is that long when performing traversals "-server" flag should
>>
which I read about that to have the
> better performance.
>
>
>
> I did not look into previous posts of this emailing list, if you encounter
> such an issue before, please let me know where to look for the
> explanation/solution.
>
> In any case I am
for ( TransactionEventHandler handler : this.handlers )
> @@ -55,6 +56,10 @@
> throw new RuntimeException( t );
> }
> }
> + } catch (Throwable th) {
> + th.printStackTrace();
> + throw new RuntimeException(th);
> + }
>
single
day on that machine.
-Johan
On Fri, May 21, 2010 at 1:15 PM, Lorenzo Livi wrote:
> No, I use only one jvm instance for each run.
> My run usually last something like 1 day or 15 days.
>
> On Fri, May 21, 2010 at 1:10 PM, Johan Svensson
> wrote:
>> Yes, -server is us
x27;m working on a lab environment ...
>
> Best regard,
> Lorenzo
>
>>
>>
>> On Fri, May 21, 2010 at 12:54 PM, Johan Svensson
>> wrote:
>>> Hi,
>>>
>>> If your traversals access properties I would
Hi,
If your traversals access properties I would suggest full memory mapping:
neostore.nodestore.db.mapped_memory=1G
neostore.relationshipstore.db.mapped_memory=2G
neostore.propertystore.db.mapped_memory=1700M
neostore.propertystore.db.strings.mapped_memory=1200M
neostore.propertystore.db.arrays.
uld differ/degrade when using SSDs instead of
> old standard HDDs after RAM is saturated. Anyone have numbers?
>
> On Tue, May 18, 2010 at 8:30 AM, Johan Svensson
> wrote:
>
>> Working with a 250M relationships graph you need better hardware (more
>> RAM) to get good perf
Hi,
I have not tried to reproduce this but just looking at the code I
think it is a bug so thanks for reporting it!
The "synchronization hook" that gathers the transaction data gets
registered in the call to GraphDatabaseService.beginTx() but when
using Spring (with that configuration) UserTransa
Strings and arrays are lazily loaded, so the property data will not be
read until you request it.
-Johan
On Thu, May 20, 2010 at 1:58 PM, Mattias Ask wrote:
> Hi,
>
> I was just wondering one thing. Are properties on Nodes and Relationships
> lazy loaded? I mean, if I have an AccountNode which holds
ter a crash.
Regards,
Johan
>
> Thanks in advance,
> Jawad
>
> On Tue, May 18, 2010 at 1:22 PM, Johan Svensson
> wrote:
>
>> Hi,
>>
>> Have a look at org.neo4j.kernel.impl.nioneo.xa package. To implement a
>> new persistence source start by creating
power supply died a few days ago so I'm waiting on a new one to
> arrive), so I only have 2GB of RAM. Heap is set to 1.5GB at the moment.
>
> Given my configuration is the performance I described typical?
>
> Alex
>
> On Tue, May 18, 2010 at 1:50 PM, Johan Svensson
>
Alex,
How large a heap and what configuration settings do you use? To inject
250M random relationships at the highest possible speed would require at
least an 8GB heap with most of it assigned to the relationship store.
See
http://wiki.neo4j.org/content/Batch_Insert#How_to_configure_the_batch_inserter_pr
Garrett,
This could be a bug. Could you please provide a test case that triggers
this behavior.
-Johan
On Sat, May 15, 2010 at 8:46 PM, Tobias Ivarsson
wrote:
> Create a ticket for it, I've tagged it for reviewing when I get back to the
> office, you had the great unfortune to send this right at
Hi,
Have a look at the org.neo4j.kernel.impl.nioneo.xa package. To implement
a new persistence source, start by creating new implementations of the
NeoStoreXaDataSource and NeoStoreXaConnection classes. It is no longer
possible to swap in a different persistence source using configuration
(it used to be)
Hi,
I understand the graph layout as this:
(publisher node)<--PUBLISHED_BY---(book)<--BORROWED_BY--(student)
There are other relationships between books and students
(RETURNED_BY,RESERVED_BY) but the relationship count on a book node
will still be low compared to the publisher node. Correct?
On
Hi,
Adding a literal (with an average size of around ~400 bytes if the
numbers are correct) should not result in such a big difference in
injection times.
Could you give some more information regarding setup so we can track
down the cause of this. Good things to know would be:
o java version and jvm s
On Thu, Apr 29, 2010 at 7:48 PM, Jon Noronha wrote:
> ...
>
> Is this a feature or a bug? All of the examples suggest that it's possible
> to read from the LuceneIndexBatchInserter, and indeed if I combine the code
> that creates the nodes and the index with the code that reads it into one
> file
Hi,
This code will shut down the kernel right away. Depending on timing you
may shut down the kernel while the thread pool is still executing, and
that could be the cause of your error.
If you remove the @After / kernel shutdown code, or add code in the
@Test method to wait for the thread pool to exe
Hi,
There should be no problem doing multiple modifying operations in the
same transaction. Since you are talking about statements, I take it you
are using the rdf component?
What happens if you move the delete statement before the call to Tx.success()?
Regards,
Johan
On Wed, Apr 14, 2010 at 5:5
On Tue, Apr 20, 2010 at 10:42 AM, Erik Ask wrote:
> Tobias Ivarsson wrote:
>> The speedup you are seeing is because of caching. Items that are used are
>> loaded into an in-memory structure, that does not need to go through any
>> filesystem API, memory-mapped or not. The best way to load things i
Code for this is now in trunk (neo4j-kernel 1.1-SNAPSHOT).
-Johan
On Tue, Apr 13, 2010 at 3:00 PM, Johan Svensson wrote:
> ...
> On Mon, Apr 5, 2010 at 8:15 AM, Marko Rodriguez wrote:
>> Hi,
>>
>> What is the timeframe for providing a GraphDatabaseService tha
Hi Marko,
We had a discussion around this today and the conclusion was that we
will patch 1.1 very soon to support this. If there are no problems
(something we didn't think about) I would expect this to be in trunk
before the end of the month.
Regards,
-Johan
On Mon, Apr 5, 2010 at 8:15 AM, Marko Rodriguez
Hi,
I had a look at this and cannot figure out why -1 is returned.
When running the kernel in normal (write) mode, the return value for the
number of ids in use will only be correct if all previous shutdowns
have executed cleanly. This is an optimization to reduce the time
spent in recovery rebuilding
Hi,
The read-only version is not faster on reads than a writable
store. Internally the only difference is that we open files in read-only
mode.
The reason you get the error is that your OS does not support placing
a memory-mapped region on a file (opened in read-only mode) when the
region maps
Hi Patrick,
Thanks for the feedback. I will have a look at this and implement
handling for disconnection and expiration of sessions.
Regarding the GC issues, we are well aware of these (hopefully the new
"garbage first" or G1 GC will solve them). As you say, the
concurrent mark-sweep GC h
Hi,
Thanks for finding this since it is a bug. I have committed a fix for
it in kernel trunk.
Regards,
-Johan
On Fri, Mar 12, 2010 at 6:36 PM, Niels Hoogeveen
wrote:
>
> While loading data into the Neo4J database I got the following exception:
>
> Exception in thread "main"
> org.neo4j.kernel.
Node/relationship setProperty( key, null ) will throw
IllegalArgumentException, so you have to use the removeProperty( key )
method instead.
-Johan
On Wed, Mar 3, 2010 at 4:15 PM, Rick Bullotta
wrote:
> Perhaps a stupid question, but is setting a property to null effectively the
> same as deleting a prop