Cypher, being the "SQL for Neo4j", can be used to update nodes.

With any relational DB, even a crappy one like MySQL, I have never had a 
problem updating every single row in a table with one query, even when the 
table contains millions of rows.
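To make that concrete, the Cypher equivalent I have in mind is a single 
statement that touches every node of one label, the graph counterpart of an 
unfiltered UPDATE statement. Something like this sketch (the Person label 
and the checked property are just placeholders I made up):

  // Hypothetical example: set a flag on every Person node in one statement
  MATCH (p:Person)
  SET p.checked = true;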

(This is just an example; the same goes for creating a million links 
between nodes, based on a query that selects nodes of a different type 
where some properties match.)
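For instance, something along these lines, where the labels, property names 
and relationship type are made up purely for illustration:

  // Hypothetical example: link every Author to every Book whose
  // authorName property matches the author's name
  MATCH (a:Author), (b:Book)
  WHERE a.name = b.authorName
  CREATE (a)-[:WROTE]->(b);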

So, I ask myself, why does Neo4j go out of memory when I want to do 
something like that?
It seems as if it tries to load every node into memory before doing 
anything, but that can never be a future-proof solution, since you can't 
expect to always have enough memory available to hold your whole database.

So I am wondering what I can expect from future versions of Neo4j. Is this 
something that will be solved, so that I can run and manipulate a 
reasonably sized test database on my laptop with limited memory, like I can 
with a Postgres or MySQL DB?

(I am not really interested in the various 'workarounds' that people may 
have found to circumvent these limitations.)

