Hi,
Really nice to hear that you could achieve this! Do you have any
information regarding when the 1.2 branch of uDig will be stable? (it
uses Eclipse 3.4)
I'm working with Ahmed on the AWE project, and keeping an eye on the uDIG
1.2 work, so we can switch over when they're ready. Last I
Does the line:
Caused by: java.nio.channels.OverlappingFileLockException
support this possibility? Seems likely?
On Sun, Jun 7, 2009 at 10:16 AM, Anders Nawroth and...@neotechnology.com wrote:
Hi Ahmed!
java.lang.RuntimeException: Could not create data source
nioneodb[nioneodb
Thanks for the great advice Peter,
I think all routes point to one commonality, Neoclipse needs to be modified
to not include neo4j in the same plugin, but access it from either another
plugin (to be accessed by several plugins), or the OSGi service you suggest.
Personally I think the OSGi
Is this true even if they are run in different threads?
It implies that a graphic application cannot run batch inserts, because it
is almost certain that other elements of the application will have
NeoService open to that database.
We certainly intend to use the 'import your data once' concept,
I had a JVM crash and after starting the application again got the following
error:
Error creating the view
Reason: Position[1615977] requested for operation is high id[1568046] or
store is flagged as dirty[true]
It almost sounds like the neoclipse database graph view was trying to access
the
Hi,
Recently Johan pointed out to me that merging databases was not something
easily added to the core neo4j because there would be clashes of ids.
However, I have thought a bit more about it, and I think there is a way this
can work (in theory) for special cases.
Here is my example:
- We
, Craig Taverner cr...@amanzi.com wrote:
Hi,
Recently Johan pointed out to me that merging databases was not something
easily added to the core neo4j because there would be clashes of ids.
However, I have thought a bit more about it, and I think there is a way
this
can work (in theory
I'd vote for a garbage collector too. I often have code that deletes large
trees of nodes, and it takes time, but I'd rather disconnect the tree, load
my new data and then delete the old data later (during idle time). Obviously
this can be done at application level, but it would be a nice
to apply the log, then append any running transactions to the new log
This might be the heart of the problem. Dealing with running transactions. I
guess, for this reason, there is good merit in making all database access
that does not explicitly focus on writing to the database, use readonly
I could imagine writing (synchronizing) all data important for BI to a
RDBMS, but use Neo as the main DB for live data
We have also built an eclipse RCP app for domain data with some BI. In our
case it is all client side (eclipse RCP), but we have at least one customer
asking for a server
I had an answer for this also, but see that Andreas and Anders have both
answered with the two mechanisms I had tried:
- lucene
- custom index
But I thought I'd add a little on the custom index idea, because I think
that can also provide fast sorting, if you are usually requiring answers
We also moved to the 'create on demand' approach largely because we often
make enhancements to the application and required data structures, but run
on older databases, so it is nice if the app can 'update' the database.
Doesn't always work, but reduces the number of times we need to delete the
Personally I think that since the direction is built into the relationship,
and the concept of a relationship is already a 'verb', I think both the 'IS'
(or 'HAS') and the 'OF' are redundant. I vote for:
father --(CHILD)-- son
or alternatively
father --(PARENT)-- son
Clean, simple,
I suspect many potential use cases can not only be resolved without
resorting to saving objects, but might be better without them. I had one
such case recently, where I wanted to save Date objects, and actually wrote
serialization code to save as byte[] to neo4j. Then it dawned on me that my
Dates
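The realization hinted at above is that a Date is just a long underneath, so no byte[] serialization is needed. A minimal sketch in plain Java (the method names are illustrative, not part of any Neo4j API):

```java
import java.util.Date;

public class DateAsPrimitive {
    // Store a Date as its epoch-millisecond long instead of a serialized byte[]
    // (a property store that handles primitive longs needs nothing more).
    static long toProperty(Date date) {
        return date.getTime();
    }

    static Date fromProperty(long millis) {
        return new Date(millis);
    }

    public static void main(String[] args) {
        Date original = new Date(1275000000000L);
        long stored = toProperty(original);
        System.out.println(original.equals(fromProperty(stored))); // prints "true"
    }
}
```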
...@neotechnology.com wrote:
Hi folks,
I have come across the problem of importing data from XML files a lot
lately. There are 2 approaches that emerged after discussions with
Craig Taverner:
1. Write a generic utility: take the XML DOM tree and directly put it
as-is into Neo4j, then later
I'd like to comment on the question of indexing event logs. We are also
dealing with event logs and we query them based on time windows as well as
other properties. In our case they are also located on a map, and
categorized using event types. So we have three types of indexes in use:
- time
: user-boun...@lists.neo4j.org [mailto:user-boun...@lists.neo4j.org]
On
Behalf Of Craig Taverner
Sent: Thursday, December 10, 2009 10:51 AM
To: Neo user discussions
Subject: Re: [Neo] Noob questions/comments
I'd like to comment on the question of indexing event logs. We are also
dealing
+1 for wave, the chat can be edited into meeting notes, or even final
documentation (during the meeting). Very flexible.
I also have invitations if people need.
On Wed, Dec 30, 2009 at 11:40 AM, Laurent Laborde kerdez...@gmail.com wrote:
On Wed, Dec 30, 2009 at 11:37 AM, Peter Neubauer
Hi,
I was wondering, how much space does a property name occupy in the database?
I mean if I use a name like 'x' or one like
'supercalifragilisticexpialidocious', and create that same property on 100k
nodes, would the longer one cause a much larger database, or are the
property names looked up in
of length of the
property key.
We will add some information about this on the wiki.
Regards,
Johan
On Sat, Jan 2, 2010 at 4:34 PM, Craig Taverner cr...@amanzi.com wrote:
Hi,
I was wondering, how much space does a property name occupy in the
database?
I mean if I use a name like 'x
I think a generic solution would mean a generic numerical index.
I wrote an index recently that is very similar in principle to the timeline
index Tobias mentioned below, except I allow multiple dimensions and
indexing over any numerical primitive type. Then we get to the next point,
combining
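A minimal sketch of the bucket computation such a numerical index rests on (pure Java; the step sizes and method names are assumptions for illustration, one step per dimension):

```java
public class MultiDimIndexKey {
    // Map a point in N numeric dimensions to a discrete bucket key,
    // using a configurable step size per dimension.
    static long[] bucketKey(double[] values, double[] steps) {
        long[] key = new long[values.length];
        for (int i = 0; i < values.length; i++) {
            key[i] = (long) Math.floor(values[i] / steps[i]);
        }
        return key;
    }

    public static void main(String[] args) {
        long[] key = bucketKey(new double[]{12.7, 3.2}, new double[]{10.0, 1.0});
        System.out.println(java.util.Arrays.toString(key)); // prints "[1, 3]"
    }
}
```

Points whose keys match land in the same bucket, so a range query only needs to visit the buckets overlapping the query window.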
+1
On Wed, Jan 6, 2010 at 9:57 PM, Rick Bullotta
rick.bullo...@burningskysoftware.com wrote:
It's a relatively minor thing to fix in our code. I would suggest
reconsidering the retro façade, however - it really doesn't buy you much
and requires resources to keep it up-to-date, fattens up
If you are focusing on changes of nodes, why not just archive the changes as
a new sub-graph? This allows you to maintain everything in a single database
(allowing simpler code and more possibilities), and also allows you to
manage changes of different parts of the graph independently, so you don't
I was wondering if the neo-shell or the neo4j.rb in IRB would solve this
requirement (easily creating or loading some initial graph). I have not
played much with the shell, but know that it has commands for making nodes
and relationships. But I think it is best for interactive work, and I think
it
Is there any recommendation on when to use additional node/relationship
connections versus additional node properties?
It depends on the use case, but go with relationships if it enriches the
graph with information that makes sense.
And this related to another aspect, based on the balance
Hi all,
I've been looking at Martin Kleppman's Scala
projects (http://wiki.neo4j.org/content/Scala) for neo4j to see if they
can be used to help place neo4j behind a lift
application, but I do not (yet) see any sign that these are for lift at all.
I'm wondering if anyone knows how to get neo4j as the
:
LiftRules.unloadHooks.prepend(() => Neo4j.db.shutdown)
On Mon, Mar 1, 2010 at 1:50 PM, Craig Taverner cr...@amanzi.com wrote:
Hi Christophe,
The code you sent seems great for wrapping each request in a transaction.
But I still don't know how to get round lift's ORM or schemifier
mechanism
If the total number of players scores changed was a relatively small part of
the list, then re-linking those on each change might still be efficient, but
since you indicate that the number might be high, Rick's b-tree approach
works better. In my opinion you have two options:
- Lucene - you can
I'm uncertain about one ambiguity in your model: you are able to find
messages through FOLLOWS and IS_VISIBLE_BY. These will give two different
sets, and my first impression was that FOLLOWS gives you the right answer.
In other words you want to query for 'all messages by users I follow'? In
that
couldn't find this information in
the docs. If there's something I should be reading before posting these
questions, please point me to it.
Thanks!
Lincoln
On Wed, Mar 17, 2010 at 7:06 AM, Craig Taverner cr...@amanzi.com wrote:
I'm uncertain about one ambiguity in your model, you are able
And in the java API you can also iterate over the
GraphDatabaseService.getAllNodes result. Could be expensive though.
On Wed, Mar 17, 2010 at 5:14 PM, Lorenzo Livi lorenz.l...@gmail.com wrote:
Hi,
foreach KEY in the lucene index:
    retrieve the node n
    if (degree(n) == 0):
        found!
should work
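The loop above can be sketched in plain Java, with a Map standing in for the lucene index and a degree table standing in for counting each node's relationships (all names are illustrative):

```java
import java.util.*;

public class IsolatedNodes {
    // The key set plays the role of the lucene index; the degree map plays
    // the role of counting a node's relationships.
    static List<String> findIsolated(Set<String> indexedIds, Map<String, Integer> degree) {
        List<String> found = new ArrayList<>();
        for (String id : indexedIds) {
            if (degree.getOrDefault(id, 0) == 0) {
                found.add(id);
            }
        }
        return found;
    }

    public static void main(String[] args) {
        Map<String, Integer> degree = new HashMap<>();
        degree.put("a", 2);
        degree.put("c", 0);
        Set<String> indexed = new TreeSet<>(Arrays.asList("a", "b", "c"));
        System.out.println(findIsolated(indexed, degree)); // prints "[b, c]"
    }
}
```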
, 2010 at 12:41 PM, Craig Taverner cr...@amanzi.com wrote:
Hi Lincoln,
So it sounds like you don't need the IS_VISIBLE relations after all. The
traverser works by following all relationships of the specified types and
directions from each current node (as you traverse, or walk the graph
or is the
approach totally different?
Thanks,
Lincoln
On Wed, Mar 17, 2010 at 12:41 PM, Craig Taverner cr...@amanzi.com
wrote:
Hi Lincoln,
So it sounds like you don't need the IS_VISIBLE relations after all. The
traverser works by following all relationships of the specified types
for your help!
On Wed, Mar 17, 2010 at 5:43 PM, Craig Taverner cr...@amanzi.com
wrote:
I guess I could say that the approach is totally different, but in
reality
you start at the same point, working out what query you want. But
after
that
things change. Let's consider two cases
know much about it, but
seems super useful for those who like that type of graph searching. However,
is that just typical Ruby?
Take care,
Marko.
http://markorodriguez.com
On Mar 17, 2010, at 3:00 PM, Craig Taverner wrote:
This is a cool idea. Seems a bit like the pattern matching stuff
Why not use a graph based index? Properly structured it should provide fast
reads all the time (properly structured kind-of means you have the most
likely reads in cache, which is dependent on the type of data).
On Tue, Mar 23, 2010 at 10:56 AM, Peter Neubauer
neubauer.pe...@gmail.com wrote:
Hi,
I'm proposing a project for *Google Summer of Code 2010* to add Neo4j as a
fully supported spatial database to the open source desktop application
uDIG, and the underlying GIS library Geotools. See the wiki pages on the
uDIG and Neo4j wikis:
-
+1 from me too. I love visualization, use neoclipse a lot, and think large
scale visualization like this is very cool. Would be interesting also to see
if there is any potential tie in with the Neo4j Spatial project which
typically works with reasonably large datasets also?
On Fri, Mar 26, 2010
Hi Leandro,
Thanks for showing an interest in the Neo4j-uDIG project. Your work on the
distributed rtree sounds very interesting and very appropriate to the
project we have in mind. I would love to discuss this with you and introduce
you to some of the ideas we are working on. I will contact you
On Wed, Apr 7, 2010 at 9:52 AM, Tobias Ivarsson
tobias.ivars...@neotechnology.com wrote:
Whow. that's a mouthful, Python to the rescue ;)
Here is how I'd color this bike shed http://0099cc.bikeshed.org/:
I've not read about the bikeshed in a decade:
A cool off timer will prevent you from
This is a very interesting discussion because I've been working on a
solution I call a 'composite index' that, for some datasets, should return
the correct results in one simple traverse, with no temporary memory used,
but first my high level view of this:
I think there are several types of
I think this is a very domain specific question. If your data is naturally
very disjoint, then there can be value in having separate databases, and
benefit from increased total database size, easy clustering/partitioning,
etc. But, as you say below, you cannot (or at least not easily) traverse the
Yes, I understood that. But I think you should be explicit about which
relationship types you *expect* to remove. This means that if the node had
relationships of types that you didn't expect those will not be removed and
an exception will be thrown when you commit the transaction. This is a
I think the way it's currently implemented is great for power users, but
obviously not always that easy to find for first time users. I'm a believer
in coding a couple of different ways for a user to find the same capability,
to suit a wider audience :-)
On Sun, Apr 25, 2010 at 11:55 PM, Anders
The example use cases are:
1a) A web page round trip where subsequent web pages display subgraphs as
the user drills down into the levels. I would want to be able to find a
subgraph by ID directly when working within a graph. Ex: A leaf node on
the
last page would become the top node for
Hi guys,
I've applied to present Neo4j Spatial (Neo4j as a true GIS database for
mapping data) at the FOSS4G conference in September. To increase the chances
of the presentation getting accepted, it helps to get community votes. So,
if you think Neo4j Spatial is a cool idea, vote for it :-)
, Raul Raja Martinez raulr...@gmail.com
wrote:
voted for it
2010/5/4 Craig Taverner cr...@amanzi.com
Hi guys,
I've applied to present Neo4j Spatial (Neo4j as a true GIS database for
mapping data) at the FOSS4G conference in September. To increase the
chances
Hi All,
I've just announced the upcoming Google Summer of Code project on the uDig
Blog:
http://udig-news.blogspot.com/2010/05/google-summer-of-code-2010.html
In two weeks time the Google Summer of Code for 2010 starts, and this year
uDig is taking part with an exciting project riding the
How can I get all the 2000 - 2001, Silver, SAAB?
This is a query based on multiple properties and I think there are three
options for this:
- Identify limited property and traverse on that, testing for the others
in the traversal. I think this is what Peter was suggesting. So, for
The problem is that outside of the test it's strange to surface the Neo
transactions outside of the persistence layer. This is not really a neo
issue but since the difference is so large, I was wondering if there is a
nice way to do this? I understand that we can't nest transactions.
Perhaps
A quick comment about transaction size. I find a good speed/memory
balance at a few thousand writes per transaction. More than that
improves performance with diminishing returns.
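The few-thousand-writes-per-transaction advice can be sketched with a generic batcher; commit() stands in for finishing the current transaction and beginning a new one, and the batch size of 1000 is just an illustrative value in the suggested range:

```java
public class BatchingWriter {
    // Commit every `batchSize` writes; stands in for tx.success()/tx.finish()
    // followed by beginning a fresh transaction (names are illustrative).
    final int batchSize;
    int pending = 0;
    int commits = 0;

    BatchingWriter(int batchSize) { this.batchSize = batchSize; }

    void write() {                 // one node/relationship/property write
        pending++;
        if (pending == batchSize) commit();
    }

    void commit() {                // end the current batch, start the next
        commits++;
        pending = 0;
    }

    void close() {                 // flush the final partial batch
        if (pending > 0) commit();
    }

    public static void main(String[] args) {
        BatchingWriter w = new BatchingWriter(1000);
        for (int i = 0; i < 2500; i++) w.write();
        w.close();
        System.out.println(w.commits); // prints "3"
    }
}
```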
2010/5/17, Mattias Persson matt...@neotechnology.com:
2010/5/17 Kiss Miklós kissmik...@freemail.hu
I enabled the
I've thought about this briefly, and somehow it actually seems easier (to
me) to consider a compacting (defragmenting) algorithm than a generic
import/export. The problem is that in both cases you have to deal with the
same issue, the node/relationship ID's are changed. For the import/export
this
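The id-remapping that both cases share can be sketched as a translation table applied while copying edges (pure Java; the array-of-edges representation is just for illustration):

```java
import java.util.*;

public class IdRemap {
    // Assign fresh, compact ids in visit order and translate each edge's
    // endpoints through the old-to-new table.
    static long[][] remapEdges(long[][] edges) {
        Map<Long, Long> oldToNew = new LinkedHashMap<>();
        long[][] out = new long[edges.length][2];
        for (int i = 0; i < edges.length; i++) {
            out[i][0] = oldToNew.computeIfAbsent(edges[i][0], k -> (long) oldToNew.size());
            out[i][1] = oldToNew.computeIfAbsent(edges[i][1], k -> (long) oldToNew.size());
        }
        return out;
    }

    public static void main(String[] args) {
        long[][] remapped = remapEdges(new long[][]{{100, 205}, {205, 300}});
        System.out.println(Arrays.deepToString(remapped)); // prints "[[0, 1], [1, 2]]"
    }
}
```

The same table would also have to be applied to anything else referring to ids, which is what makes both the compaction and the export/import versions of the problem hard.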
a properties file, and I used default JVM parameters.
There is at least one obvious downside to this though, and that is that you
pollute the dataset with GID properties.
Alex
On Wed, Jun 2, 2010 at 5:53 PM, Craig Taverner cr...@amanzi.com wrote:
I've thought about this briefly
Here is a crazy idea that probably only works for nodes. Don't actually
delete the nodes, just the relationships and the node properties. The
skeleton node will retain the id in the table preventing re-use. If these
orphans are not relevant to your tests, this should have the effect you are
Where do we sign up :-)
On Thu, Jun 3, 2010 at 1:19 PM, Peter Neubauer
peter.neuba...@neotechnology.com wrote:
Hi all,
Tim Anglade has started off a cool meetup series,
http://nosqlsummer.org with meetup places all over the world. The
Öresund gang is providing space and beer in Malmö, so if
Of course I can suggest the various data models in AWE, including the
spatial ones. I put a number of screenshots into the presentation I sent you
in late Feb, so you have lots of material to source.
On Tue, Jun 1, 2010 at 7:06 PM, Peter Neubauer
peter.neuba...@neotechnology.com wrote:
Hi all,
Seems that the string store is not optimal for the 'common' usage of
properties for names or labels, which are typically 5 to 20 characters long,
leading to about 5x (or more) the space actually needed. By 'names or
labels' I mean things like username, tags, categorizations, product names,
Is there a specific constrain on disk space? Normally disk space isn't
a problem... it's cheap and there's usually loads of it.
Actually for most of my use cases the disk space has been fine. Except for
one data source, that surprised me by expanding from less than a gig of
original binary
Hi Davide and others,
I'm trying to convert the current RTree implementation to use the
BatchInserter to improve performance during bulk loads. But of course there
is a serious catch. The BatchInserter disallows deletion actions and the
RTree needs to delete relationships when splitting the
Hi all,
The list of presenters for the FOSS4G conference in Barcelona in September
is now available at http://2010.foss4g.org/presentations_gen_sel.php
I see a few that are of interest to nosql people (us):
- Geospatial Indexing with *MongoDB* - Kristina Chodorow
- GeoCouch: A spatial
+1 for OSM import into Neo4j Spatial
(in fact +1 for any import source that uses some kind of temporary foreign
key, eg. relational table dumps, xml structures, etc. - lucene is useful for
indexing the external foreign key during import, and then dropping it
entirely afterwards)
On Mon, Jun 21,
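The temporary-foreign-key pattern described above can be sketched in plain Java, with a HashMap standing in for the lucene index (all names are illustrative):

```java
import java.util.*;

public class ForeignKeyImport {
    // Temporary index from external id (e.g. an id from a relational dump or
    // XML file) to internal node id; dropped entirely once the import completes.
    final Map<Long, Long> externalToInternal = new HashMap<>();
    private long nextInternalId = 0;

    long importNode(long externalId) {
        long internal = nextInternalId++;
        externalToInternal.put(externalId, internal);
        return internal;
    }

    long resolve(long externalId) {   // used while wiring up relationships
        return externalToInternal.get(externalId);
    }

    void dropIndex() {                // the foreign key is not needed afterwards
        externalToInternal.clear();
    }

    public static void main(String[] args) {
        ForeignKeyImport imp = new ForeignKeyImport();
        for (long ext : new long[]{101L, 202L, 303L}) imp.importNode(ext);
        System.out.println(imp.resolve(101L) + " -> " + imp.resolve(303L)); // prints "0 -> 2"
        imp.dropIndex();
    }
}
```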
A side comment, since I think indexing relationships with lucene might be
good, but think there might be alternatives for your current example.
You said that the relationship property is a float from 0 to 1, so you
cannot use relationship types, but actually, when you consider that any
index is
if a node has many, many neighbours. But if the
relationship index would be for the entire graph (not per node) that
wouldn't really help, would it?
2010/6/21 Craig Taverner cr...@amanzi.com:
A side comment, since I think indexing relationships with lucene might be
good, but think there might
I like this new API. It is a tiny first step towards making the Java API a
little more DSL-like (1% of the way to the current scala and ruby API's :-)
My vote is actually for the includes() approach only because *and* is such a
widely used keyword. However, a quick test in Ruby showed that while
Why not have includes() return the same instance with internal state
changed? Then the various call options are equivalent.
On Jun 23, 2010 6:41 PM, Tobias Ivarsson
tobias.ivars...@neotechnology.com wrote:
On Wed, Jun 23, 2010 at 6:10 PM, Craig Taverner cr...@amanzi.com wrote:
(I also noticed
it is
ok
to reuse the same object and mutate the internal state. It's something to
think about...
-tobias
On Wed, Jun 23, 2010 at 7:10 PM, Craig Taverner cr...@amanzi.com wrote:
Why not have includes() return the same instance with internal state
changed, then the various call options
This is an interesting problem. I'm not totally sure I've followed the
entire discussion, but here are my thoughts.
Your model is divided into two levels:
- Core model, with routes that specify the bus routes, times between
stops and locations. For example we can say that bus 5 stops at
, 2010 at 8:47 PM, Craig Taverner cr...@amanzi.com wrote:
Mutable objects have their issues, and yet this is a common and popular
paradigm in DSL design. I guess however the point with a DSL is that it
is
so simple and limited that it compensates for the potential risks.
Similar
to the dynamic
You mentioned that you also do queries or searches as part of the process.
If the graph is growing in complexity, perhaps the queries are getting
slower?
On Mon, Jul 5, 2010 at 10:16 AM, Qiuyan Xu
qiuyan...@mailbox.tu-berlin.de wrote:
I've just checked my code. It turns out that tx.finish() is
The way I view this point is that you query what you create. In other words,
you should create a graph structure to suit the kinds of queries you plan
to make. So if you are going to come back to the graph later on and have no
idea how to find a starting point for the traversal, then be sure to
the
graph is large? When the folder of the graph is about 300M, 1G memory
seems to be not enough for the program.
Cheers,
Qiuyan
Quoting Craig Taverner cr...@amanzi.com:
You mentioned that you also do queries or searches as part of the
process.
If the graph is growing in complexity
Thanks to Anders and Peter for getting the build up and running :-)
The current OSM imports do not yet expose themselves properly as indexed
layers, but we are very close to closing that gap. I'm nearly finished a
refactoring that respects the fact that one OSM dataset actually represents
database.
http://www.thoughtmade.com - Scandinavia's coolest Bring-a-Thing party.
On Wed, Jul 7, 2010 at 7:51 PM, Craig Taverner cr...@amanzi.com wrote:
Thanks to Anders and Peter for getting the build up and running :-)
The current OSM imports do not yet expose themselves properly as indexed
Even without the new traversal framework, the returnable evaluator has
access to the current node being evaluated and can investigate its
relationships (and even run another traverser). I'm not sure if nested
traversing is a good idea, but I certainly have used methods like
getRelationships
Your description seems to indicate that each word only belongs to one
category, and if you want the category count, you will always get 1. But
your code indicates that:
- You are counting the categories for similar words also
- You are searching to depth 2 for both similar and category
I think I could add a layer for all points, but I do not see the value. In
fact the current layer with all line-segments (all ways) is only an interim
layer so that something can be shown. I think I really want the meaningful
layers (streets, rivers, counties, etc.) and once those work I'm tempted
Many features are just points, so we need to add them to the spatial index.
Maybe it is not useful to add all nodes as points, but I think we should
at least add every node which isn't in a Way.
Actually, my plan is to create an index for each identifiable layer. This
means geometries are
Also, have you tried to open big graphs with Gephi? I recently tried to
open the Neo4j db generated by the test of
http://github.com/neo4j/neo4j-spatial (run mvn clean test), but
things slowed down to a trickle. Not sure what the limit is for
Gephi, and how to open/surf nodes with a depth-1
I'm not worried that nodes will become related, I just wanted to make sure
that, because they aren't already related, this wasn't going to cause a
problem
before I implement the Timeline - I've already spent longer than I'd
intended on
this and don't want to reimplement it if it's not
I think the key point of Peters request is to separate the 'builder' from
the 'traverser'. Mattias argument appears to state that if the builder and
traverser are the same class (series of immutable instances of the same
class), you have more flexibility in refactoring, because you don't have to
Mapping property values to a discrete set, and referring to them using their
'id' is quite reminiscent of a foreign key in a relational database. Why not
take the next step and make a node for each value, and link all data nodes
to the value nodes? This is then a kind of index, a category index.
I
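The value-node idea can be sketched in plain Java, with maps standing in for the value nodes and their incoming links (all names are illustrative):

```java
import java.util.*;

public class CategoryIndex {
    // One 'value node' per distinct property value, each linking back to the
    // ids of the data nodes that carry that value.
    static Map<String, List<Long>> build(Map<Long, String> valueByNodeId) {
        Map<String, List<Long>> index = new TreeMap<>();
        for (Map.Entry<Long, String> e : valueByNodeId.entrySet()) {
            index.computeIfAbsent(e.getValue(), k -> new ArrayList<>()).add(e.getKey());
        }
        return index;
    }

    public static void main(String[] args) {
        Map<Long, String> colours = new TreeMap<>();
        colours.put(1L, "red");
        colours.put(2L, "blue");
        colours.put(3L, "red");
        System.out.println(build(colours)); // prints "{blue=[2], red=[1, 3]}"
    }
}
```

Looking up all data nodes for a value is then a single hop from the value node, rather than a scan.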
Depending on just how many are to be deleted, a common pattern is to collect
all 'to-be-deleted' during the main iteration, and then delete in a second
iteration over the collection. Especially if you only keep the relationship
ids, the memory load might be manageable. You said 'several hundred
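The collect-then-delete pattern can be sketched in plain Java, with a Map standing in for the relationship store (names are illustrative):

```java
import java.util.*;

public class TwoPassDelete {
    // Pass 1: during the main iteration, collect only the ids to delete.
    static List<Long> collectDoomed(Map<Long, String> relsById, String doomedType) {
        List<Long> ids = new ArrayList<>();
        for (Map.Entry<Long, String> e : relsById.entrySet()) {
            if (e.getValue().equals(doomedType)) ids.add(e.getKey());
        }
        return ids;
    }

    // Pass 2: delete by id in a second iteration over the collected list.
    static void deleteAll(Map<Long, String> relsById, List<Long> ids) {
        for (long id : ids) relsById.remove(id);
    }

    public static void main(String[] args) {
        Map<Long, String> rels = new HashMap<>();
        rels.put(1L, "KNOWS");
        rels.put(2L, "OBSOLETE");
        rels.put(3L, "OBSOLETE");
        deleteAll(rels, collectDoomed(rels, "OBSOLETE"));
        System.out.println(rels.size()); // prints "1"
    }
}
```

Keeping only ids in the collected list keeps the first pass cheap, and the second pass never mutates the structure it is iterating over.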
Limit meaning in this run? Or at all times? The first is ok, the
second not. I guess you mean exiting after I have computed already 10
new relationships right?
Well, actually I was aiming for an algorithm that would converge on a stable
state. This means, if no new information is being added
Mmm we cannot limit the number of relationships in this app. One the
most important features
is that we'll keep looking for good matches for the user.
Since the trimming algorithm will entirely delete low scoring relationships,
the search algorithm will also keep finding them again, and if
uDig 1.2 is on Geotools 2.6, but Jody tells me that he got it working on
Geotools 2.7 last week (so we should see a 1.2.1 soon), and so a move of
Neo4j to the new Geotools should be a good move to make. I don't have time
to look at this myself right now, but you are certainly more than welcome to
the mapnik2geotools
generated SLD's as a starting point, since I'm sure you did a great job with
that :-)
Regards, Craig
On Thu, Oct 7, 2010 at 4:03 AM, David Winslow cdwins...@gmail.com wrote:
Hi all,
My name's David Winslow and I talked to Craig Taverner a few weeks ago at
FOSS4G about Neo4j-spatial
I found the correct geotools jar. It is gt-render.
On Thu, Oct 7, 2010 at 10:18 AM, Craig Taverner cr...@amanzi.com wrote:
Hi David,
I think the problem on my side was simply lack of experience with SLD's.
Since FOSS4G I've not spent any time on SLD's either, but instead on some
updates
in ('attraction', 'camp_site', 'caravan_site',
'picnic_site', '
ORDER BY z_order, way_area DESC;
--
David Winslow
On Thu, Oct 7, 2010 at 3:52 PM, Craig Taverner cr...@amanzi.com wrote:
Hi David,
The most important thing you can do to improve performance is to ensure
that
your style
Hi all,
I was playing with code swarm for another project and thought I would see
what the Neo4j development looked like. So I extracted the SVN logs from the
public repository and made the following video:
http://www.youtube.com/watch?v=sp2FhjH696g
The video looks much better in the original
On Thu, Oct 21, 2010 at 10:20 PM, Craig Taverner cr...@amanzi.com
Hi Axel,
GeoJSON and other GIS formats are planned to be supported in Neo4j Spatial.
In fact to support export to GeoJSON on the current code would be very easy,
I believe. I hope to get 'round to it soon. However, should you want to give
it a try now, I would be more than willing to give advice
Hi Axel,
I was away on a trip for a couple of days and did not see this discussion
until now. I see several people have had good advice for you already. I
think I can add to the discussion with some suggestions.
Your original description says two things of interest, there is a GeoJSON
document
I think this is because the storage slots of the old relationships are
not compacted but will be reused for new relationships in the future.
I am curious, are there any 'developer tricks' for influencing when old
slots are reused? For example, instead of adding the new relationships
A db restart is required to start reusing freed ids? Seems a bit drastic. I
hope it happens sooner than that.
___
Neo4j mailing list
User@lists.neo4j.org
https://lists.neo4j.org/mailman/listinfo/user
Hi Axel,
Adding neo4j spatial to the picture should enable the CMS to handle geo
data objects without adding too much complexity (I hope).
OK. I think I'm beginning to get an idea of what you want. You have a CMS,
and some of the data is geolocated, in the sense that you know what
countries it
I can make a few comments about modeling this kind of data.
While it is an option to use type=trunk properties to type your nodes, and
in fact I do that a lot myself, another very common approach is to deduce
the type from the incoming relationships. Then everything is about the graph
structure.
thinking to do now. Sounds like I at least need to keep my
type property. Thanks again for your insights. If you (or anyone else)
has
more ideas, please pass them along!
Thanks!
Andrew
On Fri, Nov 5, 2010 at 12:11 PM, Craig Taverner cr...@amanzi.com wrote:
I can make a few comments about
It's been 13 years since I left Chemistry, but I think I have some residual
interest in the subject :-)
My two cents worth for this problem is that it is possible to model
everything in one single graph:
- Store both the chemical structures and the relationships as graphs,
differentiating
Thanks for this very detailed answer. Some comments: I'm a novice at this
(Graph, Graph database, traversal, the math behind it). I'm more from the
science and sql corner. I have heard of lucene but never used it myself,
so that's new too.
Well, I did my masters in Chemistry and worked with SQL for
Hi,
With all the discussions going on right now around Neography and using Ruby
to access the REST API, I decided to add a new wiki page on the Neo4j wiki
called 'Using the Neo4j Server with Ruby'
(http://wiki.neo4j.org/content/Using_the_Neo4j_Server_with_Ruby).
The name parallels the related 'Using