Paul,
thanks for the info! I am down to the same two candidates, BerkeleyDB or JDBM.
Currently I am trying to get someone from the BDB team to configure the index
with me, so I can get a real impression of the performance (I am just not
able to tweak things right myself).

Meanwhile - is there anyone out there with good knowledge of
BerkeleyDB who could help for a couple of hours to configure the Java
Edition to perform well in a Neo4j setup?
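
To make it concrete, this is roughly the setup I have been poking at - a sketch
only, and the cache size and deferred-write choices below are exactly the
guesses I would like an expert to sanity-check:

    import java.io.File;
    import com.sleepycat.bind.tuple.LongBinding;
    import com.sleepycat.bind.tuple.StringBinding;
    import com.sleepycat.je.Database;
    import com.sleepycat.je.DatabaseConfig;
    import com.sleepycat.je.DatabaseEntry;
    import com.sleepycat.je.Environment;
    import com.sleepycat.je.EnvironmentConfig;

    public class BdbStringToNodeId {
        public static void main(String[] args) throws Exception {
            EnvironmentConfig envConfig = new EnvironmentConfig();
            envConfig.setAllowCreate(true);
            envConfig.setTransactional(false);           // bulk load, skip txn overhead
            envConfig.setCacheSize(512L * 1024 * 1024);  // 512 MB - pure guesswork

            Environment env = new Environment(new File("bdb-index"), envConfig);

            DatabaseConfig dbConfig = new DatabaseConfig();
            dbConfig.setAllowCreate(true);
            dbConfig.setDeferredWrite(true);             // flush explicitly via sync()

            Database db = env.openDatabase(null, "string2nodeId", dbConfig);

            // One entry of the reverse lookup: 20-char string key -> node id.
            DatabaseEntry key = new DatabaseEntry();
            DatabaseEntry value = new DatabaseEntry();
            StringBinding.stringToEntry("aaaaaaaaaaaaaaaaaaaa", key);
            LongBinding.longToEntry(42L, value);
            db.put(null, key, value);

            db.sync();
            db.close();
            env.close();
        }
    }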

Cheers,

/peter neubauer

GTalk:      neubauer.peter
Skype       peter.neubauer
Phone       +46 704 106975
LinkedIn   http://www.linkedin.com/in/neubauer
Twitter      http://twitter.com/peterneubauer

http://www.neo4j.org               - Your high performance graph database.
http://startupbootcamp.org/    - Öresund - Innovation happens HERE.
http://www.thoughtmade.com - Scandinavia's coolest Bring-a-Thing party.



On Sun, Mar 6, 2011 at 9:14 PM, Paul A. Jackson <[email protected]> wrote:
> Hi Peter,
>
> I finished my testing.  I tried the JDBM tree and map, HSQL, and JBoss Cache
> as a wrapper around both HSQL and JDBM.  I found that JBoss Cache doesn't
> necessarily persist to disk at the end of a transaction, so it fails the ACID
> test.  HSQL is super fast in memory but was terrible when forced to commit
> every transaction.  (I tested 1.8, which doesn't support multi-statement
> transactions; each update is its own transaction.  Maybe 2.0 is better.)  So
> that leaves JDBM.  The tree (surprisingly) was much faster than the map.  I
> know from experience that JDBM doesn't scale well with multiple threads, yet
> I was thinking it may still be a good fit for this application.  It would be
> nice, though, if they at least used a ReentrantReadWriteLock rather than
> method synchronization to allow concurrent reads.
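>
> Roughly, the change I mean is the standard read-write-lock pattern below - a
> sketch only, not JDBM's actual code (the class and the TreeMap are just
> stand-ins to show the locking):
>
>     import java.util.TreeMap;
>     import java.util.concurrent.locks.ReentrantReadWriteLock;
>
>     // Guard lookups with the read lock and updates with the write lock, so
>     // concurrent readers no longer serialize behind a single monitor.
>     class LockedIndex {
>         private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
>         private final TreeMap<String, Long> tree = new TreeMap<String, Long>();
>
>         Long find(String key) {
>             lock.readLock().lock();      // many readers may hold this at once
>             try {
>                 return tree.get(key);
>             } finally {
>                 lock.readLock().unlock();
>             }
>         }
>
>         void insert(String key, long value) {
>             lock.writeLock().lock();     // writers still get exclusive access
>             try {
>                 tree.put(key, value);
>             } finally {
>                 lock.writeLock().unlock();
>             }
>         }
>     }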
>
> Hope that helps.
>
> Thanks,
> -Paul
>
> Paul Jackson, Principal Software Engineer
> Pitney Bowes Business Insight
> 4200 Parliament Place | Suite 600 | Lanham, MD  20706-1844  USA
> O: 301.918.0850 | M: 703.862.0120 | www.pb.com
> [email protected]
>
> Every connection is a new opportunity™
>
>
>
>
> -----Original Message-----
> From: [email protected] [mailto:[email protected]] On 
> Behalf Of Peter Neubauer
> Sent: Tuesday, December 21, 2010 11:12 AM
> To: [email protected]
> Cc: Neo4j user discussions
> Subject: Re: [Neo4j] Big index solutions?
>
> Mmh,
> we are looking at JDBM now, and it seems promising. I will keep you posted
> on the progress!
>
> Cheers,
>
> /peter neubauer
>
> GTalk:      neubauer.peter
> Skype       peter.neubauer
> Phone       +46 704 106975
> LinkedIn   http://www.linkedin.com/in/neubauer
> Twitter      http://twitter.com/peterneubauer
>
> http://www.neo4j.org               - Your high performance graph database.
> http://www.thoughtmade.com - Scandinavia's coolest Bring-a-Thing party.
>
>
>
> On Tue, Dec 21, 2010 at 12:19 PM, [email protected]
> <[email protected]> wrote:
>> That should fit in RAM just fine, except probably for the effect of the
>> string block/page size.  What about a B-tree backed by Neo4j relationships?
>> Not fast enough?
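>>
>> (Back of the envelope: 100M keys x 20 chars is about 2 GB of raw single-byte
>> data, but with per-entry overhead - fixed-size store blocks, or roughly
>> 80-100 bytes apiece if held as individual java.lang.String objects - it lands
>> closer to 8-10 GB before any index structure on top, so the representation
>> matters a lot.)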
>>
>> ----- Reply message -----
>> From: "Peter Neubauer" <[email protected]>
>> Date: Mon, Dec 20, 2010 3:54 pm
>> Subject: [Neo4j] Big index solutions?
>> To: "Neo4j user discussions" <[email protected]>
>>
>> Hi folks,
>> I wonder if any of you has seen a fast exact-match index solution that works
>> with the batch inserter (FAST) on big indexes (like 100M strings of 20
>> characters) that don't fit in RAM.
>>
>> Lucene is unable to cache such indexes and gets slow.
>>
>> Does anybody have experience with other reverse-lookup solutions like
>> Berkeley DB, Ehcache or others? It would be great to combine one of them
>> with the batch inserter so we can quickly insert big edge lists, with node
>> index lookups, into Neo4j ...
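>>
>> To make the intent concrete, the kind of loading loop I have in mind is
>> roughly the sketch below. The HashMap is only a stand-in for whatever
>> external reverse-lookup store ends up doing the string -> node-id mapping,
>> and the batch inserter calls are written from memory, so the exact
>> signatures may differ between versions:
>>
>>     import java.util.HashMap;
>>     import java.util.Map;
>>     import org.neo4j.graphdb.DynamicRelationshipType;
>>     import org.neo4j.graphdb.RelationshipType;
>>     import org.neo4j.kernel.impl.batchinsert.BatchInserter;
>>     import org.neo4j.kernel.impl.batchinsert.BatchInserterImpl;
>>
>>     public class EdgeListLoader {
>>         private static final RelationshipType CONNECTED =
>>                 DynamicRelationshipType.withName("CONNECTED");
>>
>>         // Stand-in for the external reverse-lookup store (Berkeley DB,
>>         // JDBM, ...); a plain HashMap of 100M entries won't fit in RAM.
>>         private final Map<String, Long> keyToNodeId = new HashMap<String, Long>();
>>         private final BatchInserter inserter = new BatchInserterImpl("target/graph.db");
>>
>>         public void load(Iterable<String[]> edges) {
>>             for (String[] edge : edges) {              // edge[0] -> edge[1]
>>                 long src = lookupOrCreate(edge[0]);
>>                 long dst = lookupOrCreate(edge[1]);
>>                 inserter.createRelationship(src, dst, CONNECTED, null);
>>             }
>>             inserter.shutdown();
>>         }
>>
>>         // Return the node id for a key, creating the node the first time
>>         // the key is seen and remembering its id in the lookup store.
>>         private long lookupOrCreate(String key) {
>>             Long id = keyToNodeId.get(key);
>>             if (id == null) {
>>                 Map<String, Object> props = new HashMap<String, Object>();
>>                 props.put("key", key);
>>                 id = inserter.createNode(props);
>>                 keyToNodeId.put(key, id);
>>             }
>>             return id;
>>         }
>>     }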
>>
>> Cheers,
>>
>> /peter neubauer
>>
>> GTalk:      neubauer.peter
>> Skype       peter.neubauer
>> Phone       +46 704 106975
>> LinkedIn   http://www.linkedin.com/in/neubauer
>> Twitter      http://twitter.com/peterneubauer
>>
>> http://www.neo4j.org               - Your high performance graph database.
>> http://www.thoughtmade.com - Scandinavia's coolest Bring-a-Thing party.
_______________________________________________
Neo4j mailing list
[email protected]
https://lists.neo4j.org/mailman/listinfo/user
