I already have a small test VerboseQuadStore built with LuceneIndexService.

Then I made a test program where I create a new
LuceneFulltextQueryIndexService to test your suggestion; see below:

final GraphDatabaseService graphDb = new EmbeddedGraphDatabase( db );
final IndexService indexService = new LuceneIndexService( graphDb );
final IndexService index = new LuceneFulltextQueryIndexService( graphDb );

RdfStore store = new VerboseQuadStore( graphDb, indexService );
Sail sail = new GraphDatabaseSail( graphDb, store );

Repository repo = new SailRepository( sail );
RepositoryConnection rc = repo.getConnection();
SailConnection sc = sail.getConnection();
URI context = sail.getValueFactory().createURI(
        "http://www.mementoweb.org/time/2008-02-00" );
RepositoryResult<Statement> result = rc.getStatements( null,
        null, null, true, context );

int count = 0;
Transaction tx = graphDb.beginTx();
index.removeIndex( "context" );
try {
    while ( result.hasNext() ) {
        Statement st = result.next();
        Resource sp = st.getSubject();
        Node node = graphDb.createNode();
        node.setProperty( "uri", sp.stringValue() );
        node.setProperty( "time", "2008-02-00" );
        index.index( node, "context", sp.stringValue() + "|20080200" );

        count = count + 1;
        // Commit in batches so the transaction doesn't grow without bound:
        // tx.success() only marks the transaction; tx.finish() commits it.
        if ( count % 10000 == 0 ) {
            System.out.println( "count: " + count );
            tx.success();
            tx.finish();
            tx = graphDb.beginTx();
        }
    }
    tx.success();
} finally {
    tx.finish();
}
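As an aside, the <URI>|<time> key being indexed above only works with Lucene range queries if the time component has a fixed width, because Lucene compares the values as plain strings. Here is a minimal stand-alone sketch (my own illustration, not Neo4j or Lucene API) of why zero-padding matters; the `key` helper and 13-digit epoch-millis width are assumptions for the example:

```java
import java.util.Arrays;

public class PaddedKeyDemo {
    // Build an index key: URI plus a zero-padded epoch-millis timestamp.
    // 13 digits is enough for epoch-millis values for centuries to come.
    static String key(String uri, long epochMillis) {
        return uri + "|" + String.format("%013d", epochMillis);
    }

    public static void main(String[] args) {
        String[] keys = {
            key("http://my-uri", 1270797634350L),
            key("http://my-uri", 950000000000L),
        };
        Arrays.sort(keys);
        // With padding, lexicographic order matches chronological order,
        // so a range query like
        //   [http://my-uri|0000000000000 TO http://my-uri|1270797634350]
        // behaves as expected. Without padding, "950000000000" (12 digits)
        // would sort *after* "1270797634350" because '9' > '1'.
        System.out.println(Arrays.toString(keys));
    }
}
```

The "20080200"-style dates in my code are already fixed-width, so they sort correctly too, as long as every indexed value uses the same format.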

The indexing hangs after 30000 rows and never finishes. Did I miss
something?
Is it designed so that you can have multiple indexes? If I need to create
a fulltext index later, can I do that?
thanks

> You could perhaps use the LuceneFulltextQueryIndexService (which is a
> LuceneFulltextIndexService, but which interprets the "value" argument in
> getNodes() as lucene query
> syntax<http://lucene.apache.org/java/2_9_1/queryparsersyntax.html>).
> Index your <URI>|<time> as a one concatenated value and query it with
> range
> queries<http://lucene.apache.org/java/2_9_1/queryparsersyntax.html#Range%20Searches>,
> f.ex: [http://my-uri|0000000000000 TO http://my-uri|1270797634350] (since
> all values are strings in lucene), and grabbing the last one. If the
> results
> of the range query could be reversed you could grab the first one instead,
> which would be much better.
>
> This solution doesn't strike me as being particularly good, but might work
> (I haven't tried it).
>
> 2010/4/8 Lyudmila L. Balakireva <[email protected]>
>
>> Hi,
>>
>> I have a question about how best to deal with a time range index. I am
>> using the Sail layer to build a triple store with context as a time value.
>> I need to binary search for the nearest context value for a given uri and
>> requested time.
>> For example for the mysql it will be  table [ uri,time ] where rows:
>> u ,t1
>> u, t2
>> u1,t1
>> u2,t2
>> etc.
>> Given the uri u and time t, with t1 < t < t2, I can quickly find the
>> nearest t1 for uri u with the query: select max(time) where time < t
>> and uri = u;
>>
>> Is there any internal trick I can use in neo4j to build such an index or
>> optimize the operation?
>> The timeline index is somewhat different, since I would first need to
>> break the recordset up by uri and then find the time. There will be many
>> millions of timelines attached to the one node, and I would still need to
>> iterate through the nodes. What else can be done here besides MySQL?
>> Thank you for help,
>> Luda
>> _______________________________________________
>> Neo mailing list
>> [email protected]
>> https://lists.neo4j.org/mailman/listinfo/user
>>
>
>
>
> --
> Mattias Persson, [[email protected]]
> Hacker, Neo Technology
> www.neotechnology.com
>
