Hey there, AFAIK this problem on S3 has not been solved yet, but there might be other ways to work around it. Since you are running on Amazon anyway, you might want to consider a locking service like ZooKeeper (http://hadoop.apache.org/zookeeper/), which could help you with other problems in the future too. Take a look at the recipes section (http://hadoop.apache.org/zookeeper/docs/r3.2.0/recipes.html#sc_recipes_Locks) and build your distributed lock yourself. One more thing: I'd say this is not really a Lucene locking problem, as the locking API is highly customizable; see LockFactory (http://lucene.apache.org/java/2_4_0/api/core/org/apache/lucene/store/LockFactory.html) and Directory#setLockFactory (http://lucene.apache.org/java/2_4_0/api/core/org/apache/lucene/store/Directory.html#setLockFactory%28org.apache.lucene.store.LockFactory%29).
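For what it's worth, here is a rough, untested sketch of how the two could be wired together against the Lucene 2.4 API: a custom LockFactory that keeps the write lock as an ephemeral znode in ZooKeeper instead of a file on S3. The class name, the "/lucene-locks" parent path, and the assumption that a connected ZooKeeper handle and the parent znode already exist are all my own; it also skips the full ZooKeeper lock recipe (no sequential nodes or watches), so treat it as a starting point only.

import java.io.IOException;

import org.apache.lucene.store.Lock;
import org.apache.lucene.store.LockFactory;
import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;

/**
 * Hypothetical sketch: store Lucene's write lock as an ephemeral
 * znode in ZooKeeper instead of a file on the (non-locking) S3 store.
 * Bare-bones exclusive lock, not the full ZooKeeper lock recipe.
 */
public class ZooKeeperLockFactory extends LockFactory {

  private final ZooKeeper zk;
  private final String root;   // parent znode, e.g. "/lucene-locks", must already exist

  public ZooKeeperLockFactory(ZooKeeper zk, String root) {
    this.zk = zk;
    this.root = root;
  }

  public Lock makeLock(String lockName) {
    return new ZooKeeperLock(zk, root + "/" + lockName);
  }

  public void clearLock(String lockName) throws IOException {
    try {
      zk.delete(root + "/" + lockName, -1);
    } catch (KeeperException.NoNodeException e) {
      // nothing to clear
    } catch (Exception e) {
      throw new IOException(e.toString());
    }
  }

  private static class ZooKeeperLock extends Lock {
    private final ZooKeeper zk;
    private final String path;

    ZooKeeperLock(ZooKeeper zk, String path) {
      this.zk = zk;
      this.path = path;
    }

    public boolean obtain() throws IOException {
      try {
        // Ephemeral node: released automatically if this writer's session dies.
        zk.create(path, new byte[0], ZooDefs.Ids.OPEN_ACL_UNSAFE,
                  CreateMode.EPHEMERAL);
        return true;
      } catch (KeeperException.NodeExistsException e) {
        return false;   // someone else holds the write lock
      } catch (Exception e) {
        throw new IOException(e.toString());
      }
    }

    public void release() {
      try {
        zk.delete(path, -1);
      } catch (Exception e) {
        throw new RuntimeException(e);
      }
    }

    public boolean isLocked() {
      try {
        return zk.exists(path, false) != null;
      } catch (Exception e) {
        return false;
      }
    }
  }
}

You would then attach it to whatever Directory implementation you use on top of S3, roughly like dir.setLockFactory(new ZooKeeperLockFactory(zk, "/lucene-locks")); but again, this is only meant to illustrate where the LockFactory hook sits.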
simon

On Wed, Sep 2, 2009 at 11:55 AM, prasenjit<prasen....@gmail.com> wrote:
>
> I am exploring the possibility of creating large Lucene indices via EC2/S3.
> Till now I have found only the following URL:
> http://www.kimchy.org/lucene-and-amazon-s3/
>
> But I still don't know whether the Lucene locking problem (on a distributed FS
> like S3/DFS) is fixed or not. Any information is greatly appreciated.