> H.-H.-Meier-Allee 63, D-28213 Bremen
> http://www.thetaphi.de
> eMail: u...@thetaphi.de
>
> > -----Original Message-----
> > From: Saurabh Agarwal [mailto:srbh.g...@gmail.com]
> > Sent: Tuesday, May 18, 2010 10:13 AM
> > To: java-user@lucene.apache.org
> > Subject: Re: Lock obtain timed out
>
> Thanks :)
>
> I am using only one server to create the index.
>
> Saurabh Agarwal
>
>
> On Tue, May 18, 2010 at 1:41 PM, Ian Lea wrote:
>
> > Use SimpleFSLockFactory. The default, NativeFSLockFactory, doesn't
> > play well with NFS.
>
> I am indexing (using the demo application IndexFiles) a 6 GB corpus which
> is on NFS; moreover, I am storing my index on NFS too. But when I run the
> program I get the following exception:
>
>
>
> caught a class org.apache.lucene.store.LockObtainFailedException
> with message: Lock obtain timed out:
> NativeFSLock@/usr/home/saurabh/index/write.lock:
> java.io.IOException: Operation not supported
>
> Please Help out
>
>
> Regards
> Saurabh Agarwal
>
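For later readers, a minimal sketch of Ian's suggestion, assuming the Lucene 3.x-era API (the index path here is hypothetical):

import java.io.File;
import org.apache.lucene.store.FSDirectory;
import org.apache.lucene.store.SimpleFSLockFactory;

// SimpleFSLockFactory obtains write.lock via plain File.createNewFile(),
// which works on NFS; the default NativeFSLockFactory uses java.nio file
// locks, which is what fails with "Operation not supported" here.
FSDirectory dir = FSDirectory.open(new File("/mnt/nfs/corpus-index"),
                                   new SimpleFSLockFactory());

The trade-off is that a crashed JVM can leave a stale write.lock behind that has to be removed by hand.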
Hey,
Lucene is deployed on my Tomcat server, and when I send parallel calls from
my client to add, delete or update documents, some operations are
unsuccessful. The following exception is thrown:
org.apache.lucene.store.LockObtainFailedException: Lock obtain timed out:
simplefsl...@d:\testIndex\write.lock
I understand what this means, and why this exception shows up.
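A common fix is to share one IndexWriter per index across the whole webapp rather than opening a writer per request; IndexWriter is itself thread-safe, so parallel calls then queue inside it instead of racing for write.lock. A sketch, assuming a Lucene 2.4-era API (the IndexHolder class name is made up):

import java.io.IOException;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.store.Directory;

// One writer per index for the whole webapp; parallel servlet calls
// share it instead of each trying to acquire write.lock.
public final class IndexHolder {
    private static IndexWriter writer;

    public static synchronized IndexWriter get(Directory dir) throws IOException {
        if (writer == null) {
            writer = new IndexWriter(dir, new StandardAnalyzer(),
                                     IndexWriter.MaxFieldLength.UNLIMITED);
        }
        return writer;
    }
}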
"Patrick Kimber" <[EMAIL PROTECTED]> wrote:
> I cannot send you the source code without speaking to my manager
> first. I guess he would want me to change the code before sending it
> to you. You could have the log files now, but I expect you want to
> wait until the test application is ready.
>
> "pkimber" <[EMAIL PROTECTED]> wrote:
>
> > We are still getting various issues on our Lucene indexes running on
> > an NFS share. It has taken me some time to find some useful
> > information to report to the mailing list.
>
> Bummer!
>
> Can you zip up your test application that shows the iss
"pkimber" <[EMAIL PROTECTED]> wrote:
> We are still getting various issues on our Lucene indexes running on
> an NFS share. It has taken me some time to find some useful
> information to report to the mailing list.
Bummer!
Can you zip up your test application that shows the issue, as well as
the log files?
lucene.icm.test.Write.main(): IncRef "_zr.cfs": pre-incr count is 3
I have added logging to our ExpirationTimeDeletionPolicy and I don't think
it is deleting the "_zr.cfs" file.
Once again, I would really appreciate your help solving this issue,
Thanks for your help,
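For readers following along: a sketch of what such an expiration-time deletion policy looks like, assuming the Lucene 2.4-era IndexDeletionPolicy API (Lucene's own deletion-policy tests contain a class along these lines). Old commits are removed only once they pass an expiration age, so readers on other NFS nodes have time to finish with them:

import java.io.IOException;
import java.util.List;
import org.apache.lucene.index.IndexCommit;
import org.apache.lucene.index.IndexDeletionPolicy;
import org.apache.lucene.store.Directory;

class ExpirationTimeDeletionPolicy implements IndexDeletionPolicy {
  private final Directory dir;
  private final double expirationTimeSeconds;

  ExpirationTimeDeletionPolicy(Directory dir, double expirationTimeSeconds) {
    this.dir = dir;
    this.expirationTimeSeconds = expirationTimeSeconds;
  }

  public void onInit(List commits) throws IOException {
    onCommit(commits);
  }

  public void onCommit(List commits) throws IOException {
    // Age is measured off the newest commit's segments file mtime.
    IndexCommit last = (IndexCommit) commits.get(commits.size() - 1);
    long expireTime = dir.fileModified(last.getSegmentsFileName())
        - (long) (1000.0 * expirationTimeSeconds);
    // Delete every commit except the newest once it is past the
    // expiration age.
    for (int i = 0; i < commits.size() - 1; i++) {
      IndexCommit commit = (IndexCommit) commits.get(i);
      if (dir.fileModified(commit.getSegmentsFileName()) < expireTime) {
        commit.delete();
      }
    }
  }
}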
Hi Michael
Just to let you know, I am on holiday for one week so will not be able
to send a progress report until I return.
I have deployed the new code to a test site so I will be informed if
the users notice any issues.
Thanks for your help
Patrick
On 04/07/07, Michael McCandless <[EMAIL PROTECTED]> wrote:
"Patrick Kimber" <[EMAIL PROTECTED]> wrote:
> Yes, there are many lines in the logs saying:
> hit FileNotFoundException when loading commit "segment_X"; skipping
> this commit point
> ...so it looks like the new code is working perfectly.
Super!
> I am sorry to be vague... but how do I check which segments file is
> opened when a new writer is created?
Hi Michael
Yes, there are many lines in the logs saying:
hit FileNotFoundException when loading commit "segment_X"; skipping
this commit point
...so it looks like the new code is working perfectly.
I am sorry to be vague... but how do I check which segments file is
opened when a new writer is created?
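One way to check, assuming the Lucene 2.x-era API (dir stands for your index Directory):

import org.apache.lucene.index.SegmentInfos;
import org.apache.lucene.store.Directory;

// Ask which segments_N file is current in the directory; a newly
// created writer reads this same file on open.
String current = SegmentInfos.getCurrentSegmentFileName(dir);
System.out.println("writer will open: " + current);

Calling writer.setInfoStream(System.out) additionally logs segment activity (flushes, merges, commits) while the writer is in use.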
"Patrick Kimber" <[EMAIL PROTECTED]> wrote:
> I have been running the test for over an hour without any problem.
> The index writer log file is getting rather large so I cannot leave
> the test running overnight. I will run the test again tomorrow
> morning and let you know how it goes.
Ahhh, th
Hi Michael
I have been running the test for over an hour without any problem.
The index writer log file is getting rather large so I cannot leave
the test running overnight. I will run the test again tomorrow
morning and let you know how it goes.
Thanks again...
Patrick
Hi Michael
I am setting up the test with the "take2" jar and will let you know
the results as soon as I have them.
Thanks for your help
Patrick
On 03/07/07, Michael McCandless <[EMAIL PROTECTED]> wrote:
OK I opened issue LUCENE-948, and attached a patch & new 2.2.0 JAR.
Please make sure you use the "take2" versions (they have added
instrumentation to help us debug):
https://issues.apache.org/jira/browse/LUCENE-948
Patrick, could you please test the above "take2" JAR? Could you also call
Ind...
Hi Michael
I am really pleased we have a potential fix. I will look out for the patch.
Thanks for your help.
Patrick
On 03/07/07, Michael McCandless <[EMAIL PROTECTED]> wrote:
"Patrick Kimber" <[EMAIL PROTECTED]> wrote:
> I am using the NativeFSLockFactory. I was hoping this would have
>
"Patrick Kimber" <[EMAIL PROTECTED]> wrote:
> I am using the NativeFSLockFactory. I was hoping this would have
> stopped these errors.
I believe this is not a locking issue and NativeFSLockFactory should
be working correctly over NFS.
> Here is the whole of the stack trace:
>
> Caused by: java
I think you should get the "Lock obtain timed out" exception (that you
mentioned in the subject line) instead of "java.io.FileNotFoundException",
because if one server is holding the lock on the directory then the other
server will wait until the default lock timeout and then throw that.
er" <[EMAIL PROTECTED]>
07/03/2007 03:47 PM
Please respond to
java-user@lucene.apache.org
To
java-user@lucene.apache.org
cc
Subject
Re: Lucene 2.2, NFS, Lock obtain timed out
Hi
I have added more logging to my test application. I have two servers
writing to a shared Lucene ind
t
Re: Lucene 2.2, NFS, Lock obtain timed out
Hi
I have added more logging to my test application. I have two servers
writing to a shared Lucene index on an NFS partition...
Here is the logging from one server...
[10:49:18] [DEBUG] LuceneIndexAccessor closing cached writer
[
Hi
I have added more logging to my test application. I have two servers
writing to a shared Lucene index on an NFS partition...
Here is the logging from one server...
[10:49:18] [DEBUG] LuceneIndexAccessor closing cached writer
[10:49:18] [DEBUG] ExpirationTimeDeletionPolicy onCommit() delete
Patrick Kimber wrote:
> I have been checking the application log. Just before the time when
> the lock file errors occur I found this log entry:
> [11:28:59] [ERROR] IndexAccessProvider
> java.io.FileNotFoundException:
> /mnt/nfstest/repository/lucene/lucene-icm-test-1-0/segments_h75 (No
> such file or directory)
On 6/29/07, Doron Cohen <[EMAIL PROTECTED]> wrote:
> Note that some Solr users have reported a similar issue.
> https://issues.apache.org/jira/browse/SOLR-240
Seems the scenario there is without using native locks? -
"i get the stacktrace below ... with useNativeLocks turned off"
Yes... but th
Never used the IndexAccessor patch, so I may be
wrong in the following.
No, let's fix it... /;->
Don't mean to wade in over my head here, but just to help out those that
have not used LuceneIndexAccessor.
I am fairly certain that using the LuceneIndexAccessor could easily
create the FileNotFoundException seen here.
Mark Miller wrote:
> You might try just using one of the nodes as
> the writer. In Michael's comments, he always seems
> to mention the pattern of one writer, many
> readers on NFS. In this case you could use
> no LockFactory and perhaps gain a little speed there.
One thing I would worry about if m
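A sketch of the single-writer setup Mark describes, assuming a Lucene 2.x-era API (the path is hypothetical); this is only safe if exactly one node can ever open an IndexWriter:

import java.io.File;
import org.apache.lucene.store.FSDirectory;
import org.apache.lucene.store.NoLockFactory;

// With one designated writer node there is nothing to lock against, so
// locking can be disabled entirely on the shared NFS directory.
FSDirectory dir = FSDirectory.getDirectory(new File("/mnt/nfs/index"),
                                           NoLockFactory.getNoLockFactory());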
Patrick Kimber wrote:
> As requested, I have been trying to improve the
> logging in the application so I can give you more
> details of the update pattern.
>
> I am using the Lucene Index Accessor contribution
> to co-ordinate the readers and writers:
> http://www.nabble.com/Fwd%3A-Contribution%3
: Perhaps i'm missing something, but i thought NativeFSLock was not suitable
: for NFS? ... or is is this what "lockd" provides? (my NFS knowledge is
: very out of date)
D'oh!
I just read the docs for NativeFSLockFactory and noticed the "For example,
for NFS servers there sometimes must be a separate..." caveat.
: We are sharing a Lucene index in a Linux cluster over an NFS share. We have
: multiple servers reading and writing to the index.
:
: I am getting regular lock exceptions e.g.
: Lock obtain timed out:
: NativeFSLock@/mnt/nfstest/repository/lucene/lock/lucene-2d3d31fa7f19eabb73d692df44087d81-n
Hi Mark
I just ran my test again... and the error occurred after 10 minutes -
which is the time when my deletion policy is triggered. So... I think
you might have found the answer to my problem.
I will spend more time looking at it on Monday.
Thank you very much for your help and enjoy your weekend.
If you're getting java.io.FileNotFoundException:
/mnt/nfstest/repository/lucene/lucene-icm-test-1-0/segments_h75 within 2
minutes, this is very odd indeed. That would seem to imply your deletion
policy is not working.
You might try just using one of the nodes as the writer. In Michael's
comments, he always seems to mention the pattern of one writer, many
readers on NFS.
Hi Mark
Yes, thank you. I can see your point and I think we might have to pay
some attention to this issue.
But, we sometimes see this error on an NFS share within 2 minutes of
starting the test so I don't think this is the only problem.
Once again, thanks for the idea. I will certainly be looking into it.
This is an interesting choice. Perhaps you have modified
LuceneIndexAccessor, but it seems to me (without knowing much about your
setup) that you would have odd reader behavior. On a 3 node system, if you
add docs with node 1 and 2 but not 3, and you're doing searches against all
3 nodes, node 3 will not see the new documents.
"Patrick Kimber" <[EMAIL PROTECTED]> wrote on 29/06/2007
> > 02:01:08:
> >
> > > Hi,
> > >
> > > We are sharing a Lucene index in a Linux cluster over an NFS
> > > share. We have
> > > multiple servers reading and writing
>
> > We are sharing a Lucene index in a Linux cluster over an NFS
> > share. We have
> > multiple servers reading and writing to the index.
> >
> > I am getting regular lock exceptions e.g.
> > Lock obtain timed out:
> >
>
NativeFSLock@/mnt/nfste
oron
"Patrick Kimber" <[EMAIL PROTECTED]> wrote on 29/06/2007
02:01:08:
> Hi,
>
> We are sharing a Lucene index in a Linux cluster over an NFS
> share. We have
> multiple servers reading and writing to the index.
>
> I am getting regular lock exceptions
g a Lucene index in a Linux cluster over an NFS
> share. We have
> multiple servers reading and writing to the index.
>
> I am getting regular lock exceptions e.g.
> Lock obtain timed out:
>
NativeFSLock@/mnt/nfstest/repository/lucene/lock/lucene-2d3d31fa7f19eabb73d692df44087d81-
Hi,
We are sharing a Lucene index in a Linux cluster over an NFS share. We have
multiple servers reading and writing to the index.
I am getting regular lock exceptions e.g.
Lock obtain timed out:
NativeFSLock@/mnt/nfstest/repository/lucene/lock/lucene-2d3d31fa7f19eabb73d692df44087d81-n
Hello everyone,
I am getting the following exception while searching:
Lock obtain timed out: java.io.IOException: Lock obtain timed out:
[EMAIL PROTECTED]:\WINDOWS\TEMP\lucene-22e0ad3c019e26a6e2991b0e6ed97e1c-commit.lock
I have implemented MultiSearcher only; no other methods are updating the
index.
"Jerome Chauvin" wrote:
> Thanks Michael for your answer, but following check of our processing, it
> appears all the updates of the index are made in a single thread.
> Actually,
> this kind of exception is thrown during a heavy batch processing. This
> processing is not multi-threaded.
Do you a
"Jerome Chauvin" <[EMAIL PROTECTED]> wrote:
> We encounter issues while updating the lucene index, here is the stack
> trace:
>
> Caused by: java.io.IOException: Lock obtain timed out:
> SimpleFSLock@/data/www/orcanta/lucene/store1/write.lock
> at org.apache.
All,
We encounter issues while updating the Lucene index, here is the stack trace:
Caused by: java.io.IOException: Lock obtain timed out:
SimpleFSLock@/data/www/orcanta/lucene/store1/write.lock
at org.apache.lucene.store.Lock.obtain(Lock.java:69)
at
But, locking should be fine even for this "hammering" use case (and if
it's not, that's a bug, and I'd really like to know about it!).
I have hammered over 2.5 million 5-10k docs into an index this way (a
realtime system that I had not yet added a special load call to) and had
zero problems.
indexer(counter, filename); // this counter is initially set to 0 so that
    // for the first file indexed it creates a new index (the IndexWriter
    // create flag set to true, and false otherwise); later it is
    // incremented.
System.out.println("Indexing " + filename);
counter++; // a file has successfully indexed; increment the counter
luceneIndex.closeIndex(); // ==> I close the index already, so it should
                          // have released the lock
}
3. I have deleted the directory where the index files exist and tried to
index from the beginning... I don't know whether 7 hrs later it will raise
the same "Lock obtain timed out" problem.
4. I use the latest version of Lucene (nightly build).
Thanks and Regards,
Maureen
maureen tanuwidjaja wrote:
I am indexing thousands of XML documents, then it stops after indexing for
about 7 hrs
...
Indexing C:\sweetpea\wikipedia_xmlfiles\part-0\37027.xml
java.io.IOException: Lock obtain timed out:
[EMAIL PROTECTED]:\sweetpea\dual_index\DI\write.lock
java.lang.NullPointerException
Can anyone suggest how to overcome this lock time out error?
Thanks and Regards,
Maureen
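If the run dies (here, with the NullPointerException) before closeIndex() executes, write.lock is left behind and the next writer times out. A sketch of guarding the loop body, reusing the names from the code quoted above:

try {
    indexer(counter, filename);   // may throw mid-run
    counter++;
} finally {
    luceneIndex.closeIndex();     // always runs, so write.lock is
}                                 // released even when indexing throws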
-----Original Message-----
From: karl wettin [mailto:[EMAIL PROTECTED]
Sent: den 27 juli 2006 11:15
To: java-user@lucene.apache.org
Subject: Re: SOLVED: Lock obtain timed out
Importance: Low
On Thu, 2006-07-27 at 11:06 +0200, Björn Ekengren wrote:
> Thanks everybody for the feedback. I now rewrote my app like this:
>
> synchronized (searcher.getWriteLock()){
> IndexReader reader = searcher.getIndexSearcher().getIndexReader();
> try {
>
Sent: den 27 juli 2006 09:25
To: java-user@lucene.apache.org
Subject: RE: Lock obtain timed out
Importance: Low
On Thu, 2006-07-27 at 08:59 +0200, Björn Ekengren wrote:
> > > When I close my application containing index writers the
> > > lock files are left in the temp directory causing a "Lock obtain
> > > timed out" error upon the next restart.
OS: win xp
JVM: 1.5
/B
-----Original Message-----
From: Michael McCandless [mailto:[EMAIL PROTECTED]
Sent: den 26 juli 2006 20:16
To: java-user@lucene.apache.org
Subject: Re: Lock obtain timed out
Importance: Low
When I close my application containing index writers the
lock files are left in the temp directory causing a "Lock obtain
timed out" error upon the next restart.
My guess is that you keep a writer open even though there is no activity
involving adding new documents. Unless I have
Ok, this might have been answered somewhere, but I can't find it so here goes:
When I close my application containing index writers the lock files are left in
the temp directory causing a "Lock obtain timed out" error upon the next
restart. It works of course if I remove the lock files manually.
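One way to make sure the writer is closed on exit is a JVM shutdown hook; a sketch (closeOnExit is a made-up helper, called once after the long-lived writer is opened):

import org.apache.lucene.index.IndexWriter;

static void closeOnExit(final IndexWriter writer) {
    Runtime.getRuntime().addShutdownHook(new Thread() {
        public void run() {
            try {
                writer.close(); // removes the lock file in the temp dir
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    });
}

Note that a hard kill or crash still skips shutdown hooks, so a manual cleanup path remains useful.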
I am using Lucene to search through text files, and my program is more or
less similar to the demo provided with the Lucene distribution.
Initially everything was working fine without any problems, but today while
running the application I have been getting this exception:
java.io.IOException: Lock obtain timed out: Lock@/tmp/lucene-
dcc982e203ef1d2aebb5d8a4b55b3a60-write.lock
whenever I try to read or write to the index. I am unable to understand why
this is happening. Is there some mistake I am making in the code? Because I
haven't...
I kept getting this exception when adding a new document to an existing
index:
22:19:10,281 INFO [STDOUT] java.io.IOException: Lock obtain timed out:
[EMAIL PROTECTED]:\DOCUME~1\xin\LOCALS~1\Temp\lucene-31c482aaf5f581ad3dc0249eeeb8d281-write.lock
(The stack trace is like: 22:19:10,312 INFO ...)
Probably a stale lock - remove it.
Otis
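The removal can also be done programmatically at startup, assuming the Lucene 1.x/2.x-era API (the path is hypothetical); only do this when you are certain no live writer still holds the lock:

import org.apache.lucene.index.IndexReader;
import org.apache.lucene.store.FSDirectory;

FSDirectory dir = FSDirectory.getDirectory("C:/index", false);
if (IndexReader.isLocked(dir)) {
    IndexReader.unlock(dir); // forcibly removes the stale lock files
}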
----- Original Message -----
From: Harini Raghavan <[EMAIL PROTECTED]>
To: java-user@lucene.apache.org
Sent: Mon 09 Jan 2006 01:36:53 PM EST
Subject: Lock obtain timed out + IndexSearcher
Hi All,
All of a sudden I have started getting LockTimeOut exceptions... What could
be wrong?
java.io.IOException: Lock obtain timed out:
[EMAIL PROTECTED]:\DOCUME~1\harini\LOCALS~1\Temp\lucene-1b92bc48efc5c13ac4ef4ad9fd17c158-commit.lock
at org.apache.lucene.store.Lock.obtain(Lock.java:58)
at org.apache.lucene.store.Lock$With.run(Lock.java:108)
at
The default value of IndexWriter.WRITE_LOCK_TIMEOUT property is 1000ms. Can
this value be increased to some optimum value?
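It can. In the Lucene 1.4-era API the timeout is a mutable static field (later releases added IndexWriter.setDefaultWriteLockTimeout instead); a sketch:

// Wait up to 10 seconds for write.lock instead of the default 1000 ms.
// Raising it only helps if the lock really is released eventually.
IndexWriter.WRITE_LOCK_TIMEOUT = 10000;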
----- Original Message -----
From: "Harini Raghavan" <[EMAIL PROTECTED]>
To:
Sent: Saturday, July 30, 2005 11:23 PM
Subject: IOException : Lock obtain timed out
On Saturday 30 July 2005 19:53, Harini Raghavan wrote:
> So I am wondering why this problem is
> occurring. Can someone please help?
You could try the version from SVN; it fixes a bug where a timeout was
never obeyed (not sure if it was *this* timeout).
Regards
Daniel
--
http://www.daniel
I am getting a
different exception : IOException : Lock obtain timed out.
If I manually delete the lock file from the index directory, then it works
fine for a while and again throws the same exception.
Below is the exception stack trace.
java.io.IOException: Lock obtain timed out:
Lock@/var/