salvaging uncommitted data

2011-01-18 Thread Udi Nir
Hi,
I have a Solr server that is failing to acquire a lock with the exception
below. I think that the server has a lot of uncommitted data (I am not sure
how to verify this) and, if so, I would like to salvage it.
Any suggestions on how to proceed?

(Btw, I tried removing the lock file but it did not help.)

Thanks,
Udi


Jan 18, 2011 5:17:06 PM org.apache.solr.common.SolrException log
SEVERE: org.apache.lucene.store.LockObtainFailedException: Lock obtain timed out: NativeFSLock@/vol-unifi-solr/data/index/lucene-043c34f1f06a280de60b3d4e8e056016-write.lock
        at org.apache.lucene.store.Lock.obtain(Lock.java:85)
        at org.apache.lucene.index.IndexWriter.<init>(IndexWriter.java:1545)
        at org.apache.lucene.index.IndexWriter.<init>(IndexWriter.java:1402)
        at org.apache.solr.update.SolrIndexWriter.<init>(SolrIndexWriter.java:190)


Re: salvaging uncommitted data

2011-01-18 Thread Otis Gospodnetic
Udi,

Hm, don't know off the top of my head, but sounds like an interesting problem.
Are you getting this error while still writing to the index or did you stop all 
writing?
Do you get this error when you issue a commit or?
Is the index on the local disk or?

Otis

Sematext :: http://sematext.com/ :: Solr - Lucene - Nutch
Lucene ecosystem search :: http://search-lucene.com/





Re: salvaging uncommitted data

2011-01-18 Thread Udi Nir
I have not stopped writing, so I am getting this error all the time.
The commit actually seems to go through with no errors, but it does not seem
to write anything to the index files (I can see this because they are old
and I cannot see new stuff in search results).

My index folder is on an Amazon EBS volume, which is a block device and
looks like a local disk.

thanks!

udi
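The "index files are old" observation above can be checked directly from the
shell; a quick sketch, with the path taken from the stack trace earlier in the
thread:

```shell
# If commits were actually flushing, the segments_N file and the newest
# segment files would have current mtimes. Uniformly old timestamps support
# the theory that the writes are not reaching the index at all.
ls -lt /vol-unifi-solr/data/index | head
```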




Re: salvaging uncommitted data

2011-01-18 Thread Otis Gospodnetic
Udi,

It's hard for me to tell from here, but it looks like your writes are really 
not 
going in at all, in which case there may be nothing (much) to salvage.

The EBS volume is mounted?  And fast (try listing a bigger dir or doing 
something that involves some non-trivial disk IO)?
No errors anywhere in the log on commit?
How exactly are you invoking the commit?  There is a wait option there...

Otis

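For reference, the "wait option" Otis mentions corresponds to attributes on
the commit message itself; a hedged sketch (host/port are assumptions carried
over from the thread, attribute names per 1.4-era Solr's XML update format):

```shell
# Explicit commit with the wait options spelled out. With waitFlush and
# waitSearcher both true, curl blocks until the data is flushed to disk and
# a new searcher is registered, so a successful response means the commit
# really finished rather than merely being accepted.
curl -s http://localhost:8983/solr/update \
  -H 'Content-type:text/xml; charset=utf-8' \
  -d '<commit waitFlush="true" waitSearcher="true"/>'
```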





Re: salvaging uncommitted data

2011-01-18 Thread Udi Nir
The EBS volume is operational and I cannot see any errors in dmesg etc.
The only errors in catalina.out are the lock-related ones (even though I
removed the lock file), and when I do a commit everything looks fine in the
log.
I am using the following for the commit:
curl http://localhost:8983/solr/update -s -H 'Content-type:text/xml; charset=utf-8' -d '<commit/>'


Btw, where will I find the writes that have not been committed? Are they all
in memory or are they in some temp files somewhere?

udi





Re: salvaging uncommitted data

2011-01-18 Thread Jason Rutherglen
 btw where will i find the writes that have not been committed? are they all
 in memory or are they in some temp files somewhere?

The writes'll be gone if they haven't been committed yet and the
process fails.

 org.apache.lucene.store.LockObtainFailedException: Lock obtain timed

If it's removed, then on restart of the process this should go
away. However, you may see a corrupted index exception.
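If a corrupted-index exception does appear after the restart, Lucene ships a
CheckIndex tool that can report on (and, destructively, repair) an index. A
sketch only: the jar name and paths are assumptions, and -fix permanently
drops any unreadable segments, so copy the index aside first:

```shell
# Back up the index before touching it; CheckIndex -fix is destructive.
cp -a /vol-unifi-solr/data/index /vol-unifi-solr/data/index.bak

# Report-only pass: lists segments and any corruption it finds.
java -cp lucene-core.jar org.apache.lucene.index.CheckIndex \
  /vol-unifi-solr/data/index

# Only after reviewing the report, let it drop the broken segments:
# java -cp lucene-core.jar org.apache.lucene.index.CheckIndex \
#   /vol-unifi-solr/data/index -fix
```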



Re: salvaging uncommitted data

2011-01-18 Thread Udi Nir
I have not restarted the process yet.
If I restart it, will I lose any data that is in memory? If so, is there a
way around it?
Is there a way to know if there is any data waiting to be written? (If not,
I will just restart...)

thanks.



Re: salvaging uncommitted data

2011-01-18 Thread Jason Rutherglen
 if i restart it, will i lose any data that is in memory? if so, is there a
 way around it?

Usually I've restarted the process; on restart, Solr with
<unlockOnStartup>true</unlockOnStartup> in solrconfig.xml will
automatically remove the lock file (actually I think it may be removed
automatically when the process dies).
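For reference, that setting lives in the <mainIndex> section of
solrconfig.xml (placement as in the 1.4-era example config):

```xml
<mainIndex>
  <!-- remove any stale write lock left behind by a crashed process -->
  <unlockOnStartup>true</unlockOnStartup>
</mainIndex>
```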

You'll lose the data.

 is there a way to know if there is any data waiting to be written? (if not,
 i will just restart...)

There may be a way via the API; offhand, I don't know whether the Solr
dashboard shows it.
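One place such a number does surface in this generation of Solr is the
updateHandler section of the stats page; a hedged sketch (URL assumed from
the thread, stat name per 1.4-era Solr):

```shell
# docsPending on the updateHandler entry counts documents indexed into the
# writer but not yet committed. Non-zero here means there is still something
# a successful commit could salvage.
curl -s 'http://localhost:8983/solr/admin/stats.jsp' | grep -i 'docsPending'
```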
