Re: Adding and deleting documents in the same update request

2019-01-27 Thread Luiz Armesto
You're correct. It's not a good idea to mix different operation types in the
same request; you can't rely on the order in which the operations are
executed. There is a presentation about SolrJ where they explain it:

https://youtu.be/ACPUR_GL5zM?t=1985
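
For what it's worth, here is a minimal SolrJ sketch of the "separate
requests" approach discussed in this thread. The Solr URL, collection name,
field names, and delete query below are hypothetical placeholders, not taken
from your setup:

import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.response.UpdateResponse;
import org.apache.solr.common.SolrInputDocument;

public class SeparateDeleteThenAdd {
    public static void main(String[] args) throws Exception {
        // Hypothetical base URL and collection name.
        try (HttpSolrClient client =
                new HttpSolrClient.Builder("http://localhost:8983/solr/mycollection").build()) {

            // 1. Send the delete in its own request and wait for the response
            //    before indexing anything new.
            UpdateResponse deleteResponse = client.deleteByQuery("source_version:old");
            if (deleteResponse.getStatus() != 0) {
                throw new IllegalStateException("Delete failed, not sending adds: " + deleteResponse);
            }

            // 2. Send the adds as a separate request once the delete is acknowledged.
            SolrInputDocument doc = new SolrInputDocument();
            doc.addField("id", "doc-1");
            doc.addField("source_version", "new");
            client.add(doc);

            client.commit();
        }
    }
}

Sending the two steps as separate requests makes the ordering explicit, but
note that they are still not atomic: if the adds fail after the delete has
succeeded, the collection is left without either version.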



On Sun, Jan 27, 2019, 09:14 Andreas Nilsson wrote:
> Thanks for the suggestions, Shawn.
>
>
> Unfortunately in this case, I don't think there is a natural key to use as
> the primary key due to the requirements of having multiple versions of the
> source indexed at the same time.
>
>
> I have now found a way to tweak the delete query in order for it to not
> overlap the added documents. I will go with either that or sending the
> deletes as separate requests.
>
>
> Just as a clarification, however: am I correct to assume that the
> multi-update operations are executed in an undefined order and can fail
> partially when sent like this? It's my leading theory for a bug I am
> investigating at the moment, and seems very likely given what I've seen,
> but it's also very hard to reproduce.
>
>
> Regards,
>
> Andreas Nilsson
>
>
> 
> From: Shawn Heisey 
> Sent: Wednesday, January 23, 2019 3:33 PM
> To: solr-user@lucene.apache.org
> Subject: Re: Adding and deleting documents in the same update request
>
> On 1/23/2019 5:58 AM, Andreas Nilsson wrote:
> > Related: in the case where I cannot rely on the operations order in a
> single update request, is there a recommended way to do these kinds of
> updates "atomically" in a single request? Ideally, I obviously don't want
> the collection to be left in a state where the deletion has happened but
> not the additions or the other way around.
>
> Assuming that you have a uniqueKey field and that you are replacing an
> existing document, do not issue a delete for that document at all.  When
> you index a document with the same value in the uniqueKey field as an
> existing document, Solr will handle the delete of the existing document
> for you.
>
> When a uniqueKey is present, you should only issue delete commands for
> documents that will be permanently deleted.
>
> Alternatively, send deletes in their own request, separate from
> inserts.  If you take this route, wait for acknowledgement from the
> delete before sending the insert.
>
> Thanks,
> Shawn
>
>
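
As a side note on Shawn's uniqueKey point above: with SolrJ, replacing a
document is just re-adding it with the same uniqueKey value, and Solr drops
the old version as part of the add. A minimal sketch, assuming a hypothetical
URL and collection, and assuming "id" is the schema's uniqueKey field:

import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.common.SolrInputDocument;

public class ReplaceByUniqueKey {
    public static void main(String[] args) throws Exception {
        // Hypothetical base URL and collection name.
        try (HttpSolrClient client =
                new HttpSolrClient.Builder("http://localhost:8983/solr/mycollection").build()) {

            SolrInputDocument doc = new SolrInputDocument();
            doc.addField("id", "doc-1");              // same uniqueKey as the existing document
            doc.addField("title", "updated title");   // hypothetical field

            // No explicit delete: Solr replaces the existing "doc-1" when this add is applied.
            client.add(doc);
            client.commit();
        }
    }
}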

Re: Error Opening new IndexSearcher - LockObtainFailedException

2017-09-21 Thread Luiz Armesto
Hi Shashank,

There is an open issue about this exception [1]. Can you take a look and
test the patch to see if it works in your case?

[1] https://issues.apache.org/jira/browse/SOLR-11297
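
As background on the message in the trace below: "Lock held by this virtual
machine" comes from Lucene's NativeFSLockFactory when a second IndexWriter is
opened on an index directory whose write.lock is already held by another
writer in the same JVM. A minimal Lucene-only sketch (hypothetical index
path) that reproduces the exception:

import java.nio.file.Paths;

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.store.FSDirectory;
import org.apache.lucene.store.LockObtainFailedException;

public class WriteLockDemo {
    public static void main(String[] args) throws Exception {
        // Hypothetical index directory; any empty directory will do.
        FSDirectory dir = FSDirectory.open(Paths.get("/tmp/demo-index"));

        // The first writer acquires write.lock via the default NativeFSLockFactory.
        IndexWriter first = new IndexWriter(dir, new IndexWriterConfig(new StandardAnalyzer()));

        // A second writer on the same directory in the same JVM fails with
        // LockObtainFailedException: "Lock held by this virtual machine: ...".
        try (IndexWriter second =
                new IndexWriter(dir, new IndexWriterConfig(new StandardAnalyzer()))) {
            System.out.println("Unexpectedly obtained the lock twice");
        } catch (LockObtainFailedException e) {
            System.out.println("Expected: " + e.getMessage());
        }

        first.close();
        dir.close();
    }
}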

On Sep 21, 2017 10:19 PM, "Shashank Pedamallu" wrote:

Hi,

I’m seeing the following exception in Solr that gets automatically resolved
eventually.
2017-09-22 00:18:17.243 ERROR (qtp1702660825-17) [   x:spedamallu1-core-1] o.a.s.c.CoreContainer Error creating core [spedamallu1-core-1]: Error opening new searcher
org.apache.solr.common.SolrException: Error opening new searcher
        at org.apache.solr.core.SolrCore.<init>(SolrCore.java:952)
        at org.apache.solr.core.SolrCore.<init>(SolrCore.java:816)
        at org.apache.solr.core.CoreContainer.create(CoreContainer.java:890)
        at org.apache.solr.core.CoreContainer.getCore(CoreContainer.java:1167)
        at org.apache.solr.servlet.HttpSolrCall.init(HttpSolrCall.java:252)
        at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:418)
        at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:345)
        at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:296)
        at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1691)
        at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)
        at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
        at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
        at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
        at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
        at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)
        at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
        at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
        at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
        at org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213)
        at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
        at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
        at org.eclipse.jetty.server.Server.handle(Server.java:534)
        at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:320)
        at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251)
        at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273)
        at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95)
        at org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
        at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303)
        at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)
        at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136)
        at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
        at org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589)
        at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.solr.common.SolrException: Error opening new searcher
        at org.apache.solr.core.SolrCore.openNewSearcher(SolrCore.java:1891)
        at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:2011)
        at org.apache.solr.core.SolrCore.initSearcher(SolrCore.java:1041)
        at org.apache.solr.core.SolrCore.<init>(SolrCore.java:925)
        ... 32 more
Caused by: org.apache.lucene.store.LockObtainFailedException: Lock held by this virtual machine: /Users/spedamallu/Desktop/mount-1/spedamallu1-core-1/data/index/write.lock
        at org.apache.lucene.store.NativeFSLockFactory.obtainFSLock(NativeFSLockFactory.java:127)
        at org.apache.lucene.store.FSLockFactory.obtainLock(FSLockFactory.java:41)
        at org.apache.lucene.store.BaseDirectory.obtainLock(BaseDirectory.java:45)
        at org.apache.lucene.store.FilterDirectory.obtainLock(FilterDirectory.java:104)
        at org.apache.lucene.index.IndexWriter.<init>(IndexWriter.java:804)
        at org.apache.solr.update.SolrIndexWriter.<init>(SolrIndexWriter.java:125)
        at org.apache.solr.update.SolrIndexWriter.create(SolrIndexWriter.java:100)
        at org.apache.solr.update.DefaultSolrCoreState.createMainIndexWriter(DefaultSolrCoreState.java:240)
        at org.apache.solr.update.DefaultSolrCoreState.getIndexWriter(DefaultSolrCoreState.java:114)
        at org.apache.solr.core.SolrCore.openNewSearcher(SolrCore.java:1852)

I kind of have a theory of why this is