Just a sanity check: the directory you mentioned, what kind of file system
is it on? NFS, NAS, RAID?

Regards,
    Alex

On 10 Oct 2016 1:09 AM, "Reinhard Budenstecher" <rabu...@maills.de> wrote:

>
> That's considerably larger than you initially indicated.  In just one
> index, you've got almost 300 million docs taking up well over 200GB.
> About half of them have been deleted, but they are still there.  Those
> deleted docs *DO* affect operation and memory usage.
>

Yes, that's larger than I expected. Two days ago the index was still at the
size I originally reported; the huge increase happened because of the running ETL.

>
> usage.  The only effective way to get rid of them is to optimize the
> index ... but I will warn you that with an index of that size, the time
> required for an optimize can reach into multiple hours, and will
> temporarily require considerable additional disk space.  The fact that

Three days ago we upgraded from Solr 5.5.3 to 6.2.1. Before upgrading I had
already optimized this index, and yes, it took some hours. So if two days of
ETL cause such an increase in index size, running a daily optimize is not
an option.
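For reference, this is how I trigger it. A sketch, assuming the core name
"myshop" and the default host/port; the expungeDeletes variant is a lighter
alternative I have not yet tried on this index:

```shell
# Full optimize: merges the index down to one segment and drops all
# deleted docs (hours of I/O and large temporary disk usage at this size).
curl 'http://localhost:8983/solr/myshop/update?optimize=true&maxSegments=1'

# Lighter alternative: only rewrite segments that contain deleted docs,
# instead of merging everything down to a single segment.
curl 'http://localhost:8983/solr/myshop/update?commit=true&expungeDeletes=true'
```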

>
> You don't need to create it.  Stacktraces are logged by Solr, in a file
> named solr.log, whenever most errors occur.
>

Really, there is nothing in solr.log. I did not change any logging-related
option in the config. Solr died again some hours ago and the last entry is:

2016-10-09 22:02:31.051 WARN  (qtp225493257-1097) [   ] o.a.s.h.a.LukeRequestHandler Error getting file length for [segments_9102]
java.nio.file.NoSuchFileException: /var/solr/data/myshop/data/index/segments_9102
        at sun.nio.fs.UnixException.translateToIOException(UnixException.java:86)
        at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
        at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
        at sun.nio.fs.UnixFileAttributeViews$Basic.readAttributes(UnixFileAttributeViews.java:55)
        at sun.nio.fs.UnixFileSystemProvider.readAttributes(UnixFileSystemProvider.java:144)
        at sun.nio.fs.LinuxFileSystemProvider.readAttributes(LinuxFileSystemProvider.java:99)
        at java.nio.file.Files.readAttributes(Files.java:1737)
        at java.nio.file.Files.size(Files.java:2332)
        at org.apache.lucene.store.FSDirectory.fileLength(FSDirectory.java:243)
        at org.apache.lucene.store.NRTCachingDirectory.fileLength(NRTCachingDirectory.java:128)
        at org.apache.solr.handler.admin.LukeRequestHandler.getFileLength(LukeRequestHandler.java:597)
        at org.apache.solr.handler.admin.LukeRequestHandler.getIndexInfo(LukeRequestHandler.java:585)
        at org.apache.solr.handler.admin.CoreAdminOperation.getCoreStatus(CoreAdminOperation.java:1007)
        at org.apache.solr.handler.admin.CoreAdminOperation.lambda$static$3(CoreAdminOperation.java:170)
        at org.apache.solr.handler.admin.CoreAdminOperation.execute(CoreAdminOperation.java:1056)
        at org.apache.solr.handler.admin.CoreAdminHandler$CallInfo.call(CoreAdminHandler.java:365)
        at org.apache.solr.handler.admin.CoreAdminHandler.handleRequestBody(CoreAdminHandler.java:156)
        at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:154)
        at org.apache.solr.servlet.HttpSolrCall.handleAdminRequest(HttpSolrCall.java:658)
        at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:440)
        at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:257)
        at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:208)
        at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1668)
        at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:581)
        at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
        at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
        at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
        at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1160)
        at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:511)
        at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
        at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1092)
        at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
        at org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213)
        at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
        at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
        at org.eclipse.jetty.server.Server.handle(Server.java:518)
        at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:308)
        at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:244)
        at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273)
        at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95)
        at org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
        at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceAndRun(ExecuteProduceConsume.java:246)
        at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:156)
        at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:654)
        at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:572)
        at java.lang.Thread.run(Thread.java:745)

So the only solution is to reduce index size, either by lowering the number
of docs or by reducing the amount of indexed data per doc (which is also not
an option)? We could extend RAM to 192GB, but whether that helps would only
be a guess. The number of "active" docs would remain at 150-160 million, but
the number of deleted docs caused by the ETL process is not foreseeable.
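One thing I am considering, short of a full optimize: biasing the merge
policy toward segments with many deletions. A sketch for solrconfig.xml,
assuming Solr 6.x's mergePolicyFactory syntax; the weight of 4.0 is an
illustrative guess (the TieredMergePolicy default is 2.0):

```xml
<!-- In solrconfig.xml, inside <indexConfig>: make background merges
     prefer segments with many deleted docs, so deletes are reclaimed
     sooner without an explicit optimize. -->
<mergePolicyFactory class="org.apache.solr.index.TieredMergePolicyFactory">
  <double name="reclaimDeletesWeight">4.0</double>
</mergePolicyFactory>
```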


______________________________________________________
Sent with Maills.de - more than just freemail www.maills.de
