Facet sorting algorithm for index
Hi, I have an external application that uses the output of a facet to join another dataset using the keys of the facet result. The facet query uses facet.sort=index, but at some point my application crashes because the order of the keys is not correct. If I do a Unix sort over the keys of the result with LC_ALL=C, it doesn't produce the same order. I identified a case like this: 760d1f833b764591161\84b20f28242a0 vs. 760d1f833b76459116184b20f2. Why does the line with the '\' come first? Is that sequence a single (escaped) character, or raw text of two characters? If in ASCII it has a lower ord than the character '8', then this sort makes sense ... My question here is how index sort works and how I can replicate it in C++ - Best regards -- View this message in context: http://lucene.472066.n3.nabble.com/Facet-sorting-algorithm-for-index-tp4197174.html Sent from the Solr - User mailing list archive at Nabble.com.
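For what it's worth, Lucene orders indexed terms by comparing their raw UTF-8 bytes as unsigned values, which is the same ordering `LC_ALL=C sort` produces on UTF-8 input. A quick sketch, taking the two example keys literally (with '\' as a raw backslash):

```shell
# Byte-order sort, as Lucene's facet.sort=index does for UTF-8 terms.
# '\' is 0x5C and '8' is 0x38, so the backslash line sorts *after*
# the plain one here -- if Solr returns it first, '\84' is probably
# an escape for a single byte, not two raw characters.
printf '%s\n' \
  '760d1f833b764591161\84b20f28242a0' \
  '760d1f833b76459116184b20f2' \
  | LC_ALL=C sort
```

In C++ the equivalent comparison is a byte-wise `std::memcmp` over the UTF-8 encodings, treating each byte as unsigned char.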
Query always fails if rows value is too high
I'm trying to retrieve from Solr a query in CSV format with around 500K records, and I always get this error: Expected mime type application/octet-stream but got application/xml. <?xml version="1.0" encoding="UTF-8"?> <response> <lst name="error"> <str name="msg">application/x-www-form-urlencoded content length (6040427 bytes) exceeds upload limit of 2048 KB</str> <int name="code">400</int> </lst> </response> If the rows value is lower, like 5, the query doesn't fail. What am I doing wrong? - Best regards -- View this message in context: http://lucene.472066.n3.nabble.com/Query-always-fail-if-row-value-is-too-high-tp4185047.html Sent from the Solr - User mailing list archive at Nabble.com.
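Reading the error body, the complaint is about the size of the form-encoded request being sent to Solr, not about the response: the request exceeds Solr's default 2048 KB form-data limit. One hedged fix, assuming a stock solrconfig.xml, is to raise that limit in the <requestParsers> element inside <requestDispatcher>:

```xml
<!-- in solrconfig.xml, inside <requestDispatcher>;
     2048 KB is the default, 10240 here is an arbitrary higher value -->
<requestParsers enableRemoteStreaming="true"
                multipartUploadLimitInKB="10240"
                formdataUploadLimitInKB="10240"/>
```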
How do I raise maxUpdateConnections under Solr 4.6.1?
Hi, how can I raise these two variables, maxUpdateConnections and maxUpdateConnectionsPerHost, in Solr 4.6.1 with the old solr.xml style? /Yago - Best regards -- View this message in context: http://lucene.472066.n3.nabble.com/How-I-raise-the-maxUpdateConnections-under-Solr-4-6-1-tp4173546.html Sent from the Solr - User mailing list archive at Nabble.com.
Move a shard from one disk to another
Hi, I need to move some data from one disk to another. My question is: can I move the shard and put a symlink in the place where the shard was? Does this work? - Best regards -- View this message in context: http://lucene.472066.n3.nabble.com/Move-a-shard-from-one-disk-to-another-tp4171047.html Sent from the Solr - User mailing list archive at Nabble.com.
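A sketch of the pattern, using throwaway directories in place of the real core paths (all names here are hypothetical; the node should be stopped, or the core unloaded, while the index directory is moved):

```shell
# Stand-ins for the real Solr home and the new disk (hypothetical paths).
SOLR_HOME=$(mktemp -d)
NEW_DISK=$(mktemp -d)
mkdir -p "$SOLR_HOME/test_shard1_replica1/data/index"
echo data > "$SOLR_HOME/test_shard1_replica1/data/index/segments_1"

# 1. stop Solr (or unload the core), 2. move the core dir, 3. symlink back.
mv "$SOLR_HOME/test_shard1_replica1" "$NEW_DISK/"
ln -s "$NEW_DISK/test_shard1_replica1" "$SOLR_HOME/test_shard1_replica1"

# The old path now resolves through the symlink as before.
cat "$SOLR_HOME/test_shard1_replica1/data/index/segments_1"   # -> data
```

Since Lucene opens the index through ordinary filesystem calls, it follows the symlink transparently, but do the move with the node down so no writer holds the old directory open.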
Optimize during indexing
Hi, is it possible to perform an optimize operation while continuing to index into a collection? I need to force an expunge of deletes from the index; I have millions of deletes and need to free space. - Best regards -- View this message in context: http://lucene.472066.n3.nabble.com/Optimize-during-indexing-tp4170261.html Sent from the Solr - User mailing list archive at Nabble.com.
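For the deletes specifically, a commit with expungeDeletes=true (a standard update parameter) merges away segments containing deleted docs without a full optimize, and as far as I know indexing can continue while the merge runs in the background. A sketch, assuming a collection named collection1 on localhost:

```shell
# Merge out deleted documents without a full optimize (hypothetical core name):
curl 'http://localhost:8983/solr/collection1/update?commit=true&expungeDeletes=true'

# Or a full forced merge down to one segment:
curl 'http://localhost:8983/solr/collection1/update?optimize=true&maxSegments=1'
```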
Delete data from stored documents
Hi, is it possible to remove stored data from an index by deleting the unwanted fields from schema.xml and then doing an optimize over the index? Thanks, /yago - Best regards -- View this message in context: http://lucene.472066.n3.nabble.com/Delete-data-from-stored-documents-tp4167990.html Sent from the Solr - User mailing list archive at Nabble.com.
Adding DocValues in an existing field
Hi, can I add docValues to an existing field without wiping the current data? The modification to the schema would be something like this: <field name="surrogate_id" type="tlong" indexed="true" stored="true" multiValued="false"/> becomes <field name="surrogate_id" type="tlong" indexed="true" stored="true" multiValued="false" docValues="true"/>. I want to use the current data to reindex it into the same collection, creating the docValues in the process; is that possible? I'm using Solr 4.6.1 - Best regards -- View this message in context: http://lucene.472066.n3.nabble.com/Adding-DocValues-in-an-existing-field-tp4114462.html Sent from the Solr - User mailing list archive at Nabble.com.
Facets maxcount feature?
Hi, I'm wondering if Solr has some feature like facet.mincount but for a maxcount. I have a use case where I need to know which facet values have fewer than n elements. I can do this by adding the facet.limit=-1 parameter, fetching the whole set, and removing client-side the elements that don't match the threshold. The problem is that the facet can return millions of rows, and fetching the response and reducing the set can take a while ... /Yago - Best regards -- View this message in context: http://lucene.472066.n3.nabble.com/Facets-maxcount-feature-tp4111408.html Sent from the Solr - User mailing list archive at Nabble.com.
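Until something like a facet.maxcount exists, the client-side reduction can at least be streamed rather than buffered in memory. A sketch over a hypothetical flat "value count" dump of the facet response (the threshold and input are made up):

```shell
# Keep only facet values whose count is <= 10, one value per line,
# without ever holding the full response in memory.
printf '%s\n' 'madrid 5' 'lisbon 120' 'porto 3' \
  | awk -v max=10 '$2 <= max { print $1 }'
```

Piping the (CSV- or line-oriented) facet output straight through a filter like this avoids building the whole multi-million-row set client-side before reducing it.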
Service Unavailable Error.
I'm having this error in my logs: ERROR - dat1 - 2013-12-18 11:40:11.704; org.apache.solr.update.StreamingSolrServers$1; error org.apache.solr.common.SolrException: Service Unavailable request: http://192.168.20.106:8983/solr/statistics-13_shard12_replica4/update?update.distrib=FROMLEADER&distrib.from=http%3A%2F%2F192.168.20.101%3A8983%2Fsolr%2Fstatistics-13_shard12_replica5%2F&wt=javabin&version=2 at org.apache.solr.client.solrj.impl.ConcurrentUpdateSolrServer$Runner.run(ConcurrentUpdateSolrServer.java:240) at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) at java.lang.Thread.run(Unknown Source) The machine is quiet: no load, no I/O. How is it possible for it to be unavailable? I'm on Solr 4.6.0 in SolrCloud mode. - Best regards -- View this message in context: http://lucene.472066.n3.nabble.com/Service-Unavailable-Error-tp4107242.html Sent from the Solr - User mailing list archive at Nabble.com.
No registered leader was found, but the UI says I have one.
I'm getting an error on Solr 4.6.0 about leader registration; the admin UI shows this: http://picpaste.com/a839446d0808df205aa7be78c780ed32.png But my logs say: ERROR - dat6 - 2013-12-18 11:43:54.253; org.apache.solr.common.SolrException; org.apache.solr.common.SolrException: No registered leader was found, collection:statistics-13 slice:shard23_1 at org.apache.solr.common.cloud.ZkStateReader.getLeaderRetry(ZkStateReader.java:484) at org.apache.solr.common.cloud.ZkStateReader.getLeaderRetry(ZkStateReader.java:467) at org.apache.solr.update.processor.DistributedUpdateProcessor.setupRequest(DistributedUpdateProcessor.java:223) at org.apache.solr.update.processor.DistributedUpdateProcessor.processAdd(DistributedUpdateProcessor.java:428) at org.apache.solr.update.processor.LogUpdateProcessor.processAdd(LogUpdateProcessorFactory.java:100) at org.apache.solr.handler.loader.JavabinLoader$1.update(JavabinLoader.java:89) at org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec$1.readOuterMostDocIterator(JavaBinUpdateRequestCodec.java:151) at org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec$1.readIterator(JavaBinUpdateRequestCodec.java:131) at org.apache.solr.common.util.JavaBinCodec.readVal(JavaBinCodec.java:223) at org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec$1.readNamedList(JavaBinUpdateRequestCodec.java:116) at org.apache.solr.common.util.JavaBinCodec.readVal(JavaBinCodec.java:188) at org.apache.solr.common.util.JavaBinCodec.unmarshal(JavaBinCodec.java:114) at org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec.unmarshal(JavaBinUpdateRequestCodec.java:158) at org.apache.solr.handler.loader.JavabinLoader.parseAndLoadDocs(JavabinLoader.java:99) at org.apache.solr.handler.loader.JavabinLoader.load(JavabinLoader.java:58) at org.apache.solr.handler.UpdateRequestHandler$1.load(UpdateRequestHandler.java:92) at org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:74) at
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135) at org.apache.solr.core.SolrCore.execute(SolrCore.java:1859) at org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:710) at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:413) at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:197) at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1419) at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:455) at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137) at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:557) at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231) at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1075) at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:384) at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193) at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1009) at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135) at org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:255) at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:154) at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116) at org.eclipse.jetty.server.Server.handle(Server.java:368) at org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:489) at org.eclipse.jetty.server.BlockingHttpConnection.handleRequest(BlockingHttpConnection.java:53) at org.eclipse.jetty.server.AbstractHttpConnection.content(AbstractHttpConnection.java:953) at org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.content(AbstractHttpConnection.java:1014) at 
org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:953) at org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:235) at org.eclipse.jetty.server.BlockingHttpConnection.handle(BlockingHttpConnection.java:72) at org.eclipse.jetty.server.bio.SocketConnector$ConnectorEndPoint.run(SocketConnector.java:264) at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:608) at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:543) at java.lang.Thread.run(Unknown Source) Any idea how I can fix this? - Best regards -- View this message in context:
Question about external file fields
Hi, I read this post http://1opensourcelover.wordpress.com/ about external file fields (EFFs) and found it very interesting. Can someone give me more use cases showing the utility of EFFs? /Yago - Best regards -- View this message in context: http://lucene.472066.n3.nabble.com/Question-about-external-file-fields-tp4105213.html Sent from the Solr - User mailing list archive at Nabble.com.
Migration from old solr.xml to the new solr.xml style
Hi, is there some way to automatically migrate from the old solr.xml style to the new one? /Yago - Best regards -- View this message in context: http://lucene.472066.n3.nabble.com/Migration-from-old-solr-xml-to-the-new-solr-xml-style-tp4104154.html Sent from the Solr - User mailing list archive at Nabble.com.
Question about upgrading Solr and DocValues
Hi, after reading this link about DocValues and being pointed by Mark Miller to raise the question on the mailing list, I have some questions about the codec implementation note: "Note that only the default implementation is supported by future versions of Lucene: if you try an alternative format, you may need to switch back to the default and rewrite your index (e.g. forceMerge) before upgrading." My question is how I can do this; neither the wiki nor the ref guide explains how this process is done. I'm using the per-field DocValues formats, therefore I'm not using the default implementation, and this scares me a little, because it could compromise future Solr upgrades. /Yago - Best regards -- View this message in context: http://lucene.472066.n3.nabble.com/Question-about-upgrading-Sorl-and-DocValues-tp4102007.html Sent from the Solr - User mailing list archive at Nabble.com.
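For what it's worth, the usual reading of that note is a three-step process: remove the docValuesFormat override from the schema, reload the collection so the default codec applies to newly written segments, then force-merge so every existing segment is rewritten with the default format. A sketch using standard API calls (the collection name is hypothetical):

```shell
# After removing docValuesFormat="Disk" from the fieldType in the schema:
curl 'http://localhost:8983/solr/admin/collections?action=RELOAD&name=statistics-13'

# Rewrite all segments with the now-default codec (Lucene's forceMerge):
curl 'http://localhost:8983/solr/statistics-13/update?optimize=true&maxSegments=1'
```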
Configure maxConnectionsPerHost
Hi, where can I configure maxConnectionsPerHost in Solr? I'm using Solr 4.5.1 with the old style of solr.xml (I have a lot of collections, and switching to the new style of solr.xml is too much work). - Best regards -- View this message in context: http://lucene.472066.n3.nabble.com/Configure-maxConnectionsPerHost-tp4100870.html Sent from the Solr - User mailing list archive at Nabble.com.
Document routing question.
Hi, I read this post http://searchhub.org/2013/06/13/solr-cloud-document-routing and I have some questions. When a tenant is too large to fit on one shard, we can specify the number of bits from the shard key that we want to use. If we set a doc's key as tenant1/4!docXXX, we are saying to spread the docs over 1/4th of the collection. If the collection has 4 shards, does this mean that all docs with the same shard key will go to the same shard, or will we spread 25% to each shard? Another question: at query time, must we configure the shard keys param as shard.keys=tenant1! or as shard.keys=tenant1/4!? /Yago - Best regards -- View this message in context: http://lucene.472066.n3.nabble.com/Document-routing-question-tp4100938.html Sent from the Solr - User mailing list archive at Nabble.com.
Re: SolrCloud unstable
Some time ago I posted this issue: http://lucene.472066.n3.nabble.com/Leader-election-fails-in-some-point-td4096514.html The link for the screenshot is no longer available. When some shard fails and loses the leader, I get those exceptions. - Best regards -- View this message in context: http://lucene.472066.n3.nabble.com/SolrCloud-unstable-tp4100419p4100432.html Sent from the Solr - User mailing list archive at Nabble.com.
2 replicas with different num of documents
Hi, I have 2 replicas with different numbers of documents; is that possible? I'm using Solr 4.5.1.
Replica 1: version: 77847, numDocs: 5951879, maxDoc: 5951978, deletedDocs: 99
Replica 2: version: 76011, numDocs: 5951793, maxDoc: 5951965, deletedDocs: 172
Isn't the tlog supposed to ensure data consistency? - Best regards -- View this message in context: http://lucene.472066.n3.nabble.com/2-replicas-with-different-num-of-documents-tp4099279.html Sent from the Solr - User mailing list archive at Nabble.com.
Proposal for new feature, cold replicas, brainstorming
For some time now I've been wondering whether it's possible to have replicas of a shard that stay synchronized but in a state where they can't accept queries, only updates. Such a replica, in "replication mode", would only wake up to accept queries if it were the last replica alive, and would go back to replication mode when another replica became alive and synchronized. The motivation is simple: I want replication, but I don't want n active replicas with full resources allocated (caches and so on). This is useful in environments where replication is needed but a high query throughput is not fundamental and resources are limited. I know that right now this is not possible, but I think it's a feature that could be implemented easily by creating a new status for shards. The bottom-line question is: am I the only one with this kind of requirement? Does a functionality like this make sense? - Best regards -- View this message in context: http://lucene.472066.n3.nabble.com/Proposal-for-new-feature-cold-replicas-brainstorming-tp4097501.html Sent from the Solr - User mailing list archive at Nabble.com.
Solr 4.5 router.name issue?
Hi, I created a collection with this command (Solr 4.5): http://localhost:8983/solr/admin/collections?action=CREATE&name=testDocValues&collection.configName=page-statistics&numShards=12&maxShardsPerNode=12&router.field=month The documentation says that the default router.name is compositeId. The clusterstate.json shows compositeId for the testDocValues collection, but the zookeeper node /collections/testDocValues says: { "configName":"page-statistics", "router":{"name":"implicit"} } Is this correct, or is it some kind of issue? - Best regards -- View this message in context: http://lucene.472066.n3.nabble.com/Solr-4-5-router-name-issue-tp4097110.html Sent from the Solr - User mailing list archive at Nabble.com.
Question about sharding and overlapping
Hi, I created a collection with 12 shards and router.field=month (the month field will have values between 1 and 12). I noticed that I have shards with more than one month in them. This can leave some shards empty, and I want the documents of one month in each shard. My question is: how do I configure the sharding method to avoid overlaps? /Yago - Best regards -- View this message in context: http://lucene.472066.n3.nabble.com/Question-about-sharding-and-overlapping-tp4097111.html Sent from the Solr - User mailing list archive at Nabble.com.
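With the default compositeId router, each month value simply hashes somewhere into the 32-bit range, so several months can land in one shard and others can be empty. One hedged way to get exactly one month per shard is the implicit router with explicitly named shards; a sketch (shard names m1..m12 are an assumption, and with router.field the indexed month values would then have to match those shard names):

```shell
# Create a collection whose shards are addressed directly by the value
# of router.field (hypothetical collection and shard names):
curl 'http://localhost:8983/solr/admin/collections?action=CREATE&name=by-month&router.name=implicit&shards=m1,m2,m3,m4,m5,m6,m7,m8,m9,m10,m11,m12&router.field=month'
```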
Question about docvalues
Hi, if I have a field (named dv_field) configured to be indexed, stored, and with docValues=true, how do I know that when I run a query like q=*:*&facet=true&facet.field=dv_field I'm really using the docValues and not the normal way? Is it necessary to duplicate the field and set indexed and stored to false, leaving only the docValues property set to true? - Best regards -- View this message in context: http://lucene.472066.n3.nabble.com/Question-about-docvalues-tp4096802.html Sent from the Solr - User mailing list archive at Nabble.com.
Leader election fails at some point.
Hi, in this screenshot I have a shard with two replicas and no leader: http://picpaste.com/qf2jdkj8.png On the machine with the green shard I found this exception: INFO - dat5 - 2013-10-18 22:48:04.775; org.apache.solr.handler.admin.CoreAdminHandler; Going to wait for coreNodeName: 192.168.20.106:8983_solr_statistics-13_shard18_replica4, state: recovering, checkLive: true, onlyIfLeader: true ERROR - dat5 - 2013-10-18 22:48:04.775; org.apache.solr.common.SolrException; org.apache.solr.common.SolrException: We are not the leader at org.apache.solr.handler.admin.CoreAdminHandler.handleWaitForStateAction(CoreAdminHandler.java:824) at org.apache.solr.handler.admin.CoreAdminHandler.handleRequestBody(CoreAdminHandler.java:192) at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135) at org.apache.solr.servlet.SolrDispatchFilter.handleAdminRequest(SolrDispatchFilter.java:655) at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:246) at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:195) at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1419) at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:455) at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137) at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:557) -- at org.eclipse.jetty.server.AbstractHttpConnection.headerComplete(AbstractHttpConnection.java:942) at org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.headerComplete(AbstractHttpConnection.java:1004) at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:640) at org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:235) at org.eclipse.jetty.server.BlockingHttpConnection.handle(BlockingHttpConnection.java:72) at org.eclipse.jetty.server.bio.SocketConnector$ConnectorEndPoint.run(SocketConnector.java:264) at
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:608) at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:543) at java.lang.Thread.run(Unknown Source) On the machine with the shard in recovery state I found this exception: INFO - dat6 - 2013-10-18 22:48:44.131; org.apache.solr.cloud.ShardLeaderElectionContext; Running the leader process for shard shard18 INFO - dat6 - 2013-10-18 22:48:44.137; org.apache.solr.cloud.ShardLeaderElectionContext; Checking if I should try and be the leader. INFO - dat6 - 2013-10-18 22:48:44.138; org.apache.solr.cloud.ShardLeaderElectionContext; My last published State was recovering, I won't be the leader. INFO - dat6 - 2013-10-18 22:48:44.139; org.apache.solr.cloud.ShardLeaderElectionContext; There may be a better leader candidate than us - going back into recovery INFO - dat6 - 2013-10-18 22:48:44.142; org.apache.solr.update.DefaultSolrCoreState; Running recovery - first canceling any ongoing recovery WARN - dat6 - 2013-10-18 22:48:44.142; org.apache.solr.cloud.RecoveryStrategy; Stopping recovery for zkNodeName=192.168.20.106:8983_solr_statistics-13_shard18_replica4 core=statistics-13_shard18_replica4 INFO - dat6 - 2013-10-18 22:48:45.131; org.apache.solr.cloud.RecoveryStrategy; Finished recovery process. core=statistics-13_shard18_replica4 INFO - dat6 - 2013-10-18 22:48:45.131; org.apache.solr.cloud.RecoveryStrategy; Starting recovery process. 
core=statistics-13_shard18_replica4 recoveringAfterStartup=false INFO - dat6 - 2013-10-18 22:48:45.131; org.apache.solr.cloud.ZkController; publishing core=statistics-13_shard18_replica4 state=recovering INFO - dat6 - 2013-10-18 22:48:45.132; org.apache.solr.cloud.ZkController; numShards not found on descriptor - reading it from system property INFO - dat6 - 2013-10-18 22:48:45.141; org.apache.solr.client.solrj.impl.HttpClientUtil; Creating new http client, config:maxConnections=128maxConnectionsPerHost=32followRedirects=false ERROR - dat6 - 2013-10-18 22:48:45.143; org.apache.solr.common.SolrException; Error while trying to recover. core=statistics-13_shard18_replica4:org.apache.solr.client.solrj.impl.HttpSolrServer$RemoteSolrException: We are not the leader at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:424) at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:180) at org.apache.solr.cloud.RecoveryStrategy.sendPrepRecoveryCmd(RecoveryStrategy.java:198) at org.apache.solr.cloud.RecoveryStrategy.doRecovery(RecoveryStrategy.java:342) at org.apache.solr.cloud.RecoveryStrategy.run(RecoveryStrategy.java:219) No leader means we can't index data because a 503 http status code is returned. Is this the normal behaviour or a bug? - Best regards -- View this message in context:
Cores with lots of folders with the prefix index.XXXXXXX
Hi, I have some cores with lots of folders in the format index.X; my question is why? The collateral effect of this is shards 50% larger than their replicas on other nodes. Is there any way to delete these folders to free space? Is it a bug? /Yago - Best regards -- View this message in context: http://lucene.472066.n3.nabble.com/Cores-with-lot-of-folders-with-prefix-index-XXX-tp4094920.html Sent from the Solr - User mailing list archive at Nabble.com.
Split shard doesn't persist data correctly on solr.xml
I noticed that when a SPLITSHARD operation finishes, solr.xml is not updated properly.
# Parent solr.xml: <core numShards="2" name="test_shard1_replica1" instanceDir="test_shard1_replica1" shard="shard1" collection="test"/>
# Children solr.xml: <core name="test_shard1_0_replica1" shardState="construction" instanceDir="test_shard1_0_replica1" shard="shard1_0" collection="test"> <property name="shardRange" value="8000-bfff"/> </core> <core name="test_shard1_1_replica1" shardState="construction" instanceDir="test_shard1_1_replica1" shard="shard1_1" collection="test"> <property name="shardRange" value="c000-"/> </core>
# Parent clusterstate: shard1:{ "range":"8000-", "state":"inactive", "replicas":{"192.168.2.18:8983_solr_test_shard1_replica1":{ "state":"active", "base_url":"http://192.168.2.18:8983/solr", "core":"test_shard1_replica1", "node_name":"192.168.2.18:8983_solr", "leader":"true"}}},
# Children clusterstate: shard1_0:{ "range":"8000-bfff", "state":"active", "replicas":{"192.168.2.18:8983_solr_test_shard1_0_replica1":{ "state":"active", "base_url":"http://192.168.2.18:8983/solr", "core":"statistics-11_shard1_0_replica1", "node_name":"192.168.2.18:8983_solr", "leader":"true"}}}, shard1_1:{ "range":"c000-", "state":"active", "replicas":{"192.168.2.18:8983_solr_test_shard1_1_replica1":{ "state":"active", "base_url":"http://192.168.2.18:8983/solr", "core":"statistics-11_shard1_1_replica1", "node_name":"192.168.2.18:8983_solr", "leader":"true"}}},
I only noticed this because I did a restart and the nodes were shown as down in the cloud graph. The shards where I did a manual replication were written to the solr.xml file as expected, but not at the time that I executed the CREATE command. Command: curl 'http://192.168.2.18:8983/solr/admin/cores?action=CREATE&name=test_shard2_0_replicaX&collection=test&shard=shard2_0' Create replicaA: solr.xml records nothing about replicaA. Create replicaB: solr.xml records nothing about replicaB, but now registers the data about replicaA. It's like I have a lag of one operation; is this normal? 
/Yago - Best regards -- View this message in context: http://lucene.472066.n3.nabble.com/Split-shard-doesn-t-persist-data-correctly-on-solr-xml-tp4093996.html Sent from the Solr - User mailing list archive at Nabble.com.
Solr 4.5 - CoreAPI issue with CREATE
Hi, I'm creating replicas for my shards manually, and the solr.xml config doesn't save the changes (the solr.xml persist attribute is true). The command used is: curl 'http://192.168.2.18:8983/solr/admin/cores?action=CREATE&name=test_shard1_replica2&collection=test&shard=shard1' Is anyone else seeing the same behaviour? /Yago - Best regards -- View this message in context: http://lucene.472066.n3.nabble.com/Solr-4-5-CoreAPI-issue-with-CREATE-tp4094001.html Sent from the Solr - User mailing list archive at Nabble.com.
Shard split issue
Hi, yesterday I did a SPLITSHARD operation on one of my shards (50 GB in size); today the cluster state says that the children are in construction state and the parent is active. Isn't the parent supposed to move to the inactive state and the two new shards to the active state? Can a split take more than 12 hours? /Yago - Best regards -- View this message in context: http://lucene.472066.n3.nabble.com/Shard-split-issue-tp4093633.html Sent from the Solr - User mailing list archive at Nabble.com.
SolrCloud distributed search question.
Hi, when a distributed search is done, the initial query is forwarded to all shards that are part of the specific collection we are querying. My question is: which machine does the aggregation of the results from the shards? Is it the machine that receives the initial request? I need to control which machine does the aggregation. /Yago - Best regards -- View this message in context: http://lucene.472066.n3.nabble.com/SolrCloud-distribute-search-question-tp4093523.html Sent from the Solr - User mailing list archive at Nabble.com.
ALIAS feature: what can it be used for?
Today I was thinking about the ALIAS feature and its utility in Solr. Can anyone explain to me, with an example, where this feature may be useful? Is it possible to have an ALIAS of multiple collections? If I do a write to the alias, is that write replicated to all the collections? /Yago - Best regards -- View this message in context: http://lucene.472066.n3.nabble.com/ALIAS-feature-can-be-used-for-what-tp4092095.html Sent from the Solr - User mailing list archive at Nabble.com.
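For reference, aliases are managed through the Collections API; a sketch with hypothetical names:

```shell
# Point a read alias at two collections (e.g. monthly indexes):
curl 'http://localhost:8983/solr/admin/collections?action=CREATEALIAS&name=statistics&collections=statistics-12,statistics-13'

# Queries against /solr/statistics/select now span both collections.
```

Update behaviour through a multi-collection alias has varied by version; aliases are primarily a read-side tool, e.g. pointing a stable "current" alias at a freshly built collection and swapping it atomically.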
Solr Ref guide question
Hi all, I think there is a gap in Solr's ref guide. The section "Running Solr" says to run Solr using the command: $ java -jar start.jar But if I do this on a fresh install, I get a stack trace like this: http://pastebin.com/5YRRccTx Is this behavior expected? - Best regards -- View this message in context: http://lucene.472066.n3.nabble.com/Solr-Ref-guide-question-tp4086142.html Sent from the Solr - User mailing list archive at Nabble.com.
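In Solr 4.x, start.jar has to be launched from the example directory of the distribution so that Jetty finds its contexts and the default solr home; a sketch:

```shell
# From the root of the extracted Solr 4.x distribution:
cd example
java -jar start.jar
# Solr then comes up on http://localhost:8983/solr by default.
```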
DocValues on a StrField yields a codec error
Hi, I have this error on a solr.StrField defined in my schema: FieldType 'string_dv' is configured with a docValues format, but the codec does not support it. <fieldtype name="string_dv" class="solr.StrField" sortMissingLast="true" omitNorms="true" docValuesFormat="Disk"/> In the documentation http://wiki.apache.org/solr/DocValues#Specifying_a_different_Codec_implementation the StrField field format appears as supported. What am I doing wrong? - Best regards -- View this message in context: http://lucene.472066.n3.nabble.com/Docvalue-on-a-StrField-equal-a-codec-error-tp4067347.html Sent from the Solr - User mailing list archive at Nabble.com.
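For reference, per-field docValues formats only take effect when solrconfig.xml declares the schema-aware codec factory; without it, the default codec rejects the docValuesFormat attribute with exactly this error. A minimal sketch:

```xml
<!-- in solrconfig.xml: let the schema drive per-field codec choices -->
<codecFactory class="solr.SchemaCodecFactory"/>
```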
SPLITSHARD: time out error
Hi, I get a timeout error when I try to split a collection with 15M documents. The exception (Solr version 4.3): 542468 [catalina-exec-27] INFO org.apache.solr.servlet.SolrDispatchFilter – [admin] webapp=null path=/admin/collections params={shard=00&action=SPLITSHARD&collection=ST-0112_replicated} status=500 QTime=300028 542469 [catalina-exec-27] ERROR org.apache.solr.servlet.SolrDispatchFilter – null:org.apache.solr.common.SolrException: splitshard the collection time out:300s at org.apache.solr.handler.admin.CollectionsHandler.handleResponse(CollectionsHandler.java:166) at org.apache.solr.handler.admin.CollectionsHandler.handleSplitShardAction(CollectionsHandler.java:300) at org.apache.solr.handler.admin.CollectionsHandler.handleRequestBody(CollectionsHandler.java:136) at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135) at org.apache.solr.servlet.SolrDispatchFilter.handleAdminRequest(SolrDispatchFilter.java:608) at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:215) at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:155) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:243) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:210) at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:222) at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:123) at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:171) at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:99) at org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:947) at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:118) at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:408) at org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:1009) at 
org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:589) at org.apache.tomcat.util.net.JIoEndpoint$SocketProcessor.run(JIoEndpoint.java:310) at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) at java.lang.Thread.run(Unknown Source) 582557 [catalina-exec-39] INFO org.apache.solr.update.SolrIndexSplitter – SolrIndexSplitter: partition #1 582561 [catalina-exec-39] INFO org.apache.solr.core.SolrCore – SolrDeletionPolicy.onInit: commits:num=1 commit{dir=/disk2/node00.solrcloud/solr/home/0112_replicated_00_1_replica1/data/index,segFN=segments_1,generation=1,filenames=[segments_1] 582563 [catalina-exec-39] INFO org.apache.solr.core.SolrCore – newest commit = 1[segments_1] How can I split my collection without this error? - Best regards -- View this message in context: http://lucene.472066.n3.nabble.com/SPLITSHARD-time-out-error-tp4066991.html Sent from the Solr - User mailing list archive at Nabble.com.
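One thing worth knowing: the 300 s limit applies to the Collections API request itself, not to the split, which continues running on the overseer (the SolrIndexSplitter lines logged after the exception suggest exactly that). Rather than re-issuing the command, one sketch for watching it finish, assuming the standard admin zookeeper endpoint:

```shell
# Poll the cluster state until the sub-shards become active
# (hypothetical host; the split keeps running after the API timeout):
curl 'http://localhost:8983/solr/zookeeper?path=/clusterstate.json'
```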
Disable all caches in solr
Hi, how can I disable all the caches that Solr uses? Regards /Yago - Best regards -- View this message in context: http://lucene.472066.n3.nabble.com/Disable-all-caches-in-solr-tp4066517.html Sent from the Solr - User mailing list archive at Nabble.com.
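A sketch of one way to do it, assuming a stock solrconfig.xml: shrink the Solr-level cache declarations (filterCache, queryResultCache, documentCache and, if present, fieldValueCache) to zero entries with no autowarming:

```xml
<!-- in solrconfig.xml, inside <query>: effectively disable Solr's caches -->
<filterCache      class="solr.FastLRUCache" size="0" initialSize="0" autowarmCount="0"/>
<queryResultCache class="solr.LRUCache"     size="0" initialSize="0" autowarmCount="0"/>
<documentCache    class="solr.LRUCache"     size="0" initialSize="0" autowarmCount="0"/>
```

Note this only covers Solr's own caches; the Lucene-level FieldCache is not configurable this way.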
Query syntax error: Cannot parse ....
Hi, when I try to run this query: http://localhost:8983/solr/coreA/select?q=source_id:(7D1FFB# OR 7D1FFB) city:ES I get the error below: <response> <lst name="responseHeader"> <int name="status">400</int> <int name="QTime">1</int> </lst> <lst name="error"> <str name="msg">org.apache.solr.search.SyntaxError: Cannot parse 'source_id:(7D1FFB': Encountered EOF at line 1, column 43. Was expecting one of: AND ... OR ... NOT ... "+" ... "-" ... BAREOPER ... "(" ... ")" ... "*" ... "^" ... QUOTED ... TERM ... FUZZY_SLOP ... PREFIXTERM ... WILDTERM ... REGEXPTERM ... "[" ... "{" ... LPARAMS ... NUMBER ...</str> <int name="code">400</int> </lst> </response> How can I fix the query? Regards /Yago - Best regards -- View this message in context: http://lucene.472066.n3.nabble.com/Query-syntax-error-Cannot-parse-tp4066560.html Sent from the Solr - User mailing list archive at Nabble.com.
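The '#' is the giveaway: in a URL it starts the fragment, so the client drops everything from the '#' onward before sending the request, which is exactly why the parser sees the query cut off at 'source_id:(7D1FFB'. A sketch of one fix, letting curl URL-encode the parameter (host and core as in the original query):

```shell
# --data-urlencode turns '#' into %23 (and encodes the spaces) so the
# whole query string reaches Solr; -G keeps the request a GET.
curl -G 'http://localhost:8983/solr/coreA/select' \
  --data-urlencode 'q=source_id:(7D1FFB# OR 7D1FFB) city:ES'
```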
The SPLITSHARD action doesn't honor the replicationFactor of a collection
Hi, I'm playing a little with the new SPLITSHARD feature. In my first tests I realised that if I split a shard in a collection with replicationFactor=2, for example, the shards produced by the split don't keep the same replicationFactor. Is this the intended behaviour of split? If so, how can I restore the replication factor for the new shards? - Best regards -- View this message in context: http://lucene.472066.n3.nabble.com/The-SPLITSHARD-action-doesn-t-honor-the-replicationFactor-of-a-collection-tp4064990.html Sent from the Solr - User mailing list archive at Nabble.com.
SPLITSHARD: Error if I try to split a shard again
Hi, I get this error if I try to split a shard again (the Solr version is 4.3.0): 19911726 [qtp1949819426-565] INFO org.apache.solr.handler.admin.CollectionsHandler ? Splitting shard : shard=shard1action=SPLITSHARDcollection=RPS-00-12 19911729 [main-EventThread] INFO org.apache.solr.cloud.DistributedQueue ? Watcher fired on path: /overseer/collection-queue-work state: SyncConnected type NodeChildrenChanged 19911730 [Overseer-89728901910364162-localhost:8983_solr-n_31] INFO org.apache.solr.cloud.OverseerCollectionProcessor ? Overseer Collection Processor: Get the message id:/overseer/collection-queue-work/qn-005302 message:{ operation:splitshard, shard:shard1, collection:RPS-00-12} 19911730 [Overseer-89728901910364162-localhost:8983_solr-n_31] INFO org.apache.solr.cloud.OverseerCollectionProcessor ? Split shard invoked 19911730 [Overseer-89728901910364162-localhost:8983_solr-n_31] INFO org.apache.solr.cloud.OverseerCollectionProcessor ? Unloading core: RPS-00-12_shard1_0_replica1 from node: localhost:8983_solr 19911730 [Overseer-89728901910364162-localhost:8983_solr-n_31] INFO org.apache.solr.cloud.OverseerCollectionProcessor ? Unloading core: RPS-00-12_shard1_1_replica1 from node: localhost:8983_solr 19911734 [qtp1949819426-318] INFO org.apache.solr.handler.admin.CoreAdminHandler ? Unregistering core RPS-00-12_shard1_0_replica1 from cloudstate. 19911734 [qtp1949819426-1639] INFO org.apache.solr.handler.admin.CoreAdminHandler ? Unregistering core RPS-00-12_shard1_1_replica1 from cloudstate. 19911735 [qtp1949819426-318] INFO org.apache.solr.core.SolrCore ? [RPS-00-12_shard1_0_replica1] CLOSING SolrCore org.apache.solr.core.SolrCore@72236aea 19911735 [qtp1949819426-1639] INFO org.apache.solr.core.SolrCore ? [RPS-00-12_shard1_1_replica1] CLOSING SolrCore org.apache.solr.core.SolrCore@73406330 19911736 [qtp1949819426-318] INFO org.apache.solr.update.UpdateHandler ? 
closing DirectUpdateHandler2{commits=0,autocommit maxDocs=5000,autocommit maxTime=1ms,autocommits=0,soft autocommit maxTime=2500ms,soft autocommits=0,optimizes=0,rollbacks=0,expungeDeletes=0,docsPending=0,adds=0,deletesById=0,deletesByQuery=0,errors=0,cumulative_adds=0,cumulative_deletesById=0,cumulative_deletesByQuery=0,cumulative_errors=0} 19911736 [qtp1949819426-1639] INFO org.apache.solr.update.UpdateHandler ? closing DirectUpdateHandler2{commits=0,autocommit maxDocs=5000,autocommit maxTime=1ms,autocommits=0,soft autocommit maxTime=2500ms,soft autocommits=0,optimizes=0,rollbacks=0,expungeDeletes=0,docsPending=0,adds=0,deletesById=0,deletesByQuery=0,errors=0,cumulative_adds=0,cumulative_deletesById=0,cumulative_deletesByQuery=0,cumulative_errors=0} 19911736 [qtp1949819426-318] INFO org.apache.solr.update.SolrCoreState ? Closing SolrCoreState 19911736 [qtp1949819426-318] INFO org.apache.solr.update.DefaultSolrCoreState ? SolrCoreState ref count has reached 0 - closing IndexWriter 19911736 [qtp1949819426-318] INFO org.apache.solr.update.DefaultSolrCoreState ? closing IndexWriter with IndexWriterCloser 19911736 [qtp1949819426-1639] INFO org.apache.solr.update.SolrCoreState ? Closing SolrCoreState 19911738 [qtp1949819426-1639] INFO org.apache.solr.update.DefaultSolrCoreState ? SolrCoreState ref count has reached 0 - closing IndexWriter 19911740 [qtp1949819426-1639] INFO org.apache.solr.update.DefaultSolrCoreState ? closing IndexWriter with IndexWriterCloser 19911743 [qtp1949819426-318] INFO org.apache.solr.core.SolrCore ? SolrDeletionPolicy.onCommit: commits:num=2 commit{dir=/usr/local/Cellar/solr/4.3.0/libexec/example/solr/RPS-00-12_shard1_0_replica1/data/index,segFN=segments_1,generation=1,filenames=[segments_1] commit{dir=/usr/local/Cellar/solr/4.3.0/libexec/example/solr/RPS-00-12_shard1_0_replica1/data/index,segFN=segments_2,generation=2,filenames=[_0.fnm, segments_2, _0.fdx, _0.si, _0.fdt] 19911743 [qtp1949819426-318] INFO org.apache.solr.core.SolrCore ? 
newest commit = 2[_0.fnm, segments_2, _0.fdx, _0.si, _0.fdt] 19911746 [qtp1949819426-1639] INFO org.apache.solr.core.SolrCore ? SolrDeletionPolicy.onCommit: commits:num=2 commit{dir=/usr/local/Cellar/solr/4.3.0/libexec/example/solr/RPS-00-12_shard1_1_replica1/data/index,segFN=segments_1,generation=1,filenames=[segments_1] commit{dir=/usr/local/Cellar/solr/4.3.0/libexec/example/solr/RPS-00-12_shard1_1_replica1/data/index,segFN=segments_2,generation=2,filenames=[_0.fnm, segments_2, _0.fdx, _0.si, _0.fdt] 19911746 [qtp1949819426-318] INFO org.apache.solr.core.SolrCore ? [RPS-00-12_shard1_0_replica1] Closing main searcher on request. 19911746 [qtp1949819426-1639] INFO org.apache.solr.core.SolrCore ? newest commit = 2[_0.fnm, segments_2, _0.fdx, _0.si, _0.fdt] 19911746 [qtp1949819426-318] INFO org.apache.solr.core.CachingDirectoryFactory ? Closing StandardDirectoryFactory - 2 directories currently being tracked 19911748 [qtp1949819426-318] INFO org.apache.solr.core.CachingDirectoryFactory ?
Error on recovery
Hi, I have a node that can't finish the recovery. The log shows this error: 3836028 [RecoveryThread] ERROR org.apache.solr.cloud.RecoveryStrategy – Recovery failed - trying again... (0) core=ST-XXX_0712 3836028 [RecoveryThread] ERROR org.apache.solr.cloud.RecoveryStrategy – Recovery failed - interrupted. core=ST-XXX_0712 3836028 [RecoveryThread] ERROR org.apache.solr.cloud.RecoveryStrategy – Recovery failed - I give up. core=ST-XXX_0712 3836028 [RecoveryThread] INFO org.apache.solr.cloud.ZkController – publishing core=ST-XXX_0712 state=recovery_failed 3836028 [RecoveryThread] INFO org.apache.solr.cloud.ZkController – numShards not found on descriptor - reading it from system property 3836124 [RecoveryThread] WARN org.apache.solr.cloud.RecoveryStrategy – Stopping recovery for zkNodeName=192.168.20.49:8983_solr_ST-XXX_0712core=ST-XXX_0712 3836124 [RecoveryThread] INFO org.apache.solr.cloud.RecoveryStrategy – Finished recovery process. core=ST-XXX_0712 3836125 [RecoveryThread] INFO org.apache.solr.cloud.RecoveryStrategy – Starting recovery process. 
core=ST-XXX_0712 recoveringAfterStartup=false 3836125 [RecoveryThread] ERROR org.apache.solr.update.UpdateLog – Exception reading versions from log java.nio.channels.ClosedChannelException at sun.nio.ch.FileChannelImpl.ensureOpen(Unknown Source) at sun.nio.ch.FileChannelImpl.read(Unknown Source) at org.apache.solr.update.ChannelFastInputStream.readWrappedStream(TransactionLog.java:752) at org.apache.solr.common.util.FastInputStream.refill(FastInputStream.java:89) at org.apache.solr.common.util.FastInputStream.readUnsignedByte(FastInputStream.java:71) at org.apache.solr.common.util.FastInputStream.readInt(FastInputStream.java:216) at org.apache.solr.update.TransactionLog$ReverseReader.init(TransactionLog.java:670) at org.apache.solr.update.TransactionLog.getReverseReader(TransactionLog.java:573) at org.apache.solr.update.UpdateLog$RecentUpdates.update(UpdateLog.java:920) at org.apache.solr.update.UpdateLog$RecentUpdates.access$000(UpdateLog.java:863) at org.apache.solr.update.UpdateLog.getRecentUpdates(UpdateLog.java:1014) at org.apache.solr.cloud.RecoveryStrategy.doRecovery(RecoveryStrategy.java:259) at org.apache.solr.cloud.RecoveryStrategy.run(RecoveryStrategy.java:223) - Best regards -- View this message in context: http://lucene.472066.n3.nabble.com/Error-on-recovery-tp4062413.html Sent from the Solr - User mailing list archive at Nabble.com.
Re: Lazy load Error on UI analysis area
Ok, I will do a fresh install in a VM and check whether the error reproduces. - Best regards -- View this message in context: http://lucene.472066.n3.nabble.com/Lazy-load-Error-on-UI-analysis-area-tp4061291p4061512.html Sent from the Solr - User mailing list archive at Nabble.com.
Re: Lazy load Error on UI analysis area
I found the error: the class of the field analysis request handler was not set properly. - Best regards -- View this message in context: http://lucene.472066.n3.nabble.com/Lazy-load-Error-on-UI-analysis-area-tp4061291p4061526.html Sent from the Solr - User mailing list archive at Nabble.com.
ERROR: incref on a closed log
Hi all, I upgraded my Solr cluster today from 4.2.1 to 4.3. On startup I see errors like this: 2449515 [catalina-exec-51] ERROR org.apache.solr.core.SolrCore – org.apache.solr.common.SolrException: incref on a closed log: tlog{file=/opt/node02.solrcloud/solr/home/XXX/data/tlog/tlog.000 refcount=1} at org.apache.solr.update.TransactionLog.incref(TransactionLog.java:492) at org.apache.solr.update.UpdateLog.getRecentUpdates(UpdateLog.java:998) at org.apache.solr.handler.component.RealTimeGetComponent.processGetVersions(RealTimeGetComponent.java:515) at org.apache.solr.handler.component.RealTimeGetComponent.process(RealTimeGetComponent.java:92) at org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:208) at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135) at org.apache.solr.core.SolrCore.execute(SolrCore.java:1816) at org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:656) at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:359) at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:155) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:243) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:210) at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:222) at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:123) at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:171) at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:99) at org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:947) at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:118) at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:408) at 
org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:1009) at org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:589) at org.apache.tomcat.util.net.JIoEndpoint$SocketProcessor.run(JIoEndpoint.java:310) at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) at java.lang.Thread.run(Unknown Source) Anyone know what could be happening? - Best regards -- View this message in context: http://lucene.472066.n3.nabble.com/ERROR-incref-on-a-closed-log-tp4061609.html Sent from the Solr - User mailing list archive at Nabble.com.
Lazy load Error on UI analysis area
Hi, I was exploring the admin UI and in the analysis section I got a lazy load error. The log says: INFO - 2013-05-07 11:52:06.412; org.apache.solr.core.SolrCore; [] webapp=/solr path=/admin/luke params={_=1367923926380show=schemawt=json} status=0 QTime=23 ERROR - 2013-05-07 11:52:06.499; org.apache.solr.common.SolrException; null:org.apache.solr.common.SolrException: lazy loading error at org.apache.solr.core.RequestHandlers$LazyRequestHandlerWrapper.getWrappedHandler(RequestHandlers.java:258) at org.apache.solr.core.RequestHandlers$LazyRequestHandlerWrapper.handleRequest(RequestHandlers.java:240) at org.apache.solr.core.SolrCore.execute(SolrCore.java:1816) at org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:656) at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:359) at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:155) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:243) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:210) at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:222) at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:123) at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:171) at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:99) at org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:931) at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:118) at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:407) at org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:1004) at org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:589) at org.apache.tomcat.util.net.JIoEndpoint$SocketProcessor.run(JIoEndpoint.java:312) at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:722) Caused by: org.apache.solr.common.SolrException: Error loading class 'solr.solr.FieldAnalysisRequestHandler' at org.apache.solr.core.SolrResourceLoader.findClass(SolrResourceLoader.java:464) at org.apache.solr.core.SolrResourceLoader.findClass(SolrResourceLoader.java:396) at org.apache.solr.core.SolrCore.createInstance(SolrCore.java:518) at org.apache.solr.core.SolrCore.createRequestHandler(SolrCore.java:592) at org.apache.solr.core.RequestHandlers$LazyRequestHandlerWrapper.getWrappedHandler(RequestHandlers.java:249) ... 20 more Caused by: java.lang.ClassNotFoundException: solr.solr.FieldAnalysisRequestHandler at java.net.URLClassLoader$1.run(URLClassLoader.java:366) at java.net.URLClassLoader$1.run(URLClassLoader.java:355) at java.security.AccessController.doPrivileged(Native Method) at java.net.URLClassLoader.findClass(URLClassLoader.java:354) at java.lang.ClassLoader.loadClass(ClassLoader.java:423) at java.net.FactoryURLClassLoader.loadClass(URLClassLoader.java:789) at java.lang.ClassLoader.loadClass(ClassLoader.java:356) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:266) at org.apache.solr.core.SolrResourceLoader.findClass(SolrResourceLoader.java:448) ... 24 more - Best regards -- View this message in context: http://lucene.472066.n3.nabble.com/Lazy-load-Error-on-UI-analysis-area-tp4061291.html Sent from the Solr - User mailing list archive at Nabble.com.
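The root cause line names the class 'solr.solr.FieldAnalysisRequestHandler' — the 'solr.' package shorthand appears twice, so the class can't be loaded. A sketch of what the corrected declaration in solrconfig.xml would look like (the handler name '/analysis/field' is assumed from the stock configuration):

```xml
<!-- "solr." must appear only once: it is shorthand for the
     org.apache.solr.* packages, not part of the class name. -->
<requestHandler name="/analysis/field"
                startup="lazy"
                class="solr.FieldAnalysisRequestHandler" />
```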
Re: Error creating collection
The Solr version is 4.2.1. Here is the stack trace: SEVERE: org.apache.solr.common.SolrException: Error CREATEing SolrCore 'XXX': Could not get shard_id for core: XXX coreNodeName:192.168.20.47:8983_solr_XXX at org.apache.solr.handler.admin.CoreAdminHandler.handleCreateAction(CoreAdminHandler.java:483) at org.apache.solr.handler.admin.CoreAdminHandler.handleRequestBody(CoreAdminHandler.java:140) at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135) at org.apache.solr.servlet.SolrDispatchFilter.handleAdminRequest(SolrDispatchFilter.java:591) at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:192) at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:141) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:243) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:210) at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:222) at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:123) at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:171) at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:99) at org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:947) at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:118) at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:408) at org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:1009) at org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:589) at org.apache.tomcat.util.net.JIoEndpoint$SocketProcessor.run(JIoEndpoint.java:310) at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) at java.lang.Thread.run(Unknown Source) 
Caused by: org.apache.solr.common.SolrException: Could not get shard_id for core: XXX coreNodeName:192.168.20.47:8983_solr_XXX at org.apache.solr.cloud.ZkController.doGetShardIdProcess(ZkController.java:1221) at org.apache.solr.cloud.ZkController.preRegister(ZkController.java:1294) at org.apache.solr.core.CoreContainer.registerCore(CoreContainer.java:861) at org.apache.solr.core.CoreContainer.register(CoreContainer.java:841) at org.apache.solr.handler.admin.CoreAdminHandler.handleCreateAction(CoreAdminHandler.java:479) ... 20 more - Best regards -- View this message in context: http://lucene.472066.n3.nabble.com/Error-creating-collection-tp4057859p4058231.html Sent from the Solr - User mailing list archive at Nabble.com.
Error creating collection
I get this exception when I try to create a new collection. Does anyone have an idea of what's going on? org.apache.solr.common.SolrException: Error CREATEing SolrCore 'RPS_12': Could not get shard_id for core: RPS_12 coreNodeName:192.168.20.48:8983_solr_RPS_12 - Best regards -- View this message in context: http://lucene.472066.n3.nabble.com/Error-creating-collection-tp4057859.html Sent from the Solr - User mailing list archive at Nabble.com.
Severe errors in log
I have got this in my logs. What does it mean? ConcurrentLRUCache was not destroyed prior to finalize(), indicates a bug -- POSSIBLE RESOURCE LEAK!!! - Best regards -- View this message in context: http://lucene.472066.n3.nabble.com/Severe-errors-in-log-tp4057860.html Sent from the Solr - User mailing list archive at Nabble.com.
The overseer is stuck
Hi, my overseer has more than one task enqueued and is apparently stuck. Is there any way to force it to process the enqueued tasks? A screenshot of the overseer queue is here: http://tinypic.com/r/r8uhqq/4 - Best regards -- View this message in context: http://lucene.472066.n3.nabble.com/The-overseer-is-stucks-tp4057862.html Sent from the Solr - User mailing list archive at Nabble.com.
Too many close, count -1
Hi, reviewing Solr's log I found these messages. The Solr version is 4.2.1, running in Tomcat 7: 4973652:SEVERE: Too many close [count:-1] on org.apache.solr.core.SolrCore@5795a627. Please report this exception to solr-user@lucene.apache.org 5003386:SEVERE: REFCOUNT ERROR: unreferenced org.apache.solr.core.SolrCore@5795a627 () has a reference count of -1 2965529:SEVERE: Too many close [count:-1] on org.apache.solr.core.SolrCore@7722b49b. Please report this exception to solr-user@lucene.apache.org 52965531:SEVERE: Too many close [count:-1] on org.apache.solr.core.SolrCore@32530662. Please report this exception to solr-user@lucene.apache.org 52965533:SEVERE: Too many close [count:-1] on org.apache.solr.core.SolrCore@144e2972. Please report this exception to solr-user@lucene.apache.org 52971283:SEVERE: Too many close [count:-1] on org.apache.solr.core.SolrCore@1705c88e. Please report this exception to solr-user@lucene.apache.org 52978567:SEVERE: Too many close [count:-1] on org.apache.solr.core.SolrCore@c200c62. Please report this exception to solr-user@lucene.apache.org - Best regards -- View this message in context: http://lucene.472066.n3.nabble.com/Too-many-close-count-1-tp4058129.html Sent from the Solr - User mailing list archive at Nabble.com.
Found child node with improper name
I have this warning when I try to create a collection and the collection is not created. Apr 01, 2013 10:05:26 AM org.apache.solr.handler.admin.CollectionsHandler handleCreateAction INFO: Creating Collection : collection.configName=statisticsBucket-archivemaxShardsPerNode=3name=ST-ARCHIVE_07replicationFactor=2action=CREATE Apr 01, 2013 10:05:26 AM org.apache.solr.cloud.DistributedQueue$LatchChildWatcher process INFO: Watcher fired on path: /overseer/collection-queue-work state: SyncConnected type NodeChildrenChanged Apr 01, 2013 10:05:26 AM org.apache.solr.cloud.DistributedQueue orderedChildren WARNING: Found child node with improper name: qnr-02 Apr 01, 2013 10:05:26 AM org.apache.solr.cloud.OverseerCollectionProcessor run INFO: Overseer Collection Processor: Get the message id:/overseer/collection-queue-work/qn-02 message:{ operation:createcollection, numShards:null, maxShardsPerNode:3, collection.configName:statisticsBucket-archive, createNodeSet:null, name:ST-ARCHIVE_07, replicationFactor:2} - Best regards -- View this message in context: http://lucene.472066.n3.nabble.com/Found-child-node-with-improper-name-tp4052855.html Sent from the Solr - User mailing list archive at Nabble.com.
clusterstate.json size
Hi, is there a size limit for the clusterstate file? I can't create more collections in my cluster: I get no error, but the CREATE command doesn't return any response. I read in the past that the maximum size for a file in ZooKeeper is 1 MB, and my clusterstate file is 1.1 MB. Could this be the problem? If so, how can I increase the limit? - Best regards -- View this message in context: http://lucene.472066.n3.nabble.com/clusterstate-json-size-tp4052598.html Sent from the Solr - User mailing list archive at Nabble.com.
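The 1 MB figure matches ZooKeeper's jute.maxbuffer system property, which caps znode data size (roughly 1 MB by default). Raising it has to be done consistently on every ZooKeeper server and on every client JVM, i.e. each Solr node, or the oversized clusterstate.json will still fail to read or write. A sketch of the flags, assuming a 4 MB limit (the value and file locations are illustrative):

```shell
# ZooKeeper servers: e.g. appended via conf/java.env
JVMFLAGS="$JVMFLAGS -Djute.maxbuffer=4194304"

# Solr nodes running in Tomcat: e.g. via bin/setenv.sh
CATALINA_OPTS="$CATALINA_OPTS -Djute.maxbuffer=4194304"
```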
Error creating collection using CORE-API
Hi, I'm having an issue when trying to create a collection: curl http://192.168.1.142:8983/solr/admin/cores?action=CREATEname=RT-4A46DF1563_12collection=RT-4A46DF1563_12shard=00collection.configName=reportssBucket-regular The curl call fails because that collection.configName doesn't exist, so I fixed the call to: curl http://192.168.1.142:8983/solr/admin/cores?action=CREATEname=RT-4A46DF1563_12collection=RT-4A46DF1563_12shard=00collection.configName=reportsBucket-regular But now I get this stack trace: INFO: Creating SolrCore 'RT-4A46DF1563_12' using instanceDir: /Users/yriveiro/Dump/solrCloud/node00.solrcloud/solr/home/RT-4A46DF1563_12 Mar 25, 2013 5:15:35 PM org.apache.solr.cloud.ZkController createCollectionZkNode INFO: Check for collection zkNode:RT-4A46DF1563_12 Mar 25, 2013 5:15:35 PM org.apache.solr.cloud.ZkController createCollectionZkNode INFO: Collection zkNode exists Mar 25, 2013 5:15:35 PM org.apache.solr.cloud.ZkController readConfigName INFO: Load collection config from:/collections/RT-4A46DF1563_12 Mar 25, 2013 5:15:35 PM org.apache.solr.cloud.ZkController readConfigName SEVERE: Specified config does not exist in ZooKeeper:reportssBucket-regular Mar 25, 2013 5:15:35 PM org.apache.solr.core.CoreContainer recordAndThrow SEVERE: Unable to create core: RT-4A46DF1563_12 org.apache.solr.common.cloud.ZooKeeperException: Specified config does not exist in ZooKeeper:reportssBucket-regular In fact the collection ends up in ZooKeeper as a file and not as a folder. The questions here are: if the CREATE command doesn't find the config, why does it create a file? And why, after this, can't I run the command again with the correct syntax without removing the file created by the failed CREATE command? - Best regards -- View this message in context: http://lucene.472066.n3.nabble.com/Error-creating-collection-using-CORE-API-tp4051156.html Sent from the Solr - User mailing list archive at Nabble.com.
Re: Solr 4.2 mechanism proxy request error
Will Solr 4.2.1 solve this issue? - Best regards -- View this message in context: http://lucene.472066.n3.nabble.com/Solr-4-2-mechanism-proxy-request-error-tp4047433p4049127.html Sent from the Solr - User mailing list archive at Nabble.com.
Solr 4.2 mechanism proxy request error
Hi, I think the new Solr 4.2 feature that proxies a request when the collection is not on the requested node has a bug. If I run a query with the parameter rows=0 against a node that doesn't host the collection, the request fails; if the parameter is rows=4 or higher, the search works as expected. The output of wget is: Connecting to 192.168.20.48:8983... connected. HTTP request sent, awaiting response... 200 OK Length: 210 [application/xml] Saving to: ‘select?q=*:*rows=0’ 0% [ ] 0 --.-K/s in 0s 2013-03-14 18:01:04 (0.00 B/s) - Connection closed at byte 0. Retrying. Curl says: curl http://192.168.20.48:8983/solr/ST-3A856BBCA3_12/select?q=*%3A*rows=0; curl: (56) Problem (2) in the Chunked-Encoded data Chrome says: This webpage is not available The webpage at http://192.168.20.48:8983/solr/ST-3A856BBCA3_12/select?q=*%3A*rows=0wt=xmlindent=true might be temporarily down or it may have moved permanently to a new web address. Error 321 (net::ERR_INVALID_CHUNKED_ENCODING): Unknown error. Does anyone have the same issue? - Best regards -- View this message in context: http://lucene.472066.n3.nabble.com/Solr-4-2-mechanism-proxy-request-error-tp4047433.html Sent from the Solr - User mailing list archive at Nabble.com.
Re: Solr 4.2 mechanism proxy request error
The UI log shows: null:org.apache.solr.common.SolrException: Error trying to proxy request for url: http://192.168.20.47:8983/solr/ST-3A856BBCA3_12/select I will open an issue in Jira. Thanks - Best regards -- View this message in context: http://lucene.472066.n3.nabble.com/Solr-4-2-mechanism-proxy-request-error-tp4047433p4047440.html Sent from the Solr - User mailing list archive at Nabble.com.
SolrCloud index timeout
Hi, I have the following issue: I have a collection with a leader and a replica, both synchronized. When I try to index data into this collection I get a timeout error (the output below is from Python): (class 'requests.exceptions.Timeout', Timeout(TimeoutError(HTTPConnectionPool(host='192.168.20.50', port=8983): Request timed out. (timeout=60.0),),), traceback object at 0x7f64c033b908) Now I can't index any document into this collection because I always get the timeout error. In Tomcat I have about 100 threads stuck: S 11393624 ms 0 KB30 KB 192.168.20.47 192.168.20.50 POST /solr/ST-4A46DF1563_0612/update?update.distrib=TOLEADERdistrib.from=http%3A%2F%2F192.168.20.48%3A8983%2Fsolr%2FST-4A46DF1563_0612%2Fwt=javabinversion=2 HTTP/1.1 Does anyone have an idea of what could be happening and why I can't index any document into the collection? - Best regards -- View this message in context: http://lucene.472066.n3.nabble.com/SolrCloud-index-timeout-tp4046348.html Sent from the Solr - User mailing list archive at Nabble.com.
Re: SolrCloud index timeout
Hi, the version is 4.1. I'm not mixing deletes and adds; there are only adds. I have 4 nodes on 2 physical machines, 2 Tomcat instances on each machine. In this case the leader is located on a different physical machine than the replica. The collection has all its shards on different nodes; I have no oversharding. As for the stack question, I need to install VisualVM and try to get the stack. I created the collection using the CORE API: LEADER curl http://192.168.20.48:8983/solr/admin/cores\?action\=CREATE\name\=ST-0112\collection\=ST-0112\shard\=00\collection.configName\=statisticsBucket-regular REPLICA curl http://192.168.20.50:8983/solr/admin/cores\?action\=CREATE\name\=ST-0112\collection\=ST-0112\shard\=00\collection.configName\=statisticsBucket-regular The data folders have this content: LEADER drwxr-xr-x 2 root root 4096 Jan 30 17:40 index drwxr-xr-x 2 root root 12288 Feb 5 13:28 index.20130130174052236 drwxr-xr-x 2 root root 36864 Mar 11 15:20 index.20130220001204140 -rw-r--r-- 1 root root 78 Feb 20 00:13 index.properties -rw-r--r-- 1 root root 251 Feb 20 00:13 replication.properties drwxr-xr-x 2 root root 4096 Mar 11 15:19 tlog REPLICA drwxr-xr-x 2 root root 4096 Mar 11 15:59 index.20130228105843631 -rw-r--r-- 1 root root 78 Feb 28 10:59 index.properties -rw-r--r-- 1 root root 208 Feb 28 10:59 replication.properties drwxr-xr-x 2 root root 4096 Mar 11 12:17 tlog - Best regards -- View this message in context: http://lucene.472066.n3.nabble.com/SolrCloud-update-timeout-tp4046348p4046385.html Sent from the Solr - User mailing list archive at Nabble.com.
Facet get distinct terms
Hi all, does anyone know if this patch works with distributed collections and whether it's reliable? https://issues.apache.org/jira/browse/SOLR-2242 Thanks. /Yago - Best regards -- View this message in context: http://lucene.472066.n3.nabble.com/Facet-get-distinct-terms-tp4043350.html Sent from the Solr - User mailing list archive at Nabble.com.
Eject a node from SolrCloud
Hi, is there any way to eject a node from a Solr cluster? If I shut down a node in the cluster, ZooKeeper just marks the node as down. Thanks /Yago - Best regards -- View this message in context: http://lucene.472066.n3.nabble.com/Eject-a-node-from-SolrCloud-tp4038950.html Sent from the Solr - User mailing list archive at Nabble.com.
SolrCloud sort inconsistency
How is it possible that this sorted query returns different results? The highest value is the id P2450024023; sometimes the value returned is not the highest. This is an example; the second curl request is the correct result. NOTE: I ran the query while an indexing process was running. ➜ ~ curl -H Cache-Control: no-cache http://192.168.1.241:8983/solr/ST-SHARD_0212/query\?q\=id:\*\rows\=10\fl\=id\=sort\=id%20desc\cache\=False { responseHeader:{ status:0, QTime:5, params:{ cache:False, rows:10, fl:id=sort=id desc, q:id:*}}, response:{numFound:2387312,start:0,maxScore:1.0,docs:[ { id:P2443605077}, { id:P2443588094}, { id:P2443647855}, { id:P2443613193}, { id:P2443572098}, { id:P2443562507}, { id:P2443643935}, { id:P2443556464}, { id:P2443625267}, { id:P2443580781}] }} ➜ ~ curl -H Cache-Control: no-cache http://192.168.1.241:8983/solr/ST-SHARD_0212/query\?q\=id:\*\rows\=10\fl\=id\=sort\=id%20desc\cache\=False { responseHeader:{ status:0, QTime:4, params:{ cache:False, rows:10, fl:id=sort=id desc, q:id:*}}, response:{numFound:2387312,start:0,maxScore:1.0,docs:[ { id:P2450024023}, { id:P2450017490}, { id:P2450062568}, { id:P2450053498}, { id:P2449990839}, { id:P2449973572}, { id:P2449957535}, { id:P2450099098}, { id:P2450090195}, { id:P2450072528}] }} ➜ ~ curl -H Cache-Control: no-cache http://192.168.1.241:8983/solr/ST-SHARD_0212/query\?q\=id:\*\rows\=10\fl\=id\=sort\=id%20desc\cache\=False { responseHeader:{ status:0, QTime:6, params:{ cache:False, rows:10, fl:id=sort=id desc, q:id:*}}, response:{numFound:2387312,start:0,maxScore:1.0,docs:[ { id:P2450024023}, { id:P2450017490}, { id:P2450062568}, { id:P2450053498}, { id:P2449990839}, { id:P2449973572}, { id:P2449957535}, { id:P2450099098}, { id:P2450090195}, { id:P2450072528}] }} ➜ ~ - Best regards -- View this message in context: http://lucene.472066.n3.nabble.com/SolrCloud-sort-inconsistency-tp4033046.html Sent from the Solr - User mailing list archive at Nabble.com.
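Note that in the echoed params the sort never shows up as its own parameter — the value of fl is literally 'id=sort=id desc' — which suggests the '\=' between fl=id and sort=id%20desc in the shell command glued the sort clause onto fl, leaving the results effectively unsorted and free to vary while indexing runs. Building the URL with a proper parameter encoder sidesteps this class of mistake. A minimal Python sketch (host and path taken from the post; everything else is illustrative):

```python
from urllib.parse import urlencode

# Each parameter is kept separate, so sort cannot be swallowed into fl.
params = {
    "q": "id:*",
    "rows": 10,
    "fl": "id",
    "sort": "id desc",
    "cache": "false",
}
query_string = urlencode(params)
url = "http://192.168.1.241:8983/solr/ST-SHARD_0212/query?" + query_string
print(url)
```

If the responseHeader then echoes sort as its own entry, the ordering should be stable across repeated requests against the same commit point.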
Re: Atomic Updates, Payloads, Non-stored data
Hi, is there an open issue in the Solr project about this? Thanks - Best regards -- View this message in context: http://lucene.472066.n3.nabble.com/Atomic-Updates-Payloads-Non-stored-data-tp4006678p4023789.html Sent from the Solr - User mailing list archive at Nabble.com.
Count distinct groups in distributed grouping
Hi, is it possible to do a distinct group count when grouping over a sharded schema? The issue https://issues.apache.org/jira/browse/SOLR-3436 fixed the way the groups returned by a distributed grouping operation are summed, but the sum isn't always what we want; in some cases it's interesting to have the distinct groups across shards. - Best regards -- View this message in context: http://lucene.472066.n3.nabble.com/Count-disctint-groups-in-grouping-distributed-tp4007257.html Sent from the Solr - User mailing list archive at Nabble.com.
Re: groups.limit=0 in sharding core results in IllegalArgumentException
Hi, I have the same issue using solr 4.0-ALPHA. -- View this message in context: http://lucene.472066.n3.nabble.com/groups-limit-0-in-sharding-core-results-in-IllegalArgumentException-tp4006086p4006110.html Sent from the Solr - User mailing list archive at Nabble.com.