Hi,
I have an external application that uses the output of a facet to join another
dataset using the keys of the facet result.
The facet query uses index sort, but at some point my application crashes
because the order of the keys is not correct. If I do a Unix sort over
the keys of the result with
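The crash above hinges on what "index sort" actually is: Solr returns facet keys in indexed-term order (UTF-8 byte order), which can differ from what a locale-aware Unix sort produces. A minimal sketch, with made-up keys, to locate where a key list stops being byte-ordered:

```python
# Sketch: find where a list of facet keys stops being in UTF-8 byte order
# (the order facet.sort=index uses). The sample keys are illustrative.

def first_out_of_order(keys):
    """Index of the first key that breaks UTF-8 byte order, else None."""
    encoded = [k.encode("utf-8") for k in keys]
    for i in range(1, len(encoded)):
        if encoded[i - 1] > encoded[i]:
            return i
    return None

keys = ["apple", "Banana", "cherry"]  # "Banana" sorts before "apple" in bytes
print(first_out_of_order(keys))  # 1: byte order breaks at "Banana"
```

Note that a Unix `sort` without `LC_ALL=C` applies locale collation, so its output is expected to disagree with Solr's index order on mixed-case or accented keys.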
I'm trying to retrieve a query result from Solr in CSV format with around 500K
records, and I always get this error:
Expected mime type application/octet-stream but got application/xml. <?xml
version=\"1.0\" encoding=\"UTF-8\"?>\n<response>\n<lst name=\"error\"><str
name=\"msg\">application/x-www-form-urlencoded
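That error message is typical of a client that requested the binary (javabin, application/octet-stream) format and received an XML error page instead, so the underlying failure is in the request itself. As a sketch, the CSV export can be requested over plain HTTP; host, core name, and field list below are placeholders, not taken from the original post:

```python
# Sketch: build a plain-HTTP CSV export request. Host, core name and
# field list are placeholders, not taken from the original post.
from urllib.parse import urlencode

params = {
    "q": "*:*",
    "wt": "csv",      # ask Solr for CSV output directly
    "rows": 500000,   # roughly the 500K records mentioned above
    "fl": "id,name",  # placeholder field list
}
url = "http://localhost:8983/solr/collection1/select?" + urlencode(params)
print(url)
```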
Hi,
How can I raise these two variables, maxUpdateConnections and
maxUpdateConnectionsPerHost, in Solr 4.6.1 with the old solr.xml style?
/Yago
-
Best regards
--
View this message in context:
Hi,
I need to move some data from one disk to another one. My question is: can
I move the shard and create a symlink in the place where the shard was?
Would this work?
Hi,
Is it possible to perform an optimize operation while continuing to index into
a collection?
I need to force an expunge of deletes from the index; I have millions of
deletes and need to free space.
Hi,
Is it possible to remove stored data from an index by deleting the unwanted
fields from schema.xml and afterwards doing an optimize on the index?
Thanks,
/yago
http://lucene.472066.n3.nabble.com/Delete-data-from-stored-documents-tp4167990.html
Hi,
Can I add docValues to an existing field without wiping the current data?
The modification to the schema would be something like this:
<field name="surrogate_id" type="tlong" indexed="true" stored="true"
multiValued="false"/>
<field name="surrogate_id" type="tlong" indexed="true" stored="true"
Hi,
I'm wondering if Solr has some feature like facet.mincount but for a maxcount.
I have a use case where I need to know which facets have fewer than n
elements.
I can do this by adding the facet.limit=-1 parameter, fetching the whole set,
and removing client-side the elements that don't match the
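Since there is no "facet.maxcount" parameter, the workaround described above (fetch everything with facet.limit=-1, then filter client-side) can be sketched like this; the counts dict stands in for a parsed facet response:

```python
# Sketch: client-side emulation of a hypothetical "facet.maxcount".
# `counts` stands in for a parsed facet_fields response.

def facets_below(counts, n):
    """Keep only facet values that occur fewer than n times."""
    return {value: c for value, c in counts.items() if c < n}

counts = {"red": 120, "green": 3, "blue": 1}
print(facets_below(counts, 5))  # {'green': 3, 'blue': 1}
```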
I'm having this error in my logs:
ERROR - dat1 - 2013-12-18 11:40:11.704;
org.apache.solr.update.StreamingSolrServers$1; error
org.apache.solr.common.SolrException: Service Unavailable
request:
I'm getting an error on Solr 4.6.0 about leader registration; the admin shows
this:
http://picpaste.com/a839446d0808df205aa7be78c780ed32.png
But my logs say:
ERROR - dat6 - 2013-12-18 11:43:54.253;
org.apache.solr.common.SolrException; org.apache.solr.common.SolrException:
No registered leader
Hi,
I read this post http://1opensourcelover.wordpress.com/ about EEFs and I
found it very interesting.
Can someone give me more use cases showing the utility of EEFs?
/Yago
Hi,
Is there some way to automatically migrate from the old solr.xml style to the
new one?
/Yago
http://lucene.472066.n3.nabble.com/Migration-from-old-solr-xml-to-the-new-solr-xml-style-tp4104154.html
Hi,
After reading this link about DocValues, and being pointed by Mark Miller to
raise the question on the mailing list, I have some questions about the
codec implementation note:
Note that only the default implementation is supported by future versions of
Lucene: if you try an alternative
Hi,
Where can I configure the maxConnectionsPerHost on Solr?
I'm using Solr 4.5.1 with the old style of solr.xml (I have a lot of
collections, and switching to the new style of solr.xml is too much work).
Hi,
I read this post http://searchhub.org/2013/06/13/solr-cloud-document-routing
and I have some questions.
When a tenant is too large to fit on one shard, we can specify the number of
bits from the shard key that we want to use.
If we set a doc's key as tenant1/4!docXXX we are saying to spread
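The key format above can be sketched as a plain string template: "tenant/bits!docId" tells Solr's compositeId router to take only the top `bits` bits from the tenant part of the hash, so one tenant spreads over a slice of the hash range. The helper below only builds the string; the hashing itself happens inside Solr:

```python
# Sketch: build a compositeId routing key of the form "tenant/bits!docId".
# Only the string format is shown; the hashing happens inside Solr.

def routed_id(tenant, bits, doc_id):
    return f"{tenant}/{bits}!{doc_id}"

print(routed_id("tenant1", 4, "docXXX"))  # tenant1/4!docXXX
```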
Some time ago I posted this issue:
http://lucene.472066.n3.nabble.com/Leader-election-fails-in-some-point-td4096514.html
The link for the screenshot is no longer available. When a shard fails and
loses the leader, I get those exceptions.
Hi,
I have 2 replicas with different numbers of documents. Is that possible?
I'm using Solr 4.5.1
Replica 1:
version:77847
numDocs:5951879
maxDoc:5951978
deletedDocs:99
Replica 2:
version:76011
numDocs:5951793
maxDoc:5951965
deletedDocs:172
Isn't the tlog supposed to ensure data consistency?
I've been wondering for some time whether it's possible to have replicas of a
shard synchronized but in a state where they can't accept queries, only
updates. Such a replica in replication mode would only wake up to accept
queries if it's the last replica alive, and would go back to replication mode
when another replica becomes
Hi,
I created a collection with this command (Solr 4.5):
http://localhost:8983/solr/admin/collections?action=CREATE&name=testDocValues&collection.configName=page-statistics&numShards=12&maxShardsPerNode=12&router.field=month
The documentation says that the default router.name is compositeId. The
Hi,
I created a collection with 12 shards and router.field=month (the month field
will have values between 1 and 12).
I noticed that I have shards with more than one month in them. This could
leave some shards empty, and I want the documents of one month in each shard.
My question is, how do I configure the
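With the default compositeId router, placement is by hash range, so several months can land on one shard. If the goal is exactly one month per shard, the usual approach is the implicit router, where the value of router.field names the target shard directly. A sketch of the CREATE call under that assumption; host and collection name are placeholders:

```python
# Sketch: create a collection with the implicit router so the "month" field
# value selects the shard by name, one shard per month. Host and collection
# name are placeholders.
from urllib.parse import urlencode

params = {
    "action": "CREATE",
    "name": "by_month",          # placeholder collection name
    "router.name": "implicit",   # shard chosen by name, not by hash
    "router.field": "month",
    "shards": ",".join(str(m) for m in range(1, 13)),  # shards "1".."12"
}
url = "http://localhost:8983/solr/admin/collections?" + urlencode(params)
print(url)
```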
Hi,
If I have a field (named dv_field) configured to be indexed, stored, and with
docValues=true,
how do I know that when I do a query like
q=*:*&facet=true&facet.field=dv_field I'm really using the docValues and
not the normal way?
Is it necessary to duplicate the field and set indexed and stored to
Hi,
In this screenshot I have a shard with two replicas but no leader:
http://picpaste.com/qf2jdkj8.png
On the machine with the green shard I found this exception:
INFO - dat5 - 2013-10-18 22:48:04.775;
org.apache.solr.handler.admin.CoreAdminHandler; Going to wait for
coreNodeName:
Hi,
I have some cores with a lot of folders named index.X; my question is,
why?
The side effect of this is shards that are 50% larger than their replicas on
other nodes.
Is there any way to delete these folders to free space?
Is it a bug?
/Yago
I noticed that when a SPLITSHARD operation finishes, solr.xml is not updated
properly.
# Parent solr.xml:
<core numShards="2" name="test_shard1_replica1"
instanceDir="test_shard1_replica1" shard="shard1" collection="test"/>
# Children solr.xml:
<core name="test_shard1_0_replica1" shardState="construction"
Hi,
I'm creating replicas for my shards manually, and the solr.xml config doesn't
save the changes (the solr.xml attribute persist is true).
The command used is:
curl
'http://192.168.2.18:8983/solr/admin/cores?action=CREATE&name=test_shard1_replica2&collection=test&shard=shard1'
Anyone else with the
Hi,
Yesterday I did a SPLITSHARD operation on one of my shards (50G in size);
today the cluster state says that the children are in the construction state
and the parent is active.
Isn't the parent supposed to go to the inactive state and the 2 new
shards to the active state?
A split takes
Hi,
When a distributed search is done, the initial query is forwarded to all
shards that are part of the specific collection that we are querying.
My question here is: which machine does the aggregation of the
results from the shards?
Is it the machine that receives the initial request?
I
Today I was thinking about the ALIAS feature and its utility in Solr.
Can anyone explain to me, with an example, where this feature may be useful?
It's possible to have an ALIAS over multiple collections; if I do a write to
the alias, is this write replicated to all collections?
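For reference, an alias over several collections is defined with the Collections API's CREATEALIAS action; whether a write through such an alias reaches every collection is exactly the open question above. A sketch with placeholder host, alias, and collection names:

```python
# Sketch: define an alias over two collections with CREATEALIAS.
# Host, alias and collection names are placeholders.
from urllib.parse import urlencode

params = {
    "action": "CREATEALIAS",
    "name": "all_logs",                    # placeholder alias name
    "collections": "logs_2013,logs_2014",  # placeholder collection names
}
url = "http://localhost:8983/solr/admin/collections?" + urlencode(params)
print(url)
```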
/Yago
Hi all,
I think there is a gap in Solr's reference documentation.
The section "Running Solr" says to run Solr using the command:
$ java -jar start.jar
But if I do this with a fresh install, I get a stack trace like this:
http://pastebin.com/5YRRccTx
Is this behavior expected?
Hi,
I have this error on a solr.StrField defined in my schema:
FieldType 'string_dv' is configured with a docValues format, but the codec
does not support it.
<fieldType name="string_dv" class="solr.StrField" sortMissingLast="true"
omitNorms="true" docValuesFormat="Disk"/>
In the documentation
Hi,
I have a timeout error when I try to split a collection with 15M documents.
The exception (Solr version 4.3):
542468 [catalina-exec-27] INFO org.apache.solr.servlet.SolrDispatchFilter
– [admin] webapp=null path=/admin/collections
Hi,
How can I disable all the caches that Solr uses?
Regards
/Yago
http://lucene.472066.n3.nabble.com/Disable-all-caches-in-solr-tp4066517.html
Hi,
When I try to run this query,
http://localhost:8983/solr/coreA/select?q=source_id:(7D1FFB# OR 7D1FFB)
city:ES, I get the error below:
<response>
<lst name="responseHeader">
<int name="status">400</int>
<int name="QTime">1</int>
</lst>
<lst name="error">
<str name="msg">
org.apache.solr.search.SyntaxError: Cannot parse
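A likely culprit is the unencoded `#`: in a URL it starts the fragment, so everything after it never reaches Solr and the parser sees a truncated query. Percent-encoding the q parameter avoids that; a minimal sketch:

```python
# Sketch: '#' starts the URL fragment, so an unencoded "7D1FFB#" truncates
# the query before it reaches Solr. Percent-encode the q parameter instead.
from urllib.parse import quote

q = "source_id:(7D1FFB# OR 7D1FFB) city:ES"
encoded = quote(q, safe="")
print(encoded)  # the '#' is now %23
```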
Hi,
I'm playing a little with the new feature to SPLIT shards. In my first
tests, I realised that if I do a split on a shard with
replicationFactor=2, for example, the result of the split operation doesn't
have the same replicationFactor. Is this the expected behaviour of split?
If it is
Hi,
I have this error if I try to split a shard again (the version of Solr is
4.3.0):
19911726 [qtp1949819426-565] INFO
org.apache.solr.handler.admin.CollectionsHandler ? Splitting shard :
shard=shard1&action=SPLITSHARD&collection=RPS-00-12
19911729 [main-EventThread] INFO
Hi, I have a node that can't finish the recovery.
The log shows this error:
3836028 [RecoveryThread] ERROR org.apache.solr.cloud.RecoveryStrategy –
Recovery failed - trying again... (0) core=ST-XXX_0712
3836028 [RecoveryThread] ERROR org.apache.solr.cloud.RecoveryStrategy –
Recovery failed -
Ok, I will do a fresh install in a VM and check that the error doesn't
reproduce.
http://lucene.472066.n3.nabble.com/Lazy-load-Error-on-UI-analysis-area-tp4061291p4061512.html
I found the error: the class of the analysis field request handler was not set
properly.
http://lucene.472066.n3.nabble.com/Lazy-load-Error-on-UI-analysis-area-tp4061291p4061526.html
Hi all,
I upgraded my Solr cluster today from 4.2.1 to 4.3. On startup I see some
errors like this:
2449515 [catalina-exec-51] ERROR org.apache.solr.core.SolrCore –
org.apache.solr.common.SolrException: incref on a closed log:
Hi,
I was exploring the UI and in the analysis section I got a lazy
load error.
The logs say:
INFO - 2013-05-07 11:52:06.412; org.apache.solr.core.SolrCore; []
webapp=/solr path=/admin/luke params={_=1367923926380&show=schema&wt=json}
status=0 QTime=23
ERROR - 2013-05-07
The solr version is 4.2.1.
Here is the stack trace:
SEVERE: org.apache.solr.common.SolrException: Error CREATEing SolrCore 'XXX':
Could not get shard_id for core: XXX
coreNodeName:192.168.20.47:8983_solr_XXX$
at
I get this exception when I try to create a new collection. Does anyone have
any idea what's going on?
org.apache.solr.common.SolrException: Error CREATEing SolrCore 'RPS_12':
Could not get shard_id for core: RPS_12
coreNodeName:192.168.20.48:8983_solr_RPS_12
I got this in my logs. What does it mean?
ConcurrentLRUCache was not destroyed prior to finalize(), indicates a bug
-- POSSIBLE RESOURCE LEAK!!!
http://lucene.472066.n3.nabble.com/Severe-errors-in-log-tp4057860.html
Hi,
My overseer has more than one task enqueued and is apparently stuck.
Is there any way to force it to process the enqueued tasks? A screenshot of
the overseer queue is here: http://tinypic.com/r/r8uhqq/4
Hi,
Reviewing Solr's log I found this message.
The Solr version is 4.2.1, running in Tomcat 7:
4973652:SEVERE: Too many close [count:-1] on
org.apache.solr.core.SolrCore@5795a627. Please report this exception to
solr-user@lucene.apache.org
5003386:SEVERE: REFCOUNT ERROR: unreferenced
I get this warning when I try to create a collection, and the collection
is not created.
Apr 01, 2013 10:05:26 AM org.apache.solr.handler.admin.CollectionsHandler
handleCreateAction
INFO: Creating Collection :
Hi,
Is there a size limitation for the clusterstate file?
I can't create more collections for my cluster; I get no error, but the
CREATE command doesn't return any response.
I read in the past that the max size for a file in ZooKeeper was 1MB; my
clusterstate file is 1.1MB. Could this be
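The 1MB figure matches ZooKeeper's default znode size limit (jute.maxbuffer, just under 1 MiB), and the cluster state lives in a single znode, so a 1.1MB state file could plausibly fail to be written. A sketch that checks a serialized state against that limit; the state dict is illustrative only:

```python
# Sketch: check a serialized cluster state against ZooKeeper's default
# znode size limit. The state dict below is illustrative only.
import json

# ZooKeeper's jute.maxbuffer default is just under this value.
ZK_DEFAULT_LIMIT = 1024 * 1024

def fits_in_znode(state):
    """Return (serialized size in bytes, whether it fits the default limit)."""
    size = len(json.dumps(state).encode("utf-8"))
    return size, size <= ZK_DEFAULT_LIMIT

size, ok = fits_in_znode({"collection1": {"shards": {}}})
print(size, ok)
```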
:
/Users/yriveiro/Dump/solrCloud/node00.solrcloud/solr/home/RT-4A46DF1563_12
Mar 25, 2013 5:15:35 PM org.apache.solr.cloud.ZkController
createCollectionZkNode
INFO: Check for collection zkNode:RT-4A46DF1563_12
Mar 25, 2013 5:15:35 PM org.apache.solr.cloud.ZkController
createCollectionZkNode
INFO
Will Solr 4.2.1 solve this issue?
http://lucene.472066.n3.nabble.com/Solr-4-2-mechanism-proxy-request-error-tp4047433p4049127.html
Hi,
I think that the new feature in Solr 4.2 that proxies a request when the
collection is not on the requested node has a bug.
If I do a query with the parameter rows=0 and the node doesn't have the
collection, it fails. If the parameter is rows=4 or higher, the search works
as expected.
the curl
The log of the UI:
null:org.apache.solr.common.SolrException: Error trying to proxy request for
url: http://192.168.20.47:8983/solr/ST-3A856BBCA3_12/select
I will open the issue in Jira.
Thanks
Hi,
I have the following issue:
I have a collection with a leader and a replica; both are synchronized.
When I try to index data into this collection I get a timeout error (the
output is from Python):
(class 'requests.exceptions.Timeout',
Timeout(TimeoutError(HTTPConnectionPool(host='192.168.20.50',
Hi,
The version is 4.1.
I'm not mixing deletes and adds; there are only adds.
I have 4 nodes on 2 physical machines, 2 instances of Tomcat on each
machine. In this case the leader is located on a different physical machine
than the replica. The collection has all shards on different nodes; I have
Hi all,
Does anyone know whether this patch works with distributed collections and
whether it's reliable?
https://issues.apache.org/jira/browse/SOLR-2242
Thanks.
/Yago
http://lucene.472066.n3.nabble.com/Facet-get-distinct-terms-tp4043350.html
Hi,
Is there any way to eject a node from a Solr cluster?
If I shut down a node in the cluster, ZooKeeper tags the node as down.
Thanks
/Yago
http://lucene.472066.n3.nabble.com/Eject-a-node-from-SolrCloud-tp4038950.html
How is it possible that this sorted query returns different results?
The highest value is the id P2450024023; sometimes the value returned is not
the highest.
This is an example; the second curl request gives the correct result.
NOTE: I did the query while an indexing process was running.
➜ ~ curl
Hi,
Is there an open issue in the Solr project about this?
Thanks
http://lucene.472066.n3.nabble.com/Atomic-Updates-Payloads-Non-stored-data-tp4006678p4023789.html
Hi,
Is it possible to do a distinct group count in a grouping done using
a sharding schema?
This issue https://issues.apache.org/jira/browse/SOLR-3436 fixed the way all
groups returned in a distributed grouping operation are summed, but we don't
always want the sum; in some cases it is
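The distinct count asked for above can be computed client-side by merging the group keys from each shard into a set instead of summing per-shard totals; a sketch over illustrative shard responses:

```python
# Sketch: client-side distinct group count across shards. Merging group
# keys into a set avoids double-counting groups that span shards.

def distinct_groups(shard_responses):
    seen = set()
    for groups in shard_responses:
        seen.update(groups)
    return len(seen)

shard1 = ["a", "b"]  # illustrative group keys from shard 1
shard2 = ["b", "c"]  # group "b" also appears on shard 2
print(distinct_groups([shard1, shard2]))  # 3, not 4
```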
Hi,
I have the same issue using Solr 4.0-ALPHA.
http://lucene.472066.n3.nabble.com/groups-limit-0-in-sharding-core-results-in-IllegalArgumentException-tp4006086p4006110.html