I am also getting the same error when performing shard splitting using Solr 4.4.0.
--
View this message in context:
http://lucene.472066.n3.nabble.com/Shard-splitting-failure-with-and-without-composite-hashing-tp4083662p4084177.html
Sent from the Solr - User mailing list archive at Nabble.com.
Hi Roman,
Something bad happened in fresh checkout:
python solrjmeter.py -a -x ./jmx/SolrQueryTest.jmx -q
./queries/demo/demo.queries -s localhost -p 8983 -a --durationInSecs 60 -R
cms -t /solr/statements -e statements -U 100
Traceback (most recent call last):
File "solrjmeter.py", line 1392,
Hi Alex,
Thanks for your reply. I looked into the core analyser and also created a
custom tokeniser using it. I have shared the code below. When I look at the
analysis page in Solr the analyser works fine, but when I tried to
submit 100 docs together I found in the logs (with a custom message
Hi,
when I set the Solr hostPort in the Tomcat system properties, it does not work. If
I specify it in solr.xml then it works. Is it mandatory that
hostPort should be set only in solr.xml?
solr.xml setting:
<solr>
  <solrcloud>
    <str name="host">${host:}</str>
    <int name="hostPort">${port:}</int>
Hi folks,
Not sure if this is a bug or intended behaviour, but the ping query
seems to rely on the value of the default df value in the
requestHandler, rather than on the core's defaultSearchField defined in
schema.xml.
I would expect the schema.xml values to override solrconfig.xml
Addendum: using Solr 4.3.1
I create a collection prior to tomcat startup.
--java -classpath .;zoo-lib/* org.apache.solr.cloud.ZkCLI -cmd upconfig
-zkhost localhost:2181 -confdir solr-conf -confname solrconf1
--java -classpath .;zoo-lib/* org.apache.solr.cloud.ZkCLI -cmd linkconfig
-zkhost 127.0.0.1:2181 -collection
HI,Mark.
When I managed collections via the Collections API.
How can I set the 'instanceDir' name?
eg: http://localhost:8983/solr/admin/collections?action=CREATE&name=mycollection&numShards=3&replicationFactor=4
My instanceDir is 'mycollection_shard2_replica1'.
How can I change it to 'mycollection'?
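As a side note for anyone copying that URL: the `&` separators between parameters are often stripped by mail archives. A minimal sketch of building the same CREATE call programmatically (host and values taken from the example above):

```python
from urllib.parse import urlencode

# Parameters for the Collections API CREATE call, matching the
# example in this thread.
params = {
    "action": "CREATE",
    "name": "mycollection",
    "numShards": 3,
    "replicationFactor": 4,
}

# Base URL uses the standard example port; adjust for your deployment.
url = "http://localhost:8983/solr/admin/collections?" + urlencode(params)
print(url)
```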
Hi,
I am experimenting with the Solr 4.4.0 split shard feature. When I split the
shard I get the following exception:
java.lang.IllegalArgumentException: maxValue must be non-negative (got: -1)
at
org.apache.lucene.util.packed.PackedInts.bitsRequired(PackedInts.java:1184)
at
Frankly, you're into somewhat uncharted waters, the whole lazy core
capability was designed for non-cloud mode.
Cores are initialized when the first request comes in that addresses
the core. Whether ZK and SolrCloud know a core is active before
the first time it's loaded I don't know.
I think
Nobody knows without considerably more details, please review:
http://wiki.apache.org/solr/UsingMailingLists
What do the Solr logs say on that node? You could consider just
blowing away the index on that core and letting it replicate in full
from the leader.
Best
Erick
On Mon, Aug 12, 2013 at
Again, why are you using ...admin/cores rather than admin/collections?
See:
http://wiki.apache.org/solr/SolrCloud#Managing_collections_via_the_Collections_API
Best
Erick
On Tue, Aug 13, 2013 at 5:00 AM, Prasi S prasi1...@gmail.com wrote:
I create a collection prior to tomcat startup.
This has been mentioned before, but it's never been
implemented. It's a pain to copy/paste the full field
definition, but the utility of subclassing fieldTypes
is really pretty restricted. How, for instance, would
you, say, tweak the parameters to WordDelimiterFilterFactory
in your sub-field? And
1) That's hard-coded at present. There's anecdotal evidence that there
are throughput improvements with larger batch sizes, but no action
yet.
2) Yep, all searchers are also re-opened, caches re-warmed, etc.
3) Odd. I'm assuming your Solr3 was a master/slave setup? Seeing the
queries
did you do a (real) commit before trying to use this?
I am not sure how this splitting works, but at least the merge option
requires that.
I can't see this happening unless you are somehow splitting a 0-document
index (or, if the splitter is creating 0-document splits),
so this is likely just a
On a quick scan I don't see a problem here. Attach
debug=query to your url and that'll show you the
parsed query, which will in turn show you what's been
pushed through the analysis chain you've defined.
You haven't stated whether you've tried this and it's
not working or you're looking for
Yes, I am performing a commit after the split request is submitted to the server.
--
View this message in context:
http://lucene.472066.n3.nabble.com/Split-Shard-Error-maxValue-must-be-non-negative-tp4084220p4084256.html
Sent from the Solr - User mailing list archive at Nabble.com.
Not quite sure what you're seeing here. adding debugQuery=true
shows the parsed query, timings, things like that. What it does NOT
do is show you how things were scored.
If you have a document that you think should match, you can add
explainOther to the query and it'll show you how an arbitrary
You can probably make this pretty fast by doing a fq with bbox
to restrict the number of documents that needed their distance
calculated
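The suggestion above might look like this as query parameters (sfield/pt/d values are borrowed from the geofilt example elsewhere in this digest; treat the field name and coordinates as assumptions, not a known setup):

```python
from urllib.parse import urlencode

# A cheap rectangular bbox filter prunes the candidate set first,
# then geodist() orders only the surviving documents by distance.
params = {
    "q": "*:*",
    "fq": "{!bbox}",               # bbox filter using sfield/pt/d below
    "sfield": "coords",            # assumed spatial field name
    "pt": "54.729696,-98.525391",  # assumed centre point
    "d": "10",                     # radius in km
    "sort": "geodist() asc",
}
query_string = urlencode(params)
print(query_string)
```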
Best
Erick
On Mon, Aug 12, 2013 at 9:13 AM, Raymond Wiker rwi...@gmail.com wrote:
It will probably have better performance than having a plan b query
It looks like you have older jar files in your classpath, evidenced
by the line:
Caused by: java.lang.ClassCastException: class
org.apache.solr.handler.dataimport.DataImportHandler
bq: Originally, there is not the lib folder under solr, so I created it for
adding several jar files.
This is
Unless this is a copy/paste error, it's just wrong <g>
-DjhostPort=8080
jhostPort?
But more to the point, your specification is wrong. The
sysprop that would be substituted is port since that's what's
in ${port:}.
Try:
<int name="hostPort">${hostPort:}</int>
Best
Erick
On Tue, Aug 13, 2013 at 3:47
1) The defaultSearchField in schema.xml is deprecated.
2) The df parameter will _probably_ override it anyway.
Best
Erick
On Tue, Aug 13, 2013 at 4:16 AM, Bram Van Dam bram.van...@intix.eu wrote:
Addendum: using Solr 4.3.1
Well, I meant before, but I just took a look and this is implemented
differently than the merge one.
In any case, I think it's the same bug, because I think the only way
this can happen is if somehow this splitter is trying to create a
0-document split (or maybe a split containing all deletions).
Hi Erick,
sorry if that wasn't clear: this is what I'm actually observing in my
application.
I wrote the first post after looking at the explain (debugQuery=true):
the query
q=mag 778 G 69
is translated as follow:
+((DisjunctionMaxQuery((myfield:mag^3000.0)~0.1)
Hi,
I recently tried setting up Solr in Tomcat. It works well, without issues.
I then tried setting up Solr 3.6.2 in WebSphere 7.0.0.25 by deploying the solr
war available in the dist folder. But after starting the Solr instance in
WAS, I am unable to view the Solr home page. It throws a JSP processing
On 8/13/2013 3:07 AM, xinwu wrote:
When I managed collections via the Collections API.
How can I set the 'instanceDir' name?
eg: http://localhost:8983/solr/admin/collections?action=CREATE&name=mycollection&numShards=3&replicationFactor=4
My instanceDir is 'mycollection_shard2_replica1'.
How
Hello All,
I am trying to develop a custom tokeniser (please find the code below) and found
an issue while adding multiple documents one after another.
It works fine when I add the first document, but when I add another document
it does not call the create method from SampleTokeniserFactory.java; it
calls
Dear admin,
Please add me to the ContributorsGroup so that I can add my websites which
are using Solr and Lucene Java.
Thank you best regards,
Ann
Done, thanks for helping!
On Tue, Aug 13, 2013 at 9:17 AM, Ann Tran epping2...@gmail.com wrote:
Dear admin,
Please add me to the ContributorsGroup so that I can add my websites which
are using Solr and Lucene Java.
Thank you best regards,
Ann
A question recently came up: Does the tlog store the entire document when
an atomic update happens or just the incoming delta? My guess is that it
stores the entire document, but that's a guess...
Thanks,
Erick
I think you can get what you want by escaping the space with a backslash
YMMV of course.
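In code, that escaping might be sketched like this (a minimal helper, using the query string from earlier in the thread; extend the character set if your values contain other query-syntax characters):

```python
def escape_space(term: str) -> str:
    """Backslash-escape spaces so a multi-word value reaches the field
    as a single token instead of being split by the query parser."""
    return term.replace(" ", "\\ ")

# The query discussed earlier in this thread:
escaped = escape_space("mag 778 G 69")
print(escaped)  # mag\ 778\ G\ 69
```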
Erick
On Tue, Aug 13, 2013 at 9:11 AM, Andrea Gazzarini
andrea.gazzar...@gmail.com wrote:
Hi Erick,
sorry if that wasn't clear: this is what I'm actually observing in my
application.
I wrote the
Trying...thank you very much!
I'll let you know
Best,
Andrea
On 08/13/2013 04:18 PM, Erick Erickson wrote:
I think you can get what you want by escaping the space with a backslash
YMMV of course.
Erick
On Tue, Aug 13, 2013 at 9:11 AM, Andrea Gazzarini
andrea.gazzar...@gmail.com wrote:
On Tue, Aug 13, 2013 at 10:11 AM, Erick Erickson
erickerick...@gmail.com wrote:
A question recently came up: Does the tlog store the entire document when
an atomic update happens or just the incoming delta? My guess is that it
stores the entire document, but that's a guess...
Correct.
-Yonik
If you do end up figuring it out, would you mind letting me know? Right
now, our solution is to use an older version of SolrJ, but that means we
miss out on some of the improvements/bugfixes around aliases.
Thanks,
Michael Della Bitta
Applications Developer
o: +1 646 532 3062 | c: +1 917 477
At this point you would need a higher-level service sitting on top of Solr
clusters which also talks to your zk setup in order to create custom
collections on the fly.
It's not super difficult, but it seems out of scope for SolrCloud now.
Let me know if others have a different opinion.
thanks,
Just a bump to see if anyone knows if this can be done.
We want to get the shard routing key during insert as we have a plugin
operating within the UpdateRequestProcessor that is inserting the original
document being indexed into a resilient backing store so Solr only has to
index it and not
quick question on a similar topic:
an NRT call to index a doc returns a success return code if and only
if all available servers have successfully written the doc to their tlog,
correct?
On Tue, Aug 13, 2013 at 10:35 AM, Yonik Seeley yo...@lucidworks.com wrote:
On Tue, Aug 13, 2013 at
On Tue, Aug 13, 2013 at 11:01 AM, Anirudha Jadhav aniru...@nyu.edu wrote:
quick question on a similar topic:
an NRT call to index a doc returns a success return code if and only
if all available servers have successfully written the doc to their tlog,
correct?
Right.
-Yonik
Hello,
I use the following distance sorting of Solr 4
(solr.SpatialRecursivePrefixTreeFieldType):
fl=*,score&sort=score asc&q={!geofilt score=distance filter=false
sfield=coords pt=54.729696,-98.525391 d=10}
(from the tutorial on
http://wiki.apache.org/solr/SolrAdaptersForLuceneSpatial4)
Now I
I think I see the confusion. Erick is right that using collections API
would sort the problem, but here is my rationale on why the confusion
exists.
There are 3 stages to creating a valid collection (well this is how I think
of it)
1) Upload a solrconfig.xml/schema.xml (+ A N Other required
The splitting code calls commit before it starts the splitting. It creates
a LiveDocsReader using a bitset created by the split. This reader is merged
to an index using addIndexes.
Shouldn't the addIndexes code then ignore all such 0-document segments?
On Tue, Aug 13, 2013 at 6:08 PM, Robert
On Tue, Aug 13, 2013 at 11:39 AM, Shalin Shekhar Mangar
shalinman...@gmail.com wrote:
The splitting code calls commit before it starts the splitting. It creates
a LiveDocsReader using a bitset created by the split. This reader is merged
to an index using addIndexes.
Shouldn't the addIndexes
Hi,
I'm getting some Out of memory (heap space) from my solr instance and
after investigating a little bit, I found several threads about sorting
behaviour in SOLR.
First, some information about the environment
- I'm using SOLR 3.6.1 and master / slave architecture with 1 master and
2
On Tue, Aug 13, 2013 at 9:15 PM, Robert Muir rcm...@gmail.com wrote:
On Tue, Aug 13, 2013 at 11:39 AM, Shalin Shekhar Mangar
shalinman...@gmail.com wrote:
The splitting code calls commit before it starts the splitting. It creates
a LiveDocsReader using a bitset created by the split. This
When I run solr using
java -jar C:\solr\example\start.jar
It writes logs to C:\solr\example\logs.
When I run it using
java -Dsolr.solr.home=C:\solr\example\solr
-Djetty.home=C:\solr\example
-Djetty.logs=C:\solr\example\logs
-jar C:\solr\example\
start.jar
it writes logs only
: Not sure if this is a bug or intended behaviour, but the ping query seems to
: rely on the value of the default df value in the requestHandler, rather than
: on the core's defaultSearchField defined in schema.xml.
The df *request* param will always override the defaultSearchField in the
I have indexed the data from the db and so far it searches really well.
Now I want to create auto-complete/suggest feature in my website
So far I have seen articles about Suggester, spellchecker, and
searchComponents.
Can someone point me to a good article about basic autocomplete
implementation?
It's been my experience that using the convenient feature to change the output
key still doesn't save you from having to map it back to the field name
underlying it in order to trigger the filter query. With that in mind it just
makes more sense to me to leave that effort in the View portion
: I don't think so. I looked at sources - range and query facets are backed
: on SolrIndexSearcher.numDocs(Query, DocSet).
on fields that use docValues, range queries (regardless
of whether they come from q, fq, facet.query, or facet.range) are
sometimes implemented using the docValues via
Scratch that. I obviously didn't pay attention to the stack trace.
There is no workaround until 4.5 for this issue because we split the
range by half and thus cannot guarantee that all segments will have
numDocs > 0.
On Tue, Aug 13, 2013 at 9:25 PM, Shalin Shekhar Mangar
shalinman...@gmail.com
Already fixed thanks to Shawn Heisey. Had to URL encode the ampersand and
semicolon in &amp;
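For anyone hitting the same thing, the percent-encodings involved can be checked quickly with the standard library (a small illustrative sketch, not tied to any specific Solr request):

```python
from urllib.parse import quote

# '&' and ';' both have special meaning in query strings, so they must
# be percent-encoded when they appear inside a parameter value.
amp_encoded = quote("&", safe="")
semi_encoded = quote(";", safe="")
print(amp_encoded, semi_encoded)  # %26 %3B

# e.g. a literal 'R&D' value inside a query parameter:
encoded = quote("R&D", safe="")
print(encoded)  # R%26D
```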
From: Erik Hatcher erik.hatc...@gmail.com
To: solr-user@lucene.apache.org solr-user@lucene.apache.org
Sent: Monday, August 12, 2013 9:06 PM
Subject: Re: Problem
More on this, I think I found something...
Slave admin console -> stats.jsp#cache, FieldCache
...
entries count: 22
entry#0 :
'MMapIndexInput(path=/home/agazzarini/solr-indexes/slave-data-dir/cbt/main/data/index/_mp.frq)'='title_sort',class
...
entry#9 :
I'm running into the same issue using composite routing keys when all of
the shard keys end up in one of the subshards.
-Greg
On Tue, Aug 13, 2013 at 9:34 AM, Shalin Shekhar Mangar
shalinman...@gmail.com wrote:
Scratch that. I obviously didn't pay attention to the stack trace.
There is no
Hi,
I have created a custom component with some post-filtering ability. Now I
am trying to add certain fields to the Solr response. I was able to add them
as a separate response section, but I am having difficulty adding them to the
docs themselves. Is there an example of any component which adds
The fl field controls what appears for documents in the response.
You can add function queries to the fl list, including aliases, such as:
fl=*,Four:sum(2,2)
You could do a custom writer if you really need to mangle the actually
document output.
The bottom line is that fl will determine
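A quick illustration of that fl alias syntax as request parameters (the alias name "Four" is just the example from above; nothing here is specific to any schema):

```python
from urllib.parse import urlencode

# fl can mix a wildcard, stored fields, and aliased function queries:
# the response docs would then carry a pseudo-field named "Four".
params = {
    "q": "*:*",
    "fl": "*,Four:sum(2,2)",
}
query_string = urlencode(params)
print(query_string)
```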
I am using Solr Cloud 4.4. It is pretty much a base configuration. We have
2 servers and 3 collections. Collection1 is 1 shard and the Collection2 and
Collection3 both have 2 shards. Both servers are identical.
So, here is my process, I do a lot of queries on Collection1 and
Collection2. I then
Hi Eric,
Yeah a document can belong to multiple subcategory hierarchies. Also we
will be having multi-level categorization unlike the 2 level I previously
mentioned.
like: Electronics > Phones > Google Nexus ...
Also, since Solr does not support relational joins, shall I fetch the
categories
Any ideas?
On Aug 10, 2013, at 6:28 PM, Mark static.void@gmail.com wrote:
Our schema is pretty basic.. nothing fancy going on here
<fieldType name="text" class="solr.TextField" omitNorms="false">
  <analyzer type="index">
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <filter
Hi Dmitry, oh yes, late night fixes... :) The latest commit should make it
work for you.
Thanks!
roman
On Tue, Aug 13, 2013 at 3:37 AM, Dmitry Kan solrexp...@gmail.com wrote:
Hi Roman,
Something bad happened in fresh checkout:
python solrjmeter.py -a -x ./jmx/SolrQueryTest.jmx -q
I have a Solr 3.6 infrastructure with 4 servers, 24 cores/128GB each (~15 shards on
every server), 70 million documents.
Now I have set up a new Solr 4 infrastructure with the same hardware. I reduced the
shards and have only 6 shards.
But I don't understand the difference between SolrCloud and a
On 8/13/2013 4:47 PM, Torsten Albrecht wrote:
I have a solr 3.6 infrastructure with 4 server 24 cores/128GB (~15 shards at
every server), 70 million documents.
Now I set up a new solr 4 infrastructure with the same hardware. I reduce the
shards and have only 6 shards.
But I don't understand
Hi Roy.
Using the example schema and data, and copying the store field to
store_rpt indexed with location_rpt field type, try this query:
While I don't have a past history of this issue to use as reference, if I were
in your shoes I would consider trying your updates with softCommit disabled.
My suspicion is you're experiencing some issue with the transaction logging and
how it's managed when your hard commit occurs.
If you can