On Fri, Sep 5, 2014 at 9:34 PM, Walter Underwood wun...@wunderwood.org
wrote:
What would be a high mm value, 75%?
Walter, I suppose that the length of the search result influences the run
time. So, for a particular query and an index, the high mm value is the
one which significantly reduces
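For context, mm is the (e)dismax "minimum should match" parameter; a hedged sketch of a query URL with a high mm value (the collection and field names are made up for illustration):

```
http://localhost:8983/solr/mycollection/select?q=quick+brown+fox&defType=edismax&qf=content&mm=75%25
```

With mm=75%, a three-term query requires at least three of the... well, ceil(0.75*3)=3 clauses to match, which shrinks the result set and with it the work done per request.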
Suggestion
In solrconfig.xml:
<searchComponent name="suggest" class="solr.SuggestComponent">
  <lst name="suggester">
    <str name="name">mySuggester</str>
    <str name="lookupImpl">FuzzyLookupFactory</str>
    <str name="dictionaryImpl">DocumentDictionaryFactory</str>
    <str name="field">content</str>
    <str
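The quoted config is cut off; for comparison, a minimal complete suggester setup in the Solr 4.x style might look like this (the field name, analyzer type and handler defaults are assumptions, not taken from the thread):

```xml
<searchComponent name="suggest" class="solr.SuggestComponent">
  <lst name="suggester">
    <str name="name">mySuggester</str>
    <str name="lookupImpl">FuzzyLookupFactory</str>
    <str name="dictionaryImpl">DocumentDictionaryFactory</str>
    <str name="field">content</str>
    <str name="suggestAnalyzerFieldType">text_general</str>
  </lst>
</searchComponent>

<requestHandler name="/suggest" class="solr.SearchHandler" startup="lazy">
  <lst name="defaults">
    <str name="suggest">true</str>
    <str name="suggest.count">10</str>
    <str name="suggest.dictionary">mySuggester</str>
  </lst>
  <arr name="components">
    <str>suggest</str>
  </arr>
</requestHandler>
```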
Hi Vaibhav,
Could you check whether the directory *suggest.dictionary* mySuggester is present
or not? Try creating it with mkdir; if the problem still persists, try giving the full path.
I found a good article at the link below; check that too.
Is there an API to manipulate/consolidate the schema(.xml) of a Solr-core?
Through SolrJ?
Context:
We already have a generic indexing/searching framework (based on Lucene) where
any component can act as a so-called IndexDataProvider. This provider delivers
the field-types and also the
Hello,
I've dropped solr-4.10.0.war in Tomcat 7's webapp directory.
When I start the Java web server, the following message appears in catalina.out:
---
INFO: Starting Servlet Engine: Apache Tomcat/7.0.55
Sep 17, 2014 11:35:59 AM org.apache.catalina.startup.HostConfig
Yes, this is a nasty error. You have not set up logging libraries properly:
https://cwiki.apache.org/confluence/display/solr/Configuring+Logging
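For Solr 4.3+ under Tomcat, the fix that wiki page describes amounts to copying the logging jars that ship with the Solr example into Tomcat's classpath; roughly (all paths are assumptions based on a default download layout):

```shell
# Copy the SLF4J/log4j binding jars from the Solr example into Tomcat's lib
cp solr-4.10.0/example/lib/ext/*.jar /usr/share/tomcat7/lib/
# Put a log4j.properties on the classpath as well
cp solr-4.10.0/example/resources/log4j.properties /usr/share/tomcat7/lib/
# Restart Tomcat afterwards
```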
-Original message-
From: phi...@free.fr
Sent: Wednesday 17th September 2014 11:51
To: solr-user@lucene.apache.org
Subject:
As far as I can see, when a Solr instance is started (whether standalone
or SolrCloud), a PingRequestHandler will wait until index warmup is
complete before returning (at least with useColdSearcher=false) which
may take a while. This poses a problem in that a load balancer either
needs to wait
I'm processing a zip file with an xml file. The TikaEntityProcessor opens
the zip, reads the file but is stripping the xml tags even though I have
supplied the htmlMapper=identity attribute. It maintains any html that is
contained in a CDATA section but seems to strip the other xml tags. Is
Sorry...adding more information.
Note that it does wrap my data in html, but only after it strips all my xml
tags out. So the data I am interested in parsing, which would be
<name>something</name>
<description>something</description>
<coordinates>12345,12345,0</coordinates>
ends up like <p>/n something /t/n
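For reference, the htmlMapper attribute belongs on the Tika entity in the DIH config; a sketch of the relevant fragment (entity names, paths and field names are assumptions):

```xml
<entity name="zipFiles" processor="FileListEntityProcessor"
        baseDir="/data/in" fileName=".*\.zip" rootEntity="false">
  <!-- format="xml" asks Tika for structured output; htmlMapper="identity"
       is meant to keep the markup instead of mapping it to safe HTML -->
  <entity name="doc" processor="TikaEntityProcessor"
          url="${zipFiles.fileAbsolutePath}" format="xml" htmlMapper="identity">
    <field column="text" name="content"/>
  </entity>
</entity>
```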
: second, and assuming your problem is really that you're looking at the
: _display_, you should get back exactly what you put in so I'm guessing
Not quite ... With the numeric types, the numeric value is both indexed
and stored so that there is no search/sort inconsistency between 1.1,
1.10,
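Hoss's point about search/sort consistency can be illustrated with a plain Java comparison; a small sketch (not Solr code) contrasting lexicographic and numeric order:

```java
public class SortDemo {
    public static void main(String[] args) {
        // As strings, "10.5" sorts before "9.5" because '1' < '9' lexicographically
        System.out.println("10.5".compareTo("9.5") < 0);   // prints true
        // As numbers, 9.5 comes first, so a string-sorted field would disagree
        System.out.println(Double.compare(10.5, 9.5) < 0); // prints false
        // Trie* numeric fields index and store the numeric value itself,
        // so search, sort and range queries all agree on numeric order.
    }
}
```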
Right, you can create new cores over the REST API.
As far as changing the schema, there's no good way to do that that I
know of programmatically. In the SolrCloud world, you can upload the
schema to ZooKeeper and have it automatically distributed to all the
nodes though.
Best,
Erick
On Wed, Sep
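Erick's ZooKeeper route can be scripted with the zkcli tool that ships with Solr (paths, ZooKeeper host and config name below are assumptions):

```shell
# Upload a config directory (including schema.xml) to ZooKeeper
solr-4.10.0/example/scripts/cloud-scripts/zkcli.sh \
  -zkhost localhost:2181 -cmd upconfig \
  -confdir ./myconf/conf -confname myconf
# Reload the collection so the nodes pick up the new schema
curl 'http://localhost:8983/solr/admin/collections?action=RELOAD&name=mycollection'
```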
Really! Ya learn something new every day.
On Wed, Sep 17, 2014 at 10:48 AM, Chris Hostetter
hossman_luc...@fucit.org wrote:
: second, and assuming your problem is really that you're looking at the
: _display_, you should get back exactly what you put in so I'm guessing
Not quite ... With the
See if SOLR-5831 https://issues.apache.org/jira/browse/SOLR-5831 helps.
Peter
On Tue, Sep 16, 2014 at 11:32 PM, William Bell billnb...@gmail.com wrote:
What we need is a function like scale(field,min,max) but only operates on
the results that come back from the search results.
scale() takes
Hello
I have generated a lucene index (with 6 shards) using Map Reduce. I want
to load this into a SolrCloud Cluster inside a collection.
Is there any out of the box way of doing this? Any ideas are much
appreciated
Thanks
Nitin
The Solr wiki says: "A repeated question is 'how can I have the
original term contribute more to the score than the stemmed
version?' In Solr 4.3, the KeywordRepeatFilterFactory has been
added to assist this functionality."
https://wiki.apache.org/solr/AnalyzersTokenizersTokenFilters#Stemming
I'm not 100% on this, but I imagine this is what happens:
(using - to mean tokenized to)
Suppose that you index:
I am running home - am run running home
If you then query running home - run running home, that will give a higher
score than if you query runs home - run runs home
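The wiki's recipe is an analyzer chain along these lines; a sketch (the field type name is an assumption, the filter classes are the documented ones):

```xml
<fieldType name="text_stem_keep_original" class="solr.TextField">
  <analyzer>
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <!-- Emits each token twice, once flagged as a keyword so the stemmer skips it -->
    <filter class="solr.KeywordRepeatFilterFactory"/>
    <filter class="solr.PorterStemFilterFactory"/>
    <!-- Collapses the pair back to one token when stemming changed nothing -->
    <filter class="solr.RemoveDuplicatesTokenFilterFactory"/>
  </analyzer>
</fieldType>
```

The original surface form and the stem then both land in the index at the same position, so an exact match scores on both while a stemmed-only match scores on one.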
- Original
Details please. You say MapReduce. Is this the
MapReduceIndexerTool? If so, you can use
the --go-live option to auto-merge them. Your
Solr instances need to be running over HDFS
though.
If you don't have Solr running over HDFS, you can
just copy the results for each shard to the right place.
What
FWIW, I do a lot of moving Lucene indexes around and as long as the core is
unloaded it's never been an issue for Solr to be running at the same time.
If you move a core into the correct hierarchy for a replica, you can call
the Collections API's CREATESHARD action with the appropriate params
If each token has a language attribute on it, then when I search by word and
language with highlighting switched on, every word of the sentence will be
highlighted. Because of that, this solution does not fit.
Hi, my case is a little simpler. For example, I have 100 collections now in my
solr cloud, and I want to backup 20 of them so I can restore them later. I
think I can just copy the index and log for each shard/core to another
location, then delete the collections. Later, I can create new
On 9/17/2014 7:06 AM, Ere Maijala wrote:
As far as I can see, when a Solr instance is started (whether
standalone or SolrCloud), a PingRequestHandler will wait until index
warmup is complete before returning (at least with
useColdSearcher=false) which may take a while. This poses a problem in
On 9/17/2014 8:07 PM, Shawn Heisey wrote:
I've got haproxy in front of my solr servers. My checks happen every
five seconds, with a 4990 millisecond timeout. My ping handler query
(defined in solrconfig.xml) is q=*:*&rows=1 ... so it's very simple
and fast. Because of efficiencies in the *:*
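Shawn's setup could be sketched as a haproxy fragment like the following (backend name, addresses and ping path are assumptions; the 5s/4990ms figures are his):

```
backend solr
  # Probe the ping handler every 5s; fail the check after 4990ms
  option httpchk GET /solr/collection1/admin/ping
  timeout check 4990ms
  server solr1 10.1.1.1:8983 check inter 5s
  server solr2 10.1.1.2:8983 check inter 5s
```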
If you are updating or deleting from your indexes I don't believe it is
possible to get a consistent copy of the index from the file system
directly without monkeying with hard links. The safest thing is to use the
ADDREPLICA command in the Collections API and then an UNLOAD from the CORE
API if
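The ADDREPLICA-then-UNLOAD sequence might look like this (host, collection and core names are assumptions):

```shell
# Create a new replica of shard1 on another node; it syncs a consistent
# copy of the index from the shard leader
curl 'http://backup-host:8983/solr/admin/collections?action=ADDREPLICA&collection=mycoll&shard=shard1'
# Once the replica is active, unload the temporary core; its data dir
# stays on disk and is your backup
curl 'http://backup-host:8983/solr/admin/cores?action=UNLOAD&core=mycoll_shard1_replica2'
```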
I'm using SOLR-hs_0.06, based on Solr 4.10.
I have SolrCloud with external ZooKeepers.
I manually indexed with DIH from MySQL on each instance - we have a lot of
DBs, so it's one DB per Solr instance.
All was just fine - I could search and so on.
Then I sent update queries (a lot of them, about 1 or