Answer to myself:
using solr.KeywordTokenizerFactory together with solr.WordDelimiterFilterFactory
preserves the original phone number and also adds a token with the spaces
removed.
input: 12345 67890
tokens: 12345 67890, 12345, 67890, 1234567890
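A field type along these lines should produce exactly those tokens (a sketch;
the type name "phone" and the exact attribute values are my assumptions, not
from the original message):

```xml
<!-- Sketch of a phone-number field type: KeywordTokenizerFactory keeps the
     whole input as a single token, then WordDelimiterFilterFactory splits on
     the space while also emitting the original value and a concatenated,
     space-free token. -->
<fieldType name="phone" class="solr.TextField" positionIncrementGap="100">
  <analyzer>
    <tokenizer class="solr.KeywordTokenizerFactory"/>
    <filter class="solr.WordDelimiterFilterFactory"
            generateWordParts="1"
            catenateAll="1"
            preserveOriginal="1"/>
  </analyzer>
</fieldType>
```

For the input "12345 67890" this should yield "12345 67890" (preserveOriginal),
"12345" and "67890" (generateWordParts), and "1234567890" (catenateAll).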
Two advantages: I don't need another field and
Hi,
I'm using the edismax parser to perform runtime boosting. Here's my sample
request handler entry.
<str name="qf">text^2 title^3</str>
<str name="bq">Source:Blog^3 Source2:Videos^2</str>
<str name="bf">recip(ms(NOW/DAY,PublishDate),3.16e-11,1,1)^2.0</str>
As you can see, I'm adding weights to text and title,
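For context, in solrconfig.xml those parameters would sit inside the handler's
defaults, roughly like this (a sketch; the handler name "/browse" and the
surrounding defaults are my assumptions, not from the message):

```xml
<!-- Sketch: an edismax request handler with query-time boosts.
     The handler name "/browse" is assumed for illustration. -->
<requestHandler name="/browse" class="solr.SearchHandler">
  <lst name="defaults">
    <str name="defType">edismax</str>
    <!-- field boosts: title matches weighted 3x, text matches 2x -->
    <str name="qf">text^2 title^3</str>
    <!-- additive boost queries favoring particular sources -->
    <str name="bq">Source:Blog^3 Source2:Videos^2</str>
    <!-- recency boost: recip decays from 1 toward 0 as documents age
         (3.16e-11 is roughly 1 / milliseconds-per-year) -->
    <str name="bf">recip(ms(NOW/DAY,PublishDate),3.16e-11,1,1)^2.0</str>
  </lst>
</requestHandler>
```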
Thanks for the reply. I found one solution: modify the DocList and DocSet after
searching. Look at the following code snippet.
private void sortByRecordIDNew(SolrIndexSearcher.QueryResult result,
                               ResponseBuilder rb) throws IOException {
    DocList docList = result.getDocListAndSet().docList;
Hi Shamik,
Yes, it is possible with the map and query functions.
Please see Jan's example :
http://www.cominvent.com/2012/01/25/super-flexible-autocomplete-with-solr/
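Conditional boosting with map and query can look roughly like this (a sketch;
the parameter name blogq and the multiplier values are my assumptions):

```
boost=map(query($blogq,0),0,0,1,3)&blogq=Source:Blog
```

Here query($blogq,0) returns the document's score for Source:Blog, or 0 when it
does not match; map(...,0,0,1,3) then turns a score of 0 into a multiplier of 1
and any match into a multiplier of 3.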
On Wednesday, June 11, 2014 9:34 AM, Shamik Bandopadhyay sham...@gmail.com
wrote:
Hi,
I'm using edismax parser to perform a
Would Solr use multithreading to process the records of a function
query as described above? In my scenario concurrent searches are not
the issue, rather the speed of one query will be the optimization
target. Or will I have to set up distributed search to achieve that?
Thanks,
Robert
On Tue,
Thanks for the input!
Erick - To clarify, we see the No Uncommitted Changes message repeatedly
for a number of commits (not a consistent number each time this happens)
and then eventually we see a commit that successfully finds changes, at
which point the documents are available.
Shalin - That
Hello,
I just moved from Solr 4.6 to Solr 4.8.1 and I noticed differences in the way
Hunspell works.
Some changes are fixes (due to
https://issues.apache.org/jira/browse/LUCENE-5483 I assume) but other changes
look like regressions.
To check this, I have compared the results obtained in the
While running a Solr-based Web application on Tomcat 6, we have been
repeatedly running into Out of Memory issues. However, these OOM errors are
not related to the Java heap. A snapshot of our Solr dashboard just before
the OOM error reported:
Physical memory: 7.13/7.29 GB
JVM-Memory: 57.90 MB
On Wed, Jun 11, 2014 at 7:46 AM, Robert Krüger krue...@lesspain.de wrote:
Or will I have to set up distributed search to achieve that?
Yes, you have to shard it to achieve that. The shards could be on the
same node.
There were some discussions this year in JIRA about being able to do
In Solr 4.9 there is a feature called RankQueries that allows you to
plug in your own ranking collector. So, if you wanted to write a
ranking/sorting collector that used a thread per segment, you could cleanly
plug it in.
Joel Bernstein
Search Engineer at Heliosearch
On Wed, Jun 11, 2014 at
Hi,
Any suggestion for a tokenizer / filter / other solution that supports the
following searches in Solr?

Use Case: All Results
  Input: *
  Solr should return: all results

Use Case: Prefix Search
  Input: Text*
  Solr should return: all data starting with "Text" (prefix search)

Use Case: Exact Search
  Input: Auto Text
  Solr should return: exact match only ("Auto Text")

Use Case: Partial
I have a text_general field and want to use its value in a custom function.
I'm unable to do so. It seems that the tokenizer messes this up and only a
fraction of the entire value is being retrieved. See below for more details.
<doc>
  <str name="id">1</str>
  <str name="field_t">term1 term2 term3</str>
  <long
Hello everyone.
I’m having problems with the performance of queries with facets; the time
spent resolving a query is very high.
The index has 10 million documents, each with 100 fields.
The server has 8 cores and 56 GB of RAM, running on Jetty with this
memory configuration:
On 6/11/2014 9:30 AM, Costi Muraru wrote:
I have a text_general field and want to use its value in a custom function.
I'm unable to do so. It seems that the tokenizer messes this up and only a
fraction of the entire value is being retrieved. See below for more details.
Low-level Lucene details
I have configured many Tomcat+SolrCloud setups, but I'm now trying to
research the new solr.properties configuration.
I have a functioning zookeeper to which I manually loaded a
configuration using:
zkcli.sh -cmd upconfig \
    -zkhost xx.xx.xx.xx:2181 \
    -d /test/conf \
    -n test
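After the upload, the config set still has to be associated with a collection;
with the same zkcli.sh that would look roughly like this (a sketch; the
collection name "test" is my assumption):

```shell
# Link an existing collection to the uploaded config set
# (the collection name "test" is assumed for illustration).
zkcli.sh -cmd linkconfig \
    -zkhost xx.xx.xx.xx:2181 \
    -collection test \
    -confname test
```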
My
Thanks Ahmet, I'll give it a shot.
--
View this message in context:
http://lucene.472066.n3.nabble.com/Can-we-do-conditional-boosting-using-edismax-tp4141131p4141268.html
Sent from the Solr - User mailing list archive at Nabble.com.
Hi,
My requirement is to execute this Hive query in Solr:

SELECT SUM(Primary_cause_vaR), collect_set(skuType), RiskType, market,
       collect_set(primary_cause)
FROM bil_tos
WHERE skuType = 'Product'
GROUP BY RiskType, market;

I can implement the sum and group-by operations in Solr using the StatsComponent
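With the StatsComponent, the sum-per-group part can be sketched with parameters
along these lines (my assumption for how the fields map; note that stats.facet
computes stats per value of one facet field at a time, so a true two-field
GROUP BY would need a combined field or separate requests):

```
q=skuType:Product
&stats=true
&stats.field=Primary_cause_vaR
&stats.facet=RiskType
&stats.facet=market
```

The response then contains the sum (and other statistics) of Primary_cause_vaR
broken down by each RiskType value and by each market value.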