Need help with Nested docs situation
Hello,

I have a situation and I'm a bit stuck on how to fix it. Consider the following data structure:

Deal: All Coca Cola 20% off
Products:
  Coca Cola Light
  Coca Cola Zero 1L
  Coca Cola Zero 20CL
  Coca Cola 1L

When somebody searches for "Cola discount" I want the deal returned together with its related products.

Solution #1: I could index it with nested docs (Solr 4.9). The problem is that when a product changes (let's say Zero gets a new name, Extra Light), I have to re-index every deal containing that product.

Solution #2: I could make 2 collections, one with deals and one with products, where a product gets a parentid (dealid). But then I have to do 2 queries to get the information. With a result page of 10 deals where I want to preview the first 2 products of each, that means a lot of queries, though it doesn't have the update problem of solution #1.

Does anyone have a good solution for this? Thanks, any help is appreciated.

Roy

--
View this message in context: http://lucene.472066.n3.nabble.com/Need-help-with-Nested-docs-situation-tp4203190.html
Sent from the Solr - User mailing list archive at Nabble.com.
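The two-collection approach from solution #2 can be sketched as a client-side join: fetch the deals, fetch their products in one query by parentid, and group in the application. This is a minimal sketch; the dicts below stand in for the results of the two Solr queries, and the field names (id, parentid, name) are assumptions.

```python
def preview_products(deals, products, per_deal=2):
    """Attach the first `per_deal` products to each deal, joined on parentid."""
    by_parent = {}
    for p in products:
        by_parent.setdefault(p["parentid"], []).append(p)
    for deal in deals:
        deal["preview"] = by_parent.get(deal["id"], [])[:per_deal]
    return deals

# Stand-ins for the two query results (one deals query, one products query
# filtered on the collected deal ids, e.g. fq=parentid:(d1 OR d2 ...)).
deals = [{"id": "d1", "title": "All Coca Cola 20% off"}]
products = [
    {"id": "p1", "parentid": "d1", "name": "Coca Cola Light"},
    {"id": "p2", "parentid": "d1", "name": "Coca Cola Zero 1L"},
    {"id": "p3", "parentid": "d1", "name": "Coca Cola Zero 20CL"},
]

result = preview_products(deals, products)
```

Collecting the 10 deal ids into a single products query keeps it at 2 requests per result page rather than 11, without the re-indexing problem of nested docs.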
Add core in solr.xml | Problem with starting SOLRcloud
Hello,

Our platform has 4 Solr instances and 3 ZooKeepers (Solr 4.1.0). I want to add a new core to my SolrCloud. I added the new core to the solr.xml file:

  <core name="collection2" instanceDir="collection2" />

I put the config files in the directory collection2, uploaded the new config to ZooKeeper and started Solr. Solr did not start up and gives the following error:

Oct 16, 2014 4:57:06 PM org.apache.solr.cloud.ZkController publish
INFO: publishing core=collection1 state=recovering
Oct 16, 2014 4:57:06 PM org.apache.solr.cloud.ZkController publish
INFO: numShards not found on descriptor - reading it from system property
Oct 16, 2014 4:57:06 PM org.apache.solr.client.solrj.impl.HttpClientUtil createClient
INFO: Creating new http client, config:maxConnections=128&maxConnectionsPerHost=32&followRedirects=false
Oct 16, 2014 4:59:06 PM org.apache.solr.common.SolrException log
SEVERE: Error while trying to recover. core=collection1:org.apache.solr.common.SolrException: I was asked to wait on state recovering for 31.114.2.237:8910_solr but I still do not see the requested state. I see state: active live:true
    at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:404)
    at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:181)
    at org.apache.solr.cloud.RecoveryStrategy.sendPrepRecoveryCmd(RecoveryStrategy.java:202)
    at org.apache.solr.cloud.RecoveryStrategy.doRecovery(RecoveryStrategy.java:346)
    at org.apache.solr.cloud.RecoveryStrategy.run(RecoveryStrategy.java:223)
Oct 16, 2014 4:59:06 PM org.apache.solr.cloud.RecoveryStrategy doRecovery
SEVERE: Recovery failed - trying again... (0) core=collection1
Oct 16, 2014 4:59:06 PM org.apache.solr.cloud.RecoveryStrategy doRecovery
INFO: Wait 2.0 seconds before trying to recover again (1)
Oct 16, 2014 4:59:08 PM org.apache.solr.cloud.ZkController publish
INFO: publishing core=collection1 state=recovering
Oct 16, 2014 4:59:08 PM org.apache.solr.cloud.ZkController publish
INFO: numShards not found on descriptor - reading it from system property
Oct 16, 2014 4:59:08 PM org.apache.solr.client.solrj.impl.HttpClientUtil createClient
INFO: Creating new http client, config:maxConnections=128&maxConnectionsPerHost=32&followRedirects=false

What's wrong with my setup? Any help would be appreciated!

Roy

--
View this message in context: http://lucene.472066.n3.nabble.com/Add-core-in-solr-xml-Problem-with-starting-SOLRcloud-tp4164524.html
Sent from the Solr - User mailing list archive at Nabble.com.
Re: SOLR4 Spatial sorting and query string
Hello David,

For the first months there won't be that many points per doc, but I will keep the topic in mind!

The next step is that I want to know which location matched my query. Example: Product A is available in 3 stores, so the doc looks like this:

<doc>
  <str name="productname">Product A</str>
  <arr name="store_coordinates">
    <str>store1_geo</str>
    <str>store2_geo</str>
    <str>store3_geo</str>
  </arr>
  <arr name="storeinfo">
    <str>London#store1_geo</str>
    <str>Amsterdam#store2_geo</str>
    <str>New York#store3_geo</str>
  </arr>
</doc>

I query the index with my location set to Berlin and a radius of 250 km. I know this result comes back in first place because it's close to Amsterdam (store2_geo). But how can I find out which point matched my query as the closest one? Is it possible to get this back? I could do it in my application, but with 200 stores I don't think that's the best solution.

Thanks,
Roy

--
View this message in context: http://lucene.472066.n3.nabble.com/SOLR4-Spatial-sorting-and-query-string-tp4084318p4084795.html
Sent from the Solr - User mailing list archive at Nabble.com.
Re: SOLR4 Spatial sorting and query string
Great, it works very well. In Solr 4.5 I will use geodist() again! Thanks David.

--
View this message in context: http://lucene.472066.n3.nabble.com/SOLR4-Spatial-sorting-and-query-string-tp4084318p4084487.html
Sent from the Solr - User mailing list archive at Nabble.com.
Re: SOLR4 Spatial sorting and query string
Hello,

I have a question about performance with a lot of points and spatial search. First I will explain my situation: we have product data and want to store the geo location of every store that sells a product. I use a multivalued coordinates field for the geo data:

<arr name="store_coordinates">
  <str>lat,long</str>
  <str>lat,long</str>
  <str>lat,long</str>
  <str>lat,long</str>
  <str>lat,long</str>
</arr>

The config:

<field name="store_coordinates" type="location_rpt" indexed="true" stored="true" multiValued="true"/>
<fieldType name="location_rpt" class="solr.SpatialRecursivePrefixTreeFieldType" distErrPct="0.025" maxDistErr="0.09" units="degrees"/>

When I search for a product term I want only the products that are near the given location, so I use the following query:

fq=_query_:"{!geofilt sfield=store_coordinates pt=locationlat,locationlong d=25}"

It works great, but my question is: will it still be quick and smooth with 1000+ stores in my store_coordinates field?

Any help is appreciated. Thanks,
Roy

--
View this message in context: http://lucene.472066.n3.nabble.com/SOLR4-Spatial-sorting-and-query-string-tp4084318p4084521.html
Sent from the Solr - User mailing list archive at Nabble.com.
SOLR4 Spatial sorting and query string
Hello,

I use the following distance sorting with Solr 4 (solr.SpatialRecursivePrefixTreeFieldType):

fl=*,score
sort=score asc
q={!geofilt score=distance filter=false sfield=coords pt=54.729696,-98.525391 d=10}

(from the tutorial at http://wiki.apache.org/solr/SolrAdaptersForLuceneSpatial4)

Now I want to query on a search string and still sort on distance. How can I combine this in the above Solr request? When I add something to q= it doesn't work. I tried _query_ subqueries and other things but I can't get it working.

I appreciate any help. Thanks

--
View this message in context: http://lucene.472066.n3.nabble.com/SOLR4-Spatial-sorting-and-query-string-tp4084318.html
Sent from the Solr - User mailing list archive at Nabble.com.
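One common way to combine a keyword query with distance is to move the geofilt into fq and sort on geodist() instead of score, which frees q for the search string. This is a hedged sketch, not the list's confirmed answer: the search term "cola" is hypothetical, the point and radius are taken from the mail, and note that sorting geodist() on an RPT field only works again from Solr 4.5 (as the reply earlier in this thread mentions); on Solr 4.0-4.4 the equivalent is sorting on query($sq) with sq={!geofilt score=distance ...}.

```python
from urllib.parse import urlencode

# Build the request parameters: keyword match in q, distance as a filter,
# distance as the sort key (geodist(sfield, lat, lon) form).
params = {
    "q": "cola",  # hypothetical search string
    "fq": "{!geofilt sfield=coords pt=54.729696,-98.525391 d=10}",
    "sort": "geodist(coords,54.729696,-98.525391) asc",
    "fl": "*,score",
}
query_string = urlencode(params)
```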
Realtime updates solrcloud
Hello Guys,

I want to use the realtime update mechanism of SolrCloud. My setup is as follows: 3 Solr engines, 3 ZooKeeper instances (ensemble). The setup works great: recovery, leader election etc. The problem is with realtime updates; they become slow once the servers get some traffic. Let me explain.

I test the realtime update with the following command:

curl http://SOLRURL:SOLRPORT/solr/update -H "Content-Type: text/xml" --data-binary '<add><doc><field name="id">3504811</field><field name="website">http://www.google.nl</field></doc></add>'

I see this in the logs of the Solr server:

Mar 29, 2013 12:38:51 PM org.apache.solr.update.processor.LogUpdateProcessor finish
INFO: [collection1] webapp=/solr path=/update params={} {add=[3504811 (1430841858290876416)]} 0 35

The other Solr servers get the following lines in the log:

INFO: [collection1] webapp=/solr path=/update params={distrib.from=http://SOLRIP:SOLRPORT/solr/collection1/update&update.distrib=FROMLEADER&wt=javabin&version=2} {add=[3504811 (1430844456234385408)]} 0 14

This looks good: the doc is added and the leader sends it to the other Solr servers. At first it takes about 1 second for the update to become visible :) But when I send some traffic to the server (200 q/s), the update takes about 30 seconds to become visible. Even after I stop the traffic it still takes 30 seconds. How is that possible?

The relevant solrconfig parts:

<autoCommit>
  <maxTime>60</maxTime>
  <openSearcher>false</openSearcher>
</autoCommit>
<autoSoftCommit>
  <maxTime>2000</maxTime>
</autoSoftCommit>

Did I miss something?

Best Regards,
Roy

--
View this message in context: http://lucene.472066.n3.nabble.com/Realtime-updates-solrcloud-tp4052370.html
Sent from the Solr - User mailing list archive at Nabble.com.
Re: Advice: solrCloud + DIH
Thanks for the support so far. I was running the dataimport on a replica! Now I start it on the leader and it runs at 590 docs/s. I think all docs were being forwarded to another node and then coming back.

Is there a way to determine the leader? If there is, I can detect the leader with a script and start the DIH every night on the right server.

Roy

--
View this message in context: http://lucene.472066.n3.nabble.com/Advice-solrCloud-DIH-tp4047339p4047627.html
Sent from the Solr - User mailing list archive at Nabble.com.
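Detecting the leader from a script can be done by reading the clusterstate.json that Solr 4.x keeps in ZooKeeper: the leader replica carries "leader":"true". A minimal sketch, assuming a simplified clusterstate; a real script would first fetch the node from ZooKeeper (e.g. via zkCli or a ZooKeeper client library), and the IPs below are hypothetical.

```python
import json

# Simplified, hypothetical clusterstate.json as stored by Solr 4.x in ZooKeeper.
clusterstate = json.loads("""
{
  "collection1": {
    "shards": {
      "shard1": {
        "replicas": {
          "core_node1": {"base_url": "http://10.0.0.1:8983/solr", "leader": "true"},
          "core_node2": {"base_url": "http://10.0.0.2:8983/solr"}
        }
      }
    }
  }
}
""")

def find_leader(state, collection):
    """Return the base_url of the replica marked as leader, or None."""
    for shard in state[collection]["shards"].values():
        for replica in shard["replicas"].values():
            if replica.get("leader") == "true":
                return replica["base_url"]

leader = find_leader(clusterstate, "collection1")
```

A nightly cron job could run this lookup and then fire the DIH full-import request at the returned URL.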
Advice: solrCloud + DIH
Hello,

I need some advice with my SolrCloud cluster and the DIH. I have a cluster with 3 cloud servers; every server runs a Solr instance and a ZooKeeper instance, started with the -DzkHost parameter. It works great. I send updates with curl (XML) like this:

curl http://ip:SOLRport/solr/update -H "Content-Type: text/xml" --data-binary '<add><doc><field name="id">223232</field><field name="content">test</field></doc></add>'

Solr has 2 million docs in the index. Now I want an extra field, content2. I add this to my schema and upload it again to the cluster with -Dbootstrap_confdir and -Dcollection.configName; it replicates to the whole cluster. Now I need a re-index to add the field to every doc. I have a database with all the data and want to use the full-import of the DIH (this is how I did it in previous Solr versions). When I run it, it goes at 3 docs/s (really slow). When I run Solr standalone (not SolrCloud) it goes at 600 docs/s.

What's the best way to do a full re-index with SolrCloud? Does SolrCloud support the DIH?

Thanks

--
View this message in context: http://lucene.472066.n3.nabble.com/Advice-solrCloud-DIH-tp4047339.html
Sent from the Solr - User mailing list archive at Nabble.com.
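The payload the curl command above posts is Solr's XML update format. A minimal sketch of building such a payload programmatically (the helper name add_doc_xml is mine, not a Solr API), which is handy when scripting re-indexing in batches instead of one doc per request:

```python
from xml.sax.saxutils import escape

def add_doc_xml(fields):
    """Build an <add><doc>...</doc></add> payload from a field dict."""
    parts = ["<add><doc>"]
    for name, value in fields.items():
        parts.append('<field name="%s">%s</field>' % (name, escape(str(value))))
    parts.append("</doc></add>")
    return "".join(parts)

payload = add_doc_xml({"id": 223232, "content": "test"})
```

Batching many <doc> elements inside one <add> cuts per-request overhead considerably compared with one curl call per document.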
Re: SolrCloud: port out of range:-1
In the end I want 3 servers; this was only a test. I know that a majority of servers is needed to provide service. I read some tutorials about ZooKeeper and looked at the wiki. I installed ZooKeeper separately on each server and connected them to each other (zoo.cfg). In the logs I can see the ZooKeepers know about each other. When starting Solr, I use the -DzkHost parameter to declare the ZooKeepers of the servers:

-DzkHost=ip:2181,ip:2181,ip:2181

It works great :)

P.S. With embedded ZooKeepers I can't get it working; with a second server in zkHost it returns an error. Strange, but for me the separate ZooKeepers are a great solution: separate logs, and easy to reuse from other servers (in the future I want to split into 3 Solr instances and 5 ZooKeeper instances).

THANKS

--
View this message in context: http://lucene.472066.n3.nabble.com/SolrCloud-port-out-of-range-1-tp4045804p4046278.html
Sent from the Solr - User mailing list archive at Nabble.com.
SolrCloud: port out of range:-1
Hello,

I have some problems with SolrCloud and ZooKeeper. I have 2 servers and want to run a Solr instance on both, each running an embedded ZooKeeper. When I try to start the first one I get the error: port out of range:-1.

The command I run to start Solr with embedded ZooKeeper:

java -Djetty.port=4110 -DzkRun=10.100.10.101:5110 -DzkHost=10.100.10.101:5110,10.100.10.102:5120 -Dbootstrap_conf=true -DnumShards=1 -Xmx1024M -Xms512M -jar start.jar

It should run Solr on port 4110 and the embedded zk on 5110. The -DzkHost gives the URL of the localhost zk (5110) and the URL of the other server's zk port. When I try to start this it gives the error: port out of range:-1. What's wrong?

Thanks
Roy

--
View this message in context: http://lucene.472066.n3.nabble.com/SolrCloud-port-out-of-range-1-tp4045804.html
Sent from the Solr - User mailing list archive at Nabble.com.
Re: Fieldnorm solr 4 - specialchars(worddelimiter)
I have done some more testing with different examples. It really is the WordDelimiterFilter that influences the fieldnorm. When I search for "barcelona", the doc with "FC Barcelona" scores higher than "FC-Barcelona": the fieldnorm for "FC Barcelona" is 0.625 and the fieldnorm for "FC-Barcelona" is 0.5.

Analysis:

FC Barcelona:  fc | barcelona
FC-Barcelona:  fc | barcelona | fcbarcelona

So it's 2 terms against 3, and that explains the difference in score. In Solr 3.1 the score is the same: the fieldnorm is 0.625 for both docs. It looks like catenateWords had no influence on the norm in Solr 3.1. I want the score to be the same with or without catenateWords, like it is in Solr 3.1. Is this possible?

Thanks
Roy

--
View this message in context: http://lucene.472066.n3.nabble.com/Fieldnorm-solr-4-specialchars-worddelimiter-tp4036248p4036679.html
Sent from the Solr - User mailing list archive at Nabble.com.
Re: Fieldnorm solr 4 - specialchars(worddelimiter)
Hello Jack,

I'm using exactly the same fieldtype:

<fieldType name="text_delimiter" class="solr.TextField">
  <analyzer type="index">
    <charFilter class="solr.HTMLStripCharFilterFactory"/>
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <filter class="solr.ASCIIFoldingFilterFactory"/>
    <filter class="solr.TrimFilterFactory"/>
    <filter class="solr.WordDelimiterFilterFactory" generateWordParts="1" catenateWords="1" splitOnCaseChange="0" splitOnNumerics="0" stemEnglishPossessive="0"/>
  </analyzer>
  <analyzer type="query">
    <charFilter class="solr.HTMLStripCharFilterFactory"/>
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <filter class="solr.ASCIIFoldingFilterFactory"/>
    <filter class="solr.TrimFilterFactory"/>
    <filter class="solr.WordDelimiterFilterFactory" generateWordParts="1" catenateWords="0" splitOnCaseChange="0" splitOnNumerics="0" stemEnglishPossessive="0"/>
  </analyzer>
</fieldType>

It looks like catenateWords has a different influence in Solr 4.1 than in the previous version (3.1). The analysis is the same in both versions. I want exactly the same results but can't get them.

--
View this message in context: http://lucene.472066.n3.nabble.com/Fieldnorm-solr-4-specialchars-worddelimiter-tp4036248p4036749.html
Sent from the Solr - User mailing list archive at Nabble.com.
Re: Fieldnorm solr 4 - specialchars(worddelimiter)
Hello Jack,

Thanks for your answer. It's clear; I think it was a bug in 3.1. The difference in fieldnorm was just not what I expected. I will tweak the schema to get it closer to the expected results.

Thanks Jack,
Roy

--
View this message in context: http://lucene.472066.n3.nabble.com/Fieldnorm-solr-4-specialchars-worddelimiter-tp4036248p4036759.html
Sent from the Solr - User mailing list archive at Nabble.com.
Fieldnorm solr 4 - specialchars(worddelimiter)
Hello,

I see some differences in Solr score between Solr 3.1 and Solr 4. I have a searchfield with the following type:

<fieldType name="text_delimiter" class="solr.TextField">
  <analyzer type="index">
    <charFilter class="solr.HTMLStripCharFilterFactory"/>
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <filter class="solr.ASCIIFoldingFilterFactory"/>
    <filter class="solr.TrimFilterFactory"/>
    <filter class="solr.WordDelimiterFilterFactory" generateWordParts="1" catenateWords="1" splitOnCaseChange="0" splitOnNumerics="0" stemEnglishPossessive="0"/>
  </analyzer>
  <analyzer type="query">
    <charFilter class="solr.HTMLStripCharFilterFactory"/>
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <filter class="solr.ASCIIFoldingFilterFactory"/>
    <filter class="solr.TrimFilterFactory"/>
    <filter class="solr.WordDelimiterFilterFactory" generateWordParts="1" catenateWords="0" splitOnCaseChange="0" splitOnNumerics="0" stemEnglishPossessive="0"/>
  </analyzer>
</fieldType>

An example of fieldnorms, search term "barcelona":

Solr 3.1:
  fc barcelona soccer club   - 0.5
  fc-barcelona soccer club   - 0.5

Solr 4:
  fc barcelona soccer club   - 0.5
  fc-barcelona soccer club   - 0.4375

It could be the catenateWords of the fieldtype config: fc, barcelona, fcbarcelona, soccer, club (5 terms = 0.4375). Strange that Solr 3.1 counted only 4 terms with the same filter. Why is the fieldnorm different? I need some help with this :)

Thanks
Roy

--
View this message in context: http://lucene.472066.n3.nabble.com/Fieldnorm-solr-4-specialchars-worddelimiter-tp4036248.html
Sent from the Solr - User mailing list archive at Nabble.com.
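The fieldnorm values above line up with Lucene's default length norm, 1/sqrt(numTerms). A minimal sketch of that formula; the observed 0.4375 (rather than the exact 1/sqrt(5) ≈ 0.447) comes from the lossy 8-bit encoding Lucene uses to store norms, which rounds values down to a small set of representable steps.

```python
import math

def length_norm(num_terms):
    """Lucene default length norm with boost=1: 1/sqrt(number of terms)."""
    return 1.0 / math.sqrt(num_terms)

norm_4_terms = length_norm(4)  # "fc barcelona soccer club"
norm_5_terms = length_norm(5)  # ... plus the catenated term "fcbarcelona"
```

With catenateWords=1 at index time, the hyphenated doc gets the extra "fcbarcelona" term, pushing it from the 4-term norm (0.5) to the 5-term norm (which encodes as 0.4375).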
Re: Scoring differences solr versions
Hello Shawn,

Thanks for the help. Indented format:

SOLR4

<str name="1697058">
3.3243241 = (MATCH) sum of:
  0.20717455 = (MATCH) max plus 1.0 times others of:
    0.19920631 = (MATCH) weight(firstname_search:g^50.0 in 783453) [DefaultSimilarity], result of:
      0.19920631 = score(doc=783453,freq=1.0 = termFreq=1.0), product of:
        0.11625154 = queryWeight, product of:
          50.0 = boost
          3.4271598 = idf(docFreq=195811, maxDocs=2217897)
          6.784133E-4 = queryNorm
        1.7135799 = fieldWeight in 783453, product of:
          1.0 = tf(freq=1.0), with freq of:
            1.0 = termFreq=1.0
          3.4271598 = idf(docFreq=195811, maxDocs=2217897)
          0.5 = fieldNorm(doc=783453)
    0.007968252 = (MATCH) weight(name_first_letter:g in 783453) [DefaultSimilarity], result of:
      0.007968252 = score(doc=783453,freq=1.0 = termFreq=1.0), product of:
        0.0023250307 = queryWeight, product of:
          3.4271598 = idf(docFreq=195811, maxDocs=2217897)
          6.784133E-4 = queryNorm
        3.4271598 = fieldWeight in 783453, product of:
          1.0 = tf(freq=1.0), with freq of:
            1.0 = termFreq=1.0
          3.4271598 = idf(docFreq=195811, maxDocs=2217897)
          1.0 = fieldNorm(doc=783453)
  3.1171496 = (MATCH) max plus 1.0 times others of:
    3.1171496 = (MATCH) weight(lastname_search:aalbers^50.0 in 783453) [DefaultSimilarity], result of:
      3.1171496 = score(doc=783453,freq=1.0 = termFreq=1.0), product of:
        0.3251704 = queryWeight, product of:
          50.0 = boost
          9.586204 = idf(docFreq=413, maxDocs=2217897)
          6.784133E-4 = queryNorm
        9.586204 = fieldWeight in 783453, product of:
          1.0 = tf(freq=1.0), with freq of:
            1.0 = termFreq=1.0
          9.586204 = idf(docFreq=413, maxDocs=2217897)
          1.0 = fieldNorm(doc=783453)
</str>

SOLR 3.1

<str name="1697058">
3.3741257 = (MATCH) sum of:
  0.25697616 = (MATCH) max plus 1.0 times others of:
    0.2490079 = (MATCH) weight(firstname_search:g^50.0 in 1697008), product of:
      0.11625154 = queryWeight(firstname_search:g^50.0), product of:
        50.0 = boost
        3.4271598 = idf(docFreq=195811, maxDocs=2217897)
        6.784133E-4 = queryNorm
      2.141975 = (MATCH) fieldWeight(firstname_search:g in 1697008), product of:
        1.0 = tf(termFreq(firstname_search:g)=1)
        3.4271598 = idf(docFreq=195811, maxDocs=2217897)
        0.625 = fieldNorm(field=firstname_search, doc=1697008)
    0.007968252 = (MATCH) weight(name_first_letter:g in 1697008), product of:
      0.0023250307 = queryWeight(name_first_letter:g), product of:
        3.4271598 = idf(docFreq=195811, maxDocs=2217897)
        6.784133E-4 = queryNorm
      3.4271598 = (MATCH) fieldWeight(name_first_letter:g in 1697008), product of:
        1.0 = tf(termFreq(name_first_letter:g)=1)
        3.4271598 = idf(docFreq=195811, maxDocs=2217897)
        1.0 = fieldNorm(field=name_first_letter, doc=1697008)
  3.1171496 = (MATCH) max plus 1.0 times others of:
    3.1171496 = (MATCH) weight(lastname_search:aalbers^50.0 in 1697008), product of:
      0.3251704 = queryWeight(lastname_search:aalbers^50.0), product of:
        50.0 = boost
        9.586204 = idf(docFreq=413, maxDocs=2217897)
        6.784133E-4 = queryNorm
      9.586204 = (MATCH) fieldWeight(lastname_search:aalbers in 1697008), product of:
        1.0 = tf(termFreq(lastname_search:aalbers)=1)
        9.586204 = idf(docFreq=413, maxDocs=2217897)
        1.0 = fieldNorm(field=lastname_search, doc=1697008)
</str>

Why does this doc score higher in Solr 3.1?

--
View this message in context: http://lucene.472066.n3.nabble.com/Scoring-differences-solr-versions-tp4035106p4035334.html
Sent from the Solr - User mailing list archive at Nabble.com.
Scoring differences solr versions
Hi,

I have some questions about the scoring in Solr 4. I ran the same query on 2 versions of Solr (same indexed docs). The debug of the scoring:

SOLR4:

3.3243241 = (MATCH) sum of:
  0.20717455 = (MATCH) max plus 1.0 times others of:
    0.19920631 = (MATCH) weight(firstname_search:g^50.0 in 783453) [DefaultSimilarity], result of:
      0.19920631 = score(doc=783453,freq=1.0 = termFreq=1.0), product of:
        0.11625154 = queryWeight, product of:
          50.0 = boost
          3.4271598 = idf(docFreq=195811, maxDocs=2217897)
          6.784133E-4 = queryNorm
        1.7135799 = fieldWeight in 783453, product of:
          1.0 = tf(freq=1.0), with freq of:
            1.0 = termFreq=1.0
          3.4271598 = idf(docFreq=195811, maxDocs=2217897)
          0.5 = fieldNorm(doc=783453)
    0.007968252 = (MATCH) weight(name_first_letter:g in 783453) [DefaultSimilarity], result of:
      0.007968252 = score(doc=783453,freq=1.0 = termFreq=1.0), product of:
        0.0023250307 = queryWeight, product of:
          3.4271598 = idf(docFreq=195811, maxDocs=2217897)
          6.784133E-4 = queryNorm
        3.4271598 = fieldWeight in 783453, product of:
          1.0 = tf(freq=1.0), with freq of:
            1.0 = termFreq=1.0
          3.4271598 = idf(docFreq=195811, maxDocs=2217897)
          1.0 = fieldNorm(doc=783453)
  3.1171496 = (MATCH) max plus 1.0 times others of:
    3.1171496 = (MATCH) weight(lastname_search:aalbers^50.0 in 783453) [DefaultSimilarity], result of:
      3.1171496 = score(doc=783453,freq=1.0 = termFreq=1.0), product of:
        0.3251704 = queryWeight, product of:
          50.0 = boost
          9.586204 = idf(docFreq=413, maxDocs=2217897)
          6.784133E-4 = queryNorm
        9.586204 = fieldWeight in 783453, product of:
          1.0 = tf(freq=1.0), with freq of:
            1.0 = termFreq=1.0
          9.586204 = idf(docFreq=413, maxDocs=2217897)
          1.0 = fieldNorm(doc=783453)

SOLR3.1:

3.3741257 = (MATCH) sum of:
  0.25697616 = (MATCH) max plus 1.0 times others of:
    0.2490079 = (MATCH) weight(firstname_search:g^50.0 in 1697008), product of:
      0.11625154 = queryWeight(firstname_search:g^50.0), product of:
        50.0 = boost
        3.4271598 = idf(docFreq=195811, maxDocs=2217897)
        6.784133E-4 = queryNorm
      2.141975 = (MATCH) fieldWeight(firstname_search:g in 1697008), product of:
        1.0 = tf(termFreq(firstname_search:g)=1)
        3.4271598 = idf(docFreq=195811, maxDocs=2217897)
        0.625 = fieldNorm(field=firstname_search, doc=1697008)
    0.007968252 = (MATCH) weight(name_first_letter:g in 1697008), product of:
      0.0023250307 = queryWeight(name_first_letter:g), product of:
        3.4271598 = idf(docFreq=195811, maxDocs=2217897)
        6.784133E-4 = queryNorm
      3.4271598 = (MATCH) fieldWeight(name_first_letter:g in 1697008), product of:
        1.0 = tf(termFreq(name_first_letter:g)=1)
        3.4271598 = idf(docFreq=195811, maxDocs=2217897)
        1.0 = fieldNorm(field=name_first_letter, doc=1697008)
  3.1171496 = (MATCH) max plus 1.0 times others of:
    3.1171496 = (MATCH) weight(lastname_search:aalbers^50.0 in 1697008), product of:
      0.3251704 = queryWeight(lastname_search:aalbers^50.0), product of:
        50.0 = boost
        9.586204 = idf(docFreq=413, maxDocs=2217897)
        6.784133E-4 = queryNorm
      9.586204 = (MATCH) fieldWeight(lastname_search:aalbers in 1697008), product of:
        1.0 = tf(termFreq(lastname_search:aalbers)=1)
        9.586204 = idf(docFreq=413, maxDocs=2217897)
        1.0 = fieldNorm(field=lastname_search, doc=1697008)

What is the reason for the difference in score? Is there something really different in calculating scores in Solr 4?

Thanks
Roy

--
View this message in context: http://lucene.472066.n3.nabble.com/Scoring-differences-solr-versions-tp4035106.html
Sent from the Solr - User mailing list archive at Nabble.com.
Order SOLR 4 output
Hello,

I have a really simple question, I think: what determines the order of the fields in the Solr response? In Solr 3.1 it was alphabetical, but in Solr 4 it isn't anymore. Is it configurable?

I want to know this because I have a test script that checks differences in output between the Solr versions. When the order of the output fields differs, it's really hard to check/test.

Thanks

--
View this message in context: http://lucene.472066.n3.nabble.com/Order-SOLR-4-output-tp4027711.html
Sent from the Solr - User mailing list archive at Nabble.com.
Spelling output solr 4
Hello,

It looks like the direct spelling component returns the correction wrapped in parentheses, like "(correction)". Why is this, and are there other differences in the spelling component in Solr 4 compared to Solr 3.1?

Thanks
Roy

--
View this message in context: http://lucene.472066.n3.nabble.com/Spelling-output-solr-4-tp4025043.html
Sent from the Solr - User mailing list archive at Nabble.com.
Re: From Solr3.1 to SolrCloud
Ok, that's important for the traffic. Some questions about ZooKeeper, after doing some tests:

- How can I delete configs from ZooKeeper?
- I see some nodes in the clusterstate that are already gone. Why is this not up to date? Same for the graph.

Thanks again!

--
View this message in context: http://lucene.472066.n3.nabble.com/From-Solr3-1-to-SolrCloud-tp4021536p4022311.html
Sent from the Solr - User mailing list archive at Nabble.com.
Re: From Solr3.1 to SolrCloud
Mark: I'm using a separate ZooKeeper instance; I don't use the embedded zk in Solr. I can't find the location where the configs are stored. I can log in to ZooKeeper and see the configs; the delete command works, but I can't delete the whole config directory at once, only file by file.

Erick: the nodes aren't live anymore and not visible in live_nodes, but they are still in the cloud graph. Why is this, and how can I remove them from there? I was testing with 10 nodes and now run only 4; I see 6 nodes that aren't there anymore.

--
View this message in context: http://lucene.472066.n3.nabble.com/From-Solr3-1-to-SolrCloud-tp4021536p4022358.html
Sent from the Solr - User mailing list archive at Nabble.com.
Re: From Solr3.1 to SolrCloud
Thanks Tomás for the information so far. You said: "You can effectively run with only one zk instance, the problem with this is that if that instance dies, then your whole cluster will go down."

When the cluster goes down, can I still send queries to the Solr instances? We have a load balancer that chooses a Solr instance round robin. Can Solr still handle queries when there is no ZooKeeper up, and only updates will be a problem?

--
View this message in context: http://lucene.472066.n3.nabble.com/From-Solr3-1-to-SolrCloud-tp4021536p4021991.html
Sent from the Solr - User mailing list archive at Nabble.com.
Re: From Solr3.1 to SolrCloud
I run a separate ZooKeeper instance right now. Works great; the nodes are visible in the admin UI. Two more questions:

- I change my synonyms.txt on a Solr node. How can I get ZooKeeper in sync, and the other Solr nodes, without a restart?
- I read some more about the ZooKeeper ensemble. If I run 4 Solr nodes (replicas), I need 3 ZooKeepers in the ensemble to keep a majority live. With ZooKeeper and Solr separated, that would take 7 servers; in the past we only needed 4. Are there other options? The costs will grow, and 3 dedicated ZooKeeper servers sounds like overkill.

Thanks

--
View this message in context: http://lucene.472066.n3.nabble.com/From-Solr3-1-to-SolrCloud-tp4021536p4021849.html
Sent from the Solr - User mailing list archive at Nabble.com.
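The ensemble sizing in the question follows from ZooKeeper's quorum rule: an ensemble of n servers needs a strict majority alive, so it tolerates (n - 1) // 2 failures. A minimal sketch of that arithmetic:

```python
def quorum(n):
    """Minimum number of live servers for a ZooKeeper ensemble of size n."""
    return n // 2 + 1

def tolerated_failures(n):
    """How many servers an ensemble of size n can lose and still serve."""
    return n - quorum(n)
```

This is why 3 is the smallest ensemble that survives one server loss, and why 4 ZooKeepers tolerate no more failures than 3. A common cost saving, rather than 7 machines, is to co-locate the 3 ZooKeeper processes with 3 of the Solr nodes; ZooKeeper is lightweight as long as its disk I/O is not starved.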
From Solr3.1 to SolrCloud
Hello,

We are using Solr 3.1 for searching on our webpage right now, and we want to use the nice new feature of Solr 4: realtime search. Our current configuration looks like this:

Master
  Slave1
  Slave2
  Slave3

We have 3 slaves and 1 master, and the data is replicated every night. In the future we want to update every ~5 seconds. I was looking at SolrCloud and have a few questions:

- We aren't using shards because our index only contains 1 million simple docs; we only need multiple servers because of the amount of traffic. In the SolrCloud examples I only see setups with shards. Is numShards=1 possible? Is one big index faster than multiple shards? Do I need 1 collection with multiple nodes?
- Should I run a single ZooKeeper instance (without Solr) on a separate server?
- Is the DIH still there in Solr 4?

Any help is welcome! Thanks

Roy

--
View this message in context: http://lucene.472066.n3.nabble.com/From-Solr3-1-to-SolrCloud-tp4021536.html
Sent from the Solr - User mailing list archive at Nabble.com.
Re: From Solr3.1 to SolrCloud
Thanks Tomás, I will use numShards=1. Are there instructions on how to install only ZooKeeper on a separate server, or do I have to install Solr 4 on that server too? And how do I make the connection between the Solr instances and the zk instance (server)?

Thanks so far,
Roy

--
View this message in context: http://lucene.472066.n3.nabble.com/From-Solr3-1-to-SolrCloud-tp4021536p4021583.html
Sent from the Solr - User mailing list archive at Nabble.com.
Solr backup replication - restore from snapshot
Hello,

I have some questions about restoring from a snapshot backup. I have a master and run the following command:

http://solr.test.uk:/solr/replication?command=backup

It creates a directory in my data directory: snapshot.20120911224532. When I want to use this backup on the master, I replace the index directory with the snapshot directory. I restart the master and it works!

Now I want to replicate this to my (live) slaves, but the slaves don't recognize the changes. I think the problem is the index version: the index version of the master (created from the snapshot) is lower than the index versions on the slaves. How can I fix this? Can I force the slaves to replicate without looking at index versions? Can I bump the index version on the master?

Any help will be appreciated!
Roy

--
View this message in context: http://lucene.472066.n3.nabble.com/Solr-backup-replication-restore-from-snapshot-tp4006821.html
Sent from the Solr - User mailing list archive at Nabble.com.
Boosting score by Geo distance
Hello,

I want to boost the score of the found documents by geo distance. I use this:

bf=recip(geodist(),2,1000,30)

It works, but I don't know what the parameters (2,1000,30) mean.

Thanks
Roy

--
View this message in context: http://lucene.472066.n3.nabble.com/Boosting-score-by-Geo-distance-tp3984035.html
Sent from the Solr - User mailing list archive at Nabble.com.
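Solr's recip function is documented as recip(x, m, a, b) = a / (m*x + b), so with bf=recip(geodist(),2,1000,30) the boost at distance d is 1000 / (2*d + 30): about 33.3 at d=0, decaying smoothly as d grows. A minimal sketch of that curve:

```python
def recip(x, m, a, b):
    """Solr function query recip(x,m,a,b) = a / (m*x + b)."""
    return a / (m * x + b)

boost_at_0km = recip(0, 2, 1000, 30)      # maximum boost, ~33.3
boost_at_100km = recip(100, 2, 1000, 30)  # ~4.35, much weaker
```

So a tunes the overall magnitude, b sets the boost at distance zero (a/b), and m controls how quickly the boost falls off with distance.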
Must match and terms with only one letter
Hello,

I use the mm option on my edismax requesthandler (70%). This works great, but I have one problem: when I search for "A Cole", only one term has to match (mm = 70%). The problem is the "A": it returns 9200 documents containing an A. Is there a possibility to skip terms with only one character? The mm value itself is fine (2 terms, 1 match), just not when the matching term is only one character.

Thanks
Roy

--
View this message in context: http://lucene.472066.n3.nabble.com/Must-match-and-terms-with-only-one-letter-tp3984139.html
Sent from the Solr - User mailing list archive at Nabble.com.
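For reference, this is how a percentage mm turns into a clause count: per the dismax documentation, the number of optional clauses that must match is the percentage of the clause count rounded down (a floor of 1 is assumed here). Whether one-letter terms can be excluded from that count is exactly the open question above; one commonly suggested workaround is a LengthFilterFactory (min="2") in the analyzer chain so single-character tokens never become clauses at all.

```python
import math

def required_matches(num_clauses, percent=0.70):
    """Optional clauses required to match for a percentage mm, floored at 1."""
    return max(1, math.floor(num_clauses * percent))

# "A Cole" -> 2 clauses, mm=70% -> 1 required match, which is why the
# lone "A" is enough to pull in 9200 documents.
```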
Re: Re:how to monitor solr in newrelic
Try this when you start Solr:

java -javaagent:/NEWRELICPATH/newrelic.jar -jar start.jar

Normally you will see your Solr installation on your New Relic dashboard within 2 minutes.

--
View this message in context: http://lucene.472066.n3.nabble.com/how-to-monitor-solr-in-newrelic-tp3739567p3743488.html
Sent from the Solr - User mailing list archive at Nabble.com.
Differences in debugQuery and results
Hello,

I have some configuration problems and can't get this working. I see differences in the debugQuery output.

I search for: w.j

((DisjunctionMaxQuery(((name1_search:w name1_search:j)^5.0 | (name2_search:w name2_search:j)^5.0)~1.0))

I search for: w j

((DisjunctionMaxQuery((name1_search:w^5.0 | name2_search:w^5.0)~1.0) DisjunctionMaxQuery((name1_search:j^5.0 | name2_search:j^5.0)~1.0))

I use the WordDelimiterFilter to split on a dot. Why is there a difference? I want Solr to handle both cases the same way. How can I fix this?

CONFIG:

<fieldType name="text_delimiter" class="solr.TextField" positionIncrementGap="100" autoGeneratePhraseQueries="false">
  <analyzer type="index">
    <charFilter class="solr.HTMLStripCharFilterFactory"/>
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <filter class="solr.ASCIIFoldingFilterFactory"/>
    <filter class="solr.TrimFilterFactory"/>
    <filter class="solr.WordDelimiterFilterFactory" generateWordParts="1" catenateWords="0" splitOnCaseChange="0" splitOnNumerics="0" stemEnglishPossessive="0"/>
  </analyzer>
  <analyzer type="query">
    <charFilter class="solr.HTMLStripCharFilterFactory"/>
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <filter class="solr.ASCIIFoldingFilterFactory"/>
    <filter class="solr.TrimFilterFactory"/>
    <filter class="solr.WordDelimiterFilterFactory" generateWordParts="1" catenateWords="0" splitOnCaseChange="0" splitOnNumerics="0" stemEnglishPossessive="0"/>
  </analyzer>
</fieldType>

Roy

--
View this message in context: http://lucene.472066.n3.nabble.com/Differences-in-debugQuery-and-results-tp3603817p3603817.html
Sent from the Solr - User mailing list archive at Nabble.com.
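A hedged toy model of the analysis chain above may help separate the two effects: with generateWordParts=1, "w.j" analyzes to the same two terms as "w j", so the difference in the debug output does not come from the analyzer. It comes from the query parser, which splits the input on whitespace before analysis: "w.j" is one clause whose analysis yields two terms inside a single DisjunctionMaxQuery, while "w j" is two clauses, each with its own DisjunctionMaxQuery (this explanation is my reading of the debug output, not a confirmed answer from the list).

```python
import re

def analyze(text):
    """Toy approximation: WhitespaceTokenizer + lowercase + word parts."""
    terms = []
    for token in text.lower().split():  # whitespace tokenization
        # WordDelimiterFilter-ish: split on non-alphanumerics, drop empties
        terms.extend(t for t in re.split(r"[^0-9a-z]+", token) if t)
    return terms

terms_dotted = analyze("w.j")
terms_spaced = analyze("w j")
```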
Re: Replication downtime?? - master slave
Thanks Erick, it's good to hear the slave doesn't notice anything. Roy
Replication downtime?? - master slave
Hello, I have one Solr instance and I'm very happy with it. Now we have multiple daily updates, and I see that response time is slower while an update is running. I think I need master-slave replication. My questions: Is a slave slower while a replication from master to slave is running? Is there any downtime when switching from the old data to the new? I only need one slave for performance, but if replication makes it slower I probably need two. Thanks, Roy
Re: Autocomplete(terms) performance problem
Thanks, it looks great! In the near future I will give it a try.
Re: Autocomplete(terms) performance problem
Thanks for your answer Nagendra. The problem is that I want to do infix searches: when I search for sisco, I want the autocomplete to suggest san fran*sisco*. That is not possible in the example you gave me either. Roy
Re: One field must match with edismax
Thanks, it was that easy. I was thinking of a variant of the mm option in dismax, but this works great!
One field must match with edismax
Hello, I have a problem with my application. I have several fields and use edismax to search across them. Now I want to configure that one particular field must match. An example:

  firstname  lastname  nicknames
  Lionel     Messi     leo,pulga

When I search, I want only results that match on lastname, possibly combined with other fields. So "lionel pulga" should give no results, but "messi leo" should match. In short: there must always be a match in one field (lastname). Is this possible?
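For anyone landing on this thread: one common way to express "lastname must match" with edismax is to make the lastname clause mandatory with a leading '+' and let the other fields match optionally. A sketch of the request parameters — the field names and the exact query shape here are illustrative assumptions, not necessarily the fix used in this thread:

```python
from urllib.parse import urlencode

def must_match_lastname(user_query):
    # '+lastname:(...)' makes the lastname clause required; the bare copy
    # of the query lets firstname/nicknames contribute optionally via qf.
    return urlencode({
        "defType": "edismax",
        "qf": "firstname nicknames",
        "q": "+lastname:({0}) ({0})".format(user_query),
    })

params = must_match_lastname("messi leo")
```

With this shape, "lionel pulga" matches nothing (no lastname hit), while "messi leo" does.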
solr.PatternReplaceFilterFactory AND endoffset
Hi, I have a problem with the PatternReplaceFilter. I can't use the WordDelimiterFilter because I only want to replace a few special characters that I choose myself. An example:

  Tottemham-hotspur (london)
  Arsenal (london)

I want to replace '-' with a space and remove '(' and ')'. In the analysis page I see this:

  position 1, term text: tottemham hotspur london, startOffset 0, endOffset 26

So the replace filter works. But when I search for "tottemham hotspur london" I get no results:

  position 1, term text: tottemham hotspur london, startOffset 0, endOffset 24

It does work when I search for "tottemham-hotspur (london)". I think the problem is the difference in endOffset (24 vs 26). I need some help...
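A quick illustration (plain Python, not Solr code) of why the two endOffsets differ: offsets always point into the original input, so replacements that change the string's length leave the final token text identical while the reported end position differs with the raw input:

```python
import re

indexed_input = "Tottemham-hotspur (london)"   # what was indexed
query_input   = "tottemham hotspur london"     # what the user types

def analyze(text):
    # The same replacements the filter chain does:
    # '-' becomes a space, '(' and ')' are removed, then lowercase.
    return re.sub(r"[()]", "", text.replace("-", " ")).lower()

print(analyze(indexed_input))   # tottemham hotspur london
print(len(indexed_input))       # 26 -> endOffset on the index side
print(len(query_input))         # 24 -> endOffset on the query side
```

Both sides produce the same token text, which is what matching should care about; the 26 vs 24 comes purely from the differing lengths of the raw inputs.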
Re: Performance issue: Frange with geodist()
Hi Yonik, I used your suggestion to implement a better radius search:

  facet.query={!geofilt d=10 key=d10}
  facet.query={!geofilt d=20 key=d20}
  facet.query={!geofilt d=50 key=d50}

It is a little faster than with geodist(), but I think it is still a bottleneck.
Errors in requesthandler statistics
Hello, I was looking at my Solr statistics, and in the request handler section I see an error count of 23. How can I see which requests returned these errors? Can I log them somewhere? Thanks, Roy
RE: Errors in requesthandler statistics
Hi, thanks for your answer. I have some logging from Jetty. Every request looks like this:

  <record>
    <date>2011-09-29T12:28:47</date>
    <millis>1317292127479</millis>
    <sequence>18470</sequence>
    <logger>org.apache.solr.core.SolrCore</logger>
    <level>INFO</level>
    <class>org.apache.solr.core.SolrCore</class>
    <method>execute</method>
    <thread>20</thread>
    <message>[] webapp=/solr path=/select/ params={spellcheck=true&facet=true&sort=geodist()+asc&sfield=coord&spellcheck.q=test&facet.limit=20&version=2.2&fl=id,what,where} hits=0 status=0 QTime=12</message>
  </record>

How can I see which record produced an error? The file contains 94000 requests. Roy
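One quick way to pull the failing requests out of a log in that shape — a sketch that assumes the java.util.logging XML records shown above, and that errors show up as a non-zero status= in the request message (SEVERE stack traces would need a separate check):

```python
import re

def failing_requests(log_text):
    """Return the <message> bodies whose status= field is non-zero."""
    failures = []
    for message in re.findall(r"<message>(.*?)</message>", log_text, re.S):
        m = re.search(r"status=(\d+)", message)
        if m and m.group(1) != "0":
            failures.append(message.strip())
    return failures
```

Run over the 94000-request file, this narrows the haystack down to just the requests that did not return status=0.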
Re: Slow autocomplete(terms)
Thanks for helping me so far. Yes, I have seen the EdgeNGram possibility. Correct me if I'm wrong, but I thought infix searches are not possible with EdgeNGrams? Like "chest" suggesting "manchester".
Re: Slow autocomplete(terms)
Hello Erick, thanks for your answer, but I have some problems with the NGramFilter. My config looks like this:

  <fieldType name="autocomplete" class="solr.TextField" positionIncrementGap="100">
    <analyzer type="index">
      <tokenizer class="solr.KeywordTokenizerFactory"/>
      <filter class="solr.LowerCaseFilterFactory"/>
      <filter class="solr.NGramFilterFactory" minGramSize="2" maxGramSize="8"/>
    </analyzer>
    <analyzer type="query">
      <tokenizer class="solr.KeywordTokenizerFactory"/>
      <filter class="solr.LowerCaseFilterFactory"/>
      <filter class="solr.TrimFilterFactory"/>
    </analyzer>
  </fieldType>

In the analysis I see all the grams of "manchester": ma an nc ch he es st te er, man anc nch che hes est ste ter, manc anch nche ches hest este ster, and so on up to manchest ancheste nchester.

When I use the terms component I get all these grams back in the response. So when I type "ches" I get: ches nches anches nchest ncheste. I want one suggestion with the complete keyword: manchester. Is this possible?
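For what it's worth, the usual fix is to keep matching on the grams but return a *stored* full value via a normal query, instead of the terms component (which can only hand back raw indexed terms). A rough sketch of that idea in plain Python — the term list is illustrative:

```python
def ngrams(term, lo=2, hi=8):
    # The substrings NGramFilterFactory(minGramSize=2, maxGramSize=8) produces.
    return {term[i:i + n]
            for n in range(lo, hi + 1)
            for i in range(len(term) - n + 1)}

# "Index": map each stored full term to its grams.
index = {t: ngrams(t) for t in ["manchester", "liverpool"]}

def suggest(fragment):
    # Match the user's fragment against the grams, return the full stored term.
    return sorted(term for term, grams in index.items() if fragment in grams)

print(suggest("ches"))
```

In Solr terms: query the ngram field, but display a copy of the original, un-grammed field from the matching documents.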
Slow autocomplete(terms)
Hello, I use the terms request for autocomplete. It works fine with 200.000 records, but with 2 million docs it is very slow. I use a regex to support autocomplete in the middle of words, for example: chest -> manchester. My call (pecl PHP Solr):

  $query = new SolrQuery();
  $query->setTermsLimit(10);
  $query->setTerms(true);
  $query->setTermsField($field);
  $term = SolrUtils::escapeQueryChars($term);
  $query->set('terms.regex', "(.*)$term(.*)");
  $query->set('terms.regex.flag', 'case_insensitive');

URL: /solr/terms?terms.fl=autocompletewhat&terms.regex=(.*)chest(.*)&terms.regex.flag=case_insensitive&terms=true

I think the regex is the reason for the very high query time: Solr scans the terms of 2 million docs with a regex. The query takes 2 seconds, which is too much for autocomplete. When a user types "manchester united", Solr has to run 16 queries of 2 seconds each. Are there other, faster options? I use Solr 3.1.
Re: Autocomplete(terms) performance problem
Thanks Klein. If I understand correctly there is no solution for this problem right now. The best option for me is to limit the count of suggestions. I still want to use the regex, and with 100.000 docs it looks like it's no problem.
Synonyms problem
Hello, I have a problem with synonyms. Some examples to describe it. Data:

  High school Lissabon
  High school Barcelona
  University of applied science

When a user searches for IFD, I want all these results back. So I want to use this synonym at query time:

  IFD => high school lissabon, high school barcelona, university of applied science

The data is stored in the field schools, whose type looks like this:

  <fieldType name="schools" class="solr.TextField" positionIncrementGap="100">
    <analyzer type="index">
      <charFilter class="solr.HTMLStripCharFilterFactory"/>
      <tokenizer class="solr.PatternTokenizerFactory" pattern="\s|,|-"/>
      <filter class="solr.LowerCaseFilterFactory"/>
    </analyzer>
    <analyzer type="query">
      <charFilter class="solr.HTMLStripCharFilterFactory"/>
      <tokenizer class="solr.PatternTokenizerFactory" pattern="\s|,|-"/>
      <filter class="solr.LowerCaseFilterFactory"/>
      <filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt" ignoreCase="true" expand="false"/>
    </analyzer>
  </fieldType>

As you can see I use a pattern tokenizer that splits on whitespace. When I apply the synonyms at query time, the analysis shows me this:

  high | school | lissabon  | science
  high | school | barcelona | university | of | applied |

When I search for IFD I get no results. I found this in debugQuery:

  schools:(high high university) (school school of) (lissabon barcelona applied) (science)

Now I see the problem: Solr tries a lot of combinations, but not the right one. I thought I could escape the whitespace in the synonyms (High\ school\ Lissabon). Then the analysis shows better results: High school Lissabon, High school Barcelona, University of applied science. But then Solr searches for "high school lissabon" as one token, while my index is tokenized on whitespace — still no results. I'm stuck, can someone help me? Thanks, R
Re: Find results with or without whitespace
Frankie, have you fixed this issue? I'm interested in your solution.
Re: Strip special chars like -
Erick, you're right. It's working; my schema looks like this:

  <fieldType name="name_type" class="solr.TextField" positionIncrementGap="100">
    <analyzer type="index">
      <charFilter class="solr.HTMLStripCharFilterFactory"/>
      <tokenizer class="solr.WhitespaceTokenizerFactory"/>
      <filter class="solr.LowerCaseFilterFactory"/>
      <filter class="solr.ASCIIFoldingFilterFactory"/>
      <filter class="solr.TrimFilterFactory"/>
      <filter class="solr.WordDelimiterFilterFactory" generateWordParts="1" catenateWords="1" splitOnCaseChange="0" splitOnNumerics="0" stemEnglishPossessive="0"/>
    </analyzer>
    <analyzer type="query">
      <charFilter class="solr.HTMLStripCharFilterFactory"/>
      <tokenizer class="solr.WhitespaceTokenizerFactory"/>
      <filter class="solr.LowerCaseFilterFactory"/>
      <filter class="solr.ASCIIFoldingFilterFactory"/>
      <filter class="solr.TrimFilterFactory"/>
      <filter class="solr.WordDelimiterFilterFactory" generateWordParts="1" catenateWords="0" catenateNumbers="0" splitOnCaseChange="0" splitOnNumerics="0" stemEnglishPossessive="0"/>
    </analyzer>
  </fieldType>

Thanks for helping me!!
Re: Strip special chars like -
Yes, I understand the difference between generateWordParts and catenateWords. But I can't fix my problem with these options; they don't cover all the possibilities.
Re: Strip special chars like -
OK, there are three query possibilities:

  Manchester-united
  Manchester united
  Manchesterunited

The original name of the club is manchester-united. generateWordParts fixes two of these: Manchester-united => manchester, united, so I can search for "Manchester-united" and "manchester united". When I search for "manchesterunited" I get no results. To fix this I could use catenateWords: Manchester-united => manchesterunited. In that case I can search for "Manchester-united" and "manchesterunited", but "manchester united" gives no results. So the catenateWords option also only fixes two of the three cases.
Updating opinion
Hello, I would like some opinions on the update process of my application. Users can edit their own data. This data is validated and must be updated every 24 hours; I want to do this at night (0:00). Now let's say 50.000 documents were edited. The delta import takes ~20 minutes, so the indexing process is done at 0:20. Some of the data depends on the day, so the index contains wrong data for 20 minutes. I thought I could fix this as follows: do a delta import every hour without a commit, 24 times, and at the end of the day do one commit and optimize the index. Is this possible? Is it faster to do the updates in parts?
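The hourly-delta-without-commit idea can be driven over HTTP. A sketch of the request URLs, assuming a default local Solr with the DataImportHandler mounted at /dataimport (host, port and paths are assumptions for illustration):

```python
from urllib.parse import urlencode

SOLR = "http://localhost:8983/solr"  # assumed host and core path

def dih_url(command, **params):
    # Build a DataImportHandler request URL.
    return SOLR + "/dataimport?" + urlencode({"command": command, **params})

# Every hour: import the changes but keep them invisible to searchers.
hourly = dih_url("delta-import", commit="false")

# Once at night: a single commit (+optimize) via the update handler makes
# all of the day's pending imports visible at once.
nightly_commit = SOLR + "/update?" + urlencode(
    {"commit": "true", "optimize": "true"})
```

One caveat: any other commit in the meantime (an autocommit in solrconfig.xml included) would expose the pending documents early.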
Re: - character in search query
Here is my complete fieldtype:

  <fieldType name="name" class="solr.TextField" positionIncrementGap="100">
    <analyzer>
      <charFilter class="solr.HTMLStripCharFilterFactory"/>
      <tokenizer class="solr.PatternTokenizerFactory" pattern="\s|,"/>
      <filter class="solr.LowerCaseFilterFactory"/>
      <filter class="solr.PatternReplaceFilterFactory" pattern="-" replacement=""/>
      <filter class="solr.ASCIIFoldingFilterFactory"/>
    </analyzer>
  </fieldType>

In the field analysis I see that the '-' is removed by the PatternReplaceFilter. When I escape the term ($q = SolrUtils::escapeQueryChars($q);) I see something like this in my debugQuery (term = arsenal - london):

  +((DisjunctionMaxQuery((name:arsenal)~1.0) DisjunctionMaxQuery((name:\ london)~1.0))~2) ()

When I don't escape the query I get something like this:

  +((DisjunctionMaxQuery((name:arsenal)~1.0) -DisjunctionMaxQuery((name:london)~1.0))~1) ()

The '-' in my term is turned into a -DisjunctionMaxQuery (a NOT clause). How can I fix this problem? What is the easiest way?
Re: - character in search query
When I use the edismax handler the escaping works great (before, I used the dismax handler). The debugQuery shows me this:

  +((DisjunctionMaxQuery((name:arsenal)~1.0) DisjunctionMaxQuery((name:london)~1.0))~2

The \ is no longer in the parsed query, so I get the results I wanted. I don't know why the dismax handler behaves this way. Can someone tell me the difference between the dismax and edismax handlers?
Re: - character in search query
Anybody?
- character in search query
Hello, I have a problem with a '-' character in the search term. I have queries like this:

  Arsenal - london
  Ajax - amsterdam
  Arsenal - moskou
  Arsenal - China

When I send "arsenal - london" to Solr I get 2 results: China and moskou. I looked at the debugQuery, and it looks like Solr is searching for Arsenal that is NOT in london. How can I make Solr handle the '-' as normal text? I tried something like this, but it's not working:

  <charFilter class="solr.PatternReplaceCharFilterFactory" pattern="-" replacement=""/>

Quotes work ("Arsenal - london"), but then I can no longer search for "london arsenal".
Re: - character in search query
Thanks! I use the escape function of the Solr pecl package to escape special characters: http://docs.php.net/manual/kr/solrutils.escapequerychars.php
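For readers without the pecl extension: the escaping boils down to backslash-prefixing the Lucene query metacharacters. A rough Python equivalent (the exact character set handled by SolrUtils::escapeQueryChars may differ slightly):

```python
import re

# Lucene/Solr query-syntax metacharacters to escape.
SPECIAL_CHARS = r'+-&|!(){}[]^"~*?:\/'

def escape_query_chars(text):
    # Prefix every metacharacter with a backslash.
    return re.sub("([{}])".format(re.escape(SPECIAL_CHARS)), r"\\\1", text)

print(escape_query_chars("arsenal - london"))   # arsenal \- london
```

The escaped '-' is then parsed as literal text instead of a NOT operator.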
Re: - character in search query
It looks like it's still not working. I send this to Solr: q=arsenal \- london. I get no results. When I look at the debugQuery I see this:

  (name:arsenal | city:arsenal)~1.0 (name:\ | city:\)~1.0 (name:london | city:london)~1.0

My requesthandler:

  <requestHandler name="dismax" class="solr.SearchHandler" default="true">
    <lst name="defaults">
      <str name="defType">dismax</str>
      <str name="qf">name city</str>
    </lst>
  </requestHandler>

What is going wrong?
Re: - character in search query
Yes, I had a tokenizer like this:

  <tokenizer class="solr.PatternTokenizerFactory" pattern="\s|-|,"/>

Now I removed the '-' from the tokenizer and the debugQuery looks like this:

  (name:arsenal | city:arsenal)~1.0 (name:\- | city:\-)~1.0 (name:london | city:london)~1.0

Still I get no results...
Re: How many fields can SOLR handle?
Yes, I use something like that. I make a db connection to get the facets for the chosen category, and with that data I add the facet.field parameters dynamically, for example:

  foreach ($results as $result) {
      $qStr .= "&facet.field=" . $result;
  }

I was looking for a solution that doesn't need to fetch the facets from the db. Now I cache the list (PHP), so I don't need a db request every time. Thanks for all the help!!
Re: How many fields can SOLR handle?
Hello Erik, I need the *_facet fields also for searching, so stored must be true. About "Then, and I used *_facet similar to you, kept a list of all *_facet actual field names and used those in all subsequent search requests": isn't that bad for performance? I only need a few facets, not all of them (only the facets for the chosen category).
Re: Problem with spellchecking, dont want multiple request to SOLR
What should the query look like? I can't define 2 spellcheckers in one query. I want something like this. Search: Soccerclub (what), Manchester (where):

  select/?q=socerclub macnchester&spellcheck=true&spellcheck.dictionary=spell_what&spellcheck.dictionary=spell_where&spell_what=socerclub&spell_where=macnchester

Now I have 2 spellcheckers in my requesthandler, but I can't address them correctly in my query. My config looks like this:

  <arr name="last-components">
    <str>spellcheck1</str>
    <str>spellcheck2</str>
  </arr>

  <searchComponent name="spellcheck1" class="solr.SpellCheckComponent">
    <lst name="spellchecker">
      <str name="name">spell_what</str>
      <str name="field">spell_search1</str>
      <str name="buildOnOptimize">true</str>
      <str name="spellcheckIndexDir">spellchecker1</str>
    </lst>
  </searchComponent>

  <searchComponent name="spellcheck2" class="solr.SpellCheckComponent">
    <lst name="spellchecker">
      <str name="name">spell_where</str>
      <str name="field">spell_search2</str>
      <str name="buildOnOptimize">true</str>
      <str name="spellcheckIndexDir">spellchecker2</str>
    </lst>
  </searchComponent>
Re: How many fields can SOLR handle?
Hi, I know I can add components to my requesthandler. In this situation the facets depend on the category. If a user chooses the category TV:

  Inch: 32 inch (5), 34 inch (3), 40 inch (1)
  Resolution: Full HD (5), HD ready (2)

When a user searches in the category Computer:

  CPU: Intel (12), AMD (10)
  GPU: Ati (5), Nvidia (2)

So I can't put the facets in my requesthandler as defaults; every search can have different facets. Do you understand what I mean?
Re: How many fields can SOLR handle?
Thanks Bill, that's exactly what I mean. But first I do a request to get the right facet fields for a category. So when a user searches for TV, I do a request to a db to get tv_size and resolution. The next step is to add these to my query like this: facet.field=tv_size&facet.field=resolution. I thought maybe it was possible to add the facet fields to my query automatically (based on category). I understand that isn't possible, and I first need a request to get the facet.field values.
Re: faceting on field with two values
Are you using the DIH? You can use a transformer to concat the two fields.
Re: How many fields can SOLR handle?
Nobody? I'm still confused about this.
SOLR and SQL functions
Hello, I have made my own SQL function (isSoccerClub). In my SQL query browser this works fine. My query looks like:

  select * from soccer where isSoccerClub(id,name) = 1;

Now I try to use this with the DIH. It looks like this:

  <entity name="soccerclubs_entity" query="select * from soccer where isSoccerClub(id,name) = 1">
  </entity>

Now I get an error on the full-import: "Indexing failed. Rolled back all changes." Without "where isSoccerClub(id,name) = 1" it works fine. Does Solr not support SQL functions (transact-sql)?
Re: SOLR and SQL functions
OK, I checked my error logs and found some problems. The connection runs these statements:

  SET NAMES latin1
  SET character_set_results = NULL
  SHOW VARIABLES
  SHOW COLLATION
  SET autocommit=1
  SET sql_mode='STRICT_TRANS_TABLES'
  SET autocommit=0
  select * from soccer where isSoccerClub(id,name) = 1;

I see that sql_mode is set to STRICT_TRANS_TABLES. When I run this sequence in MySQL I get errors; without the sql_mode it works. Can I change this variable, or do you know a better solution?
Re: SOLR and SQL functions
I have found the problem: some records had incorrect data. Thanks for your help so far!!
Re: Find results with or without whitespace
I had the same problem: http://lucene.472066.n3.nabble.com/Results-with-and-without-whitespace-soccer-club-and-soccerclub-td2934742.html#a2964942
Query may only contain [a-z][0-9]
Hello, is it possible to configure Solr so that only numbers and letters are accepted ([a-z][0-9])? When a user enters a term like + or - I get Solr errors. How can I exclude these characters?
Re: Query may only contain [a-z][0-9]
Yes, I use the dismax handler, but I will fix this in my application layer. Thanks.
Re: Complex situation
Hello, I have changed my db dates to the correct format, like 2011-01-11T00:00:00Z. Now I have the following data:

  Manchester Store  2011-01-01T00:00:00Z  2011-31-03T00:00:00Z  18:00
  Manchester Store  2011-01-04T00:00:00Z  2011-31-12T00:00:00Z  20:00

The Manchester Store has 2 seasons with different closing times (18:00 and 20:00). Now I have 4 fields in Solr:

  Companyname            Manchester Store
  startdate (multiV)     2011-01-01T00:00:00Z, 2011-01-04T00:00:00Z
  enddate (multiV)       2011-31-03T00:00:00Z, 2011-31-12T00:00:00Z
  closingTime (multiV)   18:00, 20:00

I want facets like this: Open today (2011-23-06): 20:00 (1). The facet query needs to look at the current date and use that season's closing time. My facet.query looks like this:

  facet.query=startdate:[* TO NOW] AND enddate:[NOW TO *] AND closingTime:18:00

This returns a count of 1: 18:00 (1). But this facet.query also returns 1 result:

  facet.query=startdate:[* TO NOW] AND enddate:[NOW TO *] AND closingTime:20:00

That result is not correct, because today (2011-23-06) it is not open till 20:00. It looks like there is no link between the season and the closingTime. Can somebody help me? Are my Solr fields wrong? Thanks, Roy
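The underlying issue is that Solr evaluates each multiValued field independently: a document matches when *any* value of each field satisfies its clause, and the positions of the values are not correlated across fields. A simplified illustration (plain Python with simplified ISO date strings, not Solr code):

```python
# One document, three parallel multivalued fields -- but the query engine
# sees three independent bags of values, not rows of one table.
doc = {
    "startdate":   ["2011-01-01", "2011-04-01"],
    "enddate":     ["2011-03-31", "2011-12-31"],
    "closingTime": ["18:00", "20:00"],
}

def facet_query_matches(doc, closing, today="2011-06-23"):
    # Mimics: startdate:[* TO NOW] AND enddate:[NOW TO *] AND closingTime:X
    return (any(v <= today for v in doc["startdate"])
            and any(v >= today for v in doc["enddate"])
            and closing in doc["closingTime"])

print(facet_query_matches(doc, "20:00"))  # the season active in June
print(facet_query_matches(doc, "18:00"))  # ALSO matches, although its
                                          # season ended at the end of March
```

Both facet queries count the document, which is exactly the "no link between the season and the closingTime" effect seen above; fixing it needs the season rows kept together (one document per season, or nested documents).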
Re: Complex situation
Hello Lee, I thought of a possible solution: I can index the correct opening hours for the next day every night. So tonight (00:01) I can index the opening hours for 2011-24-06. The query in my DIH can look like this:

  select * from OPENINGHOURS o where o.startdate <= NOW() AND o.enddate >= NOW() AND o.companyid = '${OTHER_ENTITY.companyid}'

With this query I only store the opening hours for today, so I have only one field (openinghours), e.g. openinghours = 18:00. Then I can facet easily on it (facet.field=openinghours). I don't know if I can update it every night without problems? Can I use the delta import?
Re: Complex situation
Thanks, it works!! I want to change the format of NOW in Solr. Is that possible? Now the date format looks like this: yyyy-MM-dd'T'HH:mm:ss'Z'. In my db the format is dd-MM. How can I adjust NOW so I can do something like [* TO NOW] with dd-MM?
Re: Complex situation
Yes, the current year. I understand that something like dd-mm-yy isn't possible. I will fix this in my db. Thanks for your help!
Re: [ANNOUNCEMENT] PHP Solr Extension 1.0.1 Stable Has Been Released
Are you working on some changes to support earlier versions of PHP?
Re: Complex situation
Hi, I want all the results, not only the results for the current season. Let's say I search for supermarket and get results 1, 2 and 3 in my response (see previous post). Then I want facets with opening hours on the left. Let's say today is 02/08/2011. Then my facets look like this:

  18:00 (2)
  20:00 (1)

Company 2 is open till 20:00; companies 1 and 3 are open till 18:00. So there are different opening hours depending on the season. I hope you understand what I mean.
Re: Complex situation
With facet.query=startdate:[* TO NOW] AND enddate:[NOW TO *] I will get all the results? I get the startdate and enddate from my db with the DIH. My schema.xml looks like this:

  <fieldType name="date" class="solr.TrieDateField" omitNorms="true" precisionStep="0" positionIncrementGap="0"/>
  <field name="startDate" type="date" indexed="true" stored="true"/>
  <field name="endDate" type="date" indexed="true" stored="true"/>

When I use the facet.query I only get a count of companies. What I want is a count per opening hour. Maybe I forgot something?
Complex situation
Hello, first I will try to explain the situation. I have companies with opening hours, and some companies have multiple seasons with different opening hours. Some example data:

  Companyid  Startdate(d-m)  Enddate(d-m)  Openinghours_end
  1          01-01           01-04         17:00
  1          01-04           01-08         18:00
  1          01-08           31-12         17:30
  2          01-01           31-12         20:00
  3          01-01           01-06         17:00
  3          01-06           31-12         18:00

What I want is facets on the left side of my page, looking like this:

  Closing today on:
  17:00 (23)
  18:00 (2)
  20:00 (1)

So I need NOW to know which opening hours (seasons) belong in my facet results. How should my index look? Can anybody help me with how to store this data in the Solr index?
SnowballPorterFilterFactory and apostrophes
Hello, I use the SnowballPorterFilter (Dutch) to stem the words in my index, like this:

  restaurants  => restaurant
  restauranten => restaurant
  apples       => apple

Now I see on my Solr analysis page that this happens with mcdonald's:

  mcdonald's => mcdonald'

I don't want stemming for apostrophes. Is that possible? I could use keepword.txt, but there are thousands of words with apostrophes.
Re: Query on Synonyms feature in Solr
Maybe you can try escaping the synonyms so they are not tokenized on whitespace: Private\ schools,NGO\ Schools,Unaided\ schools
Re: WordDelimiter and stemEnglishPossessive doesn't work
Ok, with catenateWords the indexed term will be mcdonalds, but that's not what I want. I only use the WordDelimiterFilter to split on whitespace. I already use the PatternTokenizerFactory, so I can't use the WhitespaceTokenizer. I want my index to look like this:

dataset: mcdonald's#burgerking#Free record shop#hm

mcdonald's
burgerking
free
record
shop
hm

Can I configure the WordDelimiterFilter as a whitespace tokenizer, so that it only splits on whitespace and nothing more (not removing 's etc.)?

-- View this message in context: http://lucene.472066.n3.nabble.com/WordDelimiter-and-stemEnglishPossessive-doesn-t-work-tp3047678p3062461.html Sent from the Solr - User mailing list archive at Nabble.com.
Re: WordDelimiter and stemEnglishPossessive doesn't work
THANK YOU!! I thought I could only use one character for the pattern. Now I use a regular expression:

<tokenizer class="solr.PatternTokenizerFactory" pattern="#|\s"/>

I don't need the WordDelimiterFilter anymore; it splits on # and whitespace.

dataset: mcdonald's#burgerking#Free record shop#hm

mcdonald's
burgerking
free
record
shop
hm

This is exactly how we want it.

-- View this message in context: http://lucene.472066.n3.nabble.com/WordDelimiter-and-stemEnglishPossessive-doesn-t-work-tp3047678p3062984.html Sent from the Solr - User mailing list archive at Nabble.com.
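For anyone wanting to sanity-check the regex before putting it in the schema, the PatternTokenizer's split behaviour can be mimicked with an ordinary regex split (a rough approximation; the lowercasing in the desired output would come from a separate LowerCaseFilter, not from the tokenizer):

```python
import re

# Same pattern as in the tokenizer config: split on '#' or any whitespace.
data = "mcdonald's#burgerking#Free record shop#hm"
tokens = [t for t in re.split(r"#|\s", data) if t]
print(tokens)
# ["mcdonald's", 'burgerking', 'Free', 'record', 'shop', 'hm']
```

Note that the apostrophe survives, which is exactly the behaviour the WordDelimiterFilter was destroying.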
WordDelimiter and stemEnglishPossessive doesn't work
Hello,

I have a problem with the WordDelimiterFilter. My data looks like this:

mcdonald's#burgerking#Free record shop#hm

I want to tokenize this on #, and after that it has to split on whitespace. I use the WordDelimiterFilter for that (I can't use two tokenizers). This works, but there is one problem: it removes the '. My index looks like this:

mcdonald
burgerking
free
record
shop
hm

I don't want this, so I set stemEnglishPossessive. The description of this part of the filter says:

stemEnglishPossessive="1" causes trailing 's to be removed for each subword. Doug's -> Doug. Default is true (1); set to 0 to turn off.

My field looks like this:

<fieldType name="Test_field" class="solr.TextField">
  <analyzer>
    <charFilter class="solr.HTMLStripCharFilterFactory"/>
    <tokenizer class="solr.PatternTokenizerFactory" pattern="#"/>
    <filter class="solr.WordDelimiterFilterFactory" splitOnCaseChange="0" splitOnNumerics="0" stemEnglishPossessive="0" catenateWords="0"/>
  </analyzer>
</fieldType>

It looks like stemEnglishPossessive="0" is not working. How can I fix this problem? Another filter? Did I forget something?

-- View this message in context: http://lucene.472066.n3.nabble.com/WordDelimiter-and-stemEnglishPossessive-doesn-t-work-tp3047678p3047678.html Sent from the Solr - User mailing list archive at Nabble.com.
Solr monitoring: Newrelic
Hello,

I found this tool to monitor Solr queries, caches etc.: http://newrelic.com/

I have some problems with its installation. I get the following errors:

Could not locate a Tomcat, Jetty or JBoss instance in /var/www/sites/royr
Try re-running the install command from AppServerRootDirectory/newrelic. If that doesn't work, locate and edit the start script manually.
Generated New Relic configuration file /var/www/sites/royr/newrelic/newrelic.yml
* Install incomplete

Does anybody have experience with New Relic in combination with Solr?

-- View this message in context: http://lucene.472066.n3.nabble.com/Solr-monitoring-Newrelic-tp3042889p3042889.html Sent from the Solr - User mailing list archive at Nabble.com.
Re: Solr monitoring: Newrelic
I use Jetty; it's the standard in the Solr package. Where can I find the Jetty folder? Then I can run this command: java -jar newrelic.jar install

-- View this message in context: http://lucene.472066.n3.nabble.com/Solr-monitoring-Newrelic-tp3042889p3042981.html Sent from the Solr - User mailing list archive at Nabble.com.
Re: Solr monitoring: Newrelic
Yes, that's the problem. There is no Jetty folder. I have tried the example/lib directory, but it's not working. There is no Jetty war file, only jetty-***.jar files. Same error: could not locate a Jetty instance.

-- View this message in context: http://lucene.472066.n3.nabble.com/Solr-monitoring-Newrelic-tp3042889p3043080.html Sent from the Solr - User mailing list archive at Nabble.com.
How many fields can SOLR handle?
Hello,

I have a Solr implementation with 1M products. Every product has some information; let's say a television has information about pixels and inches, while a computer has information about harddisk, CPU and GPU. When a user searches for computer, I want to show the correct facets. An example:

User searches for: Computer
Facets:
CPU
  AMD (10)
  Intel (300)
GPU
  Nvidia (20)
  Ati (290)

Every product has different facets. I have something like this in my schema:

<dynamicField name="*_FACET" type="facetType" indexed="true" stored="true" multiValued="true"/>

So in Solr I now have a lot of fields: CPU_FACET, GPU_FACET etc. How many fields can Solr handle?

Another question: is it possible to add the FACET fields automatically to my query, something like facet.field=*_FACET? Right now I first do a request to a DB to get the facet titles and add them to the request: facet.field=cpu_FACET&facet.field=gpu_FACET. I'm afraid that *_FACET would be an overkill solution.

-- View this message in context: http://lucene.472066.n3.nabble.com/How-many-fields-can-SOLR-handle-tp3033910p3033910.html Sent from the Solr - User mailing list archive at Nabble.com.
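As far as I know, facet.field does not accept wildcards in this Solr version, so the field names do have to be enumerated per request. The DB-lookup approach described above can stay quite lightweight on the client side; a minimal sketch (field names here are illustrative, matching the example in the post):

```python
from urllib.parse import urlencode

# Hypothetical list of facet fields fetched from the database
# for the category the user searched in.
facet_fields = ["cpu_FACET", "gpu_FACET"]

# urlencode accepts a list of pairs, so facet.field can repeat.
params = [("q", "computer"), ("facet", "true")]
params += [("facet.field", f) for f in facet_fields]

query_string = urlencode(params)
print(query_string)
# q=computer&facet=true&facet.field=cpu_FACET&facet.field=gpu_FACET
```

Caching the category-to-fields mapping client-side would avoid hitting the DB on every search.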
Re: [ANNOUNCEMENT] PHP Solr Extension 1.0.1 Stable Has Been Released
Hello,

I have some problems with the installation of the new PECL package solr-1.0.1. I run these commands:

pecl uninstall solr-beta   (to uninstall the old version, 0.9.11)
pecl install solr

The installation runs, but then it gives the following error message:

/tmp/tmpKUExET/solr-1.0.1/solr_functions_helpers.c: In function 'solr_json_to_php_native':
/tmp/tmpKUExET/solr-1.0.1/solr_functions_helpers.c:1123: error: too many arguments to function 'php_json_decode'
make: *** [solr_functions_helpers.lo] Error 1
ERROR: `make' failed

I have PHP version 5.2.17. How can I fix this?

-- View this message in context: http://lucene.472066.n3.nabble.com/ANNOUNCEMENT-PHP-Solr-Extension-1-0-1-Stable-Has-Been-Released-tp3024040p3034350.html Sent from the Solr - User mailing list archive at Nabble.com.
Re: Problem with spellchecking, dont want multiple request to SOLR
Mm, ok. I configured 2 spellcheckers:

<searchComponent name="spellcheck" class="solr.SpellCheckComponent">
  <lst name="spellchecker">
    <str name="name">spell_what</str>
    <str name="field">spell_what</str>
    <str name="buildOnOptimize">true</str>
    <str name="spellcheckIndexDir">spellchecker_what</str>
  </lst>
  <lst name="spellchecker">
    <str name="name">spell_where</str>
    <str name="field">spell_where</str>
    <str name="buildOnOptimize">true</str>
    <str name="spellcheckIndexDir">spellchecker_where</str>
  </lst>
</searchComponent>

How can I enable this in my search request handler and search both in one request?

-- View this message in context: http://lucene.472066.n3.nabble.com/Problem-with-spellchecking-dont-want-multiple-request-to-SOLR-tp2988167p2992076.html Sent from the Solr - User mailing list archive at Nabble.com.
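For the wiring itself, the component is attached to a request handler via last-components; a sketch of the solrconfig.xml side (handler name and defaults are illustrative):

```xml
<requestHandler name="/select" class="solr.SearchHandler">
  <lst name="defaults">
    <str name="spellcheck">true</str>
    <!-- picks which of the configured spellcheckers to use -->
    <str name="spellcheck.dictionary">spell_what</str>
  </lst>
  <arr name="last-components">
    <str>spellcheck</str>
  </arr>
</requestHandler>
```

The catch, as far as I know, is that spellcheck.dictionary selects a single dictionary per request in Solr versions of this era, so checking the what and where dictionaries in one request may not be possible; newer releases added support for passing spellcheck.dictionary multiple times.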
RE: Spellcheck: Two dictionaries
That uber dictionary is not what I want. I also get suggestions from the 'where' in the 'what'. An example:

what               where
chelsea            London
Soccerclub Bondon  London

When I type soccerclub london, I want the suggestion from the 'what' dictionary: Did you mean Soccerclub Bondon? With the uber dictionary I don't get this suggestion, because it is spelled correctly (based on the 'where').

-- View this message in context: http://lucene.472066.n3.nabble.com/Spellcheck-Two-dictionaries-tp2931458p2992093.html Sent from the Solr - User mailing list archive at Nabble.com.
Re: Spellcheck: Two dictionaries
?? -- View this message in context: http://lucene.472066.n3.nabble.com/Spellcheck-Two-dictionaries-tp2931458p2987915.html Sent from the Solr - User mailing list archive at Nabble.com.
Problem with spellchecking, dont want multiple request to SOLR
Hello,

First I will explain my situation. I have 2 fields on my website: What and Where. When a user searches, I want spellcheck on both fields. I currently have 2 dictionaries, one for what and one for where. I want to search with one request and spellcheck both fields. Is that possible, and how?

-- View this message in context: http://lucene.472066.n3.nabble.com/Problem-with-spellchecking-dont-want-multiple-request-to-SOLR-tp2988167p2988167.html Sent from the Solr - User mailing list archive at Nabble.com.
Dynamic facet field
Hello,

I have a problem with dynamic facets. I have a database with 1 mil. products and have indexed it with the DIH. The facets are tied to one category of products. Example:

Category       Facets
Television     type (hd, plasma), Inches (38, 42), Color (black, grey)
Mobile phone   brand (HTC, APPLE), OS (android, ios, bb)

When a user searches for television I want these facets:

Type
  hd (203)
  plasma (32)
Inches
  42 (39)
  38 (213)
Color
  black (200)
  grey (30)

URL: facet.field=type&facet.field=inches&facet.field=color

At the moment I get the titles of the facets for the category from another DB and put them in the URL (I don't want this anymore). I have dynamic fields to fill the index (type_facet, inches_facet, color_facet). I thought maybe something like this is possible:

facet.field=*_facet

So that all the fields ending in _facet would be faceted??

-- View this message in context: http://lucene.472066.n3.nabble.com/Dynamic-facet-field-tp2979407p2979407.html Sent from the Solr - User mailing list archive at Nabble.com.
Re: Results with and without whitspace(soccer club and soccerclub)
Ok, I will do it with synonyms. What does the list look like?

soccerclub,soccer club

The index looks like this:

Manchester united soccerclub
Chelsea soccer club

I want them both in my results if I search for soccer club or soccerclub. How can I configure this in schema.xml?

-- View this message in context: http://lucene.472066.n3.nabble.com/Results-with-and-without-whitespace-soccer-club-and-soccerclub-tp2934742p2979577.html Sent from the Solr - User mailing list archive at Nabble.com.
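A sketch of what the schema.xml side could look like (the field type name text_syn is illustrative). Because "soccer club" is a multi-word synonym, the usual advice is to expand synonyms at index time only; multi-word synonyms applied at query time tend to break because the query parser splits on whitespace before the filter sees the phrase:

```xml
<fieldType name="text_syn" class="solr.TextField">
  <analyzer type="index">
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <!-- synonyms.txt contains the line: soccerclub,soccer club -->
    <filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt"
            ignoreCase="true" expand="true"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
  <analyzer type="query">
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>
```

With expand="true" at index time, both "Manchester united soccerclub" and "Chelsea soccer club" end up indexed with both variants, so either query form matches.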
Re: Faceting on distance in Solr: how do you generate links that search withing a given range of distance?
Hey, I had the same problem. It's fixed now, but it comes with a new issue. I get results for all companies in London. Now I want the following facets:

10km (20)
20km (34)
40km (55)

40km can reach outside of London, but there are only companies in London in my results, so the companies outside of London are not counted in my facets. Is this possible?

-- View this message in context: http://lucene.472066.n3.nabble.com/Faceting-on-distance-in-Solr-how-do-you-generate-links-that-search-withing-a-given-range-of-distance-tp2953806p2964591.html Sent from the Solr - User mailing list archive at Nabble.com.
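One way this is commonly done (a sketch, assuming a location field called location and Solr's geodist/frange support; parameter values are illustrative): widen the filter to the largest radius you want to facet on, then use one facet.query per ring. If the base results are restricted to "London" by a text filter, documents outside it can never be counted, so the distance filter has to be the thing that bounds the result set:

```
q=*:*
&sfield=location&pt=51.5,-0.12
&fq={!geofilt d=40}
&facet=true
&facet.query={!frange u=10}geodist()
&facet.query={!frange u=20}geodist()
&facet.query={!frange u=40}geodist()
```

Each facet.query then returns the count of companies within that radius, including ones outside the city boundary.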
Re: Results with and without whitspace(soccer club and soccerclub)
Hello,

Thanks, I think both are good options. I prefer the option with the filter. What does such a charfilter look like?

-- View this message in context: http://lucene.472066.n3.nabble.com/Results-with-and-without-whitespace-soccer-club-and-soccerclub-tp2934742p2964950.html Sent from the Solr - User mailing list archive at Nabble.com.
Re: Results with and without whitspace(soccer club and soccerclub)
Thanks for the help so far. I don't think this solves the problem. What if my data looks like this:

soccer club Manchester united

If I search for soccerclub manchester or for soccer club manchester, I want this result back. A copyfield that removes whitespace is not an option. With the charfilter I get something like this:

1. Index time: soccer club Manchester united -> soccerclubManchesterunited is indexed.
2. Search time: soccer club OR soccerclub -> soccerclub is searched.

In this situation I still get no result when I search for soccerclub, because the indexed term is soccerclubManchesterunited. How can I fix it?

-- View this message in context: http://lucene.472066.n3.nabble.com/Results-with-and-without-whitespace-soccer-club-and-soccerclub-tp2934742p2965389.html Sent from the Solr - User mailing list archive at Nabble.com.
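A technique that handles this without collapsing the whole field (not suggested in the thread, so treat it as one option to evaluate) is a ShingleFilter at index time with an empty token separator. It keeps the original tokens and additionally emits pairwise concatenations, so both "soccer club" and "soccerclub" match the same document:

```xml
<fieldType name="text_concat" class="solr.TextField">
  <analyzer type="index">
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <!-- emits original tokens plus adjacent-pair concatenations:
         soccer, club, soccerclub, clubmanchester, manchester, ... -->
    <filter class="solr.ShingleFilterFactory" maxShingleSize="2"
            outputUnigrams="true" tokenSeparator=""/>
  </analyzer>
  <analyzer type="query">
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>
```

The cost is a larger index (every adjacent pair gets an extra term) and some noise terms like clubmanchester, but "soccerclub manchester" and "soccer club manchester" both find the document.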
Re: how to convert YYYY-MM-DD to YYY-MM-DD hh:mm:ss - DIH
Try this in your query:

TIME_FORMAT(timeDb, '%H:%i') AS timefield

http://www.java2s.com/Tutorial/MySQL/0280__Date-Time-Functions/TIMEFORMATtimeformat.htm

-- View this message in context: http://lucene.472066.n3.nabble.com/how-to-convert--MM-DD-to-YYY-MM-DD-hh-mm-ss-DIH-tp2961481p2961591.html Sent from the Solr - User mailing list archive at Nabble.com.
Re: Exact match
Try this: "search term"

-- View this message in context: http://lucene.472066.n3.nabble.com/Exact-match-tp2952591p2952699.html Sent from the Solr - User mailing list archive at Nabble.com.
Results with and without whitspace(soccer club and soccerclub)
Hello,

My index looks like this:

Soccer club
Football club
etc.

Now I want a user to be able to search for both soccer club and soccerclub. Soccer club works, but without the whitespace there is no match. How can I fix this? What should my configuration look like? Is there a filter or something for this?

-- View this message in context: http://lucene.472066.n3.nabble.com/Results-with-and-without-whitspace-soccer-club-and-soccerclub-tp2934742p2934742.html Sent from the Solr - User mailing list archive at Nabble.com.