Solr 5 not working in Axis2
Hi,

Before upgrade: Tomcat 6, JDK 1.6, Solr 3.1, Axis2 1.6.2. We are using it like this.

Server:

    import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;

    CommonsHttpSolrServer solrServer = new CommonsHttpSolrServer(solrWebServer);

Connection:

    import org.apache.solr.client.solrj.SolrQuery;
    import org.apache.solr.client.solrj.response.QueryResponse;

    SolrQuery query = new SolrQuery();
    query.setQuery(solrMainQuery.toString());
    QueryResponse qr = solrServer.query(query);

Both pieces of code work well.

After upgrade: Tomcat 8, JDK 1.8, Solr 5.1, Axis2 1.6.2.

Server:

    import org.apache.solr.client.solrj.impl.HttpSolrClient;

    HttpSolrClient solrServer = new HttpSolrClient(solrWebServer);

Connection:

    import org.apache.solr.client.solrj.SolrQuery;
    import org.apache.solr.client.solrj.response.QueryResponse;

    SolrQuery query = new SolrQuery();
    query.setQuery(solrMainQuery.toString());
    QueryResponse qr = solrServer.query(query);

Now the query does not reach the Solr server. Once I comment out the highlighted package, the query reaches the server but throws an exception.

I am running three applications (root, solr, axis2) in my Tomcat:

    Root -> Solr: same code works fine
    Axis2 -> Solr: same code does not work

Where did I go wrong? Is there an alternative package, or did I miss some dependency libraries?

With Regards,
RSK
Re: Indexed field to schema field
Feels like an XY problem. Why do you want to do this? What's the use case? Perhaps there's an alternative approach that satisfies the need.

Best,
Erick

On Tue, Jul 7, 2015 at 4:21 AM, Mikhail Khludnev mkhlud...@griddynamics.com wrote:
> Just an idea: Solr Admin's Schema Browser reports some info like this; hence, you can trace the way in which it does it.
>
> On Tue, Jul 7, 2015 at 10:34 AM, Gajendra Dadheech gajju3...@gmail.com wrote:
>> Hi,
>>
>> Can I somehow translate the fields which I read from newSearcher.getAtomicReader().fields() to schema fields? Does Solr expose any method to do this translation? The alternative approach I am thinking of would involve a lot of regex computation, as the fields would be _string, _float, etc., and I would have to remove those suffixes; this becomes a little tricky when fields are dynamic.
>>
>> Thanks and regards,
>> Gajendra Dadheech
>
> --
> Sincerely yours
> Mikhail Khludnev
> Principal Engineer, Grid Dynamics
> http://www.griddynamics.com
> mkhlud...@griddynamics.com
Jetty in Solr 5.2.0
Hi Everyone,

This may be a question to post on the Jetty mailing list, but I figured I should start here first.

Using Solr 5.2.0, when I start Solr, http://localhost:8983/solr/ is the entry point. My questions are:

1) Where is solr on the file system?
2) How can I add http://localhost:8983/MyHandler/ to Jetty?

For #2, I'm exploring the possibility of using the existing web server to see if I can have an additional application running on the same host as Solr.

Thanks

Steve
Re: Windows Version
: However, when I try to follow the instructions for loading the examples
: I find that there is a file that I am supposed to have called post.jar
: which I cannot find in the directory specified, exampledocs. There is a
: file called post in another directory but it does not seem to be a
: .jar file.

The specifics of what you have done, what you are looking at, and what you see are really important.

1) What exactly did you download? ie: what filename? from what URL?
2) What instructions are you looking at?
3) Do you see an exampledocs directory at all? What's in it?

For example... I just clicked on the DOWNLOAD button on the Solr website, where the URL https://lucene.apache.org/solr/mirrors-solr-latest-redir.html redirected me to a Solr mirror, and I downloaded the file solr-5.2.1.zip.

After I unzipped it, I followed along with this tutorial...

    https://lucene.apache.org/solr/quickstart.html

and noted a link for Windows users that pointed to this doc...

    https://cwiki.apache.org/confluence/display/solr/Post+Tool#PostTool-Windows

and confirmed that my solr-5.2.1/example/exampledocs/ directory contains a post.jar file which I can use to index the same documents.

See also: https://wiki.apache.org/solr/UsingMailingLists

-Hoss
http://www.lucidworks.com/
Re: Jetty in Solr 5.2.0
: The other option I looked at is writing my own handler for my crawler and
: plugging it into Solr's solrconfig.xml. If I do this, then my crawler will
: run in the same JVM space as Solr, this is something I want to avoid.

If you don't want your crawler to run in the same JVM as Solr, then asking how to deploy it to the same Jetty instance makes no sense -- it would still be in the same JVM.

: Yet another option is for me to deploy a second instance of Jetty on the Solr
: server just for my crawler. This is overkill in my opinion.

Can you explain the basis of your opinion? You want your application to run in a daemon and be accessible over HTTP -- so implement it as an HTTP daemon. The fact that your app will talk to Solr on the same machine via HTTP doesn't make that design wrong, any more than talking to a JDBC daemon on the same machine to fetch data would mean you should skip having your own process and embed your crawler directly in the database server.

: issue? Deploying my crawler on a separate server is not an option, and for my
: use case Solr will be used lightly, so there is plenty of CPU / RAM
: on this one server to host Solr and my crawler.

No one ever suggested you had to deploy to a completely separate server -- but attempting to hook into the Jetty powering Solr isn't supported by Solr, will almost certainly cause you problems down the road when you upgrade, and apparently isn't what you want anyway (since you don't want to be in the same JVM as Solr).

FWIW: I have built lots of small, lightweight webapps for exactly the purpose you described, deployed to isolated Tomcat or Jetty containers running on the same machines as separate containers running Solr (even back when Solr shipped as a war and could be deployed to any servlet container), for the precise reason that I wanted to manage the JVMs' resources independently. I had the added bonus of being able to better scale my deployment in the long run (by relocating apps onto different boxes and changing host aliases) ... zero problems.

-Hoss
http://www.lucidworks.com/
Re: Jetty in Solr 5.2.0
What I am faced with is this. I have to create my own crawler, similar to DIH. I have to deploy it on the same server as Solr (this is a given, I cannot change it). I have to manage this crawler just like I manage my Solr deployment: via an API over HTTP requests. I figured if I deploy my application under Jetty, with Solr, then the problem is solved.

The other option I looked at is writing my own handler for my crawler and plugging it into Solr's solrconfig.xml. If I do this, then my crawler will run in the same JVM space as Solr, which is something I want to avoid.

Yet another option is for me to deploy a second instance of Jetty on the Solr server just for my crawler. This is overkill in my opinion.

What do folks think about this, and what's the best way to approach this issue? Deploying my crawler on a separate server is not an option, and for my use case Solr will be used lightly, so there is plenty of CPU / RAM on this one server to host both Solr and my crawler.

Steve

On Tue, Jul 7, 2015 at 12:31 PM, Upayavira u...@odoko.co.uk wrote:
> On Tue, Jul 7, 2015, at 05:03 PM, Steven White wrote:
>> Hi Everyone,
>>
>> This may be a question to post on the Jetty mailing list, but I figured I should start here first.
>>
>> Using Solr 5.2.0, when I start Solr, http://localhost:8983/solr/ is the entry point. My questions are:
>>
>> 1) Where is solr on the file system?
>> 2) How can I add http://localhost:8983/MyHandler/ to Jetty?
>>
>> For #2, I'm exploring the possibility of using the existing web server to see if I can have an additional application running on the same host as Solr.
>
> Why do you want to do this? It is not recommended, as you are not supposed to think of Solr as a servlet container.
>
> If you *must*, then you can place a war file in the webapps directory next to solr.war and it will expand and be available when you start Jetty. You cannot be sure that this behaviour will work long term.
>
> Upayavira
Re: Jetty in Solr 5.2.0
On Tue, Jul 7, 2015, at 05:03 PM, Steven White wrote:
> Hi Everyone,
>
> This may be a question to post on the Jetty mailing list, but I figured I should start here first.
>
> Using Solr 5.2.0, when I start Solr, http://localhost:8983/solr/ is the entry point. My questions are:
>
> 1) Where is solr on the file system?
> 2) How can I add http://localhost:8983/MyHandler/ to Jetty?
>
> For #2, I'm exploring the possibility of using the existing web server to see if I can have an additional application running on the same host as Solr.

Why do you want to do this? It is not recommended, as you are not supposed to think of Solr as a servlet container.

If you *must*, then you can place a war file in the webapps directory next to solr.war and it will expand and be available when you start Jetty. You cannot be sure that this behaviour will work long term.

Upayavira
Re: Jetty in Solr 5.2.0
Jetty is an implementation detail in Solr 5.0 -- modifying the underlying Jetty configs, or directly adding handlers, isn't supported by Solr. In the future, Jetty may be ripped out completely and replaced with some other networking stack w/o advance notice (probably unlikely, but smaller changes with similar impacts on users who depend on Jetty specifics are very plausible -- example: switching how the HTTP connectors are initialized to be done via Java code).

In general, users should treat Solr as a black box that speaks HTTP.

More details...

    https://cwiki.apache.org/confluence/display/solr/Major+Changes+from+Solr+4+to+Solr+5
    http://wiki.apache.org/solr/WhyNoWar

: Date: Tue, 7 Jul 2015 12:03:37 -0400
: From: Steven White swhite4...@gmail.com
: Reply-To: solr-user@lucene.apache.org
: To: solr-user@lucene.apache.org
: Subject: Jetty in Solr 5.2.0
:
: Hi Everyone,
:
: This may be a question to be posted on Jetty mailing list, but I figured I
: should start here first.
:
: Using Solr 5.2.0, when I start Solr, http://localhost:8983/solr/ is the
: entry point. My question is:
:
: 1) Where is solr on the file system?
: 2) How can I add http://localhost:8983/MyHandler/ to Jetty?
:
: For #2, I'm exploring the possibility of using the existing Web Server to
: see if I can have an additional application running on the same host as
: Solr.
:
: Thanks
:
: Steve

-Hoss
http://www.lucidworks.com/
Re: Setup Solr Multiserver Environment
You're conflating creating collections with starting Solr instances. Ports are not associated with replicas; they're associated with instances of Solr. So what you're really trying to do is start three Solr instances with different ports. Collections and replicas will come later.

Have you tried just using the canned example and examining the results? Just start Solr with the '-e cloud' example and take a look at the results; that should shed some light on the structure. Of course you need the -c and -z options too.

Best,
Erick

On Tue, Jul 7, 2015 at 2:08 AM, Adrian Liew adrian.l...@avanade.com wrote:
> Thanks Erick. I have a scenario whereby I would like to set up a minimal 1 shard with 3 replicas, where each replica sits on its own machine. I also have an external ZK ensemble set up with zk1, zk2 and zk3.
>
> A simple example: I create, say, collection1 in the following order to satisfy the scenario above using just solr.cmd. (Note I am using a Windows server to perform the below, as this is a requirement.)
>
> 1. Create collection1 with numShards of 1 and with port 5000, which points to, say, example/cloud/node1/solr
> 2. Next, create the replica for collection1 (without specifying numShards this time, as that tells Solr to create a replica) with port 5001, which points to, say, example/cloud/node2/solr
> 3. Repeat step 2 and point this to, say, example/cloud/node3/solr with port 5002
>
> I am currently stuck at step 1, as I only managed to create the collection with shard 1 pointing at port 5000. How can I go about creating the replicas for that existing collection, collection1?
>
> Best regards,
> Adrian
>
> -----Original Message-----
> From: Erick Erickson [mailto:erickerick...@gmail.com]
> Sent: Tuesday, July 7, 2015 1:25 AM
> To: solr-user@lucene.apache.org
> Subject: Re: Setup Solr Multiserver Environment
>
> How are you starting this? WARNING: I haven't worked on a Windows box for some time, so beware my advice ;).
>
> You shouldn't have to edit these at all if you're using the start scripts (now preferred):
>
>     -h host -p port -s dir
>
> All this does is set an env variable that gets substituted in for things like ${jetty.port:5002}. That syntax really says "use the env var 'jetty.port' if it's defined, otherwise default to 5002". Try bin\solr.cmd --help and see what the options are.
>
> Best,
> Erick
>
> On Mon, Jul 6, 2015 at 1:41 AM, Adrian Liew adrian.l...@avanade.com wrote:
>> Has anyone tried setting up a multiserver environment for Solr on a Windows machine? I was just wondering how you would go about, say, setting up a single shard with 3 replicas (one of them the leader) using solr.cmd, or in fact java.cmd, on three different virtual machines, for example. Note, I can only run solr.cmd on a Windows machine; I won't be able to use shell scripting via the sh command if I wanted an automated startup script.
>>
>> This is the way I see it for the SolrCloud scenario: 1 replica = 1 machine. So for a three server instance/node setup with a single shard, it will be:
>>
>> Shard 1:
>>   Replica 1 (the leader for the shard, for example): solrnode1.cloudapp.net:5000
>>   Replica 2: solrnode1.cloudapp.net:5001
>>   Replica 3: solrnode1.cloudapp.net:5002
>>
>> Might it be possible to change the hostPort in the solr.xml of each of those Solr server instances/nodes above? Each of them will be pointing to a directory such as solr/node1, solr/node2 and solr/node3. So the only place which makes sense to configure the host addresses is the sections below. The default that comes with solr.xml, within the solrcloud section, is:
>>
>>   <str name="host">${host:}</str>
>>   <int name="hostPort">${jetty.port:8983}</int>
>>   <str name="hostContext">${hostContext:solr}</str>
>>
>> So in my case, for example, can I do the following?
>>
>> Replica 1:
>>   <str name="host">solrnode1.cloudapp.net</str>
>>   <int name="hostPort">${jetty.port:5000}</int>
>>   <str name="hostContext">${hostContext:solr}</str>
>>
>> Replica 2:
>>   <str name="host">solrnode1.cloudapp.net</str>
>>   <int name="hostPort">${jetty.port:5001}</int>
>>   <str name="hostContext">${hostContext:solr}</str>
>>
>> Replica 3:
>>   <str name="host">solrnode1.cloudapp.net</str>
>>   <int name="hostPort">${jetty.port:5002}</int>
>>   <str name="hostContext">${hostContext:solr}</str>
>>
>> Any suggestions are welcomed.
>>
>> Regards,
>> Adrian
>>
>> Best regards,
>> Adrian Liew | Consultant Application Developer
>> Avanade Malaysia Sdn. Bhd. | Consulting Services
>> Direct: +(603) 2382 5668 | Mobile: +6010-2288030
Re: Solr UIMA Part Of Speech Tagging
@Erik Hatcher-4: I had no issues using it. To achieve indexing as I did, you need to use the below AEs in your *AE.xml file:

  <delegateAnalysisEngineSpecifiers>
    <delegateAnalysisEngine key="WhitespaceTokenizer">
      <import name="WhitespaceTokenizer"/>
    </delegateAnalysisEngine>
    <delegateAnalysisEngine key="HmmTagger">
      <import name="HmmTagger"/>
    </delegateAnalysisEngine>
  </delegateAnalysisEngineSpecifiers>

and in solrconfig.xml add the below analyzeFields + fieldMappings in the uima updateRequestProcessorChain:

  <lst name="analyzeFields">
    <bool name="merge">false</bool>
    <arr name="fields">
      <str>text</str>
    </arr>
  </lst>

and

  <lst name="fieldMappings">
    <lst name="type">
      <str name="name">org.apache.uima.TokenAnnotation</str>
      <lst name="mapping">
        <str name="feature">coveredText</str>
        <str name="field">texttokens</str>
      </lst>
      <lst name="mapping">
        <str name="feature">posTag</str>
        <str name="field">textpos</str>
      </lst>
    </lst>
  </lst>

The rest stays per the standard template; hope this helps.

--
View this message in context: http://lucene.472066.n3.nabble.com/Solr-UIMA-Part-Of-Speech-Tagging-tp4215904p4216205.html
Sent from the Solr - User mailing list archive at Nabble.com.
Solr edismax always using the default fields?
Hello, I'm trying to tune a search handler to get the results that I want. In solrconfig.xml I specify several different query fields for the edismax query parser, but it always seems to use the default fields instead.

For example and clarification: when I remove Author from the df list of fields, the search results do not come up with the author I was looking for. But when I add Author back in as a df, it returns 19/20 of his documents as the top search results.

I'm really confused, because when you specify the query fields, Solr shouldn't use the default fields. It's not a huge problem, because I can define all the fields I need as defaults; I'm just trying to figure out where I'm going wrong here. Is there a reason this would happen?

I have the search handler XML below:

  <requestHandler name="/select" class="solr.SearchHandler">
    <lst name="defaults">
      <str name="echoParams">all</str>
      <str name="defType">edismax</str>
      <str name="df">Author</str> <!-- ADDING THIS GIVES DESIRED RESULT, REMOVING GIVES BAD RESULT. NOTHING ELSE CHANGES -->
      <str name="df">text</str>
      <str name="qf">text^0.5 Title^1.2 Author^2.0</str>
      <str name="rows">20</str>
      <str name="q.alt">*some author name*</str>
    </lst>
  </requestHandler>

And here are the echoed params of the search:

  "responseHeader": {
    "status": 0,
    "QTime": 48,
    "params": {
      "lowercaseOperators": "true",
      "df": [ "Author", "text" ],
      "echoParams": "all",
      "indent": "true",
      "qf": "text^0.5 Title^1.2 Author^2.0",
      "_": "1436291181730",
      "q.alt": "*some author name*",
      "stopwords": "true",
      "wt": "json",
      "defType": "edismax",
      "rows": "20"
    }
  }

--
View this message in context: http://lucene.472066.n3.nabble.com/Solr-edismax-always-using-the-default-fields-tp4216204.html
Sent from the Solr - User mailing list archive at Nabble.com.
Re: unsubscribe
Hi Jacob,

See https://lucene.apache.org/solr/resources.html#mailing-lists for unsubscribe info.

Notice also that every email from the solr-user mailing list contains the following header:

    List-Unsubscribe: mailto:solr-user-unsubscr...@lucene.apache.org

Steve

On Jul 7, 2015, at 11:46 AM, Jacob Singh jacobsi...@gmail.com wrote:
> Unsubscribe
>
> On Jul 7, 2015 11:39 AM, Jacob Singh jacobsi...@gmail.com wrote:
>
> --
> +1 512-522-6281
> twitter: @JacobSingh ( http://twitter.com/#!/JacobSingh )
> web: http://www.jacobsingh.name
> Skype: pajamadesign
> gTalk: jacobsi...@gmail.com
Windows Version
So I am a newbie at Solr and am having trouble getting the examples working on Windows 7. I downloaded and unzipped the distribution and have been able to get Solr up and running; I can access the admin page. However, when I try to follow the instructions for loading the examples, I find that there is a file that I am supposed to have, called post.jar, which I cannot find in the directory specified, exampledocs. There is a file called post in another directory but it does not seem to be a .jar file.

Two questions:

1. Has this been addressed on some site that I am not yet aware of?
2. What am I missing here?

Allan Elkowitz
elkow...@alumni.caltech.edu
Re: Windows Version
Does https://cwiki.apache.org/confluence/display/solr/Post+Tool#PostTool-Windows solve the problem?

—
Erik Hatcher, Senior Solutions Architect
http://www.lucidworks.com

On Jul 7, 2015, at 12:43 PM, Allan Elkowitz elkow...@alumni.caltech.edu wrote:
> So I am a newbie at Solr and am having trouble getting the examples working on Windows 7. I downloaded and unzipped the distribution and have been able to get Solr up and running; I can access the admin page. However, when I try to follow the instructions for loading the examples, I find that there is a file that I am supposed to have, called post.jar, which I cannot find in the directory specified, exampledocs. There is a file called post in another directory but it does not seem to be a .jar file.
>
> Two questions:
> 1. Has this been addressed on some site that I am not yet aware of?
> 2. What am I missing here?
>
> Allan Elkowitz
> elkow...@alumni.caltech.edu
RE: Tokenizer and Filter Factory to index Chinese characters
Yes, but it is a small change :)

M.

-----Original message-----
From: Zheng Lin Edwin Yeo edwinye...@gmail.com
Sent: Tuesday 7th July 2015 4:50
To: solr-user@lucene.apache.org
Subject: Re: Tokenizer and Filter Factory to index Chinese characters

> So we have to recompile the analysers ourselves before we can use them in 5.x?
>
> Regards,
> Edwin
>
> On 6 July 2015 at 18:44, Markus Jelsma markus.jel...@openindex.io wrote:
>> Yes, analyzers slightly changed since 5.x.
>> https://issues.apache.org/jira/browse/LUCENE-5388
>>
>> -----Original message-----
>> From: Zheng Lin Edwin Yeo edwinye...@gmail.com
>> Sent: Monday 6th July 2015 12:31
>> To: solr-user@lucene.apache.org
>> Subject: Re: Tokenizer and Filter Factory to index Chinese characters
>>
>>> Yes, I tried that also, but I faced some compatibility issues with Solr 5.2.1, as the libs that I found and downloaded seem to be for Solr 3.x versions. I got the following error when I tried to start Solr with Paoding configured:
>>>
>>> java.lang.VerifyError: class net.paoding.analysis.analyzer.PaodingAnalyzerBean overrides final method tokenStream.(Ljava/lang/String;Ljava/io/Reader;)Lorg/apache/lucene/analysis/TokenStream;
>>>   at java.lang.ClassLoader.defineClass1(Native Method)
>>>   at java.lang.ClassLoader.defineClass(Unknown Source)
>>>   at java.security.SecureClassLoader.defineClass(Unknown Source)
>>>   at java.net.URLClassLoader.defineClass(Unknown Source)
>>>   at java.net.URLClassLoader.access$100(Unknown Source)
>>>   at java.net.URLClassLoader$1.run(Unknown Source)
>>>   at java.net.URLClassLoader$1.run(Unknown Source)
>>>   at java.security.AccessController.doPrivileged(Native Method)
>>>   at java.net.URLClassLoader.findClass(Unknown Source)
>>>   at org.eclipse.jetty.webapp.WebAppClassLoader.loadClass(WebAppClassLoader.java:421)
>>>   at org.eclipse.jetty.webapp.WebAppClassLoader.loadClass(WebAppClassLoader.java:383)
>>>   at java.lang.ClassLoader.defineClass1(Native Method)
>>>   at java.lang.ClassLoader.defineClass(Unknown Source)
>>>   at java.security.SecureClassLoader.defineClass(Unknown Source)
>>>   at java.net.URLClassLoader.defineClass(Unknown Source)
>>>   at java.net.URLClassLoader.access$100(Unknown Source)
>>>   at java.net.URLClassLoader$1.run(Unknown Source)
>>>   at java.net.URLClassLoader$1.run(Unknown Source)
>>>   at java.security.AccessController.doPrivileged(Native Method)
>>>   at java.net.URLClassLoader.findClass(Unknown Source)
>>>   at org.eclipse.jetty.webapp.WebAppClassLoader.loadClass(WebAppClassLoader.java:421)
>>>   at java.lang.ClassLoader.loadClass(Unknown Source)
>>>   at java.net.FactoryURLClassLoader.loadClass(Unknown Source)
>>>   at java.lang.ClassLoader.loadClass(Unknown Source)
>>>   at java.net.FactoryURLClassLoader.loadClass(Unknown Source)
>>>   at java.lang.ClassLoader.loadClass(Unknown Source)
>>>   at java.lang.Class.forName0(Native Method)
>>>   at java.lang.Class.forName(Unknown Source)
>>>   at org.apache.solr.core.SolrResourceLoader.findClass(SolrResourceLoader.java:476)
>>>   at org.apache.solr.core.SolrResourceLoader.findClass(SolrResourceLoader.java:423)
>>>   at org.apache.solr.schema.FieldTypePluginLoader.readAnalyzer(FieldTypePluginLoader.java:262)
>>>   at org.apache.solr.schema.FieldTypePluginLoader.create(FieldTypePluginLoader.java:94)
>>>   at org.apache.solr.schema.FieldTypePluginLoader.create(FieldTypePluginLoader.java:42)
>>>   at org.apache.solr.util.plugin.AbstractPluginLoader.load(AbstractPluginLoader.java:151)
>>>   at org.apache.solr.schema.IndexSchema.readSchema(IndexSchema.java:489)
>>>   at org.apache.solr.schema.IndexSchema.init(IndexSchema.java:175)
>>>   at org.apache.solr.schema.IndexSchemaFactory.create(IndexSchemaFactory.java:55)
>>>   at org.apache.solr.schema.IndexSchemaFactory.buildIndexSchema(IndexSchemaFactory.java:69)
>>>   at org.apache.solr.core.ConfigSetService.createIndexSchema(ConfigSetService.java:102)
>>>   at org.apache.solr.core.ConfigSetService.getConfig(ConfigSetService.java:74)
>>>   at org.apache.solr.core.CoreContainer.create(CoreContainer.java:516)
>>>   at org.apache.solr.core.CoreContainer$1.call(CoreContainer.java:283)
>>>   at org.apache.solr.core.CoreContainer$1.call(CoreContainer.java:277)
>>>   at java.util.concurrent.FutureTask.run(Unknown Source)
>>>   at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
>>>   at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
>>>   at java.lang.Thread.run(Unknown Source)
>>>
>>> Regards,
>>> Edwin
>>>
>>> 2015-07-06 16:37 GMT+08:00 davidphilip cherian davidphilipcher...@gmail.com:
>>>> Hi Edwin,
>>>> Have you tried the Paoding analyzer? It is not shipped out of the box with the Solr jars. You may have to download it and add it to Solr's libs.
Fwd: Search inconsistency on Solr
Hi All,

I started using Solr a while back. Now I'm writing JUnit tests for my Solr application, but I encounter inconsistency in my results, causing the JUnit tests to fail. There are times when the order of the documents returned differs because of the score. Do you have any pointers to solve this?

I've read these two issues, which I see to be similar to mine:

https://issues.apache.org/jira/browse/SOLR-5821
https://issues.apache.org/jira/browse/SOLR-4260

Regards,
jpmantuano
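A frequent cause of this (and what the linked SOLR-5821 discusses) is documents with identical scores: their relative order is not guaranteed between runs, so tests asserting an exact order fail intermittently. The usual remedy is a deterministic tiebreaker, e.g. sorting on score descending and then on the uniqueKey field. A minimal plain-Java sketch of the idea -- the `Doc` class here is a hypothetical stand-in for a search hit, purely to illustrate the comparator:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class TieBreakSort {
    // Hypothetical stand-in for a search hit: a score plus a unique key
    public static final class Doc {
        public final String id;
        public final double score;
        public Doc(String id, double score) { this.id = id; this.score = score; }
    }

    /** Sorts by score descending, then by unique id ascending as a tiebreaker. */
    public static List<String> rank(List<Doc> docs) {
        List<Doc> sorted = new ArrayList<>(docs);
        sorted.sort(Comparator.comparingDouble((Doc d) -> d.score).reversed()
                              .thenComparing(d -> d.id));
        List<String> ids = new ArrayList<>();
        for (Doc d : sorted) ids.add(d.id);
        return ids;
    }

    public static void main(String[] args) {
        List<Doc> docs = List.of(new Doc("b", 1.0), new Doc("a", 1.0), new Doc("c", 2.0));
        // The tie on score 1.0 is broken by id, so the order is stable across runs
        System.out.println(rank(docs)); // [c, a, b]
    }
}
```

In Solr query terms this corresponds to something like `sort=score desc, id asc`; alternatively, have the JUnit assertions compare result *sets* rather than exact orderings.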
Solr Boost Search word before Specific Content
I am using Solr 5.0.0, and I have one question about relevance boosting: if I search for "laptop table" or similar, is there any way to boost results where the search words appear before words like "by", "with", "without", etc.?

I used this query:

    defType=dismax
    q=foo bar
    bq=(*:* -by)^999

But this will negatively boost those documents containing the word "by" or "with" etc. How can I avoid this problem? For example, if I search for "laptop table", then with the above query the result "DGB Cooling Laptop Table by GDB" won't be boosted. I just need to give a boost to results where the search words come before certain words like "by", "with", etc. Is it possible?

Example 2: if I search for "laptop bag", it should boost and return results having the search words before the words "with", "by", "without", etc. first. Let's say the documents are:

    dell laptop with laptop bag
    laptop bag with cover
    laptop bag and table

The desired results are:

    laptop bag with cover
    laptop bag and table
    dell laptop without laptop bag

In these results the search phrase "laptop bag" is before "with", "without", "and". Search results containing the search phrase before these words should come first.

--
View this message in context: http://lucene.472066.n3.nabble.com/Solr-Boost-Search-word-before-Specific-Content-tp4216072.html
Sent from the Solr - User mailing list archive at Nabble.com.
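There is no single out-of-the-box parameter for this, but the desired ranking rule can at least be made concrete: "the search phrase immediately followed by a connector word ranks first." Below is a plain-Java sketch of that matching logic; the connector list and method names are illustrative assumptions, and in Solr itself this would have to be approximated with boost queries (or a custom plugin), so treat this purely as an illustration of the ordering the poster describes:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.regex.Pattern;

public class PhraseBeforeConnector {
    // Illustrative connector words after which the search phrase should be boosted
    private static final String CONNECTORS = "with|by|without|and";

    /** True if the phrase occurs immediately before a connector, e.g. "laptop bag with cover". */
    public static boolean phraseBeforeConnector(String title, String phrase) {
        Pattern p = Pattern.compile(
                "\\b" + Pattern.quote(phrase) + "\\s+(?:" + CONNECTORS + ")\\b",
                Pattern.CASE_INSENSITIVE);
        return p.matcher(title).find();
    }

    /** Orders titles so that those with the phrase before a connector come first (stable sort). */
    public static List<String> rank(List<String> titles, String phrase) {
        List<String> sorted = new ArrayList<>(titles);
        sorted.sort(Comparator.comparing(
                (String t) -> phraseBeforeConnector(t, phrase) ? 0 : 1));
        return sorted;
    }

    public static void main(String[] args) {
        List<String> titles = List.of(
                "dell laptop without laptop bag",
                "laptop bag with cover",
                "laptop bag and table");
        // Titles where "laptop bag" precedes a connector are promoted to the top
        System.out.println(rank(titles, "laptop bag"));
    }
}
```

On the Solr side, one hedged approximation of the same rule would be a boost query on the exact phrase-plus-connector, e.g. `bq="laptop bag with"~0^5 OR "laptop bag and"~0^5` (hypothetical weights), rather than negatively boosting the connector words themselves.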
RE: Setup Solr Multiserver Environment
Thanks Erick. I have a scenario whereby I would like to set up a minimal 1 shard with 3 replicas, where each replica sits on its own machine. I also have an external ZK ensemble set up with zk1, zk2 and zk3.

A simple example: I create, say, collection1 in the following order to satisfy the scenario above using just solr.cmd. (Note I am using a Windows server to perform the below, as this is a requirement.)

1. Create collection1 with numShards of 1 and with port 5000, which points to, say, example/cloud/node1/solr
2. Next, create the replica for collection1 (without specifying numShards this time, as that tells Solr to create a replica) with port 5001, which points to, say, example/cloud/node2/solr
3. Repeat step 2 and point this to, say, example/cloud/node3/solr with port 5002

I am currently stuck at step 1, as I only managed to create the collection with shard 1 pointing at port 5000. How can I go about creating the replicas for that existing collection, collection1?

Best regards,
Adrian

-----Original Message-----
From: Erick Erickson [mailto:erickerick...@gmail.com]
Sent: Tuesday, July 7, 2015 1:25 AM
To: solr-user@lucene.apache.org
Subject: Re: Setup Solr Multiserver Environment

How are you starting this? WARNING: I haven't worked on a Windows box for some time, so beware my advice ;).

You shouldn't have to edit these at all if you're using the start scripts (now preferred):

    -h host -p port -s dir

All this does is set an env variable that gets substituted in for things like ${jetty.port:5002}. That syntax really says "use the env var 'jetty.port' if it's defined, otherwise default to 5002". Try bin\solr.cmd --help and see what the options are.

Best,
Erick

On Mon, Jul 6, 2015 at 1:41 AM, Adrian Liew adrian.l...@avanade.com wrote:
> Has anyone tried setting up a multiserver environment for Solr on a Windows machine? I was just wondering how you would go about, say, setting up a single shard with 3 replicas (one of them the leader) using solr.cmd, or in fact java.cmd, on three different virtual machines, for example. Note, I can only run solr.cmd on a Windows machine; I won't be able to use shell scripting via the sh command if I wanted an automated startup script.
>
> This is the way I see it for the SolrCloud scenario: 1 replica = 1 machine. So for a three server instance/node setup with a single shard, it will be:
>
> Shard 1:
>   Replica 1 (the leader for the shard, for example): solrnode1.cloudapp.net:5000
>   Replica 2: solrnode1.cloudapp.net:5001
>   Replica 3: solrnode1.cloudapp.net:5002
>
> Might it be possible to change the hostPort in the solr.xml of each of those Solr server instances/nodes above? Each of them will be pointing to a directory such as solr/node1, solr/node2 and solr/node3. So the only place which makes sense to configure the host addresses is the sections below. The default that comes with solr.xml, within the solrcloud section, is:
>
>   <str name="host">${host:}</str>
>   <int name="hostPort">${jetty.port:8983}</int>
>   <str name="hostContext">${hostContext:solr}</str>
>
> So in my case, for example, can I do the following?
>
> Replica 1:
>   <str name="host">solrnode1.cloudapp.net</str>
>   <int name="hostPort">${jetty.port:5000}</int>
>   <str name="hostContext">${hostContext:solr}</str>
>
> Replica 2:
>   <str name="host">solrnode1.cloudapp.net</str>
>   <int name="hostPort">${jetty.port:5001}</int>
>   <str name="hostContext">${hostContext:solr}</str>
>
> Replica 3:
>   <str name="host">solrnode1.cloudapp.net</str>
>   <int name="hostPort">${jetty.port:5002}</int>
>   <str name="hostContext">${hostContext:solr}</str>
>
> Any suggestions are welcomed.
>
> Regards,
> Adrian
>
> Best regards,
> Adrian Liew | Consultant Application Developer
> Avanade Malaysia Sdn. Bhd. | Consulting Services
> Direct: +(603) 2382 5668 | Mobile: +6010-2288030
RE: Solr 5.2.1 setup zookeeper ensemble problem
Thanks David. That was helpful. I have managed to upload my configuration to ZooKeeper with the command below.

Best regards,
Adrian

-----Original Message-----
From: davidphilip cherian [mailto:davidphilipcher...@gmail.com]
Sent: Monday, July 6, 2015 4:35 PM
To: solr-user@lucene.apache.org
Subject: Re: Solr 5.2.1 setup zookeeper ensemble problem

Hi Adrian,

What is the error that you are getting? In order to upload config files, you can use the zkcli.sh script that ships with Solr, with the upconfig command:

    ./server/scripts/cloud-scripts/zkcli.sh -zkhost 127.0.0.1:9983 \
      -cmd upconfig -confname my_new_config -confdir server/solr/configsets/basic_configs/conf

https://cwiki.apache.org/confluence/display/solr/Command+Line+Utilities

On Mon, Jul 6, 2015 at 1:43 PM, Adrian Liew adrian.l...@avanade.com wrote:
> There seems to be an issue running the following command using solr.cmd:
>
>     bin\solr.cmd -e cloud -z localhost:2181,localhost:2182,localhost:2183
>
> Can anyone please advise? Also, is there a way to upload a configuration (containing schema.xml and solrconfig.xml) to ZooKeeper easily using solr.cmd in Solr 5.2.1?
>
> Best regards,
> Adrian Liew | Consultant Application Developer
> Avanade Malaysia Sdn. Bhd. | Consulting Services
> Direct: +(603) 2382 5668 | Mobile: +6010-2288030
Indexed field to schema field
Hi,

Can I somehow translate the fields which I read from newSearcher.getAtomicReader().fields() to schema fields? Does Solr expose any method to do this translation? The alternative approach I am thinking of would involve a lot of regex computation, as the fields would be _string, _float, etc., and I would have to remove those suffixes; this becomes a little tricky when fields are dynamic.

Thanks and regards,
Gajendra Dadheech
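The suffix-stripping fallback described above need not involve much regex at all. A minimal plain-Java sketch, assuming the dynamic-field suffixes are a known fixed set (the suffix list below is hypothetical and would have to match the dynamicField rules in your schema.xml, where this mapping is not reversible for arbitrary dynamic patterns):

```java
import java.util.Arrays;
import java.util.List;

public class SchemaFieldMapper {
    // Hypothetical suffix list -- adjust to the dynamicField declarations in your schema.xml
    private static final List<String> SUFFIXES =
            Arrays.asList("_string", "_float", "_int", "_long", "_double", "_boolean");

    /** Strips a known dynamic-field suffix, returning the base schema field name. */
    public static String toSchemaField(String indexedField) {
        for (String suffix : SUFFIXES) {
            if (indexedField.endsWith(suffix)) {
                return indexedField.substring(0, indexedField.length() - suffix.length());
            }
        }
        return indexedField; // no known suffix -- treat as a regular schema field name
    }

    public static void main(String[] args) {
        System.out.println(toSchemaField("title_string")); // title
        System.out.println(toSchemaField("price_float"));  // price
        System.out.println(toSchemaField("id"));           // id
    }
}
```

Inside a Solr plugin you could instead ask the schema directly (IndexSchema exposes lookups for field and dynamic-field definitions), which avoids hard-coding the suffix list; the sketch above only covers the regex-free string manipulation.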
Solr transactions naming using New Relic - anyone did it?
Hi,

I want fine-grained naming of Solr calls in New Relic (https://docs.newrelic.com/docs/agents/java-agent/instrumentation/naming-web-transactions), so I can focus on monitoring the requests to Solr that contribute to end-user wait time. Has anyone already implemented such a solution?

In more detail: out of the box, New Relic monitoring of a Solr server groups all Solr calls (queries/updates) under 'SolrRequestFilter', regardless of the target collection. I want to be able to separate end-user calls from other types of requests not directly related to the end-user experience, such as:

1. A SolrJ client querying Solr as part of a batch computation.
2. SolrJ indexing into Solr.
3. Solr cores calling each other as part of distributed querying/indexing or recovery.

Thanks.
Re: Problems with loading core when starting Solr 5.2.1
If the ZooKeeper in use isn't visible via the UI, it should be. Does it show on the main dashboard under 'args'? Upayavira On Tue, Jul 7, 2015, at 04:30 AM, Zheng Lin Edwin Yeo wrote: Thanks Erick for the info. So mine should be running on external ZooKeeper, since I'm using - bin\solr.cmd -e cloud -z localhost:2181 and not -DzkRun, and Solr is able to read the updated configuration after I update it in the external ZooKeeper. Regards, Edwin On 7 July 2015 at 11:17, Erick Erickson erickerick...@gmail.com wrote: A Zookeeper quorum is not necessary for searching. Well, that's stating it a little too strongly. A Solr cluster will do its best to search given the last state of the cluster each Solr node got from Zookeeper. You will not be able to index without a quorum though. bq: How do I check if my Solr is running based on the external ZooKeeper or the embedded ZooKeeper in Solr 5.2.1? You have to specify the -DzkRun to run embedded ZK. If you didn't do that, it's not running. Best, Erick On Mon, Jul 6, 2015 at 8:13 PM, Zheng Lin Edwin Yeo edwinye...@gmail.com wrote: How do I check if my Solr is running based on the external ZooKeeper or the embedded ZooKeeper in Solr 5.2.1? Currently I can still do a search even though I've shut down 2 out of 3 ZooKeeper nodes. Only the Cloud tab is down, but the rest of the search is still working. Regards, Edwin On 6 July 2015 at 20:59, Shawn Heisey apa...@elyograg.org wrote: On 7/5/2015 10:17 PM, Zheng Lin Edwin Yeo wrote: I've just migrated to Solr 5.2.1 with external ZooKeeper 3.4.6. Whenever I try to start Solr using these commands, the Solr server gets started, but none of the cores is actually loaded. - bin\solr.cmd start -cloud -z localhost:2181 - bin\solr.cmd -cloud -p 8983 -s server\solr -z localhost:2181 I can only get the cores to be loaded when I use the following command - bin\solr.cmd -e cloud -z localhost:2181 After you have created the cloud example, you need different commands to restart or start those nodes. 
See the Restarting Nodes section of this page for some help with doing this: https://cwiki.apache.org/confluence/display/solr/Getting+Started+with+SolrCloud Thanks, Shawn
Search inconsistency on Solr
Hi All, I started using Solr a while back. Now I'm writing JUnit tests for the Solr application, but I encounter inconsistency in my results, causing my JUnit tests to fail. Sometimes the order of the documents returned differs because of the score. Do you have any pointers to solve this? I've read these two issues, which seem similar to mine: https://issues.apache.org/jira/browse/SOLR-5821 https://issues.apache.org/jira/browse/SOLR-4260 Regards, jpmantuano
Re: Problems with loading core when starting Solr 5.2.1
Yes, it shows the following under 'args'. - -DzkHost=localhost:2181,localhost:2182,localhost:2183 Regards, Edwin On 7 July 2015 at 15:22, Upayavira u...@odoko.co.uk wrote: If the zookeeper used isnt visible via the ui, it should be. Does it show on the main dashboard under 'args'? Upayavira On Tue, Jul 7, 2015, at 04:30 AM, Zheng Lin Edwin Yeo wrote: Thanks Erick for the info. So mine should be running on external ZooKeeper, since I'm using - bin\solr.cmd -e cloud -z localhost:2181 and not -DzkRun, and Solr is able to read the updated the configuration after I update it in the external ZooKeeper. Regards, Edwin On 7 July 2015 at 11:17, Erick Erickson erickerick...@gmail.com wrote: A Zookeeper quorum is not necessary for searching. Well, that's stating it a little too strongly. A Solr cluster will do its best to search given the last state of the cluster each Solr node got from Zookeeper. You will not be able to index without a quorum though. bq: How do I check if my Solr is running based on the external ZooKeeper or the embedded ZooKeeper in Solr 5.2.1? You have to specify the -DzkRun to run embedded ZK. If you didn't do that, it's not running. Best, Erick On Mon, Jul 6, 2015 at 8:13 PM, Zheng Lin Edwin Yeo edwinye...@gmail.com wrote: How do I check if my Solr is running based on the external ZooKeeper or the embedded ZooKeeper in Solr 5.2.1? Currently I can still do a search even though I've shut down 2 out of 3 ZooKeeper. Only the Cloud tab is down, but the rest of the search are still working. Regards, Edwin On 6 July 2015 at 20:59, Shawn Heisey apa...@elyograg.org wrote: On 7/5/2015 10:17 PM, Zheng Lin Edwin Yeo wrote: I've just migrated to Solr 5.2.1 with external ZooKeeper 3.4.6. Whenever I tried to start Solr using these command, the Solr servers gets started, but none of the cores is actually loaded. 
- bin\solr.cmd start -cloud -z localhost:2181 - bin\solr.cmd -cloud -p 8983 -s server\solr -z localhost:2181 I can only get the core to be loaded when I use the following command - bin\solr.cmd -e cloud -z localhost:2181 After you have created the cloud example, you need different commands to restart or start those nodes. See the Restarting Nodes section of this page for some help with doing this: https://cwiki.apache.org/confluence/display/solr/Getting+Started+with+SolrCloud Thanks, Shawn
[ANNOUNCE] Luke 5.2.0 released
This is a major release supporting lucene / solr 5.2.0. Download the zip here: https://github.com/DmitryKey/luke/releases/tag/luke-5.2.0 It supports elasticsearch 1.6.0 (lucene 4.10.4) Issues fixed: #20 https://github.com/DmitryKey/luke/issues/20 Added support for reconstructing field values of indexed and not stored fields, that do not expose positions. Pull requests: #23 https://github.com/DmitryKey/luke/pull/23 Elasticsearch support and Shade plugin for assembly #26 https://github.com/DmitryKey/luke/pull/26 added .gitignore to project #27 https://github.com/DmitryKey/luke/pull/27 Lucene 5x support #28 https://github.com/DmitryKey/luke/pull/28 Added LUKE_PATH env variable to luke.sh #30 https://github.com/DmitryKey/luke/pull/30 Luke 5.2 -- Dmitry Kan Luke Toolbox: http://github.com/DmitryKey/luke Blog: http://dmitrykan.blogspot.com Twitter: http://twitter.com/dmitrykan SemanticAnalyzer: www.semanticanalyzer.info
Re: Solr edismax always using the default fields?
What is your query? Note that edismax uses the Lucene query parser by default, and dismax if it encounters an error. Add debugQuery=true to your query, and look at the parsed query output to see what your query has been parsed into. This can often be very illuminating, as you will see which fields are used and when. Upayavira On Tue, Jul 7, 2015, at 06:55 PM, Paden wrote: Hello, I'm trying to tune a search handler to get the results that I want. In the solrconfig.xml I specify several different query fields for the edismax query parser, but it always seems to use the default fields instead. For example, when I remove Author from the df list of fields, the search results do not come up with the author I was looking for. But when I add Author back in as a df, it returns 19/20 of his documents as the top search results. I guess I'm really confused, because when you specify the query fields Solr shouldn't use those default fields. It's not a huge problem, because I can define all the fields I need as defaults; I'm just trying to figure out where I'm going wrong here. Is there a reason this would happen? I have the search handler XML below:

<requestHandler name="/select" class="solr.SearchHandler">
  <lst name="defaults">
    <str name="echoParams">all</str>
    <str name="defType">edismax</str>
    <str name="df">Author</str> <!-- ADDING THIS GIVES DESIRED RESULT, REMOVING GIVES BAD RESULT. NOTHING ELSE CHANGES -->
    <str name="df">text</str>
    <str name="qf">text^0.5 Title^1.2 Author^2.0</str>
    <str name="rows">20</str>
    <str name="q.alt">*some author name*</str>
  </lst>
</requestHandler>

And here are the echoed params of the search:

"responseHeader": { "status": 0, "QTime": 48, "params": { "lowercaseOperators": "true", "df": [ "Author", "text" ], "echoParams": "all", "indent": "true", "qf": "text^0.5 Title^1.2 Author^2.0", "_": "1436291181730", "q.alt": "*some author name*", "stopwords": "true", "wt": "json", "defType": "edismax", "rows": "20" } }

-- View this message in context: http://lucene.472066.n3.nabble.com/Solr-edismax-always-using-the-default-fields-tp4216204.html Sent from the Solr - User mailing list archive at Nabble.com.
Re: Solr edismax always using the default fields?
Well I've just been using an author's name: Last Name, First Name Middle Initial, like Snowman, Frosty T. As for the debugging, I'm not really seeing anything that would help me understand why the query fields aren't kicking in and only the default fields are used. I do see that it is parsing the name as a text field, though: org.apache.lucene.search.BooleanClause:+text:lastname, org.apache.lucene.search.BooleanClause:+text:firstname, org.apache.lucene.search.BooleanClause:+text:middleinitial But I already knew it was parsing it as text. I just don't understand why it's going to the default field when other fields have been specified. -- View this message in context: http://lucene.472066.n3.nabble.com/Solr-edismax-always-using-the-default-fields-tp4216204p4216211.html Sent from the Solr - User mailing list archive at Nabble.com.
Re: unsubscribe
Unsubscribe On Jul 7, 2015 11:39 AM, Jacob Singh jacobsi...@gmail.com wrote: -- +1 512-522-6281 twitter: @JacobSingh ( http://twitter.com/#!/JacobSingh ) web: http://www.jacobsingh.name Skype: pajamadesign gTalk: jacobsi...@gmail.com
Too many Soft commits and opening searchers realtime
Hi All, Can someone help me understand the following behavior? I have the following maxTimes on hard and soft commits, yet I see a lot of Opening Searchers in the log: org.apache.solr.search.SolrIndexSearcher - Opening Searcher@1656a258[main] realtime Also, I see a soft commit happening almost every 30 secs: org.apache.solr.update.UpdateHandler - start commit{,optimize=false,openSearcher=true,waitSearcher=true,expungeDeletes=false,softCommit=true,prepareCommit=false}

<autoCommit>
  <maxTime>48</maxTime>
  <openSearcher>false</openSearcher>
</autoCommit>
<autoSoftCommit>
  <maxTime>18</maxTime>
</autoSoftCommit>

I tried disabling softCommit by setting maxTime to -1. On startup solrCore recognized it and logged Soft AutoCommit: disabled, but I could still see softCommit=true: org.apache.solr.update.UpdateHandler - start commit{,optimize=false,openSearcher=true,waitSearcher=true,expungeDeletes=false,softCommit=true,prepareCommit=false}

<autoSoftCommit>
  <maxTime>-1</maxTime>
</autoSoftCommit>

Thanks, Summer
Re: Too many Soft commits and opening searchers realtime
Are the clients that are posting updates requesting commits? On Tue, Jul 7, 2015 at 4:29 PM, Summer Shire shiresum...@gmail.com wrote: HI All, Can someone help me understand the following behavior. I have the following maxTimes on hard and soft commits yet I see a lot of Opening Searchers in the log org.apache.solr.search.SolrIndexSearcher - Opening Searcher@1656a258[main] realtime also I see a soft commit happening almost every 30 secs org.apache.solr.update.UpdateHandler - start commit{,optimize=false,openSearcher=true,waitSearcher=true,expungeDeletes=false,softCommit=true,prepareCommit=false} <autoCommit> <maxTime>48</maxTime> <openSearcher>false</openSearcher> </autoCommit> <autoSoftCommit> <maxTime>18</maxTime> </autoSoftCommit> I tried disabling softCommit by setting maxTime to -1. On startup solrCore recognized it and logged Soft AutoCommit: disabled but I could still see softCommit=true org.apache.solr.update.UpdateHandler - start commit{,optimize=false,openSearcher=true,waitSearcher=true,expungeDeletes=false,softCommit=true,prepareCommit=false} <autoSoftCommit> <maxTime>-1</maxTime> </autoSoftCommit> Thanks, Summer
boolean for if highlighting snippet contains complete original value or is truncated
I've been playing around with highlighting snippets and there are times when I'd like to know if the snippet returned contains the entire value of the original field or if the snippet is a truncated version of the original field. For example, when I search for xms below I can tell that the snippet returned is truncated: - original: CORSAIR XMS 2GB (2 x 1GB) 184-Pin DDR SDRAM Unbuffered DDR 400 (PC 3200) Dual Channel Kit System Memory - Retail - snippet: CORSAIR <em>XMS</em> 2GB (2 x 1GB) 184-Pin DDR SDRAM Unbuffered DDR 400 (PC 3200) Dual Channel Kit System Obviously, I can compare the original to the snippet (stripping out the <em> tags) to see if they are the same, but does Solr natively support returning a boolean if the values are equal? I couldn't find anything at https://wiki.apache.org/solr/HighlightingParameters Maybe the boolean would say truncated=true or something. Here's the example: $ curl 'http://localhost:8983/solr/collection1/select?wt=json&indent=true&hl=true&hl.fl=*&q=xms' { "responseHeader":{ "status":0, "QTime":2, "params":{ "indent":"true", "q":"xms", "hl.fl":"*", "wt":"json", "hl":"true"}}, "response":{"numFound":1,"start":0,"docs":[ { "id":"TWINX2048-3200PRO", "name":"CORSAIR XMS 2GB (2 x 1GB) 184-Pin DDR SDRAM Unbuffered DDR 400 (PC 3200) Dual Channel Kit System Memory - Retail", "manu":"Corsair Microsystems Inc.", "manu_id_s":"corsair", "cat":["electronics","memory"], "features":["CAS latency 2, 2-3-3-6 timing, 2.75v, unbuffered, heat-spreader"], "price":185.0, "price_c":"185,USD", "popularity":5, "inStock":true, "store":"37.7752,-122.4232", "manufacturedate_dt":"2006-02-13T15:26:37Z", "payloads":"electronics|6.0 memory|3.0", "_version_":1506070286991097856}] }, "highlighting":{ "TWINX2048-3200PRO":{ "name":["CORSAIR <em>XMS</em> 2GB (2 x 1GB) 184-Pin DDR SDRAM Unbuffered DDR 400 (PC 3200) Dual Channel Kit System"]}}} My use case is only including ellipses (...) 
when the snippet is truncated: https://github.com/IQSS/dataverse/issues/537 Thanks, Phil -- Philip Durbin Software Developer for http://dataverse.org http://www.iq.harvard.edu/people/philip-durbin
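Since Solr exposes no such boolean, the client-side comparison described above can be sketched in a few lines. This assumes the default <em>...</em> highlight markers; if you override hl.simple.pre/hl.simple.post, the strings to strip would change accordingly.

```java
public class SnippetCheck {
    /** True if the snippet, minus its <em>/</em> markers, differs from the
     *  original stored field value, i.e. the highlighter truncated it. */
    public static boolean isTruncated(String original, String snippet) {
        String plain = snippet.replace("<em>", "").replace("</em>", "");
        return !plain.equals(original);
    }

    public static void main(String[] args) {
        String original = "CORSAIR XMS 2GB (2 x 1GB) 184-Pin DDR SDRAM Unbuffered DDR 400 (PC 3200) Dual Channel Kit System Memory - Retail";
        String snippet  = "CORSAIR <em>XMS</em> 2GB (2 x 1GB) 184-Pin DDR SDRAM Unbuffered DDR 400 (PC 3200) Dual Channel Kit System";
        System.out.println(isTruncated(original, snippet)); // true
    }
}
```

When isTruncated returns true, the client can append the ellipsis; when false, it shows the snippet as-is.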
Re: Solr edismax always using the default fields?
It just defaults to text anyway. I removed it entirely from the solrconfig and never specify it in the Solr query portion, but it still defaults to text. -- View this message in context: http://lucene.472066.n3.nabble.com/Solr-edismax-always-using-the-default-fields-tp4216204p4216224.html Sent from the Solr - User mailing list archive at Nabble.com.
Re: Solr edismax always using the default fields?
According to the echoParams output, you aren't specifying a q param. You seem to be trying to specify your query input using the q.alt param -- but the q.alt param doesn't use the edismax parser specified by the defType param. q.alt is a feature *of* the edismax parser that is used to completely override all of the options used to configure edismax when there is no q param, and it (by default) uses the default Solr parser. (The intended use case is along the lines of: if the user doesn't specify a query string, default to a search for all the books for sale via 'q.alt=inStock:true'.) Stop configuring a q.alt that matches your author's name, and instead specify a default q param (or ensure your client always sends a q param) and see if that solves things for you. : Date: Tue, 7 Jul 2015 10:55:24 -0700 (MST) : From: Paden rumsey...@gmail.com : Reply-To: solr-user@lucene.apache.org : To: solr-user@lucene.apache.org : Subject: Solr edismax always using the default fields? : : Hello, : : I'm trying to tune a search handler to get the results that I want. In the : solrconfig.xml I specify several different query fields for the edismax : query parser but it always seems to use the default fields instead. : : For example and clarification, when I remove Author from the df list of : fields the search results do not come up with the author I was looking for. : But when I add Author back in as a df it returns 19/20 of his documents as : the top search results. I guess I'm really confused because when you specify : the query fields solr shouldn't use those default fields. I guess it's not a : real super problem because I can define all the fields I need as defaults. : I'm just trying to figure out where I'm going wrong here. Is there a reason : that this would happen? 
I have the search handler xml below :
: <requestHandler name="/select" class="solr.SearchHandler">
: <lst name="defaults">
: <str name="echoParams">all</str>
: <str name="defType">edismax</str>
: <str name="df">Author</str> <!-- ADDING THIS GIVES DESIRED RESULT, REMOVING GIVES BAD RESULT. NOTHING ELSE CHANGES -->
: <str name="df">text</str>
: <str name="qf">text^0.5 Title^1.2 Author^2.0</str>
: <str name="rows">20</str>
: <str name="q.alt">*some author name*</str>
: </lst>
: </requestHandler>
: And here is the echoed params of the search :
: "responseHeader": { "status": 0, "QTime": 48, "params": { "lowercaseOperators": "true", "df": [ "Author", "text" ], "echoParams": "all", "indent": "true", "qf": "text^0.5 Title^1.2 Author^2.0", "_": "1436291181730", "q.alt": "*some author name*", "stopwords": "true", "wt": "json", "defType": "edismax", "rows": "20" } }
: -- : View this message in context: http://lucene.472066.n3.nabble.com/Solr-edismax-always-using-the-default-fields-tp4216204.html : Sent from the Solr - User mailing list archive at Nabble.com. : -Hoss http://www.lucidworks.com/
Re: Solr edismax always using the default fields?
Thank you! Thank you, thank you, thank you. That worked and it brought the right results. Thanks. It was driving me crazy. -- View this message in context: http://lucene.472066.n3.nabble.com/Solr-edismax-always-using-the-default-fields-tp4216204p4216228.html Sent from the Solr - User mailing list archive at Nabble.com.
Re: Sorting documents by nested / child docs with FunctionQueries
Now I'm trying to sort the children by normal field and not by function values. Is there any way to do so? Thanks, Dor -- View this message in context: http://lucene.472066.n3.nabble.com/Sorting-documents-by-nested-child-docs-with-FunctionQueries-tp4209940p4216169.html Sent from the Solr - User mailing list archive at Nabble.com.
Re: Search inconsistency on Solr
Hi, you pointed to a couple of issues related to the SolrCloud architecture. Does your problem involve querying different Solr nodes? If you are testing a single node, I find it really hard to believe Solr scoring is not consistent (assuming the index is not changing). Can you give us details? If the index changes, the result for a query can change, of course. Cheers 2015-07-07 15:00 GMT+01:00 Walter Underwood wun...@wunderwood.org: We test the order of results, not the exact score. Score values depend on the number of documents in the index. Also, the order is the only thing we care about. wunder Walter Underwood wun...@wunderwood.org http://observer.wunderwood.org/ (my blog) On Jul 7, 2015, at 12:40 AM, joseph paulo mantuano paulo.mantu...@gmail.com wrote: Hi All, I started using solr a while back. Now I'm writing jUnit test for the solr application but I encounter inconsistency in my results causing my jUnit tests to fail. There are times that the order of the documents returned will be different because of the score. Do you have any pointers to solve this? I've read this two issues; which I see to be similar so mine. https://issues.apache.org/jira/browse/SOLR-5821 https://issues.apache.org/jira/browse/SOLR-4260 Regards, jpmantuano -- -- Benedetti Alessandro Visiting card : http://about.me/alessandro_benedetti Tyger, tyger burning bright In the forests of the night, What immortal hand or eye Could frame thy fearful symmetry? William Blake - Songs of Experience -1794 England
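A common remedy for order-flaky tests like this is a deterministic tie-breaker: when two documents have equal scores, their relative order is otherwise unspecified, so sorting on the uniqueKey as a secondary key pins it down. The sketch below illustrates the idea with in-memory data (made-up IDs and scores); against a live index the equivalent would be a sort spec such as score desc,id asc.

```java
import java.util.Comparator;
import java.util.List;
import java.util.stream.Collectors;

public class StableOrder {
    record Doc(String id, float score) {}

    /** Orders docs by score descending, breaking ties on id ascending. */
    public static List<String> stableIds(List<Doc> docs) {
        return docs.stream()
                .sorted(Comparator.comparingDouble((Doc d) -> -d.score)
                        .thenComparing(Doc::id))
                .map(Doc::id)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        // Two docs share score 1.0: without the id tie-break, their relative
        // order could legitimately differ between runs or index segments.
        List<Doc> run1 = List.of(new Doc("b", 1.0f), new Doc("a", 1.0f), new Doc("c", 2.0f));
        List<Doc> run2 = List.of(new Doc("a", 1.0f), new Doc("c", 2.0f), new Doc("b", 1.0f));
        System.out.println(stableIds(run1)); // [c, a, b]
        System.out.println(stableIds(run2)); // [c, a, b]
    }
}
```

With the tie-break in place, a JUnit assertion on the ID list passes regardless of how the engine happened to order the equal-score documents.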
Re: solr5 not woring in axis2
It is actually not so easy to understand this mail, but let me try anyway. Which version of SolrJ are you using? With Solr 5.x, SolrServer actually became SolrClient (which makes more sense). That would produce a simple compilation error, so I guess it is not the problem... but I can't do more with these sparse details. If you give more info, maybe we can try to help! Cheers 2015-07-07 7:05 GMT+01:00 Saravanakumar Ramasamy r...@zoniac.com: Hi Before upgrade version === Tomcat : 6 Jdk 1.6 Solr3.1 Axis2 1.6.2 We are using like Server == import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer; CommonsHttpSolrServer solrServer = new CommonsHttpSolrServer(solrWebServer); Connection == import org.apache.solr.client.solrj.SolrQuery; import org.apache.solr.client.solrj.response.QueryResponse; SolrQuery query = new SolrQuery(); query.setQuery(solrMainQuery.toString()); QueryResponse qr = solrServer.query(query); Both pieces of code work well. After upgrade version == Tomcat : 8 Jdk 1.8 Solr5.1 Axis2 1.6.2 Server == import org.apache.solr.client.solrj.impl.HttpSolrClient; HttpSolrClient solrServer = new HttpSolrClient(solrWebServer); Connection == import org.apache.solr.client.solrj.SolrQuery; import org.apache.solr.client.solrj.response.QueryResponse; SolrQuery query = new SolrQuery(); query.setQuery(solrMainQuery.toString()); QueryResponse qr = solrServer.query(query); The query does not reach the Solr server. Once I comment out the highlighted package, the query reaches the server but throws an exception. I am using 3 applications (root, solr, axis2) in my Tomcat: Root to solr -> same code working fine; Axis2 to solr -> same code not working. Where did I go wrong? Is there an alternative package, or did I miss dependency libraries? With Regards, RSK -- -- Benedetti Alessandro Visiting card : http://about.me/alessandro_benedetti Tyger, tyger burning bright In the forests of the night, What immortal hand or eye Could frame thy fearful symmetry? William Blake - Songs of Experience -1794 England
Re: Problems with loading core when starting Solr 5.2.1
Found the answer here: http://lucidworks.com/blog/call-maybe-solrcloud-jepsen-flaky-networks/ /Fadi On 07 Jul 2015, at 11:48, Fadi Mohsen fadi.moh...@gmail.com wrote: Hi, Would Solr give inconsistent results if ZK quorum not achieved + one leader (last state) goes down? /Fadi On 07 Jul 2015, at 05:17, Erick Erickson erickerick...@gmail.com wrote: A Zookeeper quorum is not necessary for searching. Well, that's stating it a little too strongly. A Solr cluster will do its best to search given the last state of the cluster each Solr node got from Zookeeper. You will not be able to index without a quorum though. bq: How do I check if my Solr is running based on the external ZooKeeper or the embedded ZooKeeper in Solr 5.2.1? You have to specify the -DzkRun to run embedded ZK. If you didn't do that, it's not running. Best, Erick On Mon, Jul 6, 2015 at 8:13 PM, Zheng Lin Edwin Yeo edwinye...@gmail.com wrote: How do I check if my Solr is running based on the external ZooKeeper or the embedded ZooKeeper in Solr 5.2.1? Currently I can still do a search even though I've shut down 2 out of 3 ZooKeeper. Only the Cloud tab is down, but the rest of the search are still working. Regards, Edwin On 6 July 2015 at 20:59, Shawn Heisey apa...@elyograg.org wrote: On 7/5/2015 10:17 PM, Zheng Lin Edwin Yeo wrote: I've just migrated to Solr 5.2.1 with external ZooKeeper 3.4.6. Whenever I tried to start Solr using these command, the Solr servers gets started, but none of the cores is actually loaded. - bin\solr.cmd start -cloud -z localhost:2181 - bin\solr.cmd -cloud -p 8983 -s server\solr -z localhost:2181 I can only get the core to be loaded when I use the following command - bin\solr.cmd -e cloud -z localhost:2181 After you have created the cloud example, you need different commands to restart or start those nodes. See the Restarting Nodes section of this page for some help with doing this: https://cwiki.apache.org/confluence/display/solr/Getting+Started+with+SolrCloud Thanks, Shawn
Re: Migrating from solr 1.4.1 to 4.7
On 7/7/2015 4:08 AM, anil.chandgude wrote: Currently I am using apache-solr 1.4, which does not contain Lucene internally, so we used the Lucene 2.9 jar to get it to work. Now we want to migrate from Solr version 1.4 to 4.7 due to a JDK version issue (we have jdk1.6 only). But before migrating to 4.7, I am trying the Solr 3.1.0 version, and it seems there are no big changes between the two. So please guide me on the version upgrade: which steps should I take? Solr 1.4 should have Lucene in it. I used Solr 1.4.0 and 1.4.1 without downloading any Lucene binaries; they were included in the .war file. This version does use Lucene 2.9. The only place I know of where Solr and Lucene are separated is in Debian/Ubuntu packages. Starting in 3.1.0, the Lucene version and Solr version in each release are the same number. Solr 3.x is largely similar to 1.4.x. Version 4.0 is where things became very different. The jump from 4.x to 5.0 is also quite large. I strongly recommend that you rebuild your index entirely when you upgrade. If you can't do that for some reason, be aware that each version of Lucene can only read indexes as far back as the previous major version, so Solr 4.7 cannot read an index built by the Lucene 2.9 that's used in Solr 1.4. If you upgrade to a 3.x version first, you'll need to optimize each index so that it is completely re-written in the new format. When upgrading, you should look closely at CHANGES.txt and read the section for each version between the version you start with and the version you are going to. You should also look at the example solrconfig and schema included with the old version and the new version, and compare them to see what's changed, so you will know what you might want to change in your own config/schema. Even better would be to start with the example config and change it to meet your requirements. It would be a good idea to upgrade your Java version and run a newer Solr version than 4.7. 
Only Java 7 is required, but there are some very good performance and memory handling improvements in Java 8. Both Java 6 and Java 7 have reached End of Support from Oracle, so there won't be any new versions unless a major security problem is found. Only Java 8 will continue to see new versions. Thanks, Shawn
RE: function query result without queryNorm
Hello - you can either use a similarity that does not use query normalization, or you can just ignore it; it is relative anyway. Also, consider using the boost parameter instead of bf: it is multiplicative, where bf is just additive, which offers less control. You may also want to reduce time resolution by using NOW/HOUR or NOW/DAY; it saves you a lot of bad cache entries. Markus -Original message- From:Lee Chunki lck7...@coupang.com Sent: Tuesday 7th July 2015 13:29 To: solr-user@lucene.apache.org Subject: function query result without queryNorm Hi, I want to add some value to the score, so I tried to use "bf", but it returns value * queryNorm. For example, when I use q=iphone&bf=div(ms(NOW,start_time),360) Solr returns 799.5687 = (MATCH) FunctionQuery(div(ms(const(1436268218591),date(start_time)),const(360))), product of: 6988.394 = div(ms(const(1436268218591),date(start_time)=2014-09-19T07:00:00Z),const(360)) 1.0 = boost 0.114413805 = queryNorm How can I get only the FunctionQuery() result? Thanks, Calvin.
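Markus's additive-vs-multiplicative point can be illustrated with some made-up numbers (the relevance and recency values below are arbitrary assumptions, not Solr output): an additive bf drowns out small relevance scores, while a multiplicative boost preserves relative differences at any scale.

```java
public class BoostMath {
    // bf adds the function value to the relevance score.
    public static double withBf(double relevance, double functionValue) {
        return relevance + functionValue;
    }

    // boost multiplies the relevance score by the function value.
    public static double withBoost(double relevance, double functionValue) {
        return relevance * functionValue;
    }

    public static void main(String[] args) {
        double recencyA = 2.0, recencyB = 1.0; // hypothetical recency function values
        // With a small base relevance, the additive bf compresses the 2:1
        // recency difference to roughly 1.9:1...
        System.out.println(withBf(0.1, recencyA) / withBf(0.1, recencyB));
        // ...while the multiplicative boost keeps the exact 2:1 ratio.
        System.out.println(withBoost(0.1, recencyA) / withBoost(0.1, recencyB));
    }
}
```

The same arithmetic also shows why queryNorm can usually be ignored: it multiplies every document's score by the same constant, so ratios and ordering are unchanged.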
Re: autosuggest with solr.EdgeNGramFilterFactory no result found
Thanx Erick, Your blog article was the perfect answer to my problem. Rgds, Roland 2015-07-03 18:57 GMT+02:00 Erick Erickson erickerick...@gmail.com: OK, I think you took a wrong turn at the bakery The FST-based suggesters are intended to look at the beginnings of fields. It is totally unnecessary to use ngrams, the FST that gets built does that _for_ you. Actually it builds an internal FST structure that does this en passant. For getting whole fields that are anywhere in the input field, you probably want to think about AnalyzingInfixSuggester or FreeTextSuggester. The important bit here is that you shouldn't have to do so much work... This might help: http://lucidworks.com/blog/solr-suggester/ Best, Erick On Fri, Jul 3, 2015 at 4:40 AM, Roland Szűcs roland.sz...@bookandwalk.com wrote: I tried to setup an autosuggest feature with multiple dictionaries for title , author and publisher fields. I used the solr.EdgeNGramFilterFactory to optimize the performance of the auto suggest. I have a document in the index with title: Romana. 
When I test the text analysis for autosuggest (on the field title_suggest_ngram), the EdgeNGram filter stage emits the tokens: rom, roma, roman, romana. If I try to run http://localhost:8983/solr/bandw/suggest?q=Roma, I get:

<response>
  <lst name="responseHeader">
    <int name="status">0</int>
    <int name="QTime">1</int>
  </lst>
  <lst name="suggest">
    <lst name="suggest_publisher">
      <lst name="Roma">
        <int name="numFound">0</int>
        <arr name="suggestions"/>
      </lst>
    </lst>
    <lst name="suggest_title">
      <lst name="Roma">
        <int name="numFound">0</int>
        <arr name="suggestions"/>
      </lst>
    </lst>
    <lst name="suggest_author">
      <lst name="Roma">
        <int name="numFound">0</int>
        <arr name="suggestions"/>
      </lst>
    </lst>
  </lst>
</response>

My relevant field definitions:

<field name="id" type="string" indexed="true" stored="true" required="true" multiValued="false" omitNorms="true"/>
<field name="author" type="text_hu" indexed="true" stored="true" multiValued="true"/>
<field name="title" type="text_hu" indexed="true" stored="true" multiValued="false"/>
<field name="subtitle" type="text_hu" indexed="true" stored="true" multiValued="false"/>
<field name="publisher" type="text_hu" indexed="true" stored="true" multiValued="false"/>
<field name="title_suggest_ngram" type="text_hu_suggest_ngram" indexed="true" stored="false" multiValued="false" omitNorms="true"/>
<field name="author_suggest_ngram" type="text_hu_suggest_ngram" indexed="true" stored="false" multiValued="false" omitNorms="true"/>
<field name="publisher_suggest_ngram" type="text_hu_suggest_ngram" indexed="true" stored="false" multiValued="false" omitNorms="true"/>
<copyField source="title" dest="title_suggest_ngram"/>
<copyField source="author" dest="author_suggest_ngram"/>
<copyField source="publisher" dest="publisher_suggest_ngram"/>

My EdgeNGram related field type definition:

<fieldType name="text_hu_suggest_ngram" class="solr.TextField" positionIncrementGap="100">
  <analyzer type="index">
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.StopFilterFactory" ignoreCase="true" words="stopwords_hu.txt"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <filter class="solr.EdgeNGramFilterFactory" minGramSize="3" maxGramSize="8"/>
  </analyzer>
  <analyzer type="query">
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.StopFilterFactory" ignoreCase="true" words="stopwords_hu.txt"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>

My requesthandler for suggest:

<requestHandler name="/suggest" class="solr.SearchHandler" startup="lazy">
  <lst name="defaults">
    <str name="suggest">true</str>
    <str name="suggest.count">5</str>
    <str name="suggest.dictionary">suggest_author</str>
    <str name="suggest.dictionary">suggest_title</str>
    <str name="suggest.dictionary">suggest_publisher</str>
  </lst>
  <arr name="components">
    <str>suggest</str>
  </arr>
</requestHandler>

And finally my searchcomponent:

<searchComponent name="suggest" class="solr.SuggestComponent">
  <lst name="suggester">
    <str name="name">suggest_title</str>
    <str name="lookupImpl">FSTLookupFactory</str>
    <str name="dictionaryImpl">DocumentDictionaryFactory</str>
    <str name="field">title_suggest_ngram</str>
    <str name="wightField">price</str>
    <str name="builOnStartup">true</str>
    <str name="buildOnCommit">true</str>
  </lst>
  <lst name="suggester">
    <str name="name">suggest_author</str>
    <str name="lookupImpl">FSTLookupFactory</str>
    <str name="dictionaryImpl">DocumentDictionaryFactory</str>
    <str name="field">author_suggest_ngram</str>
    <str name="wightField">price</str>
    <str name="builOnStartup">true</str>
    <str name="buildOnCommit">true</str>
  </lst>
  <lst name="suggester">
    <str name="name">suggest_publisher</str>
    <str name="lookupImpl">FSTLookupFactory</str>
    <str name="dictionaryImpl">DocumentDictionaryFactory</str>
    <str name="field">publisher_suggest_ngram</str>
    <str
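The index-time analysis in the configuration above is easy to reproduce by hand. This small sketch mimics what an edge n-gram filter with minGramSize=3 and maxGramSize=8 emits per token (after lowercasing), matching the tokens reported for "Romana" — which supports Erick's point that the FST suggester, not the analysis, is where the zero results come from.

```java
import java.util.ArrayList;
import java.util.List;

public class EdgeNGrams {
    /** Returns the leading substrings of token with lengths min..max. */
    public static List<String> edgeNGrams(String token, int min, int max) {
        List<String> grams = new ArrayList<>();
        for (int len = min; len <= Math.min(max, token.length()); len++) {
            grams.add(token.substring(0, len));
        }
        return grams;
    }

    public static void main(String[] args) {
        System.out.println(edgeNGrams("romana", 3, 8)); // [rom, roma, roman, romana]
    }
}
```

Since the lowercased query "roma" exactly matches one of these indexed grams, a plain prefix query would find it; the FST-based suggester builds its own prefix structure from whole field values instead, which is why the n-gram field buys nothing here.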
Re: Problems with loading core when starting Solr 5.2.1
Hi, Would Solr give inconsistent results if ZK quorum not achieved + one leader (last state) goes down? /Fadi On 07 Jul 2015, at 05:17, Erick Erickson erickerick...@gmail.com wrote: A Zookeeper quorum is not necessary for searching. Well, that's stating it a little too strongly. A Solr cluster will do its best to search given the last state of the cluster each Solr node got from Zookeeper. You will not be able to index without a quorum though. bq: How do I check if my Solr is running based on the external ZooKeeper or the embedded ZooKeeper in Solr 5.2.1? You have to specify the -DzkRun to run embedded ZK. If you didn't do that, it's not running. Best, Erick On Mon, Jul 6, 2015 at 8:13 PM, Zheng Lin Edwin Yeo edwinye...@gmail.com wrote: How do I check if my Solr is running based on the external ZooKeeper or the embedded ZooKeeper in Solr 5.2.1? Currently I can still do a search even though I've shut down 2 out of 3 ZooKeeper. Only the Cloud tab is down, but the rest of the search are still working. Regards, Edwin On 6 July 2015 at 20:59, Shawn Heisey apa...@elyograg.org wrote: On 7/5/2015 10:17 PM, Zheng Lin Edwin Yeo wrote: I've just migrated to Solr 5.2.1 with external ZooKeeper 3.4.6. Whenever I tried to start Solr using these command, the Solr servers gets started, but none of the cores is actually loaded. - bin\solr.cmd start -cloud -z localhost:2181 - bin\solr.cmd -cloud -p 8983 -s server\solr -z localhost:2181 I can only get the core to be loaded when I use the following command - bin\solr.cmd -e cloud -z localhost:2181 After you have created the cloud example, you need different commands to restart or start those nodes. See the Restarting Nodes section of this page for some help with doing this: https://cwiki.apache.org/confluence/display/solr/Getting+Started+with+SolrCloud Thanks, Shawn
Is there a difference in Solr 4.6 Spellchecker & Solr 5.2 Spellchecker results
Hi, The following is my configuration:

*Solr 4.6 - schema.xml*

<field name="suggest" type="text_suggest" indexed="true" stored="false" multiValued="true"/>
<copyField source="text" dest="suggest"/>
<fieldType name="text_suggest" class="solr.TextField" positionIncrementGap="100">
  <analyzer type="index">
    <tokenizer class="solr.UAX29URLEmailTokenizerFactory"/>
    <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_en.txt"/>
    <filter class="solr.ASCIIFoldingFilterFactory"/>
    <filter class="solr.EnglishPossessiveFilterFactory"/>
    <filter class="solr.RemoveDuplicatesTokenFilterFactory"/>
    <filter class="solr.TrimFilterFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
  <analyzer type="query">
    <tokenizer class="solr.UAX29URLEmailTokenizerFactory"/>
    <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_en.txt"/>
    <filter class="solr.ASCIIFoldingFilterFactory"/>
    <filter class="solr.EnglishPossessiveFilterFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>

*Solr 4.6 - solrconfig.xml*

<requestHandler class="org.apache.solr.handler.component.SearchHandler" name="/suggest">
  <lst name="defaults">
    <str name="echoParams">none</str>
    <str name="wt">json</str>
    <str name="indent">false</str>
    <str name="spellcheck">true</str>
    <str name="spellcheck.dictionary">suggestDictionary</str>
    <str name="spellcheck.onlyMorePopular">true</str>
    <str name="spellcheck.count">5</str>
    <str name="spellcheck.collate">false</str>
  </lst>
  <arr name="components">
    <str>suggest</str>
  </arr>
</requestHandler>

<searchComponent class="solr.SpellCheckComponent" name="suggest">
  <lst name="spellchecker">
    <str name="name">suggestDictionary</str>
    <str name="classname">org.apache.solr.spelling.suggest.Suggester</str>
    <str name="lookupImpl">org.apache.solr.spelling.suggest.fst.FSTLookupFactory</str>
    <str name="field">suggest</str>
    <float name="threshold">0.</float>
    <str name="buildOnCommit">true</str>
  </lst>
</searchComponent>

*Solr 4.6 Spellcheck query*

http://localhost:8983/solr/portal_documents/suggest?wt=xml&spellcheck.q=wh

*Solr 4.6 Spellcheck results*

<response>
  <lst name="responseHeader">
    <int name="status">0</int>
    <int name="QTime">0</int>
  </lst>
  <lst name="spellcheck">
    <lst name="suggestions">
      <lst name="wh">
        <int name="numFound">5</int>
        <int name="startOffset">0</int>
        <int name="endOffset">2</int>
        <arr name="suggestion">
          <str>when</str>
          <str>what</str>
          <str>where</str>
          <str>which</str>
          <str>who</str>
        </arr>
      </lst>
    </lst>
  </lst>
</response>

Now, we are migrating to Solr 5.1 and have the following configuration details:

*Solr 5.1 - schema.xml*

<field name="suggest" type="c_suggest" indexed="true" stored="false" multiValued="true"/>
<copyField source="text" dest="suggest"/>
<fieldType name="c_suggest" class="solr.TextField" positionIncrementGap="100">
  <analyzer type="index">
    <tokenizer class="solr.UAX29URLEmailTokenizerFactory"/>
    <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_en.txt"/>
    <filter class="solr.ASCIIFoldingFilterFactory"/>
    <filter class="solr.EnglishPossessiveFilterFactory"/>
    <filter class="solr.RemoveDuplicatesTokenFilterFactory"/>
    <filter class="solr.TrimFilterFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
  <analyzer type="query">
    <tokenizer class="solr.UAX29URLEmailTokenizerFactory"/>
    <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_en.txt"/>
    <filter class="solr.ASCIIFoldingFilterFactory"/>
    <filter class="solr.EnglishPossessiveFilterFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>

*Solr 5.1 - solrconfig.xml*

<searchComponent name="spellcheck" class="solr.SpellCheckComponent">
  <str name="queryAnalyzerFieldType">c_suggest</str>
  <lst name="spellchecker">
    <str name="name">default</str>
    <str name="field">suggest</str>
    <str name="classname">solr.DirectSolrSpellChecker</str>
    <str name="distanceMeasure">internal</str>
    <float name="accuracy">0.01</float>
    <int name="maxEdits">2</int>
    <int name="minPrefix">1</int>
    <int name="maxInspections">5</int>
    <int name="minQueryLength">1</int>
Migrating from solr 1.4.1 to 4.7
hello, currently I am using apache-solr 1.4, which does not contain Lucene internally, so we used the Lucene 2.9 jar to get it to work. Now we want to migrate from Solr 1.4 to 4.7 due to a JDK version issue (we have JDK 1.6 only). But before migrating to 4.7, I am trying Solr 3.1.0, and it seems there are no big changes between the two. Please guide me on the version upgrade: which steps should I take? thank you. Disclaimer: This e-mail and any documents, files, or previous e-mail messages appended or attached to it may contain confidential and/or privileged information. If you are not the intended recipient (or have received this e-mail in error) please notify the sender immediately and delete this e-mail. Any unauthorized copying, disclosure or distribution of the material in this e-mail is strictly forbidden.
Difference in Solr 4.6 & Solr 5.1 spellchecker results.
Hi, I have a working version of Solr 4.6 Spellchecker; the configuration details are as mentioned below:

*Solr 4.6 - schema.xml*

<field name="suggest" type="text_suggest" indexed="true" stored="false" multiValued="true"/>
<copyField source="text" dest="suggest"/>
<fieldType name="text_suggest" class="solr.TextField" positionIncrementGap="100">
  <analyzer type="index">
    <tokenizer class="solr.UAX29URLEmailTokenizerFactory"/>
    <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_en.txt"/>
    <filter class="solr.ASCIIFoldingFilterFactory"/>
    <filter class="solr.EnglishPossessiveFilterFactory"/>
    <filter class="solr.RemoveDuplicatesTokenFilterFactory"/>
    <filter class="solr.TrimFilterFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
  <analyzer type="query">
    <tokenizer class="solr.UAX29URLEmailTokenizerFactory"/>
    <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_en.txt"/>
    <filter class="solr.ASCIIFoldingFilterFactory"/>
    <filter class="solr.EnglishPossessiveFilterFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>

*Solr 4.6 - solrconfig.xml*

<requestHandler class="org.apache.solr.handler.component.SearchHandler" name="/suggest">
  <lst name="defaults">
    <str name="echoParams">none</str>
    <str name="wt">json</str>
    <str name="indent">false</str>
    <str name="spellcheck">true</str>
    <str name="spellcheck.dictionary">suggestDictionary</str>
    <str name="spellcheck.onlyMorePopular">true</str>
    <str name="spellcheck.count">5</str>
    <str name="spellcheck.collate">false</str>
  </lst>
  <arr name="components">
    <str>suggest</str>
  </arr>
</requestHandler>

<searchComponent class="solr.SpellCheckComponent" name="suggest">
  <lst name="spellchecker">
    <str name="name">suggestDictionary</str>
    <str name="classname">org.apache.solr.spelling.suggest.Suggester</str>
    <str name="lookupImpl">org.apache.solr.spelling.suggest.fst.FSTLookupFactory</str>
    <str name="field">suggest</str>
    <float name="threshold">0.</float>
    <str name="buildOnCommit">true</str>
  </lst>
</searchComponent>

*Solr 4.6 Spellcheck query*

http://localhost:8983/solr/portal_documents/suggest?wt=xml&spellcheck.q=wh

*Solr 4.6 Spellcheck results*

<response>
  <lst name="responseHeader">
    <int name="status">0</int>
    <int name="QTime">0</int>
  </lst>
  <lst name="spellcheck">
    <lst name="suggestions">
      <lst name="wh">
        <int name="numFound">5</int>
        <int name="startOffset">0</int>
        <int name="endOffset">2</int>
        <arr name="suggestion">
          <str>when</str>
          <str>what</str>
          <str>where</str>
          <str>which</str>
          <str>who</str>
        </arr>
      </lst>
    </lst>
  </lst>
</response>

Now, we are migrating to Solr 5.1 and have the following configuration details:

*Solr 5.1 - schema.xml*

<field name="suggest" type="c_suggest" indexed="true" stored="false" multiValued="true"/>
<copyField source="text" dest="suggest"/>
<fieldType name="c_suggest" class="solr.TextField" positionIncrementGap="100">
  <analyzer type="index">
    <tokenizer class="solr.UAX29URLEmailTokenizerFactory"/>
    <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_en.txt"/>
    <filter class="solr.ASCIIFoldingFilterFactory"/>
    <filter class="solr.EnglishPossessiveFilterFactory"/>
    <filter class="solr.RemoveDuplicatesTokenFilterFactory"/>
    <filter class="solr.TrimFilterFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
  <analyzer type="query">
    <tokenizer class="solr.UAX29URLEmailTokenizerFactory"/>
    <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_en.txt"/>
    <filter class="solr.ASCIIFoldingFilterFactory"/>
    <filter class="solr.EnglishPossessiveFilterFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>

*Solr 5.1 - solrconfig.xml*

<searchComponent name="spellcheck" class="solr.SpellCheckComponent">
  <str name="queryAnalyzerFieldType">c_suggest</str>
  <lst name="spellchecker">
    <str name="name">default</str>
    <str name="field">suggest</str>
    <str name="classname">solr.DirectSolrSpellChecker</str>
    <str name="distanceMeasure">internal</str>
    <float name="accuracy">0.01</float>
    <int name="maxEdits">2</int>
    <int name="minPrefix">1</int>
function query result without queryNorm
Hi, I want to add some value to the score, so I tried to use "bf", but it returns value * queryNorm. For example, when I use q=iphone&bf=div(ms(NOW,start_time),360) Solr returns:

799.5687 = (MATCH) FunctionQuery(div(ms(const(1436268218591),date(start_time)),const(360))), product of:
  6988.394 = div(ms(const(1436268218591),date(start_time)=2014-09-19T07:00:00Z),const(360))
  1.0 = boost
  0.114413805 = queryNorm

How can I get only the FunctionQuery() result? Thanks, Calvin.
Re: Solr Boost Search word before Specific Content
Hi Jack, You may boost the first n words of the product title via something like: https://issues.apache.org/jira/browse/SOLR-3925 However, I would create an additional boostable field. You can do this at index time in your indexing code. It will be easy to divide the product title into two, and apply a boost on the newly added field. Ahmet

On Tuesday, July 7, 2015 11:44 AM, JACK mfal...@gmail.com wrote: I am using Solr 5.0.0, and I have one question about relevance boosting. If I search for laptop table or similar words, is there any way to boost results where the search words appear before words like by, with, without, etc.? I used this query:

?defType=dismax&q=foo bar&bq=(*:* -by)^999

But this will negatively boost documents containing the word by or with, etc. How can I avoid this problem? For example, if I search for laptop table, then with the above query the result DGB Cooling Laptop Table by GDB won't be boosted. I just need to give a boost to results where the search words come before certain words like by, with, etc. Is that possible?

Example 2: if I search for laptop bag, results having the search words before with, by, without, etc. should be boosted and returned first. Let's say the documents are: dell laptop with laptop bag, laptop bag with cover, laptop bag and table. Then results like laptop bag with cover and laptop bag and table should come first, because in them the search phrase laptop bag appears before with, without, and.

-- View this message in context: http://lucene.472066.n3.nabble.com/Solr-Boost-Search-word-before-Specific-Content-tp4216072.html Sent from the Solr - User mailing list archive at Nabble.com.
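Ahmet's advice to split the title into a separate boostable field could be sketched like this at index time. Everything here is an assumption for illustration (class name, connector list); the idea is simply to copy the part of the title that precedes the first connector word into its own field and boost that field via qf.

```java
import java.util.Set;

// Hypothetical index-time helper: extracts the head of a product title up
// to (not including) the first connector word. Index the result into a
// separate field, then boost that field with qf, e.g. titleHead^3.0.
public class TitleSplitter {
    // Assumed connector list; extend to match your own data.
    private static final Set<String> CONNECTORS =
            Set.of("by", "with", "without", "and");

    public static String headBeforeConnector(String title) {
        StringBuilder head = new StringBuilder();
        for (String word : title.split("\\s+")) {
            if (CONNECTORS.contains(word.toLowerCase())) {
                break; // everything after the connector stays out of the boost field
            }
            if (head.length() > 0) head.append(' ');
            head.append(word);
        }
        return head.toString();
    }
}
```

With this, "laptop bag with cover" indexes "laptop bag" into the boost field, so a query for laptop bag ranks it above "dell laptop with laptop bag".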
Re: Indexed field to schema field
Just an idea: Solr Admin/Schema Browser reports some info like this; hence, you can trace the way in which it does it. On Tue, Jul 7, 2015 at 10:34 AM, Gajendra Dadheech gajju3...@gmail.com wrote: Hi, Can I somehow translate fields which I read from newSearcher.getAtomicReader().fields() to schema fields? Does Solr expose any method to do this translation? The alternative approach I am thinking of will involve lots of regex computation, as the fields would be _string, _float etc. and I would have to remove those suffixes; this becomes a little tricky when fields are dynamic. Thanks and regards, Gajendra Dadheech -- Sincerely yours Mikhail Khludnev Principal Engineer, Grid Dynamics http://www.griddynamics.com mkhlud...@griddynamics.com
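The regex fallback Gajendra describes might look like the sketch below. The suffix list is an assumption for illustration; it would have to mirror the actual dynamicField patterns in the schema, and as noted in the thread it cannot distinguish a dynamic field from a concrete field that merely ends in one of these suffixes.

```java
import java.util.regex.Pattern;

// Illustrative only: maps an indexed dynamic-field name such as
// "price_float" back to its base name "price" by stripping a known
// type suffix. Keep the suffix list in sync with schema.xml.
public class DynamicFieldNames {
    private static final Pattern TYPE_SUFFIX =
            Pattern.compile("_(string|float|int|long|double|boolean|date)$");

    public static String baseName(String indexedFieldName) {
        return TYPE_SUFFIX.matcher(indexedFieldName).replaceFirst("");
    }
}
```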
Re: Too many Soft commits and opening searchers realtime
No, the client lets Solr handle it.

On Jul 7, 2015, at 2:38 PM, Mike Drob mad...@cloudera.com wrote: Are the clients that are posting updates requesting commits?

On Tue, Jul 7, 2015 at 4:29 PM, Summer Shire shiresum...@gmail.com wrote: Hi All, Can someone help me understand the following behavior? I have the following maxTimes on hard and soft commits, yet I see a lot of Opening Searchers in the log:

org.apache.solr.search.SolrIndexSearcher - Opening Searcher@1656a258[main] realtime

Also, I see a soft commit happening almost every 30 secs:

org.apache.solr.update.UpdateHandler - start commit{,optimize=false,openSearcher=true,waitSearcher=true,expungeDeletes=false,softCommit=true,prepareCommit=false}

<autoCommit>
  <maxTime>48</maxTime>
  <openSearcher>false</openSearcher>
</autoCommit>
<autoSoftCommit>
  <maxTime>18</maxTime>
</autoSoftCommit>

I tried disabling softCommit by setting maxTime to -1. On startup solrCore recognized it and logged "Soft AutoCommit: disabled", but I could still see softCommit=true:

org.apache.solr.update.UpdateHandler - start commit{,optimize=false,openSearcher=true,waitSearcher=true,expungeDeletes=false,softCommit=true,prepareCommit=false}

<autoSoftCommit>
  <maxTime>-1</maxTime>
</autoSoftCommit>

Thanks, Summer
Tlog replay
Hi, When I restart my Solr core the tlog replay starts, and just before it finishes I see the following commit:

start commit{flags=2,optimize=false,openSearcher=true,waitSearcher=true,expungeDeletes=false,softCommit=false,prepareCommit=false}

What does the "flags=2" param do? When I try to send that param to the updateHandler manually, Solr does not like it:

curl http://localhost:6600/solr/main/update -H "Content-Type: text/xml" --data-binary '<commit openSearcher="true" flags="2" waitSearcher="false"/>'

<?xml version="1.0" encoding="UTF-8"?>
<response>
  <lst name="responseHeader"><int name="status">400</int><int name="QTime">0</int></lst>
  <lst name="error"><str name="msg">Unknown commit parameter 'flags'</str><int name="code">400</int></lst>
</response>

thanks, Summer
Re: IndexWriter error
On 7/7/2015 7:53 AM, jon kerling wrote: I'm using Lucene 4.3.1, and from time to time I'm losing data while indexing. I get an ERROR: Lucene add index: this IndexWriter is closed. I'm using IndexWriter.commit most of the time, and when I finish I'm using IndexWriter.close. What can cause this error? How can I shorten the time of the IndexWriter closing state? thanks, Jon -- You're on the solr-user mailing list, where most of us don't know much about writing Lucene code. I think your question would be more appropriate for the java-user mailing list: https://lucene.apache.org/core/discussion.html Thanks, Shawn
Re: Search inconsistency on Solr
We test the order of results, not the exact score. Score values depend on the number of documents in the index. Also, the order is the only thing we care about. wunder Walter Underwood wun...@wunderwood.org http://observer.wunderwood.org/ (my blog) On Jul 7, 2015, at 12:40 AM, joseph paulo mantuano paulo.mantu...@gmail.com wrote: Hi All, I started using solr a while back. Now I'm writing jUnit test for the solr application but I encounter inconsistency in my results causing my jUnit tests to fail. There are times that the order of the documents returned will be different because of the score. Do you have any pointers to solve this? I've read this two issues; which I see to be similar so mine. https://issues.apache.org/jira/browse/SOLR-5821 https://issues.apache.org/jira/browse/SOLR-4260 Regards, jpmantuano
Re: Remove operation of partial update doesn't work
The add operation adds a value to a multivalued field. The remove operation removes a value from a multivalued field. If you believe that something is not working, please state clearly why you believe that something is not working. Start by describing the symptom. -- Jack Krupansky

On Mon, Jul 6, 2015 at 9:22 PM, Mohsen Saboorian mohs...@gmail.com wrote: I can partially 'add' fields to my Solr index, but the 'remove' operation seems not to be working. I'm on Solr 4.10. Here is my SolrJ snippet:

SolrInputDocument doc = new SolrInputDocument();
Map<String, Object> partialUpdate = new HashMap<>();
partialUpdate.put(operation, value); // value can be an object (string, number, etc.) or a list; operation can be add, set or remove
doc.addField("id", id); // document id
doc.addField(fieldName, partialUpdate);
getSolrServer().add(doc, commitWithin);

Is there anything wrong with my code?
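For reference, the nested map SolrJ serializes for an atomic update can be sketched with plain collections (the class and helper names below are hypothetical; in real code the map is set as a field value on a SolrInputDocument, exactly as in Mohsen's snippet):

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Builds the {operation: value} instruction map for an atomic update,
// e.g. {"remove": ["tagA", "tagB"]} or {"set": 42}. Solr interprets the
// map key as the operation and the map value as its operand.
public class AtomicUpdateOp {
    public static Map<String, Object> op(String operation, Object value) {
        Map<String, Object> instruction = new HashMap<>();
        instruction.put(operation, value);
        return instruction;
    }

    // remove drops matching values from a multivalued field.
    public static Map<String, Object> removeValues(List<String> values) {
        return op("remove", values);
    }
}
```

Note that remove only has an effect on a multivalued field whose current stored values contain the exact value being removed, which is the first symptom to check when it "seems not to be working".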
IndexWriter error
Hi, I'm using Lucene 4.3.1, and from time to time I'm losing data while indexing. I get an ERROR: Lucene add index: this IndexWriter is closed. I'm using IndexWriter.commit most of the time, and when I finish I'm using IndexWriter.close. What can cause this error? How can I shorten the time of the IndexWriter closing state? thanks, Jon
Re: Solr Boost Search word before Specific Content
Hi Ahmet, Can you elaborate on that some more? Is it possible to solve my problem in Solr 5.0.0? If yes, can you explain how? -- View this message in context: http://lucene.472066.n3.nabble.com/Solr-Boost-Search-word-before-Specific-Content-tp4216072p4216257.html Sent from the Solr - User mailing list archive at Nabble.com.
Re: function query result without queryNorm
But why do you want that?

On Wed, Jul 8, 2015, at 05:31 AM, Lee Chunki wrote: Hi Markus, Thank you for your reply. I have more questions. What I want to do is sort documents by "tfidf score + function query score". There are problems doing this: if I use a function query ( https://wiki.apache.org/solr/FunctionQuery ), there is no function for the tfidf score; if I use bf, Solr multiplies in queryNorm; if I use bq, Solr multiplies in tfidf. What can I do? For example, when the query is ?q=iphone&bf=recip(ms(NOW,mydatefield),3.16e-11,1,1) I want to get score = tfidf + bf score = 2.7 + 1.1 (when tfidf = 2.7 and bf score = 1.1), not 2.7 + 1.1 * 0.12 (when queryNorm = 0.12). Best, Calvin.

On Jul 7, 2015, at 8:51 PM, Markus Jelsma markus.jel...@openindex.io wrote: Hello - you can either use a similarity that does not use query normalization, or you can just ignore it, it is relative anyway. Also, consider using the boost parameter instead of bf; it is multiplicative where bf is just additive, which offers less control. You may also want to reduce time resolution by using NOW/HOUR or NOW/DAY. It saves you a lot of bad cache entries. Markus

-----Original message----- From: Lee Chunki lck7...@coupang.com Sent: Tuesday 7th July 2015 13:29 To: solr-user@lucene.apache.org Subject: function query result without queryNorm

Hi, I want to add some value to the score, so I tried to use "bf", but it returns value * queryNorm. For example, when I use q=iphone&bf=div(ms(NOW,start_time),360) Solr returns:

799.5687 = (MATCH) FunctionQuery(div(ms(const(1436268218591),date(start_time)),const(360))), product of:
  6988.394 = div(ms(const(1436268218591),date(start_time)=2014-09-19T07:00:00Z),const(360))
  1.0 = boost
  0.114413805 = queryNorm

How can I get only the FunctionQuery() result? Thanks, Calvin.
Re: function query result without queryNorm
Hi Markus, Thank you for your reply. I have more questions. What I want to do is sort documents by "tfidf score + function query score". There are problems doing this: if I use a function query ( https://wiki.apache.org/solr/FunctionQuery ), there is no function for the tfidf score; if I use bf, Solr multiplies in queryNorm; if I use bq, Solr multiplies in tfidf. What can I do? For example, when the query is ?q=iphone&bf=recip(ms(NOW,mydatefield),3.16e-11,1,1) I want to get score = tfidf + bf score = 2.7 + 1.1 (when tfidf = 2.7 and bf score = 1.1), not 2.7 + 1.1 * 0.12 (when queryNorm = 0.12). Best, Calvin.

On Jul 7, 2015, at 8:51 PM, Markus Jelsma markus.jel...@openindex.io wrote: Hello - you can either use a similarity that does not use query normalization, or you can just ignore it, it is relative anyway. Also, consider using the boost parameter instead of bf; it is multiplicative where bf is just additive, which offers less control. You may also want to reduce time resolution by using NOW/HOUR or NOW/DAY. It saves you a lot of bad cache entries. Markus

-----Original message----- From: Lee Chunki lck7...@coupang.com Sent: Tuesday 7th July 2015 13:29 To: solr-user@lucene.apache.org Subject: function query result without queryNorm

Hi, I want to add some value to the score, so I tried to use "bf", but it returns value * queryNorm. For example, when I use q=iphone&bf=div(ms(NOW,start_time),360) Solr returns:

799.5687 = (MATCH) FunctionQuery(div(ms(const(1436268218591),date(start_time)),const(360))), product of:
  6988.394 = div(ms(const(1436268218591),date(start_time)=2014-09-19T07:00:00Z),const(360))
  1.0 = boost
  0.114413805 = queryNorm

How can I get only the FunctionQuery() result? Thanks, Calvin.
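The score algebra behind the bf-vs-boost distinction in this thread can be sketched as plain arithmetic (illustrative numbers from the example above; this is not Solr code, just the combination rules):

```java
// bf is additive: the function value is scaled by queryNorm before being
// added, so the effective tuning weight shifts whenever queryNorm changes.
// The multiplicative boost parameter scales the whole query score instead,
// so queryNorm cancels out of the relative ordering.
public class BoostMath {
    public static double additiveBf(double tfidfScore, double bfValue, double queryNorm) {
        return tfidfScore + bfValue * queryNorm;
    }

    public static double multiplicativeBoost(double tfidfScore, double boostValue) {
        return tfidfScore * boostValue;
    }
}
```

With tfidf = 2.7, bf value = 1.1, and queryNorm = 0.12, the additive form yields 2.7 + 0.132 rather than the hoped-for 2.7 + 1.1, which is exactly the discrepancy Calvin reports.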
Re: Solr edismax always using the default fields?
What happens if you don't specify the df?

On Tue, Jul 7, 2015, at 08:36 PM, Paden wrote: Well, I've just been using an author's name: Last Name, First Name Middle Initial, like *Snowman, Frosty T.* As for the debugging, I'm not really seeing anything that would help me understand why the query fields aren't kicking in and instead only the default fields are. I do see that it is parsing the name as a text field, though:

org.apache.lucene.search.BooleanClause: +text:lastname,
org.apache.lucene.search.BooleanClause: +text:firstname,
org.apache.lucene.search.BooleanClause: +text:middleinitial

But I already knew it was parsing it as text. I just don't understand why it's going to the default field when other fields have been specified. -- View this message in context: http://lucene.472066.n3.nabble.com/Solr-edismax-always-using-the-default-fields-tp4216204p4216211.html Sent from the Solr - User mailing list archive at Nabble.com.
Re: Solr edismax always using the default fields?
Forgot the relevant documentation...

https://cwiki.apache.org/confluence/display/solr/The+Extended+DisMax+Query+Parser
https://cwiki.apache.org/confluence/display/solr/The+DisMax+Query+Parser

: Date: Tue, 7 Jul 2015 13:57:25 -0700 (MST)
: From: Chris Hostetter hossman_luc...@fucit.org
: To: solr-user@lucene.apache.org
: Subject: Re: Solr edismax always using the default fields?
:
: According to the echoParams output, you aren't specifying a q param.
:
: You seem to be trying to specify your query input using the q.alt param
: -- but the q.alt param doesn't use the edismax parser specified by the
: defType param -- q.alt is a feature *of* the edismax parser that is used
: to completely override all of the options used to configure edismax when
: there is no q param, and it (by default) uses the default solr parser.
:
: (the intended usecase is along the lines of: if the user doesn't specify a
: query string, default to a search for all the books for sale via
: 'q.alt=inStock:true')
:
: Stop configuring a q.alt that matches your author's name, and instead
: specify a default q param (or ensure your client always sends a q
: param) and see if that solves things for you.
:
: : Date: Tue, 7 Jul 2015 10:55:24 -0700 (MST)
: : From: Paden rumsey...@gmail.com
: : Reply-To: solr-user@lucene.apache.org
: : To: solr-user@lucene.apache.org
: : Subject: Solr edismax always using the default fields?
: :
: : Hello,
: :
: : I'm trying to tune a search handler to get the results that I want. In the
: : solrconfig.xml I specify several different query fields for the edismax
: : query parser, but it always seems to use the default fields instead.
: :
: : For example and clarification, when I remove Author from the df list of
: : fields, the search results do not come up with the author I was looking for.
: : But when I add Author back in as a df, it returns 19/20 of his documents as
: : the top search results. I guess I'm really confused because when you specify
: : the query fields, Solr shouldn't use those default fields. I guess it's not a
: : real super problem because I can define all the fields I need as defaults.
: : I'm just trying to figure out where I'm going wrong here. Is there a reason
: : that this would happen? I have the search handler xml below:
: :
: : <requestHandler name="/select" class="solr.SearchHandler">
: :   <lst name="defaults">
: :     <str name="echoParams">all</str>
: :     <str name="defType">edismax</str>
: :     <str name="df">Author</str>  --ADDING THIS GIVES DESIRED RESULT, REMOVING GIVES BAD RESULT--
: :     <str name="df">text</str>    --NOTHING ELSE CHANGES--
: :     <str name="qf">text^0.5 Title^1.2 Author^2.0</str>
: :     <str name="rows">20</str>
: :     <str name="q.alt">*some author name*</str>
: :   </lst>
: : </requestHandler>
: :
: : And here is the echoed params of the search:
: :
: : "responseHeader": {
: :   "status": 0,
: :   "QTime": 48,
: :   "params": {
: :     "lowercaseOperators": "true",
: :     "df": [
: :       "Author",
: :       "text"
: :     ],
: :     "echoParams": "all",
: :     "indent": "true",
: :     "qf": "text^0.5 Title^1.2 Author^2.0",
: :     "_": "1436291181730",
: :     "q.alt": "*some author name*",
: :     "stopwords": "true",
: :     "wt": "json",
: :     "defType": "edismax",
: :     "rows": "20"
: :   }
: : }
: :
: : -- View this message in context: http://lucene.472066.n3.nabble.com/Solr-edismax-always-using-the-default-fields-tp4216204.html
: : Sent from the Solr - User mailing list archive at Nabble.com.
:
: -Hoss
: http://www.lucidworks.com/

-Hoss http://www.lucidworks.com/
Re: function query result without queryNorm
Well… I want to boost new documents, but queryNorm makes it hard for me to tune the weight for the bf score.

On Jul 8, 2015, at 1:48 PM, Upayavira u...@odoko.co.uk wrote: But why do you want that?

On Wed, Jul 8, 2015, at 05:31 AM, Lee Chunki wrote: Hi Markus, Thank you for your reply. I have more questions. What I want to do is sort documents by "tfidf score + function query score". There are problems doing this: if I use a function query ( https://wiki.apache.org/solr/FunctionQuery ), there is no function for the tfidf score; if I use bf, Solr multiplies in queryNorm; if I use bq, Solr multiplies in tfidf. What can I do? For example, when the query is ?q=iphone&bf=recip(ms(NOW,mydatefield),3.16e-11,1,1) I want to get score = tfidf + bf score = 2.7 + 1.1 (when tfidf = 2.7 and bf score = 1.1), not 2.7 + 1.1 * 0.12 (when queryNorm = 0.12). Best, Calvin.

On Jul 7, 2015, at 8:51 PM, Markus Jelsma markus.jel...@openindex.io wrote: Hello - you can either use a similarity that does not use query normalization, or you can just ignore it, it is relative anyway. Also, consider using the boost parameter instead of bf; it is multiplicative where bf is just additive, which offers less control. You may also want to reduce time resolution by using NOW/HOUR or NOW/DAY. It saves you a lot of bad cache entries. Markus

-----Original message----- From: Lee Chunki lck7...@coupang.com Sent: Tuesday 7th July 2015 13:29 To: solr-user@lucene.apache.org Subject: function query result without queryNorm

Hi, I want to add some value to the score, so I tried to use "bf", but it returns value * queryNorm. For example, when I use q=iphone&bf=div(ms(NOW,start_time),360) Solr returns:

799.5687 = (MATCH) FunctionQuery(div(ms(const(1436268218591),date(start_time)),const(360))), product of:
  6988.394 = div(ms(const(1436268218591),date(start_time)=2014-09-19T07:00:00Z),const(360))
  1.0 = boost
  0.114413805 = queryNorm

How can I get only the FunctionQuery() result? Thanks, Calvin.