Re: Solr Multicore - cant seem to get it working?
Hi,

Thanks for the reply. I sure did. I did a bit more research, and found out that java was still running, so I ran:

    sudo /etc/init.d/solr stop

...then:

    killall java

(just to make sure it was all closed) ...and finally:

    sudo /etc/init.d/solr start

Yet I'm still getting the error about the URL not being accessible :( Could it be a syntax error in schema.xml maybe that's stopping it from loading that particular multicore?

Cheers

Andy

On Sun, Feb 27, 2011 at 10:15 PM, Wilkes, Chris [via Lucene] ml-node+2591060-694286558-370...@n3.nabble.com wrote:

Did you copy the files into the solr/graphics/ directory? That subdirectory should be the same name as the instanceDir. That is, you should have this directory structure:

    solr/graphics/conf/
    solr/graphics/data/index/

On Feb 27, 2011, at 5:20 AM, ultranerds wrote:

Hi,

I'm trying to get a Solr install to work with multicores, as I wanna use it on several sites (each totally different, and I don't obviously wanna have loads of separate Solr installs). Here is what I've done so far (Solr was already uploaded and working normally, without multicore):

1) Added this to solr.xml: <core name="graphics" instanceDir="graphics" />
2) Copied the files from /multicore/core1
3) Edited schema.xml to the fields I want
4) Restarted solr

Then I did the perl stuff, but it doesn't seem to wanna load?
    use constant SOLR => 'http://localhost:210/solr/graphics';

    use WebService::Solr;

    my $title   = "foo bar test";
    my $desc    = "some short description to test it";
    my $link_id = 1234;

    my $solr = WebService::Solr->new( SOLR );

404 Not Found:

    <html>
    <head>
    <meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1"/>
    <title>Error 404</title>
    </head>
    <body><h2>HTTP ERROR: 404</h2><pre>NOT_FOUND</pre>
    <p>RequestURI=/solr/graphics/update/</p>
    <p><small><a href="http://jetty.mortbay.org/">Powered by Jetty://</a></small></p>
    </body>
    </html>

 at /home/user/site.com/cgi-bin/art/admin/WebService/Solr.pm line 180
    WebService::Solr::_send_update('WebService::Solr=HASH(0x13d4b68)', 'XML::Generator::overload=ARRAY(0x13da3f8)') called at /home/user/site.com/cgi-bin/art/admin/WebService/Solr.pm line 73
    WebService::Solr::add('WebService::Solr=HASH(0x13d4b68)', 'WebService::Solr::Document=HASH(0x6052b8)') called at solr_add_test.cgi line 42

    rgillen@factory /home/rgillen/solr $

Am I doing something wrong? Still reading through the setup process in the Solr 1.4 book I bought a while back, but just wanted to make sure it wasn't something on the server end that wasn't quite working right =) If I don't use the multicore it seems to work ok (but it obviously doesn't use the multicore, which is what I wanted to do)

TIA

Andy
--
Andy Newby
a...@ultranerds.com
Re: Problem with Solr and Nutch integration
Hi Anurag,

Sorry for leaving that key piece of info out. I'm running Linux (CentOS 5.5).

Regards

Paul

On 28 February 2011 07:26, Anurag anurag.it.jo...@gmail.com wrote:

Which OS are you using?
Re: Question about Nested Span Near Query
Hi Bill

Any update?

On Thu, Feb 24, 2011 at 8:58 PM, Ahsan Iqbal ahsan.iqbal...@gmail.com wrote:

Hi

Schema and document are attached.

On Thu, Feb 24, 2011 at 8:24 PM, Bill Bell billnb...@gmail.com wrote:

Send schema and document in XML format and I'll look at it.

Bill Bell
Sent from mobile

On Feb 24, 2011, at 7:26 AM, Ahsan Iqbal ahsan.iqbal...@gmail.com wrote:

Hi

To narrow down the issue, I indexed a single document containing one of the sample queries (given below) that was giving the issue:

    evaluation of loan and lease portfolios for purposes of assessing the adequacy of

Now when I perform a search query (TextContents:"evaluation of loan and lease portfolios for purposes of assessing the adequacy of") the parsed query is

    spanNear([spanNear([spanNear([spanNear([spanNear([spanNear([spanNear([spanNear([spanNear([spanNear([spanNear([spanNear([Contents:evaluation, Contents:of], 0, true), Contents:loan], 0, true), Contents:and], 0, true), Contents:lease], 0, true), Contents:portfolios], 0, true), Contents:for], 0, true), Contents:purposes], 0, true), Contents:of], 0, true), Contents:assessing], 0, true), Contents:the], 0, true), Contents:adequacy], 0, true), Contents:of], 0, true)

and the search is not successful. If I remove 'evaluation' from the start OR 'assessing the adequacy of' from the end, it works fine. The issue seems to come up on relatively long phrases, but I have not been able to find a pattern, and it's really mind-boggling, because I thought this issue might be due to a large position list, but this is a single document with one phrase. So it's definitely not related to the size of the index. Any ideas what's going on?

On Thu, Feb 24, 2011 at 10:25 AM, Ahsan Iqbal ahsan.iqbal...@gmail.com wrote:

Hi

It didn't search (meaning no results are found even though results exist). One observation is that it works well even on long phrases, but when a long phrase contains stop words and the same stop word exists two or more times in the phrase, then Solr can't search with a query parsed in this way.
On Wed, Feb 23, 2011 at 11:49 PM, Otis Gospodnetic otis_gospodne...@yahoo.com wrote:

Hi,

What do you mean by "this doesn't work fine"? Does it not work correctly, or is it slow, or ...? I was going to suggest you look at the Surround QP, but it looks like you already did that. Wouldn't it be better to get the Surround QP to work?

Otis
Sematext :: http://sematext.com/ :: Solr - Lucene - Nutch
Lucene ecosystem search :: http://search-lucene.com/

- Original Message -
From: Ahsan Iqbal ahsan.iqbal...@gmail.com
To: solr-user@lucene.apache.org
Sent: Tue, February 22, 2011 10:59:26 AM
Subject: Question about Nested Span Near Query

Hi All

I had a requirement to implement queries that involve phrase proximity, e.g. a user should be able to search for "ab cd" w/5 "de fg" — both phrases, as a whole, should be within 5 words of each other. For this I implemented a query parser that makes use of nested span queries, so the above query would be parsed as

    spanNear([spanNear([Contents:ab, Contents:cd], 0, true), spanNear([Contents:de, Contents:fg], 0, true)], 5, false)

Queries like this seem to work really well when the phrases are small, but when the phrases are large this doesn't work fine. Now my question: is there any limitation of SpanNearQuery such that we cannot handle large phrases in this way? Please help.

Regards
Ahsan
Re: Solr Multicore - cant seem to get it working?
Andy, try to answer the question the other guys asked you .. and not just write something.

On Sun, Feb 27, 2011 at 4:16 PM, Ahmet Arslan iori...@yahoo.com wrote:

Can you see 'Admin graphics' listed in the http://localhost:210/solr/ page?

Regards
Stefan
Re: Solr Multicore - cant seem to get it working?
Hi,

I already did answer that one =) I'm waiting on my host to add my IP to the firewall for me, so I can test that part of it (they setup the bog standard Solr for me, but I'm now trying to get it working with the multicore, as I think we are gonna need several Solr instances for different parts of the site)

TIA

Andy

On Mon, Feb 28, 2011 at 10:12 AM, Stefan Matheis [via Lucene] ml-node+2593785-1884500610-370...@n3.nabble.com wrote:

Andy, try to answer the question the other guys asked you .. and not just write something.
Create a tomcat service.
Does anybody have a script to create a Tomcat service? I'm trying to set my system up to run multiple instances of Tomcat at the same time (on different ports, obviously), and can't get the service to create properly. I tried to follow the steps mentioned in this link: http://doc.ittrium.com/ittrium/visit/A1x66x1y1x10ddx1x68y1x1209x1x68y1x1214x1x7d but wasn't successful in getting it done. The service.bat file refers to an exe that is not available in the zip. Any help or suggestions?

Thanks,
Rajani.
Re: Solr Multicore - cant seem to get it working?
I'm waiting on my host to add my IP to the firewall for me, so I can test that part of it (they setup the bog standard Solr for me, but I'm now trying to get it working with the multicore, as I think we are gonna need several Solr instances for different parts of the site)

If you have a multicore setup, it is normal that the URL below is not accessible:

    http://localhost:210/solr/graphics

You should use the URL ending with 'graphics/admin/' instead. Even if you only remove the last slash, you will get a 404 error.

Here is the list of valid URLs:

    http://localhost:210/solr/
    http://localhost:210/solr
    http://localhost:210/solr/graphics/admin/

Invalid (HTTP ERROR: 404) URLs:

    http://localhost:210/solr/graphics
    http://localhost:210/solr/graphics/
    http://localhost:210/solr/graphics/admin
Re: Getting sum of all terms count in dataset instead of document count using TermsComponent....(and TermsComponent vs Facets)
from head, but also tried on 1.4.1. The instructions I posted work with 1.4.1. You need to import TermDocs in TermsComponent.java:

    import org.apache.lucene.index.TermDocs;

Ravish, did you get it working? Can you give us feedback?
Re: Problem with Solr and Nutch integration
Solr uses the Jetty server by default, did you know that? You can run the Solr server without using Tomcat (using Jetty). Please describe the steps that led to the error. Which command did you execute?

On Mon, Feb 28, 2011 at 1:39 AM, Paul Rogers [via Lucene] ml-node+2590334-620834624-146...@n3.nabble.com wrote:

Hi Guys

I'm trying to integrate Solr and Nutch as per http://www.lucidimagination.com/blog/2009/03/09/nutch-solr/, using branch_3x from svn on Tomcat 6. After adding the nutch requestHandler to solrconfig.xml, while the solr-example will start, on accessing the admin page I get the following error message:

    HTTP Status 404 - missing core name in path
    type: Status report
    message: missing core name in path
    description: The requested resource (missing core name in path) is not available.

After googling I found the following:

"I think you are seeing the effects of SOLR-1743 masking another error ... have you checked your log for other errors/exceptions being logged when you startup solr with that solrconfig.xml?" -- Chris Hostetter

My initial question is this: am I right in thinking that Solr should be using the Tomcat 6 logging, and that the logs should be found at $CATALINA_HOME/logs/catalina.out?

Thanks

Paul

--
Kumar Anurag
MorelikeThis not working with Shards(distributed Search)
Hi,

I am experimenting with morelikethis to see if it also works with distributed search, but I have not found the solution yet. Can anyone help me with this? Please provide a detailed description, as I didn't find it by updating MoreLikeThisComponent.java, MoreLikeThisHandler.java and ShardRequest.java as specified in the AlternateDistributedMLT.patch.

Thanks in advance,
Isha Garg
Re: Create a tomcat service.
You may have downloaded the wrong Tomcat package? http://lmgtfy.com/?q=tomcat+windows+service

On 28. feb. 2011, at 12.25, rajini maski wrote:

Does anybody have a script to create a Tomcat service?
Re: Create a tomcat service.
Hi Rajini,

We use the following script, run from within the {TOMCAT_HOME}\bin directory, to create service instances (assuming you are targeting Windows Server environments..):

    cd C:\Program Files\Apache Software Foundation\{TOMCAT_HOME}\bin
    set CATALINA_BASE=C:\Program Files\Apache Software Foundation\{TOMCAT_HOME}

    rem Delete service if it already exists
    rem tomcat6 //DS//your_instance_name

    tomcat6 //IS//your_instance_name --DisplayName={TOMCAT_HOME} ^
      --Description="This is your instance description" ^
      --Install="C:\Program Files\Apache Software Foundation\{TOMCAT_HOME}\bin\tomcat6.exe" ^
      --Classpath="C:\Program Files\Apache Software Foundation\{TOMCAT_HOME}\bin\bootstrap.jar" ^
      --Jvm=auto --Startup=auto --StartMode=jvm ^
      --StartPath="C:\Program Files\Apache Software Foundation\{TOMCAT_HOME}" ^
      --StopMode=jvm ^
      --StartClass=org.apache.catalina.startup.Bootstrap --StartParams=start ^
      --StopClass=org.apache.catalina.startup.Bootstrap --StopParams=stop ^
      --StopPath="C:\Program Files\Apache Software Foundation\{TOMCAT_HOME}" ^
      --LogPath=%CATALINA_BASE%\logs --StdOutput=auto --StdError=auto ^
      --JvmOptions=-Dcatalina.home='C:\Program Files\Apache Software Foundation\{TOMCAT_HOME}';-Dcatalina.base=%CATALINA_BASE%;-Djava.io.tmpdir=%CATALINA_BASE%\temp;-Djava.endorsed.dirs='C:\Program Files\Apache Software Foundation\{TOMCAT_HOME}\endorsed'

    rem Copy service applet tomcat6w.exe to instance name
    copy tomcat6w.exe your_instance_name.exe /Y

Regards,
- Savvas

On 28 February 2011 12:15, Jan Høydahl jan@cominvent.com wrote:

You may have downloaded the wrong Tomcat package? http://lmgtfy.com/?q=tomcat+windows+service
Re: Create a tomcat service.
..--DisplayName doesn't *have* to be {TOMCAT_HOME} of course.. just a copy/paste artifact.. :D
Distances in spatial search (Solr 4.0)
Hi guys,

We are implementing a separate index on our website that will be dedicated to spatial search. I've downloaded a build of Solr 4.0 to try the spatial features and got geodist working really fast. We now have 2 other features that will be needed on this project:

1. Returning the distance from the reference point to the search hit (in kilometers)
2. Sorting by the distance.

On item 2, the wiki doc points out that a distance function can be used, but I was not able to find good info on how to accomplish it. Also, returning the distance (item 1) is noted as currently being in development and there is some workaround to get it. Has anyone experience with the spatial feature who could help with some pointers on how to achieve this?

Thanks,
Alexandre
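For reference, on recent 4.0 trunk builds the geodist function can usually serve both purposes: sorting on it, and (on builds where function queries are allowed in the fl list) returning it as a pseudo-field. A hedged sketch of the query parameters — the field name 'store' and the point are made up, and whether fl accepts functions depends on how recent your build is:

```
# sort by distance from a reference point (assumes a location-type field 'store')
q=*:*&sfield=store&pt=-23.55,-46.63&sort=geodist() asc

# additionally return the distance as a pseudo-field (newer trunk builds only)
q=*:*&sfield=store&pt=-23.55,-46.63&fl=*,dist:geodist()&sort=geodist() asc
```

geodist() reports kilometers by default, which would cover item 1 as well.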
Re: Understanding multi-field queries with q and fq
Hi, Otis.

I have been playing with dismax (defType=dismax, not qt=dismax -- not sure about the difference). It looks like eDismax won't be available until Solr 3.1, correct?

We actually have to pass hundreds of Oracle OLS labels in each request for each user (e.g., a Loan Officer can see her customers' data, but a VP can see all customer data). I've been passing them as an fq parameter, but have recently learned that's bad, since fq parameters participate in caching. We obviously *only* want the label comparisons performed against the label field. (Those values won't be present in the other searchable fields that dismax would run all query parameters against.) Is there some dismax query magic that would allow us to match the labels in an uncached manner against only the labels field, but match the user-entered query against the qf fields? If not, I think we're stuck with moving the labels piece to q and the user query to fq and sticking with the standard handler.

Thanks!

Otis Gospodnetic-2 wrote:

Hi mrw,

It sounds like (e)dismax is what you should look into. You didn't mention it/them, so I'm assuming you're not aware of them. See: http://search-lucene.com/?q=dismax+OR+edismax&fc_project=Solr

Otis
Sematext :: http://sematext.com/ :: Solr - Lucene - Nutch
Lucene ecosystem search :: http://search-lucene.com/

- Original Message -
From: mrw mikerobertsw...@gmail.com
To: solr-user@lucene.apache.org
Sent: Fri, February 18, 2011 1:56:24 PM
Subject: Understanding multi-field queries with q and fq

After searching this list, Google, and looking through the Pugh book, I am a little confused about the right way to structure a query. The Packt book uses the example of the MusicBrainz DB full of song metadata. What if they also had the song lyrics in English and German as files on disk, and wanted to index them along with the metadata, so that each document would basically have song title, artist, publisher, date, ..., All_Metadata (copy field of all metadata fields), Text_English, and Text_German fields?

There can only be one default field, correct? So if we want to search for all songs containing (zeppelin AND (dog OR merle)), do we repeat the entire query text for all three major fields in the 'q' clause (assuming we don't want to use the cache):

    q=(+All_Metadata:(zeppelin AND (dog OR merle)) +Text_English:(zeppelin AND (dog OR merle)) +Text_German:(zeppelin AND (dog OR merle)))

or repeat the entire query text for all three major fields in the 'fq' clause (assuming we want to use the cache):

    q=*:*&fq=(+All_Metadata:(zeppelin AND (dog OR merle)) +Text_English:(zeppelin AND (dog OR merle)) +Text_German:(zeppelin AND (dog OR merle)))

?

Thanks!
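As a hedged sketch of the dismax alternative discussed in this thread (field names taken from the MusicBrainz example above): dismax lets qf spread the user's terms across the fields so the clause doesn't have to be repeated per field. Note that dismax does not honor full boolean syntax, so this only covers the plain-terms case, not the (dog OR merle) grouping:

```
q=zeppelin dog merle
&defType=dismax
&qf=All_Metadata Text_English Text_German
```

Each term is matched against every field in qf and the best-scoring field wins per term, which is usually what repeating the clause by hand was trying to achieve.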
Basic Dismax syntax question
Say I have an index with first_name and last_name fields, and also a copy field for the full name called full_name. Say I add two employees: Napoleon Bonaparte and Napoleon Dynamite.

If I search for just the first or last name, or both names, with mm=1, I get the expected results:

    q=Napoleon&defType=dismax&tie=0.1&qf=first_name%20last_name&mm=1              // 2 results
    q=Bonaparte&defType=dismax&tie=0.1&qf=first_name%20last_name&mm=1             // 2 results
    q=Napoleon%20Bonaparte&defType=dismax&tie=0.1&qf=first_name%20last_name&mm=1  // 2 results

However, if I try to search for both names with mm=2 (which I think means term1 AND term2), I get 0 results:

    q=napoleon%20bonaparte&defType=dismax&tie=0.1&qf=first_name%20last_name&mm=2  // 0 results
    q=napoleon%20bonaparte&defType=dismax&tie=0.1&qf=full_name&mm=2               // 0 results

I also see this when I put all fields (including the copy field) into the qf parameter. Thoughts?

Thanks!
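One way to narrow this down is to rerun the failing query with debugQuery=on and compare the parsed query against the explain output for a document you expect to match; a sketch using the same hypothetical fields as the post:

```
q=napoleon%20bonaparte&defType=dismax&tie=0.1&qf=first_name%20last_name&mm=2&debugQuery=on
```

With mm=2, both terms must match (each in at least one qf field) for the same document, so if index-time analysis preserved the capitalized "Napoleon"/"Bonaparte" while the query terms are lowercase, neither term matches and the result count drops to zero.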
Re: Basic Dismax syntax question
I noticed that your search terms are using caps vs lower case; are your search fields perhaps not set to lowercase the terms and/or the search term?

On Mon, Feb 28, 2011 at 10:41 AM, mrw mikerobertsw...@gmail.com wrote:

Say I have an index with first_name and last_name fields, and also a copy field for the full name called full_name. Say I add two employees: Napoleon Bonaparte and Napoleon Dynamite.
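If case does turn out to be the culprit, the usual fix is to ensure the analyzer chain for the name fields lowercases at both index and query time. A minimal schema.xml sketch — the fieldType name is made up, and changing index-time analysis requires reindexing:

```
<fieldType name="text_name" class="solr.TextField">
  <analyzer>
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>
```

With a single analyzer element like this, the same chain is applied at index and query time, so "Napoleon" and "napoleon" normalize to the same term.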
Solr Payloads retrieval
Hi!

I'm studying a migration from pure Lucene to Solr, but I need a crucial feature: is it possible to retrieve payloads from Solr? I'm storing the coordinates of each term in its payload in order to highlight images client-side.

Thank you,
Re: Problems with JSP pages?
: I'm on Windows Vista, using the trunk. Some of the JSP pages do not : execute; instead Jetty downloads them. : : solr/admin/get-properties.jsp for example. This is called by the 'JAVA : PROPERTIES' button on the main admin page. : : Is this a known problem/quirk for Windows? Or fallout from a jetty : change? Or...? I can't reproduce using trunk or 3x (as of r1075445) -- can anyone else reproduce this on Windows? Lance: anything interesting in the logs from Jetty? Are you using the plain vanilla solr example or something else? -Hoss
setting different solrconfig.xml for a core
So I think I ought to be able to set up a particular Solr core to use a different file for solrconfig.xml. (The reason I want to do this: I want master and slave in replication to share the exact same repo checkout for their conf directory, but have the master use a different solrconfig.xml, one set up to be a master.) Solr 1.4.1, using this for guidance: http://wiki.apache.org/solr/CoreAdmin But no matter what I try, while I get no errors in the log file (should I be looking for errors somewhere else?), the core doesn't successfully come up. I am trying, in solr.xml, to do this:

  <core name="master_prod" instanceDir="master_prod" config="master-solrconfig.xml">
    <property name="dataDir" value="/opt/solr/solr_indexer/master_prod/data" />
  </core>

Or I try this instead:

  <core name="master_prod" instanceDir="master_prod" config="master-solrconfig.xml">
    <property name="dataDir" value="/opt/solr/solr_indexer/master_prod/data" />
    <property name="configName" value="master-solrconfig.xml" />
  </core>

With either of these, the log file makes it look like things started up successfully, but that doesn't appear to actually be so; the core is actually inaccessible. Maybe there's an error in my master-solrconfig.xml, but I don't think so, and there's nothing in the log on that either. Or maybe I'm not doing things right as far as telling it to use a solrconfig.xml in a different location. Can anyone confirm for me that this is possible, and what the right way to do it is?
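For reference, the per-core config override that the CoreAdmin wiki page describes normally sits inside the usual solr.xml wrapper, along these lines (the <solr>/<cores> attribute values shown here are the stock example defaults, assumed rather than taken from the poster's file):

```xml
<solr persistent="true">
  <cores adminPath="/admin/cores">
    <!-- "config" overrides the config file name for this core only -->
    <core name="master_prod" instanceDir="master_prod"
          config="master-solrconfig.xml">
      <property name="dataDir" value="/opt/solr/solr_indexer/master_prod/data"/>
    </core>
  </cores>
</solr>
```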
Search Result from Multiple Cores
Hi All, Can anyone please help me with getting results from multiple cores (all cores maintain their own separate indexes; no sharding)? Suppose I have three cores: Core A (SQL Server DB), Core B (filesystem), Core C (MySQL). If I search for the word Java, then the combined results from all the cores should come up, ordered by their ranking. I even found a patch for this in JIRA, but the issue is still unresolved. Can any of you please help me? / Pankaj
Re: Basic Dismax syntax question
--- On Mon, 2/28/11, mrw mikerobertsw...@gmail.com wrote: From: mrw mikerobertsw...@gmail.com Subject: Basic Dismax syntax question To: solr-user@lucene.apache.org Date: Monday, February 28, 2011, 7:41 PM Say I have an index with first_name and last_name fields, and also a copy field for the full name called full_name. [...] However, if I try to search for both names with mm=2 (which I think means term1 AND term2), I get 0 results. [...] debugQuery=on will dump useful information. What are the field types of first_name, last_name and full_name? What happens when you query with the first letter uppercased? q=Napoleon%20Bonaparte&defType=dismax&tie=0.1&qf=first_name%20last_name&mm=2
Re: Solr Payloads retrieval
Yes -- check out the field type payloads in the schema.xml file. If you set up one or more of your fields as type payloads (you would use the DelimitedPayloadTokenFilterFactory during indexing, in your analyzer chain), you can then query it with a PayloadTermQuery; scoring can be done with a custom PayloadSimilarity implementation. Check out this (slightly dated) article for more information: http://www.lucidimagination.com/blog/2009/08/05/getting-started-with-payloads/ -sujit On Mon, 2011-02-28 at 14:49 -0300, Fabiano Nunes wrote: Hi! I'm studying a migration from pure Lucene to Solr, but I need a crucial feature: Is it possible to retrieve payloads from Solr? [...]
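The payloads field type referred to above looks roughly like this in the Solr 1.4 example schema.xml (shown from memory as a sketch; check the example schema shipped with your version for the exact form):

```xml
<fieldtype name="payloads" stored="false" indexed="true" class="solr.TextField">
  <analyzer>
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <!-- a token like "word|3.5" indexes "word" with 3.5 as a float payload -->
    <filter class="solr.DelimitedPayloadTokenFilterFactory" encoder="float"/>
  </analyzer>
</fieldtype>
```

The encoder attribute chooses how the text after the delimiter is decoded into payload bytes; for the image-coordinates use case above, an identity/bytes-style encoding of packed coordinates would be the part to experiment with.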
Re: Basic Dismax syntax question
They're all set to LC. I was just coming up with a safe example to post. It sounds like you don't see an issue with the syntax we're using? Thanks tjpoe wrote: I noticed that your search terms are using caps vs lower case; are your search fields perhaps not set to lowercase the terms and/or the search term? [...] -- View this message in context: http://lucene.472066.n3.nabble.com/Different-behavior-for-q-goo-com-vs-q-goo-com-in-queries-tp2168935p2597447.html Sent from the Solr - User mailing list archive at Nabble.com.
Re: Tika metadata extracted per supported document format?
Chris: Yes, I only see the output below. I'm familiar with the information in http://wiki.apache.org/solr/ExtractingRequestHandler, except for the tika.config part, which I haven't touched. Even when running documents through Tika directly, the output of metadata is highly dependent on what metadata the document contains (obviously). I haven't found the right place in the Tika source code yet either. Would digging into POI, PDFBox, ... help me any further in my pursuit? A matrix that lists the complete set of metadata for the most popular formats would sure be helpful to me. I would help provide it, if properly directed. Thanks, Andreas

PS: I've also noticed some differences in the date formats being used (using version 0.9). Is that something I should be concerned about when using it through SolrCell?

<meta name="Creation-Date" content="Mon May 17 10:10:15 PDT 2010"/> (from a Word document)
<meta name="Creation-Date" content="2011-01-03T18:45:50Z"/> (from a PDF)

From: Mattmann, Chris A (388J) chris.a.mattm...@jpl.nasa.gov To: solr-user@lucene.apache.org solr-user@lucene.apache.org Sent: Fri, February 25, 2011 4:11:00 PM Subject: Re: Tika metadata extracted per supported document format? Hi Andreas,

java -jar tika-app-0.9.jar --list-met-models
TikaMetadataKeys
  PROTECTED
  RESOURCE_NAME_KEY
TikaMimeKeys
  MIME_TYPE_MAGIC
  TIKA_MIME_FILE

Both 0.8 and 0.9 give me the same list. Is that a configuration issue? Strange -- those are the only met models you're seeing listed? I'm a bit unclear if that gets me to what I was looking for -- metadata like content_type or last_modified. Or am I confusing Tika metadata with SolrCell metadata? I thought SolrCell metadata comes from Tika, or does it not? It does come from Tika, that's for sure, but in SolrCell there is a configuration for the ExtractingRequestHandler that remaps the field names from Tika to Solr. So that's probably where it's coming from. Check this out: http://wiki.apache.org/solr/ExtractingRequestHandler HTH!
Cheers, Chris ++ Chris Mattmann, Ph.D. Senior Computer Scientist NASA Jet Propulsion Laboratory Pasadena, CA 91109 USA Office: 171-266B, Mailstop: 171-246 Email: chris.a.mattm...@nasa.gov WWW: http://sunset.usc.edu/~mattmann/ ++ Adjunct Assistant Professor, Computer Science Department University of Southern California, Los Angeles, CA 90089 USA ++
Re: Basic Dismax syntax question
Fields are str type. The issue happens regardless of case. I just threw in some examples using names to highlight the issue. In the actual index, the data in the affected fields is all LC, and I'm searching in LC. Sounds like the syntax looks okay to you? Thanks iorixxx wrote: debugQuery=on will dump useful information. What are the field types of first_name, last_name and full_name? What happens when you query with the first letter uppercased? [...] -- View this message in context: http://lucene.472066.n3.nabble.com/Different-behavior-for-q-goo-com-vs-q-goo-com-in-queries-tp2168935p2597510.html Sent from the Solr - User mailing list archive at Nabble.com.
Re: setting different solrconfig.xml for a core
(The reason I want to do this is so I can have master and slave in replication have the exact same repo checkout for their conf directory, but have the master using a different solrconfig.xml, one set up to be master.) How about using the same solrconfig.xml for the master too? As described here: http://wiki.apache.org/solr/SolrReplication#enable.2BAC8-disable_master.2BAC8-slave_in_a_node
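The single-config approach on that wiki page hinges on the enable attribute plus JVM property substitution; roughly like this (the masterUrl host and confFiles list are placeholders -- consult the wiki page for the exact current form):

```xml
<requestHandler name="/replication" class="solr.ReplicationHandler">
  <lst name="master">
    <!-- turned on only when the JVM is started with -Denable.master=true -->
    <str name="enable">${enable.master:false}</str>
    <str name="replicateAfter">commit</str>
    <str name="confFiles">schema.xml,stopwords.txt</str>
  </lst>
  <lst name="slave">
    <!-- turned on only when started with -Denable.slave=true -->
    <str name="enable">${enable.slave:false}</str>
    <str name="masterUrl">http://master_host:8983/solr/replication</str>
    <str name="pollInterval">00:00:60</str>
  </lst>
</requestHandler>
```

The same file is then deployed everywhere, and the role of each node is decided by a system property at startup rather than by which config file it got.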
please make JSONWriter public
Hello fellow SOLR experts, may I ask that the class org.apache.solr.request.JSONWriter (inside org.apache.solr.request.JSONResponseWriter) be made top-level and public? I am re-using it to output JSON search results to code that I wish not to change on the client, but the current visibility (JSONWriter is package-private) makes that impossible for me without actually copying the code (which is possible thanks to the good open-source nature). Thanks in advance, paul
Re: setting different solrconfig.xml for a core
On 2/28/2011 1:09 PM, Ahmet Arslan wrote: (The reason I want to do this is so I can have master and slave in replication have the exact same repo checkout for their conf directory, but have the master using a different solrconfig.xml, one set up to be master.) How about using the same solrconfig.xml for master too? As described here: http://wiki.apache.org/solr/SolrReplication#enable.2BAC8-disable_master.2BAC8-slave_in_a_node That isn't great, because there are more differences in the optimal solrconfig.xml between master and slave than just the replication handler difference, which that URL covers. A master (which won't be queried against) doesn't need spellcheck running after commits, but the slave does. A master doesn't need slow newSearcher/firstSearcher query warm-ups, but the slave does. The master may be better off with different (lower) cache settings, since it won't be used to service live queries. The documentation clearly suggests it _ought_ to be possible to tell a core that the name of its config file (default solrconfig.xml) is something other than solrconfig.xml -- but I haven't been able to make it work, and I find the lack of any errors in the log file when it's not working frustrating. Has anyone actually done this? Can anyone confirm that it's even possible, and that the documentation isn't just taking me for a ride?
Re: setting different solrconfig.xml for a core
Okay, I did manage to find a clue in the log that it's not working, when it's not working: INFO: Jk running ID=0 time=0/66 config=null -- config=null, that's not right. When I try to override the config file name in the solr.xml core config, I can't seem to put a name in there that works to find a file that does actually exist. Unless I put the name solrconfig.xml in there; then it works fine, heh. On 2/28/2011 3:00 PM, Jonathan Rochkind wrote: [...]
RE: setting different solrconfig.xml for a core
Jonathan, When I was first setting up replication a couple weeks ago, I had this working, as described here: http://wiki.apache.org/solr/SolrReplication#Replicating_solrconfig.xml I created the slave's solrconfig.xml and saved it on the master in the conf dir as solrconfig_slave.xml, then began the confFiles parameter on the master with solrconfig_slave.xml:solrconfig.xml,schema.xml,etc. And it was working (v1.4.1). I'm not sure why you haven't had good luck with this, but you can at least know it is possible to get it to work. I think to get the slave up and running for the first time, I saved the slave's version on the slave as solrconfig.xml. It then would copy over any changed versions of solrconfig_slave.xml from the master to the slave, saving them on the slave as solrconfig.xml. But I primed it by giving it its config file in-sync to start with. I ended up going the same-config-file-everywhere route, though, because we're using our master to handle requests when it's not indexing (one less server to buy)... James Dyer E-Commerce Systems Ingram Content Group (615) 213-4311 -----Original Message----- From: Jonathan Rochkind [mailto:rochk...@jhu.edu] Sent: Monday, February 28, 2011 2:03 PM To: solr-user@lucene.apache.org Subject: Re: setting different solrconfig.xml for a core [...]
Re: Problem with Solr and Nutch integration
Hi Anurag, Thanks for the prompt reply. I'm following the tutorial at http://www.lucidimagination.com/blog/2009/03/09/nutch-solr/ I have built solr and the example and added it to Tomcat as per http://wiki.apache.org/solr/SolrTomcat, and this (solr-example) all appears to work fine (I can access the solr admin page at http://localhost:8080/solr-example/admin/ and search using the same). I have copied the nutch schema.xml across and replaced the example one. Again everything seems to work fine. However, when I add the request handler:

  <requestHandler name="/nutch" class="solr.SearchHandler">
    <lst name="defaults">
      <str name="defType">dismax</str>
      <str name="echoParams">explicit</str>
      <float name="tie">0.01</float>
      <str name="qf">content^0.5 anchor^1.0 title^1.2</str>
      <str name="pf">content^0.5 anchor^1.5 title^1.2 site^1.5</str>
      <str name="fl">url</str>
      <str name="mm">2&lt;-1 5&lt;-2 6&lt;90%</str>
      <int name="ps">100</int>
      <bool name="hl">true</bool>
      <str name="q.alt">*:*</str>
      <str name="hl.fl">title url content</str>
      <str name="f.title.hl.fragsize">0</str>
      <str name="f.title.hl.alternateField">title</str>
      <str name="f.url.hl.fragsize">0</str>
      <str name="f.url.hl.alternateField">url</str>
      <str name="f.content.hl.fragmenter">regex</str>
    </lst>
  </requestHandler>

and restart the solr-example app under Tomcat, I get the following error: HTTP Status 404 - missing core name in path (type: Status report; message: missing core name in path; description: The requested resource (missing core name in path) is not available.) As soon as I comment out the request handler, the example appears to work again. From the previously mentioned post I understand that this error is masking the actual error and I need to check the logs. However, I'm unsure exactly where these are located. I was hoping that if I could post them, it'd allow you guys to suggest a solution. Many thanks, Paul On 28 February 2011 11:37, Anurag anurag.it.jo...@gmail.com wrote: Solr uses the jetty server by default, do you know that? You can run the solr server without using Tomcat (using the jetty server). Please describe the steps that led to the error. Which command did you execute?
Re: How to handle special character in filter query
Thank you both. In order to get it (e.g. the filter query http://...fq=genres:ACTION ADVENTURE) to work in the browser, I have to explicitly URL-encode the special character, regardless of which of the following solutions I use: 1. Using double quotes around the filter query value 2. Escaping the special character -- in my case, the white space within ACTION ADVENTURE 3. Using the raw query QParser It looks like someone already asked this question before and got an answer: http://lucene.472066.n3.nabble.com/fq-field-with-facets-td2559948.html Thanks for your help. -- View this message in context: http://lucene.472066.n3.nabble.com/How-to-handle-special-character-in-filter-query-tp2579978p2598130.html Sent from the Solr - User mailing list archive at Nabble.com.
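The encoding fix described above can be scripted rather than done by hand; a small sketch using only Python's standard library (the field name and value are taken from the example above):

```python
# Percent-encode a Solr filter query whose value contains a space.
# Quoting the phrase keeps "ACTION ADVENTURE" together as one filter
# value; urlencode then escapes the quotes, colon, and space.
from urllib.parse import urlencode

params = {
    "q": "*:*",
    "fq": 'genres:"ACTION ADVENTURE"',
}
query_string = urlencode(params)
print(query_string)  # q=%2A%3A%2A&fq=genres%3A%22ACTION+ADVENTURE%22
```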
Problem with sorting using functions.
This works: /select/?q=*:*&sort=price desc This throws a 400 error: /select/?q=*:*&sort=sum(1, 1) desc -- Missing sort order. I'm using 1.4.2. I've tried all sorts of different numbers, functions, and fields, but nothing seems to change that error. Any ideas?
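A guess rather than a confirmed diagnosis: the space inside sum(1, 1) may be splitting the sort value before the direction keyword is reached, which would produce exactly a "Missing sort order" complaint. Writing the function without internal spaces and percent-encoding the parameter removes that ambiguity (sketch using Python's standard library; note also that sorting by arbitrary functions may simply not be supported in older 1.4.x releases, so this is only one thing to rule out):

```python
# Build the sort parameter so internal spaces can't split the value.
# With "sum(1,1) desc" the only space left is the one before the
# direction keyword, and urlencode escapes it as "+".
from urllib.parse import urlencode

params = {"q": "*:*", "sort": "sum(1,1) desc"}
query_string = urlencode(params)
print(query_string)  # q=%2A%3A%2A&sort=sum%281%2C1%29+desc
```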
Indexed, but cannot search
Hi all, I was able to get my installation of Solr indexed using dataimport. However, I cannot seem to get search working. I can verify that the data is there by going to: http://localhost:8983/solr/select/?q=*%3A*&version=2.2&start=0&rows=10&indent=on This gives me the response: <result name="response" numFound="234961" start="0"> But when I go to http://localhost:8983/solr/select/?q=dog&version=2.2&start=0&rows=10&indent=on I get the response: <result name="response" numFound="0" start="0"> I know that dog should return some results because it is in the first result when I select all the records. So what am I doing incorrectly that would prevent me from seeing results?
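One common cause of this symptom (an assumption on my part, since the schema isn't shown): q=*:* bypasses analysis entirely, while q=dog searches the default search field, so if that field is a plain solr.StrField (exact-match string) or simply isn't the field holding the text, ordinary word queries find nothing. Free-text search needs a tokenized field along these lines (the field name "content" and type name "text" here are hypothetical examples, not taken from the poster's schema):

```xml
<!-- schema.xml sketch: "dog" inside longer text only matches on a field
     whose type is analyzed/tokenized, never on a plain StrField -->
<field name="content" type="text" indexed="true" stored="true"/>
<defaultSearchField>content</defaultSearchField>
```

Checking the query with debugQuery=on will also show which field q=dog is actually being parsed against.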
Re: setting different solrconfig.xml for a core
Yeah, I'm actually _not_ trying to get replication to copy over the config files. Instead, I'm assuming the config files are all there, and I'm actually trying to get one of the cores to _use_ a file that, actually on disk in that core, is called, e.g., solrconfig_slave.xml. This wiki page: http://wiki.apache.org/solr/CoreAdmin suggests I _ought_ to be able to do that -- to tell a particular core to use a config file of any name I want. But I'm having trouble getting it to work. But that could be my own local mistake of some kind too. It just makes it harder to figure out when I'm not even exactly sure how you're _supposed_ to be able to do it -- the CoreAdmin wiki page implies at least two different ways you should be able to do it, but doesn't include an actual example, so I'm not sure if I'm understanding what it's implying correctly -- or if the actual 1.4.1 behavior matches what's on that wiki page anyway. On 2/28/2011 3:14 PM, Dyer, James wrote: [...]
Re: setting different solrconfig.xml for a core
Aha, wait, I think I've made it work; it's as simple as this in the solr.xml core config, to make a core use a solrconfig.xml file with a different name:

  ... <core name="master_prod" instanceDir="master_prod" config="master-solrconfig.xml"> ...

Not sure why that didn't work the first half a dozen times I tried. I may have had a syntax error in my master-solrconfig.xml file, even though the Solr log files didn't report any; maybe when there's a syntax error Solr just silently gives up on the config file and presents an empty index, I dunno. On 2/28/2011 3:46 PM, Jonathan Rochkind wrote: [...]
Sub entities
Hi all, I was able to get my dataimport to work correctly, but I'm a little unclear as to how an entity within an entity works in regards to search results. When I do a search for all results, it seems only the outermost entity's fields are returned. For example, I have the following in my db config file:

  <dataConfig>
    <dataSource type="JdbcDataSource" name="mystuff" batchSize="-1"
                driver="com.mysql.jdbc.Driver"
                url="jdbc:mysql://localhost/db?characterEncoding=UTF8&amp;zeroDateTimeBehavior=convertToNull"
                user="user" password="password"/>
    <document>
      <entity name="animal" dataSource="mystuff" query="SELECT * FROM animals">
        <field column="id" name="id" />
        <field column="type" name="type" />
        <field column="genus" name="genus" />
        <!-- Add in the species -->
        <entity name="specie_relations" dataSource="mystuff"
                query="SELECT specie_id FROM specie_relations WHERE animal_id=${animal.id}">
          <entity name="species" dataSource="mystuff"
                  query="SELECT specie FROM species WHERE id=${specie_relations.specie_id}">
            <field column="specie" name="specie" />
          </entity>
        </entity>
      </entity>
    </document>
  </dataConfig>

However, specie never shows up in my search results:

  <doc>
    <str name="type">Mammal</str>
    <str name="id">1</str>
    <str name="genus">Canis</str>
  </doc>

I had hoped the results would include the species. Can it? If so, what is my malfunction?
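One likely missing piece (hedged, since the schema isn't shown): nested DataImportHandler entities flatten their fields into the parent document, but the value only survives if schema.xml declares the field, with stored="true" so it appears in results; and since one animal can map to several species, it also has to be multiValued. Something along these lines (the type "string" is an assumption):

```xml
<!-- schema.xml sketch: "specie" values produced by the inner entity land
     on the parent animal document; stored makes them visible in results,
     multiValued allows several species per animal -->
<field name="specie" type="string" indexed="true" stored="true" multiValued="true"/>
```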
Re: setting different solrconfig.xml for a core
And in other news of other possibilities. If I DID want to use the same solrconfig.xml for both master and slave, but disable the newsearcher/firstsearcher queries on master, it _looks_ like I can use the techique here: http://wiki.apache.org/solr/SolrReplication#enable.2BAC8-disable_master.2BAC8-slave_in_a_node Applied to newsearcher/firstsearcher too: listener event=firstSearcher class=solr.QuerySenderListener enable=${enable.slave:false} Now that listener will only be turned on if enable.slave is set to true. Might make more sense to use a different property value there, like enable.searcher or something. I'm not entirely sure in what places the enable attribute is recognized and in what places it isn't, but it LOOKS like it's recognized on the listener tag. I think. On 2/28/2011 3:52 PM, Jonathan Rochkind wrote: Aha, wait, I think I've made it work, as simple as this in the solr.xml core config, to make a core use a solrconfig.xml file with a different name: ...core name=master_prod instanceDir=master_prod config=master-solrconfig.xml ... Not sure why that didn't work the first half a dozen times I tried. May have had a syntax error in my master-solrconfig.xml file, even though the Solr log files didn't report any, maybe when there's a syntax error Solr just silently gives up on the config file and presents an empty index, I dunno. On 2/28/2011 3:46 PM, Jonathan Rochkind wrote: Yeah, I'm actually _not_ trying to get replication to copy over the config files. Instead, I'm assuming the config files are all there, and I'm actually trying to get one of the cores to _use_ a file that actually on disk in that core is called, eg, solrconfig_slave.xml. This wiki page: http://wiki.apache.org/solr/CoreAdmin suggests I _ought_ to be able to do that, to tell a particular core to use a config file of any name I want. But I'm having trouble getting it to work. But that could be my own local mistake of some kind too. 
Just makes it harder to figure out when I'm not even exactly sure how you're _supposed_ to be able to do that -- the CoreAdmin wiki page implies at least two different ways you should be able to do it, but doesn't include an actual example, so I'm not sure if I'm understanding what it's implying correctly -- or whether the actual 1.4.1 behavior matches what's on that wiki page anyway.

On 2/28/2011 3:14 PM, Dyer, James wrote: Jonathan, When I was first setting up replication a couple weeks ago, I had this working, as described here: http://wiki.apache.org/solr/SolrReplication#Replicating_solrconfig.xml I created the slave's solrconfig.xml and saved it on the master in the conf dir as solrconfig_slave.xml, then began the confFiles parameter on the master with solrconfig_slave.xml:solrconfig.xml,schema.xml,etc. And it was working (v1.4.1). I'm not sure why you haven't had good luck with this, but you can at least know it is possible to get it to work. I think to get the slave up and running for the first time I saved the slave's version on the slave as solrconfig.xml. It then would copy over any changed versions of solrconfig_slave.xml from the master to the slave, saving them on the slave as solrconfig.xml. But I primed it by giving it its config file in sync to start with. I ended up going the same-config-file-everywhere route, though, because we're using our master to handle requests when it's not indexing (one less server to buy)...

James Dyer E-Commerce Systems Ingram Content Group (615) 213-4311

-Original Message- From: Jonathan Rochkind [mailto:rochk...@jhu.edu] Sent: Monday, February 28, 2011 2:03 PM To: solr-user@lucene.apache.org Subject: Re: setting different solrconfig.xml for a core

Okay, I did manage to find a clue from the log that it's not working, when it's not working: INFO: Jk running ID=0 time=0/66 config=null -- config=null, that's not right.
When I try to override the config file name in the solr.xml core config, I can't seem to put a name in there that works to find a file that does actually exist. Unless I put the name solrconfig.xml in there; then it works fine, heh.

On 2/28/2011 3:00 PM, Jonathan Rochkind wrote: On 2/28/2011 1:09 PM, Ahmet Arslan wrote: (The reason I want to do this is so I can have master and slave in replication share the exact same repo checkout for their conf directory, but have the master using a different solrconfig.xml, one set up to be master.) How about using the same solrconfig.xml for master too? As described here: http://wiki.apache.org/solr/SolrReplication#enable.2BAC8-disable_master.2BAC8-slave_in_a_node That isn't great, because there are more differences in the optimal solrconfig.xml between master and slave than just the replication handler difference, which that URL covers. A master (which won't be queried against) doesn't need spellcheck running after commits, but the slave does. A master doesn't need slow newSearcher/firstSearcher query warm-ups, but the slave does. The master may be better with
Re: Problem with Solr and Nutch integration
Does the above-mentioned requestHandler go in Solr's schema file or its solrconfig file? I integrated Solr and Nutch, and the requestHandler lives in solrconfig.xml. This is some of the content of my solrconfig.xml. As far as the Jetty server is concerned it's working; I don't know about Solr on Tomcat.

<requestHandler name="/nutch" class="solr.SearchHandler">
  <lst name="defaults">
    <str name="defType">dismax</str>
    <str name="echoParams">explicit</str>
    <float name="tie">0.01</float>
    <str name="qf">content^0.5 anchor^1.0 title^1.2</str>
    <str name="pf">content^0.5 anchor^1.5 title^1.2 site^1.5</str>
    <str name="fl">url</str>
    <str name="mm">2&lt;-1 5&lt;-2 6&lt;90%</str>
    <int name="ps">100</int>
    <bool name="hl">true</bool>
    <str name="q.alt">*:*</str>
    <str name="hl.fl">title url content</str>
    <str name="f.title.hl.fragsize">0</str>
    <str name="f.title.hl.alternateField">title</str>
    <str name="f.url.hl.fragsize">0</str>
    <str name="f.url.hl.alternateField">url</str>
    <str name="f.content.hl.fragmenter">regex</str>
  </lst>
</requestHandler>

But yeah, deploying the solr.war file in Tomcat 6 on my system works.

On Tue, Mar 1, 2011 at 1:56 AM, Paul Rogers [via Lucene] ml-node+2598137-98595181-146...@n3.nabble.com wrote: Hi Anurag, Thanks for the prompt reply.
I'm following the tutorial at http://www.lucidimagination.com/blog/2009/03/09/nutch-solr/ I have built Solr and the example and added it to Tomcat as per http://wiki.apache.org/solr/SolrTomcat and this (solr-example) all appears to work fine (I can access the Solr admin page at http://localhost:8080/solr-example/admin/ and search using the same). I have copied the Nutch schema.xml across and replaced the example one. Again everything seems to work fine. However, when I add the request handler:

<requestHandler name="/nutch" class="solr.SearchHandler">
  <lst name="defaults">
    <str name="defType">dismax</str>
    <str name="echoParams">explicit</str>
    <float name="tie">0.01</float>
    <str name="qf">content^0.5 anchor^1.0 title^1.2</str>
    <str name="pf">content^0.5 anchor^1.5 title^1.2 site^1.5</str>
    <str name="fl">url</str>
    <str name="mm">2<-1 5<-2 6<90%</str>
    <int name="ps">100</int>
    <bool name="hl">true</bool>
    <str name="q.alt">*:*</str>
    <str name="hl.fl">title url content</str>
    <str name="f.title.hl.fragsize">0</str>
    <str name="f.title.hl.alternateField">title</str>
    <str name="f.url.hl.fragsize">0</str>
    <str name="f.url.hl.alternateField">url</str>
    <str name="f.content.hl.fragmenter">regex</str>
  </lst>
</requestHandler>

and restart the solr-example app under Tomcat, I get the following error:

HTTP Status 404 - missing core name in path
type: Status report
message: missing core name in path
description: The requested resource (missing core name in path) is not available.

As soon as I comment out the request handler, the example appears to work again. From the previously mentioned post I understand that this error is masking the actual error and I need to check the logs. However, I'm unsure exactly where these are located. I was hoping that if I could post them it'd allow you guys to suggest a solution. Many thanks, Paul

On 28 February 2011 11:37, Anurag [hidden email] wrote: Solr uses the Jetty server by default, do you know that? You can run the Solr server without using Tomcat (using the Jetty server).
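One thing worth checking -- a guess based on the paste above, not a confirmed diagnosis: the mm parameter here contains raw '<' characters (2<-1 5<-2 6<90%), while the working config quoted earlier in this thread has them escaped as XML entities. An unescaped '<' makes solrconfig.xml unparseable, and a core that fails to load tends to surface under Tomcat as exactly this generic "missing core name in path" 404. A sketch of the escaped form:

```xml
<str name="mm">2&lt;-1 5&lt;-2 6&lt;90%</str>
```

The underlying parse exception should show up in Tomcat's own logs under CATALINA_HOME/logs (typically catalina.out).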
Please describe the steps that led to the error. Which command did you execute?

--
If you reply to this email, your message will be added to the discussion below: http://lucene.472066.n3.nabble.com/Problem-with-Solr-and-Nutch-integration-tp2590334p2598137.html

--
Kumar Anurag

--
View this message in context: http://lucene.472066.n3.nabble.com/Problem-with-Solr-and-Nutch-integration-tp2590334p2598526.html Sent from the Solr - User mailing list archive at Nabble.com.
RE: setting different solrconfig.xml for a core
Just did a quick search for 'enable=' in the 1.4.1 source. It looks like, from the example solrconfig.xml, both <searchComponent> and <requestHandler> tags can take the enable attribute. It's only shown with the ClusteringComponent, so I'm not sure if just any SC or RH will honor it. Also see the unit test TestPluginEnable.java, which seems to show that the StandardRequestHandler will honor it.

James Dyer E-Commerce Systems Ingram Content Group (615) 213-4311

-Original Message- From: Jonathan Rochkind [mailto:rochk...@jhu.edu] Sent: Monday, February 28, 2011 3:09 PM To: solr-user@lucene.apache.org Subject: Re: setting different solrconfig.xml for a core

And in other news of other possibilities: if I DID want to use the same solrconfig.xml for both master and slave, but disable the newSearcher/firstSearcher queries on master, it _looks_ like I can use the technique here: http://wiki.apache.org/solr/SolrReplication#enable.2BAC8-disable_master.2BAC8-slave_in_a_node applied to newSearcher/firstSearcher too:

<listener event="firstSearcher" class="solr.QuerySenderListener" enable="${enable.slave:false}">

Now that listener will only be turned on if enable.slave is set to true. It might make more sense to use a different property name there, like enable.searcher or something. I'm not entirely sure in which places the enable attribute is recognized and in which it isn't, but it LOOKS like it's recognized on the listener tag. I think.

On 2/28/2011 3:52 PM, Jonathan Rochkind wrote: Aha, wait, I think I've made it work, as simple as this in the solr.xml core config, to make a core use a solrconfig.xml file with a different name:

<core name="master_prod" instanceDir="master_prod" config="master-solrconfig.xml" />

Not sure why that didn't work the first half a dozen times I tried.
May have had a syntax error in my master-solrconfig.xml file, even though the Solr log files didn't report any; maybe when there's a syntax error Solr just silently gives up on the config file and presents an empty index, I dunno.

On 2/28/2011 3:46 PM, Jonathan Rochkind wrote: Yeah, I'm actually _not_ trying to get replication to copy over the config files. Instead, I'm assuming the config files are all there, and I'm actually trying to get one of the cores to _use_ a file that on disk in that core is called, e.g., solrconfig_slave.xml. This wiki page: http://wiki.apache.org/solr/CoreAdmin suggests I _ought_ to be able to do that, to tell a particular core to use a config file of any name I want. But I'm having trouble getting it to work. But that could be my own local mistake of some kind too. Just makes it harder to figure out when I'm not even exactly sure how you're _supposed_ to be able to do that -- the CoreAdmin wiki page implies at least two different ways you should be able to do it, but doesn't include an actual example, so I'm not sure if I'm understanding what it's implying correctly -- or whether the actual 1.4.1 behavior matches what's on that wiki page anyway.

On 2/28/2011 3:14 PM, Dyer, James wrote: Jonathan, When I was first setting up replication a couple weeks ago, I had this working, as described here: http://wiki.apache.org/solr/SolrReplication#Replicating_solrconfig.xml I created the slave's solrconfig.xml and saved it on the master in the conf dir as solrconfig_slave.xml, then began the confFiles parameter on the master with solrconfig_slave.xml:solrconfig.xml,schema.xml,etc. And it was working (v1.4.1). I'm not sure why you haven't had good luck with this, but you can at least know it is possible to get it to work. I think to get the slave up and running for the first time I saved the slave's version on the slave as solrconfig.xml.
It then would copy over any changed versions of solrconfig_slave.xml from the master to the slave, saving them on the slave as solrconfig.xml. But I primed it by giving it its config file in sync to start with. I ended up going the same-config-file-everywhere route, though, because we're using our master to handle requests when it's not indexing (one less server to buy)...

James Dyer E-Commerce Systems Ingram Content Group (615) 213-4311

-Original Message- From: Jonathan Rochkind [mailto:rochk...@jhu.edu] Sent: Monday, February 28, 2011 2:03 PM To: solr-user@lucene.apache.org Subject: Re: setting different solrconfig.xml for a core

Okay, I did manage to find a clue from the log that it's not working, when it's not working: INFO: Jk running ID=0 time=0/66 config=null -- config=null, that's not right. When I try to override the config file name in the solr.xml core config, I can't seem to put a name in there that works to find a file that does actually exist. Unless I put the name solrconfig.xml in there; then it works fine, heh.

On 2/28/2011 3:00 PM, Jonathan Rochkind wrote: On 2/28/2011 1:09 PM, Ahmet Arslan wrote: (The reason I want to do this is so I can have master and
Re: Problem with sorting using functions.
Sort by function query is only available from Solr 3.1 (from: http://wiki.apache.org/solr/FunctionQuery#Sort_By_Function)

2011/2/28 John Sherwood j...@storecrowd.com: This works: /select/?q=*:*&sort=price desc This throws a 400 error: /select/?q=*:*&sort=sum(1,1) desc -- "Missing sort order." I'm using 1.4.2. I've tried all sorts of different numbers, functions, and fields, but nothing seems to change that error. Any ideas?
Re: setting different solrconfig.xml for a core
Hmm, I'm pretty sure I'm seeing that listener can take an 'enable' attribute too, even though that's not a searchComponent or a requestHandler, is it? After toggling enable back and forth on a listener, restarting Solr, and watching my logs closely, I am as confident as I can be that it mysteriously is being respected on listener. Go figure. Convenient for me, because I wanted to disable my expensive and time-consuming newSearcher/firstSearcher warming queries on a core marked 'master'.

On 2/28/2011 4:21 PM, Dyer, James wrote: Just did a quick search for 'enable=' in the 1.4.1 source. It looks like, from the example solrconfig.xml, both <searchComponent> and <requestHandler> tags can take the enable attribute. It's only shown with the ClusteringComponent, so I'm not sure if just any SC or RH will honor it. Also see the unit test TestPluginEnable.java, which seems to show that the StandardRequestHandler will honor it. James Dyer E-Commerce Systems Ingram Content Group (615) 213-4311

-Original Message- From: Jonathan Rochkind [mailto:rochk...@jhu.edu] Sent: Monday, February 28, 2011 3:09 PM To: solr-user@lucene.apache.org Subject: Re: setting different solrconfig.xml for a core

And in other news of other possibilities: if I DID want to use the same solrconfig.xml for both master and slave, but disable the newSearcher/firstSearcher queries on master, it _looks_ like I can use the technique here: http://wiki.apache.org/solr/SolrReplication#enable.2BAC8-disable_master.2BAC8-slave_in_a_node applied to newSearcher/firstSearcher too:

<listener event="firstSearcher" class="solr.QuerySenderListener" enable="${enable.slave:false}">

Now that listener will only be turned on if enable.slave is set to true. It might make more sense to use a different property name there, like enable.searcher or something. I'm not entirely sure in which places the enable attribute is recognized and in which it isn't, but it LOOKS like it's recognized on the listener tag. I think.
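Putting the pieces of this thread together, the master-side config would carry both warming listeners guarded by the property. A sketch (the warming queries are placeholders, and the enable-on-listener behavior is only as confirmed as the log-watching above):

```xml
<!-- solrconfig.xml sketch: enable pattern from the SolrReplication wiki -->
<listener event="firstSearcher" class="solr.QuerySenderListener" enable="${enable.slave:false}">
  <arr name="queries">
    <lst><str name="q">placeholder warming query</str></lst>
  </arr>
</listener>
<listener event="newSearcher" class="solr.QuerySenderListener" enable="${enable.slave:false}">
  <arr name="queries">
    <lst><str name="q">placeholder warming query</str></lst>
  </arr>
</listener>
```

On a core where enable.slave is unset or false, both listeners are simply skipped, so the master pays no warming cost.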
On 2/28/2011 3:52 PM, Jonathan Rochkind wrote: Aha, wait, I think I've made it work, as simple as this in the solr.xml core config, to make a core use a solrconfig.xml file with a different name:

<core name="master_prod" instanceDir="master_prod" config="master-solrconfig.xml" />

Not sure why that didn't work the first half a dozen times I tried. May have had a syntax error in my master-solrconfig.xml file, even though the Solr log files didn't report any; maybe when there's a syntax error Solr just silently gives up on the config file and presents an empty index, I dunno.

On 2/28/2011 3:46 PM, Jonathan Rochkind wrote: Yeah, I'm actually _not_ trying to get replication to copy over the config files. Instead, I'm assuming the config files are all there, and I'm actually trying to get one of the cores to _use_ a file that on disk in that core is called, e.g., solrconfig_slave.xml. This wiki page: http://wiki.apache.org/solr/CoreAdmin suggests I _ought_ to be able to do that, to tell a particular core to use a config file of any name I want. But I'm having trouble getting it to work. But that could be my own local mistake of some kind too. Just makes it harder to figure out when I'm not even exactly sure how you're _supposed_ to be able to do that -- the CoreAdmin wiki page implies at least two different ways you should be able to do it, but doesn't include an actual example, so I'm not sure if I'm understanding what it's implying correctly -- or whether the actual 1.4.1 behavior matches what's on that wiki page anyway.

On 2/28/2011 3:14 PM, Dyer, James wrote: Jonathan, When I was first setting up replication a couple weeks ago, I had this working, as described here: http://wiki.apache.org/solr/SolrReplication#Replicating_solrconfig.xml I created the slave's solrconfig.xml and saved it on the master in the conf dir as solrconfig_slave.xml, then began the confFiles parameter on the master with solrconfig_slave.xml:solrconfig.xml,schema.xml,etc. And it was working (v1.4.1).
I'm not sure why you haven't had good luck with this, but you can at least know it is possible to get it to work. I think to get the slave up and running for the first time I saved the slave's version on the slave as solrconfig.xml. It then would copy over any changed versions of solrconfig_slave.xml from the master to the slave, saving them on the slave as solrconfig.xml. But I primed it by giving it its config file in sync to start with. I ended up going the same-config-file-everywhere route, though, because we're using our master to handle requests when it's not indexing (one less server to buy)...

James Dyer E-Commerce Systems Ingram Content Group (615) 213-4311

-Original Message- From: Jonathan Rochkind [mailto:rochk...@jhu.edu] Sent: Monday, February 28, 2011 2:03 PM To: solr-user@lucene.apache.org Subject: Re: setting different solrconfig.xml for a core

Okay, I did manage to find a clue from the log that it's not working, when it's
Re: Problem with sorting using functions.
Fair call. Thanks.

On Tue, Mar 1, 2011 at 8:21 AM, Geert-Jan Brits gbr...@gmail.com wrote: Sort by function query is only available from Solr 3.1 (from: http://wiki.apache.org/solr/FunctionQuery#Sort_By_Function) 2011/2/28 John Sherwood j...@storecrowd.com: This works: /select/?q=*:*&sort=price desc This throws a 400 error: /select/?q=*:*&sort=sum(1,1) desc -- "Missing sort order." I'm using 1.4.2. I've tried all sorts of different numbers, functions, and fields, but nothing seems to change that error. Any ideas?
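For anyone stuck on 1.4.x, a commonly cited workaround (not mentioned in this thread, so treat it as a hedged sketch) is to fold the function into the relevance score with the func query parser and sort by score instead of sorting by the function directly; the field name here is hypothetical:

```
/select?q={!func}sum(popularity,1)&sort=score desc
```

This only helps when the function value can stand in for (or be combined with) the score, which is often the case for popularity-style boosts.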
suggestion: do not require masterUrl for slave config
Suggestion; curious what other people think of it, and whether I should bother filing a JIRA and/or trying to come up with a patch. Currently, when you configure a replication <lst name="slave">, you HAVE to give it a masterUrl:

SEVERE: org.apache.solr.common.SolrException: 'masterUrl' is required for a slave
  at org.apache.solr.handler.SnapPuller.init(SnapPuller.java:126)
  at org.apache.solr.handler.ReplicationHandler.inform(ReplicationHandler.java:775)
  at org.apache.solr.core.SolrResourceLoader.inform(SolrResourceLoader.java:508)
  at org.apache.solr.core.SolrCore.init(SolrCore.java:588)

At first this makes sense -- why would you want a slave without a masterUrl? But since you can supply the masterUrl as a query parameter in /replication?command=fetchIndex&masterUrl=X, there's really no reason to require you to specify it in solrconfig.xml if you are planning on not having automatic polling, but just triggering replication manually and supplying the masterUrl in the command every time. This can sometimes be convenient for letting some other monitor process decide when and how to replicate, instead of having Solr itself be configured for pulling via polling. Does that make any sense?
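To make the suggestion concrete, the manual-only setup being described would look roughly like this sketch (host, port, and core names are made up):

```xml
<!-- solrconfig.xml on the slave: today the masterUrl line is mandatory -->
<requestHandler name="/replication" class="solr.ReplicationHandler">
  <lst name="slave">
    <str name="masterUrl">http://master-host:8983/solr/core0/replication</str>
    <!-- no pollInterval: replication is triggered only by an external process, e.g.
         /replication?command=fetchIndex&masterUrl=http://master-host:8983/solr/core0/replication -->
  </lst>
</requestHandler>
```

The proposal amounts to letting the <str name="masterUrl"> line be omitted when every fetchIndex command supplies its own.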
multi-core solr, specifying the data directory
Unless I'm doing something wrong, in my experience with multi-core Solr in 1.4.1, you NEED to explicitly provide an absolute path to the 'data' dir. I set up multi-core like this:

<cores adminPath="/admin/cores">
  <core name="some_core" instanceDir="some_core">
  </core>
</cores>

Now, setting instanceDir like that works for Solr to look for the 'conf' directory in the default location you'd expect, ./some_core/conf. You'd expect it to look for the 'data' dir for an index in ./some_core/data too, by default. But it does not seem to. It's still looking for the 'data' directory in the _main_ solr.home/data, not under the relevant core directory. The only way I can manage to get it to look for the /data directory where I expect is to spell it out with a full absolute path:

<core name="some_core" instanceDir="some_core">
  <property name="dataDir" value="/path/to/main/solr/some_core/data" />
</core>

And then in the solrconfig.xml do a <dataDir>${dataDir}</dataDir>

Is this what everyone else does too? Or am I missing a better way of doing this? I would have thought it would just work, with Solr by default looking for a ./data subdir of the specified instanceDir. But it definitely doesn't seem to do that. Should it? Anyone know if Solr in trunk past 1.4.1 has been changed to do what I expect? Or am I wrong to expect it? Or does everyone else do multi-core in some different way than me where this doesn't come up? Jonathan
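One variation that avoids hardcoding the absolute path in two places: solr.xml per-core properties can reference other properties, so the dataDir value can be built from solr.home rather than spelled out literally. A sketch under the assumption that ${solr.home} is substitutable there (worth verifying on 1.4.1 before relying on it):

```xml
<!-- solr.xml sketch: dataDir derived from solr.home instead of a literal path -->
<core name="some_core" instanceDir="some_core">
  <property name="dataDir" value="${solr.home}/some_core/data" />
</core>
```

The solrconfig.xml side stays the same: <dataDir>${dataDir}</dataDir>.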
Re: Indexed, but cannot search
q=dog is equivalent to q=text:dog (where the default search field is defined as "text" at the bottom of schema.xml). If you want to search a different field, well, you need to tell it :-) Is that it? Upayavira

On Mon, 28 Feb 2011 15:38 -0500, Brian Lamb brian.l...@journalexperts.com wrote: Hi all, I was able to get my installation of Solr indexed using dataimport. However, I cannot seem to get search working. I can verify that the data is there by going to: http://localhost:8983/solr/select/?q=*%3A*&version=2.2&start=0&rows=10&indent=on This gives me the response: <result name="response" numFound="234961" start="0"> But when I go to http://localhost:8983/solr/select/?q=dog&version=2.2&start=0&rows=10&indent=on I get the response: <result name="response" numFound="0" start="0"> I know that "dog" should return some results because it is in the first result when I select all the records. So what am I doing incorrectly that would prevent me from seeing results?

---
Enterprise Search Consultant at Sourcesense UK, Making Sense of Open Source
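In other words, either name the field explicitly in the query (q=some_field:dog -- field name hypothetical), or route the content into the default field at index time. A sketch of the relevant schema.xml pieces, with a hypothetical source field:

```xml
<!-- schema.xml sketch: copy a DIH-populated field into the default search field -->
<defaultSearchField>text</defaultSearchField>
<copyField source="description" dest="text"/>
```

After changing copyField rules you need to re-run the import, since copying happens at index time.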
Disabling caching for fq param?
Based on what I've read here and what I could find on the web, it seems that each fq clause essentially gets its own results cache. Is that correct? We have a corporate policy of passing the user's Oracle OLS labels into the index in order to be matched against the labels field. I currently separate this from the user's query text by sticking it into an fq param:

?q=<user-entered expression>
&fq=labels:<the label values expression>
&qf=<song metadata copy field> <song lyrics field>
&tie=0.1
&defType=dismax

...but since its value (a collection of hundreds of label values) applies only to that user, the accompanying result set won't be reusable by other users. My understanding is that this query will result in two result sets (q and fq) being cached separately, with the intersection of the two sets being returned to the user. (Is that correct?) There are thousands of users, each with a unique combination of labels, so there seems to be little value in caching the result set created from the fq labels param. It would be beneficial if there were some kind of fq parameter override to tell Solr not to cache the results. Thanks!

--
View this message in context: http://lucene.472066.n3.nabble.com/Disabling-caching-for-fq-param-tp2600188p2600188.html Sent from the Solr - User mailing list archive at Nabble.com.
Re: Distances in spatial search (Solr 4.0)
Use sort with geodist() to sort by distance. Getting the distance returned is documented on the wiki if you are not using score; see the reference to _val_. Bill Bell. Sent from mobile.

On Feb 28, 2011, at 7:54 AM, Alexandre Rocco alel...@gmail.com wrote: Hi guys, We are implementing a separate index on our website that will be dedicated to spatial search. I've downloaded a build of Solr 4.0 to try the spatial features and got geodist working really fast. We now have 2 other features that will be needed on this project: 1. Returning the distance from the reference point to the search hit (in kilometers). 2. Sorting by the distance. On item 2, the wiki doc points out that a distance function can be used, but I was not able to find good info on how to accomplish it. Also, returning the distance (item 1) is noted as currently being in development and there is some workaround to get it. Has anyone had experience with the spatial feature who could help with some pointers on how to achieve it? Thanks, Alexandre
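As a rough illustration of the sorting part (parameter values are made up; sfield and pt are the Solr 4.0 spatial parameters, and the location field name is hypothetical):

```
/select?q=*:*&sfield=location&pt=-23.55,-46.63&sort=geodist() asc
```

geodist() distances are in kilometers, which matches requirement 1; whether the per-document distance can be returned directly (rather than via the score or the _val_ workaround Bill mentions) depends on the trunk build.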
RE: Disabling caching for fq param?
As far as I know there is not. It might be beneficial, but it is also worth considering: thousands of users isn't _that_ many, and if that clause is always the same per user, then when the same user does a query a second time, it wouldn't hurt to have their user-specific fq in the cache. A single fq cache entry may not take as much RAM as you think; you could potentially afford to increase your fq cache size to thousands/tens-of-thousands, and win all the way around. The filter cache should be a least-recently-used-out-first cache, so even if it isn't big enough for all of them, fq's that are used by more than one user will probably stay in the cache as old user-specific fq's fall off the back as least-recently-used. So in actual practice, one way or another, it may not be a problem.

From: mrw [mikerobertsw...@gmail.com] Sent: Monday, February 28, 2011 9:06 PM To: solr-user@lucene.apache.org Subject: Disabling caching for fq param?

Based on what I've read here and what I could find on the web, it seems that each fq clause essentially gets its own results cache. Is that correct? We have a corporate policy of passing the user's Oracle OLS labels into the index in order to be matched against the labels field. I currently separate this from the user's query text by sticking it into an fq param... ?q=<user-entered expression> &fq=labels:<the label values expression> &qf=<song metadata copy field> <song lyrics field> &tie=0.1 &defType=dismax ...but since its value (a collection of hundreds of label values) applies only to that user, the accompanying result set won't be reusable by other users. My understanding is that this query will result in two result sets (q and fq) being cached separately, with the intersection of the two sets being returned to the user. (Is that correct?) There are thousands of users, each with a unique combination of labels, so there seems to be little value in caching the result set created from the fq labels param.
It would be beneficial if there were some kind of fq parameter override to tell Solr not to cache the results. Thanks!

--
View this message in context: http://lucene.472066.n3.nabble.com/Disabling-caching-for-fq-param-tp2600188p2600188.html Sent from the Solr - User mailing list archive at Nabble.com.
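For reference, the cache-size knob Jonathan mentions lives in solrconfig.xml; the sizes here are illustrative only:

```xml
<!-- solrconfig.xml sketch: a larger filter cache so per-user fq entries can coexist -->
<filterCache class="solr.LRUCache" size="16384" initialSize="4096" autowarmCount="0"/>
```

Each entry for a large result set is roughly a bitset over the index (about maxDoc/8 bytes), which is what makes holding thousands of entries often affordable in practice.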
Re: multi-core solr, specifying the data directory
I spent a few hours chasing my tail on this one; I really just assumed that the core's data dir would be under the core's instance dir. But I ended up doing exactly what you did (copying from something I found on the web). Seems like a design flaw that could be difficult to fix without breaking some older setups? -Mike

On 2/28/2011 6:00 PM, Jonathan Rochkind wrote: Unless I'm doing something wrong, in my experience with multi-core Solr in 1.4.1, you NEED to explicitly provide an absolute path to the 'data' dir. I set up multi-core like this: <cores adminPath="/admin/cores"> <core name="some_core" instanceDir="some_core"> </core> </cores> Now, setting instanceDir like that works for Solr to look for the 'conf' directory in the default location you'd expect, ./some_core/conf. You'd expect it to look for the 'data' dir for an index in ./some_core/data too, by default. But it does not seem to. It's still looking for the 'data' directory in the _main_ solr.home/data, not under the relevant core directory. The only way I can manage to get it to look for the /data directory where I expect is to spell it out with a full absolute path: <core name="some_core" instanceDir="some_core"> <property name="dataDir" value="/path/to/main/solr/some_core/data" /> </core> And then in the solrconfig.xml do a <dataDir>${dataDir}</dataDir> Is this what everyone else does too? Or am I missing a better way of doing this? I would have thought it would just work, with Solr by default looking for a ./data subdir of the specified instanceDir. But it definitely doesn't seem to do that. Should it? Anyone know if Solr in trunk past 1.4.1 has been changed to do what I expect? Or am I wrong to expect it? Or does everyone else do multi-core in some different way than me where this doesn't come up? Jonathan
Query on multivalue field
Hi, I have a variable number of text-based fields associated with each primary record, across which I want to search. I'd like to avoid dynamic fields if possible, and avoid creating a different document type in the index (the app is built around the primary record, and different views would mean a lot of work to revamp pagination etc.). So, is there a way to apply a query to each value of a multiValued field, or is it always treated as a single field from a query perspective? Thanks. Scott.
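To sketch how multiValued fields behave (field and type names here are hypothetical): all values are indexed into the same underlying field, so a term query matches the document if any value matches; the main per-value control is positionIncrementGap, which keeps phrase and proximity queries from matching across value boundaries:

```xml
<!-- schema.xml sketch: a gap of 100 positions between successive values -->
<fieldType name="text" class="solr.TextField" positionIncrementGap="100">
  <analyzer>
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
  </analyzer>
</fieldType>
<field name="notes" type="text" indexed="true" stored="true" multiValued="true"/>
```

There is no built-in way to query or score each value independently; if per-value matching matters, the usual alternatives are dynamic fields or one document per value.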
Re: solr.xml isn't loaded from classpath?
: It seems like when solr home is absent, Solr makes an attempt to look : a few other places to load its configuration. It will try to look for : solrconfig.xml on the classpath as well. It doesn't seem like it makes : any attempt to find solr.xml though. Why is that? Read below for the : larger narrative...

First off: there is always a solr home, even if you don't want to use it. It has to exist, either explicitly declared, or implicitly it uses ./solr/ in the CWD -- even if that directory doesn't exist, as a last resort that directory is used.

Second: the ability to load files out of the classpath has always been something of a misfeature. It existed in the early versions of Solr largely as a fluke, and it has caused some problems in the past. My personal pet peeve is that it really makes it hard to provide good error messages when people have typos in paths/filenames; but more recently (as I understand it) it is causing problems with the SolrCloud efforts, so classpath loading of config files may not be supported in Solr 4.x.

Third: the very specific and concrete reason why solr.xml can't be loaded from the classpath (and never has been) is that from day one, when solr.xml (and multicore support) was added, Solr supported CoreAdmin functionality which allows you to add SolrCores at runtime and persist the changes to solr.xml -- which means it can't be something read from the classpath; it has to be something with a real concrete path that Solr can write to. In theory, the code could be teased apart to make all of the persistence optional if and only if the file isn't loaded from the classpath, but as I mentioned: classpath loading in general may not actually be supported long term.

: Having this configuration discovery makes things really convenient for : creating custom Solr web applications where you can throw all of Solr's : config in your resources, create a war, deploy it to Tomcat and it : happily loads.
No setting of environment variables or setup required. ... : And when you try to run this, Solr can't find what it needs to start up. : To fix this, we manually deployed the configuration on the web server : and set the solr/home environment variable on the web app's config : within Tomcat. Not ideal, and it makes automation awkward. : : Ultimately, I want a completely packaged war for a multicore instance I : can drop anywhere without additional setup. Is this possible? Am I : approaching this wrong?

Well, my first question would be: where are you keeping the index? Assuming you don't care about persistence of data, and are okay with your data vanishing every time the webapp is reloaded (and that's all I can imagine you can do if you don't have any physical files on disk), my suggestion would be to have your custom war use a temp dir to create a solr home, and expand all of your config files into that (and use it for keeping the data).

Alternately: assuming you start using 3.x, you can keep doing what you've been doing with the single-core approach, but have your own init code in your war (that runs after SolrDispatchFilter.init()) use the CoreAdmin commands to dynamically register the various SolrCores you want (on the 3x branch, even when CoreContainer can't find a solr.xml, it uses a hardcoded default that initializes one core configured to be the default, and sets up the URL /admin/cores for doing non-persistent core administration).

-Hoss
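For the Tomcat case specifically, the usual non-environment-variable way to pin solr/home is a context fragment deployed alongside the war, per the SolrTomcat wiki; paths here are made up:

```xml
<!-- e.g. conf/Catalina/localhost/solr.xml in the Tomcat tree -->
<Context docBase="/opt/deploy/solr.war" debug="0" crossContext="true">
  <Environment name="solr/home" type="java.lang.String" value="/opt/deploy/solr-home" override="true"/>
</Context>
```

This keeps the war itself generic while the deployment target supplies the home directory, which is friendlier to automation than editing JAVA_OPTS per host.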