Re: Handling disparate data sources in Solr
Chris Hostetter wrote:
> what do you guys think?

I'm going to spend some time today looking at the Solr source and matching your suggestions to it; hopefully I'll be able to give a slightly more considered opinion after that ;-)

I'm in the process of evaluating what we are going to do with the search functionality for http://opensolaris.org, and at the moment Solr is my first choice to replace what we already have - *if* it can be made to handle disparate data sources. If I do decide that we are going to use Solr, I'll be happy to help add whatever extra functionality is needed to satisfy our requirements. We need this fairly quickly, so I should be able to put a significant amount of time towards getting it done once a design is fleshed out. I'm not a Solr expert (yet! ;-) so I'm grateful for whatever guidance the Solr community can give on how best to go about fulfilling our requirements.

I'm also wondering if we could use Solr to back-end the OpenGrok (http://www.opensolaris.org/os/project/opengrok/) source code search engine that we use on opensolaris.org - having a single search index for both site content and code might be useful, not least because we'd get the benefit of Solr's index distribution machinery. OpenGrok already uses Lucene as its back-end, so it should be possible to do this, although I haven't dug through the OG codebase yet.

--
Alan Burlison
--
Re: Handling disparate data sources in Solr
On Jan 8, 2007, at 4:58 AM, Alan Burlison wrote:
> I'm in the process of evaluating what we are going to do with the search
> functionality for http://opensolaris.org, and at the moment Solr is my
> first choice to replace what we already have - *if* it can be made to
> handle disparate data sources.

There really is no question of if Solr can be made to handle it. :) POSTing an encoded binary document in XML will work, and it certainly will work to have Solr unencode it and parse it.

The Lucene in Action codebase has a DocumentHandler interface that could be used for this, which has implementations for Word, PDF, HTML, RTF, and some others. It's simplistic, so it might not be of value specifically.

Erik
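To make "POSTing an encoded binary document in XML" concrete, here is a rough client-side sketch. It is illustrative only: the doc-level encoding and mime-type attributes follow the proposal discussed in this thread and are not something Solr currently understands, and the BinaryDocRequest class is invented for the example.

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

// Builds an <add> request that carries a binary document base64-encoded
// inside the XML, per the (hypothetical) encoding/mime-type attributes
// discussed in this thread. This only shows the client side; Solr would
// need a matching handler to unencode and parse the payload.
class BinaryDocRequest {
    static String build(byte[] binary, String mimeType) {
        String b64 = Base64.getEncoder().encodeToString(binary);
        return "<add>\n"
             + "  <doc encoding=\"base64\" mime-type=\"" + mimeType + "\">\n"
             + "  " + b64 + "\n"
             + "  </doc>\n"
             + "</add>";
    }
}
```

The resulting string could then be POSTed to the update URL like any other add request.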
Re: Handling disparate data sources in Solr
Erik Hatcher wrote:
> There really is no question of if Solr can be made to handle it. :)

The "if" was a tuits if, not a technical if ;-)

> POSTing an encoded binary document in XML will work, and it certainly will
> work to have Solr unencode it and parse it.

Yes, but the bits aren't there to do this (yet). And I didn't want to do a one-off hack just for our purposes.

> The Lucene in Action codebase has a DocumentHandler interface that could
> be used for this, which has implementations for Word, PDF, HTML, RTF, and
> some others. It's simplistic, so it might not be of value specifically.

Do you have a pointer to the code?

Thanks,

--
Alan Burlison
--
Re: Handling disparate data sources in Solr
On Jan 8, 2007, at 5:45 AM, Alan Burlison wrote:
> Erik Hatcher wrote:
>> The Lucene in Action codebase has a DocumentHandler interface that could
>> be used for this, which has implementations for Word, PDF, HTML, RTF,
>> and some others. It's simplistic, so it might not be of value
>> specifically.
>
> Do you have a pointer to the code?

Sure... http://www.lucenebook.com and "Download source code". The DocumentHandler is in the lia.handlingtypes.framework package.

Erik
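For readers without the book handy, the framework boils down to something like the following sketch. This is not the actual lia.handlingtypes code: the real version returns a Lucene Document, and the class names here (PlainTextHandler, HandlerRegistry) are made up for illustration; a Map keeps the sketch self-contained.

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.HashMap;
import java.util.Map;
import java.util.Scanner;

// Stand-in for the LIA DocumentHandler idea: each handler turns one
// document format into a flat map of field names to values.
interface DocumentHandler {
    Map<String, String> getDocument(InputStream is) throws IOException;
}

// Trivial handler for text/plain: the whole stream becomes the "body" field.
class PlainTextHandler implements DocumentHandler {
    public Map<String, String> getDocument(InputStream is) throws IOException {
        Scanner s = new Scanner(is).useDelimiter("\\A");
        Map<String, String> doc = new HashMap<String, String>();
        doc.put("body", s.hasNext() ? s.next() : "");
        return doc;
    }
}

// Dispatcher that picks a handler by MIME type, the way an extension
// framework inside Solr might route POSTed content to a parser.
class HandlerRegistry {
    private final Map<String, DocumentHandler> handlers =
        new HashMap<String, DocumentHandler>();

    public void register(String mimeType, DocumentHandler h) {
        handlers.put(mimeType, h);
    }

    public Map<String, String> parse(String mimeType, InputStream is)
            throws IOException {
        DocumentHandler h = handlers.get(mimeType);
        if (h == null) throw new IOException("no handler for " + mimeType);
        return h.getDocument(is);
    }
}
```

Handlers for Word, PDF, RTF, etc. would slot in the same way, each wrapping the appropriate extraction library.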
[jira] Created: (SOLR-97) Rakefile now supports functional and unit tests
Rakefile now supports functional and unit tests
---

Key: SOLR-97
URL: https://issues.apache.org/jira/browse/SOLR-97
Project: Solr
Issue Type: Improvement
Components: clients - ruby - flare
Environment: Darwin frizz 8.8.1 Darwin Kernel Version 8.8.1: Mon Sep 25 19:42:00 PDT 2006; root:xnu-792.13.8.obj~1/RELEASE_I386 i386 i386
Reporter: Ed Summers

This patch includes modifications to support both functional and unit tests, split out into separate directories as in RoR. The test server activation was converted from two independent functions into a singleton class with start() and stop() methods. The functional tests have been wrapped with an ensure clause so that the Solr test server will always be shut down - even if an exception was tossed during testing. By default the Solr test server will not log startup messages to STDERR; if it's desirable to see these you can: rake SOLR_CONSOLE=true

--
This message is automatically generated by JIRA.
- If you think it was sent incorrectly contact one of the administrators: https://issues.apache.org/jira/secure/Administrators.jspa
- For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] Updated: (SOLR-20) A simple Java client with Java APIs for add(), delete(), commit() and optimize().
[ https://issues.apache.org/jira/browse/SOLR-20?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Bertrand Delacretaz updated SOLR-20:
Component/s: (was: update) clients - java

A simple Java client with Java APIs for add(), delete(), commit() and optimize().

Key: SOLR-20
URL: https://issues.apache.org/jira/browse/SOLR-20
Project: Solr
Issue Type: New Feature
Components: clients - java
Environment: all
Reporter: Darren Erik Vengroff
Priority: Minor
Attachments: DocumentManagerClient.java, DocumentManagerClient.java, solr-client-java-2.zip.zip, solr-client-java.zip, solr-client-sources.jar, solr-client.zip, solr-client.zip, SolrClientException.java, SolrServerException.java

I wrote a simple little client class that can connect to a Solr server and issue add, delete, commit and optimize commands using Java methods. I'm posting it here for review and comments, as suggested by Yonik.
[jira] Updated: (SOLR-30) Java client code for performing searches against a Solr instance
[ https://issues.apache.org/jira/browse/SOLR-30?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Bertrand Delacretaz updated SOLR-30:
Component/s: (was: search) clients - java

Java client code for performing searches against a Solr instance

Key: SOLR-30
URL: https://issues.apache.org/jira/browse/SOLR-30
Project: Solr
Issue Type: New Feature
Components: clients - java
Reporter: Philip Jacob
Priority: Minor
Attachments: solrsearcher-client.zip

Here are a few classes that connect to a Solr instance to perform searches. Results are returned in a Response object. The Response encapsulates a List<Map<String, Field>> that gives you access to the key data in the results. This is the main part that I'm looking for comments on. There are 2 dependencies for this code: JDOM and Commons HttpClient. I'll remove the JDOM dependency in favor of regular DOM at some point, but I think the HttpClient dependency is worthwhile here. There's a lot that can be exploited with HttpClient that isn't demonstrated in this class. The purpose here is mainly to get feedback on the API of SolrSearcher before I start optimizing anything.
[jira] Updated: (SOLR-51) SolrQuery - PHP query client for Solr
[ https://issues.apache.org/jira/browse/SOLR-51?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Bertrand Delacretaz updated SOLR-51:
Summary: SolrQuery - PHP query client for Solr (was: SolrQuery)

SolrQuery - PHP query client for Solr

Key: SOLR-51
URL: https://issues.apache.org/jira/browse/SOLR-51
Project: Solr
Issue Type: New Feature
Environment: PHP, ADODB, curl
Reporter: Brian Lucas
Attachments: SolrQuery.php

PHP client for querying a SOLR datastore
[jira] Updated: (SOLR-50) SolrUpdate - PHP update client for Solr
[ https://issues.apache.org/jira/browse/SOLR-50?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Bertrand Delacretaz updated SOLR-50:
Summary: SolrUpdate - PHP update client for Solr (was: SolrUpdate)

SolrUpdate - PHP update client for Solr
---

Key: SOLR-50
URL: https://issues.apache.org/jira/browse/SOLR-50
Project: Solr
Issue Type: New Feature
Components: update
Environment: PHP, ADODB, Curl
Reporter: Brian Lucas
Attachments: SolrUpdate.php

This provides the PHP client for adding information to Solr.
Re: prefixquery, phrasequery support in solr
: I would like to know if there is a way to support prefix, phrase, etc.
: queries in Solr. I know that queries such as solr* and "solr" would do
: the trick, but I am looking for a solution similar to the SolrQueryParser
: defaultOperator setting (with q.op={OR|AND} or in the schema config file).
:
: If such support does not exist, would it be a useful thing to have?

I'm having a little trouble understanding what you mean ... can you elaborate with some examples (both of what the configuration would look like, and what the behavior would be with some example inputs)?

-Hoss
Re: Handling disparate data sources in Solr
: The design issue for this is to be clear about the schema and how
: documents are mapped into the schema. If all document types are
: mapped into the same schema, then one type of query will work
: for all. If the documents have different schemas (in the search
: index), then the query needs an expansion specific to each
: document type.

Right, the only way to provide a general purpose solution is to make sure any out-of-the-box UpdateParsers (using the interface names from my previous email) can be configured in solrconfig.xml to map the native concepts in the document format to user-defined schema fields. (People writing their own custom UpdateParsers could always hardcode their schema fields.)

I don't know anything about PDF structure, but using your RFC-2822 email as an example, the configuration for an Rfc2822UpdateParser would need to be able to specify which headers map to which fields, and what to do with body text -- in theory, it could also be configured with references to other UpdateParser instances for dealing with multi-part MIME messages.

(One other good out-of-the-box UpdateParser that I forgot to mention before would be an XSLTUpdateParser that could take in XML in any format the user wanted to send, along with the URL of an XSLT to apply to convert it to the Solr standard <add><doc> format.)

-Hoss
Re: prefixquery, phrasequery support in solr
Ok, let me show an example. Assume I want to perform the following search: cat AND dog. I can perform it 3 different ways:

a) simply search for: cat AND dog (+cat +dog)
b) search for: cat dog (assuming schema.xml has <solrQueryParser defaultOperator="AND"/>)
c) search for: cat dog (passing the q.op=AND query parameter to the request handler).

I am wondering if I can do something similar to b) and c) if I want to perform a prefix query (or phrase query)... Ideally, I would like to have a parameter similar to defaultOperator, like defaultQueryType, which can take on the values PREFIXQUERY or PHRASEQUERY, in which case the query string won't be parsed by QueryParser and will be interpreted as a prefix query. E.g., assume I have <solrQueryParser defaultQueryType="PREFIXQUERY"/> in my schema.xml and I search for: solr. Then it will be interpreted as the solr* query is in the current context (which is a prefix search, because currently the query parser parses the query string by default).

thanks,
mirko

Quoting Chris Hostetter [EMAIL PROTECTED]:

: I would like to know if there is a way to support prefix, phrase, etc.
: queries in Solr. I know that queries such as solr* and "solr" would do
: the trick, but I am looking for a solution similar to the SolrQueryParser
: defaultOperator setting (with q.op={OR|AND} or in the schema config file).
:
: If such support does not exist, would it be a useful thing to have?

I'm having a little trouble understanding what you mean ... can you elaborate with some examples (both of what the configuration would look like, and what the behavior would be with some example inputs)?

-Hoss
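To make the requested behavior concrete, here is a rough sketch of the preprocessing such a setting implies. Nothing here exists in Solr today: the defaultQueryType parameter and the DefaultQueryTypeRewriter class are hypothetical, and a real implementation would build Lucene Query objects rather than rewrite strings.

```java
// Hypothetical illustration of a defaultQueryType setting: instead of
// handing the raw input to the normal query parser, rewrite it into the
// equivalent explicit syntax (term* for PREFIXQUERY, "..." for
// PHRASEQUERY) before parsing.
class DefaultQueryTypeRewriter {
    enum QueryType { PREFIXQUERY, PHRASEQUERY }

    static String rewrite(String q, QueryType type) {
        switch (type) {
            case PREFIXQUERY:
                // each whitespace-separated term becomes a prefix query:
                // "solr" -> "solr*", "cat dog" -> "cat* dog*"
                StringBuilder sb = new StringBuilder();
                for (String term : q.trim().split("\\s+")) {
                    if (sb.length() > 0) sb.append(' ');
                    sb.append(term).append('*');
                }
                return sb.toString();
            case PHRASEQUERY:
                // the whole input becomes one phrase query
                return "\"" + q.trim() + "\"";
        }
        return q;
    }
}
```

With this in place, the user's example of searching for "solr" under defaultQueryType=PREFIXQUERY would behave exactly like an explicit solr* query does now.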
[jira] Commented: (SOLR-69) PATCH:MoreLikeThis support
[ https://issues.apache.org/jira/browse/SOLR-69?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12463125 ]

Ryan McKinley commented on SOLR-69:
---

Thanks, it works great. The only problem I ran into is a null pointer if you do not specify the fields to return (by default all of them, without the score). Just add a not-null check at line 102 of MoreLikeThisHelper.java:

{code}
protected boolean usesScoreField(SolrQueryRequest req) {
  String fl = req.getParams().get(SolrParams.FL);
  if (fl != null) {
    for (String field : splitList.split(fl)) {
      if ("score".equals(field)) return true;
    }
  }
  return false;
}
{code}

PATCH: MoreLikeThis support
--

Key: SOLR-69
URL: https://issues.apache.org/jira/browse/SOLR-69
Project: Solr
Issue Type: Improvement
Components: search
Reporter: Bertrand Delacretaz
Priority: Minor
Attachments: lucene-queries-2.0.0.jar, SOLR-69.patch

Here's a patch that implements simple support of Lucene's MoreLikeThis class. The MoreLikeThisHelper code is heavily based on (hmm... "lifted from" might be more appropriate ;-) Erik Hatcher's example mentioned in http://www.mail-archive.com/solr-user@lucene.apache.org/msg00878.html

To use it, add at least the following parameters to a standard or dismax query:

mlt=true
mlt.fl=list,of,fields,which,define,similarity

See the MoreLikeThisHelper source code for more parameters. Here are two URLs that work with the example config, after loading all documents found in exampledocs into the index (just to show that it seems to work - of course you need a larger corpus to make it interesting):

http://localhost:8983/solr/select/?stylesheet=&q=apache&qt=standard&mlt=true&mlt.fl=manu,cat&mlt.mindf=1&mlt.mindf=1&fl=id,score
http://localhost:8983/solr/select/?stylesheet=&q=apache&qt=dismax&mlt=true&mlt.fl=manu,cat&mlt.mindf=1&mlt.mindf=1&fl=id,score

Results are added to the output like this:

<response>
...
<lst name="moreLikeThis">
  <result name="UTF8TEST" numFound="1" start="0" maxScore="1.5293242">
    <doc>
      <float name="score">1.5293242</float>
      <str name="id">SOLR1000</str>
    </doc>
  </result>
  <result name="SOLR1000" numFound="1" start="0" maxScore="1.5293242">
    <doc>
      <float name="score">1.5293242</float>
      <str name="id">UTF8TEST</str>
    </doc>
  </result>
</lst>

I haven't tested this extensively yet, will do in the next few days. But comments are welcome of course.
[jira] Updated: (SOLR-98) simple python client
[ https://issues.apache.org/jira/browse/SOLR-98?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Yonik Seeley updated SOLR-98:
Attachment: solr.py

simple python client

Key: SOLR-98
URL: https://issues.apache.org/jira/browse/SOLR-98
Project: Solr
Issue Type: New Feature
Components: clients - python
Reporter: Yonik Seeley
Attachments: solr.py

I've had this python client lying around for almost a year, used for various little testing scenarios. It really doesn't do all that much except for connection handling (persistent/non-persistent) and some protocol handling. Agile languages like Python and Ruby need less in any case.
[jira] Commented: (SOLR-98) simple python client
[ https://issues.apache.org/jira/browse/SOLR-98?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12463129 ]

Yonik Seeley commented on SOLR-98:
--

Unless someone has a better starting point for a python client, I'll commit this one to clients/python and we can go from there.

simple python client

Key: SOLR-98
URL: https://issues.apache.org/jira/browse/SOLR-98
Project: Solr
Issue Type: New Feature
Components: clients - python
Reporter: Yonik Seeley
Attachments: solr.py

I've had this python client lying around for almost a year, used for various little testing scenarios. It really doesn't do all that much except for connection handling (persistent/non-persistent) and some protocol handling. Agile languages like Python and Ruby need less in any case.
Re: [VOTE] graduate Solr to Lucene subproject
This vote has passed, and I've just called for a vote within the Lucene PMC.

-Yonik

On 1/4/07, Yonik Seeley [EMAIL PROTECTED] wrote:
> It's time that Solr graduate from the incubator and become an official
> Lucene subproject. So, please cast your votes:
>
> [ ] +1 ask Lucene PMC and the Incubator PMC to graduate Solr from the
>        Incubator to become a Lucene subproject.
> [ ] 0  Don't care
> [ ] -1 Not at this time, stay in the Incubator for now.
>
> -Yonik

--
-Yonik
http://incubator.apache.org/solr Solr, the open-source Lucene search server
Re: Handling disparate data sources in Solr
Chris Hostetter wrote:
> : The design issue for this is to be clear about the schema and how
> : documents are mapped into the schema. If all document types are
> : mapped into the same schema, then one type of query will work
> : for all. If the documents have different schemas (in the search
> : index), then the query needs an expansion specific to each
> : document type.
>
> Right, the only way to provide a general purpose solution is to make sure
> any out-of-the-box UpdateParsers (using the interface names from my
> previous email) can be configured in solrconfig.xml to map the native
> concepts in the document format to user-defined schema fields. (People
> writing their own custom UpdateParsers could always hardcode their schema
> fields.)
>
> I don't know anything about PDF structure

http://en.wikipedia.org/wiki/Extensible_Metadata_Platform
http://partners.adobe.com/public/developer/en/xmp/sdk/XMPspecification.pdf

> but using your RFC-2822 email as an example, the configuration for an
> Rfc2822UpdateParser would need to be able to specify which headers map to
> which fields, and what to do with body text -- in theory, it could also
> be configured with references to other UpdateParser instances for dealing
> with multi-part MIME messages

There are two cases I can think of:

1. The document is already decomposed into fields before the insert/update, but one or more of the fields requires special handling. For example, when indexing source code you could get the author, date, revision etc. from the SCMS, but you might want to process the code itself just to extract identifiers and ignore keywords. You might want different handlers for different languages, but for the resulting tokens all to be stored in the same field, irrespective of language.

2. The document contains both metadata and content. PDF is a good example of such a document type.

You therefore need to be able to specify two types of preprocessing - either at the whole-document level, or at the individual field level.
And for both of these you'd need to be able to specify the mapping between the data/metadata in the source document and the corresponding Solr schema fields. I'm not sure if you'd want this in the solrconfig.xml file or in the indexing request itself. Doing it in solrconfig.xml means you could change the disposition of the indexed data without changing the clients submitting the content. That was the reasoning behind my initial suggestion:

| Extend the <doc> and <field> elements with the following attributes:
|
| mime-type  Mime type of the document, e.g. application/pdf, text/html
|            and so on.
|
| encoding   Encoding of the document, with base64 being the standard
|            implementation.
|
| href       The URL of any documents that can be accessed over HTTP,
|            instead of embedding them in the indexing request. The
|            indexer would fetch the document using the specified URL.
|
| There would then be entries in the configuration file that map each
| MIME type to a handler that is capable of dealing with that document
| type.

So for case 1, where the source is locally accessible, you might have something like this:

<add>
  <doc>
    <field name="author">Alan Burlison</field>
    <field name="revision">1.2</field>
    <field name="date">08-Jan-2007</field>
    <field name="source" mime-type="text/java"
           href="file:///source/org/apache/foo/bar.java"/>
  </doc>
</add>

And for case 2, where the file can't be directly accessed, you might have something like this:

<add>
  <doc encoding="base64" mime-type="application/pdf">
  [base64-encoded version of the PDF file]
  </doc>
</add>

--
Alan Burlison
--
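The header-to-field mapping discussed in this thread is straightforward to sketch on the server side. The class below is purely illustrative (no such Rfc2822UpdateParser exists in Solr): it reads RFC-2822-style headers, renames them according to a configured map (which in Solr would live in solrconfig.xml), and puts everything after the blank line into a body field. It ignores folded headers and MIME parts, which a real implementation would delegate to other parsers as Hoss described.

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.StringReader;
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of the proposed Rfc2822UpdateParser: headers are
// mapped to schema fields via configuration; the text after the first
// blank line becomes the configured body field.
class Rfc2822UpdateParser {
    private final Map<String, String> headerToField;
    private final String bodyField;

    Rfc2822UpdateParser(Map<String, String> headerToField, String bodyField) {
        this.headerToField = headerToField;
        this.bodyField = bodyField;
    }

    Map<String, String> parse(String message) throws IOException {
        Map<String, String> doc = new HashMap<String, String>();
        BufferedReader r = new BufferedReader(new StringReader(message));
        String line;
        // headers end at the first blank line
        while ((line = r.readLine()) != null && line.length() > 0) {
            int colon = line.indexOf(':');
            if (colon < 0) continue;               // skip malformed headers
            String name = line.substring(0, colon).trim();
            String value = line.substring(colon + 1).trim();
            String field = headerToField.get(name);
            if (field != null) doc.put(field, value);  // unmapped headers dropped
        }
        // the remainder of the message is the body
        StringBuilder body = new StringBuilder();
        while ((line = r.readLine()) != null) body.append(line).append('\n');
        doc.put(bodyField, body.toString().trim());
        return doc;
    }
}
```

The same shape works for case 1 above: a per-field handler would receive one field's raw content plus its configured target field name, rather than a whole message.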
[jira] Updated: (SOLR-50) SolrUpdate - PHP update client for Solr
[ https://issues.apache.org/jira/browse/SOLR-50?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Hoss Man updated SOLR-50:
Component/s: (was: update) clients - php

SolrUpdate - PHP update client for Solr
---

Key: SOLR-50
URL: https://issues.apache.org/jira/browse/SOLR-50
Project: Solr
Issue Type: New Feature
Components: clients - php
Environment: PHP, ADODB, Curl
Reporter: Brian Lucas
Attachments: SolrUpdate.php

This provides the PHP client for adding information to Solr.
[jira] Updated: (SOLR-51) SolrQuery - PHP query client for Solr
[ https://issues.apache.org/jira/browse/SOLR-51?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Hoss Man updated SOLR-51:
Component/s: clients - php

SolrQuery - PHP query client for Solr

Key: SOLR-51
URL: https://issues.apache.org/jira/browse/SOLR-51
Project: Solr
Issue Type: New Feature
Components: clients - php
Environment: PHP, ADODB, curl
Reporter: Brian Lucas
Attachments: SolrQuery.php

PHP client for querying a SOLR datastore
Re: Ant build.xml for solr-base (v1.1.0) broken?
: Thanks for the clarification. I'm still not sure why you decided to let
: Ant control the version of JUnit that Solr gets to use (by expecting the
: JUnit JAR to be in ANT_HOME/lib). My Eclipse installation places the Ant

It wasn't an explicit decision on my part, so much as the convention: a) recommended by the Ant FAQ; b) in use in the Java Lucene build system; and c) recommended by Erik Hatcher (whom I trust to know Ant better than anyone else I've ever talked to, even if I haven't had a chance to read either of his books on the subject) ...

http://www.nabble.com/Re%3A-inital-observations-from-a-newbie-p2970514.html
http://www.nabble.com/Re%3A-Contrib-in-oblivion-p1458269.html
http://ant.apache.org/faq.html#delegating-classloader

: and JUnit JARs into different folders within its plugins/ folder, so it
: would seem awkward for me to put another copy of the JUnit JAR into my
: ANT_HOME/lib/. I would imagine that different projects might want to use

ANT_HOME/lib/ seems to be the recommended place to put it ... if you build with Eclipse and Eclipse prefers another place, you should still be fine for compiling -- as long as that other place is in whatever default CLASSPATH Eclipse sets up for you. (Even if the Solr build.xml went out of its way to try and find it for the javac tasks the way it does for javadoc, as you suggested before, that wouldn't help if Eclipse keeps it someplace completely different than ANT_HOME/lib/.)

: different versions of JUnit, which is why other projects I've used
: (e.g., Nutch, Hadoop) typically put the JUnit JAR into their own lib/
: folders. Would you please elaborate on why doing so in the Solr project
: would make for improper JUnit/Ant integration?

I wasn't aware that Nutch/Hadoop included it in their lib dir -- but even if they do, it doesn't mean they get around the classpath problem.
I just tried checking Nutch out of Subversion and found that with Ant 1.6.5 I could not run the ant test target for Nutch without my personal copy of the junit JAR in my ANT_HOME/lib directory. I got this error from Ant (which is what I expected)...

BUILD FAILED
/home/chrish/tmp/nutch-svn/nutch-trunk/build.xml:265: Could not create task or type of type: junit.

Ant could not find the task or a class this task relies upon.

This is common and has a number of causes; the usual solutions are to read the manual pages then download and install needed JAR files, or fix the build file:
 - You have misspelt 'junit'. Fix: check your spelling.
 - The task needs an external JAR file to execute and this is not found at the right place in the classpath. Fix: check the documentation for dependencies. Fix: declare the task.
 - The task is an Ant optional task and the JAR file and/or libraries implementing the functionality were not found at the time you yourself built your installation of Ant from the Ant sources. Fix: Look in the ANT_HOME/lib for the 'ant-' JAR corresponding to the task and make sure it contains more than merely a META-INF/MANIFEST.MF. If all it contains is the manifest, then rebuild Ant with the needed libraries present in ${ant.home}/lib/optional/, or alternatively, download a pre-built release version from apache.org
 - The build file was written for a later version of Ant. Fix: upgrade to at least the latest release version of Ant.
 - The task is not an Ant core or optional task and needs to be declared using taskdef.
 - You are attempting to use a task defined using presetdef or macrodef but have spelt wrong or not defined it at the point of use.

Remember that for JAR files to be visible to Ant tasks implemented in ANT_HOME/lib, the files must be in the same directory or on the classpath.

Please neither file bug reports on this problem, nor email the Ant mailing lists, until all of these causes have been explored, as this is not an Ant bug.

Total time: 16 seconds
[jira] Resolved: (SOLR-97) Rakefile now supports functional and unit tests
[ https://issues.apache.org/jira/browse/SOLR-97?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Erik Hatcher resolved SOLR-97.
--
Resolution: Fixed
Assignee: Erik Hatcher

Applied. Thanks again, Ed! Note: be sure to svn add files before generating patches.

Rakefile now supports functional and unit tests
---

Key: SOLR-97
URL: https://issues.apache.org/jira/browse/SOLR-97
Project: Solr
Issue Type: Improvement
Components: clients - ruby - flare
Environment: Darwin frizz 8.8.1 Darwin Kernel Version 8.8.1: Mon Sep 25 19:42:00 PDT 2006; root:xnu-792.13.8.obj~1/RELEASE_I386 i386 i386
Reporter: Ed Summers
Assigned To: Erik Hatcher
Attachments: split_out_tests.patch

This patch includes modifications to support both functional and unit tests, split out into separate directories as in RoR. The test server activation was converted from two independent functions into a singleton class with start() and stop() methods. The functional tests have been wrapped with an ensure clause so that the Solr test server will always be shut down - even if an exception was tossed during testing. By default the Solr test server will not log startup messages to STDERR; if it's desirable to see these you can: rake SOLR_CONSOLE=true