I have tried very hard to follow the documentation and forum threads that explain how to return snippets with the searched term highlighted using Solr (something Nutch does with such ease).
I would be really grateful if someone could guide me through the basics. I have made sure that the field to be highlighted is "stored" in the index, etc., yet I still can't figure out why Solr doesn't return a snippet and instead returns the whole document. I have tried all the different highlighting parameters with variations, but I have no idea what is happening. Can I test highlighting through the bundled admin application's "full search interface" option? If so, how? At the moment it just returns XML with the full document between the field tags. My solrconfig.xml and schema.xml are pasted below.
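Before those files, for reference, here is a typical request I have been trying (the host, port and query term are just my local setup; "document" is the field I want snippets from):

  http://localhost:8983/solr/select?q=myterm&hl=true&hl.fl=document&hl.snippets=3&hl.fragsize=100

As far as I understand from the Solr wiki, the snippets should come back in a separate <lst name="highlighting"> section of the response rather than replacing the stored field inside <doc>, yet all I ever see is the full stored value between the field tags.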
<?xml version="1.0" encoding="UTF-8" ?> <!-- Licensed to the Apache Software Foundation (ASF) under one or more contributor license agreements. See the NOTICE file distributed with this work for additional information regarding copyright ownership. The ASF licenses this file to You under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> <config> <!-- Set this to 'false' if you want solr to continue working after it has encountered an severe configuration error. In a production environment, you may want solr to keep working even if one handler is mis-configured. You may also set this to false using by setting the system property: -Dsolr.abortOnConfigurationError=false --> <abortOnConfigurationError>${solr.abortOnConfigurationError:true}</abortOnConfigurationError> <!-- Used to specify an alternate directory to hold all index data other than the default ./data under the Solr home. If replication is in use, this should match the replication configuration. --> <!-- <dataDir>./solr/data</dataDir> --> <indexDefaults> <!-- Values here affect all index writers and act as a default unless overridden. --> <useCompoundFile>false</useCompoundFile> <mergeFactor>5</mergeFactor> <maxBufferedDocs>100</maxBufferedDocs> <maxMergeDocs>2147483647</maxMergeDocs> <maxFieldLength>10000</maxFieldLength> <writeLockTimeout>1000</writeLockTimeout> <commitLockTimeout>10000</commitLockTimeout> </indexDefaults> <mainIndex> <!-- options specific to the main on-disk lucene index --> <useCompoundFile>false</useCompoundFile> <mergeFactor>5</mergeFactor> <maxBufferedDocs>1000</maxBufferedDocs> <maxMergeDocs>2147483647</maxMergeDocs> <maxFieldLength>10000</maxFieldLength> <!-- If true, unlock any held write or commit locks on startup. This defeats the locking mechanism that allows multiple processes to safely access a lucene index, and should be used with care. --> <unlockOnStartup>false</unlockOnStartup> </mainIndex> <!-- the default high-performance update handler --> <updateHandler class="solr.DirectUpdateHandler2"> <!-- A prefix of "solr." for class names is an alias that causes solr to search appropriate packages, including org.apache.solr.(search|update|request|core|analysis) --> <!-- autocommit pending docs if certain criteria are met <autoCommit> <maxDocs>10000</maxDocs> <maxTime>1000</maxTime> </autoCommit> --> <autoCommit> <maxDocs>1000</maxDocs> <maxTime>1000</maxTime> </autoCommit> <!-- The RunExecutableListener executes an external command. exe - the name of the executable to run dir - dir to use as the current working directory. default="." wait - the calling thread waits until the executable returns. default="true" args - the arguments to pass to the program. default=nothing env - environment variables to set. 
default=nothing --> <!-- A postCommit event is fired after every commit or optimize command <listener event="postCommit" class="solr.RunExecutableListener"> <str name="exe">snapshooter</str> <str name="dir">solr/bin</str> <bool name="wait">true</bool> <arr name="args"> <str>arg1</str> <str>arg2</str> </arr> <arr name="env"> <str>MYVAR=val1</str> </arr> </listener> --> <!-- A postOptimize event is fired only after every optimize command, useful in conjunction with index distribution to only distribute optimized indicies <listener event="postOptimize" class="solr.RunExecutableListener"> <str name="exe">snapshooter</str> <str name="dir">solr/bin</str> <bool name="wait">true</bool> </listener> --> </updateHandler> <query> <!-- Maximum number of clauses in a boolean query... can affect range or prefix queries that expand to big boolean queries. An exception is thrown if exceeded. --> <maxBooleanClauses>1024</maxBooleanClauses> <!-- Cache used by SolrIndexSearcher for filters (DocSets), unordered sets of *all* documents that match a query. When a new searcher is opened, its caches may be prepopulated or "autowarmed" using data from caches in the old searcher. autowarmCount is the number of items to prepopulate. For LRUCache, the autowarmed items will be the most recently accessed items. Parameters: class - the SolrCache implementation (currently only LRUCache) size - the maximum number of entries in the cache initialSize - the initial capacity (number of entries) of the cache. (seel java.util.HashMap) autowarmCount - the number of entries to prepopulate from and old cache. --> <filterCache class="solr.LRUCache" size="512" initialSize="512" autowarmCount="256"/> <!-- queryResultCache caches results of searches - ordered lists of document ids (DocList) based on a query, a sort, and the range of documents requested. --> <queryResultCache class="solr.LRUCache" size="512" initialSize="512" autowarmCount="256"/> <!-- documentCache caches Lucene Document objects (the stored fields for each document). Since Lucene internal document ids are transient, this cache will not be autowarmed. --> <documentCache class="solr.LRUCache" size="512" initialSize="512" autowarmCount="0"/> <!-- If true, stored fields that are not requested will be loaded lazily. This can result in a significant speed improvement if the usual case is to not load all stored fields, especially if the skipped fields are large compressed text fields. --> <enableLazyFieldLoading>true</enableLazyFieldLoading> <!-- Example of a generic cache. These caches may be accessed by name through SolrIndexSearcher.getCache(),cacheLookup(), and cacheInsert(). The purpose is to enable easy caching of user/application level data. The regenerator argument should be specified as an implementation of solr.search.CacheRegenerator if autowarming is desired. --> <!-- <cache name="myUserCache" class="solr.LRUCache" size="4096" initialSize="1024" autowarmCount="1024" regenerator="org.mycompany.mypackage.MyRegenerator" /> --> <!-- An optimization that attempts to use a filter to satisfy a search. If the requested sort does not include score, then the filterCache will be checked for a filter matching the query. If found, the filter will be used as the source of document ids, and then the sort will be applied to that. <useFilterForSortedQuery>true</useFilterForSortedQuery> --> <!-- An optimization for use with the queryResultCache. When a search is requested, a superset of the requested number of document ids are collected. 
For example, if a search for a particular query requests matching documents 10 through 19, and queryWindowSize is 50, then documents 0 through 50 will be collected and cached. Any further requests in that range can be satisfied via the cache. --> <queryResultWindowSize>10</queryResultWindowSize> <!-- This entry enables an int hash representation for filters (DocSets) when the number of items in the set is less than maxSize. For smaller sets, this representation is more memory efficient, more efficient to iterate over, and faster to take intersections. --> <HashDocSet maxSize="3000" loadFactor="0.75"/> <!-- boolToFilterOptimizer converts boolean clauses with zero boost into cached filters if the number of docs selected by the clause exceeds the threshold (represented as a fraction of the total index) --> <boolTofilterOptimizer enabled="true" cacheSize="32" threshold=".05"/> <!-- a newSearcher event is fired whenever a new searcher is being prepared and there is a current searcher handling requests (aka registered). --> <!-- QuerySenderListener takes an array of NamedList and executes a local query request for each NamedList in sequence. --> <!-- <listener event="newSearcher" class="solr.QuerySenderListener"> <arr name="queries"> <lst> <str name="q">solr</str> <str name="start">0</str> <str name="rows">10</str> </lst> <lst> <str name="q">rocks</str> <str name="start">0</str> <str name="rows">10</str> </lst> </arr> </listener> --> <!-- a firstSearcher event is fired whenever a new searcher is being prepared but there is no current registered searcher to handle requests or to gain autowarming data from. --> <!-- <listener event="firstSearcher" class="solr.QuerySenderListener"> <arr name="queries"> <lst> <str name="q">fast_warm</str> <str name="start">0</str> <str name="rows">10</str> </lst> </arr> </listener> --> <!-- If a search request comes in and there is no current registered searcher, then immediately register the still warming searcher and use it. If "false" then all requests will block until the first searcher is done warming. --> <useColdSearcher>false</useColdSearcher> <!-- Maximum number of searchers that may be warming in the background concurrently. An error is returned if this limit is exceeded. Recommend 1-2 for read-only slaves, higher for masters w/o cache warming. --> <maxWarmingSearchers>4</maxWarmingSearchers> </query> <!-- Let the dispatch filter handler /select?qt=XXX handleSelect=true will use consistent error handling for /select and /update handleSelect=false will use solr1.1 style error formatting --> <requestDispatcher handleSelect="true" > <!--Make sure your system has some authentication before enabling remote streaming! --> <requestParsers enableRemoteStreaming="false" multipartUploadLimitInKB="2048" /> </requestDispatcher> <!-- requestHandler plugins... incoming queries will be dispatched to the correct handler based on the qt (query type) param matching the name of registered handlers. The "standard" request handler is the default and will be used if qt is not specified in the request. --> <requestHandler name="standard" class="solr.StandardRequestHandler"> <!-- default values for query parameters --> <lst name="defaults"> <str name="echoParams">explicit</str> <!-- <int name="rows">10</int> <str name="fl">*</str> <str name="version">2.1</str> --> </lst> </requestHandler> <!-- DisMaxRequestHandler allows easy searching across multiple fields for simple user-entered phrases. 
see http://wiki.apache.org/solr/DisMaxRequestHandler --> <requestHandler name="dismax" class="solr.DisMaxRequestHandler" > <lst name="defaults"> <str name="echoParams">explicit</str> <float name="tie">0.01</float> <str name="qf"> text^0.5 features^1.0 name^1.2 sku^1.5 id^10.0 manu^1.1 cat^1.4 </str> <str name="pf"> text^0.2 features^1.1 name^1.5 manu^1.4 manu_exact^1.9 </str> <str name="bf"> ord(poplarity)^0.5 recip(rord(price),1,1000,1000)^0.3 </str> <str name="fl"> id,name,price,score </str> <str name="mm"> 2<-1 5<-2 6<90% </str> <int name="ps">100</int> <str name="q.alt">*:*</str> </lst> </requestHandler> <!-- Note how you can register the same handler multiple times with different names (and different init parameters) --> <requestHandler name="partitioned" class="solr.DisMaxRequestHandler" > <lst name="defaults"> <str name="echoParams">explicit</str> <str name="qf">text^0.5 features^1.0 name^1.2 sku^1.5 id^10.0</str> <str name="mm">2<-1 5<-2 6<90%</str> <!-- This is an example of using Date Math to specify a constantly moving date range in a config... --> <str name="bq">incubationdate_dt:[* TO NOW/DAY-1MONTH]^2.2</str> </lst> <!-- In addition to defaults, "appends" params can be specified to identify values which should be appended to the list of multi-val params from the query (or the existing "defaults"). In this example, the param "fq=instock:true" will be appended to any query time fq params the user may specify, as a mechanism for partitioning the index, independent of any user selected filtering that may also be desired (perhaps as a result of faceted searching). NOTE: there is *absolutely* nothing a client can do to prevent these "appends" values from being used, so don't use this mechanism unless you are sure you always want it. --> <lst name="appends"> <str name="fq">inStock:true</str> </lst> <!-- "invariants" are a way of letting the Solr maintainer lock down the options available to Solr clients. Any params values specified here are used regardless of what values may be specified in either the query, the "defaults", or the "appends" params. In this example, the facet.field and facet.query params are fixed, limiting the facets clients can use. Faceting is not turned on by default - but if the client does specify facet=true in the request, these are the only facets they will be able to see counts for; regardless of what other facet.field or facet.query params they may specify. NOTE: there is *absolutely* nothing a client can do to prevent these "invariants" values from being used, so don't use this mechanism unless you are sure you always want it. --> <lst name="invariants"> <str name="facet.field">cat</str> <str name="facet.field">manu_exact</str> <str name="facet.query">price:[* TO 500]</str> <str name="facet.query">price:[500 TO *]</str> </lst> </requestHandler> <requestHandler name="instock" class="solr.DisMaxRequestHandler" > <!-- for legacy reasons, DisMaxRequestHandler will assume all init params are "defaults" if you don't explicitly specify any defaults. --> <str name="fq"> inStock:true </str> <str name="qf"> text^0.5 features^1.0 name^1.2 sku^1.5 id^10.0 manu^1.1 cat^1.4 </str> <str name="mm"> 2<-1 5<-2 6<90% </str> </requestHandler> <!-- SpellCheckerRequestHandler takes in a word (or several words) as the value of the "q" parameter and returns a list of alternative spelling suggestions. If invoked with a ...&cmd=rebuild, it will rebuild the spellchecker index. 
--> <requestHandler name="spellchecker" class="solr.SpellCheckerRequestHandler" startup="lazy"> <!-- default values for query parameters --> <lst name="defaults"> <int name="suggestionCount">1</int> <float name="accuracy">0.5</float> </lst> <!-- Main init params for handler --> <!-- The directory where your SpellChecker Index should live. --> <!-- May be absolute, or relative to the Solr "dataDir" directory. --> <!-- If this option is not specified, a RAM directory will be used --> <str name="spellcheckerIndexDir">spell</str> <!-- the field in your schema that you want to be able to build --> <!-- your spell index on. This should be a field that uses a very --> <!-- simple FieldType without a lot of Analysis (ie: string) --> <str name="termSourceField">word</str> </requestHandler> <!-- Update request handler. Note: Since solr1.1 requestHandlers requires a valid content type header if posted in the body. For example, curl now requires: -H 'Content-type:text/xml; charset=utf-8' The response format differs from solr1.1 formatting and returns a standard error code. To enable solr1.1 behavior, remove the /update handler or change its path --> <requestHandler name="/update" class="solr.XmlUpdateRequestHandler" /> <!-- CSV update handler, loaded on demand --> <requestHandler name="/update/csv" class="solr.CSVRequestHandler" startup="lazy" /> <!-- Admin Handlers. TODO? There could be a single handler that loads them all... --> <requestHandler name="/admin/luke" class="org.apache.solr.handler.admin.LukeRequestHandler" /> <requestHandler name="/admin/system" class="org.apache.solr.handler.admin.SystemInfoHandler" /> <requestHandler name="/admin/plugins" class="org.apache.solr.handler.admin.PluginInfoHandler" /> <requestHandler name="/admin/threads" class="org.apache.solr.handler.admin.ThreadDumpHandler" /> <requestHandler name="/admin/properties" class="org.apache.solr.handler.admin.PropertiesRequestHandler" /> <!-- Echo the request contents back to the client --> <requestHandler name="/debug/dump" class="solr.DumpRequestHandler" > <lst name="defaults"> <str name="echoParams">explicit</str> <!-- for all params (including the default etc) use: 'all' --> <str name="echoHandler">true</str> </lst> </requestHandler> <!-- queryResponseWriter plugins... query responses will be written using the writer specified by the 'wt' request parameter matching the name of a registered writer. The "standard" writer is the default and will be used if 'wt' is not specified in the request. XMLResponseWriter will be used if nothing is specified here. The json, python, and ruby writers are also available by default. <queryResponseWriter name="standard" class="org.apache.solr.request.XMLResponseWriter"/> <queryResponseWriter name="json" class="org.apache.solr.request.JSONResponseWriter"/> <queryResponseWriter name="python" class="org.apache.solr.request.PythonResponseWriter"/> <queryResponseWriter name="ruby" class="org.apache.solr.request.RubyResponseWriter"/> <queryResponseWriter name="custom" class="com.example.MyResponseWriter"/> --> <!-- XSLT response writer transforms the XML output by any xslt file found in Solr's conf/xslt directory. Changes to xslt files are checked for every xsltCacheLifetimeSeconds. 
--> <queryResponseWriter name="xslt" class="org.apache.solr.request.XSLTResponseWriter"> <int name="xsltCacheLifetimeSeconds">5</int> </queryResponseWriter> <!-- config for the admin interface --> <admin> <defaultQuery>solr</defaultQuery> <gettableFiles>solrconfig.xml schema.xml admin-extra.html</gettableFiles> <!-- pingQuery should be "URLish" ... & separated key=val pairs ... but there shouldn't be any URL escaping of the values --> <pingQuery> qt=standard&q=solrpingquery </pingQuery> <!-- configure a healthcheck file for servers behind a loadbalancer <healthcheck type="file">server-enabled</healthcheck> --> </admin> </config>
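In case it helps, this is the change I was planning to try next in solrconfig.xml: a minimal sketch (based on my reading of the Solr wiki, not verified) that would turn highlighting on by default in the standard request handler, asking for up to three ~100-character fragments from my "document" field:

  <requestHandler name="standard" class="solr.StandardRequestHandler">
    <lst name="defaults">
      <str name="echoParams">explicit</str>
      <!-- assumed highlighting defaults; same effect as passing hl=true&hl.fl=document on every query -->
      <str name="hl">true</str>
      <str name="hl.fl">document</str>
      <str name="hl.snippets">3</str>
      <str name="hl.fragsize">100</str>
    </lst>
  </requestHandler>

Is that the right place to put these, or should they only be passed on the query string?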
<?xml version="1.0" encoding="UTF-8" ?> <!-- Licensed to the Apache Software Foundation (ASF) under one or more contributor license agreements. See the NOTICE file distributed with this work for additional information regarding copyright ownership. The ASF licenses this file to You under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> <!-- This is the Solr schema file. This file should be named "schema.xml" and should be in the conf directory under the solr home (i.e. ./solr/conf/schema.xml by default) or located where the classloader for the Solr webapp can find it. This example schema is the recommended starting point for users. It should be kept correct and concise, usable out-of-the-box. For more information, on how to customize this file, please see http://wiki.apache.org/solr/SchemaXml --> <schema name="example" version="1.1"> <!-- attribute "name" is the name of this schema and is only used for display purposes. Applications should change this to reflect the nature of the search collection. version="1.1" is Solr's version number for the schema syntax and semantics. It should not normally be changed by applications. 1.0: multiValued attribute did not exist, all fields are multiValued by nature 1.1: multiValued attribute introduced, false by default --> <types> <!-- field type definitions. The "name" attribute is just a label to be used by field definitions. The "class" attribute and any other attributes determine the real behavior of the fieldType. Class names starting with "solr" refer to java classes in the org.apache.solr.analysis package. --> <!-- The StrField type is not analyzed, but indexed/stored verbatim. - StrField and TextField support an optional compressThreshold which limits compression (if enabled in the derived fields) to values which exceed a certain size (in characters). --> <fieldType name="string" class="solr.StrField" sortMissingLast="true" omitNorms="true"/> <!-- boolean type: "true" or "false" --> <fieldType name="boolean" class="solr.BoolField" sortMissingLast="true" omitNorms="true"/> <!-- The optional sortMissingLast and sortMissingFirst attributes are currently supported on types that are sorted internally as strings. - If sortMissingLast="true", then a sort on this field will cause documents without the field to come after documents with the field, regardless of the requested sort order (asc or desc). - If sortMissingFirst="true", then a sort on this field will cause documents without the field to come before documents with the field, regardless of the requested sort order. - If sortMissingLast="false" and sortMissingFirst="false" (the default), then default lucene sorting will be used which places docs without the field first in an ascending sort and last in a descending sort. 
--> <!-- numeric field types that store and index the text value verbatim (and hence don't support range queries, since the lexicographic ordering isn't equal to the numeric ordering) --> <fieldType name="integer" class="solr.IntField" omitNorms="true"/> <fieldType name="long" class="solr.LongField" omitNorms="true"/> <fieldType name="float" class="solr.FloatField" omitNorms="true"/> <fieldType name="double" class="solr.DoubleField" omitNorms="true"/> <!-- Numeric field types that manipulate the value into a string value that isn't human-readable in its internal form, but with a lexicographic ordering the same as the numeric ordering, so that range queries work correctly. --> <fieldType name="sint" class="solr.SortableIntField" sortMissingLast="true" omitNorms="true"/> <fieldType name="slong" class="solr.SortableLongField" sortMissingLast="true" omitNorms="true"/> <fieldType name="sfloat" class="solr.SortableFloatField" sortMissingLast="true" omitNorms="true"/> <fieldType name="sdouble" class="solr.SortableDoubleField" sortMissingLast="true" omitNorms="true"/> <!-- The format for this date field is of the form 1995-12-31T23:59:59Z, and is a more restricted form of the canonical representation of dateTime http://www.w3.org/TR/xmlschema-2/#dateTime The trailing "Z" designates UTC time and is mandatory. Optional fractional seconds are allowed: 1995-12-31T23:59:59.999Z All other components are mandatory. Expressions can also be used to denote calculations that should be performed relative to "NOW" to determine the value, ie... NOW/HOUR ... Round to the start of the current hour NOW-1DAY ... Exactly 1 day prior to now NOW/DAY+6MONTHS+3DAYS ... 6 months and 3 days in the future from the start of the current day Consult the DateField javadocs for more information. --> <fieldType name="date" class="solr.DateField" sortMissingLast="true" omitNorms="true"/> <!-- solr.TextField allows the specification of custom text analyzers specified as a tokenizer and a list of token filters. Different analyzers may be specified for indexing and querying. The optional positionIncrementGap puts space between multiple fields of this type on the same document, with the purpose of preventing false phrase matching across fields. For more info on customizing your analyzer chain, please see http://wiki.apache.org/solr/AnalyzersTokenizersTokenFilters --> <!-- One can also specify an existing Analyzer class that has a default constructor via the class attribute on the analyzer element <fieldType name="text_greek" class="solr.TextField"> <analyzer class="org.apache.lucene.analysis.el.GreekAnalyzer"/> </fieldType> --> <!-- A text field that only splits on whitespace for exact matching of words --> <fieldType name="text_ws" class="solr.TextField" positionIncrementGap="100"> <analyzer> <tokenizer class="solr.WhitespaceTokenizerFactory"/> </analyzer> </fieldType> <!-- A text field that uses WordDelimiterFilter to enable splitting and matching of words on case-change, alpha numeric boundaries, and non-alphanumeric chars, so that a query of "wifi" or "wi fi" could match a document containing "Wi-Fi". Synonyms and stopwords are customized by external files, and stemming is enabled. Duplicate tokens at the same position (which may result from Stemmed Synonyms or WordDelim parts) are removed. 
--> <fieldType name="text" class="solr.TextField" positionIncrementGap="100"> <analyzer type="index"> <tokenizer class="solr.WhitespaceTokenizerFactory"/> <!-- in this example, we will only use synonyms at query time <filter class="solr.SynonymFilterFactory" synonyms="index_synonyms.txt" ignoreCase="true" expand="false"/> --> <filter class="solr.StopFilterFactory" ignoreCase="true" words="stopwords.txt"/> <filter class="solr.WordDelimiterFilterFactory" generateWordParts="1" generateNumberParts="1" catenateWords="1" catenateNumbers="1" catenateAll="0"/> <filter class="solr.LowerCaseFilterFactory"/> <filter class="solr.EnglishPorterFilterFactory" protected="protwords.txt"/> <filter class="solr.RemoveDuplicatesTokenFilterFactory"/> </analyzer> <analyzer type="query"> <tokenizer class="solr.WhitespaceTokenizerFactory"/> <filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt" ignoreCase="true" expand="true"/> <filter class="solr.StopFilterFactory" ignoreCase="true" words="stopwords.txt"/> <filter class="solr.WordDelimiterFilterFactory" generateWordParts="1" generateNumberParts="1" catenateWords="0" catenateNumbers="0" catenateAll="0"/> <filter class="solr.LowerCaseFilterFactory"/> <filter class="solr.EnglishPorterFilterFactory" protected="protwords.txt"/> <filter class="solr.RemoveDuplicatesTokenFilterFactory"/> </analyzer> </fieldType> <fieldType name="textHTML" class="solr.TextField" positionIncrementGap="100"> <analyzer type="index"> <tokenizer class="solr.HTMLStripWhitespaceTokenizerFactory"/> <!-- in this example, we will only use synonyms at query time <filter class="solr.SynonymFilterFactory" synonyms="index_synonyms.txt" ignoreCase="true" expand="false"/> --> <filter class="solr.StopFilterFactory" ignoreCase="true" words="stopwords.txt"/> <filter class="solr.WordDelimiterFilterFactory" generateWordParts="1" generateNumberParts="1" catenateWords="1" catenateNumbers="1" catenateAll="0"/> <filter class="solr.LowerCaseFilterFactory"/> <filter class="solr.EnglishPorterFilterFactory" protected="protwords.txt"/> <filter class="solr.RemoveDuplicatesTokenFilterFactory"/> </analyzer> <analyzer type="query"> <tokenizer class="solr.WhitespaceTokenizerFactory"/> <filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt" ignoreCase="true" expand="true"/> <filter class="solr.StopFilterFactory" ignoreCase="true" words="stopwords.txt"/> <filter class="solr.WordDelimiterFilterFactory" generateWordParts="1" generateNumberParts="1" catenateWords="0" catenateNumbers="0" catenateAll="0"/> <filter class="solr.LowerCaseFilterFactory"/> <filter class="solr.EnglishPorterFilterFactory" protected="protwords.txt"/> <filter class="solr.RemoveDuplicatesTokenFilterFactory"/> </analyzer> </fieldType> <!-- Less flexible matching, but less false matches. Probably not ideal for product names, but may be good for SKUs. Can insert dashes in the wrong place and still match. 
--> <fieldType name="textTight" class="solr.TextField" positionIncrementGap="100" > <analyzer> <tokenizer class="solr.WhitespaceTokenizerFactory"/> <filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt" ignoreCase="true" expand="false"/> <filter class="solr.StopFilterFactory" ignoreCase="true" words="stopwords.txt"/> <filter class="solr.WordDelimiterFilterFactory" generateWordParts="0" generateNumberParts="0" catenateWords="1" catenateNumbers="1" catenateAll="0"/> <filter class="solr.LowerCaseFilterFactory"/> <filter class="solr.EnglishPorterFilterFactory" protected="protwords.txt"/> <filter class="solr.RemoveDuplicatesTokenFilterFactory"/> </analyzer> </fieldType> <!-- This is an example of using the KeywordTokenizer along With various TokenFilterFactories to produce a sortable field that does not include some properties of the source text --> <fieldType name="alphaOnlySort" class="solr.TextField" sortMissingLast="true" omitNorms="true"> <analyzer> <!-- KeywordTokenizer does no actual tokenizing, so the entire input string is preserved as a single token --> <tokenizer class="solr.KeywordTokenizerFactory"/> <!-- The LowerCase TokenFilter does what you expect, which can be when you want your sorting to be case insensitive --> <filter class="solr.LowerCaseFilterFactory" /> <!-- The TrimFilter removes any leading or trailing whitespace --> <filter class="solr.TrimFilterFactory" /> <!-- The PatternReplaceFilter gives you the flexibility to use Java Regular expression to replace any sequence of characters matching a pattern with an arbitrary replacement string, which may include back refrences to portions of the orriginal string matched by the pattern. See the Java Regular Expression documentation for more infomation on pattern and replacement string syntax. http://java.sun.com/j2se/1.5.0/docs/api/java/util/regex/package-summary.html --> <filter class="solr.PatternReplaceFilterFactory" pattern="([^a-z])" replacement="" replace="all" /> </analyzer> </fieldType> <!-- since fields of this type are by default not stored or indexed, any data added to them will be ignored outright --> <fieldtype name="ignored" stored="false" indexed="false" class="solr.StrField" /> </types> <fields> <!-- Valid attributes for fields: name: mandatory - the name for the field type: mandatory - the name of a previously defined type from the <types> section indexed: true if this field should be indexed (searchable or sortable) stored: true if this field should be retrievable compressed: [false] if this field should be stored using gzip compression (this will only apply if the field type is compressable; among the standard field types, only TextField and StrField are) multiValued: true if this field may contain multiple values per document omitNorms: (expert) set to true to omit the norms associated with this field (this disables length normalization and index-time boosting for the field, and saves some memory). Only full-text fields or fields that need an index-time boost need norms. 
--> <field name="id" type="string" indexed="true" stored="true" required="true" /> <field name="sku" type="textTight" indexed="true" stored="true" omitNorms="true"/> <field name="name" type="text" indexed="true" stored="true"/> <field name="nameSort" type="string" indexed="true" stored="false"/> <field name="alphaNameSort" type="alphaOnlySort" indexed="true" stored="false"/> <field name="manu" type="text" indexed="true" stored="true" omitNorms="true"/> <field name="cat" type="text_ws" indexed="true" stored="true" multiValued="true" omitNorms="true"/> <field name="features" type="text" indexed="true" stored="true" multiValued="true"/> <field name="includes" type="text" indexed="true" stored="true"/> <field name="weight" type="sfloat" indexed="true" stored="true"/> <field name="price" type="sfloat" indexed="true" stored="true"/> <!-- "default" values can be specified for fields, indicating which value should be used if no value is specified when adding a document. --> <field name="popularity" type="sint" indexed="true" stored="true" default="0"/> <field name="inStock" type="boolean" indexed="true" stored="true"/> <!-- Some sample docs exists solely to demonstrate the spellchecker functionality, this is the only field they container. Typically you might build the spellchecker of "catchall" type field containing all of the text in each document --> <field name="word" type="string" indexed="true" stored="true"/> <!-- catchall field, containing all other searchable text fields (implemented via copyField further on in this schema --> <field name="text" type="text" indexed="true" stored="false" multiValued="true"/> <!-- non-tokenized version of manufacturer to make it easier to sort or group results by manufacturer. copied from "manu" via copyField --> <field name="manu_exact" type="string" indexed="true" stored="false"/> <!-- Here, default is used to create a "timestamp" field indicating When each document was indexed. --> <field name="timestamp" type="date" indexed="true" stored="true" default="NOW" multiValued="false"/> <!-- ********************************************************* --> <!-- <field name="url" type="string" indexed="true" stored="true" required="true" /> --> <field name="existingAndReq" type="text" indexed="true" stored="true" multiValued="false"/> <field name="summary" type="text" indexed="true" stored="true" multiValued="false"/> <field name="title" type="text" indexed="true" stored="true" multiValued="false"/> <field name="document" type="textHTML" indexed="true" stored="true" multiValued="false"/> <!-- Dynamic field definitions. If a field name is not found, dynamicFields will be used if the name matches any of the patterns. RESTRICTION: the glob-like pattern in the name attribute must have a "*" only at the start or the end. EXAMPLE: name="*_i" will match any field ending in _i (like myid_i, z_i) Longer patterns will be matched first. if equal size patterns both match, the first appearing in the schema will be used. 
--> <dynamicField name="*_i" type="sint" indexed="true" stored="true"/> <dynamicField name="*_s" type="string" indexed="true" stored="true"/> <dynamicField name="*_l" type="slong" indexed="true" stored="true"/> <dynamicField name="*_t" type="text" indexed="true" stored="true"/> <dynamicField name="*_b" type="boolean" indexed="true" stored="true"/> <dynamicField name="*_f" type="sfloat" indexed="true" stored="true"/> <dynamicField name="*_d" type="sdouble" indexed="true" stored="true"/> <dynamicField name="*_dt" type="date" indexed="true" stored="true"/> <!-- uncomment the following to ignore any fields that don't already match an existing field name or dynamic field, rather than reporting them as an error. alternately, change the type="ignored" to some other type e.g. "text" if you want unknown fields indexed and/or stored by default --> <!--dynamicField name="*" type="ignored" /--> </fields> <!-- Field to use to determine and enforce document uniqueness. Unless this field is marked with required="false", it will be a required field --> <uniqueKey>id</uniqueKey> <!-- field for the QueryParser to use when an explicit fieldname is absent --> <defaultSearchField>document</defaultSearchField> <!-- SolrQueryParser configuration: defaultOperator="AND|OR" --> <solrQueryParser defaultOperator="OR"/> <!-- copyField commands copy one field to another at the time a document is added to the index. It's used either to index the same field differently, or to add multiple fields to the same field for easier/faster searching. --> <!--<copyField source="id" dest="sku"/>--> <copyField source="cat" dest="text"/> <copyField source="name" dest="text"/> <copyField source="name" dest="nameSort"/> <copyField source="name" dest="alphaNameSort"/> <copyField source="manu" dest="text"/> <copyField source="features" dest="text"/> <copyField source="includes" dest="text"/> <copyField source="manu" dest="manu_exact"/> <!-- Similarity is the scoring routine for each document vs. a query. A custom similarity may be specified here, but the default is fine for most applications. --> <!-- <similarity class="org.apache.lucene.search.DefaultSimilarity"/> --> </schema>
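Finally, this is the response shape I am expecting (the document id and snippet text below are made up for illustration). My understanding is that the matching document is still returned in full inside <doc>, and the fragments appear separately in a <lst name="highlighting"> block keyed on the uniqueKey, with the searched term wrapped in <em> tags:

  <response>
    ...
    <result name="response" numFound="1" start="0">
      <doc>
        <!-- full stored fields, including the whole "document" value -->
      </doc>
    </result>
    <lst name="highlighting">
      <lst name="SOME_DOC_ID">
        <arr name="document">
          <str>... fragment of the page with the &lt;em&gt;term&lt;/em&gt; emphasised ...</str>
        </arr>
      </lst>
    </lst>
  </response>

If this expectation is wrong, or something in the files above is preventing it, I would really appreciate a pointer.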