What would an update request look like? How close/far is this from the
existing format/functions? Is it just a syntax change from what we have
or would it require something more?
Is this something that can be handled with XSLT processing an atom file?
Martin Grotzke wrote:
On Tue, 2007-06-26 at 23:22 -0700, Chris Hostetter wrote:
: So if it would be possible to go over each item in the search result
: I could check the price field and define my ranges for the specific
: query on solr side and return the price ranges as a facet.
: Otherwise,
Andrew Nagy wrote:
Hello, I have been playing off and on with the more like this patch and I
really want to get it working well. I have the patch installed and I have
about 500K bibliographic records in my solr index.
My MLT query uses a fieldlist of about 5 or 6 fields. There are a mix of
the solr date format is a bit more strict (ISO 8601)
yyyy-MM-dd'T'HH:mm:ss.SSS
there is talk of a more lenient date parser, but nothing exists yet...
The format you suggest would be ok if you index your dates as a string
'20070101' and then use a range query.
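As a sketch of producing that strict format with plain JDK classes (the helper name is made up for illustration, not a Solr API):

```java
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.TimeZone;

// Hypothetical helper: render a java.util.Date in the strict ISO 8601
// form a Solr date field expects (UTC, trailing 'Z').
public class SolrDate {
    public static String toSolr(Date d) {
        SimpleDateFormat f = new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ss.SSS'Z'");
        f.setTimeZone(TimeZone.getTimeZone("UTC"));
        return f.format(d);
    }
}
```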
Stu Hood wrote:
Hello,
I just started running the scripts and
The commit script seems to run fine, but it says there was an error. I
looked into it, and the scripts expect 1.1 style response:
<result status="0"></result>
1.2 /update returns:
<?xml version="1.0" encoding="UTF-8"?>
<response>
<lst name="responseHeader">
If you are dealing with such large files, you need to make sure the JVM
has a big enough heap. Try starting java with -mx100m (-mx2G if you
have it)
java -mx100m -jar post.jar flix.xml
The solr server also needs to be started with enough memory...
ryan
michael ravits wrote:
hello
Thierry Collogne wrote:
Just to be clear. This client is compatible with the 1.2 release of solr?
Yes. Assuming you use default values, it should also work against 1.1.
Chris Hostetter wrote:
: SOLR-133 includes this fix... it squawks if it hits an unknown tag.
Really? I thought SOLR-133 only changed the way the incoming XML is
parsed. Is it also changing the way the schema.xml is parsed?
Sorry my mistake -- It parses incoming XML spitting out errors if
Simpy -- http://www.simpy.com/ - Tag - Search - Share
- Original Message
From: Ryan McKinley [EMAIL PROTECTED]
To: solr-user@lucene.apache.org
Sent: Thursday, June 14, 2007 7:09:17 PM
Subject: Re: Solr 1.2 HTTP Client for Java
I'm working
Any idea if you are going to make it distributable via the central Maven
repo?
It will be included in the next official solr release. I don't use
maven, but assume all official apache projects are included in their
repo. If they do nightly snapshots, it will be there
ryan
Daniel Alheiros wrote:
Excellent.
I just added SOLR-20 to trunk.
you will need:
1. checkout trunk
2. ant dist
3. include:
apache-solr-1.3-dev-common.jar
apache-solr-1.3-dev-solrj.jar
solrj-lib/*.jar
Here is the basic interface:
SOLR-133 includes this fix... it squawks if it hits an unknown tag.
Walter Underwood wrote:
Do we have a bug filed on this? Solr really should have complained
about the unknown element. --wunder
On 6/14/07 4:54 PM, Tiong Jeffrey [EMAIL PROTECTED] wrote:
arh! i spent 6-7 hours on this error
what version of solr/container are you running?
this sounds similar to what people running solr 1.1 with the jetty
included in that example...
Jack L wrote:
It happened twice in the past few days that the solr instance stopped
responding (the admin page does not load) while the process was
UnicodeDecodeError: 'ascii' codec can't decode byte 0xe2 in position
369: ordinal not in range(128)
What character is at position 369? Make sure it is valid unicode...
Is there a simple way to tell solr to accept UTF8 characters?
Solr can accept UTF8 characters... check the
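For reference, byte 0xE2 is a legal UTF-8 lead byte (it starts many three-byte sequences, such as curly quotes), it just has no ASCII mapping, which is what the Python traceback above complains about. A quick JDK check (class name invented for illustration):

```java
import java.nio.charset.StandardCharsets;

// Decode raw bytes as UTF-8. The 0xE2 0x80 0x99 sequence below is a
// right single quote (U+2019) -- valid UTF-8, impossible in ASCII.
public class Utf8Check {
    public static String decodeUtf8(byte[] bytes) {
        return new String(bytes, StandardCharsets.UTF_8);
    }
}
```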
compass is nice: you install it and it just works. I was totally
impressed.
Can anyone suggest which one would perform/scale well Compass or Solr?
It depends on what your app looks like. If you need to update the index
from multiple computers at the same time (load balancing) - solr is
:
: Actually, it's not quite equivalent if there was a schema change.
: There are some sticky field properties that are per-segment global.
: For example, if you added omitNorms=true to a field, then did
Hmmm... I thought the optimize would take care of that?
Oh yes, sorry, I was thinking
Jack L wrote:
Hello Chris,
I'm using version 1.1.
If I'm only using 1.1 features, should I still try 1.2 for other
improvements such as stability, error handling, etc.?
If you can upgrade, it is highly recommended. There are lots of little
annoying fixes included in 1.2 -- in addition to
I don't use tomcat, so I can't be particularly useful. The behavior you
describe does not happen with resin or jetty...
My guess is that tomcat is caching the error state. Since fixing the
problem is outside the webapp directory, it does not think it has
changed so it stays in a broken
have you taken a look the output from the admin/analysis?
http://localhost:8983/solr/admin/analysis.jsp?highlight=on
This lets you see what tokens are generated for index/query. From your
description, I'm suspicious that the generated tokens are actually:
willi am
Also, if you want the same
If the example is in:
C:\workspace\solr\example
Try putting your custom .jar in:
C:\workspace\solr\example\solr\lib
Check the README in solr home:
C:\workspace\solr\example\solr\README.txt
This directory is optional. If it exists, Solr will load any Jars
found in this directory and use them
I modified it and now it starts OK:
<field name="id" type="integer" stored="true" />
What does the required property mean?
I could not find it in the comments.
required means that the field *must* be specified when you add it to
the index. If it isn't there, you will get an error.
If you upgrade or work from trunk,
Sorry, for 1.1, use:
throw new SolrException( 500, ... );
the ErrorCode enum was added in 1.2 -- that should be out *very* soon.
Teruhiko Kurosaka wrote:
Ryan,
Thank you for your reply, but I can't find this
class SolrException.ErrorCode in Solr 1.1.
The Solr source seems to be giving a
Are there plans to rethink the plugin architecture, e.g. to break into
phases or modules where other components/plugins can extend? Or what are
some other suggestions you guys may have?
Check org.apache.solr.util.SolrPluginUtils -- ideally most functionality
shared across multiple
Teruhiko Kurosaka wrote:
When the parameter to a token filter is out of
range, or a mandatory parameter is not given, what
is the proper way to fail in the init() and
create() methods?
Should I throw a RuntimeException? Or should I
simply call SolrCore.log.severe(message)?
Is it OK for create()
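One common pattern is to fail fast in init() with an unchecked exception, so a broken config dies at startup instead of at query time. A minimal sketch using only JDK types (the real Solr factory signatures are not reproduced here; the class and parameter names are made up):

```java
import java.util.Map;

// Sketch: validate factory parameters up front and throw an unchecked
// exception for a missing or out-of-range value.
public class FilterArgs {
    public static int requiredInt(Map<String, String> args, String name, int min, int max) {
        String raw = args.get(name);
        if (raw == null) {
            throw new RuntimeException("missing required parameter: " + name);
        }
        int value = Integer.parseInt(raw);
        if (value < min || value > max) {
            throw new RuntimeException(name + " out of range [" + min + ".." + max + "]: " + value);
        }
        return value;
    }
}
```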
If your french/english apps really don't need to share data, I don't
think there is any general rule -- the choice will come down to your
personal taste...
One thing to consider is you will probably want to use french analyzers
for the french app and english ones for the english app...
I have an app where I want dismax style automatic field boosting (for
the title), but also want to expose lucene query syntax (phrase, range, etc)
The default search field for my schema is fulltext. I am copying all
the relevant fields but want Boston in the title to be worth more than
Chris Hostetter wrote:
or set the JVM's security manager to
one that does not allow file writes to that directory (if you need other
apps to be able to update the index)
I'll look into that... thanks
currently no.
Right now you even need a new request for each delete...
Patrick Givisiez wrote:
can I add and delete docs in the same post?
Something like this:
myDocs.xml
=
<add>
<doc><field name="mainId">4</field></doc>
<doc><field name="mainId">5</field></doc>
<doc><field
Teruhiko Kurosaka wrote:
I have a form that sets the hl.fl form hidden variable.
I wanted to change the higlighted field depending on the
query string that is typed, using JavaScript.
This is normally done by the JavaScript code like this:
document.myform.varname.value = whatever
But
I don't know if this helps, but...
Do *all* your queries need to include the fast updates? I have a setup
where there are some cases that need the newest stuff but most cases can
wait 5 mins (or so)
In that case, I have two solr instances pointing to the same index
files. One is used for
(In general a DateTranslatingTokenFilter class would be a pretty cool
addition to Lucene, it could take as constructor args two DateFormatters (one
for parsing the incoming tokens, and one for formatting the outgoing
If this happens, it would be nice (perhaps overkill) to have a chronic
input
check:
http://wiki.apache.org/solr/DisMaxRequestHandler
For now, most of the docs for dismax are in the javadocs:
http://lucene.apache.org/solr/api/org/apache/solr/request/DisMaxRequestHandler.html
Matthew Runo wrote:
I'd love to see some explanation of what's going on here, and how to
Yonik Seeley wrote:
On 5/9/07, Yonik Seeley [EMAIL PROTECTED] wrote:
If you are saving the file in UTF-8 format, then try changing the
first line to be this:
?xml version=1.0 encoding=UTF-8?
We should probably change the example solrconfig.xml and schema.xml to
be UTF-8 by default. Any
sorry. I tested with something that did not duplicate the problem.
update and try rev 536048.
Koji Sekiguchi wrote:
Ryan,
Thank you for committing SOLR-214, but we are still facing the garbled
characters problem
under Tomcat 5.5.23.
I checked the patch, but unfortunately,
escher2k wrote:
I am trying to remove documents from my index using delete by query.
However when I did this, the deleted
items seem to remain. This is the format of the XML file I am using -
<delete><query>load_id:20070424150841</query></delete>
<delete><query>load_id:20070425145301</query></delete>
. If deleteByQuery
functionality is needed, it's best if they can be batched and executed
together so they may share the same index reader.
I don't quite know what batched means since it only reads one command...
Thanks.
ryan mckinley wrote:
escher2k wrote:
I am trying to remove documents
Chris Hostetter wrote:
: 1. Which exact version of Resin? Still 3.0.23?
: 2. Just to confirm, you uncommented out the lines in web.xml
: mentioned previously?
: Try uncommenting out the lines in the web.xml and see if that fixes
: your problem.
Ken: I'm not very familiar with the problem you
Daniel Einspanjer wrote:
The example EmbeddedSolr class on the wiki makes use of getUpdateHandler
which was added after 1.1 (so it seems to be available only on trunk).
I'd really like to move to an embedded Solr sooner rather than later. My
questions are:
- Would it be easy/possible to work
Is it possible / is it an ok idea to have multiple solr instances
running on the same machine pointing to the same index files?
Essentially, I have two distinct needs - in some cases i need a commit
immediately after indexing one document, but most of the time it is fine
to wait 10 mins for
Walter Underwood wrote:
This is for monitoring -- what happened in the last 30 seconds.
Log file analysis doesn't really do that.
I think the XML output in admin/stats.jsp may be enough for us.
That gives the cumulative requests on each handler. Those are
counted in StandardRequestHandler
Chris Hostetter wrote:
: Essentially, I have two distinct needs - in some cases i need a commit
: immediately after indexing one document, but most of the time it is fine
: to wait 10 mins for changes if that has better performance.
:
: Sounds like a configuration issue... set autocommit to
paladin:/data/solr mtorgler1$ curl http://localhost:8080/solr/update
--data-binary articles.xml
<result status="1">org.xmlpull.v1.XmlPullParserException: only
whitespace content allowed before start tag and not a (position:
START_DOCUMENT seen a... @1:1)
at
As this question comes up so often, i put a new page on the wiki:
http://wiki.apache.org/solr/MultipleIndexes
We should fill in more details and link it to the front page.
Chris Hostetter wrote:
: So if you're looking for some shoes:
: (size:8 AND color:'blue') AND object_type:'shoe'
: Or
James liu wrote:
i read it from http://wiki.apache.org/solr/IndexInfoRequestHandler
The IndexInfoRequestHandler was added after solr 1.1. You will need
to compile the source from:
http://svn.apache.org/repos/asf/lucene/solr/trunk/
to get the IndexInfo handler.
On 4/13/07, Henrib [EMAIL PROTECTED] wrote:
I'm trying to choose between embedding Lucene versus embedding Solr in one
webapp.
In Solr terms, functional requirements would more or less lead to multiple
schema conf (need CRUD/generation on those) and deployment constraints
imply one webapp
With a clean checkout, you can run:
$ ant example
$ cd example
$ java -jar start.jar
and things work OK.
But, when you delete all but the two fields, you get an exception somewhere?
On 4/12/07, Andrew Nagy [EMAIL PROTECTED] wrote:
Yonik Seeley wrote:
I dropped your schema.xml directly into
PROTECTED] wrote:
Andrew Nagy wrote:
Ryan McKinley wrote:
What errors are you getting? Are there exceptions in the log when it
starts up?
Just a null pointer exception.
I added a few fields to my schema, and then replaced my solr war file
with the latest build (see my message from a week
Off topic a bit, Has anyone set forth to build a new admin interface for
SOLR? I build a lot of admin interfaces for my day job and would love
to give the admin module a bit of a tune-up (I won't use the term overhaul).
i think we definitely need an updated admin interface, yes!
Ideally, we
complicated. We
need several of the extra features Solr provides, which is why we are
trying to use it instead of Lucene directly.
On 4/2/07, Ryan McKinley [EMAIL PROTECTED] wrote:
I have embedded solr skipping HTTP transport altogether. It was
remarkably easy to link directly to request handlers
What errors are you getting? Are there exceptions in the log when it starts up?
On 4/10/07, Andrew Nagy [EMAIL PROTECTED] wrote:
Does anyone have a good method of debugging a schema?
I have been struggling to get my new schema to run for the past couple
of days and just do not see anything
Can you elaborate on running SOLR-20 with a hibernate-solr auto link? You
mean you listen to Hibernate events and use them to keep the index served by Solr in sync
with the DB?
I built a HibernateEventWatcher modeled after the compass framework
that automatically gets notified on
On 4/4/07, James liu [EMAIL PROTECTED] wrote:
I want to know how you handle a big index; it seems you have one.
As far as lucene is concerned, we have a relatively small index.
~300K docs (and growing!)
I haven't even needed to tune things much - it is mostly default
settings from the
Everything is in place to make it an easy task. A CSV update handler
was recently committed, a JSON loader should be a relatively
straightforward task. But, I don't think anyone is working on it
yet...
On 4/5/07, Jack L [EMAIL PROTECTED] wrote:
Hello solr-user,
Query result in JSON format
We just had a major release on http://www.instructables.com/
We have been running solr for months as a band-aid, this release
integrates solr deeply. Solr takes care of the 'browse' functionality
and a nice interface for people to manage their library of uploaded
images/files. This replaced an
Is there / should there be a way to access the three core caches?
You can access user defined caches from:
searcher.getCache( name );
The three core caches only have private access from SolrIndexSearcher.
I want to be able to programmatic check the cache sizes and make sure
they are big
until there is a need...
On 4/4/07, Mike Klaas [EMAIL PROTECTED] wrote:
On 4/4/07, Erik Hatcher [EMAIL PROTECTED] wrote:
On Apr 4, 2007, at 7:28 PM, Ryan McKinley wrote:
Is there / should there be a way to access the three core caches?
there should. +1
I want to be able
Yes, it is only solr - and will have a normal HTTP interface, no lucene.
But as i said, the catch is that *all* fields must be stored, not only
the ones you want to change. Solr will pull the document out of the
index, modify it and put it back - it can only pull out stored fields
so you must
Yes yes!
On 3/31/07, Jeff Rodenburg [EMAIL PROTECTED] wrote:
We built our first search system architecture around Lucene.Net back in 2005
and continued to make modifications through 2006. We quickly learned that
search management is so much more than query algorithms and indexing
choices. We
Lucene does not have any way to modify existing fields, so solr can't
do it either... (document boosts are stored as part of the field)
In http://issues.apache.org/jira/browse/SOLR-139, I'm working on a
convenience function to let the client modify an existing solr
document - the one catch is
You may want to take a look at the related discussion:
http://www.nabble.com/result-grouping--tf2910425.html#a8131895
Yonik suggested a dynamic priority queue... if the number of things
you are grouping by is small it is probably easier to make multiple
calls to solr.
ryan
On 3/16/07, Brian
thank you thank you
that does it.
line app, the
next step before trying to use it in Solr is probably to try and use it in
a simple JSP
Do you mean that if it works well on the command line, it can be used with Solr?
2007/3/13, Ryan McKinley [EMAIL PROTECTED]:
does your use bean jsp example work if you dump it into the exploded
solr.war
I agree, the display is a bit weird. But if you check the response
headers, the response code is 400 Bad Request
In firefox or IE, you would need to inspect the headers to see what is going on.
The issue is that /select uses a servlet that writes out a stack trace
for every error it hits
On 3/7/07, Andrew Nagy [EMAIL PROTECTED] wrote:
Argh! Thanks Yonik for pointing out the log files, duh! I had a
malformed line in my schema.xml. Nice feature to add down the line,
although I know there is a lot of work going into the admin interface so
who knows if it is already thought of.
Solr looks at one index - If you want to look at multiple indexes, you
need multiple solr instances running. Check the wiki for how to set
that up:
http://wiki.apache.org/solr/SolrJetty
(the resin and tomcat pages have something similar)
On 3/7/07, Venkatesh Seetharam [EMAIL PROTECTED]
SOLR-103 is waiting for SOLR-139 to solidify before i post more updates...
I have it running successfully, but it requires too many other patches
to suggest trying to get it running unless you are up for a bit of
work. If you are, i can easily post an update.
About the schema... SOLR-103 uses
MySQL has a TIMESTAMP field that can auto-update every time something
changes... i've never used it, but that may be a place to look.
alternatively you could add a TRIGGER to automatically dump stuff to a
bucket when it changes and clear the bucket when you index
On 3/6/07, Debra [EMAIL
<str name="q">allMessageContent:test;subject+asc</str>
there should be a space between subject and asc,
try: http://host/select?q=allMessageContent:test;subject%20asc
+ is supposed to become a space, but it looks like it is staying +
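The form-encoding rules can be checked with the JDK directly (class name invented for illustration): URLEncoder turns a space into +, and URLDecoder maps both + and %20 back to a space. A literal + surviving into the sort clause means a decoding step was skipped somewhere.

```java
import java.io.UnsupportedEncodingException;
import java.net.URLDecoder;
import java.net.URLEncoder;

// In application/x-www-form-urlencoded, '+' and '%20' both mean a space.
public class SortParam {
    public static String encode(String value) {
        try {
            return URLEncoder.encode(value, "UTF-8");
        } catch (UnsupportedEncodingException e) {
            throw new RuntimeException(e);
        }
    }
    public static String decode(String value) {
        try {
            return URLDecoder.decode(value, "UTF-8");
        } catch (UnsupportedEncodingException e) {
            throw new RuntimeException(e);
        }
    }
}
```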
I know I'm pushing solr to do things it was never designed to do, so
shut me up quick if this is not where you want things to go - I could
quietly implement this with quick hacks, but i'd rather not...
Currently SolrCore loads all the request handlers in a final variable
as the instance is
: I had silent a error that I can't remember the details of, but it
: was something like putting the str for boost functions outside
: the lst. It didn't blow up, but it was a nonsense config that
: was accepted.
again, there's nothing erroneous about having a str outside of a lst
when specifying
// get all the registered handlers by class
Collection<SolrRequestHandler> getRequestHandlers( Class<? extends SolrRequestHandler> clazz );
By class? What's that for?
It was useful to check what else is configured. The alternative is to have a
Collection<SolrRequestHandler>
The response writers print a debug message when you put a non-standard
value in them:
} else {
  // default... for debugging only
  writeStr(name, val.getClass().getName() + ':' + val.toString(), true);
}
All the values used in the standard handlers are in the list, so this
is fine.
Keep in mind, there is a contract about what constitutes Returnable data
http://lucene.apache.org/solr/api/org/apache/solr/request/SolrQueryResponse.html#returnable_data
That list is not quite up-to-date, it should add:
* Document
* Collection should be changed to: Iterable
* Iterator
There still may be a bug that Ryan mentioned about unknown fields
simply being ignored, but that should be fixed if true.
I just looked into this - /trunk code is fine.
I wasn't noticing the errors because the response code is always 200
with an error included in the xml. My code was only
On 3/3/07, Yonik Seeley [EMAIL PROTECTED] wrote:
On 3/3/07, Ryan McKinley [EMAIL PROTECTED] wrote:
Is there enough general interest in having error response codes to
change the standard web.xml config to let the SolrDispatchFilter
handle /select?
/select should already use HTTP error codes
/update
does send 200 even if there was an error.
after SOLR-173 we may want to change the default solrconfig to map
/update so that everything has a consistent error format.
On 3/3/07, Yonik Seeley [EMAIL PROTECTED] wrote:
On 3/3/07, Ryan McKinley [EMAIL PROTECTED] wrote:
Is there enough
For anyone not on the dev list, I just posted:
http://issues.apache.org/jira/browse/SOLR-179
so it is not lost, I also posted Otis' bug report:
http://issues.apache.org/jira/browse/SOLR-180
On 3/3/07, Yonik Seeley [EMAIL PROTECTED] wrote:
On 3/3/07, Ryan McKinley [EMAIL PROTECTED] wrote:
But MANY of the SolrExceptions use a status
code '1'.
Hmmm, I did an audit of the exceptions before we entered the incubator, and
I thought I caught all the ones that generated anything out
329.0 total time
0.0 set up/parsing
125.0 main query
46.0 faceting
100.0 optimized pre-fetch
58.0 debug
Times are in milliseconds. I've found breaking down the timing rather
useful since I have huge stored docs and non-query-related tasks often
On 3/2/07, Yonik Seeley [EMAIL PROTECTED] wrote:
On 3/2/07, Ryan McKinley [EMAIL PROTECTED] wrote:
The rationale with the solrconfig stuff is that a broken config should
behave as best it can.
I don't think that's what I was actually going for in this instance
(the schema).
I was focused
Faceting is much happier if you use a single valued field, but my apps
all require multivalued fields:
<doc>
  <arr name="subject">
    <str>aaa</str>
    <str>bbb</str>
    <str>ccc</str>
  </arr>
</doc>
I'd like to use copyField to accumulate the multivalued fields into a
single field that can be efficiently faceted. (As
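A hedged sketch of the kind of schema.xml fragment being described (field names and types invented for illustration):

```xml
<!-- accumulate the multivalued source field into a dedicated facet field -->
<field name="subject" type="text" indexed="true" stored="true" multiValued="true"/>
<field name="subject_facet" type="string" indexed="true" stored="false" multiValued="true"/>
<copyField source="subject" dest="subject_facet"/>
```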
it sounds like we may have a very bad bug in the XmlUpdateRequestHandler
I haven't looked at this yet, but if i understand the description, it
would have to be a problem with the SolrDispatchFilter and/or the
SolrRequestParsers.
the part that *is* exactly the same is the
On 2/24/07, Erik Hatcher [EMAIL PROTECTED] wrote:
On Feb 24, 2007, at 6:26 AM, Erik Hatcher wrote:
On Feb 24, 2007, at 3:36 AM, Pierre-Yves LANDRON wrote:
it will be easy to add. take a look at a simple SolrRequestHandler:
http://svn.apache.org/repos/asf/lucene/solr/trunk/src/java/org/
Does an implementation of this method exist in solr?
i don't think so.
If not, is it difficult to develop new instructions for solr? where must I
start to do so?
it will be easy to add. take a look at a simple SolrRequestHandler:
Looks like it was actually an error with SOLR-133 not handling CDATA
properly. I fixed it and updated the patch.
at least SOLR-20 isn't to blame!
On 2/21/07, Brian Whitman [EMAIL PROTECTED] wrote:
On Feb 21, 2007, at 5:10 PM, Yonik Seeley wrote:
So far so good for me.
I started with
On 2/20/07, Chris Hostetter [EMAIL PROTECTED] wrote:
: Can you get the boost of an indexed document? Am I missing something
: basic? Is the stored document boost lost once it is indexed?
Bingo. In Lucene, Document boosts aren't stored in the docs for later
recovery - the getBoost method is
there is no good solution yet. There has been discussion on possible approaches
http://www.nabble.com/convert-custom-facets-to-Solr-facets...-tf3163183.html#a8790179
http://wiki.apache.org/solr/UserTagDesign
On 2/12/07, Gmail Account [EMAIL PROTECTED] wrote:
I know that I've seen this topic
i'm switching from standard to dismax and ran into this.
I'll post a little patch in a sec.
ryan
Is there a way to return documents in a random order?
(obviously paging would not work)
thanks
ryan
Are there any simple automatic test we can run to see what fields
would support fast faceting?
Is it just that the cache size needs to be bigger than the number of
distinct values for a field?
If so, it would be nice to add an /admin page that lists each field,
the distinct value count and a
On 2/3/07, Walter Underwood [EMAIL PROTECTED] wrote:
We would never use JOIN. We denormalize for speed. Not a big deal.
I'm looking at an application where speed is not the only concern. If
I can remove the need for a 'normalized' and 'denormalized' form it
would be a HUGE win. Essentially
oops!!! I meant to reply directly to Brian - an old friend of mine
from graduate school...
next time I'll check the reply-to button more closely.
The index has a type field: A for archived objects and C for
collectibles. All the original objects are indexed in batch fashion
as type A. Users collect objects and tags/annotates them. When a
user collects an object, a document of type C is indexed with the
original object's unique
Your argument is a good one, and I buy it. However, I've never had a
case where a user typing multiple words where the expectation was
for OR, it is always AND.
But there are many cases where the expectation is to get the best
results possible. With AND you get zero results even when
Is it an ok idea to design an app with solr where you assume data will
be indexed immediately? For example, after a user uploads an image -
immediately use solr to search a collection that will include this new
image?
Essentially I'm asking if it is ok to call <commit/> often. Up to many
times /
check the wiki:
http://wiki.apache.org/solr/CollectionDistribution
and the scripts that come with the source:
http://svn.apache.org/repos/asf/lucene/solr/trunk/src/scripts/
On 1/23/07, S Edirisinghe [EMAIL PROTECTED] wrote:
Hi,
I just started looking into solr. I like the features that have
looks like we won't save the discussion for later :)
At this point though, I can't for the life of me remember what Ryan said to
convince me that it made sense to have a DocumentParser concept that
UpdateHandlers could delegate to -- as opposed to the UpdateHandler doing
it directly :)
We
Is there any easy way to split a string into a multi-field on the server:
given:
<add><doc>
<field name="subject">subject1; subject2; subject- 3</field>
</doc></add>
I would like:
<add><doc>
<field name="subject">subject1</field>
<field name="subject">subject2</field>
<field name="subject">subject- 3</field>
</doc></add>
Thanks for any pointers
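One client-side workaround, if changing the schema isn't an option: split and trim the packed value before posting each piece as its own field element. A sketch (class and method names are made up):

```java
// Split a packed "a; b; c" value into separate field values,
// trimming whitespace around each ';' but keeping internal spaces.
public class SubjectSplit {
    public static String[] split(String raw) {
        String[] parts = raw.split(";");
        for (int i = 0; i < parts.length; i++) {
            parts[i] = parts[i].trim();
        }
        return parts;
    }
}
```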
...
</analyzer>
</fieldtype>
On 1/21/07, Yonik Seeley [EMAIL PROTECTED] wrote:
On 1/21/07, Ryan McKinley [EMAIL PROTECTED] wrote:
Is there any easy way to split a string into a multi-field on the server:
From an indexing perspective, yes... just assign a tokenizer that splits on ';'
I don't
On 1/21/07, Yonik Seeley [EMAIL PROTECTED] wrote:
On 1/21/07, Ryan McKinley [EMAIL PROTECTED] wrote:
Are you suggesting something like this:
<fieldtype name="splitField" class="solr.TextField"
  sortMissingLast="true" omitNorms="true">
multi-field
tokenizer class