Vighnesh -
What you're looking for is DataImportHandler's TemplateTransformer. Docs here:
http://wiki.apache.org/solr/DataImportHandler#TemplateTransformer
Basically, just enable the TemplateTransformer in each of your DIH configs,
then set a literal field value differently for each.
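A minimal sketch of what one such DIH entity might look like (entity, query, and template value here are all made up; each config would use its own literal):

```xml
<!-- hypothetical entity: TemplateTransformer fills "source" with a literal -->
<entity name="news" query="SELECT id, title FROM news"
        transformer="TemplateTransformer">
  <field column="source" template="site_a"/>
</entity>
```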
The way folks have addressed this situation to date is to model the
multivalued fields as additional documents too.
On Aug 26, 2011, at 09:32 , dar...@ontrenet.com dar...@ontrenet.com wrote:
Many thanks Erick.
I think a good feature to add to Solr to address this is
to allow a query to
On Aug 26, 2011, at 17:49 , Lord Khan Han wrote:
We are indexing news document from the various sites. Currently we have
200K docs indexed. Total index size is 36 gig. There are also attachments to
the news (pdf, docs, etc.), so document size could be high (i.e. 10MB).
We are using some complex
Good idea. However response writers can't control HTTP response headers
currently... Only the content type returned.
Erik
On Aug 24, 2011, at 8:52, Jon Hoffman j...@foursquare.com wrote:
What about the HTTP response header?
Great question. But how would that get returned in the
What do you mean you don't want to display it? Generally you'd just
navigate to solr_response['response'] to ignore the header and just deal with
the main body.
But, there is an omitHeader parameter -
http://wiki.apache.org/solr/CommonQueryParameters#omitHeader
Erik
On Aug 17,
For the record, I'm starting work now on moving the Velocity response writer
back to a contrib module so these dependencies won't be embedded in the WAR
file (after I make the commit this week some time, most likely).
Erik
On Aug 17, 2011, at 15:35 , Chris Hostetter wrote:
: Caused
Sounds like you aren't using SolrJ, which will return a Java object back to you
natively. Give that a try and let us know how it fares against the jaxb
method.
Erik
On Aug 12, 2011, at 02:58 , Tri Nguyen wrote:
Hi,
My results from solr returns about 982 documents and I use jaxb
Though I think you could get the Dedupe feature to do this:
http://wiki.apache.org/solr/Deduplication
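For reference, a sketch of a dedupe update chain along the lines of that wiki page (field names here are hypothetical; whether dupes get dropped or overwritten is governed by overwriteDupes):

```xml
<updateRequestProcessorChain name="dedupe">
  <processor class="solr.processor.SignatureUpdateProcessorFactory">
    <bool name="enabled">true</bool>
    <str name="signatureField">id</str>
    <bool name="overwriteDupes">false</bool>
    <str name="fields">name,features,cat</str>
    <str name="signatureClass">solr.processor.Lookup3Signature</str>
  </processor>
  <processor class="solr.RunUpdateProcessorFactory"/>
</updateRequestProcessorChain>
```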
On Aug 13, 2011, at 11:52 , Erick Erickson wrote:
If you mean just throw the new document on the floor if
the index already contains a document with that key, I don't
think you can do that.
[:shards] = HA_SHARDS
response = conn.query(query, options)
Where LOCAL_SHARD points to a haproxy of a single shard and HA_SHARDS is an
array of 18 shards (via haproxy).
Ian.
On Mon, Aug 8, 2011 at 12:50 PM, Erik Hatcher erik.hatc...@gmail.comwrote:
Ian -
What does your solr-ruby using
Stephane -
Also - I don't think even with v.properties=velocity.properties that it'd be
picked up from the solr-home/conf directory the way the code is loading it
using SolrResourceLoader. The .properties file would need to be in your JAR
file for your custom tool (or in the classpath somehow
You can set default=true in solrconfig on the JSON response writer, like this:
<queryResponseWriter name="json" default="true"
                     class="solr.JSONResponseWriter"/>
Or you can add <str name="wt">json</str> to any request handler definitions.
Erik
Try making your queries, manually, to see this closer in action...
q=MyField:uri and see what you get. In this case, because your URI contains
characters that make the default query parser unhappy, do this sort of query
instead:
{!term f=MyField}uri
That way the query is parsed properly
Because you've got a stemmer in your analysis chain for those fields. If you
want unstemmed terms, remove the stemmer, or copyField to a different field to
use for the terms component.
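For example, a rough schema.xml sketch of that copyField approach (field and type names invented here):

```xml
<!-- unstemmed copy, used only by the terms component -->
<field name="title_unstemmed" type="text_ws" indexed="true" stored="false"/>
<copyField source="title" dest="title_unstemmed"/>
```

Then point the terms component at it with terms.fl=title_unstemmed.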
Erik
On Aug 9, 2011, at 10:20 , Royi Ronen wrote:
Hi,
I am using the terms component.
Many times
Ian -
What does your solr-ruby-using code look like?
Solr::Connection is light-weight, so you could just construct a new one of
those for each request. Are you keeping an instance around?
Erik
On Aug 8, 2011, at 12:03 , Ian Connor wrote:
Hi,
I have seen some of these errors
As far as I know, there isn't a patch for pivot faceting for 3.x. It'd require
extracting the code from trunk and porting it. Perhaps as easy as applying the
diff from the pivot commit from trunk to the 3.x codebase? (but probably not
quite that easy)
Erik
On Aug 3, 2011, at 00:58
You could use Solr's distributed (shards parameter) capability to do this.
However, if you've got somewhat different schemas that isn't necessarily going
to work properly. Perhaps unify your schemas in order to facilitate this using
Solr's distributed search feature?
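A sketch with made-up host names - the shards parameter can go on each request, or be set as a default in a handler definition:

```xml
<!-- per request: /select?q=ipod&shards=host1:8983/solr,host2:8983/solr -->
<requestHandler name="/distrib" class="solr.SearchHandler">
  <lst name="defaults">
    <str name="shards">host1:8983/solr,host2:8983/solr</str>
  </lst>
</requestHandler>
```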
Erik
On Aug 3,
Great question. But how would that get returned in the response?
It is a drag that the header is lost when results are written in CSV, but there
really isn't an obvious spot for that information to be returned.
Erik
On Aug 4, 2011, at 01:52 , Pooja Verlani wrote:
Hi,
Is there
On Jul 18, 2011, at 19:15 , Naomi Dushay wrote:
I found a weird behavior with the Solr defType argument, perhaps with
respect to default queries?
q={!defType=dismax}*:* hits
this is the confusing one. defType is a Solr request parameter, but not
something that works as a local
There's several starting points for Solr UI out there, but really the best
choice is whatever fits your environment and the skills/resources you have
handy. Here's a few off the top of my head -
* Blacklight - it's a Ruby on Rails full-featured search UI powered by Solr.
It can be
Just a hunch, ;), but I'm guessing you don't have a price field defined. qt is
for selecting a request handler you have defined in your solrconfig.xml - you
need to customize the parameters to your schema.
Erik
On Jul 19, 2011, at 04:32 , Yusniel Hidalgo Delgado wrote:
Hi,
I have
You'll have to add some logic in your Velocity templates to process the
sort parameter and determine whether to set the link to be ascending or
descending. It'll require learning some Velocity techniques to do this with
#if and how to navigate the objects Solr puts into the Velocity
On Jul 13, 2011, at 15:34 , Mourad K wrote:
Are there any good podcasts for beginners in SOLR
There's a bunch of stuff we've created and posted to our site here:
http://www.lucidimagination.com/devzone/videos-podcasts
Erik
On Jun 7, 2011, at 06:22 , roySolr wrote:
Every product has different facets. I have something like this in my schema:
<dynamicField name="*_FACET" type="facetType" indexed="true" stored="true"
              multiValued="true"/>
One optimization, if you don't need the stored values, is to set
stored=false.
I wouldn't share the same index across two Solr webapps - as they could step on
each other's toes.
In this scenario, I think having two Solr instances replicating from the same
master is the way to go, to allow you to scale your load from each application
separately.
Erik
On Jul
YH -
One technique (that the Smithsonian employs, I believe) is to index
the field names for the attributes into a separate field, facet on that first,
and then facet on the fields you'd like from that response in a second request
to Solr.
There's a basic hack here so the indexing
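The two-request pattern above might look roughly like this (field names invented for illustration):

```text
# request 1: find out which attribute fields apply to this result set
q=camera&facet=true&facet.field=attribute_names

# request 2: facet on the field names returned by request 1
q=camera&facet=true&facet.field=color_FACET&facet.field=brand_FACET
```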
Put an OR between your two nested queries to ensure you're using that operator.
Also, those hl params in your first dismax don't really belong there and
should be separate parameters globally.
Erik
On Jul 1, 2011, at 06:19 , joelmats wrote:
Hello!
Is it possible to have an
Jamie - there is a JIRA about this, at least one:
https://issues.apache.org/jira/browse/SOLR-218
Erik
On Jun 15, 2011, at 10:12 , Jamie Johnson wrote:
So simply lower casing the works but can get complex. The query that I'm
executing may have things like ranges which require some
How'd you install it?
Generally you just delete the directory where you installed it. But you
might be deploying solr.war in a container somewhere besides Solr's example
Jetty setup, in which case you need to undeploy it from those other containers
and remove the remnants.
Curious though...
I guess you mean from the /browse view?
You can override/replace hit.vm (in conf/velocity/hit.vm) with whatever you
like. Here's an example from a demo I recently did using the open Best Buy
data where I mapped their url value for a product into a url_s field in Solr
and rendered a link to
No, there's not a way to control Similarity on a per-request basis.
Some factors from Similarity are computed at index-time though.
What factors are you trying to tweak that way and why? Maybe doing boosting
using some other mechanism (boosting functions, boosting clauses) would be a
better
Rather than reinventing wheels here, I think that fronting the conf/ directory
with a WebDAV server would be a great way to go. I'm not familiar with the
state-of-the-art of WebDAV servers these days but there might be something
pretty trivial that can be configured in Tomcat to do this? Or
This seems like it deserves some kind of collecting TokenFilter(Factory) that
will slurp up all incoming tokens and glue them together with a space (and
allow the separator to be configurable). Hmmm, surprised one of those doesn't
already exist. With something like that you could have a
For this to work, _val_: goes *in* the q parameter, not as a separate
parameter.
See here for more details:
http://wiki.apache.org/solr/SolrQuerySyntax#Differences_From_Lucene_Query_Parser
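For example, in the spirit of that wiki page (mydate is a made-up field):

```text
q=ipod _val_:"recip(ms(NOW,mydate),3.16e-11,1,1)"
```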
Erik
On Jun 2, 2011, at 07:46 , Savvas-Andreas Moysidis wrote:
Hello,
I'm trying to
What does the parsed query look like with debugQuery=true for both scenarios?
Any difference? Doesn't make any sense that echoParams would have an effect,
unless somehow your search client is relying on parameters returned to do
something with them?!
Erik
On Apr 13, 2011, at 09:57
Try using AND (or set q.op):
q=car+AND+_val_:marketValue
On Apr 12, 2011, at 07:11 , Marco Martinez wrote:
Hi everyone,
My situation is this: I need to add the value of a field to the score of
the docs returned in the query, but not for all the docs. Example:
q=car returns 3 docs
On Apr 8, 2011, at 17:32 , Mark wrote:
How come this new version is bundled with rails and why is there no .war
output format?
Rails, via JRuby, is used in LucidWorks Enterprise for both the admin and
search interfaces. (and also powers the Alerts REST API).
I wanted a simple drop in
I've written an Understanding Lucene refcard that has just been published at
DZone. See here for details:
http://www.lucidimagination.com/blog/2011/03/28/understanding-lucene-by-erik-hatcher-free-dzone-refcard-now-available/
If you're new to Lucene or Solr, this refcard will be a nice
On Mar 29, 2011, at 10:01 , Robert Gründler wrote:
Hi all,
i'm trying to implement a FunctionQuery using the bf parameter of the
DisMaxQueryParser, however, i'm getting an exception:
Unknown function min in FunctionQuery('min(1,2)', pos=4)
The request that causes the error looks like
On Mar 21, 2011, at 14:19 , karsten-s...@gmx.de wrote:
Hi,
I am working on a migration from verity k2 to solr.
At this point I have a parser for the Verity Query Language (our used subset)
which generates a syntax tree.
I transfer this in a couple of filters and one query. This
it is committed. Are you referring to trunk?
The reason I am asking is that I have been using 1.4.1 for some time now and
have been thinking of upgrading to trunk... or branch
Thank you Lewis
From: Erik Hatcher [erik.hatc...@gmail.com]
Sent: 16 March 2011
On Mar 16, 2011, at 14:53 , Jonathan Rochkind wrote:
Interesting, any documentation on the PathTokenizer anywhere? Or just have to
find and look at the source? That's something I hadn't known about, which may
be useful to some stuff I've been working on depending on how it works.
Purely negative queries work with Solr's default (lucene) query parser, but
don't with dismax. Or so it seems from my experience testing this out just
now, on trunk.
In chatting with Jonathan further off-list we discussed having the best of both
worlds
q={!lucene}*:* AND NOT
Sorry, I missed the original mail on this thread
I put together that hierarchical faceting wiki page a couple of years ago when
helping a customer evaluate SOLR-64 vs. SOLR-792 vs.other approaches. Since
then, SOLR-792 morphed and is committed as pivot faceting. SOLR-64 spawned a
On Feb 1, 2011, at 08:58 , Estrada Groups wrote:
Has anyone noticed the rails application that installs with Solr4.0? I am
interested to hear some feedback on that one...
I guess you're talking about the client/ruby/flare stuff? It's been untouched
for quite a while and has not been
Maybe copy fields should be refactored to happen in a new, core, update
processor, so there is nothing special/awkward about them? It seems they fit
as part of what an update processor is all about, augmenting/modifying incoming
documents.
Erik
On Feb 23, 2011, at 04:40 , Jan Høydahl
Try -
fq={!field f=category}insert value, URL encoded of course, here
You can also try surrounding with quotes, but that gets tricky and you'll need
to escape things possibly. Or you could simply backslash escape the whitespace
(and colon, etc) characters.
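For instance, with a hypothetical value containing a space and a colon, these two are roughly equivalent:

```text
fq={!field f=category}Science: Chemistry
fq=category:Science\:\ Chemistry
```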
Erik
On Feb 23, 2011, at
On Feb 23, 2011, at 09:25 , Savvas-Andreas Moysidis wrote:
Hi Eric,
could you please let us know where can we find more info about this notation
( fq={!field f=category})? What is it called, how to use it etc? Is there a
wiki page?
There's some details of this here:
On Feb 23, 2011, at 10:06 , Rosa (Anuncios) wrote:
Thanks Erik,
this works well.
the only thing, but i'm not sure is comes from there is with the accents:
q=memoire+sdfq={!field f=category}Electronique++Cartes+mémoires
any tricks for that?
Hard to say what the problem is. Maybe it
Vincent,
Look at Solr's fq (filter query) capability. You'll likely want to put your
restricting query in an fq parameter from your search client.
If your restricting query is a simple TermQuery, have a look at the various
built-in query parsers in Solr. On trunk you can do this: fq={!term
Paul - go with 1.4.1 in this case.
Keep tabs on the upcoming 3.1 release (of both Lucene and Solr) and consider
that in a month or so.
Erik
On Feb 17, 2011, at 10:04 , Paul wrote:
Thanks, going to update now. This is a system that is currently
deployed. Should I just update to
Yes, you may use POST to make search requests to Solr.
Erik
On Feb 17, 2011, at 14:27 , mrw wrote:
We are running into some issues with large queries. Initially, they were
ostensibly header buffer overruns, because increasing Jetty's
headerBufferSize value to 65536 resolved them.
this is not the case.
I have specified <lib dir="./lib" /> in solrconfig.xml, is this enough or do
I need to use an exact path. I have already tried specifying an exact path
and it does not seem to work either.
Thank you
Lewis
From: Erik Hatcher
Looks like you're missing the Velocity JAR. It needs to be in some Solr
visible lib directory. With 1.4.1 you'll need to put it in solr-home/lib.
In later versions, you can use the <lib> elements in solrconfig.xml to point to
other directories.
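For example (the paths here are made up):

```xml
<lib dir="../../contrib/velocity/lib" />
<lib path="../../dist/solr-velocity.jar" />
```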
Erik
On Feb 14, 2011, at 10:41 ,
Sounds like you just described the VelocityResponseWriter. On trunk (or 3.x I
believe), try out http://localhost:8983/solr/browse and look at what makes that
tick.
Erik
On Feb 11, 2011, at 08:40 , McGibbney, Lewis John wrote:
Hi list,
I have been looking at an alternative UI
That's an incorrect way to POST PDF files (though maybe the latest work on
post.jar makes it possible, but would require additional parameters).
In order to index PDF files, you'll need to script an iteration over all files
and POST them in (or stream them however is most reasonable for your
Might "with" be a stop word removed by one of those qf fields? That'd explain
why mm=3 doesn't work, I think.
Erik
On Feb 11, 2011, at 15:43 , Tanner Postert wrote:
I'm having a problem using the dismax query for the term "obsessed with
winning"
The trick is, you have to remove the data/ directory, not just the data/index
subdirectory, and of course then restart Solr.
Or delete *:* with ?commit=true, depending on what's the best fit for your ops.
Erik
On Feb 1, 2011, at 11:41 , Dennis Gearon wrote:
I tried removing the index
Yes, you need to create both a QParserPlugin and a QParser implementation.
Look at Solr's own source code for the LuceneQParserPlugin/LuceneQParser and
build yours like that.
Baking the surround query parser into Solr out of the box would be a useful
contribution, so if you care to give it a
Beyond what Erick said, I'll add that it is often better to do this from the
outside and send in multiple actual end-user displayable facet values. When
you send in a field like Water -- Irrigation ; Water -- Sewage, that is what
will get stored (if you have it set to stored), but what you
No. SolrQueryRequest doesn't (currently) have access to the actual HTTP
request coming in. You'll need to do this either with a servlet filter and
register it into web.xml or restrict it from some other external firewall'ish
technology.
Erik
On Jan 23, 2011, at 13:21 , Teebo wrote:
On Jan 12, 2011, at 12:53 , Dmitriy Shvadskiy wrote:
Thanks Gora
The workaround of loading fields via LukeRequestHandler and building fl from
it will work for what we need. However it takes 15 seconds per core and we
have 15 cores.
The query I'm running is /admin/luke?show=schema
Is
There's nothing to sort in the results of a facet.query - all you get back is
a single count of docs that match that query (within the q/fq constraints).
Erik
On Jan 3, 2011, at 07:46 , Em wrote:
Hi,
thanks for your reply, but this seems not to work on my facetQuery.
I mean
On Dec 22, 2010, at 09:21 , Jonathan Rochkind wrote:
This won't actually give you the number of distinct facet values, but will
give you the number of documents matching your conditions. It's more
equivalent to SQL without the distinct.
There is no way in Solr 1.4 to get the number of
On Dec 17, 2010, at 08:14 , Grant Ingersoll wrote:
I don't think pivot supports dates at this point. Would probably be good to
open an issue to note this feature, as I do think it would be good to have.
I would think we could support all the various facet options during pivots
(dates,
One oddity is the duplicated sections:
<arr name="facet.pivot">
  <str>root_category_name,parent_category_name,category</str>
  <str>root_category_id,parent_category_id,category_id</str>
</arr>
That's in your responseHeader twice. Perhaps something fishy is caused by that?
Is this hardcoded in your
We still have some open spots for the meetup we're hosting this Wednesday night
in DC. Come on out, it'll be a great time.
Erik
http://www.lucidimagination.com/blog/2010/11/01/nova-dc-apache-lucenesolr-meetup-630-pm-et-17-november/
http://www.lucidimagination.com/search/?q=%22find+similar%22 (then narrow to
wiki to find things in documentation)
which will get you to http://wiki.apache.org/solr/MoreLikeThisHandler
Erik
On Sep 22, 2010, at 12:12 PM, Li Li wrote:
It seems there is a SimilarLikeThis in lucene . I
Be sure to issue a commit after updates (either with a separate
<commit/> message or by appending ?commit=true to your update requests).
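For example, a standalone commit is just this XML message POSTed to the update handler (URL assumes the example setup):

```xml
<!-- POST to http://localhost:8983/solr/update -->
<commit/>
```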
Out of curiosity are you using any Ruby library to speak to Solr? Or
hand rolling some Net::HTTP stuff?
Erik
On Sep 16, 2010, at 9:29 AM, maggie chen wrote:
My recommendation is if you need to query on something, index it as
you need... so in this case index another field with the number of
values in that field. This is easy if you're writing a custom
indexer, but maybe not so trivial if you're indexing other ways - so a
custom update
Here's perhaps the coolest webinar we've done to date, IMO :)
I attended Tyler's presentation at Lucene EuroCon* and thoroughly
enjoyed it. Search UI/UX is a fascinating topic to me, and really
important to do well for the applications most of us are building.
I'm pleased to pass along
Are you looking to get access to a remote schema? You can pull
schema.xml via HTTP using a URL like:
http://localhost:8983/solr/admin/file/?file=schema.xml
If you're accessing the schema from inside a custom Solr component
then the IndexSchema API (which you can get to from pretty much
What do you mean by user friendly? If you want an actual end user
search interface, it now comes out of the box on both the trunk and
3_x branch. Fire up the example, index the example data, and go to /browse.
That UI is generated using the Velocity response writer. You can get
The analysis tool is merely that, but during querying there is also a
query parser involved. Adding debugQuery=true to your request will
give you the parsed query in the response offering insight into what
might be going on. Could be lots of things, like not querying the
fields you
tool ABC12 matches ABC12. However, when doing an actual query, it
does not match.
Thank you for any help,
Justin
-- Forwarded message --
From: Erik Hatcher erik.hatc...@gmail.com
To: solr-user@lucene.apache.org
Date: Tue, 3 Aug 2010 16:50:06 -0400
Subject: Re: analysis tool vs
23, 2010 at 2:37 PM, Erik Hatcher
erik.hatc...@gmail.comwrote:
I've updated the SOLR-792 patch to apply to trunk (using the solr/ directory
as the root still, not the higher-level trunk/).
This one I think is an important one that I'd love to see
eventually part
of Solr built
Than -
Looks like maybe your text_bo field type isn't analyzing how you'd
like? Though that's just a hunch. I pasted the value of that field
returned in the link you provided into your analysis.jsp page and it
chunked tokens by whitespace. Though I could be experiencing a copy/
Consider using the dismax query parser instead. It has more
sophisticated capability to spread user queries across multiple fields
with different weightings.
Erik
On Jul 20, 2010, at 4:34 AM, Bilgin Ibryam wrote:
Hi all,
I have two simple questions:
I have an Item entity with
On Jul 20, 2010, at 6:14 AM, Bilgin Ibryam wrote:
So I assume that storing each entity field as a separate index field is
correct, since they will get different scoring.
Just to get the terminology right... to use dismax, *index* each field
separately. Whether a field is *stored* or
This is simple faceting, doesn't even have to be a multi-valued
field. Just index your description field with the desired stop word
removal and other analysis that you want done, and
facet.field=description
Erik
On Jul 15, 2010, at 3:26 AM, Peter Karich wrote:
Dear Hoss,
I
Tommy,
It's not committed to trunk or any other branch at the moment, so no
future released version until then.
Have you tested it out? Any feedback we should incorporate?
When I can carve out some time over the next week or so I'll review
and commit if there are no issues brought up.
On Jul 4, 2010, at 5:10 PM, Andrew Clegg wrote:
Mark Miller-3 wrote:
On 7/4/10 12:49 PM, Andrew Clegg wrote:
I thought so but thanks for clarifying. Maybe a wording change on
the
wiki
Sounds like a good idea - go ahead and make the change if you'd like.
That page seems to be marked
Solr trunk now has a built-in UI, and it is also something that works
with Solr 1.4 as well (with some effort). Here's how to get it
working with Solr 1.4:
http://www.lucidimagination.com/blog/2009/11/04/solritas-solr-1-4s-hidden-gem/
In Solr trunk, all you have to do is navigate to
On Jul 1, 2010, at 10:33 AM, Mark Allan wrote:
Very nice indeed! That definitely needs to be shouted about in the
docs.
Why thanks! And yeah, marketing isn't my strong point, but it is
indeed a way cool feature of Solr that deserves more attention than I
can give it.
Any way to
Please provide us some details. What and how did you index? What
request did you make to Solr?
Erik
On Jul 1, 2010, at 5:56 PM, Moises Muratalla wrote:
I am getting incomplete search results with solr 1.4.0.
Any suggestions on how to fix or debug this?
Solr has 304 support with the last-modified and etag headers.
Erik
On Jun 30, 2010, at 7:52 PM, Jason Chaffee wrote:
In that case, being able to use Accept headers and conditional GET's
would make them more powerful and easier to use. The Accept header
could be used, if present,
Ken - thanks for these improvements! Comments below...
On Jun 23, 2010, at 8:24 PM, Ken Krugler wrote:
I grabbed the latest greatest from trunk, and then had to make a
few minor layout tweaks.
1. In main.css, the .query-box input { height} isn't tall enough
(at least on my Mac 10.5/FF
You can use DataImportHandler's XML/XPath capabilities to do this:
http://wiki.apache.org/solr/DataImportHandler#Usage_with_XML.2BAC8-HTTP_Datasource
or you could, of course, convert your XML to Solr's XML format.
Another fine option for what this data looks like, CSV format.
I'd imagine
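A rough DIH sketch for an RSS-style feed (URL and xpaths invented here; the data source type name has varied across versions):

```xml
<dataConfig>
  <dataSource type="URLDataSource"/>
  <document>
    <entity name="item" processor="XPathEntityProcessor"
            url="http://example.com/feed.xml" forEach="/rss/channel/item">
      <field column="title" xpath="/rss/channel/item/title"/>
      <field column="link" xpath="/rss/channel/item/link"/>
    </entity>
  </document>
</dataConfig>
```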
Sounds like what you want is to override Solr's query component.
Have a look at the built-in one and go from there.
Erik
On Jun 22, 2010, at 1:38 PM, sarfaraz masood wrote:
I am a novice in solr / lucene. but i have gone
thru the documentations of both.I have even implemented
Martijn - Maybe the patches to SolrIndexSearcher could be extracted
into a new issue so that we can put in the infrastructure at least.
That way this could truly be a drop-in plugin without it actually
being in core. I haven't looked at the specifics, but I imagine we
could get the core
You need to share with us the Solr request you made, and any custom
request handler settings that might map to. Chances are you just need
to twiddle with the highlighter parameters (see wiki for docs) to get
it to do what you want.
Erik
On Jun 22, 2010, at 4:42 PM,
Or even better for an exact string query:
q={!raw f=field_name}sony vaio
(that's NOT URL encoded, but needs to be when sending the request over
HTTP)
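URL encoded, that request would be roughly:

```text
q=%7B%21raw%20f%3Dfield_name%7Dsony%20vaio
```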
Erik
On Jun 21, 2010, at 9:43 AM, Jan Høydahl / Cominvent wrote:
Hi,
You either need to quote your string:
Solritas is a way to view Solr responses in a more user friendly way,
it isn't going to help with the underlying suggest mechanism, just the
presentation of it.
Erik
On Jun 19, 2010, at 3:28 AM, Andy wrote:
Hi,
I've seen some posts on using SOLR-1316 or Solritas for
Fixed.
form action URLs really shouldn't have query string parameters on
them anyway, nor do they appear to work if so, so I moved the fq's to
hidden input fields.
Adding the ? into the URLs gets tricky, and doing it in #fqs isn't the
right place, as those are often tacked on after other
That's not a bug with the example schema, as price is a single-valued
field. getFirstValue will work, yes, but isn't necessary when it's
single valued. If you've got multiple prices, you probably want
something like:
#foreach($price in $doc.getFieldValue('price'))$!
Yup, that's basically what I've done too, here's the script part:
http://svn.apache.org/viewvc/lucene/dev/trunk/solr/example/solr/conf/velocity/layout.vm?view=markup
I didn't touch the example solrconfig, though putting the params in
the request handler is the better way, as you have.
Looks like a typo below, Chantal, and another comment below too...
On Jun 18, 2010, at 3:32 AM, Chantal Ackermann wrote:
$(function() {
  $("#qterm").autocomplete('/solr/epg/suggest', {
    extraParams: {
      'terms.prefix': function() { return
What kind of GUI are you looking for here?
It'd be easy to hack a "delete this hit" link into the /browse view
that now resides on trunk Solr, for example. But I hesitate to add
that in at the risk of someone deleting things inadvertently, but
perhaps an admin mode would be the way to build
do you mean sorting facets? or sorting search results? you can't
sort search results by a multivalued field - which value would it use?
Erik
On Jun 18, 2010, at 12:45 PM, Marc Sturlese wrote:
hey there!
can someone explain to me the impact of having multivalued fields when
On Jun 18, 2010, at 2:56 PM, Ken Krugler wrote:
Your wish is my command. Check out trunk, fire up Solr (ant run-example),
index example data, hit http://localhost:8983/solr/browse
- type in search box.
That works - excellent!
Now I'm trying to build a distribution from trunk that I can
On Jun 18, 2010, at 2:56 PM, Ken Krugler wrote:
3. I tried ant create-package from trunk/solr, and got this error
near the end:
/Users/kenkrugler/svn/lucene/lucene-trunk/solr/common-build.xml:252:
/Users/kenkrugler/svn/lucene/lucene-trunk/solr/contrib/velocity/src not found.
I don't