Paul,
Inline below...
On Nov 9, 2009, at 6:28 PM, Paul Rosen wrote:
If I could just create the desired URL, I can probably work
backwards and construct the correct ruby call.
Right, this list will always serve you best if you take the Ruby out
of the equation. solr-ruby, while cool and
maybe you indexed some documents with value 256, but then deleted
them? try optimizing to get the terms removed.
Erik
On Nov 8, 2009, at 6:11 AM, AHMET ARSLAN wrote:
I have a field defined as tint with values 100, 200, 300, and -100 only.
When I use admin/schema.jsp I see 5 distinct
The brackets probably come from it being transformed as an array. Try
saying multiValued=false on your field specifications.
Erik
On Nov 9, 2009, at 12:34 AM, Michael Lackhoff wrote:
On 08.11.2009 16:56 Michael Lackhoff wrote:
What didn't work but looks like the potentially best
EmbeddedSolrServer is only accessible through API calls, not through a
URL. It is strongly recommended to run Solr through the WAR file, for
several reasons including replication and distributed search features
that only work over HTTP.
Erik
On Nov 4, 2009, at 1:44 AM, Christian
Use &lt; instead of < in that attribute. That should fix the issue.
Remember, it's an XML file, so it has to obey XML encoding rules which
make it ugly but whatcha gonna do?
Erik
On Oct 27, 2009, at 11:50 AM, Andrew Clegg wrote:
Hi,
If I have a DataImportHandler query with a
There is a DirectoryFactory in Solr that could be used to make
Lucene's RAMDirectory. But then you'd have to reindex everything when
restarting Solr. Doesn't seem to make much practical sense to use it,
not even for performance reasons thanks to Lucene and Solr both
caching what they
You're better off putting extensions like these in solr-home/lib and
letting Solr load them rather than putting them in a container
classpath like Jetty's lib/ext. As you've seen, conflicts occur
because of class loader visibility.
Erik
On Oct 14, 2009, at 7:28 PM, Teruhiko
Paul-
Trunk solr-ruby has this instead:
hash[:sort] = @params[:sort].collect do |sort|
  key = sort.keys[0]
  "#{key.to_s} #{sort[key] == :descending ? 'desc' : 'asc'}"
end.join(',') if @params[:sort]
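A quick sketch of what that snippet produces, using an illustrative params hash (no live Solr connection needed; variable names here are made up):

```ruby
# Mimic the trunk solr-ruby behavior: turn an array of {field => direction}
# hashes into Solr's comma-separated sort parameter.
params = { :sort => [{ :title => :ascending }, { :year => :descending }] }

sort_param = params[:sort].collect do |sort|
  key = sort.keys[0]
  "#{key.to_s} #{sort[key] == :descending ? 'desc' : 'asc'}"
end.join(',') if params[:sort]

sort_param  # => "title asc,year desc"
```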
The ;sort... syntax is now deprecated in Solr itself
I suppose the 0.8 gem
I've just pushed a new 0.0.8 gem to Rubyforge that includes the fix I
described for the sort parameter.
Erik
On Oct 12, 2009, at 11:03 AM, Paul Rosen wrote:
I did an experiment that worked. In Solr::Request::Standard, in the
to_hash() method, I changed the commented line below to
don't forget q=... :)
Erik
On Oct 1, 2009, at 9:49 AM, Andrew Clegg wrote:
Hi folks,
I'm using the 2009-09-30 build, and any single or double quotes in
the query
string cause an NPE. Is this normal behaviour? I never tried it with
my
previous installation.
Example:
I'm an idiot.
Thanks :-)
Andrew.
Erik Hatcher-4 wrote:
don't forget q=... :)
Erik
On Oct 1, 2009, at 9:49 AM, Andrew Clegg wrote:
Hi folks,
I'm using the 2009-09-30 build, and any single or double quotes in
the query
string cause an NPE. Is this normal behaviour? I never tried
Matt
On Sun, Mar 30, 2008 at 9:50 PM, Erik Hatcher e...@ehatchersolutions.com
wrote:
Documents with a particular field can be matched using:
field:[* TO *]
Or documents without a particular field with:
-field:[* TO *]
An empty field
Excuse the cross-posting and gratuitous marketing :)
Erik
My company, Lucid Imagination, is sponsoring a free and in-depth
technical webinar with Erik Hatcher, one of our co-founders as Lucid
Imagination, as well as co-author of Lucene in Action, and Lucene/Solr
PMC member
There's nothing in that output that indicates something we can help
with over in solr-user land. What is the call you're making to Solr?
Did Solr log anything anomalous?
Erik
On Sep 28, 2009, at 4:41 AM, Steinar Asbjørnsen wrote:
I just posted to the SolrNet-group since i have
Note that whatever query you use will be cached in the query cache. -
*:* is likely the best choice. Another alternative if you've got
dynamic fields wired in, is something like
_nonexistent_field_s:dummy_value
Erik
On Sep 28, 2009, at 5:17 AM, Øystein F. Steimler wrote:
Hi,
acts_as_solr accesses the Solr server listed in the config solr.yml
file. You don't have to use the start/stop Rake actions, they are
really just conveniences for development/testing (I personally would
launch Solr separately in production though).
Out of curiosity, what acts_as_solr
Seems wrong, but actually is how I've done this sort of thing (with
year ranges like 1860-1865). Denormalizing/expanding is a pretty
common way to solve problems with Lucene/Solr. There's not that many
zip codes, so expanding shouldn't be prohibitive.
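A minimal sketch of the denormalizing approach, on the indexing-client side (field and document names are illustrative):

```ruby
# Expand a range into its discrete values so Lucene/Solr can match any
# single value inside the range at query time.
def expand_range(from, to)
  (from..to).to_a
end

# Index the expanded values into a multiValued field.
doc = { :id => 'item-1', :year => expand_range(1860, 1865) }
doc[:year]  # => [1860, 1861, 1862, 1863, 1864, 1865]
```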
Erik
On Sep 21, 2009, at
, 2009 at 9:44 PM, Erik Hatcher
erik.hatc...@gmail.com wrote:
On Sep 17, 2009, at 11:40 AM, Ian Connor wrote:
Is there any support for connection pooling or a more optimized data
exchange format?
The solr-ruby library (as do other Solr + Ruby libraries) uses the
ruby
response format and eval
On Sep 17, 2009, at 7:11 PM, Lance Norskog wrote:
This looks like a Ruby client bug.
Maybe, but I doubt it in this case.
But let's have some details of the Ruby code used to make the request,
and what gets logged on the first Solr for the request.
Erik
If you do the same
On Sep 17, 2009, at 6:14 PM, Lance Norskog wrote:
Yes. facet=false means don't do any faceting. This is why you don't
get any facet data back. This is probably a bug in the solr-ruby code.
Version number 0.0.x is probably a hint about its production-ready
status :)
Actually solr-ruby is
I just tried this on trunk and both with and without a field selector
it parses to a PhraseQuery. I have trouble believing even Solr 1.3
behaved like you reported, something seems fishy.
Erik
On Sep 18, 2009, at 9:02 AM, DHast wrote:
well it seems what is happening is solr is
Free Webinar: Apache Lucene 2.9: Discover the Powerful New Features
---
Join us for a free and in-depth technical webinar with Grant
Ingersoll, co-founder of Lucid Imagination and chair of the Apache
Lucene PMC.
Thursday,
Just FYI - you can put Solr plugins in solr-home/lib as JAR files
rather than messing with solr.war
Erik
On Sep 16, 2009, at 10:15 AM, Alexey Serba wrote:
Hi Aaron,
You can overwrite default Lucene Similarity and disable tf and
lengthNorm factors in scoring formula ( see
This could be achieved purely client-side if all you're talking about
is a stored field (not indexed/searchable). The client-side could
encrypt and encode the encrypted bits as text that Solr/Lucene can
store. Then decrypt client-side.
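A sketch of that client-side scheme, assuming AES from Ruby's standard OpenSSL library (helper names and key handling are illustrative, not part of solr-ruby):

```ruby
require 'openssl'
require 'base64'

# Encrypt a value client-side and Base64-encode it so Solr can store it
# as ordinary text in a stored (not indexed) field.
def encrypt_for_storage(plaintext, key)
  cipher = OpenSSL::Cipher.new('aes-256-cbc')
  cipher.encrypt
  cipher.key = key
  iv = cipher.random_iv
  Base64.strict_encode64(iv + cipher.update(plaintext) + cipher.final)
end

# Reverse the process after retrieving the stored field from Solr.
def decrypt_from_storage(encoded, key)
  raw = Base64.strict_decode64(encoded)
  cipher = OpenSSL::Cipher.new('aes-256-cbc')
  cipher.decrypt
  cipher.key = key
  cipher.iv = raw[0, 16]
  cipher.update(raw[16..-1]) + cipher.final
end
```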
Erik
On Sep 16, 2009, at 10:39 AM, Jay Hill
[* TO *] on the standard handler is an implicit query of
default_field_name:[* TO *] which matches only documents that have the
default field on them. So [* TO *] and *:* are two very different
queries, only the latter guaranteed to match all documents.
Erik
On Sep 14, 2009, at
3rd edition?! *whew* - let's get the 2nd edition in print first ;)
Erik
On Sep 14, 2009, at 12:10 PM, Fuad Efendi wrote:
http://www.manning.com/ingersoll/
And other books too, such as Lucene in Action 3rd edition... PDF
only (MEAP)
Today Only! Save 50% on any ebook! This offer
With solr-ruby, simply put the core name in the URL of the
Solr::Connection...
solr = Solr::Connection.new('http://localhost:8983/solr/core_name')
Erik
On Sep 9, 2009, at 6:38 PM, Paul Rosen wrote:
Hi all,
I'd like to start experimenting with multicore in a ruby on rails app.
/core1')
because it has a ? in it.
Erik Hatcher wrote:
With solr-ruby, simply put the core name in the URL of the
Solr::Connection...
solr = Solr::Connection.new('http://localhost:8983/solr/core_name')
Erik
On Sep 9, 2009, at 6:38 PM, Paul Rosen wrote:
Hi all,
I'd like to start
at the Speed of Light: Erik Hatcher, Lucene/Solr PMC Member
and Committer, co-author of Lucene In Action, Lucid Imagination
• Migrating from commercial search engines to Solr, Tobias Larsson
Hult and Eskil Andreen, Findwise SE
• Presentations followed by Lightning Talks from community
On Sep 3, 2009, at 1:24 AM, SEZNEC Bruno wrote:
Hi,
Following solr tuto,
I send doc to solr by request :
curl
'http://localhost:8983/solr/update/extract?literal.id=doc1&uprefix=attr_&map.content=attr_content&commit=true' -F myfi...@oxiane.pdf
response:
<lst name="responseHeader"><int
queries like
{query} AND {filter}???
Why can't we improve Lucene then?
Fuad
P.S.
https://issues.apache.org/jira/browse/SOLR-1169
https://issues.apache.org/jira/browse/SOLR-1179
-Original Message-
From: Erik Hatcher [mailto:erik.hatc...@gmail.com]
Sent: August-26-09 8:50 PM
While Andrzej's talk will focus on things at the Lucene layer, I'm
sure there'll be some great tips and tricks useful to Solrians too.
Andrzej is one of the sharpest folks I've met, and he's also a very
impressive presenter. Tune in if you can.
Erik
Begin forwarded message:
You couldn't sort on a multiValued field though.
I'd simply index a max_side field, and have the indexing client add a
single valued field with max(length,width) to it. Then sort on
max_side.
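The indexing-client side of that suggestion might look like this (field names are illustrative):

```ruby
# Compute max(length, width) up front and add it as a single-valued
# max_side field, which can then be sorted on.
def with_max_side(doc)
  doc.merge(:max_side => [doc[:length], doc[:width]].max)
end

doc = with_max_side(:id => 'box-1', :length => 12, :width => 30)
doc[:max_side]  # => 30
```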
Erik
On Aug 25, 2009, at 4:00 AM, Constantijn Visinescu wrote:
make a new multivalued
Announcing a new Meetup for SFBay Apache Lucene/Solr Meetup!
What: SFBay Apache Lucene/Solr June Meetup
When: September 3, 2009 6:30 PM
Where: Computer History Museum, 1401 N Shoreline Blvd, Mountain View,
CA 94043
Presentations and discussions on Lucene/Solr, the Apache Open Source
Search
On Aug 25, 2009, at 11:29 AM, Fuad Efendi wrote:
query time relevancy tuning
It is mentioned at
http://www.lucidimagination.com/blog/2009/03/09/nutch-solr/
-What is it? Just GET request parameters for standard handler?
To me, this primarily refers to dismax client-side parameterization of
Earle,
Ahh, I read your mail too fast... Erik Hatcher's method should work.
Thanks!
Koji
Erik Hatcher wrote:
You couldn't sort on a multiValued field though.
I'd simply index a max_side field, and have the indexing client add a
single valued field with max(length,width) to it. Then sort
On Aug 25, 2009, at 10:34 AM, Elaine Li wrote:
I am still looking for help on Chinese language search. I tried
ChineseTokenizerFactory as my analyzer, but it did not help. Only words
with white space, commas, etc. around them can be found.
Try using the StandardTokenizerFactory - it handles
On Aug 25, 2009, at 6:35 PM, Britske wrote:
Moreover, I can't seem to find the actual code in FacetComponent or
anywhere
else for that matter where the {!ex}-param case is treated. I assume
it's in
FacetComponent.refineFacets but I can't seem to get a grip on it..
Perhaps
it's late here..
On Aug 24, 2009, at 7:03 AM, Avlesh Singh wrote:
Can you really sort accurately on tokenized fields?
Yes, as long as there is *one and only one* term emitted from the
analyzer. KeywordTokenizer is your friend, and comes in handy to
lowercase or pattern replace things.
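A field type along these lines (a sketch of a schema.xml fragment; the name is illustrative, the classes are standard Solr factories) keeps exactly one lowercased term per value, so it stays sortable:

```xml
<fieldType name="lowercase_sort" class="solr.TextField"
           sortMissingLast="true" omitNorms="true">
  <analyzer>
    <!-- KeywordTokenizer emits the whole value as a single token -->
    <tokenizer class="solr.KeywordTokenizerFactory"/>
    <!-- normalize case without splitting the value into multiple terms -->
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>
```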
Erik
I think you need a space, not a comma, in the qf parameter. It's
designed to allow for boosts, like qf=features^2.0 make^1.0
Erik
On Aug 24, 2009, at 1:13 PM, darniz wrote:
Hello
i created a custom request handler and i want it to do a search on
features
and make field by
I think you need to elaborate a bit more ... I don't understand what
you're asking. Exact word search only? What is not working as you'd
like/expect currently?
Erik
On Aug 20, 2009, at 7:35 AM, bhaskar chandrasekar wrote:
Hi,
Which Java class needs to be modified to get the
On Aug 19, 2009, at 2:45 PM, Paul Rosen wrote:
You can see the problem here (at least until it's fixed!):
http://nines.performantsoftware.com/search/saved?user=paul&name=poem
Hi Paul - that project looks familiar! :)
If you sort by Title/Ascending, you get partially sorted results,
but it
On Aug 19, 2009, at 3:50 PM, Paul Rosen wrote:
I'm surprised you're not seeing an exception when trying to sort on
title given this configuration. Sorting must be done on single
valued indexed fields, that have at most a single term indexed per
document. I recommend you use copyField to
However, you can have a dynamic * field mapping that catches all
field names that aren't already defined - though all of the fields
will be the same field type.
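The copyField approach might look like this in schema.xml (field names are illustrative; the string type guarantees a single indexed term per document, which sorting requires):

```xml
<!-- the searchable, tokenized field -->
<field name="title" type="text" indexed="true" stored="true"/>
<!-- an untokenized copy used only for sorting -->
<field name="title_sort" type="string" indexed="true" stored="false"/>
<copyField source="title" dest="title_sort"/>
```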
Erik
On Aug 19, 2009, at 5:48 PM, Marco Westermann wrote:
Hi, thanks for your answers, I think I have to go more in
On Aug 18, 2009, at 8:28 AM, Ninad Raut wrote:
Hi,
I want to count the words between two significant words like shell
and
petroleum. Or want to write a query to find all the documents
where the
content has shell and petroleum in close proximity of less than
10 words
between them.
Can
Same works with optimize... /solr/update?optimize=true
Erik
On Aug 12, 2009, at 2:43 PM, KaktuChakarabati wrote:
Hey Yonik,
Thanks for the quick reply, However my first question was more
specific:
* I'm not worried about a commit but about the *optimize* operation
which I
might
Yes, increasing the filterCache size will help with Solr 1.3
performance.
Do note that trunk (soon Solr 1.4) has dramatically improved faceting
performance.
Erik
On Aug 12, 2009, at 1:30 PM, Jérôme Etévé wrote:
Hi everyone,
I'm using some faceting on a solr index containing ~
My hunch, though I'll try to make some time to test this out
thoroughly, is that the entity is parsed initially with variables
resolved, but not per request. Variables/expressions do get expanded
for fields of course, but perhaps not for other high-level attributes?
Erik
On Aug
And further on this, if you want a field automatically added to each
document with the list of its field names, check out http://issues.apache.org/jira/browse/SOLR-1280
Erik
On Aug 4, 2009, at 1:01 AM, Avlesh Singh wrote:
I understand the general need here. And just extending what
Is default-search-field stored (as specified in schema.xml)?
Erik
On Aug 3, 2009, at 8:05 PM, Stephen Green wrote:
Hi, folks. I'm trying to get a very simple example working with Solr
highlighting. I have a default search field (called, unsurprisingly
default-search-field) with
You'll have to reindex your documents from scratch. Such is the
nature of changing the schema of an index. It's always a great idea
(in fact, I'd say mandatory) to have a full reindex process handy.
Erik
On Jul 31, 2009, at 2:37 AM, Vannia Rajan wrote:
Hi,
We are using
On Jul 31, 2009, at 2:35 AM, Rahul R wrote:
Hello,
We are trying to get Solr to work for a really huge parts database.
Details
of the database
- 55 million parts
- Totally 3700 properties (facets). But each record will not have
value for
all properties.
- Most of these facets are defined
On Jul 31, 2009, at 7:01 AM, Vannia Rajan wrote:
On Fri, Jul 31, 2009 at 3:22 PM, Erik Hatcher e...@ehatchersolutions.com
wrote:
You'll have to reindex your documents from scratch. Such is the
nature of
changing the schema of an index. It's always a great idea (in
fact, I'd say
On Jul 31, 2009, at 7:17 AM, Rahul R wrote:
Erik,
I understand that caching is going to improve performance. In fact we
did a
PSR run with caches enabled and we got awesome results. But these
wouldn't
be really representative because the PSR scripts will be doing the
same
searches again
source is great. Just Export, update the config, and import
(=reindex) to see if, for instance the performance is better or just
to
transport the information to an other server.
This can only be done of course when there are no fields added etc.
On Fri, Jul 31, 2009 at 2:59 PM, Erik Hatcher e
PM, Erik Hatcher e...@ehatchersolutions.com
wrote:
On Jul 31, 2009, at 7:17 AM, Rahul R wrote:
Erik,
I understand that caching is going to improve performance. In fact
we did a
PSR run with caches enabled and we got awesome results. But these
wouldn't
be really representative because the PSR
On Jul 30, 2009, at 6:17 AM, Jörg Agatz wrote:
Also, i use the command-line tool: java -jar post.jar xyz.xml
i don't know what you mean by
It sounds like you're not using entities for your '&' characters
(ampersands) in your XML.
These should be converted to &amp;. This should look
On Jul 30, 2009, at 9:44 AM, Reece wrote:
Hello everyone :)
I was trying to purge out older things.. in this case of a certain
type of document that had an ID lower than 200. So I posted this:
<delete><query>id:[0 TO 200] AND type:I</query></delete>
Now, I have only 49 type I items total
On Jul 30, 2009, at 9:19 AM, Licinio Fernández Maurelo wrote:
i want to get the lucene index format version from solr web app (as
luke do), i've tried looking for the info at luke handler response,
but i havn't found this info
the Luke request handler writes it out:
On Jul 30, 2009, at 11:54 AM, Andrew Clegg wrote:
<entity dataSource="filesystem" name="domain_pdb"
        url="${domain.pdb_code}-noatom.xml" processor="XPathEntityProcessor"
        forEach="/">
  <field column="content"
         xpath="//*[local-name()='structCategory']/*[local-name()='struct']/
On Jul 30, 2009, at 12:19 PM, Andrew Clegg wrote:
Don't worry -- your hints put me on the right track :-)
I got it working with:
<entity dataSource="filesystem" name="domain_pdb"
        url="${domain.pdb_code}-noatom.xml" processor="XPathEntityProcessor"
        forEach="/datablock">
  <field
On Jul 30, 2009, at 1:00 PM, Shalin Shekhar Mangar wrote:
On Thu, Jul 30, 2009 at 9:53 PM, dar...@ontrenet.com wrote:
Hi,
I am exploring the faceted search results of Solr. My query is like
this.
On Jul 30, 2009, at 1:44 PM, Jérôme Etévé wrote:
Hi all,
I don't know if it does the same from everyone, but when I use the
reply function of my mail agent, it sets the recipient to the user who
sent the message, and not the mailing list.
So it's quite annoying cause I have to change the
I recommend, in this case, that you use Solr's autocommit feature (see
solrconfig.xml) rather than having your indexing clients issue their
own commits. Overlapped searcher warming is just going to be too much
of a hit on RAM, and generally unnecessary with autocommit.
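The autocommit feature lives in solrconfig.xml; a sketch (the threshold values here are illustrative, pick ones that suit your indexing rate):

```xml
<updateHandler class="solr.DirectUpdateHandler2">
  <autoCommit>
    <!-- commit automatically after this many pending documents -->
    <maxDocs>10000</maxDocs>
    <!-- ...or after this many milliseconds, whichever comes first -->
    <maxTime>60000</maxTime>
  </autoCommit>
</updateHandler>
```

With this in place the indexing clients can simply stop sending their own commits.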
Erik
On
On Jul 30, 2009, at 3:32 PM, Stephen Duncan Jr wrote:
What's the effect of showItems attribute on the fieldValueCache in
Solr 1.4?
Just outputs details of the last accessed items from the cache in the
stats display.
Erik
if (showItems != 0) {
Map items =
On Jul 29, 2009, at 6:55 AM, Vincent Pérès wrote:
Using the following query :
http://localhost:8983/solr/others/select/?debugQuery=true&q=anna%20lewis&rows=20&start=0&fl=*&qt=dismax
I get back around 100 results. Follow the two first :
<doc>
  <str name="id">Person:151</str>
  <str name="name_s">Victoria
On Jul 27, 2009, at 6:13 PM, ranjitr wrote:
Ok, I think I found an alternative.
With Solr 1.3, trying to reload the core from the browser using:
http://localhost:8984/solr/admin/cores?action=RELOAD&core=core1
doesn't work (this problem does not happen with nightly build).
But I use curl
On Jul 26, 2009, at 3:53 PM, manuel aldana wrote:
it is not explicitly mentioned in solr documentation but I guess
when changing stuff inside conf/ folder a restart of webserver is
necessary? Or is there a reload URL call available?
In single core, a restart of Solr is required to pick up
On Jul 23, 2009, at 7:00 AM, Łukasz Osipiuk wrote:
See https://issues.apache.org/jira/browse/SOLR-1293
We're planning to put up a patch soon. Perhaps we can collaborate?
What are your estimations to have this patches ready. We have quite
tight deadlines
and cannot afford months of
On Jul 24, 2009, at 8:33 AM, Nishant Chandra wrote:
Can I use composite key for uniqueKeyId? If yes, how?
No - you get one field to use for uniqueKey in Solr. It is your
indexer's responsibility for aggregating values from your data sources
into a single uniqueKey value. For example, in
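The aggregation can be sketched on the indexing-client side like this (separator and field names are illustrative):

```ruby
# Solr allows only one uniqueKey field, so synthesize it by joining
# the natural key parts from the data source into a single value.
def composite_key(*parts)
  parts.join('-')
end

doc = { :id => composite_key('catalog', 42), :name => 'widget' }
doc[:id]  # => "catalog-42"
```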
On Jul 24, 2009, at 4:53 AM, prerna07 wrote:
Does that mean my indexes should be created with phonetic filter
factory in
my fieldTypes? Currently I am querying on text fields and the phonetic
factory is defined for the query analyser only.
Yes, if you are applying the phonetic filter you need to do
# All advertisements must at least be in the form of executable code ;)
require 'solr'
Solr::Connection.new('http://localhost:8983/solr',
  :autocommit => :on).add({ :id => SolrCourses,
  :text => '
Lucid Imagination is now offering Solr Essentials Online Sessions, a
On Jul 23, 2009, at 11:03 AM, Jörg Agatz wrote:
Hallo...
I have a problem...
i want to sort a field
at the Moment the field type is text, but i have test it with
string or
date
the content of the field looks like 22.07.09 it is a Date.
when i sort, i get :
failed to open stream: HTTP
Rather than trying to get all document id's in one call to Solr,
consider paging through the results. Set rows=1000 or probably
larger, then check the numFound and continue making requests to Solr
incrementing start parameter accordingly until done.
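The paging loop can be sketched like this (each_page is a hypothetical helper; a real client would issue a Solr request per iteration and read numFound from the first response):

```ruby
# Visit successive (start, rows) windows until numFound is exhausted.
def each_page(num_found, rows)
  start = 0
  while start < num_found
    yield start, rows
    start += rows
  end
end

offsets = []
each_page(2500, 1000) { |start, _rows| offsets << start }
offsets  # => [0, 1000, 2000]
```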
Erik
On Jul 23, 2009, at 5:35
Given it is a small number of terms, it seems like just excluding them
from use/visibility on the client would be reasonable.
Erik
On Jul 23, 2009, at 11:43 AM, Bill Au wrote:
I want to exclude a very small number of terms which will be
different for
each query. So I think my best
On Jul 20, 2009, at 6:11 AM, Code Tester wrote:
I am even unable to delete documents using the EmbeddedSolrServer
( on a
specific core )
Steps:
1) I have 2 cores ( core0 , core1 ) Each of them have ~10 records.
2) System.setProperty("solr.solr.home",
   "/home/user/projects/solr/example/multi");
See http://issues.apache.org/jira/browse/SOLR-218 - Solr currently
does not have leading wildcard support enabled.
Erik
On Jul 20, 2009, at 8:09 AM, Jörg Agatz wrote:
Hallo Solr Users...
I tryed to search with a Wildcard at the beginning from a search.
for example, i will search
I was particularly surprised by the SOLR-64 numbers. What makes its
response so huge (and thus slow) to return the entire tree of facet
counts?
Erik
On Jul 19, 2009, at 5:35 PM, Erik Hatcher wrote:
I've posted the details of some experiments I just did comparing/
contrasting
On Jul 17, 2009, at 8:45 PM, J G wrote:
Is it possible to obtain the SOLR index size on disk through the
SOLR API? I've read through the docs and mailing list questions but
can't seem to find the answer.
No, but it'd be a great addition to the /admin/system handler which
returns lots of
On Jul 17, 2009, at 5:21 PM, Bill Au wrote:
I am faceting based on the indexed terms of a field by using
facet.field.
Is there any way to exclude certain terms from the facet counts?
Only using the facet.prefix feature to limit to facet values beginning
with a specific string.
On Jul 16, 2009, at 4:35 AM, Koji Sekiguchi wrote:
ashokcz wrote:
Hi all,
i have a scenario where i need to get facet count for combination
of fields.
Say i have two fields Manufacturer and Year of manufacture.
I search for something and it gives me 15 results and my facet
count as like
On Jul 15, 2009, at 2:59 PM, Mani Kumar wrote:
@mark, @otis:
Can I answer too? :)
yeah copying all the fields to one text field will work but what if
i want
to assign specific weightage to specific fields?
e.g. i have a three fields
1) title
2) tags
3) description
i copied all of them
Use the stopwords feature with a custom mispeled_words.txt and a
StopFilterFactory on the spell check field ;)
Erik
On Jul 13, 2009, at 8:27 PM, Jay Hill wrote:
We're building a spell index from a field in our main index with the
following configuration:
searchComponent
On Jul 14, 2009, at 8:00 AM, Kevin Miller wrote:
I am needing to index primarily .doc files but also need it to look at
.pdf and .xls files. I am currently looking at the Tika project for
this functionality.
This is now built into trunk (aka Solr 1.4):
On Jul 14, 2009, at 5:35 AM, Noble Paul നോബിള്
नोब्ळ् wrote:
On Tue, Jul 14, 2009 at 1:33 AM, Kevin
Millerkevin.mil...@oktax.state.ok.us wrote:
I am new to Solr and trying to get it set up to index files from a
directory structure on a server. I have a few questions.
1.) Is there an
On Jul 13, 2009, at 4:58 AM, Gargate, Siddharth wrote:
I read somewhere that it is deprecated
see the 2nd paragraph in CHANGES.txt:
http://svn.apache.org/repos/asf/lucene/solr/trunk/CHANGES.txt
Erik
You can delete by query - <delete><query>url:some-word</query></delete>
Erik
On Jul 13, 2009, at 6:34 AM, Beats wrote:
Hi,
I'm using nutch to crawl and solr to index the documents.
I want to delete the index entries containing a particular word or
pattern in the url field.
Is there something like
On Jul 9, 2009, at 5:37 PM, A. Steven Anderson wrote:
A simple example would be if a schema included a phoneNum multiValued
field
and I wanted to return all docs that contained more than 1 phoneNum
field
value.
all docs that contain more than one phone number - regardless of
matching a
I'm exploring other ways of getting data into Solr via
DataImportHandler than through a relational database, particularly the
URLDataSource.
I see the special commands for deleting by id and query as well as the
$hasMore/$nextUrl techniques, but I'm unclear on exactly how one would
go
On Jul 9, 2009, at 1:02 PM, gistol...@gmx.de wrote:
I am using the dismax query parser syntax for the fq param:
.../select?qt=dismax&rows=30&q.alt=*:*&qf=content&fq={!dismax
qf=contentKeyword^1.0 mm=0%}Foo&fq=+date:[2009-03-11T00:00:00Z TO
2009-07-09T16:41:50Z]&fl=id,date,content
Now, I want
On Jul 8, 2009, at 6:49 AM, Norberto Meijome wrote:
alternatively, you can write a relatively simple java app that will
pick each file up and post it for you using SolrJ
Note that Solr ships with post.jar. So one could post a bunch of Solr
XML file like this:
java -jar post.jar
On Jul 8, 2009, at 7:06 AM, Saeli Mathieu wrote:
Hello.
I posted recently in this ML a script to transform any xml files in
Solr's
xml files.
Anyway.
I've got a problem when I want to index my file, the indexation
script from
the demonstration works perfectly, but now the only problem
On Jul 8, 2009, at 8:10 AM, Saeli Mathieu wrote:
Yep I know that, I almost add more than 60 lines in this file :)
It's just an example.
Do you have any idea why when I'm trying to search something, the
result of
Solr is equal to 0 ?
The first place I start with a general question like is
Pierre - the field you're faceting must not have the StopFilter
applied at indexing time, or the words you want removed aren't in the
stop word list file.
Erik
On Jul 3, 2009, at 5:21 AM, Pierre-Yves LANDRON wrote:
Hello,
When indexing or querying text, i'm using the
You could configure multiple spellcheckers on different fields, or if
you want to aggregate several fields into the suggestions, use
copyField to pool all text to be suggested together into a single field.
Erik
On Jul 2, 2009, at 7:46 AM, Otis Gospodnetic wrote:
Hi Lici,
I don't
We're using recent nightly snapshots of Solr in various applications,
and also our (Lucid's) certified distributions which include many
1.4'ish goodies in a supportable fashion.
So, yeah, I definitely have no qualms about recommending trunk or
nightly builds of Solr. Granted, of course,
Kalyan,
Tell us about your indexer. Is it DIH-powered? Custom Java code,
perhaps, using SolrJ indexing over HTTP? Is your indexer doing a lot
of work itself to preprocess documents before sending to Solr?
Erik
On Jul 1, 2009, at 3:42 PM, Manepalli, Kalyan wrote:
Hi,
What happens when you search for test* ?
Wildcard terms are not analyzed, and thus not lowercased, yet when you
indexed you likely lowercased all terms by way of the analyzer
configuration for the field you're querying.
One solution/workaround is simply to lowercase the entire query string
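A sketch of that workaround in a Ruby client (the helper name is illustrative):

```ruby
# Wildcard terms bypass the analyzer, so the index's lowercased terms
# never match a mixed-case wildcard query. Lowercase client-side before
# sending the query to Solr.
def normalize_wildcard_query(q)
  q.downcase
end

normalize_wildcard_query('Test*')  # => "test*"
```

Note this blunt approach lowercases every term, which is usually fine when the whole field is lowercased at index time.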
On Jun 30, 2009, at 1:51 PM, dar...@ontrenet.com wrote:
It seems the merge index request (admin or lucene tool) expects the
indexes to already be local.
Maybe in the future I can specify the URL to the remote indexes for
Solr
to merge. For now, I will find a way maybe using rsync or scp or