Hi,
I have been using Jetty on my linux/apache webserver for about 3 weeks now.
I decided that I should change to Tomcat after realizing I will be indexing
a lot of URLs; Jetty is suited to small production sites, as noted in the
wiki. I am running into this error:
Dear Solr/Lucene gurus,
I have run into a weird issue trying to use a negative condition in my query.
Parser: StandardQueryParser
My query: Field1:Val1 NOT Field2:Val2
Resolved as: Field1:Val1 -Field2:Val2
The above query never returns any documents, no matter how we use parentheses.
I did see some
try
Field1:Val1 AND (*:* NOT Field2:Val2), that should work ok
On Sun, Nov 14, 2010 at 9:02 AM, Viswa S svis...@hotmail.com wrote:
Dear Solr/Lucene gurus,
I have run into a weird issue trying to use a negative condition in my query.
Parser: StandardQueryParser
My query: Field1:Val1 NOT
Thanks for all the responses.
Govind: To answer your question, yes, all I want to search is plain text
files. They are located in NFS directories across multiple Solaris/Linux
storage boxes. The total storage is in hundreds of terabytes.
I have just got started with Solr and my understanding is
Move the solr.war file and the solrhome directory somewhere outside the Tomcat
webapps directory, e.g. /home/foo. Tomcat will generate webapps/solr automatically.
This is what I use, under $CATALINA_HOME/conf/Catalina/localhost/solr.xml:
<Context docBase="/home/foo/apache-solr-1.4.0.war" debug="0"/>
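A slightly fuller context fragment (the paths and the solr/home entry are assumptions for illustration, not from the thread) might look like:

```xml
<!-- $CATALINA_HOME/conf/Catalina/localhost/solr.xml -->
<Context docBase="/home/foo/apache-solr-1.4.0.war" debug="0" crossContext="true">
  <!-- tell Solr where its home directory lives, outside webapps -->
  <Environment name="solr/home" type="java.lang.String"
               value="/home/foo/solrhome" override="true"/>
</Context>
```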
Hi folks,
I'm using Solr 1.4.1 and I want to use TermsComponent for autocomplete.
The problem is, I can't get it to match strings with spaces in them. So to
say,
terms.fl=name&terms.lower=david&terms.prefix=david&terms.lower.incl=false&indent=true&wt=json
matches all strings starting with "david", but
terms.fl=name&terms.lower=david%20&terms.prefix=david%20&terms.lower.incl=false&indent=true&wt=json
doesn't match all strings starting with "david " (with a trailing space). Is it
meant to be that way?
This is about the fieldType of the name field. What is it? If it has
ShingleFilterFactory in it, then this is
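One common way to make TermsComponent return multi-word completions (a sketch only; the field name and filter parameters here are assumptions, not taken from the thread) is to add a ShingleFilterFactory to the index analyzer so that word pairs like "david smith" land in the terms dictionary:

```xml
<!-- Hypothetical autocomplete field type (Solr 1.4.x-style syntax) -->
<fieldType name="autocomplete" class="solr.TextField" positionIncrementGap="100">
  <analyzer type="index">
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <!-- emit unigrams and word bigrams, e.g. "david" and "david smith" -->
    <filter class="solr.ShingleFilterFactory" maxShingleSize="2" outputUnigrams="true"/>
  </analyzer>
</fieldType>
```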
Hi,
and thanks for your hints. I've done some additional research and found
that there doesn't really seem to be any possibility of an embedded solr
server in solrpy.
Jetty, then. It'd probably all be kinda easy if it weren't for the way
things are unbundled in Debian. I've recently posted to
I'm using Solr 1.4.1 and I'm willing to use TermsComponent
for AutoComplete.
The problem is, I can't get it to match strings with spaces
in them. So to
say,
terms.fl=name&terms.lower=david&terms.prefix=david&terms.lower.incl=false&indent=true&wt=json
matches all strings starting with david
Hi,
I have up to now focussed on Jetty as it's already bundled with solr.
The main issue there seems to be the way it's unbundled by Debian; I
figure things might be similar with Tomcat, depending on how entangled
configuration is there.
Before I dig deeper into the Tomcat option: would you
Hi Ahmet,
This is the fieldType for name:
<fieldType name="textgen" class="solr.TextField"
    positionIncrementGap="100">
  <analyzer type="index">
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <filter class="solr.StopFilterFactory" ignoreCase="true"
        words="stopwords.txt"
--- On Sun, 11/14/10, Parsa Ghaffari parsa.ghaff...@gmail.com wrote:
From: Parsa Ghaffari parsa.ghaff...@gmail.com
Subject: Re: Solr TermsComponent: space in term
To: solr-user@lucene.apache.org
Date: Sunday, November 14, 2010, 5:06 PM
Hi Ahmet,
This is the fieldType for name:
Alphanumeric + _ + % + .
So to say: John_Smith, John Smith, John_B._Smith and John 44 Smith
are all possible values.
On Sun, Nov 14, 2010 at 11:46 PM, Ahmet Arslan iori...@yahoo.com wrote:
On Sun, Nov 14, 2010 at 4:17 AM, Leonardo Menezes
leonardo.menez...@googlemail.com wrote:
try
Field1:Val1 AND (*:* NOT Field2:Val2), that should work ok
That should be equivalent to Field1:Val1 -Field2:Val2
You only need the *:* trick if all of the clauses of a boolean query
are negative.
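The equivalence Yonik describes can be summarized with a few query forms (illustrative only):

```
Field1:Val1 -Field2:Val2               matches: a positive clause is present
Field1:Val1 AND (NOT Field2:Val2)      inner clause is purely negative, so it matches nothing
Field1:Val1 AND (*:* NOT Field2:Val2)  *:* supplies the positive clause, so it matches
```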
Ok, thanks. It works now for title and description fields. :)
But now I also need it for the city, and I can't get that to work, even
though I'm doing the exact same thing (or so I think).
I now have the code below for the city field.
(I'm defining the city field twice in my data-config and schema.xml but
--- On Sun, 11/14/10, PeterKerk vettepa...@hotmail.com wrote:
From: PeterKerk vettepa...@hotmail.com
Subject: Re: full text search in multiple fields
To: solr-user@lucene.apache.org
Date: Sunday, November 14, 2010, 8:52 PM
Ok, thanks. it works now for title and description fields.
:)
both queries give me 0 results...
--
View this message in context:
http://lucene.472066.n3.nabble.com/full-text-search-in-multiple-fields-tp1888328p1900648.html
Sent from the Solr - User mailing list archive at Nabble.com.
both queries give me 0 results...
Then your fields are not populated. You can debug on /admin/dataimport.jsp
or /admin/schema.jsp.
Ok, more detail: I was testing using NoMergePolicy in Solr. As
Hoss pointed out in another thread, NoMergePolicy has no 0-argument
constructor, and so throws an exception while loading the core.
When there is no existing data/index/ directory, Solr creates a new
index/ directory at the
Hi,
Thank you! I got it working after you jarred my brain. Of course, the
location of the solr instance is arbitrary/logical to tomcat. Sheesh, I feel
kind of small, now. Anyway, I was able to clearly see my mistake from your
information.
As with all help I get from here I posted my
Ok, that makes sense ;)
but I don't understand why it's not indexed.
IMO, I've defined the city_search field the exact same as city in the
schema.xml:
<field name="city" type="string" indexed="true" stored="true"/>
<field name="city_search" type="string" indexed="true" stored="true"/>
<copyField source="city_search"
In addition, I had tried and since backed away from (on Solr) indexing
heavily while also searching on the same server. This would lock up
segments and searchers longer than the disk space would allow. I
think that part of Solr can be rewritten to better handle this N/RT
use case as there is no
The timed deletion policy is a bit too abstract, as is keeping a
numbered limit of commit points. How would one know what they're
rolling back to when num limit is defined?
I think committing to a name and being able to roll back to it in Solr
is a good feature to add.
On Fri, Nov 12, 2010 at
but I don't understand why it's not indexed.
Probably something wrong with data-config.xml.
So you can see that the city field DOES index some data,
whereas the
city_search and citytext_search have NO data at all...
Then populate these two fields from city via copyField. It is 100% legal.
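A minimal sketch of that copyField setup (field names from the thread; the type and attributes are assumptions for illustration):

```xml
<!-- schema.xml: populate the search field from city at index time -->
<field name="city" type="string" indexed="true" stored="true"/>
<field name="city_search" type="text" indexed="true" stored="false"/>
<copyField source="city" dest="city_search"/>
```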
This feature would make the ReplicationHandler more robust in its own
practice of reserving previous commit points, by pushing that code out
into Solr proper.
Jason Rutherglen wrote:
The timed deletion policy is a bit too abstract, as is keeping a
numbered limit of commit points. How would
Here is a separate configuration: use separate Solr instances for
indexing and querying, both pointing to the same data directory. A 'commit'
to the query Solr reloads the index. It works in read-only mode; for
production I would run the indexer and the querier under different
permissions so that
Yes, the ExtractingRequestHandler uses Tika to parse many file formats.
Solr 1.4.1 uses a previous version of Tika (0.6 or 0.7).
Here's the problem with Tika and extraction utilities in general: they
are not perfect. They will fail on some files. In the
ExtractingRequestHandler's case, there
nowhere (unless I overlooked it) do you ever populate city_search
in the first place; it's simply defined.
Also, I don't think (but check it) that copyField is chainable.
I don't *think* that
<copyField source="city" dest="city_search"/>
<copyField source="city_search" dest="citytext_search"/>
will
On Nov 14, 2010, at 3:02pm, Lance Norskog wrote:
Yes, the ExtractingRequestHandler uses Tika to parse many file
formats.
Solr 1.4.1 uses a previous version of Tika (0.6 or 0.7).
Here's the problem with Tika and extraction utilities in general:
they are not perfect. They will fail on some
I split my docs into 100 indexes and deployed them on 10 EC2 m2.4xlarge
instances as Solr shards, which means each instance has 10 Solr cores. A
search takes 4 to 10 seconds when I test with a hundred concurrent threads,
and now I have 1000 online users per second, so the user must wait for
In addition, my index has only two stored fields, id and price; the other
fields are only indexed. I increased the document and query caches. The EC2
m2.4xlarge instance has 8 cores and 68GB of memory. The total size of all
indexes is about 100GB.
Apologies for starting a new thread again; my mailing list subscription didn't
finalize until after Yonik's response.
Using Field1:Val1 AND (*:* NOT Field2:Val2) works, thanks.
Does my original query Field1:Value1 AND (NOT Field2:Val2) fall into "need
the *:* trick if all of the clauses of
Hi,
I have a requirement where a user enters the acronym of a word, and the
search results should come back for the expanded word. Let us say, if the
user enters 'TV', the search results should come for 'Television'.
Is the synonyms filter the way to achieve this?
Any inputs?
Regards,
Siva
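A synonym filter is indeed the usual way to handle this in Solr; a minimal sketch (the file name and placement are assumptions) maps the acronym to its expansion:

```
# synonyms.txt (hypothetical entry)
TV, Television
```

and in the field type's analyzer:

```xml
<filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt"
        ignoreCase="true" expand="true"/>
```

With expand="true" applied at index time, documents containing "Television" are also indexed under "TV", so a query for 'TV' matches them.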