Yes, this is fully supported.
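For instance, a minimal sketch of adding runtime-determined fields and later searching one of them (the field names and the customFields dictionary below are made up):

var document = new Document();
document.Add(new Field("Title", "Some title", Field.Store.YES, Field.Index.ANALYZED));
document.Add(new Field("Content", "Some content", Field.Store.YES, Field.Index.ANALYZED));
// Fields whose names are only known at runtime, e.g. from a Dictionary<string, string>.
foreach (var pair in customFields)
    document.Add(new Field(pair.Key, pair.Value, Field.Store.YES, Field.Index.ANALYZED));
// Later, search one of those fields by name.
var query = new TermQuery(new Term("SomeCustomField", "somevalue"));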
On 2012-08-07 13:45, Omri Suissa wrote:
Hi all,
We would like to add an unknown number of fields to a document and later
search on these fields; can we perform this task using Lucene.Net?
For example:
The doc will have the following fields:
Title,
Content,
Type,
Te
Hi,
Have you tried using "state industr*", i.e. having the wildcard within
the quotes?
// Simon
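For reference, a quick way to see how QueryParser actually interprets such input is to parse it and print the result (a sketch; the field name, analyzer, and version here are assumptions):

var parser = new QueryParser(Version.LUCENE_29, "CompanyName", new StandardAnalyzer(Version.LUCENE_29));
var query = parser.Parse("\"state industr*\"");
// Query.ToString() shows the parsed structure, e.g. whether the wildcard survived parsing.
Console.WriteLine(query.ToString());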
On 2012-07-12 15:15, Kohlhepp, Justin W () wrote:
I have an index of about 30M records. One of the fields contains
company names. I am using an out-of-the-box QueryParser to create
queries. My
sing the
KeywordAnalyzer, and I'm still not getting any results against that
NOT_ANALYZED field.
?
On Tue, Jun 26, 2012 at 5:52 PM, Lingam, ChandraMohan J <
chandramohan.j.lin...@intel.com> wrote:
Luke using keyword analyzer as default makes sense. However, in the
original post, there was a
Luke defaults to KeywordAnalyzer, which won't change your term in any way.
The QueryParser will still break up your query, so "Name:Jack Bauer"
would become (Name:Jack DefaultField:Bauer). I believe you can have
per-field analyzers (KeywordAnalyzer for Id, StandardAnalyzer for
everything else) us
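A sketch of that per-field setup using PerFieldAnalyzerWrapper (the "Id" and "Name" field names here are only examples):

var analyzer = new PerFieldAnalyzerWrapper(new StandardAnalyzer(Version.LUCENE_29));
analyzer.AddAnalyzer("Id", new KeywordAnalyzer());
// Use the same wrapper for both the IndexWriter and the QueryParser so indexing and querying agree.
var parser = new QueryParser(Version.LUCENE_29, "Name", analyzer);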
Hi,
The patch catches all exceptions thrown when calling Searchable.Search,
which appears in your stack trace. It should fix the crashing problem.
Have you considered updating to the latest version, 2.9.4.1, which
contains many bugfixes, including the one you mention?
// Simon
On 2012-06-26
14:28, vicente garcia wrote:
Thank you very much, it works!!
But what is the meaning of "field"?
Thanks a lot :)
On Fri, Jun 15, 2012 at 2:23 PM, Simon Svensson wrote:
var analyzer = new StandardAnalyzer(Version.LUCENE_29);
var textReader = new StringReader("hola mi nombre es Vicente");
// "field" is only a field name hint passed to the analyzer; any name works for plain tokenization.
var tokenStream = analyzer.TokenStream("field", textReader);
var terms = new List<string>();
var termAttribute = (TermAttribute)tokenStream.GetAttribute(typeof(TermAttribute));
while (tokenStream.IncrementToken())
    terms.Add(termAttribute.Term());
I presume that you mean a missing field, not a blank field. You can do
this by using TermRangeQuery and passing null for term values. A null
value means an open end ([A TO *] or [* TO Z]); two null values mean it
matches anything ([* TO *]). The main difference compared to
MatchAllDocsQuery is that [* TO *] only matches documents that actually
contain the field.
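In practice, a sketch of that difference (the "Category" field name is made up):

// [* TO *] only matches documents that actually contain the field.
var hasCategory = new TermRangeQuery("Category", null, null, true, true);
// Documents missing the field: everything, minus the ones that have it.
var missingCategory = new BooleanQuery();
missingCategory.Add(new MatchAllDocsQuery(), BooleanClause.Occur.MUST);
missingCategory.Add(hasCategory, BooleanClause.Occur.MUST_NOT);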
et/docs/client-api/querying/static-indexes/customizing-results-order
I saw that there is no SpanishAnalyzer; we only have a SpanishStemmer,
but I don't need a stemmer, I need a Spanish analyzer with its stop
words, etc.
Does anyone have another idea on how to index Spanish content?
Thank you
Welcome,
See Configuring index options[1] to specify a custom analyzer that can
handle Spanish content.
A quick check shows that Contrib.Analyzers does not contain a Spanish
analyzer. There is a SpanishStemmer available in the Snowball contrib.
You could also use a Spanish hunspell dictionary
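For what it's worth, a sketch of the Snowball route; the SnowballAnalyzer(name, stopWords) constructor is assumed from the 2.9-era contrib, and the stop-word list is only a tiny illustrative sample:

var spanishStopWords = new[] { "de", "la", "que", "el", "en", "y", "a", "los", "del", "las" };
var analyzer = new SnowballAnalyzer("Spanish", spanishStopWords);
// Pass this analyzer to the IndexWriter and QueryParser used for the Spanish content.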
Hi,
Export your database-based directory into a normal filesystem (using
Directory.Copy), and open it in Luke[1]. The Files tab will show what
the different files are used for, and which ones belong to old commits
and can be removed.
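A sketch of the copy step; "dbDirectory" stands in for whatever Lucene.Net.Store.Directory implementation wraps the database:

var fsDirectory = FSDirectory.Open(new System.IO.DirectoryInfo(@"C:\temp\index-copy"));
// Copies every file of the index to disk; the last argument controls whether the source directory is closed.
Lucene.Net.Store.Directory.Copy(dbDirectory, fsDirectory, false);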
Have you tried the latest version, 2.9.4?
// Simon
[1] h
Hi,
You could require all search terms (using QueryParser.SetDefaultOperator). This
would also cause misspelt words to give you empty search results (assuming
there's no matching misspelt indexed word in your data).
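A sketch of the default-operator change (the "ProductName" field and analyzer choice are assumptions):

var parser = new QueryParser(Version.LUCENE_29, "ProductName", new StandardAnalyzer(Version.LUCENE_29));
parser.SetDefaultOperator(QueryParser.AND_OPERATOR);
// "blue suede shoes" now requires all three terms instead of any one of them.
var query = parser.Parse("blue suede shoes");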
Another approach would be a faceted search; you could tag your products with
Hi,
You describe two separate problems; indexing speed and search issues.
Have you done any CPU profiling to determine where to begin looking for your
slow indexing speed? It sounds like you've ruled out an I/O bottleneck, but it
could still be a slow database you're reading from. Try simplify you
Hi,
This sounds a lot like a faceted search. You do need a Collector which grabs
reader+documentId pairs from a search, and then iterates over them to read all
the terms. This can be cached in memory using FieldCache, assuming that you
have only one term per field, or write a custom caching implementation th
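A sketch of such a Collector, assuming the 2.9-era method-based Collector API:

// using System.Collections.Generic; using Lucene.Net.Index; using Lucene.Net.Search;
public class ReaderDocCollector : Collector
{
    private IndexReader currentReader;
    public readonly List<KeyValuePair<IndexReader, int>> Hits = new List<KeyValuePair<IndexReader, int>>();

    public override void SetScorer(Scorer scorer) { } // scores are not needed here
    public override void SetNextReader(IndexReader reader, int docBase) { currentReader = reader; }
    public override void Collect(int doc) { Hits.Add(new KeyValuePair<IndexReader, int>(currentReader, doc)); }
    public override bool AcceptsDocsOutOfOrder() { return true; }
}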
> The site stops responding to requests eventually (browser sits
> there indefinitely loading) and only an app pool recycle fixes it. And
> we've no set of actions to manually trigger the "crash".
>
>
>
> On 23 April 2012 14:26, Simon Svensson wrote:
>
Hi,
This is a common exception which is handled within the QueryParser itself.
You're seeing a first chance exception which does not cause the crashes you're
experiencing.
Is this from a production system? Can you share the raw memory dump, or does it
contain sensitive information?
// Simon
-Original Message-
From: Simon Svensson [mailto:si
Hi,
You could accomplish this by adding several FileNumber fields. I'm
guessing that a regexp would suffice to extract the number from the
complete value.
var document = new Document();
document.Add(new Field("FileNumber", "ABC-12345", Field.Store.NO, Field.Index.NOT_ANALYZED));
// A second FileNumber field holding just the extracted number, e.g. via a regexp.
document.Add(new Field("FileNumber", "12345", Field.Store.NO, Field.Index.NOT_ANALYZED));
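Either form then matches the same document with a plain TermQuery (the values mirror the example above):

var byFullValue = new TermQuery(new Term("FileNumber", "ABC-12345"));
var byNumberOnly = new TermQuery(new Term("FileNumber", "12345"));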