Hi, I am a newbie to Lucene search. I have a somewhat complex query like this:
SELECT mat.name matName
FROM LIB_MATERIAL mat,
     LIB_MAT_TYPE matType,
     LIB_SHELF shelf,
     LIB_ROOM room,
     FZA_PERSON res
WHERE mat.shelf_id = shelf.record_id(+)
  AND shelf.room_id = room.record_id(+)
  AND
Hi,
Can you please shed some light on what your final architecture looks like?
Do you manually use the PayloadSpanUtil for each document separately?
How did you solve the problem with phrase results?
Thanks in advance for your time,
Eran.
On Tue, Nov 25, 2008 at 10:30 PM, Greg Shackles [EMAIL PROTECTED] wrote:
Diego Cassinera wrote:
Are you sure you are creating the fields with Field.Index.ANALYZED ?
Yes, my fields are all ANALYZED. (One was ANALYZED_NO_NORMS but changing it
to ANALYZED did not solve the problem)
I checked with the debugger, and the analyzer I use to update my index does
If I use MUST, the sentence will be Code1 AND Code2 AND Code3.
I would like Code1 OR Code2 OR Code3. Each document has only one code.
Ian Lea wrote:
Hi
Do you maybe need MUST rather than SHOULD?
--
Ian.
On Tue, Nov 25, 2008 at 11:41 AM, Albert Juhe [EMAIL PROTECTED]
wrote:
Hi,
You can use MUST at the end.
Using your code, something like:
codisFiltre=XX07_04141_00853#XX06_03002_00852#UX06_07019_02994
String[] codi = codisFiltre.split("#");
finalFilter = new BooleanFilter();
booleanFilter = new BooleanFilter();
for (int i = 0; i < codi.length; i++) {
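Two small Java bugs in the snippet above are worth calling out: `String.split` takes a `String` regex, not a `char`, and the `for` loop is missing its `<` comparison. A minimal, self-contained sketch of the OR semantics being asked for (the field name `code` is an assumption, and this builds a query string rather than the `BooleanFilter` from the post):

```java
public class OrQueryBuilder {
    // Builds "code:A OR code:B OR code:C" from a '#'-separated code list.
    static String buildOrQuery(String field, String codisFiltre) {
        String[] codi = codisFiltre.split("#"); // split takes a String, not a char
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < codi.length; i++) { // note the '<' the post was missing
            if (i > 0) sb.append(" OR ");
            sb.append(field).append(':').append(codi[i]);
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(buildOrQuery("code",
                "XX07_04141_00853#XX06_03002_00852#UX06_07019_02994"));
    }
}
```

In Lucene itself the same OR semantics come from adding each clause to a BooleanQuery with BooleanClause.Occur.SHOULD; MUST gives the "Code1 AND Code2 AND Code3" behaviour complained about above.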
Don't write this query in Lucene <G>. Step back, take a deep breath,
and take off your database hat.
Lucene is NOT a RDBMS, it is a text search engine. You'll
drive yourself crazy trying to make Lucene into one. AND
you'll be very dissatisfied with the results.
Instead, think of how you can
Your problem here is probably tokenization at query time.
Queries like 110_a:library a* would search field 110_a for
library and your default field for a*. You might try
+110_a:library +110_a:a*, but I doubt that's really
what you want since there's no guarantee that the terms
will be next to
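The default-field behaviour described above can be pictured with a toy parser. This is not Lucene's QueryParser, just a sketch of the rule it applies: a clause without an explicit `field:` prefix falls to the default field.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class DefaultFieldDemo {
    // Maps each whitespace-separated clause to the field it would be searched in.
    static Map<String, String> fieldsFor(String query, String defaultField) {
        Map<String, String> result = new LinkedHashMap<>();
        for (String clause : query.trim().split("\\s+")) {
            int colon = clause.indexOf(':');
            if (colon > 0) {
                result.put(clause.substring(colon + 1), clause.substring(0, colon));
            } else {
                result.put(clause, defaultField); // no prefix: default field
            }
        }
        return result;
    }

    public static void main(String[] args) {
        // In "110_a:library a*" only "library" is scoped to 110_a; the "a*"
        // clause falls to the default field, which is the surprise noted above.
        System.out.println(fieldsFor("110_a:library a*", "contents"));
    }
}
```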
Why not index into different fields based upon the file type? That would be
MUCH easier, and PerFieldAnalyzerWrapper would be your friend. For
example:
Doc1
text_html: some text here
type: html
Doc2
text_jsp: some jsp text here
type: jsp
etc...
Now your PerFieldAnalyzerWrapper has your
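PerFieldAnalyzerWrapper is essentially a map from field name to analyzer with a default fallback. A plain-Java sketch of that dispatch idea (the "analyzers" here are just tokenizing functions, not the Lucene classes):

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Function;

public class PerFieldDispatch {
    private final Function<String, List<String>> defaultAnalyzer;
    private final Map<String, Function<String, List<String>>> perField = new HashMap<>();

    PerFieldDispatch(Function<String, List<String>> defaultAnalyzer) {
        this.defaultAnalyzer = defaultAnalyzer;
    }

    // Mirrors the shape of PerFieldAnalyzerWrapper.addAnalyzer(field, analyzer).
    void addAnalyzer(String field, Function<String, List<String>> analyzer) {
        perField.put(field, analyzer);
    }

    // Fields with a registered analyzer use it; everything else gets the default.
    List<String> tokenize(String field, String text) {
        return perField.getOrDefault(field, defaultAnalyzer).apply(text);
    }

    public static void main(String[] args) {
        Function<String, List<String>> whitespace = s -> Arrays.asList(s.split("\\s+"));
        Function<String, List<String>> lowercase =
                s -> Arrays.asList(s.toLowerCase().split("\\s+"));
        PerFieldDispatch wrapper = new PerFieldDispatch(whitespace);
        wrapper.addAnalyzer("text_html", lowercase); // the html field gets lowercasing
        System.out.println(wrapper.tokenize("text_html", "Some Text"));
        System.out.println(wrapper.tokenize("text_jsp", "Some Text"));
    }
}
```

With per-type fields as suggested above, a single PerFieldAnalyzerWrapper can route each field to the right analyzer at both index and query time.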
Hi,
Below is a document in Lucene:
-------------------------------------
Field    Value
-------------------------------------
ID       1
110_a    library and information
-------------------------------------
I need to search with starts-with logic; below are the search cases for the
Hi Erik,
Thanks a lot for your reply. You are right, I want a different analyzer on the
same field depending upon the other fields in the document.
For Example
Doc1
Text:Some text here
type:html
Doc2
Text: Some jsp text here
type:jsp
Now, depending upon the type, I want to use a different analyzer for
Sure, I'm happy to give some insight into this. My index itself has a few
fields - one that uniquely identifies the page, one that stores all the text
on the page, and then some others to store characteristics. At indexing
time, the text field for each document is manually created by
We use Lucene at our library for indexing from different sources into
the same logical index. The sources are very diverse and are prioritized
differently at index-time with document boosts. However, different
groups of users (or individual users for that matter) have different
preferences for the
Hello,
I have two documents in my Lucene index:
Document:
  stored/uncompressed,indexed            tagId: 5117
  stored/uncompressed                    tagName: Wholesale Hot Dog Stand Equipment
  stored/uncompressed,indexed,tokenized  tagKey: wholesale hot dog stand equipment
  stored/uncompressed
Hi,
I think you can achieve your goal by using StandardAnalyzer during both indexing
and search, and using a WildcardQuery for the query. I think it will work!
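For what it's worth, a WildcardQuery pattern can be read as `*` matching any run of characters and `?` matching any single character. A toy matcher translating such a pattern into a regex (an illustration of the matching semantics only, not Lucene's implementation):

```java
import java.util.regex.Pattern;

public class WildcardDemo {
    // Translates a Lucene-style wildcard pattern ('*' and '?') into a regex
    // and tests a single term against it.
    static boolean matches(String pattern, String term) {
        StringBuilder re = new StringBuilder();
        for (char c : pattern.toCharArray()) {
            if (c == '*') re.append(".*");
            else if (c == '?') re.append('.');
            else re.append(Pattern.quote(String.valueOf(c)));
        }
        return Pattern.matches(re.toString(), term);
    }

    public static void main(String[] args) {
        System.out.println(matches("lib*", "library"));   // starts-with logic
        System.out.println(matches("lib*", "wholesale"));
    }
}
```

For pure starts-with cases a PrefixQuery (the `lib*` form) is the cheaper choice, since a leading `*` forces a scan of the whole term dictionary.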
naveen.a wrote:
Hi,
Below is a document in lucene
-
Field Value
The scariest part is that you will have to score each and every
document that has a source, probably all of the documents in your
corpus. So if you have a very large number of documents it might be a
bit expensive. Also, appending this query for boost only means that
you will get
Alex,
if you have length normalization turned on then the length (the number
of tokens and perhaps even the distance between the tokens) of the
second document is much greater than the length of the first document.
The length is the complete number of tokens in the field, i.e. if you
add
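As a point of reference, the default Similarity in Lucene of this era computes the length norm as 1/sqrt(numTerms), so shorter fields score higher (this sketch ignores the lossy single-byte encoding that norms then go through):

```java
public class LengthNormDemo {
    // DefaultSimilarity-style length norm: 1 / sqrt(number of tokens in the field).
    static float lengthNorm(int numTerms) {
        return (float) (1.0 / Math.sqrt(numTerms));
    }

    public static void main(String[] args) {
        System.out.println(lengthNorm(4));    // 0.5
        System.out.println(lengthNorm(100));  // 0.1
    }
}
```

This is why the longer second document described above scores lower: its many extra tokens shrink its norm relative to the short first document.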