Perfect! Thank you very much! Exactly what I needed, and simple!
Tony
-Original Message-
From: Uwe Schindler
Sent: Tuesday, November 14, 2023 05:51
To: java-user@lucene.apache.org
Subject: Re: StandardQueryParser and numeric fields
Hi,
By default the standard query parser has no idea about field types (and
it cannot, because it does not know the schema of your index). If you
want to allow searching in non-text fields (anything other than a
TextField; even a plain StringField breaks easily), you need to customize it.
There are 2
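Concretely, one way to make the flexible query parser aware of point fields is its PointsConfig map. A minimal sketch (the field name eventIdNum is taken from the question below; the default-field name is a placeholder):

```java
import java.text.NumberFormat;
import java.util.HashMap;
import java.util.Locale;
import java.util.Map;

import org.apache.lucene.queryparser.flexible.standard.StandardQueryParser;
import org.apache.lucene.queryparser.flexible.standard.config.PointsConfig;
import org.apache.lucene.search.Query;

public class NumericParserExample {
    public static Query parse(String queryText) throws Exception {
        StandardQueryParser parser = new StandardQueryParser();
        // Tell the parser which fields are point-indexed and how to parse their values,
        // so "eventIdNum:3001" becomes a point query instead of a useless TermQuery.
        Map<String, PointsConfig> points = new HashMap<>();
        points.put("eventIdNum",
                new PointsConfig(NumberFormat.getIntegerInstance(Locale.ROOT), Integer.class));
        parser.setPointsConfigMap(points);
        return parser.parse(queryText, "defaultField");
    }

    public static void main(String[] args) throws Exception {
        System.out.println(parse("eventIdNum:3001"));
    }
}
```

With this configuration, the same query string that works in Luke produces a point query against the IntPoint-indexed field.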
Hello,
I'm banging my head at this point, hoping someone can help me.
I can't get StandardQueryParser to work on numeric fields. Luke v9.8.0
finds the records for me. Example search query string in Luke that works:
eventIdNum:3001
Here is my code:
Query
Hi Michael.
Thanks for the reply.
As I said in the opening statement,
I need to move away from reading a file into memory before indexing it.
The use case here is files 2+ GB in size.
I thought streaming the file to be indexed is the only alternative
to reading the full file in RAM then indexi
Sorry, your problem statement makes no sense: you should be able to
store field data in the index without loading all your documents into
RAM while indexing. Maybe there is some constraint you are not telling
us about? Or you may be confused. In any case highlighting requires
the document in its uni
Hi,
I am converting my application from
reading documents into memory, then indexing the documents
to streaming the documents to be indexed.
I quickly found out this required that the field NOT be stored.
I then quickly found out that my highlighting code requires the field to
Many of us already answered in the dev mailing list.
Uwe
On 25.06.2022 at 05:19, Yichen Sun wrote:
-- Forwarded message -
From: Yichen Sun
Date: Sat, Jun 25, 2022, 11:14
Subject: Finding out which fields matched the query
To: , , <
java-user@lucene.apache.org>
Hello!
I’m a MSCS student from
What is the reason you need the matched fields? Maybe your use case can be
solved using something completely different from knowing which fields were matched.
> On 25.06.2022 at 06:58, Yichen Sun wrote:
>
> Hello!
>
> I’m a MSCS student from BU and learning to use Lucene. Re
Hello!
I’m a MSCS student from BU and learning to use Lucene. Recently I have been
trying to output the fields matched by a query. For example, for one
document, there are 10 fields and 2 of them match the query. I want to get
the names of these fields.
I have tried using the explain() method and getting
TL;DR: Why is score addition recommended in examples when using these?
Doesn't that make the resulting score sensitive to the number of boolean
clauses?
I've been reading about feature fields (in particular the "dynamic"
feature field you get with LongPoint.newDistanc
Hi,
I'm trying to write a custom DoubleValuesSource for use with a
FunctionScoreQuery instance.
To generate the final score of a document I need to:
1) Read from three indexed docValue fields and
2) Use the score of the wrapped query passed in to the FunctionScoreQuery
instance
For examp
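One way to avoid hand-writing a full DoubleValuesSource subclass is Lucene's expressions module, which compiles a formula over docValues fields and the wrapped query's score into a DoubleValuesSource for FunctionScoreQuery. A sketch under the assumption that the three fields are double docValues named weight1..weight3 (hypothetical names) and that the combination is simply their sum times the score:

```java
import org.apache.lucene.expressions.Expression;
import org.apache.lucene.expressions.SimpleBindings;
import org.apache.lucene.expressions.js.JavascriptCompiler;
import org.apache.lucene.queries.function.FunctionScoreQuery;
import org.apache.lucene.search.DoubleValuesSource;
import org.apache.lucene.search.Query;

public class CustomScoreExample {
    // Combine three double docValues fields with the wrapped query's score.
    public static Query score(Query wrapped) throws Exception {
        Expression expr = JavascriptCompiler.compile("_score * (w1 + w2 + w3)");
        SimpleBindings bindings = new SimpleBindings();
        bindings.add("_score", DoubleValuesSource.SCORES);     // score of the wrapped query
        bindings.add("w1", DoubleValuesSource.fromDoubleField("weight1"));
        bindings.add("w2", DoubleValuesSource.fromDoubleField("weight2"));
        bindings.add("w3", DoubleValuesSource.fromDoubleField("weight3"));
        return new FunctionScoreQuery(wrapped, expr.getDoubleValuesSource(bindings));
    }
}
```

If the final formula is more involved than an expression can represent, subclassing DoubleValuesSource directly is still the fallback; this sketch just shows the shortest path.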
e e.g.
string and numeric values from the original document, but not schema level
information like whether offsets/positions are indexed into postings and
term vectors for each field, or not. That would be safe, if you are trying
to avoid the cost of retrieving the full values for all fields from your
ba
e a high performance cost so I'd like to avoid it if I can (or I might not
have the original value of all fields available). Do you think it would work to
just reconstruct the values for the field being modified, or am I likely to
just run into more issues by modifying a loaded Document?
Hi Albert,
Unfortunately, you have fallen into a common and sneaky Lucene trap.
The problem happens because you loaded a Document from the index's stored
fields (the one you previously indexed) and then tried to modify that one
and re-index.
Lucene does not guarantee that this will
Hi,
I'm upgrading a project to lucene 8.5.2 which had been using 3.0.0.
Some tests are failing with a strange issue. The gist of it is, we create
fields that need position and offset information. Inserting one field works ok,
but then searching for the document and adding another valu
Hi John,
A TermQuery produces a scorer that can compute similarity for a given term
value against a given field, in the context of the index, so as you say, it
produces a score for one field.
If you want to match a given term value across multiple fields, indeed you
could use a BooleanQuery with
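The BooleanQuery approach mentioned above can be sketched like this (field names are placeholders); each per-field TermQuery becomes a SHOULD clause, and the per-field scores are summed. DisjunctionMaxQuery is the alternative if you want the max of the per-field scores instead of the sum:

```java
import org.apache.lucene.index.Term;
import org.apache.lucene.search.BooleanClause;
import org.apache.lucene.search.BooleanQuery;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.TermQuery;

public class MultiFieldTermExample {
    // Match the given term value in any of the given fields.
    public static Query anyField(String term, String... fields) {
        BooleanQuery.Builder b = new BooleanQuery.Builder();
        for (String f : fields) {
            // SHOULD: at least one field must match; matching fields add to the score.
            b.add(new TermQuery(new Term(f, term)), BooleanClause.Occur.SHOULD);
        }
        return b.build();
    }

    public static void main(String[] args) {
        System.out.println(anyField("lucene", "title", "body"));
    }
}
```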
Hi,
I have a question regarding how Lucene computes document similarities from
field similarities.
Lucene's scoring documentation mentions that scoring works on fields and
combines the results to return documents. I'm assuming fields are given
scores, and those scores are simply a
RE: Get distinct fields values from lucene index
Hello Michael,
Thanks for the response,
I have tried the approach suggested by you (TermsEnum) but it is not working for
me. I have used the code below.
String field = "address";
try (IndexReader reader = Utils.getIndexReader(indexDirectoryP
;
}
From: Michael Sokolov
Sent: Friday, November 22, 2019 8:11:25 PM
To: java-user@lucene.apache.org
Subject: Re: Get distinct fields values from lucene index
In Solr and ES t
In Solr and ES this is done with faceting and aggregations,
respectively, based on Lucene's low-level APIs. Have you looked at
TermsEnum? You can use that to get all distinct terms for a segment,
and then it is up to you to coalesce terms across segments ("leaves").
On Thu, Nov 21, 2019 at 1:15 AM
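A sketch of the TermsEnum approach described above (Lucene 8+ API; the demo builds a throwaway in-memory index over a hypothetical "address" field):

```java
import java.io.IOException;
import java.util.TreeSet;

import org.apache.lucene.analysis.core.WhitespaceAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.StringField;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.index.LeafReaderContext;
import org.apache.lucene.index.Terms;
import org.apache.lucene.index.TermsEnum;
import org.apache.lucene.store.ByteBuffersDirectory;
import org.apache.lucene.store.Directory;
import org.apache.lucene.util.BytesRef;

public class DistinctTermsExample {

    // Collect distinct terms of a field across all segments ("leaves").
    public static TreeSet<String> distinct(IndexReader reader, String field) throws IOException {
        TreeSet<String> values = new TreeSet<>(); // coalesces duplicates across segments
        for (LeafReaderContext leaf : reader.leaves()) {
            Terms terms = leaf.reader().terms(field);
            if (terms == null) continue;          // field absent in this segment
            TermsEnum te = terms.iterator();
            for (BytesRef t = te.next(); t != null; t = te.next()) {
                values.add(t.utf8ToString());
            }
        }
        return values;
    }

    public static TreeSet<String> demo() throws IOException {
        Directory dir = new ByteBuffersDirectory();
        try (IndexWriter w = new IndexWriter(dir, new IndexWriterConfig(new WhitespaceAnalyzer()))) {
            for (String city : new String[] {"paris", "london", "paris"}) {
                Document doc = new Document();
                doc.add(new StringField("address", city, Field.Store.NO));
                w.addDocument(doc);
            }
        }
        try (DirectoryReader reader = DirectoryReader.open(dir)) {
            return distinct(reader, "address");
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println(demo());
    }
}
```

The per-segment iteration is what the reply means by coalescing terms across "leaves": each segment's TermsEnum is already deduplicated and sorted, so a sorted set (or a merge) combines them.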
Hello,
I am using Lucene in my organization. I want to know how I can get distinct
values from a Lucene index. I have tried the “GroupingSearch” API but it doesn’t
serve the purpose: it gives all documents that contain the distinct values. I have
used the code below.
final GroupingSearch groupingSearch =
What version of Solr? In Solr 8.2 there will be a tool to facilitate this kind
of analysis - see SOLR-13512. In the meantime, if you’re on Solr 8.x you should
be able to easily back port this change to your version (7x should be possible
too, but with more changes).
> On 1 Jul 2019, at 11:23, R
Whoa.
First, it should be pretty easy to figure out what fields are large, just look
at your input documents. The fdt files are really simple, they’re just the
compressed raw data. Numeric fields, for instance, are just character data in
the fdt files. We usually see about a 2:1 ratio. There’s
Hi Rob,
The codec records per docid how many bytes each document consumes -- maybe
instrument the codec's sources locally, then open your index and have it
visit stored fields for every doc in the index and gather stats?
Or, to avoid touching Lucene level code, you could make a small tool
Hello,
We are currently trying to investigate an issue where the index size is
disproportionately large for the number of documents. We see that the .fdt
file is more than 10 times the regular size.
Reading the docs, I found that this file contains the fielddata.
I would like to find the docum
Can you create a scoring scenario that counts the number of fields in
which a term occurs and rank by that (descending) with some kind of
post-filtering?
On Fri, Apr 19, 2019 at 11:24 AM Valentin Popov wrote:
>
> Hi,
> I trying find the way, to search all docs has equals term on
That is not possible, because it eliminates the flexibility of fields; I need to
search old data without reindexing.
Thanks.
On Sat, 20 Apr 2019 at 03:12, Tomoko Uchida wrote:
> Hi,
>
> I'm not sure there are better ways to meet your requirement by
> querying, but how about considering
Hi,
I'm not sure there are better ways to meet your requirement by
querying, but how about considering static approaches?
I would index an auxiliary field which has binary values (0/1 or
"T"/"F") representing "has equals term on different fields"
so that you c
Hi,
I'm trying to find a way to search for all docs that have an equal term in
different fields. Like
doc1 {"foo":"master", "bar":"master"}
doc2 {"foo":"test", "bar":"master"}
The result should be doc1 only.
Right now, I'm ge
Hi Luís,
If the contents of the files don't change, one solution is to store the text
parsed by Tika in compressed form (~7% of the extracted text size).
When updating the document, just fetch the old one with the contents ready
(compressed) and update the other fields that you need.
Best,
Marcio
http
Thank you, Erick.
Unfortunately we need to index those fields.
Currently we do not store text because of storage requirements and it is
slow to extract it again.
Thank you for the tips.
Luis
On Wed, 13 Feb 2019 18:13, Erick Erickson wrote: If (and only if) the fields you need to update are
If (and only if) the fields you need to update are single-valued,
docValues=true, indexed=false, you can do in-place update of the DV
field only.
Otherwise, you'll probably have to split the docs up. The question is
whether you have evidence that reindexing is too expensive.
If you do ne
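On the Lucene level, the single-valued docValues case maps to IndexWriter.updateNumericDocValue, which rewrites only the docValues for the matching documents instead of deleting and re-adding them. A sketch with hypothetical field names ("id", "viewCount"):

```java
import java.io.IOException;

import org.apache.lucene.analysis.core.WhitespaceAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.NumericDocValuesField;
import org.apache.lucene.document.StringField;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.index.MultiDocValues;
import org.apache.lucene.index.NumericDocValues;
import org.apache.lucene.index.Term;
import org.apache.lucene.store.ByteBuffersDirectory;
import org.apache.lucene.store.Directory;

public class InPlaceDvUpdateExample {
    public static long demo() throws IOException {
        Directory dir = new ByteBuffersDirectory();
        try (IndexWriter w = new IndexWriter(dir, new IndexWriterConfig(new WhitespaceAnalyzer()))) {
            Document doc = new Document();
            doc.add(new StringField("id", "doc1", Field.Store.NO));
            doc.add(new NumericDocValuesField("viewCount", 7L));
            w.addDocument(doc);
            w.commit();
            // In-place update: only the docValues are rewritten; the large
            // text fields of the document are left untouched.
            w.updateNumericDocValue(new Term("id", "doc1"), "viewCount", 42L);
        }
        try (DirectoryReader r = DirectoryReader.open(dir)) {
            NumericDocValues dv = MultiDocValues.getNumericValues(r, "viewCount");
            if (dv != null && dv.advanceExact(0)) { // single-doc demo index: docid 0
                return dv.longValue();
            }
            return -1L;
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println(demo());
    }
}
```

This only works for fields that exist purely as docValues; anything indexed or stored still requires the delete-and-re-add cycle described above.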
Hi all,
Lucene 7 still deletes and re-adds docs when an update operation is done,
as I understand it.
When docs have dozens of fields and one of them is large text content
(extracted by Tika) and if I need to update some other small fields, what
is the best approach to not reindex that large text
I'm working on a tool that lists all fields in the index, including an
explicit list of all dynamic fields.
I tried GET `/solr/mycore/schema/fields` Schema API however without any
luck. On the other hand there is undocumented API used by Solr UI: GET
`/solr/mycore/admin/luke?wt=json` which
Hi,
Could you ask your question at ManifoldCF user list?
Kind Regards,
Furkan KAMACI
On Wed, Jan 9, 2019 at 6:19 PM Erick Erickson
wrote:
> You'd probably get more knowledgeable info from the Manifold
> folks, I don't know how many people on this list _also_ use
> Mainfold...
>
> Best,
> Erick
You'd probably get more knowledgeable info from the Manifold
folks, I don't know how many people on this list _also_ use
Mainfold...
Best,
Erick
On Wed, Jan 9, 2019 at 5:48 AM subasini wrote:
>
> Hi
> I am using manifoldcf 2.10 and Solr 7.6.0.
> I can crawl my website and indexing done in Solr s
Hi
I am using manifoldcf 2.10 and Solr 7.6.0.
I can crawl my website and indexing is done in Solr successfully.
Now I want to send one key-value pair from manifoldcf which should appear in
Solr.
For different websites, the value will be different so that I can use the
same for filtering in my solr qu
are indexed and stored fields treated by Lucene w.r.t space and
> performance?
>
> Is there any performance hit with stored fields which are indexed?
>
>
>
> Lucene Version: 5.3.1
>
>
>
> Assumption:
>
> Stored fields are just simple strings (not huge documents
Hi
How are indexed and stored fields treated by Lucene w.r.t space and
performance?
Is there any performance hit with stored fields which are indexed?
Lucene Version: 5.3.1
Assumption:
Stored fields are just simple strings (not huge documents)
Example:
Data: [101, Gold]; [102
Hi all,
Regarding the configuration of every field (stored? analyzed? sort needed?
numeric?), Elasticsearch designed cluster state to hold these
configurations index-wise; Solr has those configurations in XML format.
If we have data center in multiple locations, is there any better way of
Use a multi-field query (Lucene's MultiFieldQueryParser; in Elasticsearch, a multi_match query).
Like this;
{
"multi_match": {
"query":"quick brown fox",
"fields": [ "title", "body" ]
}
}
On Mon, 12 Feb 2018 at 22:04 Dominik Safaric
wrote:
> Unfortunately you'v
ucene internally store multi valued fields and is it
possible to retrieve them in the same order as they were stored? In particular,
I'd like to retrieve a multi valued keyword field in such a way.
Kind regards,
Dominik
> On 12 Feb 2018, at 19:34, Adrien Grand wrote:
>
> Filteri
ng Hamming distance as a similarity measure applies in this case
> as well.
>
> What I'm concerned with is the following: in the second (the scoring) phase
> I'd like to score documents using all fields of the *fine_grained* array of
> keywords. How can I effectively retrieve these
e_grained *field values, which is an array of keywords. A similar
method using Hamming distance as a similarity measure applies in this case
as well.
What I'm concerned with is the following: in the second (the scoring) phase
I'd like to score documents using all fields of the *fine_grain
Whether this is doable is going to depend on what you mean by "match[ing]
documents according to criteria X". Can you give an example?
On Fri, 9 Feb 2018 at 14:47, Dominik Safaric wrote:
> Hi,
>
> I am intending to implement a custom Query using Lucene 6.x and due to the
> lack of documentat
Hi,
I am intending to implement a custom Query using Lucene 6.x and due to the lack
of documentation concerned with a particular topic I have the following
questions.
The query is expected to implement a two-phase search, in the sense that during
the first run it matches documents according t
You'll just have to add additional StoredField instances for all those
facet fields as well.
The FacetField is consumed as an inverted field and not directly stored,
though you could do some work and reconstruct it from the binary doc values
that the facet store.
Mike McCandless
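A sketch of adding a StoredField alongside the facet field, as suggested above (the "category_stored" name is a placeholder; FacetsConfig.build(Document) is the variant used with SortedSetDocValuesFacetField):

```java
import org.apache.lucene.document.Document;
import org.apache.lucene.document.StoredField;
import org.apache.lucene.facet.FacetsConfig;
import org.apache.lucene.facet.sortedset.SortedSetDocValuesFacetField;

public class FacetStoreExample {
    // Build a doc whose facet value can also be retrieved at search time.
    public static Document build(FacetsConfig config, String category) throws Exception {
        Document doc = new Document();
        doc.add(new SortedSetDocValuesFacetField("category", category)); // faceting only
        doc.add(new StoredField("category_stored", category));           // plain retrieval
        return config.build(doc); // translates the facet field into its indexed form
    }
}
```

The stored copy survives a plain IndexSearcher load, so it can be used to rebuild the facet fields before re-adding an updated document.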
Yes, when I load the doc plainly using IndexSearcher, I get the doc,
but without the special faceted fields:
name = firstDoc (stored,indexed,tokenized,omitNorms,indexOptions=DOCS)
category = cars (stored,indexed,tokenized)
But I need all those faceted fields somehow, such as when I was saving
the
luesFacetField, but problem
> is that when I load the document and then update it in the index, these
> faceted fields disappear, because they were not loaded in a plain way.
>
> -Vjeran
>
> On Fri, Sep 1, 2017 at 3:02 PM, Michael McCandless
> wrote:
> > You should separately
category));
Document finalDoc = facetConfig.build(doc);
So you see, "category" is a faceted field. And as I said, I can do
faceted search thanks to this SortedSetDocValuesFacetField, but the problem
is that when I load the document and then update it in the index, these
faceted fields disappear, because they were n
You should separately add those fields to your document, using StoredField,
if you want to retrieve their values at search time.
Mike McCandless
http://blog.mikemccandless.com
On Thu, Aug 31, 2017 at 1:29 PM, Vjeran Marcinko
wrote:
> I zeroed in the problem with my updating documents hav
Hi!
If you want to read faceted fields you need to search through a facets
collector. For example like this:
FacetsCollector facetsCollector = new FacetsCollector();
FacetsCollector.search(indexSearcher, query, pageSize, facetsCollector);
FastTaxonomyFacetCounts customFastFacetCounts = new
I zeroed in on the problem with my updating documents having facet
fields... What I need is a way to load a document with all the fields that
existed when I saved it, meaning, together with the facet
fields.
Anyway, here's the example.
When I add my document to index, my document is hav
Okay, that makes sense.
Thanks!
On Tue, Aug 22, 2017 at 9:58 AM, Uwe Schindler wrote:
> Hi,
>
> If you use NumericField you can only use NumericRangeQuery to search on
> them (also for single value query aka TermQuery). The same applies for
> Lucene 6s point fields. Most query
Hi,
If you use NumericField you can only use NumericRangeQuery to search on them
(also for single value query aka TermQuery). The same applies for Lucene 6s
point fields. Most query parsers have no schema information, so they can only
create Term* not Numeric* query types, unless you subclass
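For point-indexed numerics, the query has to be built through the point classes; a plain TermQuery on the same field name silently matches nothing. A minimal sketch:

```java
import org.apache.lucene.document.IntPoint;
import org.apache.lucene.search.Query;

public class PointQueryExample {
    // For a field indexed as an IntPoint, queries must be built via the
    // point API; a TermQuery on the same field silently matches nothing.
    public static Query exact(String field, int value) {
        return IntPoint.newExactQuery(field, value);
    }

    public static Query range(String field, int min, int max) {
        return IntPoint.newRangeQuery(field, min, max);
    }
}
```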
Hello,
I'm new to this list and a Lucene n00b so I'm hoping you all can help me.
I'm consuming an index that was built with version 4.7.2. The index has
numeric fields. When I try to search on those fields I get no result. I
opened the index up with Luke and it seems there is
Hi,
Note that If you are using Lucene directly, 5.x introduced LUCENE-6064 [1]
[2], which adds checks to ensure that the sort field has a corresponding
DocValue of the expected type. Indexed fields can only be used for sorting
via an UninvertingReader, at a cost of increased heap usage [3]. Solr
1> Is it correct that stored fields can only be sorted on if they become a
DocValue field in 5.x
no. Indexed-only fields can still be used to sort. DocValues are just more
efficient at load time and don't consume as much of the Java heap.
Essentially this latter can be thought of as mo
Hi,
I am in the process of updating a large index from Lucene 4.x to 5.x and have
two questions related to the sorting order.
1. Is it correct that stored fields can only be sorted on if they become a
DocValue field in 5.x?
2. When "updating" stored fields to DocValue fields , is i
o you get know which are the matching
> fields in a document for a query. In my case, I tried DisjunctionMaxQuery
> with tiebreaking matcher as
> 0.01f. what is the meaning of this argument?. After searching with the
> DisjunctionMaxQuery,
> I tried explain method of the searcher
Hi Adrien,
Using Explanation object, how do you get know which are the matching fields
in a document for a query. In my case, I tried DisjunctionMaxQuery with
tiebreaking matcher as
0.01f. what is the meaning of this argument?. After searching with the
DisjunctionMaxQuery,
I tried explain
everyone,
>
> To start, we are using Lucene 4.3.
>
> To search, we prepare several queries and combine these into a
> BooleanQuery.
> What we are looking for is a way to determine on which specific fields a
> certain document matched.
> For example, I create 2 queries: one t
> ?*
I'd assume that it's worse, but in the case of Elasticsearch, since
they already wanted to store the entire source document for other
reasons, storing both the source document *and* the stored fields was
wasteful.
TX
--
asami <
aravinththangas...@gmail.com> wrote:
> Thanks Adrien
>
> On Wed, Jun 21, 2017 at 5:33 PM, Adrien Grand wrote:
>
>> The file is mapped when the index reader is open. Retrieving one or more
>> fields always requires a single disk seek since all values for a given
>> docum
Thanks Adrien
On Wed, Jun 21, 2017 at 5:33 PM, Adrien Grand wrote:
> The file is mapped when the index reader is open. Retrieving one or more
> fields always requires a single disk seek since all values for a given
> document are store together, just make sure to perform a singl
The file is mapped when the index reader is open. Retrieving one or more
fields always requires a single disk seek since all values for a given
document are store together, just make sure to perform a single call to
IndexReader.document with the list of fields that you want to retrieve
rather than
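The single-call pattern looks like this (field names are hypothetical; the demo indexes one document in memory and fetches two of its three stored fields in one lookup):

```java
import java.util.HashSet;
import java.util.Set;

import org.apache.lucene.analysis.core.WhitespaceAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.StoredField;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.store.ByteBuffersDirectory;
import org.apache.lucene.store.Directory;

public class FieldSubsetExample {

    // One stored-fields lookup retrieving both fields at once,
    // instead of one lookup per field.
    public static Document load(IndexReader reader, int docId) throws Exception {
        Set<String> wanted = new HashSet<>();
        wanted.add("title");
        wanted.add("body");
        return reader.document(docId, wanted);
    }

    public static String demo() throws Exception {
        Directory dir = new ByteBuffersDirectory();
        try (IndexWriter w = new IndexWriter(dir, new IndexWriterConfig(new WhitespaceAnalyzer()))) {
            Document doc = new Document();
            doc.add(new StoredField("title", "hello"));
            doc.add(new StoredField("body", "world"));
            doc.add(new StoredField("unwanted", "skipped"));
            w.addDocument(doc);
        }
        try (DirectoryReader r = DirectoryReader.open(dir)) {
            Document d = load(r, 0);
            return d.get("title") + " " + d.get("body");
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(demo());
    }
}
```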
Hi all,
We are running an experiment combining multiple fields into a single field
and using it as a StoredField.
While retrieving, instead of retrieving multiple times, we can do it with a
single call.
We thought of avoiding multiple disk calls for reading multiple fields.
We have an index with
Hey everyone,
To start, we are using Lucene 4.3.
To search, we prepare several queries and combine these into a BooleanQuery.
What we are looking for is a way to determine on which specific fields a
certain document matched.
For example, I create 2 queries: one to search in the "Name"
Hello all,
I'm trying to come up with a reasonable indexing strategy for my document's
metadata, and I'm seeing some weird undocumented behaviours.
My original approach was to build fields like these:
FieldType ft = new FieldType();
ft.setDocValuesType( DocVa
Hi Adrien Grand,
Thanks for the response.
a binary blob that
> stores all the data so that you can perform updates.
Could you elaborate on this? Do you mean to have StoredField as mentioned
below to store all other fields which are needed only for updates? is there
any way to
I think it is hard to come up with a general rule, but there is certainly a
per-field overhead. There are some things that we need to store per field
per segment in memory, so if you multiply the number of fields you have,
you could run out of memory. In most cases I have seen where the index had
Hi All,
Elasticsearch allows 1000 fields by default. In Lucene, what are the
indexing and searching performance impacts of having 10 fields vs 3000
fields in an index?
In my case,
while indexing, I index and store all fields so that I can provide an update on
one field where we use to take out
It would be much slower and scoring does not work as expected.
Full text search engines flatten the data and are fast because of that. Every
additional field structure should also be flattened. To improve search you also
index various stuff redundantly, so you use copy fields to achieve that
Why not turn every term in the search into a BooleanQuery listing
all the different fields to be searched? Is there a problem with that?
Nicolás.-
On 21/12/16 at 13:38, Uwe Schindler wrote:
Hi,
This is the standard approach, there is no better way. This also keeps "scoring
.de
> -Original Message-
> From: suriya prakash [mailto:suriy...@gmail.com]
> Sent: Wednesday, December 21, 2016 1:31 PM
> To: java-user@lucene.apache.org
> Subject: All Fields Search
>
> Hi,
>
> I have 500 fields in a document to index.
>
> I append all the
This sounds like a good approach!
On Wed, 21 Dec 2016 at 13:31, suriya prakash wrote:
> Hi,
>
> I have 500 fields in a document to index.
>
> I append all the values and index it as separate field to support all
> fields search. I will also have 500 separate fields for
Hi,
I have 500 fields in a document to index.
I append all the values and index it as separate field to support all
fields search. I will also have 500 separate fields for field level search.
Is there any other better way for all fields search?
Regards,
Suriya
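The append-into-one-field approach above can be sketched as a catch-all field built at indexing time (the "_all" field name is a placeholder):

```java
import java.util.Map;

import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.TextField;

public class AllFieldsExample {
    // Index each field separately plus one concatenated catch-all field,
    // so "all fields" search becomes a single-field query.
    public static Document build(Map<String, String> values) {
        Document doc = new Document();
        StringBuilder all = new StringBuilder();
        for (Map.Entry<String, String> e : values.entrySet()) {
            doc.add(new TextField(e.getKey(), e.getValue(), Field.Store.NO));
            all.append(e.getValue()).append(' ');
        }
        doc.add(new TextField("_all", all.toString(), Field.Store.NO)); // catch-all
        return doc;
    }
}
```

This trades index size for query simplicity: the catch-all field duplicates every token, but an all-fields query touches only one field instead of 500 clauses.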
: The issue I have is that some promotions are permanent so they don't have
: an endDate set.
:
: I tried doing:
:
: ( +Promotion.endDate:[210100 TO variable containing yesterday's date]
: || -Promotion.endDate:* )
1) mixing prefix ops with "||" like this is most certainly not doing what
Hi,
Match all docs query minus Promotion.endDate:[* TO *]
+*:* -Promotion.endDate:[* TO *]
Ahmet
On Friday, November 11, 2016 5:59 PM, voidmind wrote:
Hi,
I have indexed content about Promotions with effectiveDate and endDate
fields for when the promotions start and end.
I want to query for
Hi,
I have indexed content about Promotions with effectiveDate and endDate
fields for when the promotions start and end.
I want to query for expired promotions so I do have this criteria, which
works fine:
+Promotion.endDate:[210100 TO variable containing yesterday's date]
The is
Hi,
I'm using Lucene 4.8.1 and try to get the MLT to give certain fields a
bigger weight in the similarity calculation. Is this even possible? I
only saw that I can give a boost to the MLTQuery itself, but not to a
field. Has anybody any idea?
Regards,
Jürgen.
--
Jürgen A
tively) => numerics fields are not indexed anymore
Sorry, the upgrade tool does not convert your previous (postings
based) numerics to points; you need to reindex to achieve that.
Mike McCandless
http://blog.mikemccandless.com
On Fri, Sep 23, 2016 at 1:01 AM, Ludovic Bertin
wrote:
> Tha
way to index numeric fields :
>
> doc.add(new DoublePoint(name, (Double) value));
> doc.add(new StoredField(name, (Double) value));
>
> But it's about old values in my indexes. I was supposing the migration tool
> would do the same to preserve the indexation of numeric v
Thanks Mike for your answer.
I did change the way I index numeric fields:
doc.add(new DoublePoint(name, (Double) value));
doc.add(new StoredField(name, (Double) value));
But it's about old values in my indexes. I was supposing the migration tool
would do the same to preserv
You need to change how you index the documents to add e.g. IntPoint,
so that points are actually indexed.
Mike McCandless
http://blog.mikemccandless.com
On Thu, Sep 22, 2016 at 11:01 AM, Ludovic Bertin
wrote:
> Hi,
>
> I have an index with some stored and indexed numeric fields.
Hi,
I have an index with some stored and indexed numeric fields.
After the migration, I can still see the numeric fields stored into my
documents,
But I was expecting to have those fields indexed as point values (see
https://lucene.apache.org/core/6_2_1/core/org/apache/lucene/index
;
> Regards,
> Jan-Willem
>
> -Original Message-
> From: Erick Erickson [mailto:erickerick...@gmail.com]
> Sent: Tuesday, September 20, 2016 19:02
> To: java-user
> Subject: Re: Strange index corruption related to numeric fields when
> upgrading from 6.0.1
>
: Strange index corruption related to numeric fields when upgrading
from 6.0.1
A wild shot in the dark: Are the square brackets really part of the field name?
They have never officially been supported, from the Ref
Guide:
"Field names should consist of alphanumeric or underscore charac
> name. Is that something that I should be doing in general? I was under the
> impression that it is OK to use the same name for all three related fields.
>
> Here is the infostream from a test that reproduces the issue:
> http://wikisend.com/download/613238/merges.log
>
> Un
ubleDocValuesField a different
name. Is that something that I should be doing in general? I was under the
impression that it is OK to use the same name for all three related fields.
Here is the infostream from a test that reproduces the issue:
http://wikisend.com/download/613238/merges.log
Unfort
Let me explain better: I created my own class MyStoredField.java for saving a
value.
When reading, to understand whether it is a normal stored field or not, I used
"instanceof".
But in the debugger, the class produced when I read the document is StoredField,
not MyStoredField. Is there no way of doing this?
2016-09-07 16:
I have a doubt.
I created a class storing a special value.
When I store the document saving this field, all is OK.
When I read the document, the field is found but it is a different class
(StoredField instead of MyStoredField).
Is that expected? Or did I misread the other examples?
and SortedSetDocValuesField classes, which
to accept a byte array. I could attempt to chop the long fields into a byte
array, and index that.
Also what is the difference between NumericDocValuesField, and
SortedNumericDocValuesField?
Best regards,
-cam
> >
> > >
> >
> https://mail-archives.apache.org/mod_mbox/lucene-java-user/201510.mbox/%3CCAHTScUgTYgSLP9OmoMe2ebVBHw8=trih5b++u7v050vnrqz...@mail.gmail.com%3E
> > >
> > >
> > >
> > > > I would be pretty skeptical of this app
> > I would be pretty skeptical of this approach. You're
> >
> > > mixing numeric data with textual data and I expect
> >
> > > the results to be unpredictable. You already said
> >
> > > "it is working for most of the
> >
> > >
ady said
>
> > "it is working for most of the
>
> > documents except one or two documents." I predict
>
> > you'll find more and more of these as time passes.
>
> >
>
> > Expect many more anomalies. At best you need to
>
> > index both
> Expect many more anomalies. At best you need to
> index both forms as text rather than mixing numeric
> and text data.
Thanks in advance...
--
Kumaran R
On Sun, Jul 24, 2016 at 1:54 AM, Michael McCandless <
luc...@mikemccandless.com> wrote:
> On Sat, Jul 23, 2016 at 4:48
On Sat, Jul 23, 2016 at 4:48 AM, Kumaran Ramasubramanian wrote:
> Hi Mike,
>
> *Two different fields can be the same name*
>
> Is it so? You mean we can index one field as a docValues field and also a
> stored field, using the same name?
>
This should be fine, yes.
> And AF