Re: merge problems

2016-10-11 Thread Michael McCandless
OK I have a small test case showing the issue!

I opened https://issues.apache.org/jira/browse/LUCENE-7491

Thanks for reporting this, Hans.

Mike McCandless

http://blog.mikemccandless.com
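
[For anyone hitting the same exception: a hypothetical sketch of the kind of
setup that can trigger it, assuming Lucene 6.2 as described in this thread.
The field names and path are made up, and this is not the actual test case
from LUCENE-7491 - just the shape of it: a segment where "id" has no point
values merged with a segment where it does.]

import java.nio.file.Paths;

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.IntPoint;
import org.apache.lucene.document.StringField;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;

public class SparsePointsMergeRepro {
  public static void main(String[] args) throws Exception {
    try (Directory dir = FSDirectory.open(Paths.get("/tmp/repro"));
         IndexWriter w = new IndexWriter(dir,
             new IndexWriterConfig(new StandardAnalyzer()))) {
      // Segment 1: "id" is a plain string field; the segment still gets a
      // points file, but for a different field.
      Document d1 = new Document();
      d1.add(new StringField("id", "urn:example:1", Field.Store.NO));
      d1.add(new IntPoint("count", 42));
      w.addDocument(d1);
      w.commit();

      // Segment 2: the same field name "id" now carries a point value.
      Document d2 = new Document();
      d2.add(new IntPoint("id", 1));
      w.addDocument(d2);
      w.commit();

      // Merging the two segments trips the exception from this thread.
      w.forceMerge(1);
    }
  }
}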

On Tue, Oct 11, 2016 at 12:08 PM, Hans Lund  wrote:
> Hmm, you're right - when it revealed a bug in our indexing code I stopped
> wondering ;-) But now I have tried to create small tests to show the
> behavior - so far without success. I'm pretty sure that I can reproduce it
> by re-introducing our index bug; unfortunately it only occurs after some
> hours of parsing and indexing Wikipedia dumps, but from there I'll try
> simplifying a test that reproduces the setup.
>
> The setup we use is quite straightforward, using MMapDirectory and NRT. The
> only tailored functionality is our own IndexDeletionPolicy, which uses a
> timestamp added to the commit user data to keep a number of snapshots while
> honoring a max retention period. Not that I suspect it to be the cause -
> but if field infos from another snapshot are used in the merge, that could
> cause problems.
>
> Hans Lund
>
> On Tue, Oct 11, 2016 at 12:07 PM, Michael McCandless
>  wrote:
>>
>> Hmm, that should be "OK" from Lucene's standpoint.
>>
>> I mean, it should not result in strange merge exceptions later on.
>>
>> I think there's a bug somewhere in Lucene's efforts to pretend it's
>> fully schema-less ... I'll try to reproduce this.
>>
>> Mike McCandless
>>
>> http://blog.mikemccandless.com
>>
>> On Tue, Oct 11, 2016 at 4:38 AM, Hans Lund  wrote:
>> > Turned out to be much simpler - we had added a new 'dynamic' field to a
>> > stats doc: a per-language count of articles, keyed on the detected
>> > language code. With a set of test documents in German, English and
>> > Swedish, no one had suspected the obvious: the language detection
>> > categorized a single document as Indonesian, producing the stats count
>> > id:1 (Indonesian's language code is "id", colliding with our "id" field).
>> >
>> > I realized that the debug output I added printed everything except the
>> > interesting field (it iterated over the already-added fields - not the
>> > field causing the error later on ;-)
>> >
>> >
>> >
>> >
>> >
>> > On Mon, Oct 10, 2016 at 4:32 PM, Adrien Grand  wrote:
>> >
>> >> It looks like the field infos of your index went out of sync with the
>> >> data stored in the points files.
>> >>
>> >> Can you run CheckIndex on your index (potentially with the `-fast`
>> >> option
>> >> so that it only verifies checksums)? It could be that one of these two
>> >> parts of the index got corrupted.
>> >>
>> >> Since you were able to modify the way add(IndexableField) is
>> >> implemented, I'm wondering if you are running a fork of Lucene? If so,
>> >> maybe you made some changes that triggered this bug?
>> >>
>> >> Otherwise, is your application:
>> >>  - using IndexWriter.addIndexes?
>> >>  - customizing merging in some way, eg. by wrapping the merge readers?
>> >>
>> >> Le mar. 4 oct. 2016 à 16:40, Hans Lund  a écrit :
>> >>
>> >> > After upgrading to 6.2 we are having problems during merges (after
>> >> > running for a while).
>> >> >
>> >> > When the problem occurs it's always complaining about the same field -
>> >> > and throws:
>> >> >
>> >> > java.lang.IllegalArgumentException: field="id" did not index point values
>> >> >     at org.apache.lucene.codecs.lucene60.Lucene60PointsReader.getBKDReader(Lucene60PointsReader.java:126)
>> >> >     at org.apache.lucene.codecs.lucene60.Lucene60PointsReader.size(Lucene60PointsReader.java:224)
>> >> >     at org.apache.lucene.codecs.lucene60.Lucene60PointsWriter.merge(Lucene60PointsWriter.java:169)
>> >> >     at org.apache.lucene.index.SegmentMerger.mergePoints(SegmentMerger.java:173)
>> >> >     at org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:122)
>> >> >     at org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:4312)
>> >> >     at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:3889)
>> >> >
>> >> >
>> >> > To figure out where we messed up, I added some ugly logging to
>> >> > Document:
>> >> >
>> >> > public final void add(IndexableField field) {
>> >> >   if ("id".equals(field.name())
>> >> >       && field.fieldType().pointDimensionCount() != 0) {
>> >> >     System.err.println("Point value detected");
>> >> >     for (IndexableField i : fields) {
>> >> >       System.err.println(i);
>> >> >     }
>> >> >   }
>> >> >   fields.add(field);
>> >> > }
>> >> >
>> >> > Hoping to intercept the document we messed up.
>> >> >
>> >> > But to my surprise, toString on the suspected field (it contains a
>> >> > URN) just says:
>> >> >
>> >> > indexed,omitNorms,indexOptions=DOCS
>> >> >
>> >> > So, any hints as to why field.fieldType().pointDimensionCount() != 0,
>> >> > and any suggestions on what might cause this?
>> >> >
>> >> > Regards
>> >> > Hans Lund

Re: PhraseQuery

2016-10-11 Thread lukes
Thanks Mike. I discovered that earlier. 

Regards.







Re: How to add ASCIIFoldingFilter in ClassicAnalyzer

2016-10-11 Thread Kumaran Ramasubramanian
@Ahmet, Uwe: Thanks a lot for your suggestions. I have already written a
custom analyzer as you said, but I was just trying to avoid a new component
in my search flow.

@Adrien: how do I add the filter using AnalyzerWrapper? Any pointers?
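
[A possible shape for that, assuming Lucene 6.x - a sketch only, not
Adrien's own code; the class name is made up. AnalyzerWrapper lets you
delegate to ClassicAnalyzer and append a filter to the end of its chain:]

import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.AnalyzerWrapper;
import org.apache.lucene.analysis.miscellaneous.ASCIIFoldingFilter;
import org.apache.lucene.analysis.standard.ClassicAnalyzer;

public class FoldingClassicAnalyzer extends AnalyzerWrapper {
  private final Analyzer delegate = new ClassicAnalyzer();

  public FoldingClassicAnalyzer() {
    super(Analyzer.GLOBAL_REUSE_STRATEGY);
  }

  @Override
  protected Analyzer getWrappedAnalyzer(String fieldName) {
    return delegate;
  }

  @Override
  protected TokenStreamComponents wrapComponents(String fieldName,
      TokenStreamComponents components) {
    // Keep ClassicAnalyzer's tokenizer and filters, and fold to ASCII at
    // the very end of the chain.
    return new TokenStreamComponents(components.getTokenizer(),
        new ASCIIFoldingFilter(components.getTokenStream()));
  }
}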









On Tue, Oct 11, 2016 at 8:16 PM, Uwe Schindler  wrote:

> I'd suggest using CustomAnalyzer to define your own analyzer. It allows you
> to build your own analyzer with the components (tokenizers and filters) you
> would like to have.
>
> Uwe
>
> -
> Uwe Schindler
> H.-H.-Meier-Allee 63, D-28213 Bremen
> http://www.thetaphi.de
> eMail: u...@thetaphi.de
>
> > -Original Message-
> > From: Adrien Grand [mailto:jpou...@gmail.com]
> > Sent: Tuesday, October 11, 2016 4:37 PM
> > To: java-user@lucene.apache.org
> > Subject: Re: How to add ASCIIFoldingFilter in ClassicAnalyzer
> >
> > Hi Kumaran,
> >
> > If it is fine to add the ascii folding filter at the end of the analysis
> > chain, then you could use AnalyzerWrapper. Otherwise, you need to create
> a
> > new analyzer that has the same analysis chain as ClassicAnalyzer, plus an
> > ASCIIFoldingFilter.
> >
> > Le mar. 11 oct. 2016 à 16:22, Kumaran Ramasubramanian
> > 
> > a écrit :
> >
> > > Hi All,
> > >
> > >   Is there any way to add ASCIIFoldingFilter on top of ClassicAnalyzer
> > > without writing a new custom analyzer? Should I extend
> > > StopwordAnalyzerBase again?
> > >
> > > I know that ClassicAnalyzer is final. Is there any special reason for
> > > making it final? StandardAnalyzer was not final before.
> > >
> > > public final class ClassicAnalyzer extends StopwordAnalyzerBase
> > > >
> > >
> > >
> > > --
> > > Kumaran R
> > >
>
>
>
>


Re: merge problems

2016-10-11 Thread Hans Lund
Hmm, you're right - when it revealed a bug in our indexing code I stopped
wondering ;-) But now I have tried to create small tests to show the
behavior - so far without success. I'm pretty sure that I can reproduce it
by re-introducing our index bug; unfortunately it only occurs after some
hours of parsing and indexing Wikipedia dumps, but from there I'll try
simplifying a test that reproduces the setup.

The setup we use is quite straightforward, using MMapDirectory and NRT. The
only tailored functionality is our own IndexDeletionPolicy, which uses a
timestamp added to the commit user data to keep a number of snapshots while
honoring a max retention period. Not that I suspect it to be the cause -
but if field infos from another snapshot are used in the merge, that could
cause problems.
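
[For illustration, a hypothetical sketch of the kind of deletion policy
described above, assuming Lucene 6.x. The class name and the "timestamp"
user-data key are made up; only the Lucene API calls are real:]

import java.io.IOException;
import java.util.List;

import org.apache.lucene.index.IndexCommit;
import org.apache.lucene.index.IndexDeletionPolicy;

public class SnapshotRetentionPolicy extends IndexDeletionPolicy {
  private final int maxSnapshots;
  private final long maxAgeMillis;

  public SnapshotRetentionPolicy(int maxSnapshots, long maxAgeMillis) {
    this.maxSnapshots = maxSnapshots;
    this.maxAgeMillis = maxAgeMillis;
  }

  @Override
  public void onInit(List<? extends IndexCommit> commits) throws IOException {
    onCommit(commits);
  }

  @Override
  public void onCommit(List<? extends IndexCommit> commits) throws IOException {
    long now = System.currentTimeMillis();
    int kept = 0;
    // Commits arrive oldest-first; walk newest-first. The most recent
    // commit must always survive.
    for (int i = commits.size() - 1; i >= 0; i--) {
      IndexCommit commit = commits.get(i);
      String ts = commit.getUserData().get("timestamp");
      boolean expired = ts != null && now - Long.parseLong(ts) > maxAgeMillis;
      if (i == commits.size() - 1 || (kept < maxSnapshots && !expired)) {
        kept++;
      } else {
        commit.delete();
      }
    }
  }
}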

Hans Lund

On Tue, Oct 11, 2016 at 12:07 PM, Michael McCandless <
luc...@mikemccandless.com> wrote:

> Hmm, that should be "OK" from Lucene's standpoint.
>
> I mean, it should not result in strange merge exceptions later on.
>
> I think there's a bug somewhere in Lucene's efforts to pretend it's
> fully schema-less ... I'll try to reproduce this.
>
> Mike McCandless
>
> http://blog.mikemccandless.com
>
> On Tue, Oct 11, 2016 at 4:38 AM, Hans Lund  wrote:
> > Turned out to be much simpler - we had added a new 'dynamic' field to a
> > stats doc: a per-language count of articles, keyed on the detected
> > language code. With a set of test documents in German, English and
> > Swedish, no one had suspected the obvious: the language detection
> > categorized a single document as Indonesian, producing the stats count
> > id:1 (Indonesian's language code is "id", colliding with our "id" field).
> >
> > I realized that the debug output I added printed everything except the
> > interesting field (it iterated over the already-added fields - not the
> > field causing the error later on ;-)
> >
> >
> >
> >
> >
> > On Mon, Oct 10, 2016 at 4:32 PM, Adrien Grand  wrote:
> >
> >> It looks like the field infos of your index went out of sync with the
> >> data stored in the points files.
> >>
> >> Can you run CheckIndex on your index (potentially with the `-fast`
> option
> >> so that it only verifies checksums)? It could be that one of these two
> >> parts of the index got corrupted.
> >>
> >> Since you were able to modify the way add(IndexableField) is
> >> implemented, I'm wondering if you are running a fork of Lucene? If so,
> >> maybe you made some changes that triggered this bug?
> >>
> >> Otherwise, is your application:
> >>  - using IndexWriter.addIndexes?
> >>  - customizing merging in some way, eg. by wrapping the merge readers?
> >>
> >> Le mar. 4 oct. 2016 à 16:40, Hans Lund  a écrit :
> >>
> >> > After upgrading to 6.2 we are having problems during merges (after
> >> > running for a while).
> >> >
> >> > When the problem occurs it's always complaining about the same field -
> >> > and throws:
> >> >
> >> > java.lang.IllegalArgumentException: field="id" did not index point values
> >> >     at org.apache.lucene.codecs.lucene60.Lucene60PointsReader.getBKDReader(Lucene60PointsReader.java:126)
> >> >     at org.apache.lucene.codecs.lucene60.Lucene60PointsReader.size(Lucene60PointsReader.java:224)
> >> >     at org.apache.lucene.codecs.lucene60.Lucene60PointsWriter.merge(Lucene60PointsWriter.java:169)
> >> >     at org.apache.lucene.index.SegmentMerger.mergePoints(SegmentMerger.java:173)
> >> >     at org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:122)
> >> >     at org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:4312)
> >> >     at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:3889)
> >> >
> >> >
> >> > To figure out where we messed up, I added some ugly logging to
> >> > Document:
> >> >
> >> > public final void add(IndexableField field) {
> >> >   if ("id".equals(field.name())
> >> >       && field.fieldType().pointDimensionCount() != 0) {
> >> >     System.err.println("Point value detected");
> >> >     for (IndexableField i : fields) {
> >> >       System.err.println(i);
> >> >     }
> >> >   }
> >> >   fields.add(field);
> >> > }
> >> >
> >> > Hoping to intercept the document we messed up.
> >> >
> >> > But to my surprise, toString on the suspected field (it contains a
> >> > URN) just says:
> >> >
> >> > indexed,omitNorms,indexOptions=DOCS
> >> >
> >> > So, any hints as to why field.fieldType().pointDimensionCount() != 0,
> >> > and any suggestions on what might cause this?
> >> >
> >> > Regards
> >> > Hans Lund
> >> >
> >>
>


RE: How to add ASCIIFoldingFilter in ClassicAnalyzer

2016-10-11 Thread Uwe Schindler
I'd suggest using CustomAnalyzer to define your own analyzer. It allows you
to build your own analyzer with the components (tokenizers and filters) you
would like to have.

Uwe

-
Uwe Schindler
H.-H.-Meier-Allee 63, D-28213 Bremen
http://www.thetaphi.de
eMail: u...@thetaphi.de

> -Original Message-
> From: Adrien Grand [mailto:jpou...@gmail.com]
> Sent: Tuesday, October 11, 2016 4:37 PM
> To: java-user@lucene.apache.org
> Subject: Re: How to add ASCIIFoldingFilter in ClassicAnalyzer
> 
> Hi Kumaran,
> 
> If it is fine to add the ascii folding filter at the end of the analysis
> chain, then you could use AnalyzerWrapper. Otherwise, you need to create a
> new analyzer that has the same analysis chain as ClassicAnalyzer, plus an
> ASCIIFoldingFilter.
> 
> Le mar. 11 oct. 2016 à 16:22, Kumaran Ramasubramanian
> 
> a écrit :
> 
> > Hi All,
> >
> >   Is there any way to add ASCIIFoldingFilter on top of ClassicAnalyzer
> > without writing a new custom analyzer? Should I extend
> > StopwordAnalyzerBase again?
> >
> > I know that ClassicAnalyzer is final. Is there any special reason for
> > making it final? StandardAnalyzer was not final before.
> >
> > public final class ClassicAnalyzer extends StopwordAnalyzerBase
> > >
> >
> >
> > --
> > Kumaran R
> >





Re: How to add ASCIIFoldingFilter in ClassicAnalyzer

2016-10-11 Thread Ahmet Arslan
Hi,

I forgot to include: .addTokenFilter("asciifolding")
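
[Putting Ahmet's two snippets together, the full chain would read, e.g.
with the folding applied after lowercasing:]

return CustomAnalyzer.builder()
    .withTokenizer("classic")
    .addTokenFilter("classic")
    .addTokenFilter("lowercase")
    .addTokenFilter("asciifolding")
    .addTokenFilter("kstem")
    .build();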

Ahmet


On Tuesday, October 11, 2016 5:37 PM, Ahmet Arslan  wrote:
Hi Kumaran,

Writing a custom analyzer is easier than it seems.

Please see how I added kstem to classic analyzer:

return CustomAnalyzer.builder()
    .withTokenizer("classic")
    .addTokenFilter("classic")
    .addTokenFilter("lowercase")
    .addTokenFilter("kstem")
    .build();

Ahmet




On Tuesday, October 11, 2016 5:22 PM, Kumaran Ramasubramanian 
 wrote:
Hi All,

  Is there any way to add ASCIIFoldingFilter on top of ClassicAnalyzer
without writing a new custom analyzer? Should I extend StopwordAnalyzerBase
again?

I know that ClassicAnalyzer is final. Is there any special reason for making
it final? StandardAnalyzer was not final before.

public final class ClassicAnalyzer extends StopwordAnalyzerBase
>


--
Kumaran R




Re: How to add ASCIIFoldingFilter in ClassicAnalyzer

2016-10-11 Thread Ahmet Arslan
Hi Kumaran,

Writing a custom analyzer is easier than it seems.

Please see how I added kstem to classic analyzer:

return CustomAnalyzer.builder()
    .withTokenizer("classic")
    .addTokenFilter("classic")
    .addTokenFilter("lowercase")
    .addTokenFilter("kstem")
    .build();

Ahmet



On Tuesday, October 11, 2016 5:22 PM, Kumaran Ramasubramanian 
 wrote:
Hi All,

  Is there any way to add ASCIIFoldingFilter on top of ClassicAnalyzer
without writing a new custom analyzer? Should I extend StopwordAnalyzerBase
again?

I know that ClassicAnalyzer is final. Is there any special reason for making
it final? StandardAnalyzer was not final before.

public final class ClassicAnalyzer extends StopwordAnalyzerBase
>


--
Kumaran R




Re: How to add ASCIIFoldingFilter in ClassicAnalyzer

2016-10-11 Thread Adrien Grand
Hi Kumaran,

If it is fine to add the ascii folding filter at the end of the analysis
chain, then you could use AnalyzerWrapper. Otherwise, you need to create a
new analyzer that has the same analysis chain as ClassicAnalyzer, plus an
ASCIIFoldingFilter.

Le mar. 11 oct. 2016 à 16:22, Kumaran Ramasubramanian 
a écrit :

> Hi All,
>
>   Is there any way to add ASCIIFoldingFilter on top of ClassicAnalyzer
> without writing a new custom analyzer? Should I extend StopwordAnalyzerBase
> again?
>
> I know that ClassicAnalyzer is final. Is there any special reason for
> making it final? StandardAnalyzer was not final before.
>
> public final class ClassicAnalyzer extends StopwordAnalyzerBase
> >
>
>
> --
> Kumaran R
>


How to add ASCIIFoldingFilter in ClassicAnalyzer

2016-10-11 Thread Kumaran Ramasubramanian
Hi All,

  Is there any way to add ASCIIFoldingFilter on top of ClassicAnalyzer
without writing a new custom analyzer? Should I extend StopwordAnalyzerBase
again?

I know that ClassicAnalyzer is final. Is there any special reason for making
it final? StandardAnalyzer was not final before.

public final class ClassicAnalyzer extends StopwordAnalyzerBase
>


--
Kumaran R


Clarification Regarding Directory & Merging

2016-10-11 Thread aravinth thangasami
Hi all,

Do the Directory implementations (SimpleFSDirectory, NIOFSDirectory,
MMapDirectory) have any performance impact while indexing?

If the Directory choice improves reading on some platforms, will it also
have an impact on merging?
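
[For context, assuming Lucene 6.x: merges read and write through the same
Directory as searches, so the choice matters for both. A small sketch -
FSDirectory.open picks the implementation Lucene considers best for the
current platform (MMapDirectory on most 64-bit JREs):]

import java.nio.file.Paths;

import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;

public class PickDirectory {
  public static void main(String[] args) throws Exception {
    // Prints which FSDirectory implementation Lucene chose for this platform.
    try (Directory dir = FSDirectory.open(Paths.get(args[0]))) {
      System.out.println(dir.getClass().getSimpleName());
    }
  }
}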


Thanks
Aravinth


Re: merge problems

2016-10-11 Thread Michael McCandless
Hmm, that should be "OK" from Lucene's standpoint.

I mean, it should not result in strange merge exceptions later on.

I think there's a bug somewhere in Lucene's efforts to pretend it's
fully schema-less ... I'll try to reproduce this.

Mike McCandless

http://blog.mikemccandless.com

On Tue, Oct 11, 2016 at 4:38 AM, Hans Lund  wrote:
> Turned out to be much simpler - we had added a new 'dynamic' field to a
> stats doc: a per-language count of articles, keyed on the detected language
> code. With a set of test documents in German, English and Swedish, no one
> had suspected the obvious: the language detection categorized a single
> document as Indonesian, producing the stats count id:1 (Indonesian's
> language code is "id", colliding with our "id" field).
>
> I realized that the debug output I added printed everything except the
> interesting field (it iterated over the already-added fields - not the
> field causing the error later on ;-)
>
>
>
>
>
> On Mon, Oct 10, 2016 at 4:32 PM, Adrien Grand  wrote:
>
>> It looks like the field infos of your index went out of sync with the
>> data stored in the points files.
>>
>> Can you run CheckIndex on your index (potentially with the `-fast` option
>> so that it only verifies checksums)? It could be that one of these two
>> parts of the index got corrupted.
>>
>> Since you were able to modify the way add(IndexableField) is implemented,
>> I'm wondering if you are running a fork of Lucene? If so, maybe you made
>> some changes that triggered this bug?
>>
>> Otherwise, is your application:
>>  - using IndexWriter.addIndexes?
>>  - customizing merging in some way, eg. by wrapping the merge readers?
>>
>> Le mar. 4 oct. 2016 à 16:40, Hans Lund  a écrit :
>>
>> > After upgrading to 6.2 we are having problems during merges (after
>> > running for a while).
>> >
>> > When the problem occurs it's always complaining about the same field -
>> > and throws:
>> >
>> > java.lang.IllegalArgumentException: field="id" did not index point values
>> >     at org.apache.lucene.codecs.lucene60.Lucene60PointsReader.getBKDReader(Lucene60PointsReader.java:126)
>> >     at org.apache.lucene.codecs.lucene60.Lucene60PointsReader.size(Lucene60PointsReader.java:224)
>> >     at org.apache.lucene.codecs.lucene60.Lucene60PointsWriter.merge(Lucene60PointsWriter.java:169)
>> >     at org.apache.lucene.index.SegmentMerger.mergePoints(SegmentMerger.java:173)
>> >     at org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:122)
>> >     at org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:4312)
>> >     at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:3889)
>> >
>> >
>> > To figure out where we messed up, I added some ugly logging to
>> > Document:
>> >
>> > public final void add(IndexableField field) {
>> >   if ("id".equals(field.name())
>> >       && field.fieldType().pointDimensionCount() != 0) {
>> >     System.err.println("Point value detected");
>> >     for (IndexableField i : fields) {
>> >       System.err.println(i);
>> >     }
>> >   }
>> >   fields.add(field);
>> > }
>> >
>> > Hoping to intercept the document we messed up.
>> >
>> > But to my surprise, toString on the suspected field (it contains a URN)
>> > just says:
>> >
>> > indexed,omitNorms,indexOptions=DOCS
>> >
>> > So, any hints as to why field.fieldType().pointDimensionCount() != 0,
>> > and any suggestions on what might cause this?
>> >
>> > Regards
>> > Hans Lund
>> >
>>




Re: merge problems

2016-10-11 Thread Hans Lund
Turned out to be much simpler - we had added a new 'dynamic' field to a
stats doc: a per-language count of articles, keyed on the detected language
code. With a set of test documents in German, English and Swedish, no one
had suspected the obvious: the language detection categorized a single
document as Indonesian, producing the stats count id:1 (Indonesian's
language code is "id", colliding with our "id" field).

I realized that the debug output I added printed everything except the
interesting field (it iterated over the already-added fields - not the
field causing the error later on ;-)





On Mon, Oct 10, 2016 at 4:32 PM, Adrien Grand  wrote:

> It looks like the field infos of your index went out of sync with the
> data stored in the points files.
>
> Can you run CheckIndex on your index (potentially with the `-fast` option
> so that it only verifies checksums)? It could be that one of these two
> parts of the index got corrupted.
>
> Since you were able to modify the way add(IndexableField) is implemented,
> I'm wondering if you are running a fork of Lucene? If so, maybe you made
> some changes that triggered this bug?
>
> Otherwise, is your application:
>  - using IndexWriter.addIndexes?
>  - customizing merging in some way, eg. by wrapping the merge readers?
>
> Le mar. 4 oct. 2016 à 16:40, Hans Lund  a écrit :
>
> > After upgrading to 6.2 we are having problems during merges (after
> > running for a while).
> >
> > When the problem occurs it's always complaining about the same field -
> > and throws:
> >
> > java.lang.IllegalArgumentException: field="id" did not index point values
> >     at org.apache.lucene.codecs.lucene60.Lucene60PointsReader.getBKDReader(Lucene60PointsReader.java:126)
> >     at org.apache.lucene.codecs.lucene60.Lucene60PointsReader.size(Lucene60PointsReader.java:224)
> >     at org.apache.lucene.codecs.lucene60.Lucene60PointsWriter.merge(Lucene60PointsWriter.java:169)
> >     at org.apache.lucene.index.SegmentMerger.mergePoints(SegmentMerger.java:173)
> >     at org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:122)
> >     at org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:4312)
> >     at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:3889)
> >
> >
> > To figure out where we messed up, I added some ugly logging to
> > Document:
> >
> > public final void add(IndexableField field) {
> >   if ("id".equals(field.name())
> >       && field.fieldType().pointDimensionCount() != 0) {
> >     System.err.println("Point value detected");
> >     for (IndexableField i : fields) {
> >       System.err.println(i);
> >     }
> >   }
> >   fields.add(field);
> > }
> >
> > Hoping to intercept the document we messed up.
> >
> > But to my surprise, toString on the suspected field (it contains a URN)
> > just says:
> >
> > indexed,omitNorms,indexOptions=DOCS
> >
> > So, any hints as to why field.fieldType().pointDimensionCount() != 0,
> > and any suggestions on what might cause this?
> >
> > Regards
> > Hans Lund
> >
>


RE: Array as Lucene Field

2016-10-11 Thread ASKozitsin
Oh, my fault! 

Thanks, Uwe!

Alexander





