Where the uploaded configset from SOLR into zookeeper ensemble resides?

2017-09-27 Thread Gunalan V
Hello,

Could you please let me know where I can find the configset uploaded from
Solr into the ZooKeeper ensemble?

The docs say it will be under "/configs/", but I'm not able to see the
configs directory in ZooKeeper. Please let me know if I need to check
somewhere else.


Thanks!
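As a general pointer (not from the thread): configsets uploaded to ZooKeeper are stored under the /configs znode, one child per configset name, and Solr's zk tool can list them. A sketch, where the host/port are assumptions:

```shell
# Configsets uploaded with "bin/solr zk upconfig" (or the ConfigSets API)
# live under the /configs znode in ZooKeeper.
# Listing command, run from the Solr install directory (host/port assumed):
list_cmd="bin/solr zk ls /configs -z localhost:2181"
echo "$list_cmd"   # run this against a live ensemble to see uploaded configsets
```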


SOLR terminology

2017-09-27 Thread Gunalan V
Hello,

Could someone please tell me the difference between a Solr core, a
collection, a node, and a cluster as referred to in SolrCloud? It's a bit
confusing.

If there is any diagrammatic representation or example, please share it.


Thanks!


Re: solr 7.0: possible analysis error: startOffset must be non-negative

2017-09-27 Thread Nawab Zada Asad Iqbal
So, it seems like two WordDelimiterGraphFilterFactory steps (with a
different config in each) were causing the error. I am still not sure how
it ended up in this state, or whether there is any benefit to having two
lines, but removing one of them fixed my error.


Thanks
Nawab

On Wed, Sep 27, 2017 at 3:12 PM, Nawab Zada Asad Iqbal 
wrote:

> Hi,
>
> I upgraded to Solr 7 today and I am seeing tons of the following errors
> for various fields.
>
> o.a.s.h.RequestHandlerBase org.apache.solr.common.SolrException:
> Exception writing document id file_3881549 to the index; possible
> analysis error: startOffset must be non-negative, and endOffset must be >=
> startOffset, and offsets must not go backwards 
> startOffset=6,endOffset=8,lastStartOffset=9
> for field 'name_combined'
>
> We don't have a lot of custom code for analysis at indexing time, so my
> suspicion is on the schema definition. Can someone suggest how I should
> start debugging this?
>
> <field ... stored="true" omitPositions="false"/>  <!-- name/type lost in the archive -->
>   <analyzer>  <!-- enclosing fieldType element lost in the archive -->
>     <tokenizer .../>  <!-- tokenizer class lost in the archive -->
>     <filter class="solr.WordDelimiterGraphFilterFactory"
>             generateWordParts="1" generateNumberParts="1" catenateWords="1"
>             catenateNumbers="1" catenateAll="0" preserveOriginal="1"
>             splitOnCaseChange="0" splitOnNumerics="0" stemEnglishPossessive="1"/>
>     <filter class="solr.WordDelimiterGraphFilterFactory"
>             generateWordParts="1" generateNumberParts="1" catenateWords="1"
>             catenateNumbers="1" catenateAll="0" preserveOriginal="1"
>             splitOnCaseChange="1" splitOnNumerics="1" stemEnglishPossessive="1"/>
>     <filter class="solr.PatternReplaceFilterFactory"
>             pattern="^(\p{Punct}*)(.*?)(\p{Punct}*)$" replacement="$2"/>
>     <filter class="solr.StopFilterFactory" words="stopwords.txt"/>
>     <!-- one or more filters lost in the archive -->
>     <filter class="solr.LimitTokenCountFilterFactory"
>             maxTokenCount="1" consumeAllTokens="false"/>
>   </analyzer>
>
>
> <field ... stored="false" multiValued="true" omitPositions="true"/>  <!-- name/type lost in the archive -->
>   <analyzer>  <!-- enclosing fieldType element lost in the archive -->
>     <tokenizer .../>  <!-- tokenizer class lost in the archive -->
>     <filter class="solr.WordDelimiterGraphFilterFactory"
>             generateWordParts="1" generateNumberParts="1" catenateWords="1"
>             catenateNumbers="1" catenateAll="0" preserveOriginal="1"
>             splitOnCaseChange="0" splitOnNumerics="0" stemEnglishPossessive="1"/>
>     <filter class="solr.WordDelimiterGraphFilterFactory"
>             generateWordParts="1" generateNumberParts="1" catenateWords="1"
>             catenateNumbers="1" catenateAll="0" preserveOriginal="1"
>             splitOnCaseChange="1" splitOnNumerics="1" stemEnglishPossessive="1"/>
>     <filter class="solr.PatternReplaceFilterFactory"
>             pattern="^(\p{Punct}*)(.*?)(\p{Punct}*)$" replacement="$2"/>
>     <filter class="solr.StopFilterFactory" words="stopwords.txt"/>
>     <filter ... maxGramSize="255"/>  <!-- n-gram filter class lost in the archive -->
>     <filter class="solr.LimitTokenCountFilterFactory"
>             maxTokenCount="1" consumeAllTokens="false"/>
>   </analyzer>
>
>
> Thanks
> nawab
>
>


solr 7.0: possible analysis error: startOffset must be non-negative

2017-09-27 Thread Nawab Zada Asad Iqbal
Hi,

I upgraded to Solr 7 today and I am seeing tons of the following errors for
various fields.

o.a.s.h.RequestHandlerBase org.apache.solr.common.SolrException: Exception
writing document id file_3881549 to the index; possible analysis error:
startOffset must be non-negative, and endOffset must be >= startOffset, and
offsets must not go backwards startOffset=6,endOffset=8,lastStartOffset=9
for field 'name_combined'

We don't have a lot of custom code for analysis at indexing time, so my
suspicion is on the schema definition. Can someone suggest how I should
start debugging this?

[The schema snippet included here was stripped by the list archive; the
quoted copy in the reply above preserves most of it.]

Thanks
nawab


RE: Modifing create_core's instanceDir attribute

2017-09-27 Thread Miller, William K - Norman, OK - Contractor
I understand that this has to be done on the command line, but I don't know 
where to put this structure or what it should look like.  Can you please be 
more specific in this answer?  I have only been working with Solr for about six 
months.




~~~
William Kevin Miller

ECS Federal, Inc.
USPS/MTSC
(405) 573-2158


-Original Message-
From: Erick Erickson [mailto:erickerick...@gmail.com] 
Sent: Wednesday, September 27, 2017 3:57 PM
To: solr-user
Subject: Re: Modifing create_core's instanceDir attribute

Standard command-line. You're doing this on the box itself, not through a REST 
API.

Erick

On Wed, Sep 27, 2017 at 10:26 AM, Miller, William K - Norman, OK - Contractor 
 wrote:
> This is my first time to try using the core admin API.  How do I go about 
> creating the directory structure?
>
>
>
>
> ~~~
> William Kevin Miller
>
> ECS Federal, Inc.
> USPS/MTSC
> (405) 573-2158
>
>
> -Original Message-
> From: Erick Erickson [mailto:erickerick...@gmail.com]
> Sent: Wednesday, September 27, 2017 11:45 AM
> To: solr-user
> Subject: Re: Modifing create_core's instanceDir attribute
>
> Right, the core admin API is pretty low-level, it expects the base directory 
> exists, you have to create the directory structure by hand.
>
> Best,
> Erick
>
> On Wed, Sep 27, 2017 at 9:24 AM, Miller, William K - Norman, OK - Contractor 
>  wrote:
>> Thanks Erick for pointing me in this direction.  Unfortunately when I try to 
>> use this I get an error.  Here is the command that I am using and the 
>> response I get:
>>
>> https://solrserver:8983/solr/admin/cores?action=CREATE&name=mycore&instanceDir=/var/solr/data/mycore&dataDir=data&configSet=custom_configs
>>
>>
>> [1] 32023
>> [2] 32024
>> [3] 32025
>> -bash: https://solrserver:8983/solr/admin/cores?action=CREATE: No 
>> such file or directory [4] 32026
>> [1] Exit 127
>> https://solrserver:8983/solr/adkmin/cores?action=CREATE
>> [2] Donename=mycore
>> [3]-DoneinstanceDir=/var/solr/data/mycore
>> [4]+DonedataDir=data
>>
>>
>> I even tried to use the UNLOAD action to remove a core and got the same type 
>> of error as the -bash line above.
>>
>> I have tried searching online for an answer and have found nothing so far.  
>> Any ideas why this error is occurring?
>>
>>
>>
>> ~~~
>> William Kevin Miller
>>
>> ECS Federal, Inc.
>> USPS/MTSC
>> (405) 573-2158
>>
>> -Original Message-
>> From: Erick Erickson [mailto:erickerick...@gmail.com]
>> Sent: Tuesday, September 26, 2017 3:33 PM
>> To: solr-user
>> Subject: Re: Modifing create_core's instanceDir attribute
>>
>> I don't think you can. You can, however, use the core admin API to do 
>> that,
>> see:
>> https://lucene.apache.org/solr/guide/6_6/coreadmin-api.html#coreadmin
>> -
>> api
>>
>> Best,
>> Erick
>>
>> On Tue, Sep 26, 2017 at 1:14 PM, Miller, William K - Norman, OK - Contractor 
>>  wrote:
>>
>>> I know that when the create_core command is used that it sets the 
>>> core to the name of the parameter supplied with the “-c” option and 
>>> the instanceDir attribute in the http is also set to the name of the core.
>>> What I want is to tell the create_core to use a different 
>>> instanceDir parameter.  How can I go about doing this?
>>>
>>>
>>>
>>>
>>>
>>> I am using Solr 6.5.1 and it is running on a linux server using the 
>>> apache tomcat webserver.
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>> ~~~
>>>
>>> William Kevin Miller
>>>
>>>
>>> ECS Federal, Inc.
>>>
>>> USPS/MTSC
>>>
>>> (405) 573-2158
>>>
>>>
>>>


Re: Modifing create_core's instanceDir attribute

2017-09-27 Thread Erick Erickson
Standard command-line. You're doing this on the box itself, not
through a REST API.

Erick

On Wed, Sep 27, 2017 at 10:26 AM, Miller, William K - Norman, OK -
Contractor  wrote:
> This is my first time to try using the core admin API.  How do I go about 
> creating the directory structure?
>
>
>
>
> ~~~
> William Kevin Miller
>
> ECS Federal, Inc.
> USPS/MTSC
> (405) 573-2158
>
>
> -Original Message-
> From: Erick Erickson [mailto:erickerick...@gmail.com]
> Sent: Wednesday, September 27, 2017 11:45 AM
> To: solr-user
> Subject: Re: Modifing create_core's instanceDir attribute
>
> Right, the core admin API is pretty low-level, it expects the base directory 
> exists, you have to create the directory structure by hand.
>
> Best,
> Erick
>
> On Wed, Sep 27, 2017 at 9:24 AM, Miller, William K - Norman, OK - Contractor 
>  wrote:
>> Thanks Erick for pointing me in this direction.  Unfortunately when I try to 
>> use this I get an error.  Here is the command that I am using and the 
>> response I get:
>>
>> https://solrserver:8983/solr/admin/cores?action=CREATE&name=mycore&instanceDir=/var/solr/data/mycore&dataDir=data&configSet=custom_configs
>>
>>
>> [1] 32023
>> [2] 32024
>> [3] 32025
>> -bash: https://solrserver:8983/solr/admin/cores?action=CREATE: No such
>> file or directory [4] 32026
>> [1] Exit 127
>> https://solrserver:8983/solr/adkmin/cores?action=CREATE
>> [2] Donename=mycore
>> [3]-DoneinstanceDir=/var/solr/data/mycore
>> [4]+DonedataDir=data
>>
>>
>> I even tried to use the UNLOAD action to remove a core and got the same type 
>> of error as the -bash line above.
>>
>> I have tried searching online for an answer and have found nothing so far.  
>> Any ideas why this error is occurring?
>>
>>
>>
>> ~~~
>> William Kevin Miller
>>
>> ECS Federal, Inc.
>> USPS/MTSC
>> (405) 573-2158
>>
>> -Original Message-
>> From: Erick Erickson [mailto:erickerick...@gmail.com]
>> Sent: Tuesday, September 26, 2017 3:33 PM
>> To: solr-user
>> Subject: Re: Modifing create_core's instanceDir attribute
>>
>> I don't think you can. You can, however, use the core admin API to do
>> that,
>> see:
>> https://lucene.apache.org/solr/guide/6_6/coreadmin-api.html#coreadmin-
>> api
>>
>> Best,
>> Erick
>>
>> On Tue, Sep 26, 2017 at 1:14 PM, Miller, William K - Norman, OK - Contractor 
>>  wrote:
>>
>>> I know that when the create_core command is used that it sets the
>>> core to the name of the parameter supplied with the “-c” option and
>>> the instanceDir attribute in the http is also set to the name of the core.
>>> What I want is to tell the create_core to use a different instanceDir
>>> parameter.  How can I go about doing this?
>>>
>>>
>>>
>>>
>>>
>>> I am using Solr 6.5.1 and it is running on a linux server using the
>>> apache tomcat webserver.
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>> ~~~
>>>
>>> William Kevin Miller
>>>
>>>
>>> ECS Federal, Inc.
>>>
>>> USPS/MTSC
>>>
>>> (405) 573-2158
>>>
>>>
>>>


Re: Solr 7.0.0 -- can it use a 6.5.0 data repository (index)

2017-09-27 Thread Cassandra Targett
Regarding not finding the issue, JIRA has a problem with queries when
the user is not logged in (see also
https://jira.atlassian.com/browse/JRASERVER-38511 if you're interested
in the details). There's unfortunately not much we can do about it
besides manually edit issues to remove a security setting which gets
automatically added to issues when they are created (which I've now
done for SOLR-11406).

Your best bet in the future would be to log into JIRA before
initiating a search to be sure you aren't missing one that's "hidden"
inadvertently.

Cassandra

On Wed, Sep 27, 2017 at 1:39 PM, Wayne L. Johnson
 wrote:
> First, thanks for the quick response.  Yes, it sounds like the same problem!!
>
> I did a bunch of searching before reporting the issue, I didn't come across 
> that JIRA or I wouldn't have reported it.  My apologies for the duplication 
> (although it is a new JIRA).
>
> Is there a good place to start searching in the future?  I'm a fairly 
> experienced Solr user, and I don't mind slogging through Java code.
>
> Meanwhile I'll follow the JIRA so I know when it gets fixed.
>
> Thanks!!
>
> -Original Message-
> From: Stefan Matheis [mailto:matheis.ste...@gmail.com]
> Sent: Wednesday, September 27, 2017 12:32 PM
> To: solr-user@lucene.apache.org
> Subject: Re: Solr 7.0.0 -- can it use a 6.5.0 data repository (index)
>
> That sounds like 
> https://issues.apache.org/jira/browse/SOLR-11406 if I'm not mistaken?
>
> -Stefan
>
> On Sep 27, 2017 8:20 PM, "Wayne L. Johnson" 
> wrote:
>
>> I’m testing Solr 7.0.0.  When I start with an empty index, Solr comes
>> up just fine, I can add documents and query documents.  However when I
>> start with an already-populated set of documents (from 6.5.0), Solr
>> will not start.  The relevant portion of the traceback seems to be:
>>
>> Caused by: java.lang.NullPointerException
>>
>> at java.util.Objects.requireNonNull(Objects.java:203)
>>
>> …
>>
>> at java.util.stream.ReferencePipeline.reduce(
>> ReferencePipeline.java:479)
>>
>> at org.apache.solr.index.SlowCompositeReaderWrapper.<init>(
>> SlowCompositeReaderWrapper.java:76)
>>
>> at org.apache.solr.index.SlowCompositeReaderWrapper.wrap(
>> SlowCompositeReaderWrapper.java:57)
>>
>> at org.apache.solr.search.SolrIndexSearcher.<init>(
>> SolrIndexSearcher.java:252)
>>
>> at org.apache.solr.core.SolrCore.openNewSearcher(SolrCore.java:
>> 2034)
>>
>> ... 12 more
>>
>>
>>
>> Looking at the decompiled code (SlowCompositeReaderWrapper, lines 72-77),
>> it appears that one or more “leaf” files don’t have a “min-version” set.
>> That’s a guess.  If so, does this mean Solr 7.0.0 can’t read a 6.5.0 index?
>>
>>
>>
>> Thanks
>>
>>
>>
>> Wayne Johnson
>>
>> 801-240-4024
>>
>> wjohnson...@ldschurch.org
>>
>>
>>
>>


RE: Solr 7.0.0 -- can it use a 6.5.0 data repository (index)

2017-09-27 Thread Wayne L. Johnson
First, thanks for the quick response.  Yes, it sounds like the same problem!!

I did a bunch of searching before reporting the issue, I didn't come across that 
JIRA or I wouldn't have reported it.  My apologies for the duplication 
(although it is a new JIRA).

Is there a good place to start searching in the future?  I'm a fairly 
experienced Solr user, and I don't mind slogging through Java code.

Meanwhile I'll follow the JIRA so I know when it gets fixed.

Thanks!!

-Original Message-
From: Stefan Matheis [mailto:matheis.ste...@gmail.com] 
Sent: Wednesday, September 27, 2017 12:32 PM
To: solr-user@lucene.apache.org
Subject: Re: Solr 7.0.0 -- can it use a 6.5.0 data repository (index)

That sounds like 
https://issues.apache.org/jira/browse/SOLR-11406 if I'm not mistaken?

-Stefan

On Sep 27, 2017 8:20 PM, "Wayne L. Johnson" 
wrote:

> I’m testing Solr 7.0.0.  When I start with an empty index, Solr comes 
> up just fine, I can add documents and query documents.  However when I 
> start with an already-populated set of documents (from 6.5.0), Solr 
> will not start.  The relevant portion of the traceback seems to be:
>
> Caused by: java.lang.NullPointerException
>
> at java.util.Objects.requireNonNull(Objects.java:203)
>
> …
>
> at java.util.stream.ReferencePipeline.reduce(
> ReferencePipeline.java:479)
>
> at org.apache.solr.index.SlowCompositeReaderWrapper.<init>(
> SlowCompositeReaderWrapper.java:76)
>
> at org.apache.solr.index.SlowCompositeReaderWrapper.wrap(
> SlowCompositeReaderWrapper.java:57)
>
> at org.apache.solr.search.SolrIndexSearcher.<init>(
> SolrIndexSearcher.java:252)
>
> at org.apache.solr.core.SolrCore.openNewSearcher(SolrCore.java:
> 2034)
>
> ... 12 more
>
>
>
> Looking at the decompiled code (SlowCompositeReaderWrapper, lines 72-77),
> it appears that one or more “leaf” files don’t have a “min-version” set.
> That’s a guess.  If so, does this mean Solr 7.0.0 can’t read a 6.5.0 index?
>
>
>
> Thanks
>
>
>
> Wayne Johnson
>
> 801-240-4024
>
> wjohnson...@ldschurch.org
>
>
>
>


Re: Solr 7.0.0 -- can it use a 6.5.0 data repository (index)

2017-09-27 Thread Stefan Matheis
That sounds like https://issues.apache.org/jira/browse/SOLR-11406 if i'm
not mistaken?

-Stefan

On Sep 27, 2017 8:20 PM, "Wayne L. Johnson" 
wrote:

> I’m testing Solr 7.0.0.  When I start with an empty index, Solr comes up
> just fine, I can add documents and query documents.  However when I start
> with an already-populated set of documents (from 6.5.0), Solr will not
> start.  The relevant portion of the traceback seems to be:
>
> Caused by: java.lang.NullPointerException
>
> at java.util.Objects.requireNonNull(Objects.java:203)
>
> …
>
> at java.util.stream.ReferencePipeline.reduce(
> ReferencePipeline.java:479)
>
> at org.apache.solr.index.SlowCompositeReaderWrapper.<init>(
> SlowCompositeReaderWrapper.java:76)
>
> at org.apache.solr.index.SlowCompositeReaderWrapper.wrap(
> SlowCompositeReaderWrapper.java:57)
>
> at org.apache.solr.search.SolrIndexSearcher.<init>(
> SolrIndexSearcher.java:252)
>
> at org.apache.solr.core.SolrCore.openNewSearcher(SolrCore.java:
> 2034)
>
> ... 12 more
>
>
>
> Looking at the decompiled code (SlowCompositeReaderWrapper, lines 72-77),
> it appears that one or more “leaf” files don’t have a “min-version” set.
> That’s a guess.  If so, does this mean Solr 7.0.0 can’t read a 6.5.0 index?
>
>
>
> Thanks
>
>
>
> Wayne Johnson
>
> 801-240-4024
>
> wjohnson...@ldschurch.org
>
>
>
>


Solr 7.0.0 -- can it use a 6.5.0 data repository (index)

2017-09-27 Thread Wayne L. Johnson

I'm testing Solr 7.0.0.  When I start with an empty index, Solr comes up just 
fine, I can add documents and query documents.  However when I start with an 
already-populated set of documents (from 6.5.0), Solr will not start.  The 
relevant portion of the traceback seems to be:
Caused by: java.lang.NullPointerException
at java.util.Objects.requireNonNull(Objects.java:203)
...
at java.util.stream.ReferencePipeline.reduce(ReferencePipeline.java:479)
at 
org.apache.solr.index.SlowCompositeReaderWrapper.<init>(SlowCompositeReaderWrapper.java:76)
at 
org.apache.solr.index.SlowCompositeReaderWrapper.wrap(SlowCompositeReaderWrapper.java:57)
at 
org.apache.solr.search.SolrIndexSearcher.<init>(SolrIndexSearcher.java:252)
at org.apache.solr.core.SolrCore.openNewSearcher(SolrCore.java:2034)
... 12 more

Looking at the decompiled code (SlowCompositeReaderWrapper, lines 72-77), it 
appears that one or more "leaf" files don't have a "min-version" set. That's a 
guess. If so, does this mean Solr 7.0.0 can't read a 6.5.0 index?

Thanks

Wayne Johnson
801-240-4024
wjohnson...@ldschurch.org



RE: DataImport Handler Out of Memory

2017-09-27 Thread Allison, Timothy B.
https://wiki.apache.org/solr/DataImportHandlerFaq#I.27m_using_DataImportHandler_with_a_MySQL_database._My_table_is_huge_and_DataImportHandler_is_going_out_of_memory._Why_does_DataImportHandler_bring_everything_to_memory.3F
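In short, the FAQ's fix is to let the MySQL JDBC driver stream rows instead of buffering the whole result set, by setting batchSize="-1" on the dataSource. A sketch of the relevant bit of data-config.xml (the URL and credentials are placeholders):

```xml
<dataConfig>
  <!-- batchSize="-1" makes DIH pass a streaming fetch size to the MySQL
       driver, so rows are read incrementally instead of all at once -->
  <dataSource type="JdbcDataSource"
              driver="com.mysql.jdbc.Driver"
              url="jdbc:mysql://localhost:3306/mydb"
              user="reader" password="..."
              batchSize="-1"/>
  ...
</dataConfig>
```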


-Original Message-
From: Deeksha Sharma [mailto:dsha...@flexera.com] 
Sent: Wednesday, September 27, 2017 1:40 PM
To: solr-user@lucene.apache.org
Subject: DataImport Handler Out of Memory

I am trying to create indexes using the DataImportHandler (Solr 5.2.1). The 
data is in a MySQL DB and the number of records is more than 3.5 million. My 
Solr server stops due to OOM (out-of-memory error). I tried starting Solr 
with 12GB of RAM but still no luck.


Also, I see that Solr fetches all the documents in one request. Is there a 
way to configure Solr to stream the data from the DB, or is there any other 
solution someone may have tried?


Note: When my records number nearly 2 million, I am able to create indexes 
by giving Solr 10GB of RAM.


Your help is appreciated.



Thanks

Deeksha




DataImport Handler Out of Memory

2017-09-27 Thread Deeksha Sharma
I am trying to create indexes using the DataImportHandler (Solr 5.2.1). The 
data is in a MySQL DB and the number of records is more than 3.5 million. My 
Solr server stops due to OOM (out-of-memory error). I tried starting Solr 
with 12GB of RAM but still no luck.


Also, I see that Solr fetches all the documents in one request. Is there a 
way to configure Solr to stream the data from the DB, or is there any other 
solution someone may have tried?


Note: When my records number nearly 2 million, I am able to create indexes 
by giving Solr 10GB of RAM.


Your help is appreciated.



Thanks

Deeksha




Re: CSV import to SOLR

2017-09-27 Thread Zisis Simaioforidis
So there is no way of telling Solr to duplicate a column of the CSV just by 
using some parameters during the import request?


Just for the CSV.

The truth is, copyField crossed my mind, but it's just too brute-force 
because it will affect all documents imported. And CSV is NOT the only 
method we are importing with; we also use MARC and XML.


Zisis


On 27/9/2017 5:52 PM, Erick Erickson wrote:

If you always want to do this exact thing, it looks like a copyField
directive in your schema.

If it has to be more nuanced, you can use something like
StatelessScriptUpdateProcessorFactory.

Both of these would affect _all_ documents coming in to Solr, so may
be too blunt a hammer.

Best,
Erick
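
For reference, such a copyField directive might look like this in the schema, assuming the field names from the question and that title_fullStr is already defined:

```xml
<!-- copy every indexed title_short value into title_fullStr at index time -->
<copyField source="title_short" dest="title_fullStr"/>
```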

On Wed, Sep 27, 2017 at 3:07 AM, Zisis Simaioforidis  wrote:

Is there a way to map a field value based on another field value without
replicating the columns in the CSV itself?

For example I tried literal.title_fullStr=f.title_short, but it doesn't
seem to work.

Thank you





RE: Modifing create_core's instanceDir attribute

2017-09-27 Thread Miller, William K - Norman, OK - Contractor
This is my first time to try using the core admin API.  How do I go about 
creating the directory structure?




~~~
William Kevin Miller

ECS Federal, Inc.
USPS/MTSC
(405) 573-2158


-Original Message-
From: Erick Erickson [mailto:erickerick...@gmail.com] 
Sent: Wednesday, September 27, 2017 11:45 AM
To: solr-user
Subject: Re: Modifing create_core's instanceDir attribute

Right, the core admin API is pretty low-level, it expects the base directory 
exists, you have to create the directory structure by hand.

Best,
Erick

On Wed, Sep 27, 2017 at 9:24 AM, Miller, William K - Norman, OK - Contractor 
 wrote:
> Thanks Erick for pointing me in this direction.  Unfortunately when I try to 
> use this I get an error.  Here is the command that I am using and the response 
> I get:
>
> https://solrserver:8983/solr/admin/cores?action=CREATE&name=mycore&instanceDir=/var/solr/data/mycore&dataDir=data&configSet=custom_configs
>
>
> [1] 32023
> [2] 32024
> [3] 32025
> -bash: https://solrserver:8983/solr/admin/cores?action=CREATE: No such 
> file or directory [4] 32026
> [1] Exit 127
> https://solrserver:8983/solr/adkmin/cores?action=CREATE
> [2] Donename=mycore
> [3]-DoneinstanceDir=/var/solr/data/mycore
> [4]+DonedataDir=data
>
>
> I even tried to use the UNLOAD action to remove a core and got the same type 
> of error as the -bash line above.
>
> I have tried searching online for an answer and have found nothing so far.  
> Any ideas why this error is occurring?
>
>
>
> ~~~
> William Kevin Miller
>
> ECS Federal, Inc.
> USPS/MTSC
> (405) 573-2158
>
> -Original Message-
> From: Erick Erickson [mailto:erickerick...@gmail.com]
> Sent: Tuesday, September 26, 2017 3:33 PM
> To: solr-user
> Subject: Re: Modifing create_core's instanceDir attribute
>
> I don't think you can. You can, however, use the core admin API to do 
> that,
> see:
> https://lucene.apache.org/solr/guide/6_6/coreadmin-api.html#coreadmin-
> api
>
> Best,
> Erick
>
> On Tue, Sep 26, 2017 at 1:14 PM, Miller, William K - Norman, OK - Contractor 
>  wrote:
>
>> I know that when the create_core command is used that it sets the 
>> core to the name of the parameter supplied with the “-c” option and 
>> the instanceDir attribute in the http is also set to the name of the core.
>> What I want is to tell the create_core to use a different instanceDir 
>> parameter.  How can I go about doing this?
>>
>>
>>
>>
>>
>> I am using Solr 6.5.1 and it is running on a linux server using the 
>> apache tomcat webserver.
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>> ~~~
>>
>> William Kevin Miller
>>
>>
>> ECS Federal, Inc.
>>
>> USPS/MTSC
>>
>> (405) 573-2158
>>
>>
>>


Re: Modifing create_core's instanceDir attribute

2017-09-27 Thread Erick Erickson
Right, the core admin API is pretty low-level, it expects the base
directory exists, you have to create the directory structure by hand.

Best,
Erick
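
"By hand" here just means creating the instance directory (and its conf/) on the box before calling the CREATE action. A sketch using a temp directory in place of /var/solr/data, with the mycore name from this thread:

```shell
# Stand-in for /var/solr/data so the sketch is safe to run anywhere:
base="$(mktemp -d)"
# The core admin CREATE action expects instanceDir to already exist,
# typically with a conf/ directory holding solrconfig.xml and the schema:
mkdir -p "$base/mycore/conf"
ls -d "$base/mycore/conf"
```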

On Wed, Sep 27, 2017 at 9:24 AM, Miller, William K - Norman, OK -
Contractor  wrote:
> Thanks Erick for pointing me in this direction.  Unfortunately when I try to 
> use this I get an error. Here is the command that I am using and the response 
> I get:
>
> https://solrserver:8983/solr/admin/cores?action=CREATE&name=mycore&instanceDir=/var/solr/data/mycore&dataDir=data&configSet=custom_configs
>
>
> [1] 32023
> [2] 32024
> [3] 32025
> -bash: https://solrserver:8983/solr/admin/cores?action=CREATE: No such file 
> or directory
> [4] 32026
> [1] Exit 127
> https://solrserver:8983/solr/adkmin/cores?action=CREATE
> [2] Donename=mycore
> [3]-DoneinstanceDir=/var/solr/data/mycore
> [4]+DonedataDir=data
>
>
> I even tried to use the UNLOAD action to remove a core and got the same type 
> of error as the -bash line above.
>
> I have tried searching online for an answer and have found nothing so far.  
> Any ideas why this error is occurring?
>
>
>
> ~~~
> William Kevin Miller
>
> ECS Federal, Inc.
> USPS/MTSC
> (405) 573-2158
>
> -Original Message-
> From: Erick Erickson [mailto:erickerick...@gmail.com]
> Sent: Tuesday, September 26, 2017 3:33 PM
> To: solr-user
> Subject: Re: Modifing create_core's instanceDir attribute
>
> I don't think you can. You can, however, use the core admin API to do that,
> see:
> https://lucene.apache.org/solr/guide/6_6/coreadmin-api.html#coreadmin-api
>
> Best,
> Erick
>
> On Tue, Sep 26, 2017 at 1:14 PM, Miller, William K - Norman, OK - Contractor 
>  wrote:
>
>> I know that when the create_core command is used that it sets the core
>> to the name of the parameter supplied with the “-c” option and the
>> instanceDir attribute in the http is also set to the name of the core.
>> What I want is to tell the create_core to use a different instanceDir
>> parameter.  How can I go about doing this?
>>
>>
>>
>>
>>
>> I am using Solr 6.5.1 and it is running on a linux server using the
>> apache tomcat webserver.
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>> ~~~
>>
>> William Kevin Miller
>>
>>
>> ECS Federal, Inc.
>>
>> USPS/MTSC
>>
>> (405) 573-2158
>>
>>
>>


RE: Modifing create_core's instanceDir attribute

2017-09-27 Thread Miller, William K - Norman, OK - Contractor
Thanks Erick for pointing me in this direction.  Unfortunately when I try to use 
this I get an error.  Here is the command that I am using and the response I 
get:

https://solrserver:8983/solr/admin/cores?action=CREATE&name=mycore&instanceDir=/var/solr/data/mycore&dataDir=data&configSet=custom_configs


[1] 32023
[2] 32024
[3] 32025
-bash: https://solrserver:8983/solr/admin/cores?action=CREATE: No such file or 
directory
[4] 32026
[1] Exit 127
https://solrserver:8983/solr/adkmin/cores?action=CREATE
[2] Donename=mycore
[3]-DoneinstanceDir=/var/solr/data/mycore
[4]+DonedataDir=data


I even tried to use the UNLOAD action to remove a core and got the same type of 
error as the -bash line above.

I have tried searching online for an answer and have found nothing so far.  Any 
ideas why this error is occurring?
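
For what it's worth, the job numbers ([1] 32023 ...) and "Exit 127" are the shell's doing: each unquoted & backgrounds the text before it, so only the fragment up to the first & is treated as a command (and there is no command named "https://...", hence exit 127). Quoting the URL keeps it as one argument; a sketch with the values from this thread, e.g. for curl:

```shell
# Unquoted '&' splits the line into background jobs; quoting prevents that.
url='https://solrserver:8983/solr/admin/cores?action=CREATE&name=mycore&instanceDir=/var/solr/data/mycore&dataDir=data&configSet=custom_configs'
# curl "$url"    # uncomment to actually send the request to a live Solr
echo "$url"
```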



~~~
William Kevin Miller

ECS Federal, Inc.
USPS/MTSC
(405) 573-2158

-Original Message-
From: Erick Erickson [mailto:erickerick...@gmail.com] 
Sent: Tuesday, September 26, 2017 3:33 PM
To: solr-user
Subject: Re: Modifing create_core's instanceDir attribute

I don't think you can. You can, however, use the core admin API to do that,
see:
https://lucene.apache.org/solr/guide/6_6/coreadmin-api.html#coreadmin-api

Best,
Erick

On Tue, Sep 26, 2017 at 1:14 PM, Miller, William K - Norman, OK - Contractor 
 wrote:

> I know that when the create_core command is used that it sets the core 
> to the name of the parameter supplied with the “-c” option and the 
> instanceDir attribute in the http is also set to the name of the core.  
> What I want is to tell the create_core to use a different instanceDir 
> parameter.  How can I go about doing this?
>
>
>
>
>
> I am using Solr 6.5.1 and it is running on a linux server using the 
> apache tomcat webserver.
>
>
>
>
>
>
>
>
>
>
>
> ~~~
>
> William Kevin Miller
>
>
> ECS Federal, Inc.
>
> USPS/MTSC
>
> (405) 573-2158
>
>
>


vespa

2017-09-27 Thread Diego Ceccarelli (BLOOMBERG/ LONDON)
Hi all, 

Yesterday Yahoo open sourced Vespa ("the open big data serving engine:
store, search, rank and organize big data at user serving time"); looking
at the API, they provide search.
I did a quick search in the code for "lucene", getting only 5 results.

Does anyone know more about the framework? Does it provide a new way to do
search? How does it compare with Solr?

https://github.com/vespa-engine/vespa
http://vespa.ai



Re: PatternCaptureGroupTokenFilter

2017-09-27 Thread Emir Arnautović
Thanks Erick,
I’ll add it on my TODO list.

Regards,
Emir

> On 27 Sep 2017, at 17:02, Erick Erickson  wrote:
> 
> No good reason, probably just "nobody got around to it".
> 
> The switch to asciidoc has made it much easier to contribute doc
> changes. If you have the bandwidth, please go ahead and create a patch
> for the docs.
> 
> Best,
> Erick
> 
> On Wed, Sep 27, 2017 at 1:53 AM, Emir Arnautović
>  wrote:
>> Hi all,
>> Is there some reason why PatternCaptureGroupTokenFilter is not documented 
>> even included in the code base?
>> 
>> Thanks,
>> Emir



Re: Filter Factory question

2017-09-27 Thread Stefan Matheis
> In any case I figured out my problem. I was over thinking it.

Mind to share?

-Stefan

On Sep 27, 2017 4:34 PM, "Webster Homer"  wrote:

> There is a need for a special filter since the input has to be normalized.
> That is the main requirement, splitting into pieces is optional. As far as
> I know there is nothing in solr that knows about molecular formulas.
>
> In any case I figured out my problem. I was over thinking it.
>
> On Wed, Sep 27, 2017 at 3:52 AM, Emir Arnautović <emir.arnauto...@sematext.com> wrote:
>
> > Hi Homer,
> > There is no need for a special filter; there is one that is for some reason
> > not part of the documentation (will ask why, so follow that thread if decided
> > to go this way). You can use something like:
> > <filter class="solr.PatternCaptureGroupFilterFactory" pattern="([A-Z][a-z]?\d+)" preserveOriginal="true" />
> >
> > This will capture all atom counts as separate tokens.
> >
> > HTH,
> > Emir
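
To illustrate what that capture-group pattern does, here is a plain-JDK sketch of the tokens such a filter would emit (regex emulation only, not Lucene code):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class CaptureSketch {
    // Emulates the effect of the suggested pattern on one input token:
    // with preserveOriginal="true" the original token is kept, and each
    // capture-group match becomes an additional token.
    static List<String> tokens(String input) {
        List<String> out = new ArrayList<>();
        out.add(input); // preserveOriginal="true" keeps the whole input
        Matcher m = Pattern.compile("([A-Z][a-z]?\\d+)").matcher(input);
        while (m.find()) {
            out.add(m.group(1)); // one "atom count" per match: C2, H6, O1
        }
        return out;
    }

    public static void main(String[] args) {
        System.out.println(tokens("C2H6O1")); // [C2H6O1, C2, H6, O1]
    }
}
```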
> >
> > > On 26 Sep 2017, at 23:14, Webster Homer 
> wrote:
> > >
> > > I am trying to create a filter that normalizes an input token, but also
> > > splits it into multiple pieces, sort of like what the WordDelimiterFilter
> > > does.
> > >
> > > It's meant to take a molecular formula like C2H6O and normalize it to
> > > C2H6O1.
> > >
> > > That part works. However, I was also going to have it put out the
> > > individual atom counts as tokens.
> > > C2H6O1
> > > C2
> > > H6
> > > O1
> > >
> > > When I enable this feature in the factory, I don't get any output at all.
> > >
> > > I looked over a couple of filters that do what I want and it's not
> > > entirely clear what they're doing. So I have some questions:
> > > Looking at ShingleFilter and WordDelimiterFilter
> > > They both set several attributes:
> > > CharTermAttribute: Seems to be the actual terms being set. Seemed
> > > straightforward; works fine when I only have one term to add.
> > >
> > > PositionIncrementAttribute: What does this do? It appears that
> > > WordDelimiterFilter sets this to 0 most of the time. This has decent
> > > documentation.
> > >
> > > OffsetAttribute: I think that this tracks offsets for each term being
> > > processed. Not really sure though. The documentation mentions tokens. So
> > > if I have multiple variations for a token, is this for each variation?
> > >
> > > TypeAttribute: default is "word". Don't know what this is for.
> > >
> > > PositionLengthAttribute: WordDelimiterFilter doesn't use this but Shingle
> > > does. It defaults to 1. What's it good for, and when should I use it?
> > >
> > > Here is my incrementToken method.
> > >
> > >@Override
> > >public boolean incrementToken() throws IOException {
> > >while(true) {
> > >if (!hasSavedState) {
> > >if (! input.incrementToken()) {
> > >return false;
> > >}
> > >if (! generateFragments) { // This part works fine!
> > >String normalizedFormula = molFormula.normalize(new
> > > String(termAttribute.buffer()));
> > >char[]newBuffer = normalizedFormula.toCharArray();
> > >termAttribute.setEmpty();
> > >termAttribute.copyBuffer(newBuffer, 0, newBuffer.length);
> > >return true;
> > >}
> > >formulas = molFormula.normalizeToList(new
> > > String(termAttribute.buffer()));
> > >iterator = formulas.listIterator();
> > >savedPositionIncrement += posIncAttribute.getPositionIncrement();
> > >hasSavedState = true;
> > >first = true;
> > >saveState();
> > >}
> > >if (!iterator.hasNext()) {
> > >posIncAttribute.setPositionIncrement(savedPositionIncrement);
> > >savedPositionIncrement = 0;
> > >hasSavedState = false;
> > >continue;
> > >}
> > >String formula = iterator.next();
> > >int startOffset = savedStartOffset;
> > >
> > >if (first) {
> > >termAttribute.setEmpty();
> > >}
> > >int endOffset = savedStartOffset + formula.length();
> > >System.out.printf("Writing formula %s %d to %d%n", formula,
> > > startOffset, endOffset);;
> > >termAttribute.append(formula);
> > >offsetAttribute.setOffset(startOffset, endOffset);
> > >savedStartOffset = endOffset + 1;
> > >if (first) {
> > >posIncAttribute.setPositionIncrement(0);
> > >} else {
> > >first = false;
> > >posIncAttribute.setPositionIncrement(0);
> > >}
> > >typeAttribute.setType(savedType);
> > >return true;
> > >}
> > >}
> > >
> > > --
> > >
> > >

Re: PatternCaptureGroupTokenFilter

2017-09-27 Thread Erick Erickson
No good reason, probably just "nobody got around to it".

The switch to asciidoc has made it much easier to contribute doc
changes; if you have the bandwidth, please go ahead and create a patch
for the docs.

Best,
Erick

On Wed, Sep 27, 2017 at 1:53 AM, Emir Arnautović
 wrote:
> Hi all,
> Is there some reason why PatternCaptureGroupTokenFilter is not documented
> even though it is included in the code base?
>
> Thanks,
> Emir


Re: CSV import to SOLR

2017-09-27 Thread Erick Erickson
If you always want to do this exact thing, it looks like a copyField
directive in your schema.
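For reference, a copyField directive in the schema would look roughly like this (field names taken from the question below; field types are assumptions — adjust to your schema):

```xml
<field name="title_short"   type="string" indexed="true" stored="true"/>
<field name="title_fullStr" type="string" indexed="true" stored="true"/>
<copyField source="title_short" dest="title_fullStr"/>
```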

If it has to be more nuanced, you can use something like
StatelessScriptUpdateProcessorFactory.

Both of these would affect _all_ documents coming in to Solr, so may
be too blunt a hammer.

Best,
Erick

On Wed, Sep 27, 2017 at 3:07 AM, Zisis Simaioforidis  wrote:
> Is there a way to map a field value based on another field value without
> replicating the columns in the CSV itself?
>
> For example, I tried literal.title_fullStr=f.title_short but it doesn't
> seem to work.
>
> Thank you
>


Re: Filter Factory question

2017-09-27 Thread Webster Homer
There is a need for a special filter since the input has to be normalized.
That is the main requirement; splitting into pieces is optional. As far as
I know, there is nothing in Solr that knows about molecular formulas.

In any case, I figured out my problem. I was overthinking it.

On Wed, Sep 27, 2017 at 3:52 AM, Emir Arnautović <
emir.arnauto...@sematext.com> wrote:

> Hi Homer,
> There is no need for a special filter; there is one that is, for some reason, not part of the documentation (I will ask why, so follow that thread if you decide to go this way). You can use something like:
> <filter class="solr.PatternCaptureGroupFilterFactory" pattern="([A-Z][a-z]?\d+)" preserveOriginal="true" />
>
> This will capture all atom counts as separate tokens.
>
> HTH,
> Emir
>
> > On 26 Sep 2017, at 23:14, Webster Homer  wrote:
> >
> > I am trying to create a filter that normalizes an input token, but also
> > splits it into multiple pieces. Sort of like what the WordDelimiterFilter
> > does.
> >
> > It's meant to take a molecular formula like C2H6O and normalize it to
> C2H6O1
> >
> > That part works. However I was also going to have it put out the
> individual
> > atom counts as tokens.
> > C2H6O1
> > C2
> > H6
> > O1
> >
> > When I enable this feature in the factory, I don't get any output at all.
> >
> > I looked over a couple of filters that do what I want and it's not
> entirely
> > clear what they're doing. So I have some questions:
> > Looking at ShingleFilter and WordDelimiterFilter
> > They both set several attributes:
> > CharTermAttribute : Seems to be the actual terms being set. Seemed
> straight
> > forward, works fine when I only have one term to add.
> >
> > PositionIncrementAttribute: What does this do? It appears that
> > WordDelimiterFilter sets this to 0 most of the time. This has decent
> > documentation.
> >
> > OffsetAttribute: I think that this tracks offsets for each term being
> > processed. Not really sure though. The documentation mentions tokens. So
> if
> > I have multiple variations for a token is this for each variation?
> >
> > TypeAttribute: default is "word". Don't know what this is for.
> >
> > PositionLengthAttribute: WordDelimiterFilter doesn't use this but Shingle
> > does. It defaults to 1. What's it good for when should I use it?
> >
> > Here is my incrementToken method.
> >
> >@Override
> >public boolean incrementToken() throws IOException {
> >while(true) {
> >if (!hasSavedState) {
> >if (! input.incrementToken()) {
> >return false;
> >}
> >if (! generateFragments) { // This part works fine!
> >String normalizedFormula = molFormula.normalize(new
> > String(termAttribute.buffer()));
> >char[]newBuffer = normalizedFormula.toCharArray();
> >termAttribute.setEmpty();
> >termAttribute.copyBuffer(newBuffer, 0, newBuffer.length);
> >return true;
> >}
> >formulas = molFormula.normalizeToList(new
> > String(termAttribute.buffer()));
> >iterator = formulas.listIterator();
> >savedPositionIncrement += posIncAttribute.getPositionIncrement();
> >hasSavedState = true;
> >first = true;
> >saveState();
> >}
> >if (!iterator.hasNext()) {
> >posIncAttribute.setPositionIncrement(savedPositionIncrement);
> >savedPositionIncrement = 0;
> >hasSavedState = false;
> >continue;
> >}
> >String formula = iterator.next();
> >int startOffset = savedStartOffset;
> >
> >if (first) {
> >termAttribute.setEmpty();
> >}
> >int endOffset = savedStartOffset + formula.length();
> >System.out.printf("Writing formula %s %d to %d%n", formula,
> > startOffset, endOffset);;
> >termAttribute.append(formula);
> >offsetAttribute.setOffset(startOffset, endOffset);
> >savedStartOffset = endOffset + 1;
> >if (first) {
> >posIncAttribute.setPositionIncrement(0);
> >} else {
> >first = false;
> >posIncAttribute.setPositionIncrement(0);
> >}
> >typeAttribute.setType(savedType);
> >return true;
> >}
> >}
> >
> > --
> >
> >

Re: Solr performance issue on querying --> Solr 6.5.1

2017-09-27 Thread Emir Arnautović
Hi Arun,
It is hard to measure something without affecting it, but we could use debug 
results and combine them with QTime without debug: if we ignore merging results, it 
seems that the majority of the time is spent retrieving docs (~500ms). You should 
consider reducing the number of rows if you want better response time (you can ask 
for rows=0 to see the max possible time). Also, as Erick suggested, reducing the number 
of shards (to 1 if you do not plan to add many more docs) will trim some of the overhead of merging 
results.

Thanks,
Emir

I noticed that you removed bq - is time with bq acceptable as well?
> On 27 Sep 2017, at 12:34, sasarun  wrote:
> 
> Hi Emir, 
> 
> Please find the response without bq parameter and debugQuery set to true. 
> Also it was noted that Qtime comes down drastically without the debug
> parameter to about 700-800. 
> 
> 
> true
> 0
> 3446
> 
> 
> ("hybrid electric powerplant" "hybrid electric powerplants" "Electric"
> "Electrical" "Electricity" "Engine" "fuel economy" "fuel efficiency" "Hybrid
> Electric Propulsion" "Power Systems" "Powerplant" "Propulsion" "hybrid"
> "hybrid electric" "electric powerplant")
> 
> edismax
> on
> 
> host
> title
> url
> customContent
> contentSpecificSearch
> 
> 
> id
> contentOntologyTagsCount
> 
> 0
> OR
> 3985d7e2-3e54-48d8-8336-229e85f5d9de
> 600
> true
> 
> 
>  maxScore="56.74194">...
> 
> 
> 
> solr-prd-cluster-m-GooglePatent_shard4_replica2-1506504238282-20
> 
> 
> 
> 35
> 159
> GET_TOP_IDS
> 41294
> ...
> 
> 
> 29
> 165
> GET_TOP_IDS
> 40980
> ...
> 
> 
> 31
> 200
> GET_TOP_IDS
> 41006
> ...
> 
> 
> 43
> 208
> GET_TOP_IDS
> 41040
> ...
> 
> 
> 181
> 466
> GET_TOP_IDS
> 41138
> ...
> 
> 
> 
> 
> 1518
> 1523
> GET_FIELDS,GET_DEBUG
> 110
> ...
> 
> 
> 1562
> 1573
> GET_FIELDS,GET_DEBUG
> 115
> ...
> 
> 
> 1793
> 1800
> GET_FIELDS,GET_DEBUG
> 120
> ...
> 
> 
> 2153
> 2161
> GET_FIELDS,GET_DEBUG
> 125
> ...
> 
> 
> 2957
> 2970
> GET_FIELDS,GET_DEBUG
> 130
> ...
> 
> 
> 
> 
> 10302.0
> 
> 2.0
> 
> 2.0
> 
> 
> 0.0
> 
> 
> 0.0
> 
> 
> 0.0
> 
> 
> 0.0
> 
> 
> 0.0
> 
> 
> 0.0
> 
> 
> 0.0
> 
> 
> 0.0
> 
> 
> 
> 10288.0
> 
> 661.0
> 
> 
> 0.0
> 
> 
> 0.0
> 
> 
> 0.0
> 
> 
> 0.0
> 
> 
> 0.0
> 
> 
> 0.0
> 
> 
> 0.0
> 
> 
> 9627.0
> 
> 
> 
> 
> ("hybrid electric powerplant" "hybrid electric powerplants" "Electric"
> "Electrical" "Electricity" "Engine" "fuel economy" "fuel efficiency" "Hybrid
> Electric Propulsion" "Power Systems" "Powerplant" "Propulsion" "hybrid"
> "hybrid electric" "electric powerplant")
> 
> 
> ("hybrid electric powerplant" "hybrid electric powerplants" "Electric"
> "Electrical" "Electricity" "Engine" "fuel economy" "fuel efficiency" "Hybrid
> Electric Propulsion" "Power Systems" "Powerplant" "Propulsion" "hybrid"
> "hybrid electric" "electric powerplant")
> 
> 
> (+(DisjunctionMaxQuery((host:hybrid electric powerplant |
> contentSpecificSearch:"hybrid electric powerplant" | customContent:"hybrid
> electric powerplant" | title:hybrid electric powerplant | url:hybrid
> electric powerplant)) DisjunctionMaxQuery((host:hybrid electric powerplants
> | contentSpecificSearch:"hybrid electric powerplants" |
> customContent:"hybrid electric powerplants" | title:hybrid electric
> powerplants | url:hybrid electric powerplants))
> DisjunctionMaxQuery((host:Electric | contentSpecificSearch:electric |
> customContent:electric | title:Electric | url:Electric))
> DisjunctionMaxQuery((host:Electrical | contentSpecificSearch:electrical |
> customContent:electrical | title:Electrical | url:Electrical))
> DisjunctionMaxQuery((host:Electricity | contentSpecificSearch:electricity |
> customContent:electricity | title:Electricity | url:Electricity))
> DisjunctionMaxQuery((host:Engine | contentSpecificSearch:engine |
> customContent:engine | title:Engine | url:Engine))
> DisjunctionMaxQuery((host:fuel economy | contentSpecificSearch:"fuel
> economy" | customContent:"fuel economy" | title:fuel economy | url:fuel
> economy)) DisjunctionMaxQuery((host:fuel efficiency |
> contentSpecificSearch:"fuel efficiency" | customContent:"fuel efficiency" |
> title:fuel efficiency | url:fuel efficiency))
> DisjunctionMaxQuery((host:Hybrid Electric Propulsion |
> contentSpecificSearch:"hybrid electric propulsion" | customContent:"hybrid
> electric propulsion" | title:Hybrid Electric Propulsion | url:Hybrid
> Electric Propulsion)) DisjunctionMaxQuery((host:Power Systems |
> contentSpecificSearch:"power systems" | customContent:"power systems" |
> title:Power Systems | url:Power Systems))
> DisjunctionMaxQuery((host:Powerplant | contentSpecificSearch:powerplant |
> customContent:powerplant | title:Powerplant | url:Powerplant))
> DisjunctionMaxQuery((host:Propulsion | contentSpecificSearch:propulsion |
> customContent:propulsion | title:Propulsion | url:Propulsion))
> DisjunctionMaxQuery((host:hybrid | contentSpecificSearch:hybrid |
> customContent:hybrid | title:hybrid | url:hybrid))
> DisjunctionMaxQuery((host:hybrid electric | contentSpecificSearch:"hybrid
> electric" | customContent:"hybrid 

Solr Spatial Query Problem Hk.

2017-09-27 Thread Can Ezgi Aydemir
hi everyone,

I am trying spatial queries in Solr, such as Intersects, IsWithin, etc. I wrote the 
queries below, but they are wrong. I tried three different forms of this query, and all of them 
return the same error.

How do I run spatial queries (Intersects, IsWithin, etc.) in Solr?

Best Regards.

1- 
http://localhost:8983/solr/nh/select?fq=geometry.coordinates:%22IsWithin(POLYGON((-80%2029,%20-90%2050,%20-60%2070,%200%200,%20-80%2029)))%20distErrPct=0%22
2- 
http://localhost:8983/solr/nh/select?q={!field%20f=geometry.coordinates}Intersects(POLYGON((-80%2029,%20-90%2050,%20-60%2070,%200%200,%20-80%2029)))
3- 
http://localhost:8983/solr/nh/select?q=*:*&fq={!field%20f=geometry.coordinates}Intersects(POLYGON((-80%2029,%20-90%2050,%20-60%2070,%200%200,%20-80%2029)))


 
  400
  1
  
   geometry.coordinates:"IsWithin(POLYGON((-80 29, -90 50, -60 
70, 0 0, -80 29))) distErrPct=0"
   
  
 
 
  
   org.apache.solr.common.SolrException
   org.apache.solr.common.SolrException
  
  Invalid Number: IsWithin(POLYGON((-80 29, -90 50, -60 70, 0 
0, -80 29))) distErrPct=0
  400
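The "Invalid Number" response suggests that geometry.coordinates is defined as a numeric field rather than a spatial one. Spatial predicates such as Intersects and IsWithin need a spatial field type; a minimal sketch follows (the field and type names are assumptions, polygon support requires the JTS jar on the classpath, and older Solr versions need the full JtsSpatialContextFactory class name instead of the "JTS" shorthand):

```xml
<fieldType name="location_rpt" class="solr.SpatialRecursivePrefixTreeFieldType"
           spatialContextFactory="JTS" geo="true"
           distErrPct="0.025" maxDistErr="0.001" distanceUnits="kilometers"/>
<field name="geometry" type="location_rpt" indexed="true" stored="true"/>
```

The query would then take the form fq={!field f=geometry}Intersects(POLYGON((...))).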
 





Can Ezgi AYDEMİR
Oracle Veri Tabanı Yöneticisi

İşlem Coğrafi Bilgi Sistemleri Müh. & Eğitim AŞ.
2024.Cadde No:14, Beysukent 06800, Ankara, Türkiye
T : 0 312 233 50 00 .:. F : 0312 235 56 82
E : cayde...@islem.com.tr .:. W : http://www.islem.com.tr/

This message may contain confidential information and is intended only for the named recipient. If you are not the named addressee you should not disseminate, distribute or copy this e-mail. Please notify the sender immediately if you have received this e-mail by mistake and delete this e-mail from your system. Finally, the recipient should check this email and any attachments for the presence of viruses. İŞLEM GIS® accepts no liability for any damage that may be caused by any virus transmitted by this email. For information: b...@islem.com.tr


upgrade to 7.0.0

2017-09-27 Thread Stefano Mancini
Hi,

I've just installed Solr 7.0.0 and I get an error opening an index created with 
6.6.1.

The server works fine if I start it with an empty index, so I suppose the 
configuration is OK.

this is the stack trace:


Error waiting for SolrCore to be created
java.util.concurrent.ExecutionException: org.apache.solr.common.SolrException: 
Unable to create core [tps]
at java.util.concurrent.FutureTask.report(FutureTask.java:122)
at java.util.concurrent.FutureTask.get(FutureTask.java:192)
at 
org.apache.solr.core.CoreContainer.lambda$load$118(CoreContainer.java:647)
at 
com.codahale.metrics.InstrumentedExecutorService$InstrumentedRunnable.run(InstrumentedExecutorService.java:176)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$128(ExecutorUtil.java:188)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.solr.common.SolrException: Unable to create core [tps]
at 
org.apache.solr.core.CoreContainer.createFromDescriptor(CoreContainer.java:996)
at 
org.apache.solr.core.CoreContainer.lambda$load$117(CoreContainer.java:619)
at 
com.codahale.metrics.InstrumentedExecutorService$InstrumentedCallable.call(InstrumentedExecutorService.java:197)
... 5 more
Caused by: org.apache.solr.common.SolrException: Error opening new searcher
at org.apache.solr.core.SolrCore.<init>(SolrCore.java:988)
at org.apache.solr.core.SolrCore.<init>(SolrCore.java:843)
at 
org.apache.solr.core.CoreContainer.createFromDescriptor(CoreContainer.java:980)
... 7 more
Caused by: org.apache.solr.common.SolrException: Error opening new searcher
at org.apache.solr.core.SolrCore.openNewSearcher(SolrCore.java:2066)
at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:2186)
at org.apache.solr.core.SolrCore.initSearcher(SolrCore.java:1071)
at org.apache.solr.core.SolrCore.<init>(SolrCore.java:960)
... 9 more
Caused by: java.lang.NullPointerException
at java.util.Objects.requireNonNull(Objects.java:203)
at java.util.Optional.<init>(Optional.java:96)
at java.util.Optional.of(Optional.java:108)
at java.util.stream.ReduceOps$2ReducingSink.get(ReduceOps.java:129)
at java.util.stream.ReduceOps$2ReducingSink.get(ReduceOps.java:107)
at 
java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:708)
at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
at java.util.stream.ReferencePipeline.reduce(ReferencePipeline.java:479)
at 
org.apache.solr.index.SlowCompositeReaderWrapper.<init>(SlowCompositeReaderWrapper.java:76)
at 
org.apache.solr.index.SlowCompositeReaderWrapper.wrap(SlowCompositeReaderWrapper.java:57)
at 
org.apache.solr.search.SolrIndexSearcher.<init>(SolrIndexSearcher.java:252)
at org.apache.solr.core.SolrCore.openNewSearcher(SolrCore.java:2034)
... 12 more


Any hint ?



Re: Solr performance issue on querying --> Solr 6.5.1

2017-09-27 Thread sasarun
Hi Emir, 

Please find the response without bq parameter and debugQuery set to true. 
Also it was noted that Qtime comes down drastically without the debug
parameter to about 700-800. 


true
0
3446


("hybrid electric powerplant" "hybrid electric powerplants" "Electric"
"Electrical" "Electricity" "Engine" "fuel economy" "fuel efficiency" "Hybrid
Electric Propulsion" "Power Systems" "Powerplant" "Propulsion" "hybrid"
"hybrid electric" "electric powerplant")

edismax
on

host
title
url
customContent
contentSpecificSearch


id
contentOntologyTagsCount

0
OR
3985d7e2-3e54-48d8-8336-229e85f5d9de
600
true


...



solr-prd-cluster-m-GooglePatent_shard4_replica2-1506504238282-20



35
159
GET_TOP_IDS
41294
...


29
165
GET_TOP_IDS
40980
...


31
200
GET_TOP_IDS
41006
...


43
208
GET_TOP_IDS
41040
...


181
466
GET_TOP_IDS
41138
...




1518
1523
GET_FIELDS,GET_DEBUG
110
...


1562
1573
GET_FIELDS,GET_DEBUG
115
...


1793
1800
GET_FIELDS,GET_DEBUG
120
...


2153
2161
GET_FIELDS,GET_DEBUG
125
...


2957
2970
GET_FIELDS,GET_DEBUG
130
...




10302.0

2.0

2.0


0.0


0.0


0.0


0.0


0.0


0.0


0.0


0.0



10288.0

661.0


0.0


0.0


0.0


0.0


0.0


0.0


0.0


9627.0




("hybrid electric powerplant" "hybrid electric powerplants" "Electric"
"Electrical" "Electricity" "Engine" "fuel economy" "fuel efficiency" "Hybrid
Electric Propulsion" "Power Systems" "Powerplant" "Propulsion" "hybrid"
"hybrid electric" "electric powerplant")


("hybrid electric powerplant" "hybrid electric powerplants" "Electric"
"Electrical" "Electricity" "Engine" "fuel economy" "fuel efficiency" "Hybrid
Electric Propulsion" "Power Systems" "Powerplant" "Propulsion" "hybrid"
"hybrid electric" "electric powerplant")


(+(DisjunctionMaxQuery((host:hybrid electric powerplant |
contentSpecificSearch:"hybrid electric powerplant" | customContent:"hybrid
electric powerplant" | title:hybrid electric powerplant | url:hybrid
electric powerplant)) DisjunctionMaxQuery((host:hybrid electric powerplants
| contentSpecificSearch:"hybrid electric powerplants" |
customContent:"hybrid electric powerplants" | title:hybrid electric
powerplants | url:hybrid electric powerplants))
DisjunctionMaxQuery((host:Electric | contentSpecificSearch:electric |
customContent:electric | title:Electric | url:Electric))
DisjunctionMaxQuery((host:Electrical | contentSpecificSearch:electrical |
customContent:electrical | title:Electrical | url:Electrical))
DisjunctionMaxQuery((host:Electricity | contentSpecificSearch:electricity |
customContent:electricity | title:Electricity | url:Electricity))
DisjunctionMaxQuery((host:Engine | contentSpecificSearch:engine |
customContent:engine | title:Engine | url:Engine))
DisjunctionMaxQuery((host:fuel economy | contentSpecificSearch:"fuel
economy" | customContent:"fuel economy" | title:fuel economy | url:fuel
economy)) DisjunctionMaxQuery((host:fuel efficiency |
contentSpecificSearch:"fuel efficiency" | customContent:"fuel efficiency" |
title:fuel efficiency | url:fuel efficiency))
DisjunctionMaxQuery((host:Hybrid Electric Propulsion |
contentSpecificSearch:"hybrid electric propulsion" | customContent:"hybrid
electric propulsion" | title:Hybrid Electric Propulsion | url:Hybrid
Electric Propulsion)) DisjunctionMaxQuery((host:Power Systems |
contentSpecificSearch:"power systems" | customContent:"power systems" |
title:Power Systems | url:Power Systems))
DisjunctionMaxQuery((host:Powerplant | contentSpecificSearch:powerplant |
customContent:powerplant | title:Powerplant | url:Powerplant))
DisjunctionMaxQuery((host:Propulsion | contentSpecificSearch:propulsion |
customContent:propulsion | title:Propulsion | url:Propulsion))
DisjunctionMaxQuery((host:hybrid | contentSpecificSearch:hybrid |
customContent:hybrid | title:hybrid | url:hybrid))
DisjunctionMaxQuery((host:hybrid electric | contentSpecificSearch:"hybrid
electric" | customContent:"hybrid electric" | title:hybrid electric |
url:hybrid electric)) DisjunctionMaxQuery((host:electric powerplant |
contentSpecificSearch:"electric powerplant" | customContent:"electric
powerplant" | title:electric powerplant | url:electric
powerplant/no_coord


+((host:hybrid electric powerplant | contentSpecificSearch:"hybrid electric
powerplant" | customContent:"hybrid electric powerplant" | title:hybrid
electric powerplant | url:hybrid electric powerplant) (host:hybrid electric
powerplants | contentSpecificSearch:"hybrid electric powerplants" |
customContent:"hybrid electric powerplants" | title:hybrid electric
powerplants | url:hybrid electric powerplants) (host:Electric |
contentSpecificSearch:electric | customContent:electric | title:Electric |
url:Electric) (host:Electrical | contentSpecificSearch:electrical |
customContent:electrical | title:Electrical | url:Electrical)
(host:Electricity | contentSpecificSearch:electricity |
customContent:electricity | title:Electricity | url:Electricity)
(host:Engine | contentSpecificSearch:engine | customContent:engine |
title:Engine | url:Engine) (host:fuel 

CSV import to SOLR

2017-09-27 Thread Zisis Simaioforidis
Is there a way to map a field value based on another field value without 
replicating the columns in the CSV itself?


For example, I tried literal.title_fullStr=f.title_short but it doesn't 
seem to work.


Thank you



Re: Solr performance issue on querying --> Solr 6.5.1

2017-09-27 Thread sasarun
Hi Erick, 

Qtime comes down with rows set as 1. Also it was noted that qtime comes down
when debug parameter is not added with the query. It comes to about 900.

Thanks, 
Arun 



--
Sent from: http://lucene.472066.n3.nabble.com/Solr-User-f472068.html


Re: Solr performance issue on querying --> Solr 6.5.1

2017-09-27 Thread Toke Eskildsen
On Tue, 2017-09-26 at 07:43 -0700, sasarun wrote:
> Allocated heap size for young generation is about 8 gb and old 
> generation is about 24 gb. And gc analysis showed peak
> size utilisation is really low compared to these values.

That does not come as a surprise. Your collections would normally be
considered small, if not tiny, looking only at their size measured in
bytes. Again, if you expect them to grow significantly (more than 10x),
your allocation might make sense. If you do not expect such a growth in
the near future, you will be better off with a much smaller heap: The
peak heap utilization that you have logged (or twice that to err on the
cautious side) seems a good starting point.

And whatever you do, don't set Xmx to 32GB. Use <31GB or significantly
more than 32GB:
https://blog.codecentric.de/en/2014/02/35gb-heap-less-32gb-java-jvm-memory-oddities/


Are you indexing while you search? If so, you need to set auto-warm or
state a few explicit warmup-queries. If not, your measuring will not be
representative as it will be on first-searches, which are always slower
than warmed-searches.
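For reference, explicit warming queries go in solrconfig.xml via a QuerySenderListener; a minimal sketch (the query itself is a placeholder - use queries representative of real traffic):

```xml
<listener event="firstSearcher" class="solr.QuerySenderListener">
  <arr name="queries">
    <lst>
      <str name="q">*:*</str>
      <str name="rows">10</str>
    </lst>
  </arr>
</listener>
```

A similar listener on the newSearcher event warms searchers opened after commits.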


- Toke Eskildsen, Royal Danish Library



PatternCaptureGroupTokenFilter

2017-09-27 Thread Emir Arnautović
Hi all,
Is there some reason why PatternCaptureGroupTokenFilter is not documented even 
though it is included in the code base?

Thanks,
Emir

Re: Filter Factory question

2017-09-27 Thread Emir Arnautović
Hi Homer,
There is no need for a special filter; there is one that is, for some reason, not 
part of the documentation (I will ask why, so follow that thread if you decide to go this 
way). You can use something like:

<filter class="solr.PatternCaptureGroupFilterFactory" pattern="([A-Z][a-z]?\d+)" preserveOriginal="true" />

This will capture all atom counts as separate tokens.
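A sketch of how this filter could sit in a field type (the field type name and tokenizer choice are assumptions, not from the thread; verify the attribute spelling against your Solr version):

```xml
<fieldType name="mol_formula" class="solr.TextField">
  <analyzer>
    <!-- keep the whole formula as one token, then capture each atom count -->
    <tokenizer class="solr.KeywordTokenizerFactory"/>
    <filter class="solr.PatternCaptureGroupFilterFactory"
            pattern="([A-Z][a-z]?\d+)" preserveOriginal="true"/>
  </analyzer>
</fieldType>
```

With this chain, C2H6O1 would be emitted as C2H6O1 plus the C2, H6, and O1 tokens.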

HTH,
Emir

> On 26 Sep 2017, at 23:14, Webster Homer  wrote:
> 
> I am trying to create a filter that normalizes an input token, but also
splits it into multiple pieces. Sort of like what the WordDelimiterFilter
> does.
> 
> It's meant to take a molecular formula like C2H6O and normalize it to C2H6O1
> 
> That part works. However I was also going to have it put out the individual
> atom counts as tokens.
> C2H6O1
> C2
> H6
> O1
> 
> When I enable this feature in the factory, I don't get any output at all.
> 
> I looked over a couple of filters that do what I want and it's not entirely
> clear what they're doing. So I have some questions:
Looking at ShingleFilter and WordDelimiterFilter
> They both set several attributes:
> CharTermAttribute : Seems to be the actual terms being set. Seemed straight
> forward, works fine when I only have one term to add.
> 
> PositionIncrementAttribute: What does this do? It appears that
> WordDelimiterFilter sets this to 0 most of the time. This has decent
> documentation.
> 
> OffsetAttribute: I think that this tracks offsets for each term being
> processed. Not really sure though. The documentation mentions tokens. So if
I have multiple variations for a token is this for each variation?
> 
> TypeAttribute: default is "word". Don't know what this is for.
> 
PositionLengthAttribute: WordDelimiterFilter doesn't use this but Shingle
> does. It defaults to 1. What's it good for when should I use it?
> 
> Here is my incrementToken method.
> 
>@Override
>public boolean incrementToken() throws IOException {
>while(true) {
>if (!hasSavedState) {
>if (! input.incrementToken()) {
>return false;
>}
>if (! generateFragments) { // This part works fine!
>String normalizedFormula = molFormula.normalize(new
> String(termAttribute.buffer()));
>char[]newBuffer = normalizedFormula.toCharArray();
>termAttribute.setEmpty();
>termAttribute.copyBuffer(newBuffer, 0, newBuffer.length);
>return true;
>}
>formulas = molFormula.normalizeToList(new
> String(termAttribute.buffer()));
>iterator = formulas.listIterator();
>savedPositionIncrement += posIncAttribute.getPositionIncrement();
>hasSavedState = true;
>first = true;
>saveState();
>}
>if (!iterator.hasNext()) {
>posIncAttribute.setPositionIncrement(savedPositionIncrement);
>savedPositionIncrement = 0;
>hasSavedState = false;
>continue;
>}
>String formula = iterator.next();
>int startOffset = savedStartOffset;
> 
>if (first) {
>termAttribute.setEmpty();
>}
>int endOffset = savedStartOffset + formula.length();
>System.out.printf("Writing formula %s %d to %d%n", formula,
> startOffset, endOffset);;
>termAttribute.append(formula);
>offsetAttribute.setOffset(startOffset, endOffset);
>savedStartOffset = endOffset + 1;
>if (first) {
>posIncAttribute.setPositionIncrement(0);
>} else {
>first = false;
>posIncAttribute.setPositionIncrement(0);
>}
>typeAttribute.setType(savedType);
>return true;
>}
>}
> 
> -- 
> 
> 
> This message and any attachment are confidential and may be privileged or 
> otherwise protected from disclosure. If you are not the intended recipient, 
> you must not copy this message or attachment or disclose the contents to 
> any other person. If you have received this transmission in error, please 
> notify the sender immediately and delete the message and any attachment 
> from your system. Merck KGaA, Darmstadt, Germany and any of its 
> subsidiaries do not accept liability for any omissions or errors in this 
> message which may arise as a result of E-Mail-transmission or for damages 
> resulting from any unauthorized changes of the content of this message and 
> any attachment thereto. Merck KGaA, Darmstadt, Germany and any of its 
> subsidiaries do not guarantee that this message is free of viruses and does 
> not accept liability for any damages caused by any virus transmitted 
> therewith.
> 
> Click http://www.emdgroup.com/disclaimer to access the German, French, 
> Spanish and Portuguese versions of this disclaimer.



Re: DocValues, Long and SolrJ

2017-09-27 Thread Emir Arnautović
I did not look at the code, but after deleting, make sure all segments are gone 
(maybe run an optimize), make sure you reloaded the core, and if nothing works (and 
this is the recommended solution anyway), recreate your collection instead of deleting 
all documents. 
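For reference, a single-valued long with docValues in a Solr 6.x schema would look something like this (the field name is taken from the question; this assumes a tlong/TrieLongField type is defined, as in the default configsets):

```xml
<field name="access" type="tlong" indexed="true" stored="true"
       docValues="true" multiValued="false"/>
```

The "cannot change DocValues type from SORTED_SET to NUMERIC" error usually means older segments still hold the field under the previous (multi-valued) definition, which is why recreating the collection is the safe fix.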

HTH,
Emir

> On 26 Sep 2017, at 23:04, Phil Scadden  wrote:
> 
> I get it after I have deleted the index with a delete query and start trying 
> to populate it again with new documents. The error occurs when the indexer 
> tries to add a new document. And yes, I did change the schema before I 
> started the operation.
> 
> -Original Message-
> From: Emir Arnautović [mailto:emir.arnauto...@sematext.com]
> Sent: Tuesday, 26 September 2017 8:49 p.m.
> To: solr-user@lucene.apache.org
> Subject: Re: DocValues, Long and SolrJ
> 
> Hi Phil,
> Are you saying that you get this error when you create fresh core/collection? 
> This sort of errors are usually related to schema being changed after some 
> documents being indexed.
> 
> Thanks,
> Emir
> 
>> On 25 Sep 2017, at 23:42, Phil Scadden  wrote:
>> 
>> I ran into a problem with indexing documents which I worked around by 
>> changing data type, but I am curious as to how the setup could be made to 
>> work.
>> 
>> Solr 6.5.1 - Field type Long, multivalued false, DocValues.
>> 
>> In indexing with Solr, I set the value of field with:
>>   Long accessLevel
>>   ...
>>   accessLevel = qury.val(1);
>>   ...
>>   Document.addField("access", accessLevel);
>> 
>> Solr fails to add the document with this message:
>> 
>> "cannot change DocValues type from SORTED_SET to NUMERIC for field"
>> 
>> ??? So how do you configure a single-valued Long type?
>> Notice: This email and any attachments are confidential and may not be used, 
>> published or redistributed without the prior written consent of the 
>> Institute of Geological and Nuclear Sciences Limited (GNS Science). If 
>> received in error please destroy and immediately notify GNS Science. Do not 
>> copy or disclose the contents.
> 



Re: Solr performance issue on querying --> Solr 6.5.1

2017-09-27 Thread Emir Arnautović
Hi Arun,
This is not the simplest query either - a dozen phrase queries on several fields plus the same query again as bq. Can you provide the debugQuery info?
I did not look much into the debug times and what includes what, but one thing that is strange to me is that QTime is almost 4s while the query time in debug is 1.3s. Can you try running without bq? Can you include the boost factors in the main query instead?

Thanks,
Emir
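
To illustrate the last suggestion concretely: the per-phrase boosts can be folded into the q parameter itself, so that bq (and the second scoring pass it adds) can be dropped. The sketch below only builds the query string; the class name, phrases, and boost values are illustrative, not taken from the actual setup.

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.stream.Collectors;

public class BoostedQuery {

    /** Render a map of phrase -> boost as one edismax query string:
     *  ("phrase one"^100.0 "phrase two"^50.0 ...) */
    static String build(Map<String, Double> phraseBoosts) {
        return phraseBoosts.entrySet().stream()
                .map(e -> "\"" + e.getKey() + "\"^" + e.getValue())
                .collect(Collectors.joining(" ", "(", ")"));
    }

    public static void main(String[] args) {
        Map<String, Double> boosts = new LinkedHashMap<>();
        boosts.put("hybrid electric powerplant", 100.0);
        boosts.put("Electric", 50.0);
        boosts.put("hybrid", 15.0);
        // Send this as q, with defType=edismax and no bq parameter at all.
        System.out.println(build(boosts));
        // -> ("hybrid electric powerplant"^100.0 "Electric"^50.0 "hybrid"^15.0)
    }
}
```

Whether dropping bq actually helps is something the debug=timing output can confirm, since the duplicated bq query is scored separately.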

> On 26 Sep 2017, at 16:43, sasarun  wrote:
> 
> Hi All, 
> I have been using Solr for some time now but mostly in standalone mode. Now
> my current project is using Solr 6.5.1 hosted on hadoop. My solrconfig.xml
> has the following configuration. In the prod environment the performance on
> querying seems to really slow. Can anyone help me with few pointers on
> howimprove on the same. 
> 
> 
> <directoryFactory name="DirectoryFactory" class="solr.HdfsDirectoryFactory">
>   <str name="solr.hdfs.home">${solr.hdfs.home:}</str>
>   <bool name="solr.hdfs.blockcache.enabled">${solr.hdfs.blockcache.enabled:true}</bool>
>   <int name="solr.hdfs.blockcache.slab.count">${solr.hdfs.blockcache.slab.count:1}</int>
>   <bool name="solr.hdfs.blockcache.direct.memory.allocation">${solr.hdfs.blockcache.direct.memory.allocation:false}</bool>
>   <int name="solr.hdfs.blockcache.blocksperbank">${solr.hdfs.blockcache.blocksperbank:16384}</int>
>   <bool name="solr.hdfs.blockcache.read.enabled">${solr.hdfs.blockcache.read.enabled:true}</bool>
>   <bool name="solr.hdfs.blockcache.write.enabled">${solr.hdfs.blockcache.write.enabled:false}</bool>
>   <bool name="solr.hdfs.nrtcachingdirectory.enable">${solr.hdfs.nrtcachingdirectory.enable:true}</bool>
>   <int name="solr.hdfs.nrtcachingdirectory.maxmergesizemb">${solr.hdfs.nrtcachingdirectory.maxmergesizemb:16}</int>
>   <int name="solr.hdfs.nrtcachingdirectory.maxcachedmb">${solr.hdfs.nrtcachingdirectory.maxcachedmb:192}</int>
> </directoryFactory>
> <lockType>hdfs</lockType>
> It has 6 collections of the following sizes: 
> Collection 1 --> 6.41 MB
> Collection 2 --> 634.51 KB 
> Collection 3 --> 4.59 MB 
> Collection 4 --> 1,020.56 MB 
> Collection 5 --> 607.26 MB
> Collection 6 --> 102.4 KB
> Each collection has 5 shards. The allocated heap size for the young generation
> is about 8 GB and for the old generation about 24 GB, and GC analysis showed
> that peak utilisation is really low compared to these values. 
> But querying collection 4 and collection 5 gives really slow responses
> even though we are not using any complex queries. The output of debug queries
> run with debug=timing is given below for reference. Can anyone suggest a way
> to improve the performance?
> Response to query
> 
> 
> true
> 0
> 3962
> 
> 
> ("hybrid electric powerplant" "hybrid electric powerplants" "Electric"
> "Electrical" "Electricity" "Engine" "fuel economy" "fuel efficiency" "Hybrid
> Electric Propulsion" "Power Systems" "Powerplant" "Propulsion" "hybrid"
> "hybrid electric" "electric powerplant")
> 
> edismax
> true
> on
> 
> host
> title
> url
> customContent
> contentSpecificSearch
> 
> 
> id
> contentTagsCount
> 
> 0
> OR
> OR
> 3985d7e2-3e54-48d8-8336-229e85f5d9de
> 600
> 
> ("hybrid electric powerplant"^100.0 "hybrid electric powerplants"^100.0
> "Electric"^50.0 "Electrical"^50.0 "Electricity"^50.0 "Engine"^50.0 "fuel
> economy"^50.0 "fuel efficiency"^50.0 "Hybrid Electric Propulsion"^50.0
> "Power Systems"^50.0 "Powerplant"^50.0 "Propulsion"^50.0 "hybrid"^15.0
> "hybrid electric"^15.0 "electric powerplant"^15.0)
> 
> 
> 
> 
> 
> <lst name="timing">
>   <double name="time">15374.0</double>
>   <lst name="prepare">
>     <double name="time">2.0</double>
>     <lst name="query"><double name="time">2.0</double></lst>
>     <lst name="facet"><double name="time">0.0</double></lst>
>     <lst name="facet_module"><double name="time">0.0</double></lst>
>     <lst name="mlt"><double name="time">0.0</double></lst>
>     <lst name="highlight"><double name="time">0.0</double></lst>
>     <lst name="stats"><double name="time">0.0</double></lst>
>     <lst name="expand"><double name="time">0.0</double></lst>
>     <lst name="terms"><double name="time">0.0</double></lst>
>     <lst name="debug"><double name="time">0.0</double></lst>
>   </lst>
>   <lst name="process">
>     <double name="time">15363.0</double>
>     <lst name="query"><double name="time">1313.0</double></lst>
>     <lst name="facet"><double name="time">0.0</double></lst>
>     <lst name="facet_module"><double name="time">0.0</double></lst>
>     <lst name="mlt"><double name="time">0.0</double></lst>
>     <lst name="highlight"><double name="time">0.0</double></lst>
>     <lst name="stats"><double name="time">0.0</double></lst>
>     <lst name="expand"><double name="time">0.0</double></lst>
>     <lst name="terms"><double name="time">0.0</double></lst>
>     <lst name="debug"><double name="time">14048.0</double></lst>
>   </lst>
> </lst>
> 
> Thanks,
> Arun
> 
> 
> 
> --
> Sent from: http://lucene.472066.n3.nabble.com/Solr-User-f472068.html