RE: create collection from existing managed-schema

2018-07-26 Thread Rahul Chhiber
Hi,

If you want to share a schema and/or other configuration between collections, you 
need to create a configset, then specify that configset when creating the 
collections.

Any changes made to that configset or its schema will be reflected in all 
collections that use it.

By default, Solr uses the _default configset for any collection created without 
an explicit configset.
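For illustration only (the collection and configset names are made up, and this assumes a running SolrCloud node at localhost:8983 with SolrJ on the classpath), creating a collection from an existing configset in Java might look like:

```java
import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.request.CollectionAdminRequest;

public class CreateWithConfigset {
    public static void main(String[] args) throws Exception {
        // Assumes a configset named "sharedConfig" has already been uploaded
        // to ZooKeeper (names and host are illustrative).
        try (SolrClient client =
                new HttpSolrClient.Builder("http://localhost:8983/solr").build()) {
            // Create "collection2" using the shared configset: 1 shard, 1 replica.
            CollectionAdminRequest
                .createCollection("collection2", "sharedConfig", 1, 1)
                .process(client);
        }
    }
}
```

From the command line, something like `bin/solr create -c collection2 -n sharedConfig` should do the same; check `bin/solr create -help` on your version for the exact flags.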

Regards,
Rahul Chhiber

-Original Message-
From: Chuming Chen [mailto:chumingc...@gmail.com] 
Sent: Thursday, July 26, 2018 11:35 PM
To: solr-user@lucene.apache.org
Subject: create collection from existing managed-schema

Hi All,

From the Solr Admin interface, I have created a collection and added field 
definitions. I can get its managed-schema from the Admin interface.

Can I use this managed-schema to create a new collection? How?

Thanks,

Chuming




RE: cmd to enable debug logs

2018-07-09 Thread Rahul Chhiber
Use the -v option with the bin/solr start command.

Regards,
Rahul Chhiber
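If restarting or editing configuration is not an option, Solr also exposes a logging admin endpoint that can change log levels on a running node. A hedged sketch (the host, port, and endpoint path assume a default install; verify the endpoint against your Solr version):

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class SetSolrLogLevel {
    public static void main(String[] args) throws Exception {
        // Raise the root logger to DEBUG on a running Solr node.
        URL url = new URL(
            "http://localhost:8983/solr/admin/info/logging?set=root:DEBUG&wt=json");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream()))) {
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line); // echo Solr's JSON response
            }
        }
    }
}
```

This changes levels only at runtime; they revert to the configured defaults on restart.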


-Original Message-
From: Prateek Jain J [mailto:prateek.j.j...@ericsson.com] 
Sent: Monday, July 09, 2018 4:26 PM
To: solr-user@lucene.apache.org
Subject: cmd to enable debug logs


Hi All,

What's the command (from the CLI) to enable debug logs for a core in Solr? To be 
precise, I am using Solr 4.8.1. I looked into the admin guide, which explains how 
to do it from the UI, but says nothing about the CLI. Any pointers would help.

Note: I can't update solrconfig.xml.


Regards,
Prateek Jain



RE: Using lucene to post-process Solr query results

2018-01-24 Thread Rahul Chhiber
Exactly. I want to validate each Lucene document against the query and discard 
the ones that don't match.

Regards,
Rahul 

-Original Message-
From: Diego Ceccarelli (BLOOMBERG/ LONDON) [mailto:dceccarel...@bloomberg.net] 
Sent: Tuesday, January 23, 2018 7:35 PM
To: solr-user@lucene.apache.org
Subject: RE: Using lucene to post-process Solr query results

And you want to show the users only the Lucene documents that matched the 
original query sent to Solr? (What if a Lucene document matches only part of 
the query?)

From: solr-user@lucene.apache.org At: 01/23/18 13:55:46 To: Diego Ceccarelli 
(BLOOMBERG/ LONDON), solr-user@lucene.apache.org
Subject: RE: Using lucene to post-process Solr query results

Hi Diego,

Basically, each Solr document has a text field, which contains a large amount of 
text separated by delimiters. I split this text into parts and then assign each 
part to a separate Lucene Document object.

The field can also be multi-valued, in which case I create a Lucene document 
for each distinct value of that field in the same Solr document.

Regards,
Rahul 


-Original Message-
From: Diego Ceccarelli (BLOOMBERG/ LONDON) [mailto:dceccarel...@bloomberg.net] 
Sent: Tuesday, January 23, 2018 7:17 PM
To: solr-user@lucene.apache.org
Subject: Re: Using lucene to post-process Solr query results

Rahul, can you provide more details on how you decide that the smaller Lucene 
objects are part of the same Solr document?


From: solr-user@lucene.apache.org At: 01/23/18 09:59:17 To: 
solr-user@lucene.apache.org
Subject: Re: Using lucene to post-process Solr query results

Hi Rahul,
Looks like Streaming expressions can probably help you.

Is there something else you have tried for this?

Atita


On Jan 23, 2018 3:24 PM, "Rahul Chhiber" <rahul.chhi...@cumulus-systems.com>
wrote:

Hi All,

For our business requirement, once our Solr client (Java) gets the results of a 
search query from the Solr server, we need to search further across and within 
the content of the returned documents. To accomplish this, I am attempting to 
create an in-memory Lucene index (RAMDirectory) on the client side, convert the 
SolrDocument objects into smaller Lucene Document objects, add them to the 
index, and then search within it.

Has something like this been attempted before? Does it sound like a workable 
idea?

P.S. - The reason for this approach is that we need to search the data at a fine 
granularity but don't want to index it at such high granularity, for indexing 
performance reasons, i.e. we need to keep the total number of documents small.

Appreciate any help.

Regards,
Rahul Chhiber




RE: Using lucene to post-process Solr query results

2018-01-23 Thread Rahul Chhiber
Hi Diego,

Basically, each Solr document has a text field, which contains a large amount of 
text separated by delimiters. I split this text into parts and then assign each 
part to a separate Lucene Document object.

The field can also be multi-valued, in which case I create a Lucene document 
for each distinct value of that field in the same Solr document.

Regards,
Rahul 


-Original Message-
From: Diego Ceccarelli (BLOOMBERG/ LONDON) [mailto:dceccarel...@bloomberg.net] 
Sent: Tuesday, January 23, 2018 7:17 PM
To: solr-user@lucene.apache.org
Subject: Re: Using lucene to post-process Solr query results

Rahul, can you provide more details on how you decide that the smaller Lucene 
objects are part of the same Solr document?


From: solr-user@lucene.apache.org At: 01/23/18 09:59:17 To: 
solr-user@lucene.apache.org
Subject: Re: Using lucene to post-process Solr query results

Hi Rahul,
Looks like Streaming expressions can probably help you.

Is there something else you have tried for this?

Atita


On Jan 23, 2018 3:24 PM, "Rahul Chhiber" <rahul.chhi...@cumulus-systems.com>
wrote:

Hi All,

For our business requirement, once our Solr client (Java) gets the results of a 
search query from the Solr server, we need to search further across and within 
the content of the returned documents. To accomplish this, I am attempting to 
create an in-memory Lucene index (RAMDirectory) on the client side, convert the 
SolrDocument objects into smaller Lucene Document objects, add them to the 
index, and then search within it.

Has something like this been attempted before? Does it sound like a workable 
idea?

P.S. - The reason for this approach is that we need to search the data at a fine 
granularity but don't want to index it at such high granularity, for indexing 
performance reasons, i.e. we need to keep the total number of documents small.

Appreciate any help.

Regards,
Rahul Chhiber




RE: Using lucene to post-process Solr query results

2018-01-23 Thread Rahul Chhiber
Hi Atita,

I haven't tried anything else. I considered writing a plugin, such as a custom 
SearchComponent, but being fairly unfamiliar with Solr internals, I thought I 
would first try this approach and, if it works, maybe move the processing inside 
a plugin afterwards.

I will take a look at streaming expressions; they look interesting.

Regards,
Rahul Chhiber

-Original Message-
From: Atita Arora [mailto:atitaar...@gmail.com] 
Sent: Tuesday, January 23, 2018 3:29 PM
To: solr-user@lucene.apache.org
Subject: Re: Using lucene to post-process Solr query results

Hi Rahul,
Looks like Streaming expressions can probably help you.

Is there something else you have tried for this?

Atita



On Jan 23, 2018 3:24 PM, "Rahul Chhiber" <rahul.chhi...@cumulus-systems.com>
wrote:

Hi All,

For our business requirement, once our Solr client (Java) gets the results of a 
search query from the Solr server, we need to search further across and within 
the content of the returned documents. To accomplish this, I am attempting to 
create an in-memory Lucene index (RAMDirectory) on the client side, convert the 
SolrDocument objects into smaller Lucene Document objects, add them to the 
index, and then search within it.

Has something like this been attempted before? Does it sound like a workable 
idea?

P.S. - The reason for this approach is that we need to search the data at a fine 
granularity but don't want to index it at such high granularity, for indexing 
performance reasons, i.e. we need to keep the total number of documents small.

Appreciate any help.

Regards,
Rahul Chhiber


Using lucene to post-process Solr query results

2018-01-23 Thread Rahul Chhiber
Hi All,

For our business requirement, once our Solr client (Java) gets the results of a 
search query from the Solr server, we need to search further across and within 
the content of the returned documents. To accomplish this, I am attempting to 
create an in-memory Lucene index (RAMDirectory) on the client side, convert the 
SolrDocument objects into smaller Lucene Document objects, add them to the 
index, and then search within it.

Has something like this been attempted before? Does it sound like a workable 
idea?

P.S. - The reason for this approach is that we need to search the data at a fine 
granularity but don't want to index it at such high granularity, for indexing 
performance reasons, i.e. we need to keep the total number of documents small.

Appreciate any help.

Regards,
Rahul Chhiber
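The approach discussed in this thread can be sketched roughly as follows (Lucene 7-era API, matching the 2018 timeframe; the field name and sample text are illustrative, and note that RAMDirectory was later deprecated in favor of ByteBuffersDirectory):

```java
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.TextField;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.queryparser.classic.QueryParser;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.ScoreDoc;
import org.apache.lucene.store.RAMDirectory;

public class PostProcessSketch {
    public static void main(String[] args) throws Exception {
        StandardAnalyzer analyzer = new StandardAnalyzer();
        RAMDirectory dir = new RAMDirectory();

        // Index each "part" of a returned Solr document as its own small
        // Lucene document (the parts here stand in for the delimiter-split
        // pieces of the Solr text field).
        try (IndexWriter writer = new IndexWriter(dir, new IndexWriterConfig(analyzer))) {
            for (String part : new String[]{"disk usage high", "cpu usage normal"}) {
                Document doc = new Document();
                doc.add(new TextField("part", part, Field.Store.YES));
                writer.addDocument(doc);
            }
        }

        // Re-search within the in-memory index at the finer granularity.
        try (DirectoryReader reader = DirectoryReader.open(dir)) {
            IndexSearcher searcher = new IndexSearcher(reader);
            ScoreDoc[] hits = searcher.search(
                new QueryParser("part", analyzer).parse("disk"), 10).scoreDocs;
            for (ScoreDoc hit : hits) {
                System.out.println(searcher.doc(hit.doc).get("part"));
            }
        }
    }
}
```

In a real client, the loop over parts would be driven by splitting each SolrDocument's text field, and the RAMDirectory would be discarded after the per-query post-processing is done.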