Re: full name free text search problem

2018-01-30 Thread Rahul Sood
Hi Deepak,
Look at the score of each document in your results.
You can see it in debug mode.
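
For example (a minimal sketch; the collection name, query and fields are
placeholders), you can request scores plus debug output like this:

curl 'http://localhost:8983/solr/mycoll/select?q=full_name:(Dae+Kim)&fl=*,score&debugQuery=true'

The explain section of the response shows how each field match contributed
to each document's score.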
Rahul.

On Wed, Jan 31, 2018 at 4:18 AM, Deepak Udapudi  wrote:

> Hi all,
>
> I have the below scenario in full name search that we are trying to
> implement.
>
> Solr configuration :-
>
> <fieldType name="keywords_text" class="solr.TextField">
>   [analyzer, tokenizer and filter definitions stripped by the mail
>   archiver; one surviving attribute is delimiter="/"]
> </fieldType>
>
> [field definitions stripped by the mail archiver; at least one field is
> multiValued="true"]
>
> Scenario :-
>
> Solr configuration has office name, facility name and the full name as
> displayed above.
> We are searching based on the input name, with the records sorted by
> distance.
>
> Problem :-
>
> I am getting the records matching the full name sorted by distance.
> If an input string (for ex. Dae Kim) is provided, I am getting records
> other than Dae Kim (for ex. Rodney Kim) at the top of the search results,
> mixed in with the Dae Kim records. A Rodney Kim can appear just before the
> next Dae Kim because Kim matches all of the fields (full name, facility
> name and office name), so its hit frequency is high and its distance is
> smaller than that of the next Dae Kim in the results.
>
> Expected results :-
>
> I want to see all the records for Dae Kim to be seen at the top of the
> search results sorted by distance without any irrelevant results.
>
> Queries :-
>
> What is the fix for this problem, if anyone has faced it?
> How should I handle it?
>
> Any inputs would be highly appreciated.
>
> Thanks in advance.
>
> Regards,
> Deepak
>
>
>
>
> The information contained in this email message and any attachments is
> confidential and intended only for the addressee(s). If you are not an
> addressee, you may not copy or disclose the information, or act upon it,
> and you should delete it entirely from your email system. Please notify the
> sender that you received this email in error.
>



-- 
"Learning is not necessary, neither is survival"


DataImportHandler not able to Import a Custom XML - documents array is EMPTY

2018-01-30 Thread Rahul Sood
Hi,
I am struggling to import an XML file through an XSL transformation with
the DataImportHandler.
Do we need to run in Cloud mode for this?
When I start Solr in the DIH example mode, my other cores are not visible.
1) My solrconfig.xml has this:

  <requestHandler name="/dataimport" class="solr.DataImportHandler">
    <lst name="defaults">
      <str name="config">rahul-data-config.xml</str>
    </lst>
  </requestHandler>
2) My rahul-data-config.xml looks like this:

  [most of the markup was stripped by the mail archiver; the surviving
  attributes show an entity with url="http://localhost/xml/1998.1.651698.xml"
  and transformer="RegexTransformer,DateFormatTransformer"]
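
For reference, a minimal data-config that fetches a remote XML file via
XPathEntityProcessor usually looks like the sketch below (a guess at the
intended setup; the entity name, forEach expression and xpaths are
placeholders, and an xsl/useSolrAddSchema pair only applies if the XSL emits
a Solr <add> document):

  <dataConfig>
    <dataSource type="URLDataSource"/>
    <document>
      <entity name="bulletin"
              processor="XPathEntityProcessor"
              url="http://localhost/xml/1998.1.651698.xml"
              forEach="/BULLETIN"
              transformer="RegexTransformer,DateFormatTransformer">
        <field column="title" xpath="/BULLETIN/TITLE"/>
      </entity>
    </document>
  </dataConfig>

If forEach does not match the XML's actual element structure, DIH fetches
the file but emits zero rows, which would match the "Total Rows Fetched":
"0" seen in the debug response below.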



3) My XML looks like this:

  [element tags stripped by the mail archiver; the surviving values are
  "Audio, Navigation, Monitors, Alarms, SRS", "B 65 16 98",
  "Service Engineering", "February", "1999"]
4) My XSL looks like this:

  [stylesheet stripped entirely by the mail archiver]
I am hosting the XML and XSL on a XAMPP (Apache) server.
5) My debug response from dataimport looks like this:
{
  "responseHeader": {
"status": 0,
"QTime": 54
  },
  "initArgs": [
"defaults",
[
  "config",
  "rahul-data-config.xml"
]
  ],
  "command": "full-import",
  "mode": "debug",
  "documents": [],
  "verbose-output": [],
  "status": "idle",
  "importResponse": "",
  "statusMessages": {
"Total Requests made to DataSource": "1",
"Total Rows Fetched": "0",
"Total Documents Processed": "0",
"Total Documents Skipped": "0",
"Full Dump Started": "2018-01-31 04:35:02",
"": "Indexing completed. Added/Updated: 0 documents. Deleted 0
documents.",
"Committed": "2018-01-31 04:35:02",
"Time taken": "0:0:0.33"
  }
}

Please note that documents is an empty array.

What am I doing wrong?

Best Regards,
Rahul.


Re: Distributed search cross cluster

2018-01-30 Thread Erick Erickson
Jan:

Hmmm, must Solr do the work? On some level it seems easier if your
middle layer (behind your single IP) had 10 CloudSolrClient thread
pools, one for each cluster, and just merged the docs when it got them
back. That would take care of all the goodness of internal LBs and
so on.

Somewhere you have to know about all 10 ZK ensembles (or the IP
address of one of your Solr instances in each cluster, or the like). You're
talking about building that into a SearchComponent, but would a simple
client work as well? It wouldn't even have to be on a node hosting
Solr.
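
A rough sketch of that client-side fan-out (assuming SolrJ 7.x; the ZK
hosts, collection name and top-10 merge are illustrative):

import java.util.*;
import java.util.concurrent.*;
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.CloudSolrClient;
import org.apache.solr.common.SolrDocument;

public class FederatedSearch {
  public static void main(String[] args) throws Exception {
    // One ZK ensemble per independent cluster (10 in the real setup).
    List<List<String>> zkHosts = Arrays.asList(
        Arrays.asList("zk-a:2181"), Arrays.asList("zk-b:2181"));
    ExecutorService pool = Executors.newFixedThreadPool(zkHosts.size());
    List<Future<List<SolrDocument>>> futures = new ArrayList<>();
    for (List<String> zk : zkHosts) {
      futures.add(pool.submit(() -> {
        // Each task queries one cluster through its own CloudSolrClient.
        try (CloudSolrClient client =
                 new CloudSolrClient.Builder(zk, Optional.empty()).build()) {
          SolrQuery q = new SolrQuery("some query")
              .setRows(10).setFields("*", "score"); // need score for merging
          return new ArrayList<>(client.query("mycollection", q).getResults());
        }
      }));
    }
    List<SolrDocument> merged = new ArrayList<>();
    for (Future<List<SolrDocument>> f : futures) merged.addAll(f.get());
    // Merge step: keep the global top 10 by score.
    merged.sort(Comparator.comparingDouble(
        (SolrDocument d) -> -((Float) d.getFieldValue("score"))));
    merged.subList(Math.min(10, merged.size()), merged.size()).clear();
    pool.shutdown();
  }
}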

Streaming doesn't really seem to fit the bill, I don't think. It's
built to handle large result sets, and it doesn't sound like this is
that. It _used_ to read to the end of the stream even when closed; that's
been fixed, but check your version (I don't have the JIRA number
offhand).

If you use a "shards" approach, don't you then have a single point of
failure if it's going to just "some Solr node" in each of the other
collections?

I admit I just scanned your post and haven't thought about it very deeply.

Erick

On Tue, Jan 30, 2018 at 8:09 AM, Jan Høydahl  wrote:
> Hi,
>
> A customer has 10 separate SolrCloud clusters, with same schema across all, 
> but different content.
> Now they want users in each location to be able to federate a search across 
> all locations.
> Each location is 100% independent, with separate ZK etc. Bandwidth and 
> latency between the
> clusters is not an issue, they are actually in the same physical datacenter.
>
> Now my first thought was using a custom shards parameter, and let the
> receiving node fan
> out to all shards of all clusters. We’d need to contact the ZK for each 
> environment and find
> all shards and replicas participating in the collection and then construct 
> the shards=A1|A2,B1|B2…
> string which would be quite big, but if we get it right, it should “just work".
>
> Now, my question is whether there are other smarter ways that would leave it 
> up to existing Solr
> logic to select shards and load balance, that would also take into account 
> any shard.keys/_route_
> info etc. I thought of these
>   * collection=collA,collB  — but it only supports collections local to one 
> cloud
>   * Create a collection ALIAS to point to all 10 — but same here, only local 
> to one cluster
>   * Streaming expression top(merge(search(q=,zkHost=blabla))) — but we want 
> it with pure search API
>   * Write a custom ShardHandler plugin that knows about all clusters — but 
> this is complex stuff :)
>   * Write a custom SearchComponent plugin that knows about all clusters and 
> adds the shards= param
>
> Another approach would be for the originating cluster to fan out just ONE 
> request to each of the other
> clusters and then write some SearchComponent to merge those responses. That 
> would let us query
> the other clusters using one LB IP address instead of requiring full 
> visibility to all solr nodes
> of all clusters, but if we don’t need that isolation, that extra merge code 
> seems fairly complex.
>
> So far I opt for the custom SearchComponent and shards= param approach. Any 
> useful input from
> someone who tried a similar approach would be priceless!
>
> --
> Jan Høydahl, search solution architect
> Cominvent AS - www.cominvent.com
>


Re: Mixing simple and nested docs in same update?

2018-01-30 Thread Tomas Fernandez Lobbe
I believe the problem is that:
* BlockJoin queries do not know about your “types”; in the BlockJoin query 
world, everything that’s not a parent (i.e. doesn't match the parentFilter) is a child.
* All docs indexed before a parent are considered children of that parent.
That’s why in your first case it considers “friend” (not a parent, hence a 
child) to be a child of the first parent it can find in the segment (mother). 
In the second case, the “friend” doc has no parent: no parent document 
matches the filter after it, so it’s not considered a match. 
Maybe if you try your query with parentFilter=-type:child, this particular 
example works (I haven’t tried it)?
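
That check would look something like this (untested sketch, mirroring the
query from the gist; only the parentFilter changes):

curl "localhost:8983/solr/nested/query?q=id:mother&fl=*,%5Bchild%20parentFilter%3D-type%3Achild%5D"

i.e. define a parent as anything that is not a child, rather than only
type:parent.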

Note that when you send docs with children to Solr, Solr will make sure the 
children are indexed before the parent. Also note that there are some other 
open bugs related to child docs, in particular around mixing child docs with 
non-child docs; depending on which features you need, this may be a problem.

Tomás

> On Jan 30, 2018, at 5:48 AM, Jan Høydahl  wrote:
> 
> Pasting the GIST link :-) 
> https://gist.github.com/45640fe3bad696d53ef8a0930a35d163 
> 
> Anyone knows if this is expected behavior?
> 
> --
> Jan Høydahl, search solution architect
> Cominvent AS - www.cominvent.com
> 
>> On 15 Jan 2018, at 14:08, Jan Høydahl wrote:
>> 
>> Radio silence…
>> 
>> Here is a GIST for easy reproduction. Is this by design?
>> 
>> --
>> Jan Høydahl, search solution architect
>> Cominvent AS - www.cominvent.com
>> 
>>> On 11 Jan 2018, at 00:42, Jan Høydahl wrote:
>>> 
>>> Hi,
>>> 
>>> We index several large nested documents. We found that querying the data 
>>> behaves differently depending on how the documents are indexed.
>>> 
>>> To reproduce:
>>> 
>>> solr start
>>> solr create -c nested
>>> # Index one plain document, “friend" and a nested one, “mother” and 
>>> “daughter”, in same request:
>>> curl localhost:8983/solr/nested/update -d '
>>> <add>
>>>   <doc>
>>>     <field name="id">friend</field>
>>>     <field name="type">other</field>
>>>   </doc>
>>>   <doc>
>>>     <field name="id">mother</field>
>>>     <field name="type">parent</field>
>>>     <doc>
>>>       <field name="id">daughter</field>
>>>       <field name="type">child</field>
>>>     </doc>
>>>   </doc>
>>> </add>'
>>> 
>>> # Query for mother’s children using either child transformer or child query 
>>> parser
>>> curl 
>>> "localhost:8983/solr/a/query?q=id:mother&fl=%2A%2C%5Bchild%20parentFilter%3Dtype%3Aparent%5D"
>>> {
>>> "responseHeader":{
>>>  "zkConnected":true,
>>>  "status":0,
>>>  "QTime":4,
>>>  "params":{
>>>"q":"id:mother",
>>>"fl":"*,[child parentFilter=type:parent]"}},
>>> "response":{"numFound":1,"start":0,"docs":[
>>>{
>>>  "id":"mother",
>>>  "type":["parent"],
>>>  "_version_":1589249812802306048,
>>>  "type_str":["parent"],
>>>  "_childDocuments_":[
>>>  {
>>>"id":"friend",
>>>"type":["other"],
>>>"_version_":1589249812729954304,
>>>"type_str":["other"]},
>>>  {
>>>"id":"daughter",
>>>"type":["child"],
>>>"_version_":1589249812802306048,
>>>"type_str":["child"]}]}]
>>> }}
>>> 
>>> As you can see, the “friend” got included as a child of “mother”.
>>> If you index the exact same request, putting “friend” after “mother” in the 
>>> xml,
>>> the query works as expected.
>>> 
>>> Inspecting the index, everything looks correct, and only “daughter” and 
>>> “mother” have _root_=mother.
>>> Is there a rule that you should start a new update request for each type of 
>>> parent/child relationship
>>> that you need to index, and not mix them in the same request?
>>> 
>>> --
>>> Jan Høydahl, search solution architect
>>> Cominvent AS - www.cominvent.com
>>> 
>> 
> 



facet.method=uif not working in solr cloud?

2018-01-30 Thread Wei
Hi,

I am using the following parameters for faceting, requesting that Solr use
the UIF method:

facet=on&facet.field=color&q=*:*&facet.method=uif&facet.mincount=1&debug=true

It works as expected in my local standalone solr:


"facet-debug": {
  "elapse": 2,
  "sub-facet": [
    {
      "processor": "SimpleFacets",
      "elapse": 2,
      "action": "field facet",
      "maxThreads": 0,
      "sub-facet": [
        {
          "elapse": 2,
          "requestedMethod": "UIF",
          "appliedMethod": "UIF",
          "inputDocSetSize": 8191,
          "field": "color"
        }
      ]
    }
  ]
},


However, when I apply the same query to SolrCloud with multiple shards, the
appliedMethod is always FC instead of UIF:

{
  "processor": "SimpleFacets",
  "elapse": 18,
  "action": "field facet",
  "maxThreads": 0,
  "sub-facet": [
    {
      "elapse": 58,
      "requestedMethod": "UIF",
      "appliedMethod": "FC",
      "inputDocSetSize": 33487,
      "field": "color",
      "numBuckets": 238
    }
  ]
}

I also see that in standalone mode the fieldValueCache is used when UIF is
applied, but in cloud mode the fieldValueCache is always empty.  Are there
any other parameters I need to set to get UIF faceting applied in SolrCloud?
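
In case it is useful for comparison, the JSON Facet API also takes a
per-facet method hint (a sketch; whether uif survives the distributed
refinement path may depend on your version):

curl http://localhost:8983/solr/mycollection/query -d '
{
  "query": "*:*",
  "facet": { "colors": { "type": "terms", "field": "color", "method": "uif" } }
}'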

Thanks,
Wei


full name free text search problem

2018-01-30 Thread Deepak Udapudi
Hi all,

I have the below scenario in full name search that we are trying to implement.

Solr configuration :-

<fieldType name="keywords_text" class="solr.TextField">
  [analyzer, tokenizer and filter definitions stripped by the mail archiver]
</fieldType>

[field definitions stripped by the mail archiver]

Scenario :-

Solr configuration has office name, facility name and the full name as 
displayed above.
We are searching based on the input name, with the records sorted by distance.

Problem :-

I am getting the records matching the full name sorted by distance.
If an input string (for ex. Dae Kim) is provided, I am getting records other
than Dae Kim (for ex. Rodney Kim) at the top of the search results, mixed in
with the Dae Kim records. A Rodney Kim can appear just before the next Dae
Kim because Kim matches all of the fields (full name, facility name and
office name), so its hit frequency is high and its distance is smaller than
that of the next Dae Kim in the results.

Expected results :-

I want to see all the records for Dae Kim to be seen at the top of the search 
results sorted by distance without any irrelevant results.

Queries :-

What is the fix for this problem, if anyone has faced it?
How should I handle it?
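
One common approach (a sketch only, not tested against your schema; the
field names and the sfield/pt values are assumptions) is to query with
edismax, weight the full-name field far above the office and facility
fields, boost full-phrase matches, and sort by distance:

q="Dae Kim"&defType=edismax
  &qf=full_name^10 facility_name office_name
  &pf=full_name^20
  &sfield=location&pt=33.44,-112.07&sort=geodist() asc

With qf/pf weights like these, documents matching the whole name outrank
documents that only match Kim in the other fields, and the sort still
orders the full-name matches by distance.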

Any inputs would be highly appreciated.

Thanks in advance.

Regards,
Deepak




The information contained in this email message and any attachments is 
confidential and intended only for the addressee(s). If you are not an 
addressee, you may not copy or disclose the information, or act upon it, and 
you should delete it entirely from your email system. Please notify the sender 
that you received this email in error.


Sort for spatial search

2018-01-30 Thread Leila Deljkovic
Hiya,

So I have some nested documents in my index with this kind of structure:
{   
"id": "parent",
"gridcell_rpt": "POLYGON((30 10, 40 40, 20 40, 10 20, 30 10))",
"density": "30",

"_childDocuments_" : [
{
"id":"child1",
"gridcell_rpt":"MULTIPOLYGON(((30 20, 45 40, 10 40, 30 20)))",
"density":"25"
},
{
"id":"child2",
"gridcell_rpt":"MULTIPOLYGON(((15 5, 40 10, 10 20, 5 10, 15 5)))",
"density":"5"
}
]
}

The parent document is a WKT shape, and its children are “grid cells”, which 
are just divisions of the main shape (i.e. the parent shape cut up into 
children shapes). The “density” is the feature count in each shape. When I 
query (through the Solr UI) I use “Intersects” to return parents which touch 
the search area (note that if a child is touching, the parent must also be 
touching).

e.g. fq={!field f=gridcell_rpt}Intersects(POLYGON((-20 70, -50 80, -20 20, 30 60, -10 40, -20 70)))

I want to sort the results by the sum of the densities of all the children 
touching the search area (i.e. which parent has children that touch the 
search area, and how big the sum of those children’s densities is), 
something like {!parent which=is_parent:true score=total 
v='+is_parent:false +{!func}density'} desc.

The problem is that this sums over children that do NOT touch the search
area as well. How can I restrict the sum to only the shapes matched by the
first query above?
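
One idea (an untested sketch; the quoting may need tweaking) is to repeat
the Intersects clause inside the child side of the block-join, so that only
children touching the search area contribute to score=total:

q={!parent which=is_parent:true score=total v=$childq}
childq=+_query_:"{!field f=gridcell_rpt}Intersects(POLYGON((-20 70, -50 80, -20 20, 30 60, -10 40, -20 70)))" +_query_:"{!func}density"

Parameter dereferencing (v=$childq) keeps the nested local-params quoting
manageable.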

Cheers :)

CoreDescriptor not found in CoreCache

2018-01-30 Thread shefalid
I am using Solr 6.6.2. In case of an init failure in a core, the
CoreDescriptor of such a core is not present in the core cache
(TransientSolrCoreCacheDefault). Is this behavior expected?



--
Sent from: http://lucene.472066.n3.nabble.com/Solr-User-f472068.html


Re: Help with Boolean search using Solr parser edismax

2018-01-30 Thread Wendy2
Hi Emir,

Thank you for reading my post and for your reply. I updated my post with
debug info and a better view of the definition of the /search request handler.

Any suggestion on what I should try? 

Thanks,

Wendy



--
Sent from: http://lucene.472066.n3.nabble.com/Solr-User-f472068.html


Re: Help with Boolean search using Solr parser edismax

2018-01-30 Thread Wendy2
Hi Emir,

Thank you so much for your response. I updated my post with an image which
displays the configuration of the /search request handler. Any suggestions?

Thanks,

Wendy




--
Sent from: http://lucene.472066.n3.nabble.com/Solr-User-f472068.html


literal.* use in posting PDF files

2018-01-30 Thread TK Solr
My schema.xml defines two required fields, "id" and "libDocumentID". The
solrconfig.xml is the standard one.


Using curl, I tried posting a PDF file like this:

curl 
'http://localhost:8983/solr/update/extract?literal.id=foo&commit=true' 
-F "myfile=@foo.pdf"


but I got:

[doc=foo.pdf] missing required field: libDocumentID  (HTTP 400)


Can I specify more than one literal.name=value pair? Do I have to define 
literal.libDocumentID in solrconfig.xml?
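
For what it's worth, the extract handler accepts multiple literal.*
parameters in one request; something along these lines (a sketch; the
libDocumentID value is made up) should satisfy the required field:

curl 'http://localhost:8983/solr/update/extract?literal.id=foo&literal.libDocumentID=LIB-001&commit=true' -F "myfile=@foo.pdf"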


I'm using Solr 5.3.1 (please don't ask...).

TK




RE: SOLR 7.1 queries not including empty fields in results

2018-01-30 Thread Hodder, Rick
Hi Chris,

:Are you still using the same solrconfig.xml you had in 4.10, or did you switch 
to using a newer sample/default set (or in some other way
modified) solrconfig.xml?

:I ask because even if you are using the ClassicIndexSchemaFactory, your update 
processor chain might be using TrimFieldUpdateProcessorFactory and/or 
RemoveBlankFieldUpdateProcessorFactory ?

Right on the money.

I started with a 7.1 solrconfig.xml and slowly moved settings over from 4.10,
so my solrconfig.xml had RemoveBlankFieldUpdateProcessorFactory configured in
its update processor chain. I turned that off and now everything works as it
did under 4.10 (better, even).
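
For anyone else hitting this: in the 7.x default configset the relevant
definitions look roughly like the lines below (quoted from memory; check
your own config for the exact chain they are wired into):

<updateProcessor class="solr.TrimFieldUpdateProcessorFactory" name="trim-fields"/>
<updateProcessor class="solr.RemoveBlankFieldUpdateProcessorFactory" name="remove-blank"/>

Removing remove-blank (or the chain's reference to it) restores the 4.10
behavior of keeping empty stored fields.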

Thanks!
Rick

-Original Message-
From: Chris Hostetter [mailto:hossman_luc...@fucit.org] 
Sent: Wednesday, January 24, 2018 6:18 PM
To: solr-user@lucene.apache.org
Subject: Re: SOLR 7.1 queries not including empty fields in results



:Are you still using the same solrconfig.xml you had in 4.10, or did you switch 
to using a newer sample/default set (or in some other way
modified) solrconfig.xml?

:I ask because even if you are using the ClassicIndexSchemaFactory, your update 
processor chain might be using TrimFieldUpdateProcessorFactory and/or 
RemoveBlankFieldUpdateProcessorFactory ?




Searching for an efficient and scalable way to filter query results using non-indexed and dynamic range values

2018-01-30 Thread Luigi Caiazza
Hello,

I am working on a project that simulates selective, large-scale crawling.
The system adapts its behaviour according to external user queries received
at crawling time. Briefly, it analyzes the already-crawled pages in the
top-k results for each query, and prioritizes the visits to the discovered
links accordingly. In a generic experiment, I measure time as the number of
crawling cycles completed so far, i.e., as an integer value. Finally, I
evaluate the experiment by analyzing the documents fetched over the crawling
cycles. In this work I am using Lucene 7.2.1, but this should not matter
since I just need some conceptual help.

In my current implementation, an experiment starts with an empty index.
When a Web page is fetched during crawling cycle *x*, the system builds a
document with the URL as a StringField, the title and the body as
TextFields, and *x* as an IntPoint. When I get an external user query, I
submit it to get the top-k relevant documents crawled so far. When I need to
retrieve the documents indexed from cycle *i* to cycle *j*, I execute a
range query over this last IntPoint field. This strategy does the job, but
the write operations take some hours overall for a single experiment, even
if I crawl just half a million Web pages.
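
Concretely, the per-cycle part of that scheme is one extra field and a range
query (a sketch against Lucene 7.x, using the fields described above):

import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.IntPoint;
import org.apache.lucene.document.StringField;
import org.apache.lucene.document.TextField;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.search.Query;

class CrawlIndexing {
  // Index one fetched page, tagged with the cycle in which it was crawled.
  static void addPage(IndexWriter writer, String url, String title,
                      String body, int cycle) throws Exception {
    Document doc = new Document();
    doc.add(new StringField("url", url, Field.Store.YES));
    doc.add(new TextField("title", title, Field.Store.NO));
    doc.add(new TextField("body", body, Field.Store.NO));
    doc.add(new IntPoint("cycle", cycle));
    writer.addDocument(doc);
  }

  // Match documents indexed from cycle i to cycle j, inclusive.
  static Query crawledBetween(int i, int j) {
    return IntPoint.newRangeQuery("cycle", i, j);
  }
}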

Since I am not crawling real-time data, but working over a static set of
many billions of Web pages (whose contents are already stored on disk), I am
investigating ways to reduce the number of writes during an experiment. For
instance, I could avoid indexing everything from scratch for each run: I
would be happy to index all the static content of my dataset (i.e., URL,
title and body of each Web page) once and for all. Then, for a single
experiment, I would mark a document as crawled at cycle *x* without storing
this information permanently, so that I could both filter out documents that
have not yet been crawled in the current simulation when processing the
external queries, and still perform the range queries at evaluation time. Do
you have any idea how to do that?

Thank you in advance for your support.


Re: Broken Feature in Solr 6.6

2018-01-30 Thread Joel Bernstein
You're welcome.

Joel Bernstein
http://joelsolr.blogspot.com/

On Tue, Jan 30, 2018 at 11:00 AM, Antelmo Aguilar  wrote:

> Hi Joel,
>
> Thank you!  Changing the class from SearchHandler to ExportHandler worked.
> I appreciate you looking into it.
>
> -Antelmo
>
> On Tue, Jan 30, 2018 at 10:43 AM, Joel Bernstein 
> wrote:
>
> > I think the best approach is to use the /export handler. The wt=xsort I
> > believe has been removed from the system. The configuration for the
> /export
> > handler uses wt=json now.
> >
> > The configurations in the implicitPlugins.js look like this:
> >
> > "/export": {
> >   "class": "solr.ExportHandler",
> >   "useParams":"_EXPORT",
> >   "components": [
> > "query"
> >   ],
> >   "defaults": {
> > "wt": "json"
> >   },
> >   "invariants": {
> > "rq": "{!xport}",
> > "distrib": false
> >   }
> >
> >
> >
> >
> >
> >
> >
> > Joel Bernstein
> > http://joelsolr.blogspot.com/
> >
> > On Tue, Jan 30, 2018 at 8:23 AM, Antelmo Aguilar 
> wrote:
> >
> > > Hi Joel,
> > >
> > > I apologize, I should have been more specific.  We do not use the
> export
> > > handler that is defined by Solr.  We use a couple export handlers that
> we
> > > defined using the convention explained in the ticket that implemented
> the
> > > feature.
> > >
> > > We did this because we have "categories" of things we export so there
> are
> > > additional invariants for each category so we do not have to worry
> about
> > > them when constructing the query.
> > >
> > > It seems that with version 6.6, these custom export handlers do not
> work
> > > anymore.
> > >
> > > Best,
> > > Antelmo
> > >
> > >
> > > On Jan 29, 2018 7:37 PM, "Joel Bernstein"  wrote:
> > >
> > > There was a change in the configs between 6.1 and 6.6. If you upgraded
> > you
> > > system and kept the old configs then the /export handler won't work
> > > properly. Check solrconfig.xml and remove any reference to the /export
> > > handler. You also don't need to specify the rq or wt when you access
> the
> > > /export handler anymore. This should work fine:
> > >
> > > http://host:port/solr/collection/export?q=*:*&fl=exp_id_s&sort=exp_id_s+asc
> > >
> > > Joel Bernstein
> > > http://joelsolr.blogspot.com/
> > >
> > > On Mon, Jan 29, 2018 at 4:59 PM, Antelmo Aguilar 
> > wrote:
> > >
> > > > Hi All,
> > > >
> > > > I was using this feature in Solr 6.1:
> > > > https://issues.apache.org/jira/browse/SOLR-5244
> > > >
> > > > It seems that this feature is broken in Solr 6.6.  If I do this query
> > in
> > > > Solr 6.1, it works as expected.
> > > >
> > > > q=*:*&fl=exp_id_s&rq={!xport}&wt=xsort&sort=exp_id_s+asc
> > > >
> > > > However, doing the same query in Solr 6.6 does not return all the
> > > results.
> > > > It just returns 10 results.
> > > >
> > > > Also, it seems that the wt=xsort parameter does not do anything since
> > it
> > > > returns the results in xml format.  In 6.1 it returned the results in
> > > > JSON.  I asked same question in the IRC channel and they told me that
> > it
> > > is
> > > > supposed to still work the same way.  Had to leave so hopefully
> someone
> > > can
> > > > help me out through e-mail.  I would really appreciate it.
> > > >
> > > > Thank you,
> > > > Antelmo
> > > >
> > >
> >
>


Re: Computing record score depending on its association with other records

2018-01-30 Thread Alessandro Benedetti
According to what I understood, the feature weights are present in your
second collection. You should express the feature weights in the model
resource (not in a collection at all).
Is it actually necessary for the feature weights to be in a separate Solr
collection?



-
---
Alessandro Benedetti
Search Consultant, R&D Software Engineer, Director
Sease Ltd. - www.sease.io
--
Sent from: http://lucene.472066.n3.nabble.com/Solr-User-f472068.html


Re: Computing record score depending on its association with other records

2018-01-30 Thread Gintautas Sulskus
Thanks, Alessandro, for your reply.

Indeed, LTR looks like what I need.

However, all of the LTR examples that I have found use a single collection
as the data source.
My data spans across two collections. Does LTR support this somehow or
should I 'denormalise' the data and merge both collections?
My concern is that the denormalisation will lead to a significant increase
in size on disk.

Best,
Gintas


On Tue, Jan 30, 2018 at 2:30 PM, Alessandro Benedetti 
wrote:

> Hi Ginsul,
> let's try to wrap it up:
>
> 1) you have an item with N binary features (given the fact that you
> represent the document with a list of feature ids, and no values, I would
> assume that when a feature is in the list, it has a value of 1 for the
> item)
>
> 2) you want to score (or maybe re-rank?) your documents using the score
> you defined
>
> You could solve this problem with a number of possible customizations.
> Starting from an easy one, you could try to use the LTR re-ranker [1].
>
> Specifically, you can define your set of features (that should be
> possible using the component out of the box) and then a linear model (you
> already have the weights for the features, so you don't need to train it).
>
> This can be close to what you want, but you may want to customize a bit
> (given the fact that you may want to average the weights).
> For example, you could define an extension of the linear model that
> averages the scores, etc.
>
>
> [1] https://lucene.apache.org/solr/guide/6_6/learning-to-rank.html
>
>
>
>
> -
> ---
> Alessandro Benedetti
> Search Consultant, R&D Software Engineer, Director
> Sease Ltd. - www.sease.io
> --
> Sent from: http://lucene.472066.n3.nabble.com/Solr-User-f472068.html
>


Distributed search cross cluster

2018-01-30 Thread Jan Høydahl
Hi,

A customer has 10 separate SolrCloud clusters, with same schema across all, but 
different content.
Now they want users in each location to be able to federate a search across all 
locations.
Each location is 100% independent, with separate ZK etc. Bandwidth and latency 
between the
clusters is not an issue, they are actually in the same physical datacenter.

Now my first thought was using a custom shards parameter, and let the 
receiving node fan
out to all shards of all clusters. We’d need to contact the ZK for each 
environment and find
all shards and replicas participating in the collection and then construct the 
shards=A1|A2,B1|B2…
string which would be quite big, but if we get it right, it should “just work".

Now, my question is whether there are other smarter ways that would leave it up 
to existing Solr
logic to select shards and load balance, that would also take into account any 
shard.keys/_route_
info etc. I thought of these
  * collection=collA,collB  — but it only supports collections local to one 
cloud
  * Create a collection ALIAS to point to all 10 — but same here, only local to 
one cluster
  * Streaming expression top(merge(search(q=,zkHost=blabla))) — but we want it 
with pure search API
  * Write a custom ShardHandler plugin that knows about all clusters — but this 
is complex stuff :)
  * Write a custom SearchComponent plugin that knows about all clusters and 
adds the shards= param

Another approach would be for the originating cluster to fan out just ONE 
request to each of the other
clusters and then write some SearchComponent to merge those responses. That 
would let us query
the other clusters using one LB IP address instead of requiring full visibility 
to all solr nodes
of all clusters, but if we don’t need that isolation, that extra merge code 
seems fairly complex.

So far I opt for the custom SearchComponent and shards= param approach. Any 
useful input from
someone who tried a similar approach would be priceless!

--
Jan Høydahl, search solution architect
Cominvent AS - www.cominvent.com



Re: Mixing simple and nested docs in same update?

2018-01-30 Thread Jan Høydahl
Pasting the GIST link :-) 
https://gist.github.com/45640fe3bad696d53ef8a0930a35d163 

Anyone knows if this is expected behavior?

--
Jan Høydahl, search solution architect
Cominvent AS - www.cominvent.com

> On 15 Jan 2018, at 14:08, Jan Høydahl wrote:
> 
> Radio silence…
> 
> Here is a GIST for easy reproduction. Is this by design?
> 
> --
> Jan Høydahl, search solution architect
> Cominvent AS - www.cominvent.com
> 
>> On 11 Jan 2018, at 00:42, Jan Høydahl wrote:
>> 
>> Hi,
>> 
>> We index several large nested documents. We found that querying the data 
>> behaves differently depending on how the documents are indexed.
>> 
>> To reproduce:
>> 
>> solr start
>> solr create -c nested
>> # Index one plain document, “friend" and a nested one, “mother” and 
>> “daughter”, in same request:
>> curl localhost:8983/solr/nested/update -d '
>> <add>
>>   <doc>
>>     <field name="id">friend</field>
>>     <field name="type">other</field>
>>   </doc>
>>   <doc>
>>     <field name="id">mother</field>
>>     <field name="type">parent</field>
>>     <doc>
>>       <field name="id">daughter</field>
>>       <field name="type">child</field>
>>     </doc>
>>   </doc>
>> </add>'
>> 
>> # Query for mother’s children using either child transformer or child query 
>> parser
>> curl 
>> "localhost:8983/solr/a/query?q=id:mother&fl=%2A%2C%5Bchild%20parentFilter%3Dtype%3Aparent%5D"
>> {
>> "responseHeader":{
>>   "zkConnected":true,
>>   "status":0,
>>   "QTime":4,
>>   "params":{
>> "q":"id:mother",
>> "fl":"*,[child parentFilter=type:parent]"}},
>> "response":{"numFound":1,"start":0,"docs":[
>> {
>>   "id":"mother",
>>   "type":["parent"],
>>   "_version_":1589249812802306048,
>>   "type_str":["parent"],
>>   "_childDocuments_":[
>>   {
>> "id":"friend",
>> "type":["other"],
>> "_version_":1589249812729954304,
>> "type_str":["other"]},
>>   {
>> "id":"daughter",
>> "type":["child"],
>> "_version_":1589249812802306048,
>> "type_str":["child"]}]}]
>> }}
>> 
>> As you can see, the “friend” got included as a child of “mother”.
>> If you index the exact same request, putting “friend” after “mother” in the 
>> xml,
>> the query works as expected.
>> 
>> Inspecting the index, everything looks correct, and only “daughter” and 
>> “mother” have _root_=mother.
>> Is there a rule that you should start a new update request for each type of 
>> parent/child relationship
>> that you need to index, and not mix them in the same request?
>> 
>> --
>> Jan Høydahl, search solution architect
>> Cominvent AS - www.cominvent.com
>> 
> 



Re: Broken Feature in Solr 6.6

2018-01-30 Thread Antelmo Aguilar
Hi Joel,

Thank you!  Changing the class from SearchHandler to ExportHandler worked.
I appreciate you looking into it.
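
For the archives, a custom export handler with an extra invariant then looks
something like this in solrconfig.xml (a sketch; the handler name and the
category fq are illustrative):

<requestHandler name="/export-experiments" class="solr.ExportHandler" useParams="_EXPORT">
  <lst name="defaults">
    <str name="wt">json</str>
  </lst>
  <lst name="invariants">
    <str name="rq">{!xport}</str>
    <bool name="distrib">false</bool>
    <str name="fq">category:experiment</str>
  </lst>
  <arr name="components">
    <str>query</str>
  </arr>
</requestHandler>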

-Antelmo

On Tue, Jan 30, 2018 at 10:43 AM, Joel Bernstein  wrote:

> I think the best approach is to use the /export handler. The wt=xsort I
> believe has been removed from the system. The configuration for the /export
> handler uses wt=json now.
>
> The configurations in the implicitPlugins.js look like this:
>
> "/export": {
>   "class": "solr.ExportHandler",
>   "useParams":"_EXPORT",
>   "components": [
> "query"
>   ],
>   "defaults": {
> "wt": "json"
>   },
>   "invariants": {
> "rq": "{!xport}",
> "distrib": false
>   }
>
>
>
>
>
>
>
> Joel Bernstein
> http://joelsolr.blogspot.com/
>
> On Tue, Jan 30, 2018 at 8:23 AM, Antelmo Aguilar  wrote:
>
> > Hi Joel,
> >
> > I apologize, I should have been more specific.  We do not use the export
> > handler that is defined by Solr.  We use a couple export handlers that we
> > defined using the convention explained in the ticket that implemented the
> > feature.
> >
> > We did this because we have "categories" of things we export so there are
> > additional invariants for each category so we do not have to worry about
> > them when constructing the query.
> >
> > It seems that with version 6.6, these custom export handlers do not work
> > anymore.
> >
> > Best,
> > Antelmo
> >
> >
> > On Jan 29, 2018 7:37 PM, "Joel Bernstein"  wrote:
> >
> > There was a change in the configs between 6.1 and 6.6. If you upgraded
> you
> > system and kept the old configs then the /export handler won't work
> > properly. Check solrconfig.xml and remove any reference to the /export
> > handler. You also don't need to specify the rq or wt when you access the
> > /export handler anymore. This should work fine:
> >
> > http://host:port/solr/collection/export?q=*:*&fl=exp_id_s&sort=exp_id_s+asc
> >
> > Joel Bernstein
> > http://joelsolr.blogspot.com/
> >
> > On Mon, Jan 29, 2018 at 4:59 PM, Antelmo Aguilar 
> wrote:
> >
> > > Hi All,
> > >
> > > I was using this feature in Solr 6.1:
> > > https://issues.apache.org/jira/browse/SOLR-5244
> > >
> > > It seems that this feature is broken in Solr 6.6.  If I do this query
> in
> > > Solr 6.1, it works as expected.
> > >
> > > q=*:*&fl=exp_id_s&rq={!xport}&wt=xsort&sort=exp_id_s+asc
> > >
> > > However, doing the same query in Solr 6.6 does not return all the
> > results.
> > > It just returns 10 results.
> > >
> > > Also, it seems that the wt=xsort parameter does not do anything since
> it
> > > returns the results in xml format.  In 6.1 it returned the results in
> > > JSON.  I asked same question in the IRC channel and they told me that
> it
> > is
> > > supposed to still work the same way.  Had to leave so hopefully someone
> > can
> > > help me out through e-mail.  I would really appreciate it.
> > >
> > > Thank you,
> > > Antelmo
> > >
> >
>


Re: Broken Feature in Solr 6.6

2018-01-30 Thread Joel Bernstein
I think the best approach is to use the /export handler. The wt=xsort I
believe has been removed from the system. The configuration for the /export
handler uses wt=json now.

The configurations in the implicitPlugins.js look like this:

"/export": {
  "class": "solr.ExportHandler",
  "useParams":"_EXPORT",
  "components": [
"query"
  ],
  "defaults": {
"wt": "json"
  },
  "invariants": {
"rq": "{!xport}",
"distrib": false
  }
}







Joel Bernstein
http://joelsolr.blogspot.com/

On Tue, Jan 30, 2018 at 8:23 AM, Antelmo Aguilar  wrote:

> Hi Joel,
>
> I apologize, I should have been more specific.  We do not use the export
> handler that is defined by Solr.  We use a couple export handlers that we
> defined using the convention explained in the ticket that implemented the
> feature.
>
> We did this because we have "categories" of things we export so there are
> additional invariants for each category so we do not have to worry about
> them when constructing the query.
>
> It seems that with version 6.6, these custom export handlers do not work
> anymore.
>
> Best,
> Antelmo
>
>
> On Jan 29, 2018 7:37 PM, "Joel Bernstein"  wrote:
>
> There was a change in the configs between 6.1 and 6.6. If you upgraded you
> system and kept the old configs then the /export handler won't work
> properly. Check solrconfig.xml and remove any reference to the /export
> handler. You also don't need to specify the rq or wt when you access the
> /export handler anymore. This should work fine:
>
> http://host:port/solr/collection/export?q=*:*&fl=exp_id_s&sort=exp_id_s+asc
>
> Joel Bernstein
> http://joelsolr.blogspot.com/
>
> On Mon, Jan 29, 2018 at 4:59 PM, Antelmo Aguilar  wrote:
>
> > Hi All,
> >
> > I was using this feature in Solr 6.1:
> > https://issues.apache.org/jira/browse/SOLR-5244
> >
> > It seems that this feature is broken in Solr 6.6.  If I do this query in
> > Solr 6.1, it works as expected.
> >
> > q=*:*&fl=exp_id_s&rq={!xport}&wt=xsort&sort=exp_id_s+asc
> >
> > However, doing the same query in Solr 6.6 does not return all the
> results.
> > It just returns 10 results.
> >
> > Also, it seems that the wt=xsort parameter does not do anything since it
> > returns the results in xml format.  In 6.1 it returned the results in
> > JSON.  I asked same question in the IRC channel and they told me that it
> is
> > supposed to still work the same way.  Had to leave so hopefully someone
> can
> > help me out through e-mail.  I would really appreciate it.
> >
> > Thank you,
> > Antelmo
> >
>


Re: Perform incremental import with PDF Files

2018-01-30 Thread Emir Arnautović
Hi Karan,
clean=false will not delete existing documents in the index, but if you
reimport documents with the same ID they will be overwritten. If you see the
same doc with an updated timestamp, it probably means that you did a
full-import of docs with the same file name.
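
In other words, an incremental run is just a full-import with clean
disabled; something like this (the core name is a placeholder):

curl 'http://localhost:8983/solr/mycore/dataimport?command=full-import&clean=false&commit=true'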

HTH,
Emir
--
Monitoring - Log Management - Alerting - Anomaly Detection
Solr & Elasticsearch Consulting Support Training - http://sematext.com/



> On 30 Jan 2018, at 08:34, Karan Saini  wrote:
> 
> Hi Emir,
> 
> There is one behavior I noticed while performing the incremental import. I
> added a new field into managed-schema.xml to test the incremental
> nature of using clean=false.
> 
> [field definition stripped by the mail archiver; it defines xtimestamp
> with default="NOW" and multiValued="false"]
> 
> Now xtimestamp gets a new value on every DIH import, even with the
> clean=false property. So I am confused: how will I know whether
> clean=false is working or not?
> Please suggest.
> 
> Kind regards,
> Karan
> 
> 
> 
> On 29 January 2018 at 20:12, Emir Arnautović 
> wrote:
> 
>> Hi Karan,
>> Glad it worked for you.
>> 
>> I am not sure how to do it in C# client, but adding clean=false parameter
>> in URL should do the trick.
>> 
>> Thanks,
>> Emir
>> --
>> Monitoring - Log Management - Alerting - Anomaly Detection
>> Solr & Elasticsearch Consulting Support Training - http://sematext.com/
>> 
>> 
>> 
>>> On 29 Jan 2018, at 14:48, Karan Saini  wrote:
>>> 
>>> Thanks Emir :-) . Setting the property *clean=false* worked for me.
>>> 
>>> Is there a way, i can selectively clean the particular index from the
>>> C#.NET code using the SolrNet API ?
>>> Please suggest.
>>> 
>>> Kind regards,
>>> Karan
>>> 
>>> 
>>> On 29 January 2018 at 16:49, Emir Arnautović <
>> emir.arnauto...@sematext.com>
>>> wrote:
>>> 
 Hi Karan,
 Did you try running full import with clean=false?
 
 Emir
 --
 Monitoring - Log Management - Alerting - Anomaly Detection
 Solr & Elasticsearch Consulting Support Training - http://sematext.com/
 
 
 
> On 29 Jan 2018, at 11:18, Karan Saini  wrote:
> 
> Hi folks,
> 
> Please suggest a solution for importing and indexing PDF files
> *incrementally*. My requirement is to pull the PDF files remotely from the
> network folder path. This network folder will receive new sets of PDF
> files at certain intervals (say every 20 secs). The folder is forcibly
> emptied every time a new set of PDF files is copied into it. I do not
> want to lose the earlier saved index of the old files while doing the
> next incremental import.
> 
> Currently, I am using Solr 6.6 for this research.
> 
> The dataimport handler config is currently like this:
>
> [data-config markup stripped by the mail archiver; the surviving
> attributes show a file-listing entity with dataSource="null",
> recursive="true", baseDir="\\CLDSINGH02\RemoteFileDepot",
> fileName=".*pdf" and rootEntity="false", a field named "lastmodified",
> and a nested content entity with onError="skip",
> url="${K2FileEntity.fileAbsolutePath}", format="text" and two
> meta="true" fields]
> Kind regards,
> Karan Singh
 
 
>> 
>> 



Re: Help with Boolean search using Solr parser edismax

2018-01-30 Thread Emir Arnautović
Hi Wendy,
It is most likely that you need to list the fields that can appear in the
query using the uf parameter. The best way to see what is going on is to use
debugQuery; it shows in detail how your query is parsed.
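
For example (field names are placeholders), a query with explicit field
names needs those fields allowed by uf:

q=(title:solr OR author:wendy)&defType=edismax&uf=title author&debugQuery=on

The parsedquery section of the debug output then shows whether each clause
was parsed against the intended field or fell through to the default field.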

HTH,
Emir
--
Monitoring - Log Management - Alerting - Anomaly Detection
Solr & Elasticsearch Consulting Support Training - http://sematext.com/



> On 29 Jan 2018, at 20:56, Wendy2  wrote:
> 
> Hi Solr users,
> 
> I am having an issue with boolean search using the Solr edismax parser:
> the search "OR" doesn't work. The image below shows the different results
> tested on different Solr versions. There are two types of search request
> handlers, /select vs /search. The /select handler uses the default Lucene
> parser, while /search uses the Solr edismax parser. I also listed the
> /search request handler below. I am expecting a result count of 997
> (844 + 153), but I only get the correct count via the default /select
> request handler on Solr v5.3.0 and 6.2.0. If I go back to using the old
> default Lucene parser via the /select request handler, I lose all the nice
> customized ranking and sorting :-( Does anyone know a workaround/solution
> to fix this type of search issue? THANKS!
> 
> Part of the /search request handler in solrconfig.xml
> [markup stripped by the mail archiver; the surviving values include
> defType=edismax, a qf of pdb_id^5.0 struct.title^35.0 citation.title^25.0
> title_fields_stem^3.0 ... rest_fields_stem^0.3, a sort of score
> desc,release_date desc,pdb_id desc, df=text, and the numbers 7 and 100
> whose parameter names were lost]
> 
> 
> 
> --
> Sent from: http://lucene.472066.n3.nabble.com/Solr-User-f472068.html



Re: Computing record score depending on its association with other records

2018-01-30 Thread Alessandro Benedetti
Hi Ginsul,
let's try to wrap it up:

1) you have an item with N binary features (given the fact that you
represent the document with a list of feature ids, and no values, I would
assume that when a feature is in the list, it has a value of 1 for the
item)

2) you want to score (or maybe re-rank?) your documents using the score you
defined

You could solve this problem with a number of possible customizations.
Starting from an easy one, you could try to use the LTR re-ranker [1].

Specifically, you can define your set of features (that should be possible
using the component out of the box) and then a linear model (you already
have the weights for the features, so you don't need to train it).

This can be close to what you want, but you may want to customize a bit
(given the fact that you may want to average the weights).
For example, you could define an extension of the linear model that averages
the scores, etc.


[1] https://lucene.apache.org/solr/guide/6_6/learning-to-rank.html
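
For reference, a linear model with fixed weights is just a JSON resource
uploaded to the LTR model store, along these lines (a sketch; the feature
names and weights are placeholders):

{
  "class": "org.apache.solr.ltr.model.LinearModel",
  "name": "myLinearModel",
  "features": [
    { "name": "f1" },
    { "name": "f2" }
  ],
  "params": {
    "weights": { "f1": 2.0, "f2": -1.0 }
  }
}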




-
---
Alessandro Benedetti
Search Consultant, R&D Software Engineer, Director
Sease Ltd. - www.sease.io
--
Sent from: http://lucene.472066.n3.nabble.com/Solr-User-f472068.html


Computing record score depending on its association with other records

2018-01-30 Thread Gintautas Sulskus
Hi,

I have two collections. The first collection 'items' stores associations
between items and their features. The second collection 'features' stores
importance score for each feature.

   items:    item_id    - one-to-many - feature_id
   features: feature_id - one-to-one  - importance_score_int

The following describes a simplified scenario of what I would like to
achieve using Solr (6.5) queries and/or Streaming Expressions.

I would like to select the first two items from the 'items' collection
and rank them by their features' importance score.

Suppose we have two items i1 and i2. The first item has two features f1 and
f2 and the second item i2 has only one feature f1:
i1, f1
i1, f2
i2, f1

The score is computed by a function f(...) that simply returns the average
of the feature importance scores. Provided the scores are as stated below,
i2 would be ranked first with a score of 2/2 = 1 and i1 would come second
with a score of (2 + (-1))/2 = 0.5:
f1 - 2
f2 - (-1)

The natural flow would be to gather features for each item, compute the
average of their scores and then associate that average with a
corresponding item id.

Any pointers are very much welcome!

Thanks,
Gintas


Re: Broken Feature in Solr 6.6

2018-01-30 Thread Antelmo Aguilar
Hi Joel,

I apologize, I should have been more specific.  We do not use the export
handler that is defined by Solr.  We use a couple export handlers that we
defined using the convention explained in the ticket that implemented the
feature.

We did this because we have "categories" of things we export so there are
additional invariants for each category so we do not have to worry about
them when constructing the query.

It seems that with version 6.6, these custom export handlers do not work
anymore.

Best,
Antelmo


On Jan 29, 2018 7:37 PM, "Joel Bernstein"  wrote:

There was a change in the configs between 6.1 and 6.6. If you upgraded you
system and kept the old configs then the /export handler won't work
properly. Check solrconfig.xml and remove any reference to the /export
handler. You also don't need to specify the rq or wt when you access the
/export handler anymore. This should work fine:

http://host:port/solr/collection/export?q=*:*=exp_id_s=exp_id_s+asc

Joel Bernstein
http://joelsolr.blogspot.com/

On Mon, Jan 29, 2018 at 4:59 PM, Antelmo Aguilar  wrote:

> Hi All,
>
> I was using this feature in Solr 6.1:
> https://issues.apache.org/jira/browse/SOLR-5244
>
> It seems that this feature is broken in Solr 6.6.  If I do this query in
> Solr 6.1, it works as expected.
>
> q=*:*=exp_id_s={!xport}=xsort=exp_id_s+asc
>
> However, doing the same query in Solr 6.6 does not return all the results.
> It just returns 10 results.
>
> Also, it seems that the wt=xsort parameter does not do anything since it
> returns the results in xml format.  In 6.1 it returned the results in
> JSON.  I asked same question in the IRC channel and they told me that it
is
> supposed to still work the same way.  Had to leave so hopefully someone
can
> help me out through e-mail.  I would really appreciate it.
>
> Thank you,
> Antelmo
>


Re: Custom Solr function

2018-01-30 Thread Atita Arora
Hope this helps -
https://dzone.com/articles/how-write-custom-solr
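
The short version: subclass ValueSourceParser, return a ValueSource, and
register it in solrconfig.xml. A minimal sketch (the class name and the toy
scoring logic are placeholders):

import org.apache.lucene.queries.function.ValueSource;
import org.apache.lucene.queries.function.valuesource.ConstValueSource;
import org.apache.solr.search.FunctionQParser;
import org.apache.solr.search.SyntaxError;
import org.apache.solr.search.ValueSourceParser;

// Registered in solrconfig.xml as:
//   <valueSourceParser name="func" class="com.example.MyFuncParser"/>
// Then used as: sort=func("user-entered text") desc
public class MyFuncParser extends ValueSourceParser {
  @Override
  public ValueSource parse(FunctionQParser fp) throws SyntaxError {
    String userText = fp.parseArg(); // the user-entered text
    // Toy logic: score every document by the text length. Real per-document
    // logic would return a custom ValueSource instead of a constant.
    return new ConstValueSource(userText == null ? 0f : userText.length());
  }
}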

On Tue, Jan 30, 2018 at 2:06 PM, LOPEZ-CORTES Mariano-ext <
mariano.lopez-cortes-...@pole-emploi.fr> wrote:

> Can we create a custom function in Java?
>
> Example :
>
> sort = func([USER-ENTERED TEXT]) desc
>
> func returns a numeric value
>
> Thanks in advance
>


Re: Solr 4.8.1 multiple client updates the same collection

2018-01-30 Thread Alessandro Benedetti
"At last, please let me ask another question, is it true that after every 
commit, even if I had only updated one document, the SolrCloud cache is 
invalidated (i.e. Solr must open a new searcher)? 
Because this what the second clients does, updating a document at time and 
commit. 
In other words, how is good/bad having multiple hard commit in a short time 
(few seconds)? "

Your affirmation is correct: when you open a new searcher, caches are
invalidated and possibly warmed up again (if you configured that).
This is valid for both hard and soft commits that open a new searcher;
to get visibility of new documents, you need to open a new searcher.

Given that, I would definitely not recommend one commit per document (even a
soft commit, which is lighter than a hard one, is not free).
The overhead will be considerable, especially if you update several
documents per second.
I would go with auto hard and soft commits on the updater client as well.
You can set up a timing between commits that is compatible with the maximum
latency you can accept for updates to show up.
Solr supports Near Real Time search (through soft commits), and it is
definitely possible to tune it for "seconds" updates, but I always recommend
starting from the most acceptable latency in updates and then reducing it if
necessary.
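
As a starting point, a typical solrconfig.xml setup along those lines (the
numbers are examples; tune them to your latency budget):

<autoCommit>
  <maxTime>60000</maxTime>          <!-- hard commit every 60s, durability -->
  <openSearcher>false</openSearcher>
</autoCommit>
<autoSoftCommit>
  <maxTime>5000</maxTime>           <!-- soft commit every 5s, visibility -->
</autoSoftCommit>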

I will mention again a very valid blog from Erick, which explains in detail
the different type of commits :

https://lucidworks.com/2013/08/23/understanding-transaction-logs-softcommit-and-commit-in-sorlcloud/



-
---
Alessandro Benedetti
Search Consultant, R&D Software Engineer, Director
Sease Ltd. - www.sease.io
--
Sent from: http://lucene.472066.n3.nabble.com/Solr-User-f472068.html


Re: Using Solr with SharePoint Online

2018-01-30 Thread Charlie Hull

On 30/01/2018 07:57, Mohammed.Adnan2 wrote:

Hello Team,

I am a beginner learning Apache Solr. I am trying to check the compatibility of 
Solr with SharePoint Online, but I am not finding anything concrete about this 
in the website documentation. Can you please provide some information on this? 
How can I index my SharePoint content with Solr and then use Solr on my 
SharePoint sites? I really appreciate your help.

Thanks,
Adnan


Hi Adnan,

There are various things you need to consider:
1. Why do you need Solr at all - Sharepoint Online has its own built-in 
search engine.
2. Installing Solr on a Windows server with access to Sharepoint Online 
shouldn't be a huge problem; Solr is a Java application, so you'll also 
need Java installed. You might want to run Solr as a Windows service so 
it's always there in the background - look up NSSM (example after this list).
3. You need a way to get the content out of Sharepoint and into Solr. 
The best way to do this will be to crawl the Sharepoint site. There are 
some commercially available connectors from BA Insight and Lucidworks or 
you'll have to roll your own. This https://github.com/golincode/SPOC 
might be a good starting point. If you go this route you'll certainly 
need to condition the data before you index it with Solr, so you'll have 
to understand how Solr schemas, analyzers etc. work.
4. Then you'll need a UI to talk to Solr to carry out queries - if this 
is to live within the Sharepoint world you'll need to write a web 
application compatible with Sharepoint.
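
On the NSSM point in (2), installing Solr as a service is roughly (a sketch;
the paths and flags are assumptions to verify against your install):

nssm install Solr "C:\solr\bin\solr.cmd" "start -f -p 8983"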


HTH,

Charlie

--
Charlie Hull
Flax - Open Source Enterprise Search

tel/fax: +44 (0)8700 118334
mobile:  +44 (0)7767 825828
web: www.flax.co.uk


AW: Build suggester in different directory (not /tmp).

2018-01-30 Thread Clemens Wyss DEV
> I almost guarantee that buildOnCommit will be unsatisfactory
If not "on commit", when should suggestions/spellcheck dictionaries be updated? And how? 

Spellchecking/suggestions in Solr:  
what are the best (up-to-date) sources/links for spellchecking and suggestions?

-Ursprüngliche Nachricht-
Von: Erick Erickson [mailto:erickerick...@gmail.com] 
Gesendet: Mittwoch, 20. Dezember 2017 19:09
An: solr-user 
Betreff: Re: Build suggester in different directory (not /tmp).

bq: this means I will need to set buildOnCommit and buildOnStartup to false.

Be _very_ careful with these settings. Building your suggester can read the 
stored field(s) from _every_ document in your index, which can take a very 
long time (perhaps hours). You'd pay that penalty every time you started 
Solr or committed docs. I almost guarantee that buildOnCommit will be 
unsatisfactory.

This is one of those things that works fine for testing a small corpus but can 
fall over when you scale up.

As for why the suggester gets built in /tmp, perhaps Mike McCandless has magic 
to control that, nice find and thanks for sharing it!
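
One thing worth trying (a guess: the suggester build goes through Lucene's
offline sorter, which honors the JVM temp dir) is pointing java.io.tmpdir at
a roomier directory, e.g. in solr.in.sh:

SOLR_OPTS="$SOLR_OPTS -Djava.io.tmpdir=/var/solr/tmp"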

Best,
Erick

On Wed, Dec 20, 2017 at 9:27 AM, Matthew Roth  wrote:
> I have an incomplete solution. I was trying to build three suggesters 
> at once. If I added the ?suggest.dictionary= parameter and built 
> one at a time it worked out fine. However, this means I will need to 
> set buildOnCommit and buildOnStartup to false. This is less than ideal.
> Building in a different directory would still be preferable.
>
>
> Best,
> Matt
>
> On Wed, Dec 20, 2017 at 12:05 PM, Matthew Roth  wrote:
>
>> Hi List,
>>
>> I am building a few suggesters and I am receiving the error that I 
>> have no space left on device.
>>
>>
>> 
>> No space left on device 
>> java.io.IOException: No space left on device at 
>> sun.nio.ch.FileDispatcherImpl.write0(Native Method) at ...
>>
>>
>>
>> At first this threw me: df showed I had over 100 G free, and the /data 
>> dir the suggester is being built from is only 4 G. On a 
>> subsequent run I noticed that the suggester is first being built in 
>> /tmp. When setting up the LVM I only allotted 2 G to that directory, 
>> and I'd prefer to keep it that way. Is there a way to build the 
>> suggesters in an alternative dir? I am not seeing anything in the 
>> documentation (https://lucene.apache.org/
>> solr/guide/6_6/suggester.html)
>>
>> I should note that I am using solr 6.6.0
>>
>> Best,
>> Matt
>>


Custom Solr function

2018-01-30 Thread LOPEZ-CORTES Mariano-ext
Can we create a custom function in Java?

Example :

sort = func([USER-ENTERED TEXT]) desc

func returns a numeric value

Thanks in advance