Re: Using multi valued field in solr cloud Graph Traversal Query

2017-04-24 Thread mganeshs
Hi Joel,

Any idea from which version multi-valued fields are supported in
gatherNodes? I am using version 6.5. Is it already there?
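
For reference, this is the kind of expression in question, with a
multi-valued field in the gather clause (the collection and field names here
are only illustrative, not from a verified example):

gatherNodes(graph,
            walk="node1->node_id",
            gather="out_edges_ss")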

Kindly update,
Ganesh





Re: Graph Visualizing tool

2017-07-22 Thread mganeshs
Tried this, but it's not working as expected.

http://solr.pl/en/2016/04/25/graph-visualization-using-solr-6/


Have any of you used this or any other tool?





Graph Visualizing tool

2017-07-22 Thread mganeshs
Hello Solr Experts,

Has anyone used a tool or plugin to visualize graph data based on
node_ids and edge_ids?

Please suggest,





Re: Graph Visualizing tool

2017-07-24 Thread mganeshs
Hi,

Thanks for the suggestion. But my CSV is based on documents that have the
node_id and the edges in the same document, whereas the tool you suggested
seems to ask for two separate entries: nodes separately and edges
separately.

My documents look like this:
node_id, in_edges_ss (multi-valued field), out_edges_ss (multi-valued
field), document name, document field1, document field2, document field3,
... document field n

Let me know whether it is possible to create a graph in that tool for this
document model.
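
For concreteness, a single document in this model might look like the
following (all values made up):

{
  "node_id": "N1",
  "in_edges_ss": ["N7", "N9"],
  "out_edges_ss": ["N2", "N3"],
  "document_name": "design-spec",
  "field1": "value1",
  "field2": "value2"
}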







Re: Allow Join over two sharded collection

2017-06-29 Thread mganeshs
Hi Erick,

Initially I also thought of using streaming for joins. But it looks like
joins with streaming are not meant for high-QPS queries, and that's my use
case. Currently things are working fine with the normal join for us, as we
have only one shard. But in the coming days the number of documents to be
indexed is going to increase drastically, so we need to split shards. The
moment I split shards, I can't use joins.

We thought of going with implicit routing for sharding. But if we go with
implicit routing, indexing will not be distributed, and so one shard could
get more load, which we don't want.
So we badly need the default join.
As I have posted in other questions on this forum, and as you too have
replied, our joins are between real documents and their ACL documents. An
ACL document has a multi-valued field whose values are users or groups. Why
do we keep the ACL separately instead of in the real document itself?
Because an ACL can grow to 100,000 (1 lakh) users or even more, and for
every change in an ACL or its permissions we don't want to re-index the
real document as well.
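
For context, the join we run today has this shape (the field names match
the schema I describe in my other threads; U1 is a sample user id):

q={!join from=P_uniqueId to=D_uniqueId}P_acl_perm:U1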

Do you think there is any better alternative? Or is the way we have kept
ACLs wrong?

Regards,





Allow Join over two sharded collection

2017-06-29 Thread mganeshs
All,

Any idea when this ticket will be addressed:

https://issues.apache.org/jira/browse/SOLR-8297

One of the comments says by Solr 7.0. Can we expect it by 7.0?

Regards,





Re: Allow Join over two sharded collection

2017-07-01 Thread mganeshs
Hi Susheel,

Currently we have around 20M documents already, and we now expect about 1M
new documents every month.
The reason we don't want to go for time-based implicit routing is that all
new documents would end up on the most recent shard, so indexing would be
heavy on the new shard while the older shards would be used only for
queries.
With default sharding, the indexing load is distributed across all the
shards. That's the reason we would like to stick to default sharding. But
the join is the issue here when default sharding is used :-(





Re: Allow Join over two sharded collection

2017-07-03 Thread mganeshs
Hi Susheel,

To make use of joins, the only option is to go for manual routing. If I go
for manual routing based on time, we lose the ability to distribute the
indexing load: all indexing would happen on the newly created shard, which
we feel is not an efficient approach and degrades indexing performance. We
have a lot of JVMs running, yet all indexing would go to one single shard,
and we are also expecting 1M+ docs per month in the coming days.

As for your question on whether we will query old documents: mostly we
won't. Given the query pattern, it's clear we should go for manual routing
and create an alias. But when it comes to indexing, in order to distribute
the load, we felt default routing is the best option; the join, however,
will not work with it. And that's the reason for asking when this feature
will be in place.
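
For reference, the manual (time-based) routing setup under discussion would
look something like this (host, collection, and config names are
illustrative only):

http://localhost:8983/solr/admin/collections?action=CREATE&name=docs_2017_07&numShards=1&router.name=implicit&shards=s2017_07&collection.configName=docs_configs

http://localhost:8983/solr/admin/collections?action=CREATEALIAS&name=docs&collections=docs_2017_06,docs_2017_07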

Regards,





Re: Graph traversel

2017-04-25 Thread mganeshs
Dear Solr experts,

Can anyone over here explain why graph traversal is not working as
expected in Solr 6.5?

It's not traversing all the child nodes. It traverses only a few nodes and
doesn't return all the mid-level and leaf nodes.

As I explained below,

For this query

http://localhost:8983/solr/graph/query?q=*:*&fq={!graph from=parent_id to=id}id:1

(which is to get all nodes reachable by traversal from node 1)

I get the result as
"docs":[
  { "id":"1" },
  { "id":"11" },
  { "id":"12" },
  { "id":"13" },
  { "id":"122" }]

Whereas I expect the result to be 1, 11, 12, 13, 121, 122, 131.

What's going wrong ? 
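
In case it helps anyone reproduce this, the opposite edge direction can
also be tried as a sanity check (just a guess at the intended traversal
direction, not a confirmed fix):

http://localhost:8983/solr/graph/query?q=*:*&fq={!graph from=id to=parent_id}id:1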

Following is the data I uploaded

[{
"id": "1",
"name": "Root document one"
},
{
"id": "2",
"name": "Root document two"
},
{
"id": "3",
"name": "Root document three"
},
{
"id": "11",
"parent_id": "1",
"name": "First level document 1, child one"
},
{
"id": "12",
"parent_id": "1",
"name": "First level document 1, child two"
},
{
"id": "13",
"parent_id": "1",
"name": "First level document 1, child three"
},
{
"id": "21",
"parent_id": "2",
"name": "First level document 2, child one"
},
{
"id": "22",
"parent_id": "2",
"name": "First level document 2, child two"
},
{
"id": "121",
"parent_id": "12",
"name": "Second level document 12, child one"
},
{
"id": "122",
"parent_id": "12",
"name": "Second level document 12, child two"
},
{
"id": "131",
"parent_id": "13",
"name": "Second level document 13, child three"
}]









Re: Solr performance on EC2 linux

2017-04-29 Thread mganeshs
We use Solr 6.2 on an EC2 instance with CentOS 6.2, and we don't see any
difference in performance between EC2 and our local environment.





Graph Query Parser

2017-05-03 Thread mganeshs
All, is anyone using the graph query parser with Solr 6+ versions? Is it
working as expected? Can you guide me to a working data model and the
configuration to set up?

I tried the sample provided over here:
https://cwiki.apache.org/confluence/display/solr/Other+Parsers#OtherParsers-GraphQueryParser

But it's not working as expected.







Using of Streaming to join between shards

2017-06-23 Thread mganeshs
Hi,

So far we had only one shard, so joins were working fine. Now, as our data
is growing, we would like to add new shards, and for various reasons we
would like to go with only the default sharding mechanism.

Due to this the join will fail, as it's not supported when we have more
than one shard.

For this reason we are planning to use streaming.

Can you suggest whether streaming can be used the way we used the join
before? Will there be any penalty w.r.t. response time and CPU utilization?

Currently we are using a simple join, which is like a one-to-one mapping
sort of join. When I move to streaming, what kind of join should I go for:
hashJoin, leftOuterJoin, innerJoin, etc.?

Please suggest,






Re: Using of Streaming to join between shards

2017-06-25 Thread mganeshs
Hi Erick,

My scenario involves two kinds of Solr documents.

Document #1 - the real document:
#D_uniqueId, #D_documentId (unique), #D_documentname, #D_documentdesc,
#D_documentinfo1, #D_documentInfo2, #D_documentInfo3, ...

Document #2 - holds the document's ACL:
#P_uniqueId, #P_acl_perm (multi-valued field; it contains user values such
as U1, U2, U3, U4, etc.)

Currently (we have only one shard as of now), with a simple join, my query
looks like {!join from=P_uniqueId to=D_uniqueId}P_acl_perm:U1

The number of ACL values per document can grow up to 1M.

Now, as the number of documents is increasing, we are planning to add one
more shard by splitting the existing shard in two.

As the join won't work with multiple shards, we are planning to use
streams.

So what would be the streaming query to replace this normal join query
({!join from=P_uniqueId to=D_uniqueId}P_acl_perm:U1)?
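
My current guess at the shape of such an expression, assuming the real
documents and the ACL documents live in collections named docs and acls (an
untested sketch, not a known-good query; the /export handler also needs the
fl fields to have docValues):

innerJoin(
  search(acls, q="P_acl_perm:U1", fl="P_uniqueId", sort="P_uniqueId asc", qt="/export"),
  search(docs, q="*:*", fl="D_uniqueId,D_documentname", sort="D_uniqueId asc", qt="/export"),
  on="P_uniqueId=D_uniqueId"
)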

Early responses would be really appreciated !

Regards,





Re: Join not working in Solr 6.5

2017-05-22 Thread mganeshs
Thanks for bringing up the performance perspective. Is there any benchmark
on join performance when the number of shards is more than 10 and documents
are indexed based on router.field?

Are you suggesting going for streaming expressions instead of router.field,
or using the join with router.field and then going for streaming
expressions? Can you spell it out, please?

Thanks,





Re: Joins using graph queries - solr 6.0

2017-05-22 Thread mganeshs
Hi, sorry that this reply is not an answer to your post, but I want to know
whether graph queries are working as expected for you. Is traversal working
fine in the graph?

I posted a question over here:
http://lucene.472066.n3.nabble.com/Graph-traversel-td4331207.html#a4331799

but got no response.

So I am just curious whether graph works for you; can you share your sample
data and the query you use to traverse the graph?

Thanks,





Re: Join not working in Solr 6.5

2017-05-22 Thread mganeshs
Is there any possibility of supporting joins across multiple shards in the
near future? How do we achieve the join when our data is spread across
multiple shards? This is very much mandatory when we need to scale out.

Are there any workarounds if an out-of-the-box option is not there?

Thanks,







Join not working in Solr 6.5

2017-05-21 Thread mganeshs
Hi,

I have the following records/documents for the parent entity:

id,type_s,P_hid_s,P_name_s,P_pid_s
11,PERSON,11,Parent1,11

And the following records/documents for the child entity:

id,type_s,C_hid_s,C_name_s,C_pid_s
12,PERSON,12,Child2,11
13,PERSON,13,Child3,11
14,PERSON,14,Child4,11

Now when I try to join and get all children of Parent1, whose id is 11:

http://localhost:8983/solr/basicns/select?indent=on&q={!join from id to C_pid_s} type_s:PERSON&wt=json


I get the following exception:

 "error":{
    "trace":"java.lang.NullPointerException
    at org.apache.solr.search.JoinQuery.hashCode(JoinQParserPlugin.java:525)
    at org.apache.solr.search.QueryResultKey.<init>(QueryResultKey.java:46)
    at org.apache.solr.search.SolrIndexSearcher.getDocListC(SolrIndexSearcher.java:1754)
    at org.apache.solr.search.SolrIndexSearcher.search(SolrIndexSearcher.java:609)
    at org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:547)
    at org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:295)
    at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:173)
    at org.apache.solr.core.SolrCore.execute(SolrCore.java:2440)
    at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:723)
    at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:529)
    at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:347)
    at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:298)
    at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1691)
    at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)
    at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
    at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
    at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
    at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
    at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)
    at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
    at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
    at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
    at org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213)
    at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
    at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
    at org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:335)
    at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
    at org.eclipse.jetty.server.Server.handle(Server.java:534)
    at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:320)
    at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251)
    at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273)
    at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95)
    at org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
    at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303)
    at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)
    at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136)
    at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
    at org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589)
    at java.lang.Thread.run(Thread.java:745)",
    "code":500}}


Is there a bug in 6.5, or is something going wrong on my side? I have used
the basic config that comes with the example and created the collection
with one shard only; I am not using multiple shards.

An early response will be very much appreciated.








Re: Join not working in Solr 6.5

2017-05-21 Thread mganeshs
Perfect!

Sorry, I overlooked it and missed the "=" signs.
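
For anyone finding this thread later, the corrected form of the query (with
the "=" signs restored in the join local params) would be along these
lines:

http://localhost:8983/solr/basicns/select?indent=on&q={!join from=id to=C_pid_s}type_s:PERSON&wt=json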

Thanks,





Data from 4.10 to 6.5.1

2017-05-26 Thread mganeshs
Hi,
 
I am planning the following for moving my old Solr index data, created on
4.10, to a new Solr server running 6.5.1. Let me know whether it will work
out or not.

* Set up Solr and the collections with version 5.5
* Copy the data folder (from the old 4.10 Solr server) to the corresponding
collection's data folder
* Optimize the collection
* Now set up the new Solr and collections with version 6.5.1
* Copy the data folder of the corresponding collections from the 5.5 server
(which got optimized) to the data folder on the 6.5.1 server

Will this suffice?

Let us know your opinions.
Early responses will be very much appreciated.






Re: Data from 4.10 to 6.5.1

2017-05-28 Thread mganeshs
Thanks for the reply. Sure, we will pay attention.

Indeed, our approach was also to use the latest managed schema and configs
only, and to add our custom schema from the old version. Luckily we have
only one shard of data (the others are replicas only), and we are not using
any field types (pint, plong, etc.) that are deprecated in the new version.
So I guess we are on the safe side. Will keep you posted on the results.





Re: Data from 4.10 to 6.5.1

2017-05-30 Thread mganeshs
All,

As I mentioned above, thought  I will update on steps we followed to move my
data from 4.10 to 6.5.1

Our setup has 6 collections containing only one shard in each and couple of
replicas in each collections

* Install Solr 5.5.4
* Create configs for each collection. Copied basic_configs ( thats comes by
default )
* In Manage schema add our custom field types needed for that corresponding
collection
* Start Solr in cloud mode
* Upconfig the configs for all collections
* Creating Collection with numShards as 1 using HTTP command as mentioned in
over  here
  
* Stop the solr
* In the created shards's data directory, delete the index folder and copy
the 4.10 index folder and make sure write.lock is deleted if exists.
* Now start the solr again. In the solr admin UI, we can see the num docs
will be as per your data copied from 4.10 version. 
* Optimize the index
* Do this for all collection.

Now Install 6.5.1 and repeat same above steps. 

* Install Solr 6.5.1
* Create configs for each collection; copy basic_configs (which comes by
default)
* In the managed schema, add the custom field types needed for the
corresponding collection
* Start Solr in cloud mode
* Upconfig the configs for all collections
* Create the collection with numShards=1 using the Collections API CREATE
command over HTTP
* Stop Solr
* In the created shard's data directory, delete the index folder, copy in
the 5.5.4 index folder, and make sure write.lock is deleted if it exists
* Now start Solr again. In the Solr admin UI, you can see the num docs
matching the data copied from the 5.5.4 version
* Do this for all collections (a sketch of the copy step follows this list)
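
A minimal sketch of the copy step for one collection (all paths and names
here are examples, not our actual layout):

bin/solr stop -all
rm -rf server/solr/collection1_shard1_replica1/data/index
cp -r /backup/solr4/collection1/data/index server/solr/collection1_shard1_replica1/data/
rm -f server/solr/collection1_shard1_replica1/data/index/write.lock
bin/solr start -cloud -z zk1:2181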

Now we can create replicas as needed for each collection using the
ADDREPLICA command.

This worked fine for us without any issues.

Hope this helps others who want to move from an older version of Solr
(4.x) to 6.x.

Thanks and regards,





Re: can't create collection using solrcloud

2017-05-30 Thread mganeshs
A couple of times I faced this issue when the "Endpoint Security" firewall
was on. Once I disabled it, things started working.

Also, for creating a collection, I usually do it the following way:

upconfig the configuration to ZooKeeper using the command

bin/solr zk upconfig -n collection1_configs -z srv-nl-com12:2181 -d
collection1_configs

then create the collection using an HTTP command (REST API):

http://srv-nl-com13:8983/solr/admin/collections?action=CREATE&name=collection1&numShards=1&replicationFactor=2&maxShardsPerNode=2&collection.configName=collection1_configs

This works fine for us...

Hope this helps...






SOLR query validation

2017-05-31 Thread mganeshs
Hi,

In my use case, we need to validate, in the Solr layer, the query that is
being fired at Solr.

Validation like: we want a few fields to always be passed in the query, and
we want a few fields never to be passed in the query.

What is the right place to do this in Solr? Currently we are doing it at
the servlet filter level. Is there a better place to validate the query
before handing it over for execution?
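
One place this kind of check is often hooked in is a custom SearchComponent
registered as a first-component of the request handler. A minimal sketch
(the class name, rules, and field names are mine; depending on the exact
Solr version a few more SolrInfoMBean methods may need to be implemented):

import java.io.IOException;
import org.apache.solr.common.SolrException;
import org.apache.solr.common.params.SolrParams;
import org.apache.solr.handler.component.ResponseBuilder;
import org.apache.solr.handler.component.SearchComponent;

public class QueryValidationComponent extends SearchComponent {

  @Override
  public void prepare(ResponseBuilder rb) throws IOException {
    SolrParams params = rb.req.getParams();
    // Rule: reject requests that carry no filter query at all.
    if (params.getParams("fq") == null) {
      throw new SolrException(SolrException.ErrorCode.BAD_REQUEST,
          "Missing required parameter: fq");
    }
    // Rule: forbid a field from being requested in the fl list.
    String fl = params.get("fl", "");
    if (fl.contains("internal_notes")) {
      throw new SolrException(SolrException.ErrorCode.BAD_REQUEST,
          "Field internal_notes may not be requested");
    }
  }

  @Override
  public void process(ResponseBuilder rb) throws IOException {
    // Validation only; nothing to do at process time.
  }

  @Override
  public String getDescription() {
    return "Validates incoming query parameters";
  }
}

It would then be declared with <searchComponent> in solrconfig.xml and
listed in the handler's first-components array, so it runs before the query
executes.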

Also, in the response we would like to add a few additional fields. For
example, for each document, say an Employee document, we will have only the
employee id, but we would like to add the employee name in the response as
well. This is because the employee name is not indexed or stored in the
document; only the employee id is stored and indexed. So we want to get the
employee name from our own store and add it to the response. Currently we
do this by implementing the QueryResponseWriter interface. Is there a
better alternative for this?

Early responses would be appreciated!

Thanks and Regards,





Re: fq performance

2017-06-11 Thread mganeshs
Thanks for the suggestions, Erick, Michael, and all. I guess using a single
field per kind of principal for access control makes sense: we can have
access_control_user as a multi-valued field holding the user list
(permissions granted to individual users) and another multi-valued field,
access_control_group, holding the group list (permissions granted to
groups) for that document. I tried this with 6 million documents, and in fq
I used almost 50 values, as follows:

fq={!cache=false}acl_groups_ss:(G43 G96 G72 G80 G7 G24 G16 G67 G43 G57 G84
G23 G8 G38 G33 G10 G13 G65 G57 G72 G44 G34 G63 G90 G100 G63)

I tried these queries with 20 concurrent users as well and got less than 1
sec response time, so it should be fine for us for now.

But I am curious how this would be handled in bigger applications like
LinkedIn and other social media. What would the schema be: keep the access
control in the same documents/resources, or keep it outside and do a join
in the query?




Custom Response writer

2017-06-16 Thread mganeshs
Hi, 

We have a requirement that in the response we would like to add the
description of an item along with the item id (the id field comes from the
Solr response by default), or the employee name along with the employee id
(this is just an example use case).

In the Solr document, all we have is the item id or the employee id. But
after the query executes and the result is returned, we need to fill in the
employee name or item name in the response.

So we decided to customize the XML writer: in the place where the XML tags
are filled, we check for and create the additional tag for the employee
name or item name. These names / additional pieces of info are read from
our own read-through cache, which is loaded in a static block.

Is there a better way/option than customizing the XML writer?
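
One alternative that avoids touching Solr internals is to decorate the
documents on the client after the query returns. A rough sketch with SolrJ
(the collection, field names, and the lookupEmployeeName stand-in for the
read-through cache are all assumptions):

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;
import org.apache.solr.common.SolrDocument;

public class EmployeeNameDecorator {
  // Stand-in for the read-through cache mentioned above.
  private static final Map<String, String> NAME_CACHE = new ConcurrentHashMap<>();

  static String lookupEmployeeName(String employeeId) {
    return NAME_CACHE.getOrDefault(employeeId, "unknown");
  }

  public static void main(String[] args) throws Exception {
    HttpSolrClient client =
        new HttpSolrClient.Builder("http://localhost:8983/solr/employees").build();
    QueryResponse rsp = client.query(new SolrQuery("*:*"));
    for (SolrDocument doc : rsp.getResults()) {
      String id = (String) doc.getFieldValue("employee_id");
      // Decorate the document before handing the results to the caller.
      doc.setField("employee_name", lookupEmployeeName(id));
    }
    client.close();
  }
}

If the decoration has to happen inside Solr itself, a DocTransformer
(requested via the fl parameter) is a narrower extension point for
per-document response augmentation than a full response writer.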

Regards,





Adding shards dynamically when the automatic routing is used

2017-05-08 Thread mganeshs
All,

Is there a possibility, in coming releases, of adding shards dynamically
even though compositeId (default) routing is used?
Currently the only option is to split a shard; instead, we should be able
to add shards dynamically, and from then on all new documents should go to
the new shards.
Is there a plan to include this in a coming release?
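
For reference, the split is done with the Collections API (host and names
here are illustrative):

http://localhost:8983/solr/admin/collections?action=SPLITSHARD&collection=collection1&shard=shard1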

Regards,
Ganesh





Re: Using of Streaming to join between shards

2017-06-27 Thread mganeshs
Hi Susheel,

Thanks for your reply; as you suggested, we will start with innerJoin.

But what I want to know is: can streaming be used in place of the normal
default join?

For example, currently we fire a request, for every user click on a menu in
the page, to show the list of his documents, using the default join, and it
works well without any issues with 100 concurrent users, or even more
concurrency than that.

Can we do the same with a streaming join? I just want to know whether
concurrent streaming requests will create a heavy load on the Solr server,
or whether it's the same as the default join. What would be the penalty of
using streaming concurrently instead of the default join?

Kindly throw some light on this topic.







Re: Using of Streaming to join between shards

2017-06-27 Thread mganeshs
Hi Joel,

Thanks for confirming that streaming would be too costly for high-QPS
loads.

Regards,





Re: Default Index config

2018-04-27 Thread mganeshs
To add to this: on 6.5.1, while indexing, sometimes one of the Solr nodes
even goes down for a while and then comes back up automatically. During
that period all our indexing calls fail. In the Solr admin UI too, we can
see the node inactive for a while and then coming up again.

All this happens on a 4-core machine (r4.xlarge). If we move to r4.2xlarge
(an 8-core machine), everything goes smoothly without any issue, and CPU
stays around 50%.

Does that mean we need an 8-core machine to index and query with this much
data at a heavy indexing rate?







Re: docvalues set to true, and indexed is false and stored is set to false

2018-02-13 Thread mganeshs
Hi,

I guess my point was not conveyed correctly.

Here I am talking about the "In-Place Updates" feature.

As per the documentation, the complete document will not be re-indexed
during updates if the field is set as docValues="true" and indexed and
stored are set to false.

But I want to know whether the complete document will be re-indexed when I
delete a field whose docValues is set to true but indexed and stored are
false, and likewise when I add a new field whose docValues is true but
indexed and stored are false.

Hope my question is clear now.





Re: docvalues set to true, and indexed is false and stored is set to false

2018-02-13 Thread mganeshs
Hi,

Thanks for the quick response.

I forgot to mention that after adding it, I re-indexed all the data with
the dynamic fields Field_one, Field_two, etc.

In that case, when adding a new docValues field or removing an existing
docValues field, will the whole document be re-indexed, or will this field
alone be deleted or added correspondingly?

Regards,





docvalues set to true, and indexed is false and stored is set to false

2018-02-13 Thread mganeshs
Hi,

Suppose I have dynamic fields set in the schema with docValues="true",
indexed="false", and stored="false".

What will be the impact of deleting a single field, say "Fields_one", or of
adding a new field, "Fields_100"?

Will the whole document be re-indexed, or will this field alone be deleted
or added correspondingly?

The idea is that we are trying to avoid complete re-indexing of documents
(each document is very large, the number of documents is also huge, and we
have a situation where we may need to add one new dynamic field to all the
documents, or remove a dynamic field from all the documents).

Early responses are really appreciated!

Regards,





Re: docvalues set to true, and indexed is false and stored is set to false

2018-02-13 Thread mganeshs
Hi,

Thanks for clarifying.

But as per this link (Enabling DocValues), it says docValues also supports
StrField and UUID fields.

Again, what do you mean by "it's not free for large segments"? Can you
point me to some documentation on that?

Regards,
Ganesh





Re: docvalues set to true, and indexed is false and stored is set to false

2018-02-14 Thread mganeshs
Hi Emir,

Thanks for confirming that StrField is not considered/available for
in-place updates.

As per the documentation, it says:

*An atomic update operation is performed using this approach only when the
fields to be updated meet these three conditions:

they are non-indexed (indexed="false"), non-stored (stored="false"),
single-valued (multiValued="false") numeric docValues (docValues="true")
fields;

the _version_ field is also a non-indexed, non-stored, single-valued
docValues field; and,

copy targets of updated fields, if any, are also non-indexed, non-stored,
single-valued numeric docValues fields.*

Let's consider that I have declared the following three kinds of fields in
the schema: an id field; Field1 and Field2, which are indexed; and Field3,
a single-valued numeric field with docValues="true", indexed="false", and
stored="false".

With this, I create a couple of Solr documents (say id=1) with only Field1
and Field2, and they get indexed. I can search the documents based on
Field1 and Field2.

Now, after a while, I add a new field called Field3 by passing the id field
(id=1) and Field3 (Field3=100; a docValues field in our case).

What will happen now? Will the complete document get re-indexed, or will
only Field3 get added under docValues?

Please confirm.

Regards,





In Place Updates not work as expected

2018-02-15 Thread mganeshs
All,

I have (say) 1M Solr documents (in real life it would be even more), each
with a lot of fields, and fairly big. We have a functionality where we need
to go and update a specific field, or add a new field, in such documents.
Since we have to do this for all 1M documents, it takes too much time,
which is not acceptable.

So we thought of using "In-Place Updates".

As per the documentation, we have made sure the following criteria are met:
---
*An atomic update operation is performed using this approach only when the
fields to be updated meet these three conditions:

they are non-indexed (indexed="false"), non-stored (stored="false"),
single-valued (multiValued="false") numeric docValues (docValues="true")
fields;

the _version_ field is also a non-indexed, non-stored, single-valued
docValues field; and,

copy targets of updated fields, if any, are also non-indexed, non-stored,
single-valued numeric docValues fields.*
---
To check whether it's working as expected:
* First we tried updating a normal field, and it took around 1.5 hours to
update all 1M docs, as the complete document gets re-indexed.

* We also tried updating the docValues field, and it likewise took around
1.5 hours to complete for 1M docs.

In the second case we are updating a docValues field, and since that should
not re-index the complete document, shouldn't it take less time?

What could be going wrong? I am using Solr 6.5.1. Is this a bug or expected
behavior?

Regards,






Re: In Place Updates not work as expected

2018-03-14 Thread mganeshs
Hi Emir,

I am using SolrJ to update the documents. Is there any special API to be
used for in-place updates?

Yes, we are updating in batches of 1000 documents.

As I mentioned before, since I am updating only docValues, I expect it to
be faster than updating a normal field. Isn't it?

Regards,





Re: Default Index config

2018-04-11 Thread mganeshs
Hi Shawn, 

We found a thread where it's mentioned that indexing on 6.2.1 is faster
whereas on 6.6 it's slower. With that in mind, we too tried 6.2.1 in our
performance environment, and we found that CPU usage came down to 60-70%,
whereas on 6.5.1 it was always more than 95%.

The settings are the same, and the data size and indexing speed remain the
same. Please compare the JVM snapshot we captured while indexing on 6.2.1
with the snapshot taken on 6.5.1.

Is there any reason why there is such a huge difference in CPU usage
patterns between 6.2.1 and 6.5.1?

Can we do something on 6.5.1 to make it behave like 6.2.1? We don't want to
downgrade from 6.5.1 to 6.2.1.

Let us know your thoughts on this.

Thanks and Regards,







Re: Default Index config

2018-04-09 Thread mganeshs
Hi Shawn,

Thanks for the reply.

Yes, we use only one Solr client. Though the collection name is passed into
the function, we are using the same client for now.

Regarding the merge config: after reading a lot of forum posts and
listening to a Lucene/Solr Revolution 2017 presentation, the idea was to
reduce the merge frequency, so that the CPU usage pattern comes down from
100% to around 70% most of the time and goes to 100% only when merges
happen (whereas now it's always above 95%). We see CPU constantly above 95%
as a bad sign, since we run other components on this server as well. That's
why I was trying to reduce the merge frequency.

Thanks for sharing your config; I will try with that too and post an update
on the result.

Thanks and Regards,






Re: Default Index config

2018-04-09 Thread mganeshs
Hi Shawn,

Regarding the high CPU: while troubleshooting, we found that merge threads
keep running and take most of the CPU time (as per VisualVM). GC is not
causing any issue; we use the default GC and also tried G1 as you suggested
elsewhere.

Though merging is only a background process, we suspect it is what is
driving the CPU high.

We use Solr for real-time indexing of data, and its results must show up in
the UI immediately, so we keep adding around 100 to 200 documents per
second in parallel, in batches of 20 Solr documents per add call.

*Note*: the following is the code snippet we use for indexing / adding
Solr documents in batches, per collection:

for (SolrCollectionList solrCollection : SolrCollectionList.values()) {
    CollectionBucket collectionBucket = getCollectionBucket(solrCollection);
    List<SolrInputDocument> solrInputDocuments =
        collectionBucket.getSolrInputDocumentList();
    String collectionName = collectionBucket.getCollectionName();
    try {
        if (solrInputDocuments.size() > 0) {
            CloudSolrClient solrClient =
                PlatformIndexManager.getInstance().getCloudSolrClient(collectionName);
            solrClient.add(collectionName, solrInputDocuments);
        }
    } catch (SolrServerException | IOException e) {
        // log and handle the failed batch for this collection
    }
}

where solrClient is created as below:

this.cloudSolrClient = new CloudSolrClient.Builder()
        .withZkHost(zooKeeperHost)
        .withHttpClient(HttpClientUtil.HttpClientFactory.createHttpClient())
        .build();
this.cloudSolrClient.setZkClientTimeout(3);

Hard commit is kept automatic and set to 15000 ms.
In this process we also see that, when a merge is happening and the default
maxMergeCount is already reached, commits get delayed and the SolrJ client
(where we add documents) is blocked; once one of the merge threads finishes
its merge, the SolrJ call returns.
How do we avoid this blocking of the SolrJ client? Do I need to go beyond
the default config for this scenario, i.e. change the merge configuration?
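
For reference, the knob in question lives in the indexConfig section of
solrconfig.xml; the numbers below are purely illustrative, not a
recommendation:

<indexConfig>
  <mergeScheduler class="org.apache.lucene.index.ConcurrentMergeScheduler">
    <int name="maxMergeCount">9</int>
    <int name="maxThreadCount">4</int>
  </mergeScheduler>
</indexConfig>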

Can you suggest what the merge config should be for such a scenario? Based
on forum posts, I tried changing the merge settings to the following:


<mergePolicyFactory class="org.apache.solr.index.TieredMergePolicyFactory">
  <int name="maxMergeAtOnce">30</int>
  <int name="maxMergeAtOnceExplicit">30</int>
  <int name="segmentsPerTier">30</int>
  <double name="maxMergedSegmentMB">2048</double>
  <double name="floorSegmentMB">512</double>
  <double name="noCFSRatio">0.1</double>
  <double name="maxCFSSegmentSizeMB">2048</double>
  <double name="reclaimDeletesWeight">2.0</double>
  <double name="forceMergeDeletesPctAllowed">10.0</double>
</mergePolicyFactory>


But I couldn't see much change in the behaviour.

On the same Solr node we have multiple indexes / collections. In that case,
is TieredMergePolicyFactory the right option, or should we go for another
merge policy (LogByteSize, etc.) when there are multiple collections on the
same node?

Can you throw some light on these aspects?
Regards,

> Regarding auto commit, we discussed lot with our product owners and atlast
> we are forced to keep it to 1sec and we couldn't increase further. As this
> itself, sometimes our customers says that they have to refresh their pages
> for couple of times to get the update from solr. So we can't increase
> further.

I understand pressure from nontechnical departments for very low 
response times. Executives, sales, and marketing are usually the ones 
making those kinds of demands. I think you should push back on that 
particular requirement on technical grounds.

A soft commit interval that low *can* contribute to performance issues.  
It doesn't always cause them, I'm just saying that it *can*.  Maybe 
increasing it to five or ten seconds could help performance, or maybe it 
will make no real difference at all.

> Yes. As of now only solr is running in that machine. But intially we were
> running along with hbase region servers and was working fine. But due to
> CPU
> spikes and OS disk cache, we are forced to move solr to separate machine.
> But just I checked, our solr data folder size is coming only to 17GB. 2
> collection has around 5GB and other are have 2 to 3 GB of size. If you say
> that only 2/3 of total size comes to OS disk cache, in top command VIRT
> property it's always 28G, which means more than what we have. Why is
> that...
> Pls check that top command & GC we used in this doc:
> https://docs.google.com/document/d/1SaKPbGAKEPP8bSbdvfX52gaLsYWnQfDqfmV802hWIiQ/edit?usp=sharing

The VIRT memory should be about equivalent to the RES size plus the size 
of all the index data on the system.  So that looks about right.  The 
actual amount of memory allocated by Java for the heap and other memory 
structures is approximately equal to RES minus SHR.

I am not sure whether the SHR size gets counted in VIRT. It probably 
does.  On some Linux systems, SHR grows to a very high number, but when 
that happens, it typically doesn't reflect actual memory usage.  I do 
not know why this sometimes happens. That is a question for Oracle, since
they are the current owners of Java.

Only 5GB is in the buff/cache area.  The system has 13GB of free 
memory.  That system is NOT low on memory.

With 4 CPUs, a load average in the 3-4 range is an indication that the 

Re: Performance & CPU Usage of 6.2.1 vs 6.5.1 & above

2018-04-17 Thread mganeshs
Regarding query times, we couldn't see big improvements; both are more or
less the same.

Our main worry is why CPU usage is so high on 6.5.1 and above. What's going
wrong?

Is anyone else facing this sort of issue? If yes, how do we bring down the
CPU usage? Are there any settings we need to change from the defaults in
6.5.1?





Re: Performance & CPU Usage of 6.2.1 vs 6.5.1 & above

2018-04-18 Thread mganeshs
Hello Deepak,

We are not querying when indexing is going on. Whatever CPU graph I shared
for 6.2.1 and 6.5.1 was only while we do batch indexing. During that time we
don't query and no queries are getting executed.

We index in a batch with a rate of around 100 documents / sec. And it's not
so high too. But same piece of code and same config, with 6.2.1 CPU is
normal and in 6.5.1 it always stays above 90% or 95%. 

@Solr Experts,

In one of the threads, Yasoob
<http://lucene.472066.n3.nabble.com/CommitScheduler-Thread-blocked-due-to-excessive-number-of-Merging-Threads-tp4353964p4354334.html>
mentioned:

"I compared the source code for the two versions and found that different
merge functions were being used to merge the postings. In 5.4, the default
merge method of the FieldsConsumer class was being used, while in 6.6 the
PerFieldPostingsFormat's merge method is used. I checked, and it looks like
this change went into Solr 6.3. So I replaced the 6.6 instance with 6.2.1
and re-indexed all the data, and it is working very well, even with the
settings I had initially used."

Is anyone else facing this issue, or has any fix been released in a later
build for this?

Keep us posted.


Deepak Goel wrote
> Please post the exact results. Many a times the high cpu utilisation may
> be a boon as it improves query response times
>
> On Tue, 17 Apr 2018, 13:55 mganeshs <mganeshs@...> wrote:
>
>> Regarding query times, we couldn't see big improvements. Both are more or
>> less same.
>>
>> Our main worry is that, why CPU usage is so high in 6.5.1 and above ?
>> What's going wrong ?
>>
>> Is any one else facing this sort of issue ? If yes, how to bring down the
>> CPU usage? Is there any settings which we need to set ( not default one )
>> in 6.5.1 ?








Performance & CPU Usage of 6.2.1 vs 6.5.1 & above

2018-04-15 Thread mganeshs
Solr experts,

We found a thread where it's mentioned that indexing on 6.2.1 is faster
whereas on 6.6 it's slower.

We are facing the same issue: with 6.2.1 in our performance environment we
found CPU usage around 60-70%, whereas on 6.5.1 it was always more than
95%.

The settings are the same, and the data size and indexing speed remain the
same. Please compare the JVM snapshot we captured while indexing on 6.2.1
with the snapshot taken on 6.5.1.

Is there any reason why there is such a huge difference in CPU usage
patterns between 6.2.1 and 6.5.1?

Can we do something on 6.5.1 to make it behave like 6.2.1? We don't want to
downgrade from 6.5.1 to 6.2.1.

Let us know your thoughts on this.

Thanks and Regards, 






Re: Performance & CPU Usage of 6.2.1 vs 6.5.1 & above

2018-04-16 Thread mganeshs
Hi Bernd,

We didn't change any default settings.

Both 6.2.1 and 6.5.1 run with the same settings, the same volume of data,
and the same code, which means the indexing rate is also the same.

In the case of 6.2.1, CPU is around 60-70%; in 6.5.1 it's always around
95%. The CPU usage on 6.5.1 is alarming for us, and we keep getting alerts
as it's always more than 95%.

Basically, my question is: why is the CPU low on 6.2.1 and very high on
6.5.1? I thought only I was facing this issue, but one more person on the
forum also raised it, and nothing has been concluded so far.

In another thread Shawn also suggested changes w.r.t. the merge policy
numbers, but the CPU usage didn't come down. With 6.2.1's default settings
it works fine and CPU is normal. So I created a new thread to discuss CPU
utilization between the old version (6.2.1) and the new version (6.5.1+).

Regards,





Re: Default Index config

2018-03-27 Thread mganeshs
Hi Shawn,

Thanks for the detailed mail. Yes, I am asking about the indexConfig.

Regarding the 5GB size of the collection: it's not one document. The
collection has almost 3M docs.

I am using the default configuration, as all the Solr experts say the
defaults suit most cases, so we follow them. We changed only the commit
settings, using 15000 ms for hard commits and 1 sec for soft commits.
All other settings (locking, deletion policy, merge, directory, etc.) are
left at the defaults.

One strange thing we noticed after moving from Solr 5.x to Solr 6.5.1 is
that CPU and RAM usage increased drastically. We have two Solr nodes, one
for data and another for the replica, on EC2 r4.xlarge machines. We have
almost 6 collections, each carrying around 5GB of data on average, and a
couple of collections receive frequent updates.

On Solr 5.x we didn't see this much RAM and CPU usage. CPU is always
80-90%, even if we index or update only some 50 docs at one shot, and RAM:
it occupies whatever we give. We started with an 8GB heap, but it's always
at 8GB. Initially we were using CMS GC, and we also tried G1. The only
difference is that with CMS the CPU goes to 80% right after starting Solr,
whereas with G1 it's around 10% after startup and goes to 90% when load
arrives (around 100 to 200 docs in 5 mins; in both CMS and G1).

When we profiled the Solr process, we found that merging keeps happening.

We are seeing these CPU and memory spikes only after moving to 6.5.1. Is
that a stable version? Will moving to the latest stable release solve this,
or are we missing something w.r.t. configuration? Do we need to change the
Solr defaults?


Advice... 






Re: Default Index config

2018-03-28 Thread mganeshs
Hi Shawn,

Thanks again for detailed reply.

Regarding auto commit: we discussed it a lot with our product owners, and
in the end we were forced to keep it at 1 sec; we couldn't increase it
further. Even as it is, our customers sometimes say they have to refresh
their pages a couple of times to get the update from Solr. So we can't
increase it further.

We have kept openSearcher at the typical, recommended configuration.

Yes, as of now only Solr is running on that machine. Initially we were
running it along with HBase region servers and it worked fine, but due to
the CPU spikes and OS disk cache we were forced to move Solr to a separate
machine.
I just checked: our Solr data folder size comes to only 17GB. Two
collections have around 5GB, and the others have 2 to 3GB each. If you say
that only 2/3 of the total size ends up in the OS disk cache, why is the
VIRT column in top always 28G, which is more than what we have?
Please check the top output & the GC we used in this doc:
https://docs.google.com/document/d/1SaKPbGAKEPP8bSbdvfX52gaLsYWnQfDqfmV802hWIiQ/edit?usp=sharing

Yes, we are using SolrCloud only.

Queries are quite fast, mostly simple queries with fq. Regarding indexing:
during peak hours we index around 100 documents per second on average.

I also shared the CPU utilization in the same doc.

Regarding the release: we initially tried 6.4.1, and since many discussions
over here mentioned that moving to 6.5.x would solve a lot of performance
issues, we moved to 6.5.1. We will move to 6.6.3 in the near future.

Hope I have given enough information. One strange thing is that the CPU and
memory spikes are not seen when we move from r4.xlarge to r4.2xlarge (8
cores with 60GB RAM), but that would not be cost-effective. What's making
CPU and memory go so high in this new version (is it due to docValues)? If
I switch off docValues, will the CPU & memory spikes get reduced?

Let me know...

 





Default Index config

2018-03-26 Thread mganeshs
Hi,

I haven't changed the Solr config w.r.t. the indexConfig, which means it's
all commented out in solrconfig.xml.

It's something like the following (everything commented out or left at the
defaults, with only the lock type spelled out), but I would like to know
the default value of each of these settings:

<indexConfig>
  <!-- all settings left commented out / at their defaults -->
  <lockType>${solr.lock.type:native}</lockType>
</indexConfig>

Because, after moving to 6.5.1, and with the index size crossing 5GB in
each of our collections, updates of documents are now taking time. So I
would like to know whether we need to change any of the default
configurations.
Advice...
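
For reference, my understanding of the main 6.x indexConfig defaults that
apply when everything is commented out (worth double-checking against the
reference guide for your exact release):

<indexConfig>
  <ramBufferSizeMB>100</ramBufferSizeMB>        <!-- flush when the RAM buffer reaches ~100MB -->
  <maxBufferedDocs>-1</maxBufferedDocs>         <!-- disabled; only the RAM buffer triggers flushes -->
  <lockType>${solr.lock.type:native}</lockType>
  <mergePolicyFactory class="org.apache.solr.index.TieredMergePolicyFactory">
    <int name="maxMergeAtOnce">10</int>
    <int name="segmentsPerTier">10</int>
  </mergePolicyFactory>
</indexConfig>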





Re: In Place Updates not work as expected

2018-03-16 Thread mganeshs
Hi Emir,

It's a normal setField and addDocument.

For example, in a for loop:

    solrInputDocument.setField(sFieldId, fieldValue);

and after this, we add the created documents:

    solrClient.add(collectionName, solrInputDocuments);

I just want to know whether we need to do something specific for in-place
updates.
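
For what it's worth, my understanding from the documentation is that an
in-place update is only attempted when the partial document carries just
the unique key plus "set"/"inc" operations on eligible docValues-only
fields; re-sending the full document re-indexes it. A sketch of that idiom
in SolrJ (the field names are from my earlier mails; the rest is untested):

SolrInputDocument partial = new SolrInputDocument();
partial.addField("id", "1");
// "set" on a single-valued, non-indexed, non-stored numeric docValues field
partial.addField("Field3", java.util.Collections.singletonMap("set", 100));
solrClient.add(collectionName, partial);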

Kindly let me know,

Regards,






Re: Allow Join over two sharded collection

2019-02-05 Thread mganeshs
All, 

Any idea whether this will be taken care of or addressed in the near
future?

https://issues.apache.org/jira/browse/SOLR-8297

Regards,






Solr query with best match returning high score

2019-07-03 Thread mganeshs
Hello Experts,

I have the following query:

product:TV OR os:Android OR size:(55 60 65) OR brand:samsung OR issmart:yes
OR ram:[4 TO *] OR rate:[10 TO *] OR bezel:no OR sound:dolby

In total there are 9 conditions.

Now I need the document with the best match to be returned at the top. By
best match I mean one that satisfies all 9 conditions (as if using AND
instead of OR): a document where product is TV and os is Android and size
is 55 and brand is samsung and issmart is yes and ram is 4 and rate is
115000 and bezel is no and sound is dolby.

Next would come documents matching any 8 of the conditions. I also have a
scenario where certain fields should get priority (e.g. brand:samsung), so
I can give a boost for that.

Let me know how this can be achieved. Normal scoring works a bit
differently: a term that is rare among all the documents gets a high score.
How can we disable that part?
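
One approach that maps directly to "count of matched clauses": give every
clause a constant score with the ^= operator, so term rarity (IDF) plays no
role and the final score is simply the sum of the weights of the clauses
that matched (the weights below are illustrative):

q=product:TV^=1 OR os:Android^=1 OR size:(55 60 65)^=1 OR brand:samsung^=2
OR issmart:yes^=1 OR ram:[4 TO *]^=1 OR rate:[10 TO *]^=1 OR bezel:no^=1
OR sound:dolby^=1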






Fetch related documents from Custom Function

2020-05-18 Thread mganeshs
Is there an easy way of reading a few fields of related documents from a
custom function?

For example, a Project document contains project id, project name, and
project manager id (which is nothing but an employee id), and an Employee
document contains the fields employee id and employee name. Now, while
querying the Project documents, I want to pass the project manager id to a
custom function, read the Employee document of that project manager, and
return the employee name of that project manager.

We can do a join, but for various reasons a join won't work for me. So I
would like to read the Employee document from the custom function. As the
custom function executes inside Solr, what's the easy way to read other
documents in Solr, instead of establishing a new connection via SolrJ?
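
A rough sketch of the kind of lookup I am imagining inside the function's
implementation, using the searcher from the current request (untested, and
the field names are just my example):

// given the org.apache.solr.request.SolrQueryRequest req of the current query
SolrIndexSearcher searcher = req.getSearcher();
int docId = searcher.getFirstMatch(new Term("id", projectManagerId));
if (docId != -1) {
    String employeeName = searcher.doc(docId).get("Employee_name");
    // ... use employeeName ...
}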

Thanks in advance.





Re: Fetch related documents from Custom Function

2020-05-18 Thread mganeshs
Yes. But from inside Solr (I mean code executing via a custom function), do
we have an option to read other Solr documents in an easy way?





Re: Fetch related documents from Custom Function

2020-05-19 Thread mganeshs
Solr experts, is there any easy way to read other Solr docs from a Solr
custom function?


