Re: Issue with delta import

2017-08-16 Thread vrindavda
Yes.



--
View this message in context: 
http://lucene.472066.n3.nabble.com/Issue-with-delta-import-tp4347680p4350734.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: Issue with delta import

2017-08-10 Thread vrindavda
Refer to this:

http://lucene.472066.n3.nabble.com/Number-of-requests-spike-up-when-i-do-the-delta-Import-td4338162.html#a4339168





Re: Proximity Search using edismax parser.

2017-06-12 Thread vrindavda
Hi, you can refer to: http://yonik.com/solr/query-syntax/
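For example, a quick sketch of the standard Lucene proximity syntax, which edismax also accepts (the terms and distances here are made up):

```text
q="solr cloud"~5      matches documents where "solr" and "cloud" occur within 5 positions of each other
q="apache lucene"~0   behaves like an exact phrase query
```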





Re: Number of requests spike up, when i do the delta Import.

2017-06-06 Thread vrindavda
I found this article helpful.

https://wiki.apache.org/solr/DataImportHandlerDeltaQueryViaFullImport
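The technique described there is to run a full-import that does not clean the index and filters on the last index time, so only changed rows are fetched in a single query rather than one query per changed row. A minimal DIH sketch (table and column names are hypothetical):

```xml
<!-- data-config.xml: delta logic folded into the main query -->
<entity name="item"
        query="SELECT id, title FROM item
               WHERE last_modified &gt; '${dataimporter.last_index_time}'"/>
```

Run it with command=full-import and clean=false so existing documents are kept.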





Re: Number of requests spike up, when i do the delta Import.

2017-06-02 Thread vrindavda
Thanks Erick,

Could you please suggest an alternative to SolrNET?

@jlman, I tried your way; it does reduce the number of requests, but
delta-import still takes longer than full-import. There is no improvement in
performance.





Re: Number of requests spike up, when i do the delta Import.

2017-06-01 Thread vrindavda
Thanks Erick,

But how do I solve this? I tried creating a stored procedure instead of a plain
query, but there was no change in performance.

The delta import is processing more documents than the total number of documents.
In this case delta import is not helping at all, and I cannot switch to a full
import each time. This was working fine with less data.

Thank you,
Vrinda Davda





Re: Number of requests spike up, when i do the delta Import.

2017-05-31 Thread vrindavda
Exactly, delta import is taking more time than full import.

Here are the details required.

When I do the delta import for 600 (of a total of 291,633) documents, I get this:

Indexing completed. Added/Updated: 360,000 documents. Deleted 0 documents.
(Duration: 6m 58s)

For full import:

Indexing completed. Added/Updated: 291,633 documents. Deleted 0 documents.
(Duration: 3m 07s)

Thank you,
Vrinda Davda





Number of requests spike up, when i do the delta Import.

2017-05-31 Thread vrindavda
Hello,
The number of requests spikes up whenever I do the delta import in Solr.
Please help me understand this.





Re: Solr licensing for commercial product.

2017-05-09 Thread vrindavda
Thanks Shawn,

One more question: I found the below snippet in the license file. Do I need to
mention my product owner's details in the highlighted section?


  APPENDIX: How to apply the Apache License to your work.

  To apply the Apache License to your work, attach the following
  boilerplate notice, with the fields enclosed by brackets "[]"
  replaced with your own identifying information. (Don't include
  the brackets!)  The text should be enclosed in the appropriate
  comment syntax for the file format. We also recommend that a
  file or class name and description of purpose be included on the
  same "printed page" as the copyright notice for easier
  identification within third-party archives.

*   Copyright [] [name of copyright owner]*

   Licensed under the Apache License, Version 2.0 (the "License");
   you may not use this file except in compliance with the License.
   You may obtain a copy of the License at

   http://www.apache.org/licenses/LICENSE-2.0

   Unless required by applicable law or agreed to in writing, software
   distributed under the License is distributed on an "AS IS" BASIS,
   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
   See the License for the specific language governing permissions and
   limitations under the License.






Solr licensing for commercial product.

2017-05-09 Thread vrindavda
Hello,

Please let me know everything I need to consider for licensing before
shipping Solr with a commercial product.

How will Solr know which client is using it?

Thank you,
Vrinda Davda 





Re: SPLITSHARD Working

2017-05-08 Thread vrindavda
Thanks, I got it.

But I see that the distribution of shards and replicas is not equal.

For example, in my case:
I had shard1 and shard2 on Node 1 and their replica_1 and replica_2 on
Node 2.
I did SPLITSHARD on shard1 to get shard1_0 and shard1_1, such that
shard1_0_replica0 was created on Node 1, and shard1_0_replica1,
shard1_1_replica1 and shard1_1_replica0 on Node 2.

Is this expected behavior?

Thank you,
Vrinda Davda





SPLITSHARD Working

2017-05-08 Thread vrindavda
Hi,

I need to SPLITSHARD such that one split remains on the same machine as the
original and the other uses new machines for its leader and replicas. Is this
possible? Please let me know what properties I need to specify in the
Collections API to achieve this.
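One approach is worth noting here: SPLITSHARD itself creates both sub-shards on the parent shard's nodes, and placement of additional copies is usually done afterwards with ADDREPLICA/DELETEREPLICA. A sketch of that sequence (collection, node, and core names below are hypothetical):

```shell
# 1. Split shard1 of "mycoll"; shard1_0 and shard1_1 stay on the original node
curl 'http://localhost:8983/solr/admin/collections?action=SPLITSHARD&collection=mycoll&shard=shard1'

# 2. Add a replica of one sub-shard on the new machine explicitly
curl 'http://localhost:8983/solr/admin/collections?action=ADDREPLICA&collection=mycoll&shard=shard1_1&node=newhost:8983_solr'

# 3. Drop the copy that is no longer wanted on the old machine
curl 'http://localhost:8983/solr/admin/collections?action=DELETEREPLICA&collection=mycoll&shard=shard1_1&replica=core_node5'
```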

Thank you,
Vrinda Davda





Re: Backup not working

2017-04-21 Thread vrindavda
I realized that segments_1 is getting created in shard2 and segments_2 in
shard1.

The Backup API is looking for segments_1 in shard1. Please correct me if I have
configured something incorrectly. I have created the collection using the
Collections API and am using the data_driven_schema_configs configset.





Backup not working

2017-04-21 Thread vrindavda
Hello,

I am trying to back up the Solr index data using the Collections API.

I have \collection2_shard1_replica1\data\index\segments_6 in my data folder,
but when I try to back up the files, it expects
\collection2_shard1_replica1\data\index\segments_5, which is not there in the
data folder, and hence it throws an exception.

Now when I reindex, I have segments_7 instead of segments_6. But at this
time the Backup API expects segments_6.

Please help.

Here is the exception that I get:




500
479

org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error
from server at http://172.25.7.50:8983/solr: Failed to backup
core=collection2_shard1_replica1 because java.nio.file.NoSuchFileException:
D:\solr-6.4.0\solr-6.4.0\example\cloud\node1\solr\collection2_shard1_replica1\data\index\segments_5

org.apache.solr.common.SolrException: Could not backup all replicas at
org.apache.solr.handler.admin.CollectionsHandler.handleResponse(CollectionsHandler.java:287)
at
org.apache.solr.handler.admin.CollectionsHandler.handleRequestBody(CollectionsHandler.java:218)
at
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:166)
at
org.apache.solr.servlet.HttpSolrCall.handleAdminRequest(HttpSolrCall.java:664)
at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:445) at
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:345)
at
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:296)
at
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1691)
at
org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)
at
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
at
org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
at
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
at
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)
at
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
at
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
at
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
at
org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213)
at
org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
at
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
at org.eclipse.jetty.server.Server.handle(Server.java:534) at
org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:320) at
org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251)
at
org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273)
at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95) at
org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
at
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303)
at
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)
at
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136)
at
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
at
org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589)
at java.lang.Thread.run(Thread.java:745)

500







Re: Architecture suggestions

2017-03-24 Thread vrindavda
Thanks Shawn,
In my case the query rate will be average, or rather low: 100-120 concurrent
requests.

As per my understanding, replicas also help shards in serving result documents;
correct me if I am wrong.

Moreover, I intend to have a fault-tolerant architecture, hence opting for
shards/replicas on different servers.

Please advise.

Thanks,
Vrinda Davda



On 24-Mar-2017 6:53 PM, "Shawn Heisey-2 [via Lucene]" <
ml-node+s472066n4326641...@n3.nabble.com> wrote:

On 3/24/2017 1:15 AM, vrindavda wrote:
> Thanks Erick and Emir , for your prompt reply.
>
> We are expecting around 50M documents to sit on 80GB . I understand that
> there is no equation to predict the number/size of server. But
considering
> to have minimal fault tolerant architecture, Will 2 shards and 2 replicas
> with 128GB RAM, 4 core solr instance be advisable ? Will that suffice ?
>
> I am planning to use two solr instances for shards and replicas each and
3
> instances for zookeeper. Please suggest if I am in right direction.

If you have two servers with 128GB and the entire index will be 80GB in
size, this should work well.  The heap would likely be fine at around
8GB, so each server would have a complete copy of the index and would
have enough memory available to cache it entirely.  With two servers,
you want two replicas, regardless of the number of shards.  When I say
two replicas, I am talking about a total of two copies -- not a leader
and two followers.

If the query rate is very low, then sharding would be worthwhile,
because multiple CPUs will be used by a single query.  If the query rate
is high, then you would want all the documents in a single shard, so the
CPUs are not overwhelmed.  If you don't know what the query rate will
be, assume it will be high.

A more detailed discussion:

https://wiki.apache.org/solr/SolrPerformanceProblems

Thanks,
Shawn








Re: Architecture suggestions

2017-03-24 Thread vrindavda
Thanks Erick and Emir, for your prompt reply.

We are expecting around 50M documents to occupy about 80GB. I understand that
there is no equation to predict the number/size of servers. But considering a
minimal fault-tolerant architecture, would 2 shards and 2 replicas on
128GB-RAM, 4-core Solr instances be advisable? Will that suffice?

I am planning to use two Solr instances for the shards and replicas, and 3
instances for ZooKeeper. Please suggest if I am in the right direction.





Architecture suggestions

2017-03-23 Thread vrindavda
Hello,

My production index is expected to contain 50 million documents, with around
1 million more added every year.

Should I go for 64GB RAM (4 shards / 4 replicas) or 128GB (2 shards / 2
replicas)?

Please suggest if the above assumptions are incorrect. What parameters
should I consider?


Thank you,
Vrinda Davda





Re: Solr Query Suggestion

2017-03-07 Thread vrindavda
Hi Emir,

Grouping is exactly what I wanted to achieve. Thanks!!

Thank you,
Vrinda Davda
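For reference, the grouping approach can be sketched as a single request (the collection and field names here are hypothetical); group.limit controls how many documents come back per group:

```shell
# Top 3 documents from each category in one query
curl 'http://localhost:8983/solr/mycoll/select?q=*:*&group=true&group.field=category&group.limit=3'
```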




Solr Query Suggestion

2017-03-03 Thread vrindavda
Hello,

I have indexed data of 3 categories, say Category-1, Category-2, Category-3.

I need suggestions on forming a query to get the top 3 results from each
category - Category-1 (3), Category-2 (3), Category-3 (3) - a total of 9.

Is this possible?

Thank you,
Vrinda Davda





Re: SOLR JOIN

2017-03-01 Thread vrindavda
Hi Nitin,

You can use Streaming Expressions for joins in SolrCloud only (for
collections, not cores).

Again, this can affect your performance. I would suggest copying fields from
one collection to the other and seamlessly using features like facets. Facets
get difficult and complex as you use joins (Ref).





Re: Solr6.3.0 SolrJ API for Basic Authentication

2017-02-16 Thread vrindavda
Hi Bryan,

Thanks for your quick response.

I am trying to ingest data into SolrCloud, hence I will not have any Solr
query. Would it be the right approach to use a QueryRequest to index data? Do I
need to put in a dummy solrQuery instead?





Solr6.3.0 SolrJ API for Basic Authentication

2017-02-14 Thread vrindavda
Hello ,

I am trying to connect SolrCloud using SolrJ API using following code :

  String zkHostString = "localhost:9983";
  String USER = "solr";
  String PASSWORD = "SolrRocks";

  CredentialsProvider credentialsProvider = new BasicCredentialsProvider();
  credentialsProvider.setCredentials(AuthScope.ANY,
      new UsernamePasswordCredentials(USER, PASSWORD));
  CloseableHttpClient httpClient = HttpClientBuilder.create()
      .setDefaultCredentialsProvider(credentialsProvider)
      .build();

  CloudSolrClient solr = new CloudSolrClient.Builder()
      .withZkHost(zkHostString)
      .withHttpClient(httpClient)
      .build();
  solr.setDefaultCollection("gettingstarted");




But I get this error:


Exception in thread "main"
org.apache.solr.client.solrj.impl.CloudSolrClient$RouteException:
IOException occured when talking to server at:
http://192.168.0.104:8983/solr/gettingstarted_shard2_replica1
at
org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:767)
at
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1173)
at
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:1062)
at
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:1004)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:149)
at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:173)
at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:190)
at com.app.graphiti.TextParser.main(TextParser.java:92)
Caused by: org.apache.solr.client.solrj.SolrServerException: IOException
occured when talking to server at:
http://192.168.0.104:8983/solr/gettingstarted_shard2_replica1
at
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:607)
at
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:262)
at
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:251)
at
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:435)
at
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:387)
at
org.apache.solr.client.solrj.impl.CloudSolrClient.lambda$directUpdate$0(CloudSolrClient.java:742)
at java.util.concurrent.FutureTask.run(Unknown Source)
at
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:229)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.lang.Thread.run(Unknown Source)
Caused by: org.apache.http.client.ClientProtocolException
at
org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:186)
at
org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82)
at
org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:55)
at
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:498)
... 10 more
Caused by: org.apache.http.client.NonRepeatableRequestException: Cannot
retry request with a non-repeatable request entity.
at
org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:225)
at
org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:184)
at org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:88)
at
org.apache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:110)
at
org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:184)
... 13 more
16:55:40.289 [main-SendThread(0:0:0:0:0:0:0:1:9983)] DEBUG
org.apache.zookeeper.ClientCnxn - Got ping response for sessionid:
0x15a3bc76e1f000e after 1ms
16:55:43.624 [main-SendThread(0:0:0:0:0:0:0:1:9983)] DEBUG
org.apache.zookeeper.ClientCnxn - Got ping response for sessionid:
0x15a3bc76e1f000e after 1ms
16:55:46.958 [main-SendThread(0:0:0:0:0:0:0:1:9983)] DEBUG
org.apache.zookeeper.ClientCnxn - Got ping response for sessionid:
0x15a3bc76e1f000e after 1ms


Please help.
Vrinda Davda
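For what it is worth, the NonRepeatableRequestException above is the usual symptom of the server answering a POST with a 401 challenge after the request body has already been consumed, so the client cannot replay it. One workaround sketch (assuming SolrJ 6.x; the document and client variables are taken from the code above) is to attach the credentials to each request, which sends them preemptively instead of waiting for the challenge:

```java
// Hypothetical usage: credentials set per request instead of on the HttpClient
UpdateRequest req = new UpdateRequest();
req.add(doc);                                     // doc is the SolrInputDocument to index
req.setBasicAuthCredentials("solr", "SolrRocks"); // sent preemptively, no 401 round trip
req.process(solr, "gettingstarted");              // solr is the CloudSolrClient built above
```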





Re: Import from S3

2016-11-24 Thread vrindavda
Thanks for the quick response, Aniket.

Do I need to make any specific configurations to get data from Amazon S3
storage?





Import from S3

2016-11-24 Thread vrindavda
Hello,

I have some data in S3, say in text/CSV format. Please provide pointers on how
I can ingest this data into Solr.
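There is no stock DataImportHandler source for S3, so one simple approach (a sketch; the bucket name, paths, and collection are hypothetical) is to pull the files down with the AWS CLI and index them with Solr's bin/post tool:

```shell
# Copy the CSV files out of S3, then post them to a Solr collection
aws s3 cp s3://my-bucket/exports/ ./exports/ --recursive
bin/post -c mycollection ./exports/*.csv
```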

Thank you,
Vrinda Davda





Re: Monitoring Apache Solr

2016-08-30 Thread vrindavda
Hi Hardika,

To stop/restart Solr you can try exploring monit (there is a Solr-specific
monit recipe as well); it is a great tool to monitor your
services.
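As an illustration, a minimal monit check for Solr might look like the following (the paths, pidfile, and port are assumptions for a default install):

```text
# /etc/monit/conf.d/solr
check process solr with pidfile /var/solr/solr-8983.pid
  start program = "/opt/solr/bin/solr start" with timeout 60 seconds
  stop program  = "/opt/solr/bin/solr stop"
  if failed port 8983 protocol http then restart
```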

Thank you,
Vrinda Davda





Change password

2016-08-30 Thread vrindavda
Hi,

I have enabled SSL for Solr following the steps here.

Now I am trying to change -keypass and -storepass (to say "secret123" from
"secret") while generating a new .jks file, and then updating the same
password in /bin/solr.in.cmd.

But while starting Solr I get this error:


Waiting up to 30 to see Solr running on port 8983
java.lang.reflect.InvocationTargetException
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.eclipse.jetty.start.Main.invokeMain(Main.java:214)
        at org.eclipse.jetty.start.Main.start(Main.java:457)
        at org.eclipse.jetty.start.Main.main(Main.java:75)
Caused by: java.io.IOException: *Keystore was tampered with, or password was incorrect*
        at sun.security.provider.JavaKeyStore.engineLoad(JavaKeyStore.java:780)
        at sun.security.provider.JavaKeyStore$JKS.engineLoad(JavaKeyStore.java:56)
        at sun.security.provider.KeyStoreDelegator.engineLoad(KeyStoreDelegator.java:224)
        at sun.security.provider.JavaKeyStore$DualFormatJKS.engineLoad(JavaKeyStore.java:70)
        at java.security.KeyStore.load(KeyStore.java:1445)
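That error usually means the password Solr reads from solr.in.cmd no longer matches the keystore. Rather than regenerating the .jks file, both passwords can typically be changed in place with keytool (a sketch; the keystore path and "solr-ssl" alias are assumptions based on the SSL guide), after which SOLR_SSL_KEY_STORE_PASSWORD and SOLR_SSL_TRUST_STORE_PASSWORD must be updated to match:

```shell
# Change the keystore (store) password from "secret" to "secret123"
keytool -storepasswd -keystore etc/solr-ssl.keystore.jks -storepass secret -new secret123

# Change the key password for the "solr-ssl" alias to match
keytool -keypasswd -keystore etc/solr-ssl.keystore.jks -storepass secret123 \
        -alias solr-ssl -keypass secret -new secret123
```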





Re: Solr 6: Use facet with Streaming Expressions- LeftOuterJoin

2016-08-19 Thread vrindavda
Thanks again!

I will try this and follow up.





Re: Solr 6: Use facet with Streaming Expressions- LeftOuterJoin

2016-08-18 Thread vrindavda
I am not able to get count(*) for more than one field.





Re: Solr 6: Use facet with Streaming Expressions- LeftOuterJoin

2016-08-12 Thread vrindavda
Hey Joel,

Thanks for your quick response. I was able to merge documents using
outerHashJoin, but I am not able to use rollup() to get count(*) for
multiple fields, as we get using facets.

Please suggest whether the last option is to merge documents using atomic
updates and then use facets (or json.facet). Is there any other way to merge
documents permanently?

Thank you,
Vrinda Davda





Re: Streaming expressions malfunctioning

2016-08-05 Thread vrindavda
Hello,

I am looking for a similar use case. Would it be possible for you to share the
corrected syntax?





Solr 6: Use facet with Streaming Expressions- LeftOuterJoin

2016-08-05 Thread vrindavda
Hello,
I have two collections and need to join the results on uniqueIds.

I am able to do that with the Streaming Expressions leftOuterJoin. Is there any
way to use facets along with this?
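A sketch of the expression shape being discussed (the collection and field names are hypothetical): the join output is re-sorted on the grouping field and wrapped in rollup(), since rollup expects its input sorted on the over field; one rollup per field plays the role of a facet count:

```text
rollup(
  sort(
    leftOuterJoin(
      search(people, q="*:*", fl="personId,name",    sort="personId asc"),
      search(pets,   q="*:*", fl="personId,petName", sort="personId asc"),
      on="personId"
    ),
    by="name asc"
  ),
  over="name",
  count(*)
)
```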




