Re: Solr 4 to Solr7 migration DIH behavior change

2019-11-22 Thread Shashank Bellary
I haven't upgraded the MySQL driver to the latest version. Can you explain what 
the issue is here?
thanks


Re: Solr 4 to Solr7 migration DIH behavior change

2019-11-22 Thread Shashank Bellary
Yes, I'm on Java 8 and MySQL driver 5.1.13.

Thanks

On 11/22/19, 3:00 PM, "Jörn Franke"  wrote:

Note - This message originated from outside Care.com - Please use caution 
before opening attachments, clicking on links or sharing information.


Did you update the java version to 8? Did you upgrade the MySQL driver to 
the latest version?



This email is intended for the person(s) to whom it is addressed and may 
contain information that is PRIVILEGED or CONFIDENTIAL. Any unauthorized use, 
distribution, copying, or disclosure by any person other than the addressee(s) 
is strictly prohibited. If you have received this email in error, please notify 
the sender immediately by return email and delete the message and any 
attachments from your system.


Re: Solr 4 to Solr7 migration DIH behavior change

2019-11-22 Thread Jörn Franke
Did you update the Java version to 8? Did you upgrade the MySQL driver to the 
latest version?



Solr 4 to Solr7 migration DIH behavior change

2019-11-22 Thread Shashank Bellary


Hi folks,

I migrated from Solr 4 to 7.5 and I'm seeing an issue with the way DIH works. I 
use `JdbcDataSource`, and the config file is attached.

1) I started seeing an OutOfMemory error, since the MySQL JDBC driver has a 
known issue of not respecting `batchSize` (though Solr 4 didn't show this 
behavior). To work around it, I added `batchSize=-1`.

2) After adding that, I'm running into a "ResultSet closed" exception, shown 
below, while fetching the child entity:


getNext() failed for query ' SELECT REVIEW AS REVIEWS FROM 
SOLR_SITTER_SERVICE_PROFILE_REVIEWS WHERE SERVICE_PROFILE_ID = '17' ; 
':org.apache.solr.handler.dataimport.DataImportHandlerException: 
java.sql.SQLException: Operation not allowed after ResultSet closed
at 
org.apache.solr.handler.dataimport.DataImportHandlerException.wrapAndThrow(DataImportHandlerException.java:61)
at 
org.apache.solr.handler.dataimport.JdbcDataSource$ResultSetIterator.hasnext(JdbcDataSource.java:464)
at 
org.apache.solr.handler.dataimport.JdbcDataSource$ResultSetIterator$1.hasNext(JdbcDataSource.java:377)
at 
org.apache.solr.handler.dataimport.EntityProcessorBase.getNext(EntityProcessorBase.java:133)
at 
org.apache.solr.handler.dataimport.SqlEntityProcessor.nextRow(SqlEntityProcessor.java:75)
at 
org.apache.solr.handler.dataimport.EntityProcessorWrapper.nextRow(EntityProcessorWrapper.java:267)
at 
org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:476)
at 
org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:517)
at 
org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:415)
at org.apache.solr.handler.dataimport.DocBuilder.doFullDump(DocBuilder.java:33)
at org.apache.solr.handler.dataimport.DocBuilder.execute(DocBuilder.java:233)
at 
org.apache.solr.handler.dataimport.DataImporter.doFullImport(DataImporter.java:424)
at org.apache.solr.handler.dataimport.DataImporter.runCmd(DataImporter.java:483)
at 
org.apache.solr.handler.dataimport.DataImporter.lambda$runAsync$0(DataImporter.java:466)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.sql.SQLException: Operation not allowed after ResultSet closed
at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:1075)
at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:989)
at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:984)
at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:929)
at com.mysql.jdbc.ResultSetImpl.checkClosed(ResultSetImpl.java:794)
at com.mysql.jdbc.ResultSetImpl.next(ResultSetImpl.java:7145)
at com.mysql.jdbc.StatementImpl.getMoreResults(StatementImpl.java:2078)
at com.mysql.jdbc.StatementImpl.getMoreResults(StatementImpl.java:2062)
at 
org.apache.solr.handler.dataimport.JdbcDataSource$ResultSetIterator.hasnext(JdbcDataSource.java:458)
... 13 more
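
For context, the parent/child entity setup that produces this looks roughly like the following in data-config.xml. This is a sketch only: the table and column names come from the query above, while the connection details and the parent query are placeholders, not taken from the attached config.

```xml
<dataConfig>
  <!-- batchSize="-1" makes JdbcDataSource set the JDBC fetch size to
       Integer.MIN_VALUE, i.e. MySQL's row-by-row streaming mode. In that
       mode a connection can have only one open streaming ResultSet at a
       time, which matters when a child entity runs its own query. -->
  <dataSource type="JdbcDataSource" driver="com.mysql.jdbc.Driver"
              url="jdbc:mysql://dbhost/dbname" user="user" password="pass"
              batchSize="-1"/>
  <document>
    <entity name="profile" query="SELECT SERVICE_PROFILE_ID, ... FROM ...">
      <entity name="reviews"
              query="SELECT REVIEW AS REVIEWS
                     FROM SOLR_SITTER_SERVICE_PROFILE_REVIEWS
                     WHERE SERVICE_PROFILE_ID = '${profile.SERVICE_PROFILE_ID}'"/>
    </entity>
  </document>
</dataConfig>
```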


Is this a known issue? How do I fix it? Any help is greatly appreciated.



Thanks

Shashank



serviceprofile-data-import.xml
Description: serviceprofile-data-import.xml


FW: Solr 4 to Solr7 migration DIH behavior change

2019-11-22 Thread Shashank Bellary


Re: How to tell which core was used based on Json or XML response from Solr

2019-11-22 Thread David Hastings
I personally don't like PHP, but it may be the easiest way to do what
you need, assuming you have a basic web server.
Send your search query to PHP, and use $_GET or $_POST to read it into a
variable:
https://www.php.net/manual/en/reserved.variables.get.php

then send that to the Solr server in the same piece of PHP with curl:

https://phpenthusiast.com/blog/five-php-curl-examples

and return the raw result if you want. At the very least this hides the Solr
URL, and you can block the Solr port to outside IPs and allow only port 80
or whatever your web server is using.
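
A minimal sketch of that proxy, under stated assumptions: a script reachable at /search.php, Solr on localhost:8983, and a core named "mycore" are all illustrative placeholders, not details from this thread.

```php
<?php
// Hypothetical proxy: the browser calls this script, and only the web
// server talks to Solr, so the Solr URL never reaches the client.
$q = isset($_GET['q']) ? $_GET['q'] : '';
if ($q === '') {
    http_response_code(400);
    exit;
}

// Build the Solr request server-side; host, core, and params are
// placeholders to adjust for your setup.
$url = 'http://localhost:8983/solr/mycore/select?wt=json&rows=50&q='
     . urlencode($q);

$ch = curl_init($url);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true); // capture body instead of printing
$body = curl_exec($ch);
curl_close($ch);

header('Content-Type: application/json');
echo $body; // return the raw Solr response to the browser
```

With this in place the jQuery code would call /search.php?q=... instead of the Solr URL directly.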




Re: async BACKUP under Solr8.3

2019-11-22 Thread Erick Erickson
Hmmm, any idea how/why it was set to zero? I just looked at the Git history for 
that file and don’t see it ever being set to 0….

> On Nov 22, 2019, at 11:19 AM, Oakley, Craig (NIH/NLM/NCBI) [C] 
>  wrote:
> 
> For the record, the solution was to edit solr.xml changing
> 
> ${socketTimeout:0}
> 
> to
> 
> ${socketTimeout:60}
> 

Re: How to tell which core was used based on Json or XML response from Solr

2019-11-22 Thread rhys J
On Fri, Nov 22, 2019 at 1:39 PM David Hastings 
wrote:

> 2 things (maybe 3):
> 1. Don't have this code facing a client that's not you; otherwise anyone
> could view the source and see where the Solr server is, which means they
> can destroy your index or do anything they want. Put at the very least a
> simple API/front end between the JavaScript page for the user and the
> Solr server.
>

Is there a way I can fix this?


> 2. I don't think there is a way; you would be better off indexing an
> indicator of sorts into your documents.
>

Oh, this is a good idea.

Thanks!

> 3. The jQuery in your example already has the core identified, so not sure
> why the receiving JavaScript wouldn't be able to read that variable unless
> I'm missing something.
>

There's another function, on_data, that is called by the URL, and it does not
receive any indication of which core was used, only the response from the URL.

Thanks,

Rhys


Re: How to tell which core was used based on Json or XML response from Solr

2019-11-22 Thread David Hastings
2 things (maybe 3):
1. Don't have this code facing a client that's not you; otherwise anyone
could view the source and see where the Solr server is, which means they
can destroy your index or do anything they want. Put at the very least a
simple API/front end between the JavaScript page for the user and the
Solr server.
2. I don't think there is a way; you would be better off indexing an
indicator of sorts into your documents.
3. The jQuery in your example already has the core identified, so not sure
why the receiving JavaScript wouldn't be able to read that variable unless
I'm missing something.
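
For what it's worth, point 3 can be made concrete by capturing the core name in a closure rather than relying on a single global on_data callback. This is a sketch; makeHandler and the core name 'dbtr' are illustrative, not from the original code.

```javascript
// Build the response handler per request, so it knows which core the
// request was sent to in addition to the response data.
function makeHandler(core) {
  return function (data) {
    // 'core' is captured from the enclosing call; the handler can now
    // report which core produced these docs.
    return { core: core, docs: data.response.docs };
  };
}

// With jQuery this would be passed as the success callback, e.g.
// jQuery_3_4_1.getJSON(url, makeHandler(core)); here it is called
// directly with a fake response to show the shape of the result.
var handler = makeHandler('dbtr');
var result = handler({ response: { docs: [{ debtor_id: 17 }] } });
```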



How to tell which core was used based on Json or XML response from Solr

2019-11-22 Thread rhys J
I'm implementing an autocomplete search box for Solr.

I'm using JSON as my response style, and this is the jQuery code:


 var url = 'http://10.40.10.14:8983/solr/' + core + '/select/?q=' + queryField +
     query + '=2.2=true=0=50=on=json=?=on_data';

 jQuery_3_4_1.getJSON(url);

___

function on_data(data)
{
  var docs = data.response.docs;
  jQuery_3_4_1.each(docs, function(i, item) {

    var trLink = '<tr><td><a href="#" onclick="local_goto_dbtr(' + item.debtor_id + '); return true;">'
      + item.debtor_id + '</a></td>';

    trLink += '<td>' + item.name1 + '</td>';
    trLink += '<td>' + item.dl1 + '</td>';
    trLink += '</tr>';

    jQuery_3_4_1('#resultsTable').prepend(jQuery_3_4_1(trLink));
  });
}

The jQuery_3_4_1 variable replaces $ because I needed to have two
different versions of jQuery running in the same document.

I'd like to know if there's something I'm missing that will indicate which
core I've used in Solr, based on the response.

Thanks,

Rhys


RE: async BACKUP under Solr8.3

2019-11-22 Thread Oakley, Craig (NIH/NLM/NCBI) [C]
For the record, the solution was to edit solr.xml changing

${socketTimeout:0}

to

${socketTimeout:60}
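
For reference, that property typically lives inside the <shardHandlerFactory> element of solr.xml, with the value after the colon acting as the default when no socketTimeout system property is set. This is a sketch of the surrounding element only; the rest of the factory configuration is elided, and the timeout value (in milliseconds, assumed) is the one from the message above.

```xml
<shardHandlerFactory name="shardHandlerFactory"
                     class="HttpShardHandlerFactory">
  <!-- default after the colon applies when -DsocketTimeout is not set -->
  <int name="socketTimeout">${socketTimeout:60}</int>
</shardHandlerFactory>
```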

> ~[?:?]
> at
> org.apache.solr.client.solrj.impl.Http2SolrClient.request(Http2SolrClient.java:399)
> ~[?:?]
> ... 12 more
>
> If I remove the async=bug, then it works
>
> In fact, the backup looks successful, but REQUESTSTATUS does not recognize
> it as such
>
> I notice that the 3:30am 11/4/19 Email to solr-user@lucene.apache.org
> mentions in Solr 8.3.0 Release Highlights "Fix for SPLITSHARD (async) with
> failures in underlying sub-operations can result in data loss"
>
> Did a fix to SPLITSHARD break BACKUP?
>
> Has anyone been successful running
> solr/admin/collections?action=BACKUP&async=requestname under Solr8.3?
>
> Thanks
>


-- 
Sincerely yours
Mikhail Khludnev
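For readers following this thread, the submit-and-poll flow being discussed (fire an async BACKUP, then call REQUESTSTATUS until the task completes or fails) can be sketched as below. This is only an illustration: host, port, collection and location are placeholders from the report above, and the actual HTTP call and sleep loop are left out so the helpers stay self-contained.

```python
from urllib.parse import urlencode

def backup_url(base, collection, name, location, async_id):
    """Build the Collections API async BACKUP call (host/paths are placeholders)."""
    params = {"action": "BACKUP", "collection": collection,
              "name": name, "location": location, "async": async_id}
    return f"{base}/solr/admin/collections?{urlencode(params)}"

def status_url(base, request_id):
    """Build the matching REQUESTSTATUS call for the async request id."""
    params = {"action": "REQUESTSTATUS", "requestid": request_id}
    return f"{base}/solr/admin/collections?{urlencode(params)}"

def request_state(status_response):
    """Extract the task state ('completed', 'failed', 'running', ...)
    from a parsed REQUESTSTATUS JSON response."""
    return status_response.get("status", {}).get("state")
```

With `async=bug` as in the report above, the failure shows up as state `failed` alongside a message like `found [bug] in failed tasks`.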


Re: Solr process takes several minutes before accepting commands after restart

2019-11-22 Thread Erick Erickson
right, doesn’t sound likely that it’s rebuilding suggesters/spellcheckers. 
Throwing a profiler at it to see where it’s spending the time might be easiest.

> On Nov 22, 2019, at 3:45 AM, Koen De Groote  
> wrote:
> 
> Thanks for the link, that's some nice documentation and guidelines.
> 
> I'll probably have time to test again next week, but in the meantime, it
> does make me scratch my head.
> 
> I deleted the root folder and re-instrumented the environment.
> 
> So, there's nothing there. Nothing.
> 
> The text says:
> 
> "Do you really want to re-read, decompress and add the field from *every*
> document to the suggester *every time you start Solr!* Likely not, but you
> can if you want to."
> 
> However, there shouldn't be any documents, since it all got deleted and an
> empty database got set up to facilitate the restore. The slowness happens
> before the restore. With the fresh install.
> 
> But yeah, I'll try it out, thanks for the links.
> 
> 
> 
> 
> On Thu, Nov 21, 2019 at 9:41 PM Dave  wrote:
> 
>> https://lucidworks.com/post/solr-suggester/
>> 
>> You must set buildonstartup to false, the default is true. Try it
>> 
>>> On Nov 21, 2019, at 3:21 PM, Koen De Groote 
>> wrote:
>>> 
>>> Erick:
>>> 
>>> No suggesters. There is 1 spellchecker for
>>> 
>>> text_general
>>> 
>>> But no buildOnCommit or buildOnStartup setting mentioned anywhere.
>>> 
>>> That being said, the point in time at which this occurs, the database is
>>> guaranteed to be empty, as the data folders had previously been deleted
>> and
>>> recreated empty. Then the docker container is restarted and this behavior
>>> is observed.
>>> 
>>> Long shot, but even if Solr is getting data from zookeeper telling of
>> file
>>> locations and checking for the existence of these files... that should be
>>> pretty fast, I'd think.
>>> 
>>> This is really disturbing. I know what to expect when recovering now, but
>>> someone doing this on a live environment that has to be up again ASAP is
>>> probably going to be sweating bullets.
>>> 
>>> 
>>> On Thu, Nov 21, 2019 at 2:45 PM Erick Erickson 
>>> wrote:
>>> 
 Koen:
 
 Do you have any spellcheckers or suggesters defined with buildOnCommit
>> or
 buildOnStartup set to “true”? Depending on the implementation, this may
 have to read the stored data for the field used in the
 suggester/spellchecker from _every_ document in your collection, which
>> can
 take many minutes. Even if your implementation in your config is
>> file-based
 it can still take a while.
 
 Shot in the dark….
 
 Erick
 
> On Nov 21, 2019, at 4:03 AM, Koen De Groote <
>> koen.degro...@limecraft.com>
 wrote:
> 
> The logs files showed a startup, printing of all the config options
>> that
> had been set, 1 or 2 commands that got executed and then nothing.
> 
> Sending the curl did not get shown in the logs files until after that
> period where Solr became unresponsive.
> 
> Service mesh, I don't think so? It's in a docker container, but that
> shouldn't be a problem, it usually never is.
> 
> 
> On Wed, Nov 20, 2019 at 10:42 AM Jörn Franke 
 wrote:
> 
>> Have you checked the log files of Solr?
>> 
>> 
>> Do you have a service mesh in-between? Could it be something at the
>> network layer/container orchestration  that is blocking requests for
 some
>> minutes?
>> 
>>> Am 20.11.2019 um 10:32 schrieb Koen De Groote <
>> koen.degro...@limecraft.com>:
>>> 
>>> Hello
>>> 
>>> I was testing some backup/restore scenarios.
>>> 
>>> 1 of them is Solr7.6 in a docker container(7.6.0-slim), set up as
>>> SolrCloud, with zookeeper.
>>> 
>>> The steps are as follows:
>>> 
>>> 1. Manually delete the data folder.
>>> 2. Restart the container. The process is now in error mode,
>> complaining
>>> that it cannot find the cores.
>>> 3. Fix the install, meaning create new data folders, which are empty
>> at
>>> this point.
>>> 4. Restart the container again, to pick up the empty folders and not
>> be
>> in
>>> error anymore.
>>> 5. Perform the restore
>>> 6. Check if everything is available again
>>> 
>>> The problem is between step 4 and 5. After step 4, it takes several
>> minutes
>>> before solr actually responds to curl commands.
>>> 
>>> Once responsive, the restore happened just fine. But it's very
 stressful
>> in
>>> a situation where you have to restore a production environment and
>> the
>>> process just doesn't respond for 5-10 minutes.
>>> 
>>> We're talking about 20GB of data here, so not very much, but not
>> little
>>> either.
>>> 
>>> Is it normal that it takes so long before solr responds? If not, what
>>> should I look at in order to find the cause?
>>> 
>>> I have asked this before recently, though the wording was confusing.
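For reference, the buildOnStartup/buildOnCommit flags Erick mentions are set per suggester in solrconfig.xml. The snippet below is illustrative only — the suggester name, field and lookup implementation are assumptions, not taken from Koen's actual config:

```xml
<searchComponent name="suggest" class="solr.SuggestComponent">
  <lst name="suggester">
    <str name="name">mySuggester</str>
    <str name="lookupImpl">FuzzyLookupFactory</str>
    <str name="dictionaryImpl">DocumentDictionaryFactory</str>
    <str name="field">title</str> <!-- assumed field name -->
    <str name="suggestAnalyzerFieldType">text_general</str>
    <!-- false avoids re-reading every stored document on startup/commit -->
    <str name="buildOnStartup">false</str>
    <str name="buildOnCommit">false</str>
  </lst>
</searchComponent>
```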
 

Re: Odd Edge Case for SpellCheck

2019-11-22 Thread Jörn Franke
Stemming involved ?

> Am 22.11.2019 um 14:23 schrieb Moyer, Brett :
> 
> Hello, we have spellcheck running, using the index as the dictionary. An odd 
> use case came up today and I wanted to get your thoughts and see if what we 
> determined is correct. Use case: User sends a query for q=brokerage, 
> spellcheck fires and returns "brokerage". Looking at the output I see that 
> solr must have pulled the root word "brokage" then spellcheck said hey I need 
> to fix that. Is that correct? There's no issue, it's just an unexpected 
> outcome. Thanks!
> 
> "q":"brokerage",
> "spellcheck":{
>"suggestions":
>[
>  "brokage",{
>"numFound":1,
>"startOffset":0,
>"endOffset":9,
>"suggestion":["brokerage"]}],
>"collations":
>[
>  "collation","brokerage"]}}
> 
> Brett Moyer
> *
> This e-mail may contain confidential or privileged information.
> If you are not the intended recipient, please notify the sender immediately 
> and then delete it.
> 
> TIAA
> *


Odd Edge Case for SpellCheck

2019-11-22 Thread Moyer, Brett
Hello, we have spellcheck running, using the index as the dictionary. An odd 
use case came up today and I wanted to get your thoughts and see if what we 
determined is correct. Use case: User sends a query for q=brokerage, spellcheck 
fires and returns "brokerage". Looking at the output I see that solr must have 
pulled the root word "brokage" then spellcheck said hey I need to fix that. Is 
that correct? There's no issue, it's just an unexpected outcome. Thanks!

"q":"brokerage",
"spellcheck":{
"suggestions":
[
  "brokage",{
"numFound":1,
"startOffset":0,
"endOffset":9,
"suggestion":["brokerage"]}],
"collations":
[
  "collation","brokerage"]}}

Brett Moyer
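The response pasted above was mangled by the archive, but with Solr's default `json.nl` flattening, `suggestions` and `collations` come back as flat `[key, value, …]` arrays. Assuming that output shape, a small helper can turn them into something easier to inspect (the sample data is reconstructed from this thread, not verbatim):

```python
def pairs(flat):
    """Solr renders spellcheck NamedLists as flat [key, value, ...] arrays
    with the default json.nl setting. Convert one into a dict."""
    return dict(zip(flat[0::2], flat[1::2]))

def corrections(spellcheck):
    """Map each misspelled (or stemmed) term to its suggested replacements."""
    sugg = pairs(spellcheck.get("suggestions", []))
    return {term: info["suggestion"] for term, info in sugg.items()}
```

For the query in this thread, this maps the stemmed term `brokage` back to the suggested `brokerage`.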


Re: Solr process takes several minutes before accepting commands after restart

2019-11-22 Thread Koen De Groote
Thanks for the link, that's some nice documentation and guidelines.

I'll probably have time to test again next week, but in the meantime, it
does make me scratch my head.

I deleted the root folder and re-instrumented the environment.

So, there's nothing there. Nothing.

The text says:

"Do you really want to re-read, decompress and add the field from *every*
document to the suggester *every time you start Solr!* Likely not, but you
can if you want to."

However, there shouldn't be any documents, since it all got deleted and an
empty database got set up to facilitate the restore. The slowness happens
before the restore. With the fresh install.

But yeah, I'll try it out, thanks for the links.




On Thu, Nov 21, 2019 at 9:41 PM Dave  wrote:

> https://lucidworks.com/post/solr-suggester/
>
> You must set buildonstartup to false, the default is true. Try it
>
> > On Nov 21, 2019, at 3:21 PM, Koen De Groote 
> wrote:
> >
> > Erick:
> >
> > No suggesters. There is 1 spellchecker for
> >
> > text_general
> >
> > But no buildOnCommit or buildOnStartup setting mentioned anywhere.
> >
> > That being said, the point in time at which this occurs, the database is
> > guaranteed to be empty, as the data folders had previously been deleted
> and
> > recreated empty. Then the docker container is restarted and this behavior
> > is observed.
> >
> > Long shot, but even if Solr is getting data from zookeeper telling of
> file
> > locations and checking for the existence of these files... that should be
> > pretty fast, I'd think.
> >
> > This is really disturbing. I know what to expect when recovering now, but
> > someone doing this on a live environment that has to be up again ASAP is
> > probably going to be sweating bullets.
> >
> >
> > On Thu, Nov 21, 2019 at 2:45 PM Erick Erickson 
> > wrote:
> >
> >> Koen:
> >>
> >> Do you have any spellcheckers or suggesters defined with buildOnCommit
> or
> >> buildOnStartup set to “true”? Depending on the implementation, this may
> >> have to read the stored data for the field used in the
> >> suggester/spellchecker from _every_ document in your collection, which
> can
> >> take many minutes. Even if your implementation in your config is
> file-based
> >> it can still take a while.
> >>
> >> Shot in the dark….
> >>
> >> Erick
> >>
> >>> On Nov 21, 2019, at 4:03 AM, Koen De Groote <
> koen.degro...@limecraft.com>
> >> wrote:
> >>>
> >>> The logs files showed a startup, printing of all the config options
> that
> >>> had been set, 1 or 2 commands that got executed and then nothing.
> >>>
> >>> Sending the curl did not get shown in the logs files until after that
> >>> period where Solr became unresponsive.
> >>>
> >>> Service mesh, I don't think so? It's in a docker container, but that
> >>> shouldn't be a problem, it usually never is.
> >>>
> >>>
> >>> On Wed, Nov 20, 2019 at 10:42 AM Jörn Franke 
> >> wrote:
> >>>
>  Have you checked the log files of Solr?
> 
> 
>  Do you have a service mesh in-between? Could it be something at the
>  network layer/container orchestration  that is blocking requests for
> >> some
>  minutes?
> 
> > Am 20.11.2019 um 10:32 schrieb Koen De Groote <
>  koen.degro...@limecraft.com>:
> >
> > Hello
> >
> > I was testing some backup/restore scenarios.
> >
> > 1 of them is Solr7.6 in a docker container(7.6.0-slim), set up as
> > SolrCloud, with zookeeper.
> >
> > The steps are as follows:
> >
> > 1. Manually delete the data folder.
> > 2. Restart the container. The process is now in error mode,
> complaining
> > that it cannot find the cores.
> > 3. Fix the install, meaning create new data folders, which are empty
> at
> > this point.
> > 4. Restart the container again, to pick up the empty folders and not
> be
>  in
> > error anymore.
> > 5. Perform the restore
> > 6. Check if everything is available again
> >
> > The problem is between step 4 and 5. After step 4, it takes several
>  minutes
> > before solr actually responds to curl commands.
> >
> > Once responsive, the restore happened just fine. But it's very
> >> stressful
>  in
> > a situation where you have to restore a production environment and
> the
> > process just doesn't respond for 5-10 minutes.
> >
> > We're talking about 20GB of data here, so not very much, but not
> little
> > either.
> >
> > Is it normal that it takes so long before solr responds? If not, what
> > should I look at in order to find the cause?
> >
> > I have asked this before recently, though the wording was confusing.
> >> This
> > should be clearer.
> >
> > Kind regards,
> > Koen De Groote
> 
> >>
> >>
>
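Since the complaint in this thread is the multi-minute gap before Solr answers any request after a restart, a restore script can at least make the wait explicit instead of hammering curl by hand. A minimal sketch follows — the probe is a placeholder for whatever check you prefer, e.g. an HTTP GET against `/solr/admin/info/system` that returns True on a 200:

```python
import time

def wait_until_ready(probe, timeout_s=600, interval_s=5,
                     clock=time.monotonic, sleep=time.sleep):
    """Poll probe() until it returns True or timeout_s elapses.
    Returns the number of attempts made; raises TimeoutError on timeout.
    clock/sleep are injectable so the loop can be tested without waiting."""
    deadline = clock() + timeout_s
    attempts = 0
    while True:
        attempts += 1
        if probe():
            return attempts
        if clock() >= deadline:
            raise TimeoutError(f"Solr not responsive after {timeout_s}s")
        sleep(interval_s)
```

Scripting the restore to run only after this returns would remove the guesswork during the stressful 5-10 minute window described above.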


Re: Nested SubQuery

2019-11-22 Thread Mikhail Khludnev
Hello,
You may try to add =q,row.id and check logs