RELOAD collection via HTTP timeout issue solr cloud v4.5.1

2013-12-17 Thread ade-b
Hi

We are getting connection timeout errors when trying to execute the RELOAD
command against a collection in SOLR Cloud v4.5.1. 

We are issuing the command using curl on the solr server instance. The SOLR
server seems to be functional in every other way. We can issue a reload via
the admin dashboard successfully.
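For reference, the request we issue is of this general shape (the host, port
and collection name below are placeholders rather than our real values):

curl "http://localhost:8080/solr/admin/collections?action=RELOAD&name=mycollection"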

We have tried restarting the SOLR server and still get the same timeout on
issuing the curl command.

We can telnet to the port SOLR is running on.

The response from the curl command is:

<?xml version="1.0" encoding="UTF-8"?>
<response>
  <lst name="responseHeader">
    <int name="status">500</int>
    <int name="QTime">60012</int>
  </lst>
  <lst name="error">
    <str name="msg">reloadcollection the collection time out:60s</str>
    <str name="trace">org.apache.solr.common.SolrException: reloadcollection the collection time out:60s
      at org.apache.solr.handler.admin.CollectionsHandler.handleResponse(CollectionsHandler.java:199)
      at org.apache.solr.handler.admin.CollectionsHandler.handleResponse(CollectionsHandler.java:180)
      at org.apache.solr.handler.admin.CollectionsHandler.handleReloadAction(CollectionsHandler.java:221)
      at org.apache.solr.handler.admin.CollectionsHandler.handleRequestBody(CollectionsHandler.java:141)
      at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
      at org.apache.solr.servlet.SolrDispatchFilter.handleAdminRequest(SolrDispatchFilter.java:655)
      at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:255)
      at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:195)
      at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:243)
      at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:210)
      at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:222)
      at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:123)
      at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:171)
      at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:100)
      at org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:953)
      at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:118)
      at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:408)
      at org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:1041)
      at org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:603)
      at org.apache.tomcat.util.net.JIoEndpoint$SocketProcessor.run(JIoEndpoint.java:310)
      at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
      at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
      at java.lang.Thread.run(Unknown Source)
    </str>
    <int name="code">500</int>
  </lst>
</response>


Any ideas?

Thanks
Ade





useColdSearcher in SolrCloud config

2013-11-22 Thread ade-b
Hi

The definition of the useColdSearcher config element in solrconfig.xml is:

If a search request comes in and there is no current registered searcher,
then immediately register the still warming searcher and use it.  If false
then all requests will block until the first searcher is done warming.
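For context, useColdSearcher lives in the <query> section of solrconfig.xml.
A minimal sketch of where it sits (surrounding settings omitted):

<query>
  <!-- cache and warming settings omitted -->
  <useColdSearcher>false</useColdSearcher>
</query>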

By the term 'block', I assume SOLR returns a non-200 response to requests.
Does anybody know the exact response code returned when the server is
blocking requests?

If a new SOLR server is introduced into an existing array of SOLR servers
(in a SOLR Cloud setup), it will sync its index from the leader. To save you
having to specify warm-up queries in the solrconfig.xml file for first
searchers, would/could the new server not auto-warm its caches from the
caches of an existing server?
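For clarity, by warm-up queries for first searchers I mean the usual
firstSearcher listener in solrconfig.xml, along these lines (the query and
sort here are just made-up examples):

<listener event="firstSearcher" class="solr.QuerySenderListener">
  <arr name="queries">
    <lst>
      <str name="q">popular products</str>
      <str name="sort">price asc</str>
    </lst>
  </arr>
</listener>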

Thanks
Ade 





Adding a server to an existing SOLR cloud cluster

2013-11-11 Thread ade-b
Hi 

We have a SOLRCloud cluster of 3 solr servers (v4.5.0 running under tomcat)
with 1 shard. We added a new SOLR server (v4.5.1) by simply starting tomcat
and pointing it at the zookeeper ensemble used by the existing cluster. My
understanding was that this new server would handshake with zookeeper and
add itself as a replica to the existing cluster.

What has actually happened is that the server is in zookeeper's live_nodes,
but is not in the clusterstate.json file. It also does not have a
CORE/collection associated with it.

Any ideas? I assume I am missing a step. Do I have to manually create the
core on the new server?
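If manual creation is indeed the missing step, I'm guessing it would be a
Core Admin CREATE call along these lines (host and names below are
placeholders for our setup):

curl "http://newserver:8080/solr/admin/cores?action=CREATE&name=mycollection_shard1_replica2&collection=mycollection&shard=shard1"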


Cheers
Ade





Re: Adding a server to an existing SOLR cloud cluster

2013-11-11 Thread ade-b
Thanks.

If I understand what you are saying, it should automatically register itself
with the existing cluster if we start SOLR with the correct command line
options. We tried adding the numShards option to the command line but still
get the same outcome.

We start the new SOLR server using:

/usr/bin/java \
  -Djava.util.logging.config.file=/mnt/ephemeral/apache-tomcat-7.0.47/conf/logging.properties \
  -Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager \
  -server -Xms256m -Xmx1024m -XX:+DisableExplicitGC \
  -Dsolr.solr.home=/mnt/ephemeral/solr -Dport=8080 -DhostContext=solr \
  -DnumShards=1 -DzkClientTimeout=15000 -DzkHost=<zk ip address> \
  -Djava.endorsed.dirs=/mnt/ephemeral/apache-tomcat-7.0.47/endorsed \
  -classpath /mnt/ephemeral/apache-tomcat-7.0.47/bin/bootstrap.jar:/mnt/ephemeral/apache-tomcat-7.0.47/bin/tomcat-juli.jar \
  -Dcatalina.base=/mnt/ephemeral/apache-tomcat-7.0.47 \
  -Dcatalina.home=/mnt/ephemeral/apache-tomcat-7.0.47 \
  -Djava.io.tmpdir=/mnt/ephemeral/apache-tomcat-7.0.47/temp \
  org.apache.catalina.startup.Bootstrap start

Regards
Ade





SOLRCloud - Small Index - Full Index Strategy

2013-11-06 Thread ade-b
Hi

We are moving from running the Endeca search engine to SOLRcloud. We have a
small index compared to a lot of companies (approx 150,000 records).

To keep things simple in the first release to the production environment, we
are considering running a full index every 15 minutes (i.e. we are not doing
deltas). In our performance environment the full index currently takes
approximately 11 minutes to run (approx 4 minutes of this is spent writing
to SOLR).

Naturally we see spikes in CPU on the SOLR servers during this 4 minute
period, but it does not significantly impact the response times/throughput
to the clients that are reading from SOLR.

I have attached a CPU graph that shows the spikes. Where you see the CPU
hovering around 20%, that is when we are running load tests.

My question is, does anybody else use SOLRcloud in this way (i.e. similar
index size/full index strategy) or does anybody see issues with this setup?

http://lucene.472066.n3.nabble.com/file/n4099570/cpu.jpg 

Thanks
Ade





Re: How to dynamically add geo fields to a query using a request handler

2013-07-26 Thread ade-b
I just realised that you can use the 'appends' list in the request handler
config (in solrconfig.xml). By setting this, any additional fields you add
via the solrj API are appended rather than replacing the configured list.
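For anyone else hitting this, the handler config ends up looking roughly like
the sketch below (the handler name is illustrative; createdDate is the field
from the earlier example):

<requestHandler name="/ourHandler" class="solr.SearchHandler">
  <lst name="appends">
    <str name="fl">createdDate</str>
  </lst>
</requestHandler>

With fl under appends rather than defaults, an extra fl such as
distance:geodist() sent from solrj is combined with the configured one
instead of replacing it.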

Thanks





Requests Per Second - All request handlers

2013-07-26 Thread ade-b
Hi

Is there somewhere in the stats page (e.g.
http://localhost:8983/solr/admin/mbeans?stats=true) that has the stats for
all of the request handlers combined?

I have a lot of request handlers that each have their individual stats, but
for a bird's-eye view of performance it would be good to get a combined view.

Are the stats registered under the name /admin/mbeans a candidate, for
example?
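I believe the same endpoint can also be filtered by category, e.g.

http://localhost:8983/solr/admin/mbeans?stats=true&cat=QUERYHANDLER

but as far as I can tell that still lists each handler separately rather
than giving a combined figure.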

Thanks
Ade 





How to dynamically add geo fields to a query using a request handler

2013-06-19 Thread ade-b
Hi

We have a request handler defined in solrconfig.xml that specifies a list of
fields to return for the request using the fl parameter.

E.g. <str name="fl">createdDate</str>

When constructing a query using solrj that uses this request handler, we
want to conditionally add the geo spatial fields that will tell us the
distance of a record in the solr index from a given location. Currently we
add this to the query by specifying 

solrQuery.set("fl", "*,distance:geodist()");

This has the effect of returning all fields for the record - not those
specified in the request handler. I'm assuming this is because the * in the
solrQuery.set call overrides the fields statically defined in the request
handler.

I have tried to add the geodist property via the solrQuery.addField()
method, but that complains that it is not a valid field - maybe I used it
incorrectly?
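For clarity, a rough sketch of the two attempts (the handler name, point and
spatial field below are placeholders for our real ones):

import org.apache.solr.client.solrj.SolrQuery;

SolrQuery query = new SolrQuery("*:*");
query.set("qt", "/ourHandler");        // placeholder request handler
query.set("pt", "53.35,-6.26");        // example point for geodist()
query.set("sfield", "location");       // assumed spatial field name
// attempt 1: this replaces any fl configured in the handler
query.set("fl", "*,distance:geodist()");
// attempt 2 (instead of attempt 1): append a second fl value
// query.addField("distance:geodist()");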

Has anybody any ideas how to achieve this?

Thanks
Ade








Garbled data in response - reading from mySQL database

2009-10-03 Thread Ade B

Hi 

I am using the DIH to import data from a mySQL database. Everything was fine
in that I could index the data, and search the index and return the correct
results. However, I have just changed my database-config.xml file to add the
primary key value of a table to a field composed of other values. For
example in the entity definition below from the database-config.xml file, I
have changed

concat(i.name,'|',ih.name) to concat(i.name,'|',ih.name,'|',i.id).


<entity name="ingredient"
        query="select concat(i.name,'|',ih.name,'|',i.id) as name,
               concat(i.quantity,'|',ih.name,'|',i.id) as quantity,
               concat(i.unit,'|',ih.name,'|',i.id) as unit
               from ingredientheader ih, ingredient i
               where ih.id = i.ingredientHeader_id and ih.recipe_id='${recipe.id}'"
        deltaQuery="select ih.recipe_id from ingredientheader ih, ingredient i
                    where ih.lastModified > '${dataimporter.last_index_time}'
                    or i.lastModified > '${dataimporter.last_index_time}'"
        parentDeltaQuery="select id from recipe where id = '${ingredientheader.recipe_id}'">
  <field name="ingredientName" column="name"/>
  <field name="ingredientUnit" column="unit"/>
  <field name="ingredientQuantity" column="quantity"/>
</entity>

The query runs correctly in MySQL and returns the correct data. I can still
index the data without error, but the response now contains garbage for
these fields.

Example snippet of response is:

<arr name="ingredientName">
  <str>[...@1f759bf</str>
</arr>
<arr name="ingredientQuantity">
  <str>[...@2513d0</str>
</arr>
<arr name="ingredientUnit">
  <str>[...@1a82c58</str>
</arr>

Note: I am using a nightly build of Solr 1.4 from 26/9/2009.

Any ideas?

Thanks
Ade




Re: Garbled data in response - reading from mySQL database

2009-10-03 Thread Ade B

Perfect, thank you.

Ade


