Deleted Collections not updated in Zookeeper

2014-09-08 Thread RadhaJayalakshmi
Hi,
Issue in brief:
I am facing a strange issue where collections that have been deleted from
Solr still have references in ZooKeeper, and because of this the Solr Cloud
console still shows the deleted collections in the down state.

Issue in Detail:
I am using Solr 4.5.1 and ZooKeeper 3.4.5, running a SolrCloud of 3 nodes
on one physical box. On the same box, I am also running a ZooKeeper
ensemble of 3 nodes.

Now, as part of my application, every week I create a new index
(collection) with the new data, and that index becomes the LIVE index for
searches. Old indexes (collections) are deleted periodically from the Solr
server, using curl and the DELETE action of /admin/collections (Collections
API).
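The delete call is along these lines (host, port and collection name here
are placeholders):

    curl "http://localhost:7031/solr/admin/collections?action=DELETE&name=old_weekly_collection"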

So far, it is working as expected.
But the problem I am facing is that the indexes (collections) that get
deleted from the Solr server periodically still have references in
ZooKeeper.
Because the references exist in ZooKeeper, I can still see those deleted
collections in the Solr Cloud console, highlighted in orange (which means
the down state).
Over time, I can see many such deleted collections in the Solr Cloud
console. I don't know how to get rid of this.

Even restarting Solr and ZooKeeper does not give any relief.

One more thing: the deleted collections don't have any reference left in
the SOLR HOME folder.
The stale reference sits only in ZooKeeper.
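For what it's worth, the stale state can be inspected directly with
ZooKeeper's shell (paths as used by Solr 4.x; this is only a way to look
at the problem, not a fix):

    $ zkCli.sh -server localhost:2181
    [zk: localhost:2181] ls /collections
    [zk: localhost:2181] get /clusterstate.json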

Expecting a reply on this issue.

Thanks
Radha






Re: Slow QTimes - 5 seconds for Small sized Collections

2014-06-27 Thread RadhaJayalakshmi
Thanks for all of your responses - did some more research - and here is an
observation:
I am seeing an inconsistency between the QTime in the SolrQueryResponse
object returned to the client app and the value of the QTime printed in
Solr.log.

Here is one specific instance:
Value of QTime in the SolrQueryResponse object, as seen by the client app:
5023 ms
Value of QTime printed in Solr.log: 6 ms

Why is there such a huge inconsistency?

Have the following theory - please help validate:
1. The SolrQuery (via SolrJ) from the client app hits a Solr Tomcat node
(let's call this Node 1) that does NOT contain the shard where the data
resides.
        a) N.B. - I use a document co-location strategy so that documents
belonging to a single customer reside in a single shard; I use shard.keys
in the query so that searches are limited to a single shard.
        b) This means my search queries pass through a maximum of 2 nodes
in the cluster.
2. Node 1 forwards the search request to the correct node (let's call this
Node 2) that contains the shard that needs to be looked up.
        a) For some reason, Node 1 takes too long to send the request to
Node 2 (maybe a firewall or network issue? How do I even find out? Should
I see anything in the logs if such an issue were happening?)
        b) This delay causes the request to reach Node 2 with a 5-second
delay.
        c) Node 2 executes the query in 6 ms and returns the response to
Node 1 immediately.
3. I am also noticing the following:
        a) The absolute time when Node 2 logs the QTime in Solr.log and
the absolute time when the client app prints the QTime from the
SolrQueryResponse more or less match - there is a difference of only a few
milliseconds.
        b) Does this prove beyond doubt that the query reached Node 2 with
a delay of 5 seconds?
        c) The one thing I don't understand: if the query request took 5
seconds to get from Node 1 to Node 2, how is the response from Node 2 sent
back to Node 1 (and finally to the client app) with NO further delay
(assuming there was a network issue)? Somehow the delay seems to affect
only the request leg.

One final question:
1. In a SolrCloud setup, does the QTime in the SolrQueryResponse include
both the query execution time and the time taken for network communication
between Solr nodes when the query arrives at the wrong shard?
        a) This is the only way I can explain seeing different values of
QTime in Solr.log and at the client app (via the QTime in the
SolrQueryResponse).
        b) A lot of documentation states that QTime measures pure query
execution time - but this is probably true only in a non-SolrCloud setup.
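For reference, this is roughly how I measure the client-side time against
the server-reported QTime (a sketch - the ZooKeeper address, collection
name and shard key are placeholders):

    import org.apache.solr.client.solrj.SolrQuery;
    import org.apache.solr.client.solrj.impl.CloudSolrServer;
    import org.apache.solr.client.solrj.response.QueryResponse;

    public class QTimeCheck {
        public static void main(String[] args) throws Exception {
            CloudSolrServer server = new CloudSolrServer("zk1:2181,zk2:2181,zk3:2181");
            server.setDefaultCollection("collection1");

            SolrQuery query = new SolrQuery("*:*");
            query.set("shard.keys", "customerA!"); // co-location key: limits the search to one shard

            long start = System.currentTimeMillis();
            QueryResponse rsp = server.query(query);
            long wallClock = System.currentTimeMillis() - start;

            // server-reported QTime vs. what the client actually waited
            System.out.println("QTime=" + rsp.getQTime() + " ms, wall clock=" + wallClock + " ms");
            server.shutdown();
        }
    }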

Appreciate additional inputs on this subject.






Slow QTimes - 5 seconds for Small sized Collections

2014-06-24 Thread RadhaJayalakshmi
I am running Solr 4.5.1. Here is how my setup looks:

I have 2 modest-sized collections:
        Collection 1 - 2 shards, 3 replicas (size of Shard 1: 115 MB, size
of Shard 2: 55 MB)
        Collection 2 - 2 shards, 3 replicas (size of Shard 1: 3.5 GB, size
of Shard 2: 1 GB)
These two collections are distributed across:
        6 Tomcat nodes set up on 3 VMs (2 nodes per VM)
        Each of the 6 Tomcat nodes has an Xms/Xmx setting of 2 GB
        Each of the 3 VMs has 32 GB of physical memory (RAM)

As you can see, my collections are pretty small. This is actually a test
environment (NOT production), yet my users (only a handful of testers) are
complaining of sporadic performance issues with search.

Here are my observations from the application logs:
1) Out of 200 sample searches across both collections, 13 requests are
slow (3 slow responses on Collection 1 and 10 slow responses on
Collection 2).

2) When things run fast, they are really fast (QTimes of 25-100
milliseconds) - but when things are slow, the QTime consistently hovers
around the 5-second (5000 ms) mark. I am seeing responses on the order of
5024, 5094, 5035 ms - as though something just hung for 5 seconds. I am
observing this 5-second delay on both collections, which I feel is unusual
because they contain very different data sets. I am unable to figure out
what causes the QTime to be so consistent around the 5-second mark.

3) I build my index only once. I did try running an optimize on both
Collection 1 and Collection 2 after the users complained; I noticed that
after the optimize the segment count on each of the four shards came down,
but that still didn't resolve the slowness of the searches (I was hoping
it would).

4) I am looking at the Solr dashboard for more clues. My Tomcat nodes are
definitely NOT running out of memory - the 6 nodes are consuming anywhere
between 500 MB and 1 GB of RAM.

5) The file descriptor counts are under control - I can only see a maximum
of 100 file descriptors in use out of a total of 4096.

6) The Solr dashboard is, however, showing that 0.2% (or 9.8 MB) of swap
space is being consumed on one of the 3 VMs. Is this a concern?

7) I also looked at the Plugin/Stats page for every core on the Solr
dashboard. I can't see any evictions happening in any of the caches - it's
always zero.

Has anyone encountered such an issue? What else should I be looking for to
debug my problem? One check I have in mind is sketched below.
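One way to rule the inter-node hop in or out would be to time the same
query against each core directly with distrib=false (hosts, ports and
collection name here are placeholders):

    # time the query on each Tomcat node directly, bypassing distributed search
    curl -w "\ntotal: %{time_total}s\n" \
      "http://vm1:7031/solr/collection1/select?q=*:*&rows=0&distrib=false"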

Thanks





Does one need to perform an optimize soon after doing a batch indexing using SolrJ ?

2014-06-24 Thread RadhaJayalakshmi
I am using Solr 4.5.1. I have two collections:
        Collection 1 - 2 shards, 3 replicas (size of Shard 1: 115 MB, size
of Shard 2: 55 MB)
        Collection 2 - 2 shards, 3 replicas (size of Shard 1: 3.5 GB, size
of Shard 2: 1 GB)

I have a batch process that performs indexing (full refresh) - once a week
on the same index.

Here is some information on how I index:
a) I use SolrJ's bulk add API for indexing -
CloudSolrServer.add(Collection docs).
b) I have an autoCommit (hard commit) setting for both my collections
(solrconfig.xml):

    <autoCommit>
      <maxDocs>10</maxDocs>
      <openSearcher>false</openSearcher>
    </autoCommit>

c) I do a programmatic hard commit at the end of the indexing cycle - with
openSearcher=true - so that the documents show up in the search results.
d) I neither soft commit programmatically (nor have any autoSoftCommit
settings) during the batch indexing process.
e) When I re-index all my data again (the following week) into the same
index, I don't delete existing docs. Rather, I just re-index into the same
collection.
f) I am using the default mergeFactor of 10 in my solrconfig.xml:

    <mergeFactor>10</mergeFactor>

Here is what I am observing:
1) After a batch indexing cycle, the segment count for each shard/core is
pretty high. The Solr dashboard reports segment counts between 8 and 30
segments on the various cores.
2) Sometimes the Solr dashboard shows the status of my core as NOT
OPTIMIZED. I find this unusual, since I have just finished a batch
indexing cycle and would assume the index should already be optimized. Is
this happening because I don't delete my docs before re-indexing all my
data?
3) After I run an optimize on my collections, the segment count does
reduce significantly - to 1 segment.

Am I indexing the right way? Is there a better strategy?

Is it necessary to perform an optimize after every batch indexing cycle?

The outcome I am looking for is an optimized index after every major batch
indexing cycle. The end-of-batch sequence I have in mind is sketched below.
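A minimal sketch of that sequence (assuming a SolrJ 4.x SolrServer already
configured for the collection; the method and variable names here are
mine):

    import java.util.Collection;
    import org.apache.solr.client.solrj.SolrServer;
    import org.apache.solr.common.SolrInputDocument;

    public class BatchFinish {
        // finish a weekly batch: flush the last adds, make them visible, then force-merge
        static void finishBatch(SolrServer server, Collection<SolrInputDocument> lastDocs)
                throws Exception {
            server.add(lastDocs);           // final bulk add
            server.commit();                // hard commit; opens a new searcher by default
            server.optimize(true, true, 1); // optional: waitFlush, waitSearcher, maxSegments=1
        }
    }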

Thanks!!





Segment Count of my Index is greater than the Configured MergeFactor

2014-06-19 Thread RadhaJayalakshmi
Hi,
I am using Solr 4.5.1, in which I have created an index of 114.8 MB. I
have the following index configuration:

    <indexConfig>
      <maxIndexingThreads>8</maxIndexingThreads>
      <ramBufferSizeMB>100</ramBufferSizeMB>
      <mergeFactor>10</mergeFactor>
    </indexConfig>

I have given a ramBufferSizeMB of 100 and a mergeFactor of 10. So this
means that after indexing is completed, I should see <= 10 segments.
That's my assumption, and even the documentation says that.

But after the indexing completed, I went into the Solr dashboard and
selected the collection for which indexing had completed. It is showing a
segment count of 13.
How is this possible? As I have given a mergeFactor of 10, at no point in
time should there be more than 9 segments in the index.

I want to understand why 13 segments were created in my index.
I would appreciate a response ASAP.

Thanks
Radha






Re: Segment Count of my Index is greater than the Configured MergeFactor

2014-06-19 Thread RadhaJayalakshmi
Thanks Shawn, and thanks Chris!!
Shawn, your explanation was very clear and clarified my doubts.

Chris,
The video was also very useful.






The way Autocommit works in Solr - Weird

2014-03-10 Thread RadhaJayalakshmi
Hi,

Brief description of my application:
We have a Java program that reads a flat file and adds documents to Solr
using CloudSolrServer.
We index in batches of 1000 documents (bulk indexing).

And the autoCommit setting of my application is:

    <autoCommit>
      <maxDocs>100000</maxDocs>
      <openSearcher>false</openSearcher>
    </autoCommit>

So after every 100,000 documents are indexed, the engine should perform a
hard commit (autocommit), with openSearcher still false.
Once the file is fully read, we issue a commit() from the CloudSolrServer
class, which by default opens a new searcher.

Also, from the log I can see that autocommit happens three times, and only
with the last/final commit is openSearcher set to true.

So, till now all looks fine and working as expected.

But I observed one strange thing during the course of indexing.
As per the documentation, the data being indexed should first get written
into the tlog, and when the autocommit is performed, the data is flushed
to disk.
So the size of the /index folder should have changed at only three points
in time; the rest of the time, only the size of the /tlog folder should
have been changing.

But what actually happens is that I see the size of the /index folder
increasing in parallel with the size of the /tlog folder the whole time.
Actually, it increases to a certain limit and comes down, then increases
and comes down again.
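The way I watch this is roughly the following (the core's data path is a
placeholder):

    # watch the index and tlog directories during the indexing run
    watch -n 5 "du -sh /path/to/solr/home/mycore/data/index /path/to/solr/home/mycore/data/tlog"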

So now the bigger doubt I have is: during a hard commit, is the data
written to both the /index and the /tlog folders?

I am using solr 4.5.1.

Could someone please clarify how hard commit works? I am assuming the
following sequence:
1. Solr reads the data and writes it to the tlog.
2. During a hard commit, it flushes the data from the tlog to the index;
if openSearcher is false, it should not open a new searcher.
3. In the end, once all the data is indexed, it opens a new searcher.

If not, please explain.

Thanks in Advance
Radha










Is it possible to have only fq in my solr query?

2013-11-26 Thread RadhaJayalakshmi
Hi,
I am preparing a Solr query in which I only give the fq parameter; I don't
give any q parameter.
If I execute such a query, with only an fq, it does not return any docs -
it returns 0 docs.
So, is it always mandatory to have a q parameter in a Solr query?
If so, then I think I should have something like
q=*:* and fq=field:value
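For example, something like this (host, collection and field are
hypothetical):

    curl "http://localhost:8983/solr/collection1/select?q=*:*&fq=status:active"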


Please explain

Thanks
Radha





SolrServerException while adding an invalid UNIQUE_KEY in solr 4.4

2013-11-21 Thread RadhaJayalakshmi
Hi,
I am using Solr 4.4 with ZooKeeper 3.3.5. While checking the error
conditions of my application, I came across a strange issue. Here is what
I tried. I have three fields defined in my schema:
a) UNIQUE_KEY - of type solr.TrieLong
b) empId - of type solr.TrieLong
c) companyId - of type solr.TrieLong

How am I indexing:
I am indexing using the SolrJ API, and the data for indexing is in a text
file delimited by the | symbol. My indexer Java program reads the text
file line by line, splits the data on the | symbol, creates a
SolrInputDocument object for every line of the file, and adds the fields
with the values it read from the file.
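A minimal sketch of that indexer (the ZooKeeper address, collection name
and file name are placeholders):

    import java.io.BufferedReader;
    import java.io.FileReader;
    import java.util.ArrayList;
    import java.util.List;
    import org.apache.solr.client.solrj.impl.CloudSolrServer;
    import org.apache.solr.common.SolrInputDocument;

    public class Indexer {
        public static void main(String[] args) throws Exception {
            CloudSolrServer server = new CloudSolrServer("localhost:2181");
            server.setDefaultCollection("mycollection");

            List<SolrInputDocument> docs = new ArrayList<SolrInputDocument>();
            BufferedReader in = new BufferedReader(new FileReader("data.txt"));
            String line;
            while ((line = in.readLine()) != null) {
                String[] fields = line.split("\\|");
                SolrInputDocument doc = new SolrInputDocument();
                doc.addField("UNIQUE_KEY", fields[0]); // "123AB" here is what triggers the failure
                doc.addField("empId", fields[1]);
                doc.addField("companyId", fields[2]);
                docs.add(doc);
            }
            in.close();
            server.add(docs); // this is where the exception below surfaces
            server.commit();
            server.shutdown();
        }
    }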
Now, intentionally, in the data file I put String values for UNIQUE_KEY
(instead of long values), something like:

    123AB|111|222

When I index this data, I get the exception below:

    org.apache.solr.client.solrj.SolrServerException: No live SolrServers available to handle this request: [URL of my application]
            at org.apache.solr.client.solrj.impl.LBHttpSolrServer.request(LBHttpSolrServer.java:333)
            at org.apache.solr.client.solrj.impl.CloudSolrServer.request(CloudSolrServer.java:318)
            at org.apache.solr.client.solrj.request.AbstractUpdateRequest.process(AbstractUpdateRequest.java:117)
            at org.apache.solr.client.solrj.SolrServer.add(SolrServer.java:68)
            at org.apache.solr.client.solrj.SolrServer.add(SolrServer.java:54)
    Caused by: org.apache.solr.client.solrj.impl.HttpSolrServer$RemoteSolrException: Server at [URL of my application] returned non ok status:500, message:Internal Server Error
            at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:385)
            at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:180)
            at org.apache.solr.client.solrj.impl.LBHttpSolrServer.request(LBHttpSolrServer.java:264)

But when I correct the UNIQUE_KEY field data and instead give string data
for the other two long fields, I get a different exception:

    org.apache.solr.client.solrj.impl.HttpSolrServer$RemoteSolrException: ERROR: [error stating the field name for which the type is mismatching]
            at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:424)
            at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:180)
            at org.apache.solr.client.solrj.impl.LBHttpSolrServer.request(LBHttpSolrServer.java:264)
            at org.apache.solr.client.solrj.impl.CloudSolrServer.request(CloudSolrServer.java:318)
            at org.apache.solr.client.solrj.request.AbstractUpdateRequest.process(AbstractUpdateRequest.java:117)
            at org.apache.solr.client.solrj.SolrServer.add(SolrServer.java:68)
            at org.apache.solr.client.solrj.SolrServer.add(SolrServer.java:54)

What is my question here:
During indexing, if Solr finds that the field type declared in the schema
mismatches the data being given for any field, it should raise the same
type of exception in every case. But in the case above, when it finds a
mismatch for UNIQUE_KEY it raises SolrServerException, while for all other
fields it raises RemoteSolrException (which is an unchecked exception). Is
this a bug in Solr, or is there a reason for throwing different exceptions
in the two cases?

Expecting a positive reply
Thanks
Radha





Re: SolrServerException while adding an invalid UNIQUE_KEY in solr 4.4

2013-11-21 Thread RadhaJayalakshmi
Thanks Shawn for your response.
So, from your email, it seems that unique-key validation is handled
differently from other field validation.
But what I am not clear about is what the unique key has to do with
finding a live server.
If there is any mismatch in the unique key, it throws SolrServerException
saying "No live servers found". Live servers are sourced from ZooKeeper's
cluster state, whereas I feel the unique key is particular to a
core/index.
So I am looking to understand the nature of this exception. Please explain
how the unique key and live servers are related.






Re: SPLITSHARD not working in SOLR-4.4.0

2013-10-16 Thread RadhaJayalakshmi
Shalin,
It is working for me. As you rightly pointed out, I had defined the
UNIQUE_KEY field in the schema but forgot to mention this field in the
uniqueKey declaration. After I added this, it started working.
One other question I have with regard to SPLITSHARD: we are not able to
control which Tomcat nodes the split shards are created on.
While creating a collection, we can use createNodeSet to set our
preference for the Tomcat nodes on which the collection's slices should be
created.
But I don't find that feature in the SPLITSHARD API. Do you know whether
this is a limitation in Solr 4.4, or is there some other means by which we
can achieve it?





Timeout Errors while using Collections API

2013-10-16 Thread RadhaJayalakshmi
Hi,
My setup is:
Zookeeper ensemble - running with 3 nodes
Tomcats - 9 Tomcat instances brought up, registering with ZooKeeper.

Steps:
1) I uploaded the Solr configuration (db_data_config, solrconfig and
schema XMLs) into ZooKeeper.
2) Now I am trying to create a collection with the Collections API, like
below:

http://miadevuser001.albridge.com:7021/solr/admin/collections?action=CREATE&name=Schwab_InvACC_Coll&numShards=1&replicationFactor=2&createNodeSet=localhost:7034_solr,localhost:7036_solr&collection.configName=InvestorAccountDomainConfig

Now, when I execute this command, I get the following error:
    <response>
      <lst name="responseHeader">
        <int name="status">500</int>
        <int name="QTime">60015</int>
      </lst>
      <lst name="error">
        <str name="msg">createcollection the collection time out:60s</str>
        <str name="trace">org.apache.solr.common.SolrException: createcollection the collection time out:60s
            at org.apache.solr.handler.admin.CollectionsHandler.handleResponse(CollectionsHandler.java:175)
            at org.apache.solr.handler.admin.CollectionsHandler.handleResponse(CollectionsHandler.java:156)
            at org.apache.solr.handler.admin.CollectionsHandler.handleCreateAction(CollectionsHandler.java:290)
            at org.apache.solr.handler.admin.CollectionsHandler.handleRequestBody(CollectionsHandler.java:112)
            at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
            at org.apache.solr.servlet.SolrDispatchFilter.handleAdminRequest(SolrDispatchFilter.java:611)
            at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:218)
            at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:158)
            at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:243)
            at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:210)
            at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:222)
            at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:123)
            at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:171)
            at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:99)
            at org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:947)
            at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:118)
            at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:408)
            at org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:1009)
            at org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:589)
            at org.apache.tomcat.util.net.JIoEndpoint$SocketProcessor.run(JIoEndpoint.java:310)
            at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
            at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
            at java.lang.Thread.run(Thread.java:722)
        </str>
        <int name="code">500</int>
      </lst>
    </response>

After I got this error, I am not able to do any operation on these
instances with the Collections API; it repeatedly gives the same timeout
error.
This setup was working fine 5 minutes earlier; suddenly it started
throwing these exceptions. Any ideas, please?








SPLITSHARD not working in SOLR-4.4.0

2013-10-15 Thread RadhaJayalakshmi
Hi All,
For POC purposes, I just brought up a Tomcat-Solr cluster with a ZooKeeper
ensemble of 3 nodes.
In one of my collections I have only one shard, with two replicas. I just
want to split this shard, so that it is split in two and each split shard
has two replicas (including the master copy). The call I make is along the
lines of the sketch below.
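(A sketch of the SPLITSHARD call; host, port and collection name are
placeholders:)

    curl "http://localhost:7031/solr/admin/collections?action=SPLITSHARD&collection=mycollection&shard=shard1"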
But when I execute the SPLITSHARD command, I get the NullPointerException
below:

Exception Trace:
---
ain{StandardDirectoryReader(segments_3:2183:nrt _ua(4.4):C977031)}
79569726 [http-bio-7031-exec-56] INFO  org.apache.solr.update.SolrIndexSplitter  – SolrIndexSplitter: partitions=2 segments=1
79569726 [http-bio-7031-exec-56] ERROR org.apache.solr.handler.admin.CoreAdminHandler  – ERROR executing split:
java.lang.NullPointerException
        at org.apache.solr.update.SolrIndexSplitter.split(SolrIndexSplitter.java:154)
        at org.apache.solr.update.SolrIndexSplitter.split(SolrIndexSplitter.java:89)
        at org.apache.solr.update.DirectUpdateHandler2.split(DirectUpdateHandler2.java:766)
        at org.apache.solr.handler.admin.CoreAdminHandler.handleSplitAction(CoreAdminHandler.java:284)
        at org.apache.solr.handler.admin.CoreAdminHandler.handleRequestBody(CoreAdminHandler.java:186)
        at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
        at org.apache.solr.servlet.SolrDispatchFilter.handleAdminRequest(SolrDispatchFilter.java:611)
        at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:209)
        at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:158)
        at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:243)
        at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:210)
        at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:222)
        at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:123)
        at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:171)
        at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:99)
        at org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:947)
        at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:118)
        at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:408)
        at org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:1009)
        at org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:589)
        at org.apache.tomcat.util.net.JIoEndpoint$SocketProcessor.run(JIoEndpoint.java:312)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
        at java.lang.Thread.run(Thread.java:722)
79569727 [http-bio-7031-exec-56] ERROR org.apache.solr.core.SolrCore  – java.lang.RuntimeException: java.lang.NullPointerException
        at org.apache.solr.handler.admin.CoreAdminHandler.handleSplitAction(CoreAdminHandler.java:290)
        at org.apache.solr.handler.admin.CoreAdminHandler.handleRequestBody(CoreAdminHandler.java:186)
        at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
        at org.apache.solr.servlet.SolrDispatchFilter.handleAdminRequest(SolrDispatchFilter.java:611)
        at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:209)
        at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:158)
        at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:243)
        at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:210)
        at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:222)
        at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:123)
        at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:171)
        at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:99)
        at org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:947)
        at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:118)
        at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:408)
        at org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:1009)
        at org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:589)
        at org.apache.tomcat.util.net.JIoEndpoint$SocketProcessor.run(JIoEndpoint.java:312)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
        at

Re: SPLITSHARD not working in SOLR-4.4.0

2013-10-15 Thread RadhaJayalakshmi
Thanks for the response!!
Yes, I have defined a unique key in the schema... it is still throwing the
same error.
Is SPLITSHARD a new feature that is under development in Solr 4.4? Has
anyone been able to split shards using SPLITSHARD successfully?






Configuring separate db-data-config.xml per shard

2013-06-05 Thread RadhaJayalakshmi
Hi,
We have a setup with 3 shards in a collection, and each shard in the
collection needs to load a different set of data.
That is:
Shard1 - will contain data only for Entity1
Shard2 - will contain data only for Entity2
Shard3 - will contain data only for Entity3
So in this case the db-data-config.xml can't be the same for the three
shards, and therefore a single copy can't simply be uploaded to ZooKeeper.
Is there any way we can maintain a db-data-config.xml inside each shard's
folder and make the shards refer to that file during data import, rather
than looking for the file in ZooKeeper's repository? One idea is sketched
below.
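For reference, the DIH handler is pointed at its config file in
solrconfig.xml. One untested idea is to parameterize that file name with a
core property, so each shard's core resolves a different file (dih.config
is a made-up property name that would be set per core):

    <!-- sketch: per-core selection of the DIH config file -->
    <requestHandler name="/dataimport"
                    class="org.apache.solr.handler.dataimport.DataImportHandler">
      <lst name="defaults">
        <str name="config">${dih.config:db-data-config.xml}</str>
      </lst>
    </requestHandler>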

Thanks in Advance
Radha


