[jira] [Commented] (SOLR-6227) ChaosMonkeySafeLeaderTest failures on jenkins

2014-07-21 Thread Deepak Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6227?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14068486#comment-14068486
 ] 

Deepak Kumar commented on SOLR-6227:


Hello all,

While invoking 
http://localhost:9090/solr/admin/collections?action=CREATE&name=core&numShards=1&replicationFactor=1&collection.configName=coreconf
I am getting the exception below. This has been happening consistently on both 
4.7.1 and 4.7.2; please help me understand whether it is the very same issue:
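
For reference, this is a minimal sketch of that same CREATE call with the query 
parameters explicitly separated by '&' (the separators appear to have been 
stripped in the post); the host, port, collection name and config name are taken 
from the URL as quoted above, so adjust them to your setup:

{code}
import java.io.BufferedReader;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class CreateCollectionCall {
    public static void main(String[] args) throws Exception {
        // Collections API CREATE call; every parameter must be separated by '&'
        String url = "http://localhost:9090/solr/admin/collections"
                + "?action=CREATE"
                + "&name=core"
                + "&numShards=1"
                + "&replicationFactor=1"
                + "&collection.configName=coreconf";

        HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
        int status = conn.getResponseCode();
        System.out.println("HTTP status: " + status);

        // On failure Solr returns an XML error body like the one shown further down
        InputStream body = status < 400 ? conn.getInputStream() : conn.getErrorStream();
        if (body != null) {
            BufferedReader in = new BufferedReader(new InputStreamReader(body, "UTF-8"));
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line);
            }
            in.close();
        }
    }
}
{code}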

-- solr.log --

[ERROR] [2014-07-21 18:06:20,960] 
[Overseer-92140072928280576-localhost:9090_solr-n_03] 
[cloud.OverseerCollectionProcessor] - [Collection createcollection of 
createcollection failed:org.apache.solr.common.SolrException
at 
org.apache.solr.cloud.OverseerCollectionProcessor.createCollection(OverseerCollectionProcessor.java:1687)
at 
org.apache.solr.cloud.OverseerCollectionProcessor.processMessage(OverseerCollectionProcessor.java:387)
at 
org.apache.solr.cloud.OverseerCollectionProcessor.run(OverseerCollectionProcessor.java:200)
at java.lang.Thread.run(Thread.java:662)
Caused by: org.apache.zookeeper.KeeperException$NodeExistsException: 
KeeperErrorCode = NodeExists for /collections/usersearches
at org.apache.zookeeper.KeeperException.create(KeeperException.java:119)
at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
at org.apache.zookeeper.ZooKeeper.create(ZooKeeper.java:783)
at 
org.apache.solr.common.cloud.SolrZkClient$10.execute(SolrZkClient.java:429)
at 
org.apache.solr.common.cloud.ZkCmdExecutor.retryOperation(ZkCmdExecutor.java:73)
at 
org.apache.solr.common.cloud.SolrZkClient.makePath(SolrZkClient.java:426)
at 
org.apache.solr.common.cloud.SolrZkClient.makePath(SolrZkClient.java:383)
at 
org.apache.solr.common.cloud.SolrZkClient.makePath(SolrZkClient.java:370)
at 
org.apache.solr.common.cloud.SolrZkClient.makePath(SolrZkClient.java:357)
at 
org.apache.solr.cloud.OverseerCollectionProcessor.createConfNode(OverseerCollectionProcessor.java:1711)
at 
org.apache.solr.cloud.OverseerCollectionProcessor.createCollection(OverseerCollectionProcessor.java:1624)
... 3 more
]

[ERROR] [2014-07-21 16:39:55,906] [http-9090-1] [servlet.SolrDispatchFilter] - 
[null:org.apache.solr.common.SolrException

at 
org.apache.solr.handler.admin.CollectionsHandler.handleResponse(CollectionsHandler.java:248)
at 
org.apache.solr.handler.admin.CollectionsHandler.handleResponse(CollectionsHandler.java:233)
at 
org.apache.solr.handler.admin.CollectionsHandler.handleCreateAction(CollectionsHandler.java:368)
at 
org.apache.solr.handler.admin.CollectionsHandler.handleRequestBody(CollectionsHandler.java:141)
at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
at 
org.apache.solr.servlet.SolrDispatchFilter.handleAdminRequest(SolrDispatchFilter.java:720)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:265)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:205)
at 
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
at 
org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
at 
org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
at 
org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
at 
org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
at 
org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
at 
org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
at 
org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:298)
at 
org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:857)
at 
org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:588)
at 
org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:489)
at java.lang.Thread.run(Thread.java:662)

]

-- SOLR http response --
<?xml version="1.0" encoding="UTF-8"?>
<response>
<lst name="responseHeader"><int name="status">500</int><int name="QTime">210</int></lst>
<str name="Operation createcollection caused exception:">org.apache.solr.common.SolrException:org.apache.solr.common.SolrException</str>
<lst name="exception"><null name="msg"/><int name="rspCode">500</int></lst>
<lst name="error"><str name="trace">org.apache.solr.common.SolrException
at org.apache.solr.handler.admin.CollectionsHandler.handleResponse(CollectionsHandler.java:248)
at org.apache.solr.handler.admin.CollectionsHandler.handleResponse(CollectionsHandler.java:233)
at 

[jira] [Commented] (SOLR-6227) ChaosMonkeySafeLeaderTest failures on jenkins

2014-07-21 Thread Deepak Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6227?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14069843#comment-14069843
 ] 

Deepak Kumar commented on SOLR-6227:


The above has been happening because the 'linkconfig' command was used on the 
ZooKeeper side, which then prevents the Solr admin API from creating a valid ZK 
node path, so the CREATE fails this way.
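
A quick way to confirm this before calling CREATE is to look at ZooKeeper 
directly. This is only a rough sketch with the raw ZooKeeper client; the 
ensemble address (localhost:2181) and the collection/config names are 
assumptions based on the calls quoted above, and the log showed the offending 
node as /collections/usersearches in my case:

{code}
import java.util.concurrent.CountDownLatch;

import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;
import org.apache.zookeeper.data.Stat;

public class CheckZkNodes {
    public static void main(String[] args) throws Exception {
        final CountDownLatch connected = new CountDownLatch(1);
        // Hypothetical ensemble address; point this at the ZooKeeper that Solr uses
        ZooKeeper zk = new ZooKeeper("localhost:2181", 15000, new Watcher() {
            public void process(WatchedEvent event) {
                if (event.getState() == Event.KeeperState.SyncConnected) {
                    connected.countDown();
                }
            }
        });
        connected.await();
        try {
            // CREATE fails with NodeExists when this znode is already present,
            // e.g. left behind by an earlier linkconfig call
            Stat collection = zk.exists("/collections/core", false);
            System.out.println("/collections/core exists: " + (collection != null));

            // The uploaded config set should be visible here before CREATE is issued
            Stat config = zk.exists("/configs/coreconf", false);
            System.out.println("/configs/coreconf exists: " + (config != null));
        } finally {
            zk.close();
        }
    }
}
{code}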

I got past it by simply uploading the Solr configuration to ZooKeeper first and 
then using the action=CREATE HTTP command, and that works.

However, is there any documentation that lists such changes across versions?

--Deepak

 ChaosMonkeySafeLeaderTest failures on jenkins
 -

 Key: SOLR-6227
 URL: https://issues.apache.org/jira/browse/SOLR-6227
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud, Tests
Reporter: Shalin Shekhar Mangar
 Fix For: 4.10


 This is happening very frequently.
 {code}
 1 tests failed.
 REGRESSION:  org.apache.solr.cloud.ChaosMonkeySafeLeaderTest.testDistribSearch
 Error Message:
 shard1 is not consistent.  Got 143 from 
 https://127.0.0.1:36610/xvv/collection1lastClient and got 142 from 
 https://127.0.0.1:33168/xvv/collection1
 Stack Trace:
 java.lang.AssertionError: shard1 is not consistent.  Got 143 from 
 https://127.0.0.1:36610/xvv/collection1lastClient and got 142 from 
 https://127.0.0.1:33168/xvv/collection1
 at 
 __randomizedtesting.SeedInfo.seed([3C1FB6EAFE71:BDF938F2AA829E4D]:0)
 at org.junit.Assert.fail(Assert.java:93)
 at 
 org.apache.solr.cloud.AbstractFullDistribZkTestBase.checkShardConsistency(AbstractFullDistribZkTestBase.java:1139)
 at 
 org.apache.solr.cloud.AbstractFullDistribZkTestBase.checkShardConsistency(AbstractFullDistribZkTestBase.java:1118)
 at 
 org.apache.solr.cloud.ChaosMonkeySafeLeaderTest.doTest(ChaosMonkeySafeLeaderTest.java:150)
 at 
 org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:865)
 {code}





[jira] [Commented] (SOLR-2593) A new core admin action 'split' for splitting index

2012-10-24 Thread Deepak Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-2593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13483335#comment-13483335
 ] 

Deepak Kumar commented on SOLR-2593:


I have a situation that demands merging 2 cores, re-creating the data 
partitions, and splitting & installing them into 2 (or more) cores, and this 
issue seems to come closest to that area. Basically the case is that there are 
2 cores on the same schema, roughly 55G and 35G (and growing) each, and data 
keeps getting pushed continuously into the 35G core. We can't allow it to fill 
up indefinitely, so over a period of time (an offline/maintenance window) we 
regenerate both cores (by re-indexing to fresh cores) with the desired set of 
data keyed on some unique key, discard the old oversized cores and install the 
fresh ones. Re-indexing is a pain, and eventually it recreates the same set of 
documents, except that the older core loses its oldest docs due to the size 
constraint, and the smaller core shrinks further because it ends up holding 
fewer documents once docs have shifted to the bigger one. This can be 
considered a sliding-time-window based core. So the basic steps in demand 
could be:

1.) Merge N cores into 1 big core (high cost).
2.) Scan through all the documents of the big core and create N (the number of 
cores that were merged initially) new cores, each up to the allowed size.
3.) Hot-swap the main cores with the fresh ones.
4.) Discard the old cores, probably after backing them up.

Step 1 may be omitted if we can scan through the documents of the N cores 
directly and push the new docs straight to the target cores.
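
To make that workflow concrete, here is a rough sketch of steps 1, 3 and 4 
using the existing CoreAdmin actions (mergeindexes, SWAP, UNLOAD) over plain 
HTTP. The host, port and core names are made up for illustration, and step 2 
(re-partitioning by the unique key) is exactly what is missing today and what 
this issue's split action would cover:

{code}
import java.net.HttpURLConnection;
import java.net.URL;

public class RebuildCoresSketch {
    // Issues a CoreAdmin call and returns the HTTP status code
    static int coreAdmin(String params) throws Exception {
        URL url = new URL("http://localhost:9090/solr/admin/cores?" + params);
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        return conn.getResponseCode();
    }

    public static void main(String[] args) throws Exception {
        // Step 1: merge the two source cores into a scratch core (high cost);
        // a commit on the scratch core is needed afterwards to expose the merged segments
        coreAdmin("action=mergeindexes&core=scratch_core"
                + "&srcCore=big_core&srcCore=small_core");

        // Step 2: re-partition scratch_core into fresh cores keyed on the unique key;
        // there is no CoreAdmin action for this yet -- the proposed action=split would fill the gap

        // Step 3: hot-swap the live cores with the freshly built ones
        coreAdmin("action=SWAP&core=big_core&other=big_core_rebuilt");
        coreAdmin("action=SWAP&core=small_core&other=small_core_rebuilt");

        // Step 4: unload (and optionally back up) the old cores, which now sit
        // under the *_rebuilt names after the swap
        coreAdmin("action=UNLOAD&core=big_core_rebuilt");
        coreAdmin("action=UNLOAD&core=small_core_rebuilt");
    }
}
{code}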

 A new core admin action 'split' for splitting index
 ---

 Key: SOLR-2593
 URL: https://issues.apache.org/jira/browse/SOLR-2593
 Project: Solr
  Issue Type: New Feature
Reporter: Noble Paul
 Fix For: 4.1


 If an index is too large/hot it would be desirable to split it out to another 
 core .
 This core may eventually be replicated out to another host.
 There can be multiple strategies:
 * random split of x or x% 
 * fq=user:johndoe
 example :
 action=split&split=20percent&newcore=my_new_index
 or
 action=split&fq=user:johndoe&newcore=john_doe_index
