[
https://issues.apache.org/jira/browse/SOLR-1395?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13139997#comment-13139997
]
JohnWu edited comment on SOLR-1395 at 12/30/11 6:49 AM:
--------------------------------------------------------
tom:
For the multi-schema case, I tried to add multiple cores in the proxy and the sub-proxy, but I ran into a connection-close issue. Please help me analyze this case.
In the proxy, I set the Solr home to the proxymulticore folder, whose structure is as follows:
-proxymulticore
--customer
----conf
----data
--part
----conf
----data
--solr.xml
The solr.xml is as follows:
<cores adminPath="/admin/cores">
<core name="part" instanceDir="part">
</core>
<core name="customer" instanceDir="customer">
</core>
</cores>
In the part folder, solrconfig.xml is set as:
----------------
<requestHandler name="standard" class="solr.KattaRequestHandler"
default="true">
<lst name="defaults">
<str name="echoParams">explicit</str>
<str name="shards">part-00000,part-00001,part-00002</str>
</lst>
</requestHandler>
----------------
So in the proxy we can use the request:
http://localhost:8080/solr-1395-katta-0.6.2-2patch/part/select/?q=a*&version=2.2&start=0&rows=10&indent=on&isShard=false&distrib=true&core=part
The proxy will use the KattaRequestHandler to dispatch the query to the Katta data nodes (sub-proxies).
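For reference, the same proxy query can be issued through SolrJ instead of a hand-written URL. This is only an illustrative sketch, assuming the proxy webapp is reachable at the URL above and that the SolrJ client of this Solr generation (CommonsHttpSolrServer) is on the classpath:
--------------------
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;
import org.apache.solr.client.solrj.response.QueryResponse;

public class ProxyQueryExample {
    public static void main(String[] args) throws Exception {
        // Point the client at the "part" core of the proxy webapp.
        CommonsHttpSolrServer server = new CommonsHttpSolrServer(
                "http://localhost:8080/solr-1395-katta-0.6.2-2patch/part");

        SolrQuery query = new SolrQuery("a*");
        query.setStart(0);
        query.setRows(10);
        // Same extra parameters as in the hand-written URL above.
        query.set("distrib", true);
        query.set("isShard", false);
        query.set("core", "part");

        QueryResponse rsp = server.query(query);
        System.out.println("numFound=" + rsp.getResults().getNumFound());
    }
}
--------------------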
In the sub-proxy, we start the embedded Solr with the subproxymulticore folder, whose structure is as follows:
-subproxymulticore
--customer
----conf
----data
--part
----conf
----data
--solr.xml
Except for the solrconfig.xml in the part folder, shown below, the other files are the same.
----------------
<requestHandler name="standard" class="solr.SearchHandler" default="true">
<!-- default values for query parameters -->
<lst name="defaults">
<str name="echoParams">explicit</str>
</lst>
</requestHandler>
------------------
Now I corrected some code in DeployableSolrKattaServer.java in the patched Solr as follows:
--------------------
public DeployableSolrKattaServer() throws ParserConfigurationException,
        IOException, SAXException {
    // super(getServerName(), new CoreContainer(getSolrHome()
    //         .getAbsolutePath(), getConfigFile()));
    // super(getServerName(), new CoreContainer());
    // By JohnWu: we no longer point directly at the conf folder of a single
    // core; instead we let the initializer find solr.xml and load all cores.
    super(getServerName(), new CoreContainer.Initializer().initialize());
}
-------------------
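As far as I understand it, CoreContainer.Initializer resolves the Solr home (for example from the solr.solr.home system property) and loads every core listed in solr.xml. A minimal standalone sketch of that assumption, with a hypothetical path, looks like this:
--------------------
import org.apache.solr.core.CoreContainer;
import org.apache.solr.core.SolrCore;

public class MultiCoreBootstrap {
    public static void main(String[] args) throws Exception {
        // Assumption: the sub-proxy points solr.solr.home at the multi-core
        // folder so that the initializer can find solr.xml and load both cores.
        System.setProperty("solr.solr.home", "/path/to/subproxymulticore");

        CoreContainer container = new CoreContainer.Initializer().initialize();
        for (SolrCore core : container.getCores()) {
            System.out.println("loaded core: " + core.getName());
        }
        container.shutdown();
    }
}
--------------------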
I also corrected some code in SolrKattaServer.java:
-------------------
public SolrKattaServer(String defaultCoreName, CoreContainer coreContainer) {
    this.coreContainer = coreContainer;
    handler = new MultiEmbeddedSearchHandler(coreContainer);
    handler.init(new NamedList());
    // defaultCore = coreContainer.getCore(defaultCoreName);
    defaultCores = coreContainer.getCores();
    // if (defaultCore == null)
    //     throw new SolrException(ErrorCode.UNKNOWN,
    //             "defaultCore:" + defaultCoreName + " could not be found");
    if (defaultCores == null)
        throw new SolrException(ErrorCode.UNKNOWN,
                "defaultCore:" + defaultCoreName + " could not be found");
    // JohnWu: added for multi-core, register the handler with every core
    Iterator it = defaultCores.iterator();
    while (it.hasNext()) {
        handler.inform((SolrCore) it.next());
    }
    // handler.inform(defaultCore);
}
/**
 * The main method that executes requests from a KattaClient.
 */
@Override
public KattaResponse request(String[] shards, KattaRequest request)
        throws Exception {
    // JohnWu: added for multi-core; we need to get the suitable core
    // according to the request's core parameter
    SolrParams params = request.getParams();
    SolrParams required = params.required();
    String cname = required.get(CoreAdminParams.CORE);
    SolrCore core = coreContainer.getCore(cname);
    // TODO: need to add some code to handle the case where the core is null
    if (core != null) {
        ModifiableSolrParams sp = new ModifiableSolrParams(request.getParams());
        String shardsStr = StringUtils.arrayToString(shards);
        sp.set(ShardParams.SHARDS, shardsStr);
        if (log.isDebugEnabled()) {
            log.debug("SolrServer.request: " + nodeName + " shards:"
                    + Arrays.asList(shards) + " request params:" + sp);
        }
        // removed by John
        // SolrQueryRequestBase req = new LocalSolrQueryRequest(defaultCore, sp);
        SolrQueryRequestBase req = new LocalSolrQueryRequest(core, sp);
        SolrQueryResponse resp = new SolrQueryResponse();
        // Added by tom liu: an exception would stop the RPC,
        // so it must be handled here
        try {
            getRequestHandler(req).handleRequest(req, resp);
        } catch (SolrException ex) {
            log.error(ex.getMessage(), ex);
        }
        // add end
        NamedList nl = resp.getValues();
        nl.add("QueriedShards", shards);
        // Added by tom liu
        SolrDocumentList sdl = (SolrDocumentList) nl.get("response");
        if (sdl == null) {
            nl.add("response", new SolrDocumentList());
            if (log.isWarnEnabled())
                log.warn("SolrServer.SolrResponse: no response");
        }
        // add end
        SolrResponse rsp = new SolrResponseBase();
        rsp.setResponse(nl);
        if (log.isDebugEnabled()) {
            if (null != sdl) {
                log.debug("SolrServer.SolrResponse: numFound=" + sdl.getNumFound()
                        + ",start=" + sdl.getStart() + ",docs=" + sdl.size());
            }
            log.debug("termVectors=" + nl.get("termVectors"));
        }
        // By using shards[0] we guarantee that this response is tied to a known
        // shard in the originator, so that the results can be merged.
        // The name (and only one is allowed) has to be one of the original
        // query shards.
        return new KattaResponse(shards[0], "", 0, rsp);
    } else {
        // maybe returning null is bad!
        System.out.println("------the core is null!!!!!!");
        return null;
    }
}
// Added by tom liu
// for supporting qt=...
private MultiEmbeddedSearchHandler getRequestHandler(SolrQueryRequest request) {
    SolrParams params = request.getParams();
    if (params == null) {
        params = new ModifiableSolrParams();
    }
    String qt = params.get(CommonParams.QT);
    if (qt != null) {
        // JohnWu: removed the following single-core lookup for multi-core
        // MultiEmbeddedSearchHandler myhandler =
        //         (MultiEmbeddedSearchHandler) defaultCore.getRequestHandler(qt);
        // if (myhandler == null) {
        //     throw new SolrException(SolrException.ErrorCode.BAD_REQUEST,
        //             "unknown handler: " + qt);
        // }
        // myhandler.setCoreContainer(coreContainer);
        // return myhandler;
        // JohnWu: added for multi-core
        Iterator it = defaultCores.iterator();
        while (it.hasNext()) {
            MultiEmbeddedSearchHandler myhandler =
                    (MultiEmbeddedSearchHandler) ((SolrCore) it.next()).getRequestHandler(qt);
            if (myhandler == null) {
                throw new SolrException(SolrException.ErrorCode.BAD_REQUEST,
                        "unknown handler: " + qt);
            }
            myhandler.setCoreContainer(coreContainer);
            return myhandler;
        }
    }
    return handler;
}
------------------
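One thing I am not sure about is the resource lifecycle in request(): as far as I know, coreContainer.getCore(cname) increments the core's reference count and LocalSolrQueryRequest holds a searcher reference, so both should normally be released when the request finishes. The following is only a sketch of what that cleanup might look like, using the names from the code above; I cannot confirm that this is related to the connection close.
--------------------
@Override
public KattaResponse request(String[] shards, KattaRequest request) throws Exception {
    String cname = request.getParams().required().get(CoreAdminParams.CORE);
    SolrCore core = coreContainer.getCore(cname);   // increments the core's refcount
    if (core == null) {
        return null;
    }
    SolrQueryRequestBase req = null;
    try {
        ModifiableSolrParams sp = new ModifiableSolrParams(request.getParams());
        sp.set(ShardParams.SHARDS, StringUtils.arrayToString(shards));
        req = new LocalSolrQueryRequest(core, sp);
        SolrQueryResponse resp = new SolrQueryResponse();
        getRequestHandler(req).handleRequest(req, resp);

        NamedList nl = resp.getValues();
        nl.add("QueriedShards", shards);
        SolrResponse rsp = new SolrResponseBase();
        rsp.setResponse(nl);
        return new KattaResponse(shards[0], "", 0, rsp);
    } finally {
        if (req != null) {
            req.close();   // release the searcher held by the local request
        }
        core.close();      // balance the reference taken by getCore(cname)
    }
}
--------------------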
But now, when Katta sends the second query (the one with ids) to the data node, the connection is closed.
INFO: [pSearch-00000#pSearch-00000] webapp=null path=/select params={start=0&ids=28aaa%2Caaa%2Cbbb&q=a*&core=part&isShard=true&rows=10} hits=3 status=0 QTime=12
Please help me and tell me why the query cannot go through the whole distributed process.
Thanks!
JohnWu
> Integrate Katta
> ---------------
>
> Key: SOLR-1395
> URL: https://issues.apache.org/jira/browse/SOLR-1395
> Project: Solr
> Issue Type: New Feature
> Affects Versions: 1.4
> Reporter: Jason Rutherglen
> Priority: Minor
> Fix For: 3.6, 4.0
>
> Attachments: SOLR-1395.patch, SOLR-1395.patch, SOLR-1395.patch,
> back-end.log, front-end.log, hadoop-core-0.19.0.jar, katta-core-0.6-dev.jar,
> katta-solrcores.jpg, katta.node.properties, katta.zk.properties,
> log4j-1.2.13.jar, solr-1395-1431-3.patch, solr-1395-1431-4.patch,
> solr-1395-1431-katta0.6.patch, solr-1395-1431-katta0.6.patch,
> solr-1395-1431.patch, solr-1395-katta-0.6.2-1.patch,
> solr-1395-katta-0.6.2-2.patch, solr-1395-katta-0.6.2-3.patch,
> solr-1395-katta-0.6.2.patch, solr-1395-katta-0.6.3-4.patch,
> solr-1395-katta-0.6.3-5.patch, solr-1395-katta-0.6.3-6.patch,
> solr-1395-katta-0.6.3-7.patch, solr1395.jpg, test-katta-core-0.6-dev.jar,
> zkclient-0.1-dev.jar, zookeeper-3.2.1.jar
>
> Original Estimate: 336h
> Remaining Estimate: 336h
>
> We'll integrate Katta into Solr so that:
> * Distributed search uses Hadoop RPC
> * Shard/SolrCore distribution and management
> * Zookeeper based failover
> * Indexes may be built using Hadoop