Re: SolrCloud FunctionQuery inconsistency

2013-12-06 Thread sling
Thank you, Chris.

I noticed that the crontabs on the replicas run at a different time than on
the leader (delayed by 10 minutes), and these crontabs reload the dictionary
files.
As a result, the terms differ slightly between replicas, which is why
maxScore differs.


Best,
Sling



--
View this message in context: 
http://lucene.472066.n3.nabble.com/SolrCloud-FunctionQuery-inconsistency-tp4104346p4105293.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: SolrCloud FunctionQuery inconsistency

2013-12-05 Thread sling
By the way, the shards param works fine with the value
localhost:7574/solr,localhost:8983/solr or shard2,
but it gets an exception with only one replica, localhost:7574/solr.

right:  shards=204.lead.index.com:9090/solr/doc/,66.index.com:8080/solr/doc/
wrong:  shards=204.lead.index.com:9090/solr/doc/
Why can't this param run with only one replica?



--
View this message in context: 
http://lucene.472066.n3.nabble.com/SolrCloud-FunctionQuery-inconsistency-tp4104346p4105078.html
Sent from the Solr - User mailing list archive at Nabble.com.


RE: SolrCloud FunctionQuery inconsistency

2013-12-04 Thread sling
Hi Raju,
A collection is a SolrCloud concept; a core belongs to standalone mode.
So in standalone mode you can create multiple cores, but not collections.



--
View this message in context: 
http://lucene.472066.n3.nabble.com/SolrCloud-FunctionQuery-inconsistency-tp4104346p4104888.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: SolrCloud FunctionQuery inconsistency

2013-12-03 Thread sling
Thanks, Chris.
The schema is:
<field name="title" type="textComplex" indexed="true" stored="false"
       multiValued="false" omitNorms="true" />
<field name="dkeys" type="textComplex" indexed="true" stored="false"
       multiValued="false" omitNorms="true" />
<field name="ptime" type="date" indexed="true" stored="false"
       multiValued="false" omitNorms="true" />

There is no default value for ptime. It is generated by users.

There are 4 shards in this solrcloud, and 2 nodes in each shard.

I was trying a query with a function query ({!boost b=dateDeboost(ptime)}
channelid:0082  title:abc), which returns different results from the same
shard (using the param shards=shard3).

The difference is in maxScore, which is not consistent: it is always either
score A or score B.
At the same time, new docs are being indexed.
In my opinion, the maxScore should be the same for queries issued within a
very short time, or at least it should not keep flipping between score A and
score B.

And quite by accident, the sort result is even inconsistent (say a doc
appears in one query but not in another, over and over). A doc may appear
once and then never reappear.

Does this mean that, when the query happens, the index in the replica has
not yet synced from its leader, so queries to different nodes of the shard
at the same time show different results?
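As a self-contained illustration (my own, not from the thread) of how small
index-statistics differences between leader and replica can move maxScore in
the 7th decimal place, consider the classic Lucene TF-IDF idf term,
idf = 1 + ln(numDocs / (docFreq + 1)), evaluated with a handful of
not-yet-synced docs on one side:

```java
// Hypothetical numbers: the idf term of Lucene's classic TF-IDF similarity
// depends on per-replica index statistics, so a replica lagging its leader
// by a few documents scores the same query slightly differently.
public class IdfDrift {
    static double idf(long numDocs, long docFreq) {
        return 1.0 + Math.log((double) numDocs / (docFreq + 1));
    }

    public static void main(String[] args) {
        double leader  = idf(1_000_000, 1_200);
        double replica = idf(1_000_050, 1_200); // 50 docs not yet synced
        System.out.printf("leader=%.7f replica=%.7f%n", leader, replica);
    }
}
```

The two values differ only far behind the decimal point, which matches the
0.5319116 vs. 0.5319117 flip seen in this thread.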





--
View this message in context: 
http://lucene.472066.n3.nabble.com/SolrCloud-FunctionQuery-inconsistency-tp4104346p4104851.html
Sent from the Solr - User mailing list archive at Nabble.com.


SolrCloud FunctionQuery inconsistency

2013-12-02 Thread sling
Hi,
I have a solrcloud with 4 shards, and they are running normally.
How is it possible that the same function query returns different results,
even within the same shard?

However, when sorting by ptime desc, the result is consistent.
The dateDeboost function generates a time weight from ptime, which is
multiplied by the score.
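The thread never shows how dateDeboost is implemented, so the following is
only a sketch under the assumption that it behaves like Solr's built-in
recip(ms(NOW,ptime),m,a,b) reciprocal decay; the constants m, a, b are
placeholders:

```java
// Assumed reciprocal date decay: weight = a / (m * ageMs + b).
// A brand-new document gets weight 1.0; older documents decay toward 0.
public class DateDeboost {
    static float weight(long nowMs, long ptimeMs) {
        final float m = 3.16e-11f; // roughly 1/ms-per-year
        final float a = 1.0f, b = 1.0f;
        return a / (m * (nowMs - ptimeMs) + b);
    }

    public static void main(String[] args) {
        long now = System.currentTimeMillis();
        System.out.println(weight(now, now));                       // fresh doc
        System.out.println(weight(now, now - 365L * 24 * 3600 * 1000)); // ~1 year old
    }
}
```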

The result is as follows:
{
  "responseHeader":{
    "status":0,
    "QTime":7,
    "params":{
      "fl":"id",
      "shards":"shard3",
      "cache":"false",
      "indent":"true",
      "start":"0",
      "q":"{!boost b=dateDeboost(ptime)}channelid:0082  (title:\"abc\" || dkeys:\"abc\")",
      "wt":"json",
      "rows":"5"}},
  "response":{"numFound":121,"start":0,"maxScore":0.5319116,"docs":[
      {
        "id":"9EORHN5I00824IHR"},
      {
        "id":"9EOPQGOI00824IMP"},
      {
        "id":"9EMATM6900824IHR"},
      {
        "id":"9EJLBOEN00824IHR"},
      {
        "id":"9E6V45IM00824IHR"}]
  }}



{
  "responseHeader":{
    "status":0,
    "QTime":6,
    "params":{
      "fl":"id",
      "shards":"shard3",
      "cache":"false",
      "indent":"true",
      "start":"0",
      "q":"{!boost b=dateDeboost(ptime)}channelid:0082  (title:\"abc\" || dkeys:\"abc\")",
      "wt":"json",
      "rows":"5"}},
  "response":{"numFound":121,"start":0,"maxScore":0.5319117,"docs":[
      {
        "id":"9EOPQGOI00824IMP"},
      {
        "id":"9EORHN5I00824IHR"},
      {
        "id":"9EMATM6900824IHR"},
      {
        "id":"9EJLBOEN00824IHR"},
      {
        "id":"9E1LP3S300824IHR"}]
  }}





--
View this message in context: 
http://lucene.472066.n3.nabble.com/SolrCloud-FunctionQuery-inconsistency-tp4104346.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: SolrCloud FunctionQuery inconsistency

2013-12-02 Thread sling
Thanks, Erick

I mean the first id of the results is not consistent, and neither is the
maxScore.

While querying, I index docs at the same time, but they are not relevant to
this query.

The updated docs cannot affect the tf calculations, and for idf they should
affect all docs equally, so the results should be consistent.

But for the same query, it shows different sorts (either sort A or sort B)
over and over.

Thanks,
sling



--
View this message in context: 
http://lucene.472066.n3.nabble.com/SolrCloud-FunctionQuery-inconsistency-tp4104346p4104549.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: SolrCloud FunctionQuery inconsistency

2013-12-02 Thread sling
Thanks for your reply, Chris.

Yes, I am populating ptime using a default of NOW.

I only store the id, so I can't get the ptime values. But from the
perspective of the business logic, ptime should not change.

Strangely, the sort result is consistent now... :(
I should run more test cases...



--
View this message in context: 
http://lucene.472066.n3.nabble.com/SolrCloud-FunctionQuery-inconsistency-tp4104346p4104558.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: In a function query, I can't get the ValueSource when extending ValueSourceParser

2013-11-26 Thread sling
Thank you, kydryavtsev andrey!
You gave me the right solution.



--
View this message in context: 
http://lucene.472066.n3.nabble.com/In-a-functon-query-I-can-t-get-the-ValueSource-when-extend-ValueSourceParser-tp4103026p4103449.html
Sent from the Solr - User mailing list archive at Nabble.com.


In a function query, I can't get the ValueSource when extending ValueSourceParser

2013-11-25 Thread sling
hi,
I am working with Solr 4.1.
When I don't call parseValueSource(), my function query works well. The code
is like this:

public class DateSourceParser extends ValueSourceParser {
    @Override
    public void init(NamedList namedList) {
    }

    @Override
    public ValueSource parse(FunctionQParser fp) throws SyntaxError {
        return new DateFunction();
    }
}

When I want to use the ValueSource, like this:

public class DateSourceParser extends ValueSourceParser {
    @Override
    public void init(NamedList namedList) {
    }

    @Override
    public ValueSource parse(FunctionQParser fp) throws SyntaxError {
        ValueSource source = fp.parseValueSource();
        return new DateFunction(source);
    }
}

fp.parseValueSource() throws an error like this:
ERROR [org.apache.solr.core.SolrCore] -
org.apache.solr.common.SolrException: org.apache.solr.search.SyntaxError:
Expected identifier at pos 12 str='dateDeboost()'
at
org.apache.solr.handler.component.QueryComponent.prepare(QueryComponent.java:147)
at
org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:187)
at
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:1816)
at
org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:448)
at
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:269)
at
com.caucho.server.dispatch.FilterFilterChain.doFilter(FilterFilterChain.java:70)
at
com.caucho.server.webapp.WebAppFilterChain.doFilter(WebAppFilterChain.java:173)
at
com.caucho.server.dispatch.ServletInvocation.service(ServletInvocation.java:229)
at
com.caucho.server.http.HttpRequest.handleRequest(HttpRequest.java:274)
at com.caucho.server.port.TcpConnection.run(TcpConnection.java:514)
at com.caucho.util.ThreadPool.runTasks(ThreadPool.java:527)
at com.caucho.util.ThreadPool.run(ThreadPool.java:449)
at java.lang.Thread.run(Thread.java:662)
Caused by: org.apache.solr.search.SyntaxError: Expected identifier at pos 12
str='dateDeboost()'
at
org.apache.solr.search.QueryParsing$StrParser.getId(QueryParsing.java:747)
at
org.apache.solr.search.QueryParsing$StrParser.getId(QueryParsing.java:726)
at
org.apache.solr.search.FunctionQParser.parseValueSource(FunctionQParser.java:345)
at
org.apache.solr.search.FunctionQParser.parseValueSource(FunctionQParser.java:223)
at
org.sling.solr.custom.DateSourceParser.parse(DateSourceParser.java:24)
at
org.apache.solr.search.FunctionQParser.parseValueSource(FunctionQParser.java:352)
at
org.apache.solr.search.FunctionQParser.parse(FunctionQParser.java:68)
at org.apache.solr.search.QParser.getQuery(QParser.java:142)
at
org.apache.solr.search.BoostQParserPlugin$1.parse(BoostQParserPlugin.java:61)
at org.apache.solr.search.QParser.getQuery(QParser.java:142)
at
org.apache.solr.handler.component.QueryComponent.prepare(QueryComponent.java:117)
... 13 more


So, how can I make fp.parseValueSource() work?
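For what it's worth, the failure can be reproduced outside Solr:
parseValueSource() expects an identifier between the parentheses, so the
empty argument list in dateDeboost() fails. This self-contained mimic (plain
Java, not Solr code) shows the same failure mode, including the "pos 12"
position:

```java
// Minimal mimic of function-argument parsing: an empty argument list
// between the parens raises an "Expected identifier" style error, just as
// FunctionQParser.parseValueSource() does for dateDeboost().
public class ArgParseMimic {
    static String parseArg(String func) {
        int open = func.indexOf('(');
        int close = func.lastIndexOf(')');
        String arg = func.substring(open + 1, close).trim();
        if (arg.isEmpty()) {
            throw new IllegalArgumentException(
                "Expected identifier at pos " + (open + 1) + " str='" + func + "'");
        }
        return arg;
    }

    public static void main(String[] args) {
        System.out.println(parseArg("dateDeboost(ptime)")); // prints ptime
        try {
            parseArg("dateDeboost()");
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

Passing a field argument, e.g. dateDeboost(ptime) as used later in this
thread, gives parseValueSource() something to consume.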

Thanks!!!

sling





--
View this message in context: 
http://lucene.472066.n3.nabble.com/In-a-functon-query-I-can-t-get-the-ValueSource-when-extend-ValueSourceParser-tp4103026.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: In a function query, I can't get the ValueSource when extending ValueSourceParser

2013-11-25 Thread sling
Thanks a lot for your reply, Chris.

I was trying to sort the query result by the DateFunction, by passing
q={!boost b=dateDeboost()}title:test to the /select request handler.

Before, my custom DateFunction is like this:
public class DateFunction extends FieldCacheSource {
    private static final long serialVersionUID = 6752223682280098130L;
    private static long now;

    public DateFunction(String field) {
        super(field);
        now = System.currentTimeMillis();
    }

    @Override
    public FunctionValues getValues(Map context,
            AtomicReaderContext readerContext) throws IOException {
        long[] times = cache.getLongs(readerContext.reader(), field, false);
        final float[] weights = new float[times.length];
        for (int i = 0; i < times.length; i++) {
            weights[i] = ScoreUtils.getNewsScoreFactor(now, times[i]);
        }
        return new FunctionValues() {
            @Override
            public float floatVal(int doc) {
                return weights[doc];
            }
        };
    }
}
It calculates every document's date-weight, but at any given moment only one
doc's date-weight is needed, so it runs slowly.

When I looked at the source code of the recip function in
org.apache.solr.search.ValueSourceParser, like this:

addParser("recip", new ValueSourceParser() {
  @Override
  public ValueSource parse(FunctionQParser fp) throws SyntaxError {
    ValueSource source = fp.parseValueSource();
    float m = fp.parseFloat();
    float a = fp.parseFloat();
    float b = fp.parseFloat();
    return new ReciprocalFloatFunction(source, m, a, b);
  }
});
and in ReciprocalFloatFunction, it gets the value like this:

@Override
public FunctionValues getValues(Map context, AtomicReaderContext
    readerContext) throws IOException {
  final FunctionValues vals = source.getValues(context, readerContext);
  return new FloatDocValues(this) {
    @Override
    public float floatVal(int doc) {
      return a/(m*vals.floatVal(doc) + b);
    }
    @Override
    public String toString(int doc) {
      return Float.toString(a) + "/("
          + m + "*float(" + vals.toString(doc) + ')'
          + '+' + b + ')';
    }
  };
}

So I think this is what I want.
When calculating a doc's date-weight, I needn't call cache.getLongs(x);
instead, I should call source.getValues(xxx).

Therefore I changed my code, but when fp.parseValueSource() runs, it throws
an error like this:
org.apache.solr.search.SyntaxError: Expected identifier at pos 12
str='dateDeboost()'

Have I described it clearly this time?

Thanks again!

sling




--
View this message in context: 
http://lucene.472066.n3.nabble.com/In-a-functon-query-I-can-t-get-the-ValueSource-when-extend-ValueSourceParser-tp4103026p4103207.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: a function query of time, frequency and score.

2013-11-25 Thread sling
Thanks, Erick.
What I want to do is customize the sort by date, time, and frequency.
I want to know whether there is a formula to handle this.

Thanks again!
sling


On Fri, Nov 22, 2013 at 9:11 PM, Erick Erickson [via Lucene] 
ml-node+s472066n4102599...@n3.nabble.com wrote:

 Not quite sure what you're asking. The field() function query brings the
 value of a field into the score, something like:
 http://localhost:8983/solr/select?wt=json&fl=id%20score&q={!boost%20b=field(popularity)}ipod


 Best,
 Erick


  On Thu, Nov 21, 2013 at 10:43 PM, sling [hidden email]
 wrote:

  Hi, guys.
 
  I indexed 1000 documents, which have fields like title, ptime and
  frequency.
 
  The title is a text fild, the ptime is a date field, and the frequency
 is a
  int field.
  Frequency field is ups and downs. say sometimes its value is 0, and
  sometimes its value is 999.
 
  Now, in my app, the query could work with function query well. The
 function
  query is implemented as the score multiplied by an decreased date-weight
  array.
 
  However, I have got no idea to add the frequency to this formula...
 
  so could someone give me a clue?
 
  Thanks again!
 
  sling
 
 
 
  --
  View this message in context:
 
 http://lucene.472066.n3.nabble.com/a-function-query-of-time-frequency-and-score-tp4102531.html
  Sent from the Solr - User mailing list archive at Nabble.com.
 







--
View this message in context: 
http://lucene.472066.n3.nabble.com/a-function-query-of-time-frequency-and-score-tp4102531p4103216.html
Sent from the Solr - User mailing list archive at Nabble.com.

a function query of time, frequency and score.

2013-11-21 Thread sling
Hi, guys.

I indexed 1000 documents, which have fields like title, ptime, and
frequency.

The title is a text field, ptime is a date field, and frequency is an int
field.
The frequency field fluctuates: sometimes its value is 0, and sometimes it
is 999.

Now, in my app, the query works well with the function query. The function
query is implemented as the score multiplied by a decreasing date-weight
array.

However, I have no idea how to add the frequency to this formula...

Could someone give me a clue?
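One possible direction (my own sketch, not something from this thread): fold
the 0-999 frequency into the existing date weight through a log scale, so
the boost grows smoothly instead of by three orders of magnitude:

```java
// Assumed combination formula: dateWeight * (1 + ln(1 + frequency)).
// log1p handles frequency = 0 cleanly, and 999 only multiplies by ~7.9.
public class TimeFreqBoost {
    static double boost(double dateWeight, int frequency) {
        return dateWeight * (1.0 + Math.log1p(frequency));
    }

    public static void main(String[] args) {
        System.out.println(boost(0.8, 0));   // frequency 0: date weight alone
        System.out.println(boost(0.8, 999)); // frequency 999: ~7.9x the weight
    }
}
```

Whether a log scale, a capped linear term, or a normalized ratio fits best
depends on how frequency is distributed in the index.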

Thanks again!

sling



--
View this message in context: 
http://lucene.472066.n3.nabble.com/a-function-query-of-time-frequency-and-score-tp4102531.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: how to avoid recover? how to ensure a recover success?

2013-10-22 Thread sling

There is an 8 GB index in each replica, with 8 nodes, 4 shards, and 4
collections in this application.

In the test environment, it handles 20 qps with no pressure, but the index
size is small too...



--
View this message in context: 
http://lucene.472066.n3.nabble.com/how-to-avoid-recover-how-to-ensure-a-recover-success-tp4096777p4096963.html
Sent from the Solr - User mailing list archive at Nabble.com.


how to avoid recover? how to ensure a recover success?

2013-10-21 Thread sling
Hi, guys:

I have an online application with SolrCloud 4.1, but I get syncpeer errors
every 2 or 3 weeks...
In my opinion, a recovery occurs when a replica cannot sync data from its
leader successfully.

I have seen the topic
http://lucene.472066.n3.nabble.com/SolrCloud-5x-Errors-while-recovering-td4022542.html
and https://issues.apache.org/jira/i#browse/SOLR-4032, but why do I still
get similar errors in SolrCloud 4.1?

So is there any setting for syncpeer?
How can I reduce the probability of this error?
When a recovery happens, how can I ensure its success?



The errors I got look like this:
[2013.10.21 10:39:13.482]2013-10-21 10:39:13,482 WARN
[org.apache.solr.handler.SnapPuller] - Error in fetching packets 
[2013.10.21 10:39:13.482]java.io.EOFException
[2013.10.21 10:39:13.482]   at
org.apache.solr.common.util.FastInputStream.readFully(FastInputStream.java:154)
[2013.10.21 10:39:13.482]   at
org.apache.solr.common.util.FastInputStream.readFully(FastInputStream.java:146)
[2013.10.21 10:39:13.482]   at
org.apache.solr.handler.SnapPuller$DirectoryFileFetcher.fetchPackets(SnapPuller.java:1136)
[2013.10.21 10:39:13.482]   at
org.apache.solr.handler.SnapPuller$DirectoryFileFetcher.fetchFile(SnapPuller.java:1099)
[2013.10.21 10:39:13.482]   at
org.apache.solr.handler.SnapPuller.downloadIndexFiles(SnapPuller.java:738)
[2013.10.21 10:39:13.482]   at
org.apache.solr.handler.SnapPuller.fetchLatestIndex(SnapPuller.java:395)
[2013.10.21 10:39:13.482]   at
org.apache.solr.handler.ReplicationHandler.doFetch(ReplicationHandler.java:274)
[2013.10.21 10:39:13.482]   at
org.apache.solr.cloud.RecoveryStrategy.replicate(RecoveryStrategy.java:153)
[2013.10.21 10:39:13.482]   at
org.apache.solr.cloud.RecoveryStrategy.doRecovery(RecoveryStrategy.java:409)
[2013.10.21 10:39:13.482]   at
org.apache.solr.cloud.RecoveryStrategy.run(RecoveryStrategy.java:223)
[2013.10.21 10:39:13.485]2013-10-21 10:39:13,485 WARN
[org.apache.solr.handler.SnapPuller] - Error in fetching packets 
[2013.10.21 10:39:13.485]java.io.EOFException
[2013.10.21 10:39:13.485]   at
org.apache.solr.common.util.FastInputStream.readFully(FastInputStream.java:154)
[2013.10.21 10:39:13.485]   at
org.apache.solr.common.util.FastInputStream.readFully(FastInputStream.java:146)
[2013.10.21 10:39:13.485]   at
org.apache.solr.handler.SnapPuller$DirectoryFileFetcher.fetchPackets(SnapPuller.java:1136)
[2013.10.21 10:39:13.485]   at
org.apache.solr.handler.SnapPuller$DirectoryFileFetcher.fetchFile(SnapPuller.java:1099)
[2013.10.21 10:39:13.485]   at
org.apache.solr.handler.SnapPuller.downloadIndexFiles(SnapPuller.java:738)
[2013.10.21 10:39:13.485]   at
org.apache.solr.handler.SnapPuller.fetchLatestIndex(SnapPuller.java:395)
[2013.10.21 10:39:13.485]   at
org.apache.solr.handler.ReplicationHandler.doFetch(ReplicationHandler.java:274)
[2013.10.21 10:39:13.485]   at
org.apache.solr.cloud.RecoveryStrategy.replicate(RecoveryStrategy.java:153)
[2013.10.21 10:39:13.485]   at
org.apache.solr.cloud.RecoveryStrategy.doRecovery(RecoveryStrategy.java:409)
[2013.10.21 10:39:13.485]   at
org.apache.solr.cloud.RecoveryStrategy.run(RecoveryStrategy.java:223)
[2013.10.21 10:41:08.461]2013-10-21 10:41:08,461 ERROR
[org.apache.solr.handler.ReplicationHandler] - SnapPull failed
:org.apache.solr.common.SolrException: Unable to download
_fi05_Lucene41_0.pos completely. Downloaded 0!=1485
[2013.10.21 10:41:08.461]   at
org.apache.solr.handler.SnapPuller$DirectoryFileFetcher.cleanup(SnapPuller.java:1230)
[2013.10.21 10:41:08.461]   at
org.apache.solr.handler.SnapPuller$DirectoryFileFetcher.fetchFile(SnapPuller.java:1110)
[2013.10.21 10:41:08.461]   at
org.apache.solr.handler.SnapPuller.downloadIndexFiles(SnapPuller.java:738)
[2013.10.21 10:41:08.461]   at
org.apache.solr.handler.SnapPuller.fetchLatestIndex(SnapPuller.java:395)
[2013.10.21 10:41:08.461]   at
org.apache.solr.handler.ReplicationHandler.doFetch(ReplicationHandler.java:274)
[2013.10.21 10:41:08.461]   at
org.apache.solr.cloud.RecoveryStrategy.replicate(RecoveryStrategy.java:153)
[2013.10.21 10:41:08.461]   at
org.apache.solr.cloud.RecoveryStrategy.doRecovery(RecoveryStrategy.java:409)
[2013.10.21 10:41:08.461]   at
org.apache.solr.cloud.RecoveryStrategy.run(RecoveryStrategy.java:223)
[2013.10.21 10:41:08.461]
[2013.10.21 10:41:08.461]2013-10-21 10:41:08,461 ERROR
[org.apache.solr.cloud.RecoveryStrategy] - Error while trying to
recover:org.apache.solr.common.SolrException: Replication for recovery
failed.
[2013.10.21 10:41:08.461]   at
org.apache.solr.cloud.RecoveryStrategy.replicate(RecoveryStrategy.java:156)
[2013.10.21 10:41:08.461]   at
org.apache.solr.cloud.RecoveryStrategy.doRecovery(RecoveryStrategy.java:409)
[2013.10.21 10:41:08.461]   at
org.apache.solr.cloud.RecoveryStrategy.run(RecoveryStrategy.java:223)
[2013.10.21 10:41:08.461]
[2013.10.21 10:41:08.555]2013-10-21 10:41:08,462 ERROR

why does a node switch state ?

2013-08-28 Thread sling
hi,
I have a solrcloud with 8 JVMs, which has 4 shards (2 nodes per shard).
1,000,000 docs are indexed per day, with 10 query requests per second, and
sometimes maybe 100 query requests per second.

In each shard, one JVM has 8 GB of RAM, and the other has 5 GB.

the jvm args is like this:
-Xmx5000m -Xms5000m -Xmn2500m -Xss1m -XX:PermSize=128m -XX:MaxPermSize=128m
-XX:SurvivorRatio=3 -XX:+UseParNewGC -XX:ParallelGCThreads=4
-XX:+UseConcMarkSweepGC -XX:CMSFullGCsBeforeCompaction=5
-XX:+UseCMSCompactAtFullCollection -XX:+PrintGCDateStamps -XX:+PrintGC
-Xloggc:log/jvmsolr.log
OR
-Xmx8000m -Xms8000m -Xmn2500m -Xss1m -XX:PermSize=128m -XX:MaxPermSize=128m
-XX:SurvivorRatio=3 -XX:+UseParNewGC -XX:ParallelGCThreads=8
-XX:+UseConcMarkSweepGC -XX:CMSFullGCsBeforeCompaction=5
-XX:+UseCMSCompactAtFullCollection -XX:+PrintGC -XX:+PrintGCDateStamps
-Xloggc:log/jvmsolr.log

The nodes work well, but they also switch state every day (at the same time,
GC becomes abnormal, as shown below).

2013-08-28T13:29:39.140+0800: 97180.866: [GC 3770296K-2232626K(4608000K),
0.0099250 secs]
2013-08-28T13:30:09.324+0800: 97211.050: [GC 3765732K-2241711K(4608000K),
0.0124890 secs]
2013-08-28T13:30:29.777+0800: 97231.504: [GC 3760694K-2736863K(4608000K),
0.0695530 secs]
2013-08-28T13:31:02.887+0800: 97264.613: [GC 4258337K-4354810K(4608000K),
0.1374600 secs]
97264.752: [Full GC 4354810K-2599431K(4608000K), 6.7833960 secs]
2013-08-28T13:31:09.884+0800: 97271.610: [GC 2750517K(4608000K), 0.0054320
secs]
2013-08-28T13:31:15.354+0800: 97277.080: [GC 3550474K(4608000K), 0.0871270
secs]
2013-08-28T13:31:31.258+0800: 97292.984: [GC 3877223K(4608000K), 0.1551870
secs]
2013-08-28T13:31:34.396+0800: 97296.123: [GC 3877223K(4608000K), 0.1220380
secs]
2013-08-28T13:31:38.102+0800: 97299.828: [GC 3877225K(4608000K), 0.1545500
secs]
2013-08-28T13:31:40.227+0800: 97303.019: [Full GC
4174941K-2127315K(4608000K), 6.3435150 secs]
2013-08-28T13:31:49.645+0800: 97311.371: [GC 2508466K(4608000K), 0.0355180
secs]
2013-08-28T13:31:57.645+0800: 97319.371: [GC 2967737K(4608000K), 0.0579650
secs]

Even worse, sometimes a whole shard goes down (one node recovering, the
other down), which is an absolute disaster...

Please help me. Any advice is welcome...
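A hedged aside (an assumption, not a verified fix for this cluster): the
full GCs in the log above pause for 6-7 seconds, which is in the territory
where a node can miss its ZooKeeper session and switch state. CMS tunings
along these lines are commonly tried to keep stop-the-world pauses short:

```
-Xmx5000m -Xms5000m -Xmn1200m           # smaller young gen: shorter minor GCs
-XX:+UseConcMarkSweepGC
-XX:+CMSParallelRemarkEnabled           # parallelize the remark pause
-XX:CMSInitiatingOccupancyFraction=70   # start CMS earlier, before a full GC is forced
-XX:+UseCMSInitiatingOccupancyOnly
```

Whether these help depends on the heap-usage pattern; GC logs before and
after the change are the only real arbiter.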



--
View this message in context: 
http://lucene.472066.n3.nabble.com/why-does-a-node-switch-state-tp4086939.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: why does a node switch state ?

2013-08-28 Thread sling
Hi Daniel, thank you very much for your reply.
However, the zkClientTimeout in my solr.xml is 30s.

<cores adminPath="/admin/cores" defaultCoreName="doc"
       host="${host:215.lead.index.com}" hostPort="${jetty.port:9090}"
       hostContext="${hostContext:}"
       zkClientTimeout="${zkClientTimeout:3}"
       leaderVoteWait="${leaderVoteWait:2}">
...
</cores>







--
View this message in context: 
http://lucene.472066.n3.nabble.com/why-does-a-node-switch-state-tp4086939p4087142.html
Sent from the Solr - User mailing list archive at Nabble.com.


in solrcloud, how to assign a schemaConf to a collection?

2013-04-19 Thread sling
hi all, help~~~
How do I specify a schema for a collection in solrcloud?

I have a solrcloud with 3 collections, and each config file is uploaded to
ZK like this:
args=-Xmn3000m -Xms5000m -Xmx5000m -XX:MaxPermSize=384m
-Dbootstrap_confdir=/workspace/solr/solrhome/doc/conf
-Dcollection.configName=docconf -DzkHost=zk1:2181,zk2:2181,zk3:2181
-DnumShards=3 -Dname=docCollection

The solr.xml is like this:
  <cores ...>
    <core name="doc" instanceDir="doc/" loadOnStartup="true"
          transient="false" collection="docCollection" />
    <core name="video" instanceDir="video/" loadOnStartup="true"
          transient="false" collection="videoCollection" />
    <core name="pic" instanceDir="pic/" loadOnStartup="true"
          transient="false" collection="picCollection" />
  </cores>

Then, when all the nodes start up, I find that the schemas of two
collections (doc and video) are the same, and the schema of pic is also
wrong.

Are there some properties in <core> that can specify its own schema?

Thanks for any help...







--
View this message in context: 
http://lucene.472066.n3.nabble.com/in-solrcoud-how-to-assign-a-schemaConf-to-a-collection-tp4057238.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: in solrcloud, how to assign a schemaConf to a collection?

2013-04-19 Thread sling
When I add a schema property to the core:
<core name="pic" instanceDir="pic/" loadOnStartup="true" transient="false"
      collection="picCollection"
      config="solrconfig.xml" schema="../picconf/schema.xml" />
it seems there is a default path for the schema, which is /configs/docconf/.
The exception is:
[18:59:09.211] java.lang.IllegalArgumentException: Invalid path string
/configs/docconf/../picconf/schema.xml caused by relative paths not
allowed @18
[18:59:09.211]  at
org.apache.zookeeper.common.PathUtils.validatePath(PathUtils.java:99)
[18:59:09.211]  at
org.apache.zookeeper.ZooKeeper.getData(ZooKeeper.java:1133)
[18:59:09.211]  at
org.apache.solr.common.cloud.SolrZkClient$7.execute(SolrZkClient.java:253)
[18:59:09.211]  at
org.apache.solr.common.cloud.SolrZkClient$7.execute(SolrZkClient.java:250)
[18:59:09.211]  at
org.apache.solr.common.cloud.ZkCmdExecutor.retryOperation(ZkCmdExecutor.java:65)
[18:59:09.211]  at
org.apache.solr.common.cloud.SolrZkClient.getData(SolrZkClient.java:250)
[18:59:09.211]  at
org.apache.solr.cloud.ZkController.getConfigFileData(ZkController.java:388)
[18:59:09.211]  at
org.apache.solr.core.CoreContainer.getSchemaFromZk(CoreContainer.java:1659)
[18:59:09.211]  at
org.apache.solr.core.CoreContainer.createFromZk(CoreContainer.java:948)
[18:59:09.211]  at
org.apache.solr.core.CoreContainer.create(CoreContainer.java:1031)
[18:59:09.211]  at
org.apache.solr.core.CoreContainer$3.call(CoreContainer.java:629)
[18:59:09.211]  at
org.apache.solr.core.CoreContainer$3.call(CoreContainer.java:624)
[18:59:09.211]  at
java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
[18:59:09.211]  at java.util.concurrent.FutureTask.run(FutureTask.java:138)
[18:59:09.211]  at
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
[18:59:09.211]  at
java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
[18:59:09.211]  at java.util.concurrent.FutureTask.run(FutureTask.java:138)
[18:59:09.211]  at
java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
[18:59:09.211]  at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
[18:59:09.211]  at java.lang.Thread.run(Thread.java:619)



--
View this message in context: 
http://lucene.472066.n3.nabble.com/in-solrcoud-how-to-assign-a-schemaConf-to-a-collection-tp4057238p4057250.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: in solrcloud, how to assign a schemaConf to a collection?

2013-04-19 Thread sling
I copied the 3 schema.xml and solrconfig.xml files to $solrhome/conf/*.xml,
and uploaded this directory to ZK like this:
args=-Xmn1000m -Xms2000m -Xmx2000m -XX:MaxPermSize=384m
-Dbootstrap_confdir=/home/app/workspace/solrcloud/solr/solrhome/conf
-Dcollection.configName=conf -DzkHost=zk1:2181,zk2:2181,zk3:2181
-DnumShards=2 -Dname=docCollection

Then in solr.xml, it changes to:
<core name="doc" instanceDir="doc/" loadOnStartup="true" transient="false"
      collection="docCollection" schema="s1.xml" config="sc1.xml" />

In this way, the schema.xml files are separated.

It seems the schema and config properties have a relative path of
/configs/conf, which is what I uploaded from local;
$solrhome/conf is equal to /configs/conf.





--
View this message in context: 
http://lucene.472066.n3.nabble.com/in-solrcoud-how-to-assign-a-schemaConf-to-a-collection-tp4057238p4057254.html
Sent from the Solr - User mailing list archive at Nabble.com.


solr4.1 No live SolrServers available to handle this request

2013-04-01 Thread sling
hi, all. I am new to Solr.
When I query solrcloud 4.1 with SolrJ, the client throws exceptions as
follows.
There are 2 shards in my solrcloud.
Each shard is on a server with 4 CPUs / 3 GB RAM, and the JVM has 2 GB.
When the query requests get more and more, the exception occurs.
 [java] org.apache.solr.client.solrj.SolrServerException: No live
SolrServers available to handle this request
 [java] at
org.apache.solr.client.solrj.impl.LBHttpSolrServer.request(LBHttpSolrServer.java:486)
 [java] at
org.apache.solr.client.solrj.request.QueryRequest.process(QueryRequest.java:90)
 [java] at
org.apache.solr.client.solrj.SolrServer.query(SolrServer.java:301)
 [java] at
com.netease.index.service.impl.SearcherServiceImpl.search(Unknown Source)
 [java] at com.netease.index.util.ConSearcher.run(Unknown Source)
 [java] at
java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
 [java] at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
 [java] at java.lang.Thread.run(Thread.java:662)
 [java] Caused by: org.apache.solr.client.solrj.SolrServerException:
IOException occured when talking to server at: http://cms.test.com/solr/doc
 [java] at
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:416)
 [java] at
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:181)
 [java] at
org.apache.solr.client.solrj.impl.LBHttpSolrServer.request(LBHttpSolrServer.java:439)
 [java] ... 7 more
 [java] Caused by: org.apache.http.conn.ConnectionPoolTimeoutException:
Timeout waiting for connection from pool
 [java] at
org.apache.http.impl.conn.tsccm.ConnPoolByRoute.getEntryBlocking(ConnPoolByRoute.java:416)
 [java] at
org.apache.http.impl.conn.tsccm.ConnPoolByRoute$1.getPoolEntry(ConnPoolByRoute.java:299)
 [java] at
org.apache.http.impl.conn.tsccm.ThreadSafeClientConnManager$1.getConnection(ThreadSafeClientConnManager.java:242)
 [java] at
org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:455)
 [java] at
org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:906)
 [java] at
org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:805)
 [java] at
org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:784)
 [java] at
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:353)
 [java] ... 9 more

ps: LBHttpSolrServer seems to distribute requests unevenly... some nodes get
a much heavier load, while others may not. I use nginx so that the load can
be more controllable. Is this right?


please help me out, Thank you in advance. ^_^








--
View this message in context: 
http://lucene.472066.n3.nabble.com/solr4-1-No-live-SolrServers-available-to-handle-this-request-tp4052862.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: solr4.1 No live SolrServers available to handle this request

2013-04-01 Thread sling
Thanks for your reply.
my solr.xml is like this:
<solr persistent="true">
  <cores adminPath="/admin/cores" defaultCoreName="doc"
         host="${host:cms1.test.com}" hostPort="${jetty.port:9090}"
         hostContext="${hostContext:}"
         zkClientTimeout="${zkClientTimeout:3}"
         leaderVoteWait="${leaderVoteWait:2}">
    <core name="doc" instanceDir="doc/" loadOnStartup="true"
          transient="false" collection="docCollection" />
  </cores>
</solr>

I have changed the zkClientTimeout from 15s to 30s, but this exception still
shows up.
The load on the solrcloud servers is not too heavy; it is around 1.4, 1.5, 1.

These disconnects appear in the solrj logs, while the solrcloud itself is
fine.



--
View this message in context: 
http://lucene.472066.n3.nabble.com/solr4-1-No-live-SolrServers-available-to-handle-this-request-tp4052862p4053075.html
Sent from the Solr - User mailing list archive at Nabble.com.