Re: Can files be faceted based on their size ?

2011-11-19 Thread neuron005
But sir, fileSize is of type string; how will it compare?
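A quick illustration (not from the thread) of why a string field misorders sizes, and hence why size faceting or range queries want a numeric field type (e.g. a Trie long field in schema.xml) rather than a string:

```java
public class StringSizeCompare {
    public static void main(String[] args) {
        // Lexicographic comparison: "9" sorts AFTER "10" because '9' > '1'
        System.out.println("9".compareTo("10") > 0);    // prints true
        // Numeric comparison orders as expected: 9 < 10
        System.out.println(Long.compare(9L, 10L) < 0);  // prints true
    }
}
```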

--
View this message in context: 
http://lucene.472066.n3.nabble.com/Can-files-be-faceted-based-on-their-size-tp3518393p3520569.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: fieldCache problem OOM exception

2011-11-19 Thread erolagnab
@topcat: you need to call close() method for solr request after using them.
In general,

// SolrQueryRequest is an interface, so it can't be instantiated directly;
// LocalSolrQueryRequest is one concrete implementation
SolrQueryRequest request = new LocalSolrQueryRequest(core, params);
try {
    // ... use the request ...
} finally {
    request.close();  // releases the underlying searcher reference
}

--
View this message in context: 
http://lucene.472066.n3.nabble.com/fieldCache-problem-OOM-exception-tp3067057p3520595.html
Sent from the Solr - User mailing list archive at Nabble.com.



To push the terms.limit parameter from the master core to all the shard cores.

2011-11-19 Thread mechravi25
Hi,

When we pass the terms.limit parameter to the master (which has many shard
cores), it's not getting pushed down to the individual cores. Instead, the
default value of -1 is assigned to the terms.limit parameter in the
underlying shard cores. The issue is that the time taken by the master core
to return the required limit of terms grows with the number of underlying
shard cores. This affects the performance of the auto-suggest feature.

*Is there any way that we can explicitly override the default value of -1
being set for terms.limit in the shard cores?*

We looked at the source code (TermsComponent.java) and confirmed the same.
Please help us in pushing the terms.limit parameter to the shard cores.

PFB code snippet.

  private ShardRequest createShardQuery(SolrParams params) {
    ShardRequest sreq = new ShardRequest();
    sreq.purpose = ShardRequest.PURPOSE_GET_TERMS;

    // base shard request on original parameters
    sreq.params = new ModifiableSolrParams(params);

    // remove any limits for shards, we want them to return all possible
    // responses
    // we want this so we can calculate the correct counts
    // dont sort by count to avoid that unnecessary overhead on the shards
    sreq.params.remove(TermsParams.TERMS_MAXCOUNT);
    sreq.params.remove(TermsParams.TERMS_MINCOUNT);
    sreq.params.set(TermsParams.TERMS_LIMIT, -1);
    sreq.params.set(TermsParams.TERMS_SORT, TermsParams.TERMS_SORT_INDEX);

    return sreq;
  }

Solr version :
Solr Specification Version: 1.4.0.2010.01.13.08.09.44 
Solr Implementation Version: 1.5-dev exported - yonik - 2010-01-13 08:09:44 
Lucene Specification Version: 2.9.1-dev 
Lucene Implementation Version: 2.9.1-dev 888785 - 2009-12-09 18:03:31 


Thanks,
Sivagnesh


--
View this message in context: 
http://lucene.472066.n3.nabble.com/To-push-the-terms-limit-parameter-from-the-master-core-to-all-the-shard-cores-tp3520609p3520609.html
Sent from the Solr - User mailing list archive at Nabble.com.


RE: Jetty logging

2011-11-19 Thread darul
test

--
View this message in context: 
http://lucene.472066.n3.nabble.com/Jetty-logging-tp3476715p3520897.html
Sent from the Solr - User mailing list archive at Nabble.com.


Delta Query Exception

2011-11-19 Thread David T. Webb
I'm sure that my deltaQueries are causing this issue, but I have the
logging turned on at FINEST.  It would be great if this exception were
handled properly and the failing PK test was also displayed.  I will
open a Jira for this request, but does anyone have any pointers on how
to determine which deltaQuery may be causing this to fail?

 

java.lang.NullPointerException
	at org.apache.solr.handler.dataimport.DocBuilder.findMatchingPkColumn(DocBuilder.java:839)
	at org.apache.solr.handler.dataimport.DocBuilder.collectDelta(DocBuilder.java:900)
	at org.apache.solr.handler.dataimport.DocBuilder.collectDelta(DocBuilder.java:879)
	at org.apache.solr.handler.dataimport.DocBuilder.doDelta(DocBuilder.java:285)
	at org.apache.solr.handler.dataimport.DocBuilder.execute(DocBuilder.java:179)
	at org.apache.solr.handler.dataimport.DataImporter.doDeltaImport(DataImporter.java:390)
	at org.apache.solr.handler.dataimport.DataImporter.runCmd(DataImporter.java:429)
	at org.apache.solr.handler.dataimport.DataImporter$1.run(DataImporter.java:408)
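Not from the original message, but for anyone else hitting this: a commonly reported cause of an NPE in findMatchingPkColumn is an entity pk attribute that doesn't match any column name returned by the deltaQuery (case-sensitive). A hedged sketch of an aligned DIH entity — table, column, and variable names here are illustrative:

```xml
<!-- Illustrative DIH entity: pk="ID" must match a column that
     deltaQuery actually returns, or collectDelta can NPE. -->
<entity name="item" pk="ID"
        query="SELECT ID, NAME FROM ITEM"
        deltaQuery="SELECT ID FROM ITEM
                    WHERE LAST_MODIFIED &gt; '${dataimporter.last_index_time}'"
        deltaImportQuery="SELECT ID, NAME FROM ITEM
                          WHERE ID = '${dataimporter.delta.ID}'"/>
```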

 

--

Sincerely,

David Webb, President

BrightMove, Inc.

http://www.brightmove.com/

320 High Tide Dr, Suite 201

Saint Augustine Beach, FL 32080

(904) 861-2396

(866) 895-6299 (Fax)



Re: To push the terms.limit parameter from the master core to all the shard cores.

2011-11-19 Thread Mark Miller

On Nov 19, 2011, at 4:19 AM, mechravi25 wrote:

> Hi,
>
> When we pass the terms.limit parameter to the master (which has many shard
> cores), it's not getting pushed down to the individual cores. Instead the
> default value of -1 is assigned to Terms.limit parameter is assigned in the
> underlying shard cores. The issue being the time taken by the Master core to
> return the required limit of terms is higher when we are having more number
> of underlying shard cores. This affects the performances of the auto suggest
> feature.
>
> *Is there any way that we can explicitly override the default value -1 being
> set to Terms.limit in shards core.*

Yuck. Maybe you should make a JIRA issue to allow control of this? If you don't 
want perfect results and this is too slow, seems it would be nice to be able to 
specify different values here: one for the total returned, and one for the 
limit used on shards. You do have to keep in mind that results will become 
imperfect depending on the limit used.
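As a toy illustration (not Solr code) of why per-shard limits make merged counts imperfect: if each shard returns only its top-k terms, a term that ranks just below the cutoff on every shard is dropped everywhere, even when its global total is the highest.

```java
import java.util.*;

public class ShardLimitDemo {
    // Merge per-shard term counts, where each shard returns only its top-k terms
    static Map<String, Integer> merge(List<Map<String, Integer>> shards, int k) {
        Map<String, Integer> merged = new HashMap<>();
        for (Map<String, Integer> shard : shards) {
            shard.entrySet().stream()
                 .sorted((a, b) -> b.getValue() - a.getValue())  // highest counts first
                 .limit(k)                                        // per-shard truncation
                 .forEach(e -> merged.merge(e.getKey(), e.getValue(), Integer::sum));
        }
        return merged;
    }

    public static void main(String[] args) {
        Map<String, Integer> shard1 = Map.of("apple", 10, "banana", 9);
        Map<String, Integer> shard2 = Map.of("cherry", 10, "banana", 9);
        // No truncation (k=2): banana's true global count, 18, is the top term
        System.out.println(merge(List.of(shard1, shard2), 2).get("banana")); // prints 18
        // k=1 per shard: banana loses to apple/cherry locally and vanishes
        System.out.println(merge(List.of(shard1, shard2), 1).get("banana")); // prints null
    }
}
```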

As a workaround, you can set the terms.limit as an invariant on your /terms 
request handler at each shard - this will force the limit to n, regardless of 
what the request asks for at each shard:

 <requestHandler name="/terms" class="org.apache.solr.handler.component.SearchHandler">
   <lst name="invariants">
     <int name="terms.limit">n</int>
   </lst>
   <arr name="components">
     <str>termsComp</str>
   </arr>
 </requestHandler>

- Mark Miller
lucidimagination.com





jetty error, broken pipe

2011-11-19 Thread alxsss
Hello,

I use solr 3.4 with jetty that is included in it. Periodically, I see this 
error in the jetty output

SEVERE: org.mortbay.jetty.EofException
at org.mortbay.jetty.HttpGenerator.flush(HttpGenerator.java:791)
at 
org.mortbay.jetty.AbstractGenerator$Output.flush(AbstractGenerator.java:569)
at 
org.mortbay.jetty.HttpConnection$Output.flush(HttpConnection.java:1012)
at sun.nio.cs.StreamEncoder.implFlush(StreamEncoder.java:296)
at sun.nio.cs.StreamEncoder.flush(StreamEncoder.java:140)
at java.io.OutputStreamWriter.flush(OutputStreamWriter.java:229)
...
...
...
Caused by: java.net.SocketException: Broken pipe
at java.net.SocketOutputStream.socketWrite0(Native Method)
at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:109)
at java.net.SocketOutputStream.write(SocketOutputStream.java:153)
at org.mortbay.io.ByteArrayBuffer.writeTo(ByteArrayBuffer.java:368)
at org.mortbay.io.bio.StreamEndPoint.flush(StreamEndPoint.java:129)
at org.mortbay.io.bio.StreamEndPoint.flush(StreamEndPoint.java:161)
at org.mortbay.jetty.HttpGenerator.flush(HttpGenerator.java:714)
... 25 more

2011-11-19 20:50:00.060:WARN::Committed before 500 
null||org.mortbay.jetty.EofException|?at 
org.mortbay.jetty.HttpGenerator.flush(HttpGenerator.java:791)|?at 
org.mortbay.jetty.AbstractGenerator$Output.flush(AbstractGenerator.java:569)|?at
 org.mortbay.jetty.HttpConnection$Output.flush(HttpConnection.java:1012)|?at 
sun.nio.cs.StreamEncoder.implFlush(S

I searched the web, and the only advice I found was to upgrade to jetty 6.1, but I
believe the version included in solr is already 6.1.26.

Any advice is appreciated.


Thanks.
Alex.


Re: jetty error, broken pipe

2011-11-19 Thread Fuad Efendi
It's not Jetty. It's a broken TCP pipe caused by the client side; it happens when
the client closes the TCP connection.

And I even had this problem with recent Tomcat 6.


Problem disappeared after I explicitly tuned keep-alive at Tomcat, and started 
using monitoring thread with HttpClient and SOLRJ... 
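For reference, keep-alive on a Tomcat 6 HTTP connector is tuned on the Connector element in server.xml; the values below are purely illustrative, not the settings Fuad used:

```xml
<!-- server.xml: illustrative keep-alive tuning on the HTTP connector -->
<Connector port="8080" protocol="HTTP/1.1"
           connectionTimeout="20000"
           keepAliveTimeout="60000"
           maxKeepAliveRequests="100"/>
```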

Fuad Efendi
http://www.tokenizer.ca




Sent from my iPad



Re: jetty error, broken pipe

2011-11-19 Thread alxsss
I found out that the curl timeout was set to 10, so for queries taking longer than
10 sec it was closing the connection to jetty.
I noticed that when the number of docs found is large, solr takes about 20 sec to
return results. This is too long. I set caching to off, but it did not help.
I think solr spends too much time finding the total number of docs. Is there a way
to turn off this count?

Thanks.
Alex.

 

 