From: Rahul R
Sent: Friday, June 07, 2013 1:21 AM
To: solr-user@lucene.apache.org
Subject: Re: OR query with null value and non-null value(s)
Thank you Shawn. This does work. To help me understand better, why do
we need the *:* ? Shouldn't it be implicit ?
Shouldn't
fq=(price:4+OR+(-price
I have recently enabled facet.missing=true in solrconfig.xml which gives
null facet values also. As I understand it, the syntax to do a faceted
search on a null value is something like this:
fq=-price:[* TO *]
So when I want to search on a particular value (for example : 4) OR null
value, I would
, 2013 at 12:07 AM, Shawn Heisey s...@elyograg.org wrote:
On 6/6/2013 12:28 PM, Rahul R wrote:
I have recently enabled facet.missing=true in solrconfig.xml which gives
null facet values also. As I understand it, the syntax to do a faceted
search on a null value is something like this:
fq
Hoss,
We rely heavily on facet.mincount because once a user has selected a facet,
it doesn't make sense for us to show that facet field to him and let him
filter again with the same facet. Also, when a facet has only one value, it
doesn't make sense to show it to the user, since searching with
All,
We had a requirement in our solr powered application where customers want
to see all the documents that have a blank value for a field. So when they
facet on a field, if the field has null values, they should be able to select
that facet value and see all documents. I thought facet.missing=true
Hello,
I am trying to understand how I can size the caches for my solr powered
application. Some details on the index and application :
Solr Version : 1.3
JDK : 1.5.0_14 32 bit
OS : Solaris 10
App Server : Weblogic 10 MP1
Number of documents : 1 million
Total number of fields : 1000 (750 strings,
performance with all my fields as
multiValued fields ?
Appreciate any help on this. Thank you.
- Rahul
On Mon, May 7, 2012 at 7:23 PM, Rahul R rahul.s...@gmail.com wrote:
Jack,
Sorry for the delayed response:
Total memory allocated : 3GB
Free Memory on startup of application server : 2.85GB
facets.) Just to see how close you are to the
edge even before a volume of queries start coming in.
-- Jack Krupansky
-Original Message- From: Rahul R
Sent: Thursday, May 03, 2012 1:28 AM
To: solr-user@lucene.apache.org
Subject: Re: Lucene FieldCache - Out of memory exception
and
there is no improvement with either.
Appreciate any help on this. Thank you.
- Rahul
On Mon, Apr 30, 2012 at 2:53 PM, Rahul R rahul.s...@gmail.com wrote:
Hello,
I am using solr 1.3 with jdk 1.5.0_14 and weblogic 10MP1 application
server on Solaris. I use embedded solr server. More details :
Number of docs
. It
is probably not the 50-70 number, but the 440 or accumulated number across
many queries that pushes the memory usage up.
When you hit OOM, what does the Solr admin stats display say for
FieldCache?
-- Jack Krupansky
-Original Message- From: Rahul R
Sent: Wednesday, May 02, 2012
Hello,
A related question on this topic. How do I programmatically find the total
number of documents across many shards ? For EmbeddedSolrServer, I use the
following command to get the total count :
solrSearcher.getStatistics().get("numDocs")
With distributed search, how do i get the count of all
Hello,
I am using solr 1.3 with jdk 1.5.0_14 and weblogic 10MP1 application server
on Solaris. I use embedded solr server. More details :
Number of docs in solr index : 1.4 million
Physical size of index : 640MB
Total number of fields in the index : 700 (99% of these are dynamic fields)
Total
Hello,
Since Apache Solr is governed by Apache License 2.0 - does it mean that all
jar files bundled within Solr are also governed by the same License ? Do I
have to worry about checking the License information of all bundled jar
files in my commercial Solr powered application ?
Even if I use
Chris,
I am using SolrIndexSearcher to get a handle to the total number of records
in the index. I am doing it like this :
int num =
Integer.parseInt(solrSearcher.getStatistics().get("numDocs").toString());
Please let me know if there is a better way to do this.
Mark,
I can tell you what I
I am not sure what you mean by multi-user scenario.
I have an application deployed on an application server (Weblogic). This
application uses solr to query an index. Users (sessions) will log in to the
application, query and then log out. This login and logout has nothing to do
with solr but
Thank you I found the API to get the existing SolrIndexSearcher to be
present in SolrCore:
SolrCore.getSearcher().get()
So if now the Index changes (a commit is done) in between, will I
automatically get the new SolrIndexSearcher from this call ?
Regards
Rahul
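To the question above: each call to SolrCore.getSearcher() returns the searcher registered at that moment, so a call made after a commit has registered a new searcher will return the new one, while a reference you are already holding does not change underneath you. A sketch of the usual pattern (Solr 1.3-era API, not compilable on its own without the Solr jars), including the decref() that releases the reference:

```java
// core is an org.apache.solr.core.SolrCore
RefCounted<SolrIndexSearcher> holder = core.getSearcher();
try {
    SolrIndexSearcher searcher = holder.get();
    // use the searcher; a commit in another thread will not
    // swap this reference out from under you
} finally {
    holder.decref(); // release it, so an old searcher can eventually close
}
```

Forgetting decref() keeps superseded searchers (and their caches) alive, which looks exactly like a memory leak.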
On Mon, May 24, 2010 at 11:25
Mitch,
Thank you for your response. A few follow up questions for clarification :
That means one IndexSearcher + its caches has a lifetime of one commit.
In my case, I have an index which will not be modified after creation. Does
this mean that in a multi-user scenario, I can have a static
Hello all,
I have a few questions w.r.t the caches and the IndexSearcher available in
solr. I am using solr 1.3.
- The solr wiki states that the caches are per IndexSearcher object, i.e. if I
set my filterCache size to 1000 it means that 1000 entries can be assigned
for every IndexSearcher object.
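That is the documented behavior: each cache is sized per searcher instance. For reference, a sketch of the corresponding solrconfig.xml entry (the sizes here are illustrative, not a recommendation):

```xml
<filterCache class="solr.LRUCache"
             size="1000"
             initialSize="512"
             autowarmCount="128"/>
```

With autowarmCount set, the most recently used entries are regenerated into the new searcher's cache on commit, which trades warm-up time for fewer cold queries.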
, Steven A Rowe sar...@syr.edu wrote:
Hi Rahul,
On 11/26/2009 at 12:53 AM, Rahul R wrote:
Is there a way by which I can prevent the WordDelimiterFilterFactory
from totally acting on numerical data ?
prevent ... from totally acting on is pretty vague, and nowhere AFAICT do
you say precisely what
Hello,
Would really appreciate any inputs/suggestions on this. Thank you.
On Tue, Nov 24, 2009 at 10:59 PM, Rahul R rahul.s...@gmail.com wrote:
Hello,
In our application we have a catch-all field (the 'text' field) which is
configured as the default search field. Now this field will have
Hello,
In our application we have a catch-all field (the 'text' field) which is
configured as the default search field. Now this field will have a
combination of numbers, alphabets, special characters etc. I have a
requirement wherein the WordDelimiterFilterFactory does not work on numbers,
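For what it's worth, one common approach (a sketch, not necessarily the answer given later in the thread) is to switch off the number-related options of the filter so numeric tokens pass through largely untouched:

```xml
<filter class="solr.WordDelimiterFilterFactory"
        generateWordParts="1"
        generateNumberParts="0"
        catenateWords="1"
        catenateNumbers="0"
        catenateAll="0"
        splitOnCaseChange="1"/>
```

Note the caveat: this stops the filter from emitting number sub-parts, but mixed alphanumeric tokens are still split at letter/digit boundaries, so it may not fully cover the requirement.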
=S3552&debugQuery=true
Other information
Solr 1.3, JDK 1.5.0_14
regards
Rahul
On Mon, Sep 28, 2009 at 6:48 PM, Yonik Seeley yo...@lucidimagination.com wrote:
On Mon, Sep 28, 2009 at 7:51 AM, Rahul R rahul.s...@gmail.com wrote:
Yonik,
I understand that the network can be a bottle-neck but I am
can look to improve accordingly. Thank you.
Regards
Rahul
On Tue, Sep 29, 2009 at 7:12 PM, Rahul R rahul.s...@gmail.com wrote:
Sorry for the delayed response
How big are your documents?
I have totally 1 million documents. I have totally 1950 fields in the
index. Every document would
Hello,
I am trying to measure why some of my queries take a long time. I am using
EmbeddedSolrServer and with logging statements before and
after the EmbeddedSolrServer.query(SolrQuery) function, I have found the
time to be around 16s. I added the debugQuery=true and the timing component
for this
Hello,
A rather trivial question on omitNorms parameter in schema.xml. The
out-of-the-box schema.xml uses this parameter both within
the fieldType tag and the field tag. If we define omitNorms in
the fieldType definition, will it hold good for all fields that are defined
using the
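For reference, omitNorms can be set at either level in schema.xml: fields inherit the attribute from their fieldType, and a field-level setting overrides the type default. A sketch (field names illustrative):

```xml
<fieldType name="string" class="solr.StrField" omitNorms="true"/>
<!-- inherits omitNorms="true" from the type -->
<field name="sku" type="string" indexed="true" stored="true"/>
<!-- field-level setting overrides the type default -->
<field name="title" type="string" indexed="true" stored="true" omitNorms="false"/>
```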
Would appreciate any help on this. Thanks
Rahul
On Mon, Sep 14, 2009 at 5:12 PM, Rahul R rahul.s...@gmail.com wrote:
Hello,
I have a few questions regarding the copyField directive in schema.xml
1. Does the destination field store a reference or the actual data ?
If I have something like
shalinman...@gmail.com wrote:
On Mon, Sep 14, 2009 at 5:12 PM, Rahul R rahul.s...@gmail.com wrote:
Hello,
I have a few questions regarding the copyField directive in schema.xml
1. Does the destination field store a reference or the actual data ?
It makes a copy. Storing or indexing
Hello,
I have a few questions regarding the copyField directive in schema.xml
1. Does the destination field store a reference or the actual data ?
If I have something like this
<copyField source="name" dest="text"/>
then will the values in the 'name' field get copied into the 'text' field or
will the
Hello,
I am trying to measure the benefit that I am getting out of using the filter
cache. As I understand, there are two major parts to an fq query. Please
correct me if I am wrong :
- doing full index queries of each of the fq params (if filter cache is
used, this result will be retrieved from
Thank you Martijn.
On Tue, Sep 1, 2009 at 8:07 PM, Martijn v Groningen
martijn.is.h...@gmail.com wrote:
Hi Rahul,
Yes, your understanding is correct, but it is not possible to
monitor these actions separately with Solr.
Martijn
2009/9/1 Rahul R rahul.s...@gmail.com:
Hello,
I am
*release any SOLR resources - no need.*
My query is answered. Thank you.
Regards
Rahul
On Mon, Aug 24, 2009 at 12:32 AM, Fuad Efendi f...@efendi.ca wrote:
Truly correct:
- SOLR does not create HttpSession for user access to Admin screens (do we
have any other screens of UI?)
- SolrCore is
applications in a same
container?
Are you trying to close shared SolrCore when one of many users (of
another
application) logs off?
Usually one needs to clean up only user-session specific objects (such
as
non-persistent shopping cart)...
-Original Message-
From: Rahul R
...
Applicable to non-tokenized single-valued non-boolean fields only, Lucene
internals, FieldCache... and it won't be GC-collected after user log-off...
prefer dedicated box for SOLR.
-Fuad
-Original Message-
From: Rahul R [mailto:rahul.s...@gmail.com]
Sent: August-19-09 6:19 AM
To: solr
Hello,
Can somebody give me some pointers on the Solr objects I need to clean
up/release while doing a logout on a Solr Application. I find that only the
SolrCore object has a close() method. I typically do a lot of faceting
queries on a large dataset with my application. I am using Solr 1.3.0.
e) {
SolrException.log(log,e);
sendErr(500, SolrException.toStr(e), request, response);
} finally {
Rahul R wrote:
Otis,
Thank you for your response. I know there are a few variables here but
the
difference in memory utilization with and without shards somehow leads
after I started to use 16Gb RAM for SOLR
instance (almost a year without any restart!)
-Original Message-
From: Rahul R [mailto:rahul.s...@gmail.com]
Sent: August-13-09 1:25 AM
To: solr-user@lucene.apache.org
Subject: Re: JVM Heap utilization Memory leaks with Solr
*You should
how it goes. Thanks for your input.
Rahul
On Wed, Aug 12, 2009 at 2:15 PM, Gunnar Wagenknecht
gun...@wagenknecht.org wrote:
Rahul R schrieb:
I tried using a profiling tool - Yourkit. The trial version was free for
15
days. But I couldn't find anything of significance.
You should try
is first sent to the server (with which SolrServer is initialized)
and from there it is sent to all the other shards ?
Regards
Rahul
On Tue, Aug 4, 2009 at 2:29 PM, Shalin Shekhar Mangar
shalinman...@gmail.com wrote:
On Tue, Aug 4, 2009 at 11:26 AM, Rahul R rahul.s...@gmail.com wrote:
Philip
Shalin, thank you for the clarification.
Philip, I just realized that I have diverted the original topic of the
thread. My apologies.
Regards
Rahul
On Tue, Aug 4, 2009 at 3:35 PM, Shalin Shekhar Mangar
shalinman...@gmail.com wrote:
On Tue, Aug 4, 2009 at 2:37 PM, Rahul R rahul.s...@gmail.com
are dealing with the JVM here. :)
Try jmap -histo:live PID-HERE | less and see what's using your memory.
Otis
--
Sematext is hiring -- http://sematext.com/about/jobs.html?mls
Lucene, Solr, Nutch, Katta, Hadoop, HBase, UIMA, NLP, NER, IR
- Original Message
From: Rahul R rahul.s
I am trying to track memory utilization with my Application that uses Solr.
Details of the setup :
-3rd party Software : Solaris 10, Weblogic 10, jdk_150_14, Solr 1.3.0
- Hardware : 12 CPU, 24 GB RAM
For testing during PSR I am using a smaller subset of the actual data that I
want to work with.
Philip,
I cannot answer your question, but I do have a question for you. Does
aggregation happen at the primary shard ? For eg : if I have three JVMs
JVM 1 : My application powered by Solr
JVM 2 : Shard 1
JVM 3 : Shard 2
I initialize my SolrServer like this
SolrServer _solrServer = new
Hello,
We are trying to get Solr to work for a really huge parts database. Details
of the database
- 55 million parts
- Totally 3700 properties (facets). But each record will not have a value for
all properties.
- Most of these facets are defined as dynamic fields within the Solr Index
We were
...@ehatchersolutions.com wrote:
On Jul 31, 2009, at 2:35 AM, Rahul R wrote:
Hello,
We are trying to get Solr to work for a really huge parts database.
Details
of the database
- 55 million parts
- Totally 3700 properties (facets). But each record will not have a value
for
all properties.
- Most
to around 10 seconds.
This really helped. Thanks a lot !
Regards
Rahul
On Fri, Jul 31, 2009 at 6:34 PM, Erik Hatcher e...@ehatchersolutions.com wrote:
On Jul 31, 2009, at 7:17 AM, Rahul R wrote:
Erik,
I understand that caching is going to improve performance. In fact we did a
PSR run