Apologies for the late reply. Thanks, Toke, for the great explanation :)
I am new to Solr and unaware of DocValues, so could you please explain?
With Regards
Aman Tandon
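(In case it helps later readers of this thread: DocValues, available since Lucene/Solr 4.0, store a field's values column-wise on disk at index time, so sorting and faceting can read them without uninverting the field into the heap-based FieldCache. They are enabled per field in schema.xml; a minimal sketch, with a hypothetical field name:)

```xml
<!-- schema.xml: hypothetical field; docValues="true" must be set at index time,
     so the field has to be reindexed after changing it -->
<field name="category" type="string" indexed="true" stored="true" docValues="true"/>
```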
On Fri, May 2, 2014 at 1:52 PM, Toke Eskildsen t...@statsbiblioteket.dk wrote:
On Thu, 2014-05-01 at 23:03 +0200, Aman Tandon wrote:
On Thu, 2014-05-01 at 23:38 +0200, Shawn Heisey wrote:
I was surprised to read that fc uses less memory.
I think that is an error in the documentation. Except for special cases,
such as asking for all facet values on a high cardinality field, I would
estimate that enum uses less memory than fc.
On Thu, 2014-05-01 at 23:03 +0200, Aman Tandon wrote:
So can you explain how enum is faster than the default?
The fundamental difference is that enum iterates the terms and counts how
many of the documents associated with the terms are in the hits, while fc
iterates all hits and updates a counter for the
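The two counting strategies sketched above can be illustrated with a toy example (my own sketch, not Solr's actual code; the field values and doc IDs are made up):

```python
# Toy illustration of the two facet.method strategies described above.
# Inverted index for one single-valued field: term -> sorted list of doc IDs.
inverted = {
    "books": [0, 2, 7],
    "music": [1, 6],
    "video": [3, 4, 5],
}
# Forward view (what an uninverted field cache provides): doc ID -> term.
forward = {doc: term for term, docs in inverted.items() for doc in docs}

def facet_enum(hits):
    """enum: iterate the terms, intersect each term's doc list with the hits."""
    return {term: len(hits.intersection(docs)) for term, docs in inverted.items()}

def facet_fc(hits):
    """fc: iterate the hits, bump a counter for each hit document's term."""
    counts = {term: 0 for term in inverted}
    for doc in hits:
        counts[forward[doc]] += 1
    return counts

hits = {0, 2, 3, 5}  # the documents matching some query
assert facet_enum(hits) == facet_fc(hits) == {"books": 2, "music": 0, "video": 2}
```

Note how enum's cost scales with the number of distinct terms, while fc's scales with the number of hits plus the size of the per-term counter array, which matches the memory trade-off discussed above.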
On 4/30/2014 5:53 PM, Aman Tandon wrote:
Shawn - Yes, we have some plans to move to SolrCloud. Our total index size
is 40GB with 11M docs, available RAM is 32GB, the heap space allowed for Solr
is 14GB, and the GC tuning parameter used on our server
is -XX:+UseConcMarkSweepGC
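(For reference, a fuller CMS flag set often paired with -XX:+UseConcMarkSweepGC on Java 7-era Solr looked like the following. This is an illustrative sketch, not the poster's actual startup script; only UseConcMarkSweepGC and the 14GB heap are from the thread:)

```
java -Xms14g -Xmx14g \
     -XX:+UseConcMarkSweepGC \
     -XX:+UseParNewGC \
     -XX:CMSInitiatingOccupancyFraction=75 \
     -XX:+UseCMSInitiatingOccupancyOnly \
     -jar start.jar
```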
Hi Shawn,
Please check this link:
http://wiki.apache.org/solr/SimpleFacetParameters#facet.method
There is something mentioned about it in the facet.method wiki:
*The default value is fc (except for BoolField which uses enum) since it
tends to use less memory and is faster than the enumeration method when a
I had this issue too. timeAllowed only works for a certain phase of the
query. I think that's the 'process' part. However, if the query is taking
time in the 'prepare' phase (e.g. I think for wildcards, to get all the possible
combinations before running the query), it won't have any impact on that.
You
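As a toy model of the behavior described above (my own sketch, not Solr internals): the time budget is only checked inside the per-document collect loop, so any expensive preparation finished before that loop is unaffected, and expiry mid-loop yields partial results:

```python
import time

def run_query(doc_ids, matches, time_allowed_s, work_per_doc_s=0.0):
    """Simulate a time-limited 'process' phase: collect matching docs
    until the deadline passes, then stop and flag the result partial."""
    deadline = time.monotonic() + time_allowed_s
    collected, partial = [], False
    for doc in doc_ids:                 # the "process" phase
        if time.monotonic() > deadline:
            partial = True              # analogous to partialResults=true
            break
        time.sleep(work_per_doc_s)      # simulated per-doc scoring cost
        if matches(doc):
            collected.append(doc)
    return collected, partial

# Generous budget: every doc is scored, result is complete.
full, partial = run_query(range(100), lambda d: d % 2 == 0, time_allowed_s=1.0)
assert not partial and len(full) == 50

# Tiny budget: the loop stops early and returns a partial result.
some, partial = run_query(range(100), lambda d: d % 2 == 0,
                          time_allowed_s=0.005, work_per_doc_s=0.001)
assert partial and len(some) < 50
```

Any wildcard expansion done before such a loop would run to completion no matter how small the budget, which matches the 'prepare' vs 'process' distinction above.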
Hi Salman,
here is my debug query dump, please help! I am unable to find the
wildcards in it.
<?xml version="1.0" encoding="UTF-8"?>
<response>
  <lst name="responseHeader">
    <bool name="partialResults">true</bool>
    <int name="status">0</int>
    <int name="QTime">10080</int>
  </lst>
  <result name="response" numFound="976303"
On Wed, Apr 30, 2014 at 2:16 PM, Aman Tandon amantandon...@gmail.com wrote:
<lst name="query">
  <double name="time">3337.0</double>
</lst>
<lst name="facet">
  <double name="time">6739.0</double>
</lst>
Most time is spent in facet counting. FacetComponent doesn't check
timeAllowed right now. You
On 4/29/2014 11:43 PM, Aman Tandon wrote:
My heap size is 14GB and I am not using SolrCloud currently; the 40GB index
is replicated from the master to two slaves.
I read somewhere that it returns the partial results computed by
the query within the specified amount of time, which is defined by
It's not just FacetComponent; here's the original feature ticket for
timeAllowed:
https://issues.apache.org/jira/browse/SOLR-502
As I read it, timeAllowed only limits the time spent actually getting
documents, not the time spent figuring out what data to get or how. I
think that means the
Jeff - Thanks, this discussion on JIRA is really quite helpful.
Hi,
I am using Solr 4.2 with an index size of 40GB. While querying my index,
some queries take a significant amount of time,
about 22 seconds *in the case of minmatch of 50%*. So I added the parameter
timeAllowed = 2000 to my query, but this doesn't seem to work.
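(For anyone reproducing this: a request with timeAllowed set looks like the following. The host, core name, and query are hypothetical; timeAllowed is in milliseconds, and a response cut short carries partialResults=true in its responseHeader:)

```
http://localhost:8983/solr/mycore/select?q=title:shoes&mm=50%25&timeAllowed=2000
```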
On 4/29/2014 10:05 PM, Aman Tandon wrote:
Shawn, this is the first time I raised this problem.