On Thu, Jan 14, 2016 at 2:43 PM, Lewin Joy (TMS) wrote:
> Thanks for the reply.
> But grouping on a multivalued field is working for me, even with multiple
> values in the multivalued field.
> I also tested this on the tutorial collection from the later Solr version
> 5.3.1, which works as well.
-----Original Message-----
From: Toke Eskildsen [mailto:t...@statsbiblioteket.dk]
Sent: Thursday, January 14, 2016 12:31 AM
To: solr-user@lucene.apache.org
Subject: Re: FieldCache
On Thu, 2016-01-14 at 00:18 +, Lewin Joy (TMS) wrote:
> I am working on Solr 4.10.3 on Cloudera CDH 5.4.4 and am trying to
> group results on a multivalued field, let's say "interests".
...
> But after I just re-indexed the data, it started working.
Grouping is not supposed to be supported for
: What is the implication of this? Should we move all facets to DocValues
: when we have high cardinality (lots of values) ? Are we adding it back?
1) Using DocValues is almost certainly a good idea moving forward for
situations where the FieldCache was used in the past.
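As a concrete illustration of that advice, here is a minimal schema.xml sketch enabling docValues on a facet field. The field name and type are examples only, not from this thread, and the change requires a full reindex:

```xml
<!-- Example only: enable docValues on fields used for faceting/sorting.
     A reindex is required after this change. -->
<field name="interests" type="string" indexed="true" stored="true"
       multiValued="true" docValues="true"/>
```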
: FieldCache is gone (m
For completeness, the related issue is:
https://issues.apache.org/jira/browse/SOLR-8096
Cheers
2015-10-06 11:21 GMT+01:00 Alessandro Benedetti:
Let me add some precision here.
When dealing with faceting, there are currently 2 main approaches:
1) *Enum Algorithm* - best for low cardinality value fields; it is based on
retrieving the term enum for all the terms in the index, and then
intersecting the related posting list with the query
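The enum approach described above can be sketched as a toy model (language-neutral Python, not Lucene's actual implementation): walk the field's terms and intersect each term's posting list with the set of documents matched by the query.

```python
# Toy model of enum-style faceting (not Lucene's actual implementation):
# walk the field's terms and intersect each term's posting list with
# the set of documents matched by the query.

def enum_facet(postings, query_docs):
    """postings: term -> set of matching doc ids; query_docs: docs hit by the query."""
    counts = {}
    for term, docs in postings.items():
        hits = len(docs & query_docs)
        if hits:
            counts[term] = hits
    return counts

# Example: a low-cardinality "interests" field over five documents.
postings = {
    "music":  {0, 1, 3},
    "sports": {1, 2},
    "travel": {4},
}
print(enum_facet(postings, {0, 1, 2}))  # → {'music': 2, 'sports': 2}
```

This is why the enum method pays off only when the field has few distinct terms: the cost is one intersection per term in the field.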
Hi I am using solr 5.3 and I have the same problem while doing json facet on
multivalued field. Below is the error stack trace :
2015-09-21 21:26:09,292 ERROR org.apache.solr.core.SolrCore
org.apache.solr.common.SolrException: can not use FieldCache on multivalued
field: FLAG
        at org.
Yonik, Upayavira,
thanks for the response. Here is the stacktrace from the Solr logs.
I can make my field single-valued, but are there any plans to fix this, or
should multivalued fields in general not be used for metric calculation?
What about other metrics, e.g. avg, min, max -- should I be able to
calculate
On Mon, Jul 13, 2015 at 1:55 AM, Iana Bondarska wrote:
> Hi,
> I'm using the JSON query API for Solr 5.2. When querying for metrics on
> multivalued fields, I get the error:
> can not use FieldCache on multivalued field: sales.
>
> I've found in solr wiki that to avoid using fieldcache I should set
> facet.
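For reference, on a field that has docValues enabled (and after a reindex), a terms facet via the JSON Facet API works on multivalued fields. A sketch of the request body passed as the json.facet parameter; the field name is illustrative:

```json
{
  "sales_values": {
    "type": "terms",
    "field": "sales"
  }
}
```

Whether aggregation functions such as sum or avg are supported over a multivalued field depends on the Solr version, which is exactly the question left open in this thread.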
I'm reproducing the problem with the 4.2.1 example with 2 shards.
1) started up solr shards, indexed the example data, and confirmed empty
fieldCaches
[sanniere@funlevel-dx example]$ java
-Dbootstrap_confdir=./solr/collection1/conf
-Dcollection.configName=myconf -DzkRun -DnumShards=2 -jar start.j
I've created https://issues.apache.org/jira/browse/SOLR-4866
Elodie
On 07.05.2013 at 18:19, Chris Hostetter wrote:
: I am using the Lucene FieldCache with SolrCloud and I have "insane" instances
: with messages like:
FWIW: I'm the one that named the result of these "sanity checks"
"FieldCacheInsanity", and I have regretted it ever since -- a better label
would have been "inconsistency"
: VALUEMISMATCH: Mul
@topcat: you need to call the close() method on Solr requests after using them.
In general:

SolrQueryRequest request = ...; // however your code obtains the request
try {
    // ... use the request ...
} finally {
    request.close();
}
--
View this message in context:
http://lucene.472066.n3.nabble.com/fieldCache-problem-OOM-exception-tp30670
Dear erolagnab,
is that your code in the Solr server? Which class should I put it in?
Sorry to pull this up again, but I've faced a similar issue and would like to
share the solution.
In my situation, I use SolrQueryRequest, SolrCore, and SolrQueryResponse to
explicitly perform the search.
The gotcha in my code is that I didn't call SolrQueryRequest.close(), hence
the increasing memory
Bernd, in our case, optimizing the index seems to flush the FieldCache for
some reason. On the other hand, doing a few commits without optimizing seems
to make the problem worse.
Hope that helps. We would like to give it a try and debug this in Lucene,
but are pressed for time right now. Perhaps l
The current status of my installation is that with some tweaking of
the JVM I get a runtime of about 2 weeks until OldGen (14GB) is filled
to 100 percent and won't free anything even with a Full GC.
The share of fieldCache in a heap dump at that time is over 80 percent
of the whole heap (20GB). And that
Hello Erick,
I have 1.7MM documents in a 3.6GB index. I also have an unusual number of
dynamic fields that I use for sorting. My FieldCache currently has about
13,000 entries, even though my index receives only 1-3 queries per second. Each
query sorts by two dynamic fields and facets on 3-4 fields that
Hi Erik,
as far as I can see with MemoryAnalyzer from the heap:
- the class fieldCache has a HashMap
- one entry of the HashMap is FieldCacheImpl$StringIndex which is "mister big"
- FieldCacheImpl$StringIndex is a WeakHashMap
- WeakHashMap has three entries
-- 63.58 percent of heap
-- 8.14 perce
Sorry, it was late last night when I typed that...
Basically, if you sort and facet on #all# the fields you mentioned, it
should populate
the cache in one go. If the problem is that you just have too many unique terms
for all those operations, then it should go bOOM.
But, frankly, that's unlikely
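The theory above (each field you sort or facet on gets its own FieldCache entry, sized roughly by document count plus unique term bytes) can be put into a quick back-of-the-envelope estimate. All sizes below are illustrative assumptions, not Lucene's exact accounting:

```python
# Back-of-the-envelope size of a Lucene 3.x-era FieldCache StringIndex entry:
# roughly one int ord per document plus the bytes of the unique terms,
# per field that is sorted or faceted on. Illustrative only.

def fieldcache_estimate(max_doc, unique_terms, avg_term_bytes, num_fields):
    ords = 4 * max_doc                     # one 32-bit ord per document, per field
    terms = unique_terms * avg_term_bytes  # stored term data, per field
    return num_fields * (ords + terms)

# e.g. 100M docs, 10 sort/facet fields, 1M unique terms of ~20 bytes each
total = fieldcache_estimate(100_000_000, 1_000_000, 20, 10)
print(f"{total / 2**30:.2f} GiB")  # about 3.91 GiB, before any JVM overhead
```

Numbers of this magnitude explain the pattern reported in this thread: raising the heap only delays the OOM, because every sorted or faceted field adds another maxDoc-proportional entry that the cache never evicts.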
Hi Erik,
I will take some memory snapshots during the next week,
but how can it be that I get OOMs with one query?
- I started with 6g for JVM --> 1 day until OOM.
- increased to 8 g --> 2 days until OOM
- increased to 10g --> 3.5 days until OOM
- increased to 16g --> 5 days until OOM
- currently 20g
Well, if my theory is right, you should be able to generate OOMs at will by
sorting and faceting on all your fields in one query.
But Lucene's cache should be garbage collected, can you take some memory
snapshots during the week? It should hit a point and stay steady there.
How much memory are yo
Hi Erik,
yes I'm sorting and faceting.
1) Fields for sorting:
sort=f_dccreator_sort, sort=f_dctitle, sort=f_dcyear
The parameter "facet.sort=" is empty, only using parameter "sort=".
2) Fields for faceting:
f_dcperson, f_dcsubject, f_dcyear, f_dccollection, f_dclang, f_dctypenorm,
f_d
The first question I have is whether you're sorting and/or
faceting on many unique string values? I'm guessing
that sometime you are. So, some questions to help
pin it down:
1> what fields are you sorting on?
2> what fields are you faceting on?
3> how many unique terms in each (see the solr admin p
Since FieldCache is an expert-level API in Lucene, there is no direct control
provided by Solr/Lucene over its size.
f the FieldCache is wrong. I thought this
was the main cache for Lucene. Is that right?
Thanks for your feedback
-----Original Message-----
From: pravesh [mailto:suyalprav...@yahoo.com]
Sent: May-26-11 2:58 AM
To: solr-user@lucene.apache.org
Subject: Re: FieldCache
This is because you may be having
This is because you may be having only 10 unique terms in your indexed Field.
BTW, what do you mean by controlling the FieldCache?
Solr version:
Solr Specification Version: 3.1.0
Solr Implementation Version: 3.1.0 1085815 - grantingersoll -
2011-03-26 18:00:07
Lucene Specification Version: 3.1.0
Lucene Implementation Version: 3.1.0 1085809 - 2011-03-26 18:06:58
Current Time: Wed Apr 27 14:28:34 CEST 2011
Server Start Time: Wed
It Works On My Machine (tm).
Hmmm. this is the packaged Solr release, right? I just tried this from
the admin page and got all the caches. This is from Solr Admin/stats,
right? As in you're clicking either the [Info] or the [Statistics] link on
the admin page then clicking the [cache] link on the
There's nothing special you need to do to be able to view the various
stats from admin/stats.jsp. If another look doesn't show them, could you
post a screenshot?
And please include the version of Solr you're using, I checked with 1.4.1.
Best
Erick
On Wed, Apr 27, 2011 at 1:44 AM, Solr Beginner
On Mon, Oct 25, 2010 at 3:41 PM, Mathias Walter wrote:
> How do I use it with Solr, i.e. how do I set up a schema.xml using a custom
> AttributeFactory?
>
at the moment there is no way to specify an AttributeFactory
(AttributeFactoryFactory? heh) in the schema.xml, nor do the
TokenizerFactories
Hi,
> On Mon, Oct 25, 2010 at 3:41 AM, Mathias Walter
> wrote:
> > I indexed about 90 million sentences and the PAS (predicate argument
> structures) they consist of (which are about 500 million). Then
> > I try to do NER (named entity recognition) by searching about 5 million
> entities. For eac
On Mon, Oct 25, 2010 at 9:00 AM, Steven A Rowe wrote:
> It's not actually deprecated yet.
you are right! only in my patch!
> AFAICT, Test2BTerms only deals with the indexing side of this issue, and
> doesn't test searching.
>
> LUCENE-2551 does, however, test searching. Why hasn't this been co
Hi Robert,
On 10/25/2010 at 8:20 AM, Robert Muir wrote:
> it is deprecated in trunk, because you can index binary terms (your
> own byte[]) directly if you want. To do this, you need to use a custom
> AttributeFactory.
It's not actually deprecated yet.
> See src/test/org/apache/lucene/index/Test
Hi Mathias,
> [...] I tried to use IndexableBinaryStringTools to re-encode my 11 byte
> array. The size was increased to 7 characters (= 14 bytes)
> which is still a gain of more than 50 percent compared to the UTF8
> encoding. BTW: I found no sample how to use the
> IndexableBinaryStringTools cla
On Mon, Oct 25, 2010 at 3:41 AM, Mathias Walter wrote:
> I indexed about 90 million sentences and the PAS (predicate argument
> structures) they consist of (which are about 500 million). Then
> I try to do NER (named entity recognition) by searching about 5 million
> entities. For each entity I
Why do you want to? Basically, the caches are there to improve
#searching#. To search something, you must index it. Retrieving
it is usually a rare enough operation that caching is irrelevant.
This smells like an XY problem, see:
http://people.apache.org/~hossman/#xyproblem
If this seems like gib