old version of Solr (5.x). Also, it looks like the
only new files created recently are .liv files, which were created at the
time of deletion, and also a segments_ file.
I'd love some guidance on this.
Thanks,
- A
--
Alex Hanna, PhD
alex-hanna.com
@alexhanna
following:
Where "float" defined as:
What does the error mean? How can I handle it?
Thanks.
Alex Broitman | Integration Developer
4 HaHarash Street | PO Box 7330 | Hod Hasharon, ISRAEL 45241
F: +972-9-7944333 | C: +972-54-4771776 | W:
www.clarizen.com
Hello
Please add me to solr wiki editor - nickname is Alex Birdman
--
Alex Birdman
Head of PR department of 3dmdb.com
3dmodeldatab...@gmail.com
53
> Am 25.02.2019 um 08:46 schrieb Jörn Franke :
>
> Elastic ip addresses?
> https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/elastic-ip-addresses-eip.html
>
>> Am 25.02.2019 um 08:22 schrieb Addison, Alex (LNG-LON)
>> :
>>
>> Hi all, we
Thanks,
Alex Addison
LexisNexis is a trading name of RELX (UK) LIMITED - Registered office - 1-3
STRAND, LONDON WC2N 5JR
Registered in England - Company No. 02746621
https://www.youtube.com/watch?v=pNe1wWeaHOU&list=PLYI8318YYdkCsZ7dsYV01n6TZhXA6Wf9i&index=1
http://audiobible.life CHECK IT OUT!
On Wed, Sep 6, 2017 at 5:57 PM, Nick Way wrote:
> Hi, I have a
the hostname of current Solr instance. I can of course write the
Javascript to handle this task, but maybe there is builtin Velocity
property that I can ask for host & port of current server?
Thank you
--
With best wishes,Alex Ott
http://alexott.net/
Twitter: alexott_en (Eng
Hi,
Please kindly add me to the Solr wiki contributors list. The app we're
developing (Jitbit Help) is using Apache Solr to power our knowledge-base
search engine, customers love it. (we were using MS Fulltext indexing
service before, but it's a huge PITA).
Thanks
Oh, shoot, forgot to include my wiki username. It's "AlexYumas", sorry about
that. Stupid me.
On Sat, Oct 31, 2015 at 10:48 PM, Alex <jit...@gmail.com> wrote:
> Hi,
>
> Please kindly add me to the Solr wiki contributors list. The app we're
> developing (Jitbit Help) is us
My stopwords don't work as expected.
Here is part of my schema:
<fieldType name="text_general" class="solr.TextField">
  <analyzer type="index">
    <tokenizer class="solr.KeywordTokenizerFactory"/>
    <filter class="solr.StopFilterFactory" ignoreCase="true" words="stopwords.txt"/>
  </analyzer>
</fieldType>
We need an advanced stop word filter in Solr.
We need stopwords to be stored in a DB, with the ability for users to change
them (each user should have their own stopwords). That's why I am thinking about
sending stop words to Solr from our app, or connecting to our DB from Solr and
using the updated stop words in a custom
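For what it's worth, newer Solr releases (4.8+, after this thread) ship a managed stop filter whose word list lives outside the schema files and is editable over a REST API, so an app could push per-user lists from its DB. A minimal sketch (the set name "user42" is purely illustrative, one managed set per user, not from this thread):

```xml
<!-- Sketch only: reference a managed stopword set instead of stopwords.txt.
     "user42" is a hypothetical per-user set name. -->
<filter class="solr.ManagedStopFilterFactory" managed="user42"/>
```

The app would then PUT updated word lists to /solr/<core>/schema/analysis/stopwords/user42 and reload the core; whether one managed set per user scales is something to test.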
I am trying to write a custom analyzer, whose execution is determined by
the value of another field within the document.
For example, if the locale field in the document has 'de' as the value, then
the analyzer would use the German set of tokenizers/filters to process the
value of a field.
My
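A common alternative to switching analyzers inside one field is one field per locale, with the indexing client routing the value based on the document's locale field. A sketch (field and type names here are assumptions, not from the thread):

```xml
<!-- Hypothetical per-locale fields: the client writes body_de when locale=de,
     body_en when locale=en; each type carries its own tokenizer/filter chain. -->
<field name="body_de" type="text_de" indexed="true" stored="true"/>
<field name="body_en" type="text_en" indexed="true" stored="true"/>
```

Queries then target the field matching the user's locale, which keeps each language's stemming and tokenization independent.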
geo="false" distErrPct="0.025" maxDistErr="0.09" units="degrees" />
Alex Bostic
GIS Developer
URS Corporation
12420 Milestone Center Drive, Suite 150
Germantown, MD 20876
direct line: 301-820-3287
cell line: 301-213-2639
-Original Message-
From: david.w.smi...@gmail.com [mailto:david.w.smi
wta.WTArea - does show in the index, is of type varchar(max)
they are defined similarly in schema.xml
<field name="geom" type="location_rpt" indexed="true" stored="true"/>
<field name="wtarea" type="location_rpt" indexed="true" stored="true"/>
Again, thanks for all your help, I appreciate it
Alex Bostic
GIS Developer
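For reference, once location_rpt fields like these are indexed, a typical Solr 4.x spatial filter (with JTS on the classpath) looks like the following; the polygon coordinates are made up:

```text
q=*:*&fq=geom:"Intersects(POLYGON((-93.9 45.2, -93.8 45.2, -93.8 45.1, -93.9 45.1, -93.9 45.2)))"
```

Note the polygon's first and last points must match to close the ring.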
-Dsolr.solr.home=c:\AddedSoftware\solr-4.9.0\ocsirasspatial\solr
-Djetty.class.path=c:\AddedSoftware\solr-4.9.0\jts-1.13\lib -jar start.jar
Based on the above, what am I missing to get this to work. Maybe I am
overlooking an issue in the console?
Thanks
Alex Bostic
issues would be great
Alex Bostic
-Original Message-
From: Bostic, Alex [mailto:alex.bos...@urs.com]
Sent: Saturday, August 23, 2014 3:53 AM
To: solr
\ocsirasspatial\solr-webapp\webapp\WEB-INF\lib
Any other hints are certainly welcome. I think I'm close
Alex Bostic
-Original Message-
From
. If not, then your current problem is squarely with the import
process/config, not with Solr spatial.
~ David Smiley
Freelance Apache Lucene/Solr Search Consultant/Developer
http://www.linkedin.com/in/davidwsmiley
On Sat, Aug 23, 2014 at 10:53 AM, Bostic, Alex alex.bos...@urs.com wrote:
Ok thanks, I
spatialContextFactory="com.spatial4j.core.context.jts.JtsSpatialContextFactory"
geo="false" distErrPct="0.025" maxDistErr="0.09" units="degrees" />
Alex Bostic
-Original Message-
From: Bostic, Alex
Ok Great, I'm just going to dive in and see if I can index my data. Does
spatial reference matter?
Alex Bostic
-Original Message-
From: Pires
detail if needed.
Thanks
Alex Bostic
Hi all,
I have a field that contains dates (it has the date type) and I would like
to make a hierarchical (pivot) facet based on that field.
So I would like to have something like this:
date_of_creation:
|__ 2014
    |__ January
        |__ 01
        |__ 02
        |__ 14
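One way to get that hierarchy is to index the date's components into separate fields at index time and pivot on them; a sketch, assuming hypothetical created_year/created_month/created_day fields and Solr 4.0+ for facet.pivot:

```text
facet=true&facet.pivot=created_year,created_month,created_day
```

Each pivot level then nests its counts under the parent value, matching the tree shown above.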
Hi,
All of our synonyms are maintained in a DB; we would like to fetch those
synonyms dynamically for query expansion (not at indexing time). Are there any
code contributions?
I saw some discussion years ago but without conclusion.
Thanks a lot!
Thanks, Koji. No, we don't have that option set. Should we?
be happening? Thanks.
-Alex
Hi,
I'm running Solr 4.3 embedded in Tomcat, so there's a Solr server starting when
Tomcat starts.
In the same webapp, I also have a process to recreate the Lucene index when
Solr starts. To do this, I have a singleton instance of EmbeddedSolrServer
provided by Spring. This same instance is
fine for me.
Alex.
Thanks, Jack. Sorry, took me a while to reply :)
It sounds like sentence/paragraph level searches won't be easy.
Warm regards,
Alex
-Original Message-
From: Jack Krupansky [mailto:j...@basetechnology.com]
Sent: 15 April 2013 5:09 PM
To: solr-user@lucene.apache.org
Subject: Re: Tokenize
Thanks, Oussama. That was very useful information and we have added the double
quotes. One interesting trick: we had to change the way we did it to wrap the
pattern value in single quotes so we could have double quotes inside.
Warm regards,
Alex Cougarman
Bahá'í World Centre
Haifa, Israel
<filter class="solr.LowerCaseFilterFactory"/>
<filter class="solr.PorterStemFilterFactory"/>
</analyzer>
</fieldType>
Warm regards,
Alex Cougarman
Bahá'í World Centre
Haifa, Israel
Office: +972-4-835-8683
Cell: +972-54-241-4742
acoug...@bwc.org
. Is it unavailable? Could it serve searches?
What happens if it has to replicate a huge amount of data?
Regards,
Alex
Thanks, Erick. That really helped us in learning about tokens and how the
Analyzer works. Thank you!
Warm regards,
Alex
-Original Message-
From: Erick Erickson [mailto:erickerick...@gmail.com]
Sent: 19 September 2012 3:56 PM
To: solr-user@lucene.apache.org
Subject: Re: Wildcard
:
date_text:2010*
date_text:2010-*
date_text:2010-01*
date_text:2010-01-*
However, when we run these, they return nothing. What are we doing wrong?
date_text:*-01-27
date_text:2010-*-27
date_text:2010-01-27*
Warm regards,
Alex
this?
Thanks.
Warm regards,
Alex
-Original Message-
From: Jack Krupansky [mailto:j...@basetechnology.com]
Sent: 13 September 2012 4:47 PM
To: solr-user@lucene.apache.org
Subject: Re: Partial date searches
Wildcard patterns work on dates if they are string fields.
If your dates are trie dates
to search on 2010-xx-12 where the day isn't known
but the month and year are known. Is this possible? Is there a sample search we
can run in the admin interface against our index? Thanks.
Warm regards,
Alex Cougarman
Bahá'í World Centre
Haifa, Israel
Office: +972-4-835-8683
Cell: +972-54-241-4742
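One workaround (a sketch, not from the thread) keeps the trie date field intact and adds a plain string copy, written by the indexing client as yyyy-MM-dd, so mid-pattern wildcards work:

```xml
<!-- Hypothetical companion field, populated as e.g. "2010-01-12" by the indexer -->
<field name="date_text" type="string" indexed="true" stored="false"/>
<!-- query: date_text:2010-*-12 then matches "day 12, any month of 2010" -->
```

The client has to format the string itself; copying the raw timestamp would keep the full "2010-01-12T00:00:00Z" form, which the wildcard patterns in this thread would not match.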
? Thanks.
Sincerely,
Alex
? is there some special
tool for this purpose?)
Regards,
Alex
.
Many thanks, Alex
| title:ns1
define.logica.com^1.5))
meaning that documents containing ns1 OR define.logica.com are returned.
This is contrary to e.g. ns.logica.define.com which is treated as a single
token. Is there a way I can make Solr treat both queries the same way?
Many thanks, Alex
--
Alex Willmer
Thanks guys! I'll try out OpenNMS / Zabbix :)
Alex
On 03/14/2012 12:07 AM, Jan Høydahl wrote:
And here is a page on how to wire Solr's JMX info into OpenNMS monitoring tool.
Have not tried it, but as soon as a collector config is defined once I'd guess
it could be re-used, maybe shipped
anyone know of one ? Or has a set of JMX URLs that could be used to
make i.e. munin or cacti use that data ?
I'm currently running psi-probe on each host to have at least some
overview of what's going on within the JVM.
Thanks!
Alex
Hi there,
Yes I know about that tool, however, we've decided that that's not
optimal for us, so i'm looking for something freely available.
Alex
On 03/13/2012 09:15 AM, Rafał Kuć wrote:
Hello Alex!
Right now, SPM from Sematext is free to use so You can try that out :)
anyone know of one ? Or has a set of JMX URLs that could be used to
make i.e. munin or cacti use that data ?
I'm currently running psi-probe on each host to have at least some
overview of what's going on within the JVM.
Thanks!
Alex
ago to integrate this into the package, but he
hasn't answered yet.
Cheers,
Alex
--
View this message in context:
http://lucene.472066.n3.nabble.com/ANNOUNCEMENT-PHP-Solr-Extension-1-0-1-Stable-Has-Been-Released-tp3024040p3450788.html
Sent from the Solr - User mailing list archive at Nabble.com.
On 09/03/2011 08:49 PM, Erick Erickson wrote:
Does hl.fragsize do what you want?
Best
Erick
On Sat, Sep 3, 2011 at 11:56 AM, alex <alex.alex.alex.9...@gmail.com> wrote:
hi all,
I would like to truncate some titles (or limit their length) while still using
highlighting if possible, like:
very
hi all,
I would like to truncate some titles (or limit their length) while still using
highlighting if possible, like:
very long title...end of very long title or
very long title sfgdsdfsg end of very...
Can it be done currently with any highlighter ?
thanks.
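For what it's worth, the standard highlighter's fragment controls get close to this; a sketch assuming a title field and default highlighter settings:

```text
q=title:keyword&hl=true&hl.fl=title&hl.fragsize=30&hl.snippets=2
```

hl.fragsize caps each fragment's length in characters, which effectively truncates long titles around the matched term.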
)?
Thanks in advance
Alex Grilo
The queries I am trying to do are
q=title:Unicamp
and
q=title:Unicamp&bf=question_count^5.0
The boosting factor (5.0) is just to verify if it was really used.
Thanks
Alex
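Worth noting: bf is a dismax-family parameter, so the standard query parser silently ignores it. A request where the boost function actually participates would look roughly like this (the qf value is an assumption):

```text
q=Unicamp&defType=dismax&qf=title&bf=question_count^5.0
```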
On Wed, Jun 8, 2011 at 10:25 AM, Denis Kuzmenok forward...@ukr.net wrote:
Show your full request to solr (all params
Hi,
Can I make a query that returns only exact matches, or do I have to change the
fields to achieve that?
Thanks in advance
Alex Grilo
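The usual pattern is a separate untokenized copy of the field; a sketch with hypothetical names:

```xml
<!-- Untokenized copy for exact matching; "title_exact" is illustrative -->
<field name="title_exact" type="string" indexed="true" stored="false"/>
<copyField source="title" dest="title_exact"/>
<!-- query: title_exact:"Unicamp" matches only the exact full value -->
```

copyField receives the raw input, so the string destination preserves case and whitespace exactly as indexed.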
in advance
--
Alex Bredariol Grilo
Developer - umamao.com
The code is here: http://pastebin.com/50ugqRfA
and my schema.xml configuration entry for similarity is:
<similarity class="com.umamao.solr.ShortFieldNormSimilarity"/>
Thanks
Alex
On Mon, May 16, 2011 at 2:01 PM, Gora Mohanty g...@mimirtech.com wrote:
On Mon, May 16
org.apache.solr.search.ValueSourceParser which
takes a guid and runs an embedded query to get a score for this guid in the
bookmark schema. This would probably require two separate indexes to begin
with.
Keen to hear ideas on what's the best way to implement this and where I
should start.
Thanks,
Alex
relies on the number and anchor
text of the incoming links; we're trying to use the number of people and
their keywords/comments as a weight for the link.
Alex
On Fri, Mar 4, 2011 at 6:29 PM, Gora Mohanty g...@mimirtech.com wrote:
On Fri, Mar 4, 2011 at 10:24 AM, Alex Dong a...@trunk.ly wrote
hi all,
I'm trying to get a result like:
blabla <b>keyword</b> blabla ... blabla <b>keyword</b> blabla...
so I'd like to show 2 fragments. I've added these settings:
<str name="hl.simple.pre"><![CDATA[<b>]]></str>
<str name="hl.simple.post"><![CDATA[</b>]]></str>
<str name="f.content.hl.fragsize">20</str>
str
Ahmet Arslan wrote:
--- On Mon, 2/7/11, alex <alex.alex.alex.9...@gmail.com> wrote:
From: alex <alex.alex.alex.9...@gmail.com>
Subject: hl.snippets in solr 3.1
To: solr-user@lucene.apache.org
Date: Monday, February 7, 2011, 7:38 PM
hi all,
I'm trying to get a result like:
blabla <b>keyword</b>
Ahmet Arslan wrote:
I can see changes if I change fragsize, but no
hl.snippets.
Maybe your text is too short to generate more than one snippet?
What happens when you increase hl.maxAnalyzedChars parameter?
hl.maxAnalyzedChars=2147483647
It's working now. I guess it was a problem
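Putting the thread's resolution together, a request that reliably yields two fragments looks roughly like this (the content field name is an assumption):

```text
hl=true&hl.fl=content&hl.snippets=2&f.content.hl.fragsize=20&hl.maxAnalyzedChars=2147483647
```

Raising hl.maxAnalyzedChars matters because the highlighter otherwise only scans the first 51200 characters of the field for fragment candidates.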
I just moved to a multi core solr instance a few weeks ago, and it's
been working great. I'm trying to add a 3rd core and I can't query
against it though.
I'm running 1.4.1 (and tried 1.4.0) with the spatial search plugin.
This is the section in solr.xml
<cores adminPath="/admin/cores">
I recently added a second core to my solr setup, and I'm now running
into this Lock obtain timed out error when I try to update one core
after I've updated another core.
In my update process, I add/update 1000 documents at a time and commit
in between. Then at the end, I commit and optimize.
I'm going to go ahead and reply to myself since I solved my problem.
It seems I was doing one more update to the data at the end and wasn't
doing a commit, so it then couldn't write to the other core. Adding the
last commit seems to have fixed everything.
On 2/1/2011 11:08 AM, Alex Thurlow
(input_longitude,stored_longitude),2))degrees
distance filter
What's anyone else out there using?
Thanks in advance,
Alex
Make sure you are not going to reinvent the wheel here ;). A lot has been
done around the problem of distributed search engines.
This thread might be useful for you: http://search-hadoop.com/m/ARlbS1MiTNY
Alex Baranau
Sematext :: http://sematext.com/ :: Solr - Lucene - Nutch - Hadoop
at the FieldType class and it looks like it only relies on
solr stuff and lucene.
I don't have much experience with classloader issues, any tips on how
to debug this?
And Ken:
I tried renaming the field as you suggested, but I get the same issue.
Thanks,
Alex Matviychuk
-snowball-2.9.3.jar
lucene-spellchecker-2.9.3.jar
...
Hope that helps someone in a similar situation.
Thanks again for the help!
Cheers,
Alex Matviychuk
On Wed, Oct 27, 2010 at 18:01, Alex Matviychuk alex...@gmail.com wrote:
On Wed, Oct 27, 2010 at 03:57, Chris Hostetter hossman_luc...@fucit.org
Hi,
Adding Solr user list.
We used a similar approach to the one in this patch, but with Hadoop Streaming.
Did you determine that indices are really missing? I mean did you find
missing documents in the output indices?
Alex Baranau
Sematext :: http://sematext.com/ :: Solr - Lucene - Nutch
in the community.
Thank you,
Alex.
On Sun, Aug 22, 2010 at 5:55 PM, MitchK mitc...@web.de wrote:
Alex,
it sounds like it would make sense.
Use cases could be i.e. clustering or similar techniques.
However, in my opinion, that is not the right point of view for such a
modification.
I.e
Hi,
So if those are separate documents, how should I handle paging? Two
separate queries?
First to return all matching course-event pairs, and a second one to get
courses for the given page?
Is this common design described in detail somewhere?
Thanks,
Alex
On 2010-07-09 01:50, Lance Norskog
clear.
Thanks,
Alex
On 2010-07-12 10:26, Chantal Ackermann wrote:
Hi Alex,
I think you have to explain the complete use case. Paging is done by
specifying the parameter start (and rows if you want to have more or
less than 10 hits per page). For each page you need of course a new
query
for q=name:Microsoft town:Leeds returns docs 1 & 3.
How would I remove London/Glasgow from doc 1 and Birmingham from doc 3?
Or should I create a separate doc for each name-event pair?
Thanks,
Alex
<double name="time">0.0</double>
</lst>
<lst name="org.apache.solr.handler.component.DebugComponent">
  <double name="time">0.0</double>
</lst>
</lst>
</lst>
</lst>
</response>
Alex Wang
CrossView Inc.
Mobile: 647-4093066
Email: aw...@crossview.com
http://www.crossview.com
<lst name="org.apache.solr.handler.component.DebugComponent">
  <double name="time">0.0</double>
</lst>
</lst>
</lst>
</lst>
</response>
Alex Wang
CrossView Inc.
Mobile: 647-4093066
Email: aw...@crossview.com
http://www.crossview.com/
Sorry Erick, can you tell me how to find the raw *indexed* terms from the admin
console? I am not familiar with the admin console.
Thanks,
On May 12, 2010, at 10:18 AM, Erick Erickson wrote:
Hmmm, nothing looks odd about that, except perhaps the casing. If you use
the admin
console to look
<field name="productType" type="string" indexed="true" stored="true"/>
Thanks,
Alex
On May 12, 2010, at 11:58 AM, Erick Erickson wrote:
Not til this evening, don't have a handy SOLR implementation to ping...
But another option is to get a copy of Luke and look at the index, but the
same caution
about seeing
Thanks Hoss. Please see the query results as follows:
: <result name="response" numFound="0" start="0"/>
: <lst name="debug">
: <lst name="queryBoosting">
: <str name="q">productType:ProductBean</str>
: <null name="match"/>
: </lst>
...can you please disable the QueryElevationComponent and see if that
Sorry please discard my query results here, because I was playing with the
field type and changed it to text from string and forgot to change it back.
I will change it back to string and post the query results shortly.
I apologize for the careless mistake.
Thanks,
Alex
On May 12, 2010, at 10
Found the problem! It's because all the values in the productType field have
trailing spaces, like this: "ProductBean ". Thanks Hoss for your
suggestion of using a Luke query, which exposed the problem.
You guys are awesome!
Thanks,
Alex
On May 12, 2010, at 10:12 PM, Alex Wang wrote:
Sorry
You are absolutely right. The fields have trailing spaces in them. Thanks Erick
for your time. Really appreciated!
Thanks,
Alex
On May 12, 2010, at 8:29 PM, Erick Erickson wrote:
Click the schema browser link on the admin page.
On the next page click
the fields link, then the field
payload
Also, if I use wild card searches, I get results as expected
(productType:ProductBe*).
I have tried various things like clearing my browser cache, deleting the data
folder and re-index. None of them helps. Can someone please shed some light
here?
Thanks,
Alex
Thanks so much. That works really well now. So this brings up a
complaint I have with the Solr documentation. I see very few actual
examples. If I had seen any example of searching for a multi-word
search, I assume it would have had these parentheses.
-Alex
On 3/18/2010 5:54 PM
=artist_title, doc=19218) 0.2631579
= coord(5/19)
Someone else suggested I use DisMax, but I can't really get that to do
what I want right now either. I'm just wondering why this seems to not
be using this field at all.
-Alex
,"qt":"dismax","wt":"json","version":"1.2","rows":30}},"response":{"numFound":0,"start":0,"docs":[]}}
If I remove the qt=dismax, I get results like I should. Can anyone shed
some light?
Thanks,
Alex
? Query type=standard works fine here.
Thanks,
Alex
Aha. That appears to be the issue. I hadn't realized that the query
handler had all of those definitions there.
-Alex
On 3/16/2010 6:56 PM, Erick Erickson wrote:
I suspect your problem is that you still have price defined in
solrconfig.xml for the dismax handler. Look for the section
:$search^100
I have tried artist_tight as string, textTight, and text_ws, all to no
avail. I have completely deleted my index between changes and
reinserted all my data.
What am I missing here?
--
Alex Thurlow
Blastro Networks
http://www.blastro.com
http://www.roxwel.com
http
<filter class="solr.LowerCaseFilterFactory"/>
</analyzer>
</fieldType>
Gives me this error.
SEVERE: org.apache.solr.common.SolrException: analyzer without class or
tokenizer filter list
Is there a way to have a string field that's case-insensitive?
Alex Thurlow
Blastro Networks
http://www.blastro.com
http://www.roxwel.com
http
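The error above comes up because a TextField analyzer must declare a tokenizer. The usual case-insensitive "string-like" type combines KeywordTokenizer (whole value as one token) with lowercasing; a sketch:

```xml
<fieldType name="string_ci" class="solr.TextField" sortMissingLast="true" omitNorms="true">
  <analyzer>
    <!-- emits the entire field value as a single token -->
    <tokenizer class="solr.KeywordTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>
```

Because the same analyzer runs at query time, searches match regardless of case while the value otherwise behaves like a string field.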
for that?
--
Alex Thurlow
Blastro Networks
http://www.blastro.com
http://www.roxwel.com
http://www.yallwire.com
. Is this possible either in the indexing or with a query
option?
Thanks,
Alex
--
Alex Thurlow
Blastro Networks
http://www.blastro.com
http://www.roxwel.com
http://www.yallwire.com
That's great information. Thanks!
-Alex
Alex Thurlow
Blastro Networks
http://www.blastro.com
http://www.roxwel.com
http://www.yallwire.com
On 3/2/2010 3:11 PM, Ahmet Arslan wrote:
I'm new to Solr and just getting it set up
and testing it out. I'd like to know if there's a way
is doing.
On Jan 1, 2010, at 2:11 PM, Alex Muir wrote:
Hi,
I'm about to start using Ant to get Carrot2 working with Solr. However,
I was first trying to get it working without Ant by placing jars into
a lib directory in the quickstart example directory, but I couldn't
find any documentation
to accomplish this I would be happy to hear about it.
Happy New Year!
Regards
--
Alex
https://sites.google.com/a/utg.edu.gm/alex
Thanks Otis for the reply. Yes this will be pretty memory intensive.
The size of the index is 5 cores with a maximum of 500K documents each
core. I did search the archives before but did not find any definite
answer. Thanks again!
Alex
On Nov 27, 2009, at 11:09 PM, Otis Gospodnetic wrote
1. Is there a limit on the number of fields allowed per document?
2. What is the performance impact of such a design?
3. Has anyone done this before, and is it a wise thing to do?
Thanks,
Alex
hossman wrote:
If you just want the full input string passed to the analyzer of each qf
field, then you just need to quote the entire string (or escape every
shitespace charter in the string with a backslash) so that the entire
input is considered one chunk -- but then you don't get to
the same: how to search against two
fields with dismax when they are tokenized differently (and one of them is
not tokenized by whitespace).
Could you please help with that situation?
Thank you in advance,
Alex.
--
View this message in context:
http://www.nabble.com/Dismax%3A-Impossible-to-search
Hello,
It seems to me that there is no way I can use the dismax handler for
searching in both tokenized and untokenized fields while searching for a
phrase.
Consider the next example. I have two fields in index: product_name and
product_name_un. The schema looks like:
fieldType
unfortunately, but thanks for the suggestion.
Alex Baranov.
On Sat, Oct 10, 2009 at 3:01 PM, Yonik Seeley yo...@lucidimagination.comwrote:
On Sat, Oct 10, 2009 at 6:34 AM, Alex Baranov alex.barano...@gmail.com
wrote:
Hello,
It seems to me that there is no way how I can use dismax handler
into RAM. It worked
well for me on a project with 140+ million documents and 30 concurrent
user requests per second. If your index can be placed in RAM, you can reduce
the architecture complexity.
Alex Baranov
On Wed, Sep 9, 2009 at 5:10 PM, Elaine Li elaine.bing...@gmail.com wrote:
Hi,
I have 20
Please, take a look at
http://issues.apache.org/jira/browse/SOLR-1379
Alex.
On Wed, Sep 9, 2009 at 5:28 PM, Constantijn Visinescu baeli...@gmail.comwrote:
Just wondering, is there an easy way to load the whole index into ram?
On Wed, Sep 9, 2009 at 4:22 PM, Alex Baranov alex.barano
So, is there a way to perform the filtering I described?
On Mon, Jun 1, 2009 at 22:24, Alex Shevchenko caeza...@gmail.com wrote:
But I don't need to sort using this value. I need to cut results where
this value (for a particular term of the query!) is not in some range.
On Mon, Jun 1, 2009
HI All,
Is there a way to perform filtering based on keyword density?
Thanks
--
Alex Shevchenko