Hi
I tried to split a shard but it failed. If I try to do it again it does
not start again.
I see the two extra shards in /collections/messages/leader_elect/ and
/collections/messages/leaders/
How can I fix this?
root@solr07-dcg:/solr/messages_shard3_replica2# curl
Hi,
We are using solr 3.6.1, our application has many cores (more than 1K),
the problem is that Solr startup took a long time (10m). Examining the log
file and code, we found that for each core we loaded many resources, but
in our app, we are sure we are always using the same solrconfig.xml and
Hi Lisheng,
I had the same problem when I enabled the autoSoftCommit in
solrconfig.xml. If you have it enabled, disabling it could fix your problem,
Cheers.
Carlos.
2013/5/22 Zhang, Lisheng lisheng.zh...@broadvision.com
Hi,
We are using solr 3.6.1, our application has many cores (more than
Thanks very much for the quick help! I searched, but it seems that
autoSoftCommit is a Solr 4.x feature and we are still using 3.6.1?
Best regards, Lisheng
-Original Message-
From: Carlos Bonilla [mailto:carlosbonill...@gmail.com]
Sent: Wednesday, May 22, 2013 12:17 AM
To:
Thank you for your reply bbarani,
I can't do that because I want to boost some documents over others,
independently of the query.
On 05/21/2013 05:41 PM, bbarani wrote:
Why don't you boost during query time?
Something like q=superman&qf=title^2 subject
You can refer:
clusterstate.json is now reporting shard3 as inactive. Any idea how to
change clusterstate.json manually from commandline?
On 05/22/2013 08:59 AM, Arkadi Colson wrote:
Hi
I tried to split a shard but it failed. If I try to do it again it
does not start again.
I see the two extra shards in
My index is originally of version 4.0. My methods failed with this
configuration.
So, I changed solrconfig.xml in my index to both versions: LUCENE_42 and
LUCENE_41.
For each version in each method (loading and IndexUpgrader), I see the same
errors as before.
Thanks.
-Original
Hi Oussama,
This is explained very nicely on Solr Wiki..
http://wiki.apache.org/solr/SolrRelevancyFAQ#index-time_boosts
http://wiki.apache.org/solr/UpdateXmlMessages#Optional_attributes_for_.22add.22
All you need to do is something similar to below..
-
<add><doc boost="2.5"><field
Hi Erick,
I opened an issue in JIRA: SOLR-4850. But I don't see how to change an
assignee, I don't think that I have permissions to do it.
Thank you.
Best regards,
Lyuba
On Mon, May 20, 2013 at 6:05 PM, Erick Erickson erickerick...@gmail.comwrote:
Lyuba:
Could you go ahead and raise a
Thank you Sandeep,
I did post the document like that (a minor difference is that I did not
add the boost to the field since I don't want to boost on a specific
field; I boosted the whole document: '<doc boost="2.0">...</doc>'),
but the issue is that everything in the query results has the same
I don't know if this is the issue or not but, considering this note from
the wiki :
NOTE: make sure norms are enabled (omitNorms=false in the schema.xml)
for any fields where the index-time boost should be stored.
In my case where I only need to boost the whole document (not a specific
Hi,
How do we search based upon regular expressions in solr?
Regards,
Sagar
DISCLAIMER:
---
The contents of this e-mail and any attachment(s) are confidential and
intended for
I think that is applicable only for the field level boosting and not at
document level boosting.
Can you post your query, field definition and results you're expecting.
I am using index and query time boosting without any issues so far. also
which version of Solr you're using?
On 22 May 2013
I don't know if this can help (since the document boost should be
independent of any schema) but here is my schema :
<?xml version="1.0" encoding="UTF-8"?>
<schema name="" version="1.5">
  <types>
    <fieldType name="string" class="solr.StrField"
        sortMissingLast="true" />
You can write a regular expression query like this (you need to specify
the regex between slashes / ) :
fieldName:/[rR]egular.*/
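Lucene applies the regex to each indexed token as a whole, as if the pattern were implicitly anchored. A minimal Python sketch of that behavior using `re.fullmatch` (the token list is hypothetical, and this emulates the idea, not Solr's actual engine):

```python
import re

def regex_query(tokens, pattern):
    # Lucene matches the regex against each whole token, as if the
    # pattern were anchored with ^ and $; re.fullmatch emulates that.
    return [t for t in tokens if re.fullmatch(pattern, t)]

tokens = ["regular", "Regularity", "irregular"]
print(regex_query(tokens, r"[rR]egular.*"))  # ['regular', 'Regularity']
```

This is also why a regex on a tokenized field can behave unexpectedly: it is matched token by token, never against the whole original text.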
On 05/22/2013 10:51 AM, Sagar Chaturvedi wrote:
Hi,
How do we search based upon regular expressions in solr?
Regards,
Sagar
Did you use the debugQuery=true in solr console to see how the query is
being interpreted and the result calculation?
Also, I'm not sure but this copyField directive seems a bit confusing to
me..
<copyField source="Id" dest="Suggestion" />
Because multiValued is false for Suggestion field so does
@Oussama Thank you for your reply. Is it as simple as that? I mean no
additional settings required?
-Original Message-
From: Oussama Jilal [mailto:jilal.ouss...@gmail.com]
Sent: Wednesday, May 22, 2013 3:37 PM
To: solr-user@lucene.apache.org
Subject: Re: Regular expression in solr
You
I don't think so, it always worked for me without anything special, just
try it and see :)
On 05/22/2013 11:26 AM, Sagar Chaturvedi wrote:
@Oussama Thank you for your reply. Is it as simple as that? I mean no
additional settings required?
-Original Message-
From: Oussama Jilal
Yes I did debug it and there is nothing special about it, everything is
treated the same,
My Solr version is 4.2
The copy field is used because the 2 fields are of different types but
only one value is indexed in them (so no multiValued is required and it
works perfectly).
On 05/22/2013
Yes, it works for me too. But many times the result is not as expected. Is
there some guide on the use of regex in Solr?
-Original Message-
From: Oussama Jilal [mailto:jilal.ouss...@gmail.com]
Sent: Wednesday, May 22, 2013 4:00 PM
To: solr-user@lucene.apache.org
Subject: Re: Regular expression
Hi,
Since synonym searching has some limitations in Solr, I wanted to know the
procedure for synonym indexing in Solr.
Please let me know if any guide is available for that.
Regards,
Sagar
I am not sure but I heard it works with the Java Regex engine (a little
obvious if it is true ...), so any Java regex tutorial would help you.
On 05/22/2013 11:42 AM, Sagar Chaturvedi wrote:
Yes, it works for me too. But many times result is not as expected. Is there
some guide on use of
Hello,
I think that what is written about the SynonymFilterFactory in the wiki
is well explained, so I will direct you there :
http://wiki.apache.org/solr/AnalyzersTokenizersTokenFilters#solr.SynonymFilterFactory
On 05/22/2013 11:44 AM, Sagar Chaturvedi wrote:
Hi,
Since synonym searching
Jack,
Thanks for your response.
1. Flattening could be an option, although our scale and required
functionality (runtime non DocValues backed facets) is beyond what solr3
can handle (billions of docs). We have flattened the meta data at the
expense of over-generating solr documents. But to solve
I'm running out of options now, can't really see the issue you're facing
unless the debug analysis is posted.
I think a thorough debugging is required from both application and solr
level.
If you want to customize scoring in Solr, you can also consider overriding
DefaultSimilarity implementation
I just can't get the $ end-of-line anchor to work.
I am not sure but I heard it works with the Java Regex engine (a little
obvious if it is true ...), so any Java regex tutorial would help you.
On 05/22/2013 11:42 AM, Sagar Chaturvedi wrote:
Yes, it works for me too. But many times result is not as
Ok thank you for your help, I think I will have to treat the problem in
another way even if it will complicate things for me.
thanks again
On 05/22/2013 11:51 AM, Sandeep Mestry wrote:
I'm running out of options now, can't really see the issue you're facing
unless the debug analysis is
Thanks. Already used it. Quite easy to set up. But it tells how to set up synonym
search. I am asking about synonym indexing.
-Original Message-
From: Oussama Jilal [mailto:jilal.ouss...@gmail.com]
Sent: Wednesday, May 22, 2013 4:18 PM
To: solr-user@lucene.apache.org
Subject: Re: synonym
There is no ^ or $ in the solr regex since the regular expression will
match tokens (not the complete indexed text). So the results you get
will basically depend on your way of indexing; if you use the regex on a
tokenized field and that is not what you want, try to use a copy field
which is not
Sandeep:
You need to be a little careful here, I second Shawn's comment that
you are mixing versions. You say you are using solr 4.0. But the jar
that ships with that is apache-solr-core-4.0.0.jar. Then you talk
about using solr-core, which is called solr-core-4.1.jar.
Maven is not officially
LUCENE_40 since your original index was built with 4.0.
As for the other, I'll defer to people who actually know what they're
talking about.
Best
Erick
On Wed, May 22, 2013 at 5:19 AM, Elran Dvir elr...@checkpoint.com wrote:
My index is originally of version 4.0. My methods failed with
Thanks, I saw that and assigned it to myself. On the original form
when you create the issue, there's an assign to entry field, but I
don't know whether you see the same thing
Best
Erick
On Wed, May 22, 2013 at 5:36 AM, Lyuba Romanchuk
lyuba.romanc...@gmail.com wrote:
Hi Erick,
I opened
Thanks Erick for your suggestion.
Turns out I won't be going that route after all as the highlighter
component is quite complicated - to follow and to override - and not much
time left in hand so did it the manual (dirty) way.
Best Regards,
Sandeep
On 22 May 2013 12:21, Erick Erickson
Hello,
I have a field defined in my schema.xml like so:
<field name="sa_site_city" type="string" indexed="true" stored="true"/>
string is a type :
<fieldType name="string" class="solr.StrField" sortMissingLast="true" />
When I run the query for faceting data by the city:
Zhang:
In 3.6, there's really no choice except to load all the cores on
startup. 10 minutes still seems excessive, do you perhaps have a
heavy-weight firstSearcher query?
Yes, soft commits are 4.x only, so that's not your problem.
There's a shareSchema option that tries to only load 1 copy of
Look at the text_general type (solr 4.x) in the example schema.xml.
That has an example of including synonyms at index time (although it
is commented out, but you can get the idea). So to substitute synonyms
at index time, just uncomment the index time analyzer mention of
synonyms and comment out
Probably you're not querying the field you think you are. Try adding
debug=all to the URL and I think you'll see something like
default_search_field:mm_state_code
Which means you're searching for the literal phrase mm_state_code in
your default search field (defined in solrconfig.xml for the
I got:
SyntaxError: Cannot parse
'name:Bbbbm'
Using Solr 4.2.1
name field type def:
<fieldType name="text_general" class="solr.TextField"
    positionIncrementGap="100">
  <analyzer type="index">
    <tokenizer
hi all
I wanted to know: is there a way I can sort my documents based on 3
fields
I have fields like pop(which is basically frequency of the term searched
history) and autosug(auto suggested words) and initial_boost(copy field of
autosug such that only match with initial term match having
On 5/21/2013 11:20 PM, mike st. john wrote:
Is there any way to set the collection without passing setDefaultCollection
in cloudsolrserver?
I'm using cloudsolrserver with spring, and would like to autowire it.
It's a query parameter:
On 22 May 2013 18:26, Rohan Thakur rohan.i...@gmail.com wrote:
hi all
I wanted to know is there a way I can sort the my documents based on 3
fields
I have fields like pop(which is basically frequency of the term searched
history) and autosug(auto suggested words) and initial_boost(copy field
Hi,
I didn't see this question.
Yes, I confirm Crawl-Anywhere can crawl in distributed environment.
If you have several huge web sites to crawl, you can dispatch crawling
across several crawler engines. However, one single web site can only be
crawled by one crawler engine at a time.
This
Hi,
Crawl-Anywhere is now open-source - https://github.com/bejean/crawl-anywhere
Best regards.
On 02/03/11 10:02, findbestopensource wrote:
Hello Dominique Bejean,
Good job.
We identified almost 8 open source web crawlers
http://www.findbestopensource.com/tagged/webcrawler I don't
Ok after I added debug=all to the query, I get:
{
  "responseHeader": {
    "status": 0,
    "QTime": 11,
    "params": {
      "facet": "true",
      "indent": "true",
      "q": "mm_state_code",
      "debug": "all",
      "facet.field": "sa_site_city",
      "wt": "json"}},
  "response": {"numFound": 0, "start": 0, "docs": []
  },
  "facet_counts": {
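Facets are computed only over matching documents, so a response with numFound of 0 necessarily has empty facet counts. A small Python sketch reading that field from a trimmed, hypothetical copy of such a response:

```python
import json

# Trimmed, hypothetical version of a debug response with no matches.
raw = '{"response": {"numFound": 0, "start": 0, "docs": []}}'
resp = json.loads(raw)

# No matching documents means there is nothing to facet over.
print(resp["response"]["numFound"])  # 0
```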
Hi,
I did see this message (again). Please, use the new dedicated
Crawl-Anywhere forum for your next questions.
https://groups.google.com/forum/#!forum/crawl-anywhere
Did you solve your problem ?
Thank you
Dominique
On 29/01/13 09:28, SivaKarthik wrote:
Hi,
i resolved the issue
Dear All
Can I write a search filter for a field having a value in a range or a
specific value.
Say if I want to have a filter like
1. Select profiles with salary 5 to 10 or Salary 0.
So I expect profiles having salary either 0 , 5, 6, 7, 8, 9, 10 etc.
It should be possible, can somebody help
Hi,
Crawl-Anywhere includes a customizable document processing pipeline.
Crawl-Anywhere can also cache original crawled pages and documents in a
mongodb database.
Best regards.
Dominique
On 11/02/13 06:16, SivaKarthik wrote:
Dear Erick,
Thanks for your reply.
Yes, Nutch can meet
Ok my bad.
I do have a default field defined in the /select handler in the config file.
<lst name="defaults">
  <str name="echoParams">explicit</str>
  <int name="rows">10</int>
  <str name="df">sa_property_id</str>
</lst>
But then how do I change my query now?
Hello!
You can try sending a filter like this fq=Salary:[5+TO+10]+OR+Salary:0
It should work
--
Regards,
Rafał Kuć
Sematext :: http://sematext.com/ :: Solr - Lucene - Nutch - ElasticSearch
Dear All
Can I write a search filter for a field having a value in a range or a
specific value.
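A sketch of building that request from Python, assuming a local Solr at port 8983 and a hypothetical `profiles` collection; `urlencode` takes care of escaping the brackets and spaces in the fq:

```python
from urllib.parse import urlencode

# Hypothetical host and collection; the fq combines a range with an
# exact value, matching salaries of 0 or anywhere from 5 to 10.
params = {
    "q": "*:*",
    "fq": "Salary:[5 TO 10] OR Salary:0",
    "wt": "json",
}
url = "http://localhost:8983/solr/profiles/select?" + urlencode(params)
print(url)
```

The same fq can be pasted directly into the admin UI; only the URL escaping differs.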
On 5/22/2013 6:43 AM, adm1n wrote:
SyntaxError: Cannot parse
'name:Bbbbm'
The subject mentions one error, the message says another. If you are
getting too many boolean clauses, then you need to increase the
thanks gora I got that
one more thing
what actually I have done is made document consisting of fields:
{
  "autosug": "galaxy",
  "query_id": 1414,
  "pop": 168,
  "initial_boost": "galaxy",
  "_version_": 1435669695565922305,
  "score": 1.8908522}
this initial_boost is basically
Thank you bbarani. Unfortunately, this does not work. I do not get any
exception, and the documents import OK. However there is no Category1,
Category2 … etc. when I retrieve the documents.
I don’t think I am using the Alpha or Beta of 4.0. I think I downloaded the
plain vanilla release version.
There was a mistake in my last reply. Your child entities need to SELECT on
the join key so DIH has it to do the join. So use SELECT SKU, CategoryName...
James Dyer
Ingram Content Group
(615) 213-4311
-Original Message-
From: O. Olson [mailto:olson_...@yahoo.it]
Sent: Tuesday, May
first of all thanks for response!
Regarding two tokenizers - it's ok.
switching to NGramFilterFactory didn't help (though I didn't reindex but
don't think it was needed since switched it into 'query' section).
Now regarding the maxBooleanClauses - how does it affect performance (response
times,
Now regarding the maxBooleanClauses - how does it affect performance (response
times, memory usage) when increasing it?
Changing maxBooleanClauses doesn't make any difference at all. Having
thousands of clauses is what makes things run slower and take more memory.
The setting just causes large
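To make the distinction concrete: maxBooleanClauses is only a guard that rejects oversized queries; raising it does not make them cheaper. A simplified Python sketch of the check (not Lucene's actual Java implementation):

```python
MAX_BOOLEAN_CLAUSES = 1024  # Solr's long-standing default

class TooManyClauses(Exception):
    """Raised when a boolean query has more clauses than the limit allows."""

def check_clause_count(clauses, limit=MAX_BOOLEAN_CLAUSES):
    # The cost of evaluating the query grows with len(clauses) regardless;
    # the limit only decides whether to fail fast instead of running it.
    if len(clauses) > limit:
        raise TooManyClauses(f"{len(clauses)} clauses exceeds limit {limit}")
    return clauses
```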
Problem:
We periodically rebuild our Solr index from scratch. We have built a
custom publisher that horizontally scales to increase write throughput. On
a given rebuild, we will have ~60 JVMs running with 5 threads that are
actively publishing to all Solr masters.
For each thread, we
Thank you very much James. Your suggestion worked exactly! I am curious why I
did not get any errors before. For others, the following worked for me:
<entity name="Cat1"
    query="SELECT CategoryName, SKU from CAT_TABLE WHERE
        CategoryLevel=1" cacheKey="SKU" cacheLookup="Product.SKU"
I am curious why I did not get any errors before.
Because there was no (syntax) error before - the fact that you didn't include a
SKU (but using it as cacheKey) just doesn't match anything .. therefore you got
nothing added to your documents.
Perhaps we should add a ticket as an improvement for
Although we are entering the era of Big Data, that does not mean there are
no limits or restrictions on what a given technology can do.
Maybe you need to consider either a smaller scope for your project, or more
limited features, or some other form of simplification.
Solr can do billions of
I have schema.xml
<field name="body" type="text_en_html" indexed="true" stored="true"
    omitNorms="true"/>
...
<fieldType name="text_en_html" class="solr.TextField"
    positionIncrementGap="100">
  <analyzer type="index">
    <charFilter class="solr.HTMLStripCharFilterFactory"/>
    <tokenizer
Hi There,
Not sure I understand your problem correctly, but is 'mm_state_code' a real
value or is it a field name?
Also, as Erick pointed out above, the facets are not calculated if there
are no results. Hence you get no facets.
You have mentioned which facets you want but you haven't mentioned
On 5/22/2013 9:08 AM, Justin Babuscio wrote:
We periodically rebuild our Solr index from scratch. We have built a
custom publisher that horizontally scales to increase write throughput. On
a given rebuild, we will have ~60 JVMs running with 5 threads that are
actively publishing to all Solr
That would be a worthy enhancement to do. Always nice to give the user a
warning when something is going to fail so they can troubleshoot better...
James Dyer
Ingram Content Group
(615) 213-4311
-Original Message-
From: Stefan Matheis [mailto:matheis.ste...@gmail.com]
Sent:
Hello to all,
I'm trying to setup solr 4.2 to index and search into french content.
I defined a special fieldtype for french content :
<fieldType name="text_fr" class="solr.TextField"
    positionIncrementGap="100">
  <analyzer type="index">
    <charFilter
Thanks for your reply.
I have my request url modified like this:
http://xx.xx.xx.xx/solr/collection1/select?q=TX&df=mm_state_code&wt=xml&indent=true&facet=true&facet.field=sa_site_city&debug=all
Facet Field = sa_site_city (city-wise facet)
Default Field = mm_state_code
Query= TX
When I run this
I doubt if there is any straight out of the box feature that supports this
requirement, you will probably need to handle this at the index time.
You can play around with Function Queries
http://wiki.apache.org/solr/FunctionQuery for any such feature.
On 22 May 2013 16:37, Sam Lee
This query returns 0 documents: q=(+Title:() +Classification:()
+Contributors:() +text:())
This returns 1 document: q=doc-id:3000
And this returns 631580 documents when I was expecting 0: q=doc-id:3000
AND (+Title:() +Classification:() +Contributors:() +text:())
Am I missing something
Am I missing something
Sam,
I would highly suggest counting the words in your external pipeline and sending
that value in as a specific field. It can then be queried quite simply with a:
wordcount:{80 TO *]
(Note the { next to 80, excluding the value of 80)
Jason
On May 22, 2013, at 11:37 AM, Sam Lee
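The bracket shapes carry the inclusive/exclusive semantics: `{` or `}` excludes the endpoint, `[` or `]` includes it, and `*` is unbounded. A tiny Python sketch of what `wordcount:{80 TO *]` selects (an emulation of the semantics, not Lucene's implementation):

```python
def matches_wordcount(value, lower=80, lower_exclusive=True):
    # '{80 TO *]' : lower bound 80 excluded, no upper bound.
    # '[80 TO *]' would include 80 itself.
    return value > lower if lower_exclusive else value >= lower

print(matches_wordcount(80))  # False, the '{' excludes 80
print(matches_wordcount(81))  # True
```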
From the response you've mentioned it appears to me that the query term TX
is searched against sa_site_city instead of mm_state_code.
Can you try your query like below:
http://xx.xx.xx.xx/solr/collection1/select?q=mm_state_code:(TX)&wt=xml&indent=true&facet=true&facet.field=sa_site_city&debug=all
Thank you guys, particularly James, very much. I just imported 200K documents
in a little more than 2 mins – which is great for me :-). Thank you Stefan.
I did not realize that it was not a syntax error and hence no error. Thank
you for clearing that up.
O. O.
You will need to edit it manually and upload using a zookeeper client, you can
use kazoo, it's very easy to use.
--
Yago Riveiro
Sent with Sparrow (http://www.sparrowmailapp.com/?sig)
On Wednesday, May 22, 2013 at 10:04 AM, Arkadi Colson wrote:
clusterstate.json is now reporting shard3 as
Just an update for others reading this thread: I had some issues with
CachedSqlEntityProcessor and had them addressed in the thread How do I use
CachedSqlEntityProcessor?
(http://lucene.472066.n3.nabble.com/How-do-I-use-CachedSqlEntityProcessor-td4064919.html)
I basically had to declare the child entities in
Shawn,
Thank you!
Just some quick responses:
On your overflow theory, why would this impact the client? Is it possible
that a write attempt to Solr would block indefinitely while the Solr server
is running wild or in a bad state due to the overflow?
We attempt to set the BinaryRequestWriter
When I use your query, I get :
<?xml version="1.0" encoding="UTF-8"?>
<response>
<lst name="responseHeader">
  <int name="status">400</int>
  <int name="QTime">12</int>
  <lst name="params">
    <str name="facet">true</str>
    <str name="df">mm_state_code</str>
    <str name="indent">true</str>
    <str name="q">mm_state_code:(TX)</str>
On 5/22/2013 11:25 AM, Justin Babuscio wrote:
On your overflow theory, why would this impact the client? Is it possible
that a write attempt to Solr would block indefinitely while the Solr server
is running wild or in a bad state due to the overflow?
That's the general notion. I could be
I'm a developing a recommendation feature in our app using the
MoreLikeThisHandler http://wiki.apache.org/solr/MoreLikeThisHandler, and
so far it is doing a great job. We're using a user's competency keywords
as the MLT field list and the user's corresponding document in Solr as the
comparison
Logging/UI used to show hostnames in 4.0; in 4.1+ it switched to IP addresses.
Is this by design or a bug/side effect?
It's pretty painful to look at IP addresses, so I am planning to change it.
Let me know if you have any concerns
--
Anirudha
: Subject: solr starting time takes too long
: In-Reply-To: 519c6cd6.90...@smartbit.be
: Thread-Topic: shard splitting
https://people.apache.org/~hossman/#threadhijack
-Hoss
On 5/22/2013 12:53 PM, Anirudha Jadhav wrote:
Logging/UI used to show hostnames in 4.0; in 4.1+ it switched to IP addresses.
Is this by design or a bug/side effect?
If you are talking about SolrCloud, this was an intentional change. By
including a host property either on the Solr startup
After taking your advice on profiling, I didn't see any memory issues. I
wanted to verify this with a small set of data. So I created a new
sandbox core with the exact same schema and config file settings. I
indexed only 25 PDF documents with an average size of 2.8 MB, the
largest is approx 5 MB
Answered my own question...
mlt.mintf: Minimum Term Frequency - the frequency below which terms will be
ignored in the source doc
Our source doc is a set of limited terms...not a large content field. So
in our case I need to set that value to 1 (rather than the default of 2).
Now I'm getting
: NOTE: make sure norms are enabled (omitNorms=false in the schema.xml) for
: any fields where the index-time boost should be stored.
:
: In my case where I only need to boost the whole document (not a specific
: field), do I have to activate the omitNorms=false for all the fields
: in the
Hi,
I am new to Solr and recently started exploring it for search/sort needs in
our webapp.
I have couple of questions as below, (I am using solr 4.2.1 with default
core named collection1)
1. We have a use case where we would like to index data every 10 mins (avg).
Whats the best way to
Very sorry about hijacking an existing thread (I thought it would be OK
if I just changed the title and content, but it was still wrong).
It will never happen again.
Lisheng
-Original Message-
From: Chris Hostetter [mailto:hossman_luc...@fucit.org]
Sent: Wednesday, May 22, 2013 11:58 AM
To:
I'm encountering the same issue, but, my Russian stopwords.txt IS encoded in
UTF-8.
I verified the encoding using EmEditor (I've used it for years, and I use it
for the existing English, French, Spanish, Portuguese and German Solr
configurations, without issues).
Just to make extra sure, I
API doc says that:
Lucene supports regular expression searches matching a pattern between
forward slashes /. The syntax may change across releases, but the current
supported syntax is documented in the RegExp class. For example to find
documents containing moat or boat:
/[mb]oat/
I think that
If the indexed data includes positions, it should be possible to
implement ^ and $ as the first and last positions.
On 05/22/2013 04:08 AM, Oussama Jilal wrote:
There is no ^ or $ in the solr regex since the regular expression will
match tokens (not the complete indexed text). So the results
Hi All,
We can use lukeall-4.0 for reading a Solr 3.x index. Similarly, do we have
anything to read a Solr 4.x index? Please help.
Thanks.
--
View this message in context:
http://lucene.472066.n3.nabble.com/Tool-to-read-Solr4-2-index-tp4065448.html
Sent from the Solr - User mailing list archive at
What is the format of the UTC string? Example?
thx
-Original Message-
From: Chris Hostetter [mailto:hossman_luc...@fucit.org]
Sent: Wednesday, May 22, 2013 00:03
To: solr-user@lucene.apache.org
Subject: Re: Date Field
: 2) Chain TemplateTransformer either by itself or before the
:
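To answer the format question directly: Solr date fields use ISO-8601 in UTC with a trailing Z, e.g. 1995-12-31T23:59:59Z. A short Python sketch producing that shape:

```python
from datetime import datetime, timezone

def solr_date(dt):
    # Solr expects ISO-8601, UTC, with a literal 'Z' suffix.
    return dt.astimezone(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")

print(solr_date(datetime(2013, 5, 22, 0, 3, tzinfo=timezone.utc)))
# 2013-05-22T00:03:00Z
```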
This might help
http://wiki.apache.org/solr/LukeRequestHandler
--
Shreejay Nair
Sent from my mobile device. Please excuse brevity and typos.
On Wednesday, May 22, 2013 at 13:47, gpssolr2020 wrote:
Hi All,
We can use lukeall-4.0 for reading a Solr 3.x index. Similarly, do we have
anything to
Thanks.
Hi
I am trying to apply filtering on a non-indexed double field, but it's not
returning any results. So can't we do fq on a non-indexed field?
can not use FieldCache on a field which is neither indexed nor has doc
values: EXCH_RT_AMT
</str>
<int name="code">400</int>
We are using Solr4.2.
Thanks.
--
Hi All,
Not really a pressing need for this at all, but having worked through a few
tutorials, I was wondering if there was any work being done to incorporate
Lucene Facets into solr:
http://lucene.apache.org/core/4_3_0/facet/org/apache/lucene/facet/doc-files/userguide.html
Brendan
For the first question, the cron job that hits the DIH trigger URL will probably be
the easiest way.
Not sure I understood the second question. How do you store/know that
the entries expire. And how do you pull for those specific entries?
Regards,
Alex.
Personal blog: http://blog.outerthoughts.com/
The topic has come up, but nobody has expressed a sense of urgency.
It actually has a placeholder Jira:
https://issues.apache.org/jira/browse/SOLR-4774
Feel free to add your encouragement there.
-- Jack Krupansky
-Original Message-
From: Brendan Grainger
Sent: Wednesday, May 22,
Our prod environment is going to be on Azure. As such, I want our index to
live on the Azure VM's local storage rather than the default VM disk (blob
storage).
Normally, I just use /var/opt/tomcat7/PORT/solr/collection1/data, but I
want to use something else.
I am also using the Collections API
Hello all,
I am facing a need to store and retrieve a JSON string in a
field.
eg. Imagine a schema like below.
[Please note that this is just an example but not actual specification.]
<str name="carName" type="string" indexed="true" stored="false">
<str name="carDescription" type="string"
Yes, the quotes need to be escaped - since they are contained within a
quoted string, which you didn't show. That is the proper convention for
representing strings in JSON. Are you familiar with the JSON format? If not,
try XML - it won't have to represent a string as a quoted JSON string.
If
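The escaping point can be seen without Solr at all: when a JSON string is stored inside another JSON document, the inner quotes must be escaped, and `json.dumps` does it automatically (the field name below is taken from the example above):

```python
import json

# A JSON string stored as the value of a JSON field: the inner quotes
# are escaped as \" by json.dumps.
car_description = '{"color": "red", "doors": 4}'
payload = json.dumps({"carDescription": car_description})
print(payload)

# Decoding reverses the escaping and returns the original string.
assert json.loads(payload)["carDescription"] == car_description
```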
Hello,
I have just created a new JIRA issue, if you are interested in trying out
the new query parser, please visit:
https://issues.apache.org/jira/browse/LUCENE-5014
Thanks,
roman
On Mon, May 6, 2013 at 5:36 PM, Jan Høydahl jan@cominvent.com wrote:
Added. Please try editing the page now.
Hello Folks,
I have a question about coordination factor to ensure my understanding of this
value is correct.
If I have documents that contain some keywords like the following:
Doc1: A, B, C
Doc2: A, C
Doc3: B, C
And my query is A OR B OR C OR D.
In this case, the coord factor value for each document
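In Lucene's classic DefaultSimilarity, coord(q, d) is simply overlap/maxOverlap: the fraction of query terms the document matches. A quick Python sketch applied to the three documents above:

```python
def coord(doc_terms, query_terms):
    # Classic Lucene coordination factor: overlap / maxOverlap.
    overlap = len(set(doc_terms) & set(query_terms))
    return overlap / len(query_terms)

query = ["A", "B", "C", "D"]
print(coord(["A", "B", "C"], query))  # Doc1: 3/4 = 0.75
print(coord(["A", "C"], query))       # Doc2: 2/4 = 0.5
print(coord(["B", "C"], query))       # Doc3: 2/4 = 0.5
```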