Hi James,
How can I see the suggestions returned by
spellcheck.alternativeTermCount?
On Wed, Feb 18, 2015 at 11:09 AM, Nitin Solanki nitinml...@gmail.com
wrote:
Thanks James,
I tried the same thing
spellcheck.count=10&spellcheck.alternativeTermCount=5. And I got 5
Hi Everyone,
I have set the value of spellcheck.count = 0 and
spellcheck.alternativeTermCount = 0. Even so, collations are still returned
when I search for a misspelled query. Why is that?
I also set the value of spellcheck.maxCollations = 100 and
spellcheck.maxCollationTries =
Hi Toke,
Thank you for your response.
Here are some clarifications.
- The same terms will occur several times for a given field (from 10
to 100,000)
Do you mean that any term is only present in a limited number (up to
about 100K) of documents, or do you mean that some documents have fields
with
On Wed, 2015-02-18 at 01:40 +0100, Dominique Bejean wrote:
(I reordered the requirements)
- Collection size: 15 billion documents
- Document size is nearly 300 bytes
- 1 billion documents indexed = 5 GB index size
- Collection update: 8 million new documents / day + 8 million
Hi,
Is there a way to read the internal document once Solr finishes indexing?
Also, is there a possibility to store this internal document in XML format?
--
Best Regards,
Dinesh Naik
Hi,
How can I place the whole indexed data in cache, so that when I
search any query I get responses, suggestions, and collations rapidly?
Also, how can I view which documents are in cache, and how can I verify it?
On 2/18/2015 9:22 AM, Abdelali AHBIB wrote:
with the Collections API, there were still some config files in
/solr/config/Xunused_collection; I deleted them manually as well
2015-02-18 16:16 GMT+00:00 Dominique Bejean dominique.bej...@eolya.fr:
When you say "I renamed some cores, cleaned other unused ones"
Sorry, I was missing the actual part, that is, without parsing the JSON output.
I was looking into SolrJ's
QueryResponse.getBeans(Syndrome.class), but how do I embed the highlighting
snippet inside each Syndrome object itself?
Thanks
meena
I need to create a custom JSON format of the Solr output for a specific UI. I was
wondering if there is a way to embed the highlighting portion inside the docs
themselves.
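A minimal sketch of one way to do this client-side, assuming the standard wt=json response shape and a uniqueKey field named "id" (both assumptions; adjust to your schema). The highlighting section is keyed by uniqueKey, so it can be folded into each doc before handing the JSON to the UI:

```python
import json

def embed_highlighting(solr_response: dict, id_field: str = "id") -> list:
    """Copy each doc's highlighting snippets into the doc itself."""
    docs = solr_response["response"]["docs"]
    highlighting = solr_response.get("highlighting", {})
    for doc in docs:
        # The highlighting map is keyed by each doc's uniqueKey value.
        doc["highlighting"] = highlighting.get(doc[id_field], {})
    return docs

# A trimmed-down example response:
resp = {
    "response": {"docs": [{"id": "1", "title": "dark knight"}]},
    "highlighting": {"1": {"title": ["<em>dark</em> knight"]}},
}
docs = embed_highlighting(resp)
print(json.dumps(docs))
```

The same merge could of course be done in Java after QueryResponse.getHighlighting(), but then the snippets live next to the beans rather than inside them.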
Thanks
Meena
--
View this message in context:
http://lucene.472066.n3.nabble.com/solr-output-in-custom-json-format-tp4187200.html
I think what ideally is needed here is an implementation for this open issue:
https://issues.apache.org/jira/browse/SOLR-3479
—
Erik Hatcher, Senior Solutions Architect
http://www.lucidworks.com/
On Feb 18, 2015,
Hi Jack,
We are looking for something like this-
For example, if you search for the text "go",
we should also get other forms of it, like "going", "gone", "goes", etc.
This is not achieved via stemming.
-Original Message-
From: Jack Krupansky jack.krupan...@gmail.com
Sent: 18-02-2015
Yes, I did that, but it doesn’t work.
New Example;
TSTLookup
doc 1 : shoe adidas 2 hiking
doc 2 : galaxy samsung s5 phone
doc 3 : shakeology sample packets
http://localhost:8983/solr/solr/suggest?q=samsung+hi
<response>
<lst name="responseHeader">
<int name="status">0</int>
<int name="QTime">1</int>
</lst>
<lst
Thank you Dominique and Shawn. Now I see that clusterstate.json does not
reflect the current number of cores in shard2; there are duplicated cores in
all collections, like this. How can I edit clusterstate.json?
[image: Inline image 1]
2015-02-18 16:54 GMT+00:00 Shawn Heisey
Hi,
As Shawn said, install enough memory so that all free memory
(non-heap memory) can be used as disk cache.
Use at most 40% of the available memory for the heap (the Xmx JVM
parameter), but never more than 32 GB.
And prevent your server from swapping.
For most Linux systems, this is configured
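The sizing rule above can be sketched as a tiny helper (the 40% and 32 GB figures are taken from this email, not from any official formula; real sizing should be validated against GC behavior):

```python
def suggest_heap_gb(total_ram_gb: float) -> float:
    """Suggest an Xmx value: at most 40% of RAM, never more than 32 GB."""
    return min(0.4 * total_ram_gb, 32.0)

# A 64 GB machine: 40% of 64 = 25.6 GB heap, leaving ~38 GB for the OS disk cache.
print(suggest_heap_gb(64))   # 25.6
# A 256 GB machine hits the 32 GB cap; the rest goes to the disk cache.
print(suggest_heap_gb(256))  # 32.0
```

The 32 GB ceiling matters because heaps beyond roughly that size lose compressed object pointers on the JVM, so a slightly larger heap can actually hold fewer objects.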
I went through the Solr documentation, and it seemed pretty good. However, I
have a different requirement. In my scenario, I will provide a list of
words, each corresponding to a particular position. Say, an array of tuples,
where each tuple consists of a word and its position (the position
It will try to give you suggestions up to the number you specify, but if fewer
are available it will not give you any more.
James Dyer
Ingram Content Group
-Original Message-
From: Nitin Solanki [mailto:nitinml...@gmail.com]
Sent: Tuesday, February 17, 2015 11:40 PM
To:
I think when you set count/alternativeTermCount to zero, the defaults (10?)
are used instead. Instead of setting these to zero, just use
spellcheck=false. These 2 parameters control suggestions, not collations.
To turn off collations, set spellcheck.collate=false. Also, I wouldn't set
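To make the parameter split concrete, here is a sketch of the two request variants built with urllib (the core name and /spell handler path are hypothetical; the parameter names are the ones discussed in this thread):

```python
from urllib.parse import urlencode

# Hypothetical core and request-handler path.
base = "http://localhost:8983/solr/collection1/spell"

# Turn the spellchecker off entirely (instead of setting counts to 0):
no_spellcheck = urlencode({"q": "misspeled query", "spellcheck": "false"})

# Keep per-term suggestions but suppress collations:
no_collations = urlencode({
    "q": "misspeled query",
    "spellcheck": "true",
    "spellcheck.count": 10,          # max suggestions per term
    "spellcheck.collate": "false",   # collations off, suggestions still on
})

print(base + "?" + no_collations)
```

The point being made above: count/alternativeTermCount shape the suggestion lists, while spellcheck.collate is the switch that actually controls collations.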
Hi,
When you say "I renamed some cores, cleaned other unused ones that we don't
need anymore, etc.", how did you do this?
With the Cores or Collections API, or by deleting core directories in Solr
Home?
Dominique
http://www.eolya.fr
2015-02-18 17:04 GMT+01:00 Abdelali AHBIB alifar...@gmail.com:
Please provide a few examples that illustrate your requirements.
Specifically, requirements that are not met by the existing Solr stemming
filters. What is your specific goal?
-- Jack Krupansky
On Wed, Feb 18, 2015 at 10:50 AM, dinesh naik dineshkumarn...@gmail.com
wrote:
Hi,
Is there a way
with the Collections API, there were still some config files in
/solr/config/Xunused_collection; I deleted them manually as well
2015-02-18 16:16 GMT+00:00 Dominique Bejean dominique.bej...@eolya.fr:
Hi,
When you say "I renamed some cores, cleaned other unused ones that we don't
need anymore, etc.", how did
Hi,
It sounds like Solr simply could not index some docs. The index is not
corrupt; it's just that indexing was failing while the disk was full. You'll
need to re-send/re-add/re-index the missing docs (or simply all of them if
you don't know which ones are missing).
Otis
--
Monitoring * Alerting *
On 02/17/2015 03:46 AM, Volkan Altan wrote:
First of all thank you for your answer.
You're welcome - thanks for sending a more complete example of your
problem and expected behavior.
I don’t want to use KeywordTokenizer. Because, as long as the compound words
written by the user are
Hi,
I never used map-reduce indexing.
My understanding is that map-reduce tasks generate one or more Solr
indices; then the golive tool is used to merge these indices at
core level into one or more shards (the shard leaders) in a SolrCloud
collection. After the merge occurs in the leaders, the
Hello!
You can try luke's export feature:
https://github.com/DmitryKey/luke/wiki/Exporting-index-to-xml
On Wed, Feb 18, 2015 at 12:57 PM, dinesh naik dineshkumarn...@gmail.com
wrote:
Hi,
Is there a way to read the internal document once Solr finishes indexing?
Also, is there a possibility
On 2/18/2015 4:20 AM, Nitin Solanki wrote:
How can I place the whole indexed data in cache, so that when I
search any query I get responses, suggestions, and collations rapidly?
Also, how can I view which documents are in cache, and how can I verify it?
Simply install enough
Hi,
Is there a way to achieve lemmatization in Solr? The stemming option does
not meet the requirement.
--
Best Regards,
Dinesh Naik
On 2/18/2015 8:17 AM, Nitin Solanki wrote:
I have created 4 nodes having 8 shards. Now, I want to divide those
4 nodes into 100 nodes without any failure or re-indexing of the data. Any
help please?
I think your only real option within a strict interpretation of your
requirements is
No,
the SPLIT operation doesn’t destroy the data. When the SPLIT operation is finished,
the parent shard is deactivated and you can remove it.
More info:
https://cwiki.apache.org/confluence/display/solr/Collections+API#CollectionsAPI-api3
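A minimal sketch of the Collections API call being discussed (the host, collection, and shard names here are placeholders, not values from this thread):

```python
from urllib.parse import urlencode

# Hypothetical host, collection, and shard names.
base = "http://localhost:8983/solr/admin/collections"
params = urlencode({
    "action": "SPLITSHARD",
    "collection": "mycollection",
    "shard": "shard1",
})
url = base + "?" + params
print(url)
# Once the split completes, the parent shard is marked inactive and can be
# removed with a separate action=DELETESHARD call.
```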
—
/Yago Riveiro
On Wed, Feb 18, 2015 at 3:39 PM, Nitin
Hello,
We use SolrCloud with two shards (no replication for now); ZooKeeper is on
a separate machine. It worked well until yesterday, when I renamed some
cores, cleaned other unused ones that we don't need anymore, etc. Then I
got tons of these errors when I try to put docs into my core
Hello,
I have some basic questions for the group. I would appreciate any advice
you can give me.
We have an Oracle RAC database that has a number of schemas on it. Various
things query the structured data stored in these schemas, 10s of
thousands of times per day. Two of these schemas in
Hi Dinesh,
solr.KStemFilterFactory is dictionary-based, i.e. the produced outputs are
valid/legitimate English words.
That fits if by "lemmatizer" you mean finding dictionary entries.
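One way to wire this up is via the Schema API on a managed schema (available in recent Solr releases); here is a sketch of the add-field-type payload, with the field-type name "text_kstem" and the rest of the analyzer chain chosen for illustration:

```python
import json

# Hypothetical field-type definition whose analyzer ends with KStemFilterFactory.
payload = json.dumps({
    "add-field-type": {
        "name": "text_kstem",
        "class": "solr.TextField",
        "analyzer": {
            "tokenizer": {"class": "solr.StandardTokenizerFactory"},
            "filters": [
                {"class": "solr.LowerCaseFilterFactory"},
                {"class": "solr.KStemFilterFactory"},
            ],
        },
    }
})
print(payload)
```

The same fieldType can equally be declared directly in schema.xml if you are not using the managed schema.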
Ahmet
On Wednesday, February 18, 2015 5:51 PM, dinesh naik
dineshkumarn...@gmail.com wrote:
Hi,
Is there a way to
You can try the SPLIT command
—
/Yago Riveiro
On Wed, Feb 18, 2015 at 3:19 PM, Nitin Solanki nitinml...@gmail.com
wrote:
Hi,
I have created 4 nodes having 8 shards. Now, I want to divide those
4 nodes into 100 nodes without any failure or re-indexing of the data. Any
help please?
Okay, so it will destroy/harm my indexed data, right?
On Wed, Feb 18, 2015 at 9:01 PM, Yago Riveiro yago.rive...@gmail.com
wrote:
You can try the SPLIT command
—
/Yago Riveiro
On Wed, Feb 18, 2015 at 3:19 PM, Nitin Solanki nitinml...@gmail.com
wrote:
Hi,
I have created 4 nodes
Guys,
1. Can anyone suggest the best platform to host Solr on: any Unix or
Windows server?
2. All I will be doing is importing lots of PDF documents into Solr. I
believe Solr will automatically build the schema for the imported documents.
3. Can someone suggest what
Hello,
I want to retrieve only the top five suggestions for any
phrase/query search. How do I do that?
Assume I search something like ?q=the bark night; then I need a suggestion/
collation like "the dark knight".
How do I get nearby suggestions/terms for the phrase?
Hi,
I have created 4 nodes having 8 shards. Now, I want to divide those
4 nodes into 100 nodes without any failure or re-indexing of the data. Any
help please?
sorry, no rename operation happened, just a delete (manually, from Solr home
and config) and a manual duplication of a core (the duplicated core is the
same core that doesn't have a problem); then I ran the zkcli upconfig command
2015-02-18 16:22 GMT+00:00 Abdelali AHBIB alifar...@gmail.com:
with
Great help and thanks to you, Alex.
On Wed, Feb 18, 2015 at 2:48 PM, Alexandre Rafalovitch arafa...@gmail.com
wrote:
Like I mentioned before, you could use the string type if you just want the
title as it is. Or you can use a custom type to normalize the indexed
value, as long as you end up with a
Perhaps try quotes around the URL you are providing to curl. It's not
complaining about the HTTP method; Solr has historically always taken
simple GETs over HTTP, and for good or bad you pretty much only POST
documents/updates.
It's saying the name param is required and not being found, and since
Hi,
Can we please document which HTTP method is supposed to be used with each
of these APIs?
https://cwiki.apache.org/confluence/display/solr/Collections+API
I am trying to invoke following API
curl http://hostname:8983/solr/admin/collections?action=CLUSTERPROP&name=urlScheme&val=https
This
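For what it's worth, building the URL programmatically sidesteps the usual failure mode with this call, which is an unquoted & in the shell (the host name below is the same placeholder as in the curl command):

```python
from urllib.parse import urlencode

base = "http://hostname:8983/solr/admin/collections"
params = urlencode({"action": "CLUSTERPROP", "name": "urlScheme", "val": "https"})
url = base + "?" + params
# In a shell, this URL must be wrapped in quotes: an unquoted & backgrounds
# the command, so Solr never receives the name/val parameters and complains
# that "name" is required.
print(url)
```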
Dmitry, that would be great.
CP
On Thu, Feb 12, 2015 at 5:35 AM, Dmitry Kan solrexp...@gmail.com wrote:
Hi,
Looks like I'll be there. So if you want to discuss luke / lucene / solr,
will be happy to de-virtualize.
Dmitry
On Mon, Jan 12, 2015 at 6:32 PM, CP Mishra mishr...@gmail.com
David,
I just subscribed to the Solr list; let's see if that allows me to
post this.
I will write a custom ValueSource. I tried the map function that you
suggested; it works, but it is not so great for performance.
I will try referring to the function query as a sort instead of bq; maybe it