Re: Using different queries for q and facet.query parameters

2012-09-26 Thread Kiran Jayakumar
Thank you very much Hoss, that's exactly what I was looking for! :)


On Wed, Sep 26, 2012 at 2:12 PM, Chris Hostetter
hossman_luc...@fucit.org wrote:


 I think you are misunderstanding the purpose of 'facet.query': it doesn't
 change the base query used when faceting, it gives you a count of
 documents that match an arbitrary query.

 what you are looking for is a way to exclude the fq used to drill down
 on the facet when computing facet results...


 https://wiki.apache.org/solr/SimpleFacetParameters#Tagging_and_excluding_Filters
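
 For example (field and tag names here are purely illustrative, not from the
 original question), a drill-down filter can be tagged and then excluded when
 the facet counts are computed:

   q=ipod
   &fq={!tag=catfq}Category:Books
   &facet=true
   &facet.field={!ex=catfq}Category

 With the "ex" local param, the Category facet counts are computed as if the
 Category:Books filter were not applied, while the result list itself is
 still restricted to Books.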


 -Hoss



Re: how to boost the first substring in a string in Solr

2012-09-13 Thread Kiran Jayakumar
http://lucene.apache.org/core/3_6_1/queryparsersyntax.html#Boosting a Term

How about this:

Yoga^2 Yoga Teacher
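
For instance, assuming the search goes against the default text field (the
exact field is not given in this thread), a query along these lines boosts
the exact phrase over the single term:

  q="Yoga Teacher"^4 OR Yoga^2

Documents containing the full phrase "Yoga Teacher" then score higher than
documents matching only "Yoga".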


On Thu, Sep 13, 2012 at 5:32 AM, Tanguy Moal tanguy.m...@gmail.com wrote:

 Hi,
 Did you try issuing a query like "+Yoga Teacher" (without the
 double-quotes)?

 See http://lucene.apache.org/core/3_6_1/queryparsersyntax.html#Boolean
 operators for more details on Lucene's query parser syntax.

 Hope this helps,
 --
 Tanguy

 2012/9/13 veena rani veenara...@gmail.com

  Hi ,
 
  In Solr, if I search for a string like "Yoga Teacher", it should search
  for "yoga" as well as "yoga teacher",
  but it should not match just "teacher", as in "maths teacher", "science
  teacher", or any other teacher.
  How can I do this in Solr?
  Can anyone please give me a solution to this question?
 
  --
  Regards,
  Veena.
  Banglore.
 



Re: How to preserve source column names in multivalue catch all field

2012-09-07 Thread Kiran Jayakumar
Thank you Erick. I think #2 is the best for me because I have more than a
hundred fields and don't want to construct a huge query each time.

On Thu, Sep 6, 2012 at 9:38 PM, Erick Erickson erickerick...@gmail.com wrote:

 Try using edismax to distribute the search across the fields rather
 than using the catch-all field. There's no way that I know of to
 reconstruct what field the source was.
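
 As a rough illustration only (the field names below are made up, not from
 this thread), an edismax request distributing the query across several
 source fields might look like:

   defType=edismax
   qf=title^3 abstract^2 body
   fl=*,score

 Each match can then be traced back to a real field in the returned
 document, instead of everything collapsing into one catch-all field.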

 But storing the source fields without indexing them is OK too, it won't
 affect
 searching speed noticeably...

 Best
 Erick

 On Tue, Sep 4, 2012 at 11:52 AM, Kiran Jayakumar kiranjuni...@gmail.com
 wrote:
  Hi everyone,
 
  I have got a multivalue catch all field which captures all the text
 fields.
  What's the best way to preserve the column information also? In the UI, I
  need to show "field: value" type output. Right now, I am storing the
  source fields without indexing. Is there a better way to do it ?
 
  Thanks



Re: Problem with verifying signature ?

2012-09-07 Thread Kiran Jayakumar
Thank you.

On Thu, Sep 6, 2012 at 9:51 AM, Chris Hostetter hossman_luc...@fucit.org wrote:


 : gpg: Signature made 08/06/12 19:52:21 Pacific Daylight Time using RSA key
 : ID 322
 : D7ECA
 : gpg: Good signature from Robert Muir (Code Signing Key) 
 rm...@apache.org
 : *gpg: WARNING: This key is not certified with a trusted signature!*
 : gpg:  There is no indication that the signature belongs to the
 : owner.
 : Primary key fingerprint: 6661 9BA3 C030 DD55 3625  1303 817A E1DD 322D
 7ECA
 :
 : Is this acceptable ?

 I guess it depends on what you mean by acceptable?

 I'm not an expert on this, but as I understand it...

 gpg is telling you that it confirmed the signature matches a known key
 named Robert Muir (Code Signing Key) which is in your keyring, but that
 there is no certified level of trust association with that key.

 Key trust is a personal thing, specific to you, your keyring, and how you
 got the keys you put in that ring.  If you trust that the KEYS file you
 downloaded from apache.org is legitimate, and that all the keys in it
 should be trusted, you can tell gpg that (using the "trust"
 interactive command when using --edit-key).
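
 For example (key ID taken from the output you pasted; adjust as needed),
 the interactive trust command is reached like this:

   gpg --edit-key 322D7ECA
   gpg> trust
   ... choose a trust level, then quit

 Locally signing the key (the "lsign" command at the same prompt) is another
 way to tell gpg that you consider the key valid.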

 Alternatively, you could tell gpg that you have a high level of trust in
 the key of some other person you have met personally -- i.e.: if you met Uwe
 at a conference and he physically handed you his key on a USB drive -- and
 then if Uwe has signed Robert's key with his own (I think he has, not sure
 off the top of my head), then gpg would extend an implicit transitive
 trust to Robert's key...

 http://www.gnupg.org/gph/en/manual.html#AEN335


 -Hoss



Re: Solr search not working after copying a new field to an existing Indexed Field

2012-09-07 Thread Kiran Jayakumar
Do you have the unique key set up in your schema.xml ? It should be
automatic if you have the ID field and define it as the unique key.

 <uniqueKey>ID</uniqueKey>

On Thu, Sep 6, 2012 at 11:50 AM, Mani mehamba...@art.com wrote:

 I have made a schema change to copy an existing field "name" (source
 field) to an existing search field "text" (destination field).

 Since I made the schema change, I updated all the documents thinking the
 new source field would be clubbed together with the text field.  The
 search for a specific name does not return results.

 If I delete the document and then add it back, it works just fine.

 I thought the Add command with the default override option would work as a
 Delete followed by an Add.

 Is this the only way to reindex the text field? Is there any other method?

 I really appreciate your help on this!



 --
 View this message in context:
 http://lucene.472066.n3.nabble.com/Solr-search-not-working-after-copying-a-new-field-to-an-existing-Indexed-Field-tp4005993.html
 Sent from the Solr - User mailing list archive at Nabble.com.



Re: N-gram ranking based on term position

2012-09-07 Thread Kiran Jayakumar
Since Edge N-gram tokens are a subset of N-gram tokens, I was wondering if
I could be a bit more space efficient.

On Fri, Sep 7, 2012 at 3:07 PM, Amit Nithian anith...@gmail.com wrote:

 I think your thought about using the edge ngram as a field and
 boosting that field in the qf/pf sections of the dismax handler sounds
 reasonable. Why do you have qualms about it?
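
 As a rough sketch (field names invented just for illustration), the extra
 edge-ngram copy field would simply be boosted in the dismax/edismax
 parameters:

   defType=edismax
   qf=text prefix_text^5
   pf=text^2

 Matches at the start of the prefix_text field then outrank plain
 "contains" matches in the main text field.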

 On Fri, Sep 7, 2012 at 12:28 PM, Kiran Jayakumar kiranjuni...@gmail.com
 wrote:
  Hi,
 
  Is it possible to score documents with a match early in the text higher
  than those with a match later in the text? I want to boost "begins with"
  matches higher than "contains" matches. I can define a copy field, analyze
  it as an edge n-gram, and boost it. I was wondering if there was a better
  way to do it.
 
  Thanks



Re: Problem with verifying signature ?

2012-09-05 Thread Kiran Jayakumar
Thank you Hoss. I imported the KEYS file using *gpg --import KEYS.txt*.
Then I did the *--verify* again. This time I get an output like this:

gpg: Signature made 08/06/12 19:52:21 Pacific Daylight Time using RSA key
ID 322
D7ECA
gpg: Good signature from Robert Muir (Code Signing Key) rm...@apache.org
*gpg: WARNING: This key is not certified with a trusted signature!*
gpg:  There is no indication that the signature belongs to the
owner.
Primary key fingerprint: 6661 9BA3 C030 DD55 3625  1303 817A E1DD 322D 7ECA

Is this acceptable ?

Thanks

On Wed, Sep 5, 2012 at 5:38 PM, Chris Hostetter hossman_luc...@fucit.org wrote:

 : I download solr 4.0 beta and the .asc file. I use gpg4win and type this
 in
 : the command line:
 :
 : gpg --verify file.zip file.asc
 :
 : I get a message like this:
 :
 : *gpg: Can't check signature: No public key*

 you can verify the asc sig file using the public KEYS file hosted on the
 main apache download site (do not trust asc or KEYS from a download
 mirror, that defeats the point)


 https://www.apache.org/dist/lucene/solr/KEYS



 -Hoss



Re: AW: AW: auto completion search with solr using NGrams in SOLR

2012-09-04 Thread Kiran Jayakumar
I wonder why. I had a similar use case and it works great for me. If you can
send a snapshot of the analysis for a sample string (say "hello world" for
indexing, with "hel" as a positive case and "wo" as a negative case for
querying), then we can see what's going on. Also, the debug query output
would be helpful.
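
For example (URL based on the handler mentioned later in this thread),
adding the debug parameter to the request shows how the query is parsed:

http://localhost:8080/test/suggest/?q=hel&debugQuery=on

The Analysis page in the Solr admin UI shows the same kind of detail for the
index-time and query-time analyzer chains.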


On Fri, Aug 31, 2012 at 10:28 PM, aniljayanti anil.jaya...@gmail.com wrote:

 Hi,

 Thanks,

 As I already mentioned in my earlier posts, I used KeywordTokenizerFactory.

 <fieldType name="edgytext" class="solr.TextField" positionIncrementGap="100"
     omitNorms="true">
   <analyzer type="index">
     <tokenizer class="solr.KeywordTokenizerFactory" />
     <filter class="solr.LowerCaseFilterFactory" />
     <filter class="solr.PatternReplaceFilterFactory" pattern="\s+"
         replacement=" " replace="all"/>
     <filter class="solr.EdgeNGramFilterFactory" minGramSize="1"
         maxGramSize="15" side="front" />
   </analyzer>
   <analyzer type="query">
     <tokenizer class="solr.KeywordTokenizerFactory" />
     <filter class="solr.LowerCaseFilterFactory" />
     <filter class="solr.PatternReplaceFilterFactory" pattern="\s+"
         replacement=" " replace="all"/>
   </analyzer>
 </fieldType>

 I am getting the same results.

 AnilJayanti




 --
 View this message in context:
 http://lucene.472066.n3.nabble.com/auto-completion-search-with-solr-using-NGrams-in-SOLR-tp3998559p4004871.html
 Sent from the Solr - User mailing list archive at Nabble.com.



Re: AW: AW: auto completion search with solr using NGrams in SOLR

2012-08-31 Thread Kiran Jayakumar
Try this:

<tokenizer class="solr.KeywordTokenizerFactory"/>

http://wiki.apache.org/solr/AnalyzersTokenizersTokenFilters#solr.KeywordTokenizerFactory


On Thu, Aug 30, 2012 at 9:07 PM, aniljayanti anil.jaya...@gmail.com wrote:

 Hi,

 thanks,

 I checked with the given changes and am getting the error below, saying
 that Solr does not allow an analyzer without a tokenizer.

 org.apache.solr.common.SolrException: analyzer without class or tokenizer & filter list at
 org.apache.solr.schema.IndexSchema.readAnalyzer(IndexSchema.java:914) at
 org.apache.solr.schema.IndexSchema.access$100(IndexSchema.java:62) at
 ...

 Can you tell me what to do?

 AnilJayanti



 --
 View this message in context:
 http://lucene.472066.n3.nabble.com/auto-completion-search-with-solr-using-NGrams-in-SOLR-tp3998559p4004605.html
 Sent from the Solr - User mailing list archive at Nabble.com.



Re: Query Time problem on Big Index Solr 3.5

2012-08-30 Thread Kiran Jayakumar
Have you tried sharding? Best practice is to shard when your index grows too
large and query times become slow.
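
As an illustration only (host names are made up), a sharded query is just the
normal query plus a shards parameter listing the cores to search:

http://host1:8983/solr/candidate/select?q=*:*&shards=host1:8983/solr/candidate,host2:8983/solr/candidate

Each shard holds a subset of the documents, so the per-shard indexes stay
smaller and the query work is done in parallel.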


On Tue, Aug 28, 2012 at 2:02 AM, mpcmarcos mpcmar...@gmail.com wrote:

 Hello,

 I have a problem. I'm working with Solr 3.5, with an index that has
 8,000,000 documents (13 GB); each document has a lot of fields. I include
 the schema at the bottom of the message for more information.

 The query time is very high: a simple query takes 300-1,000 ms, and a
 complex query up to 10,000 ms. I have a master and 6 slaves, which are
 synchronized every 10 minutes, and the index is always optimized.

 What can I do?
 - I think the cache system is working OK; when I run the same query twice,
 the query time decreases to 0 ms.


 Here is an example of a query; is there anything incorrect, or anything I
 can change?

 http://xxx:8893/solr/candidate/select/?q=+(IdCandidateStatus:2)+(IdCobranded:3)+(IdLocation1:12))+(LastLoginDate:[2011-08-26T00:00:00Z
 TO 2012-08-28T00:00:00Z])



 *Schema:*
 <field name="IdCandidate" type="slong" indexed="true" stored="true" required="true" />
 <field name="IdUser" type="slong" indexed="true" stored="true" required="true" />
 <field name="Email" type="string" indexed="true" stored="true" required="true" />
 <field name="Name" type="string" indexed="true" stored="true" required="true" />
 <field name="NameFormated" type="alphaOnlySort" indexed="true" stored="true"/>
 <field name="Surname" type="string" indexed="true" stored="true" required="true" />
 <field name="SurnameFormated" type="alphaOnlySort" indexed="true" stored="true"/>
 <field name="IdSex" type="string" indexed="true" stored="true" required="true" />
 <field name="IdWorkingHours" type="sint" indexed="true" stored="true" required="true" />
 <field name="IdContractWorkType" type="sint" indexed="true" stored="true" required="true" />
 <field name="IdLocation1" type="sint" indexed="true" stored="true" required="true" />
 <field name="IdLocation2" type="sint" indexed="true" stored="true" required="true" />
 <field name="Location2" type="string" indexed="true" stored="true" required="true" />
 <field name="IdLocation3" type="slong" indexed="true" stored="true" required="true" />
 <field name="IdLocation4" type="slong" indexed="true" stored="true" required="true" />
 <field name="Location4" type="string" indexed="true" stored="true" required="true" />
 <field name="IdLocation5" type="slong" indexed="true" stored="true" required="true" />
 <field name="Location5" type="string" indexed="true" stored="true" required="true" />
 <field name="IdRegion1" type="slong" indexed="true" stored="true" required="true" />
 <field name="Region1" type="string" indexed="true" stored="true" required="true" />
 <field name="IdRegion2" type="slong" indexed="true" stored="true" required="true" />
 <field name="Region2" type="string" indexed="true" stored="true" required="true" />
 <field name="LastLoginDate" type="tdate" indexed="true" stored="true" required="true" />
 <field name="BirthDate" type="tdate" indexed="true" stored="true" required="true" />
 <field name="InsertDate" type="tdate" indexed="true" stored="true" required="true" />
 <field name="ModifyDate" type="tdate" indexed="true" stored="true" required="true" />
 <field name="IdModifyRangeDate" type="sint" indexed="true" stored="true" required="true" />
 <field name="Age" type="sint" indexed="true" stored="true" required="true" />
 <field name="IdAgeRange" type="sint" indexed="true" stored="true" required="true" />
 <field name="Travel" type="sint" indexed="true" stored="true" required="true" />
 <field name="ChangeResidence" type="sint" indexed="true" stored="true" required="true" />
 <field name="IdEmployed" type="sint" indexed="true" stored="true" required="true" />
 <field name="SalaryMax" type="sdouble" indexed="true" stored="true" required="true" />
 <field name="SalaryMin" type="sdouble" indexed="true" stored="true" required="true" />
 <field name="IdSalaryRange" type="sint" indexed="true" stored="true" required="true" />
 <field name="IdPreferenceManagerialLevelMin" type="sint" indexed="true" stored="true" required="true" />
 <field name="IdPreferenceManagerialLevelMax" type="sint" indexed="true" stored="true" required="true" />
 <field name="IdStudie1Max" type="sint" indexed="true" stored="true" required="true" />
 <field name="IdCategory2Last" type="sint" indexed="true" stored="true" required="true" />
 <field name="Category2Last" type="string" indexed="true" stored="true" required="true" />
 <field name="Category2LastFormated" type="alphaOnlySort" indexed="true" stored="true" />
 <field name="IdCategory1Last" type="sint" indexed="true" stored="true" required="true" />
 <field name="IdExperienceTime" type="sint" indexed="true" stored="true" required="true" />
 <field name="IdExperienceRange" type="sint" indexed="true" stored="true" required="true" />
 <field name="IsDeficiency" type="string" indexed="true" stored="true" required="true" />
 <field name="IdStudie1" type="sint" indexed="true" stored="true" required="false" multiValued="true" />
 <field name="IdStudie2" type="sint" indexed="true" stored="true" required="false" multiValued="true" />
 <field name="IdStudie2Status" type="sint" indexed="true" stored="true" required="false" multiValued="true" />
 <field 

Static template column in DIH

2012-08-30 Thread Kiran Jayakumar
Hi everyone,

I have defined a template column like this:

<entity name="books" query="select * from books">
  <field name="Category" column="Category" template="Books"/>
</entity>

I don't have a field called Category in my query, but it is defined in
schema.xml. I'm expecting it to populate the Category field with the fixed
value "Books", but it doesn't work. What am I missing? Any help is much
appreciated.

Thanks


Re: Static template column in DIH

2012-08-30 Thread Kiran Jayakumar
Thank you sir, works like a charm !

On Thu, Aug 30, 2012 at 12:06 PM, Dyer, James
james.d...@ingramcontent.com wrote:

 You might just be missing <entity ... transformer="TemplateTransformer" />.

 See http://wiki.apache.org/solr/DataImportHandler#TemplateTransformer
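
 With that attribute added, the entity from your message would look
 something like:

   <entity name="books" query="select * from books"
           transformer="TemplateTransformer">
     <field name="Category" column="Category" template="Books"/>
   </entity>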

 James Dyer
 E-Commerce Systems
 Ingram Content Group
 (615) 213-4311


 -Original Message-
 From: Kiran Jayakumar [mailto:kiranjuni...@gmail.com]
 Sent: Thursday, August 30, 2012 1:54 PM
 To: solr-user@lucene.apache.org
 Subject: Static template column in DIH

 Hi everyone,

 I have defined a template column like this:

 <entity name="books" query="select * from books">
   <field name="Category" column="Category" template="Books"/>
 </entity>

 I don't have a field called Category in my query, but it is defined in
 schema.xml. I'm expecting it to populate the Category field with the fixed
 value "Books", but it doesn't work. What am I missing? Any help is much
 appreciated.

 Thanks




Re: AW: AW: auto completion search with solr using NGrams in SOLR

2012-08-29 Thread Kiran Jayakumar
You need this for both index and query:

<filter class="solr.PatternReplaceFilterFactory" pattern="\s+"
    replacement=" " replace="all"/>

http://wiki.apache.org/solr/AnalyzersTokenizersTokenFilters#solr.PatternReplaceFilterFactory


On Wed, Aug 29, 2012 at 4:55 AM, aniljayanti anil.jaya...@gmail.com wrote:

 Hi,

 Thanks for your reply.

 I do not know how to remove multiple white spaces using a regex in the
 search text. Can you share that with me?

 Thanks,

 AnilJayanti



 --
 View this message in context:
 http://lucene.472066.n3.nabble.com/auto-completion-search-with-solr-using-NGrams-in-SOLR-tp3998559p4003991.html
 Sent from the Solr - User mailing list archive at Nabble.com.



Problem with copyfield wild card

2012-08-29 Thread Kiran Jayakumar
Hi everyone,

I have several fields like Something_Misc_1, Something_Misc_2,
SomeOtherThing_Misc_1,... etc.

I have defined a copy field like this:

<copyField source="*Misc*" dest="SecondarySearch"/>

It doesn't capture the misc fields. Am I missing something? Any help is
much appreciated.

Thanks


Re: Problem with copyfield wild card

2012-08-29 Thread Kiran Jayakumar
Thank you Jack.


On Wed, Aug 29, 2012 at 12:10 PM, Jack Krupansky j...@basetechnology.com wrote:

 Alas, copyField does not support full glob. Just like dynamicField, you
 can only use * at the start or end of the source field name, but not both.
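
 So, as a workaround, one copyField per distinct prefix (with a single
 trailing wildcard, which is allowed) should behave as intended, e.g.:

   <copyField source="Something_Misc_*" dest="SecondarySearch"/>
   <copyField source="SomeOtherThing_Misc_*" dest="SecondarySearch"/>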

 -- Jack Krupansky

 -Original Message- From: Kiran Jayakumar
 Sent: Wednesday, August 29, 2012 1:41 PM
 To: solr-user@lucene.apache.org
 Subject: Problem with copyfield wild card


 Hi everyone,

 I have several fields like Something_Misc_1, Something_Misc_2,
 SomeOtherThing_Misc_1,... etc.

 I have defined a copy field like this:

 <copyField source="*Misc*" dest="SecondarySearch"/>

 It doesn't capture the misc fields. Am I missing something? Any help is
 much appreciated.

 Thanks



Re: AW: AW: auto completion search with solr using NGrams in SOLR

2012-08-28 Thread Kiran Jayakumar
Since you have <tokenizer class="solr.KeywordTokenizerFactory" />, during
indexing time it is going to split the text on white spaces and then apply
edge n-grams. If you remove this and maybe replace it with a simpler regex
which does basic clean-up, like removing multiple white spaces etc., then it
will not match on the beginning of non-first words. You can use the analysis
page to see how your text is transformed at index/query time.

On Tue, Aug 28, 2012 at 4:57 AM, aniljayanti anil.jaya...@gmail.com wrote:

 Hi ,

 Thanks for reply,

 Now it's working for me after changing it as shown below.

 <fieldType name="edgytext" class="solr.TextField" positionIncrementGap="100"
     omitNorms="true">
   <analyzer type="index">
     <tokenizer class="solr.KeywordTokenizerFactory" />
     <filter class="solr.LowerCaseFilterFactory" />
     <filter class="solr.RemoveDuplicatesTokenFilterFactory"/>
   </analyzer>
   <analyzer type="query">
     <tokenizer class="solr.KeywordTokenizerFactory" />
     <filter class="solr.LowerCaseFilterFactory" />
     <filter class="solr.EdgeNGramFilterFactory" minGramSize="1"
         maxGramSize="15" side="front" />
   </analyzer>
 </fieldType>


 <field name="title" type="edgytext" indexed="true" stored="true"
     omitNorms="true" omitTermFreqAndPositions="true"/>
 <field name="empname" type="edgytext" indexed="true" stored="true"
     omitNorms="true" omitTermFreqAndPositions="true" />

 <field name="autocomplete_text" type="edgytext" indexed="true"
     stored="false" multiValued="true" omitNorms="true"
     omitTermFreqAndPositions="false" />

 <copyField source="empname" dest="autocomplete_text"/>
 <copyField source="title" dest="autocomplete_text"/>
 URL : http://localhost:8080/test/suggest/?q=michael

 Result:

 <?xml version="1.0" encoding="UTF-8" ?>
 <response>
   <lst name="responseHeader">
     <int name="status">0</int>
     <int name="QTime">1</int>
   </lst>
   <result name="response" numFound="0" start="0" />
   <lst name="spellcheck">
     <lst name="suggestions">
       <lst name="michael">
         <int name="numFound">9</int>
         <int name="startOffset">0</int>
         <int name="endOffset">7</int>
         <arr name="suggestion">
           <str>michael bolton</str>
           <str>michael foret</str>
           <str>michael houser</str>
           <str>michael o'brien</str>
           <str>michael penn</str>
           <str>michael row your boat ashore</str>
           <str>michael tilson thomas</str>
           <str>michael w. smith</str>
           <str>michael w. smith featuring andrae crouch</str>
         </arr>
       </lst>
       <str name="collation">michael bolton</str>
     </lst>
   </lst>
 </response>

 It's working fine for me. When I'm searching with "michael f", I am getting
 a response like the one below. (http://localhost:8080/test/suggest/?q=michael f)

 Response:

 <?xml version="1.0" encoding="UTF-8" ?>
 <response>
   <lst name="responseHeader">
     <int name="status">0</int>
     <int name="QTime">1</int>
   </lst>
   <result name="response" numFound="0" start="0" />
   <lst name="spellcheck">
     <lst name="suggestions">
       <lst name="michael">
         <int name="numFound">9</int>
         <int name="startOffset">0</int>
         <int name="endOffset">7</int>
         <arr name="suggestion">
           <str>michael bolton</str>
           <str>michael foret</str>
           <str>michael houser</str>
           <str>michael o'brien</str>
           <str>michael penn</str>
           <str>michael row your boat ashore</str>
           <str>michael tilson thomas</str>
           <str>michael w. smith</str>
           <str>michael w. smith featuring andrae crouch</str>
         </arr>
       </lst>
       <lst name="f">
         <int name="numFound">10</int>
         <int name="startOffset">8</int>
         <int name="endOffset">9</int>
         <arr name="suggestion">
           <str>f**k the facts</str>
           <str>fairest lord jesus</str>
           <str>fatboy slim</str>
           <str>ffh</str>
           <str>fiona apple</str>
           <str>foo fighters</str>
           <str>frank sinatra</str>
           <str>frans bauer</str>
           <str>franz ferdinand</str>
           <str>françois rauber</str>
         </arr>
       </lst>
       <str name="collation">michael bolton f**k the facts</str>
     </lst>
   </lst>
 </response>

 So when I search with "michael f", I should only get "michael foret", but
 the data coming back starts with "f". Please help me with this.



 --
 View this message in context:
 http://lucene.472066.n3.nabble.com/auto-completion-search-with-solr-using-NGrams-in-SOLR-tp3998559p4003689.html
 Sent from the Solr - User mailing list archive at Nabble.com.



Re: Data Import Handler - Could not load driver - com.microsoft.sqlserver.jdbc.SQLServerDriver - SOLR 4 Beta

2012-08-27 Thread Kiran Jayakumar
You need to tell solr where your .jar file is located. Something like this
will help:

<lib dir="../../../dist/" regex="sqljdbc4\.jar" />



On Mon, Aug 27, 2012 at 6:19 AM, awb3667 adam.bu...@peopleclick.com wrote:

 Yes it does.

 I downloaded the file 'sqljdbc4.jar' from Microsoft. I have this same jar
 working with 3.6.1.

 Thanks.
 -Adam



 --
 View this message in context:
 http://lucene.472066.n3.nabble.com/Data-Import-Handler-Could-not-load-driver-com-microsoft-sqlserver-jdbc-SQLServerDriver-SOLR-4-Beta-tp4002902p4003479.html
 Sent from the Solr - User mailing list archive at Nabble.com.



Re: Is SpellCheck Case Sensitive in Solr3.6.1?

2012-08-24 Thread Kiran Jayakumar
You are missing query analyzer field type: add this line in your search
component.

<searchComponent name="spellcheck" class="solr.SpellCheckComponent">
  <str name="queryAnalyzerFieldType">spell</str>
  <lst name="spellchecker">
  ...


On Fri, Aug 24, 2012 at 5:31 AM, mechravi25 mechrav...@yahoo.co.in wrote:

 Hi,

 Im using solr 3.6.1 version now and I configured spellcheck by making
 following changes

 Solrconfig.xml:

 <searchComponent name="spellcheck" class="solr.SpellCheckComponent">
   <lst name="spellchecker">
     <str name="classname">solr.IndexBasedSpellChecker</str>
     <str name="spellcheckIndexDir">./spellchekerIndex</str>
     <str name="field">spell</str>
     <str name="buildOnCommit">true</str>
   </lst>
 </searchComponent>

 and added the following in the standard handler to include the spellcheck

 <arr name="last-components">
   <str>spellcheck</str>
 </arr>

 Schema.xml:

 <fieldType name="spell" class="solr.TextField" positionIncrementGap="100">
   <analyzer type="index">
     <charFilter class="solr.HTMLStripCharFilterFactory"/>
     <tokenizer class="solr.StandardTokenizerFactory"/>
     <filter class="solr.StopFilterFactory" ignoreCase="true"
         words="stopwords.txt"/>
     <filter class="solr.LowerCaseFilterFactory" />
     <filter class="solr.StandardFilterFactory"/>
     <filter class="solr.RemoveDuplicatesTokenFilterFactory"/>
   </analyzer>
   <analyzer type="query">
     <tokenizer class="solr.StandardTokenizerFactory"/>
     <filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt"
         ignoreCase="true" expand="true"/>
     <filter class="solr.LowerCaseFilterFactory" />
     <filter class="solr.StopFilterFactory" ignoreCase="true"
         words="stopwords.txt"/>
     <filter class="solr.StandardFilterFactory"/>
     <filter class="solr.RemoveDuplicatesTokenFilterFactory"/>
   </analyzer>
 </fieldType>

 <field name="spell" type="spell" indexed="true" stored="false"
     multiValued="true" />

 and used copyField to copy all the other fields' values to the spelling
 field.

 When I try to search for "list", it does not return any suggestions, but
 when I try to search for "List", it returns many suggestions (in both
 cases I'm getting the same search result count, and it is not zero).
 I also tried giving the field a different name than "spelling" and using
 that in solrconfig.xml, but it behaves the same way as above.

 Is spellcheck case sensitive? What I want to achieve is to get the same
 suggestions whether I enter "list" or "List".

 Am I missing anything? Can someone please guide me on this?

 Thanks



 --
 View this message in context:
 http://lucene.472066.n3.nabble.com/Is-SpellCheck-Case-Sensitive-in-Solr3-6-1-tp4003074.html
 Sent from the Solr - User mailing list archive at Nabble.com.



Re: Querying top n of each facet value

2012-08-23 Thread Kiran Jayakumar
Thank you Tanguy. This seems to work:

group = true
group.field = Category
group.limit = 5

http://wiki.apache.org/solr/FieldCollapsing

group.limit [number]
The number of results (documents) to return for each group. Defaults to 1.
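
Putting it together, a request like this (handler path and query value are
just an example) returns up to 5 documents per Category group:

/select?q=mich*&group=true&group.field=Category&group.limit=5

The rows parameter then controls how many groups come back in the response.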

On Thu, Aug 23, 2012 at 1:33 AM, Tanguy Moal tanguy.m...@gmail.com wrote:

 Hello Kiran,

 I think you can try turning grouping on and asking Solr to group on the
 Category field.

 Nevertheless, this will *not* ensure that groups are returned in
 facet-count order, nor will it ensure the mincount per group.

 Hope this helps,

 --
 Tanguy

 2012/8/23 Kiran Jayakumar kiranjuni...@gmail.com

  Hi everyone,
 
  I am building an autocomplete feature which facets by a field called
  Category. I want to return a minimum number of documents per facet value
  (say min=1 & max=5).
 
  The facet output is something like
 
  Category
  A: 500
  B: 10
  C: 2
 
  By default, it is returning 10 documents of category A.
 
  I want it to return a total of 10 documents, with at least 1 document for
  each facet value. Is it possible to accomplish that with a single query?
 
  Thanks