Rounding date indexing to minute

2014-04-06 Thread Darniz
Hello

Can someone please tell me how to make Solr store dates only to minute precision? I am
having issues with date range query performance, and I read in the forums that reducing
date precision makes the queries faster.

As of now it stores the date down to the second:
<date name="liveDate">2014-03-11T07:00:00Z</date>

I am only concerned with minute granularity. I am also using solr.TrieDateField:
<fieldType name="liveDateType" class="solr.TrieDateField"
           precisionStep="8" sortMissingLast="true" omitNorms="true"/>
<field name="liveDate" type="liveDateType" indexed="true" stored="true"/>

Is there a provision for this?
Please let me know.

thanks
darniz






--
View this message in context: 
http://lucene.472066.n3.nabble.com/Rounding-date-indexing-to-minute-tp4129482.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: Rounding date indexing to minute

2014-04-06 Thread Darniz
Just to clarify: when people mention rounding a date to the minute, do they mean
storing the seconds as 00?

In other words, there is no such thing as storing the date in the format below, or am I wrong?
<date name="liveDate">2014-03-11T07:00Z</date>

Dates are always stored in the format below, and by rounding people mean storing
the seconds as 00 so that there are fewer terms:
<date name="liveDate">yyyy-MM-ddThh:mm:ssZ</date>



--
View this message in context: 
http://lucene.472066.n3.nabble.com/Rounding-date-indexing-to-minute-tp4129482p4129483.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: Rounding date indexing to minute

2014-04-06 Thread Alexandre Rafalovitch
Have you tried date math formulas? Don't need to round up what's stored,
just how you query and cache.
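For example, a sketch against the liveDate field from the question: leave the stored
value at full precision and round only the query endpoints with date math, e.g.

fq=liveDate:[NOW/MINUTE-1HOUR TO NOW/MINUTE]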

Regards,
  Alex
On 06/04/2014 2:45 pm, Darniz rnizamud...@edmunds.com wrote:

 Hello

 Can someone please tell me how to make Solr store dates only to minute precision? I am
 having issues with date range query performance, and I read in the forums that reducing
 date precision makes the queries faster.

 As of now it stores the date down to the second:
 <date name="liveDate">2014-03-11T07:00:00Z</date>

 I am only concerned with minute granularity. I am also using solr.TrieDateField:
 <fieldType name="liveDateType" class="solr.TrieDateField"
            precisionStep="8" sortMissingLast="true" omitNorms="true"/>
 <field name="liveDate" type="liveDateType" indexed="true" stored="true"/>

 Is there a provision for this?
 Please let me know

 thanks
 darniz






 --
 View this message in context:
 http://lucene.472066.n3.nabble.com/Rounding-date-indexing-to-minute-tp4129482.html
 Sent from the Solr - User mailing list archive at Nabble.com.



Re: Strange behavior of edismax and mm=0 with long queries (bug?)

2014-04-06 Thread Nils Kaiser
Actually I found out why... I had "and" as a lowercase word in my queries, and the
checkbox does not seem to work in the admin UI.
Adding lowercaseOperators=false made the queries work.
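
For reference, the full set of request parameters this amounts to might look like the
sketch below (the qf/pf fields are guessed from the "text" field visible in the debug
output; the query is the collated comments):

defType=edismax
q=I love Henry so much. It is hard to tear your eyes away from Maria ...
qf=text
pf2=text
pf3=text
mm=1
lowercaseOperators=false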


2014-04-04 18:10 GMT+02:00 Nils Kaiser m...@nils-kaiser.de:

 Hey,

 I am currently using solr to recognize songs and people from a list of
 user comments. My index stores the titles of the songs. At the moment my
 application builds word ngrams and fires a search with that query, which
 works well but is quite inefficient.

 So my thought was to simply use the collated comments as query. So it is a
 case where the query is much longer. I need to use mm=0 or mm=1.

 My plan was to use edismax as the pf2 and pf3 parameters should work well
 for my usecase.

 However when using longer queries, I get a strange behavior which can be
 seen in debugQuery.

 Here is an example:

 Collated Comments (used as query)

 I love Henry so much. It is hard to tear your eyes away from Maria, but
 watch just his feet. You'll be amazed.
 sometimes pure skill can will a comp, sometimes pure joy can win... put
 them both together and there is no competition
 This video clip makes me smile.
 Pure joy!
 so good!
 Who's the person that gave this a thumbs down?!? This is one of the best
 routines I've ever seen. Period. And it's a competitionl! How is that
 possible? They're so good it boggles my mind.
 It's gorgeous. Flawless victory.
 Great number! Does anybody know the name of the piece?
 I believe it's called Sunny side of the street
 Maria is like, the best 'follow' I've ever seen. She's so amazing.
 Thanks so much Johnathan!

 Song name in Index
 Louis Armstrong - Sunny Side of The Street

 parsedquery_toString:
 +(((text:I) (text:love) (text:Henry) (text:so) (text:much.) (text:It)
 (text:is) (text:hard) (text:to) (text:tear) (text:your) (text:eyes)
 (text:away) (text:from) (text:Maria,) (text:but) (text:watch) (text:just)
 (text:his) (text:feet.) (text:You'll) (text:be) (text:amazed.)
 (text:sometimes) (text:pure) (text:skill) (text:can) (text:will) (text:a)
 (text:comp,) (text:sometimes) (text:pure) (text:joy) (text:can)
 (text:win...) (text:put) (text:them) (text:both) +(text:together)
 +(text:there) (text:is) (text:no) (text:competition) (text:This)
 (text:video) (text:clip) (text:makes) (text:me) (text:smile.) (text:Pure)
 (text:joy!) (text:so) (text:good!) (text:Who's) (text:the) (text:person)
 (text:that) (text:gave) (text:this) (text:a) (text:thumbs) (text:down?!?)
 (text:This) (text:is) (text:one) (text:of) (text:the) (text:best)
 (text:routines) (text:I've) (text:ever) (text:seen.) +(text:Period.)
 +(text:it's) (text:a) (text:competitionl!) (text:How) (text:is) (text:that)
 (text:possible?) (text:They're) (text:so) (text:good) (text:it)
 (text:boggles) (text:my) (text:mind.) (text:It's) (text:gorgeous.)
 (text:Flawless) (text:victory.) (text:Great) (text:number!) (text:Does)
 (text:anybody) (text:know) (text:the) (text:name) (text:of) (text:the)
 (text:piece?) (text:I) (text:believe) (text:it's) (text:called)
 (text:Sunny) (text:side) (text:of) (text:the) (text:street) (text:Maria)
 (text:is) (text:like,) (text:the) (text:best) (text:'follow') (text:I've)
 (text:ever) (text:seen.) (text:She's) (text:so) (text:amazing.)
 (text:Thanks) (text:so) (text:much) (text:Johnathan!))~1)/str

 This query generates 0 results. The reason is that it expects the terms
 "together", "there", "Period.", and "it's" to be part of the document (see the
 parsed query above: all other terms are optional, but those terms are must clauses).

 Is there any reason for this behavior? If I use shorter queries it works
 flawlessly and returns the document.

 I've appended the whole query.

 Best,

 Nils



Re: Anyone going to ApacheCon in Denver next week?

2014-04-06 Thread Siegfried Goeschl
Hi folks,

I’m already here and would love to join :-)

Cheers,

Siegfried Goeschl


On 05 Apr 2014, at 20:43, Doug Turnbull dturnb...@opensourceconnections.com 
wrote:

 I'll be there. I'd love to meet up. Let me know!
 
 Sent from my Windows Phone
 From: William Bell
 Sent: 4/5/2014 10:40 PM
 To: solr-user@lucene.apache.org
 Subject: Anyone going to ApacheCon in Denver next week?
 Thoughts on getting together for breakfast? a little Solr meet up?
 
 
 
 -- 
 Bill Bell
 billnb...@gmail.com
 cell 720-256-8076



Re: Combining eDismax and SpellChecker

2014-04-06 Thread Ahmet Arslan
Hi,

I would re-run the corrected query client side.  In my opinion, not all things 
must be done inside solr.
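
As a rough sketch of that flow (the query text is a placeholder, and this assumes the
spellcheck search component is already attached to your edismax request handler):

1st request: q=memry card&defType=edismax&spellcheck=true&spellcheck.collate=true
   -> read the collation ("memory card") from the spellcheck section of the response
2nd request: q=memory card&defType=edismax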

Ahmet
On Sunday, April 6, 2014 1:00 AM, simpleliving...@gmail.com 
simpleliving...@gmail.com wrote:
 
Yes, I saw that earlier in one of your other postings. Is it the case that we
cannot use the SpellChecker with a parser like edismax by making a
configuration change, without having to go through this commercial product?

Sent from my HTC

- Reply message -
From: Ahmet Arslan iori...@yahoo.com
To: solr-user@lucene.apache.org solr-user@lucene.apache.org
Subject: Combining eDismax and SpellChecker
Date: Sat, Apr 5, 2014 12:11 PM

There is one commercial solution 
http://www.sematext.com/products/dym-researcher/index.html



On Saturday, April 5, 2014 4:07 PM, S.L simpleliving...@gmail.com wrote:
Hi All,

I want to suggest the correct phrase if a typo is made while searching, and
then search it using the eDismax parser (pf, pf2, pf3); if no typo is made,
then search using the eDismax parser alone.

Is there a way I can combine these two components? I have seen examples
for eDismax and also for SpellChecker, but nothing that combines the two
together.

Can you please let me know?

Thanks.

Re: Rounding date indexing to minute

2014-04-06 Thread Erick Erickson
Is this an XY problem? You say:

bq:  ...having issues with date range query performance

What are you trying to do anyway? Add an fq clause?
Facet by range? Details matter.

If you're using filter queries in conjunction with NOW, you might
be running into this:

http://searchhub.org/2012/02/23/date-math-now-and-filter-queries/
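
The short version of that article, sketched against the liveDate field from this thread:
a filter built on raw NOW is different on every request and never reuses the filter
cache, while a rounded NOW is reusable:

fq=liveDate:[NOW-7DAYS TO NOW]          (new filterCache entry on every request)
fq=liveDate:[NOW/DAY-7DAYS TO NOW/DAY]  (one entry reused for the whole day)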

On Sun, Apr 6, 2014 at 7:29 AM, Alexandre Rafalovitch
arafa...@gmail.com wrote:
 Have you tried date math formulas? Don't need to round up what's stored,
 just how you query and cache.

 Regards,
   Alex
 On 06/04/2014 2:45 pm, Darniz rnizamud...@edmunds.com wrote:

 Hello

 Can someone please tell me how to make Solr store dates only to minute precision? I am
 having issues with date range query performance, and I read in the forums that reducing
 date precision makes the queries faster.

 As of now it stores the date down to the second:
 <date name="liveDate">2014-03-11T07:00:00Z</date>

 I am only concerned with minute granularity. I am also using solr.TrieDateField:
 <fieldType name="liveDateType" class="solr.TrieDateField"
            precisionStep="8" sortMissingLast="true" omitNorms="true"/>
 <field name="liveDate" type="liveDateType" indexed="true" stored="true"/>

 Is there a provision for this?
 Please let me know

 thanks
 darniz






 --
 View this message in context:
 http://lucene.472066.n3.nabble.com/Rounding-date-indexing-to-minute-tp4129482.html
 Sent from the Solr - User mailing list archive at Nabble.com.



Re: Distributed tracing for Solr via adding HTTP headers?

2014-04-06 Thread Alexandre Rafalovitch
On second thought,

If you are already managing to pass the value using the request
parameters, what stops you from just having a servlet filter looking
for that parameter and assigning it directly to the MDC context?

Regards,
   Alex.
Personal website: http://www.outerthoughts.com/
Current project: http://www.solr-start.com/ - Accelerating your Solr proficiency


On Sat, Apr 5, 2014 at 7:45 AM, Alexandre Rafalovitch
arafa...@gmail.com wrote:
 I like the idea. No comments about implementation, leave it to others.

 But if it is done, maybe somebody very familiar with logging can also
 review Solr's current logging config. I suspect it is not optimized
 for troubleshooting at this point.

 Regards,
Alex.
 Personal website: http://www.outerthoughts.com/
 Current project: http://www.solr-start.com/ - Accelerating your Solr 
 proficiency


 On Sat, Apr 5, 2014 at 3:16 AM, Gregg Donovan gregg...@gmail.com wrote:
 We have some metadata -- e.g. a request UUID -- that we log to every log
 line using Log4J's MDC [1]. The UUID logging allows us to connect any log
 lines we have for a given request across servers. Sort of like Zipkin [2].

 Currently we're using EmbeddedSolrServer without sharding, so adding the
 UUID is fairly simple, since everything is in one process and one thread.
 But, we're testing a sharded HTTP implementation and running into some
 difficulties getting this data passed around in a way that lets us trace
 all log lines generated by a request to its UUID.



Re: Does sorting skip everything having to do with relevancy?

2014-04-06 Thread Mikhail Khludnev
Argh... It seems like Function Queries (obviously) never throw an
exception.

I had to write my own which always throws.
Here is the proof that boost is lazy:
https://gist.github.com/m-khl/10010541




On Sun, Apr 6, 2014 at 12:54 AM, Shawn Heisey s...@elyograg.org wrote:

 On 4/5/2014 1:21 PM, Mikhail Khludnev wrote:
  I suppose e yields syntax error. Therefore, this case doesn't prove
  anything yet.
  Haven't you tried sqrt(-1) or log(-1) ?

 Using boost=sqrt(-1) is error-free whether I include the sort parameter
 or not.  That seems like a bug.

 Thanks,
 Shawn




-- 
Sincerely yours
Mikhail Khludnev
Principal Engineer,
Grid Dynamics

http://www.griddynamics.com
 mkhlud...@griddynamics.com


ngramfilter minGramSize problem

2014-04-06 Thread Andreas Owen
I have a fieldtype that uses the NGramFilter while indexing. Is there a
setting that can force the NGramFilter to index words smaller than the
minGramSize? Mine is set to 3 and the search won't find words that are only
1 or 2 chars long. I would like to avoid setting minGramSize=1 because the
results would be too diverse.


fieldtype:

<fieldType name="text_de" class="solr.TextField" positionIncrementGap="100">
  <analyzer type="index">
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <!-- <filter class="solr.WordDelimiterFilterFactory" types="at-under-alpha.txt"/> -->
    <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_de.txt"
            format="snowball" enablePositionIncrements="true"/> <!-- remove common words -->
    <filter class="solr.GermanNormalizationFilterFactory"/>
    <filter class="solr.SnowballPorterFilterFactory" language="German"/> <!-- remove noun/adjective inflections like plural endings -->
    <filter class="solr.WordDelimiterFilterFactory" generateWordParts="1" generateNumberParts="1"
            catenateWords="1" catenateNumbers="1" catenateAll="0" splitOnCaseChange="1"/>
    <filter class="solr.NGramFilterFactory" minGramSize="3" maxGramSize="50"/>
  </analyzer>
  <analyzer type="query">
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_de.txt"
            format="snowball" enablePositionIncrements="true"/> <!-- remove common words -->
    <filter class="solr.GermanNormalizationFilterFactory"/>
    <filter class="solr.SnowballPorterFilterFactory" language="German"/>
    <filter class="solr.WordDelimiterFilterFactory" generateWordParts="1" generateNumberParts="1"
            catenateWords="1" catenateNumbers="1" catenateAll="0" splitOnCaseChange="1"/>
  </analyzer>
</fieldType>


Re: ngramfilter minGramSize problem

2014-04-06 Thread Furkan KAMACI
Hi Andreas;

I've implemented a similar feature in EdgeNGramFilter because some Solr
users wanted it. My patch is here:
https://issues.apache.org/jira/browse/SOLR-5332 However, if you read the
conversation below the issue, you will realize that you can do it another way.
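
One common alternative along those lines, as a sketch (not necessarily what the JIRA
discussion suggests; the field names and the text_de_plain type are made up, where
text_de_plain would be the same analyzer chain minus the NGramFilterFactory): keep a
copy of the content in a field without ngrams and search both, so 1- and 2-character
words still match exactly:

<field name="content"       type="text_de"       indexed="true" stored="true"/>
<field name="content_plain" type="text_de_plain" indexed="true" stored="false"/>
<copyField source="content" dest="content_plain"/>

then query both fields, e.g. with edismax:
defType=edismax&qf=content content_plain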

Thanks;
Furkan KAMACI


2014-04-06 23:24 GMT+03:00 Andreas Owen ao...@swissonline.ch:

 I have a fieldtype that uses the NGramFilter while indexing. Is there a
 setting that can force the NGramFilter to index words smaller than the
 minGramSize? Mine is set to 3 and the search won't find words that are only
 1 or 2 chars long. I would like to avoid setting minGramSize=1 because the
 results would be too diverse.

 fieldtype:

 <fieldType name="text_de" class="solr.TextField" positionIncrementGap="100">
   <analyzer type="index">
     <tokenizer class="solr.StandardTokenizerFactory"/>
     <filter class="solr.LowerCaseFilterFactory"/>
     <!-- <filter class="solr.WordDelimiterFilterFactory" types="at-under-alpha.txt"/> -->
     <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_de.txt"
             format="snowball" enablePositionIncrements="true"/> <!-- remove common words -->
     <filter class="solr.GermanNormalizationFilterFactory"/>
     <filter class="solr.SnowballPorterFilterFactory" language="German"/> <!-- remove noun/adjective inflections like plural endings -->
     <filter class="solr.WordDelimiterFilterFactory" generateWordParts="1" generateNumberParts="1"
             catenateWords="1" catenateNumbers="1" catenateAll="0" splitOnCaseChange="1"/>
     <filter class="solr.NGramFilterFactory" minGramSize="3" maxGramSize="50"/>
   </analyzer>
   <analyzer type="query">
     <tokenizer class="solr.WhitespaceTokenizerFactory"/>
     <filter class="solr.LowerCaseFilterFactory"/>
     <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_de.txt"
             format="snowball" enablePositionIncrements="true"/> <!-- remove common words -->
     <filter class="solr.GermanNormalizationFilterFactory"/>
     <filter class="solr.SnowballPorterFilterFactory" language="German"/>
     <filter class="solr.WordDelimiterFilterFactory" generateWordParts="1" generateNumberParts="1"
             catenateWords="1" catenateNumbers="1" catenateAll="0" splitOnCaseChange="1"/>
   </analyzer>
 </fieldType>



Re: Rounding date indexing to minute

2014-04-06 Thread Jack Krupansky
If indeed you do wish to round dates at index time, there is an update
request processor for that in my book (look up "round date" in the
index). It lets you specify a unit of rounding, such as minute, hour, day,
month, year, etc.

It is actually a JavaScript script that uses the Solr stateless script
update processor.
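
For reference, a rough sketch of how such a chain is typically wired up in
solrconfig.xml (the chain and script names below are made up; the actual script
comes with the book):

<updateRequestProcessorChain name="round-liveDate">
  <processor class="solr.StatelessScriptUpdateProcessorFactory">
    <str name="script">round-date.js</str>
  </processor>
  <processor class="solr.LogUpdateProcessorFactory"/>
  <processor class="solr.RunUpdateProcessorFactory"/>
</updateRequestProcessorChain>

The chain is then selected per update request with update.chain=round-liveDate.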


-- Jack Krupansky

-Original Message- 
From: Alexandre Rafalovitch

Sent: Sunday, April 6, 2014 8:29 AM
To: solr-user@lucene.apache.org
Subject: Re: Rounding date indexing to minute

Have you tried date math formulas? Don't need to round up what's stored,
just how you query and cache.

Regards,
 Alex
On 06/04/2014 2:45 pm, Darniz rnizamud...@edmunds.com wrote:


Hello

Can someone please tell me how to make Solr store dates only to minute precision? I am
having issues with date range query performance, and I read in the forums that reducing
date precision makes the queries faster.

As of now it stores the date down to the second:
<date name="liveDate">2014-03-11T07:00:00Z</date>

I am only concerned with minute granularity. I am also using solr.TrieDateField:
<fieldType name="liveDateType" class="solr.TrieDateField"
           precisionStep="8" sortMissingLast="true" omitNorms="true"/>
<field name="liveDate" type="liveDateType" indexed="true" stored="true"/>

Is there a provision for this?
Please let me know

thanks
darniz






--
View this message in context:
http://lucene.472066.n3.nabble.com/Rounding-date-indexing-to-minute-tp4129482.html
Sent from the Solr - User mailing list archive at Nabble.com.





Re: Anyone going to ApacheCon in Denver next week?

2014-04-06 Thread Jack Krupansky

I'm here as well, representing DataStax for Apache Cassandra and Solr.

The reception on Tuesday evening is for committers only, so that might be a 
good time to meet up, maybe over dinner. Of course, I'm sure some of us will 
run into each other at the main conference reception on Monday evening.


-- Jack Krupansky

-Original Message- 
From: Siegfried Goeschl

Sent: Sunday, April 6, 2014 9:12 AM
To: solr-user@lucene.apache.org
Subject: Re: Anyone going to ApacheCon in Denver next week?

Hi folks,

I’m already here and would love to join :-)

Cheers,

Siegfried Goeschl


On 05 Apr 2014, at 20:43, Doug Turnbull 
dturnb...@opensourceconnections.com wrote:



I'll be there. I'd love to meet up. Let me know!

Sent from my Windows Phone
From: William Bell
Sent: 4/5/2014 10:40 PM
To: solr-user@lucene.apache.org
Subject: Anyone going to ApacheCon in Denver next week?
Thoughts on getting together for breakfast? a little Solr meet up?



--
Bill Bell
billnb...@gmail.com
cell 720-256-8076 




Re: ngramfilter minGramSize problem

2014-04-06 Thread Andreas Owen
I thought I could use <filter class="solr.LengthFilterFactory" min="1" max="2"/>
to index and search words that are only 1 or 2 chars long. It seems to work,
but I have to test it some more.



On Sun, 06 Apr 2014 22:24:20 +0200, Andreas Owen ao...@swissonline.ch  
wrote:


I have a fieldtype that uses the NGramFilter while indexing. Is there a
setting that can force the NGramFilter to index words smaller than the
minGramSize? Mine is set to 3 and the search won't find words that are only
1 or 2 chars long. I would like to avoid setting minGramSize=1 because the
results would be too diverse.


fieldtype:

<fieldType name="text_de" class="solr.TextField" positionIncrementGap="100">
  <analyzer type="index">
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <!-- <filter class="solr.WordDelimiterFilterFactory" types="at-under-alpha.txt"/> -->
    <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_de.txt"
            format="snowball" enablePositionIncrements="true"/> <!-- remove common words -->
    <filter class="solr.GermanNormalizationFilterFactory"/>
    <filter class="solr.SnowballPorterFilterFactory" language="German"/> <!-- remove noun/adjective inflections like plural endings -->
    <filter class="solr.WordDelimiterFilterFactory" generateWordParts="1" generateNumberParts="1"
            catenateWords="1" catenateNumbers="1" catenateAll="0" splitOnCaseChange="1"/>
    <filter class="solr.NGramFilterFactory" minGramSize="3" maxGramSize="50"/>
  </analyzer>
  <analyzer type="query">
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_de.txt"
            format="snowball" enablePositionIncrements="true"/> <!-- remove common words -->
    <filter class="solr.GermanNormalizationFilterFactory"/>
    <filter class="solr.SnowballPorterFilterFactory" language="German"/>
    <filter class="solr.WordDelimiterFilterFactory" generateWordParts="1" generateNumberParts="1"
            catenateWords="1" catenateNumbers="1" catenateAll="0" splitOnCaseChange="1"/>
  </analyzer>
</fieldType>



--
Using Opera's mail client: http://www.opera.com/mail/


Re: ngramfilter minGramSize problem

2014-04-06 Thread Furkan KAMACI
Correction: My patch is at SOLR-5152
On 7 Apr 2014 01:05, Andreas Owen ao...@swissonline.ch wrote:

 I thought I could use <filter class="solr.LengthFilterFactory" min="1"
 max="2"/> to index and search words that are only 1 or 2 chars long. It
 seems to work, but I have to test it some more.


 On Sun, 06 Apr 2014 22:24:20 +0200, Andreas Owen ao...@swissonline.ch
 wrote:

 I have a fieldtype that uses the NGramFilter while indexing. Is there a
 setting that can force the NGramFilter to index words smaller than the
 minGramSize? Mine is set to 3 and the search won't find words that are only
 1 or 2 chars long. I would like to avoid setting minGramSize=1 because the
 results would be too diverse.

 fieldtype:

 <fieldType name="text_de" class="solr.TextField" positionIncrementGap="100">
   <analyzer type="index">
     <tokenizer class="solr.StandardTokenizerFactory"/>
     <filter class="solr.LowerCaseFilterFactory"/>
     <!-- <filter class="solr.WordDelimiterFilterFactory" types="at-under-alpha.txt"/> -->
     <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_de.txt"
             format="snowball" enablePositionIncrements="true"/> <!-- remove common words -->
     <filter class="solr.GermanNormalizationFilterFactory"/>
     <filter class="solr.SnowballPorterFilterFactory" language="German"/> <!-- remove noun/adjective inflections like plural endings -->
     <filter class="solr.WordDelimiterFilterFactory" generateWordParts="1" generateNumberParts="1"
             catenateWords="1" catenateNumbers="1" catenateAll="0" splitOnCaseChange="1"/>
     <filter class="solr.NGramFilterFactory" minGramSize="3" maxGramSize="50"/>
   </analyzer>
   <analyzer type="query">
     <tokenizer class="solr.WhitespaceTokenizerFactory"/>
     <filter class="solr.LowerCaseFilterFactory"/>
     <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_de.txt"
             format="snowball" enablePositionIncrements="true"/> <!-- remove common words -->
     <filter class="solr.GermanNormalizationFilterFactory"/>
     <filter class="solr.SnowballPorterFilterFactory" language="German"/>
     <filter class="solr.WordDelimiterFilterFactory" generateWordParts="1" generateNumberParts="1"
             catenateWords="1" catenateNumbers="1" catenateAll="0" splitOnCaseChange="1"/>
   </analyzer>
 </fieldType>



 --
 Using Opera's mail client: http://www.opera.com/mail/



Commit Within and /update/extract handler

2014-04-06 Thread Jamie Johnson
I'm running Solr 4.6.0 and am noticing that commitWithin doesn't seem to
work when I am using the /update/extract request handler. It looks like a
commit is happening from the logs, but the documents don't become available
for search until I do a commit manually. Could this be some type of
configuration issue?
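
For context, the kind of request in question would look roughly like this (the core
name, document id, and file are placeholders):

curl "http://localhost:8983/solr/collection1/update/extract?literal.id=doc1&commitWithin=10000" -F "myfile=@some.pdf"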


Re: How to reduce the search speed of solrcloud

2014-04-06 Thread Sathya
Hi,

I used this link to set up a SolrCloud:
http://myjeeva.com/solrcloud-cluster-single-collection-deployment.html
And I use 5 different machines to set up this cloud. I use unique IDs.


On Sat, Apr 5, 2014 at 6:30 AM, Alexandre Rafalovitch [via Lucene] 
ml-node+s472066n4129333...@n3.nabble.com wrote:

 And 50 million records of 3 fields each should not become 50Gb of
 data. Something smells wrong there. Do you have unique IDs setup?

 Regards,
Alex.
 Personal website: http://www.outerthoughts.com/
 Current project: http://www.solr-start.com/ - Accelerating your Solr
 proficiency


  On Sat, Apr 5, 2014 at 12:48 AM, Anshum Gupta [hidden email] wrote:

  I am not sure if you set up your SolrCloud right. Can you also provide me
  with the version of Solr that you're running?
  Also, could you tell me how you set up your SolrCloud cluster?
  Are the times consistent? Is this the only collection on the cluster?

  Also, if I am getting it right, you have 15 ZKs running. Correct me if I'm
  wrong, but if I'm not, you don't need that kind of a ZK setup.
 
 
  On Fri, Apr 4, 2014 at 9:39 AM, Sathya [hidden email] wrote:
 
  Hi shawn,
 
  I have indexed 50 million documents across 5 servers. 3 servers have 8 GB RAM,
  one has 24 GB and another one has 64 GB RAM. I allocated 4 GB of RAM to Solr on
  each machine. I am using SolrCloud. My total index size is 50 GB including all 5
  servers. Each server has 3 ZooKeepers. I still haven't checked the OS disk
  cache and heap memory. I will check and let you know, Shawn. If anything,
  please let me know.

  Thank you, Shawn.
 
  On Friday, April 4, 2014, Shawn Heisey-4 [via Lucene] [hidden email] wrote:
   On 4/4/2014 1:31 AM, Sathya wrote:
   Hi All,
  
    Hi All, I am new to Solr. And I don't know how to increase the search speed
    of SolrCloud. I have indexed nearly 4 GB of data. When I am searching a
    document using Java with SolrJ, Solr takes more than 6 seconds to return a
    query result. Can anyone please help me reduce the search query time to
    less than 500 ms? I have allocated 4 GB of RAM for Solr. Please let me know
    if you need further details about the SolrCloud config.
  
   How much total RAM do you have on the system, and how much total
 index
   data is on that system (adding up all the Solr cores)?  You've
 already
   said that you have allocated 4GB of RAM for Solr.
  
   Later you said you had 50 million documents, and then you showed us a
   URL that looks like SolrCloud.
  
   I suspect that you don't have enough RAM left over to cache your
 index
   effectively -- the OS Disk Cache is too small.
  
   http://wiki.apache.org/solr/SolrPerformanceProblems
  
   Another possible problem, also discussed on that page, is that your
 Java
   heap is too small.
  
   Thanks,
   Shawn
  
  
  
   
 
 
 
 
  --
  View this message in context:
 
 http://lucene.472066.n3.nabble.com/How-to-reduce-the-search-speed-of-solrcloud-tp4129067p4129173.html
  Sent from the Solr - User mailing list archive at Nabble.com.
 
 
 
 
  --
 
  Anshum Gupta
  http://www.anshumgupta.net







--
View this message in context: 
http://lucene.472066.n3.nabble.com/How-to-reduce-the-search-speed-of-solrcloud-tp4129067p4129564.html
Sent from the Solr - User mailing list archive at Nabble.com.