Re: Soft commit and reading data just after the commit

2016-12-19 Thread Ere Maijala
Hi, so, the app already has a database connection because it updates the READ flag when the user clicks an entry, right? If you only need the flag for display purposes, it sounds like it would make sense to also fetch it directly from the database when displaying the listing. Of course if

Re: Soft commit and reading data just after the commit

2016-12-19 Thread Walter Underwood
You probably need a database instead of a search engine. What requirement makes you want to do this with a search engine? wunder Walter Underwood wun...@wunderwood.org http://observer.wunderwood.org/ (my blog) > On Dec 19, 2016, at 6:34 PM, Lasitha Wattaladeniya wrote: >

Re: Soft commit and reading data just after the commit

2016-12-19 Thread Lasitha Wattaladeniya
Hi Hendrik, Thanks for your input. Previously I was using the hard commit (SolrClient.commit()) but then I got some error when there were concurrent real-time index requests from my app. The error was "Exceeded limit of maxWarmingSearchers=2, try again later", then I changed the code to use only
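For reference, a minimal SolrJ sketch of the difference between the two commit styles being discussed; the client URL and the collection name "mycollection" are placeholders, not details from this thread:

    import org.apache.solr.client.solrj.SolrClient;
    import org.apache.solr.client.solrj.impl.HttpSolrClient;

    public class CommitStyles {
        public static void main(String[] args) throws Exception {
            try (SolrClient client =
                     new HttpSolrClient.Builder("http://localhost:8983/solr").build()) {

                // Hard commit (the original approach): flushes segments to disk and
                // opens a new searcher on every call, which is costly per request.
                client.commit("mycollection");

                // Soft commit via commit(collection, waitFlush, waitSearcher, softCommit):
                // makes new documents searchable without the full flush to stable storage.
                client.commit("mycollection", true, true, true);
            }
        }
    }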

Re: Soft commit and reading data just after the commit

2016-12-19 Thread Lasitha Wattaladeniya
Hi Shawn, Thanks for your well-detailed explanation. Now I understand that I won't be able to achieve the 100 ms soft commit timeout with my hardware setup. However, let's say someone has a requirement as below (quoted from my previous mail): *Requirement* is, we are showing a list of entries on a

Re: Stop Solr Node (in distress)?

2016-12-19 Thread Erick Erickson
The first question is _why_ is your disk full? Older versions of Solr could, for instance, accumulate solr console log files forever. If that's the case, just stop the Solr instance on that node, remove the Solr log files, fix the log4j.properties file to not append to console forever and you're

Re: Solr on HDFS: Streaming API performance tuning

2016-12-19 Thread Joel Bernstein
I took another look at the stack trace and I'm pretty sure the issue is with NULL values in one of the sort fields. The null pointer is occurring during the comparison of sort values. See line 85 of:

Re: Solr on HDFS: Streaming API performance tuning

2016-12-19 Thread Chetas Joshi
Hi Joel, I don't have any solr documents that have NULL values for the sort fields I use in my queries. Thanks! On Sun, Dec 18, 2016 at 12:56 PM, Joel Bernstein wrote: > Ok, based on the stack trace I suspect one of your sort fields has NULL > values, which in the 5x

Re: Stats component's percentiles are incorrect

2016-12-19 Thread John Blythe
very good point, walter. i think we could find some cool ways to leverage this intelligence for our users after serving up the flattened version based on the simple range that they're expecting to see. the clarity is helpful in getting some creative ideas moving, so thanks. best, -- *John

Re: Stats component's percentiles are incorrect

2016-12-19 Thread Walter Underwood
Percentiles are far more useful than that linear approximation. That is just slope and intercept, basically two numbers. With percentiles, I can answer the question “how fast is the search for 95% of my visitors?” With that linear interpolation, I don’t know anything about my customers.

Re: Stats component's percentiles are incorrect

2016-12-19 Thread John Blythe
gotcha. yup, that was the back up plan so i think i'll go that route for now. thanks for the info! best, -- *John Blythe* Product Manager & Lead Developer 251.605.3071 | j...@curvolabs.com www.curvolabs.com 58 Adams Ave Evansville, IN 47713 On Mon, Dec 19, 2016 at 3:41 PM, Toke Eskildsen

Re: Stats component's percentiles are incorrect

2016-12-19 Thread Toke Eskildsen
John Blythe wrote: > if the range is 0 to 100 then, for my current purposes, i don't care if the > vast majority of the values are 92, i would want 25%=>25, 50%=>50, and > 75%=>75. so is there an out-of-the-box way to get the percentiles to > correspond to the range itself

Re: Stats component's percentiles are incorrect

2016-12-19 Thread John Blythe
mm, i was afraid something like that might be the case. if the range is 0 to 100 then, for my current purposes, i don't care if the vast majority of the values are 92, i would want 25%=>25, 50%=>50, and 75%=>75. so is there an out-of-the-box way to get the percentiles to correspond to the range

Re: Stats component's percentiles are incorrect

2016-12-19 Thread Toke Eskildsen
John Blythe wrote: > count: 102 ... > distinct: 6 102 values, but only 6 distinct (aka unique): 3900, 3998, 4098, 4200, 4305 and 4413. > 25th: 4305.0 > 50th: 4413.0 > 75th: 4413.0 > - the 50th and 75th % are the same value as the max > - the 50th and 75th % are the same number as one another That is
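A small illustration of why that happens. The split of the 102 values across the 6 distinct numbers below is assumed for the example (the thread does not give the actual counts); once more than half of the values sit at the maximum, the 50th and 75th percentiles both collapse onto it:

    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.List;

    public class PercentileSkew {
        // Naive "nearest-rank" percentile over a sorted list (Solr computes
        // percentiles differently, but the skew effect is the same).
        static double percentile(List<Double> sorted, double p) {
            int idx = (int) Math.ceil(p / 100.0 * sorted.size()) - 1;
            return sorted.get(Math.max(idx, 0));
        }

        public static void main(String[] args) {
            double[] distinct = {3900, 3998, 4098, 4200, 4305, 4413};
            int[] counts      = {1, 1, 1, 1, 28, 70};   // assumed split, sums to 102
            List<Double> values = new ArrayList<>();
            for (int i = 0; i < distinct.length; i++) {
                for (int c = 0; c < counts[i]; c++) {
                    values.add(distinct[i]);
                }
            }
            Collections.sort(values);

            System.out.println("25th: " + percentile(values, 25));  // 4305.0
            System.out.println("50th: " + percentile(values, 50));  // 4413.0 (= max)
            System.out.println("75th: " + percentile(values, 75));  // 4413.0 (= max)
        }
    }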

How to clear the Collection Creation failed errors?

2016-12-19 Thread srinalluri
I tried to create two collections but the creation failed for some known reasons. I want to clear the errors shown in the attached screenshot. Please see the screenshot. As the collections failed to create, there is no point in deleting them.

Stats component's percentiles are incorrect

2016-12-19 Thread John Blythe
hi, all. i've begun recruiting solr stats for some nifty little insights for our users' data. it seems to be running just fine in most cases, but i have noticed that there is a fringe group of results that seem to have incorrect data. for instance, one query returns the following output:
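For context, a hedged sketch of how percentiles are typically requested from the stats component via SolrJ; the collection name, the field name "price" and the percentile list are placeholders rather than details from this thread:

    import java.util.Map;
    import org.apache.solr.client.solrj.SolrClient;
    import org.apache.solr.client.solrj.SolrQuery;
    import org.apache.solr.client.solrj.impl.HttpSolrClient;
    import org.apache.solr.client.solrj.response.FieldStatsInfo;
    import org.apache.solr.client.solrj.response.QueryResponse;

    public class StatsPercentiles {
        public static void main(String[] args) throws Exception {
            try (SolrClient client =
                     new HttpSolrClient.Builder("http://localhost:8983/solr").build()) {
                SolrQuery q = new SolrQuery("*:*");
                q.setRows(0);
                q.set("stats", true);
                // Local params on stats.field select which percentiles to compute.
                q.set("stats.field", "{!percentiles='25,50,75'}price");

                QueryResponse rsp = client.query("mycollection", q);
                FieldStatsInfo stats = rsp.getFieldStatsInfo().get("price");
                Map<Double, Double> pct = stats.getPercentiles();
                System.out.println("25/50/75: " + pct);
            }
        }
    }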

Re: Soft commit and reading data just after the commit

2016-12-19 Thread Hendrik Haddorp
Hi, the SolrJ API has this method: SolrClient.commit(String collection, boolean waitFlush, boolean waitSearcher, boolean softCommit). My assumption so far was that when you set waitSearcher to true, the method call only returns once a search would find the new data, which sounds like what you
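A rough sketch of the pattern Hendrik describes, assuming a placeholder collection and document: with waitSearcher=true the commit should not return until the new searcher is registered, so a query issued right afterwards is expected to see the freshly added document.

    import org.apache.solr.client.solrj.SolrClient;
    import org.apache.solr.client.solrj.SolrQuery;
    import org.apache.solr.client.solrj.impl.HttpSolrClient;
    import org.apache.solr.client.solrj.response.QueryResponse;
    import org.apache.solr.common.SolrInputDocument;

    public class WaitSearcherExample {
        public static void main(String[] args) throws Exception {
            try (SolrClient client =
                     new HttpSolrClient.Builder("http://localhost:8983/solr").build()) {
                SolrInputDocument doc = new SolrInputDocument();
                doc.addField("id", "42");              // placeholder document
                client.add("mycollection", doc);

                // commit(collection, waitFlush, waitSearcher, softCommit):
                // with waitSearcher=true the call blocks until the new searcher is open.
                client.commit("mycollection", true, true, true);

                // A query issued here is expected to see the just-added document.
                QueryResponse rsp = client.query("mycollection", new SolrQuery("id:42"));
                System.out.println("found: " + rsp.getResults().getNumFound());
            }
        }
    }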

Re: Confusing debug=timing parameter

2016-12-19 Thread Walter Underwood
One other thing. How many results are being requested? That is, what is the “rows” parameter? Time includes query time. It does not include networking time for sending 10,000 huge results to the client. wunder Walter Underwood wun...@wunderwood.org http://observer.wunderwood.org/ (my blog)

Re: ttl on merge-time possible somehow ?

2016-12-19 Thread Chris Hostetter
: So, the other way this can be made better in my opinion is (if the : optimization is not already there) : Is to make the 'delete-query' on ttl-documents operation on translog to not : be forced to fsync to disk (so still written to translog, but no fsync). : The another index/delete

Trying to figure out a solution for a problem

2016-12-19 Thread Shankar Krish
Hello, I am trying to find a solution for a specific search context that is not working the way I expect it to work. Let me explain in detail: The setup: I have a Solr instance set up with data (about 3.5 million documents). The schema has been set up with searchable text fields, etc. One of the

Re: Confusing debug=timing parameter

2016-12-19 Thread Chris Hostetter
SG: IIRC, when doing a distributed/cloud search, the timing info returned for each stage is the *cumulative* time spent on that stage in all shards -- so if you have 4 shards, the "process" time reported could be 4x as much as the actual process time spent. The QTime reported back (in a
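A hypothetical illustration of that cumulative reporting: four shards each spending about 50 ms in the "process" stage would show up in debug=timing as roughly 200 ms for that stage, even though no single shard spent more than 50 ms of wall-clock time on it.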

Re: Stable releases of Solr

2016-12-19 Thread Andrea Gazzarini
Hi Deepak, the latest version is 6.3.0 and I guess it is the best one to pick up. Keep in mind that 3.6.1 => 6.3.0 is definitely a big jump. In general, I think once a version is made available, that means it is (hopefully) stable. Best, Andrea On 16/12/16 08:10, Deepak Kumar Gupta wrote:

DIH caching URLDataSource/XPath entity (not root)

2016-12-19 Thread Chantal Ackermann
Hi there, my index is created from XML files that are downloaded on the fly. This also includes downloading a mapping file that is used to resolve IDs in the main file (root entity) and map them onto names. The basic functionality works - the supplier_name is set for each document. However, the

Specifying field in Child as query field

2016-12-19 Thread Navin Kulkarni
Hi, I plan to use a nested document structure for our index and would like to know how to pick fields from child documents as query fields. Normally we do this using the "qf" parameter in the Solr query and specify the search keyword in the query field. I tried to reference a child field using "qf" but this did not

Re: Soft commit and reading data just after the commit

2016-12-19 Thread Shawn Heisey
On 12/18/2016 7:09 PM, Lasitha Wattaladeniya wrote: > @eric : thanks for the lengthy reply. So let's say I increase the > autoSoftCommit timeout to maybe 100 ms. In that case do I have to > wait that much time from the client side before calling search? What's > the correct way of achieving this?

Re: [ANN] InvisibleQueriesRequestHandler

2016-12-19 Thread Mikhail Khludnev
> It has an interesting failure mode. If the user misspells a word (about > 10% of > queries do), and the misspelling matches a misspelled document, then you > are stuck. It will never show the correctly-spelled document. > FWIW (and I'm sorry for hijacking) I've faced this challenge too, and