Hmmm, I think you missed my implied point. How are these metrics collected
and compared? There are about a dozen different machines running various
operating systems etc. For these measurements to spot regressions and/or
improvements, there needs to be a repository where the results get
published. A report like "build XXX took YYY seconds to index ZZZ
documents" doesn't tell us anything on its own; you need to gather the
numbers for a _specific_ machine so runs are comparable.

As for whether they should be run or not, an annotation could help here:
there are already @Slow, @Nightly and @Weekly, and a @Performance
annotation could be added (see the sketch below). Mike McCandless already
has some of these kinds of benchmarks for Lucene, so I think the first
step would be to check whether they already exist; it's possible you'd be
reinventing the wheel.
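
As a purely illustrative sketch of such a gate: the @Performance annotation
and the tests.performance property below are hypothetical, not existing
pieces of the Lucene/Solr test framework, and the wiring is plain JUnit 4
rather than the project's real test infrastructure:

    import java.lang.annotation.ElementType;
    import java.lang.annotation.Retention;
    import java.lang.annotation.RetentionPolicy;
    import java.lang.annotation.Target;

    import org.junit.Assume;
    import org.junit.Before;
    import org.junit.Test;

    // Hypothetical marker, analogous in spirit to @Slow/@Nightly/@Weekly.
    @Retention(RetentionPolicy.RUNTIME)
    @Target({ElementType.TYPE, ElementType.METHOD})
    @interface Performance {}

    @Performance
    public class ExamplePerfTest {

      @Before
      public void skipUnlessPerfRunRequested() {
        // Skip @Performance suites unless -Dtests.performance=true was passed.
        if (getClass().isAnnotationPresent(Performance.class)) {
          Assume.assumeTrue("performance tests not requested",
              Boolean.getBoolean("tests.performance"));
        }
      }

      @Test
      public void testSomethingExpensive() {
        // the actual benchmark body would go here
      }
    }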

Best,
Erick

On Mon, Jan 8, 2018 at 11:45 AM, S G <sg.online.em...@gmail.com> wrote:

> We can put some lower limits on CPU and Memory for running a performance
> test.
> If those lower limits are not met, then the test will just skip execution.
>
> And then we put some upper bounds (time-wise) on the time spent by
> different parts of the test (a rough sketch follows this list), like:
>  - Max time taken to index 1 million documents
>  - Max time taken to query, facet, pivot etc.
>  - Max time taken to delete 100,000 documents while reads and writes
> are happening.
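>
> A rough sketch of that gating plus a max-time check, assuming plain
> JUnit 4 (the thresholds and the indexing helper below are invented
> placeholders, not measured baselines):
>
>     import static org.junit.Assert.assertTrue;
>     import static org.junit.Assume.assumeTrue;
>
>     import java.util.concurrent.TimeUnit;
>
>     import org.junit.Test;
>
>     public class IndexingPerfTest {
>
>       @Test
>       public void testBulkIndexingStaysUnderBudget() throws Exception {
>         // Skip on underpowered machines instead of reporting noise.
>         assumeTrue("need at least 4 cores",
>             Runtime.getRuntime().availableProcessors() >= 4);
>         assumeTrue("need at least 4 GB of heap",
>             Runtime.getRuntime().maxMemory() >= 4L * 1024 * 1024 * 1024);
>
>         long start = System.nanoTime();
>         indexOneMillionDocs();
>         long elapsedSec =
>             TimeUnit.NANOSECONDS.toSeconds(System.nanoTime() - start);
>
>         // 300 seconds is an arbitrary example budget.
>         assertTrue("indexing took " + elapsedSec + "s", elapsedSec < 300);
>       }
>
>       private void indexOneMillionDocs() {
>         // hypothetical helper; a real test would drive SolrClient here
>       }
>     }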
>
> For all of the above, we can publish metrics like 5minRate and
> 95thPercentile and assert that they stay below a chosen threshold
> (see the sketch below).
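>
> For the metrics part, a minimal sketch using the Dropwizard Metrics
> classes Solr already ships (the registry key and thresholds here are
> invented for illustration):
>
>     import com.codahale.metrics.MetricRegistry;
>     import com.codahale.metrics.Timer;
>
>     public class QueryMetricsCheck {
>
>       public static void assertWithinBudget(MetricRegistry registry) {
>         Timer queryTimer = registry.timer("example.query.requestTimes");
>
>         double fiveMinRate = queryTimer.getFiveMinuteRate(); // events/sec
>         // Timer snapshots record durations in nanoseconds.
>         double p95Millis =
>             queryTimer.getSnapshot().get95thPercentile() / 1_000_000.0;
>
>         // Example thresholds only; real budgets need a per-machine baseline.
>         if (p95Millis > 500.0) {
>           throw new AssertionError("p95 latency too high: " + p95Millis
>               + " ms (5minRate=" + fiveMinRate + ")");
>         }
>       }
>     }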
>
> I know some other software compares CPU cycles across different runs
> as well, but I am not sure how that is done.
>
> Such tests will give us more confidence when releasing/adopting new
> features like pint compared to tint etc.
>
> Thanks
> SG
>
>
>
> On Sat, Jan 6, 2018 at 9:59 AM, Erick Erickson <erickerick...@gmail.com>
> wrote:
>
>> Not sure how performance tests in the unit tests would be interpreted.
>> If I run the same suite on two different machines, how do I compare
>> the numbers?
>>
>> Or are you thinking of having some tests so someone can check out
>> different versions of Solr and run the perf tests on a single machine,
>> perhaps using bisect to pinpoint when something changed?
>>
>> I'm not opposed at all, just trying to understand how one would go about
>> using such tests.
>>
>> Best,
>> Erick
>>
>> On Fri, Jan 5, 2018 at 10:09 PM, S G <sg.online.em...@gmail.com> wrote:
>>
>>> Just curious: does the test suite include some performance tests as
>>> well? I would like to know the performance impact of using pints vs
>>> tints or ints etc. If there are none, I can try to add some.
>>>
>>> Thanks
>>> SG
>>>
>>>
>>> On Fri, Jan 5, 2018 at 5:47 PM, Đạt Cao Mạnh <caomanhdat...@gmail.com>
>>> wrote:
>>>
>>>> Hi all,
>>>>
>>>> I will work on SOLR-11771
>>>> <https://issues.apache.org/jira/browse/SOLR-11771> today. It is a
>>>> simple fix and it would be great if it got fixed in 7.2.1.
>>>>
>>>> On Fri, Jan 5, 2018 at 11:23 PM Erick Erickson <erickerick...@gmail.com>
>>>> wrote:
>>>>
>>>>> Neither of those Solr fixes is earth-shatteringly important; they've
>>>>> both been around for quite a while. I don't think it's urgent to
>>>>> include them.
>>>>>
>>>>> That said, they're pretty simple and isolated, so they're worth doing
>>>>> if Jim is willing. But it's not worth straining much; I was just
>>>>> clearing out some backlog over vacation.
>>>>>
>>>>> Strictly up to you Jim.
>>>>>
>>>>> Erick
>>>>>
>>>>> On Fri, Jan 5, 2018 at 6:54 AM, David Smiley <david.w.smi...@gmail.com
>>>>> > wrote:
>>>>>
>>>>>> https://issues.apache.org/jira/browse/SOLR-11809 is in progress,
>>>>>> should be easy and I think definitely worth backporting
>>>>>>
>>>>>> On Fri, Jan 5, 2018 at 8:52 AM Adrien Grand <jpou...@gmail.com>
>>>>>> wrote:
>>>>>>
>>>>>>> +1
>>>>>>>
>>>>>>> Looking at the changelog, 7.3 has 3 bug fixes for now: LUCENE-8077,
>>>>>>> SOLR-11783 and SOLR-11555. The Lucene change doesn't seem worth
>>>>>>> backporting, but maybe the Solr changes should?
>>>>>>>
>>>>>>> On Fri, Jan 5, 2018 at 12:40 PM, jim ferenczi <jim.feren...@gmail.com>
>>>>>>> wrote:
>>>>>>>
>>>>>>>> Hi,
>>>>>>>> We discovered a bad bug in 7.x that affects indices created in 6.x
>>>>>>>> with the Lucene54DocValues format. SortedNumericDocValues created
>>>>>>>> with this format misbehave when advanceExact is used: the values
>>>>>>>> retrieved for docs where advanceExact returns true are invalid,
>>>>>>>> because the pointer to the values is not updated:
>>>>>>>> https://issues.apache.org/jira/browse/LUCENE-8117
>>>>>>>> This affects all indices created in 6.x with sorted numeric doc
>>>>>>>> values, so I wanted to ask whether anyone objects to a bugfix
>>>>>>>> release for 7.2 (7.2.1). I also volunteer to be the release manager
>>>>>>>> for this one if it is accepted.
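>>>>>>>>
>>>>>>>> For readers less familiar with the iterator API, a minimal sketch
>>>>>>>> of the access pattern that hits this (the reader and field names
>>>>>>>> are placeholders, not code taken from LUCENE-8117 itself):
>>>>>>>>
>>>>>>>>     import java.io.IOException;
>>>>>>>>
>>>>>>>>     import org.apache.lucene.index.DocValues;
>>>>>>>>     import org.apache.lucene.index.LeafReader;
>>>>>>>>     import org.apache.lucene.index.SortedNumericDocValues;
>>>>>>>>
>>>>>>>>     public class AdvanceExactExample {
>>>>>>>>       static void readValues(LeafReader leafReader, int docId)
>>>>>>>>           throws IOException {
>>>>>>>>         SortedNumericDocValues dv =
>>>>>>>>             DocValues.getSortedNumeric(leafReader, "myField");
>>>>>>>>         if (dv.advanceExact(docId)) {
>>>>>>>>           for (int i = 0; i < dv.docValueCount(); i++) {
>>>>>>>>             // On a 6.x segment written with Lucene54DocValues,
>>>>>>>>             // these values can be stale because the internal
>>>>>>>>             // pointer is not advanced (LUCENE-8117).
>>>>>>>>             long value = dv.nextValue();
>>>>>>>>           }
>>>>>>>>         }
>>>>>>>>       }
>>>>>>>>     }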
>>>>>>>>
>>>>>>>> Jim
>>>>>>>>
>>>>>>>
>>>>>>
>>>>>> --
>>>>>> Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker
>>>>>> LinkedIn: http://linkedin.com/in/davidwsmiley | Book:
>>>>>> http://www.solrenterprisesearchserver.com
>>>>>>
>>>>>
>>>>>
>>>
>>
>
