git bisect points to https://github.com/JuliaLang/julia/commit/5cb2835 as 
the first bad commit.

The regression is apparently not as bad as I posted above, but it is still 
at least a 30% overall performance hit.
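
To automate that kind of search, git bisect run can drive a small script 
that exits 0 on a good commit and 1 on a bad one (when bisecting Julia 
itself you also rebuild at each step, e.g. 
git bisect run sh -c 'make && ./julia bisect_perf.jl'). A minimal sketch; 
the workload and the ops/sec threshold here are placeholders, not the 
actual KDTrees.jl benchmark:

    # bisect_perf.jl -- driver script for `git bisect run`
    # my_workload() and the 500000 ops/sec threshold are placeholders.
    function my_workload(n)
        s = 0.0
        for i in 1:n
            s += sqrt(i)
        end
        return s
    end

    my_workload(10)                  # warm up / force compilation
    t = @elapsed my_workload(10^7)   # time the real run
    rate = 10^7 / t                  # iterations per second

    # exit status 0 tells git bisect the commit is good, 1 that it is bad
    exit(rate > 500000 ? 0 : 1)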

On Tuesday, June 23, 2015 at 12:11:09 PM UTC+2, Kristoffer Carlsson wrote:
>
> I have found quite a severe performance hit in my KNN searches in 
> https://github.com/KristofferC/KDTrees.jl
>
> Earlier results: 730922 knn/sec; now: 271581 knn/sec.
>
> I will bisect it later today and see if there are major performance hits 
> somewhere.
>
>
> On Tuesday, June 23, 2015 at 1:54:09 AM UTC+2, David Anthoff wrote:
>>
>> I also saw a huge performance drop for a pretty involved piece of code 
>> when I tried a pre-release version of 0.4 a couple of weeks ago, compared 
>> to running things on 0.3. 
>>
>> My plan was to wait for a feature freeze of 0.4 and then investigate and 
>> report these things; I don't have the bandwidth to track them on an 
>> ongoing basis. Maybe that would be a good rule of thumb: ask people to 
>> look for performance regressions with existing code once there is a 
>> feature freeze on 0.4? 
>>
>> Also, are there ongoing performance tests? Especially tests that don't 
>> micro-benchmark, but test runtimes of relatively complex pieces of code? 
>>
>> > -----Original Message----- 
>> > From: [email protected] 
>> > [mailto:[email protected]] On Behalf Of Tim Holy 
>> > Sent: Sunday, June 14, 2015 4:36 AM 
>> > To: [email protected] 
>> > Subject: Re: [julia-users] Current Performance w Trunk Compared to 0.3 
>> > 
>> > git bisect? 
>> > 
>> > Perhaps the leading candidate is 
>> > https://github.com/JuliaLang/julia/issues/11681 
>> > which may be fixed by 
>> > https://github.com/JuliaLang/julia/pull/11683 
>> > 
>> > --Tim 
>> > 
>> > On Sunday, June 14, 2015 02:58:19 AM Viral Shah wrote: 
>> > > FWIW, I have seen a 25% regression from 0.3 to 0.4 on a reasonably 
>> > > complex codebase, but haven't been able to isolate the offending code. 
>> > > GC time in the 0.4 run is significantly smaller than in 0.3, which 
>> > > means that if you discount GC, the difference is more like 40%. I 
>> > > wonder if this is some weird interaction with the caching in the new 
>> > > GC, or if it is the quality of the generated code. 
>> > > 
>> > > I didn't report it yet, since it wouldn't be useful without narrowing 
>> > > it down - but since this thread came up, I at least thought I'd 
>> > > register my observations. 
>> > > 
>> > > -viral 
>> > > 
>> > > On Friday, June 12, 2015 at 12:21:46 PM UTC-4, Tim Holy wrote: 
>> > > > Just a reminder: if anyone still sees this kind of performance 
>> > > > regression, please do provide some more detail, as it's impossible 
>> > > > to fix without it. It's really as simple as this: 
>> > > > 
>> > > > run_my_workload()   # run once to force compilation 
>> > > > @profile run_my_workload() 
>> > > > using ProfileView 
>> > > > ProfileView.view() 
>> > > > 
>> > > > and then hover over any big (especially red) boxes along the top 
>> > > > row. Right-clicking will put details into the REPL command line; if 
>> > > > the problematic line(s) are indeed in base Julia, you can copy/paste 
>> > > > them into an email or issue report. You can also paste the output of 
>> > > > Profile.print(), if more detail about the full backtrace seems 
>> > > > useful (and if that output isn't too long). 
>> > > > 
>> > > > --Tim 
>> > > > 
>> > > > On Thursday, June 11, 2015 11:09:13 AM Sebastian Good wrote: 
>> > > > > I've seen the same. I looked away for a few weeks, and my code got 
>> > > > > ~5x slower. There's a lot going on, so it's hard to say without 
>> > > > > detailed testing. However, this code was always very sensitive to 
>> > > > > the optimizer being able to specialize code that reads data of 
>> > > > > different types, and I got massive increases in memory 
>> > > > > allocations. I'll try to narrow it down, but it seems like perhaps 
>> > > > > something was done with optimization passes or type inference? 
>> > > > > 
>> > > > > On Wednesday, June 10, 2015 at 9:31:59 AM UTC-4, Kevin Squire 
>> > > > > wrote: 
>> > > > > > Short answer: no, poor performance across the board is not a 
>> > > > > > known issue. 
>> > > > > > 
>> > > > > > Just curious, do you see these timing issues locally as well? In 
>> > > > > > other words, is it a problem with Julia, or a problem with 
>> > > > > > Travis (the continuous integration framework)? 
>> > > > > > 
>> > > > > > It might be the case that some changes in v0.4 have (possibly 
>> > > > > > inadvertently) slowed down certain workflows compared with v0.3, 
>> > > > > > whereas others are unchanged or even faster. 
>> > > > > > 
>> > > > > > Could you run profiling and see what parts of the code are the 
>> > > > > > slowest, and then file issues for any slowdowns, with 
>> > > > > > (preferably minimal) examples? 
>> > > > > > Cheers, 
>> > > > > > 
>> > > > > >    Kevin 
>> > > > > > 
>> > > > > > On Wed, Jun 10, 2015 at 9:10 AM, andrew cooke 
>> > > > > > <[email protected]> wrote: 
>> > > > > >> Is the current poor performance / allocation a known issue? 
>> > > > > >> 
>> > > > > >> I don't know how long this has been going on, and searching 
>> > > > > >> for "performance" in issues gives a lot of hits, but I've been 
>> > > > > >> maintaining some old projects and noticed that timed tests are 
>> > > > > >> running significantly slower with trunk than 0.3. CRC.jl was 
>> > > > > >> 40x slower - I ended up cancelling the Travis build, and 
>> > > > > >> assumed it was a weird glitch that would be fixed. But now I am 
>> > > > > >> seeing slowdowns with IntModN.jl too (a factor more like 4x as 
>> > > > > >> slow). 
>> > > > > >> 
>> > > > > >> You can see this at 
>> > > > > >> https://travis-ci.org/andrewcooke/IntModN.jl (compare the 
>> > > > > >> timing results in the two jobs) and at 
>> > > > > >> https://travis-ci.org/andrewcooke/CRC.jl/builds/66140801 (I 
>> > > > > >> have been cancelling jobs there, so the examples aren't as 
>> > > > > >> complete). 
>> > > > > >> 
>> > > > > >> Andrew 
>>
>>
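
One more note on the allocation increases Sebastian mentions above: before 
reaching for the profiler, @time and @allocated give a quick read on 
whether a slowdown is allocation-driven. A minimal sketch (my_workload 
below is a placeholder for your own code):

    function my_workload()
        s = 0.0
        for i in 1:10^6
            s += sqrt(i)
        end
        return s
    end

    my_workload()          # warm up / force compilation
    @time my_workload()    # prints elapsed time and bytes allocated
    println("bytes per call: ", @allocated my_workload())

Running that on both 0.3 and a 0.4 build turns "allocations went up" into 
concrete numbers, which is the kind of detail Tim asks for.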
