Just a reminder: if anyone still sees this kind of performance regression, 
please do provide some more detail, as it's impossible to fix without it. It's 
really as simple as this:

using ProfileView           # Pkg.add("ProfileView") first, if needed

run_my_workload()           # run once to force compilation
@profile run_my_workload()  # profile the second, compiled run
ProfileView.view()          # open the flame-graph window

and then hover over any big (especially red) boxes along the top row. Right-
clicking will put the details into the REPL command line; if the problematic 
line(s) are indeed in base Julia, you can copy/paste them into an email or 
issue report. You can also paste the output of Profile.print(), if more detail 
about the full backtrace seems useful (and if that output isn't too long).
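
If you can't run the GUI (say, on a headless CI machine), a flat text listing 
is often easiest to paste. Something like this should work, though double-
check the keyword arguments against your Julia version:

Profile.clear()                               # discard samples from earlier runs
@profile run_my_workload()
Profile.print(format=:flat, sortedby=:count)  # flat listing, sorted by sample count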

--Tim

On Thursday, June 11, 2015 11:09:13 AM Sebastian Good wrote:
> I've seen the same. I looked away for a few weeks, and my code got ~5x
> slower. There's a lot going on, so it's hard to say more without detailed
> testing. However, this code was always very sensitive to the optimizer's
> ability to specialize code that reads data of different types, and I saw
> massive increases in memory allocations. I'll try to narrow it down, but it
> seems like perhaps something changed in the optimization passes or type
> inference?
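> 
> For what it's worth, here's roughly how I plan to narrow it down. This is
> just a sketch; read_field is a hypothetical stand-in for my real reader,
> not actual code from the package:
> 
> # hypothetical stand-in for code specialized on the type being read
> read_field{T}(io::IO, ::Type{T}) = read(io, T)
> 
> buf = IOBuffer(); write(buf, 3.14); seekstart(buf)
> @code_warntype read_field(buf, Float64)  # any ::Any in the output means
>                                          # inference failed to specialize
> read_field(buf, Float64); seekstart(buf) # warm up (compile), then rewind
> @allocated read_field(buf, Float64)      # unexpected allocations are another
>                                          # symptom of failed specialization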
> 
> On Wednesday, June 10, 2015 at 9:31:59 AM UTC-4, Kevin Squire wrote:
> > Short answer: no, poor performance across the board is not a known issue.
> > 
> > Just curious, do you see these timing issues locally as well?  In other
> > words, is it a problem with Julia, or a problem with Travis (the
> > continuous
> > integration framework)?
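> > 
> > (To check locally, something along these lines should work; "YourPackage"
> > is just a placeholder for the real package name:
> > 
> > @time include(Pkg.dir("YourPackage", "test", "runtests.jl"))
> > 
> > and running it twice lets the second timing exclude compilation.)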
> > 
> > It might be the case that some changes in v0.4 have (possibly
> > inadvertently) slowed down certain workflows compared with v0.3, whereas
> > others are unchanged or even faster.
> > 
> > Could you run profiling and see what parts of the code are the slowest,
> > and then file issues for any slowdowns, with (preferably minimal)
> > examples?
> > 
> > Cheers,
> > 
> >    Kevin
> > 
> > On Wed, Jun 10, 2015 at 9:10 AM, andrew cooke <[email protected]> wrote:
> >> Is the current poor performance / allocation a known issue?
> >> 
> >> I don't know how long this has been going on, and searching for
> >> "performance" in the issues gives a lot of hits, but I've been
> >> maintaining some old projects and noticed that timed tests are running
> >> significantly slower with trunk than 0.3.  CRC.jl was 40x slower - I
> >> ended up cancelling the Travis build and assumed it was a weird glitch
> >> that would be fixed.  But now I am seeing slowdowns with IntModN.jl too
> >> (more like 4x as slow).
> >> 
> >> You can see this at https://travis-ci.org/andrewcooke/IntModN.jl
> >> (compare the timing results in the two jobs) and at
> >> https://travis-ci.org/andrewcooke/CRC.jl/builds/66140801 (I have been
> >> cancelling jobs there, so the examples aren't as complete).
> >> 
> >> Andrew
