Here's the scene I initially used to test this:

IMAGE >> http://fdpl.foundation/prog-vs-nonprog-suzanne-volumetric-dof-displacement.png (10MB)
BLEND >> http://fdpl.foundation/prog-vs-nonprog-suzanne-volumetric-dof-displacement.blend (<1MB)

I rendered on CPU - i7-2920XM quad-core (2.50GHz, 8MB cache), Fedora 20 64-bit, Blender 2.72b official release.

Non-Progressive: 61 seconds
Progressive: 165 seconds

Super coincidentally also 2.7x slower :D

Cheers,
Simon

On 11/13/2014 11:49 AM, Sergey Sharybin wrote:
> Brecht, there are multiple factors involved here, ranging from CPU cache
> coherence issues (which we can't easily predict) to less efficient
> buffer updates (we might be saving all the passes after each of the samples
> in the case of progressive refine, which we can fix).
>
> But yeah, should have asked for a demo file in the original mail :) So do
> you guys have a demo .blend file which demonstrates such a major slowdown?
>
> Brecht, we can (and actually should) do several samples per tile, but I'd
> prefer log() rather than sqrt(), perhaps. Another idea to improve cache
> coherence would be to use left-to-right, bottom-to-top tile scheduling.
> That gives a few % of speedup, and AFAIR we don't fall back to such
> scheduling in progressive refine.
>
> And even after the tweaks, we should probably replace "somewhat" with
> something more deterministic in the tooltip.
>
> On Thu, Nov 13, 2014 at 10:58 AM, Brecht Van Lommel <
> [email protected]> wrote:
>
>> If it is really 100% or 300% slower then that sounds like a fixable issue.
>>
>> I'm not sure why it would be this slow, and it would be good to find out
>> why, but one thing that would speed it up is to render more samples at a
>> time and redraw less often as the current sample increases. Due to the way
>> Monte Carlo integration works, one sample barely makes any visible
>> difference in noise after a while anyway.
>>
>> Maybe something like:
>> num_samples_at_once = sqrt(max(current_sample - 4, 1))
>>
>> On Nov 13, 2014 10:13 AM, "Greg Zaal" <[email protected]> wrote:
>>
>> Just by the way, it's even more noticeable with GPU rendering - I've
>> often seen it roughly 300% slower.
>>
>> "could lead to significant slowdown" sounds good to me.
>>
>> On 13 November 2014 11:06, Sergey Sharybin <[email protected]> wrote:
>>
>>> The issue here is that the slowdown basically depends on the particular
>>> hardware configuration, tile settings and device used to render
>>> (GPU/CPU). Meaning, on a modern CPU I've noticed around a 20% slowdown
>>> peak, which is not as bad as 100%. So what I'm trying to say here is
>>> that if we provide the information "up to 100% slower" it might just
>>> scare artists and they wouldn't use the option at all, even though for
>>> their configuration the slowdown wouldn't be so bad.
>>>
>>> What about something more neutral (in my opinion): "could lead to
>>> significant slowdown"?
>>>
>>> On Thu, Nov 13, 2014 at 12:22 AM, Simon Repp <[email protected]>
>>> wrote:
>>>
>>>> Dear Renderistas,
>>>>
>>>> I only recently found out that progressive refinement in Cycles
>>>> rendering (which the corresponding checkbox's tooltip describes as
>>>> "somewhat slower" than bucket rendering) can in fact impose performance
>>>> penalties of over 100% (i.e. the same number of samples takes more than
>>>> twice as long to render).
>>>>
>>>> Now I don't know if this is just a flawed personal interpretation of
>>>> the English language on my part, but when reading "somewhat slower" I
>>>> didn't realize what I was really in for, and in retrospect I'd rather
>>>> not reconstruct how many days my poor laptop spent in excess rendering
>>>> some projects I did in the past.
>>>>
>>>> I'd hereby like to propose a change of this tooltip to something less
>>>> ambiguous, lest anyone else fall into the same trap that I have. My
>>>> proposal would be to include actual figures describing the possible
>>>> speed penalty that progressive refinement can impose, that is,
>>>> something along the lines of "renders [a]% to [b]% slower depending on
>>>> the scene", where figures [a] and [b] are ideally derived from
>>>> real-world data we gather (or already have?) about how much of a speed
>>>> penalty progressive refinement can impose in different scenes.
>>>> Alternatively, only stating "up to [x]% slower" would work as well I
>>>> guess, as the main point is to make people aware that it _can_
>>>> possibly affect render times _significantly_.
>>>>
>>>> If the proposal to include figures is not agreeable for some reason, I
>>>> would at least ask for a more indicative wording than "somewhat
>>>> slower", which even after consulting multiple dictionaries I'm not
>>>> sure has an official interpretation. (One dictionary suggests "quite"
>>>> as a synonym, another "slightly"...) I'd still prefer the figures
>>>> though: no one looks up terms in the dictionary while using Blender, I
>>>> am probably not the worst offender at massacring and misunderstanding
>>>> the English language in the Blender community, and it doesn't get less
>>>> ambiguous than numbers anyway, so I say we use them here?
:)
>>>>
>>>> Best,
>>>> Simon
>>>> _______________________________________________
>>>> Bf-committers mailing list
>>>> [email protected]
>>>> http://lists.blender.org/mailman/listinfo/bf-committers
>>>
>>> --
>>> With best regards, Sergey Sharybin
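[Editor's note] Brecht's batching idea quoted above (render more samples per buffer update as the sample count grows, since later samples change the image less) can be sketched roughly as below. This is a hypothetical illustration in Python, not actual Cycles code; the function name and the `formula` switch are made up, and Sergey's log() alternative is interpreted as one plausible reading of his remark.

```python
import math

def samples_per_update(current_sample, formula="sqrt"):
    """How many samples to render before the next buffer update/redraw.

    Brecht suggested sqrt(max(current_sample - 4, 1)); Sergey proposed
    log()-style growth instead, which batches more conservatively.
    Both start at 1 (redraw after every sample early on) and grow as
    sampling progresses.
    """
    base = max(current_sample - 4, 1)
    if formula == "sqrt":
        return max(1, int(math.sqrt(base)))
    # One reading of Sergey's log() alternative (illustrative only)
    return max(1, int(math.log(base)) + 1)

# Early on, update after every sample; later, batch more at once.
print(samples_per_update(1))           # -> 1
print(samples_per_update(104))         # -> 10 (sqrt(100))
print(samples_per_update(104, "log"))  # -> 5
```

The practical effect is that the expensive per-update work (saving all passes, redrawing the viewport) is amortized over more and more samples as the render converges, which is exactly where an extra sample makes the least visible difference.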
