On Mar 20, 2015, at 11:29 PM, Vasco Alexandre da Silva Costa
<vasco.co...@gmail.com> wrote:
> I see. Then I need to examine this part better and propose a plan focused on
> it. I only have CS-grad-student-level knowledge of CSG, since most of my
> experience in RT is with acceleration structures and ray processing, so I am
> interested in seeing how you do the weaving and whether there are any
> specific optimizations, etc.
Old-school optimizations: complex goto branching and highly tuned tolerance
testing/merging/splitting. For a first pass, though, a simpler approach could
be taken, since the performance considerations will be considerably different
with coherency and/or a GPU involved.
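
To give a flavor of what weaving actually does, here is a minimal sketch of
that "simpler approach" in C: collect every hit boundary along the ray, then
emit the spans where the boolean evaluates true. It's purely illustrative --
the names are hypothetical, it handles a binary boolean rather than a full
tree, and it has none of the tolerance handling mentioned above -- not librt's
struct seg / struct partition machinery.

/*
 * Illustrative boolean weaving for one ray: given the in/out hit
 * segments of two solids, emit the partitions where a binary boolean
 * of the two is true.  Hypothetical names; no tolerances, no overlap
 * resolution, nothing like librt's real weaver.
 */
#include <stdlib.h>

struct hit_span { double in, out; };   /* one in/out segment along the ray */
enum bool_op { OP_UNION, OP_INTERSECT, OP_SUBTRACT };

static int cmp_dbl(const void *a, const void *b)
{
    double d = *(const double *)a - *(const double *)b;
    return (d > 0) - (d < 0);
}

static int point_in(const struct hit_span *s, size_t n, double t)
{
    size_t i;
    for (i = 0; i < n; i++)
        if (t >= s[i].in && t < s[i].out)
            return 1;
    return 0;
}

/* Weave solids A and B; returns the number of partitions written. */
static size_t weave_pair(const struct hit_span *a, size_t na,
                         const struct hit_span *b, size_t nb,
                         enum bool_op op,
                         struct hit_span *out, size_t max_out)
{
    size_t nbound = 2 * (na + nb), i, nout = 0;
    double *bounds = malloc(nbound * sizeof(double));

    if (!bounds)
        return 0;

    /* 1. every hit distance is a candidate partition boundary */
    for (i = 0; i < na; i++) { bounds[2*i] = a[i].in; bounds[2*i+1] = a[i].out; }
    for (i = 0; i < nb; i++) { bounds[2*(na+i)] = b[i].in; bounds[2*(na+i)+1] = b[i].out; }
    qsort(bounds, nbound, sizeof(double), cmp_dbl);

    /* 2. evaluate the boolean in each span between consecutive boundaries,
     *    emitting/merging a partition wherever it comes up true */
    for (i = 0; i + 1 < nbound; i++) {
        double mid = 0.5 * (bounds[i] + bounds[i+1]);
        int in_a = point_in(a, na, mid), in_b = point_in(b, nb, mid);
        int in = (op == OP_UNION)     ? (in_a || in_b)
               : (op == OP_INTERSECT) ? (in_a && in_b)
               :                        (in_a && !in_b);
        if (!in)
            continue;
        if (nout && out[nout-1].out == bounds[i])
            out[nout-1].out = bounds[i+1];   /* extend the previous partition */
        else if (nout < max_out)
            out[nout++] = (struct hit_span){ bounds[i], bounds[i+1] };
    }
    free(bounds);
    return nout;
}

The real weaver evaluates full region boolean trees in place and has to
decide, within tolerance, whether nearly coincident hits merge or split a
partition, which is where the goto branching and tuning above come in.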
> So the only planning concern is that this isn’t an incremental plan. It’s
> always a big red flag when a proposal has a big “test and document” at the
> end. If anything goes wrong or takes a couple weeks longer than planned with
> one of your early steps, we usually end up with something incomplete that
> cannot be shipped to users.
>
> I'm used to doing incremental development with continuous integration. I
> don't know how to do it any other way. I always test every step in order to
> not break the code. But in my experience something always needs fine tuning
> or debugging in the end. Also the docs always lag behind everything. Not
> dedicating time specifically in the plan for docs usually means next to no
> docs will be produced.
Coding complete isn’t about incremental development or continuous testing.
Those are good too. Taken to the extreme, complete means you develop in a manner
such that even if tragedy struck (e.g., you got hit by a bus tomorrow), there’s
actually nothing anyone would have to undo/fix/finish/revert. You focus on
making sure your code is always “done”, in a production-ready state.
If writing code were a graph traversal, this is about going depth-first
instead of breadth-first: not leaving TODO and FIXME notes for issues directly
related to your implementation, making sure your code is always fully
compliant with standards, not stopping at "good enough" before moving on to
the next piece, etc. If you're complete, docs never lag, and implementations
are verified as they're implemented and reverified as they become integrated
with other components.
If a callout is warranted in the schedule, there should be several instances
of "test, debug, document" throughout your timeline, but still not one big
2.5-week block at the end. That would be breadth-first, waterfall-style
cleanup. It's simply not acceptable from a risk-management perspective, and
it's essentially something we've learned not to even allow for GSoC.
> I was an OSS project maintainer (Freeciv) for close to 4 years. You never
> ever break the code with a commit. BTW what's your process for patch code
> reviews? You discuss patches on the mailing-list?
Ah, a fellow game dev with OSS experience -- fantastic. Please read our
developer HACKING guide (in the source tree). It covers patches, commit
expectations, and code reviews. It's an informal process, primarily in the
hands of existing devs to decide what to review; what is deemed acceptable is
captured throughout HACKING, the automated tests, and core developer
mindshare.
As for commits breaking code, HACKING covers that topic as well. We actually
allow a (temporarily) unstable trunk; our STABLE and RELEASE branches have
very different quality and commit requirements.
> I basically wanted to have a whole (optional) accelerated rendering path from
> end to end so we could then work on the primitives afterwards when this thing
> ends. But if weaving is that compute intensive in BRL-CAD you are correct
> that it may make more sense to initially focus on just that. I have not
> profiled BRL-CAD librt to know where the perf bottlenecks are but you guys
> have actual experience with this.
I completely understood why you proposed it that way. :)
The problem is that this approach means you could be 100% successful in your
project, even ahead of schedule, yet users could end up with no benefit when
GSoC is over until enough primitives have had their shot routine transcoded,
tested, and verified. Some of them will be exceptionally complex to get
working in OpenCL, months of effort each. There are about 10 primitives in
pervasive use, some incredibly complex. Why would we want optional
acceleration? By simply shifting the order/focus around a little bit, we can
avoid making it optional. That ensures the focus remains on the user, not on
the status of the implementation.
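
For a sense of the per-primitive work involved, the easy end of the spectrum
looks roughly like the sketch below: a plain-C ray/sphere shot returning entry
and exit distances. This is only illustrative (hypothetical names and
signature, not librt's actual shot interface), and it skips tolerances,
surface normals, and all the segment bookkeeping; the complex primitives carry
vastly more of this kind of logic.

/*
 * Hypothetical, simplified "shot" routine for a sphere: returns 1 and
 * the entry/exit distances along a unit-direction ray, or 0 on a miss.
 * Not librt's actual interface; no tolerances, normals, or seg lists.
 */
#include <math.h>

static int sph_shot(const double orig[3], const double dir[3],
                    const double center[3], double radius,
                    double *t_in, double *t_out)
{
    double oc[3] = { orig[0] - center[0],
                     orig[1] - center[1],
                     orig[2] - center[2] };
    double b = oc[0]*dir[0] + oc[1]*dir[1] + oc[2]*dir[2];
    double c = oc[0]*oc[0] + oc[1]*oc[1] + oc[2]*oc[2] - radius*radius;
    double disc = b*b - c;

    if (disc <= 0.0)
        return 0;                 /* missed (or grazed) the sphere */

    disc = sqrt(disc);
    *t_in  = -b - disc;           /* distance where the ray enters */
    *t_out = -b + disc;           /* distance where the ray exits  */
    return 1;
}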
If you actually accomplish accelerating the whole pipeline by the end, that
would be fantastic. If anything takes longer than expected, and it will,
everything will still be fine. Remember to budget time throughout to provably
demonstrate consistency with the existing implementation, to document progress
as you go, to engage in discussions, to answer questions, etc.
As for performance, weaving can be a big or small factor; it depends heavily
on the geometry being rendered: number of primitives, number of regions, depth
of hierarchy, number of operations, etc. A traditional model like havoc.g or
m35.g in our sample db directory should benefit greatly, while a pure NURBS
model (i.e., no Boolean operations) won't benefit nearly as much. We're not
going to see huge performance gains until the whole pipeline is connected, and
that's okay as long as users get speedups for some geometry situations. As a
rough rule of thumb, the time typically breaks down along these lines:

  dispatched traversal    5-15%
  prep                    0-20% (except brep)
  shot intersection       30-70%
  boolean weaving         5-30%
  shading                 0-25%

Models with complex entities spend more time in prep and shot; models with
complex hierarchies spend more time in weaving and traversal.
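
If it helps to pick where to focus, a first-order profile of a particular
model doesn't require anything fancy; coarse wall-clock timing around the
stages is enough to see the split. A sketch (the do_* calls are placeholders,
not librt entry points):

/*
 * Coarse per-stage timing sketch.  The do_*() calls mentioned below
 * are placeholders for whatever drives each stage in your build; they
 * are not librt entry points.
 */
#include <stdio.h>
#include <time.h>

static double now_sec(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec * 1e-9;
}

#define TIMED(label, call) do { \
        double t0_ = now_sec(); \
        call; \
        printf("%-20s %8.3f s\n", (label), now_sec() - t0_); \
    } while (0)

/* e.g.:
 *   TIMED("prep",      do_prep(dbip));
 *   TIMED("traversal", do_dispatch(rays));
 *   TIMED("shot",      do_shots(rays));
 *   TIMED("weave",     do_weave(segments));
 *   TIMED("shade",     do_shade(partitions));
 */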
> http://viennacl.sourceforge.net
> http://ddemidov.github.io/vexcl/
>
> Nah. Don't want to add more build dependencies and I have experience with
> OpenCL. :-)
> OpenACC might be an alternative though.
ViennaCL is interesting since it supports OpenCL, CUDA, and OpenMP. It
would let us perform driver and hardware comparison studies without any added
code complexity. That said, OpenCL is definitely the priority interest. Just
a thought.
Cheers!
Sean
_______________________________________________
BRL-CAD Developer mailing list
brlcad-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/brlcad-devel