On Oct 15, 2014, at 6:06 PM, Ștefan-Gabriel Mirea 
<mireastefangabr...@gmail.com> wrote:

>> The downside is that it’s more work and we may have to follow our 
>> deprecation policy to change user-visible options if they were documented 
>> somewhere.
> 
> Nevertheless, it looks like a better solution, so I will read the
> CHANGES file and implement it in the next few days.

I don’t know for sure that the deprecation policy must be followed in this 
instance.  It depends, for each of the user tools you identified, whether their 
tie option was documented anywhere (a NEWS file mention or anything under 
doc/docbook for example).  If it wasn’t documented, those userland options can 
just be removed.

>> Those applications are really just using the rt_bot_prep() interface and — 
>> in that function — a heuristic is calculated to estimate which one is better 
>> to use, TIE or traditional.  An initial heuristic could be as simple as if 
>> #triangles >= MIN_TRIANGLES_FOR_TIE, run bottie_prep_double(), else 
>> rt_bot_prep_*().  A more involved one might entail creating a single 
>> triangle, running prep and running shot for TIE and traditional, and 
>> calculating an estimate for when one outperforms the other.  Lots of 
>> possibilities.
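
To make that concrete, here’s roughly the dispatch I’m imagining, written as a
standalone toy rather than a patch.  The struct, the two prep stubs, and the
threshold value are all stand-ins for the real rt_bot_internal,
bottie_prep_double(), and rt_bot_prep_*() bits:

/* Toy mock-up of a prep dispatch heuristic; compiles standalone. */
#include <stddef.h>
#include <stdio.h>

struct bot { size_t num_faces; };

static int prep_tie(struct bot *b)
{
    printf("TIE prep, %zu triangles\n", b->num_faces);
    return 0;
}

static int prep_traditional(struct bot *b)
{
    printf("traditional prep, %zu triangles\n", b->num_faces);
    return 0;
}

/* Placeholder threshold, to be tuned from benchmark data. */
#define MIN_TRIANGLES_FOR_TIE 32

static int bot_prep(struct bot *b)
{
    if (b->num_faces >= MIN_TRIANGLES_FOR_TIE)
        return prep_tie(b);
    return prep_traditional(b);
}

int main(void)
{
    struct bot small = { 8 };
    struct bot big = { 100000 };
    bot_prep(&small);   /* takes the traditional path */
    bot_prep(&big);     /* takes the TIE path */
    return 0;
}
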
> 
> I did a little research to find the best value for
> MIN_TRIANGLES_FOR_TIE[1]. I wrote a C++ program that generates a BoT
> with a specified number of triangles, whose vertices are organized
> like the nodes of a latitude-longitude grid on a sphere. The output
> is a Tcl script, which is then converted to a .g database using
> asc2g.

This is totally awesome!  LOVE seeing data like this...

> I ran rt on a huge number of files generated in
> this way, both with LIBRT_BOT_MINTIE unset and set to 1, and I logged
> the execution times. Then I plotted the results in Octave and got
> these functions[2], which are a bit curious, as I expected some
> polynomial-like graphs that I could have approximated to find their
> intersection. Anyway, it turns out that the best value is around 32
> triangles.

Very interesting and great data to see!  The conclusion is a little premature, 
though… there are several discrepancies in the data that raise questions and 
(more importantly) they don’t reflect observed performance behavior, but I 
think that’s mostly because of the limited range of testing.  Try powers of 2 
instead of incrementing by 2, and take it up to at least 20M triangles (so 25+ 
runs).
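
For the driver, something along these lines is what I have in mind.  Just a
sketch, untested: "genbot" is a stand-in name for your C++ generator (assumed
to write its Tcl output to stdout), "bot" is an assumed object name in the
generated database, and asc2g and rt are assumed to be on the PATH (-o just
keeps rt from opening a framebuffer):

#define _POSIX_C_SOURCE 200809L
/* Sweep triangle counts by powers of two, timing rt with TIE on/off.
 * Crude wall-clock timing around system(), which is fine at this scale. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

static double run(const char *cmd)
{
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    if (system(cmd) != 0)
        fprintf(stderr, "command failed: %s\n", cmd);
    clock_gettime(CLOCK_MONOTONIC, &t1);
    return (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) * 1e-9;
}

int main(void)
{
    char cmd[256];
    for (int p = 1; p <= 25; p++) {   /* 2 triangles up to ~33M */
        long ntri = 1L << p;
        snprintf(cmd, sizeof(cmd),
                 "genbot %ld > bot.asc && asc2g bot.asc bot.g", ntri);
        run(cmd);                     /* generation, not recorded */
        unsetenv("LIBRT_BOT_MINTIE");
        double without_tie = run("rt -o /dev/null bot.g bot");
        setenv("LIBRT_BOT_MINTIE", "1", 1);
        double with_tie = run("rt -o /dev/null bot.g bot");
        printf("%ld\t%f\t%f\n", ntri, without_tie, with_tie);
    }
    return 0;
}
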

You’re looking for an expensive prep overhead, which is going to be tricky with 
your sphere model because that’s essentially a best case for TIE’s spatial 
partitioning method.  TIE’s prep will, however, still get exponentially slower, 
whereas without-tie’s prep should only slow down linearly.  TIE may still 
always win that case because, like I said, it’s a best case.

Another useful test model is to just create random triangles that span the 
sphere’s interior (i.e., pick any three distinct points inside the sphere, make 
them a triangle, repeat).  That should be essentially a worst case for both 
with-tie and without-tie, and it should have a fairly different performance 
profile.
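
Such a generator is only a few lines.  A sketch, assuming a unit sphere
centered at the origin: it picks points uniformly in the ball by rejection
sampling and just prints raw coordinates (you’d emit Tcl the same way your
current generator does).  Three random doubles essentially never coincide, so
"distinct" takes care of itself:

#include <stdio.h>
#include <stdlib.h>

/* Uniform random point in the unit ball via rejection sampling. */
static void random_point_in_sphere(double pt[3])
{
    do {
        for (int i = 0; i < 3; i++)
            pt[i] = 2.0 * rand() / RAND_MAX - 1.0;
    } while (pt[0]*pt[0] + pt[1]*pt[1] + pt[2]*pt[2] > 1.0);
}

int main(int argc, char **argv)
{
    long ntri = (argc > 1) ? atol(argv[1]) : 1000;
    srand(12345);   /* fixed seed so runs are reproducible */
    for (long i = 0; i < ntri; i++) {
        double a[3], b[3], c[3];
        random_point_in_sphere(a);
        random_point_in_sphere(b);
        random_point_in_sphere(c);
        printf("%g %g %g  %g %g %g  %g %g %g\n",
               a[0], a[1], a[2], b[0], b[1], b[2], c[0], c[1], c[2]);
    }
    return 0;
}
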

Another issue is that you create the .g file and then run the without-tie test 
and then the with-tie test every time.  This causes without-tie to always pay a 
disk read overhead, while with-tie may be gaining a slight advantage from the 
filesystem cache.  Maybe run without-tie twice and only look at the second run 
time.  That caching may be what’s causing the spikes in without-tie’s 
performance (sometimes the .g was already loaded into cache by the time rt 
ran, sometimes it wasn’t).
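
In driver terms that’s a one-line change: a drop-in replacement for the run()
helper in the sketch above that times the command twice and reports only the
second measurement:

/* Time cmd twice, keeping only the second run, so the first run
 * absorbs the cold-cache disk read of the .g file. */
static double run_warm(const char *cmd)
{
    run(cmd);           /* throwaway: warms the filesystem cache */
    return run(cmd);    /* measured: .g file already cached */
}
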

Yet another factor is the number of rays being fired.  You just run rt, so 
that’s running prep once and shooting rays 262144 times (the default 512x512 
grid).  I don’t think that’s unfair, but you’ll see without-tie fare much 
better as the number of rays decreases (e.g., if you shot only a single ray).  
I don’t think you need to actually test this unless you’re curious, but being 
aware of this bias (in TIE’s favor) is important for interpreting the results.

Lastly, I can’t say for certain whether without-tie and with-tie prep overhead 
is linear with the number of objects.  That is to say, what happens as the 
number of objects increases?  If I have 1 sphere vs 1024 spheres, does prep get 
linearly slower?  It almost certainly won’t, even if you tried to compensate so 
the same number of pixels hit triangles.  I don’t think you need to actually 
test this aspect either, but it’s worth noting, as real-world models are rarely 
ever single meshes but collections of thousands or tens of thousands of them.

Cheers!
Sean

