> Fair enough. My next goal will be to make our initial minimal feature 
> work. In this case, this means having a way to let the user define points 
> that will be used for query_density(), instead of hardcoding them. I guess 
> for now I will use the .density file as I have no idea how the whole 
> "material objects" thing should work yet.

If you have a specific idea/plan in mind, I’d love to hear it.  However, I 
can’t really envision a good way to extend the tabular .density file format 
with the information you need without introducing a problem.

Two options come to mind: modify rtweight to take a secondary input file 
(temporary), or implement a material object stub that has what you need (more 
complicated, but in the right direction) [1].  Given the time remaining, I 
think the first option is the way to go, and it'll keep you working directly 
with the rtweight sources.

[1] http://brlcad.org/wiki/Material_and_Shader_Objects

> I guess it's up to you to decide what I should focus on. I could work on the 
> basics of this new approach so that future work on it can then build on it 
> and eventually finish it, but that would mean quite a change to what our 
> initial goal for the socis period was. I'm totally fine with either way.

If we knowingly plan to extend the current materialID system to later point to 
an object, then you'll just have to make sure whatever you read in from the 
file fits that mental model.  Directly modifying rtweight to take a second 
specification file will more easily allow for testing N-point cases, so we'll 
just have to be careful that it can easily translate into an object.
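
To make that concrete, here's a rough sketch of what reading such a secondary 
specification file could look like in C.  The one-record-per-line format 
("x y z density"), the struct layout, and the function name are all just 
assumptions for illustration, not a proposed interface:

#include <stdio.h>
#include <stdlib.h>

/* Hypothetical density point record; matches the density_point idea
 * discussed below.  Units are an assumption (mm position, g/cm^3). */
struct density_point {
    double pt[3];     /* location in model space */
    double density;   /* density at that location */
};

/* Read "x y z density" lines from a secondary spec file into a
 * dynamically grown array.  Caller frees the result; the point count
 * is returned via *npts.  Returns NULL on failure. */
static struct density_point *
read_density_points(const char *path, size_t *npts)
{
    FILE *fp = fopen(path, "r");
    struct density_point *pts = NULL, *tmp;
    struct density_point dp;
    size_t n = 0, cap = 0;

    if (!fp)
        return NULL;

    while (fscanf(fp, "%lf %lf %lf %lf",
                  &dp.pt[0], &dp.pt[1], &dp.pt[2], &dp.density) == 4) {
        if (n == cap) {
            cap = cap ? cap * 2 : 8;
            tmp = realloc(pts, cap * sizeof(*pts));
            if (!tmp) {
                free(pts);
                fclose(fp);
                return NULL;
            }
            pts = tmp;
        }
        pts[n++] = dp;
    }
    fclose(fp);
    *npts = n;
    return pts;
}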

> I think this [checking if there are overlapping points] can also be handled 
> by changing the way vectors are being defined (or perhaps what they mean). 
> Instead of having a concept of origins and detachment of point densities, let 
> density_vector actually refer to a density_point and let the other struct 
> fields encode the nature of the contribution (effectively combining your 
> struct combination data into one concept).
> 
> I'm afraid I'm not following you here. Could you detail this out a bit more? 
> The referenced density_point would be the origin of the vector? (if not, what 
> is it?).

No, at least not necessarily.  A density point would simply be a density 
specification at a specific point in space.  How that value is interpreted or 
used nearby is a separate concept (and handled in code by a separate struct).  
In fact, it becomes possible to define reasonable behavior without any vector 
information.
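
In code, that split might look something like this.  Only the density_point 
and density_vector names come from our discussion; the fields are assumptions:

/* How a density value is used nearby lives in its own struct, which
 * refers to density points rather than owning an "origin" of its own. */
struct density_vector {
    struct density_point *pt1;  /* the point the transition starts from */
    struct density_point *pt2;  /* optional second point, NULL if unused */
    int rate;                   /* nature of the contribution (see below) */
};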

Consider two side-by-side boxes (with a gap between them) that are in the same 
region.  Under the current system, there’s a materialID=3 which maps to a 
density of 7.75 g/cm^3.  If I specified a density point somewhere in one of the 
boxes as being 8.05 g/cm^3, I would intuitively expect that to be the density 
of that box (and only that box).  No falloff, no smooth transition to 7.75, no 
override of the universe to 8.05.

How to achieve that in code is going to be really tricky for concave shapes, 
but I think it’s the simplest and most natural behavior a user would expect.  
(keep the simple easy)
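
A sketch of that intended behavior, reusing density_point from the reader 
sketch above.  point_inside_solid() is purely hypothetical (not an existing 
librt call), and implementing it is exactly where concave shapes get hard:

/* Pick the density for a hit on a given solid: a density point lying
 * inside that solid overrides the materialID's .density value, and
 * nothing else changes. */
static double
density_for_solid(const struct density_point *pts, size_t npts,
                  const void *solid, double table_density)
{
    size_t i;

    for (i = 0; i < npts; i++) {
        if (point_inside_solid(solid, pts[i].pt))  /* hypothetical */
            return pts[i].density;   /* e.g. 8.05 instead of 7.75 */
    }
    return table_density;            /* fall back to the .density table */
}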

> The 'nature' of the contribution would be something like rate: linear, 
> quadratic, etc.?

It’s only once we add a second point or define a transition vector (which 
implicitly entails two points) that we have to care about this, but essentially 
yes.
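
Something as small as this would probably cover it, as an assumed encoding 
for the rate field sketched earlier:

/* Possible 'nature of the contribution' encoding.  Only meaningful
 * once a second point (or a vector) is involved. */
enum falloff_rate { FALLOFF_LINEAR, FALLOFF_QUADRATIC };

/* Blend factor for a parameter t in [0,1] along the transition. */
static double
falloff_weight(enum falloff_rate rate, double t)
{
    switch (rate) {
        case FALLOFF_QUADRATIC:
            return t * t;
        case FALLOFF_LINEAR:
        default:
            return t;
    }
}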

> And how does this change help us check if there are overlapping points?

So again consider defining points but not defining vectors.  If we have one box 
and define two density points in the box, that is the density at those points.  
Defining the same point as having two separate densities would be a definition 
error.  Separate them by any meaningful distance and we've effectively defined 
density "bins" that map well to a Voronoi tessellation of space, like this:

http://alexbeutel.com/webgl/voronoi.html
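
A minimal points-only lookup in C (again reusing density_point; none of this 
is an agreed interface): the density anywhere is simply that of the nearest 
point, which is exactly the Voronoi picture.

#include <math.h>

/* Return the density of the nearest density point to the query
 * location.  Two points at the same location would make this
 * ambiguous, hence the "definition error" above. */
static double
voronoi_density(const struct density_point *pts, size_t npts,
                const double query[3])
{
    double best_d2 = INFINITY;
    double best_density = 0.0;
    size_t i;

    for (i = 0; i < npts; i++) {
        double dx = pts[i].pt[0] - query[0];
        double dy = pts[i].pt[1] - query[1];
        double dz = pts[i].pt[2] - query[2];
        double d2 = dx*dx + dy*dy + dz*dz;

        if (d2 < best_d2) {
            best_d2 = d2;
            best_density = pts[i].density;
        }
    }
    return best_density;
}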

With that conceptualization, adding in vectors from one point to another merely 
changes the space definition to a continuous, smooth transition between pairs 
of points.  Make sense?  That becomes a fully generalized solution that should 
capture nearly any material definition while keeping the simple easy and the 
complex possible.
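
Sketched in C, the vector case only changes the lookup between a pair of 
points: project the query onto the pt1->pt2 segment, clamp, and blend with 
the falloff from the earlier sketch.  Helper and parameter names are 
assumptions:

/* Smooth transition between a pair of density points. */
static double
transition_density(const struct density_point *p1,
                   const struct density_point *p2,
                   enum falloff_rate rate, const double query[3])
{
    double dir[3], rel[3];
    double len2 = 0.0, t = 0.0, w;
    int i;

    for (i = 0; i < 3; i++) {
        dir[i] = p2->pt[i] - p1->pt[i];
        rel[i] = query[i] - p1->pt[i];
        len2 += dir[i] * dir[i];
        t += rel[i] * dir[i];
    }
    t = (len2 > 0.0) ? t / len2 : 0.0;  /* parameter along pt1->pt2 */
    if (t < 0.0) t = 0.0;
    if (t > 1.0) t = 1.0;

    w = falloff_weight(rate, t);
    return (1.0 - w) * p1->density + w * p2->density;
}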

> From the rest of the answers in your emails I think I got the idea that you 
> want to let the user specify both points and vectors as if they were two 
> means of describing density. Then a vector should be something like a 
> superset of the point, where we can specify some more information about it to
> describe the continuous transition between the two points the vector spans.

I think you got it.  More specifically, vectors augment/extend the limited info 
of a density point, describing how the density transitions outward from that 
point.

> Don't we need to reference two density_points then?

Either as two points (origin pt1 to destination pt2 with an implicit dir) or as 
one point and a direction vector, not unitized (so adding pt1+dir gives pt2).  
Either should work fine.
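
Written out, the two representations are just the following (Option A is 
essentially the density_vector sketched earlier; the struct names are 
made up):

/* Option A: two explicit endpoints; the direction is implicit. */
struct density_vector_2pt {
    struct density_point *pt1;   /* origin */
    struct density_point *pt2;   /* destination */
};

/* Option B: one endpoint plus a non-unitized direction, so that
 * pt1->pt + dir lands exactly on pt2. */
struct density_vector_dir {
    struct density_point *pt1;   /* origin */
    double dir[3];               /* NOT a unit vector */
};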

> Doesn't this method depend too much on the direction from which the rays 
> come? For example, ptA could be really far from 'IN' if you shoot from above 
> but really close to it if you shoot from the side. If ptA was the heaviest 
> point, then shooting from above would make a huge part of the material 
> really heavy, while shooting from the side would give it much less weight. 
> I'm not sure if I'm talking nonsense here, I'll check with an actual example 
> to see it working, but this was my first thought when I saw it.

Hopefully the Voronoi webgl demo helps clear that up.  Direction-dependent 
weighting only came into play when we were assuming linear interpolation or 
vectors to/from all points.  The Voronoi / halfspace approach is direction 
invariant.

> For now I'll make my code fully 'complete', as you mention. Then I will get 
> rid of my 'origins' and make projections onto the shotline (but this will 
> require some more discussions as I'm not 100% sure how this would all work 
> and fit together, especially points with no vectors [the question above]). At 
> some point in the future we will need to take the distance to the shotline 
> into consideration to make a 'fair' average of contributions, as you 
> suggested.

It’s also okay if you need to move back to rtexample to prove the concept 
first.  It’s when the feature is introduced into a production facility that it 
should be done “complete”, one bit at a time.

For what it’s worth, projecting onto the shotline is probably only going to 
work on convex geometry.  For concave shapes, it’s a bit more complex to know 
where a given density point applies.  It’s certainly resolvable with the 
neighboring ray information (because they represent a quasi-voxelization of 
space).
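
The projection itself is only a dot product; the hard part is deciding 
whether the projected point actually belongs to the partition in question.  
A sketch using the vmath macros (the function name is made up, and ray_dir 
is assumed to be unit length):

#include "vmath.h"

/* Signed distance along the shotline at which a density point
 * projects.  Comparing this against a partition's in/out hit
 * distances only tells the whole story for convex geometry. */
static double
project_onto_shotline(const point_t ray_pt, const vect_t ray_dir,
                      const point_t dens_pt)
{
    vect_t rel;

    VSUB2(rel, dens_pt, ray_pt);   /* ray origin -> density point */
    return VDOT(rel, ray_dir);     /* projection parameter along the ray */
}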

Cheers!
Sean
