Note:
Abbreviations:
Density Point: DP
Projection: PR
Boundary: B
So I would do the following:
Once we have the projections, we can draw two triangles:
- DP1, PR1, B
- DP2, PR2, B
We want to obtain, for example, the distance from projection1 to the
boundary, so that we can correctly know what fraction of the segment
between projection1 and projection2 carries the density of region1.
We know that both distances DP1B and DP2B have to be equal, by
definition of the Voronoi tessellation.
Using the Pythagorean theorem, we can say that:
DP1B^2 = DP1PR1^2 + PR1B^2
DP2B^2 = DP2PR2^2 + PR2B^2
Knowing DP1B = DP2B we can say that:
DP1PR1^2 + PR1B^2 = DP2PR2^2 + PR2B^2
We know two of these values (DP1PR1 and DP2PR2) and don't know the other
two. But the two unknowns are related: PR1B + PR2B = PR2PR1, which we know.
So we can say that PR2B = PR2PR1 - PR1B.
Let's substitute:
DP1PR1^2 + PR1B^2 = DP2PR2^2 + (PR2PR1 - PR1B)^2
Unfolding the parentheses:
DP1PR1^2 + PR1B^2 = DP2PR2^2 + PR2PR1^2 + PR1B^2 - 2 * PR2PR1 * PR1B
Now we solve for the variable we want; the PR1B^2 terms cancel, leaving:
2 * PR2PR1 * PR1B = - DP1PR1^2 + DP2PR2^2 + PR2PR1^2
PR1B = (- DP1PR1^2 + DP2PR2^2 + PR2PR1^2) / (2 * PR2PR1)
If I didn't do anything wrong (I'll double-check on paper later to be
sure...), this should be it. We can calculate where the boundary falls
between all the projection points we obtained. I had assumed it was the
midpoint based on our previous discussion, but now we have seen that it can
vary greatly.
I think this way is easier (it's quite a simple O(1) operation) than
computing the actual mesh and then evaluating against it. The mesh would
probably be more elegant and maybe more efficient, but it seems much more
complex to me.
Let me know what you think, I'll start working on it.
Mario.
On 29 August 2017 at 15:44, Mario Meissner <mr.rash....@gmail.com> wrote:
> Hi Sean!
> Back from my trip now.
>
> The reason why I worked on projections is because you told me to do so:
>
> Not the distance between points, but distance to their transition
>> vectors. From the prior e-mail, the way this should fully generalize can
>> be handled by projecting points onto the ray, and projecting any transition
>> vectors onto the ray. Then for a given ray, it can fully handle just about
>> any transition type.
>>
>> To see how this will work, it will likely be necessary to redo rtexample
>> and get 2-points working without any vectors. You project each point onto
>> the ray.
>>
>> ptA ptB
>> | |
>> IN v v OUT
>> ray o--ptA'----ptB'---o
>>
>> Density from IN to ptA’ is density(ptA).
>> Density from ptB’ to OUT is density(ptB).
>> Density from ptA’ to ptB’ is density(ptA) until the midpoint between ptA’
>> and ptB’. Thus the section’s average density would be density(ptA)/2 +
>> density(ptB)/2.
>>
>
> But I now realize as well that taking the mid-point between projections
> does not give us the actual boundary between the polygons.
> We could work with the mesh like you mention, or we could do a slight
> modification to the current code to compute the actual boundaries while
> walking through the points (probably our already available projections,
> we'll see). It's probably a simple trigonometry problem, and I'll try to
> come up with the operation we need to do to obtain the boundaries in a
> segment.
> In the attached picture we can see that we need the two blue lines to be
> equal in length. Arranging some equations around that property may easily
> give us the distance from each projected point at which the boundary lies.
> Unless I'm overlooking something important, I think this way would be
> sufficient and really easy to implement. The example I'll send this
> afternoon may prove me wrong, in which case I'll consider the mesh.
> I think I can implement this and leave the code clean before leaving.
> Thank you for your feedback!
> Mario.
>
> On 20 August 2017 at 00:03, Christopher Sean Morrison <brl...@mac.com>
> wrote:
>
>>
>> > On Aug 17, 2017, at 6:28 AM, Mario Meissner <mr.rash....@gmail.com>
>> wrote:
>> >
>> > Now that n-points are working for convex regions, I think
>> the next steps would be:
>> > • Integrate vectors into existing point system.
>> > • Let the user input vectors through the existing input
>> interface.
>> > • Consider one of the vectors and make it work.
>> > • Consider all vectors. N-point and N-vectors working.
>> > • Check that all edge cases work correctly (no points,
>> no vectors, many of everything, etc.) and clean up code.
>> > • Modify implementation so that it properly works in all
>> situations (i.e. concave shapes).
>>
>> Again, I don’t think you should introduce a notion of vectors. For now,
>> points will be adequate and present plenty of challenges.
>>
>> Consider shooting a ray along the X-axis (pnt -1000 0 0; dir 1,0,0) with
>> density points at 0,10,0 and 10,-5,0. Say you enter the geometry at 1,0,0
>> and exit at 9,0,0:
>>
>> density
>> point A
>> o (0,10,0) ___(9,0,0) ray exits
>> | /
>> | v
>> o-> o—x========x-o
>> ray (1,0,0) |
>> ray o (10,-5,0)
>> enters B density point
>>
>>
>>
>> With just those two points (A & B), the voronoi density field splits
>> halfway between A and B, which has a midpoint above the x-axis and runs
>> diagonally across the ray path.
>>
>> If we simply project, it’ll be the wrong contribution. That’s where I
>> was suggesting in the prior e-mail to actually construct the voronoi mesh
>> and you’ll get that actual edge between A and B, and calculating where the
>> ray intersects that edge becomes trivial.
>>
>> > • Is it a good idea to compute the distance to the
>> previous point and store it into the structure in a loop before starting
>> with density evaluation? I think it would clean up the last part of the
>> code considerably, but also uses more memory. I like the idea so I probably
>> will do so.
>>
>> We are in no way constrained by memory.
>>
>> > • Are we really good to go using an array? I decided to do so
>> because I needed the sort function, but there might be alternatives I'm not
>> aware of.
>>
>> Shouldn’t need to sort with a voronoi mesh, but nothing wrong with using
>> plain arrays + size variables.
>>
>> > • As mentioned, everything in my code is relative to the inhit
>> point. Is this a good approach? For example, projection struct has a vector
>> that goes from inhit to the projection point, but I don't store the origin
>> of the vector anywhere, so someone using the vector who does not know this
>> fact may not know what its origin is. My thought is that these variables
>> will not leave my environment so a simple comment explaining that
>> everything is relative should be enough.
>>
>> If you eliminate vectors and projections, I think this one becomes moot.
>> That said, definitely comment the code.
>>
>> > I will assume that what I propose doing is good unless I hear feedback
>> telling me otherwise.
>>
>> Your e-mail that followed this was golden, as you essentially
>> demonstrated that straight-up projections weren’t going to work for even a
>> simple 3 point case without changes. That implies you went in the right
>> direction, learned something, and now need to adjust accordingly. ;)
>>
>> Cheers!
>> Sean
>>
>>
>>
>> ------------------------------------------------------------------------------
>> Check out the vibrant tech community on one of the world's most
>> engaging tech sites, Slashdot.org! http://sdm.link/slashdot
>> _______________________________________________
>> BRL-CAD Developer mailing list
>> brlcad-devel@lists.sourceforge.net
>> https://lists.sourceforge.net/lists/listinfo/brlcad-devel
>>
>
>