While I have not used Blender, I have done a bit of 3D point cloud
generation with Structure from Motion (SfM) techniques, particularly using
VisualSfM.  I'd guess that your needs would have two stages: (1) capture a
3D point cloud, then (2) use the point cloud to develop a 3D solid model.

SfM has been getting a lot of research interest lately in terrain modeling
and geomorphology, as a cheaper and more flexible alternative to aerial
lidar and terrestrial laser scanning.  Under the right conditions, its
accuracy can be almost as good as laser scanning.  If you go with SfM,
though, spend some time trying different photographic angles and distances,
and take far more photos than you think you will need.

One area where both laser scanning and SfM have trouble is fine branching
detail.  In SfM, points are generated by automatically matching features
between multiple photographs, and the algorithms have a hard time
identifying things like the same branch of a shrub across photos.
In laser scanning, small objects like branches often are not hit by laser
pulses at all, or a pulse may clip the edge, producing returns from both
the branch and whatever lies behind it, which leaves a smear of erroneous
points in between.

Most of my experience is in modeling river banks, and both techniques work
well at oblique angles from about 20 meters.  In both cases, tree
trunks show up well, along with some larger branches.  Smaller branches and
twigs are mostly missed.  On the other hand, I've also used SfM a little
with lichens, photographing from only a couple of centimeters away, and
have gotten good detail.  As one would expect, scale in the resulting data depends
on scale in the data collection method.  With SfM, don't expect points from
objects less than 5 pixels across in your photos.
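That 5-pixel rule of thumb is easy to translate into a minimum object size
once you know your camera geometry.  A rough sketch (the camera numbers
below are illustrative assumptions, not from any particular setup):

```python
def ground_sample_distance(pixel_pitch_mm, focal_length_mm, distance_m):
    """Size on the subject covered by one image pixel, in meters."""
    return pixel_pitch_mm / focal_length_mm * distance_m

def min_resolvable_object_m(gsd_m, min_pixels=5):
    """Smallest object likely to yield SfM points, per the 5-pixel rule."""
    return min_pixels * gsd_m

# Example: a camera with a 0.004 mm pixel pitch and a 50 mm lens,
# photographing a river bank from 20 m away.
gsd = ground_sample_distance(0.004, 50, 20)  # ~1.6 mm per pixel
print(min_resolvable_object_m(gsd))          # ~8 mm: twigs thinner than this are missed
```

Move the camera closer (as with the lichens) and the same arithmetic gives
millimeter-scale detail, which is why scale in the data follows scale in
the collection.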

-Eric Peterson, Ph.D.
Landscape Ecologist and Collaborative Restoration Data Steward


On Wed, Jul 30, 2014 at 9:00 PM, ECOLOG-L automatic digest system <
[email protected]> wrote:

>
> Date:    Wed, 30 Jul 2014 17:23:30 -0500
> From:    Eric Ogdahl <[email protected]>
> Subject: Modeling plants with 3D open source software?
>
> Greetings,
>
> I'm wondering if anyone has used 3D open source software, such as Blender,
> to create 3D models for plants, based on 2D images and/or field
> measurements.
>
> I'm working on a project assessing the use of different shrub-willow
> species for living snow fences. One of the goals of my project is to
> predict how well these shrubs capture snow.
>
> Traditionally, the height and porosity (percentage of open space in a
> linear shrub row when faced at a right angle) of a snow fence are used to
> predict its snow storage capacity. A few windbreak scientists have
> criticized this porosity method, however, as it is only a 2D view of the
> snow fence and does not accurately describe how wind, and thus snow, move
> through a 3D plant. I'm only aware of a few papers describing 3D models for
> windbreaks, for which the measurements are often destructive to the
> planting and species-specific.
>
> I've recently become aware of 3D imaging with open source software and am
> curious if anyone has experience using this for estimating plant 3D
> structure.
>
> I appreciate any comments/suggestions.
>
> Thank you,
>
> Eric Ogdahl
> Research Assistant
> Natural Resources Science & Management
> University of Minnesota-Extension
> [email protected] | 651-319-1022
>
>
