Those are all good points. However, every scanned mesh I've seen so far had
such irregular topology, holes (concave areas like behind the ears, or around
the neck behind collars and other clothing), and other errors (hair, beard,
eyes) that excessive editing was required to make it look good and deform
well; retopologizing was almost a must.
I think that in order to use scanned data directly for deforming objects, the
scanning process first needs to produce clean enough results before you can
start worrying about how to envelope, rig and animate all those points
efficiently, and I don't see the technology delivering such data yet.
...that you usually can't avoid fixing those anyway, depending on what you
want to use it for, of course.
> What would be a typical scenario for this? The point count is adjustable
> (at the expense of detail), but the topology will always be a mess unless
> properly retopo-ed, wouldn't it?
That is what I meant to suggest when I wrote:
"It is at hand that the more complex, raw 3D point cloud data will need new and
abstracted ways
of handling and manipulation, filtering options and adaptive control layers for
approximated data"
Basically, a rigged lower-density mesh plus a displacement trying to capture
small detail, while at the same time losing control over how that detail will
actually react: that's sort of the established standard for working with high
detail. Reduce detail until you can handle it and hope nobody will notice.
Depending on personality, wave away concerns.
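That standard pipeline can be sketched in a few lines (a hypothetical
illustration, not any package's API): a low-density cage is skinned and
posed, and baked per-vertex heights push points out along the deformed
normals, so the detail blindly follows whatever the cage does:

```python
def displace(vertices, normals, heights):
    """Push each low-res vertex along its (deformed) normal by a baked
    scalar height; the fine detail inherits the cage's motion with no
    control of its own -- the loss of control described above."""
    return [
        (vx + nx * h, vy + ny * h, vz + nz * h)
        for (vx, vy, vz), (nx, ny, nz), h in zip(vertices, normals, heights)
    ]

# A vertex at the origin with a unit normal along +Z and a baked
# height of 0.5 lands at (0.0, 0.0, 0.5).
moved = displace([(0.0, 0.0, 0.0)], [(0.0, 0.0, 1.0)], [0.5])
```

However the cage bends or shears, the detail is pushed straight out along
the resulting normal; there is no way to make a wrinkle react differently
from the surface it sits on.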
The reason why I suggest going and skinning/rigging/weighting a raw 3D scan
mesh directly to bones: that is the data you want to animate; everything else
is already yet another degraded derivative.
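At its core, skinning that raw scan directly is just linear blend skinning
evaluated over however many points the scanner produced. A minimal sketch
(translation-only bones for brevity; the function name is mine, not any
package's API):

```python
def skin_point(point, bone_offsets, weights):
    """Linear blend skinning reduced to translation-only bones: the
    point follows a weighted blend of the bone motions. With a scanned
    mesh this runs over millions of points instead of a few thousand --
    same math, different scale."""
    x, y, z = point
    for (tx, ty, tz), w in zip(bone_offsets, weights):
        x += w * tx
        y += w * ty
        z += w * tz
    return (x, y, z)

# A point weighted 50/50 between a bone moving +2 in Y and a bone
# moving +4 in Z ends up halfway along both motions.
posed = skin_point((1.0, 0.0, 0.0), [(0.0, 2.0, 0.0), (0.0, 0.0, 4.0)], [0.5, 0.5])
```

The math itself does not care about the point count; it is the weighting
workflow around it that breaks down at scan density.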
Going and trying it will show the limitations of current toolsets. Then do
some cloth sim on top and fix interpenetration issues. Or, first of all, wait
for the collision simulation to finish.
It's like using *.jpgs with lossy compression as the input for color grading,
then re-compressing again as a lossy *.jpg and wondering why there are block
artifacts.
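The generation-loss analogy can be made concrete with a toy quantizer (not
actual JPEG, just the principle that each lossy pass snaps values to a grid,
and misaligned grids compound the error):

```python
def quantize(values, step):
    """Snap each value to the nearest multiple of step -- a stand-in
    for one lossy compression pass."""
    return [round(v / step) * step for v in values]

original = [37, 81, 143]           # "the raw data"
first = quantize(original, 16)     # first lossy save  -> [32, 80, 144]
second = quantize(first, 10)       # edit + re-save    -> [30, 80, 140]
```

After the second pass the values have drifted further from the originals than
after the first, without any new information being added; a decimated,
re-wrapped derivative of a scan degrades the same way.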
Cheers,
tim
On 09.01.2014 14:34, Stefan Kubicek wrote:
go and skin/rig/weight a raw 3D scan mesh directly to bones.
What would be a typical scenario for this? The point count is adjustable (at
the expense of detail), but the topology will always be a mess unless
properly retopo-ed, wouldn't it?
I agree that the rigging paradigm needs some rethinking. I grew up with
black-box systems like Character Studio and CAT. Creating a rig based on
those takes only minutes to hours, not days, but they lack customizability.
Yet the results were good enough that I was constantly asking myself why
anyone could possibly want to use anything else for 90% of the work you see
being produced anywhere. It's such a huge cost factor, both in terms of the
time it takes to create the rig and the time it takes to troubleshoot and
maintain it if it breaks (which the black-box systems next to never do) or
needs extensions. Autoriggers (Gear etc.) reduce the creation time at the
expense of flexibility, yet the maintenance aspect remains to a certain
degree.
What I also miss in them is the ability to have a mesh enveloped to joints
and just "put the rig on top", allowing you to test deformations directly by
posing the envelope rig without having to create a control rig first, which
is a given with the black-box systems because the control rig _is_ the
envelope rig. The only thing I know of that works in a similar way is
MotionBuilder, in that you import your enveloped mesh and joints and apply
rigging solvers to them, again at the expense of flexibility: it only
supports humanoid and four-legged creatures.
Fabric/Osirirs looks like it could deliver such a paradigm change: a modular
rigging system where the building blocks are encapsulated and the asset the
user interacts with in the scene is lightweight, fast and easy to manipulate,
and hard to break. I'm really looking forward to that, even though
flexibility beyond a certain point will probably have to be paid for with
programming knowledge and time again.
Look at what is coming in terms of animation and skeleton recognition
in the Xbox Kinect SDK and the Xbox One.
Cheers,
tim
On 09.01.2014 13:09, Guillaume Laforge wrote:
I didn't read every post, so maybe my understanding is wrong, but based on
the last replies from Luc-Eric and Tim Leydecker, it sounds like point cloud
scanning is a rigging feature.
It is not, so let's return to the subject please :).
That illustrates well that it is much easier to put money on new tech (like
point cloud scanning, web-based applications, etc.) than to think about how
to improve/re-design an existing workflow like character rigging! We saw some
new systems in modeling (ZBrush etc.) and rendering (Katana) some years ago,
but still nothing in the rigging area. It makes sense, as rigging is really a
different culture. You need to be a good character rigger to understand and
build a good rigging system. But being a good character rigger means spending
a lot of time on existing tools like Maya or XSI. In the end you think only
through the proposed tools of your app. If you are a developer interested in
designing a rigging system, it is the opposite problem: you can have a fresh
new vision, but you can miss important concepts of character rigging in your
tool.
Interesting subject, if you forget about Maya and XSI :)
Cheers,
Guillaume Laforge
On Thu, Jan 9, 2014 at 4:18 AM, Tim Leydecker <[email protected]
<mailto:[email protected]>> wrote:
Autodesk is doing a lot of development in the area of 3D scan data handling.
If you look into what is going on in the area of topology data acquisition
for architecture, engineering and the military, there is a shift towards 3D
point cloud data, which imho is comparable to what 2D tracking as a concept
brought us in the 90s (facial recognition and finally image-based modeling
and camera positional data).
It is apparent that the more complex, raw 3D point cloud data will need new
and abstracted ways of handling and manipulation, filtering options, and
adaptive control layers for approximated data.
The implication such data brings for 3D animation is that the concept of
weighting a fixed number of vertices to a bone may have to be extended beyond
a fixed number of polygons.
Unfortunately, fall-off based volume weighting at its current level of
finesse may give worse results than before, especially if your shape options
for the influence volume are limited to capsules, boxes or spheres.
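For illustration, here is roughly what fall-off based capsule weighting
boils down to (a hypothetical sketch, not any particular package's
implementation): the weight depends only on the point's distance to the
bone's axis, which is exactly why capsules, boxes and spheres are too crude
for concave organic regions:

```python
import math

def capsule_weight(p, a, b, radius, falloff):
    """Weight of point p for a bone running from a to b: 1.0 inside the
    capsule core, fading linearly to 0.0 over the falloff band."""
    ax, ay, az = a
    bx, by, bz = b
    px, py, pz = p
    abx, aby, abz = bx - ax, by - ay, bz - az
    apx, apy, apz = px - ax, py - ay, pz - az
    # Project p onto the bone segment, clamped to its endpoints.
    denom = abx * abx + aby * aby + abz * abz
    t = 0.0 if denom == 0 else max(0.0, min(1.0, (apx * abx + apy * aby + apz * abz) / denom))
    closest = (ax + t * abx, ay + t * aby, az + t * abz)
    d = math.dist(p, closest)
    if d <= radius:
        return 1.0
    if d >= radius + falloff:
        return 0.0
    return 1.0 - (d - radius) / falloff
```

A point behind an ear and a point on the ear can sit at the same distance
from the jaw bone's axis; a distance-only rule like this gives them the same
weight, no matter how finely the falloff curve is tuned.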
I am a bit worried that the process of rigging & weighting an organic
character will become even more frustrating and stiff, or at least will need
even more steps, like creating an extra control surface with a fixed number
of points and wrapping it around the high-density data.
Such a wrap deformer takes away control. It's always the rims and little
caveats that need extra care.
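A much-simplified sketch of such a wrap deformer (nearest-point binding only;
real implementations bind to surface triangles with barycentric coordinates,
and the names here are mine): each dense point stores an offset from its
closest control point and blindly re-applies it after the control surface
moves, which is where the loss of control over rims and crevices comes from:

```python
def bind(dense, cage):
    """Bind each high-density point to its nearest cage point and
    remember the rest-pose offset."""
    binds = []
    for p in dense:
        i = min(range(len(cage)),
                key=lambda k: sum((p[j] - cage[k][j]) ** 2 for j in range(3)))
        binds.append((i, tuple(p[j] - cage[i][j] for j in range(3))))
    return binds

def deform(binds, cage):
    """Re-apply the stored offsets against the deformed cage; the dense
    mesh can only ever follow, never be adjusted locally."""
    return [tuple(cage[i][j] + off[j] for j in range(3)) for i, off in binds]
```

Two neighbouring scan points on opposite sides of a lip or eyelid rim can
bind to the same cage point and then move in lockstep, which is precisely
the kind of caveat that needs manual extra care.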
Cheers,
tim
On 09.01.2014 02:13, Guillaume Laforge wrote:
On Wed, Jan 8, 2014 at 7:55 PM, Luc-Eric Rousseau <[email protected]
<mailto:[email protected]> <mailto:[email protected]
<mailto:[email protected]>>> wrote:
In the near future (not talking about Autodesk here) I think workflow
standards will be Gator-like tools to deal with topo changes (point cloud
tools as necessary, also Ptex-based workflows) and Katana-like proceduralism
for render-pass-like workflows.
I'm still wondering if a company (not talking about Autodesk here) will do
anything new like that for our little world. Money for such large dev
projects is just not in the animation/vfx world anymore. I'm not sarcastic,
just realistic. So let's embrace old tech like Maya or XSI. They won't evolve
too much, but they won't disappear for many (many) years.
Btw, Katana is not the future, it is now :).
--
---------------------------------------------
Stefan Kubicek [email protected]
---------------------------------------------
keyvis digital imagery
Alfred Feierfeilstraße 3
A-2380 Perchtoldsdorf bei Wien
Phone: +43 (0) 699 12614231
www.keyvis.at
-- This email and its attachments are --
-- confidential and for the recipient only --