Hi Jennica,

The Hough transform could certainly be part of such a system. But in 2D image processing, the way I've seen the Hough transform used is to run some edge detection algorithm and feed its output into a Hough transform, in order to fit lines to the output of the edge detector.
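To make that 2D pipeline concrete, here is a minimal sketch of the line-fitting stage in pure NumPy: each edge pixel votes in a (rho, theta) accumulator for every line rho = x*cos(theta) + y*sin(theta) passing through it. The tiny hand-made edge map stands in for a real edge detector's output, and all names and bin counts here are illustrative, not from any particular package:

```python
import numpy as np

def hough_lines(edges, n_theta=180, n_rho=100):
    """Vote for lines rho = x*cos(theta) + y*sin(theta) over a binary edge map."""
    h, w = edges.shape
    diag = np.hypot(h, w)                                  # max possible |rho|
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    rhos = np.linspace(-diag, diag, n_rho)
    acc = np.zeros((n_rho, n_theta), dtype=int)
    ys, xs = np.nonzero(edges)                             # edge pixels vote
    for x, y in zip(xs, ys):
        r = x * np.cos(thetas) + y * np.sin(thetas)        # rho for every theta
        idx = np.clip(np.searchsorted(rhos, r), 0, n_rho - 1)
        acc[idx, np.arange(n_theta)] += 1
    return acc, rhos, thetas

# toy "edge detector output": a horizontal line at y = 10
edges = np.zeros((32, 32), dtype=bool)
edges[10, :] = True
acc, rhos, thetas = hough_lines(edges)
r_i, t_i = np.unravel_index(acc.argmax(), acc.shape)
# the accumulator peak sits near theta = pi/2, rho = 10: the horizontal line
```

The accumulator peak recovers the line's parameters; a real system would take several local maxima and threshold on vote count.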
I'm not sure exactly how one would adapt the Hough transform to construct 3D wireframe models based on perceptual data of the same scene from slightly different perspectives (or based, say, on camera input together with Lidar input for depth perception). Some kind of preprocessing would be required, analogous to edge detection in the 2D case. This would be "surface detection", I suppose, which can be carried out by a variety of algorithms, e.g. cvlab.epfl.ch/publications/publications/2005/PiletLF05.pdf for the case of non-rigid surfaces, plus a lot of more standard approaches for the case of rigid surfaces...
So, yeah, I guess "piping a surface detection algorithm into a multi-D Hough transform" would be a reasonable approach to mapping visual scenes into wireframe models.
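As one hedged sketch of what a multi-D Hough stage could look like (assuming nothing beyond NumPy, with all names and binning choices purely illustrative): vote for planes n(theta, phi) . p = d over a 3D point cloud, sampling candidate normals on a half-sphere. The toy grid of points stands in for the output of a surface-detection or Lidar preprocessing step:

```python
import numpy as np

def hough_planes(points, n_theta=18, n_phi=36, n_d=50):
    """Vote for planes n(theta, phi) . p = d over a 3D point cloud
    (a toy stand-in for surface detection on, say, Lidar points)."""
    thetas = np.linspace(0.0, np.pi / 2, n_theta)              # polar angle of normal
    phis = np.linspace(0.0, 2 * np.pi, n_phi, endpoint=False)  # azimuth of normal
    T, P = np.meshgrid(thetas, phis, indexing="ij")
    normals = np.stack([np.sin(T) * np.cos(P),
                        np.sin(T) * np.sin(P),
                        np.cos(T)], axis=-1)                   # (n_theta, n_phi, 3)
    max_d = np.abs(points).sum(axis=1).max() + 1e-9            # bound on |distance|
    d_bins = np.linspace(-max_d, max_d, n_d)
    acc = np.zeros((n_theta, n_phi, n_d), dtype=int)
    ii, jj = np.divmod(np.arange(n_theta * n_phi), n_phi)
    for p in points:
        d = (normals @ p).ravel()                              # signed distance per normal
        kk = np.clip(np.searchsorted(d_bins, d), 0, n_d - 1)
        np.add.at(acc, (ii, jj, kk), 1)                        # one vote per normal cell
    return acc, thetas, phis, d_bins

# toy point cloud: a 5x5 grid of points lying on the plane z = 2
xs, ys = np.meshgrid(np.arange(5.0), np.arange(5.0))
points = np.stack([xs.ravel(), ys.ravel(), np.full(25, 2.0)], axis=1)
acc, thetas, phis, d_bins = hough_planes(points)
i, j, k = np.unravel_index(acc.argmax(), acc.shape)
# the peak sits at theta = 0 (normal along z) with d near 2
```

Detected planes could then be intersected to yield the edges and vertices of a wireframe, though the real work is in making the voting robust to noise and in choosing bin resolutions.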
What I'm wondering is if someone has already done this and packaged it up nicely. I don't want to have to build it, not because I think it's fundamentally difficult (it doesn't seem trivial, but it seems a lot easier than the problems in cognition that we're trying to tackle), but because it seems like a lot of work!

--Ben
-----Original Message-----
From: Jennica Humphrey [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, September 27, 2005 8:22 PM
To: [email protected]
Subject: Re: [agi] Image to wireframe model conversion

Hello,
There are many multidimensional adaptations of the Hough transform. Might that be a starting point?
Regards,
JJ
On 9/27/05, Ben Goertzel <[EMAIL PROTECTED]> wrote:
Hi all,
This list has been way too quiet lately! I plan to post something on
Piagetian learning and AGI and try to get an interesting discussion going,
but I'll do that next week because I'm away from home right now.
In the meantime, I have a question that someone on this list may be able to
answer.
In our work connecting Novamente with the AGI-SIM simulation world, we have
decided to have Novamente visually perceive *polygons* rather than pixels or
objects. (The reason is that giving it objects as percepts is too much
cheating and denies the system the ability to learn on its own what an
"object" is; whereas, giving it pixels as percepts would require us to
basically build a visual cortex for Novamente, which would be a lot of work
and is something I'd like to avoid for the time being.)
This is fine for AGI-SIM in which all objects are explicitly made of
polygons, but doesn't do us much good in terms of hooking Novamente up to
actual real-world visual stimuli.
Unless ... and here is the point of this email ... someone has a nice
approach to computer vision that produces polygonal models from 3D scenes!
I am moderately sure that such things exist, and I could explore the
research area via the usual Googling, but I wonder if someone on this list
has more knowledge of this domain than I do, and could simply point me to
the good stuff.
Thanks,
Ben Goertzel
-------
To unsubscribe, change your address, or temporarily deactivate your subscription,
please go to http://v2.listbox.com/member/[EMAIL PROTECTED]
