On Thu, Nov 14, 2013 at 10:18 PM, Dennis Stark <[email protected]> wrote:
>
>
> That's what I figured. Once I have boundaries for the object, I can make
> sure that it's scaled properly. Looks like just rotating it 360 degrees
> then flip and rotate again would ensure invariance to the input.
>
I can't envision this for a 2D image; rotating by 360° brings it back to the
same image, doesn't it? But you could, e.g., present a photo from multiple
angles and label them all 'A'; CLA will then train some "perspective" for the
object.
>
>
>> Once I get my image through OpenCV I can fix the size of the object, but
>> not the rotation, so in my case I need HTM to remember a rotation-invariant
>> representation of the object. HTM should also know that this is the same
>> object.
>>
> You can train HTM/CLA to "understand" rotation (scale, etc.) of the
> objects. You'd do it like this:
> Rotation:
> { <)|||<<, FISH}
> {>>|||(>, FISH}
> {O_o, ZOMBIE}
>
> Translation:
> {O....., ball}, {.O...., ball}, {..O..., ball}, ....., {....O, ball} that
> way, you've just taught it that the ball O is invariant to left-to-right
> translation.
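That translation training set can be generated mechanically. A toy sketch in
plain Python (the function name and the dot-pattern format are mine, for
illustration only; this is not the NuPIC API):

```python
# Toy sketch: generate left-to-right translated versions of a pattern,
# all carrying the same label, so CLA/HTM sees the object at every position.
# (Illustrative helper, not part of NuPIC.)

def translated_pairs(symbol, label, width):
    """Yield (pattern, label) pairs with the symbol at each position."""
    pairs = []
    for pos in range(width):
        pattern = "." * pos + symbol + "." * (width - pos - 1)
        pairs.append((pattern, label))
    return pairs

pairs = translated_pairs("O", "ball", 6)
# pairs[0] is ("O.....", "ball"), pairs[-1] is (".....O", "ball")
```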
>
>
> The biggest problem I see that for both scale and rotation invariance (not
> even talking about 3D) I will need to feed all possible combinations of
> size and angles, which can be a HUGE number for a single image.
>
It's not that many if you rotate by, e.g., 5°. The real problem is the 'for
every single image' part. That's why I said you could train a sense for
geometry/perspective. You trained it as a baby on everyday objects, and now
you can tell in what way a giraffe is moving even if it's the first time you
see one.
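To put a number on it, a quick back-of-the-envelope sketch (plain Python,
illustrative arithmetic only, not NuPIC code; the ×2 assumes you also present
the flipped copy, as Dennis suggested):

```python
# How many training views per image if we present every rotation
# in 5-degree steps, plus a mirrored (flipped) copy of each?
STEP_DEG = 5
angles = list(range(0, 360, STEP_DEG))   # 0, 5, 10, ..., 355
views_per_image = len(angles) * 2        # x2 for the flipped copy
print(len(angles), views_per_image)      # 72 rotations, 144 views total
```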
>
>
> _______________________________________________
> nupic mailing list
> [email protected]
> http://lists.numenta.org/mailman/listinfo/nupic_lists.numenta.org
>
>
--
Marek Otahal :o)