Hey guys!
I'm just getting started with my BBB, and the possibility of integrating 
camera vision into a robotics project is what I'm most looking forward to 
getting out of the BBB.

Apologies first: I'm not very experienced with Linux or C++ at all -- 
and I guess that's why I've found it so hard to get started with OpenCV on 
the BBB, even though there are a lot of examples online already.
Ok, let me try my best to explain the issue.

According to beagleboard.org, we can use BoneScript to control the PWM pins, 
which lets us drive servos.
On the other hand, we can use OpenCV in C++ to detect face positions.
The question is: what should I do if I want to drive the servos based on 
the face position detected with OpenCV?

Does that mean I have to write the entire application in C++? 
Or is there a way to just pass data (only the face position) from the C++ 
OpenCV program into a servo-driving application written in BoneScript?

I guess my options are:

1. C++ face detection with OpenCV + a BoneScript application driving the 
servo.

2. Python: OpenCV in Python + the Python BBIO library for PWM + driving the 
servo from the same Python application (roughly sketched after this list).

3. C++: OpenCV in C++ and servo driving in C++.
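
Just to make option 2 concrete, this is roughly what I imagine the Python 
version could look like. It's completely untested -- I'm assuming the 
Adafruit_BBIO library for PWM, and the pin name "P9_14", the Haar cascade 
file path, and the servo duty-cycle range are just guesses taken from 
examples I've seen online:

# Sketch of option 2: face detection and servo PWM in one Python script (untested).
import cv2
import Adafruit_BBIO.PWM as PWM

SERVO_PIN = "P9_14"              # a PWM-capable BBB pin (assumption)
FREQ_HZ = 50                     # typical hobby-servo frame rate
DUTY_MIN, DUTY_MAX = 5.0, 10.0   # ~1 ms .. 2 ms pulse at 50 Hz (rough guess)

# Haar cascade that ships with OpenCV; the path may differ on the BBB.
cascade = cv2.CascadeClassifier("haarcascade_frontalface_default.xml")
cap = cv2.VideoCapture(0)        # USB webcam

PWM.start(SERVO_PIN, (DUTY_MIN + DUTY_MAX) / 2.0, FREQ_HZ)
try:
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = cascade.detectMultiScale(gray, scaleFactor=1.2, minNeighbors=5)
        if len(faces) > 0:
            x, y, w, h = faces[0]
            # Map the face's horizontal centre (0..1 across the frame)
            # to a servo duty cycle, so the servo follows the face.
            centre = (x + w / 2.0) / frame.shape[1]
            PWM.set_duty_cycle(SERVO_PIN, DUTY_MIN + centre * (DUTY_MAX - DUTY_MIN))
finally:
    cap.release()
    PWM.stop(SERVO_PIN)
    PWM.cleanup()

(The appeal of option 2, if something like this works, is that everything 
stays in one process, so there's no need to pass the face position between 
a C++ program and BoneScript.)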

I have NO experience with C++, a little experience with Python, and I'm OK 
with JavaScript and Java.

What do you guys think? Which options are more feasible?
Any better suggestions?

Sorry, I'm not a computer scientist, so I apologize if anything I'm saying 
sounds silly or doesn't make sense.
Thank you guys in advance!
Shanshan
