Hi,

I'm working on a project whose objective is to show synthesized video 
messages to the user. The first approach was to do the video / image 
processing directly on the device, but I'm not sure anymore that this is a 
good idea.

The basic workflow for the service (and the activity that shows the final 
video) I want to create is to generate a set of images based on speech 
parameters, using a C++ library (AAM Library: 
http://code.google.com/p/aam-library). That works so far; it takes a bit of 
time, but this step is not particularly time-critical. For the presentation 
to the user I figured I would need a video file in order to show a proper 
animation.

At the beginning I did the image animation with the Animation class on the 
Java side, but since all images need to be decoded into Bitmaps, this hits 
the memory limit very fast (working on the emulator), after approximately 
40 to 50 images. So that seems to be the wrong way.
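
For reference, this is roughly how the Java-side animation is set up at the 
moment (a simplified sketch; the helper class, the frame file names and the 
40 ms frame duration are just placeholders). Every frame is decoded into a 
full Bitmap before the animation starts, so heap usage grows with the 
number of frames:

import android.content.res.Resources;
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import android.graphics.drawable.AnimationDrawable;
import android.graphics.drawable.BitmapDrawable;
import android.widget.ImageView;

public class FrameAnimationHelper {

    // Decode every generated frame up front and play the sequence as a
    // frame-by-frame animation. This is the part that exhausts the heap
    // after roughly 40 to 50 frames.
    public static void showAnimation(ImageView view, Resources res,
                                     String frameDir, int frameCount) {
        final AnimationDrawable anim = new AnimationDrawable();
        anim.setOneShot(true);
        for (int i = 0; i < frameCount; i++) {
            Bitmap bmp = BitmapFactory.decodeFile(frameDir + "/frame_" + i + ".png");
            anim.addFrame(new BitmapDrawable(res, bmp), 40); // 40 ms per frame
        }
        view.setImageDrawable(anim);
        // Start once the view is attached; calling start() directly from
        // onCreate() does not reliably run the animation.
        view.post(new Runnable() {
            @Override
            public void run() {
                anim.start();
            }
        });
    }
}
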
The next idea was to use ffmpeg to generate the video on the C / C++ side 
of the application. But I suspect (without any experience with it) that 
using ffmpeg would run into memory limits as well.
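
If it matters, what I had in mind for the ffmpeg route is roughly the 
following (just a sketch; it assumes an ffmpeg binary can be bundled with 
the app and executed on the device, which I have not verified, and the 
paths, frame rate and codec are placeholders):

import java.io.BufferedReader;
import java.io.File;
import java.io.IOException;
import java.io.InputStreamReader;

public class VideoEncoder {

    // Turn the generated frame sequence into a video file by executing a
    // bundled ffmpeg binary (hypothetical setup). Call this off the UI thread.
    public static int encode(File ffmpegBinary, File framesDir, File outFile)
            throws IOException, InterruptedException {
        ProcessBuilder pb = new ProcessBuilder(
                ffmpegBinary.getAbsolutePath(),
                "-r", "25",                      // input frame rate
                "-i", new File(framesDir, "frame_%04d.png").getAbsolutePath(),
                "-vcodec", "mpeg4",
                outFile.getAbsolutePath());
        pb.redirectErrorStream(true);            // merge stderr into stdout

        Process p = pb.start();
        // Drain ffmpeg's output so the process does not block on a full pipe.
        BufferedReader reader =
                new BufferedReader(new InputStreamReader(p.getInputStream()));
        while (reader.readLine() != null) {
            // progress output could be logged here
        }
        return p.waitFor();
    }
}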

What would be the alternatives to the described approach? Video processing 
on a server and streaming of the generated video to the device?
I would be grateful for any hints, experience or general suggestions.
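
On the device side, I imagine the server-based variant would boil down to 
something like this (a minimal sketch; the URL is of course just a 
placeholder for whatever endpoint the server would expose), with all image 
and video processing happening on the server:

import android.app.Activity;
import android.net.Uri;
import android.os.Bundle;
import android.widget.MediaController;
import android.widget.VideoView;

public class MessagePlaybackActivity extends Activity {

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);

        // Let the platform player handle decoding and playback; the device
        // never touches the individual frames. The URL is a placeholder.
        VideoView videoView = new VideoView(this);
        setContentView(videoView);
        videoView.setMediaController(new MediaController(this));
        videoView.setVideoURI(Uri.parse("http://example.com/rendered/message.mp4"));
        videoView.start();
    }
}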

best regards,
berliner
