The internet is changing rapidly again. AGIs taking input off the internet have to deal with text, audio, images, various symbologies (e.g. mathematical notation, music), and different kinds of protocol- or standard-formatted information: AutoCAD files, RFC'd communications, XML markup languages, network-transmitted rendering instructions such as in games and Second Life, etc. etc. But a big one growing rapidly right now is VIDEO. Video in various formats - SWF, MPEG, Windows Media streaming, etc. Video, from what I understand, is typically based on a frame rate: one frame placed after another, basically sampleable as an ordered set of images per time unit. Some codecs encode only the changes between frames, depending on the compression protocol, but there is still ultimately a frame rate, and a maximum visible frame rate to the eye bounded by the monitor's refresh frequency.
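Since video is basically an ordered set of images per time unit, one common way to make mining tractable is to decode only a sparse sample of frames rather than every one. A minimal sketch of the arithmetic (the function name and the rates are just illustrative assumptions, not from any particular library):

```python
# Hypothetical sketch: treat a video as an ordered set of frames per
# time unit, and compute which frame indices an analyzer would actually
# decode when sampling a 30 fps stream at only 2 frames per second.

def sample_frame_indices(fps: float, duration_s: float, sample_hz: float):
    """Return the frame indices to decode, skipping everything between."""
    step = fps / sample_hz               # frames between consecutive samples
    total = int(fps * duration_s)        # total frames in the clip
    return [int(i * step) for i in range(int(total / step))]

# e.g. a 2-second 30 fps clip sampled at 2 Hz -> every 15th frame
print(sample_frame_indices(30, 2, 2))  # [0, 15, 30, 45]
```

Real toolkits do essentially this under the hood when you ask for "one frame every N seconds" - the expensive part is the decoding, so skipping frames is where the resource savings come from.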
Dealing with non-video information is very doable and fairly straightforward in general; naturally there are still many issues. Image analysis technology is there in many ways, and audio speech recognition/analysis is getting pretty good. But video is looking real nasty: lots of resources, slow, messy, yuck. Many robots have to do video analysis in real time with cameras, but this is usually very specialized, as in speech recognition, where you either specify what to look for while it is looking or train it beforehand. It seems like there is a whole world of issues to deal with in video mining/analysis, and any pointers in the area, especially open source projects, would be interesting to take a look at.

John

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
