Re: [Architecture] RFC: Video and Image Processing Support in WSO2 Platform

2016-11-25 Thread Anusha Jayasundara
Hi Sameera,

We just detect the humans and get their count. I used the HOG pedestrian
detection cascade for human detection. Now we are working on an aircraft
detection system, and I'm trying to train a cascade using opencv_traincascade.
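For reference, a minimal sketch of per-frame people counting with OpenCV's
built-in HOG + linear-SVM pedestrian detector (assuming that is the detector
in use; the input file name is a placeholder):

import cv2

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

frame = cv2.imread("frame.jpg")          # placeholder input frame
rects, weights = hog.detectMultiScale(frame,
                                      winStride=(8, 8),
                                      padding=(8, 8),
                                      scale=1.05)
print("human count:", len(rects))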

Thank You.

On Mon, Sep 19, 2016 at 6:40 PM, Sameera Ramasinghe 
wrote:

> Hi Anusha,
>
> I've been working on human activity recognition for some time now, and
> might be able to give some guide lines. I am not very clear on the final
> goal though. Are you trying to classify actions, or to detect objects?. I
> think it's important to differentiate between these two as they are
> different research areas.  Action classification is a red hot research
> topic these days and quite complex due to the high dimensionality of data.
> You need to consider temporal evolution and the dependency for that.
>
> On Mon, Sep 19, 2016 at 6:43 AM, Geesara Prathap  wrote:
>
>> Hi Anusha,
>>
>> Number of positive and negative samples are dependent upon  an object
>> which is trying to train. Becuase if you try to train small object like
>> rectangle then few positive samples would be enough. If there are more
>> variation in the object which requires hundreds and even thousands of
>> positive samples for like humans. Also, Intel Threading Building Blocks
>> (Intel® TBB) needs to be enabled when building OpenCV  so as to  optimize
>> and parallelize some of the functions which help to make haar cascade
>> classifier in an optimal way.
>>
>> Thanks,
>> Geesara
>>
>> On Thu, Sep 15, 2016 at 9:36 AM, Anusha Jayasundara 
>> wrote:
>>
>>> Hi Geesara,
>>>
>>>
>>> I used opencv-trainecascade function to train a cascade, and there is a
>>> function called opencv-createsamples, we can use this to create positive
>>> data set  by giving only one positive image. I used this method to create
>>> the positive data set. but the accuracy of the detection is very low, I
>>> think it is because I used very low-resolution image set. Now I'm trying to
>>> train using mid-resolution image set(300-150).
>>>
>>> Thanks
>>>
>>> On Wed, Sep 14, 2016 at 4:31 PM, Sameera Gunarathne 
>>> wrote:
>>>
 Hi Geesara,

 +1 for suggesting a cascade classifier(haar-cascade) for this
 implementation. With a sufficient number of samples for train using haar
 features would provide lesser rate of false positive results. AFAIK using
 of a background subtraction[1] method for pre processing can be used to
 reduce false positive results for the classification.

 [1] http://docs.opencv.org/3.1.0/db/d5c/tutorial_py_bg_subtraction.html

 Thanks,
 Sameera.

 On Tue, Sep 13, 2016 at 10:18 PM, Geesara Prathap 
 wrote:

> Hi Srinath,
>
> OpenCV provides us set of  interesting tools which can be used to
> train classifiers based on our requirements. Some time ago I trained
> a classifier[1] and controlled drone in real time. This article 
> explains[1]
> in a proper way how to train our own model using haar classifier based on
>  Adaboost which OpenCV provide.
>
> 1.https://github.com/GPrathap/opencv-haar-classifier-training
> 2.http://coding-robin.de/2013/07/22/train-your-own-opencv-ha
> ar-classifier.html
>
> Thanks,
> Geesara
>
> On Tue, Sep 13, 2016 at 10:40 AM, Srinath Perera 
> wrote:
>
>> Anusha, we should try Adaboost as Geesara mentioned ( when we done
>> with what we are doing).
>>
>> --Srinath
>>
>> On Sun, Sep 11, 2016 at 10:52 AM, Anusha Jayasundara <
>> anus...@wso2.com> wrote:
>>
>>> Hi Sumedha,
>>>
>>> I just detect the face. I went through few articles about face
>>> recognition, and I have a sample code also, but it is not that much
>>> accurate.
>>>
>>> Thanks,
>>>
>>>
>>> On Fri, Sep 9, 2016 at 11:26 AM, Sumedha Rubasinghe <
>>> sume...@wso2.com> wrote:
>>>
 On Fri, Sep 9, 2016 at 11:24 AM, Anusha Jayasundara <
 anus...@wso2.com> wrote:

> Hi Geesara,
>
> I used Haar full body cascade and HoG pedestrian detection
> cascade, In Haar full body cascade they have mentioned that, upper 
> body
> detection, lower body detection and full body detection is there in 
> the
> cascade. even thought it is there, once I tried to use separate upper 
> body
> detection cascade with full body detection cascade. but when it is
> implemented system took long time to process even a simple video with 
> two
> person.
> I'll upload my code to Github repo.
> I still didn't work with real-time CCTV videos ,but I was able to
> build a real-time face detection system using the web cam of my 
> laptop and
> it had issues on processing as the machine couldn't handle 

Re: [Architecture] RFC: Video and Image Processing Support in WSO2 Platform

2016-09-20 Thread Sameera Ramasinghe
Hi Anusha,

If you got good accuracy with HOG features, that's quite OK. But in my
experience, the OpenCV implementation generally does not give good detection
rates. If you are not satisfied with the accuracy, I'd recommend using deep
features for recognition, as they have given me good results in the past.
Since your dataset is relatively small, I'd recommend the following approach.
Train a deep network on a large dataset like ImageNet. Then feed your images
to the network and take the feature vectors generated at the higher layers as
your feature vectors. Then use a method like principal component analysis to
reduce the dimensionality of the vectors. After that, you can use a
supervised learning technique like SVM to categorize the dataset. This is
bound to give better results. If you need to do counting, just use an image
pyramid to detect each human. You can use a pre-trained model for this as
well.
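A minimal sketch of that pipeline, assuming the deep features have already
been extracted from a higher layer of an ImageNet-pretrained network and
dumped to disk (the .npy file names and dimensions are placeholders):

import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# Placeholder inputs: activations from a higher layer of a pretrained
# network, one row per image, plus the corresponding class labels.
features = np.load("deep_features.npy")   # e.g. shape (n_samples, 4096)
labels = np.load("labels.npy")            # shape (n_samples,)

model = make_pipeline(
    PCA(n_components=128),                # reduce dimensionality first
    SVC(kernel="rbf", C=1.0),             # then a supervised classifier
)
model.fit(features, labels)
print(model.predict(features[:5]))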

If you also need to do localization, then since you are dealing with videos
this becomes relatively simple: you can focus on the regions where the
movement is higher. You can simply do optical flow clustering, or there is an
algorithm I designed recently which gave me great results. I created a
feature called Trajectory Motion Tubes to focus on the important motion
events occurring in a video, and you can localize moving objects based on
that. I uploaded the code to GitHub [1]; you can use it if needed. It's quite
slow though, as I coded it in Matlab. I am working on the C++ version, but it
is not complete yet.
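As an illustration of the optical-flow-clustering option (not the Motion
Tubes code in [1]), a rough sketch using dense Farneback flow; the video
path and thresholds are placeholders:

import cv2

cap = cv2.VideoCapture("input.mp4")
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag, _ = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    motion_mask = (mag > 2.0).astype("uint8") * 255   # arbitrary threshold
    # Group high-motion pixels into regions and box them.
    contours = cv2.findContours(motion_mask, cv2.RETR_EXTERNAL,
                                cv2.CHAIN_APPROX_SIMPLE)[-2]
    boxes = [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > 500]
    prev_gray = gray
cap.release()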

Alternatively, if you have time, the best method is to implement a
state-of-the-art algorithm. There are many great algorithms being published
in this area at the moment which yield excellent results [2] (CVPR is 'THE'
conference for computer vision). I'll be able to lend a hand if you are
willing to do this, but it will take some time.

Again, all of this applies only if you are not satisfied with the current
results.
Thanks.

[1]- https://github.com/samgregoost/MotionTubes
[2]- http://www.cv-foundation.org/openaccess/CVPR2015.py

On Tue, Sep 20, 2016 at 2:02 PM, Anusha Jayasundara 
wrote:

> Hi Sameera,
>
> We just detect the humans and get their count. I used hog pedestrian
> detection cascade for human detection. Now we are working on Aircraft
> detection system. I'm trying to train a cascade using opencv-traincascade.
>
> Thank You.
>
> On Mon, Sep 19, 2016 at 6:40 PM, Sameera Ramasinghe 
> wrote:
>
>> Hi Anusha,
>>
>> I've been working on human activity recognition for some time now, and
>> might be able to give some guide lines. I am not very clear on the final
>> goal though. Are you trying to classify actions, or to detect objects?. I
>> think it's important to differentiate between these two as they are
>> different research areas.  Action classification is a red hot research
>> topic these days and quite complex due to the high dimensionality of data.
>> You need to consider temporal evolution and the dependency for that.
>>
>> On Mon, Sep 19, 2016 at 6:43 AM, Geesara Prathap 
>> wrote:
>>
>>> Hi Anusha,
>>>
>>> Number of positive and negative samples are dependent upon  an object
>>> which is trying to train. Becuase if you try to train small object like
>>> rectangle then few positive samples would be enough. If there are more
>>> variation in the object which requires hundreds and even thousands of
>>> positive samples for like humans. Also, Intel Threading Building Blocks
>>> (Intel® TBB) needs to be enabled when building OpenCV  so as to  optimize
>>> and parallelize some of the functions which help to make haar cascade
>>> classifier in an optimal way.
>>>
>>> Thanks,
>>> Geesara
>>>
>>> On Thu, Sep 15, 2016 at 9:36 AM, Anusha Jayasundara 
>>> wrote:
>>>
 Hi Geesara,


 I used opencv-trainecascade function to train a cascade, and there is a
 function called opencv-createsamples, we can use this to create positive
 data set  by giving only one positive image. I used this method to create
 the positive data set. but the accuracy of the detection is very low, I
 think it is because I used very low-resolution image set. Now I'm trying to
 train using mid-resolution image set(300-150).

 Thanks

 On Wed, Sep 14, 2016 at 4:31 PM, Sameera Gunarathne 
 wrote:

> Hi Geesara,
>
> +1 for suggesting a cascade classifier(haar-cascade) for this
> implementation. With a sufficient number of samples for train using haar
> features would provide lesser rate of false positive results. AFAIK using
> of a background subtraction[1] method for pre processing can be used to
> reduce false positive results for the classification.
>
> [1] http://docs.opencv.org/3.1.0/db/d5c/tutorial_py_bg_subtr
> action.html
>
> Thanks,
> Sameera.
>
> On Tue, Sep 13, 2016 at 10:18 PM, Geesara Prathap 

Re: [Architecture] RFC: Video and Image Processing Support in WSO2 Platform

2016-09-19 Thread Sameera Ramasinghe
Hi,

I've been working on human activity recognition for some time now, and
might be able to give some guidelines. I am not very clear on the goal
though: are you trying to classify actions, or to detect objects? Action
classification is a red-hot research topic these days and quite complex due
to the high dimensionality of the data; you need to consider the temporal
evolution and the dependencies that introduces. In the doc, it is not really
clear what the final goal of the project is.

On Mon, Sep 19, 2016 at 6:43 AM, Geesara Prathap  wrote:

> Hi Anusha,
>
> Number of positive and negative samples are dependent upon  an object
> which is trying to train. Becuase if you try to train small object like
> rectangle then few positive samples would be enough. If there are more
> variation in the object which requires hundreds and even thousands of
> positive samples for like humans. Also, Intel Threading Building Blocks
> (Intel® TBB) needs to be enabled when building OpenCV  so as to  optimize
> and parallelize some of the functions which help to make haar cascade
> classifier in an optimal way.
>
> Thanks,
> Geesara
>
> On Thu, Sep 15, 2016 at 9:36 AM, Anusha Jayasundara 
> wrote:
>
>> Hi Geesara,
>>
>>
>> I used opencv-trainecascade function to train a cascade, and there is a
>> function called opencv-createsamples, we can use this to create positive
>> data set  by giving only one positive image. I used this method to create
>> the positive data set. but the accuracy of the detection is very low, I
>> think it is because I used very low-resolution image set. Now I'm trying to
>> train using mid-resolution image set(300-150).
>>
>> Thanks
>>
>> On Wed, Sep 14, 2016 at 4:31 PM, Sameera Gunarathne 
>> wrote:
>>
>>> Hi Geesara,
>>>
>>> +1 for suggesting a cascade classifier(haar-cascade) for this
>>> implementation. With a sufficient number of samples for train using haar
>>> features would provide lesser rate of false positive results. AFAIK using
>>> of a background subtraction[1] method for pre processing can be used to
>>> reduce false positive results for the classification.
>>>
>>> [1] http://docs.opencv.org/3.1.0/db/d5c/tutorial_py_bg_subtraction.html
>>>
>>> Thanks,
>>> Sameera.
>>>
>>> On Tue, Sep 13, 2016 at 10:18 PM, Geesara Prathap 
>>> wrote:
>>>
 Hi Srinath,

 OpenCV provides us set of  interesting tools which can be used to train
 classifiers based on our requirements. Some time ago I trained
 a classifier[1] and controlled drone in real time. This article explains[1]
 in a proper way how to train our own model using haar classifier based on
  Adaboost which OpenCV provide.

 1.https://github.com/GPrathap/opencv-haar-classifier-training
 2.http://coding-robin.de/2013/07/22/train-your-own-opencv-ha
 ar-classifier.html

 Thanks,
 Geesara

 On Tue, Sep 13, 2016 at 10:40 AM, Srinath Perera 
 wrote:

> Anusha, we should try Adaboost as Geesara mentioned ( when we done
> with what we are doing).
>
> --Srinath
>
> On Sun, Sep 11, 2016 at 10:52 AM, Anusha Jayasundara  > wrote:
>
>> Hi Sumedha,
>>
>> I just detect the face. I went through few articles about face
>> recognition, and I have a sample code also, but it is not that much
>> accurate.
>>
>> Thanks,
>>
>>
>> On Fri, Sep 9, 2016 at 11:26 AM, Sumedha Rubasinghe > > wrote:
>>
>>> On Fri, Sep 9, 2016 at 11:24 AM, Anusha Jayasundara <
>>> anus...@wso2.com> wrote:
>>>
 Hi Geesara,

 I used Haar full body cascade and HoG pedestrian detection cascade,
 In Haar full body cascade they have mentioned that, upper body 
 detection,
 lower body detection and full body detection is there in the cascade. 
 even
 thought it is there, once I tried to use separate upper body detection
 cascade with full body detection cascade. but when it is implemented 
 system
 took long time to process even a simple video with two person.
 I'll upload my code to Github repo.
 I still didn't work with real-time CCTV videos ,but I was able to
 build a real-time face detection system using the web cam of my laptop 
 and
 it had issues on processing as the machine couldn't handle it.

>>>
>>> Anusha,
>>> Did you just detect the face or associated that with a name as well?
>>>
>>>
>>>
 We thought of doing video processing out side of the CEP and send
 the process data in to the CEP.(i.e human count, time_stamp, frame rate
 ,etc..). For now I send those data into CEP as a Json POST request.


 Thank You,




 On Wed, Sep 7, 2016 at 11:57 PM, Geesara Prathap 

Re: [Architecture] RFC: Video and Image Processing Support in WSO2 Platform

2016-09-18 Thread Geesara Prathap
Hi Anusha,

The number of positive and negative samples depends on the object you are
trying to train for. If you are training a detector for a simple object such
as a rectangle, a few positive samples would be enough. If there is more
variation in the object, as with humans, hundreds or even thousands of
positive samples are required. Also, Intel Threading Building Blocks
(Intel® TBB) needs to be enabled when building OpenCV so as to optimize and
parallelize some of the functions, which helps build the Haar cascade
classifier in an optimal way.

Thanks,
Geesara

On Thu, Sep 15, 2016 at 9:36 AM, Anusha Jayasundara 
wrote:

> Hi Geesara,
>
>
> I used opencv-trainecascade function to train a cascade, and there is a
> function called opencv-createsamples, we can use this to create positive
> data set  by giving only one positive image. I used this method to create
> the positive data set. but the accuracy of the detection is very low, I
> think it is because I used very low-resolution image set. Now I'm trying to
> train using mid-resolution image set(300-150).
>
> Thanks
>
> On Wed, Sep 14, 2016 at 4:31 PM, Sameera Gunarathne 
> wrote:
>
>> Hi Geesara,
>>
>> +1 for suggesting a cascade classifier(haar-cascade) for this
>> implementation. With a sufficient number of samples for train using haar
>> features would provide lesser rate of false positive results. AFAIK using
>> of a background subtraction[1] method for pre processing can be used to
>> reduce false positive results for the classification.
>>
>> [1] http://docs.opencv.org/3.1.0/db/d5c/tutorial_py_bg_subtraction.html
>>
>> Thanks,
>> Sameera.
>>
>> On Tue, Sep 13, 2016 at 10:18 PM, Geesara Prathap 
>> wrote:
>>
>>> Hi Srinath,
>>>
>>> OpenCV provides us set of  interesting tools which can be used to train
>>> classifiers based on our requirements. Some time ago I trained
>>> a classifier[1] and controlled drone in real time. This article explains[1]
>>> in a proper way how to train our own model using haar classifier based on
>>>  Adaboost which OpenCV provide.
>>>
>>> 1.https://github.com/GPrathap/opencv-haar-classifier-training
>>> 2.http://coding-robin.de/2013/07/22/train-your-own-opencv-ha
>>> ar-classifier.html
>>>
>>> Thanks,
>>> Geesara
>>>
>>> On Tue, Sep 13, 2016 at 10:40 AM, Srinath Perera 
>>> wrote:
>>>
 Anusha, we should try Adaboost as Geesara mentioned ( when we done with
 what we are doing).

 --Srinath

 On Sun, Sep 11, 2016 at 10:52 AM, Anusha Jayasundara 
 wrote:

> Hi Sumedha,
>
> I just detect the face. I went through few articles about face
> recognition, and I have a sample code also, but it is not that much
> accurate.
>
> Thanks,
>
>
> On Fri, Sep 9, 2016 at 11:26 AM, Sumedha Rubasinghe 
> wrote:
>
>> On Fri, Sep 9, 2016 at 11:24 AM, Anusha Jayasundara > > wrote:
>>
>>> Hi Geesara,
>>>
>>> I used Haar full body cascade and HoG pedestrian detection cascade,
>>> In Haar full body cascade they have mentioned that, upper body 
>>> detection,
>>> lower body detection and full body detection is there in the cascade. 
>>> even
>>> thought it is there, once I tried to use separate upper body detection
>>> cascade with full body detection cascade. but when it is implemented 
>>> system
>>> took long time to process even a simple video with two person.
>>> I'll upload my code to Github repo.
>>> I still didn't work with real-time CCTV videos ,but I was able to
>>> build a real-time face detection system using the web cam of my laptop 
>>> and
>>> it had issues on processing as the machine couldn't handle it.
>>>
>>
>> Anusha,
>> Did you just detect the face or associated that with a name as well?
>>
>>
>>
>>> We thought of doing video processing out side of the CEP and send
>>> the process data in to the CEP.(i.e human count, time_stamp, frame rate
>>> ,etc..). For now I send those data into CEP as a Json POST request.
>>>
>>>
>>> Thank You,
>>>
>>>
>>>
>>>
>>> On Wed, Sep 7, 2016 at 11:57 PM, Geesara Prathap 
>>> wrote:
>>>
 Hi Anusha,

 A few suggestions to improve your implementation.
 Haar and HoG  are used to get visual descriptors which can be used
 to describe an image. Then both of them are using boosting 
 classification
 like AdaBoost to tune up its performance. When you are using haar-like
 feature extraction method you need to use more that one model in order 
 to
 make the final decision. Let's say you are using  full body classifier 
 for
 human detection. Along with this classifier,  can’t detect  upper body
 properly. When haar-like feature extraction 

Re: [Architecture] RFC: Video and Image Processing Support in WSO2 Platform

2016-09-14 Thread Anusha Jayasundara
Hi Geesara,


I used the opencv_traincascade tool to train a cascade, and there is a tool
called opencv_createsamples that can be used to create a positive data set
from only one positive image. I used this method to create the positive data
set, but the detection accuracy is very low; I think it is because I used a
very low-resolution image set. Now I'm trying to train using a mid-resolution
image set (300-150).
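For reference, a sketch of that workflow driven from Python; the flags are
the commonly documented opencv_createsamples / opencv_traincascade options,
and all paths, counts and window sizes are placeholders:

import subprocess

# Synthesize positives by distorting one positive image over backgrounds.
subprocess.run([
    "opencv_createsamples",
    "-img", "positive.png",       # the single positive image
    "-bg", "negatives.txt",       # list of background (negative) images
    "-vec", "samples.vec",
    "-num", "1000",
    "-w", "48", "-h", "96",
], check=True)

# Train the cascade on the generated .vec file.
subprocess.run([
    "opencv_traincascade",
    "-data", "cascade_out",       # output directory (must already exist)
    "-vec", "samples.vec",
    "-bg", "negatives.txt",
    "-numPos", "900",             # keep below -num to leave headroom
    "-numNeg", "500",
    "-numStages", "15",
    "-w", "48", "-h", "96",
], check=True)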

Thanks

On Wed, Sep 14, 2016 at 4:31 PM, Sameera Gunarathne 
wrote:

> Hi Geesara,
>
> +1 for suggesting a cascade classifier(haar-cascade) for this
> implementation. With a sufficient number of samples for train using haar
> features would provide lesser rate of false positive results. AFAIK using
> of a background subtraction[1] method for pre processing can be used to
> reduce false positive results for the classification.
>
> [1] http://docs.opencv.org/3.1.0/db/d5c/tutorial_py_bg_subtraction.html
>
> Thanks,
> Sameera.
>
> On Tue, Sep 13, 2016 at 10:18 PM, Geesara Prathap 
> wrote:
>
>> Hi Srinath,
>>
>> OpenCV provides us set of  interesting tools which can be used to train
>> classifiers based on our requirements. Some time ago I trained
>> a classifier[1] and controlled drone in real time. This article explains[1]
>> in a proper way how to train our own model using haar classifier based on
>>  Adaboost which OpenCV provide.
>>
>> 1.https://github.com/GPrathap/opencv-haar-classifier-training
>> 2.http://coding-robin.de/2013/07/22/train-your-own-opencv-ha
>> ar-classifier.html
>>
>> Thanks,
>> Geesara
>>
>> On Tue, Sep 13, 2016 at 10:40 AM, Srinath Perera 
>> wrote:
>>
>>> Anusha, we should try Adaboost as Geesara mentioned ( when we done with
>>> what we are doing).
>>>
>>> --Srinath
>>>
>>> On Sun, Sep 11, 2016 at 10:52 AM, Anusha Jayasundara 
>>> wrote:
>>>
 Hi Sumedha,

 I just detect the face. I went through few articles about face
 recognition, and I have a sample code also, but it is not that much
 accurate.

 Thanks,


 On Fri, Sep 9, 2016 at 11:26 AM, Sumedha Rubasinghe 
 wrote:

> On Fri, Sep 9, 2016 at 11:24 AM, Anusha Jayasundara 
> wrote:
>
>> Hi Geesara,
>>
>> I used Haar full body cascade and HoG pedestrian detection cascade,
>> In Haar full body cascade they have mentioned that, upper body detection,
>> lower body detection and full body detection is there in the cascade. 
>> even
>> thought it is there, once I tried to use separate upper body detection
>> cascade with full body detection cascade. but when it is implemented 
>> system
>> took long time to process even a simple video with two person.
>> I'll upload my code to Github repo.
>> I still didn't work with real-time CCTV videos ,but I was able to
>> build a real-time face detection system using the web cam of my laptop 
>> and
>> it had issues on processing as the machine couldn't handle it.
>>
>
> Anusha,
> Did you just detect the face or associated that with a name as well?
>
>
>
>> We thought of doing video processing out side of the CEP and send the
>> process data in to the CEP.(i.e human count, time_stamp, frame rate
>> ,etc..). For now I send those data into CEP as a Json POST request.
>>
>>
>> Thank You,
>>
>>
>>
>>
>> On Wed, Sep 7, 2016 at 11:57 PM, Geesara Prathap 
>> wrote:
>>
>>> Hi Anusha,
>>>
>>> A few suggestions to improve your implementation.
>>> Haar and HoG  are used to get visual descriptors which can be used
>>> to describe an image. Then both of them are using boosting 
>>> classification
>>> like AdaBoost to tune up its performance. When you are using haar-like
>>> feature extraction method you need to use more that one model in order 
>>> to
>>> make the final decision. Let's say you are using  full body classifier 
>>> for
>>> human detection. Along with this classifier,  can’t detect  upper body
>>> properly. When haar-like feature extraction is used you may have to use
>>> more that one classifier and the final decision will be taken 
>>> aggregation
>>> or composition of those results. Next important thing is 
>>> pre-processing. It
>>> may be composed of color balancing, gamma correction , changing color 
>>> space
>>> and some of the factors which unique to  the environment which you're
>>> trying out. Processing model is also more important since this is to be
>>> done in real time. If you can explain your algorithm we will able to
>>> provide some guidance in order to improve your algorithm to get a better
>>> result.
>>>
>>> Since the main intention of this project is to facilitate support
>>> for images process in the WSO2 Platform. I am just 

Re: [Architecture] RFC: Video and Image Processing Support in WSO2 Platform

2016-09-14 Thread Sameera Gunarathne
Hi Geesara,

+1 for suggesting a cascade classifier (Haar cascade) for this
implementation. With a sufficient number of training samples, Haar features
would give a lower rate of false positives. AFAIK, a background
subtraction [1] method can also be used as a pre-processing step to reduce
false positives in the classification.

[1] http://docs.opencv.org/3.1.0/db/d5c/tutorial_py_bg_subtraction.html
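A minimal sketch of that pre-processing step with OpenCV's MOG2 subtractor
(the video source is a placeholder):

import cv2

cap = cv2.VideoCapture("input.mp4")
subtractor = cv2.createBackgroundSubtractorMOG2(history=500,
                                                varThreshold=16,
                                                detectShadows=True)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    fg_mask = subtractor.apply(frame)                      # foreground mask
    fg_only = cv2.bitwise_and(frame, frame, mask=fg_mask)  # moving pixels only
    # ... run the cascade classifier on fg_only / regions of fg_mask ...
cap.release()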

Thanks,
Sameera.

On Tue, Sep 13, 2016 at 10:18 PM, Geesara Prathap  wrote:

> Hi Srinath,
>
> OpenCV provides us set of  interesting tools which can be used to train
> classifiers based on our requirements. Some time ago I trained
> a classifier[1] and controlled drone in real time. This article explains[1]
> in a proper way how to train our own model using haar classifier based on
>  Adaboost which OpenCV provide.
>
> 1.https://github.com/GPrathap/opencv-haar-classifier-training
> 2.http://coding-robin.de/2013/07/22/train-your-own-opencv-
> haar-classifier.html
>
> Thanks,
> Geesara
>
> On Tue, Sep 13, 2016 at 10:40 AM, Srinath Perera  wrote:
>
>> Anusha, we should try Adaboost as Geesara mentioned ( when we done with
>> what we are doing).
>>
>> --Srinath
>>
>> On Sun, Sep 11, 2016 at 10:52 AM, Anusha Jayasundara 
>> wrote:
>>
>>> Hi Sumedha,
>>>
>>> I just detect the face. I went through few articles about face
>>> recognition, and I have a sample code also, but it is not that much
>>> accurate.
>>>
>>> Thanks,
>>>
>>>
>>> On Fri, Sep 9, 2016 at 11:26 AM, Sumedha Rubasinghe 
>>> wrote:
>>>
 On Fri, Sep 9, 2016 at 11:24 AM, Anusha Jayasundara 
 wrote:

> Hi Geesara,
>
> I used Haar full body cascade and HoG pedestrian detection cascade, In
> Haar full body cascade they have mentioned that, upper body detection,
> lower body detection and full body detection is there in the cascade. even
> thought it is there, once I tried to use separate upper body detection
> cascade with full body detection cascade. but when it is implemented 
> system
> took long time to process even a simple video with two person.
> I'll upload my code to Github repo.
> I still didn't work with real-time CCTV videos ,but I was able to
> build a real-time face detection system using the web cam of my laptop and
> it had issues on processing as the machine couldn't handle it.
>

 Anusha,
 Did you just detect the face or associated that with a name as well?



> We thought of doing video processing out side of the CEP and send the
> process data in to the CEP.(i.e human count, time_stamp, frame rate
> ,etc..). For now I send those data into CEP as a Json POST request.
>
>
> Thank You,
>
>
>
>
> On Wed, Sep 7, 2016 at 11:57 PM, Geesara Prathap 
> wrote:
>
>> Hi Anusha,
>>
>> A few suggestions to improve your implementation.
>> Haar and HoG  are used to get visual descriptors which can be used to
>> describe an image. Then both of them are using boosting classification 
>> like
>> AdaBoost to tune up its performance. When you are using haar-like feature
>> extraction method you need to use more that one model in order to make 
>> the
>> final decision. Let's say you are using  full body classifier for human
>> detection. Along with this classifier,  can’t detect  upper body 
>> properly.
>> When haar-like feature extraction is used you may have to use more that 
>> one
>> classifier and the final decision will be taken aggregation or 
>> composition
>> of those results. Next important thing is pre-processing. It may be
>> composed of color balancing, gamma correction , changing color space and
>> some of the factors which unique to  the environment which you're trying
>> out. Processing model is also more important since this is to be done in
>> real time. If you can explain your algorithm we will able to provide some
>> guidance in order to improve your algorithm to get a better result.
>>
>> Since the main intention of this project is to facilitate support for
>> images process in the WSO2 Platform. I am just curious to know, how do 
>> you
>> process the video stream in real-time with the help of CEP. Since you are
>> using CCTV feeds which might be using RTSP or RTMP, where do you process
>> the incoming video stream? Are you to develop RTSP or RTMP input adapters
>> so as to get input stream into CEP?
>>
>> Thanks,
>> Geesara
>>
>> On Wed, Aug 31, 2016 at 8:16 PM, Anusha Jayasundara > > wrote:
>>
>>> Hi,
>>>
>>> The Progress of the video processing project is described in the
>>> attached pdf.
>>>
>>> On Wed, Aug 31, 2016 at 11:39 AM, Srinath Perera 
>>> wrote:

Re: [Architecture] RFC: Video and Image Processing Support in WSO2 Platform

2016-09-13 Thread Geesara Prathap
Hi Srinath,

OpenCV provides a set of interesting tools which can be used to train
classifiers based on our requirements. Some time ago I trained a
classifier [1] and controlled a drone in real time. The article [2] explains
properly how to train our own model using a Haar classifier based on
AdaBoost, which OpenCV provides.

1. https://github.com/GPrathap/opencv-haar-classifier-training
2. http://coding-robin.de/2013/07/22/train-your-own-opencv-haar-classifier.html
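Once a cascade is trained with those tools, using it takes only a few lines;
the XML path and test image below are placeholders:

import cv2

cascade = cv2.CascadeClassifier("cascade_out/cascade.xml")
img = cv2.imread("test.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

objects = cascade.detectMultiScale(gray,
                                   scaleFactor=1.1,
                                   minNeighbors=5,
                                   minSize=(30, 30))
for (x, y, w, h) in objects:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
print("detections:", len(objects))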

Thanks,
Geesara

On Tue, Sep 13, 2016 at 10:40 AM, Srinath Perera  wrote:

> Anusha, we should try Adaboost as Geesara mentioned ( when we done with
> what we are doing).
>
> --Srinath
>
> On Sun, Sep 11, 2016 at 10:52 AM, Anusha Jayasundara 
> wrote:
>
>> Hi Sumedha,
>>
>> I just detect the face. I went through few articles about face
>> recognition, and I have a sample code also, but it is not that much
>> accurate.
>>
>> Thanks,
>>
>>
>> On Fri, Sep 9, 2016 at 11:26 AM, Sumedha Rubasinghe 
>> wrote:
>>
>>> On Fri, Sep 9, 2016 at 11:24 AM, Anusha Jayasundara 
>>> wrote:
>>>
 Hi Geesara,

 I used Haar full body cascade and HoG pedestrian detection cascade, In
 Haar full body cascade they have mentioned that, upper body detection,
 lower body detection and full body detection is there in the cascade. even
 thought it is there, once I tried to use separate upper body detection
 cascade with full body detection cascade. but when it is implemented system
 took long time to process even a simple video with two person.
 I'll upload my code to Github repo.
 I still didn't work with real-time CCTV videos ,but I was able to build
 a real-time face detection system using the web cam of my laptop and it had
 issues on processing as the machine couldn't handle it.

>>>
>>> Anusha,
>>> Did you just detect the face or associated that with a name as well?
>>>
>>>
>>>
 We thought of doing video processing out side of the CEP and send the
 process data in to the CEP.(i.e human count, time_stamp, frame rate
 ,etc..). For now I send those data into CEP as a Json POST request.


 Thank You,




 On Wed, Sep 7, 2016 at 11:57 PM, Geesara Prathap 
 wrote:

> Hi Anusha,
>
> A few suggestions to improve your implementation.
> Haar and HoG  are used to get visual descriptors which can be used to
> describe an image. Then both of them are using boosting classification 
> like
> AdaBoost to tune up its performance. When you are using haar-like feature
> extraction method you need to use more that one model in order to make the
> final decision. Let's say you are using  full body classifier for human
> detection. Along with this classifier,  can’t detect  upper body properly.
> When haar-like feature extraction is used you may have to use more that 
> one
> classifier and the final decision will be taken aggregation or composition
> of those results. Next important thing is pre-processing. It may be
> composed of color balancing, gamma correction , changing color space and
> some of the factors which unique to  the environment which you're trying
> out. Processing model is also more important since this is to be done in
> real time. If you can explain your algorithm we will able to provide some
> guidance in order to improve your algorithm to get a better result.
>
> Since the main intention of this project is to facilitate support for
> images process in the WSO2 Platform. I am just curious to know, how do you
> process the video stream in real-time with the help of CEP. Since you are
> using CCTV feeds which might be using RTSP or RTMP, where do you process
> the incoming video stream? Are you to develop RTSP or RTMP input adapters
> so as to get input stream into CEP?
>
> Thanks,
> Geesara
>
> On Wed, Aug 31, 2016 at 8:16 PM, Anusha Jayasundara 
> wrote:
>
>> Hi,
>>
>> The Progress of the video processing project is described in the
>> attached pdf.
>>
>> On Wed, Aug 31, 2016 at 11:39 AM, Srinath Perera 
>> wrote:
>>
>>> Anusha has the people counting from video working through CEP and
>>> have a dashboard. ( Anusha can u send an update with screen shots?). We
>>> will also setup a meeting.
>>>
>>> Also seems new Camaras automatically do human detection etc and add
>>> object codes to videos, and if we can extract them, we can do some 
>>> analysis
>>> without heavy processing as well. Will explore this too.
>>>
>>> Also Facebook opensourced their object detection code called
>>> FaceMask https://code.facebook.com/posts/561187904071636. Another
>>> to look at.
>>>
>>> --Srinath
>>>
>>>
>>>
>>> On Mon, Aug 

Re: [Architecture] RFC: Video and Image Processing Support in WSO2 Platform

2016-09-12 Thread Srinath Perera
Anusha, we should try AdaBoost as Geesara mentioned (when we are done with
what we are doing).

--Srinath

On Sun, Sep 11, 2016 at 10:52 AM, Anusha Jayasundara 
wrote:

> Hi Sumedha,
>
> I just detect the face. I went through few articles about face
> recognition, and I have a sample code also, but it is not that much
> accurate.
>
> Thanks,
>
>
> On Fri, Sep 9, 2016 at 11:26 AM, Sumedha Rubasinghe 
> wrote:
>
>> On Fri, Sep 9, 2016 at 11:24 AM, Anusha Jayasundara 
>> wrote:
>>
>>> Hi Geesara,
>>>
>>> I used Haar full body cascade and HoG pedestrian detection cascade, In
>>> Haar full body cascade they have mentioned that, upper body detection,
>>> lower body detection and full body detection is there in the cascade. even
>>> thought it is there, once I tried to use separate upper body detection
>>> cascade with full body detection cascade. but when it is implemented system
>>> took long time to process even a simple video with two person.
>>> I'll upload my code to Github repo.
>>> I still didn't work with real-time CCTV videos ,but I was able to build
>>> a real-time face detection system using the web cam of my laptop and it had
>>> issues on processing as the machine couldn't handle it.
>>>
>>
>> Anusha,
>> Did you just detect the face or associated that with a name as well?
>>
>>
>>
>>> We thought of doing video processing out side of the CEP and send the
>>> process data in to the CEP.(i.e human count, time_stamp, frame rate
>>> ,etc..). For now I send those data into CEP as a Json POST request.
>>>
>>>
>>> Thank You,
>>>
>>>
>>>
>>>
>>> On Wed, Sep 7, 2016 at 11:57 PM, Geesara Prathap 
>>> wrote:
>>>
 Hi Anusha,

 A few suggestions to improve your implementation.
 Haar and HoG  are used to get visual descriptors which can be used to
 describe an image. Then both of them are using boosting classification like
 AdaBoost to tune up its performance. When you are using haar-like feature
 extraction method you need to use more that one model in order to make the
 final decision. Let's say you are using  full body classifier for human
 detection. Along with this classifier,  can’t detect  upper body properly.
 When haar-like feature extraction is used you may have to use more that one
 classifier and the final decision will be taken aggregation or composition
 of those results. Next important thing is pre-processing. It may be
 composed of color balancing, gamma correction , changing color space and
 some of the factors which unique to  the environment which you're trying
 out. Processing model is also more important since this is to be done in
 real time. If you can explain your algorithm we will able to provide some
 guidance in order to improve your algorithm to get a better result.

 Since the main intention of this project is to facilitate support for
 images process in the WSO2 Platform. I am just curious to know, how do you
 process the video stream in real-time with the help of CEP. Since you are
 using CCTV feeds which might be using RTSP or RTMP, where do you process
 the incoming video stream? Are you to develop RTSP or RTMP input adapters
 so as to get input stream into CEP?

 Thanks,
 Geesara

 On Wed, Aug 31, 2016 at 8:16 PM, Anusha Jayasundara 
 wrote:

> Hi,
>
> The Progress of the video processing project is described in the
> attached pdf.
>
> On Wed, Aug 31, 2016 at 11:39 AM, Srinath Perera 
> wrote:
>
>> Anusha has the people counting from video working through CEP and
>> have a dashboard. ( Anusha can u send an update with screen shots?). We
>> will also setup a meeting.
>>
>> Also seems new Camaras automatically do human detection etc and add
>> object codes to videos, and if we can extract them, we can do some 
>> analysis
>> without heavy processing as well. Will explore this too.
>>
>> Also Facebook opensourced their object detection code called FaceMask
>> https://code.facebook.com/posts/561187904071636. Another to look at.
>>
>> --Srinath
>>
>>
>>
>> On Mon, Aug 15, 2016 at 4:14 PM, Sanjiva Weerawarana <
>> sanj...@wso2.com> wrote:
>>
>>> Looks good!
>>>
>>> In terms of test data we can take the video cameras in the LK Palm
>>> Grove lobby as an input source to play around with people analysis. For
>>> vehicles we can plop a camera pointing to Duplication Road and get 
>>> plenty
>>> of data :-).
>>>
>>> I guess we should do some small experiments to see how things work.
>>>
>>> Sanjiva.
>>>
>>> On Wed, Aug 10, 2016 at 3:02 PM, Srinath Perera 
>>> wrote:
>>>
 Attached document list some of the initial ideas about the topic.
 Anusha is exploring some of the 

Re: [Architecture] RFC: Video and Image Processing Support in WSO2 Platform

2016-09-10 Thread Anusha Jayasundara
Hi Sumedha,

I just detect the face. I went through a few articles about face recognition,
and I have some sample code as well, but it is not very accurate.
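For going from detection to recognition (attaching a name), one option is the
LBPH recognizer from the opencv-contrib face module. A hedged sketch, where
the cascade path, training images and label-to-name mapping are all
placeholders, and assuming an opencv-contrib build that exposes
cv2.face.LBPHFaceRecognizer_create:

import cv2
import numpy as np

detector = cv2.CascadeClassifier("haarcascade_frontalface_default.xml")  # ships with OpenCV
recognizer = cv2.face.LBPHFaceRecognizer_create()   # requires opencv-contrib

# Training data: grayscale face crops plus one integer label per person.
faces = [cv2.imread(p, cv2.IMREAD_GRAYSCALE)
         for p in ["person0_1.png", "person0_2.png"]]
labels = np.array([0, 0])           # 0 -> some known person (placeholder)
recognizer.train(faces, labels)

# Detect faces in a new frame and predict who they are.
gray = cv2.cvtColor(cv2.imread("frame.jpg"), cv2.COLOR_BGR2GRAY)
for (x, y, w, h) in detector.detectMultiScale(gray, 1.1, 5):
    label, confidence = recognizer.predict(gray[y:y + h, x:x + w])
    print("label:", label, "confidence:", confidence)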

Thanks,


On Fri, Sep 9, 2016 at 11:26 AM, Sumedha Rubasinghe 
wrote:

> On Fri, Sep 9, 2016 at 11:24 AM, Anusha Jayasundara 
> wrote:
>
>> Hi Geesara,
>>
>> I used Haar full body cascade and HoG pedestrian detection cascade, In
>> Haar full body cascade they have mentioned that, upper body detection,
>> lower body detection and full body detection is there in the cascade. even
>> thought it is there, once I tried to use separate upper body detection
>> cascade with full body detection cascade. but when it is implemented system
>> took long time to process even a simple video with two person.
>> I'll upload my code to Github repo.
>> I still didn't work with real-time CCTV videos ,but I was able to build a
>> real-time face detection system using the web cam of my laptop and it had
>> issues on processing as the machine couldn't handle it.
>>
>
> Anusha,
> Did you just detect the face or associated that with a name as well?
>
>
>
>> We thought of doing video processing out side of the CEP and send the
>> process data in to the CEP.(i.e human count, time_stamp, frame rate
>> ,etc..). For now I send those data into CEP as a Json POST request.
>>
>>
>> Thank You,
>>
>>
>>
>>
>> On Wed, Sep 7, 2016 at 11:57 PM, Geesara Prathap 
>> wrote:
>>
>>> Hi Anusha,
>>>
>>> A few suggestions to improve your implementation.
>>> Haar and HoG  are used to get visual descriptors which can be used to
>>> describe an image. Then both of them are using boosting classification like
>>> AdaBoost to tune up its performance. When you are using haar-like feature
>>> extraction method you need to use more that one model in order to make the
>>> final decision. Let's say you are using  full body classifier for human
>>> detection. Along with this classifier,  can’t detect  upper body properly.
>>> When haar-like feature extraction is used you may have to use more that one
>>> classifier and the final decision will be taken aggregation or composition
>>> of those results. Next important thing is pre-processing. It may be
>>> composed of color balancing, gamma correction , changing color space and
>>> some of the factors which unique to  the environment which you're trying
>>> out. Processing model is also more important since this is to be done in
>>> real time. If you can explain your algorithm we will able to provide some
>>> guidance in order to improve your algorithm to get a better result.
>>>
>>> Since the main intention of this project is to facilitate support for
>>> images process in the WSO2 Platform. I am just curious to know, how do you
>>> process the video stream in real-time with the help of CEP. Since you are
>>> using CCTV feeds which might be using RTSP or RTMP, where do you process
>>> the incoming video stream? Are you to develop RTSP or RTMP input adapters
>>> so as to get input stream into CEP?
>>>
>>> Thanks,
>>> Geesara
>>>
>>> On Wed, Aug 31, 2016 at 8:16 PM, Anusha Jayasundara 
>>> wrote:
>>>
 Hi,

 The Progress of the video processing project is described in the
 attached pdf.

 On Wed, Aug 31, 2016 at 11:39 AM, Srinath Perera 
 wrote:

> Anusha has the people counting from video working through CEP and have
> a dashboard. ( Anusha can u send an update with screen shots?). We will
> also setup a meeting.
>
> Also seems new Camaras automatically do human detection etc and add
> object codes to videos, and if we can extract them, we can do some 
> analysis
> without heavy processing as well. Will explore this too.
>
> Also Facebook opensourced their object detection code called FaceMask
> https://code.facebook.com/posts/561187904071636. Another to look at.
>
> --Srinath
>
>
>
> On Mon, Aug 15, 2016 at 4:14 PM, Sanjiva Weerawarana  > wrote:
>
>> Looks good!
>>
>> In terms of test data we can take the video cameras in the LK Palm
>> Grove lobby as an input source to play around with people analysis. For
>> vehicles we can plop a camera pointing to Duplication Road and get plenty
>> of data :-).
>>
>> I guess we should do some small experiments to see how things work.
>>
>> Sanjiva.
>>
>> On Wed, Aug 10, 2016 at 3:02 PM, Srinath Perera 
>> wrote:
>>
>>> Attached document list some of the initial ideas about the topic.
>>> Anusha is exploring some of the ideas as an intern project.
>>>
>>> Please comment and help ( specially if you have worked on this area
>>> or has tried out things)
>>>
>>>
>>> Thanks
>>> Srinath
>>>
>>> --
>>> 
>>> Srinath Perera, Ph.D.
>>>

Re: [Architecture] RFC: Video and Image Processing Support in WSO2 Platform

2016-09-09 Thread Nirmal Fernando
Human Action Recognition using Factorized Spatio-Temporal Convolutional
Networks: http://arxiv.org/abs/1510.00562

On Fri, Sep 9, 2016 at 1:04 PM, Rajith Roshan  wrote:

> Hi Anusha,
>
> AFAIK Haar cascade is good to detect humans in still images. It does not
> use the factors like motion of humans which will be available in a real
> time video. Applying haar cascade for the video would process single frame
> at a time which will result in delay when processing real time video. You
> can use concepts  like optical flows and its implementations like Lucas
> Kanade (not directly , changed according to your scenario) in order to
> capture the motion as well.
>
> Thanks
> Rajith
>
> On Fri, Sep 9, 2016 at 11:26 AM, Sumedha Rubasinghe 
> wrote:
>
>> On Fri, Sep 9, 2016 at 11:24 AM, Anusha Jayasundara 
>> wrote:
>>
>>> Hi Geesara,
>>>
>>> I used Haar full body cascade and HoG pedestrian detection cascade, In
>>> Haar full body cascade they have mentioned that, upper body detection,
>>> lower body detection and full body detection is there in the cascade. even
>>> thought it is there, once I tried to use separate upper body detection
>>> cascade with full body detection cascade. but when it is implemented system
>>> took long time to process even a simple video with two person.
>>> I'll upload my code to Github repo.
>>> I still didn't work with real-time CCTV videos ,but I was able to build
>>> a real-time face detection system using the web cam of my laptop and it had
>>> issues on processing as the machine couldn't handle it.
>>>
>>
>> Anusha,
>> Did you just detect the face or associated that with a name as well?
>>
>>
>>
>>> We thought of doing video processing out side of the CEP and send the
>>> process data in to the CEP.(i.e human count, time_stamp, frame rate
>>> ,etc..). For now I send those data into CEP as a Json POST request.
>>>
>>>
>>> Thank You,
>>>
>>>
>>>
>>>
>>> On Wed, Sep 7, 2016 at 11:57 PM, Geesara Prathap 
>>> wrote:
>>>
 Hi Anusha,

 A few suggestions to improve your implementation.
 Haar and HoG  are used to get visual descriptors which can be used to
 describe an image. Then both of them are using boosting classification like
 AdaBoost to tune up its performance. When you are using haar-like feature
 extraction method you need to use more that one model in order to make the
 final decision. Let's say you are using  full body classifier for human
 detection. Along with this classifier,  can’t detect  upper body properly.
 When haar-like feature extraction is used you may have to use more that one
 classifier and the final decision will be taken aggregation or composition
 of those results. Next important thing is pre-processing. It may be
 composed of color balancing, gamma correction , changing color space and
 some of the factors which unique to  the environment which you're trying
 out. Processing model is also more important since this is to be done in
 real time. If you can explain your algorithm we will able to provide some
 guidance in order to improve your algorithm to get a better result.

 Since the main intention of this project is to facilitate support for
 images process in the WSO2 Platform. I am just curious to know, how do you
 process the video stream in real-time with the help of CEP. Since you are
 using CCTV feeds which might be using RTSP or RTMP, where do you process
 the incoming video stream? Are you to develop RTSP or RTMP input adapters
 so as to get input stream into CEP?

 Thanks,
 Geesara

 On Wed, Aug 31, 2016 at 8:16 PM, Anusha Jayasundara 
 wrote:

> Hi,
>
> The Progress of the video processing project is described in the
> attached pdf.
>
> On Wed, Aug 31, 2016 at 11:39 AM, Srinath Perera 
> wrote:
>
>> Anusha has the people counting from video working through CEP and
>> have a dashboard. ( Anusha can u send an update with screen shots?). We
>> will also setup a meeting.
>>
>> Also seems new Camaras automatically do human detection etc and add
>> object codes to videos, and if we can extract them, we can do some 
>> analysis
>> without heavy processing as well. Will explore this too.
>>
>> Also Facebook opensourced their object detection code called FaceMask
>> https://code.facebook.com/posts/561187904071636. Another to look at.
>>
>> --Srinath
>>
>>
>>
>> On Mon, Aug 15, 2016 at 4:14 PM, Sanjiva Weerawarana <
>> sanj...@wso2.com> wrote:
>>
>>> Looks good!
>>>
>>> In terms of test data we can take the video cameras in the LK Palm
>>> Grove lobby as an input source to play around with people analysis. For
>>> vehicles we can plop a camera pointing to Duplication Road and get 
>>> plenty
>>> 

Re: [Architecture] RFC: Video and Image Processing Support in WSO2 Platform

2016-09-09 Thread Rajith Roshan
Hi Anusha,

AFAIK a Haar cascade is good for detecting humans in still images. It does
not use factors like the motion of humans, which will be available in a
real-time video. Applying a Haar cascade to video processes a single frame
at a time, which will introduce delay when processing real-time video. You
can use concepts like optical flow and implementations such as Lucas-Kanade
(not directly, but adapted to your scenario) in order to capture the motion
as well.
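A small sketch of the Lucas-Kanade idea, tracking corner points between
consecutive frames to measure motion (the video path and parameters are
placeholders):

import cv2
import numpy as np

cap = cv2.VideoCapture("cctv.mp4")
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
prev_pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                   qualityLevel=0.01, minDistance=7)

while True:
    ok, frame = cap.read()
    if not ok or prev_pts is None:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    next_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray,
                                                   prev_pts, None)
    good_new = next_pts[status == 1]
    good_old = prev_pts[status == 1]
    if len(good_new):
        motion = np.linalg.norm(good_new - good_old, axis=1)
        print("tracked points:", len(good_new),
              "mean motion (px):", float(motion.mean()))
    prev_gray = gray
    prev_pts = good_new.reshape(-1, 1, 2)
cap.release()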

Thanks
Rajith

On Fri, Sep 9, 2016 at 11:26 AM, Sumedha Rubasinghe 
wrote:

> On Fri, Sep 9, 2016 at 11:24 AM, Anusha Jayasundara 
> wrote:
>
>> Hi Geesara,
>>
>> I used Haar full body cascade and HoG pedestrian detection cascade, In
>> Haar full body cascade they have mentioned that, upper body detection,
>> lower body detection and full body detection is there in the cascade. even
>> thought it is there, once I tried to use separate upper body detection
>> cascade with full body detection cascade. but when it is implemented system
>> took long time to process even a simple video with two person.
>> I'll upload my code to Github repo.
>> I still didn't work with real-time CCTV videos ,but I was able to build a
>> real-time face detection system using the web cam of my laptop and it had
>> issues on processing as the machine couldn't handle it.
>>
>
> Anusha,
> Did you just detect the face or associated that with a name as well?
>
>
>
>> We thought of doing video processing out side of the CEP and send the
>> process data in to the CEP.(i.e human count, time_stamp, frame rate
>> ,etc..). For now I send those data into CEP as a Json POST request.
>>
>>
>> Thank You,
>>
>>
>>
>>
>> On Wed, Sep 7, 2016 at 11:57 PM, Geesara Prathap 
>> wrote:
>>
>>> Hi Anusha,
>>>
>>> A few suggestions to improve your implementation.
>>> Haar and HoG  are used to get visual descriptors which can be used to
>>> describe an image. Then both of them are using boosting classification like
>>> AdaBoost to tune up its performance. When you are using haar-like feature
>>> extraction method you need to use more that one model in order to make the
>>> final decision. Let's say you are using  full body classifier for human
>>> detection. Along with this classifier,  can’t detect  upper body properly.
>>> When haar-like feature extraction is used you may have to use more that one
>>> classifier and the final decision will be taken aggregation or composition
>>> of those results. Next important thing is pre-processing. It may be
>>> composed of color balancing, gamma correction , changing color space and
>>> some of the factors which unique to  the environment which you're trying
>>> out. Processing model is also more important since this is to be done in
>>> real time. If you can explain your algorithm we will able to provide some
>>> guidance in order to improve your algorithm to get a better result.
>>>
>>> Since the main intention of this project is to facilitate support for
>>> images process in the WSO2 Platform. I am just curious to know, how do you
>>> process the video stream in real-time with the help of CEP. Since you are
>>> using CCTV feeds which might be using RTSP or RTMP, where do you process
>>> the incoming video stream? Are you to develop RTSP or RTMP input adapters
>>> so as to get input stream into CEP?
>>>
>>> Thanks,
>>> Geesara
>>>
>>> On Wed, Aug 31, 2016 at 8:16 PM, Anusha Jayasundara 
>>> wrote:
>>>
 Hi,

 The Progress of the video processing project is described in the
 attached pdf.

 On Wed, Aug 31, 2016 at 11:39 AM, Srinath Perera 
 wrote:

> Anusha has the people counting from video working through CEP and have
> a dashboard. ( Anusha can u send an update with screen shots?). We will
> also setup a meeting.
>
> Also seems new Camaras automatically do human detection etc and add
> object codes to videos, and if we can extract them, we can do some 
> analysis
> without heavy processing as well. Will explore this too.
>
> Also Facebook opensourced their object detection code called FaceMask
> https://code.facebook.com/posts/561187904071636. Another to look at.
>
> --Srinath
>
>
>
> On Mon, Aug 15, 2016 at 4:14 PM, Sanjiva Weerawarana  > wrote:
>
>> Looks good!
>>
>> In terms of test data we can take the video cameras in the LK Palm
>> Grove lobby as an input source to play around with people analysis. For
>> vehicles we can plop a camera pointing to Duplication Road and get plenty
>> of data :-).
>>
>> I guess we should do some small experiments to see how things work.
>>
>> Sanjiva.
>>
>> On Wed, Aug 10, 2016 at 3:02 PM, Srinath Perera 
>> wrote:
>>
>>> Attached document list some of the initial ideas about the topic.
>>> Anusha is exploring some 

Re: [Architecture] RFC: Video and Image Processing Support in WSO2 Platform

2016-09-08 Thread Sumedha Rubasinghe
On Fri, Sep 9, 2016 at 11:24 AM, Anusha Jayasundara 
wrote:

> Hi Geesara,
>
> I used Haar full body cascade and HoG pedestrian detection cascade, In
> Haar full body cascade they have mentioned that, upper body detection,
> lower body detection and full body detection is there in the cascade. even
> thought it is there, once I tried to use separate upper body detection
> cascade with full body detection cascade. but when it is implemented system
> took long time to process even a simple video with two person.
> I'll upload my code to Github repo.
> I still didn't work with real-time CCTV videos ,but I was able to build a
> real-time face detection system using the web cam of my laptop and it had
> issues on processing as the machine couldn't handle it.
>

Anusha,
Did you just detect the face, or did you also associate it with a name?



> We thought of doing video processing out side of the CEP and send the
> process data in to the CEP.(i.e human count, time_stamp, frame rate
> ,etc..). For now I send those data into CEP as a Json POST request.
>
>
> Thank You,
>
>
>
>
> On Wed, Sep 7, 2016 at 11:57 PM, Geesara Prathap  wrote:
>
>> Hi Anusha,
>>
>> A few suggestions to improve your implementation.
>> Haar and HoG  are used to get visual descriptors which can be used to
>> describe an image. Then both of them are using boosting classification like
>> AdaBoost to tune up its performance. When you are using haar-like feature
>> extraction method you need to use more that one model in order to make the
>> final decision. Let's say you are using  full body classifier for human
>> detection. Along with this classifier,  can’t detect  upper body properly.
>> When haar-like feature extraction is used you may have to use more that one
>> classifier and the final decision will be taken aggregation or composition
>> of those results. Next important thing is pre-processing. It may be
>> composed of color balancing, gamma correction , changing color space and
>> some of the factors which unique to  the environment which you're trying
>> out. Processing model is also more important since this is to be done in
>> real time. If you can explain your algorithm we will able to provide some
>> guidance in order to improve your algorithm to get a better result.
>>
>> Since the main intention of this project is to facilitate support for
>> images process in the WSO2 Platform. I am just curious to know, how do you
>> process the video stream in real-time with the help of CEP. Since you are
>> using CCTV feeds which might be using RTSP or RTMP, where do you process
>> the incoming video stream? Are you to develop RTSP or RTMP input adapters
>> so as to get input stream into CEP?
>>
>> Thanks,
>> Geesara
>>
>> On Wed, Aug 31, 2016 at 8:16 PM, Anusha Jayasundara 
>> wrote:
>>
>>> Hi,
>>>
>>> The Progress of the video processing project is described in the
>>> attached pdf.
>>>
>>> On Wed, Aug 31, 2016 at 11:39 AM, Srinath Perera 
>>> wrote:
>>>
 Anusha has the people counting from video working through CEP and have
 a dashboard. ( Anusha can u send an update with screen shots?). We will
 also setup a meeting.

 Also seems new Camaras automatically do human detection etc and add
 object codes to videos, and if we can extract them, we can do some analysis
 without heavy processing as well. Will explore this too.

 Also Facebook opensourced their object detection code called FaceMask
 https://code.facebook.com/posts/561187904071636. Another to look at.

 --Srinath



 On Mon, Aug 15, 2016 at 4:14 PM, Sanjiva Weerawarana 
 wrote:

> Looks good!
>
> In terms of test data we can take the video cameras in the LK Palm
> Grove lobby as an input source to play around with people analysis. For
> vehicles we can plop a camera pointing to Duplication Road and get plenty
> of data :-).
>
> I guess we should do some small experiments to see how things work.
>
> Sanjiva.
>
> On Wed, Aug 10, 2016 at 3:02 PM, Srinath Perera 
> wrote:
>
>> Attached document list some of the initial ideas about the topic.
>> Anusha is exploring some of the ideas as an intern project.
>>
>> Please comment and help ( specially if you have worked on this area
>> or has tried out things)
>>
>>
>> Thanks
>> Srinath
>>
>> --
>> 
>> Srinath Perera, Ph.D.
>>http://people.apache.org/~hemapani/
>>http://srinathsview.blogspot.com/
>>
>
>
>
> --
> Sanjiva Weerawarana, Ph.D.
> Founder, CEO & Chief Architect; WSO2, Inc.;  http://wso2.com/
> email: sanj...@wso2.com; office: (+1 650 745 4499 | +94  11 214 5345)
> x5700; cell: +94 77 787 6880 | +1 408 466 5099; voip: +1 650 265 8311
> blog: 

Re: [Architecture] RFC: Video and Image Processing Support in WSO2 Platform

2016-09-08 Thread Anusha Jayasundara
Hi Geesara,

I used Haar full body cascade and HoG pedestrian detection cascade, In Haar
full body cascade they have mentioned that, upper body detection, lower
body detection and full body detection is there in the cascade. even
thought it is there, once I tried to use separate upper body detection
cascade with full body detection cascade. but when it is implemented system
took long time to process even a simple video with two person.
I'll upload my code to Github repo.
I haven't worked with real-time CCTV videos yet, but I was able to build a
real-time face detection system using my laptop's webcam; it had processing
issues because the machine couldn't handle the load.
We thought of doing the video processing outside of CEP and sending only the
processed data into CEP (i.e. human count, time_stamp, frame rate, etc.).
For now I send that data to CEP as a JSON POST request.
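
Roughly, the publishing step is a small HTTP call like the sketch below (the
endpoint URL and the JSON envelope are assumptions; the exact format depends
on how the HTTP event receiver and its JSON mapping are configured in CEP):

import json
import time
import requests

# Placeholder URL; the real path depends on the CEP HTTP receiver configuration.
CEP_ENDPOINT = "http://localhost:9763/endpoints/humanCountReceiver"

def publish_count(human_count, frame_rate):
    # One event per processed frame/window: count, timestamp (ms) and frame rate.
    event = {"event": {"payloadData": {"human_count": human_count,
                                       "time_stamp": int(time.time() * 1000),
                                       "frame_rate": frame_rate}}}
    requests.post(CEP_ENDPOINT, data=json.dumps(event),
                  headers={"Content-Type": "application/json"}, timeout=5)
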


Thank You,




On Wed, Sep 7, 2016 at 11:57 PM, Geesara Prathap  wrote:

> Hi Anusha,
>
> A few suggestions to improve your implementation.
> Haar and HoG are used to obtain visual descriptors that describe an image,
> and both are typically combined with a boosting classifier such as AdaBoost
> to tune performance. When you use the haar-like feature extraction method,
> you usually need more than one model to make the final decision. Let's say
> you are using a full-body classifier for human detection: that classifier
> alone can't detect the upper body properly. So with haar-like features you
> may have to use more than one classifier and take the final decision by
> aggregating or composing their results. The next important thing is
> pre-processing. It may consist of color balancing, gamma correction,
> changing the color space, and other factors unique to the environment you
> are working in. The processing model is also important, since this has to
> run in real time. If you can explain your algorithm, we will be able to
> provide some guidance on how to improve it and get a better result.
>
> Since the main intention of this project is to facilitate image-processing
> support in the WSO2 Platform, I am curious to know how you process the
> video stream in real time with the help of CEP. Since you are using CCTV
> feeds, which are probably delivered over RTSP or RTMP, where do you process
> the incoming video stream? Are you planning to develop RTSP or RTMP input
> adapters to get the stream into CEP?
>
> Thanks,
> Geesara
>
> On Wed, Aug 31, 2016 at 8:16 PM, Anusha Jayasundara 
> wrote:
>
>> Hi,
>>
>> The progress of the video processing project is described in the attached
>> PDF.
>>
>> On Wed, Aug 31, 2016 at 11:39 AM, Srinath Perera 
>> wrote:
>>
>>> Anusha has the people counting from video working through CEP and has a
>>> dashboard. (Anusha, can you send an update with screenshots?) We will also
>>> set up a meeting.
>>>
>>> It also seems that new cameras automatically do human detection etc. and
>>> add object codes to videos; if we can extract them, we can do some analysis
>>> without heavy processing as well. We will explore this too.
>>>
>>> Also, Facebook has open-sourced their object detection code, FaceMask:
>>> https://code.facebook.com/posts/561187904071636. Another one to look at.
>>>
>>> --Srinath
>>>
>>>
>>>
>>> On Mon, Aug 15, 2016 at 4:14 PM, Sanjiva Weerawarana 
>>> wrote:
>>>
 Looks good!

 In terms of test data we can take the video cameras in the LK Palm
 Grove lobby as an input source to play around with people analysis. For
 vehicles we can plop a camera pointing to Duplication Road and get plenty
 of data :-).

 I guess we should do some small experiments to see how things work.

 Sanjiva.

 On Wed, Aug 10, 2016 at 3:02 PM, Srinath Perera 
 wrote:

> The attached document lists some of the initial ideas about the topic.
> Anusha is exploring some of them as an intern project.
>
> Please comment and help (especially if you have worked in this area or
> have tried things out).
>
>
> Thanks
> Srinath
>
> --
> 
> Srinath Perera, Ph.D.
>http://people.apache.org/~hemapani/
>http://srinathsview.blogspot.com/
>



 --
 Sanjiva Weerawarana, Ph.D.
 Founder, CEO & Chief Architect; WSO2, Inc.;  http://wso2.com/
 email: sanj...@wso2.com; office: (+1 650 745 4499 | +94  11 214 5345)
 x5700; cell: +94 77 787 6880 | +1 408 466 5099; voip: +1 650 265 8311
 blog: http://sanjiva.weerawarana.org/; twitter: @sanjiva
 Lean . Enterprise . Middleware

>>>
>>>
>>>
>>> --
>>> 
>>> Srinath Perera, Ph.D.
>>>http://people.apache.org/~hemapani/
>>>http://srinathsview.blogspot.com/
>>>
>>
>>
>>
>> --
>>
>> Anusha Jayasundara
>> Intern Software Engineer
>> 

Re: [Architecture] RFC: Video and Image Processing Support in WSO2 Platform

2016-09-07 Thread Geesara Prathap
Hi Anusha,

A few suggestions to improve your implementation.
Haar and HoG are used to obtain visual descriptors that describe an image,
and both are typically combined with a boosting classifier such as AdaBoost
to tune performance. When you use the haar-like feature extraction method,
you usually need more than one model to make the final decision. Let's say
you are using a full-body classifier for human detection: that classifier
alone can't detect the upper body properly. So with haar-like features you
may have to use more than one classifier and take the final decision by
aggregating or composing their results. The next important thing is
pre-processing. It may consist of color balancing, gamma correction,
changing the color space, and other factors unique to the environment you
are working in. The processing model is also important, since this has to
run in real time. If you can explain your algorithm, we will be able to
provide some guidance on how to improve it and get a better result.
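
To make that concrete, here is a minimal sketch of such a pipeline with
OpenCV's Python bindings (the cascade file paths, gamma value and grouping
parameters are placeholders, not a tuned setup):

import cv2
import numpy as np

# Stock cascades that ship with OpenCV; adjust the paths to your installation.
full_body = cv2.CascadeClassifier("haarcascade_fullbody.xml")
upper_body = cv2.CascadeClassifier("haarcascade_upperbody.xml")

def preprocess(frame, gamma=1.5):
    # Gamma correction via a lookup table, then grayscale + histogram equalization.
    table = np.array([((i / 255.0) ** (1.0 / gamma)) * 255
                      for i in range(256)]).astype("uint8")
    gray = cv2.cvtColor(cv2.LUT(frame, table), cv2.COLOR_BGR2GRAY)
    return cv2.equalizeHist(gray)

def detect_people(frame):
    gray = preprocess(frame)
    boxes = [list(r) for r in full_body.detectMultiScale(gray, 1.1, 3)]
    boxes += [list(r) for r in upper_body.detectMultiScale(gray, 1.1, 3)]
    if not boxes:
        return []
    # Aggregate the two classifiers' results into one decision; the list is
    # duplicated so detections seen by only one classifier are not discarded.
    merged, _ = cv2.groupRectangles(boxes + boxes, groupThreshold=1, eps=0.2)
    return merged
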

Since the main intention of this project is to facilitate image-processing
support in the WSO2 Platform, I am curious to know how you process the
video stream in real time with the help of CEP. Since you are using CCTV
feeds, which are probably delivered over RTSP or RTMP, where do you process
the incoming video stream? Are you planning to develop RTSP or RTMP input
adapters to get the stream into CEP?
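
For what it's worth, OpenCV's FFmpeg backend can usually read an RTSP feed
directly, so the decoding can stay outside CEP and only the extracted events
need to be published; a minimal sketch (the camera URL is a placeholder):

import cv2

cap = cv2.VideoCapture("rtsp://user:pass@192.168.1.10:554/stream1")
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # Run detection on the decoded frame here and publish only the extracted
    # events (counts, timestamps) to CEP.
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)  # stand-in for the real step
cap.release()
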

Thanks,
Geesara

On Wed, Aug 31, 2016 at 8:16 PM, Anusha Jayasundara 
wrote:

> Hi,
>
> The progress of the video processing project is described in the attached
> PDF.
>
> On Wed, Aug 31, 2016 at 11:39 AM, Srinath Perera  wrote:
>
>> Anusha has the people counting from video working through CEP and has a
>> dashboard. (Anusha, can you send an update with screenshots?) We will also
>> set up a meeting.
>>
>> It also seems that new cameras automatically do human detection etc. and
>> add object codes to videos; if we can extract them, we can do some analysis
>> without heavy processing as well. We will explore this too.
>>
>> Also, Facebook has open-sourced their object detection code, FaceMask:
>> https://code.facebook.com/posts/561187904071636. Another one to look at.
>>
>> --Srinath
>>
>>
>>
>> On Mon, Aug 15, 2016 at 4:14 PM, Sanjiva Weerawarana 
>> wrote:
>>
>>> Looks good!
>>>
>>> In terms of test data we can take the video cameras in the LK Palm Grove
>>> lobby as an input source to play around with people analysis. For vehicles
>>> we can plop a camera pointing to Duplication Road and get plenty of data
>>> :-).
>>>
>>> I guess we should do some small experiments to see how things work.
>>>
>>> Sanjiva.
>>>
>>> On Wed, Aug 10, 2016 at 3:02 PM, Srinath Perera 
>>> wrote:
>>>
 The attached document lists some of the initial ideas about the topic.
 Anusha is exploring some of them as an intern project.

 Please comment and help (especially if you have worked in this area or
 have tried things out).


 Thanks
 Srinath

 --
 
 Srinath Perera, Ph.D.
http://people.apache.org/~hemapani/
http://srinathsview.blogspot.com/

>>>
>>>
>>>
>>> --
>>> Sanjiva Weerawarana, Ph.D.
>>> Founder, CEO & Chief Architect; WSO2, Inc.;  http://wso2.com/
>>> email: sanj...@wso2.com; office: (+1 650 745 4499 | +94  11 214 5345)
>>> x5700; cell: +94 77 787 6880 | +1 408 466 5099; voip: +1 650 265 8311
>>> blog: http://sanjiva.weerawarana.org/; twitter: @sanjiva
>>> Lean . Enterprise . Middleware
>>>
>>
>>
>>
>> --
>> 
>> Srinath Perera, Ph.D.
>>http://people.apache.org/~hemapani/
>>http://srinathsview.blogspot.com/
>>
>
>
>
> --
>
> Anusha Jayasundara
> Intern Software Engineer
> WSO2
> +94711920369
>
> ___
> Architecture mailing list
> Architecture@wso2.org
> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>
>


-- 
Geesara Prathap Kulathunga
Software Engineer
WSO2 Inc; http://wso2.com
Mobile : +940772684174
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] RFC: Video and Image Processing Support in WSO2 Platform

2016-08-31 Thread Srinath Perera
Anusha has the people counting from video working through CEP and has a
dashboard. (Anusha, can you send an update with screenshots?) We will also
set up a meeting.

It also seems that new cameras automatically do human detection etc. and add
object codes to videos; if we can extract them, we can do some analysis
without heavy processing as well. We will explore this too.

Also, Facebook has open-sourced their object detection code, FaceMask:
https://code.facebook.com/posts/561187904071636. Another one to look at.

--Srinath



On Mon, Aug 15, 2016 at 4:14 PM, Sanjiva Weerawarana 
wrote:

> Looks good!
>
> In terms of test data we can take the video cameras in the LK Palm Grove
> lobby as an input source to play around with people analysis. For vehicles
> we can plop a camera pointing to Duplication Road and get plenty of data
> :-).
>
> I guess we should do some small experiments to see how things work.
>
> Sanjiva.
>
> On Wed, Aug 10, 2016 at 3:02 PM, Srinath Perera  wrote:
>
>> The attached document lists some of the initial ideas about the topic.
>> Anusha is exploring some of them as an intern project.
>>
>> Please comment and help (especially if you have worked in this area or
>> have tried things out).
>>
>>
>> Thanks
>> Srinath
>>
>> --
>> 
>> Srinath Perera, Ph.D.
>>http://people.apache.org/~hemapani/
>>http://srinathsview.blogspot.com/
>>
>
>
>
> --
> Sanjiva Weerawarana, Ph.D.
> Founder, CEO & Chief Architect; WSO2, Inc.;  http://wso2.com/
> email: sanj...@wso2.com; office: (+1 650 745 4499 | +94  11 214 5345)
> x5700; cell: +94 77 787 6880 | +1 408 466 5099; voip: +1 650 265 8311
> blog: http://sanjiva.weerawarana.org/; twitter: @sanjiva
> Lean . Enterprise . Middleware
>



-- 

Srinath Perera, Ph.D.
   http://people.apache.org/~hemapani/
   http://srinathsview.blogspot.com/
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] RFC: Video and Image Processing Support in WSO2 Platform

2016-08-15 Thread Vanjikumaran Sivajothy
On Wed, Aug 10, 2016 at 5:01 AM, Pumudu Ruhunage  wrote:

> Hi,
>
> It's better to do image/video pre-processing before extracting any usable
> information from images/videos. Reducing the amount of unnecessary
> information in a given image/video is a vital part
>

There is another catch: video cameras can capture more than 15 frames per
second, and you do not need to pre-process all 15 frames, so the sampling
rate also has a big impact on performance.
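
A minimal sketch of that sampling step (the source URL and sampling interval
are placeholders):

import cv2

cap = cv2.VideoCapture("rtsp://camera/stream")  # placeholder source
SAMPLE_EVERY = 5  # e.g. keep roughly 3 fps out of a 15 fps feed

frame_index = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    frame_index += 1
    if frame_index % SAMPLE_EVERY:
        continue  # drop this frame; only every Nth frame is pre-processed
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)  # stand-in for the pipeline
cap.release()

The right interval depends on the scene; for slow-moving lobby traffic a much
lower effective frame rate is usually enough.
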


> when it comes to increasing final accuracy (e.g. grayscale transformation,
> image restoration, etc.).
>
> Another important area to explore is which algorithm to use to detect
> shapes/features. For example, a number detection program should detect
> edges, and the image processing techniques used[1][2][3] should enhance
> edges in a given image to make these features easy to extract. The accuracy
> of these algorithms often depends on image noise level, brightness level,
> etc. Therefore, we might need to evaluate the accuracy of different
> algorithms for different types of images as well.
>
> [1] https://en.wikipedia.org/wiki/Prewitt_operator
> [2] https://en.wikipedia.org/wiki/Canny_edge_detector
> [3] https://en.wikipedia.org/wiki/Sobel_operator
>
>
> Thanks,
>
> On Wed, Aug 10, 2016 at 3:03 PM, Srinath Perera  wrote:
>
>> Anusha, can you send notes about what we did so far to this thread?
>>
>> On Wed, Aug 10, 2016 at 3:02 PM, Srinath Perera  wrote:
>>
>>> The attached document lists some of the initial ideas about the topic.
>>> Anusha is exploring some of them as an intern project.
>>>
>>> Please comment and help (especially if you have worked in this area or
>>> have tried things out).
>>>
>>>
>>> Thanks
>>> Srinath
>>>
>>> --
>>> 
>>> Srinath Perera, Ph.D.
>>>http://people.apache.org/~hemapani/
>>>http://srinathsview.blogspot.com/
>>>
>>
>>
>>
>> --
>> 
>> Srinath Perera, Ph.D.
>>http://people.apache.org/~hemapani/
>>http://srinathsview.blogspot.com/
>>
>> ___
>> Architecture mailing list
>> Architecture@wso2.org
>> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>>
>>
>
>
> --
> Pumudu Ruhunage
> Software Engineer | WSO2 Inc
> M: +94 779 664493  | http://wso2.com
> 
>
> ___
> Architecture mailing list
> Architecture@wso2.org
> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>
>


-- 
Vanjikumaran Sivajothy
Associate Technical Lead
WSO2 Inc. http://wso2.com
+1-925-464-6816

___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] RFC: Video and Image Processing Support in WSO2 Platform

2016-08-10 Thread Pumudu Ruhunage
Hi,

It's better to do image/video pre-processing before extracting any usable
information from images/videos. Reducing the amount of unnecessary
information in a given image/video is vital for increasing the final
accuracy (e.g. grayscale transformation, image restoration, etc.).

Another important area to explore is which algorithm to use to detect
shapes/features. For example, a number detection program should detect
edges, and the image processing techniques used[1][2][3] should enhance
edges in a given image to make these features easy to extract (a short
sketch follows the references below). The accuracy of these algorithms
often depends on image noise level, brightness level, etc. Therefore, we
might need to evaluate the accuracy of different algorithms for different
types of images as well.

[1] https://en.wikipedia.org/wiki/Prewitt_operator
[2] https://en.wikipedia.org/wiki/Canny_edge_detector
[3] https://en.wikipedia.org/wiki/Sobel_operator
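
As a small illustration of the edge-detection step (the file names and
thresholds are placeholders; Sobel or Prewitt kernels could be substituted
via cv2.Sobel or a custom cv2.filter2D kernel):

import cv2

image = cv2.imread("number_plate.jpg")          # placeholder input image
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)  # pre-processing: grayscale
blurred = cv2.GaussianBlur(gray, (5, 5), 0)     # reduce noise before edge detection
edges = cv2.Canny(blurred, 50, 150)             # hysteresis thresholds to tune per image
cv2.imwrite("edges.png", edges)
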


Thanks,

On Wed, Aug 10, 2016 at 3:03 PM, Srinath Perera  wrote:

> Anusha, can you send notes about what we did so far to this thread?
>
> On Wed, Aug 10, 2016 at 3:02 PM, Srinath Perera  wrote:
>
>> The attached document lists some of the initial ideas about the topic.
>> Anusha is exploring some of them as an intern project.
>>
>> Please comment and help (especially if you have worked in this area or
>> have tried things out).
>>
>>
>> Thanks
>> Srinath
>>
>> --
>> 
>> Srinath Perera, Ph.D.
>>http://people.apache.org/~hemapani/
>>http://srinathsview.blogspot.com/
>>
>
>
>
> --
> 
> Srinath Perera, Ph.D.
>http://people.apache.org/~hemapani/
>http://srinathsview.blogspot.com/
>
> ___
> Architecture mailing list
> Architecture@wso2.org
> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>
>


-- 
Pumudu Ruhunage
Software Engineer | WSO2 Inc
M: +94 779 664493  | http://wso2.com

___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] RFC: Video and Image Processing Support in WSO2 Platform

2016-08-10 Thread Srinath Perera
Anusha, can you send notes about what we did so far to this thread?

On Wed, Aug 10, 2016 at 3:02 PM, Srinath Perera  wrote:

> The attached document lists some of the initial ideas about the topic.
> Anusha is exploring some of them as an intern project.
>
> Please comment and help (especially if you have worked in this area or have
> tried things out).
>
>
> Thanks
> Srinath
>
> --
> 
> Srinath Perera, Ph.D.
>http://people.apache.org/~hemapani/
>http://srinathsview.blogspot.com/
>



-- 

Srinath Perera, Ph.D.
   http://people.apache.org/~hemapani/
   http://srinathsview.blogspot.com/
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture