Hello Prasanna,

thanks for getting back.

> I was going through the application guidelines on the mlpack wiki page and I
> came across the section on testing the project. I never actually thought
> about testing until now, but now that I think about it, it is hard to test
> generative models. One direct way to do it is to give the model a noisy input
> image, perform CD to get a sample from the model distribution, and make sure
> that the reconstruction error is small. However, I think this alone is not
> sufficient to test generative models, and it is applicable to the RBM, DBN,
> and DBM only. I haven't yet gone through GANs (I am planning on finishing
> that in the next couple of days), so I don't have any concrete strategy for
> testing these models. I am searching for appropriate testing methods; maybe
> you can help me with this?

For each model there are a couple of tests that I can think of:

RBM:
- Train the model on a subset of the MNIST dataset and make sure the trained
  filters are Gabor-like.
- Reconstruct the negative samples as images and check their correlation with
  the input (see the sketch below).

Take a look at http://deeplearning.net/tutorial/rbm.html#tracking-progress for
more information about how to test RBMs.
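
To make the correlation check concrete, here is a rough sketch using Armadillo
(the RBM itself doesn't exist in mlpack yet, so training and Gibbs sampling
are left out; this only shows the check, and the function name is just a
placeholder):

#include <mlpack/core.hpp>

// Correlation between an input batch and its reconstruction (e.g. the
// negative samples after a few steps of Gibbs sampling); both matrices
// hold one sample per column.
double ReconstructionCorrelation(const arma::mat& input,
                                 const arma::mat& reconstruction)
{
  return arma::as_scalar(arma::cor(arma::vectorise(input),
                                   arma::vectorise(reconstruction)));
}

A trained model should give a value close to 1, so a test could assert that
the correlation stays above some threshold on a held-out batch.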

DBN:
- Test the validation error on the classification of the MNIST dataset by
  stacking a logistic regression layer on top (see the sketch below).
- Compare results with other existing implementations.
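
A rough sketch of that evaluation step, using mlpack's SoftmaxRegression
(multinomial logistic regression) on top of the DBN; the feature matrices are
assumed to hold the trained DBN's top-layer activations, one sample per
column, and the labels are the MNIST digits:

#include <mlpack/core.hpp>
#include <mlpack/methods/softmax_regression/softmax_regression.hpp>

using namespace mlpack;

// Train a softmax layer on the DBN's top-layer activations and report
// the classification accuracy (in percent) on the validation split.
double ValidationAccuracy(const arma::mat& trainFeatures,
                          const arma::Row<size_t>& trainLabels,
                          const arma::mat& validFeatures,
                          const arma::Row<size_t>& validLabels)
{
  // 10 classes for the MNIST digits; default regularization.
  regression::SoftmaxRegression<> softmax(trainFeatures, trainLabels, 10);
  return softmax.ComputeAccuracy(validFeatures, validLabels);
}

The resulting accuracy could then be compared against a fixed baseline or
against the numbers reported by a reference implementation.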

RBFN:
- Test on a subset of the MNIST dataset.
- Test on the NETtalk task.
- Compare results with other existing implementations.

I'm sure there are a couple more simple tests that we can come up with.
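
Whatever checks we end up with would plug into mlpack's Boost.Test-based test
suite; a minimal skeleton (the suite/test names and the 0.9 threshold are just
placeholders) might look like:

#include <boost/test/unit_test.hpp>

BOOST_AUTO_TEST_SUITE(RBMNetworkTest);

// Train on a small MNIST subset, reconstruct a held-out batch, and
// require that it correlates strongly with the original input.
BOOST_AUTO_TEST_CASE(MNISTReconstructionTest)
{
  // Elided: train the RBM on the MNIST subset and obtain `input` and
  // `reconstruction` for a held-out batch.
  const double correlation = ReconstructionCorrelation(input, reconstruction);
  BOOST_REQUIRE_GE(correlation, 0.9);
}

BOOST_AUTO_TEST_SUITE_END();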

Also, it looks like I missed the previous message. I'm not sure I see a
benefit in implementing the Hopfield model; correct me if I'm wrong, but isn't
the RBM a similar and superior successor model?
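
For what it's worth, the reason I think of the RBM as the successor: both are
energy-based models over binary units. The Hopfield network assigns an energy
E(x) = -1/2 * sum_{i,j} w_ij x_i x_j - sum_i b_i x_i to a single layer of
visible units, while the RBM adds a hidden layer,
E(v, h) = -sum_i a_i v_i - sum_j b_j h_j - sum_{i,j} v_i w_ij h_j, and the
restriction to bipartite visible-hidden connections is what makes CD training
tractable.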

I hope this is helpful.

Thanks,
Marcus

> On 15 Mar 2017, at 06:38, Prasanna Patil <[email protected]> wrote:
> 
> Hi Marcus, 
> 
> I was going through the application guidelines on the mlpack wiki page and I
> came across the section on testing the project. I never actually thought
> about testing until now, but now that I think about it, it is hard to test
> generative models. One direct way to do it is to give the model a noisy input
> image, perform CD to get a sample from the model distribution, and make sure
> that the reconstruction error is small. However, I think this alone is not
> sufficient to test generative models, and it is applicable to the RBM, DBN,
> and DBM only. I haven't yet gone through GANs (I am planning on finishing
> that in the next couple of days), so I don't have any concrete strategy for
> testing these models. I am searching for appropriate testing methods; maybe
> you can help me with this?
> 
> I know you are very busy, so feel free to ignore this email if you don't
> have time for it.
> 
> Thanks,
> Prasanna
> 
> On Wed, Mar 1, 2017 at 7:25 AM, Prasanna Patil <[email protected]> wrote:
> Hi Marcus,
> 
> > I guess, if you find some model you think is interesting and is somewhat
> > manageable to implement, you can do that. We are always open to new
> > interesting models/methods. That's also a great way to work with the
> > codebase, but don't feel obligated.
> 
> I was thinking of implementing a Hopfield network, if it is not part of the
> GSoC project(?).
> 
> I have implemented it in Python here
> <https://github.com/prasanna08/MachineLearning/blob/master/hopfield.py>. Can
> you help me with the interface of the Hopfield model, i.e. the functions
> that should be visible to the user, such as get_output (the input would be a
> corrupted image) and compute_energy (the energy associated with a particular
> input state)?
> 
> The version I have implemented is quite basic and works well for binary
> images. I have not tried it with grayscale images (my implementation does
> use tanh outputs rather than binary outputs, however). I will go through the
> paper <http://page.mi.fu-berlin.de/rojas/neural/chapter/K13.pdf> you
> mentioned for more details. Also, I have exams this week, so I will
> implement this next week, if that's okay?
> 
> Initially I was thinking of implementing a batch normalization layer, but I
> found that in mlpack only one training example is processed per iteration
> (am I correct?). Batch norm depends on a minibatch of inputs, so I don't
> know how to do it in the current setup.
> 
> Thanks,
> Prasanna
> 

_______________________________________________
mlpack mailing list
[email protected]
http://knife.lugatgt.org/cgi-bin/mailman/listinfo/mlpack
