SVM is reasonable.

SGD with hand-tuning of the learning parameters may work.
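For concreteness, here is a minimal sketch of that idea using scikit-learn's SGDClassifier as a stand-in for whatever SGD learner you use; the example texts, labels, and learning-rate settings are all illustrative, not from this thread:

```python
# Hand-tuned SGD text classifier on a tiny labeled set (illustrative data).
# A constant learning rate (eta0) is one of the knobs you would tune by hand.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import SGDClassifier
from sklearn.pipeline import make_pipeline

texts = [
    "football match goal team",      # sports
    "basketball team score game",    # sports
    "tennis player match win",       # sports
    "bake bread oven flour",         # cooking
    "recipe soup onion garlic",      # cooking
    "oven roast chicken recipe",     # cooking
]
labels = ["sports", "sports", "sports", "cooking", "cooking", "cooking"]

model = make_pipeline(
    TfidfVectorizer(),
    # hinge loss = linear SVM trained by SGD; eta0 is the hand-tuned step size
    SGDClassifier(learning_rate="constant", eta0=0.5, random_state=0),
)
model.fit(texts, labels)
print(model.predict(["team wins the football game"]))
```

With only a handful of examples per class the result is fragile, which is exactly the assessment problem mentioned below.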

With so little training data, you will have a difficult time assessing whether
your system is working.

Sometimes, you can rephrase your problem so that all of your training data
across many situations can be pooled together.  There is a nice paper about
Gmail Priority Inbox describing just such an example, where Google used
meta-features so that they could train a few models across all users.
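The pooling idea can be sketched roughly as follows: instead of one model per user over raw word features, recode each example with user-relative meta-features so a single model trains on everyone's data.  The feature names and values below are hypothetical illustrations, not taken from the Priority Inbox paper:

```python
# Pooling via meta-features (hypothetical features, illustrative data).
# Each row describes a (user, message) pair in user-relative terms, so
# examples from different users live in the same feature space and one
# model can be trained across all of them.
from sklearn.linear_model import LogisticRegression

# [sender_reply_rate_for_this_user, fraction_of_thread_user_wrote, is_direct_to_user]
X = [
    [0.90, 0.5, 1],  # user A: important mail
    [0.10, 0.0, 0],  # user A: bulk mail
    [0.80, 0.4, 1],  # user B: important mail
    [0.05, 0.0, 0],  # user B: bulk mail
]
y = [1, 0, 1, 0]  # 1 = important, 0 = not

clf = LogisticRegression().fit(X, y)
print(clf.predict([[0.70, 0.3, 1]]))
```

The payoff is that a user with only a handful of labeled messages still benefits from everyone else's data.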

On Mon, Sep 12, 2011 at 6:52 AM, Loic Descotte <[email protected]> wrote:

>
> My classification problem is very similar to the "20 newsgroups" example.
>  But I can't use a large quantity of data for training.
> ...
> I'd like to try with 10 examples per category (with 2 or 3 categories),
> choosing good examples with the most frequent keywords to be sure that the
> learning phase will be effective.
>
> Can this work with so little data?
>
>
