Hi Dan,

Can't you call (A, Q) A', (A, R) A'', and so on, and just treat them
as separate labels altogether? Your classifier can then learn using these
"fake" labels.

You can then keep an in-memory map of what each fake label (A'' for
example) corresponds to in reality (A'' in this case = (A, R)).
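
A rough sketch of that idea (plain Python; the encode/decode helpers and
the "|" separator are just illustrative, not from any particular library):

    # Build compound ("fake") labels by joining the true class with the value
    # of the extra feature, and keep a way to map them back afterwards.
    def encode(label, y_value):
        return label + "|" + y_value               # ("A", "Q") -> "A|Q"

    def decode(fake_label):
        label, y_value = fake_label.split("|", 1)
        return label, y_value                      # "A|Q" -> ("A", "Q")

    # The training data then carries one fake label per (class, Y) pair.
    rows = [("A", "Q"), ("A", "R"), ("B", "S")]
    fake_labels = [encode(c, y) for c, y in rows]  # ["A|Q", "A|R", "B|S"]

    # At prediction time, map the classifier's fake label back to reality.
    real_class, y_value = decode("A|R")            # ("A", "R")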

Best Regards,
Nishant Kelkar

On Thursday, February 25, 2016, Russ, Daniel (NIH/CIT) [E] <
dr...@mail.nih.gov> wrote:

> I am not sure I understand.  When I think of the kernel trick, I think of
> converting a linear decision boundary into a higher-order decision
> boundary (e.g., r <- x^2 + y^2 gives a circular decision boundary).  Maybe
> I am missing something?  I’ll look into this a bit more.
> Dan
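>
> A tiny sketch of that kind of mapping (plain Python; the radius and sample
> points are made up, purely illustrative):
>
>     # Explicit feature map: a circular boundary x^2 + y^2 = R^2 in the
>     # original (x, y) space becomes a simple threshold on r in the mapped space.
>     def feature_map(x, y):
>         return x ** 2 + y ** 2        # r = x^2 + y^2
>
>     R_SQUARED = 1.0                   # assumed radius^2 of the boundary
>
>     def classify(x, y):
>         return "A" if feature_map(x, y) < R_SQUARED else "B"
>
>     print(classify(0.5, 0.5))   # inside the unit circle  -> A
>     print(classify(1.5, 1.5))   # outside the unit circle -> B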
>
>
> > On Feb 25, 2016, at 11:11 AM, Alexander Wallin <
> > alexan...@wallindevelopment.se> wrote:
> >
> > Can’t you make a compounded feature (or features), i.e. use the kernel
> > trick?
> >
> > Alexander
> >
> >> On 25 Feb 2016, at 17:06, Russ, Daniel (NIH/CIT) [E] <
> >> dr...@mail.nih.gov> wrote:
> >>
> >> Hi,
> >> Is it possible to change the prior based on a feature?
> >>
> >> For example, suppose I have the following data (very simplified):
> >>
> >> Class, Predicates
> >>
> >> A, X
> >> A, X
> >> B, X
> >>
> >> You would expect class A 2/3 of the time when the feature is just
> >> predicate X.
> >>
> >> However, let's say I know of another feature Y that can take the values
> >> {Q, R, S}, with P(A|Q) = 0.8, P(A|R) = 0.1, P(A|S) = 0.3.
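> >>
> >> To make those numbers concrete (a small plain-Python sketch that just
> >> restates the example above; the variable names are made up):
> >>
> >>     # With predicate X alone, A appears in 2 of the 3 rows above.
> >>     rows = [("A", "X"), ("A", "X"), ("B", "X")]
> >>     prior_A = sum(1 for c, _ in rows if c == "A") / float(len(rows))  # 2/3
> >>
> >>     # Known conditional probabilities of class A given each value of Y.
> >>     p_A_given_y = {"Q": 0.8, "R": 0.1, "S": 0.3}
> >>
> >>     print(prior_A)            # 0.666...
> >>     print(p_A_given_y["Q"])   # 0.8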
> >>
> >> Is there any way to add feature Y to the classifier taking advantage of
> >> this information?
> >> Thanks
> >> Dan
> >>
> >>
> >
>
>
