I am not sure I understand.  When I think of the kernel trick, I think of 
converting a linear decision boundary into a higher-order decision boundary 
(e.g., a linear threshold on r <- x^2 + y^2 corresponds to a circular decision 
boundary in the original space).  Maybe I am missing something?  I'll look into 
this a bit more.
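
To make sure we are talking about the same thing, here is a minimal sketch of 
what I have in mind (toy data, everything in it is made up): mapping (x, y) to 
r = x^2 + y^2 turns a circular decision boundary into a plain linear threshold 
on r.

import numpy as np

# Toy data: points inside the unit circle are class A, outside are class B.
rng = np.random.default_rng(0)
xy = rng.uniform(-2, 2, size=(200, 2))
labels = np.where(xy[:, 0]**2 + xy[:, 1]**2 < 1.0, "A", "B")

# Explicit feature map: r = x^2 + y^2.
# In the mapped space the circular boundary is just the linear rule r < 1.
r = xy[:, 0]**2 + xy[:, 1]**2
predicted = np.where(r < 1.0, "A", "B")

print((predicted == labels).all())   # True: the boundary is linear in r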
Dan


> On Feb 25, 2016, at 11:11 AM, Alexander Wallin 
> <alexan...@wallindevelopment.se> wrote:
> 
> Can’t you make a compound feature (or features), i.e. use the kernel trick?
> 
> Alexander
> 
>> 25 feb. 2016 kl. 17:06 skrev Russ, Daniel (NIH/CIT) [E] <dr...@mail.nih.gov>:
>> 
>> Hi,
>> Is it possible to change the prior based on a feature?
>> 
>> For example, if I have the following data (very simplified)
>> 
>> Class, Predicates
>> 
>> A, X
>> A, X
>> B, X
>> 
>> You would expect class A 2/3 of the time when the feature is just predicate 
>> X.
>> 
>> However, let's say I know of another feature Y that can take values 
>> {Q,R,S}, with P(A|Q)=0.8, P(A|R)=0.1, P(A|S)=0.3.
>> 
>> Is there any way to add feature Y to the classifier taking advantage of this 
>> information?
>> Thanks
>> Dan
>> 
>> 
> 
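
Re the prior question quoted above, one direction I may try is to treat Y as a 
second feature and combine its evidence with P(A|X) under a conditional 
independence (Naive Bayes style) assumption.  A minimal sketch below; note that 
the class prior P(A) = 0.5 and the restriction to two classes are assumptions 
on my part, while P(A|X) and the P(A|Y) values come from the toy example above.

# Combine P(A|X) with P(A|Y) assuming X and Y are conditionally
# independent given the class (two classes, A and B).
p_a_given_x = 2.0 / 3.0                         # from the toy data above
p_a_given_y = {"Q": 0.8, "R": 0.1, "S": 0.3}    # from the toy example above
p_a = 0.5                                       # ASSUMED class prior, not given

def posterior_a(y_value):
    # P(A|X,Y) is proportional to P(A|X) * P(A|Y) / P(A); same form for B.
    num_a = p_a_given_x * p_a_given_y[y_value] / p_a
    num_b = (1 - p_a_given_x) * (1 - p_a_given_y[y_value]) / (1 - p_a)
    return num_a / (num_a + num_b)

for y in ("Q", "R", "S"):
    print(y, round(posterior_a(y), 3))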
