On Tue, Apr 17, 2012 at 8:25 AM, Jörn Kottmann <[email protected]> wrote:

> On 04/17/2012 03:20 PM, Jason Baldridge wrote:
>
>> I haven't followed this in detail, but I do wonder why we don't have a
>> single model that just predicts all the types? That is the standard thing
>> to do...
>>
>
> We can do that. Anyway, you could still end up in a situation where
> you want to merge the output of multiple name finders.
> Maybe you have a maxent name finder and a dictionary or regular
> expression name finder.
>
> Then some applications need to merge the names so there are no
> overlaps.
>

That's a very good point.
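For the record, here is a minimal sketch of that kind of merging. The `Name` record, its fields, and the greedy highest-probability-wins rule are illustrative assumptions of mine, not the OpenNLP API:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Hypothetical candidate name: a half-open token span [start, end)
// with an entity type and the probability its finder assigned.
record Name(int start, int end, String type, double prob) {
    boolean overlaps(Name other) {
        return start < other.end() && other.start() < end;
    }
}

class NameMerger {
    // Greedy resolution: visit candidates in order of descending
    // probability, dropping any that overlap an already-kept name.
    static List<Name> merge(List<Name> candidates) {
        List<Name> byProb = new ArrayList<>(candidates);
        byProb.sort(Comparator.comparingDouble(Name::prob).reversed());

        List<Name> kept = new ArrayList<>();
        for (Name candidate : byProb) {
            if (kept.stream().noneMatch(candidate::overlaps)) {
                kept.add(candidate);
            }
        }
        kept.sort(Comparator.comparingInt(Name::start));
        return kept;
    }
}
```

The greedy rule is just the simplest tie-breaker; the overlap constraint itself is exactly what an exact method like ILP would enforce globally.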


>
>> FWIW, integrating the output of multiple classifiers and incorporating
>> their probabilities is something that can be done quite cleanly with
>> approaches like Integer Linear Programming.
>>
>
> Sounds interesting, do you have a paper on this?
>
>
Pascal Denis and I used it for coreference:

http://www.sepln.org/revistaSEPLN/revista/42/03Articulos-p19-87a96.pdf

There is a fairly extensive body of work in NLP using integer linear
programming:

http://www.google.com/cse?cx=011664571474657673452%3A4w9swzkcxiy&cof=FORID%3A0&q=integer+linear+programming#gsc.tab=0&gsc.q=integer%20linear%20programming&gsc.page=1

One of the primary people who brought it to the attention of the NLP
research community was Dan Roth. Here's a tutorial he and his students gave
in 2010:

http://l2r.cs.uiuc.edu/%7Edanr/Talks/ILP-CCM-Tutorial-NAACL10.ppt
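To make the connection to the merging problem above concrete, here is a toy ILP formulation of my own (not taken from the paper or the tutorial): one binary variable per candidate name, an objective that sums the finders' probabilities, and a constraint forbidding overlapping pairs.

```latex
\begin{align*}
\max_{x}\;      & \sum_{i} p_i \, x_i
                && \text{$p_i$: probability a finder assigned to candidate $i$} \\
\text{s.t.}\;   & x_i + x_j \le 1
                && \text{for every pair $(i, j)$ of overlapping candidates} \\
                & x_i \in \{0, 1\}
                && \text{$x_i = 1$ iff candidate name $i$ is kept}
\end{align*}
```

An off-the-shelf solver then gives the globally best non-overlapping set, rather than whatever a greedy pass happens to pick first.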

> What do you think about having a simple baseline version?

+1



-- 
Jason Baldridge
Associate Professor, Department of Linguistics
The University of Texas at Austin
http://www.jasonbaldridge.com
http://twitter.com/jasonbaldridge
