Hello all, Thanks for all of the support so far! Now, I would like to train a chunker model to recognize domain-specific terms and group them. However, I don't really have a comprehensive corpus that covers the rest of the English language - just the particular cases I'm interested in. I thought it might be better to merely augment the existing chunker model. Is there a way to append to an existing model, or perhaps append my training data to a chunking corpus and then train on that? Has anyone tried extending the existing model - and if so, what was done?
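For the "append my training data to a chunking corpus" route, a minimal sketch of what I had in mind - this assumes both corpora are in CoNLL-2000-style IOB format (one `token POS chunk-tag` line per token, blank line between sentences), and `merge_iob_corpora` is just a hypothetical helper name, not anything from a toolkit:

```python
def merge_iob_corpora(base: str, extra: str) -> str:
    """Concatenate two CoNLL-style IOB chunking corpora.

    Each corpus is a string of `token POS chunk-tag` lines, with blank
    lines separating sentences. Trimming and re-inserting exactly one
    blank line at the seam keeps the sentence boundary between the last
    sentence of `base` and the first sentence of `extra` intact, so a
    trainer reading the merged file won't glue two sentences together.
    """
    return base.rstrip("\n") + "\n\n" + extra.lstrip("\n")


# Example: a general-English sentence followed by a domain-specific one
general = "He PRP B-NP\nruns VBZ B-VP\n"
domain = "nail NN B-NP\ngun NN I-NP\n"
merged = merge_iob_corpora(general, domain)
```

The merged text could then be written out and fed to whatever chunker trainer you're using as a single training file.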
Patrick Baggett
Online Engineer - Search Team
e: [email protected]
p: +1 (214) 202-8964
