Well, the question is about running UIMA over Hadoop: how to do that, given that UIMA has XML descriptors which use relative URLs and locations, and these throw exceptions.
But I can probably do without that answer.
Simplifying the problem:
I create a jar for my application and try to run a MapReduce job.
In the map I try to read an XML resource, which gives this kind of
exception:
java.io.FileNotFoundException: /tmp/hadoop-root/mapred/local/taskTracker/jobcache/job_200806102252_0028/task_200806102252_0028_m_000000_0/./descriptors/annotators/RecordCandidateAnnotator.xml (No such file or directory)
        at java.io.FileInputStream.open(Native Method)
        at java.io.FileInputStream.<init>(FileInputStream.java:106)
        at java.io.FileInputStream.<init>(FileInputStream.java:66)
        at sun.net.www.protocol.file.FileURLConnection.connect(FileURLConnection.java:70)
        at sun.net.www.protocol.file.FileURLConnection.getInputStream(FileURLConnection.java:161)
        at java.net.URL.openStream(URL.java:1009)
        at org.apache.uima.util.XMLInputSource.<init>(XMLInputSource.java:83)
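
For reference, the map-side code is roughly the following (a reconstruction, not the exact source; the class and method names are placeholders, but the descriptor path is the one from the trace):

import org.apache.uima.UIMAFramework;
import org.apache.uima.analysis_engine.AnalysisEngine;
import org.apache.uima.resource.ResourceSpecifier;
import org.apache.uima.util.XMLInputSource;

public class RecordCandidateMapperSetup {
    public static AnalysisEngine createEngine() throws Exception {
        // Relative path: it is resolved against the working directory of the
        // task JVM, which is why the FileNotFoundException above points into
        // the taskTracker jobcache directory.
        XMLInputSource in =
            new XMLInputSource("descriptors/annotators/RecordCandidateAnnotator.xml");
        ResourceSpecifier specifier =
            UIMAFramework.getXMLParser().parseResourceSpecifier(in);
        return UIMAFramework.produceAnalysisEngine(specifier);
    }
}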
I think I need to pass the content of the jar which contains the resource
XML and the classes (other than the job class) to each and every
taskXXXXXXX that gets created.
How can I do that?
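
For concreteness, what I would like to end up with is something like the sketch below, loading the descriptor from the job jar through the class loader (untested; the class name is a placeholder, and the null base path passed to XMLInputSource is exactly where the relative imports still break):

import java.io.InputStream;
import org.apache.uima.UIMAFramework;
import org.apache.uima.analysis_engine.AnalysisEngine;
import org.apache.uima.resource.ResourceSpecifier;
import org.apache.uima.util.XMLInputSource;

public class ClasspathDescriptorLoader {
    public static AnalysisEngine createEngine() throws Exception {
        // The job jar is shipped to every task, so a resource packaged in it
        // should be reachable through the class loader by its path in the jar.
        InputStream stream = ClasspathDescriptorLoader.class.getClassLoader()
            .getResourceAsStream("descriptors/annotators/RecordCandidateAnnotator.xml");
        // The second argument is the base used to resolve relative imports
        // inside the descriptor; passing null means imports by location have
        // nothing to resolve against, which is the remaining problem.
        XMLInputSource in = new XMLInputSource(stream, null);
        ResourceSpecifier specifier =
            UIMAFramework.getXMLParser().parseResourceSpecifier(in);
        return UIMAFramework.produceAnalysisEngine(specifier);
    }
}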
Regards
Rohan
On Wed, Jun 11, 2008 at 5:12 PM, Michael Baessler <[EMAIL PROTECTED]>
wrote:
> rohan rai wrote:
> > Hi
> > A simple thing such as a name annotator which has an import (by
> > location) of a type system starts throwing exceptions when I create a
> > jar of the application I am developing and run it over Hadoop.
> >
> > If I have to do it in a Java class file, then I can use
> > XMLInputSource in = new XMLInputSource(ClassLoader.getSystemResourceAsStream(aeXmlDescriptor), null);
> >
> > But the relative paths in annotators, analysis engines etc. start
> > throwing exceptions.
> > Please Help
> >
> > Regards
> > Rohan
> >
> I'm not sure I understand your question, but I think you need some help
> with the exceptions you get.
> Can you provide the exception stack trace?
>
> -- Michael
>