I keep talking to myself... hope it doesn't annoy you too much!

We thought of a solution to our problem in which we build a new .job
file, in accordance with our crawl configuration, and then pass it to
Hadoop for execution... Is there somewhere I can look for the
specification of the .job format?
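
To make the idea concrete, here is roughly what I picture on the
submission side, using the classic mapred API. The jar name is invented,
and IdentityMapper is only a stand-in so the sketch is self-contained;
the real jar would carry our configured crawl steps:

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.lib.IdentityMapper;

public class SubmitCrawlStep {
    public static void main(String[] args) throws Exception {
        JobConf conf = new JobConf();
        conf.setJobName("crawl-step");
        // Hand the cluster a prebuilt jar; as far as I can tell a
        // Nutch-style .job file is just such a jar with the job classes
        // and configuration bundled inside.
        conf.setJar("crawler.job");                // invented file name
        conf.setMapperClass(IdentityMapper.class); // stand-in for a real crawl step
        // With the default TextInputFormat, IdentityMapper emits
        // (LongWritable offset, Text line) pairs.
        conf.setOutputKeyClass(LongWritable.class);
        conf.setOutputValueClass(Text.class);
        FileInputFormat.setInputPaths(conf, new Path(args[0]));
        FileOutputFormat.setOutputPath(conf, new Path(args[1]));
        JobClient.runJob(conf);                    // submit and wait
    }
}

From what I can tell, Nutch's .job file is just an ordinary jar with the
job classes and configuration bundled inside, so perhaps there is no
separate format spec to find; confirmation would be great.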

Thanks again,

Pedro

I wrote:
> Hi hadoopers,
>
> I'm working on an enterprise search engine that runs on a Hadoop
> cluster but is controlled from the outside. I managed to implement a
> simple crawler much like Nutch's...
> Now I have a new system requirement: the crawl process must be
> configurable from outside Hadoop. This means I should be able to add
> steps to the crawling process that the cluster would execute without
> knowing beforehand what they are... Since serialization is not
> possible, is there another way to achieve the same effect?
>
> Using Writable means I need the implementations to be on each node so
> they can read the object data from HDFS... but then I just get the
> same object back, not a new implementation, right?
>
> Any thoughts will be appreciated,
>
> Pedro
>
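
PS: to make the Writable point above concrete, a minimal sketch (the
CrawlStep class and its fields are invented for illustration).
readFields() can only repopulate the fields of a class the node already
has on its classpath; it never transports bytecode:

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.io.Writable;

// Invented example of a configurable crawl step. Only the field values
// below travel through write()/readFields(); the class itself must
// already be present on every node's classpath.
public class CrawlStep implements Writable {
    private Text stepName = new Text();
    private int maxDepth;

    public void write(DataOutput out) throws IOException {
        stepName.write(out);    // serializes field data, never bytecode
        out.writeInt(maxDepth);
    }

    public void readFields(DataInput in) throws IOException {
        stepName.readFields(in);
        maxDepth = in.readInt();
    }
}

So shipping a genuinely new step seems to mean shipping its class files
too, e.g. bundling them into the .job jar or pushing them out with
DistributedCache, so the nodes can load the class before deserializing
its data.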
