Hello:

     I am a new user of Crunch, and I plan to deploy it for use at my company.
     Based on my understanding, Crunch acts like a pre-processor that ultimately
transforms the user's code into MapReduce jobs, so one pipeline can be compiled
into a number of MapReduce jobs. I would like to know how Crunch allocates
resources for these jobs. Since we can configure memory usage for MapReduce jobs
in Hadoop via a configuration file, how does Crunch handle this for the generated
jobs? Do they all share a single configuration, or does each job get its own
configuration based on its load?
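     For example, right now I only know how to pass a single Hadoop
Configuration when constructing the pipeline, which I assume would then apply to
every generated job. A minimal sketch of what I mean (the property values and
MyApp class are just placeholders):

```java
import org.apache.crunch.Pipeline;
import org.apache.crunch.impl.mr.MRPipeline;
import org.apache.hadoop.conf.Configuration;

public class MyApp {
    public static void main(String[] args) {
        // One shared Configuration for the whole pipeline --
        // do all the MapReduce jobs Crunch generates inherit these settings?
        Configuration conf = new Configuration();
        conf.set("mapreduce.map.memory.mb", "2048");    // placeholder value
        conf.set("mapreduce.reduce.memory.mb", "4096"); // placeholder value

        Pipeline pipeline = new MRPipeline(MyApp.class, conf);
        // ... build and run the pipeline ...
    }
}
```

Is there a way to give each generated job its own settings instead?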
     Looking forward to your help!
     Thank you

Best Regards
Lu Heng
