I happened to have a copy of 18.1 lying about, and the JobConf is added to the per-process runtime environment in 18.1.
The entire configuration from the JobConf object is added to the environment, with the jobconf key names transformed slightly: any character in the key name that is not in [a-zA-Z0-9] is replaced by an '_' character.

On Tue, Jun 23, 2009 at 10:29 AM, Bo Shi <b...@deadpuck.net> wrote:
> Jason, do you know offhand when this feature was introduced?  .18.x?
>
> Thanks,
> Bo
>
> On Mon, Jun 22, 2009 at 10:58 PM, jason hadoop <jason.had...@gmail.com> wrote:
> > Check the process environment for your streaming tasks; generally the
> > configuration variables are exported into the process environment.
> >
> > The Mapper input file is normally stored as some variant of
> > mapred.input.file. The reducer's input is the mapper output for that
> > reduce, so the input file is not relevant.
> >
> > On Mon, Jun 22, 2009 at 7:21 PM, C G <parallel...@yahoo.com> wrote:
> >
> >> Hi All:
> >> Is there any way using Hadoop Streaming to determine the directory
> >> from which an input record is being read? This is straightforward in
> >> Hadoop using InputFormats, but I am curious if the same concept can be
> >> applied to streaming.
> >> The goal here is to read in data from 2 directories, say A/ and B/,
> >> and make decisions about what to do based on where the data is rooted.
> >> Thanks for any help...CG

--
Pro Hadoop, a book to guide you from beginner to hadoop mastery,
http://www.amazon.com/dp/1430219424?tag=jewlerymall
www.prohadoopbook.com a community for Hadoop Professionals
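[Editor's note: the key-name transformation described above can be sketched in a few lines of Python. This is an illustration of the rule stated in the thread (replace every character outside [a-zA-Z0-9] with '_'), not Hadoop's actual implementation; the helper name `jobconf_key_to_env` is made up for this sketch.]

```python
import re

def jobconf_key_to_env(key):
    """Mimic how Hadoop streaming rewrites JobConf key names for the
    process environment: every character outside [a-zA-Z0-9] becomes '_'."""
    return re.sub(r'[^a-zA-Z0-9]', '_', key)

print(jobconf_key_to_env("mapred.input.file"))  # mapred_input_file
print(jobconf_key_to_env("mapred.task.id"))     # mapred_task_id
```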