Wait... this is something new to me. Does this go in the driver setup? The mapper? Can you elaborate a bit on this?
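(For reference, here is a rough sketch of one place where a snippet like the one Shekhar posted below could live: the setup() of a mapper that reads a side file from HDFS. Note that open() is an instance method on FileSystem, so you get a FileSystem from the job Configuration first. The class name and the path "/user/hadoop/sample.txt" are placeholders made up for illustration, not anything from this thread.)

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class SideFileMapper extends Mapper<LongWritable, Text, Text, Text> {

    @Override
    protected void setup(Context context) throws IOException, InterruptedException {
        // Get a FileSystem handle from the job configuration, then open the HDFS file.
        Configuration conf = context.getConfiguration();
        FileSystem fs = FileSystem.get(conf);
        Path p = new Path("/user/hadoop/sample.txt"); // placeholder path
        try (FSDataInputStream iStream = fs.open(p)) {
            String str;
            // readLine() comes from DataInputStream; deprecated, but fine for plain text.
            while ((str = iStream.readLine()) != null) {
                System.out.println(str);
            }
        }
    }

    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        // normal per-record map logic goes here
    }
}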
On Thu, Aug 29, 2013 at 12:43 AM, Shekhar Sharma <[email protected]> wrote:

> Path p = new Path("path of the file which you would like to read from HDFS");
> FSDataInputStream iStream = FileSystem.open(p);
> String str;
> while ((str = iStream.readLine()) != null) {
>     System.out.println(str);
> }
>
> Regards,
> Som Shekhar Sharma
> +91-8197243810
>
>
> On Thu, Aug 29, 2013 at 12:15 PM, jamal sasha <[email protected]> wrote:
> > Hi,
> > Probably a very stupid question.
> > I have this data in binary format... and the following piece of code
> > works for me in normal java.
> >
> > public class Parser {
> >
> >     public static void main(String[] args) throws Exception {
> >         String filename = "sample.txt";
> >         File file = new File(filename);
> >         FileInputStream fis = new FileInputStream(filename);
> >         System.out.println("Total file size to read (in bytes) : "
> >                 + fis.available());
> >         BSONDecoder bson = new BSONDecoder();
> >         System.out.println(bson.readObject(fis));
> >     }
> > }
> >
> > Now finally the last line is the answer..
> > Now, I want to implement this on Hadoop, but the challenge (I think) is
> > that I am not reading or parsing data line by line; rather it's a
> > stream of data, right?
> > How do I replicate the above code logic, but in Hadoop?
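(As for the original question of replicating the BSONDecoder logic on Hadoop: since FSDataInputStream is just an InputStream, the same readObject() loop can run over an HDFS stream. Below is a rough standalone sketch, assuming the mongo-java-driver's BSON classes are on the classpath; depending on the driver version the decoder class is BSONDecoder or BasicBSONDecoder. The class name HdfsBsonReader is only an illustration.)

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.bson.BSONObject;
import org.bson.BasicBSONDecoder;

public class HdfsBsonReader {

    public static void main(String[] args) throws IOException {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        Path p = new Path(args[0]); // e.g. hdfs:///user/hadoop/sample.txt

        long fileLen = fs.getFileStatus(p).getLen();
        try (FSDataInputStream in = fs.open(p)) {
            BasicBSONDecoder bson = new BasicBSONDecoder();
            // Keep decoding BSON documents until the whole file has been consumed.
            while (in.getPos() < fileLen) {
                BSONObject obj = bson.readObject(in);
                System.out.println(obj);
            }
        }
    }
}

(For a proper MapReduce job you would wrap this kind of stream handling in a custom InputFormat/RecordReader so each mapper receives decoded BSON documents as values; if I recall correctly, the mongo-hadoop connector already ships a BSONFileInputFormat that does this for you.)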
